\begin{document} \thanks{Author supported by an FPU grant, grant MTM2005-08379 from MEC of Spain and grant 00690/PI/2004 of Fundación Séneca of Región de Murcia.} \subjclass[2000]{46B03,46B26} \keywords{LUR, Kadec norm, strictly convex norm, James tree} \title[]{Renormings of the dual of James tree spaces} \author{Antonio Avilés} \address{Departamento de Matemáticas\\ Universidad de Murcia\\ 30100 Espinardo (Murcia)\\ Spain} \begin{abstract} We discuss renorming properties of the dual of a James tree space $JT$. We present examples of weakly Lindel\"of determined $JT$ such that $JT^\ast$ admits neither strictly convex nor Kadec renorming, and of weakly compactly generated $JT$ such that $JT^\ast$ does not admit a Kadec renorming although it is strictly convexifiable. \end{abstract} \maketitle The norm of a Banach space is said to be locally uniformly rotund (LUR) if for every $x_0$ with $\|x_0\|=1$ and every $\varepsilon>0$ there exists $\delta>0$ such that $\|x-x_0\|<\varepsilon$ whenever $\|x\|\leq 1$ and $\|\frac{x+x_0}{2}\|>1-\delta$. Much research during the last decades has been devoted to understanding which Banach spaces have an equivalent LUR norm, and this is still a rather active line of research. In this note we are concerned with this problem in the case of dual Banach spaces. It is a consequence of a result of Fabian and Godefroy \cite{FabGodPRI} that the dual of every Asplund Banach space (that is, a Banach space such that every separable subspace has a separable dual) admits an equivalent norm which is locally uniformly rotund. It is natural to ask whether, more generally, the dual of every Banach space not containing $\ell_1$ admits an equivalent LUR norm. We shall give counterexamples to this question by looking at the duals of James tree spaces $JT$ over different trees $T$. However, all these examples are nonseparable, and the problem remains open in the separable case.
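For instance, the canonical norm of a Hilbert space is LUR; this standard computation (included only as an illustration) follows from the parallelogram law: for $\|x_0\|=1$ and $\|x\|\leq 1$,

```latex
\|x-x_0\|^2 = 2\|x\|^2 + 2\|x_0\|^2 - \|x+x_0\|^2
            \leq 4 - 4\left\|\frac{x+x_0}{2}\right\|^2
            < 4 - 4(1-\delta)^2,
```

so given $\varepsilon>0$ it suffices to choose $\delta>0$ with $4-4(1-\delta)^2<\varepsilon^2$.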
It was established by Troyanski~\cite{TroyanskiLUR} that a Banach space admits an equivalent LUR norm if and only if it admits both an equivalent strictly convex norm and an equivalent Kadec norm. We recall that a norm is strictly convex if its sphere does not contain any proper segment, and it is a Kadec norm if the weak and the norm topologies coincide on its sphere.\\ In Section \ref{sectionJT} we recall the definition of the spaces $JT$ and the main properties that we shall need.\\ In Section~\ref{sectionFurther} we remark that the space $JT^\ast$ has a LUR renorming whenever $JT$ is separable, so these spaces cannot provide any counterexample in the separable case. We also point out the relation between the renorming properties of $JT^\ast$ and those of $C_0(\bar{T})$, the space of continuous functions on the completed tree $\bar{T}$ vanishing at $\infty$. Haydon~\cite{Haydontrees} gave satisfactory characterizations of those trees $\Upsilon$ for which $C_0(\Upsilon)$ admits an equivalent LUR, strictly convex or Kadec norm. We show that if $C_0(\bar{T})$ has an equivalent LUR (respectively strictly convex) norm then so does $JT^\ast$, and that, in the opposite direction, if $JT^\ast$ has an equivalent Kadec norm, then so does $C_0(\bar{T})$. We do not know whether any of the converses holds.\\ In Section \ref{sectionWCGJT} we study the case when $JT$ is weakly compactly generated. The dual of every weakly compactly generated space is strictly convexifiable; however, we shall show that for some trees, $JT$ is weakly compactly generated but $JT^\ast$ does not admit any equivalent Kadec norm.\\ In Section \ref{sectionBaire} we provide a sufficient condition on a tree $T$ in order that $JT^\ast$ admits neither a strictly convex nor a Kadec renorming, namely that $T$ is an infinitely branching Baire tree.
This is inspired by a construction of Haydon, which can be found in \cite{ArgMerWLD}, of the dual of a weakly Lindel\"of determined Banach space with no equivalent strictly convex norm (that space, nevertheless, contains $\ell_1$). Similar ideas appear in other papers of Haydon such as \cite{HaydonBaire} and \cite{Haydontrees}. If we consider a particular tree constructed by Todor\v{c}evi\'c \cite{Todorcevicorder}, then the Banach space that we construct is in addition weakly Lindel\"of determined. The short proof of the properties of the mentioned tree of Todor\v{c}evi\'c presented in \cite{Todorcevicorder} is based on metamathematical arguments, while there exists another proof by Haydon~\cite{HaydonBaire} using games. In Section \ref{treesection} we include another, purely combinatorial, proof.\\ As we mentioned, it is an open question whether the dual of every separable Banach space $X$ not containing $\ell_1$ admits an equivalent LUR norm. For such a space $X$, the bidual ball $B_{X^{\ast\ast}}$ is a separable Rosenthal compact in the weak$^\ast$ topology (that is, a pointwise compact set of Baire one functions on a Polish space). Hence, the problem is a particular instance of the more general question of whether $C(K)$ is LUR renormable whenever $K$ is a separable Rosenthal compact. Todor\v{c}evi\'{c} \cite{TodorcevicnoLUR} has recently constructed a nonseparable Rosenthal compact $K$ such that $C(K)$ is not LUR renormable, while Haydon, Molt\'{o} and Orihuela \cite{HayMolOri} have shown that if $K$ is a separable pointwise compact set of Baire one functions with countably many discontinuities on a Polish space, then $C(K)$ is LUR renormable.\\ This research was done while visiting the National Technical University of Athens. We want to express our gratitude to the Department of Mathematics for its hospitality. Our special thanks go to Spiros Argyros; our discussions with him are the origin of the present work.
\section{General properties of James tree spaces}\label{sectionJT} In this section we give the definition and state some well known properties of James tree spaces. We recall that a tree is a partially ordered set $(T,\prec)$ such that for every $t\in T$, the set $\{s\in T : s\prec t\}$ is well ordered by $\prec$. A chain is a subset of $T$ which is totally ordered by $\prec$, and a segment is a chain $\sigma$ with the extra property that whenever $s\prec t\prec u$ and $s,u\in\sigma$ then $t\in\sigma$. For a tree $T$ we consider the James tree space $JT$, which is the completion of $c_{00}(T) = \{f\in\mathbb{R}^T : |supp(f)|<\omega\}$ endowed with the norm $$\|f\| = \sup \left\{ \left(\sum_{i=1}^n \left(\sum_{t\in \sigma_i}f(t)\right)^2\right)^{\frac{1}{2}}\right\}$$ where the supremum runs over all finite families of pairwise disjoint segments $\sigma_1,\ldots,\sigma_n$ of the tree $T$. The space $JT$ is $\ell_2$-saturated, that is, every infinite-dimensional subspace contains a copy of $\ell_2$; in particular $JT$ does not contain $\ell_1$, cf. \cite{HagOde} and \cite{ArgMersWCG} and also \cite{Jamestree}.\\ An element $h^\ast\in\mathbb{R}^T$ induces a linear map $c_{00}(T)\longrightarrow \mathbb{R}$ given by $h^\ast(x) = \sum_{t\in T} h^\ast(t)x(t)$. When such a linear map is bounded for the norm of $JT$, then $h^\ast$ defines an element of the dual space $JT^\ast$. This is the case when $h^\ast$ is the characteristic function of a segment $\sigma$ of the tree, $\chi^\ast_\sigma$, for which we have indeed $\|\chi^\ast_\sigma\| = 1$.
Namely, if we take an element $x\in c_{00}(T)$ of norm less than or equal to one we will have, taking only the segment $\sigma$ in the definition of the norm of $JT$, that $|\sum_{t\in \sigma}x(t)|\leq 1$, and this is the action of $\chi^\ast_\sigma$ on $x$.\\ \begin{prop}\label{ell2sums} If $t_1,\ldots,t_n$ are incomparable nodes of the tree $T$ and we have $f_1,\ldots,f_n\in c_{00}(T)$ such that all the elements of the support of $f_i$ are greater than or equal to $t_i$, in short $|f_i|\leq\chi_{[t_i,\infty)}$, then $$\|f_1+\cdots+f_n\| = \left(\|f_1\|^2+\cdots+\|f_n\|^2\right)^\frac{1}{2}$$ \end{prop} Proof: Every segment of the tree $T$ intersects at most one of the segments $[t_i,\infty)$, so the set whose supremum computes the norm of $f_1+\cdots+f_n$ consists exactly of the numbers of the form $\left(\sum_1^n\lambda_i^2\right)^\frac{1}{2}$ where each $\lambda_i$ is one of the numbers whose supremum computes the norm of $f_i$.$\qed$\\ An antichain is a subset $S$ of $T$ such that any two different elements of $S$ are incomparable. \begin{defn} Let $S$ be an antichain of the tree $T$. We define $X_S$ as the subspace of $JT$ generated by all $x\in c_{00}(T)$ whose support is contained in $[s,\infty)$ for some $s\in S$. For an element $t\in T$, we denote $X_t = X_{\{t\}}$. \end{defn} The properties of the subspaces $X_S$ are the following:\\ \begin{enumerate} \item $X_S = \left(\bigoplus_{s\in S} X_s\right)_{\ell_2}$. This is Proposition \ref{ell2sums}.\\ \item $X_S$ is a complemented subspace of $JT$; indeed we have a norm one projection $\pi_S: JT\longrightarrow X_S$ which is defined for an element $x\in c_{00}(T)$ by setting $\pi_S(x)_t = x_t$ if $t\succeq s$ for some $s\in S$ and $\pi_S(x)_t=0$ otherwise.
First, $\pi_S$ reduces the norm: if we have a family of segments providing a sum for computing the norm of $\pi_S(x)$, then we can assume that every segment is contained in some $[s,\infty)$ for $s\in S$, and then the same segments provide the same sum for the computation of the norm of $x$. Second, clearly $\pi_S(x)=x$ if $x\in X_S$.\\ \item The dual map of the operator $\pi_S$ defined above allows us to consider $X_S^\ast$ as a subspace of $JT^\ast$, since $\pi_S^\ast:X_S^\ast\longrightarrow JT^\ast$ is an isometric embedding because $\pi_S$ is a projection of norm one. In this way $X_S^\ast$ is identified with the range of $\pi_S^\ast$, which equals the set of all elements of $JT^\ast$ which take the same values on $x$ and on $\pi_S(x)$ for every $x\in JT$ (in particular, $\chi_{[t,u)}^\ast\in X_s^\ast$ whenever $s\preceq t$). Again, $X_S^\ast$ is a complemented subspace, since if we denote by $i_S:X_S\longrightarrow JT$ the inclusion, then $i_S^\ast:JT^\ast\longrightarrow X_S^\ast$ is a projection of norm one. Taking duals in (1), we obtain $$X_S^\ast= \left(\bigoplus_{s\in S} X^\ast_s\right)_{\ell_2}$$ \item Taking duals again, we have an isometric embedding $i_S^{\ast\ast}:X_S^{\ast\ast}\longrightarrow JT^{\ast\ast}$ and a projection of norm one $\pi_S^{\ast\ast}:JT^{\ast\ast}\longrightarrow X_S^{\ast\ast}$, and again $$X_S^{\ast\ast}= \left(\bigoplus_{s\in S} X^{\ast\ast}_s\right)_{\ell_2}$$ \end{enumerate} \section{The relation with $C_0(\bar{T})$}\label{sectionFurther} We notice first that James tree spaces $JT$ cannot be used to provide examples of separable Banach spaces with non LUR renormable dual. Let us denote by $\bar{T}$, the completed tree of $T$, the tree whose nodes are the initial segments of the tree $T$ (that is, the segments $\sigma$ of $T$ with the property that whenever $s\prec t$ and $t\in\sigma$ then $s\in\sigma$) ordered by inclusion.
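Returning for a moment to the definition of the norm of $JT$: both the norm and Proposition \ref{ell2sums} can be checked mechanically on a tiny finite tree. The following brute-force sketch (the tree, node names and helper functions are ours, purely illustrative) enumerates all families of pairwise disjoint segments:

```python
from itertools import combinations
from math import sqrt

# Tiny tree via parent pointers (None marks the root); a and b are incomparable.
parent = {'r': None, 'a': 'r', 'b': 'r', 'c': 'a'}

def ancestors(t):
    out = set()
    while parent[t] is not None:
        t = parent[t]
        out.add(t)
    return out

def comparable(s, t):
    return s == t or s in ancestors(t) or t in ancestors(s)

def is_segment(nodes):
    # A segment: a totally ordered, order-convex set of nodes.
    ns = set(nodes)
    if not all(comparable(s, t) for s, t in combinations(ns, 2)):
        return False
    for u in ns:  # walking up from u, a gap followed by a member breaks convexity
        gap, t = False, parent[u]
        while t is not None:
            if t in ns and gap:
                return False
            gap = gap or t not in ns
            t = parent[t]
    return True

def jt_norm(f):
    # Supremum over all families of pairwise disjoint segments.
    segs = [frozenset(c) for r in range(1, len(parent) + 1)
            for c in combinations(parent, r) if is_segment(c)]
    best = 0.0

    def rec(i, chosen, used):
        nonlocal best
        best = max(best, sqrt(sum(sum(f.get(t, 0) for t in seg) ** 2
                                  for seg in chosen)))
        for j in range(i, len(segs)):
            if used.isdisjoint(segs[j]):
                rec(j + 1, chosen + [segs[j]], used | segs[j])

    rec(0, [], frozenset())
    return best

f1 = {'a': 1.0, 'c': 1.0}          # supported on [a, infinity)
f2 = {'b': 2.0}                    # supported on [b, infinity)
f12 = {**f1, **f2}
# ell_2 additivity over incomparable nodes, as in Proposition 1:
assert abs(jt_norm(f12) - sqrt(jt_norm(f1) ** 2 + jt_norm(f2) ** 2)) < 1e-9
```

Here $f_1$ and $f_2$ are supported above the incomparable nodes $a$ and $b$, so the norm of their sum is $(\|f_1\|^2+\|f_2\|^2)^{1/2}$, as the proposition predicts.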
We view $T\subset\bar{T}$ by identifying every $t\in T$ with the initial segment $\{s\in T: s\preceq t\}$. A result of Brackebusch~\cite{Brackebusch} states that for every tree $T$, $JT^{\ast\ast}$ is isometric to $J\bar{T}$, where $\bar{T}$ is the completed tree of $T$. We shall also need that, by \cite[Theorem VII.2.7]{DGZ}, if $Y^\ast$ is a subspace of a weakly compactly generated space, then $Y$ has an equivalent LUR norm.\\ \begin{prop} Let $T$ be a tree and $X$ a separable subspace of $JT$. Then $X^{\ast\ast}$ is a subspace of a weakly compactly generated space and hence $X^\ast$ admits an equivalent LUR norm.\\ \end{prop} Proof: Let $T_1$ be a countable set (that we view as a subtree of the tree $T$) such that $X\subset\overline{span}(\{\chi_{\{t\}} : t\in T_1\})\cong JT_1$. Since $T_1$ is a countable tree, it has countable height $ht(T_1)=\alpha<\omega_1$, and the height of the completed tree cannot be essentially larger, $ht(\bar{T}_1)\leq\alpha+1<\omega_1$; so in particular $\bar{T}_1$ is a countable union of antichains and $J\bar{T}_1$ is weakly compactly generated. Finally, $JT_1^{\ast\ast} \cong J\bar{T}_1$, so $X^{\ast\ast}$ is a subspace of a weakly compactly generated space and therefore $X^\ast$ is LUR renormable.$\qed$\\ Let us recall now how Brackebusch identifies the basic elements of $J\bar{T}$ inside $JT^{\ast\ast}$ in order to get an isometry. For every initial segment of the tree $T$, $s\in\bar{T}$, we have the basic element $e_s\in JT^{\ast\ast}$ whose action on every $x^\ast\in JT^\ast$ is given by: $$ (\star)\ \ e_s(x^\ast) = \lim_{t\in s}x^\ast(\chi_{\{t\}}).$$ The initial segment $s$ is well ordered, so when we write $a = \lim_{t\in s}a_t$ we mean that for every neighborhood $U$ of $a$ there exists $t_0\in s$ such that $a_t\in U$ whenever $t\succeq t_0$. We consider a tree $\Upsilon$ endowed with its natural locally compact topology, with intervals of the form $(s,s']$ as basic open sets, and $\Upsilon\cup\{\infty\}$ its one-point compactification.
Let us first notice the following fact:\\ \begin{prop}\label{topology of the tree} The set $\{e_s : s\in\bar{T}\}\cup\{0\}$ is homeomorphic in the weak$^\ast$ topology of $JT^{\ast\ast}$ to the space $\bar{T}\cup\{\infty\}$ through the natural correspondence.\\ \end{prop} Proof: Since $\bar{T}\cup\{\infty\}$ is compact, it is enough to check that the natural identification $\bar{T}\cup\{\infty\}\longrightarrow \{e_s : s\in\bar{T}\}\cup\{0\}$ is continuous. The fact that it is continuous at the points $t\in\bar{T}$ follows immediately from $(\star)$. For the continuity at $\infty$ we take a neighborhood $V$ of $0$ in the weak$^\ast$ topology and we shall see that the set $L = \{t\in\bar{T} : t\not\in V\}$ is a relatively compact subset of $\bar{T}$. We shall prove that every transfinite sequence $\{t_\alpha : \alpha<\lambda\}$ of elements of $L$ has a cofinal subsequence which converges to a point of $L$ (this is stronger than the statement that every net has a convergent subnet, and it holds on sets with scattered compact closure). A partition principle due to Dushnik and Miller \cite[Theorem 5.22]{DusMil} yields that either there is an infinite subsequence $\{t_{\alpha_n} : n\in\omega\}$ of incomparable elements or there is a cofinal subsequence in which every pair of elements is comparable. The first possibility is excluded because we know that a family of vectors of $J\bar{T}$ corresponding to an antichain is isometric to the basis of $\ell_2$, and in particular it weakly (and hence weak$^\ast$) converges to 0, contradicting that $V$ is a weak$^\ast$ neighborhood of 0.
In the latter case, the cofinal subsequence is contained in a branch of the tree, which is a well ordered set, and again the same partition principle of Dushnik and Miller \cite[Theorem 5.22]{DusMil} implies that it has a further cofinal and increasing subsequence, and this subsequence converges to its least upper bound in $\bar{T}$.$\qed$\\ Proposition~\ref{topology of the tree} allows us to view every element $x^\ast\in JT^\ast$ as a continuous function on $\bar{T}\cup\{\infty\}$ vanishing at $\infty$, and thus to define an operator $$F:JT^\ast\longrightarrow C_0(\bar{T}).$$ Recall that $C_0(\bar{T})$ stands for the space of real valued continuous functions on $\bar{T}\cup\{\infty\}$ vanishing at $\infty$, endowed with the supremum norm $\|\cdot\|_\infty$. Haydon~\cite{Haydontrees} has characterized the classes of trees $\Upsilon$ for which the space $C_0(\Upsilon)$ admits equivalent LUR, Kadec or strictly convex norms. Notice that $F$ is an operator of norm 1, since $\|F(x^\ast)\|_\infty = \sup\{|e_s(x^\ast)| : s\in\bar{T}\}\leq \|x^\ast\|$.\\ \begin{thm}\label{relationwithHaydon} Let $T$ be a tree.\begin{enumerate} \item If $C_0(\bar{T})$ admits an equivalent strictly convex norm, then $JT^\ast$ also admits an equivalent strictly convex norm. \item If $C_0(\bar{T})$ admits an equivalent LUR norm, then $JT^\ast$ also admits an equivalent LUR norm. \item If $JT^\ast$ admits an equivalent Kadec norm, then $C_0(\bar{T})$ also admits an equivalent Kadec norm. \end{enumerate} \end{thm} Proof: Part (1) follows from the fact that $F$ is a one-to-one operator and one-to-one operators transfer strictly convex renormings. Moreover, $F$ has the additional property that the dual operator $F^\ast: C_0(\bar{T})^\ast\longrightarrow JT^{\ast\ast}\cong J\bar{T}$ has dense range, because for every Dirac measure $\delta_s$, $s\in\bar{T}$, we have that $F^\ast(\delta_s) = e_s$.
One-to-one operators whose dual has dense range transfer LUR renormings \cite{Memoir}, so this proves part (2). Concerning part (3), we observe that if $|||\cdot|||$ is an equivalent Kadec norm on $JT^\ast$ and $\rho:\bar{T}\longrightarrow \mathbb{R}$ is defined by $$\rho(s) = \inf\{|||\chi^\ast_\sigma||| : s\subset\sigma\}$$ then $\rho:\bar{T}\longrightarrow \mathbb{R}$ is an increasing function with no bad points in the sense of~\cite{Haydontrees}, by the same argument as in \cite[Proposition 3.2]{Haydontrees}. Hence, by \cite[Theorem 6.1]{Haydontrees}, $C_0(\bar{T})$ admits an equivalent Kadec norm.$\qed$\\ We do not know whether any of the converses of Theorem~\ref{relationwithHaydon} holds true. Concerning part (3), no transfer result for Kadec norms is available. In the other two cases, it would be natural to try to imitate Haydon's arguments in \cite{Haydontrees} using the function $\rho(s) = \inf\{|||\chi^\ast_\sigma||| : s\subset\sigma\}$ on $JT^\ast$. But these arguments rely on the consideration of certain special functions $f\in C_0(\Upsilon)$ which are no longer available in $JT^\ast$, which is a rather smaller space.\\ \section{When JT is weakly compactly generated}\label{sectionWCGJT} In this section we analyze the case when $JT$ is weakly compactly generated. This property is characterized in terms of the tree, as shown in the following result, which can be found in \cite{ArgMerWUR}: \begin{thm}\label{JTWCG} For a tree $T$ the following are equivalent: \begin{enumerate} \item $JT$ is weakly compactly generated. \item $JT$ is weakly countably determined. \item $T$ is the union of countably many antichains. \item $T = \bigcup_{n<\omega}S_n$ where for every $n<\omega$, $S_n$ contains no infinite chain. \end{enumerate} \end{thm} A tree is the union of countably many antichains if and only if it is $\mathbb{Q}$-embeddable, cf. \cite[Theorem 9.1]{Todorcevicorder}.
It happens that for a tree $T$ satisfying the conditions of Theorem \ref{JTWCG}, the renorming properties of $JT^{\ast}$ depend on whether the completed tree $\bar{T}$ is still the union of countably many antichains. \begin{thm}\label{JTWCGKadec} Let $T$ be a tree which is the union of countably many antichains. The following are equivalent: \begin{enumerate} \item $\bar{T}$ is also the union of countably many antichains. \item $JT^\ast$ admits an equivalent Kadec norm. \item $JT^\ast$ admits an equivalent LUR norm. \end{enumerate} \end{thm} The dual of every weakly compactly generated space always admits an equivalent strictly convex norm since, by the Amir-Lindenstrauss theorem, there is a one-to-one operator into some $c_0(\Gamma)$. Hence, the equivalence of $(2)$ and $(3)$ is a consequence of the result of Troyanski mentioned in the introduction. On the other hand, we also mentioned in Section~\ref{sectionFurther} the result of Brackebusch~\cite{Brackebusch} that for any tree $T$, $JT^{\ast\ast}$ is isometric to $J\bar{T}$. Hence, if (1) holds, then $JT^{\ast\ast}$ is weakly compactly generated and it follows by \cite[Theorem VII.2.7]{DGZ} that $JT^\ast$ admits an equivalent LUR norm. Our goal is therefore to prove that (2) implies (1), but before passing to this we give an example of a tree $T_0$ which is the union of countably many antichains while the completion $\bar{T}_0$ does not share this property, so that, by Theorem \ref{JTWCGKadec}, $JT_0$ is a weakly compactly generated space not containing $\ell_1$ such that $JT_0^\ast$ does not admit any equivalent Kadec norm; namely $$T_0 =\sigma'\mathbb{Q} = \{t\subset \mathbb{Q} : (t,<)\text{ is well ordered and }\max(t)\text{ exists}\},$$ where $t\prec s$ if $t$ is a proper initial segment of $s$. For every rational number $q\in\mathbb{Q}$, the set $S_q = \{t\in T_0 : \max(t)=q\}$ is an antichain of $T_0$, and $T_0 = \bigcup_{q\in\mathbb{Q}}S_q$.
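The decomposition $T_0=\bigcup_{q}S_q$ into antichains can be tested mechanically on small examples. A sketch (representing finite elements of $\sigma'\mathbb{Q}$ as strictly increasing tuples of rationals; the helper names are ours):

```python
from fractions import Fraction as Q
from itertools import combinations

def prec(t, s):
    # t < s in sigma'Q: t is a proper initial segment of s
    return len(t) < len(s) and s[:len(t)] == t

# a few elements of sigma'Q: finite well-ordered sets of rationals with a maximum
sample = [
    (Q(1, 2),),
    (Q(1, 2), Q(2, 3)),
    (Q(1, 3), Q(2, 3)),
    (Q(2, 3),),
    (Q(1, 2), Q(3, 4)),
]

# comparable pairs lie in different antichains S_q
assert prec((Q(1, 2),), (Q(1, 2), Q(2, 3)))

# t belongs to S_q with q = max(t): a proper extension of t only adds
# rationals above max(t), so its maximum strictly increases, and hence
# two elements with the same maximum are incomparable.
for t, s in combinations(sample, 2):
    if t[-1] == s[-1]:
        assert not prec(t, s) and not prec(s, t)
```

This is only a finite sanity check, of course; the point of the example is the infinitary behaviour of the completion $\bar{T}_0$, discussed next.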
The completed tree $\bar{T}_0$ can be identified with the following tree: $$T_1 = \sigma\mathbb{Q} = \{t\subset \mathbb{Q} : (t,<)\text{ is well ordered}\},$$ the identification sending every $t\in T_1$ to the initial segment $\{t'\in T_0 : t'\prec t\}$ of $T_0$. The fact that $T_1$ is not a countable union of antichains is a well known result due to Kurepa \cite{KurepasigmaQ}, cf. also \cite{Todorcevicorder}. The reason is the following: suppose there existed $f:T_1\longrightarrow\mathbb{N}$ such that every $f^{-1}(n)$ is an antichain. Then we could construct by recursion a sequence $t_1\prec t_2\prec\cdots$ inside $T_1$ and a sequence of rational numbers $q_1>q_2>\cdots$ such that $q_i> \sup(t_i)$ and $f(t_{n+1}) = \min\{f(t) : t_n\prec t, \sup(t)<q_n\}$. The consideration of the element $t_\omega = \bigcup_{n<\omega}t_n$ leads to a contradiction: since $t_n\prec t_\omega$ and $\sup(t_\omega)<q_n$ for every $n$, the minimality in the choice of $f(t_{n+1})$ gives $f(t_{n+1})\leq f(t_\omega)$ for every $n$, so some value of $f$ is repeated along the chain $\{t_n\}$, contradicting that the fibers of $f$ are antichains.\\ \begin{lem}\label{Kadecenelarbol} Let $T$ be any tree and suppose that there exists an equivalent Kadec norm on $JT^\ast$. Then there exist \begin{itemize} \item [(a)] a countable partition of $\bar{T}$, $\bar{T}=\bigcup_{n<\omega}T_n$, and \item [(b)] a function $F:\bar{T}\longrightarrow 2^T$ which associates to each initial segment $\sigma\in \bar{T}$ a finite set $F(\sigma)$ of immediate successors of $\sigma$, \end{itemize} such that for every $n<\omega$ and for every infinite chain $\sigma_1\prec\sigma_2\prec\cdots$ contained in $T_n$ there exists $k_0<\omega$ such that $F(\sigma_k)\cap \sigma_{k+1}\neq\emptyset$ for every $k>k_0$.
\end{lem} Proof: Let $|||\cdot|||$ be an equivalent Kadec norm on $JT^\ast$.\\ \emph{Claim}: For every $\sigma\in \bar{T}$ there exist a natural number $n_\sigma$ and a finite set $F(\sigma)\subset T$ of immediate successors of $\sigma$ such that $\left|\ |||\chi_\sigma^\ast|||-|||\chi_{\sigma'}^\ast|||\ \right|\geq \frac{1}{n_\sigma}$ for every $\sigma'\in \bar{T}$ such that $\sigma\prec \sigma'$ and $F(\sigma)\cap\sigma'=\emptyset$.\\ Proof of the claim: Suppose that there existed $\sigma\in\bar{T}$ failing the claim. Then we can find recursively a sequence $\{q_n\}$ of different immediate successors of $\sigma$ together with a sequence $\{\sigma_n\}$ of elements of $\bar{T}$ such that $\sigma\cup\{q_n\}\preceq \sigma_n$ and $$\left|\ |||\chi_\sigma^\ast|||-|||\chi_{\sigma_n}^\ast|||\ \right|<\frac{1}{n}.$$ Now, $\{\sigma'_n=\sigma_n\setminus\sigma\}$ is a sequence of incomparable segments of $T$, so the sequence $\{\chi_{\sigma'_n}^\ast\}$ is isometric to the basis of $\ell_2$ and in particular it weakly converges to 0. Hence the sequence $\chi_{\sigma_n}^\ast = \chi_\sigma^\ast + \chi_{\sigma'_n}^\ast$ weakly converges to $\chi_\sigma^\ast$; however, it does not converge in norm, since $\|\chi_{\sigma'_n}^\ast\|=1$ for every $n$. Finally, since $|||\chi_{\sigma_n}^\ast|||$ converges to $|||\chi_{\sigma}^\ast|||$, we obtain, after normalizing, a contradiction with the fact that $|||\cdot|||$ is a Kadec norm.\\ From the claim we get the function $F$ and also the countable decomposition, setting $T_n = \{\sigma\in\bar{T} : n_\sigma = n\}$. Suppose that we have an increasing sequence $\sigma_1\prec\sigma_2\prec\cdots$ inside $T_n$. We observe that whenever $F(\sigma_k)\cap \sigma_{k+1} = \emptyset$ we have that $\left|\ |||\chi_{\sigma_k}^\ast|||-|||\chi_{\sigma_{k'}}^\ast|||\ \right|\geq \frac{1}{n}$ for all $k'>k$.
This can happen only for finitely many $k$'s, because $|||\cdot|||$ is an equivalent norm, so it is bounded on the unit sphere of $JT^\ast$.$\qed$\\ Now we assume that $T$ is the union of countably many antichains, $T=\bigcup_{m<\omega}R_m$, and that it satisfies the conclusion of Lemma \ref{Kadecenelarbol} for a decomposition $\bar{T} = \bigcup_{n<\omega}T_n$ and a function $F$, and we shall show that indeed $\bar{T}$ is the union of countably many antichains. For every $n<\omega$ and every finite subset $A$ of natural numbers we consider the set $$S_{n,A} = \left\{\sigma\in\bar{T} : \sigma\in T_n \text{ and } F(\sigma)\subset \bigcup_{m\in A}R_m\right\}$$ This gives an expression of $\bar{T}$ as a countable union $\bar{T} = \bigcup_{n,A}S_{n,A}$. We shall verify that this expression satisfies condition (4) of Theorem~\ref{JTWCG}. Suppose by contradiction that we had an infinite chain $\sigma_1\prec\sigma_2\prec\cdots$ inside a fixed $S_{n,A}$. First, since $S_{n,A}\subset T_n$, there exists $k_0$ such that $F(\sigma_k)\cap\sigma_{k+1}\neq\emptyset$ for every $k>k_0$, say $t_k\in F(\sigma_k)\cap\sigma_{k+1}\subset\bigcup_{m\in A}R_m$. Then $t_{k_0+1}\prec t_{k_0+2}\prec\cdots$ is an infinite chain of $T$ contained in $\bigcup_{m\in A}R_m$, which is a finite union of antichains. This contradiction finishes the proof of Theorem~\ref{JTWCGKadec}. \section{Spaces with neither strictly convex nor Kadec norms}\label{sectionBaire} In this section we give a criterion on a tree $T$ in order that $JT^\ast$ admits neither a Kadec norm nor a strictly convex norm. We recall that the downwards closure of a subset $S$ of a tree $T$ is defined as $$\hat{S} = \{t\in T : \exists s\in S : t\preceq s\}.$$ \begin{thm}\label{renormJamestreestar} Let $T$ be a tree verifying the following properties: \begin{itemize} \item[(T1)] Every node of $T$ has infinitely many immediate successors.
\item[(T2)] For any countable family of antichains $\{S_n : n<\omega\}$ there exists $t\in T$ such that $t\not\in\bigcup_{n<\omega}\hat{S}_n$. \end{itemize} Then there is neither a strictly convex nor a Kadec equivalent norm on $JT^\ast$.\\ \end{thm} Condition (T2) is called the \emph{Baire property} of the tree, and condition (T1) is usually expressed by saying that $T$ is an \emph{infinitely branching tree}. An example of a tree satisfying properties (T1) and (T2) is the tree whose nodes are the countable subsets of $\omega_1$ with $s\prec t$ if $s$ is an initial segment of $t$ (property (T2) is proved by constructing a sequence $t_1\prec t_2\prec\cdots$ with $t_i\not\in \hat{S}_i$ and taking $t\succ \bigcup t_i$). A refinement of this construction due to Todor\v{c}evi\'c \cite{Todorcevicorder} produces a tree with the additional property that all branches are countable, and this implies that for this tree $JT$ is weakly Lindel\"of determined \cite{ArgMerWLD}. This is the example discussed in Section \ref{treesection}.\\ Throughout the work of Haydon it is possible to find different results implying that if a tree $\Upsilon$ is an infinitely branching Baire tree, then $C_0(\Upsilon)$ (or certain spaces which can be related to it) has no Kadec or strictly convex norm, cf. \cite{Haydontrees}, \cite{HaydonBaire}. One may be tempted to use Theorem~\ref{relationwithHaydon} in conjunction with these results to get Theorem~\ref{renormJamestreestar}. However, there is a difficulty, since properties (T1) and (T2) of $T$ are not easily reflected in the completed tree $\bar{T}$. The tree $\bar{T}$ is never a Baire tree, since the set $M$ of all maximal elements satisfies $\hat{M}=\bar{T}$, and even if we try to remove these maximal elements, the hypothesis that $T$ is infinitely branching is weaker than the hypothesis that $\bar{T}$ is infinitely branching.
We shall therefore proceed by hand, using in any case arguments similar to those in Haydon's proofs.\\ We assume now that $T$ satisfies (T1) and (T2), we fix an equivalent norm $(JT^\ast,|||\cdot|||)$, and we shall see that this norm is neither strictly convex nor a Kadec norm. \begin{lem}\label{lemaclave} For any node of the tree $t\in T$ and every $\varepsilon>0$ we can find another node $s\succ t$ and an element $x_s^{\ast\ast}\in JT^{\ast\ast}$ with $|||x_s^{\ast\ast}|||^\ast=1$ such that \begin{enumerate} \item $\left|\sup\{|||\chi_{[0,u]}^\ast||| : u\succeq s\}-|||\chi_{[0,s]}^\ast|||\right|<\varepsilon$. \item $x_s^{\ast\ast}(\chi_{[0,u]}^\ast) \geq |||\chi_{[0,s]}^\ast|||-\varepsilon$ whenever $s\prec u$. \end{enumerate} \end{lem} Proof: First we take a node $t'\succ t$ such that $$\left|\sup\{|||\chi_{[0,u]}^\ast||| : u\succeq t\}-|||\chi_{[0,t']}^\ast|||\right|<\frac{\varepsilon}{2},$$ and we find $x^{\ast\ast}\in JT^{\ast\ast}$ with $|||x^{\ast\ast}|||^\ast=1$ such that $x^{\ast\ast}(\chi_{[0,t']}^\ast)=|||\chi_{[0,t']}^\ast|||$. We consider the set $S$ of all immediate successors of $t'$ in the tree $T$, which is an infinite antichain. Then we can consider the projection $$\pi_S^{\ast\ast}:JT^{\ast\ast}\longrightarrow X_S^{\ast\ast}= \left(\bigoplus_{s\in S} X^{\ast\ast}_s\right)_{\ell_2}$$ Since $S$ is infinite, there must exist $s\in S$ such that $\|\pi_{s}^{\ast\ast}(x^{\ast\ast})\| < \frac{\varepsilon}{2}$.\\ The elements $s\in T$ and $x^{\ast\ast}_s = x^{\ast\ast}$ are the desired ones.
Namely, for any $u\succeq s$, $$x^{\ast\ast}(\chi_{[0,u]}^\ast) = x^{\ast\ast}(\chi_{[0,t']}^\ast) + x^{\ast\ast}(\chi_{[s,u]}^\ast),$$ and $\chi_{[s,u]}^\ast\in X_s^\ast$, so $\pi_s^\ast(\chi_{[s,u]}^\ast)=\chi_{[s,u]}^\ast$ and \begin{eqnarray*}x^{\ast\ast}(\chi_{[0,u]}^\ast) &=& x^{\ast\ast}(\chi_{[0,t']}^\ast) + x^{\ast\ast}(\chi_{[s,u]}^\ast)\\ &=& x^{\ast\ast}(\chi_{[0,t']}^\ast) + x^{\ast\ast}(\pi_s^\ast (\chi_{[s,u]}^\ast))\\ &=& x^{\ast\ast}(\chi_{[0,t']}^\ast) + \pi_s^{\ast\ast}(x^{\ast\ast})(\chi_{[s,u]}^\ast)\\ &\geq& x^{\ast\ast}(\chi_{[0,t']}^\ast)-\frac{\varepsilon}{2}\\ &=& |||\chi_{[0,t']}^\ast||| - \frac{\varepsilon}{2}\\ &\geq& |||\chi_{[0,s]}^\ast||| - \varepsilon. \end{eqnarray*} This guarantees in particular that $|||\chi_{[0,s]}^\ast|||\geq x^{\ast\ast}(\chi^\ast_{[0,s]})\geq |||\chi_{[0,t']}^\ast|||-\frac{\varepsilon}{2}$. Together with the property which follows from the initial choice of $t'$, this gives property (1) in the lemma and finishes the proof.$\qed$\\ Using Lemma \ref{lemaclave}, we construct by recursion a sequence $\{S_n : n<\omega\}$ of maximal antichains of $T$ which are increasing (that is, for every $t\in S_{n+1}$ there exists $s\in S_{n}$ with $s\prec t$) and such that for every $n<\omega$ and every $s\in S_n$ there exists an element $x_s^{\ast\ast}\in JT^{\ast\ast}$ with $|||x_s^{\ast\ast}|||^\ast=1$ such that \begin{enumerate} \item $\left|\sup\{|||\chi_{[0,u]}^\ast||| : u\succ s\}-|||\chi_{[0,s]}^\ast|||\right|<\frac{1}{n}$. \item $x_s^{\ast\ast}(\chi_{[0,u]}^\ast) = x_s^{\ast\ast}(\chi_{[0,s]}^\ast) \geq |||\chi_{[0,s]}^\ast|||-\frac{1}{n}$ whenever $s\prec u$.\\ \end{enumerate} Now, by property (T2), we can pick $t\in T\setminus\bigcup_{n<\omega}\hat{S}_n$.
We can find for $t$ a sequence $s_1\prec s_2\prec\cdots\prec t$ with $s_n\in S_n$.\\ For any $t'\succeq t$ and for every $n<\omega$, $$|||\chi_{[0,s_n]}^\ast|||-\frac{1}{n} \leq x_{s_n}^{\ast\ast}(\chi_{[0,t']}^\ast) \leq |||\chi_{[0,t']}^\ast||| \leq \sup_{u\succeq s_n}|||\chi_{[0,u]}^\ast||| \leq |||\chi_{[0,s_n]}^\ast|||+\frac{1}{n}.$$ This implies that all the successors of $t$ have the same $|||\cdot|||$-norm, equal to the limit of the norms $|||\chi_{[0,s_n]}^\ast|||$. If, in addition, we take two immediate successors $t_1$ and $t_2$ of $t$, then for every $n<\omega$ $$|||\frac{\chi_{[0,t_1]}^\ast + \chi_{[0,t_2]}^\ast}{2}|||\geq x^{\ast\ast}_{s_n}\left(\frac{\chi_{[0,t_1]}^\ast + \chi_{[0,t_2]}^\ast}{2}\right)\geq |||\chi_{[0,s_n]}^\ast|||-\frac{1}{n}$$ and passing to the limit $$ |||\frac{\chi_{[0,t_1]}^\ast + \chi_{[0,t_2]}^\ast}{2}|||\geq |||\chi_{[0,t_1]}^\ast||| = |||\chi_{[0,t_2]}^\ast|||,$$ which shows that $|||\cdot|||$ is not a strictly convex norm.\\ If now we take a sequence $\{t_n : n<\omega\}$ of different immediate successors of $t$, then $\chi_{\{t_n\}}^\ast$ is an element of norm one of $X_{t_n}^\ast$, and since $$X_{\{t_n : n<\omega\}}^\ast = \left(\bigoplus_{n<\omega} X^\ast_{t_n}\right)_{\ell_2}$$ the sequence $(\chi_{\{t_n\}}^\ast : n<\omega)$ is isometric to the basis of $\ell_2$ and in particular is weakly null. Therefore $\chi_{[0,t_n]}^\ast$ is a sequence in a sphere which weakly converges to $\chi_{[0,t]}^\ast$, which lies in the same sphere. However, $\|\chi_{[0,t_n]}^\ast - \chi_{[0,t]}^\ast\| = \|\chi_{[t_n,t_n]}^\ast\|=1$, so this sequence does not converge in norm. This shows that $|||\cdot |||$ is not a Kadec norm. \section{About a tree of Todor\v{c}evi\'{c}}\label{treesection} A subset $A$ of $\omega_1$ is called stationary if the intersection of $A$ with every closed and unbounded subset of $\omega_1$ is nonempty. We shall fix a set $A$ such that both $A$ and $\omega_1\setminus A$ are stationary.
The existence of such a set follows from a result of Ulam \cite[Theorem 3.2]{Kunencombinatorics}. \begin{defn}[Todor\v{c}evi\'c] We define $T$ to be the tree whose nodes are the closed subsets of $\omega_1$ which are contained in $A$, and whose order relation is that $s\prec t$ if $s$ is an initial segment of $t$.\\ \end{defn} First, $T$ has property (T1) because if $t\in T$ and $\eta\in A$ satisfies $\eta>\max(t)$, then $t\cup \{\eta\}$ is an immediate successor of $t$ in $T$. On the other hand, $T$ does not contain any uncountable chain: if $\{t_i\}_{i<\omega_1}$ were an uncountable chain, then $\bigcup_{i<\omega_1}t_i$ would be a closed and unbounded subset of $\omega_1$, so it would intersect $\omega_1\setminus A$, which is impossible. The difficult point is showing that $T$ satisfies property (T2).\\ \begin{thm}[Todor\v{c}evi\'c]\label{antichainstree} For any countable family of antichains $\{S_n : n<\omega\}$ there exists $t\in T$ such that $t\not\in\bigcup_{n<\omega}\hat{S}_n$.\\ \end{thm} PROOF: Suppose, towards a contradiction, that we have a family of antichains $\{S_n : n<\omega\}$ which does not satisfy the statement. We may assume without loss of generality that each of these antichains is a maximal antichain, and that they are increasing, that is, for every $t\in S_{n+1}$ there exists $s\in S_n$ such that $s\prec t$. What we know is that for every $t\in T$ we can find $t'\in\bigcup_{m<\omega}S_m$ such that $t\prec t'$. Moreover, since the antichains are taken maximal and increasing,\\ $(\ast)$ For every natural number $n$ and every $t\in T$ there exists $t'\in\bigcup_{m>n}S_m$ such that $t\prec t'$.\\ We construct a family $\{R_\xi : \xi<\omega_1\}$ of subsets of $T$ with the following properties:\\ \begin{enumerate} \item $R_\xi$ is a countable subset of $\bigcup_{n<\omega} S_n$. \item $R_\xi\subset R_\zeta$ whenever $\xi<\zeta$. \item If $\xi$ is a limit ordinal, then $R_\xi = \bigcup_{\zeta<\xi}R_\zeta$.
\item If we set $\gamma_\xi = \sup\{\max(t) : t\in R_\xi\}$ then the following are satisfied \begin{enumerate} \item $\gamma_\xi<\gamma_\zeta$ whenever $\xi<\zeta$. \item For every $\xi<\omega_1$, every $t\in R_\xi$, every $n<\omega$ and every $\eta\in A$ such that $\max(t)<\eta<\gamma_\xi$ there exists $t'\in R_\xi\cap\cup_{m>n}S_m$ such that $t\cup\{\eta\}\prec t'$. \item $\gamma_\xi\neq \max(t)$ for every $t\in R_\xi$.\\ \end{enumerate} \end{enumerate} These sets are constructed by induction on $\xi$. We set $R_0=\emptyset$ and we suppose we have constructed $R_\zeta$ for every $\zeta<\xi$. If $\xi$ is a limit ordinal, then we define $R_\xi = \bigcup_{\zeta<\xi}R_\zeta$. Notice that then $\gamma_\xi = \sup\{\gamma_\zeta : \zeta<\xi\}$ and all properties are immediately verified for $R_\xi$ provided they are verified for every $\zeta<\xi$.\\ Now, we suppose that $\xi=\zeta +1$. In order that 4(b) is verified, we will carry out a saturation argument. We will find $R_\xi$ as the union of a sequence $R_\xi = \bigcup_{n<\omega}R_\xi^n$.\\ First, we set $R_\xi^0 = R_\zeta$ and $\gamma_\xi^0 = \gamma_\zeta$. Because we know that property 4(b) is verified by $R_\zeta$, we have guaranteed property 4(b) in $R_\xi$ when $\eta<\gamma_\zeta$.\\ In the next step, we take care that 4(b) is verified for every $t\in R_\xi^0$ and $\eta=\gamma_\zeta$. 
That is, for every $t\in R_\xi^0$ and every $n<\omega$ we find, using property $(\ast)$, $t'_n\in\bigcup_{m>n}S_m$ such that $t\cup\{\gamma_\xi^0\}\prec t'_n$, and we set $R_\xi^1 = R_\xi^0\cup\{t'_n : t\in R_\xi^0,\ n<\omega\}$ and $\gamma_\xi^1 = \sup\{\max(s) : s\in R_\xi^1\}$.\\ If we have already defined $R_\xi^{k}$ and $\gamma_\xi^{k} = \sup\{\max(s) : s\in R_\xi^{k}\}$, then we make sure that property 4(b) will be verified in $R_\xi$ for every $\eta\leq\gamma_\xi^k$; that is, for every $n<\omega$, every $t\in R_\xi^{k}$ and every $\eta\in A\cap(\max(t),\gamma_\xi^k]$, we find, by property $(\ast)$, an element $t'_{n\eta}\in\bigcup_{m>n}S_m$ such that $t\cup\{\eta\}\prec t'_{n\eta}$, and we set $$R_\xi^{k+1} = R_\xi^k\cup\{t'_{n\eta} : t\in R_\xi^k,\ n<\omega,\ \eta\in A\cap(\max(t),\gamma_\xi^k]\}$$ and $\gamma_\xi^{k+1} = \sup\{\max(s) : s\in R_\xi^{k+1}\}.$\\ Finally, setting $R_\xi=\bigcup_{k<\omega} R_\xi^k$, we have that $\gamma_\xi = \sup_{k<\omega}\gamma_\xi^k$ and the construction is finished.\\ Now, we derive a contradiction from the existence of the sets $R_\xi$. The set $\{\gamma_\xi : \xi<\omega_1\}$ is a closed and unbounded subset of $\omega_1$, so, since $A$ is stationary, there exists $\xi<\omega_1$ such that $\gamma_\xi\in A$. We will construct a sequence $t_1\prec t_2\prec\cdots$ of elements of $R_\xi$ such that $t_n\in \bigcup_{m>n}S_m$ and $\gamma_\xi = \sup\{\max(t_n) : n<\omega\}$. Such a sequence leads to a contradiction, because then $t=\bigcup_{n=1}^\infty t_n\cup\{\gamma_\xi\}$ is a node of the tree with the property that for every $n$, $t\succ t_n\in S_{m_n}$ with $m_n>n$, and this implies that $t\not\in\bigcup_{n<\omega}\hat{S}_n$. The construction of the sequence $t_n$ is done inductively as follows. We choose an increasing sequence of ordinals $\{\eta_i : i<\omega\}\subseteq A$ converging to $\gamma_\xi$ (this is possible because $\max(t)\in A$ for every $t\in R_\xi$ and these maxima are cofinal in $\gamma_\xi$).
If we have already defined $t_{n-1}$, we find $i$ with $\max(t_{n-1})<\eta_i$ and we use property 4(b) to find $t_n\in R_\xi\cap\bigcup_{m>n}S_m$ with $t_{n-1}\cup\{\eta_i\}\prec t_n$.\\ \end{document}
\begin{document} \maketitle \begin{abstract} One of the most popular algorithms for clustering in Euclidean space is the $k$-means algorithm; $k$-means is difficult to analyze mathematically, and few theoretical guarantees are known about it, particularly when the data is {\em well-clustered}. In this paper, we attempt to fill this gap in the literature by analyzing the behavior of $k$-means on well-clustered data. In particular, we study the case when each cluster is distributed as a different Gaussian -- or, in other words, when the input comes from a mixture of Gaussians. We analyze three aspects of the $k$-means algorithm under this assumption. First, we show that when the input comes from a mixture of two spherical Gaussians, a variant of the $2$-means algorithm successfully isolates the subspace containing the means of the mixture components. Second, we show an exact expression for the convergence of our variant of the $2$-means algorithm, when the input is a very large number of samples from a mixture of spherical Gaussians. Our analysis does not require any lower bound on the separation between the mixture components. Finally, we study the sample requirement of $k$-means; for a mixture of $2$ spherical Gaussians, we show an upper bound on the number of samples required by a variant of $2$-means to get close to the true solution. The sample requirement grows with increasing dimensionality of the data, and decreasing separation between the means of the Gaussians. To match our upper bound, we show an information-theoretic lower bound on any algorithm that learns mixtures of two spherical Gaussians; our lower bound indicates that in the case when the overlap between the probability masses of the two distributions is small, the sample requirement of $k$-means is {\em near-optimal}.
\end{abstract} \section{Introduction} One of the most popular algorithms for clustering in Euclidean space is the $k$-means algorithm~\cite{L82, F65, M67}; this is a simple, local-search algorithm that iteratively refines a partition of the input points until convergence. Like many local-search algorithms, $k$-means is notoriously difficult to analyze, and few theoretical guarantees are known about it. There have been three lines of work on the $k$-means algorithm. The first addresses the quality of the solution produced by $k$-means, in comparison to the globally optimal solution. While it is well known that for general inputs the quality of this solution can be arbitrarily bad, the conditions under which $k$-means yields a globally optimal solution on {\em well-clustered} data are not well understood. A second line of work~\cite{AV06, V09} examines the number of iterations required by $k$-means to converge. ~\cite{V09} shows that there exists a set of $n$ points on the plane such that $k$-means takes as many as $\Omega(2^n)$ iterations to converge on these points. A smoothed analysis upper bound of $poly(n)$ iterations has been established by~\cite{AMR09}, but this bound is still much higher than what is observed in practice, where the number of iterations is frequently sublinear in $n$. Moreover, the smoothed analysis bound applies to small perturbations of arbitrary inputs, and the question of whether one can get faster convergence on well-clustered inputs is still unresolved. A third question, considered in the statistics literature, is the statistical efficiency of $k$-means. Suppose the input is drawn from some simple distribution, for which $k$-means is statistically consistent; then, how many samples are required for $k$-means to converge? Are there other consistent procedures with a better sample requirement? In this paper, we study all three aspects of $k$-means, by studying the behavior of $k$-means on Gaussian clusters.
Such data is frequently modelled as a mixture of Gaussians; a mixture is a collection of Gaussians $\mathcal{D} = \{ D_1, \ldots, D_k \}$ and weights $w_1, \ldots, w_k$, such that $\sum_i w_i = 1$. To sample from the mixture, we first pick $i$ with probability $w_i$ and then draw a random sample from $D_i$. Clustering such data then reduces to the problem of {\em learning a mixture}; here, we are given only the ability to sample from a mixture, and our goal is to learn the parameters of each Gaussian $D_i$, as well as determine which Gaussian each sample came from. Our results are as follows. First, we show that when the input comes from a mixture of two spherical Gaussians, a variant of the $2$-means algorithm successfully isolates the subspace containing the means of the Gaussians. Second, we show an exact expression for the convergence of a variant of the $2$-means algorithm, when the input is a large number of samples from a mixture of two spherical Gaussians. Our analysis shows that the convergence-rate is logarithmic in the dimension, and decreases with increasing separation between the mixture components. Finally, we address the sample requirement of $k$-means; for a mixture of $2$ spherical Gaussians, we show an upper bound on the number of samples required by a variant of $2$-means to get close to the true solution. The sample requirement grows with increasing dimensionality of the data, and decreasing separation between the means of the distributions. To match our upper bound, we show an information-theoretic lower bound on any algorithm that learns mixtures of two spherical Gaussians; our lower bound indicates that in the case when the overlap between the probability masses of the two distributions is small, the sample requirement of $2$-means is {\em near-optimal}. 
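The two-step sampling procedure just described is easy to make concrete. The following is a minimal sketch (not taken from the paper); the means, standard deviations, weights, and sample count are hypothetical values chosen only for illustration.

```python
import random

def sample_mixture(means, sigmas, weights, rng):
    # Two-step draw: pick component i with probability weights[i],
    # then sample from the spherical Gaussian N(means[i], sigmas[i]^2 I_d).
    i = rng.choices(range(len(weights)), weights=weights)[0]
    return i, [m + sigmas[i] * rng.gauss(0.0, 1.0) for m in means[i]]

# Hypothetical mixture of two spherical Gaussians in the plane.
rng = random.Random(0)
means, sigmas, weights = [[2.0, 0.0], [-2.0, 0.0]], [1.0, 1.0], [0.5, 0.5]
draws = [sample_mixture(means, sigmas, weights, rng) for _ in range(1000)]
```

Each draw carries the component label $i$, which is what the learning problem asks us to recover from the unlabeled samples alone.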
Additionally, we make some partial progress towards analyzing $k$-means in the more general case -- we show that if our variant of $2$-means is run on a mixture of $k$ spherical Gaussians, then it converges to a vector in the subspace containing the means of the $D_i$. The key insight in our analysis is a novel potential function $\theta_t$, which is the minimum angle between the subspace of the means of the $D_i$ and the normal to the hyperplane separator in $2$-means. We show that this angle decreases with iterations of our variant of $2$-means, and we can characterize convergence rates and sample requirements by characterizing the rate of decrease of this potential. \medskip\noindent{\textbf{Our Results.}} More specifically, our results are as follows. We perform a probabilistic analysis of a variant of $2$-means; our variant is essentially a symmetrized version of $2$-means, and it reduces to $2$-means when we have a very large number of samples from a mixture of two identical spherical Gaussians with equal weights. In the $2$-means algorithm, the separator between the two clusters is always a hyperplane, and we use the angle $\theta_t$ between the normal to this hyperplane and the mean of a mixture component in round $t$ as a measure of the potential in each round.
Note that when $\theta_t = 0$, we have arrived at the correct solution. First, in Section~\ref{sec:k2infsamples}, we consider the case when we have at our disposal a very large number of samples from a mixture of $N(\mu^1,(\sigma^1)^2 I_d)$ and $N(\mu^2,(\sigma^2)^2 I_d)$ with mixing weights $\rho^1, \rho^2$ respectively. We show an exact relationship between $\theta_t$ and $\theta_{t+1}$, for any value of $\mu^j$, $\sigma^j$, $\rho^j$ and $t$. Using this relationship, we can approximate the rate of convergence of $2$-means for different values of the separation, as well as for different initialization procedures. Our guarantees illustrate that the progress of $k$-means is very fast -- namely, the square of the cosine of $\theta_t$ grows by at least a constant factor (for high separation) in each round when one is far from the actual solution, and slows down only when one is very close to the actual solution. Next, in Section~\ref{sec:k2finsamples}, we characterize the sample requirement for our variant of $2$-means to succeed, when the input is a mixture of two spherical Gaussians. For the case of two identical spherical Gaussians with equal mixing weight, our results imply that when the separation $\mu < 1$, and when $\tilde{\Omega}(\frac{d}{\mu^4})$ samples are used in each round, the $2$-means algorithm makes progress at roughly the same rate as in Section~\ref{sec:k2infsamples}. This agrees with the $\Omega(\frac{1}{\mu^4})$ sample complexity lower bound~\cite{Lbook} for learning a mixture of Gaussians on the line, as well as with the experimental results of~\cite{SSR06}. When $\mu > 1$, our variant of $2$-means makes progress in each round when the number of samples is at least $\tilde{\Omega}(\frac{d}{\mu^2})$. Then, in Section~\ref{sec:lowerbounds}, we provide an information-theoretic lower bound on the sample requirement of any algorithm for learning a mixture of two spherical Gaussians with standard deviation $1$ and equal weights.
We show that when the separation $\mu > 1$, any algorithm requires $\Omega(\frac{d}{\mu^2})$ samples to converge to a vector within angle $\theta = \cos^{-1}(c)$ of the true solution, where $c$ is a constant. This indicates that $k$-means has a near-optimal sample requirement when $\mu > 1$. Finally, in Section~\ref{sec:genkinf}, we examine the performance of $2$-means when the input comes from a mixture of $k$ spherical Gaussians. We show that, in this case, the normal to the hyperplane separating the two clusters converges to a vector in the subspace containing the means of the mixture components. Again, we characterize exactly the rate of convergence, which looks very similar to the bounds in Section~\ref{sec:k2infsamples}. \medskip\noindent{\textbf{Related Work.}} The convergence time of the $k$-means algorithm has been analyzed in the worst-case~\cite{AV06, V09} and smoothed analysis settings~\cite{MR09, AMR09}; ~\cite{V09} shows that the convergence time of $k$-means may be $\Omega(2^n)$ even in the plane, and~\cite{AMR09} establishes an $O(n^{30})$ smoothed complexity bound. ~\cite{ORSS06} analyzes the performance of $k$-means when the data obeys a clusterability condition; however, their clusterability condition is very different, and moreover, they examine conditions under which constant-factor approximations can be found. In the statistics literature, the $k$-means algorithm has been shown to be consistent~\cite{M67}. \cite{P81} shows that minimizing the $k$-means objective function (namely, the sum of the squares of the distances between each point and the center it is assigned to) is consistent, given sufficiently many samples. As optimizing the $k$-means objective is NP-hard, one cannot hope to always get an exact solution. Neither of these two works quantifies the convergence rate or the exact sample requirement of $k$-means.
There have been two lines of previous work on theoretical analysis of the EM algorithm~\cite{DLR77}, which is closely related to $k$-means. Essentially, for learning mixtures of identical Gaussians, the only difference between EM and $k$-means is that EM uses {\em partial assignments} or {\em soft clusterings}, whereas $k$-means does not. First,~\cite{RW84, XJ96} view learning mixtures as an optimization problem, and EM as an optimization procedure over the likelihood surface. They analyze the structure of the likelihood surface around the optimum to conclude that EM has first-order convergence. An optimization procedure on a parameter $m$ is said to have first-order convergence if \[ ||m_{t+1} - m^*|| \leq R \cdot ||m_t - m^*|| \] where $m_t$ is the estimate of $m$ at time step $t$ using $n$ samples, $m^*$ is the maximum likelihood estimator for $m$ using $n$ samples, and $R$ is some fixed constant between $0$ and $1$. In contrast, our analysis also applies when one is far from the optimum. The second line of work is a probabilistic analysis of EM due to~\cite{DS00}; they show a two-round variant of EM which converges to the correct partitioning of the samples, when the input is generated by a mixture of $k$ well-separated, spherical Gaussians. For their analysis to work, they require the mixture components to be separated such that two samples from the same Gaussian are a little closer in space than two samples from different Gaussians. In contrast, our analysis applies when the separation is much smaller. The sample requirement of learning mixtures has been previously studied in the literature, but not in the context of $k$-means. ~\cite{CHRZ07, C07} provide an algorithm that learns a mixture of two binary product distributions with uniform weights, when the separation $\mu$ between the mixture components is at least a constant, so long as $\tilde{\Omega}(\frac{d}{\mu^4})$ samples are available.
(Notice that for such distributions, the directional standard deviation is at most $1$.) Their algorithm is similar to $k$-means in some respects, but different in that they use different sets of coordinates in each round, and this is very crucial in their analysis. Additionally,~\cite{BCFZ07} show a spectral algorithm which learns a mixture of $k$ binary product distributions, when the distributions have small overlap in probability mass, and the sample size is at least $\tilde{\Omega}(d/\mu^2)$. \cite{Lbook} shows that at least $\tilde{\Omega}(\frac{1}{\mu^4})$ samples are required to learn a mixture of two Gaussians in one dimension. We note that although our lower bound of $\Omega(d/\mu^2)$ for $\mu >1$ seems to contradict the upper bound of~\cite{CHRZ07, C07}, this is not actually the case. Our lower bound characterizes the number of samples required to find a vector at an angle $\theta = \cos^{-1}(1/10)$ with the vector joining the means. However, in order to classify a constant fraction of the points correctly, we only need to find a vector at an angle $\theta' = \cos^{-1}(1/\mu)$ with the vector joining the means. Since the goal of~\cite{CHRZ07} is to simply classify a constant fraction of the samples, their upper bound is less than $O(d/\mu^2)$. In addition to theoretical analysis, there has been very interesting experimental work due to~\cite{SSR06}, which studies the sample requirement for EM on a mixture of $k$ spherical Gaussians. They conjecture that the problem of learning mixtures has three phases, depending on the number of samples: with fewer than about $\frac{d}{\mu^4}$ samples, learning mixtures is information-theoretically hard; with more than about $\frac{d}{\mu^2}$ samples, it is computationally easy; and in between, computationally hard, but easy in an information-theoretic sense.
Finally, there has been a line of work which provides algorithms (different from EM or $k$-means) that are guaranteed to learn mixtures of Gaussians under certain separation conditions -- see, for example,~\cite{D99, VW02, AK01, AM05, KSV05, CR08, BV08}. For mixtures of two Gaussians, our result is comparable to the best results for spherical Gaussians~\cite{VW02} in terms of separation requirement, and we have a smaller sample requirement. \section{The Setting} \label{sec:setting} The $k$-means algorithm iteratively refines a partitioning of the input data. At each iteration, $k$ points are maintained as {\em centers}; each input point is assigned to its closest center. The center of each cluster is then recomputed as the empirical mean of the points assigned to the cluster. This procedure is continued until convergence. Our variant of $k$-means is described below. There are two main differences between the actual $2$-means algorithm and our variant. First, we use a separate set of samples in each iteration. Second, we always fix the cluster boundary to be a hyperplane through the origin. When the input is a very large number of samples from a mixture of two identical Gaussians with equal mixing weights, and with center of mass at the origin, this is exactly $2$-means initialized with symmetric centers (with respect to the origin). We analyze this symmetrized version of $2$-means even when the mixing weights and the variances of the Gaussians in the mixture are not equal. The input to our algorithm is a set of samples $\mathcal{S}$, a number of iterations $N$, and a starting vector $u_0$, and the output is a vector $u_N$ obtained after $N$ iterations of the $2$-means algorithm. \medskip \noindent \textbf{2-means-iterate($\mathcal{S}$, $N$, $u_0$)} \begin{enumerate} \item Partition $\mathcal{S}$ randomly into sets of equal size $\mathcal{S}_1, \ldots, \mathcal{S}_N$.
\item For iteration $t = 0, \ldots, N-1$, compute: \begin{eqnarray*} C_{t+1} & = & \{ x \in \mathcal{S}_{t+1} \mid \langle x, u_{t}\rangle > 0 \} \\ \bar{C}_{t+1} & = & \{ x \in \mathcal{S}_{t+1} \mid \langle x, u_t\rangle < 0 \} \end{eqnarray*} Compute $u_{t+1}$ as the empirical average of $C_{t+1}$. \end{enumerate} \noindent {\textbf{Notation.}} In Sections~\ref{sec:k2infsamples} and~\ref{sec:k2finsamples}, we analyze Algorithm 2-means-iterate, when the input is generated by a mixture $\mathcal{D} = \{D_1, D_2\}$ of two Gaussians. We let $D_1 = N(\mu^1, (\sigma^1)^2 I_d)$, $D_2 = N(\mu^2, (\sigma^2)^2 I_d)$, with mixing weights $\rho^1$ and $\rho^2$. We also assume without loss of generality that for all $j$, $\sigma^j \geq 1$. As the center of mass of the mixture lies at the origin, $\rho^1 \mu^1 + \rho^2 \mu^2 = 0$. In Section~\ref{sec:genkinf}, we study a somewhat more general case. We define $b$ as the unit vector along $\mu^1$, i.e. $b = \frac{\mu^1}{||\mu^1||}$. Henceforth, for any vector $v$, we use the notation $\breve{v}$ to denote the unit vector along $v$, i.e. $\breve{v} = \frac{v}{||v||}$. Therefore, $\breve{u}_t$ is the unit vector along $u_t$. We assume without loss of generality that $\mu^1$ lies in the cluster $C_{t+1}$. In addition, for each $t$, we define $\theta_t$ as the angle between $\mu^1$ and $u_t$. We use the cosine of $\theta_t$ as a measure of progress of the algorithm at round $t$, and our goal is to show that this quantity increases as $t$ increases. Observe that $0 \leq \cos(\theta_t) \leq 1$, and $\cos(\theta_t) = 1$ when $u_t$ and $\mu^1$ are aligned along the same direction. For each $t$, we define $\tau_t^j = \langle \mu^j, \breve{u}_t\rangle = \langle \mu^j, b\rangle\cos(\theta_t)$. Moreover, from our notation, $\cos(\theta_t) = \frac{\tau_t^1}{||\mu^1||}$. In addition, we define $\rho_{min} = \min_j \rho^j$, $\mu_{min} = \min_j ||\mu^j||$, and $\sigma_{max} = \max_j \sigma^j$.
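The two steps above can be sketched in a few executable lines. This is an illustrative implementation under the stated conventions (a fresh batch each round, a separating hyperplane through the origin, and $u_{t+1}$ the empirical mean of $C_{t+1}$); the data generator, seed, and parameter values are hypothetical and not part of the paper's analysis.

```python
import random

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def two_means_iterate(samples, N, u0):
    # Partition the samples into N equal batches; in round t keep the
    # cluster C_{t+1} = {x : <x, u_t> > 0} and set u_{t+1} to its mean.
    size = len(samples) // N
    u = u0
    for t in range(N):
        batch = samples[t * size:(t + 1) * size]
        C = [x for x in batch if dot(x, u) > 0]
        if C:  # guard against an empty cluster in this sketch
            u = [sum(coord) / len(C) for coord in zip(*C)]
    return u

# Hypothetical input: a well-separated symmetric mixture in the plane.
rng = random.Random(1)
samples = []
for _ in range(4000):
    m = 3.0 if rng.random() < 0.5 else -3.0
    samples.append([m + rng.gauss(0, 1), rng.gauss(0, 1)])
u = two_means_iterate(samples, 4, [1.0, 1.0])
```

With this well-separated input, the returned normal $u$ aligns closely with the direction of the component mean, which is the behavior the analysis below quantifies.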
For the special case of two identical spherical Gaussians with equal weights, we use $\text{\boldmath$\mu$} = ||\mu^1|| = ||\mu^2||$. Finally, for $a \leq b$, we use the notation $\Phi(a,b)$ to denote the probability that a standard normal variable takes values between $a$ and $b$. \begin{figure} \caption{\small The plane defined by the vectors $\mu^1$ and $\breve{u}_t$.} \end{figure} \section{Exact Estimation}\label{sec:k2infsamples} In this section, we examine the performance of Algorithm 2-means-iterate\ when one can estimate the vectors $u_{t}$ exactly -- that is, when a very large number of samples from the mixture is available.
Our main result of this section is Lemma \ref{lem:expr1}, which exactly characterizes the behavior of 2-means-iterate\ at a specific iteration $t$. For any $t$, we define the quantities $\xi_t$ and $m_t$ as follows: \begin{eqnarray*} \xi_t = \sum_j \rho^j \sigma^j\frac{e^{-(\tau_t^j)^2/2(\sigma^j)^2}}{\sqrt{2 \pi}}, & m_t = \sum_j \rho^j \langle \mu^j, b\rangle \cdot \Phi(-\frac{\tau_t^j}{\sigma^j},\infty) \end{eqnarray*} Now, our main lemma can be stated as follows. \begin{lemma}\label{lem:expr1} \[\cos^2(\theta_{t+1}) = \cos^2(\theta_t) \left(1 + \tan^2(\theta_t) \frac{ 2\cos(\theta_t)\xi_t m_t + m_t^2 }{ \xi_t^2 + 2\cos(\theta_t)\xi_t m_t + m_t^2 } \right) \] \end{lemma} The proof is in the Appendix. Using Lemma~\ref{lem:expr1}, we can characterize the convergence rates and times of 2-means-iterate\ for different values of $\mu^j$, $\rho^j$ and $\sigma^j$, as well as different initializations of $u_0$. The convergence rates can be characterized in terms of two natural parameters of the problem: $M = \sum_j \frac{\rho^j ||\mu^j||^2}{\sigma^j}$, which measures how much the distributions are separated, and $V = \sum_j \rho^j \sigma^j$, which measures the average standard deviation of the distributions. We observe that as $\sigma^j \geq 1$ for all $j$, we always have $V \geq 1$. To characterize these rates, it is also convenient to look at two different cases, according to the value of $\mu^j$, the separation between the mixture components. \noindent {\bf{Small $\mu^j$.}} First, we consider the case when each $||\mu^j||/\sigma^j$ is less than the fixed constant $\sqrt{\ln \frac{9}{2 \pi}}$, including the case when $||\mu^j||$ can be much less than $1$. In this case, the Gaussians are not even separated in terms of probability mass; in fact, as $||\mu^j||/\sigma^j$ decreases, the overlap in probability mass between the Gaussians tends to $1$. However, we show that 2-means-iterate\ can still do something interesting, in terms of recovering the subspace containing the means of the distributions.
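In the symmetric special case ($\rho^1=\rho^2=\frac12$, $\sigma^1=\sigma^2=\sigma$, $\mu^2=-\mu^1$), the quantities $\xi_t$ and $m_t$ reduce to single closed-form expressions, and the recurrence of Lemma \ref{lem:expr1} can be iterated numerically. The sketch below (with hypothetical parameter values, and $\Phi(-x,\infty)$ computed via the standard normal cdf) illustrates that $\cos^2(\theta_t)$ is nondecreasing, never exceeds $1$, and grows quickly when far from the solution.

```python
import math

def std_normal_cdf(x):
    # P(Z <= x) for a standard normal Z, so Phi(-x, infinity) = std_normal_cdf(x).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def next_cos2(cos2, mu, sigma=1.0):
    # One step of the recurrence of the lemma in the symmetric case
    # rho^1 = rho^2 = 1/2, ||mu^1|| = ||mu^2|| = mu, sigma^1 = sigma^2 = sigma.
    c = math.sqrt(cos2)
    tau = mu * c                     # tau_t^1 = -tau_t^2 = mu * cos(theta_t)
    xi = sigma * math.exp(-tau ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi)
    m = 0.5 * mu * (std_normal_cdf(tau / sigma) - std_normal_cdf(-tau / sigma))
    tan2 = (1.0 - cos2) / cos2
    frac = (2 * c * xi * m + m ** 2) / (xi ** 2 + 2 * c * xi * m + m ** 2)
    return cos2 * (1.0 + tan2 * frac)

# Hypothetical run: separation mu = 2, starting almost orthogonal to mu^1.
cos2_seq = [0.01]
for _ in range(10):
    cos2_seq.append(next_cos2(cos2_seq[-1], 2.0))
```

Since $0\leq\mathrm{frac}<1$, the update equals $\cos^2(\theta_t)+\sin^2(\theta_t)\cdot\mathrm{frac}$, so it can never overshoot $\cos^2=1$.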
Theorem~\ref{thm:smallmu} summarizes the convergence rate in this case. \begin{theorem}[Small $\mu^j$]\label{thm:smallmu} Let $||\mu^j||/\sigma^j < \sqrt{\ln \frac{9}{2 \pi}}$, for $j=1,2$. Then, there exist fixed constants $a_1$ and $a_2$, such that: \begin{eqnarray*} \cos^2(\theta_t)(1 + a_1(M/V) \sin^2(\theta_t)) \leq \cos^2(\theta_{t+1}) \leq \cos^2 (\theta_t) (1 + a_2 (M/V) \sin^2(\theta_t)) \end{eqnarray*} \end{theorem} For a mixture of two identical Gaussians with equal mixing weights, we can conclude: \begin{corollary}\label{cor1:smallmu} For a mixture of two identical spherical Gaussians with equal mixing weights and standard deviation $1$, if $\text{\boldmath$\mu$} = ||\mu^1|| = ||\mu^2|| < \sqrt{\ln \frac{9}{2 \pi}}$, then, \[ \cos^2(\theta_t) (1 + a_1' \text{\boldmath$\mu$}^2 \sin^2(\theta_t)) \leq \cos^2(\theta_{t+1}) \leq \cos^2(\theta_t) (1 + a_2'\text{\boldmath$\mu$}^2 \sin^2 (\theta_t)) \] \end{corollary} The proof follows by a combination of Lemma~\ref{lem:expr1} and Lemma~\ref{lem:smalltau}. From Corollary~\ref{cor1:smallmu}, we observe that $\cos^2(\theta_t)$ grows by a factor of $(1 + \Theta(\text{\boldmath$\mu$}^2))$ in each iteration, except when $\theta_t$ is very close to $0$. This means that when 2-means-iterate\ is far from the actual solution, it approaches the solution at a consistently high rate. The convergence rate only grows slower once $k$-means is very close to the actual solution. \medskip \noindent {\bf{Large $\mu^j$.}} In this case, there exists a $j$ such that $||\mu^j||/\sigma^j \geq \sqrt{\ln \frac{9}{2 \pi}}$. In this regime, the Gaussians have small overlap in probability mass, yet the distance between two samples from the same distribution is much greater than the separation between the distributions. Our guarantees for this case are summarized by Theorem \ref{thm:largemu}. We see from Theorem \ref{thm:largemu} that there are two regimes of behavior of the convergence rate, depending on the value of $\max_j |\tau_t^j|/\sigma^j$.
These regimes have a natural interpretation. The first regime corresponds to the case when $\theta_t$ is large enough that, when the data is projected onto $u_t$, at most a constant fraction of the samples from the two distributions can be classified with high confidence. The second regime corresponds to the case when $\theta_t$ is close enough to $0$ that, when the data is projected along $u_t$, most of the samples from the distributions can be classified with high confidence. As expected, in the second regime, the convergence rate is much slower than in the first regime. \begin{theorem}[Large $\mu^j$]\label{thm:largemu} Suppose there exists $j$ such that $\|\mu^j\|/\sigma^j \geq \sqrt{\ln \frac{9}{2 \pi}}$. If $|\tau_t^j|/\sigma^j < \sqrt{ \ln \frac{9}{2 \pi}}$ for all $j$, then there exist fixed constants $a_3$, $a_4$, $a_5$ and $a_6$ such that: \begin{eqnarray*} \cos^2(\theta_t) \left(1 + \frac{a_3 (M/V)^2 \sin^2(\theta_t) }{a_4 + (M/V)^2 \cos^2(\theta_t)}\right) \leq \cos^2(\theta_{t+1}) \leq \cos^2(\theta_t) \left(1 + \frac{a_5 ((M/V) + (M/V)^2)\sin^2(\theta_t)}{a_6 + (M/V)^2\cos^2(\theta_t)}\right) \end{eqnarray*} On the other hand, if there exists $j$ such that $|\tau_t^j|/\sigma^j \geq \sqrt{\ln \frac{9}{2 \pi}}$, then there exist fixed constants $a_7$ and $a_8$ such that: \begin{eqnarray*} \cos^2(\theta_t) \left(1 + \frac{ a_7 \rho_{\min}^2\mu_{\min}^2}{a_8 V^2 + \rho_{\min}^2 \mu_{\min}^2}\tan^2(\theta_t)\right) \leq \cos^2(\theta_{t+1}) \leq \cos^2(\theta_t)(1 + \tan^2(\theta_t)) \end{eqnarray*} \end{theorem} For two identical Gaussians with standard deviation $1$, we can conclude: \begin{corollary} For a mixture of two identical Gaussians with equal mixing weights and standard deviation $1$, if $\text{\boldmath$\mu$} = \|\mu^1\| = \|\mu^2\| > \sqrt{\ln \frac{9}{2 \pi}}$, and if $|\tau_t^1|=|\tau_t^2| \leq \sqrt{\ln \frac{9}{2 \pi}}$, then there exist fixed constants $a'_3, a'_4, a'_5, a'_6$ such that: \begin{eqnarray*} \cos^2(\theta_t) \left(1 + \frac{a'_3 \text{\boldmath$\mu$}^4 \sin^2(\theta_t) }{a'_4 + \text{\boldmath$\mu$}^4 \cos^2(\theta_t)}\right) \leq \cos^2(\theta_{t+1}) \leq \cos^2(\theta_t) \left(1 + \frac{a'_5 \text{\boldmath$\mu$}^4\sin^2(\theta_t)}{a'_6 + \text{\boldmath$\mu$}^4\cos^2(\theta_t)}\right) \end{eqnarray*} On the other hand, if $|\tau_t^1|= |\tau_t^2| \geq \sqrt{\ln \frac{9}{2 \pi}}$, then there exists a fixed constant $a'_7$ such that: \begin{eqnarray*} \cos^2(\theta_t) (1 + a'_7 \tan^2(\theta_t)) \leq \cos^2(\theta_{t+1}) \leq \cos^2(\theta_t)(1 + \tan^2(\theta_t)) \end{eqnarray*} \end{corollary} In this case as well, we observe the same phenomenon: the convergence rate is high when we are far away from the solution, and slow when we are close. Using Theorems~\ref{thm:smallmu} and~\ref{thm:largemu}, we can characterize the convergence times of 2-means-iterate; for the sake of simplicity, we present the convergence time bounds for a mixture of two spherical Gaussians with equal mixing weights and standard deviation $1$. We recall that in this case 2-means-iterate~is exactly $2$-means. \begin{corollary}[Convergence Time]\label{cor:convtime} If $\theta_0$ is the initial angle between $\mu^1$ and $u_0$, then $\cos^2(\theta_N) \geq 1 - \epsilon$ after $N = C_0 \cdot \left(\frac{\ln(\frac{1}{\cos^2(\theta_0)})}{ \ln(1 + \text{\boldmath$\mu$}^2)} + \frac{1}{\ln(1 + \epsilon)} \right)$ iterations, where $C_0$ is a fixed constant. \end{corollary} \medskip\noindent{\textbf{Effect of Initialization.}} As is apparent from Corollary~\ref{cor:convtime}, the effect of initialization is only to ensure a lower bound on the value of $\cos(\theta_0)$. We illustrate below two natural ways by which one can select $u_0$, and their effect on the convergence rate. For the sake of simplicity, we state these bounds for the case in which we have two identical Gaussians with equal mixing weights and standard deviation $1$.
\begin{itemize} \item First, one can choose $u_0$ uniformly at random from the surface of the unit sphere in $\mathbf{R}^d$; in this case, $\cos^2(\theta_0) = \Theta(\frac{1}{d})$ with constant probability, and as a result, the convergence time to reach an angle of $\cos^{-1}(1/\sqrt{2})$ is $O(\frac{\ln d}{\ln(1 +\text{\boldmath$\mu$}^2)})$. \item A second way to choose $u_0$ is to set it to be a random sample from the mixture; in this case, $\cos^2(\theta_0) = \Theta( \frac{(1 +\text{\boldmath$\mu$})^2}{d})$ with constant probability, and the time to reach $\cos^{-1}(1/\sqrt{2})$ is $O(\frac{\ln d}{\ln(1 +\text{\boldmath$\mu$}^2)})$. \end{itemize} \section{Finite Samples} \label{sec:k2finsamples} In this section, we analyze Algorithm 2-means-iterate\ when we are required to estimate the statistics at each round with a finite number of samples. We characterize the number of samples needed to ensure that 2-means-iterate\ makes progress in each round, and we also characterize the rate of progress when the required number of samples is available. The main result of this section is Lemma~\ref{lem:keysample}, which characterizes $\theta_{t+1}$, the angle between $\mu^1$ and the hyperplane separator in 2-means-iterate, given $\theta_t$. Notice that now $\theta_t$ is a random variable which depends on the samples drawn in rounds $1, \ldots, t-1$, and given $\theta_t$, $\theta_{t+1}$ is a random variable whose value depends on the samples in round $t$. Also, we use $u_{t+1}$ to denote the center of partition $C_{t+1}$ in iteration $t+1$, and $\mathbf{E}[u_{t+1}]$ is the expected center. Note that all the expectations in round $t$ are conditioned on $\theta_t$. In addition, we use $S_{t+1}$ to denote the quantity $\mathbf{E}[X \cdot 1_{X \in C_{t+1}}]$, where $1_{X \in C_{t+1}}$ is the indicator function for the event $X \in C_{t+1}$, and the expectation is taken over the entire mixture. Note that $S_{t+1} = \mathbf{E}[u_{t+1}] \Pr[X \in C_{t+1}] = Z_{t+1} \mathbf{E}[u_{t+1}]$. We use $\hat{S}_{t+1}$ to denote the empirical value of $S_{t+1}$.
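For concreteness, the statistic $\hat{S}_{t+1}$ is just the sample average of $x \cdot 1_{X \in C_{t+1}}$. The Python sketch below is our own illustration (the function names, the equal-weight two-component mixture, and taking $C_{t+1}$ to be the halfspace $\{x : \langle x, u\rangle > 0\}$ are assumptions made for the example):

```python
import random

def sample_mixture(means, sigma, n, rng):
    # n draws from an equal-weight mixture of spherical Gaussians in R^d.
    d = len(means[0])
    out = []
    for _ in range(n):
        mu = means[rng.randrange(len(means))]
        out.append([rng.gauss(mu[i], sigma) for i in range(d)])
    return out

def empirical_S(points, u):
    # \hat S_{t+1}: the sample average of x * 1[<x, u> > 0].
    d = len(u)
    acc = [0.0] * d
    for x in points:
        if sum(xi * ui for xi, ui in zip(x, u)) > 0:
            for i in range(d):
                acc[i] += x[i]
    return [a / len(points) for a in acc]

rng = random.Random(0)
pts = sample_mixture([(1.0, 0.0), (-1.0, 0.0)], 1.0, 20000, rng)
S_hat = empirical_S(pts, (1.0, 0.0))
```

With $20{,}000$ samples the estimate concentrates around its expectation, which is the phenomenon that Lemmas~\ref{lem:projconc} and~\ref{lem:normconc} quantify.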
\begin{lemma}\label{lem:keysample} If we use $n$ samples in iteration $t$, then, given $\theta_t$, with probability $1 - 2\delta$, \begin{eqnarray*} \cos^2(\theta_{t+1}) \geq \cos^2(\theta_t) & \left(1 + \tan^2(\theta_t) \frac{ 2\cos(\theta_t)\xi_t m_t + m_t^2}{ \xi_t^2 + 2\cos(\theta_t)\xi_t m_t + m_t^2 + \Delta_2 } \right) - \left( \frac{\Delta_2 \cos^2(\theta_t) + 2 \Delta_1 (m_t + \xi_t \cos(\theta_t))}{ m_t^2 + \xi_t^2 + 2 \xi_t m_t \cos(\theta_t) + \Delta_2 } \right) \end{eqnarray*} where \begin{eqnarray*} \Delta_1 &=& \frac{8 \log(4n/\delta)(\sigma_{\max} + \max_j \|\mu^j\|)}{\sqrt{n}} \\ \Delta_2 &=& \frac{128 \log^2(8n/\delta)(\sigma_{\max}^2 d + \sum_j \|\mu^j\|^2)}{n} + \frac{8 \log(n/\delta)}{\sqrt{n}}(\sigma_{\max}\|S_{t+1}\| + \max_j|\langle S_{t+1}, \mu^j\rangle|) \enspace \end{eqnarray*} \end{lemma} The main idea behind the proof of Lemma~\ref{lem:keysample} is that we can write $\cos^2(\theta_{t+1}) = \frac{\langle \hat{S}_{t+1}, \mu^1\rangle^2}{\|\mu^1\|^2 \|\hat{S}_{t+1}\|^2}$. Next, we use Lemma~\ref{lem:expr1} and the definition of $S_{t+1}$ to get an expression for $\frac{\langle S_{t+1}, \mu^1\rangle^2}{\|S_{t+1}\|^2 \|\mu^1\|^2}$, and Lemmas~\ref{lem:projconc} and~\ref{lem:normconc} to bound $\langle \hat{S}_{t+1} - S_{t+1}, \mu^1\rangle$ and $\|\hat{S}_{t+1}\|^2 - \|S_{t+1}\|^2$. Plugging in all these values gives a proof of Lemma~\ref{lem:keysample}. We also assume for the rest of the section that the number of samples $n$ is at most some polynomial in $d$, so that $\log(n) = \Theta(\log(d))$. The two main lemmas used in the proof of Lemma~\ref{lem:keysample} are Lemmas~\ref{lem:projconc} and~\ref{lem:normconc}. To state them, we recall some notation.
Recall that, at time $t$, $S_{t+1}$ denotes the quantity $\mathbf{E}[X \cdot 1_{X \in C_{t+1}}]$, and $\hat{S}_{t+1}$ denotes its empirical value over the samples drawn in round $t$. \begin{lemma}\label{lem:projconc} For any $t$, and for any vector $v$, with probability at least $1 - \delta$, \[ | \langle \hat{S}_{t+1} - S_{t+1}, v\rangle | \leq \frac{8 \log(4n/\delta)(\sigma_{\max} \|v\| + \max_j |\langle \mu^j, v\rangle|)}{\sqrt{n}} \] \end{lemma} \begin{lemma}\label{lem:normconc} For any $t$, with probability at least $1 - \delta$, \[ \|\hat{S}_{t+1}\|^2 \leq \|S_{t+1}\|^2 + \frac{128\log^2(8n/\delta)(\sigma_{\max}^2 d + \sum_j \|\mu^j\|^2)}{n} + \frac{16 \log(8n/\delta)}{\sqrt{n}} ( \sigma_{\max} \|S_{t+1}\| + \max_j |\langle S_{t+1}, \mu^j\rangle|) \] \end{lemma} The proofs of Lemmas~\ref{lem:projconc} and~\ref{lem:normconc} are in the Appendix. Applying Lemma~\ref{lem:keysample}, we can characterize the number of samples required for 2-means-iterate\ to make progress in each round for different values of $\|\mu^j\|$. Again, it is convenient to look at two separate cases, based on $\|\mu^j\|$. \begin{theorem}[Small $\mu^j$]\label{thm:finsmallmu} Let $\|\mu^j\|/\sigma^j < \sqrt{\ln \frac{9}{2 \pi}}$, for all $j$. If the number of samples drawn in round $t$ is at least $a_9 \sigma_{\max}^2\log^2(d/\delta) \left( \frac{d}{M V\sin^4(\theta_t)} + \frac{1}{M^2 \sin^4(\theta_t)\cos^2(\theta_t)} \right)$, for some fixed constant $a_{9}$, then, with probability at least $1-\delta$, $\cos^2({\theta}_{t+1}) \ge \cos^2(\theta_t)(1+ a_{10}(M/V)\sin^2(\theta_t))$, where $a_{10}$ is some fixed constant. \end{theorem} In particular, for the case of two identical Gaussians with equal mixing weights and standard deviation $1$, our result implies the following. \begin{corollary} \label{thm:finsmallmuunif} Let $\text{\boldmath$\mu$} = \|\mu^1\| = \|\mu^2\| < \sqrt{\ln \frac{9}{2 \pi}}$.
If the number of samples drawn in round $t$ is at least $a_{9} \log^2(d/\delta) \left( \frac{d}{\text{\boldmath$\mu$}^2\sin^4(\theta_t)} + \frac{1}{\text{\boldmath$\mu$}^4 \cos^2(\theta_t) \sin^4(\theta_t)} \right)$, for some fixed constant $a_{9}$, then, with probability at least $1-\delta$, $\cos^2({\theta}_{t+1}) \ge \cos^2(\theta_t)(1+ a_{10}\text{\boldmath$\mu$}^2\sin^2(\theta_t))$, where $a_{10}$ is some fixed constant. \end{corollary} In particular, when we initialize $u_0$ with a vector picked uniformly at random from a $d$-dimensional sphere, $\cos^2(\theta_0) \geq \frac{1}{d}$ with constant probability, and thus the number of samples required for success in the first round is $\tilde{\Theta}(\frac{d}{\text{\boldmath$\mu$}^4})$. This bound matches the lower bounds for learning mixtures of Gaussians in one dimension~\cite{Lbook}, as well as conjectured lower bounds from experimental work~\cite{SSR06}. The following corollary summarizes the total number of samples required to learn the mixture to some fixed precision, for two identical spherical Gaussians with variance $1$ and equal mixing weights. \begin{corollary}\label{cor:smallmu1s} Let $\text{\boldmath$\mu$} = \|\mu^1\| = \|\mu^2\| \leq \sqrt{\ln \frac{9}{2 \pi}}$. Suppose $u_0$ is chosen uniformly at random, and the number of rounds is $N \geq C_0 \cdot (\frac{\ln d}{ \ln(1 + \text{\boldmath$\mu$}^2)} + \frac{1}{\ln(1 + \epsilon)})$, where $C_0$ is the fixed constant in Corollary \ref{cor:convtime}. If the number of samples $|\mathcal{S}|$ is at least $ \frac{N \cdot a_{9} d\log^2(d)}{\text{\boldmath$\mu$}^4 \epsilon^2}$, then, with constant probability, after $N$ rounds, $\cos^2(\theta_N) \geq 1 - \epsilon$. \end{corollary} One can show a very similar corollary when $u_0$ is initialized as a random sample from the mixture. We note that the total number of samples is a factor of $N \approxeq \frac{\ln d}{\text{\boldmath$\mu$}^2}$ greater than the bound in Theorem~\ref{thm:finsmallmu}.
This is because we use a fresh set of samples in every round, in order to simplify our analysis; in practice, successive iterations of $k$-means or EM are run on the same data set. \begin{theorem}[Large $\mu^j$]\label{thm:finlargemu} Suppose that there exists some $j$ such that $\|\mu^j\|/\sigma^j \geq \sqrt{\ln\frac{9}{2 \pi}}$, and suppose that the number of samples drawn in round $t$ is at least \[ a_{11} \log^2(d/\delta) \left( \frac{d\sigma_{\max}^2}{ \rho_{\min}^2 \mu_{\min}^2\sin^4(\theta_t)} + \frac{ \sigma_{\max}^2 + \max_j \|\mu^j\|^2}{M^2 \cos^2(\theta_t) \sin^4(\theta_t)} + \frac{\sigma_{\max}^2 \max_j \|\mu^j\|^2 + \max_j \|\mu^j\|^4 }{\rho_{\min}^4 \mu_{\min}^4 \sin^4(\theta_t)} \right) \] for some constant $a_{11}$. If $|\tau_t^j| \leq \sqrt{\ln \frac{9}{2 \pi}}$ for all $j$, then, with probability at least $1 - \delta$, $\cos^2({\theta}_{t+1}) \ge \cos^2(\theta_t)(1+ a_{12} \min(1, M^2 + MV)\sin^2(\theta_t))$; otherwise, with probability at least $1 - \delta$, $\cos^2({\theta}_{t+1}) \ge \cos^2(\theta_t)(1+ a_{13} \frac{\rho_{\min}^2 \mu_{\min}^2 \tan^2(\theta_t)}{ V^2 + \rho_{\min}^2 \mu_{\min}^2} )$, where $a_{12}$ and $a_{13}$ are fixed constants. \end{theorem} For a mixture of two identical Gaussians with equal mixing weights and standard deviation $1$, our result implies: \begin{corollary}\label{cor:largemuunif} Suppose that $\text{\boldmath$\mu$} = \|\mu^1\|=\|\mu^2\| \geq \sqrt{\ln \frac{9}{2 \pi}}$, and suppose that the number of samples in round $t$ is at least $a_{11}\log^2(d/\delta) \left( \frac{d}{\text{\boldmath$\mu$}^2\sin^4(\theta_t)} + \frac{1}{\text{\boldmath$\mu$}^2 \cos^2(\theta_t) \sin^4(\theta_t)}\right)$, for some constant $a_{11}$.
If $|\tau_t^j| \leq \sqrt{\ln \frac{9}{2 \pi}}$, then, with probability at least $1 - \delta$, $\cos^2({\theta}_{t+1}) \ge \cos^2(\theta_t)(1+ a_{12} \sin^2(\theta_t))$; otherwise, with probability $1 - \delta$, $\cos^2({\theta}_{t+1}) \ge \cos^2(\theta_t)(1+ a_{13}\tan^2(\theta_t))$, where $a_{12}$ and $a_{13}$ are fixed constants. \end{corollary} Again, if we pick $u_0$ uniformly at random, we require about $\tilde{\Omega}(\frac{d}{\text{\boldmath$\mu$}^2})$ samples for the first round to succeed. When $\text{\boldmath$\mu$} > 1$, this bound is worse than $\frac{d}{\text{\boldmath$\mu$}^4}$, but it matches the upper bounds of~\cite{BCFZ07}. The following corollary shows the total number of samples required for 2-means-iterate~to converge. \begin{corollary}\label{cor:largemu1s} Let $\text{\boldmath$\mu$} \geq \sqrt{\ln \frac{9}{2 \pi}}$. Suppose $u_0$ is chosen uniformly at random and the number of rounds is $N \geq C_0 \cdot (\ln d + \frac{1}{\ln(1 + \epsilon)})$, where $C_0$ is the constant in Corollary \ref{cor:convtime}. If $|\mathcal{S}|$ is at least $\frac{2N C_0 d \log^2(d) }{\text{\boldmath$\mu$}^2 \epsilon^2}$, then, with constant probability, after $N$ rounds, $\cos^2(\theta_N) \geq 1 - \epsilon$. \end{corollary} \section{Lower Bounds} \label{sec:lowerbounds} In this section, we prove a lower bound on the sample complexity of learning mixtures of Gaussians, using Fano's Inequality~\cite{Y97,CT90}, stated in Theorem~\ref{thm:fano}. Our main theorem in this section can be summarized as follows. \begin{theorem} Suppose we are given samples from the mixture $D(\mu) = \frac{1}{2} \mathcal{N}(\mu, I_d) + \frac{1}{2}\mathcal{N}(-\mu, I_d)$, for some $\mu$, and let $\hat{\mu}$ be the estimate of $\mu$ computed from $n$ samples. If $n < \frac{C d}{\|\mu\|^2}$ for some constant $C$, and $\|\mu\| > 1$, then there exists a $\mu$ such that $\mathbf{E}_{D(\mu)} \|\mu - \hat{\mu}\| \geq C' \|\mu\|$, where $C'$ is a constant.
\label{thm:lowermain} \end{theorem} The main tools in the proof of Theorem~\ref{thm:lowermain} are the following lemmas, together with a generalized version of Fano's Inequality~\cite{CT90, Y97}. \begin{lemma} Let $\mu_1, \mu_2 \in \mathbf{R}^d$, and let $D_1$ and $D_2$ be the following mixture distributions: $D_1 = \frac{1}{2} \mathcal{N}(\mu_1, I_d) + \frac{1}{2} \mathcal{N}(-\mu_1, I_d)$, and $D_2 = \frac{1}{2} \mathcal{N}(\mu_2, I_d) + \frac{1}{2} \mathcal{N}(-\mu_2, I_d)$. Then, \[ \mathbf{KL}(D_1, D_2) \leq \frac{1}{\sqrt{2 \pi}}\cdot \left( \|\mu_2\|^2 - \|\mu_1\|^2 + \frac{3\sqrt{2 \pi}}{2} \ln 2 + 2\|\mu_1\| ( e^{-\|\mu_1\|^2/2} + \sqrt{2 \pi} \|\mu_1\| \Phi(0, \|\mu_1\|)) \right) \] \label{lem:kllower} \end{lemma} \begin{lemma} There exists a set of vectors $V = \{ v_1, \ldots, v_K\}$ in $\mathbf{R}^d$ with the following properties: (1) For each $i$ and $j$, $d(v_i, v_j) \geq \frac{1}{5}$ and $d(v_i, -v_j) \geq \frac{1}{5}$. (2) $K = e^{d/10}$. (3) For all $i$, $\|v_i\| \leq \sqrt{\frac{7}{5}} $. \label{lem:spherepacking} \end{lemma} \begin{theorem}[Fano's Inequality] Consider a class of densities $F$, which contains $r$ densities $f_1, \ldots, f_r$, corresponding to parameter values $\theta_1, \ldots, \theta_r$. Let $d(\cdot,\cdot)$ be any metric on the parameter space, and let $\hat{\theta}$ be an estimate of $\theta$ from $n$ samples from a density $f$ in $F$. If, for all $i$ and $j$, $d(\theta_i, \theta_j) \geq \alpha$ and $\mathbf{KL}(f_i, f_j) \leq \beta$, then $\max_{j} \mathbf{E}_j d(\hat{\theta}, \theta_j) \geq \frac{\alpha}{2} ( 1 - \frac{n \beta + \log 2}{\log(r - 1)})$, where $\mathbf{E}_j$ denotes the expectation with respect to distribution $j$. \label{thm:fano} \end{theorem} \begin{proof}(Of Theorem~\ref{thm:lowermain}) We apply Fano's Inequality. Our class of densities $F$ is the class of all mixtures of the form $\frac{1}{2} \mathcal{N}(\mu', I_d) + \frac{1}{2} \mathcal{N}(-\mu', I_d)$.
We set the parameter $\theta = \mu'$, and $d(\mu_1, \mu_2) = \|\mu_1 - \mu_2\|$. We construct a subclass $\mathcal{F} = \{f_1, \ldots, f_r \}$ of $F$ as follows. We set each $f_i = \frac{1}{2} \mathcal{N}(\|\mu\| v_i, I_d) + \frac{1}{2} \mathcal{N}(-\|\mu\|v_i, I_d)$, for each vector $v_i$ in the set $V$ of Lemma~\ref{lem:spherepacking}. Notice that now $r = e^{d/10}$. Moreover, for each pair $i$ and $j$, from Lemma~\ref{lem:kllower} and Lemma~\ref{lem:spherepacking}, $\mathbf{KL}(f_i, f_j) \leq C_1 \|\mu\|^2 + C_2$, for constants $C_1$ and $C_2$. Finally, from Lemma~\ref{lem:spherepacking}, for each pair $i$ and $j$, $d(\mu_i, \mu_j) \geq \frac{\|\mu\|}{5}$. The theorem now follows by an application of Fano's Inequality (Theorem~\ref{thm:fano}). \end{proof} \section{More General $k$-means}\label{sec:genkinf} In this section, we show that when we apply $2$-means to an input generated by a mixture of $k$ spherical Gaussians, the normal to the hyperplane which partitions the two clusters in the $2$-means algorithm converges to a vector in the subspace $\mathcal{M}$ containing the means of the mixture components. We assume that our input is generated by a mixture of $k$ spherical Gaussians, with means $\mu^j$, variances $(\sigma^j)^2$, $j = 1, \ldots, k$, and mixing weights $\rho^1, \ldots, \rho^k$. The mixture is centered at the origin, so that $\sum_j \rho^j \mu^j = 0$. We use $\mathcal{M}$ to denote the subspace spanned by the means $\mu^1, \ldots, \mu^k$. We run Algorithm~2-means-iterate\ on this input, and our goal is to show that it still converges to a vector in $\mathcal{M}$. \noindent{\textbf{Notation.}} In the sequel, given a vector $x$ and a subspace $W$, we define the angle between $x$ and $W$ as the angle between $x$ and the projection of $x$ onto $W$. We examine the angle $\theta_{t}$ between $u_t$ and $\mathcal{M}$, and our goal is to show that the cosine of this angle grows as $t$ increases.
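As a concrete illustration of this notion of angle (the helper below is ours, not from the paper), the cosine of the angle between a vector $x$ and a subspace can be computed from any orthonormal basis of the subspace, since the projection of $x$ has squared norm $\sum_j \langle x, b_j\rangle^2$:

```python
import math

def cos_angle_to_subspace(x, basis):
    # cos(theta) between x and span(basis), for an orthonormal basis:
    # the squared norm of the projection of x is sum_j <x, b_j>^2.
    proj_sq = sum(sum(xi * bi for xi, bi in zip(x, b)) ** 2 for b in basis)
    norm_sq = sum(xi * xi for xi in x)
    return math.sqrt(proj_sq / norm_sq)

# x at equal angles to e1, e2, e3; subspace spanned by {e1, e2}.
c = cos_angle_to_subspace((1.0, 1.0, 1.0), [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
```

Here $c = \sqrt{2/3}$, the cosine of the angle between $(1,1,1)$ and its projection $(1,1,0)$.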
Our main result of this section is Lemma~\ref{lem:kexpr1}, which exactly characterizes the behavior of 2-means-iterate\ on a mixture of $k$ spherical Gaussians. Recall that at time $t$, we use $\breve{u}_t$ to partition the input data; the norm of the projection of $\breve{u}_t$ onto $\mathcal{M}$ is $\cos(\theta_t)$ by definition. Let $b^1_t$ be a unit vector in the subspace $\mathcal{M}$ such that $\breve{u}_t = \cos(\theta_t) b^1_t + \sin(\theta_t) v_t $, where $v_t$ lies in the orthogonal complement of $\mathcal{M}$ and has norm $1$. We define a second vector $\breve{u}^{\perp}_t$ as follows: $\breve{u}^{\perp}_t = \sin(\theta_t) b^1_t - \cos(\theta_t) v_t $. We observe that $\langle \breve{u}_t, \breve{u}^{\perp}_t\rangle = 0$, $\|\breve{u}^{\perp}_t\| = 1$, and the projection of $\breve{u}^{\perp}_t$ onto $\mathcal{M}$ is $\sin(\theta_t) b^1_t$. We now extend the set $\{b^1_t\}$ to an orthonormal basis $\mathcal{B} = \{b^1_t, \ldots, b^{k-1}_t\}$ of $\mathcal{M}$. We also observe that $\{b^2_t, \ldots, b^{k-1}_t, \breve{u}_t, \breve{u}^{\perp}_t \}$ is an orthonormal basis of the subspace spanned by $\mathcal{M}$ and $v_t$, and can be extended to a basis of $\mathbf{R}^d$. For $j = 1, \ldots, k$, we define $\tau^j_t$ as follows: $\tau_t^j = \langle \mu^j, \breve{u}_t\rangle = \cos(\theta_t) \langle \mu^j, b^1_t\rangle$. Finally, we (re)define the quantity $\xi_t$, and define $m_t^l$, for $l=1, \ldots, k-1$, as \[ \xi_t = \sum_j \rho^j \sigma^j \frac{e^{-(\tau_t^j)^2/2(\sigma^j)^2}}{\sqrt{2 \pi}}, ~~~~~m_t^l = \sum_j \rho^j \Phi(-\frac{\tau_t^j}{\sigma^j},\infty) \langle \mu^j, b^l_t\rangle \] Our main lemma is stated below. The proof is in the Appendix.
\begin{lemma}\label{lem:kexpr1} At any iteration $t$ of Algorithm 2-means-iterate, \[ \cos^2(\theta_{t+1}) = \cos^2(\theta_t) \left(1 + \tan^2(\theta_t)\frac{ 2 \cos(\theta_t) \xi_t m_t^1 + \sum_l (m_t^l)^2 }{ \xi_t^2 + 2 \cos(\theta_t) \xi_t m_t^1 + \sum_l (m_t^l)^2 } \right) \] \end{lemma} \section*{Appendix} \subsection{Proof of Lemma~\ref{lem:expr1}} In this section, we prove Lemma~\ref{lem:expr1}. First, we need some additional notation. \medskip \noindent \textbf{Notation.} We define, for $j=1,2$: \begin{eqnarray*} w^j_{t+1} & = & \Pr[x \sim D_j | x \in C_{t+1}] \\ u^j_{t+1} & = & \mathbf{E}[ x | x \sim D_j, x \in C_{t+1}] \end{eqnarray*} We observe that $u_{t+1}$ can now be written as: \[ u_{t+1} =w^1_{t+1} u^1_{t+1} +w^2_{t+1} u^2_{t+1} \] Moreover, we define $Z_{t+1} = \Pr[x \in C_{t+1}]$. \medskip \noindent {\textbf{Proof of Lemma~\ref{lem:expr1}.}} We start by providing exact expressions for $w^1_{t+1}$ and $w^2_{t+1}$ with respect to the partition computed in the previous round $t$. These are used to compute the projections of $u_{t+1}$ along the vectors $\breve{u}_t$ and $\mu^1-\langle \mu^1, \breve{u}_t\rangle\breve{u}_t$, which finally leads to a proof of Lemma~\ref{lem:expr1}. \begin{lemma}\label{lem:wts12} In round $t$, for $j=1,2$, $w^j_{t+1} = \frac{ \rho^j \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty)}{Z_{t+1}}$. \end{lemma} \begin{proof} We can write: \[ w^j_{t+1} = \frac{ \Pr[x \in C_{t+1} | x \sim D_j] \Pr[x \sim D_j]}{ \Pr[x \in C_{t+1}]} \] We note that $\Pr[x \sim D_j] = \rho^j$, and $\Pr[x \in C_{t+1}] = Z_{t+1}$. As $D_j$ is a spherical Gaussian, for any $x$ generated from $D_j$ and any vector $y$ orthogonal to $\breve{u}_t$, $\langle y, x\rangle$ is distributed independently of $\langle \breve{u}_t, x\rangle$. Moreover, we observe that $\langle \breve{u}_t, x\rangle$ is distributed as a Gaussian with mean $\langle \mu^j, \breve{u}_t\rangle = \tau_t^j$ and standard deviation $\sigma^j$.
Therefore, \[ \Pr[x \in C_{t+1} | x \sim D_j] = \Pr_{x \sim D_j}[\langle \breve{u}_t, x\rangle > 0] = \Pr[N(\tau_t^j, \sigma^j) \geq 0] = \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty) \] from which the lemma follows. \end{proof} \begin{lemma}\label{lem:expr2} For any $t$, $\langle u_{t+1}, \breve{u}_t\rangle = \frac{\xi_t + m_t\cos(\theta_t)}{Z_{t+1}}$. \end{lemma} \begin{proof} Consider a sample $x$ drawn from $D_j$. Then, $\langle x, \breve{u}_t\rangle$ is distributed as a Gaussian with mean $\langle \mu^j, \breve{u}_t\rangle = \tau_t^j$ and standard deviation $\sigma^j$. We recall that $\Pr[x \in C_{t+1}] = Z_{t+1}$. Therefore, $\langle u^{j}_{t+1}, \breve{u}_t\rangle$ is equal to: \[ \frac{\mathbf{E}[\langle x, \breve{u}_t\rangle \cdot 1_{x\in C_{t+1}} | x \sim D_j]}{\Pr[x \in C_{t+1}| x \sim D_j]} = \frac{1}{\Pr[N(\tau_t^j, \sigma^j) > 0]} \cdot \int_{y=0}^{\infty}\frac{y e^{-(y - \tau_t^j)^2/2 (\sigma^j)^2}}{\sigma^j \sqrt{2 \pi}} dy \] which is, again, equal to: \begin{eqnarray*} && \frac{1}{ \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty)} \left( \tau_t^j \int_{y=0}^{\infty} \frac{e^{-(y - \tau_t^j)^2/2 (\sigma^j)^2}}{\sigma^j\sqrt{2 \pi}} dy + \int_{y=0}^{\infty} \frac{(y - \tau_t^j)e^{-(y - \tau_t^j)^2/2 (\sigma^j)^2}}{\sigma^j\sqrt{2 \pi}} dy \right)\\ & = &\frac{1}{ \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty)} \left( \tau_t^j \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty) + \int_{y=0}^{\infty} \frac{(y - \tau_t^j)e^{-(y - \tau_t^j)^2/2 (\sigma^j)^2}}{\sigma^j\sqrt{2 \pi}} dy \right) \end{eqnarray*} We can compute the integral in the equation above as follows. \begin{eqnarray*} \int_{y = 0}^{\infty} (y-\tau_t^j)e^{-(y - \tau_t^j)^2/2(\sigma^j)^2} dy = (\sigma^j)^2 \int_{z = (\tau_t^j)^2/2(\sigma^j)^2}^{\infty} e^{-z}dz = (\sigma^j)^2 e^{-(\tau_t^j)^2/2 (\sigma^j)^2} \end{eqnarray*} We can now compute $\langle u_{t+1}, \breve{u}_t\rangle$ as follows.
\begin{eqnarray*} \langle u_{t+1}, \breve{u}_t\rangle = w^1_{t+1} \langle u^1_{t+1}, \breve{u}_t\rangle + w^2_{t+1} \langle u^2_{t+1}, \breve{u}_t\rangle = \frac{1}{Z_{t+1}} \cdot \sum_j \left( \rho^j \tau_t^j \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty) + \rho^j (\sigma^j)^2 \frac{e^{-(\tau_t^j)^2/2 (\sigma^j)^2}}{\sigma^j\sqrt{2 \pi}} \right) \end{eqnarray*} The lemma follows by recalling that $\tau_t^j = \langle \mu^j, b\rangle\cos(\theta_t)$ and plugging in the values of $m_t$ and $\xi_t$. \end{proof} \begin{lemma}\label{lem:expr3} Let $\breve{v}_t$ be a unit vector along $\mu^1 - \langle \mu^1, \breve{u}_t\rangle\breve{u}_t$. Then, $\langle u_{t+1}, \breve{v}_t\rangle = \frac{m_t\sin(\theta_t)}{Z_{t+1}}$. In addition, for any vector $z$ orthogonal to $\breve{u}_t$ and $\breve{v}_t$, $\langle u_{t+1}, z\rangle = 0$. \end{lemma} \begin{proof} We observe that for a sample $x$ drawn from distribution $D_1$ (respectively, $D_2$) and any unit vector $v_1$ orthogonal to $\breve{u}_t$, $\langle x, v_1\rangle$ is distributed as a Gaussian with mean $\langle \mu^1, v_1\rangle$ (respectively, $\langle \mu^2, v_1\rangle$) and standard deviation $\sigma^1$ (respectively, $\sigma^2$). Therefore, the projection of $u_{t+1}$ on $\breve{v}_t$ can be written as: \begin{eqnarray*} \langle u_{t+1}, \breve{v}_t\rangle & = & \sum_j w^j_{t+1} \langle \mu^j, \breve{v}_t\rangle = \frac{1}{Z_{t+1}} \sum_j \rho^j \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty) \langle \mu^j, \breve{v}_t\rangle \end{eqnarray*} from which the first part of the lemma follows. The second part of the lemma follows from the observation that for any vector $z$ orthogonal to $\breve{u}_t$ and $\breve{v}_t$, $\langle \mu^j, z\rangle = 0$, for $j = 1,2$. \end{proof} \begin{lemma}\label{lem:expr4} For any $t$, \begin{eqnarray*} \langle u_{t+1}, \mu^1\rangle & = & \frac{\|\mu^1\| (\xi_t\cos(\theta_t) + m_t)}{Z_{t+1}}\\ \|u_{t+1}\|^2 & = & \frac{\xi_t^2 + m_t^2 + 2\xi_t m_t\cos(\theta_t)}{(Z_{t+1})^2} \end{eqnarray*} \end{lemma} \begin{proof} As we have an infinite number of samples, $u_{t+1}$ lies in the plane spanned by $\breve{u}_t$ and $\breve{v}_t$.
Therefore, we can write $\langle u_{t+1}, \mu^1\rangle = \langle u_{t+1}, \breve{u}_t\rangle \langle \mu^1, \breve{u}_t\rangle + \langle u_{t+1}, \breve{v}_t\rangle \langle \mu^1, \breve{v}_t\rangle$. Moreover, we can write $\|u_{t+1}\|^2 = \langle u_{t+1}, \breve{u}_t\rangle^2 + \langle u_{t+1}, \breve{v}_t\rangle^2$. Thus, both equations follow by using Lemmas \ref{lem:expr2} and \ref{lem:expr3}, and recalling that $\langle \mu^1, \breve{u}_t\rangle = \tau_t^1 = \|\mu^1\|\cos(\theta_t)$ and $\langle \mu^1, \breve{v}_t\rangle=\|\mu^1\|\sin(\theta_t)$. \end{proof} We are now ready to complete the proof of Lemma \ref{lem:expr1}. \begin{proof}(Of Lemma \ref{lem:expr1}) By definition of $\theta_{t+1}$, $\cos^2(\theta_{t+1}) = \frac{\langle u_{t+1}, \mu^1\rangle^2}{\|u_{t+1}\|^2 \|\mu^1\|^2}$. Therefore, \begin{eqnarray*} \|\mu^1\|^2 \cos^2 (\theta_{t+1}) & = & \frac{\langle u_{t+1}, \mu^1\rangle^2}{\|u_{t+1}\|^2}\\ & = & (\tau_t^1)^2 \left(1+\frac{\langle u_{t+1}, \mu^1\rangle^2-\|\mu^1\|^2\cos^2(\theta_t)\|u_{t+1}\|^2}{\|\mu^1\|^2\cos^2(\theta_t)\|u_{t+1}\|^2}\right)\\ & = & (\tau_t^1)^2 \left(1 + \frac{\|\mu^1\|^2\sin^2(\theta_t) (m_t^2+2\xi_t m_t\cos(\theta_t))}{\|\mu^1\|^2\cos^2(\theta_t)\|u_{t+1}\|^2}\right)\\ & = & \|\mu^1\|^2\cos^2(\theta_t) \left(1 + \tan^2(\theta_t)\frac{m_t^2+2\xi_t m_t\cos(\theta_t)}{\|u_{t+1}\|^2}\right) \end{eqnarray*} where we used Lemma~\ref{lem:expr4} and the observation that $\cos(\theta_t) = \frac{\tau_t^1}{\|\mu^1\|}$. The lemma follows by replacing $\|u_{t+1}\|^2$ with the expression in Lemma~\ref{lem:expr4}. \end{proof} The next lemma helps us derive Theorem~\ref{thm:smallmu} from Lemma~\ref{lem:expr1}. It shows how to approximate $\Phi(-\tau, \tau)$ when $\tau$ is small.
\begin{lemma}\label{lem:smalltau} Let $\tau \leq \sqrt{ \ln \frac{9}{2 \pi}}$. Then, $\frac{5}{3 \sqrt{2 \pi}} \tau \leq \Phi(-\tau, \tau) \leq \frac{2}{\sqrt{2 \pi}} \tau$. In addition, $\frac{2e^{-\tau^2/2}}{\sqrt{2 \pi}} \geq \frac{2}{3}$. \end{lemma} \iffalse \begin{proof} The second part of the lemma follows from the observation that $e^{-\tau^2/2}$ is a decreasing function of $\tau$ for $\tau \geq 0$, and $e^{-\tau^2/2} = \frac{2}{3}$ at $\tau = \sqrt{\ln \frac{9}{2 \pi}}$. To prove the first part of the lemma, let $f(t) = e^{-t^2/2}$. Using a Taylor series expansion, we can write: \[ f(t) = f(0) + t f'(0) + \frac{1}{2}t^2 f''(t^*) \] where $t^* \in [0,t]$, and $f'(t)$ and $f''(t)$ are the single and double derivatives of $f$ respectively. We note that $f(0) = 1$, $f'(0) = 0$, and $f''(t) \leq 0$, when $0 \leq t \leq 1$. Now, as $f''(t)\le 0$, $f'(t)$ decreases; so, as $f'(0)\le 0$, we have that $f'(t)\le 0$ as well. Therefore, for any $t$ between $0$ and $\tau \leq \sqrt{\ln \frac{9}{2 \pi}} < 1$, $f(t) \leq f(0)\leq 1$. Thus, \[ \Phi(-\tau,\tau) = \frac{2}{\sqrt{2 \pi}} \int_{t=0}^{\tau} e^{-t^2/2} dt \leq \frac{2}{\sqrt{2 \pi}} \int_{t=0}^{\tau} dt \leq \frac{2}{\sqrt{2\pi}}\tau \] from which the upper bound on $\Phi(-\tau,\tau)$ follows. For the lower bound on $\Phi(-\tau,\tau)$, we observe that for $0 \leq t \leq \tau \leq \sqrt{\ln \frac{9}{2 \pi}}$, $f''(t) \geq -1$. Therefore, \[ \Phi(-\tau,\tau) = \frac{2}{\sqrt{2 \pi}} \int_{t=0}^{\tau} e^{-t^2/2} dt \geq \frac{2}{\sqrt{2 \pi}} \int_{t=0}^{\tau} (1 - \frac{t^2}{2}) dt \geq \frac{2}{\sqrt{2 \pi}} (\tau - \frac{\tau^3}{6}) \] The lemma follows by plugging in the bound on $\tau$. 
\end{proof} \fi \iffalse Therefore, \begin{eqnarray*} u^1_{t+1} & = & (\tau_t + \mathbf{E}[X | X \sim N, X \geq -\tau_t]) \cdot \breve{u}_t + \dot{\mu}{\breve{v}_t} \breve{v}_t \\ u^2_{t+1} & = & (-\tau_t + \mathbf{E}[X | X \sim N, X \geq \tau_t]) \cdot \breve{u}_t - \dot{\mu}{\breve{v}_t} \breve{v}_t \end{eqnarray*} \fi \iffalse Notice that since $\dot{u_{t+1}}{\breve{v}_t} > 0$, the algorithm makes progress in round $t$. Let $\theta_t$ denote the angle between $\mu$ and $u_t$. Then, by definition of $\tau_t$, \[ \tan \theta_t = \frac{\sqrt{\mu^2 - \tau_t^2}}{\tau_t} \] Then, $\theta_{t+1} = \theta_t - \bar{\theta}$, where, \begin{eqnarray*} \tan \bar{\theta} = \frac{\dot{u_{t+1}}{\breve{v}_t}}{\dot{u_{t+1}}{\breve{u}_t}} = \frac{ \sqrt{2\pi} \sqrt{\mu^2 - \tau_t^2} \Pr[ -\tau_t \leq X \leq \tau_t]}{2e^{-\tau_t^2/2}} \end{eqnarray*} When $\tau_t$ is very small, $\Pr[ -\tau_t \leq X \leq \tau_t] \geq \tau_t$, and $e^{-\tau_t^2/2} \leq 1$. Therefore, \[ \tan \bar{\theta} \geq \frac{1}{2} \sqrt{2 \pi(\mu^2 - \tau_t^2)} \tau_t \] and \begin{eqnarray*} \tan \theta_{t+1} & = & \frac{\tan \theta_t - \tan \bar{\theta}}{1 + \tan \theta_t \tan \bar{\theta}} \\ & \leq & \frac{ \frac{\sqrt{\mu^2 - \tau_t^2}}{\tau_t} }{ 1 + \sqrt{2 \pi} (\mu^2 - \tau_t^2)}\\ & \leq & \frac{\sqrt{\mu^2 - \tau_t^2}}{\tau_t} \cdot \frac{1}{1 + \mu^2} \end{eqnarray*} Here, in the last step, we assume that $\tau_t$ is small compared to $\mu$. In this case, we have that: \[ \tan \theta_{t+1} \leq \frac{1}{1 + \mu^2} \cdot \tan \theta_t \] which means that the number of rounds for $\tau_t$ to become a constant times $\mu$ is $\frac{C}{\mu^2} \log d$. We observe that $\tau_0 = \Theta(\frac{1}{\sqrt{d}})$, if we start with a random vector $\breve{u}_0$, which defines a random classifying hyperplane $v_0$. \fi \subsection{Proofs of Sample Requirement Bounds} For the rest of the section, we prove Lemmas~\ref{lem:projconc} and~\ref{lem:normconc}, which lead to a proof of Lemma~\ref{lem:keysample}.
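As a quick numerical sanity check (illustrative only, not part of the argument), the bounds of Lemma~\ref{lem:smalltau} can be verified with a short script, using the identity $\Phi(-\tau, \tau) = \mathrm{erf}(\tau/\sqrt{2})$ for the standard Gaussian:

```python
import math

def gauss_mass(tau):
    # Standard Gaussian measure of the interval [-tau, tau]
    return math.erf(tau / math.sqrt(2))

tau_max = math.sqrt(math.log(9 / (2 * math.pi)))  # ~0.5995, the range covered by the lemma
for k in range(1, 101):
    tau = k * tau_max / 100
    p = gauss_mass(tau)
    # First claim: (5 tau) / (3 sqrt(2 pi)) <= Phi(-tau, tau) <= (2 tau) / sqrt(2 pi)
    assert 5 * tau / (3 * math.sqrt(2 * math.pi)) <= p <= 2 * tau / math.sqrt(2 * math.pi)
    # Second claim: 2 e^{-tau^2/2} / sqrt(2 pi) >= 2/3 on this range
    assert 2 * math.exp(-tau ** 2 / 2) / math.sqrt(2 * math.pi) >= 2 / 3 - 1e-12
```

Both inequalities hold with a visible margin on the interior of the range; the second one is tight exactly at the right endpoint $\tau = \sqrt{\ln\frac{9}{2\pi}}$.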
First, we need to define some notation. \medskip \noindent {\textbf{Notation.}} At time $t$, we use the notation $S_{t+1}$ to denote the quantity $\mathbf{E}[X \cdot 1_{X \in C_{t+1}}]$, where $1_{X \in C_{t+1}}$ is the indicator function for the event $X \in C_{t+1}$, and the expectation is taken over the entire mixture. In the sequel, we also use the notation $\hat{S}_{t+1}$ to denote the empirical value of $S_{t+1}$. Our goal is to bound the concentration of certain functions of $\hat{S}_{t+1}$ around their expected values, when we are given only $n$ samples from the mixture. Recall that we define $\theta_{t+1}$ as the angle between $\mu^1$ and the hyperplane separator in 2-means-iterate, given $\theta_t$. Notice that now $\theta_t$ is a random variable, which depends on the samples drawn in rounds $1, \ldots, t-1$, and given $\theta_t$, $\theta_{t+1}$ is a random variable, whose value depends on samples in round $t$. Also, we use $u_{t+1}$ to denote the center of partition $C_t$ in iteration $t+1$, and $\mathbf{E}[u_{t+1}]$ is the expected center. Note that all the expectations in round $t$ are conditioned on $\theta_t$. \medskip \noindent {\textbf{Proofs.}} We are now ready to prove Lemmas~\ref{lem:projconc} and~\ref{lem:normconc}. \begin{proof}(Of Lemma~\ref{lem:projconc}) Let $X_1, \ldots, X_n$ be the $n$ iid samples from the mixture; for each $i$, we can write the projection of $X_i$ along $v$ as follows: \[ \dot{X_i}{v} = Y_i + Z_i \] where $Z_i \sim N(0,\sigma^j)$, if $X_i$ is generated from distribution $D^j$, and $Y_i = \dot{\mu^j}{v}$, if $X_i$ is generated by $D^j$. Therefore, we can write: \[ \dot{\hat{S}_{t+1}}{v} = \frac{1}{n} \left( \sum_i Y_i \cdot 1_{X_i \in C_{t+1}} + \sum_i Z_i \cdot 1_{X_i \in C_{t+1}} \right) \] To determine the concentration of $\dot{\hat{S}_{t+1}}{v}$ around its expected value, we address the two terms separately.
The first term is a sum of $n$ independently distributed random variables, such that changing one variable changes the sum by at most $\max_j \frac{2 |\dot{\mu^j}{v}|}{n}$; therefore, to calculate its concentration, one can apply Hoeffding's Inequality. It follows that with probability at most $\frac{\delta}{2}$, \[ |\frac{1}{n} \sum_i Y_i \cdot 1_{X_i \in C_{t+1}} - \mathbf{E}[\frac{1}{n} \sum_i Y_i \cdot 1_{X_i \in C_{t+1}}] | > \max_j \frac{4|\dot{\mu^j}{v}| \sqrt{\log(4n/\delta)}}{\sqrt{n}} \] We note that, in the second term, each $Z_i$ is a Gaussian with mean $0$ and variance $\sigma^j$, scaled by $||v||$. For some $0 \leq \delta' \leq 1$, let $E_i(\delta')$ denote the event \[ -\sigma_{max}||v|| \sqrt{2 \log(1/\delta')} \leq Z_i \cdot 1_{X_i \in C_{t+1}} \leq \sigma_{max} ||v|| \sqrt{2 \log (1/\delta')} \] As $Z_i \sim N(0,\sigma^j)$, if $X_i$ is generated from distribution $D_j$, and $1_{X_i \in C_{t+1}}$ takes values $0$ and $1$, for any $i$, for $\delta'$ small enough, $\Pr[E_i(\delta')] \geq 1 - \delta'$. We use $\delta' = \frac{\delta}{4n}$, and condition on the fact that all the events $\{ E_i(\delta'), i=1, \ldots, n\}$ happen; using a union bound over the complements $\bar{E_i(\delta')}$, the probability that this holds is at least $1 - \frac{\delta}{4}$. We also observe that, as the Gaussians $Z_i$ are independently distributed, conditioned on the intersection of the events $E_i$, the Gaussians $Z_i$ are still independent. Therefore, conditioned on the event $\cap_i E_i(\delta')$, $\frac{1}{n} \sum_i Z_i \cdot 1_{X_i \in C_{t+1}}$ is the sum of $n$ independent random variables, such that changing one variable changes the sum by at most $\frac{2\sigma_{max}||v||\sqrt{2 \log(1/\delta')}}{n}$.
We can now apply Hoeffding's bound to conclude that with probability at least $1 - \frac{\delta}{2}$, \[| \frac{1}{n} \sum_i Z_i \cdot 1_{X_i \in C_{t+1}} - \mathbf{E}[\frac{1}{n}\sum_i Z_i \cdot 1_{X_i \in C_{t+1}}] | \leq \frac{4\sigma_{max}||v||\sqrt{2 \log(1/\delta')} \sqrt{2 \log(1/\delta)}}{\sqrt{n}} \leq \frac{8 \sigma_{max} ||v|| \log(4n/\delta)}{\sqrt{n}} \] The lemma now follows by applying a union bound. \end{proof} \begin{proof} (Of Lemma~\ref{lem:normconc}) We can write: \[ ||\hat{S}_{t+1}||^2 \leq ||S_{t+1}||^2 + ||\hat{S}_{t+1} - S_{t+1}||^2 + 2|\dot{\hat{S}_{t+1} - S_{t+1}}{S_{t+1}}| \] If $v_1, \ldots, v_d$ is any orthonormal basis of $\mathbf{R}^d$, then we can bound the second term as follows. With probability at least $1 - \frac{\delta}{2}$, \begin{eqnarray*} ||\hat{S}_{t+1} - S_{t+1} ||^2 & = & \sum_{i=1}^{d} (\dot{\hat{S}_{t+1} - S_{t+1}}{v_i})^2 \leq \ \frac{128 \log^2(8n/\delta)}{n} (\sum_i \sigma_{max}^2 ||v_i||^2 + \sum_{i,j} \dot{\mu^j}{v_i}^2) \\ & \leq & \frac{128 \log^2(8n/\delta)}{n}(\sigma_{max}^2 d + \sum_j (\mu^j)^2) \end{eqnarray*} \iffalse \begin{eqnarray*} ||\hat{S}_{t+1} - S_{t+1}||^2 = \sum_{i=1}^{d} (\dot{\hat{S}_{t+1} - S_{t+1}}{v_i})^2 \\ \ \leq \ \frac{128 \log^2(8n/\delta)}{n} (\sum_i ||v_i||^2 + \sum_i \dot{\mu}{v_i}^2) \\ \ \leq \ \frac{128 \log^2(8n/\delta)}{n}(d + \mu^2) \end{eqnarray*} \fi The second step follows by the application of Lemma \ref{lem:projconc}, and the fact that for any $a$ and $b$, $(a + b)^2 \leq 2(a^2 + b^2)$. Using Lemma \ref{lem:projconc}, with probability at least $1 - \frac{\delta}{2}$, \[ \dot{\hat{S}_{t+1} - S_{t+1}}{S_{t+1}} \leq \frac{8 \log(8n/\delta)}{\sqrt{n}}(\sigma_{max} ||S_{t+1}|| + \max_j |\dot{S_{t+1}}{\mu^j}|) \] The lemma follows by a union bound over the two events above. \end{proof} \iffalse \begin{proof} (of Lemma~\ref{lem:finsmallmu}, {\bf Small $\mu$}) We let $\Delta_1 = \dot{S_{t+1}-\hat{S}_{t+1}}{\mu}$ and $\Delta_2 = ||\hat{S}_{t+1}||^2-||S_{t+1}||^2$.
Thus, \begin{eqnarray} \frac{\dot{\hat{S}_{t+1}}{\mu}^2}{||\hat{S}_{t+1}||^2} & = & \frac{(\dot{S_{t+1}}{\mu}-\Delta_1)^2}{||S_{t+1}||^2+\Delta_2}\nonumber \\ & = & \tau_t^2\left(1+\frac{1}{\tau_t^2}\frac{(\dot{S_{t+1}}{\mu}-\Delta_1)^2-\tau_t^2||S_{t+1}||^2-\tau_t^2\Delta_2}{||S_{t+1}||^2+\Delta_2}\right) \nonumber \\ & \ge & \tau_t^2\left(1+\frac{1}{\tau_t^2}\frac{\dot{S_{t+1}}{\mu}^2 - \tau_t^2||S_{t+1}||^2-2\Delta_1\dot{S_{t+1}}{\mu}-\tau_t^2\Delta_2}{||S_{t+1}||^2+\Delta_2} \right) \label{eqn:smalldeltamain} \end{eqnarray} Using Lemma~\ref{lem:expr4}, along with the equation $S_{t+1}=\frac{1}{2}u_{t+1}$, \[ \dot{S_{t+1}}{\mu}^2 - \tau_t^2||S_{t+1}||^2 = \frac{1}{4}(\mu^2-\tau_t^2)(\mu^2\Phi^2(-\tau_t,\tau_t)+\frac{4\tau_t e^{-\tau_t^2/2}}{\sqrt{2\pi}}\Phi(-\tau_t,\tau_t)) \geq \gamma \tau_t^2(\mu^2-\tau_t^2) \] for some fixed constant $\gamma$. In the equation above, to get to the last step, we use the fact that as $\tau_t \leq \mu \leq \sqrt{\ln \frac{9}{2 \pi}}$, $e^{-\tau_t^2/2} \geq \frac{\sqrt{2 \pi}}{3}$ (from Lemma~\ref{lem:smalltau}), and $\Phi(-\tau_t, \tau_t) \geq \frac{5}{3 \sqrt{2 \pi}} \tau_t$. For the rest of the proof, we show that, if $n$, the number of samples, is large enough, then the following three inequalities are satisfied: \begin{eqnarray} 2\Delta_1\dot{S_{t+1}}{\mu} & \leq & \frac{1}{3} \gamma \tau_t^2 (\mu^2 - \tau_t^2) \label{eqn:smalldelta1}\\ \tau_t^2\Delta_2 & \leq & \frac{1}{3} \gamma \tau_t^2 (\mu^2 - \tau_t^2) \label{eqn:smalldelta2}\\ ||S_{t+1}||^2+\Delta_2 & \leq & \gamma_2 \label{eqn:smalldelta3} \end{eqnarray} where $\gamma_2$ is a fixed constant. If Equations \ref{eqn:smalldelta1},~\ref{eqn:smalldelta2} and \ref{eqn:smalldelta3} hold, the lemma can be shown by plugging these inequalities into Equation \ref{eqn:smalldeltamain}. From Lemma~\ref{lem:expr4}, and Lemma~\ref{lem:smalltau}, as $\mu$ is at most $\sqrt{\ln \frac{9}{2 \pi}}$, $||S_{t+1}||^2 \leq \gamma_3$ for some fixed constant $\gamma_3$.
Moreover, $\dot{S_{t+1}}{\mu} \leq \gamma_4 \tau_t$, for some other fixed constant $\gamma_4$. Now we bound $\Delta_1$ and $\Delta_2$ using Lemma~\ref{lem:projconc} and Lemma~\ref{lem:normconc} respectively. From Lemma~\ref{lem:projconc}, for some fixed constant $c_1$, \begin{eqnarray*} \Delta_1 & \le & \frac{c_1(\mu+\mu^2) \log(n/\delta)}{\sqrt{n}} \leq \frac{2 c_1 \mu \log(n/\delta)}{\sqrt{n}} \end{eqnarray*} The second step follows from the fact that $\mu \leq 1$. From Lemma~\ref{lem:normconc}, for some fixed constants $c_2$, $c_3$ and $c_4$, \begin{eqnarray*} \Delta_2 & \le & \frac{c_2(d +\mu^2)}{n} + \frac{c_3(||S_{t+1}|| + |\dot{S_{t+1}}{\mu}|) \log(n/\delta)}{\sqrt{n}} \leq c_4 ( \frac{d \log^2(n/\delta)}{n} + \frac{\log(n/\delta)}{\sqrt{n}}) \end{eqnarray*} For the last inequality, we used the facts that $\mu$ is less than $1$, and both $||S_{t+1}||$ and $|\dot{S_{t+1}}{\mu}|$ are upper bounded by some constant. Now, if $n > \frac{a_{15} d \log^2(d/\delta)}{\mu^2 \sin^2(\theta_t)}$, for some constant $a_{15}$, $ \Delta_2 \leq c_4 (\frac{\mu^2}{a_{15}} + \frac{\mu}{\sqrt{a_{15}}})$, from which Equation \ref{eqn:smalldelta3} follows, when $a_{15}$ is large enough. Moreover, if $n > a_{15} \left( \frac{ d \log^2(d / \delta)}{\mu^2 \sin^2(\theta_t)} + \frac{\log^2(d / \delta)}{\mu^2 \tau_t^2 \sin^4(\theta_t)} \right)$, we can write: \begin{eqnarray*} \Delta_2 & < & \frac{c_4}{\sqrt{a_{15}}}\cdot ( \mu^2 \sin^2(\theta_t) + \mu \tau_t \sin^2(\theta_t) ) \leq \frac{1}{3} \gamma (\mu^2 - \tau_t^2) \end{eqnarray*} when $a_{15}$ is large enough. Here, in the last step, we use the fact that $\mu^2 \sin^2(\theta_t) = \mu^2 - \tau_t^2$. Equation~\ref{eqn:smalldelta2} follows. Finally, if $n > \frac{a_{15} \log^2(d/\delta)}{\mu^2 \tau_t^2 \sin^2(\theta_t)}$, then, \begin{eqnarray*} \Delta_1 \leq \frac{2c_1 \mu^2 \tau_t \sin^2(\theta_t)}{\sqrt{a_{15}}} \leq \frac{1}{3} \gamma \tau_t (\mu^2 - \tau_t^2) \end{eqnarray*} provided $a_{15}$ is large enough.
In the last step, we used the fact that $\mu^2 \sin^2(\theta_t) = \mu^2 - \tau_t^2$. Equation \ref{eqn:smalldelta1} follows. The lemma follows by combining Equations~\ref{eqn:smalldelta1},~\ref{eqn:smalldelta2} and~\ref{eqn:smalldelta3}. $\Box$ \end{proof} \begin{proof} (of Lemma~\ref{lem:finlargemu}, {\bf Large $\mu$}) We let $\Delta_1 = \dot{S_{t+1}-\hat{S}_{t+1}}{\mu}$ and $\Delta_2 = ||\hat{S}_{t+1}||^2-||S_{t+1}||^2$. Thus, \begin{eqnarray} \frac{\dot{\hat{S}_{t+1}}{\mu}^2}{||\hat{S}_{t+1}||^2} & = & \frac{(\dot{S_{t+1}}{\mu}-\Delta_1)^2}{||S_{t+1}||^2+\Delta_2}\nonumber \\ & = & \tau_t^2\left(1+\frac{1}{\tau_t^2}\frac{(\dot{S_{t+1}}{\mu}-\Delta_1)^2-\tau_t^2||S_{t+1}||^2-\tau_t^2\Delta_2}{||S_{t+1}||^2+\Delta_2}\right) \nonumber \\ & \ge & \tau_t^2\left(1+\frac{1}{\tau_t^2}\frac{\dot{S_{t+1}}{\mu}^2 - \tau_t^2||S_{t+1}||^2-2\Delta_1\dot{S_{t+1}}{\mu}-\tau_t^2\Delta_2}{||S_{t+1}||^2+\Delta_2} \right) \label{eqn:largedeltamain} \end{eqnarray} We now consider two cases. \begin{enumerate} \item {\textbf{Small $\tau_t$.}} First, suppose that $\tau_t \leq \sqrt{\ln \frac{9}{2 \pi}}$. In this case, using Lemma~\ref{lem:expr4}, along with the equation $S_{t+1}=\frac{1}{2}u_{t+1}$, we can write: \[ \dot{S_{t+1}}{\mu}^2 - \tau_t^2||S_{t+1}||^2 = \frac{1}{4}(\mu^2-\tau_t^2)(\mu^2\Phi^2(-\tau_t,\tau_t)+\frac{4\tau_t e^{-\tau_t^2/2}}{\sqrt{2\pi}}\Phi(-\tau_t,\tau_t)) \geq \gamma \mu^2\tau_t^2(\mu^2-\tau_t^2) \] for some fixed constant $\gamma$. In the equation above, the last step follows from the fact that $e^{-\tau_t^2/2} \leq 1$ for any $\tau_t$, $\mu \geq \sqrt{\ln \frac{9}{2 \pi}}$, and $\Phi(-\tau_t, \tau_t) \geq \frac{5}{3 \sqrt{2 \pi}} \tau_t$, which follows from Lemma \ref{lem:smalltau}. From Lemma~\ref{lem:expr4} and Lemma~\ref{lem:smalltau}, we observe that in this case, $\dot{S_{t+1}}{\mu} \leq \gamma_1 \mu^2 \tau_t$, for some constant $\gamma_1$, and $||S_{t+1}||^2 \leq \gamma_2 (1 + \mu^2 \tau_t^2)$, for some constant $\gamma_2$.
For the rest of the proof, we show that, if $n$, the number of samples, is large enough, then the following three inequalities are satisfied: \begin{eqnarray} 2\Delta_1\dot{S_{t+1}}{\mu} & \le & \frac{1}{3} \gamma \mu^2 \tau_t^2 (\mu^2 - \tau_t^2) \label{eqn:largesdelta1}\\ \tau_t^2\Delta_2 & \le & \frac{1}{3} \gamma \mu^2 \tau_t^2 (\mu^2 - \tau_t^2) \label{eqn:largesdelta2}\\ ||S_{t+1}||^2+\Delta_2 & \leq & \gamma_3 \mu^2 \label{eqn:largesdelta3} \end{eqnarray} where $\gamma_3$ is a fixed constant. If Equations \ref{eqn:largesdelta1},~\ref{eqn:largesdelta2} and \ref{eqn:largesdelta3} hold, the lemma can be shown by plugging these inequalities into Equation \ref{eqn:largedeltamain}. To show these inequalities, we first bound the values of $\Delta_1$ and $\Delta_2$ using Lemmas~\ref{lem:projconc} and~\ref{lem:normconc}. From Lemma~\ref{lem:projconc}, \[ \Delta_1 \leq \frac{c_1(\mu + \mu^2) \log(n/\delta)}{\sqrt{n}} \leq \frac{3 c_1 \mu^2 \log(n/\delta)}{\sqrt{n}} \] where the second step follows from the fact that $\mu \geq \sqrt{\ln \frac{9}{2 \pi}}$. Moreover, from Lemma~\ref{lem:normconc}, \begin{eqnarray*} \Delta_2 \leq \frac{c_2(d +\mu^2)}{n} + \frac{c_3(||S_{t+1}|| + |\dot{S_{t+1}}{\mu}|) \log(n/\delta)}{\sqrt{n}} \leq c_4 ( \frac{d \log^2(n/\delta)}{n} + \frac{\mu^2 \tau_t \log(n/\delta)}{\sqrt{n}}) \end{eqnarray*} For the last step, we use the fact that $\mu \leq \sqrt{d}$, the bounds on $\dot{S_{t+1}}{\mu}$ and $||S_{t+1}||$, and the fact that $\mu \geq \sqrt{\ln \frac{9}{2 \pi}}$. Now, if $n > \frac{a_{17}\log^2(d/\delta)}{\tau_t^2 \sin^4(\theta_t)}$, then, \[ \Delta_1 \dot{S_{t+1}}{\mu} \leq \frac{3c_1 \gamma_1 \mu^4 \tau_t \log(n/\delta)}{\sqrt{n}} \leq \frac{3c_1 \gamma_1}{\sqrt{a_{17}}} \mu^4 \tau_t^2 \sin^2(\theta_t)\] If $a_{17}$ is large enough, Equation \ref{eqn:largesdelta1} follows, from the observation that $\mu^2 - \tau_t^2 = \mu^2 \sin^2(\theta_t)$.
In addition, if $n > \frac{a_{17} d \log^2(d/\delta)}{\mu^2 \sin^4(\theta_t)}$, then, \[ \tau_t^2 \Delta_2 \leq \tau_t^2 \cdot \frac{c_4}{\sqrt{a_{17}}}( \mu^4 \sin^4(\theta_t) + \mu^4 \sin^2(\theta_t)) \] If $a_{17}$ is large enough, Equation \ref{eqn:largesdelta2} follows, from the observation that $\mu^2 - \tau_t^2 = \mu^2 \sin^2(\theta_t)$. Moreover, if $n > \frac{a_{17} d \log^2(d/\delta)}{\mu^2}$, then, $||S_{t+1}||^2 + \Delta_2 \leq \gamma_3 \mu^2$, for some constant $\gamma_3$, which proves Equation \ref{eqn:largesdelta3}. The lemma follows by combining Equations \ref{eqn:largesdelta1}, \ref{eqn:largesdelta2} and \ref{eqn:largesdelta3}, along with Equation \ref{eqn:largedeltamain}. \item {\textbf{Large $\tau_t$.}} Next we consider the case in which $\tau_t \geq \sqrt{\ln \frac{9}{2 \pi}}$. In this case, using Lemma~\ref{lem:expr4}, along with the equation $S_{t+1}=\frac{1}{2}u_{t+1}$, we can write: \[ \dot{S_{t+1}}{\mu}^2 - \tau_t^2||S_{t+1}||^2 = \frac{1}{4}(\mu^2-\tau_t^2)(\mu^2\Phi^2(-\tau_t,\tau_t)+\frac{4\tau_t e^{-\tau_t^2/2}}{\sqrt{2\pi}}\Phi(-\tau_t,\tau_t)) \geq \gamma \mu^2(\mu^2-\tau_t^2) \] for some fixed constant $\gamma$. In the equation above, the last step follows from the fact that $e^{-\tau_t^2/2} \leq 1$ for any $\tau_t$, $\mu \geq \sqrt{\ln \frac{9}{2 \pi}}$, and for $\tau_t \geq \sqrt{\ln \frac{9}{2 \pi}}$, $\Phi(-\tau_t,\tau_t)$ is at least a constant. From Lemma~\ref{lem:expr4}, we observe that in this case, $\dot{S_{t+1}}{\mu} \leq \gamma_1 \mu^2 $, for some constant $\gamma_1$, and $||S_{t+1}||^2 \leq \gamma_2 \mu^2$, for some constant $\gamma_2$. Again, this follows because $e^{-\tau_t^2/2} \leq 1$, for any $\tau_t$, $\mu \geq \sqrt{\ln \frac{9}{2 \pi}}$, and $\Phi(-\tau_t, \tau_t)$ is at least some constant.
For the rest of the proof, we show that, if $n$, the number of samples, is large enough, then the following three inequalities are satisfied: \begin{eqnarray} 2\Delta_1\dot{S_{t+1}}{\mu} & \le & \frac{1}{3} \gamma \mu^2 (\mu^2 - \tau_t^2) \label{eqn:largeldelta1}\\ \tau_t^2\Delta_2 & \le & \frac{1}{3} \gamma \mu^2 (\mu^2 - \tau_t^2) \label{eqn:largeldelta2}\\ ||S_{t+1}||^2+\Delta_2 & \leq & \gamma_3 \mu^2 \label{eqn:largeldelta3} \end{eqnarray} where $\gamma_3$ is a fixed constant. If Equations \ref{eqn:largeldelta1},~\ref{eqn:largeldelta2} and \ref{eqn:largeldelta3} hold, the lemma can be shown by plugging these inequalities into Equation \ref{eqn:largedeltamain}. From Lemma~\ref{lem:projconc}, \[ \Delta_1 \leq \frac{c_1(\mu + \mu^2) \log(n/\delta)}{\sqrt{n}} \leq \frac{3 c_1 \mu^2 \log(n/\delta)}{\sqrt{n}} \] where the second part follows from the fact that $\mu \geq \sqrt{\ln \frac{9}{2 \pi}}$. Moreover, from Lemma~\ref{lem:normconc}, \begin{eqnarray*} \Delta_2 \leq \frac{c_2(d +\mu^2)}{n} + \frac{c_3(||S_{t+1}|| + |\dot{S_{t+1}}{\mu}|) \log(n/\delta)}{\sqrt{n}} \leq c_4 ( \frac{d \log^2(n/\delta)}{n} + \frac{\mu^2 \log(n/\delta)}{\sqrt{n}}) \end{eqnarray*} The last step follows from the bounds on $\dot{S_{t+1}}{\mu}$ and $||S_{t+1}||$, along with the assumption that $\mu \leq \sqrt{d}$. If $n > \frac{a_{19} \log^2(d/\delta)}{\sin^4(\theta_t)}$, then, \[ \Delta_1 \dot{S_{t+1}}{\mu} \leq \frac{3c_1 \gamma_1 \mu^4 \log(n/\delta)}{\sqrt{n}} \leq \frac{3 c_1 \gamma_1}{\sqrt{a_{19}}} \mu^4 \sin^2(\theta_t) \] Equation \ref{eqn:largeldelta1} follows, provided $a_{19}$ is large enough. If $n > \frac{a_{19} d \log^2(d/\delta)}{\mu^2 \sin^2(\theta_t)}$, then, $\tau_t^2 \Delta_2 \leq \frac{2 c_4}{a_{19}} \mu^2 \tau_t^2 \sin^2(\theta_t)$. Equation \ref{eqn:largeldelta2} follows, for large enough values of $a_{19}$. Moreover, if $n > \frac{a_{19} d \log^2(d/\delta)}{\mu^2}$, Equation \ref{eqn:largeldelta3} holds.
The lemma follows by combining these equations with Equation \ref{eqn:largedeltamain}. \end{enumerate} \fi \iffalse We let $\Delta_1 = \dot{S_{t+1}-\hat{S}_{t+1}}{\mu}$ and $\Delta_2 = ||\hat{S}_{t+1}||^2-||S_{t+1}||^2$. Thus, \begin{eqnarray*} \frac{\dot{\hat{S}_{t+1}}{\mu}^2}{||\hat{S}_{t+1}||^2} = \frac{(\dot{S_{t+1}}{\mu}-\Delta_1)^2}{||S_{t+1}||^2+\Delta_2} & = & \tau_t^2\left(1+\frac{1}{\tau_t^2}\frac{(\dot{S_{t+1}}{\mu}-\Delta_1)^2-\tau_t^2||S_{t+1}||^2-\tau_t^2\Delta_2}{||S_{t+1}||^2+\Delta_2}\right)\\ & \ge & \tau_t^2\left(1+\frac{1}{\tau_t^2}\frac{\dot{S_{t+1}}{\mu}^2 - \tau_t^2||S_{t+1}||^2-2\Delta_1\dot{S_{t+1}}{\mu}-\tau_t^2\Delta_2}{||S_{t+1}||^2+\Delta_2} \right) \end{eqnarray*} Now we split the analysis into two cases, depending on $\tau_t$. \begin{enumerate} \item {\bf Large $\tau$ Case.} First we consider the case when $\tau_t\ge\sqrt{\ln \frac{9}{2 \pi}}$. Using Lemma~\ref{lem:expr4}, along with the relation $S_{t+1}=\frac{1}{2}u_{t+1}$, \[ \dot{S_{t+1}}{\mu}^2 - \tau_t^2||S_{t+1}||^2 = \frac{1}{4}(\mu^2-\tau_t^2)(\mu^2\Phi^2(-\tau_t,\tau_t)+\frac{4\tau_t e^{-\tau_t^2/2}}{\sqrt{2\pi}}\Phi(-\tau_t,\tau_t)) = \Omega(\mu^2(\mu^2-\tau_t^2)) \] where for the last step we used the fact that $\Phi(-\tau_t,\tau_t)=\Theta(1)$ when $\tau_t$ is at least a constant. Therefore, in order to conclude the proof, it is enough to show that, under the hypotheses of the lemma, the following system of inequalities holds: \begin{eqnarray*} \Delta_1\dot{S_{t+1}}{\mu} & < & \frac{1}{3}(\dot{S_{t+1}}{\mu}^2 - \tau_t^2||S_{t+1}||^2)\\ \tau_t^2\Delta_2 & < & \frac{1}{3}(\dot{S_{t+1}}{\mu}^2 - \tau_t^2||S_{t+1}||^2)\\ \Delta_2 + ||S_{t+1}||^2 & = & O(\mu^2) \end{eqnarray*} We start observing that Lemma~\ref{lem:expr4} implies that $||S_{t+1}||^2=O(\mu^2)$ and $\dot{S_{t+1}}{\mu}=O(\mu^2)$. Now we bound $\Delta_1$ and $\Delta_2$ using respectively Lemma~\ref{lem:projconc} and Lemma~\ref{lem:normconc}.
We apply these Lemmas with $\delta=\Theta(1/n^c)$ for some constant $c\ge 1$, so that, with high probability, \begin{eqnarray*} \Delta_1 & \le & \frac{\mu+\mu^2}{\sqrt{n}}\Theta(\log{n}) = O(\frac{\mu^2}{\sqrt{n}}\log{n})\\ \Delta_2 & \le & \frac{d +\mu^2}{n}\Theta(\log^2{n}) + \frac{||S_{t+1}|| + |\dot{S_{t+1}}{\mu}|}{\sqrt{n}} \Theta(\log{n}) = O(\frac{d}{n}\log^2{n} + \frac{\mu^2}{\sqrt{n}}\log{n}) \end{eqnarray*} Now the system of inequalities simplifies to \begin{eqnarray*} O(\tau_t\frac{\mu^4}{\sqrt{n}}\log{n}) & < & \Omega(\mu^2(\mu^2-\tau_t^2))\\ O(\tau_t^2(\frac{d}{n}\log^2{n} + \frac{\mu^2}{\sqrt{n}}\log{n})) & <& \Omega(\mu^2(\mu^2-\tau_t^2)+\frac{\mu^2}{\tau_t^2}) \end{eqnarray*} where we merged the second and third constraints. It is not hard to verify that \[ n=\tilde\Omega(\frac{d}{\mu^2}+\frac{d\cdot\tau_t^2}{\mu^2(\mu^2-\tau_t^2)}+\frac{\mu^2}{(\mu^2-\tau_t^2)^2}) \] satisfies the system. The lemma follows by recalling that $\mu^2\sin^2(\theta_t)=\mu^2-\tau_t^2$ and $\tau_t\le \mu$. \item {\bf Small $\tau$ Case.} The proof for this case follows the same approach used for large $\tau_t$. However, in this case, \[ \dot{S_{t+1}}{\mu}^2 - \tau_t^2||S_{t+1}||^2 = \Omega(\tau_t^2\mu^2(\mu^2-\tau_t^2)), \] and we have $||S_{t+1}||^2=O(1+\tau_t^2\mu^2)$ and $\dot{S_{t+1}}{\mu}=O(\tau_t\mu^2)$. The system is identical except for the constraint involving the denominator, which, in this case, is \[ \Delta_2 + ||S_{t+1}||^2 = O(\mu^2(1+\mu^2\tau_t^2)) \] Again, similarly to the previous case, we have that w.h.p. $\Delta_1=O(\frac{\mu^2}{\sqrt{n}}\log{n})$ and $\Delta_2=O(\frac{d+\mu^2}{n}\log^2{n} +\frac{1+\tau_t\mu^2}{\sqrt{n}}\log{n})$.
Thus, we want a number $n$ of samples such that the following system is satisfied: \begin{eqnarray*} O(\tau_t\frac{\mu^4}{\sqrt{n}}\log{n}) & < & \Omega(\tau_t^2\mu^2(\mu^2-\tau_t^2))\\ O(\tau_t^2(\frac{d+\mu^2}{n}\log^2{n} +\frac{1+\tau_t\mu^2}{\sqrt{n}}\log{n})) & <&\Omega(\tau_t^2\mu^2(\mu^2-\tau_t^2))\\ O(\frac{d+\mu^2}{n}\log^2{n} +\frac{1+\tau_t\mu^2}{\sqrt{n}}\log{n}) & <&\Omega(\mu^2(1+\mu^2\tau_t^2)) \end{eqnarray*} It can be verified that it suffices to take \[ n=\tilde\Omega(\frac{\mu^4}{\tau_t^2(\mu^2-\tau_t^2)^2}+\frac{d+\mu^2}{\mu^2(\mu^2-\tau_t^2)}+\frac{d}{\mu^2}) \] Using $\mu^2\sin^2(\theta_t)=\mu^2-\tau_t^2$, and the assumption $\tau_t\ge \frac{\mu}{\sqrt{d}}$, we obtain $n=\tilde\Omega(\frac{d}{\mu^2\sin^4(\theta_t)})$ which completes the proof. \end{enumerate} $\Box$ \end{proof} \fi \iffalse \begin{proof} (of Theorem~\ref{thm:finsmallmu} and Theorem~\ref{thm:finlargemu}) We let $\Delta_1 = \dot{S_{t+1}-\hat{S}_{t+1}}{\mu}$ and $\Delta_2 = ||\hat{S}_{t+1}||^2-||S_{t+1}||^2$. Now we can write $$ \frac{\dot{\hat{S}_{t+1}}{\mu}^2}{||\hat{S}_{t+1}||^2} = \frac{(\dot{S_{t+1}}{\mu}-\Delta_1)^2}{||S_{t+1}||^2+\Delta_2} = \tau_t^2\left(1+\frac{(\dot{S_{t+1}}{\mu}-\Delta_1)^2-\tau_t^2||S_{t+1}||^2-\tau_t^2\Delta_2}{\tau_t^2(||S_{t+1}||^2+\Delta_2)}\right) $$ So we just need to prove that \begin{eqnarray} \label{ineq:finsamples} \dot{S_{t+1}}{\mu}^2-2\Delta_1\dot{S_{t+1}}{\mu}+\Delta_1^2 > \tau_t^2||S_{t+1}||^2+\tau_t^2\Delta_2 \end{eqnarray} Since $\cos^2(\theta_{t+1}) = \frac{\dot{c_{t+1}}{\mu}^2}{\mu^2||c_{t+1}||^2} = \frac{\dot{S_{t+1}}{\mu}^2}{\mu^2||S_{t+1}||^2}$, using Theorem~\ref{thm:smallmu} and Theorem~\ref{thm:largemu} we have that $\dot{S_{t+1}}{\mu}^2\ge \tau_t^2(1+\lambda_t)||S_{t+1}||^2$, where $\lambda_t$ is some positive function of $\mu$ and $\tau_t$. Thus, inequality~(\ref{ineq:finsamples}) is implied by $$ \tau_t^2\lambda_t||S_{t+1}||^2 -2\Delta_1\dot{S_{t+1}}{\mu} > \tau_t^2\Delta_2 $$ where we also dropped the (non-negative) term $\Delta_1^2$.
Now we bound $\Delta_1$ and $\Delta_2$ using respectively Lemma~\ref{lem:projconc} and Lemma~\ref{lem:normconc}. We apply these Lemmas with $\delta=\Theta(1/n^c)$ for some constant $c\ge 1$, so that, with high probability, the inequality~(\ref{ineq:finsamples}) is implied by $$ \tau_t^2\lambda_t||S_{t+1}||^2 > \dot{S_{t+1}}{\mu}\frac{\Theta(\log{n})(\mu^2+\mu)}{\sqrt{n}} + \tau_t^2 (\frac{\Theta(\log^2{n})(d + \mu^2)}{n} + \frac{\Theta(\log{n})}{\sqrt{n}} ( ||S_{t+1}|| + |\dot{S_{t+1}}{\mu}|)) $$ which is equivalent to \begin{eqnarray} \label{ineq:asymptotic} \tau_t^2||S_{t+1}||^2\left(\lambda_t-\frac{\Theta(\log{n})}{||S_{t+1}||\sqrt{n}}\right) > \dot{S_{t+1}}{\mu}\frac{\Theta(\log{n})(\tau_t^2+\mu^2+\mu)}{\sqrt{n}} + \tau_t^2 \frac{\Theta(\log^2{n})(d + \mu^2)}{n} \end{eqnarray} We recall that \begin{eqnarray*} ||S_{t+1}||^2 & = & \frac{1}{4}||c_{t+1}||^2 = \frac{1}4(\frac{2e^{-\tau_t^2}}{\pi} + \mu^2 \Phi^2(-\tau_t, \tau_t) + \frac{4 e^{-\tau_t^2/2}}{\sqrt{2\pi}} \tau_t \Phi(-\tau_t, \tau_t))\\ \dot{S_{t+1}}{\mu} & = & \frac{1}{2}\dot{c_{t+1}}{\mu} = \frac{1}2(\frac{2\tau_t e^{-\tau_t^2/2}}{\sqrt{2 \pi}} + \mu^2 \Phi(-\tau_t, \tau_t)) \end{eqnarray*} We now split the analysis into two cases. \begin{enumerate} \item {\bf Small $\mu$ Case} (Theorem~\ref{thm:finsmallmu}). Since $\mu\leq M_{L} < \sqrt{\ln \frac{9}{2 \pi}}$, we observe that $||S_{t+1}||^2=\Omega(1)$ and $\dot{S_{t+1}}{\mu}=O(\tau_t)$. Thus, imposing $n/\log^2{n}>\Theta(d/\lambda_t^2)$, the inequality~(\ref{ineq:asymptotic}) is implied by $$ \Omega(\tau_t^2\lambda_t) > O\left(\frac{\tau_t\mu\lambda_t}{\sqrt{d}} + \tau_t^2 \lambda_t^2\right) $$ Conditioning on the fact that $\tau_t\ge\mu/\sqrt{d}$, this inequality is true for suitable constants. Theorem~\ref{thm:finsmallmu} follows by observing that $\lambda_t=\Theta(\mu^2-\tau_t^2)$ by Theorem~\ref{thm:smallmu}. \item {\bf Large $\mu$ Case} (Theorem~\ref{thm:finlargemu}). We consider two different subcases. First we assume $\tau_t\le \sqrt{\ln \frac{9}{2 \pi}}$.
Under this assumption $\Phi(-\tau_t,\tau_t)=\Theta(\tau_t)$ by Lemma~\ref{lem:smalltau}, so $\dot{S_{t+1}}{\mu}=O(\tau_t\mu^2)$. Also, again because $\tau_t$ is small, $||S_{t+1}||^2=\Omega(1+\tau_t^2\mu^2)$. Thus, inequality~(\ref{ineq:asymptotic}) is implied by $$ \Omega(\tau_t^2(1+\tau_t^2\mu^2))\left(\lambda_t-\frac{\Theta(\log{n})}{\sqrt{n}}\right) > O\left(\frac{\tau_t\mu^4}{\sqrt{n}} +\frac{d\tau_t^2}{n}\right) $$ \end{enumerate} $\Box$ \end{proof} \fi \iffalse \begin{lemma} When $\tau_t$ is very small, for some constant $c$, \[ \cos \theta_{t+1} \geq (1 + c\mu^2) \cos \theta_t \] \end{lemma} \begin{proof} We observe that $\cos \theta_{t} = \frac{\tau_t}{\mu}$, and $\cos \theta_{t+1} = \frac{ \dot{\mu}{c_{t+1}} }{\mu ||c_{t+1}||}$. From the previous section, we can write: \[ \dot{\mu}{c_{t+1}} = \dot{\mu}{u} \dot{c_{t+1}}{u} + \dot{\mu}{v} \dot{c_{t+1}}{v} = \tau_t \cdot \frac{2e^{-\tau_t^2/2}}{\sqrt{2\pi}} + \mu^2 \cdot \Pr[-\tau_t \leq X \leq \tau_t] \] which, for small $\tau_t$, is going to be at least \[ \dot{\mu}{c_{t+1}} \geq \tau_t \cdot \frac{2e^{-\tau_t^2/2}}{\sqrt{2\pi}} + \mu^2 \tau_t = \frac{2e^{-\tau_t^2/2}}{\sqrt{2 \pi}} \cdot (1 + \mu^2) \tau_t \] On the other hand, \begin{eqnarray*} ||c_{t+1}||^2 & = & \dot{c_{t+1}}{u}^2 + \dot{c_{t+1}}{v}^2 \\ & = & (\tau_t \Pr[-\tau_t \leq X \leq \tau_t] + \frac{2 e^{-\tau_t^2/2}}{\sqrt{2\pi}})^2 + (\mu^2 - \tau_t^2) (\Pr[-\tau_t \leq X \leq \tau_t])^2 \\ & = & \frac{2e^{-\tau_t^2}}{\pi}+ \mu^2 \Theta(\tau_t^2) + \frac{4 \Theta(\tau_t^2) e^{-\tau_t^2/2}}{\sqrt{2\pi}} \\ & \leq & \frac{2e^{-\tau_t^2}}{\pi} \cdot (1 + \Theta(\tau_t^2(1 + \mu^2))) \end{eqnarray*} Therefore, \[ \cos \theta_{t+1} \geq (1 + \Theta(\mu^2)) \cdot \cos \theta_t \] \end{proof} For the rest of this section, we bound each term in the calculation of $\cos \theta_t$ for all $t$, when the calculation is performed with $s$ samples. We use $\hc_{t+1}$ to denote the empirical average of the samples in $C_{t+1}$.
\begin{lemma} For any unit vector $v$, \[ \Pr[ | \dot{\hc_{t+1}}{v} - \dot{c_{t+1}}{v}| \geq \frac{t}{\sqrt s}] \leq e^{-t^2/2} \] \end{lemma} \begin{proof}(Sketch:) Follows from the Gaussian concentration of measure theorem, and the fact that $\dot{\hc_{t+1}}{v}$ is $1$-Lipschitz. \end{proof} \begin{lemma} With probability at least $1 - \delta$, \[ ||\hc_{t+1}||^2 \leq (1 + \frac{1}{\sqrt s}) ||c_{t+1}||^2 + \frac{d \log \frac{d}{\delta}}{s} \] \end{lemma} \begin{proof}(Sketch:) Let $b_1, \ldots, b_d$ be any orthonormal basis of $\mathbf{R}^d$. Then, \begin{eqnarray*} ||\hc_{t+1}||^2 & = & \sum_{i=1}^{d} \dot{\hc_{t+1}}{b_i}^2 \\ & = & \sum_{i=1}^{d} ( \dot{c_{t+1}}{b_i} + \dot{\hc_{t+1} - c_{t+1}}{b_i})^2 \\ & = & \sum_i \dot{c_{t+1}}{b_i}^2 + \sum_i \dot{\hc_{t+1} - c_{t+1}}{b_i}^2 + 2 \sum_i \dot{c_{t+1}}{b_i} \dot{\hc_{t+1} - c_{t+1}}{b_i}\\ & \leq & ||c_{t+1}||^2 + \frac{d \log(d/\delta)}{s} + 2 \sum_i \dot{c_{t+1}}{b_i} \dot{\hc_{t+1} - c_{t+1}}{b_i} \\ \end{eqnarray*} The last step holds with probability at least $1 - \delta$. Again, using a Gaussian concentration of measure inequality, the third term can be bounded as $2 \frac{||c_{t+1}||^2}{\sqrt{s}}$. \end{proof} To ensure that the denominator does not increase by too much, we need $\frac{d}{s} < \epsilon$, which means that $s > \frac{d}{\epsilon^2}$. In addition, to ensure that the numerator does not decrease by too much, we need $\tau_t \mu^2 > \frac{1}{\sqrt{s}}$, which means that $s > \frac{1}{\tau_t^2 \mu^4}$. Notice that in the beginning, $\tau_0 = \frac{1}{\sqrt{d}}$, therefore, we require $s > \frac{d}{\mu^4}$ for our algorithm to work. \fi \subsection{Proofs of Lower Bounds} \begin{proof}(Of Lemma~\ref{lem:kllower}) Let $P$ be the plane containing the origin $O$ and the vectors $\mmu_1$ and $\mmu_2$.
If $v$ is a vector orthogonal to $P$, then the projection of $D_1$ along $v$ is a Gaussian $\mathcal{N}(0,1)$, which is distributed independently of the projection of $D_1$ along $P$ (and the same is the case for $D_2$). Therefore, to compute the KL-Divergence of $D_1$ and $D_2$, it is sufficient to compute the KL-Divergence of the projections of $D_1$ and $D_2$ along the plane $P$. Let $x$ be a vector in $P$. Then, \begin{eqnarray*} \mathbf{KL}(D_1, D_2) & = & \frac{1}{\sqrt{2 \pi}}\int_{x \in P} (\frac{1}{2}e^{-||x - \mmu_1||^2/2} + \frac{1}{2} e^{-||x + \mmu_1||^2/2}) \ln \left(\frac{ \frac{1}{2}e^{-||x - \mmu_1||^2/2} + \frac{1}{2} e^{-||x + \mmu_1||^2/2}}{ \frac{1}{2}e^{-||x - \mmu_2||^2/2} + \frac{1}{2} e^{-||x + \mmu_2||^2/2}} \right) dx \\ & = & \frac{1}{\sqrt{2 \pi}}\int_{x \in P} (\frac{1}{2}e^{-||x - \mmu_1||^2/2} + \frac{1}{2} e^{-||x + \mmu_1||^2/2}) \ln \left( \frac{ e^{-||x + \mmu_1||^2/2} \cdot (1 + e^{2 \dot{x}{\mmu_1}})}{ e^{-||x + \mmu_2||^2/2} \cdot (1 + e^{2 \dot{x}{\mmu_2}})} \right) dx \\ & = & \frac{1}{\sqrt{2 \pi}}\int_{x \in P} (\frac{1}{2}e^{-||x - \mmu_1||^2/2} + \frac{1}{2} e^{-||x + \mmu_1||^2/2})\left( ( ||x + \mmu_2||^2 - ||x + \mmu_1||^2) + \ln \frac{1 + e^{2 \dot{x}{\mmu_1}}}{1 + e^{2 \dot{x}{\mmu_2}}} \right)dx \end{eqnarray*} We observe that for any $x$, $||x + \mmu_2||^2 - ||x + \mmu_1||^2 = ||\mmu_2||^2 - ||\mmu_1||^2 + 2\dot{x}{\mmu_2 - \mmu_1}$. As the expected value of $D_1$ is $0$, we can write that: \begin{equation} \int_{x \in P} (\frac{1}{2}e^{-||x - \mmu_1||^2/2} + \frac{1}{2} e^{-||x + \mmu_1||^2/2}) \dot{x}{\mmu_2 - \mmu_1} = \mathbf{E}_{x \sim D_1} \dot{x}{\mmu_2 - \mmu_1} = 0 \label{eqn:lowerterm1} \end{equation} We now focus on the case where $||\mmu_1|| \gg 1$. We observe that for any $\mmu_2$ and any $x$, $1 + e^{2 \dot{x}{\mmu_2}} > 1$.
Therefore, combining the previous two equations, \[ \mathbf{KL}(D_1, D_2) \leq \frac{1}{\sqrt{2 \pi}}\left( ||\mu_2||^2 - ||\mu_1||^2 + \int_{x \in P} (\frac{1}{2}e^{-||x - \mu_1||^2/2} + \frac{1}{2} e^{-||x + \mu_1||^2/2}) \ln (1 + e^{2 \dot{x}{ \mu_1}}) dx \right)\] Again, since the projection of $D_1$ perpendicular to $\mu_1$ is distributed independently of the projection of $D_1$ along $\mu_1$, the above integral can be taken over a one-dimensional $x$ which varies along the vector $\mu_1$. For the rest of the proof, we abuse notation and use $\mu_1$ to denote both the vector $\mu_1$ and the scalar $||\mu_1||$. We can write: \begin{eqnarray*} && \int_{x=-\infty}^{\infty} (\frac{1}{2}e^{-(x - \mu_1)^2/2} + \frac{1}{2} e^{-(x + \mu_1)^2/2}) \ln(1 + e^{2 \mu_1 x}) dx \\ & \leq & \sqrt{2 \pi} \ln 2 + \int_{x = 0}^{\infty} (\frac{1}{2}e^{-(x - \mu_1)^2/2} + \frac{1}{2} e^{-(x + \mu_1)^2/2}) \ln(1 + e^{2 \mu_1 x}) dx \\ & \leq & \sqrt{2 \pi} \ln 2 + \int_{x = 0}^{\infty} (\frac{1}{2}e^{-(x - \mu_1)^2/2} + \frac{1}{2} e^{-(x + \mu_1)^2/2}) (\ln 2 + 2 x \mu_1) dx \\ & \leq & \frac{3 \sqrt{2 \pi}}{2} \ln 2 + 2 \mu_1 \int_{x = 0}^{\infty} (\frac{1}{2}e^{-(x - \mu_1)^2/2} + \frac{1}{2} e^{-(x + \mu_1)^2/2}) x dx \end{eqnarray*} The first part follows because for $x < 0$, $\ln (1 + e^{2 x \mu_1}) \leq \ln 2$. The second part follows because for $x > 0$, $\ln (1 + e^{2 x \mu_1}) \leq \ln (2 e^{2 x \mu_1})$. The third part follows from the symmetry of $D_1$ around the origin. Now, for any $a$, we can write: \begin{eqnarray*} \frac{1}{\sqrt{2 \pi}}\int_{x = 0}^{\infty} xe^{-(x+a)^2/2} dx = \frac{1}{\sqrt{2 \pi}} \cdot e^{-a^2/2} - a \Phi(a, \infty) \end{eqnarray*} Plugging this in, we can show that \[ \mathbf{KL}(D_1, D_2) \leq \frac{1}{\sqrt{2 \pi}}\left( ||\mu_2||^2 - ||\mu_1||^2 + \frac{3\sqrt{2 \pi}}{2} \ln 2 + 2||\mu_1|| ( e^{-||\mu_1||^2/2} + \sqrt{2 \pi} ||\mu_1|| \Phi(0, ||\mu_1||)) \right) \] from which the lemma follows.
\end{proof} \begin{proof}(Of Lemma \ref{lem:spherepacking}) For each $i$, let each $v_i$ be drawn independently from the distribution $\frac{1}{\sqrt{d}}\mathcal{N}(0, I_d)$. For each $i, j$, let $P_{ij} =\frac{d}{2} \cdot d(v_i, v_j)$ and $N_{ij} = \frac{d}{2} \cdot d(v_i, -v_j)$. Then, for each $i$ and $j$, $P_{ij}$ and $N_{ij}$ are distributed according to the Chi-squared distribution with parameter $d$. From Lemma~\ref{lem:chisq}, it follows that $\Pr[ P_{ij} < \frac{d}{10}] \leq e^{-3d/10}$, and the same bound holds for the random variables $N_{ij}$. Applying the Union Bound, the probability that $P_{ij} < \frac{d}{10}$ or $N_{ij} < \frac{d}{10}$ for some pair $(i,j), i \in V, j \in V$ is at most $2K^2 e^{-3d/10}$. This probability is at most $\frac{1}{2}$ when $K = e^{d/10}$. In addition, we observe that for each vector $v_i$, $d \cdot ||v_i||^2$ is also distributed as a Chi-squared distribution with parameter $d$. From Lemma~\ref{lem:chisq}, for each $i$, $\Pr[ ||v_i||^2 > 7/5] \leq e^{-2d/15}$. The second part of the lemma now follows by a Union Bound over all $K$ vectors in the set $V$. \end{proof} \begin{lemma} Let $X$ be a random variable, drawn from the Chi-squared distribution with parameter $d$. Then, \[ \Pr[X < \frac{d}{10}] \leq e^{-3d/10} \] Moreover, \[ \Pr[X > \frac{7d}{5}] \leq e^{-2d/15} \] \label{lem:chisq} \end{lemma} \begin{proof} Let $Y$ be the random variable defined by $Y = d - X$. Then, \[ \Pr[ X < \frac{d}{10}] = \Pr[ Y > \frac{9d}{10}] = \Pr[ e^{tY} > e^{9dt/10}] \leq \frac{{\mathbf{E}}[e^{tY}]}{e^{9dt/10}} \] where the last step uses Markov's Inequality. We observe that ${\mathbf{E}}[e^{tY}] = e^{td}{\mathbf{E}}[e^{-tX}] = e^{td}(1 + 2t)^{-d/2}$, for all $t > 0$. The first part of the lemma follows by plugging in $t = 1$, since $e^{d} \cdot 3^{-d/2} \cdot e^{-9d/10} = e^{d(1/10 - (\ln 3)/2)} \leq e^{-3d/10}$.
For the second part, we again observe that \[ \Pr[ X > \frac{7d}{5}] \leq (1 - 2t)^{-d/2} e^{-7dt/5} \leq e^{-2dt/5} \] The lemma now follows by plugging in $t = \frac{1}{3}$. \end{proof} \subsection{More General $k$-means : Results and Proofs}\label{sec:genkinfappendix} In this section, we show that when we apply $2$-means to an input generated by a mixture of $k$ spherical Gaussians, the normal to the hyperplane which partitions the two clusters in the $2$-means algorithm converges to a vector in the subspace ${\mathcal{M}}$ containing the means of the mixture components. This subspace is interesting because, in this subspace, the distance between the means is as high as in the original space; however, if the number of clusters is small compared to the dimension, the distance between two samples from the same cluster is much smaller. In fact, several algorithms for learning mixture models~\cite{VW02, AM05, CR08} attempt to isolate this subspace first, and then use some simple clustering methods in this subspace. \subsubsection{The Setting} We assume that our input is generated by a mixture of $k$ spherical Gaussians, with means $\mu^j$, variances $(\sigma^j)^2$, $j = 1, \ldots, k$, and mixing weights $\rho^1, \ldots, \rho^k$. The mixture is centered at the origin, so that $\sum_j \rho^j \mu^j = 0$. We use ${\mathcal{M}}$ to denote the subspace containing the means $\mu^1, \ldots, \mu^k$. We use Algorithm~2-means-iterate\ on this input, and our goal is to show that it still converges to a vector in ${\mathcal{M}}$. \iffalse We also provide rates of convergence for Algorithm~2-means-iterate. To characterize this convergence rate, we need an assumption, which we call the {\em rank condition}. We use $\mu$ to denote the following quantity: \[ \mu = \sqrt{\sum_j \rho^j ||\mu^j||^2} \] We note that, if we assigned weights $\rho^j$ to point masses $\mu^j$, then $\mu^2$ is the Frobenius norm of the covariance matrix of these point masses.
Our assumption is thus a statement about the projection of this matrix along any direction, and can be interpreted as a condition on the dimension of the space spanned by the weighted mean vectors. \begin{assumption}(Rank Condition) Let $M$ be the matrix $\sum_j \rho^j \mu^j (\mu^j)^{\top}$. Then, the minimum non-zero eigenvalue of $M$ is at least $\lambda_{min}^2 \mu^2$. \end{assumption} Moreover, we let $\lambda_{max}^2 = 1 - (k-1) \lambda_{min}^2$; this is an upper bound on the maximum eigenvalue of the matrix $M$. \fi In the sequel, given a vector $x$ and a subspace $W$, we define the angle between $x$ and $W$ as the angle between $x$ and the projection of $x$ onto $W$. As in Sections 2 and 3, we examine the angle $\theta_{t}$ between $u_t$ and ${\mathcal{M}}$, and our goal is to show that the cosine of this angle grows as $t$ increases. Our main result of this section is Lemma~\ref{lem:kexpr1}, which, analogous to Lemma \ref{lem:expr1} in Section \ref{sec:k2infsamples}, exactly characterizes the behavior of $2$-means on a mixture of $k$ spherical Gaussians. Before we can prove the lemma, we need some additional notation. \subsubsection{Notation} Recall that at time $t$, we use $\breve{u}_t$ to partition the input data, and the projection of $\breve{u}_t$ along ${\mathcal{M}}$ has length $\cos(\theta_t)$ by definition. Let $b^1_t$ be a unit vector lying in the subspace ${\mathcal{M}}$ such that: \[ \breve{u}_t = \cos(\theta_t) b^1_t + \sin(\theta_t) v_t \] where $v_t$ lies in the orthogonal complement of ${\mathcal{M}}$, and has norm $1$. We define a second vector $\breve{u}p_t$ as follows: \[ \breve{u}p_t = \sin(\theta_t) b^1_t - \cos(\theta_t) v_t \] We observe that $\dot{\breve{u}_t}{\breve{u}p_t} = 0$, $||\breve{u}p_t|| = 1$, and the projection of $\breve{u}p_t$ on ${\mathcal{M}}$ is $\sin(\theta_t) b^1_t$. We now extend the set $\{b^1_t\}$ to complete an orthonormal basis $\mathcal{B} = \{b^1_t, \ldots, b^{k-1}_t\}$ of ${\mathcal{M}}$.
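As a quick numerical sanity check (this snippet is illustrative and not part of the original text), the stated properties of $\breve{u}_t$ and $\breve{u}p_t$ can be verified in a small hypothetical example where ${\mathcal{M}}$ is spanned by $b^1_t$ alone and $\theta_t$ is an arbitrary angle:

```python
import math

# Hypothetical 3-d example (invented for illustration): b1 spans M and
# v is a unit vector orthogonal to M; theta is an arbitrary angle.
theta = 0.7
b1 = [1.0, 0.0, 0.0]
v = [0.0, 0.0, 1.0]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# u plays the role of breve{u}_t, and up the role of breve{u}p_t.
u = [math.cos(theta) * x + math.sin(theta) * y for x, y in zip(b1, v)]
up = [math.sin(theta) * x - math.cos(theta) * y for x, y in zip(b1, v)]

assert abs(dot(u, up)) < 1e-12                     # <u, up> = 0
assert abs(dot(up, up) - 1.0) < 1e-12              # ||up|| = 1
assert abs(dot(up, b1) - math.sin(theta)) < 1e-12  # projection on M has length sin(theta)
```

The three assertions mirror the three claims made above: orthogonality of the two vectors, unit norm of $\breve{u}p_t$, and a projection of length $\sin(\theta_t)$ onto ${\mathcal{M}}$.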
We also observe that $\{b^2_t, \ldots, b^{k-1}_t, \breve{u}_t, \breve{u}p_t \}$ is an orthonormal basis of the subspace spanned by ${\mathcal{M}}$ together with $v_t$, and can be extended to a basis of ${\mathbf{R}}^d$. For $j = 1, \ldots, k$, we define $\tau_t^j$ as follows: \[ \tau_t^j = \dot{\mu^j}{\breve{u}_t} = \cos(\theta_t) \dot{\mu^j}{b^1_t} \] Finally, we (re)-define the quantity $\xi_t$ as \[ \xi_t = \sum_j \rho^j \sigma^j \frac{e^{-(\tau_t^j)^2/2(\sigma^j)^2}}{\sqrt{2 \pi}} \] and, for any $l = 1, \ldots, k-1$, we define: \[ m_t^l = \sum_j \rho^j \Phi(-\frac{\tau_t^j}{\sigma^j},\infty) \dot{\mu^j}{b^l_t} \] \subsubsection{Our Guarantees} Now our main Lemma can be stated as follows. \begin{lemma}\label{lem:kexpr1} At any iteration $t$ of Algorithm 2-means-iterate, \[ \cos^2(\theta_{t+1}) = \cos^2(\theta_t) \left(1 + \tan^2(\theta_t)\frac{ 2 \cos(\theta_t) \xi_t m_t^1 + \sum_l (m_t^l)^2 }{ \xi_t^2 + 2 \cos(\theta_t) \xi_t m_t^1 + \sum_l (m_t^l)^2 } \right) \] \end{lemma} \iffalse As in Section~\ref{sec:k2infsamples}, we can characterize the convergence rate of our algorithm for different values of the means: it is convenient to consider the case when all the $\mu^j$ are small, and the case when at least one of them is large. \medskip\noindent{\bf All small $\mu^j$.} First, we study the case when all the $\mu^j$ are small -- namely, for all $j$, $\mu^j \leq \sqrt{ \ln \frac{9}{2 \pi}}$. In this case, the rate of convergence of Algorithm 2-means-iterate\ can be summarized by the following theorem. \begin{theorem}\label{thm:smallmuk} Suppose $\mu^j \leq \sqrt{\ln \frac{9}{2 \pi}}$.
Then, there exist fixed constants $a_{14}$, $a_{15}$, such that: \begin{eqnarray*} \cos^2(\theta_{t+1}) & \geq & \cos^2(\theta_t) (1 + a_{14} \lambda_{min}^2 \mu^2 \sin^2(\theta_t)) \\ \cos^2(\theta_{t+1}) & \leq & \cos^2(\theta_t) (1 + a_{15} \sin^2(\theta_t) ( \lambda_{min}^2 \mu^2 + \mu^4 )) \end{eqnarray*} \end{theorem} \begin{corollary}\label{cor:genksmallmu} Suppose $\mu^j \leq \sqrt{\ln \frac{9}{2 \pi}}$. If $\theta_0$ is the initial angle between $\mu$ and $u_0$, then $\cos^2(\theta_N) \geq 1 - \epsilon$ after \[ N = C_1 \cdot \left( \frac{\ln(\frac{1}{2 \cos^2(\theta_0)})}{ \ln(1 + \lambda_{min}^2\mu^2)} + \frac{1}{\ln(1 + \lambda_{min}^2\mu^2 \epsilon)}\right) \] iterations, where $C_1$ is a fixed constant. \end{corollary} \medskip\noindent{\bf Some large $\mu^j$.} Next, we study the case when there exists at least one $j$ such that $\mu^j > \sqrt{ \ln \frac{9}{2 \pi}}$. In this case, we can characterize the convergence rate of Algorithm 1 as follows. \begin{theorem}\label{thm:largemuk} Suppose there exists some $j$ such that $\mu^j \geq \sqrt{\ln \frac{9}{2 \pi}}$.
Then, if $\tau_t^j \leq \sqrt{ \ln \frac{9}{2 \pi}}$ for all $j$, there exist constants $a_{16}$, $a_{17}$, $a_{18}$ and $a_{19}$ such that: \[ \begin{split} \cos^2(\theta_{t+1}) & \geq \\ \cos^2(\theta_t) & \left(1 + \frac{a_{16} \sin^2(\theta_t) (\lambda_{min}^2 \mu^2 + \lambda_{min}^4 \mu^4) }{a_{17} + \cos^2(\theta_t) (\lambda_{min}^2 \mu^2 + \lambda_{min}^4 \mu^4)}\right) \end{split} \] and, \[ \begin{split} \cos^2(\theta_{t+1}) & \leq \\ \cos^2(\theta_t) & \left(1 + \frac{a_{18} \sin^2(\theta_t) (\lambda_{max}^2 \mu^2 + \mu^4)}{a_{19} + \cos^2(\theta_t) ( \lambda_{max}^2 \mu^2 + \lambda_{max}^4 \mu^4)}\right) \end{split} \] On the other hand, if there exists a $j$ such that $\tau_t^j > \sqrt{\ln \frac{9}{2 \pi}}$, then there exist constants $a_{20}$ and $a_{21}$ such that: \[ \begin{split} \cos^2(\theta_{t+1}) &\geq \cos^2(\theta_t) (1 + \tan^2(\theta_t)\frac{ \rho_{min}^2 \lambda_{min}^2 \mu^2} {a_{20} + \rho_{min}^2 \lambda_{min}^2 \mu^2})\\ \cos^2(\theta_{t+1}) &\leq \cos^2(\theta_t) (1 + a_{21} \tan^2(\theta_t)) \end{split} \] \end{theorem} \begin{corollary} Suppose there exists some $j$ such that $\mu^j \geq \sqrt{\ln \frac{9}{2 \pi}}$. If $\theta_0$ is the initial angle between $\mu$ and $u_0$, then $\cos^2(\theta_N) \geq 1 - \epsilon$ after \[ N = C_2 \cdot \left( 1+\frac{1}{\lambda_{min}^4 \mu^4+ \rho_{min}^2 \lambda_{min}^2 \mu^2}\right) \left(\ln\frac{1}{2 \cos^2(\theta_0)}+\frac{1}{\epsilon}\right) \] iterations, where $C_2$ is a fixed constant. \end{corollary} \fi \subsubsection{Proof of Lemma \ref{lem:kexpr1}} The main idea behind the proof of Lemma \ref{lem:kexpr1} is to estimate the norm and the projection of $u_{t+1}$; we do this in three steps. First, we estimate the projection of $u_{t+1}$ along $\breve{u}_t$; next, we estimate this projection on $\breve{u}p_t$; and finally, we estimate its projection along $b^2_t, \ldots, b^{k-1}_t$.
Combining these projections, and observing that the projection of $u_{t+1}$ on any direction perpendicular to these is $0$, we can prove the lemma. As before, we define \[ Z_{t+1} = \Pr[ x \in C_{t+1}] \] Now we make the following claim. \begin{lemma}\label{lem:wts} For any $t$ and any $j$, \[ \Pr[x \sim D_j | x \in C_{t+1}] = \frac{\rho^j}{Z_{t+1}} \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty) \] \end{lemma} \begin{proof} Same as the proof of Lemma~\ref{lem:wts12}. \end{proof} Next, we estimate the projection of $u_{t+1}$ along $\breve{u}_t$. \begin{lemma}\label{lem:proj1} \[ \dot{u_{t+1}}{\breve{u}_t} = \frac{ \xi_t + \cos(\theta_t) m_t^1 }{Z_{t+1}} \] \end{lemma} \begin{proof} Consider a sample $x$ drawn from distribution $D_j$. The projection of $x$ on $\breve{u}_t$ is distributed as a Gaussian with mean $\tau_t^j$ and standard deviation $\sigma^j$. The probability that $x$ lies in $C_{t+1}$ is $\Pr[\mathcal{N}(\tau_t^j,\sigma^j)>0] = \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty)$. Given that $x$ lies in $C_{t+1}$, the projection of $x$ on $\breve{u}_t$ is distributed as a truncated Gaussian, with mean $\tau_t^j$ and standard deviation $\sigma^j$, truncated at $0$.
Therefore, \[ {\mathbf{E}}[\dot{x}{\breve{u}_t} | x \in C_{t+1}, x \sim D_j] = \frac{1}{\Phi(-\frac{\tau_t^j}{\sigma^j}, \infty)} \left( \int_{y=0}^{\infty} \frac{y e^{-(y - \tau_t^j)^2/2 (\sigma^j)^2}}{\sigma^j\sqrt{2 \pi}} dy \right) \] which is again equal to \begin{eqnarray*} && \frac{1}{ \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty)} \left( \tau_t^j \int_{y=0}^{\infty} \frac{e^{-(y - \tau_t^j)^2/2 (\sigma^j)^2}}{\sigma^j\sqrt{2 \pi}} dy + \int_{y=0}^{\infty} \frac{(y - \tau_t^j)e^{-(y - \tau_t^j)^2/2 (\sigma^j)^2}}{\sigma^j\sqrt{2 \pi}} dy \right)\\ & = &\frac{1}{ \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty)} \left( \tau_t^j \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty) + \int_{y=0}^{\infty} \frac{(y - \tau_t^j)e^{-(y - \tau_t^j)^2/2 (\sigma^j)^2}}{\sigma^j\sqrt{2 \pi}} dy \right) \end{eqnarray*} We can evaluate the integral in the equation above as follows. \begin{eqnarray*} \int_{y = 0}^{\infty} (y-\tau_t^j)e^{-(y - \tau_t^j)^2/2(\sigma^j)^2} dy = (\sigma^j)^2 \int_{z = (\tau_t^j)^2/2(\sigma^j)^2}^{\infty} e^{-z}dz = (\sigma^j)^2 e^{-(\tau_t^j)^2/2 (\sigma^j)^2} \end{eqnarray*} Therefore, we can conclude that \[ {\mathbf{E}}[\dot{x}{\breve{u}_t} | x \in C_{t+1}, x \sim D_j] = \tau_t^j + \frac{1}{\Phi(-\frac{\tau_t^j}{\sigma^j}, \infty)} \cdot \sigma^j \frac{e^{-(\tau_t^j)^2/2 (\sigma^j)^2}}{\sqrt{2 \pi}} \] Now we can write \begin{eqnarray*} \dot{u_{t+1}}{\breve{u}_t} & = & \sum_j{\mathbf{E}}[\dot{x}{\breve{u}_t} | x \sim D_j, x \in C_{t+1}]\Pr[x \sim D_j | x \in C_{t+1}] \\ & = & \frac{1}{Z_{t+1}}\sum_j\rho^j \Phi(-\frac{\tau_t^j}{\sigma^j}, \infty) {\mathbf{E}}[\dot{x}{\breve{u}_t} | x \sim D_j, x \in C_{t+1}] \\ \end{eqnarray*} where we used Lemma~\ref{lem:wts}. The lemma follows by recalling that $\tau_t^j = \cos(\theta_t) \dot{\mu^j}{b^1_t}$. \end{proof} \begin{lemma}\label{lem:proj2} For any $t$, \[ \dot{u_{t+1}}{\breve{u}p_t} = \frac{\sin(\theta_t) m_t^1}{Z_{t+1}} \] \end{lemma} \begin{proof} Let $x$ be a sample drawn from distribution $D_j$.
Since $\breve{u}p_t$ is perpendicular to $\breve{u}_t$, and $D_j$ is a spherical Gaussian, given that $x \in C_{t+1}$ (that is, the projection of $x$ on $\breve{u}_t$ is greater than $0$), the projection of $x$ on $\breve{u}p_t$ is still distributed as a Gaussian with mean $\dot{\mu^j}{\breve{u}p_t}$ and standard deviation $\sigma^j$. That is, \[ {\mathbf{E}}[ \dot{x}{\breve{u}p_t} | x \sim D_j, x \in C_{t+1}] = \dot{\mu^j}{\breve{u}p_t} \] Also recall that, by definition of $\breve{u}p_t$, $\dot{\mu^j}{\breve{u}p_t} = \sin(\theta_t)\dot{\mu^j}{b^1_t}$. To prove the lemma, we observe that $\dot{u_{t+1}}{\breve{u}p_t}$ is equal to \[ \sum_j {\mathbf{E}}[\dot{x}{\breve{u}p_t} | x \sim D_j, x \in C_{t+1}] \Pr[x \sim D_j | x \in C_{t+1}] \] The lemma follows by using Lemma~\ref{lem:wts}. \end{proof} \begin{lemma}\label{lem:proj3} For $l \geq 2$, \[ \dot{u_{t+1}}{b^l_t} = \frac{m_t^l}{Z_{t+1}} \] \end{lemma} \begin{proof} Let $x$ be a sample drawn from distribution $D_j$. Since $b^l_t$ is perpendicular to $\breve{u}_t$, and $D_j$ is a spherical Gaussian, given that $x \in C_{t+1}$ (that is, the projection of $x$ on $\breve{u}_t$ is greater than $0$), the projection of $x$ on $b^l_t$ is still distributed as a Gaussian with mean $\dot{\mu^j}{b^l_t}$ and standard deviation $\sigma^j$. That is, \[ {\mathbf{E}}[ \dot{x}{b^l_t} | x \sim D_j, x \in C_{t+1}] = \dot{\mu^j}{b^l_t} \] To prove the lemma, we observe that $\dot{b^l_t}{u_{t+1}}$ is equal to \[ \sum_j {\mathbf{E}}[\dot{x}{b^l_t} | x \sim D_j, x \in C_{t+1}] \Pr[x \sim D_j | x \in C_{t+1}] \] The lemma follows by using Lemma~\ref{lem:wts}. \end{proof} Finally, we show a lemma which estimates the norm of the vector $u_{t+1}$.
\begin{lemma}\label{lem:proj4} \begin{eqnarray*} ||u_{t+1}||^2 & = & \frac{1}{Z_{t+1}^2}( \xi_t^2 + 2 \xi_t \cos(\theta_t) m_t^1 + \sum_{l=1}^{k-1} (m_t^l)^2 ) \end{eqnarray*} \end{lemma} \begin{proof} Combining Lemmas \ref{lem:proj1}, \ref{lem:proj2} and \ref{lem:proj3}, we can write: \begin{eqnarray*} ||u_{t+1}||^2 & = & \dot{\breve{u}_t}{u_{t+1}}^2 + \dot{\breve{u}p_t}{u_{t+1}}^2 + \sum_{l \geq 2} \dot{b^l_t}{u_{t+1}}^2 \\ & = & \frac{1}{Z_{t+1}^2} \bigg( \xi_t^2 + 2 \xi_t \cos(\theta_t) m_t^1 + \cos^2(\theta_t) (m_t^1)^2 + \sin^2(\theta_t) (m_t^1)^2 + \sum_{l=2}^{k-1} (m_t^l)^2 \bigg) \end{eqnarray*} The lemma follows by plugging in the fact that $\cos^2(\theta_t) + \sin^2(\theta_t) = 1$. \end{proof} Now we are ready to prove Lemma \ref{lem:kexpr1}. \begin{proof}(Of Lemma \ref{lem:kexpr1}) Since $b_t^1, \ldots, b_t^{k-1}$ form a basis of ${\mathcal{M}}$, we can write: \begin{equation} \cos^2(\theta_{t+1}) = \frac{ \sum_{l=1}^{k-1} \dot{u_{t+1}}{b_t^l}^2}{||u_{t+1}||^2} \label{eqn:cosk} \end{equation} $||u_{t+1}||^2$ is estimated in Lemma \ref{lem:proj4}, and $\dot{u_{t+1}}{b_t^l}$ for $l \geq 2$ is given by Lemma \ref{lem:proj3}. Moreover, as $b_t^1$ lies in the subspace spanned by the orthogonal vectors $\breve{u}_t$ and $\breve{u}p_t$, we can write: \begin{eqnarray*} \dot{u_{t+1}}{b_t^1} & = & \dot{\breve{u}_t}{u_{t+1}} \dot{\breve{u}_t}{b_t^1} + \dot{\breve{u}p_t}{u_{t+1}} \dot{\breve{u}p_t}{b_t^1} \\ & = & \frac{ \cos(\theta_t) \xi_t + m_t^1}{Z_{t+1}} \end{eqnarray*} Plugging this into Equation \ref{eqn:cosk}, we get: \[ \cos^2(\theta_{t+1}) = \frac{ \xi_t^2 \cos^2(\theta_t) + 2 \xi_t \cos(\theta_t) m_t^1 + \sum_l (m_t^l)^2 }{ \xi_t^2 + 2 \xi_t \cos(\theta_t) m_t^1 + \sum_l (m_t^l)^2 } \] The lemma follows by rearranging the above equation, similar to the proof of Lemma \ref{lem:expr1}. \end{proof} \iffalse The following lemma is useful for getting from Lemma~\ref{lem:kexpr1} to Theorems~\ref{thm:smallmuk} and~\ref{thm:largemuk}.
\begin{lemma} Let $R_l=\sum_j \rho^j \dot{\mu^j}{b_t^l}^2$. If, for all $j$, $\tau_t^j \leq \sqrt{\ln \frac{9}{2 \pi}}$, then \[ \frac{5}{3 \sqrt{2 \pi}} \cos(\theta_t)\cdot R_1 \leq m_t^1 \leq \frac{2}{\sqrt{2 \pi}} \cos(\theta_t)\cdot R_1 \] and $|m_t^l| \leq \frac{2}{\sqrt{2 \pi}}\cos(\theta_t)\cdot(R_1\cdot R_l)^{1/2}$ for $l\neq 1$. Otherwise, \[ \Phi(0, \sqrt{\ln \frac{9}{2 \pi}}) \cdot \rho_{min} \cdot R_1^{1/2} \leq m_t^1 \leq R_1^{1/2} \] and $|m_t^l| \leq R_l$ for $l\neq 1$. \end{lemma} \subsubsection{Proofs of Theorems \ref{thm:smallmuk} and \ref{thm:largemuk}} \begin{lemma} Let $\tau_t^j \leq \sqrt{\ln \frac{9}{2 \pi}}$. Then, \begin{eqnarray*} m_t^1 & \geq & \frac{5}{3 \sqrt{2 \pi}} \cos(\theta_t)\left( \sum_j \rho^j \dot{\mu^j}{b_t^1}^2\right) \\ m_t^1 & \leq & \frac{2}{\sqrt{2 \pi}} \cos(\theta_t) \left(\sum_j \rho^j \dot{\mu^j}{b_t^1}^2\right) \end{eqnarray*} Moreover, for any $l \neq 1$, \[ |m_t^l| \leq \frac{2}{\sqrt{2 \pi}}\cos(\theta_t) \big(\sum_j \rho^j \dot{\mu^j}{b_t^1}^2\big)^{1/2} \big(\sum_j \rho^j \dot{\mu^j}{b_t^l}^2\big)^{1/2} \] \label{lem:smallmuk} \end{lemma} \fi \iffalse \begin{proof} If $\tau_t^j \leq \sqrt{\ln \frac{9}{2 \pi}}$, then, for each $j$, from Lemma \ref{lem:smalltau}, $\Phi(-\tau_t^j, \infty)$ lies between $\frac{1}{2} + \frac{5}{3 \sqrt{2 \pi}} \tau_t^j$ and $\frac{1}{2} + \frac{2}{\sqrt{2 \pi}} \tau_t^j$. Therefore, \[ \frac{1}{2} \rho^j \tau_t^j + \frac{5}{3 \sqrt{2 \pi}} \rho^j (\tau_t^j)^2 \leq \rho^j \Phi(-\tau_t^j, \infty) \tau_t^j \leq \frac{1}{2} \rho^j \tau_t^j + \frac{2}{\sqrt{2 \pi}} \rho^j (\tau_t^j)^2 \] The first part of the lemma now follows, using the fact that $\sum_j \rho^j \mu^j = 0$ (as the mean of the mixture is at the origin), and the fact that $\tau_t^j = \cos(\theta_t) \dot{\mu^j}{b_t^1}$.
For the second part of the lemma, we note again that if for all $j$, $\tau_t^j \leq \sqrt{\ln \frac{9}{2 \pi}}$, then, for any $j$ and $l$, \[ \rho^j \Phi(-\tau_t^j, \infty) \dot{\mu^j}{b_t^l} \leq \frac{1}{2} \rho^j \dot{\mu^j}{b_t^l} + \frac{2}{\sqrt{2 \pi}} \rho^j |\tau_t^j| \dot{\mu^j}{b_t^l} \] Using the fact that the mixture is centered at the origin, the first term on the right hand side, when summed over all $j$, is $0$. Using the Cauchy-Schwarz Inequality, the second term on the right hand side can be upper bounded as: \[ \frac{2}{\sqrt{2 \pi}} \left( \sum_j \rho^j (\tau_t^j)^2 \right)^{1/2} \cdot \left( \sum_j \rho^j (\dot{\mu^j}{b_t^l})^2 \right)^{1/2} \] The second part of the lemma now follows from plugging in the fact that $\tau_t^j = \cos(\theta_t) \dot{\mu^j}{b_t^1}$. \end{proof} \fi \iffalse \begin{lemma} Suppose there exists some $j$ such that $\tau_t^j \geq \sqrt{\ln \frac{9}{2 \pi}}$. Then, \begin{eqnarray*} m_t^1 & \geq & \Phi(0, \sqrt{\ln \frac{9}{2 \pi}}) \cdot \rho_{min} \left( \sum_j \rho^j \dot{\mu^j}{b_t^1}^2 \right)^{1/2} \\ m_t^1 & \leq & \left( \sum_j \rho^j \dot{\mu^j}{b_t^1}^2 \right)^{1/2} \end{eqnarray*} Moreover, for any $l$, $|m_t^l| \leq \left(\sum_j \rho^j \dot{\mu^j}{b_t^l}^2\right)^{1/2}$. \label{lem:largemuk} \end{lemma} \fi \iffalse \begin{proof} To prove the first part of the lemma, we observe that: \[ m_t^1 = \sum_j \rho^j \Phi(0, |\tau_t^j|) |\dot{\mu^j}{b_t^1}| \geq \rho_{min} \Phi(0, \sqrt{\ln \frac{9}{2 \pi}}) \max_{j} |\dot{\mu^j}{b_t^1}| \geq \rho_{min} \Phi(0, \sqrt{\ln \frac{9}{2 \pi}})(\sum_j \rho^j \dot{\mu^j}{b_t^1}^2)^{1/2} \] Moreover, for any $l$, we observe that \begin{eqnarray*} \sum_j \rho^j \Phi(-\tau_t^j, \infty) \dot{\mu^j}{b_t^l} \leq \sum_j \rho^j |\dot{\mu^j}{b_t^l}| \leq (\sum_j \rho^j \dot{\mu^j}{b_t^l}^2 )^{1/2} \end{eqnarray*} The first inequality follows because $\Phi(x,y) \leq 1$ for any $x$ and $y$, and the second inequality follows by an application of the Cauchy-Schwarz Inequality.
\end{proof} Now we are ready to prove Theorem \ref{thm:largemuk}. \begin{proof} (Of Theorem \ref{thm:largemuk}) For the first part of the theorem, we again use the inequality: \begin{eqnarray*} \cos^2(\theta_{t+1}) & = & \frac{ \cos^2(\theta_t) \xi_t^2 + 2 \cos(\theta_t) \xi_t m_t^1 + \sum_l (m_t^l)^2} { \xi_t^2 + 2 \cos(\theta_t) \xi_t m_t^1 + \sum_l (m_t^l)^2} \\ & \geq & \frac{\cos^2(\theta_t) \xi_t^2 + 2 \cos(\theta_t) \xi_t m_t^1 + (m_t^1)^2}{ \xi_t^2 + 2 \cos(\theta_t) \xi_t m_t^1 + (m_t^1)^2}\\ & = & \cos^2(\theta_t) \left( 1 + \tan^2(\theta_t) \cdot \frac{ 2 \cos(\theta_t) \xi_t m_t^1 + (m_t^1)^2}{ \xi_t^2 + 2 \cos(\theta_t) \xi_t m_t^1 + (m_t^1)^2 } \right) \end{eqnarray*} Now, from Lemma \ref{lem:smallmuk}, it follows that $m_t^1$ lies between $\frac{5}{3 \sqrt{2 \pi}} \lambda_{min}^2 \mu^2$ and $\frac{2}{\sqrt{2 \pi}} \lambda_{max}^2 \mu^2$. Using this fact, and the fact that $\xi_t \geq \frac{2}{3}$ (from Lemma \ref{lem:smalltau}), $\cos^2(\theta_{t+1})$ is at least \[ \cos^2(\theta_t) \left( 1 + \sin^2(\theta_t) \cdot \frac{ \gamma_1 \lambda_{min}^2 \mu^2 + \gamma_2 \lambda_{min}^4 \mu^4}{\gamma_3 + \gamma_4 \cos^2(\theta_t) \lambda_{min}^2 \mu^2 + \gamma_5 \cos^2(\theta_t) \lambda_{min}^4 \mu^4} \right) \] for fixed constants $\gamma_1, \ldots, \gamma_5$. The first part of the theorem follows. The second part also follows similarly, from the observation that $\sum_l (m_t^l)^2 \leq \mu^4$. Finally, the third part of the theorem follows from a combination of Lemma \ref{lem:largemuk} and the fact that $\xi_t \leq 1$ in this case. \end{proof} \fi \end{document}
\begin{document} \title{A Feasibility-Seeking Approach to Two-stage Robust Optimization in Kidney Exchange} \begin{abstract} Kidney paired donation programs (KPDPs) match patients with willing but incompatible donors to compatible donors with an assurance that when they donate, their intended recipient receives a kidney in return from a different donor. A patient and donor join a KPDP\ as a pair, represented as a vertex in a compatibility graph, where arcs represent compatible kidneys flowing from a donor in one pair to a patient in another. A challenge faced in real-world KPDPs\ is the possibility of a planned match being cancelled, e.g., due to late detection of organ incompatibility or patient-donor dropout. We therefore develop a two-stage robust optimization approach to the kidney exchange problem wherein (1) the first stage determines a kidney matching solution according to the original compatibility graph, and then (2) the second stage repairs the solution after observing transplant cancellations. In addition to considering homogeneous failure, we present the first approach that considers non-homogeneous failure between vertices and arcs. To this end, we develop solution algorithms with a feasibility-seeking master problem and evaluate two types of recourse policies. Our framework outperforms the state-of-the-art kidney exchange algorithm under homogeneous failure on publicly available instances. Moreover, we provide insights on the scalability of our solution algorithms under non-homogeneous failure for two recourse policies and analyze their impact on highly-sensitized patients, patients for whom few kidney donors are available and whose associated exchanges tend to fail at a higher rate than non-sensitized patients. 
\end{abstract} \section{Introduction} \label{Introduction} Kidney paired donation programs (KPDPs) across the globe have facilitated living donor kidney transplants for patients experiencing renal failure who have a willing but incompatible or sub-optimal donor. A patient in need of a kidney transplant registers for a KPDP\ with their incompatible donor (paired donor) as a pair. The patient then receives a compatible kidney from either the paired donor in another pair, who in turn receives a kidney from another donor, or from a singleton donor who does not expect a kidney in return (called a \emph{non-directed donor}). The transplants are then made possible through the exchange of paired donors between patient-donor pairs. Since first discussed by \cite{Rapaport1986}, and put into practice for the first time in South Korea \citep{Park1999}, kidney exchanges performed through KPDPs\ have been introduced in several countries around the world, e.g., the United States \citep{Saidman2006}, the United Kingdom \citep{Manlove2015}, Canada \citep{Malik2014} and Australia \citep{AustraliaKPD}, and the underlying matching of patients to donors has been the subject of study from multiple disciplines (see, e.g., \citet{Roth2005,Dickerson2016, Dickerson2019,Carvalho2021, Riascos2020}). Despite this attention, KPDPs\ still face challenges from both a practical and theoretical point of view \citep{Ashlagi2021}. In this work, motivated by the high rate of exchanges that do not proceed to transplant \citep{Bray2015, Dickerson2016, CBS2019}, we provide a robust optimization framework that proposes a set of exchanges for transplant, observes failures in the transplant plan, and then repairs the affected exchanges provided that a recovery rule (i.e., \textit{recourse policy}) is given by KPDP\ operators. Kidney exchange, or the kidney exchange problem as it is known in the literature, can be modeled with a compatibility graph (i.e., a digraph).
Each vertex represents either a pair or a non-directed donor, and each arc indicates that the donor in the starting vertex is blood type and tissue type compatible with the patient in the ending vertex. Arcs may have an associated weight that represents the priority of that transplant. Exchanges then take the form of simple cycles and simple paths (called chains). Cycles consist of patient-donor pair exchanges only, whereas in a chain the first donation occurs from a non-directed donor to a patient-donor pair, and is followed by a sequence of donations from a paired donor to the patient in the next patient-donor pair. Since a donor/patient can withdraw from a KPDP\ at any time, or late-detected medical issues can prevent a paired donor from donating in the future, cyclic transplants are performed simultaneously. Unlike cycles, a patient in a chain does not risk exchanging his/her paired donor without first receiving a kidney from another donor. Therefore, transplants in a chain can be executed sequentially, but depending on KPDP\ regulations, they can also be performed simultaneously \citep{Biro2021}. Due to the simultaneity constraint for cycles, the maximum number of transplants in a cycle is limited by the logistical implications of arranging operating rooms and surgical teams. The maximum number of transplants in a chain can vary depending on the simultaneity constraint and specific regulations. In the United States, the size of chains can be unbounded and theoretically limited only by the number of available donors \citep{Ashlagi2012, Dickerson2012b, Anderson2015,Ding2018, Dickerson2019} by allowing the donor in the last pair of a chain to become a \textit{bridge donor}, i.e., a paired donor that acts as a non-directed donor in a future algorithmic matching. However, in Canada \citep{Malik2014} and in Europe \citep{Biro2021, Carvalho2021}, chains are limited in size since their KPDPs\ do not use bridge donors. 
In this case, the donor in the last pair donates to a patient on the deceased donor waiting list, ``closing up'' a chain. In constructing kidney exchanges, KPDP\ operators use ``believed'' information on the compatibility between patients and donors; however, this information may not be accurate, as compatibility is definitively confirmed only once a set of transplants has been selected. Thus, there is uncertainty about the existence of vertices and arcs in the compatibility graph. There are multiple reasons a patient-donor pair or a non-directed donor selected for transplant may not be available, e.g., match offer rejection, already transplanted out of the KPDP, illness, pregnancy, reneging, etc. Thus, even if some of this information is captured prior to the matching process, there is still a chance of subsequent failure. Additionally, tissue type compatibility is not known with certainty when the compatibility graph is built, unlike blood type compatibility. Tissue type compatibility is based on the result of a \textit{virtual crossmatch test}, which typically has lower accuracy than a \textit{physical crossmatch test}. Both tests try to determine if there are antibodies of importance in a patient that can lead to the rejection of a potential donor's kidney. Physical crossmatch tests are challenging to perform, making them impractical, and thus unlikely to be performed between all patients and donors in real life \citep{Carvalho2021}. So, in the first stage, a ``believed'' compatibility graph is built according to the results of the virtual crossmatch test. Once the set of transplants has been proposed, those exchanges undergo a physical crossmatch test to confirm or rule out the viability of the transplants. After confirming infeasible transplants and depending on the KPDPs' regulations, KPDP\ operators may attempt to repair the originally planned cycles and chains impacted by the non-existence of a vertex or an arc.
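As an illustrative toy sketch (the graph, vertex names, and length caps below are invented for exposition and do not come from any real KPDP or from this paper's model), the exchange structures described in the introduction can be enumerated on a small compatibility digraph as simple cycles over patient-donor pairs and chains starting at a non-directed donor:

```python
# Toy compatibility digraph. An arc (a, b) means the donor at vertex a is
# believed compatible with the patient at vertex b.
arcs = {("P1", "P2"), ("P2", "P1"), ("P2", "P3"), ("P3", "P1"), ("N", "P1")}
pairs = {"P1", "P2", "P3"}   # patient-donor pairs
ndds = {"N"}                 # non-directed donors

def cycles(max_len):
    """Simple cycles over patient-donor pairs, with at most max_len vertices."""
    found = set()

    def extend(path):
        for (a, b) in arcs:
            if a != path[-1] or b in ndds:   # follow arcs out of the last vertex
                continue
            if b == path[0] and len(path) > 1:
                i = path.index(min(path))               # canonical rotation so
                found.add(tuple(path[i:] + path[:i]))   # each cycle is reported once
            elif b not in path and len(path) < max_len:
                extend(path + [b])

    for p in pairs:
        extend([p])
    return found

def chains(max_len):
    """Chains starting at a non-directed donor, with at most max_len transplants."""
    found = set()

    def extend(path):
        if len(path) > 1:
            found.add(tuple(path))
        for (a, b) in arcs:
            if a == path[-1] and b not in path and len(path) <= max_len:
                extend(path + [b])

    for n in ndds:
        extend([n])
    return found
```

In this toy instance, the pairs form one 2-way and one 3-way cycle, and the non-directed donor N opens a chain reaching all three pairs; real KPDPs cap both cycle and chain lengths for the logistical and regulatory reasons discussed above.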
We refer to these impacted cycles and chains as \textit{failed} cycles and chains. A cycle fails completely if any of its elements (vertices or arcs) ceases to exist, whereas a chain is cut short at the pair preceding the first failed transplant. Failures in the graph, caused by the disappearance of a vertex or arc, have a significant impact on the number of exchanges that actually proceed to transplant \citep{Dickerson2019,CBS2019} and can even drive KPDP\ regulations \citep{Carvalho2021}. For instance, \cite{Dickerson2019} reported that of the transplants selected in the UNOS program between 2010--2012, 93\% did not proceed to transplant. Of those non-successful transplants, 44\% had a failure reason (e.g., failed physical crossmatch result or patient/donor drop-out), which in turn caused the cancellation of the remaining 49\%. In Canada, between 2009--2018, 62\% of cycles and chains with six transplants failed, among which only 10\% could be repaired \citep{CBS2019}. Half the cycles and chains with three or fewer transplants were successful, and approximately 30\% of the total could not proceed to transplant. The sets of transplants that are proposed and later repaired by KPDPs\ do not account for subsequent failures, and thus the number of successful transplants performed is sub-optimal. There is a large body of work concerned with maximizing the expectation of proposed transplants \citep{Awasti2009, Dickerson2014, Dickerson2019, Klimentova2016, Smeulders2022}. From a practical perspective, maximizing expectation could increase the number of planned exchanges that become actual transplants. However, such a policy risks favoring patients that are likely to be compatible with multiple donors over highly-sensitized patients \citep{Carvalho2021}, i.e., patients for whom few kidney donors are available and whose associated exchanges tend to fail at a higher rate than those of non-sensitized patients.
Furthermore, real data is limited and there is currently not enough understanding of the dynamics between patients and donors to derive a probability distribution of failures that could be generalized to most KPDPs. We therefore model failure through an uncertainty set that does not target patient sensitization level or require probabilistic knowledge, and we aim to find a set of transplants that admits the greatest recovery under the worst-case failure scenario in the uncertainty set. We develop a two-stage robust optimization (RO) approach to the kidney exchange problem wherein (1) the first stage determines a kidney matching solution according to the original compatibility graph, and then (2) the second stage repairs the solution after observing transplant cancellations. We extend the current state-of-the-art RO methodologies \citep{Carvalho2021} by assuming that the failure rates of vertices and arcs are non-homogeneous, since failure reasons such as a late-detected incompatibility appear to be independent of patient/donor dropout, and vice versa. Moreover, we also consider the impact of scarce match possibilities for highly-sensitized patients. The contributions of this work are as follows: \begin{enumerate} \item We develop a novel two-stage RO framework with non-homogeneous failure between vertices and arcs. \item We present a novel general solution framework for any recourse policy whose recourse solution set is finite after selection of a first-stage set of transplants. \item We are the first to introduce two feasibility-seeking reformulations of the second stage, as opposed to optimality-based formulations \citep{Carvalho2021, Blom2021}, which improves scalability since the number of decision variables in our formulation grows linearly with the number of vertices and arcs. \item We derive dominating scenarios and explore several second-stage solution algorithms to overcome the drawback of a lack of lower bound in our solution framework.
\item We show that our framework results in significant computational and solution quality improvements compared to state-of-the-art algorithms. \end{enumerate} The remainder of the paper is organized as follows. Section \ref{LitReview} presents a collection of related works. Section \ref{Preliminaries} formally defines the problem we address. Sections \ref{RobustModels} and \ref{SecondStage} present the first- and second-stage formulations, respectively. Section \ref{sec:HSAs} presents the full algorithmic framework. Section \ref{Experiments} reports computational results. Lastly, Section \ref{sec:Conclusion} draws conclusions and outlines paths for future work. \section{Related Work} \label{LitReview} \cite{Abraham2007} and \cite{Roth2007} introduced the first KPDP\ mixed-integer programming (MIP) formulations for maximizing the number/weighted sum of exchanges. These formulations are the well-known \textit{edge formulation} and \textit{cycle formulation}. The edge formulation uses the arcs in the input graph to index its decision variables, whereas the cycle formulation, which was initially proposed for cycles only, has a decision variable for every feasible cycle and chain in the input graph. Both formulations are of exponential size, either in the number of constraints (edge formulation) or in the number of decision variables (cycle formulation). However, the cycle formulation, along with a subsequent formulation \citep{Dickerson2016}, is the MIP formulation with the strongest linear relaxation. Due to its strength and natural adaptability, multiple works have designed branch-and-price algorithms employing the cycle formulation. The branch-and-price algorithm in \citet{Abraham2007} was effective for cycles of size up to three, while that of \citet{Lam2020} solved the problem for long cycles, and \cite{Riascos2020} used decision diagrams to solve large instances with both long cycles and long chains for the first time.
More recently, \cite{Omer2022} built on the work in \citet{Riascos2020} by implementing a branch-and-price algorithm able to solve remarkably large instances (10,000 pairs and 1,000 non-directed donors), opening the door to large-scale multi-hospital and multi-country efforts. Another trend has focused on new arc-based formulations (e.g., \cite{Constantino2013,Dickerson2016}) and arc-and-cycle-based formulations (e.g., \cite{Anderson2015, Dickerson2016}). Between these two approaches, arc-and-cycle-based formulations seem to outperform arc-based formulations \citep{Dickerson2016}, especially for instances whose cycles have at most three exchanges. The previously discussed studies do not consider uncertainty in the proposed exchanges. However, the high percentage of planned transplants that end up cancelled suggests a need to plan for uncertainty. Two sources of uncertainty have been studied in the literature: weight accuracy (e.g., \cite{Duncan2019}) and vertex/arc existence (e.g., \cite{Dickerson2016, Klimentova2016, Duncan2019, Carvalho2021, Smeulders2022}). Weight accuracy uncertainty assumes that the social benefit (weight) associated with an exchange can vary, e.g., due to changes in a patient's health condition, or because policy makers hold multiple opinions on the priority that should be given to each patient \citep{Duncan2019}. Uncertainty in the existence of a vertex/arc, i.e., whether a patient or donor leaves the exchange or the compatibility between a patient and donor changes, has received greater attention. There are three main approaches in the literature addressing vertex or arc existence as the source of uncertainty: (1) a maximum expected value approach; (2) an identification of exchanges for which a physical crossmatch test should be performed to maximize the expected number of realized transplants; and (3) a maximization of the number of transplants under the worst-case disruption of vertices and arcs.
The maximum expected value approach is the most investigated in the literature. It is concerned with finding the set of transplants with maximum expected value, i.e., a set of transplants that is most likely to yield either the maximum number of exchanges or the maximum weighted sum of exchanges, given some vertex/arc failure probabilities. This approach has mostly been modeled as a deterministic kidney exchange problem (KEP), where the objective function approximates the expected value of a matching using the given probabilities as objective coefficient multipliers of deterministic decisions. \cite{Awasti2009} considered the failure of vertices in an online setting of the cycle-only version for cycles with at most three exchanges. The authors generate sample trajectories on the arrival of patients/donors and patient survival, then use a REGRETS algorithm as a general framework to approximate the collection of cycles with maximum expectation. \cite{Dickerson2012} proposed a heuristic method to learn the ``potential'' of structural elements (e.g., a vertex), which quantifies the future expected usefulness of that element in a changing graph with new patient/donor arrivals and departures. \cite{Dickerson2013} considered arc failure probabilities and found a matching with maximum expected value, but solution repairs for failures are not considered. This work is extended in \citet{Dickerson2019}. \cite{Klimentova2016} computed the expected number of transplants for the cycle-only version while considering \textit{internal} recourse and \textit{subset} recourse to recover a solution in case of vertex or arc failure. Internal recourse, also known as \textit{back-arcs recourse} (e.g., \cite{Carvalho2021}), allows surviving pairs to match among themselves, whereas subset recourse allows a wider subset of vertices to participate in the repaired solution.
To compute the expectation, an enumeration tree is used over all possible failure patterns in a cycle and its extended subset. This subset consists of the additional vertices (for the subset recourse only) such that the pairs in the original cycle can form feasible cycles. To limit the size of the tree, the subset recourse is limited to a small subset of extra vertices, and the internal recourse seems to scale for short cycles only. \cite{Alvelos2019} proposed to compute the expected value for the cycle-only version under internal recourse through a branch-and-price algorithm, finding that the overall run time grew rapidly with the size of the cycles. To identify exchanges where a physical crossmatch test should be performed, \cite{Blum2013} modeled the KEP in an undirected graph representing pairwise exchanges only. They proposed to perform two physical crossmatch tests per patient-donor pair---one for every arc in a cycle of size two---before exchanges are selected, with the goal of maximizing the expected number of transplants. They showed that their algorithm yields near-optimal solutions in polynomial time. Subsequent works \citep{Assadi2019,Blum2020} evaluated adaptive and non-adaptive policies to query edges in the graph. In the same spirit, but for the general kidney exchange problem (with directed cycles and chains), \cite{Smeulders2022} formulated the maximization of the expected number of transplants as a two-stage stochastic integer programming problem with a limited budget on the number of arcs that can be tested in the first stage. Different algorithmic approaches were proposed, but scalability was a challenge. In addressing worst-case vertex/arc disruption, \cite{Duncan2019} found robust solutions without recourse under a budget on the number of arcs that fail in the graph. \cite{Carvalho2021} proposed a two-stage robust optimization model that allowed recovery of failed solutions through the back-arcs and full recourse policies.
The full recourse policy can be considered a subset recourse policy (e.g., \cite{Klimentova2016}) in which all vertices that were not selected in the matching can be included in the repaired solution. Unlike our work, vertex and arc failure probabilities are treated as homogeneous, i.e., both element types fail with the same probability. Since under homogeneous failure there is a worst-case scenario in which all failures are vertex failures, the recourse policies are evaluated under vertex failure only. The back-arcs recourse policy scales only to instances with 20 vertices, whereas the full-recourse policy scales to instances with up to 50 vertices. \citet{Blom2021} examined the general robust model for the full-recourse policy studied in \citet{Carvalho2021} and showed its structure to be a defender-attacker-defender model. Two Benders-type approaches were proposed and tested on the same instances from \citet{Carvalho2021}, and showed improved performance over the previous branch-and-bound algorithms \citep{Carvalho2021}. This approach, however, is limited to homogeneous failure for the full-recourse policy. In our work, we allow for different failure rates between vertices and arcs, and present a solution scheme that can address recourse policies whose recourse solution set corresponds to a subset of the feasible cycles and chains in the compatibility graph. Specifically, we apply our solution scheme to the full recourse policy, as well as to a new recourse policy we introduce later, but our solution scheme can also be used for the back-arcs recourse. Our solution method requires a robust MIP formulation adapted to a specific policy that can be solved iteratively as new failure scenarios are added. The second-stage problem is decomposed into a master problem and a sub-problem.
The master problem is formulated as the same feasibility problem regardless of the policy, with the policy implicit in the constraint set, whereas the sub-problem (i.e., recourse problem) corresponds to a deterministic KEP where only non-failed cycles and chains contribute to the robust objective. \section{Preliminaries} \label{Preliminaries} In this section, we describe our two-stage robust model. A full list of notation is in Appendix \ref{sec:notation}. We begin by formally describing the first-stage problem. Specifically, we define the compatibility graph and the feasible set for the first-stage decisions. We similarly define the second-stage problem and then introduce our two-stage robust problem. Finally, we define the uncertainty set and recourse policies. \paragraph{First-stage compatibility graph.} The KEP can be defined on a directed graph $D = (V, A)$, whose vertex set $V := P \cup N$ comprises the set of patient-donor pairs, $P$, and the set of non-directed donors, $N$. From this point onward, we will refer to patient-donor pairs simply as \emph{pairs}. The arc set $A \subseteq V \times P$ contains arc $(u,v)$ if and only if the donor in vertex $u \in V$ is compatible with the patient in vertex $v \in P$. A matching of donors and patients in the KEP can take the form of simple cycles and simple chains (i.e., paths in the digraph). A cycle is feasible if it has no more than $cCap$ arcs, whereas a chain is feasible if it has no more than $L$ arcs ($L + 1$ vertices) and starts with a non-directed donor. The sets of feasible cycles and chains are denoted by $cSet$ and $\mathcal{C}_{\chainCap}$, respectively. Furthermore, let $V(\cdot)$ and $A(\cdot)$ be the set of vertices and arcs in $(\cdot)$, where $(\cdot)$ refers to a cycle, chain, etc.
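The feasibility rules above can be made concrete with a brute-force sketch that enumerates the cycle and chain sets on a toy instance (illustrative only; practical KEP solvers use DFS-based enumeration or column generation, and the function names here are ours, not the paper's notation):

```python
from itertools import permutations

def feasible_cycles(arcs, pairs, max_arcs):
    """Enumerate simple cycles over pairs with at most max_arcs arcs.
    Each cycle is stored once via a canonical rotation."""
    arc_set = set(arcs)
    cycles = set()
    for k in range(2, max_arcs + 1):
        for perm in permutations(sorted(pairs), k):
            # consecutive arcs plus the wrap-around arc must all exist
            edges = zip(perm, perm[1:] + (perm[0],))
            if all(e in arc_set for e in edges):
                i = perm.index(min(perm))  # rotate so the smallest vertex leads
                cycles.add(perm[i:] + perm[:i])
    return cycles

def feasible_chains(arcs, ndds, pairs, max_arcs):
    """Enumerate simple chains: start at a non-directed donor, then extend
    through distinct pairs, using at most max_arcs arcs."""
    adj = {}
    for u, v in arcs:
        adj.setdefault(u, []).append(v)
    chains = []
    def extend(path):
        if len(path) > 1:
            chains.append(tuple(path))
        if len(path) - 1 == max_arcs:
            return
        for v in adj.get(path[-1], []):
            if v in pairs and v not in path:
                extend(path + [v])
    for n in sorted(ndds):
        extend([n])
    return chains
```

On a graph with pairs $\{1,2,3\}$, one non-directed donor, and arcs $(1,2)$, $(2,1)$, $(2,3)$, $(3,1)$, this yields the 2-cycle $(1,2)$ and the 3-cycle $(1,2,3)$, plus the chains rooted at the non-directed donor.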
\paragraph{Feasible set of first-stage decisions.} A feasible solution to the KEP corresponds to a collection of vertex-disjoint feasible cycles and chains, referred to as a \emph{matching}\footnote{Note that since each vertex in the KEP compatibility graph corresponds to a patient-donor pair or a non-directed donor, a KEP matching---which is referred to as a matching in the literature and in this paper---is not actually a matching of the digraph. Instead, it is a collection of simple cycles and paths in the digraph which leads to an actual underlying one-to-one matching of selected donors and patients.}, i.e., $\matchSet \subseteq cSet \cup \mathcal{C}_{\chainCap}$ such that $V(c) \cap V(c^{\prime}) = \emptyset$ for all $c, c^{\prime} \in \matchSet$ with $c \ne c^{\prime}$, noting that $c$ and $c^\prime$ are entire cycles or chains in the matching. We let $cSpace$ denote the set of all KEP matchings in graph $D$. Also, we define $ \mathcal{X} := \{\bm{x_{\matchSet}}: \matchSet \in cSpace \} $ as the set of all binary vectors representing the selection of a feasible matching, where $\bm{x_{\matchSet}}$ is the characteristic vector of matching $\matchSet$ in terms of the cycle/chain sets $cSet \cup \mathcal{C}_{\chainCap}$. That is, $\bm{x_{\matchSet}} \in \{0,1\}^{\mid cSet \cup \mathcal{C}_{\chainCap} \mid}$ with $x_{\matchSet,c} = 1$ if and only if $c \in \matchSet$, meaning that a patient in a pair obtains a transplant if the pair is in some cycle/chain $c$ selected in matching $\matchSet$. \paragraph{Transitory compatibility graph.} Upon selection of a solution $\varFirstPlain \in \mathcal{X}$ but before uncertainty is revealed, there exists a \emph{transitory} compatibility graph $D^{\pi}(\varFirstPlain) = (VSndBe, ASndBe)$ with vertex set $VSndBe = \cup_{c \in cSetSndTr \cup \mathcal{C}_{\chainCap}SndTr}V(c)$ and arc set $ASndBe = \cup_{c \in cSetSndTr \cup \mathcal{C}_{\chainCap}SndTr}A(c)$.
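The matching condition and characteristic vector defined above can be mirrored in a short sketch (the data structures are ours, for illustration: each cycle/chain identifier maps to its vertex set):

```python
def is_matching(selected, vertices_of):
    """A feasible KEP 'matching': the selected cycles/chains must be
    pairwise vertex-disjoint."""
    used = set()
    for c in selected:
        if used & vertices_of[c]:  # a vertex appears in two cycles/chains
            return False
        used |= vertices_of[c]
    return True

def characteristic_vector(selected, all_cycles_chains):
    """Binary selection vector indexed by every feasible cycle/chain:
    entry c is 1 exactly when c belongs to the matching."""
    return {c: int(c in selected) for c in all_cycles_chains}
```

For example, two cycles sharing a vertex cannot both belong to a matching, while a cycle and a chain on disjoint vertex sets can.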
The set of feasible cycles $cSetSndTr$ and feasible chains $\mathcal{C}_{\chainCap}SndTr$ in the transitory compatibility graph (i) are \textit{allowed} under recourse policy $\pi \in \piSet$, and (ii) have \textit{at least one pair} in $\varFirstPlain$. Sections \ref{subsec:uncertaintyset} and \ref{sec:RecoursePolicies} provide details on uncertainty set $\Gamma$ and the types of recourse policies $\piSet$, respectively, and provide an explicit definition of condition (i). Condition (ii) enforces that each $c \in cSetSndTr \cup \mathcal{C}_{\chainCap}SndTr$ satisfies $V(c) \cap V(\varFirstPlain) \cap P \neq \emptyset$, where $V(\varFirstPlain) = \cup_{c' \in cSet \cup \mathcal{C}_{\chainCap}:\varFirstPlain_{c'} = 1} V(c')$. \paragraph{Second-stage compatibility graph.} Once a failure scenario $\bm{\oneCeS} \in \Gamma$ is observed in the first-stage compatibility graph, a digraph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ for the second-stage problem is \emph{induced} in the transitory graph, thus revealing the cycles and chains with selected pairs from the first stage that were unaffected by $\bm{\oneCeS}$. We refer to $VSndBeF$ and $ASndBeF$ as the set of vertices and the set of arcs that fail under scenario $\bm{\oneCeS}$ in the first-stage compatibility graph, respectively. We then define a second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS}) = (VSndBe \setminus (VSndBe \cap VSndBeF), ASndBe \setminus (ASndBe \cap ASndBeF))$ as the sub-graph obtained from the transitory compatibility graph by removing the vertices and arcs that fail under scenario $\bm{\oneCeS}$ in the first-stage compatibility graph and are also present in the transitory graph. Thus, we define $cSetSndS$ and $\mathcal{C}_{\chainCap}SndS$ as the set of \emph{non-failed} cycles and chains in the second-stage compatibility graph, respectively.
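A minimal sketch of how a failure scenario determines the surviving second-stage elements, following the rule stated in the introduction that a cycle fails entirely while a chain is cut short just before its first failed element (data structures are ours, for illustration):

```python
def surviving_cycles(cycles, failed_vertices, failed_arcs):
    """A cycle survives only if none of its vertices or arcs fails."""
    kept = []
    for cyc in cycles:
        arcs = set(zip(cyc, cyc[1:] + cyc[:1]))  # includes the closing arc
        if not (set(cyc) & failed_vertices) and not (arcs & failed_arcs):
            kept.append(cyc)
    return kept

def truncate_chain(chain, failed_vertices, failed_arcs):
    """A chain is shortened at the last vertex before the first failure;
    the surviving prefix must still contain at least one transplant."""
    if chain[0] in failed_vertices:
        return []
    kept = [chain[0]]
    for u, v in zip(chain, chain[1:]):
        if v in failed_vertices or (u, v) in failed_arcs:
            break
        kept.append(v)
    return kept if len(kept) >= 2 else []
```

So a failed arc in the middle of a chain leaves the prefix before it intact, while the same failure anywhere in a cycle removes the whole cycle.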
\paragraph{Feasible set of second-stage decisions.} A solution to the second stage is referred to as a \textit{recourse solution} in $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ under some scenario $\bm{\oneCeS} \in \Gamma$, leading to an alternative matching where pairs from the first-stage solution $\varFirstPlain \in \mathcal{X}$ are re-arranged into non-failed cycles and chains, among those allowed under policy $\pi \in \piSet$. We can now define $\matchSet^{\policy}(\varFirst, \oneCe) := \{\matchSet \subseteq cSetSndS \cup \mathcal{C}_{\chainCap}SndS \mid V(c) \cap V(c^{\prime}) = \emptyset \text{ for all } c, c^{\prime} \in \matchSet \text{; } c \neq c^{\prime}\}$ as the set of allowed recovering matchings under policy $\pi$, where every cycle/chain in a matching of $\matchSet^{\policy}(\varFirst, \oneCe)$ contains at least one pair in $\varFirstPlain$. In other words, a recourse solution must contain at least one pair from the first-stage solution. Thus, similar to the first-stage characteristic vector $\bm{x_{\matchSet}}$, let $ \setRecoSym^{\policy}(\varFirst, \oneCe):= \{\bm{\mathbf{y}_{\matchSet}}: \matchSet \in \matchSet^{\policy}(\varFirst, \oneCe) \} $ be the set of all binary vectors representing the selection of a feasible matching in a second-stage compatibility graph with non-failed elements (vertices/arcs), under scenario $\bm{\oneCeS} \in \Gamma$ and policy $\pi \in \piSet$, whose cycles and chains contain at least one pair in $\varFirstPlain$. \paragraph{Two-stage RO problem.} A general two-stage RO problem for the KEP can then be defined as follows: \begin{align} \label{ROmodel0} \max_{\varFirstPlain \in \mathcal{X}} \min_{\bm{\oneCeS} \in \Gamma} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} f(\varFirstPlain,\bm{\oneCeS},\mathbf{y})& \end{align} A set of transplants given by solution $\varFirstPlain \in \mathcal{X}$ (the outer maximization) is selected in the first stage.
Then, the uncertainty vector $\bm{\oneCeS} \in \Gamma$ is observed (the middle minimization) and a recourse solution $\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)$ is found to repair $\varFirstPlain$ according to recourse policy $\pi \in \piSet$ (the inner maximization). The \emph{second stage}, given by the min--max problem ($\min_{\bm{\oneCeS} \in \Gamma} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)}$), finds a recourse solution by solving the \emph{recourse problem} ($\max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)}$), which maximizes $f(\varFirstPlain,\bm{\oneCeS},\mathbf{y})$ under a given failure scenario $\bm{\oneCeS} \in \Gamma$; the second-stage value is the lowest such optimal objective value among all failure scenarios. The scenario optimizing the second-stage problem is then referred to as the \emph{worst-case scenario} for solution $\varFirstPlain \in \mathcal{X}$. The \emph{recourse objective function}, $f(\varFirstPlain,\bm{\oneCeS},\mathbf{y})$, assigns weights to the cycles and chains of a recovered matching associated with a recourse solution $\mathbf{y}$ under failure scenario $\bm{\oneCeS}$, based on the number of pairs matched in the first-stage solution $\varFirstPlain$. Thus, we define the recourse objective as \begin{equation} \label{eq:RecourseCostFunction} f(\varFirstPlain,\bm{\oneCeS},\mathbf{y}) := \sum_{c \in cSetSndS \cup \mathcal{C}_{\chainCap}SndS} \mathbf{w}_{c} (\varFirstPlain) \mathbf{y}_{c} := {\mathbf{w}^{\oneCe}(\varFirst)}^{\top}\mathbf{y}, \end{equation} where $\mathbf{w}_{c}(\varFirstPlain) := |V(c) \cap V(\varFirstPlain) \cap P|$ is the weight of a cycle/chain $c \in cSetSndS \cup \mathcal{C}_{\chainCap}SndS$ corresponding to the \textit{number of pairs} that were matched in the first stage by solution $\varFirstPlain \in \mathcal{X}$ and can now also be matched in the second stage by recourse solution $\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)$, after failures are observed.
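The weight $\mathbf{w}_{c}(\varFirstPlain) = |V(c) \cap V(\varFirstPlain) \cap P|$ and the resulting recourse objective can be computed directly; a sketch follows (the set-based representation of cycles/chains, first-stage vertices, and pairs is an assumption for illustration):

```python
def recourse_weight(cycle_or_chain, first_stage_vertices, pairs):
    """w_c(x): number of pairs matched by the first-stage solution that
    cycle/chain c would match again after failures are observed."""
    return len(set(cycle_or_chain) & first_stage_vertices & pairs)

def recourse_objective(selected, first_stage_vertices, pairs):
    """f(x, gamma, y): total first-stage pairs recovered by the selected,
    vertex-disjoint, non-failed cycles and chains."""
    return sum(recourse_weight(c, first_stage_vertices, pairs)
               for c in selected)
```

Note that non-directed donors never contribute to a weight, since the intersection with the pair set $P$ filters them out.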
As a result, the weight of every cycle and chain selected by $\mathbf{y}$ is the number of pairs present from the first-stage solution $\varFirstPlain$. Accordingly, an optimal KEP robust solution is a first-stage matching that, under the worst-case scenario, loses the fewest of its matched pairs after recovery in the second stage. For the sake of a compact representation, and with a slight abuse of notation, we represent the recourse cost function as the inner product given on the right-hand side of Equation \eqref{eq:RecourseCostFunction}, where the superscript $\bm{\oneCeS}$ on the weight vector determines its dimension and in turn makes it consistent with that of the recourse decision vector. Thus, our two-stage RO problem for the KEP is defined as follows: \begin{align} \label{ROmodel} \max_{\varFirstPlain \in \mathcal{X}} \min_{\bm{\oneCeS} \in \Gamma} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} \weightRO\varReco& \end{align} \subsection{Uncertainty set} \label{subsec:uncertaintyset} Failures in a planned matching are observed after a final checking process, leading to the removal of the affected vertices and arcs from the first-stage compatibility graph. The failure of a vertex/arc causes the failure of the entire cycle to which that element belongs and the shortening of the chain at the last vertex before the first failure. Unlike previous studies, we consider \textit{non-homogeneous failure} by allowing vertices and arcs to fail at different rates. We define two failure budgets, one for vertices and one for arcs; in other words, we assume that there exist two unknown probability distributions causing vertices and arcs to fail independently of one another, rather than assuming that both vertices and arcs follow the same failure probability distribution. We note that this approach can still model homogeneous failure, which is a special case of non-homogeneous failure.
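The two separate failure budgets can be checked with a one-line admissibility test; the sketch below is ours, with hypothetical parameter names for the vertex and arc budgets:

```python
def within_budgets(failed_vertices, failed_arcs, r_v, r_a):
    """A failure scenario is admissible when vertex failures and arc
    failures each respect their own budget (non-homogeneous failure:
    the two element types are budgeted separately)."""
    return len(failed_vertices) <= r_v and len(failed_arcs) <= r_a
```

Under a single shared budget (homogeneous failure), one check would bound the combined count instead; keeping the budgets separate is what lets vertices and arcs fail at different rates.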
Traditionally, the uncertainty set in robust optimization does not depend on the first-stage decision. While we consider vertex and arc failures as exogenous random variables, we define the uncertainty set $\bm{\Gamma}$ in terms of all the uncertainty sets $\bm{\Gamma}(\varFirstPlain)$. Failure scenarios in $\bm{\Gamma}(\varFirstPlain)$ lead to a second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$, reflecting the dependency between the selection of a first-stage decision and the ability to observe a failure. Recall that the checking process leading to the discovery of failures is based on the matching proposed for transplant. In other words, failures involving vertices and arcs not present in the transitory graph cannot be discovered even if they occur, since the set of alternative cycles and chains in the transitory graph that can be used to repair a first-stage solution $\varFirstPlain$ is defined by policy $\pi$ regardless of the failure scenario $\bm{\oneCeS}$. Therefore, we define the uncertainty set $\bm\Gamma$ as follows: \begin{subequations} \label{UDef} \begin{align} \bm{\Gamma}(\varFirstPlain) &:= \left(\bm{\bigsCe^{\text{v}}}(\varFirstPlain), \bm{\bigsCe^{\text{a}}}(\varFirstPlain)\right) \text{ where}\\ \bm{\bigsCe^{\text{v}}}(\varFirstPlain) &:=\{\bm{\gamma^{\text{v}}} \in \{0,1\}^{\mid V \mid} \mid \lvert VSndBe \cap VSndBeF \rvert \le \sum_{u \in V} \gamma^{\text{v}}_{u} \le r^{\text{v}}\}\\ \bm{\bigsCe^{\text{a}}}(\varFirstPlain) &:= \{\bm{\gamma^{\text{a}}} \in \{0,1\}^{\mid A \mid} \mid \lvert ASndBe \cap ASndBeF \rvert \le \sum_{(u,v) \in A} \gamma^{\text{a}}_{uv} \le r^{\text{a}} \}\\ \bm{\Gamma} &:= \bigcup\limits_{\varFirstPlain \in \mathcal{X}} \bm{\Gamma}(\varFirstPlain) \end{align} \end{subequations} A failure scenario $\bm{\oneCeS} = (\gamma^{\text{v}}, \gamma^{\text{a}})$ is represented by binary vectors $\gamma^{\text{v}}$ and $\gamma^{\text{a}}$, where $\gamma^{\text{v}}$ refers to vertex failures and
$\gamma^{\text{a}}$ refers to arc failures. A vertex $u \in V$ and an arc $(u,v) \in A$ from the first-stage compatibility graph fail under a realized scenario if $\gamma^{\text{v}}_{u} = 1$ or $\gamma^{\text{a}}_{uv} = 1$, respectively. The total numbers of vertex and arc failures in the first-stage compatibility graph are controlled by user-defined parameters $r^{\text{v}}$ and $r^{\text{a}}$, respectively. Therefore, the number of vertex failures in the transitory compatibility graph $D^{\pi}(\varFirstPlain)$ leading to a second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ cannot exceed $r^{\text{v}}$. Likewise, the number of arc failures in $D^{\pi}(\varFirstPlain)$ cannot exceed $r^{\text{a}}$. Thus, the uncertainty set $\bm\Gamma$ is the union, over all first-stage decisions, of the failure scenarios leading to a second-stage compatibility graph once a set of transplants has been proposed for transplantation. Note that this uncertainty set definition only distinguishes failures by element type (vertex or arc), and does not distinguish between sensitized and non-sensitized patients. \subsection{Recourse policies} \label{sec:RecoursePolicies} An important consideration made by KPDPs is the guideline according to which a selected matching is allowed to be repaired. Although re-optimizing the deterministic KEP with the non-failed vertices/arcs is an option \citep{Dutch2005}, some KPDPs opt for recovery strategies when failures in the matching given by the deterministic model are observed \citep{CBS2019, Manlove2015}. Thus, it is reasonable to use those strategies as recourse policies when uncertainty is considered. We consider the \textit{full-recourse policy} studied in \citet{Blom2021, Carvalho2021} and introduce a natural extension of this policy, which we refer to as the \textit{first-stage-only recourse policy}. Examples of each are shown in Figure \ref{fig:RecourseExamples}.
\subsubsection{Full recourse policy} Under the full-recourse policy, pairs selected in the first stage that belong to failed components may be re-arranged in the second stage within non-failed cycles and chains that may involve any other vertex (pair or non-directed donor), regardless of whether that vertex was selected in the first stage. Figure \ref{fig:FullRecourseGraph} shows an example of the full-recourse policy arising from the compatibility graph in Figure \ref{fig:CompGraph}. The first-stage solution depicted with bold arcs has a total weight of eight, since there are eight exchanges. Suppose there is a scenario in which $\gamma^{\text{v}}_{2} = 1$ and $\gamma^{\text{a}}_{56} = 1$. Assuming that $cCap = L = 4$, the best recovery plan under this scenario, depicted by the recourse solution with shaded arcs, is to re-arrange vertices 3, 4 and 6 by bringing them together into a new cycle and to include vertices 1 and 5 in a chain started by non-directed donor 8 (which was not selected in the first stage). Alternatively, vertices 1 and 5 could be selected along with vertex 7 to form a cycle. In both cases, the recourse solution involves only five pairs from the first stage. \input{Figures/RO_FullRecourse} \subsubsection{First-stage-only recourse policy} The first-stage-only recourse policy is one in which only vertices selected in the first stage can be used to repair a first-stage solution, i.e., the new non-failed cycles and chains selected in the second stage must include vertices from that first-stage solution only. The recourse solution set of the first-stage-only recourse policy corresponds to a subset of that of the full recourse policy.
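The admissibility condition distinguishing the two policies can be sketched as a simple filter (the policy labels and set representation are ours; both policies additionally require each repair cycle/chain to contain at least one first-stage pair):

```python
def allowed_under_policy(cycle_or_chain, first_stage_vertices, pairs, policy):
    """Repair admissibility: every repair cycle/chain must re-match at
    least one first-stage pair; 'first_stage_only' further restricts all
    its vertices to the first-stage solution, while 'full' admits any
    other vertex (pair or non-directed donor)."""
    vs = set(cycle_or_chain)
    if not (vs & first_stage_vertices & pairs):
        return False
    if policy == "first_stage_only":
        return vs <= first_stage_vertices
    return True
```

A repair chain started by a non-directed donor outside the first-stage solution is thus admissible under full recourse but rejected under the first-stage-only policy.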
Although it is more conservative, KPDPs may opt for the first-stage-only policy because, under full recourse, the second stage can select vertices that were not selected in the first stage and thus have not yet been checked by KPDP\ operators, adding uncertainty about the actual presence of those vertices/arcs. The back-arcs recourse policy studied in \citet{Carvalho2021} also allows recovery of a first-stage solution using only pairs that were selected in that solution, but that policy allows recovery of a cycle (chain) only if other cycles (chains) are nested within it, making the back-arcs recourse more conservative than the first-stage-only recourse. In Figure \ref{fig:FirstStageOnlyRecourseGraph}, the recourse solution under the first-stage-only recourse includes vertices 3, 4 and 6 in a cycle just as in the full-recourse policy, but chains started by vertex 8 and the cycle to which vertex 7 belongs would not be within the feasible recourse set. Thus, the recourse objective value corresponds to only three pairs from the first stage as opposed to five. \section{Robust model and first stage} \label{RobustModels} For a finite (yet possibly large) uncertainty set, the second optimization problem in Model \eqref{ROmodel} ($\min_{\bm{\oneCeS} \in \Gamma}$) can be removed as an optimization level and included as a set of scenario constraints instead \citep{Carvalho2021}.
Model \eqref{ROmodel} can be expressed as the following robust MIP formulation: \begin{subequations} \label{SingleROmodel0} \begin{align} \bm{P}(\policy, \bigsCe): \quad \max_{\varFirstPlain, Z_{P}} \quad& Z_{P} \label{eq:SingleROobj} \\ & Z_{P} \le \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} \weightRO\varReco & \bm{\oneCeS} \in \Gamma \label{eq:scenarios}\\ &\varFirstPlain \in \mathcal{X} \end{align} \end{subequations} The optimization problem in \eqref{eq:scenarios} is the recourse problem, which, given a first-stage solution $\varFirstPlain \in \mathcal{X}$ and policy $\pi \in \piSet$, finds a recourse solution in the set of binary vectors $\setRecoSym^{\policy}(\varFirst, \oneCe)$ whose cycles and chains under scenario $\bm{\oneCeS} \in \Gamma$ have the largest number of pairs from the first stage. Having the recourse problem in the constraints is challenging since it must be solved once per failure scenario $\bm{\oneCeS} \in \Gamma$ for any fixed decision $\varFirstPlain$, and the number of scenarios can be prohibitively large. To reach a more tractable form, first observe that the directions of the external (Objective \eqref{eq:SingleROobj}) and internal (Constraint \eqref{eq:scenarios}) objectives in Model \eqref{SingleROmodel0} are both maximization. We can then define a binary decision vector $\mathbf{y}^{\bm{\oneCeS}}$ for every scenario $\bm{\oneCeS} \in \Gamma$ whose feasible space corresponds to a recourse solution in $\setRecoSym^{\policy}(\varFirst, \oneCe)$. Thus, an equivalent model for the first-stage formulation (FSF) has the inner optimization problem removed and the set $\setRecoSym^{\policy}(\varFirst, \oneCe)$ included as part of the constraint set for every failure scenario.
The resulting FSF model is as follows: \begin{subequations} \begin{align} \tag{FSF} \bm{P}(\policy, \bigsCe)= \max_{\varFirstPlain, \mathbf{y}^{\bm{\oneCeS}}, Z_{P}} \quad& Z_{P} \label{SingleROmodel}\\ & Z_{P} \le \weightRO\varReco & \bm{\oneCeS} \in \Gamma \label{eq:scenariosBound}\\ &\mathbf{y}^{\bm{\oneCeS}} \in \setRecoSym^{\policy}(\varFirst, \oneCe) & \bm{\oneCeS} \in \Gamma \label{eq:scenariosReco}\\ &\varFirstPlain \in \mathcal{X} \end{align} \end{subequations} The first-stage solution and its corresponding recourse solution can be found in a single-level MIP formulation if the set of scenarios $\Gamma$ can be enumerated. Observe that once a scenario $\bm{\oneCeS} \in \Gamma$ is fixed, $\setRecoSym^{\policy}(\varFirst, \oneCe)$ can be expressed by a set of linear constraints and new decision variables, e.g., the position-indexed formulation for the robust KEP under full recourse proposed in \citet{Carvalho2021}. This formulation follows the structure of Model \eqref{SingleROmodel}. We present this formulation (Appendix \ref{FullFirstSFormulation}) and a variant of it that satisfies the first-stage-only recourse policy (Appendix \ref{sec:fisrtOnlyROFormulation}) in the Online Supplement. To overcome the large size of $\Gamma$, we solve the robust KEP in Model \eqref{SingleROmodel} iteratively by finding a new failure scenario whose associated optimal recourse solution $\mathbf{y}^{\bm{\oneCeS}} \in \setRecoSym^{\policy}(\varFirst, \oneCe)$ recovers the maximum number of pairs from the first stage under that scenario in Model \eqref{SingleROmodel} (Algorithm \ref{Alg:ScenarioGeneration}). We note that adding new failure scenarios can only worsen the optimal objective value in Model \eqref{SingleROmodel}. The goal is to find a scenario that yields a recourse solution with an objective value lower than the current objective value of the robust KEP model, if such a scenario exists.
If a scenario is found, then a new set of recourse decision variables along with Constraints \eqref{eq:scenariosBound} and \eqref{eq:scenariosReco} are added to Model \eqref{SingleROmodel}. Algorithm \ref{Alg:ScenarioGeneration} starts with an empty restricted set of scenarios $\tilde{\Gamma}$. First, $\bm{P}(\policy, \bigsCe)r$ is solved and its incumbent solution $\tilde{\varFirstPlain}$ is retrieved (Step \ref{alg:R0-solve}). Then, the second-stage problem is solved to optimality for an incumbent solution $\tilde{\varFirstPlain}$ under policy $\pi$ (Step \ref{alg:RO-2ndstage}). The solution to the second-stage problem yields a failure scenario $\oneCe^{\star} \in \Gamma$ with the lowest optimal recourse objective value among all possible failure scenarios. We refer to that scenario as the worst-case scenario for a first-stage solution $\tilde{\varFirstPlain} \in \mathcal{X}$. If that recourse objective value is lower than $Z_{P}r$, then $\oneCe^{\star}$ is added to the restricted set of scenarios $\tilde{\Gamma}$. Then, the corresponding Constraints \eqref{eq:scenariosBound} and \eqref{eq:scenariosReco} are added to Model \eqref{SingleROmodel} along with decision vector $\mathbf{y}^{\oneCe^{\star}}$, and Model \eqref{SingleROmodel} is re-optimized. We refer to Model \eqref{SingleROmodel} with a restricted set of scenarios $\tilde{\Gamma}$ as the first-stage problem or first-stage formulation to indicate that the search for the optimal robust solution corresponding to some $\tilde{\varFirstPlain} \in \mathcal{X}$ continues. In principle, as long as a failure scenario $\bm{\oneCeS} \in \Gamma$ leads to an optimal recourse solution with an objective value lower than $Z_{P}r$, that scenario could be added to Model \eqref{SingleROmodel}. However, for benchmark purposes, at Step \ref{alg:RO-2ndstage} the worst-case scenario is found for a given first-stage solution $\tilde{\varFirstPlain}$ under policy $\pi$.
If the optimal objective value of the second-stage problem equals $Z_{P}r$, then an optimal robust solution is returned. \begin{algorithm}[tbp] \textbf{Input:} { A policy $\pi \in \piSet$ and restricted set of scenarios $\tilde{\Gamma}$, $\tilde{\Gamma}:= \emptyset$}\\ \textbf{Output:} { Optimal robust solution $\varFirstPlain^{\star}$} \begin{algorithmic}[1] \REPEAT \STATE \label{alg:R0-solve} Solve $\bm{P}(\policy, \bigsCe)r$ and obtain optimal solution $\tilde{\varFirstPlain}$ \\ \IF{$\bm{\min}_{\bm{\oneCeS} \in \Gamma}\bm{\max}_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)c} {\mathbf{w}^{\oneCe}(\varFirst)}^{\top}c\mathbf{y} < Z_{P}r$} \label{alg:RO-2ndstage} \STATE $\tilde{\Gamma} \gets \tilde{\Gamma} \cup \{\oneCe^{\star}\}$, add recourse decision vector $\mathbf{y}^{\oneCe^{\star}}$, Constraints \eqref{eq:scenariosBound}, \eqref{eq:scenariosReco} and go to Step \ref{alg:R0-solve} \ENDIF \UNTIL{$\bm{\min}_{\bm{\oneCeS} \in \Gamma}\bm{\max}_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)c} {\mathbf{w}^{\oneCe}(\varFirst)}^{\top}c\mathbf{y} \ge Z_{P}r$} \STATE $\varFirstPlain^{\star} \gets \tilde{\varFirstPlain}$ \RETURN $\varFirstPlain^{\star}$ \caption{Solving the robust KEP in Model \eqref{SingleROmodel}} \label{Alg:ScenarioGeneration} \end{algorithmic} \end{algorithm} Algorithm \ref{Alg:ScenarioGeneration} converges in a finite number of iterations due to the finiteness of $\mathcal{X}$ and $\Gamma$. Due to the large set of scenarios, Step \ref{alg:RO-2ndstage} is critical to solving the robust problem efficiently. In subsequent sections, we decompose the second-stage problem into a master problem yielding a failure scenario and a sub-problem (the recourse problem) finding an alternative matching with the maximum number of pairs from the first stage.
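To make the flow of the scenario-generation loop above concrete, the following is a minimal Python sketch in which both MIP solves (the restricted first-stage problem and the second-stage problem) are replaced by exhaustive search over a toy finite instance. The names \texttt{solve\_robust} and \texttt{recourse\_value} are illustrative assumptions, not part of our implementation, and the approach is only viable when $\mathcal{X}$ and $\Gamma$ are tiny.

```python
def solve_robust(X, Gamma, recourse_value):
    """Iterative scenario generation (sketch of the loop above).

    X:              list of candidate first-stage solutions
    Gamma:          finite list of failure scenarios
    recourse_value: callable (x, g) -> optimal recourse objective of x
                    under scenario g (stands in for the recourse MIP)
    """
    restricted = []  # the restricted scenario set, initially empty

    while True:
        # Restricted first-stage problem: maximize the worst recourse
        # value over the scenarios generated so far (inf if none yet).
        def z_restricted(x):
            return min((recourse_value(x, g) for g in restricted),
                       default=float("inf"))

        x_best = max(X, key=z_restricted)
        z_p = z_restricted(x_best)

        # Second stage: worst-case scenario over the FULL set Gamma.
        g_worst = min(Gamma, key=lambda g: recourse_value(x_best, g))
        if recourse_value(x_best, g_worst) >= z_p:
            # No scenario worsens the current bound: x_best is robust-optimal.
            return x_best, recourse_value(x_best, g_worst)
        restricted.append(g_worst)  # add the scenario and re-optimize
```

Each added scenario strictly lowers the restricted bound for the incumbent, so with finite $\mathcal{X}$ and $\Gamma$ the loop terminates.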
\section{New second-stage decompositions} \label{SecondStage} In this section, we present two new decompositions of the second-stage problem that is solved at Step \ref{alg:RO-2ndstage} in Algorithm \ref{Alg:ScenarioGeneration} as part of the iterative solution to the robust KEP. Both consist of a feasibility-seeking master problem that finds a failure scenario and a sub-problem that finds an alternative matching under that scenario. Each decomposition solves a recourse problem in either the second-stage compatibility graph (Appendix \ref{sec:RecourseFormls}, Model \eqref{RecoPModel}) or the transitory graph (Appendix \ref{sec:RecourseFormls}, Model \eqref{RecoPModelexp}). Such models use cycle-chain decision variables as proposed in \citet{Blom2021}. Although the optimal solutions provided by both formulations are identical, we use the structure of each solution set to formulate the master problem of each decomposition accordingly. \subsection{An initial feasibility-seeking formulation for the second stage} \label{sec:basicfeas} In the initial feasibility-seeking formulation, we formulate the second-stage problem as a linear binary program whose optimal solution can be obtained through another linear binary program that finds a feasible failure scenario. There exists one constraint in the feasibility-seeking formulation for every constraint in its optimality-seeking counterpart. Each constraint is associated with an optimal recourse solution under some failure scenario. However, the number of constraints may be exponential, as the formulation requires knowing an optimal recourse solution under all possible failure scenarios in advance. To efficiently solve such formulations (optimality-seeking and feasibility-seeking), we decompose them into a master problem and a sub-problem (the recourse problem).
The master problem finds a failure scenario, which is then used by the recourse problem to obtain an optimal recourse solution; that solution is then added to the master problem and a new failure scenario is found. We show that any feasible failure scenario leads to an optimal recourse solution whose objective value is an upper bound on the optimal objective value of the second-stage problem. Based on this upper bound, we also show that finding the worst-case scenario in the optimality-seeking formulation implies the failure of some of the cycles/chains in \emph{every} optimal recourse solution in that formulation. The feasibility-seeking master problem tries to find a failure scenario satisfying that at least one cycle/chain fails in every recourse solution of the optimality-seeking master problem. When a new failure scenario leads to an optimal recourse solution with the lowest objective value among the ones found in previous iterations, we update the constraints in the feasibility-seeking master problem to increase the minimum number of cycles/chains that should fail in every recourse solution of the optimality-seeking master problem. We refer to this update as a \emph{strengthening} procedure to indicate that the feasible space of failure scenarios in our feasibility-seeking formulation can be reduced based on a new upper bound on the optimal objective value of the second-stage problem. Our solution framework is a pure cutting-plane method, since no new decision variables need to be created when a new constraint is added to the feasibility-seeking master problem. From this point onward, our feasibility-seeking formulations are shown (and referred to) as master problems, since we assume that at the start of the cutting-plane algorithm we have an empty set of failure scenarios $\GammacR \subseteq \Gammac$ for a given $\tilde{\varFirstPlain} \in \mathcal{X}$ rather than the full set.
We show that once the master problem becomes infeasible, an optimal solution to the second-stage problem has been found. In this formulation, a recourse solution is found on the second-stage compatibility graph. Recall that a second-stage compatibility graph excludes those cycles/chains of the transitory graph that fail under the scenario that induced it. Accordingly, we refer to the master problem in the initial feasibility-seeking formulation as MasterSecond (MS). We first present the recourse problem formulation on the second-stage compatibility graph, then we formally define MasterSecond and prove its validity. Finally, we present non-valid inequalities to strengthen the right-hand side of constraints in MasterSecond. \subsubsection{The recourse problem on the second-stage compatibility graph} Given a first-stage solution $\tilde{\varFirstPlain} \in \mathcal{X}$, let $\GammacR \subseteq \Gammac$ be a restricted set of failure scenarios such that $\tilde{\oneCe} \in \GammacR$ induces a second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$ as defined in Section \ref{Preliminaries}. The recourse problem consists of finding a matching of maximum weight in $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$, i.e., a matching with the largest number of pairs selected in the first stage. We define $R^{\policy}(\varFirst, \oneCe)cdte$ as the MIP formulation (Appendix \ref{sec:RecourseFormls}) for the recourse problem in the second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$, and define $\mathbf{y}OpSndSc$ as an optimal recourse solution in $\setRecoSym^{\policy}(\varFirst, \oneCe)cdte$ with objective value ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$.
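For intuition, the recourse problem on the second-stage compatibility graph can be mimicked on a toy instance by brute force over vertex-disjoint subsets of surviving cycles/chains. The sketch below is purely illustrative (the paper solves a MIP with cycle-chain variables instead), considers vertex failures only, and the name \texttt{recourse\_pairs} is an assumption of this example.

```python
from itertools import combinations

def recourse_pairs(cycles, first_stage, failed):
    """Brute-force sketch of the recourse problem.

    cycles:      candidate cycles/chains, each a frozenset of vertices
    first_stage: set of vertices (pairs) selected in the first stage
    failed:      set of vertices that fail under the given scenario
    Returns the largest number of first-stage pairs recoverable by a
    vertex-disjoint set of non-failed cycles/chains.
    """
    # The second-stage compatibility graph keeps only cycles/chains
    # untouched by the failure scenario.
    surviving = [c for c in cycles if not (c & failed)]
    best = 0
    for k in range(len(surviving) + 1):
        for sub in combinations(surviving, k):
            verts = set().union(*sub) if sub else set()
            # Vertex-disjoint iff no vertex is counted twice.
            if sum(len(c) for c in sub) == len(verts):
                best = max(best, len(verts & first_stage))
    return best
```

The exponential enumeration here is exactly what the MIP formulation avoids on realistic instances.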
\subsubsection{The master problem: MasterSecond} We continue to use $\gamma^{\text{v}}$ and $\gamma^{\text{a}}$ as binary vectors representing the failure of a vertex ($\gamma^{\text{v}}_{u} = 1$ if vertex $u \in V$ fails) and of an arc ($\gamma^{\text{a}}_{uaub} = 1$ if arc $(ua,ub) \in A$ fails), respectively. Thus, the master problem for the second-stage problem can be formulated as follows: \begin{subequations} \label{mo:basicfeas} \begin{align} \tag{MS} \text{MasterSecond}(\tilde{\varFirstPlain}) \text{: } &&& \qquad \qquad \text{Find } \bm{\oneCeS} \label{MasterBasic}\\ && \sum_{c \in cSet \cup \mathcal{C}_{\chainCap}: \tilde{\varFirstPlain}_{c} = 1} &\left(\sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(ua,ub) \in A(c)} \gamma^{\text{a}}_{uaub} \right) \ge 1 && \label{eq:AtLeastOneX} \\ && \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \tilde{\oneCe})dte: \hat{\mathbf{y}}_{c} = 1} &\left(\sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(ua,ub) \in A(c)} \gamma^{\text{a}}_{uaub} \right) \ge 1 && \tilde{\oneCe} \in \GammacR; \mathbf{y}OpSndSc \label{eq:AtLeastOne}\\ &&& \qquad \qquad \bm{\oneCeS} \in \Gammac&& \label{basicGamma} \end{align} \end{subequations} where $\mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \tilde{\oneCe})dte = cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc$. Constraints \eqref{eq:AtLeastOneX}-\eqref{eq:AtLeastOne} correspond to covering constraints in a set cover problem, whose decision version is NP-complete \citep{Karp1972}. At least one vertex or arc in \textit{every} recourse solution $\mathbf{y}OpSndSc \in \setRecoSym^{\policy}(\varFirst, \oneCe)cdte$ \textit{must} fail before finding the optimal worst-case scenario $\gammatar \in \Gammac$ for a first-stage solution $\tilde{\varFirstPlain} \in \mathcal{X}$. Observe that at the start, when $\GammacR = \emptyset$, the second-stage compatibility graph is equivalent to the transitory graph, and so the first recourse solution known is $\tilde{\varFirstPlain}$ itself.
Intuitively, we can find a failure scenario that worsens the recourse objective only if at least one element (vertex/arc) in $\tilde{\varFirstPlain}$ fails (Constraint \eqref{eq:AtLeastOneX}). Otherwise, the optimal objective value of the second stage would equal $Z_{P}r$ and we would have found the optimal solution to the robust KEP in Algorithm \ref{Alg:ScenarioGeneration}. Suppose we ``generate'' a failure scenario $\tilde{\oneCe}$ affecting solution $\tilde{\varFirstPlain}$, which we use to induce a new second-stage compatibility graph, and include that scenario in our restricted set $\GammacR$. We then need to find an optimal recourse solution $\mathbf{y}OpSndSc$ under that scenario. However, unless we cause a failure in this new solution (Constraint \eqref{eq:AtLeastOne}), we will not succeed at finding a scenario that maximally decreases the value of $Z_{P}r$ in Model \eqref{SingleROmodel}. We repeat the same procedure until we realize that we can no longer generate a failure affecting the last recourse solution added to MasterSecond without violating the vertex failure budget $r^{\text{v}}$ and arc failure budget $r^{\text{a}}$ implicit in Constraint \eqref{basicGamma}. Therefore, when MasterSecond becomes infeasible, the worst-case scenario must be the one that led to the optimal recourse solution with the fewest pairs from the first stage. Algorithm \ref{Alg:BasicCovering} generates new failure scenarios $\tilde{\oneCe} \in \GammacR$ until \ref{MasterBasic} becomes infeasible, at which point the worst-case scenario, $\gammatar$, and its associated optimal recourse solution value, i.e., the optimal objective value of the second-stage problem, $Z^{\pi}_{Q}(\varFirstc)Star$, have already been found. Algorithm \ref{Alg:BasicCovering} starts with an empty subset of scenarios $\GammacR$ and with an upper bound on the objective value of the second stage ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) = Z_{P}r$.
\ref{MasterBasic} is solved for the first time to obtain a failure scenario $\tilde{\oneCe}$, which is then added to $\GammacR$. At Step \ref{Alg:SolveRecourse}, the recourse formulation $R^{\policy}(\varFirst, \oneCe)cdte$ is solved and an optimal recourse solution $\mathbf{y}OpSndSc$ is obtained with objective value ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$. The recourse solution is then used to create Constraint \eqref{eq:AtLeastOne} in \ref{MasterBasic}. If the objective value of the recourse solution ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$ is less than the current upper bound ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$, then the upper bound is updated, along with the incumbent failure scenario $\oneCe^{\prime}$. At Step \ref{Alg:ResolveMS}, an attempt is made to find a feasible failure scenario in \ref{MasterBasic}. If such a scenario exists, the process is repeated; otherwise, the algorithm returns an optimal solution ($\oneCe^{\star}, Z^{\pi}_{Q}(\varFirstc)Star$). \begin{algorithm}[tbp] \textbf{Input:} { A recourse policy $\pi \in \piSet$, first-stage solution $\tilde{\varFirstPlain} \in \mathcal{X}$ and current KEP robust objective $Z_{P}r$}\\ \textbf{Output:} { Optimal recovery plan value $Z^{\pi}_{Q}(\varFirstc)Star$ and worst-case scenario $\oneCe^{\star} \in \Gammac$} \begin{algorithmic}[1] \STATE $\GammacR = \emptyset$; ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) \gets Z_{P}r$ \STATE Solve \ref{MasterBasic} with Constraint \eqref{eq:AtLeastOneX} to obtain scenario $\tilde{\oneCe}$ \\ \WHILE{MasterSecond$(\tilde{\varFirstPlain})$ is feasible} \STATE $\GammacR \gets \GammacR \cup \{\tilde{\oneCe}\}$ \STATE \label{Alg:SolveRecourse} Solve $R^{\policy}(\varFirst, \oneCe)cdte$ to obtain objective value ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$ and recourse solution $\mathbf{y}OpSndSc$; create Constraint \eqref{eq:AtLeastOne}\\ \IF {${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) < {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$} \STATE
\label{Alg:UpdateUB} ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) \gets {Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$, $\oneCe^{\prime} \gets \tilde{\oneCe}$ \ENDIF \STATE \label{Alg:ResolveMS} Attempt to solve MasterSecond$(\tilde{\varFirstPlain})$ to get a new candidate scenario $\tilde{\oneCe}$\\ \ENDWHILE \STATE $\oneCe^{\star} \gets \oneCe^{\prime}$, $Z^{\pi}_{Q}(\varFirstc)Star \gets {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ \RETURN $\oneCe^{\star}$, $Z^{\pi}_{Q}(\varFirstc)Star$ \caption{Solving the second-stage problem (Model \eqref{ROmodel}, Step \ref{alg:RO-2ndstage} in Algorithm \ref{Alg:ScenarioGeneration}) using \ref{MasterBasic}} \label{Alg:BasicCovering} \end{algorithmic} \end{algorithm} To prove the validity of Algorithm \ref{Alg:BasicCovering}, we state the following: \begin{proposition} \label{prop:BCproof} Algorithm \ref{Alg:BasicCovering} returns the optimal objective value of the second stage $Z^{\pi}_{Q}(\varFirstc)Star$ and worst-case scenario $\gammatar \in \Gammac$ for a first-stage decision $\tilde{\varFirstPlain} \in \mathcal{X}$. \end{proposition} \proof In the first part of the proof, we show that the optimal objective value of the recourse problem, ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$, regardless of the scenario $\tilde{\oneCe} \in \Gammac$, is an upper bound on the optimal objective value of the second stage for a given $\tilde{\varFirstPlain} \in \mathcal{X}$. In the second part, we show that the second-stage problem can be decomposed into an optimality-seeking non-linear binary program which can be linearized by the iterative addition of recourse solutions. We then show that the constraints in the binary program have a one-to-one correspondence with those in \ref{MasterBasic}. Part I.
Observe that given a candidate solution $\tilde{\varFirstPlain} \in \mathcal{X}$ and policy $\pi \in \piSet$ \begin{align*} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)cdte} \weightRO\tilde{\varReco} &\ge \min_{\bm{\oneCeS} \in \Gammac} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)} \weightRO\tilde{\varReco} & \tilde{\oneCe} &\in \Gammac \\ \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \tilde{\oneCe}): \mathbf{y}OpSndSc_{c} = 1} \mathbf{w_{\match}}(\varFirst)c &\ge Z^{\pi}_{Q}(\varFirstc)Star & \tilde{\oneCe} &\in \Gammac; \mathbf{y}OpSndSc \end{align*} That is, the optimal objective value of a recourse solution, $\sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \tilde{\oneCe}): \mathbf{y}OpSndSc_{c} = 1} \mathbf{w_{\match}}(\varFirst)c = {Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$, regardless of the scenario $\tilde{\oneCe} \in \Gammac$, is at least as large as the smallest value that $Z^{\pi}_{Q}(\varFirstc)Star$ can reach. Part II. In what follows, $\mathbbm{1}_{c,\bm{\oneCeS}}$ is an indicator variable that takes on value one if cycle/chain $c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \oneCe)$ fails under scenario $\bm{\oneCeS} \in \Gammac$, and zero otherwise. Then, we define $\alpha_{c}$ as a binary decision variable for a cycle/chain $c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$ in the transitory graph that takes on value one if such cycle/chain fails, and zero otherwise. We define $\setRecoSym^{\policy}(\varFirst, \oneCe)Trcdte$ as the set of binary vectors representing a recourse solution in the transitory graph and denote by $\mathbf{y}c$ a feasible recourse solution, as opposed to its decision vector counterpart $\mathbf{y}$. In Appendix \ref{Alg:CyChrecourse} we present a procedure to find $cSetSndTrc$ and $\mathcal{C}_{\chainCap}SndTrc$ given a first-stage solution $\tilde{\varFirstPlain} \in \mathcal{X}$.
Note that since all feasible chains of length 1 to $L$ are found, the shortening of a chain when a failure occurs is represented by some $\alpha_{c}$ taking on value zero and another one taking on value one. Then, the second-stage problem (SSP) can be reformulated as follows: \begin{subequations} \begin{align} &\min_{\bm{\oneCeS} \in \Gammac} \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)c} \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \oneCe)} \mathbf{w_{\match}}(\varFirst)c\mathbf{y}_{c} \tag{SSP} \label{eq:SSP}\\ \text{\ref{eq:SSP}} =\min_{\bm{\oneCeS} \in \Gammacdte} \quad &Z^{\pi}_{Q}(\varFirstc) \label{refSndSvOne}\\ & Z^{\pi}_{Q}(\varFirstc) \ge \max_{\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)c} \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \oneCe)} \left(\mathbf{w_{\match}}(\varFirst)c - \mathbf{w_{\match}}(\varFirst)c \mathbbm{1}_{c,\bm{\oneCeS}} \right) \mathbf{y}_{c} \\ \text{\ref{eq:SSP}} =\min_{\bm{\oneCeS} \in \Gammacdte} \quad &Z^{\pi}_{Q}(\varFirstc) \label{refSndSvTwo}\\ & Z^{\pi}_{Q}(\varFirstc) \ge \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \oneCe): \mathbf{y}c_{c} = 1} \left(\mathbf{w_{\match}}(\varFirst)c - \mathbf{w_{\match}}(\varFirst)c \mathbbm{1}_{c,\bm{\oneCeS}} \right) \mathbf{y}c_{c} & \mathbf{y}c \in \setRecoSym^{\policy}(\varFirst, \oneCe)c \end{align} \setcounter{storesubequations}{\value{equation}} \end{subequations} Note that all feasible failure scenarios $\bm{\oneCeS} \in \Gammac$ exist as a combination of vertex and arc failures in cycles/chains of the transitory graph, i.e., $c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$. Therefore, we can model all possible values of the indicator variable $\mathbbm{1}_{c,\bm{\oneCeS}}$ by replacing it with a decision variable $\alpha_c$ that takes on value one if cycle/chain $ c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$ fails and zero otherwise.
The idea is to link this new decision variable to a set of linear constraints such that whenever a vertex/arc in a cycle/chain fails, so does the latter. To this end, we introduce the binary vectors $(\gamma^{\text{v}}, \gamma^{\text{a}}) \in \Gammac$ as defined in Section \ref{subsec:uncertaintyset}. Thus, we include the minimization over the scenarios within the constraint set as follows: \addtocounter{equation}{-1} \begin{subequations}\setcounter{equation}{\value{storesubequations}} \begin{align} \text{\ref{eq:SSP}} =\min \quad &Z^{\pi}_{Q}(\varFirstc) \label{objMPOpt}\\ & Z^{\pi}_{Q}(\varFirstc) \ge \sum_{c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc: \mathbf{y}c_{c} = 1} \left(\mathbf{w_{\match}}(\varFirst)c - \mathbf{w_{\match}}(\varFirst)c \alpha_{c} \right) \mathbf{y}c_{c} & \mathbf{y}c \in \setRecoSym^{\policy}(\varFirst, \oneCe)Trcdte \label{eq:solsTwo}\\ & \alpha_{c} \le \sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(ua,ub) \in A(c)} \gamma^{\text{a}}_{uaub} &c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc \label{eq:Inter}\\ &\bm{\oneCeS} \in \Gammac \label{eq:budget}\\ &\alpha_c \in \{0,1\} & c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc \label{eq:Bound} \end{align} \setcounter{storesubequations}{\value{equation}} \end{subequations} Therefore, Constraints \eqref{eq:Inter} allow $\alpha_c$ to take on value one only when some vertex/arc of the cycle/chain fails, in which case the minimization objective drives $\alpha_c$ to one. Otherwise, $\alpha_c$ is forced to zero and thus the weight of such cycle/chain is counted in Constraints \eqref{eq:solsTwo}. Observe that once every $\alpha_c$ is set to a value (either zero or one), Model \eqref{objMPOpt} is equivalent to solving Models \eqref{refSndSvOne} or \eqref{refSndSvTwo} when $\bm{\oneCeS} = \tilde{\oneCe}$ and the indicator variables $\mathbbm{1}_{c,\bm{\oneCeS}}$ take their values accordingly for all $c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \oneCe)$.
Lastly, Constraint \eqref{eq:budget} requires that the vertex and arc failure budgets are not exceeded. Since there can exist an exponential number of Constraints \eqref{eq:solsTwo}, a natural way to solve Model \eqref{objMPOpt} is by assuming that the first recourse solution is $\tilde{\varFirstPlain}$, which can be obtained in the transitory graph. Then, $\mathbf{y}c = \tilde{\varFirstPlain}$ is used to create the first of Constraints \eqref{eq:solsTwo}. Afterwards, a feasible solution $\tilde{\alpha}$ mapping to some $\tilde{\oneCe} \in \Gammac$ can be found, allowing us to solve a recourse problem with optimal solution $\mathbf{y}OpSndSc$. Then, a new Constraint \eqref{eq:solsTwo} can be created using $\mathbf{y}OpSndSc$ and a new failure scenario $\tilde{\oneCe}$ can be found. This process continues until $Z^{\pi}_{Q}(\varFirstc) = {Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$. Thus, the second-stage problem can be stated as the following second-stage formulation (\ref{objMPOptQ}): \addtocounter{equation}{-1} \begin{subequations}\setcounter{equation}{\value{storesubequations}} \begin{align} \bm{Q}(\policy, \varFirstc) = \min \quad &Z^{\pi}_{Q}(\varFirstc) \tag{SSF} \label{objMPOptQ}\\ & Z^{\pi}_{Q}(\varFirstc) \ge Z_{P}r - \sum_{c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc: \tilde{\varFirstPlain}_{c} = 1} \mathbf{w_{\match}}(\varFirst)c \alpha_{c} \tilde{\varFirstPlain}_{c} & \label{eq:solsTwoX}\\ & Z^{\pi}_{Q}(\varFirstc) \ge {Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) - \sum_{c \in cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc: \mathbf{y}OpSndSc_{c} = 1} \mathbf{w_{\match}}(\varFirst)c \alpha_{c} \mathbf{y}OpSndSc_{c} &\tilde{\oneCe} \in \GammacR; \mathbf{y}OpSndSc \label{eq:solsTwoQ}\\ & \alpha_{c} \le \sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(ua,ub) \in A(c)} \gamma^{\text{a}}_{uaub} &c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc \label{eq:InterQ}\\ &\bm{\oneCeS} \in \Gammac \label{eq:budgetQ}\\
&\alpha_c \in \{0,1\} & c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc \label{eq:BoundQ} \end{align} \end{subequations} Since $Z_{P}r \ge {Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) \ge Z^{\pi}_{Q}(\varFirstc)$, for $Z^{\pi}_{Q}(\varFirstc)$ to be as small as possible, at least one cycle or chain---and thus at least one vertex or arc---must fail in \emph{every} matching associated with an optimal recourse solution (Constraints \eqref{eq:solsTwoX} and \eqref{eq:solsTwoQ}). If it were not for the failure budget limiting the maximum number of failed vertices and arcs (Constraint \eqref{eq:budgetQ}), $Z^{\pi}_{Q}(\varFirstc)$ could reach zero. Note that if Model \eqref{objMPOptQ} is unable to cause the failure of a new optimal recourse solution $\mathbf{y}OpSndSc$, it is because Constraint \eqref{eq:budgetQ} would be violated. Thus, the worst-case scenario $\oneCe^{\star}$ is found when there exists a Constraint \eqref{eq:solsTwoQ} associated with a recourse solution $\mathbf{y}OpSndSc$ with non-failed cycles/chains, or equivalently, when $Z^{\pi}_{Q}(\varFirstc) = {Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$ for some $\tilde{\oneCe} \in \GammacR$. Thus, there is a one-to-one correspondence between (i) Constraint \eqref{eq:solsTwoX} in Model \eqref{objMPOptQ} and Constraint \eqref{eq:AtLeastOneX} in \ref{MasterBasic}, and (ii) Constraints \eqref{eq:solsTwoQ} in Model \eqref{objMPOptQ} and Constraints \eqref{eq:AtLeastOne} in \ref{MasterBasic}. Therefore, the optimal value of the second stage is the smallest value of ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$ among all scenarios $\tilde{\oneCe} \in \GammacR$ before \ref{MasterBasic} becomes infeasible. $\square$ It is worth noting that a cycle-chain-based formulation similar to \ref{objMPOptQ} was proposed by \citet{Blom2021} for the full-recourse policy under homogeneous failure, using the set of feasible cycles and chains in what we refer to as the first-stage compatibility graph.
However, with our two-stage optimization formulation, Model \eqref{objMPOptQ} can address multiple policies under homogeneous and non-homogeneous failure. \subsubsection{Strengthening constraints in \ref{MasterBasic}} \label{StrengthMS} We now derive a novel family of non-valid inequalities to narrow the search for the worst-case scenario in \ref{MasterBasic}. Unlike valid inequalities, the so-called \textit{non-valid} inequalities cut off feasible solutions \citep{Atamturk2000, Hooker1994}, and therefore are invalid in the standard sense, though optimal solutions are preserved. Observe that every time the value of ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ is updated in Algorithm \ref{Alg:BasicCovering}, we know from Part II of Proposition \ref{prop:BCproof} that ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ could be the optimal objective value of the second-stage problem; if so, \ref{objMPOptQ} would need to ``cause'' the failure of the previously added constraints, i.e., Constraints \eqref{eq:solsTwoX} and \eqref{eq:solsTwoQ}, in such a way that $Z^{\pi}_{Q}(\varFirstc)$ is at most ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$. Thus, there exists a minimum number of cycles/chains that must fail in Constraints \eqref{eq:solsTwoX} and \eqref{eq:solsTwoQ} to achieve this goal, and because those constraints are also represented in the feasibility-seeking master problem, we aim to solve the feasibility-seeking master problem based on our knowledge of the optimality-seeking master problem. Thus, we strengthen the right-hand side of Constraints \eqref{eq:AtLeastOneX} and \eqref{eq:AtLeastOne} by updating the minimum number of vertices/arcs that should fail in each of those constraints whenever a smaller value of ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ is found at Step \ref{Alg:UpdateUB} of Algorithm \ref{Alg:BasicCovering}.
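The right-hand-side update described above reduces to a sort-and-subtract computation: remove the heaviest cycles/chains first until the recourse value drops strictly below the current upper bound. A minimal sketch follows; the function name \texttt{min\_failures} is illustrative, and we assume, as in the surrounding text, that a single vertex/arc failure destroys its whole cycle/chain.

```python
def min_failures(weights, z_recourse, upper_bound):
    """Smallest number t of cycle/chain failures needed so that the
    recourse objective drops strictly below the current upper bound.

    weights:     weights of the cycles/chains in the recourse solution
    z_recourse:  optimal recourse value under the scenario
    upper_bound: current upper bound on the second-stage value
    """
    remaining = z_recourse
    # Greedy: failing the heaviest surviving cycle/chain first
    # minimizes the number of failures required.
    for t, w in enumerate(sorted(weights, reverse=True), start=1):
        remaining -= w
        if remaining < upper_bound:
            return t
    # If the weights sum to z_recourse (as in the paper) and the bound
    # is positive, the loop always returns before reaching this point.
    return len(weights)
```

The returned $t$ is the strengthened right-hand side for the corresponding covering constraint.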
For a recourse solution $\mathbf{y}OpSndSc$, and assuming that the failure of a single vertex/arc causes the failure of the whole cycle or chain, we sort the cycle and chain weights $\mathbf{w_{\match}}(\varFirst)c$ with $c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \tilde{\oneCe})$ such that $\mathbf{y}OpSndSc_c = 1$ in non-increasing order, so that $\mathbf{w_{\match}}(\varFirst)c_1 \ge \mathbf{w_{\match}}(\varFirst)c_2 \ge \dots \ge \mathbf{w_{\match}}(\varFirst)c_{\mid \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \tilde{\oneCe}) \mid}$. Thus, we can now state the following: \begin{proposition} \label{prop:RHSi} Constraints \eqref{eq:AtLeastOneX} and \eqref{eq:AtLeastOne} with right-hand side value $t$ are non-valid inequalities, where $t$ is the smallest index for which the following condition holds: ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) - \sum_{i = 1}^{t} \mathbf{w_{\match}}(\varFirst)c_i < {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$. \end{proposition} \proof Observe that unless optimal, $Z^{\pi}_{Q}(\varFirstc) < {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ for all $\tilde{\oneCe} \in \GammacR$, due to Proposition \ref{prop:BCproof} and Step \ref{Alg:UpdateUB} of Algorithm \ref{Alg:BasicCovering}. Thus, some cycles/chains must fail in each of the analogous constraints of \eqref{eq:AtLeastOneX} and \eqref{eq:AtLeastOne} in Model \eqref{objMPOptQ}, i.e., Constraints \eqref{eq:solsTwoX} and \eqref{eq:solsTwoQ}. Also, observe that Constraints \eqref{eq:InterQ} imply that whenever at least one vertex/arc fails, so does its associated cycle/chain. Therefore, finding the minimum number of cycles/chains that should fail in Constraints \eqref{eq:solsTwoX} and \eqref{eq:solsTwoQ} to satisfy $Z^{\pi}_{Q}(\varFirstc) < {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ implies finding the minimum number of vertices/arcs that should fail in Constraints \eqref{eq:AtLeastOneX} and \eqref{eq:AtLeastOne}.
Thus, sorting the cycle/chain weights in non-increasing order, and then subtracting them in that order from ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$ until the resulting value is strictly less than ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$, yields a valid lower bound on the number of cycles/chains that must fail. $\square$ \subsection{An expanded feasibility-seeking decomposition} \label{sec:ExtFeasForm} Next, we introduce the second novel decomposition for the second-stage problem. This decomposition ``expands" the recourse solutions to allow failed cycles and chains, with the goal of generating failure scenarios that speed up the convergence of our master problem to infeasibility and thus to optimality. To this end, we make use of the transitory graph, as opposed to the second-stage compatibility graph, to find optimal recourse solutions. \subsubsection{The recourse problem on the transitory graph} So far, we have solved the recourse problem using the second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$. However, an optimal solution to the recourse problem $R^{\policy}(\varFirst, \oneCe)cdte$ can also be found in the transitory graph $D^{\pi}(\varFirstPlain)c$ by fitting failed cycles/chains into the solution given by $R^{\policy}(\varFirst, \oneCe)cdte$. Fitting failed cycles/chains into recourse solutions with non-failed cycles/chains was considered by \cite{Blom2021} to lift the constraints in their optimality-seeking second-stage formulation. Such a formulation is referred to as ``the attack generation problem'' in their work. Our approach borrows a modified objective function to find recourse solutions in \ref{RecoP:Objexp}. Unlike their work, our master problems generalize to homogeneous/non-homogeneous failure and multiple recourse policies. 
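The strengthened right-hand side of Proposition \ref{prop:RHSi} admits a direct greedy computation over the sorted weights. The following is a minimal stand-alone sketch in Python; the function name and argument types are illustrative and not part of our formulations.

```python
def min_failures(recourse_value, weights, upper_bound):
    """Smallest number t of cycles/chains whose failure drives the
    recourse value strictly below the current second-stage upper bound,
    removing the heaviest components first."""
    remaining = recourse_value
    # Sort the component weights in non-increasing order.
    for t, w in enumerate(sorted(weights, reverse=True), start=1):
        remaining -= w
        if remaining < upper_bound:
            return t
    return None  # failing every component still cannot beat the bound


# A recourse value of 10 built from components of weight 4, 3, 2 and 1:
# to drop strictly below an upper bound of 6, the two heaviest
# components (4 and 3) must fail, so t = 2.
t = min_failures(10, [2, 4, 1, 3], 6)
```

The same routine applies verbatim to Proposition \ref{prop:RHSExtdi}; only the set of weights changes.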
Although an optimal recourse solution in the transitory graph has the same objective value as an optimal recourse solution in the second-stage compatibility graph (failed cycles/chains do not increase the recourse objective value), as we will show, it can prevent some dominated scenarios from being explored, and therefore help reduce the number of times the recourse problem is re-solved in Algorithm \ref{Alg:BasicCovering}. A failure scenario $\bm{\oneCeS}Prime \in \Gammac$ dominates another failure scenario $\bm{\oneCeS}NoPrime \in \Gammac$ if $Z^{\star}_{R}(\varFirstPlain,\bm{\oneCeS}Prime) \le Z^{\star}_{R}(\varFirstPlain, \bm{\oneCeS}NoPrime)$. We define $R^{\policy}(\varFirst, \oneCe)expc$ as the recourse problem solved in the transitory graph whose optimal recourse solutions are also optimal with respect to $R^{\policy}(\varFirst, \oneCe)cdte$ but may allow some failed cycles/chains in the solution. We refer the reader to Appendix \ref{sec:RecourseFormls} for a proof, adapted from \citet{Blom2021}, that optimal recourse solutions to $R^{\policy}(\varFirst, \oneCe)expc$ are also optimal to $R^{\policy}(\varFirst, \oneCe)cdte$. Thus, we let $\psi^{\star\tilde{\oneCe}} \in \setRecoSym^{\policy}(\varFirst, \oneCe)Trcdte$ be an optimal recourse solution to $R^{\policy}(\varFirst, \oneCe)expc$ in the transitory graph that is also optimal to $R^{\policy}(\varFirst, \oneCe)cdte$ under scenario $\tilde{\oneCe} \in \GammacR$. \subsubsection{The master problem: MasterTransitory} In the basic reformulation, we assumed the recourse solutions correspond to matchings with non-failed components only. In our expanded formulation, we assume that recourse solutions can be expanded to fit some failed cycles/chains by solving the recourse problem in the transitory graph $D^{\pi}(\varFirstPlain)c$. 
We let $\psi^{\star\tilde{\oneCe}}cdte \subseteq \psi^{\star\tilde{\oneCe}}$ be the subset of the optimal recourse solution $\psi^{\star\tilde{\oneCe}}$ that has no failed cycles/chains and thus corresponds to a feasible solution in $\setRecoSym^{\policy}(\varFirst, \oneCe)cdte$. Therefore, the expanded feasibility-seeking reformulation, which we call MasterTransitory, is expressed as follows: \begin{subequations} \label{mo:extendedfeas} \begin{align} \tag{MT} \text{MasterTransitory}(\tilde{\varFirstPlain}) \text{: } &&& \qquad \qquad \text{Find } \bm{\oneCeS} \label{MasterExp}\\ && \sum_{c \in cSet \cup \mathcal{C}_{\chainCap}: \tilde{\varFirstPlain}_{c} = 1} &\left(\sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(ua,ub) \in A(c)} \gamma^{\text{a}}_{uaub} \right) \ge 1 && \label{eq:AtLeastOneXExp}\\ && \sum_{c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \tilde{\oneCe})dte: \mathbf{y}OpSndSc_{c} = 1} &\left(\sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(ua,ub) \in A(c)} \gamma^{\text{a}}_{uaub} \right) \ge 1 && \tilde{\oneCe} \in \GammacR; \psi^{\star\tilde{\oneCe}}cdte \label{eq:AtLeastOneExp}\\ && \sum_{c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc: \mathbf{y}OpSndSc_{c} = 1} &\left(\sum_{u \in V(c)} \gamma^{\text{v}}_{u}+ \sum_{(ua,ub) \in A(c)} \gamma^{\text{a}}_{uaub} \right) \ge H(\tilde{\oneCe})ite && \ \tilde{\oneCe} \in \GammacR; \psi^{\star\tilde{\oneCe}} \label{eq:AtLeastOneRHSExp}\\ &&& \qquad \qquad \bm{\oneCeS} \in \Gamma&& \label{GammaExp} \end{align} \end{subequations} Constraints \eqref{eq:AtLeastOneXExp} and \eqref{eq:AtLeastOneExp} are equivalent to Constraints \eqref{eq:AtLeastOneX} and \eqref{eq:AtLeastOne}. Constraints \eqref{eq:AtLeastOneRHSExp} require that a combination of at least $H(\tilde{\oneCe})ite$ vertices and arcs fail in an expanded recourse solution $\psi^{\star\tilde{\oneCe}} \in \setRecoSym^{\policy}(\varFirst, \oneCe)Expc$. Proposition \ref{prop:RHSExtdi} defines the value of $H(\tilde{\oneCe})ite$. 
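Extracting the non-failed part $\psi^{\star\tilde{\oneCe}}cdte$ from an expanded recourse solution amounts to discarding every cycle/chain that contains a failed vertex or arc. Below is a minimal Python sketch, with each cycle/chain represented as a (vertex set, arc set) pair; the data structures are illustrative only.

```python
def non_failed_part(solution, failed_vertices, failed_arcs):
    """Keep only the cycles/chains whose vertices and arcs all survive
    the failure scenario."""
    return [
        (vertices, arcs)
        for vertices, arcs in solution
        if not (vertices & failed_vertices) and not (arcs & failed_arcs)
    ]


expanded = [
    ({1, 2}, {(1, 2), (2, 1)}),  # survives the scenario below
    ({3, 4}, {(3, 4), (4, 3)}),  # fails: vertex 4 is removed
]
kept = non_failed_part(expanded, failed_vertices={4}, failed_arcs=set())
```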
The goal of \ref{MasterExp} is to enforce the failure of two different recourse solutions with identical objective values under scenario $\tilde{\oneCe} \in \GammacR$, i.e., a recourse solution found in the second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$ and a solution found in the transitory graph $D^{\pi}(\varFirstPlain)c$. For a recourse solution $\psi^{\star\tilde{\oneCe}} \in \setRecoSym^{\policy}(\varFirst, \oneCe)Expc$, again assuming that the failure of a vertex/arc completely causes the failure of a cycle or chain, we sort the cycle and chain weights $\mathbf{w_{\match}}(\varFirst)c$ $\forall \mathbf{y}_c = 1$ with $c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$, in non-increasing order so that $\mathbf{w_{\match}}(\varFirst)c_1 \ge \mathbf{w_{\match}}(\varFirst)c_2 \ge \dots \ge \mathbf{w_{\match}}(\varFirst)c_{\mid cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc \mid}$. Thus, we can now state: \begin{proposition} \label{prop:RHSExtdi} $H(\tilde{\oneCe})ite = t$ yields a non-valid inequality, where $t$ is the smallest index for which the following condition holds: ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) - \sum_{i = 1}^{t} \mathbf{w_{\match}}(\varFirst)c_i < {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$. \end{proposition} \proof Follows by the same arguments given for Proposition \ref{prop:RHSi}. $\square$ An example demonstrating that \ref{MasterExp} can yield failure scenarios that dominate those in \ref{MasterBasic} can be found in Appendix \ref{Example}. \subsubsection{Dominating scenarios} \label{DominatingScenarios} Next, we derive another novel family of non-valid inequalities based on dominating scenarios. We say that a failure scenario $\bm{\oneCeS}Prime \in \Gammac$ dominates another failure scenario $\bm{\oneCeS}NoPrime \in \Gammac$ when ${Z}^{\pi,\star}_{R}(\varFirst, \oneCe)iteExPrime \le {Z}^{\pi,\star}_{R}(\varFirst, \oneCe)iteEx$. 
Let $ I(\bm{\oneCeS}Prime)$ and $I(\bm{\oneCeS}NoPrime)$ be the number of failed vertices and arcs under their corresponding scenarios. Moreover, let $cchainSetPrimeX \subseteq cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$ and $cchainSetNoPX \subseteq cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$ be the sets of feasible cycles and chains that fail in the transitory graph $D^{\pi}(\varFirstPlain)c$ under scenarios $\bm{\oneCeS}Prime$ and $\bm{\oneCeS}NoPrime$, respectively. We then have the following result: \begin{proposition} \label{prop:dominance} If $cchainSetNoPX \subseteq cchainSetPrimeX$, then $\bm{\oneCeS}Prime$ dominates $\bm{\oneCeS}NoPrime$ and the following dominance inequality is valid for the second-stage problem: \begin{align} \label{eq:dominance} \sum_{u \in V: \bm{\oneCeS}NoPrime_{u} = 1} \gamma^{\text{v}}_{u} + \sum_{(ua,ub) \in A: \bm{\oneCeS}NoPrime_{uaub} = 1} \gamma^{\text{a}}_{uaub} \le I(\bm{\oneCeS}NoPrime) \left(I(\bm{\oneCeS}Prime) - \sum_{u: \bm{\oneCeS}Prime_{u} = 1} \gamma^{\text{v}}_{u} - \sum_{(ua,ub): \bm{\oneCeS}Prime_{uaub} = 1} \gamma^{\text{a}}_{uaub} \right) \end{align} \end{proposition} \proof By assumption, $cchainSetNoPX \subseteq cchainSetPrimeX$; thus, all cycles and chains that fail under $\bm{\oneCeS}NoPrime$ also fail under $\bm{\oneCeS}Prime$, which means that ${Z}^{\pi,\star}_{R}(\varFirst, \oneCe)iteExPrime \le {Z}^{\pi,\star}_{R}(\varFirst, \oneCe)iteEx$. Therefore, $\bm{\oneCeS}Prime$ dominates scenario $\bm{\oneCeS}NoPrime$. $\square$ Finding all $\bm{\oneCeS}NoPrime, \bm{\oneCeS}Prime \in \Gammac$ satisfying Proposition \ref{prop:dominance} is not straightforward, and there may be too many constraints of type \eqref{eq:dominance} to feed into our master problem formulations. Therefore, we explore two alternatives to separate these dominating-scenario cuts. 
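Before detailing the two separation strategies, note that the sufficient condition of Proposition \ref{prop:dominance} is a one-line set-inclusion test once the failed cycles/chains of each scenario are known. A minimal Python sketch (the scenario representations are illustrative):

```python
def dominates(failed_under_prime, failed_under_plain):
    """Scenario gamma' dominates scenario gamma when every feasible
    cycle/chain failing under gamma also fails under gamma'."""
    return failed_under_plain <= failed_under_prime


# gamma' destroys cycles {c1, c2, c3}; gamma destroys only {c1, c3},
# so gamma' dominates gamma but not the other way around.
forward = dominates({"c1", "c2", "c3"}, {"c1", "c3"})
backward = dominates({"c1", "c3"}, {"c1", "c2", "c3"})
```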
The first consists of identifying a subset of scenarios that satisfy constraint \eqref{eq:dominance} a priori, which we call adjacent-failure separation. The second consists of attempting to solve \ref{MasterBasic} and \ref{MasterExp} via a heuristic and then ``discovering'' dominating scenarios on the fly, which we call single-vertex-arc separation. We refer to this approach as a heuristic because it may not always find a feasible failure scenario satisfying the constraints in \ref{MasterBasic} or \ref{MasterExp}. \paragraph{Adjacent-failure separation} A vertex failure in the transitory graph $D^{\pi}(\varFirstPlain)$ is equivalent to the failure of that vertex and the failure of any arc adjacent to that vertex, since in both cases the failed cycles and chains are the same. Thus, for every vertex $u \in V$, we can build a scenario where the only non-zero value in vector $\bm{\oneCeS}NoPrime^{\prime\text{v}}$ is $\bm{\oneCeS}NoPrime^{\prime\text{v}}_{u} = 1$, which dominates the scenario where all arcs either leave $u$, i.e., $\bm{\oneCeS}NoPrime^{\text{a}}_{uaub} = 1$ or point towards it, i.e., $\bm{\oneCeS}NoPrime^{\text{a}}_{uaubrev} = 1$. Note that the following constraints satisfy Proposition \ref{prop:dominance}: \begin{align} \label{eq:adjacent} \sum_{(ua,ub) \in A} \gamma^{\text{a}}_{uaub} + \sum_{(ua,ub)rev \in A} \gamma^{\text{a}}_{uaubrev} \le r^{\text{a}} \left(1 - \gamma^{\text{v}}_{u} \right)&& u \in V \end{align} That is, if a vertex fails, the arcs adjacent to it get disconnected from the graph. Therefore, the objective value of a recourse problem with a failure scenario where an arc adjacent to a failed vertex also fails is equivalent to the objective value obtained when we consider the scenario with only the vertex failure. Thus, Constraints \eqref{eq:adjacent} can be added to \ref{MasterBasic} and \ref{MasterExp} before the start of Algorithm \ref{Alg:BasicCovering}. 
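The equivalence underlying Constraints \eqref{eq:adjacent}, namely that failing a vertex destroys exactly the cycles/chains destroyed by failing all of its incident arcs, can be verified on a toy instance. A minimal Python sketch follows; the graph data are illustrative.

```python
def killed_cycles(cycle_pool, failed_vertices, failed_arcs):
    """Names of the cycles/chains destroyed by a failure scenario."""
    return {
        name
        for name, (vertices, arcs) in cycle_pool.items()
        if vertices & failed_vertices or arcs & failed_arcs
    }


# Two 2-cycles sharing vertex 1.
pool = {
    "c12": ({1, 2}, {(1, 2), (2, 1)}),
    "c13": ({1, 3}, {(1, 3), (3, 1)}),
}
incident_to_1 = {(1, 2), (2, 1), (1, 3), (3, 1)}
by_vertex = killed_cycles(pool, {1}, set())          # fail vertex 1
by_arcs = killed_cycles(pool, set(), incident_to_1)  # fail its arcs
```

Both scenarios destroy the same cycles, which is why the vertex-failure scenario dominates the all-incident-arcs scenario.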
\paragraph{Single-vertex-arc separation} \label{vxtarcsep} Suppose we have a set of proposed failed arcs $A(\bm{\oneCeS}NoPrime^{\text{a}})$ such that $\bm{\oneCeS}NoPrime^{\text{a}}_{uaub} = 1 \ \forall {(ua,ub)} \in A(\bm{\oneCeS}NoPrime^{\text{a}})$, and a set of proposed failed vertices, $V(\bm{\oneCeS}NoPrime^{\text{v}})$, such that $\bm{\oneCeS}NoPrime^{\text{v}}_{u} = 1 \ \forall u \in V(\bm{\oneCeS}NoPrime^{\text{v}})$. The second-stage compatibility graph then corresponds to $D^{\pi}(\varFirstPlain, \bm{\oneCeS})pr = (VSndBe \setminus V(\bm{\oneCeS}NoPrime^{\text{v}}), ASndBe \setminus A(\bm{\oneCeS}NoPrime^{\text{a}}))$. Then, the following two cases also satisfy Proposition \ref{prop:dominance}: \begin{enumerate} \item \label{CaseOne} Suppose there is a candidate pair $\bar{v} \in VSndBe \setminus V(\bm{\oneCeS}NoPrime^{\text{v}})$ and $cchainSetXvtx$ is the set of feasible cycles and chains in $D^{\pi}(\varFirstPlain, \bm{\oneCeS})pr$ that include vertex $\bar{v}$. Then, if $cchainSetXvtx = \emptyset$, scenario $\bm{\oneCeS}NoPrime$ dominates scenario $\bm{\oneCeS}NoPrime \cup \{\bar{v}\}$ and thus, for ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ to decrease, another vertex $\bar{v}$ should be proposed to fail. \item \label{CaseTwo} Suppose there is a candidate arc $\bar{a} \in ASndBe \setminus A(\bm{\oneCeS}NoPrime^{\text{a}})$ and $cchainSetXarc$ is the set of feasible cycles and chains in $D^{\pi}(\varFirstPlain, \bm{\oneCeS})pr$ that include arc $\bar{a}$. Then, if $cchainSetXarc = \emptyset$, scenario $\bm{\oneCeS}NoPrime$ dominates scenario $\bm{\oneCeS}NoPrime \cup \{\bar{a}\}$ and thus another arc $\bar{a}$ should be proposed to fail if ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ is to be decreased. \end{enumerate} We perform the single-vertex-arc separation within a heuristic while attempting to find a feasible failure scenario satisfying the constraints in \ref{MasterBasic} and \ref{MasterExp}, as detailed in the next section. 
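The two single-vertex-arc separation cases reduce to the same test: a candidate element is worth proposing for failure only if some cycle/chain that survives the current scenario still uses it. A minimal Python sketch with illustrative data structures (the real procedure operates on $D^{\pi}(\varFirstPlain, \bm{\oneCeS})pr$):

```python
def is_useful_candidate(candidate, cycle_pool, failed_vertices, failed_arcs):
    """Return True if some cycle/chain surviving the current scenario
    still uses the candidate vertex or arc; otherwise, adding the
    candidate to the scenario yields a dominated scenario."""
    for vertices, arcs in cycle_pool:
        survives = not (vertices & failed_vertices) and not (arcs & failed_arcs)
        if survives and (candidate in vertices or candidate in arcs):
            return True
    return False


pool = [
    ({1, 2}, {(1, 2), (2, 1)}),
    ({2, 3}, {(2, 3), (3, 2)}),
]
# With vertex 3 already failed, vertex 1 still appears in a surviving
# cycle, while vertex 4 appears in no cycle at all.
useful_1 = is_useful_candidate(1, pool, {3}, set())
useful_4 = is_useful_candidate(4, pool, {3}, set())
```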
\section{Solution algorithms for the second-stage problem} \label{sec:HSAs} This section presents solution algorithms for the second-stage problem. We use the term \textit{hybrid} solution algorithm for the combination of a linear optimization solver and a heuristic that together attempt to solve the second-stage problem. Our goal is to specialize the steps of Algorithm \ref{Alg:BasicCovering} to improve the efficiency of our final solution approaches. \subsection{A feasibility-based solution algorithm for MasterBasic} \label{FBSA-MB} Algorithm \ref{Alg:BasicHybrid} is the first of our two feasibility-based solution algorithms (FBSAs); it has \ref{MasterBasic} as master problem and is referred to as FBSA\_MB. FBSA\_MB requires a policy $\pi \in \piSet$ and a first-stage solution $\tilde{\varFirstPlain}$ found at Step \ref{alg:R0-solve} of Algorithm \ref{Alg:ScenarioGeneration}. We let $\textbf{ToMng}(\cdot)$ denote a function that ``extracts'' the matching from a recourse solution. This matching is then added to a set $\mathcal{M}$ containing all matchings associated with the recourse solutions found up to the current iteration. The procedure \textbf{Heuristic}($\mathcal{M}$)---whose algorithmic details we present shortly---attempts to find a failure scenario satisfying the constraints in \ref{MasterBasic}. The heuristic returns a tuple $(\incumCe, \cover)$ with two outputs: a candidate failure scenario $\oneCe^{\prime}$ and a boolean variable $\textit{cover}$ that indicates whether $\oneCe^{\prime}$ is feasible to \ref{MasterBasic}. If $\textit{cover} = \text{\textbf{true}}$, MasterSecond$(\tilde{\varFirstPlain})$ is feasible. Algorithm \ref{Alg:BasicHybrid} starts with an empty subset of scenarios $\GammacR$ and with an upper bound on the objective value of the second stage, ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) = Z_{P}r$. 
At Step \ref{HeuTrue}, because there is only one matching in $\mathcal{M}$, it is trivial to heuristically choose a failure scenario that satisfies Constraint \eqref{eq:AtLeastOneX}, and thus the boolean variable $\textit{cover}$ is \textbf{true}. This failure scenario is then added to $\GammacR$. At Step \ref{ColGenAlg3}, $\textbf{ColGen}(R^{\policy}(\varFirst, \oneCe)cdte)$ finds an upper bound $Z^{\star}_{\text{cg}}UB$ on the optimal value of the recourse problem $R^{\policy}(\varFirst, \oneCe)cdte$ through Column Generation (CG). CG is a technique for solving linear programs with a large number of variables or columns \citep{Barnhart1998}. For $\textbf{ColGen}(R^{\policy}(\varFirst, \oneCe)cdte)$, the columns correspond to cycles and chains in the set $cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc$. CG considers two problems: a master problem and one or more subproblems. In Algorithm \ref{Alg:BasicHybrid}, the master problem is the linear programming relaxation of \ref{RecoP:Obj} and starts with no columns. The subproblem consists of finding columns that have the potential to increase the objective value of the master problem, i.e., columns with positive reduced cost (we refer the reader to \citet{Riascos2020} for further details). We add up to 10 cycle columns and up to 10 chain columns at every iteration of the CG algorithm, unless $L \ge 4$, in which case we add up to 5 chain columns. Moreover, if $L \ge 4$, we split the set of chains $\mathcal{C}_{\chainCap}SndSc$ into two equally sized sets. After having unsuccessfully searched for chain columns with positive reduced cost in the first set, we continue the search for such chain columns over all chains in $\mathcal{C}_{\chainCap}SndSc$. Once no more columns with positive reduced cost are found, and thus the optimality of the CG master problem is proven, an upper bound $Z^{\star}_{\text{cg}}UB$ on the optimal value of $R^{\policy}(\varFirst, \oneCe)cdte$ is returned. 
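The column-addition policy just described can be summarized as a simple selection over the priced columns. The sketch below is illustrative only: it assumes columns arrive as (reduced cost, column) pairs, and keeping the columns with the largest reduced costs is one reasonable tie-breaking choice rather than a prescription of our implementation.

```python
def select_columns(cycle_cols, chain_cols, max_chain_length):
    """Keep only columns with positive reduced cost and add up to 10
    cycle columns and up to 10 chain columns per CG iteration
    (5 chain columns when the chain-length cap L is at least 4)."""
    chain_limit = 5 if max_chain_length >= 4 else 10

    def top(cols, limit):
        # Rank the positively priced columns by reduced cost.
        positive = [(rc, col) for rc, col in cols if rc > 0]
        positive.sort(key=lambda pair: pair[0], reverse=True)
        return [col for _, col in positive[:limit]]

    return top(cycle_cols, 10), top(chain_cols, chain_limit)


# Seven priced chain columns, all with positive reduced cost; with
# L = 4, only the five most promising are added.
chains = [(rc, "chain%d" % rc) for rc in (7, 3, 5, 1, 2, 6, 4)]
cycles_added, chains_added = select_columns([], chains, 4)
```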
We then take the decision variables from the optimal basis of the master problem and turn them into binary decision variables to obtain a feasible recourse solution $\tilde{\mathbf{y}}$ with objective value $Z^{\star}_{\text{cg}}$. If $Z^{\star}_{\text{cg}}$ equals $Z^{\star}_{\text{cg}}UB$, then $\tilde{\mathbf{y}}$ is optimal to formulation $R^{\policy}(\varFirst, \oneCe)cdte$. Thus, at Step \ref{SolCGisOPT_Alg3}, $\tilde{\mathbf{y}}$ becomes the optimal recourse solution $\mathbf{y}OpSndS$ and ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$ is updated. Otherwise, we incur the cost of solving \ref{RecoP:Obj} from scratch as a MIP instance to obtain $\mathbf{y}OpSndS$ and ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$. The matching associated with $\mathbf{y}OpSndS$ is then added to $\mathcal{M}$. If the new recourse value ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$ is smaller than the upper bound on the objective value of the second stage, ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$, then we update both ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ and the incumbent failure scenario $\oneCe^{\star}$ that led to that value. Since a new upper bound has been found, the right-hand side of Constraints \eqref{eq:AtLeastOneX}-\eqref{eq:AtLeastOne} is updated following Proposition \ref{prop:RHSi}. At Step \ref{newIteAlg3}, the heuristic tries to find a new failure scenario feasible to \ref{MasterBasic}+Constraints\eqref{eq:adjacent}. If unsuccessful, \ref{MasterBasic}+Constraints\eqref{eq:adjacent} is solved as a MIP instance. If a feasible failure scenario is found, the algorithm continues. Otherwise, the algorithm ends and returns the optimal recovery plan ($Z^{\pi}_{Q}(\varFirstc)Star$, $\oneCe^{\star}$). 
\begin{algorithm}[tbp] \textbf{Input:} { A recourse policy $\pi \in \piSet$, first-stage solution $\tilde{\varFirstPlain} \in \mathcal{X}$ and current KEP robust objective $Z_{P}r$} \\ \textbf{Output:} { Optimal recovery plan value $Z^{\pi}_{Q}(\varFirstc)Star$ and worst-case scenario $\oneCe^{\star} \in \Gammac$} \begin{algorithmic}[1] \STATE $\GammacR = \emptyset$; ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) \gets Z_{P}r$ \STATE Create Constraint \eqref{eq:AtLeastOneX} in \ref{MasterBasic} \STATE $\mathcal{M} \gets \textbf{ToMng}(\tilde{\varFirstPlain})$ \STATE \label{HeuTrue} $(\incumCe, \text{\textbf{true}}) \gets$ \text{\textbf{Heuristic}}($\mathcal{M}$); $\tilde{\oneCe} \gets \oneCe^{\prime}$ \WHILE{MasterSecond$(\tilde{\varFirstPlain})$ is feasible} \STATE $\GammacR \gets \GammacR \cup \{\tilde{\oneCe}\}$ \STATE \label{ColGenAlg3} $(\ZRecoCol, \ZRecoColUB, \tilde{\varReco}) \gets \textbf{ColGen}\left(R^{\policy}(\varFirst, \oneCe)cdte \right)$ \IF{$Z^{\star}_{\text{cg}} = Z^{\star}_{\text{cg}}UB$} \STATE ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) \gets Z^{\star}_{\text{cg}}$ \STATE \label{SolCGisOPT_Alg3} $\mathbf{y}OpSndSc \gets \tilde{\mathbf{y}}$ \ELSE \STATE Solve $R^{\policy}(\varFirst, \oneCe)cdte$ to obtain solution $\mathbf{y}OpSndSc \in \setRecoSym^{\policy}(\varFirst, \oneCe)cdte$ with objective value ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$ \ENDIF \STATE $\mathcal{M} \gets \mathcal{M} \cup \textbf{ToMng}(\mathbf{y}OpSndSc)$ \IF{${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) < {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$} \STATE ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) \gets {Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$; \STATE $\oneCe^{\star} \gets \tilde{\oneCe}$ \STATE Update right-hand side of constraints in \ref{MasterBasic}, \ $\forall \tilde{\oneCe} \in \GammacR$ following Proposition \ref{prop:RHSi} \ENDIF \STATE \label{newIteAlg3} $(\incumCe, \cover) \gets \text{\textbf{Heuristic}}(\mathcal{M})$ \IF{$\textit{cover} = 
\text{\textbf{true}}$} \STATE $\tilde{\oneCe} \gets \oneCe^{\prime}$ \ELSE \STATE Create Constraint \eqref{eq:AtLeastOne} for $\mathbf{y}OpSndSc \in \setRecoSym^{\policy}(\varFirst, \oneCe)cdte$ in \ref{MasterBasic} \STATE Attempt to solve \ref{MasterBasic}+Consts.\eqref{eq:adjacent} to get new failure scenario $\tilde{\oneCe}$ \ENDIF \ENDWHILE \STATE $Z^{\pi}_{Q}(\varFirstc)Star \gets {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$; Return $\oneCe^{\star}$ and $Z^{\pi}_{Q}(\varFirstc)Star$ \caption{FBSA\_MB: A Feasibility-based Solution Algorithm for \ref{MasterBasic}} \label{Alg:BasicHybrid} \end{algorithmic} \end{algorithm} \subsection{A feasibility-based solution algorithm for MasterExp} Algorithm \ref{Alg:EnhancedHybrid} is our second feasibility-based solution algorithm and has \ref{MasterExp} as the master problem of the second stage. We refer to Algorithm \ref{Alg:EnhancedHybrid} as FBSA\_ME. FBSA\_ME requires a recourse policy $\pi \in \piSet$ and a first-stage solution $\tilde{\varFirstPlain} \in \mathcal{X}$. In addition to the notation defined for FBSA\_MB, we introduce some new notation. We refer to $\mathcal{M}exp$ as the set of matchings associated with the optimal recourse solutions to $R^{\policy}(\varFirst, \oneCe)expc$. The CG algorithm, $\textbf{XColGen}\left(R^{\policy}(\varFirst, \oneCe)expc \right)$, returns a tuple with three values: $Z^{\star}_{\text{cg}}UBexp$, $Z^{\star}_{\text{cg}}exp$ and $\tilde{\mathbf{y}}$. $Z^{\star}_{\text{cg}}UBexp$ corresponds to the optimal objective value returned by the master problem of the CG algorithm; $Z^{\star}_{\text{cg}}exp$ is the objective value of the feasible KEP solution obtained after turning the decision variables in the optimal basis of the CG master problem into binaries. Lastly, $\tilde{\mathbf{y}}$ is the feasible recourse solution found by the CG algorithm with objective value $Z^{\star}_{\text{cg}}exp$. 
For $\textbf{XColGen}\left(R^{\policy}(\varFirst, \oneCe)expc \right)$, the master problem is the linear programming relaxation of \ref{RecoP:Obj}. The subproblem also splits $\mathcal{C}_{\chainCap}SndTrc$ in half when $L \ge 4$, and cycle/chain columns are searched and added in the same way as described in Section \ref{FBSA-MB}. At the start of Algorithm \ref{Alg:EnhancedHybrid}, \textbf{Heuristic}($\mathcal{M}$) returns the first failure scenario, which is added to $\GammacR$. At Step \ref{XColAlg4}, the CG algorithm in $\textbf{XColGen}\left(R^{\policy}(\varFirst, \oneCe)expc \right)$ is solved. If $Z^{\star}_{\text{cg}}exp = Z^{\star}_{\text{cg}}UBexp$, then a function $\textbf{TrueVal}(\tilde{\mathbf{y}})$ computes the objective value of $\tilde{\mathbf{y}}$ in terms of the original recourse objective function, i.e., \ref{RecoP:Obj}. If $Z^{\star}_{\text{cg}}exp$ is lower than $Z^{\star}_{\text{cg}}UBexp$, the original recourse problem $R^{\policy}(\varFirst, \oneCe)cdte$ is solved as a MIP instance. The reason for this decision is that $R^{\policy}(\varFirst, \oneCe)expc$ includes a larger set of cycle-and-chain decision variables, possibly leading to scalability issues when the recourse problem is solved as a MIP instance. At Step \ref{UpdateNonFMngSet}, the matching associated with the non-failed cycles and chains in the optimal recourse solution is included in $\mathcal{M}$. If the upper bound ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ is updated, so is the right-hand side of the constraints in \ref{MasterExp}. At Step \ref{newIteAlg4}, the heuristic tries to find a new failure scenario feasible to \ref{MasterExp}+Constraints\eqref{eq:adjacent}. If unsuccessful, \ref{MasterExp}+Constraints\eqref{eq:adjacent} is solved as a MIP instance. If a feasible failure scenario is found, the algorithm continues. Otherwise, the algorithm ends and returns the optimal recovery plan ($Z^{\pi}_{Q}(\varFirstc)Star$, $\oneCe^{\star}$). 
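As noted earlier, failed cycles/chains do not contribute under the original recourse objective \ref{RecoP:Obj}, so $\textbf{TrueVal}(\tilde{\mathbf{y}})$ only needs to credit the non-failed components of an expanded solution. A minimal Python sketch under that assumption (the tuple representation is illustrative):

```python
def true_value(expanded_solution):
    """Objective of an expanded recourse solution under the original
    recourse objective: failed components contribute zero."""
    return sum(weight for weight, failed in expanded_solution if not failed)


# Two surviving components (weights 4 and 2) and one failed (weight 3).
value = true_value([(4, False), (3, True), (2, False)])
```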
\begin{algorithm}[tbp] \textbf{Input:} { A recourse policy $\pi \in \piSet$, a first-stage solution $\tilde{\varFirstPlain} \in \mathcal{X}$ and current KEP robust objective $Z_{P}r$}\\ \textbf{Output:} { Optimal recovery plan value $Z^{\pi}_{Q}(\varFirstc)Star$ and worst-case scenario $\bm{\oneCeS}^{\star} \in \Gammacdte$} \begin{algorithmic}[1] \STATE ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) \gets Z_{P}r$; $\GammacR = \emptyset$ \STATE Create Constraint \eqref{eq:AtLeastOneXExp} in \ref{MasterExp} \STATE $\mathcal{M} \gets \textbf{ToMng}(\tilde{\varFirstPlain})$ \STATE $(\incumCe, \text{\textbf{true}}) \gets$ \text{\textbf{Heuristic}}($\mathcal{M}$); $\tilde{\oneCe} \gets \oneCe^{\prime}$ \WHILE{MasterTransitory$(\tilde{\varFirstPlain})$ is feasible} \STATE \label{AddNewCAlg4} $\GammacR \gets \GammacR \cup \{\tilde{\oneCe}\}$ \STATE \label{XColAlg4} $(\ZRecoCol, \ZRecoColUB, \tilde{\varReco})exp \gets \textbf{XColGen}\left(R^{\policy}(\varFirst, \oneCe)expc \right)$ \IF{$Z^{\star}_{\text{cg}}exp = Z^{\star}_{\text{cg}}UBexp$} \STATE ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) \gets \textbf{TrueVal}(\tilde{\mathbf{y}})$ \STATE $\psi^{\star\tilde{\oneCe}} \gets \tilde{\mathbf{y}}$ \STATE $\psi^{\star\tilde{\oneCe}}cdte \subseteq \psi^{\star\tilde{\oneCe}}$, s.t. 
$\psi^{\star\tilde{\oneCe}}cdte \in \setRecoSym^{\policy}(\varFirst, \oneCe)cdte$ \STATE $\mathcal{M}exp \gets \mathcal{M}exp \cup \textbf{ToMng}(\psi^{\star\tilde{\oneCe}})$ \ELSE \STATE Solve $R^{\policy}(\varFirst, \oneCe)cdte$ to obtain recourse objective value ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$ and recourse solution $\mathbf{y}OpSndSc$ \STATE $\psi^{\star\tilde{\oneCe}}cdte = \mathbf{y}OpSndSc$; $\psi^{\star\tilde{\oneCe}} = \emptyset$ \ENDIF \STATE \label{UpdateNonFMngSet} $\mathcal{M} \gets \mathcal{M} \cup \textbf{ToMng}(\psi^{\star\tilde{\oneCe}}cdte)$ \IF{${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) < {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$} \STATE ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) \gets {Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$ \STATE $\oneCe^{\star} \gets \tilde{\oneCe}$ \STATE Update right-hand side of \eqref{eq:AtLeastOneXExp} and \eqref{eq:AtLeastOneExp} following Proposition \ref{prop:RHSi} and \eqref{eq:AtLeastOneRHSExp} following Proposition \ref{prop:RHSExtdi} \ $\forall \tilde{\oneCe} \in \GammacR$ \ENDIF \label{EndMTUpdateAlg4} \STATE \label{newIteAlg4} $(\incumCe, \cover) \gets \text{\textbf{Heuristic}}(\mathcal{M} \cup \mathcal{M}exp)$ \IF{$\textit{cover} = \text{\textbf{true}}$} \STATE $\tilde{\oneCe} \gets \oneCe^{\prime}$ \ELSE \STATE Create Const.\eqref{eq:AtLeastOneExp} for $\psi^{\star\tilde{\oneCe}}cdte \in \setRecoSym^{\policy}(\varFirst, \oneCe)cdte$ and Const.\eqref{eq:AtLeastOneRHSExp} for $\psi^{\star\tilde{\oneCe}} \in \setRecoSym^{\policy}(\varFirst, \oneCe)Trcdte$ in \ref{MasterExp} \STATE Attempt to solve \ref{MasterExp}+Consts.\eqref{eq:adjacent} to get new failure scenario $\tilde{\oneCe}$ \ENDIF \label{EndnewIteAlg4} \ENDWHILE \STATE $Z^{\pi}_{Q}(\varFirstc)Star \gets {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$; Return $\oneCe^{\star}$ and $Z^{\pi}_{Q}(\varFirstc)Star$ \caption{FBSA\_ME: A Feasibility-based Solution Algorithm for \ref{MasterExp}} \label{Alg:EnhancedHybrid} \end{algorithmic} 
\end{algorithm} \subsection{Hybrid solution algorithms} The main difference between the hybrid algorithms and the feasibility-based ones is the possibility of transitioning from the feasibility-seeking master problems to the optimality-seeking problem \ref{objMPOptQ} after TR iterations. The goal is to obtain a lower bound on the objective value of the second stage, ${\bunderline{\varCov}}^{\pi}_{Q}(\varFirstc)$, that can be compared to ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$ to prove optimality. We refer to these hybrid algorithms as HSA\_ME (Algorithm \ref{Alg:HybridHSA-ME}) and HSA\_MB. The former has \ref{MasterExp} as master problem, whereas the latter has \ref{MasterBasic}. For both hybrid algorithms, we solve the recourse problem $R^{\policy}(\varFirst, \oneCe)expc$, but in the case of HSA\_MB, whenever an attempt to find a failure scenario occurs, it is done with respect to \ref{MasterBasic} + Constraints \eqref{eq:adjacent}. The reason for this approach is to accumulate the recourse solutions given by $R^{\policy}(\varFirst, \oneCe)expc$ and use them to solve \ref{objMPOptQ} when the number of iterations exceeds TR. 
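The control flow of the hybrid algorithms, i.e., feasibility-seeking iterations up to iteration TR and then a switch to the optimality-seeking master \ref{objMPOptQ} until the bounds meet, can be summarized as follows. This is a control-flow sketch only; the stub functions stand in for the actual master and recourse problems and are illustrative.

```python
def hybrid_loop(solve_feasibility_master, solve_optimality_master,
                solve_recourse, upper_bound, tr):
    """Skeleton of the hybrid algorithms: feasibility-seeking
    iterations until iteration TR, then optimality-seeking iterations
    until the lower bound meets the upper bound."""
    lower_bound, iteration = 0.0, 0
    scenario = solve_feasibility_master()
    while scenario is not None and lower_bound < upper_bound:
        iteration += 1
        # Solving the recourse problem can only tighten the upper bound.
        upper_bound = min(upper_bound, solve_recourse(scenario))
        if iteration >= tr:
            lower_bound, scenario = solve_optimality_master()
        else:
            scenario = solve_feasibility_master()
    return upper_bound


# Stub masters: two feasibility-seeking iterations, then the
# optimality-seeking master certifies that the bound 7 is optimal.
_scenarios = iter(["s1", "s2", None])
best = hybrid_loop(
    solve_feasibility_master=lambda: next(_scenarios),
    solve_optimality_master=lambda: (7.0, "s2"),
    solve_recourse=lambda s: {"s1": 8, "s2": 7}[s],
    upper_bound=10,
    tr=2,
)
```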
The HSA\_MB algorithm is the HSA\_ME algorithm (Algorithm \ref{Alg:HybridHSA-ME}) with the following modifications: \begin{itemize} \item Step 2: Create Constraint \eqref{eq:AtLeastOneX} in \ref{MasterBasic} \item Step 5: Change MasterTransitory$(\tilde{\varFirstPlain})$ to MasterSecond$(\tilde{\varFirstPlain})$ in the \texttt{while} loop \item Step 21: Update right-hand side of constraints in \ref{MasterBasic} $\forall \tilde{\oneCe} \in \GammacR$ following Proposition \ref{prop:RHSi} \item Step 27: Create Constraint \eqref{eq:AtLeastOne} for $\psi^{\star\tilde{\oneCe}}cdte \in \setRecoSym^{\policy}(\varFirst, \oneCe)cdte$ in \ref{MasterBasic} \item Step 28: Attempt to solve \ref{MasterBasic}+Constraints\eqref{eq:adjacent} to get new failure scenario $\tilde{\oneCe}$ \end{itemize} \begin{algorithm}[tbp] \textbf{Input:}{ Parameter TR indicating iteration from which \ref{objMPOptQ} is solved and those parameters in Algorithm \ref{Alg:EnhancedHybrid}}\\ \textbf{Output:} { Optimal recovery plan value $Z^{\pi}_{Q}(\varFirstc)Star$ and worst-case scenario $\bm{\oneCeS}^{\star} \in \Gammacdte$} \begin{algorithmic}[1] \STATE ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) \gets Z_{P}r$; ${\bunderline{\varCov}}^{\pi}_{Q}(\varFirstc) = 0$; $\GammacR = \emptyset$ \STATE Create Constraint \eqref{eq:AtLeastOneXExp} in \ref{MasterExp} \STATE $\mathcal{M} \gets \textbf{ToMng}(\tilde{\varFirstPlain})$ \STATE $(\incumCe, \text{\textbf{true}}) \gets$ \text{\textbf{Heuristic}}($\mathcal{M}$); $\tilde{\oneCe} \gets \oneCe^{\prime}$ \WHILE{MasterTransitory$(\tilde{\varFirstPlain})$ is feasible \OR ${\bunderline{\varCov}}^{\pi}_{Q}(\varFirstc) < {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$} \STATE Step \ref{AddNewCAlg4} to \ref{EndMTUpdateAlg4} in Algorithm \ref{Alg:EnhancedHybrid} \IF{$i \ge$ TR} \STATE Create a Const.\eqref{eq:solsTwoQ} per matching in $\mathcal{M}exp$ and solve \ref{objMPOptQ} to obtain ${\bunderline{\varCov}}^{\pi}_{Q}(\varFirstc)$ and failure scenario $\tilde{\oneCe}$ 
\IF{${\bunderline{\varCov}}^{\pi}_{Q}(\varFirstc) = {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$} \STATE $\oneCe^{\star} \gets \tilde{\oneCe}$ \ENDIF \ELSE \STATE Steps \ref{newIteAlg4} to \ref{EndnewIteAlg4} in Algorithm \ref{Alg:EnhancedHybrid} \ENDIF \ENDWHILE \STATE $Z^{\pi}_{Q}(\varFirstc)Star \gets {\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$; Return $\oneCe^{\star}$ and $Z^{\pi}_{Q}(\varFirstc)Star$ \caption{HSA\_ME} \label{Alg:HybridHSA-ME} \end{algorithmic} \end{algorithm} \subsection{Heuristic} We now discuss the heuristic (Algorithm \ref{Alg:Heuristics}) used in the feasibility-based and hybrid algorithms. Algorithm \ref{Alg:Heuristics} requires a set of matchings $W$ as input. The heuristic returns a failure scenario $\oneCe^{\prime}$ and a boolean variable \textit{cover}. If \textit{cover} = \textbf{true}, $\oneCe^{\prime}$ satisfies either \ref{MasterBasic}+Constraints\eqref{eq:adjacent} or \ref{MasterExp}+Constraints\eqref{eq:adjacent}, accordingly. Although the heuristic cannot ensure that $\oneCe^{\prime}$ is always feasible with respect to the feasibility-seeking master problems, it does ensure that (i) Constraints \eqref{eq:adjacent} are satisfied and (ii) $\oneCe^{\prime}$ is not dominated according to the two single-vertex-arc separation cases presented in Section \ref{DominatingScenarios}. We first introduce notation specific to the heuristic, and then detail its steps. A function \textbf{UniqueElms}($W$) returns the set of unique vertices and arcs among all matchings in $W$, which are kept in $E$. With a slight abuse of notation, we say that each vertex/arc $\text{e}_{n} \in E$ with $n = 1, \dots, \lvert E \rvert$ has an associated boolean variable $s_{n}$ and a calculated weight $\text{w}_{n}$. 
The boolean variable $s_{n}$ becomes \textbf{true} if element $\text{e}_{n}$ has been checked, i.e., either it has been proposed for failure in $\oneCe^{\prime}$ or it has been proven that adding $\text{e}_{n}$ to the current failure scenario would result in a dominated scenario according to Section \ref{DominatingScenarios}. Function $\text{\textbf{Weight}}(\text{e}_{n},W)$ returns the weight $\text{w}_{n}$, corresponding to the number of times element $\text{e}_{n}$ is repeated in the set of matchings $W$. Moreover, a function $\text{\textbf{IsNDD}}(\text{e}star_{n})$ determines whether element $\text{e}star_{n}$ is a non-directed donor. At Step \ref{whileHeu}, in each iteration of the \texttt{while} loop, a list $cHeuSet$ of unique, non-checked elements is created, as long as the vertex and arc budgets, $r^{\text{v}}$ and $r^{\text{a}}$ respectively, are not exceeded. If $cHeuSet$ is empty, the algorithm ends at Step \ref{firstEndHeu}. Otherwise, the elements in $cHeuSet$ are sorted in non-increasing order of their weights and, among those with the highest weight, an element $\text{e}star_{n}$ is selected at random. At Step \ref{SepHeu}, the single-vertex-arc separation described in Section \ref{vxtarcsep} is performed if (i) $\text{e}star_{n}$ is a pair, and (ii) at least one arc or at least two vertices have already been proposed for failure. We check that $\text{e}star_{n}$ is a pair because, by definition, a non-directed donor does not belong to a cycle. Moreover, when a non-directed donor is removed from the transitory graph or the second-stage compatibility graph, its removal trivially causes the failure of all chains triggered by it. Thus, the boolean variable \textit{ans} remains \textbf{true} if $\text{e}star_{n}$ is a non-directed donor, or if the vertex/arc leads to a dominated scenario under neither Case \ref{CaseOne} nor Case \ref{CaseTwo}. At Step \ref{checked}, $\text{e}star_{n}$ is labeled as checked by setting $s_{n} = \text{\textbf{true}}$.
Then, if \textit{ans} = \textbf{true}, it is checked whether $\text{e}star_{n}$ is a vertex or an arc, and the proposed failure scenario $\bar{\bm{\gamma}}$ is updated accordingly. If element $\text{e}star_{n}$ is a vertex, at Step \ref{adjArcstHeu} the set of arcs adjacent to $\text{e}star_{n}$ is labeled as checked, so that $\bar{\bm{\gamma}}$ satisfies Constraints \eqref{eq:adjacent}. We say that scenario $\bar{\bm{\gamma}}$ covers all matchings in $W$ if, for every matching in that set, the number of vertices/arcs proposed for failure satisfies Proposition \ref{prop:RHSi} (for Constraints \eqref{eq:AtLeastOneX}-\eqref{eq:AtLeastOne} or Constraints \eqref{eq:AtLeastOneXExp}-\eqref{eq:AtLeastOneExp}) and Proposition \ref{prop:RHSExtdi} (for Constraints \eqref{eq:AtLeastOneRHSExp}), accordingly. If $\bar{\bm{\gamma}}$ covers all matchings in $W$, then \textit{cover} = \textbf{true}. The heuristic is greedy in the sense that, even when \textit{cover} = \textbf{true}, it attempts to use up the vertex and arc budgets, as checked at Step \ref{budgetHeu}. If another vertex/arc can still be proposed for failure, a new iteration of the \texttt{while} loop starts. Thus, the heuristic ends at either Step \ref{firstEndHeu} or Step \ref{SndEndHeu}.
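The weight-based greedy selection at the core of the heuristic can be sketched as follows. This is an illustrative Python sketch, not the paper's C++/CPLEX implementation; the budget checks and the separation of dominated scenarios are omitted, and all function names besides \textbf{UniqueElms} and \textbf{Weight} are ours:

```python
# Sketch of the heuristic's element selection: weight each unique
# vertex/arc by how many matchings in W contain it, then pick an
# unchecked element of maximum weight, breaking ties at random.
import random

def unique_elements(matchings):
    """UniqueElms(W): unique vertices/arcs over all matchings in W."""
    return set(e for m in matchings for e in m)

def weight(element, matchings):
    """Weight(e, W): number of matchings in W containing element e."""
    return sum(1 for m in matchings if element in m)

def select_candidate(matchings, checked):
    """Pick, uniformly at random, an unchecked element of maximum weight."""
    candidates = [e for e in unique_elements(matchings) if e not in checked]
    if not candidates:
        return None  # cHeuSet is empty: the heuristic would stop here
    best = max(weight(e, matchings) for e in candidates)
    return random.choice([e for e in candidates if weight(e, matchings) == best])

# Matchings encoded as sets of vertex/arc identifiers (an assumption).
W = [{"v1", "v2", ("v1", "v2")}, {"v1", "v3", ("v1", "v3")}]
assert weight("v1", W) == 2              # "v1" appears in both matchings
assert select_candidate(W, checked={"v1"}) != "v1"  # checked elements skipped
```

In the full heuristic, the selected element would then pass through the single-vertex-arc separation before being added to the proposed failure scenario.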
\begin{algorithm}[tbp] \textbf{Input:}{ A set of matchings $W$}\\ \textbf{Output:}{ Failure scenario $\bar{\bm{\gamma}}$ and a boolean variable \textit{cover}\ indicating whether $\bar{\bm{\gamma}}$ covers matchings in $W$} \begin{algorithmic}[1] \STATE \textit{cover} $=$ \textbf{false}; $E \gets \textbf{UniqueElms}(W)$ \WHILE{\textit{true}} \label{whileHeu} \STATE $cHeuSet = \emptyset$ \FOR{$n = 1,..., \lvert E \rvert$} \IF{$\text{e}_{n} \in V$ and $s_{n} =$ \textbf{false} and $\sum_{u \in V} \bar{\bm{\gamma}}_{u}^{\text{v}} < r^{\text{v}}$} \STATE $\text{w}_{n} = \text{\textbf{Weight}}(\text{e}_{n},W)$ ; $cHeuSet \gets cHeuSet \cup \{\text{e}_{n}\}$ \ELSIF{$\text{e}_{n} \in A$ and $s_{n} =$ \textbf{false} and $\sum_{(ua,ub) \in A} \bar{\bm{\gamma}}_{uaub}^{\text{a}} < r^{\text{a}}$} \STATE $\text{w}_{n} = \text{\textbf{Weight}}(\text{e}_{n},W)$ ; $cHeuSet \gets cHeuSet \cup \{\text{e}_{n}\}$ \ENDIF \ENDFOR \IF{$cHeuSet = \emptyset$} \STATE Return $\bar{\bm{\gamma}}$ and \textit{cover}\label{firstEndHeu} \ENDIF \STATE Sort $cHeuSet$ in non-increasing order of values s.t.
$\text{w}_{1} \ge \text{w}_{2} \ge \dots \ge \text{w}_{\lvert cHeuSet \rvert}$ \STATE Select $\text{e}star_{n} \in cHeuSet$ randomly among elements whose weight equals $\text{w}_{1}$ \STATE \textit{ans} = \textbf{true} \item[] \textbf{---Start single-vertex-arc separation \ref{vxtarcsep}---} \IF{$(\sum_{u \in V} \bar{\bm{\gamma}}_{u}^{\text{v}} + \sum_{(ua,ub) \in A} \bar{\bm{\gamma}}_{uaub}^{\text{a}}) \ge 1$ and $\text{\textbf{IsNDD}}(\text{e}star_{n}) = \textbf{false}$ and $(\sum_{(ua,ub) \in A} \bar{\bm{\gamma}}_{uaub}^{\text{a}} \ge 1 \text{ or } \sum_{u \in V} \bar{\bm{\gamma}}_{u}^{\text{v}} \ge 2)$} \label{SepHeu} \STATE $V(\bm{\oneCeS}NoPrime^{\text{v}}) \gets V(\bar{\bm{\gamma}}^{\text{v}})$; $A(\bm{\oneCeS}NoPrime^{\text{a}}) \gets A(\bar{\bm{\gamma}}^{\text{a}})$ \STATE \textit{ans} = \textbf{false} \IF{$\text{e}star_{n} \in V$} \STATE $\bar{v} \gets \text{e}star_{n}$; $cchainSetXvtx \gets$ Check Case \ref{CaseOne} \IF{$cchainSetXvtx \neq \emptyset$} \STATE \textit{ans} = \textbf{true} \ENDIF \ELSE \STATE $\bar{a} \gets \text{e}star_{n}$; $cchainSetXarc \gets $ Check Case \ref{CaseTwo} \IF{$cchainSetXarc \neq \emptyset$} \STATE \textit{ans} = \textbf{true} \ENDIF \ENDIF \ENDIF \item[] \textbf{---End separation---} \STATE $s_{n} = \text{\textbf{true}}$ \label{checked} \IF{\textit{ans} = \textbf{true}} \IF{$\text{e}star_{n} \in V $} \STATE $\bar{\bm{\gamma}}^{\text{v}}_{\text{e}star_{n}} = 1$; $s_{n} =$ \textbf{true} $\forall \ \text{e}_{n} \in \delta^{+}(\Elemstar_{n}) \cup \delta^{-}(\Elemstar_{n})$ \label{adjArcstHeu} \ELSE \STATE $\bar{\bm{\gamma}}^{\text{a}}_{\text{e}star_{n}} = 1$ \ENDIF \ENDIF \IF{$\bar{\bm{\gamma}} \text{ covers all matchings in } W$} \STATE \textit{cover} = \textbf{true} \ENDIF \IF{\textit{cover} = \textbf{true}} \IF{$\sum_{u \in V} \bar{\bm{\gamma}}_{u}^{\text{v}} = r^{\text{v}}$ and $\sum_{(ua,ub) \in A} \bar{\bm{\gamma}}_{uaub}^{\text{a}} = r^{\text{a}}$} \label{budgetHeu} \STATE Return $\bar{\bm{\gamma}}$ and \textit{cover}
\label{SndEndHeu} \ENDIF \ENDIF \ENDWHILE \caption{Heuristic} \label{Alg:Heuristics} \end{algorithmic} \end{algorithm} \section{Computational Experiments} \label{Experiments} We test our framework on the same instances of 20, 50, and 100 vertices tested in \citet{Blom2021, Carvalho2021}. There are 30 instances for each network size. For homogeneous failure, we focus on the 100-vertex sets, which are more computationally challenging. For non-homogeneous failure, we focus on the 50- and 100-vertex sets. In the first part of this section, we compare the efficiency of our solution approaches to the state-of-the-art algorithm addressing the full-recourse policy under homogeneous failure, Benders-PICEF, proposed by \citet{Blom2021}. Next, we analyze the performance and practical impacts of our approaches under non-homogeneous failure for the full-recourse and first-stage-only recourse policies presented in Section \ref{sec:RecoursePolicies}. Our implementations, including the state-of-the-art algorithm we compare against, are coded in C++ using CPLEX 12.10 and run on a machine with Debian GNU/Linux as the operating system and a 3.60GHz Intel(R) Core(TM) processor. A time limit of one hour is given to every run. The TR parameter of the HSA\_MB and HSA\_ME algorithms was set to 150 iterations for every run. \subsection{Homogeneous failure analysis} Figure \ref{fig:Performance} shows the performance profile for five algorithms: our lifted-constraint version of Benders-PICEF, FBSA\_MB, FBSA\_ME, HSA\_MB and HSA\_ME. Recall that FBSA\_MB and FBSA\_ME are feasibility-based algorithms without the optimization step. Figure \ref{fig:Performance} shows that solving the recourse problem in the transitory graph pays off for FBSA\_ME compared to FBSA\_MB.
Although Benders-PICEF solves more instances in general than FBSA\_MB and FBSA\_ME, FBSA\_ME noticeably outperforms Benders-PICEF when the maximum length of cycles is four, the maximum length of chains is three, and up to three vertices are allowed to fail. In the same settings, FBSA\_MB is comparable to Benders-PICEF. However, as soon as the feasibility-seeking master problems incorporate the optimization step (Algorithm \ref{Alg:HybridHSA-ME}), the performance of HSA\_MB and HSA\_ME is consistently ahead of all other algorithms. As we show shortly, in most cases HSA\_MB and HSA\_ME need only a small percentage of iterations in the optimization version of the second-stage master problem to converge. Benders-PICEF is fast when cycles and chains of size up to three and four are considered, respectively; however, increased cycle length and failure budget result in worse performance of Benders-PICEF compared to the hybrid approaches. \begin{figure} \caption{Performance profile for multiple $r^{\text{v}}$ and $r^{\text{a}}$ settings.} \label{fig:Performance} \end{figure} Table \ref{tab:performance} summarizes the computational performance details of FBSA\_ME, HSA\_ME and Benders-PICEF. Column 2SndS divided by column 1stS gives the average number of second-stage iterations needed per first-stage iteration. The average number of iterations per first-stage iteration can also be obtained by adding up columns Alg-\ref{Alg:Heuristics}-true, \ref{MasterExp} and \ref{objMPOptQ} and then dividing the sum by column 1stS. The first observation is that the CG algorithm successfully finds optimal recourse solutions in most cases and accounts for most of the average total time for FBSA\_ME and HSA\_ME.
On the other hand, the average total number of iterations needed by FBSA\_ME to converge was significantly higher than that needed by HSA\_ME, and yet the average time per second-stage iteration (2ndS/total) is between 0.16 and 1.84 seconds. FBSA\_ME clearly generates scenarios that \ref{objMPOptQ} would not explore, due to the latter's cycle-and-chain decision variables and minimization objective, both properties that FBSA\_MB and FBSA\_ME lack. However, also for the full-recourse policy, \citet{Blom2021} tested a master problem analogous to \ref{objMPOptQ} and showed that, due to the large number of cycles and chains in the formulation, its scalability is limited as the size of cycles and chains grows. We find that solving the feasibility problem is much more efficient than solving \ref{objMPOptQ}. In fact, (i) the heuristic (Algorithm \ref{Alg:Heuristics}), even when run for close to 1,700 iterations, accounts for only about 26\% of the CPU time, and (ii) even when the heuristic fails and \ref{MasterExp} is solved as a MIP, around 1,000 iterations account for about 39\% of the total time, whereas fewer than 200 iterations of \ref{objMPOptQ} account for about the same percentage. Observe that in some cases the average total number of second-stage iterations spent by HSA\_ME is a third of those spent by FBSA\_ME, indicating that \ref{objMPOptQ} helps convergence. However, the number of \ref{objMPOptQ} iterations per first-stage solution (\ref{objMPOptQ}/1stS) is on average 40.5, while the same statistic for the feasibility-seeking iterations is on average 132.5, highlighting that \ref{MasterExp} helps reduce the number of iterations spent by \ref{objMPOptQ}. As a result, the hybrid algorithms are able to converge quickly in all the tested settings.
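The per-first-stage-iteration statistics discussed above are simple ratios of the table columns. A minimal sketch, with made-up numbers that are not values from the performance table:

```python
# Sketch of how the iteration statistics are derived from the table
# columns: the second-stage iterations per first-stage decision are the
# sum of heuristic successes, MT-as-MIP solves, and optimality-seeking
# (SSF) solves, divided by the number of first-stage decisions.
def second_stage_iters_per_first_stage(heuristic_true, master_exp, opt_q,
                                       first_stage):
    """(Alg-true + MT MIP + SSF MIP iterations) / first-stage decisions."""
    return (heuristic_true + master_exp + opt_q) / first_stage

# Hypothetical run: 1,700 heuristic successes, 1,000 MT MIP solves,
# 200 optimality-seeking solves, over 20 first-stage decisions.
rate = second_stage_iters_per_first_stage(1700, 1000, 200, 20)
assert rate == 145.0
```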
Thus, the feasibility-seeking master problems are able to find near-optimal failure scenarios within the first few hundred iterations, but in the absence of a lower bound, they may require thousands of iterations to reach infeasibility and thus prove optimality. \begin{table}[tbp] \caption{Computational performance details of FBSA\_ME, HSA\_ME and Benders-PICEF under homogeneous failure and full recourse for 30 instances with $\lvert V \rvert = 100$. MPBP and RPBP are the master problem and recourse problem presented in \citet{Blom2021}, respectively. A column is left empty if it does not apply. Data includes both optimal and sub-optimal runs.} \label{tab:performance} \begin{adjustbox}{width=1\textwidth} \centering \input{99_table_performance} \end{adjustbox} { \footnotesize \begin{description} \item $(\%)$: Percentage of total time spent in that part of the optimization \item \ref{MasterExp}: Time solving MT as a MIP when the heuristic failed to find a feasible solution \item \ref{RecoP:Objexp}: Time solving RE as a MIP when the CG algorithm failed to find an optimal recourse solution \item \ref{objMPOptQ}: Total \# of iterations SSF was solved as a MIP when more than TR = 150 iterations passed without MT becoming infeasible \item 1stS: Total \# of first-stage decisions required before finding the robust solution \item 2SndS: Total \# of iterations to solve the second-stage problem for all the first-stage decisions \item Alg\ref{Alg:Heuristics}-true: Total \# of iterations in which the heuristic found a feasible failure scenario \item CG-true: Total \# of iterations in which the CG algorithm found an optimal recourse solution \end{description} } \end{table} \subsection{Non-homogeneous failure analysis} In this part of the analysis, we focus on understanding both the scalability of the solution algorithms under non-homogeneous failure and the impact of the two recourse policies on the total number of pairs that can be re-arranged into new cycles and chains.
We additionally examine the percentage of re-arranged pairs that correspond to highly-sensitized patients, who are more impacted by non-homogeneous failure due to their higher real-life failure rates. Pairs in the instances published by \cite{Carvalho2021} have an associated panel reactive antibody (PRA) value, which indicates how likely a patient is to reject a potential donor. The higher this value, the more sensitized the patient and the less likely the patient is to find a compatible donor. Typically, a patient with a PRA greater than or equal to 90\% is considered highly sensitized. Table \ref{tab:policiesk3} presents computational details for HSA\_MB with a maximum cycle length of three. A similar table for a maximum cycle length of four is presented in Appendix \ref{CycleLengthFourNonHomoFail}. The average total number of dominated scenarios for the instances in Table \ref{tab:performance} was negligible and is therefore not presented. The non-homogeneous case is more difficult to solve based on the average total time, the average total number of first-stage iterations, and the average percentage of total time spent solving the optimality problem \ref{objMPOptQ} after 150 iterations. The average number of dominated scenarios is particularly high for the first-stage-only recourse. This behaviour may be explained by the fact that, when a failure occurs, the second-stage compatibility graph under this policy is smaller and thus more susceptible to containing vertices and arcs that can no longer be part of cycles and chains. In terms of the robust objective, all instances that were solved to optimality under both policies attained the same objective value. This interesting fact indicates that, for the tested instances, there exists a set of transplant pairs that can be recovered amongst themselves, and that this set recovers as many transplants as the full-recourse policy.
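The sensitization classification used throughout this analysis reduces to a threshold test on the PRA value. A minimal sketch, where the helper names are ours for illustration:

```python
# Classify patients as highly sensitized using the 90% PRA threshold
# described above, and compute the share of highly-sensitized pairs.
PRA_THRESHOLD = 90.0  # PRA >= 90% is treated as highly sensitized

def is_highly_sensitized(pra: float) -> bool:
    return pra >= PRA_THRESHOLD

def percent_highly_sensitized(pra_values):
    """Share of pairs whose patient is highly sensitized, in percent."""
    if not pra_values:
        return 0.0
    hs = sum(1 for pra in pra_values if is_highly_sensitized(pra))
    return 100.0 * hs / len(pra_values)

assert is_highly_sensitized(95.0) and not is_highly_sensitized(42.0)
assert percent_highly_sensitized([95.0, 42.0, 90.0, 10.0]) == 50.0
```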
\begin{table}[tbp] \caption{Computational performance details of HSA\_MB under non-homogeneous failure for 30 instances with $cCap = 3$. Columns are mostly the same as in Table \ref{tab:performance}, with new columns defined below. Data includes both optimal and sub-optimal runs.$^*$} \label{tab:policiesk3} \begin{adjustbox}{width=\textwidth} \centering \input{policies_table_K3_aBudget_5_10} \end{adjustbox} { \footnotesize \begin{description} \item $r^{\text{a}}$: Average number of arcs that fail, corresponding to either 5\% or 10\% of the arcs in the deterministic solution of an instance \item $^*$HSP $(\%)$: Average percentage of highly-sensitized patients re-matched in the second stage. Includes only instances solved to optimality. \item DomS: Average total number of dominated scenarios found by the single-vertex-arc separation procedure (Section \ref{vxtarcsep}) \end{description} } \end{table} The percentage of highly-sensitized patients who are selected in the first stage and re-matched in the second stage under the full-recourse and first-stage-only recourse policies is shown in Figure \ref{fig:hsp-policies}. On average, around 30\% of highly-sensitized patients are re-matched under both policies; since these percentages are computed under a worst-case scenario, Figure \ref{fig:hsp-policies} provides a lower bound on the percentage of highly-sensitized patients who can be transplanted in practice under both policies. While there is variability in all cases, instances with the smallest failure budgets account for most of the instances that can re-match more than 50\% of the highly-sensitized patients. \begin{figure} \caption{Percentage of highly-sensitized patients selected in both the first and second stage with respect to their total number in an instance. Markers above the line indicate a higher percentage for the first-stage-only recourse policy.
Every combination (``comb'' in the legend) of $r^{\text{v}}$ and $r^{\text{a}}$ is shown.} \label{fig:hsp-policies} \end{figure} \section{Conclusion} \label{sec:Conclusion} We presented a general solution framework for KPDPs to identify robust matchings that can be repaired after failure through a predefined recourse policy. Our framework consists of a two-stage RO problem whose second-stage master problem seeks feasibility rather than optimality, and yet is able to model multiple policies. The most significant limitation of this approach is the inability to obtain a lower bound on the optimal objective value of the second-stage problem, which we overcame by (i) separating dominated scenarios, and (ii) solving an optimality-seeking master problem when needed. Under homogeneous failure, our purely feasibility-based solution algorithms outperform the state of the art in some settings, and our hybrid methods consistently outperform the state of the art. Under non-homogeneous failure, we solved all instances to optimality under both recourse policies, finding that while the percentage of highly-sensitized patients that can be recovered varies, around 30\% appears to be a lower bound. Future directions include simulations to assess the effectiveness of matching paradigms over time, as opposed to in a single matching, and expanding the possibilities of the uncertainty set. For example, the uncertainty set could model a policy where an intervention is performed on some arcs/vertices prior to proposing a matching (e.g., performing immunotherapy on patients, running a questionnaire on pairs, etc.), thus reducing the likelihood of an exchange failing if resources are allocated for that intervention. Alternatively, failure could be seen as a change in the quality of a matching rather than its complete loss. Moreover, adaptations to our framework could include non-unitary weights for vertices/arcs in our definition of the uncertainty set, and similarly for pairs in the robust objective.
\appendix \section{Notation} \label{sec:notation} The following is a list of the notation, organized by section. Each symbol is defined in the section where it is first introduced. \paragraph{Preliminaries} \begin{itemize} \item[] $cCap$: Maximum allowed length of cycles, user defined. \item[] $L$: Maximum allowed length of chains in terms of arcs, user defined. \item[] $P$: Set of patient-donor pairs or simply \emph{set of pairs}. \item[] $N$: Set of non-directed donors. \item[] $V := P \cup N$ represents the set of patient-donor pairs, $P$, and the set of non-directed donors $N$. \item[] $A \subseteq V \times P$: Arc set containing arc $(ua,ub)$ if and only if the donor in vertex $u \in V$ is compatible with the patient in vertex $v \in P$. \item[] $DKEP$: First-stage compatibility graph. \item[] $cSet$: Set of feasible cycles (up to size $cCap$) in the first-stage compatibility graph. \item[] $\mathcal{C}_{\chainCap}$: Set of feasible chains (up to size $L$) in the first-stage compatibility graph. \item[] $V(\cdot)$ and $A(\cdot)$: Set of vertices and set of arcs in $(\cdot)$, where $(\cdot)$ refers to a cycle, chain, etc. \item[] $cSet \subseteq cSet \cup \mathcal{C}_{\chainCap}$: A feasible solution to the KEP, i.e., a collection of vertex-disjoint cycles and chains (referred to as a \emph{matching}), so that $V(c) \cap V(c^{\prime}) = \emptyset$ for all $c, c^{\prime} \in cSet$ with $c \ne c^{\prime}$, noting that $c$ and $c^\prime$ are entire cycles or chains in the matching. \item[] $cSpace$: Set of all KEP matchings in the first-stage compatibility graph $D$. \item[] $\mathcal{X} := \{\bm{x_{\matchSet}}: cSet \in cSpace \}$ is the set of all binary vectors representing the selection of a feasible matching in the first-stage compatibility graph, where $\bm{x_{\matchSet}}$ is the characteristic vector of matching $cSet$ in terms of the cycles/chains sets $cSet \cup \mathcal{C}_{\chainCap}$.
\item[] $\varFirstPlain \in \mathcal{X}$: A first-stage solution representing a matching in the first-stage compatibility graph $D$. \item[] $\pi \in \piSet$: Recourse policy (or simply \emph{policy}) deciding on the cycles and chains that can be used to repair a first-stage decision $\varFirstPlain \in \mathcal{X}$ after observing vertex/arc failures. \item[] $cSetSndTr$: Set of all cycles (up to size $cCap$) in the first-stage compatibility graph satisfying policy $\pi \in \piSet$ and including at least one pair in $\varFirstPlain$. \item[] $\mathcal{C}_{\chainCap}SndTr$: Set of all chains (up to size $L$) in the first-stage compatibility graph satisfying policy $\pi \in \piSet$ and including at least one pair in $\varFirstPlain$. \item[] $D^{\pi}(\varFirstPlain) = (VSndBe, ASndBe)$: Transitory graph with vertex set $VSndBe = \cup_{c \in cSetSndTr \cup \mathcal{C}_{\chainCap}SndTr}V(c)$ and arc set $ASndBe = \cup_{c \in cSetSndTr \cup \mathcal{C}_{\chainCap}SndTr}A(c)$. \item[] $V(\varFirstPlain) = \cup_{c' \in cSet \cup \mathcal{C}_{\chainCap}:\varFirstPlain_{c'} = 1} V(c')$: Vertices (pairs and non-directed donors) selected by a first-stage solution $\varFirstPlain \in \mathcal{X}$. \item[] $\bm{\oneCeS} \in \Gamma$: A failure scenario represented as a binary vector in the uncertainty set $\Gamma$. \item[] $VSndBeF$ and $ASndBeF$: Set of vertices and set of arcs that fail under scenario $\bm{\oneCeS} \in \Gamma$ in the first-stage compatibility graph, respectively. \item[] $D^{\pi}(\varFirstPlain, \bm{\oneCeS}) = (VSndBe \setminus \{VSndBe \cap VSndBeF\}, ASndBe \setminus \{ASndBe \cap ASndBeF\})$: A second-stage compatibility graph induced in the transitory graph $D^{\pi}(\varFirstPlain)$ by a failure scenario $\bm{\oneCeS} \in \Gamma$. \item[] $cSetSndS$: Set of existing cycles (up to size $cCap$) in a second-stage compatibility graph as they do not fail under scenario $\bm{\oneCeS} \in \Gamma$ and are allowed by policy $\pi \in \piSet$. 
\item[] $\mathcal{C}_{\chainCap}SndS$: Set of existing chains (up to size $L$) in a second-stage compatibility graph as they do not fail under scenario $\bm{\oneCeS} \in \Gamma$ and are allowed by policy $\pi \in \piSet$. \item[] $\matchSet^{\policy}(\varFirst)One:= \{cSet \subseteq cSetSndS \cup \mathcal{C}_{\chainCap}SndS \mid V(c) \cap V(c^{\prime}) = \emptyset \text{ for all } c, c^{\prime} \in cSet \text{; } c \neq c^{\prime}\}$: Set of allowed recovering matchings under policy $\pi$ such that every cycle/chain in $\matchSet^{\policy}(\varFirst)One$ contains at least one pair in $\varFirstPlain$. \item[] $\setRecoSym^{\policy}(\varFirst, \oneCe):= \{\bm{\mathbf{y}p_{\matchSet}}:cSet \in \matchSet^{\policy}(\varFirst)One\}$: Set of all binary vectors representing the selection of a feasible matching in a second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})$ with non-failed elements (vertices/arcs) under scenario $\bm{\oneCeS} \in \Gamma$ and policy $\pi \in \piSet$ that contain at least one pair in $\varFirstPlain$. Here, $\bm{\mathbf{y}p_{\matchSet}}$ is the characteristic vector of matching $cSet$ in terms of $cSetSndS \cup \mathcal{C}_{\chainCap}SndS$. \item[] $\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)$: A second-stage solution (a.k.a \emph{recourse solution}) found in a second-stage compatibility graph. \item[] $f(\varFirstPlain,\bm{\oneCeS},\mathbf{y})$: General robust objective function (a.k.a \emph{recourse objective function}) that assigns weights to the cycles and chains of a recovered matching associated to a recourse solution $\mathbf{y}$ under failure scenario $\bm{\oneCeS}$, based on the number of pairs matched in the first stage solution $\varFirstPlain$. 
\item[] $\mathbf{w}_{c}(\varFirstPlain) := |V(c) \cap V(\varFirstPlain) \cap P|$: Weight of a cycle/chain $c \in cSetSndS \cup \mathcal{C}_{\chainCap}SndS$, corresponding to the \textit{number of pairs} that, having been matched in the first stage by solution $\varFirstPlain \in \mathcal{X}$, can also be matched in the second stage by recourse solution $\mathbf{y} \in \setRecoSym^{\policy}(\varFirst, \oneCe)$, after failures are observed. \item[] $\Gamma(\varFirstPlain) \subseteq \Gamma$: Observable uncertainty set in the transitory graph when the first-stage solution is $\varFirstPlain$. \item[] $\bm{\gamma^{\text{v}}} \in \{0,1\}^{\mid V \mid}$: Binary vector representing a vertex failure scenario. If the entry corresponding to vertex $u \in V$ in $\bm{\gamma^{\text{v}}}$ takes on value one then $u$ fails, and on value zero otherwise. \item[] $\bm{\gamma^{\text{a}}} \in \{0,1\}^{\mid A \mid}$: Binary vector representing an arc failure scenario. If the entry corresponding to arc $(ua,ub) \in A$ in $\bm{\gamma^{\text{a}}}$ takes on value one then $(ua,ub)$ fails, and on value zero otherwise. \item[] $r^{\text{v}}$: User-defined maximum number of vertices that can fail in the first-stage compatibility graph. \item[] $r^{\text{a}}$: User-defined maximum number of arcs that can fail in the first-stage compatibility graph. \end{itemize} \paragraph{Robust model and first stage} \begin{itemize} \item[] $\tilde{\Gamma}$: Restricted set of scenarios for the first-stage problem \ref{SingleROmodel}. \item[] $\tilde{\varFirstPlain} \in \mathcal{X}$: Optimal first-stage solution to \ref{SingleROmodel} in Algorithm \ref{Alg:ScenarioGeneration} with restricted set of scenarios $\tilde{\Gamma}$. \item[] $Z_{P}r$: Objective value of solution $\tilde{\varFirstPlain} \in \mathcal{X}$. \end{itemize} \paragraph{New second-stage decompositions} \begin{itemize} \item[] $\GammacR \subseteq \Gammac$: Restricted set of failure scenarios observable in the transitory graph.
\item[] $\tilde{\oneCe} \in \Gammac$: A failure scenario that induces the second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$. \item[] $R^{\policy}(\varFirst, \oneCe)cdte$: MIP formulation (\ref{RecoP:Obj}) for the recourse problem when solved in the second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$. \item[] $\mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \tilde{\oneCe})dte = cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc$. \item[] $\mathbf{y}OpSndS$: Optimal recourse solution to the recourse problem $R^{\policy}(\varFirst, \oneCe)cdte$ with objective value ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe})$. \item[] $Z^{\pi}_{Q}(\varFirstc)Star$: Optimal objective of the second-stage problem \ref{eq:SSP}. \item[] $Z^{\pi}_{Q}(\varFirstc)$: Decision variable indicating the objective value of the second-stage problem \ref{eq:SSP}. \item[] ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$: Upper bound on the objective value of the second-stage problem \ref{eq:SSP}. \item[] $\mathbbm{1}_{c,\bm{\oneCeS}}$: Indicator variable that takes on value one if cycle/chain $c \in \mathcal{C}_{K,L}^{\pi}(\varFirstPlainc, \oneCe)$ fails under scenario $\bm{\oneCeS} \in \Gammac$. \item[] $\setRecoSym^{\policy}(\varFirst, \oneCe)Trcdte$: Set of binary vectors representing a recourse solution in the transitory graph. \item[] $R^{\policy}(\varFirst, \oneCe)expc$: MIP formulation (\ref{RecoP:Objexp}) for the recourse problem solved in the transitory graph, whose optimal recourse solutions are also optimal to $R^{\policy}(\varFirst, \oneCe)cdte$ but may allow some failed cycles/chains in the solution. \item[] $\psi^{\star\tilde{\oneCe}} \in \setRecoSym^{\policy}(\varFirst, \oneCe)Trcdte$: Optimal recourse solution to $R^{\policy}(\varFirst, \oneCe)expc$ in the transitory graph that is also optimal to $R^{\policy}(\varFirst, \oneCe)cdte$ under scenario $\tilde{\oneCe} \in \GammacR$.
\item[] $\psi^{\star\tilde{\oneCe}}cdte \subseteq \psi^{\star\tilde{\oneCe}}$: Subset of the optimal recourse solution $\psi^{\star\tilde{\oneCe}}$ that has no failed cycles/chains and thus corresponds to a feasible solution in $\setRecoSym^{\policy}(\varFirst, \oneCe)cdte$. \item[] $ I(\bm{\oneCeS}Prime)$: Number of failed vertices and arcs under scenario $\bm{\oneCeS}Prime$. \item[] $I(\bm{\oneCeS}NoPrime)$: Number of failed vertices and arcs under scenario $\bm{\oneCeS}NoPrime$. \item[] $cchainSetPrimeX \subseteq cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$: Set of feasible cycles and chains that fail in the transitory graph $D^{\pi}(\varFirstPlain)c$ under scenario $\bm{\oneCeS}Prime$. \item[] $cchainSetNoPX \subseteq cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$: Set of feasible cycles and chains that fail in the transitory graph $D^{\pi}(\varFirstPlain)c$ under scenario $\bm{\oneCeS}NoPrime$. \item[] $cchainSetXvtx$: Set of feasible cycles and chains in $D^{\pi}(\varFirstPlain, \bm{\oneCeS})pr$ that include vertex $\bar{v}$. \item[] $cchainSetXarc$: Set of feasible cycles and chains in $D^{\pi}(\varFirstPlain, \bm{\oneCeS})pr$ that include arc $\bar{a}$. \end{itemize} \paragraph{Solution algorithms for the second-stage problem} \begin{itemize} \item[] FBSA\_MB: Algorithm \ref{Alg:BasicHybrid}. \item[] FBSA\_ME: Algorithm \ref{Alg:EnhancedHybrid}. \item[] ${\bunderline{\varCov}}^{\pi}_{Q}(\varFirstc)$: Lower bound on the objective value of the second-stage problem \ref{eq:SSP}. \item[] HSA\_MB and HSA\_ME: Algorithms with additional steps needed for FBSA\_MB and FBSA\_ME to find ${\bunderline{\varCov}}^{\pi}_{Q}(\varFirstc)$, respectively. Such steps are given by Algorithm \ref{Alg:HybridHSA-ME}. \item[] $\mathcal{M}$,$\mathcal{M}exp$, $W$: Set of matchings \item[] $\textbf{ToMng}(\cdot)$: Function that ``extracts'' the matching from a recourse solution $(\cdot)$. 
\item[] \textbf{Heuristic}($\cdot$): Algorithm \ref{Alg:Heuristics} which takes as input a set of matchings $(\cdot)$ and returns a tuple $(\incumCe, \cover)$. \item[] $\textbf{ColGen}(R^{\policy}(\varFirst, \oneCe)cdte)$: Column generation algorithm used to solve the linear relaxation of $R^{\policy}(\varFirst, \oneCe)cdte$ with optimal value $Z^{\star}_{\text{cg}}UB$. \item[] $Z^{\star}_{\text{cg}}$: Objective value of the feasible KEP solution obtained when the columns obtained by $\textbf{ColGen}(R^{\policy}(\varFirst, \oneCe)cdte)$ are turned to binary. \item[] $\textbf{XColGen}\left(R^{\policy}(\varFirst, \oneCe)expc \right)$: Column generation algorithm used to solve the linear relaxation of $R^{\policy}(\varFirst, \oneCe)expc$ with objective value $Z^{\star}_{\text{cg}}UBexp$. \item[] $Z^{\star}_{\text{cg}}exp$: Objective value of the feasible KEP solution obtained when the columns obtained by $\textbf{XColGen}(R^{\policy}(\varFirst, \oneCe)expc)$ are turned to binary. \item[] $\textbf{TrueVal}(Z^{\star}_{\text{cg}}exp)$: A function that returns $\mathbf{w}_{c}(\varFirstPlain) := |V(c) \cap V(\varFirstPlain) \cap P|$ for all cycles and chains $c$ in the feasible solution with objective value $Z^{\star}_{\text{cg}}exp$. \item[] \textbf{UniqueElms}($W$): Function that returns the set of unique vertices and arcs among all matchings in $W$, referred to as $E$. \item[] $\text{e}_{n} \in E$: Element, either vertex or arc in the set $E$. \item[] $\text{\textbf{Weight}}(\text{e}_{n}, W)$: Function that returns the weight $\text{w}_{n}$ of an element $\text{e}_{n}$ corresponding to the number of times $\text{e}_{n}$ is repeated in the set of matchings $W$. \item[] $\text{\textbf{IsNDD}}(\text{e}star_{n})$: Function that returns true if element $\text{e}star_{n}$ is a non-directed donor, and false otherwise. 
\end{itemize} \section{Robust formulation for full recourse} \label{sec:fullROFormulation} Our solution approach supports the use of any MIP formulation that has the structure shown in Section \ref{RobustModels}. For the results presented in this paper, we adapt a variant presented by \cite{Carvalho2021} of the position-indexed cycle edge formulation (PICEF) proposed by \cite{Dickerson2016}. Cycles are modeled through cycle variables $\mathbf{y}Z_c \ \forall c \in cSet$ for first-stage decisions and through $cvarCe \ \forall c \in cSet$ for recourse cycles under scenario $\bm{\oneCeS} \in \Gamma$. However, chains are modeled through first-stage decision variables $\delta_{\vertexa\vertexb\ell}$, indexed by arc $(ua,ub) \in A$ and the feasible position $\ell \in \mathcal{L}(\vertexa, \vertexb)$ of that arc within a chain. The set $\mathcal{L}(\vertexa, \vertexb) \subseteq \mathcal{L} = \{1,...,L\}$ corresponds to the set of positions for which that arc is reached from some non-directed donor in a simple path with $\ell \le L$ arcs. For vertices $u \in N$, the set of possible arc positions becomes $\mathcal{L}(\vertexa, \vertexb) = \{1\}$, since non-directed donors always start a chain. To identify $\mathcal{L}(\vertexa, \vertexb)$ for the other arcs, a shortest-path based search can be performed \citep{Dickerson2016}. Thus, we define $AL = \{(ua,ub) \in A \mid \ell \in \mathcal{L}(\vertexa, \vertexb)\}$. Likewise, recourse chain decision variables for every scenario $\bm{\oneCeS} \in \Gamma$ are denoted by $\delta_{\vertexa\vertexb\ell}Ce$. A binary decision variable $t^{\oneCe}_{\vertexb}$ is also defined for every pair $v \in P$ and scenario $\bm{\oneCeS} \in \Gamma$ to identify the pairs that are selected in both the first stage and in the second stage under some scenario $\bm{\oneCeS} \in \Gamma$. 
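The shortest-path based search for the position sets can be sketched as follows; this is a minimal Python illustration under our own graph encoding (the names `arc_positions` and `ndds` are ours, not from the formulation), in which the BFS distance from the non-directed donors gives the earliest feasible position of an arc, and all later positions up to $L$ are then allowed:

```python
from collections import deque

def arc_positions(vertices, arcs, ndds, L):
    """Feasible chain positions per arc: an arc (u, v) can occupy
    position l if u is reachable from some non-directed donor by a
    path of l - 1 arcs; the BFS distance gives the earliest such l."""
    dist = {v: None for v in vertices}   # fewest arcs from any NDD
    queue = deque()
    for n in ndds:
        dist[n] = 0
        queue.append(n)
    while queue:
        u = queue.popleft()
        for (a, b) in arcs:
            if a == u and dist[b] is None:
                dist[b] = dist[u] + 1
                queue.append(b)
    positions = {}
    for (u, v) in arcs:
        if u in ndds:
            positions[(u, v)] = {1}      # NDDs always start a chain
        elif dist[u] is not None:
            positions[(u, v)] = set(range(dist[u] + 1, L + 1))
        else:
            positions[(u, v)] = set()    # tail unreachable from any NDD
    return positions
```

For instance, an arc leaving a pair at BFS distance $1$ from a non-directed donor gets positions $\{2,\ldots,L\}$, while arcs leaving an NDD get position $\{1\}$ only.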
Moreover, we denote by $csetv$ and $csetuv$ the set of feasible cycles including vertex $v \in P$ and the set of feasible cycles including arc $(ua,ub) \in A$, respectively. \label{FullFirstSFormulation} \begin{subequations} \small \begin{align} \max \qquad Z \label{objmaxRO}\\ Z - \sum_{v \in P} t^{\oneCe}_{\vertexb} \le 0 && \bm{\oneCeS} \in \Gamma \label{selidxF}\\ t^{\oneCe}_{\vertexb} - \sum_{c \in csetv} \mathbf{y}Z_c - \sum_{\ell \in \mathcal{L}} \sum_{(ua,ub) \in AL} \delta_{\vertexa\vertexb\ell} \le 0 && \bm{\oneCeS} \in \Gamma, v \in P \label{FirstSSol}\\ t^{\oneCe}_{\vertexb} - \sum_{c \in csetv} cvarCe - \sum_{\ell \in \mathcal{L}} \sum_{(ua,ub) \in AL} \delta_{\vertexa\vertexb\ell}Ce \le 0 && \bm{\oneCeS} \in \Gamma, v \in P \label{SecondSSol}\\ \sum_{u: (ua,ub)rev}\delta_{\vertexa\vertexb\ell}RCe \le 1 - \gamma^{\text{v}}_{v} && \bm{\oneCeS} \in \Gamma, v \in N \label{FailedNDD}\\ \sum_{c \in csetv} cvarCe + \sum_{\ell \in \mathcal{L}} \sum_{(ua,ub) \in AL} \delta_{\vertexa\vertexb\ell}Ce \le 1 - \gamma^{\text{v}}_{v} && \bm{\oneCeS} \in \Gamma, v \in P \label{FailedPair}\\ \sum_{c \in csetuv} cvarCe + \sum_{\ell \in \mathcal{L}(\vertexa, \vertexb)} \delta_{\vertexa\vertexb\ell}Ce \le 1 - \gamma^{\text{a}}_{uaub} && \bm{\oneCeS} \in \Gamma, (ua,ub) \in A \label{FailedArc}\\ \sum_{u:(ua,ub) \in AL} \delta_{\vertexa\vertexb\ell}Ce - \sum_{u:(ua,ub)rev \in AL} \delta_{\vertexa\vertexb\ell}NextCe \le 0 && \bm{\oneCeS} \in \Gamma, v \in P, \ell \in \mathcal{L} \setminus \{L - 1\} \label{ChainPos}\\ \sum_{u: (ua,ub)rev}\delta_{\vertexa\vertexb\ell}R \le 1 && v \in N \label{FailedNDDf}\\ \sum_{c \in csetv} cvar + \sum_{\ell \in \mathcal{L}} \sum_{(ua,ub) \in AL} \delta_{\vertexa\vertexb\ell} \le 1 && v \in P\\ \sum_{u:(ua,ub) \in AL} \delta_{\vertexa\vertexb\ell} - \sum_{u:(ua,ub)rev \in AL} \delta_{\vertexa\vertexb\ell}Next \le 0 && v \in P, \ell \in \mathcal{L} \setminus \{L - 1\} \label{ChainPosf}\\ t^{\oneCe}_{\vertexb} \ge 0 &&
\bm{\oneCeS} \in \Gamma, v \in P\\ \mathbf{y}Z_c, cvarCe \in \{0,1\} && \bm{\oneCeS} \in \Gamma, c \in cSet\\ \delta_{\vertexa\vertexb\ell}, \delta_{\vertexa\vertexb\ell}Ce \in \{0,1\} && \bm{\oneCeS} \in \Gamma, (ua,ub) \in A, \ell \in \mathcal{L}(\vertexa, \vertexb) \label{varNature} \end{align} \label{for:PICEF_RO} \end{subequations} Constraints \eqref{selidxF} determine the scenario binding the number of patients that receive a transplant in both stages. This scenario is the worst-case scenario. The objective \eqref{objmaxRO} is then equivalent to the maximum number of patients from the first stage that can be recovered in the second stage under the worst-case scenario. Constraints \eqref{FirstSSol} and \eqref{SecondSSol} ensure that a pair $v$ is counted as recovered in the objective only if it is selected in the first-stage solution (Constraints \eqref{FirstSSol}) and it is also selected in the second stage under scenario $\bm{\oneCeS} \in \Gamma$ (Constraints \eqref{SecondSSol}). Constraints \eqref{FailedNDD}--\eqref{FailedArc} guarantee that the solution obtained for every scenario $\bm{\oneCeS} \in \Gamma$ is a matching. Specifically, Constraints \eqref{FailedNDD} ensure that if a non-directed donor fails under some scenario $\bm{\oneCeS} \in \Gamma$, i.e., $\gamma^{\text{v}}_{v} = 1$, then its corresponding arcs in position one cannot be used to trigger a chain. Similarly, Constraints \eqref{FailedPair} guarantee that if a pair fails, then it cannot be present in either a cycle or a chain. Constraints \eqref{FailedArc} ensure that when an arc $(ua,ub) \in A$ fails under some scenario $\bm{\oneCeS} \in \Gamma$, it does not get involved in either a cycle or a chain. Constraints \eqref{ChainPos} ensure the continuity of a chain by selecting arcs in consecutive positions. Constraints \eqref{FailedNDDf}--\eqref{ChainPosf} select a solution corresponding to a matching in the first stage.
The remaining constraints correspond to the nature of the decision variables. \section{Robust formulation for first-stage-only recourse} \label{sec:fisrtOnlyROFormulation} In addition to the constraints defining formulation \eqref{for:PICEF_RO}, a new constraint is introduced to limit the recourse solutions to include only vertices that were selected in the first stage under every scenario. \begin{subequations} \label{FirstStageOnlyFormulation} \begin{align} \max \qquad &Z \\ \eqref{selidxF} &-\eqref{varNature}&\\ \sum_{c \in csetv} cvarCe + \sum_{\ell \in \mathcal{L}} \sum_{(ua,ub) \in AL} \delta_{\vertexa\vertexb\ell}Ce &\le \sum_{c \in csetv} \mathbf{y}Z_c + \sum_{\ell \in \mathcal{L}} \sum_{(ua,ub) \in AL} \delta_{\vertexa\vertexb\ell} & \bm{\oneCeS} \in \Gamma, v \in P \end{align} \end{subequations} \section{Failure scenarios generated by \ref{MasterBasic} and \ref{MasterExp}} \label{Example} The following example demonstrates that \ref{MasterExp} can yield failure scenarios that dominate those in \ref{MasterBasic}. Consider Figure \ref{fig:RecourseExamples} again under full recourse, $r^{\text{v}} = 1$, $r^{\text{a}} = 1$, and the first-stage solution $\tilde{\varFirstPlain} \in \mathcal{X}$ involving eight pairs in that figure. Observe that the first-stage compatibility graph also corresponds to the transitory graph $D^{\pi}(\varFirstPlain)c$. At iteration 1, we observe failure scenario $\tilde{\gamma}^{\text{v}}_{2} = 1$ and $\tilde{\gamma}^{\text{a}}_{56} = 1$; thus, vertices 2, 9 and 10 do not belong to the realization of the second-stage compatibility graph, $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$ (Figure \ref{fig:IteOne-ScndSGraph}). The optimal objective value of the recourse problem in $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$, i.e., $R^{\policy}(\varFirst, \oneCe)cdte$, is ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) = 5$ with an optimal solution $\mathbf{y}OpSndSc$ involving cycle (3,4,6) and chain (8,1,5).
When we solve the recourse problem in the transitory graph $D^{\pi}(\varFirstPlain)c$ (Figure \ref{fig:IteOne-TrGraph}), the optimal objective value is also 5, but its optimal recourse solution $\psi^{\star\tilde{\oneCe}} = \mathbf{y}OpSndSc \cup \{(2,9,10)\}$ involves cycle (3,4,6), chain (8,1,5), as well as the failed cycle (2,9,10). At iteration 2, \ref{MasterBasic} and \ref{MasterExp} attempt to find a new failure scenario, but the failure scenario that is feasible to \ref{MasterBasic} is infeasible to \ref{MasterExp}. Below, the first two iterations of \ref{MasterBasic} and \ref{MasterExp} are shown. The vertices/arcs in the failure scenario of every iteration are indicated within a box. \begin{tabular}{p{0.45\textwidth} @{\quad} | @{\quad} p{0.45\textwidth}} $\text{MasterSecond}(\tilde{\varFirstPlain})$ (\ref{MasterBasic}): & $\text{MasterTransitory}(\tilde{\varFirstPlain})$ (\ref{MasterExp}):\\ \hline \uline{Iteration 1}: \ ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) = 8$\newline $\boxed{\gamma^{\text{v}}_{2}} + \gamma^{\text{v}}_{9} + \gamma^{\text{v}}_{10} + \gamma^{\text{a}}_{2,9} + \gamma^{\text{a}}_{9,10} + \gamma^{\text{a}}_{10,2} + \gamma^{\text{v}}_{3} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,4} \newline \text{\quad} + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{6} + \gamma^{\text{a}}_{1,5} + \boxed{\gamma^{\text{a}}_{5,6}} + \gamma^{\text{a}}_{6,1} \ge 1$ \newline\newline \uline{Iteration 2}: \ ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) = 5$ \newline $\gamma^{\text{v}}_{2} + \gamma^{\text{v}}_{9} + \gamma^{\text{v}}_{10} + \gamma^{\text{a}}_{2,9} + \gamma^{\text{a}}_{9,10} + \gamma^{\text{a}}_{10,2} + \boxed{\gamma^{\text{v}}_{3}} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,4} \newline \text{\quad} + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{6} + \gamma^{\text{a}}_{1,5} + \boxed{ \gamma^{\text{a}}_{5,6}} + \gamma^{\text{a}}_{6,1} \ge \cancel{1}
\rightarrow 2 \text{ by } \ref{prop:RHSi} \newline \boxed{\gamma^{\text{v}}_{3}} + \gamma^{\text{v}}_{6} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,6} + \gamma^{\text{a}}_{6,4} + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{8} \newline \text{\quad} + \gamma^{\text{a}}_{1,5} + \gamma^{\text{a}}_{8,1} \ge 1 $ & \uline{Iteration 1}: \ ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) = 8$\newline $\boxed{\gamma^{\text{v}}_{2}} + \gamma^{\text{v}}_{9} + \gamma^{\text{v}}_{10} + \gamma^{\text{a}}_{2,9} + \gamma^{\text{a}}_{9,10} + \gamma^{\text{a}}_{10,2} + \gamma^{\text{v}}_{3} + \gamma^{\text{v}}_{4} \newline \text{\quad} + \gamma^{\text{a}}_{3,4} + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{6} + \gamma^{\text{a}}_{1,5} + \boxed{ \gamma^{\text{a}}_{5,6}} + \gamma^{\text{a}}_{6,1} \ge 1$ \newline\newline \uline{Iteration 2}: \ ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) = 5$\newline $\gamma^{\text{v}}_{2} + \gamma^{\text{v}}_{9} + \gamma^{\text{v}}_{10} + \gamma^{\text{a}}_{2,9} + \boxed{\gamma^{\text{a}}_{9,10}} + \gamma^{\text{a}}_{10,2} + \boxed{\gamma^{\text{v}}_{3}} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,4} \newline \text{\quad} + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{6} + \gamma^{\text{a}}_{1,5} + \gamma^{\text{a}}_{5,6} + \gamma^{\text{a}}_{6,1} \ge \cancel{1} \rightarrow 2 \text{ by } \ref{prop:RHSi} \newline \boxed{\gamma^{\text{v}}_{3}} + \gamma^{\text{v}}_{6} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,6} + \gamma^{\text{a}}_{6,4} + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} + \gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{8} \newline \text{\quad} + \gamma^{\text{a}}_{1,5} + \gamma^{\text{a}}_{8,1} \ge 1 \newline \boxed{\gamma^{\text{v}}_{3}} + \gamma^{\text{v}}_{6} + \gamma^{\text{v}}_{4} + \gamma^{\text{a}}_{3,6} + \gamma^{\text{a}}_{6,4} + \gamma^{\text{a}}_{4,3} + \gamma^{\text{v}}_{1} +
\gamma^{\text{v}}_{5} + \gamma^{\text{v}}_{8} \newline \text{\quad} + \gamma^{\text{a}}_{1,5} + \gamma^{\text{a}}_{8,1} + \gamma^{\text{v}}_{2} + \gamma^{\text{v}}_{9} + \gamma^{\text{v}}_{10} + \gamma^{\text{a}}_{2,9} + \boxed{\gamma^{\text{a}}_{9,10}} \newline \text{\quad} + \gamma^{\text{a}}_{10,2} \ge \cancel{1} \rightarrow 2 \text{ by } \ref{prop:RHSExtdi}$ \end{tabular} \\ \input{Figures/MS-MT-example.tex} The new failure scenario for \ref{MasterBasic} would lead to an optimal recourse solution with again ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) = 5$, whereas the optimal recourse solution for \ref{MasterExp} would lead to ${Z}^{\pi,\star}_{R}(\varFirstc, \tilde{\oneCe}) = 3$. That is, in the third iteration, the right-hand side of the constraints in \ref{MasterExp} will be updated with a new upper bound ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc) = 3$, leading \ref{MasterExp} to infeasibility sooner and thus to the optimal objective value of the second-stage problem. On the other hand, \ref{MasterBasic} in the third iteration will need another attempt to ``discover'' a failure scenario that will bring down the ceiling of ${\bar{\varCov}}^{\pi}_{Q}(\varFirstc)$. \section{The recourse problem} \label{Alg:CyChrecourse} We present in this section two algorithms to enumerate the feasible cycles and chains in $cSet^{\pi}(\tilde{\varFirstPlain})$ and $\mathcal{C}_{\chainCap}^{\pi}(\tilde{\varFirstPlain})$, respectively, that lead to a transitory graph $D^{\pi}(\varFirstPlain)c$ and a realization of the second-stage compatibility graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$ under scenario $\tilde{\oneCe} \in \Gammac$. \subsection{Cycles and chains for the full-recourse policy} We start by defining some notation. Let $\pairSet(\varFirstPlainc) = V(\varFirstPlainc) \cap P$ be an auxiliary set, corresponding to the pairs selected in the first-stage solution.
Moreover, we denote by $cchainSet_{cCap}^{u}$ the set of feasible \textit{cycles} that include vertex $u \in \pairSet(\varFirstPlainc)$. Lastly, consider $R$ as an auxiliary vertex set. Algorithm \ref{Alg:FullRecoCycleSet} iteratively builds $cSet^{\text{Full}}(\tilde{\varFirstPlain})$. At the start, $R$ and $cSet^{\text{Full}}(\tilde{\varFirstPlain})$ are empty. Within the while loop, a pair $u$ from the first-stage solution is selected. Then, in line 4, a depth-first search procedure, starting from vertex $u$, is used to find $cchainSet_{cCap}^{u}$ in graph $\tilde{D} = (V \setminus R, A)$. Note that if $R \neq \emptyset$, the new cycles are found in a graph where the previously selected vertices (the ones in set $R$) are removed, since otherwise, cycles already in $cSet^{\text{Full}}(\tilde{\varFirstPlain})$ could be found again. The new cycles are then added to $cSet^{\text{Full}}(\tilde{\varFirstPlain})$ and vertex $u$ is removed from $\pairSet(\varFirstPlainc)$. When no more vertices are left in that set, the algorithm ends.
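The loop just described can be sketched in Python as follows; this is a simplified, purely illustrative version with our own graph encoding and function names (`cycles_through`, `full_recourse_cycles` are not from the paper), in which a depth-first search bounded by the cycle cap stands in for the search of line 4:

```python
def cycles_through(u, vertices, arcs, cap):
    """All simple directed cycles with at most `cap` vertices that
    contain u, found by depth-first search starting from u."""
    adj = {v: [b for (a, b) in arcs if a == v] for v in vertices}
    found = []

    def dfs(path):
        for nxt in adj.get(path[-1], []):
            if nxt == u and len(path) >= 2:
                found.append(tuple(path))        # closed a cycle at u
            elif nxt not in path and len(path) < cap:
                dfs(path + [nxt])

    dfs([u])
    return found

def full_recourse_cycles(first_stage_pairs, vertices, arcs, cap):
    """Collect every cycle with at least one first-stage vertex,
    removing each processed vertex so no cycle is found twice."""
    removed, cycles = set(), []
    todo = set(first_stage_pairs)
    while todo:
        u = todo.pop()
        live = [v for v in vertices if v not in removed]
        live_arcs = [(a, b) for (a, b) in arcs
                     if a not in removed and b not in removed]
        cycles.extend(cycles_through(u, live, live_arcs, cap))
        removed.add(u)
    return cycles
```

Because each processed vertex is moved to the removal set $R$, no cycle is reported twice across iterations of the outer loop.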
\begin{algorithm}[tbp] \textbf{Input:} { Set with pairs from the first-stage solution, $\pairSet(\varFirstPlainc)$}\\ \textbf{Output:} { Set $cSet^{\text{Full}}(\tilde{\varFirstPlain})$} \begin{algorithmic}[1] \item[] \textbf{Step 0: } \STATE $R = \emptyset$; $cSet^{\text{Full}}(\tilde{\varFirstPlain}) = \emptyset$ \\ \item[] \textbf{Step 1: } \WHILE{$\pairSet(\varFirstPlainc) \neq \emptyset$} \STATE \text{Select vertex} $u \in \pairSet(\varFirstPlainc) $ \STATE \text{Find} $cchainSet_{cCap}^{u}$ \text{in graph } $\tilde{D} = (V \setminus R, A)$ \text{from vertex} $u$ \STATE $cSet^{\text{Full}}(\tilde{\varFirstPlain}) \gets cSet^{\text{Full}}(\tilde{\varFirstPlain}) \cup cchainSet_{cCap}^{u}$ \STATE $\pairSet(\varFirstPlainc) \gets \pairSet(\varFirstPlainc) \setminus \{u\}$ \STATE $R \gets R \cup \{u\}$ \ENDWHILE \item[] \textbf{Step 2: } \STATE Return $cSet^{\text{Full}}(\tilde{\varFirstPlain})$ \caption{Obtaining $cSet^{\pi}(\tilde{\varFirstPlain})$ with $\pi = \text{Full}$} \label{Alg:FullRecoCycleSet} \end{algorithmic} \end{algorithm} The correctness of Algorithm \ref{Alg:FullRecoCycleSet} follows from the fact that only cycles/chains with at least one vertex from the first stage contribute to the weight of a recourse solution. Thus, it suffices to find such cycles for every vertex in $\pairSet(\varFirstPlainc)$. Similar reasoning is used when finding chains. To this end, we introduce additional notation. Let $\tilde{N} = N$ be an auxiliary set that corresponds to the set of non-directed donors. Moreover, let $cchainSet_{\ell}^{u}$ be the set of chains that include at least one pair in $\pairSet(\varFirstPlainc)$ and are triggered by vertex $u \in N$ with exactly $1 \le \ell \le L$ arcs. Algorithm \ref{Alg:FullRecoChainSet} iteratively builds the set of chains that include at least one pair that is selected in the first stage.
\begin{algorithm}[tbp] \textbf{Input:} { A first-stage solution $\tilde{\varFirstPlain} \in \mathcal{X}$}\\ \textbf{Output:} { Set $\mathcal{C}_{\chainCap}^{\text{Full}}(\tilde{\varFirstPlain})$} \begin{algorithmic}[1] \item[] \textbf{Step 0: } \STATE $R = \emptyset$; $\mathcal{C}_{\chainCap}^{\text{Full}}(\tilde{\varFirstPlain}) = \emptyset$ \\ \item[] \textbf{Step 1: } \WHILE{$\tilde{N} \neq \emptyset$} \STATE \text{Select vertex} $u \in \tilde{N}$ \FORALL{$ 1 \le \ell \le L$} \STATE \text{Find} $cchainSet_{\ell}^{u}$ \text{in graph } $DKEP$ \text{from vertex} $u$ \STATE $\mathcal{C}_{\chainCap}^{\text{Full}}(\tilde{\varFirstPlain}) \gets \mathcal{C}_{\chainCap}^{\text{Full}}(\tilde{\varFirstPlain}) \cup cchainSet_{\ell}^{u}$ \ENDFOR \STATE $\tilde{N} \gets \tilde{N} \setminus \{u\}$ \ENDWHILE \item[] \textbf{Step 2: } \STATE Return $\mathcal{C}_{\chainCap}^{\text{Full}}(\tilde{\varFirstPlain})$ \caption{Obtaining $\mathcal{C}_{\chainCap}^{\pi}(\tilde{\varFirstPlain})$ with $\pi = \text{Full}$} \label{Alg:FullRecoChainSet} \end{algorithmic} \end{algorithm} \subsection{Cycles and chains for the first-stage-only recourse policy} Algorithm \ref{Alg:FullRecoCycleSet} can be modified to accommodate the first-stage-only recourse for cycles. Specifically, we can replace $cSet^{\text{Full}}(\tilde{\varFirstPlain})$ by $cSet^{\text{1stSO}}(\tilde{\varFirstPlain})$, where the latter is the set of simple cycles with at least one vertex in $\pairSet(\varFirstPlainc)$ satisfying the first-stage-only recourse policy. In line 4, the vertex set $V$ in graph $\tilde{D}$ is replaced by $\pairSet(\varFirstPlainc)$ so that only pairs from the first stage can be part of the allowed cycles. The arc set of $\tilde{D}$ can then be defined such that every arc in it has a starting and terminal vertex in $\pairSet(\varFirstPlainc)$. Likewise, Algorithm \ref{Alg:FullRecoChainSet} can also support chains for the first-stage-only recourse. 
The only change in Algorithm \ref{Alg:FullRecoChainSet} is to replace graph $DKEP$ by graph $\tilde{D} = (V(\varFirstPlainc), \tilde{A})$, where every arc in the arc set $\tilde{A}$ has both endpoints in $V(\varFirstPlainc)$. The algorithms just described are used to obtain the cycles and chains that can participate in a recourse solution when the recourse problem is solved as a sub-problem in the robust decomposition presented in the next section. \subsection{Formulations for the recourse problem} \label{sec:RecourseFormls} In this section we present the cycle-and-chain MIP formulations of \citet{Blom2021}, adapted to the problem we study. However, we note that a MIP formulation for the recourse problem does not require the explicit enumeration of cycles and chains. The advantage of enumeration is that different policies can be easily addressed, specifically the full recourse and the first-stage-only recourse. Recall that $cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc \subseteq cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$ is the set of non-failed cycles/chains in a second-stage compatibility graph under scenario $\tilde{\oneCe} \in \Gamma$ and policy $\pi \in \piSet$, i.e., $\sum_{u \in V(c)} \gamma^{\text{v}}_{u} + \sum_{(ua,ub) \in A(c)} \gamma^{\text{a}}_{uaub} = 0 \ \forall c \in cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc$.
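Operationally, this non-failure condition is a filter over the cycles/chains of the transitory graph. A minimal Python sketch (our own encoding of cycles and failure indicators, purely illustrative):

```python
def survives(cycle_vertices, cycle_arcs, gamma_v, gamma_a):
    """A cycle/chain survives a failure scenario iff none of its
    vertices or arcs fail, i.e. the failure indicators sum to zero."""
    failed = sum(gamma_v.get(v, 0) for v in cycle_vertices) \
           + sum(gamma_a.get(a, 0) for a in cycle_arcs)
    return failed == 0

def non_failed(cycles, gamma_v, gamma_a):
    """Filter the transitory-graph cycles/chains down to those that
    remain feasible in the second-stage compatibility graph."""
    return [c for c in cycles
            if survives(c["vertices"], c["arcs"], gamma_v, gamma_a)]
```

For instance, under the failure scenario of the example above ($\tilde{\gamma}^{\text{v}}_{2} = 1$, $\tilde{\gamma}^{\text{a}}_{56} = 1$), this filter would discard the cycle (2,9,10) while keeping (3,4,6).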
We present the recourse problem based on the so-called cycle formulation \citep{Abraham2007} as follows: \begin{subequations} \label{RecoPModel} \begin{align} R^{\policy}(\varFirst, \oneCe): \max_{y} \quad& \sum_{c \in cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc}\mathbf{w_{\match}}(\varFirst)c y_{c} \label{RecoP:Obj} \tag{R}\\ & \sum_{c: u \in V(c)} y_{c} \le 1 & u \in V \label{eq:uniqueC}\\ &y_{c} \in \{0,1\} & c \in cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc \end{align} \end{subequations} \noindent Here, $y_{c}$ is a decision variable for every cycle/chain $c \in cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc$, taking on value one if selected, and zero otherwise. Constraints \eqref{eq:uniqueC} ensure that a vertex belongs to at most one cycle/chain. An optimal solution to formulation $R^{\policy}(\varFirst, \oneCe)cdte$ finds a matching with the greatest number of matched pairs from the first stage after observing failure scenario $\tilde{\oneCe}$. Thus, by solving formulation \eqref{RecoPModel}, a new Constraint \eqref{eq:solsTwo} can be created. \cite{Blom2021} proposed to expand recourse solutions by including, in addition to the optimal non-failed cycles/chains found by formulation \eqref{RecoPModel}, failed cycles and chains while guaranteeing that the solution is still a matching. Although an expanded solution does not contribute to more recourse value under the failure scenario in consideration, it may imply other violated constraints. Consider two recourse solutions $\psi^{\star\tilde{\oneCe}} \in \setRecoSym^{\policy}(\varFirst, \oneCe)Trcdte$ and $\mathbf{y}OpSndSc \in \setRecoSym^{\policy}(\varFirst, \oneCe)cdte$, one in the transitory compatibility graph and another one in the second-stage compatibility graph, respectively, and also assume that $\mathbf{y}OpSndSc \subseteq \psi^{\star\tilde{\oneCe}}$. 
Then, constraint \eqref{eq:solsTwo} associated to $\mathbf{y}OpSndSc$ is directly implied by that of $\psi^{\star\tilde{\oneCe}}$, i.e., if the constraint corresponding to $\mathbf{y}OpSndSc$ is violated, so is the constraint corresponding to $\psi^{\star\tilde{\oneCe}}$. We find expanded recourse solutions by solving a deterministic KEP in the transitory graph $D^{\pi}(\varFirstPlain)c$ instead of the second-stage graph $D^{\pi}(\varFirstPlain, \bm{\oneCeS})c$ in formulation \eqref{RecoPModel}, and by assigning new weights to all cycles/chains $c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$ as in \citet{Blom2021}. The new weights, assigned to each cycle $c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$, are given by \begin{align} \label{expRecoObFcn} \mathbf{w_{\match}}(\varFirst)Expanded = \left\{ \begin{array}{ll} \mathbf{w_{\match}}(\varFirst)c \lvert V \rvert + 1 & \mbox{if } V(c) \subseteq V \setminus V_{\bm{\oneCeS}}\\ 1 & \mbox{otherwise } \end{array} \right. \end{align} We denote by $R^{\policy}(\varFirst, \oneCe)expc$ the resulting recourse problem with expanded solutions, \begin{subequations} \label{RecoPModelexp} \begin{align} R^{\policy}(\varFirst, \oneCe)expc: \max_{y} \quad& \sum_{c \in cSetSndTr \cup \mathcal{C}_{\chainCap}SndTr}\mathbf{w_{\match}}(\varFirst)Expanded y_{c} \label{RecoP:Objexp} \tag{RE}\\ & \sum_{c: u \in V(c)} y_{c} \le 1 & u \in V \label{eq:uniqueCexp}\\ &y_{c} \in \{0,1\} & c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc \end{align} \end{subequations} The following lemma, based on \citet{Blom2021}, states that the set of cycles/chains that do not fail in a recourse solution optimal to $R^{\policy}(\varFirst, \oneCe)expc$ is an optimal recourse solution to the original recourse problem $R^{\policy}(\varFirst, \oneCe)cdte$. 
\begin{lemma} \label{lemma:exp} For a recourse solution $\psi^{\star\tilde{\oneCe}} \in \setRecoSym^{\policy}(\varFirst, \oneCe)Trcdte$ that is optimal to $R^{\policy}(\varFirst, \oneCe)expc$, its set of non-failed cycles/chains $\psi^{\star\tilde{\oneCe}}cdte \subseteq \psi^{\star\tilde{\oneCe}}$ under scenario $\tilde{\oneCe} \in \Gammac$, is an optimal recourse solution to $R^{\policy}(\varFirst, \oneCe)cdte$. \end{lemma} \proof Let ${Z}^{\pi,\star}_{RE}(\varFirstc, \tilde{\oneCe})$ be the optimal objective value attained by solution $\psi^{\star\tilde{\oneCe}} \in \setRecoSym^{\policy}(\varFirst, \oneCe)Trcdte$, and note that $\lvert V \rvert/2$ is the maximum number of cycles/chains that can appear in a feasible matching, i.e., the maximum number of decision variables for which $\psi^{\star\tilde{\oneCe}}_{c} = 1 \ \forall c \in cSetSndTrc \cup \mathcal{C}_{\chainCap}SndTrc$. Moreover, let $\mathbbm{1}_{c}$ be an indicator variable that takes on value one if $\psi^{\star\tilde{\oneCe}}_{c} = 1$ for $c \notin cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc$, and zero otherwise.
Then, from \eqref{expRecoObFcn} we know that \begin{subequations} \begin{align} {Z}^{\pi,\star}_{RE}(\varFirstc, \tilde{\oneCe}) = & \sum_{c \in cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc: \psi^{\star\tilde{\oneCe}}cdte_{c} = 1} \left( \mathbf{w_{\match}}(\varFirst)c \lvert V \rvert + 1 \right) + \sum_{c \notin cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc: \psi^{\star\tilde{\oneCe}}_{c} = 1} \mathbbm{1}_{c} \\ =&\lvert V \rvert \sum_{c \in cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc: \psi^{\star\tilde{\oneCe}}cdte_{c} = 1} \mathbf{w_{\match}}(\varFirst)c + \sum_{i = 1}^{\lvert V \rvert/2} 1 + \sum_{i = 1}^{\lvert V \rvert/2} 1\\ =&z^{\star}\lvert V \rvert + \lvert V \rvert \label{result} \end{align} \end{subequations} Now, suppose $\psi^{\star\tilde{\oneCe}}cclt$ is not optimal to $R^{\policy}(\varFirst, \oneCe)cdte$; then $\sum_{c \in cSetSndSc \cup \mathcal{C}_{\chainCap}SndSc: \psi^{\star\tilde{\oneCe}}_{c} = 1} \mathbf{w_{\match}}(\varFirst)c$ can be at most $z^{\star} - 1$. In that case, replacing $z^{\star}$ in \eqref{result} by $z^{\star} - 1$, we obtain that ${Z}^{\pi,\star}_{RE}(\varFirstc, \tilde{\oneCe}) = (z^{\star} - 1)\lvert V \rvert + \lvert V \rvert = z^{\star}\lvert V \rvert$, which is a contradiction. Thus, it follows that $\psi^{\star\tilde{\oneCe}}cclt$ must be optimal to $R^{\policy}(\varFirst, \oneCe)cdte$. $\square$ It is worth noting that we formulated the recourse problem by means of cycle-and-chain decision variables, but since it reduces to a deterministic KEP, the recourse problem can be solved via multiple formulations/algorithms from the literature, e.g., \citet{Omer2022, Blom2021, Riascos2020}. \section{Additional results for non-homogeneous failure} \label{CycleLengthFourNonHomoFail} The data in Table \ref{tab:policiesk4} includes all runs, optimal or not, except for column HSP(\%), where only instances that were solved to optimality are included.
The displayed columns are interpreted as in Table \ref{tab:policiesk3}. \begin{table}[htpb] \caption{Policy comparison for $cCap = 4$ under HSA\_MB and non-homogeneous failure.} \label{tab:policiesk4} \begin{adjustbox}{width=\textwidth} \centering \input{policies_table_K4_aBudget_5_10} \end{adjustbox} \end{table} \end{document}
\begin{document} \title{Quadratic homogeneous polynomial maps $H$ and Keller maps $x+H$ with $3 \le \operatorname{rk} {\mathcal{J}} H \le 4$} \author{Michiel de Bondt} \maketitle \begin{abstract} We compute by hand all quadratic homogeneous polynomial maps $H$ and all Keller maps of the form $x + H$, for which $\operatorname{rk} {\mathcal{J}} H = 3$, over a field of arbitrary characteristic. Furthermore, we use computer support to compute Keller maps of the form $x + H$ with $\operatorname{rk} {\mathcal{J}} H = 4$, namely: \begin{compactitem} \item all such maps in dimension $5$ over fields with $\frac12$; \item all such maps in dimension $6$ over fields without $\frac12$. \end{compactitem} We use these results to prove the following over fields of arbitrary characteristic: for Keller maps $x + H$ for which $\operatorname{rk} {\mathcal{J}} H \le 4$, the rows of ${\mathcal{J}} H$ are dependent over the base field. \end{abstract} \section{Introduction} Let $n$ be a positive integer and let $x = (x_1,x_2,\ldots,x_n)$ be an $n$-tuple of variables. We write $a|_{b=c}$ for the result of substituting $b$ by $c$ in $a$. Let $K$ be any field. In the scope of this introduction, denote by $L$ an unspecified (but big enough) field, which contains $K$ or even $K(x)$. For a polynomial or rational map $H = (H_1,H_2,\ldots,H_m) \in L^m$, write ${\mathcal{J}} H$ or ${\mathcal{J}}_x H$ for the Jacobian matrix of $H$ with respect to $x$. So $$ {\mathcal{J}} H = {\mathcal{J}}_x H = \left(\begin{array}{cccc} \parder{}{x_1} H_1 & \parder{}{x_2} H_1 & \cdots & \parder{}{x_n} H_1 \\ \parder{}{x_1} H_2 & \parder{}{x_2} H_2 & \cdots & \parder{}{x_n} H_2 \\ \vdots & \vdots & \ddots & \vdots \\ \parder{}{x_1} H_m & \parder{}{x_2} H_m & \cdots & \parder{}{x_n} H_m \end{array}\right) $$ Denote by $\operatorname{rk} M$ the rank of a matrix $M$, whose entries are contained in $L$, and write $\operatorname{trdeg}_K L$ for the transcendence degree of $L$ over $K$.
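As a small illustration of this notation (our own example, not taken from the text), the quadratic homogeneous map $H = (x_1 x_2, x_2 x_3)$ with $n = 3$ and $m = 2$ has

```latex
\mathcal{J} H = \left(\begin{array}{ccc}
  x_2 & x_1 & 0   \\
  0   & x_3 & x_2
\end{array}\right),
\qquad
\operatorname{rk} \mathcal{J} H = 2 .
```

Here $x_1 x_2$ and $x_2 x_3$ are algebraically independent, so $\operatorname{rk} {\mathcal{J}} H = 2 = \operatorname{trdeg}_K K(H)$.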
It is known that $\operatorname{rk} {\mathcal{J}} H \le \operatorname{trdeg}_K K(H)$ for a rational map $H$ of any degree, with equality if $K(H) \subseteq K(x)$ is separable, in particular if $K$ has characteristic zero. This is proved in \cite[Th.\@ 1.3]{1501.06046}, see also \cite[Ths.\@ 10, 13]{DBLP:conf/mfcs/PandeySS16}. Let a \emph{Keller map} be a polynomial map $F \in K[x]^n$ for which $\det {\mathcal{J}} F \in K \setminus \{0\}$. If $H \in K[x]^n$ is homogeneous of degree at least $2$, then $x + H$ is a Keller map if and only if ${\mathcal{J}} H$ is nilpotent. We say that a matrix $M \in \operatorname{Mat}_n(L)$ is \emph{similar over $K$} to a matrix $\tilde{M} \in \operatorname{Mat}_n(L)$, if there exists a $T \in \operatorname{GL}_n(K)$ such that $\tilde{M} = T^{-1}MT$. If ${\mathcal{J}} H$ is similar over $K$ to a triangular matrix, say that $T^{-1}({\mathcal{J}} H)T$ is a triangular matrix, then $$ {\mathcal{J}} \big(T^{-1}H(Tx)\big) = T^{-1}({\mathcal{J}} H)|_{x=Tx}T $$ is triangular as well. Section $2$ is about quadratic homogeneous maps $H$ in general, with the focus on compositions with invertible linear maps, and the invariant $r = \operatorname{rk} {\mathcal{J}} H$. In section $3$, a classification is given for the case $r = 3$. In section $4$, a classification is given for the case $r = 3$, combined with the nilpotency of ${\mathcal{J}} H$. Nilpotency is an invariant of conjugations with invertible linear maps, but is not an invariant of compositions with invertible linear maps in general. In section $5$, we compute all Keller maps $x + H$ with $H$ quadratic homogeneous, for which $\operatorname{rk} {\mathcal{J}} H = 4$ in dimension $5$ over fields with $\frac12$, and in dimension $6$ over fields without $\frac12$.
We use these results to prove the following over fields of arbitrary characteristic: for Keller maps $x + H$ with $H$ quadratic homogeneous, for which $\operatorname{rk} {\mathcal{J}} H \le 4$, the rows of ${\mathcal{J}} H$ are dependent over the base field. \section{rank {\mathversion{bold}$r$}} \begin{theorem} \label{rkr} Let $H \in K[x]^m$ be a quadratic homogeneous polynomial map, and $r := \operatorname{rk} {\mathcal{J}} H$. Then there are $S \in \operatorname{GL}_m(K)$ and $T \in \operatorname{GL}_n(K)$, such that for $\tilde{H} := S H(Tx)$, only the first $\frac12 r^2 + \frac12 r$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero, and one of the following statements holds: \begin{enumerate}[\upshape(1)] \item Only the first $\frac12 r^2 - \frac12 r + 1$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero; \item $\operatorname{char} K \ne 2$ and only the first $r$ columns of ${\mathcal{J}} \tilde{H}$ are nonzero; \item $\operatorname{char} K = 2$ and only the first $r+1$ columns of ${\mathcal{J}} \tilde{H}$ are nonzero. \end{enumerate} Conversely, $\operatorname{rk} {\mathcal{J}} \tilde{H} \le r$ if either $\tilde{H}$ is as in {\upshape(2)} or {\upshape(3)}, or $1 \le r \le 2$ and $\tilde{H}$ is as in {\upshape(1)}. \end{theorem} \begin{corollary} \label{rk4} Let $H \in K[x]^m$ be a quadratic homogeneous polynomial map. Suppose that $r := \operatorname{rk} {\mathcal{J}} H \le 4$. 
Then there are $S \in \operatorname{GL}_m(K)$ and $T \in \operatorname{GL}_n(K)$, such that for $\tilde{H} := S H(Tx)$, only the first $\frac12 r^2 + \frac12 r$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero, and one of the following statements holds: \begin{enumerate}[\upshape (1)] \item Only the first $r+1$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero; \item $r = 4$, $\tilde{H}_i \in K[x_1,x_2,x_3]$ for all $i \ge 2$, and $\operatorname{char} K \neq 2$; \item $r = 4$, $\tilde{H}_i \in K[x_1,x_2,x_3,x_4,x_5^2,x_6^2,\ldots,x_n^2]$ for all $i \ge 2$, and $\operatorname{char} K = 2$; \item Only the first $r+1$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero; \item $r = 4$, only the leading principal minor matrix of size $6$ of ${\mathcal{J}} \tilde{H}$ is nonzero, and $\operatorname{char} K = 2$. \end{enumerate} \end{corollary} \begin{lemma} \label{23} If $\tilde{H}$ is as in {\upshape(2)} or {\upshape(3)} of theorem {\upshape\ref{rkr}}, then theorem {\upshape\ref{rkr}} holds for $H$. \end{lemma} \begin{proof} The number of terms of degree $d$ in $x_1, x_2, \ldots, x_r$ is $$ \binom{r-1+d}{d} $$ and the number of square-free terms of degree $d$ in $x_1, x_2, \ldots, x_r, x_{r+1}$ is $$ \binom{r+1}{d} $$ both of which equal $\frac12 r^2 + \frac12 r$ if $d = 2$. If $\tilde{H}$ is as in (2) of theorem \ref{rkr}, then $\operatorname{char} K \ne 2$ and the rows of ${\mathcal{J}} \tilde{H}$ are dependent over $K$ on the $\frac12 r^2 + \frac12 r$ rows of $$ {\mathcal{J}} (x_1^2,x_1 x_2, \ldots, x_r^2) $$ If $\tilde{H}$ is as in (3) of theorem \ref{rkr}, then $\operatorname{char} K = 2$ and the rows of ${\mathcal{J}} \tilde{H}$ are dependent over $K$ on the $\frac12 r^2 + \frac12 r$ rows of $$ {\mathcal{J}} (x_1 x_2, x_1 x_3, \ldots, x_r x_{r+1}) $$ So at most $\frac12 r^2 + \frac12 r$ rows of ${\mathcal{J}} \tilde{H}$ as in theorem \ref{rkr} need to be nonzero. If $\tilde{H}$ is as in (2) of theorem \ref{rkr}, then $\operatorname{rk} {\mathcal{J}} \tilde{H} \le r$ is direct.
If $\tilde{H}$ is as in (3) of theorem \ref{rkr}, then $\operatorname{rk} {\mathcal{J}} \tilde{H} \le r$ follows as well, because ${\mathcal{J}} \tilde{H} \cdot x = 0$ if $\operatorname{char} K = 2$. \end{proof} \begin{lemma} \label{Irlem} Let $M$ be a nonzero matrix whose entries are linear forms in $K[x]$. Suppose that $r := \operatorname{rk} M$ does not exceed the cardinality of $K$. Then there are invertible matrices $S$ and $T$ over $K$, such that for $\tilde{M} := S M T$, $$ \tilde{M} = \tilde{M}^{(1)} L_1 + \tilde{M}^{(2)} L_2 + \cdots + \tilde{M}^{(n)} L_n $$ where $\tilde{M}^{(i)}$ is a matrix with coefficients in $K$ for each $i$, $L_1, L_2, \ldots, L_n$ are independent linear forms, and $$ \tilde{M}^{(1)} = \left( \begin{array}{cc} I_r & \zeromat \\ \zeromat & \zeromat \\ \end{array} \right) $$ \end{lemma} \begin{proof} Since $\operatorname{rk} M = r \ge 1$, $M$ has a minor matrix of size $r \times r$ whose determinant is nonzero. Assume without loss of generality that the determinant of the leading principal minor matrix of size $r \times r$ of $M$ is nonzero. Then this determinant $f$ is a homogeneous polynomial of degree $r$. From \cite[Lemma 5.1 (ii)]{1310.7843}, it follows that there exists a $v \in K^n$ such that $f(v) \ne 0$. Take independent linear forms $L_1, L_2, \ldots, L_n$ such that $L_i(v) = 0$ for all $i \ge 2$. Then $L_1(v) \ne 0$, and we may assume that $L_1(v) = 1$. We can write $$ M = M^{(1)} L_1 + M^{(2)} L_2 + \cdots + M^{(n)} L_n $$ where $M^{(i)}$ is a matrix with coefficients in $K$ for each $i$. If we substitute $x = v$ on both sides, we see that $\operatorname{rk} M^{(1)} \le r$ and that the leading principal minor matrix of size $r \times r$ of $M^{(1)}$ has a nonzero determinant. 
So $\operatorname{rk} M^{(1)} = r$, and we can choose invertible matrices $S$ and $T$ over $K$, such that $$ S M^{(1)} T = \left( \begin{array}{cc} I_r & \zeromat \\ \zeromat & \zeromat \\ \end{array} \right) $$ So we can take $\tilde{M}^{(i)} = S M^{(i)} T$ for each $i$. \end{proof} Suppose that $\tilde{M}$ is as in lemma \ref{Irlem}. Write \begin{equation} \label{ABCD} \tilde{M} = \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \end{equation} where $A$ is the leading principal minor matrix of size $r$ of $\tilde{M}$. If we extend $A$ with one row and one column of $\tilde{M}$, we get an element of $\operatorname{Mat}_{r+1}\big(K(x)\big)$ whose determinant is zero. If we focus on the coefficients of $L_1^r$ and $L_1^{r-1}$ of this determinant, we see that \begin{equation} \label{D0CB0} D = 0 \qquad \mbox{and} \qquad C \cdot B = 0 \end{equation} respectively. In particular, \begin{equation} \label{rkCrkBler} \operatorname{rk} C + \operatorname{rk} B \le r \end{equation} \begin{lemma} \label{Ccoldep1} Let $\tilde{H} \in K[x]^m$, such that ${\mathcal{J}} \tilde{H}$ is as $\tilde{M}$ in lemma \ref{Irlem}. Suppose that either $\operatorname{char} K \ne 2$ or the rows of $B$ are dependent over $K$. Then the following hold. \begin{enumerate}[\upshape (i)] \item The columns of $C$ in \eqref{ABCD} are dependent over $K$. \item If $C \ne 0$, then there exists a $v \in K^n$ of which the first $r$ coordinates are not all zero, such that $$ ({\mathcal{J}} \tilde{H}) \cdot v = \left( \begin{array}{cc} I_r & \zeromat \\ \zeromat & \zeromat \end{array} \right) \cdot x $$ \end{enumerate} \end{lemma} \begin{proof} The claims are direct if $C = 0$, so assume that $C \ne 0$. Then there exists an $i > r$, such that $\tilde{H}_i \ne 0$. \begin{enumerate}[\upshape (i)] \item Take $v$ as in (ii). From $D = 0$, we deduce that $C \cdot v' = (C | D) \cdot v = 0$, where $v' \in K^r$ consists of the first $r$ coordinates of $v$, which are not all zero. This yields (i). 
\item Take $v$ as in lemma \ref{Irlem}, and write $v = (v',v'')$, such that $v' \in K^r$ and $v'' \in K^{n-r}$. Since $\tilde{H}$ is quadratic homogeneous, we have $$ ({\mathcal{J}} \tilde{H}) \cdot v = ({\mathcal{J}} \tilde{H})|_{x=v} \cdot x = \tilde{M}^{(1)} \cdot x = \left( \begin{array}{cc} I_r & \zeromat \\ \zeromat & \zeromat \end{array} \right) \cdot x $$ So it remains to show that $v' \ne 0$. We distinguish two cases: \begin{itemize} \item \emph{the rows of $B$ are dependent over $K$.} Take $w \in K^r$ nonzero, such that $w^{\rm t} B = 0$. Then $$ w^{\rm t} A\,v' = w^{\rm t} A\,v' + w^{\rm t} B\,v'' = w^{\rm t} (A|B)\,v = w^{\rm t} \left( \begin{smallmatrix} x_1 \\ x_2 \\[-5pt] \vdots \\ x_r \end{smallmatrix} \right) = \sum_{i=1}^r w_i x_i \ne 0 $$ so $v' \ne 0$. \item \emph{$\operatorname{char} K \ne 2$.} From $CB = 0$, we deduce that $$ CA\,v' = CA\,v' + CB\,v'' = C\,(A|B)\,v = C \left( \begin{smallmatrix} x_1 \\ x_2 \\[-5pt] \vdots \\ x_r \end{smallmatrix} \right) = 2 \left( \begin{smallmatrix} \tilde{H}_{r+1} \\ \tilde{H}_{r+2} \\[-5pt] \vdots \\ \tilde{H}_{m} \end{smallmatrix} \right) $$ As $\tilde{H}_{i} \ne 0$ for some $i > r$, the right-hand side is nonzero, so $v' \ne 0$. \qedhere \end{itemize} \end{enumerate} \end{proof} \begin{lemma} \label{BCrkr} Let $\tilde{H} \in K[x]^m$, such that ${\mathcal{J}} \tilde{H}$ is as $\tilde{M}$ in lemma \ref{Irlem}. Suppose that $\operatorname{rk} B + \operatorname{rk} C = r$ and that the columns of $C$ are dependent over $K$. Then the column space of $B$ contains a nonzero constant vector. \end{lemma} \begin{proof} From $\operatorname{rk} C + \operatorname{rk} B = r$ and $CB = 0$, we deduce that $\ker C$ is equal to the column space of $B$. Hence any $v' \in K^r$ such that $C v' = 0$ is contained in the column space of $B$. Since the columns of $C$ are dependent over $K$, such a nonzero $v'$ exists, so the column space of $B$ contains a nonzero constant vector. \end{proof} \begin{proof}[Proof of theorem \ref{rkr}] From lemma \ref{Fqlem} below, it follows that we may assume that $K$ has at least $r$ elements. 
Let $M = {\mathcal{J}} H$ and take $S$ and $T$ as in lemma \ref{Irlem}. Then $S ({\mathcal{J}} H) T$ is as $\tilde{M}$ in lemma \ref{Irlem}. Let $\tilde{H} := SH(Tx)$. Then ${\mathcal{J}} \tilde{H} = S ({\mathcal{J}} H) |_{x=Tx} T$ is as $\tilde{M}$ in lemma \ref{Irlem} as well, but for different linear forms $L_i$. Take $\tilde{M} = {\mathcal{J}} \tilde{H}$ and take $A$, $B$, $C$, $D$ as in \eqref{ABCD}. We distinguish four cases: \begin{itemize} \item \emph{The column space of $B$ contains a nonzero constant vector.} Then there exists an $U \in \operatorname{GL}_m(K)$, such that the column space of $U \tilde{M}$ contains $e_1$, because $D = 0$. Consequently, the matrix which consists of the last $m-1$ rows of ${\mathcal{J}} (U \tilde{H}) = U \tilde{M}$ has rank $r-1$. By induction on $r$, it follows that we can choose $U$ such that only $$ \tfrac12(r-1)^2 + \tfrac12(r-1) = (\tfrac12 r^2 - r + \tfrac12) + (\tfrac12 r - \tfrac12) = \tfrac12 r^2 - \tfrac12 r $$ rows of ${\mathcal{J}} (U \tilde{H})$ are nonzero besides the first row of ${\mathcal{J}} (U \tilde{H})$. So $U \tilde{H}$ is as $\tilde{H}$ in (1) of theorem \ref{rkr}. \item \emph{The rows of $B$ are dependent over $K$ in pairs.} If $B \ne 0$, then the column space of $B$ contains a nonzero constant vector, and the case above applies because $D = 0$. So assume that $B = 0$. Then only the first $r$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero, because $D = 0$. Since $\operatorname{rk} {\mathcal{J}} \tilde{H} = r$, the first $r$ columns of ${\mathcal{J}} \tilde{H}$ are indeed nonzero. Furthermore, it follows from ${\mathcal{J}} \tilde{H} \cdot x = 2 \tilde{H}$ that $\operatorname{char} K \neq 2$: if $\operatorname{char} K = 2$, then ${\mathcal{J}} \tilde{H} \cdot x = 0$ would be a nontrivial dependence between the first $r$ columns of ${\mathcal{J}} \tilde{H}$, contradicting $\operatorname{rk} {\mathcal{J}} \tilde{H} = r$. So $\tilde{H}$ is as in (2) of theorem \ref{rkr}, and the result follows from lemma \ref{23}. 
\item \emph{$\operatorname{char} K = 2$ and $\operatorname{rk} B \le 1$.} If the rows of $B$ are dependent over $K$ in pairs, then the second case above applies, so assume that the rows of $B$ are not dependent over $K$ in pairs. On account of \cite[Theorem 2.1]{1601.00579}, the columns of $B$ are dependent over $K$ in pairs. As $D = 0$, there exists an $U'' \in \operatorname{GL}_{n-r}(K)$ such that only the first column of $$ \binom{B}{D} U'' $$ may be nonzero. Hence there exists an $U \in \operatorname{GL}_n(K)$ such that only the first $r+1$ columns of $({\mathcal{J}} \tilde{H})\, U$ may be nonzero. Consequently, $\tilde{H}(Ux)$ is as $\tilde{H}$ in (3) of theorem \ref{rkr}, and the result follows from lemma \ref{23}. \item \emph{None of the above.} We first show that $\operatorname{rk} C \le r - 2$. So assume that $\operatorname{rk} C \ge r - 1$. From $\operatorname{rk} C + \operatorname{rk} B \le r$, it follows that $\operatorname{rk} B \le 1$. As the last case above does not apply, $\operatorname{char} K \ne 2$. From (i) of lemma \ref{Ccoldep1}, it follows that the columns of $C$ are dependent over $K$. As the first case above does not apply, it follows from lemma \ref{BCrkr} that $\operatorname{rk} C + \operatorname{rk} B < r$. So $\operatorname{rk} B = 0$ and the rows of $B$ are dependent over $K$ in pairs, which is the second case above, and a contradiction. So $\operatorname{rk} C \le r - 2$ indeed. By induction on $r$, it follows that $C$ needs to have at most $$ \tfrac12 (r-2)^2 + \tfrac12 (r-2) = (\tfrac12 r^2 - 2r + 2) + (\tfrac12 r - 1) = \tfrac12 r^2 - \tfrac32 r + 1 $$ nonzero rows. As $A$ has $r$ rows, there exists an $U \in \operatorname{GL}_m(K)$ such that $U\tilde{H}$ is as $\tilde{H}$ in (1) of theorem \ref{rkr}. \end{itemize} The last claim of theorem \ref{rkr} follows from lemma \ref{23} and the fact that $\frac12 r^2 - \frac12 r + 1 = r$ if $1 \le r \le 2$. \end{proof} \begin{lemma} \label{Fqlem} Let $L$ be an extension field of $K$. 
If theorem {\upshape\ref{rkr}} holds for $L$ instead of $K$, then theorem {\upshape\ref{rkr}} holds. \end{lemma} \begin{proof} We only prove lemma \ref{Fqlem} for the first claim of theorem \ref{rkr}, because the second claim can be treated in a similar manner, and the last claim does not depend on the actual base field. Suppose $H$ satisfies the first claim of theorem \ref{rkr}, but with $L$ instead of $K$. If $m \le \frac12 r^2 + \frac12 r$, then only the first $\frac12 r^2 + \frac12 r$ rows of ${\mathcal{J}} H$ may be nonzero, and $H$ satisfies the first claim of theorem \ref{rkr}. So assume that $m > \frac12 r^2 + \frac12 r$. Then the rows of ${\mathcal{J}} H$ are dependent over $L$. Since $L$ is a vector space over $K$, the rows of ${\mathcal{J}} H$ are dependent over $K$. So we may assume that the last row of ${\mathcal{J}} H$ is zero. By induction on $m$, $(H_1,H_2,\ldots,H_{m-1})$ satisfies the first claim for $H$ in theorem \ref{rkr}. As $H_m = 0$, we conclude that $H$ satisfies the first claim of theorem \ref{rkr}. \end{proof} \begin{proof}[Proof of corollary \ref{rk4}] If (2) or (3) of theorem \ref{rkr} applies, then (4) of corollary \ref{rk4} follows. So assume that (1) of theorem \ref{rkr} applies. Then only the first $\tfrac12 r^2 - \tfrac12 r + 1$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero. Suppose first that $r \le 3$. Then $$ \tfrac12 r^2 - \tfrac12 r + 1 \le \tfrac32 r - \tfrac12 r + 1 = r + 1 $$ and corollary \ref{rk4} follows. Suppose next that $r = 4$. Take $\tilde{M} = {\mathcal{J}} \tilde{H}$ and take $A$, $B$, $C$, $D$ as in \eqref{ABCD}. We distinguish four cases. \begin{itemize} \item \emph{The column space of $B$ contains a nonzero constant vector.} Then there exists an $U \in \operatorname{GL}_m(K)$, such that the column space of $U \tilde{M}$ contains $e_1$. So the matrix which consists of the last $m-1$ rows of ${\mathcal{J}} (U \tilde{H}) = U \tilde{M}$ has rank $r-1$. 
Make $\tilde{U}$ from $U$ by replacing its first row by the zero row. Then $\operatorname{rk} {\mathcal{J}} (\tilde{U} \tilde{H}) = r - 1$, and we can apply theorem \ref{rkr} to $\tilde{U} \tilde{H}$. \begin{compactitem} \item If case (1) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$, then case (1) of corollary \ref{rk4} follows, because $$ \tfrac12 (r-1)^2 - \tfrac12 (r-1) + 1 = \tfrac92 - \tfrac32 + 1 = 4 = r $$ \item If case (2) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$, then case (2) of corollary \ref{rk4} follows. \item If case (3) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$, then case (3) of corollary \ref{rk4} follows. \end{compactitem} \item \emph{$\operatorname{rk} B \le 1$.} If the columns of $B$ are dependent over $K$ in pairs, then (4) of corollary \ref{rk4} is satisfied. So assume that the columns of $B$ are not dependent over $K$ in pairs. Then $B \ne 0$, and from \cite[Theorem 2.1]{1601.00579}, it follows that the rows of $B$ are dependent over $K$ in pairs. Hence the first case above applies. \item \emph{$\operatorname{rk} C \le 1$.} Then it follows from theorem \ref{rkr} that at most one row of $C$ needs to be nonzero. So (1) of corollary \ref{rk4} is satisfied. \item \emph{None of the above.} We first show that $\operatorname{rk} C = \operatorname{rk} B = 2$ and that the columns of $C$ are independent over $K$. Since $\operatorname{rk} C + \operatorname{rk} B \le r = 4$, we deduce from $\operatorname{rk} B > 1$ and $\operatorname{rk} C > 1$ that $\operatorname{rk} C = \operatorname{rk} B = 2$. So $\operatorname{rk} C + \operatorname{rk} B = 4 = r$. As the first case above does not apply, it follows from lemma \ref{BCrkr} that the columns of $C$ are independent over $K$. From (i) of lemma \ref{Ccoldep1}, we deduce that $\operatorname{char} K = 2$ and that the rows of $B$ are independent over $K$. 
Since the $\operatorname{rk} C + 2$ columns of $C$ are independent over $K$, it follows from theorem \ref{rkr} that $C$ needs to have at most $$ \tfrac12 \cdot 2^2 - \tfrac12 \cdot 2 + 1 = 2 - 1 + 1 = 2 $$ nonzero rows. Since the $\operatorname{rk} B + 2$ rows of $B$ are independent over $K$, it follows from \cite[Theorem 2.3]{1601.00579} that $B$ needs to have at most $2$ nonzero columns, because the first case above does not apply. So (5) of corollary \ref{rk4} is satisfied. \qedhere \end{itemize} \end{proof} The last case in corollary \ref{rk4} is indeed necessary, e.g.\@ ${\mathcal{J}} \tilde{H} = {\mathcal{H}} (x_1 x_2 x_3 + x_4 x_5 x_6)$, or ${\mathcal{J}} \tilde{H} = {\mathcal{H}} (x_1 x_2 x_3 + x_1 x_5 x_6 + x_4 x_2 x_6 + x_4 x_5 x_3)$. \section{rank 3} \begin{theorem} \label{rk3} Let $H \in K[x]^m$ be a quadratic homogeneous polynomial map, such that $r := \operatorname{rk} {\mathcal{J}} H = 3$. Then we can choose $S \in \operatorname{GL}_m(K)$ and $T \in \operatorname{GL}_n(K)$, such that for $\tilde{H} := S H(Tx)$, one of the following statements holds: \begin{enumerate}[\upshape(1)] \item Only the first $3$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero; \item Only the first $4$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero, and $$ (\tilde{H}_1,\tilde{H}_2,\tilde{H}_3,\tilde{H}_4) = (\tilde{H}_1, \tfrac12 x_1^2, x_1 x_2, \tfrac12 x_2^2) $$ (in particular, $\operatorname{char} K \ne 2$); \item Only the first $4$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero, $$ {\mathcal{J}} (\tilde{H}_1,\tilde{H}_2,\tilde{H}_3,\tilde{H}_4) = {\mathcal{J}} (\tilde{H}_1, x_1 x_2, x_1 x_3, x_2 x_3) $$ and $\operatorname{char} K = 2$; \item $\tilde{H}$ is as in {\upshape(2)} or {\upshape(3)} of theorem {\upshape\ref{rkr}}; \item Only the first $4$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero, and $$ \big(\tilde{H}_1,\tilde{H}_2,\tilde{H}_3,\tilde{H}_4\big) = \big( x_1 x_3 + c x_2 x_4, x_2 x_3 - x_1 x_4, \tfrac12 x_3^2 + \tfrac{c}2 x_4^2, \tfrac12 x_1^2 + 
\tfrac{c}2 x_2^2 \big) $$ for some nonzero $c \in K$ (in particular, $\operatorname{char} K \ne 2$). \end{enumerate} Conversely, $\operatorname{rk} {\mathcal{J}} \tilde{H} \le 3$ in each of the five statements above. \end{theorem} \begin{corollary} \label{rktrdeg} Let $H \in K[x]^m$ be a quadratic homogeneous polynomial map, such that $\operatorname{rk} {\mathcal{J}} H \le 3$. If $\operatorname{char} K \neq 2$, then $\operatorname{rk} {\mathcal{J}} H = \operatorname{trdeg}_K K(H)$. \end{corollary} \begin{proof} Since $\operatorname{rk} {\mathcal{J}} H \le \operatorname{trdeg}_K K(H)$, it suffices to show that $\operatorname{trdeg}_K K(H) \le 3$ if $\operatorname{char} K \neq 2$. In (5) of theorem \ref{rk3}, we have $\operatorname{trdeg}_K K(H) \le 3$ because $$ \tilde{H}_1^2 + c \tilde{H}_2^2 - 4 \tilde{H}_3 \tilde{H}_4 = 0 $$ In the other cases of theorem \ref{rk3} where $\operatorname{char} K \neq 2$, $\operatorname{trdeg}_K K(H) \le 3$ follows directly. \end{proof} \begin{lemma} \label{Ccoldep2} Let $\tilde{H} \in K[x]^m$, such that ${\mathcal{J}} \tilde{H}$ is as $\tilde{M}$ in lemma \ref{Irlem}. If $\operatorname{rk} C = 1$ in \eqref{ABCD} and $r$ is odd, then the columns of $C$ in \eqref{ABCD} are dependent over $K$. \end{lemma} \begin{proof} The case where $\operatorname{char} K \ne 2$ follows from (i) of lemma \ref{Ccoldep1}, so assume that $\operatorname{char} K = 2$. Since $\operatorname{rk} C = 1 = \frac12 \cdot 1^2 + \frac12 \cdot 1$, we deduce from theorem \ref{rkr} that the rows of $C$ are dependent over $K$ in pairs. Say that the first row of $C$ is nonzero. As $r$ is odd, it follows from proposition \ref{evenrk} below that $\operatorname{rk} {\mathcal{H}} \tilde{H}_{r+1} < r$. Hence there exists a $v' \in K^r$ such that $({\mathcal{H}} \tilde{H}_{r+1}) \, v' = 0$, and $$ ({\mathcal{J}} \tilde{H}_{r+1}) \, v' = x^{\rm t} ({\mathcal{H}} \tilde{H}_{r+1}) \, v' = 0 $$ The row space of $C$ is spanned by ${\mathcal{J}} \tilde{H}_{r+1}$, so $C\,v' = 0$. 
\end{proof} \begin{proposition} \label{evenrk} Let $M \in \operatorname{Mat}_{n,n}(K)$ be either a symmetric matrix, or an anti-symmetric matrix with zeroes on the diagonal. Then there exists a lower triangular matrix $T \in \operatorname{Mat}_{n,n}(K)$ with ones on the diagonal, such that $T^{\rm t} M T$ is the product of a symmetric permutation matrix and a diagonal matrix. In particular, $\operatorname{rk} M$ is even if $M$ is an anti-symmetric matrix with zeroes on the diagonal. \end{proposition} \begin{proof} We describe an algorithm to transform $M$ to the product of a symmetric permutation matrix and a diagonal matrix. We distinguish three cases. \begin{itemize} \item \emph{The last column of $M$ is zero.} Advance with the principal minor of $M$ that we get by removing row and column $n$. \item \emph{The entry in the lower right corner of $M$ is nonzero.} Use $M_{nn}$ as a pivot to clean the rest of the last column and the last row of $M$. Advance with the principal minor of $M$ that we get by removing row and column $n$. \item \emph{None of the above.} Let $i$ be the index of the lowest nonzero entry in the last column of $M$. Use $M_{in}$ and $M_{ni}$ as pivots to clean the rest of columns $i$ and $n$ of $M$ and rows $i$ and $n$ of $M$. Advance with the principal minor of $M$ that we get by removing rows and columns $i$ and $n$. \end{itemize} If $M$ is an anti-symmetric matrix with zeroes on the diagonal, then so is $T^{\rm t} M T$. Combining this with the fact that $T^{\rm t} M T$ is the product of a symmetric permutation matrix and a diagonal matrix, we infer the last claim. \end{proof} Notice that in characteristic $2$, any Hessian matrix of a polynomial is \mbox{(anti-)}\allowbreak symmetric with zeroes on the diagonal, which has even rank by extension of scalars. 
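The parity statement in this remark can be illustrated computationally. The sketch below (Python; the function name is ours, not part of the text) computes ranks over the field with two elements by Gaussian elimination and checks that symmetric matrices with zeroes on the diagonal have even rank:

```python
# Illustrative check (function name is ours) that a symmetric matrix with
# zeroes on the diagonal -- e.g. a Hessian matrix in characteristic 2 -- has
# even rank over GF(2).
import random

def rank_gf2(rows):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    m = [row[:] for row in rows]
    rank = 0
    ncols = len(m[0]) if m else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(m)) if m[i][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for i in range(len(m)):
            if i != rank and m[i][col]:
                m[i] = [a ^ b for a, b in zip(m[i], m[rank])]  # XOR = GF(2) add
        rank += 1
    return rank

# Hessian of x1*x2 + x1*x3 + x2*x3 in characteristic 2: rank 2, which is even.
assert rank_gf2([[0, 1, 1], [1, 0, 1], [1, 1, 0]]) == 2

# Random symmetric matrices with zero diagonal always have even rank.
random.seed(0)
for _ in range(100):
    n = random.randint(1, 7)
    m = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m[i][j] = m[j][i] = random.randint(0, 1)
    assert rank_gf2(m) % 2 == 0
```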
Furthermore, one can show that any nondegenerate quadratic form in odd dimension $r$ over a perfect field of characteristic $2$ is equivalent to $$ x_1 x_2 + x_3 x_4 + \cdots + x_{r-2} x_{r-1} + x_r^2 $$ \begin{corollary} \label{symdiag} Let $\operatorname{char} K \neq 2$ and $M \in \operatorname{Mat}_{n,n}(K)$ be a symmetric matrix. Then there exists a $T \in \operatorname{GL}_{n}(K)$, such that $T^{\rm t} M T$ is a diagonal matrix. \end{corollary} \begin{proof} Notice that the permutation matrix $P$ in proposition \ref{evenrk} only has cycles of length $1$ and $2$, because just like $M$, the product of $P$ and the diagonal matrix $D$ is symmetric. Furthermore, for every cycle of length $2$, the entries on the diagonal of $D$ which correspond to the two coordinates of that cycle are equal. Since $$ \left( \begin{array}{cc} 1 & 1 \\ -1 & 1 \end{array} \right) \left( \begin{array}{cc} 0 & c \\ c & 0 \end{array} \right) \left( \begin{array}{cc} 1 & -1 \\ 1 & 1 \end{array} \right) = \left( \begin{array}{cc} 2c & 0 \\ 0 & -2c \end{array} \right) $$ we can get rid of the cycles of length $2$ in $P$. \end{proof} \begin{lemma} \label{rk3trafo} Suppose that $H \in K[x_1,x_2,x_3,x_4]^4$, such that $$ {\mathcal{J}} H_4 = (\, x_1 ~ c x_2 ~ 0 ~ 0 \,) \qquad \mbox{and} \qquad {\mathcal{J}} H \cdot v = \left( \begin{smallmatrix} x_1 \\ x_2 \\ x_3 \\ 0 \end{smallmatrix} \right) $$ for some nonzero $c \in K$, and a $v \in K^4$ of which the first $3$ coordinates are not all zero. Suppose in addition that $\det {\mathcal{J}} H = 0$ and that the last column of ${\mathcal{J}} H$ does not generate a nonzero constant vector. 
Then there are $S, T \in \operatorname{GL}_4(K)$, such that for $\tilde{H} := S H(Tx)$, ${\mathcal{J}} \tilde{H}$ is of the form $$ {\mathcal{J}} \tilde{H} = \left( \begin{array}{cccc} A_{11} & A_{12} & x_1 & c x_2 \\ A_{21} & A_{22} & x_2 & -x_1 \\ A_{31} & A_{32} & x_3 & \tilde{c}x_4 \\ x_1 & c x_2 & 0 & 0 \end{array} \right) $$ where the coefficients of $x_1$ in $A_{11}$, $A_{21}$, $A_{31}$ are zero. \end{lemma} \begin{proof} Since the last row of ${\mathcal{J}} H$ is $(\, x_1 ~ c x_2 ~ 0 ~ 0 \,)$ and the last coordinate of ${\mathcal{J}} \tilde{H} \cdot v$ is zero, we deduce that $v_1 = v_2 = 0$. As the first $3$ coordinates of $v$ are not all zero, we conclude that $v_3 \ne 0$. Make $S$ from $v_3 I_4$ by changing column $4$ to $e_4$. Make $T$ from $I_4$ by changing column $3$ to $v_3^{-1} v$, i.e.\@ replacing the last entry of column $3$ by $v_3^{-1} v_4$, and by replacing the last column by a nonzero scalar multiple of $e_4$. Then \begin{align*} ({\mathcal{J}} \tilde{H}) \cdot e_3 &= S\, ({\mathcal{J}} H)|_{x=Tx} \cdot T e_3 = \big(S\,({\mathcal{J}} H) \cdot v_3^{-1} v\big)\big|_{x=Tx} \\ &= v_3^{-1} S\, \left. \left( \begin{smallmatrix} x_1 \\ x_2 \\ x_3 \\ 0 \end{smallmatrix} \right) \right|_{x=Tx} = \left( \begin{smallmatrix} x_1 \\ x_2 \\ x_3 \\ 0 \end{smallmatrix} \right) \end{align*} and $$ \tilde{H}_4 = H_4(Tx) = H_4 $$ So the third column of ${\mathcal{J}} \tilde{H}$ is as claimed. It follows that ${\mathcal{J}} \tilde{H}$ is as $\tilde{M}$ in lemma \ref{Irlem}, with $L_1 = x_3$, $L_2 = x_2$, $L_3 = x_1$, and $L_4 = x_4$. Define $A$, $B$, $C$ and $D$ as in \eqref{ABCD}. Just like the last column of ${\mathcal{J}} H$, the last column of ${\mathcal{J}} \tilde{H}$ does not generate a nonzero constant vector. Consequently, $B_{11}$ and $B_{21}$ are not both zero. From \eqref{D0CB0}, it follows that $C_{11} B_{11} + C_{12} B_{21} = 0$. So we can choose the last column of $T$, such that $B_{11} = C_{12} = c x_2$ and $B_{21} = -C_{11} = -x_1$. 
The coefficient of $x_3$ in $B_{31}$ is zero, and by changing the third row of $I_4$ on the left of the diagonal in a proper way, we can get an $U \in \operatorname{GL}_4(K)$ such that $$ U \binom{B}{D} = \left( \begin{smallmatrix} B_{11} \\ B_{21} \\ \tilde{c} x_4 \\ D_{11} \end{smallmatrix} \right) = \left( \begin{smallmatrix} c x_2 \\ - x_1 \\ \tilde{c} x_4 \\ 0 \end{smallmatrix} \right) $$ for some $\tilde{c} \in K$. Since $U^{-1}$ can be obtained by changing the third row of $I_4$ on the left of the diagonal in a proper way as well, we infer that \begin{align*} {\mathcal{J}} \big(U^{-1} \tilde{H} (Ux)\big) \cdot e_3 &= U^{-1}\, ({\mathcal{J}} \tilde{H})|_{x = Ux} \,U \cdot e_3 = U^{-1}\, ({\mathcal{J}} \tilde{H})|_{x = Ux} \cdot e_3 \\ &= U^{-1} \left. \left( \begin{smallmatrix} x_1 \\ x_2 \\ x_3 \\ 0 \end{smallmatrix} \right) \right|_{x=Ux} = \left( \begin{smallmatrix} x_1 \\ x_2 \\ x_3 \\ 0 \end{smallmatrix} \right) \end{align*} and $$ (U^{-1})_4\, \tilde{H} (Ux) = \tilde{H}_4 (Ux) = \tilde{H}_4 $$ So if we replace $S$ and $T$ by $U^{-1} S$ and $TU$ respectively, then $\tilde{H}$ will be replaced by $U^{-1} \tilde{H}(Ux)$. So we can get $B_{31}$ of the form $\tilde{c}x_4$ for some $\tilde{c} \in K$. Finally, we can clean the coefficients of $x_1$ in $A_{11}$, $A_{21}$, $A_{31}$ with row operations, using $C_{11}$ as a pivot. So we can get the coefficients of $x_1$ in $A_{11}$, $A_{21}$, $A_{31}$ equal to zero. \end{proof} \begin{lemma} \label{rk3calc} Suppose that $\det {\mathcal{J}} \tilde{H} = 0$ and ${\mathcal{J}} \tilde{H}$ is of the form $$ \left( \begin{array}{cccc} A_{11} & A_{12} & x_1 & c x_2 \\ A_{21} & A_{22} & x_2 & -x_1 \\ A_{31} & A_{32} & x_3 & \tilde{c}x_4 \\ x_1 & c x_2 & 0 & 0 \end{array} \right) $$ where $c,\tilde{c} \in K$, such that $c \neq 0$. 
If the coefficients of $x_1$ in $A_{11}$, $A_{21}$, $A_{31}$ are zero, then $$ \tilde{H} = \left( \begin{array}{c} x_1 x_3 + c x_2 x_4 \\ x_2 x_3 - x_1 x_4 \\ \frac12 x_3^2 + \frac{c}2 x_4^2 \\ \frac12 x_1^2 + \frac{c}2 x_2^2 \end{array} \right) \qquad \mbox{and} \qquad {\mathcal{J}} \tilde{H} = \left( \begin{array}{cccc} x_3 & c x_4 & x_1 & c x_2 \\ - x_4 & x_3 & x_2 & -x_1 \\ 0 & 0 & x_3 & c x_4 \\ x_1 & c x_2 & 0 & 0 \end{array} \right) $$ \end{lemma} \begin{proof} Let $a_i$ and $b_i$ be the coefficients of $x_1$ and $x_2$ in $A_{i2}$ respectively, for each $i \le 3$. Then \begin{gather*} \tilde{H} = \left( \begin{array}{c} a_1 x_1 x_2 + \frac12 b_1 x_2^2 + x_1 x_3 + c x_2 x_4 \\ a_2 x_1 x_2 + \frac12 b_2 x_2^2 + x_2 x_3 - x_1 x_4 \\ a_3 x_1 x_2 + \frac12 b_3 x_2^2 + \frac12 x_3^2 + \frac{\tilde{c}}2 x_4^2 \\ \frac12 x_1^2 + \frac{c}2 x_2^2 \end{array} \right) \qquad \mbox{and} \\ {\mathcal{J}} \tilde{H} = \left( \begin{array}{cccc} a_1 x_2 + x_3 & a_1 x_1 + b_1 x_2 + c x_4 & x_1 & c x_2 \\ a_2 x_2 - x_4 & a_2 x_1 + b_2 x_2 + x_3 & x_2 & -x_1 \\ a_3 x_2 & a_3 x_1 + b_3 x_2 & x_3 & \tilde{c} x_4 \\ x_1 & c x_2 & 0 & 0 \end{array} \right) \end{gather*} Consequently, it suffices to show that $a_i = b_i = 0$ for each $i \le 3$, and that $\tilde{c} = c$. Since the coefficient of $x_1^4$ in $\det {\mathcal{J}} \tilde{H}$ is zero, we see by expansion along rows $3$, $4$, $1$, in that order, that $a_3 x_1 = 0$. Hence the third row of ${\mathcal{J}} \tilde{H}$ reads $$ {\mathcal{J}} \tilde{H}_3 = (\, 0 ~ b_3 x_2 ~ x_3 ~ \tilde{c} x_4 \,) $$ Since the coefficients of $x_1^3 x_2$ and $x_1^3 x_3$ in $\det {\mathcal{J}} \tilde{H}$ are zero, we see by expansion along rows $3$, $4$, $1$, in that order, that $b_3 x_2 = a_1 x_1 = 0$. 
Hence the third row of ${\mathcal{J}} \tilde{H}$ reads $$ {\mathcal{J}} \tilde{H}_3 = (\, 0 ~ 0 ~ x_3 ~ \tilde{c} x_4 \,) $$ Since the coefficient of $x_2^3 x_3$ in $\det {\mathcal{J}} \tilde{H}$ is zero, we see by expansion along rows $3$, $4$, $2$, in that order, that $a_2 x_2 = 0$. So $$ {\mathcal{J}} \tilde{H} = \left( \begin{array}{cccc} x_3 & b_1 x_2 + c x_4 & x_1 & c x_2 \\ - x_4 & b_2 x_2 + x_3 & x_2 & -x_1 \\ 0 & 0 & x_3 & \tilde{c} x_4 \\ x_1 & c x_2 & 0 & 0 \end{array} \right) $$ Since the coefficient of $x_1^2 x_3 x_4$ in $\det {\mathcal{J}} \tilde{H}$ is zero, we see by expansion along row $3$, and columns $2$ and $1$, in that order, that $\tilde{c} x_4 = c x_4$. Since the coefficient of $x_1 x_2^2 x_3$ in $\det {\mathcal{J}} \tilde{H}$ is zero, we see by expansion along row $3$, and columns $1$, $4$, in that order, that $b_2 x_2 = 0$. Using this and the fact that the coefficient of $x_1^2 x_2 x_3$ in $\det {\mathcal{J}} \tilde{H}$ is zero, we see by expansion along row $3$, and columns $1$, $2$, in that order, that $b_1 x_2 = 0$. So $\tilde{H}$ is as claimed. \end{proof} \begin{proof}[Proof of theorem \ref{rk3}] Take $\tilde{M} = {\mathcal{J}} \tilde{H}$ and take $A$, $B$, $C$, $D$ as in \eqref{ABCD}. We distinguish three cases: \begin{itemize} \item \emph{The column space of $B$ contains a nonzero constant vector.} Take $U$ and $\tilde{U}$ as in the corresponding case in the proof of corollary \ref{rk4}. \begin{compactitem} \item If case (1) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$, then case (1) of theorem \ref{rk3} follows, because $$ \tfrac12 (r-1)^2 - \tfrac12 (r-1) + 1 = \tfrac42 - \tfrac22 + 1 = 2 = r - 1 $$ \item If case (2) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$, then case (1) or case (2) of theorem \ref{rk3} follows. \item If case (3) of theorem \ref{rkr} applies for $\tilde{U} \tilde{H}$, then case (1) or case (3) of theorem \ref{rk3} follows. 
\end{compactitem} \item \emph{The columns of $B$ are dependent over $K$ in pairs.} If $\operatorname{char} K = 2$, then $\tilde{H}$ is as in (3) of theorem \ref{rkr}, which is included in (4) of theorem \ref{rk3}. So assume that $\operatorname{char} K \neq 2$. If $B = 0$, then $\tilde{H}$ is as in (2) of theorem \ref{rkr}, which is included in (4) of theorem \ref{rk3} as well. So assume that $B \ne 0$. From (i) of lemma \ref{Ccoldep1}, it follows that the columns of $C$ are dependent over $K$. If $\operatorname{rk} C \ge 2$, then $\operatorname{rk} C = 2$ because $\operatorname{rk} B + \operatorname{rk} C \le r = 3$ and $B \ne 0$, and the column space of $B$ contains a nonzero constant vector because of lemma \ref{BCrkr}, so the first case above applies. If $\operatorname{rk} C = 0$, then (1) of theorem \ref{rk3} follows. So assume that $\operatorname{rk} C = 1$. Since $\operatorname{rk} C = 1 = \frac12 \cdot 1^2 + \frac12 \cdot 1$, we deduce from theorem \ref{rkr} that the rows of $C$ are dependent over $K$ in pairs. So we can choose $S$ such that only the first row of $C$ is nonzero. From corollary \ref{symdiag}, it follows that there exists an $U' \in \operatorname{GL}_r(K)$, such that $$ {\mathcal{H}}_{x_1,x_2,\ldots,x_r} \big(\tilde{H}_{r+1}(U'x)\big) = (U')^{\rm t} ({\mathcal{H}} \tilde{H}_{r+1})\, U' $$ is a diagonal matrix. Furthermore, the first entry on the diagonal is nonzero because ${\mathcal{H}} \tilde{H}_{r+1} \ne 0$, and the last entry on the diagonal is zero because the columns of $C$ are dependent over $K$. By adapting $S$, we can obtain that the entries on the diagonal of ${\mathcal{H}}_{x_1,x_2,\ldots,x_r} \big(\tilde{H}_{r+1}(U'x)\big)$ are $1$, $c$ and $0$, in that order. By adapting $S$ and $T$, we can replace $\tilde{H}$ by $$ \left( \begin{array}{cc} (U')^{-1} & \zeromat \\ \zeromat & I_{m-3} \end{array} \right) \tilde{H}\left(\left( \begin{array}{cc} U' & \zeromat \\ \zeromat & I_{n-3} \end{array} \right) x \right) $$ Then the first row of $C$ becomes $(\, x_1 ~ c x_2 ~ 0 \,)$. We distinguish two cases. 
\begin{compactitem} \item $c = 0$. Notice that $$ {\mathcal{J}} (\tilde{H}|_{x_1=1}) = ({\mathcal{J}} \tilde{H})|_{x_1=1} \cdot ({\mathcal{J}} (1,x_2,x_3,\ldots,x_n)) $$ so $\operatorname{rk} {\mathcal{J}} (\tilde{H}|_{x_1=1}) = r - 1 = 2$, and we can apply \cite[Theorem 2.3]{1601.00579}. \begin{compactitem} \item In the case of \cite[Theorem 2.3]{1601.00579} (1), case (2) of theorem \ref{rkr} follows, which yields (4) of theorem \ref{rk3}. \item In the case of \cite[Theorem 2.3]{1601.00579} (2), case (1) of theorem \ref{rk3} follows, because any linear combination of the rows of ${\mathcal{J}} \tilde{H}$ which is dependent on $e_1$ is linearly dependent on $C_1$. \item In the case of \cite[Theorem 2.3]{1601.00579} (3), case (1) or case (2) of theorem \ref{rk3} follows. \end{compactitem} \item $c \ne 0$. From (ii) of lemma \ref{Ccoldep1} and lemma \ref{rk3trafo}, it follows that we can choose $S$ and $T$, such that the leading principal minor matrix of size $4$ of ${\mathcal{J}} \tilde{H}$ is of the form $$ \left( \begin{array}{cccc} A_{11} & A_{12} & x_1 & c x_2 \\ A_{21} & A_{22} & x_2 & -x_1 \\ A_{31} & A_{32} & x_3 & \tilde{c}x_4 \\ x_1 & c x_2 & 0 & 0 \end{array} \right) $$ where the coefficients of $x_1$ in $A_{11}$, $A_{21}$, $A_{31}$ are zero. From lemma \ref{rk3calc}, case (5) of theorem \ref{rk3} follows. \end{compactitem} \item \emph{None of the above.} We first show that $\operatorname{rk} B \ge 2$. For that purpose, assume that $\operatorname{rk} B \le 1$. Since the columns of $B$ are not dependent over $K$ in pairs, we deduce that $B \ne 0$, and it follows from \cite[Theorem 2.1]{1601.00579} that the rows of $B$ are dependent over $K$ in pairs. This contradicts the fact that the column space of $B$ does not contain a nonzero constant vector. So $\operatorname{rk} B \ge 2$ indeed. From $\operatorname{rk} B + \operatorname{rk} C \le r$, we deduce that $\operatorname{rk} C \le 1$. 
If $C = 0$, then (1) of theorem \ref{rk3} follows, so assume that $C \ne 0$. Then $\operatorname{rk} C = 1$ and $\operatorname{rk} B = r - 1 = 2$. From lemmas \ref{Ccoldep2} and \ref{BCrkr}, we deduce that the column space of $B$ contains a nonzero constant vector, which is a contradiction. \end{itemize} So it remains to prove the last claim. In case of (5) of theorem \ref{rk3}, the last claim follows from the proof of corollary \ref{rktrdeg}. Otherwise, the last claim of theorem \ref{rk3} follows in a similar manner as the last claim of theorem \ref{rkr}. \end{proof} \begin{lemma} \label{F2lem} Let $K$ be a field of characteristic $2$ and $L$ be an extension field of $K$. If theorem {\upshape\ref{rk3}} holds, but for $L$ instead of $K$, then theorem {\upshape\ref{rk3}} holds. \end{lemma} \begin{proof} Take $H$ as in theorem \ref{rk3}. Then $H$ is as in (1), (3) or (4) of theorem \ref{rk3}, but with $L$ instead of $K$. We assume that $H$ is as in (3) of theorem \ref{rk3} with $L$ instead of $K$, because the other case follows in a similar manner as lemma \ref{Fqlem}. Notice that $$ \big(\,0 ~ x_3 ~ x_2 ~ x_1 ~ y_4 ~ y_5 ~ \cdots ~ y_m\,\big) \cdot {\mathcal{J}} \tilde{H} = 0 $$ Since ${\mathcal{J}} \tilde{H} = S^{-1} ({\mathcal{J}} H)|_{Tx} T$, it follows that $$ \big(\,0 ~ x_3 ~ x_2 ~ x_1 ~ y_4 ~ y_5 ~ \cdots ~ y_m\,\big) \cdot S \cdot {\mathcal{J}} H = 0 $$ as well. Suppose first that $m = 4$. As $\operatorname{rk} {\mathcal{J}} H = m-1$, there exists a nonzero $v \in K(x)^m$ such that $\ker \big(\,v_1 ~ v_2 ~ v_3 ~ v_4\,\big)$ is equal to the column space of ${\mathcal{J}} H$. Since the column space of ${\mathcal{J}} H$ is contained in $\ker \big((\,0~x_3~x_2~x_1\,) \cdot S\big)$, it follows that $\big(\, v_1 ~ v_2 ~ v_3 ~ v_4 \,\big)$ is dependent on $(\,0~x_3~x_2~x_1\,) \cdot S$. So $$ v_1 (S^{-1})_{11} + v_2 (S^{-1})_{21} + v_3 (S^{-1})_{31} + v_4 (S^{-1})_{41} = 0 $$ and the components of $v$ are dependent over $L$. 
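One way to make the descent from $L$ to $K$ in the next step explicit is the following (after the harmless normalization $v \in K[x]^4$, obtained by clearing denominators, which does not affect dependence of the components): a nontrivial relation $$ \lambda_1 v_1 + \lambda_2 v_2 + \lambda_3 v_3 + \lambda_4 v_4 = 0 \qquad \mbox{with } \lambda \in L^4 \setminus \{0\} $$ expresses that the matrix over $K$ whose rows are the monomial coefficient vectors of $v_1, v_2, v_3, v_4$ has a nontrivial left kernel over $L$, i.e.\ has rank less than $4$. Since the rank of a matrix over $K$ does not change under field extension, this matrix has a nontrivial left kernel over $K$ as well.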
Consequently, the components of $v$ are dependent over $K$. So $\ker \big(\,v_1 ~ v_2 ~ v_3 ~ v_4\,\big)$ contains a nonzero vector over $K$, and so does the column space of ${\mathcal{J}} H$. Now we can follow the same argument as in the first case in the proof of theorem \ref{rk3}. Suppose next that $m > 4$. Then the rows of ${\mathcal{J}} H$ are dependent over $L$. Hence the rows of ${\mathcal{J}} H$ are dependent over $K$ as well. So we may assume that the last row of ${\mathcal{J}} H$ is zero. By induction on $m$, $(H_1,H_2,\ldots,H_{m-1})$ is as $H$ in theorem \ref{rk3}. As $H_m = 0$, we conclude that $H$ satisfies theorem \ref{rk3}. \end{proof} \section{rank 3 with nilpotency} \begin{theorem} \label{rk3np} Let $H \in K[x]^n$ be quadratic homogeneous, such that ${\mathcal{J}} H$ is nilpotent and $\operatorname{rk} {\mathcal{J}} H \le 3$. Then there exists a $T \in \operatorname{GL}_n(K)$, such that for $\tilde{H} := T^{-1} H(Tx)$, one of the following statements holds: \begin{enumerate}[\upshape (1)] \item ${\mathcal{J}} \tilde{H}$ is lower triangular with zeroes on the diagonal; \item $n \ge 5$, $\operatorname{rk} {\mathcal{J}} H = 3$, and ${\mathcal{J}} \tilde{H}$ is of the form $$ \left( \begin{array}{ccc@{\qquad}ccc} 0 & x_5 & 0 & * & \cdots & * \\ x_4 & 0 & -x_5 & * & \cdots & * \\ 0 & x_4 & 0 & * & \cdots & * \\[10pt] 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 \end{array} \right) $$ \item $n \ge 6$, $\operatorname{rk} {\mathcal{J}} H = 3$, $\operatorname{char} K = 2$ and ${\mathcal{J}} \tilde{H}$ is of the form $$ \left( \begin{array}{ccc@{\qquad}ccc@{\qquad}ccc} 0 & x_6 & 0 & 0 & 0 & x_2 & 0 & \cdots & 0 \\ x_5 & 0 & -x_6 & 0 & * & * & * & \cdots & * \\ 0 & x_5 & 0 & 0 & x_2 & 0 & 0 & \cdots & 0 \\[10pt] 0 & 0 & 0 & 0 & x_6 & x_5 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\[10pt] 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots &
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{array} \right) $$ \end{enumerate} Furthermore, $x + H$ is tame if $\operatorname{char} K \neq 2$, and there exists a tame invertible map $x + \bar{H}$ such that $\bar{H}$ is quadratic homogeneous and ${\mathcal{J}} \bar{H} = {\mathcal{J}} H$ in general. Conversely, ${\mathcal{J}} \tilde{H}$ is nilpotent in each of the three statements above. Furthermore, $\operatorname{rk} {\mathcal{J}} \tilde{H} = 3$ in {\upshape (2)}, $\operatorname{rk} {\mathcal{J}} \tilde{H} = 4$ in {\upshape (3)} if $\operatorname{char} K \neq 2$, and $\operatorname{rk} {\mathcal{J}} \tilde{H} = 3$ in {\upshape (3)} if $\operatorname{char} K = 2$. \end{theorem} \begin{lemma} \label{lem3} Let $H \in K[x]^3$ be homogeneous of degree $2$, such that ${\mathcal{J}}_{x_1,x_2,x_3} H$ is nilpotent. If ${\mathcal{J}}_{x_1,x_2,x_3} H$ is not similar over $K$ to a triangular matrix, then ${\mathcal{J}}_{x_1,x_2,x_3} H$ is similar over $K$ to a matrix of the form $$ \left( \begin{array}{ccc} 0 & f & 0 \\ b & 0 & f \\ 0 & -b & 0 \end{array} \right) $$ where $f$ and $b$ are independent linear forms in $K[x_4,x_5,\ldots,x_n]$. \end{lemma} \begin{proof} Suppose that ${\mathcal{J}}_{x_1,x_2,x_3} H$ is not similar over $K$ to a triangular matrix. Take $i$ such that the coefficient matrix of $x_i$ of ${\mathcal{J}}_{x_1,x_2,x_3} H$ is nonzero, and define $$ N := {\mathcal{J}}_{x_1,x_2,x_3} (H|_{x_i=x_i+1}) = ({\mathcal{J}}_{x_1,x_2,x_3} H)|_{x_i=x_i+1} $$ Then $N$ is nilpotent, and $N$ is not similar over $K$ to a triangular matrix. $N(0)$ is similar over $K$ to the matrix $N(0)$ in either (iii) or (iv) of \cite[Lemma 3.1]{1601.00579}, so it follows from \cite[Lemma 3.1]{1601.00579} that $N$ is similar over $K$ to a matrix of the form $$ \left( \begin{array}{ccc} 0 & f+1 & 0 \\ b & 0 & f+1 \\ 0 & -b & 0 \end{array} \right) $$ where $b$ and $f$ are linear forms. 
$b$ and $f$ are independent, because the coefficients of $x_i$ in $b$ and $f$ are $0$ and $1$ respectively. So ${\mathcal{J}}_{x_1,x_2,x_3} H$ is similar over $K$ to a matrix of the form $$ \left( \begin{array}{ccc} 0 & f & 0 \\ b & 0 & f \\ 0 & -b & 0 \end{array} \right) $$ where $b$ and $f$ are independent linear forms. Let $\bar{H}$ be the quadratic part with respect to $x_1, x_2, x_3$ of $H$. Then ${\mathcal{J}}_{x_1,x_2,x_3} \bar{H}$ is nilpotent. By showing that $\bar{H} = 0$, we prove that $b$ and $f$ are contained in $K[x_4,x_5,\ldots,x_n]$. So assume that $\bar{H} \ne 0$. From \cite[Theorem 3.2]{1601.00579}, it follows that we may assume that ${\mathcal{J}}_{x_1,x_2,x_3} \bar{H}$ is lower triangular. If the coefficient matrix of $x_2$ of ${\mathcal{J}}_{x_1,x_2,x_3} \bar{H}$ is nonzero, then it has rank $1$ because only the last row is nonzero, and so has the coefficient matrix of $x_2$ of ${\mathcal{J}}_{x_1,x_2,x_3} H$. Otherwise, the coefficient matrix of $x_1$ of ${\mathcal{J}}_{x_1,x_2,x_3} \bar{H}$ and ${\mathcal{J}}_{x_1,x_2,x_3} H$ has rank $1$, because only the first column is nonzero. So we could have chosen $i$, such that the coefficient matrix of $x_i$ of ${\mathcal{J}}_{x_1,x_2,x_3} H$ would have rank $1$. From \cite[Lemma 3.1]{1601.00579}, it follows that ${\mathcal{J}}_{x_1,x_2,x_3} H$ is similar over $K$ to a triangular matrix. Contradiction, so $\bar{H} = 0$ indeed. \end{proof} \begin{lemma} \label{lem1} Suppose that $H \in K[x]^n$, such that ${\mathcal{J}} H$ is nilpotent. \begin{enumerate}[\upshape (i)] \item Suppose that ${\mathcal{J}} H$ may only be nonzero in the first row and the first $2$ columns. Then there exists a $T \in \operatorname{GL}_n(K)$ such that for $\tilde{H} := T^{-1} H(Tx)$, the following holds. \begin{enumerate}[\upshape (a)] \item ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first row and the first $2$ columns (just like ${\mathcal{J}} H$). 
\item The Hessian matrix of the leading part with respect to $x_2,x_3,\ldots,x_n$ of $\tilde{H}_1$ is the product of a symmetric permutation matrix and a diagonal matrix. \item Every principal minor determinant of the leading principal minor matrix of size $2$ of ${\mathcal{J}} \tilde{H}$ is zero. \end{enumerate} \item Suppose that $\operatorname{char} K = 2$ and that ${\mathcal{J}} H$ may only be nonzero in the first row and the first $3$ columns. Then there exists a $T \in \operatorname{GL}_n(K)$ such that for $\tilde{H} := T^{-1} H(Tx)$, the following holds. \begin{enumerate}[\upshape (a)] \item ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first row and the first $3$ columns (just like ${\mathcal{J}} H$). \item The Hessian matrix of the leading part with respect to $x_2,x_3,\ldots,x_n$ of $\tilde{H}_1$ is the product of a symmetric permutation matrix and a diagonal matrix. \item Every principal minor determinant of the leading principal minor matrix of size $3$ of ${\mathcal{J}} \tilde{H}$ is zero. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} From proposition \ref{evenrk}, it follows that there exists a $T \in \operatorname{Mat}_{n,n}(K)$ of the form $$ \left( \begin{array} {ccccc} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & * & 1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 0 \\ 0 & * & \cdots & * & 1 \end{array} \right) $$ such that the Hessian matrix of the leading part with respect to $x_2,x_3,\ldots,x_n$ of $\tilde{H}_1 = H_1(Tx)$ is the product of a symmetric permutation matrix and a diagonal matrix. Furthermore, ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first row and the first $2$ or $3$ columns respectively (just like ${\mathcal{J}} H$), because of the form of $T$. \begin{enumerate}[\upshape (i)] \item Let $N$ be the principal minor matrix of size $2$ of ${\mathcal{J}} \tilde{H}$. Suppose first that ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first $2$ columns. 
From \cite[Lemma 1.2]{1601.00579}, it follows that $N$ is nilpotent. On account of \cite[Theorem 3.2]{1601.00579}, $N$ is similar over $K$ to a triangular matrix. Hence the rows of $N$ are dependent over $K$. If the second row of $N$ is zero, then (i) follows. If the second row of $N$ is not zero, then we may assume that the first row of $N$ is zero, and (i) follows as well. Suppose next that ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first row and the first $2$ columns, but not just the first $2$ columns. We distinguish two cases. \begin{compactitem} \item $\parder{}{x_2} \tilde{H}_1 \in K[x_1,x_2]$. Since $\operatorname{tr} {\mathcal{J}} \tilde{H} = 0$, we deduce that $\parder{}{x_1} \tilde{H}_1 \in K[x_1,x_2]$ as well. Let $G := \tilde{H}(x_1,x_2,0,0,\ldots,0)$. Then ${\mathcal{J}} G = ({\mathcal{J}} \tilde{H})|_{x_3=x_4=\cdots=x_n=0}$. Consequently, the nonzero part of ${\mathcal{J}} G$ is restricted to the first two columns. So the leading principal minor matrix of size $2$ of ${\mathcal{J}} G$ is nilpotent. But this minor matrix is just $N$, and just as for $\tilde{H}$ before, we may assume that only one row of $N$ is nonzero. This gives (i). \item $\parder{}{x_2} \tilde{H}_1 \notin K[x_1,x_2]$. Since ${\mathcal{H}} (\tilde{H}_1|_{x_1=0})$ is the product of a permutation matrix and a diagonal matrix, it follows that $\parder{}{x_2} \tilde{H}_1$ is a linear combination of $x_1$ and $x_i$, where $i \ge 3$, such that $x_i$ does not occur in any other entry of ${\mathcal{J}} \tilde{H}$. Looking at the coefficient of $x_i^1$ in the sum of the determinants of the principal minors of size $2$, we infer that $\parder{}{x_1} \tilde{H}_2 = 0$. So the second row of ${\mathcal{J}} \tilde{H}$ is dependent on $e_2^{\rm t}$. From a permuted version of \cite[Lemma 1.2]{1601.00579}, we infer that $\parder{}{x_2} \tilde{H}_2$ is nilpotent. Hence the second row of ${\mathcal{J}} \tilde{H}$ is zero. Since $\operatorname{tr} {\mathcal{J}} \tilde{H} = 0$, we infer (i). 
\end{compactitem} \item Let $N$ be the principal minor matrix of size $3$ of ${\mathcal{J}} \tilde{H}$. Suppose first that ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first $3$ columns. From \cite[Lemma 1.2]{1601.00579}, it follows that the leading principal minor matrix of size $3$ of ${\mathcal{J}} \tilde{H}$ is nilpotent. On account of \cite[Theorem 3.2]{1601.00579}, $N$ is similar over $K$ to a triangular matrix. But for a triangular nilpotent Jacobian matrix of size $3$ in characteristic $2$, the rank cannot be $2$. So $\operatorname{rk} N \le 1$. Hence the rows of $N$ are dependent over $K$ in pairs. If the second and the third row of $N$ are zero, then (ii) follows. If the second or the third row of $N$ is not zero, then we may assume that the first $2$ rows of $N$ are zero, and (ii) follows as well. Suppose next that ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first row and the first $3$ columns, but not just the first $3$ columns. We distinguish three cases. \begin{compactitem} \item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \in K[x_1,x_2,x_3]$. Using techniques of the proof of (i), we can reduce to the case where ${\mathcal{J}} \tilde{H}$ may only be nonzero in the first $3$ columns. \item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \notin K[x_1,x_2,x_3]$. Using techniques of the proof of (i), we can deduce that $\parder{}{x_1} \tilde{H}_2 = \parder{}{x_1} \tilde{H}_3 = 0$, and that ${\mathcal{J}}_{x_2,x_3} (\tilde{H}_2,\tilde{H}_3)$ is nilpotent. On account of \cite[Theorem 3.2]{1601.00579}, ${\mathcal{J}}_{x_2,x_3} (\tilde{H}_2,\tilde{H}_3)$ is similar over $K$ to a triangular matrix. But for a triangular nilpotent Jacobian matrix of size $2$ in characteristic $2$, the rank cannot be $1$. So $\operatorname{rk} {\mathcal{J}}_{x_2,x_3} (\tilde{H}_2,\tilde{H}_3) = 0$. Consequently, the last two rows of $N$ are zero, and (ii) follows. \item None of the above.
Assume without loss of generality that $\parder{}{x_2} \tilde{H}_1 \in K[x_1,x_2,x_3]$ and $\parder{}{x_3} \tilde{H}_1 \notin K[x_1,x_2,x_3]$. Since ${\mathcal{H}} (\tilde{H}_1|_{x_1=0})$ is the product of a permutation matrix and a diagonal matrix, it follows that $\parder{}{x_3} \tilde{H}_1$ is a linear combination of $x_1$ and $x_i$, where $i \ge 4$, such that $x_i$ does not occur in any other entry of ${\mathcal{J}} \tilde{H}$. Looking at the coefficient of $x_i^1$ in the sum of the determinants of the principal minors of size $2$, we infer that $\parder{}{x_1} \tilde{H}_3 = 0$. If $\parder{}{x_1} \tilde{H}_2 = 0$ as well, then we can advance as above, so assume that $\parder{}{x_1} \tilde{H}_2 \neq 0$. Looking at the coefficient of $x_i^1$ in the sum of the determinants of the principal minors of size $3$, we infer that $\parder{}{x_2} \tilde{H}_3 = 0$. So the third row of ${\mathcal{J}} \tilde{H}$ is dependent on $e_3^{\rm t}$. From a permuted version of \cite[Lemma 1.2]{1601.00579}, we infer that $\parder{}{x_3} \tilde{H}_3$ is nilpotent. Hence the third row of ${\mathcal{J}} \tilde{H}$ is zero. From $\operatorname{tr} {\mathcal{J}} \tilde{H} = 0$, we deduce that $\parder{}{x_1} \tilde{H}_1 = -\parder{}{x_2} \tilde{H}_2$. We show that \begin{equation} \label{DN0} \parder{}{x_1} \tilde{H}_1 = \parder{}{x_2} \tilde{H}_2 = 0 \end{equation} For that purpose, suppose that $\parder{}{x_1} \tilde{H}_1 \neq 0$. Since $\parder{}{x_1} x_1^2 = 0$, the coefficient of $x_1$ in $\parder{}{x_1} \tilde{H}_1$ is zero. Similarly, the coefficient of $x_2$ in $\parder{}{x_2} \tilde{H}_2$ is zero. As $\tilde{H}_2 \in K[x_1,x_2,x_3]$, we infer that $$ \parder{}{x_1} \tilde{H}_1 = -\parder{}{x_2} \tilde{H}_2 \in K x_3 \setminus \{0\} $$ Looking at the coefficient of $x_3^2$ in the sum of the determinants of the principal minors of size $2$, we deduce that the coefficient of $x_3^2$ in $(\parder{}{x_i} \tilde{H}_1) \cdot (\parder{}{x_1} \tilde{H}_i)$ is nonzero. 
Consequently, the coefficient of $x_3^3$ in $(\parder{}{x_i} \tilde{H}_1) \cdot (\parder{}{x_2} \tilde{H}_2) \cdot (\parder{}{x_1} \tilde{H}_i) \in K x_3^3 \setminus \{0\}$ is nonzero. This contributes to the coefficient of $x_3^3$ in the sum of the determinants of the principal minors of size $3$. Contradiction, because this contribution cannot be canceled. So \eqref{DN0} is satisfied. We show that in addition, \begin{equation} \label{N120} \parder{}{x_2} \tilde{H}_1 = 0 \end{equation} The coefficient of $x_1$ of $\parder{}{x_2} \tilde{H}_1$ is zero, because of \eqref{DN0}. The coefficient of $x_2$ of $\parder{}{x_2} \tilde{H}_1$ is zero, because $\parder{}{x_2} x_2^2 = 0$. The coefficient of $x_3$ of $\parder{}{x_2} \tilde{H}_1$ is zero, because the coefficient of $x_2$ of $\parder{}{x_3} \tilde{H}_1 \in K x_1 + K x_i$ is zero. So \eqref{N120} is satisfied as well. Recall that the third row of $N$ is zero. From \eqref{DN0} and \eqref{N120}, it follows that the diagonal and the second column of $N$ are zero as well. Hence every principal minor determinant of $N$ is zero, which gives (ii). \qedhere \end{compactitem} \end{enumerate} \end{proof} \begin{lemma} \label{lem2} Let $\tilde{H}$ be as in lemma \ref{lem1}. Suppose that ${\mathcal{J}} \tilde{H}$ has a principal minor matrix $M$ of which the determinant is nonzero. Then \begin{enumerate}[\upshape (a)] \item $\tilde{H}$ is as in {\upshape (ii)} of lemma \ref{lem1}; \item rows $2$ and $3$ of ${\mathcal{J}} \tilde{H}$ are zero; \item $M$ has size $2$ and $x_2 x_3 \mid \det M$; \item Besides $M$, there exists exactly one principal minor matrix $M'$ of size $2$ of ${\mathcal{J}} H$, such that $\det M' = - \det M$. \end{enumerate} \end{lemma} \begin{proof} Take $N$ as in (i) or (ii) of lemma \ref{lem1} respectively. Then $M$ is not a principal minor matrix of $N$. So if $M$ does not contain the upper left corner of ${\mathcal{J}} \tilde{H}$, then the last column of $M$ is zero. 
Hence $M$ does contain the upper left corner of ${\mathcal{J}} \tilde{H}$. If $M$ has two columns outside the column range of $N$, then both columns are dependent on $e_1$. So $M$ has exactly one column outside the column range of $N$, say column $i$. \begin{enumerate}[\upshape (i)] \item Suppose first that $\tilde{H}$ is as in (i) of lemma \ref{lem1}. Then either $M$ has size $2$ with row and column indices $1$ and $i$, or $M$ has size $3$ with row and column indices $1$, $2$ and $i$. The coefficient of $x_1$ in the upper right corner of $M$ is zero, because $N_{11} = 0$. Hence the upper right corner of $M$ is of the form $c x_j$ for some nonzero $c \in K$ and a $j \ge 2$. If $j \ge 3$, then $\det M$ is equal to the sum of the terms which are divisible by $x_j$ in the sum of the determinants of the principal minors of the same size as $M$, which is zero. So $j = 2$. Now $M$ is the only principal minor matrix of its size, of which the determinant is nonzero, because if we took another $i$, then $j$ would change as well. Contradiction, because the sum of the determinants of the principal minors of the same size as $M$ is zero. \item Suppose next that $\tilde{H}$ is as in (ii) of lemma \ref{lem1}. If the second row of ${\mathcal{J}}\tilde{H}$ is nonzero, then the coefficient of $x_1 x_3$ in $\tilde{H}_2$ is nonzero, because $N_{22} = 0$. If the third row of ${\mathcal{J}}\tilde{H}$ is nonzero, then the coefficient of $x_1 x_2$ in $\tilde{H}_3$ is nonzero, because $N_{33} = 0$. Since every principal minor determinant of $N$ is zero, we infer that $N_{23} N_{32} = 0$, so either the second or the third row of ${\mathcal{J}}\tilde{H}$ is zero. Assume without loss of generality that the second row of ${\mathcal{J}} \tilde{H}$ is zero. Then either $M$ has size $2$ with row and column indices $1$ and $i$, or $M$ has size $3$ with row and column indices $1$, $3$ and $i$.
The upper right corner of $M$ is of the form $c x_j$ for some nonzero $c \in K$, and with the techniques in (i) above, we see that $2 \le j \le 3$. Furthermore, we infer with the techniques in (i) above that ${\mathcal{J}} \tilde{H}$ has another principal minor matrix $M'$ of the same size as $M$, of which the determinant is nonzero as well. The upper right corner of $M'$ can only be of the form $c' x_{5-j}$ for some nonzero $c' \in K$. It follows that $N_{12} \ne 0$ and $N_{13} \ne 0$. Consequently, $N_{21} = N_{31} = 0$. This is only possible if both the second and the third row of ${\mathcal{J}} \tilde{H}$ are zero. So $M$ has size $2$, and claims (c) and (d) follow. \qedhere \end{enumerate} \end{proof} \begin{lemma} \label{lem4} Let $n = 4$, $$ \tilde{H} = \left( \begin{array}{c} x_1 x_3 + c x_2 x_4 \\ x_2 x_3 - x_1 x_4 \\ \frac12 x_3^2 + \frac{c}2 x_4^2 \\ \frac12 x_1^2 + \frac{c}2 x_2^2 \end{array} \right) \qquad \mbox{and} \qquad {\mathcal{J}} \tilde{H} = \left( \begin{array}{cccc} x_3 & c x_4 & x_1 & c x_2 \\ - x_4 & x_3 & x_2 & -x_1 \\ 0 & 0 & x_3 & c x_4 \\ x_1 & c x_2 & 0 & 0 \end{array} \right) $$ as in lemma \ref{rk3calc} (with $c \neq 0$). Let $M \in \operatorname{Mat}_{4,4}(K)$, such that $\deg \det \big({\mathcal{J}}{\tilde{H}} + M\big) \le 2$. Then there exists a translation $G$, such that $$ \tilde{H}\big(G(x)\big) - \big(\tilde{H} + Mx\big) \in K^4 $$ In particular, $\det \big({\mathcal{J}}{\tilde{H}} + M\big) = \det {\mathcal{J}} \big(\tilde{H} + Mx\big) = 0$. \end{lemma} \begin{proof} Since the quartic part of $\det ({\mathcal{J}}{\tilde{H}} + M)$ is zero, we deduce that $\det ({\mathcal{J}}{\tilde{H}}) = 0$.
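In more detail, this first step only uses that every entry of ${\mathcal{J}}{\tilde{H}}$ is a linear form: by multilinearity of the determinant in the columns, $$ \det \big({\mathcal{J}}{\tilde{H}} + M\big) = \det {\mathcal{J}}{\tilde{H}} + (\mbox{terms of degree at most } 3) $$ where $\det {\mathcal{J}}{\tilde{H}}$ is homogeneous of degree $4$. So the quartic part of $\det ({\mathcal{J}}{\tilde{H}} + M)$ is precisely $\det {\mathcal{J}}{\tilde{H}}$, and it vanishes because $\deg \det \big({\mathcal{J}}{\tilde{H}} + M\big) \le 2$.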
By way of completing the squares, we can choose a translation $G$ such that the linear part of $F := \tilde{H}(G^{-1}(x)) + M\,G^{-1}(x)$ is of the form $$ \left( \begin{array}{cccccccc} a_1 x_1 &+& b_1 x_2 &+& c_1 x_3 &+& d_1 x_4 \\ a_2 x_1 &+& b_2 x_2 &+& c_2 x_3 &+& d_2 x_4 \\ a_3 x_1 &+& b_3 x_2 & & & & \\ & & & & c_4 x_3 &+& d_4 x_4 \end{array}\right) $$ Notice that $\deg \det {\mathcal{J}} F \le 2$. Looking at the coefficients of $x_1^3$, $x_2^3$, $x_3^3$, and $x_4^3$ of $\det {\mathcal{J}} F$, we see that $b_3 = a_3 = d_4 = c_4 = 0$. Looking at the coefficients of $x_1^2 x_3$, $x_1 x_3^2$, $x_2^2 x_4$, and $x_2 x_4^2$ of $\det {\mathcal{J}} F$, we see that $b_1 = d_1 = a_1 = c_1 = 0$. Looking at the coefficients of $x_1^2 x_4$, $x_1 x_4^2$, $x_2^2 x_3$, and $x_2 x_3^2$ of $\det {\mathcal{J}} F$, we see that $b_2 = c_2 = a_2 = d_2 = 0$. So $F$ has trivial linear part, and $\tilde{H} - F \in K^4$. Hence $\tilde{H}(G) - F(G) \in K^4$, as claimed. The last claim follows from $\det ({\mathcal{J}}{\tilde{H}}) = 0$. \end{proof} \begin{proof}[Proof of theorem \ref{rk3np}] From \cite[Theorem 3.2]{1601.00579}, it follows that (1) is satisfied if $\operatorname{rk} {\mathcal{J}} H \le 2$. So assume that $\operatorname{rk} {\mathcal{J}} H = 3$. Then we can follow the cases of theorem \ref{rk3}. \begin{itemize} \item $H$ is as in (1) of theorem \ref{rk3}. Let $\tilde{H} = S H(S^{-1}x)$. Then only the first $3$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero. If the leading principal minor matrix $N$ of size $3$ of ${\mathcal{J}} \tilde{H}$ is similar over $K$ to a triangular matrix, then so is ${\mathcal{J}} \tilde{H}$ itself. So assume that $N$ is not similar over $K$ to a triangular matrix. 
From lemma \ref{lem3}, it follows that $N$ is similar over $K$ to a matrix of the form \begin{equation} \label{eq3} \left( \begin{array}{ccc} 0 & f & 0 \\ b & 0 & f \\ 0 & -b & 0 \end{array} \right) \end{equation} where $f$ and $b$ are independent linear forms in $K[x_4,x_5,\ldots,x_n]$. Consequently, we can choose $S$, such that for $\tilde{H} := S H(S^{-1}x)$, only the first $3$ rows of ${\mathcal{J}} \tilde{H}$ are nonzero, and the leading principal minor matrix of size $3$ is as in \eqref{eq3}. If we negate the third row and the third column of \eqref{eq3}, and replace $b$ by $x_4$ and $f$ by $x_5$, then we get $$ \left( \begin{array}{ccc} 0 & x_5 & 0 \\ x_4 & 0 & -x_5 \\ 0 & x_4 & 0 \end{array} \right) $$ We can even choose $S$ such that the leading principal minor matrix of size $3$ of ${\mathcal{J}} \tilde{H}$ is as above. So (2) of theorem \ref{rk3np} is satisfied for $T = S^{-1}$. If we replace $\tilde{H}_2$ by $0$, then all principal minor determinants of ${\mathcal{J}} \tilde{H}$ become zero. On account of \cite[Lemma 1.2]{MR1210415}, ${\mathcal{J}} \tilde{H}$ becomes permutation similar to a triangular matrix. From proposition \ref{tameZ} below, we infer that $x + H$ is tame if $\operatorname{char} K = 0$. \item $H$ is as in (2) of theorem \ref{rk3} or as in (2) of theorem \ref{rkr}. Let $\tilde{H} = T^{-1} H(Tx)$. Then the rows of ${\mathcal{J}}_{x_3,x_4,\ldots,x_n} \tilde{H}$ are dependent over $K$ in pairs. Suppose first that the first $2$ rows of ${\mathcal{J}}_{x_3,x_4,\ldots,x_n} \tilde{H}$ are zero. Then we can choose $T$ such that only the last row of ${\mathcal{J}}_{x_3,x_4,\ldots,x_n} \tilde{H}$ may be nonzero. From \cite[Lemma 1.2]{1601.00579}, it follows that the leading principal minor matrix $N$ of size $2$ of ${\mathcal{J}} \tilde{H}$ is nilpotent as well. On account of \cite[Theorem 3.2]{1601.00579}, $N$ is similar over $K$ to a triangular matrix.
From \cite[Corollary 1.4]{1601.00579}, we deduce that ${\mathcal{J}} \tilde{H}$ is similar over $K$ to a triangular matrix. So we can choose $T$ such that ${\mathcal{J}} \tilde{H}$ is lower triangular, and (1) is satisfied. Furthermore, $x + H$ is tame if $\operatorname{char} K = 0$. Suppose next that the first $2$ rows of ${\mathcal{J}}_{x_3,x_4,\ldots,x_n} \tilde{H}$ are not both zero. Then we can choose $T$ such that only the first row of ${\mathcal{J}}_{x_3,x_4,\ldots,x_n} \tilde{H}$ may be nonzero. From lemma \ref{lem1} (i) and lemma \ref{lem2}, we infer that we can choose $T$ such that every principal minor of ${\mathcal{J}} \tilde{H}$ has determinant zero. From \cite[Lemma 1.2]{MR1210415}, it follows that ${\mathcal{J}} \tilde{H}$ is permutation similar to a triangular matrix. So (1) is satisfied. Furthermore, $x + H$ is tame if $\operatorname{char} K = 0$. \item $H$ is as in (3) of theorem \ref{rk3} or as in (3) of theorem \ref{rkr}. Let $\tilde{H} = T^{-1} H(Tx)$. Then the rows of ${\mathcal{J}}_{x_4,x_5,\ldots,x_n} \tilde{H}$ are dependent over $K$ in pairs. Suppose first that the first $3$ rows of ${\mathcal{J}}_{x_4,x_5,\ldots,x_n} \tilde{H}$ are zero. Then we can choose $T$ such that only the last row of ${\mathcal{J}}_{x_4,x_5,\ldots,x_n} \tilde{H}$ may be nonzero, and just as above, (1) is satisfied. Furthermore, $x + H$ is tame if $\operatorname{char} K = 0$. Suppose next that the first $3$ rows of ${\mathcal{J}}_{x_4,x_5,\ldots,x_n} \tilde{H}$ are not all zero. Then we can choose $T$ such that only the first row of ${\mathcal{J}}_{x_4,x_5,\ldots,x_n} \tilde{H}$ may be nonzero. If we can choose $T$ such that every principal minor of ${\mathcal{J}} \tilde{H}$ has determinant zero, then (1) is satisfied, just as before. Furthermore, $x + H$ is tame if $\operatorname{char} K = 0$. So assume that we cannot choose $T$ such that every principal minor of ${\mathcal{J}} \tilde{H}$ has determinant zero.
From lemma \ref{lem1} (ii) and lemma \ref{lem2}, we infer that we can choose $T$ such that $\tilde{H}$ is as in lemma \ref{lem2}. More precisely, we can choose $T$ such that ${\mathcal{J}} \tilde{H}$ is of the form \begin{equation} \label{eq2} \left( \begin{array}{cccccccc} 0 & x_4 & -x_5 & x_2 & -x_3 & * & * & \cdots \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\ x_3 & * & * & 0 & 0 & 0 & 0 & \cdots \\ x_2 & * & * & 0 & 0 & 0 & 0 & \cdots \\ * & * & * & 0 & 0 & 0 & 0 & \cdots \\ * & * & * & 0 & 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{array} \right) \end{equation} Since $\parder{}{x_1} x_1^2 = 0$, the coefficients of $x_1$ of the starred entries in the first column of \eqref{eq2} are zero. Hence we can clean these starred entries by way of row operations with rows $4$ and $5$. We can also clean them by way of a linear conjugation, because if a starred entry in the first column is nonzero, the transposed entry in the first row is zero, so the corresponding column operations which are induced by the linear conjugation will not have any effect. If rows $1$, $4$, and $5$ remain as the only nonzero rows, then $H$ is as in (1) of theorem \ref{rk3}, which is the first case. So assume that another row remains nonzero. By way of additional row operations and associated column operations, we can get ${\mathcal{J}} \tilde{H}$ of the form \begin{equation} \label{eq2a} \left( \begin{array}{cccccccc} 0 & x_4 & -x_5 & x_2 & -x_3 & * & * & \cdots \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\ x_3 & 0 & x_1 & 0 & 0 & 0 & 0 & \cdots \\ x_2 & x_1 & 0 & 0 & 0 & 0 & 0 & \cdots \\ 0 & x_3 & x_2 & 0 & 0 & 0 & 0 & \cdots \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{array} \right) \end{equation} where we do not maintain (b) of lemma \ref{lem1} (ii) any more. 
Hence $P^{-1} \tilde{H}(Px)$ is as $\tilde{H}$ in (3) for a suitable permutation matrix $P$. In a similar manner as in the case where $H$ is as in (1) of theorem \ref{rk3}, we infer that $x + H$ is tame if $\operatorname{char} K = 0$. \item $H$ is as in (5) of theorem \ref{rk3}. Then only the first $4$ columns of ${\mathcal{J}} \tilde{H}$, where $\tilde{H} = T^{-1} H(Tx)$, may be nonzero. Hence the leading principal minor matrix $N$ of size $4$ of ${\mathcal{J}} \tilde{H}$ is nilpotent. We distinguish two subcases. \begin{compactitem} \item The rows of $N$ are linearly independent over $K$. Then there exists a $U \in \operatorname{GL}_4(K)$, such that $U N$ is as ${\mathcal{J}} \tilde{H}$ in lemma \ref{lem4}. Furthermore, $$ \det (UN + U) = \det U \det (N + I_4) = \det U \in K^{*} $$ So $\det (UN + U) \ne 0$ and $\deg \det (UN + U) \le 2$. This contradicts lemma \ref{lem4}. \item The rows of $N$ are linearly dependent over $K$. Then we can apply the first case above to the map $(\tilde{H}_1,\tilde{H}_2,\tilde{H}_3,\tilde{H}_4)$. Since $N$ has fewer than $5$ columns, the case where $N$ is not similar over $K$ to a triangular matrix cannot occur in the case above. So $N$ is similar over $K$ to a triangular matrix, and so are ${\mathcal{J}} \tilde{H}$ and ${\mathcal{J}} H$. Furthermore, $x + H$ is tame if $\operatorname{char} K = 0$. \end{compactitem} \end{itemize} So it remains to prove the last claim in the case $\operatorname{char} K \neq 0$. This follows by way of techniques in the proof of theorem \ref{dim5} in the next section. \end{proof} \begin{proposition} \label{tameZ} Let $R$ be an integral domain of characteristic zero, and let $F \in R[x]^n$ be a polynomial map which is invertible over $R$. Let $\tilde{F} \in R[x]^n$, such that only the $i$\textsuperscript{th} component of $\tilde{F}$ is different from that of $F$, and such that $\det {\mathcal{J}} \tilde{F} = \det {\mathcal{J}} F$. Then $\tilde{F}(F^{-1})$ is an elementary invertible polynomial map over $R$.
In particular, $\tilde{F}$ is invertible over $R$. \end{proposition} \begin{proof} Assume without loss of generality that $i = n$. Let $K$ be the fraction field of $R$. Then $\operatorname{rk} {\mathcal{J}} G = \operatorname{trdeg}_K K(G)$ for every polynomial map $G \in K[x]^n$, because $K$ has characteristic zero. Since $$ \operatorname{rk} {\mathcal{J}} (F_1,F_2,\ldots,F_{n-1},0) = n-1 = \operatorname{rk} {\mathcal{J}} (F_1,F_2,\ldots,F_{n-1},\tilde{F}_n-F_n) $$ it follows that $\tilde{F}_n-F_n$ is algebraically dependent over $K$ on $F_1,F_2,\ldots,F_{n-1}$. As $F$ is invertible over $R$, $$ \tilde{F}_n-F_n \in R[F_1,F_2,\ldots,F_{n-1}] $$ So $\tilde{F}_n(F^{-1})-x_n \in R[x_1,x_2,\ldots,x_{n-1}]$. Furthermore, $\tilde{F}_i(F^{-1}) = F_i(F^{-1}) = x_i$ for all $i < n$ by assumption, so $\tilde{F}(F^{-1})$ is elementary. \end{proof} \section{rank 4 with nilpotency} Let $M \in \operatorname{Mat}_n(K)$ be nilpotent and $v \in K^n$ be nonzero. Define the {\em image exponent} of $v$ with respect to $M$ as $$ \operatorname{IE} (M,v) = \operatorname{IE}_K (M,v) := \max \{i \in \mathbb{N} \mid M^i v \ne 0\} $$ and the {\em preimage exponent} of $v$ with respect to $M$ as $$ \operatorname{PE} (M,v) = \operatorname{PE}_K (M,v) := \max \{i \in \mathbb{N} \mid M^i w = v \mbox{ for some } w \in K^n\} $$ \begin{theorem} \label{dim5} Let $\operatorname{char} K \neq 2$ and $n = 5$. Suppose that $H \in K[x]^5$, such that ${\mathcal{J}} H$ is nilpotent and $\operatorname{rk} {\mathcal{J}} H \ge 4$. Then $\operatorname{rk} {\mathcal{J}} H = 4$, and there exists a $T \in \operatorname{GL}_5(K)$, such that ${\mathcal{J}} \big(T^{-1}H(Tx)\big)$ is either triangular with zeroes on the diagonal, or of one of the following forms.
$$ \left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & 0 \\ * & 0 & 0 & 0 & 0 \\ 0 & x_4 & 0 & x_2 & 0 \\ x_3 & -x_5 & x_1 & 0 & -x_2 \\ * & * & 0 & x_1 & 0 \end{array} \right) \quad \left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & 0 \\ * & 0 & 0 & x_4 & 0 \\ x_2 & x_1 & 0 & -x_5 & -x_4 \\ x_3 & 0 & x_1 & 0 & x_5 \\ * & 0 & 0 & x_1 & 0 \end{array} \right) $$ Furthermore, $x + H$ is tame. \end{theorem} \begin{proof} Suppose first that $K$ is infinite. From \cite[Theorem 4.4]{1609.09753}, it follows that $\operatorname{PE}_{K(x)}({\mathcal{J}} H,x) = 0$. As $\operatorname{rk} {\mathcal{J}} H = n - 1$, $\operatorname{IE}_{K(x)}({\mathcal{J}} H,x) + \operatorname{PE}_{K(x)}({\mathcal{J}} H,x) = n-1$, so $\operatorname{IE}_{K(x)}({\mathcal{J}} H,x) = n - 1$. From \cite[Corollary 4.3]{1609.09753}, it follows that there exists a $T \in \operatorname{GL}_5(K)$, such that for $\tilde{H} := T^{-1}H(Tx)$, we have $$ ({\mathcal{J}} \tilde{H})|_{x = e_1} = \left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{array} \right) $$ Using this, one can compute all solutions, and match them against the given classification. We did this with Maple 8, see {\tt dim5qdr.pdf}. To prove that $x + H$ is tame, we show that the maps of the classification are tame. If ${\mathcal{J}} H$ is similar over $K$ to a triangular matrix, then $x + H$ is tame. So assume that ${\mathcal{J}} H$ is not similar over $K$ to a triangular matrix. 
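As an aside, the exponents $\operatorname{IE}$ and $\operatorname{PE}$ used at the start of this proof can be checked mechanically on the displayed specialization $({\mathcal{J}} \tilde{H})|_{x = e_1}$: for that $5 \times 5$ shift matrix $S$ one has $\operatorname{IE}(S, e_1) = 4$ and $\operatorname{PE}(S, e_1) = 0$, so $\operatorname{IE} + \operatorname{PE} = n - 1$. A minimal Python sketch (an illustration only, not part of the proof), using exact rational linear algebra:

```python
from fractions import Fraction

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def solvable(A, b):
    # exact Gaussian elimination over Q: is A w = b solvable?
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    row = 0
    for col in range(n):
        piv = next((r for r in range(row, n) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        for r in range(n):
            if r != row and M[r][col] != 0:
                f = M[r][col] / M[row][col]
                M[r] = [M[r][j] - f * M[row][j] for j in range(n + 1)]
        row += 1
    # inconsistent iff some row is zero in the coefficient part but nonzero in b
    return all(any(M[r][j] != 0 for j in range(n)) or M[r][n] == 0
               for r in range(n))

def IE(M, v):
    # IE(M, v) = max{i : M^i v != 0}  (v nonzero, M nilpotent)
    i, w = 0, mat_vec(M, v)
    while any(w):
        w, i = mat_vec(M, w), i + 1
    return i

def PE(M, v):
    # PE(M, v) = max{i : M^i w = v has a solution w}
    i, P = 0, M
    while solvable(P, v):
        i, P = i + 1, mat_mul(P, M)
    return i

# the 5x5 lower-shift matrix (J H~)|_{x = e_1} from the proof
n = 5
S = [[1 if i == j + 1 else 0 for j in range(n)] for i in range(n)]
e1 = [1, 0, 0, 0, 0]
assert IE(S, e1) == 4 and PE(S, e1) == 0   # IE + PE = n - 1
```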
If $$ G = \left( \begin{array}{c} 0 \\ z_1 x_1^2 \\ x_2 x_4 \\ x_1 x_3 - x_2 x_5 \\ z_2 x_1^2 + z_3 x_1 x_2 + z_4 x_2^2 + x_1 x_4 \end{array} \right) \quad \mbox{or} \quad G = \left( \begin{array}{c} 0 \\ z_1 x_1^2 + \tfrac12 x_4^2 \\ x_1 x_2 - x_4 x_5 \\ x_1 x_3 - \tfrac12 x_5^2 \\ z_2 x_1^2 + x_1 x_4 \end{array} \right) $$ then we can apply proposition \ref{tameZ} with $R = \mathbb{Z}[z_1,z_2,z_3,z_4]$ and $i = 4$, to obtain that $x + G$ is the composition of an elementary map and a map $x + \tilde{G}$ for which ${\mathcal{J}} \tilde{G}$ is permutation similar to a triangular matrix with zeroes on the diagonal. So $x + G$ is tame over $R$. Hence $x + G$ modulo $I$ is tame over $R/I$ for any ideal $I$ of $R$. Since $x + H$ has this form up to conjugation with a linear map, we infer that $x + H$ is tame. Suppose next that $K$ is finite, and let $L$ be an infinite extension field of $K$. If ${\mathcal{J}} H$ is similar over $L$ to a triangular matrix, then by \cite[Proposition 1.3]{1601.00579}, ${\mathcal{J}} H$ is similar over $K$ to a triangular matrix as well. So assume that ${\mathcal{J}} H$ is not similar over $K$ to a triangular matrix. Then one can check for the solutions over $L$ that \cite[Theorem 5.2]{1609.09753} is satisfied, which we did. So \cite[Corollary 4.3]{1609.09753} holds over $K$ as well, and so does this theorem. \end{proof} \begin{theorem} \label{dim6} Let $\operatorname{char} K = 2$ and $n = 6$. Suppose that $H \in K[x]^6$, such that ${\mathcal{J}} H$ is nilpotent and $\operatorname{rk} {\mathcal{J}} H \ge 4$. Then $\operatorname{rk} {\mathcal{J}} H = 4$, and there exists a $T \in GL_6(K)$, such that ${\mathcal{J}} \big(T^{-1}H(Tx)\big)$ is either triangular with zeroes on the diagonal, or of one of the following forms. 
\begin{gather*} \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ x_2 & x_1 & 0 & 0 & 0 & 0 \\ 0 & 0 & x_5 & 0 & x_3 & 0 \\ * & * & * & x_2 & 0 & -x_3 \\ * & * & * & 0 & x_2 & 0 \end{array} \right) \quad \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & x_5 & 0 & 0 & x_2 & 0 \\ x_3 & \!\!-c x_5 - x_6\!\! & x_1 & 0 & -c x_2 & -x_2 \\ x_4 & c x_6 & 0 & x_1 & 0 & c x_2 \\ x_5 & 0 & 0 & 0 & x_1 & 0 \end{array} \right) \\ \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & x_4 & 0 & x_2 & 0 & 0 \\ x_3 & -x_5 & x_1 & 0 & -x_2 & 0 \\ x_4 & 0 & 0 & x_1 & 0 & 0 \\ * & * & * & * & * & 0 \end{array} \right) \quad \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_5 & x_4 & 0 \\ x_2 & x_1 & 0 & -x_6 & -2x_5 & -x_4 \\ x_3 & 0 & x_1 & 0 & x_6 & x_5 \\ x_4 & 0 & 0 & x_1 & 0 & 0 \\ x_5 & 0 & 0 & 0 & x_1 & 0 \end{array} \right) \end{gather*} where $c \in \{0,1\}$. Furthermore, there exists a tame invertible map $x + \bar{H}$ such that $\bar{H}$ is quadratic homogeneous and ${\mathcal{J}} \bar{H} = {\mathcal{J}} H$. \end{theorem} \begin{proof} Suppose first that $K$ is infinite. From \cite[Theorem 4.4]{1609.09753}, it follows that $\operatorname{PE}_{K(x)}({\mathcal{J}} H,x) = 0$. Since $\operatorname{IE}_{K(x)}({\mathcal{J}} H,x) = 0$ as well, it follows from \cite[Theorem 4.4]{1609.09753} that there exists a $T \in \operatorname{GL}_6(K)$, such that for $\tilde{H} := T^{-1}H(Tx)$, we have $$ ({\mathcal{J}} \tilde{H})|_{x = e_1} = \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{array} \right) $$ Using this, one can compute all solutions, and match them against the given classification. We did this with Maple 8, see {\tt dim6chr2qdr.pdf}.
To prove that $x + \bar{H}$ is tame for some quadratic homogeneous $\bar{H}$ such that ${\mathcal{J}} \bar{H} = {\mathcal{J}} H$, we can use the same arguments as in the proof of theorem \ref{dim5}. Namely, we apply proposition \ref{tameZ} with $R = \mathbb{Z}[z_1,z_2,z_3,z_4,z_5,z_6,z_7,z_8,z_9,z_{10}]$ and $i=5$, $i=5$, $i=4$, and $i=4$ respectively. Suppose next that $K$ is finite, and let $L$ be any infinite extension field of $K$. If ${\mathcal{J}} H$ is similar over $L$ to a triangular matrix, then by \cite[Proposition 1.3]{1601.00579}, ${\mathcal{J}} H$ is similar over $K$ to a triangular matrix as well. So assume that ${\mathcal{J}} H$ is not similar over $K$ to a triangular matrix. We distinguish two cases. \begin{itemize} \item The columns of ${\mathcal{J}} H$ are dependent over $L$. Then the columns of ${\mathcal{J}} H$ are dependent over $K$ as well. So we may assume that the last column of ${\mathcal{J}} H$ is zero. Then the leading principal minor $N$ of size $5$ of ${\mathcal{J}} H$ is nilpotent as well. Just like $\operatorname{rk} {\mathcal{J}} H = 6 - 2 = 4$, we deduce that $\operatorname{rk} N = 5 - 2 = 3$. So we can apply theorem \ref{rk3np} on $N$, to infer that we can get ${\mathcal{J}} \tilde{H}$ of the form of the third matrix. \item The columns of ${\mathcal{J}} H$ are independent over $L$. Then one can check for the solutions over $L$ that lemma \ref{lem5} below is satisfied, which we did. So there exists a $v \in K^6$, such that $({\mathcal{J}} H)|_{x=v}^4 \neq 0$. Hence the Jordan Normal Form of $({\mathcal{J}} H)|_{x=v}$ has Jordan blocks of size $1$ and $5$, just like that of ${\mathcal{J}} H$. Furthermore, $\operatorname{IE}\big(({\mathcal{J}} H)|_{x=v},v\big) = 0 = \operatorname{IE}_{K(x)}({\mathcal{J}} H,x)$, and it follows from \cite[Theorem 4.4]{1609.09753} that $\operatorname{PE}\big(({\mathcal{J}} H)|_{x=v},v\big) = 0 = \operatorname{PE}_{K(x)}({\mathcal{J}} H,x)$. 
Consequently, one can verify that \cite[Theorem 4.2]{1609.09753} holds over $K$ as well, and so does this theorem. \qedhere \end{itemize} \end{proof} \begin{lemma} \label{lem5} Let $L$ be an extension field of $K$. Suppose that $H \in K[x]^n$ and $T \in \operatorname{GL}_n(L)$, such that for $\tilde{H} := T^{-1}H(Tx)$, the ideal generated by the entries of $({\mathcal{J}} \tilde{H})^s$ contains a power of a polynomial $f$. Then there exists a $v \in K^n$, such that $({\mathcal{J}} H)|_{x=v}^s \neq 0$, in the following cases. \begin{enumerate}[\upshape (i)] \item $\#K \ge \deg f + 1$; \item $f$ is homogeneous and $\#K \ge \deg f$. \end{enumerate} \end{lemma} \begin{proof} From \cite[Lemma 5.1]{1310.7843}, it follows that there exists a $v \in K^n$ such that $f(T^{-1}v) \ne 0$. Let $I$ be the ideal over $L$ generated by the entries of $({\mathcal{J}} \tilde{H})^s$. As the radical of $I$ contains $f$, the radical of $I(T^{-1}v)$ contains $f(T^{-1}v)$. The ideal $I(T^{-1}v)$ is generated over $L$ by the entries of $({\mathcal{J}} \tilde{H})|_{x=T^{-1}v}^s$ and $$ T ({\mathcal{J}} \tilde{H})|_{x=T^{-1}v}^s T^{-1} = ({\mathcal{J}} H)|_{x=v}^s $$ From $f(T^{-1}v) \ne 0$, it follows that $I(T^{-1}v) \ne 0$ and $({\mathcal{J}} H)|_{x=v}^s \ne 0$. \end{proof} The cardinality of $K$ may be too small for the computational method of \cite[Theorem 4.2]{1609.09753}: the following maps $H$ satisfy neither \cite[Theorem 4.2]{1609.09753} nor lemma \ref{Irlem}: \begin{itemize} \item $\#K = 3$ and $H = (0,\frac12 x_1^2,\frac12 x_2^2,(x_1+x_2)x_3,(x_1-x_2)x_4)$; \item $\#K = 2$ and $H = (0,0,x_1 x_2,x_1 x_3,x_2 x_4,(x_1-x_2)x_5)$; \item $\#K = 2$ and $H = (0,0,x_1 x_4,x_1 x_3-x_2 x_5,x_2 x_4,(x_1-x_4)x_5)$. \end{itemize} \begin{theorem} Let $H$ be quadratic homogeneous such that ${\mathcal{J}} H$ is nilpotent and $\operatorname{rk} {\mathcal{J}} H \le 4$. Then the rows of ${\mathcal{J}} H$ are dependent over $K$. Furthermore, one of the following statements holds.
\begin{enumerate}[\upshape (1)] \item Every set of $6$ rows of ${\mathcal{J}} H$ is dependent over $K$. \item There exists a tame invertible map $x + \bar{H}$ such that $\bar{H}$ is quadratic homogeneous and ${\mathcal{J}} \bar{H} = {\mathcal{J}} H$. \end{enumerate} In addition, $x + H$ is stably tame if $\operatorname{char} K \neq 2$. \end{theorem} \begin{proof} The case where $\operatorname{rk} {\mathcal{J}} H \le 3$ follows from theorem \ref{rk3np}, so assume that $\operatorname{rk} {\mathcal{J}} H = 4$. We follow the cases of corollary \ref{rk4}. \begin{itemize} \item $H$ is as in (1) of corollary \ref{rk4}. Let $\tilde{H} = S^{-1}H(Sx)$. Then only the first $5$ rows of ${\mathcal{J}} \tilde{H}$ may be nonzero. Hence every set of $6$ rows of ${\mathcal{J}} H$ is dependent over $K$. If $n > 5$, then the sixth row of ${\mathcal{J}} \tilde{H}$ is zero. So assume that $n \le 5$. Then it follows from (i) of lemma \ref{lem6} below that the rows of ${\mathcal{J}} H$ are dependent over $K$. From \cite[Theorem 4.14]{MR2927368} and theorem \ref{dim5}, it follows that $x + \tilde{H}$ is stably tame if $\operatorname{char} K \neq 2$. \item $H$ is as in (2) of corollary \ref{rk4}. Let $\tilde{H} = T^{-1}H(Tx)$ and let $N$ be the leading principal minor matrix of size $3$ of ${\mathcal{J}} \tilde{H}$. Just like for the case where $H$ is as in (2) of theorem \ref{rk3} in the proof of theorem \ref{rk3np}, we can reduce to the case where only the first row and the first three columns of ${\mathcal{J}} \tilde{H}$ may be nonzero. From proposition \ref{evenrk}, it follows that we may assume that ${\mathcal{H}} (\tilde{H}_1|_{x_1=0})$ is a product of a permutation matrix and a diagonal matrix. We distinguish three subcases: \begin{compactitem} \item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \in K[x_1,x_2,x_3]$. Since $\operatorname{tr} {\mathcal{J}} \tilde{H} = 0$, we deduce that $\parder{}{x_1} \tilde{H}_1 \in K[x_1,x_2,x_3]$ as well. 
We may assume that the coefficient of either $x_4^2$ or $x_4 x_5$ of $\tilde{H}_1$ is nonzero. With techniques in the proof of corollary \ref{symdiag}, $x_4x_5$ can be transformed to $x_4^2 - x_5^2$, so we may assume that the coefficient of $x_4^2$ of $\tilde{H}_1$ is nonzero. Let $G := \tilde{H}(x_1,x_2,x_3,x_4,0,0,\ldots,0)$. Then ${\mathcal{J}} G = ({\mathcal{J}} \tilde{H})|_{x_5=\cdots=x_n=0}$. Consequently, the nonzero part of ${\mathcal{J}} G$ is restricted to the first four columns. From (i) of lemma \ref{lem6} below, it follows that the rows of ${\mathcal{J}} G$ are dependent over $K$ and that $x+G$ is tame. Since the first row of ${\mathcal{J}} G$ is independent of the other rows of ${\mathcal{J}} G$, the rows of ${\mathcal{J}} \tilde{H}$ are dependent over $K$. From proposition \ref{tameZ}, it follows that $x + \tilde{H}$ is tame if $\operatorname{char} K = 0$, which gives (2) if $\operatorname{char} K = 0$. Using techniques in the proof of theorem \ref{dim5}, the case $\operatorname{char} K \neq 0$ follows as well. \item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \notin K[x_1,x_2,x_3]$. Then we may assume that the coefficients of $x_2 x_4$ and $x_3 x_5$ of $\tilde{H}_1$ are nonzero. Looking at the coefficients of $x_4^1$ and $x_5^1$ of the sum of the determinants of the principal minors of size $2$, we infer that $\parder{}{x_1} \tilde{H}_2 = 0$ and $\parder{}{x_1} \tilde{H}_3 = 0$. From a permuted version of \cite[Lemma 2]{1601.00579}, we deduce that ${\mathcal{J}}_{x_2,x_3} \allowbreak (\tilde{H}_2,\tilde{H}_3)$ is nilpotent. On account of theorem \ref{rk3np} or \cite[Theorem 3.2]{1601.00579}, the second and the third row of ${\mathcal{J}} \tilde{H}$ are dependent over $K$. By applying \cite[Lemma 2]{1601.00579} twice, we see that we can replace $\tilde{H}_1$ by $0$ without affecting the nilpotency of ${\mathcal{J}} \tilde{H}$. 
Now (2) follows in a similar manner as in the case $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \in K[x_1,x_2,x_3]$ above. \item None of the above. Then we may assume that $\parder{}{x_2} \tilde{H}_1 \notin K[x_1,x_2,x_3]$ and $\parder{}{x_3} \tilde{H}_1 \in K[x_1,x_2,x_3]$. Furthermore, we may assume that the coefficient of $x_2 x_4$ of $\tilde{H}_1$ is nonzero. Now we can apply the same arguments as in the case $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1 \in K[x_1,x_2,x_3]$ above. \end{compactitem} \item $H$ is as in (3) of corollary \ref{rk4}. Let $\tilde{H} = T^{-1}H(Tx)$ and let $N$ be the leading principal minor matrix of size $4$ of ${\mathcal{J}} \tilde{H}$. Just like for the case where $H$ is as in (3) of theorem \ref{rk3} in the proof of theorem \ref{rk3np}, we can reduce to the case where only the first row and the first four columns of ${\mathcal{J}} \tilde{H}$ may be nonzero. From proposition \ref{evenrk}, it follows that we may assume that ${\mathcal{H}} (\tilde{H}_1|_{x_1=0})$ is a product of a permutation matrix and a diagonal matrix. We distinguish three subcases: \begin{compactitem} \item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1, \parder{}{x_4} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$. Since $\operatorname{tr} {\mathcal{J}} \tilde{H} = 0$, we deduce that $\parder{}{x_1} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$ as well. We may assume that the coefficient of $x_5 x_6$ of $\tilde{H}_1$ is nonzero. Just like we reduced to the case where only the first $4$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero above, we can reduce to the case where only the first $6$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero. Hence the results follow from (ii) of lemma \ref{lem6} below. \item $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1, \parder{}{x_4} \tilde{H}_1 \notin K[x_1,x_2,x_3,x_4]$. Then we may assume that the coefficients of $x_2 x_5$, $x_3 x_6$ and $x_4 x_7$ of $\tilde{H}_1$ are nonzero.
Looking at the coefficients of $x_5^1$, $x_6^1$ and $x_7^1$ of the sum of the determinants of the principal minors of size $2$, we infer that $\parder{}{x_1} \tilde{H}_2 = 0$, $\parder{}{x_1} \tilde{H}_3 = 0$ and $\parder{}{x_1} \tilde{H}_4 = 0$. From a permuted version of \cite[Lemma 2]{1601.00579}, we deduce that ${\mathcal{J}}_{x_2,x_3,x_4} \allowbreak (\tilde{H}_2,\tilde{H}_3,\tilde{H}_4)$ is nilpotent. On account of theorem \ref{rk3np} or \cite[Theorem 3.2]{1601.00579}, the second, the third and the fourth row of ${\mathcal{J}} \tilde{H}$ are dependent over $K$. By applying \cite[Lemma 2]{1601.00579} twice, we see that we can replace $\tilde{H}_1$ by $0$ without affecting the nilpotency of ${\mathcal{J}} \tilde{H}$. Now (2) follows in a similar manner as in the case $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1, \parder{}{x_4} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$ above. \item None of the above. Then we may assume that $\parder{}{x_2} \tilde{H}_1 \notin K[x_1,x_2,x_3,x_4]$ and $\parder{}{x_4} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$. Furthermore, we may assume that the coefficient of $x_2 x_5$ of $\tilde{H}_1$ is nonzero. Suppose first that $\parder{}{x_3} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$. Then we can apply the same argument as in the case $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1, \parder{}{x_4} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$ above, to reduce to the case where only the first $5$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero, which follows from (ii) of lemma \ref{lem6} below. Suppose next that $\parder{}{x_3} \tilde{H}_1 \notin K[x_1,x_2,x_3,x_4]$. Then we may assume that the coefficient of $x_3 x_6$ of $\tilde{H}_1$ is nonzero. 
Now we can apply the same arguments as in the case $\parder{}{x_2} \tilde{H}_1, \parder{}{x_3} \tilde{H}_1, \allowbreak \parder{}{x_4} \tilde{H}_1 \in K[x_1,x_2,x_3,x_4]$ above, to reduce to the case where only the first $6$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero, which follows from (ii) of lemma \ref{lem6} below. \end{compactitem} \item $H$ is as in (4) of corollary \ref{rk4}. Let $\tilde{H} = T^{-1}H(Tx)$. Then only the first $5$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero, so the results follow from (i) of lemma \ref{lem6} below. \item $H$ is as in (5) of corollary \ref{rk4}. Let $\tilde{H} = T^{-1}H(Tx)$. Then only the first $6$ columns of ${\mathcal{J}} \tilde{H}$ may be nonzero, so the results follow from (ii) of lemma \ref{lem6} below. \qedhere \end{itemize} \end{proof} \begin{lemma} \label{lem6} Let $H$ be quadratic homogeneous, such that ${\mathcal{J}} H$ is nilpotent. \begin{enumerate}[\upshape (i)] \item If $\operatorname{char} K \neq 2$ and only the first $5$ columns of ${\mathcal{J}} H$ may be nonzero, then the first $5$ rows of ${\mathcal{J}} H$ are dependent over $K$, and $x + H$ is tame. \item If $\operatorname{char} K = 2$ and only the first $6$ columns of ${\mathcal{J}} H$ may be nonzero, then the first $6$ rows of ${\mathcal{J}} H$ are dependent over $K$, and there exists a tame invertible map $x + \bar{H}$ such that $\bar{H}$ is quadratic homogeneous and ${\mathcal{J}} \bar{H} = {\mathcal{J}} H$. \end{enumerate} \end{lemma} \begin{proof} Let $N$ be the leading principal minor matrix of size $5$ of ${\mathcal{J}} H$ in case of (i), and the leading principal minor matrix of size $6$ of ${\mathcal{J}} H$ in case of (ii). From \cite[Lemma 2]{1601.00579}, it follows that $N$ is nilpotent. If $\operatorname{rk} N \le 3$, then the results follow from theorem \ref{rk3np}. If $\operatorname{rk} N \ge 4$, then the results follow from theorem \ref{dim5} in case of (i) and from theorem \ref{dim6} in case of (ii).
\end{proof} Notice that of (i) of lemma \ref{lem6}, we only used the case where only the first $4$ columns of ${\mathcal{J}} H$ may be nonzero, and this case can be proved without the calculations of theorem \ref{dim5}. \end{document}
\begin{document} \title[Intersections on the moduli space of rank two bundles]{On the intersection theory of the moduli space of rank two bundles} \author {Alina Marian} \address {Department of Mathematics} \address {Yale University} \email {[email protected]} \author {Dragos Oprea} \address {Department of Mathematics} \address {Massachusetts Institute of Technology.} \email {[email protected]} \date{} \begin {abstract} We give an algebro-geometric derivation of the known intersection theory on the moduli space of stable rank $2$ bundles of odd degree over a smooth curve of genus $g$. We lift the computation from the moduli space to a Quot scheme, where we obtain the intersections by equivariant localization with respect to a natural torus action. \end {abstract} \maketitle We compute the intersection numbers on the moduli space of stable rank $2$ odd degree bundles over a smooth curve of genus $g$. This problem has been intensely studied in the physics and mathematics literature. Complete and mathematically rigorous answers were obtained by Thaddeus \cite {thaddeus}, Donaldson \cite {donaldson}, Zagier \cite {zagier}, and others in rank $2$, and by Jeffrey-Kirwan \cite {jeffreykirwan} in arbitrary rank. These answers are in agreement with the formulas written down by Witten \cite {witten}. We indicate yet another method of calculation which recovers the exact formulas obtained by the aforementioned authors in rank $2$. Our approach works in principle in any rank, and we will turn to this general case in future work. To set the stage, we let $C$ be a smooth complex algebraic curve of genus $g$. We let $\mathcal N_g$ denote the moduli space of stable rank $2$ bundles with fixed odd determinant, and we write $\mathcal V$ for the universal bundle on $C \times \mathcal N_g$. We also fix, once and for all, a symplectic basis $\{1, \delta_1, \ldots, \delta_{2g}, \omega\}$ for the cohomology $H^{\star}(C)$. 
A result of Newstead shows that the K\"{u}nneth components of $c_2(\endv)$ generate the cohomology ring $H^{\star}(\mathcal N_g)$ \cite {newstead}. We will use the notation of Newstead and Thaddeus, writing $$c_2(\endv)=- \beta\otimes 1 +4 \sum_{k=1}^{2g} \psi_k \otimes \delta_k + 2\alpha \otimes \omega$$ for classes $\alpha\in H^2(\ng), \beta\in H^{4}(\ng), \psi_k \in H^3(\ng).$ Thaddeus showed that nonzero top intersections on $\mathcal N_g$ must contain the $\psi_k$s in pairs, which can then be removed using the formula $$\int_{\ng} \alpha^m \beta^n \prod_{k=1}^{g} (\psi_k \psi_{k+g})^{p_k}=\int_{\mathcal N_{g-p}} \alpha^m \beta^n$$ where $p=\sum_k p_k$ and $2m+4n+6p=6g-6$. The top intersections of $\alpha$ and $\beta$ are further determined: \begin {theorem}\cite {thaddeus}\cite{donaldson}\cite{zagier}\cite{jeffreykirwan} \begin {equation} \label{main} \int_{\ng}\alpha^m \beta^n =(-1)^g 2^{2g-2} \frac{m!}{(m-g+1)!} (2^{m-g+1}-2)B_{m-g+1}.\end {equation} \end {theorem} Here $B_k$ are the Bernoulli numbers defined, for instance, by the power series expansion \begin{equation}\label{bernoulli} -\frac{u}{\sinh u}=\sum_{k} \frac{2^{k}-2}{k!} B_{k} u^{k}.\end{equation} In this paper, we reprove theorem $\ref{main}$. The idea is to lift the computation from $\ng$ to a Quot scheme as indicated in the diagram below. Then, we effectively calculate the needed intersections on Quot by equivariant localization with respect to a natural torus action. The convenience of this approach lies in that the fixed loci are easy to understand; in any rank they are essentially symmetric products of $C$. In rank $2$, their total contribution can be evaluated to the intersection numbers in $\eqref{main}$. 
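As a numerical sanity check on the convention \eqref{bernoulli} (an aside, not used in the sequel), the sketch below computes the Bernoulli numbers from the standard recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_j=0$, inverts the power series of $\sinh u / u$ with exact rational arithmetic, and compares coefficients:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(n):
    # B_0..B_n from sum_{j=0}^{m} C(m+1, j) B_j = 0  (convention B_1 = -1/2)
    B = [Fraction(1)] + [Fraction(0)] * n
    for m in range(1, n + 1):
        B[m] = Fraction(-1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m))
    return B

ORD = 12
# power series of sinh(u)/u: coefficient of u^k is 1/(k+1)! for even k
s = [Fraction(1, factorial(k + 1)) if k % 2 == 0 else Fraction(0)
     for k in range(ORD + 1)]
# invert the series: r * s = 1, so r is the expansion of u/sinh(u)
r = [Fraction(0)] * (ORD + 1)
r[0] = 1 / s[0]
for k in range(1, ORD + 1):
    r[k] = -sum(s[j] * r[k - j] for j in range(1, k + 1)) / s[0]

B = bernoulli(ORD)
for k in range(ORD + 1):
    # coefficient of u^k in -u/sinh(u) equals (2^k - 2) B_k / k!
    assert -r[k] == Fraction(2**k - 2, factorial(k)) * B[k]
```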
\begin{center} $ \xymatrix {Quot_{N, d} \ar@{<.>}[r] & {\p_{N, d}} \ar[d]^{\pi} \\ & \mg & \ng \times J \ar[l]^{\tau}} $ \end{center} The idea that the intersection theory of the moduli space of rank $r$ bundles on a curve and that of a suitable Quot scheme are related goes back to Witten \cite {witten2} in the context of the Verlinde formula. Moreover, in \cite {bertram} the authors have the reverse approach of calculating certain intersection numbers on Quot in low rank and genus by using the intersection theory of the moduli space of bundles. In this note, however, the translation of the intersection theory of the moduli space of bundles into that of the Quot scheme is very straightforward. The spaces in the diagram are as follows. \begin{itemize} \item $\mg$ denotes the moduli space of rank 2, odd degree stable bundles on $C$. By contrast with $\ng,$ the determinant of the bundles is allowed to vary in $\mg.$ There is a finite covering map $$\tau: \n_g \times J \rightarrow \mg$$ of degree $4^g$, given by tensoring bundles. Here $J$ is the Jacobian of degree 0 line bundles on $C$. We will write $\widetilde {\mathcal V}$ for the universal sheaf on $\mg \times C$, which is only defined up to twisting with line bundles from $\mg$. \\ \item For a positive integer $N$, we let $\p_{N, d}$ be the projective bundle $$\pi: {\bb P}_{N,d} \rightarrow \mg$$ whose fiber over a stable $[V]\in \mg$ is ${\bb P} (H^0 (C, V)^{\oplus N}).$ In other words, $$\p_{N, d} = {\bb P} (\text {pr}_{\s} \widetilde {\mathcal V}^{\oplus N} ).$$ We will take the degree to be as large as convenient to ensure, for instance, that the fiber dimension of $\p_{N, d}$ is constant. One can view $\p_{N, d}$ as a fine moduli space of pairs $(V, {\phi})$ consisting of a stable rank 2, degree $d$ bundle $V$ together with a nonzero $N$-tuple of holomorphic sections $${\phi} = (\phi^1, \ldots, \phi^N):{\mathcal {O}}^N \to V$$ considered projectively.
As such, there is a universal morphism $$\Phi: {\mathcal {O}}^N\to \overline {\mathcal V} \text{ on }\p_{N, d}\times C.$$ A whole series of moduli spaces of pairs of vector bundles with $N$ sections, each labeled by a parameter $\tau$, were defined and studied in \cite{thaddeus2} for $N=1$ and \cite{bertram} for arbitrary $N$. $\p_{N, d}$ is one of these moduli spaces, undoubtedly the least exciting one from the point of view of studying a new object, due to its straightforward relationship with $\mg$. One would expect the universal bundle $\overline {\mathcal V}$ on the total space of $\p_{N,d}$ and the noncanonical universal bundle $\widetilde {\mathcal V}$ on its base $\mg$ to be closely related, and indeed they coincide up to a twist of ${\mathcal {O}}(1)$ \begin{equation}\label{universal}\overline {\mathcal V}=\pi^{\star} \widetilde {\mathcal V}\otimes {\mathcal {O}}_{\p_{N,d}}(1).\end{equation} \item Finally, $Quot_{N, d}$ is Grothendieck's Quot scheme parametrizing rank $2$, degree $-d$ subbundles $E$ of the trivial rank $N$ bundle on $C$, $$0 \to E \to {\mathcal {O}}^N\to F\to 0.$$ We let $$0\to \mathcal E\to {\mathcal {O}}^N\to \mathcal F\to 0$$ be the universal sequence on $Quot_{N, d}\times C$. For large $d$ relative to $N$ and $g$, the Quot scheme is irreducible, generically smooth of the expected dimension $Nd-2(N-2)(g-1)$ \cite {bertram}. Then, $Quot_{N, d}$ and $\p_{N, d}$ are birational and they agree on the open subscheme corresponding to subbundles $E=V^{\vee}$ where $\phi: {\mathcal {O}}^N \to V$ is generically surjective. The universal structures also coincide on this open set. For arbitrary $d$, $Quot_{N,d}$ may be badly behaved, but intersection numbers can be defined with the aid of the virtual fundamental cycle constructed in \cite {opreamarian}. \end {itemize} We now define the cohomology classes that we are going to intersect.
We consider the K\"{u}nneth decomposition $$c_2 (\text {End} \mathcal {\widetilde V}) = - \tilde{\beta} \otimes 1 + 4 \sum_{k=1}^{2g} \tilde \psi_k \otimes \delta_{k} + 2 \tilde{\alpha} \otimes \omega$$ of the universal endomorphism bundle on $\mg \times C.$ In keeping with the notation of \cite{jeffreykirwan}, we further let $$c_i(\widetilde {\mathcal V}) = \tilde a_i \otimes 1 + \sum_{k=1}^{2g} {\tilde b_i^k} \otimes \delta_k + \tilde f_i \otimes \omega, \, 1\leq i \leq 2 $$ be the K\"unneth decomposition of the (noncanonical) universal bundle $\widetilde {\mathcal V}$ on $\mg \times C.$ Then $$\tilde f_1 = d, \, \, \tilde{\beta} = \tilde a_1^2 - 4 \tilde a_2, \, \, \tilde{\alpha} = 2 \tilde f_2 + \sum_{k=1}^{g} \tilde b_1^k \tilde b_1^{k+g} - d \tilde a_1.$$ It is an easy exercise, using that $\ng$ is simply connected, to see that \begin{equation}\label{pullback}\tau^{\star}\tilde {\alpha}=\alpha,\;\;\; \tau^{\star}\tilde {\beta}=\beta,\;\;\; \tau^{\star} (\sum_{k=1}^{g} \tilde b_1^k \tilde b_1^{k+g}) = 4 \theta,\end{equation} where $\theta$ is the class of the theta divisor on $J.$ On $\p_{N, d}$, we let $\zeta$ denote the first Chern class of ${\mathcal {O}}(1)$ and let \begin{equation}c_i(\overline {\mathcal V}) = {\bar{a}}_i \otimes 1 + \sum_{k=1}^{2g} {\bar{b}}_i^k \otimes \delta_k + {\bar{f}}_i \otimes \omega, \, 1 \leq i \leq 2 \end{equation} be the K{\"{u}}nneth decomposition of the Chern classes of $\overline {\mathcal V}.$ Finally, we consider the corresponding $a, b, f$ classes on $Quot_{N, d}$: $$c_i(\mathcal E^{\vee})=a_i\otimes 1+ \sum_{k=1}^{2g} b_i^{k}\otimes \delta_k + f_i \otimes \omega, \, 1\leq i\leq 2.$$ We now show how to express any top intersection of $\alpha$ and $\beta$ classes on $\ng$ as an intersection on $Quot_{N, d}$. 
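The relation $\tilde{\beta} = \tilde a_1^2 - 4 \tilde a_2$ reflects the rank-$2$ splitting-principle identity $c_2(\operatorname{End} V) = 4c_2(V) - c_1(V)^2$: if $V$ has Chern roots $r_1, r_2$, then $\operatorname{End} V = V \otimes V^{\vee}$ has roots $0, 0, r_1 - r_2, r_2 - r_1$. A minimal Python check on random rational roots (an illustration only, not part of the argument):

```python
from fractions import Fraction
from itertools import combinations
from math import prod
import random

def chern(k, roots):
    # k-th Chern class = k-th elementary symmetric polynomial of the roots
    return sum((prod(c, start=Fraction(1)) for c in combinations(roots, k)),
               Fraction(0))

rng = random.Random(1)
for _ in range(100):
    r1 = Fraction(rng.randint(-9, 9), rng.randint(1, 9))
    r2 = Fraction(rng.randint(-9, 9), rng.randint(1, 9))
    end_roots = [Fraction(0), Fraction(0), r1 - r2, r2 - r1]  # roots of End V
    c1, c2 = chern(1, [r1, r2]), chern(2, [r1, r2])
    assert chern(1, end_roots) == 0                 # c_1(End V) = 0
    assert chern(2, end_roots) == 4 * c2 - c1**2    # c_2(End V) = 4c_2 - c_1^2
```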
Let $m$ and $n$ be any nonnegative integers such that $$m+2n = 4g-3, \,\,\, m\geq g.$$ With the aid of \eqref{pullback} we note \begin{equation}\label{red1}\int_{\mg} (\tilde{\alpha} + \sum_{k=1}^{g} \tilde b_1^k \tilde b_1^{k+g})^m \tilde{\beta}^n = \frac{1}{4^g} \cdot \binom{m}{g} \int_{J} (4 \theta)^g \int_{\n_g} \alpha^{m-g} \beta^n.\end {equation} A top intersection on $\mg$ which is invariant under the normalization of the universal bundle $\mathcal {\widetilde V}$ on $\mg \times C$ can be readily expressed as a top intersection on $\p_{N, d}$ as follows. We assume from now on that $N$ is odd and we let $$2M = N (d-2\bar{g})-1.$$ Then \begin {equation}\label{red2}\int_{\mg} \left(\tilde{\alpha} + \sum_{k=1}^{g} \tilde b_1^k \tilde b_1^{k+g}\right)^m \tilde{\beta}^n = \int_{\mg} \left( 2\tilde f_2 + 2 \sum_{k=1}^{g} \tilde b_1^k \tilde b_1^{k+g}-d \tilde a_1 \right)^m (\tilde a_1^2-4\tilde a_2)^n=\end {equation} $$=\int_{\p_{N, d}} \zeta^{2M} \left( 2{\bar{f}}_2 + 2 \sum_{k=1}^{g} {\bar{b}}_1^k {\bar{b}}_1^{k+g}-d {\bar{a}}_1 \right)^m \left({\bar{a}}_1^2-4{\bar{a}}_2\right)^n =$$ $$=\int_{\p_{N, d}} {\bar{a}}_2^M \left ( 2{\bar{f}}_2 + 2 \sum_{k=1}^{g} {\bar{b}}_1^k {\bar{b}}_1^{k+g} - d {\bar{a}}_1 \right)^m \left({\bar{a}}_1^2-4{\bar{a}}_2\right)^n.$$ Here we used \eqref{universal} to write $${\bar{a}}_2^M = (\zeta^2 + \tilde a_1 \zeta + \tilde a_2)^M = \zeta^{2M} + \text { lower order terms in }\zeta,$$ the latter being zero when paired with a top intersection from the base $\mg$. Finally, this last intersection number can be transferred to $Quot_{N, d}$ using the results of \cite {marian}.
It is shown there that the equality \begin{equation}\label{red3} \int_{\p_{N, d}} {\bar{a}}_2^M R(\bar a, \bar b, \bar f)=\int_{Quot_{N, d}} a_2^M R(a, b, f)\end {equation} holds for any polynomial $R$ in the $a, b, f$ classes, in the regime that $N$ is large compared to the genus $g$, and in turn $d$ is large enough relative to $N$ and $g$, so that $Quot_{N,d}$ is irreducible of the expected dimension. Moreover, the equality \begin{equation}\label{virdeg}\int_{\left[Quot_{N,d}\right]^{vir}}a_2^M R(a, b, f)=\int_{\left[Quot_{N, d-2}\right]^{vir}} a_2^{M-N} R(a, b, f)\end {equation} established in \cite {opreamarian} allows us to assume from this moment on that the degree $d$ is as small as desired relative to $N$. The trade-off is that we need to make use of the virtual fundamental classes alluded to above and defined in \cite {opreamarian}. Putting $\eqref{red1}$, $\eqref{red2}$, $\eqref{red3}$ together we obtain \begin {equation} \int_{\ng} \alpha^{m-g}\beta^n=\frac{(m-g)!}{m!}\int_{\left[Quot_{N, d}\right]^{vir}} a_2^M \left (2f_2 + 2 \sum_{k=1}^{g}b_1^k b_1^{k+g}-d {a}_1\right)^m \left(a_1^2-4a_2\right)^n.\end {equation} To prove $\eqref {main}$, we will show that \begin {equation}\label{red}\int_{[Quot_{N, d}]^{vir}} a_2^M \left (2f_2 + 2 \sum_{k=1}^{g}b_1^k b_1^{k+g}-d {a}_1\right)^m \left(a_1^2-4a_2\right)^n=\end {equation} $$=(-1)^g(2^{m-1}-2^{2g-1}) \frac{m!}{(m-2g+1)!} B_{m-2g+1}.$$ Equation $\eqref{red}$ will be verified by virtual localization. The torus action we will use and its fixed loci were described in \cite {opreamarian}. For the convenience of the reader, we summarize the facts we need below. The torus action on $Quot_{N, d}$ is induced by the fiberwise $\mathbb C^{\star}$ action on ${\mathcal {O}}^N$ with distinct weights $-\lambda_1, \ldots, -\lambda_N$. 
On closed points, the action of $g\in \mathbb C^{\star}$ is $$\left[E\stackrel{i}{\to} {\mathcal {O}}^N \right]\mapsto \left[E\stackrel{g\circ i}{\to}{\mathcal {O}}^N\right].$$ The fixed loci $Z$ correspond to split subbundles $$E= L_1 \oplus L_2$$ where $L_1$ and $L_2$ are line subbundles of copies of ${\mathcal {O}}$ of degree $-d_1$ and $-d_2$. Thus $Z=\text {Sym}^{d_1} C \times \text {Sym}^{d_2} C$ and $$\mathcal E|_{Z}=\mathcal L_1\oplus \mathcal L_2$$ where we let $\mathcal L_1$ and $\mathcal L_2$ be the universal line subbundles on $\text {Sym}^{d_1} C\times C$ and $\text {Sym}^{d_2} C\times C$. We write $$c_1(\mathcal L_i^{\vee})=x_i\otimes 1+ \sum_{k} y_i^{k}\otimes \delta_k+d_i \otimes \omega,\,\,\, 1\leq i\leq 2.$$ We set the weights to be the $N^{\text {th}}$ roots of unity. The equivariant Euler class of the virtual normal bundle of $Z$ in $Quot_{N, d}$ was determined in \cite {opreamarian} to be \begin {equation}\label{normal} \frac{1}{e_{T}(\n^{vir})} = (-1)^{g} \left( (\lambda_1 h +x_1)- (\lambda_2 h + x_2) \right)^{-2\gbar}\cdot {\prod_{i=1}^{2}} \left ( \frac{x_i}{(\lambda_i h + x_i)^N - h^N} \right )^{d_i - \gbar}\end {equation}$$ \cdot\prod_{i=1}^{2}\exp\left(\theta_i\cdot{\left ( \frac{N (\lambda_i h + x_i)^{N-1}}{(\lambda_i h + x_i)^N - h^N} - \frac{1}{x_i}\right )}\right).$$ Here $\gbar=g-1$, $h$ is the equivariant parameter, and $\theta_i$ are the pullbacks of the theta divisor from the Jacobian.
Moreover, it is clear that \begin{equation}\label{rest1}a_1 = (x_1 + \lambda_1 h) + (x_2+\lambda_2 h), \;\; a_2=(x_1+\lambda_1 h) (x_2+\lambda_2 h),\end{equation} \begin{equation}\label{rest2}b_{1}^{k}=y_1^{k}+y_2^{k},\end{equation} \begin{equation}\label{rest3}f_2=-\sum_{k=1}^{2g} y_1^{k} y_{2}^{k+g} + d_2(x_1+\lambda_1 h) + d_1(x_2+\lambda_2 h).\end{equation} We collect $\eqref{normal}$, $\eqref{rest1}$, $\eqref{rest2}$, $\eqref{rest3}$, and rewrite the left hand side of $\eqref{red}$, via the virtual localization theorem, \begin{equation}\label{bigsum}\text {LHS of }\eqref{red}=(-1)^{g} \sum_{d_1, d_2, \lambda_1, \lambda_2}\mathcal I_{d_1, d_2, \lambda_1, \lambda_2}\end{equation} where the sum ranges over all degree splittings $d=d_1+d_2$ and pairs of distinct roots of unity $(\lambda_1, \lambda_2)$. The summand $\mathcal I_{d_1, d_2, \lambda_1, \lambda_2}$ is defined as the evaluation on $\text {Sym}^{d_1} C\times \text {Sym}^{d_2} C$ of the expression \begin{equation}\label{tosum} \left((\lambda_1 h + x_1)-(\lambda_2 h+ x_2)\right)^{2n-2\gbar} \left(2\theta_1+2\theta_2+(d_2-d_1) ((\lambda_1 h+x_1)-(\lambda_2 h +x_2))\right)^{m}\end {equation} $$\cdot \prod_{i=1}^{2}(\lambda_i h+ x_i)^{M} \left ( \frac{x_i}{(\lambda_i h + x_i)^N - h^N} \right )^{d_i - \gbar} \exp\left(\theta_i\cdot{\left ( \frac{N (\lambda_i h + x_i)^{N-1}}{(\lambda_i h + x_i)^N - h^N} - \frac{1}{x_i}\right )}\right).$$ To carry out this evaluation, we use the following standard facts regarding intersections on $\text {Sym}^{d}C$: \begin{equation} \label{xtheta} x^{d-l} \theta^{l} = \frac{g!}{(g-l)!}\, \, \text{for} \, \, l \leq g, \, \, \text{and} \, \, x^{d-l} \theta^{l} = 0 \text{ for} \, \, l > g. \end{equation} We will henceforth replace any $\theta^{l}$ appearing in a top intersection on $\text {Sym}^d C$ by $\frac{g!}{(g-l)!}x^{l}$. 
Then \begin {equation}\label{thetay} \frac {\theta^{l}}{l!} \exp\left(\theta\cdot{\left ( \frac{N (\lambda h + x)^{N-1}}{(\lambda h + x)^N - h^N} - \frac{1}{x}\right )}\right)= \sum_{k} \frac{\theta^{l+k}}{l!\, k!} \left( \frac{N (\lambda h + x)^{N-1}}{(\lambda h + x)^N - h^N} - \frac{1}{x}\right )^{k}= \end{equation} $$=\sum_{k\leq g-l} \frac{g!\, x^{l+k}}{l!\,(g-l-k)!\,k!} \left( \frac{N (\lambda h + x)^{N-1}}{(\lambda h + x)^N - h^N} - \frac{1}{x}\right )^{k}= N^{g-l} \binom{g}{l} x^{g}\cdot \frac{(\lambda h+x)^{(N-1)(g-l)}}{((\lambda h+x)^{N}-h^N)^{g-l}}.$$ We further set $$\bar{x}_i = \frac{x_i}{\lambda_i h} \text{ and } \mathcal J=\lambda_1 (1+\bar {x}_1)-\lambda_2(1+\bar{x}_2).$$ In terms of the rescaled variables, ${\mathcal I}_{d_1, d_2, \lambda_1, \lambda_2}$ becomes, via \eqref{thetay}, the residue at $\bar{x}_1=\bar{x}_2=0$ of $$\sum_{l_1+ l_2 + s = m} \left ( \frac{m!}{s!} (d_2 - d_1)^s \cdot \mathcal J^{2n-2\gbar +s} \cdot \prod_{i=1}^{2} 2^{l_i}N^{g-l_i} \binom{g}{l_i} \lambda_i^{M+1 + l_i} \frac{(1+\bar{x}_i)^{(N-1)(g-l_i) +M}}{((1+\bar{x}_i)^N -1)^{d_i - l_i +1}} \right ).$$ We expand $$\mathcal J^{2n-2\gbar+s}=\sum_{k=0}^{d}\sum_{\alpha_1+\alpha_2=k} (-1)^{\alpha_2} \binom{k}{\alpha_1} \lambda_1^{\alpha_1} \lambda_2^{\alpha_2} \cdot {\mathfrak z}_N(s, k) \cdot \bar {x}_1^{\alpha_1} \bar{x}_2^{\alpha_2}$$ where we set $${\mathfrak z}_N(s, k)=\binom{2n-2\gbar+s}{k} (\lambda_1-\lambda_2)^{2n-2\gbar+s-k}.$$ In \cite {opreamarian}, we computed the residue $$\text {Res}_{x=0}\left\{ \frac{x^a (1+x)^{N-1+b}}{((1+x)^N-1)^{c+1}}\right\}=\frac{1}{N}\sum_{p=0}^{a} (-1)^{a-p}\binom{\frac{b+p}{N}}{c}\binom{a}{p}.$$ Therefore, $\mathcal I_{d_1, d_2, \lambda_1, \lambda_2}$ becomes \begin{equation}\label{tosum2}\sum_{l_1+l_2+s=m}\sum_{k=0}^{d}\sum_{\alpha_1+ \alpha_2=k}\sum_{p_1, p_2} \left\{\frac{m!}{s!}(d_2-d_1)^s (-1)^{\alpha_2}\binom{k}{\alpha_1}\cdot {\mathfrak z}_N(s, k)\cdot \right.\end{equation} $$\left.\cdot \prod_{i=1}^{2} 2^{l_i} 
N^{\gbar-l_i}\binom{g}{l_i}\lambda_i^{M+l_i+\alpha_i+1}\binom{\frac{M+(N-1)(\gbar-l_i)+p_i}{N}}{d_i-l_i}\binom{\alpha_i}{p_i}(-1)^{\alpha_i-p_i}\right \}.$$ Now $\eqref{bigsum}$ computes an intersection number on $\ng$, hence it is independent of $N$. Thus it is enough to make $N\to \infty$ in the above expression. Lemma $\ref{bern}$ below clarifies the $N$ dependence of the sum over the roots of unity \begin{equation}\label{sumoverroots}\sum {\mathfrak z_{N}}(s, k) \cdot \lambda_1^{M+l_1+\alpha_1+1}\lambda_2^{M+l_2+\alpha_2+1},\end{equation} by writing $\lambda_1=\zeta\lambda_2$. Lemma $\ref{binoms}$ takes care of all the other terms in \eqref{tosum2}. A moment's thought shows that in the limit the sum over the roots of unity of the terms \eqref{tosum2} reduces to $$\frac{1}{2}\sum \frac{m!}{s!}(d_2-d_1)^{s} \cdot \prod_{i=1}^{2} 2^{l_i}\binom{g}{l_i} \cdot k! \text { Coeff }_{x_1^{\alpha_1}} \binom{x_1+\frac{d}{2}-l_1}{d_1-l_1}\cdot \text { Coeff }_{(-x_2)^{\alpha_2}}\binom{x_2+\frac{d}{2}-l_2}{d_2-l_2}\cdot {\mathfrak z}_{\infty}(s, k)$$ with $${\mathfrak z}_{\infty}(s, k)= \binom{2n-2\gbar+s}{k}\cdot \frac{B_{2\gbar-2n-s+k}}{(2\gbar-2n-s+k)!}\cdot (1-2^{2n-2\gbar+s-k+1}).$$ Summing over $\alpha_1+\alpha_2=k$ first, we obtain $$\frac{1}{2}\sum \frac{m!}{s!}(d_2-d_1)^{s} \cdot \prod_{i=1}^{2} 2^{l_i}\binom{g}{l_i} \cdot k! \text { Coeff }_{x^{k}} \binom{x + \frac{d}{2}-l_1}{d_1-l_1}\binom{-x+\frac{d}{2}-l_2}{d_2-l_2} \cdot {\mathfrak z}_{\infty}(s, k).$$ We sum next over $d_1+d_2=d$. An easy induction on $m$ shows that generally $$\sum_{b_1+b_2=a} \binom{a_1}{b_1} \binom{a_2}{b_2} (t+b_1-b_2)^m=(t+a_1-a_2)^m$$ holds whenever $a_1+a_2=a$. Via this observation, our expression simplifies to $$\frac{1}{2} \sum \frac{m!}{s!}\cdot \prod_{i=1}^{2} 2^{l_i}\binom{g}{l_i}\cdot k! \text { Coeff}_{x^k} (-2x)^s \cdot {\mathfrak z}_{\infty}(s, k)= 2^{m-1} m!
\sum \binom{g}{l_1}\binom{g}{l_2} (-1)^{s}\cdot {\mathfrak z}_{\infty}(s, s)$$ $$=2^{m-1} m!\, \frac{B_{2\gbar-2n}}{(2\gbar-2n)!}\,(1-2^{2n-2\gbar+1})\sum (-1)^{s} \binom{g}{l_1}\binom{g}{l_2}\binom{2n-2\gbar+s}{s}$$ $$=2^{m-1}\,\frac{m!}{(m-2g+1)!}\,B_{m-2g+1}\,(1-2^{-m+2g}) \sum \binom{g}{l_1} \binom{g}{l_2} \binom{m-2g}{s}$$ $$=2^{m-1}\frac{m!}{(m-2g+1)!}\,B_{m-2g+1}\,(1-2^{-m+2g}).$$ This last equality and \eqref{bigsum} complete the proof of \eqref{red}, hence of the theorem. \begin {lemma}\label{bern} For all integers $a$ and $k$ we have \begin {equation}\label{limit} \lim_{N\to \infty} \frac{1}{N^{k}} \left(\sum_{\zeta\neq 1} \frac{\zeta^{\frac{N-1}{2}+a}}{(1-\zeta)^{k}}\right)=(1-2^{-k+1})\cdot\frac{B_{k}}{k!}\end {equation} the sum being taken over the $N^{\text {th}}$ roots $\zeta$ of $1$. \end {lemma} {\bf Proof.} When $N$ is large, the sum to compute is $0$ for $k<0$, and $-1$ for $k=0$. We may thus assume that $k\geq 1$. We introduce the auxiliary variable $z$, and evaluate $$\sum_{k=1}^{\infty} {z^{k-1}} \left(\sum_{\zeta\neq 1} \frac{\zeta^{\frac{N-1}{2}+a}}{(1-\zeta)^{k}}\right)=\sum_{\zeta\neq 1} \frac{\zeta^{\frac{N-1}{2}+a}}{1-z-\zeta}=\frac{1}{z}+N \frac{(1-z)^{\frac{N-1}{2}+a-1}}{(1-z)^{N}-1}.$$ Setting $z=\frac{u}{N}$ and making $N\to \infty$ we obtain $$\sum_{k=1}^{\infty} \frac{u^{k}}{N^k} \left(\sum_{\zeta\neq 1} \frac{\zeta^{\frac{N-1}{2}+a}}{(1-\zeta)^{k}}\right)=1+u\cdot \frac{(1-u/N)^{\frac{N-1}{2}+a-1}}{(1-u/N)^{N}-1}\to 1+ \frac{ue^{-u/2}}{e^{-u}-1}=1-\frac{u}{2\sinh\frac{u}{2}}.$$ The lemma follows, since $\frac{u}{2\sinh\frac{u}{2}}=\frac{ue^{u/2}}{e^{u}-1}=\sum_{k\geq 0} B_{k}\left(\tfrac{1}{2}\right)\frac{u^{k}}{k!}$ and $B_{k}\left(\tfrac{1}{2}\right)=(2^{1-k}-1)B_{k}$, so that $1-\frac{u}{2\sinh\frac{u}{2}}=\sum_{k\geq 1}(1-2^{-k+1})\frac{B_{k}}{k!}\,u^{k}$. \begin{lemma}\label{binoms} Let $b, \alpha$ be fixed non-negative integers and $z$ a real number.
Then the following limit $$\lim_{N\to \infty} N^{\alpha} \sum_{p=0}^{\alpha} \binom{z+\frac{p}{N}}{b}\binom{\alpha}{p}(-1)^{\alpha-p}$$ equals the coefficient of $x^{\alpha}$ in $\alpha!\, \binom{x+z}{b}.$ \end {lemma} {\bf Proof.} Let us write $$f_b(z)=\sum_{p=0}^{\alpha} \binom{z+\frac{p}{N}}{b}\binom{\alpha}{p}(-1)^{\alpha-p}.$$ The recursion $$f_{b+1}(z+1)=f_{b+1}(z)+f_{b}(z)$$ implies by induction that $$f_b(z)=\sum_{j=0}^{b} \binom{z}{b-j}f_j(0).$$ We evaluate $$N^{\alpha}f_{j}(0)=N^{\alpha}\sum_{p=0}^{\alpha} \binom{\frac{p}{N}}{j}\binom{\alpha}{p}(-1)^{\alpha-p}=N^{\alpha}\sum_{i=0}^{j} c(j,i)\sum_{p=0}^{\alpha}\left(\frac{p}{N}\right)^{i}\binom{\alpha}{p}(-1)^{\alpha-p},$$ where $c(j,i)$ are the coefficients defined by the expansion $$\binom{x}{j}=\sum_{i=0}^{j} c(j, i)\, x^{i}.$$ We use the Euler identities \begin{equation*}\sum_{p=0}^{\alpha} p^i \binom{\alpha}{p}(-1)^{\alpha-p}=\begin{cases}0 & \text {if } i<\alpha\\ \alpha! &\text {if } i=\alpha \end{cases}.\end{equation*} Making $N\to \infty$ we obtain $$\lim_{N\to \infty} N^{\alpha} f_j(0)=\alpha!\, c(j, \alpha).$$ Therefore, $$\lim_{N\to \infty} N^{\alpha} f_b(z)=\alpha! \,\sum_{j=0}^{b}\binom{z}{b-j} c(j, \alpha).$$ This expression can be computed as the coefficient of $ x^{\alpha}$ in $$\alpha!\,\sum_{j=0}^{b} \binom{z}{b-j}\binom{x}{j}=\alpha!\binom{x+z}{b}.$$ \end{document}
\begin{document} \title{Rooted tree graphs and the Butcher group:\ Combinatorics of elementary perturbation theory} \begin{abstract} The perturbation expansion of the solution of a fixed point equation or of an ordinary differential equation may be expressed as a power series in the perturbation parameter. The terms in this series are indexed by rooted trees and depend on a parameter in the equation in a way determined by the structure of the tree. Power series of this form may be considered more generally; there are two interesting and useful group structures on these series, corresponding to operations of composition and substitution. The composition operation defines the Butcher group, an infinite dimensional group that was first introduced in the context of numerical analysis. This survey discusses various ways of realizing these rooted trees: as labeled rooted trees, or increasing labeled rooted trees, or unlabeled rooted trees. It is argued that the simplest framework is to use labeled rooted trees. \end{abstract} \section{Introduction} This year we celebrate the scientific contributions of Charles Newman. Chuck has worked in almost every aspect of mathematical physics. He can identify a significant problem, locate the appropriate framework, and find an unexpected path to a comprehensible solution. His insights and his generosity in sharing them are extraordinarily valuable to the community. Chuck has been my friend and colleague from years together at the University of Arizona and now at NYU Shanghai. It is a pleasure to dedicate this paper to such a distinguished scientist. The paper is a largely expository survey of rooted trees and the Butcher group. The Butcher group is an infinite dimensional group associated with rooted trees. See \cite{ChartierHairerVilmart} for the current status of this subject. Most expositions are in the framework of unlabeled rooted trees. 
The main message of the present work is that the combinatorics is simpler when formulated in terms of labeled rooted trees. In particular, group structures associated with rooted trees arise naturally from calculations using elementary calculus formulas. This is in the spirit of the theory of combinatorial species \cite{BergeronLabelleLeroux}. This same point of view is fruitful in the study of graph expansions in statistical mechanics \cite{Faris1} and of diagram expansions in quantum field theory \cite{Faris2}. The subject matter of the present work has two independent origins. One begins in 1963 with work of Butcher in numerical analysis. He discovered that a large class of numerical methods for ordinary differential equations may be expressed as sums indexed by rooted trees. Furthermore, these sums may be combined in a way that defines a group structure. This subject has become part of the lore of numerical analysis \cite{Butcher,HairerWanner1,HairerWanner2}. It remains active; for instance see \cite{LundervoldMuntheKaas,BogfjellmoSchmeding} and works cited therein. The other origin is research on renormalization in quantum field theory, starting with the 1998 contribution of Connes and Kreimer \cite{ConnesKreimer}. This work is usually presented in the language of combinatorial Hopf algebras. Many authors, for instance \cite{Brouder,GirelliKrajewskiMartinetti}, have investigated the relation between the Butcher group and problems in quantum field theory. Recently there has been an explosion of mathematics papers treating Hopf algebras associated with rooted trees. See for instance \cite{CalaqueEbrahimiFardManchon} and papers cited there. The approach here begins by distinguishing basic rooted tree constructions. The starting point is a finite set $U$ known as the \emph{label set} or \emph{vertex set}. A labeled rooted tree on $U$ is a tree graph with vertex set $U$ together with a distinguished point $r$ in $U$. 
An increasing rooted tree is one for which the label set $U$ is linearly ordered and the labels increase with distance from the root $r$. An unlabeled rooted tree is an isomorphism class of labeled rooted trees. \begin{itemize} \item $\mathcal{A}[U]$ is all rooted trees on label set $U$. \item $\mathcal{A}^\uparrow[U]$ is all increasing rooted trees on linearly ordered label set $U$. \item $\tilde \mathcal{A}[n]$ is all unlabeled rooted trees with $n$ vertices. \end{itemize} The $\mathcal{A}$ notation is from the French ``arbre''. The relation between these constructions is that if $U$ has $n$ elements, then \begin{equation} \mathcal{A}^\uparrow[U] \to \mathcal{A}[U] \to \tilde \mathcal{A}[n]. \end{equation} The first map is an injection, and the second map is a surjection. The topics discussed include: \begin{itemize} \item Unlabeled, labeled, and increasing rooted trees \item Fixed point equations and ordinary differential equations \item The composition operation (Butcher group) \item The substitution operation \end{itemize} An appendix reviews calculus formulas as used in combinatorics. \begin{example} Figure~1 illustrates the distinction between labeled rooted trees and unlabeled rooted trees in the case of a vertex set with three elements. There are 9 labeled rooted trees. However there are only 2 unlabeled rooted trees. One of these has a symmetry that exchanges the two non-root vertices. \end{example} \begin{figure} \caption{Labeled rooted trees on three vertices} \end{figure} \section{Labeled rooted trees} \noindent\textbf{Labeled rooted trees as functions} Let $U$ be a non-empty finite set. Let $f: U \to U$ be a function with a single fixed point $r$ such that $U \setminus \{r\}$ has no non-empty invariant subset. Then $f$ defines a labeled rooted tree. In the following it will be more convenient to consider the function $f$ restricted to $U \setminus \{r\}$. This leads to the official definition of labeled rooted tree used here. 
Let $U$ be a non-empty finite set. Let $r$ be a point in $U$, and let $T: U \setminus \{r\} \to U$ be a function that has no non-empty invariant subset. Then $T$ is called a \emph{labeled rooted tree}. The set $U$ is the label set, and the point $r$ is the \emph{root}. A point in $U$ is called a \emph{label} or a \emph{vertex}. Each ordered pair $(i, T(i))$ with $i \neq r$ is called an \emph{edge}. The point $T(i)$ is called the \emph{predecessor} of $i$. The set of labeled rooted trees with label set $U$ is $\mathcal{A}[U]$. If $U$ is empty, then there are no labeled rooted trees on $U$, so $\mathcal{A}[\emptyset] = \emptyset$. If $T$ is in $\mathcal{A}[U]$, then $[T] = U$ is the label set of $T$. The number of points in the label set is $|T| = |U|$. The set of \emph{immediate successor} points that map to $j$ in $U$ is $T^{-1}(j)$. The \emph{degree} of $j$ is $|T^{-1}(j)|$, the number of immediate successor points. (The definition of degree used here is special to rooted trees; it is not the usual definition from graph theory.) A \emph{leaf} is a vertex with degree zero. For a one-vertex tree $\bullet$ the root is a leaf. If $b: W \to U$ is a bijection, then the map $T \mapsto T \circ b$ maps $\mathcal{A}[U]$ to $\mathcal{A}[W]$. Such a map is called a \emph{relabeling}. Most interesting properties of labeled rooted trees are not affected by relabeling. It might seem reasonable to use a standard label set $U_n$ for each $n$. An obvious candidate is $[1,n]$, that is, the set $\{1, \ldots, n\}$. On the other hand, it is common to consider a labeled rooted tree on a subset of $U$, and that means that other label sets are going to arise naturally. The set $\mathcal{A}$ of all labeled rooted trees may be defined by choosing a label set $U$ with $|U| = n$ for each $n = 1,2,3,\ldots$. For some purposes it is useful to adjoin an empty set object associated with the empty label set.
The set of all labeled rooted trees with this extra object is written $\mathcal{A}_\emptyset$. \noindent\textbf{Labeled rooted trees as partially ordered sets} A labeled rooted tree $T$ on a finite label set $U \neq \emptyset$ may be viewed as a partial order $\leq_{\mathrm{tree}}$ on $U$. This is the smallest partial order with the property that $T(j) \leq_{\mathrm{tree}} j$ for every vertex $j \neq r$. For each vertex $i$ there is a rooted tree $T_i$ whose vertex set consists of all vertices that are sent to $i$ by some iterate of $T$. The tree $T_i$ is the restriction of $T$ to this set. To say that $ i \leq_{\mathrm{tree}} j$ is the same as saying that $j$ is in the vertex set of the tree $T_i$. The special feature of this partial order is that for each $j$ in $U$ the set of all $i \leq_{\mathrm{tree}} j$ is linearly ordered with respect to the restriction of $\leq_{\mathrm{tree}}$ to this set. Furthermore, there is a least element in $U$, the root $r$. There can be one or more maximal elements; these are the leaves. \noindent\textbf{Labeled rooted trees as graphs} A \emph{labeled tree} $T$ on a non-empty finite set $U$ is a simple graph with $U$ as vertex set that is connected and has no cycles. For every pair of vertices $i \neq j$ there is a unique simple path connecting the two points. A \emph{labeled rooted tree} $T$ on $U$ is equivalent to a labeled tree and a choice of root point in $U$. For each vertex $i$ other than the root, there is a unique edge from $i$ in the direction of the root and corresponding vertex $T(i)$. The usual way of picturing a labeled rooted tree is as a set together with a tree graph and a distinguished point. \noindent\textbf{Forests of labeled rooted trees} Let $V$ be a finite set. Let $f: V \to V$ be a function with a set of fixed points $R$ such that $V \setminus R$ has no non-empty invariant subset. Then consider the function $f$ restricted to $V \setminus R$. This motivates the official definition of forest of labeled rooted trees.
A \emph{forest of labeled rooted trees} on a set $V$ is a subset $R \subseteq V$ and a function $F: V \setminus R \to V$ such that $V \setminus R$ has no non-empty invariant subset. It is possible that $V = \emptyset$, in which case $R = \emptyset$, and $F$ is the empty function. The most important fact about a forest of labeled rooted trees is that there is a set partition $\Gamma$ of $V$ with the following property. For each block $B$ of $\Gamma$ there is a unique $r$ in $R$ such that the restriction $F_B$ of $F$ to $B \setminus \{r\}$ is a labeled rooted tree. When $V = \emptyset$ this is the empty set partition. \iffalse Let $R$ be the set of roots of the rooted trees in the forest. A forest on a set $V$ may also be considered as a function defined on $U \setminus R$. The value of the function on a vertex $i$ in block $B$ is obtained by applying the rooted tree function $F(B)$ to $i$. Remark: Let $U$ be a finite set. Let $T: U \to U$ be an arbitrary function from the set to itself. Then there is a set partition $\Gamma$ of $U$ with the following property. For each block $B$ of the set partition there is a subset $C_B \subseteq B$ such that $f$ restricted to $C_B$ is a cycle with some period (possibly one). The block $B$ itself admits a set partition into blocks $U_j$ for $j$ in $C_B$ so that $T: U_j \setminus C_B \to U_j$ is a rooted tree function. In conclusion, an arbitrary function consists of trees feeding into cycles. Suppose that the function $T:U \to U$ has no non-trivial cycles, only fixed points. Then there is a set partition $\Gamma$ of $U$ with the following property. For each block $B$ of the set partition there is a point $j$ in $ B$ that is a fixed point of $T$. Furthermore $T: U_j \setminus \{j\} \to U_j$ is a rooted tree function. Thus an arbitrary function with no cycles consists of trees feeding into fixed points. A function with no cycles is essentially the same thing as a forest function. 
\fi \begin{figure} \caption{Labeled rooted trees on three vertices: Pr\"ufer sequences} \end{figure} \noindent\textbf{Labeled rooted trees defined recursively} A labeled rooted tree $T$ on $U$ has a recursive definition as a point $r$ in $U$ together with a forest of labeled rooted trees on $U \setminus \{r\}$. This forest $F$ is the restriction of $T$ to points in $U \setminus \{r\}$ that are not in $T^{-1}(r)$. The set of roots of the forest is $R = T^{-1}(r)$. The recursive definition ends when the tree consists only of a root; the forest is then empty. \noindent\textbf{Notation for labeled rooted trees} A labeled rooted tree on a one-point set may be designated by its label $j$. A labeled rooted tree on a set with two or more points may be denoted by $j [ \; \; \; \;]$, where the forest of immediate successor rooted trees is listed (in some arbitrary order) inside the bracket. As an example, consider the tree with root $c$ and with vertices $b,e$ that are sent to $c$ and vertices $a,d$ that are sent to $e$. In this notation the tree would be $c[be[ad]]$. \noindent\textbf{Labeled rooted trees as sequences of vertices} Consider a non-empty vertex set $U$ with $n$ elements. For the construction to follow it is necessary to impose a linear order on $U$. The result says that labeled rooted trees on $U$ correspond to sequences of $n-1$ elements of $U$. \begin{example} For $n=3$ the vertices of a labeled rooted tree may be numbered 1,2,3. As shown below, each labeled rooted tree may be coded by a sequence of two numbers. For example, the tree 2[13] is coded by 22, while the tree $2[1[3]]$ is coded by 12. There are nine sequences: three of them 11, 22, 33 correspond to one unlabeled rooted tree, while six of them 12, 13, 21, 23, 31, 32 correspond to the other unlabeled rooted tree. See Figure~2 for the picture.
\end{example} \begin{proposition}[Pr\"ufer correspondence] Given a non-empty label set $U$ with $n$ elements and a given linear order, there is a bijection between the set of sequences $s$ of length $n-1$ of elements of $U$ and the set of labeled rooted trees $T$ in $\mathcal{A}[U]$. For each $j$ in $U$ the number of times the sequence $s$ assumes the value $j$ is the degree $|T^{-1}(j)|$. \end{proposition} \begin{proof}It is easiest to see how to go from the labeled rooted tree $T$ to the corresponding sequence $s$. At each stage remove the smallest leaf and its corresponding edge from the tree. The value of $s$ at this stage is the vertex at the other end of this edge. When this is repeated $n-1$ times the final sequence value is the root. This procedure is illustrated in Figure~3. Here is the construction to go from the sequence $s$ to the labeled rooted tree. The edges are restored in the same order as they were removed. At each stage add a new edge as follows. Take the smallest vertex that has not yet been used and that does not occur in the part of the sequence that has not been used. The edge then goes from this vertex to the next element of the sequence. See Figure~4 for a picture. \end{proof} \begin{proposition}[Cayley] The number of labeled rooted trees $T$ with $|T| = n$ vertices is $n^{n-1}$. \end{proposition} \begin{proof} This famous result of Cayley follows from the Pr\"ufer correspondence. There are $n^{n-1}$ sequences of length $n-1$ in a set $U$ with $n$ elements. \end{proof} \begin{figure} \caption{From labeled rooted tree to Pr\"ufer sequence} \end{figure} \begin{figure} \caption{From Pr\"ufer sequence to labeled rooted tree} \end{figure} \section{Unlabeled rooted trees} The difficulty with unlabeled rooted trees is the general difficulty with unlabeled combinatorial structures (isomorphism classes of structures). This is the presence of symmetry. This is a well-studied topic; there are nice accounts in \cite{BergeronLabelleLeroux} and in \cite{Kerber}.
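The two constructions in the proof of the Pr\"ufer correspondence above are easy to make concrete. The following Python sketch is illustrative only: the function names and the representation of a labeled rooted tree as the dict of its predecessor function (on $U$ minus the root) are my own choices, not notation from the text.

```python
from itertools import product

def encode(T, U):
    """Rooted Prüfer code: repeatedly remove the smallest leaf and record
    its predecessor.  T is the predecessor dict on U minus the root, so
    the code has length |U| - 1 and its last entry is the root."""
    T = dict(T)
    seq = []
    while T:
        leaf = min(v for v in T if v not in T.values())  # smallest current leaf
        seq.append(T.pop(leaf))                          # record the other endpoint
    return seq

def decode(seq, U):
    """Inverse construction: at stage k, join the smallest vertex that is
    not yet used and absent from the unused part of the sequence to seq[k]."""
    T, used = {}, set()
    for k in range(len(seq)):
        rest = seq[k:]
        v = min(u for u in U if u not in used and u not in rest)
        T[v] = seq[k]
        used.add(v)
    return T
```

On $U = \{1,2,3\}$ this reproduces the example above: the tree $2[13]$ encodes to the sequence 22 and $2[1[3]]$ to 12, and decoding all $n^{n-1}$ sequences recovers Cayley's count.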
\noindent\textbf{Unlabeled rooted trees via orbits of labeled rooted trees} An \emph{unlabeled rooted tree} is an isomorphism invariant of labeled rooted trees. It is convenient to fix the label set $U$. For each labeled rooted tree $T$ on $U$ there is a corresponding unlabeled rooted tree $\tau$. Here are the details. Let $U$ be a non-empty set. Let $\mathcal{A}[U]$ be the set of labeled rooted trees on vertex set $U$. Each such tree is a function $T: U \setminus \{r\} \to U$. For each $T$ in $\mathcal{A}[U]$ and each bijection $b: U \to U$, the composite function $T \circ b$ is another labeled rooted tree in $\mathcal{A}[U]$. It is a function $T' = T \circ b: U \setminus \{r'\} \to U$, where $b(r') = r$. Two such labeled rooted trees $T, T'$ are said to be isomorphic. An \emph{unlabeled rooted tree} $\tau$ with $n$ vertices is an object that corresponds to an isomorphism class of labeled rooted trees, where the label set has $n$ elements. The set of unlabeled rooted trees with $n$ vertices is denoted $\tilde \mathcal{A}[n]$. The set of all unlabeled rooted trees is denoted $\tilde \mathcal{A}$. There is no unlabeled rooted tree with zero vertices, but sometimes it is convenient to introduce an extra empty set object associated with zero vertices. The augmented set is written $\tilde \mathcal{A}_\emptyset$. An unlabeled rooted tree $\tau$ has no underlying set and thus no vertices and no edges. Nevertheless, it may be pictured by any $T$ in the isomorphism class. There are various invariants of an unlabeled rooted tree $\tau$. Among them are the number of vertices $|\tau|$ and the number of vertices $v_k(\tau)$ of degree $k$. These are related by \begin{equation} \sum_k k v_k(\tau) = |\tau| - 1. \end{equation} There are relatively few unlabeled rooted trees. The simplest (but nevertheless important) ones have 1 root and $n-1$ leaves. In the following such a tree will be denoted $\tau = n-1$. For this tree $|\tau| = n$ and $v_0(\tau) = n-1$, $v_{n-1}(\tau) = 1$.
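The passage from labeled to unlabeled rooted trees can be made computational: assign each labeled rooted tree a canonical form by recursively sorting the forms of the immediate successors of each vertex; two labeled rooted trees are isomorphic exactly when their canonical forms coincide. The brute-force Python sketch below is my own illustration (names and representation are not from the text); it enumerates $\mathcal{A}[U]$ by filtering out the predecessor functions that have a non-empty invariant subset, i.e. a cycle avoiding the root.

```python
from itertools import product

def canonical(T, root):
    """Canonical form of the isomorphism class: the sorted tuple of the
    canonical forms of the immediate successors of the root."""
    return tuple(sorted(canonical(T, v) for v in T if T[v] == root))

def is_tree(T, root):
    """Every non-root vertex must reach the root; a cycle off the root
    is a non-empty invariant subset, so T is not a rooted tree."""
    for v in T:
        seen = set()
        while v != root:
            if v in seen:
                return False
            seen.add(v)
            v = T[v]
    return True

def all_rooted_trees(U):
    """Enumerate A[U] as (predecessor dict, root) pairs."""
    U = list(U)
    for root in U:
        rest = [v for v in U if v != root]
        for preds in product(U, repeat=len(rest)):
            T = dict(zip(rest, preds))
            if is_tree(T, root):
                yield T, root
```

Counting distinct canonical forms over $\mathcal{A}[U]$ realizes the surjection $\mathcal{A}[U] \to \tilde{\mathcal{A}}[n]$: for $n = 1, \ldots, 4$ one finds $n^{n-1} = 1, 2, 9, 64$ labeled rooted trees falling into $1, 1, 2, 4$ isomorphism classes.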
Another simple class of unlabeled rooted trees consists of the linear trees with $v_0(\tau) = 1$ and $v_1(\tau) = n-1$. Group theory illuminates the situation. Fix a label set $U$ with $n$ elements. Let $\mathcal{G}$ be the permutation group of this set. This consists of all bijections $b: U \to U$. This group has $|\mathcal{G}| = n!$ elements. The group $\mathcal{G}$ also acts on the set $\mathcal{A}[U]$ of labeled rooted trees with $n^{n-1}$ elements. For each bijection $b$, the map $T \mapsto T \circ b$ is a map from $\mathcal{A}[U]$ to itself. The set $\tilde \mathcal{A}[n]$ of unlabeled rooted trees corresponds to the set of orbits under this action: \begin{equation} \tilde \mathcal{A}[n] \cong \mathcal{A}[U]/\mathcal{G}. \end{equation} For each labeled rooted tree $T$, the corresponding unlabeled rooted tree $\tau$ is an abstract object corresponding to the orbit $\mathcal{G} T$ of $T$. The map from labeled rooted trees to unlabeled rooted trees may be summarized by the surjection \begin{equation} \mathcal{A}[U] \to \tilde\mathcal{A}[n]. \end{equation} According to the theory of group actions, the size of the orbit $\mathcal{G} T$ of $T$ is the order $|\mathcal{G}| = n!$ divided by the order $|\mathcal{G}_T|$, where $\mathcal{G}_T$ is the stabilizer subgroup of $T$. Thus \begin{equation} |\mathcal{G} T| = \frac{n!}{|\mathcal{G}_T|} . \end{equation} The order $|\mathcal{G}_T|$ is the same for all $T$ in an orbit and hence may be denoted $\sigma(\tau)$, where $\tau$ is the unlabeled rooted tree corresponding to $T$. The number $\sigma(\tau)$ is the \emph{symmetry factor} of $\tau$. The size of the orbit also depends only on $\tau$; it will be denoted $r(\tau)$. This gives the following basic result: \begin{proposition} For rooted trees with $|\tau| = n$ vertices the number of labeled rooted trees in $\mathcal{A}[U]$ per unlabeled rooted tree in $\tilde \mathcal{A}[n]$ is \begin{equation} r(\tau) = \frac{n!}{\sigma(\tau)} .
\end{equation} \end{proposition} \begin{example} There are labeled rooted trees on $n=4$ vertices that consist of a root and three leaves. The orbit $\mathcal{G} T$ of such a rooted tree is shown in Figure~5. For each rooted tree $T$ in this orbit, the corresponding stabilizer subgroup $\mathcal{G}_T$ has 6 elements, corresponding to the 6 permutations of the leaves. The symmetry factor is $\sigma(\tau) = 6$. The number of labeled rooted trees in the orbit is $r(\tau) = 24/6 = 4$. \end{example} \begin{figure} \caption{Orbit of a labeled rooted tree on four vertices: 4 trees, $\sigma(\tau) = 6$} \end{figure} There is an identity that expresses the fact that the sum over unlabeled rooted trees of the corresponding number of labeled rooted trees is the total number of labeled rooted trees. \begin{proposition} For unlabeled rooted trees $\tau$ with $|\tau| = n$ vertices the sum of the corresponding numbers $r(\tau)$ of labeled rooted trees in $\mathcal{A}[U]$ with $|U| = n$ is \begin{equation} \sum_{\tau \in \tilde\mathcal{A}[n]} \frac{n!}{\sigma(\tau)} = n^{n-1}. \end{equation} \end{proposition} \noindent\textbf{Multisets of unlabeled rooted trees} The analog of a forest of labeled rooted trees is a multiset of unlabeled rooted trees. A multiset of unlabeled rooted trees with $m$ vertices is defined as a function $N$ from unlabeled rooted trees in $\tilde \mathcal{A}$ to natural numbers $\geq 0$ such that \begin{equation} \sum_{\tau} |\tau| N(\tau) = m. \end{equation} Thus $N(\tau)$ represents the number of times the unlabeled rooted tree $\tau$ occurs in the multiset. Every forest of labeled rooted trees gives rise to a multiset of unlabeled rooted trees, where $N(\tau)$ is the number of blocks in the forest that correspond to unlabeled rooted tree $\tau$. Such a multiset has a \emph{symmetry factor} derived from the symmetry factor associated with unlabeled rooted trees. Let $F$ be a forest with corresponding multiset $N$. 
The symmetry factor $\sigma(N)$ is the order of the stabilizer subgroup $\mathcal{G}_F$ of the forest. It is \begin{equation} \sigma(N) = \prod_{\tau'} N(\tau')!\sigma(\tau')^{N(\tau')} . \end{equation} This expression may be derived using group theory. Let $\mathcal{B}$ be the subgroup of $\mathcal{G}_F$ generated by permutations that leave each block invariant. For each block there is a corresponding symmetry factor $\sigma(\tau')$, so $\mathcal{B}$ is a product group with order $\prod_{\tau'} \sigma(\tau')^{N(\tau')}$. Let $\mathcal{H}$ be the subgroup that permutes blocks with identical unlabeled rooted trees. If unlabeled rooted tree $\tau'$ occurs in $N(\tau')$ blocks, then there are $N(\tau')!$ permutations involving that rooted tree. The order of $\mathcal{H}$ is thus $\prod_{\tau'} N(\tau')! $. The group $\mathcal{B}$ is a normal subgroup of $\mathcal{G}_F$, so in particular for every $b'$ in $\mathcal{B}$ and for every $h$ in $\mathcal{H}$ the element $h b' h^{-1}$ is also in $\mathcal{B}$. Every element of $\mathcal{G}_F$ may be uniquely expressed as product $bh$ of an element of $\mathcal{B}$ with an element of $\mathcal{H}$. (This decomposition respects multiplication: $bh b'h' = (b h b' h^{-1}) (h h')$. In fact the group $\mathcal{G}_F$ is the semidirect product of the group $\mathcal{B}$ with the group $\mathcal{H}$ \cite{MacLaneBirkhoff}.) The conclusion is that the order of $\mathcal{G}_F$ is the product of the orders of $\mathcal{B}$ and $\mathcal{H}$. Here is how to construct a forest corresponding to a given multiset $N$ with $m$ vertices. Find a set $V$ with $m$ elements and a set partition $\Gamma$ of $V$. Require that there is a function $\chi: \Gamma \to \tilde{\mathcal{A}}$ such that for each $\tau$ the inverse image $\chi^{-1}(\tau)$ consists of $N(\tau)$ blocks of size $|\tau|$. Finally, for each block $B$ in $\Gamma$ find a labeled rooted tree $T$ that determines unlabeled rooted tree $\chi(B)$.
The number of pairs $\Gamma, \chi$ satisfying these conditions is the coefficient \begin{equation} C(N) = \frac{m!}{\prod_{\tau'} (|\tau'|!)^{N(\tau')} } \frac{1}{\prod_{\tau'} N(\tau')!}. \end{equation} The first factor is the multinomial coefficient that counts the number of ways of producing blocks of the appropriate sizes in a given order. The second factor has a denominator that describes how many ways there are of permuting the blocks to preserve $\chi$. The number of forests is \begin{equation} f(N) = C(N) \prod_{\tau'} r(\tau')^{N(\tau')} = \frac{m!}{\prod_{\tau'} N(\tau')!\sigma(\tau')^{N(\tau')} }= \frac{m!}{\sigma(N)}. \end{equation} The first equality comes from counting the number of ways of putting labeled rooted trees in the appropriate blocks. The second equality results from inserting $r(\tau') = |\tau'|!/\sigma(\tau')$. \noindent\textbf{Unlabeled rooted trees defined recursively} There is a recursive definition of unlabeled rooted trees. An unlabeled rooted tree $\tau$ with $n\geq 1$ vertices is equivalent to a multiset $N$ of unlabeled rooted trees with $n-1$ vertices. This counts the unlabeled rooted subtrees that result when the root is removed. The recursion terminates with unlabeled rooted trees with one vertex; the corresponding multiset is zero. The number $r(\tau)$ that counts labeled rooted trees satisfies the recursion \begin{equation} r(\tau) = |\tau| f(N) = |\tau| C(N) \prod_{\tau'} r(\tau')^{N(\tau')} . \end{equation} Here $\tau$ is an unlabeled rooted tree on $n$ vertices, and $N$ is the corresponding multiset on $n-1$ vertices. This is because a labeled rooted tree is determined by a root point and a forest over the remaining points. There is also a recursion relation for the symmetry factors. If $\tau$ has $n$ vertices, then its subtrees define a multiset $N$ with $n-1$ vertices, and \begin{equation} \sigma(\tau) = \sigma(N) = \prod_{\tau' \in \tilde \mathcal{A}} N(\tau')!\sigma(\tau')^{N(\tau')}.
\end{equation} This recursion relation has an explicit solution. Let $\tau$ be an unlabeled rooted tree. Consider some labeling, so that there is a set $U$ of vertices and a labeled rooted tree on $U$. For each vertex $j$, consider the subtree above $j$, and let $N_j$ be the multiset that counts the unlabeled rooted trees of the immediate successor subtrees of $j$. Then \begin{equation} \sigma(\tau) = \prod_j \prod_{\tau'} N_j(\tau')!. \end{equation} \noindent\textbf{Notation for unlabeled rooted trees} The multiset notation gives a convenient way of describing unlabeled rooted trees. A tree with a single vertex is denoted 0. Otherwise, the tree is denoted $N_1[\tau_1] N_2[\tau_2] \ldots N_k[\tau_k]$, where each $N_j \neq 0$ and the $\tau_j$ are descriptions of different unlabeled rooted trees. It is convenient to abbreviate $N[0]$ by $N$. Thus, for example, the labeled rooted tree $c[be[ad]]$ would determine the unlabeled rooted tree $1[0]1[2[0]]$. In the abbreviated form this would be $11[2]$. This says that the root has 1 immediate successor with a single vertex and 1 immediate successor that is a tree with 2 immediate successor vertices. \begin{example} For $n=1$ the only rooted tree is 0, consisting of a single root point. For $n=2$ and a given vertex set there are two labeled rooted trees, depending on which point is chosen for the root. There is only one unlabeled rooted tree, denoted 1. For $n=3$ and a given vertex set there are $3^2 = 9$ labeled rooted trees. These decompose into two orbits, as shown in Figure~1. These correspond to unlabeled rooted trees $\tau$ that may be denoted $2$ and $1[1]$. The rooted tree 2 has a root with two leaves. The symmetry factor is 2. The $1[1]$ linear rooted tree has a root and a successor rooted tree $1$. The symmetry factor is 1. This gives the correct number of labeled rooted trees as the sum $6/2 + 6/1 = 9 = 3^2$. The two unlabeled rooted trees are shown in Figure~6.
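These orbit counts can be confirmed by brute force. The following Python sketch (the encoding of a labeled rooted tree as a root plus a parent function is an illustrative choice) enumerates all labeled rooted trees on three vertices and separates the star shape from the linear shape by the degree of the root:

```python
from itertools import product

def rooted_trees(vertices):
    """Enumerate labeled rooted trees as (root, parent) pairs, where
    parent maps each non-root vertex to its immediate predecessor."""
    trees = []
    for root in vertices:
        others = [v for v in vertices if v != root]
        for choices in product(vertices, repeat=len(others)):
            parent = dict(zip(others, choices))
            # keep only acyclic assignments: every vertex must reach the root
            def reaches_root(v):
                seen = set()
                while v != root:
                    if v in seen:
                        return False
                    seen.add(v)
                    v = parent[v]
                return True
            if all(reaches_root(v) for v in others):
                trees.append((root, parent))
    return trees

trees = rooted_trees([1, 2, 3])
stars = [t for t in trees if all(p == t[0] for p in t[1].values())]
print(len(trees), len(stars), len(trees) - len(stars))  # prints: 9 3 6
```

The same enumeration reproduces Cayley's count $n^{n-1}$ for other small $n$.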
\end{example} \begin{figure} \caption{Unlabeled rooted trees on three vertices: $\sigma(\tau) = 2,1$} \end{figure} \begin{example} The case $n=4$ is more interesting. There are four unlabeled rooted trees, which may be denoted in multiset notation by 3, 11[1], 1[2], and 1[1[1]]. These rooted trees have symmetry factors $6,1,2,1$. The number of labeled rooted trees is the sum $24/6 + 24/1 + 24/2 + 24/1 = 64 = 4^3$. See Figure~7 for a picture of the unlabeled rooted trees. \end{example} \begin{figure} \caption{Unlabeled rooted trees on four vertices: $\sigma(\tau) = 6, 1, 2, 1$} \end{figure} \begin{remark} The formula for the number $a_n$ of labeled rooted trees on a vertex set with $n$ vertices is $a_n = n^{n-1}$. The number $\tilde a_n$ of unlabeled rooted trees with $n$ vertices is not so easy to compute. This may be seen by contrasting the generating functions. The exponential generating function for the number $a_n$ of labeled rooted trees with $n$ vertices is $a(t) = \sum_{n = 1}^\infty \frac{1}{n!} a_n t^n$. The recursive definition of labeled rooted tree from root point and forest gives \begin{equation} a(t) = t \exp( a(t)) . \end{equation} Labeled enumeration gives a simple result: for fixed $t$ the value $x= a(t)$ satisfies the fixed point equation $x = t \exp(x)$. Contrast this with the generating function for the number $\tilde a_n$ of unlabeled rooted trees with $n$ vertices. This is $\tilde a(t) = \sum_{n = 1}^\infty \tilde a_n t^n$. The recursive definition of unlabeled rooted tree from multiset gives the identity \begin{equation} \tilde a(t) = t \exp( \sum_{k=1}^\infty \frac{1}{k} \tilde a(t^k)) . \end{equation} Unlabeled enumeration produces a much more complicated equation. See \cite{FlajoletSedgewick} or \cite{BergeronLabelleLeroux} for the full story. \end{remark} \section{Fixed point equations and labeled rooted trees} Let $\beta(x)$ be a formal power series in $x$. Let $t$ and $g$ be parameters.
Consider the \emph{fixed point equation} \begin{equation} x = g + t \beta(x). \end{equation} \begin{proposition} The fixed point equation $x = g + t \beta(x)$ has the formal solution \begin{equation} x = f(t,g) = \sum_{n=0}^\infty \frac{t^n}{n!} f_n(g) , \end{equation} where $f_0(g) = g$, and where for $n \geq 1$ the coefficient $f_n(g)$ has the explicit representation \begin{equation} f_n(g) = \left( \frac{\partial}{\partial g} \right)^{n-1} \beta(g)^n. \end{equation} \end{proposition} \begin{proof} Fix $g$. The problem is to find the expansion of $x$ as a function of $t$. We know the inverse function \begin{equation} t = (x-g)/\beta(x) \end{equation} giving $t$ as a function of $x$. Notice that $t=0$ corresponds to $x=g$. The Lagrange inversion formula applies. The formula is based on the fact that the residue of a differential form expressed by a formal Laurent series is invariant under change of variable. Start with the identity \begin{equation} \frac{1}{n} d \left(\frac{x}{t^n} \right) = - \frac{x}{t^{n+1}} \, dt + \frac{1}{n} \frac{1}{t^n} \, dx. \end{equation} Since the left-hand side is a perfect differential, it has residue zero. This gives an identity for the residues \begin{equation} \mathrm{res} \left(\frac{x}{t^{n+1}} \, dt\right) = \frac{1}{n} \mathrm{res} \left(\frac{1}{t^n} \, dx \right). \end{equation} This is the Lagrange inversion formula. In the case at hand \begin{equation} \frac{1}{n!} f_n(g) = \mathrm{res} \left(\frac{x}{t^{n+1}} \, dt\right) = \frac{1}{n} \mathrm{res} \left(\frac{1}{t^n} \, dx \right) = \frac{1}{n} \mathrm{res} \left(\frac{\beta(x)^n}{(x-g)^n} \, dx \right), \end{equation} where the last residue is computed at the singularity $x=g$. The residue is $1/(n-1)!$ times the $(n-1)$st derivative of $\beta(g)^n$ with respect to $g$. This gives the result. \end{proof} A combinatorial solution gives more detailed information. This is given by an expansion indexed by rooted trees.
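For $\beta(x) = \exp(x)$ and $g = 0$ the proposition predicts Taylor coefficients $f_n(0)/n! = n^{n-1}/n!$, in agreement with the generating function $a(t)$ of labeled rooted trees. This can be checked by iterating the fixed point equation on truncated power series with exact rational arithmetic; the Python sketch below represents a series as a plain list of coefficients, an encoding chosen only for illustration:

```python
from fractions import Fraction
from math import factorial

N = 8  # keep coefficients of t^0 .. t^N

def mul(a, b):
    """Product of two truncated power series (coefficient lists)."""
    c = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

def exp_series(a):
    """exp of a truncated series with zero constant term: sum of a**k / k!."""
    assert a[0] == 0
    result = [Fraction(0)] * (N + 1)
    term = [Fraction(0)] * (N + 1)
    term[0] = Fraction(1)  # a**0
    for k in range(N + 1):
        inv = Fraction(1, factorial(k))
        for i in range(N + 1):
            result[i] += term[i] * inv
        term = mul(term, a)
    return result

# iterate x <- t * exp(x); each pass fixes one more coefficient
x = [Fraction(0)] * (N + 1)
for _ in range(N + 1):
    x = [Fraction(0)] + exp_series(x)[:N]  # multiplying by t shifts by one

for n in range(1, N + 1):
    assert x[n] == Fraction(n ** (n - 1), factorial(n))
```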
For each $n\geq 1$ fix a label set $U_n$ with $n$ elements, and consider rooted trees $T$ with the label set as vertex set. For $n = 0$ there is an empty set object associated with the label set $U_0 = \emptyset$. \begin{proposition} The fixed point equation $x = g + t \beta(x)$ has the solution given by a formal power series as above, where for $n \geq 1$ the coefficient $f_n(g)$ has the explicit representation \begin{equation} f_n(g) = \sum_{T \in \mathcal{A}[U_n]} f_T(g). \end{equation} For $n \geq 1$ the coefficient is \begin{equation} f_T(g) = \prod_{j \in U_n} \left( \frac{\partial}{\partial g} \right)^{|T^{-1}(j)|} \beta(g), \end{equation} where $|T^{-1}(j)|$ is the degree of vertex $j$ of rooted tree $T$. For $n=0$ the contribution from the empty rooted tree is $f_\emptyset(g) = g$. \end{proposition} \begin{proof} This proof uses the calculus formulas in the form explained in the appendix. Identify the $n$ factors in $\beta(g)^n$ with the $n$ points in the vertex set $U_n$. Each derivative corresponds to a point in $[n-1] = \{1, \ldots, n-1\}$. Use the product rule to expand \begin{equation} \left( \frac{\partial}{\partial g} \right)^{n-1} \prod_{\ell \in U_n} \beta(g) = \sum_{\phi: [n-1] \to U_n} \prod_{\ell \in U_n } \left( \frac{\partial}{\partial g} \right)^{|\phi^{-1}(\ell)|} \beta(g). \end{equation} The sum is over functions $\phi$ that pick out for each of the $n-1$ derivatives the factor to which it applies. Use the Pr\"ufer correspondence. Given a linear order on $U_n$, there is a corresponding bijection between functions $\phi$ from $[n-1]$ to $U_n$ and rooted trees with vertex set $U_n$. Furthermore, given a rooted tree $T$, the number of times the function assumes value $j$ in $U_n$ is the degree $ |T^{-1}(j)|$. \end{proof} The above may be expressed in an elegant way as \begin{equation} f(t,g) = \sum_{T \in \mathcal{A}_\emptyset} \frac{1}{|T|!} t^{|T|} f_T(g).
\end{equation} Here $|T|$ denotes the number of vertices of the rooted tree, and there is one label vertex set for each value of this number. \begin{remark} The problem of counting labeled rooted trees is the special case when $\beta(g) = \exp(g)$. In that case $f_n(g) = n^{n-1} \exp(ng)$ and each $f_T(g) = \exp(ng)$. In this special case the result is Cayley's formula $n^{n-1} = | \mathcal{A}[U_n] |$. \end{remark} The solution of the fixed point equation may be expressed more economically in terms of unlabeled rooted trees by \begin{equation} f(t,g) = \sum_{\tau \in\tilde \mathcal{A}_\emptyset} \frac{r(\tau)}{|\tau|!} t^{|\tau|} f_\tau(g) = \sum_{\tau \in\tilde \mathcal{A}_\emptyset} \frac{1}{\sigma(\tau)} t^{|\tau|} f_\tau(g). \end{equation} Here $r(\tau)$ is the corresponding number of labeled rooted trees, $|\tau|$ is the number of vertices, and $\sigma(\tau)$ is the symmetry factor. \begin{example} Use the notation $f_\tau(g)$ to denote the factor associated with unlabeled rooted tree $\tau$. Then \begin{eqnarray} \lefteqn{ f(t,g) = g + f_0(g) t + f_1(g) t^2 + \left( \frac{1}{2} f_2(g) + f_{1[1]}(g) \right) t^3 } \\ && + \left( \frac{1}{6} f_3(g) + f_{11[1]}(g) + \frac{1}{2} f_{1[2]}(g) + f_{1[1[1]]}(g) \right) t^4 + \cdots . \nonumber \end{eqnarray} Explicitly, this is \begin{eqnarray} \lefteqn{ f(t,g) = g + \beta(g ) t + \beta'(g)\beta(g) t^2 + \left( \frac{1}{2} \beta''(g)\beta(g)^2 +\beta'(g)^2\beta(g) \right) t^3 } \\ && + \left( \frac{1}{6} \beta'''(g)\beta(g)^3 + \beta''(g)\beta'(g) \beta(g)^2 + \frac{1}{2}\beta''(g) \beta'(g)\beta(g)^2 + \beta'(g)^3\beta(g) \right) t^4 + \cdots . \nonumber \end{eqnarray} To determine the contribution of a rooted tree, all that is needed is to know the number of vertices with given degree. \end{example} \section{Increasing rooted trees} \noindent\textbf{Increasing rooted trees as partially ordered sets} The convention used here is that the partial order of a labeled rooted tree increases as one moves away from the root.
Consider a non-empty label set $U$ together with a given linear order $\leq_{\mathrm{lin}}$. An \emph{increasing rooted tree} is a labeled rooted tree with the property that the map from $U$ with its rooted tree partial order $\leq_{\mathrm{tree}}$ to $U$ with the given linear order $\leq_{\mathrm{lin}}$ is order-preserving. In other words, if $i \leq_{\mathrm{tree}} j$ in the partial order of the labeled rooted tree, then $ i \leq_{\mathrm{lin}} j$ in the linear order of the labels. For an increasing rooted tree the root $r$ is the least element in both orders. The greatest element in the linear order is maximal in the partial order, so it is a leaf. \noindent\textbf{Increasing rooted trees as functions} Consider a non-empty label set $U$ together with a given linear order. An increasing rooted tree is a labeled rooted tree $T: U \setminus \{r\} \to U$ that is decreasing with respect to the linear order. In other words, it is required that $T(j) \leq_{\mathrm{lin}} j$ in the linear order for all $j \neq r$. The collection of all increasing rooted trees on $U$ is denoted $\mathcal{A}^\uparrow[U]$. The relation between the three kinds of rooted trees for a given linearly ordered label set $U$ with $n\geq 1$ elements is \begin{equation} \mathcal{A}^\uparrow[U] \to \mathcal{A}[U] \to \tilde \mathcal{A}[n]. \end{equation} The first map is an injection, and the second map is a surjection. The composite map is also a surjection. \noindent\textbf{Increasing rooted trees defined recursively from below} For each $k$ in $U$ the set of all $\ell$ with $k \leq_{\mathrm{tree}} \ell$ is also an increasing rooted tree $T_k$, with root $k$. If $r$ is the root, then it must be the least element of $U$. So for each immediate successor $j$ of $r$ the rooted tree $T_j$ is an increasing rooted tree. This gives a recursive characterization of an increasing rooted tree on $U$ as a forest of increasing rooted trees on $U \setminus \{r\}$.
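The function characterization makes increasing rooted trees easy to generate: the requirement $T(j) \leq_{\mathrm{lin}} j$ forces the root to be the least element, and any choice of $T(j) < j$ for each remaining $j$ is automatically acyclic. A Python sketch on $U = \{1, \ldots, n\}$ (the parent-dictionary encoding is an illustrative choice):

```python
from itertools import product
from math import factorial

def increasing_trees(n):
    """All increasing rooted trees on {1,...,n} as parent maps T with
    T(j) < j for j = 2,...,n; the root is always 1."""
    if n == 1:
        return [dict()]
    choices = [range(1, j) for j in range(2, n + 1)]
    return [dict(zip(range(2, n + 1), parents)) for parents in product(*choices)]

# there are (j-1) independent choices for T(j), hence (n-1)! trees in all
for n in range(1, 7):
    assert len(increasing_trees(n)) == factorial(n - 1)
```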
\noindent\textbf{Increasing rooted trees defined recursively from above} This gives another recursive description. Let $m$ be the greatest element of $U$ in the linear order. An increasing rooted tree on $U$ is an increasing rooted tree on $U\setminus \{m\}$ together with a point in $U \setminus \{m\}$. Thus there is an edge in the tree from $m$ to the chosen point. By taking $m = n, n-1, n-2, \ldots, 2$ this defines a map $\phi$ from $\{2,3, \ldots, n\}$ to the set of non-leaf vertices. (This is a variation on the Pr\"ufer correspondence.) \noindent\textbf{Increasing rooted trees as permutations} Each increasing rooted tree on $\{1, \ldots, n\}$ may be coded as a permutation of $\{2, \ldots, n\}$. Such a permutation may be represented as a list of the elements of $\{2, \ldots, n\}$ in some order. For $n=2$ the only entry in the list is 2. Suppose $n \geq 3$ and an increasing rooted tree on $\{1, \ldots, n-1\}$ is coded as a list taken from $\{2, \ldots, n-1\}$. Consider a new increasing rooted tree on $\{1, \ldots, n\}$. If the tree sends $n$ to $k$ with $1 \leq k \leq n-1$, then create a new list such that for $j<k$ the $j$th place entry is the same, the $k$th place entry is $n$, and for $j> k$ the entry in the $j$th place is the original $j-1$ place entry. This represents the new increasing rooted tree as a list taken from $\{2, \ldots, n\}$. \begin{example} The tree $1[2]$ is encoded by 2. The trees $1[23]$ and $1[2[3]]$ are encoded by $32$ and $23$. The trees $1[234]$ and $1[32[4]]$ and $ 1[23[4]]$ are encoded by $432$ and $342$ and $324$, while the trees $1[42[3]]$ and $1[2[34]] $ and $ 1[2[3[4]]]$ are encoded by $423 $ and $243$ and $ 234$. \end{example} The permutation representation immediately gives the following result. \begin{proposition} The number of increasing rooted trees on a label set with $n$ vertices is $(n-1)!$.
\end{proposition} The \emph{rooted tree factorial} $T!$ of a labeled rooted tree $T$ with root $r$ is defined inductively as the number of vertices of $T$ times the product over $i$ with $T(i) = r$ of $T_i!$. (An empty product gives 1.) This is an invariant under isomorphism, so for each unlabeled rooted tree $\tau$ there is a rooted tree factorial $\tau!$. The rooted tree factorial satisfies the recursive relation \begin{equation} \tau! = |\tau| \prod_{\tau'} \left( \tau'! \right)^{N(\tau')}, \end{equation} where $N$ counts the subtrees obtained by removing the root. There is another formula for the rooted tree factorial that is often convenient. For an unlabeled rooted tree $\tau$ consider a corresponding labeled rooted tree $T$. Let $T_j$ be the subtree over vertex $j$. Then \begin{equation} \tau! = \prod_j |T_j|, \end{equation} where the product is over all vertices of $T$. The quantity $|T_j|$ is the number of vertices of $T_j$. The number $i(\tau)$ of increasing rooted trees per unlabeled rooted tree satisfies a recursion relation \begin{equation} i(\tau) = C(N) \prod_{\tau'} i(\tau')^{N(\tau')} , \end{equation} where $N(\tau')$ counts immediate successor rooted trees $\tau'$. This is similar to the formula for $r(\tau)$, the number of labeled rooted trees per unlabeled rooted tree; the distinction is that there is only one choice of root point. It follows that the ratio $r(\tau)/i(\tau)$ satisfies \begin{equation} \frac{r(\tau)}{i(\tau)} = |\tau| \prod_{\tau'} \left( \frac{r(\tau')}{i(\tau')} \right)^{N(\tau')}, \end{equation} where $N$ counts the successor rooted trees obtained by removing the root. This leads to the following relation. \begin{proposition} Fix $n$ and an unlabeled rooted tree $\tau$ with $n$ vertices. The ratio of the number of labeled rooted trees to the number of increasing rooted trees is the rooted tree factorial \begin{equation} \frac{r(\tau)}{i(\tau)} = \tau!.
\end{equation} As a consequence, the number of increasing rooted trees for given unlabeled rooted tree $\tau$ is \begin{equation} i(\tau) = \frac{r(\tau)}{\tau!} = \frac{|\tau|!}{\tau! \sigma(\tau)}. \end{equation} \end{proposition} There is an identity that expresses the fact that the sum over unlabeled rooted trees with $n$ vertices of the corresponding numbers of increasing rooted trees is the total number of increasing rooted trees. \begin{proposition} The sum over unlabeled rooted trees with $n$ vertices of the corresponding number of increasing rooted trees gives \begin{equation} \sum_{\tau \in \tilde \mathcal{A}[n]} \frac{n!}{\sigma(\tau) \tau!} = (n-1)!. \end{equation} \end{proposition} \begin{example} When $n = 3$ there are only two increasing rooted trees. The symmetry factors are 2 and 1, while the rooted tree factorials are 3 and 6. This is illustrated in Figure~8. \end{example} \begin{figure} \caption{Increasing rooted trees on three vertices: $\tau! = 3,6$} \end{figure} \begin{example} The case $n=4$ is more interesting. There are $3!=6$ increasing rooted trees. These map to the four unlabeled rooted trees, which are $3, 11[1], 1[2], 1[1[1]]$. These four unlabeled rooted trees have symmetry factors $6,1,2,1$ and rooted tree factorials $4, 8, 12, 24$. Three of the increasing rooted trees correspond to the unlabeled rooted tree 11[1] with symmetry factor 1 and rooted tree factorial 8. The number of increasing labeled rooted trees is the sum $24/(6\cdot 4) + 24/(1 \cdot 8) + 24/(2 \cdot 12) + 24/(1\cdot 24) = 1 + 3 + 1 + 1 = 6 $. The picture is in Figure~9. \end{example} \begin{figure} \caption{Increasing rooted trees on four vertices: $\tau! = 4, 8, 8, 8, 12, 24$} \end{figure} \begin{example} For $n=5$ vertices there are 9 unlabeled rooted trees. These are indicated in Table~1. The symmetry group of all permutations has order $n!= 120$.
The table lists the symmetry factors $\sigma(\tau)$, the number of labeled rooted trees $r(\tau)$, the tree factorial $\tau!$, and the number of increasing rooted trees $i(\tau)$. The symmetry factor $\sigma(\tau)$ may be read off from the multiset description of $\tau$. The other quantities are related by $r(\tau) = n!/\sigma(\tau)$ and $i(\tau) = r(\tau)/\tau!$. If all is well, the total number of labeled rooted trees should be $n^{n-1} = 625$, while the total number of increasing rooted trees should be $(n-1)! = 24$. \end{example} \begin{remark} It is instructive to look at the exponential generating function for the number $a^\uparrow_n$ of increasing rooted trees with $n$ vertices. The recursive definition gives \begin{equation} \frac{d}{dt} a^\uparrow(t) = \exp( a^\uparrow(t)) . \end{equation} This has the easy solution $a^\uparrow(t) = \log( \frac{1}{1-t} )$. The details are in \cite{BergeronLabelleLeroux}. \end{remark} \begin{table} \begin{center} \begin{tabular}{lrrrr} $\tau$ & $\sigma(\tau)$ & $r(\tau)$ & $\tau!$ & $i(\tau)$ \\ \hline 4 & 24 & 5 & 5 & 1 \\ 21[1] & 2 & 60 & 10 & 6 \\ 2[1] & 2 & 60 & 20 & 3 \\ 11[2] &2 & 60 & 15 & 4 \\ 11[1[1]] &1 & 120 & 30 & 4 \\ 1[3] &6 & 20 & 20 & 1 \\ 1[11[1]] &1 & 120 & 40 &3 \\ 1[1[2]] &2& 60 & 60 & 1 \\ 1[1[1[1]]] &1 & 120 & 120 & 1 \\ \end{tabular} \caption{Unlabeled rooted trees on 5 vertices} \end{center} \end{table} \section{Ordinary differential equations and increasing rooted trees} Let $\beta(x)$ be a formal power series in $x$. Let $t$ and $g$ be parameters. Consider the \emph{ordinary differential equation} \begin{equation} \frac{dx}{dt} = \beta(x) \end{equation} with initial condition $x = g$ at $t = 0$.
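For orientation, a concrete instance: with $\beta(x) = \exp(x)$ the equation is separable and integrates to $x = g - \log(1 - e^g t)$. The closed form can be compared against a direct numerical integration; the Python sketch below uses the classical fourth-order Runge--Kutta method (the step count is an arbitrary choice):

```python
from math import exp, log

def rk4(beta, g, t, steps=1000):
    """Integrate dx/dt = beta(x), x(0) = g, up to time t (classical RK4)."""
    x, h = g, t / steps
    for _ in range(steps):
        k1 = beta(x)
        k2 = beta(x + h * k1 / 2)
        k3 = beta(x + h * k2 / 2)
        k4 = beta(x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x

g, t = 0.3, 0.2
numeric = rk4(exp, g, t)
closed = g - log(1 - exp(g) * t)  # separable solution for beta = exp
assert abs(numeric - closed) < 1e-10
```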
\begin{proposition} This ordinary differential equation has the formal solution \begin{equation} x = \bar f(t,g) = \sum_{n=0}^\infty \frac{t^n}{n!} \bar f_n(g) , \end{equation} where the coefficient $\bar f_n(g)$ has the explicit representation \begin{equation} \bar f_n(g) = \left( \beta(g) \frac{\partial}{\partial g} \right)^n g. \end{equation} \end{proposition} \begin{proof} Let the solution of the initial value problem be $\bar f(t,g)$. It is easy to see by induction that \begin{equation} \frac{\partial^n \bar f(t,g)}{\partial t^n} = h_n(\bar f(t,g)) \end{equation} for a suitable function $h_n(x)$. Taking one more derivative gives \begin{equation} h_{n+1}(\bar f(t,g)) = h'_n(\bar f(t,g)) \beta(\bar f(t,g)) . \end{equation} Setting $t=0$ gives $\bar f(0,g) = g$, so the result is \begin{equation} h_{n+1}(g) = h'_n(g) \beta(g) . \end{equation} The conclusion follows immediately. \end{proof} The combinatorial solution is given by an expansion indexed by increasing rooted trees. For each $n\geq 1$ fix a linearly ordered label set $U_n$ with $n$ elements. For instance, take $U_n = \{1, \ldots, n \}$ with the usual linear order. Furthermore, consider increasing rooted trees $\bar T$ with the label set as vertex set. For $n = 0$ introduce an empty set object. \begin{proposition} The ordinary differential equation has the solution given by a formal power series as above, where $\bar f_n(g)$ has the explicit representation \begin{equation} \bar f_n(g) = \sum_{\bar T \in \mathcal{A}^\uparrow [ U_n]} f_{\bar T}(g). \end{equation} For $n \geq 1$ the coefficient is \begin{equation} f_{\bar T}(g) = \prod_{j \in U_n} \left( \frac{\partial}{\partial g} \right)^{|\bar T^{-1}(j)| } \beta(g), \end{equation} where $|{\bar T}^{-1}(j)|$ is the degree of vertex $j$ of rooted tree ${\bar T}$. For $n=0$ the contribution from the empty rooted tree is $f_\emptyset(g) = g$.
\end{proposition} \begin{proof} Write \begin{equation} \bar f_n(g) = \left( \beta(g) \frac{\partial}{\partial g} \right)^{n-1} \beta(g). \end{equation} Index the partial derivatives from $n$ down to $2$. Index the $\beta(g)$ factors from $n$ down to 1. Then every partial derivative acts only on the $\beta(g)$ factors with strictly smaller index. So \begin{equation} \bar f_n(g) = \sum_\phi \prod_{j=1}^n \left( \frac{\partial}{\partial g} \right)^{|\phi^{-1}(j)|} \beta(g), \end{equation} where the sum is over functions $\phi$ from $[2,n]$ to $[1,n]$ with the property that $\phi(i) < i$ for all $i$. Every such function $\phi$ is an increasing rooted tree on $[1,n]$, where $\phi(i)$ is the immediate predecessor of $i$, and $\phi^{-1}(j)$ is the set of immediate successors of $j$. \end{proof} The above may be expressed in an elegant way as \begin{equation} \bar f(t,g) = \sum_{\bar T \in \mathcal{A}_\emptyset^\uparrow} \frac{1}{|\bar T|!} t^{|\bar T|} f_{\bar T}(g). \end{equation} Here $|\bar T|$ denotes the number of vertices of the increasing rooted tree, and there is one label vertex set for each value of this number. \begin{remark} The problem of counting increasing rooted trees is the special case when $\beta(g) = \exp(g)$. The solution of the differential equation is given explicitly by $x = g - \log(1-\exp(g)t)$. The coefficient in $n$th order is $\bar f_n(g) = (n-1)! \exp(ng)$, and each $f_T(g) = \exp(ng)$. In this special case the result is equivalent to the formula $(n-1)! = | \mathcal{A}^\uparrow [U_n] |$. \end{remark} The solution of the ordinary differential equation may be expressed in terms of unlabeled rooted trees by \begin{equation} \bar f(t,g) = \sum_{\tau \in \tilde \mathcal{A}_\emptyset} \frac{i(\tau)}{|\tau|!} t^{|\tau|} f_\tau(g) = \sum_{\tau \in \tilde \mathcal{A}_\emptyset} \frac{1}{\sigma(\tau) \tau!} t^{|\tau|} f_\tau(g).
\end{equation} Here $i(\tau)$ is the corresponding number of increasing rooted trees, $|\tau|$ denotes the number of vertices of the rooted tree, $\sigma(\tau)$ is the symmetry factor, and $\tau!$ is the rooted tree factorial. \begin{example} Use the notation $f_\tau(g)$ to denote the factor associated with unlabeled rooted tree $\tau$. Then \begin{eqnarray} \lefteqn{ \bar f(t,g) = g + f_0(g) t + \frac{1}{2} f_1(g) t^2 + \frac{1}{6} \left( f_2(g) + f_{1[1]}(g) \right) t^3 } \\ && + \frac{1}{24} \left( f_3(g) + 3 f_{11[1]}(g) + f_{1[2]}(g) + f_{1[1[1]]}(g) \right) t^4 + \cdots . \nonumber \end{eqnarray} Explicitly, this is \begin{eqnarray} \lefteqn{ \bar f(t,g) = g + \beta(g) t + \frac{1}{2}\beta'(g)\beta(g) t^2 + \frac{1}{6} \left( \beta''(g)\beta(g)^2 + \beta'(g)^2\beta(g) \right) t^3 } \\ && + \frac{1}{24} \left( \beta'''(g)\beta(g)^3 + 3\beta''(g)\beta'(g) \beta(g)^2 + \beta''(g) \beta'(g)\beta(g)^2 + \beta'(g)^3\beta(g) \right) t^4 + \cdots . \nonumber \end{eqnarray} The factor 3 comes from the 3 increasing rooted trees associated with $\tau = 11[1]$. These ordinary differential equation coefficients are the fixed point equation coefficients divided by tree factorials. \end{example} \section{The Butcher group (composition) for labeled rooted trees} Let $T$ be a labeled rooted tree with vertex set $U$ and root $r$ in $U$. A \emph{rooted subtree} $T_0$ is a rooted tree on some non-empty subset $U_0 \subseteq U$ with the same root $r$ in $U_0$ that is a restriction of $T$ to this subset. The condition that $T_0$ is a rooted subtree of $T$ is denoted $T_0 \to T$. The empty subset $U_0 = \emptyset$ corresponds to an empty set object $T_0$; that case is also abbreviated $T_0 \to T$. For each $T_0$ with $T_0 \to T$ there is a corresponding \emph{difference forest} $T \setminus T_0$ of rooted trees on $U \setminus U_0$. The trees in the forest are all subtrees $T_j$ with $j \in U \setminus U_0$ and $T(j) \in U_0$.
If $U_0 = \emptyset$ and $T_0$ is the empty set object, then the difference forest consists of the tree $T$ on $U$. \begin{proposition} For each non-empty label set $U$ there is a one-to-one correspondence between rooted tree pairs $T_0, T$ with $T_0 \to T$ and triples $T_0$, $F_1$, $\phi$, where $T_0$ is a rooted tree on $U_0 \subseteq U$, $F_1$ is a forest on $U_1 = U \setminus U_0$, and $\phi$ is a function from the set partition $\Gamma_1$ of the forest to $U_0$. \end{proposition} Let $\tilde \mathcal{A}_\emptyset$ be the set of unlabeled rooted trees together with the empty set object. Consider the space $\mathbf{R}^{\tilde \mathcal{A}_\emptyset}$ of all functions $c$ from $\tilde \mathcal{A}_\emptyset$ to the real numbers. These are coefficients $c(\tau)$ that depend on unlabeled rooted trees $\tau$. Since each labeled rooted tree $T$ determines a corresponding unlabeled rooted tree $\tau$, the coefficients $c(T)$ are also defined for labeled rooted trees. For a forest of rooted trees the coefficient $a^\times(F)$ is the product of the $a(T')$ for $T'$ in the forest. In particular, for the empty forest $a^\times(\emptyset) =1$. The operation of \emph{subtree convolution} $a * b$ is defined when $a(\emptyset) = 1$ by $c = a*b$, where \begin{equation} c(T) = \sum_{T_0: T_0 \rightarrow T} b(T_0) a^\times(T \setminus T_0). \end{equation} When $T_0$ is the empty set object, the corresponding term in the sum is $b(\emptyset) a(T)$. When $T_0 = T$, the corresponding term in the sum is $b(T)a^\times(\emptyset) = b(T)$. When $T$ is the empty set object, then $c(\emptyset) = b(\emptyset)$. In the multiplication $a*b$ the forest factor is on the left and the tree factor is on the right. This convention is common in this context \cite{HairerWanner2,ChartierHairerVilmart}. The Butcher group multiplication is the special case when both $a(\emptyset) = 1$ and $b(\emptyset) = 1$. This group is denoted $G_C$, where $C$ stands for composition.
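The subtree convolution can be implemented directly for small trees. In the Python sketch below a labeled rooted tree is encoded by its root and a parent dictionary (an illustrative choice), a rooted subtree is a vertex set containing the root and closed under the parent map, and, as a toy simplification, the coefficients $a$ and $b$ are taken to be functions of a tree's vertex set rather than of its shape. With $a = b = 1$ the convolution simply counts rooted subtrees, including the empty one:

```python
from itertools import combinations

def rooted_subtrees(root, parent):
    """All rooted subtrees T0 -> T as vertex sets: the empty set plus
    every set containing the root and closed under the parent map."""
    others = list(parent)
    subs = [frozenset()]
    for k in range(len(others) + 1):
        for extra in combinations(others, k):
            u0 = frozenset((root,) + extra)
            if all(parent[v] in u0 for v in extra):
                subs.append(u0)
    return subs

def subtree_convolution(a, b, root, parent):
    """c(T) = sum over T0 -> T of b(T0) times the product of a over
    the trees of the difference forest."""
    vertices = frozenset([root]) | frozenset(parent)
    total = 0
    for u0 in rooted_subtrees(root, parent):
        def top(v):
            # climb toward the root until the parent lies in u0 (or v is the root)
            while v != root and parent[v] not in u0:
                v = parent[v]
            return v
        # group the remaining vertices into the trees of the difference forest
        forest = {}
        for v in vertices - u0:
            forest.setdefault(top(v), set()).add(v)
        term = b(u0)
        for component in forest.values():
            term *= a(frozenset(component))
        total += term
    return total

def one(s):
    return 1

# the path on three vertices has 4 rooted subtrees; the star has 5
print(subtree_convolution(one, one, 1, {2: 1, 3: 2}),
      subtree_convolution(one, one, 1, {2: 1, 3: 1}))  # prints: 4 5
```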
The identity $\delta_\emptyset$ in the group has coefficient 1 for the empty set object and 0 for the rooted trees. An inductive argument shows that every element has an inverse element. The group $G_C$ is the character group of the rooted tree Hopf algebra of Connes and Kreimer. \begin{example} It is easy to compute the Butcher group multiplication for reasonably small labeled rooted trees. Here are the results for up to 3 vertices. For notational simplicity label the tree as an increasing tree. Take the root with label 1. For one-vertex rooted trees $c(1) = b(1) + a(1)$. For two-vertex rooted trees the result is $c(1[2]) = b(1[2]) + b(1)a(2) + a(1[2])$. The first interesting case is $n=3$. Let $1[23]$ be the rooted tree with root at 1 and with leaves at 2 and 3, and let $1[2[3]]$ be the rooted tree with root at 1 and with successor rooted tree $2[3]$. Then \begin{equation} c(1[23]) = b(1[23]) + b(1[2])a(3) + b(1[3])a(2) + b(1)a(2)a(3) + a(1[23]). \end{equation} Similarly \begin{equation} c(1[2[3]]) = b(1[2[3]]) + b(1[2])a(3) + b(1)a(2[3]) + a(1[2[3]]). \end{equation} The expressions for the two cases are quite different. \end{example} \begin{example} Here is one calculation for $n=4$. The rooted tree is $T = 1[32[4]]$ with root at 1 and successor rooted trees 3 and $2[4]$. This happens to be an increasing rooted tree, but that is just for notational convenience. The result is \begin{eqnarray} \lefteqn{c(T) = b(T) + b(1[2[4]])a(3) + b(1[23])a(4)} \\ && + b(1[3])a(2[4]) + b(1[2])a(3)a(4) + b(1)a(2[4])a(3) + a(T). \nonumber \end{eqnarray} This is illustrated in Figure~10. The numbers of vertices in the subtrees are 4, 3, 3, 2, 2, 1, 0. The difference forests consist of 0, 1, 1, 1, 2, 2, 1 rooted trees. \end{example} \begin{figure} \caption{Subtrees and difference forests} \end{figure} Coefficients depending on rooted trees also define certain functions (defined as formal power series).
Each such function is a weighted tree sum in the form of an exponential generating function. For coefficients $c$ the function is \begin{equation} f^c(t,g) = \sum_{T \in \mathcal{A}_\emptyset} \frac{1}{|T|!} t^{|T|} c(T) f_T(g). \end{equation} Again for rooted tree $T$ the coefficient is \begin{equation} f_T(g) = \prod_{\ell \in U}\left( \frac{\partial}{\partial g} \right)^{ |T^{-1}(\ell)|} \beta(g) . \end{equation} The zero order term is $c(\emptyset) g$. The remainder of the series is a power series in powers of $t$ and powers of derivatives of $\beta(g)$. Such a series may be further expanded in powers of $g$, but that is not done here. \begin{theorem}[Composition] Suppose $a(\emptyset) = 1$, and let $c = a*b$ be the subtree convolution. The corresponding weighted sums are related by \begin{equation} f^c(t,g) = f^b(t,f^a(t,g)). \end{equation} \end{theorem} \begin{proof} The proof is based on the calculus formulas in the appendix. The task is to show that $f^c(t,g)$ and $f^b(t,f^a(t,g))$ have the same terms of each order. The $n$th term is the $n$th partial derivative with respect to $t$, evaluated at zero. (The case $n=0$ is trivial, so assume $n \geq 1$.) Take a set $U$ with $|U| = n$ elements. For $f^c(t,g)$ \begin{equation} D_t^{|U|} f^c(t,g)|_{t=0} = \sum_{T \in \mathcal{A}[U]} c(T) f_T(g). \end{equation} \noindent\textbf{Product rule} The composition \begin{equation} f^b(t,f^a(t,g)) = \sum_{T\in \mathcal{A}_\emptyset} \frac{1}{|T|!} t^{|T|} b(T) f_{T}( f^a(t,g)) \end{equation} is a combination of products of two factors $t^{|T|}$ and $f_{T}(f^a(t,g))$. By the product rule, the $n$th derivative is a sum over disjoint unions $U = U_0 + U_1$ of the form \begin{equation} D_t^{|U|} f^b(t,f^a(t,g))|_{t=0} = \sum_{U = U_0 + U_1} \sum_{T_0 \in \mathcal{A}[U_0]} b(T_0) D_t^{|U_1|} f_{T_0}( f^a(t,g))|_{t=0} . \end{equation} \noindent\textbf{Chain rule} The next task is to evaluate $D^{|U_1|} f_{T_0}( f^a(t,g))$ by the chain rule.
This is a sum over set partitions $\Gamma_1$ of $U_1$ \begin{equation} D_t^{|U_1|} f_{T_0}( f^a(t,g))|_{t=0} = \sum_{\Gamma_1 \in \mathrm{Part}[U_1]} D_g^{|\Gamma_1|} f_{T_0}(f^a(t,g))|_{t = 0} \prod_{B \in \Gamma_1} D_t^{|B|} f^a(t,g)|_{t=0}. \end{equation} Since $f^a(0,g) = g$, this is \begin{equation} D_t^{|U_1|} f_{T_0}( f^a(t,g))|_{t=0} = \sum_{\Gamma_1 \in \mathrm{Part}[U_1]} D_g^{|\Gamma_1|} f_{T_0}(g) \prod_{B\in \Gamma_1} \left( \sum_{H \in \mathcal{A}[ B] } a(H) f_H(g) \right). \end{equation} \noindent\textbf{Distributive law} The distributive law is used to expand the product \begin{equation} \prod_{B\in \Gamma_1} \left( \sum_{H \in \mathcal{A}[B]} a(H) f_H(g) \right) = \sum_F \prod_{B \in \Gamma_1} a(F(B)) f_{F(B)}(g), \end{equation} where $F$ is a function defined on $\Gamma_1$ such that the value of $F$ on block $B$ in $\Gamma_1$ is a rooted tree on $B$. \noindent\textbf{Product rule} When $T_0$ is non-empty the coefficient $f_{T_0}(g)$ is a product over vertices $\ell$ in $U_0$ of factors $\beta^{(|T_0^{-1}(\ell)|)}(g)$. By the product rule the derivative of order $|\Gamma_1|$ is a sum over functions $\phi: \Gamma_1 \to U_0$ in the form \begin{equation} D_g^{|\Gamma_1|} f_{T_0}(g) = \sum_{\phi: \Gamma_1 \to U_0} f_{T_0,\phi}(g). \end{equation} Here \begin{equation} f_{T_0,\phi}(g) = \prod_{\ell \in U_0} \beta^{(|T_0^{-1}(\ell)| + |\phi^{-1}(\ell)|)} (g). \end{equation} For the empty rooted tree $f_\emptyset(g) = g$, and so the only derivative that is non-zero is the first derivative, corresponding to the set partition of $U_1 = U$ into a single block. It is convenient to define $f_{\emptyset,\phi}(g) = 1$, where $\phi$ is some unique object whose nature is not important.
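The chain-rule step above is the set-partition (Faà di Bruno) form of the higher derivative of a composite, stated in the appendix. As a small sanity check, the partition sum can be compared with a directly computed derivative; the functions $F = \exp$ and $G = \sin$ below are chosen purely for illustration:

```python
import math

def partitions(elems):
    """All set partitions of the list elems; each partition is a list of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in partitions(rest):
        for i in range(len(p)):               # put `first` into an existing block
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield p + [[first]]                   # or into a new singleton block

x = 0.7

def dG(k):
    """k-th derivative of sin evaluated at x (cycle sin, cos, -sin, -cos)."""
    return (math.sin(x), math.cos(x), -math.sin(x), -math.cos(x))[k % 4]

# Partition form of D^3 (F o G) with F = exp, so every F^{(k)} is exp.
faa = sum(math.exp(math.sin(x)) * math.prod(dG(len(block)) for block in p)
          for p in partitions([1, 2, 3]))

# Direct third derivative of exp(sin x).
direct = math.exp(math.sin(x)) * (math.cos(x)**3
                                  - 3 * math.sin(x) * math.cos(x) - math.cos(x))
print(abs(faa - direct) < 1e-12)  # True
```

The five set partitions of a three-element set reproduce the five terms of the third derivative, matching the appendix formula.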
\noindent\textbf{Construction of rooted tree and subtree} The end result is \begin{eqnarray} \lefteqn{D_t^{|U|} f^b(t,f^a(t,g))|_{t=0} = } \\ && \sum_{U = U_0 + U_1} \sum_{T_0 \in \mathcal{A}[ U_0]} \sum_{\Gamma_1 \in \mathrm{Part}[U_1]} \sum_{\phi: \Gamma_1 \to U_0} \sum_F b(T_0) f_{T_0,\phi}(g) \prod_{B \in \Gamma_1} a(F(B)) f_{F(B)}(g) . \nonumber \end{eqnarray} Fix $U = U_0 + U_1$ and rooted tree $T_0$ on $U_0$. The remaining data $\Gamma_1$, $\phi$, $F$ determine a rooted tree $T$ on $U$ that extends $T_0$. The value $\phi(B)$ may be thought of as the vertex to which the root of the tree $F(B)$ maps. So \begin{equation} D_t^{|U|} f^b(t,f^a(t,g))|_{t=0} = \sum_{U = U_0 + U_1} \sum_{T_0 \in \mathcal{A}[ U_0] } \sum_{T: T_0 \rightarrow T} b(T_0)a^\times(T \setminus T_0) f_T(g) . \end{equation} The sum may be done in the other order, first the rooted tree $T$ and then the subtree $T_0$. This gives \begin{equation} D_t^{|U|} f^b(t,f^a(t,g)) |_{t=0} = \sum_{T \in \mathcal{A}[U]} \sum_{T_0 : T_0 \rightarrow T} b(T_0)a^\times(T \setminus T_0) f_T(g) . \end{equation} In other words, the derivative is $\sum_{T \in \mathcal{A}[U]} (a * b)(T) f_T(g)$. \end{proof} One special case of the multiplication law is when the sequence $b$ is zero except for $b(\bullet) = 1$. This corresponds to the function $f^b(t,g) = t \beta(g)$. In this case $c(T) = a^\times( T \setminus \bullet)$, the product of $a(T')$ for all rooted trees $T'$ in the successor forest. \begin{example} Consider $f^b(t,g) = t \beta(g)$ and $f^a(t,g) = g + a(\bullet) t \beta(g)$. The composition is \begin{equation} f^c(t,g) = t \beta( g + a(\bullet) t \beta(g)). \end{equation} This type of composition occurs in numerical methods for the solution of ordinary differential equations. In this case $c(T)$ is zero unless $T$ is a rooted tree on a set with $n\geq 1$ points, one root and $n-1$ leaves. There are $n$ such rooted trees. The corresponding successor forests each consist of $n-1$ one point rooted trees.
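For this composition the tree expansion can be checked numerically: expanding $\beta$ about $g$ gives $t\beta(g + a(\bullet)\,t\beta(g)) = \sum_{n\geq 1} \frac{n\,t^n}{n!}\, a(\bullet)^{n-1}\beta^{(n-1)}(g)\,\beta(g)^{n-1}$. The sketch below compares the two sides with $\beta = \sin$; the numeric values of $a(\bullet)$, $t$, and $g$ are arbitrary small test choices:

```python
import math

def beta_deriv(k, x):
    """k-th derivative of sin at x (cycle sin, cos, -sin, -cos)."""
    return (math.sin(x), math.cos(x), -math.sin(x), -math.cos(x))[k % 4]

a, t, g = 0.3, 0.1, 0.7        # arbitrary small test values

# Left side: the composed function t * beta(g + a*t*beta(g)).
direct = t * math.sin(g + a * t * math.sin(g))

# Right side: the star-tree expansion, truncated (terms decay factorially).
series = sum(n * t**n / math.factorial(n) * a**(n - 1)
             * beta_deriv(n - 1, g) * math.sin(g)**(n - 1)
             for n in range(1, 25))
print(abs(direct - series) < 1e-12)  # True
```

Only the star-shaped rooted trees contribute, which is exactly the statement that $c(T)$ vanishes unless $T$ has one root and $n-1$ leaves.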
This gives $c(T) = a(\bullet)^{n-1}$. The function $f^c(t,g)$ has the rooted tree expansion \begin{equation} f^c(t,g) = \sum_{n=1}^\infty n \frac{t^n}{n!} a(\bullet)^{n-1} \beta^{(n-1)}(g) \beta(g)^{n-1}. \end{equation} This is the Taylor expansion of the composite function. This example underpins the Runge-Kutta methods for the numerical solution of ordinary differential equations. The first order Euler method is to use $g + \beta(g) t$ to approximate the solution. Various second order methods depend on a parameter $a$; they are of the form $ g + (1 - \frac{1}{2a} ) \beta(g) t + \frac{1}{2a} \beta(g + a \beta(g) t ) t$. In particular, $a = 1$ is the analog of the trapezoidal rule, and $a = \frac{1}{2} $ is the analog of the midpoint rule. Since $\beta(g + a\beta(g)t)$ agrees with $\beta(g) + a \beta'(g) \beta(g) t $ to second order, the second order Runge-Kutta method agrees with the Taylor method $g + \beta(g) t + \frac{1}{2} \beta'(g) \beta(g) t^2 $ to second order. The Runge-Kutta method has the advantage that it does not require computing the derivative $\beta'(g)$. \end{example} The Butcher group is related to composition of power series. Take the case when $\beta(g) = \exp(g)$. In that case the contribution of a labeled rooted tree on $n \geq 1$ vertices is $\exp(ng)$. Suppose $c = a * b$ and $f^c(t, g) = f^b( t, f^a(t,g)) $. Each of the individual functions is a power series in powers of $\exp(g)$. Define $h^a(w) = \exp( f^a(1,\log(w)) )$ and similarly for the others. The resulting functions are power series in $w$, and they are related by $h^c(w) = h^b(h^a(w))$. Subtree convolution is mapped to composition of power series. \section{The Butcher group for increasing rooted trees} The composition law for the Butcher group may also be written in terms of increasing rooted trees. The function corresponding to sequence $a$ is \begin{equation} f^a(t,g) = \sum_{\bar T \in \mathcal{A}_\emptyset^\uparrow} \frac{t^{|{\bar T}|}}{|{\bar T}|!} a({\bar T}) \bar T!
f_{\bar T}(g). \end{equation} The rooted subtree convolution is defined in the same way, since if $T$ is an increasing rooted tree and $T_0 \to T$, then the subtree $T_0$ is also increasing. If $T_0 \to T$, the \emph{rooted tree binomial coefficient} is \begin{equation} {T \choose T_0} = \frac{T!}{T_0! \prod_{T' \in T \setminus T_0} T'!} . \end{equation} For a linear rooted tree this is the usual binomial coefficient. The change of variable $\bar c(T) = c(T)\, T!$ gives another representation of the Butcher multiplication as \begin{equation} \bar c(T) = \sum_{T_0: T_0 \rightarrow T} { T \choose T_0} \bar b(T_0) \bar a^\times(T \setminus T_0). \end{equation} The solution $f(t,g)$ of the differential equation $dx/dt = \beta(x)$ with initial condition $g$ is the case corresponding to $a( T) = 1/ T!$, or $\bar a( T) = 1$. This fact leads to an identity for a binomial coefficient associated with rooted trees. \begin{proposition} For a labeled rooted tree $T$ with $n$ vertices \begin{equation} \sum_{T_0 \to T} {T \choose T_0} = 2^n. \end{equation} \end{proposition} \begin{proof} The solution of the differential equation has the group property $f(2t,g) = f(t,f(t,g))$. Take $\bar a(T) = \bar b(T) = 1$, so that $f^a = f^b = f$ is the solution of the differential equation, and let $c = a * b$. Then $f^c(t,g) = f^b(t, f^a(t,g)) = f^a(2t,g)$. The right side is the series with coefficients $\bar a(T) 2^{|T|} = 2^{|T|}$, while by the composition theorem the left side has coefficients $\bar c(T) = \sum_{T_0 \to T} {T \choose T_0}$. This gives the rooted tree binomial coefficient identity for increasing rooted trees. Since rooted tree factorials do not depend on the order on the label set, the identity holds for all labeled rooted trees. \end{proof} \begin{example} For the labeled rooted tree with four vertices shown in Figure~10 the binomial identity says \[ 16 = 1+ \frac{4}{3} + \frac{8}{3} + 2 + 4 + 4 + 1 . \] The rooted tree binomial coefficients need not be whole numbers, but the sum is always a power of 2.
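The fractional terms can be verified by enumerating the rooted subtrees of the four-vertex tree of Figure~10 (root 1 with successors 3 and 2, and 4 below 2) and computing each binomial coefficient from the rooted tree factorials; a brute-force sketch:

```python
from fractions import Fraction
from itertools import combinations

# The four-vertex tree of Figure 10 as a parent map.
parent = {2: 1, 3: 1, 4: 2}
vertices = frozenset({1, 2, 3, 4})

def subtree_size(v, verts):
    """Number of vertices of verts weakly below v (v plus descendants in verts)."""
    return 1 + sum(subtree_size(w, verts) for w in verts if parent.get(w) == v)

def tree_factorial(verts):
    """Rooted tree factorial: product of subtree sizes; the empty tree gives 1."""
    result = 1
    for v in verts:
        result *= subtree_size(v, verts)
    return result

def below(w, r, verts):
    """True if w lies in the component of verts rooted at r."""
    while w in verts:
        if w == r:
            return True
        w = parent.get(w)
    return False

def forest_factorial(verts):
    """Product of tree factorials over the components of the forest on verts."""
    result = 1
    for r in verts:
        if parent.get(r) not in verts:                 # r is a component root
            comp = {w for w in verts if below(w, r, verts)}
            result *= tree_factorial(comp)
    return result

def rooted_subtrees():
    """Vertex sets containing root 1 and closed under parents, plus the empty set."""
    subs = [frozenset()]
    for k in range(4):
        for combo in combinations([2, 3, 4], k):
            s = frozenset(combo) | {1}
            if all(parent[v] in s for v in combo):
                subs.append(s)
    return subs

Tfact = tree_factorial(vertices)
terms = [Fraction(Tfact, tree_factorial(s) * forest_factorial(vertices - s))
         for s in rooted_subtrees()]
print(sum(terms))  # 16 = 2**4
```

The seven terms agree (after reordering) with the seven summands displayed above.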
\end{example} \section{The Butcher group for unlabeled rooted trees} The Butcher group may also be presented using unlabeled rooted trees, but there is a complication. If $\tau, \tau_0$ are unlabeled rooted trees, choose $T$ to be a labeled rooted tree that determines $\tau$. Consider the set of labeled rooted trees $T_0$ with $T_0 \to T$ and with $T_0$ determining $\tau_0$. This set depends on the chosen $T$, but the number of elements in the set is independent of $T$. Denote this number by $[\tau,\tau_0]$. The multiplication operation for coefficients may then be written $c = a * b$, where \begin{equation} c(\tau) = \sum_{\tau_0} [\tau, \tau_0] b(\tau_0) a^\times(\tau \setminus \tau_0). \end{equation} The extra complication is the presence of the multiplicity coefficients $[\tau, \tau_0]$. For an arbitrary coefficient function $c$ there is a corresponding function \begin{equation} f^c(t,g) = \sum_{\tau \in \tilde{\mathcal{A}}_\emptyset} \frac{1}{\sigma(\tau)} c(\tau) t^{|\tau|} f_\tau(g). \end{equation} The zero order term is $c(\emptyset) g$, and for the other terms \begin{equation} f_\tau(g) = \prod_k (D^k_g \beta(g))^{v_k(\tau)}. \end{equation} This is precisely the same sum as before. The following is a restatement of the previous theorem. \begin{corollary} Suppose $a(\emptyset) = 1$. Define $c = a * b$ with the multiplicity factor. The corresponding functions satisfy \begin{equation} f^c(t,g) = f^b(t,f^a(t,g)). \end{equation} \end{corollary} \begin{example} For $k \geq 0$ let $k$ denote the unlabeled rooted tree with a root and $k$ leaves, so that $3$ has four vertices and $0$ is the one-vertex rooted tree. Then the multiplicity $[3, 2] = 3$, given by the three choices of two of the leaves. Similarly, $[3,1] = 3$. On the other hand, $[3,0] = 1$, since the root alone is the only one-vertex subtree. The conclusion is that \begin{equation} c(3) = b(3) + 3 b(2) a(0) + 3 b(1) a(0)^2 + b(0) a(0)^3 + a(3). \end{equation} In the world of unlabeled rooted trees, multiplicity factors are inescapable.
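The multiplicities in this example can be recovered by counting labeled rooted subtrees. A quick enumeration sketch for the star with root 1 and leaves 2, 3, 4 (counting subtrees by their number of leaves, which determines the isomorphism class here):

```python
from itertools import combinations

# Star on four vertices: root 1 with leaves 2, 3, 4.
# Every rooted subtree consists of the root plus some subset of the leaves.
counts = {}
for k in range(4):
    for leaves in combinations([2, 3, 4], k):
        counts[k] = counts.get(k, 0) + 1      # one subtree per choice of leaves
print(counts)  # {0: 1, 1: 3, 2: 3, 3: 1}: [3,0]=1, [3,1]=3, [3,2]=3, [3,3]=1
```

These are, of course, just the binomial coefficients for choosing leaves, but the same enumeration idea applies to any labeled tree.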
\end{example} \section{Substitution for labeled rooted trees} The Butcher group is about subtree convolution and composition of functions. There is another algebraic structure based on quotient rooted tree convolution and substitution. The reference \cite{ChartierHairerVilmart} has an account of the subject and its history, along with some applications. See also \cite{CalaqueEbrahimiFardManchon} for a Hopf algebra approach and for more background. The following is a brief account in the labeled rooted tree framework. Given a rooted tree $T$ on $U$ with root $r$ and a set $R$ with $r \in R$, there is a corresponding forest $F$ obtained by restricting the parent function $T$ to $U \setminus R$, so that each vertex of $R$ becomes a root. This is called a \emph{subforest} of the rooted tree. The blocks of the set partition $\Gamma$ defined by this forest are in one-to-one correspondence with the set $R$ of roots of the rooted trees in the forest. When $F$ is a subforest of $T$ we write $F \sqsubseteq T$. Given a rooted tree $T$ and a subforest $F$, there is also a \emph{quotient rooted tree} $T/F$. This is a labeled rooted tree with label set $\Gamma$. Let $U_0$ be the block in $\Gamma$ that contains the root $r$. Then $T/F$ is defined on $\Gamma_1 = \Gamma\setminus \{U_0\}$ as follows. The value of $T/F$ on block $B$ is obtained by finding the root $j$ of rooted tree $F[B]$ and taking the value to be the block $B'$ containing $T(j)$. There is another representation of the quotient rooted tree $T/F$ that may be easier to picture. This is as a labeled rooted tree with label set $R$, where $R$ is the set of roots in the forest $F$. The value of $T/F$ on vertex $j$ in $R\setminus \{r\}$ is obtained by finding $T(j)$ and the block $B'$ that contains it, and taking the value to be the root of rooted tree $F[B']$. \begin{remark} The subtree $T_0$ and difference forest $T\setminus T_0$ construction used for the Butcher group is a special case.
The subforest $F$ is $T_0$ together with $F_1 = T \setminus T_0$, and in this case the quotient rooted tree consists only of a root and immediate successors. \end{remark} \begin{example} Figure~11 gives an example of the subforests associated with a given rooted tree. The numbers of rooted trees in the forests (equivalently, the numbers of roots of these rooted trees) are 1, 2, 2, 2, 3, 3, 3, 4. Figure~12 gives an example of quotient rooted trees of a given rooted tree on 4 vertices. These correspond to the subforests in Figure~11. The roots of the rooted trees in the forest give the vertices of the quotient rooted tree. The numbers of vertices in the quotient rooted trees are 1, 2, 2, 2, 3, 3, 3, 4. \end{example} \begin{figure} \caption{Subforests} \end{figure} \begin{figure} \caption{Quotient rooted trees} \end{figure} The pair consisting of the subforest $F$ and the quotient rooted tree $T/F$ does not completely determine the original rooted tree $T$. The quotient rooted tree assigns to each block in $\Gamma_1$ another block in $\Gamma$. It is also necessary to specify a point in that block. \begin{proposition} For each label set $U$ there is a one-to-one correspondence between rooted tree, subforest pairs $T, F$ and triples $F$, $\hat T$, $\phi$, where $F$ is a forest of rooted trees with set partition $\Gamma$, $\hat T$ is a rooted tree on vertex set $\Gamma$ with root $U_0$, and $\phi$ is a function defined on $\Gamma_1 = \Gamma \setminus \{U_0\}$ such that for every $B$ in $\Gamma_1$ the value $\phi(B)$ is in $\hat T(B)$. \end{proposition} In the proposition the function $\phi$ sends a block to a vertex in another target block. There is a parametrization of such functions by target blocks. If $B'$ is a block in $\Gamma$, then define $\Phi(B')$ to be the restriction of $\phi$ to ${\hat T}^{-1}(B')$. Thus if $\hat T(B) = B'$, then $\Phi(B')$ applied to $B$ is a vertex in $B'$.
Conversely, suppose that there is a function $\Phi$ that maps each $B'$ in $\Gamma$ to a function $\Phi(B')$ from $\hat T^{-1}(B')$ to $B'$. Then there is a corresponding $\phi$ given on $B$ with $\hat T(B) = B'$ by $\phi( B) = \Phi(B')(B)$. Let $\tilde{\mathcal{A}}_\emptyset$ be the set of unlabeled rooted trees augmented with the empty object. There is a multiplication defined on certain elements of $\mathbf{R}^{\tilde{\mathcal{A}}_\emptyset}$. This is the \emph{quotient rooted tree convolution} $c = a \star b$, defined whenever $a(\emptyset) = 0$, such that \begin{equation} c(T) = \sum_{F \sqsubseteq T} b( T/F) a^\times( F) . \end{equation} The sum is over subforests $F$ of $T$. The contribution of a forest is $a^\times(F) = \prod_{T' \in F} a(T')$. For the empty forest this product is 1. For the special case of the empty set object $c(\emptyset) = b(\emptyset)a^\times(\emptyset) = b(\emptyset)$. If $R = \{r\}$, then the forest $F$ consists of the single rooted tree $T$, and $T/F$ is a rooted tree on a one point vertex set. So the contribution to the sum is $a(T) b(\bullet) $. When $R$ is the entire vertex set, then $F$ is the discrete forest, and $T/F = T$. The contribution to the sum is $a(\bullet)^{|T|}b(T) $. As a special case, $c(\bullet) = a(\bullet) b(\bullet)$. If the multiplication is restricted to $a(\emptyset) = b(\emptyset) = 0$, then $G_S = \mathbf{R}^{\tilde{\mathcal{A}}}$ becomes an algebraic system closed under multiplication. The $S$ stands for substitution. The identity for quotient rooted tree convolution is $\delta_\bullet$, which has coefficient 1 for a one-vertex rooted tree and 0 for all other rooted trees. If the multiplication is also restricted to $a(\bullet) \neq 0, b(\bullet) \neq 0$, the resulting system $G_S^\star $ is a group. If the multiplication is further restricted to $a(\bullet) = b(\bullet) = 1$, then this defines a subgroup $G_S^1$. The group $G_S^1$ is the character group of the rooted tree Hopf algebra of Calaque, Ebrahimi-Fard, and Manchon.
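The quotient rooted tree convolution can also be checked by brute force on a small tree. The sketch below enumerates the subforests of the chain $1[2[3]]$ (by choosing the root set $R$ with $1 \in R$) and evaluates $c = a \star b$ with arbitrary numeric test coefficients; every quotient of a chain is again a chain, so isomorphism classes are determined by vertex count here:

```python
from itertools import combinations

# Chain 1[2[3]] as a parent map; coefficients indexed by number of vertices
# (arbitrary nonzero test values for the one-, two-, and three-vertex chains).
parent = {2: 1, 3: 2}
vertices = [1, 2, 3]
a = {1: 2, 2: 3, 3: 5}
b = {1: 7, 2: 11, 3: 13}

def component(r, roots):
    """Component of the subforest rooted at r, after cutting edges out of roots.

    A single pass suffices because labels increase down the chain,
    so parents are always visited before children.
    """
    comp = {r}
    for v in vertices:
        if v not in roots and parent.get(v) in comp:
            comp.add(v)
    return comp

c = 0
for k in range(3):
    for extra in combinations([2, 3], k):
        roots = {1} | set(extra)
        forest_factor = 1
        for r in roots:
            forest_factor *= a[len(component(r, roots))]
        c += b[len(roots)] * forest_factor   # quotient tree has the roots as vertices

expected = (b[1] * a[3]                      # R = {1}: quotient is a single vertex
            + 2 * b[2] * a[2] * a[1]         # R = {1,2} and R = {1,3}
            + b[3] * a[1] ** 3)              # R = {1,2,3}: discrete subforest
print(c == expected)  # True
```

The four subforests reproduce the general pattern noted above: the single-root case contributes $b(\bullet)a(T)$ and the discrete forest contributes $a(\bullet)^{|T|}b(T)$.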
There is also a functional representation of quotient rooted tree convolution. This deals with formal functions of the form $g \mapsto \alpha(g) = \sum_{n=0}^\infty (1/n!) a_n g^n$. Such a function is denoted $\alpha$ to indicate that it depends only on the coefficients $a_n$ and not on the input $g$. The power series depends on the choice of $\alpha$ and is of the form \begin{equation} f^c(t, \alpha, g) = \sum_{n=0}^\infty \sum_{T \in \mathcal{A}[U_n] } \frac{t^n}{n!} c(T) \prod_{\ell \in U_n} \alpha^{(|T^{-1}(\ell)|)}(g), \end{equation} where $U_n$ is a fixed label set with $n$ elements. The zero order term is $c(\emptyset) g$. This leads to the following theorem \cite{ChartierHairerVilmart}. \begin{theorem}[Substitution] Suppose that $a(\emptyset) = 0$ and $c = a \star b$ is the quotient rooted tree convolution. Then \begin{equation} f^c( t , \beta , g) = f^b( 1, f^a(t ,\beta, \cdot), g) . \end{equation} \end{theorem} \begin{proof} The proof here uses the calculus formulas in the appendix. Let $D_t$ be the partial derivative with respect to $t$. It is sufficient to show that applying $D_t^n$ and then setting $t$ to zero gives the same result for both sides of the equation. For the left hand side this is \begin{equation} D_t^{|U|} f^c( t,\beta, g) |_{t = 0} = \sum_{T \in \mathcal{A}[U] } c(T) \prod_{\ell \in U} D_g^{|T^{-1}(\ell)|} \beta(g), \end{equation} where $U$ is a label set with $n$ points. \noindent\textbf{Product rule} The computation for the right hand side begins with \begin{equation} f^b( 1, f^a (t, \beta, \cdot), g) = \sum_{\hat T}\frac{1}{|\hat T|!} b(\hat T) \prod_{i \in [\hat T]} D_g^{|{\hat T}^{-1}(i)|} f^a(t,\beta, g) . \end{equation} By the product rule \begin{equation} D_t^{|U|} \prod_{i \in [\hat T]} D_g^{|{\hat T}^{-1}(i)|} f^a (t ,\beta, g) = \sum_{\psi: U \to [\hat T] } \prod_{i \in [\hat T]} D_t^{|\psi^{-1}(i)|} D_g^{|{\hat T}^{-1}(i)|} f^a(t,\beta,g) . \end{equation} Set $t = 0$. Since $f^a(0,\beta,g) = a(\emptyset) g = 0$, the contributions from $i$ with $\psi^{-1}(i) = \emptyset $ are zero.
So $\psi: U \to [\hat T]$ induces a set partition $\Gamma$ of $U$ and a bijection from $\Gamma$ to $[\hat T]$. There are $|\hat T|!$ such bijections. This leads to \begin{equation} D_t^{|U|} f^b(1, f^a (t,\beta, \cdot), g) |_{t=0} = \sum_{\Gamma \in \mathrm{Part}[U]} \sum_{\hat T \in \mathcal{A}[\Gamma]} c(\Gamma, \hat T) , \end{equation} where \begin{equation} c(\Gamma, \hat T) = b(\hat T) \prod_{B \in \Gamma} D_g^{|{\hat T}^{-1}(B)|} D_t^{|B|} f^a(t, \beta, g)|_{t=0} . \end{equation} \noindent\textbf{Distributive law} The next stage is to insert \begin{equation} D_t^{|B|} f^a( t,\beta, g) |_{t = 0} = \sum_{H \in \mathcal{A}[B] } a(H) \prod_{j \in B} D_g^{|H^{-1}(j)|} \beta(g) \end{equation} and use the distributive law. This gives a forest sum \begin{equation} D_t^{|U|} f^b(1, f^a (t,\beta, \cdot), g) |_{t=0} = \sum_{\Gamma \in \mathrm{Part}[U]} \sum_{\hat T \in \mathcal{A}[\Gamma]} \sum_F c(\Gamma, \hat T, F) , \end{equation} where \begin{equation} c(\Gamma, \hat T, F) = b(\hat T) \prod_{B \in \Gamma} a(F(B)) D_g^{|{\hat T}^{-1}(B)|} \prod_{j \in B} D_g^{|F(B)^{-1}(j)|} \beta(g) . \end{equation} \noindent\textbf{Product rule} The product rule for differentiation produces \begin{equation} c(\Gamma, \hat T, F) = b(\hat T) \prod_{B \in \Gamma} a(F(B)) \sum_{\phi: {\hat T}^{-1}(B) \to B} \prod_{j \in B} D_g^{|\phi^{-1}(j)|} D_g^{|F(B)^{-1}(j)|} \beta(g) . \end{equation} \noindent\textbf{Distributive law} The distributive law gives \begin{equation} D_t^{|U|} f^b(1, f^a (t,\beta, \cdot), g) |_{t=0} = \sum_{\Gamma \in \mathrm{Part}[U]} \sum_{\hat T \in \mathcal{A}[\Gamma]} \sum_F \sum_\Phi c(\Gamma, \hat T, F, \Phi) , \end{equation} where \begin{equation} c(\Gamma, \hat T, F, \Phi) = b(\hat T) \prod_{B \in \Gamma} a(F(B)) \prod_{j \in B} D_g^{|\Phi(B)^{-1}(j)|} D_g^{|F(B)^{-1}(j)|} \beta(g) . \end{equation} The sum is over set partitions $\Gamma$ of $U$ and over forest functions $F$ that send block $B$ to rooted tree $F(B)$ on vertex set $B$.
It is also over rooted trees $\hat T$ on vertex set $\Gamma$. Finally, it is over functions $\Phi$ that send each block $B$ in $\Gamma$ to a function $\Phi(B)$ that takes each $\hat T$ preimage block $B'$ and sends it to a vertex in $B$. \noindent\textbf{Construction of rooted tree and subforest} These data determine a rooted tree on $U$ that is made from the rooted trees $F(B)$ internal to the blocks $B$ and from the rooted tree $\hat T$ and the function $\Phi$. If $\hat T(B') = B$, then there is an edge from the root of the rooted tree on $B'$ to the vertex $\Phi(B)(B')$ in $B$. The corresponding contribution involves the coefficients $b(\hat T)$ and $a(F(B))$ and a contribution from the rooted tree. At a given vertex $j$ this involves a derivative of $\beta(g)$ of an order equal to the total number of edges impinging on this vertex, both from within the block and from the other blocks. Giving these data is the same as giving the pair $T \in \mathcal{A}[U]$ together with subforest $F$. So the final expression is \begin{equation} D_t^{|U|} f^b( 1, f^a (t ,\beta, \cdot), g)|_{t=0} = \sum_{T \in \mathcal{A}[U]} \sum_{F \sqsubseteq T} a^\times(F) b(T/F) f_T(g). \end{equation} This establishes the result. \end{proof} The authors \cite{ChartierHairerVilmart} give two applications of this result. For both the idea is to consider the coefficients $e(\tau) = 1/\tau!$ that give the exact solution of an ordinary differential equation $dx/dt = \beta(x)$. In backward error analysis the idea is to take $c$ corresponding to some numerical method and solve $c = a \star e$ for $a$. This produces a modified differential equation that agrees with the numerical method. In the application to modified integrators, start with a numerical method given by $b$ and solve $e = a \star b$ for $a$. This produces a modified numerical method that agrees with the solution of the differential equation. \section*{Appendix: Algebra and calculus in combinatorics} Here are some basic results from algebra and calculus in forms that are useful for combinatorics. These are stated in the setting of functions of one variable. There are even more illuminating multi-variable results, but they are not needed in the present exposition. \noindent\textbf{The distributive law} A version of the distributive law of algebra is the following. Suppose that $B$ is a set and for each $b \in B$ there is a corresponding set $F_b$. Then the product over $b \in B$ of sums indexed by $F_b$ is a sum of products: \begin{equation} \prod_{b \in B} \sum_{ t \in F_b} a_b(t) = \sum_{s } \prod_{b \in B} a_b(s(b)).
\end{equation} The sum on the right is over all functions $s: B \to \bigcup_b F_b$ such that for each $b$ the value $s(b) \in F_b$. The set of all such functions is the product space $\prod_b F_b$. In the special case when the sets $F_b=F$ are all the same, the sum is over all functions $s: B \to F$. In this case the set of all such functions is the Cartesian power space $F^B$. \noindent\textbf{The product rule} A version of the product rule for differentiation is the following. Let $U$ be a set with $|U|$ elements. Then the derivative of order $|U|$ of a product function is given by \begin{equation} D^{|U|} \prod_{b \in B} F_b = \sum_{\phi \in B^U} \prod_{b \in B} D^{ |\phi^{-1}(b)|}F_b. \end{equation} Here $B^U$ consists of all functions $\phi: U \to B$. Sometimes a function $\phi$ is described by its inverse images $U_b = \phi^{-1}(b)$, so one can think of this as a sum over the corresponding maps $b \mapsto U_b$. \noindent\textbf{The chain rule} A version of the chain rule is the following. Let $U$ be a set with $|U|$ elements. Then the derivative of order $|U|$ of a composite function is given by \begin{equation} D^{|U|}(F \circ G) = \sum_{\Gamma \in \mathrm{Part}[U]} (D^{|\Gamma|}F)\circ G \cdot \prod_{B \in \Gamma} D^{|B|} G. \end{equation} Here $\mathrm{Part}[U]$ consists of all set partitions of $U$ into disjoint non-empty subsets with union $U$. \end{document}
\begin{document} \baselineskip=18truept \def{\mathbb C}{{\mathbb C}} \def{\mathbb C}^n{{\mathbb C}^n} \def{\mathbb R}{{\mathbb R}} \def{\mathbb R}^n{{\mathbb R}^n} \def{\mathbb Z}{{\mathbb Z}} \def{\mathbb N}{{\mathbb N}} \def\cal#1{{\mathcal #1}} \def\bb#1{{\mathbb #1}} \def\bar \partial {\bar \partial } \def{\mathcal D}{{\mathcal D}} \def\lev#1{{\mathcal L}\left(#1\right)} \def\Delta {\Delta } \def{\mathcal O}{{\mathcal O}} \def{\mathcal E}{{\mathcal E}} \def{\mathcal J}{{\mathcal J}} \def{\mathcal U}{{\mathcal U}} \def{\mathcal V}{{\mathcal V}} \def\zeta {\zeta } \def\text {Harm}\, {\text {Harm}\, } \def\nabla {\nabla } \def\{ M_k \} _{k=0}^{\infty } {\{ M_k \} _{k=0}^{\infty } } \def\sing#1{#1_{\text {sing}}} \def\reg#1{#1_{\text {reg}}} \def\pione#1{\pi_1(#1)} \def\pioneat#1#2{\pi_1(#1,#2)} \def\big{\backslash}{\big{\backslash}} \def\setof#1#2{\{ \, #1 \mid #2 \, \} } \def\text{\rm Im}\, e#1{\text{\rm im}\,\bigl[#1\bigr]} \def\kernel#1{\text{\rm ker}\,\bigl[#1\bigr]} \defM\setminus \overline M_0{M\setminus \overline M_0} \defM\setminus M_0{M\setminus M_0} \def\frac {\partial }{\partial\nu } {\frac {\partial }{\partial\nu } } \def\frac {\partial }{\partial\nu } of#1{\frac {\partial#1}{\partial\nu } } \def\pdof#1#2{\frac {\partial#1}{\partial#2}} \def\pdwrt#1{\frac{\partial}{\partial #1}} \def\ensuremath{\mathcal C^{\infty}} {\ensuremath{\mathcal C^{\infty}} } \def\ensuremath{\mathcal C^{\infty}} ns{\ensuremath{\mathcal C^{\infty}}} \def\text{\rm dist}{\text{\rm dist}} \def\text{\rm diam}{\text{\rm diam}} \def\text{\rm Re}\, {\text{\rm Re}\, } \def\text{\rm Im}\, {\text{\rm Im}\, } \def\text{\rm supp}\, {\text{\rm supp}\, } \def\text{\rm vol}{\text{\rm vol}} \def\restrict#1{{\upharpoonright_{#1}}} \def\setminus{\setminus} \def{\mathcal {P}}{{\mathcal {P}}} \def{\mathcal S\mathcal P}{{\mathcal S\mathcal P}} \def\geqtrace#1#2{{\geq_{(#1,#2)}}} \def\gtrace#1#2{{>_{(#1,#2)}}} \def\leqtrace#1#2{{\leq_{(#1,#2)}}} \def\ltrace#1#2{{<_{(#1,#2)}}} \defanalytic {analytic } 
\defanalytic ns{analytic} \defbounded {bounded } \defbounded ns{bounded} \defcompact {compact } \defcompact ns{compact} \defcomplex {complex } \defcomplex ns{complex} \defcontinuous {continuous } \defcontinuous ns{continuous} \defdimension {dimension } \defdimension ns{dimension } \defexhaustion {exhaustion } \defexhaustion ns{exhaustion} \deffunction {function } \deffunction ns{function} \deffunction s{functions } \deffunction sns{functions} \defholomorphic {holomorphic } \defholomorphic ns{holomorphic} \defmeromorphic {meromorphic } \defmeromorphic ns{meromorphic} \defholomorphic convex{holomorphically convex } \defholomorphic convexns{holomorphically convex} \defirreducible component {irreducible component } \defconnected component {connected component } \defirreducible component ns{irreducible component} \defconnected component ns{connected component} \defirreducible component s{irreducible components } \defconnected component s{connected components } \defirreducible component sns{irreducible components} \defconnected component sns{connected components} \defirreducible {irreducible } \defirreducible ns{irreducible} \defconnected {connected } \defconnected ns{connected} \defcomponent {component } \defcomponent ns{component} \defcomponent s{components } \defcomponent sns{components} \defmanifold {manifold } \defmanifold ns{manifold} \defmanifold s{manifolds } \defmanifold sns{manifolds} \defneighborhood {neighborhood } \defneighborhood s{neighborhoods } \defneighborhood ns{neighborhood} \defneighborhood sns{neighborhoods} \defharmonic {harmonic } \defharmonic ns{harmonic} \defpluriharmonic {pluriharmonic } \defpluriharmonic ns{pluriharmonic} \defplurisubharmonic {plurisubharmonic } \defplurisubharmonic ns{plurisubharmonic} \def\qplsh#1{$#1$-plurisubharmonic} \def$(n-1)$-plurisubharmonic {$(n-1)$-plurisubharmonic } \def$(n-1)$-plurisubharmonic ns{$(n-1)$-plurisubharmonic} \defparabolic {parabolic } \defparabolic ns{parabolic} \defrelatively {relatively } 
\defrelatively ns{relatively} \defstrictly {strictly } \defstrictly ns{strictly} \defstrictly g{strongly } \defstrictly gns{strongly} \defconvex {convex } \defconvex ns{convex} \defwith respect to {with respect to } \defwith respect to ns{with respect to} \defsuch that {such that } \defsuch that{such that} \defharmonic measure {harmonic measure } \defharmonic measure{harmonic measure} \defharmonic measure of the ideal boundary of {harmonic measure of the ideal boundary of } \defharmonic measure of the ideal boundary of{harmonic measure of the ideal boundary of} \def\text{{\rm Vert}}\, {\text{{\rm Vert}}\, } \def\text{{\rm Edge}}\, {\text{{\rm Edge}}\, } \def\til#1{\tilde{#1}} \def\wtil#1{\widetilde{#1}} \def\what#1{\widehat{#1}} \def\seq#1#2{\{#1_{#2}\} } \def\varphi {\varphi } \def ^{-1} { ^{-1} } \def\ssp#1{^{(#1)}} \def\set#1{\{ #1 \}} \title[The Bochner--Hartogs dichotomy] {The Bochner--Hartogs dichotomy for bounded geometry hyperbolic K\"ahler manifolds} \author[T.~Napier]{Terrence Napier} \address{Department of Mathematics\\Lehigh University\\Bethlehem, PA 18015\\USA} \email{[email protected]} \thanks{To appear in Annales de l'Institut Fourier.} \author[M.~Ramachandran]{Mohan Ramachandran} \address{Department of Mathematics\\University at Buffalo\\Buffalo, NY 14260\\USA} \email{[email protected]} \subjclass[2010]{32E40} \keywords{Green's function, pluriharmonic} \date{June 12, 2015} \begin{abstract} The main result is that for a connected hyperbolic complete K\"ahler manifold with bounded geometry of order two and exactly one end, either the first compactly supported cohomology with values in the structure sheaf vanishes or the manifold admits a proper holomorphic mapping onto a Riemann surface. \end{abstract} \maketitle \section*{Introduction} \label{introduction} Let $(X,g)$ be a connected noncompact complete K\"ahler manifold. 
According to \cite{Gro-Sur la groupe fond}, \cite{Li Structure complete Kahler}, \cite{Gro-Kahler hyperbolicity}, \cite{Gromov-Schoen}, \cite{NR-Structure theorems}, \cite{Delzant-Gromov Cuts}, \cite{NR Filtered ends}, and \cite{NR L2 Castelnuovo}, if $X$ has at least three filtered ends relative to the universal covering (i.e., $\tilde e(X)\geq 3$ in the sense of Definition~\ref{ends filtered ends def}) and $X$ is weakly $1$-complete (i.e., $X$ admits a continuous plurisubharmonic exhaustion function) or $X$ is regular hyperbolic (i.e., $X$ admits a positive symmetric Green's function that vanishes at infinity) or $X$ has bounded geometry of order two (see Definition~\ref{Bdd geom along set defn}), then $X$ admits a proper holomorphic mapping onto a Riemann surface. In particular, if $X$ has at least three (standard) ends (i.e., $e(X)\geq 3$) and $X$ satisfies one of the above three conditions, then such a mapping exists. Cousin's example~\cite{Cousin} of a $2$-ended weakly $1$-complete covering of an Abelian variety that has only constant holomorphic functions demonstrates that two (filtered) ends do not suffice. A noncompact complex manifold~$X$ for which $H^1_c(X,\ol)=0$ is said to have the {\it Bochner--Hartogs property} (see Hartogs~\cite{Hartogs}, Bochner~\cite{Bochner}, and Harvey~and~Lawson~\cite{Harvey-Lawson}). Equivalently, for every \ensuremath{\mathcal C^{\infty}} compactly supported form~$\alpha$ of type $(0,1)$ with $\dbar\alpha=0$ on~$X$, there is a \ensuremath{\mathcal C^{\infty}} compactly supported function $\beta$ on~$X$ such that $\dbar\beta=\alpha$. If $X$ has the Bochner--Hartogs property, then every holomorphic function on a neighborhood of infinity with no relatively compact connected components extends to a holomorphic function on~$X$. For, cutting off the given function away from infinity, one gets a \ensuremath{\mathcal C^{\infty}} function $\lambda$ on $X$.
Taking $\alpha\equiv\dbar\lambda$ and forming $\beta$ as above, one then gets the desired extension $\lambda-\beta$. In particular, $e(X)=1$, since for a complex manifold with multiple ends, there exists a locally constant function on a neighborhood of $\infty$ that is equal to~$1$ along one end and $0$ along the other ends, and such a function cannot extend holomorphically. Thus in a sense, the space $H^1_c(X,\ol)$ is a function-theoretic approximation of the set of (topological) ends of~$X$. An open Riemann surface~$S$, as well as any complex manifold admitting a proper holomorphic mapping onto~$S$, cannot have the Bochner--Hartogs property, because $S$ admits meromorphic functions with finitely many poles. Examples of manifolds of dimension~$n$ having the Bochner--Hartogs property include strongly $(n-1)$-complete complex manifolds (Andreotti and Vesentini~\cite{Andreotti-Vesentini}) and strongly hyper-$(n-1)$-convex K\"ahler manifolds (Grauert and Riemenschneider~\cite{Grauert-Riemenschneider}). We will say that the \emph{Bochner--Hartogs dichotomy} holds for a class of connected complex manifolds if each element either has the Bochner--Hartogs property or admits a proper holomorphic mapping onto a Riemann surface. According to \cite{Ramachandran-BH for coverings}, \cite{NR-BH Weakly 1-complete}, and \cite{NR-BH Regular hyperbolic Kahler}, the Bochner--Hartogs dichotomy holds for the class of weakly $1$-complete or regular hyperbolic complete K\"ahler manifolds with exactly one end. The main goal of this paper is the following: \begin{thm}\label{BHD bounded geom hyperbolic thm} Let $X$ be a connected noncompact hyperbolic complete K\"ahler manifold with bounded geometry of order two, and assume that $X$ has exactly one end. Then $X$ admits a proper holomorphic mapping onto a Riemann surface if and only if $H^1_c(X,\ol)\neq 0$.
\end{thm} In other words, the Bochner--Hartogs dichotomy holds for the class of hyperbolic connected noncompact complete K\"ahler manifolds with bounded geometry of order two and exactly one end. When combined with the earlier results, the above gives the following: \begin{cor}\label{combined BHD cor} Let $X$ be a connected noncompact complete K\"ahler manifold that has exactly one end (or has at least three filtered ends) and satisfies at least one of the following: \begin{enumerate} \item[(i)] $X$ is weakly $1$-complete; \item[(ii)] $X$ is regular hyperbolic; or \item [(iii)] $X$ is hyperbolic and of bounded geometry of order two. \end{enumerate} Then $X$ admits a proper holomorphic mapping onto a Riemann surface if and only if $H^1_c(X,\ol)\neq 0$. \end{cor} In particular, since connected coverings of compact K\"ahler manifolds have bounded geometry of all orders, we have the following (cf.~\cite{ArapuraBressRam}, \cite{Ramachandran-BH for coverings}, and Theorem~0.2 of \cite{NR-BH Regular hyperbolic Kahler}): \begin{cor}\label{BHD covering cor} Let $X$ be a compact K\"ahler manifold, and $\what X\to X$ a connected infinite covering that is hyperbolic and has exactly one end (or at least three filtered ends). Then $\what X$ admits a proper holomorphic mapping onto a Riemann surface if and only if $H^1_c(\what X,\ol)\neq 0$. \end{cor} A standard method for constructing a proper holomorphic mapping onto a Riemann surface is to produce suitable linearly independent holomorphic $1$-forms (usually as holomorphic differentials of pluriharmonic functions), and to then apply versions of Gromov's cup product lemma and the Castelnuovo--de~Franchis theorem. In this context, an \emph{irregular} hyperbolic manifold has a surprising advantage over a \emph{regular} hyperbolic manifold in that an irregular hyperbolic complete K\"ahler manifold with bounded geometry of order two automatically admits a nonconstant positive pluriharmonic function.
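For orientation, the Castelnuovo--de~Franchis mechanism invoked above may be sketched as follows (this is only the classical heuristic; the precise $L^2$ and bounded versions actually applied are those of \cite{NR L2 Castelnuovo}). If $\theta_1$ and $\theta_2$ are linearly independent closed holomorphic $1$-forms with $\theta_1\wedge\theta_2\equiv 0$, then near any point at which $\theta_1\neq 0$ one may write \[ \theta_2=f\,\theta_1 \qquad\text{and hence}\qquad 0=d\theta_2=df\wedge\theta_1 , \] so $df$ is pointwise proportional to~$\theta_1$. The local holomorphic functions $f$ obtained in this way are constant on the leaves of the foliation determined by~$\theta_1$, and when the leaves are compact, these functions fit together into a holomorphic mapping onto a Riemann surface for which $\theta_1$ and $\theta_2$ are pullbacks of forms on the image.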
In particular, the proof of Theorem~\ref{BHD bounded geom hyperbolic thm} in the irregular hyperbolic case is, in a sense, simpler than the proof in the regular hyperbolic case (which appeared in~\cite{NR-BH Regular hyperbolic Kahler}). Because the existence of irregular hyperbolic complete K\"ahler manifolds with one end and bounded geometry of order two is not completely obvious, a $1$-dimensional example is provided in Section~\ref{BG irreg hyp example sect}. However, the authors do not know whether or not there exist examples with the above properties that satisfy the Bochner--Hartogs property (and hence do not admit proper holomorphic mappings onto Riemann surfaces). Section~\ref{ends and bh basic sect} is a consideration of some elementary properties of ends, as well as some elementary topological properties of complex manifolds with the Bochner--Hartogs property. Section~\ref{Bounded geometry sect} contains the definition of bounded geometry. Section~\ref{greens fn sect} consists of some terminology and facts from potential theory, and a proof that the Bochner--Hartogs property holds for any one-ended connected noncompact hyperbolic complete K\"ahler manifold with no nontrivial $L^2$ holomorphic $1$-forms. A modification of Nakai's construction of an infinite-energy positive quasi-Dirichlet finite harmonic function on an irregular hyperbolic manifold, as well as a modification of a theorem of Sullivan which gives pluriharmonicity in the setting of a complete K\"ahler manifold with bounded geometry of order two, appear in Section~\ref{quasidirichlet finite fns sect}. The proof of Theorem~\ref{BHD bounded geom hyperbolic thm} and the proofs of some related results appear in Section~\ref{proof main thm sect}. An example of an irregular hyperbolic complete K\"ahler manifold with one end and bounded geometry of all orders is constructed in Section~\ref{BG irreg hyp example sect}. 
\section{Ends and the Bochner--Hartogs property}\label{ends and bh basic sect} In this section, we consider an elementary topological property of complex manifolds with the Bochner--Hartogs property. Further topological characterizations of the Bochner--Hartogs dichotomy will be considered in Section~\ref{proof main thm sect}. We first recall some terminology and facts concerning ends. By an {\it end} of a connected manifold~$M$, we will mean either a component $E$ of $M\setminus K$ with noncompact closure, where $K$ is a given compact subset of $M$, or an element of \[ \lim _{\leftarrow} \pi _0 (M\setminus K), \] where the limit is taken as $K$ ranges over the compact subsets of $M$ (or equivalently, the compact subsets of $M$ for which the complement $M\setminus K$ has no relatively compact components, since the union of any compact subset of~$M$ with the relatively compact connected components of its complement is compact). The number of ends of $M$ will be denoted by~$e(M)$. For a compact set $K$ such that $M\setminus K$ has no relatively compact components, we will call \[ M\setminus K=E_1\cup \cdots \cup E_m, \] where $\seq Ej_{j=1}^m$ are the distinct components of $M\setminus K$, an {\it ends decomposition} for~$M$. \begin{lem}\label{basic ends lem} Let $M$ be a connected noncompact \ensuremath{\mathcal C^{\infty}} manifold. \begin{enumerate} \item[(a)] If $S\Subset M$, then the number of components of $M\setminus S$ that are not relatively compact in~$M$ is at most the number of components of $M\setminus T$ for any set $T$ with $S\subset T\Subset M$. In particular, the number of such components of $M\setminus S$ is at most $e(M)$. \item[(b)] If $K$ is a compact subset of~$M$, then there exists a \ensuremath{\mathcal C^{\infty}} relatively compact domain~$\Omega$ in~$M$ containing~$K$ such that $M\setminus\Omega$ has no compact components.
In particular, if $k$ is a positive integer with $k\leq e(M)$, then we may choose $\Omega$ so that $M\setminus\Omega$ also has at least $k$ components; and hence $\partial\Omega$ has at least $k$ components. \item[(c)] If $\Omega$ is a nonempty relatively compact domain in $M$, then the number of components of~$\partial\Omega$ is at most $e(\Omega)$, with equality if $\Omega$ is also smooth. \item[(d)] Given an ends decomposition $M\setminus K=E_1\cup\cdots\cup E_m$, there is a connected compact set~$K'\supset K$ such that any domain~$\Theta$ in~$M$ containing~$K'$ has an ends decomposition $\Theta\setminus K=E_1'\cup\cdots\cup E_m'$, where $E_j'=E_j\cap\Theta$ for $j=1,\dots,m$. \item[(e)] If $\Omega$ and $\Theta$ are domains in~$M$ with $\Theta\subset\Omega$ and both $M\setminus\Omega$ and $\Omega\setminus\Theta$ have no compact components, then $M\setminus\Theta$ has no compact components. \item[(f)] If $M$ admits a proper surjective continuous open mapping onto an orientable topological surface that is not simply connected, then there exists a \ensuremath{\mathcal C^{\infty}} relatively compact domain~$\Omega$ in~$M$ such that $M\setminus\Omega$ has no compact components and $\partial\Omega$ is not connected. \end{enumerate} \end{lem} \begin{pf} For the proof of (a), we simply observe that if $S\subset T\Subset M$, then each connected component of $M\setminus S$ that is not relatively compact in~$M$ must meet $M\setminus T$ and must therefore contain some component of $M\setminus T$. Choosing $T\supset S$ to be a compact set for which $M\setminus T$ has no relatively compact components, we see that the number of components of $M\setminus S$ is at most $e(M)$. For the proof of (b), observe that given a compact set $K\subset M$, we may fix a \ensuremath{\mathcal C^{\infty}} domain~$\Omega_0$ with $K\subset\Omega_0\Subset M$.
The union of $\Omega_0$ with those (finitely many) components of $M\setminus\Omega_0$ which are compact is then a \ensuremath{\mathcal C^{\infty}} relatively compact domain~$\Omega\supset K$ in~$M$ for which $M\setminus\Omega$ has no compact components. Given a positive integer $k\leq e(M)$, we may choose~$\Omega$ to also contain a compact set~$K'$ for which $M\setminus K'$ has at least~$k$ components and no relatively compact components in~$M$. Part~(a) then implies that $M\setminus\Omega$ has at least $k$ components, and since each component must contain a component of $\partial\Omega$, we see also that $\partial\Omega$ has at least~$k$ components. For the proof of (c), suppose $\Omega$ is a nonempty relatively compact domain in~$M$ and $k$ is a positive integer. If $\partial\Omega$ has at least $k$ components, then we may fix a covering of $\partial\Omega$ by disjoint relatively compact open subsets $U_1,\dots,U_k$ of $M$ each of which meets $\partial\Omega$ (one may prove the existence of such sets by induction on~$k$). We may also fix a compact set $K\subset\Omega$ containing $\Omega\setminus(U_1\cup\cdots\cup U_k)$ such that the components of $\Omega\setminus K\subset U_1\cup\cdots\cup U_k$ are not relatively compact in~$\Omega$. For each~$j=1,\dots,k$, $U_j$ meets $\partial\Omega$ and therefore some component~$E$ of $\Omega\setminus K$, and hence $E\subset U_j$. Since $\Omega\setminus K$ has at most $e(\Omega)$ components, it follows that $k\leq e(\Omega)$. Furthermore, if $\Omega$ is smooth, then we may choose $k$ to be equal to the number of boundary components, and we may choose the (arbitrarily small) neighborhoods so that $U_j\cap\Omega$ is connected for each~$j$. We then get $k=e(\Omega)$ in this case. For the proof of~(d), let $M\setminus K=E_1\cup\cdots\cup E_m$ be an ends decomposition.
We may fix a \ensuremath{\mathcal C^{\infty}} relatively compact domain~$\Omega$ in~$M$ containing~$K$ such that $M\setminus\Omega$ has no compact components, and for each $j=1,\dots,m$, we may fix a connected compact set~$A_j\subset E_j$ such that $A_j$ meets each of the finitely many components of $E_j\cap\partial\Omega$. The compact set $K'\equiv\overline\Omega\cup\bigcup_{j=1}^mA_j$ then has the required properties. For the proof of (e), suppose $\Omega$ and $\Theta$ are domains in~$M$ with $\Theta\subset\Omega$, and both $M\setminus\Omega$ and $\Omega\setminus\Theta$ have no compact components. If $E$ is a component of $M\setminus\Theta$, then either $E$ meets $M\setminus\Omega$, in which case $E$ contains a noncompact component of $M\setminus\Omega$, or $E\subset\Omega$, in which case $E$ is a component of $\Omega\setminus\Theta$. In either case, $E$ is noncompact. Finally, for the proof of~(f), suppose $\Phi\colon M\to S$ is a proper surjective continuous open mapping onto an orientable topological surface~$S$ that is not simply connected. By part~(b), we may assume without loss of generality that $e(M)=1$. If $U$ is any open set in~$S$ and $V$ is any component of $\Phi ^{-1} (U)$, then $\Phi(V)$ is both open and closed in~$U$; i.e., $\Phi(V)$ is a component of~$U$. Consequently, if $K\subset S$ is a compact set for which $S\setminus K$ has no relatively compact components and $V$ is any component of $M\setminus\Phi ^{-1} (K)=\Phi ^{-1} (S\setminus K)$, then $\Phi(V)$ must be a component of $S\setminus K$. Hence $V$ must be the unique component of $M\setminus\Phi ^{-1} (K)$ that is not relatively compact in~$M$, and it follows that $V=M\setminus\Phi ^{-1} (K)$ and $\Phi(V)=S\setminus K$ are connected. In particular, $e(S)=1$. Since every planar domain with one end is simply connected, $S$ must be nonplanar; that is, there exists a nonseparating simple closed curve in~$S$.
Hence there exists a homeomorphism $\Psi$ of a suitable annulus $\Delta(0;r',R')\equiv\setof{z\in\C}{r'<|z|<R'}$ onto a domain $A'\Subset S$ with connected complement $S\setminus A'$. Fixing $r$ and $R$ with $0<r'<r<R<R'$, setting $A\equiv\Psi(\Delta(0;r,R))\Subset A'$, $F\equiv S\setminus\overline A$ and $E\equiv\Phi ^{-1} (F)=M\setminus\Phi ^{-1} (\overline A)$, and letting $\Theta$ be a component of $\Phi ^{-1} (A)$, we see that $E$ is connected and $\Phi(\Theta)=A$ (by the above), and that $\overline E\subset M\setminus\Theta$. It also follows that $M\setminus\Theta$ is connected. For if $K$ is a compact component of $M\setminus\Theta$, then we must have $K\cap\overline E=\emptyset$. Forming a connected neighborhood~$U$ of $K$ in $M\setminus\overline E\subset M\setminus E=\Phi ^{-1} (\overline A)$, we get $\Phi(K)\subset\Phi(U)\subset\overline A$, and hence $\Phi(U)\subset A$. Thus $K$ must lie in some component~$V\subset M\setminus\Theta$ of $\Phi ^{-1} (A)$, and hence $K=V$. But then $K$ must be both open and closed in~$M$, which is clearly impossible. Therefore $M\setminus\Theta$ is connected. Moreover, since $\Phi(\Theta)=A$, we must have $\Phi(\partial\Theta)=\partial A$, and hence $\partial\Theta$ is not connected. Applying parts (b), (c), and~(e), we get the desired smooth domain $\Omega\Subset\Theta$. \end{pf} As indicated in the introduction, a connected noncompact complex manifold with the Bochner--Hartogs property must have exactly one end and cannot admit a proper holomorphic mapping onto a Riemann surface. In fact, the following elementary observations suggest that complex manifolds with the Bochner--Hartogs property are very different topologically from those admitting proper holomorphic mappings onto Riemann surfaces: \begin{prop}\label{BH con boundary prop} Let $X$ be a connected noncompact complex manifold. \begin{enumerate} \item[(a)] Assume that $H^1_c(X,\ol)=0$. Then $e(X)=1$.
In fact, if $\Omega$ is any nonempty domain in~$X$ for which each connected component of the complement $X\setminus\Omega$ is noncompact, then $e(\Omega)=1$. In particular, if $\Omega$ is a relatively compact domain in $X$ and $X\setminus\Omega$ is connected, then $\partial\Omega$ is connected. Moreover, every compact orientable \ensuremath{\mathcal C^{\infty}} hypersurface in~$X$ is the boundary of some smooth relatively compact domain in~$X$. \item[(b)] If $X$ admits a surjective proper continuous open mapping onto an orientable topological surface that is not simply connected (for example, if $X$ admits a proper holomorphic mapping onto a Riemann surface other than the disk or the plane), then there exists a \ensuremath{\mathcal C^{\infty}} relatively compact domain~$\Omega$ in~$X$ such that $X\setminus\Omega$ is connected but $\partial\Omega$ is \emph{not} connected (and $e(\Omega)>1$). In particular, $H^1_c(X,\ol)\neq 0$. \end{enumerate} \end{prop} \begin{pf} For the proof of (a), let us assume that $H^1_c(X,\ol)=0$. As argued in the introduction, we must then have $e(X)=1$. Next, we show that any compact orientable \ensuremath{\mathcal C^{\infty}} hypersurface~$M$ in~$X$ is the boundary of some relatively compact \ensuremath{\mathcal C^{\infty}} domain in~$X$. For we may fix a relatively compact connected neighborhood~$U$ of $M$ in~$X$ such that $U\setminus M$ has exactly two connected components, $U_0$ and $U_1$. We may also fix a relatively compact neighborhood~$V$ of~$M$ in~$U$ and a \ensuremath{\mathcal C^{\infty}} function~$\lambda$ on~$X\setminus M$ such that $\text{\rm supp}\, \lambda\Subset U$, $\lambda\equiv 0$ on $U_0\cap V$, and $\lambda\equiv 1$ on $U_1\cap V$.
Hence $\dbar\lambda$ extends to a $\dbar$-closed \ensuremath{\mathcal C^{\infty}} $(0,1)$-form~$\alpha$ on~$X$ with compact support in $U\setminus M$, and since $H^1_c(X,\ol)=0$, we have $\alpha=\dbar\beta$ for some \ensuremath{\mathcal C^{\infty}} compactly supported function~$\beta$ on~$X$. The difference $f\equiv\lambda-\beta$ is then a holomorphic function on~$X\setminus M$ that vanishes on some nonempty open subset. If $X\setminus M$ is connected, then $f\equiv 0$ on the entire set~$X\setminus M$, and in particular, the restriction $\beta\restriction_V$ is a \ensuremath{\mathcal C^{\infty}} function that is equal to~$1$ on $U_1\cap V$ and $0$ on $U_0\cap V$. Since $M=V\cap\partial U_0=V\cap\partial U_1$, we have arrived at a contradiction. Thus $X\setminus M$ cannot be connected, and hence $X\setminus M$ must have exactly two connected components, one containing $U_0$ and the other containing~$U_1$. Since $e(X)=1$, one of these connected components must be a relatively compact \ensuremath{\mathcal C^{\infty}} domain with boundary~$M$ in~$X$. It follows in particular that the boundary of any relatively compact \ensuremath{\mathcal C^{\infty}} domain in~$X$ with connected complement must be connected. Next, suppose $\Omega$ is an arbitrary nonempty domain for which $X\setminus\Omega$ has no compact components. If $e(\Omega)>1$, then part~(b) of Lemma~\ref{basic ends lem} provides a \ensuremath{\mathcal C^{\infty}} relatively compact domain~$\Theta$ in~$\Omega$ such that $\Omega\setminus\Theta$ has no compact components and $\partial\Theta$ is not connected, and hence part~(e) implies that $X\setminus\Theta$ has no compact components; i.e., $X\setminus\Theta$ is connected. However, as shown above, any smooth relatively compact domain in~$X$ with connected complement must have connected boundary. Thus we have arrived at a contradiction, and hence $\Omega$ must have only one end.
In particular, if $\Omega\Subset X$ (and $X\setminus\Omega$ is connected), then by part~(c) of Lemma~\ref{basic ends lem}, $\partial\Omega$ must be connected. Part~(b) follows immediately from part~(f) of Lemma~\ref{basic ends lem}. \end{pf} \section{Bounded geometry}\label{Bounded geometry sect} In this section, we recall the definition of bounded geometry and we fix some conventions. Let $X$ be a complex manifold with almost complex structure $J\colon TX\to TX$. By a \emph{Hermitian metric} on~$X$, we will mean a Riemannian metric~$g$ on $X$ such that $g(Ju,Jv)=g(u,v)$ for every choice of real tangent vectors $u,v\in T_pX$ with $p\in X$. We call $(X,g)$ a \emph{Hermitian manifold}. We will also denote by $g$ the complex bilinear extension of $g$ to the complexified tangent space $(TX)_{\C}$. The corresponding real $(1,1)$-form~$\omega$ is given by $(u,v)\mapsto\omega(u,v)\equiv g(Ju,v)$. The corresponding Hermitian metric (in the sense of a smoothly varying family of Hermitian inner products) in the holomorphic tangent bundle $T^{1,0}X$ is given by \((u,v)\mapsto g(u,\bar v)\). Observe that with this convention, under the holomorphic vector bundle isomorphism $(TX,J)\overset{\cong}{\to} T^{1,0}X$ given by $u\mapsto \frac 12(u-iJu)$, the pullback of this Hermitian metric to~$(TX,J)$ is given by \((u,v)\mapsto\tfrac 12g(u,v)-\tfrac i2\omega(u,v)\). In a slight abuse of notation, we will also denote the induced Hermitian metric in $T^{1,0}X$, as well as the induced Hermitian metric in $\Lambda^r(TX)_{\C}\otimes\Lambda^s(T^*X)_{\C}$, by~$g$. The corresponding Laplacians are given by \begin{align*} \lap&=\lap_d\equiv -(dd^*+d^*d),\\ \lap_{\dbar}&=-(\dbar\dbar^*+\dbar^*\dbar),\\ \lap_{\partial}&=-(\partial\partial^*+\partial^*\partial). \end{align*} If $(X,g,\omega)$ is \emph{K\"ahler}, i.e.,~$d\omega=0$, then $\lap=2\lap_{\dbar}=2\lap_{\partial}$.
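For example (a standard computation, included only for orientation), for the standard Euclidean K\"ahler metric on~$\C^n$ and a \ensuremath{\mathcal C^{\infty}} function~$f$, writing $z_j=x_j+iy_j$, the above conventions give \[ \lap f=\sum_{j=1}^n\left(\frac{\partial^2f}{\partial x_j^2}+\frac{\partial^2f}{\partial y_j^2}\right) =4\sum_{j=1}^n\frac{\partial^2f}{\partial z_j\partial\bar z_j}=2\lap_{\dbar}f . \] In particular, with respect to any K\"ahler metric, pluriharmonic functions (functions that are locally real parts of holomorphic functions) are harmonic.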
\begin{defn}\label{Bdd geom along set defn} For $S\subset X$ and $k$ a nonnegative integer, we will say that a Hermitian manifold~$(X,g)$ of dimension~$n$ {\it has bounded geometry of order $k$ along $S$} if for some constant $C>0$ and for every point $p\in S$, there is a biholomorphism $\Psi$ of the unit ball $B\equiv B_{g_{\C^n}}(0;1)\subset\C^n$ onto a neighborhood of $p$ in $X$ such that $\Psi(0)=p$ and such that on $B$, \[ C ^{-1} g_{\C^n}\leq\Psi^*g\leq Cg_{\C^n} \quad\text{and}\quad|D^m\Psi^*g|\leq C\text{ for }m=0,1,2,\dots,k. \] \end{defn} \section{Green's functions and harmonic projections}\label{greens fn sect} In this section we recall some terminology and facts from potential theory (a more detailed outline is provided in~\cite{NR-Structure theorems}). We will also see that the Bochner--Hartogs property holds for a connected noncompact complete K\"ahler manifold with exactly one end and no nontrivial $L^2$ holomorphic $1$-forms. A connected noncompact oriented Riemannian manifold~$(M,g)$ is called \textit{hyperbolic} if there exists a positive symmetric Green's function~$G(x,y)$ on~$M$; otherwise, $M$ is called \emph{parabolic}. Equivalently, $M$ is hyperbolic if given a relatively compact \ensuremath{\mathcal C^{\infty}} domain~$\Omega $ for which no connected component of $M\setminus\Omega$ is compact, there is a connected component~$E$ of $M\setminus\overline\Omega$ and a (unique) greatest \ensuremath{\mathcal C^{\infty}} function $u_E:\overline E\to[0,1)$ such that $u_E$ is harmonic on $E$, $u_E=0$ on $\partial E$, and $\sup_Eu_E=1$ (see, for example, Theorem~3 of \cite{Glasner-Katz Function-theoretic degeneracy}). We will also call $E$, and any end containing~$E$, a \emph{hyperbolic} end. An end that is not hyperbolic is called \emph{parabolic}, and we set $u_E\equiv 0$ for any parabolic end component~$E$ of $M\setminus\Omega$.
We call the function $u:M\setminus\Omega\to[0,1)$ defined by $u|_{\overline E}=u_E$ for each connected component~$E$ of $M\setminus\overline\Omega$ the \emph{harmonic measure of the ideal boundary of}~$M$ \emph{with respect to} $M\setminus\overline\Omega$. A sequence $\seq x\nu$ in $M$ with $x_\nu\to\infty$ and $G(\cdot,x_\nu)\to 0$ (equivalently, $u(x_\nu)\to 1$) is called a \emph{regular sequence}. Such a sequence always exists (for $M$ hyperbolic). A sequence $\seq x\nu$ tending to infinity with $\liminf_{\nu\to\infty}G(\cdot,x_\nu)>0$ (i.e., $\limsup_{\nu\to\infty}u(x_\nu)<1$, or equivalently, $\seq x\nu$ has no regular subsequences) is called an \emph{irregular sequence}. Clearly, every sequence tending to infinity that is not regular admits an irregular subsequence. We say that an end $E$ of $M$ is \emph{regular} (\emph{irregular}) if every sequence in~$E$ tending to infinity in~$M$ is regular (respectively, there exists an irregular sequence in~$E$). Another characterization of hyperbolicity is that $M$ is hyperbolic if and only if $M$ admits a nonconstant negative continuous subharmonic function~$\varphi$. In fact, if $\seq x\nu$ is a sequence in $M$ with $x_\nu\to\infty$ and $\varphi(x_\nu)\to 0$, then $\seq x\nu$ is a regular sequence. We recall that the \emph{energy} (or \emph{Dirichlet integral}) of a suitable function~$\varphi$ (for example, a function with first-order distributional derivatives) on a Riemannian manifold~$M$ is given by $\int_M|\nabla\varphi|^2\,dV$.
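To fix ideas, consider the following standard example (included only as an illustration). Let $M$ be the unit disk $\{\,z\in\C:|z|<1\,\}$ with the Euclidean metric. Up to a positive normalizing constant, $G(z,0)=\log(1/|z|)$, and for $\Omega=\{\,z\in\C:|z|<r\,\}$ with $0<r<1$, the harmonic measure of the ideal boundary with respect to $M\setminus\overline\Omega$ is \[ u(z)=\frac{\log(|z|/r)}{\log(1/r)}\qquad(r\leq|z|<1), \] which vanishes on $\partial\Omega$, is harmonic on $M\setminus\overline\Omega$, and satisfies $u(x_\nu)\to 1$ for every sequence $\seq x\nu$ with $|x_\nu|\to 1$. Thus the disk is hyperbolic, and every sequence tending to infinity is regular; that is, the disk is regular hyperbolic and admits no irregular sequences.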
To any \ensuremath{\mathcal C^{\infty}} compactly supported $\dbar$-closed $(0,1)$-form~$\alpha$ on a connected noncompact hyperbolic complete K\"ahler manifold $X$, we may associate a bounded finite-energy (i.e., Dirichlet-finite) pluriharmonic function on $X\setminus\text{\rm supp}\, \alpha$ that vanishes at infinity along any regular sequence: \begin{lem}[see, for example, Lemma~1.1 of \cite{NR-BH Regular hyperbolic Kahler}]\label{Cpt support dbar class gives plh fn lem} Let $X$ be a connected noncompact complete hyperbolic K\"ahler manifold, and let $\alpha$ be a \ensuremath{\mathcal C^{\infty}} compactly supported form of type~$(0,1)$ on~$X$ with $\dbar\alpha=0$. Then there exist a closed and coclosed $L^2$ harmonic form $\gamma$ of type $(0,1)$ and a \ensuremath{\mathcal C^{\infty}} bounded function $\beta\colon X\to\C$ with finite energy such that $\gamma=\alpha-\dbar\beta$ and $\beta(x_\nu)\to 0$ for every regular sequence $\seq x\nu $ in $X$. \end{lem} \begin{rmks} 1. In particular, $\bar\gamma$ is a holomorphic $1$-form on $X$, and $\beta$ is pluriharmonic on the complement of the support of~$\alpha$. 2. Under certain conditions, the leaves of the foliation determined by $\bar\gamma$ outside a large compact subset of~$X$ are compact, and one gets a proper holomorphic mapping onto a Riemann surface. 3. According to Lemma~\ref{holo vanish along regular sequence not hyp lem} below (which is a modification of an observation due to J.~Wang), if $\beta$ is holomorphic on some hyperbolic end, then $\beta$ vanishes on that end. \end{rmks} \begin{lem}[cf.~Lemma~1.3 of \cite{NR-BH Regular hyperbolic Kahler}]\label{holo vanish along regular sequence not hyp lem} Let $X$ be a connected noncompact complete (hyperbolic) K\"ahler manifold, and let $E$ be a hyperbolic end of~$X$. If $f$ is a bounded holomorphic function on~$E$ and $f(x_\nu)\to 0$ for every regular sequence~$\seq x\nu$ for $X$ in~$E$, then $f\equiv 0$ on~$E$.
\end{lem} \begin{pf} We may fix a nonempty smooth domain~$\Omega$ such that $\partial E\subset\Omega\Subset X$ and $X\setminus\Omega$ has no compact connected components. In particular, some component~$E_0$ of $E\setminus\overline\Omega$ is a hyperbolic end of~$X$. The harmonic measure of the ideal boundary of $X$ with respect to $X\setminus\overline\Omega$ is a nonconstant function $u\colon X\setminus\Omega\to[0,1)$. By replacing~$f$ with the product of~$f$ and a sufficiently small nonzero constant, we may assume that $|f|<1$ and hence for each $\epsilon>0$, $u+\epsilon\log|f|<0$ on $E\cap\partial\Omega$. Thus we get a nonnegative bounded continuous subharmonic function $\varphi_\epsilon$ on $X$ by setting $\varphi_\epsilon\equiv 0$ on $X\setminus E_0$ and $\varphi_\epsilon\equiv\max(0,u+\epsilon\log|f|)$ on~$E_0$. If $f(p)\neq 0$ at some point $p\in E_0$, then $\varphi_\epsilon(p)>0$ for $\epsilon$ sufficiently small. However, any sequence $\seq x\nu$ in~$E_0$ with $\varphi_\epsilon(x_\nu)\to m\equiv\sup\varphi_\epsilon>0$ must be a regular sequence and must therefore satisfy \(u(x_\nu)+\epsilon\log|f(x_\nu)|\to-\infty\), which contradicts the choice of $\seq x\nu$. Thus $f$ vanishes on $E_0$ and therefore, on~$E$. \end{pf} The above considerations lead to the following observation (cf.~Proposition~4.4 of~\cite{NR Weak Lefschetz}): \begin{thm}\label{Bochner Hartogs no L2 forms thm} Let $X$ be a connected noncompact hyperbolic complete K\"ahler manifold with no nontrivial $L^2$ holomorphic $1$-forms. \begin{enumerate} \item[(a)] For every compactly supported $\dbar$-closed \ensuremath{\mathcal C^{\infty}} form~$\alpha$ of type~$(0,1)$ on~$X$, there exists a bounded \ensuremath{\mathcal C^{\infty}} function~$\beta$ with finite energy on~$X$ such that $\dbar\beta=\alpha$ on~$X$ and $\beta$ vanishes on every hyperbolic end $E$ of~$X$ that is contained in $X\setminus\text{\rm supp}\, \alpha$.
\item[(b)] In any ends decomposition $X\setminus K=E_1\cup\cdots\cup E_m$, exactly one of the ends, say~$E_1$, is hyperbolic, and moreover, every holomorphic function on~$E_1$ admits a (unique) extension to a holomorphic function on~$X$. \item[(c)] If $e(X)=1$ (equivalently, every end of~$X$ is hyperbolic), then $H^1_c(X,\ol)=0$. \end{enumerate} \end{thm} \begin{pf} Given a compactly supported $\dbar$-closed \ensuremath{\mathcal C^{\infty}} form~$\alpha$ of type~$(0,1)$ on~$X$, Lemma~\ref{Cpt support dbar class gives plh fn lem} provides a bounded \ensuremath{\mathcal C^{\infty}} function $\beta$ with finite energy such that $\dbar\beta=\alpha$ and $\beta(x_\nu)\to 0$ for every regular sequence $\seq x\nu$ in $X$ (by hypothesis, the $L^2$ holomorphic $1$-form~$\bar\gamma$ provided by the lemma must be trivial). In particular, $\beta$ is holomorphic on $X\setminus\text{\rm supp}\, \alpha$, and Lemma~\ref{holo vanish along regular sequence not hyp lem} implies that $\beta$ must vanish on every hyperbolic end of~$X$ contained in~$X\setminus\text{\rm supp}\, \alpha$. Thus part~(a) is proved. For the proof of part~(b), suppose $X\setminus K=E_1\cup\cdots\cup E_m$ is an ends decomposition. Then at least one of the ends, say~$E_1$, must be hyperbolic. Given a function $f\in\ol(X\setminus K)$, we may fix a relatively compact neighborhood~$U$ of $K$ in~$X$ and a \ensuremath{\mathcal C^{\infty}} function~$\lambda$ on~$X$ such that $\lambda\equiv f$ on~$X\setminus U$. Applying part~(a) to the $(0,1)$-form $\alpha\equiv\dbar\lambda$, we get a \ensuremath{\mathcal C^{\infty}} function~$\beta$ such that $\dbar\beta=\alpha$ on~$X$ and $\beta\equiv 0$ on any hyperbolic end contained in~$X\setminus U$. If $E_j$ is a hyperbolic end (for example, if $j=1$), then $E_j\setminus U$ must contain a hyperbolic end~$E$ of~$X$, and the holomorphic function $h\equiv\lambda-\beta$ on~$X$ must agree with~$f$ on~$E$ and therefore, on~$E_j$.
Thus we get a holomorphic function on~$X$ that agrees with~$f$ on every $E_j$ which is hyperbolic. Taking $f$ to be a locally constant function on $X\setminus K$ with distinct values on the components $E_1,\dots,E_m$, we see that in fact, $E_j$ must be a parabolic end for $j=2,\dots,m$. Part~(c) follows immediately from parts~(a) and (b). \end{pf} We close this section with a preliminary step toward the proof of Theorem~\ref{BHD bounded geom hyperbolic thm}: \begin{lem}\label{BHD hyperbolic with plh bounded gradient etc lem} Suppose $(X,g)$ is a connected noncompact hyperbolic complete K\"ahler manifold with bounded geometry of order~$0$, $e(X)=1$, and there exists a real-valued pluriharmonic function~$\rho$ with bounded gradient and infinite energy on~$X$. Then $X$ admits a proper holomorphic mapping onto a Riemann surface if and only if $H^1_c(X,\ol)\neq 0$. \end{lem} \begin{pf} Given a compactly supported $\dbar$-closed \ensuremath{\mathcal C^{\infty}} form~$\alpha$ of type~$(0,1)$ on~$X$, Lemma~\ref{Cpt support dbar class gives plh fn lem} provides a closed and coclosed $L^2$ harmonic form $\gamma$ of type $(0,1)$ and a \ensuremath{\mathcal C^{\infty}} bounded function $\beta\colon X\to\C$ with finite energy such that $\gamma=\alpha-\dbar\beta$ and $\beta(x_\nu)\to 0$ for every regular sequence $\seq x\nu $ in $X$. If $\gamma\equiv 0$, then $\dbar\beta=\alpha$ and Lemma~\ref{holo vanish along regular sequence not hyp lem} implies that~$\beta$ vanishes on the complement of some compact set. If $\gamma$ is nontrivial, then the $L^2$ holomorphic $1$-form $\theta_1\equiv\bar\gamma$ and the bounded holomorphic $1$-form $\theta_2\equiv\partial\rho$, which is not in~$L^2$, must be linearly independent. Theorem~0.1 and Theorem~0.2 of~\cite{NR L2 Castelnuovo} then provide a proper holomorphic mapping of~$X$ onto a Riemann surface.
\end{pf} \begin{rmk} The proofs of Lemma~1.1 of~\cite{NR L2 Castelnuovo} and Theorem~0.1 of~\cite{NR L2 Castelnuovo} (the latter fact was applied above and relies on the former) contain a minor mistake in their application of continuity of intersections (see \cite{Stein} or \cite{Tworzewski-Winiarski Cont of intersect} or Theorem~4.23 in \cite{ABCKT}). In each of these proofs, one has a sequence of levels $\seq L\nu$ of a holomorphic mapping $f\colon X\to\mathbb P^1$ and a sequence of points $\seq x\nu$ such that $x_\nu\in L_\nu$ for each~$\nu$ and $x_\nu\to p$. For $L$ the level of~$f$ through~$p$, by continuity of intersections, $\seq L\nu$ converges to~$L$ relative to the ambient manifold $X\setminus[f ^{-1} (f(p))\setminus L]$, but contrary to what was stated in these proofs, a~priori, this convergence need not hold relative to~$X$. Aside from this small misstatement, the proofs are correct and no further changes are needed. \end{rmk} \section{Quasi-Dirichlet-finite pluriharmonic functions}\label{quasidirichlet finite fns sect} The following is the main advantage of working with \emph{irregular} hyperbolic manifolds: \begin{lem}[Nakai]\label{quasidirichlet finite irreg hyp exists lem} Let $(M,g)$ be a connected noncompact irregular hyperbolic oriented complete Riemannian manifold, let $\seq qk$ be an irregular sequence, let $G(\cdot,\cdot)$ be the Green's function, and let \(\rho_k\equiv G(\cdot,q_k)\colon M\to(0,\infty]\) for each~$k$. Then some subsequence of $\seq\rho k$ converges uniformly on compact subsets of~$M$ to a function~$\rho$.
Moreover, any such limit function~$\rho$ has the following properties: \begin{enumerate} \item[(i)] The function~$\rho$ is positive and harmonic; \item[(ii)] $\int_M|\nabla\rho|^2\,dV_g=\infty$; \item[(iii)] \(\int_{\rho ^{-1} ([a,b])}|\nabla\rho|_g^2\,dV_g\leq b-a\) for all $a$ and $b$ with $0\leq a<b$ (in particular, $\rho$ is unbounded); and \item[(iv)] If $\Omega$ is any smooth domain with compact boundary (i.e., either $\Omega$ is an end or $\Omega\Subset M$) and at most finitely many terms of the sequence $\seq qk$ lie in~$\Omega$, then \[ \sup_\Omega\rho=\max_{\partial\Omega}\rho<\infty\qquad\text{and}\qquad \int_\Omega|\nabla\rho|^2\,dV\leq\int_{\partial\Omega}\rho \pdof\rho\nu\,d\sigma<\infty. \] \end{enumerate} \end{lem} \begin{rmk} Following Nakai \cite{Nakai Green potential} and Sario and Nakai \cite{SarioNakai}, a positive function $\vphi$ on a Riemannian manifold $(M,g)$ is called \emph{quasi-Dirichlet-finite} if there is a positive constant~$C$ \st \[ \int_{\vphi ^{-1} ([0,b])}|\nabla\vphi|_g^2\,dV_g\leq Cb \] for every $b>0$. Nakai proved the existence of an Evans-type quasi-Dirichlet-finite positive harmonic function on an irregular Riemann surface. His arguments, which involve the behavior of the Green's function at the Royden boundary, carry over to a Riemannian manifold and actually show that the constructed function has the slightly stronger property appearing in the above lemma. One can instead prove the lemma via Nakai's arguments simply by taking $\rho=G(\cdot, q)$, where $G$ is the extension of the Green's function to the Royden compactification and $q$ is a point in the Royden boundary for which $\rho>0$ on $M$. The direct proof appearing below is essentially this latter argument.
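Note that property~(iii) of the lemma is indeed stronger than quasi-Dirichlet-finiteness: taking $a=0$ in~(iii) gives
\[
\int_{\rho ^{-1} ([0,b])}|\nabla\rho|_g^2\,dV_g\leq b
\]
for every $b>0$; that is, $\rho$ is quasi-Dirichlet-finite with $C=1$. Moreover, (iii) together with~(ii) forces $\rho$ to be unbounded: if $\rho\leq b_0$ on~$M$ for some constant~$b_0$, then $\rho ^{-1} ([0,b_0])=M$, and the above inequality would give $\int_M|\nabla\rho|_g^2\,dV_g\leq b_0<\infty$, contradicting~(ii).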
\end{rmk} \begin{pf*}{Proof of Lemma~\ref{quasidirichlet finite irreg hyp exists lem}} Fixing a sequence of nonempty smooth domains $\seq\Omega m_{m=0}^\infty$ such that $M\setminus\Omega_0$ has no compact connected components, $\bigcup_{m=0}^\infty\Omega_m=M$, and $\Omega_{m-1}\Subset\Omega_m$ for $m=1,2,3,\dots$, and letting $G_m$ be the Green's function on $\Omega_m$ for each~$m$, we get $G_m\nearrow G$. Given $m_0\in\Z_{>0}$, for each integer $m>m_0$ and each point $p\in\Omega_{m_0}$, the continuous function~$G_m(p,\cdot)\restrict{\overline\Omega_m\setminus\Omega_{m_0}}$ vanishes on~$\partial\Omega_m$, and the function is positive on $\partial\Omega_{m_0}$ and harmonic on $\Omega_m\setminus\overline\Omega_{m_0}$. Thus \[ G_m(p,\cdot)\restrict{\overline\Omega_m\setminus\Omega_{m_0}} \leq\max_{\partial\Omega_{m_0}}G_m(p,\cdot)\leq\max_{\partial\Omega_{m_0}} G(p,\cdot). \] Passing to the limit we get \[ G(p,\cdot)\leq\max_{\partial\Omega_{m_0}}G(p,\cdot) \] on $M\setminus\Omega_{m_0}$ for each point $p\in\Omega_{m_0}$. Hence \[ G\leq A_{m_0}\equiv\max_{\overline\Omega_{m_0-1}\times\partial\Omega_{m_0}}G \] on $\overline\Omega_{m_0-1}\times(M\setminus\Omega_{m_0})$. In particular, $\rho_k=G(\cdot,q_k)\leq A_{m_0}$ on $\overline\Omega_{m_0-1}$ for $k\gg 0$. Therefore, by replacing $\seq qk$ with a suitable subsequence, we may assume that $\rho_k$ converges uniformly on compact subsets of~$M$ to a positive harmonic function~$\rho$. Suppose $0<a<b$. Given $k\in\Z_{>0}$, for $m\gg 0$ we have $q_k\in\Omega_m$, and the function $\rho_k\ssp m\equiv G_m(\cdot,q_k)\colon\overline\Omega_m\to[0,\infty]$ satisfies \((\rho_k\ssp m) ^{-1} ((a,\infty])\Subset\Omega_m\).
Hence if $r$ and $s$ are regular values of $\rho_k\ssp m\restrict{\Omega_m\setminus\set{q_k}}$ with $a<r<s<b$, then \begin{align*} \int_{(\rho_k\ssp m) ^{-1} ((r,s))}|\nabla\rho_k\ssp m|^2\,dV& =\int_{(\rho_k\ssp m) ^{-1} (s)}\rho_k\ssp m\pdof{\rho_k\ssp m}{\nu}\,d\sigma -\int_{(\rho_k\ssp m) ^{-1} (r)}\rho_k\ssp m\pdof{\rho_k\ssp m}{\nu}\,d\sigma\\ &=\int_{(\rho_k\ssp m) ^{-1} (s)}s\cdot\pdof{\rho_k\ssp m}\nu\,d\sigma -\int_{(\rho_k\ssp m) ^{-1} (r)}r\cdot\pdof{\rho_k\ssp m}\nu\,d\sigma\\ &=(r-s)\int_{\partial\Omega_m}\pdof{\rho_k\ssp m}\nu\,d\sigma\\ &=(s-r)\int_{\partial\Omega_m}(-1)\pdwrt\nu\left[G_m(\cdot,q_k)\right]\,d\sigma\\ &=s-r, \end{align*} where $\partial/\partial\nu$ is the normal derivative oriented outward for the open sets $\Omega_m$, $(\rho_k\ssp m) ^{-1} ((0,s))$, and $(\rho_k\ssp m) ^{-1} ((0,r))$. Here we have normalized $G_m$ (and similarly, all Green's functions) so that $-\lap_{\text{distr.}} G_m(\cdot,q)$ is the Dirac function at~$q$ for each point~$q\in\Omega_m$. Letting $r\to a^+$ and $s\to b^-$, we get \[ \int_{(\rho_k\ssp m) ^{-1} ((a,b))}|\nabla\rho_k\ssp m|^2\,dV=(b-a). \] Letting $\chi_A$ denote the characteristic function of each set $A\subset M$, we have \[ \lim_{m\to\infty}|\nabla\rho_k\ssp m|=|\nabla\rho_k|\text{ on }M\setminus\set{q_k}\qquad\text{and}\qquad \liminf_{m\to\infty}\chi_{(\rho_k\ssp m) ^{-1} ((a,b))}\geq\chi_{\rho_k ^{-1} ((a,b))}. \] Hence Fatou's lemma gives \[ \int_{\rho_k ^{-1} ((a,b))}|\nabla\rho_k|^2\,dV\leq(b-a). \] Similarly, letting $k\to\infty$, we get \[ \int_{\rho ^{-1} ((a,b))}|\nabla\rho|^2\,dV\leq(b-a). \] Applying this inequality to $a'$ and $b'$ with $0<a'<a$ and $b<b'$ and letting $a'\to a^-$ and $b'\to b^+$, we get \[ \int_{\rho ^{-1} ([a,b])}|\nabla\rho|^2\,dV\leq(b-a). \] Letting $a\to 0^+$ (and noting that $\rho>0$), we also get the above inequality for $a=0$. Assuming now that $\rho$ has finite energy, we will reason to a contradiction.
We may fix a constant~$b>\sup_{\Omega_0}\rho$ that is a regular value of~$\rho$, of $\rho_k\restrict{M\setminus\set{q_k}}$ for all~$k$, and of $\rho_k\ssp m\restrict{\Omega_m\setminus\set{q_k}}$ for all $k$ and~$m$. Note that we have not yet shown that~$\rho$ is unbounded, so we have not yet ruled out the possibility that $\rho ^{-1} ((0,b))=M$, and in particular, that $\rho ^{-1} (b)=\emptyset$. Since $\rho_k\to\rho$ uniformly on compact subsets of~$M$ as $k\to\infty$, and for each~$k$, $\rho_k\ssp m\to\rho_k$ uniformly on compact subsets of $M\setminus\set{q_k}$ as $m\to\infty$, we may fix a positive integer~$k_0$ and a strictly increasing sequence of positive integers $\seq mk$ \st $q_k\in\Omega_{m_k}$ for each~$k$, $\rho_k\ssp{m_k}\leq\rho_k<b$ on $\overline\Omega_0$ for each~$k\geq k_0$, and $\rho_k\ssp{m_k}\to\rho$ uniformly on compact sets as $k\to\infty$. Letting $\vphi\equiv\min(\rho,b)$ and letting $\vphi_k\colon M\to[0,b]$ be the Lipschitz function given by \[ \vphi_k\equiv \begin{cases} {\min(\rho_k\ssp{m_k},b)}&{\text{ on }\overline\Omega_{m_k}}\\ {0}&{\text{ elsewhere}} \end{cases} \] for each~$k$, we see that $\vphi_k\to\vphi$ uniformly on compact subsets of~$M$ and $\nabla\vphi_k\to\nabla\vphi$ uniformly on compact subsets of $M\setminus\rho ^{-1} (b)$. Moreover, for each $k$, \[ \int_M|\nabla\vphi_k|^2\,dV =\int_{(\rho_k\ssp{m_k}) ^{-1} ((0,b))}|\nabla\rho_k\ssp{m_k}|^2\,dV=b. \] Applying weak compactness, we may assume that $\set{\nabla\vphi_k}$ converges weakly in $L^2$ to a vector field~$v$. But for each compact set $K\subset M\setminus\rho ^{-1} (b)$, $(\nabla\vphi_k)\restrict K\to(\nabla\vphi)\restrict K$ uniformly, and therefore in~$L^2$. Since $\rho ^{-1} (b)$ is a set of measure~$0$, we must have $v=\nabla\vphi$ (in~$L^2$). 
Hence \begin{align*} \int_{\rho ^{-1} ((0,b))}|\nabla\rho|^2\,dV =\langle\nabla\vphi,\nabla\rho\rangle\leftarrow\langle\nabla\vphi_k,\nabla\rho\rangle &=\int_{\partial\Omega_{m_k}}\rho_k\ssp{m_k}\pdof \rho\nu\,d\sigma\\ &\qquad\qquad\qquad+\int_{(\rho_k\ssp{m_k}) ^{-1} (b)}\rho_k\ssp{m_k}\pdof \rho\nu\,d\sigma\\ &=0-b\int_{\partial\left((\rho_k\ssp{m_k}) ^{-1} ((b,\infty])\right)} \pdof\rho\nu\,d\sigma=0. \end{align*} It follows that $\rho\equiv a$ for some constant~$a$ (in particular, $0<a<b$). Letting $u$ be the harmonic measure of the ideal boundary of $M$ with respect to $M\setminus\overline\Omega_0$ and letting $\psi\colon M\to[0,1)$ be the Dirichlet-finite locally Lipschitz function on $M$ obtained by extending $u$ by~$0$, we get \begin{align*} 0=\langle 0,\nabla\psi\rangle \leftarrow\langle\nabla\vphi_k,\nabla\psi\rangle &=\int_{\partial\Omega_{m_k}}\rho_k\ssp{m_k}\pdof u\nu\,d\sigma-\int_{\partial\Omega_0}\rho_k\ssp{m_k}\pdof u\nu\,d\sigma\\ &\qquad\qquad\qquad\qquad\qquad+\int_{(\rho_k\ssp{m_k}) ^{-1} (b)}\rho_k\ssp{m_k}\pdof u\nu\,d\sigma\\ &=0-\int_{\partial\Omega_0}\rho_k\ssp{m_k}\pdof u\nu\,d\sigma-b\int_{\partial\left((\rho_k\ssp{m_k}) ^{-1} ((b,\infty])\right)} \pdof u\nu\,d\sigma\\ &=-\int_{\partial\Omega_0}\rho_k\ssp{m_k}\pdof u\nu\,d\sigma\to-a\int_{\partial\Omega_0}\pdof u\nu\,d\sigma<0. \end{align*} Thus we have arrived at a contradiction, and hence $\rho$ must have infinite energy. Finally, given a smooth domain~$\Omega$ as in~(iv), for each $k\gg 0$, we have $q_k\notin\overline\Omega$. For $m\gg 0$, we have $q_k\in\Omega_m$ and $\partial\Omega\subset\Omega_m$. Since $\rho_k\ssp m$ is then continuous on $\overline{\Omega\cap\Omega_m}$, harmonic on $\Omega\cap\Omega_m$, and zero on $\partial\Omega_m$, we also have \[ \sup_{\Omega\cap\Omega_m}\rho_k\ssp m=\max_{\partial\Omega}\rho_k\ssp m\qquad\text{and}\qquad \int_{\Omega\cap\Omega_m}|\nabla\rho_k\ssp m|^2\,dV =\int_{\partial\Omega}\rho_k\ssp m\pdof{\rho_k\ssp m}\nu\,d\sigma. 
\] Letting $m\to\infty$, and then letting $k\to\infty$, we get the required properties of~$\rho$ on~$\Omega$. \end{pf*} We will also use the following analogue of a theorem of Sullivan (see~\cite{Sullivan} and Theorem~2.1 of~\cite{NR-Structure theorems}): \begin{lem}\label{quasidirichlet harm bdd grad lem} Let $(M,g)$ be a connected noncompact oriented complete Riemannian manifold, $E$ an end of~$M$, and $h$ a positive \ensuremath{\mathcal C^{\infty}} function on~$M$. Assume that: \begin{enumerate} \item[(i)] There exist positive constants~$K$, $R_0$, and $\delta$ such that \(\text{\rm Ric}_g\geq-Kg\) on~$E$ and \(\text{\rm vol}\,(B(x;R_0))\geq\delta\) for every point $x\in E$; \item[(ii)] The restriction $h\restrict E$ is harmonic; and \item[(iii)] For some positive constant $C$, $\int_{E\cap h ^{-1} ([a,b])}|\nabla h|^2\,dV\leq C(b-a)+C$ for all $a$ and $b$ with $0\leq a<b$. \end{enumerate} Then $|\nabla h|$ is bounded on~$E$, and for each point $p\in M$, \[ \int_{E\cap B(p;R)}|\nabla h|^2\,dV=O(R)\text{ as }R\to\infty. \] \end{lem} \begin{pf*}{Sketch of the proof} We may fix a nonempty compact set $A\supset\partial E$. As in the proof of Theorem~2.1 of~\cite{NR-Structure theorems}, setting $\vphi\equiv|\nabla h|^2$, we get a positive constant $C_1$ \st for each point $x_0\in E$ with $\text{dist}(x_0,A)>R_1\equiv 4R_0$, \[ \sup_{B(x_0;R_0)}\vphi\leq C_1\int_{B(x_0;2R_0)}\vphi\,dV. \] For $a\equiv\inf_{B(x_0;2R_0)}h$ and $b\equiv\sup_{B(x_0;2R_0)}h$, we have on the one hand, \[ \int_{B(x_0;2R_0)}\vphi\,dV\leq\int_{E\cap h ^{-1} ([a,b])}\vphi\,dV\leq C(b-a)+C. \] On the other hand, $b-a\leq R_1\sup_{B(x_0;R_1)}|\nabla h|$. Combining the above, we see that if $C_2>1$ is a sufficiently large positive constant that is, in particular, greater than the supremum of $|\nabla h|$ on the $2R_1$-neighborhood of~$A$, then for each point $x_0\in E$ at which $|\nabla h(x_0)|>C_2$, we have \[ \sup_{B(x_0;R_0)}|\nabla h|^2<C_2\sup_{B(x_0;R_1)}|\nabla h|.
\] Fixing constants $C_3>C_2$ and $\epsilon>0$ so that $C_3^{1-\epsilon}>C_2$, we see that if $|\nabla h(x_0)|>C_3$, then there exists a point $x_1\in B(x_0;R_1)$ \st \[ (1+\epsilon)\log|\nabla h(x_0)|\leq\log|\nabla h(x_1)|. \] Assuming now that $|\nabla h|$ is unbounded on~$E$, we will reason to a contradiction. Fixing a point $x_0\in E$ at which $|\nabla h(x_0)|>C_3$ and applying the above inequality inductively, we get a sequence $\seq xm$ in $E$ such that $\text{dist}(x_m,x_{m-1})<R_1$ and \[ (1+\epsilon)\log|\nabla h(x_{m-1})|\leq\log|\nabla h(x_m)| \] for $m=1,2,3,\dots$; that is, $\set{|\nabla h(x_m)|}$ has super-exponential growth. However, the local version of Yau's Harnack inequality (see \cite{Cheng-Yau Diff eqs mflds}) provides a constant $C_4>0$ such that \[ |\nabla h(x)|\leq C_4h(x)\qquad\text{and}\qquad h(x)\leq C_4 h(p) \] for all points $x,p\in M$ with $\text{dist}(p,A)>2R_1$ and $\text{dist}(x,p)<R_1$, so $\set{|\nabla h(x_m)|}$ has at most exponential growth. Thus we have arrived at a contradiction, and hence $|\nabla h|$ must be bounded on~$E$. Finally, by redefining $h$ outside a neighborhood of~$\overline E$, we may assume without loss of generality that $|\nabla h|$ is bounded on~$M$. Fixing a point $p\in M$, we see that for $R>0$, $a\equiv\inf_{B(p;R)}h$, and $b\equiv\sup_{B(p;R)}h$, we have \[ \int_{E\cap B(p;R)}|\nabla h|^2\,dV\leq\int_{E\cap h ^{-1} ([a,b])}|\nabla h|^2\,dV\leq C(b-a)+C\leq C\cdot\sup|\nabla h|\cdot 2R+C. \] Therefore, \[ \int_{E\cap B(p;R)}|\nabla h|^2\,dV=O(R)\text{ as }R\to\infty. 
\] \end{pf*} Applying the above in the K\"ahler setting, we get the following: \begin{prop}\label{plh fn infinite energy exist kahler prop} Let $(X,g)$ be a connected noncompact complete K\"ahler manifold, let $E$ be an irregular hyperbolic end along which $X$ has bounded geometry of order~$2$ (or for which there exist positive constants~$K$, $R_0$, and $\delta$ such that \(\text{\rm Ric}_g\geq-Kg\) on~$E$ and \(\text{\rm vol}\,(B(x;R_0))\geq\delta\) for every point $x\in E$), let $\seq qk$ be an irregular sequence in~$E$, let $G(\cdot,\cdot)$ be the Green's function on~$X$, and let \(\rho_k\equiv G(\cdot,q_k)\colon X\to(0,\infty]\) for each~$k$. Then some subsequence of $\seq\rho k$ converges uniformly on compact subsets of~$X$ to a function~$\rho$. Moreover, any such limit function~$\rho$ has the following properties: \begin{enumerate} \item[(i)] The function~$\rho$ is positive and pluriharmonic; \item[(ii)] $\int_E|\nabla\rho|^2\,dV=\infty>\int_{X\setminus E}|\nabla\rho|^2\,dV$; \item[(iii)] \(\int_{\rho ^{-1} ([a,b])}|\nabla\rho|^2\,dV\leq b-a\) for all $a$ and $b$ with $0\leq a<b$ (in particular, $\rho$ is unbounded on~$E$); \item[(iv)] If $\Omega$ is any smooth domain with compact boundary (i.e., either $\Omega$ is an end or $\Omega\Subset X$) and at most finitely many terms of the sequence $\seq qk$ lie in~$\Omega$, then \[ \sup_\Omega\rho=\max_{\partial\Omega}\rho<\infty\qquad\text{and}\qquad \int_\Omega|\nabla\rho|^2\,dV\leq\int_{\partial\Omega}\rho \pdof\rho\nu\,d\sigma<\infty; \] and \item[(v)] $|\nabla\rho|$ is bounded. \end{enumerate} \end{prop} \begin{pf} By Lemma~\ref{quasidirichlet finite irreg hyp exists lem}, some subsequence of $\seq\rho k$ converges uniformly on compact sets, and the limit~$\rho$ of any such subsequence is positive and harmonic and satisfies (ii)--(iv).
Lemma~\ref{quasidirichlet harm bdd grad lem} implies that $|\nabla\rho|$ is bounded on~$E$ and for $p\in X$, \(\int_{B(p;R)}|\nabla\rho|^2\,dV=O(R)\) (hence $\int_{B(p;R)}|\nabla\rho|^2\,dV=o(R^2)$) as \(R\to\infty\). By an observation of Gromov~\cite{Gro-Kahler hyperbolicity} and of Li~\cite{Li Structure complete Kahler} (see Corollary~2.5 of~\cite{NR-Structure theorems}), $\rho$ is pluriharmonic. \end{pf} \section{Proof of the main result and some related results}\label{proof main thm sect} This section contains the proof of Theorem~\ref{BHD bounded geom hyperbolic thm}. We also consider some related results. \begin{pf*}{Proof of Theorem~\ref{BHD bounded geom hyperbolic thm}} Let $X$ be a connected noncompact hyperbolic complete K\"ahler manifold with bounded geometry of order two, and assume that $X$ has exactly one end. By the main result of~\cite{NR-BH Regular hyperbolic Kahler}, the Bochner--Hartogs dichotomy holds when $X$ is regular hyperbolic. If $X$ is irregular hyperbolic, then Proposition~\ref{plh fn infinite energy exist kahler prop} provides a (quasi-Dirichlet-finite) positive pluriharmonic function~$\rho$ on~$X$ with infinite energy and bounded gradient, and hence Lemma~\ref{BHD hyperbolic with plh bounded gradient etc lem} gives the claim. \end{pf*} The above arguments together with those appearing in \cite{NR-Structure theorems}, \cite{NR Filtered ends}, and \cite{NR L2 Castelnuovo} give results for multi-ended complete K\"ahler manifolds. To see this, we first recall some terminology and facts. \begin{defn}\label{ends filtered ends def} Let $M$ be a connected manifold.
Following Geoghegan \cite{Geoghegan} (see also Kropholler and Roller \cite{KroR}), for $\Upsilon\colon\widetilde M\to M$ the universal covering of $M$, elements of the set \[ \lim_{\leftarrow}\pi_0 [\Upsilon ^{-1} (M\setminus K)], \] where the limit is taken as $K$ ranges over the compact subsets of $M$ (or the compact subsets of $M$ for which the complement $M\setminus K$ has no relatively compact components) will be called {\it filtered ends}. The number of filtered ends of $M$ will be denoted by~$\tilde e(M)$. \end{defn} \begin{lem}\label{Behavior of filtered ends for covering lem} Let $M$ be a connected noncompact topological manifold. \begin{enumerate} \item[(a)] We have $\tilde e(M)\geq e(M)$. In fact, for any $k\in\N$, we have $\tilde e(M)\geq k$ if and only if there exists an ends decomposition $M\setminus K=E_1\cup \cdots \cup E_m$ \st \[ \sum _{j=1}^m[\pi _1(M):\Gamma _j]\geq k, \] where $\Gamma_j\equiv\text{\rm Im}\,[\pi_1(E_j)\to\pi_1(M)]$ for $j=1,\dots,m$. \item[(b)] If $\Upsilon\colon\widehat M\to M$ is a connected covering space, $E$ is an end of~$\what M$, and $E_0\equiv\Upsilon(E)\varsubsetneqq M$, then \begin{enumerate} \item[(i)] $E_0$ is an end of~$M$; \item[(ii)] $\partial E_0=\Upsilon(\partial E)\setminus E_0$; \item[(iii)] $\overline E\cap\Upsilon ^{-1} (\partial E_0)=(\partial E)\setminus\Upsilon ^{-1} (E_0)$; \item[(iv)] The mapping $\Upsilon\restrict{\overline E}\colon\overline E\to\overline E_0$ is proper and surjective; and \item[(v)] If $F_0\subset E_0\setminus\Upsilon(\partial E)$ is an end of~$M$ and $F\equiv E\cap\Upsilon ^{-1} (F_0)$, then $\Upsilon\restrict F\colon F\to F_0$ is a finite covering and each connected component of $F$ is an end of~$\what M$. \end{enumerate} \item[(c)] If $\Upsilon\colon\widehat M\to M$ is a connected covering space, then $\tilde e(\widehat M)\leq\tilde e(M)$, with equality holding if the covering is finite.
\end{enumerate} \end{lem} \begin{pf} For any nonempty domain~$U$ in~$M$, the index of $\text{\rm Im}\,[\pione U\to\pione M]$ is equal to the number of connected components of the lifting of $U$ to the universal covering of~$M$, so part~(a) holds. For the proof of part~(b), observe that $E_0$ is a domain in~$M$, $\partial E_0\neq\emptyset$, $\Upsilon(\overline E)\subset\overline E_0$, and therefore, $\overline E\cap\Upsilon ^{-1} (\partial E_0)=(\partial E)\setminus\Upsilon ^{-1} (E_0)$. Given a point $p\in M$, we may fix domains $U$ and $V$ in $M$ such that $p\in U\Subset V$, $U\cap\partial E_0\neq\emptyset$, and the image of $\pione V$ in $\pione M$ is trivial (their existence is trivial if $p\in\partial E_0$, while for $p\notin\partial E_0$, we may take $U$ and $V$ to be sufficiently small connected neighborhoods of the image of an injective path from~$p$ to a point in $\partial E_0$). The connected components of $\what U\equiv\Upsilon ^{-1} (U)$ then form a locally finite collection of relatively compact domains in $\what M$, and those components that meet~$\overline E$ must also meet the compact set $\partial E$, so only finitely many components, say $U_1,\dots,U_m$, meet~$\overline E$. Thus for $U_0\equiv\bigcup_{i=1}^mU_i$, we have $\what U\cap\overline E=U_0\cap\overline E\Subset\what M$, and it follows that the restriction $\overline E\to\overline E_0$ is a proper mapping. In particular, this is a closed mapping, and hence $\Upsilon(\overline E)=\overline E_0$. Furthermore, the boundary \[ \partial E_0=\Upsilon(\overline E)\setminus\Upsilon(E)=\Upsilon(\partial E)\setminus E_0 \] is compact and $\overline E_0$ is noncompact (by properness), so $E_0$ must be an end of~$M$. Finally, if $F_0\subset E_0\setminus\Upsilon(\partial E)$ is an end of~$M$, then each connected component of $\Upsilon ^{-1} (F_0)$ that meets $E$ must lie in~$E$. Thus the restriction $F\equiv E\cap\Upsilon ^{-1} (F_0)\to F_0$ is a covering space.
Properness then implies that this restriction is actually a finite covering, $\partial F\subset\overline E\cap\Upsilon ^{-1} (\partial F_0)$ is compact, and in particular, each connected component of~$F$ is an end of~$\what M$. For the proof of part~(c), let $\what\Upsilon\colon\wtil M\to\what M$ be the universal covering, and let $k\in\N$ with $\tilde e(\what M)\geq k$. Then there exists an ends decomposition $\what M\setminus L=F_1\cup\cdots\cup F_n$ such that $\what\Upsilon ^{-1} (\what M\setminus L)$ has at least $k$ connected components, and there exists an ends decomposition $M\setminus K=E_1\cup\cdots\cup E_m$ \st $K\supset\Upsilon(L)$. For each~$j=1,\dots,n$, part~(b) implies that $\Upsilon(F_j)\not\subset K$, and hence $F_j$ meets, and therefore contains, some connected component of $\Upsilon ^{-1} (M\setminus K)$. Thus, under the universal covering $\Upsilon\circ\what\Upsilon\colon\wtil M\to M$, the inverse image of $M\setminus K$ has at least $k$ connected components, and therefore $\tilde e(M)\geq k$. Thus $\tilde e(M)\geq\tilde e(\what M)$. Furthermore, if $\Upsilon$ is a finite covering map, then the connected components of the liftings of the ends in any ends decomposition of $M$ form an ends decomposition for~$\what M$. Hence in this case we have $\tilde e(\what M)\geq\tilde e(M)$, and therefore we have equality.
\end{pf} \begin{defn}[cf.~Definition~2.2 of~\cite{NR Filtered ends}]\label{Special end definition} We will call an end $E$ of a connected noncompact complete Hermitian manifold $X$ \emph{special} if $E$ is of at least one of the following types: \begin{enumerate} \item[(BG)] $X$ has bounded geometry of order $2$ along $E$; \item[(W)] There exists a continuous plurisubharmonic function $\vphi$ on~$X$ \st \[ \setof{x\in E}{\vphi(x)<a}\Subset X\qquad\forall\,a\in\R; \] \item[(RH)] $E$ is a hyperbolic end and the Green's function vanishes at infinity along $E$; or \item[(SP)] $E$ is a parabolic end, the Ricci curvature of $g$ is bounded below on $E$, and there exist positive constants $R$ and $\delta$ such that \(\text{\rm vol}\,\big(B(x;R)\big)>\delta\) for all $x\in E$. \end{enumerate} We will call an ends decomposition in which each of the ends is special a \emph{special ends decomposition}. \end{defn} According to \cite{Gro-Sur la groupe fond}, \cite{Li Structure complete Kahler}, \cite{Gro-Kahler hyperbolicity}, \cite{Gromov-Schoen}, \cite{NR-Structure theorems}, \cite{Delzant-Gromov Cuts}, \cite{NR Filtered ends}, and \cite{NR L2 Castelnuovo}, a connected noncompact complete K\"ahler manifold~$X$ that admits a special ends decomposition and has at least three filtered ends admits a proper holomorphic mapping onto a Riemann surface. One goal of this section is to show that if $X$ has an irregular hyperbolic end of type~(BG), then two filtered ends suffice. \begin{thm}\label{BG irregular hyp 2 ends thm} If $X$ is a connected noncompact hyperbolic complete K\"ahler manifold that admits a special ends decomposition $X\setminus K=E_1\cup\cdots\cup E_m$ for which $E_1$ is an irregular hyperbolic end (i.e., $E_1$ contains an irregular sequence for $X$) of type~(BG) and $m\geq 2$, then $X$ admits a proper holomorphic mapping onto a Riemann surface. 
\end{thm} \begin{pf*}{Sketch of the proof} Every end lying in a special end is itself special, so by the main results of~\cite{NR-Structure theorems} and~\cite{NR Filtered ends}, we may assume that $m=e(X)=2$. Moreover, as in the proof of Theorem~3.4 of \cite{NR-Structure theorems}, we may also assume that $E_2$ is a hyperbolic end of type~(BG). Theorem~2.6 of~\cite{NR-Structure theorems} then provides a nonconstant bounded positive Dirichlet-finite pluriharmonic function~$\rho_1$ on~$X$. Proposition~\ref{plh fn infinite energy exist kahler prop} implies that $X$ also admits a positive (quasi-Dirichlet-finite) pluriharmonic function~$\rho_2$ with bounded gradient and infinite energy. In particular, the holomorphic $1$-forms $\theta_1\equiv\partial\rho_1$ and $\theta_2\equiv\partial\rho_2$ are linearly independent, and Theorems~0.1~and~0.2 of \cite{NR L2 Castelnuovo} give a proper holomorphic mapping of~$X$ onto a Riemann surface. \end{pf*} \begin{lem}[cf.~Proposition~4.1 of~\cite{NR-BH Regular hyperbolic Kahler}]\label{cover to RS gives map to RS lem} Let $(X,g)$ be a connected noncompact complete K\"ahler manifold. If $X$ admits a special ends decomposition and some connected covering space $\Upsilon\colon\what X\to X$ admits a proper holomorphic mapping onto a Riemann surface, then $X$ admits a proper holomorphic mapping onto a Riemann surface. \end{lem} \begin{pf} The Cartan--Remmert reduction of $\what X$ is given by a proper holomorphic mapping $\what\Phi\colon\what X\to\what S$ of $\what X$ onto a Riemann surface~$\what S$ with $\what\Phi_*\ol_{\what X}=\ol_{\what S}$. Fixing a fiber~$\what Z_0$ of~$\what\Phi$, we may form a relatively compact connected neighborhood~$\what U_0$ of $\what Z_0$ in~$\what X$ and a nonnegative \ensuremath{\mathcal C^{\infty}} plurisubharmonic function~$\hat\vphi_0$ on~$\what X\setminus\what Z_0$ such that $\hat\vphi_0$ vanishes on $\what X\setminus\what U_0$ and $\hat\vphi_0\to\infty$ at $\what Z_0$.
The image~$Z_0\equiv\Upsilon(\what Z_0)$ is then a connected compact analytic subset of~$X$, and the function $\vphi_0\colon x\mapsto\sum_{y\in\Upsilon ^{-1} (x)}\hat\vphi_0(y)$ is a nonnegative \ensuremath{\mathcal C^{\infty}} plurisubharmonic function on the domain $X\setminus Z_0$ that vanishes on the complement of the relatively compact connected neighborhood~$U_0\equiv\Upsilon(\what U_0)$ of~$Z_0$ in~$X$ and satisfies $\vphi_0\to\infty$ at~$Z_0$. We may form a special ends decomposition $X\setminus K=E_1\cup\cdots\cup E_m$ with $K\supset U_0$, and setting $K_0\equiv K\setminus U_0$ and $E_0\equiv U_0\setminus Z_0$, we get an ends decomposition \[ (X\setminus Z_0)\setminus K_0=E_0\cup E_1\cup\cdots\cup E_m \] of $X\setminus Z_0$. By part~(d) of Lemma~\ref{basic ends lem}, for $a\gg 0$, the set $\setof{x\in X\setminus Z_0}{\vphi_0(x)<a}$ has a connected component~$Y_0$ that contains $X\setminus U_0$ and has the ends decomposition \[ Y_0\setminus K_0=E_0'\cup\cdots\cup E_m', \] where $E_j'\equiv E_j\cap Y_0$ for $j=0,\dots,m$. In particular, the above is a special ends decomposition for the complete K\"ahler metric $g_0\equiv g+\lev{-\log(a-\vphi_0)}$ on~$Y_0$ (see, for example, \cite{Dem}), with $E_0'$ regular hyperbolic and of type~(W). Here, for any $\cal C^2$ function~$\psi$, $\lev\psi$ denotes the \emph{Levi form} of~$\psi$; that is, in local holomorphic coordinates $(z_1,\dots,z_n)$, \[ \lev\psi=\sum_{j,k=1}^n\frac{\partial^2\psi}{\partial z_j\partial\bar z_k} dz_jd\bar z_k. \] Theorem~3.6 of~\cite{NR L2 Castelnuovo} implies that there exists a nonconstant nonnegative continuous plurisubharmonic function on~$Y_0$ that vanishes on \(E_0'\cup K_0\) and, therefore, extends to a continuous plurisubharmonic function~$\alpha$ on~$X$ that vanishes on~$K$.
Fixing a fiber $\what Z_1$ of $\what\Phi$ through a point at which $\alpha\circ\Upsilon>0$, we see that, since $\alpha\circ\Upsilon$ is constant on~$\what Z_1$, the image $Z_1\equiv\Upsilon(\what Z_1)$ must be a connected compact analytic subset of~$X\setminus K\subset Y_0\subset X\setminus Z_0$. As above, we get a domain $Y_1\subset Y_0$, a complete K\"ahler metric~$g_1$ on~$Y_1$, and a special ends decomposition of $(Y_1,g_1)$ with at least three ends. Therefore, by Theorem~3.4 of \cite{NR-Structure theorems} (or Theorem~3.1 of~\cite{NR Filtered ends}), there exists a proper holomorphic mapping $\Phi_1\colon Y_1\to S_1$ of $Y_1$ onto a Riemann surface~$S_1$ such that $(\Phi_1)_*\ol_{Y_1}=\ol_{S_1}$. Forming the complement in~$X$ of two distinct fibers of~$\Phi_1$ and applying a construction similar to the above, we get a proper holomorphic mapping $\Phi_2\colon Y_2\to S_2$ of a domain $Y_2\supset X\setminus Y_1$ in~$X$ onto a Riemann surface~$S_2$ such that $(\Phi_2)_*\ol_{Y_2}=\ol_{S_2}$. The maps $\Phi_1$ and $\Phi_2$ now determine a proper holomorphic mapping~$\Phi$ of $X$ onto the Riemann surface \[ S\equiv (S_1\sqcup S_2)\big\slash\left[\Phi_1(x)\sim\Phi_2(x)\quad\forall\,x\in Y_1\cap Y_2\right]. \] \end{pf} \begin{rmks} 1. The authors do not know whether or not the above lemma holds in general when the base is an arbitrary connected noncompact complete K\"ahler manifold. 2. When the base is a complete K\"ahler manifold with bounded geometry (which is the relevant case for this paper), one may instead obtain the lemma from properness of the projection from the graph over a suitable irreducible component of the appropriate Barlet cycle space as in (Theorem~3.18 and the appendix of)~\cite{Campana}. \end{rmks} \begin{thm}\label{BG irreg hyp no BH cover thm} Suppose $X$ is a connected noncompact irregular hyperbolic complete K\"ahler manifold with bounded geometry of order~$2$ and $e(X)=1$.
If $X$ admits a connected covering space $\Upsilon\colon\what X\to X$ with $H^1_c(\what X,\ol)\neq 0$, then $X$ admits a proper holomorphic mapping onto a Riemann surface. \end{thm} \begin{pf} Clearly, $\what X$ has bounded geometry of order~$2$. If $e(\what X)\geq 3$ or $e(\what X)=1$, then $\what X$ admits a proper holomorphic mapping onto a Riemann surface, and Lemma~\ref{cover to RS gives map to RS lem} provides such a mapping on~$X$. Thus we may assume that $e(\what X)=2$, and we may fix an ends decomposition $\what X\setminus K=E_1\cup E_2$. By part~(b) of Lemma~\ref{Behavior of filtered ends for covering lem}, for $j=1,2$, $\Upsilon(E_j)$ is a hyperbolic end of~$X$. It follows that $E_j$ is a hyperbolic end of~$\what X$, since the lifting to~$\what X$ of a negative continuous subharmonic function with supremum~$0$ on~$X$ is a negative continuous subharmonic function on~$\what X$ with supremum~$0$ along~$E_j$. Proposition~\ref{plh fn infinite energy exist kahler prop} provides an unbounded positive pluriharmonic function~$\rho_1$ with bounded gradient and infinite energy on~$X$, and we may set $\hat\rho_1\equiv\rho_1\circ\Upsilon$. Theorem~2.6 of~\cite{NR-Structure theorems} provides a nonconstant bounded pluriharmonic function~$\hat\rho_2$ with finite energy on~$\what X$, and Theorems~0.1 and 0.2 of \cite{NR L2 Castelnuovo}, applied to the holomorphic $1$-forms $\partial\hat\rho_1$ and $\partial\hat\rho_2$, give a proper holomorphic mapping of $\what X$ onto a Riemann surface. Lemma~\ref{cover to RS gives map to RS lem} then gives the required mapping on~$X$. \end{pf} Proposition~\ref{BH con boundary prop} provides some topological conditions that give nonvanishing of the first compactly supported cohomology with values in the structure sheaf.
In particular, since any manifold with at least two filtered ends admits a connected covering space with at least two ends, we get the following consequence of Theorem~\ref{BG irreg hyp no BH cover thm} (one may instead apply Theorem~\ref{BG irregular hyp 2 ends thm} and Lemma~\ref{cover to RS gives map to RS lem}): \begin{cor}\label{BG irreg hyp 2 filtered end cor} If $X$ is a connected noncompact irregular hyperbolic complete K\"ahler manifold with bounded geometry of order~$2$, $e(X)=1$, and $\tilde e(X)\geq 2$, then $X$ admits a proper holomorphic mapping onto a Riemann surface. \end{cor} We also get the following: \begin{cor}\label{BG irreg hyp 2 pione boundary domain cor} Suppose $X$ is a connected noncompact irregular hyperbolic complete K\"ahler manifold with bounded geometry of order~$2$, $e(X)=1$, $\Omega$ is a nonempty smooth relatively compact domain in~$X$ for which $E\equiv X\setminus\overline\Omega$ is connected (i.e., $E$ is an end), and $\Gamma'\equiv\text{\rm im}\,[\pione{\overline\Omega}\to\pione{X}]$. If either $\partial\Omega$ is not connected, or $\partial\Omega$ is connected but $\pione{\partial\Omega}$ does not surject onto $\Gamma'$, then $X$ admits a proper holomorphic mapping onto a Riemann surface. \end{cor} \begin{pf} If $\partial\Omega$ is not connected, then part~(a) of Proposition~\ref{BH con boundary prop} implies that $H^1_c(X,\ol)\neq 0$, and hence $X$ admits a proper holomorphic mapping onto a Riemann surface. Suppose instead that $C\equiv\partial\Omega$ is connected, but \(\Gamma\equiv\text{im}\,\left[\pione{C}\to\pione X\right] \subsetneqq\Gamma'\). For a connected covering space $\Upsilon\colon\what X\to X$ with $\Upsilon_*\pione{\what X}=\Gamma$, $\Upsilon$ maps some relatively compact connected neighborhood~$U_0$ of some connected component~$C_0$ of $\what C\equiv\Upsilon ^{-1} (C)$ isomorphically onto a neighborhood~$U$ of~$C$.
By Theorem~\ref{BG irreg hyp no BH cover thm}, we may assume that $e(\what X)=1$. The unique connected component~$\Omega_0$ of $\what\Omega\equiv\Upsilon ^{-1} (\Omega)$ for which $C_0$ is a boundary component is a smooth domain, and $C_0\subsetneqq\partial\Omega_0$. Moreover, each component of $\what X\setminus\overline\Omega_0$ must meet, and therefore contain, a component of $\Upsilon ^{-1} (E)$, so any such component must have noncompact closure. Proposition~\ref{BH con boundary prop} and Theorem~\ref{BG irreg hyp no BH cover thm} together now give the claim. \end{pf} \section{An irregular hyperbolic example}\label{BG irreg hyp example sect} Because the existence of irregular hyperbolic complete K\"ahler manifolds with one end and bounded geometry of order two is not completely obvious, an example is provided in this section. In fact, the following is obtained: \begin{thm}\label{Irregular hyp example thm} There exists an irregular hyperbolic connected noncompact complete K\"ahler manifold~$X$ with bounded geometry of all orders \st $e(X)=1$ and $\dim X=1$. \end{thm} \begin{rmk} The authors do not know whether or not there exists an irregular hyperbolic connected noncompact complete K\"ahler manifold~$X$ with bounded geometry of order~$0$ for which $H^1_c(X,\ol)=0$ (and hence which does not admit a proper holomorphic mapping onto a Riemann surface). \end{rmk} The idea of the construction is as follows. The complement of a closed disk~$D$ in~$\C$ is irregular hyperbolic, but it has two ends. Holomorphic attachment of a suitable sequence of tubes (i.e., annuli) $\seq T\nu$, with boundary components $A_\nu$ and $A'_\nu$ of $T_\nu$ for each~$\nu$, satisfying $A_\nu\to\infty$ and $A'_\nu\to p\in\partial D$, yields an irregular hyperbolic Riemann surface with one end, and a direct construction yields a K\"ahler metric with bounded geometry.
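The construction just sketched hinges on the lemma below, whose hypothesis is that a certain series $b$ be less than~$1$ for the excised radii $r_\nu$. As a purely numerical illustration (with hypothetical disk data chosen here, and the series truncated to finitely many terms), one can check that shrinking the $r_\nu$ does drive the partial sums below~$1$, while moderately large radii do not suffice:

```python
import math

# Hypothetical disk data: centers zeta_nu (zeta_0 = 0) escaping to infinity,
# all radii R_nu = 1, so the disks Delta(zeta_nu; R_nu) are disjoint.
zeta = [0.0, 100.0, 200.0, 400.0, 700.0]
R0 = Rnu = 1.0

def b_value(radii):
    """Partial sum of the series b in the lemma for excised radii r_nu < R_nu."""
    total = 0.0
    for nu in range(1, len(zeta)):
        d = abs(zeta[nu] - zeta[0])
        B = math.log((d + R0) * (d + Rnu) / (R0 * Rnu))
        # split off log(r_nu) to avoid floating-point overflow for tiny r_nu
        C = math.log((d + R0) * (d - radii[nu]) / R0) - math.log(radii[nu])
        total += B / C
    return total

# Tiny excised radii r_nu = e^{-|zeta_nu|} make each denominator C_nu large.
small = [0.0] + [math.exp(-abs(z)) for z in zeta[1:]]
print(b_value(small) < 1.0)                              # True
# With larger radii (r_nu = 0.9) the partial sum already exceeds 1.
print(b_value([0.0] + [0.9] * (len(zeta) - 1)) > 1.0)    # True
```

This only probes a finite truncation for one sample configuration; the lemma's statement, of course, concerns the full series.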
\begin{lem}\label{Irreg hyp ex disjoint disks lem} Let $\set{\Delta(\zeta_\nu;R_\nu)}_{\nu=0}^\infty$ be a locally finite sequence of disjoint disks in~$\C$. Then there exists a sequence of positive numbers $\seq r\nu_{\nu=1}^\infty$ \st $r_\nu<R_\nu$ for $\nu=1,2,3,\dots$, and \[ b\equiv\sum_{\nu=1}^\infty\frac{\log\left[R_0 ^{-1} R_\nu ^{-1} (|\zeta_\nu-\zeta_0|+R_0)(|\zeta_\nu-\zeta_0|+R_\nu)\right]} {\log\left[R_0 ^{-1} r_\nu ^{-1} (|\zeta_\nu-\zeta_0|+R_0)(|\zeta_\nu-\zeta_0|-r_\nu)\right]}<1. \] Moreover, for any such sequence $\seq r\nu$, the region $\Omega\equiv\C\setminus\bigcup_{\nu=1}^\infty \overline{\Delta(\zeta_\nu;r_\nu)}$ is hyperbolic, and there exists an irregular sequence $\seq\eta k$ in~$\Omega$ \st $\eta_k\to\infty$ in~$\C$. \end{lem} \begin{pf} It is easy to see that the above inequality will hold for all sufficiently small positive sequences $\seq r\nu$. For each $\nu=1,2,3,\dots$, let \[ B_\nu\equiv\log\left[R_0 ^{-1} R_\nu ^{-1} (|\zeta_\nu-\zeta_0|+R_0)(|\zeta_\nu-\zeta_0|+R_\nu)\right], \] let \[ C_\nu\equiv\log\left[R_0 ^{-1} r_\nu ^{-1} (|\zeta_\nu-\zeta_0|+R_0)(|\zeta_\nu-\zeta_0|-r_\nu)\right], \] and let $\alpha_\nu$ be the harmonic function on $\C\setminus\set{\zeta_0,\zeta_\nu}$ given by \[ z\mapsto\alpha_\nu(z)\equiv\frac 1{C_\nu}\log\left[\frac{|z-\zeta_0|}{|z-\zeta_\nu|} \left(\frac{|\zeta_\nu-\zeta_0|+R_0}{R_0}\right)\right]. \] Clearly, $B_\nu>0$, and since $|\zeta_\nu-\zeta_0|\geq R_\nu+R_0$, $C_\nu>0$. At each point $z\in\partial\Delta(\zeta_0;R_0)$, we have \[ 0\leq\alpha_\nu(z)=\frac 1{C_\nu}\log\left[ \frac{|\zeta_\nu-\zeta_0|+R_0}{|z-\zeta_\nu|}\right]\leq \frac{B_\nu}{C_\nu}, \] since $(|\zeta_\nu-\zeta_0|+R_\nu)|z-\zeta_\nu|\geq R_0R_\nu$. 
At each point $z\in\partial\Delta(\zeta_\nu;r_\nu)$, we have \[ \alpha_\nu(z)=\frac 1{C_\nu}\log\left[R_0 ^{-1} r_\nu ^{-1} (|\zeta_\nu-\zeta_0|+R_0)|z-\zeta_0|\right]\geq 1; \] while at each point $z\in\partial\Delta(\zeta_\nu;R_\nu)$, we have \[ \alpha_\nu(z)=\frac 1{C_\nu}\log\left[R_0 ^{-1} R_\nu ^{-1} (|\zeta_\nu-\zeta_0|+R_0)|z-\zeta_0|\right]\leq \frac{B_\nu}{C_\nu}. \] Moreover, \[ 0\leq\lim_{z\to\infty}\alpha_\nu(z)=\frac 1{C_\nu}\log\left[R_0 ^{-1} (|\zeta_\nu-\zeta_0|+R_0)\right] \leq\frac{B_\nu}{C_\nu}. \] Therefore, since $\alpha_\nu$ is harmonic, we have $\alpha_\nu\geq 0$ on \[ \C\setminus\left[\Delta(\zeta_0;R_0)\cup \Delta(\zeta_\nu;r_\nu)\right] \supset\overline\Omega\setminus\Delta(\zeta_0;R_0), \] and $0\leq\alpha_\nu\leq\frac{B_\nu}{C_\nu}$ on \(\C\setminus\left[\Delta(\zeta_0;R_0)\cup\Delta(\zeta_\nu;R_\nu)\right]\). Consequently, the series $\sum_{\nu=1}^\infty\alpha_\nu$ converges uniformly on compact subsets of~$\overline\Omega\setminus\Delta(\zeta_0;R_0)$ to a nonnegative continuous function~$\alpha$ such that $\alpha$ is positive and harmonic on~$\Omega\setminus\overline{\Delta(\zeta_0;R_0)}$, $\alpha\leq b<1$ on the set \[ I\equiv\Omega\setminus\bigcup_{\nu=0}^\infty\Delta(\zeta_\nu;R_\nu) =\C\setminus\bigcup_{\nu=0}^\infty \Delta(\zeta_\nu;R_\nu), \] and for each~$\nu=1,2,3,\dots$, we have $0<\alpha-\alpha_\nu<1$ on $\overline{\Delta(\zeta_\nu;R_\nu)}\setminus\Delta(\zeta_\nu;r_\nu)$ and $\alpha>\alpha_\nu\geq 1$ on $\partial\Delta(\zeta_\nu;r_\nu)$. Clearly, $\Omega\supset\overline{\Delta(\zeta_0;R_0)}$ is hyperbolic, and the harmonic measure of the ideal boundary of $\Omega$ with respect to $\Omega\setminus\overline{\Delta(\zeta_0;R_0)}$ extends to a continuous function $u\colon\overline\Omega\setminus\Delta(\zeta_0;R_0)\to[0,1]$.
For each $R>R_0$, the continuous function $\beta_R$ on $\overline\Omega\setminus\Delta(\zeta_0;R_0)$ given by \[ z\mapsto\beta_R(z)\equiv\alpha(z) +\frac{\log\left(|z-\zeta_0|/R_0\right)}{\log(R/R_0)} \] is harmonic on $\Omega\setminus\overline{\Delta(\zeta_0;R_0)}$ and satisfies $\beta_R>\alpha>1=u$ on $\bigcup_{\nu=1}^\infty\partial\Delta(\zeta_\nu;r_\nu)$, $\beta_R=\alpha\geq 0=u$ on $\partial\Delta(\zeta_0;R_0)$, and $\beta_R\geq 1\geq u$ on $\overline\Omega\cap\partial\Delta(\zeta_0;R)$. Hence $\beta_R\geq u$ on $(\overline\Omega\setminus\Delta(\zeta_0;R_0))\cap\overline{\Delta(\zeta_0;R)}$. Passing to the pointwise limit as $R\to\infty$, we get $\alpha\geq u$ on $\overline\Omega\setminus\Delta(\zeta_0;R_0)$. However, $\alpha\leq b<1$ on the set \(I\subset\Omega\setminus\Delta(\zeta_0;R_0)\), so any sequence $\seq\eta k$ in~$I$ with $\eta_k\to\infty$ in~$\C$ is an irregular sequence in~$\Omega$. \end{pf} \begin{lem}\label{BG from fn on C lem} Let $k$ be a positive integer, and $\rho$ a positive \ensuremath{\mathcal C^{\infty}} function on~$\C$ such that $D\rho,D^2\rho,D^3\rho,\dots,D^k\rho$ are bounded. Then the complete K\"ahler metric $g\equiv e^{2\rho}g_{\C}$ has bounded geometry of order~$k$. In fact, the pullbacks of~$g$ under the local holomorphic charts \[ \Psi_{z_0}\colon\Delta(0;1)\to\Delta(z_0;e^{-\rho(z_0)}) \] given by $\Psi_{z_0}\colon z\mapsto e^{-\rho(z_0)}z+z_0$, for each point $z_0\in\C$, have the appropriate uniformly bounded derivatives. \end{lem} \begin{pf} For each point $z_0\in\C$, the pullback of the associated $(1,1)$-form $\omega_g\equiv e^{2\rho}\frac i2dz\wedge d\bar z$ under~$\Psi_{z_0}$ is given by \[ \Psi_{z_0}^*\omega_g =e^{2\left(\rho(\Psi_{z_0})-\rho(z_0)\right)}\frac i2dz\wedge d\bar z. \] The bound on $D\rho$ gives a Lipschitz constant~$C$ for~$\rho$, and hence \[ e^{-2C}\leq e^{-2Ce^{-\rho(z_0)}}\leq e^{2\left(\rho(\Psi_{z_0})-\rho(z_0)\right)}\leq e^{2Ce^{-\rho(z_0)}}\leq e^{2C}. 
\] A similar argument gives uniform bounds on the $m$th order derivatives of the functions $\left\{e^{2\left(\rho(\Psi_{z_0})-\rho(z_0)\right)}\right\}_{z_0\in\C}$ for $m=1,\dots,k$. \end{pf} \begin{pf*}{Proof of Theorem~\ref{Irregular hyp example thm}} \textbf{Step 1. Construction of a suitable irregular hyperbolic region in~$\C$.} Let us fix a constant~$R>1$ and disjoint disks $\set{\Delta(\zeta_\nu;R)}_{\nu=0}^\infty$ such that $\zeta_0=0$ and $\zeta_\nu\to\infty$ so fast that \[ \sum_{\nu=1}^\infty\frac{\log\left[R^{-2}(|\zeta_\nu|+R)^2\right]} {\log\left[R ^{-1} e^{|\zeta_\nu|}(|\zeta_\nu|+R)(|\zeta_\nu|-e^{-|\zeta_\nu|})\right]}<1. \] In particular, by Lemma~\ref{Irreg hyp ex disjoint disks lem}, the domain \[ \Omega_0\equiv\C\setminus\bigcup_{\nu=1}^\infty \overline{\Delta(\zeta_\nu;e^{-|\zeta_\nu|})} \] is irregular hyperbolic; in fact, there exists an irregular sequence $\seq\eta k$ in~$\Omega_0$ such that $\eta_k\to\infty$ in~$\C$. \textbf{Step 2. Construction of a bounded geometry K\"ahler metric on a region.} By Lemma~\ref{BG from fn on C lem}, fixing a positive \ensuremath{\mathcal C^{\infty}} function~$\rho$ on~$\C$ such that $\rho(z)=|z|$ on a neighborhood of $\C\setminus\Delta(0;1)$, we get a complete K\"ahler metric $g_0\equiv e^{2\rho}g_{\C}$ with bounded geometry of all orders on~$\C$ and associated local holomorphic charts \[ \Psi_{z_0}\colon\Delta(0;1)\to\Delta(z_0;e^{-\rho(z_0)}) \] given by $\Psi_{z_0}\colon z\mapsto e^{-\rho(z_0)}z+z_0$ for each point $z_0\in\C$.
Letting $R_0$ and $R_1$ be constants with $1<R_0<R_1<R$, $g_{\mathbb H}$ the standard hyperbolic metric on the upper half plane~$\mathbb H$, $\Phi$ a M\"obius transformation with $\Phi((\C\setminus\overline{\Delta(0;1)})\cup\set\infty)=\mathbb H$ and $\text{\rm Im}\, \Phi>5R$ on $(\C\setminus\Delta(0;R_0))\cup\set\infty$, $g_1\equiv\Phi^*g_{\mathbb H}$, and $\lambda\colon\C\to[0,1]$ a \ensuremath{\mathcal C^{\infty}} function with $\lambda\equiv 0$ on a neighborhood of $\overline{\Delta(0;R_0)}$ and $\lambda\equiv 1$ on $\C\setminus\Delta(0;R_1)$, we get a complete K\"ahler metric \[ g_2\equiv\lambda g_0+(1-\lambda)g_1 \] with bounded geometry of all orders on the region \(\C\setminus\overline{\Delta(0;1)}\). Setting $\xi_\nu\equiv 2\nu R+i2R$ for each~$\nu=1,2,3,\dots$, we get disjoint disks $\set{\Delta(\xi_\nu;R)}_{\nu=1}^\infty$ in $\setof{z\in\mathbb H}{R<\text{\rm Im}\, z<3R}$ (and an isometric isomorphism $\Delta(\xi_1;R)\to\Delta(\xi_\nu;R)$ in $\mathbb H$ given by $z\mapsto z+2(\nu-1)R$ for each~$\nu$). We have \[ \overline{\Delta(0;1)} \cup\bigcup_{\nu=1}^\infty\Phi ^{-1} (\overline{\Delta(\xi_\nu;R)}) \subset\Delta(0;R_0)\Subset\Delta(0;R)\Subset\Omega_0, \] and hence we have a region \[ \Omega_1\equiv\Omega_0\setminus\left(\overline{\Delta(0;1)} \cup\bigcup_{\nu=1}^\infty\Phi ^{-1} (\overline{\Delta(\xi_\nu;1)})\right). \] \textbf{Step 3. Construction of the Riemann surface $X$.} For each $\nu=1,2,3,\dots$, let $T_\nu$ be a copy of the annulus \(\Delta(0;1/R,R)\equiv \setof{z\in\C}{1/R<|z|<R}\), and let \[ \Lambda_\nu\colon\C\to{\mathbb C}\qquad\text{and}\qquad \Upsilon_\nu\colon\C^*\to\C^* \] be the biholomorphisms given by $w\mapsto e^{-|\zeta_\nu|}w+\zeta_\nu$ and $w\mapsto\frac 1w+\xi_\nu$, respectively. 
We then get a Riemann surface \[ X\equiv\left(\Omega_1\sqcup\bigsqcup_{\nu=1}^\infty T_\nu\right)\bigg/{\sim}, \] where for each $\nu=1,2,3,\dots$, and each $w\in T_\nu$, $z\in\Delta(\zeta_\nu;e^{-|\zeta_\nu|},Re^{-|\zeta_\nu|})$ satisfies \[ z\sim w\qquad\iff\qquad z=\Lambda_\nu(w), \] and $z\in\Phi ^{-1} (\Delta(\xi_\nu;1,R))$ satisfies \[ z\sim w\qquad\iff\qquad \Phi(z)=\Upsilon_\nu(w). \] $X$ is hyperbolic, because for each point $z_0\in(\partial\Delta(0;1))\setminus\set{\Phi ^{-1} (\infty)} \subset\partial\Omega_1$, there exists a barrier~$\beta$ on~$\Omega_1$ at~$z_0$ and a relatively compact neighborhood~$U$ of $z_0$ in~$\C$ such that $\overline U\setminus\overline{\Delta(0;1)}\subset\Omega_1$ and $\beta$ is equal to~$-1$ on $\Omega_1\setminus U$, and thus we may extend~$\beta$ to a continuous subharmonic function on~$X$ that is equal to~$-1$ on $X\setminus(\Omega_1\cap U)$. Fixing a disk \[ D\Subset\Delta(0;R)\cap\Omega_1\subset X, \] and letting $u\colon X\setminus D\to[0,1)$ be the harmonic measure of the ideal boundary of $X$ with respect to $X\setminus\overline D$, we see that the restriction $u\restrict{\Omega_0\setminus\Delta(0;R)}$ cannot approach~$1$ along the sequence $\seq\eta k$, so $X$ must be irregular hyperbolic. It is easy to see that $e(X)=1$. \textbf{Step 4. Construction of a bounded geometry K\"ahler metric on~$X$.} Let us fix a \ensuremath{\mathcal C^{\infty}} function~$\tau$ on~$\C$ \st $0\leq\tau\leq 1$, $\tau\equiv 1$ on $\C\setminus\Delta(0;R_0)$, and $\tau\equiv 0$ on $\Delta(0;1/R_0)$. Then we get a K\"ahler metric~$g$ on~$X$ by setting $g=g_2$ on \[ \Omega_2\equiv\C\setminus\left(\overline{\Delta(0;1)}\cup \bigcup_{\nu=1}^\infty \overline{\Delta(\zeta_\nu;R_0e^{-|\zeta_\nu|})} \cup \bigcup_{\nu=1}^\infty\Phi ^{-1} ( \overline{\Delta(\xi_\nu;R_0)})\right)\subset\Omega_1\subset X, \] and \(g=\tau\Lambda_\nu^*g_0+(1-\tau)\Upsilon_\nu^*g_{\mathbb H}\) on~$T_\nu\subset X$ for each $\nu=1,2,3,\dots$.
For $\nu=1,2,3,\dots$, on $T_\nu$ we have \(\Lambda_\nu^*g_0=e^{2(|e^{-|\zeta_\nu|}w+\zeta_\nu|-|\zeta_\nu|)}g_{\C}\) and \(\Upsilon_\nu^*g_{\mathbb H}=\Upsilon_1^*g_{\mathbb H}\) (since \(\Upsilon_\nu=\Upsilon_1+2(\nu-1)R\)). Therefore, since the functions \[ w\mapsto |e^{-|\zeta_\nu|}w+\zeta_\nu|-|\zeta_\nu|\in[-Re^{-R},Re^{-R}]\qquad\text{for }\nu=1,2,3,\dots, \] have uniformly bounded derivatives of order~$k$ on~$T_\nu=\Delta(0;1/R,R)$ for each~$k=0,1,2,\dots$, $(X,g)$ has bounded geometry of all orders along $X\setminus\Omega_3$, where \[ \Omega_3\equiv\C\setminus\left(\overline{\Delta(0;1)}\cup \bigcup_{\nu=1}^\infty \overline{\Delta(\zeta_\nu;R_1e^{-|\zeta_\nu|})} \cup \bigcup_{\nu=1}^\infty\Phi ^{-1} ( \overline{\Delta(\xi_\nu;R_1)})\right)\subset\Omega_2\subset\Omega_1\subset X. \] There exists a positive constant $r_0$ such that for each point $z_0\in\Omega_3\cap\Delta(0;R_0)$, we have \[ B\equiv B_{g_{\mathbb H}}(\Phi(z_0);r_0)\subset\Phi(\Omega_2\cap\Delta(0;R_1)) \] and $g=g_1=\Phi^*g_{\mathbb H}$ on $\Phi ^{-1} (B)\subset\Omega_2\cap\Delta(0;R_1)$. Thus $(X,g)$ has bounded geometry of all orders along $\Omega_3\cap\Delta(0;R_0)$, as well as along the compact set $\overline{\Delta(0;R_0,R)}\subset\Omega_1$. Finally, if $r_1$ is a constant with \(0<r_1<\min(1,R-R_1)\), and $z_0\in\Omega_3\setminus\Delta(0;R)$, then \[ \Delta(z_0;r_1e^{-\rho(z_0)})\cap\Delta(0;R_1)=\emptyset. \] Moreover, if $z\in\Delta(z_0;r_1e^{-\rho(z_0)})\cap \overline{\Delta(\zeta_\nu;R_0e^{-|\zeta_\nu|})}$ for some~$\nu$, then \begin{align*} R_1e^{-|\zeta_\nu|}&<|z_0-\zeta_\nu|<r_1e^{-|z_0|}+R_0e^{-|\zeta_\nu|} \leq(r_1e^{|\zeta_\nu-z_0|}+R_0)e^{-|\zeta_\nu|}\\ &\leq(r_1e^{r_1e^{-|z_0|}+R_0e^{-|\zeta_\nu|}}+R_0)e^{-|\zeta_\nu|} \leq(r_1e^{r_1e^{-R}+R_0e^{-R}}+R_0)e^{-|\zeta_\nu|}.
\end{align*} Thus for $r_1$ sufficiently small, we will have, for every point $z_0\in\Omega_3\setminus\Delta(0;R)$, \[ D_{z_0}\equiv\Delta(z_0;r_1e^{-\rho(z_0)})\subset\Omega_2\setminus\Delta(0;R_1), \] and in particular, $g=g_2=g_0$ on~$D_{z_0}$. The resulting family of biholomorphisms $\Delta(0;1)\to D_{z_0}$ given by $z\mapsto r_1ze^{-|z_0|}+z_0$ for each such point~$z_0$ then has the required uniform bounds, so $(X,g)$ has bounded geometry of all orders along $\Omega_3\setminus\Delta(0;R)$, and therefore along~$X$ itself, and completeness follows. \end{pf*} \end{document}
\begin{document} \title{Zero-sum Stochastic Differential Games of Impulse Versus Continuous Control by FBSDEs\footnote{This work was supported by the Swedish Energy Agency through grant number 48405-1}} \author{Magnus Perninge\footnote{M.\ Perninge is with the Department of Physics and Electrical Engineering, Linnaeus University, V\"axj\"o, Sweden. e-mail: [email protected].}} \maketitle \begin{abstract} We consider a stochastic differential game in the context of forward-backward stochastic differential equations, where one player implements an impulse control while the opponent controls the system continuously. Utilizing the notion of ``backward semigroups'' we first prove the dynamic programming principle (DPP) for a truncated version of the problem in a straightforward manner. Relying on a uniform convergence argument then enables us to show the DPP for the general setting. In particular, this avoids technical constraints imposed in previous works dealing with the same problem. Moreover, our approach allows us to consider impulse costs that depend on the present value of the state process in addition to unbounded coefficients. Using the dynamic programming principle we deduce that the upper and lower value functions are both solutions (in viscosity sense) to the same Hamilton-Jacobi-Bellman-Isaacs obstacle problem. By showing uniqueness of solutions to this partial differential inequality we conclude that the game has a value. \end{abstract} \section{Introduction} The history of differential games is almost as long as the history of modern optimal control theory and traces back to the seminal work by Isaacs~\cite{Isaacs65}. To counter the unrealistic idea that one of the players has to give up their control to the opponent, Elliott and Kalton introduced the notion of strategies defined as non-anticipating maps from the opponent's set of controls to the player's own controls~\cite{ElliotKalton72}.
Assuming that one player plays a strategy while the opponent plays a classical control, Evans and Souganidis \cite{Evans84} used the theory of viscosity solutions to find a representation of the upper and lower value functions in deterministic differential games as solutions to Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations. Using a discrete time approximation technique, this was later translated to the stochastic setting by Fleming and Souganidis \cite{FlemSoug89}. Such games are naturally termed zero-sum stochastic differential games (SDGs). Using the theory of backward stochastic differential equations (BSDEs), in particular the notion of backward semigroups, Buckdahn and Li~\cite{Buckdahn08} simplified the arguments and further extended the results in \cite{FlemSoug89} to cost functionals defined in terms of BSDEs. Just as stochastic control was extended to various types of controls in the latter half of the previous century (notably to controls of impulse type in~\cite{BensLionsImpulse}), so have stochastic differential games. Tang and Hou~\cite{TangHouSWgame} considered the setting of two-player, zero-sum SDGs where both players play switching controls (a particular type of impulse control). Their result was later extended by Djehiche \textit{et al.\ } \cite{BollanSWG1,DjehicheSWG2} to incorporate stochastic switching costs. In the context of general impulse controls, Cosso~\cite{Cosso13} considered a zero-sum game where both players play impulse controls. By adapting the theory developed in~\cite{Buckdahn08}, L.~Zhang recently extended these results to cost functionals defined by BSDEs~\cite{LZhang21}. In the present work we will be dealing with SDGs where one player plays an impulse control while the opponent plays a continuous control.
This type of game problem has previously been considered by Azimzadeh~\cite{Azimzadeh19} for linear expectations and deterministic intervention costs, and by Bayraktar \textit{et al.\ } \cite{BayraktarRobust} when the impulse control is of switching type. We follow the path described above where the cost functional is defined in terms of the solution to a BSDE and introduce the lower value function \begin{align*} V_-(t,x):=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_t}\mathop{\rm{ess}\sup}_{u\in\mathcal U_t}J(t,x;u,\alpha^S(u)) \end{align*} and the upper value function \begin{align*} V_+(t,x):=\mathop{\rm{ess}\sup}_{u^S\in\mathcal U^S_t}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t}J(t,x;u^S(\alpha),\alpha) \end{align*} with $J(t,x;u,\alpha):=Y^{t,x;u,\alpha}_t$, where the pair $(Y^{t,x;u,\alpha},Z^{t,x;u,\alpha})$ solves the non-standard BSDE \begin{align}\nonumber Y^{t,x;u,\alpha}_s&=\psi(X^{t,x;u,\alpha}_T)+\int_s^Tf(r,X^{t,x;u,\alpha}_r,Y^{t,x;u,\alpha}_r,Z^{t,x;u,\alpha}_r,\alpha_r)dr \\ &\quad-\int_s^T Z^{t,x;u,\alpha}_rdW_r - \Xi^{t,x;u,\alpha}_{T+}+\Xi^{t,x;u,\alpha}_s. \label{ekv:non-ref-bsde} \end{align} In the above definitions, $\mathcal U$ (resp. $\mathcal{A}$) and $\mathcal U^S$ (resp. $\mathcal{A}^S$) represent the set of impulse (resp. continuous) controls and their corresponding non-anticipative strategies. The generic member of $\mathcal U$ will be denoted by $u:=(\tau_i,\beta_i)_{1\leq i\leq N}$ where $\tau_i$ is the time of the $i^{\rm th}$ intervention and $\beta_i$ is the corresponding impulse, taking values in the compact set $U$.
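The reason two value functions are needed is that the order of optimization matters. A minimal static sketch (a hypothetical $2\times 2$ payoff matrix, entirely outside the paper's dynamic setting) shows that sup-inf and inf-sup can differ when only plain controls, rather than strategies, are allowed:

```python
# Toy static zero-sum game (illustrative only): J[i][j] is the payoff when the
# maximizing player picks row i and the minimizing player picks column j.
J = [[3.0, 1.0],
     [0.0, 2.0]]

rows, cols = range(len(J)), range(len(J[0]))

# inf-sup: the minimizer commits first, mirroring the definition of V_-.
infsup = min(max(J[i][j] for i in rows) for j in cols)
# sup-inf: the maximizer commits first, mirroring the definition of V_+.
supinf = max(min(J[i][j] for j in cols) for i in rows)

print(supinf, infsup)  # 1.0 2.0 -- sup-inf <= inf-sup, with a strict gap here
```

That the dynamic analogues $V_-$ and $V_+$ nevertheless coincide, thanks to the use of non-anticipative strategies, is the main result of the paper.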
Moreover, the impulse cost process $\Xi$ is defined as \begin{align} \Xi^{t,x;u,\alpha}_s:=\sum_{j=1}^N\mathbbm{1}_{[\tau_j<s]}\ell(\tau_j,X^{t,x;[u]_{j-1},\alpha}_{\tau_{j}},\beta_j), \end{align} where $[u]_j:=(\tau_i,\beta_i)_{1\leq i\leq N\wedge j}$ and $X^{t,x;u,\alpha}$ solves the impulsively and continuously controlled SDE \begin{align} X^{t,x;u,\alpha}_s&=x+\int_t^s a(r,X^{t,x;u,\alpha}_r,\alpha_r)dr+\int_t^s\sigma(r,X^{t,x;u,\alpha}_r,\alpha_r)dW_r\label{ekv:forward-sde1} \end{align} for $s\in [t,\tau_{1})$ and \begin{align} X^{t,x;u,\alpha}_{s}&=\Gamma(\tau_j\vee t,X^{t,x;[u]_{j-1},\alpha}_{\tau_j\vee t},\beta_j)+\int_{\tau_j\vee t}^s a(r,X^{t,x;u,\alpha}_r,\alpha_r)dr+\int_{\tau_j\vee t}^s\sigma(r,X^{t,x;u,\alpha}_r,\alpha_r)dW_r,\label{ekv:forward-sde2} \end{align} whenever $s\in [\tau_{j},\tau_{j+1})$ with $\tau_{N+1}:=\infty$. We show that $V_-$ and $V_+$ are both viscosity solutions to the Hamilton-Jacobi-Bellman-Isaacs quasi-variational inequality (HJBI-QVI) \begin{align}\label{ekv:var-ineq} \begin{cases} \min\{v(t,x)-\mathcal M v(t,x),-v_t(t,x)-\inf_{\alpha\in A}H(t,x,v(t,x),Dv(t,x),D^2v(t,x),\alpha)\}=0,\\ \quad\forall (t,x)\in[0,T)\times \mathbb R^d \\ v(T,x)=\psi(x), \end{cases} \end{align} where $\mathcal M v(t,x):=\sup_{b\in U}\{v(t,\Gamma(t,x,b))-\ell(t,x,b)\}$ and \begin{align*} H(t,x,y,p,X,\alpha):=p\cdot a(t,x,\alpha)+\frac{1}{2}{\rm Tr} [\sigma\sigma^\top(t,x,\alpha)X]+f(t,x,y,p^\top \sigma(t,x,\alpha),\alpha). \end{align*} We then move on to prove that \eqref{ekv:var-ineq} admits at most one solution, leading to the main contribution of the paper, namely the conclusion that the game has a value, \textit{i.e.\ } that $V_-\equiv V_+$. As in most previous works on stochastic differential games involving impulse controls, the main technical difficulty we face is showing continuity of the upper and lower value functions in the time variable. 
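The forward dynamics \eqref{ekv:forward-sde1}--\eqref{ekv:forward-sde2} can be approximated by an Euler--Maruyama scheme in which the state jumps through $\Gamma$ at each intervention time while the impulse cost $\ell$ accrues at the pre-jump state, as in the definition of $\Xi$. The following sketch uses toy coefficients (`a`, `sigma`, `Gamma`, `ell` below are hypothetical choices, not taken from the paper), and applies impulses at the first grid point after their intervention time:

```python
import math, random

def a(t, x, alpha): return alpha * x          # drift (toy)
def sigma(t, x, alpha): return 1.0            # diffusion (toy)
def Gamma(t, x, b): return x + b              # post-impulse state (toy)
def ell(t, x, b): return 0.1 + abs(b)         # intervention cost >= delta > 0

def simulate(t, x, impulses, alpha_path, T=1.0, n=1000, seed=1):
    """Euler scheme for X^{t,x;u,alpha}; returns (X_T, accumulated cost Xi)."""
    random.seed(seed)
    dt, X, Xi = (T - t) / n, x, 0.0
    pending = sorted(impulses)                # u = (tau_j, beta_j), sorted in time
    for k in range(n):
        s = t + k * dt
        while pending and pending[0][0] <= s: # impulses due by time s
            tau, beta = pending.pop(0)
            Xi += ell(tau, X, beta)           # cost uses the pre-jump state
            X = Gamma(tau, X, beta)           # then the state jumps
        al = alpha_path(s)
        dW = random.gauss(0.0, math.sqrt(dt))
        X += a(s, X, al) * dt + sigma(s, X, al) * dW
    return X, Xi

XT, Xi = simulate(0.0, 1.0, [(0.25, 0.5), (0.5, -0.3)], lambda s: 0.1)
print(Xi)   # ell(.,.,0.5) + ell(.,.,-0.3) = 0.6 + 0.4 = 1.0
```

This is only a discretization sketch; in particular, measurability of $(\tau_j,\beta_j)$ and the integrability condition on $\Xi^{t,x;u,\alpha}_T$ from the definition of $\mathcal U$ have no counterpart here.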
In previous works such as \cite{TangHouSWgame,Cosso13,FZhang11} the proof of continuity is simplified by assuming that the intervention costs do not depend on the state and are non-increasing in time. In \cite{Azimzadeh19} the assumption of non-increasing intervention costs is replaced by one where the impulse player commits, at the start of the game, to making at most a fixed number $q\geq 0$ of impulses (where $q$ can be chosen arbitrarily large), in addition to assuming that impulses can only be made at rational times. In the present work we take a completely different approach from those of the above-mentioned articles: we first show continuity under a truncation and then show that the truncated value functions converge uniformly to the true value functions on compact sets. The paper is organized as follows. In the next section we give some preliminary definitions and recall the by now well-established theory of viscosity solutions to partial differential equations (PDEs) as well as the notion of backward semigroups. Then, in Section~\ref{sec:FBSDEs} we give some preliminary estimates on the solutions to the non-standard BSDE in \eqref{ekv:non-ref-bsde}. Section~\ref{sec:DPP} is devoted to showing that dynamic programming principles hold for the lower and upper value functions. The proof that the lower and upper value functions are both solutions in viscosity sense to the same HJBI-QVI, that is \eqref{ekv:var-ineq}, is given in Section~\ref{sec:hjbi-qvi}, while the uniqueness proof is postponed to Section~\ref{sec:unique}. \section{Preliminaries\label{sec:prel}} We let $(\Omega,\mathcal F,\mathbb{P})$ be a complete probability space on which lives a $d$-dimensional Brownian motion $W$. We denote by $\mathbb F:=(\mathcal F_t)_{0\leq t\leq T}$ the augmented natural filtration of $W$.\\ \noindent Throughout, we will use the following notation: \begin{itemize} \item $\mathcal P_{\mathbb F}$ is the $\sigma$-algebra of $\mathbb F$-progressively measurable subsets of $[0,T]\times \Omega$.
\item For $p\geq 1$, we let $\mathcal S^{p}$ be the set of all $\mathbb R$-valued, $\mathcal P_{\mathbb F}$-measurable c\`agl\`ad\ processes $(Z_t: t\in [0,T])$ such that $\|Z\|_{\mathcal S^p}:=\mathbb{E}\big[\sup_{t\in[0,T]} |Z_t|^p\big]<\infty$ and we let $\mathcal S^p_c$ be the subset of processes that are continuous. \item We let $\mathcal H^{p}$ denote the set of all $\mathbb R^d$-valued $\mathcal P_{\mathbb F}$-measurable processes $(Z_t: t\in[0,T])$ such that $\|Z\|_{\mathcal H^p}:=\mathbb{E}\big[\big(\int_0^T |Z_t|^2 dt\big)^{p/2}\big]^{1/p}<\infty$. \item We let $\mathcal T$ be the set of all $\mathbb F$-stopping times and for each $\eta\in\mathcal T$ we let $\mathcal T_\eta$ be the corresponding subset of stopping times $\tau$ such that $\tau\geq \eta$, $\mathbb{P}$-a.s. \item We let $\mathcal{A}$ be the set of all $A$-valued processes $\alpha\in \mathcal H^2$ where $A$ is a compact set. \item We let $\mathcal U$ be the set of all $u=(\tau_j,\beta_j)_{1\leq j\leq N}$, where $(\tau_j)_{j=1}^N$ is a non-decreasing sequence of $\mathbb F$-stopping times and $\beta_j$ is an $\mathcal F_{\tau_j}$-measurable r.v.~taking values in $U$, such that $\Xi^{t,x;u,\alpha}_T\in L^2(\Omega,\mathcal F_T,\mathbb{P})$ for all $\alpha\in\mathcal{A}$. \item For stopping times $\underline\eta\leq \bar\eta$ we let $\mathcal U_{\underline\eta,\bar\eta}$ be the subset of $\mathcal U$ with $\underline\eta\leq\tau_j\leq\bar\eta$, $\mathbb{P}$-a.s.~for $j=1,\ldots,N$. Similarly, we let $\mathcal{A}_{\underline\eta,\bar\eta}$ be the restriction of $\mathcal{A}$ to all $\alpha:\Omega\times [\underline\eta,\bar\eta]\to A$. When $\bar\eta=T$ we use the shorthands $\mathcal U_{\underline\eta}$ and $\mathcal{A}_{\underline\eta}$. \item For any $u\in\mathcal U$, we let $[u]_{j}:=(\tau_i,\beta_i)_{1\leq i\leq N\wedge j}$. Moreover, we introduce $N(s):=\max\{j\geq 0:\tau_j\leq s\}$ and let $u_s:=[u]_{N(s)}$ and $u^s:=(\tau_j,\beta_j)_{N(s)+1\leq j\leq N}$.
\item We let $\Pi_{pg}$ denote the set of all functions $\varphi:[0,T]\times\mathbb R^n\to\mathbb R$ that are of polynomial growth in $x$, \textit{i.e.\ } there are constants $C,\rho>0$ such that $|\varphi(t,x)|\leq C(1+|x|^\rho)$ for all $(t,x)\in [0,T]\times\mathbb R^n$. \end{itemize} We also mention that, unless otherwise specified, all inequalities between random variables are to be interpreted in the $\mathbb{P}$-a.s.~sense. \begin{defn} We introduce the notion of \emph{non-anticipative strategies} defined as all maps $u^S:\mathcal{A}\to\mathcal U$ for which $(u^S(\alpha))_s=(u^S(\tilde \alpha))_s$ whenever $\alpha_r=\tilde\alpha_r$, $d\lambda\times d\mathbb{P}$-a.e. on $[0,s]\times\Omega$ (resp. $\alpha^S:\mathcal U\to\mathcal{A}$ for which $(\alpha^S(u))_s=(\alpha^S(\tilde u))_s$ whenever $\tilde u_s=u_s$, $\mathbb{P}$-a.s.). We denote by $\mathcal U^S$ (resp. $\mathcal{A}^S$) the set of non-anticipative strategies. Moreover, we define the restrictions to an interval $[\underline\eta,\bar\eta]$ denoted $\mathcal U^S_{\underline\eta,\bar\eta}$ (resp. $\mathcal{A}^S_{\underline\eta,\bar\eta}$) as all non-anticipative maps $u^S:\mathcal{A}_{\underline\eta,\bar\eta}\to\mathcal U_{\underline\eta,\bar\eta}$ (resp. $\alpha^S:\mathcal U_{\underline\eta,\bar\eta}\to\mathcal{A}_{\underline\eta,\bar\eta}$). \end{defn} \begin{defn} We will rely heavily on approximation schemes where we limit the number of interventions in the impulse control. To this end we let $\mathcal U^k:=\{u\in\mathcal U:N\leq k,\,\mathbb{P}-{\rm a.s.}\}$ for $k\geq 0$ and let $\mathcal U^{S,k}$ be the corresponding set of non-anticipative strategies $u^S:\mathcal{A}\to \mathcal U^k$.
\end{defn} \begin{defn} We introduce the concatenation of impulse controls $\oplus$ as \begin{align*} (\tau_j,\beta_j)_{1\leq j\leq N}\oplus (\tilde\tau_j,\tilde\beta_j)_{1\leq j\leq \tilde N}:=((\tau_1,\beta_1),\ldots,(\tau_N,\beta_N),(\tilde \tau_1\vee\tau_N,\tilde\beta_1),\ldots,(\tilde\tau_{\tilde N}\vee\tau_N,\tilde\beta_{\tilde N})) \end{align*} and note that for each $\eta\in \mathcal T$ we have the decomposition $u=u_\eta\oplus u^\eta$. Similarly, when $0\leq t\leq s\leq T$ we let the concatenation of $\alpha\in\mathcal{A}_{t,s}$ and $\tilde\alpha\in \mathcal{A}_{s}$ at $s$ be defined as \begin{align*} (\alpha\oplus_{s}\tilde\alpha)_r:=\mathbbm{1}_{[t,s)}(r)\alpha_r+\mathbbm{1}_{[s,T]}(r)\tilde\alpha_r \end{align*} for all $r\in [t,T]$. \end{defn} Throughout, we make the following assumptions on the parameters in the cost functional where $C>0$ and $\rho>0$ are fixed constants: \begin{ass}\label{ass:on-coeff} \begin{enumerate}[i)] \item\label{ass:on-coeff-f} We assume that $f:[0,T]\times \mathbb R^n\times\mathbb R\times\mathbb R^{d}\times A\to\mathbb R$ is Borel measurable, of polynomial growth in $x$, \textit{i.e.\ } there is a $C>0$ and a $\rho\geq 0$ such that \begin{align*} |f(t,x,0,0,\alpha)|\leq C(1+|x|^\rho) \end{align*} for all $\alpha\in A$, and that there is a constant $k_f>0$ such that for any $t\in[0,T]$, $x,x'\in\mathbb R^n$, $y,y'\in\mathbb R$, $z,z'\in\mathbb R^{d}$ and $\alpha\in A$ we have \begin{align*} |f(t,x',y',z',\alpha)-f(t,x,y,z,\alpha)|&\leq k_f((1+|x|^\rho+|x'|^\rho)|x'-x|+|y'-y|+|z'-z|). \end{align*} Moreover, we assume that $f(t,x,y,z,\cdot)$ is continuous for all $(t,x,y,z)\in [0,T]\times\mathbb R^n\times\mathbb R\times\mathbb R^d$. \item\label{ass:on-coeff-psi} The terminal reward $\psi:\mathbb R^n\to\mathbb R$ satisfies the growth condition \begin{align*} |\psi(x)|\leq C(1+|x|^\rho) \end{align*} for all $x\in\mathbb R^n$, and the following local Lipschitz criterion \begin{align*} |\psi(x)-\psi(x')|\leq C(1+|x|^\rho+|x'|^\rho)|x-x'|.
\end{align*} \item\label{ass:on-coeff-ell} The intervention cost $\ell:[0,T]\times \mathbb R^n\times U\to \mathbb R_+$ is jointly continuous in $(t,x,b)$, bounded from below, \textit{i.e.\ } \begin{align*} \ell(t,x,b)\geq\delta >0, \end{align*} locally Lipschitz in $x$ and locally H\"older continuous in $t$, in particular, we assume that \begin{align*} |\ell(t,x,b)-\ell(t',x',b)|\leq C(1+|x'|^\rho+|x|^\rho)(|x-x'|+|t'-t|^\varsigma), \end{align*} for some $\varsigma>0$. \item\label{ass:on-coeff-@end} For each $(x,b)\in\mathbb R^n\times U$ we have \begin{align*} \psi(x)>\psi(\Gamma(T,x,b))-\ell(T,x,b). \end{align*} \end{enumerate} \end{ass} \begin{rem}\label{rem:@end} Note in particular that Assumption~\ref{ass:on-coeff}.\ref{ass:on-coeff-@end} implies that the lower and upper value functions defined in the introduction satisfy $V_-(T,x)=V_+(T,x)=\psi(x)$ for all $x\in\mathbb R^n$. \end{rem} Moreover, we make the following assumptions on the coefficients of the controlled forward SDE: \begin{ass}\label{ass:onSFDE} For any $t,t'\in [0,T]$, $b\in U$, $\alpha\in A$ and $x,x'\in\mathbb R^n$ we have: \begin{enumerate}[i)] \item\label{ass:onSFDE-Gamma} The function $\Gamma:[0,T]\times\mathbb R^n\times U\to\mathbb R^n$ is jointly continuous and satisfies \begin{align*} |\Gamma(t,x,b)-\Gamma(t',x',b)|&\leq k_{\Gamma}(|x'-x|+|t'-t|^\varsigma(1+|x|+|x'|)) \end{align*} and the growth condition \begin{align}\label{ekv:imp-bound} |\Gamma(t,x,b)|\leq K_\Gamma\vee |x|. \end{align} for some constants $k_\Gamma,K_\Gamma>0$ and $\varsigma>0$. \item\label{ass:onSFDE-a-sigma} The coefficients $a:[0,T]\times\mathbb R^n\times A\to\mathbb R^{n}$ and $\sigma:[0,T]\times\mathbb R^n\times A\to\mathbb R^{n\times d}$ are jointly continuous and satisfy the growth condition \begin{align*} |a(t,x,\alpha)|+|\sigma(t,x,\alpha)|&\leq C(1+|x|), \end{align*} and the Lipschitz continuity \begin{align*} |a(t,x,\alpha)-a(t,x',\alpha)|+|\sigma(t,x,\alpha)-\sigma(t,x',\alpha)|&\leq C|x'-x|.
\end{align*} \end{enumerate} \end{ass} \subsection{Viscosity solutions} We define the upper, $v^*$, and lower, $v_*$, semi-continuous envelopes of a function $v$ as \begin{align*} v^*(t,x):=\limsup_{(t',x')\to(t,x),\,t'<T}v(t',x')\quad {\rm and}\quad v_*(t,x):=\liminf_{(t',x')\to(t,x),\,t'<T}v(t',x'). \end{align*} Next we introduce the notion of a viscosity solution using the limiting parabolic superjet $\bar J^+v$ and subjet $\bar J^-v$ of a function $v$ (see pp. 9-10 of \cite{UsersGuide} for a definition): \begin{defn}\label{def:visc-sol-jets} Let $v$ be a locally bounded l.s.c. (resp. u.s.c.) function from $[0,T]\times \mathbb R^n$ to $\mathbb R$. Then, \begin{enumerate}[a)] \item It is referred to as a viscosity supersolution (resp. subsolution) to \eqref{ekv:var-ineq} if: \begin{enumerate}[i)] \item $v(T,x)\geq \psi(x)$ (resp. $v(T,x)\leq \psi(x)$) \item For any $(t,x)\in [0,T)\times\mathbb R^d$ and $(p,q,X)\in \bar J^- v(t,x)$ (resp. $\bar J^+ v(t,x)$) we have \begin{align*} \min\Big\{&v(t,x)-\mathcal M v(t,x),-p-\inf_{\alpha\in A}H(t,x,v(t,x),q,X,\alpha)\Big\}\geq 0 \end{align*} (resp. \begin{align*} \min\Big\{&v(t,x)-\mathcal M v(t,x),-p-\inf_{\alpha\in A}H(t,x,v(t,x),q,X,\alpha)\Big\}\leq 0). \end{align*} \end{enumerate} \item It is referred to as a viscosity solution if it is both a supersolution and a subsolution. \end{enumerate} \end{defn} We will sometimes use the following equivalent definition of viscosity supersolutions (resp. subsolutions): \begin{defn}\label{def:visc-sol-dom} A l.s.c.~(resp. u.s.c.) function $v$ is a viscosity supersolution (subsolution) to \eqref{ekv:var-ineq} if $v(T,x)\geq \psi(x)$ (resp. $\leq \psi(x)$) and whenever $\varphi\in C^{3}_{l,b}([0,T]\times\mathbb R^d\to\mathbb R)$ is such that $\varphi(t,x)=v(t,x)$ and $\varphi-v$ has a local maximum (resp. minimum) at $(t,x)$, then \begin{align*} \min\big\{&v(t,x)-\mathcal M v(t,x),-\varphi_t(t,x)-\inf_{\alpha\in A}H(t,x,v(t,x),D\varphi(t,x),D^2\varphi(t,x),\alpha)\big\}\geq 0\:(\leq 0).
\end{align*} \end{defn} \begin{rem} $C^{3}_{l,b}$ denotes the set of real-valued functions that are continuously differentiable up to third order and whose derivatives of order one to three are bounded. \end{rem} \subsection{Backward semigroups} For $(t,x)\in [0,T]\times \mathbb R^n$ we let $h\in [0,T-t]$ and assume that $\eta\in L^2(\Omega,\mathcal F_{t+h},\mathbb{P})$. For all $(u,\alpha)\in\mathcal U_{t,t+h}\times\mathcal{A}_{t,t+h}$ we then define (see \cite{LiPeng09}) \begin{align} G_{t,t+h}^{t,x;u,\alpha}[\eta]:=\mathcal Y_t, \end{align} where $(\mathcal Y,\mathcal Z)\in\mathcal S^2\times\mathcal H^2$ is the unique solution\footnote{From now on, uniqueness of solutions to a BSDE always means uniqueness in $\mathcal S^2\times\mathcal H^2$, and we therefore refrain from repeating the space.} to \begin{align*} \mathcal Y_s&=\eta+\int_s^{t+h}f(r,X^{t,x;u,\alpha}_r,\mathcal Y_r,\mathcal Z_r)dr-\int_s^{t+h} \mathcal Z_rdW_r -\Xi^{t,x;u,\alpha}_{(t+h)+}+\Xi^{t,x;u,\alpha}_s. \end{align*} The family of operators $G^{t,x;u,\alpha}$ defined in this way is referred to as the backward semigroup related to the BSDE. We note that by the uniqueness of solutions to \eqref{ekv:non-ref-bsde} (see the next section) we have that \begin{align}\label{ekv:G-is-semigroup} G_{t,T}^{t,x;u,\alpha}[\psi(X^{t,x;u,\alpha}_T)]=G_{t,t+h}^{t,x;u_{t+h},\alpha}[Y^{t+h,X^{t,x;u_{t+h},\alpha}_{t+h};u^{t+h},\alpha}_{t+h}]. \end{align} We refer to \eqref{ekv:G-is-semigroup} as the semigroup property of $G$. \section{Forward-Backward SDEs with impulses\label{sec:FBSDEs}} In this section we consider the non-standard BSDE in \eqref{ekv:non-ref-bsde}. Impulsively controlled BSDEs in the non-Markovian framework were treated in \cite{rbsde_impulse}, while BSDEs related to switching problems have been treated in \cite{HuTang,HamZhang,Morlais13}.
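To fix ideas, we record a simple special case of the backward semigroup introduced above (a standard observation, not needed in the sequel): if the driver does not depend on $(\mathcal Y,\mathcal Z)$, say $f=f(r,x)$, and the impulse control $u$ places no interventions in $(t,t+h]$, then taking conditional expectations in the defining BSDE makes the stochastic integral vanish and the intervention costs cancel, leaving \begin{align*} G_{t,t+h}^{t,x;u,\alpha}[\eta]=\mathbb{E}\Big[\eta+\int_t^{t+h}f(r,X^{t,x;u,\alpha}_r)dr\,\Big|\,\mathcal F_t\Big]. \end{align*} The backward semigroup thus generalizes the conditional expectation operator, the nonlinearity entering only through the driver.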
Considering first the forward SDE, we get by repeated use of standard results for SDEs (see \textit{e.g.\ } Chapter 5 in \cite{Protter}) that \eqref{ekv:forward-sde1}-\eqref{ekv:forward-sde2} admits a unique solution $X^{t,x;u,\alpha}$ for any $(u,\alpha)\in\mathcal U\times\mathcal{A}$ since $N<\infty$, $\mathbb{P}$-a.s. Now, any solution of \eqref{ekv:non-ref-bsde} can be written $Y^{t,x;u,\alpha}_s=\tilde Y^{t,x;u,\alpha}_s+\Xi^{t,x;u,\alpha}_s$, where $(\tilde Y^{t,x;u,\alpha},\tilde Z^{t,x;u,\alpha})\in \mathcal S^2_c\times\mathcal H^2$ solves the standard BSDE \begin{align} \tilde Y^{t,x;u,\alpha}_s&=\psi(X^{t,x;u,\alpha}_T)-\Xi^{t,x;u,\alpha}_{T+}+\int_s^Tf(r,X^{t,x;u,\alpha}_r,\tilde Y^{t,x;u,\alpha}_r+\Xi^{t,x;u,\alpha}_r,\tilde Z^{t,x;u,\alpha}_r)dr-\int_s^T \tilde Z^{t,x;u,\alpha}_rdW_r. \label{ekv:non-ref-bsde-std} \end{align} By standard results we find that \eqref{ekv:non-ref-bsde-std} admits a unique solution whenever $\Xi^{t,x;u,\alpha}_{T+}\in L^2(\Omega,\mathbb{P})$ and $f(\cdot,X^{t,x;u,\alpha}_\cdot,0,0)\in\mathcal H^2$. By a moment estimate given in the next section we are able to conclude that \eqref{ekv:non-ref-bsde} admits a unique solution whenever $(u,\alpha)\in\mathcal U\times\mathcal{A}$. \subsection{Estimates for the controlled diffusion process} \begin{prop}\label{prop:SDEmoment} For each $p\geq 1$, there is a $C>0$ such that \begin{align}\label{ekv:SDEmoment} \mathbb{E}\Big[\sup_{s\in[\zeta,T]}|X^{t,x;u,\alpha}_s|^{p}\Big|\mathcal F_\zeta\Big]\leq C(1+|X^{t,x;u,\alpha}_\zeta|^{p}), \end{align} $\mathbb{P}$-a.s.~for all $(t,\zeta,x,u,\alpha)\in [0,T]^2\times\mathbb R^n\times\mathcal U\times\mathcal{A}$. \end{prop} \noindent\emph{Proof.} We use the shorthand $X^j:=X^{t,x;[u]_j,\alpha}$.
By Assumption~\ref{ass:onSFDE}.(\ref{ass:onSFDE-Gamma}) we get for $s\in [\tau_{j},T]$, using integration by parts, that \begin{align*} |X^{j}_s|^2&= |X^{j}_{\tau_{j}}|^2+2\int_{\tau_{j}+}^s X^{j}_{r}dX^{j}_r+\int_{\tau_{j}+}^s d[X^{j},X^{j}]_r \\ &\leq K^2_\Gamma\vee |X^{{j-1}}_{\tau_{j}}|^2+2\int_{\tau_{j}+}^s X^{j}_{r} dX^{j}_r+\int_{\tau_{j}+}^s d[X^{j},X^{j}]_r. \end{align*} We note that if $|X^{j}_s|> K_\Gamma$ and $|X^{j}_r|\leq K_\Gamma$ for some $r\in[\zeta,s)$ then there is a largest time $\theta<s$ such that $|X^{j}_{\theta}|\leq K_\Gamma$. This means that during the interval $(\theta,s]$ interventions will not increase the magnitude $|X^{j}|$. By induction we find that \begin{align}\label{ekv:X2-bound} |X^{j}_s|^2&\leq |X^{j}_\zeta|^2\vee K_\Gamma^2+\sum_{i=0}^{j} \Big\{2\int_{\theta\vee(\tilde\tau_{i}+)}^{s\wedge\tilde\tau_{i+1}} X^{i}_{r}dX^{i}_r+\int_{\theta\vee(\tilde\tau_{i}+)}^{s\wedge\tilde\tau_{i+1}} d[X^{i},X^{i}]_r\Big\} \end{align} for all $s\in[t,T]$, where $\theta:=\sup\{r\leq s : |X^{j}_r|\leq K_\Gamma\}\vee \zeta$, $\tilde\tau_0+=0$, $\tilde\tau_i=\tau_i$ for $i=1,\ldots,j$ and $\tilde\tau_{j+1}=\infty$. Now, since $X^{i}$ and $X^{j}$ coincide on $[0,\tau_{(i+1)\wedge (j+1)})$ we have \begin{align*} \sum_{i=0}^{j}\int_{\theta\vee\tilde\tau_{i}+}^{s\wedge\tilde\tau_{i+1}} X^{i}_{r} dX^{i}_r &=\int_{\theta}^s X^{j}_{r}a(r,X^{j}_r,\alpha_r)dr+\int_{\theta}^{s}X^{j}_{r}\sigma(r,X^{j}_r,\alpha_r)dW_r, \end{align*} and \begin{align*} \sum_{i=0}^{j} \int_{\theta\vee\tilde\tau_{i}+}^{s\wedge\tilde\tau_{i+1}} d[X^{i},X^{i}]_r&=\int_{\theta}^{s} \sigma^2(r,X^{j}_r,\alpha_r)dr.
\end{align*} Inserted in \eqref{ekv:X2-bound} this gives \begin{align*} |X^{j}_s|^2&\leq |X^{j}_\zeta|^2\vee K_\Gamma^2+\int_{\theta}^s (2X^{j}_{r}a(r,X^{j}_r,\alpha_r)+\sigma^2(r,X^{j}_r,\alpha_r))dr+2\int_{\theta}^{s}X^{j}_{r}\sigma(r,X^{j}_r,\alpha_r)dW_r \\ &\leq |X^{j}_\zeta|^2+C\Big(1+\int_{\zeta}^{s}|X^{j}_{r}|^2dr+\sup_{v\in[\zeta,s]}\Big|\int_{\zeta}^{v}X^{j}_r\sigma(r,X^{j}_r,\alpha_r)dW_r\Big|\Big). \end{align*} The Burkholder-Davis-Gundy inequality now gives that for $p\geq 2$, \begin{align*} \mathbb{E}\Big[\sup_{r\in[\zeta,s]}|X^{j}_r|^{p}\Big|\mathcal F_\zeta\Big]\leq |X^{j}_{\zeta}|^p +C\big(1+\mathbb{E}\Big[\int_{\zeta}^{s}|X^{j}_{r}|^{p}dr+\big(\int_{\zeta}^{s}|X^{j}_r|^4 dr\big)^{p/4}\Big|\mathcal F_\zeta\Big]\big) \end{align*} and Gr\"onwall's lemma gives that for $p\geq 4$, \begin{align}\label{ekv:moment_steg1} \mathbb{E}\Big[\sup_{s\in[\zeta,T]}|X^{j}_s|^{p}\big|\mathcal F_\zeta\Big]&\leq C(1+ |X^{j}_\zeta|^{p}), \end{align} $\mathbb{P}$-a.s., where the constant $C=C(T,p)$ does not depend on $u$, $\alpha$ or $j$ and \eqref{ekv:SDEmoment} follows by letting $j\to\infty$ on both sides and using Fatou's lemma. The result for general $p\geq 1$ follows by Jensen's inequality.\qed\\ As mentioned above, inequality \eqref{ekv:SDEmoment} guarantees existence of a unique solution to the BSDE \eqref{ekv:non-ref-bsde}. We will also need the following stability property. \begin{prop}\label{prop:SFDEflow} For each $k\geq 0$ and $p\geq 1$, there is a $C\geq 0$ such that \begin{align*} \mathbb{E}\Big[\sup_{s\in[t',T]}|X^{t,x;u,\alpha}_{s} -X^{t',x';u,\alpha}_{s}|^{p}\Big|\mathcal F_t\Big]\leq C(|x-x'|^p+(1+|x|^p)|t'-t|^{p(\varsigma\wedge 1/2)}), \end{align*} $\mathbb{P}$-a.s.~for all $(t,t',x,x')\in [0,T]^2\times \mathbb R^{2n}$, with $t'\geq t$, and all $(u,\alpha)\in\mathcal U^k\times\mathcal{A}$. \end{prop} \noindent\emph{Proof.} To simplify notation we let $X^j:=X^{t,x;[u]_j,\alpha}$ and $X^{'j}:=X^{t',x';[u]_j,\alpha}$ for $j=0,\ldots,k$.
Moreover, we let $\delta X^j:=X^j- X^{'j}$ and set $\delta X:=\delta X^{k}$. Define $\kappa:=\max\{j\geq 0:\tau_j\leq t'\}\vee 0$, then if $\kappa=0$ we have $|\delta X_{t'}|=|\delta X^0_{t'}|$, where for any value of $\kappa$, \begin{align*} |\delta X^0_{t'}|=|X^{0}_{t'}-x'|. \end{align*} When $\kappa>0$ we get for $j=1,\ldots,\kappa$, \begin{align*} |\delta X^{j}_{t'}|&\leq k_\Gamma(|\delta X^{{j-1}}_{t'}| +|X^{{j-1}}_{t'}-X^{{j-1}}_{\tau_j}|+|t'-t|^\varsigma(1+|X^{{j-1}}_{\tau_j}|+|X^{'{j-1}}_{t'}|)). \end{align*} By induction we find that \begin{align*} |\delta X^{\kappa}_{t'}|&\leq \sum_{j=1}^{\kappa} k_\Gamma^{\kappa+1-j}(|X^{{j-1}}_{t'}-X^{{j-1}}_{\tau_{j}}| + |t'-t|^\varsigma(1+\sup_{s\in [t,t']}|X^{{j-1}}_{s}|+|X^{'{j-1}}_{t'}|)). \end{align*} Now, since \begin{align*} |X^{{j-1}}_{t'}-X^{{j-1}}_{\tau_j}|\leq \int_{\tau_j}^{t'}|a(r,X^{j-1}_r,\alpha_r)|dr+\Big| \int_{\tau_j}^{t'}\sigma(r,X^{j-1}_r,\alpha_r)dW_r\Big|, \end{align*} Proposition~\ref{prop:SDEmoment} gives that \begin{align*} \mathbb{E}\big[|X^{j-1}_{t'}-X^{j-1}_{\tau_j}|^p\big|\mathcal F_t\big] \leq C(1+|x|^p)|t'-t|^{p/2}. \end{align*} Similarly, \begin{align*} \mathbb{E}\big[|X^{0}_{t'}-x'|^p\big|\mathcal F_t\big] \leq C(|x-x'|^p+(1+|x|^p)|t'-t|^{p/2}) \end{align*} and we find that \begin{align*} \mathbb{E}\big[|\delta X_{t'}|^p\big|\mathcal F_t\big] \leq C(|x-x'|^p+(1+|x|^p)|t'-t|^{p(\varsigma\wedge 1/2)}).
\end{align*} Moreover, we note that for $j\geq\kappa$ and $s\geq \tau_j$ (with $\tau_0:=t'$), \begin{align*} |\delta X^{j}_{s}|&\leq (1\vee k_\Gamma)|\delta X^j_{\tau_j}|+\int_{\tau_j}^{s}|a(r,X^{j}_r,\alpha_r)-a(r, X^{'j}_r,\alpha_r)|dr \\ &\quad+\Big|\int_{\tau_j}^{s}(\sigma(r,X^{j}_r,\alpha_r)-\sigma(r,X^{'j}_r,\alpha_r))dW_r\Big| \end{align*} and the Burkholder-Davis-Gundy inequality gives that for $p\geq 2$, \begin{align*} \mathbb{E}\Big[\sup_{r\in[\tau_j,s]}|\delta X^j_{r}|^{p}\Big] &\leq C\mathbb{E}\Big[|\delta X^j_{\tau_j}|^{p}+ \Big(\int_{\tau_j}^s|a(r,X^{j}_r,\alpha_r)- a(r, X^{'j}_r,\alpha_r)|dr\Big)^{p} \\ & + \Big(\int_{\tau_j}^{s}|\sigma(r, X^{'j}_r,\alpha_r)-\sigma(r,X^j_r,\alpha_r)|^2dr\Big)^{p/2}\Big] \\ &\leq C\mathbb{E}\Big[|\delta X^j_{\tau_j}|^{p}+ \big(\int_{\tau_j}^s|\delta X^j_{r}|^{2}dr\big)^{p/2}\Big]. \end{align*} The Lipschitz conditions on the coefficients combined with Gr\"onwall's lemma then imply that \begin{align*} \mathbb{E}\Big[\sup_{r\in[\tau_j,T]}|\delta X^j_{r}|^{p}\Big] &\leq C\mathbb{E}\Big[|\delta X^j_{\tau_j}|^{p}\Big]. \end{align*} Now, since $|\delta X^l_{\tau_l}|\leq k_\Gamma|\delta X^{l-1}_{\tau_l}|$ for $l=\kappa+1,\ldots,N$, the result follows by induction.\qed\\ \subsection{Estimates for the BSDE} For $(t,x)\in[0,T]\times\mathbb R^n$ and $(u,\alpha)\in\mathcal U\times\mathcal{A}$ we let $(\check Y^{t,x;u,\alpha},\check Z^{t,x;u,\alpha})$ be the unique solution to the following standard BSDE \begin{align}\label{ekv:bsde-trad} \check Y_s^{t,x;u,\alpha}=\psi(X^{t,x;u,\alpha}_T)+\int_s^Tf(r,X^{t,x;u,\alpha}_r,\check Y^{t,x;u,\alpha}_r,\check Z^{t,x;u,\alpha}_r)dr-\int_s^T \check Z^{t,x;u,\alpha}_rdW_r.
\end{align} Combining classical results (see \textit{e.g.\ } \cite{ElKaroui2}) with Proposition~\ref{prop:SDEmoment}, we have \begin{align}\nonumber &\mathbb{E}\Big[\sup_{s\in [t,T]}|\check Y_s^{t,x;u,\alpha}|^2+\int_t^T|\check Z^{t,x;u,\alpha}_s|^2ds\Big|\mathcal F_t\Big] \\ &\leq C\mathbb{E}\Big[|\psi(X^{t,x;u,\alpha}_T)|^2+\int_t^T|f(r,X^{t,x;u,\alpha}_r,0,0,\alpha_r)|^2dr\Big|\mathcal F_t\Big]\leq C(1+|x|^{2\rho}),\label{ekv:bsde-trad-moment} \end{align} $\mathbb{P}$-a.s.~for all $(u,\alpha)\in\mathcal U\times\mathcal{A}$. We have the following straightforward generalization of the standard comparison principle: \begin{lem} (Comparison principle) Suppose that $\hat f$ satisfies Assumption~\ref{ass:on-coeff} and let $\hat G^{t,x;u,\alpha}$ be defined as $G^{t,x;u,\alpha}$ but with driver $\hat f$ instead of $f$. If $f(t,x,y,z,\alpha)\leq \hat f(t,x,y,z,\alpha)$ for all $(t,x,y,z,\alpha)\in [0,T]\times\mathbb R^n\times\mathbb R\times\mathbb R^d\times A$, then $G_{s,r}^{t,x;u,\alpha}[\eta]\leq \hat G_{s,r}^{t,x;u,\alpha}[\hat\eta]$, $\mathbb{P}$-a.s.~for each $t\leq s\leq r\leq T$, whenever $\eta,\hat\eta\in L^2(\Omega,\mathcal F_s,\mathbb{P})$ are such that $\eta\leq \hat\eta$, $\mathbb{P}$-a.s.
\end{lem} \noindent\emph{Proof.} This follows immediately from the standard comparison principle (see Theorem 2.2 in \cite{ElKaroui2}).\qed\\ Using the comparison principle we easily deduce the following moment estimates: \begin{prop}\label{prop:BSDEmoment} We have \begin{align}\label{ekv:Ybnd} \mathop{\rm{ess}\sup}_{\alpha\in\mathcal{A}}|\mathop{\rm{ess}\sup}_{u\in\mathcal U}Y^{t,x;u,\alpha}_t|\leq C(1+|x|^\rho),\qquad \mathbb{P}-{\rm a.s.} \end{align} and for each $k\geq 0$, there is a $C>0$ such that \begin{align}\label{ekv:BSDEmoment} \mathbb{E}\Big[\sup_{s\in[t,T]}|Y^{t,x;u,\alpha}_s|^{2}+\int_t^T| Z^{t,x;u,\alpha}_s|^2ds\Big|\mathcal F_t\Big]\leq C(1+|x|^{2\rho}), \end{align} $\mathbb{P}$-a.s.~for all $(t,x,u,\alpha)\in [0,T]\times\mathbb R^n\times\mathcal U^k\times\mathcal{A}$. \end{prop} \noindent\emph{Proof.} The first statement follows by repeated application of the comparison principle, which gives that $\check Y^{t,x;\emptyset,\alpha}_t\leq\mathop{\rm{ess}\sup}_{u\in\mathcal U}Y^{t,x;u,\alpha}_t \leq \mathop{\rm{ess}\sup}_{u\in\mathcal U}\check Y^{t,x;u,\alpha}_t$, and using~\eqref{ekv:bsde-trad-moment}. The second statement follows by noting that for fixed $k\geq 0$, there is a $C>0$ such that \begin{align*} \mathbb{E}[|\Xi^{t,x;u,\alpha}_{T+}|^2]&\leq C(1+\mathbb{E}[\sup_{s\in[t,T]}|X^{t,x;u,\alpha}_{s}|^{2\rho}])\leq C(1+|x|^{2\rho}) \end{align*} for all $(u,\alpha)\in \mathcal U^k\times\mathcal{A}$.\qed\\ \begin{prop}\label{prop:BSDEflow} For each $k\geq 0$, there is a $C>0$ such that \begin{align}\label{ekv:BSDEflow} |\mathbb{E}\big[Y^{t',x';u,\alpha}_{t'}-Y^{t,x;u,\alpha}_{t}\big]|\leq C(1+|x|^{\rho+1}+|x'|^{\rho+1})(|x'-x|+|t'-t|^{\varsigma\wedge 1/2}), \end{align} for all $(t,x),(t',x')\in[0,T]\times\mathbb R^n$ with $t\leq t'$ and all $u\in\mathcal U^k$ and $\alpha\in\mathcal{A}$.
\end{prop} \noindent\emph{Proof.} To simplify notation, we let $X:=X^{t,x;u,\alpha}$ and $X':=X^{t',x';u,\alpha}$ and set $(Y,Z):=(Y^{t,x;u,\alpha},Z^{t,x;u,\alpha})$ and $(Y',Z'):=(Y^{t',x';u,\alpha},Z^{t',x';u,\alpha})$. By defining $\delta Y:=Y-Y'_{\cdot\vee t'}$ and $\delta Z:=Z-\mathbbm{1}_{[\cdot\geq t']}Z'$ we have for $s\in[t,T]$ that \begin{align*} \delta Y_{s}&=\psi(X_T)-\psi(X'_T)+\int_s^{T}(f(r,X_r,Y_r,Z_r,\alpha_r)-f(r,X'_r,Y'_r,Z'_r,\alpha_r))dr \\ &\quad-\int_s^{T} \delta Z_rdW_r-\sum_{j=1}^N(\mathbbm{1}_{[\tau_j \geq s]}\ell(\tau_j,X^{j-1}_{\tau_j},\beta_j)-\mathbbm{1}_{[\tau_j\vee t' \geq s]}\ell(\tau_j\vee t',X^{'j-1}_{\tau_j\vee t'},\beta_j)), \end{align*} with $X^j:=X^{t,x;[u]_j,\alpha}$ and $X^{'j}:=X^{t',x';[u]_j,\alpha}$. We now introduce the processes $(\zeta_1(s))_{s\in [t,T]}$ and $(\zeta_2(s))_{s\in [t,T]}$ defined as\footnote{Throughout, we use the convention that $\frac{0}{0}0=0$} \begin{align*} \zeta_1(s):=\frac{f(s,X_s,Y_s,Z_s,\alpha_s)-f(s,X_s,\mathbbm{1}_{[s\geq t']}Y'_s,Z_s,\alpha_s)}{Y_s-\mathbbm{1}_{[s\geq t']}Y'_s}\mathbbm{1}_{[Y_s\neq \mathbbm{1}_{[s\geq t']}Y'_s]} \end{align*} and \begin{align*} \zeta_2(s):=\frac{f(s,X_s,\mathbbm{1}_{[s\geq t']}Y'_s,Z_s,\alpha_s)-f(s,X_s,\mathbbm{1}_{[s\geq t']}Y'_s,\mathbbm{1}_{[s\geq t']}Z'_s,\alpha_s)}{|Z_s-\mathbbm{1}_{[s\geq t']}Z'_s|^2}(Z_s-\mathbbm{1}_{[s\geq t']}Z'_s)^\top. \end{align*} We then have by the Lipschitz continuity of $f$ that $|\zeta_1(s)|\vee|\zeta_2(s)|\leq k_f$. 
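By construction, $\zeta_1$ and $\zeta_2$ are the usual linearization coefficients: a direct check (adding and subtracting $f(s,X_s,\mathbbm{1}_{[s\geq t']}Y'_s,Z_s,\alpha_s)$) gives the exact identity \begin{align*} f(s,X_s,Y_s,Z_s,\alpha_s)-f(s,X_s,\mathbbm{1}_{[s\geq t']}Y'_s,\mathbbm{1}_{[s\geq t']}Z'_s,\alpha_s)=\zeta_1(s)(Y_s-\mathbbm{1}_{[s\geq t']}Y'_s)+\zeta_2(s)(Z_s-\mathbbm{1}_{[s\geq t']}Z'_s), \end{align*} so that, up to a source term, $\delta Y$ solves a BSDE whose driver is linear in $(\delta Y,\delta Z)$; this is what permits the explicit exponential representation used next.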
Using It\^o's formula we find that \begin{align*} \delta Y_s&=R_{s,T}(\psi(X_T)-\psi(X'_T))+\int_s^{T}R_{s,r}(f(r,X_r,\mathbbm{1}_{[r\geq t']}Y'_r,\mathbbm{1}_{[r\geq t']}Z'_r,\alpha_r)-\mathbbm{1}_{[r\geq t']}f(r,X'_r,Y'_r,Z'_r,\alpha_r))dr \\ &\quad-\int_s^{T} R_{s,r}\delta Z_rdW_r-\sum_{j=1}^N(\mathbbm{1}_{[\tau_j \geq s]}R_{s,\tau_j}\ell(\tau_j,X^{j-1}_{\tau_j},\beta_j)-\mathbbm{1}_{[\tau_j\vee t' \geq s]}R_{s,\tau_j\vee t'}\ell(\tau_j\vee t',X^{'j-1}_{\tau_j\vee t'},\beta_j)) \end{align*} with $R_{s,r}:=e^{\int_s^{r}(\zeta_1(v)-\frac{1}{2}|\zeta_2(v)|^2)dv+\int_s^{r}\zeta_2(v)dW_v}$. Taking expectations on both sides yields \begin{align*} |\mathbb{E}\big[\delta Y_t\big]|&\leq C\mathbb{E}\Big[R_{t,T}(1+|X_T|^\rho+|X'_T|^\rho)|X'_T-X_T|+\int_t^{t'}R_{t,r}(1+|X_r|^\rho)dr \\ &\quad+\int_{t'}^{T}R_{t,r}(1+|X_r|^\rho+|X'_r|^\rho)|X'_r-X_r|dr \\ &\quad+\sum_{j=1}^N R_{t,\tau_j}|\ell(\tau_j,X^{j-1}_{\tau_j},\beta_j)-R_{\tau_j,\tau_j\vee t'}\ell(\tau_j\vee t',X^{'j-1}_{\tau_j\vee t'},\beta_j)|\Big]. \end{align*} Now, \begin{align*} &\mathbb{E}\Big[R_{t,T}(1+|X_T|^\rho+|X'_T|^\rho)|X'_T-X_T|+\int_t^{t'}R_{t,r}(1+|X_r|^\rho)dr+\int_{t'}^{T}R_{t,r}(1+|X_r|^\rho+|X'_r|^\rho)|X'_r-X_r|dr\Big] \\ &\leq C\mathbb{E}\Big[\sup_{s\in [t,T]}|R_{t,s}|^2\Big]^{1/2}\mathbb{E}\Big[(t'-t)\int_t^{t'}(1+|X_r|^{2\rho})dr+\sup_{r\in [t',T]}(1+|X_r|^{2\rho}+|X'_r|^{2\rho})|X'_r-X_r|^2\Big]^{1/2} \\ &\leq C(|t'-t|+\mathbb{E}\Big[\sup_{r\in [t',T]}(1+|X_r|^{4\rho}+|X'_r|^{4\rho})\Big]^{1/4}\mathbb{E}\Big[\sup_{r\in [t',T]}|X'_r-X_r|^4\Big]^{1/4}) \\ &\leq C(1+|x|^\rho+|x'|^\rho)(|x-x'|+(1+|x|)|t'-t|^{\varsigma\wedge 1/2}), \end{align*} where we have used Proposition~\ref{prop:SFDEflow} to reach the last inequality.
Moreover, \begin{align*} &\mathbb{E}\Big[\sum_{j=1}^N R_{t,\tau_j}|\ell(\tau_j,X^{j-1}_{\tau_j},\beta_j)-R_{\tau_j,\tau_j\vee t'}\ell(\tau_j\vee t',X^{'j-1}_{\tau_j\vee t'},\beta_j)|\Big] \\ &\leq \mathbb{E}\Big[\sum_{j=1}^N R_{t,\tau_j}\big((1+R_{\tau_j,\tau_j\vee t'})|\ell(\tau_j,X^{j-1}_{\tau_j},\beta_j)-\ell(\tau_j\vee t',X^{'j-1}_{\tau_j\vee t'},\beta_j)| \\ &\quad+|1-R_{\tau_j,\tau_j\vee t'}|(\ell(\tau_j,X^{j-1}_{\tau_j},\beta_j)+\ell(\tau_j\vee t',X^{'j-1}_{\tau_j\vee t'},\beta_j))\big)\Big] \\ &\leq Ck\mathbb{E}\Big[\sup_{s\in [t,T]}|R_{t,s}|^2\Big]^{1/2}\Big(\mathbb{E}\Big[\sup_{r\in [t,T]}(1+|X_r|^{2\rho}+|X'_r|^{2\rho})(|X'_{r\vee t'}-X_r|^2+|t'-t|^{2\varsigma})\Big]^{1/2} \\ &\quad + \mathbb{E}\Big[\sup_{r\in [t,t']}|1-R_{t,r}|^2(1+|X_r|^{2\rho})\Big]^{1/2}\Big) \\ &\leq Ck(1+|x|^\rho+|x'|^\rho)(|x'-x|+(1+|x|)|t'-t|^{\varsigma\wedge 1/2}). \end{align*} Combining the above inequalities, the assertion follows.\qed\\ The above proof immediately gives the following stability result: \begin{cor} (Stability) If $\hat f$ satisfies Assumption~\ref{ass:on-coeff}, and $\hat G^{t,x;u,\alpha}$ is defined as $G^{t,x;u,\alpha}$ but with driver $\hat f$ instead of $f$, then there is a $C>0$ such that \begin{align*} |\hat G_{t,s}^{t,x;u,\alpha}[\hat\eta]-G_{t,s}^{t,x;u,\alpha}[\eta]|\leq C\mathbb{E}\Big[|\hat\eta-\eta|^2+\int_t^{s}|\hat f(r,X_r,\mathcal Y_r,\mathcal Z_r,\alpha_r)-f(r,X_r,\mathcal Y_r,\mathcal Z_r,\alpha_r)|^2dr \Big|\mathcal F_t\Big]^{1/2}, \end{align*} $\mathbb{P}$-a.s.~for all $s\in [t,T]$ and $\eta,\hat\eta\in L^2(\Omega,\mathcal F_{s},\mathbb{P})$.
\end{cor} \section{Dynamic programming principles\label{sec:DPP}} In this section we show that $V_-$ and $V_+$ are jointly continuous (deterministic) functions that satisfy the dynamic programming relations \begin{align} V_-(t,x)=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_{t,t+h}}\mathop{\rm{ess}\sup}_{u\in\mathcal U_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)}[ V_-(t+h,X^{t,x;u,\alpha^S(u)}_{t+h})]\label{ekv:dynp-W} \end{align} and \begin{align} V_+(t,x)=\mathop{\rm{ess}\sup}_{u^S\in\mathcal U^S_{t,t+h}}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}G_{t,t+h}^{t,x;u^S(\alpha),\alpha}[ V_+(t+h,X^{t,x;u^S(\alpha),\alpha}_{t+h})],\label{ekv:dynp-U} \end{align} for $t\in[0,T]$ and $h\in[0,T-t]$. \begin{prop} For every $(t,x)\in[0,T]\times \mathbb R^n$ we have $V_-(t,x)=\mathbb{E}[V_-(t,x)]$ and $V_+(t,x)=\mathbb{E}[V_+(t,x)]$, $\mathbb{P}$-a.s. \end{prop} \noindent\emph{Proof.} This follows by repeating the steps in the proof of Proposition 4.1 in \cite{Buckdahn08}.\qed\\ We can thus pick the deterministic versions to represent $V_-$ and $V_+$. As mentioned in the introduction, the main technical difficulty that we encounter appears when trying to show continuity of the upper and lower value functions in the time variable. The reason for this is that the constant $C$ in Proposition~\ref{prop:BSDEflow} depends on $k$ and tends to infinity as $k\to\infty$. We resolve this issue by first considering the upper and lower value functions under an imposed restriction on the number of interventions in the impulse control. Relying on a uniform convergence result will then give us continuity of $V_-$ and $V_+$.
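As a simple consistency check on these relations (formal, and not needed in the proofs), note that taking $h=T-t$ in \eqref{ekv:dynp-W} and using $V_-(T,x)=\psi(x)$ (Remark~\ref{rem:@end}) together with $Y^{t,x;u,\alpha}_t=G_{t,T}^{t,x;u,\alpha}[\psi(X^{t,x;u,\alpha}_T)]$ recovers the definition of the lower value function: \begin{align*} V_-(t,x)=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_{t}}\mathop{\rm{ess}\sup}_{u\in\mathcal U_{t}}G_{t,T}^{t,x;u,\alpha^S(u)}\big[\psi(X^{t,x;u,\alpha^S(u)}_T)\big]=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_{t}}\mathop{\rm{ess}\sup}_{u\in\mathcal U_{t}}J(t,x;u,\alpha^S(u)), \end{align*} and analogously for \eqref{ekv:dynp-U}; the content of the DPP lies in the intermediate horizons $h<T-t$.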
\subsection{A DPP with a limited number of impulses} We introduce the truncated value functions \begin{align*} V_-^k(t,x):=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_t}\mathop{\rm{ess}\sup}_{u\in\mathcal U^k_t}J(t,x;u,\alpha^S(u)) \end{align*} and \begin{align*} V_+^k(t,x):=\mathop{\rm{ess}\sup}_{u^S\in\mathcal U^{S,k}_t}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t}J(t,x;u^S(\alpha),\alpha) \end{align*} for $k\geq 0$. Similarly to $V_-$ and $V_+$ we have: \begin{lem} For every $(t,x)\in[0,T]\times \mathbb R^n$ and $k\geq 0$ we have $V_-^k(t,x)=\mathbb{E}[V_-^k(t,x)]$ and $V_+^k(t,x)=\mathbb{E}[V_+^k(t,x)]$, $\mathbb{P}$-a.s. \end{lem} Combined with the estimates of the previous section, this gives the following: \begin{prop}\label{prop:W-k-cont} For each $k\geq 0$, there is a $C>0$ such that \begin{align} |V_-^k(t,x)-V_-^k(t',x')|+|V_+^k(t,x)-V_+^k(t',x')|\leq C(1+|x|^{\rho+1}+|x'|^{\rho+1})(|x'-x|+|t-t'|^{\varsigma\wedge 1/2}), \end{align} for all $(t,x),(t',x')\in [0,T]\times\mathbb R^n$. Moreover, there is a $C>0$ such that \begin{align*} |V_-^k(t,x)|+|V_+^k(t,x)|\leq C(1+|x|^\rho) \end{align*} for all $k\geq 0$ and $(t,x)\in[0,T]\times \mathbb R^n$.
\end{prop} \noindent\emph{Proof.} Since \begin{align*} V_-^k(t,x)=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S}\mathop{\rm{ess}\sup}_{u\in\mathcal U^k}Y^{t,x;u,\alpha^S(u)}_t, \end{align*} we have \begin{align*} V_-^k(t,x)-V_-^k(t',x')&= \mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S}\mathop{\rm{ess}\sup}_{u\in\mathcal U^k}Y^{t,x;u,\alpha^S(u)}_t-\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S}\mathop{\rm{ess}\sup}_{u\in\mathcal U^k}Y^{t',x';u,\alpha^S(u)}_{t'} \\ &\leq\mathop{\rm{ess}\sup}_{\alpha^S\in\mathcal{A}^S}\{\mathop{\rm{ess}\sup}_{u\in\mathcal U^k}Y^{t,x;u,\alpha^S(u)}_t-\mathop{\rm{ess}\sup}_{u\in\mathcal U^k}Y^{t',x';u,\alpha^S(u)}_{t'}\} \\ &\leq \mathop{\rm{ess}\sup}_{\alpha\in\mathcal{A}}\mathop{\rm{ess}\sup}_{u\in\mathcal U^k}\{Y^{t,x;u,\alpha}_t-Y^{t',x';u,\alpha}_{t'}\} \\ &\leq Y^{t,x;u_\varepsilon,\alpha_\varepsilon}_t-Y^{t',x';u_\varepsilon,\alpha_\varepsilon}_{t'}+\varepsilon \end{align*} for each $\varepsilon>0$ and some $(u_\varepsilon,\alpha_\varepsilon)\in \mathcal U^k\times\mathcal{A}$. We also see that the same relation holds for $V_+^k$. Taking expectation on both sides and using that $V_-^k$ and $V_+^k$ are deterministic, the first inequality follows by Proposition~\ref{prop:BSDEflow} since $\varepsilon>0$ was arbitrary.
The second inequality is an immediate consequence of Proposition~\ref{prop:BSDEmoment}.\qed\\ Turning now to the dynamic programming principles, which will be obtained by applying arguments similar to those in Section 4 of \cite{Buckdahn08}, we have: \begin{prop}\label{prop:dynp-trunk} For each $k\geq 0$ and any $t\in[0,T]$, $h\in[0,T-t]$ and $x\in \mathbb R^n$ we have \begin{align}\label{ekv:dynp-W-trunk} V_-^k(t,x)=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_{t,t+h}}\mathop{\rm{ess}\sup}_{u\in\mathcal U^k_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)}[V_-^{k-N}(t+h,X^{t,x;u,\alpha^S(u)}_{t+h})] \end{align} and \begin{align}\label{ekv:dynp-U-trunk} V_+^k(t,x)=\mathop{\rm{ess}\sup}_{u^S\in\mathcal U^{S,k}_{t,t+h}}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}G_{t,t+h}^{t,x;u^S(\alpha),\alpha}[V_+^{k-N}(t+h,X^{t,x;u^S(\alpha),\alpha}_{t+h})]. \end{align} \end{prop} \begin{rem}\label{rem:@tph} At first glance the DPP for $V_+$ may seem counter-intuitive as, on the right-hand side, $\alpha$ could take two different values at time $t+h$ (one under $G$ and the other in $V_+^{k-N}(t+h,\cdot)$) and thus trigger two different reactions from the impulse controller at time $t+h$. However, by the definition of a non-anticipative strategy, $u^S(\alpha)=u^S(\tilde\alpha)$ whenever $\alpha=\tilde\alpha$, $d\mathbb{P}\times d\lambda$-a.s., and an arbitrary choice of $\alpha_{t+h}$ will not influence the overall value. \end{rem} \noindent\emph{Proof.} The proof (which is only given for the lower value function $V_-^k$ as the arguments for $V_+^k$ are identical) will be carried out over a sequence of lemmata where \begin{align*} V_-h^k(t,x):=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_{t,t+h}}\mathop{\rm{ess}\sup}_{u\in\mathcal U^k_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)}[ V_-^{k-N}(t+h,X^{t,x;u,\alpha^S(u)}_{t+h})]. \end{align*} \begin{lem} $V_-h^k$ can be chosen to be deterministic.
\end{lem} \noindent\emph{Proof.} Again, this follows by repeating the steps in the proof of Proposition 4.1 in \cite{Buckdahn08}.\qed\\ \begin{lem}\label{lem:Wh-lessthan-W} $V_-h^k(t,x)\leq V_-^k(t,x)$. \end{lem} \noindent\emph{Proof.} We begin by picking an arbitrary $\alpha^S\in\mathcal{A}^S_t$ and note that we can define the restriction, $\alpha_1^S$, of $\alpha^S$ to $\mathcal{A}^S_{t,t+h}$ as \begin{align*} \alpha^S_1(u_1):=\alpha^S(u_1)\big|_{[t,t+h]},\qquad \forall u_1\in\mathcal U_{t,t+h}. \end{align*} We fix $\varepsilon>0$ and have by a pasting property\footnote{We can paste together two controls $u_1,u_2\in\mathcal U^k_s$ on sets $B_1\in\mathcal F_s$ and $B_2=B_1^c$ by setting $u=\mathbbm{1}_{B_1}u_1+\mathbbm{1}_{B_2}u_2\in\mathcal U^k_s$ and get by uniqueness of solutions to our BSDE that $G_{s,r}^{t,x;u,\alpha}[\eta]=\mathbbm{1}_{B_1}G_{s,r}^{t,x;u_1,\alpha}[\eta]+\mathbbm{1}_{B_2}G_{s,r}^{t,x;u_2,\alpha}[\eta]$.} that there is a $u^\varepsilon_{1}=(\tau^{1,\varepsilon}_j,\beta^{1,\varepsilon}_j)_{1\leq j\leq N^\varepsilon_1}\in \mathcal U^k_{t,t+h}$ such that \begin{align*} V_-h^k(t,x)&\leq\mathop{\rm{ess}\sup}_{u\in\mathcal U^k_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S_1(u)}[V_-^{k-N}(t+h,X^{t,x;u,\alpha^S_1(u)}_{t+h})] \\ &\leq G_{t,t+h}^{t,x;u^\varepsilon_1,\alpha^S_1(u^\varepsilon_1)}[V_-^{k-N^\varepsilon_1}(t+h,X^{t,x;u^\varepsilon_1,\alpha^S_1(u^\varepsilon_1)}_{t+h})]+\varepsilon. \end{align*} Now, given $u^\varepsilon_{1}$ we can define the restriction, $\alpha^S_2$, of $\alpha^S$ to $\mathcal{A}^S_{t+h}$ as \begin{align*} \alpha^S_2(u_2):=\alpha^S(u_1^\varepsilon\oplus u_2)\big|_{[t+h,T]},\qquad \forall u_2\in\mathcal U_{t+h}.
\end{align*} We let $(\mathcal O_i)_{i\geq 1}\subset\mathcal{B}(\mathbb R^n)$ be a partition of $\mathbb R^n$ such that $(1+\sup_{x\in\mathcal O_i}|x|^\rho){\rm diam}(\mathcal O_i)\leq\varepsilon$, then by Proposition~\ref{prop:W-k-cont} there is a $C> 0$ such that $|V_-^j(t+h,x)-V_-^j(t+h,x')|\leq C\varepsilon$ for all $j\in\{0,\ldots,k\}$, $i\geq 1$ and $x,x'\in\mathcal O_i$. We pick $x_i\in\mathcal O_i$ and have by the same pasting property as above that there is for each $i\geq 1$ and $j\in \{0,\ldots,k\}$, a $u^\varepsilon_{2,i,j}\in\mathcal U_{t+h}^j$ such that \begin{align*} V_-^{j}(t+h,x_i) &\leq J(t+h,x_i;u^\varepsilon_{2,i,j},\alpha^S_2(u^\varepsilon_{2,i,j}))+\varepsilon. \end{align*} Consequently, \begin{align*} &V_-^{k-N^\varepsilon_1}(t+h,X^{t,x;u^\varepsilon_1,\alpha_1^S(u^\varepsilon_1)}_{t+h}) \leq \sum_{i\geq 1}\mathbbm{1}_{[X^{t,x;u^\varepsilon_1,\alpha_1^S(u^\varepsilon_1)}_{t+h}\in \mathcal O_i]}V_-^{k-N^\varepsilon_1}(t+h,x_i)+C\varepsilon \\ &\leq\sum_{i\geq 1}\sum_{j=0}^k\mathbbm{1}_{[k-N^\varepsilon_1=j]}\mathbbm{1}_{[X^{t,x;u^\varepsilon_1,\alpha_1^S(u^\varepsilon_1)}_{t+h}\in \mathcal O_i]}J(t+h,x_i;u^\varepsilon_{2,i,j},\alpha^S_2(u^\varepsilon_{2,i,j}))+C\varepsilon \\ &\leq\sum_{i\geq 1}\sum_{j=0}^k\mathbbm{1}_{[k-N^\varepsilon_1=j]}\mathbbm{1}_{[X^{t,x;u^\varepsilon_1,\alpha_1^S(u^\varepsilon_1)}_{t+h}\in \mathcal O_i]}J(t+h,X^{t,x;u_1^\varepsilon,\alpha_1^S(u_1^\varepsilon)}_{t+h};u^\varepsilon_{2,i,j},\alpha^S(u^\varepsilon))+C\varepsilon, \end{align*} with \begin{align*} u^\varepsilon:=u_1^\varepsilon\oplus\sum_{i\geq 1}\sum_{j=0}^k\mathbbm{1}_{[k-N^\varepsilon_1=j]}\mathbbm{1}_{[X^{t,x;u_1^\varepsilon,\alpha_1^S(u_1^\varepsilon)}_{t+h}\in\mathcal O_i]}u^\varepsilon_{2,i,j}.
\end{align*} Using first comparison and then the stability property for BSDEs we find that \begin{align*} V_-h^{k}(t,x) &\leq G_{t,t+h}^{t,x;u^\varepsilon_1,\alpha_1^S(u^\varepsilon_1)}[\sum_{i\geq 1}\sum_{j=0}^k\mathbbm{1}_{[k-N^\varepsilon_1=j]}\mathbbm{1}_{[X^{t,x;u^\varepsilon_1,\alpha_1^S(u^\varepsilon_1)}_{t+h}\in \mathcal O_i]}J(t+h,X^{t,x;u_1^\varepsilon,\alpha_1^S(u_1^\varepsilon)}_{t+h};u^\varepsilon_{2,i,j},\alpha^S(u^\varepsilon))+C\varepsilon] \\ &\leq J(t,x;u^\varepsilon,\alpha^S(u^\varepsilon))+C\varepsilon \\ &\leq \mathop{\rm{ess}\sup}_{u\in\mathcal U^k}J(t,x;u,\alpha^S(u))+C\varepsilon, \end{align*} where $C>0$ only depends on the coefficients of the BSDE. Now, as this holds for all $\alpha^S\in \mathcal{A}^S_t$ we conclude that $V_-h^k(t,x)\leq V_-^k(t,x)+C\varepsilon$, but $\varepsilon>0$ was arbitrary and the result follows.\qed\\ The opposite inequality and its proof are classical (see \textit{e.g.\ } Proposition 1.10 in \cite{FlemSoug89} and Proposition 3.1 in \cite{TangHouSWgame}) and we give the proof only for the sake of completeness. \begin{lem}\label{lem:W-lessthan-Wh} $V_-^k(t,x)\leq V_-h^k(t,x)$. \end{lem} \noindent\emph{Proof.} We again fix an $\varepsilon>0$ and let $(\mathcal O_i)_{i\geq 1}$ be defined as above. We pick an $x_i\in\mathcal O_i$ for each $i\geq 1$ and note that there is an $\alpha_{2,i,j}^S\in\mathcal{A}^S_{t+h,T}$ (see \cite{Buckdahn08} Lemma 4.5) such that \begin{align*} V_-^j(t+h,x_i)\geq J(t+h,x_i;u_2,\alpha^S_{2,i,j}(u_2))-\varepsilon, \end{align*} for all $u_2\in\mathcal U^j_{t+h}$. Moreover, there is an $\alpha^S_1\in\mathcal{A}_{t,t+h}^S$ such that \begin{align*} V_-h^k(t,x)\geq G_{t,t+h}^{t,x;u_1,\alpha^S_1(u_1)}[V_-^{k-N_1}(t+h,X^{t,x;u_1,\alpha^S_1(u_1)}_{t+h})]-\varepsilon, \end{align*} for all $u_1\in\mathcal U^k_{t,t+h}$, where $N_1$ is the number of interventions in $u_1$.
Now, each $u=(\tau_i,\beta_i)_{1\leq i\leq N}\in\mathcal U^k_t$ can be uniquely decomposed as $u=u_1\oplus u_2$ with $u_1\in\mathcal U^k_{t,t+h}$ (with $N_1:=\max\{j\geq 0:\tau_j\leq t+h\}$ interventions) and $u_2\in\mathcal U^k_{t+h}$ (with first intervention at $\tau^2_1>t+h$). Then, \begin{align*} V_-h^k(t,x)&\geq G_{t,t+h}^{t,x;u_1,\alpha^S_1(u_1)}[V_-^{k-N_1}(t+h,X^{t,x;u_1,\alpha^S_1(u_1)}_{t+h})]-\varepsilon \\ &=G_{t,t+h}^{t,x;u_1,\alpha^S_1(u_1)}[\sum_{j=0}^k\mathbbm{1}_{[k-N_1=j]} V_-^{j}(t+h,X^{t,x;u_1,\alpha^S_1(u_1)}_{t+h})]-\varepsilon \\ &\geq G_{t,t+h}^{t,x;u_1,\alpha^S_1(u_1)}[\sum_{j=0}^k\mathbbm{1}_{[k-N_1=j]}\sum_{i\geq 1}\mathbbm{1}_{[X^{t,x;u_1,\alpha^S_1(u_1)}_{t+h}\in\mathcal O_i]} V_-^{j}(t+h,x_i)]-C\varepsilon \\ &\geq G_{t,t+h}^{t,x;u_1,\alpha^S_1(u_1)}[\sum_{j=0}^k\mathbbm{1}_{[k-N_1=j]}\sum_{i\geq 1}\mathbbm{1}_{[X^{t,x;u_1,\alpha^S_1(u_1)}_{t+h}\in\mathcal O_i]} J(t+h,x_i;u_2,\alpha^S_{2,i,j}(u_2))]-C\varepsilon \\ &\geq G_{t,t+h}^{t,x;u_1,\alpha^S_1(u_1)}[\sum_{j=0}^k\mathbbm{1}_{[k-N_1=j]}\sum_{i\geq 1}\mathbbm{1}_{[X^{t,x;u_1,\alpha^S_1(u_1)}_{t+h}\in\mathcal O_i]} J(t+h,X^{t,x;u_1,\alpha^S_1(u_1)}_{t+h};u_2,\alpha^S_{2,i,j}(u_2))]-C\varepsilon \\ &=J(t,x;u,\alpha_1^S(u_1)\oplus_{t+h}\alpha_2^S(u))-C\varepsilon, \end{align*} with \begin{align*} \alpha_2^S(u):=\sum_{j=0}^k\mathbbm{1}_{[k-N_1=j]}\sum_{i\geq 1}\mathbbm{1}_{[X^{t,x;u_1,\alpha^S_1(u_1)}_{t+h}\in\mathcal O_i]}\alpha^S_{2,i,j}(u_2).
\end{align*} Since the map $\alpha^S:u\mapsto \alpha_1^S(u_1)\oplus_{t+h}\alpha_2^S(u)$ belongs to $\mathcal{A}^S_{t}$, we conclude that $V_-h^k(t,x)\geq V_-^k(t,x)-C\varepsilon$, where $C>0$ does not depend on $\varepsilon>0$, which in turn was arbitrary, and the result follows.\qed\\ Similarly, letting $V_+h^k$ denote the right-hand side of \eqref{ekv:dynp-U-trunk}, we find $V_+h^k(t,x)=V_+^k(t,x)$ for each $(t,x)\in[0,T]\times \mathbb R^n$ and the statement in Proposition~\ref{prop:dynp-trunk} follows.\qed\\ \subsection{A DPP for the general case} We turn now to the general case where there is no restriction on the number of interventions in the impulse control. Before taking the limit as $k\to\infty$ in $V_-^k$ and $V_+^k$, we need to delimit the set of impulse controls: \begin{defn}\label{def:sens-controls} For $(t,x)\in [0,T]\times \mathbb R^n$ and $\alpha^S\in\mathcal{A}_t^S$ we let $\bar\mathcal U_{t,x,\alpha^S}$ be the set of all $u\in\mathcal U_t$ such that $Y^{t,x;u,\alpha^S(u)}_s\geq Y^{t,x;u_{s-},\alpha^S(u_{s-})}_s$, $\mathbb{P}$-a.s., for all $s\in[t,T]$. Moreover, we let $\bar\mathcal U^{S,k}_{t,x}$ be the subset of all $u^S\in\mathcal U^{S,k}_t$ such that for each $\alpha\in\mathcal{A}_t$ and $s\in [t,T]$, \begin{align*} \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_s}Y^{t,x;u^S(\alpha\oplus_s\tilde\alpha),\alpha\oplus_s\tilde\alpha}_s\geq \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_s}Y^{t,x;(u^S(\alpha))_{s-},\alpha\oplus_s\tilde\alpha}_s \end{align*} $\mathbb{P}$-a.s. \end{defn} Given an $\alpha^S\in\mathcal{A}_t^S$ we note that the set $\bar\mathcal U_{t,x,\alpha^S}$ consists of all controls $u$ where it is never (on average) beneficial to abandon $u$ and stop intervening on the system for the remainder of the period. Similarly, $\bar\mathcal U^S_{t,x}$ is the set of strategies where, given that the opponent acts rationally, it will never be beneficial to abandon the strategy and stop intervening.
The usefulness of the above definitions in our case lies in the fact that they allow us to bound the corresponding solution to \eqref{ekv:non-ref-bsde} from below by an expression that does not involve intervention costs. In particular, whenever $\alpha^S\in\mathcal{A}^S_t$ and $u\in \bar{\mathcal U}_{t,x,\alpha^S}$ we have \begin{align}\label{ekv:sens-useful-C} Y^{t,x;u,\alpha^S(u)}_s\geq Y^{t,x;u_{s-},\alpha^S(u_{s-})}_s\geq\mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_s}Y^{t,x;u_{s-},\alpha^S(u_{s-})\oplus_s\tilde\alpha}_s \end{align} for all $s\in [t,T]$, and similarly, when $u^S\in\bar{\mathcal U}^{S,k}_{t,x}$ we have \begin{align}\label{ekv:sens-useful-S} Y^{t,x;u^S(\alpha),\alpha}_s\geq \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_s}Y^{t,x;u^S(\alpha\oplus_s\tilde\alpha),\alpha\oplus_s\tilde\alpha}_s\geq \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_s}Y^{t,x;(u^S(\alpha))_{s-},\alpha\oplus_s\tilde\alpha}_s \end{align} for all $\alpha\in\mathcal{A}_t$ and $s\in [t,T]$. The following lemma shows that these sets contain all relevant impulse controls and strategies, respectively. \begin{lem}\label{lem:sens-is-opt} We have \begin{align*} V_-(t,x)=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_t}\mathop{\rm{ess}\sup}_{u\in\bar{\mathcal U}_{t,x,\alpha^S}}J(t,x;u,\alpha^S(u)) \end{align*} and \begin{align*} V_+(t,x)=\mathop{\rm{ess}\sup}_{u^S\in\bar{\mathcal U}^{S}_{t,x}}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t}J(t,x;u^S(\alpha),\alpha). \end{align*} \end{lem} \noindent\emph{Proof.} For any $\alpha^S\in\mathcal{A}^S_t$ and arbitrary $u\in\mathcal U_t\setminus \bar{\mathcal U}_{t,x,\alpha^S}$ we let \begin{align*} \chi:=\inf\big\{s\geq t: Y^{t,x;u,\alpha^S(u)}_s\leq Y^{t,x;u_{s-},\alpha^S(u_{s-})}_s\big\}\wedge T. 
\end{align*} Assumption~\ref{ass:on-coeff}.\ref{ass:on-coeff-@end} implies that $Y^{t,x;u,\alpha^S(u)}_T\leq Y^{t,x;u_{T-},\alpha^S(u_{T-})}_T$ and we get that with \begin{align*} B_1:=\{\omega:Y^{t,x;u,\alpha^S(u)}_\chi\leq Y^{t,x;u_{\chi-},\alpha^S(u_{\chi-})}_\chi\}\in\mathcal F_\chi \end{align*} and \begin{align*} B_2:=\{\omega:Y^{t,x;u,\alpha^S(u)}_{\chi+}\leq Y^{t,x;u_{\chi},\alpha^S(u_{\chi})}_{\chi+}\}\cap B_1^c\in\mathcal F_\chi, \end{align*} the set $(B_1\cup B_2)^c$ is $\mathbb{P}$-negligible. Moreover, since \begin{align*} Y^{t,x;u,\alpha^S(u)}_{\chi+}-Y^{t,x;u,\alpha^S(u)}_{\chi}=Y^{t,x;u_{\chi},\alpha^S(u_{\chi})}_{\chi+}-Y^{t,x;u_{\chi},\alpha^S(u_{\chi})}_{\chi}, \end{align*} it follows that on $B_2$ we have $Y^{t,x;u,\alpha^S(u)}_{\chi}\leq Y^{t,x;u_{\chi},\alpha^S(u_{\chi})}_{\chi}$ and we conclude that, letting $\tilde u:=\mathbbm{1}_{B_1}u_{\chi-}+\mathbbm{1}_{B_2}u_{\chi}$, we have $Y^{t,x;u,\alpha^S(u)}_\chi\leq Y^{t,x;\tilde u,\alpha^S(\tilde u)}_\chi$ $\mathbb{P}$-a.s. By comparison we thus find that $Y^{t,x;u,\alpha^S(u)}_s \leq Y^{t,x;\tilde u,\alpha^S(\tilde u)}_s$, $\mathbb{P}$-a.s.~for all $s\in [t,\chi]$. In particular, this gives that $\tilde u\in \bar{\mathcal U}_{t,x,\alpha^S}$ and $Y^{t,x;u,\alpha^S(u)}_t \leq Y^{t,x;\tilde u,\alpha^S(\tilde u)}_t$, from which we conclude that any $u\in\mathcal U_t\setminus \bar{\mathcal U}_{t,x,\alpha^S}$ is dominated by an element of $\bar{\mathcal U}_{t,x,\alpha^S}$. Since this holds for any $\alpha^S\in\mathcal{A}^S_t$, we have that \begin{align*} \mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_t}\mathop{\rm{ess}\sup}_{u\in\mathcal U_{t}}J(t,x;u,\alpha^S(u))= \mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_t}\mathop{\rm{ess}\sup}_{u\in\bar{\mathcal U}_{t,x,\alpha^S}}J(t,x;u,\alpha^S(u)), \end{align*} proving the first statement. For the second statement we fix $u^S\in\mathcal U_t^{S}\setminus \bar{\mathcal U}_{t,x}^{S}$ and $\alpha\in\mathcal{A}_t$. 
We then set $u=(\tau_i,\beta_i)_{1\leq i\leq N}:=u^S(\alpha)$ and let \begin{align*} N(\alpha):=\min\Big\{j\geq 0: \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_{\tau_j}}Y^{t,x;u^S(\alpha\oplus_{\tau_j}\tilde\alpha),\alpha\oplus_{\tau_j}\tilde\alpha}_{\tau_j}\leq \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_{\tau_j}}Y^{t,x;[u^S(\alpha)]_{j-1},\alpha\oplus_{\tau_j}\tilde\alpha}_{\tau_j}\Big\}. \end{align*} Furthermore, we define $\tilde u^S\in\mathcal U^S_t$ as $\tilde u^S(\alpha):=[u^S(\alpha)]_{N(\alpha)-1}$ and let $\chi(\alpha):=\tau_{N(\alpha)}\wedge T$. By definition we have \begin{align}\label{ekv:end-rel} \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_{\chi(\alpha)}} Y^{t,x;u^S(\alpha\oplus_{\chi(\alpha)}\tilde\alpha),\alpha\oplus_{\chi(\alpha)}\tilde\alpha}_{\chi(\alpha)}\leq \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_{\chi(\alpha)}} Y^{t,x;\tilde u^S(\alpha),\alpha\oplus_{\chi(\alpha)}\tilde\alpha}_{\chi(\alpha)}. \end{align} For $\varepsilon>0$ and $s\in[t,T]$ we let $\mathcal{A}^{\varepsilon}_{s}$ be the subset of all $\hat\alpha\in\mathcal{A}_t$ with $\hat\alpha=\alpha$ on $[t,s)$ such that \begin{align*} \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_{\chi(\hat\alpha)\vee s}}Y^{t,x;u^S(\hat\alpha\oplus_{\chi(\hat\alpha)\vee s}\tilde\alpha),\hat\alpha\oplus_{\chi(\hat\alpha)\vee s}\tilde\alpha}_{\chi(\hat\alpha)\vee s}\geq Y^{t,x;u^S(\hat\alpha),\hat\alpha}_{\chi(\hat\alpha)\vee s}-\varepsilon \end{align*} and similarly let $\tilde{\mathcal{A}}^{\varepsilon}_{s}$ be the subset of all $\hat\alpha\in\mathcal{A}_t$ with $\hat\alpha=\alpha$ on $[t,s)$ such that \begin{align*} \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_{\chi(\hat\alpha)\vee s}}Y^{t,x;\tilde u^S(\hat\alpha\oplus_{\chi(\hat\alpha)\vee s}\tilde\alpha),\hat\alpha\oplus_{\chi(\hat\alpha)\vee s}\tilde\alpha}_{\chi(\hat\alpha)\vee s}\geq Y^{t,x;\tilde u^S(\hat\alpha),\hat\alpha}_{\chi(\hat\alpha)\vee s}-\varepsilon. 
\end{align*} Then, we can repeat the arguments in Lemma~\ref{lem:Wh-lessthan-W} to conclude that for all $s\in [t,T]$, the sets $\mathcal{A}^{\varepsilon}_s$ and $\tilde{\mathcal{A}}^{\varepsilon}_s$ are non-empty and comparison implies that \begin{align}\label{ekv:A-eps-opt} \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_s}Y^{t,x; u^S(\alpha\oplus_s\tilde\alpha),\alpha\oplus_s\tilde\alpha}_s = \mathop{\rm{ess}\inf}_{\hat\alpha\in\mathcal{A}^{\varepsilon}_s}Y^{t,x; u^S(\hat\alpha),\hat\alpha}_s \end{align} and \begin{align}\label{ekv:tilde-A-eps-opt} \mathop{\rm{ess}\inf}_{\hat\alpha\in\mathcal{A}_s}Y^{t,x;\tilde u^S(\alpha\oplus_s\hat\alpha),\alpha\oplus_s\hat\alpha}_s=\mathop{\rm{ess}\inf}_{\hat\alpha\in\tilde{\mathcal{A}}^{\varepsilon}_s}Y^{t,x; \tilde u^S(\hat\alpha),\hat\alpha}_s. \end{align} Moreover, for $\hat\alpha\in\mathcal{A}^{\varepsilon}_s$ and $\tilde\alpha\in\tilde{\mathcal{A}}^{\varepsilon}_s$ with $\hat\alpha=\tilde \alpha$ on $[t,\chi(\hat\alpha))$ we have, by \eqref{ekv:end-rel}, that \begin{align*} Y^{t,x;u^S(\hat\alpha),\hat\alpha}_{\chi(\hat\alpha)} \leq Y^{t,x;\tilde u^S(\tilde\alpha),\tilde\alpha}_{\chi(\tilde\alpha)}+\varepsilon \end{align*} and comparison together with stability then implies that \begin{align*} \mathop{\rm{ess}\inf}_{\hat\alpha\in\mathcal{A}^\varepsilon_s}Y^{t,x;u^S(\hat\alpha),\hat\alpha}_s\leq \mathop{\rm{ess}\inf}_{\hat\alpha\in\tilde{\mathcal{A}}^\varepsilon_s}Y^{t,x;\tilde u^S(\hat\alpha),\hat\alpha}_s+C\varepsilon \end{align*} for all $s\in [t,\chi(\alpha)]$. In particular, since $\varepsilon>0$ was arbitrary, letting $s=t$ and using \eqref{ekv:A-eps-opt} and \eqref{ekv:tilde-A-eps-opt} gives that \begin{align*} \mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t}J(t,x;u^S(\alpha),\alpha)\leq \mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t} J(t,x;\tilde u^S(\alpha),\alpha) \end{align*} and we conclude that $\tilde u^S$ dominates $u^S$. 
On the other hand, by a similar argument we find that \begin{align*} \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_s}Y^{t,x;\tilde u^S(\alpha\oplus_s\tilde\alpha),\alpha\oplus_s\tilde\alpha}_s\geq \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_s}Y^{t,x;(u^S(\alpha))_{s-},\alpha\oplus_s\tilde\alpha}_s =\mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_s}Y^{t,x;(\tilde u^S(\alpha))_{s-},\alpha\oplus_s\tilde\alpha}_s \end{align*} for all $s\in[t,\chi(\alpha)]$ and since $(\tilde u^S(\alpha))_{s-}=\tilde u^S(\alpha)$ on $(\chi(\alpha),T]$ we conclude that $\tilde u^S\in\bar{\mathcal U}^S_{t,x}$ and the assertion follows.\qed\\ In particular, we may w.l.o.g.~restrict our attention to the impulse controls (resp. strategies) of Definition~\ref{def:sens-controls}. The following result relates the number of interventions in these impulse controls and strategies to the magnitude of the initial value and is central to deriving the continuity of $V_-$ and $V_+$. \begin{lem}\label{lem:EN-bound} There is a constant $C>0$ such that \begin{align}\label{ekv:N-bound} \mathbb{E}[N]\leq C(1+|x|^\rho) \end{align} for all $\alpha^S\in \mathcal{A}^S_t$ and $u\in\bar{\mathcal U}_{t,x,\alpha^S}$. Moreover, \eqref{ekv:N-bound} also holds for $u=u^S(\alpha)$ whenever $u^S\in\bar{\mathcal U}^{S}_{t,x}$ and $\alpha\in\mathcal{A}_t$. \end{lem} \noindent \emph{Proof.} Both statements follow by a similar argument and we set $\alpha:=\alpha^S(u)$ (resp. $u:=u^S(\alpha)$). To simplify notation we let $(X,Y,Z):=(X^{t,x;u,\alpha},Y^{t,x;u,\alpha},Z^{t,x;u,\alpha})$ and $X^j:=X^{t,x;[u]_j,\alpha}$ and get that \begin{align*} Y_s&=\psi(X_T)+\int_s^{T}f(r,X_r,Y_r,Z_r,\alpha_r)dr-\int_s^{T} Z_rdW_r-\sum_{\tau_j\geq s}\ell(\tau_j,X^{j-1}_{\tau_j},\beta_j). 
\end{align*} Letting \begin{align*} \zeta^{u,\alpha}_1(s):=\frac{f(s,X^{t,x;u,\alpha}_s,Y^{t,x;u,\alpha}_s,Z^{t,x;u,\alpha}_s,\alpha_s) -f(s,X^{t,x;u,\alpha}_s,0,Z^{t,x;u,\alpha}_s,\alpha_s)}{Y^{t,x;u,\alpha}_s}\mathbbm{1}_{[Y_s\neq 0]} \end{align*} and \begin{align*} \zeta^{u,\alpha}_2(s):=\frac{f(s,X^{t,x;u,\alpha}_s,0,Z^{t,x;u,\alpha}_s,\alpha_s) - f(s,X^{t,x;u,\alpha}_s,0,0,\alpha_s)}{|Z^{t,x;u,\alpha}_s|^2}(Z^{t,x;u,\alpha}_s)^\top \end{align*} we have by the Lipschitz continuity of $f$ that $|\zeta^{u,\alpha}_1(s)|\vee|\zeta^{u,\alpha}_2(s)|\leq k_f$. Using It\^o's formula we find that \begin{align*} Y_s&=R^{u,\alpha}_{s,T}\psi(X_T)+\int_s^{T}R^{u,\alpha}_{s,r}f(r,X_r,0,0,\alpha_r)dr -\int_s^TR^{u,\alpha}_{s,r}Z_rdW_r-\sum_{j=1}^N R^{u,\alpha}_{s,\tau_{j}}\mathbbm{1}_{[\tau_j\geq s]}\ell(\tau_j,X^{j-1}_{\tau_{j}},\beta_j) \end{align*} with $R^{u,\alpha}_{s,r}:=e^{\int_s^{r}(\zeta^{u,\alpha}_1(v)-\frac{1}{2}|\zeta^{u,\alpha}_2(v)|^2)dv+\int_s^{r}\zeta^{u,\alpha}_2(v)dW_v}$. Since the intervention costs are positive, taking the conditional expectation on both sides gives \begin{align*} Y_s&\leq\mathbb{E}\Big[R^{u,\alpha}_{s,T}\psi(X_T)+\int_s^{T}R^{u,\alpha}_{s,r}f(r,X_r,0,0,\alpha_r)dr \Big|\mathcal F_s\Big] \\ &\leq C(1+|X_s|^\rho). \end{align*} On the other hand, by \eqref{ekv:sens-useful-C} (resp.~\eqref{ekv:sens-useful-S}) we have \begin{align*} Y_s&\geq \mathop{\rm{ess}\inf}_{\tilde\alpha\in\mathcal{A}_s}\mathbb{E}\Big[R^{u_{s-},\alpha\oplus_s\tilde\alpha}_{s,T}\psi(X^{t,x;u_{s-},\alpha\oplus_s\tilde\alpha}_T) +\int_s^{T}R^{u_{s-},\alpha\oplus_s\tilde\alpha}_{s,r}f(r,X^{t,x;u_{s-},\alpha\oplus_s\tilde\alpha}_r,0,0,\alpha_r)dr \Big|\mathcal F_s\Big] \\ &\geq -C(1+|X^{t,x;u_{s-},\alpha}_s|^\rho). \end{align*} Proposition~\ref{prop:SDEmoment} then gives \begin{align*} \mathbb{E}\Big[\sup_{s\in[t,T]}|Y_s|^2\Big]&\leq C(1+|x|^{2\rho}). \end{align*} Next, we derive a bound on the $\mathcal H^2$-norm of $Z$. 
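Before doing so, we note for completeness that the weight $R^{u,\alpha}$ above arises from a standard linearization of the driver (a sketch; the identity below is immediate from the definitions of $\zeta^{u,\alpha}_1$ and $\zeta^{u,\alpha}_2$): \begin{align*} f(s,X_s,Y_s,Z_s,\alpha_s)=f(s,X_s,0,0,\alpha_s)+\zeta^{u,\alpha}_1(s)Y_s+\zeta^{u,\alpha}_2(s)Z_s, \end{align*} so that applying It\^o's formula to $s\mapsto R^{u,\alpha}_{t,s}Y_s$ cancels the terms that are linear in $Y$ and $Z$ and yields the representation of $Y_s$ above.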
Applying It\^o's formula to $|Y_s|^{2}$ we get \begin{align}\nonumber |Y_t|^{2}+\int_t^T| Z_s|^2ds&=\psi^2(X_T)+2\int_t^TY_sf(s,X_s,Y_s,Z_s,\alpha_s)ds-2\int_t^T Y_sZ_sdW_s \\ &\quad-\sum_{j=1}^N(2Y^{j-1}_{\tau_{j}}\ell(\tau_j,X^{j-1}_{\tau_{j}},\beta_j)+\ell^2(\tau_j,X^{j-1}_{\tau_{j}},\beta_j)),\label{ekv:from-ito} \end{align} where $Y^{j-1}$ is $Y$ without the first $j-1$ intervention costs. Since the intervention costs are nonnegative, we have \begin{align*} -\sum_{j=1}^N(2Y^{j-1}_{\tau_{j}}\ell(\tau_j,X^{j-1}_{\tau_{j}},\beta_j)+\ell^2(\tau_j,X^{j-1}_{\tau_{j}},\beta_j)) &\leq 2\sup_{s\in [t,T]}|Y_{s}|\sum_{j=1}^N\ell(\tau_j,X^{j-1}_{\tau_{j}},\beta_j) \\ &\leq \kappa \sup_{s\in [t,T]}|Y_{s}|^2+\frac{1}{\kappa}\Big(\sum_{j=1}^N\ell(\tau_j,X^{j-1}_{\tau_{j}},\beta_j)\Big)^2 \end{align*} for any $\kappa>0$. Inserted in \eqref{ekv:from-ito} and using the Lipschitz property of $f$, this gives \begin{align*} |Y_t|^{2}+\int_t^T| Z_s|^2ds&\leq \psi^2(X_T)+(C+\kappa)\sup_{s\in[t,T]}|Y_s|^2+\int_t^T(|f(s,X_s,0,0,\alpha_s)|^2+\frac{1}{2}|Z_s|^2)ds \\ &\quad -2\int_t^T Y_sZ_sdW_s+\frac{1}{\kappa}\Big(\sum_{j=1}^N\ell(\tau_j,X^{j-1}_{\tau_{j}},\beta_j)\Big)^2. \end{align*} Now, as $u\in\mathcal U$, the stochastic integral is uniformly integrable and thus a martingale. To see this, note that the Burkholder-Davis-Gundy inequality gives \begin{align*} \mathbb{E}\Big[\sup_{s\in[t,T]}\Big|\int_t^s Y_rZ_rdW_r\Big|\Big]\leq C\mathbb{E}\Big[\Big(\int_t^T |Y_sZ_s|^2ds\Big)^{1/2}\Big]\leq C\mathbb{E}\Big[\sup_{s\in [t,T]}|Y_s|^2+\int_t^T |Z_s|^2ds\Big]. \end{align*} Taking expectations on both sides thus gives \begin{align*} \mathbb{E}\Big[\int_t^T| Z_s|^2ds\Big]&\leq C(1+\kappa)(1+|x|^{2\rho})+\frac{2}{\kappa}\mathbb{E}\Big[\Big(\sum_{j=1}^N\ell(\tau_j,X^{j-1}_{\tau_{j}},\beta_j)\Big)^2\Big]. 
\end{align*} Finally, as each intervention cost is bounded from below by $\delta>0$, Jensen's inequality gives \begin{align*} \mathbb{E}[N]&\leq \frac{1}{\delta}\mathbb{E}\Big[\Big(\sum_{j=1}^N\ell(\tau_j,X^{j-1}_{\tau_{j}},\beta_j)\Big)^2\Big]^{1/2} \end{align*} and \begin{align*} \mathbb{E}\Big[\Big(\sum_{j=1}^N\ell(\tau_j,X^{j-1}_{\tau_{j}},\beta_j)\Big)^2\Big]&\leq C\mathbb{E}\Big[|Y_t|^2+|\psi(X_T)|^2+\int_t^{T}|f(r,X_r,Y_r,Z_r,\alpha_r)|^2dr+\int_t^{T}|Z_r|^2dr\Big] \\ &\leq C\mathbb{E}\Big[|\psi(X_T)|^2+\sup_{s\in[t,T]}|Y_s|^2+\int_t^{T}(|f(s,X_s,0,0,\alpha_s)|^2+|Z_s|^2)ds\Big] \\ &\leq C(1+\kappa)(1+|x|^{2\rho})+\frac{C}{\kappa}\mathbb{E}\Big[\Big(\sum_{j=1}^N\ell(\tau_j,X^{j-1}_{\tau_{j}},\beta_j)\Big)^2\Big], \end{align*} from which \eqref{ekv:N-bound} follows by choosing $\kappa$ sufficiently large.\qed\\ \begin{lem}\label{lem:unif-conv} There is a $C>0$ such that for all $k\geq 1$ we have \begin{align*} V_-(t,x)-V_-^{k}(t,x)\leq \frac{C(1+|x|^{2\rho})}{\sqrt{k}} \end{align*} for all $(t,x)\in[0,T]\times \mathbb R^n$. In particular, the sequence $\{V_-^k\}_{k\geq 0}$ converges to $V_-$, uniformly on compact subsets of $[0,T]\times\mathbb R^n$. \end{lem} \noindent\emph{Proof.} For each $\alpha^S\in\mathcal{A}^S_t$ and $\varepsilon>0$ there is, by Lemma~\ref{lem:sens-is-opt}, a $u^{\varepsilon}=(\tau_i^{\varepsilon}, \beta_i^{\varepsilon})_{1\leq i\leq N^{\varepsilon}}\in\bar{\mathcal U}_{t,x,\alpha^S}$ such that \begin{align}\label{ekv:u-eps-optimal} \mathop{\rm{ess}\sup}_{u\in\mathcal U_t}J(t,x;u,\alpha^S(u))\leq J(t,x;u^{\varepsilon},\alpha^S(u^{\varepsilon}))+\varepsilon/2, \end{align} $\mathbb{P}$-a.s. Now, let $(Y,Z)=(Y^{{t,x;u^{\varepsilon},\alpha^S(u^{\varepsilon})}},Z^{{t,x;u^{\varepsilon},\alpha^S(u^{\varepsilon})}})$ and for $k\geq 0$, set\\ $(\hat Y,\hat Z)=(Y^{{t,x;[u^{\varepsilon}]_{k},\alpha^S([u^{\varepsilon}]_{k})}},Z^{{t,x;[u^{\varepsilon}]_{k},\alpha^S([u^{\varepsilon}]_{k})}})$, where we recall that $[u]_l$ is the truncation of $u$ to the first $l$ interventions. 
As $\alpha^S([u^{\varepsilon}]_{k})=\alpha^S(u^{\varepsilon})$ on $[0,\tau^{\varepsilon}_{k+1})\cap[0,T]$ we have, with $X:=X^{t,x;u^{\varepsilon},\alpha^S(u^{\varepsilon})}$ and $\hat X:=X^{t,x;[u^{\varepsilon}]_{k},\alpha^S([u^{\varepsilon}]_{k})}$, that $\hat X_s=X_s$ for all $s\in[0,\tau^{\varepsilon}_{k+1})\cap[0,T]$. Letting $\alpha:=\alpha^S(u^{\varepsilon})$ and $\hat\alpha:=\alpha^S([u^{\varepsilon}]_{k})$, this gives \begin{align*} Y_t-\hat Y_t&=\psi(X_T)-\psi(\hat X_T) +\int_t^{T}(f(s,X_s,Y_s,Z_s,\alpha_s)- f(s,\hat X_s,\hat Y_s,\hat Z_s,\hat\alpha_s))ds \\ &\quad -\int_t^{T}(Z_s- \hat Z_s)dW_s+\Xi^{t,x;[u^{\varepsilon}]_{k},\hat\alpha}_{T+}-\Xi^{t,x;u^{\varepsilon},\alpha}_{T+} \\ &\leq \mathbbm{1}_{[N^{\varepsilon}>k]}\Big(R_T(\psi(X_T)-\psi(\hat X_T)) +\int_t^{T}R_s(f(s,X_s,Y_s,Z_s,\alpha_s)- f(s,\hat X_s,Y_s,Z_s,\hat\alpha_s))ds\Big) \\ &\quad-\int_t^{T}R_s(Z_s- \hat Z_s)dW_s \end{align*} for some $R_{s}:=e^{\int_t^{s}(\zeta_1(r)-\frac{1}{2}|\zeta_2(r)|^2)dr+\int_t^{s}\zeta_2(r)dW_r}$ with $|\zeta_1(r)|\vee|\zeta_2(r)|\leq k_f$. Taking expectations on both sides and using the Cauchy-Schwarz inequality gives \begin{align*} \mathbb{E}[Y_t-\hat Y_t]&\leq\mathbb{E}\Big[\mathbbm{1}_{[N^{\varepsilon}>k]}\Big(R_T(\psi(X_T) - \psi(\hat X_T)) +\int_t^{T}R_s(f(s,X_s,Y_s,Z_s,\alpha_s)- f(s,\hat X_s,Y_s,Z_s,\hat\alpha_s))ds\Big) \Big] \\ &\leq C(1+|x|^{\rho})\mathbb{E}\big[\mathbbm{1}_{[N^{\varepsilon}>k]}\big]^{1/2}. \end{align*} Now, as $u^{\varepsilon}\in\bar{\mathcal U}_{t,x,\alpha^S}$, Lemma~\ref{lem:EN-bound} and Markov's inequality imply that \begin{align*} \mathbb{E}\big[\mathbbm{1}_{[N^{\varepsilon}>k]}\big]\leq \frac{C(1+|x|^\rho)}{k}. 
\end{align*} Since $\alpha^S$ was arbitrary, we can pick $\alpha^S\in\mathcal{A}^S_t$ such that \begin{align*} V_-^{k}(t,x)\geq \mathop{\rm{ess}\sup}_{u\in\mathcal U^{k}_t}J(t,x;u,\alpha^S(u))-\varepsilon/2 \end{align*} and we find that \begin{align*} V_-(t,x)-V_-^{k}(t,x)&\leq \mathop{\rm{ess}\sup}_{u\in\mathcal U_t} J(t,x;u,\alpha^S(u))- \mathop{\rm{ess}\sup}_{u\in\mathcal U^{k}_t} J(t,x;u,\alpha^S(u))+\varepsilon/2 \\ &\leq J(t,x;u^{\varepsilon},\alpha^S(u^{\varepsilon}))-J(t,x;[u^{\varepsilon}]_{k},\alpha^S([u^{\varepsilon}]_{k}))+\varepsilon \\ &\leq \frac{C(1+|x|^{2\rho})}{\sqrt{k}}+\varepsilon, \end{align*} from which the desired inequality follows since $\varepsilon>0$ was arbitrary. In particular, we find that $V_-^{k}$ converges uniformly on sets where $x$ is bounded.\qed\\ \begin{thm}\label{thm:W-is-cont} $V_-$ is continuous and satisfies \eqref{ekv:dynp-W}. \end{thm} \noindent\emph{Proof.} Since the sequence $\{V_-^k\}_{k\geq 0}$ is non-decreasing, Lemma~\ref{lem:unif-conv} implies that $V_-^k\nearrow V_-$ uniformly on compacts as $k\to\infty$. Hence, $V_-$ is continuous. It remains to show that $V_-$ satisfies \eqref{ekv:dynp-W}. We have by \eqref{ekv:dynp-W-trunk} and comparison that \begin{align*} V_-^k(t,x)&=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_{t,t+h}}\mathop{\rm{ess}\sup}_{u\in\mathcal U^{k}_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)} [V_-^{k-N}(t+h,X^{t,x;u,\alpha^S(u)}_{t+h})] \\ &\leq \mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_{t,t+h}}\mathop{\rm{ess}\sup}_{u\in\mathcal U_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)}[V_-(t+h,X^{t,x;u,\alpha^S(u)}_{t+h})] =:V_{-,h}(t,x) \end{align*} and it follows that $V_-\leq V_{-,h}$. 
On the other hand, for each $\varepsilon>0$ and any $\alpha^S\in\mathcal{A}^S_t$ we can repeat the argument in Lemma~\ref{lem:unif-conv} to find that there is a $k\geq 0$ such that \begin{align*} \mathop{\rm{ess}\sup}_{u\in\mathcal U_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)}[V_-(t+h,X^{t,x;u,\alpha^S(u)}_{t+h})] &\leq \mathop{\rm{ess}\sup}_{u\in\mathcal U^{k}_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)}[V_-(t+h,X^{t,x;u,\alpha^S(u)}_{t+h})]+\varepsilon/2. \end{align*} Moreover, for each $(\alpha,u)\in\mathcal{A}_t\times\mathcal U_t$, let $(\mathcal Y^k,\mathcal Z^k)$ be the unique solution to \begin{align*} \mathcal Y^k_s&=V_-^k(t+h,X^{t,x;u,\alpha}_{t+h})+\int_s^{t+h}f(r,X^{t,x;u,\alpha}_r,\mathcal Y^k_r,\mathcal Z^k_r,\alpha_r)dr \\ &\quad-\int_s^{t+h} \mathcal Z^k_rdW_r -\Xi^{t,x;u,\alpha}_{T+}+\Xi^{t,x;u,\alpha}_s \end{align*} and let $(\mathcal Y,\mathcal Z)$ satisfy \begin{align*} \mathcal Y_s&=V_-(t+h,X^{t,x;u,\alpha}_{t+h})+\int_s^{t+h}f(r,X^{t,x;u,\alpha}_r,\mathcal Y_r,\mathcal Z_r,\alpha_r)dr \\ &\quad-\int_s^{t+h} \mathcal Z_rdW_r -\Xi^{t,x;u,\alpha}_{T+}+\Xi^{t,x;u,\alpha}_s. \end{align*} Then $\mathcal Y_t\geq \mathcal Y^k_t$ by comparison and \begin{align}\nonumber \mathcal Y_t-\mathcal Y^k_t&=\mathbb{E}\big[R_T(V_-(t+h,X^{t,x;u,\alpha}_{t+h}) - V_-^k(t+h,X^{t,x;u,\alpha}_{t+h}))\big|\mathcal F_t\big] \\ &\leq \frac{C}{\sqrt k}\mathbb{E}\big[R_T(1+|X^{t,x;u,\alpha}_{t+h}|^{2\rho})\big|\mathcal F_t\big],\label{ekv:G-stability} \end{align} with $R_{s}:=e^{\int_t^{s}(\zeta_1(r)-\frac{1}{2}|\zeta_2(r)|^2)dr+\int_t^{s}\zeta_2(r)dW_r}$, where $|\zeta_1(r)|\vee|\zeta_2(r)|\leq k_f$. 
Since the right-hand side of the above inequality tends to 0 as $k\to\infty$, we conclude, by taking the essential supremum over all $(\alpha^S,u)\in\mathcal{A}^S_t\times\mathcal U_t$, that there is a $k'\geq k$ such that \begin{align*} \mathop{\rm{ess}\sup}_{u\in\mathcal U^{k'}_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)}[V_-(t+h,X^{t,x;u,\alpha^S(u)}_{t+h})]\leq \mathop{\rm{ess}\sup}_{u\in\mathcal U^{k'}_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)}[V_-^{k'}(t+h,X^{t,x;u,\alpha^S(u)}_{t+h})]+\varepsilon/2 \end{align*} for each $\alpha^S\in\mathcal{A}^S_t$. We conclude that $V_{-,h}(t,x)\leq V_-^{k'}(t,x)+\varepsilon\leq V_-(t,x)+\varepsilon$ and since $\varepsilon>0$ was arbitrary it follows that $V_-$ satisfies \eqref{ekv:dynp-W}.\qed\\ \begin{lem}\label{lem:unif-conv-U} There is a constant $C>0$ such that for all $k\geq 1$ we have \begin{align*} V_+(t,x)-V_+^{k}(t,x)\leq \frac{C(1+|x|^{2\rho})}{\sqrt{k}} \end{align*} for all $(t,x)\in[0,T]\times \mathbb R^n$. In particular, the sequence $\{V_+^k\}_{k\geq 0}$ converges to $V_+$, uniformly on compact subsets of $[0,T]\times\mathbb R^n$. \end{lem} \noindent\emph{Proof.} For each $\varepsilon>0$ there is a $u^{\varepsilon}=(\tau_i^{\varepsilon},\beta_i^{\varepsilon})_{1\leq i\leq N^{\varepsilon}}\in\bar{\mathcal U}^{S}_{t,x}$ such that \begin{align*} \mathop{\rm{ess}\sup}_{u^S\in\mathcal U^{S}_t}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t} J(t,x;u^S(\alpha),\alpha)\leq \mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t}J(t,x;u^{\varepsilon}(\alpha),\alpha)+\varepsilon, \end{align*} $\mathbb{P}$-a.s. 
Then, for $k\geq 0$, \begin{align*} V_+(t,x)-V_+^{k}(t,x)&=\mathop{\rm{ess}\sup}_{u\in\mathcal U^{S}_{t}}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t} J(t,x;u(\alpha),\alpha)-\mathop{\rm{ess}\sup}_{u\in\mathcal U^{S,k}_t}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t} J(t,x;u(\alpha),\alpha) \\ &\leq \mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t}J(t,x;u^{\varepsilon}(\alpha),\alpha)-\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_t}J(t,x;[u^{\varepsilon}(\alpha)]_{k},\alpha)+\varepsilon \\ & \leq \mathop{\rm{ess}\sup}_{\alpha\in\mathcal{A}_t}\{J(t,x;u^{\varepsilon}(\alpha),\alpha)-J(t,x;[u^{\varepsilon}(\alpha)]_{k},\alpha)\}+\varepsilon. \end{align*} By arguing as in the proof of Lemma~\ref{lem:unif-conv}, the result now follows.\qed\\ \begin{thm} $V_+$ is continuous and satisfies \eqref{ekv:dynp-U}. \end{thm} \noindent\emph{Proof.} As above, we find that $V_+^k\nearrow V_+$ uniformly on compacts and conclude that $V_+$ is continuous. We have again by comparison that \begin{align*} V_+(t,x)\leq \mathop{\rm{ess}\sup}_{u^S\in\mathcal U^S_{t,t+h}}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}G_{t,t+h}^{t,x;u^S(\alpha),\alpha}[V_+(t+h,X^{t,x;u^S(\alpha),\alpha}_{t+h})] =:V_{+,h}(t,x). \end{align*} Moreover, for each $\varepsilon>0$ there is a $k\geq 0$ such that \begin{align*} V_{+,h}(t,x)\leq \mathop{\rm{ess}\sup}_{u^S\in\mathcal U^{S,k}_{t,t+h}}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}G_{t,t+h}^{t,x;u^S(\alpha),\alpha}[V_+(t+h,X^{t,x;u^S(\alpha),\alpha}_{t+h})] +\varepsilon/2. 
\end{align*} Finally, by \eqref{ekv:G-stability} there is a $k'\geq k$ such that \begin{align*} &\mathop{\rm{ess}\sup}_{u^S\in\mathcal U^{S,k'}_{t,t+h}}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}G_{t,t+h}^{t,x;u^S(\alpha),\alpha}[ V_+(t+h,X^{t,x;u^S(\alpha),\alpha}_{t+h})] \\ &\leq \mathop{\rm{ess}\sup}_{u^S\in\mathcal U^{S,k'}_{t,t+h}}\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}G_{t,t+h}^{t,x;u^S(\alpha),\alpha}[ V_+^{k'}(t+h,X^{t,x;u^S(\alpha),\alpha}_{t+h})]+\varepsilon/2 \end{align*} and we conclude that $V_+$ satisfies \eqref{ekv:dynp-U}.\qed\\ \section{The value functions as viscosity solutions to the HJBI-QVI\label{sec:hjbi-qvi}} Our main motivation for deriving the dynamic programming relations in the previous section is that we wish to use them to prove that the upper and lower value functions are solutions, in the viscosity sense, to the Hamilton-Jacobi-Bellman-Isaacs quasi-variational inequality \eqref{ekv:var-ineq}. Whenever $V_-(t,x)>\mathcal M V_-(t,x)$ (resp. $V_+(t,x)>\mathcal M V_+(t,x)$), a simple application of the dynamic programming principle shows that it is suboptimal for the impulse controller to intervene on the system at time $t$. One main ingredient in proving that $V_-$ (resp. $V_+$) is a viscosity solution to \eqref{ekv:var-ineq} is showing that if $V_-(t,x)>\mathcal M V_-(t,x)$ (resp. $V_+(t,x)>\mathcal M V_+(t,x)$), then, on sufficiently small time intervals, we may (to a sufficient accuracy) assume that the impulse controller does not intervene on the system. As the probability that the state, starting from $x$ at time $t$, leaves any ball of finite radius containing $x$ during a non-empty interval $[t,t+h)$ is positive, this step requires a somewhat intricate analysis compared to the deterministic setting (something that was pointed out already in \cite{TangHouSWgame}). 
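To indicate the quantitative mechanism behind this, note that by Chebyshev's inequality combined with the moment estimate \eqref{ekv:divergence-bound} derived below (a sketch; the constant $C$ is generic, as in the corresponding proofs), for any $p\geq 2$, $\varepsilon>0$ and $\alpha\in\mathcal{A}_{t,t+h}$, \begin{align*} \mathbb{P}\Big[\sup_{s\in[t,t+h]}|X^{t,x;\emptyset,\alpha}_s-x|>\varepsilon\,\Big|\,\mathcal F_t\Big]\leq \varepsilon^{-p}\mathbb{E}\Big[\sup_{s\in[t,t+h]}|X^{t,x;\emptyset,\alpha}_s-x|^p\Big|\mathcal F_t\Big]\leq C\varepsilon^{-p}h^{p/2}(1+|x|^p), \end{align*} so that, although the probability of leaving a fixed ball is positive, it is of order $h^{p/2}$ and can be made negligible on small time intervals by choosing $p$ large.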
In the following sequence of lemmas we extend the results from \cite{TangHouSWgame} to the case when the cost functional is defined in terms of the solution to a BSDE. The first lemma is given without proof as it follows immediately from the definitions: \begin{lem}\label{lem:mcM-monotone} Let $u,v:[0,T]\times \mathbb R^n\to\mathbb R$ be locally bounded functions. Then $\mathcal M$ is monotone: if $u\leq v$ pointwise, then $\mathcal M u\leq \mathcal M v$. Moreover, $\mathcal M(u_*)$ (resp. $\mathcal M(u^*)$) is l.s.c.~(resp. u.s.c.). \end{lem} In addition, rather than relying on the standard DPP from the previous section, formulated at deterministic times, we need the following ``weak'' DPP: \begin{lem}\label{lem:alter-dpp} Assume that $(t,x)\in[0,T]\times\mathbb R^n$ and $h\in[0,T-t]$. Then for any $\alpha\in\mathcal{A}_{t,t+h}$ we have \begin{align} V_-(t,x)\leq\mathop{\rm{ess}\sup}_{\tau\in\mathcal T_t}G_{t,\tau\wedge (t+h)}^{t,x;\emptyset,\alpha}[\mathbbm{1}_{[\tau\leq t+h]}\mathcal M V_-(\tau,X^{t,x;\emptyset,\alpha}_{\tau})+\mathbbm{1}_{[\tau>t+h]}V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]\label{ekv:dynp2-W} \end{align} and \begin{align} V_+(t,x)\leq\mathop{\rm{ess}\sup}_{\tau\in\mathcal T_t}G_{t,\tau\wedge (t+h)}^{t,x;\emptyset,\alpha}[\mathbbm{1}_{[\tau\leq t+h]}\mathcal M V_+(\tau,X^{t,x;\emptyset,\alpha}_{\tau})+\mathbbm{1}_{[\tau>t+h]}V_+(t+h,X^{t,x;\emptyset,\alpha}_{t+h})].\label{ekv:dynp2-U} \end{align} \end{lem} \noindent \emph{Proof.} For $\varepsilon>0$ we let $\alpha^S(u):=\alpha\oplus_{\tau_1}\alpha^{S,\varepsilon}(u)$ for all $u\in\mathcal U_{t,t+h}$, where $\alpha^{S,\varepsilon}\in\mathcal{A}^S_{t,t+h}$ is such that\footnote{We can repeat the approximation routine from Lemma~\ref{lem:W-lessthan-Wh} to show that such a strategy exists.} \begin{align} &\mathbbm{1}_{[\tau_1\leq t+h]}V_-(\tau_1,X^{t,x;(\tau_1,\beta_1),\alpha}_{\tau_1})\geq \mathbbm{1}_{[\tau_1\leq t+h]}G_{\tau_1, 
t+h}^{\tau_1,X^{t,x;(\tau_1,\beta_1),\alpha}_{\tau_1};[u]_{2:},\alpha^{S,\varepsilon}(u)}[V_-(t+h,X^{t,x;u,\alpha^{S}(u)}_{t+h})]-\varepsilon,\label{ekv:weird-alpha} \end{align} $\mathbb{P}$-a.s., for all $u\in\mathcal U_{t,t+h}$, where $[u]_{2:}:=(\tau_i,\beta_i)_{2\leq i\leq N}$. Then $\alpha^S\in\mathcal{A}^S_{t,t+h}$ and there is a $u^\varepsilon:=(\tau_i^\varepsilon,\beta_i^\varepsilon)_{1\leq i\leq N^\varepsilon}\in\mathcal U_{t,t+h}$ such that \begin{align*} V_-(t,x)\leq G_{t,t+h}^{t,x;u^\varepsilon,\alpha^S(u^\varepsilon)}[V_-(t+h,X^{t,x;u^\varepsilon,\alpha^S(u^\varepsilon)}_{t+h})]+\varepsilon. \end{align*} On the other hand, the semi-group property of $G$ along with \eqref{ekv:weird-alpha} and comparison gives that \begin{align*} &G_{t,t+h}^{t,x;u^\varepsilon,\alpha^S(u^\varepsilon)}[V_-(t+h,X^{t,x;u^\varepsilon,\alpha^S(u^\varepsilon)}_{t+h})] \\ &= G_{t,\tau^\varepsilon_1\wedge (t+h)}^{t,x;(\tau^\varepsilon_1,\beta^\varepsilon_1),\alpha^S(u^\varepsilon)}[G_{\tau^\varepsilon_1\wedge (t+h),t+h}^{\tau^\varepsilon_1,X^{t,x;(\tau^\varepsilon_1,\beta^\varepsilon_1),\alpha}_{\tau^\varepsilon_1};[u^\varepsilon]_{2:},\alpha^S(u^\varepsilon)}[V_-(t+h,X^{t,x;u^\varepsilon,\alpha^S(u^\varepsilon)}_{t+h})]] \\ &\leq G_{t,\tau^\varepsilon_1\wedge (t+h)}^{t,x;\emptyset,\alpha}[V_-(\tau^\varepsilon_1\wedge (t+h),X^{t,x;(\tau^\varepsilon_1,\beta^\varepsilon_1),\alpha}_{\tau^\varepsilon_1\wedge (t+h)})-\mathbbm{1}_{[\tau^\varepsilon_1\leq t+h]}\ell(\tau^\varepsilon_1,X^{t,x;\emptyset,\alpha}_{\tau^\varepsilon_1},\beta^\varepsilon_1)+\varepsilon] \\ &\leq G_{t,\tau_1^\varepsilon\wedge (t+h)}^{t,x;\emptyset,\alpha}[\mathbbm{1}_{[\tau_1^\varepsilon\leq t+h]}\mathcal M V_-(\tau^\varepsilon_1,X^{t,x;\emptyset,\alpha}_{\tau^\varepsilon_1})+\mathbbm{1}_{[\tau_1^\varepsilon> t+h]} 
V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]+C\varepsilon. \end{align*} Since $\varepsilon>0$ was arbitrary, the first inequality follows by taking the essential supremum over all $\tau_1^\varepsilon\in\mathcal T_t$. Concerning the second inequality, we have that for each $\varepsilon>0$ there is a $u^{S}_\varepsilon\in\mathcal U^S_{t,t+h}$ such that \begin{align*} V_+(t,x)\leq G_{t,t+h}^{t,x;u^S_\varepsilon(\tilde \alpha),\tilde \alpha}[V_+(t+h,X^{t,x;u^S_\varepsilon(\tilde \alpha),\tilde \alpha}_{t+h})]+\varepsilon \end{align*} for all $\tilde\alpha\in\mathcal{A}_{t,t+h}$. With $(\tau^\varepsilon_1,\beta^\varepsilon_1)=[u^S_\varepsilon(\alpha)]_1$ (assuming that $\tau^\varepsilon_1=\infty$ when $u^S_\varepsilon(\alpha)$ does not contain interventions) we let $\alpha^\varepsilon\in\mathcal{A}_{\tau^\varepsilon_1,t+h}$ be such that \begin{align*} \mathbbm{1}_{[\tau_1^\varepsilon\leq t+h]}V_+(\tau^\varepsilon_1,X^{t,x;(\tau^\varepsilon_1,\beta^\varepsilon_1),\alpha}_{\tau^\varepsilon_1})\geq \mathbbm{1}_{[\tau_1^\varepsilon\leq t+h]}G_{\tau^\varepsilon_1, t+h}^{\tau^\varepsilon_1,X^{t,x;(\tau^\varepsilon_1,\beta^\varepsilon_1),\alpha}_{\tau^\varepsilon_1}; [u^S_\varepsilon(\alpha^\varepsilon)]_{2:},\alpha^\varepsilon}[ V_+(t+h,X^{t,x;u^S_\varepsilon(\alpha^\varepsilon),\alpha}_{t+h})]-\varepsilon. 
\end{align*} Applying the continuous control $\alpha\oplus_{\tau^\varepsilon_1}\alpha^\varepsilon$ (see Remark~\ref{rem:@tph} concerning the value at the point of concatenation) and using the semi-group property of $G$ (as above) now leads to the second inequality.\qed\\ \begin{lem}\label{lem:do-nothing} Let $(t,x)\in [0,T)\times \mathbb R^n$ be such that $V_-(t,x)>\mathcal M V_-(t,x)$. Then there is a $C>0$ and an $h'\in (0,T-t]$ such that \begin{align*} V_-(t,x)\leq \mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}G_{t,t+h}^{t,x;\emptyset,\alpha}[V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]+ Ch^{3/2} \end{align*} for all $h\in[0,h']$. \end{lem} \noindent \emph{Proof.} Since $V_-$ is continuous, Lemma~\ref{lem:mcM-monotone} implies that so is $\mathcal M V_-$. There are thus an $h''>0$ and an $\varepsilon>0$ such that \begin{align*} \inf_{(t',x')\in [t,t+h'']\times \bar B_\varepsilon(x)}V_-(t',x')\geq \sup_{(t',x')\in [t,t+h'']\times \bar B_\varepsilon(x)}\mathcal M V_-(t',x')+\varepsilon, \end{align*} with $\bar B_\varepsilon(x):=\{x'\in\mathbb R^n:|x'-x|\leq\varepsilon\}$. For each $\alpha\in\mathcal{A}_{t,t+h}$ we have, with $X:=X^{t,x;\emptyset,\alpha}$, for $p\geq 2$ that \begin{align}\nonumber \mathbb{E}\Big[\sup_{s\in [t,t+h]}|X_s-x|^p\Big|\mathcal F_t\Big]&\leq C\mathbb{E}\Big[\Big(\int_t^{t+h}|a(s,X_s)|ds\Big)^p+\Big(\int_t^{t+h}|\sigma(s,X_s)|^2ds\Big)^{p/2}\Big|\mathcal F_t\Big] \\ &\leq Ch^{p/2}\Big(1+\mathbb{E}\Big[\sup_{s\in [t,t+h]}|X_s|^p\Big|\mathcal F_t\Big]\Big)\nonumber \\ &\leq Ch^{p/2}(1+|x|^p),\label{ekv:divergence-bound} \end{align} $\mathbb{P}$-a.s. We introduce the stopping time \begin{align*} \eta:=\inf\big\{s\geq t: X_s\notin B_\varepsilon(x)\big\} \end{align*} (with $\inf\emptyset=\infty$) and get that \begin{align*} \mathbb{E}\big[\mathbbm{1}_{[\eta\leq t+h]}\big|\mathcal F_t \big]\varepsilon^p\leq\mathbb{E}\Big[\sup_{s\in [t,t+h]}|X_s-x|^p\Big|\mathcal F_t\Big]&\leq Ch^{p/2}(1+|x|^p), \end{align*} $\mathbb{P}$-a.s. 
Choosing $p=6$ gives \begin{align*} \mathbb{E}\big[\mathbbm{1}_{[\eta\leq t+h]}\big|\mathcal F_t \big]\leq \varepsilon^{-6}Ch^{3}(1+|x|^6), \end{align*} $\mathbb{P}$-a.s. Using this inequality we will show that there is a $C>0$ such that for some $h'\in(0,h'']$ and all $h\in [0,h']$ we have \begin{align*} \mathop{\rm{ess}\sup}_{\tau\in\mathcal T_t}G_{t,\tau\wedge t+h}^{t,x;\emptyset,\alpha}[\mathbbm{1}_{[\tau\leq t+h]}\mathcal M V_-(\tau,X_{\tau})+\mathbbm{1}_{[\tau>t+h]}V_-(t+h,X_{t+h})]\leq G_{t,t+h}^{t,x;\emptyset,\alpha}[V_-(t+h,X_{t+h})]+ Ch^{3/2} \end{align*} for all $\alpha\in\mathcal{A}_{t,t+h}$, from which the result of the lemma follows by Lemma~\ref{lem:alter-dpp}. For any $\tau\in\mathcal T_t$, let $(\mathcal Y^1,\mathcal Z^1)$ be the unique solution to \begin{align*} \mathcal Y^1_s&=\mathbbm{1}_{[\tau\leq t+h]}\mathcal M V_-(\tau,X_{\tau}) + \mathbbm{1}_{[\tau>t+h]}V_-(t+h,X_{t+h}) \\ &\quad+\int_s^{\tau\wedge t+h} f(r,X_r,\mathcal Y^1_r,\mathcal Z^1_r,\alpha_r)dr-\int_s^{\tau\wedge t+h} \mathcal Z^1_rdW_r, \end{align*} with $X:=X^{t,x;\emptyset,\alpha}$, and let $(\mathcal Y^2,\mathcal Z^2)$ solve \begin{align*} \mathcal Y^2_s&=V_-(t+h,X_{t+h})+\int_s^{t+h} f(r,X_r,\mathcal Y^2_r,\mathcal Z^2_r,\alpha_r)dr-\int_s^{t+h} \mathcal Z^2_rdW_r.
\end{align*} Then, with \begin{align}\label{ekv:zeta_1-in-DN} \zeta_1(s):=\frac{f(s,X_s,\mathcal Y^2_s,\mathcal Z^2_s,\alpha_s) -f(s,X_s,\mathbbm{1}_{[s\leq\tau]}\mathcal Y^1_s,\mathcal Z^2_s,\alpha_s)} {\mathcal Y^2_s-\mathbbm{1}_{[s\leq\tau]}\mathcal Y^1_s} \mathbbm{1}_{[\mathcal Y^2_s\neq\mathbbm{1}_{[s\leq\tau]}\mathcal Y^1_s]} \end{align} and \begin{align}\label{ekv:zeta_2-in-DN} \zeta_2(s):=\frac{f(s,X_s,\mathbbm{1}_{[s\leq\tau]}\mathcal Y^1_s,\mathcal Z^2_s,\alpha_s) -f(s,X_s,\mathbbm{1}_{[s\leq\tau]}\mathcal Y^1_s,\mathbbm{1}_{[s\leq\tau]}\mathcal Z^1_s,\alpha_s)} {|\mathcal Z^2_s-\mathbbm{1}_{[s\leq\tau]}\mathcal Z^1_s|^2} (\mathcal Z^2_s-\mathbbm{1}_{[s\leq\tau]}\mathcal Z^1_s)^\top\mathbbm{1}_{[\mathcal Z^2_s\neq\mathbbm{1}_{[s\leq\tau]}\mathcal Z^1_s]} \end{align} we have by the Lipschitz continuity of $f$ that $|\zeta_1(s)|\vee|\zeta_2(s)|\leq k_f$. Then, with \begin{align*} M_{r,s}:=e^{\int_r^{s}(\zeta_1(v)-\frac{1}{2}|\zeta_2(v)|^2)dv+\int_r^{s}\zeta_2(v)dW_v}=:e^{\int_r^{s}\zeta_1(v)dv}\tilde M_{r,s}, \end{align*} we have \begin{align*} \mathcal Y^1_t-\mathcal Y^2_t&=\mathbb{E}\Big[\mathbbm{1}_{[\tau< t+h]}\Big\{M_{t,\tau}(\mathcal M V_-(\tau,X_{\tau})-M_{\tau,t+h}V_-(t+h,X_{t+h})) -\int_\tau^{t+h}M_{t,s}f(s,X_s,0,0,\alpha_s)ds\Big\}\Big|\mathcal F_t\Big] \\ &=\Lambda_1(h)+\Lambda_2(h), \end{align*} where \begin{align*} \Lambda_1(h):\!&\!=\mathbb{E}\Big[\mathbbm{1}_{[\eta<t+h]}\mathbbm{1}_{[\tau< t+h]}\Big\{M_{t,\tau}(\mathcal M V_-(\tau,X_{\tau})-M_{\tau,t+h}V_-(t+h,X_{t+h})) \\ &\quad-\int_\tau^{t+h}M_{t,s}f(s,X_s,0,0,\alpha_s)ds\Big\}\Big|\mathcal F_t\Big] \\ &\leq C\mathbb{E}\big[\mathbbm{1}_{[\eta<t+h]}\big|\mathcal F_t\big]^{1/2}\mathbb{E}\Big[|M_{t,\tau}\mathcal M V_-(\tau,X^{t,x;\emptyset,\alpha}_{\tau})|^2+|M_{t,t+h}V_-(t+h,X_{t+h})|^2 \\ &\quad+\int_\tau^{t+h}|M_{t,s}f(s,X_s,0,0,\alpha_s)|^2ds\Big|\mathcal F_t\Big]^{1/2} \\ &\leq C(1+|x|^\rho)h^{3/2} \end{align*} and \begin{align*} \Lambda_2(h):\!&\!=\mathbb{E}\Big[\mathbbm{1}_{[\eta\geq t+h]}\mathbbm{1}_{[\tau< t+h]}\Big\{M_{t,\tau}(\mathcal M V_-(\tau,X_{\tau})-M_{\tau,t+h}V_-(t+h,X_{t+h})) \\
&\quad-\int_\tau^{t+h}M_{t,s}f(s,X_s,0,0,\alpha_s)ds\Big\}\Big|\mathcal F_t\Big] \\ &\leq \mathbb{E}\Big[\mathbbm{1}_{[\eta\geq t+h]}\mathbbm{1}_{[\tau< t+h]}\Big\{M_{t,\tau}(\mathcal M V_-(\tau,X_{\tau})-V_-(t+h,X_{t+h})+(1-M_{\tau,t+h})V_-(t+h,X_{t+h})) \\ &\quad+C(1+|x|^\rho)\int_{\tau}^{t+h}M_{t,s}ds\Big\}\Big|\mathcal F_t\Big] \\ &\leq \mathbb{E}\Big[\mathbbm{1}_{[\eta\geq t+h]}\mathbbm{1}_{[\tau< t+h]}\Big\{-\varepsilon M_{t,\tau}+(M_{t,t+h}-M_{t,\tau}+\int_{\tau}^{t+h}M_{t,s}ds)C(1+|x|^\rho)\Big\}\Big|\mathcal F_t\Big], \end{align*} where we have used the polynomial growth of $V_-$ and $\mathcal M V_-$ together with the fact that $\sup_{s\in[t,t+h]}|X_s|\leq |x|+\varepsilon$ on $\{\omega:\eta\geq t+h\}$. We can now get rid of $\mathbbm{1}_{[\eta\geq t+h]}$ and use the martingale property of $\tilde M$ to find that \begin{align*} \Lambda_2(h)&\leq \mathbb{E}\Big[\mathbbm{1}_{[\tau< t+h]}\Big\{-\varepsilon M_{t,\tau}+(M_{t,t+h}-M_{t,\tau}+\int_{\tau}^{t+h}M_{t,s}ds)C(1+|x|^\rho)\Big\}\Big|\mathcal F_t\Big]+C(1+|x|^\rho)h^{3/2} \\ &\leq\mathbb{E}\Big[\mathbbm{1}_{[\tau< t+h]}e^{\int_t^{\tau}\zeta_1(r)dr}\tilde M_{t,t+h}\Big\{-\varepsilon +C(1+|x|^\rho)(e^{k_f h}-1+he^{k_f h}) \Big\}\Big|\mathcal F_t\Big]+C(1+|x|^\rho)h^{3/2} \\ &\leq C(1+|x|^\rho)h^{3/2} \end{align*} whenever $h>0$ is small enough that $-\varepsilon +C(1+|x|^\rho)(e^{k_f h}-1+he^{k_f h})\leq 0$. Combined, this gives that there is an $h'\in (0,h'']$ such that whenever $h\in [0,h']$ we have $\mathcal Y^1_t-\mathcal Y^2_t\leq C(1+|x|^\rho)h^{3/2}$. Since $\tau$ and $\alpha$ were arbitrary, the assertion follows.\qed\\ \begin{lem}\label{lem:do-nothing-U} Let $(t,x)\in [0,T)\times \mathbb R^n$ be such that $V_+(t,x)>\mathcal M V_+(t,x)$. Then there are a $C>0$ and an $h'\in (0,T-t]$ such that \begin{align*} V_+(t,x)\leq \mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}G_{t,t+h}^{t,x;\emptyset,\alpha}[V_+(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]+ Ch^{3/2} \end{align*} for all $h\in[0,h']$.
\end{lem} \noindent\emph{Proof.} As in the proof of the above lemma, there are an $h''>0$ and an $\varepsilon>0$ such that \begin{align*} \inf_{(t',x')\in [t,t+h'']\times \bar B_\varepsilon(x)}V_+(t',x')\geq \sup_{(t',x')\in [t,t+h'']\times \bar B_\varepsilon(x)}\mathcal M V_+(t',x')+\varepsilon. \end{align*} We can thus repeat the steps in the previous lemma to conclude that there is a $C>0$ such that \begin{align*} \mathop{\rm{ess}\sup}_{\tau\in\mathcal T_t}G_{t,\tau\wedge t+h}^{t,x;\emptyset,\alpha} [\mathbbm{1}_{[\tau\leq t+h]}\mathcal M V_+(\tau,X_{\tau})+\mathbbm{1}_{[\tau>t+h]}V_+(t+h,X_{t+h})]\leq G_{t,t+h}^{t,x;\emptyset,\alpha}[V_+(t+h,X_{t+h})]+ Ch^{3/2} \end{align*} for all $\alpha\in\mathcal{A}_{t,t+h}$ and $h\in [0,h']$ for some $h'\in (0,h'']$. The lemma then follows by applying the second inequality in Lemma~\ref{lem:alter-dpp}.\qed\\ We now fix $(t,x)\in[0,T)\times \mathbb R^n$, $h\in (0,T-t]$ and $\varphi\in C^{3}_{l,b}$. Following the standard procedure for passing from a DPP to a quasi-variational inequality when dealing with a controlled FBSDE (see \textit{e.g.\ } \cite{Li14}) we introduce the BSDEs \begin{align*} Y^{1,\alpha}_{s}=\int_s^{t+h}F(r,X^{t,x;\emptyset,\alpha}_r,Y^{1,\alpha}_r,Z^{1,\alpha}_r,\alpha_r)dr-\int_s^{t+h}Z^{1,\alpha}_rdW_r \end{align*} and \begin{align}\label{ekv:bsde-Y2} Y^{2,\alpha}_{s}=\int_s^{t+h}F(r,x,Y^{2,\alpha}_r,Z^{2,\alpha}_r,\alpha_r)dr-\int_s^{t+h}Z^{2,\alpha}_rdW_r, \end{align} with \begin{align*} F(s,x,y,z,\alpha)&:=\frac{\partial}{\partial s}\varphi(s,x)+\frac{1}{2}{\rm Tr}\{\sigma\sigma^\top(s,x,\alpha)D^2_x\varphi(s,x)\}+(D_x\varphi(s,x))b(s,x,\alpha) \\ &\quad +f(s,x,\varphi(s,x)+y,(D_x\varphi(s,x))\sigma(s,x,\alpha)+z,\alpha). \end{align*} \begin{rem} It is easy to check that the driver $F$ satisfies Assumption~\ref{ass:on-coeff}.\ref{ass:on-coeff-f}, from which we conclude that the above BSDEs both admit unique solutions.
\end{rem} In particular, we note that $u$ is a viscosity supersolution (subsolution) of \eqref{ekv:var-ineq} if $u(T,x)\geq(\leq)\psi(x)$, $u(t,x)\geq \mathcal M u(t,x)$ and $\inf_{\alpha\in A}F(t,x,0,0,\alpha)\leq 0$ ($\geq 0$) on $\mathcal D_C(u):=\{(t,x):u(t,x)> \mathcal M u(t,x)\}$ whenever $\varphi\in C^3_{l,b}$ is such that $u(t,x)=\varphi(t,x)$ and $u-\varphi$ attains a local minimum (maximum) at $(t,x)$. Note that the only reason that \eqref{ekv:bsde-Y2} is stochastic is that $\alpha$ is a stochastic control. With regard to the minimization of the Hamiltonian it therefore seems natural to introduce the following ordinary differential equation (ODE) \begin{align*} Y^{0}_{s}=\int_s^{t+h}\inf_{\alpha\in A}F(r,x,Y^{0}_r,0,\alpha)dr. \end{align*} We have the following auxiliary lemma, which summarizes the results in Lemmas 5.1 and 5.3 of \cite{Buckdahn08}. \begin{lem}\label{lem:Y1-to-G} For every $\alpha\in\mathcal{A}_{t,t+h}$ and $s\in[t,t+h]$ we have \begin{align}\label{ekv:Y1-to-G} Y^{1,\alpha}_s=G^{t,x;\emptyset,\alpha}_{s,t+h}[\varphi(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]-\varphi(s,X^{t,x;\emptyset,\alpha}_{s}),\qquad \mathbb{P}-{\rm a.s.} \end{align} Also, we have that \begin{align}\label{ekv:Y0-rep} Y^{0}_t=\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}Y^{2,\alpha}_t,\qquad \mathbb{P}-{\rm a.s.} \end{align} \end{lem} \noindent \emph{Proof.} The first property follows from the definition of $G$ and It\^o's formula applied to $\varphi(s,X^{t,x;\emptyset,\alpha}_{s})$. The second result is immediate from the comparison principle for BSDEs.\qed\\ We now give a sequence of lemmata that will help us show that $V_-$ is a viscosity solution to \eqref{ekv:var-ineq}.
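For the reader's convenience, the It\^o computation behind \eqref{ekv:Y1-to-G} can be sketched as follows (a purely formal sketch; here $X:=X^{t,x;\emptyset,\alpha}$ and the notation is that of the definition of $F$ above):

```latex
% Set \bar Y_s := Y^{1,\alpha}_s + \varphi(s,X_s) and
% \bar Z_s := Z^{1,\alpha}_s + (D_x\varphi(s,X_s))\sigma(s,X_s,\alpha_s).
% It\^o's formula applied to \varphi(s,X_s) gives
d\varphi(s,X_s) = \Big(\frac{\partial}{\partial s}\varphi
  + \frac{1}{2}{\rm Tr}\{\sigma\sigma^\top D^2_x\varphi\}
  + (D_x\varphi) b\Big)(s,X_s,\alpha_s)\,ds
  + (D_x\varphi(s,X_s))\sigma(s,X_s,\alpha_s)\,dW_s.
```

Adding this to the dynamics of $Y^{1,\alpha}$, the first three terms of $F$ cancel, so $(\bar Y,\bar Z)$ solves the BSDE with driver $f$ and terminal condition $\varphi(t+h,X_{t+h})$; its value at time $s$ is, by definition, $G^{t,x;\emptyset,\alpha}_{s,t+h}[\varphi(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]$, which is precisely \eqref{ekv:Y1-to-G}.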
\begin{lem}\label{lem:Y1-Y2-diff} For every $\alpha\in\mathcal{A}_{t,t+h}$ we have \begin{align*} |Y^{1,\alpha}_t-Y^{2,\alpha}_t|\leq Ch^{3/2},\qquad \mathbb{P}-{\rm a.s.} \end{align*} \end{lem} \noindent \emph{Proof.} Note that \begin{align*} |Y^{1,\alpha}_t-Y^{2,\alpha}_t|&\leq\mathbb{E}\Big[\int_t^{t+h}| F(s,X^{t,x;\emptyset,\alpha}_s,Y^{1,\alpha}_s,Z^{1,\alpha}_s,\alpha_s)-F(s,x,Y^{2,\alpha}_s,Z^{2,\alpha}_s,\alpha_s)|ds \Big|\mathcal F_t\Big] \\ &\leq C\mathbb{E}\Big[\int_t^{t+h}((1+|x|^\rho+|X^{t,x;\emptyset,\alpha}_s|^\rho)|X^{t,x;\emptyset,\alpha}_s-x|+|Y^{1,\alpha}_s-Y^{2,\alpha}_s|+|Z^{1,\alpha}_s -Z^{2,\alpha}_s|)ds \Big|\mathcal F_t\Big]. \end{align*} Concerning the first term on the right-hand side we have \begin{align*} \mathbb{E}\Big[\int_t^{t+h}(1+|x|^\rho+|X^{t,x;\emptyset,\alpha}_s|^\rho)|X^{t,x;\emptyset,\alpha}_s-x|ds \Big|\mathcal F_t\Big]&\leq C\mathbb{E}\Big[\sup_{s\in[t,t+h]}|X^{t,x;\emptyset,\alpha}_s-x|^{2}\Big|\mathcal F_t\Big]^{1/2}h \\ &\leq Ch^{3/2}. \end{align*} For the remaining terms, \begin{align*} &\mathbb{E}\Big[\int_t^{t+h}(|Y^{1,\alpha}_s-Y^{2,\alpha}_s|+|Z^{1,\alpha}_s-Z^{2,\alpha}_s|)ds \Big|\mathcal F_t\Big] \\ &\leq \sqrt 2\mathbb{E}\Big[\int_t^{t+h}(|Y^{1,\alpha}_s-Y^{2,\alpha}_s|^2+|Z^{1,\alpha}_s-Z^{2,\alpha}_s|^2)ds \Big|\mathcal F_t\Big]^{1/2}h^{1/2} \end{align*} and classically we have \begin{align*} &\mathbb{E}\Big[\int_t^{t+h}(|Y^{1,\alpha}_s-Y^{2,\alpha}_s|^2+|Z^{1,\alpha}_s-Z^{2,\alpha}_s|^2)ds \Big|\mathcal F_t\Big] \\ &\leq C\mathbb{E}\Big[\int_t^{t+h}|F(s,X^{t,x;\emptyset,\alpha}_s,Y^{1,\alpha}_s,Z^{1,\alpha}_s,\alpha_s)-F(s,x,Y^{1,\alpha}_s,Z^{1,\alpha}_s,\alpha_s)|^2ds \Big|\mathcal F_t\Big] \\ &\leq C\mathbb{E}\Big[\int_t^{t+h}(1+|x|^{2\rho}+|X^{t,x;\emptyset,\alpha}_s|^{2\rho})|X^{t,x;\emptyset,\alpha}_s-x|^2ds \Big|\mathcal F_t\Big] \\ &\leq Ch^{2}. \end{align*} Combining the above estimates, the desired result follows.\qed\\ \begin{lem}\label{lem:Y0-int} There is a $C>0$ such that \begin{align*} \int_t^{t+h}|Y^{0}_s|ds\leq Ch^{3/2}
\end{align*} for each $t\in[0,T]$ and $h\in [0,T-t]$. \end{lem} \noindent \emph{Proof.} Gr\"onwall's inequality gives that \begin{align*} \sup_{s\in [t,t+h]}|Y^{0}_s|\leq Ch \end{align*} and we conclude that $\int_t^{t+h}|Y^{0}_s|ds\leq h\sup_{s\in [t,t+h]}|Y^{0}_s|\leq Ch^2\leq CT^{1/2}h^{3/2}$. \qed\\ \begin{lem}\label{lem:local-minimum} Assume that $\varphi\in \Pi_{pg}$ is such that $\varphi-V_-$ has a local maximum at $(t,x)$ where $\varphi(t,x)=V_-(t,x)$. Then, there are constants $C,h'>0$ such that \begin{align*} G_{t,t+h}^{t,x;\emptyset,\alpha}[V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]\geq G_{t,t+h}^{t,x;\emptyset,\alpha}[\varphi(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]-Ch^{3/2} \end{align*} for all $h\in [0,(T-t)\wedge h']$ and $\alpha\in\mathcal{A}_{t,t+h}$. \end{lem} \noindent \emph{Proof.} Since $\varphi-V_-$ has a local maximum at $(t,x)$, there are constants $\varepsilon>0$ and $h'>0$ such that $V_-(t',x')\geq \varphi(t',x')$ for all $(t',x')\in [t,(t+h')\wedge T]\times \bar B_\varepsilon(x)$. Now, let \begin{align*} \eta:=\inf\{s\geq t: X^{t,x;\emptyset,\alpha}_s\notin B_\varepsilon(x)\} \end{align*} and note from the proof of Lemma~\ref{lem:do-nothing} that $\mathbb{E}[\mathbbm{1}_{[\eta\leq t+h]}|\mathcal F_t]\leq Ch^3$, $\mathbb{P}$-a.s. Assume that $h\in [0,T-t]$ and let $(\mathcal Y^1,\mathcal Z^1)$ be the unique solution to \begin{align*} \mathcal Y^1_s&=V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})+\int_s^{t+h} f(r,X^{t,x;\emptyset,\alpha}_r,\mathcal Y^1_r,\mathcal Z^1_r,\alpha_r)dr-\int_s^{t+h} \mathcal Z^1_rdW_r, \end{align*} and let $(\mathcal Y^2,\mathcal Z^2)$ be the solution to \begin{align*} \mathcal Y^2_s&=\varphi(t+h,X^{t,x;\emptyset,\alpha}_{t+h})+\int_s^{t+h} f(r,X^{t,x;\emptyset,\alpha}_r,\mathcal Y^2_r,\mathcal Z^2_r,\alpha_r)dr-\int_s^{t+h} \mathcal Z^2_rdW_r.
\end{align*} Then, with \begin{align*} M_s:=e^{\int_t^{s}(\zeta_1(r)-\frac{1}{2}|\zeta_2(r)|^2)dr+\int_t^{s}\zeta_2(r)dW_r}, \end{align*} where $\zeta_1$ and $\zeta_2$ are given by \eqref{ekv:zeta_1-in-DN}-\eqref{ekv:zeta_2-in-DN}, we have by comparison that \begin{align*} \mathcal Y^2_t-\mathcal Y^1_t&\leq \mathbb{E}\big[\mathbbm{1}_{[\eta\leq t+h]}M_{t+h}(\varphi(t+h,X^{t,x;\emptyset,\alpha}_{t+h})-V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h}))\big|\mathcal F_t\big] \\ &\leq \sqrt{2}\mathbb{E}\big[\mathbbm{1}_{[\eta\leq t+h]}\big|\mathcal F_t\big]^{1/2}\mathbb{E}\big[M_{t+h}^2(|\varphi(t+h,X^{t,x;\emptyset,\alpha}_{t+h})|^2 +|V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})|^2)\big|\mathcal F_t\big]^{1/2} \\ &\leq Ch^{3/2} \end{align*} and the result follows.\qed\\ \begin{thm} $V_-$ is a viscosity solution to \eqref{ekv:var-ineq}. \end{thm} \noindent \emph{Proof.} To begin with, we clearly have that $V_-(T,x)=\psi(x)$ for all $x\in\mathbb R^n$ (see Remark~\ref{rem:@end}). We first show that $V_-$ is a viscosity supersolution. For this, we fix $(t,x)\in [0,T)\times \mathbb R^n$ and assume that $\varphi$ is such that $\varphi-V_-$ has a local maximum at $(t,x)$, where $\varphi(t,x)=V_-(t,x)$. If $(t,x)\in \mathcal D_C(V_-)$ we have by the DPP that \begin{align*} \varphi(t,x)=V_-(t,x)&=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_{t,t+h}}\mathop{\rm{ess}\sup}_{u\in\mathcal U^k_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)}[V_-(t + h,X^{t,x;u,\alpha^S(u)}_{t+h})] \\ &\geq \mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}G_{t,t+h}^{t,x;\emptyset,\alpha}[V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]. \end{align*} On the other hand, by Lemma~\ref{lem:local-minimum} we have for $h>0$ sufficiently small that \begin{align*} G_{t,t+h}^{t,x;\emptyset,\alpha}[\varphi(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]\leq G_{t,t+h}^{t,x;\emptyset,\alpha}[V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]+Ch^{3/2}.
\end{align*} Now, \eqref{ekv:Y1-to-G} gives \begin{align*} Y^{1,\alpha}_t\leq G_{t,t+h}^{t,x;\emptyset,\alpha}[V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]-\varphi(t,x)+Ch^{3/2}. \end{align*} Combined, this gives \begin{align*} \mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}Y^{1,\alpha}_t\leq Ch^{3/2}. \end{align*} In particular, by Lemma~\ref{lem:Y1-Y2-diff} and \eqref{ekv:Y0-rep} this implies that \begin{align*} Y^0_t=\mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}Y^{2,\alpha}_t\leq Ch^{3/2}. \end{align*} Hence, $\limsup_{h\to 0} h^{-1}Y^0_t\leq 0$ and we conclude by Lemma~\ref{lem:Y0-int} that \begin{align*} 0&\geq\limsup_{h\to 0} h^{-1}\int_{t}^{t+h}\inf_{\alpha\in A}F(s,x,Y^0_s,0,\alpha)ds \\ &\geq \limsup_{h\to 0} h^{-1}\int_{t}^{t+h}\inf_{\alpha\in A}(F(s,x,0,0,\alpha)-C|Y^0_s|)ds \\ &=\lim_{h\to 0} h^{-1}\int_{t}^{t+h}\inf_{\alpha\in A}F(s,x,0,0,\alpha)ds \end{align*} and by continuity of $\inf_{\alpha\in A}F(\cdot,x,0,0,\alpha)$ it follows that \begin{align*} \inf_{\alpha\in A} F(t,x,0,0,\alpha)\leq 0. \end{align*} If instead $(t,x)\in \mathcal D_S(V_-):=([0,T]\times\mathbb R^n)\setminus \mathcal D_C(V_-)$, then $V_-(t,x)=\mathcal M V_-(t,x)$ and we conclude that $V_-$ is a viscosity supersolution.\\ We turn now to the subsolution property. We fix $(t,x)\in [0,T)\times \mathbb R^n$ and assume that $\varphi$ is such that $\varphi-V_-$ has a local minimum at $(t,x)$, where $\varphi(t,x)=V_-(t,x)$.
If $(t,x)\in \mathcal D_C(V_-)$ we have by the DPP and Lemma~\ref{lem:do-nothing} that, whenever $h>0$ is sufficiently small, \begin{align*} \varphi(t,x)=V_-(t,x)&=\mathop{\rm{ess}\inf}_{\alpha^S\in\mathcal{A}^S_{t,t+h}}\mathop{\rm{ess}\sup}_{u\in\mathcal U^k_{t,t+h}}G_{t,t+h}^{t,x;u,\alpha^S(u)}[V_-(t + h,X^{t,x;u,\alpha^S(u)}_{t+h})] \\ &\leq \mathop{\rm{ess}\inf}_{\alpha\in\mathcal{A}_{t,t+h}}G_{t,t+h}^{t,x;\emptyset,\alpha}[V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})] + Ch^{3/2}. \end{align*} On the other hand, repeating the argument in the proof of Lemma~\ref{lem:local-minimum} gives that \begin{align*} G_{t,t+h}^{t,x;\emptyset,\alpha}[V_-(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]\leq G_{t,t+h}^{t,x;\emptyset,\alpha}[\varphi(t+h,X^{t,x;\emptyset,\alpha}_{t+h})]+ Ch^{3/2} \end{align*} and we get that \begin{align*} -Y^{1,\alpha}_t&=\varphi(t,x)-G_{t,t+h}^{t,x;\emptyset,\alpha}[\varphi(t+h,X^{t,x;\emptyset,\alpha}_{t+h})] \\ &\leq Ch^{3/2}, \end{align*} \textit{i.e.\ } $Y^{1,\alpha}_t\geq -Ch^{3/2}$. Now, repeating the above argument we find that \begin{align*} \inf_{\alpha\in A} F(t,x,0,0,\alpha)\geq 0. \end{align*} Analogously, when $(t,x)\in \mathcal D_S(V_-)$ we get that $V_-(t,x)= \mathcal M V_-(t,x)$, and we conclude that $V_-$ is a viscosity subsolution.\qed\\ \begin{rem} By the same argument, while using Lemma~\ref{lem:do-nothing-U} instead of Lemma~\ref{lem:do-nothing}, we conclude that $V_+$ is a viscosity solution to \eqref{ekv:var-ineq}. \end{rem} \section{Uniqueness of viscosity solutions to the HJBI-QVI\label{sec:unique}} To be able to conclude that the game has a value, \textit{i.e.\ } that $V_-\equiv V_+$, we will now show that \eqref{ekv:var-ineq} has at most one viscosity solution in $\Pi_{pg}$.
We let \begin{align} \mathcal L^\alpha\varphi(t,x):=\sum_{j=1}^d a_j(t,x,\alpha)\frac{\partial}{\partial x_j}\varphi(t,x)+\frac{1}{2}\sum_{i,j=1}^d (\sigma\sigma^\top(t,x,\alpha))_{i,j}\frac{\partial^2}{\partial x_i\partial x_j}\varphi(t,x) \end{align} and define \begin{align*} H(t,x,v(t,x),Dv(t,x),D^2v(t,x),\alpha):=\mathcal L^\alpha v(t,x)+f(t,x,v(t,x),Dv(t,x)\cdot \sigma(t,x,\alpha),\alpha). \end{align*} We will need the following lemma: \begin{lem}\label{lem:is-super} Let $v$ be a supersolution to \eqref{ekv:var-ineq} satisfying \begin{align*} \forall (t,x)\in[0,T]\times\mathbb R^d,\quad |v(t,x)|\leq C(1+|x|^{2\gamma}) \end{align*} for some $\gamma>0$. Then there is a $\lambda_0 > 0$ such that for any $\lambda>\lambda_0$ and $\theta> 0$, the function $v + \theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2})$ is also a supersolution to \eqref{ekv:var-ineq}. \end{lem} \noindent \emph{Proof.} With $w:=v + \theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2})$ we note that, since $v$ is a supersolution and $\theta e^{-\lambda T}(1+((|x|-K_\Gamma)^+)^{2\gamma+2})\geq 0$, we have $w(T,x)\geq v(T,x)\geq \psi(x)$, so that the terminal condition holds. Moreover, we have \begin{align*} &w(t,x)-\sup_{b\in U}\{w(t,\Gamma(t,x,b))-\ell(t,x,b)\} \\ &=v(t,x) + \theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2}) \\ &\quad-\sup_{b\in U}\{v(t,\Gamma(t,x,b)) + \theta e^{-\lambda t}(1+((|\Gamma(t,x,b)|-K_\Gamma)^+)^{2\gamma+2})-\ell(t,x,b)\} \\ &\geq v(t,x)- \sup_{b\in U}\{v(t,\Gamma(t,x,b))-\ell(t,x,b)\} \\ &\quad+\theta e^{-\lambda t}\{(1+((|x|-K_\Gamma)^+)^{2\gamma+2})- \sup_{b\in U}(1+((|\Gamma(t,x,b)|-K_\Gamma)^+)^{2\gamma+2})\}. \end{align*} Since $v$ is a supersolution, we have \begin{align*} v(t,x)- \sup_{b\in U}\{v(t,\Gamma(t,x,b))-\ell(t,x,b)\}\geq 0. \end{align*} Now, either $|x|\leq K_\Gamma$, in which case it follows by \eqref{ekv:imp-bound} that $|\Gamma(t,x,b)|\leq K_\Gamma$, or $|x|> K_\Gamma$ and \eqref{ekv:imp-bound} gives that $|\Gamma(t,x,b)|\leq |x|$.
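Spelling out this dichotomy (an elementary step, recorded here for completeness):

```latex
% In either case of the dichotomy above we obtain
((|\Gamma(t,x,b)|-K_\Gamma)^+)^{2\gamma+2}\leq ((|x|-K_\Gamma)^+)^{2\gamma+2}
\qquad\text{for all } b\in U,
```

so the $\theta$-term in the preceding display is non-negative.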
We conclude that \begin{align*} w(t,x)- \sup_{b\in U}\{w(t,\Gamma(t,x,b))-\ell(t,x,b)\}\geq 0. \end{align*} Next, let $\varphi\in C^{1,2}([0,T]\times\mathbb R^d;\mathbb R)$ be such that $\varphi-w$ has a local maximum of 0 at $(t_0,x_0)$ with $t_0<T$. Then $\tilde \varphi(t,x):=\varphi (t,x)-\theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2})\in C^{1,2}([0,T]\times\mathbb R^d;\mathbb R)$ and $\tilde \varphi-v$ has a local maximum of 0 at $(t_0,x_0)$. Since $v$ is a viscosity supersolution, we have \begin{align*} 0&\leq -\partial_t \tilde \varphi(t,x)-\inf_{\alpha\in A}H(t,x,\tilde \varphi(t,x),D\tilde \varphi(t,x),D^2\tilde \varphi(t,x),\alpha) \\ &=-\partial_t(\varphi(t,x)-\theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2}))-\inf_{\alpha\in A}\big\{\mathcal L^\alpha (\varphi(t,x)-\theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2})) \\ &\quad+f(t,x,\varphi(t,x)-\theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2}),\sigma^\top(t,x)\nabla_x (\varphi(t,x)-\theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2})),\alpha)\big\} \\ &\leq -\partial_t\varphi(t,x)-\inf_{\alpha\in A}\big\{\mathcal L^\alpha \varphi(t,x)+f(t,x,\varphi(t,x),\sigma^\top(t,x)\nabla_x \varphi(t,x),\alpha)\big\} \\ &\quad-\theta \lambda e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2})+\sup_{\alpha\in A}\mathcal L^\alpha\{\theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2})\} \\ &\quad+k_f\theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2}+C(1+|x|) ((|x|-K_\Gamma)^+)^{2\gamma+1}). \end{align*} Consequently, \begin{align*} &-\partial_t\varphi(t,x)-\inf_{\alpha\in A}\big\{\mathcal L^\alpha \varphi(t,x)+f(t,x,\varphi(t,x),\sigma^\top(t,x)\nabla_x \varphi(t,x),\alpha)\big\} \\ &\geq \theta e^{-\lambda t}\big(\lambda(1+((|x|-K_\Gamma)^+)^{2\gamma+2})-C(1+|x|)((|x|-K_\Gamma)^+)^{2\gamma+1}- C(1+|x|)^2 ((|x|-K_\Gamma)^+)^{2\gamma} \\ &\quad -k_f (1+((|x|-K_\Gamma)^+)^{2\gamma+2}+C(1+|x|) ((|x|-K_\Gamma)^+)^{2\gamma+1})\big), \end{align*} where the right-hand side is non-negative
for all $\theta> 0$, provided $\lambda>\lambda_0$ for a sufficiently large $\lambda_0>0$.\qed\\ We have the following result, the proof of which we omit since it is classical: \begin{lem}\label{lem:integ-factor} A locally bounded function $v:[0,T]\times \mathbb R^d\to\mathbb R$ is a viscosity supersolution (resp. subsolution) to \eqref{ekv:var-ineq} if and only if for every $\lambda\in\mathbb R$, $\tilde v(t,x):=e^{\lambda t}v(t,x)$ is a viscosity supersolution (resp. subsolution) to \begin{align}\label{ekv:var-ineq-if} \begin{cases} \min\big\{\tilde v(t,x)-\sup_{b\in U}\{\tilde v(t,\Gamma(t,x,b))-e^{\lambda t}\ell(t,x,b)\},-\tilde v_t(t,x)+\lambda \tilde v(t,x)-\inf_{\alpha\in A}\{\mathcal L^\alpha \tilde v(t,x)\\ +e^{\lambda t}f(t,x,e^{-\lambda t}\tilde v(t,x),e^{-\lambda t}\sigma^\top(t,x)\nabla_x \tilde v(t,x),\alpha)\}\big\}=0,\quad\forall (t,x)\in[0,T)\times \mathbb R^d \\ \tilde v(T,x)=e^{\lambda T}\psi(x). \end{cases} \end{align} \end{lem} \begin{rem} Here, it is important to note that $\tilde \ell(t,x,b):=e^{\lambda t}\ell(t,x,b)$, $\tilde f(t,x,y,z,\alpha):=-\lambda y\\+e^{\lambda t}f(t,x,e^{-\lambda t}y,e^{-\lambda t}z,\alpha)$ and $\tilde \psi(x):=e^{\lambda T}\psi(x)$ satisfy Assumption~\ref{ass:on-coeff}. In particular, this implies that Lemma~\ref{lem:is-super} holds for supersolutions to \eqref{ekv:var-ineq-if} as well. \end{rem} We have the following comparison result for viscosity solutions in $\Pi_{pg}$: \begin{prop} Let $v$ (resp. $u$) be a supersolution (resp. subsolution) to \eqref{ekv:var-ineq}. If $u,v\in \Pi_{pg}$, then $u\leq v$. \end{prop} \noindent \emph{Proof.} First, we note that we only need to show that the statement holds for solutions to~\eqref{ekv:var-ineq-if}. We thus assume that $v$ (resp.~$u$) is a viscosity supersolution (resp.~subsolution) to \eqref{ekv:var-ineq-if}.
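Before continuing, we briefly indicate why the scaling in Lemma~\ref{lem:integ-factor} works (a formal computation for smooth $v$; note that $\tilde v_t=\lambda\tilde v+e^{\lambda t}v_t$):

```latex
-\tilde v_t(t,x)+\lambda \tilde v(t,x)-\mathcal L^\alpha \tilde v(t,x)
 -e^{\lambda t}f(t,x,e^{-\lambda t}\tilde v(t,x),e^{-\lambda t}\sigma^\top(t,x)\nabla_x \tilde v(t,x),\alpha)
\\ \qquad = e^{\lambda t}\big(-v_t(t,x)-\mathcal L^\alpha v(t,x)
 -f(t,x,v(t,x),\sigma^\top(t,x)\nabla_x v(t,x),\alpha)\big),
```

while $\tilde v(t,x)-(\tilde v(t,\Gamma(t,x,b))-e^{\lambda t}\ell(t,x,b))=e^{\lambda t}\{v(t,x)-(v(t,\Gamma(t,x,b))-\ell(t,x,b))\}$, so both components of \eqref{ekv:var-ineq-if} are the corresponding components of \eqref{ekv:var-ineq} multiplied by the positive factor $e^{\lambda t}$.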
It is sufficient to show that \begin{align*} w(t,x)&=w^{\theta,\lambda}(t,x):=v(t,x)+\theta e^{-\lambda t}(1+((|x|-K_\Gamma)^+)^{2\gamma+2}) \\ &\geq u(t,x) \end{align*} for all $(t,x)\in[0,T]\times\mathbb R^d$ and any $\theta>0$; the result then follows by taking the limit $\theta\to 0$. Moreover, we know from Lemma~\ref{lem:is-super} that there is a $\lambda_0>0$ such that $w$ is a supersolution to \eqref{ekv:var-ineq-if} for each $\lambda\geq\lambda_0$ and $\theta>0$. By assumption, $u,v\in \Pi_{pg}$, which implies that there are $C>0$ and $\gamma>0$ such that \begin{align*} |v(t,x)|+|u(t,x)|\leq C(1+|x|^{2\gamma}). \end{align*} Hence, for each $\lambda,\theta>0$ there is an $R\geq K_\Gamma$ such that \begin{align*} w(t,x)>u(t,x),\quad\forall (t,x)\in[0,T]\times\mathbb R^d,\:|x|>R. \end{align*} We search for a contradiction and assume that there is a $(t_0,x_0)\in [0,T]\times \mathbb R^d$ such that $u(t_0,x_0)>w(t_0,x_0)$. Then there is a point $(\bar t,\bar x)\in[0,T)\times B_R$ (with $B_R$ the open ball of radius $R$ centered at $0$) such that \begin{align*} \max_{(t,x)\in[0,T]\times\mathbb R^d}(u(t,x)-w(t,x))&=\max_{(t,x)\in[0,T)\times B_R}(u(t,x)-w(t,x)) \\ &=u(\bar t,\bar x)-w(\bar t,\bar x)=\eta>0. \end{align*} We first show that there is at least one point $(t^*,x^*)\in[0,T)\times B_R$ such that \begin{enumerate}[a)] \item $u(t^*,x^*)-w(t^*,x^*)=\eta$ and \item $u(t^*,x^*)>\sup_{b\in U}\{u(t^*,\Gamma(t^*,x^*,b))-\tilde\ell(t^*,x^*,b)\}$. \end{enumerate} We again argue by contradiction and assume that $u(t,x)=\sup_{b\in U}\{u(t,\Gamma(t,x,b))-\tilde\ell(t,x,b)\}$ for all $(t,x)\in A:=\{(s,y)\in[0,T]\times\mathbb R^d: u(s,y)-w(s,y)=\eta\}$. Since $u$ is u.s.c.~and $\Gamma$ is continuous, there is a $b_1$ such that \begin{align}\label{ekv:equal} u(\bar t,\bar x)=\sup_{b\in U}\{u(\bar t,\Gamma(\bar t,\bar x,b))-\tilde\ell(\bar t,\bar x,b)\}=u(\bar t,\Gamma(\bar t,\bar x,b_1))-\tilde\ell(\bar t,\bar x,b_1).
\end{align} Now, set $x_1=\Gamma(\bar t,\bar x,b_1)$ and note that since \begin{align*} |\Gamma(t,x,b)|\leq R,\quad \forall(t,x,b)\in [0,T]\times \bar B_R\times U, \end{align*} it follows that $x_1\in \bar B_R$. Moreover, as $w$ is a supersolution it satisfies \begin{align*} w(\bar t,\bar x)- (w(\bar t,\Gamma(\bar t,\bar x,b_1))-\tilde\ell(\bar t,\bar x,b_1))\geq 0, \end{align*} or \begin{align*} - w(\bar t,x_1)\geq -w(\bar t,\bar x)-\tilde\ell(\bar t,\bar x,b_1), \end{align*} and we conclude from \eqref{ekv:equal} that \begin{align*} u(\bar t,x_1)- w(\bar t,x_1)&\geq u(\bar t,\bar x)+\tilde\ell(\bar t,\bar x,b_1)-(w(\bar t,\bar x)+\tilde\ell(\bar t,\bar x,b_1)) \\ &=u(\bar t,\bar x)-w(\bar t,\bar x)=\eta. \end{align*} Hence, $(\bar t,x_1)\in A$ and by our assumption it follows that there is a $b_2\in U$ such that \begin{align*} u(\bar t,x_1)=u(\bar t,\Gamma(\bar t,x_1,b_2))-\tilde\ell(\bar t,x_1,b_2) \end{align*} and a corresponding $x_2:=\Gamma(\bar t,x_1,b_2)\in \bar B_R$. Now, this process can be repeated indefinitely to find a sequence $(x_j,b_j)_{j\geq 1}$ in $\bar B_R\times U$ such that for any $l\geq 0$ we have \begin{align*} u(\bar t,\bar x)=u(\bar t,x_l)-\sum_{j=1}^{l}\tilde\ell(\bar t,x_{j-1},b_j), \end{align*} with $x_0:=\bar x$. Now, as $\tilde\ell\geq (1\wedge e^{\lambda T})\delta>0$ we get a contradiction by letting $l\to\infty$ while noting that $|u(t,x)|$ is bounded on $[0,T]\times \bar B_R$. We can thus pick a $(t^*,x^*)\in [0,T)\times B_R$ such that \emph{a)} and \emph{b)} above hold. The remainder of the proof is similar to the proof of Proposition 4.1 in~\cite{Morlais13}. We assume without loss of generality that $\gamma\geq 2$ and define \begin{align*} \Phi_n(t,x,y):=u(t,x)-w(t,y)-\varphi_n(t,x,y), \end{align*} where \begin{align*} \varphi_n(t,x,y):=\frac{n}{2}|x-y|^{2\gamma}+|x-x^*|^2+|y-x^*|^2+(t-t^*)^2.
\end{align*} Since $u$ is u.s.c.~and $w$ is l.s.c.~there is a $(t_n,x_n,y_n)\in[0,T]\times \bar B_R\times \bar B_R$ (with $\bar B_R$ the closure of $B_R$) such that \begin{align*} \Phi_n(t_n,x_n,y_n)=\max_{(t,x,y)\in [0,T]\times \bar B_R\times \bar B_R}\Phi_n(t,x,y). \end{align*} Now, the inequality $2\Phi_n(t_n,x_n,y_n)\geq \Phi_n(t_n,x_n,x_n)+\Phi_n(t_n,y_n,y_n)$ gives \begin{align*} n|x_n-y_n|^{2\gamma}\leq u(t_n,x_n)-u(t_n,y_n)+w(t_n,x_n)-w(t_n,y_n). \end{align*} Consequently, $n|x_n-y_n|^{2\gamma}$ is bounded (since $u$ and $w$ are bounded on $[0,T]\times\bar B_R$) and $|x_n-y_n|\to 0$ as $n\to\infty$. We can thus extract a subsequence $(n_l)_{l\geq 0}$ such that $(t_{n_l},x_{n_l},y_{n_l})\to (\tilde t,\tilde x,\tilde x)$ as $l\to\infty$. Since \begin{align*} u(t^*,x^*)-w(t^*,x^*)\leq \Phi_n(t_n,x_n,y_n)\leq u(t_n,x_n)-w(t_n,y_n), \end{align*} it follows that \begin{align*} u(t^*,x^*)-w(t^*,x^*)&\leq \limsup_{l\to\infty} \{u(t_{n_l},x_{n_l})-w(t_{n_l},y_{n_l})\} \\ &\leq u(\tilde t,\tilde x)-w(\tilde t,\tilde x) \end{align*} and as the right-hand side is dominated by $u(t^*,x^*)-w(t^*,x^*)$ we conclude that \begin{align*} u(\tilde t,\tilde x)-w(\tilde t,\tilde x)=u(t^*,x^*)-w(t^*,x^*). \end{align*} In particular, this gives that $\lim_{l\to\infty}\Phi_{n_l}(t_{n_l},x_{n_l},y_{n_l})=u(\tilde t,\tilde x)-w(\tilde t,\tilde x)$, which implies that \begin{align*} \limsup_{l\to\infty} n_l|x_{n_l}-y_{n_l}|^{2\gamma}= 0 \end{align*} and \begin{align*} (t_{n_l},x_{n_l},y_{n_l})\to (t^*,x^*,x^*). \end{align*} We can extract a subsequence $(\tilde n_l)_{l\geq 0}$ of $(n_l)_{l\geq 0}$ such that $t_{\tilde n_l}<T$, $|x_{\tilde n_l}|<R$ and \begin{align*} u(t_{\tilde n_l},x_{\tilde n_l})-w(t_{\tilde n_l},x_{\tilde n_l})\geq \frac{\eta}{2}.
\end{align*} Moreover, since $\sup_{b\in U}\{u(t,\Gamma(t,x,b))-\tilde\ell(t,x,b)\}$ is u.s.c.~(see Lemma~\ref{lem:mcM-monotone}) and $u(t_{\tilde n_l},x_{\tilde n_l})\to u(t^*,x^*)$, there is an $l_0\geq 0$ such that \begin{align*} u(t_{\tilde n_l},x_{\tilde n_l})-\sup_{b\in U}\{u(t_{\tilde n_l},\Gamma(t_{\tilde n_l},x_{\tilde n_l},b))-\tilde\ell(t_{\tilde n_l},x_{\tilde n_l},b)\}>0, \end{align*} for all $l\geq l_0$. To simplify notation we will, from now on, denote $(\tilde n_l)_{l\geq l_0}$ simply by $n$.\\ By Theorem 8.3 of~\cite{UsersGuide} there are $(p^u_n,q^u_n,M^u_n)\in \bar J^{2,+}u(t_n,x_n)$ and $(p^w_n,q^w_n,M^w_n)\in \bar J^{2,-}w(t_n,y_n)$ such that \begin{align*} \begin{cases} p^u_n-p^w_n=\partial_t\varphi_n(t_n,x_n,y_n)=2(t_n-t^*) \\ q^u_n=D_x\varphi_n(t_n,x_n,y_n)=n\gamma(x_n-y_n)|x_n-y_n|^{2\gamma-2}+2(x_n-x^*) \\ q^w_n=-D_y\varphi_n(t_n,x_n,y_n)=n\gamma(x_n-y_n)|x_n-y_n|^{2\gamma-2}-2(y_n-x^*) \end{cases} \end{align*} and for every $\varepsilon>0$, \begin{align*} \left[\begin{array}{cc} M^u_n & 0 \\ 0 & -M^w_n\end{array}\right]\leq B_n(t_n,x_n,y_n)+\varepsilon B_n^2(t_n,x_n,y_n), \end{align*} where $B_n(t_n,x_n,y_n):=D^2_{(x,y)}\varphi_n(t_n,x_n,y_n)$. Now, we have \begin{align*} D^2_{(x,y)}\varphi_n(t,x,y)=\left[\begin{array}{cc} D_x^2\varphi_n(t,x,y) & D^2_{xy}\varphi_n(t,x,y) \\ D^2_{yx}\varphi_n(t,x,y) & D_y^2\varphi_n(t,x,y)\end{array}\right] = \left[\begin{array}{cc} n\xi(x,y)+2 I & -n\xi(x,y) \\ -n\xi(x,y) & n\xi(x,y) +2 I \end{array}\right], \end{align*} where $I$ is the identity matrix of suitable dimension and \begin{align*} \xi(x,y):=\gamma|x-y|^{2\gamma-4}\{|x-y|^2I+2(\gamma-1)(x-y)(x-y)^\top\}. \end{align*} In particular, since $x_n$ and $y_n$ are bounded, choosing $\varepsilon:=\frac{1}{n}$ gives \begin{align}\label{ekv:mat-bound} \tilde B_n:=B_n(t_n,x_n,y_n)+\varepsilon B_n^2(t_n,x_n,y_n)\leq Cn|x_n-y_n|^{2\gamma-2}\left[\begin{array}{cc} I & -I \\ -I & I \end{array}\right]+C I.
\end{align} By the definition of viscosity supersolutions and subsolutions we have that \begin{align*} &-p^u_n+\lambda u(t_n,x_n)-a^\top(t_n,x_n,\alpha)q^u_n-\frac{1}{2}{\rm Tr} [\sigma^\top(t_n,x_n,\alpha)M^u_n\sigma(t_n,x_n,\alpha)] \\ &-e^{\lambda t_n}f(t_n,x_n,e^{-\lambda t_n}u(t_n,x_n),e^{-\lambda t_n}\sigma^\top(t_n,x_n)q^u_n,\alpha)\leq 0 \end{align*} for all $\alpha\in A$ and \begin{align*} &-p^w_n+\lambda w(t_n,y_n)-\inf_{\alpha\in A}\big\{a^\top(t_n,y_n,\alpha)q^w_n+\frac{1}{2}{\rm Tr} [\sigma^\top(t_n,y_n,\alpha)M^w_n\sigma(t_n,y_n,\alpha)] \\ &+e^{\lambda t_n}f(t_n,y_n,e^{-\lambda t_n}w(t_n,y_n),e^{-\lambda t_n}\sigma^\top(t_n,y_n)q^w_n,\alpha)\big\}\geq 0. \end{align*} Combined, this gives that \begin{align*} \lambda (u(t_n,x_n)-w(t_n,y_n))&\leq \sup_{\alpha\in A}\big\{p^u_n+a^\top(t_n,x_n,\alpha)q^u_n+\frac{1}{2}{\rm Tr} [\sigma^\top(t_n,x_n,\alpha)M^u_n\sigma(t_n,x_n,\alpha)] \\ &+e^{\lambda t_n}f(t_n,x_n,e^{-\lambda t_n}u(t_n,x_n),e^{-\lambda t_n}\sigma^\top(t_n,x_n)q^u_n,\alpha) \\ &-p^w_n-a^\top(t_n,y_n,\alpha)q^w_n-\frac{1}{2}{\rm Tr} [\sigma^\top(t_n,y_n,\alpha)M^w_n\sigma(t_n,y_n,\alpha)] \\ &-e^{\lambda t_n}f(t_n,y_n,e^{-\lambda t_n}w(t_n,y_n),e^{-\lambda t_n}\sigma^\top(t_n,y_n)q^w_n,\alpha)\big\}. \end{align*} Collecting terms, we have that \begin{align*} p^u_n-p^w_n&=2(t_n-t^*) \end{align*} and since $a$ is Lipschitz continuous in $x$ and bounded on $\bar B_R$, we have \begin{align*} a^\top(t_n,x_n,\alpha)q^u_n-a^\top(t_n,y_n,\alpha)q^w_n&\leq (a^\top(t_n,x_n,\alpha)-a^\top(t_n,y_n,\alpha))n\gamma(x_n-y_n)|x_n-y_n|^{2\gamma-2} \\ &\quad+C(|x_n-x^*|+|y_n-x^*|) \\ &\leq C(n|x_n-y_n|^{2\gamma}+|x_n-x^*|+|y_n-x^*|), \end{align*} where the right-hand side tends to 0 as $n\to\infty$.
Let $s_x$ denote the $i^{\rm th}$ column of $\sigma(t_n,x_n,\alpha)$ and let $s_y$ denote the $i^{\rm th}$ column of $\sigma(t_n,y_n,\alpha)$. Then, by the Lipschitz continuity of $\sigma$ and \eqref{ekv:mat-bound}, we have \begin{align*} s_x^\top M^u_n s_x-s_y^\top M^w_n s_y&=\left[\begin{array}{cc} s_x^\top & s_y^\top \end{array}\right]\left[\begin{array}{cc} M^u_n & 0 \\ 0 &-M^w_n\end{array}\right]\left[\begin{array}{c} s_x \\ s_y \end{array}\right] \\ &\leq \left[\begin{array}{cc} s_x^\top & s_y^\top \end{array}\right]\tilde B_n\left[\begin{array}{c} s_x \\ s_y \end{array}\right] \\ &\leq C(n|x_n-y_n|^{2\gamma}+|x_n-y_n|), \end{align*} and we conclude that \begin{align*} \limsup_{n\to\infty}\sup_{\alpha\in A}\frac{1}{2}{\rm Tr} [\sigma^\top(t_n,x_n,\alpha)M^u_n\sigma(t_n,x_n,\alpha)-\sigma^\top(t_n,y_n,\alpha)M^w_n\sigma(t_n,y_n,\alpha)]\leq 0. \end{align*} Finally, we have for some $C_R>0$ that \begin{align*} &e^{\lambda t_n}f(t_n,x_n,e^{-\lambda t_n}u(t_n,x_n),e^{-\lambda t_n}\sigma^\top(t_n,x_n,\alpha)q^u_n,\alpha)-e^{\lambda t_n}f(t_n,y_n,e^{-\lambda t_n}w(t_n,y_n),e^{-\lambda t_n}\sigma^\top(t_n,x_n,\alpha)q^w_n,\alpha) \\ &\leq k_f(u(t_n,x_n)-w(t_n,y_n))+C_R(|x_n-y_n|+|\sigma^\top(t_n,x_n,\alpha)q^u_n-\sigma^\top(t_n,x_n,\alpha)q^w_n|). \end{align*} Repeating the above argument we get that the upper limit of the right-hand side as $n\to\infty$ is bounded by $k_f\limsup_{n\to\infty}(u(t_n,x_n)-w(t_n,y_n))$. Put together, this gives that \begin{align*} (\lambda-k_f) \limsup_{n\to\infty}(u(t_n,x_n)-w(t_n,y_n))&\leq 0, \end{align*} a contradiction for $\lambda>k_f$, since the limit superior is positive and $\lambda\in\mathbb R$ was arbitrary.\qed\\ \end{document}
\begin{document} \begin{abstract} Let $R$ be an arbitrary subset of a commutative ring. We introduce a combinatorial model for the set of tame frieze patterns with entries in $R$ based on a notion of irreducibility of frieze patterns. When $R$ is a ring, a frieze pattern is reducible if and only if it contains an entry (not on the border) which is $1$ or $-1$. To my knowledge, this model simultaneously generalizes all previously presented models for tame frieze patterns bounded by $0$'s and $1$'s. \end{abstract} \maketitle \section{Introduction} Conway and Coxeter introduced a combinatorial model for the so-called `frieze patterns' \cite{jChC73}: their patterns, consisting entirely of positive numbers within the frieze, are in one-to-one correspondence with triangulations of a convex polygon by non-intersecting diagonals. This gives a connection between specializations of the variables of cluster algebras of type $A$ to positive integers on one side (see for example \cite{mCtH17}), and Catalan combinatorics on the other side. Since then, many generalizations of these concepts have been considered (see \cite{MG15} for a survey). In the present note, for each set $R$ of numbers, we present a combinatorial model which is associated with the set of tame frieze patterns with entries in this set $R$. Hence we generalize the above connection to arbitrary specializations of the variables in the cluster algebras of type $A$. \begin{figure} \caption{$(a,0,-a,0) \oplus (-1,-1,-1) = (a-1,0,-a,-1,-1)$.} \label{fig0} \end{figure} To this end, we introduce a notion of irreducibility of frieze patterns, Definition \ref{def:irr}. Every frieze pattern has a (not necessarily unique) decomposition into irreducible frieze patterns. In the combinatorial model, irreducible patterns become polygons that may be glued together to produce arbitrary frieze patterns (see for example Figure \ref{fig0}).
The problem of understanding this type of combinatorics for a given set $R$ thus reduces to the problem of classifying the irreducible patterns. It turns out that a frieze pattern is reducible over a ring $R$ if and only if it contains an entry (not on the border) which is $1$ or $-1$ (see Lemma \ref{lem:irr1-1}). \noindent{\bf Acknowledgement:} {I am very grateful to C.~Bessenrodt, T.~Holm, P.~J\o r\-gensen, S.~Morier-Genoud, and V.~Ovsienko for many valuable comments.} \section{Quiddity cycles} \begin{defin} \label{def:etamatrix} For $c$ in a commutative ring, let $$\eta(c) := \begin{pmatrix} c & -1 \\ 1 & 0 \end{pmatrix}.$$ \end{defin} \begin{defin} \label{def:quiddity} Let $R$ be a subset of a commutative ring and $\lambda\in\{\pm 1\}$. A \emph{$\lambda$-\cycle}\footnote{Notice that the case $R=\NN_{>0}$ was also recently considered in \cite{vO17}.} over $R$ is a sequence $(c_1,\ldots,c_m)\in R^m$ satisfying \begin{equation}\label{etaid} \prod_{k=1}^{m} \eta(c_k) = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} = \lambda \id. \end{equation} A $(-1)$-\cycle is called a \emph{\cycle} for short. \end{defin} \begin{remar} We agree that $m>0$ in Def.\ \ref{def:quiddity}. In fact, $m>1$ by Def.\ \ref{def:etamatrix}. \end{remar} \begin{examp} Consider the commutative ring $\CC$ and $R=\CC$. \begin{enumerate} \item $(0,0)$ is the only $\lambda$-\cycle of length $2$. \item $(1,1,1)$ and $(-1,-1,-1)$ are the only $\lambda$-\cycles of length $3$. \item $(t,2/t,t,2/t)$, $t$ a unit and $(a,0,-a,0)$, $a$ arbitrary, are the only $\lambda$-\cycles of length $4$. \end{enumerate} \end{examp} \begin{defin} Let $D_n$ be the dihedral group with $2n$ elements acting on $\{1,\ldots,n\}$. If $\underline{c}=(c_1,\ldots,c_n)$ is a $\lambda$-\cycle, then we write \[ \underline{c}^\sigma := (c_1,\ldots,c_n)^\sigma := (c_{\sigma(1)},\ldots,c_{\sigma(n)}) \] for $\sigma\in D_n$. \end{defin} \begin{propo} Let $\underline{c}=(c_1,\ldots,c_m)$ be a $\lambda$-\cycle. 
Then for any $\sigma\in D_m$, the cycle $\underline{c}^\sigma$ is a $\lambda$-\cycle as well. \end{propo} \begin{proof} Since the matrix $\lambda \id$ commutes with every matrix, any rotation of the cycle is again a $\lambda$-\cycle. The reversal of a $\lambda$-\cycle is also a $\lambda$-\cycle, see for example \cite[Prop.\ 5.3 (3)]{p-CH09d}. \end{proof} When thinking about a $\lambda$-\cycle $\underline{c}$, in general we do not care which element in $D_n\cdot \underline{c}$ we consider. In the following lemma however, we have to be careful. We introduce a \emph{sum} of $\lambda$-\cycles which is not invariant under the action of the dihedral group. Note that a similar ``gluing'' of \friezes was already described in other papers (for instance \cite[Lemma 3.2]{tHpJ17} for \cycles in which all entries are equal, or \cite{MG12} for 2-friezes). \begin{lemma}\label{lem:comp} Let $(a_1,\ldots,a_k)$ be a $\lambda'$-\cycle and $(b_1,\ldots,b_\ell)$ be a $\lambda''$-\cycle. Then \[ (a_1+b_\ell,a_2,\ldots,a_{k-1},a_k+b_1,b_2,\ldots,b_{\ell-1}) \] is a $(-\lambda'\lambda'')$-\cycle of length $k+\ell-2$ which we call the \emph{sum}: $$(a_1,\ldots,a_k) \oplus (b_1,\ldots,b_\ell):= (a_1+b_\ell,a_2,\ldots,a_{k-1},a_k+b_1,b_2,\ldots,b_{\ell-1}).$$ \end{lemma} \begin{proof} We use the identities $\eta(a+b)=-\eta(a)\eta(0)\eta(b)$ and $\eta(0)^2 = -\id$ (which are easy to check, see also \cite[Lemma 4.1]{mCtH17}): \begin{eqnarray*} && \hspace{-22pt} \eta(a_1+b_\ell)\eta(a_2)\cdots\eta(a_{k-1})\eta(a_k+b_1)\eta(b_2)\cdots\eta(b_{\ell-1}) \\ &=& \eta(b_\ell)\eta(0)\eta(a_1)\eta(a_2)\cdots\eta(a_{k-1})\eta(a_k)\eta(0)\eta(b_1)\eta(b_2)\cdots\eta(b_{\ell-1}) \\ &=& \lambda' \eta(b_\ell)\eta(0)\eta(0)\eta(b_1)\eta(b_2)\cdots\eta(b_{\ell-1}) \\ &=& -\lambda' \eta(b_\ell)\eta(b_1)\eta(b_2)\cdots\eta(b_{\ell-1}) = -\lambda'\lambda'' \id.
\qedhere \end{eqnarray*} \end{proof} \begin{examp} \begin{enumerate} \item If $(a_1,\ldots,a_m)$ is a \cycle, then $$(a_1,\ldots,a_m)\oplus (0,0) = (a_1,\ldots,a_m).$$ \item For $a\in\CC$, $(a,0,-a,0)$ and $(-1,-1,-1)$ are $1$-\cycles, their sum is\\ $(a-1,0,-a,-1,-1)$ and is a \cycle (see also Fig.\ \ref{fig0}). \end{enumerate} \end{examp} The following are the central notions of \emph{reducibility} and \emph{irreducibility} of \cycles mentioned in the introduction. \begin{defin}\label{def:irr} Let $R$ be a subset of a commutative ring. A $\lambda$-\cycle $(c_1,\ldots,c_m)\in R^m$, $m>2$ is called \emph{reducible over $R$} if there exist a $\lambda'$-\cycle $(a_1,\ldots,a_k)\in R^k$, a $\lambda''$-\cycle $(b_1,\ldots,b_\ell)\in R^\ell$, and $\sigma\in D_m$ such that $\lambda = -\lambda'\lambda''$, $k,\ell>2$ and \begin{eqnarray*} (c_1,\ldots,c_m)^\sigma &=& (a_1+b_\ell,a_2,\ldots,a_{k-1},a_k+b_1,b_2,\ldots,b_{\ell-1})\\ &=& (a_1,\ldots,a_k)\oplus (b_1,\ldots,b_\ell). \end{eqnarray*} A $\lambda$-\cycle of length $m>2$ is called \emph{irreducible over $R$} if it is not reducible. \end{defin} \begin{remar} There is no need to consider the cycle of length $m<3$ (which is $(0,0)$) in Definition \ref{def:irr}. \end{remar} \begin{defin} Consider a $\lambda$-\cycle $\underline{c}=(c_1,\ldots,c_m)$ and define $c_k$ for all $k\in\ZZ$ by repeating $\underline{c}$ periodically. For $i,j\in\ZZ$ let \[ x_{i,j}:= \left(\prod_{k=i}^{j-2} \eta(c_k)\right)_{1,1} \quad \text{if } i\le j-2, \] $x_{i,i+1}:=1$, and $x_{i,i}:=0$. Notice that $x_{i,i+2}=c_i$.\\ Then we call the array $\Fc=(x_{i,j})_{i\le j\le i+m}$ the \emph{\frieze} of $\underline{c}$. The \emph{entries} of the \frieze of $\underline{c}$ are the numbers $x_{i,j}$ with $i+2\le j \le i+m-2$.\\ We say that the \frieze of $\underline{c}$ is \emph{reducible} resp.\ \emph{irreducible} if $\underline{c}$ is \emph{reducible} resp.\ \emph{irreducible}. 
\end{defin} \begin{remar} (a) If $\underline{c}$ is a \cycle, then we obtain what is usually called a frieze pattern. In fact, in this way we obtain exactly all \emph{tame} frieze patterns, i.e.\ those for which every adjacent $3\times 3$ determinant is zero (see for example \cite[Prop.\ 2.4]{mCtH17}).\\ Starting with a $1$-\cycle, one obtains a \frieze with $1$'s on one border and $-1$'s on the other border, i.e.\ $x_{i,i+m-1}=-1$ for all $i$. (b) The entries $x_{i,j}$ of a \frieze are specialized cluster variables of a cluster algebra of Dynkin type $A$ (see for example \cite[Section 5]{mCtH17}). (c) Notice that if $\underline{c}$ is a $\lambda$-\cycle over $R$, then its \frieze may have entries which are not in $R$. It is an interesting question to determine the set of entries of \friezes of $\lambda$-\cycles for a fixed set $R$. For example, if $R$ is a ring then all entries in the \friezes are in $R$. \end{remar} The following lemma explains the appearance of $1$'s and $-1$'s in friezes. A similar statement is implicitly contained, for the case $R=\NN_{>0}$, in \cite[Cor.\ 1.11]{sMG14} for Coxeter friezes. \begin{lemma}\label{lem:irr1-1} Let $R$ be a commutative ring. A $\lambda$-\cycle is reducible over $R$ if and only if the corresponding tame frieze pattern contains an entry $1$ or $-1$. \end{lemma} \begin{proof} Reducibility requires that the length $m$ of the cycle be at least $4$; since there are no entries in a frieze pattern whose $\lambda$-\cycle has length less than $4$, we may assume $m\ge 4$.\\ Assume first the existence of an entry $\varepsilon=\pm 1$, i.e.\ without loss of generality (rotating the cycle if necessary) there are $i,j\in\{1,\ldots,m\}$ with $i<j-1$, $j-i<m-1$ and $M_{1,1}=\varepsilon$ for $M=\prod_{k=i}^{j-2} \eta(c_k)$.
Since $\det(M)=1$, with $a:=\varepsilon M_{2,1}$, $b:=-\varepsilon M_{1,2}$ we have \[ M = \begin{pmatrix} \varepsilon & -\varepsilon b \\ \varepsilon a & -\varepsilon ab+\varepsilon \end{pmatrix} = -\varepsilon \begin{pmatrix} -1 & b \\ -a & ab-1 \end{pmatrix} = -\varepsilon \eta(a)^{-1}\eta(b)^{-1}. \] We obtain \[ \eta(a) \left(\prod_{k=i}^{j-2} \eta(c_k)\right) \eta(b) = -\varepsilon \id, \] so $(a,c_i,\ldots,c_{j-2},b)$ is a $(-\varepsilon)$-\cycle. It follows that \[ (c_{j-1}-b,c_j,\ldots,c_m,c_1,\ldots,c_{i-2},c_{i-1}-a) \] is a $(\lambda\varepsilon)$-\cycle of length $m-j+i+1\ge 3$ since $j-i<m-1$; thus the cycle is reducible.\\ For the converse, assume that we have a decomposition into a sum as above. But then $\left(\prod_{k=i}^{j-2} \eta(c_k)\right)_{1,1}\in\{\pm 1\}$ for some $i,j$ with $i<j-2$ which gives an entry $\pm 1$ in the pattern. \end{proof} \section{Examples of subsets} Some classifications of irreducible $\lambda$-\cycles are already known. For example, every \cycle over $\NN_{>0}$ contains a $1$. Thus any \cycle over $\NN_{>0}$ of length greater than $3$ has a summand $(1,1,1)$ (cf.\ {\cite{jChC73}}), although the other summand only has positive entries if the original frieze pattern has no entry zero. In general: \begin{theor} The only irreducible $\lambda$-\cycles over $\ZZ_{\ge 0}$ are $(0,0,0,0)$ and $(1,1,1)$. \end{theor} \begin{proof} Let $\underline{c}=(c_1,\ldots,c_m)\in \ZZ_{\ge 0}^m$, $m>2$ be a $\lambda$-\cycle. If $c_i>0$ for all $i$ then by \cite[Cor.\ 3.3]{mCtH17} there exists a $j\in\{1,\ldots,m\}$ with $c_j=1$, without loss of generality $j=2$. But then $\underline{c}=(1,1,1)\oplus \underline{c}'$ where $\underline{c}'=(c_3-1,c_4,\ldots,c_m,c_1-1)\in\ZZ_{\ge 0}^{m-1}$. Otherwise there are zeros in $\underline{c}$. If $\underline{c}$ contains two adjacent zeros, say $c_2=c_3=0$ then $\underline{c}=(0,0,0,0)\oplus \underline{c}'$ where $\underline{c}'=(c_4,\ldots,c_m,c_1)\in\ZZ_{\ge 0}^{m-2}$. 
The last case is when there are zeros, but none of them has an adjacent zero. Notice first that since $\eta(a)\eta(0)\eta(b)=-\eta(a+b)$ for all $a,b$ (cf.\ \cite[Lem.\ 4.1]{mCtH17}), if $(c_1,0,c_3,\ldots,c_m)$ is a $\lambda$-\cycle, then $(c_1+c_3,\ldots,c_m)$ is a $(-\lambda)$-\cycle. Applying this transformation to all zeros simultaneously yields a $\lambda$-\cycle $\underline{c}''$ in which only the entries coming from $\underline{c}$ which were not adjacent to a zero may be $\le 1$. But by \cite[Cor.\ 3.3]{mCtH17} there exists an entry $\le 1$ in $\underline{c}''$, so we find a $1$ in $\underline{c}$ which has nonzero adjacent entries, hence $\underline{c}^\sigma=(1,1,1)\oplus \underline{c}'$ for some $\underline{c}'\in\ZZ_{\ge 0}^{m-1}$ and $\sigma\in D_m$ as in the first case. \end{proof} If we allow entries in the set of all integers, the situation is slightly more complicated: \begin{theor}[{\cite[Thm.\ 6.2]{mCtH17}}] The set of irreducible $\lambda$-\cycles over $\ZZ$ is \[ \{ (1,1,1), (-1,-1,-1), (a,0,-a,0), (0,a,0,-a) \mid a\in \ZZ\setminus\{\pm 1\}\}. \] \end{theor} \begin{propo} Let $k\in\NN_{>0}$ and $\ii=\sqrt{-1}$. Then \[ \underline{c} = (2\ii,-\ii+1, \underbrace{2,\ldots,2}_{2k\text{-times}}, \ii+1, -2\ii,\ii-1, \underbrace{-2,\ldots,-2}_{2k\text{-times}}, -\ii-1) \] is an irreducible \cycle over $\ZZ[\ii]$. \end{propo} \begin{proof} Notice first that \[ \eta(2)^\ell = \begin{pmatrix} \ell+1 & -\ell \\ \ell & 1-\ell \end{pmatrix}, \quad \eta(-2)^\ell = (-1)^\ell \begin{pmatrix} \ell+1 & \ell \\ -\ell & 1-\ell \end{pmatrix} \] for $\ell\in\NN_{>0}$. It is then easy to check that $\underline{c}$ is a \cycle. Further, using the same identities we can compute each type of entry in the frieze pattern. 
We compute $x_{1,2k+5}$ as an example: \[ \prod_{i=1}^{2k+3} \eta(c_i) = \eta(2\ii)\eta(-\ii+1) \eta(2)^{2k} \eta(\ii+1) = \begin{pmatrix} 2\ii k + \ii - 1 & -2k - 2\ii - 1 \\ 2k + 1 & 2\ii k + \ii - 1 \end{pmatrix} \] and thus $x_{1,2k+5} = 2\ii k + \ii - 1$. It turns out that none of the entries is $\pm 1$, and hence the cycle is irreducible by Lemma \ref{lem:irr1-1}. \end{proof} This immediately yields: \begin{corol} There are infinitely many irreducible $\lambda$-\cycles over the Gaussian integers $\ZZ[\ii]$. \end{corol} \section{Combinatorial model} Let $(a_1,\ldots,a_k)$ be a $\lambda'$-\cycle and $(b_1,\ldots,b_\ell)$ be a $\lambda''$-\cycle. If we represent these two cycles as polygons, then gluing them together yields a larger polygon representing their sum, see Figure \ref{fig1}. \begin{figure} \caption{$(a_1,\ldots,a_k) \oplus (b_1,\ldots,b_\ell)$.} \label{fig1} \end{figure} We see the sum $$(a_1,\ldots,a_k) \oplus (b_1,\ldots,b_\ell) = (a_1+b_\ell,a_2,\ldots,a_{k-1},a_k+b_1,b_2,\ldots,b_{\ell-1})$$ in the new polygon when adding the entries at the vertices which are glued together. Hence the decomposition of a $\lambda$-\cycle into a sum of irreducible ones translates in a natural way into a polygon decomposed into building blocks which correspond to the irreducible summands. Since the only irreducible $\lambda$-\cycles for $R=\NN_{\ge 0}$ are $(0,0,0,0)$ and $(1,1,1)$, in this special case we recover the Catalan combinatorics originally proposed by Conway and Coxeter. More precisely: it is easy to prove that if the frieze pattern of a $\lambda$-\cycle $\underline{c}$ for $R=\NN_{>0}$ has only positive entries, then $\underline{c}$ is a sum of \cycles $(1,1,1)$. The $(0,0,0,0)$-polygons are the parts that glue classical Conway-Coxeter friezes together; they produce zeros within the corresponding frieze pattern.
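The matrix computations in this note are elementary enough to check by machine. The following sketch (plain Python with exact integer arithmetic; the helper names are ours, not the paper's) verifies the defining identity $\prod_k\eta(c_k)=\lambda\id$ for the small cycles above and reproduces the sum from Figure \ref{fig0}:

```python
def eta(c):
    # the matrix eta(c) from Definition def:etamatrix, as a tuple of rows
    return ((c, -1), (1, 0))

def mul(A, B):
    # 2x2 matrix product
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def cycle_sign(cs):
    """Return lambda if cs is a lambda-cycle (prod eta(c_k) = lambda*Id), else None."""
    P = ((1, 0), (0, 1))
    for c in cs:
        P = mul(P, eta(c))
    if P == ((1, 0), (0, 1)):
        return 1
    if P == ((-1, 0), (0, -1)):
        return -1
    return None

def oplus(a, b):
    # the sum of cycles from Lemma lem:comp
    return (a[0] + b[-1],) + tuple(a[1:-1]) + (a[-1] + b[0],) + tuple(b[1:-1])

assert cycle_sign((1, 1, 1)) == -1        # (1,1,1) is a (-1)-cycle
assert cycle_sign((-1, -1, -1)) == 1      # (-1,-1,-1) is a 1-cycle
a = 5
assert cycle_sign((a, 0, -a, 0)) == 1     # (a,0,-a,0) is a 1-cycle for any a
# Figure fig0: (a,0,-a,0) + (-1,-1,-1) = (a-1,0,-a,-1,-1), a (-1)-cycle
s = oplus((a, 0, -a, 0), (-1, -1, -1))
assert s == (a - 1, 0, -a, -1, -1)
assert cycle_sign(s) == -1
```

Replacing the integer entries by Gaussian-integer values (e.g.\ Python's `complex`) checks the cycle from the proposition above in the same way.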
We close this note with a somewhat vague task: \begin{oppro} Classify irreducible \cycles for some of the most interesting sets $R\subseteq \CC$. \end{oppro} \end{document}
\begin{document} \title{Field Test of Twin-Field Quantum Key Distribution through Sending-or-Not-Sending over 428~km} \author{Hui Liu} \affiliation{Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China} \affiliation{CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China} \author{Cong Jiang} \affiliation{Jinan Institute of Quantum Technology, Jinan, Shandong 250101, People’s Republic of China} \author{Hao-Tao Zhu} \author{Mi Zou} \affiliation{Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China} \affiliation{CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China} \author{Zong-Wen Yu} \affiliation{State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, People’s Republic of China} \affiliation{Data Communication Science and Technology Research Institute, Beijing 100191, People’s Republic of China} \author{Xiao-Long Hu} \author{Hai Xu} \affiliation{State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, People’s Republic of China} \author{Shizhao Ma} \author{Zhiyong Han} \affiliation{Jinan Institute of Quantum Technology, Jinan, Shandong 250101, People’s Republic of China} \author{Jiu-Peng Chen} \affiliation{Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China} \affiliation{CAS Center for Excellence in Quantum Information and
Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China} \author{Yunqi Dai} \author{Shi-Biao Tang} \affiliation{QuantumCTek Corporation Limited, Hefei, Anhui 230088, People’s Republic of China} \author{Weijun Zhang} \author{Hao Li} \author{Lixing You} \author{Zhen Wang} \affiliation{State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, People’s Republic of China} \author{Fei Zhou} \affiliation{Jinan Institute of Quantum Technology, Jinan, Shandong 250101, People’s Republic of China} \author{Qiang Zhang} \affiliation{Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China} \affiliation{CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China} \affiliation{Jinan Institute of Quantum Technology, Jinan, Shandong 250101, People’s Republic of China} \author{Xiang-Bin Wang} \affiliation{CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China} \affiliation{Jinan Institute of Quantum Technology, Jinan, Shandong 250101, People’s Republic of China} \affiliation{State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, People’s Republic of China} \author{Teng-Yun Chen} \author{Jian-Wei Pan} \affiliation{Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China} \affiliation{CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and
Technology of China, Hefei, Anhui 230026, People’s Republic of China} \begin{abstract} {Quantum key distribution endows people with information-theoretical security in communications. Twin-field quantum key distribution (TF-QKD) has attracted considerable attention because of its outstanding key rates over long distances. Recently, several demonstrations of TF-QKD have been realized. Nevertheless, those experiments were implemented in the laboratory, leaving open the critical question of whether TF-QKD is feasible in real-world circumstances. Here, by adopting sending-or-not-sending twin-field QKD (SNS-TF-QKD) with the method of actively odd parity pairing (AOPP), we demonstrate a field test of QKD over 428~km of deployed commercial fiber, with the two users physically separated by about 300~km in a straight line. To this end, we explicitly measure the relevant properties of the deployed fiber and develop a carefully designed system with high stability. The secure key rate we achieved breaks the absolute key rate limit of repeater-less QKD. The result sets a new distance record for the field test of both TF-QKD and all types of fiber-based QKD systems. Our work bridges the gap of QKD between laboratory demonstrations and practical applications, and paves the way for an intercity QKD network with high speed and measurement-device-independent security.} \end{abstract} \maketitle {\it Introduction.---} Since Bennett and Brassard proposed the BB84 protocol~\cite{bennett1984quantum}, quantum key distribution (QKD) has been studied extensively~\cite{gisin2002quantum,scarani2009security,liao2017satellite,xu2020secure} towards its final goal of real-world application. Given the fact that quantum signals cannot be amplified, the secure distance is severely limited by the channel loss.
For example, considering the possible photon-number-splitting (PNS) attack, the key rate of a BB84 protocol with an imperfect single-photon source is proportional to $\eta^2$, given the channel transmittance $\eta$. So far, many efforts have been made towards more loss-tolerant QKD in practice. There are two milestones towards this goal. First, the decoy-state method~\cite{H03,wang05,LMC05} can improve the key rate of coherent-state based QKD from quadratic scaling $\eta^2$ to linear scaling $\eta$, matching the behavior of a perfect single-photon source. Importantly, the method can be applied successfully to measurement-device-independent QKD (MDI-QKD)~\cite{braunstein2012side,lo2012measurement,WangPRA2016,MDI404km}. Second, the secure key rate can be further improved to the scale of the square root of the channel transmittance by using the idea of twin-field QKD (TF-QKD)~\cite{nature18}. This method has the potential to break the known distance records of existing protocols in practical QKD and to break the theoretical key rate limit of trusted-relay-less QKD known as the Pirandola-Laurenza-Ottaviani-Bianchi (PLOB) bound~\cite{PLOB2017}. Real-world QKD aims to serve physically separated users on Earth. However, although tremendous efforts have been devoted to fiber-based QKD field tests~\cite{peev2009the, stucki2011longterm, chen2009field, dynes2019cambridge, chen2010metropolitan,wang2010field, tang2016measurement}, the maximum fiber distance is about 130~km~\cite{chen2010metropolitan} to date. The maximal physical separation achieved between two users is about 100~km~\cite{chen2010metropolitan}, and challenges for longer distances remain. It is worth noting that experimental TF-QKD~\cite{minder2019experimental, liuyang2019, wang2019beating, zhong2019proof, fang2019surpassing, chen2020sending} has advanced significantly, up to a distance of more than 500~km~\cite{fang2019surpassing, chen2020sending}.
However, all these experiments were implemented in the laboratory with either simulated channel loss or optical fiber spools, leaving a huge gap between laboratory demonstrations and practical applications. A field trial of TF-QKD remains experimentally challenging. In this work, for the first time, we present a field test of high-rate TF-QKD over deployed commercial fiber (428~km in length with 79.1~dB channel loss, buried underground). Furthermore, it is the longest fiber-based QKD field test that does not rely on trusted relays. Two users, Alice and Bob, realize the longest physical separation distance (about 300~km) in terrestrial QKD so far, to the best of our knowledge. The secure key rate of our work breaks the absolute key rate limit of trusted-relay-less QKD. The result lays the foundation for a high-speed intercity-scale QKD network in the absence of quantum repeaters. We adopt the sending-or-not-sending (SNS) protocol~\cite{wang2018sns} of TF-QKD with finite-key effects~\cite{jiang2019}. In addition, we apply an efficient error rejection method known as actively odd parity pairing (AOPP)~\cite{xu2019general}, with the finite-key effects studied in Ref.~\cite{jiang2019}. Given such an asymmetric channel, we adopt the asymmetric protocol~\cite{huxl} to further improve the secure key rate. {\it Protocol.---}Consider the SNS-TF-QKD protocol proposed in Ref.~\cite{wang2018sns}. Here, we implement an asymmetric 3-intensity method for decoy-state analysis. To improve the key rate, we perform bit error rejection by AOPP~\cite{xu2019general} in the post-processing stage. In this way, the sending probability in signal windows can be greatly increased and hence the number of effective events rises substantially. As a result, the final key rate is considerably improved, especially in the case of small data sizes when finite-key effects are taken into account. We use the zigzag approach of Ref.~\cite{jiang2019} to account for finite-key effects in calculating the final key rate.
In the protocol, Alice (Bob) randomly chooses the decoy window and the signal window with probabilities $1-p_{A2}$ ($1-p_{B2}$) and $p_{A2}$ ($p_{B2}$), respectively. In the decoy window, both Alice and Bob prepare and send decoy pulses. In our 3-intensity protocol, there are two types of decoy states in the decoy windows for each of Alice and Bob: the vacuum, and a non-vacuum coherent state of intensity $\mu_{A1}$ for Alice and $\mu_{B1}$ for Bob. Private random phase shifts $\theta_{A}$ and $\theta_B$ are applied to each pulse. In the signal window, Alice (Bob) decides to send out a phase-randomized weak coherent state (WCS) pulse with intensity $\mu_{A2}$ ($\mu_{B2}$) or a vacuum pulse, with probabilities $\epsilon_A$ ($\epsilon_B$) and $1-\epsilon_A$ ($1-\epsilon_B$), respectively. A $Z$ window event is defined as an event in which both Alice and Bob choose the signal windows. A $Z$ window event is regarded as effective if Charlie announces that only one detector clicked. An $X$ window event is defined as an event in which the intensity of Alice's WCS pulse is $\mu_{A1}$, the intensity of Bob's WCS pulse is $\mu_{B1}$, and their phases satisfy an extra phase-slice condition to reduce the observed error rate~\cite{huxl}. As shown in Ref.~\cite{huxl}, we set the condition \begin{equation} \frac{\mu_{A1}}{\mu_{B1}}=\frac{\epsilon_A(1-\epsilon_B)\mu_{A2}e^{-\mu_{A2}}}{\epsilon_B(1-\epsilon_A)\mu_{B2}e^{-\mu_{B2}}} \end{equation} for the security of our asymmetric protocol. An error in the $X$ window is defined as an effective event in the $X$ window for which Charlie announces a click of the right (left) detector while the phase difference between the pulse pair from Alice and Bob would provably cause a left (right) click at Charlie's measurement set-up. In a signal window, Alice (Bob) records a bit value 1 (0) when she (he) decides to send, and a bit value 0 (1) when she (he) decides not to send.
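For concreteness, the intensity constraint above can be evaluated numerically. The parameter values in the snippet below are illustrative assumptions, not the values used in the experiment; the point is only to show how, once Bob's decoy intensity $\mu_{B1}$ is chosen, the constraint fixes Alice's $\mu_{A1}$:

```python
from math import exp

# Illustrative parameters (assumed, not the experiment's actual values)
eps_A, eps_B = 0.07, 0.07      # sending probabilities in signal windows
mu_A2, mu_B2 = 0.45, 0.40      # signal-window intensities

# Security constraint of the asymmetric SNS protocol (the displayed equation):
# mu_A1/mu_B1 = eps_A(1-eps_B) mu_A2 e^{-mu_A2} / (eps_B(1-eps_A) mu_B2 e^{-mu_B2})
ratio = (eps_A * (1 - eps_B) * mu_A2 * exp(-mu_A2)) / (
    eps_B * (1 - eps_A) * mu_B2 * exp(-mu_B2)
)

mu_B1 = 0.03                   # Bob's decoy intensity (assumed choice)
mu_A1 = ratio * mu_B1          # Alice's decoy intensity fixed by the constraint
print(f"mu_A1/mu_B1 = {ratio:.4f}, mu_A1 = {mu_A1:.4f}")
```

With equal sending probabilities, the ratio reduces to $\mu_{A2}e^{-\mu_{A2}}/(\mu_{B2}e^{-\mu_{B2}})$, reflecting the channel asymmetry alone.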
The values of $e_1^{ph}$ and $n_1$, the phase-flip error rate and the number of effective single-photon events in the $Z$-basis, can be calculated by the conventional decoy-state method~\cite{wang2018sns,yu2018sending}. Then we can calculate the secure key rate by the zigzag approach proposed in Ref.~\cite{jiang2019}. Calculation details are given in the Supplemental Materials. {\it Experiment.---} In our field test, Alice and Bob are located in Jinan city and Qingdao city, respectively. The central relay Charlie is placed in Linyi city, as shown in FIG.~\ref{fig:MapSetup}(a). The distance between Charlie and Alice (Bob) is 223~km with 40.5~dB channel loss (205~km with 38.6~dB channel loss). \begin{figure} \caption{(a) Bird's-eye view of our field test. Alice is located at the Jinan Institute of Quantum Technology (JIQT) in Jinan city (36$^\circ$41'0.60" N, 117$^\circ$8'10.93" E), while Bob is located at an internet data center (IDC) room in Qingdao city (36$^\circ$7'24.29" N, 120$^\circ$27'11.88" E). The third-party measurement is done by Charlie in a room at Linyi city (36$^\circ$1'39.84" N, 118$^\circ$44'50.58" E), which is 223~km from Alice and 205~km from Bob. Two yellow marks show the locations of two machine rooms at Yiyuan city (36$^\circ$1'12.60" N, 118$^\circ$2'24.16" E) and Zhucheng city (36$^\circ$11'59.31" N, 119$^\circ$4'43.58" E), respectively. An erbium-doped fiber amplifier (EDFA) is placed in each machine room to amplify the light for the clock and wavelength synchronization, details of which are given in the Experiment section and the Supplemental Materials. Map data from Google, Landsat/Copernicus. (b) Illustration of the experimental set-up. A continuous-wave (cw) bright beam from a 1550.12~nm master laser is multiplexed with the pulses from two 1570~nm auxiliary synchronization lasers (Sync Lasers) in Charlie and is transmitted along the synchronization channel.
On each side, Alice's (Bob's) slave laser is seeded by the cw bright beam and generates pulses with a width of 320~ps and a repetition rate of 312.5~MHz. The optical launch power of the slave laser is monitored in real time by a watchdog photoelectric detector PD2 of Alice (Bob). Then these pulses are sent to two Sagnac rings SR1-2, where each pulse is randomly prepared in one of four intensities (strong $\mu_{r}$, \ldots).} \label{fig:MapSetup} \end{figure} The experimental setup comprises the synchronization system and the encoding and measurement system, as shown in FIG.~\ref{fig:MapSetup}(b). Alice and Bob are connected by two parallel field-deployed commercial fibers (in the same optical cable) of 428~km length each, which are named the synchronization channel and the quantum channel, respectively, in the following. The synchronization system provides two functions: 1) clock synchronization, the details of which are given in the Supplemental Materials; 2) wavelength synchronization. The first issue that makes implementation difficult is avoiding the rapid relative phase drift caused by the wavelength difference between Alice's and Bob's lasers. We realize the wavelength synchronization with the assistance of the laser injection technique. A laser with 1550.12~nm wavelength and 3~kHz linewidth is placed in Charlie as the master laser. The continuous-wave (cw) bright beam it produces is injected into Alice's and Bob's slave lasers. To guarantee a 0~dBm cw bright beam injected into each slave laser, we add four erbium-doped fiber amplifiers (EDFAs), two of which are placed in Yiyuan city (36$^\circ$1'12.60" N, 118$^\circ$2'24.16" E) and Zhucheng city (36$^\circ$11'59.31" N, 119$^\circ$4'43.58" E), respectively (as shown in FIG.~\ref{fig:MapSetup}). The remaining two are added in Alice's and Bob's apparatus. A 10~GHz fiber Bragg grating (FBG) is inserted in Alice's (Bob's) apparatus to filter the amplified spontaneous emission (ASE) noise of the EDFAs.
To achieve stable and high-efficiency injection, a polarization auto-alignment module is inserted before the injection. The pulses produced by the slave laser pass through two Sagnac rings (SRs) and three phase modulators (PMs) for encoding and phase randomization in the encoding and measurement system. The pulses are attenuated to the desired levels by an electrical variable optical attenuator (EVOA) before being transmitted to Charlie through the quantum channel. In Charlie, a 50:50 BS performs single-photon interference of the incoming pulses after noise filtering. The measurement results are detected by two superconducting nanowire single-photon detectors (SNSPDs) with efficiencies of 73\% and 76\%, respectively. Two polarization auto-alignment modules are employed to compensate, in real time, the polarization drifts in the long fibers before interference. Charlie's overall detection efficiency is 28.20\%, taking into account 2.4~dB insertion loss, 30\% non-overlap between the signal pulse and the detection window, and 94\% polarization alignment efficiency. The dark count rate of each SNSPD is about 6~Hz, corresponding to a dark count probability of $2.0 \times 10^{-9}$/pulse. \begin{figure} \caption{Relative phase drift caused by fiber channels of different lengths. All results except ours were measured on optical fiber spools. In our work, the total relative phase drift, 7.80~rad/ms, is accumulated over 856~km of fiber.} \label{fig:RelativePhaseDrift} \end{figure} Another challenge we encountered is the rapid relative phase drift stemming from the long fiber channel. A comparison of the relative phase drift over different fiber distances in previous works and ours is shown in FIG.~\ref{fig:RelativePhaseDrift}. We stress that in our work, the signal pulses produced by the slave laser inherit the global phase of the cw bright beam, which is influenced by the 428~km synchronization channel.
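Charlie's quoted detection efficiency and dark count probability can be reproduced with a back-of-the-envelope check. This is our own sketch, not the paper's analysis code: averaging the two SNSPD efficiencies and treating the 320~ps pulse width as the effective detection window are assumptions on our part.

```python
# Overall detection efficiency at Charlie from the stated contributions:
# 2.4 dB insertion loss, 70% signal/detection-window overlap (30% non-overlap),
# 94% polarization alignment efficiency, and the SNSPD efficiency
# (assumed here to enter as the average of 73% and 76%).
insertion = 10 ** (-2.4 / 10)        # ~0.575
overlap = 1 - 0.30
polarization = 0.94
snspd = (0.73 + 0.76) / 2            # assumption: average of the two detectors
eta_d = insertion * overlap * polarization * snspd
print(f"overall detection efficiency ~ {eta_d:.1%}")   # ~28.2%

# Per-pulse dark count probability: a 6 Hz dark count rate gated by the
# 320 ps pulse width taken as the effective detection window (assumption).
p_dark = 6 * 320e-12
print(f"dark counts ~ {p_dark:.1e}/pulse")             # ~1.9e-9, i.e. about 2e-9
```

Both numbers land within rounding of the quoted 28.20\% and $2.0\times 10^{-9}$/pulse.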
The signal pulses then travel along the 428~km quantum channel before interference, so the relative phase drift is accumulated over 856~km of fiber links in total. Fortunately, although the relative phase drift in our field test is accumulated over a longer fiber than in any previous laboratory work, it is not the fastest among all works. This makes the relative phase calculation in our field test less demanding than in the 402~km laboratory experiment of Ref.~\cite{fang2019surpassing} (where about 800~km of fiber influences the relative phase drift). We verified that we could indeed estimate and compensate the relative phase drift caused by the long fiber channel. In our work, Alice and Bob periodically sacrifice a part of the signal pulses as bright reference pulses, send them to Charlie for the relative phase calculation, and apply a post-selection method to the signal pulses (see Ref.~\cite{fang2019surpassing} and the Supplemental Materials for details). The bright reference pulses $\mu_{r}$ are set to about 450 photons per pulse at a repetition rate of 200~MHz, which results in a 5~MHz count rate on the two SNSPDs for the calculation; the duration of each calculation is 20~$\mu$s. The bright reference pulses lead to noise in the long fiber channel, which is hard to avoid in both field and laboratory tests. After being filtered by four 100~GHz dense wavelength division multiplexers (DWDMs) in Charlie, the remaining noise is about $1.4\times 10^{-8}$/pulse. This setting is an optimal trade-off, and the details are shown in the Supplemental Materials. \begin{figure} \caption{Characterization of the crosstalk noise. All measurements are performed under the same overall detection efficiency (28.20\%). (a) The crosstalk noise caused by the classical services running in some fibers in the optical cable. We test two available fiber channels, Fiber~1 and Fiber~2 (Fiber~3 and Fiber~4), from Alice (Bob) to Charlie.
The blue and red columns are the measurement results before and after filtering with two 100~GHz DWDMs, respectively. Taking the loss and the crosstalk noise of the fiber channels into account, we use Fiber~1 and Fiber~4 as the quantum channel. (b) The crosstalk noise caused by the cw bright beam in the synchronization channel at different optical launch powers. Each experiment lasts 5 minutes. The experimental results are the average and variance (1 standard deviation) calculated over 144 experiments.} \label{fig:crosstalknoise} \end{figure} Besides, we face crosstalk noise in the field test, which is never encountered in TF-QKD laboratory experiments. The quantum channel for transmitting signal pulses lies in an optical cable containing 96 fibers. Part of the noise arises from the classical services running in some fibers of the optical cable. Fortunately, it can be filtered by four DWDMs in Charlie down to approximately $5.1\times 10^{-9}$/pulse, which is acceptable for us, as shown in FIG.~\ref{fig:crosstalknoise}(a). The other part of the noise arises from the cw bright beam (at the same wavelength as the signal, generated by the master laser in Charlie) in the synchronization channel, which runs in the same optical cable. Thus it can be filtered neither spectrally nor temporally. We found that the crosstalk noise becomes weaker as the optical launch power of the cw bright beam decreases, as shown in FIG.~\ref{fig:crosstalknoise}(b). To suppress this noise, we reduce the optical launch power of the master laser to about 5~dBm and increase the EDFA gain appropriately, resulting in a noise level of $3.6\times 10^{-9}$/pulse. A stable and high-efficiency injection can still be ensured in this case. \begin{figure} \caption{(a) Experimental and simulated secure key rates. The purple pentagram is the secure key rate of our work.
The red curve is the simulation result using the parameters in Table~\ref{tab:para}.} \label{result} \end{figure} {\it Results.---} The main system parameters are listed in Table~\ref{tab:para}. In our field test, Alice and Bob sent a total of $5.59\times 10^{12}$ pulse pairs and obtained $2.79\times 10^7$ sifted key bits in the $Z$-basis, including $27.84\%$ error bits. According to the method shown in the Supplemental Materials and the data acquired in the experiment, there are at least $1.29\times 10^7$ untagged bits in the sifted keys, corresponding to an $11.07\%$ phase-flip error rate before AOPP. After AOPP, $5.84\times 10^6$ key bits survive, containing 0.69\% error bits. These values agree very well with the theoretical expectations. The number of untagged bits is $2.38\times 10^6$, with a corresponding phase-flip error rate of $20.24\%$. Taking the finite-key effect into consideration, we finally obtain a secure key rate of $4.80\times 10^{-8}$/pulse (corresponding to 3.36~bps), which is $170\%$ higher than the absolute PLOB bound and $859\%$ higher than the relative PLOB bound. FIG.~\ref{result} shows the performance of our work in terms of the simulated key rates, the achieved secure key rate, and the total efficiency of the polarization auto-alignment module and arrival-time synchronization.
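The reported numbers can be cross-checked with a short script. This is a sketch under simplifying assumptions of ours: we sum the individually quoted noise contributions to approximate the overall noise probability $p_d$ of Table~\ref{tab:para}, use the simplified asymptotic SNS/AOPP key-length expression $n_1'(1-H(e_1'^{ph})) - f\,n_t H(E_z)$ in place of the full finite-key analysis of the Supplemental Materials, and evaluate the repeaterless PLOB bound $-\log_2(1-\eta)$:

```python
import math

def h(x):
    """Binary Shannon entropy H(x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# 1) Noise budget: summing the quoted per-pulse contributions (SNSPD dark
# counts, residual reference-pulse noise, classical-service crosstalk,
# cw-beam crosstalk) approximately reproduces p_d of Table I (assumption:
# the contributions add linearly).
p_d = 2.0e-9 + 1.4e-8 + 5.1e-9 + 3.6e-9
print(f"summed noise ~ {p_d:.2e}/pulse")          # ~2.5e-8

# 2) Key rate from the post-AOPP quantities (asymptotic expression; the
# paper's finite-key analysis yields essentially the same number).
N = 5.59e12      # total signal pulse pairs
n_t = 5.84e6     # sifted key bits surviving AOPP
E_z = 0.0069     # bit error rate after AOPP
n_1 = 2.38e6     # untagged bits after AOPP
e_1ph = 0.2024   # phase-flip error rate after AOPP
f = 1.1          # error-correction efficiency
R = (n_1 * (1 - h(e_1ph)) - f * n_t * h(E_z)) / N
print(f"key rate ~ {R:.2e}/pulse")                # ~4.8e-8, as reported

# 3) Repeaterless PLOB bound -log2(1 - eta): "absolute" uses the channel
# loss only (40.5 dB + 38.6 dB); "relative" also includes Charlie's
# 28.20% detection efficiency.
eta_ch = 10 ** (-(40.5 + 38.6) / 10)
for name, eta in [("absolute", eta_ch), ("relative", eta_ch * 0.2820)]:
    bound = -math.log2(1 - eta)
    print(f"{name} PLOB bound ~ {bound:.2e}; our rate is {R / bound - 1:.0%} higher")
```

The script reproduces, to within rounding, the quoted $p_d$, the $4.80\times 10^{-8}$/pulse key rate, and the roughly $170\%$ and $859\%$ advantages over the absolute and relative PLOB bounds.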
\begin{table}[h] \caption{List of the main experimental parameters used in the numerical simulation: total number of signal pulse pairs $N$, overall dark count probability $p_d$, overall detection efficiency of Charlie $\eta_d$, misalignment-error probability of the $X$-basis $e_d^X$, loss coefficient of the quantum channel from Alice (Bob) to Charlie $\alpha_{AC}$ ($\alpha_{BC}$) in dB/km, quantum channel distance from Alice (Bob) to Charlie $L_{AC}$ ($L_{BC}$) in km, error-correction efficiency $f$, and failure probability $\epsilon$.} \begin{spacing}{1.5} \begin{tabular}{p{2cm}<{\centering}p{2cm}<{\centering}p{1.5cm}<{\centering}p{1.5cm}<{\centering}p{1.5cm}<{\centering}p{1.5cm}<{\centering}p{1.5cm}<{\centering}p{1.5cm}<{\centering}p{1.5cm}<{\centering}p{1.5cm}<{\centering}} \hline\hline $N$ & $p_d$ & $\eta_d$ & $\alpha_{AC}$ & $\alpha_{BC}$ & $L_{AC}$ & $L_{BC}$ & $e_d^X$ & $f$ & $\epsilon$ \\ \hline $5.59\times 10^{12}$ & $2.50\times10^{-8}$ & 28.20\% & 0.182 & 0.188 & 223 & 205 & 8\% & 1.1 & $10^{-10}$ \\ \hline\hline \end{tabular} \end{spacing} \label{tab:para} \end{table} {\it Conclusions.---} Applying the SNS protocol~\cite{wang2018sns}, we have performed the first field test of high-rate TF-QKD over deployed commercial fiber, generating a 3.36~bps secure key rate over 428~km. This is at present the longest distance for terrestrial real-world QKD without trusted relays, and it pushes the separation between two users beyond 300~km. Our result demonstrates the feasibility of trusted-relay-less QKD between cities in practical circumstances. It motivates future demonstrations of high-speed intercity-scale QKD networks in the absence of quantum repeaters. Further extensions to higher key rates include increasing the system's repetition rate, utilizing fiber links with lower attenuation and less noise, and enhancing the performance of the lasers and detectors.
This work was supported by the National Key R\&D Program of China (2017YFA0303903), the Chinese Academy of Sciences, the National Fundamental Research Program, the National Natural Science Foundation of China (grants 11875173, 61875182 and 11674193) and Anhui Initiative in Quantum Information Technologies and Fundamental Research Funds for the Central Universities (WK2340000083). \begin{thebibliography}{32} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont
{Bennett}\ and\ \citenamefont {Brassard}(1984)}]{bennett1984quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Bennett}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Brassard}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Proceedings of the IEEE International\ Conference on Computers, Systems, and Signal Processing}}}\ (\bibinfo {year} {1984})\ pp.\ \bibinfo {pages} {175--179}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2002)\citenamefont {Gisin}, \citenamefont {Ribordy}, \citenamefont {Tittel},\ and\ \citenamefont {Zbinden}}]{gisin2002quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ribordy}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Tittel}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {145} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Scarani}\ \emph {et~al.}(2009)\citenamefont {Scarani}, \citenamefont {Bechmann-Pasquinucci}, \citenamefont {Cerf}, \citenamefont {Du{\v{s}}ek}, \citenamefont {L{\"u}tkenhaus},\ and\ \citenamefont {Peev}}]{scarani2009security} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Scarani}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Bechmann-Pasquinucci}}, \bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont {Cerf}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Du{\v{s}}ek}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {L{\"u}tkenhaus}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Peev}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {1301} (\bibinfo {year} {2009})}\BibitemShut 
{NoStop} \bibitem [{\citenamefont {Liao}\ \emph {et~al.}(2017)\citenamefont {Liao}, \citenamefont {Cai}, \citenamefont {Liu}, \citenamefont {Zhang}, \citenamefont {Li}, \citenamefont {Ren}, \citenamefont {Yin}, \citenamefont {Shen}, \citenamefont {Cao}, \citenamefont {Li} \emph {et~al.}}]{liao2017satellite} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.-K.}\ \bibnamefont {Liao}}, \bibinfo {author} {\bibfnamefont {W.-Q.}\ \bibnamefont {Cai}}, \bibinfo {author} {\bibfnamefont {W.-Y.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {J.-G.}\ \bibnamefont {Ren}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Shen}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {Z.-P.}\ \bibnamefont {Li}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {549}},\ \bibinfo {pages} {43} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xu}\ \emph {et~al.}(2020{\natexlab{a}})\citenamefont {Xu}, \citenamefont {Ma}, \citenamefont {Zhang}, \citenamefont {Lo},\ and\ \citenamefont {Pan}}]{xu2020secure} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {025002} (\bibinfo {year} {2020}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hwang}(2003)}]{H03} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont 
{W.-Y.}\ \bibnamefont {Hwang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {057901} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}(2005)}]{wang05} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Wang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {230503} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lo}\ \emph {et~al.}(2005)\citenamefont {Lo}, \citenamefont {Ma},\ and\ \citenamefont {Chen}}]{LMC05} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Ma}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Chen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {230504} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Braunstein}\ and\ \citenamefont {Pirandola}(2012)}]{braunstein2012side} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont {Braunstein}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pirandola}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages} {130502} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lo}\ \emph {et~al.}(2012)\citenamefont {Lo}, \citenamefont {Curty},\ and\ \citenamefont {Qi}}]{lo2012measurement} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Curty}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Qi}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical 
Review Letters}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages} {130503} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhou}\ \emph {et~al.}(2016)\citenamefont {Zhou}, \citenamefont {Yu},\ and\ \citenamefont {Wang}}]{WangPRA2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {Z.-W.}\ \bibnamefont {Yu}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Wang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {042324} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yin}\ \emph {et~al.}(2016)\citenamefont {Yin}, \citenamefont {Chen}, \citenamefont {Yu}, \citenamefont {Liu}, \citenamefont {You}, \citenamefont {Zhou}, \citenamefont {Chen}, \citenamefont {Mao}, \citenamefont {Huang}, \citenamefont {Zhang} \emph {et~al.}}]{MDI404km} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-L.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {T.-Y.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.-W.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {L.-X.}\ \bibnamefont {You}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {S.-J.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Mao}}, \bibinfo {author} {\bibfnamefont {M.-Q.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {W.-J.}\ \bibnamefont {Zhang}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {117}},\ \bibinfo {pages} {190501} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lucamarini}\ \emph {et~al.}(2018)\citenamefont {Lucamarini}, \citenamefont {Yuan}, \citenamefont {Dynes},\ and\ \citenamefont 
{Shields}}]{nature18} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lucamarini}}, \bibinfo {author} {\bibfnamefont {Z.~L.}\ \bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Dynes}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Shields}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {557}},\ \bibinfo {pages} {400} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pirandola}\ \emph {et~al.}(2017)\citenamefont {Pirandola}, \citenamefont {Laurenza}, \citenamefont {Ottaviani},\ and\ \citenamefont {Banchi}}]{PLOB2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pirandola}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Laurenza}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Ottaviani}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Banchi}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Communications}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {15043} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peev}\ \emph {et~al.}(2009)\citenamefont {Peev}, \citenamefont {Pacher}, \citenamefont {Alleaume}, \citenamefont {Barreiro}, \citenamefont {Bouda}, \citenamefont {Boxleitner}, \citenamefont {Debuisschert}, \citenamefont {Diamanti}, \citenamefont {Dianati}, \citenamefont {Dynes} \emph {et~al.}}]{peev2009the} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Peev}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Pacher}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Alleaume}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Barreiro}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Bouda}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Boxleitner}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Debuisschert}}, \bibinfo {author} 
{\bibfnamefont {E.}~\bibnamefont {Diamanti}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Dianati}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Dynes}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {075001} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stucki}\ \emph {et~al.}(2011)\citenamefont {Stucki}, \citenamefont {Legre}, \citenamefont {Buntschu}, \citenamefont {Clausen}, \citenamefont {Felber}, \citenamefont {Gisin}, \citenamefont {Henzen}, \citenamefont {Junod}, \citenamefont {Litzistorf}, \citenamefont {Monbaron} \emph {et~al.}}]{stucki2011longterm} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Stucki}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Legre}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Buntschu}}, \bibinfo {author} {\bibfnamefont {B.~F.}\ \bibnamefont {Clausen}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Felber}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Henzen}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Junod}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Litzistorf}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Monbaron}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {123001} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2009)\citenamefont {Chen}, \citenamefont {Liang}, \citenamefont {Liu}, \citenamefont {Cai}, \citenamefont {Ju}, \citenamefont {Liu}, \citenamefont {Wang}, \citenamefont {Yin}, \citenamefont {Chen}, \citenamefont {Chen} \emph {et~al.}}]{chen2009field} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Chen}}, \bibinfo {author} 
{\bibfnamefont {H.}~\bibnamefont {Liang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Cai}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Ju}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Chen}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Optics Express}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {6540} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dynes}\ \emph {et~al.}(2019)\citenamefont {Dynes}, \citenamefont {Wonfor}, \citenamefont {Tam}, \citenamefont {Sharpe}, \citenamefont {Takahashi}, \citenamefont {Lucamarini}, \citenamefont {Plews}, \citenamefont {Yuan}, \citenamefont {Dixon}, \citenamefont {Cho} \emph {et~al.}}]{dynes2019cambridge} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Dynes}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Wonfor}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Tam}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Sharpe}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Takahashi}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lucamarini}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Plews}}, \bibinfo {author} {\bibfnamefont {Z.~L.}\ \bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont {A.~R.}\ \bibnamefont {Dixon}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Cho}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Information}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {1} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2010)\citenamefont {Chen}, \citenamefont 
{Wang}, \citenamefont {Liang}, \citenamefont {Liu}, \citenamefont {Liu}, \citenamefont {Jiang}, \citenamefont {Wang}, \citenamefont {Wan}, \citenamefont {Cai}, \citenamefont {Ju} \emph {et~al.}}]{chen2010metropolitan} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.-Y.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Liang}}, \bibinfo {author} {\bibfnamefont {W.-Y.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Wan}}, \bibinfo {author} {\bibfnamefont {W.-Q.}\ \bibnamefont {Cai}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Ju}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Optics Express}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {27217} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2010)\citenamefont {Wang}, \citenamefont {Chen}, \citenamefont {Yin}, \citenamefont {Zhang}, \citenamefont {Zhang}, \citenamefont {Li}, \citenamefont {Xu}, \citenamefont {Zhou}, \citenamefont {Yang}, \citenamefont {Huang} \emph {et~al.}}]{wang2010field} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Huang}}, \emph 
{et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Optics Letters}\ }\textbf {\bibinfo {volume} {35}},\ \bibinfo {pages} {2454} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tang}\ \emph {et~al.}(2016)\citenamefont {Tang}, \citenamefont {Yin}, \citenamefont {Zhao}, \citenamefont {Liu}, \citenamefont {Sun}, \citenamefont {Huang}, \citenamefont {Zhang}, \citenamefont {Chen}, \citenamefont {Zhang}, \citenamefont {You} \emph {et~al.}}]{tang2016measurement} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.-L.}\ \bibnamefont {Tang}}, \bibinfo {author} {\bibfnamefont {H.-L.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {X.-X.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {M.-Q.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {W.-J.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {S.-J.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {L.-X.}\ \bibnamefont {You}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review X}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {011024} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Minder}\ \emph {et~al.}(2019)\citenamefont {Minder}, \citenamefont {Pittaluga}, \citenamefont {Roberts}, \citenamefont {Lucamarini}, \citenamefont {Dynes}, \citenamefont {Yuan},\ and\ \citenamefont {Shields}}]{minder2019experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Minder}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Pittaluga}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Roberts}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lucamarini}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Dynes}}, \bibinfo {author} 
{\bibfnamefont {Z.}~\bibnamefont {Yuan}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Shields}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Photonics}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {334} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2019)\citenamefont {Liu}, \citenamefont {Yu}, \citenamefont {Zhang}, \citenamefont {Guan}, \citenamefont {Chen}, \citenamefont {Zhang}, \citenamefont {Hu}, \citenamefont {Li}, \citenamefont {Jiang}, \citenamefont {Lin} \emph {et~al.}}]{liuyang2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Z.-W.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {J.-Y.}\ \bibnamefont {Guan}}, \bibinfo {author} {\bibfnamefont {J.-P.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lin}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {123}},\ \bibinfo {pages} {100505} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2019)\citenamefont {Wang}, \citenamefont {He}, \citenamefont {Yin}, \citenamefont {Lu}, \citenamefont {Cui}, \citenamefont {Chen}, \citenamefont {Zhou}, \citenamefont {Guo},\ and\ \citenamefont {Han}}]{wang2019beating} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {D.-Y.}\ \bibnamefont {He}}, \bibinfo {author} {\bibfnamefont {Z.-Q.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {F.-Y.}\ \bibnamefont {Lu}}, 
\bibinfo {author} {\bibfnamefont {C.-H.}\ \bibnamefont {Cui}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {G.-C.}\ \bibnamefont {Guo}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.-F.}\ \bibnamefont {Han}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review X}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {021046} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhong}\ \emph {et~al.}(2019)\citenamefont {Zhong}, \citenamefont {Hu}, \citenamefont {Curty}, \citenamefont {Qian},\ and\ \citenamefont {Lo}}]{zhong2019proof} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Zhong}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Curty}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Qian}}, \ and\ \bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {123}},\ \bibinfo {pages} {100506} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fang}\ \emph {et~al.}(2020)\citenamefont {Fang}, \citenamefont {Zeng}, \citenamefont {Liu}, \citenamefont {Zou}, \citenamefont {Wu}, \citenamefont {Tang}, \citenamefont {Sheng}, \citenamefont {Xiang}, \citenamefont {Zhang}, \citenamefont {Li} \emph {et~al.}}]{fang2019surpassing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-T.}\ \bibnamefont {Fang}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zeng}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Zou}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {Y.-L.}\ \bibnamefont {Tang}}, \bibinfo {author} {\bibfnamefont {Y.-J.}\ \bibnamefont {Sheng}}, \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {Xiang}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Li}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Photonics}\ ,\ \bibinfo {pages} {1}} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2020)\citenamefont {Chen}, \citenamefont {Zhang}, \citenamefont {Liu}, \citenamefont {Jiang}, \citenamefont {Zhang}, \citenamefont {Hu}, \citenamefont {Guan}, \citenamefont {Yu}, \citenamefont {Xu}, \citenamefont {Lin} \emph {et~al.}}]{chen2020sending} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-P.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {J.-Y.}\ \bibnamefont {Guan}}, \bibinfo {author} {\bibfnamefont {Z.-W.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lin}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {124}},\ \bibinfo {pages} {070501} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2018)\citenamefont {Wang}, \citenamefont {Yu},\ and\ \citenamefont {Hu}}]{wang2018sns} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Z.-W.}\ \bibnamefont {Yu}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Hu}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {062323} 
(\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jiang}\ \emph {et~al.}(2019)\citenamefont {Jiang}, \citenamefont {Yu}, \citenamefont {Hu},\ and\ \citenamefont {Wang}}]{jiang2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {Z.-W.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Hu}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Wang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Applied}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {024061} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xu}\ \emph {et~al.}(2020{\natexlab{b}})\citenamefont {Xu}, \citenamefont {Yu}, \citenamefont {Jiang}, \citenamefont {Hu},\ and\ \citenamefont {Wang}}]{xu2019general} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {Z.-W.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Hu}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Wang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo {pages} {042330} (\bibinfo {year} {2020}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hu}\ \emph {et~al.}(2019)\citenamefont {Hu}, \citenamefont {Jiang}, \citenamefont {Yu},\ and\ \citenamefont {Wang}}]{huxl} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {Z.-W.}\ \bibnamefont {Yu}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Wang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages} 
{062337} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yu}\ \emph {et~al.}(2019)\citenamefont {Yu}, \citenamefont {Hu}, \citenamefont {Jiang}, \citenamefont {Xu},\ and\ \citenamefont {Wang}}]{yu2018sending} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.-W.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Xu}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Wang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Scientific Reports}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {1} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \end{thebibliography}

\setcounter{figure}{0} \setcounter{table}{0} \setcounter{equation}{0} \onecolumngrid \global\long\def\theequation{S\arabic{equation}} \global\long\def\thefigure{S\arabic{figure}} \renewcommand{\thetable}{S\arabic{table}} \renewcommand{0.6}{0.6} \normalsize

\msection{SUPPLEMENTARY INFORMATION} \section{The SNS-TF-QKD protocol with odd-parity error rejection} \subsection{The theory} The SNS-TF-QKD protocol with asymmetric parameters is used to perform the experiment, and the theory of actively odd-parity pairing (AOPP) with finite-key effects is used to extract the final key. In this protocol, Alice and Bob repeat the following process $N$ times to obtain a series of data:

\noindent At each time window, Alice (Bob) randomly decides whether it is a decoy window, with probability $1-p_{A2}$ ($1-p_{B2}$), or a signal window, with probability $p_{A2}$ ($p_{B2}$).
If it is a signal window, with probability $\epsilon_A$ ($\epsilon_B$), Alice (Bob) randomly prepares a phase-randomized weak coherent state (WCS) pulse with intensity $\mu_{A2}$ ($\mu_{B2}$), and denotes it as bit $1$ ($0$); with probability $1-\epsilon_A$ ($1-\epsilon_B$), Alice (Bob) prepares a vacuum pulse, and denotes it as bit $0$ ($1$). If it is a decoy window, Alice (Bob) randomly prepares a vacuum pulse or a coherent state pulse $\ket{e^{i\theta_{A1}}\sqrt{\mu_{A1}}}$ ($\ket{e^{i\theta_{B1}}\sqrt{\mu_{B1}}}$) with probabilities $1-p_{A1}$ and $p_{A1}$ ($1-p_{B1}$ and $p_{B1}$), respectively, where $\theta_{A1}$ and $\theta_{B1}$ differ from window to window and are random in $[0,2\pi)$. For security, we impose the condition \begin{equation}\label{equ:condition} \frac{\mu_{A1}}{\mu_{B1}}= \frac{\epsilon_A(1-\epsilon_B) \mu_{A2} e^{-\mu_{A2}}}{\epsilon_B(1-\epsilon_A) \mu_{B2} e^{-\mu_{B2}}}. \end{equation} Then Alice and Bob send their prepared pulses to Charlie, who is assumed to perform interferometric measurements on the received pulses and then announce the results to Alice and Bob. If one and only one detector clicks in the measurement process, Charlie also tells Alice and Bob which detector it was, and Alice and Bob take it as a one-detector heralded event. Then Alice and Bob announce, through the public channel, the mode they used in each time window. A time window in which both Alice and Bob chose a signal window is called a $Z$ window. One-detector heralded events in $Z$ windows are called effective events. Alice and Bob get two $n_t$-bit strings, $Z_A$ and $Z_B$, formed by the corresponding bits of the effective events of the $Z$ windows. We denote the bit-flip error rate between the strings $Z_A$ and $Z_B$ by $E$. The strings $Z_A$ and $Z_B$ will be used to extract the secure final keys.
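As a quick numerical sanity check (not part of the protocol itself), the constraint of Eq.~(\ref{equ:condition}) can be verified by plugging in the source intensities and sending probabilities reported in the experimental-parameters table at the end of this supplement:

```python
from math import exp

# Parameter values from the experimental-parameters table of this supplement
mu_A2, mu_B2 = 0.454, 0.425   # signal-window WCS intensities
mu_A1, mu_B1 = 0.042, 0.029   # decoy-window WCS intensities
eps_A, eps_B = 0.307, 0.241   # sending probabilities in signal windows

# Security condition: the decoy-intensity ratio must equal the ratio below
lhs = mu_A1 / mu_B1
rhs = (eps_A * (1 - eps_B) * mu_A2 * exp(-mu_A2)) / (
       eps_B * (1 - eps_A) * mu_B2 * exp(-mu_B2))

print(lhs, rhs)  # both are approximately 1.448
```

Both sides agree to about three decimal places, confirming that the chosen asymmetric parameters satisfy the condition.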
The pulse intensity in each $Z$ window, that is, the decision of whether or not to send, is kept private, but the intensities of the other pulses in each window are publicly announced after Alice and Bob finish mode calibration. For a time window in which the intensity of Alice's WCS pulse is $\mu_{A1}$ and the intensity of Bob's WCS pulse is $\mu_{B1}$, Alice and Bob also announce the phase information $\theta_{A1}$ and $\theta_{B1}$ in the public channel, and if the phases of the WCS pulses satisfy \begin{equation} 1-\vert \cos(\theta_{A1}-\theta_{B1}-\psi_{AB})\vert\le \lambda, \end{equation} it is called an $X$ window. Here, $\psi_{AB}$ can take an arbitrary value, which can be different from time to time as Alice and Bob like, so as to obtain a satisfactory key rate for the protocol~\cite{liu2019experimental}. $\lambda$ is a positive value close to $0$, and is optimized to obtain the highest key rate. One-detector heralded events in $X$ windows are called effective events. The AOPP method is used to reduce the errors of the raw key strings $Z_A$ and $Z_B$ before the error correction and privacy amplification processes. In AOPP, Bob first actively pairs the bits $0$ with the bits $1$ of the raw key string $Z_B$, obtaining $n_g$ pairs. Alice then computes the parities of those $n_g$ pairs and announces them to Bob; they keep the pairs with parity value $1$ on both sides and discard the pairs with parity $0$ on Alice's side. Finally, Alice and Bob randomly keep one bit from each surviving pair and form two new $n_t^\prime$-bit strings, which are used to extract the secure final keys. The key rate $R$ of AOPP is given by \begin{equation}\label{r2} \begin{split} R=&\frac{1}{N}\{n_1^\prime[1-h(e_{1}^{\prime ph})]-fn_t^\prime h(E^\prime)-2\log_2{\frac{2}{\varepsilon_{cor}}}\\ &-2\log_2{\frac{1}{\sqrt{2}\varepsilon_{PA}\hat{\varepsilon}}}\}.
\end{split} \end{equation} where $n_1^\prime$ is the number of untagged bits after AOPP, $e_{1}^{\prime ph}$ is the phase-flip error rate of the untagged bits after AOPP, $h(x)=-x\log_2x-(1-x)\log_2(1-x)$ is the binary Shannon entropy, $E^\prime$ is the bit-flip error rate of the remaining bits after AOPP, $\varepsilon_{cor}$ is the failure probability of error correction, $\varepsilon_{PA}$ is the failure probability of privacy amplification, and $\hat{\varepsilon}$ is the coefficient used in the chain rules of smooth min- and max-entropies~\cite{vitanov2013chain}. As shown in Ref.~\cite{jiang2020zigzag}, we can calculate $n_1^\prime$ and $e_{1}^{\prime ph}$ by taking the number of untagged bits and the phase-flip error rate before AOPP as input values. The calculation details are shown in the next section.

\subsection{The calculation method} To show the calculation method clearly, we denote the vacuum source, the WCS source with intensity $\mu_{A1}$, and the WCS source with intensity $\mu_{A2}$ of Alice by $o$, $x$ and $y$, respectively. Similarly, we denote the vacuum source, the WCS source with intensity $\mu_{B1}$, and the WCS source with intensity $\mu_{B2}$ of Bob by $o^\prime$, $x^\prime$ and $y^\prime$, respectively. We denote the number of pulse pairs of source $\kappa\zeta$ $(\kappa=o,x,y;\zeta=o^\prime,x^\prime,y^\prime)$ sent out in the whole protocol by $N_{\kappa\zeta}$, and the total number of one-detector heralded events of source $\kappa\zeta$ by $n_{\kappa\zeta}$. We define the counting rate of source $\kappa\zeta$ by $S_{\kappa\zeta}=n_{\kappa\zeta}/N_{\kappa\zeta}$, and the corresponding expected value by $\mean{S_{\kappa\zeta}}$.
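Before going into the estimation details, note that the key-rate formula Eq.~(\ref{r2}) is simple arithmetic once its inputs have been estimated; a minimal sketch (the input values below are purely hypothetical, not the experimental ones):

```python
from math import log2, sqrt

def h(x):
    """Binary Shannon entropy; h(0) = h(1) = 0 by convention."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * log2(x) - (1.0 - x) * log2(1.0 - x)

def aopp_key_rate(N, n1, e1ph, f, nt, E, eps_cor, eps_PA, eps_hat):
    # R = (1/N){ n1'[1 - h(e1'^ph)] - f nt' h(E')
    #            - 2 log2(2/eps_cor) - 2 log2(1/(sqrt(2) eps_PA eps_hat)) }
    return (n1 * (1.0 - h(e1ph))
            - f * nt * h(E)
            - 2.0 * log2(2.0 / eps_cor)
            - 2.0 * log2(1.0 / (sqrt(2.0) * eps_PA * eps_hat))) / N

# Hypothetical illustration only: post-AOPP estimates and failure probabilities
R = aopp_key_rate(N=5.6e12, n1=3.0e5, e1ph=0.05, f=1.1,
                  nt=4.0e5, E=0.001, eps_cor=1e-10, eps_PA=1e-10, eps_hat=1e-10)
```

The finite-size correction terms (the two $\log_2$ terms) are of order $10^2$ bits and are negligible whenever $n_1^\prime$ is large.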
With all these definitions, we have \begin{equation} \begin{split} N_{oo^\prime}=&(1-p_{A2})(1-p_{B2})(1-p_{A1})(1-p_{B1})N,\\ N_{ox^\prime}=&(1-p_{A2})(1-p_{B2})(1-p_{A1})p_{B1}N,\\ N_{xo^\prime}=&(1-p_{A2})(1-p_{B2})p_{A1}(1-p_{B1})N,\\ N_{oy^\prime}=&(1-p_{A2})p_{B2}(1-p_{A1})\epsilon_{B}N,\\ N_{yo^\prime}=&p_{A2}(1-p_{B2})\epsilon_A(1-p_{B1})N. \end{split} \end{equation} As the sources $x,y,x^\prime,y^\prime$ are phase-randomized WCS sources, they are actually classical mixtures of different photon number states~\cite{hu2019general}. Thus we can use the decoy-state method to calculate the lower bounds of the expected values of the counting rates of the states $\oprod{01}{01}$ and $\oprod{10}{10}$, which are \begin{align} \label{s01mean}\mean{\underline{s_{01}}}&= \frac{\mu_{B2}^{2}e^{\mu_{B1}}\mean{S_{ox^\prime}}-\mu_{B1}^{2}e^{\mu_{B2}}\mean{S_{oy^\prime}}-(\mu_{B2}^{2}-\mu_{B1}^{2})\mean{S_{oo^\prime}}}{\mu_{B2}\mu_{B1}(\mu_{B2}-\mu_{B1})},\\ \mean{\underline{s_{10}}}&= \frac{\mu_{A2}^{2}e^{\mu_{A1}}\mean{S_{xo^\prime}}-\mu_{A1}^{2}e^{\mu_{A2}}\mean{S_{yo^\prime}}-(\mu_{A2}^{2}-\mu_{A1}^{2})\mean{S_{oo^\prime}}}{\mu_{A2}\mu_{A1}(\mu_{A2}-\mu_{A1})}. \end{align} Then we can get the lower bound of the expected value of the counting rate of untagged photons \begin{equation} \mean{\underline{s_1}}=\frac{\mu_{A1}}{\mu_{A1}+\mu_{B1}}\mean{\underline{s_{10}}}+\frac{\mu_{B1}}{\mu_{A1}+\mu_{B1}}\mean{\underline{s_{01}}}, \end{equation} and the lower bounds of the expected values of the number of untagged bits $1$, $\mean{\underline{n_{10}}}$, and untagged bits $0$, $\mean{\underline{n_{01}}}$, \begin{align} \mean{\underline{n_{10}}}=Np_{A2}p_{B2}\epsilon_A(1-\epsilon_B)\mu_{A2}e^{-\mu_{A2}}\mean{\underline{s_{10}}},\\ \mean{\underline{n_{01}}}=Np_{A2}p_{B2}\epsilon_B(1-\epsilon_A)\mu_{B2}e^{-\mu_{B2}}\mean{\underline{s_{01}}}.
\end{align} The error counting rate of the $X$ windows, $T_{X}$, can be used to estimate the upper bound of the expected value of the phase-flip error rate, $\mean{\overline{e_1^{ph}}}$. The criterion for error events in $X$ windows is given in Ref.~\cite{hu2019general}. We denote the number of total pulses with intensities $\mu_{A1}$ and $\mu_{B1}$ sent out in the $X$ windows by $N_{X}$, and the number of corresponding error events by $m_{X}$, so that \begin{equation} T_{X}=\frac{m_{X}}{N_{X}}. \end{equation} Then we have \begin{equation}\label{e1} \mean{\overline{e_1^{ph}}}=\frac{\mean{T_{X}}-e^{-\mu_{A1}-\mu_{B1}}\mean{S_{oo^\prime}}/2}{e^{-\mu_{A1}-\mu_{B1}}(\mu_{A1}+\mu_{B1})\mean{\underline{s_1}}}, \end{equation} where $\mean{T_{X}}$ is the expected value of $T_{X}$. By taking the estimated values before AOPP, $\mean{\underline{n_{10}}},\mean{\underline{n_{01}}}$ and $\mean{\overline{e_1^{ph}}}$, as input values, we can calculate $n_1^\prime$ and $e_{1}^{\prime ph}$ by the method proposed in Ref.~\cite{jiang2020zigzag}. The related formulas are as follows.
\begin{subequations} \begin{align} &u=\frac{n_g}{2n_{odd}},\quad \underline{n_{1}}=\varphi^L(\mean{\underline{n_{10}}}+\mean{\underline{n_{01}}}),\\ &\underline{n_{10}}=\varphi^L(\mean{\underline{n_{10}}}),\quad\underline{n_{01}}=\varphi^L(\mean{\underline{n_{01}}}),\\ &n=\varphi^L\left(\frac{{\underline{n_{1}}}}{n_t}\frac{{\underline{n_{1}}}}{n_t}\frac{un_t}{2}\right),\\ &k=u\underline{n_{1}}-2n,\\ &r=\frac{2n+k}{k}\ln\frac{3k^2}{\varepsilon(r,k)},\\ &\bar{M}=\varphi^U(2n\mean{\overline{e_1^{ph}}}),\\ &\mean{e_{\tau}}=\frac{\bar{M}}{2n-r},\\ &\bar{M}_s=\varphi^U[(n-r)\mean{e_{\tau}}(1-\mean{e_{\tau}})]+r, \end{align} \end{subequations} where $n_{odd}$ is the observed number of odd-parity pairs obtained when Bob randomly groups all the bits in $Z_B$ two by two, $\varepsilon(r,k)=10^{-10}$ is the trace distance used in the exponential de Finetti representation theorem, and $\varphi^U(x),\varphi^L(x)$ are the upper and lower bounds obtained by using the Chernoff bound~\cite{chernoff1952measure} to estimate real values from expected values. Finally, we can calculate $n_1^\prime$ and $e_{1}^{\prime ph}$ by \begin{align} &n_1^\prime=\varphi^L\left(\frac{{\underline{n_{01}}}}{n_{t0}}\frac{{\underline{n_{10}}}}{n_{t1}}n_g\right),\\ &e_{1}^{\prime ph}=\frac{2\bar{M}_s}{n_1^\prime}, \end{align} where $n_{t0}$ is the number of bits $0$ and $n_{t1}$ is the number of bits $1$ in the string $Z_B$.

\section{DETAILS OF THE EXPERIMENT} \subsection{The relative phase calculation} Instead of active phase-locking, we calculate the relative phase directly and apply a post-selection method to the effective events in which Alice and Bob both choose an $X$ window. The total relative phase drifts because of the 856~km fiber; the drift rate is 7.80~rad/ms in our work. Alice and Bob periodically sacrifice a part of the signal pulses as bright reference pulses and send them to Charlie for the relative phase calculation.
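The "direct calculation" mentioned above amounts to solving for the phase $\delta$ from counts accumulated under two reference phases ($0$ and $\pi/2$), using the interference relations given below; a minimal sketch (variable names are illustrative only):

```python
from math import atan2, pi, cos, sin

def relative_phase(N1, N2, M1, M2):
    # Interference relations: N1/(N1+N2) = (1 + cos d)/2,
    #                         M1/(M1+M2) = (1 - sin d)/2
    c = 2.0 * N1 / (N1 + N2) - 1.0   # cos(delta)
    s = 1.0 - 2.0 * M1 / (M1 + M2)   # sin(delta)
    return atan2(s, c) % (2.0 * pi)  # delta recovered in [0, 2*pi)

# Consistency check with a known phase delta = 1.0 rad
d = 1.0
N1, N2 = (1 + cos(d)) / 2, (1 - cos(d)) / 2
M1, M2 = (1 - sin(d)) / 2, (1 + sin(d)) / 2
```

Using both quadratures with `atan2` removes the sign ambiguity that a single cosine measurement would leave.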
We divide each cycle (2~$\mu$s) into four regions: a signal region, a recovery region, and two reference regions, as shown in FIG.~\ref{fig:Cycle}. Alice and Bob send bright reference pulses with intensity $\mu_r$ to Charlie for interference in the reference regions. Alice (Bob) loads a 0 (0) and 0 ($\pi/2$) phase in the former and latter reference regions, respectively. \begin{figure*} \caption{Time sequence in one cycle. Alice and Bob send the signal pulses in the signal region. The bright reference pulses are sent in the reference regions. Alice (Bob) loads a 0 (0) and 0 ($\pi/2$) phase in the former and latter reference regions, respectively. Vacuum states are sent in the recovery region following each reference region to avoid influencing signal detection at Charlie. After interference, Charlie records the counts of SNSPD1 and SNSPD2 in the former (latter) reference region, denoted as $n_1$ and $n_2$ ($m_1$ and $m_2$), respectively.} \label{fig:Cycle} \end{figure*} In each cycle, Charlie records the counts of SNSPD1 and SNSPD2 in the former and latter reference regions, denoted as $n_1$, $n_2$, $m_1$ and $m_2$, respectively. Charlie collects counts over a time $T$ around an effective event in the $X$ basis, and the total counts are denoted as $N_1$, $N_2$, $M_1$ and $M_2$, respectively: \begin{equation} \begin{split} \label{equ:E3.3} N_{1,2} = \sum{n_{1,2}},\quad M_{1,2} = \sum{m_{1,2}}. \end{split} \end{equation} The relative phase $\delta$ can be calculated from \begin{align} \begin{split} &\left\{ \begin{aligned} & \dfrac{N_1}{N_1+N_2}=\dfrac{1+\cos\delta}{2}\\ & \dfrac{M_1}{M_1+M_2}=\dfrac{1-\sin\delta}{2} \end{aligned} \right. \end{split} \end{align} There are a few practical aspects to take into consideration in the calculation. On the one hand, we need to send bright reference pulses to gain enough counts for a rapid calculation.
However, when the total count rate of the two SNSPDs exceeds about 4~MHz, it is hard to increase it further by increasing $\mu_r$ under our conditions. Besides, the bright reference pulses lead to more noise due to the nonlinear effects of the long fiber channel, which are hard to avoid both in the field test and in the lab test. On the other hand, the limited count rate of the two SNSPDs requires a longer duration $T$ for each calculation, which induces more measurement error in the relative phase calculation and leads to additional QBER in our work. As an optimal trade-off, $\mu_r$ is set to about 450 photons per pulse at a repetition rate of 200~MHz, which results in a 5~MHz total count rate of the two SNSPDs for the calculation. The duration $T$ of each calculation is 20~$\mu$s. After being filtered by four 100~GHz dense wavelength division multiplexers (DWDMs) at Charlie, the remaining noise is about $1.4 \times 10^{-8}$ per pulse.

\subsection{The polarization auto-alignment} To compensate polarization drifts in the long fiber in real time, we use a polarization auto-alignment module in each node. The schematic of the module is shown in FIG.~1 of the main text. At Alice (Bob), the polarization-mismatched beam is split at a polarizing beam splitter (PBS) and monitored by a commercial power meter. The measurement results are used as the reference for Alice (Bob) to dynamically adjust the DC voltage loaded on an EPC before the PBS. At Charlie, slightly differently from Alice (Bob), the polarization-mismatched parts, which are split at two PBSs, are combined at a 70:30 beam splitter (BS) and monitored by a superconducting nanowire single-photon detector (SNSPD3). Charlie monitors the reflectivity of the PBSs in real time during the QKD process. Charlie records the average counting rate of SNSPD1 and calculates the polarization alignment efficiency every minute. We post-select the obtained measurement results when the polarization alignment efficiency exceeds 82\%.
The probability distribution of the reflectivity of the PBSs at Charlie is shown in FIG.~5(c) of the main text. The polarization alignment efficiency of all modules is about 94\%.

\subsection{Time synchronization} Three in-phase 100~kHz electric signals are generated by two arbitrary-function generators (Tektronix, AFG3253) in Charlie, one of which is used for Charlie's system clock. The rest are used as triggers for two auxiliary synchronization lasers (Sync Lasers) to generate 1570~nm Sync pulses. Multiplexed with the CW bright beam from the master laser, the pulses from each Sync Laser are transmitted through the synchronization channel to Alice and Bob, respectively. Two EDFAs are placed in Yiyuan city (118°12'24.160"E, 36°11'12.602"N) and Zhucheng city (119°24'43.578"E, 36°2'59.305"N). Each EDFA provides a 10~dB gain for both the CW bright beam and the Sync pulses. In Alice's (Bob's) apparatus, the Sync pulses are demultiplexed by a 100~GHz dense wavelength division multiplexer (DWDM) and detected by a photoelectric detector (PD1). The output electric signals are used to regenerate Alice's (Bob's) own 312.5~MHz system clock. We need to make the arrival times of the signal pulses from the two independent systems indistinguishable. Before the QKD process, Charlie measures the arrival times of the signal pulses from Alice and Bob, respectively, with a precision of 20~ps. Based on the difference in the arrival times, Charlie adjusts the time delay between the electric signals from the arbitrary-function generators. To achieve a better signal-to-noise ratio, we adopt a time window of 340~ps. Because the length of the long optical fiber drifts, the arrival time drifts during the QKD process. Charlie monitors the ratio of the non-overlap between the signal pulse and the detection window in real time. We post-select the obtained measurement results when this ratio is less than 55\%.
The probability distribution of the ratio is shown in FIG.~5(d) of the main text. The non-overlap between the signal pulse and the detection window in the experiment is 30\%.

\section{DETAILED EXPERIMENTAL RESULTS} The detailed experimental results are listed below. Table~\ref{tab:para2} shows the experimental parameters used by Alice and Bob, and the key length calculation. Table~\ref{tab:totalsendandgain} shows the sending and receiving statistics of all the signals. \begin{table*}[htbp] \centering \caption{Experimental parameters and the key length calculation.} \begin{spacing}{2.5} \begin{tabular}{cc} \hline \hline $\mu_{A2}$ & 0.454 \\ $\mu_{B2}$ & 0.425 \\ $\mu_{A1}$ & 0.042 \\ $\mu_{B1}$ & 0.029 \\ $\epsilon_A$ & 0.307 \\ $\epsilon_B$ & 0.241 \\ total number of signal pulse pairs $N$ & 5590517734411 \\ \hline sifted key bits in the $Z$ basis before AOPP & 27921308 \\ $QBER_{ZZ}$ before AOPP & 27.84\% \\ \hline effective events in $X$ windows & 43382 \\ QBER in $X$ windows & 9.62\% \\ \hline the relative PLOB bound & 5.01E-09 \\ the absolute PLOB bound & 1.78E-08 \\ Key Rate & 4.80E-08 \\ \hline \hline \end{tabular} \label{tab:para2} \end{spacing} \end{table*} \begin{table*}[htbp] \centering \caption{Sending and receiving statistics of all the signals. Here, the notation $Z_{AO}$ and $Z_A$ ($Z_{BO}$ and $Z_B$) shown in the first column denotes that Alice (Bob) chose the $Z$ basis with the vacuum source and the source $\mu_{A2}$ ($\mu_{B2}$), respectively.
The notation $X_{AO}$ and $X_{A1}$ ($X_{BO}$ and $X_{B1}$) denotes that Alice (Bob) chose the $X$ basis with the vacuum source and the source $\mu_{A1}$ ($\mu_{B1}$), respectively.} \begin{spacing}{2.5} \begin{tabular}{p{3cm}<{\centering}|p{4cm}<{\centering}p{3cm}<{\centering}} \hline \hline Source & Total Send & Total Gain \\ \hline $Z_{AO}Z_{BO}$ & 1971056824075 & 91307 \\ $Z_{AO}X_{BO}$ & 53109918477 & 2506 \\ $Z_{AO}X_{B1}$ & 523112730863 & 622318 \\ $Z_{AO}Z_B$ & 626936631645 & 10353195 \\ $X_{AO}Z_{BO}$ & 58301113516 & 2666 \\ $X_{AO}X_{BO}$ & 1597290781 & 75 \\ $X_{AO}X_{B1}$ & 15573585117 & 18484 \\ $X_{AO}Z_B$ & 18768166680 & 309128 \\ $X_{A1}Z_{BO}$ & 569833486215 & 632696 \\ $X_{A1}X_{BO}$ & 15174262422 & 17086 \\ $X_{A1}X_{B1}$ & 151343301524 & 343572 \\ $X_{A1}Z_B$ & 181292503673 & 3234733 \\ $Z_{A}Z_{BO}$ & 872120766568 & 9794704 \\ $Z_{A}X_{BO}$ & 23560039024 & 265688 \\ $Z_{A}X_{B1}$ & 231207840587 & 2823218 \\ $Z_{A}Z_B$ & 277529273244 & 7682102 \\ \hline \hline \end{tabular} \label{tab:totalsendandgain} \end{spacing} \end{table*} \end{document}
\begin{document} \setstretch{1.2} \title[Quasinormal extensions of subnormal operator-weighted composition operators]{Quasinormal extensions of subnormal operator-weighted composition operators in $\ell^2$-spaces} \author[P.\ Budzy\'{n}ski]{Piotr Budzy\'{n}ski} \address{Katedra Zastosowa\'{n} Matematyki, Uniwersytet Rolniczy w Krakowie, ul. Balicka 253c, 30-189 Krak\'ow, Poland} \email{[email protected]} \author[P.\ Dymek]{Piotr Dymek} \address{Katedra Zastosowa\'{n} Matematyki, Uniwersytet Rolniczy w Krakowie, ul. Balicka 253c, 30-189 Krak\'ow, Poland} \email{[email protected]} \author[A.\ P\l aneta]{Artur P\l aneta} \address{Katedra Zastosowa\'{n} Matematyki, Uniwersytet Rolniczy w Krakowie, ul. Balicka 253c, 30-189 Krak\'ow, Poland} \email{[email protected]} \subjclass[2010]{Primary 47B20, 47B37; Secondary 44A60} \keywords{operator-weighted composition operator, quasinormal operator, subnormal operator} \begin{abstract} We prove the subnormality of an operator-weighted composition operator whose symbol is a transformation of a discrete measure space and whose weights are multiplication operators in $L^2$-spaces, under the assumption of the existence of a family of probability measures whose Radon-Nikodym derivatives behave regularly along the trajectories of the symbol. We construct a quasinormal extension which is an operator-weighted composition operator induced by the same symbol. We also give auxiliary results concerning the commutativity of operator-weighted composition operators with multiplication operators.
\end{abstract} \maketitle \section{Introduction} Recent years have brought rapid development in the study of unbounded composition operators in $L^2$-spaces (see \cite{b-jfsa-2012, b-d-j-s-aaa-2013, b-d-p-f-2016, b-d-p-bjma-2016, b-j-j-s-ampa-2014, b-j-j-s-jmaa-2014, b-j-j-s-jfa-2015, b-j-j-s-golf, c-h-jot-1993, j-mpcps-2003, j-s-pems-2001}) and of weighted shifts on directed trees (see \cite{b-j-j-s-jmaa-2012, b-j-j-s-jmaa-2013, b-j-j-s-jmaa-2016, j-j-s-mams-2012, j-j-s-jfa-2012, j-j-s-caot-2013, j-j-s-pams-2014,p-jmaa-2016, t-jmaa-2015}), mostly in connection with the question of their subnormality. All these operators belong to a larger class of Hilbert space operators, that of unbounded weighted composition operators in $L^2$-spaces (see \cite{b-j-j-s-wco}). In the bounded case this class is well known, and there is an extensive literature concerning the properties of its members, both in the general case and in the case of particular realizations like weighted shifts, composition operators, or multiplication operators (see, e.g., \cite{shi, s-m}). Weighted composition operators acting in spaces of complex-valued functions have a natural generalization in the context of vector-valued functions: the usual complex-valued weight function is replaced by a function whose values are operators. These are the operators we call operator-weighted composition operators; we will refer to them as {\em o-wco's}. Their particular realizations are weighted shifts acting on $\ell^2$-spaces of Hilbert space-valued functions or composition operators acting on Hilbert spaces of vector-valued functions, which have already been studied (see, e.g., \cite{h-j-pams-1996, j-gmj-2004, k-pams-1984, l-bams-1971}). Interestingly, many weighted composition operators can be represented as operator-weighted shifts (see \cite{c-j-jmaa-1991}).
In this paper we turn our attention to o-wco's that act in an $\ell^2$-space of $L^2$-valued functions and have a weight function whose values are multiplication operators. We focus on the subnormality of these operators. Our work is motivated by the very recent criterion (read: sufficient condition) for the subnormality of unbounded composition operators in $L^2$-spaces (see \cite{b-j-j-s-jfa-2015}). The criterion relies on a construction of quasinormal extensions for composition operators, which is doable if the so-called consistency condition holds. It turns out that these ideas can be used in the context of o-wco's, leading to the criterion for subnormality in Theorem \ref{kryterium}, which is the main result of the paper. The quasinormal extension for a composition operator built as in \cite{b-j-j-s-jfa-2015} is still a composition operator, which acts over a different measure space. Interestingly, in our case, for a given o-wco, we get a quasinormal extension which is also an o-wco and acts over the same measure space. The extension comes from changing the set of values of the weight function. We also investigate the subnormality of o-wco's in the bounded case and show in Theorem \ref{konieczny} that the conditions appearing in our criterion are not only sufficient but also necessary in this case. Later we provide a few illustrative examples. The paper is concluded with some auxiliary results concerning the commutativity of o-wco's and multiplication operators. \section{Preliminaries} In all that follows $\zbb$, $\rbb$ and $\cbb$ stand for the sets of integers, real numbers and complex numbers, respectively; $\nbb$, $\zbb_+$ and $\rbb_+$ denote the sets of positive integers, nonnegative integers and nonnegative real numbers, respectively. Set $\rbop = \rbb_+ \cup \{\infty\}$. If $S$ is a set and $E\subseteq S$, then $\chi_E$ is the characteristic function of $E$.
Given a $\sigma$-algebra $\varSigma$ of subsets of $S$ we denote by $\mscr(\varSigma)$ the space of all $\varSigma$-measurable $\cbb$- or $\rbop$-valued (depending on the context) functions on $S$; by writing $\mscr_+(\varSigma)$ we specify that the functions take values in $\rbop$. If $\mu$ and $\nu$ are positive measures on $\varSigma$ and $\nu$ is absolutely continuous with respect to $\mu$, then we denote this fact by writing $\nu\ll\mu$. If $Z$ is a topological space, then $\borel{Z}$ stands for the $\sigma$-algebra of all Borel subsets of $Z$. For $t\in \rbb_+$ the symbol $\delta_t$ stands for the Borel probability measure on $\rbb_+$ concentrated at $t$. If $\hh$ is a Hilbert space and $\ff$ is a subset of $\hh$, then $\lin \ff$ stands for the linear span of $\ff$. Let $\hh$ and $\kk$ be Hilbert spaces (all Hilbert spaces considered in this paper are assumed to be complex). Then ${\boldsymbol L}(\hh,\kk)$ stands for the set of all linear (possibly unbounded) operators defined in $\hh$ with values in $\kk$. If $\hh=\kk$, then we write ${\boldsymbol L}(\hh)$ instead of ${\boldsymbol L}(\hh,\hh)$. ${\boldsymbol B}(\hh)$ denotes the algebra of all bounded linear operators with domain equal to $\hh$. Let $A\in {\boldsymbol L}(\hh,\kk)$. Denote by $\dom(A)$, $\overline{A}$ and $A^*$ the domain, the closure and the adjoint of $A$ (in case they exist). A subspace $\ee$ of $\hh$ is a {\em core} of $A$ if $\ee$ is dense in $\dom(A)$ in the graph norm $\|\cdot\|_A$ of $A$; recall that $\|f\|_A^2:=\|Af\|^2+\|f\|^2$ for $f\in\dom(A)$. If $A$ and $B$ are operators in $\hh$ such that $\dom(A)\subseteq \dom(B)$ and $Af=Bf$ for every $f\in\dom(A)$, then we write $A\subseteq B$. A closed densely defined operator $N$ in $\hh$ is said to be {\em normal} if $N^*N=NN^*$.
A densely defined operator $S$ in $\hh$ is said to be {\em subnormal} if there exists a Hilbert space $\kk$ and a normal operator $N$ in $\kk$ such that $\hh \subseteq \kk$ (isometric embedding) and $Sh = Nh$ for all $h \in \dom(S)$. A closed densely defined operator $A$ in $\hh$ is {\em quasinormal} if and only if $U|A|\subseteq |A|U$, where $|A|$ is the modulus of $A$ and $A=U|A|$ is the polar decomposition of $A$. It was recently shown (cf. \cite{j-j-s1}) that: \begin{align} \label{QQQ} \begin{minipage}{85ex} A closed densely defined operator $Q$ is quasinormal if and only if $Q Q^*Q = Q^* Q Q$. \end{minipage} \end{align} It is well-known that quasinormal operators are subnormal (cf.\ \cite{bro,sto-sza2}). Throughout the paper $X$ stands for a countable set. Let $\mu$ be a discrete measure on $X$, i.e., $\mu$ is a measure on $2^X$, the power set of $X$, such that $0<\mu_x:=\mu(\{x\})<\infty$ for every $x\in X$. Let $\hhb=\{\hh_x\colon x\in X\}$ be a family of (complex) Hilbert spaces. Then $\ell^2(\hhb, \mu)$ denotes the Hilbert space of all sequences ${\boldsymbol{f}}=\{f_x\}_{x\in X}$ such that $f_x\in\hh_x$ for every $x\in X$ and $\sum_{x\in X}||f_x||_{\hh_x}^2\mu_x<\infty$. For brevity, if this leads to no confusion, we suppress the dependence of the norm $\|\cdot\|_{\hh_x}$ on $\hh_x$ and write just $\|\cdot\|$. If $\mu$ is the counting measure on $X$, then we denote $\ell^2(\hhb,\mu)$ by $\ell^2(\hhb)$. Here and later on we adhere to the notation that all the sequences, families or systems indexed by a set $X$ will be denoted by bold symbols while the members will be written with normal ones. Let $X$ be a countable set, $\mu$ be a discrete measure on $X$, $\phi$ be a self-map of $X$, $\hhb=\{\hh_x\colon x\in X\}$ be a family of Hilbert spaces and $\varLambdab=\{\varLambda_x\colon x\in X\}$ be a family of operators such that $\varLambda_x\in{\boldsymbol L}(\hh_{\phi(x)},\hh_x)$ for every $x\in X$. 
In such a case we say that $(X,\phi,\mu,\hhb,\varLambdab)$ is {\em admissible}. By saying that $(X,\phi,\hhb,\varLambdab)$ is admissible we mean that $(X,\phi,\nu,\hhb,\varLambdab)$ is admissible, where $\nu$ is the counting measure. Denote by $\dom_{\varLambdab}$ the set of all $\boldsymbol{f}\in \ell^2(\hhb,\mu)$ such that $f_y\in\bigcap_{z\in\phi^{-1}(\{y\})}\dom(\varLambda_z)$ for every $y\in\phi(X)$, i.e., $f_{\phi(x)}\in \dom(\varLambda_x)$ for every $x\in X$. Then an {\em operator-weighted composition operator} (o-wco for short) in $\ell^2(\hhb,\mu)$ induced by $\phi$ and $\varLambdab$ is the operator \begin{align*} \cfl \colon \ell^2(\hhb,\mu)\supseteq\dom\big(\cfl\big)\to \ell^2(\hhb,\mu) \end{align*} defined by \allowdisplaybreaks \begin{gather*} \dom\big(\cfl\big)=\Big\{\bsf\in\dom_{\varLambdab} \colon \quad\sum_{x\in X} \big\|\varLambda_x f_{\phi(x)}\big\|_{\hh_x}^2\mu_x<\infty \Big\},\\ \big(\cfl \bsf\big)_x= \varLambda_{x}f_{\phi(x)} ,\quad x\in X,\ \bsf \in \dom\big(\cfl\big). \end{gather*} \begin{rem}\label{wco} In case every $\hh_x$ equals $\cbb$ and $\varLambda_x$ is the operator of multiplication by a complex number $w_x$, $\cfl$ is the classical {\em weighted composition operator} $C_{\phi, w}$ induced by $\phi$ and the weight function $w\colon X\to \cbb$ given by $w(x)=w_x$, acting in $L^2(\mu):=L^2(X,2^X,\mu)$. More precisely, $C_{\phi, w}\colon L^2(\mu)\supseteq\dom(C_{\phi, w})\to L^2(\mu)$ is defined by \begin{gather*} \dom(C_{\phi, w})=\big\{f\in L^2(\mu) \colon w\cdot(f\circ\phi)\in L^2(\mu) \big\},\\ C_{\phi, w} f=w\cdot (f\circ\phi),\quad f\in \dom(C_{\phi, w}). \end{gather*} For more information on (unbounded) weighted composition operators in $L^2$-spaces we refer the reader to \cite{b-j-j-s-wco}. \end{rem} It is clear that the operator $U\colon \ell^2(\hhb,\mu)\to \ell^2(\hhb)$ given by \begin{align}\label{kawa} (U\bsf)_x=\sqrt{\mu_x}f_x,\quad x\in X,\ \bsf\in\ell^2(\hhb,\mu), \end{align} is unitary.
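To get a concrete feel for these definitions in the scalar setting of the remark above, the following sketch (assuming NumPy; the finite set $X=\{0,1,2\}$, the self-map $\phi$, the weights $w_x$ and the point masses $\mu_x$ are invented for illustration) realizes $C_{\phi,w}f=w\cdot(f\circ\phi)$ and checks that the operator $U$ of \eqref{kawa} is isometric.

```python
import numpy as np

# Finite model of a scalar weighted composition operator:
# (C_{phi,w} f)_x = w_x * f_{phi(x)} on L^2(mu), with X = {0,1,2}.
# The data phi, w, mu below are illustrative, not from the paper.
X = [0, 1, 2]
phi = {0: 1, 1: 2, 2: 2}            # a self-map of X
w = np.array([2.0, -1.0, 0.5])      # weight function w_x
mu = np.array([1.0, 4.0, 0.25])     # point masses mu_x > 0

def C(f):
    """Apply C_{phi,w}; every f lies in the domain since X is finite."""
    return np.array([w[x] * f[phi[x]] for x in X])

def norm_mu(f):
    """Norm in L^2(mu): (sum_x |f_x|^2 mu_x)^(1/2)."""
    return np.sqrt(np.sum(np.abs(f) ** 2 * mu))

f = np.array([1.0, 2.0, -3.0])
print(C(f))                          # entries 4.0, 3.0, -1.5

# The operator U of (kawa), (Uf)_x = sqrt(mu_x) f_x, maps L^2(mu)
# onto the unweighted l^2 space isometrically:
Uf = np.sqrt(mu) * f
print(np.isclose(np.linalg.norm(Uf), norm_mu(f)))  # True
```

The isometry check is exactly the computation behind the unitarity of $U$: $\|U\bsf\|^2=\sum_x\mu_x\|f_x\|^2=\|\bsf\|^2_{\ell^2(\hhb,\mu)}$.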
Using this operator one can show that the operator $\cfl$ in $\ell^2(\hhb,\mu)$ is in fact unitarily equivalent to an o-wco acting in $\ell^2(\hhb)$. \begin{prop}\label{liczaca} Let $(X,\phi,\mu,\hhb,\varLambdab)$ be admissible. Then $\cfl$ in $\ell^2(\hhb,\mu)$ is unitarily equivalent to $C_{\phi,\varLambdab^\prime}$ in $\ell^2(\hhb)$ with $\varLambdab^\prime=\Big\{\sqrt{\frac{\mu_x}{\mu_{\phi(x)}}}\varLambda_x \colon x\in X\Big\}$ via $U$ defined by \eqref{kawa}. \end{prop} The above enables us to restrict ourselves to studying o-wco's exclusively in $\ell^2(\hhb)$ in all considerations that follow. The following convention is used in the paper: if $(X,\phi,\hhb,\varLambdab)$ is admissible and $\mathcal{P}$ is a property of Hilbert space operators, then we say that $\varLambdab$ satisfies $\mathcal{P}$ if and only if $\varLambda_x$ satisfies $\mathcal{P}$ for every $x\in X$. \begin{lem}\label{closed} Let $(X,\phi,\hhb,\varLambdab)$ be admissible. Then $\cfl$ in $\ell^2(\hhb)$ is closed whenever $\varLambdab$ is closed. \end{lem} \begin{proof} Suppose that $\varLambda_x$ is closed for every $x\in X$. Take a sequence $\{\bsf^{(n)}\}_{n=1}^\infty\subseteq \dom(\cfl)$ such that $\bsf^{(n)}\to \bsf\in\ell^2(\hhb)$ and $\cfl \bsf^{(n)}\to \bsg\in\ell^2(\hhb)$ as $n\to\infty$. Clearly, for every $x\in X$, $f^{(n)}_x\to f_x$ and $\big(\cfl \bsf^{(n)}\big)_x\to g_x$ as $n\to\infty$. The latter implies that $\varLambda_{x}f^{(n)}_{\phi(x)}\to g_x$ as $n\to \infty$ for every $x\in X$. Since all $\varLambda_x$'s are closed we see that for every $x\in X$, $f_{\phi(x)}\in\dom(\varLambda_{x})$ and $\varLambda_{x} f_{\phi(x)}=g_x$. This yields $\bsf\in\dom(\cfl)$ and $\cfl \bsf=\bsg$. \end{proof} The reverse implication does not hold in general. \begin{exa} Let $X=\{-1,0,1\}$. Let $\phi\colon X\to X$ be the transformation given by $\phi(-1)=\phi(1)=0$ and $\phi(0)=0$.
Let $\hhb=\{\hh_x\colon x\in X\}$ be a set of Hilbert spaces such that $\hh_1\subseteq \hh_{-1}$, and $\varLambdab=\big\{\varLambda_x \in {\boldsymbol L}(\hh_0,\hh_x)\colon x\in X\big\}$ be a set of operators such that $\varLambda_0$ and $\varLambda_{1}$ are closed, $\varLambda_{-1}$ is not closed, and $\varLambda_1\subseteq\varLambda_{-1}$. Then $(X,\phi,\hhb,\varLambdab)$ is admissible. Let $\cfl$ be the o-wco in $\ell^2(\hhb)$ induced by $\phi$ and $\varLambdab$. Let $\{\bsf^{(n)}\}_{n=1}^\infty$ be a sequence in $\dom(\cfl)$ such that $\bsf^{(n)}\to \bsf\in\ell^2(\hhb)$ and $\cfl \bsf^{(n)}\to \bsg\in\ell^2(\hhb)$ as $n\to\infty$. As in the proof of Lemma \ref{closed} we see that $\varLambda_{x}f^{(n)}_0\to g_x$ for every $x\in X$. Since $\varLambda_1\subseteq\varLambda_{-1}$ we get $g_{-1}=g_1$. Closedness of $\varLambda_1$ implies that $f_0\in \dom(\varLambda_1)\subseteq\dom(\varLambda_{-1})$ and $\varLambda_{-1} f_0=\varLambda_1 f_0=g_1=g_{-1}$. For $x=0$ we can argue as in the proof of Lemma \ref{closed} to show that $f_0\in\dom(\varLambda_0)$ and $\varLambda_0 f_0=g_0$. These facts imply that $\bsf\in\dom(\cfl)$ and $\cfl \bsf=\bsg$, which shows that $\cfl$ is closed. \end{exa} Some properties of the operator $\cfl$ can be deduced by investigating the operators \begin{align*} \cflx\colon \hh_x\supseteq \dom(\cflx) \to \ell^2\big(\hhb^x\big),\quad x\in \phi(X), \end{align*} with $\hhb^x=\{\hh_y\colon y\in\phi^{-1}(\{x\})\}$, which are defined by \begin{gather*} \dom(\cflx) = \Big\{f\in \hh_x\colon f\in \bigcap_{y\in\phi^{-1}(\{x\})}\dom(\varLambda_{y})\text{ and } \sum_{y\in\phi^{-1}(\{x\})}\|\varLambda_{y}f\|^2<\infty\Big\}\\ \big(\cflx f\big)_y=\varLambda_{y}f,\quad y\in\phi^{-1}(\{x\}),\ f\in\dom(\cflx). \end{gather*} \begin{prop}\label{ortsumdense} Let $(X,\phi,\hhb,\varLambdab)$ be admissible. Then $\cfl$ is a densely defined operator in $\ell^2(\hhb)$ if and only if $\cflx$ is a densely defined operator for every $x\in \phi(X)$.
\end{prop} \begin{proof} Suppose that for every $x\in \phi(X)$, $\cflx$ is densely defined while $\cfl$ is not. Then there exist $\bsf\in\ell^2(\hhb)$ and $r>0$ such that $\mathbb{B}(\bsf,r)\cap \dom(\cfl)=\varnothing$, where $\mathbb{B}(\bsf,r)$ denotes the open ball in $\ell^2(\hhb)$ with center $\bsf$ and radius $r$. Since all $\cflx$'s are densely defined, we may assume that $f_x\in\dom(\cflx)$ for every $x\in \phi(X)$. From $\sum_{x\in X}\|f_x\|^2<\infty$ we deduce that there exists a finite set $Y\subseteq X$ such that $\sum_{x\in X\setminus Y}\|f_x\|^2<\frac{r^2}{2}$. Then $\bsg\in\ell^2(\hhb)$ given by $g_x=\chi_Y(x) f_x$ belongs to $\mathbb{B}(\bsf,r)$. Moreover, $\sum_{x \in X} \| \varLambda_x g_{\phi(x)}\|^2 = \sum_{y\in Y\cap\phi(X)}\sum_{x\in\phi^{-1}(\{y\})} \| \varLambda_x f_{y}\|^2 < \infty$, because $Y$ is finite and $f_y\in\dom(C_{\phi,\varLambdab,y})$ for every $y\in Y\cap\phi(X)$. This shows that $\bsg \in \dom(\cfl)$. This contradiction proves the ``if'' part. The ``only if'' part follows from the fact that $f_x \in \dom(\cflx)$ for every $x \in \phi(X)$ whenever $ \bsf \in \dom(\cfl)$. \end{proof} \begin{rem} It is worth mentioning that if $\varLambdab$ is closed, then $\cfl$ is unitarily equivalent to the orthogonal sum of the operators $\cflx$, $x\in \phi(X)$ (this can be shown by using a version of \cite[Theorem 5, p. 81]{b-s}). This leads to another proof of Proposition \ref{ortsumdense} in case $\varLambdab$ is closed. \end{rem} In the course of our study we frequently use multiplication operators. Below we set the notation and introduce the required terminology concerning these operators. Suppose $\{(\varOmega_x,\ascr_x,\mu_x)\colon x\in X\}$ and $\{(\varOmega_x,\ascr_x,\nu_x)\colon x\in X\}$ are families of $\sigma$-finite measure spaces. Let $\varGammab=\{\varGamma_x\colon x\in X\}$ with $\varGamma_x\in\mscr(\ascr_x)$ for $x\in X$. Assume that $|\varGamma_x|^2\nu_x\ll\mu_x$ for every $x\in X$. Let $\hhb=\{L^2(\mu_x)\colon x\in X\}$ and $\kkb=\{L^2(\nu_x)\colon x\in X\}$.
Then $M_{\varGammab}\colon \ell^2(\hhb)\supseteq\dom(M_{\varGammab})\to \ell^2(\kkb)$, {\em the operator of multiplication by $\varGammab$}, is given by \begin{gather*} \dom(M_{\varGammab})=\big\{\bsf\in \ell^2(\hhb)\colon \varGamma_yf_y\in \kk_y \text{ for every } y\in X \text{ and } \sum_{x\in X}\|\varGamma_x f_x\|_{\kk_x}^2<\infty\big\},\\ \big(M_{\varGammab} \bsf\big)_x = \varGamma_x f_x,\quad x\in X,\ \bsf\in \dom(M_{\varGammab}). \end{gather*} Clearly, $M_{\varGammab}$ is well-defined. Note that the above definition agrees with the definition of classical multiplication operators if $X$ is a one-point set $\{x_0\}$ and $\mu_{x_0}=\nu_{x_0}$. Below we show when a multiplication operator $M_{\varGammab}$ is closed (since our setting is not entirely classical we give a short proof). \begin{lem} \label{poswietach-1} Let $x \in X$. If $\frac{\D|\varGamma_x|^2\nu_x}{\D\mu_x}<\infty$ a.e. $[\mu_x]$, then $M_{\varGamma_x}\colon L^2(\mu_x) \to L^2(\nu_x)$ is densely defined and closed. \end{lem} \begin{proof} Set $h_x:=\frac{\D|\varGamma_x|^2\nu_x}{\D\mu_x}$. Applying the Radon-Nikodym theorem we obtain \begin{align}\label{dzmnozenie} \dom(M_{\varGamma_x})=L^2\big((1+h_x) \D\mu_x\big). \end{align} Since, by \cite[Lemma 12.1]{b-j-j-s-ampa-2014}, $L^2\big((1+h_x) \D\mu_x\big)$ is dense in $L^2(\mu_x)$, \eqref{dzmnozenie} implies that $M_{{\varGamma_x}}$ is densely defined. For every $f\in\dom(M_{\varGamma_x})$ the square of the graph norm of $f$ equals $\int_{\varOmega_x} |f|^2(1+h_x)\D\mu_x$. Thus $\dom(M_{\varGamma_x})$ is complete in the graph norm of $M_{\varGamma_x}$, which proves that $M_{\varGamma_x}$ is closed (see \cite[Theorem 5.1]{wei}). \end{proof} It is easily seen that $M_{\varGammab}$ is the orthogonal sum of the $M_{\varGamma_y}$'s, which yields the aforementioned fact. \begin{cor}\label{poswietach} Suppose $\frac{\D|\varGamma_y|^2\nu_y}{\D\mu_y}<\infty$ a.e. $[\mu_y]$ for all $y\in X$. Then $M_{\varGammab}$ is densely defined and closed.
\end{cor} It is well-known that a classical multiplication operator is selfadjoint if the multiplying function is $\overline{\rbb}$-valued. One can show that the same applies to $M_{\varGammab}$, i.e., $M_{\varGammab}$ is selfadjoint if $\varGammab$ is $\overline{\rbb}$-valued (assuming, of course, that $M_{\varGammab}$ is densely defined). \section{The criterion} In this section we provide a criterion for the subnormality of an o-wco $\cfl$ with $\varLambdab$ being a family of multiplication operators acting in a common $L^2$-space. More precisely, we will assume that \begin{align}\label{zemanek} \begin{minipage}{88ex} $X$ is a countable set, $\phi$ is a self-map of $X$, $(W, \ascr, \varrho)$ is a $\sigma$-finite measure space, $\lambdab=\{\lambda_x\}_{x\in X}\subseteq \mscr(\ascr)$, $\hhb=\{\hh_x\colon x\in X\}$, with $\hh_x=L^2(\varrho)$, and $\varLambdab=\{M_{\lambda_x}\colon x\in X\}$, where $M_{\lambda_x}\colon L^2(\varrho) \supseteq\dom(M_{\lambda_x})\to L^2(\varrho)$ is the operator of multiplication by $\lambda_x$. \end{minipage} \end{align} The criterion for subnormality we are aiming for will rely on a well-known measure-theoretic construction of a measure from a measurable family of probability measures, which has already been used in the context of subnormality (see \cite{b-j-j-s-jfa-2015,lam-1988-mmj}). Using it we will build an extension of $\cfl$. To this end we consider a measurable space $(S,\varSigma)$ and a family $\{\vartheta_x^w\colon x\in X, w\in W\}$ of probability measures on $\varSigma$ satisfying the following conditions: \begin{itemize} \item[(\texttt{A})] for all $x\in X$ and $\sigma\in\varSigma$ the map $W\ni w\mapsto \vartheta^w_x(\sigma)\in[0,1]$ is $\ascr$-measurable, \item[(\texttt{B})] for all $x\in X$ and $w\in W$, $|\lambda_x(w)|^2\vartheta_x^w\ll \vartheta_{\phi(x)}^w$.
\end{itemize} By $(\texttt{A})$, for every $x\in X$ the formula \begin{align}\label{miararoz} \widehat\varrho_x(A\times\sigma)= \int_W \int_S \chi_{A\times\sigma}(w,s)\D\vartheta^w_x(s)\D\varrho(w), \quad A\times\sigma\in\ascr\otimes\varSigma, \end{align} where $\ascr\otimes\varSigma$ denotes the $\sigma$-algebra generated by the family $\{A\times\sigma\colon A\in \ascr, \sigma\in\varSigma\}$, defines a $\sigma$-finite measure $\widehat\varrho_x$ on $\ascr\otimes\varSigma$ (cf. \cite[Theorem 2.6.2]{ash}) which satisfies \begin{align}\label{normaroz} \int_{W\times S} F(w,s)\D\widehat\varrho_x(w,s) = \int_W \int_S F(w,s)\D\vartheta^w_x(s)\D\varrho(w), \quad F\in \mscr_+(\ascr\otimes\varSigma). \end{align} We first show that the measures in the family $\{\widehat\varrho_x\colon x\in X\}$ satisfy an absolute continuity condition similar to the one satisfied by the measures in the family $\{\vartheta_x^w\colon x\in X, w\in W\}$, and that the Radon-Nikodym derivatives coming from the former family can be written in terms of the Radon-Nikodym derivatives coming from the latter one. For $x\in X$, let $\widehat\lambda_x\in\mscr(\ascr\otimes\varSigma)$ be given by $\widehat\lambda_x(w,s)=\lambda_x(w)$ for $(w,s)\in W\times S$. \begin{lem}\label{warunekC} Assume \eqref{zemanek}. Let $(S,\varSigma)$ be a measurable space and $\{\vartheta_x^w\colon x\in X, w\in W\}$ be a family of probability measures on $\varSigma$ satisfying conditions {\em (\texttt{A})} and {\em (\texttt{B})}. Then the following conditions hold$:$ \begin{itemize} \item[(i)] for all $x\in X$, $|\widehat\lambda_x|^2\widehat\varrho_x\ll\widehat\varrho_{\phi(x)}$, \item[(ii)] for every $x\in X$, $\varrho$-a.e. $w\in W$ and $\vartheta_{\phi(x)}^w$-a.e. $s\in S$, $\frac{\D|\widehat\lambda_x|^2\widehat\varrho_x}{\D\widehat\varrho_{\phi(x)}}(w,s)=\frac{\D|\lambda_x(w)|^2\vartheta_x^w}{\D\vartheta_{\phi(x)}^w}(s)$. \end{itemize} \end{lem} \begin{proof} Using $(\texttt{B})$ and \eqref{normaroz} we easily get (i).
Now, for any $x\in X$ we define $H_x, h_x\colon W\times S\to \rbop$ by $H_x(w,s)=\frac{\D|\widehat\lambda_x|^2\widehat\varrho_x}{\D\widehat\varrho_{\phi(x)}}(w,s)$ and $h_x(w,s)=\frac{\D|\lambda_x(w)|^2\vartheta_x^w}{\D\vartheta_{\phi(x)}^w}(s)$. Then, for every $x\in X$, by the Radon-Nikodym theorem and \eqref{normaroz}, we have \begin{align*} \int_A\int_\sigma H_x(w,s)\D\vartheta_{\phi(x)}^w(s)\D\varrho(w) &=\int_{A\times\sigma} H_x(w,s)\D\widehat\varrho_{\phi(x)}(w,s)\\ &=\int_A\int_\sigma |\lambda_x(w)|^2\D\vartheta_x^w(s)\D\varrho(w)\\ &=\int_A\int_\sigma h_x(w,s)\D\vartheta_{\phi(x)}^w(s)\D\varrho(w),\quad A\times\sigma\in\ascr\otimes \varSigma. \end{align*} This implies (ii), which completes the proof. \end{proof} We will frequently use the following notation \begin{align}\label{l22} \Gsf_x^w(s):=\left\{ \begin{array}{ll} \sum_{y\in\phi^{-1}(\{x\})} \frac{\D|\widehat\lambda_y|^2\widehat\varrho_y}{\D\widehat\varrho_x}(w,s), & \hbox{for $x\in\phi(X)$,} \\ 0, & \hbox{otherwise.} \end{array} \right. \end{align} Note that for every $x\in X$ the function \begin{align}\label{tlustazupa} \Gsf_x\colon W\times S\ni(w,s)\mapsto \Gsf_x^w(s)\in \rbop \end{align} is $\ascr\otimes\varSigma$-measurable and, in view of Lemma \ref{warunekC}, we have \begin{align}\label{obiad} \sum_{y\in \phi^{-1}(\{x\})}\int_{W}\int_S |\lambda_y (w)|^2 |F(w,s)|^2 \D\vartheta_y^w(s)\D\varrho(w)=\int_{W\times S} \Gsf_x(w,s)|F(w,s)|^2\D\widehat\varrho_x(w,s) \end{align} for every $F\in \mscr(\ascr \otimes \varSigma)$ (here and later on we adhere to the convention that $\sum_{\varnothing}=0$). Set \begin{align*} \Gsf=\{\Gsf_x\colon x\in X\}.
\end{align*} The following set of assumptions complements \eqref{zemanek} \begin{align}\label{majdak} \begin{minipage}{88ex} $(S,\varSigma)$ is a measurable space, $\{\vartheta_x^w\colon x\in X, w\in W\}$ is a family of probability measures on $\varSigma$ satisfying $(\texttt{A})$ and $(\texttt{B})$, $\{\widehat\varrho_x\colon x\in X\}$ is a family of measures on $\ascr\otimes\varSigma$ given by \eqref{miararoz}, $\widehat\hhb=\{L^2(\widehat\varrho_x)\colon x\in X\}$, and $\widehat\varLambdab=\{M_{\widehat\lambda_x}\colon x\in X\}$, where $M_{\widehat\lambda_x}\colon L^2(\widehat\varrho_{\phi(x)})\supseteq\dom(M_{\widehat\lambda_x})\to L^2(\widehat\varrho_x)$ is the operator of multiplication by $\widehat\lambda_x$ given by $\widehat\lambda_x(w,s)=\lambda_x(w)$ for $(w,s)\in W\times S$. \end{minipage} \end{align} (That the operator $M_{\widehat\lambda_x}$ is well-defined follows from Lemma \ref{warunekC}.) It is no surprise that our construction leads to an extension of $\cfl$. \begin{lem}\label{exten} Assume \eqref{zemanek} and \eqref{majdak}. Then $\cfl\subseteq C_{\phi, \widehat\varLambdab}$. \end{lem} \begin{proof} In view of \eqref{normaroz}, for every $x\in X$ the space $L^2(\varrho)$ can be isometrically embedded into $L^2(\widehat\varrho_x)$ via the mapping \begin{align*} Q_x\colon L^2(\varrho)\ni f\mapsto F_x\in L^2(\widehat\varrho_x) \end{align*} where \begin{align*} \text{$F_x(w,s)=f(w)$ for $\widehat\varrho_x$-a.e. $(w,s)\in W\times S$.} \end{align*} Therefore, $\ell^2(\hhb)$ can be isometrically embedded into $\ell^2(\widehat\hhb)$ via $Q\in{\boldsymbol L}\big(\ell^2(\hhb), \ell^2(\widehat\hhb)\big)$ defined by $(Q \bsf)_x:= Q_x f_x$, $x\in X$, $\bsf\in\ell^2(\hhb)$. Since all the measures $\vartheta_x^w$ are probability measures, we deduce using \eqref{normaroz} that $\dom(\cfl) \subseteq \dom(C_{\phi, \widehat\varLambdab} Q)$ and $Q \cfl \bsf= C_{\phi, \widehat\varLambdab}Q \bsf$ for every $\bsf\in \dom(\cfl)$. 
This means that $C_{\phi,\widehat\varLambdab}$ is an extension of $\cfl$. \end{proof} Next we establish a few auxiliary results concerning properties of $C_{\phi,\widehat\varLambdab}$. First, we show that it is densely defined and closed. \begin{prop}\label{invitedlenie} Assume \eqref{zemanek} and \eqref{majdak}. Suppose that for every $x\in X$, $\Gsf_x<\infty$ a.e.\ $[\widehat\varrho_{x}]$. Then $C_{\phi,\widehat\varLambdab}$ is a closed and densely defined operator in $\ell^2(\widehat\hhb)$. \end{prop} \begin{proof} Let $x\in X$. Let $H_x=\frac{\D|\widehat \lambda_x|^2\widehat\varrho_x}{\D\widehat\varrho_{\phi(x)}}$. Using \eqref{dzmnozenie} we get \begin{align*} \dom\big(\widehat\varLambda_x\big)= L^2\Big((1+H_x)\D\widehat\varrho_{\phi(x)}\Big). \end{align*} This yields the equality \begin{align*} \dom(C_{\phi,\widehat\varLambdab, x})=L^2\big((1+\Gsf_x)\D\widehat\varrho_x\big),\quad x\in X. \end{align*} By \cite[Lemma 12.1]{b-j-j-s-ampa-2014}, the space $L^2\big((1+\Gsf_x)\D\widehat\varrho_x\big)$ is dense in $L^2(\widehat\varrho_x)$ and consequently $C_{\phi,\widehat\varLambdab, x}$ is densely defined. Since $x\in X$ can be chosen arbitrarily, by applying Proposition \ref{ortsumdense}, we conclude that $C_{\phi, \widehat\varLambdab}$ is densely defined. Now, using Lemma \ref{closed} and Lemma \ref{poswietach-1}, we deduce that $C_{\phi, \widehat\varLambdab}$ is closed. \end{proof} The claim of the following auxiliary lemma is a direct consequence of the $\sigma$-finiteness of $\widehat\varrho_x$. For the reader's convenience we provide its proof. \begin{lem} \label{swieta} Assume \eqref{zemanek} and \eqref{majdak}. Let $x \in X$. Suppose that $\Gsf_x<\infty$ a.e.\ $[\widehat\varrho_x]$. Then there exists an $\ascr\otimes\varSigma$-measurable function $\alpha_x \colon W \times S \to (0,+\infty)$ such that \begin{align}\label{alfa} \int_{W\times S} \Big(1+\Gsf_x(w,s) +(\Gsf_x(w,s))^2\Big) (\alpha_x(w,s))^2 \D\widehat\varrho_x(w,s) < \infty.
\end{align} \end{lem} \begin{proof} Since the measure $\varrho$ is $\sigma$-finite, there exists an $\ascr$-measurable function $f \colon W \to (0,+\infty)$ such that $\int_W f(w) \D \varrho(w)<\infty$. Now, for any given $w \in W$ and $s\in S$ we define \begin{align*} \alpha_x(w,s) = \sqrt{\frac{f(w)}{1+\Gsf_x(w,s)+(\Gsf_x(w,s))^2}}. \end{align*} Clearly, the function $\alpha_x$ is $\ascr\otimes\varSigma$-measurable. Moreover, by \eqref{normaroz}, we have \begin{multline*} \int_{W\times S} \Big(1+\Gsf_x(w,s)+(\Gsf_x(w,s))^2\Big) (\alpha_x(w,s))^2 \D\widehat\varrho_x(w,s)\\ =\int_W \int_S f(w) \D\vartheta_x^w(s)\D\varrho(w) =\int_W f(w)\D\varrho(w) < \infty. \end{multline*} This completes the proof. \end{proof} The proof of the criterion for subnormality of $\cfl$, which we give further below, relies heavily on the fact that $C_{\phi,\widehat\varLambdab}^*C_{\phi,\widehat\varLambdab}$ is a multiplication operator. \begin{prop}\label{dziekan} Assume \eqref{zemanek} and \eqref{majdak}. Suppose that for every $x\in X$, $\Gsf_x<\infty$ a.e.\ $[\widehat\varrho_{x}]$. Then $C_{\phi,\widehat\varLambdab}^* C_{\phi,\widehat\varLambdab} = M_{\Gsf}$. \end{prop} \begin{proof} Let $x_0 \in X$, $A \in \ascr$, and $\sigma \in \varSigma$. Let $\bsE=\{E_x\colon x\in X\}$ be a family of functions $E_x\colon W\times S\to \rbb_+$ given by $E_x(w,s) = \chi_{A \times \sigma}(w,s) \alpha_{x_0}(w,s) \delta_{x, x_0}$, where the function $\alpha_{x_0}$ satisfies \eqref{alfa} and $\delta_{x,x_0}$ is the Kronecker delta. It follows from \eqref{alfa} that $\bsE\in\ell^2(\widehat\hhb)$.
Moreover, by \eqref{normaroz}, \eqref{obiad}, and \eqref{alfa}, we get \begin{align*} \sum_{x \in X} \int_{W\times S} \Big|\widehat\lambda_x(w,s) &E_{\phi(x)} (w, s)\Big|^2\D\widehat\varrho_x(w,s)\\ &=\sum_{x \in \phi^{-1}(\{x_0\})} \int_A \int_{\sigma} |\lambda_x(w)|^2 (\alpha_{x_0}(w,s))^2\D\vartheta_x^w(s)\D\varrho(w)\\ &\leqslant\int_{W\times S}\Gsf_{x_0}(w, s)(\alpha_{x_0}(w,s))^2\D\widehat\varrho_{x_0}(w,s) <\infty, \end{align*} which proves that $\bsE \in \dom(C_{\phi,\widehat\varLambdab})$. Take now $\bsF\in\dom(C_{\phi, \widehat\varLambdab}^*C_{\phi, \widehat\varLambdab})$. Then \begin{align*} \is{C_{\phi, \widehat\varLambdab}^*C_{\phi, \widehat\varLambdab}\bsF}{\bsE}=\int_{A\times \sigma} \big(C_{\phi, \widehat\varLambdab}^*C_{\phi, \widehat\varLambdab}\bsF\big)_{x_0}(w,s)\,\alpha_{x_0}(w,s)\D\widehat\varrho_{x_0}(w,s). \end{align*} On the other hand, since $\bsE \in \dom(C_{\phi, \widehat\varLambdab})$, we have \begin{align*} \is{C_{\phi, \widehat\varLambdab}^*C_{\phi, \widehat\varLambdab}\bsF}{\bsE} &=\is{C_{\phi, \widehat\varLambdab}\bsF}{C_{\phi, \widehat\varLambdab}\bsE}\\ &= \sum_{x\in X}\int_{W\times S} |\widehat \lambda_x(w,s)|^2 F_{\phi(x)}(w,s)\overline{E_{\phi(x)}(w,s)}\D\widehat\varrho_x(w,s)\\ &= \sum_{x\in \phi^{-1}(\{x_0\})}\int_{A\times \sigma} |\lambda_x(w)|^2 F_{x_0}(w,s) \alpha_{x_0}(w,s)\D\widehat\varrho_x(w,s)\\ &= \sum_{x\in \phi^{-1}(\{x_0\})}\int_{A\times \sigma} F_{x_0}(w,s) \alpha_{x_0}(w,s)\frac{\D |\widehat \lambda_x|^2\widehat \varrho_x}{\D \widehat \varrho_{x_0}} (w,s)\D\widehat\varrho_{x_0}(w,s)\\ &\overset{(\dag)}{=} \int_{A\times \sigma} \sum_{x\in \phi^{-1}(\{x_0\})} F_{x_0}(w,s) \alpha_{x_0}(w,s)\frac{\D|\widehat \lambda_x|^2 \widehat \varrho_x}{\D \widehat \varrho_{x_0}}(w,s) \D\widehat\varrho_{x_0}(w,s)\\ &= \int_{A\times \sigma} F_{x_0}(w,s)\Gsf_{x_0}(w,s)\alpha_{x_0}(w,s)\D\widehat\varrho_{x_0}(w,s), \end{align*} where in $(\dag)$ we used the fact that the function $(w,s) \mapsto \alpha_{x_0} (w,s) \Gsf_{x_0}(w,s) 
\in L^2(\widehat \varrho_{x_0})$, which, in view of the Cauchy-Schwarz inequality and the fact that $F_{x_0}\in L^2(\widehat\varrho_{x_0})$, means that the function $$(w,s)\mapsto \sum_{x\in \phi^{-1}(\{x_0\})} \big|F_{x_0}(w,s)\big| \alpha_{x_0}(w,s)\frac{\D |\widehat \lambda_x|^2\widehat \varrho_x}{\D \widehat \varrho_{x_0}}(w,s) $$ belongs to $L^1(\widehat\varrho_{x_0})$. Since $A \in \ascr$ and $\sigma \in \varSigma$ can be arbitrarily chosen, we get \begin{align*} \big(C_{\phi, \widehat\varLambdab}^*C_{\phi, \widehat\varLambdab}\bsF\big)_{x_0}\alpha_{x_0}=F_{x_0}\Gsf_{x_0}\alpha_{x_0} \quad \text{a.e.\ $[\widehat\varrho_{x_0}]$,} \end{align*} which, since $\alpha_{x_0}$ is positive, implies that \begin{align*} \big(C_{\phi, \widehat\varLambdab}^*C_{\phi, \widehat\varLambdab}\bsF\big)_{x_0}=F_{x_0}\Gsf_{x_0} \quad \text{a.e.\ $[\widehat\varrho_{x_0}]$}. \end{align*} Thus $\bsF \in \dom(M_{\Gsf})$ and $C_{\phi, \widehat\varLambdab}^*C_{\phi, \widehat\varLambdab}\bsF = M_{\Gsf} \bsF$. In view of Proposition \ref{invitedlenie}, the operator $C_{\phi, \widehat\varLambdab}^*C_{\phi, \widehat\varLambdab}$ is selfadjoint (cf. \cite[Theorem 5.39]{wei}). Since $M_{\Gsf}$ is selfadjoint as well, the operators $C_{\phi, \widehat\varLambdab}^*C_{\phi, \widehat\varLambdab}$ and $M_\Gsf$ coincide. This completes the proof. \end{proof} After all the above preparations we are now in a position to prove the criterion for the subnormality of $\cfl$. \begin{thm}\label{kryterium} Assume \eqref{zemanek} and \eqref{majdak}. Suppose that for every $x\in X$, $\Gsf_x<\infty$ a.e.\ $[\widehat\varrho_{x}]$ and \begin{align*} \widehat\lambda_x\Gsf_{\phi(x)}=\widehat\lambda_x \Gsf_x\quad \text{a.e. $[\widehat\varrho_{x}]$.} \end{align*} Then $C_{\phi, \widehat \varLambdab}$ is quasinormal. Moreover, $\cfl$ is subnormal and $C_{\phi, \widehat \varLambdab}$ is its quasinormal extension. \end{thm} \begin{proof} Let $\bsF\in\ell^2(\widehat\hhb)$.
By definition $\bsF\in\dom(C_{\phi,\widehat\varLambdab} M_{\Gsf})$ if and only if \begin{align}\label{ects1} \sum_{x\in X} \int_{W\times S} |\Gsf_x(w,s)F_x(w,s)|^2\D \widehat \varrho_x(w,s)<\infty \end{align} and \begin{align}\label{znowukasza} \sum_{x\in X} \int_{W\times S} |\widehat\lambda_x (w,s)\Gsf_{\phi(x)}(w,s)F_{\phi(x)}(w,s)|^2 \D \widehat\varrho_x(w,s)<\infty. \end{align} On the other hand, $\bsF\in\dom(M_{\Gsf}C_{\phi,\widehat\varLambdab})$ if and only if \begin{align}\label{student} \sum_{x\in X} \int_{W\times S} |\widehat\lambda_x(w,s)F_{\phi(x)}(w,s)|^2 \D \widehat\varrho_x(w,s)<\infty \end{align} and \begin{align}\label{ects2} \sum_{x\in X} \int_{W\times S} |\widehat\lambda_x (w,s)\Gsf_{x}(w,s)F_{\phi(x)}(w,s)|^2\D\widehat\varrho_x(w,s)<\infty. \end{align} Using the decomposition $X=\bigsqcup_{x\in X} \phi^{-1}(\{x\})$ and applying \eqref{normaroz} and \eqref{obiad} we see that \eqref{znowukasza} is equivalent to \begin{align}\label{ects3} \sum_{x\in X} \int_{W\times S} |\Gsf_x(w,s)|^3 |F_x(w,s)|^2\D\widehat\varrho_x(w,s)<\infty. \end{align} The same argument implies that \eqref{student} is equivalent to \begin{align}\label{ects4} \sum_{x\in X} \int_{W\times S} |\Gsf_x(w,s)| |F_x(w,s)|^2\D\widehat\varrho_x(w,s)<\infty. \end{align} Moreover, by the assumption, for every $x\in X$ the integrands in \eqref{znowukasza} and \eqref{ects2} coincide a.e.\ $[\widehat\varrho_x]$, so these two conditions are equivalent as well. Keeping in mind that $\bsF\in\ell^2(\widehat\hhb)$ and using \eqref{ects1}, \eqref{ects2}, \eqref{ects3}, and \eqref{ects4}, we deduce that $\dom(C_{\phi,\widehat\varLambdab} M_{\Gsf})=\dom(M_{\Gsf} C_{\phi,\widehat\varLambdab})$. The assumption also yields $M_{\Gsf} C_{\phi,\widehat\varLambdab} \bsF=C_{\phi,\widehat\varLambdab} M_{\Gsf}\bsF$ for every $\bsF\in \dom(M_{\Gsf} C_{\phi,\widehat\varLambdab})$. Therefore, $C_{\phi,\widehat\varLambdab}$ is quasinormal by \eqref{QQQ} and Proposition \ref{dziekan}. The ``moreover'' part of the claim follows immediately from Lemma \ref{exten} and the fact that operators having quasinormal extensions are subnormal (see \cite[Theorem 2]{sto-sza2}).
\end{proof} \section{The bounded case} In this section we investigate the subnormality of $\cfl$ under the assumption that $\cfl$ is bounded. We use a well-known relation between subnormality and Stieltjes moment sequences. We begin with more notation. Suppose \eqref{zemanek} holds. Let $n\in\nbb$. Then $\varLambdab^{[n]}:=\{\varLambda_x^{[n]}\colon x\in X\}$, where $\varLambda_x^{[n]}:=M_{\lambda_x^{[n]}}\in {\boldsymbol L}(L^2(\varrho))$ with $\lambda_x^{[n]}:=\lambda_x\cdots \lambda_{\phi^{n-1}(x)}$, $x\in X$. We also define the functions \begin{align*} \hsf_x^{[n]} = \sum_{y \in \phi^{-n}(\{x\})} \Big|\lambda_y^{[n]}\Big|^2,\quad x\in X. \end{align*} We set $\lambda_x^{[0]}\equiv 1$, so that $\varLambda_x^{[0]}$ is the identity operator, and $\hsf_x^{[0]}\equiv 1$. It is an easy observation that the $n$th power of $\cfl$ is the o-wco with the symbol $\phi^n$ and the weight $\varLambdab^{[n]}$. We state this fact below for future reference. \begin{lem}\label{potega} Suppose \eqref{zemanek} holds. Let $n \in \nbb$. If $\cfl\in{\boldsymbol B}(\ell^2(\hhb))$, then $\cfl^n = C_{\phi^n,\varLambdab^{[n]}}$. \end{lem} The well-known characterization of subnormality for bounded operators due to Lambert (see \cite{l-jlms-1976}) states that an operator $A\in{\boldsymbol B}(\hh)$ is subnormal if and only if $\{\|A^n f\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence for every $f\in\hh$. Recall that a sequence $\{a_n\}_{n=0}^\infty\subseteq\rbb_+$ is called a Stieltjes moment sequence if there exists a positive Borel measure $\gamma$ on $\rbb_+$ such that $a_n=\int_{\rbb_+} t^n\D\gamma(t)$ for every $n\in\zbb_+$. We call $\gamma$ a representing measure of $\{a_n\}_{n=0}^\infty$. If there exists a unique representing measure, then we say that $\{a_n\}_{n=0}^\infty$ is determinate.
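These notions can be probed numerically (an informal sketch assuming NumPy; the sequences are chosen for illustration). Positive semidefiniteness of the two truncated Hankel matrices $(a_{n+m})_{n,m}$ and $(a_{n+m+1})_{n,m}$ is the classical test for the Stieltjes moment problem: the sequence $a_n=2^n$, with representing measure $\delta_2$, passes both, while $a_n=\frac{1}{2}(1+(-1)^n)$, which comes from the measure $\frac{1}{2}(\delta_{-1}+\delta_1)$ charging a negative point, fails the shifted test.

```python
import numpy as np

def hankel_psd(a, shift=0, tol=1e-9):
    """Positive semidefiniteness of the truncated Hankel matrix
    (a[n + m + shift]) for n, m = 0, ..., N-1, with N as large as
    the available entries of the sequence a allow."""
    N = (len(a) - shift + 1) // 2
    H = np.array([[a[n + m + shift] for m in range(N)] for n in range(N)])
    return bool(np.all(np.linalg.eigvalsh(H) >= -tol))

# Moments of the Dirac measure delta_2: a Stieltjes sequence.
a = [2.0 ** n for n in range(9)]
print(hankel_psd(a, shift=0), hankel_psd(a, shift=1))   # True True

# Moments of (delta_{-1} + delta_1)/2: a Hamburger moment sequence
# whose shifted Hankel matrix is indefinite, so not Stieltjes.
b = [1.0, 0.0, 1.0, 0.0, 1.0]
print(hankel_psd(b, shift=0), hankel_psd(b, shift=1))   # True False
```

This is only a truncated (finitely many moments) check, of course; the moment problem itself concerns all Hankel matrices simultaneously.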
It is well-known that (see \cite{b-c-r, fug}; see also \cite[Theorem 3]{sza-1981-am}): \begin{align}\label{sti} \begin{minipage}{85ex} A sequence $\{a_n\}_{n=0}^\infty\subseteq\rbb_+$ is a Stieltjes moment sequence if and only if $$ \sum_{n,m=0}^\infty a_{n+m} \alpha(n)\overline{\alpha(m)}\Ge 0\text{ and } \sum_{n,m=0}^\infty a_{n+m+1} \alpha(n)\overline{\alpha(m)}\Ge 0, $$ for every $\alpha\in\cbb^{(\zbb_+)}$, where $\cbb^{(\zbb_+)}$ denotes the set of all functions $\alpha\colon \zbb_+\to\cbb$ such that $\{k\in\zbb_+\colon \alpha(k)\neq 0\}$ is finite. Moreover, if $\{a_n\}_{n=0}^\infty\subseteq\rbb_+$ is a Stieltjes moment sequence and there exists $r\in[0,\infty)$ such that $$a_{2n+2}\leqslant r^2a_{2n},\quad n\in\zbb_+,$$ then $\{a_n\}_{n=0}^\infty$ is determinate and its representing measure is supported by $[0,r]$. \end{minipage} \end{align} \begin{thm}\label{dyzur} Suppose \eqref{zemanek} holds. Assume that $\cfl\in{\boldsymbol B}(\ell^2(\hhb))$ is subnormal. Then the following conditions hold$:$ \begin{enumerate} \item[(i)] for every $x \in X$ and $\varrho$-a.e. $w \in W$ the sequence $\big\{\hsf_x^{[n]}(w)\big\}_{n=0}^{\infty}$ is a Stieltjes moment sequence having a unique representing measure $\theta_x^w$, \item[(ii)] for every $x\in X$ and $\varrho$-a.e. $w\in W$, $\theta_x^w(\rbb_+)=1$ and $\theta_x^w\big(\rbb_+\setminus [0,\|\cfl\|^2]\big)=0$, \item[(iii)] for every $x\in X$ and $\varrho$-a.e.
$w\in W$ we have \begin{align} \int_\sigma t\D\theta_x^w=\sum_{y\in \phi^{-1}(\{x\})} |\lambda_y(w)|^2 \theta_y^w (\sigma),\quad \sigma\in \borel{\rbb_+}. \label{zgodne} \end{align} \end{enumerate} \end{thm} \begin{proof} First note that by Lemma \ref{potega} we have \begin{align} \| \cfl^n \bsf\|^2 &= \sum_{x \in X} \int_W |\lambda_x^{[n]}(w)|^2 |f_{\phi^n(x)}(w)|^2 \D \varrho(w)\notag \\ &= \sum_{x \in X} \sum_{y \in \phi^{-n}(\{x\})} \int_W |\lambda_y^{[n]}(w)|^2 |f_x(w)|^2 \D \varrho(w)\notag\\ &= \sum_{x \in X} \int_W \hsf_x^{[n]}(w) |f_x(w)|^2 \D \varrho(w),\quad n\in\zbb_+,\ \bsf\in\ell^2(\hhb).\label{norma} \end{align} Fix $x_0 \in X$ and consider $\bsg\in\ell^2(\hhb)$ such that $g_x = \delta_{x,x_0} g_x$, $x\in X$. Then, by the Lambert theorem, $\big\{ \| \cfl^n \bsg\|^2\big\}_{n=0}^\infty$ is a Stieltjes moment sequence. Moreover, by \eqref{norma}, we get \begin{align*} \| \cfl^n \bsg\|^2 = \int_W \hsf_{x_0}^{[n]}(w) |g_{x_0}(w)|^2 \D \varrho(w), \quad n\in\zbb_+. \end{align*} Now, by \eqref{sti}, we have \begin{align*} \int_W \bigg(\sum_{m,n \in \zbb_+} \hsf_{x_0}^{[n+m]}(w) &\alpha(n) \overline{\alpha(m)} \bigg) |g_{x_0}(w)|^2\D \varrho(w) \\ &= \sum_{m,n \in \zbb_+} \bigg(\int_W \hsf_{x_0}^{[n+m]}(w) |g_{x_0}(w)|^2\D \varrho(w)\bigg) \alpha(n) \overline{\alpha(m)} \\ &= \sum_{m,n \in \zbb_+} \| \cfl^{n+m} \bsg\|^2 \alpha(n) \overline{\alpha(m)}\\ &\Ge 0,\quad \alpha\in \cbb^{(\zbb_+)}. \end{align*} In a similar fashion we show that \begin{align*} \int_W \bigg(\sum_{m,n \in \zbb_+} \hsf_{x_0}^{[n+m+1]}(w) \alpha(n) \overline{\alpha(m)} \bigg) |g_{x_0}(w)|^2\D \varrho(w) \Ge 0,\quad \alpha\in \cbb^{(\zbb_+)}. \end{align*} Since $g_{x_0}\in L^2(\varrho)$ may be arbitrary, combining the above inequalities with \eqref{sti}, we deduce that for $\varrho$-a.e. $w\in W$, $\big\{\hsf_{x_0}^{[n]}(w)\big\}_{n=0}^\infty$ is a Stieltjes moment sequence.
Now, for any fixed $x_0\in X$ we observe that by \eqref{norma} for every $f\in L^2(\varrho)$ we have \begin{align*} \int_W\hsf^{[2(n+1)]}_{x_0}(w)|f(w)|^2\D\varrho (w)&= \|\cfl^{2n+2}\bsf\|^2\\ &\leqslant \|\cfl\|^4\|\cfl^{2n}\bsf\|^2\\ &=\|\cfl\|^4\int_W\hsf^{[2n]}_{x_0}(w)|f(w)|^2\D\varrho(w),\quad n\in\zbb_+, \end{align*} where $\bsf\in\ell^2(\hhb)$ is given by $f_x=\delta_{x,x_0}f$. This, according to \eqref{sti}, yields that for $\varrho$-a.e. $w\in W$, $\big\{\hsf^{[n]}_{x_0}(w)\big\}_{n=0}^\infty$ has a unique representing measure $\theta_{x_0}^w$ supported by the interval $[0,\|\cfl\|^2]$. Clearly, for $\varrho$-a.e. $w\in W$, $\theta_{x_0}^w(\rbb_+) = \hsf_{x_0}^{[0]}(w)=1$. Now, suppose that $x\in X$. Then for $\varrho$-a.e. $w\in W$ we get \begin{align*} \int_0^{\infty} t^n \D \theta_{x}^w(t) &= \hsf^{[n]}_{x}(w) = \sum_{y \in \phi^{-n}(\{x\})} |\lambda_y^{[n]}(w)|^2 \\ &=\sum_{y \in \phi^{-(n-1)}(\phi^{-1}(\{x\}))} |\lambda_y^{[n]}(w)|^2 \\ &= \sum_{z \in \phi^{-1}(\{x\}) } \sum_{y \in \phi^{-(n-1)}(\{z\})} |\lambda_y^{[n-1]}(w)|^2 |\lambda_{z}(w)|^2 \\ &= \sum_{z \in \phi^{-1}(\{x\}) } \hsf^{[n-1]}_{z}(w) |\lambda_z(w)|^2\\ &= \sum_{z \in \phi^{-1}(\{x\})} |\lambda_z(w)|^2 \int_0^{\infty} t^{n-1} \D\theta_z^w(t),\quad n\in\nbb. \end{align*} Hence the measures $\borel{\rbb_+}\ni\sigma\mapsto\int_\sigma t\D\theta_x^w(t)$ and $\sum_{z \in \phi^{-1}(\{x\})} |\lambda_z(w)|^2\theta_z^w$ have the same moments; since both are supported by $[0,\|\cfl\|^2]$, they coincide, which means that \eqref{zgodne} is satisfied. This completes the proof. \end{proof} The representing measures $\theta_x^w$ existing for a subnormal bounded $\cfl$ by the above theorem turn out to be the building blocks for the family $\{\vartheta_x^w\colon x\in X, w\in W\}$. \begin{thm}\label{konieczny} Suppose \eqref{zemanek} holds. Assume that $\cfl\in{\boldsymbol B}(\ell^2(\hhb))$ is subnormal.
Then there exists a family $\{\vartheta_x^w\colon x\in X, w\in W\}$ of Borel probability measures on $\rbb_+$ such that the following conditions hold: \begin{enumerate} \item[(i)] for all $x\in X$ and $\sigma\in\borel{\rbb_+}$ the map $W\ni w\mapsto \vartheta^w_x(\sigma)\in[0,1]$ is $\ascr$-measurable, \item[(ii)] for all $x\in X$ and $w\in W$ we have $|\lambda_x(w)|^2\vartheta_x^w\ll \vartheta_{\phi(x)}^w$, \item[(iii)] for every $x\in X$, \begin{align*} \Gsf_x =\Gsf_{\phi(x)}\quad \text{a.e. $[\widehat\varrho_x]$}, \end{align*} where $\Gsf_x$ is defined by \eqref{tlustazupa} $($see also \eqref{l22} and Lemma \ref{warunekC}$)$. \end{enumerate} \end{thm} \begin{proof} According to Theorem \ref{dyzur} there exist a set $W_0\in\ascr$ and a family $\{\theta_x^w\colon w \in W_0\}$ of Borel probability measures on $\rbb_+$ such that $\varrho(W\setminus W_0)=0$ and for all $x\in X$ and $w\in W_0$ the condition \eqref{zgodne} holds. Define a family $\{\vartheta_x^w\colon x \in X,\, w\in W\}$ of Borel probability measures by \begin{align*} \vartheta_x^w = \left\{ \begin{array}{cl} \theta_x^w & \text{ for }x\in X, w \in W_0,\\ \delta_0 & \text{ for }x\in X, w \in W\setminus W_0. \end{array}\right. \end{align*} In view of (i) of Theorem \ref{dyzur}, the mapping $W\ni w\mapsto \int_{\rbb_+}t^n\D \vartheta_x^w \in\rbb_+$ is $\ascr$-measurable for every $x\in X$, hence applying \cite[Lemma 11]{b-j-j-s-jfa-2015} we get (i). In turn (iii) of Theorem \ref{dyzur} yields (ii). Now, by \eqref{zgodne} and Lemma \ref{warunekC}, for every $x\in X$, $\varrho$-a.e. $w\in W$ and $\vartheta_x^w$-a.e. $t\in \rbb_+$ we have $\Gsf_x(w,t)=t$, which gives (iii). \end{proof} \section{Examples and corollaries} The operator $M_z$ of multiplication by the independent variable $z$ plays a special role among all multiplication operators.
It is easily seen that the weighted bilateral shift operator acting in $\bigoplus_{n=-\infty}^\infty L^2(\varrho)$, the orthogonal sum of $\aleph_0$-copies of $L^2(\varrho)$, with weights being equal to $M_z$ is normal (and thus subnormal). Below we show a more general result stating that for any given $k\in\nbb$ the o-wco $\cfl$ induced by a transformation $\phi$ whose graph is a $k$-ary tree (see \cite{b-j-j-s-golf} for terminology) and by $\varLambdab$ consisting of $\varLambda_x=M_z$ acting in $L^2(\varrho)$ is subnormal. \begin{exa} Fix $k\in\nbb$. Let $X=\{1,2,\ldots, k\}^\nbb$ and $\phi\colon X\to X$ be given by $\phi \big(\{\varepsilon_i\}_{i=1}^\infty\big)=\{\varepsilon_{i+1}\}_{i=1}^\infty$. Let $W$ be a compact subset of $\cbb$ and $\varrho$ be a Borel measure on $W$. Finally, let $\hhb=\{\hh_x\colon x\in X\}$ with $\hh_x=L^2(\varrho)$ and let $\varLambdab=\{\varLambda_x\colon x\in X\}$ with $\varLambda_x=M_z$ acting in $L^2(\varrho)$. Then we have $\hsf_x^{[n]}(w)=k^n|w|^{2n}$ for all $x\in X$, $w\in W$, and $n\in\zbb_+$. This means that $\big\{\hsf_x^{[n]}(w)\big\}_{n=0}^\infty$ is a Stieltjes moment sequence with a unique representing measure $\vartheta_x^w:=\delta_{k|w|^2}$ for every $x\in X$ and $w\in W$. Therefore, conditions \eqref{zemanek} and \eqref{majdak} are satisfied, and $\Gsf_x=\Gsf_{\phi(x)}=k|w|^2$ for every $x\in X$ and $w\in W$. According to Theorem \ref{kryterium}, $\cfl$ is subnormal in $\ell^2(\hhb)$. \end{exa} A classical weighted unilateral shift in $\ell^2(\zbb_+)$ is subnormal whenever the weights satisfy the well-known Berger-Gellar-Wallen criterion (see \cite{g-w-1970-pja, h-1970-bams}). This can be generalized in the following way. \begin{exa} Let $X=\zbb_+$ and $\phi\colon X\to X$ be given by \begin{align*} \phi(n)=\left\{ \begin{array}{cl} 0 & \text{if } n=0,\\ n-1 & \text{if } n\in\nbb. \end{array} \right. \end{align*} Let $(W,\ascr,\varrho)$ be a $\sigma$-finite measure space and let $\hhb=\{\hh_n\colon n\in \zbb_+\}$ with $\hh_n=L^2(\varrho)$.
Suppose $\{\lambda_n\}_{n=1}^\infty\subseteq\mscr(\ascr)$ is a family of functions such that for every $w\in W$, the sequence \begin{align*} {\boldsymbol s}^w=(1,|\lambda_1(w)|^2, |\lambda_1(w)\lambda_2(w)|^2, |\lambda_1(w)\lambda_2(w)\lambda_3(w)|^2,\ldots) \end{align*} is a Stieltjes moment sequence. Set $\lambda_0\equiv 0$. Let $\varLambdab=\{M_{\lambda_n}\colon n\in\zbb_+\}$. Then the o-wco $\cfl$ in $\ell^2(\hhb)$ is subnormal. Indeed, fix $w\in W$. Since ${\boldsymbol s}^w$ is a Stieltjes moment sequence, either $\lambda_k(w)=0$ for every $k\in\nbb$ or $\lambda_k(w)\neq0$ for every $k\in\nbb$. Let $\theta^w$ be a representing measure of ${\boldsymbol s}^w$. If $\lambda_k(w)=0$ for every $k\in\nbb$, then we set $\vartheta_l^w=\delta_0$ for $l\in\zbb_+$. Otherwise, we define a family of probability measures $\{\vartheta_l^w\colon l\in\zbb_+\}$ by \begin{align*} \vartheta_l^w(\sigma)= \left\{ \begin{array}{cl} \theta^w (\sigma) & \text{if } l=0,\\ \frac{1}{|\lambda_l(w)|^2}\int_\sigma t\D\vartheta_{l-1}^w(t) & \text{if } l\in\nbb, \end{array} \right. \quad \sigma\in\borel{\rbb_+}. \end{align*} In both cases we see that \begin{align}\label{procesja} \int_\sigma t\D\vartheta_l^w(t)=|\lambda_{l+1}(w)|^2 \vartheta_{l+1}^w(\sigma),\quad \sigma\in\borel{\rbb_+},\ l\in\zbb_+. \end{align} As a consequence, the family $\{\vartheta_k^w\colon w\in W, k\in\zbb_+\}$ satisfies condition $(\mathtt{B})$. Since the mapping $w\mapsto \int_{\rbb_+}t^n\D\theta^w(t)=|\lambda_1(w)\cdots\lambda_{n}(w)|^2$ is $\ascr$-measurable for every $n\in\nbb$, by \cite[Lemma 11]{b-j-j-s-jfa-2015}, the mapping $w\mapsto\vartheta_0^w(\sigma)$ is $\ascr$-measurable for every $\sigma\in\borel{\rbb_+}$. This implies that $\{\vartheta_k^w\colon w\in W, k\in\zbb_+\}$ satisfies condition $(\mathtt{A})$. In view of \eqref{procesja}, $\Gsf_l(w,t)=t$ for all $(w,t)\in W\times \rbb_+$ and $l\in\zbb_+$. Therefore, by Theorem \ref{kryterium}, $\cfl$ is subnormal.
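For a concrete numerical illustration (our own; the weights below are a hypothetical choice, not taken from the results above), let $\lambda_n(w)=\sqrt{n}$ for all $w$. Then ${\boldsymbol s}^w=(0!,1!,2!,\ldots)$, the moment sequence of the measure $e^{-t}\,\D t$ on $\rbb_+$, and the recursion above produces the Gamma densities $\vartheta_l^w=t^l e^{-t}/l!\,\D t$. The Stieltjes property of $\{n!\}_{n=0}^\infty$ amounts to positive semidefiniteness of the two associated Hankel matrices, which can be checked numerically:

```python
import numpy as np
from math import factorial

# Hypothetical constant weights lambda_n(w) = sqrt(n): the products
# s_n = |lambda_1(w) ... lambda_n(w)|^2 = n! are the moments of e^{-t} dt on R_+.
s = [factorial(n) for n in range(12)]

def hankel(seq, shift, size):
    # Hankel matrix (seq[i + j + shift])_{i,j=0}^{size-1}
    return np.array([[seq[i + j + shift] for j in range(size)]
                     for i in range(size)], dtype=float)

# Stieltjes condition: both (s_{i+j}) and (s_{i+j+1}) are positive semidefinite
for shift in (0, 1):
    eigs = np.linalg.eigvalsh(hankel(s, shift, 5))
    assert eigs.min() > -1e-6, (shift, eigs)
```

The same check applied to an arbitrary non-negative sequence would fail in general, which is exactly how the Stieltjes hypothesis on ${\boldsymbol s}^w$ enters the construction.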
\end{exa} The class of weighted shifts on directed trees with one branching vertex has proven to be a source of interesting results and examples (see \cite{b-j-j-s-jmaa-2013, b-d-j-s-aaa-2013, b-j-j-s-jmaa-2016}). Below we show an example of a subnormal o-wco $\cfl$ induced by a transformation $\phi$ whose graph is composed of a directed tree with one branching vertex and a loop. \begin{exa} Fix $k\in\nbb \cup \{\infty\}$. Let $X = \{(0,0)\} \cup \nbb \times \{1,2,\ldots, k\}$. Let $\phi\colon X\to X$ be given by \begin{align*} \phi(m,n)=\left\{ \begin{array}{cl} (0,0) & \text{if } m =0 ,\\ (0,0) & \text{if } m=1 \text { and } n\in\{1,\ldots, k\},\\ (m-1,n) & \text{if } m \geq 2 \text { and } n\in\{1,\ldots, k\}. \end{array} \right. \end{align*} Let $W$ be a Borel subset of $\cbb$ and $\varrho$ be a Borel measure on $W$. Let $\hhb=\{\hh_x\colon x\in X\}$ with $\hh_x=L^2(\varrho)$. For a given sequence $\{\beta_n\}_{n=1}^{k} \subset \cbb$ such that $\sum_{n=1}^{k} |\beta_n|^2 < \infty$ we define functions $\{\lambda_x\colon x\in X\}\subseteq\mscr(\borel{W})$ by \begin{align*} \lambda_x(w)=\left\{ \begin{array}{cl} 0 & \text{if } x = (0,0),\\ \beta_n & \text{if } x = (1,n), \\ \sqrt{\sum_{j=1}^{k} |\beta_j|^2} & \text{otherwise}, \end{array} \right.\quad w\in W. \end{align*} Let $\varLambdab=\{\varLambda_x\colon x\in X\}$ with $\varLambda_x=M_{\lambda_x}$ acting in $L^2(\varrho)$. Finally, let $S=[0,1]$ and $\vartheta_x^w$, $x\in X$ and $w\in W$, be the Lebesgue measure on $S$. Clearly, for every $x \in X$, $\Gsf_x = \sum_{n=1}^{k} |\beta_n|^2$. Thus by Theorem \ref{kryterium} the operator $\cfl$ is subnormal. \end{exa} It is well known that normal operators are, up to unitary equivalence, multiplication operators. This combined with our criterion can be used to investigate subnormality of $\cfl$ when $\varLambdab$ consists of commuting normal operators. \begin{exa} Let $X$ be countable and $\phi\colon X\to X$.
Assume that $\hh$ is a separable Hilbert space, $\hhb=\{\hh_x\colon x\in X\}$ with $\hh_x=\hh$, and $\varLambdab=\{\varLambda_x\colon x\in X\}\subseteq {\boldsymbol L}(\hh)$ is a family of commuting normal operators. Then there exist a $\sigma$-finite measure space $(W,\ascr,\varrho)$ and a family $\{\lambda_x\colon x\in X\}\subseteq\mscr(\ascr)$ such that for every $x\in X$, $\varLambda_x$ is unitarily equivalent to the operator $M_{\lambda_x}$ of multiplication by $\lambda_x$ acting in $L^2(\varrho)$. Suppose now that there exists a family $\{\vartheta_x^w\colon x\in X, w\in W\}$ of probability measures on a measurable space $(S,\varSigma)$ such that conditions $(\mathtt{A})$ and $(\mathtt{B})$ are satisfied. If for every $x\in X$, $\varrho$-a.e. $w\in W$, and $\vartheta_x^w$-a.e. $s\in S$ we have \begin{align}\label{piwo1} \sum_{y\in\phi^{-1}(\{x\})}\frac{\D|\lambda_y(w)|^2\vartheta_y^w}{\D\vartheta_x^w}<\infty, \end{align} and \begin{align}\label{piwo2} \sum_{y\in\phi^{-1}(\{x\})}\frac{\D|\lambda_y(w)|^2\vartheta_y^w}{\D\vartheta_x^w}=\sum_{z\in\phi^{-1}(\{\phi(x)\})}\frac{\D|\lambda_z(w)|^2\vartheta_z^w}{\D\vartheta_{\phi(x)}^w}, \end{align} then, by applying Theorem \ref{kryterium} and Lemma \ref{warunekC}, we deduce that $\cfl$ acting in $\ell^2(\hhb)$ is subnormal. \end{exa} In a similar manner to the case of a family of commuting normal operators we can deal with $\cfl$ induced by $\varLambdab$ consisting of a single subnormal operator. \begin{exa} Let $X$ be countable and $\phi\colon X\to X$. Suppose that $\hh$ is a separable Hilbert space and $T$ is a subnormal operator in $\hh$. Let $\hhb=\{\hh_x\colon x\in X\}$ with $\hh_x=\hh$, and $\varLambdab=\{\varLambda_x\colon x\in X\}$ with $\varLambda_x=T$. Since $T$ is subnormal, there exists a Hilbert space $\kk$ and a normal operator $N$ in $\kk$ such that $T\subseteq N$.
Let $\kkb=\{\kk_x\colon x\in X\}$ with $\kk_x=\kk$, and $\varLambdab^\prime=\{\varLambda^\prime_x\colon x\in X\}$ with $\varLambda^\prime_x=N$. Clearly, $C_{\phi,\varLambdab^\prime}$ is an extension of $\cfl$. Hence, showing that $C_{\phi,\varLambdab^\prime}$ has a quasinormal extension will yield subnormality of $\cfl$. From this point we can proceed as in the previous example. Assuming that there exists a family $\{\vartheta_x^w\colon x\in X, w\in W\}$ of probability measures on $(S,\varSigma)$ satisfying conditions $(\mathtt{A})$-$(\mathtt{B})$, and conditions \eqref{piwo1} and \eqref{piwo2} for every $x\in X$, $\varrho$-a.e. $w\in W$, and $\vartheta_x^w$-a.e. $s\in S$, we can show that $\cfl$ is subnormal. \end{exa} The method of proving subnormality via quasinormality and extending the underlying $L^2$-space with the help of a family of probability measures has already been used in the context of composition operators (see \cite[Theorem 9]{b-j-j-s-jfa-2015}) and weighted composition operators (see \cite[Theorem 29]{b-j-j-s-wco}). The class of weighted composition operators in $L^2$-spaces over discrete measure spaces is contained in the class of o-wco's (see Remark \ref{wco}). Below we deduce a discrete version of \cite[Theorem 29]{b-j-j-s-wco} from Theorem \ref{kryterium}. \begin{prop} Let $(X,2^X,\mu)$ be a discrete measure space, $\phi$ be a self-map of $X$, and $w\colon X\to \cbb$. Suppose that there exists a family $\{Q_x\colon x\in X\}$ of Borel probability measures on $\rbb_+$ such that \begin{align}\label{CC} \mu_{x}\int_\sigma t\D Q_{x}(t)=\sum_{y\in\phi^{-1}(\{x\})} Q_y(\sigma) |w(y)|^2\mu_y,\quad \sigma\in \borel{\rbb_+},\ x\in X, \end{align} and \begin{align}\label{wolny} \int_{\rbb_+}t\D Q_x(t)<\infty,\quad x\in X. \end{align} Then the weighted composition operator $C_{\phi, w}$ induced by $\phi$ and $w$ is subnormal.
\end{prop} \begin{proof} By Proposition \ref{liczaca}, $C_{\phi, w}$ is unitarily equivalent to the weighted composition operator $C_{\phi, \widetilde w}$ in $\ell^2(\nu)$, where $\widetilde w (x)=\sqrt{\frac{\mu_x}{\mu_{\phi(x)}}}\, w(x)$, $x\in X$, and $\nu$ is the counting measure on $2^X$. Obviously, it now suffices to prove the subnormality of $C_{\phi, \widetilde w}$. First we note that \eqref{wolny} and \eqref{CC} imply that $\sum_{y\in\phi^{-1}(\{x\})}|w(y)|^2\mu_y<\infty$ for every $x\in X$ (equivalently, the operator $C_{\phi, w}$ is densely defined). Using \eqref{CC} we get \begin{align*} \int_\sigma t\D Q_{x}(t)=\sum_{y\in\phi^{-1}(\{x\})} Q_y(\sigma) |\widetilde w (y)|^2,\quad \sigma\in \borel{\rbb_+},\ x\in X. \end{align*} This implies that for every $x\in X$ we have $|\widetilde w(x)|^2Q_x\ll Q_{\phi(x)}$ and \begin{align}\label{CCprime} \frac{\D \big(\sum_{y\in\phi^{-1}(\{x\})} |\widetilde w(y)|^2 Q_y\big)}{\D Q_{x}}=t\quad \text{for $Q_{x}$-a.e. $t\in \rbb_+$.} \end{align} Now, we set $W=\{1\}$, $\ascr=\big\{\{1\},\varnothing\big\}$, $\varrho (\{1\})=1$, $\lambda_x(1)=\widetilde w (x)$, and $\vartheta_x^1=Q_x$. Then, in view of \eqref{l22}, we have \begin{align*} \Gsf_x^1 =\sum_{\substack{y \in \phi^{-1}(\{x\})}} \frac{\D|\lambda_y(1)|^2 \vartheta_y^1}{\D \vartheta^1_x} = \frac{\D \big(\sum_{y \in \phi^{-1}(\{x\})} |\widetilde w (y)|^2 Q_y\big)}{\D Q_{x}},\quad x\in X. \end{align*} This and \eqref{CCprime} yield $\Gsf_x(t)=t=\Gsf_{\phi(x)}(t)$ for $\vartheta^1_x$-a.e. $t\in\rbb_+$ and every $x\in X$. By Theorem \ref{kryterium} (see also Remark \ref{wco}), the operator $C_{\phi, \widetilde w}$ is subnormal, which completes the proof. \end{proof} \section{Auxiliary results} In this section we provide additional results concerning the commutativity of multiplication operators and o-wco's, motivated by our preceding considerations. We begin with a commutativity criterion.
\begin{prop}\label{lemKom1} Let $\{(\varOmega_x,\ascr_x,\mu_x)\colon x\in X\}$ be a family of $\sigma$-finite measure spaces and $\hhb=\{L^2(\mu_x)\colon x\in X\}$. Let $\varGammab=\{\varGamma_x\colon x\in X\}$, with $\varGamma_x\in\mscr(\ascr_x)$, satisfy $M_{\varGammab} \in {\boldsymbol B}(\ell^2(\hhb))$. Let $\varLambdab=\{\varLambda_x\colon x\in X\}$ be a family of operators $\varLambda_x\in{\boldsymbol L}(L^2(\mu_{\phi(x)}), L^2(\mu_x))$. Assume that \begin{align}\label{gwiazdka} M_{\varGamma_x}\varLambda_x\subseteq\varLambda_x M_{\varGamma_{\phi(x)}},\quad x\in X. \end{align} Then $M_{\varGammab}\cfl \subseteq \cfl M_{\varGammab}$. \end{prop} \begin{proof} Let $\bsf \in \ell^2(\hhb)$. Since $M_{\varGammab} \in {\boldsymbol B}(\ell^2(\hhb))$, $\bsf\in \dom(M_{\varGammab}\cfl)$ if and only if $f_{\phi(x)}\in\dom(\varLambda_x)$ for every $x\in X$ and \begin{align*} \sum_{x \in X} \int_{\varOmega_x} \big|\big(\varLambda_x f_{\phi(x)}\big)(w)\big|^2 \D \mu_x(w) <\infty. \end{align*} On the other hand, $\bsf \in \dom(\cfl M_{\varGammab})$ if and only if $\varGamma_{\phi(x)}f_{\phi(x)}\in\dom(\varLambda_x)$ for every $x\in X$ and \begin{align}\label{kapusta} \sum_{x \in X}\int_{\varOmega_x} \big|\varLambda_x \big(\varGamma_{\phi(x)}f_{\phi(x)}\big)(w)\big|^2 \D \mu_x(w) <\infty. \end{align} Now, if $\bsf \in \dom(M_{\varGammab}\cfl)$, then, by \eqref{gwiazdka}, $\varGamma_{\phi(x)}f_{\phi(x)}\in\dom(\varLambda_x)$ for every $x\in X$. Moreover, since $M_{\varGammab} \in {\boldsymbol B}(\ell^2(\hhb))$ implies that $\varGammab$ is uniformly essentially bounded, we see that \begin{align*} \sum_{x \in X}\int_{\varOmega_x} \big|\varGamma_x(w) \big(\varLambda_x f_{\phi(x)}\big)(w)\big|^2 \D \mu_x(w) <\infty, \end{align*} which, by \eqref{gwiazdka}, implies \eqref{kapusta}. Thus $\dom(M_{\varGammab} \cfl)\subseteq \dom(\cfl M_{\varGammab})$. 
This and \eqref{gwiazdka} yield \begin{align*} \big(M_{\varGammab} \cfl \bsf\big)_x &=\varGamma_x \big(\cfl \bsf\big)_x =\varGamma_x \varLambda_x f_{\phi(x)}=\varLambda_x \big(\varGamma_{\phi(x)} f_{\phi(x)}\big)\\ &= \varLambda_x \big(M_{\varGammab} \bsf\big)_{\phi(x)}=\big(\cfl M_{\varGammab} \bsf\big)_x,\quad x\in X,\quad \bsf\in \dom(M_{\varGammab}\cfl), \end{align*} which completes the proof. \end{proof} \begin{rem} It is worth noticing that if $\{(\varOmega_x,\ascr_x,\mu_x)\colon x\in X\}$, $\hhb$, $\varGammab$, and $\varLambdab$ are as in Proposition \ref{lemKom1}, then $M_{\varGammab} \cfl\subseteq \cfl M_{\varGammab}$ implies $M_{\varGamma_{x}}\varLambda_{x}|_{\dom(C_{\phi, \varLambdab,\phi(x)})}\subseteq\varLambda_{x} M_{\varGamma_{\phi(x)}}$ for every $x\in X$. This can be easily proved by comparing $\big(M_{\varGammab} \cfl \bsf\big)_x$ and $\big(\cfl M_{\varGammab} \bsf\big)_x$ for $\bsf\in\ell^2(\hhb)$ given by $f_{y}=\delta_{y,\phi(x)}g$, where $g\in \dom(C_{\phi,\varLambdab,\phi(x)})$ (see the last part of the proof of Proposition \ref{lemKom1}). \end{rem} In view of our previous investigations it seems natural to ask under what conditions the inclusion $\cfl M_{\varGammab} \subseteq M_{\varGammab} \cfl$ can be replaced by equality. Below we propose an answer when $\varLambdab$ consists of multiplication operators. \begin{prop}\label{quasilem1} Let $\{(\varOmega,\ascr,\mu_x)\colon x\in X\}$ be a family of $\sigma$-finite measure spaces. Let $\varGammab=\{\varGamma_x\colon x\in X\}\subseteq \mscr(\ascr)$ and $\{\lambda_x\colon x\in X\}\subseteq \mscr(\ascr)$. Suppose that $|\lambda_x|^2\mu_x\ll\mu_{\phi(x)}$ for every $x\in X$. Let $\hhb=\{L^2(\mu_x)\colon x\in X\}$ and $\varLambdab=\{\varLambda_x\colon x\in X\}$ with $\varLambda_x=M_{\lambda_x}\in{\boldsymbol L}(\hh_{\phi(x)}, \hh_x)$. Assume that $H_x:=|\varGamma_x|+\sum_{y\in \phi^{-1}(\{x\})}\frac{\D|\lambda_y|^2\mu_y}{\D\mu_{x}}<\infty$ a.e. $[\mu_{x}]$ for every $x\in X$.
Suppose that $\dom(\cfl)\subseteq \dom(M_{\varGammab})$ and $\cfl M_{\varGammab} \subseteq M_{\varGammab} \cfl$. Then $\cfl M_{\varGammab}= M_{\varGammab} \cfl$. \end{prop} \begin{proof} We first prove that $\cfl M_{\varGammab} \subseteq M_{\varGammab} \cfl$ implies \begin{align}\label{ups2} \lambda_x \varGamma_x=\lambda_x \varGamma_{\phi(x)}\text{ a.e. $[\mu_x]$},\quad x\in X. \end{align} To this end, we fix $x_0\in X$. Since $\mu_{\phi(x_0)}$ is $\sigma$-finite and $|\varGamma_{\phi(x_0)}|+H_{\phi(x_0)}<\infty$ a.e. $[\mu_{\phi(x_0)}]$, using a standard measure-theoretic argument we show that there exists $\{\varOmega_n\}_{n=1}^\infty\subseteq{\ascr}$ such that $\varOmega=\bigcup_{n=1}^\infty \varOmega_n$ and for every $k\in\nbb$ we have $\varOmega_k\subseteq \varOmega_{k+1}$, $\mu_{\phi(x_0)}(\varOmega_k)<\infty$, and $|\varGamma_{\phi(x_0)}|+H_{\phi(x_0)}< k$ on $\varOmega_k$. Now, we consider $\bsf^{(n)}\in \ell^2(\hhb)$, $n\in\nbb$, given by $f_x^{(n)}=\delta_{x,\phi(x_0)}\chi_{\varOmega_n}$, $x\in X$. Then \begin{align*} \int_{\varOmega_n}|\varGamma_{\phi(x_0)}|^2\D\mu_{\phi(x_0)} <\infty \text{ and } \int_{\varOmega_n} |\varGamma_{\phi(x_0)}|^2 H_{\phi(x_0)}\D\mu_{\phi(x_0)}<\infty,\quad n\in\nbb, \end{align*} which yields $\bsf^{(n)}\in\dom(\cfl M_{\varGammab})$ for every $n\in\nbb$. Consequently, $\bsf^{(n)}\in\dom(M_{\varGammab} \cfl)$ for every $n\in\nbb$. Now, by comparing $M_{\varGammab} \cfl \bsf^{(n)}$ and $\cfl M_{\varGammab} \bsf^{(n)}$, we get $\lambda_x \varGamma_x \chi_{\varOmega_n}= \lambda_x \varGamma_{\phi(x)}\chi_{\varOmega_n}$ a.e. $[\mu_{x}]$ for every $x\in X$ such that $\phi(x)=\phi(x_0)$ and every $n\in\nbb$ (see the last part of the proof of Proposition \ref{lemKom1}). Since $\varOmega=\bigcup_{n=1}^\infty\varOmega_n$, we get the equality in \eqref{ups2} for every $x\in X$ such that $\phi(x)=\phi(x_0)$. Considering all possible choices of $x_0\in X$ we deduce \eqref{ups2}. Now, let $\bsf \in \dom(M_{\varGammab} \cfl)$. 
Then $\bsf \in \dom( \cfl)\subseteq \dom(M_{\varGammab})$ and $$ \sum_{x \in X} \; \int_\varOmega |\lambda_x\varGamma_x|^2 |f_{\phi(x)}|^2 \D \mu_x < \infty.$$ This combined with \eqref{ups2} implies that $\bsf\in \dom(\cfl M_{\varGammab})$. Hence $\dom(M_{\varGammab} \cfl)=\dom(\cfl M_{\varGammab})$ which, in view of $\cfl M_{\varGammab} \subseteq M_{\varGammab} \cfl$, proves the claim. \end{proof} \begin{cor}\label{tikitaki} Let $\{(\varOmega, \bscr,\mu_x)\colon x\in X\}$ be a family of $\sigma$-finite measure spaces. Let $\{\xi_x\colon x\in X\}\subseteq \mscr(\bscr)$. Suppose that $|\xi_x|^2\mu_x\ll\mu_{\phi(x)}$ for every $x\in X$. Let $\hhb=\{L^2(\mu_x)\colon x\in X\}$. Assume that $\varGammab=\{\varGamma_x\colon x\in X\}$ is a family of functions $\varGamma_x\in\mscr(\bscr)$ such that $\varGamma_x=\varGamma_{\phi(x)}$ a.e. $[\mu_x]$ and $z_0-M_{\varGammab}$ is an invertible operator in $\ell^2(\hhb)$ for some $z_0\in \cbb$. Let $\varXib=\{\varXi_x\colon x\in X\}$ with $\varXi_x=M_{\xi_x}\in{\boldsymbol L}(\hh_{\phi(x)}, \hh_x)$, $x\in X$. Suppose that $C_{\phi, \varXib}$ and $M_{\varGammab}$ are densely defined, and $\dom(C_{\phi, \varXib})\subseteq \dom(M_{\varGammab})$. Then $C_{\phi, \varXib} M_{\varGammab}= M_{\varGammab} C_{\phi, \varXib}$. \end{cor} \begin{proof} Since $z_0-M_{\varGammab}$ is invertible, $z_0$ does not belong to the essential range of any $\varGamma_x$, $x\in X$, and $\big(z_0-M_{\varGammab}\big)^{-1}=M_{\boldsymbol{\varDelta}}$ where $\varDeltab=\{\varDelta_x\colon x\in X\}$ with $\varDelta_x:=(z_0-\varGamma_x)^{-1}$ (note that $|\varGamma_x|<\infty$ a.e. $[\mu_x]$ because $M_{\varGammab}$ is densely defined). Then $\varDelta_x=\varDelta_{\phi(x)}$ a.e. $[\mu_x]$ for every $x\in X$, which means that $M_{\varDelta_x}\varXi_x\subseteq\varXi_x M_{\varDelta_{\phi(x)}}$, $x\in X$. Consequently, by Proposition \ref{lemKom1}, we get $M_{\varDeltab}C_{\phi, \varXib}\subseteq C_{\phi, \varXib} M_{\varDeltab}$.
This in turn implies that $C_{\phi, \varXib} M_{\varGammab}\subseteq M_{\varGammab}C_{\phi, \varXib}$. Since $C_{\phi, \varXib}$ is densely defined, we have $\sum_{y\in\phi^{-1}(\{x\})}\frac{\D|\xi_y|^2\mu_y}{\D\mu_x}<\infty$ a.e. $[\mu_x]$ for every $x\in X$. Hence, by Proposition \ref{quasilem1}, the operators $C_{\phi, \varXib} M_{\varGammab}$ and $M_{\varGammab}C_{\phi, \varXib}$ coincide. \end{proof} As a byproduct of Corollary \ref{tikitaki} we get another proof of Theorem \ref{kryterium}. \begin{proof}[Second proof of Theorem \ref{kryterium}] We apply Corollary \ref{tikitaki} with $(\varOmega,\bscr,\mu_x)=(W\times S, \ascr\otimes\varSigma,\widehat\varrho_x)$, $\varGamma_x=\sqrt{\Gsf_x}$, $\varXi_x=\widehat\varLambda_x$, and any $z_0\in \cbb$ with non-zero imaginary part. Clearly, since $M_{\varGammab}=M_{\sqrt{\Gsf}}$ is selfadjoint, $z_0-M_{\varGammab}$ is invertible. Moreover, $\dom(C_{\phi, \varXib})=\dom(C_{\phi,\widehat\varLambdab})=\dom(|C_{\phi,\widehat\varLambdab}|)=\dom(M_{\sqrt{\Gsf}})=\dom(M_{\varGammab})$ by Proposition \ref{dziekan}. Therefore, applying Corollary \ref{tikitaki} we get $C_{\phi,\widehat\varLambdab} M_{\sqrt\Gsf}= M_{\sqrt\Gsf} C_{\phi,\widehat\varLambdab}$, which implies that $C_{\phi,\widehat\varLambdab} M_{\Gsf}= M_{\Gsf} C_{\phi,\widehat\varLambdab}$. Applying Proposition \ref{dziekan} again, we obtain $C_{\phi,\widehat\varLambdab}^* C_{\phi,\widehat\varLambdab} C_{\phi,\widehat\varLambdab}=C_{\phi,\widehat\varLambdab} C_{\phi,\widehat\varLambdab}^* C_{\phi,\widehat\varLambdab}$. In view of \eqref{QQQ} the proof is complete. \end{proof} The final important observation is that the condition appearing in Theorem \ref{kryterium} is necessary for the quasinormality of $C_{\phi, \widehat\varLambdab}$. \begin{prop} Assume \eqref{zemanek} and \eqref{majdak}. If $C_{\phi, \widehat \varLambdab}$ is quasinormal, then \begin{align*} \lambda_x\Gsf_{\phi(x)}=\lambda_x \Gsf_x\quad \text{a.e.
$[\widehat\varrho_{x}]$ for every $x\in X$.} \end{align*} \end{prop} \begin{proof} Since $C_{\phi, \widehat \varLambdab}$ is densely defined, $\Gsf_x<\infty$ a.e. $[\widehat\varrho_{x}]$. Now it suffices to argue as in the proof of Proposition \ref{quasilem1} to get $\lambda_x \Gsf_x = \lambda_x\Gsf_{\phi(x)}$ a.e. $[\widehat\varrho_x]$ (cf. \eqref{ups2}). \end{proof} \section{Acknowledgments} The research of the second and third authors was supported by the Ministry of Science and Higher Education (MNiSW) of Poland. \end{document}
\begin{document} \title[]{Explicit Class Field Theory for global function fields} \author{David Zywina} \address{Department of Mathematics and Statistics, Queen's University, Kingston, ON K7L~3N6, Canada} \email{[email protected]} \urladdr{http://www.mast.queensu.ca/\~{}zywina} \begin{abstract} Let $F$ be a global function field and let $F^{\operatorname{ab}}$ be its maximal abelian extension. Following an approach of D.~Hayes, we shall construct a continuous homomorphism $\rho\colon \operatorname{Gal}(F^{\operatorname{ab}}/F)\to C_F$, where $C_F$ is the idele class group of $F$. Using class field theory, we shall show that our $\rho$ is an isomorphism of topological groups whose inverse is the Artin map of $F$. As a consequence of the construction of $\rho$, we obtain an explicit description of $F^{\operatorname{ab}}$. Fix a place $\infty$ of $F$, and let $A$ be the subring of $F$ consisting of those elements which are regular away from $\infty$. We construct $\rho$ by combining the Galois action on the torsion points of a suitable Drinfeld $A$-module with an associated $\infty$-adic representation studied by J.-K.~Yu. \end{abstract} \subjclass[2000]{Primary 11R37; Secondary 11G09} \maketitle \section{Introduction} Let $F$ be a global field, that is, a finite field extension of either $\mathbb Q$ or a function field $\mathbb F_p(t)$. Fix an algebraic closure $\bbar{F}$ of $F$, and let $F^{\operatorname{sep}}$ be the separable closure of $F$ in $\bbar{F}$. \emph{Class field theory} gives a description of the maximal abelian extension $F^{\operatorname{ab}}$ of $F$ in $F^{\operatorname{sep}}$. Let $\theta_F \colon C_F \to \operatorname{Gal}(F^{\operatorname{ab}}/F)$ be the \defi{Artin map} where $C_F$ is the \defi{idele class group} of $F$; see \S\ref{SS:notation} for notation and \cite{tate-gcft} for background on class field theory. The map $\theta_F$ is a continuous homomorphism.
By taking profinite completions, we obtain an isomorphism of topological groups: \begin{equation*} \label{E:thetaF completion} \widehat{\theta}_F \colon \widehat C_F \xrightarrow{\sim} \operatorname{Gal}(F^{\operatorname{ab}}/F). \end{equation*} So $\theta_F$ gives a one-to-one correspondence between the finite abelian extensions $L$ of $F$ in $F^{\operatorname{sep}}$ and the finite index open subgroups of $C_F$. For a finite abelian extension $L/F$, the corresponding open subgroup of $C_F$ is the kernel $U$ of the homomorphism \[ C_F\xrightarrow{\theta_F} \operatorname{Gal}(F^{\operatorname{ab}}/F)\twoheadrightarrow \operatorname{Gal}(L/F) \] where the second homomorphism is the restriction map $\sigma\mapsto \sigma|_L$ (the group $U$ is computable; it equals $N_{L/F}(C_L)$ where $N_{L/F}\colon C_L\to C_F$ is the norm map). However, given an open subgroup $U$ with finite index in $C_F$, the Artin map ${\theta}_F$ does \emph{not} explicitly produce the corresponding extension field $L/F$ (in the sense that it does not give a concrete set of generators for $L$ over $F$, though it does give enough information to characterize it uniquely). \emph{Explicit class field theory} for $F$ entails giving constructions for all the abelian extensions of $F$. We shall give a construction of $F^{\operatorname{ab}}$, for a global function field $F$, that at the same time produces the inverse of $\widehat\theta_F$ (without referring to $\widehat\theta_F$ or class field theory itself).\\ \subsection{Context} D.~Hayes \cite{MR0330106} provided an explicit class field theory for rational function fields $F=k(t)$. He built on and popularized the work of Carlitz from the 1930s \cite{MR1501937}, who had constructed a large class of abelian extensions using what is now called the \emph{Carlitz module} (this is the prototypical Drinfeld module; we will give the basic definitions concerning Drinfeld modules in \S\ref{S:background}).
Drinfeld and Hayes have both described explicit class field theory for an arbitrary global function field $F$, see \cite{MR0384707} and \cite{MR535766}. Both proceed by first choosing a distinguished place $\infty$ of $F$, and their constructions give the maximal abelian extension $K_\infty$ of $F$ that splits completely at $\infty$. Drinfeld defines a moduli space of rank 1 Drinfeld (elliptic) modules with level structure arising from the ``finite'' places of $F$ whose spectrum turns out to be $K_\infty$. Hayes fixes a normalized Drinfeld module $\phi$ of rank 1 whose field of definition along with its torsion points can be used to construct $K_\infty$ (this approach is more favourable for explicit computations). One approach to computing the full field $F^{\operatorname{ab}}$ is to choose a second place $\infty'$ of $F$, since $F^{\operatorname{ab}}$ will then be the compositum of $K_\infty$ and $K_{\infty'}$. We wish to give a version of explicit class field theory that does not require this second unnatural choice. In Drinfeld's second paper on Drinfeld modules \cite{MR0439758}, he achieves exactly this by considering another moduli space of rank 1 Drinfeld modules but now with additional $\infty$-adic structure. As remarked by Goss in \cite{MR1423131}*{\S7.5}, it would be very useful to have a modification of Drinfeld's construction that can be applied directly to $\phi$ to give the full abelian closure $F^{\operatorname{ab}}$. We shall do exactly this! J.-K.~Yu \cite{MR2018826} has studied the additional $\infty$-adic structure introduced by Drinfeld and has teased out the implicit Galois representation occurring there. Yu's representation, which may also be defined for higher rank Drinfeld modules, can be viewed as an analogue of the Sato-Tate law, cf.~\cite{MR2018826} and \cite{Zywina-SatoTate}. \subsection{Overview} The goal of this paper is to give an explicit construction of the inverse of $\widehat{\theta}_F$.
Moreover, we will construct an isomorphism of topological groups \[ \rho\colon W_F^{\operatorname{ab}} \to C_F, \] where $W_F^{\operatorname{ab}}$ is the subgroup of $\operatorname{Gal}(F^{\operatorname{ab}}/F)$ that acts on the algebraic closure $\bbar{k}$ of $k$ in $F^{\operatorname{sep}}$ as an integral power of the Frobenius map $x\mapsto x^q$ (we endow $W_F^{\operatorname{ab}}$ with the weakest topology for which $\operatorname{Gal}(F^{\operatorname{ab}}/\bbar{k})$ is an open subgroup and has its usual profinite topology). The inverse of our $\rho$ will be the homomorphism $\alpha\mapsto \theta_F(\alpha)^{-1}$ (Theorem~\ref{T:main}). For an open subgroup $U$ of $C_F$ with finite index, define the homomorphism \[ \rho_U\colon W_F^{\operatorname{ab}} \to C_F\twoheadrightarrow C_F/U; \] it factors through $\operatorname{Gal}(L_U/F)$ where $L_U$ is the field corresponding to $U$ via class field theory. Everything about $\rho$ is computable, and in particular one can find generators for $L_U$. In \S\ref{S:background}, we shall give the required background on Drinfeld modules; in particular, we focus on normalized Drinfeld modules of rank 1. The representation $\rho$ will then be defined in \S\ref{SS:inverse}. The rest of the introduction serves as further motivation and will not be needed later. After a quick recap of explicit class field theory for $\mathbb Q$, we shall describe the abelian extensions of $F=k(t)$ constructed by Carlitz which will lead to a characterization of $\rho_\infty$. We will treat the case $F=k(t)$ in more depth in \S\ref{S:k(t)}; in particular, we will recover Hayes' description of $k(t)^{\operatorname{ab}}$ as the compositum of three linearly disjoint fields. \subsection{The rational numbers} We briefly recall explicit class field theory for $\mathbb Q$. For each positive integer $n$, let $\mu_n$ be the subgroup of $n$-torsion points in ${\overline{\mathbb Q}}^\times$; it is a free $\mathbb Z/n\mathbb Z$-module of rank $1$.
Let \[ \chi_n \colon \operatorname{Gal}(\mathbb Q^{\operatorname{ab}}/\mathbb Q) \twoheadrightarrow (\mathbb Z/n\mathbb Z)^\times \] be the representation for which $\sigma(\zeta) = \zeta^{\chi_n(\sigma)}$ for all $\sigma \in \operatorname{Gal}(\mathbb Q^{\operatorname{ab}}/\mathbb Q)$ and $\zeta\in \mu_n$. By taking the inverse limit over all $n$, ordered by divisibility, we obtain a continuous and surjective representation \[ \chi \colon \operatorname{Gal}(\mathbb Q^{\operatorname{ab}}/\mathbb Q) \to \widehat{\mathbb Z}^\times. \] The \emph{Kronecker-Weber theorem} says that $\mathbb Q^{\operatorname{ab}}$ is the cyclotomic extension of $\mathbb Q$, and hence $\chi$ is an isomorphism of topological groups. The group $\widehat{\mathbb Z}^\times \times \mathbb R^+$ can be viewed as a subgroup of $\mathbb A_\mathbb Q^\times$, where $\mathbb R^+$ is the group of positive elements of $\mathbb R^\times$. The quotient map $\widehat{\mathbb Z}^\times \times \mathbb R^+ \to C_\mathbb Q$ is an isomorphism of topological groups, and by taking profinite completions we obtain an isomorphism $\widehat{\mathbb Z}^\times=\widehat{\mathbb Z}^\times\times \widehat{\mathbb R^+} \xrightarrow{\sim} \widehat{C}_\mathbb Q$. Composing $\chi$ with this map, we obtain an isomorphism \[ \rho\colon \operatorname{Gal}(\mathbb Q^{\operatorname{ab}}/\mathbb Q) \xrightarrow{\sim} \widehat{C}_\mathbb Q. \] One can show that the inverse of the Artin map $\widehat{\theta}_\mathbb Q\colon \widehat{C}_\mathbb Q \xrightarrow{\sim} \operatorname{Gal}(\mathbb Q^{\operatorname{ab}}/\mathbb Q)$ is simply the map $\sigma \mapsto \rho(\sigma)^{-1}$. \subsection{The rational function field} \label{SS:rational field} Let us briefly consider the rational function field $F=k(t)$ where $k$ is a finite field with $q$ elements; it is the function field of the projective line $\mathbb P^1_k$.
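The isomorphism $\chi$ is concretely visible on Frobenius elements: for a prime $p\nmid n$ one has $\chi_n(\operatorname{Frob}_p)=p \bmod n$, so the residue degree of $p$ in $\mathbb Q(\mu_n)$ equals the multiplicative order of $p$ modulo $n$; equivalently, the $n$-th cyclotomic polynomial factors modulo $p$ into irreducibles of exactly that degree. A small sanity check of this standard fact (the code and the function name are ours, using sympy):

```python
from sympy import symbols, degree, factor_list, cyclotomic_poly
from sympy.ntheory import n_order

x = symbols('x')

def residue_degrees(n, p):
    # degrees of the irreducible factors of the n-th cyclotomic
    # polynomial modulo p; each equals the residue degree of p in Q(mu_n)
    _, factors = factor_list(cyclotomic_poly(n, x), modulus=p)
    return sorted({degree(f, x) for f, _ in factors})

# chi_n(Frob_p) = p mod n, so every factor has degree = order of p mod n
for n, p in [(7, 2), (12, 5), (20, 3)]:
    assert residue_degrees(n, p) == [n_order(p, n)]
```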
Let $\infty$ denote the place of $F$ for which $A=k[t]$ is the ring of functions of $\mathbb P^1_k$ that are regular away from $\infty$. Let $\operatorname{End}_{k}(\mathbb G_{a,F})$ be the ring of $k$-linear endomorphisms of the additive group scheme $\mathbb G_{a}$ over $F$. More concretely, $\operatorname{End}_k(\mathbb G_{a,F})$ is the ring of polynomials $\sum_i c_i X^{q^i} \in F[X]$ with the usual addition and the multiplication operation being composition. The \defi{Carlitz module} is the homomorphism \[ \phi\colon A \to \operatorname{End}_k(\mathbb G_{a,F}),\, a\mapsto \phi_a \] of $k$-algebras for which $\phi_t = tX + X^q$. Using the Carlitz module, we can give $F^{\operatorname{sep}}$ an interesting new $A$-module structure; i.e., for $a\in A$ and $\xi \in F^{\operatorname{sep}}$, we define $a\cdot \xi := \phi_a(\xi)$. For a monic polynomial $m\in A$, let $\phi[m]$ be the $m$-torsion subgroup of $F^{\operatorname{sep}}$, i.e., the set of $\xi \in F^{\operatorname{sep}}$ for which $m\cdot \xi=0$ (equivalently, the roots of the separable polynomial $\phi_m \in F[X]$). The group $\phi[m]$ is free of rank $1$ as an $A/(m)$-module, and we have a continuous surjective homomorphism \[ \chi_m \colon \operatorname{Gal}(F^{\operatorname{ab}}/F) \to (A/(m))^\times \] such that $\sigma(\xi) = \chi_m(\sigma)\cdot \xi$ for all $\sigma \in \operatorname{Gal}(F^{\operatorname{ab}}/F)$ and $\xi\in \phi[m]$. By taking the inverse limit over all monic $m\in A$, ordered by divisibility, we obtain a surjective continuous representation \[ \chi \colon \operatorname{Gal}(F^{\operatorname{ab}}/F) \to \widehat{A}^\times. \] However, unlike the cyclotomic case, the map $\chi$ is not an isomorphism.
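The Carlitz module is completely explicit and easy to compute with. The Python sketch below (our own illustration; the encoding of $k[t]$ as coefficient tuples is an arbitrary choice) implements the twisted polynomial ring $k[t][\tau]$ with $\tau a = a^q \tau$ for $q=p=3$, and verifies that $\phi_{t^2}=\phi_t\cdot\phi_t = t^2 + (t+t^q)\tau + \tau^2$.

```python
p = 3  # we work over k = F_p with q = p, so A = k[t]

def ptrim(f):
    """Normalize a polynomial in t (coefficient tuple, low degree first)."""
    f = [c % p for c in f]
    while f and f[-1] == 0:
        f.pop()
    return tuple(f)

def padd(f, g):
    n = max(len(f), len(g))
    return ptrim([(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
                  for i in range(n)])

def pmul(f, g):
    r = [0] * (len(f) + len(g) + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] += a * b
    return ptrim(r)

def pfrob(f, e=1):
    """f(t)^(q^e) = f(t^(q^e)), since the coefficients lie in the prime field F_p."""
    q = p ** e
    r = [0] * (q * len(f) or 1)
    for i, a in enumerate(f):
        r[q * i] = a
    return ptrim(r)

def tmul(u, v):
    """Multiply twisted polynomials (lists of t-polynomials, index = power of tau),
    using tau^i * b = b^(q^i) * tau^i."""
    r = [()] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            r[i + j] = padd(r[i + j], pmul(a, pfrob(b, i)))
    return r

t, one = (0, 1), (1,)
phi_t = [t, one]                 # Carlitz: phi_t = t + tau, i.e. phi_t(X) = tX + X^q
phi_t2 = tmul(phi_t, phi_t)      # phi_{t^2} = phi_t . phi_t
```

Since $\phi$ is a ring homomorphism, $\phi_{t^2}$ is the product $\phi_t\cdot\phi_t$ in $F[\tau]$, which the code computes directly.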
The field $\bigcup_m F(\phi[m])$ is a geometric extension of $F$ that is tamely ramified at $\infty$; it does not contain the extensions of $F$ that are wildly ramified at $\infty$, nor the constant extensions. Following J.K.~Yu and Drinfeld, we shall define a continuous homomorphism \[ \rho_\infty \colon W_F^{\operatorname{ab}} \to F_\infty^\times, \] where $F_\infty=k(\!(t^{-1})\!)$ is the completion of $F$ at the place $\infty$. We will put off its definition (see \S\ref{S:k(t)} for further details in the $k(t)$ case), and simply note that $\rho_\infty$ can be characterized by the fact that it satisfies $\rho_\infty(\operatorname{Frob}_\mathfrak p)=\mathfrak p$ for each monic irreducible polynomial $\mathfrak p$ of $A$. The image of $\rho_\infty$ is contained in the open subgroup $F_\infty^+:= \ang{t}\cdot (1+t^{-1}k[\![t^{-1}]\!])$ of $F_\infty^\times$. We can view $\widehat{A}^\times \times F_\infty^+$ as an open subgroup of the ideles $\mathbb A_F^\times$. We define the continuous homomorphism \begin{equation*} \rho\colon W_F^{\operatorname{ab}} \xrightarrow{\chi \times \rho_\infty } \widehat{A}^\times \times F_\infty^+ \to C_F \end{equation*} where the first map takes $\sigma\in W_F^{\operatorname{ab}}$ to $(\chi(\sigma),\rho_\infty(\sigma))$ and the second map is the composite of the inclusion into $\mathbb A_F^\times$ with the quotient map $\mathbb A_F^\times \to C_F$. The main result of this paper, for $F=k(t)$, says that the above homomorphism $\rho \colon W_F^{\operatorname{ab}} \to C_F$ is an isomorphism of topological groups. Moreover, the inverse of the Artin map $\theta_F\colon C_F\to W^{\operatorname{ab}}_F$ is the homomorphism $\sigma \mapsto \rho(\sigma)^{-1}$. In particular, observe that $\rho$ does not depend on our initial choice of the place $\infty$! Taking profinite completions, we obtain an isomorphism $\widehat{\rho}\colon \operatorname{Gal}(F^{\operatorname{ab}}/F)\xrightarrow{\sim} \widehat{C}_F$.
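The characterization $\rho_\infty(\operatorname{Frob}_\mathfrak p)=\mathfrak p$ is closely tied to the classical congruence $\phi_\mathfrak p \equiv \tau^{\deg \mathfrak p} \pmod{\mathfrak p}$ for the Carlitz module. The following Python sketch (our own illustration; the data encoding and the choice $\mathfrak p = t^2+1$, irreducible over $\mathbb F_3$, are ours) verifies this congruence in one case.

```python
p = 3  # k = F_3, A = k[t]; the monic irreducible we test is m = t^2 + 1

def ptrim(f):
    f = [c % p for c in f]
    while f and f[-1] == 0:
        f.pop()
    return tuple(f)

def padd(f, g):
    n = max(len(f), len(g))
    return ptrim([(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
                  for i in range(n)])

def pmul(f, g):
    r = [0] * (len(f) + len(g) + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] += a * b
    return ptrim(r)

def pfrob(f, e=1):
    q = p ** e
    r = [0] * (q * len(f) or 1)
    for i, a in enumerate(f):
        r[q * i] = a
    return ptrim(r)

def pmod(f, m):
    """Remainder of f modulo the monic polynomial m, over F_p."""
    f = [c % p for c in f]
    while len(f) >= len(m):
        c = f[-1]
        for j, d in enumerate(m):
            f[len(f) - len(m) + j] = (f[len(f) - len(m) + j] - c * d) % p
        f.pop()
    return ptrim(f)

def tmul(u, v):
    """Twisted multiplication: tau^i * b = b^(q^i) * tau^i."""
    r = [()] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            r[i + j] = padd(r[i + j], pmul(a, pfrob(b, i)))
    return r

def tadd(u, v):
    n = max(len(u), len(v))
    return [padd(u[i] if i < len(u) else (), v[i] if i < len(v) else ())
            for i in range(n)]

t, one = (0, 1), (1,)
phi_t = [t, one]                               # phi_t = t + tau
m = (1, 0, 1)                                  # m = t^2 + 1, irreducible over F_3
phi_m = tadd(tmul(phi_t, phi_t), [one])        # phi_{t^2+1} = phi_t . phi_t + 1
reduced = [pmod(c, m) for c in phi_m]          # reduce tau-coefficients mod m
```

The reduction leaves only the monomial $\tau^2$, matching $\deg(t^2+1)=2$.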
\\ The construction for a general global function field $F$ is more involved; it more closely resembles the theory of complex multiplication for elliptic curves than the cyclotomic theory. We first choose a place $\infty$ of $F$. In place of the Carlitz module, we will consider a suitable rank 1 Drinfeld module $\phi$. We cannot always take $\phi$ to be defined over $F$, but we can choose a model defined over the maximal abelian extension $H_A$ of $F$ that is unramified at all places and splits completely at $\infty$ (we will actually work with a slightly larger field $H_A^+$). \subsection{Notation} \label{SS:notation} Throughout this paper, we consider a global function field $F$ with a fixed place $\infty$. Let $A$ be the subring of $F$ consisting of those functions that are regular away from $\infty$. Denote by $k$ the field of constants of $F$ and let $q$ be the cardinality of $k$. For each place $\lambda$ of $F$, let $F_\lambda$ be the completion of $F$ at $\lambda$. Let $\operatorname{ord}_\lambda \colon F_\lambda^\times \twoheadrightarrow \mathbb Z$ be the discrete valuation corresponding to $\lambda$ and let $\mathcal O_\lambda\subseteq F_\lambda$ be its valuation ring. The \defi{idele group} $\mathbb A_F^\times$ of $F$ is the subgroup of $\prod_\lambda F_\lambda^\times$, where the product is over all places of $F$, consisting of those tuples $(\alpha_\lambda)$ such that $\alpha_\lambda$ belongs to $\mathcal O_\lambda^\times$ for all but finitely many places $\lambda$. The group $\mathbb A_F^\times$ is locally compact when endowed with the weakest topology for which $\prod_\lambda \mathcal O_\lambda^\times$, with the profinite topology, is an open subgroup. We embed $F^\times$ diagonally into $\mathbb A_F^\times$; it is a discrete subgroup. The \defi{idele class group} of $F$ is $C_F:=\mathbb A_F^\times/F^\times$, which we endow with the quotient topology. Let $\mathfrak{m}_\lambda$ be the maximal ideal of $\mathcal O_\lambda$.
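For $F=k(t)$ this notation can be made entirely computational: the places $\lambda\neq\infty$ correspond to monic irreducible polynomials, $\operatorname{ord}_\lambda$ counts multiplicity, and $\operatorname{ord}_\infty(f/g)=\deg g-\deg f$. The sketch below (our own illustration; the sample element $x=t(t^2+1)/(t+1)^2$ and its hand-picked factorization over $\mathbb F_3$ are ours) checks the product formula $\sum_\lambda \deg(\lambda)\operatorname{ord}_\lambda(x)=0$.

```python
p = 3  # k = F_3, F = k(t)

def ptrim(f):
    f = [c % p for c in f]
    while f and f[-1] == 0:
        f.pop()
    return tuple(f)

def pmul(f, g):
    r = [0] * (len(f) + len(g) + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] += a * b
    return ptrim(r)

def pdivmod(f, m):
    """Quotient and remainder by a monic polynomial m, over F_p."""
    f = [c % p for c in f]
    quot = [0] * max(len(f) - len(m) + 1, 0)
    for i in range(len(quot) - 1, -1, -1):
        c = f[i + len(m) - 1]
        quot[i] = c
        for j, d in enumerate(m):
            f[i + j] = (f[i + j] - c * d) % p
    return ptrim(quot), ptrim(f[:len(m) - 1])

def multiplicity(f, m):
    """ord_m(f) for a monic irreducible m and a non-zero polynomial f."""
    e = 0
    while True:
        quot, rem = pdivmod(f, m)
        if rem:
            return e
        f, e = quot, e + 1

# sample x = t (t^2+1) / (t+1)^2;  t, t+1, t^2+1 are irreducible over F_3
places = [(0, 1), (1, 1), (1, 0, 1)]           # monic irreducibles dividing num or den
num = pmul((0, 1), (1, 0, 1))
den = pmul((1, 1), (1, 1))

ords = {m: multiplicity(num, m) - multiplicity(den, m) for m in places}
ord_inf = (len(den) - 1) - (len(num) - 1)      # deg(infinity) = 1 for k(t)
total = sum((len(m) - 1) * e for m, e in ords.items()) + ord_inf
```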
Define the finite field $\mathbb F_\lambda=\mathcal O_\lambda/\mathfrak{m}_\lambda$, whose cardinality we will denote by $N(\lambda)$. The degree of the place $\infty$ is $d_\infty:= [\mathbb F_\infty : k]$. For a place $\mathfrak p$ of $F$, we will denote by $\operatorname{Frob}_\mathfrak p$ an arbitrary representative of the (arithmetic) Frobenius conjugacy class of $\mathfrak p$ in $\operatorname{Gal}(F^{\operatorname{sep}}/F)$.\\ Let $L$ be an extension field of $k$. We fix an algebraic closure $\bbar{L}$ of $L$ and let $L^{\operatorname{sep}}$ be the separable closure of $L$ in $\bbar{L}$. We shall take $\bbar{k}$ to be the algebraic closure of $k$ in $L^{\operatorname{sep}}$. Let $L^{\mathrm{perf}}$ be the perfect closure of $L$ in $\bbar{L}$. We shall denote the absolute Galois group of $L$ by $\operatorname{Gal}_L:=\operatorname{Gal}(L^{\operatorname{sep}}/L)$. The \defi{Weil group} $W_L$ of $L$ is the subgroup of $\operatorname{Gal}_L$ consisting of those automorphisms $\sigma$ for which there exists an integer $\deg(\sigma)$ that satisfies $\sigma(x)=x^{q^{\deg(\sigma)}}$ for all $x\in \bbar{k}$. The map $\deg \colon W_L \to \mathbb Z$ is a group homomorphism with kernel $\operatorname{Gal}(L^{\operatorname{sep}}/L\bbar{k})$. We endow $W_L$ with the weakest topology for which $\operatorname{Gal}(L^{\operatorname{sep}}/L\bbar{k})$ is an open subgroup with its usual profinite topology. Let $W_L^{\operatorname{ab}}$ be the image of $W_L$ under the restriction map $\operatorname{Gal}_L\to \operatorname{Gal}(L^{\operatorname{ab}}/L)$, where $L^{\operatorname{ab}}$ is the maximal abelian extension of $L$ in $L^{\operatorname{sep}}$.\\ Let $L$ be an extension field of $k$. We define $L[\tau]$ to be the ring of polynomials in $\tau$ with coefficients in $L$ that obey the commutation rule $\tau \cdot a = a^q \tau$ for $a\in L$. In particular, note that $L[\tau]$ will be non-commutative if $L\neq k$.
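The commutation rule is easy to see in action once $L$ is strictly larger than $k$. The following Python sketch (our own illustration; modelling $\mathbb F_9$ as $\mathbb F_3[i]/(i^2+1)$ is our choice) multiplies twisted polynomials over $L=\mathbb F_9$ with $q=3$ and confirms both $\tau\cdot a = a^q\tau$ and the resulting non-commutativity; it also checks that $a^{1/q}=a^q$ on $\mathbb F_9$, since $a^{q^2}=a$.

```python
p = 3  # q = 3, k = F_3; L = F_9 = F_3[i]/(i^2 + 1), elements (x, y) = x + y*i

def fadd(a, b):
    return ((a[0] + b[0]) % p, (a[1] + b[1]) % p)

def fmul(a, b):
    # (x1 + y1 i)(x2 + y2 i), using i^2 = -1
    return ((a[0] * b[0] - a[1] * b[1]) % p, (a[0] * b[1] + a[1] * b[0]) % p)

def frob(a, e=1):
    """Frobenius x -> x^(q^e) on F_9 (cubing, e times)."""
    for _ in range(e):
        a = fmul(fmul(a, a), a)
    return a

ZERO, ONE, I = (0, 0), (1, 0), (0, 1)

def tmul(u, v):
    """Twisted multiplication in L[tau]: tau^i * b = b^(q^i) * tau^i."""
    r = [ZERO] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            r[i + j] = fadd(r[i + j], fmul(a, frob(b, i)))
    return r

tau = [ZERO, ONE]
left = tmul(tau, [I])     # tau * i  =  i^3 * tau  =  -i * tau
right = tmul([I], tau)    # i * tau
```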
We can identify $L[\tau]$ with the $k$-algebra $\operatorname{End}_{k}(\mathbb G_{a,L})$ consisting of the $k$-linear endomorphisms of the additive group scheme $\mathbb G_{a,L}$; identify $\tau$ with the endomorphism $X\mapsto X^q$. Suppose that $L$ is perfect. Let $L (\!(\tau^{-1})\!) $ be the skew field consisting of Laurent series in $\tau^{-1}$; it contains $L[\tau]$ as a subring (we need $L$ to be perfect so that $\tau^{-1} \cdot a = a^{1/q} \tau^{-1}$ is always valid). Define the valuation $\operatorname{ord}_{\tau^{-1}}\colon L (\!(\tau^{-1})\!) \to \mathbb Z\cup\{+\infty\}$ by $\operatorname{ord}_{\tau^{-1}}(\sum_i a_i \tau^{-i}) = \inf\{ i : a_i \neq 0 \}$ and $\operatorname{ord}_{\tau^{-1}}(0)=+\infty$. The valuation ring of $\operatorname{ord}_{\tau^{-1}}$ is $L [\![\tau^{-1}]\!] $, i.e., the ring of formal power series in $\tau^{-1}$. Again note that $L (\!(\tau^{-1})\!) $ and $L [\![\tau^{-1}]\!] $ are non-commutative if $L\neq k$. For a topological group $G$, we will denote by $\widehat{G}$ the profinite completion of $G$. We will always consider profinite groups, for example Galois groups, with their profinite topology. \section{Background} \label{S:background} For an in-depth introduction to Drinfeld modules, see \cite{MR0384707,MR902591,MR1423131}. The introduction \cite{MR1196509} of Hayes and Chapter VII of \cite{MR1423131} are particularly relevant to the material of this section. \subsection{Drinfeld modules} Let $L$ be a field extension of $k$. A \defi{Drinfeld module} over $L$ is a homomorphism $\phi\colon A \to L[\tau],\, x\mapsto \phi_x$ of $k$-algebras whose image contains a non-constant polynomial. Let $\partial\colon L[\tau] \to L$ be the ring homomorphism that takes a twisted polynomial to its constant term.
We say that $\phi$ has \defi{generic characteristic} if the homomorphism $\partial \circ \phi \colon A \to L$ is injective; using this map, we then obtain an embedding $F\hookrightarrow L$ that we will view as an inclusion. The ring $L[\tau]$ is contained in the skew field $L^{\mathrm{perf}} (\!(\tau^{-1})\!) $. The map $\phi$ is injective, so it extends uniquely to a homomorphism $\phi\colon F\hookrightarrow L^{\mathrm{perf}} (\!(\tau^{-1})\!) $. The function $v\colon F\to \mathbb Z\cup\{+\infty\}$ defined by $v(x)=\operatorname{ord}_{\tau^{-1}}(\phi_x)$ is a discrete valuation that satisfies $v(x)\leq 0$ for all non-zero $x\in A$; the valuation $v$ is non-trivial since the image of $\phi$ contains a non-constant element of $L[\tau]$. Therefore, $v$ is equivalent to $\operatorname{ord}_\infty$, and hence there exists a positive $r\in\mathbb Q$ that satisfies \begin{equation} \label{E:rank defn} \operatorname{ord}_{\tau^{-1}}(\phi_x) = r d_\infty \operatorname{ord}_\infty(x) \end{equation} for all $x\in F^\times$. The number $r$ is called the \defi{rank} of $\phi$, and it is always an integer. Since $L^{\mathrm{perf}} (\!(\tau^{-1})\!) $ is complete with respect to $\operatorname{ord}_{\tau^{-1}}$, the map $\phi$ extends uniquely to a homomorphism \[ \phi\colon F_\infty\hookrightarrow L^{\mathrm{perf}} (\!(\tau^{-1})\!) \] that satisfies (\ref{E:rank defn}) for all $x\in F_\infty^\times$. This extension of $\phi$ was first constructed in \cite{MR0439758} and will be the key to the $\infty$-adic part of the construction of the inverse of the Artin map in \S\ref{SS:infty-rep}. It will also lead to a more straightforward definition of the ``leading coefficient'' map $\mu_\phi$ of \cite{MR1196509}*{\S6}, see \S\ref{SS:normalization}.
Restricting our extended map $\phi$ to $\mathbb F_\infty$ gives a homomorphism $\mathbb F_\infty \to L^{\mathrm{perf}} [\![\tau^{-1}]\!] $. After composing $\phi|_{\mathbb F_\infty}$ with the homomorphism $L^{\mathrm{perf}} [\![\tau^{-1}]\!] \to L^{\mathrm{perf}}$ which takes a power series in $\tau^{-1}$ to its constant term, we obtain a homomorphism $\mathbb F_\infty \hookrightarrow L^{\mathrm{perf}}$ of $k$-algebras whose image must lie in $L$. In particular, the Drinfeld module $\phi$ gives $L$ the structure of an $\mathbb F_\infty$-algebra. Given two Drinfeld modules $\phi,\phi'\colon A \to L[\tau]$, an \defi{isogeny from $\phi$ to $\phi'$ over $L$} is a non-zero $f\in L[\tau]$ for which $f \phi_a = \phi'_a f$ for all $a\in A$. \subsection{Normalized Drinfeld modules} \label{SS:normalization} Fix a Drinfeld module $\phi\colon A \to L[\tau]$ of rank $r$ and also denote by $\phi$ its extension $F_\infty \to L^{\mathrm{perf}} (\!(\tau^{-1})\!) $. For each $x\in F_\infty^\times$, we define $\mu_\phi(x)\in (L^{\mathrm{perf}})^\times$ to be the first non-zero coefficient of the Laurent series $\phi_x \in L^{\mathrm{perf}} (\!(\tau^{-1})\!) $. By (\ref{E:rank defn}), the first term of $\phi_x$ is $\mu_\phi(x)\tau^{-rd_\infty\operatorname{ord}_\infty(x)}$. For a non-zero $x\in A$, one can also define $\mu_\phi(x)$ as the leading coefficient of $\phi_x$ as a polynomial in $\tau$.
For $x,y\in F_\infty^\times$, the value $\mu_\phi(xy)$ is the leading coefficient of the product $\mu_\phi(x) \tau^{-rd_\infty\operatorname{ord}_{\infty}(x)} \cdot \mu_\phi(y) \tau^{-rd_\infty\operatorname{ord}_{\infty}(y)}$, and hence \begin{equation} \label{E:sign relation} \mu_\phi(xy)=\mu_\phi(x) \mu_\phi(y)^{1/q^{r d_\infty\operatorname{ord}_\infty(x)}}. \end{equation} With respect to our embedding $\mathbb F_\infty\hookrightarrow L$ arising from $\phi$, we have $\mu_\phi(x)=x$ for all $x\in \mathbb F_\infty^\times$. We say that $\phi$ is \defi{normalized} if $\mu_\phi(x)$ belongs to $\mathbb F_\infty^\times$ for all $x\in F_\infty^\times$ (equivalently, for all non-zero $x\in A$). If $\phi$ is normalized, then by (\ref{E:sign relation}) the map $\mu_\phi\colon F_\infty^\times \to \mathbb F_\infty^\times$ is a group homomorphism that equals the identity map when restricted to $\mathbb F_\infty^\times$; this is an example of a sign function. \begin{definition} A \defi{sign function} for $F_\infty$ is a group homomorphism $\varepsilon\colon F_\infty^\times \to \mathbb F_\infty^\times$ that is the identity map when restricted to $\mathbb F_\infty^\times$. We say that $\phi$ is \defi{$\varepsilon$-normalized} if it is normalized and $\mu_\phi\colon F_\infty^\times \to \mathbb F_\infty^\times$ is equal to $\varepsilon$ composed with some $k$-automorphism of $\mathbb F_\infty$. \end{definition} A sign function $\varepsilon$ is trivial on $1+\mathfrak{m}_\infty$, so it is determined by the value $\varepsilon(\pi)$ for a fixed uniformizer $\pi$ of $F_\infty$.
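For $F=k(t)$ with $\infty$ the place of $t^{-1}$ we have $d_\infty=1$ and $\mathbb F_\infty=k$, and a sign function is given by sending $x=f/g$ to the ratio of the leading coefficients of $f$ and $g$; since the values lie in $k$ and are fixed by the $q$-power map, relation (E:sign relation) reduces to plain multiplicativity. The Python sketch below (our own illustration over $\mathbb F_5$; the sample elements are ours) checks that this $\varepsilon$ is multiplicative and restricts to the identity on $k^\times$.

```python
p = 5  # k = F_5, F = k(t); polynomials are coefficient tuples, low degree first

def pmul(f, g):
    r = [0] * (len(f) + len(g) + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] += a * b
    while r and r[-1] % p == 0:
        r.pop()
    return tuple(c % p for c in r)

def lc(f):
    """Leading coefficient of a non-zero polynomial over F_p."""
    return next(c % p for c in reversed(f) if c % p)

def eps(num, den):
    """Sign of x = num/den in F_inf = k((1/t)): ratio of leading coefficients."""
    return (lc(num) * pow(lc(den), p - 2, p)) % p

x = ((1, 0, 3), (4, 2))       # x = (3t^2 + 1)/(2t + 4)
y = ((2, 0, 0, 1), (3,))      # y = (t^3 + 2)/3
xy = (pmul(x[0], y[0]), pmul(x[1], y[1]))
```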
\begin{lemma} \label{L:normalization works} \cite{MR1196509}*{\S12} Let $\varepsilon$ be a sign function for $F_\infty$ and let $\phi'\colon A\to L[\tau]$ be a Drinfeld module. Then $\phi'$ is isomorphic over $\bbar{L}$ to an $\varepsilon$-normalized Drinfeld module $\phi\colon A\to \bbar{L}[\tau]$. \end{lemma} \subsection{The action of an ideal on a Drinfeld module} Fix a Drinfeld module $\phi\colon A \to L[\tau]$ and take a non-zero ideal $\mathfrak a$ of $A$. Let $I_{\mathfrak a,\phi}$ be the \emph{left} ideal in $L[\tau]$ generated by the set $\{\phi_a: a \in \mathfrak a\}$. All left ideals of $L[\tau]$ are principal, so $I_{\mathfrak a,\phi}=L[\tau]\cdot \phi_{\mathfrak a}$ for a unique monic polynomial $\phi_{\mathfrak a}\in L[\tau]$. Using that $\mathfrak a$ is an ideal, we find that $I_{\mathfrak a,\phi} \phi_x \subseteq I_{\mathfrak a,\phi}$ for all $x\in A$. Thus for each $x\in A$, there is a unique polynomial $(\mathfrak a*\phi)_x$ in $L[\tau]$ that satisfies \begin{equation} \label{E:a isogeny} \phi_\mathfrak a\cdot \phi_x = (\mathfrak a*\phi)_x \cdot \phi_\mathfrak a. \end{equation} The map \[ \mathfrak a* \phi \colon A \to L[\tau],\,\, x\mapsto (\mathfrak a* \phi)_x \] is also a Drinfeld module, and hence (\ref{E:a isogeny}) shows that $\phi_\mathfrak a$ is an isogeny from $\phi$ to $\mathfrak a* \phi$. \begin{lemma} \label{L:ideal action} \cite{MR1196509}*{\S4} \begin{romanenum} \item \label{I:ideal action, multiplication} Let $\mathfrak a$ and $\mathfrak b$ be non-zero ideals of $A$. Then $\phi_{\mathfrak a\mathfrak b}=(\mathfrak b*\phi)_\mathfrak a\cdot \phi_\mathfrak b$ and $\mathfrak a*(\mathfrak b*\phi)=(\mathfrak a\mathfrak b)*\phi$.
\item \label{I:ideal action, principal} Let $\mathfrak a=wA$ be a non-zero principal ideal of $A$. Then $\phi_\mathfrak a=\mu_\phi(w)^{-1} \cdot \phi_w$ and $(\mathfrak a*\phi)_x = \mu_\phi(w)^{-1}\cdot \phi_x\cdot \mu_\phi(w)$ for all $x\in A$. \end{romanenum} \end{lemma} \begin{lemma} \label{L:ideal-Galois actions} Let $\sigma\colon L\hookrightarrow L'$ be an embedding of fields. Let $\sigma(\phi)\colon A \to L'[\tau]$ be the Drinfeld module for which $\sigma(\phi)_x = \sigma(\phi_x)$, where $\sigma$ acts on the coefficients of elements of $L[\tau]$. For each non-zero ideal $\mathfrak a$ of $A$, we have $\sigma(\mathfrak a*\phi)=\mathfrak a*\sigma(\phi)$ and $\sigma(\phi_\mathfrak a)=\sigma(\phi)_\mathfrak a$. \end{lemma} \begin{proof} The left ideal of $L'[\tau]$ generated by $\sigma(I_{\mathfrak a,\phi})$ is $I_{\mathfrak a,\sigma(\phi)}$, and hence $\sigma(\phi_\mathfrak a)=\sigma(\phi)_\mathfrak a$. Applying $\sigma$ to (\ref{E:a isogeny}), we have \[ \sigma(\phi)_\mathfrak a\sigma(\phi)_x = \sigma(\phi_\mathfrak a\cdot \phi_x) = \sigma((\mathfrak a*\phi)_x \cdot \phi_\mathfrak a) = \sigma(\mathfrak a*\phi)_x \sigma(\phi)_\mathfrak a, \] for all $x\in A$. This shows that $\sigma(\mathfrak a*\phi)=\mathfrak a*\sigma(\phi)$. \end{proof} \subsection{Hayes modules} \label{SS:Hayes} Fix a sign function $\varepsilon$ for $F_\infty$. Let $\mathbb C_\infty$ be the completion of an algebraic closure $\bbar{F}_\infty$ of $F_\infty$ with respect to the $\infty$-adic norm; it is both complete and algebraically closed.
\begin{definition} A \defi{Hayes module} for $\varepsilon$ is a Drinfeld module $\phi\colon A \to \mathbb C_\infty[\tau]$ of rank 1 that is $\varepsilon$-normalized and for which $\partial \circ \phi\colon A \to \mathbb C_\infty$ is the inclusion map. Denote by $X_\varepsilon$ the set of Hayes modules for $\varepsilon$. \end{definition} We know that Hayes modules exist because Drinfeld $A$-modules over $\mathbb C_\infty$ of rank 1 can be constructed analytically \cite{MR0384707}*{\S3} and then we can apply Lemma~\ref{L:normalization works}.\footnote{In our construction of the inverse of the Artin map, this is the only part that we have not made explicit. It is analogous to how one analytically constructs an elliptic curve with complex multiplication by the ring of integers $\mathcal O_K$ of a quadratic imaginary field $K$: the quotient $\mathbb C/\mathcal O_K$ gives such an elliptic curve over $\mathbb C$, and we can then compute its $j$-invariant to high enough precision to identify which algebraic integer it is (it belongs to the Hilbert class field of $K$, so we know its degree over $\mathbb Q$). The current version of \texttt{Magma} \cite{Magma} has a function \texttt{AnalyticDrinfeldModule} that can compute rank 1 Drinfeld modules that are defined over the maximal abelian extension $H_A$ of $F$ that is unramified at all places and splits completely at $\infty$.} Take any Hayes module $\phi\in X_\varepsilon$. Using that $\phi_\mathfrak a \in \mathbb C_\infty[\tau]$ is monic along with (\ref{E:a isogeny}), we see that the Drinfeld module $\mathfrak a*\phi$ also belongs to $X_\varepsilon$. By Lemma~\ref{L:ideal action}(\ref{I:ideal action, multiplication}), we find that the group $\mathcal{I}$ of fractional ideals of $A$ acts on $X_\varepsilon$.
Let $\mathcal{P}^+$ be the subgroup of principal fractional ideals generated by those $x\in F^\times$ that satisfy $\varepsilon(x)=1$. The group $\mathcal{P}^+$ acts trivially on $X_\varepsilon$ by Lemma~\ref{L:ideal action}(\ref{I:ideal action, principal}), and hence we obtain an action of the finite group $\operatorname{Pic}^+(A):=\mathcal{I}/\mathcal{P}^+$ on $X_\varepsilon$. \begin{prop} \label{P:homogenous space} \cite{MR1196509}*{\S13} The set $X_\varepsilon$ is a principal homogeneous space for $\operatorname{Pic}^+(A)$ under the $*$ action. \end{prop} We now consider the arithmetic of the set $X_\varepsilon$. Take any $\phi\in X_\varepsilon$ and choose a non-constant $y\in A$. Let $H_A^+$ be the subfield of $\mathbb C_\infty$ generated by $F$ and the coefficients of $\phi_y$ as a polynomial in $\tau$. We call $H_A^+$ the \defi{normalizing field} for the triple $(F,\infty,\varepsilon)$. \begin{lemma} \cite{MR1196509}*{\S14} The extension $H_A^+/F$ is finite and normal, and depends only on the triple $(F,\infty,\varepsilon)$. \end{lemma} So for every $\phi\in X_\varepsilon$, we have $\phi(A)\subseteq H^+_A[\tau]$. From now on, we shall view $\phi$ as a Drinfeld module $\phi\colon A\to H^+_A[\tau]$. Moreover, we actually have $\phi(A)\subseteq B[\tau]$, where $B$ is the integral closure of $A$ in $H_A^+$. Since $\phi$ is normalized, we find that $\phi$ has \defi{good reduction} at every place of $H_A^+$ not lying over $\infty$ (for each non-zero prime ideal $\mathfrak P$ of $B$, we can compose $\phi$ with a reduction modulo $\mathfrak P$ map to obtain a Drinfeld module of rank 1 over $B/\mathfrak P$). There is thus a natural action of the Galois group $\operatorname{Gal}(H_A^+/F)$ on $X_\varepsilon$.
With a fixed $\phi \in X_\varepsilon$, Proposition~\ref{P:homogenous space} implies that there is a unique function $\psi\colon \operatorname{Gal}(H_A^+/F) \hookrightarrow \operatorname{Pic}^+(A)$ such that $\sigma(\phi)=\psi(\sigma)*\phi$. Lemma~\ref{L:ideal action} and Lemma~\ref{L:ideal-Galois actions} imply that $\psi$ is a group homomorphism that does not depend on the initial choice of $\phi$. A consequence of the following important proposition is that $\psi$ is surjective, and hence an isomorphism. \begin{prop} \label{P:Frob for HA+} The extension $H_A^+/F$ is unramified away from $\infty$. For each non-zero prime ideal $\mathfrak p$ of $A$, the class $\psi(\operatorname{Frob}_\mathfrak p)$ of $\operatorname{Pic}^+(A)$ is the one containing $\mathfrak p$. \end{prop} \section{Construction of the inverse of the Artin map} \label{S:construction} Fix a place $\infty$ of $F$. Throughout this section, we also fix a sign function $\varepsilon$ for $F_\infty$ and a Hayes module $\phi \in X_\varepsilon$ (as described in the previous section). Let $F_\infty^+$ be the open subgroup of $F_\infty^\times$ consisting of those $x\in F_\infty^\times$ for which $\varepsilon(x)=1$. So as not to clutter the construction, all the lemmas of \S\ref{S:construction} will be proved in \S\ref{S:lemma proofs}. \subsection{$\lambda$-adic representations} Fix a place $\lambda\neq \infty$ of $F$; we shall also denote by $\lambda$ the corresponding maximal ideal of $A$. Take any automorphism $\sigma \in \operatorname{Gal}_F$. Since the map $\psi$ of \S\ref{SS:Hayes} is surjective, we can choose a non-zero ideal $\mathfrak a$ of $A$ for which $\sigma(\phi)=\mathfrak a * \phi$.
For each positive integer $e$, let $\phi[\lambda^e]$ be the set of $b\in \bbar{F}$ that satisfy $\phi_x(b)=0$ for all $x\in \lambda^e$; equivalently, $\phi[\lambda^e]$ is the set of $b\in \bbar{F}$ such that $\phi_{\lambda^e}(b)=0$ (recall that we can identify each element of $L[\tau]$ with a unique polynomial $\sum_{i\geq 0} c_i X^{q^i} \in L[X]$). We have $\phi[\lambda^e]\subseteq F^{\operatorname{sep}}$ since the polynomials $\phi_x$ are separable for all $x\in \lambda^e$. Using the $A$-module structure coming from $\phi$, we find that $\phi[\lambda^e]$ is a free $A/\lambda^e$-module of rank 1. The \defi{$\lambda$-adic Tate module} of $\phi$ is defined to be \[ T_\lambda(\phi)=\operatorname{Hom}_A(F_\lambda/\mathcal O_\lambda,\, \phi[\lambda^\infty]) \] where $\phi[\lambda^\infty]= \cup_{e\geq 1} \phi[\lambda^e]$. The Tate module $T_\lambda(\phi)$ is a free $\mathcal O_\lambda$-module of rank $1$, and hence $V_\lambda(\phi):=F_\lambda\otimes_{\mathcal O_\lambda} T_\lambda(\phi)$ is a one-dimensional vector space over $F_\lambda$. For each $e$, the map $\phi[\lambda^e]\to \sigma(\phi)[\lambda^e],\, \xi\mapsto \sigma(\xi)$ is an isomorphism of $A/\lambda^e$-modules. Combining over all $e$, we obtain an isomorphism $V_\lambda(\sigma)\colon V_\lambda(\phi)\to V_\lambda(\sigma(\phi))$ of $F_\lambda$-vector spaces. The isogeny $\phi_\mathfrak a$ from $\phi$ to $\mathfrak a*\phi$ induces a homomorphism $\phi[\lambda^e]\to (\mathfrak a*\phi)[\lambda^e]$ of $A/\lambda^e$-modules for each $e$. Combining these maps over all $e$, we obtain an isomorphism $V_\lambda(\phi_\mathfrak a)\colon V_\lambda(\phi)\to V_\lambda(\mathfrak a*\phi)$ of $F_\lambda$-vector spaces.
Using our assumption $\sigma(\phi)=\mathfrak a*\phi$, the map $V_\lambda(\phi_\mathfrak a)^{-1}\circ V_\lambda(\sigma)$ belongs to $\operatorname{Aut}_{F_\lambda}(V_\lambda(\phi))=F_\lambda^\times$; we denote this element of $F_\lambda^\times$ by $\rho_\lambda^{\mathfrak a}(\sigma)$. \begin{lemma} \label{L:lambda basics} \begin{romanenum} \item \label{I:lambda basics-mult} Take $\sigma,\gamma \in \operatorname{Gal}_F$ and fix ideals $\mathfrak a$ and $\mathfrak b$ of $A$ such that $\sigma(\phi)=\mathfrak a*\phi$ and $\gamma(\phi)=\mathfrak b*\phi$. Then $(\sigma\gamma)(\phi)=(\mathfrak a\mathfrak b)*\phi$ and $\rho^{\mathfrak a\mathfrak b}_\lambda(\sigma\gamma)=\rho^{\mathfrak a}_\lambda(\sigma) \rho^{\mathfrak b}_\lambda(\gamma)$. \item \label{I:lambda basics-defined} Take $\sigma\in \operatorname{Gal}_F$ and fix ideals $\mathfrak a$ and $\mathfrak b$ of $A$ such that $\sigma(\phi)=\mathfrak a*\phi=\mathfrak b*\phi$. Then $\rho_\lambda^{\mathfrak a}(\sigma) \rho_\lambda^{\mathfrak b}(\sigma)^{-1}$ is the unique generator $w\in F^\times$ of the fractional ideal $\mathfrak b\mathfrak a^{-1}$ that satisfies $\varepsilon(w)=1$. \item \label{I:valuation of automorphism} Take $\sigma\in \operatorname{Gal}_F$ and fix an ideal $\mathfrak a$ such that $\sigma(\phi)=\mathfrak a*\phi$. Identifying $\lambda$ with a non-zero prime ideal of $A$, let $f\geq 0$ be the largest integer such that $\lambda^f$ divides $\mathfrak a$. Then $\operatorname{ord}_\lambda(\rho^{\mathfrak a}_\lambda(\sigma))=-f$. \end{romanenum} \end{lemma} By Lemma~\ref{L:lambda basics}(\ref{I:lambda basics-mult}), the map \[ \rho_\lambda \colon \operatorname{Gal}_{H_A^+}\to \mathcal O_\lambda^\times,\quad \sigma\mapsto \rho^{A}_\lambda(\sigma) \] is a homomorphism.
It is a continuous Galois representation and is unramified at all places not lying over $\lambda$ or $\infty$ (recall that $\phi$ has good reduction at all places not dividing $\infty$). \subsection{$\infty$-adic representation} \label{SS:infty-rep} By \S\ref{SS:normalization}, our Drinfeld module $\phi\colon A \to H_A^+[\tau]$ extends uniquely to a homomorphism $\phi\colon F_\infty \hookrightarrow (H_A^+)^{\mathrm{perf}} (\!(\tau^{-1})\!) $ that satisfies (\ref{E:rank defn}) with $r=1$ for all $x\in F_\infty^\times$. Recall that we defined a homomorphism $\deg \colon W_F \twoheadrightarrow \mathbb Z$ for which $\sigma(x)$ equals $x^{q^{\deg(\sigma)}}=\tau^{\deg(\sigma)} x \tau^{-\deg(\sigma)}$ for all $x\in \bbar{k}$. Given a series $u\in \bbar{F} (\!(\tau^{-1})\!) $ with coefficients in $F^{\operatorname{sep}}$ and an automorphism $\sigma\in \operatorname{Gal}_F$, we define $\sigma(u)$ to be the series obtained by applying $\sigma$ to the coefficients of $u$. \begin{lemma} \label{L:u facts} \begin{romanenum} \item \label{I:u exists} There exists a series $u \in \bbar{F} [\![\tau^{-1}]\!] ^\times$ such that $u^{-1}\phi(F_\infty) u \subseteq \bbar{k} (\!(\tau^{-1})\!) $. Any such $u$ has coefficients in $F^{\operatorname{sep}}$. \item \label{I:u independence} Fix any $u$ as in (\ref{I:u exists}). Take any $\sigma\in W_F$ and fix a non-zero ideal $\mathfrak a$ of $A$ for which $\sigma(\phi)=\mathfrak a*\phi$. Then \[ \phi_{\mathfrak a}^{-1} \sigma(u) \tau^{\deg(\sigma)} u^{-1} \in \bbar{F} (\!(\tau^{-1})\!) \] belongs to $\phi(F_\infty^+)$ and is independent of the choice of $u$. \end{romanenum} \end{lemma} Take $\sigma\in W_F$ and fix a non-zero ideal $\mathfrak a$ of $A$ such that $\sigma(\phi)=\mathfrak a*\phi$. Choose a series $u \in \bbar{F} [\![\tau^{-1}]\!] ^\times$ as in Lemma \ref{L:u facts}(\ref{I:u exists}).
Using that $\phi\colon F_\infty \to (H_A^+)^{\mathrm{perf}} (\!(\tau^{-1})\!) $ is injective and Lemma~\ref{L:u facts}(\ref{I:u independence}), we define $\rho_\infty^\mathfrak a(\sigma)$ to be the unique element of $F_\infty^+$ for which \[ \phi\big(\rho_\infty^\mathfrak a(\sigma)\big)=\phi_{\mathfrak a}^{-1} \sigma(u) \tau^{\deg(\sigma)} u^{-1}. \] We now state some results about $\rho_\infty^\mathfrak a(\sigma)$; they are analogous to those concerning $\rho_\lambda^\mathfrak a(\sigma)$ in the previous section. \begin{lemma} \label{L:infty basics} \begin{romanenum} \item \label{I:infty basics-mult} Take $\sigma,\gamma \in W_F$ and fix ideals $\mathfrak a$ and $\mathfrak b$ of $A$ such that $\sigma(\phi)=\mathfrak a*\phi$ and $\gamma(\phi)=\mathfrak b*\phi$. Then $(\sigma\gamma)(\phi)=(\mathfrak a\mathfrak b)*\phi$, and we have $\rho^{\mathfrak a\mathfrak b}_\infty(\sigma\gamma)=\rho^{\mathfrak a}_\infty(\sigma) \rho^{\mathfrak b}_\infty(\gamma)$. \item \label{I:infty basics-defined} Take $\sigma\in W_F$ and fix ideals $\mathfrak a$ and $\mathfrak b$ of $A$ such that $\sigma(\phi)=\mathfrak a*\phi=\mathfrak b*\phi$. Then $\rho_\infty^{\mathfrak a}(\sigma) \rho_\infty^{\mathfrak b}(\sigma)^{-1}$ is the unique generator $w\in F^\times$ of the fractional ideal $\mathfrak b\mathfrak a^{-1}$ that satisfies $\varepsilon(w)=1$. \end{romanenum} \end{lemma} \begin{lemma} \label{L:rho-infty is continuous} The map \[ \rho_\infty \colon W_{H_A^+}\to F_\infty^+,\quad \sigma\mapsto \rho^{A}_\infty(\sigma) \] is a continuous homomorphism that is unramified at all places of $H_A^+$ not lying over $\infty$. \end{lemma} \subsection{The inverse of the Artin map} \label{SS:inverse} For each $\sigma\in W_F$, fix a non-zero ideal $\mathfrak a$ of $A$ such that $\sigma(\phi)=\mathfrak a*\phi$.
By Lemma~\ref{L:lambda basics}(\ref{I:valuation of automorphism}), $(\rho^\mathfrak a_\lambda(\sigma))_\lambda$ is an idele of $F$. We define $\rho(\sigma)$ to be the element of the idele class group $C_F$ that is represented by $(\rho^\mathfrak a_\lambda(\sigma))_\lambda$. By Lemma~\ref{L:lambda basics}(\ref{I:lambda basics-defined}) and Lemma~\ref{L:infty basics}(\ref{I:infty basics-defined}), we find that $\rho(\sigma)$ is independent of the choice of $\mathfrak a$. Lemma~\ref{L:lambda basics}(\ref{I:lambda basics-mult}) and Lemma~\ref{L:infty basics}(\ref{I:infty basics-mult}) imply that the map \[ \rho\colon W_F \to C_F \] is a group homomorphism. The restriction of $\rho$ to the finite index open subgroup $W_{H_A^+}$ agrees with \[ W_{H^+_A} \xrightarrow{\prod_\lambda \rho_\lambda} F^+_\infty \times \prod_{\lambda\neq \infty} \mathcal O_\lambda^\times \hookrightarrow C_F \] where the second homomorphism is obtained by composing the natural inclusion into $\mathbb A_F^\times$ with the quotient map $\mathbb A_F^\times\to C_F$. Since the representations $\rho_\lambda$ are continuous, we deduce that $\prod_\lambda\rho_\lambda$, and hence $\rho$, is continuous. So we may view $\rho$ as a continuous homomorphism $W_F^{\operatorname{ab}} \to C_F$. By taking profinite completions, we obtain a continuous homomorphism \[ \widehat{\rho} \colon \operatorname{Gal}(F^{\operatorname{ab}}/F) \to \widehat C_F. \] Recall that the Artin map ${\theta}_F \colon C_F \to \operatorname{Gal}(F^{\operatorname{ab}}/F)$ of class field theory gives an isomorphism \[ \widehat{\theta}_F \colon \widehat C_F \xrightarrow{\sim} \operatorname{Gal}(F^{\operatorname{ab}}/F) \] of topological groups. Our main result is then the following: \begin{thm} \label{T:main} The map $\widehat\rho\colon \operatorname{Gal}(F^{\operatorname{ab}}/F) \to \widehat C_F$ is an isomorphism of topological groups.
The inverse of the isomorphism $\operatorname{Gal}(F^{\operatorname{ab}}/F) \to \widehat C_F,\, \sigma \mapsto \widehat\rho(\sigma)^{-1}$ is the Artin map $\widehat{\theta}_F$. \end{thm} Before proving the theorem, we mention the following arithmetic input. \begin{lemma} \label{L:Frobenius} Fix a place $\lambda$ of $F$. Let $\mathfrak p\neq \lambda, \infty$ be a place of $F$, which we identify with the corresponding non-zero prime ideal of $A$. Then $\rho^{\mathfrak p}_\lambda(\operatorname{Frob}_\mathfrak p)=1$. \end{lemma} \begin{proof}[Proof of Theorem~\ref{T:main}] Take an open subgroup $U$ of $C_F$ with finite index. Let $L_U$ be the fixed field in $F^{\operatorname{sep}}$ of the kernel of the homomorphism $W_F^{\operatorname{ab}} \xrightarrow{\rho} C_F\twoheadrightarrow C_F/U$; this gives an injective group homomorphism $\rho_U\colon \operatorname{Gal}(L_U/F) \hookrightarrow C_F/U$. Let $S_U$ be the set of places $\mathfrak p$ of $F$ for which $\mathfrak p=\infty$ or for which there exists an idele $\alpha \in \mathbb A_F^\times$ whose class in $C_F$ does not lie in $U$ and that satisfies $\alpha_\lambda=1$ for $\lambda\neq \mathfrak p$ and $\alpha_\mathfrak p \in \mathcal O_\mathfrak p^\times$. The set $S_U$ is finite since $U$ is open in $C_F$. Take any place $\mathfrak p\notin S_U$. Choose a uniformizer $\pi_\mathfrak p$ of $F_\mathfrak p$ and let $\alpha(\mathfrak p)$ be the idele of $F$ that is $\pi_\mathfrak p$ at the place $\mathfrak p$ and $1$ at all other places. Define the idele $\beta:=(\rho^\mathfrak p_\lambda(\operatorname{Frob}_\mathfrak p))_\lambda \cdot \alpha(\mathfrak p) \in \mathbb A_F^\times$. Lemma~\ref{L:Frobenius} says that $\beta_\lambda=1$ for all $\lambda\neq \mathfrak p$, while Lemma~\ref{L:lambda basics}(\ref{I:valuation of automorphism}) tells us that $\operatorname{ord}_\mathfrak p(\beta_\mathfrak p)=0$. By our choice of $S_U$, the image of $\beta$ in $C_F$ must lie in $U$.
Therefore, $\rho_U(\operatorname{Frob}_\mathfrak p)$ is the coset of $C_F/U$ represented by $\alpha(\mathfrak p)^{-1}$. In particular, note that $L_U/F$ is unramified at all $\mathfrak p \notin S_U$. The group $C_F/U$ is generated by the classes of the ideles $\alpha(\mathfrak p)$ with $\mathfrak p\notin S_U$, and hence $\rho_U$ is surjective. Therefore $\rho_U\colon \operatorname{Gal}(L_U/F) \xrightarrow{\sim} C_F/U$ is an isomorphism of groups. Define the isomorphism $\theta_U\colon C_F/U\xrightarrow{\sim} \operatorname{Gal}(L_U/F), \, \alpha \mapsto (\rho_U^{-1}(\alpha))^{-1}$. For each $\mathfrak p \notin S_U$, it takes the coset containing $\alpha(\mathfrak p)$ to the Frobenius automorphism corresponding to $\mathfrak p$. Composing the quotient map $C_F\to C_F/U$ with $\theta_U$, we find that the resulting homomorphism $C_F\to \operatorname{Gal}(L_U/F)$ equals the map $\alpha\mapsto \theta_F(\alpha)|_{L_U}$, where $\theta_F\colon C_F\to \operatorname{Gal}(F^{\operatorname{ab}}/F)$ is the Artin map of $F$. Recall that class field theory gives a one-to-one correspondence between the finite abelian extensions $L$ of $F$ and the open subgroups $U$ of finite index in $C_F$. Let $L\subseteq F^{\operatorname{sep}}$ be an arbitrary finite abelian extension of $F$. Class field theory says that $L$ corresponds to the kernel $U$ of the map $C_F\to \operatorname{Gal}(L/F),\,\alpha\mapsto \theta_F(\alpha)|_L$. By comparing with the computation above, we deduce that $L=L_U$. Since $L$ was an arbitrary finite abelian extension, we deduce that $F^{\operatorname{ab}} = \bigcup_U L_U$. Taking the inverse limit of the isomorphisms $\rho_U\colon \operatorname{Gal}(L_U/F)\xrightarrow{\sim} C_F/U$ as $U$ varies, we find that the corresponding homomorphism $\widehat\rho\colon \operatorname{Gal}(F^{\operatorname{ab}}/F) \to \widehat{C}_F$ is an isomorphism (the injectivity is precisely the statement that $F^{\operatorname{ab}} = \bigcup_U L_U$).
The inverse of the isomorphism $\operatorname{Gal}(F^{\operatorname{ab}}/F) \to \widehat C_F,\, \sigma \mapsto \widehat\rho(\sigma)^{-1}$ is obtained by combining the isomorphisms $\theta_U\colon C_F/U \xrightarrow{\sim} \operatorname{Gal}(L_U/F)$; by the calculation above, this equals $\widehat{\theta}_F$. \end{proof} \begin{cor} \label{C:main} The homomorphism $\rho\colon W_F^{\operatorname{ab}}\to C_F$ is an isomorphism of topological groups. The inverse of the isomorphism $W_F^{\operatorname{ab}} \to C_F,\, \sigma \mapsto \rho(\sigma)^{-1}$ is the Artin map ${\theta}_F$. \end{cor} \begin{proof} This follows directly from the theorem. Observe that the natural maps $W_F^{\operatorname{ab}} \to \widehat{W_F^{\operatorname{ab}}}=\operatorname{Gal}(F^{\operatorname{ab}}/F)$ and $C_F\to \widehat{C}_F$ from the groups to their profinite completions are both injective since $F$ is a global function field. \end{proof} \begin{remark} \begin{romanenum} \item The isomorphism $\rho\colon W_F^{\operatorname{ab}}\to C_F$ depends only on $F$ (and not on our choices of $\infty$, $\varepsilon$, and $\phi\in X_\varepsilon$). \item Our proof only uses class field theory to prove that $\rho$ is injective, i.e., to show that we have constructed all finite abelian extensions of $F$. \end{romanenum} \end{remark} \section{Proof of lemmas} \label{S:lemma proofs} \subsection{Proof of Lemma~\ref{L:lambda basics}} \label{SS:lambda basics} \noindent (i) By Lemma~\ref{L:ideal action}(\ref{I:ideal action, multiplication}) and Lemma~\ref{L:ideal-Galois actions}, we have $(\sigma\gamma)(\phi)=(\mathfrak a\mathfrak b)*\phi$ and $\sigma(\phi_\mathfrak b)=\sigma(\phi)_\mathfrak b$.
By Lemma~\ref{L:ideal action}(\ref{I:ideal action, multiplication}) and our choice of $\mathfrak a$, we find that $\phi_{\mathfrak a\mathfrak b}=(\mathfrak a*\phi)_\mathfrak b\cdot \phi_{\mathfrak a}=\sigma(\phi)_\mathfrak b \phi_\mathfrak a=\sigma(\phi_\mathfrak b) \phi_\mathfrak a$. We have $\sigma(\phi_\mathfrak b(\sigma^{-1}(\xi)))=\sigma(\phi)_\mathfrak b(\xi)$ for all $\xi\in \sigma(\phi)[\lambda^e]$, so $V_\lambda(\sigma)\circ V_\lambda(\phi_\mathfrak b) \circ V_\lambda(\sigma)^{-1}$ and $V_\lambda(\sigma(\phi_\mathfrak b))$ are the same automorphism of $V_\lambda(\sigma(\phi))$. Therefore, \begin{align*} V_\lambda(\phi_{\mathfrak a\mathfrak b})^{-1}\circ V_\lambda(\sigma\gamma) &= V_\lambda(\phi_\mathfrak a)^{-1} \circ V_\lambda(\sigma(\phi_\mathfrak b))^{-1} \circ V_\lambda(\sigma\gamma)\\ &= V_\lambda(\phi_\mathfrak a)^{-1} \circ (V_\lambda(\sigma)\circ V_\lambda(\phi_\mathfrak b) \circ V_\lambda(\sigma)^{-1})^{-1} \circ V_\lambda(\sigma\gamma)\\ &= V_\lambda(\phi_\mathfrak a)^{-1} \circ V_\lambda(\sigma)\circ V_\lambda(\phi_\mathfrak b)^{-1} \circ V_\lambda(\sigma)^{-1} \circ V_\lambda(\sigma\gamma)\\ &= (V_\lambda(\phi_\mathfrak a)^{-1} \circ V_\lambda(\sigma))\circ( V_\lambda(\phi_\mathfrak b)^{-1} \circ V_\lambda(\gamma)). \end{align*} Thus $\rho^{\mathfrak a\mathfrak b}_\lambda(\sigma\gamma)=\rho^{\mathfrak a}_\lambda(\sigma) \rho^{\mathfrak b}_\lambda(\gamma)$. \noindent (ii) Since $\mathfrak a*\phi = \mathfrak b*\phi$, Lemma~\ref{P:homogenous space} implies that the fractional ideal $\mathfrak b \mathfrak a^{-1}$ is the identity class in $\operatorname{Pic}^+(A)$. There are thus non-zero $w_1,w_2 \in A$ such that $(w_1)\mathfrak a = (w_2) \mathfrak b$ and $\varepsilon(w_1)=\varepsilon(w_2) = 1$.
In particular, $w:=w_1/w_2$ is the unique generator of $\mathfrak b\mathfrak a^{-1}$ satisfying $\varepsilon(w)=1$. By Lemma~\ref{L:ideal action}(\ref{I:ideal action, principal}), we have $(w_1)*\phi=\phi$ and $\phi_{(w_1)}=\phi_{w_1}$. Therefore, by Lemma~\ref{L:ideal action}(\ref{I:ideal action, multiplication}) we have \[ \phi_{\mathfrak a(w_1)}=((w_1)*\phi)_{\mathfrak a} \cdot \phi_{(w_1)}= \phi_\mathfrak a \phi_{w_1}. \] Similarly, $\phi_{\mathfrak b(w_2)}= \phi_\mathfrak b \phi_{w_2}$, and hence $\phi_\mathfrak a \phi_{w_1} = \phi_\mathfrak b \phi_{w_2}$. Thus \[ \rho_{\lambda}^\mathfrak a(\sigma) \rho_{\lambda}^\mathfrak b(\sigma)^{-1} = V_\lambda(\phi_\mathfrak a)^{-1} \circ V_\lambda(\phi_\mathfrak b) = V_\lambda(\phi_{w_1}) \circ V_\lambda(\phi_{w_2})^{-1}. \] The automorphisms $V_\lambda(\phi_{w_1})$ and $V_\lambda(\phi_{w_2})$ both belong to $\operatorname{Aut}_{F_\lambda}(V_\lambda(\phi))=F_\lambda^\times$ and correspond to $w_1$ and $w_2$, respectively. We conclude that $\rho_{\lambda}^\mathfrak a(\sigma) \rho_{\lambda}^\mathfrak b(\sigma)^{-1}= w_1 w_2^{-1}=w$. \noindent (iii) We view $\lambda$ as both a place of $F$ and a non-zero prime ideal of $A$. For each $e\geq f$, the kernel of $\phi[\lambda^e] \to \phi[\lambda^e],\, \xi\mapsto \sigma^{-1}(\phi_\mathfrak a(\xi))$ has cardinality $|\phi[\lambda^f]|=N(\lambda)^f$. So $\rho_\lambda^\mathfrak a(\sigma)^{-1}=V_\lambda(\sigma^{-1})\circ V_\lambda(\phi_\mathfrak a)$, which is an element of $\operatorname{Aut}_{F_\lambda}(V_\lambda(\phi))=F_\lambda^\times$, gives an $\mathcal O_\lambda$-module homomorphism $T_\lambda(\phi)\to T_\lambda(\phi)$ whose cokernel has cardinality $N(\lambda)^f$.
Therefore, $\operatorname{ord}_\lambda(\rho_\lambda^\mathfrak a(\sigma)^{-1})=f$. In particular, we have $\operatorname{ord}_\lambda(\rho_\lambda^\mathfrak a(\sigma))=-f$. \subsection{Proof of Lemma~\ref{L:u facts}} \label{SS:u facts} Fix a non-constant $y\in A$ that satisfies $\varepsilon(y)=1$ and define $h:=-d_\infty\operatorname{ord}_\infty(y)\geq 1$. Since $\phi$ is $\varepsilon$-normalized and $\varepsilon(y)=1$, we have $\phi_y = \tau^h+\sum_{j=0}^{h-1} b_j \tau^{j}$ for unique $b_j \in H_A^+$. We set $u= \sum_{i=0}^\infty a_i \tau^{-i}$ with coefficients $a_i\in \bbar{F}$ to be determined, subject to $a_0\neq 0$. Expanding out the series $\phi_y u$ and $u\tau^{h}$, we find that $\phi_y u = u\tau^{h}$ holds if and only if \begin{equation} \label{E:artin-schreier} a_i^{q^h} - a_i = - \sum_{\substack{0\leq j\leq h-1,\\ i+j-h\geq 0}} b_j a_{i+j-h}^{q^j} \end{equation} holds for all $i\geq 0$. We can use the equations (\ref{E:artin-schreier}) to recursively solve for $a_0\neq 0,a_1,a_2,\ldots$. The $a_i$ belong to $F^{\operatorname{sep}}$ since (\ref{E:artin-schreier}) is a separable polynomial in $a_i$ and the $b_j$ belong to $H_A^+\subseteq F^{\operatorname{sep}}$. Let $k_h$ be the degree $h$ extension of $k$ in $\bbar{k}$. The elements of the ring $\bbar{F} (\!(\tau^{-1})\!) $ that commute with $\tau^h$ are exactly those of $k_h (\!(\tau^{-1})\!) $. Since $\tau^h$ belongs to the commutative ring $u^{-1}\phi(F_\infty) u$, we find that $u^{-1}\phi(F_\infty) u$ is a subset of $k_h (\!(\tau^{-1})\!) $. Thus $u \in \bbar{F} (\!(\tau^{-1})\!) ^\times$ has coefficients in $F^{\operatorname{sep}}$ and satisfies $u^{-1} \phi(F_\infty) u \subseteq \bbar{k} (\!(\tau^{-1})\!) $.\\ Recall that $\phi$ induces an embedding $\mathbb F_\infty\hookrightarrow L$; this gives inclusions $\mathbb F_\infty\subseteq \bbar{k} \subseteq \bbar{L}$. Fix a uniformizer $\pi$ of $F_\infty$.
There is a unique homomorphism $\iota\colon F_\infty \to \bbar{k} (\!(\tau^{-1})\!) $ that satisfies the following conditions: \begin{itemize} \item $\iota(x)=x$ for all $x\in \mathbb F_\infty$, \item $\iota(\pi)=\tau^{-d_\infty}$, \item $\operatorname{ord}_{\tau^{-1}}(\iota(x))=d_\infty \operatorname{ord}_\infty(x)$ for all $x\in F_\infty^\times$. \end{itemize} We have $\iota(F_\infty)=\mathbb F_\infty(\!( \tau^{-d_\infty})\!)$. Let $C$ be the centralizer of $\iota(F_\infty)$ in $\bbar{F} (\!(\tau^{-1})\!) $. Using that $\mathbb F_\infty$ and $\tau^{d_\infty}$ are in $\iota(F_\infty)$, we find that $C=\mathbb F_\infty(\!(\tau^{-d_\infty})\!)=\iota(F_\infty)$. Take any $v\in \bbar{F} [\![\tau^{-1}]\!] ^\times$ that satisfies $v^{-1} \phi(F_\infty) v \subseteq \bbar{k} (\!(\tau^{-1})\!) $. By \cite{MR2018826}*{Lemma~2.3}, there exist $w_1$ and $w_2\in \bbar{k} [\![\tau^{-1}]\!] ^\times$ such that \[ w_1^{-1}(u^{-1} \phi_x u) w_1 = \iota(x) = w_2^{-1}(v^{-1} \phi_x v) w_2 \] for all $x\in F_\infty$. So for all $x\in F_\infty$, we have $(uw_1) \iota(x) (uw_1)^{-1} =\phi_x = (v w_2) \iota(x) (v w_2)^{-1}$ and hence \[ (w_2^{-1} v^{-1} u w_1) \iota(x) (w_2^{-1} v^{-1} u w_1)^{-1} = \iota(x). \] Thus $w_2^{-1} v^{-1} u w_1$ belongs to $C\subseteq \bbar{k} (\!(\tau^{-1})\!) $, and so $v=uw$ for some $w\in \bbar{k} [\![\tau^{-1}]\!] ^\times$. The coefficients of $v$ thus lie in $F^{\operatorname{sep}}$ since the coefficients of $u$ lie in $F^{\operatorname{sep}}$ and $w$ has coefficients in the perfect field $\bbar{k}\subseteq F^{\operatorname{sep}}$. This completes the proof of (i).
Since $w$ has coefficients in $\bbar{k}$, we have $\sigma(w)=\tau^{\deg(\sigma)} w \tau^{-\deg(\sigma)}$ and hence \begin{align*} \sigma(v) \tau^{\deg(\sigma)} v^{-1} &= \sigma(uw)\tau^{\deg(\sigma)} (uw)^{-1}\\ &= \sigma(u) \sigma(w) \tau^{\deg(\sigma)} w^{-1} u^{-1}\\ & = \sigma(u) (\tau^{\deg(\sigma)} w \tau^{-{\deg(\sigma)}})\tau^{\deg(\sigma)} w^{-1} u^{-1}\\ &=\sigma(u) \tau^{\deg(\sigma)} u^{-1}. \end{align*} This proves that $\phi_{\mathfrak a}^{-1} \sigma(u) \tau^{\deg(\sigma)} u^{-1}$ is independent of the initial choice of $u$. We now show that $\phi_{\mathfrak a}^{-1} \sigma(u) \tau^{\deg(\sigma)} u^{-1}$ belongs to $\phi(F_\infty)$. We have seen that $\phi(F_\infty)$ is conjugate to $\iota(F_\infty)$ in $\bbar{F} (\!(\tau^{-1})\!) $ and that $\iota(F_\infty)$ is its own centralizer in $\bbar{F} (\!(\tau^{-1})\!) $. Therefore, the centralizer of $\phi(F_\infty)$ in $\bbar{F} (\!(\tau^{-1})\!) $ is $\phi(F_\infty)$. So it suffices to prove that $\phi_{\mathfrak a}^{-1} \sigma(u) \tau^{\deg(\sigma)} u^{-1}$ commutes with $\phi_x$ for all $x\in A$. Take any $x\in A$. We have $\sigma(u^{-1} \phi_x u)= \tau^{\deg \sigma} (u^{-1} \phi_x u) \tau^{-\deg \sigma}$ since $u^{-1} \phi_x u$ has coefficients in $\bbar{k}$. By our choice of $\mathfrak a$, we have $\sigma(\phi)_x = (\mathfrak a*\phi)_x= \phi_\mathfrak a \phi_x \phi_\mathfrak a^{-1}$. Therefore, \[ \tau^{\deg \sigma} (u^{-1} \phi_x u) \tau^{-\deg \sigma} = \sigma(u^{-1} \phi_x u) = \sigma(u)^{-1} \sigma(\phi)_x \sigma(u) = \sigma(u)^{-1} \phi_\mathfrak a \phi_x \phi_\mathfrak a^{-1} \sigma(u) \] and one concludes that $\phi_\mathfrak a^{-1} \sigma(u)\tau^{\deg \sigma} u^{-1}$ commutes with $\phi_x$.
The only thing that remains to be proved is that $\phi_\mathfrak a^{-1} \sigma(u)\tau^{\deg \sigma} u^{-1}$ belongs to $\phi(F^+_\infty)$. Since $\phi$ is $\varepsilon$-normalized, this is equivalent to showing that the first non-zero coefficient of the Laurent series $\phi_\mathfrak a^{-1} \sigma(u)\tau^{\deg \sigma} u^{-1}$ in $\tau^{-1}$ is $1$. Since $\phi_\mathfrak a$ is a monic polynomial in $\tau$, we need only show that the first non-zero coefficient of $\sigma(u)\tau^{\deg \sigma} u^{-1}\in \bbar{F} (\!(\tau^{-1})\!) $ is $1$, i.e., $\sigma(a_0) a_0^{-q^{\deg \sigma}}=1$. This is true since $a_0\in \bbar{k}^\times$; indeed, $a_0$ is non-zero and satisfies $a_0^{q^h}-a_0=0$ by (\ref{E:artin-schreier}). \begin{remark} \label{R:integrality} Take any place $\mathfrak p\neq \infty$ of $F$ and any valuation $v\colon F^{\operatorname{sep}} \to \mathbb Q\cup\{+\infty\}$ extending $\operatorname{ord}_\mathfrak p$. We have $v(b_j) \geq 0$ for $0\leq j\leq h-1$ (in \S\ref{SS:Hayes}, we noted that the coefficients of $\phi_y$ are integral over $A$). Using (\ref{E:artin-schreier}) repeatedly, we find that $v(a_i) \geq 0$ for all $i\geq 0$ (the roots of (\ref{E:artin-schreier}), as a polynomial in $a_i$, differ by a value in $k_h$). For each $i\geq 1$, the extension $F(a_{i})/F(a_{i-1})$ is an Artin-Schreier extension. Since the right-hand side of (\ref{E:artin-schreier}) is integral at each place not lying over $\infty$, we deduce that $F(a_{i})/F(a_{i-1})$ is unramified at all places not lying over $\infty$. Let $L\subseteq F^{\operatorname{sep}}$ be the extension of $F$ generated by $\bbar{k}$ and the set $\{a_i\}_{i\geq 0}$, i.e., the extension of $F\bbar{k}$ generated by the coefficients of $u$. We find that $L$ is unramified at all places of $F$ away from $\infty$. \end{remark} \subsection{Proof of Lemma~\ref{L:infty basics}} Fix a series $u\in \bbar{F} [\![\tau^{-1}]\!]
^\times$ as in Lemma~\ref{L:u facts}(\ref{I:u exists}). \noindent (i) We have $\phi(\rho_\infty^{\mathfrak a \mathfrak b}(\sigma\gamma))= \phi_{\mathfrak a\mathfrak b}^{-1} \sigma\gamma(u) \tau^{\deg(\sigma\gamma)} u^{-1}$ and $\phi(\rho_\infty^\mathfrak b(\gamma))= \phi_\mathfrak b^{-1}\gamma(u)\tau^{\deg(\gamma)} u^{-1}$. So it suffices to show that $\phi(\rho^\mathfrak a_\infty(\sigma))$ equals \begin{align*} \phi(\rho_\infty^{\mathfrak a \mathfrak b}(\sigma\gamma) \rho_\infty^\mathfrak b(\gamma)^{-1} )&= \phi_{\mathfrak a\mathfrak b}^{-1} \,\sigma\gamma(u) \tau^{\deg(\sigma\gamma)} u^{-1} \cdot u \tau^{-\deg(\gamma)}\gamma(u)^{-1} \phi_\mathfrak b\\ &= \phi_{\mathfrak a\mathfrak b}^{-1} \sigma(\gamma(u)) \tau^{\deg(\sigma)} \gamma(u)^{-1} \phi_\mathfrak b\\ &= \phi_{\mathfrak a\mathfrak b}^{-1}\sigma(\phi_\mathfrak b) \phi_\mathfrak a \cdot \phi_\mathfrak a^{-1} \sigma(\phi_\mathfrak b^{-1}\gamma(u)) \tau^{\deg(\sigma)} (\phi_\mathfrak b^{-1} \gamma(u))^{-1}. \end{align*} We showed in \S\ref{SS:lambda basics} that $\phi_{\mathfrak a\mathfrak b}=\sigma(\phi_\mathfrak b) \phi_\mathfrak a$, so we need only prove that $\phi_\mathfrak a^{-1} \sigma(\phi_\mathfrak b^{-1}\gamma(u)) \tau^{\deg(\sigma)} (\phi_\mathfrak b^{-1} \gamma(u))^{-1} $ and $\phi(\rho^\mathfrak a_\infty(\sigma))$ agree. By Lemma~\ref{L:u facts}(\ref{I:u independence}), it thus suffices to show that $(\phi_\mathfrak b^{-1}\gamma(u))^{-1}\phi(F_\infty) \phi_\mathfrak b^{-1}\gamma(u) \subseteq \bbar{k} (\!(\tau^{-1})\!) $. Take any $x\in F_\infty$. Since $u^{-1}\phi_x u$ belongs to $\bbar{k} (\!(\tau^{-1})\!) $, so does $\gamma(u^{-1} \phi_x u)$.
By our choice of $\mathfrak b$, we have \[ \gamma(u^{-1} \phi_x u)= \gamma(u)^{-1} \gamma(\phi)_x \gamma(u) = \gamma(u)^{-1} (\mathfrak b*\phi)_x \gamma(u) = \gamma(u)^{-1} \phi_\mathfrak b \phi_x \phi_\mathfrak b^{-1} \gamma(u), \] and hence $(\phi_\mathfrak b^{-1}\gamma(u))^{-1} \phi_x (\phi_\mathfrak b^{-1} \gamma(u))$ is an element of $\bbar{k} (\!(\tau^{-1})\!) $. \noindent (ii) First note that \[ \phi(\rho_\infty^{\mathfrak a}(\sigma) \rho_\infty^{\mathfrak b}(\sigma)^{-1}) = \phi_\mathfrak a^{-1} \sigma(u)\tau^{\deg \sigma} u^{-1} \cdot u \tau^{-\deg \sigma} \sigma(u)^{-1} \phi_\mathfrak b=\phi_\mathfrak a^{-1} \phi_\mathfrak b. \] In \S\ref{SS:lambda basics}, we showed that $\phi_\mathfrak a^{-1} \phi_\mathfrak b = \phi_{w_1}\phi_{w_2}^{-1}$ where $w_1$ and $w_2$ belong to $A$ and $w=w_1/w_2$ is the unique generator of $\mathfrak b\mathfrak a^{-1}$ satisfying $\varepsilon(w)=1$. Therefore, $\rho_\infty^{\mathfrak a}(\sigma) \rho_\infty^{\mathfrak b}(\sigma)^{-1}=w$, as desired. \subsection{Proof of Lemma~\ref{L:rho-infty is continuous}} The map $\rho_\infty$ is a homomorphism by Lemma~\ref{L:infty basics}(\ref{I:infty basics-mult}). Fix a series $u=\sum_{i\geq 0} c_i \tau^{-i}$ as in Lemma~\ref{L:u facts}(\ref{I:u exists}). For $\sigma \in W_{H_A^+}$, we have $\operatorname{ord}_\infty(\rho_\infty(\sigma))= d_\infty \operatorname{ord}_{\tau^{-1}}(\phi(\rho_\infty(\sigma))) = -d_\infty \deg(\sigma)$. So to prove that $\rho_\infty$ is continuous, we need only show that $\operatorname{Gal}(F^{\operatorname{sep}}/H_A^+\bbar{k}) \xrightarrow{\rho_\infty} \mathcal O_\infty^\times$ is continuous.
It suffices to show that for each $e\geq 1$, the homomorphism \[ \beta_e \colon \operatorname{Gal}(F^{\operatorname{sep}}/H_A^+\bbar{k}) \xrightarrow{\rho_\infty} \mathcal O_\infty^\times \to (\mathcal O_\infty/\mathfrak{m}_\infty^e)^\times \] has open kernel. For each $\sigma \in \operatorname{Gal}(F^{\operatorname{sep}}/H_A^+\bbar{k})$, we have $\phi(\rho_\infty(\sigma))= \sigma(u) u^{-1}$. One can check that $\beta_e(\sigma)=1$, equivalently $\operatorname{ord}_\infty(\rho_\infty(\sigma)-1)\geq e$, if and only if $\operatorname{ord}_{\tau^{-1}}(\sigma(u) u^{-1} - 1)=\operatorname{ord}_{\tau^{-1}}(\sigma(u)-u)$ is at least $ed_\infty$. Thus the kernel of $\beta_e$ is $\operatorname{Gal}(F^{\operatorname{sep}}/L_e)$, where $L_e$ is the finite extension of $H_A^+\bbar{k}$ generated by the set $\{c_i\}_{0\leq i< ed_\infty}$. It remains to prove that $\rho_\infty$ is unramified at all places of $H_A^+$ not lying over $\infty$. Let $L'$ be the subfield of $F^{\operatorname{sep}}$ fixed by $\ker(\rho_\infty)$; it is the extension of $H_A^+\bbar{k}$ generated by the set $\{c_i\}_{i\geq 0}$. The field $L'$ does not depend on the choice of $u$, since $\rho_\infty$ does not. In Remark~\ref{R:integrality}, we saw that the extension $L$ of $F\bbar{k}$ generated by the coefficients of a particular $u$ is unramified at all places of $F$ away from $\infty$. Therefore, $L'$ is unramified at all places of $F$ away from $\infty$, since $L$ and $H_A^+$ both have this property. \subsection{Proof of Lemma~\ref{L:Frobenius}} First consider the case $\lambda \neq \infty$. For each $e\geq 1$, we have $\operatorname{Frob}_\mathfrak p(\xi) = \phi_\mathfrak p(\xi)$ for all $\xi \in \phi[\lambda^e] \subseteq F^{\operatorname{sep}}$; this was observed by Hayes \cite{MR1196509}*{p.~28}; for a proof, see \cite{MR1423131}*{\S7.5}.
Thus the map $V_\lambda(\phi_\mathfrak p)\colon V_\lambda(\phi) \to V_\lambda(\phi)$ equals $V_\lambda(\operatorname{Frob}_\mathfrak p)$, and hence $\rho^\mathfrak p_\lambda(\operatorname{Frob}_\mathfrak p)=V_\lambda(\phi_\mathfrak p)^{-1}\circ V_\lambda(\operatorname{Frob}_\mathfrak p)=1$. We may now assume that $\lambda=\infty$. Let $\bbar{F}_\mathfrak p$ be an algebraic closure of $F_\mathfrak p$. The field $\bbar{F}_\mathfrak p$ has a unique place extending the $\mathfrak p$-adic place of $F_\mathfrak p$; let $\bbar{\mathcal O}_\mathfrak p\subseteq \bbar{F}_\mathfrak p$ be the corresponding valuation ring. The residue field of $\bbar{\mathcal O}_\mathfrak p$ is an algebraic closure of $\mathbb F_\mathfrak p$ that we denote by $\bbar{\mathbb F}_\mathfrak p$, and we let $r_\mathfrak p\colon \bbar{\mathcal O}_\mathfrak p \to \bbar{\mathbb F}_\mathfrak p$ be the reduction homomorphism. Choose an embedding $\bbar{F} \hookrightarrow \bbar{F}_\mathfrak p$. The restriction map $\operatorname{Gal}(\bbar{F}_\mathfrak p/F_\mathfrak p)\to \operatorname{Gal}(\bbar{F}/F)$ is an inclusion that is well-defined up to conjugation. So after conjugating, we may assume that $\operatorname{Frob}_\mathfrak p$ lies in $\operatorname{Gal}(\bbar{F}_\mathfrak p/F_\mathfrak p)$; it thus acts on $\bbar{\mathcal O}_\mathfrak p$, and we have $r_\mathfrak p(\operatorname{Frob}_\mathfrak p(\xi))= r_\mathfrak p(\xi)^{N(\mathfrak p)}$ for all $\xi \in \bbar{\mathcal O}_\mathfrak p$. We will also denote by $r_\mathfrak p$ the map $\bbar{\mathcal O}_\mathfrak p (\!(\tau^{-1})\!) \to\bbar{\mathbb F}_\mathfrak p (\!(\tau^{-1})\!) $ obtained by reducing the coefficients of a series. In \S\ref{SS:Hayes}, we noted that the coefficients of $\phi_x$ are integral over $A$ for each $x\in A$. This implies that $\phi(A)\subseteq \bbar{\mathcal O}_\mathfrak p[\tau]$.
Define the homomorphism \[ \bbar{\phi} \colon A \xrightarrow{\phi} \bbar{\mathcal O}_\mathfrak p[\tau] \xrightarrow{r_\mathfrak p} \bbar{\mathbb F}_\mathfrak p[\tau]. \] The map $\bbar{\phi}$ is a Drinfeld module over $\bbar{\mathbb F}_{\mathfrak p}$ of rank $1$ since $\phi$ is normalized. Since $\phi$ is normalized, we also find that $\phi_x \in \bbar{\mathcal O}_\mathfrak p (\!(\tau^{-1})\!) ^\times$ for all non-zero $x\in A$. Therefore, $\phi(F) \subseteq \bbar{\mathcal O}_\mathfrak p (\!(\tau^{-1})\!) $. Using that $\bbar{\mathcal O}_\mathfrak p (\!(\tau^{-1})\!) $ is complete with respect to $\operatorname{ord}_{\tau^{-1}}$ and that (\ref{E:rank defn}) holds with $r=1$, we deduce that $\phi(F_\infty) \subseteq \bbar{\mathcal O}_\mathfrak p (\!(\tau^{-1})\!) $. We claim that $r_\mathfrak p$ induces an isomorphism $\phi(F_\infty) \to \bbar{\phi}(F_\infty)$. Since $\bbar{\phi}$ is a Drinfeld module, and hence injective, the map $r_\mathfrak p$ induces an isomorphism $\phi(A)\to \bbar{\phi}(A)$, which then extends to an isomorphism $\phi(F)\to \bbar{\phi}(F)$ of their quotient fields. The map $r_\mathfrak p\colon \phi(F)\to \bbar{\phi}(F)$ preserves the valuation $\operatorname{ord}_{\tau^{-1}}$, so by the uniqueness of completions, $r_\mathfrak p$ also gives an isomorphism $\phi(F_\infty)\to \bbar{\phi}(F_\infty)$. Thus to prove that $\rho_\infty^\mathfrak p(\operatorname{Frob}_\mathfrak p)$ equals $1$, it suffices to show that $r_\mathfrak p(\phi(\rho_\infty^\mathfrak p(\operatorname{Frob}_\mathfrak p)))=1$. Lemma~\ref{L:u facts}(\ref{I:u exists}), along with Remark~\ref{R:integrality}, shows that there is a series $u \in \bbar{\mathcal O}_\mathfrak p [\![\tau^{-1}]\!] ^\times$ with coefficients in $F^{\operatorname{sep}}$ such that $u^{-1}\phi(F_\infty) u \subseteq \bbar{k} (\!(\tau^{-1})\!) $.
With such a series $u$, we have \[ \phi(\rho_\infty(\operatorname{Frob}_\mathfrak p)) = \phi_\mathfrak p^{-1} \operatorname{Frob}_\mathfrak p(u) \tau^{d} u^{-1}, \] where $d=\deg(\operatorname{Frob}_\mathfrak p)$. The polynomial $\phi_\mathfrak p \in F[\tau]$ has coefficients in $\mathcal O_\mathfrak p$ and $r_\mathfrak p(\phi_\mathfrak p)=\tau^d$; thus $\phi_\mathfrak p^{-1}$ belongs to $\bbar{\mathcal O}_\mathfrak p (\!(\tau^{-1})\!) $ and $r_\mathfrak p(\phi_\mathfrak p^{-1})=\tau^{-d}$. We have $r_\mathfrak p(\operatorname{Frob}_\mathfrak p(u))=\tau^d r_\mathfrak p(u) \tau^{-d}$. Therefore, \[ r_\mathfrak p(\phi(\rho_\infty(\operatorname{Frob}_\mathfrak p))) = r_\mathfrak p(\phi_\mathfrak p^{-1}) r_\mathfrak p(\operatorname{Frob}_\mathfrak p(u)) \tau^{d} r_\mathfrak p(u)^{-1}= \tau^{-d} \cdot \tau^d r_\mathfrak p(u) \tau^{-d} \cdot \tau^d r_\mathfrak p(u)^{-1}=1. \] \section{The rational function field} \label{S:k(t)} We return to the rational function field $F=k(t)$, where $k$ is a finite field with $q$ elements. Using our constructions, we shall recover the description of $F^{\operatorname{ab}}$ given by Hayes in \cite{MR0330106}; he expressed $F^{\operatorname{ab}}$ as the compositum of three linearly disjoint fields over $F$. In particular, we will explain how two of these fields arise naturally from our representation $\rho_\infty$. We define $A=k[t]$; it is the subring of $F$ consisting of functions that are regular away from a unique place $\infty$ of $F$. We have $\mathbb F_\infty=k$, $F_\infty=k(\!(t^{-1})\!)$, and $\operatorname{ord}_\infty\colon F_\infty^\times \to \mathbb Z$ is the valuation for which $\operatorname{ord}_\infty(t^{-1})=1$. Let $\varepsilon\colon F_\infty^\times \to k^\times$ be the unique sign function of $F_\infty$ that satisfies $\varepsilon(t^{-1})=1$.
Those $x\in F_\infty^\times$ for which $\varepsilon(x)=1$ form the subgroup $F_\infty^+:= \ang{t}(1+t^{-1} k[\![t^{-1}]\!])=\ang{t}(1+\mathfrak{m}_\infty)$.\\ Recall that the Carlitz module is the homomorphism $\phi\colon A \to F[\tau],\, a\mapsto \phi_a$ of $k$-algebras for which $\phi_t = t + \tau$. In the notation of \S\ref{SS:Hayes}, $\phi$ is a Hayes module for $\varepsilon$. The coefficients of $\phi_t$ lie in $F$, so the normalizing field $H_A^+$ for $(F,\infty,\varepsilon)$ equals $F$. We saw that $\operatorname{Gal}(H_A^+/F)$ acts transitively on $X_\varepsilon$, so $X_\varepsilon =\{\phi\}$. For each place $\lambda$ of $F$ (including $\infty$!), we have defined a continuous homomorphism $\rho_\lambda\colon W_F^{\operatorname{ab}} \to F_\lambda^\times,\, \sigma \mapsto \rho^A_\lambda(\sigma)$. The representation $\rho_\lambda$ is characterized by the property that \[ \rho_\lambda(\operatorname{Frob}_\mathfrak p)=\mathfrak p \] holds for each monic irreducible polynomial $\mathfrak p\in A=k[t]$ not corresponding to $\lambda$ (combine Lemmas~\ref{L:lambda basics}(\ref{I:lambda basics-defined}), \ref{L:infty basics}(\ref{I:infty basics-defined}) and \ref{L:Frobenius} to show that $\rho^A_\lambda(\operatorname{Frob}_\mathfrak p) \rho_\lambda^{(\mathfrak p)}(\operatorname{Frob}_\mathfrak p)^{-1} = \rho^A_\lambda(\operatorname{Frob}_\mathfrak p)$ is the unique generator $w$ of $\mathfrak p A$ that satisfies $\varepsilon(w)=1$). For $\lambda\neq \infty$, the representation $\rho_\lambda$ has image in $\mathcal O_\lambda^\times$, so it extends to a continuous representation $\operatorname{Gal}(F^{\operatorname{ab}}/F)\to \mathcal O_\lambda^\times$. The image of $\rho_\infty$ lies in $F_\infty^+$ but is unbounded (so it does not extend to a Galois representation).
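The relation $\phi_t = t+\tau$ is concrete enough to experiment with directly. The following sketch is ours, not part of the paper: it assumes the special case $q=p=3$ prime, and all helper names (\texttt{tmul}, \texttt{phi}, etc.) are invented. It implements the twisted polynomial ring over $\mathbb F_3[t]$, realizes the Carlitz module, and checks two facts quoted above: that $\phi$ is a $k$-algebra homomorphism with commuting values, and that $\phi_{\mathfrak p}\equiv \tau^{\deg \mathfrak p} \pmod{\mathfrak p}$ for the irreducible $\mathfrak p=t^2+1$, which is the congruence $r_\mathfrak p(\phi_\mathfrak p)=\tau^d$ used in the proof of Lemma~\ref{L:Frobenius}.

```python
# Illustrative sketch (not from the paper): the Carlitz module over A = F_q[t]
# with q = p = 3 prime (an assumption for simplicity).  Twisted polynomials
# sum_i c_i tau^i obey the commutation rule tau * c = c^q * tau; coefficients
# c live in F_p[t], stored as tuples of ints mod p indexed by the degree in t.

p = 3  # q = p

def trim(a):
    a = list(a)
    while a and a[-1] == 0:
        a.pop()
    return tuple(a)

def padd(a, b):  # addition in F_p[t]
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) +
                  (b[i] if i < len(b) else 0)) % p for i in range(n)])

def pmul(a, b):  # multiplication in F_p[t]
    out = [0] * (len(a) + len(b) or 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return trim(out)

def pfrob(a):  # the q-power Frobenius c -> c^q on F_p[t] (fixes F_p)
    out = [0] * (p * len(a) or 1)
    for i, x in enumerate(a):
        out[p * i] = x
    return trim(out)

def pmod(a, m):  # remainder of a modulo a monic m in F_p[t]
    a = list(a)
    while len(a) >= len(m) and any(a):
        c, shift = a[-1], len(a) - len(m)
        for i, x in enumerate(m):
            a[shift + i] = (a[shift + i] - c * x) % p
        a = list(trim(a))
    return trim(a)

def tadd(f, g):  # addition of twisted polynomials (lists of F_p[t] coefficients)
    n = max(len(f), len(g))
    return [padd(f[i] if i < len(f) else (), g[i] if i < len(g) else ())
            for i in range(n)]

def tmul(f, g):  # twisted multiplication: tau^i * c = c^{q^i} * tau^i
    out = [() for _ in range(len(f) + len(g) - 1 or 1)]
    for i, c in enumerate(f):
        for j, d in enumerate(g):
            e = d
            for _ in range(i):
                e = pfrob(e)
            out[i + j] = padd(out[i + j], pmul(c, e))
    return out

def phi(a):  # the Carlitz module: the F_p-algebra map determined by phi_t = t + tau
    phit = [(0, 1), (1,)]  # t + tau
    out, power = [], [(1,)]
    for coeff in a:  # a = tuple of coefficients of a(t)
        if coeff % p:
            out = tadd(out, tmul([(coeff % p,)], power))
        power = tmul(power, phit)
    return out
```

For instance, `phi((1, 1))` returns `[(1, 1), (1,)]`, i.e., $\phi_{t+1}=(t+1)+\tau$, and reducing the coefficients of `phi((1, 0, 1))` modulo $t^2+1$ leaves only the $\tau^2$ term.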
Combining the representations $\rho_\lambda$ together, we obtain a continuous homomorphism \begin{equation} \label{E:Carlitz expression} \prod_\lambda \rho_\lambda \colon W_F^{{\operatorname{ab}}} \to F_\infty^+\times \prod_{\lambda\neq \infty} \mathcal O_\lambda^\times. \end{equation} By composing $\prod_\lambda \rho_\lambda$ with the quotient map $\mathbb A_F^\times \to C_F$, we obtain a continuous homomorphism $\rho\colon W_F^{\operatorname{ab}} \to C_F$. Corollary~\ref{C:main} says that $\rho$ is an isomorphism of topological groups (and that the inverse of $\sigma \mapsto \rho(\sigma)^{-1}$ is the Artin map $\theta_F$). The quotient map $F_\infty^+\times \prod_{\lambda\neq \infty} \mathcal O_\lambda^\times\to C_F$ is actually an isomorphism, so the map (\ref{E:Carlitz expression}) is also an isomorphism. Taking profinite completions, we obtain an isomorphism \begin{equation} \label{E:Carlitz expression 2} \operatorname{Gal}(F^{\operatorname{ab}}/F) \xrightarrow{\sim} \widehat{F_\infty^+}\times \prod_{\lambda\neq \infty} \mathcal O_\lambda^\times= \widehat{\ang{t}} \cdot (1+\mathfrak{m}_\infty) \times \prod_{\lambda\neq \infty} \mathcal O_\lambda^\times. \end{equation} Using this isomorphism, we can now describe three linearly disjoint abelian extensions of $F$ whose compositum is $F^{\operatorname{ab}}$. \subsubsection{Torsion points} The representation $\chi:=\prod_{\lambda\neq \infty}\rho_\lambda \colon \operatorname{Gal}(F^{\operatorname{ab}}/F)\to \prod_{\lambda\neq\infty} \mathcal O_\lambda^\times = \widehat{A}^\times$ arises from the Galois action on the torsion points of $\phi$ as described in \S\ref{SS:rational field}.
The fixed field in $F^{\operatorname{ab}}$ of $\ker(\chi)$ is the field $K_\infty:= \cup_{m} F(\phi[m])$ where the union is over all monic polynomials $m$ of $A$, and we have an isomorphism $\operatorname{Gal}(K_\infty/F)\xrightarrow{\sim} \widehat{A}^\times$. The field $K_\infty$, which was first constructed by Carlitz, is a geometric extension of $F$ that is tamely ramified at $\infty$. \subsubsection{Extension of constants} Define the homomorphism $\deg \colon W_F^{\operatorname{ab}} \to \mathbb Z$ by $\sigma\mapsto -\operatorname{ord}_\infty(\rho_\infty(\sigma))$; this agrees with our usual definition of $\deg(\sigma)$ [it is easy to show that $\operatorname{ord}_{\tau^{-1}}(\phi(\rho_\infty(\sigma)))$ equals $\operatorname{ord}_{\tau^{-1}}(\tau^{\deg(\sigma)})=-\deg(\sigma)$, and then use (\ref{E:rank defn}) with $r=d_\infty=1$]. The map $\deg$ thus factors through $W(\bbar{k}(t)/k(t))\xrightarrow{\sim} \mathbb Z$, where $W(\bbar{k}(t)/k(t))$ is the group of $\sigma\in \operatorname{Gal}(\bbar{k}(t)/k(t))$ that act on $\bbar{k}$ as an integral power of the $q$-power Frobenius. Of course, $\bbar{k}(t)/k(t)$ is an abelian extension with $\operatorname{Gal}(\bbar{k}(t)/k(t))\xrightarrow{\sim} \widehat{\mathbb Z}$. \subsubsection{Wildly ramified extension} Define the homomorphism \[ W_F^{\operatorname{ab}} \to 1+ \mathfrak{m}_\infty,\quad \sigma\mapsto \rho_\infty(\sigma)/t^{\deg(\sigma)}; \] it is well-defined since $\operatorname{ord}_\infty(\rho_\infty(\sigma))=-\deg(\sigma)$ and since the image of $\rho_\infty$ is contained in $F^+_\infty=\ang{t} (1+\mathfrak{m}_\infty)$. Since $1+\mathfrak{m}_\infty$ is compact, this gives rise to a Galois representation \[ \beta\colon \operatorname{Gal}(F^{\operatorname{ab}}/F) \to 1+ \mathfrak{m}_\infty. \] Let $L_\infty$ be the fixed field of $\ker(\beta)$ in $F^{\operatorname{ab}}$.
The field $L_\infty$ is an abelian extension of $F$ that is unramified away from $\infty$ and is wildly ramified at $\infty$; it is also a geometric extension of $F$. We will now give an explicit description of this field. Define $a_0=1$ and for $i\geq 1$, we recursively choose $a_i \in F^{\operatorname{sep}}$ that satisfy the equation \begin{equation} \label{E:artin-schreier Carlitz} a_i^{q} - a_i = - t a_{i-1}. \end{equation} This gives rise to a chain of field extensions, $F\subset F(a_1) \subset F(a_2) \subset F(a_3) \subset \cdots$. We claim that $L_\infty=\bigcup _i F(a_i)$. The construction of $\rho_\infty$ starts by finding an appropriate series $u\in \bbar{F} [\![\tau^{-1}]\!] ^\times$ with coefficients in $F^{\operatorname{sep}}$. Define $u:=\sum_{i\geq 0} a_i \tau^{-i}$ with the $a_i$ defined as above. The recursive equations that the $a_i$ satisfy give us that $\phi_t u = u\tau$ (just multiply them out and check!). As shown in \S\ref{SS:u facts}, this implies that $u^{-1} \phi(F_\infty) u \subseteq \bbar{k} (\!(\tau^{-1})\!) $. For $\sigma\in W_F^{\operatorname{ab}}$, $\rho_\infty(\sigma)$ is then the unique element of $F_\infty^+$ for which $\phi(\rho_\infty(\sigma))= \sigma(u)\tau^{\deg(\sigma)} u^{-1}$ where $\sigma(u)$ is obtained by letting $\sigma$ act on the coefficients of $u$. In particular, $\phi(\beta(\sigma))=\sigma(u)u^{-1}$ for all $\sigma \in \operatorname{Gal}(L_\infty/F)$ since $L_\infty/F$ is a geometric extension. We find that $\beta(\sigma)=1$ if and only if $\sigma(u)=u$, and thus $L_\infty$ is the extension of $F$ generated by the set $\{a_i\}_{i\geq 1}$.\\ Using the isomorphism (\ref{E:Carlitz expression 2}), we find that $F^{\operatorname{ab}}$ is the compositum of the fields $K_\infty$, $\bbar{k}(t)$ and $L_\infty$, and that they are linearly disjoint over $F$.
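To make the first step of the tower explicit (a standard verification, included here as an aside): $a_1$ satisfies $a_1^q-a_1=-t$, the roots of $X^q-X+t$ differ by elements of $k$, and any root has a pole at the unique place of $F(a_1)$ above $\infty$, which has ramification index $q$. Hence $\infty$ is totally and wildly ramified in $F(a_1)/F$, $[F(a_1):F]=q$, and $\operatorname{Gal}(F(a_1)/F)\cong (k,+)$ acting via $a_1\mapsto a_1+c$ for $c\in k$.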
This is exactly the description given by Hayes in \cite{MR0330106}*{\S5}; the advantage of our representation $\rho_\infty$ is that the fields $\bbar{k}(t)$ and $L_\infty$ arise naturally. \begin{bibdiv} \begin{biblist} \bib{Magma}{article}{ author={Bosma, Wieb}, author={Cannon, John}, author={Playoust, Catherine}, title={The {M}agma algebra system. {I}. {T}he user language}, date={1997}, journal={J. Symbolic Comput.}, volume={24}, number={3-4}, pages={235\ndash 265}, note={Computational algebra and number theory (London, 1993)}, } \bib{MR1501937}{article}{ author={Carlitz, Leonard}, title={A class of polynomials}, date={1938}, journal={Trans. Amer. Math. Soc.}, volume={43}, number={2}, pages={167\ndash 182} } \bib{MR902591}{incollection}{ author={Deligne, Pierre}, author={Husemoller, Dale}, title={Survey of {D}rinfel$'$d modules}, date={1987}, booktitle={Current trends in arithmetical algebraic geometry ({A}rcata, {C}alif., 1985)}, series={Contemp. Math.}, volume={67}, publisher={Amer. Math. Soc.}, address={Providence, RI}, pages={25\ndash 91}, } \bib{MR0384707}{article}{ author={Drinfel$'$d, V.~G.}, title={Elliptic modules}, date={1974}, journal={Mat. Sb. (N.S.)}, volume={94(136)}, pages={594\ndash 627, 656}, } \bib{MR0439758}{article}{ author={Drinfel$'$d, V.~G.}, title={Elliptic modules. {II}}, date={1977}, journal={Mat. Sb. (N.S.)}, volume={102(144)}, number={2}, pages={182\ndash 194, 325}, note={English translation: Math. USSR-Sb. \textbf{31} (1977), 159--170}, } \bib{MR1423131}{book}{ author={Goss, David}, title={Basic structures of function field arithmetic}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete (3)}, publisher={Springer-Verlag}, address={Berlin}, date={1996}, volume={35}, } \bib{MR0330106}{article}{ author={Hayes, D.~R.}, title={Explicit class field theory for rational function fields}, date={1974}, journal={Trans. Amer. Math. 
Soc.}, volume={189}, pages={77\ndash 91}, } \bib{MR535766}{incollection}{ author={Hayes, David~R.}, title={Explicit class field theory in global function fields}, date={1979}, booktitle={Studies in algebra and number theory}, series={Adv. in Math. Suppl. Stud.}, volume={6}, publisher={Academic Press}, address={New York}, pages={173\ndash 217}, } \bib{MR1196509}{incollection}{ author={Hayes, David~R.}, title={A brief introduction to {D}rinfeld modules}, date={1992}, booktitle={The arithmetic of function fields ({C}olumbus, {OH}, 1991)}, series={Ohio State Univ. Math. Res. Inst. Publ.}, volume={2}, publisher={de Gruyter}, address={Berlin}, pages={1\ndash 32}, } \bib{tate-gcft}{incollection}{ author={Tate, J.~T.}, title={Global class field theory}, date={1967}, booktitle={Algebraic {N}umber {T}heory ({P}roc. {I}nstructional {C}onf., {B}righton, 1965)}, publisher={Thompson, Washington, D.C.}, pages={162\ndash 203}, } \bib{MR2018826}{article}{ author={Yu, Jiu-Kang}, title={A {S}ato-{T}ate law for {D}rinfeld modules}, date={2003}, journal={Compositio Math.}, volume={138}, number={2}, pages={189\ndash 197}, } \bib{Zywina-SatoTate}{unpublished}{ author={Zywina, David}, title={{T}he {S}ato-{T}ate law for {D}rinfeld modules}, date={2011}, note={preprint}, } \end{biblist} \end{bibdiv} \end{document}
\begin{document} \title{Singularity barriers and Borel plane analytic properties of $1^+$ difference equations} \author{O. Costin}\gdef\shorttitle{Borel plane analyticity of $1^+$ equations} \maketitle \date{} \begin{abstract} The paper addresses generalized Borel summability of ``$1^+$'' difference equations in ``critical time''. We show that the Borel transform $Y$ of a prototypical such equation is analytic and exponentially bounded for $\Re(p)<1$ but there is no analytic continuation from $0$ toward $+\infty$: the vertical line $\ell:=\{p:\Re(p)=1\}$ is a singularity barrier of $Y$. There is a unique natural continuation through the barrier, based on the Borel equation dual to the difference equation, and the functions thus obtained are analytic and decaying on the other side of the barrier. In this sense, the Borel transforms are analytic and well behaved in $\mathbb{C}\setminus\ell$. The continuation provided allows for generalized Borel summation of the formal solutions. It differs from standard ``pseudocontinuation'' \cite{Shapiro}. This stresses the importance of the notion of cohesivity, a comprehensive extension of analyticity introduced and thoroughly analyzed by \'Ecalle. We also discuss how, in some cases, \'Ecalle acceleration can provide a procedure of natural continuation beyond a singularity barrier. \end{abstract} \section{Introduction} In the case of generic differential equations, generalized Borel summation of a formal power series solution, in the sense of \'Ecalle \cite{Ecalle}, essentially consists in the following steps: (1) Borel transform with respect to a {\em critical time}, related to the order of exponential growth of possible solutions (see also the note below), usual summation of the obtained series, analytic continuation along the real line or in its neighborhood, proper averaging of the analytic continuations (e.g.
medianization) toward infinity, possible use of acceleration operators and Laplace transform $\mathcal{L}$. The choice of the critical time, or of a very slight perturbation --weak acceleration-- of it, is crucial for \'Ecalle summability. A slower variable (time) would hide the resurgent structure encapsulating the Stokes phenomena and, perhaps more importantly, would introduce superexponential growth preventing Laplace transformability at least in some directions. In a faster variable, convergence of the Borel transformed series would not hold. In some functional equations and so-called type $1^+$ difference equations, new difficulties occur. For them, \'Ecalle replaces analyticity with {\em cohesivity} \cite{Ecalle2}. This property was studied rigorously for some classes of difference equations by Immink \cite{Immink}. It is the purpose of this note to show the importance of this notion: even in simple $1^+$ difference equations, the critical-time Borel transform is shown to have barriers of singularities, preventing continuation in some half-plane. This occurs in the prototypical equation \begin{equation} \label{f1} y(x+1)=\frac{1}{x}y(x)+\frac{1}{x} \end{equation} (example 2 of \cite{Immink}). A simple proof of Borel space natural boundaries is not present in the literature, as far as the author is aware. We also show that the barrier is traversable: on the real line the associated function is well defined and Laplace transformable to a solution of the difference equation. This function is real analytic except at one point and, in fact, has analytic continuation in the whole of $\mathbb{C}\setminus\ell$ with $\ell=\{p:\Re(p)=1\}$ a singularity barrier. The present approach is adaptable to more general equations. We expect barriers of singularities to occur quite generally in $1^+$ cases, due to the fact that the pole position is periodic in the original variable, while critical time introduces a logarithmic shift in this periodicity.
This leads to lacunary series in Borel plane, hence to singularity barriers. Nonetheless, further analysis shows that, in this simple case, and likely in quite some generality, softer Borel summation methods and study of Stokes phenomena are possible, relying on the convolution equation for continuation through singularity barriers. In spite of its simplicity, the properties in Borel plane of this equation, in the critical time, are very rich. \noindent {\bf Note on critical time.} The solution of the homogeneous equation associated to (\ref{f1}), $f(x)=1/\Gamma(x)$, has large $x$ behavior $(x/{2\pi})^{1/2}e^{-x\ln x+x}$. The critical time $z$ is then the leading asymptotic term in the exponent, $z=x\ln x$ \cite{Immink}. (The origin of the terminology $1^+$ is related to the exponential order slightly larger than one of $f$.) Various slight perturbations of this variable, weak accelerations, are used and indeed are quite useful. \section{The singularity barrier} \begin{Theorem}\label{barr} Let $Y(p)$ be the Borel transform of $y$ in (\ref{f1}) in the critical time $z$. Then $Y(p)$ is analytic on $\{p\ne 0:\arg(p) \in (\pi-2\pi,\pi+2\pi); \Re(p)<1\}$ and exponentially bounded as $|p|\to\infty$ in this region. The line $\ell=\{p:\Re p=1\}$ is a singularity barrier of $Y$. \end{Theorem} {\em Proof of the theorem.} Let $\tilde{y}$ be the formal power series solution of (\ref{f1}). We study the analytic properties of the Borel transform $\mathcal{B}\tilde{y}:=Y(p)$ of $\tilde{y}$ on $\mathcal{S}_0$, the Riemann surface of the log at zero, with respect to the critical time $z$. In critical time the functional equation (\ref{eq:complicatedeq}) of $\mathcal{B}\tilde{y}$ is unwieldy, and instead we look at the meromorphic structure of solutions, on which we perform a Mittag--Leffler decomposition.
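The normalization in the note on critical time is just Stirling's formula, recorded here for convenience: since
\[
\Gamma(x)\sim \sqrt{2\pi/x}\, e^{x\ln x-x}\qquad (x\to\infty),
\]
we get $1/\Gamma(x)\sim (x/2\pi)^{1/2}e^{-x\ln x+x}$, and the leading term $x\ln x$ in the exponent dictates the choice $z=x\ln x$.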
It is straightforward to check that $\tilde{y}$ is the asymptotic series for $\arg(x)\ne 0$ of the following actual solution of (\ref{f1}) \begin{equation} \label{e01} y_0(x)=\sum_{k=1}^{\infty}\prod_{j=1}^k\frac{1}{x-j} \end{equation} The fact that Res\,$(y_0;x=n)= e^{-1}/\Gamma(n)$ and the behavior at infinity of $y_0$ show that the Mittag-Leffler partial fraction decomposition of (\ref{e01}) is \begin{equation} \label{e02} y_0=e^{-1}\sum_{k=1}^{\infty}\frac{1}{(x-k)\Gamma(k)} \end{equation} (1) {\em Analyticity in the left half-plane.} The inverse function $z\mapsto x(z)$ of $x\ln x$ is analytic on $\mathcal{S}_0\setminus (-e^{-1},0)$ as can be seen from the differential equation $\frac{dx}{dz}=(1+\ln x)^{-1}$. Then $Y(p)$ is the analytic continuation of the function defined for $p$ {\em negative} by \begin{equation} \label{s2} -\frac{1}{2\pi i}\int_{ i\mathbb{R}-e^{-1}}e^{pz}y_0(x(z))dz=\frac{1}{2\pi i}\int_{C}e^{pz}y_0(x(z))dz,\ \ p\in\mathbb{R}^- \end{equation} where $C$ is a contour from $\infty+i0$ around $ -e^{-1}$ and to $\infty-i0$. (2) {\em Identities for finding continuation in $\{p:\Re(p)<1\}$ and exponential bounds.} For analytic continuation clockwise we start from $\arg p=\pi$ and rotate up the contour, collecting the residues: \begin{eqnarray} \label{s4} Y(p)=\frac{1}{2e\pi i}\sum_{k=1}^{\infty}\frac{1}{\Gamma(k)}\int_{C}\frac{e^{pz}dz}{x(z)-k}=F(p)+\frac{1}{2e\pi i}\int_{C_1}\sum_{k=1}^{\infty}\frac{1}{\Gamma(k)(x(z)-k)}e^{pz}dz\nonumber \\\ \text{where }\ F(p):=\sum_{k=1}^{\infty}\frac{1+\ln k}{e\Gamma(k)}e^{p k\ln k} \end{eqnarray} and where for small $\phi>0$, $C_1$ is the contour from $\infty e^{i\phi+i0}$ around $(-e^{-1},0)$ to $\infty e^{i\phi-i0}$. As $\arg p$ is decreased from $\pi$ to zero (and further to $-\pi$), $\phi$ can be increased from $0^+$ to $2\pi^-$ making $\int_{C_1}$ visibly analytic in $\{p\ne 0:\arg p \in(-\pi,\pi)\}$ and exponentially bounded as $|p|\to \infty$.
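The residue value used above can be verified directly (a short computation, spelled out for convenience): near $x=n$ only the terms of (\ref{e01}) with $k\ge n$ are singular, and
\[
\operatorname{Res}(y_0;x=n)=\sum_{k\ge n}\ \prod_{j=1,\,j\ne n}^{k}\frac{1}{n-j}
=\frac{1}{(n-1)!}\sum_{k\ge n}\frac{(-1)^{k-n}}{(k-n)!}=\frac{e^{-1}}{\Gamma(n)},
\]
since $\prod_{j=1}^{n-1}(n-j)=(n-1)!$ and $\prod_{j=n+1}^{k}(n-j)=(-1)^{k-n}(k-n)!$.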
We decomposed $Y$ into a sum of a lacunary Dirichlet series and a function analytic in the right half-plane. (3) {\em The natural boundary.} The Dirichlet series $F$ is manifestly analytic for $\Re p <1$. As $p\uparrow 1$ we have $F(p)\to +\infty$ and thus $F$ is not entire. But then, by the Fabry-Wennberg-Szasz-Carlson-Landau theorem \cite{Mandelbrojt} pp. 18, $\ell$ is a singularity barrier of $F$ and thus of $Y$. For a detailed analysis, see also the note below. ${ \hbox{\enspace${\mathchoice\sqr54\sqr54\sqr{4.1}3\sqr{3.5}3}$}} $ {\bf Note: Description of the behavior of $F$ at $\ell$}. Since all terms of the Dirichlet series are positive on the real line, it is easy to check using the discrete Laplace method\footnote{Determining, for fixed $p$, the maximal term of the series and doing a stationary point expansion nearby.} that $F$ increases like an iterated exponential along $\mathbb{R}^+$ toward $\ell$, $F(p)\propto \exp((1-p)\exp(1/(1-p)))$. There are densely many points near $\ell$ where the growth is similar; it suffices to take a sequence of $k\in\mathbb{N}$, $\Re(p)=k/(1+\ln(k))$ and $(1+\ln(k))\Im (p)$ very close to an integer multiple of $2\pi$. (A Rouch\'e type argument shows there are also infinitely many zeros with a mean separation of order the reciprocal of the maximal order of growth, $\ln(d)\sim -(1-p)e^{1/(1-p)}$.) Rather than attempting some form of continuation through points where $F$ is bounded, which are easy to exhibit, we prefer to soften the barrier first, by acceleration techniques. \section{General Borel summability in the direction of the barrier. Properties beyond the barrier.} \noindent {\bf Strategy of the approach}. It is convenient to perform a ``very weak acceleration'' to smoothen the behavior of $Y(p)$ near $\ell$. The natural choice of variable is $z=\ln\Gamma(x)$, but we prefer to slightly accelerate further, to $z_m(x)$ defined in Remark~\ref{R11} below.
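The iterated-exponential rate quoted in the note above can be recovered from the discrete Laplace method sketched there (constants in the exponent are not tracked): by Stirling, the logarithm of the $k$-th term of $F$ is approximately $(p-1)k\ln k+k$, which for $0<p<1$ is maximized at $\ln k_*=\frac{1}{1-p}-1$, where it takes the value
\[
(1-p)k_*=e^{-1}(1-p)e^{1/(1-p)};
\]
hence $\ln F(p)\propto (1-p)e^{1/(1-p)}$ as $p\uparrow 1$, in agreement with the stated behavior.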
We construct actual solutions of (\ref{f1}) starting from an incomplete Borel sum. We identify these actual solutions and show they are inverse Laplace transformable. Furthermore, they solve the associated convolution equation in Borel space. From these points of view, we have a unique continuation on $\mathbb{R}^+$. We show that the function thus obtained is real analytic on $\mathbb{R}\setminus\{1\}$ and continuable to the whole of $\mathbb{C}\setminus \ell$. The general solution of (\ref{f1}) is \begin{equation} \label{eq:gensol} y(x)=y_0(x)+\frac{f(x)}{\Gamma(x)} \end{equation} where $f$ is any periodic function of period one, as can easily be seen by making a substitution of the form (\ref{eq:gensol}) in the equation. It can be easily checked that the following solution of (\ref{f1}) \begin{equation} \label{eq:eqf2} y_1(x)=y_0(x)+\frac{\pi}{ e}\frac{\cot\pi x}{\Gamma(x)} \end{equation} is an entire function, and has the asymptotic behavior $\tilde{y}$, the formal series solution to (\ref{f1}) defined in the proof of the theorem. \begin{Remark}\label{R11} Let $m\in\mathbb{N}$ and $z_m(x)=x\ln x -x-(m+\frac{1}{2})\ln x$. For given $C>0$, there is a one--parameter family of solutions of (\ref{f1}) which are analytic and polynomially bounded in a region of the form $S_C=\{x:\Re (z_m(x))\ge C\}$. They are of the form $y_c(x)=y_1(x)+c/\Gamma(x)$ for some constant $c$. \end{Remark} \noindent{\em Proof}. The solution (\ref{eq:eqf2}) already has the stated boundedness and analyticity properties (and in fact, it decreases at least like $x^{-m}$ in $S_C$). The general solution is of the form $y_1+f(x)/\Gamma(x)$ with $f$ periodic, as remarked at the beginning of the section. Analyticity implies $f$ is analytic, and boundedness in the given region implies $f$ is bounded on the line $\partial S_C$. By periodicity, $f$ is polynomially bounded in the whole of $\mathbb{C}$, which means $f$ is a polynomial, and by periodicity, a constant.
${ \hbox{\enspace${\mathchoice\sqr54\sqr54\sqr{4.1}3\sqr{3.5}3}$}} $ \begin{Theorem}[Generalized Borel summability]\label{BorelSum} (i) There exists a one parameter family of solutions of (\ref{f1}) which can be written as $\mathcal{L}_{z_m}H_c:=\int_0^{\infty}e^{-z_m p}H_c(p)dp$ where $H_c=\mathcal{B}_{z_m}\tilde{y}$ is analytic and exponentially bounded for $\Re(p)<1$ and $H_c\in C^{m-1}(\mathbb{R}^+)$. (ii) The $H_c$ are real analytic on $\mathbb{R}^+\setminus\{1\}$; they extend analytically to $\mathbb{C}\setminus\ell$, the line $\ell$ is a singularity barrier of $H_c$, and the functions are $C^{m-1}$ on the two sides of the barrier\footnote{The values on the two sides cannot, obviously, be the same.}. Furthermore, for $\Re(p)>1$, the $H_c$ decrease toward infinity in $\mathbb{C}$. \end{Theorem} \begin{Remark} It would not be correct at this time to conclude that, say, $\mathcal{L}^{-1}y_1$ provides Borel summation of $\tilde{y}$; we need to show that $y_1$ satisfies the necessary Gevrey--type estimates to identify the inverse Laplace transform with $\mathcal{B}\tilde{y}$ in the unit disk. We prefer to proceed in a more general way, not using explicit formulas, but constructing actual solutions starting with an incomplete Borel summation (and identifying them later with the explicit formulas). \end{Remark} {\em Proof of Theorem \ref{BorelSum}, (i)} We redo the analysis of the proof of Theorem 1 in the variable $z=z_m$ and we get a decomposition of the form (\ref{s4}), where now $F$ is replaced by \begin{equation} \label{eq:s5} F_2=\sum_{k=1}^{\infty}\frac{\ln k+\frac{m}{k}}{e\Gamma(k)}e^{p \big[k\ln k-k-(m+\frac{1}{2})\ln k\big]} \end{equation} which is a Dirichlet series of the same type as $F$ and hence has $\ell$ as a singularity barrier. However, $F_2$ is (manifestly) uniformly $C^{m-1}$ up to $\ell$, and thus so is $Y(p)$.
For the solutions of (\ref{f1}) that decrease in a sector in the right half-plane it is clear that the dominant balance is between $y(x+1)$ and $1/x$. We then rewrite the equation to prepare it for a contraction mapping argument in Borel space. By a slight abuse of notation we write $y(z)$ for $y(x(z))$ and we have $$(x(z)-1)y(x(z))=y(x(z)-1)+1$$ $$(x(z)-1)y(z)=y(z-g(z))+1$$ where $g(z)=\ln z-\ln\ln z+o(1)$ and then $$(x(z)-1)y(z)=\sum_{k=0}^{\infty}y^{(k)}(z)g(z)^k/k!+1$$ Thus, dividing by $x(z)-1$ and taking the inverse Laplace transform, with $G_k(p)$ the inverse Laplace transform of ${g(z)^k}/{(x(z)-1)}/k!$, we have \begin{equation} \label{eq:complicatedeq} Y(p)=\sum_{k=0}^{\infty}[(-p)^kY]*G_k(p)+F(p) \end{equation} The term $G_k$ is (roughly) bounded by $|e^{-k(1-p)}|$, as can be seen by the saddle point method applied to the inverse Laplace transform integral. It is easy to check, using standard contraction mapping arguments (see e.g. \cite{Duke}), that $Y$ is given by a convergent ramified expansion in the open unit disk. This was to be expected from estimates of the divergence type of the formal solutions of (\ref{f1}). However, given the estimates on the terms of the convolution equation, the equation, as written, cannot be straightforwardly interpreted beyond $\Re(p)=1$, the threshold of convergence of the ingredient series. It is however possible to write a meaningful global equation by returning to the definition in terms of Laplace transform.
We then write $$\mathcal{L}^{-1}y(z+g(z))=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}dz e^{pz}\int_0^{\infty}dq e^{-q(z+g(z))}Y(q)= \int_0^{\infty}H(p,q)Y(q)dq$$ where $$H(p,q)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{(p-q)z-qg(z)}dz= \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{(p-q)z+q(\ln\ln z+...)}z^{-q}dz$$ which is well defined for $q>0$ and integrable at $q=0$; the convolution equation becomes \begin{equation} \label{eq:conveq} \int_0^{\infty}H(p,q)Y(q)dq=Y*\mathcal{L}^{-1}\Big[\frac{1}{x(z)-1}\Big]+ \mathcal{L}^{-1}\Big[\frac{1}{x(z)-1}\Big] \end{equation} Based on the solution on $[0,1)$ of (\ref{eq:complicatedeq}) we construct solutions to (\ref{f1}) and their inverse Laplace transforms provide continuation of $Y$ past $\Re(p)=1$ and implicitly solutions to (\ref{eq:conveq}). We define the incomplete Borel sum $$\hat{y}=\int_0^1 e^{-zp} Y_1(p)dp$$ Formal manipulation shows that $\hat{y}$ satisfies (\ref{f1}) with errors of the form\footnote{Resulting from incomplete representation of $1/(x(z)-1)$.} $o(e^{-z})$ (or $o(x^{m}/\Gamma(x))$ in the variable $x$), where the estimate of the errors is uniform in the right half-plane in $z$, or in a region $S_C$ w.r.t.\ $x$. We look for a solution of (\ref{f1}) in the form $\hat{y}+\delta(x)/\Gamma(x)$. Then $\delta(x)$ satisfies $\delta(x+1)=\delta(x)+R(x)$ (the $1^+$ degeneracy is not present anymore) where $R(x)=o(x^m)$ with differentiable asymptotics (by Watson's lemma). A solution of this equation is $-\mathcal{P}^{m+3}\sum_{k=0}^{\infty}R^{(m+3)}(x+k)$, with $\mathcal{P}$ an antiderivative, which is manifestly analytic and polynomially bounded in regions of the form $S_C$, and $\hat{y}+\delta/\Gamma$ is manifestly a solution of (\ref{f1}), which, by construction, is also polynomially bounded in $S_C$. By Remark \ref{R11}, $\hat{y}+\delta/\Gamma$ is one of the solutions $y_c$.
But $y_c$ is inverse Laplace transformable with respect to $z$, and has sufficient decay to ensure the existence of $m-1$ derivatives of the transform. By Remark \ref{R11}, any solution that decreases in the natural region $S_C$ in the right half plane can be represented in this way and thus the conclusion follows.${ \hbox{\enspace${\mathchoice\sqr54\sqr54\sqr{4.1}3\sqr{3.5}3}$}} $ \begin{Corollary} In $\{p:\Re(p)<1\}\cup [1,\infty)$, there is a one parameter family of Laplace transformable solutions to (\ref{eq:conveq}), the functions $H_c$ in Theorem \ref{BorelSum} (i). They have $\ell$ as a barrier of singularities. \end{Corollary} \noindent {\em Proof of Theorem \ref{BorelSum} (ii)}. Since all Laplace transformable solutions to (\ref{eq:conveq}) are those provided in Remark \ref{R11}, we analyze the properties of the inverse Laplace transform of these functions for $\Re(p)>1$. We note that, due to the fact that $y_c(z_m)$ increase at most as $e^{z_m}/z_m^m$, we can deform, for $\Re(p)>1$, the integral \begin{equation} \label{eq:contdeform} \int_{c-i\infty}^{c+i\infty} e^{pz_m} y_c(z_m) dz_m \end{equation} to an integral \begin{equation} \label{eq:contdeform2} \int_C e^{pz_m} y_c(z_m) dz_m \end{equation} \noindent where $C$ starts at $-\infty-i\epsilon$, avoids the origin through the right half plane and turns back to $-\infty+i\epsilon$. In view of the bound mentioned above for $ y_c(z_m)$, this function is manifestly bounded and analytic for $\Re(p)>1$, and in fact is continuous with $m-1$ derivatives up to $\Re(p)=1$. \noindent {\bf Cohesive continuation and pseudocontinuation}. It follows from our analysis and from the fact that \'Ecalle's cohesive continuation also provides solutions to the equation, that the results of the continuations are the same (modulo the choice of one parameter, discussed in the Appendix).
This type of continuation is the natural one since it provides solutions to the associated convolution equation. It is easy to see however that this continuation is not a classical pseudocontinuation through the barrier, as follows from the following Proposition. \begin{Proposition} The values of $H_c$ on the two sides of $\ell$ are not pseudocontinuations \cite{Shapiro} of each other. \end{Proposition} \noindent {\em Proof.} Indeed, pseudocontinuation \cite{Shapiro}, pp. 49, requires that the analytic elements coincide almost everywhere on the two sides of the barrier. But $H_c$ is continuous on both sides, and then the values would coincide everywhere, immediately implying analyticity through $\ell$, a contradiction. \begin{Remark} The axis $\mathbb{R}^+$, which is also a Stokes line, plays a special role. No other points on the singularity barrier can be used for Borel summation, as shown in the proposition below. \end{Remark} \begin{Proposition}\label{ExpGrowth} No Laplace transformable solution of (\ref{eq:conveq}) exists in directions $e^{i\phi}\mathbb{R}^+$, $\phi\in(0,\pi/2)$. (The same conclusion holds with $\phi\in(-\pi/2,0)$.) \end{Proposition} {\em Proof}. Indeed, the Laplace transform $y$ of such a solution would be analytic and decreasing in a half plane bisected by $e^{i\phi}$ and solve (\ref{f1}). Since $1/\Gamma(x)$ is entire and the general solution is of the form (\ref{eq:gensol}), by periodicity $f_1=f-\frac{\pi}{e}\cot\pi x$ would be entire too. Taking now a ray $te^{i(\phi+\pi/2-\epsilon)}$ we see, using again periodicity, that $f_1$ decreases factorially in the upper half plane. Standard contour deformation shows that half of the Fourier coefficients are zero, $f_1 (x)=\sum_{k\in\mathbb{N}}c_k e^{ikx}$ and that, because $f$ is entire, the $c_k$ decrease faster than geometrically. But then $f_1(x)=:F(\exp(2\pi ix))$ with $t\mapsto F(t)$ entire.
When $x\to i\infty$, $t\to 0$ and, unless $F=0$, we have $F(t)\sim ct^n$ for some $n\in\mathbb{N}$, thus $f(x)\sim ce^{inx}$, incompatible with factorial decay. This means $f=0$, but then (\ref{eq:gensol}) is not analytic on the real line\footnote{We should note that a procedure mimicking the proof of Theorem \ref{BorelSum} (i) in non-horizontal directions would fail because now the remainders $R(x)$ would grow fast along the direction of evolution -- parallel to $\mathbb{R}^+$.}. ${ \hbox{\enspace${\mathchoice\sqr54\sqr54\sqr{4.1}3\sqr{3.5}3}$}} $ \section{Appendix: Weak acceleration, integral representation, median choice, natural crossing of the barrier}\label{Cont} A weak acceleration is provided by the passage $x\ln x -x\mapsto x$. The $x$-inverse Laplace transform of (\ref{f1}) satisfies $e^{-p}Y-\int_0^p Y(s)ds -1=0$, with the solution $Y=e^{-1}\exp(p+\exp(p))$. $\mathcal{L}Y$ exists along any (combination of) paths $R_n$ starting from the origin and ending on a ray of the form $p=\mathbb{R}^++(2n+1)i\pi, n\in\mathbb{Z}$. The function $f_+=\int_{R_1}e^{-xp}e^{p+e^p-1}dp$ is manifestly entire\footnote{It provides, in view of the superexponential properties of the integrand, Borel oversummation.}. For $x=-t;t\to\infty$ the saddle point method gives $$ f_+\sim \sqrt{2\pi}e^{t\ln t-t+\pi i t+\frac{1}{2}\ln t-1}$$ which identifies $f_+$ with $ y_1+\pi i/e/\Gamma(x)$. With obvious notations, we see that $y_1=\frac{1}{2}(f_++f_-)$, reminiscent of medianization. We have also checked numerically that $y_1$ is approximated by least term truncation of its asymptotic series with errors $o(1/\Gamma(x))$. (The integral representation would allow for a rigorous check, but we have not done this and we state the property as a conjecture; we also conjecture that the solution constructed in Theorem \ref{BorelSum} is $y_1$; this could be checked by looking at the asymptotic behavior on $\partial S_C$.)
There is, obviously, only one solution so well approximated. It should then be considered the natural candidate for the medianized sum in critical time, and its inverse Laplace transform, defined on the whole of $\mathbb{R}^+$, the natural continuation of the Borel transform $\mathcal{B}\tilde{y}$ past the barrier. For all these reasons it is likely, but we have not checked it rigorously, that $y_1$ corresponds to the medianized cohesive continuation of \'Ecalle. \begin{Remark} The procedure described of naturally crossing a barrier does not necessarily depend on the existence of an underlying functional equation. It is sufficient to have accelerations as above that allow for Borel (over)summation along some paths, and choose as a natural actual function the one that has minimal errors in least term truncation, or resort to a medianized choice. The process of continuation through the barrier can be written as the composition $\mathcal{L}^{-1}_{z_m}\mathcal{L}_{z_1}\mathcal{B}_{z_1}\hat{\mathcal{L}}_{z_m}$ with $\hat{\mathcal{L}}$ the formal Laplace transform, and is expected to commute with most operations of natural origin. It is applicable to many other series including the Dirichlet series $\sum_{n=0}^{\infty} e^{(p-1)n^2}$. \end{Remark} Finally, it seems a plausible conjecture that in the case of nonlinear systems, infinitely many equally spaced ``isolated'' barriers should occur. \noindent {\bf Acknowledgments}. The author is grateful to B. L. J. Braaksma and G. Immink for pointing out the problem and for very useful discussions, and to R. D. Costin for a valuable technical suggestion. The work was partially supported by NSF grant 0406193. \begin{thebibliography}{99} \bibitem{Braaksma} B. L. J. Braaksma, {\em Transseries for a class of nonlinear difference equations}, J. Differ. Equations Appl. 7, no. 5, 717--750 (2001). \bibitem{Duke} O.
Costin, {\em On Borel summation and Stokes phenomena of nonlinear differential systems}, Duke Math. J. 93, no. 2 (1998). \bibitem{CK} O. Costin, M. D. Kruskal, {\em On optimal truncation of divergent series solutions of nonlinear differential systems; Berry smoothing}, Proc. R. Soc. Lond. A 455, 1931--1956 (1999). \bibitem{Ecalle} J. \'Ecalle, {\em Fonctions r\'esurgentes}, Publications Math\'ematiques d'Orsay, 1981. \bibitem{Ecalle2} J. \'Ecalle, {\em Cohesive functions and weak accelerations}, J. Anal. Math. 60, 71--97 (1993). \bibitem{Immink} G. Immink, {\em A particular type of summability of divergent power series, with an application to difference equations}, Asymptot. Anal. 25, no. 2, 123--148 (2001). \bibitem{Kuik} R. Kuik, {\em Transseries in difference and differential equations}, Thesis, University of Groningen (2003). \bibitem{Mandelbrojt} S. Mandelbrojt, {\em S\'eries lacunaires}, Hermann (1936), pp. 18. \bibitem{Shapiro} W. T. Ross and H. S. Shapiro, {\em Generalized Analytic Continuation}, American Mathematical Society University Lecture Series Vol. 25 (2002). \end{thebibliography} \end{document}
\begin{document} \title[Topological properties of convolutor spaces]{Topological properties of convolutor spaces via the short-time Fourier transform} \author[A. Debrouwere]{Andreas Debrouwere} \address{Department of Mathematics: Analysis, Logic and Discrete Mathematics, Ghent University, Krijgslaan 281, 9000 Gent, Belgium} \email{[email protected]} \thanks{A. Debrouwere was supported by FWO-Vlaanderen, through the postdoctoral grant 12T0519N} \author[J. Vindas]{Jasson Vindas} \thanks{J. Vindas was supported by Ghent University, through the BOF-grants 01J11615 and 01J04017.} \address{Department of Mathematics: Analysis, Logic and Discrete Mathematics, Ghent University, Krijgslaan 281, 9000 Gent, Belgium} \email{[email protected]} \subjclass[2010]{Primary 46A13, 46E10, 46F05; Secondary 46M18, 81S30.} \keywords{Convolutor spaces; Short-time Fourier transform; Completeness of inductive limits; Gelfand-Shilov spaces.} \begin{abstract} We discuss the structural and topological properties of a general class of weighted $L^1$ convolutor spaces. Our theory simultaneously applies to weighted $\mathcal{D}'_{L^1}$ spaces as well as to convolutor spaces of the Gelfand-Shilov spaces $\mathcal{K}\{M_p\}$. In particular, we characterize the sequences of weight functions $(M_p)_{p \in \mathbb{N}}$ for which the space of convolutors of $\mathcal{K}\{M_p\}$ is ultrabornological, thereby generalizing Grothendieck's classical result for the space $\mathcal{O}'_{C}$ of rapidly decreasing distributions. Our methods lead to the first direct proof of the completeness of the space $\mathcal{O}_{C}$ of very slowly increasing smooth functions. 
\end{abstract} \maketitle \section{Introduction} In his fundamental book \cite{Schwartz}, Schwartz introduced the space $\mathcal{O}'_{C}(\mathbb{R}^{d})$ of rapidly decreasing distributions and showed that it is in fact the space of \emph{convolutors} of $\mathcal{S}(\mathbb{R}^{d})$; namely, a tempered distribution $f \in \mathcal{S}'(\mathbb{R}^d)$ belongs to $\mathcal{O}'_C(\mathbb{R}^d)$ if and only if $f \ast \varphi \in \mathcal{S}(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}(\mathbb{R}^d)$ \cite[Thm.\ IX, p.\ 244]{Schwartz}. Moreover, for $f \in \mathcal{O}'_C(\mathbb{R}^d)$ fixed, the mapping $\mathcal{S}(\mathbb{R}^d) \rightarrow \mathcal{S}(\mathbb{R}^d), \, \varphi \mapsto f \ast \varphi$ is continuous by the closed graph theorem. This characterization suggests endowing $\mathcal{O}'_C(\mathbb{R}^d)$ with the initial topology with respect to the mapping $$ \mathcal{O}'_C(\mathbb{R}^d) \rightarrow L_b(\mathcal{S}(\mathbb{R}^d), \mathcal{S}(\mathbb{R}^d)), \quad f \mapsto (\varphi \mapsto f \ast \varphi). $$ This definition entails that $\mathcal{O}'_C(\mathbb{R}^d)$ is semi-reflexive and nuclear. A detailed study of the locally convex structure of $\mathcal{O}'_C(\mathbb{R}^d)$ was carried out by Grothendieck in the last part of his doctoral thesis \cite{Grothendieck}. He showed that this space is ultrabornological \cite[Chap.\ II, Thm.\ 16, p.\ 131]{Grothendieck} and that its strong dual is given by the space $\mathcal{O}_C(\mathbb{R}^d)$ of very slowly increasing smooth functions \cite[Chap.\ II, p.\ 131]{Grothendieck}. Consequently, $\mathcal{O}_C(\mathbb{R}^d)$ is complete and its strong dual is equal to $\mathcal{O}'_C(\mathbb{R}^d)$. See \cite{Bargetz, B-O, Larcher, L-W, Ortner} for modern works concerning these spaces.
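A minimal worked illustration of this convolutor characterization may be helpful (our added example, not from Schwartz's text): derivatives of the delta distribution are convolutors, whereas a nonzero constant is not.

```latex
% Added illustration of the convolutor characterization.
% For f = \partial^{\alpha}\delta and every \varphi \in \mathcal{S}(\mathbb{R}^d),
f \ast \varphi = \partial^{\alpha}\delta \ast \varphi
  = \partial^{\alpha}\varphi \in \mathcal{S}(\mathbb{R}^d),
\qquad \text{so } \partial^{\alpha}\delta \in \mathcal{O}'_C(\mathbb{R}^d).
% By contrast, for the constant f = 1,
1 \ast \varphi = \Bigl( \int_{\mathbb{R}^d} \varphi(t)\, {\rm d}t \Bigr) \cdot 1,
```

and a nonzero constant function does not belong to $\mathcal{S}(\mathbb{R}^d)$; hence $1 \in \mathcal{S}'(\mathbb{R}^d) \setminus \mathcal{O}'_C(\mathbb{R}^d)$.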
We would like to point out that ultrabornologicity is the crucial hypothesis for the application of many functional analytic tools such as De Wilde's open mapping and closed graph theorems \cite{M-V}, and the abstract Mittag-Leffler theorem \cite{Wengenroth}, which is very useful to solve surjectivity problems. On the other hand, two classical results of Schwartz state that a distribution $f \in \mathcal{D}'(\mathbb{R}^d)$ belongs to the space $\mathcal{D}'_{L^1}(\mathbb{R}^d)$ of integrable distributions if and only if $f \ast \varphi \in L^1(\mathbb{R}^d)$ for all $\varphi \in \mathcal{D}(\mathbb{R}^d)$ \cite[Thm.\ XXV, p.\ 201]{Schwartz} and that the strong dual of $\mathcal{D}'_{L^1}(\mathbb{R}^d)$ is given by $\mathcal{B}(\mathbb{R}^d)$ \cite[p.\ 203]{Schwartz}. Weighted $\mathcal{D}'_{L^1}(\mathbb{R}^d)$ spaces have been considered by Ortner and Wagner \cite{O-W} and, more recently, by Dimovski, Pilipovi\'c and the second author in the broader framework of distribution spaces associated to general translation-invariant Banach spaces \cite{D-P-V2015TIB}. The space $\mathcal{D}'_{L^1}(\mathbb{R}^d)$ and its weighted variants play an essential role in the convolution theory for distributions \cite{Schwartz-57,B-N-O,O-W-2,Wagner}. The main goal of this article is to develop a unified approach towards these two types of results and, at the same time, considerably extend them. More precisely, let $\mathcal{W} =(w_N)_{N \in \mathbb{N}}$ be a (pointwise) increasing sequence of positive continuous functions. We define $L^1_{\mathcal{W}}(\mathbb{R}^d)$ as the Fr\'echet space consisting of all measurable functions $f$ on $\mathbb{R}^d$ such that $\|f\|_{L^1_{w_N}} := \|fw_N\|_{L^1} < \infty$ for all $N \in \mathbb{N}$ and $$ \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}}) := \{ f \in \mathcal{D}'(\mathbb{R}^d) \, : \, f \ast \varphi \in L^1_\mathcal{W}(\mathbb{R}^d) \mbox{ for all } \varphi \in \mathcal{D}(\mathbb{R}^d) \}. 
$$ As before, for $f \in \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ fixed, the closed graph theorem implies that the mapping $\mathcal{D}(\mathbb{R}^d) \rightarrow L^1_{\mathcal{W}}(\mathbb{R}^d), \, \varphi \mapsto f \ast \varphi$ is continuous. We endow $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ with the initial topology with respect to the mapping $$ \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}}) \rightarrow L_b(\mathcal{D}(\mathbb{R}^d),L^1_{\mathcal{W}}(\mathbb{R}^d)), \quad f \mapsto (\varphi \mapsto f \ast \varphi) .$$ Next, for a positive continuous function $v$ on $\mathbb{R}^d$, we define $\mathcal{B}_{v}(\mathbb{R}^d)$ as the Fr\'echet space consisting of all $\varphi \in C^\infty(\mathbb{R}^d)$ such that $\| \varphi \|_{v, n} := \max_{|\alpha| \leq n} \| (\partial^\alpha \varphi) v\|_{L^\infty} < \infty$ for all $n \in \mathbb{N}$ and $\dot{\mathcal{B}}_{v}(\mathbb{R}^d)$ as the closure of $\mathcal{D}(\mathbb{R}^d)$ in $\mathcal{B}_{v}(\mathbb{R}^d)$. Finally, we introduce the $(LF)$-spaces $$ \mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d) := \varinjlim_{N \in \mathbb{N}} \mathcal{B}_{1/w_N}(\mathbb{R}^d), \qquad \dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d) := \varinjlim_{N \in \mathbb{N}} \dot{\mathcal{B}}_{1/w_N}(\mathbb{R}^d). $$ Since $\mathcal{D}(\mathbb{R}^d)$ is dense in $\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$, we may view its dual $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'$ as a space of distributions. The main results of the present article can then be stated as follows: \begin{theorem}\label{thm-introduction} Let $\mathcal{W} =(w_N)_{N \in \mathbb{N}}$ be an increasing sequence of positive continuous functions and suppose that \begin{equation} \forall N \in \mathbb{N}\, \exists M \geq N \, : \, \sup_{x \in \mathbb{R}^d}\frac{w_{N}(x + \: \cdot \:)}{w_{M}(x)} \in L^\infty_{\operatorname{loc}}(\mathbb{R}^d). 
\label{general-cond} \end{equation} Then, $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))' = \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ as sets. Moreover, the following statements are equivalent: \begin{itemize} \item[$(i)$] $\mathcal{W}$ satisfies the condition: \begin{equation} \forall N \in \mathbb{N} \, \exists M \geq N \, \forall K \geq M \, \exists \theta \in (0,1) \, \exists C > 0 \, \forall x \in \mathbb{R}^d: \label{Omega-switched} \end{equation} $$ {w_N(x)}^{1-\theta}{w_K(x)}^{\theta} \leq Cw_M(x). $$ \item[$(ii)$] $\mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ is complete. \item[$(iii)$] $\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ is complete. \item[$(iv)$] $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ is ultrabornological. \item[$(v)$] $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_b = \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$. \end{itemize} In such a case, the strong bidual of $\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ is given by $\mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d)$. \end{theorem} Condition \eqref{Omega-switched} means that $L^1_{\mathcal{W}}(\mathbb{R}^d)$ has property $(\Omega)$ of Vogt and Wagner \cite{M-V}. Hence, the equivalence between $(i)$ and $(iv)$ in Theorem \ref{thm-introduction} may be anticipated from the fact that, for a Fr\'echet space $E$, the operator space $L_b(\mathcal{D}(\mathbb{R}^d), E) \cong L_b(s, E)^\mathbb{N}$ is ultrabornological if $E$ has $(\Omega)$, while the converse implication holds if, e.g., $E$ is a K\"{o}the sequence space or $E$ is nuclear (cf. \cite[Thm.\ 4.1 and 4.9]{Vogt-2}). Condition $(\Omega)$ plays an important role in the splitting theory for Fr\'echet spaces \cite{Vogt-2,Vogt} and, in fact, the ideas of some of our proofs in Section \ref{sect-indlimit-smooth} stem from this theory. 
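For concreteness, we add a quick verification (our own; the specific choices of $M$ and $\theta$ below are just one possibility) that the polynomial weights $w_N(x) = (1+|x|)^N$ satisfy both \eqref{general-cond} and \eqref{Omega-switched}:

```latex
% Added verification for w_N(x) = (1+|x|)^N.
% Condition (general-cond), with M = N: since 1+|x+y| \le (1+|x|)(1+|y|),
\sup_{x \in \mathbb{R}^d} \frac{w_N(x+y)}{w_N(x)}
  \le (1+|y|)^N \in L^\infty_{\operatorname{loc}}(\mathbb{R}^d).
% Condition (Omega-switched), with M = N+1 and, for K \ge M,
% \theta = \tfrac{1}{2(K-N)} \in (0,1):
w_N(x)^{1-\theta} w_K(x)^{\theta}
  = (1+|x|)^{N + \theta(K-N)}
  = (1+|x|)^{N + \frac{1}{2}}
  \le w_{N+1}(x) = w_M(x),
% so the required inequality holds with C = 1.
```

In this model case $L^1_{\mathcal{W}}(\mathbb{R}^d)$ consists of the integrable functions all of whose polynomial moments are finite.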
By applying Theorem \ref{thm-introduction} to a constant sequence $\mathcal{W} = (w)_{N \in \mathbb{N}}$, where $w$ is a positive continuous\footnote{In fact, as we shall show in Theorem \ref{weighted-dual-DL1}, it suffices to assume that $w$ is measurable.} function on $\mathbb{R}^d$ such that $\sup_{x \in \mathbb{R}^d}w(x + \: \cdot \:)/w(x) \in L^\infty_{\operatorname{loc}}(\mathbb{R}^d),$ we obtain the analogue of Schwartz' results for a very general class of weighted $\mathcal{D}'_{L^1}(\mathbb{R}^d)$ spaces (cf.\ Theorem \ref{weighted-dual-DL1}); see \cite{D-P-V2015TIB} for earlier work in this direction. Actually, to the best of our knowledge, the full topological identity $ \mathcal{D}'_{L^1}(\mathbb{R}^d) = \mathcal{O}'_C(\mathcal{D}, L^1)$ even seems to be new in the unweighted case; Schwartz only showed that these spaces coincide algebraically and have the same bounded sets and null sequences \cite[p.\ 202]{Schwartz}. We shall also show that, under natural assumptions on the sequence $\mathcal{W} =(w_N)_{N \in \mathbb{N}}$, the space $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ coincides topologically with the space of convolutors of the Gelfand-Shilov type spaces $\mathcal{B}_{\mathcal{W}}(\mathbb{R}^d) := \varprojlim_{N \in \mathbb{N}} \mathcal{B}_{w_N}(\mathbb{R}^d).$ Hence, Theorem \ref{thm-introduction} also comprises a quantified version of Grothendieck's results (cf.\ Theorem \ref{char-UB-smooth-GS}). The space of convolutors of the space $\mathcal{K}_1(\mathbb{R}^d)$ of exponentially decreasing smooth functions \cite{Hasumi} was studied by Ziele\'{z}ny \cite{Zielezny}. He claims that this space is ultrabornological but his argument seems to contain a gap (see Remark \ref{exp-example}); Theorem \ref{char-UB-smooth-GS} contains this result as a particular instance. It should be pointed out that the methods to be employed are completely different from the ones used by Schwartz and Grothendieck. 
Namely, we first introduce weighted $(LF)$-spaces of smooth functions and study their completeness; these spaces are the analogue of $\mathcal{O}_C(\mathbb{R}^d)$ in the present setting. To this end, we shall use abstract results concerning the regularity properties of $(LF)$-spaces \cite{Wengenroth-96, Wengenroth}, which have their roots in Palamodov's homological theory for $(LF)$-spaces \cite{Palamadov}. Interestingly, when specialized to $\mathcal{O}_{C}(\mathbb{R}^{d})$, our method supplies what appears to be the first known direct proof in the literature of the completeness of $\mathcal{O}_{C}(\mathbb{R}^{d})$; we refer to \cite{L-W} for a direct proof of the dual statement, that is, the ultrabornologicity of $\mathcal{O}'_C(\mathbb{R}^d) \cong \mathcal{O}_M(\mathbb{R}^d)$. In the second part of this work, we shall exploit the mapping properties of the \emph{short-time Fourier transform} (STFT) \cite{Grochenig} on various function and distribution spaces to show that $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))' = \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ as sets and to link the topological properties of $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ to those of $\mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ and $\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$. The proof of Theorem \ref{thm-introduction} will then be achieved by combining this with the results obtained in the first part of the article. In our opinion, the use of the STFT leads to transparent and insightful proofs of rather subtle results. In this context, we highlight the papers \cite{B-O} in which the mapping properties of the STFT on $\mathcal{O}'_C(\mathbb{R}^d)$ are established by using Schwartz' theory of vector-valued distributions and \cite{K-P-S-V} in which the STFT is used to characterize weighted $\mathcal{B}'(\mathbb{R}^d)$ and $\dot{\mathcal{B}}'(\mathbb{R}^d)$ spaces in terms of the growth of convolution averages of their elements.
We were inspired by both of these works. Finally, we would also like to mention that ultradistributional analogues of our results are treated in the paper \cite{D-Vconvolutorsultra}. The plan of the article is as follows. In the auxiliary Section \ref{STFT-distributions}, we develop a theory of the STFT that applies to \emph{all} elements of $\mathcal{D}'(\mathbb{R}^d)$. This framework will enable us to deal with weight systems satisfying the very general condition \eqref{general-cond}. The locally convex structure of the $(LF)$-spaces $\mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ and $\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ is analyzed in Section \ref{sect-indlimit-smooth}. We shall show that for these spaces all regularity conditions considered in the literature (in particular, completeness) are equivalent to the fact that the weight system $\mathcal{W}$ satisfies condition \eqref{Omega-switched}. For later use, we also characterize these spaces in terms of the STFT. In the main Section \ref{L1-convolutors}, we study various structural and topological properties of the space $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ via the STFT and complete the proof of Theorem \ref{thm-introduction}. Our results concerning weighted $\mathcal{D}'_{L^1}$ spaces are presented in Section \ref{sect-weighted-DL1}. Finally, in Section \ref{GS-convolutors}, we apply our general theory to discuss the spaces of convolutors of $\dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d)$. \section{The short-time Fourier transform for general distributions}\label{STFT-distributions} The aim of this auxiliary section is to define and study the STFT for general distributions with respect to compactly supported smooth window functions. Most notably, we prove reconstruction and desingularization formulas. These formulas will play an important role in the rest of this article. Our notation from distribution theory is standard. 
Given a compact $K \Subset \mathbb{R}^d$ and $n \in \mathbb{N}$, we write $\mathcal{D}_K^n$ for the Banach space consisting of all $\varphi \in C^n(\mathbb{R}^d)$ with $\operatorname{supp} \varphi \subseteq K$ endowed with the norm $$ \| \varphi\|_{K,n} := \max_{|\alpha| \leq n} \max_{x \in K } |\partial^\alpha \varphi(x)|. $$ For $n = 0$, we simply write $\|\: \cdot \: \|_{K} = \|\:\cdot \:\|_{K,0}$. We define $$ \mathcal{D}_K := \varprojlim_{n \in \mathbb{N}} \mathcal{D}^n_K, \qquad \mathcal{D}(\mathbb{R}^d) := \varinjlim_{K \Subset \mathbb{R}^d} \mathcal{D}_K. $$ Furthermore, we denote by $\mathcal{E}(\mathbb{R}^d)$ and $\mathcal{S}(\mathbb{R}^d)$ the space of smooth functions on $\mathbb{R}^d$ and the space of rapidly decreasing smooth functions on $\mathbb{R}^d$, respectively, each endowed with their standard Fr\'echet space structure. The dual spaces $\mathcal{D}'(\mathbb{R}^d)$, $\mathcal{E}'(\mathbb{R}^d)$ and $\mathcal{S}'(\mathbb{R}^d)$ are the space of distributions on $\mathbb{R}^d$, the space of compactly supported distributions on $\mathbb{R}^d$, and the space of tempered distributions on $\mathbb{R}^d$, respectively. We endow these spaces with their strong topologies. Next, we recall some fundamental properties of the STFT on the space $L^2(\mathbb{R}^d)$; for further properties of the STFT we refer to the book \cite{Grochenig}. As customary, the translation and modulation operators are denoted by $T_xf = f(\:\cdot\: - x)$ and $M_\xi f = e^{2\pi i \xi \cdot} f$, $x, \xi \in \mathbb{R}^d$, respectively. We also write $\check{f} = f(- \:\cdot\:)$ for reflection about the origin. The STFT of a function $f \in L^2(\mathbb{R}^d)$ with respect to a window function $\psi \in L^2(\mathbb{R}^d)$ is defined as $$ V_\psi f(x,\xi) := (f, M_\xi T_x\psi)_{L^2} = \int_{\mathbb{R}^d} f(t) \overline{\psi(t-x)}e^{-2\pi i \xi t} {\rm d}t , \qquad (x, \xi) \in \mathbb{R}^{2d}. $$ We have that $\|V_\psi f\|_{L^2(\mathbb{R}^{2d})} = \|\psi\|_{L^2}\|f\|_{L^2}$. 
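The norm identity just stated is a special case of the standard orthogonality relations for the STFT (cf.\ \cite{Grochenig}); as a brief added aside, we record the short computation behind the reconstruction identity used throughout this section:

```latex
% Orthogonality relations (added aside; standard material):
(V_{\psi_1} f_1, V_{\psi_2} f_2)_{L^2(\mathbb{R}^{2d})}
  = (f_1, f_2)_{L^2}\, \overline{(\psi_1, \psi_2)_{L^2}},
\qquad f_1, f_2, \psi_1, \psi_2 \in L^2(\mathbb{R}^d).
% Taking \psi_1 = \psi_2 = \psi and f_1 = f_2 = f recovers the isometry
% \|V_\psi f\|_{L^2(\mathbb{R}^{2d})} = \|\psi\|_{L^2}\|f\|_{L^2}, while for a
% synthesis window \gamma and arbitrary g \in L^2(\mathbb{R}^d),
(V^\ast_\gamma V_\psi f, g)_{L^2}
  = (V_\psi f, V_\gamma g)_{L^2(\mathbb{R}^{2d})}
  = (f, g)_{L^2}\, \overline{(\psi, \gamma)_{L^2}}
  = (\gamma, \psi)_{L^2}\, (f, g)_{L^2},
% whence V^\ast_\gamma \circ V_\psi = (\gamma, \psi)_{L^2}\,
% \operatorname{id}_{L^2(\mathbb{R}^d)}.
```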
In particular, the mapping $V_\psi : L^2(\mathbb{R}^d) \rightarrow L^2(\mathbb{R}^{2d})$ is continuous. The adjoint of $V_\psi$ is given by the weak integral $$ V^\ast_\psi F = \int \int_{\mathbb{R}^{2d}} F(x,\xi) M_\xi T_x\psi {\rm d}x {\rm d}\xi, \qquad F \in L^2(\mathbb{R}^{2d}). $$ If $\psi \neq 0$ and $\gamma \in L^2(\mathbb{R}^d)$ is a synthesis window for $\psi$, that is, $(\gamma, \psi)_{L^2} \neq 0$, then \begin{equation} \frac{1}{(\gamma, \psi)_{L^2}} V^\ast_\gamma \circ V_\psi = \operatorname{id}_{L^2(\mathbb{R}^d)}. \label{reconstruction-L2} \end{equation} In order to be able to extend the STFT to the space of distributions $\mathcal{D}'(\mathbb{R}^d)$ we must first establish the mapping properties of the STFT on $\mathcal{D}(\mathbb{R}^d)$. We need some preparation. Given two lcHs (= locally convex Hausdorff spaces) $E$ and $F$, we write $E \widehat{\otimes}_\pi F$, $E \widehat{\otimes}_\varepsilon F$, and $E \widehat{\otimes}_i F$ for the completion of $E \otimes F$ with respect to the projective topology, the $\varepsilon$-topology and the inductive topology, respectively \cite{Komatsu3}. If either $E$ or $F$ is nuclear, we simply write $E \widehat{\otimes} F = E \widehat{\otimes}_\pi F = E \widehat{\otimes}_\varepsilon F$. Let $d_1, d_2 \in \mathbb{N}$ and $K \Subset \mathbb{R}^{d_1}$. We may identify the space $\mathcal{D}_{K,x} \widehat{\otimes} \mathcal{S}(\mathbb{R}^{d_2}_\xi)$ with the Fr\'echet space consisting of all $\varphi \in C^\infty(\mathbb{R}^{d_1 + d_2}_{x,\xi})$ with $\operatorname{supp} \varphi \subseteq K \times \mathbb{R}^{d_2}$ such that \begin{equation} |\varphi|_{K,n} := \max_{|\alpha| \leq n} \max_{|\beta| \leq n} \sup_{(x,\xi) \in K \times \mathbb{R}^{d_2}} |\partial^{\alpha}_x \partial^\beta_\xi \varphi(x,\xi)|(1+|\xi|)^n < \infty \label{definition-norm-tensor} \end{equation} for all $n \in \mathbb{N}$.
By \cite[Thm.~2.3, p.~670]{Komatsu3}, we have the following canonical isomorphisms of lcHs $$ \mathcal{D}(\mathbb{R}^{d_1}_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^{d_2}) \cong \varinjlim_{K \Subset \mathbb{R}^{d_1}} \mathcal{D}_{K,x} \widehat{\otimes} \mathcal{S}(\mathbb{R}^{d_2}_\xi) $$ and $$ (\mathcal{D}(\mathbb{R}^{d_1}_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^{d_2}))'_{b} \cong \mathcal{D}'(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}'(\mathbb{R}^{d_2}_\xi). $$ We then have: \begin{proposition}\label{STFT-D} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$. Then, the mappings $$ V_\psi: \mathcal{D}(\mathbb{R}^d) \rightarrow \mathcal{D}(\mathbb{R}^d_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^d) $$ and $$ V^\ast_\psi: \mathcal{D}(\mathbb{R}^d_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^d) \rightarrow \mathcal{D}(\mathbb{R}^d) $$ are well-defined and continuous. \end{proposition} \begin{proof} We first look at $V_\psi$. Consider the continuous linear mappings $$ S: \mathcal{D}(\mathbb{R}^d_t) \rightarrow \mathcal{D}(\mathbb{R}^{2d}_{x,t}), \quad \varphi(t) \mapsto \varphi(t)T_x{\overline{\psi}}(t), $$ and $$ \mathcal{F}_t: \mathcal{D}(\mathbb{R}^{2d}_{x,t}) \rightarrow \mathcal{D}(\mathbb{R}^d_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^d), \quad \varphi(x,t) \mapsto \int_{\mathbb{R}^d} \varphi(x,t)e^{-2\pi i \xi t} {\rm d}t . $$ The result follows from the representation $V_\psi = \mathcal{F}_t \circ S$. Next, we treat $V_\psi^\ast$. 
Consider the continuous linear mappings $$ S: \mathcal{D}(\mathbb{R}^d_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^d) \rightarrow \mathcal{D}(\mathbb{R}^d_t) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_{x,\xi}^{2d}), \quad \varphi(x,\xi) \mapsto \varphi(x,\xi)M_\xi T_x\psi(t) $$ and $$ \operatorname{id}_{ \mathcal{D}(\mathbb{R}^d_t)} \widehat{\otimes}_i 1(x,\xi) : \mathcal{D}(\mathbb{R}^d_t) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_{x,\xi}^{2d}) \rightarrow \mathcal{D}(\mathbb{R}^d_t), \quad \varphi(t,x,\xi) \mapsto \int\int_{\mathbb{R}^{2d}} \varphi(t,x,\xi) {\rm d}x {\rm d}\xi. $$ The result follows from the representation $V^\ast_\psi = (\operatorname{id}_{ \mathcal{D}(\mathbb{R}^d_t)} \widehat{\otimes}_i 1(x,\xi) )\circ S$. \end{proof} Observe that, if $\psi \in \mathcal{D}(\mathbb{R}^d) \backslash \{0\}$ and $\gamma \in \mathcal{D}(\mathbb{R}^d)$ is a synthesis window for $\psi$, the reconstruction formula \eqref{reconstruction-L2} implies that \begin{equation} \frac{1}{(\gamma, \psi)_{L^2}} V^\ast_\gamma \circ V_\psi = \operatorname{id}_{\mathcal{D}(\mathbb{R}^d)}. \label{reconstruction-D} \end{equation} We are ready to define the STFT on the space $\mathcal{D}'(\mathbb{R}^d)$. For $\psi \in \mathcal{D}(\mathbb{R}^d)$ and $f \in \mathcal{D}'(\mathbb{R}^d)$ we define $$ V_\psi f(x,\xi) := \langle f, \overline{M_\xi T_x\psi} \rangle = e^{-2\pi i \xi x} (f \ast M_\xi \check{\overline{\psi}})(x), \qquad (x,\xi) \in \mathbb{R}^{2d}. $$ Clearly, $V_\psi f$ is a smooth function on $\mathbb{R}^{2d}$. \begin{lemma} \label{well-defined-STFT} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$ and $f \in \mathcal{D}'(\mathbb{R}^d)$. Then, for every $K \Subset \mathbb{R}^d$ there is $n \in \mathbb{N}$ such that $$ \sup_{(x,\xi) \in K \times \mathbb{R}^{d}}\frac{|V_\psi f(x,\xi)|}{(1+|\xi|)^n} < \infty.
$$ In particular, $V_\psi f$ defines an element of $\mathcal{D}'(\mathbb{R}^{d}_x) \widehat{\otimes} \mathcal{S}'(\mathbb{R}^{d}_\xi)$ via $$ \langle V_\psi f, \varphi \rangle := \int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) \varphi(x,\xi) {\rm d}x {\rm d}\xi, \qquad \varphi \in \mathcal{D}(\mathbb{R}^d_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^d). $$ \end{lemma} \begin{proof} Let $K \Subset \mathbb{R}^d$ be arbitrary and set $L = \operatorname{supp} \psi + K$. Since $\operatorname{supp} \overline{M_\xi T_x\psi} \subseteq L$ for all $(x,\xi) \in K \times \mathbb{R}^d$, there are $n \in \mathbb{N}$ and $C > 0$ such that $$ |V_\psi f(x,\xi)| = | \langle f, \overline{M_\xi T_x\psi} \rangle| \leq C \| M_\xi T_x\psi \|_{L,n}, \qquad (x,\xi) \in K \times \mathbb{R}^d. $$ The result follows from the fact that \begin{align*} \| M_\xi T_x\psi\|_{L,n} &\leq \max_{|\alpha| \leq n} \sum_{\beta \leq \alpha} \binom{\alpha}{\beta}(2\pi|\xi|)^{|\beta|} \sup_{t \in L} | \partial^{\alpha -\beta} \psi(t-x)| \\ & \leq (2\pi)^n \|\psi\|_{\operatorname{supp} \psi,n} (1+ |\xi|)^n \end{align*} for all $(x,\xi) \in K \times \mathbb{R}^d$. \end{proof} \begin{lemma}\label{STFT-transpose} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$ and let $f \in \mathcal{D}'(\mathbb{R}^d)$. Then, $$ \langle V_\psi f, \varphi \rangle = \langle f, \overline{V^\ast_\psi \overline{\varphi}} \rangle, \qquad \varphi \in \mathcal{D}(\mathbb{R}^d_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^d). $$ \end{lemma} \begin{proof} Let $\varphi \in \mathcal{D}(\mathbb{R}^d_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^d)$ be arbitrary.
Since $$ \varphi(x,\xi)\overline{M_\xi T_x \psi}(t) \in \mathcal{D}(\mathbb{R}^d_t) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_{x,\xi}^{2d}) $$ and $$ 1(x,\xi) \widehat{\otimes}_i f(t) = f(t) \widehat{\otimes}_i 1(x,\xi) \in \mathcal{D}'(\mathbb{R}^{d}_t) \widehat{\otimes} \mathcal{S}'(\mathbb{R}^{2d}_{x,\xi}), $$ we have that \begin{align*} \langle V_\psi f, \varphi \rangle & = \int \int_{\mathbb{R}^{2d}} \langle f, \overline{M_\xi T_x\psi} \rangle \varphi(x,\xi) {\rm d}x {\rm d}\xi = \langle 1(x,\xi) \widehat{\otimes}_i f(t), \varphi(x,\xi)\overline{M_\xi T_x\psi}(t) \rangle \\ &= \langle f(t) \widehat{\otimes}_i 1(x,\xi) , \varphi(x,\xi)\overline{M_\xi T_x\psi}(t) \rangle = \langle f(t), \int \int_{\mathbb{R}^{2d}} \varphi(x,\xi)\overline{M_\xi T_x\psi}(t) {\rm d}x {\rm d}\xi \rangle \\ &=\langle f, \overline{V^\ast_\psi \overline{\varphi}} \rangle. \qedhere \end{align*} \end{proof} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$. We \emph{define} the adjoint STFT of an element $F \in \mathcal{D}'(\mathbb{R}^{d}_x) \widehat{\otimes} \mathcal{S}'(\mathbb{R}^{d}_\xi)$ as $$ \langle V^\ast_\psi F, \varphi \rangle := \langle F, \overline{V_\psi \overline{\varphi}} \rangle, \qquad \varphi \in \mathcal{D}(\mathbb{R}^d). $$ Notice that $V^\ast_\psi F \in \mathcal{D}'(\mathbb{R}^d)$ because of Proposition \ref{STFT-D}. We have all the necessary ingredients to establish the mapping properties of the STFT on $\mathcal{D}'(\mathbb{R}^d)$. \begin{proposition}\label{STFT-D-dual} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$. Then, the mappings $$ V_\psi: \mathcal{D}'(\mathbb{R}^d) \rightarrow \mathcal{D}'(\mathbb{R}^{d}_x) \widehat{\otimes} \mathcal{S}'(\mathbb{R}^{d}_\xi) $$ and $$ V^\ast_\psi: \mathcal{D}'(\mathbb{R}^{d}_x) \widehat{\otimes} \mathcal{S}'(\mathbb{R}^{d}_\xi) \rightarrow \mathcal{D}'(\mathbb{R}^d) $$ are well-defined and continuous.
Moreover, if $\psi \neq 0$ and $\gamma \in \mathcal{D}(\mathbb{R}^d)$ is a synthesis window for $\psi$, then \begin{equation} \frac{1}{(\gamma, \psi)_{L^2}} V^\ast_\gamma \circ V_\psi = \operatorname{id}_{\mathcal{D}'(\mathbb{R}^d)} \label{reconstruction-D-dual} \end{equation} and the desingularization formula \begin{equation} \langle f, \varphi \rangle = \frac{1}{(\gamma, \psi)_{L^2}} \int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) V_{\overline{\gamma}} \varphi(x,-\xi) {\rm d}x {\rm d}\xi \label{desing-D-dual} \end{equation} holds for all $f \in \mathcal{D}'(\mathbb{R}^d)$ and $\varphi \in \mathcal{D}(\mathbb{R}^d)$. \end{proposition} \begin{proof} The mapping $V_\psi$ is continuous because of Lemma \ref{STFT-transpose} and the continuity of $V^\ast_\psi: \mathcal{D}(\mathbb{R}^d_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^d) \rightarrow \mathcal{D}(\mathbb{R}^d)$ (Proposition \ref{STFT-D}), while $V^\ast_\psi$ is continuous because of the continuity of $V_\psi: \mathcal{D}(\mathbb{R}^d) \rightarrow \mathcal{D}(\mathbb{R}^d_x) \widehat{\otimes}_i \mathcal{S}(\mathbb{R}_\xi^d)$ (Proposition \ref{STFT-D}). Next, suppose that $\psi \neq 0$ and $\gamma \in \mathcal{D}(\mathbb{R}^d)$ is a synthesis window for $\psi$. Let $f \in \mathcal{D}'(\mathbb{R}^d)$ and $\varphi \in \mathcal{D}(\mathbb{R}^d)$ be arbitrary. Lemma \ref{STFT-transpose} and the reconstruction formula \eqref{reconstruction-D} imply that $$ \langle V^\ast_\gamma (V_\psi f), \varphi \rangle = \langle V_\psi f, \overline{V_\gamma \overline{\varphi}} \rangle = \langle f, \overline{V^\ast_\psi (V_\gamma \overline{\varphi})} \rangle = (\gamma, \psi)_{L^2} \langle f, \varphi \rangle; $$ that is, \eqref{reconstruction-D-dual} and \eqref{desing-D-dual} hold. \end{proof} Finally, we show that, for $f \in \mathcal{E}'(\mathbb{R}^d)$, the desingularization formula \eqref{desing-D-dual} holds for all $\varphi \in \mathcal{E}(\mathbb{R}^d)$.
To this end, we briefly discuss the STFT on $\mathcal{E}(\mathbb{R}^d)$ and $\mathcal{E}'(\mathbb{R}^d)$. The space $\mathcal{E}(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}(\mathbb{R}_\xi^d)$ may be identified with the Fr\'echet space consisting of all $\varphi \in C^\infty(\mathbb{R}^{2d}_{x,\xi})$ such that $|\varphi|_{K,n} < \infty$ for all $K \Subset \mathbb{R}^d$ and $n \in \mathbb{N}$ (cf.\ \eqref{definition-norm-tensor}). \begin{proposition}\label{STFT-E} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$. \begin{itemize} \item[$(i)$] The mapping $$ V_\psi: \mathcal{E}(\mathbb{R}^d) \rightarrow \mathcal{E}(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}(\mathbb{R}_\xi^d) $$ is well-defined and continuous. \item[$(ii)$] Let $f \in \mathcal{E}'(\mathbb{R}^d)$. Then, there are a compact $K \Subset \mathbb{R}^d$ and $n \in \mathbb{N}$ such that $\operatorname{supp} V_\psi f \subseteq K \times \mathbb{R}^d$ and $$ \sup_{(x,\xi) \in K \times \mathbb{R}^{d}}\frac{|V_\psi f(x,\xi)|}{(1+|\xi|)^n} < \infty. $$ In particular, $V_\psi f$ defines an element of $\mathcal{E}'(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}'(\mathbb{R}_\xi^d) \cong (\mathcal{E}(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}(\mathbb{R}_\xi^d))'$ via $$ \langle V_\psi f, \varphi \rangle := \int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) \varphi(x,\xi) {\rm d}x {\rm d}\xi, \qquad \varphi \in \mathcal{E}(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}(\mathbb{R}_\xi^d).
$$ \end{itemize} \end{proposition} \begin{proof} $(i)$ It suffices to observe that we can factor $V_\psi = \mathcal{F}_t \circ S$ through the continuous linear mappings $$ S: \mathcal{E}(\mathbb{R}^d_t) \rightarrow \mathcal{E}(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}(\mathbb{R}_t^d), \quad \varphi(t) \mapsto \varphi(t)T_x{\overline{\psi}}(t) $$ and $$ \mathcal{F}_t: \mathcal{E}(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}(\mathbb{R}_t^d) \rightarrow \mathcal{E}(\mathbb{R}^d_x) \widehat{\otimes}\mathcal{S}(\mathbb{R}_\xi^d), \, \varphi(x,t) \mapsto \int_{\mathbb{R}^d} \varphi(x,t)e^{-2\pi i \xi t} {\rm d}t . $$ $(ii)$ Since $\operatorname{supp} \overline{M_\xi T_x \psi} \subseteq \operatorname{supp} \psi + x$ for all $(x,\xi) \in \mathbb{R}^{2d}$, we obtain that $V_\psi f(x,\xi) = \langle f, \overline{M_\xi T_x \psi} \rangle = 0$ for all $(x, \xi) \notin (\operatorname{supp} f - \operatorname{supp} \psi) \times \mathbb{R}^d$, that is, $\operatorname{supp} V_\psi f \subseteq (\operatorname{supp} f - \operatorname{supp} \psi) \times \mathbb{R}^d$. The second part follows from Lemma \ref{well-defined-STFT}. \end{proof} \begin{corollary}\label{desing-E-dual} Let $\psi \in \mathcal{D}(\mathbb{R}^d) \backslash \{0\}$ and $\gamma \in \mathcal{D}(\mathbb{R}^d)$ be a synthesis window for $\psi$. Then, the desingularization formula \eqref{desing-D-dual} holds for all $f \in \mathcal{E}'(\mathbb{R}^{d})$ and $\varphi \in \mathcal{E}(\mathbb{R}^d)$. \end{corollary} \begin{proof} Let $f \in \mathcal{E}'(\mathbb{R}^d)$ and $\varphi \in \mathcal{E}(\mathbb{R}^d)$ be arbitrary. Choose a sequence $(\varphi_n)_{n \in \mathbb{N}} \subset \mathcal{D}(\mathbb{R}^d)$ such that $\varphi_n \rightarrow \varphi$ in $\mathcal{E}(\mathbb{R}^d)$. 
Hence, the desingularization formula \eqref{desing-D-dual} and Proposition \ref{STFT-E} imply that $$ \langle f, \varphi \rangle = \lim_{n \to \infty} \langle f, \varphi_n \rangle = \lim_{n \to \infty} \langle V_\psi f, \overline{V_\gamma \overline{\varphi_n}} \rangle = \langle V_\psi f, \overline{V_\gamma \overline{\varphi}} \rangle, $$ so that \eqref{desing-D-dual} holds for $f$ and $\varphi$. \end{proof} \section{Weighted inductive limits of spaces of smooth functions}\label{sect-indlimit-smooth} In this section, we introduce two general classes of weighted inductive limits of spaces of smooth functions defined via a decreasing sequence of positive continuous functions. Our main goal is to characterize the completeness of these spaces in terms of the defining sequence of weight functions. In order to do so, we first recall several regularity conditions for $(LF)$-spaces. Furthermore, we also establish the mapping properties of the STFT on these spaces; we shall repeatedly use the latter results in Sections \ref{L1-convolutors} and \ref{GS-convolutors}. \subsection{Regularity conditions for $(LF)$-spaces}\label{sect-reg} A lcHs $E$ is called an $(LF)$-space if there is a sequence $(E_N)_{N \in \mathbb{N}}$ of Fr\'echet spaces with $E_N \subset E_{N + 1}$ and continuous inclusion mappings such that $E = \bigcup_{N \in \mathbb{N}} E_N$ and the topology of $E$ coincides with the finest locally convex topology for which all inclusion mappings $E_N \rightarrow E$ are continuous. We emphasize that, for us, $(LF)$-spaces are Hausdorff by definition. We call $(E_N)_{N}$ a defining inductive spectrum for $E$ and write $E = \varinjlim_{N}E_N$. If each $E_N$ is a Banach space, $E$ is called an $(LB)$-space. Let $E = \varinjlim_{N}E_N$ be an $(LF)$-space.
We shall consider the following regularity conditions on $E$: \begin{itemize} \item[$(i)$] $E$ is said to be \emph{boundedly retractive} if for every bounded set $B$ in $E$ there is $N \in \mathbb{N}$ such that $B$ is contained in $E_N$, and $E$ and $E_N$ induce the same topology on $B$. \item[$(ii)$] $E$ is said to be \emph{regular} if for every bounded set $B$ in $E$ there is $N \in \mathbb{N}$ such that $B$ is contained and bounded in $E_N$. \item[$(iii)$] $E$ is said to be $\beta$-\emph{regular} if for every $N \in \mathbb{N}$ and every subset $B$ of $E_N$ that is bounded in $E$ there is $M \geq N$ such that $B$ is bounded in $E_M$. \item[$(iv)$] $E$ is said to satisfy condition $(wQ)$ if for every $N \in \mathbb{N}$ there are a neighborhood $U$ of $0$ in $E_N$ and $M \geq N$ such that for every $K \geq M$ and every neighborhood $W$ of $0$ in $E_M$ there are a neighborhood $V$ of $0$ in $E_K$ and $\lambda > 0$ such that $V \cap U \subseteq \lambda W$. If $ (\| \: \cdot \: \|_{N,n})_{n \in \mathbb{N}}$ is a fundamental increasing sequence of seminorms for $E_N$, then $E$ satisfies $(wQ)$ if and only if \begin{gather*} \forall N \in \mathbb{N} \, \exists M \geq N \, \exists n \in \mathbb{N} \, \forall K \geq M \, \forall m \in \mathbb{N} \, \exists k \in \mathbb{N} \, \exists C > 0\, \forall e \in E_N: \\ \|e\|_{M,m} \leq C(\|e\|_{N,n} + \|e\|_{K,k}). \end{gather*} \end{itemize} Finally, $E$ is said to be \emph{boundedly stable} if for every $N \in \mathbb{N}$ and every bounded set $B$ in $E_N$ there is $M \geq N$ such that for every $K \geq M$ the spaces $E_M$ and $E_K$ induce the same topology on $B$. Grothendieck's factorization theorem (see e.g.\ \cite[p.\ 225]{Kothe}) implies that none of these conditions depends on the choice of defining inductive spectrum of $E$. This justifies calling an $(LF)$-space boundedly retractive, etc., if one (and thus all) of its defining inductive spectra has this property. The following result shall be of crucial importance to us.
\begin{theorem} \label{reg-cond} Let $E$ be an $(LF)$-space. Consider the following conditions: \begin{itemize} \item[$(i)$] $E$ is boundedly retractive. \item[$(ii)$] $E$ is complete. \item[$(iii)$] $E$ is regular. \item[$(iv)$] $E$ is $\beta$-regular. \item[$(v)$] $E$ satisfies $(wQ)$. \end{itemize} Then, $(i) \Rightarrow (ii) \Rightarrow (iii) \Rightarrow (iv) \Rightarrow (v)$. Moreover, if $E$ is boundedly stable, then $(v) \Rightarrow (i)$. \end{theorem} We refer to \cite[Sect.\ 6]{Wengenroth} (see also \cite{Wengenroth-96} and the references therein) for the proof of and more information on Theorem \ref{reg-cond}. \subsection{Regularity properties} We now introduce the weighted $(LF)$-spaces of smooth functions that we shall be concerned with. Let $v$ be a non-negative function on $\mathbb{R}^d$ and let $n \in \mathbb{N}$. We define $\mathcal{B}^n_v(\mathbb{R}^d)$ as the seminormed space consisting of all $\varphi \in C^n(\mathbb{R}^d)$ such that $$ \|\varphi\|_{v,n} := \max_{|\alpha| \leq n} \sup_{x \in \mathbb{R}^d} |\partial^{\alpha}\varphi(x)|v(x) < \infty. $$ The closure of $\mathcal{D}(\mathbb{R}^d)$ in $\mathcal{B}^n_v(\mathbb{R}^d)$ is denoted by ${\dot{\mathcal{B}}}^n_v(\mathbb{R}^d)$. Clearly, the latter space consists of all $\varphi \in C^n(\mathbb{R}^d)$ such that $$ \lim_{|x| \to \infty}|\partial^{\alpha}\varphi(x)|v(x) = 0 $$ for all $|\alpha| \leq n$ and we also endow it with the seminorm $\|\:\cdot \:\|_{v,n}$. If $v$ is positive and $v^{-1}$ is locally bounded, $\mathcal{B}^n_v(\mathbb{R}^d)$ and ${\dot{\mathcal{B}}}^n_v(\mathbb{R}^d)$ are Banach spaces. These requirements are fulfilled if $v$ is positive and continuous. For $n=0$ we simply write $\mathcal{B}^0_v(\mathbb{R}^d) = Cv(\mathbb{R}^d)$, ${\dot{\mathcal{B}}}^0_v(\mathbb{R}^d) = C(v)_0(\mathbb{R}^d)$ and $\|\:\cdot \:\|_{v,0} = \|\:\cdot \:\|_v$.
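As a quick sanity check on the definition of $\|\:\cdot\:\|_{v,n}$, the seminorm can be approximated on a finite grid. The following Python sketch is our own illustration (not part of the text): the function $\varphi(x) = e^{-x^2}$, the weight $v(x) = (1+|x|)^{-1}$, and the grid are ad hoc choices, and it evaluates $\|\varphi\|_{v,1}$ in dimension $d = 1$.

```python
import math

# Grid approximation of the weighted seminorm ||phi||_{v,n} (here n = 1, d = 1);
# phi, v, and the grid are ad hoc illustrative choices.
def weighted_seminorm(derivatives, v, grid):
    # max over the listed derivatives of sup_x |d^alpha phi(x)| * v(x)
    return max(max(abs(d(x)) * v(x) for x in grid) for d in derivatives)

phi = lambda x: math.exp(-x * x)              # phi(x) = e^{-x^2}
dphi = lambda x: -2.0 * x * math.exp(-x * x)  # phi'(x), computed analytically
v = lambda x: 1.0 / (1.0 + abs(x))            # positive continuous weight

grid = [i / 1000.0 for i in range(-8000, 8001)]
print(weighted_seminorm([phi, dphi], v, grid))  # 1.0, attained by phi itself at x = 0
```

Since $|\varphi'(x)|v(x) = 2|x|e^{-x^2}/(1+|x|)$ stays below $1$, the supremum is realized by the zeroth derivative at the origin.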
Furthermore, we set $$ \mathcal{B}_v(\mathbb{R}^d):= \varprojlim_{n \in \mathbb{N}} \mathcal{B}^n_{v}(\mathbb{R}^d), \qquad \dot{\mathcal{B}}_v(\mathbb{R}^d) := \varprojlim_{n \in \mathbb{N}} {\dot{\mathcal{B}}}^n_{v}(\mathbb{R}^d). $$ A (pointwise) decreasing sequence $\mathcal{V} = (v_{N})_{N \in \mathbb{N}}$ of positive continuous functions on $\mathbb{R}^d$ is called a \emph{decreasing weight system}. We define the following $(LF)$-spaces $$ \mathcal{B}_{\mathcal{V}}(\mathbb{R}^d) := \varinjlim_{N \in \mathbb{N}} \mathcal{B}_{v_N}(\mathbb{R}^d), \qquad \dot{\mathcal{B}}_{\mathcal{V}}(\mathbb{R}^d) := \varinjlim_{N \in \mathbb{N}} \dot{\mathcal{B}}_{v_N}(\mathbb{R}^d). $$ \begin{remark}\label{V-remark} If $\mathcal{V} = (v_N)_N$ satisfies condition $(V)$ (cf.\ \cite[p.\ 114]{B-M-S}), i.e., $$ \forall N \in \mathbb{N} \, \exists M > N: \lim_{|x| \to \infty} v_M(x)/v_N(x) = 0, $$ then $\mathcal{B}_{\mathcal{V}}(\mathbb{R}^d) = \dot{\mathcal{B}}_{\mathcal{V}}(\mathbb{R}^d)$ and this space is an $(LFS)$-space (= inductive limit of $(FS)$-spaces). \end{remark} We shall often need to impose the following mild condition on $\mathcal{V}$: \begin{equation} \forall N \in \mathbb{N}\, \exists \widetilde{N} \geq N \, : \, g_{N,\widetilde{N}} = \sup_{x \in \mathbb{R}^d}\frac{v_{\widetilde{N}}(x + \: \cdot \:)}{v_N(x)} \in L^\infty_{\operatorname{loc}}(\mathbb{R}^d). \label{locally-bounded-decreasing} \end{equation} We are ready to discuss the regularity properties of the $(LF)$-spaces $\mathcal{B}_{\mathcal{V}}(\mathbb{R}^d)$ and $\dot{\mathcal{B}}_{\mathcal{V}}(\mathbb{R}^d)$. We need the following definition. \begin{definition} A decreasing weight system $\mathcal{V} =(v_N)_N$ is said to satisfy $(\Omega)$ if \begin{gather*} \forall N \in \mathbb{N} \, \exists M \geq N \, \forall K \geq M \, \exists \theta \in (0,1) \, \exists C > 0 \, \forall x \in \mathbb{R}^d: \\ v_M(x) \leq C {v_N(x)}^{1-\theta}{v_K(x)}^{\theta}. 
\end{gather*} \end{definition} \begin{theorem}\label{completeness-ind-lim-smooth} Let $\mathcal{V} = (v_N)_{N}$ be a decreasing weight system satisfying \eqref{locally-bounded-decreasing}. Then, the following statements are equivalent: \begin{itemize} \item[$(i)$] $\mathcal{V}$ satisfies $(\Omega)$. \item[$(ii)$] $\mathcal{B}_\mathcal{V}(\mathbb{R}^d)$ is boundedly retractive. \item[$(iii)$] $\mathcal{B}_\mathcal{V}(\mathbb{R}^d)$ satisfies $(wQ)$. \item[$(ii)'$] $\dot{\mathcal{B}}_\mathcal{V}(\mathbb{R}^d)$ is boundedly retractive. \item[$(iii)'$] $\dot{\mathcal{B}}_\mathcal{V}(\mathbb{R}^d)$ satisfies $(wQ)$. \end{itemize} \end{theorem} The proof of Theorem \ref{completeness-ind-lim-smooth} is based on the ensuing three lemmas. \begin{lemma} \label{boundedly-stable-smooth} Let $\mathcal{V} = (v_N)_{N}$ be a decreasing weight system satisfying $(\Omega)$. Then, $\mathcal{B}_\mathcal{V}(\mathbb{R}^d)$ and $\dot{\mathcal{B}}_\mathcal{V}(\mathbb{R}^d)$ are boundedly stable. \end{lemma} \begin{proof} Notice that $\dot{\mathcal{B}}_\mathcal{V}(\mathbb{R}^d)$ is boundedly stable if $\mathcal{B}_\mathcal{V}(\mathbb{R}^d)$ is so. Hence, it suffices to show that $\mathcal{B}_\mathcal{V}(\mathbb{R}^d)$ is boundedly stable. Let $N \in \mathbb{N}$ be arbitrary and choose $M \geq N$ according to $(\Omega)$. We shall show that for all $K \geq M$ the spaces $\mathcal{B}_{v_K}(\mathbb{R}^d)$ and $\mathcal{B}_{v_M}(\mathbb{R}^d) $ induce the same topology on the bounded sets $B$ of $\mathcal{B}_{v_N}(\mathbb{R}^d)$. We only need to prove that the topology induced by $\mathcal{B}_{v_K}(\mathbb{R}^d)$ is finer than the one induced by $\mathcal{B}_{v_M}(\mathbb{R}^d)$. Consider the basis of neighborhoods of $0$ in $\mathcal{B}_{v_M}(\mathbb{R}^d)$ given by $$ U(n,\varepsilon) = \{ \varphi \in \mathcal{B}_{v_M}(\mathbb{R}^d) \, : \, \|\varphi \|_{v_M,n} \leq \varepsilon \}, \qquad n \in \mathbb{N},\varepsilon > 0. $$ Let $n \in \mathbb{N}$ and $\varepsilon > 0$ be arbitrary. 
Choose $\theta \in (0,1)$ and $C > 0$ such that $v_M(x) \leq C {v_N(x)}^{1-\theta}v_K(x)^{\theta}$ for all $x \in \mathbb{R}^d$. Set $\delta = (\varepsilon/C)^{1/\theta}(\sup_{\varphi \in B} \|\varphi\|_{v_N,n})^{-(1-\theta)/\theta}$ and $V = \{ \varphi \in \mathcal{B}_{v_K}(\mathbb{R}^d) \, : \, \| \varphi \|_{v_K,n} \leq \delta \}$. We claim that $V \cap B \subseteq U(n,\varepsilon)$. Indeed, for $\varphi \in V \cap B$ we have that \begin{align*} \| \varphi \|_{v_M,n} &=\max_{|\alpha| \leq n} \sup_{x \in \mathbb{R}^d} |\partial^{\alpha}\varphi(x)|v_M(x) \leq C\max_{|\alpha| \leq n}\sup_{x \in \mathbb{R}^d} (|\partial^{\alpha}\varphi(x)| {v_N}(x))^{1-\theta}(|\partial^{\alpha}\varphi(x)|v_K(x))^{\theta} \\ & \leq C\| \varphi\|^{1-\theta}_{v_N,n}\| \varphi\|^\theta_{v_K,n} \leq \varepsilon. \qedhere \end{align*} \end{proof} \begin{lemma}\label{gorny-wQ} Let $\mathcal{V} = (v_N)_{N}$ be a decreasing weight system satisfying \eqref{locally-bounded-decreasing} and $(\Omega)$. Then, \begin{gather*} \forall N \in \mathbb{N} \, \exists M \geq N \, \forall K \geq M \, \forall m \in \mathbb{N} \, \exists k \in \mathbb{N} \, \exists \rho \in (0,1) \, \exists C > 0\, \forall \varphi \in \mathcal{B}_{v_N}(\mathbb{R}^d) : \\ \|\varphi\|_{v_M,m} \leq C(\|\varphi\|^{1-\rho}_{v_N} \|\varphi\|^{\rho}_{v_K,k}). \end{gather*} \end{lemma} \begin{proof} The proof is based on the following Landau-Kolmogorov type inequality for a cube (also known as Gorny's inequality in dimension one), shown in \cite[Example 3.13]{Frerick}: For all $m,k \in \mathbb{N}$ with $m \leq k$ there is $C > 0$ such that $$ \|f\|_{[-1,1]^d,m} \leq C\|f\|^{1-(m/k)}_{[-1,1]^d}\|f\|^{m/k}_{[-1,1]^d,k}, \qquad f \in C^\infty([-1,1]^d). $$ Let $N \in \mathbb{N}$ be arbitrary and choose $\widetilde{N} \geq N$ as in \eqref{locally-bounded-decreasing}. Next, choose $M \geq \widetilde{N}$ according to $(\Omega)$. Let $K \geq M$ and $m \in \mathbb{N}$ be arbitrary. 
Choose $\widetilde{K} \geq K$ as in \eqref{locally-bounded-decreasing}. There are $\theta \in (0,1)$ and $C' > 0$ such that $v_M(x) \leq C'v _{\widetilde{N}}(x)^{1-\theta}v_{\widetilde{K}}(x)^{\theta}$ for all $x \in \mathbb{R}^d$. Finally, let $k \in \mathbb{N}$ be so large that $\rho = m/k \leq \theta$. Notice that for $0 \leq a \leq b $ it holds that $b^{1-\theta}a^\theta \leq b^{1-\rho}a^\rho$. Hence, for $\varphi \in \mathcal{B}_{v_N}(\mathbb{R}^d)$, it holds that \begin{align*} \|\varphi\|_{v_M,m} &\leq \sup_{x \in \mathbb{R}^d} \|T_{-x}\varphi\|_{[-1,1]^d,m}v_M(x) \\ &\leq CC' \sup_{x \in \mathbb{R}^d} \|T_{-x}\varphi\|^{1-(m/k)}_{[-1,1]^d}\|T_{-x}\varphi\|^{m/k}_{[-1,1]^d,k}v_{\widetilde{N}}(x)^{1-\theta}v_{\widetilde{K}}(x)^{\theta} \\ &\leq CC'(\sup_{x \in \mathbb{R}^d} \|T_{-x}\varphi\|_{[-1,1]^d}v_{\widetilde{N}}(x))^{1-\rho}(\sup_{x \in \mathbb{R}^d} \|T_{-x}\varphi\|_{[-1,1]^d,k}v_{\widetilde{K}}(x))^\rho. \end{align*} The result follows from the fact that $$ \sup_{x \in \mathbb{R}^d} \|T_{-x}\varphi\|_{[-1,1]^d,j}v_{\widetilde{J}}(x) \leq \|\check{g}_{J,\widetilde{J}}\|_{[-1,1]^d} \|\varphi\|_{v_J,j}, $$ for all $j,J \in \mathbb{N}$ and $\varphi \in \mathcal{B}_{v_J}(\mathbb{R}^d)$. \end{proof} In the next lemma, $s$ denotes the Fr\'echet space of rapidly decreasing sequences, that is, the space consisting of all sequences $(c_j)_{j \in \mathbb{N}} \in \mathbb{C}^\mathbb{N}$ such that $\sup_{j \in \mathbb{N}}|c_j| j^k < \infty$ for all $k \in \mathbb{N}$. \begin{lemma}\label{suff-Omega} Let $\mathcal{V} = (v_N)_N$ be a decreasing weight system and let $E$ be a Fr\'echet space such that $ E \cong s$ topologically. 
Let $(\| \: \cdot \: \|_n)_{n \in \mathbb{N}}$ be a fundamental increasing sequence of seminorms for $E$ and suppose that the pair $(E, \mathcal{V})$ satisfies $(S_2)^*$ (cf.\ \cite[Lemma 2.1]{Albanese}), i.e., \begin{gather*} \forall N \in \mathbb{N} \, \exists M \geq N \, \exists n \in \mathbb{N} \, \forall K \geq M \, \forall m \in \mathbb{N} \, \exists k \in \mathbb{N} \, \exists C > 0\, \forall e \in E \, \forall x \in \mathbb{R}^d: \\ v_M(x)\|e\|_m \leq C(v_N(x)\|e\|_n + v_K(x)\|e\|_k). \end{gather*} Then, $\mathcal{V}$ satisfies $(\Omega)$. \end{lemma} \begin{proof} If $E$ and $F$ are topologically isomorphic Fr\'echet spaces, then $(E, \mathcal{V})$ satisfies $(S_2)^*$ if and only if $(F, \mathcal{V})$ does so. Hence, it suffices to show that $\mathcal{V}$ satisfies $(\Omega)$ provided that $(s,\mathcal{V})$ satisfies $(S_2)^*$. This is essentially shown in \cite[Thm.~4.1]{Vogt}, but we repeat the argument here for the sake of completeness. Let $N \in \mathbb{N}$ be arbitrary and choose $M \geq N$ as in $(S_2)^\ast$. Let $K \geq M$ be arbitrary. Choose $k \in \mathbb{N}$ according to $(S_2)^\ast$ for $m = n+1$. We may assume that $k > m$. By applying $(S_2)^\ast$ to the unit vectors in $s$, we obtain that there is $C \geq 1$ such that $$ v_M(x) \leq C\left(\frac{v_N(x)}{j} + v_K(x)j^{(k-m)}\right) $$ for all $x \in \mathbb{R}^d$ and $j \in \mathbb{Z}_+$, which implies that $$ v_M(x) \leq 2^{k-m}C\left(\frac{v_N(x)}{r} + v_K(x)r^{(k-m)}\right) $$ for all $x \in \mathbb{R}^d$ and $r >0$. The result follows by minimizing the right-hand side for $r > 0$ (with $x \in \mathbb{R}^d$ fixed): the minimum is attained at $r = \left(v_N(x)/((k-m)v_K(x))\right)^{1/(k-m+1)}$ and yields $v_M(x) \leq C''{v_N(x)}^{1-\theta}{v_K(x)}^{\theta}$ with $\theta = 1/(k-m+1)$ and a constant $C'' > 0$ depending only on $C$, $k$ and $m$, which is precisely $(\Omega)$. \end{proof} \begin{proof}[Proof of Theorem \ref{completeness-ind-lim-smooth}] The implications $(ii) \Rightarrow (iii)$ and $(ii)' \Rightarrow (iii)'$ hold for general $(LF)$-spaces by Theorem \ref{reg-cond}, while $(iii) \Rightarrow (iii)'$ is obvious.
In view of Lemmas \ref{boundedly-stable-smooth} and \ref{gorny-wQ}, $(i) \Rightarrow (ii)$ and $(i) \Rightarrow (ii)'$ follow from another application of Theorem \ref{reg-cond}. Finally, we show $(iii)' \Rightarrow (i)$. By Lemma \ref{suff-Omega} and the fact that $\mathcal{D}_{[-1,1]^d} \cong s$ topologically \cite[Prop.\ 31.12]{M-V}, it suffices to show that $(\mathcal{D}_{[-1,1]^d}, \mathcal{V})$ satisfies $(S_2)^\ast$. Let $N \in \mathbb{N}$ be arbitrary and choose $\widetilde{N} \geq N$ as in \eqref{locally-bounded-decreasing}. Next, choose $M \geq \widetilde{N}$ and $n \in \mathbb{N}$ according to $(wQ)$. Pick $\widetilde{M} \geq M$ as in \eqref{locally-bounded-decreasing}. We shall show $(S_2)^\ast$ for $\widetilde{M}$ and $n$. Let $K \geq \widetilde{M}$ and $m \in \mathbb{N}$ be arbitrary. Choose $\widetilde{K} \geq K$ as in \eqref{locally-bounded-decreasing}. By $(wQ)$, there are $k \in \mathbb{N}$ and $C > 0$ such that $$ \|\varphi\|_{v_M,m} \leq C\Big(\|\varphi\|_{v_{\widetilde{N}},n} + \|\varphi\|_{v_{\widetilde{K}},k}\Big), \qquad \varphi \in \dot{\mathcal{B}}_{v_N}(\mathbb{R}^d). $$ Let $x \in \mathbb{R}^d$ and $\varphi \in \mathcal{D}_{[-1,1]^d}$ be arbitrary. For all $j,J \in \mathbb{N}$ we have that $T_x\varphi \in \mathcal{D}(\mathbb{R}^d) \subset \dot{\mathcal{B}}_{v_J}(\mathbb{R}^d)$ and $$ \|T_x\varphi\|_{v_{\widetilde{J}},j} \leq \|g_{J,\widetilde{J}}\|_{[-1,1]^d} v_J(x) \|\varphi\|_{[-1,1]^d,j} $$ and $$ v_{\widetilde{J}}(x) \|\varphi\|_{[-1,1]^d,j}\leq \|\check{g}_{J,\widetilde{J}}\|_{[-1,1]^d} \|T_x\varphi\|_{v_J,j}.
$$ Hence, \begin{align*} v_{\widetilde{M}}(x) \|\varphi\|_{[-1,1]^d,m} &\leq \|\check{g}_{M,\widetilde{M}}\|_{[-1,1]^d}\|T_x\varphi\|_{v_M,m} \\ &\leq C\|\check{g}_{M,\widetilde{M}}\|_{[-1,1]^d}(\|T_x\varphi\|_{v_{\widetilde{N}},n} + \|T_x\varphi\|_{{v_{\widetilde{K}},k}})\\ &\leq C'(v_N(x)\|\varphi\|_{[-1,1]^d,n} + v_K(x)\|\varphi\|_{[-1,1]^d,k}), \end{align*} where $C' = C\|\check{g}_{M,\widetilde{M}}\|_{[-1,1]^d}\max\{\|g_{N,\widetilde{N}}\|_{[-1,1]^d},\|g_{K,\widetilde{K}}\|_{[-1,1]^d}\}$. \end{proof} \subsection{Characterization via the STFT} We now turn our attention to the mapping properties of the STFT on the spaces $\mathcal{B}_{\mathcal{V}}(\mathbb{R}^d)$ and $\dot{\mathcal{B}}_{\mathcal{V}}(\mathbb{R}^d)$. The following two technical lemmas are needed. \begin{lemma}\label{STFT-test-smooth} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$ and let $w$ and $v$ be positive continuous functions on $\mathbb{R}^d$ such that \begin{equation} g = \sup_{x \in \mathbb{R}^d} \frac{v(x + \:\cdot\:)}{w(x)} \in L^\infty_{\operatorname{loc}}(\mathbb{R}^d). \label{locally-bounded-cond} \end{equation} Then, the mappings $$ V_\psi: \mathcal{B}^n_{w}(\mathbb{R}^d) \rightarrow C(v\otimes (1 + |\:\cdot\:|)^n)(\mathbb{R}^{2d}_{x,\xi}) $$ and $$ V_\psi: {\dot{\mathcal{B}}}^n_{w}(\mathbb{R}^d) \rightarrow C(v\otimes (1 + |\:\cdot\:|)^n)_0(\mathbb{R}^{2d}_{x,\xi}) $$ are well-defined and continuous. \end{lemma} \begin{proof} Set $K = \operatorname{supp} \psi$ and let $\varphi \in \mathcal{B}^n_{w}(\mathbb{R}^d)$ be arbitrary. 
For all $(x,\xi) \in \mathbb{R}^{2d}$ it holds that \begin{align*} &|V_\psi\varphi(x,\xi)| v(x) (1 + |\xi|)^n \leq (1+ \sqrt{d})^n \max_{|\alpha| \leq n}|\xi^\alpha V_\psi\varphi(x,\xi)|v(x) \\ &\leq (1+ \sqrt{d})^n \max_{|\alpha| \leq n}(2\pi)^{-|\alpha|} \sum_{\beta \leq \alpha} \binom{\alpha}{\beta} \int_{K} |\partial^{\beta}\varphi(t+x)|w(t+x)\check{g}(t)|\partial^{\alpha-\beta}\psi(t)| {\rm d}t \\ &\leq (1+ \sqrt{d})^n|K| \|\psi\|_{K,n} \|\check{g}\|_{K} \max_{|\alpha| \leq n}\sup_{t \in K}|\partial^{\alpha}\varphi(t+x)|w(t+x), \end{align*} which shows the continuity of $V_\psi$ on $\mathcal{B}^n_{w}(\mathbb{R}^d)$. We still need to show that $V_\psi({\dot{\mathcal{B}}}^n_{w}(\mathbb{R}^d)) \subseteq C(v\otimes (1 + |\:\cdot\:|)^n)_0(\mathbb{R}^{2d})$. Let $\varphi \in {\dot{\mathcal{B}}}^n_{w}(\mathbb{R}^d)$ be arbitrary. The above inequality shows that $$ \lim_{|x| \to \infty} \sup_{\xi \in \mathbb{R}^d} |V_\psi\varphi(x,\xi)| v(x) (1 + |\xi|)^n = 0. $$ Hence, we only need to prove that $$ \lim_{|\xi| \to \infty} \sup_{x \in L} |V_\psi\varphi(x,\xi)| v(x) (1 + |\xi|)^n = 0 $$ for all compacts $L \Subset \mathbb{R}^d$. Since \begin{align*} &\sup_{x \in L} |V_\psi\varphi(x,\xi)| v(x) (1 + |\xi|)^n \\ &\leq (1+ \sqrt{d})^n\|v\|_{L} \sup_{x \in L}\max_{|\alpha| \leq n}(2\pi)^{-|\alpha|} \sum_{\beta \leq \alpha}\binom{\alpha}{\beta} |\mathcal{F}(\partial^{\beta}\varphi T_x \partial^{\alpha-\beta}\overline{\psi})(\xi)|, \end{align*} it suffices to show that $$ \lim_{|\xi| \to \infty} \sup_{x \in L} |\mathcal{F}(f T_x \chi)(\xi)| = \lim_{|\xi| \to \infty} \sup_{x \in L}|\langle e^{-2\pi i t \xi}, f(t) T_x \chi(t) \rangle_{(L^\infty,L^1)}| = 0 $$ for all $f \in C(\mathbb{R}^d)$ and $\chi \in \mathcal{D}(\mathbb{R}^d)$. 
Since the set $\{ e^{-2\pi i t \xi} \, : \, \xi \in \mathbb{R}^d\}$ is bounded in $L^\infty(\mathbb{R}^d_t)$ and $\lim_{|\xi| \rightarrow \infty} e^{-2\pi i t \xi} = 0$ in $L^\infty(\mathbb{R}^d_t)$ endowed with the weak-$\ast$ topology (Riemann-Lebesgue lemma), we obtain that $\langle e^{-2\pi i t \xi}, g \rangle \rightarrow 0$ as $|\xi| \rightarrow \infty$, uniformly for $g$ in compact subsets of $L^1(\mathbb{R}^d)$. The result follows by observing that the set $\{ f T_x \chi \, : \, x \in L\}$ is compact in $L^1(\mathbb{R}^d)$.\end{proof} \begin{lemma}\label{double-int-test-smooth} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$ and let $w$ and $v$ be positive continuous functions on $\mathbb{R}^d$ for which \eqref{locally-bounded-cond} holds. Then, the mappings $$ V^\ast_\psi: C(w\otimes (1+|\:\cdot\:|)^{n+d+1})(\mathbb{R}^{2d}_{x,\xi}) \rightarrow \mathcal{B}^{n}_v(\mathbb{R}^d) $$ and $$ V^\ast_\psi: C(w\otimes (1+|\:\cdot\:|)^{n+d+1})_0(\mathbb{R}^{2d}_{x,\xi}) \rightarrow {\dot{\mathcal{B}}}^{n}_v(\mathbb{R}^d) $$ are well-defined and continuous. \end{lemma} \begin{proof} Set $K = \operatorname{supp} \psi$ and let $F \in C(w\otimes (1+|\: \cdot \:|)^{n+d+1})(\mathbb{R}^{2d})$ be arbitrary. For all $t \in \mathbb{R}^d$ and $|\alpha| \leq n$ it holds that \begin{align*} |\partial^\alpha V^\ast_\psi F(t)|v(t) &\leq \sum_{\beta \leq \alpha} \binom{\alpha}{\beta} v(t)\int \int_{\mathbb{R}^{2d}} |F(x,\xi)|(2\pi|\xi|)^{|\beta|}|\partial^{\alpha-\beta}\psi(t-x)| {\rm d}x {\rm d}\xi \\ &\leq (2\pi)^n|K|\|\psi\|_{K,n}\|g\|_{K} \int_{\mathbb{R}^d}\sup_{x \in K} \frac{|F(t-x,\xi)|w(t-x)(1+|\xi|)^{n+d+1}}{(1+|\xi|)^{d+1}} {\rm d}\xi, \end{align*} which shows the continuity of $V^\ast_\psi$ on $C(w\otimes (1+|\: \cdot \:|)^{n+d+1})(\mathbb{R}^{2d})$. The fact that $V^\ast_\psi(C(w\otimes (1+|\: \cdot \:|)^{n+d+1})_0(\mathbb{R}^{2d})) \subseteq {\dot{\mathcal{B}}}^{n}_v(\mathbb{R}^d)$ follows from the above inequality and Lebesgue's dominated convergence theorem.
\end{proof} Given a decreasing weight system $\mathcal{V} = (v_N)_N$, we define the following $(LF)$-spaces of continuous functions $$ \mathcal{V}_{\operatorname{pol}}C(\mathbb{R}^{2d}) := \varinjlim_{N \in \mathbb{N}} \varprojlim_{n \in \mathbb{N}} C(v_N\otimes (1+|\: \cdot \:|)^{n})(\mathbb{R}^{2d}) $$ and $$ \mathcal{V}_{\operatorname{pol},0}C(\mathbb{R}^{2d}) := \varinjlim_{N \in \mathbb{N}} \varprojlim_{n \in \mathbb{N}} C(v_N\otimes (1+|\: \cdot \:|)^{n})_0(\mathbb{R}^{2d}). $$ Lemmas \ref{STFT-test-smooth} and \ref{double-int-test-smooth} together with \eqref{reconstruction-D-dual} directly imply the ensuing two results. \begin{proposition}\label{STFT-test-char-smooth} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$ and let $\mathcal{V} = (v_N)_N$ be a decreasing weight system satisfying \eqref{locally-bounded-decreasing}. Then, the mappings $$ V_\psi: \mathcal{B}_{\mathcal{V}}(\mathbb{R}^d) \rightarrow \mathcal{V}_{\operatorname{pol}}C(\mathbb{R}^{2d}_{x,\xi}) $$ and $$ V^\ast_\psi: \mathcal{V}_{\operatorname{pol}}C(\mathbb{R}^{2d}_{x,\xi}) \rightarrow \mathcal{B}_{\mathcal{V}}(\mathbb{R}^d) $$ are well-defined and continuous. Moreover, if $\psi \neq 0$ and $\gamma \in \mathcal{D}(\mathbb{R}^d)$ is a synthesis window for $\psi$, then $$ \frac{1}{(\gamma, \psi)_{L^2}} V^\ast_\gamma \circ V_\psi = \operatorname{id}_{\mathcal{B}_{\mathcal{V}}(\mathbb{R}^d)}. $$ \end{proposition} \begin{proposition}\label{STFT-test-char-smooth-1} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$ and let $\mathcal{V} = (v_N)_N$ be a decreasing weight system satisfying \eqref{locally-bounded-decreasing}. Then, the mappings $$ V_\psi: \dot{\mathcal{B}}_{\mathcal{V}}(\mathbb{R}^d) \rightarrow \mathcal{V}_{\operatorname{pol},0}C(\mathbb{R}^{2d}_{x,\xi}) $$ and $$ V^\ast_\psi: \mathcal{V}_{\operatorname{pol},0}C(\mathbb{R}^{2d}_{x,\xi}) \rightarrow \dot{\mathcal{B}}_{\mathcal{V}}(\mathbb{R}^d) $$ are well-defined and continuous. 
\end{proposition} Finally, we combine Proposition \ref{STFT-test-char-smooth} with a result concerning the projective description of weighted $(LF)$-spaces of continuous functions from \cite{B-B} to give an explicit system of seminorms generating the topology of $\mathcal{B}_{\mathcal{V}}(\mathbb{R}^d)$ in case $\mathcal{V}$ satisfies $(\Omega)$; this will be of vital importance later on. A double sequence $\mathcal{U} = (u_{N,n})_{(N,n) \in \mathbb{N}^2}$ of positive continuous functions on $\mathbb{R}^d$ is called a \emph{weight system} if $u_{N,n}(x) \geq u_{N+1,n}(x)$ and $u_{N,n}(x) \leq u_{N,n+1}(x)$ for all $N,n \in \mathbb{N}$ and $x \in \mathbb{R}^d$. We define the following $(LF)$-space $$ \mathcal{U}C(\mathbb{R}^d) := \varinjlim_{N \in \mathbb{N}} \varprojlim_{n \in \mathbb{N}} Cu_{N,n}(\mathbb{R}^d). $$ The \emph{maximal Nachbin family associated with $\mathcal{U}$}, denoted by $ \overline{U}=\overline{U}(\mathcal{U})$, is given by the space consisting of all non-negative upper semicontinuous functions $u$ on $\mathbb{R}^d$ for which there is a sequence of natural numbers $(n_N)_{N \in \mathbb{N}}$ such that $\sup_{x \in \mathbb{R}^d} u(x)/u_{N,n_N}(x) < \infty$ for all $N \in \mathbb{N}$. We define $C\overline{U}(\mathbb{R}^d)$ as the space consisting of all $f \in C(\mathbb{R}^d)$ such that $\| f \|_u < \infty$ for all $u \in \overline{U}$ and endow it with the locally convex topology generated by the system of seminorms $\{ \| \: \cdot \: \|_u \, : \, u \in \overline{U}\}$. Clearly, $\mathcal{U}C(\mathbb{R}^d)$ is continuously included in $C\overline{U}(\mathbb{R}^d)$. The problem of projective description in this context is to find conditions on $\mathcal{U}$ which ensure that $\mathcal{U}C(\mathbb{R}^d)$ and $C\overline{U}(\mathbb{R}^d)$ coincide algebraically and/or topologically. This problem was thoroughly studied by Bierstedt and Bonet in \cite{B-B}. 
We shall use the following result of these authors\footnote{In fact, this result holds for weight systems defined on a general locally compact Hausdorff space.}: \begin{theorem}[{\cite[Cor.\ 5, p.\ 42]{B-B}}] \label{proj-desc-cont-LF} Let $\mathcal{U} = (u_{N,n})_{N,n}$ be a weight system satisfying condition $(Q)$, i.e., \begin{gather*} \forall N \in \mathbb{N} \, \exists M \geq N \, \exists n \in \mathbb{N} \, \forall K \geq M \, \forall m \in \mathbb{N} \, \forall \varepsilon > 0 \, \exists k \in \mathbb{N} \, \exists C > 0\, \forall x \in \mathbb{R}^d: \\ u_{M,m}(x) \leq \varepsilon u_{N,n}(x) + Cu_{K,k}(x). \end{gather*} Then, $\mathcal{U}C(\mathbb{R}^d) = C\overline{U}(\mathbb{R}^d)$ topologically. \end{theorem} We are now able to show the following result: \begin{corollary}\label{proj-desc-smooth-functions} Let $\mathcal{V} = (v_N)_N$ be a decreasing weight system satisfying $(\Omega)$. Then, the weight system $$ \mathcal{V} _{\operatorname{pol}} = (v_N\otimes (1+|\: \cdot \:|)^{n})_{N,n} $$ (defined on $\mathbb{R}^{2d}$) satisfies $(Q)$. Consequently, if $\mathcal{V}$ satisfies \eqref{locally-bounded-decreasing} and $\psi \in \mathcal{D}(\mathbb{R}^d) \backslash \{0\}$ is fixed, $f \in \mathcal{D}'(\mathbb{R}^d)$ belongs to $\mathcal{B}_{\mathcal{V}}(\mathbb{R}^d)$ if and only if $\|V_\psi f \|_u < \infty$ for all $u \in \overline{U}(\mathcal{V} _{\operatorname{pol}})$. Moreover, the topology of $\mathcal{B}_{\mathcal{V}}(\mathbb{R}^d)$ is generated by the system of seminorms $\{ \|V_\psi ( \: \cdot \: ) \|_u \, : \, u \in \overline{U}(\mathcal{V} _{\operatorname{pol}}) \}$. \end{corollary} \begin{proof} The second part follows from the first one, Theorem \ref{proj-desc-cont-LF}, and Proposition \ref{STFT-test-char-smooth}. We now prove that $\mathcal{V} _{\operatorname{pol}}$ satisfies $(Q)$ if $\mathcal{V}$ satisfies $(\Omega)$ by employing the same idea as in \cite[Thm.\ 5.1]{Vogt}. We shall show $(Q)$ with $n = 0$. 
Let $N \in \mathbb{N}$ be arbitrary and choose $M \geq N$ according to $(\Omega)$. Next, let $K \geq M$ and $m \in \mathbb{N}$ be arbitrary. Condition $(\Omega)$ implies that there are $\theta \in (0,1)$ and $C > 0$ such that $v_M(x) \leq C {v_N(x)}^{1-\theta}{v_K(x)}^{\theta}$ for all $x \in \mathbb{R}^d$. Choose $k \in \mathbb{N}$ so large that $k\theta \geq m$. Let $\varepsilon > 0$ be arbitrary and set $C' = (C\varepsilon^{\theta-1})^{1/\theta}$. We claim that $$ v_{M}(x)(1+|\xi|)^m \leq \varepsilon v_{N}(x) + C'v_{K}(x)(1+|\xi|)^k, \qquad (x,\xi) \in \mathbb{R}^{2d}. $$ Let $(x,\xi) \in \mathbb{R}^{2d}$ be arbitrary. If $v_{M}(x)(1+|\xi|)^m \leq \varepsilon v_{N}(x)$, we are done. So, we may assume that $v_{M}(x)(1+|\xi|)^m > \varepsilon v_{N}(x)$. Hence, \begin{align*} v_{M}(x)(1+|\xi|)^m &\leq C{v_N(x)}^{1-\theta}(v_K(x)(1+|\xi|)^k)^{\theta} \\ &\leq C\varepsilon^{\theta-1} (v_M(x)(1+|\xi|)^m)^{1-\theta}(v_K(x)(1+|\xi|)^k)^{\theta}, \end{align*} and thus $v_{M}(x)(1+|\xi|)^m \leq C'v_{K}(x)(1+|\xi|)^k$. \end{proof} \section{On a class of weighted $L^1$ convolutor spaces}\label{L1-convolutors} In this section, we introduce a class of weighted $L^1$ convolutor spaces $\mathcal{O}'_C(\mathcal{D}, L^1_{\mathcal{W}})$ defined via an increasing sequence of positive continuous functions $\mathcal{W}$. In the main theorem of this section we characterize the increasing weight systems $\mathcal{W}$ for which the space $\mathcal{O}'_C(\mathcal{D}, L^1_{\mathcal{W}})$ is ultrabornological. To achieve this goal, we first study various structural and topological properties of these spaces; most notably, we determine a topological predual and give an explicit description of the dual of $\mathcal{O}'_C(\mathcal{D}, L^1_{\mathcal{W}})$. Since the proofs of the latter results will heavily rely on the mapping properties of the STFT on $\mathcal{O}'_C(\mathcal{D}, L^1_{\mathcal{W}})$, we start with a discussion of the STFT on this space.
Let $w$ be a positive measurable function on $\mathbb{R}^d$. We define $L^1_w(\mathbb{R}^d)$ as the Banach space consisting of all measurable functions $f$ on $\mathbb{R}^d$ such that $$ \|f\|_{L^1_w} := \int_{\mathbb{R}^d}|f(x)|w(x) {\rm d}x < \infty. $$ A pointwise increasing sequence $\mathcal{W} = (w_N)_{N \in \mathbb{N}}$ of positive continuous functions on $\mathbb{R}^d$ is called an \emph{increasing weight system}. Set $$ L^1_\mathcal{W}(\mathbb{R}^d) := \varprojlim_{N \in \mathbb{N}} L^1_{w_N}(\mathbb{R}^d). $$ We are interested in the following convolutor spaces $$ \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}}) := \{ f \in \mathcal{D}'(\mathbb{R}^d) \, : \, f \ast \varphi \in L^1_\mathcal{W}(\mathbb{R}^d) \mbox{ for all } \varphi \in \mathcal{D}(\mathbb{R}^d) \}. $$ For $f \in \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ fixed, the mapping $\mathcal{D}(\mathbb{R}^d) \rightarrow L^1_{\mathcal{W}}(\mathbb{R}^d),$ $\varphi \mapsto f \ast \varphi$ is continuous, as follows from the continuity of the mapping $\mathcal{D}(\mathbb{R}^d) \rightarrow \mathcal{E}(\mathbb{R}^d),$ $\varphi \mapsto f \ast \varphi$ and the closed graph theorem. We endow $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ with the initial topology with respect to the mapping $$ \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}}) \rightarrow L_b(\mathcal{D}(\mathbb{R}^d),L^1_{\mathcal{W}}(\mathbb{R}^d)), \quad f \mapsto (\varphi \mapsto f \ast \varphi). $$ Next, we define $\mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)$ as the Fr\'echet space consisting of all $\varphi \in C^\infty(\mathbb{R}^d)$ such that $ \max_{|\alpha| \leq n} \| \partial^\alpha \varphi\|_{L^1_{w_N}} < \infty $ for all $N,n \in \mathbb{N}$. 
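For a concrete instance of the weighted norm $\|\:\cdot\:\|_{L^1_w}$, consider the Gaussian $f(x) = e^{-x^2}$ against the exponentially growing weight $w(x) = e^{|x|}$ in dimension $d = 1$; completing the square gives $\|f\|_{L^1_w} = e^{1/4}\sqrt{\pi}\,(1 + \operatorname{erf}(1/2))$. The Python sketch below is our own illustration (the choices of $f$, $w$ and the quadrature parameters are ad hoc, not taken from the text) and checks this numerically with a midpoint rule.

```python
import math

# Midpoint-rule approximation of ||f||_{L^1_w} = \int |f(x)| w(x) dx in dimension d = 1;
# the interval [a, b] and step count n are ad hoc illustrative choices.
def weighted_L1_norm(f, w, a=-15.0, b=15.0, n=200000):
    h = (b - a) / n
    return h * sum(abs(f(a + (i + 0.5) * h)) * w(a + (i + 0.5) * h) for i in range(n))

f = lambda x: math.exp(-x * x)    # Gaussian
w = lambda x: math.exp(abs(x))    # exponentially growing positive continuous weight

# Completing the square: \int e^{-x^2 + |x|} dx = e^{1/4} sqrt(pi) (1 + erf(1/2))
exact = math.exp(0.25) * math.sqrt(math.pi) * (1.0 + math.erf(0.5))
print(weighted_L1_norm(f, w), exact)  # both approximately 3.4605
```

The truncation to $[-15,15]$ is harmless here since the integrand decays like $e^{-x^2 + |x|}$.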
Set $$ \mathcal{O}'_C(\mathcal{D},\mathcal{D}_{L^1_{\mathcal{W}}}) := \{ f \in \mathcal{D}'(\mathbb{R}^d) \, : \, f \ast \varphi \in \mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)\mbox{ for all } \varphi \in \mathcal{D}(\mathbb{R}^d) \} $$ and endow this space with the initial topology with respect to the mapping $$ \mathcal{O}'_C(\mathcal{D}, \mathcal{D}_{L^1_{\mathcal{W}}}) \rightarrow L_b(\mathcal{D}(\mathbb{R}^d), \mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)), \quad f \mapsto (\varphi \mapsto f \ast \varphi). $$ The next lemma follows from the fact that $\partial^\alpha (f \ast \varphi) = f \ast \partial^\alpha \varphi$ for all $f \in \mathcal{D}'(\mathbb{R}^d)$ and $\varphi \in \mathcal{D}(\mathbb{R}^d)$. \begin{lemma}\label{new} Let $\mathcal{W} = (w_N)_N$ be an increasing weight system. Then, $$ \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}}) = \mathcal{O}'_C(\mathcal{D}, \mathcal{D}_{L^1_{\mathcal{W}}}) $$ topologically. \end{lemma} Finally, we introduce the following condition on $\mathcal{W} = (w_N)_N$ (cf.\ \eqref{locally-bounded-decreasing}): \begin{equation} \forall N \in \mathbb{N}\, \exists \widetilde{N} \geq N \, : \, h_{N,\widetilde{N}} = \sup_{x \in \mathbb{R}^d}\frac{w_{N}(x + \: \cdot \:)}{w_{\widetilde{N}}(x)} \in L^\infty_{\operatorname{loc}}(\mathbb{R}^d). \label{locally-bounded-increasing} \end{equation} \subsection{Characterization via the STFT} We now discuss the mapping properties of the STFT on the space $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$. To do so, we need to introduce some more notation. We set $\mathcal{P} := ((1+| \: \cdot \: |)^{-n})_{n}$, a decreasing weight system, and write $$ \mathcal{P}C(\mathbb{R}^d) := \varinjlim_{n \in \mathbb{N}} C(1+| \: \cdot \: |)^{-n}(\mathbb{R}^d) $$ for the corresponding $(LB)$-space of continuous functions. Furthermore, we denote by $\overline{P}$ the maximal Nachbin family associated to $\mathcal{P}$. 
More explicitly, the space $\overline{P}$ consists of all non-negative upper semicontinuous functions $p$ on $\mathbb{R}^d$ such that $$\sup_{\xi \in \mathbb{R}^d} p(\xi)(1+|\xi|)^{n} < \infty$$ for all $n \in \mathbb{N}$. For a non-negative function $f$ on $\mathbb{R}^d$ it holds that \begin{itemize} \item[$(i)$] $\sup_{\xi \in \mathbb{R}^d} f(\xi) (1+|\xi|)^{-n} < \infty$ for some $n \in \mathbb{N}$ if and only if $\sup_{\xi \in \mathbb{R}^d} f(\xi) p(\xi) < \infty$ for all $p \in \overline{P}$. \item[$(ii)$] $\sup_{\xi \in \mathbb{R}^d} f(\xi)/ p(\xi) < \infty$ for some $p \in \overline{P}$ if and only if $\sup_{\xi \in \mathbb{R}^d} f(\xi)(1+|\xi|)^{n} < \infty$ for all $n \in \mathbb{N}$. \end{itemize} We shall use these properties without explicitly referring to them. Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system. By \cite[Prop.\ 1.5]{Komatsu3} and \cite[Thm.\ 3.1(d)]{B-M-S}, we may identify $L^1_{\mathcal{W}}(\mathbb{R}_x^d) \widehat{\otimes}_\varepsilon \mathcal{P}C(\mathbb{R}_\xi^d)$ with the space consisting of all $f \in C(\mathbb{R}^d_\xi, L^1_{\mathcal{W}}(\mathbb{R}^d_x))$ satisfying the following property: For all $N \in \mathbb{N}$ there is $n \in \mathbb{N}$ such that $$ \sup_{\xi \in \mathbb{R}^d}\|f(\: \cdot \:, \xi)\|_{L^1_{w_N}} (1+|\xi|)^{-n}< \infty. $$ Hence, a function $f \in C(\mathbb{R}^d_\xi, L^1_{\mathcal{W}}(\mathbb{R}^d_x))$ belongs to $L^1_{\mathcal{W}}(\mathbb{R}_x^d) \widehat{\otimes}_\varepsilon \mathcal{P}C(\mathbb{R}_\xi^d)$ if and only if $$ \| f \|_{L^1_{w_N},p} := \sup_{\xi \in \mathbb{R}^d}\|f(\: \cdot \:, \xi)\|_{L^1_{w_{N}}} p(\xi) < \infty $$ for all $N \in \mathbb{N}$ and $p \in \overline{P}$. By \cite[Thm.\ 3.1(c)]{B-M-S}, the topology of $L^1_{\mathcal{W}}(\mathbb{R}_x^d) \widehat{\otimes}_\varepsilon\mathcal{P}C(\mathbb{R}_\xi^d)$ is generated by the system of seminorms $\{ \| \: \cdot \: \|_{L^1_{w_N},p} \, : \, N \in \mathbb{N}, p \in \overline{P} \}$. 
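To make membership in $\overline{P}$ concrete: the function $p(\xi) = e^{-|\xi|}$ belongs to $\overline{P}$, since $e^{-|\xi|}(1+|\xi|)^{n}$ is bounded for every $n \in \mathbb{N}$; elementary calculus gives the supremum $n^{n}e^{-(n-1)}$, attained at $|\xi| = n-1$ for $n \geq 1$. The following Python sketch is our own illustration (the grid and the range of $n$ are ad hoc choices) and verifies this numerically.

```python
import math

# p(xi) = e^{-|xi|} lies in the Nachbin family: sup_xi p(xi)(1+|xi|)^n < infinity for all n.
p = lambda xi: math.exp(-abs(xi))

grid = [i / 1000.0 for i in range(0, 50001)]  # the function is even in xi, so xi >= 0 suffices
for n in range(5):
    sup = max(p(xi) * (1.0 + xi) ** n for xi in grid)
    # elementary calculus: the supremum equals n^n e^{-(n-1)} (= 1 for n = 0), at xi = max(n-1, 0)
    exact = 1.0 if n == 0 else n ** n * math.exp(-(n - 1))
    print(n, sup, exact)
```

Since the maximizing points $\xi = n-1$ lie on the grid, the grid supremum matches the closed form to machine precision.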
We then have: \begin{proposition}\label{STFT-convolutors} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$ and let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{locally-bounded-increasing}. Then, the mappings $$ V_\psi: \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})\rightarrow L^1_{\mathcal{W}}(\mathbb{R}_x^d) \widehat{\otimes}_\varepsilon \mathcal{P}C(\mathbb{R}_\xi^d) $$ and $$ V^\ast_\psi: L^1_{\mathcal{W}}(\mathbb{R}_x^d) \widehat{\otimes}_\varepsilon \mathcal{P}C(\mathbb{R}_\xi^d) \rightarrow \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}}) $$ are well-defined and continuous. \end{proposition} \begin{proof} We first consider $V_\psi$. Observe that $V_\psi f( \: \cdot \:,\xi) \in L^1_{\mathcal{W}}(\mathbb{R}^d)$ for $\xi \in \mathbb{R}^d$ fixed, as follows from the representation $V_\psi f(x, \xi) = e^{-2\pi i x\xi} (f \ast M_\xi \check{\overline{\psi}})(x)$. We now prove that $\mathbb{R}^d \rightarrow L^1_{\mathcal{W}}(\mathbb{R}^d), \, \xi \mapsto V_\psi f(\: \cdot \: , \xi)$ is continuous. Since the mappings $\mathbb{R}^d \rightarrow \mathcal{D}(\mathbb{R}^d), \, \xi \mapsto M_\xi \check{\overline{\psi}}$ and $\mathcal{D}(\mathbb{R}^d) \rightarrow L^1_{\mathcal{W}}(\mathbb{R}^d), \, \varphi \mapsto f \ast \varphi$ are continuous, the mapping \begin{equation} \mathbb{R}^d \rightarrow L^1_{\mathcal{W}}(\mathbb{R}^d), \quad \xi \mapsto f \ast M_\xi \check{\overline{\psi}} \label{continuity-conv-L1} \end{equation} is also continuous. Let $\xi_0, \xi \in \mathbb{R}^d$ and $N \in \mathbb{N}$ be arbitrary. 
We have that \begin{align*} &\| V_\psi f(x, \xi) - V_\psi f(x , \xi_0) \|_{L^1_{w_N},x} \\ &= \| e^{-2\pi i x\xi} (f \ast M_\xi \check{\overline{\psi}})(x) - e^{-2\pi i x \xi_0} (f \ast M_{\xi_0} \check{\overline{\psi}})(x)\|_{L^1_{w_N},x} \\ &\leq \| f \ast M_\xi \check{\overline{\psi}} - f \ast M_{\xi_0} \check{\overline{\psi}}\|_{L^1_{w_N}} + \| (e^{-2\pi i x \xi} - e^{-2\pi i x \xi_0}) (f \ast M_{\xi_0} \check{\overline{\psi}})(x)\|_{L^1_{w_N},x}. \end{align*} The first term tends to zero as $\xi \rightarrow \xi_0$ because the mapping \eqref{continuity-conv-L1} is continuous, while the second term tends to zero as $\xi \rightarrow \xi_0$ because of Lebesgue's dominated convergence theorem. Let $N \in \mathbb{N}$ and $p \in \overline{P}$ be arbitrary. The set $B = \{ M_\xi \check{\overline{\psi}} p(\xi) \, : \, \xi \in \mathbb{R}^d\}$ is bounded in $\mathcal{D}(\mathbb{R}^d)$. Hence, $$ \|V_\psi f\|_{L^1_{w_N},p} = \sup_{\xi \in \mathbb{R}^d} \| f \ast M_\xi \check{\overline{\psi}} \|_{L^1_{w_N}} p(\xi) = \sup_{\varphi \in B} \| f \ast \varphi \|_{L^1_{w_N}}, $$ which shows that $V_\psi$ is well-defined and continuous. Next, we treat $V^\ast_\psi$. Let $B \subset \mathcal{D}(\mathbb{R}^d)$ bounded and $N \in \mathbb{N}$ be arbitrary. Choose $\widetilde{N} \geq N$ according to \eqref{locally-bounded-increasing}. Proposition \ref{STFT-D} implies that there are $K \Subset \mathbb{R}^d$ and $p \in \overline{P}$ such that $\operatorname{supp} V_\psi \check{\overline{\varphi}} \subseteq K \times \mathbb{R}^d$ and $|V_\psi \check{\overline{\varphi}}(x,\xi)| \leq p(\xi)$ for all $(x,\xi) \in K \times \mathbb{R}^d$ and $\varphi \in B$. Set $\widetilde{p} = p(\:\cdot\:)(1+ |\: \cdot \:|)^{d+1} \in \overline{P}$. 
Hence, \begin{align*} \sup_{\varphi \in B} \| V^\ast_\psi F \ast \varphi \|_{L^1_{w_N}} &\leq \sup_{\varphi \in B} \int \int \int_{\mathbb{R}^{3d}} |F(x,\xi)| |V_\psi \check{\overline{\varphi}}(x-t,\xi)| w_N(t) {\rm d}x {\rm d}\xi {\rm d}t \\ &\leq |K|\sup_{x \in K} \int \int_{\mathbb{R}^{2d}} |F(x +t,\xi)| p(\xi) w_N(t) {\rm d}\xi {\rm d}t \\ & \leq C \|F\|_{\widetilde{p}, L^1_{w_{\widetilde{N}}}} \end{align*} for all $F \in L^1_{\mathcal{W}}(\mathbb{R}^d) \widehat{\otimes}_\varepsilon \mathcal{P}C(\mathbb{R}^d)$, where $ C = |K| \|\check{h}_{N,\widetilde{N}}\|_{K} \int_{\mathbb{R}^d} (1+|\xi|)^{-(d+1)} {\rm d}\xi < \infty. $ This shows that $V^\ast_\psi$ is well-defined and continuous. \end{proof} Proposition \ref{STFT-convolutors} together with \eqref{reconstruction-D-dual} implies the following corollary: \begin{corollary}\label{completeness-OC} Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{locally-bounded-increasing}. Then, $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ is complete. \end{corollary} \subsection{A predual} Given an increasing weight system $\mathcal{W} =(w_N)_{N}$, we define its dual decreasing weight system as $\mathcal{W}^\circ = (1/w_N)_{N}$. The aim of this subsection is to show that the dual space $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'$, endowed with a suitable $\mathfrak{S}$-topology, is topologically equal to $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$. We start by introducing the following general notion: \begin{definition} Let $E = \varinjlim_{N} E_N$ be an $(LF)$-space. We define $$ \mathfrak{S} = \{ B \subset E \, : \, B \mbox{ is contained and bounded in $E_N$ for some $N \in \mathbb{N}$} \} $$ and write $bs(E',E)$ for the $\mathfrak{S}$-topology on $E'$ (the topology of uniform convergence on the sets of $\mathfrak{S}$). Grothendieck's factorization theorem implies that $bs(E',E)$ does not depend on the defining inductive spectrum of $E$.
\end{definition} \begin{remark} Let $E = \varinjlim_{N} E_N$ be an $(LF)$-space. The bipolar theorem implies that $bs(E', E) = b(E',E)$ if and only if $E$ is \emph{quasiregular}, i.e., if for every bounded set $B$ in $E$ there is $N \in \mathbb{N}$ and a bounded set $A$ in $E_N$ such that $B \subseteq \overline{A}^{E}$. We refer to \cite{D-D} for more information on quasiregular $(LF)$-spaces. \end{remark} We then have: \begin{theorem}\label{thm-predual} Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{locally-bounded-increasing}. Then, $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{bs}= \mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$. \end{theorem} We need some preparation. The proof of the next proposition is standard and therefore omitted. \begin{proposition} \label{convspaces-proposition-structural} Let $w$ be a positive continuous function and let $f \in (\dot{\mathcal{B}}_{1/w}(\mathbb{R}^d))'$. Then, there are $n \in \mathbb{N}$ and regular complex Borel measures $\mu_\alpha \in (C_0(\mathbb{R}^d))'$, $|\alpha| \leq n$, such that \begin{equation} \langle f, \varphi \rangle = \sum_{|\alpha| \leq n} (-1)^{|\alpha|} \int_{\mathbb{R}^d} \frac{\partial^\alpha \varphi(t)}{w(t)} {\rm d}\mu_\alpha(t), \qquad \varphi \in \dot{\mathcal{B}}_{1/w}(\mathbb{R}^d). \label{repr-dual-bdot} \end{equation} \end{proposition} \begin{corollary}\label{conv-b-dot} Let $w$ and $v$ be positive continuous functions on $\mathbb{R}^d$ satisfying \eqref{locally-bounded-cond} and let $f \in (\dot{\mathcal{B}}_{1/w}(\mathbb{R}^d))'$. Then, \begin{itemize} \item[$(i)$] $f\ast \varphi \in L^1_v(\mathbb{R}^d)$ for all $\varphi \in \mathcal{D}(\mathbb{R}^d)$. \item[$(ii)$] For all $\varphi \in \mathcal{D}(\mathbb{R}^d)$ and $h \in C(1/v)_0(\mathbb{R}^d)$ it holds that $\check{\varphi} \ast h \in \dot{\mathcal{B}}_{1/w}(\mathbb{R}^d)$ and $$ \int_{\mathbb{R}^d}( f \ast \varphi) (x) h(x) {\rm d}x = \langle f, \check{\varphi} \ast h \rangle. 
$$ \end{itemize} \end{corollary} \begin{proof} Assume that $f$ is represented via \eqref{repr-dual-bdot}. In particular, $$ f\ast \varphi = \sum_{|\alpha| \leq n} \int_{\mathbb{R}^d} \frac{\partial^\alpha\varphi(\: \cdot \: - t)}{w(t)} {\rm d}\mu_\alpha(t), \qquad \varphi \in \mathcal{D}(\mathbb{R}^d). $$ $(i)$ Let $\varphi \in \mathcal{D}(\mathbb{R}^d)$ be arbitrary and set $K = \operatorname{supp} \varphi$. We have that $$ \| f\ast \varphi \|_{L^1_v} \leq \sum_{|\alpha| \leq n} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \frac{|\partial^\alpha\varphi(x - t)|}{w(t)} {\rm d}|\mu_\alpha|(t) v(x){\rm d}x , $$ where $|\mu_\alpha|$ denotes the total variation measure associated with $\mu_\alpha$. For each $|\alpha| \leq n$ it holds that \begin{align*} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} |\partial^\alpha\varphi(x - t)| v(x) {\rm d}x \frac{1}{w(t)} {\rm d} |\mu_\alpha|(t) &= \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} |\partial^\alpha\varphi(x)| v(x + t) {\rm d}x \frac{1}{w(t)} {\rm d} |\mu_\alpha|(t) \\ & \leq |K| \|\varphi\|_{K,n} \|g\|_K |\mu_\alpha|(\mathbb{R}^d) < \infty. \end{align*} Hence, Fubini's theorem implies that $\| f\ast \varphi \|_{L^1_v} < \infty$. $(ii)$ Let $\varphi \in \mathcal{D}(\mathbb{R}^d)$ and $h \in C(1/v)_0(\mathbb{R}^d)$ be arbitrary. Set $K = \operatorname{supp} \varphi$. For each $\alpha \in \mathbb{N}^d$ we have that \begin{equation} \frac{|\partial^{\alpha}(\check{\varphi} \ast h)(x)|}{w(x)} \leq |K| \|\partial^\alpha \varphi\|_K \|g\|_K \sup_{t \in K} \frac{|h(t+x)|}{v(t+x)}, \label{bounded-set-later} \end{equation} which tends to zero as $|x| \to \infty$, that is, $\check{\varphi} \ast h \in \dot{\mathcal{B}}_{1/w}(\mathbb{R}^d)$. 
Finally, \begin{align*} \int_{\mathbb{R}^d} (f \ast \varphi) (x) h(x) {\rm d}x &= \sum_{|\alpha| \leq n} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \frac{\partial^\alpha\varphi(x - t)}{w(t)} {\rm d}\mu_\alpha(t)h(x) {\rm d}x \\ & = \sum_{|\alpha| \leq n} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \partial^\alpha\varphi(x - t)h(x) {\rm d}x \frac{1}{w(t)} {\rm d}\mu_\alpha(t) \\ & = \sum_{|\alpha| \leq n} (-1)^{|\alpha|} \int_{\mathbb{R}^d} \frac{\partial^\alpha(\check{\varphi} \ast h)(t)}{w(t)} {\rm d}\mu_\alpha(t) \\ &= \langle f, \check{\varphi} \ast h \rangle. \qedhere \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{thm-predual}] From Corollary \ref{conv-b-dot}$(i)$, we obtain that $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))' \subseteq \mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$. We now show that this inclusion holds continuously if we endow the former space with its $bs$-topology. For this, we need the following result from measure theory (cf.\ \cite[Thm.\ 6.9 and Thm.\ 6.13]{Rudin}): Let $w$ be a positive continuous function on $\mathbb{R}^d$ and let $f \in L^1_w(\mathbb{R}^d)$. Denote by $B$ the unit ball in $C_0(\mathbb{R}^d)$. Then, $$ \|f\|_{L^1_w} = \sup_{h \in B} \left | \int_{\mathbb{R}^d} f(x)h(x) w(x){\rm d}x \right|. $$ Let $B' \subset \mathcal{D}(\mathbb{R}^d)$ bounded and $N \in \mathbb{N}$ be arbitrary. By the above remark, we have that $$ \sup_{\varphi \in B'} \| f \ast \varphi\|_{L^1_{w_N}} = \sup_{\varphi \in B'} \sup_{h \in B} \left | \int_{\mathbb{R}^d} (f\ast \varphi)(x) h(x)w_N(x) {\rm d}x \right | = \sup_{\varphi \in B'} \sup_{h \in B} \left | \langle f, \check{\varphi} \ast (hw_N) \rangle \right|, $$ for all $f \in (\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'$, where the last equality follows from Corollary \ref{conv-b-dot}$(ii)$. Choose $\widetilde{N} \geq N$ according to \eqref{locally-bounded-increasing}.
It is now enough to notice that the set $$ \{ \check{\varphi} \ast (hw_N) \, : \, \varphi \in B', h \in B \} $$ is bounded in $\dot{\mathcal{B}}_{1/w_{\widetilde{N}}}(\mathbb{R}^d)$, as follows from \eqref{bounded-set-later} with $v = w_N $ and $w =w_{\widetilde{N}}$. Next, we show that $\mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$ is continuously included in $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{bs}$. Let $\gamma \in \mathcal{D}(\mathbb{R}^d)$. By \eqref{reconstruction-D-dual} and Proposition \ref{STFT-convolutors}, it suffices to show that the mapping $$ V^\ast_\gamma: L^1_{\mathcal{W}}(\mathbb{R}_x^d) \widehat{\otimes}_\varepsilon \mathcal{P}C(\mathbb{R}_\xi^d) \rightarrow (\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{bs} $$ is well-defined and continuous. Let $F \in L^1_{\mathcal{W}}(\mathbb{R}_x^d) \widehat{\otimes}_\varepsilon \mathcal{P}C(\mathbb{R}_\xi^d)$ be arbitrary. The linear functional $$ f: \dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d) \rightarrow \mathbb{C}, \quad \varphi \mapsto \int \int_{\mathbb{R}^{2d}} F(x,\xi) V_{\overline{\gamma}} \varphi(x,-\xi){\rm d}x {\rm d}\xi $$ is well-defined and continuous by Lemma \ref{STFT-test-smooth}. Since $V^\ast_\gamma F = f_{| \mathcal{D}(\mathbb{R}^d)}$, we obtain that $V^\ast_\gamma F \in (\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'$ and $$ \langle V_\gamma^\ast F, \varphi \rangle = \int \int_{\mathbb{R}^{2d}} F(x,\xi) V_{\overline{\gamma}} \varphi(x,-\xi){\rm d}x {\rm d}\xi, \qquad \varphi \in \dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d), $$ whence $V^\ast_\gamma$ is well-defined. Finally, we show that it is continuous. Let $N \in \mathbb{N}$ and $B \subset \dot{\mathcal{B}}_{1/w_N}(\mathbb{R}^d)$ bounded be arbitrary. Choose $\widetilde{N} \geq N$ according to \eqref{locally-bounded-increasing}.
Lemma \ref{STFT-test-smooth} implies that there is $p \in \overline{P}$ such that $|V_{\overline{\gamma}} \varphi(x,-\xi)| \leq w_{\widetilde{N}}(x) p(\xi)$ for all $(x,\xi) \in \mathbb{R}^{2d}$ and $\varphi \in B$. Set $\widetilde{p} = p(\: \cdot \:) (1+ |\: \cdot \:|)^{d+1} \in \overline{P}$. Hence, \begin{align*} \sup_{\varphi \in B} |\langle V^\ast_\gamma F, \varphi \rangle | & \leq \sup_{\varphi \in B} \int \int_{\mathbb{R}^{2d}} |F(x,\xi)| |V_{\overline{\gamma}} \varphi(x,-\xi)| {\rm d}x {\rm d}\xi\\ &\leq \int \int_{\mathbb{R}^{2d}} |F(x,\xi)|w_{\widetilde{N}}(x) p(\xi) {\rm d}x {\rm d}\xi \leq \|F\|_{\widetilde{p}, L^1_{w_{\widetilde{N}}}}\int_{\mathbb{R}^d} (1+|\xi|)^{-(d+1)}{\rm d}\xi \end{align*} for all $F \in L^1_{\mathcal{W}}(\mathbb{R}_x^d) \widehat{\otimes}_\varepsilon \mathcal{P}C(\mathbb{R}_\xi^d)$. \end{proof} From now on, we shall interchangeably use $\mathcal{O}'_C(\mathcal{D}, L^1_{\mathcal{W}})$ and $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{bs}$ depending on which point of view is most suitable for the given situation. We shall not explicitly refer to Theorem \ref{thm-predual} when we do this. For later use, we point out the following corollary. \begin{corollary}\label{desing-dual-b-dot} Let $\psi \in \mathcal{D}(\mathbb{R}^d) \backslash \{0\}$ and $\gamma \in \mathcal{D}(\mathbb{R}^d)$ be a synthesis window for $\psi$. Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{locally-bounded-increasing}. Then, the desingularization formula $$ \langle f, \varphi \rangle = \frac{1}{(\gamma,\psi)_{L^2}} \int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) V_{\overline{\gamma}} \varphi(x,-\xi) {\rm d}x {\rm d}\xi $$ holds for all $f \in (\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'$ and $\varphi \in \dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$. \end{corollary} \subsection{Topological properties} We now take a closer look at the locally convex structure of $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$.
Most importantly, we are going to show that $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ is ultrabornological if and only if $\mathcal{W}^\circ$ satisfies $(\Omega)$. Our main tools will be Theorem \ref{completeness-ind-lim-smooth} and an explicit description of the strong dual of $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$. The following technical lemma is needed below. \begin{lemma}\label{density-smooth} Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{locally-bounded-increasing}. Then, we have the dense continuous inclusion $\mathcal{D}(\mathbb{R}^d) \hookrightarrow \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$. \end{lemma} \begin{proof} Notice that $\mathcal{D}(\mathbb{R}^d) \subset \mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d) \subset\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ with continuous inclusion mappings. We shall prove that both inclusion mappings have dense range. We start by showing that $\mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)$ is dense in $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$. Let $f \in \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ be arbitrary. Choose $\chi \in \mathcal{D}(\mathbb{R}^d)$ with $\int_{\mathbb{R}^d} \chi(x) {\rm d}x =1$ and $\operatorname{supp} \chi \subseteq \overline{B}(0,1)$. Set $\chi_k = k^d \chi(k \: \cdot \:)$ and $f_k = f \ast \chi_k \in \mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)$ for $k \in \mathbb{Z}_+$ (cf.\ Lemma \ref{new}). We claim that $f_k \rightarrow f$ in $\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$. Let $B \subset \mathcal{D}(\mathbb{R}^d)$ bounded and $N \in \mathbb{N}$ be arbitrary. Choose $\widetilde{N} \geq N$ according to \eqref{locally-bounded-increasing}. 
Hence, \begin{align*} &\sup_{\varphi \in B} \| f_k \ast \varphi - f \ast \varphi \|_{L^1_{w_N}} \\ &\leq \sup_{\varphi \in B} \| f \ast \varphi \ast \chi_k - f \ast \varphi \|_{L^1_{w_N}} \\ &\leq \sup_{\varphi \in B} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} |\chi(t)| | (f\ast \varphi) (x-t/k) - (f \ast \varphi)(x)| {\rm d}t w_N(x) {\rm d}x \\ &\leq \frac{1}{k}\sup_{\varphi \in B} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} |\chi(t)| |t| \sum_{|\beta|=1} \int_{0}^1 |\partial^{\beta}(f\ast \varphi)(x-\gamma t/k)| \mathrm{d}\gamma {\rm d}t w_N(x) {\rm d}x \\ &\leq \frac{1}{k} \sum_{|\beta|=1} \sup_{\varphi \in B} \int_0^1 \int_{\overline{B}(0,1)} |\chi(t)||t| h_{N, \widetilde{N}}(\gamma t/k) \times \\ &\phantom{\leq} \int_{\mathbb{R}^d} |(f\ast \partial^{\beta}\varphi)(x-\gamma t/k)| w_{\widetilde{N}}(x-\gamma t/k) {\rm d}x {\rm d}t \mathrm{d}\gamma \\ &\leq \frac{C}{k}, \end{align*} where $ C = \| \chi(t)t\|_{L^1,t} \|h_{N, \widetilde{N}}\|_{\overline{B}(0,1)}\sum_{|\beta| = 1} \sup_{\varphi \in B} \| f \ast \partial^\beta \varphi \|_{L^1_{w_{\widetilde{N}}}} < \infty. $ Next, we show that $\mathcal{D}(\mathbb{R}^d)$ is dense in $\mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)$. Let $\varphi \in \mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)$ be arbitrary. Choose $\theta \in \mathcal{D}(\mathbb{R}^d)$ with $0 \leq \theta \leq 1$ and $\theta(0) = 1$. Set $\theta_k = \theta(\: \cdot \: /k)$ and $\varphi_k = \theta_k\varphi \in \mathcal{D}(\mathbb{R}^d)$ for $k \in \mathbb{Z}_+$. We claim that $\varphi_k \rightarrow \varphi$ in $\mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)$. Let $N,n \in \mathbb{N}$ be arbitrary.
Hence, \begin{align*} \max_{|\alpha| \leq n} \| \partial^\alpha(\varphi-\varphi_k)\|_{L^1_{w_N}} &\leq \max_{|\alpha| \leq n} \| \partial^\alpha\varphi(1-\theta_k)\|_{L^1_{w_N}} \\ &+ \frac{1}{k} \max_{|\alpha| \leq n} \sum_{\beta \leq \alpha, \beta \neq 0} \binom{\alpha}{\beta} \| \partial^{\alpha-\beta}\varphi \partial^{\beta}\theta(\: \cdot \: /k)\|_{L^1_{w_N}}\\ &\leq \max_{|\alpha| \leq n} \| \partial^\alpha\varphi(1-\theta_k)\|_{L^1_{w_N}} + \frac{C}{k}, \end{align*} where $C = 2^n\max_{|\alpha| \leq n}\|\partial^\alpha\varphi\|_{L^1_{w_N}}\|\theta\|_{\operatorname{supp}\theta,n} < \infty.$ We still need to show that $\| \partial^\alpha\varphi(1-\theta_k)\|_{L^1_{w_N}} \rightarrow 0$ for all $|\alpha| \leq n$. Let $\varepsilon > 0$ be arbitrary. Choose $K \Subset \mathbb{R}^d$ so large that $$\int_{\mathbb{R}^d \backslash K} |\partial^\alpha \varphi (x)| w_N(x) {\rm d}x \leq \varepsilon.$$ Then, \begin{align*} &\| \partial^\alpha\varphi(1-\theta_k)\|_{L^1_{w_N}} \\ &= \int_{K} |\partial^\alpha \varphi (x)|(1-\theta_k(x)) w_N(x) {\rm d}x + \int_{\mathbb{R}^d \backslash K} |\partial^\alpha \varphi (x)| (1-\theta_k(x)) w_N(x) {\rm d}x \\ &\leq C' \sup_{x \in K} |1-\theta_k(x)| + \varepsilon, \end{align*} where $C' = |K| \|\partial^\alpha \varphi\|_K \|w_N\|_K < \infty.$ The result follows from the fact that, since $\theta(0) = 1$, $\theta_k \rightarrow 1$ uniformly on $K$. \end{proof} \begin{proposition}\label{thm-dual-convolutors} Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{locally-bounded-increasing}. \begin{itemize} \item[$(i)$] The canonical inclusion \begin{equation} \dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d) \rightarrow ((\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{bs})'_b, \quad \varphi \mapsto (f \mapsto \langle f, \varphi \rangle) \label{canonical-incl} \end{equation} is a topological embedding. 
\item[$(ii)$] Let $\psi \in \mathcal{D}(\mathbb{R}^d) \backslash \{0\}$ and $\gamma \in \mathcal{D}(\mathbb{R}^d)$ be a synthesis window for $\psi$. Then, the mapping $\mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d) \rightarrow ((\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{bs})'_{b}$ given by \begin{equation} \varphi \mapsto \left( f \mapsto \frac{1}{(\gamma,\psi)_{L^2}}\int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) V_{\overline{\gamma}}\varphi(x,-\xi) {\rm d}x {\rm d}\xi \right) \label{dual-convolutors} \end{equation} is a continuous bijection whose restriction to $\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ coincides with the canonical inclusion \eqref{canonical-incl}. \end{itemize} \end{proposition} \begin{proof} $(i)$ The $bs$-topology is coarser than the strong topology and finer than the weak-$\ast$ topology on $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'$. Since $\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ is barrelled (as an $(LF)$-space), a subset of $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'$ is equicontinuous if and only if it is bounded in the $bs$-topology, which in turn yields that \eqref{canonical-incl} is a strict morphism. $(ii)$ We start by showing that, for $\varphi \in \mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ fixed, the linear functional $$ \mathcal{O}'_C(\mathcal{D},L^1_\mathcal{W}) \rightarrow \mathbb{C}, \quad f \mapsto \frac{1}{(\gamma,\psi)_{L^2}}\int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) V_{\overline{\gamma}}\varphi(x,-\xi) {\rm d}x {\rm d}\xi $$ is well-defined and continuous. Proposition \ref{STFT-test-char-smooth} implies that there are $N \in \mathbb{N}$ and $p \in \overline{P}$ such that $|V_{\overline{\gamma}}\varphi(x,-\xi)| \leq w_N(x) p(\xi)$ for all $(x,\xi) \in \mathbb{R}^{2d}$. Set $\widetilde{p} = p(\: \cdot \:) (1+ |\: \cdot \:|)^{d+1} \in \overline{P}$.
Then, \begin{align*} &\left | \frac{1}{(\gamma,\psi)_{L^2}}\int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) V_{\overline{\gamma}}\varphi(x,-\xi) {\rm d}x {\rm d}\xi \right| \\ &\leq \frac{1}{|(\gamma,\psi)_{L^2}|}\int \int_{\mathbb{R}^{2d}} |V_\psi f(x,\xi)| w_N(x) p(\xi) {\rm d}x {\rm d}\xi\\ &\leq \frac{\| V_\psi f \|_{\widetilde{p}, L^1_{w_N}}}{|(\gamma,\psi)_{L^2}|} \int_{\mathbb{R}^d} (1+|\xi|)^{-(d+1)} {\rm d}\xi \end{align*} for all $f \in \mathcal{O}'_C(\mathcal{D},L^1_\mathcal{W})$. The claim follows from Proposition \ref{STFT-convolutors}. Next, we prove that \eqref{dual-convolutors} is continuous. It suffices to show that its restriction to $\mathcal{B}_{1/w_N}(\mathbb{R}^d)$ is continuous for each $N \in \mathbb{N}$. Let $B \subset \mathcal{O}'_C(\mathcal{D},L^1_\mathcal{W})$ bounded be arbitrary. Choose $\widetilde{N} \geq N$ according to \eqref{locally-bounded-increasing}. By Proposition \ref{STFT-convolutors}, there are $n \in \mathbb{N}$ and $C > 0$ such that $$ \|V_\psi f(\: \cdot \:, \xi)\|_{L^1_{w_{\widetilde{N}}}} \leq C(1+|\xi|)^{n}, \qquad \xi \in \mathbb{R}^d, $$ for all $f \in B$. Lemma \ref{STFT-test-smooth} implies that there is $C' > 0$ such that $$ |V_{\overline{\gamma}}\varphi(x,-\xi)| \leq \frac{C'\|\varphi\|_{\mathcal{B}^{n+d+1}_{1/w_N}}w_{\widetilde{N}}(x)}{(1+|\xi|)^{n+d+1}}, \qquad (x,\xi) \in \mathbb{R}^{2d}, $$ for all $\varphi \in \mathcal{B}_{1/w_N}(\mathbb{R}^d)$. Hence, $$ \sup_{f \in B} \left | \frac{1}{(\gamma,\psi)_{L^2}}\int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) V_{\overline{\gamma}}\varphi(x,-\xi) {\rm d}x {\rm d}\xi \right | \leq C''\|\varphi\|_{\mathcal{B}^{n+d+1}_{1/w_N}}, $$ for all $\varphi \in \mathcal{B}_{1/w_N}(\mathbb{R}^d)$, where $ C''= \frac{CC'}{|(\gamma,\psi)_{L^2}|} \int_{\mathbb{R}^d} (1+|\xi|)^{-(d+1)} {\rm d}\xi < \infty. $ Finally, we prove that \eqref{dual-convolutors} is surjective. Let $\Phi \in (\mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}}))'$ be arbitrary.
Denote by $\iota: \mathcal{D}(\mathbb{R}^d) \rightarrow \mathcal{O}'_C(\mathcal{D},L^1_{\mathcal{W}})$ the inclusion mapping and set $f = \Phi \circ \iota \in \mathcal{D}'(\mathbb{R}^d)$. By \eqref{desing-D-dual}, it holds that $$ \Phi(\iota(\chi)) = \langle f, \chi \rangle = \frac{1}{(\gamma,\psi)_{L^2}}\int \int_{\mathbb{R}^{2d}} V_\psi \chi(x,\xi) V_{\overline{\gamma}}f(x,-\xi) {\rm d}x {\rm d}\xi $$ for all $\chi \in \mathcal{D}(\mathbb{R}^d)$. By Lemma \ref{density-smooth}, it therefore suffices to show that $f \in \mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ or, equivalently, that $V_\theta f \in \mathcal{W}^\circ_{\operatorname{pol}}C(\mathbb{R}^{2d})$, where $\theta \in \mathcal{D}(\mathbb{R}^d)$ is a fixed non-zero window function (Proposition \ref{STFT-test-char-smooth}). Since $\Phi$ is continuous, there is $N \in \mathbb{N}$ and a bounded set $B \subset \dot{\mathcal{B}}_{1/w_N}(\mathbb{R}^d)$ such that \begin{align*} |V_\theta f(x,\xi)| = |\Phi(\iota( \overline{M_\xi T_x\theta}))| \leq \sup_{\varphi \in B}\left| \int_{\mathbb{R}^d} \varphi(t) \overline{M_\xi T_x\theta(t)} {\rm d}t \right | = \sup_{\varphi \in B} |V_\theta \varphi(x,\xi)|. \end{align*} The required bounds for $|V_\theta f|$ therefore directly follow from Lemma \ref{STFT-test-smooth}. The last statement is a reformulation of Corollary \ref{desing-dual-b-dot}. \end{proof} In view of Corollary \ref{proj-desc-smooth-functions} and the fact that $\mathcal{D}(\mathbb{R}^d)$ is bornological, a slight modification of the argument used in the last part of the proof of Proposition \ref{thm-dual-convolutors}$(ii)$ yields the following corollary. \begin{corollary}\label{last-cor-ever} Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{locally-bounded-increasing} such that $\mathcal{W}^\circ$ satisfies $(\Omega)$. Then, every bounded linear functional on $\mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$ is continuous.
\end{corollary} Next, we point out two situations in which the mapping \eqref{dual-convolutors} is a topological isomorphism. \begin{corollary} Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{locally-bounded-increasing} such that $\mathcal{W}^\circ$ satisfies $(V)$ (cf.\ Remark \ref{V-remark}). Then, the mapping $$ \mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d) \rightarrow ((\mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{bs})'_{b}, \quad \varphi \mapsto (f \mapsto \langle f, \varphi \rangle) $$ is a topological isomorphism. \end{corollary} \begin{corollary}\label{bidual-Omega} Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{locally-bounded-increasing} such that $\mathcal{W}^\circ$ satisfies $(\Omega)$. Then, the mapping \eqref{dual-convolutors} is a topological isomorphism. \end{corollary} \begin{proof} We need to show that the mapping \eqref{dual-convolutors} is open. Fix a non-zero window function $\theta \in \mathcal{D}(\mathbb{R}^d)$. By Corollaries \ref{desing-E-dual} and \ref{proj-desc-smooth-functions}, it suffices to show that for every $u \in \overline{U}( \mathcal{W}^\circ_{\operatorname{pol}})$ there is a set $B \subset \mathcal{E}'(\mathbb{R}^d)$ which is bounded in $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{bs}$ such that $$ \|V_\theta \varphi \|_{u} \leq \sup_{f \in B} | \langle f, \varphi \rangle| $$ for all $\varphi \in \mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d)$. Define $$ B = \{ u(x,\xi) \overline{M_\xi T_x \theta} \, : \, (x,\xi) \in \mathbb{R}^{2d}\} \subset \mathcal{E}'(\mathbb{R}^d) $$ and observe that $B$ satisfies all requirements because $$ \|V_\theta \varphi \|_{u} = \sup_{(x,\xi) \in \mathbb{R}^{2d}} |V_\theta \varphi(x,\xi)|u(x,\xi) = \sup_{(x,\xi) \in \mathbb{R}^{2d}} |\langle \overline{M_\xi T_x \theta}, \varphi \rangle| u(x,\xi) = \sup_{f \in B} |\langle f, \varphi \rangle | $$ for all $\varphi \in \mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d)$. 
\end{proof} We believe that the mapping \eqref{dual-convolutors} might \emph{always} be a topological isomorphism but were unable to prove this in general. We are ready to prove the main theorem of this section. A lcHs is said to be quasibarrelled (sometimes also called infrabarrelled) if every strongly bounded set in its dual is equicontinuous. Recall that a lcHs $E$ is bornological (ultrabornological, respectively) if and only if it is quasibarrelled and every linear functional on $E$ that is bounded on the bounded subsets of $E$ (on the Banach disks of $E$, respectively) is continuous. In particular, every complete bornological lcHs is ultrabornological. \begin{theorem}\label{char-UB-smooth} Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{locally-bounded-increasing}. Then, the following statements are equivalent: \begin{itemize} \item[$(i)$] $\mathcal{W}^\circ$ satisfies $(\Omega)$. \item[$(ii)$] $\mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$ is ultrabornological. \item[$(iii)$] $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{b}= \mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$. \end{itemize} \end{theorem} \begin{proof} $(i) \Rightarrow (ii)$ Since $\mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$ is complete (Corollary \ref{completeness-OC}), we only need to prove that it is bornological. We already pointed out that every bounded linear functional on $\mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$ is continuous (Corollary \ref{last-cor-ever}). We now show that $\mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$ is quasibarrelled. Choose $\psi, \gamma \in \mathcal{D}(\mathbb{R}^d)$ with $(\gamma, \psi)_{L^2} = 1$.
By Corollary \ref{bidual-Omega} and Proposition \ref{STFT-convolutors}, it suffices to show that for every bounded set $B \subset \mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ there are $\widetilde{p} \in \overline{P}$, $\widetilde{N} \in \mathbb{N}$, and $C >0$ such that $$ \sup_{\varphi \in B} \left |\int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) V_{\overline{\gamma}}\varphi(x,-\xi) {\rm d}x {\rm d}\xi \right | \leq C\| V_\psi f\|_{\widetilde{p}, L^1_{w_{\widetilde{N}}}} $$ for all $f \in \mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$. Since $\mathcal{B}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ is regular (Theorems \ref{reg-cond} and \ref{completeness-ind-lim-smooth}), there is $N \in \mathbb{N}$ such that $B$ is contained and bounded in $\mathcal{B}_{1/w_N}(\mathbb{R}^d)$. Choose $\widetilde{N} \geq N$ according to \eqref{locally-bounded-increasing}. Lemma \ref{STFT-test-smooth} implies that there is $p \in \overline{P}$ such that $|V_{\overline{\gamma}}\varphi(x,-\xi)| \leq w_{\widetilde{N}}(x)p(\xi)$ for all $(x,\xi) \in \mathbb{R}^{2d}$ and $\varphi \in B$. Set $\widetilde{p} = p(\: \cdot \:) (1+ |\: \cdot \:|)^{d+1} \in \overline{P}$. We obtain that $$ \sup_{\varphi \in B} \left |\int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) V_{\overline{\gamma}}\varphi(x,-\xi) {\rm d}x {\rm d}\xi \right | \leq \|V_\psi f\|_{\widetilde{p},L^1_{w_{\widetilde{N}}}} \int_{\mathbb{R}^d}(1+|\xi|)^{-(d+1)}{\rm d}\xi $$ for all $f \in \mathcal{O}'_C(\mathcal{D}, L^1_\mathcal{W})$. $(ii) \Rightarrow (iii)$ It suffices to show that the $bs$-topology is finer than the strong topology on $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'$. Let $U \subset (\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_b$ be an arbitrary neighborhood of $0$. We may assume that $U = B^\circ$, where $B^\circ$ is the polar set of some bounded set $B$ in $\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$.
Since \eqref{canonical-incl} is continuous and $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{bs}$ is ultrabornological, there is $N \in \mathbb{N}$ and a bounded set $B' \subset \dot{\mathcal{B}}_{1/w_N}(\mathbb{R}^d)$ such that $(B')^\circ \subseteq U$. $(iii) \Rightarrow (i)$ By Theorems \ref{reg-cond} and \ref{completeness-ind-lim-smooth}, it suffices to show that $\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$ is $\beta$-regular. Let $N \in \mathbb{N}$ be arbitrary and let $B$ be a subset of $\dot{\mathcal{B}}_{1/w_N}(\mathbb{R}^d)$ that is bounded in $\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d)$. Our assumption implies that there is $M \geq N$ and a bounded set $B' \subset \dot{\mathcal{B}}_{1/w_M}(\mathbb{R}^d)$ such that $(B')^\circ \subseteq B^\circ$, where both polars are taken with respect to the dual system $(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d),(\dot{\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))')$. We now show that $B$ is bounded in $\dot{\mathcal{B}}_{1/w_M}(\mathbb{R}^d)$. Let $n \in \mathbb{N}$ be arbitrary and choose $C > 0$ such that $\| \varphi \|_{1/w_M,n} \leq C$ for all $\varphi \in B'$. Hence, $$ f_{\alpha,x} = \frac{(-1)^{|\alpha|} \partial^{\alpha}(T_x \delta)}{Cw_M(x)} \in (B')^\circ \subseteq B^\circ $$ for all $x \in \mathbb{R}^d$ and $|\alpha| \leq n$. We obtain that \begin{align*} &\sup_{\varphi \in B} \| \varphi\|_{\mathcal{B}^n_{1/w_M}} = C \sup_{\varphi \in B}\max_{|\alpha| \leq n} \sup_{x \in \mathbb{R}^d} |\langle f_{\alpha,x}, \varphi \rangle | \leq C. \qedhere \end{align*} \end{proof} \begin{remark} The implication $(ii) \Rightarrow (iii)$ in Theorem \ref{char-UB-smooth} also follows from De Wilde's open mapping theorem and the fact that the strong dual of an $(LF)$-space is strictly webbed \cite[Prop.\ IV.3.3]{DeWilde-cls}.
\end{remark} \section{Weighted $\mathcal{D}'_{L^1}$ spaces}\label{sect-weighted-DL1} Let $w$ be a positive measurable function and set $\mathcal{D}'_{L^1_w}(\mathbb{R}^d) = (\dot{\mathcal{B}}_{1/w}(\mathbb{R}^d))'_b$. In this short section, we apply Theorem \ref{char-UB-smooth} to study the structural and topological properties of $\mathcal{D}'_{L^1_w}(\mathbb{R}^d)$. \begin{theorem}\label{weighted-dual-DL1} Let $w$ be a positive measurable function and assume that \begin{equation} g = \operatorname*{ess \: sup}_{x \in \mathbb{R}^d}\frac{w(x + \: \cdot \:)}{w(x)} \in L^\infty_{\operatorname{loc}}(\mathbb{R}^d). \label{gen-cond-ofnie} \end{equation} Then, the following properties hold: \begin{itemize} \item[$(i)$] $\mathcal{D}'_{L^1_w}(\mathbb{R}^d) = \mathcal{O}'_C(\mathcal{D}, L^1_w)$ topologically. \item[$(ii)$] The strong dual of $\mathcal{D}'_{L^1_w}(\mathbb{R}^d)$ is given by $\mathcal{B}_{1/w}(\mathbb{R}^d)$. More precisely, let $\psi \in \mathcal{D}(\mathbb{R}^d) \backslash \{0\}$ and $\gamma \in \mathcal{D}(\mathbb{R}^d)$ be a synthesis window for $\psi$. Then, the mapping $\mathcal{B}_{1/w}(\mathbb{R}^d) \rightarrow (\mathcal{D}'_{L^1_w}(\mathbb{R}^d))'_{b}$ given by $$ \varphi \mapsto \left( f \mapsto \frac{1}{(\gamma,\psi)_{L^2}}\int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) V_{\overline{\gamma}}\varphi(x,-\xi) {\rm d}x {\rm d}\xi \right) $$ is a topological isomorphism. \item[$(iii)$] $\mathcal{D}'_{L^1_w}(\mathbb{R}^d)$ is ultrabornological. \end{itemize} \end{theorem} \begin{proof} We may assume that $w$ is continuous. Indeed, otherwise consider the continuous weight $\widetilde w = w\ast \varphi$, where $\varphi \in \mathcal{D}(\mathbb{R}^d)$ is non-negative and satisfies $\int_{\mathbb{R}^d} \varphi(t) {\rm d}t = 1$. Set $K = \operatorname{supp} \varphi$. Then, $$ \|g\|_{K}^{-1}w(x) \leq \widetilde{w}(x) \leq \|\check{g}\|_{K}w(x), \qquad x \in \mathbb{R}^d.
$$ Hence, $\mathcal{B}_{1/w}(\mathbb{R}^d) =\mathcal{B}_{1/\widetilde{w}}(\mathbb{R}^d)$, $\dot{\mathcal{B}}_{1/w}(\mathbb{R}^d) =\dot{\mathcal{B}}_{1/\widetilde{w}}(\mathbb{R}^d)$ and $L^1_w(\mathbb{R}^d) =L^1_{\widetilde{w}}(\mathbb{R}^d)$. Moreover, $$ \widetilde{w}(x+t) = \int_{\mathbb{R}^d} w(x+t-y) \varphi(y) {\rm d}y \leq g(t) \int_{\mathbb{R}^d} w(x-y) \varphi(y) {\rm d}y = g(t) \widetilde{w}(x), \qquad x,t \in \mathbb{R}^d, $$ whence $$ \sup_{x \in \mathbb{R}^d}\frac{\widetilde{w}(x + \: \cdot \:)}{\widetilde{w}(x)} \in L^\infty_{\operatorname{loc}}(\mathbb{R}^d). $$ From now on, we assume that $w$ is continuous. Set $\mathcal{W} = (w)_{N}$, a constant weight system, and notice that $\mathcal{W}^\circ$ satisfies $(\Omega)$. Properties $(i)$--$(iii)$ therefore follow immediately from Corollary \ref{bidual-Omega} and Theorem \ref{char-UB-smooth}. \end{proof} \begin{remark} When $w = 1$, Theorem \ref{weighted-dual-DL1} is essentially due to Schwartz \cite[pp.\ 201-203]{Schwartz} except for the topological identity $ \mathcal{D}'_{L^1}(\mathbb{R}^d) = \mathcal{O}'_C(\mathcal{D}, L^1)$, which appears to be new even in this case; Schwartz only showed that these spaces coincide as sets and have the same bounded sets and null sequences \cite[p.\ 202]{Schwartz}. Structural and topological properties of weighted $\mathcal{D}'_{L^1}$ spaces have recently been studied in the broader context of distribution spaces associated to general translation-invariant Banach spaces \cite{D-P-V2015TIB}. In particular, the analogues of Schwartz' results were proved there, but, again, Theorem \ref{weighted-dual-DL1}$(i)$ seems to be new in this setting. \end{remark} \begin{remark}\label{O-regular} Observe that the function $g$ from \eqref{gen-cond-ofnie} is submultiplicative. 
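This is immediate from the definition; as a quick sketch (our computation, writing the quotient telescopically and bounding each factor by its essential supremum):

```latex
g(y_1 + y_2) = \operatorname*{ess \: sup}_{x \in \mathbb{R}^d}
  \frac{w(x + y_2 + y_1)}{w(x + y_2)} \cdot \frac{w(x + y_2)}{w(x)}
  \leq g(y_1)\, g(y_2), \qquad y_1, y_2 \in \mathbb{R}^d.
```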
Since every locally bounded submultiplicative function is exponentially bounded, the weight $w$ satisfies \eqref{gen-cond-ofnie} if and only if $$ w(x+y) \leq C w(x)e^{\lambda|y|}, \qquad x, y \in \mathbb{R}^d, $$ for some $C, \lambda > 0$. Moreover, in dimension one, \eqref{gen-cond-ofnie} says that $w\circ \log$ is an $O$-regularly varying function \cite[Chap. 2]{B-G-T}. It is worth pointing out that condition \eqref{gen-cond-ofnie} holds if and only if $L_{w}^{1}(\mathbb{R}^{d})$ is translation-invariant and in this case the function $g$ turns out to be the weight function of its translation group (in the sense of \cite[Def.\ 1, p.\ 500]{D-P-V2015TIB}). In fact, the sufficiency of \eqref{gen-cond-ofnie} is easily established. Conversely, suppose that $L^1_{w}(\mathbb{R}^{d})$ is translation-invariant and consider the functions $$ g (y)= \operatorname*{ess \: sup}_{x \in \mathbb{R}^d}\frac{w(x +y)}{w(x)} \qquad \mbox{and}\qquad \eta(y)=\|T_{y}\|_{L_{b}(L^1_{w})}, \qquad y \in \mathbb{R}^d. $$ The function $g$ may of course take the value $\infty$, while $\eta\in L^{\infty}_{\operatorname{loc}}(\mathbb{R}^d)$ because it is measurable, submultiplicative and everywhere finite (cf. \cite[Thm.\ 7.13.1, p.\ 253]{hille-p}). Clearly $\eta\leq g$. Fix $y\in \mathbb{R}^{d}$ and let $\alpha<g(y)$. We select a compact set $K \subset\{x\in\mathbb{R}^{d}\: : \: \alpha<w(x+y)/w(x)\}$ with positive Lebesgue measure and write $\chi_{K}$ for its characteristic function. Then, $$ \alpha \int_{K}w(x)\mathrm{d}x\leq \|T_{y}\chi_{K}\|_{L^{1}_{w}}\leq \eta(y)\int_{K}w(x)\mathrm{d}x, $$ whence $\alpha\leq\eta(y)$. Since $\alpha$ and $y$ were arbitrary, we conclude that $g=\eta$ and the claim follows. \end{remark} \section{Convolutor spaces of Gelfand-Shilov spaces}\label{GS-convolutors} As a second application of our general theory developed in Section \ref{L1-convolutors}, we discuss the convolutor spaces of general Gelfand-Shilov spaces, as introduced in \cite[Chap.\ II]{G-S}.
Let $\mathcal{W} = (w_N)_N$ be an increasing weight system. We define\footnote{Gelfand and Shilov only considered the space $\mathcal{B}_{\mathcal{W}}(\mathbb{R}^d)$ for which they used the notation $\mathcal{K}\{M_p\}$, where $M_p = w_p$, $p \in \mathbb{N}$.} the ensuing Fr\'echet spaces $$ \mathcal{B}_{\mathcal{W}}(\mathbb{R}^d) := \varprojlim_{N \in \mathbb{N}} \mathcal{B}_{w_N}(\mathbb{R}^d), \qquad \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d) := \varprojlim_{N \in \mathbb{N}} \dot{\mathcal{B}}_{w_N}(\mathbb{R}^d). $$ \begin{remark}\label{remark-P} If $\mathcal{W} = (w_N)_N$ satisfies condition $(P)$ (cf.\ \cite[p.\ 87]{G-S}), i.e., $$ \forall N \in \mathbb{N} \, \exists M > N: \lim_{|x| \to \infty} w_N(x)/w_M(x) = 0, $$ then $\mathcal{B}_{\mathcal{W}}(\mathbb{R}^d) = \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d)$ and this space is an $(FS)$-space. \end{remark} Set $\check{\mathcal{W}} = (\check{w}_N)_N$. For every $f \in (\dot{\mathcal{B}}_{\check{\mathcal{W}}}(\mathbb{R}^d))'$ the mapping \begin{equation} \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d) \rightarrow \mathcal{E}(\mathbb{R}^d), \quad \varphi \mapsto f \ast \varphi \label{cont-conv} \end{equation} is well-defined and continuous (cf. Proposition \ref{convspaces-proposition-structural}). We define $$ \mathcal{O}'_C(\dot{\mathcal{B}}_{\mathcal{W}}) := \{ f \in (\dot{\mathcal{B}}_{\check{\mathcal{W}}}(\mathbb{R}^d))' \, : \, f \ast \varphi \in \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d) \mbox{ for all $\varphi \in \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d)$} \}. $$ The closed graph theorem and the continuity of the mapping \eqref{cont-conv} imply that, for $f \in \mathcal{O}'_C(\dot{\mathcal{B}}_{\mathcal{W}})$ fixed, the mapping $\dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d) \rightarrow \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d), \, \varphi \mapsto f \ast \varphi$ is continuous. 
We endow $\mathcal{O}'_C(\dot{\mathcal{B}}_{\mathcal{W}})$ with the initial topology with respect to the mapping $$ \mathcal{O}'_C(\dot{\mathcal{B}}_{\mathcal{W}}) \rightarrow L_{b}(\dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d),\dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d)), \quad f \mapsto (\varphi \mapsto f \ast \varphi). $$ The goal of this section is to study the structural and topological properties of $\mathcal{O}'_C(\dot{\mathcal{B}}_{\mathcal{W}})$. We shall need to impose the following conditions on $\mathcal{W}$: \begin{equation} \forall N \in \mathbb{N} \, \exists M > N : w_N /w_M \in L^1(\mathbb{R}^d) \label{L1-cond} \end{equation} and \begin{equation} \forall N \in \mathbb{N} \, \exists M,K \geq N \, \exists C > 0 \, \forall x,y \in \mathbb{R}^d : w_N(x+y) \leq C w_M(x) w_K(y). \label{trans-inv} \end{equation} Clearly, \eqref{trans-inv} implies \eqref{locally-bounded-increasing}. Moreover, one readily shows that \eqref{L1-cond} and \eqref{locally-bounded-increasing} imply $(P)$ (cf.\ Remark \ref{remark-P}) and thus $\mathcal{B}_{\mathcal{W}}(\mathbb{R}^d) = \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d)$. In such a case, we shall simply write $\mathcal{O}'_C(\dot{\mathcal{B}}_{\mathcal{W}}) = \mathcal{O}'_C(\mathcal{B}_{\mathcal{W}})$. It is worth mentioning that Gelfand and Shilov used condition \eqref{L1-cond} for deriving structural theorems for $(\mathcal{B}_{\mathcal{W}}(\mathbb{R}^d))'$ \cite[p.\ 113]{G-S}. \begin{proposition}\label{eq-L1} Let $\mathcal{W} =(w_N)_N$ be an increasing weight system satisfying \eqref{L1-cond} and \eqref{trans-inv}. Then, $\mathcal{O}'_C(\mathcal{B}_{\mathcal{W}}) = \mathcal{O}'_C(\mathcal{D}, L^1_{\mathcal{W}})$ topologically. \end{proposition} We need some results in preparation for the proof of Proposition \ref{eq-L1}.
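As a concrete illustration of the two conditions just introduced (our example, not taken from the text), consider the system $\mathcal{W} = (w_N)_N$ with $w_N(x) = e^{N|x|}$. Then

```latex
\frac{w_N(x)}{w_M(x)} = e^{(N-M)|x|} \in L^1(\mathbb{R}^d) \quad \text{for all } M > N,
\qquad
w_N(x+y) = e^{N|x+y|} \leq e^{N|x|} e^{N|y|} = w_N(x)\, w_N(y),
```

so \eqref{L1-cond} holds with any $M > N$, and \eqref{trans-inv} holds with $M = K = N$ and $C = 1$. This is the system $\mathcal{W}_w$ with $w(t) = t$ considered in Example \ref{exp-example} below.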
Define the following Fr\'echet space of continuous functions $$ \mathcal{W}_{\operatorname{pol},0}C(\mathbb{R}^{2d}) := \varprojlim_{N \in \mathbb{N}} C(w_N\otimes (1+|\:\cdot\:|)^{N})_0(\mathbb{R}^{2d}). $$ Lemmas \ref{STFT-test-smooth} and \ref{double-int-test-smooth} together with \eqref{reconstruction-D-dual} immediately imply the next result. \begin{proposition}\label{STFT-test-char-smooth-proj-1} Let $\psi \in \mathcal{D}(\mathbb{R}^d)$ and let $\mathcal{W} = (w_N)_N$ be an increasing weight system satisfying \eqref{locally-bounded-increasing}. Then, the mappings $$ V_\psi: \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d) \rightarrow \mathcal{W}_{\operatorname{pol},0}C(\mathbb{R}_{x,\xi}^{2d}) $$ and $$ V^\ast_\psi: \mathcal{W}_{\operatorname{pol},0}C(\mathbb{R}_{x,\xi}^{2d}) \rightarrow \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d) $$ are well-defined and continuous. Moreover, if $\psi \neq 0$ and $\gamma \in \mathcal{D}(\mathbb{R}^d)$ is a synthesis window for $\psi$, then $$ \frac{1}{(\gamma, \psi)_{L^2}} V^\ast_\gamma \circ V_\psi = \operatorname{id}_{\dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d)}. $$ \end{proposition} We obtain the following corollary; it is the weighted analogue of a classical result of Schwartz \cite[p.\ 200]{Schwartz}. \begin{corollary}\label{konnieanders} Let $\mathcal{W}= (w_N)_N$ be an increasing weight system satisfying \eqref{locally-bounded-increasing}. Then, $\mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d) \subseteq \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d)$ with continuous inclusion mapping. If, in addition, $\mathcal{W}$ satisfies \eqref{L1-cond}, then $\mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d) = \dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d) = \mathcal{B}_{\mathcal{W}}(\mathbb{R}^d)$. \end{corollary} \begin{proof} We start by showing that $\mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)$ is continuously included in $\dot{\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d)$. 
By Proposition \ref{STFT-test-char-smooth-proj-1}, it suffices to show that $ V_\psi: \mathcal{D}_{L^1_\mathcal{W}}(\mathbb{R}^d) \rightarrow \mathcal{W}_{\operatorname{pol},0}C(\mathbb{R}^{2d}) $ is well-defined and continuous. This can be done by modifying the proof of Lemma \ref{STFT-test-smooth}. Next, observe that condition \eqref{L1-cond} implies that $\mathcal{B}_{\mathcal{W}}(\mathbb{R}^d)$ is continuously included in $ \mathcal{D}_{L^1_\mathcal{W}}(\mathbb{R}^d)$, whence the second part follows from the first one. \end{proof} \begin{proof}[Proof of Proposition \ref{eq-L1}] The continuous inclusion $\mathcal{O}'_C({\mathcal{B}}_{\mathcal{W}}) \subseteq \mathcal{O}'_C(\mathcal{D}, L^1_{\mathcal{W}})$ is clear from \eqref{L1-cond}. For the converse inclusion, we introduce the ensuing space $$ \mathcal{O}'_C({\mathcal{B}}_{\mathcal{W}}, L^1_{\mathcal{W}}) := \{ f \in ({\mathcal{B}}_{\check{\mathcal{W}}}(\mathbb{R}^d))' \, : \, f \ast \varphi \in L^1_{\mathcal{W}}(\mathbb{R}^d) \mbox{ for all $\varphi \in {\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d)$} \} $$ endowed with the initial topology with respect to the mapping $$ \mathcal{O}'_C({\mathcal{B}}_{\mathcal{W}},L^1_{\mathcal{W}}) \rightarrow L_b({\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d),L^1_{\mathcal{W}}(\mathbb{R}^d)), \quad f \mapsto (\varphi \mapsto f \ast \varphi). $$ Similarly as in Corollary \ref{conv-b-dot} and the first part of the proof of Theorem \ref{thm-predual}, but by using \eqref{trans-inv} instead of \eqref{locally-bounded-increasing}, one can show that $({\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{bs}$ is continuously included in $\mathcal{O}'_C({\mathcal{B}}_{\mathcal{W}}, L^1_{\mathcal{W}})$. 
Next, we notice that an element $f \in ({\mathcal{B}}_{\check{\mathcal{W}}}(\mathbb{R}^d))'$ belongs to $\mathcal{O}'_C({\mathcal{B}}_{\mathcal{W}},L^1_{\mathcal{W}})$ if and only if $f \ast \varphi \in \mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)$ for all $\varphi \in {\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d)$ and that the topology of $\mathcal{O}'_C({\mathcal{B}}_{\mathcal{W}},L^1_{\mathcal{W}})$ coincides with the initial topology with respect to the mapping $$ \mathcal{O}'_C({\mathcal{B}}_{\mathcal{W}},L^1_{\mathcal{W}}) \rightarrow L_b({\mathcal{B}}_{\mathcal{W}}(\mathbb{R}^d),\mathcal{D}_{L^1_{\mathcal{W}}}(\mathbb{R}^d)), \quad f \mapsto (\varphi \mapsto f \ast \varphi). $$ Hence, the result follows from Corollary \ref{konnieanders}. \end{proof} Proposition \ref{eq-L1} and Theorem \ref{char-UB-smooth} imply the following result. \begin{theorem}\label{char-UB-smooth-GS} Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system satisfying \eqref{L1-cond} and \eqref{trans-inv}. Then, the following statements are equivalent: \begin{itemize} \item[$(i)$] $\mathcal{W}^\circ$ satisfies $(\Omega)$. \item[$(ii)$] $\mathcal{O}'_C({\mathcal{B}}_{\mathcal{W}})$ is ultrabornological. \item[$(iii)$] $({\mathcal{B}}_{\mathcal{W}^\circ}(\mathbb{R}^d))'_{b}= \mathcal{O}'_C({\mathcal{B}}_{\mathcal{W}})$. \end{itemize} \end{theorem} Finally, we apply Theorem \ref{char-UB-smooth-GS} to study the convolutor spaces of several Gelfand-Shilov spaces that are frequently used in the literature. To this end, we evaluate conditions \eqref{L1-cond}, \eqref{trans-inv} and $(\Omega)$ for two classes of increasing weight systems. Namely, let $w$ be a positive continuous increasing function on $[0,\infty)$ and extend $w$ to $\mathbb{R}^d$ by setting $w(x) = w(|x|)$ for $x \in \mathbb{R}^d$. We define the following increasing weight systems: $\mathcal{W}_{w} = (e^{Nw})_N$ and $\widetilde{\mathcal{W}}_{w} = (e^{w(N\:\cdot\:)})_N$. 
The function $w$ is said to satisfy $(\alpha)$ (cf.\ \cite{B-M-T}) if there are $C'_0, H' \geq 1$ such that $$ w(2t) \leq H'w(t) + \log C'_0, \qquad t \geq 0. $$ The proof of the next result is simple and therefore omitted. \begin{proposition}\label{char-omega-1} Let $w$ be a positive continuous increasing function on $[0,\infty)$ with $\log(1 + t) = O(w(t))$ satisfying $(\alpha)$. Then, $\mathcal{W}_{w}$ satisfies \eqref{L1-cond} and \eqref{trans-inv}, while $\mathcal{W}^\circ_{w}$ satisfies $(\Omega)$. Consequently, $\mathcal{O}'_C({\mathcal{B}}_{\mathcal{W}_{w}})$ is ultrabornological. \end{proposition} \begin{example}\label{ex 1 convolutors smooth}For $w(t) = \log(e+t)$ we have that $\mathcal{B}_{\mathcal{W}_w}(\mathbb{R}^d) = \mathcal{S}(\mathbb{R}^d)$. Proposition \ref{char-omega-1} implies that the space of convolutors $\mathcal{O}'_C(\mathcal{S}) = \mathcal{O}'_C(\mathbb{R}^d)$ is ultrabornological. This was first proved by Grothendieck \cite[Chap.\ II, Thm.\ 16, p.\ 131]{Grothendieck}. He showed that $\mathcal{O}'_C(\mathbb{R}^d)$ is isomorphic to a complemented subspace of $s \widehat{\otimes} s'$ and proved that the latter space is ultrabornological. \end{example} \begin{example}\label{exp-example} For $w(t) = t$ we have that $\mathcal{B}_{\mathcal{W}_w}(\mathbb{R}^d) = \mathcal{K}_1(\mathbb{R}^d)$, the space of exponentially decreasing smooth functions \cite{Hasumi}. Proposition \ref{char-omega-1} yields that the space of convolutors $\mathcal{O}'_C(\mathcal{K}_1)$ is ultrabornological. This was first claimed by Ziele\'{z}ny in \cite{Zielezny} but his argument seems to contain major gaps (in particular, the proof of \cite[Thm.\ 9]{Zielezny} does not seem to be correct). \end{example} \begin{example} \label{ex 3 convolutors smooth} More generally, let $w(t) = t^p$, $p > 0$, and set $\mathcal{B}_{\mathcal{W}_w}(\mathbb{R}^d) = \mathcal{K}_p(\mathbb{R}^d)$. 
For $p > 1$ the convolutor spaces $\mathcal{O}'_C(\mathcal{K}_p)$ were studied in \cite{S-Z}, where the hypoelliptic convolution operators in $(\mathcal{K}_p(\mathbb{R}^d))'$ are characterized in terms of their Fourier transform. However, the topological properties of $\mathcal{O}'_C(\mathcal{K}_p)$ do not seem to have been studied yet. Proposition \ref{char-omega-1} implies that the space of convolutors $\mathcal{O}'_C(\mathcal{K}_p)$ is ultrabornological for each $p > 0$. \end{example} Next, we treat increasing weight systems of type $\widetilde{\mathcal{W}}_{w}$. \begin{proposition} \label{char-omega-2} Let $w$ be a positive continuous increasing function on $[0,\infty)$ with $\log(1 + t) = O(w(t))$ such that \begin{equation} 2w(t) \leq w(Ht) + \log C_0, \qquad t \geq 0, \label{M2like} \end{equation} for some $H, C_0 \geq 1$. Then, $\widetilde{\mathcal{W}}_{w}$ satisfies \eqref{L1-cond} and \eqref{trans-inv}, while $\widetilde{\mathcal{W}}^\circ_{w}$ satisfies $(\Omega)$ if and only if $w$ satisfies $(\alpha)$. Consequently, $\mathcal{O}'_C({\mathcal{B}}_{\widetilde{\mathcal{W}}_{w}})$ is ultrabornological if and only if $w$ satisfies $(\alpha)$. \end{proposition} \begin{proof} By iterating \eqref{M2like}, we obtain that for every $\lambda > 0$ there are $L, B \geq 1$ such that $\lambda w(t) \leq w(Lt) + B$ for all $t \geq 0$. Condition $\eqref{L1-cond}$ therefore follows from the assumption $\log(1 + t) = O(w(t))$. Next, \eqref{trans-inv} is a consequence of the fact that $w$ is increasing. We now show that $\widetilde{\mathcal{W}}^\circ_{w}$ satisfies $(\Omega)$ if and only if $w$ satisfies $(\alpha)$. For this, observe that $\widetilde{\mathcal{W}}^\circ_{w}$ satisfies $(\Omega)$ if and only if there is $M \geq 1$ such that for all $K \geq M$ there are $C, S \geq 1$ such that \begin{equation} w(Kt) - w(t) \leq C(w(Mt) - w(t)) + S, \qquad t \geq 0. 
\label{Omega-equiv-2} \end{equation} This shows that $w$ satisfies $(\alpha)$ if $\widetilde{\mathcal{W}}^\circ_{w}$ satisfies $(\Omega)$. Conversely, suppose that $w$ satisfies $(\alpha)$. We shall show \eqref{Omega-equiv-2} for $M = H$, where $H$ is the constant occurring in \eqref{M2like}. Let $K \geq H$ be arbitrary. By iterating $(\alpha)$, we obtain that there are $L,B \geq 1$ such that $w(Kt) \leq Lw(t) + B$ for all $t \geq 0$. Hence, $$ w(Kt) - w(t) \leq (L-1) w(t) + B \leq (L-1)(w(Ht) - w(t)) + (L-1)\log C_0 + B. \qedhere $$ \end{proof} \begin{remark}\label{remark-fast} Since condition $(\alpha)$ implies that $w$ is polynomially bounded, Proposition \ref{char-omega-2} in particular yields that $\mathcal{O}'_C({\mathcal{B}}_{\widetilde{\mathcal{W}}_{w}})$ is not ultrabornological for any function $w$ satisfying \eqref{M2like} that is not polynomially bounded. Concrete examples are given by: $w(t) = \exp(t^\sigma \log^{\tau}(1+ t))$ with $\sigma > 0$ and $\tau \geq 0$ or $\sigma = 0$ and $\tau > 1$. On the other hand, there are also polynomially bounded functions $w$ for which $\mathcal{O}'_C({\mathcal{B}}_{\widetilde{\mathcal{W}}_{w}})$ fails to be ultrabornological, as shown by the following example. \end{remark} \begin{example}\label{remark-slow} \emph{For any $\sigma > 0$ there is a function $w$ with $\log(1 + t) = O(w(t))$ and $w(t) = O(t^\sigma)$ such that $w$ satisfies \eqref{M2like} but violates $(\alpha)$}. We shall use some results about weight sequences and their associated function; we refer to \cite{Komatsu, B-M-M} for more information on this topic. In \cite[Example 3.3]{Langenbruch-example}, Langenbruch constructed a weight sequence $(M_p)_{p \in \mathbb{N}}$ satisfying $(M.1)$, $(M.2)$ and $(M.3)'$ but not $$ \sup_{p \in \mathbb{N}} \frac{m_p}{m_{Qp}} < \infty, \qquad \mbox{for some $Q \in \mathbb{Z}_+$}, $$ where $m_p = M_{p+1}/M_{p}$, $p \in \mathbb{N}$. 
Hence, the associated function $M$ of $(M_p)_{p \in \mathbb{N}}$ is a weight function such that $\log(1 + t) = o(M(t))$ \cite[p.\ 49]{Komatsu}, $M(t) = o(t)$ \cite[Lemma 4.1]{Komatsu} and $M$ satisfies \eqref{M2like} \cite[Prop.\ 3.6]{Komatsu} but violates $(\alpha)$ \cite[Prop.\ 13]{B-M-M}. Therefore, $w(t) = M(t^\sigma)$ satisfies all requirements. \end{example} We now further specialize Proposition \ref{char-omega-2} to a class of weights introduced by Gelfand and Shilov in \cite[Chap.\ IV, Appendix 2]{G-S}. Let $\mu$ be an increasing positive function on $[0,\infty)$ and consider $w(t) = \int_0^{t} \mu(s) {\rm d}s $, $t \geq 0$. We write $\mathcal{K}_w(\mathbb{R}^d) = \mathcal{B}_{\widetilde{\mathcal{W}}_w}(\mathbb{R}^d)$. \begin{theorem}\label{thm-UB-GS-2} Let $\mu$ be an increasing positive function on $[0,\infty)$ and set $w(t) = \int_0^{t} \mu(s) {\rm d}s $. Then, the following statements are equivalent: \begin{itemize} \item[$(i)$] $\mathcal{O}'_C(\mathcal{K}_w)$ is ultrabornological. \item[$(ii)$] $w$ satisfies $(\alpha)$. \item[$(iii)$] There are $L, B \geq 1$ such that \begin{equation} \mu(2s) \leq L \mu(s) + \frac{B}{s}, \qquad s > 0. \label{mu-cond} \end{equation} \end{itemize} \end{theorem} \begin{proof} Clearly, $\log(1 + t) = O(w(t))$ (in fact, $t = O(w(t))$). Next, we verify that $w$ satisfies \eqref{M2like}. Since $\mu$ is increasing, we obtain that $$ 2w(t) \leq \int_0^t \mu(s) {\rm d}s + \int_0^t \mu(s+t) {\rm d}s = \int_0^t \mu(s) {\rm d}s + \int_t^{2t} \mu(s) {\rm d}s = \int_0^{2t} \mu(s) {\rm d}s = w(2t). $$ Hence, in view of Proposition \ref{char-omega-2}, it suffices to show that $w$ satisfies $(\alpha)$ if and only if $\mu$ satisfies \eqref{mu-cond}. Suppose first that $\mu$ satisfies \eqref{mu-cond}. Then, \begin{align*} &w(2t) - w(t) = \int^{2t}_t \mu(s) {\rm d}s = 2\int^t_{t/2} \mu(2s) {\rm d}s \leq 2\int^t_{t/2} \left(L \mu(s) + \frac{B}{s}\right){\rm d}s \\ &\leq 2L \int^t_0 \mu(s){\rm d}s + 2(\log2)B = 2Lw(t) + 2(\log2)B. 
\end{align*} Conversely, assume that $w$ satisfies $(\alpha)$. Applying $(\alpha)$ twice, we obtain that $$ w(4s) -w(2s) \leq (H'-1)w(2s) + \log C'_0 \leq H'(H'-1)w(s) + H' \log C'_0. $$ Since $\mu$ is increasing, we have that \begin{align*} &\mu(2s) = \frac{\mu(2s)}{\log2} \int^{4s}_{2s} \frac{1}{u} {\rm d}u \leq\frac{1}{\log2} \int^{4s}_{2s} \frac{\mu(u)}{u} {\rm d}u \leq \frac{1}{2(\log2)s} \int^{4s}_{2s} \mu(u) {\rm d}u \\ &= \frac{1}{2(\log2)s}(w(4s) - w(2s)) \leq \frac{H'(H'-1)}{2(\log2)s}w(s) + \frac{H'\log C'_0}{2(\log2)s} \\ &= \frac{H'(H'-1)}{2(\log2)s}\int^s_0\mu(u) {\rm d}u + \frac{H'\log C'_0}{2(\log2)s} \leq \frac{H'(H'-1)\mu(s)}{2(\log2)} + \frac{H'\log C'_0}{2(\log2)s}. \qedhere \end{align*} \end{proof} \begin{remark}\label{remark-KM} The topological properties of the convolutor spaces $\mathcal{O}'_C(\mathcal{K}_w)$ were discussed by Abdullah. On \cite[p.\ 179]{Abdullah-2} he states that the spaces $\mathcal{O}'_C(\mathcal{K}_w)$ are always ultrabornological, but he does not provide a proof of this assertion. Theorem \ref{thm-UB-GS-2} shows that his claim is actually false. By Remark \ref{remark-fast}, the space $\mathcal{O}'_C(\mathcal{K}_w)$ is not ultrabornological if $w$ is not polynomially bounded. Furthermore, one can use Example \ref{remark-slow} to construct weights $w$ of the form $w(t) = \int_0^{t} \mu(s) {\rm d}s $ with $w(t) = O(t^{1+\sigma})$, for a fixed but arbitrary $\sigma > 0$, such that $w$ does not satisfy $(\alpha)$ (take $\mu$ to be any function satisfying the conditions from Example \ref{remark-slow} and use Theorem \ref{thm-UB-GS-2}$(iii)$). \end{remark} \end{document}
\begin{document} \subjclass[2010] {Primary 11F41; Secondary 11F11} \keywords{Hilbert modular form, Fourier coefficient, Eisenstein series} \begin{abstract} Associated to an (adelic) Hilbert modular form is a sequence of `Fourier coefficients' which uniquely determine the form. In this paper we characterize Hilbert modular cusp forms by the size of their Fourier coefficients. This answers in the affirmative a question posed by Winfried Kohnen. \end{abstract} \title{Characterizing Hilbert modular cusp forms by coefficient size} \maketitle \section{Introduction}\label{section:intro} In \cite{Kohnen-OnCertainGMFs,Schmoll} it is shown that if the Fourier coefficients $a(n)$ ($n\geq 1$) of an elliptic modular form $f$ of even integral weight $k\geq 2$ and level $N$ satisfy the Deligne bound $a(n)\ll_{f,\epsilon} n^{(k-1)/2+\epsilon}$ ($\epsilon>0$), then $f$ must be a cusp form. An analogous result was recently proven by Kohnen and Martin \cite{Kohnen-Martin} for Siegel modular forms of weight $k$ and genus $2$ on the full Siegel modular group. Kohnen posed \cite{Kohnen-question} the question of whether one could prove analogous results in more general settings, in particular the Hilbert modular setting. This paper answers Kohnen's question in the affirmative by characterizing Hilbert modular cusp forms in terms of the size of their Fourier coefficients. In seeking to generalize the results of \cite{Kohnen-OnCertainGMFs,Schmoll} to Hilbert modular forms one immediately confronts a number of difficulties: the lack of Fourier expansions and the failure of the space of classical Hilbert modular forms to be invariant under the Hecke operators. Let $K$ be a totally real number field and consider the complex vector space $M_k(\mathcal{N})$ of classical Hilbert modular forms of weight $k$ and level $\mathcal{N}$ over $K$.
If the strict class number $h^+$ of $K$ is greater than $1$, then a form $f\in M_k(\mathcal{N})$ need not possess a Fourier expansion and hence Fourier coefficients to examine. In order to circumvent this difficulty we work with the larger space $\mathscr{M}_k(\mathcal{N})$ of adelic Hilbert modular forms of weight $k$ and level $\mathcal{N}$. The elements of $\mathscr{M}_k(\mathcal{N})$ are $h^+$-tuples of classical Hilbert modular forms and to each form $f\in \mathscr{M}_k(\mathcal{N})$ one associates (as in \cite{shimura-duke}) a sequence $\{ C({\mathfrak m}, f) : {\mathfrak m} \mbox{ an integral ideal of } K\}$ of `Fourier coefficients' which uniquely determine $f$. Equally important to our analysis is the fact that, unlike $M_k(\mathcal{N})$, the space of adelic Hilbert modular forms $\mathscr{M}_k(\mathcal{N})$ is mapped to itself under the action of the Hecke operators $T_{\mathfrak p}$. \begin{theorem}\label{theorem:intro} Let notation be as above with $k\geq 3$ and fix a real number $\epsilon\in \left(0, \frac{5k-7}{10}\right]$. If $f\in \mathscr{M}_k(\mathcal{N})$ satisfies $$|C({\mathfrak m},f)|\ll_{f,\epsilon} N({\mathfrak m})^{k-1-\epsilon }$$ then $f$ is a cusp form. \end{theorem} The proof of Theorem \ref{theorem:intro} proceeds along the same lines as those of \cite{Kohnen-OnCertainGMFs,Schmoll}, though it is necessarily more complicated due to the technical nature of adelic Hilbert Eisenstein series. Crucial in the proof will be the structure of spaces of Hilbert Eisenstein series and in particular the newform theory of these spaces. This theory was established in the elliptic case by Weisinger \cite{weisinger-thesis} and was later extended to the Hilbert modular setting by Wiles \cite{wiles} and Atwill and Linowitz \cite{Linowitz-Atwill}. \section{Notation}\label{section:notation} We employ the notation of \cite{shimura-duke, Linowitz-Atwill}.
However, to make this paper somewhat self-contained, we shall briefly review the basic definitions of the functions and operators which we shall study. Let $K$ be a totally real number field of degree $n$ over $\mathbb Q$ with ring of integers $\mathcal O$, group of units $\mathcal O^\times$, and totally positive units $\mathcal O^\times_+$. Let ${\mathfrak d}$ be the different of $K$. If ${\mathfrak q}$ is a finite prime of $K$, we denote by $K_{{\mathfrak q}}$ the completion of $K$ at ${\mathfrak q}$, $\mathcal O_{{\mathfrak q}}$ the valuation ring of $K_{{\mathfrak q}}$, and $\pi_{{\mathfrak q}}$ a local uniformizer. Finally, we let $\mathfrak P_\infty$ denote the product of all archimedean primes of $K$. We denote by $K_\mathbb A$ the ring of $K$-adeles and by $K_\mathbb A^\times$ the group of $K$-ideles. As usual we view $K$ as a subgroup of $K_\mathbb A$ via the diagonal embedding. If $\tilde{\alpha}\in K_\mathbb A^\times$, we let $\tilde{\alpha}_{\infty}$ denote the archimedean part of $\tilde{\alpha}$ and $\tilde{\alpha}_0$ the finite part of $\tilde{\alpha}$. If $\mathcal{J}$ is an integral ideal we let $\tilde{\alpha}_{\mathcal{J}}$ denote the $\mathcal{J}$-part of $\tilde{\alpha}$. For an integral ideal $\mathcal{N}$ we define a numerical character $\psi$ modulo $\mathcal{N}$ to be a homomorphism $\psi: (\mathcal O/\mathcal{N})^\times \rightarrow \mathbb C^\times$ and denote the conductor of $\psi$ by $\mathfrak{f}_\psi$. A Hecke character is a continuous character on the idele class group: $\Psi: K_\mathbb A^\times/ K^\times \rightarrow \mathbb C^\times$. We denote the induced character on $K_\mathbb A^\times$ by $\Psi$ as well. We adopt the convention that $\psi$ denotes a numerical character and $\Psi$ denotes a Hecke character. 
For a fractional ideal $\mathcal{I}$ and integral ideal $\mathcal{N}$, define $$\Gamma_0(\mathcal{N},\mathcal{I})=\left\{ A\in \left( \begin{array}{ c c } \mathcal O& \mathcal{I}^{-1}{\mathfrak d}^{-1} \\ \mathcal{N}\mathcal{I}{\mathfrak d} & \mathcal O \end{array} \right) : \det A \in \mathcal O^\times_+ \right\}.$$ Let $\theta : \mathcal O^\times_+\rightarrow \mathbb C^\times$ be a character of finite order and note that there exists an element $m\in\mathbb{R}^n$ such that $\theta(a)=a^{im}$ for all totally positive units $a$. While such an $m$ is not unique, we shall fix one such $m$ for the remainder of this paper. Let $k=(k_1,...,k_n)\in \mathbb{Z}_+^n$ and $\psi$ be a numerical character modulo $\mathcal{N}$. Following Shimura \cite{shimura-duke}, we define $M_k(\Gamma_0(\mathcal{N},\mathcal{I}),\psi,\theta)$ to be the complex vector space of classical Hilbert modular forms on $\Gamma_0(\mathcal{N},\mathcal I)$. It is well-known that the space of classical Hilbert modular forms of a fixed weight, character and congruence subgroup is not invariant under the algebra generated by the Hecke operators $T_{\mathfrak p}$. We therefore consider the larger space of adelic Hilbert modular forms, which \textit{is} invariant under the Hecke algebra. Our construction follows that of Shimura \cite{shimura-duke}. Fix a set of strict ideal class representatives $\mathcal{I}_1,...,\mathcal{I}_h$ of $K$, set $\Gamma_{\lambda}=\Gamma_0(\mathcal{N},\mathcal{I}_{\lambda})$, and put $$\mathscr{M}_k(\mathcal{N},\psi,\theta)=\prod_{\lambda=1}^h M_k(\Gamma_{\lambda},\psi,\theta).$$ In the case that both $\psi$ and $\theta$ are trivial characters we will simplify our notation and denote $\mathscr{M}_k(\mathcal{N},\psi,\theta)$ by $\mathscr{M}_k(\mathcal{N})$. Let $G_\mathbb A=GL_2(K_\mathbb A)$ and view $G_K=GL_2(K)$ as a subgroup of $G_\mathbb A$ via the diagonal embedding. Denote by $G_{\infty} = GL_2(\mathbb{R})^n$ the archimedean part of $G_\mathbb A$ and by $G_{\infty +}$ the subgroup of $G_{\infty}$ consisting of elements with totally positive determinant.
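For orientation, here is how the definitions above specialize in the simplest case $K = \mathbb{Q}$ (our sanity check, with $\mathcal{O} = {\mathfrak d} = \mathcal{I} = \mathbb{Z}$ and $\mathcal{N} = N\mathbb{Z}$, so that $\mathcal{O}^\times_+ = \{1\}$):

```latex
\Gamma_0(N\mathbb{Z},\mathbb{Z})
  = \left\{ A \in \left( \begin{array}{ c c } \mathbb{Z} & \mathbb{Z} \\ N\mathbb{Z} & \mathbb{Z} \end{array} \right)
    : \det A = 1 \right\}
  = \Gamma_0(N) \subset SL_2(\mathbb{Z}),
```

the classical congruence subgroup, while a numerical character modulo $N\mathbb{Z}$ is simply a Dirichlet character modulo $N$.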
For an integral ideal $\mathcal{N}$ of $\mathcal O$ and a prime ${\mathfrak p}$, let $$Y_{{\mathfrak p}}(\mathcal{N})=\left\{ A=\left( \begin{array}{ c c } a& b \\ c&d \end{array} \right) \in \left( \begin{array}{ c c } \mathcal O_{{\mathfrak p}}&{\mathfrak d}^{-1}\mathcal O_{{\mathfrak p}} \\ \mathcal{N}{\mathfrak d}\mathcal O_{{\mathfrak p}} & \mathcal O_{{\mathfrak p}} \end{array} \right) : \det A\in K_{{\mathfrak p}}^\times, (a\mathcal O_{{\mathfrak p}},\mathcal{N}\mathcal O_{{\mathfrak p}})=1 \right\},$$ $$W_{{\mathfrak p}}(\mathcal{N})=\{ A\in Y_{{\mathfrak p}}(\mathcal{N}) : \det A\in \mathcal O^\times_{{\mathfrak p}} \}$$ and put $$Y=Y(\mathcal{N})=G_\mathbb A\cap \left(G_{\infty +}\times \prod_{{\mathfrak p}\nmid\mathfrak P_\infty} Y_{{\mathfrak p}}(\mathcal{N})\right), \qquad W=W(\mathcal{N})=G_{\infty +}\times \prod_{{\mathfrak p} \nmid\mathfrak P_\infty} W_{{\mathfrak p}}(\mathcal{N}).$$ Given a numerical character $\psi$ modulo $\mathcal{N}$ define a homomorphism $\psi_Y: Y\rightarrow \mathbb C^\times$ by setting $\psi_Y(\left( \begin{array}{ c c } \tilde{a}& * \\ *&* \end{array} \right))=\psi(\tilde{a}_{\mathcal{N}}\mbox{ mod }\mathcal{N} )$. Given a fractional ideal $\mathcal I$ of $K$ define $\tilde{\mathcal{I}}=(\tilde{\mathcal{I}}_{\nu})_{\nu}$ to be a fixed idele such that $\tilde{\mathcal{I}}_{\infty}=1$ and $\tilde{\mathcal{I}}\mathcal O=\mathcal{I}$. For $\lambda=1,...,h,$ set $x_{\lambda}=\left( \begin{array}{ c c } 1& 0 \\ 0&\tilde{\mathcal{I}}_{\lambda} \end{array} \right)\in G_\mathbb A$. By the Strong Approximation theorem $$G_\mathbb A=\bigcup_{\lambda=1}^h G_K x_{\lambda} W=\bigcup_{\lambda=1}^h G_K x_{\lambda}^{-\iota} W,$$ where $\iota$ denotes the canonical involution on two-by-two matrices.
For an $h$-tuple $(f_1,...,f_h)\in\mathscr{M}_k(\mathcal{N},\psi,\theta)$ we define a function $f: G_\mathbb A\rightarrow \mathbb C$ by $$f(\alpha x_{\lambda}^{-\iota}w)=\psi_Y(w^{\iota})\dete(w_{\infty})^{im}(f_{\lambda}\mid w_{\infty})(\textbf{i})$$ for $\alpha\in G_K$, $w\in W(\mathcal{N})$ and $\textbf{i}=(i,...,i)$ (with $i=\sqrt{-1}$). Here $$f_{\lambda}\mid \left( \begin{array}{ c c } a& b \\ c&d \end{array} \right)(\tau)=(ad-bc)^{\frac{k}{2}}(c\tau+d)^{-k} f_{\lambda}\left(\frac{a\tau+b}{c\tau+d}\right).$$ We identify $\mathscr{M}_k(\mathcal{N},\psi,\theta)$ with the set of functions $f: G_\mathbb A\rightarrow \mathbb C$ satisfying \begin{enumerate} \item $f(\alpha x w)=\psi_Y(w^{\iota})f(x)$ for all $\alpha\in G_K, x\in G_\mathbb A, w\in W(\mathcal{N}), w_{\infty}=1$ \item For each $\lambda$ there exists an element $f_{\lambda}\in M_k$ such that $$f(x_{\lambda}^{-\iota}y)=\dete(y)^{im}(f_{\lambda}\mid y)(\textbf{i})$$ for all $y\in G_{\infty +}$. \end{enumerate} Let $\psi_{\infty}: K_\mathbb A^\times\rightarrow \mathbb C^\times$ be defined by $\psi_{\infty}(\tilde{a})=\mbox{sgn}(\tilde{a}_{\infty})^k|\tilde{a}_{\infty}|^{2im}$, where $m$ was specified in the definition of $\theta$. We say that a Hecke character $\Psi$ extends $\psi\psi_{\infty}$ if $\Psi(\tilde{a})=\psi(\tilde{a}_{\mathcal{N}}\mbox{ mod }\mathcal{N})\psi_{\infty}(\tilde{a})$ for all $\tilde{a}\in K_{\infty}^\times \times \prod_{{\mathfrak p}} \mathcal O_{{\mathfrak p}}^\times$. If the previous equality holds for $\psi_\infty(\tilde{a})=\mbox{sgn}(\tilde{a}_\infty)^k$ then we say that $\Psi$ extends $\psi$. 
Given a Hecke character $\Psi$ extending $\psi\psi_{\infty}$ we define an ideal character $\Psi^*$ modulo $\mathcal{N}\mathfrak{P}_{\infty}$ by \begin{displaymath} \left\{ \begin{array}{ll} \Psi^*({\mathfrak p})=\Psi(\tilde{\pi}_{{\mathfrak p}}) & \textrm{for ${\mathfrak p}\nmid \mathcal{N}$ and $\tilde{\pi}\mathcal O={\mathfrak p},$}\\ \Psi^*(\mathfrak{a})=0 & \textrm{if $(\mathfrak{a},\mathcal{N})\neq 1$ }\\ \end{array} \right. \end{displaymath} For $\tilde{s}\in K_\mathbb A^\times$, define $f^{\tilde{s}}(x)=f(\tilde{s}x)$. The map $\tilde{s}\longmapsto \left( f\mapsto f^{\tilde{s}}\right)$ defines a unitary representation of $K_\mathbb A^\times$ in $\mathscr{M}_k(\mathcal{N},\psi,\theta)$. By Schur's Lemma the irreducible subrepresentations are all one-dimensional (since $K_\mathbb A^\times$ is abelian). For a character $\Psi$ on $K_\mathbb A^\times$, let $\mathscr{M}_k(\mathcal{N},\Psi)$ denote the subspace of $\mathscr{M}_k(\mathcal{N},\psi,\theta)$ consisting of all $f$ for which $f^{\tilde{s}}=\Psi(\tilde{s})f$ and let $\mathscr{S}_k(\mathcal{N},\Psi)\subset \mathscr{M}_k(\mathcal{N},\Psi)$ denote the subspace of cusp forms. If $s\in K^\times$ then $f^{s}=f$. It follows that $\mathscr{M}_k(\mathcal{N},\Psi)$ is nonzero only when $\Psi$ is a Hecke character. We now have a decomposition $$\mathscr{M}_k(\mathcal{N},\psi,\theta)=\bigoplus_\Psi \mathscr{M}_k(\mathcal{N},\Psi),$$ where the direct sum is taken over the Hecke characters $\Psi$ which extend $\psi\psi_\infty$. 
If $f=(f_1,...,f_h)\in \mathscr{M}_k(\mathcal{N},\psi,\theta)$, then each $f_{\lambda}$ has a Fourier expansion $$f_{\lambda}(\tau)=a_{\lambda}(0)+\sum_{0\ll \xi\in\mathcal{I}_{\lambda}} a_{\lambda}(\xi) e^{2\pi i \mbox{tr} (\xi\tau)}.$$ If $\mathfrak{m}$ is an integral ideal then we define the $\mathfrak{m}$-th `Fourier' coefficient of $f$ by \begin{displaymath} C(\mathfrak{m},f)=\left\{ \begin{array}{ll} N(\mathfrak{m})^{\frac{k_0}{2}}a_{\lambda}(\xi)\xi^{-\frac{k}{2}-im}& \textrm{if $\mathfrak{m}=\xi\mathcal{I}_{\lambda}^{-1}\subset\mathcal O$}\\ 0 & \textrm{otherwise}\\ \end{array} \right. \end{displaymath} where $k_0=\mbox{max}\{k_1,...,k_n\}$. Given $f\in\mathscr{M}_k(\mathcal{N},\psi,\theta)$ and $y\in G_\mathbb A$ define a slash operator by setting $(f\mid y)(x)=f(xy^{\iota})$. For an integral ideal $\mathfrak{r}$ define the shift operator $B_{\mathfrak{r}}$ by $$f\mid B_{\mathfrak{r}}=N(\mathfrak{r})^{-\frac{k_0}{2}} f\mid \left( \begin{array}{ c c } 1& 0 \\ 0&\tilde{\mathfrak{r}}^{-1} \end{array} \right).$$ The shift operator maps $\mathscr{M}_k(\mathcal{N},\Psi)$ to $\mathscr{M}_k(\mathfrak{r}\mathcal{N},\Psi)$. Further, $C(\mathfrak{m},f\mid B_{\mathfrak{r}})=C(\mathfrak{m}\mathfrak{r}^{-1},f)$. It is clear that $f \mid B_{\mathfrak{r}_1}\mid B_{\mathfrak{r}_2}=f\mid B_{\mathfrak{r}_1\mathfrak{r}_2}$. For an integral ideal $\mathfrak{r}$ the Hecke operator $T_{\mathfrak{r}}=T_{\mathfrak{r}}^{\mathcal{N}}$ maps $\mathscr{M}_k(\mathcal{N},\Psi)$ to itself regardless of whether or not $(\mathfrak{r},\mathcal{N})=1$. This action is given on Fourier coefficients by \begin{equation}\label{equation:heckeformula}C(\mathfrak{m},f\mid T_{\mathfrak{r}})=\sum_{\mathfrak{m}+\mathfrak{r}\subset\mathfrak{a}} \Psi^*(\mathfrak{a})N(\mathfrak{a})^{k_0-1}C(\mathfrak{a}^{-2}\mathfrak{m}\mathfrak{r},f).\end{equation} Note that if $(\mathfrak{a},\mathfrak{r})=1$ then $B_{\mathfrak{a}}T_{\mathfrak{r}}=T_{\mathfrak{r}}B_{\mathfrak{a}}$. 
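To make the coefficient formula~(\ref{equation:heckeformula}) concrete, the following sketch (purely illustrative, and not part of the theory above) specializes to $K=\mathbb{Q}$ with trivial character, so that integral ideals are positive integers, $N(\mathfrak a)=a$, and the sum runs over the divisors $a$ of $\gcd(m,r)$:

```python
from math import gcd

def divisors(n):
    """Positive divisors of n (the integral ideals dividing n when K = Q)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma(m, k):
    """sigma_k(m): the m-th coefficient of a classical level-1 Eisenstein series."""
    return sum(d**k for d in divisors(m))

def hecke_coefficient(C, m, r, k0, psi_star=lambda a: 1):
    """m-th Fourier coefficient of f | T_r, specialized to K = Q:
       C(m, f|T_r) = sum_{a | gcd(m,r)} psi*(a) a^(k0-1) C(m r / a^2)."""
    return sum(psi_star(a) * a**(k0 - 1) * C(m * r // (a * a))
               for a in divisors(gcd(m, r)))
```

For $C(m)=\sigma_{k_0-1}(m)$ this reproduces the classical Hecke relation $C(m,f\mid T_r)=\sigma_{k_0-1}(r)\,\sigma_{k_0-1}(m)$, as expected of a normalized eigenform.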
\section{A newform theory for Hilbert Eisenstein series}\label{section:newformtheory} In this section we give a very brief outline of the newform theory of spaces of Hilbert Eisenstein series. This theory will play a pivotal role in the proof of Theorem \ref{theorem:main}. A detailed treatment is provided in \cite{Linowitz-Atwill}. Fix a space $\mathscr{M}_k(\mathcal{N}, \Psi)\subset \mathscr{M}_k(\mathcal{N},\psi)$ where $\psi: (\mathcal O/\mathcal{N})^\times\rightarrow\mathbb{C}^\times$ is a numerical character, $\Psi$ is a Hecke character extending $\psi$ and $k\in \mathbb{Z}^n$. Let $\mathscr{E}_k(\mathcal{N}, \Psi)$ be the orthogonal complement of $\mathscr{S}_k(\mathcal{N},\Psi)$ in $ \mathscr{M}_k(\mathcal{N}, \Psi)$ with respect to the Petersson inner product (i.e. $\mathscr{E}_k(\mathcal{N}, \Psi)$ is the subspace of Eisenstein series). It is well known that $ \mathscr{M}_k(\mathcal{N}, \Psi)= \mathscr{S}_k(\mathcal{N}, \Psi)$ unless $k_1=\cdots = k_n\geq 0$. Thus we may abuse notation and make the identification $k=(k,...,k)$ for some integer $k$. We will assume throughout that $k\geq 3$. We begin with a construction due to Shimura (\cite[Prop. 3.4]{shimura-duke}) \begin{proposition}\label{proposition:existence} Let $\mathcal{N}_1, \mathcal{N}_2$ be integral ideals and $\psi_1$ (respectively $\psi_2$) be a character, not necessarily primitive, on the strict ideal class group modulo $\mathcal{N}_1$ (respectively $\mathcal{N}_2$) such that \begin{align*} \psi_1(\nu\mathcal{O}) = \mbox{sgn}(\nu)^a\qquad \mbox{for } \nu\equiv 1\pmod{\mathcal{N}_1}\\ \psi_2(\nu\mathcal{O}) = \mbox{sgn}(\nu)^b\qquad \mbox{for } \nu\equiv 1\pmod{\mathcal{N}_2}, \end{align*} where $a,b\in \mathbb{Z}^n$ and $a+b\equiv k\pmod{2\mathbb{Z}^n}$. 
Then there exists $E_{\psi_1,\psi_2}\in\mathscr{E}_k(\mathcal{N}_1\mathcal{N}_2,\Psi)$, where $\Psi$ is the Hecke character such that $\Psi^*(\mathfrak r)=(\psi_1\psi_2)(\mathfrak r)$ for $\mathfrak r$ coprime to $\mathcal{N}_1\mathcal{N}_2$, such that for any integral ideal $\mathfrak{m}$, \begin{equation}\label{equation:coeff} C(\mathfrak{m}, E_{\psi_1,\psi_2})=\sum_{\mathfrak{r}\mid \mathfrak{m}} \psi_1(\mathfrak{m}\mathfrak{r}^{-1})\psi_2(\mathfrak{r}) N(\mathfrak{r})^{k-1}. \end{equation} \end{proposition} It is easy to see that, in analogy with the cuspidal case, if ${\mathfrak p}$ is a prime not dividing $\mathcal{N}_1\mathcal{N}_2$ then $E_{\psi_1,\psi_2}$ is an eigenform of the Hecke operator $T_{\mathfrak p}$ with eigenvalue $C({\mathfrak p},E_{\psi_1,\psi_2})$. We say that a form $E_{\psi_1,\psi_2}$ is an \textit{Eisenstein newform} of level $\mathcal{N}$ if the characters $\psi_1$ and $\psi_2$ are both primitive and $\mathcal{N}=\mathfrak f_{\psi_1}\mathfrak{f}_{\psi_2}$. We denote by $\mathscr{E}^{(new)}_k(\mathcal{N},\Psi)$ the subspace of $\mathscr{E}_k(\mathcal{N},\Psi)$ generated by the newforms of level $\mathcal{N}$. As in the cuspidal case, $\mathscr{E}_k(\mathcal{N},\Psi)$ has a basis consisting of shifts of newforms of level $\mathcal{M}$ dividing $\mathcal{N}$ (\cite[Prop. 1.5]{wiles}, \cite[Prop. 3.11]{Linowitz-Atwill}). \begin{proposition}\label{proposition:decomp} Let notation be as above. We have the following decomposition of $\mathscr{E}_k(\mathcal{N},\Psi)$:$$\mathscr{E}_k(\mathcal{N},\Psi)=\bigoplus_{\mathfrak{f}_\Psi\mid \mathcal{M}\mid \mathcal{N}}\bigoplus_{\qquad\mathfrak{r}\mid \mathcal{N}\mathcal{M}^{-1}} \mathscr{E}^{(new)}_k(\mathcal{M},\Psi)\mid B_{\mathfrak{r}}.$$ \end{proposition} The eigenvalues of an Eisenstein newform with respect to the Hecke operators $\{ T_{\mathfrak p} : {\mathfrak p}\nmid \mathcal{N}\}$ distinguish it from other newforms. 
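As an illustration of~(\ref{equation:coeff}) (a sketch only, specialized to $K=\mathbb{Q}$ so that ideals are positive integers and $\psi_1,\psi_2$ become ordinary character-like functions; the characters chosen below are hypothetical examples), the coefficients of $E_{\psi_1,\psi_2}$ are twisted divisor sums:

```python
def eisenstein_coeff(m, k, psi1, psi2):
    """C(m, E_{psi1,psi2}) = sum_{r | m} psi1(m/r) psi2(r) N(r)^(k-1),
       with K = Q, so ideals are positive integers and N(r) = r."""
    return sum(psi1(m // r) * psi2(r) * r**(k - 1)
               for r in range(1, m + 1) if m % r == 0)

# Illustrative characters: psi1 trivial, psi2 the nontrivial character mod 4.
triv = lambda n: 1
chi4 = lambda n: 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)
```

Being a Dirichlet convolution of multiplicative functions, the resulting coefficient sequence is multiplicative on coprime arguments, consistent with the eigenform property stated above.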
In particular we have the following strong multiplicity-one theorem (Theorem 3.6 of \cite{Linowitz-Atwill}). \begin{theorem}\label{theorem:eigenvalues} Let $\mathcal{M},\mathcal{N}$ be integral ideals and $E_{\psi_1,\psi_2}\in \mathscr{E}^{(new)}_k(\mathcal{M},\Psi)$ and $E_{\phi_1,\phi_2} \in \mathscr{E}_k^{(new)}(\mathcal{N},\Psi)$ be newforms such that $$C({\mathfrak p},E_{\psi_1,\psi_2})=C({\mathfrak p},E_{\phi_1,\phi_2})$$ for a set of finite primes of $K$ having Dirichlet density strictly greater than $\frac{1}{2}$. Then $\mathcal{M}=\mathcal{N}$ and $E_{\psi_1,\psi_2}=E_{\phi_1,\phi_2}$. \end{theorem} \section{Proof of theorem} Fix an integral ideal $\mathcal{N}$ of $K$, a weight vector $(k,\dots, k)\in \mathbb{Z}^n$ with $k\geq 3$ and consider the space $\mathscr{M}_k(\mathcal{N})$ of adelic Hilbert modular forms of level $\mathcal{N}$, weight $k$ and trivial character. We will characterize the Hilbert modular cusp forms of $\mathscr{M}_k(\mathcal{N})$ in terms of the size of their Fourier coefficients. \begin{theorem}\label{theorem:main} Fix a real number $\epsilon\in \left(0, \frac{5k-7}{10}\right]$. If the Fourier coefficients of $f\in\mathscr{M}_k(\mathcal{N})$ satisfy \begin{equation}\label{equation:bound} |C({\mathfrak m},f)| \ll_{f,\epsilon} N({\mathfrak m})^{k-1-\epsilon}, \end{equation} then $f$ is a cusp form. \end{theorem} \begin{proof} The space $\mathscr{M}_k(\mathcal{N})$ decomposes into a direct sum $$\mathscr{M}_k(\mathcal{N})=\mathscr{S}_k(\mathcal{N})\oplus\mathscr{E}_k(\mathcal{N}),$$ where $\mathscr{E}_k(\mathcal{N})$ is the subspace of Eisenstein series. Shahidi \cite{Shahidi} has shown that the Fourier coefficients of cusp forms satisfy $|C({\mathfrak m},f)|\ll_f N({\mathfrak m})^{\frac{k-1}{2}+\frac{1}{5}}$ and hence satisfy (\ref{equation:bound}) as well. Thus it suffices to show that if $f\in\mathscr{E}_k(\mathcal{N})$ satisfies (\ref{equation:bound}) then $f=0$. 
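The upper limit $\frac{5k-7}{10}$ in Theorem~\ref{theorem:main} is precisely the largest $\epsilon$ for which Shahidi's estimate guarantees that cusp forms satisfy~(\ref{equation:bound}): $$\frac{k-1}{2}+\frac{1}{5}\leq k-1-\epsilon \iff \epsilon\leq \frac{k-1}{2}-\frac{1}{5}=\frac{5k-7}{10},$$ and the standing assumption $k\geq 3$ makes this interval of admissible $\epsilon$ nonempty.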
We have a decomposition \begin{equation}\label{equation:heckedecomp} \mathscr{E}_k(\mathcal{N})=\bigoplus_{\Psi} \mathscr{E}_k(\mathcal{N},\Psi), \end{equation} where the sum is taken over Hecke characters extending the trivial character $$\psi_0: \left(\mathcal O/\mathcal{N}\right)^\times\rightarrow \mathbb C^\times.$$ Let $\mathbb{T}^\mathcal{N}$ denote the algebra generated by all Hecke operators $T_{\mathfrak m}$ with ${\mathfrak m}$ an integral ideal coprime to $\mathcal{N}$. It is well-known that for each direct summand in (\ref{equation:heckedecomp}) we can find an operator $T\in\mathbb{T}^\mathcal{N}$ such that $T$ acts on this summand by multiplication by a non-zero scalar and annihilates all of the other summands. Observing that if $f\in \mathscr{E}_k(\mathcal{N})$ satisfies (\ref{equation:bound}) then so too does $f\mid T$ for all $T\in\mathbb{T}^\mathcal{N}$ (as follows from the action of $T_{\mathfrak m}$ on Fourier coefficients; see (\ref{equation:heckeformula})), we see that it suffices to fix a direct summand $\mathscr{E}_k(\mathcal{N},\Psi)$ of (\ref{equation:heckedecomp}) and prove that if $f\in\mathscr{E}_k(\mathcal{N},\Psi)$ and $f$ satisfies (\ref{equation:bound}) then $f=0$. The newform theory of Hilbert Eisenstein series \cite{Linowitz-Atwill} shows that we have \begin{equation}\mathscr{E}_k(\mathcal{N},\Psi)=\bigoplus_{\substack{(\psi_1,\psi_2)\\ \psi_1\psi_2=\Psi^*}} \bigoplus_{{\mathfrak a}\mid \mathcal{N}{\mathfrak f}_{\psi_1}^{-1}{\mathfrak f}_{\psi_2}^{-1} } \mathbb C E_{\psi_1,\psi_2}\mid B_{\mathfrak a}, \end{equation} where the first sum is over pairs $(\psi_1,\psi_2)$ of primitive characters on the strict class group of $K$ having the property that $\Psi^*(\mathfrak r)=\psi_1(\mathfrak r)\psi_2(\mathfrak r)$ for all integral ideals $\mathfrak r$ coprime to ${\mathfrak f}_{\psi_1}{\mathfrak f}_{\psi_2}$. 
We therefore fix an Eisenstein newform $E_{\psi_1,\psi_2}$ of level $\mathcal{M}$ dividing $\mathcal{N}$ which satisfies (\ref{equation:bound}) and write \begin{equation}\label{equation:fdecomp}f=\sum_{\frak a\mid\mathcal{N}\mathcal{M}^{-1}} \lambda_{\mathfrak a} E_{\psi_1,\psi_2}\mid B_{\mathfrak a},\end{equation} where the coefficients $\lambda_{\mathfrak a}$ are all complex numbers. We show that each of the coefficients $\lambda_{\mathfrak a}$ is equal to zero. Our argument is by induction. For each $r\geq 0$ we show that if a divisor ${\mathfrak a}$ of $\mathcal{N}\mathcal{M}^{-1}$ has $r$ prime factors (with multiplicity) then $\lambda_{\mathfrak a}=0$. Suppose first that $r=0$ so that ${\mathfrak a}=(1)$. Let ${\mathfrak P}$ be a prime ideal which represents the trivial element of the strict ideal class group modulo $\mathcal{N}$. It is clear that ${\mathfrak P}\nmid \mathcal{N}$. Then $\psi_1({\mathfrak P})=\psi_2({\mathfrak P})=1$. Note as well that if ${\mathfrak b}$ is a divisor of $\mathcal{N}\mathcal{M}^{-1}$ with at least one prime divisor then $$C({\mathfrak P}, E_{\psi_1,\psi_2}\mid B_{{\mathfrak b}})=C({\mathfrak P}{\mathfrak b}^{-1},E_{\psi_1,\psi_2})=0$$ since ${\mathfrak P}{\mathfrak b}^{-1}$ is not integral. Computing the ${\mathfrak P}$-th coefficients in (\ref{equation:fdecomp}) now yields: $$|\lambda_{(1)}|(1+N({\mathfrak P})^{k-1})\ll N({\mathfrak P})^{k-1-\epsilon}.$$ The Chebotarev Density theorem implies that there are infinitely many prime ideals ${\mathfrak P}$ which represent the trivial strict ideal class modulo $\mathcal{N}$, hence as $N({\mathfrak P})\rightarrow \infty$ we obtain a contradiction unless $\lambda_{(1)}=0$. We now suppose that $r\geq 1$ and that we have shown that $\lambda_{{\mathfrak c}}=0$ for any divisor ${\mathfrak c}$ of $\mathcal{N}\mathcal{M}^{-1}$ having fewer than $r$ prime factors. 
Let ${\mathfrak a}$ be a divisor of $\mathcal{N}\mathcal{M}^{-1}$ having exactly $r$ prime factors (not necessarily distinct) and write ${\mathfrak a}={\mathfrak P}_1\dots{\mathfrak P}_r$ as a product of primes. Let ${\mathfrak P}$ be a prime ideal which represents the trivial element of the strict ideal class group modulo $\mathcal{N}$ and set ${\mathfrak m}={\mathfrak a}{\mathfrak P}={\mathfrak P}_1\dots{\mathfrak P}_r{\mathfrak P}$. As above, if ${\mathfrak b}\neq {\mathfrak a}$ is a divisor of $\mathcal{N}\mathcal{M}^{-1}$ with at least $r$ prime factors then we have $$C({\mathfrak m}, E_{\psi_1,\psi_2}\mid B_{{\mathfrak b}})=C({\mathfrak m}{\mathfrak b}^{-1},E_{\psi_1,\psi_2})=0.$$ Computing the ${\mathfrak m}$-th coefficients in (\ref{equation:fdecomp}) now yields: $$|\lambda_{\mathfrak a}|(1+N({\mathfrak P})^{k-1})\ll N({\mathfrak P})^{k-1-\epsilon}.$$ As ${\mathfrak P}$ was selected from an infinite set of primes (a set which in fact has positive Dirichlet density), by letting $N({\mathfrak P})\rightarrow \infty$ we obtain a contradiction unless $\lambda_{\mathfrak a}=0$. \end{proof} \end{document}
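The asymptotic step in the argument above can be sanity-checked numerically (purely illustrative): the left-hand side $|\lambda_{\mathfrak a}|(1+N({\mathfrak P})^{k-1})$ grows like $N({\mathfrak P})^{k-1}$ while the right-hand side is $O(N({\mathfrak P})^{k-1-\epsilon})$, so the implied bound on $|\lambda_{\mathfrak a}|$ decays like $N({\mathfrak P})^{-\epsilon}$:

```python
k, eps = 3, 0.4  # illustrative weight and epsilon with 0 < eps <= (5k-7)/10

def implied_bound(N):
    """Bound on |lambda_a| implied by comparing the two sides at norm N."""
    return N**(k - 1 - eps) / (1 + N**(k - 1))

bounds = [implied_bound(10**j) for j in (1, 3, 6, 9)]
```

Since the bound tends to $0$ along any sequence of primes with $N({\mathfrak P})\rightarrow\infty$, the coefficient $\lambda_{\mathfrak a}$ must vanish.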
\begin{document} \title{Graph-based algorithms for the efficient solution of a class of optimization problems} \providecommand{\keywords}[1]{\textbf{\textit{Index terms---}} #1} \author{Luca Consolini$^1$, Mattia Laurini$^1$, Marco Locatelli$^1$} \date{\small $^1$ Dipartimento di Ingegneria e Architettura, Universit\`a degli Studi di Parma,\\ Parco Area delle Scienze 181/A, 43124 Parma, Italy.\\ [email protected], [email protected], [email protected]} \maketitle \begin{abstract} In this paper, we address a class of specially structured problems that include speed planning, for mobile robots and robotic manipulators, and dynamic programming. We develop two new numerical procedures that apply to the general case and to the linear subcase. With numerical experiments, we show that the proposed algorithms outperform generic commercial solvers. \end{abstract} \keywords{Computational methods, Acceleration of convergence, Dynamic programming, Complete lattices} \section{Introduction} In this paper, we address a class of specially structured problems of the form \begin{equation} \label{eqn_prob_class} \begin{aligned} & \max_x f(x)\\ \textrm{subject to }\ & a \leq x \leq g(x) , \end{aligned} \end{equation} where $x \in \mathbb{R}^n$, $a \in \mathbb{R}^n$, $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is a continuous function, strictly monotone increasing with respect to each component, and $g = (g_1, g_2, \ldots, g_n)^T: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a continuous function such that, for $i = 1, \ldots, n$, $g_i$ is monotone nondecreasing with respect to all variables and constant with respect to $x_i$. Also, we assume that there exists a real constant vector $U$ such that \begin{equation} \label{eqn_for_f} g(x) \leq U, \quad \forall x: a \leq x \leq g(x)\,. 
\end{equation} A problem related to~\eqref{eqn_prob_class} that is relevant in applications is the following one: \begin{equation} \label{eqn_prob_class_lin} \begin{aligned} & \max_x f(x)\\ \textrm{subject to }\ & 0 \leq x \leq \underset{\ell \in \mathcal{L}}{\glb} \left\{ A_\ell x + b_\ell \right\},\ x \leq U, \end{aligned} \end{equation} where, for each $\ell \in \mathcal{L} = \{1, \ldots, L\}$, with $L \in \mathbb{N}$, $A_\ell$ is a nonnegative matrix and $b_\ell$ is a nonnegative vector. Note that the expression $\underset{\ell \in \mathcal{L}}{\glb}$, on the right-hand side of~\eqref{eqn_prob_class_lin}, denotes the greatest lower bound of $L$ vectors. It corresponds to the component-wise minimum of the vectors $A_\ell x + b_\ell$, where a different value of $\ell \in \mathcal{L}$ can be chosen for each component. We will show that Problem~\eqref{eqn_prob_class_lin} is actually a subclass of~\eqref{eqn_prob_class} after a suitable definition of function $g$ in~\eqref{eqn_prob_class}. We will also show that the solution of Problems~\eqref{eqn_prob_class} and~\eqref{eqn_prob_class_lin} is independent of the specific choice of $f$. Hence, Problem~\eqref{eqn_prob_class_lin} is equivalent to the following linear one \begin{equation} \label{eqn_prob_class_proglin} \begin{aligned} & \max_x \sum_{i=1}^n x_i\\ \textrm{subject to }\ & 0 \leq x ,\ C x + d \leq 0,\ x \leq U, \end{aligned} \end{equation} where $C$ is a matrix such that every row contains one and only one positive entry and $d$ is a nonpositive vector. The structure of the paper is the following: in Section~\ref{sec:appl} we justify the interest in Problem class~\eqref{eqn_prob_class} and, in particular, its subclass~\eqref{eqn_prob_class_lin}, by presenting some problems in control, which can be reformulated as optimization problems within subclass~\eqref{eqn_prob_class_lin}. 
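The lattice structure behind Problem~\eqref{eqn_prob_class_lin} can be illustrated with a naive fixed-point sketch (a toy illustration of the Knaster--Tarski mechanism, not one of the accelerated procedures proposed in this paper): since $x\mapsto \min\left(U,\ \glb_\ell\{A_\ell x+b_\ell\}\right)$ is a monotone self-map of $[0,U]$ when the $A_\ell$ and $b_\ell$ are nonnegative, iterating it from $x=U$ produces a decreasing sequence converging to the componentwise-greatest feasible point, which maximizes every componentwise-increasing $f$:

```python
import numpy as np

def greatest_feasible_point(A_list, b_list, U, tol=1e-10, max_iter=100000):
    """Naive fixed-point iteration for the feasible set of the linear subclass:
       the largest x with 0 <= x <= glb_l {A_l x + b_l} and x <= U,
       with A_l nonnegative matrices and b_l nonnegative vectors."""
    x = np.asarray(U, dtype=float)
    for _ in range(max_iter):
        gx = np.min([A @ x + b for A, b in zip(A_list, b_list)], axis=0)
        x_new = np.minimum(gx, U)            # monotone self-map of [0, U]
        if np.max(np.abs(x_new - x)) < tol:  # stop at a numerical fixed point
            return x_new
        x = x_new
    return x
```

On a small two-variable instance with two affine pieces, the iteration settles on the greatest fixed point in a handful of steps; the point of the paper's algorithms is to reach it much faster in the large-scale case.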
In Section~\ref{sec:prob_class} we derive some theoretical results about Problem~\eqref{eqn_prob_class} and a class of algorithms for its solution. In Section~\ref{sec_lin_case} we do the same for the subclass~\eqref{eqn_prob_class_lin}. In Section~\ref{sec:convergence_speed_discussion} we discuss some theoretical and practical issues about the convergence speed of the algorithms and we present some numerical experiments. Some proofs are given in the appendix. \subsection{Problems reducible to form~\eqref{eqn_prob_class_lin}}\label{sec:appl} \subsubsection{Speed planning for autonomous vehicles} \begin{figure} \centering \includegraphics[width = 2.5in]{./Figure/draw_cars2.eps} \caption{A path to follow for an autonomous car-like vehicle.} \label{fig:path-to-follow} \end{figure} This example is taken from~\cite{MinSCL17} and we refer the reader to this reference for further details. We consider a speed planning problem for a mobile vehicle (see Figure~\ref{fig:path-to-follow}). We assume that the path that joins the initial and the final configuration is assigned and we aim at finding the time-optimal speed law that satisfies some kinematic and dynamic constraints. Namely, we consider the following problem \begin{subequations} \label{eqn_problem_pr} \begin{align} & \min_{v \in C^1([0,s_f],\mathbb{R})} \int_0^{s_f} v^{-1}(s) d s \label{obj_fun_pr}\\ \textrm{subject to }\ &v(0)=0,\,v(s_f)=0 \label{inter_con_pr}\\ & 0< v(s) \leq \bar v,& s \in (0,s_f), \label{con_speed_pr}\\ & |2 v'(s)v(s)| \leq A_T, &s \in [0,s_f], \label{con_at_pr}\\ & |k(s)| v(s)^2 \leq A_N, &s \in [0,s_f], \label{con_an_pr} \end{align} \end{subequations} where $\bar v$, $A_T$, $A_N$ are upper bounds for the velocity, the tangential acceleration and the normal acceleration, respectively. 
Here, $s_f$ is the length of the path (that is assumed to be parameterized according to its arc length) and $k$ is its scalar curvature (i.e., a function whose absolute value is the inverse of the radius of the circle that locally approximates the trajectory). The objective function~\eqref{obj_fun_pr} is the total maneuver time, constraints~\eqref{inter_con_pr} are the initial and final interpolation conditions and constraints~\eqref{con_speed_pr},~\eqref{con_at_pr},~\eqref{con_an_pr} limit the velocity and the tangential and normal components of the acceleration. After the change of variable $w=v^2$, the problem can be rewritten as \begin{subequations} \label{eqn_problem_cont} \begin{align} & \min_{w \in C^1([0,s_f],\mathbb{R})} \int_0^{s_f} w(s)^{-1/2} ds \label{obj_fun_cont}\\ \textrm{subject to }\ &w(0)=0,\,w(s_f)=0, \label{inter_con_cont}\\ & 0< w(s) \leq \bar v^2, &s \in (0,s_f), \label{con_speed_cont}\\ & |w'(s)| \leq A_T, & s \in [0,s_f], \label{con_at_cont}\\ & |k(s)| w(s) \leq A_N, & s \in [0,s_f]. \label{con_an_cont} \end{align} \end{subequations} For $i=1,\ldots,n$, set $w_i=w((i-1)h)$, with $h=\frac{s_f}{n-1}$; then Problem~\eqref{eqn_problem_cont} can be approximated with \begin{subequations} \label{eqn_problem_dis} \begin{align} & \min_{w\in \mathbb{R}^n} \ \ \phi(w) \label{obj_fun_dis}\\ \textrm{subject to }\ & w_1=0,\,w_n=0, \label{inter_con_dis}\\ & 0< w_i \leq \bar v^2, &i=2,\ldots,n-1, \label{con_speed_dis}\\ & |w_{i+1}-w_i| \leq h A_T, & i =1,\ldots,n-1, \label{con_at_dis}\\ & |k(h(i -1))| w_i \leq A_N, & i = 2,\ldots,n-1, \label{con_an_dis} \end{align} \end{subequations} where the total time to travel the complete path is approximated by \begin{equation} \label{eqn_obj_discr} \phi(w) = \sum_{i=1}^{n -1} t_i = 2h \sum_{i=1}^{n-1} \frac{1}{\sqrt{w_i}+ \sqrt{w_{i+1}}}. \end{equation} Note that condition~\eqref{con_at_dis} is obtained by an Euler approximation of $w'(hi)$. 
Similarly, the objective function~\eqref{eqn_obj_discr} is a discrete approximation of the integral appearing in~\eqref{obj_fun_cont}. By setting $f(w)=\phi(w)$, $a=0$, $g_1(w)=0$, $g_n(w)=0$ and, for $i = 2, \ldots, n-1$, \[ g_i(w)=\bigwedge \left\{\bar v^2, \frac{A_N}{|k(h(i - 1))|}, h A_T + w_{i-1}, h A_T + w_{i+1}\right\}\,, \] Problem~\eqref{eqn_problem_dis} takes on the form of Problem~\eqref{eqn_prob_class} and, since $g$ is linear with respect to $w$, it also belongs to the more specific class~\eqref{eqn_prob_class_lin}. We remark that, with respect to the problem class~\eqref{eqn_prob_class_lin}, we minimize a decreasing function, which is equivalent to maximizing an increasing function. Our previous works~\cite{Minari16},~\cite{MinSCL17} present an algorithm, with linear-time computational complexity with respect to the number of variables $n$, that provides an optimal solution of Problem~\eqref{eqn_problem_dis}. This algorithm is a specialization of the algorithms proposed in this paper which exploits some specific features of Problem~\eqref{eqn_problem_dis}. In particular, the key property of Problem~\eqref{eqn_problem_dis}, which strongly simplifies its solution, is that the functions $g_i$ fulfill the so-called \emph{superiority condition} \[ g_i(w_{i-1},w_{i+1}) \geq \max\{w_{i-1}, w_{i+1}\}, \] i.e., the value of function $g_i$ is not lower than each one of its arguments. \subsubsection{Speed planning for robotic manipulators} The technical details of this second example are more involved and we refer the reader to~\cite{DBLP:journals/corr/abs-1802-03294} for the complete discussion. Let $\mathbb{R}^p$ be the configuration space of a robotic manipulator with $p$ degrees of freedom. 
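The effect of the superiority condition can be sketched as follows (a simplified illustration of the linear-time behavior, not the full algorithm of~\cite{MinSCL17}): with the pointwise caps coming from~\eqref{con_speed_dis} and~\eqref{con_an_dis} collected into a single vector of limits, one forward sweep and one backward sweep already produce the componentwise-greatest profile satisfying~\eqref{con_at_dis} and the interpolation conditions:

```python
import numpy as np

def plan_speed(limits, hAT):
    """Toy two-sweep solver: maximize each w_i subject to w_1 = w_n = 0,
       w_i <= limits[i] (velocity / normal-acceleration caps) and
       |w_{i+1} - w_i| <= hAT (tangential-acceleration bound)."""
    w = np.array(limits, dtype=float)
    w[0] = w[-1] = 0.0                   # interpolation conditions
    for i in range(1, len(w)):           # forward sweep: w_i <= w_{i-1} + hAT
        w[i] = min(w[i], w[i - 1] + hAT)
    for i in range(len(w) - 2, -1, -1):  # backward sweep: w_i <= w_{i+1} + hAT
        w[i] = min(w[i], w[i + 1] + hAT)
    return w
```

Each sweep only ever lowers values that violate a constraint, and the superiority condition guarantees that no third pass is needed, which is the source of the $O(n)$ complexity.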
The coordinate vector $\mathbf{q}$ of a trajectory in the configuration space satisfies the dynamic equation \begin{equation} \label{eq:manip} \boldsymbol{D}(\mathbf{q})\ddot{\mathbf{q}} + \boldsymbol{C}(\mathbf{q},\dot{\mathbf{q}} )\dot{\mathbf{q}} + \boldsymbol{\ell}(\mathbf{q}) = \boldsymbol{\tau}, \end{equation} where $\mathbf{q} \in \mathbb{R}^{p}$ is the generalized position vector, $\boldsymbol{\tau} \in \mathbb{R}^{p}$ is the generalized force vector, $\boldsymbol{D}(\mathbf{q})$ is the mass matrix, $\boldsymbol{C}(\mathbf{q},\dot{\mathbf{q}})$ is the matrix accounting for centrifugal and Coriolis effects (assumed to be linear in $\dot{\mathbf{q}}$) and $\boldsymbol{\ell}(\mathbf{q})$ is the vector accounting for joint-position-dependent forces, including gravity. Note that we do not consider Coulomb friction forces. Let $\boldsymbol{\gamma} \in C^2([0,s_{f}],\mathbb{R}^{p}) $ be a function such that ($\forall \lambda \in [0,s_f]$) $\lVert \boldsymbol{\gamma}^\prime(\lambda) \rVert =1$. The image set $\boldsymbol{\gamma}([0,s_f])$ represents the coordinates of the elements of a reference path. In particular, $\boldsymbol{\gamma}(0)$ and $\boldsymbol{\gamma}(s_f)$ are the coordinates of the initial and final configurations. Define $t_{f}$ as the time when the robot reaches the end of the path. Let $\lambda : [0, t_f] \rightarrow [0, s_f]$ be a differentiable monotone increasing function that represents the position of the robot as a function of time and let $ v : [0, s_f] \rightarrow [0, +\infty)$ be such that $\left( \forall t \in [0,t_f]\right) \dot{\lambda}(t) = v(\lambda(t))$. Namely, $v(s)$ is the velocity of the robot at position $s$. We impose ($\forall s \in [0,s_{f}]$) $v(s) \ge 0$. 
For any $t \in [0,t_f]$, using the chain rule, we obtain \begin{equation} \label{eq:rep} \begin{array}{ll} \mathbf{q}(t) =& \boldsymbol{\gamma}(\lambda(t)),\\[8pt] \dot{\mathbf{q}}(t) = & \boldsymbol{\gamma}^{\prime}(\lambda(t))v(\lambda(t)),\\[8pt] \ddot{\mathbf{q}}(t) = & \boldsymbol{\gamma}^{\prime}(\lambda(t))v^\prime(\lambda(t))v(\lambda(t)) + \boldsymbol{\gamma}^{\prime\prime}(\lambda(t))v(\lambda(t))^2. \end{array} \end{equation} Substituting (\ref{eq:rep}) into the dynamic equations (\ref{eq:manip}) and setting $s = \lambda(t)$, we rewrite the dynamic equation (\ref{eq:manip}) as follows:\\ \begin{equation} \label{eq:dynamic} \mathbf{d}(s)v^{\prime}(s)v(s) + \mathbf{c}(s)v(s)^2 + \mathbf{g}(s) = \boldsymbol{\tau}(s) , \end{equation} where the parameters in (\ref{eq:dynamic}) are defined as \begin{equation} \label{eq:dynamic_parameters} \begin{array}{l} \mathbf{d}(s) = \boldsymbol{D}(\boldsymbol{\gamma}(s))\boldsymbol{\gamma}^{\prime}(s),\\ [8pt] \mathbf{c}(s) = \boldsymbol{D}(\boldsymbol{\gamma}(s))\boldsymbol{\gamma}^{\prime\prime}(s) + \boldsymbol{C}(\boldsymbol{\gamma}(s),\boldsymbol{\gamma}^{\prime}(s))\boldsymbol{\gamma}^{\prime}(s), \\ [8pt] \mathbf{g}(s) = \boldsymbol{\ell}(\boldsymbol{\gamma}(s)). \end{array} \end{equation} The objective function is given by the overall travel time $t_f$ defined as \begin{equation} \label{eq:objective} \displaystyle t_f = \int_0^{t_f}1\,dt = \int_{0}^{s_f} v(s)^{-1}\, ds. 
\end{equation} Let $\boldsymbol{\mu}, \boldsymbol{\psi}, \boldsymbol{\alpha} : \left[ 0, s_f \right] \rightarrow \mathbb{R}^{p}_{+}$ be assigned bounded functions and consider the following minimum-time problem: \begin{subequations}\label{prob:1} \begin{align} & \displaystyle\min_{v \in C^{1},\boldsymbol{\tau}\in C^{0}} \displaystyle\int_0^{s_f} v(s)^{-1} \, ds, \label{obj:v}\\ \textrm{subject to }\ & \ (\forall s \in [0,s_{f}]) \nonumber\\ & \mathbf{d}(s)v^{\prime}(s)v(s) + \mathbf{c}(s)v(s)^2 + \mathbf{g}(s) = \boldsymbol{\tau}(s), \label{con:dynamic}\\ & \boldsymbol{\gamma}^{\prime}(s)v(s) = \dot{\mathbf{q}}(s),\label{con:kinematic1} \\ & \boldsymbol{\gamma}^{\prime}(s)v^\prime(s)v(s) + \boldsymbol{\gamma}^{\prime\prime}(s) v(s)^{2} = \ddot{\mathbf{q}}(s),\label{con:kinematic2} \\ & \lvert \boldsymbol{\tau}(s) \rvert \le \boldsymbol{\mu}(s), \label{con:force_bound}\\ & \lvert \dot{\mathbf{q}}(s) \rvert \le \boldsymbol{\psi}(s),\label{con:vel_bound} \\ & \lvert \ddot{\mathbf{q}}(s) \rvert \le \boldsymbol{\alpha} (s), \label{con:acc_bound}\\ &v(s) \ge 0, \label{con:positive-velocity} \\ & v(0) = 0, \, v(s_f) = 0, \label{con:interpolation} \end{align} \end{subequations} where (\ref{con:dynamic}) represents the robot dynamics, (\ref{con:kinematic1})-(\ref{con:kinematic2}) represent the relation between the path $\boldsymbol{\gamma}$ and the generalized position $\mathbf{q}$ shown in~(\ref{eq:rep}), (\ref{con:force_bound}) represents the bounds on generalized forces, and (\ref{con:vel_bound}) and (\ref{con:acc_bound}) represent the bounds on joint velocity and acceleration, respectively. Constraints~(\ref{con:interpolation}) specify the interpolation conditions at the beginning and at the end of the path. After some manipulation and using a carefully chosen finite-dimensional approximation (again, see~\cite{DBLP:journals/corr/abs-1802-03294} for the details), Problem~\eqref{prob:1} can be reduced to the following form (see Proposition~8 of~\cite{DBLP:journals/corr/abs-1802-03294}): 
\begin{equation} \label{eq:probl} \begin{aligned} & \min_w \phi(w) \\ \textrm{subject to }\ & w_i\leq f_{j,i} w_{i+1} + c_{j,i} & i=1,\ldots,n-1, \quad j=1,\ldots,p, \\[5pt] & w_{i+1}\leq b_{k,i} w_{i} + d_{k,i} & i=1,\ldots,n-1, \quad k=1,\ldots,p,\\[5pt] & 0\leq w_i\leq u_i & i=1,\ldots,n, \end{aligned} \end{equation} where $\phi$ is defined as in~\eqref{eqn_obj_discr} and $w = (w_1, \ldots, w_n)^T$. For $i = 1, \ldots, n$, $w_i = v((i-1)h)^2$, with $h = \frac{s_f}{n-1}$, is the squared manipulator speed at configuration $\boldsymbol{\gamma}((i-1)h)$. Moreover, $u_i$, $f_{j,i}$, $c_{j,i}$, $b_{k,i}$, $d_{k,i}$ are nonnegative constant terms depending on the problem data. Problem~\eqref{eq:probl} belongs to classes~\eqref{eqn_prob_class} and~\eqref{eqn_prob_class_lin}. Also in this case, the performance of the algorithms proposed in this paper can be enhanced by exploiting some further specific features of Problem~\eqref{eq:probl}. In particular, in~\cite{DBLP:journals/corr/abs-1802-03294}, we were able to develop a version of the algorithm with optimal time-complexity $O(n p)$. \subsubsection{Dynamic Programming} \label{sec:motivation} This section is based on Appendix~A of~\cite{bardi2008optimal}, to which we refer the reader for more details. Consider a control system defined by the following differential equation in $\mathbb{R}^n$: \begin{equation} \label{eq:controlProblem} \begin{cases} \dot{x}(t) = f(x(t),u(t)) \\ x(0) = x_{0}, \end{cases} \end{equation} \noindent where $f:\mathbb{R}^n \times U \rightarrow \mathbb{R}^n$ is a continuous function, $x_0$ is the initial state, $u(t) \in U \subset \mathbb{R}^m$ is the control input and $U$ is a compact set of admissible controls. 
Consider an infinite-horizon cost functional defined as follows \begin{equation}\label{eq:costFunc} J_{x_0} (u) = \int\limits_0^\infty g(x(t), u(t)) e^{-\lambda t} dt, \end{equation} \noindent where $g:\mathbb{R}^n \times U \rightarrow \mathbb{R}$ is a continuous cost function. The discount factor $\lambda$ is a positive real constant. Following~\cite{bardi2008optimal}, we assume that there exist positive real constants $L_f$, $L_g$, $C_f$, $C_g$ such that, $\forall x_1, x_2 \in \mathbb{R}^n$, $\forall u \in U$, \begin{align*} | f(x_1, u) - f(x_2, u) | \leq L_f | x_1 - x_2 |,\qquad & \left\|f(x_1, u)\right\|_\infty \leq C_f,\\ | g(x_1, u) - g(x_2, u) | \leq L_g | x_1 - x_2 |,\qquad & \left\|g(x_1, u)\right\|_\infty \leq C_g. \end{align*} Define the value function $v: \mathbb{R}^n \to \mathbb{R}$ as \[ v(x_0) = \inf_{u \in U} J_{x_0}(u). \] \noindent As shown in \cite{bardi2008optimal}, the value function $v$ is the unique viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation: \begin{equation}\label{eq:HJ} \lambda v(x) + \sup_{u \in U}\{- \nabla v(x) f(x, u) - g(x, u)\} = 0,\quad x \in \mathbb{R}^n, \end{equation} \noindent where $\nabla v$ denotes the gradient of $v$. In general, a closed-form solution of the partial differential equation~\eqref{eq:HJ} does not exist. Various numerical procedures have been developed to compute approximate solutions, such as in~\cite{4554208}, \cite{bardi2008optimal}, \cite{6328288}, \cite{Wang01062000}. In particular,~\cite{bardi2008optimal} presents an approximation scheme based on a finite approximation of state and control spaces and a discretization in time. Roughly speaking, in~\eqref{eq:HJ} one can approximate $\nabla v(x) f(x,u) \simeq h^{-1} (v(x + h f(x,u)) - v(x))$, where $h$ is a small positive real number that represents an integration time. 
In this way,~\eqref{eq:HJ} becomes \begin{equation*} (1 + \lambda h) v(x) = \min_{u \in U}\{ v(x + h f(x,u)) + h g(x,u) \}, \ x \in \mathbb{R}^n, \end{equation*} \noindent and, by approximating $(1 + \lambda h)^{-1} \simeq (1 - \lambda h)$ and $(1 + \lambda h)^{-1} h \simeq h$, one arrives at the following HJB equation in discrete time \begin{equation}\label{eq:HJtime} v_{h}(x) = \min_{u \in U}\left\{(1-\lambda h)v_{h}(x+hf(x,u)) + hg(x,u) \right\},\ x \in \mathbb{R}^n. \end{equation} \noindent For a more rigorous derivation of~\eqref{eq:HJtime}, again, see~\cite{bardi2008optimal}. A triangulation is computed on a finite set of vertices $\mathcal{T} = \{x_i\}_{i \in \mathcal{V}} \subset \mathbb{R}^n$, with $\mathcal{V} \subseteq \mathbb{N}$ and $|\mathcal{V}| = N$. Evaluating~\eqref{eq:HJtime} at $x \in \mathcal{T}$, we obtain \begin{align} \label{eq:HJtimespace} v_h(x_i) = \min_{u \in U}\left\{(1 - \lambda h) v_h(x_i + hf(x_i, u)) + hg(x_i, u) \right\}, \ i \in \mathcal{V}. \end{align} \noindent Note the dependence of the value function on the choice of the integration step $h$. Using the triangulation, function $v$ can be approximated by an affine function of the finite set of variables $v_h(x_i)$, with $i \in \{1, \ldots, N\}$. Theorem~2.1 of Appendix~A of~\cite{bardi2008optimal} shows that, if $\lambda > L_f$ and $h \in \left(0,\frac{1}{\lambda}\right]$, system~\eqref{eq:HJtimespace} has a unique solution, which converges uniformly to the solution of~\eqref{eq:HJ} as $h$, $d$ and $\frac{d}{h}$ tend to $0$, where $d$ is the maximum diameter of the simplices used in the triangulation. Note that, for these convergence results to apply, $\lambda$ must be chosen large enough, since it is bounded from below by $L_f$.
To further simplify~\eqref{eq:HJtimespace}, it is possible to discretize the control space, substituting $U$ with a finite set of controls $\{u_\ell\}_{\ell \in \mathcal{L}}$, so that we can replace~\eqref{eq:HJtimespace} with \begin{equation} \label{eq:HJtimespacecontols} \begin{aligned} v_h(x_i) = \min_{\ell \in \mathcal{L}}\left\{(1-\lambda h) v_h(x_i + hf(x_i, u_\ell)) + hg(x_i, u_\ell) \right\}, \ i \in \mathcal{V}. \end{aligned} \end{equation} Figure~\ref{fig:triang} illustrates a step of the construction of problem~\eqref{eq:HJtimespacecontols}. Namely, for each node $x_i$ of the triangulation and each value of the control $u_\ell$, the end point $x_i + h f(x_i,u_\ell)$ of the Euler approximation of the solution of~\eqref{eq:controlProblem} from the initial state $x_i$ is computed. The value function at these end points is given by a convex combination of its values at the triangulation vertices. \begin{figure}[!ht] \centering \includegraphics[width=0.5\columnwidth]{Figure/triangulation.eps} \caption{Approximation of the HJB equation on a triangulation with four controls.} \label{fig:triang} \end{figure} Set vector $w:= [w_1,\ldots,w_N]^T={[v_h(x_1), v_h(x_2), \ldots, v_h(x_N)]}^T$, so that $w \in \mathbb{R}^N$ represents the value function on the grid points. Note that, for each $x_i,u_\ell$, the right-hand side of~\eqref{eq:HJtimespacecontols} is affine with respect to $w$, so that Problem~\eqref{eq:HJtimespacecontols} can be rewritten in the form \[ \begin{aligned} & \max_w \sum_{i} w_i\\ \textrm{subject to }\ & 0 \leq w \leq \underset{\ell \in \mathcal{L}}{\glb} \left\{ A_\ell w + b_\ell \right\},\ w \leq \frac{1}{\lambda}\,, \end{aligned} \] where, for $\ell \in \mathcal{L}$, $A_\ell \in \mathbb{R}^{N \times N}$ are suitable nonnegative matrices and $b_\ell \in \mathbb{R}^{N}$ are suitable nonnegative vectors.
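The fixed-point iteration behind~\eqref{eq:HJtimespacecontols} can be sketched numerically. The following Python snippet is only an illustration, with dynamics $f(x,u)=u$, cost $g(x,u)=x^2$, a uniform 1-D grid in place of a genuine triangulation, and piecewise-linear interpolation in place of barycentric weights; none of these choices come from the paper.

```python
import numpy as np

# Illustrative value iteration for the discretized HJB equation:
# v(x_i) = min_l { (1 - lam*h) v(x_i + h f(x_i, u_l)) + h g(x_i, u_l) }.
lam, h = 1.0, 0.1                      # discount rate and integration step
grid = np.linspace(-1.0, 1.0, 41)      # vertices x_i (1-D "triangulation")
controls = [-1.0, 0.0, 1.0]            # finite control set {u_l}

def f(x, u): return u                  # dynamics (illustrative)
def g(x, u): return x**2               # running cost (illustrative)

v = np.zeros_like(grid)
for _ in range(500):                   # fixed-point iteration
    v_new = np.full_like(v, np.inf)
    for u in controls:
        # Euler end points, clipped to the grid; v there by interpolation.
        x_next = np.clip(grid + h * f(grid, u), grid[0], grid[-1])
        v_u = (1 - lam * h) * np.interp(x_next, grid, v) + h * g(grid, u)
        v_new = np.minimum(v_new, v_u)
    if np.max(np.abs(v_new - v)) < 1e-10:
        v = v_new
        break
    v = v_new
# v now approximates the value function on the grid; it vanishes at x = 0,
# where the state can be kept at zero cost with u = 0.
```

Since the iteration map is a contraction with factor $1-\lambda h = 0.9$, the loop converges geometrically.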
Hence, Problem~\eqref{eq:HJtimespacecontols} belongs to class~\eqref{eqn_prob_class_lin}. Moreover, observe that if $h$ is sufficiently small, matrices ${\left\{ A_\ell \right\}}_{\ell \in \mathcal{L}}$ are diagonally dominant. \subsection{Statement of Contribution} The main contributions of the paper are the following: \begin{itemize} \item We develop a new procedure (Algorithm~\ref{alg:eps_sol}) for the solution of Problem~\eqref{eqn_prob_class} and a more specific one (Algorithm~\ref{alg:consensus}) for its subclass~\eqref{eqn_prob_class_lin}. We prove the correctness of these solution methods. \item With numerical experiments, we show that the proposed algorithms outperform generic commercial solvers in the solution of linear problem~\eqref{eqn_prob_class_lin}. \end{itemize} \subsection{Notation}\label{subsec:notation} The set of nonnegative real numbers is denoted by $\mathbb{R}_{+} := [0, +\infty)$ and $\underline{0}$ denotes the zero vector of $\mathbb{R}^n$. Given $n, m \in \mathbb{N}$, let $x \in \mathbb{R}^n$ and $A \in \mathbb{R}^{n \times m}$. For $i \in \{ 1, \ldots, n \}$, we denote the $i$-th component of $x$ by ${[x]}_i$ and the $i$-th row of $A$ by ${[A]}_{i*}$; further, for $j \in \{ 1, \ldots, m \}$, we denote the $j$-th column of $A$ by ${[A]}_{*j}$ and the $ij$-th element of $A$ by ${[A]}_{ij}$. Function $\left\| \cdot \right\|_\infty: \mathbb{R}^n \rightarrow \mathbb{R}_+$ is the infinity (maximum) norm on $\mathbb{R}^n$ (i.e., $\forall \ x \in \mathbb{R}^n \ \left\| x \right\|_\infty = \max\limits_{i \in \{1, \ldots, n\}}{| {[x]}_i |}$); $\left\| \cdot \right\|_\infty$ is also used to denote the induced matrix norm. Given a finite set $S$, the cardinality of $S$ is denoted by $|S|$, the power set of $S$ is denoted by $\wp(S)$ and symbol $\varnothing$ denotes the empty set.
Consider the binary relation $\leq$ defined on $\mathbb{R}^n$ as follows \[ \forall x, y \in \mathbb{R}^{n}\ (x \leq y\ \Longleftrightarrow\ y - x \in \mathbb{R}_+^n). \] It is easy to verify that $\leq$ is a \textit{partial order} on $\mathbb{R}^n$. Finally, given a nonempty set $\mathcal{V}$, let us define a priority queue $Q$ as a finite subset of $\mathcal{Q} := \mathcal{V} \times \mathbb{R}$ such that, if $(v, q) \in Q$, then no other element $(\bar v, \bar q) \in Q$ satisfies $\bar v = v$. Let us also define two operations on priority queues. The first is $\text{Enqueue}: \wp(\mathcal{Q}) \times \mathcal{Q} \rightarrow \wp(\mathcal{Q})$ which, given $Q \in \wp(\mathcal{Q})$ and $(v, q) \in \mathcal{Q}$, adds $(v, q)$ to the priority queue $Q$ and removes any other element of the form $(v, p)$, with $p < q$, if previously present; if $Q$ already contains an element of the form $(v, p)$, with $p \geq q$, then $Q$ is left unchanged. The second operation is $\text{Dequeue}: \wp(\mathcal{Q}) \rightarrow \wp(\mathcal{Q}) \times \mathcal{V}$, which extracts from a priority queue $Q$ the pair $(v, q)$ with highest priority (i.e., it extracts $(v, q) \in Q$ such that $\forall (\bar v, \bar q) \in Q$, $q \geq \bar q$) and returns the updated queue together with element $v$. \section{Characterization of Problem~\eqref{eqn_prob_class}} \label{sec:prob_class} In this section, we consider Problem~\eqref{eqn_prob_class} with the additional assumption $g(a) \geq a$, which guarantees that the feasible set of Problem~\eqref{eqn_prob_class} \[ \Sigma=\{x \in \mathbb{R}^n: a \leq x \leq g(x)\}\, \] is non-empty. For any $\Gamma \subset \Sigma$, define $\bigvee \Gamma$ as the smallest $x \in \Sigma$, if it exists, such that $(\forall y \in \Gamma)\, x \geq y$. We call $\bigvee \Gamma$ the \emph{least upper bound} of $\Gamma$. Note that $\bigvee \varnothing = a$. The following proposition shows that $\bigvee \Gamma$ exists.
\begin{prpstn} \label{prop_closure} For any $\Gamma \subset \Sigma$, $\bigvee \Gamma$ exists. \end{prpstn} \begin{proof} We first prove that, if $x,y \in \Sigma$, then $x \vee y \in \Sigma$ (recall that $\vee$ denotes the component-wise maximum). It is obvious that $x \vee y \geq a$. Thus, we only need to prove that, for each $j=1,\ldots,n$, $[x \vee y]_j \leq g_j(x \vee y)$. To see this, let us assume, w.l.o.g., that $[x]_j \leq [y]_j$. Since $y \in \Sigma$, we have $[y]_j \leq g_j(y)$. Moreover, $g_j(y) \leq g_j(x \vee y)$ since $g_j$ is monotone non-decreasing, so that $[x \vee y]_j \leq g_j(x \vee y)$, as we wanted to prove. Set $\Sigma$ is closed, since it is defined by non-strict inequalities involving continuous functions, and $\Sigma$ is bounded by assumption, hence $\Sigma$ is compact. Set $x^+=\bigvee \Sigma$ and note that $x^+ \leq U$, since $(\forall x \in \Sigma)\ x \leq U$, where $U$ is defined in~\eqref{eqn_for_f}. There exists a sequence $x: \mathbb{N} \to \Sigma$ such that $\lim_{k \to \infty} x(k)=x^+$. Namely, for any $k >0$, choose $x^{(1)}_k,\ldots,x^{(n)}_k \in \Sigma$ such that $[x^+-x^{(i)}_k]_i< k^{-1}$ and set $x(k) =\bigvee \{x^{(1)}_k,\ldots,x^{(n)}_k\}$. Being compact, $\Sigma$ is also sequentially compact, hence $x^+ \in \Sigma$. \end{proof} Similarly, define $\bigwedge \Gamma$ as the largest $x$, if it exists, such that $(\forall y \in \Gamma)\ x \leq y$; we call $\bigwedge \Gamma$ the \emph{greatest lower bound} of $\Gamma$. For $x,y \in \Sigma$, note that $x \vee y = \bigvee \{x,y\}$ and $x \wedge y = \bigwedge \{x,y\}$. The following proposition characterizes set $\Sigma$ with respect to operations $\vee$, $\wedge$. In particular, it shows that the component-wise minimum and maximum of each subset of $\Sigma$ belong to $\Sigma$. \begin{prpstn} Set $\Sigma$ with operations $\vee,\wedge$ defined above is a complete lattice.
\end{prpstn} \begin{proof} It is a consequence of the dual of Theorem~2.31 of~\cite{davey2002introduction}. Indeed, $\Sigma$ has a bottom element ($a$) and $\bigvee \Gamma$ exists for any non-empty $\Gamma \subset \Sigma$ by Proposition~\ref{prop_closure}. \end{proof} A consequence of the previous proposition is that $\bigwedge \Gamma$ also exists. The following proposition shows that the least upper bound $x^+$ of $\Sigma$ is a fixed point of $g$ and corresponds to an optimal solution of Problem~\eqref{eqn_prob_class}. \begin{prpstn} \label{prop_max_x} Set \[ x^+=\bigvee \Sigma\,, \] then i) \begin{equation} \label{eqn_fix_point} x^+=g(x^+) \end{equation} ii) $x^+$ is an optimal solution of Problem~\eqref{eqn_prob_class}. \end{prpstn} \begin{proof} i) It is a consequence of the Knaster-Tarski Theorem (see Theorem~2.35 of~\cite{davey2002introduction}), since $(\Sigma,\wedge,\vee)$ is a complete lattice and $g$ is an order-preserving map. ii) By contradiction, assume that $x^+$ is not optimal. This implies that there exists $x \in \Sigma$ such that $f(x) > f(x^+)$. Being $f$ monotonic increasing, this implies that there exists $i \in \{1,\ldots,n\}$ such that $[x]_i > [x^+]_i$, which contradicts $x^+ = \bigvee \Sigma$. \end{proof} \begin{rmrk} The previous proposition shows that the actual form of function $f$ is immaterial to the solution of Problem~\eqref{eqn_prob_class}, since the optimal solution is $x^+$ for any strictly monotonic increasing objective function $f$. \end{rmrk} The following defines a relaxed solution of Problem~\eqref{eqn_prob_class}, obtained by allowing an error on fixed-point condition~\eqref{eqn_fix_point}. \begin{dfntn} Let $\epsilon$ be a positive real constant. Vector $x$ is an $\epsilon$-solution of~\eqref{eqn_prob_class} if \[ x \geq a, \quad \|x -g(x)\|_\infty < \epsilon\,.
\] \end{dfntn} The following proposition presents a sufficient condition that guarantees that a sequence of $\epsilon$-solutions approaches $x^+$ as $\epsilon$ converges to $0$. \begin{prpstn} If there exists $\delta>0$ such that \begin{equation} \label{eqn_cond_xy} (\forall x,y \geq a) \, \frac{\|g(x)-g(y)\|_\infty}{\|x-y\|_\infty} \notin [1-\delta,1+\delta] \end{equation} then there exists a constant $M$ such that, for any $\epsilon>0$, if $x \in \mathbb{R}^n$ is an $\epsilon$-solution of~\eqref{eqn_prob_class}, then \[ \|x-x^+\|_\infty \leq M \epsilon\,. \] \end{prpstn} \begin{proof} Let $x$ be an $\epsilon$-solution. By Proposition~\ref{prop_max_x} we have that \[ x-x^+= g(x) -g(x^+) +\xi\,, \] where $\|\xi\|_\infty \leq \epsilon$. By assumption~\eqref{eqn_cond_xy}, either $\|g(x) - g(x^+)\|_\infty > (1 + \delta) \|x - x^+\|_\infty$ or $\|g(x) - g(x^+)\|_\infty < (1 - \delta) \|x - x^+\|_\infty$. In the first case, \[ \|x-x^+\|_{\infty} \geq -\|\xi\|_{\infty} + (1+\delta) \|x-x^+\|_{\infty}\,, \] in the second case, \[ \|x-x^+\|_{\infty} \leq \|\xi\|_{\infty} + (1-\delta) \|x-x^+\|_{\infty}\,. \] In both cases, it follows that \[ \|x-x^+\|_{\infty} \leq \delta^{-1} \|\xi\|_{\infty} \leq \delta^{-1} \epsilon\,. \] \end{proof} \begin{rmrk} If condition~\eqref{eqn_cond_xy} is not satisfied, an $\epsilon$-solution of~\eqref{eqn_prob_class} can be far from the optimal solution $x^+$. Figure~\ref{fig:hypothesis} refers to a simple instance of Problem~\eqref{eqn_prob_class} with $x \in \mathbb{R}$, so that $g$ is a scalar function. The optimal value $x^+$ corresponds to the maximum value of $x$ such that $x\leq g(x)$. The figure also shows $\tilde x$, which is an $\epsilon$-solution for the value of $\epsilon$ depicted in the figure. In this case, there is a large separation between $x^+$ and $\tilde x$. Note that function $g$ does not satisfy~\eqref{eqn_cond_xy} in this case. \end{rmrk} \begin{figure}[h!]
\centering \psfrag{x}{$x$} \psfrag{g}{$g(x)$} \psfrag{t}{$x$} \psfrag{e}{$\epsilon$} \psfrag{w}{$x^+$} \psfrag{y}{$\tilde x$} \includegraphics[width=.6\textwidth]{Figure/hypothesis_prop} \caption{Representation of an instance of Problem~\eqref{eqn_prob_class} in which condition~\eqref{eqn_cond_xy} does not hold.} \label{fig:hypothesis} \end{figure} \begin{rmrk}\label{remark:fixed_point} If $g$ is a contraction, namely, if there exists $\gamma \in [0, 1)$ such that $\forall x, y \in \mathbb{R}^n\ \| g(x) - g(y) \|_\infty \leq \gamma \| x - y \|_\infty$ (a subcase of~\eqref{eqn_cond_xy}), then $x^+$ can be found with a standard fixed point iteration \begin{equation} \label{PROB} \begin{cases} x(k+1) = g(x(k))\\ x(0) = x_0, \end{cases} \end{equation} and, given $\epsilon > 0$, an $\epsilon$-solution $x$ of~\eqref{eqn_prob_class} can be computed with Algorithm~\ref{alg:fixed_point}. This algorithm, given an input tolerance $\epsilon$, function $g$ and an initial solution $x_0 \in \mathbb{R}^n$, repeats the fixed point iteration $x=g(x)$ until $x$ satisfies the definition of $\epsilon$-solution, that is, until the infinity norm of the error vector $\xi=x-g(x)$ is smaller than the assigned tolerance $\epsilon$. \end{rmrk} \begin{algorithm}[h!] \caption{Fixed Point Iteration.} \label{alg:fixed_point} \begin{algorithmic}[1] \STATE INPUT: initial vector $x_0$, tolerance $\epsilon$, function $g$. \STATE OUTPUT: vector $x$. \STATE \STATE $x := x_0$ \REPEAT \STATE $x_{\textrm{old}} := x$ \STATE $x := g(x)$ \label{step_fix_point} \STATE $\xi := x_{\textrm{old}}-x$ \UNTIL{$\|\xi\|_\infty \leq \epsilon$} \STATE\RETURN{$x$} \end{algorithmic} \end{algorithm} The special structure of Problem~\eqref{eqn_prob_class} leads to a solution algorithm that is much more efficient than Algorithm~\ref{alg:fixed_point} in terms of the overall number of elementary operations.
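A direct transcription of Algorithm~\ref{alg:fixed_point} in Python is straightforward; the map $g$ below is an illustrative contraction, not one arising from the paper's problem classes.

```python
import numpy as np

# Transcription of the fixed-point iteration (Algorithm 1): repeat x := g(x)
# until the infinity norm of the error xi = x_old - x drops below eps.
def fixed_point_iteration(x0, eps, g):
    x = np.asarray(x0, dtype=float)
    while True:
        x_old = x
        x = g(x)                                # x := g(x)
        xi = x_old - x                          # error vector
        if np.linalg.norm(xi, np.inf) <= eps:
            return x

# Illustrative contraction with factor gamma = 0.5: g(x) = 0.5 x + 1,
# whose unique fixed point is x = 2 component-wise.
x = fixed_point_iteration(np.zeros(3), 1e-8, lambda x: 0.5 * x + 1.0)
```

Since $\gamma = 0.5$, the error shrinks by half at every pass, and the returned vector lies within a small multiple of the tolerance of the fixed point.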
As a first step, we associate a graph to constraint $g$ of Problem~\eqref{eqn_prob_class}. \subsection{Graph associated to Problem~\eqref{eqn_prob_class}} It is natural to associate to Problem~\eqref{eqn_prob_class} a directed graph $\mathbb{G} = (V,E)$, whose nodes correspond to the $n$ components of $x$ and of constraint $g$, namely $V=\mathcal{V} \cup \mathcal{C}$, with $\mathcal{V}=\{v_1,\ldots,v_n\}$, $\mathcal{C}=\{c_1,\ldots,c_n\}$, where $v_i$ is the node associated to $[x]_i$ and $c_i$ is the node associated to $g_i$. The edge set $E \subseteq V \times V$ is defined according to the following rules: \begin{itemize} \item for $i=1,\ldots,n$, there is a directed edge from $c_i$ to $v_i$, \item for $i=1,\ldots,n$, $j=1,\ldots,n$, there is a directed edge from $v_i$ to $c_j$ if $g_j$ depends on $[x]_i$, \item no other edges are present in $E$. \end{itemize} For instance, for $x \in \mathbb{R}^3$, consider problem \[ \begin{aligned} & \max_x f(x)&\\ \textrm{subject to }\ & 0 \leq x_1 \leq g_1(x_2,x_3) \\ &0 \leq x_2 \leq g_2(x_1) \\ &0 \leq x_3 \leq g_3(x_1,x_2).
\end{aligned} \] The associated graph, with $\mathcal{V}=\{v_1,v_2,v_3\}$, $\mathcal{C}=\{c_1,c_2,c_3\}$, is given by \begin{center} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm] \tikzstyle{every state}=[fill=none,draw=black,text=black, minimum size = 29pt] \node[state] (V1) {$v_1$}; \node[state] (V2) [right of=V1] {$v_2$}; \node[state] (V3) [right of=V2] {$v_3$}; \node[state] (C1) [above of=V1] {$c_1$}; \node[state] (C2) [right of=C1] {$c_2$}; \node[state] (C3) [right of=C2] {$c_3$}; \path (C1) edge (V1) (C2) edge (V2) (C3) edge (V3) (V2) edge (C1) (V3) edge (C1) (V1) edge (C2) (V1) edge (C3) (V2) edge (C3); \end{tikzpicture} \end{center} We define the set of neighbors of node $i \in \mathcal{V}$ as \[ \mathcal{N}(i) := \left\{ j \in \mathcal{V}\ |\ \exists c \in \mathcal{C} : (i, c), (c, j) \in E \right\}, \] namely, a node $j \in \mathcal{V}$ is a neighbor of $i$ if there exists a directed path of length two that connects $i$ to $j$. For instance, in the previous example, $v_1 \in \mathcal{N}(v_3)$ and $v_2 \notin \mathcal{N}(v_3)$. In other words, $v_j \in \mathcal{N}(v_i)$ if constraint $g_j$ depends on $[x]_i$. \subsection{Selective update algorithm for Problem~\eqref{eqn_prob_class}} In Algorithm~\ref{alg:fixed_point}, each time line~\ref{step_fix_point} is evaluated, the value of all components of $x$ is updated according to the fixed point iteration $x=g(x)$, even though many of them may remain unchanged. We now present a more efficient procedure for computing an $\epsilon$-solution of~\eqref{eqn_prob_class}, in which we update only the value of those components of $x$ that are known to undergo a variation. The algorithm is composed of two phases, an initialization and a main loop. In the \emph{initialization}, $x$ is set to an initial value $x_0$ that is known to satisfy $x_0 \geq x^+$.
Then the fixed point error $\xi=x-g(x)$ is computed and all indexes $i=1,\ldots,n$ for which $[\xi]_i > \epsilon$ are inserted into a priority queue, ordered with respect to a policy that will be discussed later. In this way, at the end of the initialization, the priority queue contains all indexes $i$ for which the corresponding fixed point error $[\xi]_i$ exceeds $\epsilon$. Then, the \emph{main loop} is repeated until the priority queue is empty. First, we extract from the priority queue the index $i$ with the highest priority. Then, we update its value by setting $[x]_i=g_i(x)$ and update the fixed point error $\xi$ by setting $[\xi]_j=[x]_j-g_j(x)$ for all variables $j \in \mathcal{N}(i)$. This step is the key point of the algorithm: we recompute the fixed point error \emph{only} for those variables that correspond to components of $g$ that we know to have been affected by the change in variable $[x]_i$. Finally, as in the initialization, all variables $j \in \mathcal{N}(i)$ such that the updated fixed-point error satisfies $[\xi]_j > \epsilon$ are placed into the priority queue. The order in which nodes are actually processed depends on the ordering of the priority queue. The choice of this ordering turns out to be critical for the computational cost of the algorithm, as can be seen in the numerical experiments in Section~\ref{subsec:simulations}, where various orderings for the priority queue are introduced and the ordering choice is discussed in more detail. The procedure stops once the priority queue becomes empty, that is, once none of the updated nodes undergoes a significant variation. As we will show, the correctness of the algorithm is independent of the choice of the ordering of the priority queue. We may think of graph $\mathbb{G}$ as a communication network in which each node transmits its updated value to its neighbors, whilst all other nodes maintain their values unchanged.
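The two phases just described can be sketched in Python with a binary heap as priority queue. This is only a sketch: the priority policy is left generic in the paper, and here, purely as one illustrative choice, a node's priority is its current error $[\xi]_i$; the linear map used in the example is also illustrative.

```python
import heapq
import numpy as np

# Selective-update sketch for a_i <= x_i <= g_i(x). g is a list of component
# functions g[j](x); neighbors[i] lists the j such that g_j depends on x_i.
def selective_update(x0, eps, g, neighbors, a):
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    xi = np.array([x[j] - g[j](x) for j in range(n)])   # fixed-point error
    heap = [(-xi[i], i) for i in range(n) if xi[i] > eps]  # initialization
    heapq.heapify(heap)
    while heap:                          # main loop
        _, i = heapq.heappop(heap)
        if xi[i] <= eps:                 # stale heap entry: skip it
            continue
        x[i] -= xi[i]                    # equivalent to [x]_i := g_i(x)
        xi[i] = 0.0
        for j in neighbors[i]:           # recompute only the affected errors
            xi[j] = x[j] - g[j](x)
            if xi[j] > eps:
                heapq.heappush(heap, (-xi[j], j))
    return x, bool(np.all(x >= a))

# Illustrative linear instance g(x) = A x + b, with zero diagonal so that
# g_i does not depend on [x]_i, and x0 chosen with x0 >= x+ and x0 >= g(x0).
A = np.array([[0.0, 0.2, 0.1], [0.3, 0.0, 0.1], [0.1, 0.2, 0.0]])
b = np.array([1.0, 1.0, 1.0])
g = [lambda x, j=j: A[j] @ x + b[j] for j in range(3)]
neighbors = [list(np.flatnonzero(A[:, i])) for i in range(3)]
x, feasible = selective_update(np.full(3, 10.0), 1e-8, g, neighbors, np.zeros(3))
```

A plain `heapq` does not support the Enqueue semantics of Section~\ref{subsec:notation} directly, so duplicate entries are tolerated and skipped lazily when their error is already below tolerance; the processing order, not the result, depends on this choice.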
These considerations lead to Algorithm~\ref{alg:eps_sol}. This algorithm takes as input an initial vector $x_0 \in \mathbb{R}^n$, a tolerance $\epsilon$, function $g$ and the lower bound $a$. In lines~\ref{alg2_initialization_begin} to~\ref{alg2_initialization_end}, it initializes the solution vector $x$, the priority queue $Q$ and the error vector $\xi$. In lines~\ref{alg_first_iter_begin} to~\ref{alg_first_iter_end}, it adds to the priority queue those component nodes whose corresponding component of the error vector $\xi$ is greater than tolerance $\epsilon$. The priority with which a node is added to the queue will be discussed in Section~\ref{subsec:simulations}; here, symbol * denotes a generic choice of priority. Lines~\ref{alg_main_loop_begin} to~\ref{alg_main_loop_end} constitute the main loop. While the queue is not empty, the component node $i$ with highest priority is extracted from the queue and its value is updated. Then, each component node $j$ which is a neighbor of $i$ is examined: the variation of node $j$ is updated and, if it is greater than tolerance $\epsilon$, neighbor $j$ is added to the priority queue. After this, the component of $\xi$ corresponding to node $i$ is set to 0. Finally, once the queue becomes empty, the feasibility of solution $x$ is checked and the result is returned along with vector $x$. We remark that Algorithm~\ref{alg:eps_sol} can be seen as a generalization of the algorithm in~\cite{Cabassi2018}, where a specific priority queue (namely, one based on the values of the nodes) was employed. Also note that Algorithm~\ref{alg:eps_sol} can be seen as a bound-tightening technique (see, e.g., \cite{Belotti2009}) which, however, for this specific class of problems is able to return the optimal solution. \begin{algorithm}[h!]
\caption{Solution algorithm for Problem~\eqref{eqn_prob_class}.} \label{alg:eps_sol} \begin{algorithmic}[1] \STATE INPUT: initial vector $x_0$, tolerance $\epsilon$, function $g$, vector $a$. \STATE OUTPUT: vector $x$, bool $feasible$. \STATE \STATE $x := x_0$ \label{alg2_initialization_begin} \STATE $Q := \varnothing$ \STATE $\xi := x - g(x)$ \label{alg2_initialization_end} \STATE \FOR{$i = 1, \ldots, n$} \label{alg_first_iter_begin} \IF {$[\xi]_i>\epsilon$ } \STATE $Q := \textrm{Enqueue}(Q,(i,*))$ \label{alg2_enqueue_1} \ENDIF \ENDFOR \label{alg_first_iter_end} \STATE \WHILE{$Q \neq \varnothing$} \label{alg_main_loop_begin} \STATE $(Q,i) := \textrm{Dequeue}(Q)$ \STATE $[x]_i := [x]_i - [\xi]_i$ \label{alg_change_x} \FOR{all $j \in \mathcal{N}(i)$} \STATE $[\xi]_j := [x]_j - g_j(x)$ \label{alg_change_xi_2} \IF {$[\xi]_j > \epsilon$} \STATE $Q := \textrm{Enqueue}(Q,(j,*))$ \label{alg2_enqueue_2} \ENDIF \ENDFOR \STATE $[\xi]_i := 0$ \label{alg_change_xi} \ENDWHILE \label{alg_main_loop_end} \STATE \STATE $feasible := x \geq a$ \STATE \RETURN{$x,feasible$} \end{algorithmic} \end{algorithm} The following proposition characterizes Algorithm~\ref{alg:eps_sol} and proves its correctness. \begin{prpstn} Assume that $x_0 \geq x^+$ and $x_0 \geq g(x_0)$, then Algorithm~\ref{alg:eps_sol} satisfies the following properties: i) At all times, $x \geq x^+$ and $x \geq g(x)$. ii) At every evaluation of line~\ref{alg_main_loop_begin}, $x=g(x)+\xi$ and $\xi \geq 0$. iii) The algorithm terminates in a finite number of steps for any $\epsilon > 0$. iv) If Problem~\eqref{eqn_prob_class} is feasible, output ``feasible'' is true. v) If output ``feasible'' is true, then $x$ is an $\epsilon$-solution of Problem~\eqref{eqn_prob_class}. \end{prpstn} \begin{proof} i) We prove both properties by induction. Note that $x$ is updated only at line~\ref{alg_change_x} and that line~\ref{alg_change_x} is equivalent to $[x]_i:=g_i(x)$.
For $m \in \mathbb{N}$, let $x(m)$ be the value of $x$ after the $m$-th evaluation of line~\ref{alg_change_x}. Note that $x(0) = x_0 \geq x^+$ and that $x$ is changed only at line~\ref{alg_change_x}. Then, $[x(m)]_i = g_i(x(m-1)) \geq g_i(x^+) = [x^+]_i$, where we have used the inductive hypothesis $x(m-1) \geq x^+$ and the fact that $g(x^+)=x^+$ (by Proposition~\ref{prop_max_x}). Further, note that $g(x(0)) = g(x_0) \leq x_0$ by assumption. Moreover, $[x(m)]_i = g_i(x(m-1)) = g_i(x(m))$, since $g_i$ does not depend on $[x]_i$ by assumption and vectors $x(m)$, $x(m-1)$ differ only in the $i$-th component. By the induction hypothesis, $[x(m)]_i = g_i(x(m-1)) \leq [x(m-1)]_i$, which implies that $x(m) \leq x(m-1)$. Thus, in view of the monotonicity of $g$ and of the inductive assumption, for $k \neq i$, $[g(x(m))]_k = g_k(x(m)) \leq g_k(x(m-1)) \leq [x(m-1)]_k = [x(m)]_k$. ii) Condition $x=g(x)+\xi$ is satisfied after evaluating line~\ref{alg2_initialization_end}. Moreover, after evaluating line~\ref{alg_change_xi}, $[x]_i=[g(x)]_i+[\xi]_i$ and all indices $j$ for which potentially $[x]_j \neq [g(x)]_j+[\xi]_j$ belong to set $\mathcal{N}(i)$. For these indices, line~\ref{alg_change_xi_2} re-enforces $[x]_j = [g(x)]_j+[\xi]_j$. The fact that $\xi\geq 0$ is a consequence of point i). iii) At each evaluation of line~\ref{alg_change_x}, the value of a component of $x$ is decreased by at least $\epsilon$. If the algorithm did not terminate, at some iteration we would have $x \ngeq x^+$, which is not possible by i). iv) If Problem~\eqref{eqn_prob_class} is feasible, then $x^+\geq a$ is its optimal solution. By point i), $x \geq x^+ \geq a$ and output ``feasible'' is true. v) When the algorithm terminates, $Q$ is empty, which implies that $\|x-g(x) \|_{\infty} \leq \epsilon$; if ``feasible'' is true, then also $x\geq a$ and $x$ is an $\epsilon$-solution.
\end{proof} \section{Characterization of Problem~\eqref{eqn_prob_class_lin}} \label{sec_lin_case} In this section, we consider Problem~\eqref{eqn_prob_class_lin} and propose a solution method that exploits its linear structure and is more efficient than Algorithm~\ref{alg:eps_sol}. First of all, we show that Problem~\eqref{eqn_prob_class_lin} belongs to class~\eqref{eqn_prob_class}. To this end, set \begin{equation}\label{eq:P_def} P_\ell := I - D_\ell, \end{equation} where $I \in \mathbb{R}^{n \times n}$ is the identity matrix and, for $\ell \in \mathcal{L}$, $D_\ell \in \mathbb{R}^{n \times n}$ is the diagonal matrix whose diagonal coincides with that of $A_\ell$. Note that here and in what follows we assume that all the diagonal entries of $A_\ell$ are less than 1. Indeed, for values larger than or equal to 1, the corresponding constraints are redundant and can be eliminated. The proof of the following proposition is in the appendix. \begin{prpstn}\label{Proposition:reformulation} Problem~\eqref{eqn_prob_class_lin} can be reformulated as a problem of class~\eqref{eqn_prob_class}. Namely, this is achieved by setting \begin{equation}\label{def:Ab_hat} \hat A_\ell := {P_\ell}^{-1}(A_\ell - D_\ell), \quad \hat b_\ell := {P_\ell}^{-1} b_\ell \end{equation} and $\hat g(x) = \underset{\ell \in \mathcal{L}}{\glb} \{\hat A_\ell x + \hat b_\ell\} \wedge U$. \end{prpstn} Then, we apply the results obtained for Problem~\eqref{eqn_prob_class} to Problem~\eqref{eqn_prob_class_lin}. The following proposition is a corollary of Proposition~\ref{prop_max_x}. \begin{prpstn} Problem~\eqref{eqn_prob_class_lin} is feasible and its optimal solution $x^+$ satisfies the two equations \begin{equation}\label{eq:problem_intro} x^+ =\underset{\ell \in \mathcal{L}}{\glb} \left\{ \hat A_\ell x^+ + \hat b_\ell \right\} \wedge U\,.
\end{equation} \begin{equation}\label{eq:fixed_point_ref} x^+ =\underset{\ell \in \mathcal{L}}{\glb} \left\{ A_\ell x^+ + b_\ell \right\} \wedge U\,. \end{equation} \end{prpstn} \begin{proof} Note that $g(0)= \underset{\ell \in \mathcal{L}}{\glb}\left\{ b_\ell \right\} \wedge U\geq 0$, which implies that $\Sigma \neq \varnothing$ and that Problem~\eqref{eqn_prob_class} is feasible. Then, by Proposition~\ref{prop_max_x}, its solution $x^+$ satisfies $x^+=g(x^+)$, which implies~\eqref{eq:problem_intro} and~\eqref{eq:fixed_point_ref}. \end{proof} The following result, needed below, can be found, e.g., in~\cite{heinonen2005lectures}. \begin{lmm}\label{lemma:lipschitz} Let $L \in \mathbb{R}_+$ and let $\{g_i: i \in I\}$, with $I$ a set of indices, be a family of functions $g_i: \mathbb{R}^n \rightarrow \mathbb{R}^n$ such that $\forall x, y \in \mathbb{R}^n$ \[ \| g_i(x) - g_i(y) \|_\infty \leq L \| x - y \|_\infty. \] Then, function $g(x) := \displaystyle\underset{i \in I}{\glb} \{ g_i(x) \}$ also satisfies $\forall x, y \in \mathbb{R}^n$ \[ \| g(x) - g(y) \|_\infty \leq L \| x - y \|_\infty. \] \end{lmm} The following proposition shows that, if the infinity norm of each matrix $A_\ell$ is less than 1, the right-hand side of~\eqref{eq:fixed_point_ref} is a contraction. \begin{prpstn} \label{prop_fix_point_1} Assume that there exists a real constant $\gamma \in [0,1)$ such that \begin{equation} \label{eqn_hyp_fix_point} \forall \ell \in \mathcal{L},\,\|A_{\ell}\|_\infty < \gamma, \end{equation} then function \begin{equation}\label{eq:fixed_point_ref2} \bar g(x) = \underset{\ell \in \mathcal{L}}{\glb} \left\{ A_\ell x + b_\ell \right\} \wedge U \end{equation} is a contraction in the infinity norm; in particular, $\forall x,y \in \mathbb{R}^n$, \begin{equation} \label{eqn_bound_contr1} \|\bar g(x)-\bar g(y)\|_\infty \leq \gamma \|x-y\|_\infty\,.
\end{equation} \end{prpstn} \begin{proof} Note that, for any $\ell \in \mathcal{L}$, function $h(x)=A_\ell x + b_\ell$ is a contraction; in fact, for any $x,y \in \mathbb{R}^n$, \[ \|h(x)-h(y)\|_{\infty} = \|A_\ell(x-y)\|_\infty \leq \gamma \|x-y\|_\infty\,. \] Then, the thesis is a consequence of Lemma~\ref{lemma:lipschitz}. \end{proof} The following result proves that, under the same assumptions, the right-hand side of~\eqref{eq:problem_intro} is also a contraction. The proof is in the appendix. \begin{prpstn} \label{thm_main} Assume that~\eqref{eqn_hyp_fix_point} holds and set \begin{align} \label{eqn_choice_hat} \hat A_\ell = {P_\ell}^{-1}(A_\ell - D_\ell)\quad \text{ and }\quad \hat b_\ell = {P_\ell}^{-1} b_\ell, \end{align} with $P_\ell$ and $D_\ell$ defined as in~\eqref{eq:P_def}. Let \begin{equation} \label{def:g_hat} \hat g(x) = \underset{\ell \in \mathcal{L}}{\glb} \{\hat A_\ell x + \hat b_\ell\} \wedge U, \end{equation} then $\hat g$ is a contraction in the infinity norm; in particular, $\forall x,y \in \mathbb{R}^n$, \begin{equation} \label{eqn_bound_contr2} \|\hat g(x) - \hat g(y)\|_\infty \leq \hat \gamma \|x-y\|_\infty\,, \end{equation} where \begin{equation}\label{def:hat_gamma} \hat\gamma := \max_{\substack{\ell \in \mathcal{L} \\ i \in \mathcal{V}}} \left\{\frac{\gamma - \left[D_\ell\right]_{ii}}{1 - \left[D_\ell\right]_{ii}}\right\}. \end{equation} Moreover, it holds that $\hat\gamma \leq \gamma$. \end{prpstn} Hence, in case~\eqref{eqn_hyp_fix_point} is satisfied, Problem~\eqref{eqn_prob_class_lin} can be solved by Algorithm~\ref{alg:fixed_point}, using either $g=\bar g$ in~\eqref{eq:fixed_point_ref2} or $g=\hat g$ in~\eqref{def:g_hat}. As we will show in Section~\ref{sec:convergence_speed_discussion}, the convergence is faster in the second case. Algorithm~\ref{alg:eps_sol} can be applied to Problem~\eqref{eqn_prob_class_lin}, since the latter is a subclass of~\eqref{eqn_prob_class}.
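The construction of~\eqref{eqn_choice_hat} and the bound $\hat\gamma \leq \gamma$ can be checked numerically. The matrices below are illustrative and do not come from the paper; they only satisfy the standing assumptions (nonnegative entries, diagonal entries less than 1).

```python
import numpy as np

# Compute hat A_l = P_l^{-1} (A_l - D_l) and hat b_l = P_l^{-1} b_l for
# illustrative data, and evaluate the contraction factor hat gamma.
A = [np.array([[0.3, 0.4], [0.2, 0.5]]),
     np.array([[0.5, 0.2], [0.1, 0.6]])]
b = [np.array([1.0, 1.0]), np.array([0.5, 2.0])]

gamma = max(np.linalg.norm(Al, np.inf) for Al in A)   # here gamma = 0.7

A_hat, b_hat = [], []
for Al, bl in zip(A, b):
    D = np.diag(np.diag(Al))                  # diagonal part D_l
    P = np.eye(2) - D                         # P_l = I - D_l (invertible,
                                              # since [A_l]_ii < 1)
    A_hat.append(np.linalg.solve(P, Al - D))  # hat A_l
    b_hat.append(np.linalg.solve(P, bl))      # hat b_l

# hat gamma from the formula (def:hat_gamma); here it is 4/7 < 0.7.
gamma_hat = max((gamma - Al[i, i]) / (1 - Al[i, i]) for Al in A for i in range(2))
```

Each $\hat A_\ell$ has zero diagonal, and its infinity norm is bounded by $\hat\gamma$, which is the source of the improved convergence rate.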
Anyway, the linear structure of Problem~\eqref{eqn_prob_class_lin} allows for a more efficient implementation, detailed in Algorithm~\ref{alg:consensus}. This algorithm takes as input an initial vector $x_0 \in \mathbb{R}^n$, a tolerance $\epsilon$, matrices $A_\ell$ and vectors $b_\ell$, for $\ell \in \mathcal{L}$, representing function $g$, and the lower bound $a$. It operates like Algorithm~\ref{alg:eps_sol}, but it optimizes the operation performed in line~\ref{alg_change_xi_2} of Algorithm~\ref{alg:eps_sol}. Lines~\ref{alg3_initial_xi_begin} to~\ref{alg3_initial_xi_end} initialize the error vector $\xi$; they correspond to line~\ref{alg2_initialization_end} of Algorithm~\ref{alg:eps_sol}. Lines~\ref{alg3_xi_update_begin} to~\ref{alg3_xi_update_end} are the equivalent of line~\ref{alg_change_xi_2} of Algorithm~\ref{alg:eps_sol}, in which the special structure of Problem~\eqref{eqn_prob_class_lin} is exploited in such a way that updating the $j$-th component of vector $\xi$ only involves the evaluation of $L$ scalar products and $L$ scalar sums, with $L = |\mathcal{L}|$, as opposed to (up to) $nL$ scalar products and $nL$ scalar sums of Algorithm~\ref{alg:eps_sol} applied to Problem~\eqref{eqn_prob_class_lin}. \begin{algorithm}[h!] \caption{Solution algorithm for Problem~\eqref{eqn_prob_class_lin}.} \label{alg:consensus} \begin{algorithmic}[1] \STATE INPUT: initial vector $x_0$, tolerance $\epsilon$, matrices $A_\ell$, vectors $b_\ell$ for $\ell \in \mathcal{L}$, vector $a$. \STATE OUTPUT: vector $x$, bool $feasible$.
\STATE $x := x_0$ \STATE $Q := \varnothing$ \STATE \FOR{all $\ell \in \mathcal{L}$} \label{alg3_initial_xi_begin} \STATE $\eta_\ell := A_\ell x + b_\ell$ \ENDFOR \STATE $\xi := x - \underset{\ell \in \mathcal{L}}{\bigwedge} \eta_\ell$ \label{alg3_initial_xi_end} \STATE \FOR{all $i \in \mathcal{V}$} \IF{($\left[ \xi \right]_i > \epsilon$)} \STATE $Q:= \textrm{Enqueue}\left(Q, \left(i, * \right)\right)$ \label{alg3_enqueue_1} \ENDIF \ENDFOR \STATE \WHILE{$Q \neq \varnothing$} \STATE $(Q, i) := \textrm{Dequeue}(Q)$ \STATE $[x]_i := [x]_i - [\xi]_i$ \FOR{all $j \in \mathcal{V} : i \in \mathcal{N}(j)$} \FOR{all $\ell \in \mathcal{L}$} \label{alg3_xi_update_begin} \STATE $\left[ \eta_\ell \right]_j := \left[ \eta_\ell \right]_j - \left[ A_\ell \right]_{ji} \cdot [\xi]_i$ \ENDFOR \STATE $[\xi]_j := [x]_j - \displaystyle\min_{\ell \in \mathcal{L}} \left[ \eta_\ell \right]_j$ \label{alg3_xi_update_end} \IF {$[\xi]_j > \epsilon$} \STATE $Q := \textrm{Enqueue}\left(Q, \left(j, * \right)\right)$ \label{alg3_enqueue_2} \ENDIF \ENDFOR \STATE $[\xi]_i := 0$ \ENDWHILE \STATE \STATE $feasible := x \geq a$ \STATE \RETURN{$x,feasible$} \end{algorithmic} \end{algorithm} \section{Convergence Speed Discussion}\label{sec:convergence_speed_discussion} In this section, we compare the convergence speed of various methods for solving Problem~\eqref{eqn_prob_class_lin}. First of all, note that Problem~\eqref{eqn_prob_class_lin} can be reformulated as the linear problem~\eqref{eqn_prob_class_proglin}; hence, it can be solved with any general-purpose method for linear problems. As we will show, the performance of such methods is poor, since they do not exploit the special structure of Problem~\eqref{eqn_prob_class_proglin}.
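Before turning to the comparison, the queue-driven update at the core of Algorithm~\ref{alg:consensus} can be sketched in Python as follows. This is a deliberately simplified variant: it recomputes the whole residual vector $\xi$ instead of updating the vectors $\eta_\ell$ incrementally, and all identifiers are ours.

```python
import heapq
import numpy as np

def queue_solve(A_list, b_list, x0, eps):
    """Simplified queue-driven scheme: repeatedly pick an index whose
    residual exceeds eps and lower x there by the residual."""
    x = np.asarray(x0, dtype=float).copy()

    def g(x):
        # g(x) = min over ell of (A_ell x + b_ell), componentwise
        return np.min([A @ x + b for A, b in zip(A_list, b_list)], axis=0)

    xi = x - g(x)                       # residual vector
    heap = [(-xi[i], i) for i in np.nonzero(xi > eps)[0]]
    heapq.heapify(heap)                 # min-heap on -xi: largest residual first
    while heap:
        _, i = heapq.heappop(heap)
        xi = x - g(x)                   # recompute residuals (entry may be stale)
        if xi[i] <= eps:
            continue
        x[i] -= xi[i]                   # update the i-th component
        xi = x - g(x)
        for j in np.nonzero(xi > eps)[0]:
            heapq.heappush(heap, (-xi[j], j))
    return x

# One affine map with contraction factor 0.5: the fixed point of
# x = 0.5 x + 1 is x = [2, 2].
x = queue_solve([0.5 * np.eye(2)], [np.ones(2)], np.full(2, 10.0), 1e-8)
```

On termination every component of the residual is below the tolerance, so $x$ lies within a small multiple of $\epsilon$ of the fixed point; the incremental $\eta_\ell$ bookkeeping of Algorithm~\ref{alg:consensus} avoids the full recomputation of $\xi$ done here.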
\subsection{Fixed point iterations} If hypothesis~\eqref{eqn_hyp_fix_point} is satisfied, as discussed in Section~\ref{sec_lin_case}, Problem~\eqref{eqn_prob_class_lin} can be solved by Algorithm~\ref{alg:fixed_point} using either $g=\bar g$ in~\eqref{eq:fixed_point_ref2} or $g=\hat g$ in~\eqref{def:g_hat}. In other words, $x^+$ can be computed with one of the following iterations: \begin{equation} \label{PROB2_noprec} \begin{cases} x(k+1) = \bar g(x(k))=\underset{\ell \in \mathcal{L}}{\glb} \left\{A_\ell x(k) + b_\ell \right\} \wedge U\\ x(0) = x_0, \end{cases} \end{equation} \begin{equation} \label{PROB2} \begin{cases} x(k+1) = \hat g(x(k)) =\underset{\ell \in \mathcal{L}}{\glb} \left\{ \hat A_\ell x(k) + \hat b_\ell \right\} \wedge U\\ x(0) = x_0, \end{cases} \end{equation} where $x_0 \in \mathbb{R}^n$ is an arbitrary initial condition and $\hat A_\ell$ and $\hat b_\ell$ are defined as in~\eqref{eqn_choice_hat}. We can compare the convergence rates of iterations~\eqref{PROB2_noprec} and~\eqref{PROB2}. The speed of convergence of iteration~\eqref{PROB2_noprec} can be measured by the convergence rate \[ \bar \chi := \max_{\substack{x \in \mathbb{R}^n \\ x \neq x^+}} \left\{ \frac{{\|\bar g(x)-\bar g(x^+)\|}_\infty}{{\|x - x^+\|}_\infty} \right\}. \] Similarly, we call $\hat \chi$ the convergence rate of iteration~\eqref{PROB2}. Note that, by Proposition~\ref{prop_fix_point_1}, $\bar \chi \leq \gamma$ and, by Proposition~\ref{thm_main}, $\hat \chi \leq \max_{\substack{\ell \in \mathcal{L} \\ i \in \mathcal{V}}} \left\{\frac{\gamma - \left[D_\ell\right]_{ii}}{1 - \left[D_\ell\right]_{ii}}\right\} \leq \gamma$. Hence, in general, we have a better upper bound on the convergence rate for iteration~\eqref{PROB2} than for~\eqref{PROB2_noprec}.
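As a small numerical illustration of this gap, the sketch below builds a preconditioned map as in~\eqref{eqn_choice_hat} and runs both iterations to the same residual tolerance. Since~\eqref{eq:P_def} is not restated here, the sketch assumes the natural choice $D_\ell = \operatorname{diag}(A_\ell)$ and $P_\ell = I - D_\ell$, consistent with the expression of $\hat\gamma$ in~\eqref{def:hat_gamma}; all identifiers are ours.

```python
import numpy as np

# Two affine maps (A_ell, b_ell) with nonnegative entries and row sums <= 0.5.
L_maps = [(np.array([[0.40, 0.10], [0.10, 0.40]]), np.array([1.0, 2.0])),
          (np.array([[0.45, 0.05], [0.00, 0.45]]), np.array([1.5, 1.0]))]
U = 1e5  # upper bound (inactive in this example)

def g_bar(x):
    return np.minimum(np.min([A @ x + b for A, b in L_maps], axis=0), U)

# Preconditioning: hat A = P^{-1}(A - D), hat b = P^{-1} b,
# with the assumed D = diag(A) and P = I - D.
hats = []
for A, b in L_maps:
    D = np.diag(np.diag(A))
    P = np.eye(2) - D
    hats.append((np.linalg.solve(P, A - D), np.linalg.solve(P, b)))

def g_hat(x):
    return np.minimum(np.min([A @ x + b for A, b in hats], axis=0), U)

def iterate(g, x0, tol=1e-10):
    # run x(k+1) = g(x(k)) until the residual drops below tol
    x, k = x0, 0
    while np.max(np.abs(g(x) - x)) > tol:
        x, k = g(x), k + 1
    return x, k

x_bar, k_bar = iterate(g_bar, np.full(2, 10.0))
x_hat, k_hat = iterate(g_hat, np.full(2, 10.0))
# Both iterations reach the same fixed point; the preconditioned one,
# whose contraction factor is (0.5 - 0.4)/(1 - 0.4) < 0.5, needs fewer steps.
```

Under this choice of $D_\ell$ and $P_\ell$ the per-component inequalities $x_i \leq (A_\ell x + b_\ell)_i$ and $x_i \leq (\hat A_\ell x + \hat b_\ell)_i$ are equivalent, so the two maps share the same fixed point.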
Now, let us assume that the matrices $\{A_\ell\}_{\ell \in \mathcal{L}}$ are diagonally dominant, that is, there exists $\Delta \in \left[ 0, \frac{1}{2} \right)$ such that, $\forall i \in \{1, \ldots, n\}$, $\forall \ell \in \mathcal{L}$, \begin{equation}\label{dominant_diagonal} [A_\ell]_{ii} \geq (1 - \Delta)\gamma \quad \text{and} \quad \sum_{\substack{j = 1\\ j \neq i}}^n [A_\ell]_{ij} \leq \Delta \gamma. \end{equation} Recall that, in the applications discussed in Section~\ref{sec:motivation}, this condition is attained when $h$ is small enough. In the following proposition, whose proof is in the Appendix, we state that, if $\Delta$ is small enough, iteration~\eqref{PROB2} converges faster than iteration~\eqref{PROB2_noprec}. \begin{prpstn} \label{thm:convergence_speed} Assume that~\eqref{eqn_hyp_fix_point} holds and let $\Delta \in \left[ 0, \frac{1}{2} \right)$ be such that the matrices $\{A_\ell\}_{\ell \in \mathcal{L}}$ satisfy~\eqref{dominant_diagonal}. If the starting point $x_0$ is selected in such a way that $x_0\geq x^+$, then the solutions of both~\eqref{PROB2_noprec} and~\eqref{PROB2} satisfy $x(k) \geq x^+$, $\forall k \in \mathbb{N}$. Moreover, if \[ \Delta \in \left[ 0, \frac{\sqrt{1 - \gamma} - (1 - \gamma)}{\gamma} \right)\,, \] then for any $x \geq x^+$ \begin{equation}\label{convergence} {\|\hat g(x) - x^+\|}_\infty < {\| \bar g(x)- x^+\|}_\infty. \end{equation} \end{prpstn} \subsection{Speed of Algorithm~\ref{alg:consensus} and priority queue policy} \label{section:priority_queue} As we will see in the numerical experiments, Algorithm~\ref{alg:consensus} solves Problem~\eqref{eqn_prob_class_lin} more efficiently than iterations~\eqref{PROB2_noprec} and~\eqref{PROB2}. As mentioned in the previous section, the order in which the nodes in the priority queue are updated does not affect the convergence of the algorithm, but it heavily impacts its speed.
We implemented four different queue policies, detailed in the following. \subsubsection{Node variation} The priority associated with an index $i$ is given by the opposite of the absolute value of the variation of $[x]_i$ in its last update. In this case, in lines~\ref{alg2_enqueue_1} and~\ref{alg2_enqueue_2} of Algorithm~\ref{alg:eps_sol} and lines~\ref{alg3_enqueue_1} and~\ref{alg3_enqueue_2} of Algorithm~\ref{alg:consensus}, the symbol $*$ is replaced by the opposite of the component of $\xi$ corresponding to the node added to the queue (see Table~\ref{tab:code}). This can be considered a ``greedy'' policy: we first update the components of the solution $[x]_i$ associated with a larger variation $[\xi]_i$, in order to obtain a faster convergence of the current solution $x$ to $x^+$. \subsubsection{Node value} The priority associated with an index $i$ in the priority queue is given by $[x]_i$. In this case, in lines~\ref{alg2_enqueue_1} and~\ref{alg2_enqueue_2} of Algorithm~\ref{alg:eps_sol} and lines~\ref{alg3_enqueue_1} and~\ref{alg3_enqueue_2} of Algorithm~\ref{alg:consensus}, the symbol $*$ is replaced by the value of the node added to the queue (see Table~\ref{tab:code}). The rationale of this policy is the observation that, in Problem~\eqref{eqn_prob_class_proglin}, components of $x$ with lower values are more likely to appear in active constraints. This policy mimics Dijkstra's algorithm: the indices associated with the solution components of lowest value are processed first. \subsubsection{FIFO and LIFO policies} The two remaining policies implement, respectively, the First In First Out (FIFO) policy (i.e., a queue) and the Last In First Out (LIFO) policy (i.e., a stack). Namely, with the FIFO policy the nodes are updated in the order in which they are inserted into the queue; with the LIFO policy, they are updated in reverse order.
In order to formally implement these two policies in a priority queue, we introduce a counter $k$, initialized to $0$ and incremented every time a node is added to the priority queue. In lines~\ref{alg2_enqueue_1} and~\ref{alg2_enqueue_2} of Algorithm~\ref{alg:eps_sol} and lines~\ref{alg3_enqueue_1} and~\ref{alg3_enqueue_2} of Algorithm~\ref{alg:consensus}, the symbol $*$ is replaced by $k$ to implement a FIFO policy and by $-k$ to implement a LIFO policy (see Table~\ref{tab:code}). These steps are required only to represent the two policies formally within Algorithm~\ref{alg:consensus}: in practice, they can be implemented more simply with an unordered queue (for the FIFO policy) or a stack (for the LIFO policy). The rationale of these two policies is to avoid the overhead of managing a priority queue. In fact, inserting an entry into a priority queue of $n$ elements has a time cost of $O(\log n)$, while the same operation on an unordered queue or a stack has a cost of $O(1)$. Note that, with these policies, we increase the efficiency in the management of the set of indexes to be updated, at the expense of a possibly less efficient update order.
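The counter-based key encodings can be checked with a standard min-heap, which pops the entry with the smallest priority first (the same convention under which the node variation policy stores $-[\xi]_i$). The following sketch, using Python's \texttt{heapq} as the priority queue, verifies that keys $k$ and $-k$ reproduce the FIFO and LIFO orders:

```python
import heapq

def drain(heap):
    # Pop every (priority, item) entry from a min-heap, lowest priority first.
    out = []
    while heap:
        out.append(heapq.heappop(heap)[1])
    return out

nodes = ["a", "b", "c"]  # inserted in this order, with counter k = 0, 1, 2

# FIFO: the k-th inserted node gets key k, so earlier insertions pop first.
fifo = [(k, v) for k, v in enumerate(nodes)]
heapq.heapify(fifo)
fifo_order = drain(fifo)   # ['a', 'b', 'c']

# LIFO: the k-th inserted node gets key -k, so later insertions pop first.
lifo = [(-k, v) for k, v in enumerate(nodes)]
heapq.heapify(lifo)
lifo_order = drain(lifo)   # ['c', 'b', 'a']
```

Each \texttt{heappush}/\texttt{heappop} costs $O(\log n)$, which is exactly the overhead that a plain queue or stack avoids.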
\begin{table}[ht] \centering \begin{tabular}{|c|l|l|} \hline Policy & Alg.~\ref{alg:eps_sol} line~\ref{alg2_enqueue_1}, Alg.~\ref{alg:consensus} line~\ref{alg3_enqueue_1} & Alg.~\ref{alg:eps_sol} line~\ref{alg2_enqueue_2}, Alg.~\ref{alg:consensus} line~\ref{alg3_enqueue_2} \\ \hline Variation & $Q:= \textrm{Enqueue}\left(Q, \left(i, -[\xi]_i \right)\right)$ & $Q:= \textrm{Enqueue}\left(Q, \left(j, -[\xi]_j \right)\right)$ \\ \hline Value & $Q:= \textrm{Enqueue}\left(Q, \left(i, [x]_i \right)\right)$ & $Q:= \textrm{Enqueue}\left(Q, \left(j, [x]_j \right)\right)$ \\ \hline FIFO & $Q:= \textrm{Enqueue}\left(Q, \left(i, k \right)\right)$; $k := k + 1$ & $Q:= \textrm{Enqueue}\left(Q, \left(j, k \right)\right)$; $k := k + 1$ \\ \hline LIFO & $Q:= \textrm{Enqueue}\left(Q, \left(i, -k \right)\right)$; $k := k + 1$ & $Q:= \textrm{Enqueue}\left(Q, \left(j, -k \right)\right)$; $k := k + 1$ \\ \hline \end{tabular} \caption{Possible priority queue policies.} \label{tab:code} \end{table} \subsection{Numerical Experiments}\label{subsec:simulations} In this section, we test Algorithm~\ref{alg:consensus} on randomly generated problems of class~\eqref{eqn_prob_class_lin}. We carried out two sets of tests. In the first one, we compared the solution times of Algorithm~\ref{alg:consensus}, with different priority queue policies, against a commercial solver for linear problems (Gurobi). In the second one, we compared the number of scalar multiplications executed by Algorithm~\ref{alg:consensus} (with different priority queue policies) with those required by the fixed point iteration~\eqref{PROB2_noprec}. \subsubsection{Random problems generation} The following procedure allows generating a random problem of class~\eqref{eqn_prob_class_lin} with $n$ variables.
The procedure takes the following input parameters: \begin{itemize} \item $U \in \mathbb{R}^+$: an upper bound for the problem solution, \item $M_A \in \mathbb{R}^+$: the maximum value for the entries of $A_1,\ldots,A_L$, \item $M_b \in \mathbb{R}^+$: the maximum value for the entries of $b_1,\ldots,b_L$, \item $G_1,\ldots, G_L$: graphs with $n$ nodes. \end{itemize} A problem of class~\eqref{eqn_prob_class_lin} is then obtained with the following operations, for $i=1,\ldots,L$: \begin{itemize} \item set $D_i$ as the adjacency matrix of graph $G_i$, \item define $A_i$ as the matrix obtained from $D_i$ by replacing each nonzero entry of $D_i$ with a random number drawn from a uniform distribution on the interval $[0,M_A]$, \item define $b_i \in \mathbb{R}^n$ so that each entry is a random number drawn from a uniform distribution on the interval $[0,M_b]$. \end{itemize} The graphs $G_1,\ldots,G_L$ are obtained from standard classes of random graphs, namely: \begin{itemize} \item the Barab\'asi-Albert model~\cite{BARABASI1999}, characterized by a scale-free degree distribution, \item the Newman-Watts-Strogatz model~\cite{NEWMAN1999}, which produces graphs with small-world properties, \item the Holme and Kim algorithm~\cite{HOLME2002}, which produces scale-free graphs with high clustering. \end{itemize} In our tests, we used the software NetworkX~\cite{NetworkX} to generate the random graphs. \subsubsection{Test 1: solution time} We considered random instances of Problem~\eqref{eqn_prob_class_lin} obtained with the following parameters: $U = 10^{5}$, $M_A = 0.5$, $M_b = 1$, $L = 4$, using random graphs with a varying number of nodes obtained with the following models. \begin{itemize} \item The Barab\'asi-Albert model (see~\cite{BARABASI1999} for more details), in which each new node is connected to 5 existing nodes.
\item The Newman-Watts-Strogatz model (see~\cite{NEWMAN1999}), in which each node is connected to its 2 nearest neighbors, with shortcuts created with probability equal to 3 divided by the number of nodes in the graph. \item The Holme and Kim algorithm (see~\cite{HOLME2002}), in which 4 random edges are added for each new node, with a probability of $0.25$ of adding an extra random edge that creates a triangle. \end{itemize} Figures~\ref{fig:barabasi_albert_graph_time},~\ref{fig:newman_watts_strogatz_graph_time} and~\ref{fig:powerlaw_cluster_graph_time} compare the solution times obtained with Algorithm~\ref{alg:consensus} (using different queue policies) to those obtained with Gurobi. The figures refer to random graphs generated with the Barab\'asi-Albert model, the Newman-Watts-Strogatz model and the Holme and Kim algorithm, respectively. In each figure, the horizontal axis reports the number of variables (logarithmically spaced) and the vertical axis the solution times (also logarithmically spaced), obtained as the average over 5 tests. For each graph type, the policies based on FIFO and node variation are the best performing ones. In particular, for problems obtained from the Barab\'asi-Albert model (Figure~\ref{fig:barabasi_albert_graph_time}) and the Holme and Kim algorithm (Figure~\ref{fig:powerlaw_cluster_graph_time}), the solution times obtained with these two policies are more than three orders of magnitude lower than those of Gurobi. Moreover, the solution time with the FIFO policy is more than one order of magnitude lower than that of Gurobi for problems obtained from the Newman-Watts-Strogatz model (Figure~\ref{fig:newman_watts_strogatz_graph_time}). Note that, in every figure, Gurobi solution times are almost constant for small numbers of variables. A possible explanation is that Gurobi performs some dimension-independent operations which, at small dimensions, are the most time-consuming ones.
Note also that, in Figures~\ref{fig:barabasi_albert_graph_time} and~\ref{fig:powerlaw_cluster_graph_time}, the solution times for node value and LIFO policies are missing starting from a certain number of variables. This is due to excessively high computational times, however, the first collected data points are enough for drawing conclusions on the performances of these policies which, as the number of variables grows, perform far worse than Gurobi. \noindent \boldsymbol{1}gin{tikzpicture} \boldsymbol{1}gin{axis}[ width=.8\textwidth, height=.5\textwidth, at={(1.011in,0.642in)}, scale only axis, xmode=log, xmin=10, xmax=11144, xminorticks=true, xlabel style={font=\color{white!15!black}}, xlabel={Number of variables}, ymode=log, ymin=0.0001, ymax=1000, yminorticks=true, ylabel style={font=\color{white!15!black}}, ylabel={Solution time}, axis background/.style={fill=white}, xmajorgrids, xminorgrids, ymajorgrids, yminorgrids, legend style={at={(0.99,0.01)}, anchor=south east, legend cell align=left, align=left, draw=white!15!black} ] \addplot [color=mycolor1, line width=1.0pt, mark=asterisk, mark options={solid, mycolor1}] table[row sep=crcr]{ 10 0.0005951548\\ 13 0.00035344\\ 17 0.000499044\\ 21 0.0005792682\\ 27 0.000787925\\ 35 0.0092325724\\ 45 0.005667969\\ 58 0.0017644296\\ 74 0.0022005946\\ 95 0.0032154846\\ 123 0.0055618398\\ 157 0.0070056918\\ 202 0.012333153\\ 260 0.0159228652\\ 334 0.0156618468\\ 429 0.0265329208\\ 551 0.035822053\\ 708 0.0551215674\\ 910 0.0573981438\\ 1169 0.0684891284\\ 1501 0.084946479\\ 1929 0.134060193\\ 2478 0.1310034238\\ 3184 0.169810344\\ 4090 0.209420571\\ 5255 0.2568849752\\ 6752 0.3660565208\\ 8674 0.4712324662\\ 11144 0.5907037372\\ 14318 0\\ 18395 0\\ 23634 0\\ 30364 0\\ 39010 0\\ 50119 0\\ }; \addlegendentry{Variation} \addplot [color=mycolor2, line width=1.0pt, mark=o, mark options={solid, mycolor2}] table[row sep=crcr]{ 10 0.0006611324\\ 13 0.000167685\\ 17 0.0002435054\\ 21 0.000359978\\ 27 0.0004558798\\ 35 
0.0015596832\\ 45 0.0013026902\\ 58 0.0015923736\\ 74 0.0026346076\\ 95 0.0029254636\\ 123 0.0030095482\\ 157 0.0045819746\\ 202 0.0044292348\\ 260 0.0150607736\\ 334 0.0079751382\\ 429 0.014015092\\ 551 0.0198774706\\ 708 0.0352434538\\ 910 0.0331614908\\ 1169 0.0521926136\\ 1501 0.0583760694\\ 1929 0.1083283926\\ 2478 0.0887241906\\ 3184 0.1045216054\\ 4090 0.123326712\\ 5255 0.1338982558\\ 6752 0.1638463362\\ 8674 0.2036605504\\ 11144 0.2729580392\\ 14318 0\\ 18395 0\\ 23634 0\\ 30364 0\\ 39010 0\\ 50119 0\\ }; \addlegendentry{FIFO} \addplot [color=mycolor3, line width=1.0pt, mark=square, mark options={solid, mycolor3}] table[row sep=crcr]{ 10 0.00314563\\ 13 0.0122389824\\ 17 0.028481783\\ 21 0.0540521318\\ 27 0.1000488006\\ 35 0.386978246\\ 45 0.8433000552\\ 58 1.3572409162\\ 74 3.6587333724\\ 95 0\\ 123 0\\ 157 0\\ 202 0\\ 260 0\\ 334 0\\ 429 0\\ 551 0\\ 708 0\\ 910 0\\ 1169 0\\ 1501 0\\ 1929 0\\ 2478 0\\ 3184 0\\ 4090 0\\ 5255 0\\ 6752 0\\ 8674 0\\ 11144 0\\ 14318 0\\ 18395 0\\ 23634 0\\ 30364 0\\ 39010 0\\ 50119 0\\ }; \addlegendentry{Value} \addplot [color=mycolor4, line width=1.0pt, mark=pentagon, mark options={solid, mycolor4}] table[row sep=crcr]{ 10 0.022491162\\ 13 0.0735789134\\ 17 0.176859916\\ 21 0.463932564\\ 27 0.8965218696\\ 35 3.0192616228\\ 45 0\\ 58 0\\ 74 0\\ 95 0\\ 123 0\\ 157 0\\ 202 0\\ 260 0\\ 334 0\\ 429 0\\ 551 0\\ 708 0\\ 910 0\\ 1169 0\\ 1501 0\\ 1929 0\\ 2478 0\\ 3184 0\\ 4090 0\\ 5255 0\\ 6752 0\\ 8674 0\\ 11144 0\\ 14318 0\\ 18395 0\\ 23634 0\\ 30364 0\\ 39010 0\\ 50119 0\\ }; \addlegendentry{LIFO} \addplot [color=mycolor5, line width=1.0pt, mark=diamond, mark options={solid, mycolor5}] table[row sep=crcr]{ 10 0.3459368862\\ 13 0.266633962\\ 17 0.264651968\\ 21 0.2609900964\\ 27 0.269805837\\ 35 0.2952675002\\ 45 0.3570352102\\ 58 0.3352853552\\ 74 0.3410980454\\ 95 0.3438401718\\ 123 0.2922790924\\ 157 0.3539103666\\ 202 0.3833662614\\ 260 0.3534752564\\ 334 0.3928540948\\ 429 0.5392693906\\ 551 0.4639342778\\ 708 1.1983676908\\ 
910 1.8217313006\\ 1169 2.0217317532\\ 1501 3.0225821664\\ 1929 5.942938245\\ 2478 8.7950893672\\ 3184 16.3822081754\\ 4090 28.8719912166\\ 5255 62.1615212322\\ 6752 126.0592345666\\ 8674 250.490089081\\ 11144 509.2188303676\\ 14318 0\\ 18395 0\\ 23634 0\\ 30364 0\\ 39010 0\\ 50119 0\\ }; \addlegendentry{Gurobi} \end{axis} \end{tikzpicture} \captionof{figure}{Solution times for graphs with growing number of nodes generated with Barab\'asi-Albert model.} \label{fig:barabasi_albert_graph_time} \boldsymbol{1}gin{tikzpicture} \boldsymbol{1}gin{axis}[ width=.8\textwidth, height=.5\textwidth, at={(1.011in,0.642in)}, scale only axis, xmode=log, xmin=10, xmax=50119, xminorticks=true, xlabel style={font=\color{white!15!black}}, xlabel={Number of variables}, ymode=log, ymin=0.0001, ymax=100, yminorticks=true, ylabel style={font=\color{white!15!black}}, ylabel={Solution time}, axis background/.style={fill=white}, xmajorgrids, xminorgrids, ymajorgrids, yminorgrids, legend style={at={(0.99,0.01)}, anchor=south east, legend cell align=left, align=left, draw=white!15!black} ] \addplot [color=mycolor1, line width=1.0pt, mark=asterisk, mark options={solid, mycolor1}] table[row sep=crcr]{ 10 0.0437058822\\ 13 0.0003105642\\ 17 0.0003349432\\ 21 0.0003880072\\ 27 0.0004336068\\ 35 0.0005247948\\ 45 0.000590372\\ 58 0.0007067506\\ 74 0.0008802644\\ 95 0.0010568416\\ 123 0.001462452\\ 157 0.0018199536\\ 202 0.0022388226\\ 260 0.0038207278\\ 334 0.0056529644\\ 429 0.0071710784\\ 551 0.0108080108\\ 708 0.0162415596\\ 910 0.020296989\\ 1169 0.0286340496\\ 1501 0.036942233\\ 1929 0.0423397638\\ 2478 0.056434152\\ 3184 0.0654424698\\ 4090 0.121155158\\ 5255 0.120423036\\ 6752 0.1400534612\\ 8674 0.178852733\\ 11144 0.2649161826\\ 14318 0.4155184038\\ 18395 0.562368231\\ 23634 0.6613739906\\ 30364 0.9988348148\\ 39010 1.2374534102\\ 50119 1.631090228\\ }; \addlegendentry{Variation} \addplot [color=mycolor2, line width=1.0pt, mark=o, mark options={solid, mycolor2}] table[row sep=crcr]{ 10 
0.000501501\\ 13 0.0001129486\\ 17 0.0001205568\\ 21 0.0001454514\\ 27 0.0001625716\\ 35 0.0001890596\\ 45 0.0002200992\\ 58 0.0002302208\\ 74 0.0002439548\\ 95 0.0005181754\\ 123 0.0005229488\\ 157 0.0007790132\\ 202 0.0013731628\\ 260 0.0023038752\\ 334 0.0022321066\\ 429 0.0025945732\\ 551 0.0024802142\\ 708 0.0038862076\\ 910 0.005670732\\ 1169 0.0081429976\\ 1501 0.008239282\\ 1929 0.009971903\\ 2478 0.0122597952\\ 3184 0.0170984232\\ 4090 0.0358771618\\ 5255 0.04051714\\ 6752 0.0593637738\\ 8674 0.0605770912\\ 11144 0.0714349828\\ 14318 0.1042273148\\ 18395 0.1109716336\\ 23634 0.1069118182\\ 30364 0.1207863376\\ 39010 0.143382946\\ 50119 0.1822079336\\ }; \addlegendentry{FIFO} \addplot [color=mycolor3, line width=1.0pt, mark=square, mark options={solid, mycolor3}] table[row sep=crcr]{ 10 0.0030631324\\ 13 0.003665062\\ 17 0.005082224\\ 21 0.0077198636\\ 27 0.0100503912\\ 35 0.0132994676\\ 45 0.0181489316\\ 58 0.0252437954\\ 74 0.0237352274\\ 95 0.0302054782\\ 123 0.035608264\\ 157 0.052395607\\ 202 0.0653032904\\ 260 0.0807155248\\ 334 0.1048264554\\ 429 0.1261190088\\ 551 0.1544885632\\ 708 0.2104607114\\ 910 0.2499445\\ 1169 0.3309349264\\ 1501 0.4111294766\\ 1929 0.546758185\\ 2478 0.6694439334\\ 3184 0.8522266348\\ 4090 1.4145447466\\ 5255 1.4698170538\\ 6752 1.8312934464\\ 8674 2.3711939058\\ 11144 3.1252997074\\ 14318 4.283921471\\ 18395 5.329291155\\ 23634 6.524120078\\ 30364 9.2146832476\\ 39010 12.3157065618\\ 50119 13.9710775174\\ }; \addlegendentry{Value} \addplot [color=mycolor4, line width=1.0pt, mark=pentagon, mark options={solid, mycolor4}] table[row sep=crcr]{ 10 0.0018031918\\ 13 0.0019955598\\ 17 0.0021590202\\ 21 0.0026767832\\ 27 0.0034084286\\ 35 0.0038898614\\ 45 0.0041752668\\ 58 0.0079182316\\ 74 0.007292178\\ 95 0.0113519546\\ 123 0.0138527408\\ 157 0.0188655924\\ 202 0.0216657636\\ 260 0.0264216568\\ 334 0.0332667152\\ 429 0.0407806204\\ 551 0.0484925108\\ 708 0.0602456228\\ 910 0.0744995958\\ 1169 0.091407883\\ 1501 0.1183014852\\ 
1929 0.1474717314\\ 2478 0.188861047\\ 3184 0.240157985\\ 4090 0.4588354656\\ 5255 0.4368420284\\ 6752 0.51562392\\ 8674 0.6565395882\\ 11144 0.9455767692\\ 14318 1.1497900564\\ 18395 1.4300264812\\ 23634 1.8082655934\\ 30364 2.8185721652\\ 39010 3.4079966926\\ 50119 3.8371527132\\ }; \addlegendentry{LIFO} \addplot [color=mycolor5, line width=1.0pt, mark=diamond, mark options={solid, mycolor5}] table[row sep=crcr]{ 10 0.323342488\\ 13 0.3281140364\\ 17 0.3381316846\\ 21 0.3451169632\\ 27 0.3367729224\\ 35 0.3357352846\\ 45 0.324017452\\ 58 0.3408094084\\ 74 0.3164435598\\ 95 0.3062755636\\ 123 0.245843998\\ 157 0.2501585924\\ 202 0.2417714116\\ 260 0.2656039764\\ 334 0.2419686076\\ 429 0.2642374488\\ 551 0.2663696784\\ 708 0.2597513194\\ 910 0.2750517558\\ 1169 0.27871696\\ 1501 0.287340832\\ 1929 0.3033246612\\ 2478 0.5301388348\\ 3184 0.620885309\\ 4090 0.9149669274\\ 5255 0.9121104986\\ 6752 1.1056631104\\ 8674 1.2273643368\\ 11144 1.6046847444\\ 14318 1.6613798818\\ 18395 1.5180824458\\ 23634 1.9858505602\\ 30364 2.5996911352\\ 39010 3.2992850724\\ 50119 3.3848775542\\ }; \addlegendentry{Gurobi} \end{axis} \end{tikzpicture} \captionof{figure}{Solution times for graphs with growing number of nodes generated with Newman-Watts-Strogatz model.} \label{fig:newman_watts_strogatz_graph_time} \boldsymbol{1}gin{tikzpicture} \boldsymbol{1}gin{axis}[ width=.8\textwidth, height=.5\textwidth, at={(1.011in,0.642in)}, scale only axis, xmode=log, xmin=10, xmax=11144, xminorticks=true, xlabel style={font=\color{white!15!black}}, xlabel={Number of variables}, ymode=log, ymin=0.0001, ymax=1000, yminorticks=true, ylabel style={font=\color{white!15!black}}, ylabel={Solution time}, axis background/.style={fill=white}, xmajorgrids, xminorgrids, ymajorgrids, yminorgrids, legend style={at={(0.99,0.01)}, anchor=south east, legend cell align=left, align=left, draw=white!15!black} ] \addplot [color=mycolor1, line width=1.0pt, mark=asterisk, mark options={solid, mycolor1}] table[row 
sep=crcr]{ 10 0.0536987564\\ 13 0.0010019214\\ 17 0.0004300098\\ 21 0.0005706234\\ 27 0.0007647136\\ 35 0.00107908\\ 45 0.0013368298\\ 58 0.0022689742\\ 74 0.0028972518\\ 95 0.0032404698\\ 123 0.0053247938\\ 157 0.0061977208\\ 202 0.008244831\\ 260 0.0157692264\\ 334 0.0162276656\\ 429 0.0268331206\\ 551 0.037888087\\ 708 0.0580116484\\ 910 0.0770917402\\ 1169 0.0743282476\\ 1501 0.0720739864\\ 1929 0.1072650794\\ 2478 0.1465275914\\ 3184 0.1879428798\\ 4090 0.2150498624\\ 5255 0.2758929592\\ 6752 0.341359006\\ 8674 0.5186461124\\ 11144 0.7728565042\\ 14318 0\\ 18395 0\\ 23634 0\\ 30364 0\\ 39010 0\\ 50119 0\\ }; \addlegendentry{Variation} \addplot [color=mycolor2, line width=1.0pt, mark=o, mark options={solid, mycolor2}] table[row sep=crcr]{ 10 0.0029447472\\ 13 0.0001863554\\ 17 0.0003917964\\ 21 0.000379023\\ 27 0.0005147618\\ 35 0.0005317338\\ 45 0.0008674904\\ 58 0.0031011384\\ 74 0.0018998266\\ 95 0.0038669768\\ 123 0.0025136482\\ 157 0.0040747384\\ 202 0.0052481556\\ 260 0.0089187658\\ 334 0.0120926002\\ 429 0.0198741096\\ 551 0.0223323406\\ 708 0.0369560926\\ 910 0.0553750908\\ 1169 0.0413713452\\ 1501 0.0470457412\\ 1929 0.0830445404\\ 2478 0.0968584854\\ 3184 0.1115110838\\ 4090 0.1177151406\\ 5255 0.1228187552\\ 6752 0.1477737006\\ 8674 0.2115624136\\ 11144 0.2633009292\\ 14318 0\\ 18395 0\\ 23634 0\\ 30364 0\\ 39010 0\\ 50119 0\\ }; \addlegendentry{FIFO} \addplot [color=mycolor3, line width=1.0pt, mark=square, mark options={solid, mycolor3}] table[row sep=crcr]{ 10 0.0055850004\\ 13 0.0115391894\\ 17 0.0251107596\\ 21 0.0578950462\\ 27 0.1027943962\\ 35 0.2856070582\\ 45 0.4858989502\\ 58 1.0325888364\\ 74 2.3098624084\\ 95 4.2030332666\\ 123 0\\ 157 0\\ 202 0\\ 260 0\\ 334 0\\ 429 0\\ 551 0\\ 708 0\\ 910 0\\ 1169 0\\ 1501 0\\ 1929 0\\ 2478 0\\ 3184 0\\ 4090 0\\ 5255 0\\ 6752 0\\ 8674 0\\ 11144 0\\ 14318 0\\ 18395 0\\ 23634 0\\ 30364 0\\ 39010 0\\ 50119 0\\ }; \addlegendentry{Value} \addplot [color=mycolor4, line width=1.0pt, mark=pentagon, mark 
options={solid, mycolor4}] table[row sep=crcr]{ 10 0.0220423906\\ 13 0.0454583432\\ 17 0.1392907014\\ 21 0.2452004284\\ 27 0.6623545588\\ 35 2.001916001\\ 45 3.6018202132\\ 58 0\\ 74 0\\ 95 0\\ 123 0\\ 157 0\\ 202 0\\ 260 0\\ 334 0\\ 429 0\\ 551 0\\ 708 0\\ 910 0\\ 1169 0\\ 1501 0\\ 1929 0\\ 2478 0\\ 3184 0\\ 4090 0\\ 5255 0\\ 6752 0\\ 8674 0\\ 11144 0\\ 14318 0\\ 18395 0\\ 23634 0\\ 30364 0\\ 39010 0\\ 50119 0\\ }; \addlegendentry{LIFO} \addplot [color=mycolor5, line width=1.0pt, mark=diamond, mark options={solid, mycolor5}] table[row sep=crcr]{ 10 0.3474432906\\ 13 0.2888704704\\ 17 0.3037165692\\ 21 0.299908297\\ 27 0.3093113462\\ 35 0.3001741816\\ 45 0.3209836864\\ 58 0.3036790408\\ 74 0.3182152444\\ 95 0.3046952044\\ 123 0.3147784406\\ 157 0.3247152874\\ 202 0.3111099764\\ 260 0.3381673284\\ 334 0.3644460596\\ 429 0.4187763042\\ 551 0.4201275096\\ 708 0.5188806938\\ 910 1.699291754\\ 1169 2.2449774678\\ 1501 3.0619865764\\ 1929 4.6667603954\\ 2478 7.5373405438\\ 3184 12.3516520966\\ 4090 22.143164653\\ 5255 43.2589357954\\ 6752 87.53416575\\ 8674 183.744954342\\ 11144 393.8327459402\\ 14318 0\\ 18395 0\\ 23634 0\\ 30364 0\\ 39010 0\\ 50119 0\\ }; \addlegendentry{Gurobi} \end{axis} \end{tikzpicture} \captionof{figure}{Solution times for graphs with growing number of nodes generated with Holm and Kim algorithm.} \label{fig:powerlaw_cluster_graph_time} \boldsymbol{1}gin{tikzpicture} \boldsymbol{1}gin{axis}[ width=.8\textwidth, height=.5\textwidth, at={(1.011in,0.642in)}, scale only axis, x dir=reverse, xmode=log, xmin=1e-10, xmax=0.1, xminorticks=true, xlabel style={font=\color{white!15!black}}, xlabel={Tolerance}, ymode=log, ymin=10000, ymax=10000000, yminorticks=true, ylabel style={font=\color{white!15!black}}, ylabel={Multiplications}, axis background/.style={fill=white}, xmajorgrids, xminorgrids, ymajorgrids, yminorgrids, legend style={at={(0.99,0.2)}, anchor=south east, legend cell align=left, align=left, draw=white!15!black} ] \addplot [color=mycolor1, line 
width=1.0pt, mark=asterisk, mark options={solid, mycolor1}] table[row sep=crcr]{ 0.1 24832\\ 0.01 25733\\ 0.001 26606.2\\ 0.0001 27500.6\\ 1e-05 28388\\ 1e-06 29292.2\\ 1e-07 30182.2\\ 1e-08 31059.6\\ 1e-09 31964.8\\ 1e-10 32856.4\\ }; \addlegendentry{Variation} \addplot [color=mycolor2, line width=1.0pt, mark=o, mark options={solid, mycolor2}] table[row sep=crcr]{ 0.1 25092\\ 0.01 26076.6\\ 0.001 27074.2\\ 0.0001 28079.6\\ 1e-05 29083.8\\ 1e-06 30074.6\\ 1e-07 31087\\ 1e-08 32087.8\\ 1e-09 33084.4\\ 1e-10 34086\\ }; \addlegendentry{FIFO} \addplot [color=mycolor3, line width=1.0pt, mark=square, mark options={solid, mycolor3}] table[row sep=crcr]{ 0.1 1245061.2\\ 0.01 4310056\\ 0.001 0\\ 0.0001 0\\ 1e-05 0\\ 1e-06 0\\ 1e-07 0\\ 1e-08 0\\ 1e-09 0\\ 1e-10 0\\ }; \addlegendentry{Value} \addplot [color=mycolor4, line width=1.0pt, mark=pentagon, mark options={solid, mycolor4}] table[row sep=crcr]{ 0.1 50777.6\\ 0.01 70298.8\\ 0.001 99497\\ 0.0001 140810.2\\ 1e-05 216532\\ 1e-06 359016.6\\ 1e-07 584156.2\\ 1e-08 1016941.2\\ 1e-09 1989840.2\\ 1e-10 3891423.4\\ }; \addlegendentry{LIFO} \addplot [color=mycolor5, line width=1.0pt, mark=diamond, mark options={solid, mycolor5}] table[row sep=crcr]{ 0.1 396000\\ 0.01 475200\\ 0.001 534600\\ 0.0001 594000\\ 1e-05 673200\\ 1e-06 732600\\ 1e-07 792000\\ 1e-08 871200\\ 1e-09 930600\\ 1e-10 990000\\ }; \addlegendentry{Fixed point} \end{axis} \end{tikzpicture} \captionof{figure}{Scalar multiplications for different tolerances on a graph generated with Barab\'asi-Albert model.} \label{fig:barabasi_albert_graph_multiplications} \boldsymbol{1}gin{tikzpicture} \boldsymbol{1}gin{axis}[ width=.8\textwidth, height=.5\textwidth, at={(1.011in,0.642in)}, scale only axis, x dir=reverse, xmode=log, xmin=1e-10, xmax=0.1, xminorticks=true, xlabel style={font=\color{white!15!black}}, xlabel={Tolerance}, ymode=log, ymin=8890.4, ymax=1000000, yminorticks=true, ylabel style={font=\color{white!15!black}}, ylabel={Multiplications}, axis 
background/.style={fill=white}, xmajorgrids, xminorgrids, ymajorgrids, yminorgrids, legend style={at={(0.99,0.2)}, anchor=south east, legend cell align=left, align=left, draw=white!15!black} ] \addplot [color=mycolor1, line width=1.0pt, mark=asterisk, mark options={solid, mycolor1}] table[row sep=crcr]{ 0.1 8890.4\\ 0.01 9707.6\\ 0.001 10534.6\\ 0.0001 11372.6\\ 1e-05 12204\\ 1e-06 13037.6\\ 1e-07 13872.6\\ 1e-08 14702\\ 1e-09 15537\\ 1e-10 16372.2\\ }; \addlegendentry{Variation} \addplot [color=mycolor2, line width=1.0pt, mark=o, mark options={solid, mycolor2}] table[row sep=crcr]{ 0.1 8956.2\\ 0.01 9957.6\\ 0.001 10953.8\\ 0.0001 11948.2\\ 1e-05 12930.8\\ 1e-06 13912.8\\ 1e-07 14885\\ 1e-08 15860.4\\ 1e-09 16831.8\\ 1e-10 17805.2\\ }; \addlegendentry{FIFO} \addplot [color=mycolor3, line width=1.0pt, mark=square, mark options={solid, mycolor3}] table[row sep=crcr]{ 0.1 31863.4\\ 0.01 58205.4\\ 0.001 111309.6\\ 0.0001 220091.4\\ 1e-05 442670.4\\ 1e-06 895020.4\\ 1e-07 0\\ 1e-08 0\\ 1e-09 0\\ 1e-10 0\\ }; \addlegendentry{Value} \addplot [color=mycolor4, line width=1.0pt, mark=pentagon, mark options={solid, mycolor4}] table[row sep=crcr]{ 0.1 20940.6\\ 0.01 29482.8\\ 0.001 41511.2\\ 0.0001 60431.6\\ 1e-05 88596\\ 1e-06 132722.6\\ 1e-07 212574\\ 1e-08 311660.4\\ 1e-09 437125.2\\ 1e-10 761744.2\\ }; \addlegendentry{LIFO} \addplot [color=mycolor5, line width=1.0pt, mark=diamond, mark options={solid, mycolor5}] table[row sep=crcr]{ 0.1 80000\\ 0.01 96000\\ 0.001 108000\\ 0.0001 120000\\ 1e-05 136000\\ 1e-06 148000\\ 1e-07 160000\\ 1e-08 176000\\ 1e-09 188000\\ 1e-10 200000\\ }; \addlegendentry{Fixed point} \end{axis} \end{tikzpicture} \captionof{figure}{Scalar multiplications for different tolerances on a graph generated with Newman-Watts-Strogatz model.} \label{fig:newman_watts_strogatz_graph_multiplications} \boldsymbol{1}gin{tikzpicture} \boldsymbol{1}gin{axis}[ width=.8\textwidth, height=.5\textwidth, at={(1.011in,0.642in)}, scale only axis, x dir=reverse, xmode=log, 
xmin=1e-10, xmax=0.1, xminorticks=true, xlabel style={font=\color{white!15!black}}, xlabel={Tolerance}, ymode=log, ymin=10000, ymax=10000000, yminorticks=true, ylabel style={font=\color{white!15!black}}, ylabel={Multiplications}, axis background/.style={fill=white}, xmajorgrids, xminorgrids, ymajorgrids, yminorgrids, legend style={at={(0.99,0.2)}, anchor=south east, legend cell align=left, align=left, draw=white!15!black} ] \addplot [color=mycolor1, line width=1.0pt, mark=asterisk, mark options={solid, mycolor1}] table[row sep=crcr]{ 0.1 20777.4\\ 0.01 21665.4\\ 0.001 22535.4\\ 0.0001 23428.2\\ 1e-05 24309\\ 1e-06 25193\\ 1e-07 26106.6\\ 1e-08 26975.8\\ 1e-09 27870.6\\ 1e-10 28750.8\\ }; \addlegendentry{Variation} \addplot [color=mycolor2, line width=1.0pt, mark=o, mark options={solid, mycolor2}] table[row sep=crcr]{ 0.1 21006\\ 0.01 22028.4\\ 0.001 23052.8\\ 0.0001 24063.6\\ 1e-05 25099.4\\ 1e-06 26124.4\\ 1e-07 27148\\ 1e-08 28157\\ 1e-09 29174.4\\ 1e-10 30221.8\\ }; \addlegendentry{FIFO} \addplot [color=mycolor3, line width=1.0pt, mark=square, mark options={solid, mycolor3}] table[row sep=crcr]{ 0.1 883130.8\\ 0.01 3037159.8\\ 0.001 0\\ 0.0001 0\\ 1e-05 0\\ 1e-06 0\\ 1e-07 0\\ 1e-08 0\\ 1e-09 0\\ 1e-10 0\\ }; \addlegendentry{Value} \addplot [color=mycolor4, line width=1.0pt, mark=pentagon, mark options={solid, mycolor4}] table[row sep=crcr]{ 0.1 43825.6\\ 0.01 60735.8\\ 0.001 86885.8\\ 0.0001 123018.4\\ 1e-05 187654\\ 1e-06 307452.8\\ 1e-07 525227.4\\ 1e-08 874317\\ 1e-09 1754279.8\\ 1e-10 3582942\\ }; \addlegendentry{LIFO} \addplot [color=mycolor5, line width=1.0pt, mark=diamond, mark options={solid, mycolor5}] table[row sep=crcr]{ 0.1 317440\\ 0.01 380928\\ 0.001 428544\\ 0.0001 476160\\ 1e-05 539648\\ 1e-06 587264\\ 1e-07 634880\\ 1e-08 698368\\ 1e-09 745984\\ 1e-10 793600\\ }; \addlegendentry{Fixed point} \end{axis} \end{tikzpicture} \captionof{figure}{Scalar multiplications for different tolerances on a graph generated with the Holme and Kim algorithm.}
\label{fig:powerlaw_cluster_graph_multiplications} \subsubsection{Test 2: number of operations} We considered three instances of Problem~\eqref{eqn_prob_class_lin}, obtained from the three classes of random graphs considered in the previous tests, with the same parameters and with 500 nodes. For each instance, we considered 10 logarithmically spaced values of tolerance $\epsilon$ between $10^{-1}$ and $10^{-10}$. We solved each problem with the following methods: \begin{itemize} \item the preconditioned fixed point iteration~\eqref{PROB2}, \item Algorithm~\ref{alg:consensus} with FIFO, LIFO, node value and node variation policies. \end{itemize} The results are reported in Figures~\ref{fig:barabasi_albert_graph_multiplications},~\ref{fig:newman_watts_strogatz_graph_multiplications} and~\ref{fig:powerlaw_cluster_graph_multiplications}. These figures show that the number of product operations required with the node variation policy is much lower (by one order of magnitude) than that required by the fixed point iteration~\eqref{PROB2_noprec}. The iteration based on FIFO, although slightly less efficient than the node variation policy, gives results comparable to it. Observe that, even though the iteration based on node variation requires (slightly) fewer scalar multiplications than the one based on FIFO, its solution times are worse than those obtained with the FIFO policy, since managing a priority queue ordered by node variation is computationally more demanding than a First-In-First-Out data structure. The iteration based on node value performs poorly even at high tolerances.
Also, the iteration based on LIFO gives poor computational results, underperforming the fixed point iteration~\eqref{PROB2_noprec} for tolerances smaller than $10^{-7}$, in Figures~\ref{fig:barabasi_albert_graph_multiplications} and~\ref{fig:powerlaw_cluster_graph_multiplications}, and smaller than $10^{-6}$, in Figure~\ref{fig:newman_watts_strogatz_graph_multiplications}. Note that, in Figures~\ref{fig:barabasi_albert_graph_multiplications},~\ref{fig:newman_watts_strogatz_graph_multiplications} and~\ref{fig:powerlaw_cluster_graph_multiplications}, below a certain value of the tolerance, the numbers of scalar multiplications for the priority queue based on node value are missing due to excessively high computational times. However, the first collected data points are enough for drawing conclusions on the performance of this policy. \newline\newline\noindent As a concluding remark, we observe that all the experiments confirm our previous claim about the relevance of the ordering in the priority queue. While convergence is guaranteed for all the orderings we tested, the speed of convergence and the number of scalar multiplications differ considerably among them. In what follows we give a tentative explanation of these differences. The good performance of the node variation policy can be explained by the fact that it guarantees a quick reduction of the variable values. The LIFO and value orderings tend to update a small subset of variables before proceeding to the remaining ones. This is particularly evident in the case of the value policy, where only variables with small values are initially updated. The FIFO ordering guarantees a more uniform propagation of the updates, thus avoiding stagnation in small portions of the feasible region.
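For concreteness, the queue policies can be sketched in a few lines. The following Python fragment is a minimal illustration, not the implementation used in the experiments: it performs the asynchronous update $x_i \leftarrow \min(U_i, \min_\ell [\hat A_\ell x + \hat b_\ell]_i)$ driven either by a FIFO queue or by a priority queue ordered by the most recent variation; for simplicity every other node is re-queued after a change (rather than only the neighbours), and all names and the counting of scalar multiplications are illustrative.

```python
import heapq
from collections import deque

def async_fixed_point(A_list, b_list, U, policy="fifo", tol=1e-8):
    """Asynchronously iterate x_i <- min(U_i, min_l (A_l x + b_l)_i).

    Sketch only: A_list are nonnegative matrices with zero diagonal
    (lists of lists), U is the upper-bound vector; nodes are re-queued
    whenever some variable changes by more than tol."""
    n = len(U)
    x = list(U)                           # start from the upper bound
    mults = 0

    def update(i):
        nonlocal mults
        best = U[i]
        for A, b in zip(A_list, b_list):
            s = b[i]
            for j in range(n):
                if A[i][j]:
                    s += A[i][j] * x[j]
                    mults += 1
            best = min(best, s)
        return best

    if policy == "fifo":
        queue = deque(range(n))
        while queue:
            i = queue.popleft()
            new = update(i)
            if abs(new - x[i]) > tol:
                x[i] = new
                queue.extend(j for j in range(n) if j != i and j not in queue)
    else:  # "variation": largest recent change first (max-heap via negation)
        heap = [(-float("inf"), i) for i in range(n)]
        heapq.heapify(heap)
        in_heap = set(range(n))
        while heap:
            _, i = heapq.heappop(heap)
            in_heap.discard(i)
            new = update(i)
            delta = abs(new - x[i])
            if delta > tol:
                x[i] = new
                for j in range(n):
                    if j != i and j not in in_heap:
                        heapq.heappush(heap, (-delta, j))
                        in_heap.add(j)
    return x, mults
```

Both variants terminate under the contraction property established in the Appendix, since the changes shrink geometrically until they fall below the tolerance.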
\section*{Appendix: Proofs of the Main Results}\label{sec:proof} \subsection{Proof of Proposition~\ref{Proposition:reformulation}} \begin{proof} Given $A \in \mathbb{R}^{n \times n}$ let us define, for $i=1,\ldots,n$, the sum of the elements of row $i$ \begin{equation} \label{eqn_def_s} s_i(A) := \sum_{j = 1}^n {[A]}_{ij}. \end{equation} \noindent Note that, for any $\ell \in \mathcal{L}$, matrix $P_\ell$ defined in~\eqref{eq:P_def} is positive diagonal since, by assumption, all elements of $D_\ell$ are less than $1$. One can rewrite the inequality of Problem~\eqref{eqn_prob_class_lin} as \begin{align*} &\underline 0 \leq \underset{\ell \in \mathcal{L}}{\glb} \{ A_\ell x - (D_\ell - D_\ell + I)x+ b_\ell \} \\ \Leftrightarrow\ & \underline 0 \leq \underset{\ell \in \mathcal{L}}{\glb} \{(A_\ell - D_\ell) x - (I - D_\ell)x + b_\ell \} \\ \Leftrightarrow\ & \underline 0 \leq \underset{\ell \in \mathcal{L}}{\glb} \{(I - D_\ell)^{-1}(A_\ell - D_\ell) x - x + (I - D_\ell)^{-1}b_\ell \} \\ \Leftrightarrow\ & x \leq \underset{\ell \in \mathcal{L}}{\glb} \{(I - D_\ell)^{-1}(A_\ell - D_\ell) x + (I - D_\ell)^{-1}b_\ell \} \\ \Leftrightarrow\ & x \leq \underset{\ell \in \mathcal{L}}{\glb} \{{P_\ell}^{-1}(A_\ell - D_\ell) x + {P_\ell}^{-1}b_\ell \} \end{align*} Then, set $\hat A_\ell := {P_\ell}^{-1}(A_\ell - D_\ell)$, $\hat b_\ell := {P_\ell}^{-1} b_\ell$ and $\hat g(x)=\underset{\ell \in \mathcal{L}}{\bigwedge} \{\hat g_\ell(x)\} \wedge U$, where, for $\ell \in \mathcal{L}$, \begin{equation}\label{def:hat_g_ell} \hat g_\ell(x) := \hat A_\ell x + \hat b_\ell. \end{equation} Note that $\hat g$ is monotonic (since all entries of $\hat A_\ell$ are nonnegative) and, for $i=1,\ldots,n$, $\left[\hat g \right]_i$ is independent of $x_i$ (since the diagonal entries of $\hat A_\ell$ are null). Note also that $\hat b_\ell$ is nonnegative. Hence, Problem~\eqref{eqn_prob_class_lin} takes on the form of Problem~\eqref{eqn_prob_class}.
\end{proof} \subsection{Proof of Proposition~\ref{thm_main}} Given $P_\ell$ as in~\eqref{eq:P_def}, for $i \in \mathcal{V}$ and $\ell \in \mathcal{L}$ we have that \begin{align*} s_i(\hat A_\ell) \leq \frac{\gamma - \left[D_\ell\right]_{ii}}{\left[P_\ell\right]_{ii}}, \end{align*} where $s_i$ is defined in~\eqref{eqn_def_s} and $\hat A_\ell$ is defined as in~\eqref{def:Ab_hat}. Let us note that \begin{equation}\label{S_def} \max_{\substack{\ell \in \mathcal{L} \\ i \in \mathcal{V}}} \left\{ s_i(\hat A_\ell) \right\} \leq \max_{\substack{\ell \in \mathcal{L} \\ i \in \mathcal{V}}} \left\{ \frac{\gamma - \left[D_\ell\right]_{ii}}{\left[P_\ell\right]_{ii}} \right\} = \max_{\substack{\ell \in \mathcal{L} \\ i \in \mathcal{V}}} \left\{\frac{\gamma - \left[D_\ell\right]_{ii}}{1 - \left[D_\ell\right]_{ii}}\right\} = \hat\gamma, \end{equation} where $\hat \gamma$ is defined as in~\eqref{def:hat_gamma}. Note that the term on the left-hand side is the maximum of $s_i(A)$ over all $i \in \mathcal{V}$ and over all matrices $A \in \mathbb{R}^{n \times n}$ which can be obtained by combining the rows of the matrices $A_\ell$, $\ell \in \mathcal{L}$. We prove that $\hat\gamma \leq \gamma$ under the given assumptions. Indeed, it is immediate to see that the function \begin{equation}\label{eq:Sd} S(d) := \frac{\gamma - d}{1 - d} \end{equation} is monotone decreasing for $d \in [0, \gamma]$, so that $S(d) \le S(0) = \gamma$ and hence $\hat\gamma \le \gamma$. We remark that, for any $\ell \in \mathcal{L}$, $\left\| \hat A_\ell \right\|_\infty \leq \hat \gamma$. Now, for any $x \in \mathbb{R}^n$, let us define $\hat g_U(x) := U$, while for any $\ell \in \mathcal{L}$, $\hat g_\ell(x)$ is defined as in~\eqref{def:hat_g_ell}. It is immediate to see that $\forall x, y \in \mathbb{R}^n$, $\left\| \hat g_i(x) - \hat g_i(y) \right\|_\infty \leq \hat \gamma \| x- y \|_\infty$, for any $i \in \mathcal{L} \cup \{U\}$.
Then, by Lemma~\ref{lemma:lipschitz} we have that, for $\hat g(x) = \displaystyle\bigwedge_{k \in \mathcal{L} \cup \{U\}} \hat g_k(x)$, it holds that $\forall x, y \in \mathbb{R}^n$, $\left\| \hat g(x) - \hat g(y) \right\|_\infty \leq \hat \gamma \| x - y \|_\infty$, that is, $\hat g$ is a contraction. \subsection{Proof of Proposition~\ref{thm:convergence_speed}} We first remark that $x_0 \geq x^+$ implies $x_k \geq x^+$ and $\bar g(x_k) \geq x^+$ for any $k$, where $\bar g$ is defined as in~\eqref{eq:fixed_point_ref2}. Then, we provide a lower bound for $\left\| \bar g(x_k) - \bar g(x^+) \right\|_\infty$. Let $\bar A \in \mathbb{R}^{n \times n}_+$ and $\bar b \in \mathbb{R}^n_+$ be such that $\bar A x_k + \bar b = g(x_k)$. Note that $\bar A$ is obtained by a combination of the rows of matrices $A_\ell$, with $\ell \in \mathcal{L}$. In other words, for each $i \in \{1, \ldots, n\}$, $\left[ \bar A \right]_{i*} = \left[ A_{\ell_i} \right]_{i*}$ for some $\ell_i \in \mathcal{L}$. Then, in view of $x_k \geq x^+$, $x^+ \leq \bar A x^+ + \bar b$ and $\bar A \geq 0$, \begin{align*} \left\| \bar g(x_k) - \bar g(x^+) \right\|_\infty =& \left\| \bar A x_k + \bar b - x^+ \right\|_\infty \geq \left\| \bar A x_k + \bar b - (\bar A x^+ + \bar b) \right\|_\infty = \left\| \bar A(x_k - x^+) \right\|_\infty \geq \\ \geq& \left\| \diag(\bar A) (x_k - x^+) \right\|_\infty \geq (1 - \Delta)\gamma \left\| x_k - x^+ \right\|_\infty, \end{align*} where the last inequality follows from \eqref{dominant_diagonal}.
Then, the result follows by observing that \begin{gather*} \frac{\Delta \gamma}{1 - (1 - \Delta)\gamma} < (1 - \Delta)\gamma\ \Leftrightarrow\ \Delta \gamma < (1 - \Delta)\gamma - (1 - \Delta)^2\gamma^2\ \Leftrightarrow \\ \Leftrightarrow\ \gamma^2\Delta^2 + 2(1 - \gamma)\gamma\Delta - (1 - \gamma)\gamma < 0 \Leftrightarrow\ \Delta \in \left[\left. 0, \frac{\sqrt{1 - \gamma} - (1 - \gamma)}{\gamma} \right.\right). \end{gather*} \end{document}
\begin{document} \title[Multiscale splitting for spatial stochastic kinetics]{Pathwise error bounds in multiscale variable splitting methods for spatial stochastic kinetics} \author[A. Chevallier]{Augustin Chevallier} \address{INRIA Sophia Antipolis \\ 2004 Route des Lucioles,\\ 06902 Valbonne, France.} \email{[email protected]} \author[S. Engblom]{Stefan Engblom} \address{Division of Scientific Computing \\ Department of Information Technology \\ Uppsala University \\ SE-751 05 Uppsala, Sweden.} \urladdr{\url{http://user.it.uu.se/~stefane}} \email{[email protected]} \thanks{Corresponding author: S. Engblom, telephone +46-18-471 27 54, fax +46-18-51 19 25.} \keywords{Hybrid mesoscopic model; Mean square bounds; Continuous-time Markov chain; Jump process; Rate equation} \subjclass[2010]{Primary: 65C20, 60J22; Secondary: 65C40, 60J27} \date{\today} \begin{abstract} Stochastic computational models in the form of pure jump processes occur frequently in the description of chemical reactive processes, of ion channel dynamics, and of the spread of infections in populations. For spatially extended models, the computational complexity can be rather high such that approximate multiscale models are attractive alternatives. Within this framework some variables are described stochastically, while others are approximated with a macroscopic point value. We devise theoretical tools for analyzing the pathwise multiscale convergence of this type of variable splitting methods, aiming specifically at spatially extended models. Notably, the conditions we develop guarantee well-posedness of the approximations without requiring explicit assumptions of \textit{a priori} bounded solutions. We are also able to quantify the effect of the different sources of errors, namely the \emph{multiscale error} and the \emph{splitting error}, respectively, by developing suitable error bounds. Computational experiments on selected problems serve to illustrate our findings.
\end{abstract} \selectlanguage{english} \maketitle \section{Introduction} Mesoscopic spatially extended stochastic models are in frequent use in many fields, with notable examples found in cell biology, neuroscience, and epidemiology. The traditional macroscopic description is a partial differential equation (PDE) governing the flow of concentration field variables in a generalized reaction-transport process. Whenever a certain concentration is small enough, discrete stochastic effects become more pronounced, thus invalidating the assumptions behind the macroscopic model. An alternative is then to turn to a mesoscopic stochastic model, a continuous-time Markov chain over a discrete state-space. This model often remains accurate at an acceptable computational complexity. In the traditional non-spatial, or well-stirred setting, early work by Kurtz connected these two descriptions via limit theorems, showing essentially that continuous approximations emerge in the limit of large molecular numbers, sometimes referred to as the ``thermodynamic limit''. Strong approximation theorems in the same setting were later also developed (for more on this, see the monograph \cite{Markovappr} and the references therein). \emph{Multiscale-}, or hybrid descriptions, in which the two scales are blended, have since attracted many researchers. The focus of the research tends to fall into one of two categories; either ``theoretical'' and concerning error bounds and rates of convergence, or more ``practical'' by developing actual implementations and general software. In the first category, tentative analyses of specific examples are found in \cite{kurtz_multiscale}, while \cite{plyasunov_mscaleMMS, TJahnkeMMS2012} are of more general character and based on averaging techniques and conditional expectations, respectively. A related analysis in the sense of mean-square convergence for operator splitting techniques is found in \cite{jsdesplit}.
In \cite{kurtz_multiscale2} the issue of a proper scaling is stressed and similar remarks are made in \cite{AGangulyMMS2014}, where notably, a practical multiscale simulation algorithm is also devised. Towards the more algorithmic side, an early suggestion for a hybrid method in \cite{haseltine_HSSA} was followed up by several others \cite{adaptiveHSSA, Hy3S, HyMSMC}. Related multiscale algorithms based on quasi-equilibrium assumptions are found in \cite{nestedSSA, slowSSA}, and the method in \cite{ssa_parareal} relied on the macroscale description as a preconditioner to bring out parallelism. With few exceptions \cite{parallel_LKMC, pseudo_compartment}, the main body of work has been done in the well-stirred (or 0-dimensional) setting. Since the work \cite{master_spatial} and the software described in \cite{URDMEpaper}, however, it is fairly well understood how \emph{spatial} models are to be developed. Here the computational complexity is much higher such that multiscale methods appear as a very attractive alternative. This is the starting point for the present contribution. The goal of the analysis in this paper is twofold. We will firstly deal with the multiscale analysis required for the splitting of the state variable into a stochastic and a deterministic part, respectively. Secondly, we will also deal with the numerical analysis relied upon when designing a basic but representative time-discretization of this approximating process. The paper is organized as follows: below we first summarize the main results of the paper. In \S\ref{sec:jsdes} we work through the description of mesoscopic reactive processes as continuous-time Markov chains with a focus on the spatial case. A substantial effort is made to avoid any possibly circular assumptions on the solution regularity, but rather to prove all results within a single coherent framework.
The analysis of the multiscale approximation is found in \S\ref{sec:analysis}, where error bounds for both the \emph{multiscale} and the \emph{splitting} errors are developed. Our approach is pathwise in the sense that the errors are measured in $L_2$ over a single probability space. Selected numerical examples are presented in \S\ref{sec:examples}, and a concluding discussion is offered in \S\ref{sec:conclusions}. \subsection{Summary of main results} A brief orientation of the technical results of the paper is as follows: \begin{enumerate} \item \label{it:stab1} Theorem~\ref{th:exist0} proves a strong regularity result for the type of spatial reactive processes considered in the paper. \item Theorem~\ref{th:exist1} proves the corresponding result in the setting of a multiscale framework. In particular, this reveals partial assumptions for when a multiscale description is meaningful. \item \label{it:stab2} Theorems~\ref{th:exist2} and \ref{th:exist3} similarly develop regularity results for the \emph{multiscale-} and the \emph{split-step approximations}, respectively. \item \label{it:conv1} Theorems~\ref{th:ScalingErrorBounded} and \ref{th:ScalingError} provide for a multiscale convergence theory when parts of the dynamics are approximated via deterministic terms. \item \label{it:conv2} Theorems~\ref{th:spliterr2} and \ref{th:spliterr1} similarly provide for a convergence theory of split-step methods in a general multiscale setting. \end{enumerate} In this list, items \ref{it:stab1}--\ref{it:stab2} prove well-posedness and stability for the various processes involved. Following the celebrated Lax principle, items \ref{it:conv1}--\ref{it:conv2} next prove convergence and error estimates by an investigation of the consistency of the different approximations.
\section{Mesoscopic spatial stochastic kinetics} \label{sec:jsdes} We devote this section to some technical developments; \S\ref{subsec:rdme}--\ref{subsec:mesh} summarize reaction-transport type modeling over irregular lattices, and regularity results under suitable model assumptions are developed in \S\ref{subsec:RDMEstab}. The variable splitting setup to be studied is similarly detailed in \S\ref{subsec:scaling}--\ref{subsec:splitting}, where the corresponding regularity results are evaluated anew. Throughout the paper we shall remain in the framework of continuous-time Markov processes on a discrete state-space, albeit with some special structure imposed from the spatial context. Assuming a process $X(t) \in \mathbf{Z}_{+}^{D}$ counting at time $t$ the number of entities in each of $D$ compartments, a set of $R$ state transitions $X \mapsto X-\mathbb{N}_{r}$ is generally prescribed by \begin{align} \label{eq:prop} \mathbf{P}\left[X(t+dt) = x-\mathbb{N}_{r}| \; X(t) = x\right] &= w_{r}(x) \, dt+o(dt), \end{align} for $r = 1\ldots R$. To enforce a conservative chain which remains in $\mathbf{Z}_{+}^{D}$, we assume $w_{r}(x) = 0$ whenever $x-\mathbb{N}_{r} \not \in \mathbf{Z}_{+}^{D}$. \subsection{Continuous-time Markov chains on irregular lattices} \label{subsec:rdme} In the traditional well-stirred setting we have $D$ species interacting according to $R$ chemical reactions in some fixed volume $V_{tot}$. Given an initial state $X(0)$, the dynamics is then fully described by the \emph{stoichiometric matrix} $\mathbb{N} \in \mathbf{Z}^{D \times R}$, and $w(x) \equiv [w_{1}(x), \ldots, w_{R}(x)]^{T}$, the set of \emph{propensities}. 
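As a concrete illustration of \eqref{eq:prop}, a chain of this form can be simulated exactly with the direct stochastic simulation algorithm (Gillespie's method). The sketch below is our own minimal illustration, not tied to any implementation used later in the paper; the function names and the stoichiometric conventions follow the notation above.

```python
import random

def ssa(x0, N, propensities, T, seed=1):
    """Direct-method (Gillespie) simulation of the chain above: transition
    r fires at rate w_r(x) and updates x <- x - N[:, r].

    Sketch only: `N` is the D x R stoichiometric matrix given as a list of
    rows, and `propensities(x)` returns [w_1(x), ..., w_R(x)], assumed to
    vanish whenever the update would leave the nonnegative orthant."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while True:
        w = propensities(x)
        w0 = sum(w)
        if w0 == 0.0:
            return x                     # absorbing state reached
        t += rng.expovariate(w0)         # exponential waiting time, rate w0
        if t > T:
            return x
        u, r = rng.random() * w0, 0      # pick transition r w.p. w_r / w0
        while u > w[r]:
            u -= w[r]
            r += 1
        for i in range(len(x)):          # state update x <- x - N_r
            x[i] -= N[i][r]
```

For instance, a pure death process ($D = 1$, death rate proportional to the copy number) is eventually absorbed at zero, and the propensity vanishing at the boundary enforces the conservative-chain requirement stated above.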
Assuming a probability space $(\Omega,\mathcal{F},\mathbf{P})$ supporting $R$-dimensional Poisson processes, the state is evolved according to \cite[Chap.~6.2]{Markovappr} \begin{align} \label{eq:Poissrepr} X_i(t) &= X_i(0)-\sum_{r = 1}^{R} \mathbb{N}_{ri} \Pi_{r} \left( \int_{0}^{t} w_{r}(X(s)) \, ds \right), \end{align} for species $i = 1\ldots D$ and with standard unit-rate independent Poisson processes $\Pi_{r}$, $r = 1\ldots R$. If the assumption of a spatially uniform distribution no longer holds, notation for spatial dependence needs to enter. The given continuous volume $V_{tot}$ is discretized into $J$ smaller voxels $(V_j)_{j = 1}^J$ and the state $X \in \mathbf{Z}_{+}^{D \times J}$, where $X_{ij}$ is the number of molecules of the $i$th species in the $j$th voxel. The assumption of global homogeneity is replaced with a \emph{local} assumption about uniformity in each voxel such that the dynamics \eqref{eq:Poissrepr} may be used anew on a per-voxel basis. Adding suitable terms covering any specified transport process we get \begin{align} \label{eq:RDMEPoissrepr} X_{ij}(t) = X_{ij}(0) &- \sum_{r = 1}^{R} \mathbb{N}_{ri} \Pi_{rj} \left( \int_{0}^{t} w_{rj}(X_{\cdot ,j}(s)) \, ds \right) \\ \nonumber &-\sum_{k = 1}^{J} \Pi_{ijk}' \left( \int_{0}^{t} q_{ijk}X_{ij}(s) \, ds \right) \\ \nonumber &+\sum_{k = 1}^{J} \Pi_{ikj}' \left( \int_{0}^{t} q_{ikj}X_{ik}(s) \, ds \right), \end{align} where $q_{ijk}$ is the rate per unit of time for species $i$ to move from the $j$th voxel to the $k$th. An important consequence of the integral representation \eqref{eq:RDMEPoissrepr} is \emph{Dynkin's formula} \cite[Chap.~9.2.2]{BremaudMC}.
For $f:\mathbf{Z}_{+}^{D \times J} \to \mathbf{R}$ a suitable function, \begin{align} \nonumber \Expect&\left[f(X(\StopT{t}))-f(X(0))\right] = \\ \label{eq:Dynkin} &\Expect\Biggl[ \int_{0}^{\StopT{t}} \sum_{j = 1}^J \sum_{r = 1}^{R} w_{rj}(X_{\cdot,j}(s)) \left[ f(X(s)-\mathbb{N}_{r}\mathbbm{1}_{j}^{T})- f(X(s)) \right] \, ds \Biggr] \\ \nonumber +&\Expect\Biggl[ \int_{0}^{\StopT{t}} \sum_{j,k = 1}^J \sum_{i = 1}^D q_{ijk} X_{ij}(s) \left[ f(X(s)-\mathbbm{1}_i\mathbbm{1}_{j}^{T}+\mathbbm{1}_i\mathbbm{1}_{k}^{T})- f(X(s)) \right] \, ds \Biggr] \\ \nonumber +&\Expect\Biggl[ \int_{0}^{\StopT{t}} \sum_{j,k = 1}^J \sum_{i = 1}^D q_{ikj} X_{ik}(s) \left[ f(X(s)+\mathbbm{1}_i\mathbbm{1}_{j}^{T}-\mathbbm{1}_i\mathbbm{1}_{k}^{T})- f(X(s)) \right]\, ds \Biggr], \end{align} expressed in terms of the stopped process $X(\StopT{t}) = X(t \wedge \tau_{P})$ for a stopping time $\tau_{P} := \inf_{t \ge 0}\{\|X(t)\| > P\}$ in some suitable norm, and $P > 0$ an arbitrary real number. In \eqref{eq:Dynkin}, $\mathbbm{1}_j$ is a column vector of suitable height with a single 1 at position $j$ and zeros elsewhere. \subsection{Mesh regularity} \label{subsec:mesh} The subdivision of the total volume $V_{tot}$ into smaller voxels is in principle arbitrary. However, any meaningful analysis will clearly depend to some extent on the regularity of this discretization. \begin{definition}[\textit{Mesh regularity parameters}] \label{def:mesh} We consider a geometry in $d$ dimensions and total volume $V_{tot}$, discretized by any member in the set of meshes $\mathcal{M}$.
For any such mesh $M \in \mathcal{M}$ consisting of voxel volumes $(V_j)_{j=1}^{J}$ we assume that it holds that \begin{align} \label{eq:uniform_mesh} &m_V \bar{V}_{M} \le V_j \le M_V \bar{V}_{M}, \\ \label{eq:voxel_shape} &m_h V_j^{1/d} \le \diam(V_j) \le M_h V_j^{1/d}, \\ \label{eq:connectivity} &|\{k; \; q_{ijk} \not = 0\}| \leq M_D, \end{align} for constants $0 < m_V \le M_V$, $0 < m_h \le M_h$, $M_D$, and average voxel volume $\bar{V}_{M} = J^{-1}\sum_{j = 1}^{J} V_{j}$. Hence under this parametrization we may write \begin{align*} \mathcal{M} = \mathcal{M}(m_V,M_V,m_h,M_h,M_D). \end{align*} \end{definition} Informally, \eqref{eq:uniform_mesh} measures how far the meshes in $\mathcal{M}$ are from being uniform, \eqref{eq:voxel_shape} ensures that no single voxel degenerates into a shape of dimension less than $d$, and \eqref{eq:connectivity} that the connectivity of the mesh is bounded. In the present paper \eqref{eq:voxel_shape} is not used explicitly; this assumption assures a connection to the macroscopic viewpoint in that a concentration variable may be meaningfully defined everywhere. \subsection{Solution regularity} \label{subsec:RDMEstab} We next ensure the well-posedness of \eqref{eq:RDMEPoissrepr} by deriving some pathwise bounds on this process. To get some feeling for what is going on, we first look briefly at the corresponding PDE setting. Assume for simplicity that the transport rates $q_{ijk}$ have been chosen as a consistent discretization of the operator $\sigma_i \Delta$ under homogeneous Neumann conditions on the mesh $M$.
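On a uniform one-dimensional mesh with voxel width $h$, such a choice amounts to $q_{ijk} = \sigma_{i}/h^{2}$ for adjacent voxels $j,k$ and zero otherwise. The following is our own minimal sketch (function names and the dense layout are illustrative), including a check that the rates act as a discrete Laplacian with zero flux at the boundary:

```python
def diffusion_rates_1d(J, sigma, h):
    """Jump rates q[j][k] for one species on a uniform 1D mesh of J voxels
    of width h: q = sigma / h^2 to each adjacent voxel, with reflecting
    (homogeneous Neumann) boundaries.  Illustrative sketch only."""
    rate = sigma / h**2
    q = [[0.0] * J for _ in range(J)]
    for j in range(J):
        if j > 0:
            q[j][j - 1] = rate
        if j < J - 1:
            q[j][j + 1] = rate
    return q

def apply_generator(q, v):
    """(Qv)_j = sum_k q[j][k] (v_k - v_j): the centered second difference
    sigma * v'' in the interior, and zero flux at the two end voxels."""
    J = len(v)
    return [sum(q[j][k] * (v[k] - v[j]) for k in range(J)) for j in range(J)]
```

Applied to a constant vector the rates produce no net flow (total mass is conserved), while in the interior they reproduce $\sigma \Delta$ to second order.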
Denoting a deterministic time-dependent concentration variable by $v_{i} = v_i(t,x)$ for $x \in \mathbf{R}^d$ and $i = 1\ldots D$, a macroscopic reaction-diffusion PDE corresponding to \eqref{eq:RDMEPoissrepr} reads \begin{align} \label{eq:RDPDE} \frac{\partial v_i}{\partial t} &= \sigma_{i}\Delta v_i- \sum_{r = 1}^{R} \mathbb{N}_{ri} u_{r}(v_{\cdot}), \mbox{ in $V_{tot}$, } \quad \frac{\partial v_i}{\partial n} = 0, \mbox{ on $\partial V_{tot}$}, \end{align} for certain nonlinear rates $u_r$, $r = 1\ldots R$ to be prescribed below. Equipped with suitable initial data, \eqref{eq:RDPDE} can be expected to be a well-posed initial-boundary value problem in $L^{\infty}([0,T]) \times L^{p}(V_{tot})$ for any $p \ge 1$. For the stochastic case \eqref{eq:RDMEPoissrepr}, and in the non-spatial setting, an analysis in the form of assumptions and various \textit{a priori} bounds has been developed previously \cite{jsdestab}. We borrow many ideas from this work in what follows. The propensities in \eqref{eq:RDMEPoissrepr} generally obey the \emph{density dependent} scaling such that $w_{rj}(x) = V_{j} u_{r}(V_{j}^{-1}x)$ for some dimensionless function $u_{r}$ \cite[Chap.~11]{Markovappr}. We further expect from a physically realistic model that the number of molecules in an isolated volume $V_j$ can somehow be bounded \textit{a priori}. To this end we postulate the existence of a weighted norm \begin{align} \label{eq:ldef} \wnorm{x} := \boldsymbol{w}^{T} x, \quad x \in \mathbf{R}_{+}^{D}, \end{align} normalized such that $\min_{i} \boldsymbol{w}_{i} = 1$.
Following \cite{jsdestab} we formulate \begin{assumption}[\textit{Reaction regularity}] \label{ass:rbound} For a mesh $M \in \mathcal{M}$ consisting of voxel volumes $(V_j)_{j=1}^{J}$ we assume the density dependent scaling, \begin{align} \label{eq:ddepend} w_{rj}(x) &= V_{j} u_{r}(V_{j}^{-1}x), \\ \intertext{where $u$ is \emph{independent of the mesh} and further satisfies,} \label{eq:1bnd} -\boldsymbol{w}^{T} \mathbb{N} u(x) &\le A+\alpha \wnorm{x}, \\ \label{eq:bnd} (-\boldsymbol{w}^{T} \mathbb{N})^{2} u(x)/2 &\le B+\beta_{1} \wnorm{x}+\beta_{2}\wnorm{x}^{2}, \\ \label{eq:lip} |u_{r}(x)-u_{r}(y)| &\le L_{r}(P)\|x-y\|, \mbox{ for $r = 1\ldots R$, and $\wnorm{x} \vee \wnorm{y} \le P$}. \end{align} With the exception of $\alpha$, all parameters $\{A,B,\beta_{1},\beta_{2},L\}$ are assumed to be non-negative. \end{assumption} When considering spatially varying solutions, the natural analogue to \eqref{eq:ldef} is \begin{align} \label{eq:l1norm} \wunorm{X} &\equiv \sum_{j = 1}^{J} \wnorm{X_{\cdot,j}} = \boldsymbol{w}^{T} X \boldsymbol{1}, \end{align} for $\boldsymbol{1}$ an all-unit column vector of suitable height. Our starting point is Dynkin's formula \eqref{eq:Dynkin}. We find \begin{align} \nonumber \Expect\left[\wunorm{X(\StopT{t})}^{p}\right] &= \Expect\left[\wunorm{X(0)}^{p}\right]+ \Expect\left[ \int_{0}^{\StopT{t}} F(X(s)) \, ds \right] \\ \intertext{where} \label{eq:Fdef} F(X) &\equiv \sum_{j = 1}^J \sum_{r = 1}^{R} w_{rj}(X_{\cdot,j}) \left[ \left( \wunorm{X}-\boldsymbol{w}^{T} \mathbb{N}_{r} \right)^{p}- \wunorm{X}^{p} \right]. \end{align} We quote the following convenient inequality. \begin{lemma}[\textit{Lemma~4.6 in \cite{jsdestab}}] \label{lem:diffbound} Let $f(x) \equiv (x+y)^{p}-x^{p}$ with $x \in \mathbf{R}_{+}$ and $y \in \mathbf{R}$.
Then for integer $p \ge 1$ we have the bounds \begin{align} \label{eq:signed_diffbound} f(x) &\le p y x^{p-1}+ 2^{p-4} p(p-1) y^{2} \left[ x^{p-2}+ |y|^{p-2} \right], \\ \label{eq:abs_diffbound} |f(x)| &\le p |y| 2^{p-2} \left[ x^{p-1}+ |y|^{p-1} \right]. \end{align} \end{lemma} Using Lemma~\ref{lem:diffbound} \eqref{eq:signed_diffbound}, Assumption~\ref{ass:rbound}~\eqref{eq:ddepend}--\eqref{eq:bnd}, and Definition~\ref{def:mesh}~\eqref{eq:uniform_mesh} we obtain, where for brevity $x \equiv \wunorm{X}$, \begin{align} \label{eq:Fbound} F(X) &\le p(AV_{tot}+\alpha x) x^{p-1}+ C_p(BV_{tot}+\beta_{1} x+ \beta_{2}^V x^{2}) (x^{p-2}+C_\mathbb{N}^{p-2}), \end{align} where $C_p := 2^{p-3} p(p-1)$, $\beta_{2}^V := \beta_{2} m_V^{-1}\bar{V}_M^{-1}$, and $C_\mathbb{N} := \|\boldsymbol{w}^{T} \mathbb{N}\|_{\infty}$. Combining \eqref{eq:Fdef} and \eqref{eq:Fbound} and using Young's inequality several times we may obtain a bound of the form \begin{align} \label{eq:momest} \Expect\left[\wunorm{X(\StopT{t})}^{p}\right] &\le \Expect\left[\wunorm{X(0)}^{p}\right]+\Expect\Biggl[ \int_{0}^{t} C(1+\wunorm{X(\StopT{s})}^{p}) \, ds \Biggr], \end{align} \noindent for some $C > 0$. Using Gronwall's inequality and letting $P \to \infty$ we arrive at \begin{theorem} \label{th:Epbound} Let $X(t)$ obey \eqref{eq:RDMEPoissrepr} under Assumption~\ref{ass:rbound}. Then for any integer $p \ge 1$, \begin{align} \Expect\left[\wunorm{X(t)}^{p}\right] &\le \left( \Expect\left[\wunorm{X(0)}^{p}\right] +1\right)\exp(Ct)-1, \end{align} where the constant $C > 0$ depends on $p$ and on the constants in the assumptions. \end{theorem} \begin{proof} It remains to prove that $\StopT{t} \to t$ almost surely as $P \to \infty$. Suppose to the contrary that $\StopT{t} = \tau_P \wedge t$ does not converge a.s.~to $t$ as $P \to \infty$. Define $A \equiv \{\omega; \; \forall P: \tau_P(\omega) < t\}$.
By assumption, $\mathbf{P}(A) > 0$ and, for any $\omega \in A$ and all $P > 0$, \begin{align*} \sup_{0 \leq s \leq t} \|X_s(\omega)\| > P \mbox{, or simply, } \sup_{0 \leq s \leq t} \|X_s(\omega)\| = \infty. \end{align*} In other words, $\|X(\StopT{t},\omega)\| \to \infty$ as $P \to \infty$ for every $\omega \in A$, and $\|X(\StopT{t},\omega)\|$ forms an increasing sequence with respect to $P$. Using the Lebesgue monotone convergence theorem together with $\mathbf{P}(A) > 0$, we get that $\Expect[\|X(\StopT{t})\|] \geq \Expect[\|X(\StopT{t})\| 1_{\omega \in A}] \to \infty$. However, $\Expect[\|X(\StopT{t})\|]$ is bounded from above independently of $P$ and thus we have a contradiction. \end{proof} Notably, when small voxels $V_j$ are present and quadratic reactions which are not $\boldsymbol{w}$-neutral are allowed (i.e.~$\beta_2 \not = 0$), an investigation of $C$ in \eqref{eq:momest} reveals that the second- and higher-order moments may grow as fast as $\exp(\beta_2 V_j^{-1}t)$. To achieve pathwise convergence results we will need a stronger regularity guarantee, which requires control of the martingale part via Burkholder's inequality. To this end we define the quadratic variation of a real-valued process $(Y_{t})_{t \ge 0}$ by \begin{align} \label{eq:QVdef} [Y]_{t} &= \lim_{\|\mathcal{P}\| \to 0} \sum_{k = 0}^{n-1} \left( Y_{t_{k+1}}-Y_{t_{k}} \right)^{2}, \end{align} where the partition $\mathcal{P} = \{0 = t_{0} < t_{1} < \cdots < t_{n} = t\}$ for which $\|\mathcal{P}\| := \max_{k} |t_{k+1}-t_{k}|$ and where the limit is in probability. \begin{lemma} \label{lem:quadratic_variation} Let $X(t)$ satisfy \eqref{eq:RDMEPoissrepr} under Assumption~\ref{ass:rbound}.
Then the quadratic variation of $\wunorm{X(t)}^{p}$ is bounded by \begin{align} \label{eq:quadratic_variation} \Expect\left([\wunorm{X}^{p}]_{t}^{1/2}\right) &\le \Expect\left[ \int_{0}^{t} C(1+\wunorm{X(s)}^{p}+ \beta_{2}^{V}\wunorm{X(s)}^{p+1}) \, ds \right], \end{align} where $C > 0$ again depends on $p$ and on the constants in Assumption~\ref{ass:rbound}, but not on the mesh resolution, and where $\beta_{2}^{V} := \beta_2 m_V^{-1}\bar{V}_M^{-1}$. \end{lemma} \begin{proof} Let $t_0 = 0$ and let $t_i$, $i = 1, 2, \ldots$, be the successive jump times of $X$. Then \begin{align*} [\wunorm{X}^{p}]_{\StopT{t}}^{1/2} &= \left( \sum_{0 < t_i \leq \StopT{t}} \left( \wunorm{X (t_i)}^{p} - \wunorm{X(t_{i-1})}^{p} \right)^2 \right)^{1/2}. \end{align*} Up to the stopping time, $X$ is non-explosive with probability 1 and the number of jumps in $[0,\StopT{t}]$ is finite. Thus we can use the inequality $\|\cdot\|_2 \leq \|\cdot\|_1$ to get \begin{align*} [\wunorm{X}^{p}]_{\StopT{t}}^{1/2} &\leq \sum_{0 < t_i \leq \StopT{t}} \left| \wunorm{X (t_i)}^{p} - \wunorm{X(t_{i-1})}^{p} \right|. \end{align*} The right-hand side can be written as a Lebesgue--Stieltjes integral, \begin{align*} \int_{0}^{\StopT{t}} \sum_{j = 1}^J \sum_{r = 1}^{R} \left| \left( \wunorm{X(s)}-\boldsymbol{w}t \mathbb{N}_{r} \right)^{p}- \wunorm{X(s)}^{p} \right| \, dY_{rj}(s), \end{align*} with $Y_{rj}$ the counting process $Y_{rj}(t) = \Pi_{rj}\left( \int_0^t w_{rj}(X_{\cdot,j}(s)) \, ds \right)$. Taking the expectation yields {\small \begin{align*} \Expect \left( [\wunorm{X}^{p}]_{\StopT{t}}^{1/2} \right) &\leq \Expect \left[ \int_{0}^{\StopT{t}} \sum_{j = 1}^J \sum_{r = 1}^{R} w_{rj}(X_{\cdot,j}(s)) \left| \left( \wunorm{X(s)}-\boldsymbol{w}t \mathbb{N}_{r} \right)^{p}- \wunorm{X(s)}^{p} \right|\, ds\right].
\end{align*}} Using Lemma~\ref{lem:diffbound}~\eqref{eq:abs_diffbound} and Assumption~\ref{ass:rbound}~\eqref{eq:ddepend} and \eqref{eq:bnd}, \begin{align*} &\le \Expect \left[ \int_{0}^{\StopT{t}} \sum_{j,r} p |\boldsymbol{w}t \mathbb{N}_{r}| w_{rj}(X_{\cdot,j}(s)) \; 2^{p-2} \left[ \wunorm{X(s)}^{p-1}+|\boldsymbol{w}t \mathbb{N}_{r}|^{p-1} \right] \, ds \right] \\ &\le \Expect \left[ \int_{0}^{\StopT{t}} C_p(BV_{tot}+\beta_{1}\wunorm{X(s)}+\beta_{2}^V\wunorm{X(s)}^{2}) (\wunorm{X(s)}^{p-1}+C_\mathbb{N}^{p-1}) \, ds \right]. \end{align*} Relying on the moment bound in Theorem~\ref{th:Epbound} we let $P \to \infty$ to arrive at the stated bound. \end{proof} We consider the following space of processes which are pathwise bounded in a strong sense: \begin{align} \Sspace{p} &= \left\{ X(t,\mathbf{P}elem): \begin{array}{l} X(t) \in \mathbf{Z}_{+}^{D \times J} \mbox{ is $\mathbf{P}filtr_{t}$-adapted such that } \\ \Expect[\sup_{t \in [0,T]} \wunorm{X_{t}}^{p}] < \infty \mbox{ for all } T < \infty \end{array} \right\}. \end{align} \begin{theorem}[\textit{Regularity}] \label{th:exist0} Let $X(t)$ be a solution to \eqref{eq:RDMEPoissrepr} under Assumption~\ref{ass:rbound} with $\beta_{2} = 0$. Then, if $\Expect[\wunorm{X(0)}^{p}] < \infty$, we have $\{X(t)\}_{t \ge 0} \in \Sspace{p}$. If $\beta_{2} > 0$ the conclusion remains valid under the additional requirement that $\Expect[\wunorm{X(0)}^{p+1}] < \infty$. \end{theorem} \begin{proof} This result follows as a combination of Theorem~\ref{th:Epbound} and Lemma~\ref{lem:quadratic_variation}. We find that \begin{align*} \wunorm{X(\StopT{t})}^{p} &= \wunorm{X(0)}^{p}+\int_{0}^{\StopT{t}} F(X(s)) \, ds+M_{\StopT{t}}, \end{align*} with $F$ defined in \eqref{eq:Fdef}.
The quadratic variation of the local martingale $M_{\StopT{t}}$ can be estimated via Lemma~\ref{lem:quadratic_variation}, \begin{align} \label{eq:Mqv} \Expect \left( [M]_{\StopT{t}}^{1/2} \right) &\le \Expect \left[ \int_{0}^{\StopT{t}} C(1+\wunorm{X(s)}^{p}+ \beta_{2}^{V}\wunorm{X(s)}^{p+1}) \, ds \right]. \end{align} Assume first that $\beta_{2} = 0$. Using the previously developed bound in \eqref{eq:Fbound} and \eqref{eq:momest} for the drift part we get \begin{align*} \wunorm{X(\StopT{t})}^{p} &\le \wunorm{X(0)}^{p}+\int_{0}^{\StopT{t}} C(1+\wunorm{X(s)}^{p}) \, ds+|M_{\StopT{t}}|. \\ \intertext{Combining with \eqref{eq:Mqv} we find after using Burkholder's inequality \cite[Chap.~IV.4]{protterSDE},} \Expect\left[ \sup_{s \in [0,\StopT{t}]} \wunorm{X(s)}^{p}\right] &\le \Expect[\wunorm{X(0)}^{p}]+\int_{0}^{\StopT{t}} C\left(1+\Expect\left[\sup_{s' \in [0,s]} \wunorm{X(s')}^{p}\right] \right)\, ds. \\ \intertext{For clarity, writing $\wunorm{X}^{p}(t) := \sup_{s \in [0,t]} \wunorm{X(s)}^{p}$ we find that} \Expect[\wunorm{X}^{p}(\StopT{t})] &\le \Expect[\wunorm{X(0)}^{p}]+\int_{0}^{t} C\left(1+\Expect[\wunorm{X}^{p}(\StopT{s})]\right) \, ds. \end{align*} Gronwall's inequality now implies that $\Expect[\wunorm{X}^{p}(\StopT{t})]$ is bounded in terms of the initial data and time $t$. By Fatou's lemma the claim follows by letting $P \to \infty$. We next consider $\beta_{2} > 0$. Using Theorem~\ref{th:Epbound} we still have the bound \eqref{eq:Mqv} which yields \begin{align*} \Expect \left( [M]_{\StopT{t}}^{1/2} \right) &\le \int_{0}^{\StopT{t}} C(1+\Expect[\wunorm{X(s)}^{p+1}]) \, ds \le (e^{C\StopT{t}}-1)(\Expect[\wunorm{X(0)}^{p+1}]+1), \end{align*} from which we similarly obtain a bound in terms of $\Expect[\wunorm{X(0)}^{p+1}]$. \end{proof} \subsection{Scaling} \label{subsec:scaling} We shall now regard the transport rates, the reaction rates, and the magnitude of the state variables as problem parameters which may induce a scale separation.
Although a completely general multiscale analysis is possible within the current framework, to fix our ideas and in the interest of a transparent presentation, we consider a concrete, but still quite general two-scale separation. \begin{condition}[\textit{Scale separation}] \label{cond:scales} Let a \emph{scale vector} $S \in \mathbf{R}^{D}$ be given. The transport- and reaction rates are assumed to obey the scaling laws \begin{align} \label{eq:scaled} q_{ijk} x &= \epsilon^{-\mu_i} \bar{q}_{ijk} S^{-1}x, \\ \label{eq:scaler} u_r(x) &= \epsilon^{-\nu_r^{(1)}} \bar{u}_r(x) = \epsilon^{-\nu_r^{(1)}-\nu_r^{(2)}} \bar{u}_r(S^{-1}x) \\ \intertext{for $i = 1 \ldots D$, $(j,k) = 1\ldots J$, and $r = 1 \ldots R$. For the state variables we define} \label{eq:scalec} X_{i,\cdot}(t) &= S_{i} \bar{X}_{i,\cdot}(t), \qquad S_{i} = 1 \mbox{ or } \epsilon^{-1}, \end{align} where $\nu_r^{(1)}$ is the scaling of the rate (fast/slow) while $\nu_r^{(2)}$ follows from the number of species involved in transition $r$ such that $S_i = \epsilon^{-1}$. Let the complete scaling be $$\nu_r = \nu_r^{(1)} + \nu_r^{(2)}.$$ The dynamics is considered for $t \in [0,T]$, $T = O(1)$ with respect to $\epsilon$. Also, all non-dimensionalized constants and propensities $\{\bar{q}_{ijk},\bar{u}_r(\cdot)\}$ are understood to be $O(1)$ with respect to $\epsilon$. \end{condition} It is also possible to analyze the general case where the species scale differently in different voxels, i.e.~$X_{ij} = S_{ij} \bar{X}_{ij}$. However, this analysis is complicated by the fact that the results then take place in a transient regime, and this regime is, in turn, difficult to estimate in general. We make a slight abuse of notation by employing $S$ as if it were the $D$-by-$D$ matrix $\diag(S)$. Using a similar convention for $\nu$ we may write \eqref{eq:scaler} in the compact form \begin{align} \label{eq:scaler_compact} u(x) &= \epsilon^{-\nu} \bar{u}(S^{-1}x).
\end{align} To take a concrete example: the bimolecular reaction $X+Y \to \emptyset$ at rate $kXY$ obeys \eqref{eq:scaler} with $\nu_r = 0$ for $k \sim \epsilon$ and one of the species scaling macroscopically as $\epsilon^{-1}$. If both species are macroscopic, then instead $\nu_r = 1$ at the same scaling of the rate $k \sim \epsilon$. Following Condition~\ref{cond:scales} we thus divide the species into two disjoint groups, $G_1$ and $G_2$, with $|G_1|+|G_2| = D_1+D_2 = D$. Informally, we suppose that species in low copy numbers are in $G_1$ and species in large copy numbers are in $G_2$. Under an appropriate enumeration of the species this implies the choice of scaling $S_{i} = 1$ for $i \in G_1 = \{1\ldots D_1\}$ and $= \epsilon^{-1}$ for $i \in G_2 = \{D_1+1\ldots D\}$ in \eqref{eq:scalec}. Following this ordering we also write $\boldsymbol{w} = [\boldsymbol{w}_1; \, \boldsymbol{w}_2]$ and $\mathbb{N} = [\mathbb{N}^{(1)};\,\mathbb{N}^{(2)}]$, where $\boldsymbol{w}_i \in \mathbf{R}_{\ge 1}^{D_i}$ and $\mathbb{N}^{(i)} \in \mathbf{R}^{D_i \times R}$ for $i \in \{1,2\}$. We find from \eqref{eq:RDMEPoissrepr} the governing equation \begin{align} \label{eq:RDMEPoissreprS} \bar{X}_{ij}(t) = \bar{X}_{ij}(0) &- \sum_{r = 1}^{R} S_{i}^{-1} \mathbb{N}_{ri} \Pi_{rj} \left( \int_{0}^{t} V_j \epsilon^{-\nu_{r}} \bar{u}_r (V_j^{-1} \bar{X}_{\cdot,j}(s)) \, ds \right) \\ \nonumber &-\sum_{k = 1}^{J} S_{i}^{-1} \Pi_{ijk}' \left( \int_{0}^{t} \epsilon^{-\mu_i} \bar{q}_{ijk} \bar{X}_{ij}(s) \, ds \right) \\ \nonumber &+\sum_{k = 1}^{J} S_{i}^{-1} \Pi_{ikj}' \left( \int_{0}^{t} \epsilon^{-\mu_i} \bar{q}_{ikj} \bar{X}_{ik}(s) \, ds \right). \end{align} For the existence of scale separation it is critical to find conditions such that according to some weight-vector $\boldsymbol{l}$, $\lunorm{\bar{X}(t)}$ for $t \in [0,T]$ remains $O(1)$ whenever $\lunorm{\bar{X}(0)}$ is $O(1)$, assuming that $T$ and $\boldsymbol{l}$ both are $O(1)$ with respect to $\epsilon$. 
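The exponent bookkeeping behind the bimolecular example above can be spelled out explicitly. The following short derivation is our own illustration of Condition~\ref{cond:scales}; the symbols $\bar{k}$, $x_X$, $x_Y$ are introduced here only for the illustration:

```latex
% X + Y -> 0 with propensity u_r(x) = k x_X x_Y and rate scaling k = \epsilon\bar{k}:
\begin{align*}
u_r(x) &= \epsilon\bar{k}\,x_X x_Y = \epsilon^{-\nu_r^{(1)}}\bar{u}_r(x)
  &&\Rightarrow\quad \nu_r^{(1)} = -1, \\
\intertext{and with $Y$ alone macroscopic, $x_Y = \epsilon^{-1}\bar{x}_Y$, so that}
u_r(x) &= \bar{k}\,\bar{x}_X\bar{x}_Y = \epsilon^{0}\,\bar{u}_r(S^{-1}x)
  &&\Rightarrow\quad \nu_r^{(2)} = 1, \qquad
     \nu_r = \nu_r^{(1)}+\nu_r^{(2)} = 0.
\end{align*}
```

With both species macroscopic, $\nu_r^{(2)} = 2$ and hence $\nu_r = 1$, in agreement with the two cases stated above.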
Unfortunately, the assumptions and analysis in \S\ref{subsec:RDMEstab} all concerned the unscaled variable $X(t)$, which is now assumed to be $O(\epsilon^{-1})$. In fact, it is not difficult to see that with, say, $\bar{\boldsymbol{l}} := S\boldsymbol{w}$ replacing $\boldsymbol{w}$ throughout Assumption~\ref{ass:rbound}, and requiring that all constants be independent of $\epsilon$, the results in \S\ref{subsec:RDMEstab} are straightforwardly translated into bounds in terms of the $\bar{\boldsymbol{l}}$-norm of $\bar{X}(t)$. Since this is just the $\boldsymbol{w}$-norm of $X(t)$ itself, however, it scales as $O(\epsilon^{-1})$. What is additionally required is that the weight-vector $\boldsymbol{l}$ can be selected \emph{independently} of $\epsilon$. \begin{assumption}[\textit{Reaction regularity, scaled case}] \label{ass:rboundS} The previous assumption \eqref{eq:ddepend} of density-dependent propensities is still assumed to hold. We further assume the existence of a vector $\boldsymbol{l} \in \mathbf{R}^D_{\ge 1}$, \emph{independent of $\epsilon$}, such that \begin{align} \label{eq:1bndS} -\boldsymbol{l}t S^{-1} \mathbb{N} &u(x) \le A+\alpha \lnorm{S^{-1} x}, \\ \label{eq:bndS} (-\boldsymbol{l}t S^{-1} \mathbb{N})^{2} &u(x)/2 \le B+\beta_{1} \lnorm{S^{-1} x}+\beta_{2}\lnorm{S^{-1} x}^{2}, \\ \label{eq:lipS} |\bar{u}_{r}(x)-\bar{u}_{r}(y)| &\le \bar{L}_{r}(\bar{P})\|x-y\|, \mbox{ for $r = 1\ldots R$, and $\lnorm{x} \vee \lnorm{y} \le \bar{P}$}. \end{align} All parameters $\{A,\alpha,B,\beta_{1},\beta_{2},L\}$ are assumed to be independent of $\epsilon$ and non-negative, with the exception of $\alpha$, which is allowed to be negative. \end{assumption} Equipped with this assumption we revisit the regularity results of \S\ref{subsec:RDMEstab}.
To this end we consider a version of $\Sspace{p}$ scaled with $S$, \begin{align} \SspaceS{p} &\equiv \left\{ \bar{X}(t,\mathbf{P}elem): \begin{array}{l} \bar{X}(t) \in S^{-1} \mathbf{Z}_{+}^{D \times J} \mbox{ is $\mathbf{P}filtr_{t}$-adapted such that } \\ \Expect[\sup_{t \in [0,T]} \lunorm{\bar{X}_{t}}^{p}] < \infty \mbox{ for all } T < \infty \end{array} \right\}, \\ \intertext{where the scaled state space is just} &S^{-1} \mathbf{Z}_{+}^{D \times J} \equiv [\mathbf{Z}_{+}^{D_1 \times J};\; \epsilon \mathbf{Z}_{+}^{D_2 \times J}], \end{align} and $\epsilon \mathbf{Z}_+ = \{0,\epsilon,2\epsilon,\ldots\}$. \begin{theorem}[\textit{Regularity, scaled case}] \label{th:exist1} Under Condition~\ref{cond:scales}, Theorems~\ref{th:Epbound} and \ref{th:exist0} both hold with the new Assumption~\ref{ass:rboundS} replacing the previous Assumption~\ref{ass:rbound} and with $\SspaceS{p}$ replacing $\Sspace{p}$. In particular: \begin{enumerate} \item The constant $C$ in Theorem~\ref{th:Epbound} can be selected independently of $\epsilon$. \item If either $\beta_2 = 0$ and $\Expect[\lunorm{\bar{X}(0)}^{p}]$ is $O(1)$ with respect to $\epsilon$, or $\beta_2 > 0$ and $\Expect[\lunorm{\bar{X}(0)}^{p+1}]$ is $O(1)$, then so is $\Expect[\sup_{s \in [0,t]} \lunorm{\bar{X}(s)}^{p}]$ for $t \in [0,T]$, $T = O(1)$ with respect to $\epsilon$. \end{enumerate} \end{theorem} The proof follows very closely the steps taken to arrive at Theorem~\ref{th:exist0} and is therefore omitted. Theorem~\ref{th:exist1} inherits from Theorem~\ref{th:exist0} the poorer regularity when $\beta_2 > 0$. The predicted growth is then $\exp(t \beta_2^{V})$ where, as in \eqref{eq:Fbound}, $\beta_2^{V} := \beta_2 m_V^{-1} \bar{V}_{M}^{-1}$, which is dependent on the mesh. \subsection{Multiscale splittings} \label{subsec:splitting} We shall consider two multiscale splittings: one ``exact'' in continuous time and one ``numerical'' in discrete time-steps of length $h$.
Thus we firstly define $\bar{Z}$, for $i$ in $G_1$ and using that $S_i = 1$, \begin{align} \label{eq:RDMEPoissrepr_ex1} \bar{Z}_{ij}(t) = \bar{Z}_{ij}(0) &- \sum_{r = 1}^{R} \mathbb{N}_{ri} \Pi_{rj} \left( \int_{0}^{t} V_j \epsilon^{-\nu_{r}} \bar{u}_r (V_j^{-1} \bar{Z}_{\cdot,j}(s)) \, ds \right) \\ \nonumber &-\sum_{k = 1}^{J} \Pi_{ijk}' \left( \int_{0}^{t} \epsilon^{-\mu_i} \bar{q}_{ijk} \bar{Z}_{ij}(s) \, ds \right) \\ \nonumber &+\sum_{k = 1}^{J} \Pi_{ikj}' \left( \int_{0}^{t} \epsilon^{-\mu_i} \bar{q}_{ikj} \bar{Z}_{ik}(s) \, ds \right), \intertext{while for $i$ in $G_2$, $S_i = \epsilon^{-1}$ and the Poisson process is approximated by a deterministic process,} \label{eq:RDMEPoissrepr_ex2} \bar{Z}_{ij}(t) = \bar{Z}_{ij}(0) &- \sum_{r = 1}^{R} \epsilon \mathbb{N}_{ri} \left( \int_{0}^{t} V_j \epsilon^{-\nu_{r}} \bar{u}_r (V_j^{-1} \bar{Z}_{\cdot,j}(s)) \, ds \right) \\ \nonumber &-\sum_{k = 1}^{J} \epsilon \left( \int_{0}^{t} \epsilon^{-\mu_i} \bar{q}_{ijk} \bar{Z}_{ij}(s) \, ds \right) \\ \nonumber &+\sum_{k = 1}^{J} \epsilon \left( \int_{0}^{t} \epsilon^{-\mu_i} \bar{q}_{ikj} \bar{Z}_{ik}(s) \, ds \right). \end{align} In general, there is no guarantee that $\bar{Z}(t)$ remains positive even when $\bar{X}(t)$ is a conservative chain. For example, the presence of a dimerization reaction, say, $A+A \to B$ at rate $A(A-1)$ can reach negative values of $B$ when $A$ is approximated by a continuous variable. In this example one can avoid this problem by reinterpreting the rate as $A(A-1) \vee 0$. In what follows we will for simplicity assume that all models are conservative and remain in the non-negative orthant, presumably after employing some kind of limiters on the rates. To see how a result similar to Theorem~\ref{th:exist1} might be obtained for the new process $\bar{Z}(t)$, we start anew from Dynkin's formula, appropriately modified for the semi-continuous setting. 
We find \begin{align} &\Expect\left[\lunorm{\bar{Z}(\StopT{t})}^{p}\right] = \Expect\left[\lunorm{\bar{Z}(0)}^{p}\right]+ \Expect\left[\int_0^{\StopT{t}} G(\bar{Z}(s)) \, ds \right], \intertext{where} \label{eq:Gdef} G(Z) &\equiv \sum_{j = 1}^J \sum_{r = 1}^{R} V_j u_r(V_j^{-1}S Z_{\cdot,j}) \left[ \left( z-\boldsymbol{l}t_1 \mathbb{N}_{r}^{(1)} \right)^{p}- z^{p}-p\boldsymbol{l}t_2\epsilon\mathbb{N}_r^{(2)}z^{p-1}\right], \end{align} and where for brevity $z \equiv \lunorm{Z}$ (compare \eqref{eq:Fdef}). Using Lemma~\ref{lem:diffbound}~\eqref{eq:signed_diffbound} we find {\small \begin{align} \label{eq:prelGbound} G(Z) &\le \sum_{j = 1}^J \sum_{r = 1}^{R} V_j u_r(V_j^{-1}S Z_{\cdot,j}) \left[ -p\boldsymbol{l}t S^{-1}\mathbb{N}_{r}z^{p-1}+ C_p/2 \left( -\boldsymbol{l}t_1 \mathbb{N}_{r}^{(1)} \right)^2 \left[z^{p-2}+C_{\mathbb{N}^{(1)}}^{p-2}\right] \right], \end{align} } where $C_p$ and $C_{\mathbb{N}^{(1)}}$ are defined below \eqref{eq:Fbound}. The goal here is to obtain a bound $G(Z) \le C(1+z^p)$ (compare \eqref{eq:Fbound}--\eqref{eq:momest}) and it is not difficult to see what assumption is required. \begin{assumption}[\textit{Reaction regularity, semi-continuous case}] \label{ass:rboundSZ} In Assumption~\ref{ass:rboundS}, replace \eqref{eq:bndS} with \begin{align} \label{eq:bndSZ} \left( -\boldsymbol{l}t_1 \mathbb{N}^{(1)} \right)^2 u(x)/2 \le B+\beta_1\lnorm{S^{-1}x}+\beta_2\lnorm{S^{-1}x}^{2}. \end{align} \end{assumption} This assumption can be understood as firstly, a signed bound \eqref{eq:1bndS} on the drift-part for the fully coupled system, and secondly, the extra assumption due to stochasticity \eqref{eq:bndSZ}, which here applies only to $i \in G_1$, that is, to the stochastic part. 
Using this in \eqref{eq:prelGbound} we find (compare \eqref{eq:Fbound}) \begin{align} \label{eq:Gbound} G(Z) &\le p(AV_{tot}+\alpha z) z^{p-1}+ C_p(BV_{tot}+\beta_{1} z+ \beta_{2}^V z^{2}) (z^{p-2}+C_{\mathbb{N}^{(1)}}^{p-2}), \end{align} and following the steps in the proofs of Theorems~\ref{th:exist0} and \ref{th:exist1} we obtain after some work the following result. \begin{theorem}[\textit{Regularity, semi-continuous case}] \label{th:exist2} The statement of Theorem~\ref{th:exist1} applies also to the approximating process $\bar{Z}(t)$ with Assumption~\ref{ass:rboundSZ} taking the role of Assumption~\ref{ass:rboundS}. The existence of solutions then concerns the semi-continuous space $\SspaceZ{p}$ and we note that the remark following Theorem~\ref{th:exist1} concerning the dependence on the mesh regularity remains valid since $\beta_2^{V}$ is present in \eqref{eq:Gbound}. \end{theorem} In practice, a numerical method is required to simulate $\bar{Z}$. The most straightforward way is to evolve the stochastic and deterministic parts in different steps, introducing a new process ${\bar{Y}^{(h)}}$ which approximates $\bar{Z}$. Following the partition of unity idea in \cite{jsdesplit} we define the kernel step function \begin{align} \sigma_{h}(t) &= 1-2 \left( \lfloor t/(h/2) \rfloor \mbox{{\footnotesize mod }} 2 \right). 
\end{align} Then for $i$ in $G_1$, \begin{align} \nonumber {\bar{Y}^{(h)}}_{ij}(t) = {\bar{Y}^{(h)}}_{ij}(0) &- \sum_{r = 1}^{R} \mathbb{N}_{ri} \Pi_{rj} \left( \int_{0}^{t} (1+\sigma_h(s)) V_j \epsilon^{-\nu_{r}} \bar{u}_r (V_j^{-1} {\bar{Y}^{(h)}}_{\cdot,j}(s)) \, ds \right) \\ \label{eq:RDMEPoissrepr_num1} &-\sum_{k = 1}^{J} \Pi_{ijk}' \left( \int_{0}^{t} (1+\sigma_h(s)) \epsilon^{-\mu_i} \bar{q}_{ijk} {\bar{Y}^{(h)}}_{ij}(s) \, ds \right) \\ \nonumber &+\sum_{k = 1}^{J} \Pi_{ikj}' \left( \int_{0}^{t} (1+\sigma_h(s)) \epsilon^{-\mu_i} \bar{q}_{ikj} {\bar{Y}^{(h)}}_{ik}(s) \, ds \right), \intertext{and for $i$ in $G_2$,} \nonumber {\bar{Y}^{(h)}}_{ij}(t) = {\bar{Y}^{(h)}}_{ij}(0) &- \sum_{r = 1}^{R} \epsilon \mathbb{N}_{ri} \left( \int_{0}^{t} (1-\sigma_h(s)) V_j \epsilon^{-\nu_{r}} \bar{u}_r (V_j^{-1} {\bar{Y}^{(h)}}_{\cdot,j}(s)) \, ds \right) \\ \label{eq:RDMEPoissrepr_num2} &-\sum_{k = 1}^{J} \epsilon \left( \int_{0}^{t} (1-\sigma_h(s)) \epsilon^{-\mu_i} \bar{q}_{ijk} {\bar{Y}^{(h)}}_{ij}(s) \, ds \right) \\ \nonumber &+\sum_{k = 1}^{J} \epsilon \left( \int_{0}^{t} (1-\sigma_h(s)) \epsilon^{-\mu_i} \bar{q}_{ikj} {\bar{Y}^{(h)}}_{ik}(s) \, ds \right). \end{align} For regularity we start anew from the semi-continuous Dynkin's formula, \begin{align} &\Expect\left[\lunorm{{\bar{Y}^{(h)}}(\StopT{t})}^{p}\right] = \Expect\left[\lunorm{{\bar{Y}^{(h)}}(0)}^{p}\right]+ \Expect\left[\int_0^{\StopT{t}} H({\bar{Y}^{(h)}}(s),s) \, ds \right], \intertext{where this time} \nonumber H(Y,s) &\equiv \sum_{j = 1}^J \sum_{r = 1}^{R} (1+\sigma_h(s))V_j u_r(V_j^{-1}S Y_{\cdot,j}) \left[ \left( y-\boldsymbol{l}t_1 \mathbb{N}_{r}^{(1)} \right)^{p}- y^{p}\right] \\ \label{eq:Hdef} &\phantom{\equiv} +\sum_{j = 1}^J \sum_{r = 1}^{R} (1-\sigma_h(s))V_j u_r(V_j^{-1}S Y_{\cdot,j}) \left[ -p\boldsymbol{l}t_2\epsilon\mathbb{N}_r^{(2)}y^{p-1}\right], \end{align} and where as before $y \equiv \lunorm{Y}$. 
This leads us to \begin{assumption}[\textit{Reaction regularity, split-step case}] \label{ass:rboundSY} In Assumption~\ref{ass:rboundS}, besides the modification made in Assumption~\ref{ass:rboundSZ}, additionally replace \eqref{eq:1bndS} with \begin{align} \label{eq:1bndSY} \max \left(-\boldsymbol{l}t_1 \mathbb{N}^{(1)} u(x), -\boldsymbol{l}t_2 \epsilon \mathbb{N}^{(2)} u(x) \right) &\le A+\alpha\lnorm{S^{-1}x}. \end{align} \end{assumption} In other words, \eqref{eq:1bndSY} bounds the drift of the stochastic and continuous parts individually, while as before \eqref{eq:bndSZ} is employed to bound the quadratic variation of the stochastic part alone. Following again the steps in the previous proofs we obtain \begin{theorem}[\textit{Regularity, split-step case}] \label{th:exist3} Theorem~\ref{th:exist2} applies also to the approximating process ${\bar{Y}^{(h)}}(t)$ under Assumption~\ref{ass:rboundSY}. The resulting \textit{a priori} bound is uniform with respect to both $\epsilon$ and $h$ provided the initial data is. \end{theorem} The approximation $\bar{X} \approx \bar{Z}$ gives rise to a \emph{multiscale error}, whereas $\bar{Z} \approx {\bar{Y}^{(h)}}$ induces a \emph{splitting error}. Quite generally, any practical numerical method relies on this very structure in $\bar{X} \approx {\bar{Y}^{(h)}}$. Insight into the nature of the total error thus follows from a consistent analysis of both approximations. This is the purpose of the next section. \section{Error analysis} \label{sec:analysis} We present in this section the error analysis of the two approximations \eqref{eq:RDMEPoissrepr_ex1}--\eqref{eq:RDMEPoissrepr_ex2} and, respectively, \eqref{eq:RDMEPoissrepr_num1}--\eqref{eq:RDMEPoissrepr_num2}. Theorems~\ref{th:exist1}, \ref{th:exist2}, and \ref{th:exist3} assert that all processes are uniformly stable in finite time. By the Lax principle the task has therefore been reduced to an investigation of the degree of consistency of the two approximations.
Preliminary lemmas for this are discussed in \S\ref{subsec:prelest}, followed by the actual error analysis in \S\ref{subsec:msconv}--\ref{subsec:splitconv}. To keep the presentation focused, some material that is heavily relied upon is developed separately in Appendices~\ref{sec:mserr} and \ref{sec:sserr}. \subsection{Preliminary estimates} \label{subsec:prelest} Intuitively, the same version of a Poisson process evaluated at two different operational times should enjoy a bounded difference, provided of course the times themselves are bounded in some suitable sense. A precise formulation of this property is related to \emph{Doob's optional sampling theorem} \cite[Theorem 17, Chap.~I.2]{protterSDE} and has only recently been investigated \cite{AGangulyMMS2014, tauAnderson} for the $L^1$-norm, and in \cite{jsdesplit} for the $L^2$-norm. \begin{lemma} \label{lem:PiStop} Let $\Pi$ be a unit-rate $\mathbf{P}filtr_{t}$-adapted Poisson process, and let $T$ be a bounded stopping time. Then \begin{align} \label{eq:PiStop1} \Expect[\Pi(T)] &= \Expect[T], \\ \label{eq:PiStop2} \Expect[\Pi^2(T)] &= 2\Expect[\Pi(T)T]-\Expect[T^{2}]+\Expect[T]. \end{align} \end{lemma} \begin{proof} Let $\tilde{\Pi}(t) := \Pi(t)-t$ be the compensated process. This is a martingale and the sampling theorem implies $\Expect[\tilde{\Pi}(T)] = 0$, which is \eqref{eq:PiStop1}. The quadratic variation is $[\tilde{\Pi}]_{t} = \Pi(t)$ and hence $Z(t) := \tilde{\Pi}^2(t)-\Pi(t)$ is a local martingale. Since $\Expect[\sup_{s \le t} |Z(s)|] < \infty$ for bounded $t$, it is actually a martingale and the sampling theorem now yields $\Expect[Z(T)] = 0$, or, \begin{align*} 0 &= \Expect[\Pi^2(T)-2\Pi(T)T+T^{2}-\Pi(T)], \end{align*} which is \eqref{eq:PiStop2}. \end{proof} \begin{lemma} \label{lem:Stopping} Let $\Pi$ be a unit-rate $\mathbf{P}filtr_{t}$-adapted Poisson process, and let $T_1$, $T_2$ be bounded stopping times.
Then \begin{align} \label{eq:Stopping1} \Expect[|\Pi(T_2)-\Pi(T_1)|] &= \Expect[|T_2-T_1|], \\ \label{eq:Stopping2} \Expect[(\Pi(T_2)-\Pi(T_1))^2] &= 2\Expect[|\Pi(T_2)-\Pi(T_1)|(T_1 \vee T_2)] \\ \nonumber &\hphantom{=} -\Expect[|T_2^{2}-T_1^{2}|]+\Expect[|T_2-T_1|]. \end{align} \end{lemma} \begin{proof} Assume first that $T_2 \ge T_1$. We get from Lemma~\ref{lem:PiStop}~\eqref{eq:PiStop1} \begin{align*} \Expect[\Pi(T_2)-\Pi(T_1)] &= \Expect[T_2-T_1]. \end{align*} For general stopping times $S_1$, $S_2$, say, not necessarily satisfying $S_2 \ge S_1$, \eqref{eq:Stopping1} now follows upon substituting $T_1 := S_1 \wedge S_2$ and $T_2 := S_1 \vee S_2$ into this equality. Next put $X := \Expect[(\Pi(T_2)-\Pi(T_1))^2]$ and assume again that $T_2 \ge T_1$. We get \begin{align*} X &= \Expect[\Pi(T_2)^2 + \Pi(T_1)^2 - 2\Pi(T_1)\Pi(T_2)] \\ &= \Expect[\Pi(T_2)^2 + \Pi(T_1)^2] - 2\Expect[\Pi(T_1)\Expect[\Pi(T_2)|\mathbf{P}filtr_{T_1}]]. \\ \intertext{To evaluate the iterated expectation note that} \Expect[\tilde{\Pi}(T_2)|\mathbf{P}filtr_{T_1}] &= \tilde{\Pi}(T_1) \Longrightarrow \Expect[\Pi(T_2)|\mathbf{P}filtr_{T_1}] = \Pi(T_1)-T_1+\Expect[T_2|\mathbf{P}filtr_{T_1}]. \\ \intertext{Hence,} \Expect[\Pi(T_1)\Expect[\Pi(T_2)|\mathbf{P}filtr_{T_1}]] &= \Expect[\Pi(T_1)^2]- \Expect[\Pi(T_1)T_1]+\Expect[\Pi(T_1)T_2], \\ \intertext{and we thus find that} X &= \Expect[\Pi(T_2)^2 - \Pi(T_1)^2] +2\Expect[\Pi(T_1)T_1]-2\Expect[\Pi(T_1)T_2]. \\ \intertext{Applying Lemma~\ref{lem:PiStop}~\eqref{eq:PiStop2} twice yields finally} X &= 2\Expect[(\Pi(T_2)-\Pi(T_1))T_2]-\Expect[T_2^{2}-T_1^{2}]+ \Expect[T_2-T_1]. \end{align*} For general stopping times $S_1$, $S_2$, \eqref{eq:Stopping2} now follows as before upon substituting $T_1 := S_1 \wedge S_2$ and $T_2 := S_1 \vee S_2$. \end{proof} \begin{remark} We will use Lemma~\ref{lem:Stopping} in the following form.
Assuming $T_1 \vee T_2$ has been bounded \textit{a priori} by some value $B$ we get by combining \eqref{eq:Stopping1} with \eqref{eq:Stopping2} that \begin{align} \label{eq:finalStop} \Expect[(\Pi(T_2)-\Pi(T_1))^2] &\le (2B+1)\Expect[|T_2-T_1|]. \end{align} Let $\mathbf{P}filtr_t$ be the filtration generated by $\tilde{\Pi}_r$, $r = 1 \ldots R$. Then for a fixed $t$, $ T_r(t) = \int_{0}^{t} w_r(X(s)) \, ds$ is a stopping time \cite[Lemma~3.1]{tauAnderson} with respect to $$\tilde{\mathbf{P}filtr}^r_u = \sigma\{\Pi_r(s), s \in [0,u];\; \Pi_{k \not = r}(s), s\in [0,\infty)\}.$$ Intuitively, as $X(t) = \sum_r \Pi_r(T_r(t)) \mathbb{N}_r$, the event $\{T_r(t) < u\}$ depends on $\Pi_r$ during $[0,u]$ and on all other processes $\{\Pi_k, k\neq r\}$ during $[0,\infty)$. However, as $\Pi_r$, $r=1\ldots R$ are independent, $\Pi_r(t) - t$ is still a martingale with respect to $\tilde{\mathbf{P}filtr}^r_u$ (and not only with respect to $\mathbf{P}filtr^r_u = \sigma\{\Pi_r(s), s \in [0,u] \}$). Hence the optional sampling arguments, and therefore the previous lemmas, apply to $T_r(t)$. The result remains true for the approximating process $Z$ (and later $Y$). Hence, given the bound \begin{align} \int_{0}^{t}w_r(X(s)) \, ds \vee \int_{0}^{t}w_r(Y(s)) \, ds \le B \end{align} we get from \eqref{eq:finalStop} that \begin{align} \label{eq:poisson_rmk} \nonumber \Expect\left[\left(\Pi_r\left(\int_{0}^{t}w_r(X(s)) \, ds\right)- \Pi_r\left(\int_{0}^{t}w_r(Y(s)) \, ds\right)\right)^2\right] \\ \le (2B+1)\Expect\left[\left|\int_{0}^{t}w_r(X(s)) \, ds- \int_{0}^{t}w_r(Y(s)) \, ds\right|\right]. \end{align} \end{remark} \subsection{Multiscale convergence} \label{subsec:msconv} This section develops a bound for the multiscale error made in the approximation $\bar{X} \approx \bar{Z}$. Throughout \S\ref{sec:jsdes}, a certain weighted norm which greatly simplified the theory was used.
However, in the present case of bounding errors we are interested in the more conventional $L^2$-norm, \begin{align} \pnorm{\bar{X}(t)}^2 \equiv \sum_{j=1}^J \pnorm{\bar{X}(t)_{\cdot,j}}^2 = \sum_{j=1}^J \sum_{i = 1}^D \bar{X}(t)_{ij}^2, \end{align} where, for convenience, from now on we shall write $\left\| \cdot \right\|$ instead of $\pnorm{\cdot}$. Let $\bar{P} > 0$ and define the joint stopping time \begin{align} \label{eq:StopT} \tau_{\bar{P}} &:= \inf_s\{\|\bar{X}(s)\| \vee \|\bar{Z}(s)\| \vee \|{\bar{Y}^{(h)}}(s)\| > \bar{P}\}, \mbox{ and put } \StopT{t} := \tau_{\bar{P}} \wedge t. \end{align} Recall the stopping time $T_r(t)$ from the remark after Lemma~\ref{lem:Stopping}. Clearly, for any fixed $t$, $T_r(\StopT{t})$ is still a stopping time. The first step in the analysis is to split the error into one part which is bounded and one which is not, \begin{align} \Expect \left[ \|\bar{X}(t)-\bar{Z}(t)\|^2 \right] = \Expect \left[ \|\bar{X}(t)-\bar{Z}(t)\|^2 1_{t > \StopT{t}} \right] +\Expect \left[ \|\bar{X}(t)-\bar{Z}(t)\|^2 1_{t \leq \StopT{t}} \right]. \end{align} The need to control the contribution from the unbounded part motivates the following lemma: \begin{lemma} \label{lem:Pbound} For any $p>1$, there exists a constant $K_p$ independent of $\epsilon$ and $h$ such that \begin{align} \label{eq:KpX} \Expect \left[\|\bar{X}\|^2 1_{t > \StopT{t}}\right] &\le K_p \bar{P}^{-p/2}, \\ \label{eq:KpZ} \Expect \left[\|\bar{Z}\|^2 1_{t > \StopT{t}}\right] &\le K_p \bar{P}^{-p/2}, \\ \label{eq:KpY} \Expect \left[\|{\bar{Y}^{(h)}}\|^2 1_{t > \StopT{t}}\right] &\le K_p \bar{P}^{-p/2}. \end{align} \end{lemma} \begin{proof} Theorem~\ref{th:exist1} yields \begin{align*} \Expect\left[\lunorm{\bar{X}(t)}^{4}\right] &\le \left( \Expect\left[\lunorm{\bar{X}(0)}^{4}\right] +1\right)\exp(Ct)-1.
\end{align*} Since $\left\|\cdot\right\|$ and $\lunorm{\cdot}$ are equivalent norms we have an \textit{a priori} bound \begin{align*} \Expect \left[ \|\bar{X}(t)\|^4\right] &\le B(t), \end{align*} with $B(t)$ independent of $\epsilon$. By the Cauchy--Schwarz inequality, \begin{align*} \Expect \left[\|\bar{X}(t)\|^2 1_{t > \StopT{t}}\right] &\leq \Expect \left[\|\bar{X}(t)\|^4 \right]^{1/2}\mathbf{P}[t > \StopT{t}]^{1/2}. \end{align*} Using that \begin{align*} \mathbf{P}[t > \StopT{t}] \leq \mathbf{P}[\sup_{s \in [0,t]} \|\bar{X}_{s}\| > \bar{P}] + \mathbf{P}[\sup_{s \in [0,t]} \|\bar{Z}_{s}\| > \bar{P}] + \mathbf{P}[\sup_{s \in [0,t]} \|{\bar{Y}^{(h)}}_{s}\| > \bar{P}], \end{align*} we find from Markov's inequality the bound \begin{align*} \mathbf{P}[t > \StopT{t}] \times \bar{P}^{p} &\leq \Expect\left[ \left(\sup_{s\in [0,t]} \|\bar{X}(s)\|\right)^p \right] + \Expect\left[ \left(\sup_{s\in [0,t]} \|{\bar{Y}^{(h)}}(s)\|\right)^p \right] \\ &+ \Expect\left[ \left(\sup_{s\in [0,t]} \|\bar{Z}(s)\|\right)^p \right]. \end{align*} Using the second part of Theorem~\ref{th:exist1} and the equivalence of norms, it is possible to bound the first term on the right independently of $\epsilon$ and $h$. Reasoning similarly for the terms depending on ${\bar{Y}^{(h)}}$ and $\bar{Z}$ we get the stated result. \end{proof} To formulate the main result of this section we let \begin{align} R(G_1) &:= \{ r; \; \exists i \in G_1 \text{ such that } \mathbb{N}_{ri} \neq 0 \}, \end{align} and define $R(G_2)$ analogously. In words, $R(G_1)$ contains the reactions which affect any species $i \in G_1$. We additionally define the two effective exponents \begin{align} \label{eq:udef} u &= \min_{r \in R(G_1)} -\nu_r \wedge \min_{i\in G_1} -\mu_i, \\ \label{eq:vdef} v &= 1+\min_{r \in R(G_2)} -\nu_r \wedge \min_{i \in G_2} -\mu_i. \end{align} Note that, if the transport rates do not scale with $\epsilon$, we generally get $u \le 0$ and $v \le 1$.
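As a concrete check of the definitions \eqref{eq:udef}--\eqref{eq:vdef}, the following Python sketch (our own illustration; the two-species network and its scalings are made up and not part of the text) computes $u$ and $v$ for a small hypothetical system:

```python
def effective_exponents(nu, mu, stoich, G1, G2):
    """Evaluate the effective exponents u and v of eqs. (udef)/(vdef).

    nu[r]        -- scaling exponent nu_r of reaction r
    mu[i]        -- scaling exponent mu_i of the transport of species i
    stoich[r][i] -- nonzero iff reaction r changes species i
    G1, G2       -- index sets of meso- and macro-scale species
    """
    def touches(r, G):
        # r belongs to R(G) iff it affects some species in G
        return any(stoich[r][i] != 0 for i in G)

    R_G1 = [r for r in range(len(nu)) if touches(r, G1)]
    R_G2 = [r for r in range(len(nu)) if touches(r, G2)]
    u = min(min(-nu[r] for r in R_G1), min(-mu[i] for i in G1))
    v = 1 + min(min(-nu[r] for r in R_G2), min(-mu[i] for i in G2))
    return u, v

# Hypothetical example: reaction 0 (nu = 0) affects meso species 0,
# reaction 1 (nu = 1, bimolecular in macro species) affects macro
# species 1; transport does not scale with epsilon (mu = 0).
u, v = effective_exponents(nu=[0, 1], mu=[0, 0],
                           stoich=[[1, 0], [0, 1]], G1=[0], G2=[1])
# u = 0 and v = 0.
```

In this example the bounded-version multiscale error bound below would read $O(\epsilon^{1+v} + \epsilon^{1/2+v/2+u}) = O(\epsilon^{1/2})$.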
\begin{theorem}[\textit{Multiscale error, bounded version}] \label{th:ScalingErrorBounded} Under the scale separation Condition~\ref{cond:scales}, the regularity Assumptions~\ref{ass:rboundS} and \ref{ass:rboundSZ}, and assuming also that $\bar{Z}$ and $\bar{X}$ are uniformly bounded with respect to $\epsilon$ by some $\bar{P}$, it holds whenever $u \geq 0$ and $v \geq 0$ that \begin{align} \Expect[\|\bar{Z}(t)-\bar{X}(t)\|^2] = O(\epsilon^{1+v} + \epsilon^{1/2 + v/2 + u}). \end{align} \end{theorem} \begin{proof} First notice that, since the processes are uniformly bounded with respect to $\epsilon$, so is $\bar{L}_r$. Thus according to Lemma~\ref{lem:PDMPCVSquare}, \begin{align*} \Expect[\|\bar{Z}(\StopT{t})-\bar{X}(\StopT{t})\|^2] &\le_{C} A+B \int_0^t \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\|^2 ] \, ds+ C \int_0^t \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\| ] \, ds, \end{align*} with \begin{align*} A &= O(\epsilon^{1+v}), \; B = O(\epsilon^{2v}), \; C = O(\epsilon^{u}). \end{align*} Similarly, according to Lemma~\ref{lem:PDMPCV}, $$\Expect[\|\bar{X}(\StopT{t})-\bar{Z}(\StopT{t})\|] \le_{C} D + E \int_{0}^{t} \Expect[\|\bar{X}(\StopT{s}) - \bar{Z}(\StopT{s})\|] \, ds,$$ where \begin{align} D &= O(\epsilon^{1/2+v/2}), \; E = O(\epsilon^{u}). \end{align} Thus using the Gronwall inequality we find firstly, \begin{align*} \Expect[\|\bar{X}(\StopT{t})-\bar{Z}(\StopT{t})\|] &= O(\epsilon^{1/2+v/2}). \\ \intertext{Using this and Gronwall's inequality a second time gives} \Expect[\|\bar{Z}(\StopT{t})-\bar{X}(\StopT{t})\|^2] &= O(\epsilon^{1 + v} + \epsilon^{1/2 + v/2 + u}). \end{align*} Suppose for the moment that ${\bar{Y}^{(h)}}$ is uniformly bounded by $\bar{P}$ with respect to $\epsilon$ and $h$. As the processes are bounded by $\bar{P}$, $\Expect[\|\bar{X}(\StopT{t})-\bar{Z}(\StopT{t})\|^2] = \Expect[\|\bar{X}(t)-\bar{Z}(t)\|^2]$ and we get the stated result.
The extra assumption that ${\bar{Y}^{(h)}}$ is uniformly bounded can easily be removed by changing the definition of $\tau$ in \eqref{eq:StopT} into \begin{align*} \tau_{\bar{P}} &:= \inf_s\{\|\bar{X}(s)\| \vee \|\bar{Z}(s)\| > \bar{P}\}. \end{align*} \end{proof} The two terms in the error bound can be interpreted as the error introduced in the macro-species, $\epsilon^{1+v}$, and the error made in the meso-species, $\epsilon^{1/2 + v/2 + u}$, respectively. In order to obtain a theorem also in the unbounded case, the growth of the local Lipschitz constants has to be controlled, and so we make the following convenient assumption: \begin{assumption} \label{ass:lip} There exist $a_1,\ldots,a_R \geq 0$ such that $ \bar{L}_r(\bar{P}) \le_{C} \bar{P}^{a_r}$. Furthermore, we assume $a_r = 0$ for each $r$ such that $\nu_r = 0$. Hence the Lipschitz constants associated with these transitions are bounded independently of $\bar{P}$. \end{assumption} As in the appendix we use the notation ``$A \le_{C} B$'' to indicate that $A \le C B$ for some constant $C > 0$ which is $O(1)$ with respect to $\epsilon$, $\bar{P}$, and $h$. \begin{theorem}[\textit{Multiscale error}] \label{th:ScalingError} Under the scale separation Condition~\ref{cond:scales}, and under the regularity Assumptions~\ref{ass:rboundS}, \ref{ass:rboundSZ}, and \ref{ass:lip}, and the additional conditions $u \geq 0$, $v > 0$, it holds that \begin{align} \Expect[\|\bar{Z}(t)-\bar{X}(t)\|^2] = O(\epsilon^{1+v -} + \epsilon^{1/2 + v/2 + u-}). \end{align} \end{theorem} \begin{proof} The proof here concerns the case $u > 0$. The special case from Assumption~\ref{ass:lip} where $\nu_r = 0$ and $a_r = 0$ for some $r$ (and thus $u = 0$) is similar but requires some cumbersome notation and is therefore omitted. Select $\bar{P} = \epsilon^{-b}$ for some $b>0$ and let $p>1$, $a := \max_r a_r$.
Following the same pattern as in the proof of Theorem~\ref{th:ScalingErrorBounded} we get $$\Expect[\|\bar{Z}(\StopT{t})-\bar{X}(\StopT{t})\|^2] = O(\epsilon^{1 + v-(a+1) b} + \epsilon^{1/2 + v/2 + u-(3a + 1) b/2}).$$ Thus using Lemma~\ref{lem:Pbound}, \begin{align*} \Expect[\|\bar{Z}(t)-\bar{X}(t)\|^2] &= \Expect[\|\bar{Z}(\StopT{t})-\bar{X}(\StopT{t})\|^2 1_{t\leq \StopT{t}}] + \Expect[\|\bar{Z}(t)-\bar{X}(t)\|^2 1_{t > \StopT{t}}] \\ &\leq \Expect[\|\bar{Z}(\StopT{t})-\bar{X}(\StopT{t})\|^2] + 4 K_p \epsilon^{b p/2} \\ &= O(\epsilon^{1 + v-(a+1) b} + \epsilon^{1/2 + v/2 + u - (3a + 1) b/2} + \epsilon^{b p/2}) . \end{align*} As $b p/2$ can be made arbitrarily large while $(a+1) b$ and $(3a + 1) b/2$ can be made arbitrarily close to $0$ (i.e.~$b \rightarrow 0$ and $p \rightarrow \infty$), we arrive at the stated bound. \end{proof} \begin{remark} It is possible to get a convergence result for the case $u = v = 0$. However, in this case the error bound is of the form $O(\log(1/\epsilon)^{-\delta})$ and the dominating part can be traced back to Lemma~\ref{lem:Pbound}. \end{remark} \subsection{Splitting convergence} \label{subsec:splitconv} We next consider the error in the approximation $\bar{Z} \approx {\bar{Y}^{(h)}}$, that is, the splitting error. For this part we are able to prove a somewhat weak error bound in the general case, while the situation improves considerably if the processes are assumed to be bounded \textit{a priori}. 
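For the reader's convenience we recall the integral form of Gronwall's inequality on which the proofs in this section repeatedly rely (a standard statement, not specific to this paper): for a bounded, measurable $\varphi \ge 0$ and constants $a, b \ge 0$,

```latex
\begin{align*}
  \varphi(t) \le a + b \int_0^t \varphi(s) \, ds, \quad t \in [0,T],
  \qquad \Longrightarrow \qquad
  \varphi(t) \le a e^{bt}, \quad t \in [0,T].
\end{align*}
```

In each application, $\varphi$ is a first or second moment of the stopped error as a function of $t$.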
\begin{theorem}[\textit{Splitting error, bounded version}] \label{th:spliterr2} Under the scale separation Condition~\ref{cond:scales}, the regularity Assumptions~\ref{ass:rboundSZ}, \ref{ass:rboundSY}, and assuming also that $\bar{X}$, $\bar{Z}$, and ${\bar{Y}^{(h)}}$ are uniformly bounded with respect to $h$ and $\epsilon$ by $\bar{P}$, it holds whenever $u \geq 0$ and $v \geq 0$ that \begin{align} \Expect\left[\|\bar{Z}(t)-{\bar{Y}^{(h)}}(t)\|^2\right] \le O\left(h(\epsilon^{2u} + \epsilon^{u+v}) \right) + O\left(h^2\epsilon^{2v}\right). \end{align} \end{theorem} \begin{proof} Using Lemma~\ref{lem:SplitStepCVSquare}, \begin{align*} \Expect\left[\|\bar{Z}(\StopT{t})-{\bar{Y}^{(h)}}(\StopT{t})\|^2 \right] &= O(\epsilon^u) \int_0^t \Expect \left[\|\bar{Z}(\StopT{s})-{\bar{Y}^{(h)}}(\StopT{s})\| \right] \, ds \\ &+ O(\epsilon^{2v} ) \int_0^t \Expect[\|\bar{Z}(\StopT{s})-{\bar{Y}^{(h)}}(\StopT{s})\|^2] \, ds \\ &+ O(\epsilon^uh) + O(\epsilon^{2v} h^2 ). \end{align*} Using Lemma~\ref{lem:SplitStepCV} and the Gronwall inequality, one readily shows that \begin{align*} \Expect \left[ \|\bar{Z}(\StopT{t})-{\bar{Y}^{(h)}}(\StopT{t})\| \right] &= O\left( (\epsilon^u + \epsilon^v)h\right). \end{align*} Taken together we find \begin{align*} \Expect\left[\|\bar{Z}(\StopT{t})-{\bar{Y}^{(h)}}(\StopT{t})\|^2\right] &\le O\left((\epsilon^{2u} + \epsilon^{u+v}) h\right) + O\left(\epsilon^{2v} h^2\right) \\ &+ O(\epsilon^{2v} ) \int_0^t \Expect \left[\| \bar{Z}(\StopT{s})- {\bar{Y}^{(h)}}(\StopT{s})\|^2 \right] \, ds. \end{align*} Hence using the Gronwall inequality anew, \begin{align*} \Expect\left[\|\bar{Z}(t)-{\bar{Y}^{(h)}}(t)\|^2 1_{t\leq \StopT{t}}\right] &= O\left((\epsilon^{2u} + \epsilon^{u+v}) h\right) + O\left(\epsilon^{2v} h^{2}\right). \end{align*} Furthermore, as the processes are bounded by $\bar{P}$, $\Expect[\|\bar{Z}(\StopT{t})-{\bar{Y}^{(h)}}(\StopT{t})\|^2] = \Expect[\|\bar{Z}(t)-{\bar{Y}^{(h)}}(t)\|^2]$ and we get the stated result.
\end{proof} As before, the two terms of the bound can be interpreted as the error made in the meso-species, $(\epsilon^{2u} + \epsilon^{u+v})h$, and the error introduced in the macro-species, $\epsilon^{2v} h^{2}$. \begin{theorem}[\textit{Splitting error}] \label{th:spliterr1} Under the scale separation Condition~\ref{cond:scales}, and under the regularity Assumptions~\ref{ass:rboundSZ}, \ref{ass:rboundSY}, and \ref{ass:lip}, and the additional conditions $u > 0$, $v > 0$, it holds that \begin{align} \lim_{h \to 0} \Expect\left[\|\bar{Z}(t)-{\bar{Y}^{(h)}}(t)\|^2\right] = 0. \end{align} \end{theorem} \begin{proof} Following the same pattern as in the proof of the bounded version, it is easy to show that for each $\bar{P}$, $$ \Expect\left[\|\bar{Z}(t)-{\bar{Y}^{(h)}}(t)\|^2 1_{t\leq \StopT{t}}\right] \xrightarrow[h \rightarrow 0]{} 0.$$ We conclude the argument using Lemma~\ref{lem:Pbound}, which implies that $$\Expect\left[\|\bar{Z}(t)-{\bar{Y}^{(h)}}(t)\|^2 1_{t > \StopT{t}}\right]\xrightarrow[\bar{P} \rightarrow \infty]{} 0$$ uniformly with respect to $h$. \end{proof} \begin{remark} Under the assumptions of Theorem~\ref{th:spliterr1}, it is possible to get an error bound of the form $$\Expect\left[\|\bar{Z}(t)-{\bar{Y}^{(h)}}(t)\|^2\right] \le O \left(\log(1/h)^{-\delta} \right),$$ for any $\delta$ greater than some $\delta_0$. However, in this case the error can be traced to the unbounded part as covered by Lemma~\ref{lem:Pbound}. \end{remark} \section{Numerical examples} \label{sec:examples} We now proceed to illustrate our main findings through some prototypical cases. An all-linear isomerization-type system is investigated in \S\ref{subsec:isomerization} and a nonlinear catalytic model in \S\ref{subsec:catalytic}. In the experiments below we considered reactions taking place in a one-dimensional geometry $[0,1)$ under periodic boundary conditions.
The geometry was discretized into 10 equally spaced segments and a diffusion process implemented via the standard 2nd order finite difference stencil, re-interpreted as linearly dependent transition rates. As for the initial data, we let each segment contain either 10 or 20 molecules for the mesoscopic (discrete) species and $20\epsilon^{-1}$ or $10\epsilon^{-1}$ for the macroscopic (continuous) species, respectively. The exact dynamics \eqref{eq:RDMEPoissreprS} was simulated in an operational time framework. Here we relied on an implementation of the \emph{All Events Method} \cite{aem_proceeding}, essentially a spatial extension of the \emph{Common Reaction Path Method} \cite{sensitivitySSA} which evolves \eqref{eq:RDMEPoissreprS} using separate Poisson processes for all events. The multiscale approximation \eqref{eq:RDMEPoissrepr_ex1}--\eqref{eq:RDMEPoissrepr_ex2} falls under the scope of \emph{Piecewise Deterministic Markov Processes (PDMPs)} for which accurate methods have been proposed \cite{hybridMarkov}. We implemented this through the use of \emph{event-detection} in solvers for Ordinary Differential Equations (ODEs). Notably, this allows for a fully consistent coupling with \eqref{eq:RDMEPoissreprS} in operational time. Finally, the split-step approximation \eqref{eq:RDMEPoissrepr_num1}--\eqref{eq:RDMEPoissrepr_num2} was implemented. This is quite straightforward via the kernel step function representation and executes very efficiently. The split-step error is much more challenging to determine accurately than the multiscale error is. In fact, on a predetermined grid in time the split-step approximation ${\bar{Y}^{(h)}}_{ij}$ in \eqref{eq:RDMEPoissrepr_num1} was often found to be exactly equal to the multiscale approximation $\bar{Z}_{ij}$ in \eqref{eq:RDMEPoissrepr_ex1}, thus requiring many realizations for even a very crude estimate. 
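The diffusion discretization just described can be sketched as follows (Python; the function and its arguments are our own names, not the paper's implementation): on a periodic grid of $J$ voxels of width $h = 1/J$, the second-order stencil yields per-molecule jump rates $D/h^2$ to each of the two neighbors.

```python
import numpy as np

def periodic_diffusion_rates(J=10, D=1.0, length=1.0):
    """Per-molecule jump rates q[j, k] from voxel j to voxel k on a
    periodic 1-D grid: the standard 2nd order finite difference stencil
    D/h^2, re-interpreted as transition rates (an illustrative sketch)."""
    h = length / J
    q = np.zeros((J, J))
    for j in range(J):
        q[j, (j - 1) % J] = D / h**2  # jump to the left neighbor
        q[j, (j + 1) % J] = D / h**2  # jump to the right neighbor
    return q
```

With $J = 10$ voxels on $[0,1)$ each rate is $D/h^2 = 100D$, and the total jump propensity out of voxel $j$ is $2D/h^2$ times the number of molecules in $j$.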
We make repeated use of the estimator \begin{align} \label{eq:mean} \Expect\left[(Y-X)(t)^{2}\right] &\approx M \equiv \frac{1}{N}\sum_{i = 1}^{N} (Y-X)(t; \omega_{i})^{2}, \intertext{for independent trajectories $(\omega_{i})$. A basic confidence interval is obtained by computing} \label{eq:std} S^{2} &\equiv \frac{1}{N-1}\sum_{i = 1}^{N} \left[(Y-X)(t; \omega_{i})^{2}-M \right]^{2}, \end{align} such that the error in the estimator \eqref{eq:mean} is $\propto S/\sqrt{N}$. \subsection{Isomerization} \label{subsec:isomerization} We first consider the simple linear isomerization reaction pair, \begin{align} A & \xrightleftharpoons[k_b B]{k_a A} B. \end{align} In order for this example to develop a scale separation, the diffusion rate for $A$ is set to $1/2$ per molecule in either direction, and for $B$ to $0$. By selecting $k_a = 1$ and $k_b = \epsilon$, a scale separation occurs, with $A \sim 10$ and $B \sim 10\epsilon^{-1}$. We may thus evolve the system by the multiscale approximation \eqref{eq:RDMEPoissrepr_ex1}--\eqref{eq:RDMEPoissrepr_ex2}, letting $A$ remain discrete while $B$ is approximated with a continuous scaled variable. Although the unscaled system is closed, from the perspective of scale separation the system scales unfavorably with $\epsilon$ and hence falls under the scope of Theorem~\ref{th:ScalingError}. We have $u = 0$ and $v = 1$ in \eqref{eq:udef}--\eqref{eq:vdef} and thus expect a mean square error behaving like $O(\epsilon^2)$ for the macroscopic species and $O(\epsilon)$ for the mesoscopic species. This is verified in Figure~\ref{fig:iso1} where the multiscale error for the two components is examined. Since Theorem~\ref{th:spliterr2} is formally not applicable, the only valid result is the guaranteed convergence of Theorem~\ref{th:spliterr1}. Nevertheless, in Figure~\ref{fig:iso2} the split-step error for the two species has been plotted separately.
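The estimator \eqref{eq:mean}--\eqref{eq:std} used throughout these experiments is a plain sample mean with a standard error estimate; a minimal sketch (Python, names our own):

```python
import numpy as np

def mc_estimator(sq_diffs):
    """Sample mean M of the squared differences (Y-X)(t)^2 over N
    independent trajectories, and the standard error S/sqrt(N) of
    eqs. (mean)-(std) (an illustrative sketch)."""
    d2 = np.asarray(sq_diffs, dtype=float)
    N = d2.size
    M = d2.mean()
    S = np.sqrt(np.sum((d2 - M) ** 2) / (N - 1))
    return M, S / np.sqrt(N)
```

The returned standard error shrinks like $N^{-1/2}$, which is what makes the near-coinciding split-step trajectories mentioned above so expensive to resolve.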
The different terms of the error estimate in Theorem~\ref{th:spliterr2} are clearly visible, suggesting that the uniform bounds on the processes, as required by Theorem~\ref{th:spliterr2}, may in fact be relaxed. Convergence results similar to those of \cite{kurtz_multiscale} and \cite{kurtz_multiscale2} are here consequences of Theorem~\ref{th:ScalingError}, with the added benefit of an error estimate. Indeed, Theorem~\ref{th:ScalingError} yields that the difference between $\bar{X}$ and $\bar{Z}$ goes to 0 and the convergence of $\bar{Z}$ is easy to study. Using \eqref{eq:RDMEPoissrepr_ex1} and \eqref{eq:RDMEPoissrepr_ex2} for voxel $j$ yields \begin{align} \bar{Z}_{B,j}(t) &= \bar{Z}_{B,j}(0) + \epsilon \int_0^t k_a \bar{Z}_{A,j}(s) \, ds - \epsilon \int_0^t k_b \epsilon^{-1}\bar{Z}_{B,j}(s) \, ds \xrightarrow{\epsilon \rightarrow 0} \bar{Z}_{B,j}(0), \end{align} since $(k_a,k_b) = (1,\epsilon)$, and, \begin{align} \nonumber \bar{Z}_{A,j}(t) \xrightarrow{\epsilon \rightarrow 0} &\bar{Z}_{A,j}(0) + \Pi_{1,j} \left(\bar{Z}_{B,j}(0) t \right) - \Pi_{2,j} \left(\int_0^t \bar{Z}_{A,j}(s) \, ds \right) \\ &+ \sum_{k\in \{j-1,j+1\}} \Pi'_{A,k,j} \left( \int_0^t \bar{Z}_{A,k}(s)/2 \, ds \right) -\Pi'_{A,j,k} \left( \int_0^t \bar{Z}_{A,j}(s)/2 \, ds \right). \end{align} Hence for this simple system, the limit $\epsilon \to 0$ for $B$ is trivial. \begin{figure} \caption{Multiscale error (isomerization): the root mean-square (RMS) error as a function of the scale separation $\epsilon$ for the two components $A$ (discrete) and $B$ (continuous).} \label{fig:iso1} \end{figure} \begin{figure} \caption{Split-step error (isomerization): the RMS error as a function of the split-step $h$ for the two components. Here $\epsilon = 10^{-1}$.\label{fig:iso2}} \end{figure} \subsection{Catalytic reactions} \label{subsec:catalytic} We consider the following pair of catalytic reactions: \begin{align} \left.
\begin{array}{rcl} A+B &\xrightarrow{kAB}& C+B \\ C+D &\xrightarrow{kCD}& A+D \\ B & \xrightleftharpoons[k_d D]{k_b B}& D \end{array} \right\} \end{align} We assume that species $A$ and $C$ are abundant and $O(\epsilon^{-1})$, and species $B$ and $D$ are $O(1)$. For the diffusion we put $\sigma_{A,C} = \epsilon$ and $\sigma_{B,D} = 1$, and for the rates $k = 0.01$ and $(k_b,k_d) = (1,0.9)$. The system so defined is closed since there is no coupling from the macro-species to the meso-species (take $\boldsymbol{l} = [1,1,1,1]^T$ in Assumption~\ref{ass:rboundS}). This property carries over to the multiscale and split-step approximations (cf.~Assumptions~\ref{ass:rboundSZ} and \ref{ass:rboundSY}). For the scale separation, we get the critical exponents $u = v = 0$ and Theorem~\ref{th:ScalingErrorBounded} predicts a slow convergence of $O(\epsilon^{1/4})$ in the RMS sense. However, since the meso-species do not depend on the macro-species the corresponding error is in fact $0$. According to the discussion following the proof of Theorem~\ref{th:ScalingErrorBounded}, the RMS is therefore $O(\epsilon^{1/2})$ and is observed in the macroscopic species only. By the same argument, and from the remark following the proof of Theorem~\ref{th:spliterr2}, we predict that the RMS of the split-step error is $O(h)$. Experimental results verifying this are shown in Figure~\ref{fig:cat1} for the multiscale error \emph{(``convergent scaling'')} and in Figure~\ref{fig:cat2} for the split-step error. As in the previous example, convergence results similar to those of \cite{kurtz_multiscale} and \cite{kurtz_multiscale2} are consequences of Theorem~\ref{th:ScalingErrorBounded}. This time, \eqref{eq:RDMEPoissrepr_ex1} and \eqref{eq:RDMEPoissrepr_ex2} are almost independent of $\epsilon$; only the diffusion for $A$ and $C$ depends on $\epsilon$ and, since $\sigma_{A,C} = \epsilon$, it vanishes in the limit.
For voxel $j$, \begin{align} \bar{Z}_{A,j}(t) &\xrightarrow{\epsilon \rightarrow 0} \bar{Z}_{A,j}(0) -\int_0^t k \bar{Z}_{A,j}(s) \bar{Z}_{B,j}(s) \, ds + \int_0^t k \bar{Z}_{C,j}(s) \bar{Z}_{D,j}(s) \, ds, \\ \nonumber \bar{Z}_{B,j}(t) &= \bar{Z}_{B,j}(0) -\Pi_{1,j} \left( k_b \int_0^t \bar{Z}_{B,j}(s) \, ds \right) + \Pi_{2,j} \left( \int_0^t k_d \bar{Z}_{D,j}(s) \, ds \right) \\ &+ \sum_{k\in \{j-1,j+1\}} \Pi'_{B,k,j} \left( \int_0^t \bar{Z}_{B,k}(s) \, ds \right) - \Pi'_{B,j,k} \left( \int_0^t \bar{Z}_{B,j}(s) \, ds \right) . \end{align} The defining equations for $\bar{Z}_{C,j}$ and $\bar{Z}_{D,j}$ are similar. Thus the limit in this case is not a trivial process, stressing that non-trivial models can be described within the framework. \begin{figure} \caption{Multiscale errors (catalytic reactions).} \label{fig:cat1} \end{figure} \begin{figure} \caption{Split-step error (catalytic reactions). Case of superconvergence of the split-step method.} \label{fig:cat2} \end{figure} \subsection{Catalytic reactions: case of unclear scale separation} It is interesting to turn the scales of the catalytic model around. If we instead let species $A$ and $C$ be $O(1)$, while $B$ and $D$ are $O(\epsilon^{-1})$, the topology does not change and we still have a closed system. We put $k = 0.01 \epsilon^{1/4}$, $(k_b,k_d) = (1,0.9)$, and use the slow diffusion $\sigma_{A,C} = \sigma_{B,D} = \epsilon$. The critical exponents become $u = -3/4$ and $v = 0$ and thus none of the results apply. Although Figure~\ref{fig:cat1} (\emph{``divergent scaling''}) does not strictly exclude the possibility of convergence, the error certainly does not go down convincingly. \section{Conclusions} \label{sec:conclusions} In this paper we have developed a coherent framework for analyzing certain multiscale methods for continuous-time Markov chains of a general spatial structure.
Concrete assumptions and conditions have been discovered that enable a multiscale description and a consistent formulation of the approximating methods in operational time. Notably, through explicit \textit{a priori} results, all processes are well-posed and the framework does not rely on any heuristic prior bounds. The analysis distinguishes between two separate sources of errors, namely \emph{the multiscale error} and \emph{the split-step error}. The first is due to an approximate stochastic/deterministic variable splitting strategy, a kind of stochastic homogenization technique. The second emerges when this approximating process in turn is evolved in discrete time-steps. Notably, we found theoretically how the split-step error is composed of factors reminiscent of the terms making up the multiscale error, thus connecting the two in a qualitative sense. The behavior of these errors was also examined experimentally via actual implementations of the methods. Although some of the boundary cases are difficult to handle theoretically, in particular when confronted with open systems, the numerical experiments support the sharpness of our theoretical predictions. The work opens up some interesting possibilities. Clearly, an ideal implementation should allow the split-step error to be about as large as the multiscale error. The fully discrete approximation is amenable to several efficient algorithms developed for the numerical solution of partial differential equations, including for example multigrid techniques. An interesting challenge to which we would like to return is to develop practical procedures for computing accurate error estimates. We believe this is doable following the theory laid out in the paper. \section*{Acknowledgment} Many detailed suggestions by the two referees have helped us to clarify and improve the paper.
The work was supported by ENS Cachan (A.~Chevallier) and by the Swedish Research Council within the UPMARC Linnaeus center of Excellence (S.~Engblom). \appendix \section{The multiscale error} \label{sec:mserr} Below are the statements and proofs of the two critical lemmas used in the proof of Theorem~\ref{th:ScalingError}. Recall the definition of the two effective exponents $u$ and $v$ in \eqref{eq:udef}--\eqref{eq:vdef}. \begin{lemma} \label{lem:PDMPCVSquare} Define $\bar{X}(t)$ by \eqref{eq:RDMEPoissreprS} and $\bar{Z}(t)$ by \eqref{eq:RDMEPoissrepr_ex1}--\eqref{eq:RDMEPoissrepr_ex2} with $\bar{X}(0) = \bar{Z}(0)$ almost surely. Then under the stopping time $\StopT{t}$ defined in \eqref{eq:StopT}, \begin{align*} \Expect[\|\bar{Z}(\StopT{t})-\bar{X}(\StopT{t})\|^2] &\le_{C} A + B \int_0^t \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\|^2 ] \, ds + C \int_0^t \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\| ] \, ds, \end{align*} where expressions for $A$, $B$, and $C$ are indicated in \eqref{eq:bnd_coeffs} below. These bounds depend on $\epsilon$ and $\bar{P}$ and on the reaction topology $\mathbb{N}$, \begin{align} A &= \epsilon^{1+v}[\bar{L}(\bar{P})\bar{P}], \quad B = \epsilon^{2v}[\bar{L}(\bar{P})]^2, \quad C = \epsilon^{u}\bar{L}(\bar{P}) [\epsilon^{u}\bar{L}(\bar{P})\bar{P}+1]. \end{align} \end{lemma} To improve the readability of the proof, we use the notation ``$A \le_{C} B$'' to indicate that $A \le C B$ for some constant $C > 0$ which is $O(1)$ with respect to $\epsilon$, $\bar{P}$, and $h$. When the processes are assumed to be bounded \textit{a priori}, clearly, $\bar{L}(c\bar{P}) \le_{C} 1$, for any constant $c > 0$. In the unbounded case, Assumption~\ref{ass:lip} yields similarly $\bar{L}(c\bar{P}) \le_{C} \bar{L}(\bar{P})$ for any constant $c > 0$. We additionally let $(c_{\boldsymbol{l}},C_{\boldsymbol{l}})$ be the constants in the norm equivalence \begin{align} c_{\boldsymbol{l}} \pnorm{X} \le \lnorm{X} \le C_{\boldsymbol{l}} \pnorm{X}. 
\end{align} \begin{proof} We focus first on a single voxel $j$ and analyze the errors on species from $G_1(j)$ and $G_2(j)$, respectively. For $i \in G_1(j)$, from \eqref{eq:RDMEPoissreprS} and \eqref{eq:RDMEPoissrepr_ex1}, \begin{align*} \left( \bar{Z}_{ij}(\StopT{t}) - \bar{X}_{ij}(\StopT{t})\right)^2 = \Bigg[ -&\sum_{r = 1}^{R} \mathbb{N}_{ri} \left( \Pi_{rj} \left( \cdot \right)-\Pi_{rj} \left( \cdot \right) \right) \\ \nonumber -&\sum_{k = 1}^{J} \left( \Pi_{ijk}' \left( \cdot \right) - \Pi_{ijk}' \left( \cdot \right) \right)+ \sum_{k = 1}^{J} \left( \Pi_{ikj}' \left( \cdot \right) -\Pi_{ikj}' \left( \cdot \right) \right) \Bigg]^2 \end{align*} where we have suppressed the local time arguments of the Poisson processes, available in \eqref{eq:RDMEPoissreprS} and \eqref{eq:RDMEPoissrepr_ex1}. By Jensen's inequality and the bound on the mesh connectivity in Definition~\ref{def:mesh} \eqref{eq:connectivity} we get \begin{align*} \left( \bar{Z}_{ij}(\StopT{t}) - \bar{X}_{ij}(\StopT{t})\right)^2 \leq (R + 2M_D)\left( A_1 + A_2 + A_3 \right), \end{align*} where \begin{align*} A_1 &= \sum_{r = 1}^{R} \mathbb{N}_{ri}^2 \Biggl( \Pi_{rj} \left( \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{Z}_{\cdot,j}(s)) \, ds \right) \\ &\hphantom{= \sum_{r = 1}^{R} \mathbb{N}_{ri}^2 } -\Pi_{rj} \left( \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1} \bar{X}_{\cdot,j}(s)) \, ds \right) \Biggr)^2, \\ A_2 &= \sum_{k = 1}^{J} \left( \Pi_{ijk}' \left( \int_{0}^{\StopT{t}} \epsilon^{-\mu_i} \bar{q}_{ijk} \bar{Z}_{ij}(s) \, ds \right) - \Pi_{ijk}' \left( \int_{0}^{\StopT{t}} \epsilon^{-\mu_i} \bar{q}_{ijk} \bar{X}_{ij}(s) \, ds \right) \right)^2, \\ A_3 &=\sum_{k = 1}^{J} \left( \Pi_{ikj}' \left( \int_{0}^{\StopT{t}} \epsilon^{-\mu_i} \bar{q}_{ikj} \bar{Z}_{ik}(s) \, ds \right) -\Pi_{ikj}' \left( \int_{0}^{\StopT{t}} \epsilon^{-\mu_i} \bar{q}_{ikj} \bar{X}_{ik}(s) \, ds \right) \right)^2.
\end{align*} First we need to bound the $\boldsymbol{l}$-norm: \begin{align*} \lnorm{V_j^{-1}\bar{Z}_{\cdot,j}(s)} &\le C_{\boldsymbol{l}} \pnorm{V_j^{-1}\bar{Z}_{\cdot,j}(s)} \le C_{\boldsymbol{l}} V_j^{-1} \bar{P} \le C_{\boldsymbol{l}} m_V^{-1} \bar{V}_M^{-1} \bar{P}. \end{align*} Then using the Lipschitz bound \eqref{eq:lipS} in Assumption~\ref{ass:rboundS}: \begin{align*} \bar{u}_r(V_j^{-1}\bar{Z}_{\cdot,j}(s)) &\le \bar{u}_r(0) + \bar{L}_r( C_{\boldsymbol{l}} m_V^{-1} \bar{V}_M^{-1} \bar{P}) \lnorm{V_j^{-1}\bar{Z}_{\cdot,j}(s)} \\ &\le \bar{u}_r(0) + \bar{L}_r( C_{\boldsymbol{l}} m_V^{-1} \bar{V}_M^{-1} \bar{P}) C_{\boldsymbol{l}} \pnorm{V_j^{-1}\bar{Z}_{\cdot,j}(s)} \\ &\le \bar{u}_r(0) + \bar{L}_r( C_{\boldsymbol{l}} m_V^{-1} \bar{V}_M^{-1} \bar{P}) C_{\boldsymbol{l}} V_j^{-1}\bar{P} \\ &\le_{C} 1 + \bar{L}_r(\bar{P}) V_j^{-1}\bar{P}. \end{align*} Thus, \begin{align*} \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{Z}_{\cdot,j}(s)) \, ds &\le_{C} \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} \left( V_j + \bar{L}_r(\bar{P})\bar{P} \right) \, ds \\ &\le_{C} t \epsilon^{-\nu_r} ( M_V \bar{V}_M + \bar{L}_r(\bar{P})\bar{P}) \\ &\le_{C} \epsilon^{-\nu_r} (1 + \bar{L}_r(\bar{P})\bar{P}). \end{align*} Using the same method for $\bar{X}$, we conclude \begin{align*} \int_{0}^{\StopT{t}}& \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{Z}_{\cdot,j}(s)) \, ds \, \vee \, \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \, ds \\ &\le_{C} \epsilon^{-\nu_{r}} (1+\bar{L}_r(\bar{P}) \bar{P}). \end{align*} Hence using Lemma~\ref{lem:Stopping} \eqref{eq:poisson_rmk} and again the Lipschitz bound we get \begin{align*} \Expect[A_1] &\le_{C} \sum_r \mathbb{N}_{ri}^2 \epsilon^{-\nu_{r}} \bar{L}_r(\bar{P}) \left(\epsilon^{-\nu_{r}}(\bar{L}_r(\bar{P}) \bar{P}+1)+1 \right) \int_0^t \Expect \left[ \|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\| \right] \, ds.
\end{align*} Relying on the same arguments we readily find \begin{align*} \Expect[A_2] &\le_{C} \sum_{k = 1}^{J} \epsilon^{-\mu_i} \left( 1 + \epsilon^{-\mu_i} \right) \int_0^t \Expect[ \|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\|] \, ds, \end{align*} and the identical bound for $\Expect[A_3]$. For $i\in G_2(j)$, we similarly get \begin{align*} \left( \bar{Z}_{ij}(\StopT{t}) - \bar{X}_{ij}(\StopT{t}) \right)^2 \leq \epsilon^{2} (R + 2M_D) (A'_1 + A'_2 + A'_3) \end{align*} where {\small \begin{align*} A'_1 &= \sum_{r = 1}^{R} \mathbb{N}_{ri}^2 \left( \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{Z}_{\cdot,j}(s)) \, ds -\Pi_{rj} \left( \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \, ds\right) \right)^2, \\ A'_2 &= \sum_{k = 1}^{J} \left( \int_{0}^{\StopT{t}} \epsilon^{ -\mu_i} \bar{q}_{ijk} \bar{Z}_{ij}(s) \, ds -\Pi_{ijk}' \left( \int_{0}^{\StopT{t}} \epsilon^{-\mu_i} \bar{q}_{ijk} \bar{X}_{ij}(s) \, ds \right) \right)^2, \\ A'_3 &=\sum_{k = 1}^{J} \left( \int_{0}^{\StopT{t}} \epsilon^{ -\mu_i} \bar{q}_{ikj} \bar{Z}_{ik}(s) \, ds -\Pi_{ikj}' \left( \int_{0}^{\StopT{t}} \epsilon^{ -\mu_i} \bar{q}_{ikj} \bar{X}_{ik}(s) \, ds \right) \right)^2. \end{align*}} The analysis is now slightly different. Species from the second group have a large number of molecules, so $\bar{X}_{ij}(t)$ is expected to remain close to its mean value. 
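The argument next centers the Poisson processes; the elementary identity behind the resulting second-moment bounds, namely $\Expect[(\Pi(\lambda T) - \lambda T)^2] = \lambda T$ for a unit-rate Poisson process $\Pi$ at a fixed time, can be checked numerically with a quick sketch (Python, names our own; the proof itself uses the optional stopping theorem for random times):

```python
import numpy as np

rng = np.random.default_rng(1)

def centered_second_moment(lam, T, n=200_000):
    """Monte Carlo estimate of E[(Pi(lam*T) - lam*T)^2], which for a
    unit-rate Poisson process equals the mean number of events lam*T
    (illustration only, deterministic time horizon)."""
    counts = rng.poisson(lam * T, size=n)
    return ((counts - lam * T) ** 2).mean()
```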
We thus introduce the centered Poisson processes $\tilde{\Pi}_r$, \begin{align*} A'_1 &= \sum_{r} \mathbb{N}_{ri}^2 \Bigg( \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \left( \bar{u}_r(V_j^{-1}\bar{Z}_{\cdot,j}(s))- \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \right) \, ds \\ &\hphantom{\sum_{r} \mathbb{N}_{ri}^2}-\tilde{\Pi}_{rj} \left( \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \, ds \right) \Bigg)^2 \\ &\leq \sum_{r} 2\mathbb{N}_{ri}^2 \left( \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \left( \bar{u}_r(V_j^{-1}\bar{Z}_{\cdot,j}(s))-\bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \right) \, ds \right)^2 \\ &\hphantom{\sum_{r}}+2\mathbb{N}_{ri}^2 \left( \tilde{\Pi}_{rj} \left( \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \, ds \right) \right)^2. \\ \end{align*} Using that the quadratic variation of $\tilde{\Pi}$ is $[\tilde{\Pi}]_t = \Pi(t)$ and the martingale stopping time theorem we get \begin{align*} \Expect &\left[\left( \tilde{\Pi}_{rj} \left( \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \, ds \right) \right)^2 \right] \\ &= \Expect\left[ \Pi_{rj} \left( \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \, ds \right) \right] \\ &= \Expect\left[ \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \, ds \right] \le_{C} \epsilon^{-\nu_{r}} \bar{L}_r(\bar{P}) \bar{P}. 
\end{align*} \noindent Using the Cauchy--Schwarz inequality for the remaining integral part and following the same approach for $A'_2$ and $A'_3$ we get \begin{align*} \Expect[A'_1] &\le_{C} \sum_{r = 1}^R \mathbb{N}^2_{ri} \epsilon^{-\nu_{r}} \bar{L}_r(\bar{P}) \bar{P}+ \sum_{r = 1}^R \mathbb{N}^2_{ri} \left( \epsilon^{-\nu_{r}} \bar{L}_r(\bar{P}) \right)^2 \int_0^{t} \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\|^2] \, ds, \\ \Expect[A'_2] &\le_{C} \sum_{k = 1}^J \epsilon^{-\mu_i} + \sum_{k = 1}^J \left(\epsilon^{-\mu_i} \right)^2 \int_0^{t} \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\|^2] \, ds, \end{align*} as well as an identical bound for $\Expect[A'_3]$. We thus get for the $j$th voxel, \begin{align} \nonumber &\Expect \left[ \|\bar{Z}_{\cdot,j}(\StopT{t}) - \bar{X}_{\cdot,j}(\StopT{t})\|^2\right] \le_{C} \sum_{i \in G_1(j)} \Expect\left[A_1 + A_2 + A_3 \right]+ \sum_{i \in G_2(j)} \epsilon^{2} \Expect\left[A'_1 + A'_2 + A'_3 \right] \\ \label{eq:bnd_coeffs} &\hphantom{\Expect } \le_{C} A^{(j)} + B^{(j)} \int_0^{t} \Expect[\|\bar{X}(\StopT{s})-\bar{Z}(\StopT{s})\|^2] \, ds +C^{(j)} \int_0^{t} \Expect[\|\bar{X}(\StopT{s})-\bar{Z}(\StopT{s})\|] \, ds. \end{align} Summing over $j$ we get the stated result with $A := \sum_{j} A^{(j)}$, $B := \sum_{j} B^{(j)}$, and $C := \sum_{j} C^{(j)}$. \end{proof} \begin{lemma} \label{lem:PDMPCV} Under the same assumptions as in Lemma~\ref{lem:PDMPCVSquare}, \begin{align*} \Expect[\|\bar{X}(\StopT{t})-\bar{Z}(\StopT{t})\|] &\le_{C} D + E \int_{0}^{t} \Expect[\|\bar{X}(\StopT{s}) - \bar{Z}(\StopT{s})\| ] \, ds, \end{align*} where explicit expressions for $D$ and $E$ are found in \eqref{eq:bnd_coeffs2} below and depend on $\epsilon$, $\bar{P}$, and on the reaction topology $\mathbb{N}$, \begin{align} D &= \epsilon^{1/2+v/2}[\bar{L}(\bar{P})\bar{P}]^{1/2}, \quad E = [\epsilon^{u}+\epsilon^{v}]\bar{L}(\bar{P}).
\end{align} \end{lemma} \begin{proof} For voxel $j$ and for $i \in G_1(j)$, \begin{align*} \left| \bar{Z}_{ij}(\StopT{t}) - \bar{X}_{ij}(\StopT{t}) \right| \leq &\sum_{r = 1}^{R} \left| \mathbb{N}_{ri} \left( \Pi_{rj} \left( \cdot \right)-\Pi_{rj} \left( \cdot \right) \right) \right| \\ +&\sum_{k = 1}^{J} \left| \left( \Pi_{ijk}' \left( \cdot \right) - \Pi_{ijk}' \left( \cdot \right) \right) \right|+ \sum_{k = 1}^{J} \left| \left( \Pi_{ikj}' \left( \cdot \right) -\Pi_{ikj}' \left( \cdot \right) \right) \right|. \end{align*} \noindent We keep the same notation as in the previous lemma and thus write \begin{align*} &\left| \bar{Z}_{ij}(\StopT{t}) - \bar{X}_{ij}(\StopT{t}) \right| \leq \left( A_1 + A_2 + A_3 \right), \\ \intertext{where} \Expect[A_1 ] &= \sum_{r} |\mathbb{N}_{ri}| \Expect \left[ \left| \Pi_{rj} \left( \cdot \right)-\Pi_{rj} \left( \cdot \right) \right | \right] \\ &= \sum_{r} |\mathbb{N}_{ri}| \Expect \left[ \left| \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{Z}_{\cdot,j}(s)) \, ds - \int_{0}^{\StopT{t}} \epsilon^{-\nu_r} V_j \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \, ds \right | \right]\\ &\le_{C} \sum_{r} |\mathbb{N}_{ri}| \epsilon^{-\nu_{r}} \bar{L}_r(\bar{P}) \int_{0}^{t}\Expect\left[\|\bar{Z}_{\cdot,j}(\StopT{s})-\bar{X}_{\cdot,j}(\StopT{s})\|\right] \, ds. \end{align*} \noindent In the same spirit we find \begin{align*} \Expect[A_2] &\le_{C} \sum_{k = 1}^{J} \epsilon^{-\mu_i} \int_{0}^{t} \Expect\left[\|\bar{Z}_{\cdot,j}(\StopT{s})-\bar{X}_{\cdot,j}(\StopT{s})\|\right] \, ds, \end{align*} and an identical bound for $\Expect[A_3]$.
Continuing with $i \in G_2(j)$, \begin{align*} \left| \bar{Z}_{ij}(\StopT{t}) - \bar{X}_{ij}(\StopT{t}) \right| \leq \epsilon (A'_1 + A'_2 + A'_3), \end{align*} where \begin{align*} A'_1 &= \sum_{r = 1}^{R} |\mathbb{N}_{ri}| \left| \int_{0}^{\StopT{t}} V_j \epsilon^{-\nu_r} \bar{u}_r(V_j^{-1}\bar{Z}_{\cdot,j}(s)) \, ds -\Pi_{rj} \left( \int_{0}^{\StopT{t}} V_j \epsilon^{-\nu_r} \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \, ds\right) \right|, \\ A'_2 &= \sum_{k = 1}^{J} \left| \int_{0}^{\StopT{t}} \epsilon^{-\mu_i}\bar{q}_{ijk} \bar{Z}_{ij}(s) \, ds -\Pi_{ijk}' \left( \int_{0}^{\StopT{t}} \epsilon^{-\mu_i} \bar{q}_{ijk} \bar{X}_{ij}(s) \, ds \right) \right|, \\ A'_3 &=\sum_{k = 1}^{J} \left| \int_{0}^{\StopT{t}} \epsilon^{-\mu_i} \bar{q}_{ikj} \bar{Z}_{ik}(s) \, ds -\Pi_{ikj}' \left( \int_{0}^{\StopT{t}} \epsilon^{-\mu_i}\bar{q}_{ikj} \bar{X}_{ik}(s) \, ds \right) \right|. \end{align*} Using the same techniques developed previously we find {\small \begin{align*} \Expect[A'_1] &\leq \Expect \Biggl[ \sum_{r} |\mathbb{N}_{ri}| \int_{0}^{\StopT{t}} \epsilon^{-\nu_{r}} C_{\boldsymbol{l}} \bar{L}_r(V_j^{-1}\bar{P}) \|\bar{Z}(s)-\bar{X}(s)\| \, ds \\ &\hphantom{\leq \qquad} +|\mathbb{N}_{ri}|\left| \tilde{\Pi}_{rj} \left( \int_{0}^{\StopT{t}} V_j \epsilon^{-\nu_r} \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(s)) \, ds\right) \right| \Biggr]\\ &\le_{C} \sum_{r} |\mathbb{N}_{ri}| \epsilon^{-\nu_{r}} \bar{L}_r(\bar{P}) \int_{0}^{t} \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\| ]\, ds \\ &\hphantom{\leq \qquad} +|\mathbb{N}_{ri}|\Expect\left[ \tilde{\Pi}^2_{rj} \left( \int_{0}^{t} V_j \epsilon^{-\nu_r} \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(\StopT{s})) \, ds\right) \right]^{1/2} \\ &\le_{C} \sum_{r} |\mathbb{N}_{ri}| \epsilon^{-\nu_{r}} \bar{L}_r(\bar{P}) \int_{0}^{t} \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\| ]\, ds \\ &\hphantom{\leq \qquad} +|\mathbb{N}_{ri}|\Expect\left[ \left( \int_{0}^{t} V_j \epsilon^{-\nu_r} \bar{u}_r(V_j^{-1}\bar{X}_{\cdot,j}(\StopT{s})) \, ds\right)
\right]^{1/2}\\ &\le_{C} \sum_{r} |\mathbb{N}_{ri}| \epsilon^{-\nu_{r}} \bar{L}_r(\bar{P}) \int_{0}^{t} \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\| ]\, ds + |\mathbb{N}_{ri}| \left( t \epsilon^{-\nu_{r}} C_{\boldsymbol{l}} \bar{L}_r(\bar{P}) \bar{P} \right)^{1/2} \\ &\le_{C} \sum_{r} |\mathbb{N}_{ri}| \left( \epsilon^{-\nu_{r}} \bar{L}_r(\bar{P}) \int_{0}^{t} \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\| ]\, ds + \left(\epsilon^{-\nu_{r}} \bar{L}_r(\bar{P}) \bar{P} \right)^{1/2} \right). \end{align*}} In much the same spirit we get \begin{align*} \Expect[A'_2] &\le_{C} \sum_{k = 1}^J \epsilon^{-\mu_i} \int_{0}^{t} \Expect[\|\bar{Z}(\StopT{s})-\bar{X}(\StopT{s})\| ] \, ds+ \left( \epsilon^{-\mu_i}\right)^{1/2}, \end{align*} along with an identical bound for $\Expect[A'_3]$. Combined we thus get for the $j$th voxel, \begin{align} \nonumber \Expect \left[ \|\bar{Z}_{\cdot,j}(\StopT{t}) - \bar{X}_{\cdot,j}(\StopT{t})\|\right] &\le_{C} \sum_{i \in G_1(j)} \Expect\left[A_1 + A_2 + A_3 \right]+ \sum_{i \in G_2(j)} \epsilon \Expect\left[A'_1 + A'_2 + A'_3 \right] \\ \label{eq:bnd_coeffs2} &\hphantom{\Expect } \le_{C} D_j + E_j \int_0^{t} \Expect[\|\bar{X}(\StopT{s})-\bar{Z}(\StopT{s})\|] \, ds. \end{align} Summing over $j$ we get the stated result. \end{proof} \section{The split-step error} \label{sec:sserr} The consistency of the numerical split-step method hinges on the regularity of the kernel function $\sigma_{h}(s)$. The following lemma (borrowed from \cite[Lemma~3.7]{jsdesplit}), paired with the strong regularity of the involved processes, provides the order estimate in Theorems~\ref{th:spliterr2} and \ref{th:spliterr1}. Note that the result can be thought of as a c{\`a}dl{\`a}g\ version of the Riemann--Lebesgue lemma. 
\begin{lemma}[{\cite[Lemma~3.7]{jsdesplit}}] \label{lem:sigma2} Let $G:\mathbf{R}^{D} \to \mathbf{R}$ be a globally Lipschitz continuous function with Lipschitz constant $L$ and let $f:\mathbf{R} \to \mathbf{R}^{D}$ be a piecewise constant c{\`a}dl{\`a}g\ function. Then \begin{align} \label{eq:sigma_lemma2} \left| \int_{0}^{t} \sigma_{h}(s) G(f(s)) \, ds \right| &\le \frac{h}{2} |G(f(t))|+ \frac{h}{2} L V_{[0,t]}(f), \end{align} where the total absolute variation may be exchanged with the square root of the quadratic variation $[f]_{t}^{1/2}$. If $t$ is a multiple of $h$, then the first term on the right side of \eqref{eq:sigma_lemma2} vanishes. \end{lemma} The proofs of the following two lemmas follow closely the ideas in the proofs of Lemmas~\ref{lem:PDMPCVSquare} and \ref{lem:PDMPCV}, but use in addition Lemma~\ref{lem:sigma2} and Theorem~\ref{th:exist3} to bound certain additional terms. \begin{lemma} \label{lem:SplitStepCVSquare} Define $\bar{Z}(t)$ by \eqref{eq:RDMEPoissrepr_ex1}--\eqref{eq:RDMEPoissrepr_ex2} and ${\bar{Y}^{(h)}}(t)$ by \eqref{eq:RDMEPoissrepr_num1}--\eqref{eq:RDMEPoissrepr_num2} with $\bar{Z}(0) = {\bar{Y}^{(h)}}(0)$ almost surely. Then, under the stopping time $\StopT{t}$ defined in \eqref{eq:StopT}, for a fixed $\epsilon$ and $h$ small enough, \begin{align*} \Expect[\|\bar{Z}(\StopT{t})-{\bar{Y}^{(h)}}(\StopT{t})\|^2] &\le_{C} A + B \int_{0}^{t} \Expect[\|\bar{Z}(\StopT{s}) - {\bar{Y}^{(h)}}(\StopT{s})\|^2 ] \, ds \\ &\hphantom{\le_{C} A}+ C \int_{0}^{t} \Expect[\|\bar{Z}(\StopT{s}) - {\bar{Y}^{(h)}}(\StopT{s})\| ] \, ds, \end{align*} where \begin{align*} A &= \epsilon^u \bar{L}(\bar{P}) [\epsilon^u\bar{L}(\bar{P})\bar{P}+1] h+ \epsilon^{2v} [\bar{L}(\bar{P})]^2 h^2, \\ B &= \epsilon^{2v} [\bar{L}(\bar{P})]^2, \quad C = \epsilon^u\bar{L}(\bar{P}) [\epsilon^u\bar{L}(\bar{P})\bar{P}+1]. 
\end{align*} \end{lemma} \begin{lemma} \label{lem:SplitStepCV} Under the same assumptions as in Lemma~\ref{lem:SplitStepCVSquare}, \begin{align*} \Expect[\|\bar{Z}(\StopT{t})-{\bar{Y}^{(h)}}(\StopT{t})\|] &\le_{C} D + E \int_{0}^{t} \Expect[\|\bar{Z}(\StopT{s}) - {\bar{Y}^{(h)}}(\StopT{s})\| ] \, ds, \end{align*} with \begin{align*} D &= [\epsilon^u + \epsilon^v]\bar{L}(\bar{P}) h, \quad E = [\epsilon^u + \epsilon^v]\bar{L}(\bar{P}). \end{align*} \end{lemma} \newcommand{\doi}[1]{\href{http://dx.doi.org/#1}{doi:#1}} \newcommand{\available}[1]{Available at \url{#1}} \newcommand{\availablet}[2]{Available at \href{#1}{#2}} \end{document}
\begin{document} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{conjecture}{Conjecture} \newtheorem{definition}{Definition} \newcommand{\mc}{\mathcal} \newcommand{\bx}{\boldsymbol x} \title[Mahler measure]{Lower bounds for Mahler measure that\\depend on the number of monomials} \author{Shabnam Akhtari and Jeffrey~D.~Vaaler} \subjclass[2010]{11R06} \keywords{Mahler measure, polynomial inequalities} \address{Department of Mathematics, University of Oregon, Eugene, Oregon 97402 USA \\ Max Planck Institute for Mathematics, Vivatsgasse 7, 53111 Bonn, Germany} \email{[email protected]} \address{Department of Mathematics, University of Texas, Austin, Texas 78712 USA} \email{[email protected]} \thanks{Shabnam Akhtari's research is funded by the NSF grant DMS-1601837.} \begin{abstract} We prove a new lower bound for the Mahler measure of a polynomial in one and in several variables that depends on the complex coefficients and on the number of monomials. In one variable our result generalizes a classical inequality of Mahler. In $M$ variables our result depends on $\mathbb{Z}^M$ as an ordered group, and in general our lower bound depends on the choice of ordering. \end{abstract} \maketitle \numberwithin{equation}{section} \section{Introduction} Let $P(z)$ be a polynomial in $\mathbb{C}[z]$ that is not identically zero. We assume to begin with that $P$ has degree $N$, and that $P$ factors into linear factors in $\mathbb{C}[z]$ as \begin{equation}\label{intro1} P(z) = c_0 + c_1z + c_2 z^2 + \cdots + c_N z^N = c_N \prod_{n = 1}^N (z - \alpha_n). 
\end{equation} If $e : \mathbb{R}/\mathbb{Z} \rightarrow \mathbb{T}$ is the continuous isomorphism given by $e(t) = e^{2 \pi i t}$, then the Mahler measure of $P$ is the positive real number \begin{equation}\label{intro5} \mathfrak M(P) = \exp\biggl(\int_{\mathbb{R}/\mathbb{Z}} \log \bigl|P\bigl(e(t)\bigr)\bigr|\ \text{\rm d}t\biggr) = |c_N| \prod_{n = 1}^N \max\{1, |\alpha_n|\}. \end{equation} The equality on the right of (\ref{intro5}) follows from Jensen's formula. If $P_1(z)$ and $P_2(z)$ are both nonzero polynomials in $\mathbb{C}[z]$, then it is immediate from (\ref{intro5}) that \begin{equation*}\label{intro7} \mathfrak M\bigl(P_1 P_2\bigr) = \mathfrak M\bigl(P_1\bigr) \mathfrak M\bigl(P_2\bigr). \end{equation*} Mahler measure plays an important role in number theory and in algebraic dynamics, as discussed in \cite{everest1999}, \cite{pritsker2007}, \cite[Chapter 5]{schmidt1995}, and \cite{smyth2007}. Here we restrict our attention to the problem of proving a lower bound for $\mathfrak M(P)$ when the polynomial $P(z)$ has complex coefficients. We establish an analogous result for polynomials in several variables. For $P(z)$ of degree $N$ and given by (\ref{intro1}), there is a well known lower bound due to Mahler which asserts that \begin{equation}\label{intro10} |c_n| \le \binom{N}{n} \mathfrak M(P),\quad\text{for each $n = 0, 1, 2, \dots , N$}. \end{equation} The inequality (\ref{intro10}) is implicit in \cite{mahler1960}, and is stated explicitly in \cite[section 2]{mahler1962} (see also the proof in \cite[Theorem 1.6.7]{bombieri2006}). If \begin{equation*}\label{intro15} P(z) = (z \pm 1)^N, \end{equation*} then there is equality in (\ref{intro10}) for each $n = 0, 1, 2, \dots , N$. 
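For example, the polynomial $P(z) = z^2 - z - 1$ has zeros $\frac12\bigl(1 \pm \sqrt{5}\bigr)$, so the product formula in (\ref{intro5}) gives \begin{equation*} \mathfrak M(P) = \max\Bigl\{1, \tfrac{1 + \sqrt{5}}{2}\Bigr\} \max\Bigl\{1, \tfrac{\sqrt{5} - 1}{2}\Bigr\} = \frac{1 + \sqrt{5}}{2}, \end{equation*} and the case $N = 2$, $n = 1$ of (\ref{intro10}) asserts that $|c_1| = 1 \le 2\, \mathfrak M(P) = 1 + \sqrt{5}$. 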
We now assume that $P(z)$ is a polynomial in $\mathbb{C}[z]$ that is not identically zero, and we assume that $P(z)$ is given by \begin{equation}\label{intro20} P(z) = c_0 z^{m_0} + c_1 z^{m_1} + c_2 z^{m_2} + \cdots + c_N z^{m_N}, \end{equation} where $N$ is a nonnegative integer, and $m_0, m_1, m_2, \dots , m_N$, are nonnegative integers such that \begin{equation}\label{intro25} m_0 < m_1 < m_2 < \cdots < m_N. \end{equation} We wish to establish a lower bound for $\mathfrak M(P)$ which depends on the coefficients and on the number of monomials, but which does {\it not} depend on the degree of $P$. Such a result was recently proved by Dobrowolski and Smyth \cite{smyth2017}. We use a similar argument, but we obtain a sharper result that includes Mahler's inequality (\ref{intro10}) as a special case. \begin{theorem}\label{thmintro1} Let $P(z)$ be a polynomial in $\mathbb{C}[z]$ that is not identically zero, and is given by {\rm (\ref{intro20})}. Then we have \begin{equation}\label{intro30} |c_n| \le \binom{N}{n} \mathfrak M(P),\quad\text{for each $n = 0, 1, 2, \dots , N$}. \end{equation} \end{theorem} Let $f : \mathbb{R}/\mathbb{Z} \rightarrow \mathbb{C}$ be a trigonometric polynomial, not identically zero, and a sum of at most $N + 1$ distinct characters. Then we can write $f$ as \begin{equation}\label{intro33} f(t) = \sum_{n = 0}^N c_n e(m_n t), \end{equation} where $c_0, c_1, c_2, \dots , c_N$, are complex coefficients, and $m_0, m_1, m_2, \dots , m_N$, are integers such that \begin{equation*}\label{intro35} m_0 < m_1 < m_2 < \cdots < m_N. \end{equation*} As $f$ is not identically zero, the Mahler measure of $f$ is the positive number \begin{equation*}\label{intro38} \mathfrak M(f) = \exp\biggl(\int_{\mathbb{R}/\mathbb{Z}} \log |f(t)|\ \text{\rm d}t\biggr). \end{equation*} It is trivial that $f(t)$ and $e(-m_0 t)f(t)$ have the same Mahler measure. 
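To see that the binomial coefficient in Theorem \ref{thmintro1} counts monomials rather than powers of $z$, consider the trinomial \begin{equation*} P(z) = 1 + z^d + z^{2d}, \end{equation*} where $d$ is a large positive integer. Every zero of $P$ satisfies $z^d = e^{\pm 2 \pi i/3}$ and so lies on the unit circle, whence $\mathfrak M(P) = 1$. Here $N = 2$, and Theorem \ref{thmintro1} gives $1 = |c_n| \le \binom{2}{n} \mathfrak M(P)$ for $n = 0, 1, 2$, while Mahler's inequality (\ref{intro10}) applied to the middle coefficient yields only $|c_d| \le \binom{2d}{d} \mathfrak M(P)$, a lower bound for $\mathfrak M(P)$ that deteriorates as $d$ grows. 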
Thus we get the following alternative formulation of Theorem \ref{thmintro1}. \begin{corollary}\label{corintro1} Let $f(t)$ be a trigonometric polynomial with complex coefficients that is not identically zero, and is given by {\rm (\ref{intro33})}. Then we have \begin{equation}\label{intro40} |c_n| \le \binom{N}{n} \mathfrak M(f),\quad\text{for each $n = 0, 1, 2, \dots , N$}. \end{equation} \end{corollary} For positive integers $M$ we will prove an extension of Corollary \ref{corintro1} to trigonometric polynomials \begin{equation}\label{intro50} F : (\mathbb{R}/\mathbb{Z})^M \rightarrow \mathbb{C}, \end{equation} that are not identically zero. The Fourier transform of $F$ is the function \begin{equation*}\label{intro53} \widehat F : \mathbb{Z}^M \rightarrow \mathbb{C}, \end{equation*} defined at each lattice point $\boldsymbol k$ in $\mathbb{Z}^M$ by \begin{equation}\label{intro55} \widehat F(\boldsymbol k) = \int_{(\mathbb{R}/\mathbb{Z})^M} F(\boldsymbol x) e\bigl(-\boldsymbol k^T \boldsymbol x\bigr)\ \text{\rm d}\bx. \end{equation} In the integral on the right of (\ref{intro55}) we write $\text{\rm d}\bx$ for integration with respect to a Haar measure on the Borel subsets of $(\mathbb{R}/\mathbb{Z})^M$ normalized so that $(\mathbb{R}/\mathbb{Z})^M$ has measure $1$. We write $\boldsymbol k$ for a (column) vector in $\mathbb{Z}^M$, $\boldsymbol k^T$ for the transpose of $\boldsymbol k$, $\boldsymbol x$ for a (column) vector in $(\mathbb{R}/\mathbb{Z})^M$, and therefore \begin{equation*}\label{intro60} \boldsymbol k^T \boldsymbol x = k_1 x_1 + k_2 x_2 + \cdots + k_M x_M. \end{equation*} As $F$ is not identically zero, the Mahler measure of $F$ is the positive real number \begin{equation*}\label{intro75} \mathfrak M(F) = \exp\biggl(\int_{(\mathbb{R}/\mathbb{Z})^M} \log \bigl|F(\boldsymbol x)\bigr|\ \text{\rm d}\bx\biggr). 
\end{equation*} We assume that $\mathfrak S \subseteq \mathbb{Z}^M$ is a nonempty, finite set that contains the support of $\widehat F$. That is, we assume that \begin{equation}\label{intro80} \{\boldsymbol k \in \mathbb{Z}^M : \widehat F(\boldsymbol k) \not= 0\} \subseteq \mathfrak S, \end{equation} and therefore $F$ has the representation \begin{equation}\label{intro70} F(\boldsymbol x) = \sum_{\boldsymbol k \in \mathfrak S} \widehat F(\boldsymbol k) e\bigl(\boldsymbol k^T \boldsymbol x\bigr). \end{equation} Basic results in this setting can be found in Rudin \cite[Sections 8.3 and 8.4]{rudin1962}. If $\boldsymbol \alpha = (\alpha_m)$ is a (column) vector in $\mathbb{R}^M$, we write \begin{equation*}\label{intro100} \varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R} \end{equation*} for the homomorphism given by \begin{equation}\label{intro105} \varphi_{\boldsymbol \alpha}(\boldsymbol k) = \boldsymbol k^T \boldsymbol \alpha = k_1 \alpha_1 + k_2 \alpha_2 + \cdots + k_M \alpha_M. \end{equation} It is easy to verify that $\varphi_{\boldsymbol \alpha}$ is an injective homomorphism if and only if the coordinates $\alpha_1, \alpha_2, \dots , \alpha_M$, are $\mathbb{Q}$-linearly independent real numbers. Let the nonempty, finite set $\mathfrak S \subseteq \mathbb{Z}^M$ have cardinality $N+1$, where $0 \le N$. If $\varphi_{\boldsymbol \alpha}$ is an injective homomorphism, then the set \begin{equation*}\label{intro110} \big\{\varphi_{\boldsymbol \alpha}(\boldsymbol k) : \boldsymbol k \in \mathfrak S\big\} \end{equation*} consists of exactly $N+1$ real numbers. 
It follows that the set $\mathfrak S$ can be indexed so that \begin{equation}\label{intro115} \mathfrak S = \big\{\boldsymbol k_0, \boldsymbol k_1, \boldsymbol k_2, \dots , \boldsymbol k_N\big\}, \end{equation} and \begin{equation}\label{intro120} \varphi_{\boldsymbol \alpha}\bigl(\boldsymbol k_0\bigr) < \varphi_{\boldsymbol \alpha}\bigl(\boldsymbol k_1\bigr) < \varphi_{\boldsymbol \alpha}\bigl(\boldsymbol k_2\bigr) < \cdots < \varphi_{\boldsymbol \alpha}\bigl(\boldsymbol k_N\bigr). \end{equation} By using a limiting argument introduced in a paper of Boyd \cite{boyd1981}, we will prove the following generalization of (\ref{intro40}). \begin{theorem}\label{thmintro2} Let $F : (\mathbb{R}/\mathbb{Z})^M \rightarrow \mathbb{C}$ be a trigonometric polynomial that is not identically zero, and is given by {\rm (\ref{intro70})}. Let $\varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be an injective homomorphism, and assume that the finite set $\mathfrak S$, which contains the support of $\widehat F$, is indexed so that {\rm (\ref{intro115})} and {\rm (\ref{intro120})} hold. Then we have \begin{equation}\label{intro125} \bigl|\widehat F\bigl(\boldsymbol k_n\bigr)\bigr| \le \binom{N}{n} \mathfrak M(F),\quad\text{for each $n = 0, 1, 2, \dots , N$}. \end{equation} \end{theorem} Let $F$ and $\varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be as in the statement of Theorem \ref{thmintro2}, and then let $\varphi_{\boldsymbol \beta} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be a second injective homomorphism. 
It follows that $\mathfrak S$ can be indexed so that (\ref{intro115}) and (\ref{intro120}) hold, and $\mathfrak S$ can also be indexed so that \begin{equation}\label{intro145} \mathfrak S = \big\{\boldsymbol \ell_0, \boldsymbol \ell_1, \boldsymbol \ell_2, \dots , \boldsymbol \ell_N\big\}, \end{equation} and \begin{equation}\label{intro150} \varphi_{\boldsymbol \beta}\bigl(\boldsymbol \ell_0\bigr) < \varphi_{\boldsymbol \beta}\bigl(\boldsymbol \ell_1\bigr) < \varphi_{\boldsymbol \beta}\bigl(\boldsymbol \ell_2\bigr) < \cdots < \varphi_{\boldsymbol \beta}\bigl(\boldsymbol \ell_N\bigr). \end{equation} In general the indexing (\ref{intro115}) is distinct from the indexing (\ref{intro145}). Therefore the system of inequalities \begin{equation}\label{intro155} \bigl|\widehat F\bigl(\boldsymbol k_n\bigr)\bigr| \le \binom{N}{n} \mathfrak M(F),\quad\text{for each $n = 0, 1, 2, \dots , N$}, \end{equation} and \begin{equation}\label{intro160} \bigl|\widehat F\bigl(\boldsymbol \ell_n\bigr)\bigr| \le \binom{N}{n} \mathfrak M(F),\quad\text{for each $n = 0, 1, 2, \dots , N$}, \end{equation} which follow from Theorem \ref{thmintro2}, are different, and in general neither system of inequalities implies the other. \section{Proof of Theorem \ref{thmintro1}} It follows from (\ref{intro5}) that the polynomial $P(z)$, and the polynomial $z^{-m_0} P(z)$, have the same Mahler measure. Hence we may assume without loss of generality that the exponents $m_0, m_1, m_2, \dots , m_N$, in the representation (\ref{intro20}) satisfy the more restrictive condition \begin{equation}\label{mahler250} 0 = m_0 < m_1 < m_2 < \cdots < m_N. \end{equation} If $N = 0$ then (\ref{intro30}) is trivial. 
If $N = 1$, then \begin{equation*}\label{mahler252} \binom{1}{0} = \binom{1}{1} = 1, \end{equation*} and using Jensen's formula we find that \begin{equation*}\label{mahler254} \mathfrak M\bigl(c_0 + c_1 z^{m_1}\bigr) = \max\{|c_0|, |c_1|\}. \end{equation*} Therefore the inequality (\ref{intro30}) holds if $N = 1$. Throughout the remainder of the proof we assume that $2 \le N$, and we argue by induction on $N$. Thus we assume that the inequality (\ref{intro30}) holds for polynomials that can be expressed as a sum of strictly less than $N + 1$ monomials. Besides the polynomial \begin{equation}\label{mahler256} P(z) = c_0 z^{m_0} + c_1 z^{m_1} + c_2 z^{m_2} + \cdots + c_N z^{m_N}, \end{equation} we will work with the polynomial \begin{equation}\label{mahler258} Q(z) = z^{m_N} P\bigl(z^{-1}\bigr) = c_0 z^{m_N - m_0} + c_1 z^{m_N - m_1} + c_2 z^{m_N- m_2} + \cdots + c_N. \end{equation} It follows from (\ref{intro5}) that \begin{equation}\label{mahler260} \mathfrak M(Q) = \exp\biggl(\int_{\mathbb{R}/\mathbb{Z}} \log \bigl|e(m_N t) P\bigl(e(- t)\bigr)\bigr|\ \text{\rm d}t\biggr) = \mathfrak M(P). \end{equation} Next we apply an inequality of Mahler \cite{mahler1961} to conclude that both \begin{equation}\label{mahler262} \mathfrak M\bigl(P^{\prime}\bigr) \le m_N \mathfrak M(P),\quad\text{and}\quad \mathfrak M\bigl(Q^{\prime}\bigr) \le m_N \mathfrak M(Q). \end{equation} Because \begin{equation*}\label{mahler266} P^{\prime}(z) = \sum_{n = 1}^N c_n m_n z^{m_n - 1} \end{equation*} is a sum of strictly less than $N + 1$ monomials, we can apply the inductive hypothesis to $P^{\prime}$. It follows that \begin{equation}\label{mahler271} |c_n| m_n \le \binom{N-1}{n-1} \mathfrak M\bigl(P^{\prime}\bigr) \le m_N \binom{N-1}{n-1} \mathfrak M(P) \end{equation} for each $n = 1, 2, \dots , N$. 
As \begin{equation*}\label{276} m_0 = 0,\quad\text{and}\quad \binom{N-1}{-1} = 0, \end{equation*} it is trivial that (\ref{mahler271}) also holds at $n = 0$. In a similar manner, \begin{equation*}\label{mahler281} Q^{\prime}(z) = \sum_{n = 0}^{N-1} c_n (m_N - m_n) z^{m_N - m_n - 1} \end{equation*} is a sum of strictly less than $N + 1$ monomials. We apply the inductive hypothesis to $Q^{\prime}$, and get the inequality \begin{equation}\label{mahler286} |c_n| (m_N - m_n) \le \binom{N-1}{N - 1 - n} \mathfrak M\bigl(Q^{\prime}\bigr) \le m_N \binom{N-1}{n} \mathfrak M(Q) \end{equation} for each $n = 0, 1, 2, \dots , N-1$. In this case we have \begin{equation*}\label{mahler291} (m_N - m_N) = 0,\quad\text{and}\quad \binom{N-1}{N} = 0, \end{equation*} and therefore (\ref{mahler286}) also holds at $n = N$. To complete the proof we use the identity (\ref{mahler260}), and we apply the inequality (\ref{mahler271}), and the inequality (\ref{mahler286}). In this way we obtain the bound \begin{equation}\label{mahler296} \begin{split} |c_n| m_N &= |c_n| m_n + |c_n| (m_N - m_n)\\ &\le m_N \binom{N-1}{n-1} \mathfrak M(P) + m_N \binom{N-1}{n} \mathfrak M(P)\\ &= m_N \binom{N}{n} \mathfrak M(P). \end{split} \end{equation} This verifies (\ref{intro30}). \section{Archimedean orderings in the group $\mathbb{Z}^M$} In this section we consider $\mathbb{Z}^M$ as an ordered group. To avoid degenerate situations, we assume throughout this section that $2 \le M$. Let $\boldsymbol \alpha$ belong to $\mathbb{R}^M$, and let $\varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be the homomorphism defined by (\ref{intro105}). We assume that the coordinates $\alpha_1, \alpha_2, \dots , \alpha_M$, are $\mathbb{Q}$-linearly independent so that $\varphi_{\boldsymbol \alpha}$ is an injective homomorphism. 
It follows, as in \cite[Theorem 8.1.2 (c)]{rudin1962}, that $\varphi_{\boldsymbol \alpha}$ induces an archimedean ordering in the group $\mathbb{Z}^M$. That is, if $\boldsymbol k$ and $\boldsymbol \ell$ are distinct points in $\mathbb{Z}^M$ we write $\boldsymbol k < \boldsymbol \ell$ if and only if \begin{equation*}\label{order-5} \varphi_{\boldsymbol \alpha}(\boldsymbol k) = \boldsymbol k^T \boldsymbol \alpha < \varphi_{\boldsymbol \alpha}(\boldsymbol \ell) = \boldsymbol \ell^T \boldsymbol \alpha \end{equation*} in $\mathbb{R}$. Therefore $\bigl(\mathbb{Z}^M, <\bigr)$ is an ordered group, and the order is archimedean. If $\mathfrak S \subseteq \mathbb{Z}^M$ is a nonempty, finite subset of cardinality $N + 1$, then the elements of $\mathfrak S$ can be indexed so that \begin{equation}\label{order1} \mathfrak S = \big\{\boldsymbol k_0, \boldsymbol k_1, \boldsymbol k_2, \dots , \boldsymbol k_N\big\} \end{equation} and \begin{equation}\label{order5} \boldsymbol k_0^T \boldsymbol \alpha < \boldsymbol k_1^T \boldsymbol \alpha < \boldsymbol k_2^T \boldsymbol \alpha < \cdots < \boldsymbol k_N^T \boldsymbol \alpha. \end{equation} A more general discussion of ordered groups is given in \cite[Chapter 8]{rudin1962}. Here we require only the indexing (\ref{order1}) that is induced in the finite subset $\mathfrak S$ by the injective homomorphism $\varphi_{\boldsymbol \alpha}$. If $\boldsymbol b = (b_m)$ is a (column) vector in $\mathbb{Z}^M$, we define the norm \begin{equation}\label{order7} \|\boldsymbol b\|_{\infty} = \max\big\{|b_m| : 1 \le m \le M\big\}. \end{equation} And if $\mathfrak S \subseteq \mathbb{Z}^M$ is a nonempty, finite subset we write \begin{equation*}\label{order10} \|\mathfrak S\|_{\infty} = \max\big\{\|\boldsymbol k\|_{\infty} : \boldsymbol k \in \mathfrak S\big\}. 
\end{equation*} Following Boyd \cite{boyd1981}, we define the function \begin{equation*}\label{order20} \nu : \mathbb{Z}^M \setminus \{\boldsymbol 0\} \rightarrow \{1, 2, 3, \dots \} \end{equation*} by \begin{equation}\label{order25} \nu(\boldsymbol a) = \min\big\{\|\boldsymbol b\|_{\infty} : \text{$\boldsymbol b \in \mathbb{Z}^M$, $\boldsymbol b \not= \boldsymbol 0$, and $\boldsymbol b^T \boldsymbol a = 0$}\big\}. \end{equation} It is known (see \cite{boyd1981}) that the function $\boldsymbol a \mapsto \nu(\boldsymbol a)$ is unbounded, and a stronger conclusion follows from our Lemma \ref{lemorder2}. Moreover, if $\nu(\boldsymbol a)$ is sufficiently large, then the map $\boldsymbol k \mapsto \boldsymbol k^T \boldsymbol a$ restricted to points $\boldsymbol k$ in the finite subset $\mathfrak S$ takes distinct integer values, and therefore induces an ordering in $\mathfrak S$. This follows immediately from the triangle inequality for the norm (\ref{order7}), and was noted in \cite{boyd1981}. As this result will be important in our proof of Theorem \ref{thmintro2}, we prove it here as a separate lemma. \begin{lemma}[{\sc D.~Boyd}]\label{lemorder1} Let $\mathfrak S \subseteq \mathbb{Z}^M$ be a nonempty, finite subset with cardinality $|\mathfrak S| = N + 1$, and let $\boldsymbol a \not= \boldsymbol 0$ be a point in $\mathbb{Z}^M$ such that \begin{equation}\label{order28} 2 \|\mathfrak S\|_{\infty} < \nu(\boldsymbol a). \end{equation} Then \begin{equation}\label{order30} \big\{\boldsymbol k^T \boldsymbol a : \boldsymbol k \in \mathfrak S\big\} \end{equation} is a collection of $N + 1$ distinct integers. \end{lemma} \begin{proof} If $N = 0$ the result is trivial. Assume that $1 \le N$, and let $\boldsymbol k$ and $\boldsymbol \ell$ be distinct points in $\mathfrak S$. 
If \begin{equation*}\label{order35} \boldsymbol k^T \boldsymbol a = \boldsymbol \ell^T \boldsymbol a, \end{equation*} then \begin{equation*}\label{order40} (\boldsymbol k - \boldsymbol \ell)^T \boldsymbol a = 0. \end{equation*} It follows that \begin{equation*}\label{order45} \nu(\boldsymbol a) \le \|\boldsymbol k - \boldsymbol \ell\|_{\infty} \le \|\boldsymbol k\|_{\infty} + \|\boldsymbol \ell\|_{\infty} \le 2 \|\mathfrak S\|_{\infty}, \end{equation*} and this contradicts the hypothesis (\ref{order28}). We conclude that (\ref{order30}) contains $N + 1$ distinct integers. \end{proof} Let $\varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be an injective homomorphism, and let $\mathfrak S \subseteq \mathbb{Z}^M$ be a nonempty, finite subset of cardinality $N + 1$. We assume that the elements of $\mathfrak S$ are indexed so that both (\ref{order1}) and (\ref{order5}) hold. If $\boldsymbol a \not= \boldsymbol 0$ in $\mathbb{Z}^M$ satisfies (\ref{order28}), then it may happen that the indexing (\ref{order1}) also satisfies the system of inequalities \begin{equation*}\label{order50} \boldsymbol k_0^T \boldsymbol a < \boldsymbol k_1^T \boldsymbol a < \boldsymbol k_2^T \boldsymbol a < \cdots < \boldsymbol k_N^T \boldsymbol a. \end{equation*} We write $\mc B(\boldsymbol \alpha, \mathfrak S)$ for the collection of such lattice points $\boldsymbol a$. That is, we define \begin{equation}\label{order55} \begin{split} \mc B(\boldsymbol \alpha, \mathfrak S) &= \big\{\boldsymbol a \in \mathbb{Z}^M : \text{$2 \|\mathfrak S\|_{\infty} < \nu(\boldsymbol a)$}\\ &\qquad\qquad\text{and $\boldsymbol k_0^T \boldsymbol a < \boldsymbol k_1^T \boldsymbol a < \boldsymbol k_2^T \boldsymbol a < \cdots < \boldsymbol k_N^T \boldsymbol a$}\big\}. \end{split} \end{equation} The following lemma establishes a crucial property of $\mc B(\boldsymbol \alpha, \mathfrak S)$. 
\begin{lemma}\label{lemorder2} Let the subset $\mc B(\boldsymbol \alpha, \mathfrak S)$ be defined by {\rm (\ref{order55})}. Then $\mc B(\boldsymbol \alpha, \mathfrak S)$ is an infinite set, and the function $\nu$ is unbounded on $\mc B(\boldsymbol \alpha, \mathfrak S)$. \end{lemma} \begin{proof} By hypothesis \begin{equation}\label{order265} \eta = \eta(\boldsymbol \alpha, \mathfrak S) = \min\big\{\boldsymbol k_n^T \boldsymbol \alpha - \boldsymbol k_{n-1}^T \boldsymbol \alpha : 1 \le n \le N\big\} \end{equation} is a positive constant that depends on $\boldsymbol \alpha$ and $\mathfrak S$. By Dirichlet's theorem in Diophantine approximation (see \cite{cassels1965} or \cite{schmidt1980}), for each positive integer $Q$ there exists an integer $q$ such that $1 \le q \le Q$, and \begin{equation}\label{order270} \max\big\{\|q \alpha_m\| : m = 1, 2, \dots , M\big\} \le (Q + 1)^{-\frac{1}{M}} \le (q + 1)^{-\frac{1}{M}}, \end{equation} where $\|\ \|$ on the left of (\ref{order270}) is the distance to the nearest integer function. Let $\mc Q$ be the collection of positive integers $q$ such that \begin{equation}\label{order272} \max\big\{\|q \alpha_m\| : m = 1, 2, \dots , M\big\} \le (q + 1)^{-\frac{1}{M}}. \end{equation} Because $2 \le M$, at least one of the coordinates $\alpha_m$ is irrational, and it follows from (\ref{order270}) that $\mc Q$ is an infinite set. For each positive integer $q$ in $\mc Q$, we select integers $b_{1 q}, b_{2 q}, \dots , b_{M q}$, so that \begin{equation}\label{order275} \|q \alpha_m\| = |q \alpha_m - b_{m q}|,\quad\text{for $m = 1, 2, \dots , M$}. \end{equation} Then (\ref{order272}) can be written as \begin{equation}\label{order277} \max\big\{|q \alpha_m - b_{m q}| : m = 1, 2, \dots , M\big\} \le (q + 1)^{-\frac{1}{M}}. 
\end{equation} Let $\boldsymbol b_q = \bigl(b_{m q}\bigr)$ be the corresponding lattice point in $\mathbb{Z}^M$, so that $q \mapsto \boldsymbol b_q$ is a map from $\mc Q$ into $\mathbb{Z}^M$. It follows using (\ref{order265}) and (\ref{order277}) that for each index $n$ we have \begin{equation*}\label{order295} \begin{split} q \eta &\le q \boldsymbol k_n^T \boldsymbol \alpha - q \boldsymbol k_{n-1}^T \boldsymbol \alpha\\ &= \boldsymbol k_n^T \boldsymbol b_q - \boldsymbol k_{n-1}^T \boldsymbol b_q + \bigl(\boldsymbol k_n - \boldsymbol k_{n-1}\bigr)^T (q \boldsymbol \alpha - \boldsymbol b_q)\\ &\le \boldsymbol k_n^T \boldsymbol b_q - \boldsymbol k_{n-1}^T \boldsymbol b_q + 2 \|\mathfrak S\|_{\infty} \biggl(\sum_{m = 1}^M |q\alpha_m - b_{m q}|\biggr)\\ &\le \boldsymbol k_n^T \boldsymbol b_q - \boldsymbol k_{n-1}^T \boldsymbol b_q + 2 \|\mathfrak S\|_{\infty} M (q + 1)^{-\frac{1}{M}}. \end{split} \end{equation*} Therefore for each sufficiently large integer $q$ in $\mc Q$, the lattice point $\boldsymbol b_q$ satisfies the system of inequalities \begin{equation*}\label{order296} \boldsymbol k_0^T \boldsymbol b_q < \boldsymbol k_1^T \boldsymbol b_q < \boldsymbol k_2^T \boldsymbol b_q < \cdots < \boldsymbol k_N^T \boldsymbol b_q. \end{equation*} Moreover, the argument given below (applied with $B = 2\|\mathfrak S\|_{\infty}$) shows that $2\|\mathfrak S\|_{\infty} < \nu(\boldsymbol b_q)$ for all sufficiently large $q$ in $\mc Q$. We conclude that for a sufficiently large integer $L$ we have \begin{equation}\label{order298} \big\{\boldsymbol b_q : \text{$L \le q$ and $q \in \mc Q$}\big\} \subseteq \mc B(\boldsymbol \alpha, \mathfrak S). \end{equation} This shows that $\mc B(\boldsymbol \alpha, \mathfrak S)$ is an infinite set. To complete the proof we will show that the function $\nu$ is unbounded on the infinite collection of lattice points \begin{equation}\label{order302} \big\{\boldsymbol b_q : \text{$L \le q$ and $q \in \mc Q$}\big\}.
\end{equation} If $\nu$ is bounded on (\ref{order302}), then there exists a positive integer $B$ such that \begin{equation}\label{order305} \nu(\boldsymbol b_q) \le B \end{equation} for all points $\boldsymbol b_q$ in the set (\ref{order302}). Let $\mc C_B$ be the finite set \begin{equation*}\label{order310} \mc C_B = \big\{\boldsymbol c \in \mathbb{Z}^M : 1 \le \|\boldsymbol c\|_{\infty} \le B\big\}. \end{equation*} Because $\alpha_1, \alpha_2, \dots , \alpha_M$ are $\mathbb{Q}$-linearly independent and $\mc C_B$ is a finite set of nonzero lattice points, we have \begin{equation}\label{order320} 0 < \delta_B = \min\bigg\{\biggl|\sum_{m = 1}^M c_m \alpha_m\biggr| : \boldsymbol c \in \mc C_B\bigg\}. \end{equation} By our assumption (\ref{order305}), for each point $\boldsymbol b_q$ in (\ref{order302}) there exists a point $\boldsymbol c_q = (c_{m q})$ in $\mc C_B$ such that \begin{equation}\label{order325} \boldsymbol c_q^T \boldsymbol b_q = \sum_{m = 1}^M c_{m q} b_{m q} = 0. \end{equation} Using (\ref{order277}) and (\ref{order325}), we find that \begin{equation}\label{order330} \begin{split} q \delta_B &\le q \biggl|\sum_{m = 1}^M c_{m q} \alpha_m \biggr|\\ &= \biggl|\sum_{m = 1}^M c_{m q}\bigl(q \alpha_m - b_{m q}\bigr)\biggr|\\ &\le \biggl(\sum_{m = 1}^M |c_{m q}|\biggr) \max\big\{|q \alpha_m - b_{m q}| : m = 1, 2, \dots , M\big\}\\ &\le M B (q + 1)^{-\frac{1}{M}}. \end{split} \end{equation} But (\ref{order330}) is impossible when $q$ is sufficiently large, and the contradiction implies that the assumption (\ref{order305}) is false. We have shown that $\nu$ is unbounded on the set (\ref{order302}). In view of (\ref{order298}), the function $\nu$ is unbounded on $\mc B(\boldsymbol \alpha, \mathfrak S)$. \end{proof} \section{Proof of Theorem \ref{thmintro2}} If $M = 1$ then the inequality (\ref{intro125}) follows from Corollary \ref{corintro1}. Therefore we assume that $2 \le M$.
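Before carrying out the proof, we illustrate the quantity $\nu$ of (\ref{order25}) with a small example (not needed in what follows). Take $M = 2$ and $\boldsymbol a = (2, 5)$. The nonzero integer vectors $\boldsymbol c$ with $\boldsymbol c^T \boldsymbol a = 2 c_1 + 5 c_2 = 0$ are exactly the nonzero multiples of $(5, -2)$, so \begin{equation*} \nu(\boldsymbol a) = \|(5, -2)\|_{\infty} = 5. \end{equation*} Hence $\boldsymbol a = (2, 5)$ satisfies the first condition defining $\mc B(\boldsymbol \alpha, \mathfrak S)$ whenever $\|\mathfrak S\|_{\infty} \le 2$, and it belongs to $\mc B(\boldsymbol \alpha, \mathfrak S)$ precisely when it also reproduces the ordering (\ref{order50}) of the elements of $\mathfrak S$ induced by $\varphi_{\boldsymbol \alpha}$.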
Let $\varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be an injective homomorphism, and let the set $\mathfrak S$ be indexed so that {\rm (\ref{intro115})} and {\rm (\ref{intro120})} hold. It follows from Lemma \ref{lemorder2} that the collection of lattice points $\mc B(\boldsymbol \alpha, \mathfrak S)$ defined by (\ref{order55}) is an infinite set, and the function $\nu$ defined by (\ref{order25}) is unbounded on $\mc B(\boldsymbol \alpha, \mathfrak S)$. Let $\boldsymbol a$ be a lattice point in $\mc B(\boldsymbol \alpha, \mathfrak S)$. If $F : (\mathbb{R}/\mathbb{Z})^M \rightarrow \mathbb{C}$ is given by (\ref{intro70}), we define an associated trigonometric polynomial $F_{\boldsymbol a} : \mathbb{R}/\mathbb{Z} \rightarrow \mathbb{C}$ in one variable by \begin{equation}\label{order360} F_{\boldsymbol a}(t) = \sum_{\boldsymbol k \in \mathfrak S} \widehat F(\boldsymbol k) e\bigl(\boldsymbol k^T \boldsymbol a t\bigr) = \sum_{n = 0}^N \widehat F(\boldsymbol k_n) e\bigl(\boldsymbol k_n^T \boldsymbol a t\bigr), \end{equation} where the equality on the right of (\ref{order360}) uses the indexing (\ref{intro115}) induced by $\varphi_{\boldsymbol \alpha}$. The hypothesis (\ref{intro120}) implies that the integer exponents on the right of (\ref{order360}) satisfy the system of inequalities \begin{equation}\label{order365} \boldsymbol k_0^T \boldsymbol a < \boldsymbol k_1^T \boldsymbol a < \boldsymbol k_2^T \boldsymbol a < \cdots < \boldsymbol k_N^T \boldsymbol a. \end{equation} Then it follows from (\ref{intro40}), (\ref{order360}), and (\ref{order365}) that \begin{equation}\label{order370} \bigl|\widehat F\bigl(\boldsymbol k_n\bigr)\bigr| \le \binom{N}{n} \mathfrak M(F_{\boldsymbol a}),\quad\text{for each $n = 0, 1, 2, \dots , N$}. \end{equation} We have proved that the system of inequalities (\ref{order370}) holds for each lattice point $\boldsymbol a$ in $\mc B(\boldsymbol \alpha, \mathfrak S)$.
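For orientation, consider the simplest case $N = 1$, so that $\mathfrak S = \{\boldsymbol k_0, \boldsymbol k_1\}$. Then $F_{\boldsymbol a}$ has exactly two terms, and (\ref{order370}) asserts that \begin{equation*} \bigl|\widehat F(\boldsymbol k_0)\bigr| \le \mathfrak M(F_{\boldsymbol a}) \quad\text{and}\quad \bigl|\widehat F(\boldsymbol k_1)\bigr| \le \mathfrak M(F_{\boldsymbol a}), \end{equation*} since $\binom{1}{0} = \binom{1}{1} = 1$. This is consistent with Jensen's formula, which shows that a two-term polynomial $c_0 + c_1 e(k t)$ with $k \neq 0$ has Mahler measure exactly $\max\{|c_0|, |c_1|\}$.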
To complete the proof we appeal to an inequality of Boyd \cite[Lemma 2]{boyd1981}, which asserts that if $\boldsymbol b$ is a parameter in $\mathbb{Z}^M$ then \begin{equation}\label{order375} \limsup_{\nu(\boldsymbol b) \rightarrow \infty} \mathfrak M\bigl(F_{\boldsymbol b}\bigr) \le \mathfrak M(F). \end{equation} More precisely, if $\boldsymbol b_1, \boldsymbol b_2, \boldsymbol b_3, \dots$ is a sequence of points in $\mathbb{Z}^M$ such that \begin{equation}\label{order380} \lim_{j \rightarrow \infty} \nu(\boldsymbol b_j) = \infty, \end{equation} then \begin{equation}\label{order385} \limsup_{j \rightarrow \infty} \mathfrak M\bigl(F_{\boldsymbol b_j}\bigr) \le \mathfrak M(F). \end{equation} Because $\nu$ is unbounded on $\mc B(\boldsymbol \alpha, \mathfrak S)$, there exists a sequence $\boldsymbol b_1, \boldsymbol b_2, \boldsymbol b_3, \dots$ contained in $\mc B(\boldsymbol \alpha, \mathfrak S)$ that satisfies (\ref{order380}). Hence the sequence $\boldsymbol b_1, \boldsymbol b_2, \boldsymbol b_3, \dots$ in $\mc B(\boldsymbol \alpha, \mathfrak S)$ also satisfies (\ref{order385}). From (\ref{order370}) we have \begin{equation}\label{order390} \bigl|\widehat F\bigl(\boldsymbol k_n\bigr)\bigr| \le \binom{N}{n} \mathfrak M(F_{\boldsymbol b_j}), \end{equation} for each $n = 0, 1, 2, \dots , N$, and for each $j = 1, 2, 3, \dots$. The inequality (\ref{intro125}) plainly follows from (\ref{order385}) and (\ref{order390}). This completes the proof of Theorem \ref{thmintro2}. Boyd conjectured in \cite{boyd1981} that (\ref{order375}) could be improved to \begin{equation}\label{order400} \lim_{\nu(\boldsymbol b) \rightarrow \infty} \mathfrak M\bigl(F_{\boldsymbol b}\bigr) = \mathfrak M(F). \end{equation} The proposed identity (\ref{order400}) was later verified by Lawton \cite{lawton1983} (see also \cite{dobrowolski2017} and \cite{lalin2013}).
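A classical example illustrates the limit (\ref{order400}) (with $\nu$ as in (\ref{order25})): if $F(t_1, t_2) = 1 + e(t_1) + e(t_2)$ and $\boldsymbol b = (1, k)$ for a positive integer $k$, then $F_{\boldsymbol b}(t) = 1 + e(t) + e(k t)$ and $\nu(\boldsymbol b) = k$, so Lawton's theorem gives \begin{equation*} \lim_{k \rightarrow \infty} \mathfrak M\bigl(1 + e(t) + e(k t)\bigr) = \mathfrak M\bigl(1 + e(t_1) + e(t_2)\bigr), \end{equation*} a well-known instance of the Boyd--Lawton phenomenon.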
Here we have used Boyd's inequality (\ref{order375}) because it is simpler to prove than (\ref{order400}), and the more precise result (\ref{order400}) does not affect the inequality (\ref{intro125}). \begin{thebibliography}{99} \bibitem{bombieri2006} E.~Bombieri and W.~Gubler, \newblock {\em Heights in Diophantine Geometry}, \newblock Cambridge U. Press, New York, 2006. \bibitem{boyd1981} D.~W.~Boyd, \newblock Kronecker's Theorem and Lehmer's Problem for Polynomials in Several Variables, \newblock {\em Journal of Number Theory} 13 (1981), 116--121. \bibitem{cassels1965} J.~W.~S.~Cassels, \newblock {\em An Introduction to Diophantine Approximation,} \newblock Cambridge U. Press, London, 1965. \bibitem{dobrowolski2017} E.~Dobrowolski, \newblock A note on Lawton's theorem, \newblock {\em Canad. Math. Bull.} 60, no. 3 (2017), 484--489. \bibitem{smyth2017} E.~Dobrowolski and C.~Smyth, \newblock Mahler measures of polynomials that are sums of a bounded number of monomials, \newblock {\em International Journal of Number Theory} 13, no. 6 (2017), 1603--1610. \bibitem{everest1999} G.~Everest and T.~Ward, \newblock {\em Heights of Polynomials and Entropy in Algebraic Dynamics,} \newblock Springer-Verlag, London, 1999. \bibitem{lalin2013} Z.~Issa and M.~Lal\'in, \newblock A Generalization of a Theorem of Boyd and Lawton, \newblock {\em Canad. Math. Bull.} 56, no. 4 (2013), 759--768. \bibitem{lawton1983} W.~M.~Lawton, \newblock A Problem of Boyd concerning Geometric Means of Polynomials, \newblock {\em Journal of Number Theory} 16 (1983), 356--362. \bibitem{mahler1960} K.~Mahler, \newblock An application of Jensen's formula to polynomials, \newblock {\em Mathematika} 7 (1960), 98--100. \bibitem{mahler1961} K.~Mahler, \newblock On the zeros of the derivative of a polynomial, \newblock {\em Proceedings of the Royal Society,} Ser. A (London), 264 (1961), 145--154.
\bibitem{mahler1962} K.~Mahler, \newblock Some inequalities for polynomials in several variables, \newblock {\em Journal London Math. Soc.} 37 (1962), 341--344. \bibitem{pritsker2007} I.~E.~Pritsker, \newblock Mahler's measure, and multipliers, \newblock in {\em Number Theory and Polynomials}, ed. J.~McKee and C.~Smyth, \newblock London Math. Soc. Lecture Notes Series 352, 255--276. \bibitem{rudin1962} W.~Rudin, \newblock {\em Fourier Analysis on Groups}, \newblock Interscience Pub., New York, 1962. \bibitem{schmidt1995} K.~Schmidt, \newblock {\em Dynamical Systems of Algebraic Origin,} \newblock Birkh\"auser Verlag, Basel, 1995. \bibitem{schmidt1980} W.~M.~Schmidt, \newblock {\em Diophantine Approximation,} Lecture Notes in Mathematics 785, \newblock Springer-Verlag, Berlin, 1980. \bibitem{smyth2007} C.~Smyth, \newblock The Mahler measure of algebraic numbers: a survey, \newblock in {\em Number Theory and Polynomials}, ed. J.~McKee and C.~Smyth, \newblock London Math. Soc. Lecture Notes Series 352, 322--349. \end{thebibliography} \end{document}
\begin{document} \title{An approach to nonsolvable base change and descent} \author{Jayce R.~Getz} \address{Department of Mathematics and Statistics\\ McGill University\\ Montreal, QC, H3A 2K6} \email{[email protected]} \subjclass[2000]{Primary 11F70} \begin{abstract} We present a collection of conjectural trace identities and explain why they are equivalent to base change and descent of automorphic representations of $\mathrm{GL}_n(\mathbb{A}_F)$ along nonsolvable extensions (under some simplifying hypotheses). The case $n=2$ is treated in more detail and applications towards the Artin conjecture for icosahedral Galois representations are given. \end{abstract} \maketitle \tableofcontents \section{Introduction} Let $F$ be a number field and let $v$ be a nonarchimedean place of $F$. By the local Langlands correspondence, now a theorem due to Harris and Taylor building on work of Henniart, there is a bijection \begin{align} \label{loc-Langl} \left(\varphi_v:W_{F_v}' \to \mathrm{GL}_n(\mathbb{C})\right) \longmapsto \pi(\varphi_v) \end{align} between equivalence classes of Frobenius semisimple representations $\varphi_v$ of the local Weil-Deligne group $W_{F_v}'$ and isomorphism classes of irreducible admissible representations of $\mathrm{GL}_n(F_v)$. The bijection is characterized uniquely by certain compatibilities involving $\varepsilon$-factors and $L$-functions which are stated precisely in \cite{HT} (see also \cite{PreuveHenn}). The corresponding statement for $v$ archimedean was proven some time ago by Langlands \cite{LanglandsArch}. We write $\varphi_v(\pi_v)$ for any representation attached to $\pi_v$ and call it the \textbf{Langlands parameter} or \textbf{$L$-parameter} of $\pi_v$; it is unique up to equivalence of representations. Now let $E/F$ be an extension of number fields, let $v$ be a place of $F$ and let $w|v$ be a place of $E$.
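At unramified places the restriction of $L$-parameters is completely explicit: if $E_w/F_v$ is unramified of residue degree $f$ and $\pi_v$ is unramified with $L$-parameter $\varphi(\pi_v)$ sending a geometric Frobenius $\mathrm{Fr}_v$ to $\mathrm{diag}(t_1, \dots, t_n)$, then $\varphi(\pi_v)|_{W_{E_w}'}$ sends $\mathrm{Fr}_w = \mathrm{Fr}_v^f$ to \begin{equation*} \mathrm{diag}(t_1^f, \dots, t_n^f), \end{equation*} so the representation of $\mathrm{GL}_n(E_w)$ attached to this restriction is the unramified representation whose Satake parameters are the $f$th powers of those of $\pi_v$.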
We say that an admissible irreducible representation $\Pi_w$ of $\mathrm{GL}_n(E_w)$ is a \textbf{base change} of $\pi_v$ and write $\pi_{vEw}:=\Pi_w$ if $$ \varphi(\pi_v)|_{W_{E_w}'}\cong \varphi(\Pi_w). $$ In this case we also say that $\Pi_w$ \textbf{descends} to $\pi_v$. We say that an isobaric\footnote{For generalities on isobaric automorphic representations see \cite{LanglEinM} and \cite{JSII}.} automorphic representation $\Pi$ of $\mathrm{GL}_n(\mathbb{A}_E)$ is a \textbf{base change} (resp.~\textbf{weak base change}) of an isobaric automorphic representation $\pi$ of $\mathrm{GL}_n(\mathbb{A}_F)$ if $\Pi_w=\pi_{vEw}$ for all (resp.~almost all) places $v$ of $F$ and all places $w|v$ of $E$. If $\Pi$ is a (weak) base change of $\pi$, then we also say $\Pi$ descends (weakly) to $\pi$. We write $\pi_E$ for a weak base change of $\pi$, if it exists; it is uniquely determined up to isomorphism by the strong multiplicity one theorem \cite[Theorem 4.4]{JSII}. If $\Pi$ is a weak base change of $\pi$, we say that the base change is compatible at a place $v$ of $F$ if $\Pi_w$ is a base change of $\pi_v$ for all $w|v$. If $E/F$ is a prime degree cyclic extension, then the work of Langlands \cite{Langlands} for $n=2$ and Arthur-Clozel \cite{AC} for $n$ arbitrary implies that a base change always exists. The fibers and image of the base change are also described in these works. Given that any finite degree Galois extension $E/F$ contains a family of subextensions $E=E_0 \geq E_1 \geq \cdots \geq E_n=F$ where $E_i/E_{i+1}$ is Galois with simple Galois group, to complete the theory of base change it is necessary to understand base change and descent with respect to Galois extensions with simple nonabelian Galois group. In this paper we introduce a family of conjectural trace identities that are essentially equivalent to proving base change and descent in this setting.
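To illustrate the reduction (this example is not used elsewhere), suppose $\mathrm{Gal}(E/F) \cong S_5$. The composition series $S_5 \triangleright A_5 \triangleright 1$ corresponds to the tower $E \geq E^{A_5} \geq F$ in the notation above: the extension $E^{A_5}/F$ is quadratic, where base change is known by \cite{Langlands} and \cite{AC}, while $E/E^{A_5}$ has the simple nonabelian Galois group $A_5$, which is precisely the type of extension considered in this paper.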
The (conjectural) trace identity is based on combining two fundamental paradigms pioneered by Langlands, the second still in its infancy: \begin{itemize} \item Comparison of trace formulae, and \item Beyond endoscopy. \end{itemize} The point of this paper is to provide some evidence that proving the conjectural trace identities unconditionally is a viable strategy for proving nonsolvable base change. \subsection{Test functions} Fix an integer $n \geq 1$, a Galois extension of number fields $E/F$, an automorphism $\tau \in \mathrm{Gal}(E/F)$, and a set of places $S_0$ of $E$ containing the infinite places and the places where $E/F$ is ramified. Let $w \not \in S_0$ and let $v$ be the place of $F$ below $w$. Let $$ A=\begin{pmatrix} t_{1w} & & \\ & \ddots & \\ & & t_{nw}\end{pmatrix}\quad \textrm{ and } \quad A^{\tau}=\begin{pmatrix} t_{1w^{\tau}} & & \\ & \ddots & \\& & t_{nw^{\tau}}\end{pmatrix}. $$ We view these as matrices in $\prod_{w|v} \mathrm{GL}_n(\mathbb{C}[t_{1w}^{\pm 1},\cdots,t_{nw}^{\pm 1}])$. For $j \in \mathbb{Z}_{> 0}$ let $\mathrm{Sym}^j:\mathrm{GL}_n \to \mathrm{GL}_{\binom{n+j-1}{j}}$ be the $j$th symmetric power representation, where $\binom{m}{j}$ is the $m$-choose-$j$ binomial coefficient. For a prime power ideal $\varpi_w^j$ of $\mathcal{O}_{E}$ define a test function \begin{align} f(\varpi_w^j):=\mathcal{S}^{-1}(\mathrm{tr}(\mathrm{Sym}^j(A \otimes (A^{\tau})^{-1}))) \in C_c^{\infty}(\mathrm{GL}_n(E \otimes_F F_v)//\mathrm{GL}_n(\mathcal{O}_{E} \otimes_{\mathcal{O}_F}\mathcal{O}_{F_v})) \end{align} where $\mathcal{S}$ is the Satake isomorphism (see \S \ref{ssec-uha}). We denote by $f(\mathcal{O}_{E_w})$ the characteristic function of $\mathrm{GL}_n(\mathcal{O}_E \otimes_{\mathcal{O}_F} \mathcal{O}_{F_v})$ and regard the $f(\varpi_w^j)$ as elements of $C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_E^{\infty})//\mathrm{GL}_n(\widehat{\mathcal{O}}_E))$.
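The utility of these test functions stems from the standard generating series identity for symmetric powers: for any square complex matrix $B$ one has, as an identity of formal power series, \begin{equation*} \sum_{j = 0}^{\infty} \mathrm{tr}\bigl(\mathrm{Sym}^j(B)\bigr) X^j = \det(1 - X B)^{-1}. \end{equation*} Taking $B = A \otimes (A^{\tau})^{-1}$, the functions $f(\varpi_w^j)$ thus encode the coefficients of a local Rankin-Selberg Euler factor; this is the mechanism behind the Dirichlet series identity recorded below.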
Define $f(\mathfrak{n}) \in C_c^{\infty}(\mathrm{GL}_{n}(\mathbb{A}_{E}^{\infty})//\mathrm{GL}_{n}(\widehat{\mathcal{O}}_E))$ in general by declaring that $f$ is multiplicative, that is, if $\mathfrak{n}+\mathfrak{m}=\mathcal{O}_E$ we set $$ f(\mathfrak{n}\mathfrak{m}):=f(\mathfrak{n})*f(\mathfrak{m}) $$ where the asterisk denotes convolution in the Hecke algebra. If $\mathfrak{n}$ is coprime to $S_0$, we often view $f(\mathfrak{n})$ as an element of $C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_E^{S_0})//\mathrm{GL}_n(\widehat{\mathcal{O}}_E^{S_0}))$. Assume that $\Pi$ is an isobaric automorphic representation of $\mathrm{GL}_n(\mathbb{A}_E)$ unramified outside of $S_0$. Define $\Pi^{\tau}$ by $\Pi^{\tau}(g):=\Pi(g^{\tau})$. The purpose of defining the operators $f(\mathfrak{m})$ is the following equality: $$ \sum_{\mathfrak{m} \subset \mathcal{O}_{E}^{S_0}} \frac{\mathrm{tr}(\Pi^{S_0})(f(\mathfrak{m}))}{|\mathrm{N}_{E/\mathbb{Q}}(\mathfrak{m})|^{s}}=L^{S_0}(s,\Pi \times \Pi^{\tau}). $$ This follows from \eqref{RS-descr} and the fact that, in the notation of loc.~cit., $A(\Pi^{\tau}_w)=A(\Pi_{w^{\tau}})$. Let $\phi \in C_c^{\infty}(0,\infty)$ be nonnegative and not identically zero. Thus $\widetilde{\phi}(1) >0$, where $$ \widetilde{\phi}(s):=\int_{0}^{\infty}\phi(x)x^{s-1}dx $$ is the Mellin transform of $\phi$. We introduce the following test function, a modification of that considered by Sarnak in \cite{Sarnak}: \begin{align} \label{Sig-func} \Sigma^{S_0}(X):=\Sigma_{\phi}^{S_0}(X):=\sum_{\mathfrak{m} \subset \mathcal{O}_E^{S_0}} \phi(X/|\mathrm{N}_{E/\mathbb{Q}}(\mathfrak{m})|)f(\mathfrak{m}). \end{align} \subsection{Conjectural trace identities} Assume that $E/F$ is a Galois extension.
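Before stating the identities, we sketch the heuristic behind the smoothed sums \eqref{Sig-func} (this sketch is for motivation only and is not used in the proofs): by Mellin inversion, a sum of the shape $\mathrm{tr}(\Pi^{S_0})(\Sigma_{\phi}^{S_0}(X))$ unfolds into a contour integral of $\widetilde{\phi}$ against $X^s L^{S_0}(s, \Pi \times \Pi^{\tau})$. Shifting the contour, a pole of $L^{S_0}(s, \Pi \times \Pi^{\tau})$ of order $r$ at $s = 1$ produces a main term whose leading contribution is of size roughly $\widetilde{\phi}(1) X (\log X)^{r-1}$. This is why the positivity $\widetilde{\phi}(1) > 0$ matters, and why the limits considered below are normalized by powers of $X$ and by derivatives of $\widetilde{\phi}(s)X^s$ at $s = 1$.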
For convenience, let \begin{align} \Pi_n(F):&=\{\textrm{Isom.~classes of isobaric automorphic representations of }\mathrm{GL}_n(\mathbb{A}_F)\}\\ \Pi_n^0(F):&=\{\textrm{Isom.~classes of cuspidal automorphic representations of }\mathrm{GL}_n(\mathbb{A}_F)\} \nonumber \\ \Pi_n^{\mathrm{prim}}(E/F):&=\{ \textrm{Isom.~classes of $E$-primitive automorphic representations of }\mathrm{GL}_n(\mathbb{A}_F)\}. \nonumber \end{align} The formal definition of an $E$-primitive automorphic representation is postponed until \S \ref{ssec-primitive}. If we knew Langlands functoriality we could characterize them easily as those representations that are cuspidal and not automorphically induced from an automorphic representation of a subfield $E \geq K > F$. We note that there is a natural action of $\mathrm{Gal}(E/F)$ on $\Pi_n(E)$ that preserves $\Pi_n^0(E)$; we write $\Pi_n(E)^{\mathrm{Gal}(E/F)}$ for those representations that are isomorphic to their $\mathrm{Gal}(E/F)$-conjugates and $\Pi_n^0(E)^{\mathrm{Gal}(E/F)}=\Pi_n^0(E) \cap \Pi_n(E)^{\mathrm{Gal}(E/F)}$. Let $S$ be a finite set of places of $F$ including all infinite places, and let $S'$ and $S_0$ be the sets of places of $F'$ and $E$, respectively, lying above $S$. Assume that $h \in C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_{F'}))$ and $\Phi \in C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_F))$ are transfers of each other in the sense of \S \ref{ssec-transfers} below and that they are unramified outside of $S'$ and $S$, that is, invariant under right and left multiplication by $\mathrm{GL}_n(\widehat{\mathcal{O}}_{F'}^{S'})$ and $\mathrm{GL}_n(\widehat{\mathcal{O}}_F^S)$, respectively. For the purposes of the following theorems, if $G$ is a finite group let $G^{\mathrm{ab}}$ be the maximal abelian quotient of $G$. Assume for the remainder of this subsection that $\mathrm{Gal}(E/F)$ is the universal perfect central extension of a finite simple nonabelian group.
Let $E \geq F' \geq F$ be a subfield such that $\mathrm{Gal}(E/F')$ is solvable and $H^2(\mathrm{Gal}(E/F'),\mathbb{C}^{\times})=0$. Moreover let $\tau \in \mathrm{Gal}(E/F)$ be an element such that $$ \mathrm{Gal}(E/F)=\langle \tau,\mathrm{Gal}(E/F')\rangle. $$ \begin{rem} In \S \ref{ssec-upce} we discuss these assumptions, the upshot being that they are no real loss of generality. \end{rem} Our first main theorem is the following: \begin{thm} \label{main-thm-1} Consider the following hypotheses: \begin{itemize} \item $\mathrm{Gal}(E/F')$ is solvable, $H^2(\mathrm{Gal}(E/F'),\mathbb{C}^{\times})=0$ and $[E:F']$ is coprime to $n$. \item For all divisors $m|n$ there is no irreducible nontrivial representation $$ \mathrm{Gal}(E/F) \longrightarrow \mathrm{GL}_m(\mathbb{C}). $$ \item The case of Langlands functoriality explicated in Conjecture \ref{conj-1} below is true for $E/F$, and \item The case of Langlands functoriality explicated in Conjecture \ref{conj-solv} is true for $E/F'$. \end{itemize} If these hypotheses are valid and $h$ and $\Phi$ are transfers of each other, then the limits \begin{align} \label{11} \lim_{X \to \infty}|\mathrm{Gal}(E/F')^{\mathrm{ab}}|^{-1}X^{-1} \sum_{\pi' \textrm{ $E$-primitive}} \mathrm{tr}(\pi')(h^1b_{E/F'}(\Sigma_{\phi}^{S_0}(X))) \end{align} and \begin{align} \label{12} \lim_{X \to \infty} X^{-1}\sum_{\pi} \mathrm{tr}(\pi)( \Phi^1b_{E/F}(\Sigma_{\phi}^{S_0}(X))) \end{align} converge absolutely and are equal. Here the first sum is over a set of representatives for the equivalence classes of $E$-primitive cuspidal automorphic representations of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$ and the second sum is over a set of representatives for the equivalence classes of cuspidal automorphic representations of $A_{\mathrm{GL}_{nF}} \backslash \mathrm{GL}_n(\mathbb{A}_{F})$.
\end{thm} Here \begin{align*} h^1(g):&=\int_{A_{\mathrm{GL}_{nF'}}}h(ag)da'\\ \Phi^1(g):&=\int_{A_{\mathrm{GL}_{nF}}}\Phi(ag)da \end{align*} where $da'$ and $da$ are the Haar measures on $A_{\mathrm{GL}_{nF'}}$ and $A_{\mathrm{GL}_{nF}}$, respectively, used in the definition of the transfer. For the definition of $A_{\mathrm{GL}_{nF'}}$ and $A_{\mathrm{GL}_{nF}}$ we refer to \S \ref{HC-subgroup}, and for the definition of $b_{E/F}$ and $b_{E/F'}$ we refer to \eqref{bEF}. \begin{remarks} \item If we fix a positive integer $n$, then for all but finitely many finite simple groups $G$ with universal perfect central extension $\widetilde{G}$, any representation $\widetilde{G} \to \mathrm{GL}_{n}$ will be trivial. This follows from \cite[Theorem 1.1]{LS}, for example (the author does not know whether it was known earlier). Thus the second hypothesis in Theorem \ref{main-thm-1} holds for almost all groups (if we fix $n$). In particular, if $n=2$, then the only finite simple nonabelian group admitting a projective representation of degree $2$ is $A_5$ by a well-known theorem of Klein. Thus when $n=2$ and $\mathrm{Gal}(E/F)$ is the universal perfect central extension of a finite simple group other than $A_5$, the second hypothesis of Theorem \ref{main-thm-1} holds. \item Conjecture \ref{conj-1} and its analogues, Conjectures \ref{conj-2}, \ref{conj-32} and \ref{conj-33} below, each amount to a statement that certain (conjectural) functorial transfers of automorphic representations exist and have certain properties. To motivate these conjectures, we state and prove the properties of $L$-parameters to which they correspond in Propositions \ref{prop-bij-EF'} and \ref{prop-A5-EF} and Lemmas \ref{lem-A5-EF} and \ref{lem-A5-EF3}, respectively. The facts about $L$-parameters we use are not terribly difficult to prove given basic facts from finite group theory, but they are neither obvious nor well-known, and one of the motivations for this paper is to record them.
\item Conjecture \ref{conj-solv} is a conjecture characterizing the image and fibers of solvable base change. Rajan \cite{Rajan3} has explained how it can be proved (in principle) using a method of Lapid and Rogawski \cite{LR} together with the work of Arthur and Clozel \cite{AC}. It is a theorem when $n=2$ \cite[Theorem 1]{Rajan3} or when $\mathrm{Gal}(E/F)$ is cyclic of prime degree \cite[Chapter 3, Theorems 4.2 and 5.1]{AC}. \end{remarks} The following weak converse of Theorem \ref{main-thm-1} is true: \begin{thm} \label{main-thm-1-conv} Assume Conjecture \ref{conj-transf} on transfers of test functions and assume that $F$ is totally complex. If \eqref{11} and \eqref{12} converge absolutely and are equal for all $h$ unramified outside of $S'$ with transfer $\Phi$ unramified outside of $S$, then every cuspidal automorphic representation $\Pi$ of $\mathrm{GL}_n(\mathbb{A}_E)$ satisfying $\Pi^{\sigma} \cong \Pi$ for all $\sigma \in \mathrm{Gal}(E/F)$ admits a unique weak descent to $\mathrm{GL}_n(\mathbb{A}_F)$. Conversely, if $\pi$ is a cuspidal automorphic representation of $\mathrm{GL}_n(\mathbb{A}_F)$ such that \begin{align} \label{non-zero-Sig} \lim_{X \to \infty}X^{-1}\mathrm{tr}(\pi)(b_{E/F}(\Sigma_{\phi}^{S_0}(X))) \neq 0 \end{align} then $\pi$ admits a weak base change to $\mathrm{GL}_n(\mathbb{A}_E)$. The weak base change of a given cuspidal automorphic representation $\pi$ of $\mathrm{GL}_n(\mathbb{A}_F)$ is unique. The base change is compatible at the infinite places of $F$ and the finite places $v$ of $F$ where $E/F$ and $\pi_v$ are unramified or where $\pi_v$ is a twist of the Steinberg representation by a quasi-character. \end{thm} Here, as usual, $\Pi^{\sigma}:=\Pi \circ \sigma$ is the representation acting on the space of $\Pi$ via $$ \Pi^{\sigma}(g):=\Pi(\sigma(g)).
$$ \begin{rem} Conjecture \ref{conj-transf} is roughly the statement that there are ``enough'' $h$ and $\Phi$ that are transfers of each other. If one assumes that $\Pi$ (resp.~$\pi$) and $E/F$ are everywhere unramified, then one can drop the assumption that Conjecture \ref{conj-transf} is valid. \end{rem} We conjecture that the limit in \eqref{non-zero-Sig} is always nonzero: \begin{conj} \label{conj-nonzero} Let $\pi$ be a cuspidal automorphic representation of $A_{\mathrm{GL}_{nF}} \backslash \mathrm{GL}_n(\mathbb{A}_F)$; then \begin{align*} \lim_{X \to \infty}X^{-1}\mathrm{tr}(\pi)(b_{E/F}(\Sigma_{\phi}^{S_0}(X))) \neq 0. \end{align*} \end{conj} This conjecture is true for all $\pi$ that admit a base change to an isobaric automorphic representation of $\mathrm{GL}_n(\mathbb{A}_E)$ by an application of Rankin-Selberg theory (compare Proposition \ref{Perron-prop} and \eqref{ord-pole}); however, assuming this would be somewhat circular for our purposes. The author is hopeful that Conjecture \ref{conj-nonzero} can be proven independently of the existence of the base change. Indeed, the Chebotarev density theorem is proven despite the fact that the Artin conjecture is still a conjecture. The smoothed sum in Conjecture \ref{conj-nonzero} is analogous to some sums that can be evaluated using the Chebotarev density theorem; in some sense the Chebotarev density theorem is the case where $\pi$ is the trivial representation of $\mathrm{GL}_1(\mathbb{A}_F)$. Isolating primitive representations is not a trivial task. For example, the main focus of \cite{Venk} is the isolation of cuspidal representations that are not primitive when $n=2$. Therefore it seems desirable to have a trace identity similar to that of Theorem \ref{main-thm-1} that involves sums over all cuspidal representations.
This is readily accomplished under additional assumptions on $\mathrm{Gal}(E/F)$ using the following lemma: \begin{lem} \label{lem-prim} Let $L/K$ be a Galois extension of number fields. Suppose that there is no proper subgroup $H \leq \mathrm{Gal}(L/K)$ such that $[\mathrm{Gal}(L/K):H]|n$. Then $$ \Pi_n^{\mathrm{prim}}(L/K)=\Pi_n^0(K). $$\qed \end{lem} The proof is immediate from the definition of $L$-primitive automorphic representations in \S \ref{ssec-primitive}. \subsection{Icosahedral extensions} We now consider the case of the smallest simple nonabelian group $A_5$ in more detail. We begin by setting notation for specific subsets of $\Pi^0_n(F)$ and $\Pi^0_n(E)$. Let $E/F$ be a Galois extension, and let $$ \rho:W_F' \longrightarrow {}^L\mathrm{GL}_{nF} $$ be an $L$-parameter trivial on $W_E'$; thus $\rho$ can essentially be identified with the Galois representation $\rho_0:\mathrm{Gal}(E/F) \to \mathrm{GL}_{n}(\mathbb{C})$ obtained by composing $\rho$ with the projection ${}^L \mathrm{GL}_{nF} \to \mathrm{GL}_n(\mathbb{C})$. For every quasi-character $\chi:F^{\times} \backslash \mathbb{A}_F^{\times} \cong (W_{F}')^{\mathrm{ab}} \to \mathbb{C}^{\times} $ we can then form the $L$-parameter $$ \rho \otimes \chi:W_{F}' \longrightarrow \mathrm{GL}_n(\mathbb{C}). $$ We say that a cuspidal automorphic representation $\pi$ of $\mathrm{GL}_n(\mathbb{A}_F)$ is \textbf{associated} to $\rho \otimes \chi$ if $\pi_v$ is the representation attached to the $L$-parameter $(\rho \otimes \chi)_v$: $$ \pi_v=\pi((\rho \otimes \chi)_v) $$ for almost all places $v$ of $F$ (see \eqref{loc-Langl} above). If $\pi_v=\pi((\rho \otimes \chi)_v)$ for all places $v$, then we write $\pi=\pi(\rho \otimes \chi)$. In this case we also say that $\pi$ and $\rho \otimes \chi$ are \textbf{strongly associated}.
More generally, if $\pi$ is a cuspidal automorphic representation of $\mathrm{GL}_n(\mathbb{A}_F)$ such that $\pi$ is associated to $\rho \otimes \chi$ for some $\chi$ we say that $\pi$ is of \textbf{$\rho$-type}. If $\pi$ is associated to $\rho \otimes \chi$ for some $\rho$ and $\chi$ we say that $\pi$ is of \textbf{Galois type}. Assume for the remainder of this section that $\mathrm{Gal}(E/F) \cong \widetilde{A}_5$, the universal perfect central extension of the alternating group $A_5$ on $5$ letters. One can formulate analogues of theorems \ref{main-thm-1} and \ref{main-thm-1-conv} in this setting. For this purpose, fix an embedding $A_4 \hookrightarrow A_5$, and let $\widetilde{A}_4 \leq \widetilde{A}_5$ be the preimage of $A_4$ under the surjection $ \widetilde{A}_5 \to A_5$. Thus $\widetilde{A}_4$ is a nonsplit double cover of $A_4$. \begin{thm} \label{main-thm-2} Let $n=2$, let $F' = E^{\widetilde{A}_4}$, and let $\tau \in \mathrm{Gal}(E/F)$ be any element of order $5$. Let $h \in C_c^{\infty}(\mathrm{GL}_2(\mathbb{A}_{F'}))$ be unramified outside of $S'$ and have transfer $\Phi \in C_c^{\infty}(\mathrm{GL}_2(\mathbb{A}_F))$ unramified outside of $S$. Assume the case of Langlands functoriality explicated in Conjecture \ref{conj-2} for $E/F$. Then the limits \begin{align} \label{A21} 2\lim_{X \to \infty}\left(\frac{d^{3}}{ds^{3}}(\widetilde{\phi}(s)X^s)|_{s=1}\right)^{-1}|\mathrm{Gal}(E/F')^{\mathrm{ab}}|^{-1} \sum_{\pi'} \mathrm{tr}(\pi')(h^1b_{E/F'}(\Sigma_{\phi}^{S_0}(X))) \end{align} and \begin{align} \label{A22} \lim_{X \to \infty} \left(\frac{d^{3}}{ds^{3}}(\widetilde{\phi}(s)X^s)|_{s=1}\right)^{-1} \sum_{\pi} \mathrm{tr}(\pi)(\Phi^1b_{E/F}(\Sigma_{\phi}^{S_0}(X))) \end{align} converge absolutely and are equal.
Similarly, again assuming Conjecture \ref{conj-2} below, the limits \begin{align} \label{B21} \lim_{X \to \infty}X^{-1}|\mathrm{Gal}(E/F')^{\mathrm{ab}}|^{-1} \sum_{\substack{\pi' \textrm{ not of $\rho$-type for $\rho$ trivial on $W_E'$}}} \mathrm{tr}(\pi')(h^1b_{E/F'}(\Sigma_{\phi}^{S_0}(X))) \end{align} and \begin{align} \label{B22} \lim_{X \to \infty} X^{-1} \sum_{\substack{\pi \textrm{ not of $\rho$-type for $\rho$ trivial on $W_E'$}}} \mathrm{tr}(\pi)(\Phi^1b_{E/F}(\Sigma^{S_0}_{\phi}(X))) \end{align} converge absolutely and are equal. In both cases the first sum is over a set of representatives for the equivalence classes of cuspidal automorphic representations of $A_{\mathrm{GL}_{2F'}} \backslash \mathrm{GL}_2(\mathbb{A}_{F'})$ and the second sum is over a set of representatives for the equivalence classes of cuspidal automorphic representations of $A_{\mathrm{GL}_{2F}} \backslash \mathrm{GL}_2(\mathbb{A}_{F})$. \end{thm} Again, a converse statement is true: \begin{thm} \label{main-thm-2-conv} Assume Conjecture \ref{conj-transf} on transfers of test functions, assume that $F$ is totally complex, and assume that the limits \eqref{B21} and \eqref{B22} converge absolutely for all test functions $h$ unramified outside of $S'$ with transfer $\Phi$ unramified outside of $S$. Under these assumptions every cuspidal automorphic representation $\Pi$ of $\mathrm{GL}_2(\mathbb{A}_E)$ that is isomorphic to its $\mathrm{Gal}(E/F)$-conjugates is a weak base change of a unique cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_F)$. Conversely, if $\pi$ is a cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_F)$ such that \begin{align*} \lim_{X \to \infty}X^{-1}\mathrm{tr}(\pi)(b_{E/F}(\Sigma_{\phi}^{S_0}(X))) \neq 0 \end{align*} then $\pi$ admits a unique weak base change to $\mathrm{GL}_2(\mathbb{A}_E)$.
If $\pi$ is a cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_F)$ that is not of $\rho$-type for $\rho$ trivial on $W_{E}'$, then $\pi_E$ is cuspidal. The base change is compatible at the infinite places of $F$ and at the finite places $v$ of $F$ where $E/F$ and $\pi_v$ are unramified or where $\pi_v$ is a twist of the Steinberg representation by a quasi-character. \end{thm} \subsection{On the Artin conjecture for icosahedral representations} As in the last subsection we assume that $\mathrm{Gal}(E/F) \cong \widetilde{A}_5$. Fix an embedding $\mathbb{Z}/2 \times \mathbb{Z}/2 \hookrightarrow A_5$ and let $Q \leq \widetilde{A}_5$ be the inverse image of $\mathbb{Z}/2 \times \mathbb{Z}/2$ under the quotient $\widetilde{A}_5 \to A_5$. For the purposes of the following theorem, let $S_1$ be a subset of the places of $F$ disjoint from $S$ and let $S'_1$ (resp.~$S_{10}$) be the set of places of $F'$ (resp.~$E$) above $S_1$. Moreover, let $h^{S'_1} \in C_c^{\infty}(\mathrm{GL}_2(\mathbb{A}_{F'}^{S'_1}))$ and $\Phi^{S_1} \in C_c^{\infty}(\mathrm{GL}_2(\mathbb{A}_F^{S_1}))$ be transfers of each other unramified outside of $S'$ and $S$, respectively, and let $h_{S'_1} \in C_c^{\infty}(\mathrm{GL}_2(F'_{S'_1})//\mathrm{GL}_2(\mathcal{O}_{F'_{S'_1}}))$. \begin{thm} \label{main-thm-3} Consider the following hypotheses: \begin{itemize} \item One has $n=2$ and $F'=E^Q$, and the case of Langlands functoriality explicated in Conjecture \ref{conj-32} is true for $E/F$.
\item One has $n=3$, $F'=E^{\widetilde{A}_4}$, the case of Langlands functoriality explicated in Conjecture \ref{conj-33} is true for $E/F$, and Conjecture \ref{conj-solv} is true for $E/F'$. \end{itemize} Under these assumptions the limits \begin{align} \label{31} 2\lim_{X \to \infty}\left(\frac{d^{n^2-1}}{ds^{n^2-1}}(\widetilde{\phi}(s)X^s)\big|_{s=1}\right)^{-1} \sum_{\pi'} \mathrm{tr}(\pi')((h^{S'_1})^{1}h_{S'_1}b_{E/F'}(\Sigma_{\phi}^{S_0}(X))) \end{align} and \begin{align} \label{32} \lim_{X \to \infty} \left(\frac{d^{n^2-1}}{ds^{n^2-1}}(\widetilde{\phi}(s)X^s)\big|_{s=1}\right)^{-1} \sum_{\pi} \mathrm{tr}(\pi)((\Phi^{S_1})^1b_{F'/F}(h_{S'_1})b_{E/F}(\Sigma_{\phi}^{S_0}(X))) \end{align} converge absolutely and are equal. Here the first sum is over equivalence classes of cuspidal automorphic representations of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$ and the second sum is over equivalence classes of cuspidal automorphic representations of $A_{\mathrm{GL}_{nF}} \backslash \mathrm{GL}_n(\mathbb{A}_{F})$. \end{thm} \begin{remarks} \item One can always find $\tau \in \mathrm{Gal}(E/F)$ such that $\langle \tau, \mathrm{Gal}(E/F') \rangle=\mathrm{Gal}(E/F)$ (this follows from Theorem \ref{thm-GK}, for example, or by an elementary argument). \item The fact that this theorem involves more general test functions than those in theorems \ref{main-thm-1} and \ref{main-thm-2} is important for applications to the Artin conjecture (see Theorem \ref{main-thm-3-conv}). \end{remarks} Let $\rho_2:W_F' \to {}^L\mathrm{GL}_{2F}$ be an irreducible $L$-parameter trivial on $W_E'$ (i.e.~an irreducible Galois representation $\rho_2:\mathrm{Gal}(E/F) \to \mathrm{GL}_2(\mathbb{C})$).
Its character takes values in $\mathbb{Q}(\sqrt{5})$, and if $\langle \xi \rangle =\mathrm{Gal}(\mathbb{Q}(\sqrt{5})/\mathbb{Q})$ then $\xi \circ \rho_2$ is another irreducible $L$-parameter that is not equivalent to the first (see \S \ref{appendix}). A partial converse of Theorem \ref{main-thm-3} above is the following: \begin{thm} \label{main-thm-3-conv} Assume Conjecture \ref{conj-transf} and that \eqref{31} and \eqref{32} converge and are equal for all test functions as in Theorem \ref{main-thm-3} for $n \in \{2,3\}$. Assume moreover that $F$ is totally complex. Then there is a pair of nonisomorphic cuspidal automorphic representations $\pi_1,\pi_2$ of $\mathrm{GL}_2(\mathbb{A}_F)$ such that $$ \pi_{1} \boxplus \pi_2 \cong \pi(\rho_{2} \oplus \xi \circ \rho_2). $$ \end{thm} Here $\boxplus$ denotes the isobaric sum \cite{LanglEinM} \cite{JSII}. \begin{rem} It should be true that, upon reindexing if necessary, $\pi_{1} \cong \pi(\rho_{2})$. However, the author does not know how to prove this at the moment. \end{rem} As a corollary of this theorem and work of Kim and Shahidi, we have the following: \begin{cor} \label{cor-artin-cases} Under the hypotheses of Theorem \ref{main-thm-3-conv}, if $\rho:\mathrm{Gal}(E/F)\to \mathrm{GL}_n(\mathbb{C})$ is an irreducible Galois representation of degree strictly greater than $3$, then there is an automorphic representation $\pi$ of $\mathrm{GL}_n(\mathbb{A}_F)$ such that $\pi=\pi(\rho)$.
\end{cor} The point of the theorems above is that the sums \eqref{11}, \eqref{12} and their analogues in the other theorems can be rewritten in terms of orbital integrals using either the trace formula (compare \cite{LanglBeyond}, \cite{FLN}) or the relative trace formula (specifically the Bruggeman-Kuznetsov formula; compare \cite{Sarnak}, \cite{Venk})\footnote{We note that when applying the relative trace formula the distributions $h \mapsto \mathrm{tr}(\pi')(h^1)$ will be replaced by Bessel distributions defined using Whittaker functionals. Thus the definition of $\Sigma_{\phi}^{S_0}(X)$ has to be modified to be useful in a relative trace formula approach.}. One can then hope to compare these limits of orbital integrals and prove nonsolvable base change and descent. The author is actively working on this comparison. He hopes that the idea of comparing limiting forms of trace formulae that underlies theorems \ref{main-thm-1-conv}, \ref{main-thm-2-conv}, and \ref{main-thm-3-conv} will be useful to others working ``beyond endoscopy.'' To end the introduction, we outline the sections of this paper. Section \ref{sec-notat} states notation and conventions; it can be safely skipped and later referred to if the reader encounters unfamiliar notation. In \S \ref{sec-tf}, we review unramified base change, introduce a notion of transfer for test functions, and prove the existence of the transfer in certain cases. Section \ref{sec-limit-cusp} introduces the smoothed test functions used in the statement and proof of our main theorems and develops their basic properties using Rankin-Selberg theory. Perhaps the most important result is that the trace of these test functions over the cuspidal spectrum is well-defined and picks out the representations of interest (see Corollary \ref{cor-aut-trace}).
The behavior of $L$-parameters under restriction along an extension of number fields is considered in \S \ref{sec-rest-desc}; this is used to motivate the conjectures appearing in our main theorems above, which are also stated precisely in \S \ref{sec-rest-desc}. Section \ref{sec-proofs} contains the proofs of the theorems stated above and the proof of Corollary \ref{cor-artin-cases}. Finally, in \S \ref{sec-groups} we explain why the group-theoretic assumptions made in theorems \ref{main-thm-1} and \ref{main-thm-2} are essentially no loss of generality. \section{General notation} \label{sec-notat} \subsection{Ad\`eles} The ad\`eles of a number field $F$ will be denoted by $\mathbb{A}_F$. We write $\widehat{\mathcal{O}}_{F}:=\prod_{v \textrm{ finite}}\mathcal{O}_{F_v}$. For a set of places $S$ of $F$ we write $\mathbb{A}_{F,S}:=\mathbb{A}_F \cap \prod_{v \in S}F_v$ and $\mathbb{A}^S_F:=\mathbb{A}_F \cap \prod_{v \not \in S}F_v$. If $S$ is finite we sometimes write $F_S:=\mathbb{A}_{F,S}$. The set of infinite places of $F$ will be denoted by $\infty$. Thus $\mathbb{A}_{\mathbb{Q},\infty}=\mathbb{R}$ and $\mathbb{A}_{\mathbb{Q}}^{\infty}=\mathbb{A}_{\mathbb{Q}} \cap \prod_{\substack{p \in \mathbb{Z}_{>0}\\ p \textrm{ prime}}}\mathbb{Q}_p$. For an affine $F$-variety $G$ and a subset $W \leq G(\mathbb{A}_F)$ the notation $W_{S}$ (resp.~$W^S$) will denote the projection of $W$ to $G(\mathbb{A}_{F,S})$ (resp.~$G(\mathbb{A}^S_F)$). If $W$ is replaced by an element of $G(\mathbb{A}_F)$, or if $G$ is an algebraic group and $W$ is replaced by a character of $G(\mathbb{A}_F)$ or a Haar measure on $G$, the same notation will be in force; e.g.~if $\gamma \in G(\mathbb{A}_F)$ then $\gamma_v$ is the projection of $\gamma$ to $G(F_v)$. If $w,v$ are places of $E,F$ with $w|v$ we let $e(E_w/F_v)$ (resp.~$f(E_w/F_v)$) denote the ramification degree (resp.~inertial degree) of $E_w/F_v$.
\subsection{Restriction of scalars} Let $A \to B$ be a morphism of $\mathbb{Z}$-algebras and let $X \to \mathrm{Spec}(B)$ be a $\mathrm{Spec}(B)$-scheme. We denote by $$ \mathrm{R}_{B/A}(X) \to \mathrm{Spec}(A) $$ the Weil restriction of scalars of $X$. We will only use this functor in cases where the representability of $\mathrm{R}_{B/A}(X)$ by a scheme is well-known. If $X \to \mathrm{Spec}(A)$ is a $\mathrm{Spec}(A)$-scheme, we often abbreviate $$ \mathrm{R}_{B/A}(X):=\mathrm{R}_{B/A}(X_B). $$ \subsection{Characters} If $G$ is a group we let $G^{\wedge}$ be the group of abelian characters of $G$. Characters are always assumed to be unitary. A general homomorphism $G \to \mathbb{C}^{\times}$ will be called a quasi-character. If $E/F$ is a Galois extension of number fields, we often identify $$ \mathrm{Gal}(E/F)^{\wedge}=F^{\times} \backslash \mathbb{A}_F^{\times}/\mathrm{N}_{E/F}(\mathbb{A}_E^{\times}) $$ using class field theory. \subsection{Harish-Chandra subgroups} \label{HC-subgroup} Let $G$ be a connected reductive group over a number field $F$. We write $A_G \leq Z_G(F \otimes_{\mathbb{Q}} \mathbb{R})$ for the connected component of the real points of the largest $\mathbb{Q}$-split torus in the center of $\mathrm{R}_{F/\mathbb{Q}}G$. Here when we say ``connected component'' we mean in the real topology. Write $X^*$ for the group of $F$-rational characters of $G$ and set $\mathfrak{a}_G:=\mathrm{Hom}(X^*,\mathbb{R})$. There is a morphism \begin{align*} HC_G:G(\mathbb{A}_F) \longrightarrow \mathfrak{a}_G \end{align*} defined by \begin{align} \langle HC_G(x),\chi\rangle =\log(|x^{\chi}|) \end{align} for $x \in G(\mathbb{A}_F)$ and $\chi \in X^*$. We write \begin{align} G(\mathbb{A}_F)^1:=\ker(HC_G) \end{align} and refer to it as the Harish-Chandra subgroup of $G(\mathbb{A}_F)$. Note that $G(F) \leq G(\mathbb{A}_F)^1$ and $G(\mathbb{A}_F)$ is the direct product of $A_G$ and $G(\mathbb{A}_F)^1$.
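For $G=\mathrm{GL}_n$ this construction takes a familiar form. The following unwinding of the definitions is ours, not taken from the text: $X^*$ is generated by $\det$, so $\mathfrak{a}_{\mathrm{GL}_n} \cong \mathbb{R}$ and
```latex
HC_{\mathrm{GL}_n}(x) \;=\; \log |{\det}(x)|_{\mathbb{A}_F},
\qquad
\mathrm{GL}_n(\mathbb{A}_F)^1 \;=\; \{x \in \mathrm{GL}_n(\mathbb{A}_F) : |{\det}(x)|_{\mathbb{A}_F} = 1\}.
```
Here $A_{\mathrm{GL}_n}$ consists of the scalar matrices $t \cdot I_n$ with $t \in \mathbb{R}_{>0}$ embedded diagonally in the archimedean factors, and every $x \in \mathrm{GL}_n(\mathbb{A}_F)$ factors uniquely as $x = a\,x^1$ with $a \in A_{\mathrm{GL}_n}$ and $x^1 \in \mathrm{GL}_n(\mathbb{A}_F)^1$.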
We say that $\pi$ is an \textbf{automorphic representation of $A_{GF} \backslash G(\mathbb{A}_F)$} if it is an automorphic representation of $G(\mathbb{A}_F)$ trivial on $A_{GF}$ (and therefore unitary). \subsection{Local fields} \label{ssec-loc-fields} A uniformizer for a local field equipped with a discrete valuation will be denoted by $\varpi$. If $F$ is a global field and $v$ is a non-archimedean place of $F$ then we will write $\varpi_v$ for a choice of uniformizer of $F_v$. The number of elements in the residue field of $F_v$ will be denoted by $q_v$, and we write $$ | \cdot|_v:F_v \longrightarrow \mathbb{R}_{\geq 0} $$ for the $v$-adic norm, normalized so that $|\varpi_v|_v=q_v^{-1}$. For an infinite place $v$, we normalize $$ |a|_v:=\begin{cases} |a| &\textrm{(the usual absolute value) if }v \textrm{ is real}\\ a\overline{a} & \textrm{(the square of the usual absolute value) if }v \textrm{ is complex.} \end{cases} $$ \subsection{Field extensions} \label{ssec-fe} In this paper we will often deal with a tower of field extensions $E \geq F' \geq F$. When in this setting, we have attempted to adhere to the following notational scheme: \begin{center} \begin{tabular}{ l | c |c |c | c} & Place & Set of places & Test function & Representation \\ \hline $E$ & $w$ & $S_0$ & $f$ & $\Pi$\\ $F'$ & $v'$ & $S'$ & $h$ & $\pi'$\\ $F$ & $v$ & $S$ & $\Phi$ & $\pi$ \end{tabular} \end{center} Thus, e.g.~$w$ will be a place of $E$ above the place $v$ of $F$ and $h$ will denote an element of $C_c^{\infty}(\mathrm{GL}_2(F'_{v'}))$ for some place $v'$ of $F'$. \section{Test functions} \label{sec-tf} In this section we recall basic results on test functions that are used later in the paper. In \S \ref{ssec-uha} we set notation for unramified Hecke algebras and the Satake isomorphism.
In \S \ref{ssec-bcformal} we recall the usual base change map on unramified Hecke algebras, and in \S \ref{ssec-transfers} we define a notion of transfer. \subsection{Unramified Hecke algebras} \label{ssec-uha} For each positive integer $n$ let $T_n \leq \mathrm{GL}_n$ be the standard diagonal maximal torus and let \begin{align} X_*(T_n) \cong \mathbb{Z}^n=\{\lambda:=(\lambda_1, \dots, \lambda_n)\} \end{align} be the group of rational cocharacters. Let $F_v$ be the completion of a global field $F$ at some non-archimedean place $v$ and let $\varpi_v$ be a uniformizer of $F_v$. We write $$ \mathbf{1}_{\lambda}:=\mathrm{ch}_{\mathrm{GL}_n(\mathcal{O}_{F_v}) \varpi_v^{\lambda} \mathrm{GL}_n(\mathcal{O}_{F_v})} \in C_c^{\infty}(\mathrm{GL}_n(F_v)//\mathrm{GL}_n(\mathcal{O}_{F_v})) $$ for the characteristic function of the double coset $$ \mathrm{GL}_{n}(\mathcal{O}_{F_v})\varpi_v^{\lambda}\mathrm{GL}_n(\mathcal{O}_{F_v}) \in \mathrm{GL}_n(\mathcal{O}_{F_v}) \backslash \mathrm{GL}_n(F_v) /\mathrm{GL}_n(\mathcal{O}_{F_v}). $$ Let $\widehat{T}_n \leq \widehat{\mathrm{GL}}_n$ denote the dual torus in the (complex) connected dual group. We let $$ \mathcal{S}:C_c^{\infty}(\mathrm{GL}_n(F_v)//\mathrm{GL}_n(\mathcal{O}_{F_v})) \longrightarrow \mathbb{C}[X^*(\widehat{T}_n)]^{W(T_n,\mathrm{GL}_n)} =\mathbb{C}[t_1^{\pm 1}, \dots,t_n^{\pm 1}]^{S_n} $$ denote the Satake isomorphism, normalized in the usual manner (see, e.g.~\cite[\S 4.1]{Laumon}). Here $W(T_n,\mathrm{GL}_n)$ is the Weyl group of $T_n$ in $\mathrm{GL}_n$; it is well-known that $W(T_n,\mathrm{GL}_n) \cong S_n$, the symmetric group on $n$ letters. Let $E/F$ be a field extension and let $v$ be a finite place of $F$.
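Before continuing, we record a sanity check on the normalization of $\mathcal{S}$ just fixed (a standard identity under the usual normalization, included here as an illustration of ours rather than quoted from this paper): for the minuscule coweight $\lambda=(1,0,\dots,0)$ one has
```latex
\mathcal{S}(\mathbf{1}_{(1,0,\dots,0)})
\;=\; q_v^{(n-1)/2}\,(t_1 + t_2 + \cdots + t_n).
```
For $n=2$ this is the classical Hecke operator $T_{q_v}$; its eigenvalue on an unramified representation with Satake parameters $\{\alpha_1,\alpha_2\}$ is $q_v^{1/2}(\alpha_1+\alpha_2)$.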
For the purpose of setting notation, we recall that the Satake isomorphism for $\mathrm{R}_{E/F}\mathrm{GL}_n(F_v)$ induces an isomorphism $$ \mathcal{S}:C_c^{\infty}(\mathrm{R}_{E/F}\mathrm{GL}_n(F_v)//\mathrm{R}_{\mathcal{O}_{E}/\mathcal{O}_F}\mathrm{GL}_n(\mathcal{O}_{F_v})) \stackrel{\sim}{\longrightarrow} \otimes_{w|v} \mathbb{C}[t_{1w}^{\pm 1},\dots,t_{nw}^{\pm 1}]^{S_n}. $$ Here the tensor product is over the places $w$ of $E$ dividing $v$, and the $w$ factor of $$ C_c^{\infty}(\mathrm{R}_{E/F}\mathrm{GL}_n(F_v)//\mathrm{R}_{\mathcal{O}_{E}/\mathcal{O}_F}\mathrm{GL}_n(\mathcal{O}_{F_v})) \cong \prod_{w|v}C_c^{\infty}(\mathrm{GL}_n(E_{w})//\mathrm{GL}_n(\mathcal{O}_{E_w})) $$ is sent to the $w$ factor of $\otimes_{w|v} \mathbb{C}[t_{1w}^{\pm 1},\dots,t_{nw}^{\pm 1}]^{S_n}$. For a place $w|v$, write $$ \mathbf{1}_{\lambda w} \in C_c^{\infty}(\mathrm{R}_{E/F}\mathrm{GL}_n(F_v)//\mathrm{R}_{\mathcal{O}_{E}/\mathcal{O}_F}\mathrm{GL}_n(\mathcal{O}_{F_v})) $$ for the product of $\mathbf{1}_{\lambda} \in C_c^{\infty}(\mathrm{GL}_n(E_{w})//\mathrm{GL}_n(\mathcal{O}_{E_{w}}))$ with $$ \prod_{\substack{w'\neq w\\ w' |v}}\mathbf{1}_{(0,\dots,0)w'} \in \prod_{\substack{w' \neq w\\w' |v}}C_c^{\infty}\left(\mathrm{GL}_n(E_{w'})//\mathrm{GL}_n(\mathcal{O}_{E_{w'}})\right). $$ Thus $\mathcal{S}(\mathbf{1}_{\lambda w})=p(t_{1w},\dots,t_{nw})$ for some polynomial $p \in \mathbb{C}[x_1^{\pm 1},\dots,x_n^{\pm 1}]^{S_n}$. \subsection{Base change for unramified Hecke algebras} \label{ssec-bcformal} Let $E/F$ be an extension of global fields.
For any subfield $E \geq k \geq F$ we have a base change map $$ b_{E/k}:{}^L\mathrm{R}_{k/F}\mathrm{GL}_n \longrightarrow {}^L\mathrm{R}_{E/F}\mathrm{GL}_n; $$ it is given by the diagonal embedding on connected components: $$ ({}^L\mathrm{R}_{k/F}\mathrm{GL}_n)^{\circ} \cong \mathrm{GL}_n(\mathbb{C})^{[k:F]} \longrightarrow ({}^L\mathrm{R}_{E/F}\mathrm{GL}_n)^{\circ} \cong \mathrm{GL}_n(\mathbb{C})^{[E:F]} $$ and the identity on the Weil-Deligne group. Suppose that $E/F$ is unramified at a finite place $v$ of $F$. We recall that via the Satake isomorphism the base change map $b_{E/k}$ defines an algebra homomorphism \begin{align} \label{bEF} b_{E/k}:C_c^{\infty}(\mathrm{R}_{E/F}\mathrm{GL}_n(F_v)//\mathrm{R}_{\mathcal{O}_E/\mathcal{O}_F}\mathrm{GL}_n(\mathcal{O}_{F_v})) \longrightarrow C_c^{\infty}(\mathrm{R}_{k/F}\mathrm{GL}_n(F_v)//\mathrm{R}_{\mathcal{O}_k/\mathcal{O}_F}\mathrm{GL}_n(\mathcal{O}_{F_v})). \end{align} In terms of Satake transforms, this map is given explicitly by $$ b_{E/k}\left(\prod_{w|v}\mathcal{S}(f_w)(t_{1w},\dots,t_{nw})\right)=\prod_{v'|v}\prod_{w|v'}\mathcal{S}(f_w)(t_{1v'}^{i_{v'}},\dots,t_{nv'}^{i_{v'}}), $$ where the product over $v'|v$ is over the places of $k$ dividing $v$ and $i_{v'}$ is the inertial degree of $v'$ in the extension $E/k$ \cite[Chapter 1, \S 4.2]{AC}. It satisfies the obvious compatibility condition $$ b_{E/F}=b_{k/F} \circ b_{E/k}. $$ Let $\pi_v$ be an irreducible admissible unramified representation of $\mathrm{GL}_n(F_v)$ and let $w$ be a place of $E$ above $v$. There exists an irreducible admissible representation $$ b_{E/F}(\pi_v)=\pi_{vEw} $$ of $\mathrm{GL}_n(E_{w})$, unique up to equivalence of representations, such that \begin{align} \label{BC-map} \mathrm{tr}(b_{E/F}(\pi_v)(f))=\mathrm{tr}(\pi_v(b_{E/F}(f))) \end{align} for all $f \in C_c^{\infty}(\mathrm{GL}_n(E_w)//\mathrm{GL}_n(\mathcal{O}_{E_w}))$.
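The explicit formula for $b_{E/k}$ on Satake transforms is perhaps clarified by the simplest nontrivial case (an illustration of ours, not from the text): take $n=1$, $k=F$, and $E/F$ quadratic with $v$ inert in $E$, so that there is a single place $w|v$ and $i_v=f(E_w/F_v)=2$. For $\mathrm{GL}_1$ there is no Weyl-invariance condition and $\mathcal{S}(\mathbf{1}_{(m)})=t_1^m$, so
```latex
\mathcal{S}(\mathbf{1}_{(m)w}) = t_{1w}^{m}
\quad\Longrightarrow\quad
\mathcal{S}\bigl(b_{E/F}(\mathbf{1}_{(m)w})\bigr) = t_{1v}^{2m},
\quad\text{i.e.}\quad
b_{E/F}(\mathbf{1}_{(m)w}) = \mathbf{1}_{(2m)v}.
```
This is consistent with base change of characters: for an unramified character $\chi$ of $F_v^{\times}$ one has $\mathrm{tr}(\chi \circ \mathrm{N}_{E_w/F_v})(\mathbf{1}_{(m)w})=\chi(\varpi_v)^{2m}=\mathrm{tr}(\chi)(\mathbf{1}_{(2m)v})$.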
It is called the \textbf{base change} of $\pi_v$, and is an unramified irreducible admissible representation of $\mathrm{GL}_n(E_w)$. Explicitly, if $\pi_v=I(\chi)$ is the irreducible unramified constituent of the unitary induction of an unramified character $\chi:T_n(F_v) \longrightarrow \mathbb{C}^{\times}$, then $\pi_{vEw} \cong I(\chi \circ \mathrm{N}_{E_w/F_v})$, where $$ \mathrm{N}_{E_w/F_v}:T_n(E_w) \longrightarrow T_n(F_v) $$ is the norm map induced by the norm map $\mathrm{N}_{E_w/F_v}:E_w \to F_v$. Here $I(\chi \circ \mathrm{N}_{E_w/F_v})$ is the irreducible unramified constituent of the unitary induction of $\chi \circ \mathrm{N}_{E_w/F_v}$. The identity \begin{align} \label{BC-map2} \mathrm{tr}(I(\chi \circ \mathrm{N}_{E_w/F_v})(f))=\mathrm{tr}(I(\chi)(b_{E/F}(f))) \end{align} is readily verified using the well-known formulas for the trace of an (unramified) Hecke operator in $C_c^{\infty}(\mathrm{GL}_n(F_v)//\mathrm{GL}_n(\mathcal{O}_{F_v}))$ acting on a spherical representation in terms of Satake parameters (see \cite[Theorem 7.5.6]{Laumon}). \subsection{Transfers} \label{ssec-transfers} Let $E/F$ be a field extension. As indicated at the beginning of this paper, the local Langlands correspondence for $\mathrm{GL}_n$ implies that the local base change transfer exists. Thus any irreducible admissible representation $\pi_v$ of $\mathrm{GL}_n(F_v)$ admits a base change $\Pi_w=\pi_{vEw}$ to $\mathrm{GL}_n(E_w)$ for any place $w|v$; this representation is uniquely determined up to isomorphism by the requirement that $$ \varphi(\pi_v)|_{W_{E_w}'} \cong \varphi(\Pi_w), $$ where $\varphi(\cdot)$ is the $L$-parameter of $(\cdot)$. \begin{rem} There is a representation-theoretic definition, due to Shintani, of a base change of an admissible irreducible representation of $\mathrm{GL}_n(F_v)$ along a cyclic extension \cite[Chapter 1, Definition 6.1]{AC}.
One can iterate this definition along cyclic subextensions of a general solvable extension of local fields to arrive at a representation-theoretic definition of the base change of an irreducible admissible representation. For unramified representations of non-archimedean local fields, one uses descent \cite[Lemma 7.5.7]{Laumon} to verify that the two definitions are compatible. Similarly, it is easy to see that they are compatible for abelian twists of the Steinberg representation, using the compatibility of the local Langlands correspondence with twists, the fact that the Steinberg representation has a very simple $L$-parameter (namely the representation of \cite[(4.1.4)]{Tate}), and \cite[Chapter 1, Lemma 6.12]{AC}. However, the author does not know of any reference for their compatibility in general. It probably follows from the compatibility of the local Langlands correspondence with $L$-functions and $\varepsilon$-factors of pairs together with the local results of \cite[Chapter 1, \S 6]{AC}, but we have not attempted to check this. \end{rem} Now let $F'$ be a subfield of $E$ containing $F$, so that $E \geq F' \geq F$. Let $S$ be a set of places of $F$ and let $S'$ (resp.~$S_0$) be the set of places of $F'$ (resp.~$E$) lying above places in $S$.
\begin{defn} \label{defn-transf} Two functions $h_{S'} \in C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_{F'S'}))$ and $\Phi_S \in C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_{FS}))$ are said to be \textbf{transfers} of each other if there is a function $f_{S_0} \in C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_{ES_0}))$ such that for all irreducible generic unitary representations $\pi_{S}$ of $\mathrm{GL}_n(F_S)$ one has $$ \prod_{w \in S_0}\mathrm{tr}(\pi_{vEw})(f_w)=\prod_{v' \in S'} \mathrm{tr}(\pi_{vF'v'})(h_{v'})=\prod_{v \in S}\mathrm{tr}(\pi_{v})(\Phi_v). $$ \end{defn} We immediately state one conjecture suggested by this definition: \begin{conj} \label{conj-transf} Let $S$ be a finite set of places of $F$ containing the infinite places and let $\Pi$ be a cuspidal automorphic representation of $\mathrm{GL}_n(\mathbb{A}_E)$. If $\Pi^{\sigma} \cong \Pi$ for all $\sigma \in \mathrm{Gal}(E/F)$, then there exist an $f_{S_0} \in C_c^{\infty}(\mathrm{GL}_n(E_{S_0}))$ and an $h_{S'} \in C_c^{\infty}(\mathrm{GL}_n(F'_{S'}))$ admitting a transfer $\Phi_{S} \in C_c^{\infty}(\mathrm{GL}_n(F_S))$ of positive type such that the identity of Definition \ref{defn-transf} holds for all irreducible generic unitary representations $\pi_S$ of $\mathrm{GL}_n(F_S)$ and additionally $$ \mathrm{tr}(\Pi_{S_0})(f_{S_0}) \neq 0. $$ \end{conj} Here we say that $\Phi_S$ is of positive type if $\mathrm{tr}(\pi_S)(\Phi_S) \geq 0$ for all irreducible generic unitary admissible representations $\pi_S$ of $\mathrm{GL}_n(F_S)$. \begin{rem} Understanding which $h_{S'}$ and $\Phi_S$ are transfers of each other seems subtle. One na\"ive guess is that those $\Phi_S$ that are supported on ``norms'' of elements of $\mathrm{GL}_n(E_{S_0})$ should be transfers. However, there does not appear to be a good notion of a norm map from conjugacy classes in $\mathrm{GL}_n(E_{S_0})$ to conjugacy classes in $\mathrm{GL}_n(F_S)$, and this guess makes little sense without a notion of norm.
One also hopes to be able to make use of the theory of cyclic base change, but in that setting one is interested in twisted traces, not traces. For these reasons, the author is somewhat hesitant in stating Conjecture \ref{conj-transf}. The author would not be surprised if the conjectured equality holds only up to some transfer factor. \end{rem} In one case the existence of transfers is clear: \begin{lem} \label{lem-unr-transf} Assume that $S$ is a set of finite places and that $E/F$ is unramified at all places in $S$. Let $$ f_{S_0} \in C_c^{\infty}(\mathrm{GL}_n(E_{S_0})//\mathrm{GL}_n(\mathcal{O}_{E_{S_0}})). $$ Then $b_{E/F'}(f_{S_0})$ and $b_{E/F}(f_{S_0})$ are transfers of each other. \end{lem} \begin{proof} If $\pi_S$ is unramified then the identity in the definition of transfers follows from the discussion in \S \ref{ssec-bcformal}. If for some $v \in S$ the representation $\pi_v$ is ramified, then $\pi_{vE}$ and $\pi_{vF'}$ are both ramified, since the extension $E/F$ is unramified and the local Langlands correspondence sends (un)ramified representations to (un)ramified representations. Thus in this case for $f_{S_0}$ as in the lemma one has $$ \prod_{w \in S_0}\mathrm{tr}(\pi_{vEw})(f_w)=\prod_{v' \in S'} \mathrm{tr}(\pi_{vF'v'})(h_{v'})=\prod_{v \in S}\mathrm{tr}(\pi_{v})(\Phi_v)=0. $$ \end{proof} Another case where transfers exist is the following: \begin{lem} \label{lem-archi-transf} Let $v$ be a complex place of $F$ and let $S=\{v\}$. Moreover let $f_{S_0}=\otimes_{w|v}f_w \in C_c^{\infty}(\mathrm{GL}_n(E_{S_0}))$. Then there exist functions $h_{S'} \in C_c^{\infty}(\mathrm{GL}_n(F'_{S'}))$ and $\Phi_v \in C_c^{\infty}(\mathrm{GL}_n(F_v))$ that are transfers of each other such that the identity of Definition \ref{defn-transf} holds. \end{lem} \begin{proof} We use descent to prove the lemma. Let $B \leq \mathrm{GL}_n$ be the Borel subgroup of lower triangular matrices.
We let $B=MN$ be its Levi decomposition; thus $M \leq \mathrm{GL}_n$ is the maximal torus of diagonal matrices. Let $\pi_v$ be an irreducible unitary generic representation of $\mathrm{GL}_n(F_v)$. It is necessarily isomorphic to the unitary induction $\mathrm{Ind}_{B(F_v)}^{\mathrm{GL}_n(F_v)}(\chi)$ of a quasi-character $\chi:M(F_v) \to \mathbb{C}^{\times}$ (where $\chi$ is extended to a representation of $MN(F_v)$ by letting $\chi$ act trivially on $N(F_v)$). Here we have used our assumption that $\pi_v$ is generic to conclude that the corresponding induced representation is irreducible (see the comments following \cite[Lemma 2.5]{JacquetRS} or \cite{Vogan}). Let $\Phi_v^{(B(F_v))} \in C_c^{\infty}(M(F_v))$ denote the constant term of $\Phi_v$ along $B(F_v)$. Then one has $$ \mathrm{tr}(\pi_v)(\Phi_v)=\mathrm{tr}(\chi)(\Phi_v^{(B(F_v))}) $$ (see \cite[(10.23)]{Knapp}). Now let \begin{align*} \mathrm{N}_{E/F'}:M(E \otimes_FF_v) &\longrightarrow M(F' \otimes_FF_v)\\ \mathrm{N}_{E/F}:M(E \otimes_F F_v) &\longrightarrow M(F_v) \end{align*} be the norm maps. Let $f_{S_0}=\otimes_{w|v}f_w \in C_c^{\infty}(\mathrm{GL}_n(E_{S_0}))$. From the comments above, we see that if $h_{S'} =\otimes_{v'|v}h_{v'}\in C_c^{\infty}(\mathrm{GL}_n(F'_{S'}))$ and $\Phi_v \in C_c^{\infty}(\mathrm{GL}_n(F_v))$ are any functions such that \begin{align*} \mathrm{N}_{E/F'*}(f_{S_0}^{(B(E_{S_0}))})&=h_{S'}^{(B(F'_{S'}))}\\ \mathrm{N}_{E/F*}(f_{S_0}^{(B(E_{S_0}))})&=\Phi_v^{(B(F_v))} \end{align*} then $h_{S'}$ and $\Phi_v$ are transfers of each other and the identity of Definition \ref{defn-transf} holds. We note that $$ \mathrm{N}_{E/F'*}(f_{S_0}^{(B(E_{S_0}))})_{v'} \in C_c^{\infty}(M(F'_{v'}))^{W(M,\mathrm{GL}_n)} $$ for each $v'|v$ and $$ \mathrm{N}_{E/F*}(f_{S_0}^{(B(E_{S_0}))})_v \in C_c^{\infty}(M(F_v))^{W(M,\mathrm{GL}_n)}.
$$ Thus to complete the proof it suffices to observe that the map \begin{align*} C_c^{\infty}(\mathrm{GL}_n(F_v)) &\longrightarrow C_c^{\infty}(M(F_v))^{W(M,\mathrm{GL}_n)}\\ \Phi_v &\longmapsto \Phi_v^{(B(F_v))} \nonumber \end{align*} is surjective by the definition of the constant term \cite[(10.22)]{Knapp} and the Iwasawa decomposition of $\mathrm{GL}_n(F_v)$. \end{proof} Finally we consider transfers of Euler-Poincar\'e functions. Let $v$ be a finite place of $F$ and let $$ f_{EP} \in C_c^{\infty}(\mathrm{SL}_n(F_v)) \quad \textrm{ and } \quad f'_{EP} \in C_c^{\infty}(\mathrm{SL}_n(F' \otimes_FF_v)) $$ be the Euler-Poincar\'e functions on $\mathrm{SL}_n(F_v)$ and $\mathrm{SL}_n(F' \otimes_F F_v)$, respectively, defined with respect to the Haar measures on $\mathrm{SL}_n(F_v)$ and $\mathrm{SL}_n(F' \otimes_F F_v)$ giving $\mathrm{SL}_n(\mathcal{O}_{F_v})$ and $\mathrm{SL}_n(\mathcal{O}_{F'} \otimes_{\mathcal{O}_{F}} \mathcal{O}_{F_v})$ volume one, respectively \cite[\S 2]{KottTama}. Moreover fix functions $\nu \in C_c^{\infty}(\mathrm{GL}_n(F_v)/\mathrm{SL}_n(F_v))$ and $\nu' \in C_c^{\infty}(\mathrm{GL}_n(F' \otimes_FF_v)/\mathrm{SL}_n(F' \otimes_F F_v))$. We thereby obtain functions $$ \nu f_{EP} \in C_c^{\infty}(\mathrm{GL}_n(F_v)) \quad \textrm{ and } \quad \nu'f'_{EP} \in C_c^{\infty}(\mathrm{GL}_n(F' \otimes_F F_v)). $$ We refer to any function in $C_c^{\infty}(\mathrm{GL}_n(F_v))$ (resp.~$C_c^{\infty}(\mathrm{GL}_n(F' \otimes_F F_v))$) of the form $\nu f_{EP}$ (resp.~$\nu'f'_{EP}$) as a \textbf{truncated Euler-Poincar\'e function}. For the purposes of stating a lemma on transfers, fix measures on $F_v^{\times}$, $(F' \otimes_F F_{v})^{\times}$, and $(E \otimes_F F_v)^{\times}$ giving their maximal compact open subgroups (e.g.~$\mathcal{O}_{F_v}^{\times}$) volume one. These measures induce Haar measures on $F_v^{\times} \cong \mathrm{GL}_n(F_v)/\mathrm{SL}_n(F_v)$, etc.
Assume for the purposes of the following lemma that $\nu$ and $\nu'$ are chosen so that there is a function $\nu_0 \in C_c^{\infty}(\mathrm{GL}_n(E \otimes_F F_v)/\mathrm{SL}_n(E \otimes_F F_v))$ such that $$ \mathrm{N}_{E\otimes_{F}F_v/F' \otimes_F F_v*}\nu_0=\nu' \quad \textrm{ and } \quad \mathrm{N}_{E\otimes_{F}F_v/F_v*}\nu_0=\nu. $$ \begin{lem} \label{lem-EP} Let $v$ be a finite place of $F$ that is unramified in $E/F$ and let $S=\{v\}$. The truncated Euler-Poincar\'e functions $$ (-1)^{r_1}\nu'f'_{EP} \quad \textrm{ and } \quad (-1)^{r_2} \nu f_{EP} $$ are transfers of each other for some integers $r_1,r_2$ that depend on $v$ and $E/F$. \end{lem} \begin{proof} Recall that if $\mathrm{St}_v$ (resp.~$\mathrm{St}_{v'}$, $\mathrm{St}_{w}$) denotes the Steinberg representation of $\mathrm{GL}_n(F_v)$ (resp.~$\mathrm{GL}_n(F'_{v'})$, $\mathrm{GL}_n(E_w)$), then $\mathrm{St}_{vF'v'}=\mathrm{St}_{v'}$ and $\mathrm{St}_{vEw}=\mathrm{St}_w$ for all places $v'|v$ of $F'$ and $w|v$ of $E$ \cite[Chapter 1, Lemma 6.12]{AC}. Let $f_{0EP} \in C_c^{\infty}(\mathrm{SL}_n(E \otimes_FF_v))$ be the Euler-Poincar\'e function constructed with respect to the Haar measure giving measure one to $\mathrm{SL}_n(\mathcal{O}_E \otimes_{\mathcal{O}_F} \mathcal{O}_{F_v})$. We let $f_{S_0}=\nu_0f_{0EP}$ in the definition of transfer. The statement of the lemma then follows from the comments in the first paragraph of this proof and \cite[Theorem 2'(b)]{KottTama}. This theorem of Casselman states in particular that the only generic unitary irreducible admissible representations of a semisimple $p$-adic group that have nonzero trace when applied to an Euler-Poincar\'e function are the Steinberg representations, and it gives a formula for the trace in this case. We note that in the case at hand one has $q(\mathrm{SL}_{nF_v})=n-1$ in the notation of loc.~cit.
\end{proof} \section{Limiting forms of the cuspidal spectrum} \label{sec-limit-cusp} In the statement of our main theorems we considered limits built out of sums of the trace of a test function acting on the cuspidal spectrum. In this section we prove that these limits converge absolutely. Let $E/F$ be a Galois extension of number fields, let $E \geq F' \geq F$ be a tower of fields with $E/F'$ solvable, and let $\tau \in \mathrm{Gal}(E/F)$. Assume that $S'$ is a finite set of places of $F'$ including the infinite places and that $S_0$ is the set of places of $E$ above them. Let $\phi \in C_c^{\infty}(0,\infty)$ and let $$ \widetilde{\phi}(s):=\int_{0}^{\infty}x^s\phi(x)\frac{dx}{x} $$ be its Mellin transform. The first result of this section is the following proposition: \begin{prop} \label{prop-testsums} Let $\pi'$ be a cuspidal unitary automorphic representation of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$ and assume that $\pi'$ admits a weak base change $\Pi:=\pi_E'$ to $\mathrm{GL}_n(\mathbb{A}_E)$ that satisfies $\Pi_w=(\pi_{v'}')_E$ for all $w|v'$ such that $v' \not \in S'$. Fix $\varepsilon>0$. One has \begin{align*} &\mathrm{tr}(\pi'^{S'})(b_{E/F'}(\Sigma^{S_0}_{\phi}(X)))=\mathrm{Res}_{s=1}\left(\widetilde{\phi}(s)X^sL(s,(\Pi \times \Pi^{\tau\vee})^{S_0})\right)+O_{E,F',n,S_0,\varepsilon}(C_{\Pi \times \Pi^{\tau \vee}}(0)^{\tfrac{1}{2}+\varepsilon}) \end{align*} for all sufficiently large $X \geq 1$. Here ``sufficiently large'' depends only on $E,F',n,S_0,\varepsilon$. \end{prop} Here the complex number $C_{\Pi \times \Pi^{\tau \vee}}(s)$ is the analytic conductor of $L(s,\Pi \times \Pi^{\tau \vee})$ normalized as in \S \ref{ssec-analytic-cond} below, and the $L$-function is the Rankin-Selberg $L$-function (see, e.g.~\cite{Cog1}).
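To orient the reader, we record an elementary remark of our own, under the assumption that $L(s):=L(s,(\Pi \times \Pi^{\tau\vee})^{S_0})$ has at most a simple pole at $s=1$ with residue $c_{-1}$ (note that $\widetilde{\phi}$ is entire since $\phi$ has compact support in $(0,\infty)$): the residue term in the proposition is then simply
```latex
\mathrm{Res}_{s=1}\left(\widetilde{\phi}(s)X^{s}L(s)\right)
\;=\; c_{-1}\,\widetilde{\phi}(1)\,X.
```
Thus, after dividing by $X$, the main term survives exactly when $c_{-1} \neq 0$, that is, when $L(s)$ actually has a pole at $s=1$ (assuming $\phi$ is chosen with $\widetilde{\phi}(1) \neq 0$).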
The point of Proposition \ref{prop-testsums} is the fact that $\lim_{X \to \infty} X^{-1} \mathrm{tr}(\pi'^{S'})(\Sigma_{\phi}^{S_0}(X)) \neq 0$ if and only if $\mathrm{Hom}_{I}(\Pi, \Pi^{\tau}) \neq 0$ (see \S \ref{ssec-analytic-cond}). This is the keystone of the approach to nonsolvable base change and descent exposed in this paper. We will prove Proposition \ref{prop-testsums} in the following subsection. Let \begin{align} \label{cusp-subspace} L^2_0(\mathrm{GL}_n(F')A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})) \leq L^2(\mathrm{GL}_n(F')A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})) \end{align} be the cuspidal subspace. For $h\in C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_{F'}))$ unramified outside of $S'$, let $$ h^1(g):=\int_{A_{\mathrm{GL}_{nF'}}}h(ag)da \in C_c^{\infty}(A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})) $$ and let $R(h^1)$ be the corresponding operator on $L^2(\mathrm{GL}_n(F') A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'}))$. Finally let $R_0(h^1)$ be its restriction to $L^2_0(\mathrm{GL}_n(F') A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'}))$. This restriction is well-known to be of trace class \cite[Theorem 9.1]{Donnely}. The following corollary implies that the limits and sums in \eqref{11}, \eqref{A21}, \eqref{B21}, and \eqref{31} of theorems \ref{main-thm-1}, \ref{main-thm-2} and \ref{main-thm-3} can be switched: \begin{cor} \label{cor-aut-trace} One has \begin{align*} \mathrm{tr}(R_0(h^1b_{E/F'}(\Sigma^{S_0}_{\phi}(X))))&+o_{E,F',n,S_0}(X)\\&=\sum_{\pi'} \mathrm{tr}(\pi')(h^1) \mathrm{Res}_{s=1}\left( \widetilde{\phi}(s)X^sL(s,(\pi'_E \times \pi'^{\tau \vee}_E)^{S_0})\right) \end{align*} where the sum is over any subset of the set of equivalence classes of cuspidal automorphic representations $\pi'$ of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$.
The sum on the right converges absolutely. \end{cor} \subsection{Rankin-Selberg $L$-functions and their analytic conductors} \label{ssec-analytic-cond} For our later use, in this subsection we consider a setting slightly more general than that relevant for the proof of Proposition \ref{prop-testsums}. Let $n_1,n_2$ be positive integers and let $\Pi_1,\Pi_2$ be isobaric automorphic representations of $A_{\mathrm{GL}_{n_1E}} \backslash \mathrm{GL}_{n_1}(\mathbb{A}_E)$, $A_{\mathrm{GL}_{n_2E}} \backslash \mathrm{GL}_{n_2}(\mathbb{A}_E)$, respectively\footnote{For generalities on isobaric automorphic representations see \cite{LanglEinM} and \cite{JSII}.}. We always assume that $$ \Pi_i\cong \Pi_{i1} \boxplus \cdots \boxplus \Pi_{im_i} $$ for $i \in\{1,2\}$ where the $\Pi_{ij}$ are cuspidal automorphic representations of $A_{\mathrm{GL}_{d_jE}} \backslash \mathrm{GL}_{d_j}(\mathbb{A}_E)$ for some set of integers $d_j$ such that $\sum_{j=1}^{m_i}d_j=n_i$. We let $L(s,\Pi_1 \times \Pi_2)$ be the Rankin-Selberg $L$-function \cite{Cog1}; it satisfies $$ L(s,\Pi_1 \times \Pi_2)=\prod_{r=1}^{m_1}\prod_{t=1}^{m_2} L(s,\Pi_{1r} \times \Pi_{2t}) $$ and is holomorphic in the complex plane apart from possible poles at $s \in \{0,1\}$ \cite[Theorem 4.2]{Cog1}. One sets \begin{align} \label{hom-I} \mathrm{Hom}_I(\Pi_1,\Pi_2)=\oplus_{j_1,j_2:\Pi_{1j_1} \cong \Pi_{2j_2}^{\vee}} \mathbb{C}. \end{align} Lest the notation mislead the reader, we note that $\Pi_1$ and $\Pi_2$ are irreducible \cite[\S 2]{LanglEinM}, so the space of ``honest'' morphisms of automorphic representations between them is at most one-dimensional. A fundamental result due to Jacquet and Shalika is that \begin{align} \label{ord-pole} -\mathrm{ord}_{s=1}L(s,\Pi_1 \times \Pi_2)=\dim(\mathrm{Hom}_I(\Pi_1,\Pi_2)) \end{align} \cite[Theorem 4.2]{Cog1}.
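The definition \eqref{hom-I} is combinatorial: $\dim \mathrm{Hom}_I(\Pi_1,\Pi_2)$ counts the pairs of isobaric constituents $(j_1,j_2)$ with $\Pi_{1j_1} \cong \Pi_{2j_2}^{\vee}$, and by \eqref{ord-pole} this is the order of the pole of $L(s,\Pi_1\times\Pi_2)$ at $s=1$. A toy sketch, with constituents modeled as string labels and a hypothetical involution `dual` standing in for the contragredient:

```python
def dual(label):
    # Hypothetical contragredient: toggle a trailing "*".
    return label[:-1] if label.endswith("*") else label + "*"

def dim_hom_I(pi1, pi2):
    # One copy of C for each pair (j1, j2) with Pi_{1 j1} = Pi_{2 j2}^vee.
    return sum(1 for a in pi1 for b in pi2 if a == dual(b))

# Pi_1 = chi [+] chi [+] psi and Pi_2 = chi^vee [+] psi^vee [+] rho:
# the two copies of chi pair with chi^vee and psi pairs with psi^vee, so
# the Rankin-Selberg L-function has a pole of order 3 at s = 1.
print(dim_hom_I(["chi", "chi", "psi"], ["chi*", "psi*", "rho"]))  # 3
```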
There is a set of complex numbers $\{\mu_{\Pi_{1wi} \times \Pi_{2wj}}\}_{\substack{1 \leq i \leq n_1\\ 1 \leq j \leq n_2}} \subset \mathbb{C}$ indexed by the infinite places $w$ of $E$, an integer $N_{\Pi_1 \times \Pi_2}$, and a complex number $\epsilon_{\Pi_1 \times \Pi_2}$ such that if we set $$ \Lambda(s,\Pi_1 \times \Pi_2):=L(s,(\Pi_1 \times \Pi_2)^{\infty})\prod_{w|\infty} \prod_{i=1}^{n_1}\prod_{j=1}^{n_2}\Gamma_{w}(s+\mu_{\Pi_{1wi} \times \Pi_{2wj}}) $$ then \begin{align} \label{FE} \Lambda(s,\Pi_1 \times \Pi_2)=\epsilon_{\Pi_1 \times \Pi_2} N_{\Pi_1 \times \Pi_2}^{\tfrac{1}{2}(1-s)}\Lambda(1-s,\Pi_1^{\vee} \times \Pi_2^{\vee}). \end{align} Here $$ \Gamma_{w}(s):=\begin{cases}\pi^{-s/2}\Gamma(s/2) &\textrm{ if }w \textrm{ is real}\\2(2\pi)^{-s}\Gamma(s) &\textrm{ if }w \textrm{ is complex}. \end{cases} $$ For a proof of this statement, combine \cite[Proposition 3.5]{Cog1}, the appendix of \cite{JacquetRS} and \cite[Theorem 4.1]{Cog1} (\cite[\S 3]{Tate} is also useful). In these references one can also find the fact that, at least after reindexing, one has \begin{align} \label{conj} \overline{\mu_{\Pi_{1wi}^{\vee} \times \Pi_{2wj}^{\vee}}}=\mu_{\Pi_{1wi} \times \Pi_{2wj}}. \end{align} One sets $$ \gamma(\Pi_1 \times \Pi_2,s):=\prod_{w|\infty}\prod_{i=1}^{n_1}\prod_{j=1}^{n_2} \Gamma_{w}(s+\mu_{\Pi_{1wi} \times \Pi_{2wj}}). $$ Following Iwaniec-Sarnak \cite{IS} (with a slight modification), for $s \in \mathbb{C}$ the \textbf{analytic conductor} is defined to be \begin{align} C_{\Pi_1 \times \Pi_2}(s):=N_{\Pi_1 \times \Pi_2} \prod_{w|\infty}\prod_{i=1}^{n_1}\prod_{j=1}^{n_2} \left|\frac{1+\mu_{\Pi_{1wi} \times \Pi_{2wj}}+s}{2\pi}\right|_w. \end{align} We recall that $N_{\Pi_1 \times \Pi_2}=\prod_{w \nmid \infty}N_{\Pi_{1w} \times \Pi_{2w}}$ for some integers $N_{\Pi_{1w} \times \Pi_{2w}}$ \cite[Proposition 3.5]{Cog1}.
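The normalization of the analytic conductor can be made concrete: each archimedean datum contributes $|(1+\mu+s)/(2\pi)|_w$, where $|\cdot|_w$ is the usual absolute value at a real place and its square at a complex place. A sketch with hypothetical values of $N_{\Pi_1 \times \Pi_2}$ and of the $\mu$'s (one list entry per triple $(w,i,j)$; all inputs here are made up for illustration):

```python
import math

def analytic_conductor(N, arch_data, s=0.0):
    # N: the finite conductor N_{Pi_1 x Pi_2}; arch_data: a list of
    # ("real" | "complex", mu) pairs, one per triple (w, i, j).
    C = float(N)
    for kind, mu in arch_data:
        t = abs((1.0 + mu + s) / (2.0 * math.pi))
        C *= t if kind == "real" else t ** 2
    return C

# Hypothetical example: finite conductor 11, one real datum with mu = 3i.
print(analytic_conductor(11, [("real", 3j)]))
```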
If $S_0$ is a set of places of $E$ set \begin{align*} C_{\Pi_{1S_0} \times \Pi_{2S_0}}(s):&=\left(\prod_{\textrm{infinite }w \in S_0} \prod_{i=1}^{n_1}\prod_{j=1}^{n_2}\left|\frac{1+\mu_{\Pi_{1wi} \times \Pi_{2wj}}+s}{2\pi}\right|_w\right) \prod_{\textrm{finite }w \in S_0} N_{\Pi_{1w} \times \Pi_{2w}}. \end{align*} For the purpose of stating a proposition, let $\lambda(m) \in \mathbb{C}$ be the unique complex numbers such that $$ L(s,(\Pi_1 \times \Pi_2)^{S_0})=\sum_{m=1}^{\infty}\frac{\lambda(m)}{m^s} $$ for $\mathrm{Re}(s) \gg 0$. With all this notation set, we have the following proposition: \begin{prop} \label{Perron-prop} Assume that $\Pi_1 \times \Pi_2$ is automorphic of the form $$ \Pi_1 \times \Pi_2 \cong \Pi'_1 \boxplus \cdots \boxplus \Pi'_m $$ where the $\Pi'_i$ are cuspidal automorphic representations of $A_{\mathrm{GL}_{d_iE}} \backslash \mathrm{GL}_{d_i}(\mathbb{A}_E)$. Fix $\varepsilon>0$. For $X \in \mathbb{R}_{>0}$ sufficiently large one has $$ \sum_{m=1}^{\infty}\lambda(m)\phi(m/X)=\mathrm{Res}_{s=1}\left( \widetilde{\phi}(s)X^sL(s,(\Pi_1 \times \Pi_2)^{S_0})\right) +O_{E,n,S_0,\varepsilon}(C_{\Pi_1 \times \Pi_2}(0)^{\tfrac{1}{2}+\varepsilon}). $$ Here ``sufficiently large'' depends only on $E,n,S_0,\varepsilon$. \end{prop} Before beginning the proof of Proposition \ref{Perron-prop} we record one consequence of known bounds towards the Ramanujan-Petersson conjecture. Let $r(E,n) \in \mathbb{R}_{\geq 0}$ be a nonnegative real number such that for all finite places $w$ of $E$ one has \begin{align} \label{rp-bound} |L(s,\Pi_{1w} \times \Pi_{2w})| \leq (1-q_w^{-\mathrm{Re}(s)+r(E,n)})^{-n} \end{align} for $\mathrm{Re}(s)>r(E,n)$ and \begin{align} \label{rbound} |L(s,\Pi_{1w} \times \Pi_{2w})^{-1}| \leq (1+q_w^{-\mathrm{Re}(s)+r(E,n)})^n \end{align} for all $s$.
This real number exists (and can be taken to be independent of the choice of cuspidal automorphic representations $\Pi_1$ and $\Pi_2$ of $A_{\mathrm{GL}_{nE}} \backslash \mathrm{GL}_n(\mathbb{A}_E)$) by \cite[Theorem 2]{LRS} in the unramified case and \cite[Proposition 3.3]{MS} in the general case. In particular we may take \begin{align} \label{LRS-bound} r(E,n) < \frac{1}{2}-\frac{1}{n^2+1}. \end{align} If the Ramanujan-Petersson conjecture were known then we could take $r(E,n)=0$. \begin{proof}[Proof of Proposition \ref{Perron-prop}] The proof is a standard application of the inverse Mellin transform entirely analogous to the proof of \cite[Theorem 3.2]{Booker}. We only make a few comments on how to adapt the proof of \cite[Theorem 3.2]{Booker}. First, the assumption of the Ramanujan conjecture in \cite[Theorem 3.2]{Booker} can be replaced by the known bounds toward it that are recorded in \eqref{LRS-bound} above. Second, the bounds on the gamma factors in terms of the analytic conductor are proven in detail in \cite{Moreno}. Finally, we recall that if $L(s,\Pi_1 \times \Pi_2)$ has a pole in the half-plane $\mathrm{Re}(s)>0$ then it is located at $s=1$ and is of order $\dim \mathrm{Hom}_{I}(\Pi_1,\Pi_2)$ by a result of Jacquet, Piatetskii-Shapiro and Shalika \cite[Theorem 4.2]{Cog1}; this accounts for the main term in the expression above. \end{proof} We now prepare for the proof of Proposition \ref{prop-testsums}. If $w$ is a finite place of $E$ and $\Pi_w$ is an unramified representation of $\mathrm{GL}_n(E_w)$ we denote by $A(\Pi_w)$ the Langlands class of $\Pi_w$; it is a semisimple conjugacy class in $({}^L\mathrm{GL}_{nE})^{\circ}=\mathrm{GL}_n(\mathbb{C})$, the neutral component of ${}^L\mathrm{GL}_{nE}$.
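The local input at unramified places is the Euler factor $\det(1-A(\Pi_{1w})\otimes A(\Pi_{2w})\,q_w^{-s})^{-1}$, whose expansion in powers of $q_w^{-s}$ has coefficients $\mathrm{tr}\,\mathrm{Sym}^k$ of the tensor product of the Langlands classes. A quick numerical check of this identity with hypothetical Satake parameters, using that $\mathrm{tr}\,\mathrm{Sym}^k$ of a diagonalizable matrix is the complete homogeneous symmetric polynomial $h_k$ of its eigenvalues:

```python
import math
from itertools import combinations_with_replacement

a = [0.3, 0.5]    # hypothetical Satake parameters of Pi_{1w}
b = [0.2, -0.4]   # hypothetical Satake parameters of Pi_{2w}
prods = [ai * bj for ai in a for bj in b]  # eigenvalues of A(Pi_1w) x A(Pi_2w)

x = 0.7  # plays the role of q_w^{-s}

# Left side: det(1 - (A x B) x)^{-1} computed over the eigenvalues.
lhs = 1.0
for c in prods:
    lhs /= (1.0 - c * x)

def h(k, vals):
    # complete homogeneous symmetric polynomial h_k(vals) = tr Sym^k
    return sum(math.prod(combo) for combo in combinations_with_replacement(vals, k))

# Right side: truncation of sum_k tr Sym^k(A x B) x^k.
rhs = sum(h(k, prods) * x ** k for k in range(40))
print(lhs, rhs)  # agree up to a tiny truncation error
```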
For $\mathrm{Re}(s)>1$ we have that \begin{align} \label{RS-descr} L(s,(\Pi_1 \times \Pi_2)^{S_0}) = \prod_{w \not \in S_0} \sum_{k \geq 0}\mathrm{tr}(\mathrm{Sym}^k(A(\Pi_{1w}) \otimes A(\Pi_{2w})))q_w^{-ks} \end{align} and the sum on the right converges absolutely \cite[Theorem 5.3 and proof of Proposition 2.3]{JS}. \begin{proof}[Proof of Proposition \ref{prop-testsums}] Let $v'$ be a place of $F'$ where $\pi'$ is unramified and let $a_1,\dots,a_n$ be the Satake parameters of $\pi'_{v'}$ (i.e.~the eigenvalues of $A(\pi'_{v'})$). We recall that $$ \mathrm{tr}(\pi'_{v'})(h)=\mathcal{S}(h)(a_1,\dots,a_n) $$ for all $h \in C_c^{\infty}(\mathrm{GL}_n(F'_{v'})//\mathrm{GL}_n(\mathcal{O}_{F_{v'}'}))$ \cite[Theorem 7.5.6]{Laumon}. This together with Proposition \ref{Perron-prop}, \eqref{RS-descr}, and the description of unramified base change recalled in \S \ref{ssec-bcformal} implies the proposition. \end{proof} \subsection{Proof of Corollary \ref{cor-aut-trace}} \label{ssec-cor-trace} Our goal in this subsection is to prove Corollary \ref{cor-aut-trace}. We first establish the following two lemmas: \begin{lem} \label{lem-cond} Let $v'$ be an infinite place of $F'$, let $h_{v'} \in C_c^{\infty}(\mathrm{GL}_n(F_{v'}'))$ and let $N \in \mathbb{Z}$. If $A$ is a countable set of inequivalent irreducible generic admissible representations of $\mathrm{GL}_n(F_{v'}')$, then for $\pi'_{v'} \in A$ $$ \mathrm{tr}(\pi'_{v'})(h_{v'})C_{\pi'_{v'}}(0)^N \to 0 \textrm{ as } C_{\pi'_{v'}}(0) \to \infty. $$ \end{lem} \begin{lem} \label{lem-Casimir} Fix a positive integer $n$.
There is an integer $N>0$ depending on $n$ and a polynomial $P$ of degree $N$ in $n$ variables such that the Casimir eigenvalue of an irreducible generic admissible representation $\pi_v$ of $\mathrm{GL}_n(F_v)$ is bounded by $|P(|\mu_{1\pi_v}|,\dots,|\mu_{n\pi_v}|)|$. \end{lem} Thus the trace and Casimir eigenvalue of the $\pi_v$ are controlled by the analytic conductor. This is certainly well-known, but the author was unable to locate these results in the literature. Moreover, the proof of Lemma \ref{lem-cond} is more interesting than one would expect. We begin by recalling some notions that will allow us to use descent. Let $v$ be an archimedean place of $F$; we fix an embedding $\mathbb{R} \hookrightarrow F_v$ (which is an isomorphism if $v$ is real). Let $\mathfrak{h} \leq \mathrm{R}_{F_v/\mathbb{R}}\mathfrak{gl}_n$ be the Cartan subalgebra of diagonal matrices. For a Lie algebra $\mathfrak{g}$ over $\mathbb{R}$, write $\mathfrak{g}_{\mathbb{C}}:=\mathfrak{g} \otimes_{\mathbb{R}} \mathbb{C}$. Without loss of generality we assume that the set of positive roots of $\mathfrak{h}_{\mathbb{C}}$ inside $\mathrm{R}_{F_v/\mathbb{R}}\mathfrak{gl}_{n\mathbb{C}}$ is defined using the Borel subgroup $B \leq \mathrm{GL}_n$ of \textbf{lower} triangular matrices (this is to be consistent with \cite{JacquetRS}). Thus standard parabolic subgroups are parabolic subgroups containing $B$. If $Q=MN$ is the Levi decomposition of a standard parabolic subgroup then \begin{align} \label{Levi-decomp} M =\prod_jM_j \cong \prod_{j} \mathrm{R}_{F_v/\mathbb{R}}\mathrm{GL}_{n_j} \end{align} where $M_j \cong \mathrm{R}_{F_v/\mathbb{R}}\mathrm{GL}_{n_j}$ and $\sum_jn_j=n$. If $Q$ is cuspidal then $n_j \in \{1,2\}$ if $v$ is real and all $n_j=1$ if $v$ is complex. We let $\mathfrak{m}:=\mathrm{Lie}(M)$ and $\mathfrak{m}_j:=\mathrm{Lie}(M_j)$.
Moreover we let $\mathfrak{h}_j:=\mathfrak{h} \cap \mathfrak{m}_j$; thus $\mathfrak{h}_j$ is isomorphic to the Cartan subalgebra of diagonal matrices in $\mathrm{R}_{F_v/\mathbb{R}}\mathrm{GL}_{n_j}$. Let $\pi_v$ be an irreducible admissible representation of $\mathrm{GL}_n(F_v)$. Thus there is a cuspidal standard parabolic subgroup $Q \leq \mathrm{R}_{F_v/\mathbb{R}}\mathrm{GL}_n$ with Levi decomposition $Q=MN$ and an irreducible admissible representation $\pi_M$ of $M(F_v)$ such that $$ \pi_v \cong J(\pi_M) $$ where $J(\pi_M)$ is the Langlands quotient of the induced representation $\mathrm{Ind}_{Q(F_v)}^{\mathrm{GL}_n(F_v)}(\pi_M)$ \cite[Theorem 14.92]{Knapp}. Moreover $\pi_M$ can be taken to be a twist of a discrete series or limit of discrete series representation. Here we are viewing $\pi_M$ as a representation of $Q(F_v)$ by letting it act trivially on $N(F_v)$. If $\pi_v$ is generic, then $\mathrm{Ind}_{Q(F_v)}^{\mathrm{GL}_n(F_v)}(\pi_M)$ is irreducible and hence $$ \pi_v \cong J(\pi_M) \cong \mathrm{Ind}_{Q(F_v)}^{\mathrm{GL}_n(F_v)}(\pi_M) $$ (see the comments after \cite[Lemma 2.5]{JacquetRS} or \cite{Vogan}). We decompose $$ \pi_M \cong \otimes_j \pi_j $$ where each $\pi_j$ is an admissible irreducible representation of $M_j(F_v)$. We note that, essentially by definition, \begin{align} \label{L-prod} L(s,\pi_v)=\prod_jL(s,\pi_j)=\prod_j\prod_{i=1}^{n_j}\Gamma_v(s+\mu_{i\pi_{j}}) \end{align} (compare \cite[Appendix]{JacquetRS}). We are now in a position to prove Lemma \ref{lem-cond}: \begin{proof}[Proof of Lemma \ref{lem-cond}] Let $f \in C_c^{\infty}(\mathrm{GL}_n(F_v))$ and let $f^{(Q)} \in C_c^{\infty}(M(F_v))$ be the constant term of $f$ along $Q$ (see \cite[(10.22)]{Knapp} for notation). Using the natural isomorphism $C_c^{\infty}(M(F_v))=\prod_{j}C_c^{\infty}(M_j(F_v))$ we decompose $f^{(Q)}=\prod_jf_j$.
One then has \begin{align} \label{const} \mathrm{tr}(\pi_v)(f)=\mathrm{tr}(\pi_M)(f^{(Q)})=\prod_j\mathrm{tr}(\pi_j)(f_j) \end{align} (see \cite[(10.23)]{Knapp}). Combining \eqref{const} and \eqref{L-prod}, we see that the lemma will follow if we establish it in the special cases $n \in \{1,2\}$ for $v$ real and $n=1$ for $v$ complex. Moreover when $n=2$ we can assume that $\pi_v$ is a twist by a quasi-character of a discrete series or limit of discrete series representation. We henceforth place ourselves in this situation. Assume for the moment that $n=1$ and $v$ is real. Then $\pi_v$ is a quasi-character of $\mathbb{R}^{\times}$ and hence it is of the form $$ \pi_v(t)=|t|_v^{u}\mathrm{sgn}^{k}(t) $$ for $t \in \mathbb{R}^{\times}$ and some $u \in \mathbb{C}$ and $k \in \{0,1\}$. In this case we have $\mu_{1\pi_v}=u+k$ by \cite[Appendix]{JacquetRS}. Similarly, if $n=1$ and $v$ is complex, then $\pi_v$ is a quasi-character of $\mathbb{C}^{\times}$ and hence is of the form $$ \pi_v(z):=z^{m}(z\overline{z})^{-\tfrac{m}{2}}|z|_v^{u} $$ for $z \in \mathbb{C}^{\times}$ and some $m \in \mathbb{Z}$ and $u \in \mathbb{C}$. In this case we have $\mu_{1\pi_v}=\tfrac{m}{2}+u$ by \cite[Appendix]{JacquetRS}. In either case, as a function of $\mu_{1\pi_v}$ the trace $\mathrm{tr}(\pi_v)(f)$ is easily seen to be rapidly decreasing since the Fourier transform of a compactly supported smooth function on $\mathbb{R}^{\times}$ or $\mathbb{C}^{\times}$ is rapidly decreasing. The lemma follows in these cases. We are left with the case where $n=2$ and $v$ is real; thus $\pi_v$ is a twist of a discrete series or nondegenerate limit of discrete series representation by a quasi-character. For $m \in \mathbb{Z}$ let $$ \Omega_{m}(z):=z^{m}(z\overline{z})^{-\tfrac{m}{2}} \quad \textrm{ and } \quad \sigma_m:=\mathrm{Ind}_{\mathbb{C}^{\times}}^{W_{F_v}}(\Omega_m).
$$ The $L$-parameter $\varphi(\pi_v):W_{F_v} \to {}^L\mathrm{GL}_{2F_v}$ attached to $\pi_v$ is of the form $\sigma_m \otimes \chi$ for some $m \in \mathbb{Z}_{\geq 0}$ and some one-dimensional representation $\chi:W_{F_v} \to \mathbb{R}^{\times}$. The discrete series (or limit of discrete series) representation $\pi(\sigma_m)$ will be denoted by $D_{m+1}$; it is in the discrete series if and only if $m>0$ (see \cite[Appendix]{JacquetRS}). The representation $D_{m+1}$ is usually referred to as the discrete series of weight $m+1$ if $m>0$ and the limit of discrete series if $m=0$. Recall that any one-dimensional representation of $W_{F_v}$ can be regarded (canonically) as a character $\mathbb{R}^{\times} \to \mathbb{R}^{\times}$; this applies in particular to $\chi$. We note that $$ \sigma_m \otimes \mathrm{sgn} \cong \sigma_m $$ since $\mathrm{sgn}$ can be identified with the nontrivial character of $W_{\mathbb{R}}/W_{\mathbb{C}}$ by class field theory. Since $\pi_v$ is assumed to be unitary, we assume without loss of generality that $\chi=|\cdot|_v^{it}$ for some real number $t$. With this in mind, the duplication formula implies that we may take $\mu_{1\pi_v}=\tfrac{m}{2}+it$ and $\mu_{2\pi_v}=\tfrac{m}{2}+1+it$ (compare \cite[Appendix]{JacquetRS}). We compute the trace $\mathrm{tr}(\pi_v)(f)$ for $f \in C_c^{\infty}(\mathrm{GL}_2(F_v))=C_c^{\infty}(\mathrm{GL}_2(\mathbb{R}))$. First, define \begin{align*} f_t:\mathrm{SL}_2(\mathbb{R}) &\longrightarrow \mathbb{C}\\ g &\longmapsto \int_{\mathbb{R}^{\times}}|z|_v^{it}\int_{\mathrm{SO}_2(\mathbb{R})}f(k^{-1}zgk)dzdk \end{align*} where we normalize the measure $dk$ so that $\mathrm{meas}_{dk}(\mathrm{SO}_2(\mathbb{R}))=1$ and $dz$ is some choice of Haar measure. Thus $f_t \in C_c^{\infty}(\mathrm{SL}_2(\mathbb{R}))$.
One has (with appropriate choices of measures) \begin{align*} \mathrm{tr}(\pi_v)(f)=\int_{\mathrm{SL}_2(\mathbb{R})} \Theta_{m+1}(g)f_t(g)dg=:\Theta_{m+1}(f_t) \end{align*} where $dg$ is the Haar measure on $\mathrm{SL}_2(\mathbb{R})$ giving $\mathrm{SO}_2(\mathbb{R})$ measure one and $\Theta_{m+1}$ is the character of $D_{m+1}|_{\mathrm{SL}_2(\mathbb{R})}$. By Fourier theory on $C_{\mathrm{GL}_{2}}(\mathbb{R}) \cong \mathbb{R}^{\times}$, one sees that to prove the lemma it suffices to prove that as $m \to \infty$ $$ |\Theta_{m+1}(f_t)| |m|^N \to 0 $$ for all $N \in \mathbb{Z}$. For this it suffices to show that for all $f \in C_c^{\infty}(\mathrm{SL}_2(\mathbb{R}))$ such that $f(kxk^{-1})=f(x)$ one has \begin{align} \label{to-show-induct} |\Theta_{m+1}(f)||m|^N \to 0 \end{align} for all $N \in \mathbb{Z}$. This is what we will show. Let $$ M(F_v)=\left\{ a_t: a_t=\begin{pmatrix}e^t & 0 \\ 0 & e^{-t} \end{pmatrix},t \in \mathbb{R}\right\} $$ and $$ T(F_v)=\left\{k_{\theta}: k_{\theta}=\begin{pmatrix} \cos \theta & \sin \theta\\ -\sin \theta & \cos \theta \end{pmatrix}, 0 < \theta \leq 2\pi\right\}. $$ By \cite[(11.37)]{Knapp}\footnote{Knapp denotes $M$ by $T$ and $T$ by $B$.} and the discussion below it, for $f \in C_c^{\infty}(\mathrm{SL}_2(\mathbb{R}))$ satisfying $f(kxk^{-1})=f(x)$ for $k \in \mathrm{SL}_2(\mathbb{R})$ we have \begin{align} \label{first-formula} m\Theta_{m+1}(f)=&-\frac{1}{2\pi i} \int_{0}^{2 \pi}(e^{im\theta}+e^{-im\theta})\frac{d}{d\theta}F^T_f(\theta)d\theta\\& \nonumber +\frac{1}{2}\int_{-\infty}^{\infty} e^{-m|t|}(\mathrm{sgn}(t)) \frac{d}{dt} F^M_f(a_t)dt\\ \nonumber &+\frac{1}{2}(-1)^m\int_{-\infty}^{\infty}e^{-m|t|}(\mathrm{sgn}(t))\frac{d}{dt}F_f^M(-a_t)dt \end{align} for $m>0$. The case $m=0$ is unimportant for our purposes as we are interested in the behavior as $m \to \infty$.
Here $dt$ and $d\theta$ are the usual Lebesgue measures and \begin{align*} F^T_f(k_{\theta})&=(e^{i\theta}-e^{-i\theta})\int_{\mathrm{GL}_2(F_v)/T(F_v)}f(xk_{\theta}x^{-1})d \dot{x}\\ F^M_f(\pm a_t)&=\pm|e^t-e^{-t}|\int_{\mathrm{GL}_2(F_v)/M(F_v)} f(xa_tx^{-1}) d\dot{x} \end{align*} for suitably chosen Haar measures (that are independent of $\pi_v$) \cite[(10.9a-b)]{Knapp}. We note that the functions $F^M_f$ are smooth \cite[Proposition 11.8]{Knapp} and for integers $k \geq 0$ the odd-order derivative $\frac{d^{2k+1}}{d\theta^{2k+1}}F^T_f(\theta)$ is continuous (see the remarks after \cite[Proposition 11.9]{Knapp}). Moreover $F^M_f(a_t)$ vanishes outside a bounded subset of $M(\mathbb{R})$ \cite[Proposition 11.7]{Knapp}. We claim that for $m>0$ and $k \geq 1$ one has {\allowdisplaybreaks \begin{align} \label{claim-induct} m^{2k+1}\Theta_{m+1}(f)=&-\frac{1}{2\pi i^{2k+1}} \int_{0}^{2 \pi}(e^{im \theta}+e^{-im \theta})\frac{d^{2k+1}}{d\theta^{2k+1}}F_f^T(\theta) d\theta\\&+\frac{1}{2}\int_{-\infty}^{\infty}e^{-m|t|}(\mathrm{sgn}(t))\frac{d^{2k+1}}{dt^{2k+1}}F^M_f(a_t)dt \nonumber \\ &+\frac{1}{2}(-1)^m\int_{-\infty}^{\infty}e^{-m|t|} (\mathrm{sgn}(t))\frac{d^{2k+1}}{dt^{2k+1}}F^M_f(-a_t)dt. \nonumber \end{align}} Assuming \eqref{claim-induct}, an application of the Riemann-Lebesgue lemma implies \eqref{to-show-induct}, which in turn implies the lemma. Thus proving \eqref{claim-induct} will complete the proof of the lemma. Proceeding by induction, assume \eqref{claim-induct} holds with $k$ replaced by $k-1$, the case $k-1=0$ being \eqref{first-formula}.
Applying integration by parts we obtain that \eqref{claim-induct} is equal to {\allowdisplaybreaks\begin{align*} &-(mi)^{-1}\left(-\frac{1}{2 \pi i^{2k-1}}\right)\int_{0}^{2 \pi}( e^{im\theta}-e^{-im\theta})\frac{d^{2k}}{d\theta^{2k}}F_f^T(\theta)d\theta\\ &-\frac{1}{2m}\int_{-\infty}^{\infty} e^{-m|t|}(\mathrm{sgn}(t)) \frac{d^{2k}}{dt^{2k}} F^M_f(a_t)dt\\ &-\frac{1}{2m}(-1)^m\int_{-\infty}^{\infty}e^{-m|t|}(\mathrm{sgn}(t))\frac{d^{2k}}{dt^{2k}}F_f^M(-a_t)dt\\ &+\frac{1}{2m}e^{-m|0^+|}\left( \frac{d^{2k-1}}{dt^{2k-1}}F_f^M(a_0^+) +(-1)^m\frac{d^{2k-1}}{dt^{2k-1}}F_f^M(-a_0^+)\right)\\&+\frac{1}{2m}e^{-m|0^-|}\left( \frac{d^{2k-1}}{dt^{2k-1}}F_f^M(a_0^-) +(-1)^m\frac{d^{2k-1}}{dt^{2k-1}}F_f^M(-a_0^-)\right)\\ &=-m^{-1}\left(-\frac{1}{2 \pi i^{2k}}\right)\int_{0}^{2 \pi}( e^{im\theta}-e^{-im\theta})\frac{d^{2k}}{d\theta^{2k}}F_f^T(\theta)d\theta\\ &-\frac{1}{2m}\int_{-\infty}^{\infty} e^{-m|t|}(\mathrm{sgn}(t)) \frac{d^{2k}}{dt^{2k}} F^M_f(a_t)dt\\ &-\frac{1}{2m}(-1)^m\int_{-\infty}^{\infty}e^{-m|t|}(\mathrm{sgn}(t))\frac{d^{2k}}{dt^{2k}}F_f^M(-a_t)dt \\&+\frac{1}{m}\left( \frac{d^{2k-1}}{dt^{2k-1}}F_f^M(a_0)+(-1)^m\frac{d^{2k-1}}{dt^{2k-1}}F_f^M(-a_0)\right) \end{align*}} \noindent where the $\pm$ denote values as $t \to 0^{\pm}$ (this is purely for emphasis, as $F_f^M$ is smooth). We note that the extra terms occur because of the singularity of the sign function $\mathrm{sgn}(t)$ at $t=0$. Since $F_f^M(\pm a_t)$ is even as a function of $t$, the last terms in the expression above vanish. Thus the quantity above is equal to \begin{align*} &-m^{-1}\left(-\frac{1}{2 \pi i^{2k}}\right)\int_{0}^{2 \pi}( e^{im\theta}-e^{-im\theta})\frac{d^{2k}}{d\theta^{2k}}F_f^T(\theta)d\theta\\ &-\frac{1}{2m}\int_{-\infty}^{\infty} e^{-m|t|}(\mathrm{sgn}(t)) \frac{d^{2k}}{dt^{2k}} F^M_f(a_t)dt\\ &-\frac{1}{2m}(-1)^m\int_{-\infty}^{\infty}e^{-m|t|}(\mathrm{sgn}(t))\frac{d^{2k}}{dt^{2k}}F_f^M(-a_t)dt.
\end{align*} Keeping in mind that $\frac{d^{2k}}{d\theta^{2k}}F_f^T(\theta)$ has jump discontinuities at $0$ and $\pi$ (see the remark after \cite[Proposition 11.9]{Knapp}), we now apply integration by parts again to see that this expression is equal to \begin{align*} &m^{-2}\left(-\frac{1}{2 \pi i^{2k+1}}\right)\int_{0}^{2 \pi}( e^{im\theta}+e^{-im\theta})\frac{d^{2k+1}}{d\theta^{2k+1}}F_f^T(\theta)d\theta\\ &-m^{-2}\left(-\frac{1}{ 2\pi i^{2k+1}}\right)(2)\left( \frac{d^{2k}}{d\theta^{2k}}F_f^T(0^-)-(-1)^m\frac{d^{2k}}{d\theta^{2k}}F_f^T(\pi^+)+ (-1)^m\frac{d^{2k}}{d\theta^{2k}}F_f^T(\pi^-)-\frac{d^{2k}}{d\theta^{2k}}F_f^T(0^+)\right)\\ &+\frac{1}{2m^2}\int_{-\infty}^{\infty} e^{-m|t|}(\mathrm{sgn}(t)) \frac{d^{2k+1}}{dt^{2k+1}} F^M_f(a_t)dt\\ &+\frac{1}{2m^2}(-1)^m\int_{-\infty}^{\infty}e^{-m|t|}(\mathrm{sgn}(t))\frac{d^{2k+1}}{dt^{2k+1}}F_f^M(-a_t)dt\\ &+\frac{1}{m^2}\frac{d^{2k}}{dt^{2k}}F_f^M(a_0)+\frac{1}{m^2}(-1)^m\frac{d^{2k}}{dt^{2k}}F_f^M(-a_0). \end{align*} The second and last lines of the expression above cancel by the jump relations \cite[(11.45a), (11.45b)]{Knapp}. Thus the above is equal to \begin{align*} &m^{-2}\left(-\frac{1}{2 \pi i^{2k+1}}\right)\int_{0}^{2 \pi}( e^{im\theta}+e^{-im\theta})\frac{d^{2k+1}}{d\theta^{2k+1}}F_f^T(\theta)d\theta\\ &+\frac{1}{2m^2}\int_{-\infty}^{\infty} e^{-m|t|}(\mathrm{sgn}(t)) \frac{d^{2k+1}}{dt^{2k+1}} F^M_f(a_t)dt\\ &+\frac{1}{2m^2}(-1)^m\int_{-\infty}^{\infty}e^{-m|t|}(\mathrm{sgn}(t))\frac{d^{2k+1}}{dt^{2k+1}}F_f^M(-a_t)dt \end{align*} which completes the induction, proving \eqref{claim-induct} and hence the lemma. \end{proof} \begin{rem} The jump relations which appear in this proof play a role in Langlands' adelization of the trace formula and his hope that it will be amenable to Poisson summation \cite{FLN,LSing}.
\end{rem} For the proof of Lemma \ref{lem-Casimir}, it is convenient to summarize some of the information obtained in the proof of the previous lemma in the following table: \begin{center} \begin{tabular}{ l | c |c } $\pi_{v}$& $v$ & $(\mu_{i\pi_{v}})$\\ \hline $t \mapsto \mathrm{sgn}(t)^k|t|_v^{u}$ & real & $k+u$\\ $D_{m+1} \otimes |\cdot|_v^u$ & real &$(m/2+u,m/2+1+u)$\\ $z \mapsto z^{m}(z\overline{z})^{-\tfrac{m}{2}}|z|_v^{u}$ & complex & $\tfrac{m}{2}+u$ \end{tabular} \end{center} We now prove Lemma \ref{lem-Casimir}: \begin{proof}[Proof of Lemma \ref{lem-Casimir}] The Harish-Chandra isomorphism \cite[\S VIII.5]{Knapp} factors as $$ \gamma:Z(\mathrm{R}_{F_v/\mathbb{R}}\mathfrak{gl}_{n \mathbb{C}}) \tilde{\longrightarrow} Z(\mathfrak{m}_{\mathbb{C}}) \tilde{\longrightarrow} U(\mathfrak{h}_{\mathbb{C}}) $$ where the second map is the Harish-Chandra isomorphism $$ \gamma_M:Z(\mathfrak{m}_{\mathbb{C}}) \tilde{\longrightarrow} U(\mathfrak{h}_{\mathbb{C}}). $$ We also have Harish-Chandra isomorphisms $$ \gamma_j:Z(\mathfrak{m}_{j\mathbb{C}}) \tilde{\longrightarrow} U(\mathfrak{h}_{j\mathbb{C}}). $$ The infinitesimal character of $\pi_v$ (resp.~$\pi_M$) is of the form $\Lambda(\pi_v) \circ \gamma$ (resp.~$\Lambda(\pi_{M}) \circ \gamma_M$) for some $\Lambda(\pi_v)$ (resp.~$\Lambda(\pi_M)$) in $\mathfrak{h}_{\mathbb{C}}^{\wedge}$ \cite[\S VIII.6]{Knapp}. Similarly the infinitesimal character of $\pi_j$ is of the form $\Lambda(\pi_j) \circ \gamma_j$ for some $\Lambda(\pi_j) \in \mathfrak{h}_{j\mathbb{C}}^{\wedge}$. Moreover \begin{align} \label{factor-Lambda} \Lambda(\pi_v)=\Lambda(\pi_M)=\sum_j \Lambda(\pi_j) \end{align} \cite[Proposition 8.22]{Knapp}.
If $C \in Z(\mathrm{R}_{F_v/\mathbb{R}}\mathfrak{gl}_{n\mathbb{C}})$ is the Casimir operator, the eigenvalue of $C$ acting on the space of $\pi_v$ is $\Lambda(\pi_v)(\gamma(C))$. For each $j$ let $\{\Lambda_{j,\alpha}\} \subset \mathfrak{h}_{j\mathbb{C}}^{\wedge}$ be a basis, and write $$ \Lambda(\pi_v):=\sum_{j} \sum_{\alpha} a_{j,\alpha}(\pi_j) \Lambda_{j,\alpha} $$ for some $a_{j,\alpha}(\pi_j) \in \mathbb{C}$. We note that $\gamma(C)$ does not depend on $\pi_v$. Therefore in order to prove the lemma it suffices to exhibit a basis as above such that the $a_{j,\alpha}(\pi_j)$ are bounded by a polynomial in the $|\mu_{i\pi_v}|$ for $1 \leq i \leq n$. In view of \eqref{L-prod} and \eqref{factor-Lambda} it follows that in order to prove the lemma it suffices to prove this statement in the special case where $n \in \{1,2\}$ for $v$ real and the special case $n=1$ for $v$ complex. In the $n=1$ cases this comes down to unraveling definitions. For $n=2$ we can assume that $\pi_v$ is a twist by a quasi-character of a discrete series or limit of discrete series representation. In this case we refer to \cite[Chapter VIII, \S 16, Problem 1]{Knapp}. \end{proof} We now prove Corollary \ref{cor-aut-trace}: \begin{proof}[Proof of Corollary \ref{cor-aut-trace}] In view of Proposition \ref{prop-testsums}, in order to prove the corollary it suffices to show that the contribution of the terms in Proposition \ref{prop-testsums} that depend on the automorphic representation does not grow too fast when we sum over all automorphic representations of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$ fixed by a given compact open subgroup of $\mathrm{GL}_n(\mathbb{A}_{F'}^{\infty})$.
More precisely, it suffices to show that \begin{align} \label{obound} \sum_{\pi'}\mathrm{tr}(\pi')(h)C_{\pi'_E \times \pi'^{\tau\vee}_E}(0)^{\tfrac{1}{2}+\varepsilon}=o(X) \end{align} where the sum is over any subset of the set of equivalence classes of cuspidal automorphic representations $\pi'$ of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_{n}(\mathbb{A}_{F'})$ and $\pi_E'$ is the base change of $\pi'$ to $\mathrm{GL}_n(\mathbb{A}_E)$. The basic properties of cyclic base change (i.e.~the relationship between the $L$-function of an admissible representation and its base change) together with the recollections on local $L$-factors collected in \S \ref{ssec-analytic-cond} above imply that \begin{align*} C_{\pi'_{E\infty} \times \pi'^{\vee}_{E\infty}}(s) \leq C_{\pi'_{\infty}}(s)^N \end{align*} for sufficiently large $N \geq 0$ depending on $E/F'$ and $n$. Using Lemma \ref{lem-cond} and the Weyl law for cusp forms \cite[Theorem 9.1]{Donnely}, we see that in order to prove \eqref{obound} it suffices to show that the Casimir eigenvalues (and hence the Laplacian eigenvalues) of a cuspidal automorphic representation $\pi'$ contributing to \eqref{obound} are bounded by a polynomial in the absolute values of the parameters $\mu_{i\pi'_{v'}}$ for archimedean $v'$. This is the content of Lemma \ref{lem-Casimir}. \end{proof} \section{Restriction and descent of $L$-parameters}\label{sec-rest-desc} The goal of this section is to prove some properties of $L$-parameters under restriction along an extension of number fields and then formulate the conjectures in automorphic representation theory to which these properties correspond. Criteria for parameters to descend are given in \S \ref{ssec-desc}.
In \S \ref{ssec-primitive} we define a notion of $E$-primitive parameters and automorphic representations and then use it in \S \ref{ssec-rest-param} to give an explicit description of the fibers and image of the restriction map (see Proposition \ref{prop-bij-EF'}). In \S \ref{ssec-icosa-gp} a complement to Proposition \ref{prop-bij-EF'}, namely Proposition \ref{prop-A5-EF}, is given. More specifically, Proposition \ref{prop-A5-EF} deals with the case of field extensions with Galois group isomorphic to $\widetilde{A}_5$. Propositions \ref{prop-bij-EF'} and \ref{prop-A5-EF} are meant as motivation for conjectures \ref{conj-1} and \ref{conj-2} below, respectively; these are the conjectures that appear in the statements of our first two main theorems. Finally, in \S \ref{ssec-artin-conj} we prove lemmas on restriction of $L$-parameters along subfields of an $\widetilde{A}_5$-extension that motivate conjectures \ref{conj-32} and \ref{conj-33}, the conjectures assumed in Theorem \ref{main-thm-3}. \subsection{Parameters and base change} \label{ssec-param-bc} In this subsection we recall the base change map (or restriction map) on $L$-parameters which conjecturally corresponds to base change of automorphic representations of $\mathrm{GL}_n$. Let $E/F$ be an extension of number fields, let $W_E$ (resp.~$W_F$) denote the Weil group of $E$ (resp.~$F$) and let $$ W_E':=W_E \times \mathrm{SU}(2) $$ (resp.~$W_F':=W_F \times \mathrm{SU}(2)$) denote the Weil-Deligne group of $E$ (resp.~$F$), where $\mathrm{SU}(2)$ is the real compact Lie group of unitary matrices of determinant one\footnote{There are competing definitions of representations of the Weil-Deligne group that are all equivalent, see \cite[\S 2.1]{GR}. To pass from $\mathrm{SL}_2(\mathbb{C})$ to $\mathrm{SU}(2)$ one uses the unitary trick.}.
We will be using the notion of an $L$-parameter $$ \varphi:W_F' \longrightarrow{}^L\mathrm{GL}_{nF}=W_{F}' \times \mathrm{GL}_n(\mathbb{C}) $$ extensively\footnote{$L$-parameters are defined in \cite[\S 8]{Borel}, where they are called ``admissible homomorphisms.''}. Part of the definition of an $L$-parameter is the stipulation that the induced map $$ W_{F}' \longrightarrow W_F' $$ given by projection onto the first factor of ${}^L\mathrm{GL}_{nF}=W_{F}' \times \mathrm{GL}_n(\mathbb{C})$ is the identity. Thus $\varphi$ is determined by the representation $W_{F}' \to\mathrm{GL}_n(\mathbb{C})$ defined by projection onto the second factor of ${}^L\mathrm{GL}_{nF}$: \begin{align} \label{proj-second} \begin{CD} W_F' @>{\varphi}>> {}^L\mathrm{GL}_{nF}@>>> {}^L\mathrm{GL}_{nF}^{\circ}=\mathrm{GL}_n(\mathbb{C}). \end{CD} \end{align} Thus one can safely think of $L$-parameters as representations $W_{F}' \to \mathrm{GL}_n(\mathbb{C})$ satisfying certain additional properties. We say that an $L$-parameter $\phi:W_F' \to {}^L\mathrm{GL}_{n}$ is irreducible if the representation \eqref{proj-second} is irreducible. For convenience, we denote by \begin{align} \label{L-param} \Phi_n(F):&=\{\textrm{Equivalence classes of }L\textrm{-parameters }\varphi:W_{F}' \to {}^L\mathrm{GL}_{nF}\}\\ \nonumber \Phi_n^0(F):&=\{\textrm{Equivalence classes of irreducible }L\textrm{-parameters }\varphi:W_{F}' \to {}^L\mathrm{GL}_{nF}\}. \end{align} If $E/F$ is Galois then there is a natural action of $\mathrm{Gal}(E/F)$ on the set of $L$-parameters from $W_E'$ given by $$ \phi^{\sigma}(g)=\phi(\sigma^{-1}g\sigma). $$ This induces an action of $\mathrm{Gal}(E/F)$ on $\Phi_n(E)$ which preserves $\Phi_n^0(E)$; we denote the invariants under this action by $\Phi_n(E)^{\mathrm{Gal}(E/F)}$ (resp.~$\Phi_n^0(E)^{\mathrm{Gal}(E/F)}$).
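As a minimal illustrative special case (added here only for orientation, and not needed in the sequel), when $n=1$ the action just defined is the familiar Galois action on characters:

```latex
% Illustrative special case (n = 1), included only for orientation.
% An L-parameter of W_E' amounts to a quasi-character
% \chi: W_E' -> \mathbb{C}^{\times}, and for \sigma \in Gal(E/F)
% (lifted to W_F') the action defined above reads
\chi^{\sigma}(g)=\chi(\sigma^{-1}g\sigma), \qquad g \in W_E'.
% Under the reciprocity map of class field theory this is the usual
% Galois action on idele class characters of E, so
% \Phi_1(E)^{\mathrm{Gal}(E/F)} is the set of Galois-invariant
% quasi-characters of E^{\times} \backslash \mathbb{A}_E^{\times}.
```

This is the $n=1$ shadow of the diagram \eqref{nice-diag} used in the proof of Lemma \ref{lem-image-bc} below.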
As noted above in \S \ref{ssec-bcformal}, we have a base change $L$-map $$ b_{E/F}:{}^L\mathrm{GL}_{nF} \longrightarrow {}^L\mathrm{R}_{E/F}\mathrm{GL}_{nF} $$ given by the diagonal embedding on connected components and the identity on the $W_F'$-factors. For each $L$-parameter $\varphi:W_F' \to {}^L\mathrm{GL}_{nF}$ the composition $b_{E/F} \circ \varphi :W_F' \to {}^L\mathrm{R}_{E/F}\mathrm{GL}_{nF}$ is another $L$-parameter (compare \cite[\S 15.3]{Borel}). One can view this construction in an equivalent manner as follows: an $L$-parameter $\phi:W_F' \to {}^L\mathrm{R}_{E/F}\mathrm{GL}_{nF}$ can be identified canonically with an $L$-parameter $\phi:W_E' \to {}^L\mathrm{GL}_{nE}$. From this viewpoint, the base change map simply associates to a parameter $\varphi:W_F' \to {}^L\mathrm{GL}_{nF}$ its restriction $b_{E/F}\circ \varphi:=\varphi|_{W_E'}$ (compare \cite[\S 15.3]{Borel}). Thus $b_{E/F}$ induces a map \begin{align} \label{L-restr} b_{E/F}:\Phi_n^0(F) &\longrightarrow \Phi_n(E)\\ \varphi &\longmapsto \varphi|_{W_E'} \nonumber \end{align} which has image in $\Phi_n(E)^{\mathrm{Gal}(E/F)}$ if $E/F$ is Galois. According to Langlands functoriality, there should be a corresponding transfer of $L$-packets of automorphic representations. In fact, since $L$-packets are singletons in the case at hand, we should obtain an honest map from the set of equivalence classes of automorphic representations of $\mathrm{GL}_n(\mathbb{A}_F)$ to the set of equivalence classes of automorphic representations of $\mathrm{R}_{E/F}\mathrm{GL}_n(\mathbb{A}_F)=\mathrm{GL}_n(\mathbb{A}_E)$. Thus we should expect a map \begin{align*} b_{E/F}:\Pi_n(F) &\stackrel{?}{\longrightarrow} \Pi_n(E)\\ \pi &\stackrel{?}{\longmapsto} \pi_E \end{align*} which has image in $\Pi_n(E)^{\mathrm{Gal}(E/F)}$ if $E/F$ is Galois. Moreover, this map should share certain properties of \eqref{L-restr}.
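For orientation, we add a sketch of the one setting where this expectation is a theorem, namely the cyclic case of \cite{AC} (this paragraph is illustrative and uses nothing beyond the results cited below):

```latex
% Sketch of the classical cyclic case (Arthur--Clozel), added for
% orientation: suppose E/F is cyclic of prime degree, \pi and \pi'
% are cuspidal automorphic representations of GL_n(\mathbb{A}_F),
% and the base change \pi_E is cuspidal. Then
\pi_E \cong \pi'_E
\quad\Longleftrightarrow\quad
\pi' \cong \pi \otimes \chi
\ \textrm{ for some } \chi \in \mathrm{Gal}(E/F)^{\wedge},
% where characters of Gal(E/F) are viewed as characters of
% F^{\times} \backslash \mathbb{A}_F^{\times} via class field theory.
% Thus the fibers of b_{E/F} on such representations are exactly the
% orbits under twisting by \mathrm{Gal}(E/F)^{\wedge}.
```

This matches the fiber description on the parameter side that we establish in propositions \ref{prop-bij-EF} and \ref{prop-bij-EF'} below.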
Making this precise in general seems to require the introduction of the conjectural Langlands group. Rather than take this route, we will simply prove properties of the restriction map on $L$-parameters (specifically propositions \ref{prop-bij-EF'} and \ref{prop-A5-EF} and lemmas \ref{lem-A5-EF} and \ref{lem-A5-EF3}) and then state the specific conjectures in automorphic representation theory (specifically conjectures \ref{conj-1}, \ref{conj-2}, \ref{conj-32} and \ref{conj-33}) that they suggest. \subsection{Descent of parameters} \label{ssec-desc} Our goal in this subsection is to prove the following lemma: \begin{lem} \label{lem-bc-param} Let $E/F$ be a Galois extension of number fields and let $\phi:W_E' \to {}^L\mathrm{GL}_{nE}$ be an $L$-parameter. If $\phi$ is irreducible and $\phi^{\sigma} \cong \phi$ for all $\sigma \in \mathrm{Gal}(E/F)$, then there is an $L$-parameter $$ \varphi:W_F' \longrightarrow {}^L\mathrm{GL}_{nF} $$ such that $(b_{E/F} \circ \varphi) \otimes \chi=\phi$, where $\chi:W_E' \longrightarrow {}^L\mathrm{GL}_{1E}$ is a quasi-character invariant under $\mathrm{Gal}(E/F)$. If $H^2(\mathrm{Gal}(E/F),\mathbb{C}^{\times})=0$, then $\chi$ can be taken to be trivial. \end{lem} Before we begin the proof we set a little notation. Let $\varphi$ and $\phi$ be $L$-parameters as above. We let \begin{align} \label{naughts} \varphi_0:W_F' &\longrightarrow ({}^L\mathrm{GL}_{nF})^{\circ}=\mathrm{GL}_n(\mathbb{C})\\ \nonumber \phi_0:W_E' &\longrightarrow ({}^L\mathrm{GL}_{nE})^{\circ}=\mathrm{GL}_n(\mathbb{C}) \end{align} be the homomorphisms defined by composing $\varphi$ (resp.~$\phi$) with the projection ${}^L\mathrm{GL}_{nF} \to ({}^L\mathrm{GL}_{nF})^{\circ}$ (resp.~${}^L\mathrm{GL}_{nE} \to ({}^L\mathrm{GL}_{nE})^{\circ}$).
\begin{proof} By assumption, for every $\sigma \in \mathrm{Gal}(E/F)$ we are given a $c(\sigma) \in \mathrm{GL}_n(\mathbb{C})$ such that $$ \phi_0(\sigma \zeta \sigma^{-1})=c(\sigma)\phi_0(\zeta)c(\sigma)^{-1}. $$ Since $\phi_0$ is irreducible, Schur's lemma implies that $c(\sigma)c(\tau)=\lambda(\sigma,\tau)c(\sigma\tau)$ for some $\lambda(\sigma, \tau) \in \mathbb{C}^{\times}$. In other words, the projective representation $$ P\phi_0:W_E' \longrightarrow ({}^L\mathrm{GL}_{nE})^{\circ} \longrightarrow \mathrm{PGL}_n(\mathbb{C}) $$ obtained by composing $\phi_0$ with the canonical projection can be extended to a (continuous) projective representation $$ \psi:W_F'\longrightarrow \mathrm{PGL}_n(\mathbb{C}). $$ This extension has the property that $\psi(w)$ is semisimple for all $w \in W_F$. By \cite[\S 8]{Rajan2}, there is an $L$-parameter $$ \varphi:W_F' \longrightarrow {}^L\mathrm{GL}_{nF} $$ such that $\varphi$ is a lift of $\psi$. Let $P(b_{E/F}(\varphi)_0)$ denote the composite of $b_{E/F}(\varphi)_0=\varphi_0|_{W_E'}$ with the projection ${}^L\mathrm{GL}_{nE} \to \mathrm{PGL}_n(\mathbb{C})$. We have that $$ P(b_{E/F}(\varphi)_0) \cong P \phi_0. $$ It follows that $b_{E/F}(\varphi)_0 \cong \phi_0 \otimes \chi$ for some character $\chi:W_E' \to \mathbb{C}^{\times}$ invariant under $\mathrm{Gal}(E/F)$. To complete the proof of the lemma, we need to show that if $H^2(\mathrm{Gal}(E/F),\mathbb{C}^{\times})=0$, then any character $\chi:W_E' \to \mathbb{C}^{\times}=({}^L\mathrm{GL}_{1E})^{\circ}$ that is invariant under $\mathrm{Gal}(E/F)$ is the restriction of a character of $W_F'$.
Viewing $\mathbb{C}^{\times}$ as a trivial $W_F'$- and $\mathrm{Gal}(\overline{F}/F)$-module, we have an inflation-restriction exact sequence \cite[\S 3]{Rajan2} \begin{align} \label{inf-res} \begin{CD} H^1(W_F',\mathbb{C}^{\times}) @>{\mathrm{res}}>> H^1(W_E',\mathbb{C}^{\times})^{W_F'/W_E'} @>>> H^2(W_F'/W_E',\mathbb{C}^{\times}) \end{CD} \end{align} coming from the Hochschild-Serre spectral sequence. Here $H$ denotes the Moore cohomology groups. We note that for $i \geq 1$ and $G$ discrete, the Moore cohomology group $H^i(G,M)$ is equal to the usual continuous group cohomology group \cite[\S 3]{Rajan2}. Since $W_F'/W_E' \cong \mathrm{Gal}(E/F)$, the third group in \eqref{inf-res} vanishes by hypothesis, so the restriction map surjects onto the $\mathrm{Gal}(E/F)$-invariant characters; this completes the proof of the lemma. \end{proof} One would like to construct functorial transfers of automorphic representations corresponding to the base change map on $L$-parameters recalled above. The $n=1$ case is trivial, as we now explain: given a quasi-character $$ \mu:\mathrm{GL}_1(\mathbb{A}_F) \longrightarrow \mathbb{C}^{\times} $$ trivial on $\mathrm{GL}_1(F)$, its base change $b_{E/F}(\mu)$ is given by $$ b_{E/F}(\mu):=\mu \circ \mathrm{N}_{E/F}:\mathrm{GL}_1(\mathbb{A}_E) \longrightarrow \mathbb{C}^{\times} $$ where $\mathrm{N}_{E/F}$ is the norm map. We have the following lemma characterizing the image of the base change: \begin{lem} \label{lem-image-bc} Suppose that $E/F$ is Galois and $H^2(\mathrm{Gal}(E/F),\mathbb{C}^{\times})=0$. If $\eta:\mathrm{GL}_1(\mathbb{A}_E) \to \mathbb{C}^{\times}$ is a quasi-character trivial on $\mathrm{GL}_1(E)$ satisfying $\eta^{\sigma} =\eta$ for all $\sigma \in \mathrm{Gal}(E/F)$ then $\eta=\chi \circ \mathrm{N}_{E/F}$ for some quasi-character $\chi:\mathrm{GL}_1(\mathbb{A}_F) \to \mathbb{C}^{\times}$ trivial on $\mathrm{GL}_1(F)$.
\end{lem} \begin{proof} We have a commutative diagram \begin{align} \label{nice-diag} \begin{CD} E^{\times} \backslash \mathbb{A}_E^{\times} @>{r_E}>> W_E'/W'^c_E\\ @V{\mathrm{N}_{E/F}}VV @VVV\\ F^{\times} \backslash \mathbb{A}_F^{\times} @>{r_F}>> W_F'/W'^c_F \end{CD} \end{align} where $(\cdot)^c$ denotes the closure of the commutator subgroup of $(\cdot)$ and the right homomorphism is induced by the inclusion $W_E' \leq W_F'$ \cite[(1.2.2)]{Tate}. As we proved in Lemma \ref{lem-bc-param}, any quasi-character of $W_E'/W'^{c}_E$ that is invariant under $\mathrm{Gal}(E/F)$ is the restriction of a quasi-character of $W_F'/W'^{c}_F$. Translating this to the left hand side of \eqref{nice-diag}, this implies that any quasi-character of $\mathbb{A}_E^{\times}$ trivial on $E^{\times}$ that is invariant under $\mathrm{Gal}(E/F)$ is the composition of a quasi-character of $\mathbb{A}_F^{\times}$ trivial on $F^{\times}$ with the norm map. \end{proof} \subsection{Primitive parameters and automorphic representations} \label{ssec-primitive} Let $F$ be a number field. It is convenient to introduce the following definition: \begin{defn} \label{defn-induced-L} An $L$-parameter $\varphi:W_F' \to {}^L\mathrm{GL}_{nF}$ is \textbf{$K$-induced} if there is a nontrivial field extension $K/F$ of finite degree and an irreducible $L$-parameter $\phi:W_K' \to {}^L\mathrm{GL}_{mK}$ (so that necessarily $m[K:F]=n$) such that $\varphi \cong \mathrm{Ind}_{W_K'}^{W_F'}\phi$. If $E/F$ is a nontrivial field extension, then an \textbf{$E$-primitive} $L$-parameter $\varphi$ is an irreducible $L$-parameter such that $\varphi$ is not $K$-induced for any subfield $E \geq K > F$. \end{defn} We denote by \begin{align} \Phi^{\mathrm{prim}}_n(E/F):=\{ \textrm{Equiv.~classes of $E$-primitive }L\textrm{-parameters }\varphi:W_F' \to {}^L\mathrm{GL}_{nF}\}. \end{align} Let $k$ be a global or local field and let $K/k$ be an \'etale $k$-algebra.
Let $\overline{k}$ be a choice of algebraic closure of $k$. Write $K=\oplus_iK_i$ where the $K_i$ are finite extension fields of $k$. Let $$ \mathrm{Ind}_{K}^k:{}^L\mathrm{R}_{K/k}\mathrm{GL}_{nk} \to {}^L\mathrm{GL}_{n[K:k]k} $$ be the $L$-map that embeds $({}^L\mathrm{R}_{K/k}\mathrm{GL}_{nk})^{\circ}=\times_i\mathrm{GL}_n(\mathbb{C})^{\mathrm{Hom}_k(K_i,\overline{k})}$ diagonally and sends $W_{k}'$ to the appropriate group of permutation matrices\footnote{$L$-maps are defined in \cite[\S 15.1]{Borel}.}. We recall that $L$-parameters $\phi:W_K' \to {}^L\mathrm{GL}_{nK}$ can be identified canonically with $L$-parameters $\phi:W_k' \to {}^L\mathrm{Res}_{K/k}\mathrm{GL}_{nk}$ \cite[Proposition 8.4]{Borel}; under this identification $\mathrm{Ind}_K^k(\phi)=\oplus_i\mathrm{Ind}_{W'_{K_i}}^{W_k'}(\phi_i)$, where $\phi_i$ is the factor of $\phi$ attached to $K_i$. Using the local Langlands correspondence, for any irreducible admissible representation $\Pi_v$ of $\mathrm{GL}_n(E \otimes_FF_v)$ we can associate an irreducible admissible representation $\pi_v$ of $\mathrm{GL}_{n[E:F]}(F_v)$ by stipulating that if $\phi:W_{F_v}' \to {}^L\mathrm{R}_{E/F}\mathrm{GL}_n$ is the $L$-parameter attached to $\Pi_v$ then $\mathrm{Ind}_E^F \circ \phi$ is the $L$-parameter attached to $\pi_v$. If this is the case then we write $$ \pi_v \cong \mathrm{Ind}_E^F(\Pi_v). $$ \begin{defn} An automorphic representation $\pi$ of $\mathrm{GL}_{n}(\mathbb{A}_F)$ is \textbf{$K$-automorphically induced} if there is a nontrivial finite extension field $K/F$ and an automorphic representation $\Pi$ of $\mathrm{GL}_m(\mathbb{A}_K)=\mathrm{Res}_{K/F}\mathrm{GL}_m(\mathbb{A}_F)$, where $m[K:F]=n$, such that $\pi_v \cong \mathrm{Ind}_K^F(\Pi_v)$ for almost all places $v$ of $F$.
If $E/F$ is a nontrivial field extension then an \textbf{$E$-primitive} automorphic representation of $\mathrm{GL}_n(\mathbb{A}_F)$ is a cuspidal automorphic representation of $\mathrm{GL}_{n}(\mathbb{A}_F)$ that is not $K$-automorphically induced for any subfield $E \geq K >F$. \end{defn} For field extensions $E/F$ let \begin{align} \Pi^{\mathrm{prim}}_n(E/F):=\{\textrm{isom.~classes of $E$-primitive automorphic reps.~of }\mathrm{GL}_{n}(\mathbb{A}_F)\}. \end{align} \subsection{Restriction of parameters} \label{ssec-rest-param} In \S \ref{ssec-desc} we discussed descent of parameters along a Galois extension $E/F$; the main result was that if $H^2(\mathrm{Gal}(E/F),\mathbb{C}^{\times})=0$ then $\mathrm{Gal}(E/F)$-invariant irreducible parameters descend. In this subsection we explore certain converse statements involving restrictions of parameters. The main result is Proposition \ref{prop-bij-EF'}. The statement parallel to Proposition \ref{prop-bij-EF'} in the context of automorphic representations is Conjecture \ref{conj-1}, the conjecture that appeared in the statement of Theorem \ref{main-thm-1}. Let $E \geq K \geq F$ be a subfield. For the remainder of this section, to ease notation we will often write $K$ where more properly we should write $W_K'$, e.g. $$ \varphi|_{K}:=\varphi|_{W_K'}. $$ We begin with the following lemma: \begin{lem} \label{lem-restriction} Let $E/F$ be a Galois extension of number fields such that $H^2(\mathrm{Gal}(E/F),\mathbb{C}^{\times})=0$. Let $\varphi:W_F' \to {}^L\mathrm{GL}_{nF}$ be an irreducible $L$-parameter.
Either there is a subfield $E \geq K > F$ and an irreducible $L$-parameter $\phi:W_K' \to {}^L\mathrm{GL}_{mK}$ such that $\varphi \cong \mathrm{Ind}_{K}^{F}\phi$ or there is an $L$-parameter $\varphi_1:W_F' \to {}^L\mathrm{GL}_{mF}$ with $\varphi_1|_{E}$ irreducible and a finite-dimensional irreducible representation $\rho:\mathrm{Gal}(E/F) \to \mathrm{GL}_{d}(\mathbb{C})$ such that $$ \varphi \cong \rho \otimes \varphi_1. $$ Here we view $\rho$ as an $L$-parameter via the quotient map $W_{F}' \to W_{F}'/W_E'=\mathrm{Gal}(E/F)$. \end{lem} We note in particular that in the notation of the lemma one has $m[K:F]=n$ in the former case and $md=n$ in the latter. The extreme cases $m=1$ and $d=1$ occur. For our use in the proof of this lemma and later, we record the following: \begin{lem} \label{lem-basic} Let $H \leq G$ be groups with $H$ normal in $G$ and $[G:H]< \infty$. Moreover let $\varphi:G \to \mathrm{Aut}(V)$ be a finite-dimensional complex representation that is irreducible upon restriction to $H$. Then $$ \mathrm{Ind}_{H}^G(1) \otimes \varphi \cong \mathrm{Ind}_{H}^G(\varphi|_H) $$ and $\rho \otimes \varphi$ is irreducible for any irreducible representation $\rho$ of $G/H$. \end{lem} \begin{proof} The first statement is \cite[\S 3.3, Example 5]{SerreFG}. As a representation of $G$, one has $\mathrm{Ind}_H^G(1) \cong \oplus_{i=1}^n \rho_i^{\oplus \mathrm{deg}(\rho_i)}$, where the sum is over a set of representatives $\rho_1,\dots,\rho_n$ for the irreducible representations of $G/H$. Thus to prove the second statement of the lemma it suffices to show that $$ \mathrm{dim}_{\mathbb{C}}\mathrm{Hom}_{G}(\mathrm{Ind}_{H}^G(1) \otimes \varphi,\mathrm{Ind}_{H}^G(1) \otimes \varphi)=\sum_{i}\mathrm{deg}(\rho_i)^2.
$$ By the first assertion of the lemma and Frobenius reciprocity we have \begin{align*} \mathrm{Hom}_{G}(\mathrm{Ind}_{H}^G(1) \otimes \varphi,\mathrm{Ind}_{H}^G(1) \otimes \varphi) &\cong\mathrm{Hom}_{G}(\mathrm{Ind}_{H}^G(1) \otimes \varphi,\mathrm{Ind}_{H}^G(\varphi|_H))\\&\cong\mathrm{Hom}_{H}(\oplus_{i=1}^n \varphi|_H^{\oplus\mathrm{deg}(\rho_i)^2},\varphi|_H) \end{align*} which has dimension $\sum_{i=1}^n \mathrm{deg}(\rho_i)^2$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem-restriction}] Assume that there does not exist a subfield $E \geq K > F$ and an irreducible $L$-parameter $\phi:W_K' \to {}^L\mathrm{GL}_{mK}$ such that $\varphi \cong \mathrm{Ind}_{K}^{F}(\phi)$. Then, by \cite[\S 8.1, Proposition 24]{SerreFG} the restriction $\varphi|_{E}$ is isomorphic to a direct sum of some number of copies of a fixed irreducible $L$-parameter $\phi_1:W_E' \to {}^L\mathrm{GL}_{mE}$. Since $\varphi|_E^{\tau} \cong \varphi|_E$ for $\tau \in W_F'$ (trivially), it follows in particular that $\phi_1$ is $\mathrm{Gal}(E/F)$-invariant and therefore descends to a parameter $\varphi_1:W_F' \to {}^L\mathrm{GL}_{mF}$ by Lemma \ref{lem-bc-param}. By Lemma \ref{lem-basic} one has $$ \mathrm{Ind}_{E}^{F}(1) \otimes \varphi_1 \cong \mathrm{Ind}_{E}^{F}(\phi_1). $$ Applying Frobenius reciprocity we see that $$ 0 \neq \mathrm{Hom}_{E}(\varphi|_{E},\phi_1)=\mathrm{Hom}_{F}(\varphi,\mathrm{Ind}_{E}^{F} (\phi_1))=\mathrm{Hom}_{F}(\varphi,\mathrm{Ind}_{E}^{F}(1) \otimes \varphi_1) $$ which, in view of Lemma \ref{lem-basic}, completes the proof of the lemma. \end{proof} As an example, we have the following corollary: \begin{cor} \label{cor-restriction} Under the assumptions of Lemma \ref{lem-restriction}, if $\mathrm{Gal}(E/F)$ is the universal perfect central extension of a finite simple nonabelian group $G$, $n=2$ and $\varphi|_{W'_E}$ is reducible, then $G=A_5$.
\end{cor} \begin{proof} By Lemma \ref{lem-restriction}, if $\varphi|_{E}$ is reducible, then either there is a quadratic extension $K/F$ contained in $E$ such that $\varphi \cong \mathrm{Ind}_{K}^{F}\varphi_1$ for some parameter $\varphi_1:W_K' \to {}^L\mathrm{GL}_{1K}$ or one has a nontrivial representation $\mathrm{Gal}(E/F) \to \mathrm{GL}_2(\mathbb{C})$ (there are no nontrivial one-dimensional representations of $\mathrm{Gal}(E/F)$ since $\mathrm{Gal}(E/F)$ is perfect). In the former case the extension $K/F$ would correspond to an index $2$ subgroup $H \leq \mathrm{Gal}(E/F)$, which would a fortiori be normal. Thus we would have $\mathrm{Gal}(E/F)/H \cong \mathbb{Z}/2$, contradicting the assumption that $\mathrm{Gal}(E/F)$ is perfect. Hence we must be in the latter case. The nontrivial representation $\mathrm{Gal}(E/F) \to \mathrm{GL}_2(\mathbb{C})$ induces a nontrivial projective representation $G \to \mathrm{PGL}_2(\mathbb{C})$ since $\mathrm{Gal}(E/F)$ is perfect. By a well-known theorem of Klein, if $G$ is a finite simple nonabelian group and $G \to \mathrm{PGL}_2(\mathbb{C})$ is a nontrivial projective representation, then $G\cong A_5$. \end{proof} In view of Lemma \ref{lem-restriction}, for each $n$ there are two natural cases to consider, namely the case where there is a nontrivial irreducible representation $\mathrm{Gal}(E/F) \to \mathrm{GL}_m(\mathbb{C})$ for some $m|n$ and the case where there is no such representation for any $m|n$. We will deal with the former case under the additional assumption that $n=2$ in \S \ref{ssec-icosa-gp} below. In the latter case one obtains a complete description of the fibers and image of base change on primitive parameters as follows: \begin{prop} \label{prop-bij-EF} Let $E/F$ be a Galois extension of number fields such that $\mathrm{Gal}(E/F)$ is the universal perfect central extension of a finite simple nonabelian group.
\begin{enumerate} \item If $\varphi_1,\varphi_2 :W_{F}' \to {}^L\mathrm{GL}_{nF}$ are $L$-parameters such that $\varphi_1|_{E}$ and $\varphi_2|_{E}$ are irreducible and isomorphic, then $\varphi_1 \cong \varphi_2$. \item Assume that for all divisors $m|n$ there are no nontrivial irreducible representations $\mathrm{Gal}(E/F) \to \mathrm{GL}_{m}(\mathbb{C})$. Under this assumption, restriction of parameters induces a bijection \begin{align*} b_{E/F}:\Phi_n^{\mathrm{prim}}(E/F) &\tilde{\longrightarrow} \Phi_n^0(E)^{\mathrm{Gal}(E/F)}\\ \varphi &\longmapsto \varphi|_{E}. \end{align*} \end{enumerate} \end{prop} \begin{proof} We first check (1). Suppose that $\varphi_1,\varphi_2:W_{F}' \to {}^L\mathrm{GL}_{nF}$ are two irreducible parameters with isomorphic irreducible restrictions to $W_{E}'$. Then by Frobenius reciprocity and Lemma \ref{lem-basic} we have \begin{align*} 0 \neq \mathrm{Hom}_E(\varphi_1|_E,\varphi_2|_E)&=\mathrm{Hom}_F(\mathrm{Ind}_{E}^F(\varphi_1|_E),\varphi_2)\\&=\oplus_i \mathrm{Hom}_F(\rho_i \otimes \varphi_1,\varphi_2)^{\oplus \mathrm{deg}(\rho_i)} \end{align*} where the sum is over a set of representatives for the irreducible representations of $\mathrm{Gal}(E/F)$. By Lemma \ref{lem-basic}, $\rho_i \otimes \varphi_1$ is irreducible for all $i$, so by considering degrees we must have $\rho_i \otimes \varphi_1 \cong \varphi_2$ where $\rho_i$ is an abelian character of $\mathrm{Gal}(E/F)$. Since $\mathrm{Gal}(E/F)$ is perfect, this $\rho_i$ is necessarily trivial. Moving on to (2), we note that the restriction map from $L$-parameters of $W_F'$ to $L$-parameters of $W_E'$ obviously has image in the set of $\mathrm{Gal}(E/F)$-invariant parameters, and under the additional assumption in (2) it has image in the set of irreducible parameters by Lemma \ref{lem-restriction}. In other words, we have a well-defined map $$ b_{E/F}:\Phi_n^{\mathrm{prim}}(E/F) \longrightarrow \Phi_n^{0}(E)^{\mathrm{Gal}(E/F)}.
$$ It is injective by (1) and surjective by Lemma \ref{lem-bc-param}, which completes the proof of the proposition. \end{proof} To set up trace identities it is convenient to work with automorphic representations attached to a subfield $F' \leq E$. In view of this we prove the following modification of Proposition \ref{prop-bij-EF}: \begin{prop} \label{prop-bij-EF'} Let $E/F$ be a Galois extension of number fields such that $\mathrm{Gal}(E/F)$ is the universal perfect central extension of a finite simple nonabelian group. Assume that for all $m|n$ there are no nontrivial irreducible representations $\mathrm{Gal}(E/F) \to \mathrm{GL}_{m}(\mathbb{C})$. If $E \geq F' \geq F$ is a subfield then the restriction map induces an injection \begin{align} \label{restr-map} b_{F'/F}:\Phi_n^{\mathrm{prim}}(E/F) & \longrightarrow \Phi_n^{\mathrm{prim}}(E/F') \\ \varphi &\longmapsto \varphi|_{{F'}}. \nonumber \end{align} If $\phi':W_{F'}' \to {}^L\mathrm{GL}_{nF'}$ is an $L$-parameter such that $\phi'|_{W_E'}$ is irreducible and $\mathrm{Gal}(E/F)$-invariant then there is a unique character $\chi' \in \mathrm{Gal}(E/F')^{\wedge}$ such that $\phi' \otimes \chi'$ is in the image of the restriction map \eqref{restr-map}. If $\mathrm{Gal}(E/F')$ is solvable of order coprime to $n$ then for any irreducible $L$-parameter $\phi':W_{F'}' \to {}^L\mathrm{GL}_{nF'}$ the restriction $\phi'|_{W_E'}$ is again irreducible. \end{prop} \begin{proof} Note that $\varphi|_{E}=(\varphi|_{F'})|_{E}$. Thus part (2) of Proposition \ref{prop-bij-EF} implies that restriction of $L$-parameters from $W_F'$ to $W_{F'}'$ maps primitive $L$-parameters to primitive $L$-parameters, so \eqref{restr-map} is well-defined. Parts (1) and (2) of Proposition \ref{prop-bij-EF} imply that \eqref{restr-map} is injective.
Now suppose that $\phi':W_{F'}' \to {}^L\mathrm{GL}_{nF'}$ is an $L$-parameter such that $\phi'|_{E}$ is irreducible and $\mathrm{Gal}(E/F)$-invariant. By Lemma \ref{lem-bc-param} the restriction $\phi'|_{{E}}$ descends to an irreducible parameter $\varphi:W_F' \to {}^L\mathrm{GL}_{nF}$. By Frobenius reciprocity and Lemma \ref{lem-basic} we have \begin{align} \mathrm{Hom}_{E}(\varphi|_{E},\phi'|_{E})= \mathrm{Hom}_{{F'}}(\mathrm{Ind}_{E}^{F'}(\varphi|_{E}),\phi') =\oplus_{i}\mathrm{Hom}_{{F'}}(\rho_i \otimes \varphi|_{{F'}},\phi')^{\oplus\mathrm{deg}(\rho_i)} \end{align} where the sum is over a set of representatives for the irreducible representations of $\mathrm{Gal}(E/F')$. The first space is one dimensional and hence so is the last. By Lemma \ref{lem-basic} $\rho_i \otimes \varphi|_{F'}$ is irreducible for all $i$, so by considering dimensions we see that $\rho_i \otimes \varphi|_{F'} \cong \phi'$ for some character $\rho_i$ of $\mathrm{Gal}(E/F')$. This proves the second claim of the proposition. We are left with the final assertion of the proposition. Since $\mathrm{Gal}(E/F')$ is solvable there is a chain of subfields $F'=E_0 \leq \cdots \leq E_r=E$ such that each $E_j/E_{j-1}$ is cyclic of prime degree. Using this fact, the final assertion follows from Lemma \ref{lem-restriction}. \end{proof} Motivated by Proposition \ref{prop-bij-EF'}, we make the following conjecture, which is an elaboration of a case of Langlands functoriality: \begin{conj} \label{conj-1} Let $E/F$ be a Galois extension of number fields and let $n$ be an integer such that \begin{itemize} \item $\mathrm{Gal}(E/F)$ is the universal perfect central extension of a finite simple nonabelian group, and \item for every divisor $m|n$ there are no nontrivial irreducible representations $\mathrm{Gal}(E/F) \to \mathrm{GL}_{m}(\mathbb{C})$. \end{itemize} Let $E \geq F' \geq F$ be a subfield.
Every $E$-primitive automorphic representation $\pi$ of $\mathrm{GL}_n(\mathbb{A}_F)$ admits a unique base change $\pi_{F'}$ to $\mathrm{GL}_n(\mathbb{A}_{F'})$ and a unique base change to $\mathrm{GL}_n(\mathbb{A}_E)$, the first of which is an $E$-primitive automorphic representation. Thus base change induces an injection \begin{align*} b_{E/F'}:\Pi_n^{\mathrm{prim}}(E/F) &\longrightarrow \Pi_n^{\mathrm{prim}}(E/F')\\ \pi &\longmapsto \pi_{F'}. \end{align*} If $\pi'$ is a cuspidal automorphic representation of $\mathrm{GL}_n(\mathbb{A}_{F'})$ such that its base change $\pi'_E$ to $\mathrm{GL}_n(\mathbb{A}_E)$ is cuspidal and $\mathrm{Gal}(E/F)$-invariant then $\pi'_E$ descends to an automorphic representation of $\mathrm{GL}_n(\mathbb{A}_F)$. \end{conj} We also require a conjecture which can be addressed using endoscopic techniques; it is discussed at length in \cite{Rajan3}, and is a theorem when $n=2$ \cite[Theorems 1 and 2]{Rajan3} or when $\mathrm{Gal}(E/F')$ is cyclic \cite[Chapter 3, Theorems 4.2 and 5.1]{AC}: \begin{conj} \label{conj-solv} Let $E/F'$ be a solvable Galois extension of number fields and let $\Pi$ be a cuspidal automorphic representation of $\mathrm{GL}_n(\mathbb{A}_E)$. If $\Pi$ is $\mathrm{Gal}(E/F')$-invariant, then there is a $\mathrm{Gal}(E/F')$-invariant character $\chi \in (E^{\times} \backslash \mathbb{A}_E^{\times})^{\wedge}$ such that $\Pi \otimes \chi$ descends to $\mathrm{GL}_n(\mathbb{A}_{F'})$. If $H^2(\mathrm{Gal}(E/F'),\mathbb{C}^{\times})=0$, then $\chi$ can be taken to be the trivial character. Conversely, if $\pi'_1$, $\pi_2'$ are cuspidal automorphic representations of $\mathrm{GL}_n(\mathbb{A}_{F'})$ that both base change to a cuspidal automorphic representation $\Pi$ of $\mathrm{GL}_n(\mathbb{A}_E)$, then there is a unique $\chi \in \mathrm{Gal}(E/F')^{\wedge}$ such that $\pi_1' \cong \pi_2' \otimes \chi$.
\end{conj} \subsection{The icosahedral group} \label{ssec-icosa-gp} Assume that $n=2$ and that $$ \mathrm{Gal}(E/F) \cong \mathrm{SL}_2(\mathbb{Z}/5) \cong \widetilde{A}_5, $$ the universal perfect central extension of $A_5$. We fix such an isomorphism for the remainder of this section and view it as an identification: $\mathrm{Gal}(E/F)=\widetilde{A}_5$. In this subsection we describe the image and fibers of the base change map on $L$-parameters in this setting. This description is used as motivation for Conjecture \ref{conj-2} below, the conjecture used in the statement of Theorem \ref{main-thm-2} above. As remarked below Theorem \ref{main-thm-1}, if $n=2$ the case where $\mathrm{Gal}(E/F)$ is the universal perfect central extension of $A_5$ is the only case in which the hypotheses of Proposition \ref{prop-bij-EF} do not hold. Moreover, the conclusion of Proposition \ref{prop-bij-EF} does not hold: indeed, any irreducible $2$-dimensional representation of $\mathrm{SL}_2(\mathbb{Z}/5)$ induces an irreducible $L$-parameter $\varphi:W_{F}' \to {}^L\mathrm{GL}_{2F}$ such that $\varphi|_{E}$ is the direct sum of two copies of the trivial representation. For our purposes it is more important to find an analogue of Proposition \ref{prop-bij-EF'}. The facts from group theory that we require in this subsection are collected in \S \ref{appendix}. Fix an injection $A_4 \hookrightarrow A_5$ and let $\widetilde{A}_4$ denote the inverse image of $A_4$ under the projection map $\widetilde{A}_5 \to A_5$; it is a double cover of $A_4$. \begin{lem} \label{lem-gen} Let $\tau \in \mathrm{Gal}(E/F)$ be of order $5$. Then $\langle \tau ,\widetilde{A}_4\rangle=\mathrm{Gal}(E/F)$. \end{lem} \begin{proof} By Lagrange's theorem, for any element $\tau \in \mathrm{Gal}(E/F)$ of order $5$ the group $\langle \tau,\widetilde{A}_4 \rangle$ has order divisible by both $5$ and $|\widetilde{A}_4|=24$, hence by $\mathrm{lcm}(5,24)=120=|\mathrm{Gal}(E/F)|$.
\end{proof} Our analogue of Proposition \ref{prop-bij-EF'} is the following proposition: \begin{prop} \label{prop-A5-EF} Assume that $F'=E^{\widetilde{A}_4}$ and $\tau \in \mathrm{Gal}(E/F)$ is of order $5$. In this case restriction of parameters induces a map \begin{align} \label{restr-map2} b_{F'/F}:\Phi_2^0(F) &\longrightarrow \Phi_2^0(F')\\ \varphi &\longmapsto \varphi|_{{F'}}. \nonumber \end{align} If $\phi':W_{F'}' \to {}^L\mathrm{GL}_{2F'}$ is an $L$-parameter such that $\phi'|_{E}$ is irreducible and $\mathrm{Gal}(E/F)$-invariant then $\phi' \otimes \chi'$ is in the image of the restriction map \eqref{restr-map2} for a unique $\chi' \in \mathrm{Gal}(E/F')^{\wedge}$. If $\phi'|_{E}$ is reducible and $\mathrm{Hom}_{E}(\phi'|_{E},\phi'|_{E}^{\tau}) \neq 0$ then $\phi'|_E$ is the restriction of a parameter $\phi:W_F' \to {}^L\mathrm{GL}_{2F}$. There are exactly two nonisomorphic irreducible $\phi_1,\phi_2 :W_{F}' \to {}^L\mathrm{GL}_{2F}$ such that $\phi|_{E} \cong \phi_{1}|_{E} \cong \phi_{2}|_{E}$. \end{prop} \begin{proof} We first check that an irreducible $L$-parameter $\varphi$ as above restricts to an irreducible $L$-parameter on $W_{F'}'$. Since $\mathrm{SL}_2(\mathbb{Z}/5)$ is perfect, there are no subgroups of $\mathrm{SL}_2(\mathbb{Z}/5)$ of index $2$; as $\varphi$ has degree $2$ it could only be induced along a quadratic subextension, so $\varphi$ is not induced. Hence by Lemma \ref{lem-restriction} $\varphi|_{E}$ is either irreducible or $$ \varphi \cong \chi \otimes \rho $$ for some character $\chi:W_F' \to {}^L\mathrm{GL}_{1F}$ and some irreducible representation $$ \rho:\mathrm{Gal}(E/F) \to \mathrm{GL}_2(\mathbb{C}). $$ In the former case $\varphi|_{F'}$ is also irreducible, and hence we are done. Suppose on the other hand that $\varphi \cong \chi \otimes \rho$. Notice that any irreducible two-dimensional representation of $\mathrm{Gal}(E/F)$ is necessarily faithful.
Indeed, the only proper nontrivial normal subgroup of $\mathrm{Gal}(E/F)$ is its center, and if such a representation were trivial on the center it would descend to a representation of $A_5$, a group that has no irreducible two-dimensional representations. Since $\mathrm{Gal}(E/F')$ is nonabelian, $\rho(\mathrm{Gal}(E/F'))$ is nonabelian and it follows that $\rho|_{{F'}}$ is irreducible and hence so is $\varphi|_{F'}$. The second statement of the proposition is proved by the same argument as the analogous statement in Proposition \ref{prop-bij-EF'}. For the last assertion assume that $\phi':W_{F'}' \to {}^L\mathrm{GL}_{2F'}$ is an $L$-parameter such that $\phi'|_{E}$ is reducible. It follows from Lemma \ref{lem-restriction} that there is an element $\sigma_0 \in \mathrm{Gal}(E/F')$ of order dividing $2$ and a character $\chi_0:W_E' \to {}^L\mathrm{GL}_{1E}$ such that $\phi'|_{E} \cong \chi_0 \oplus \chi_0^{\sigma_0}$. Since $\phi'|_E \cong \phi'|_E^{\sigma}$ for all $\sigma \in \mathrm{Gal}(E/F)$, the group $W_{F'}' / W_E'$ acts by conjugation on these two factors and this action defines a homomorphism $\mathrm{Gal}(E/F')\cong W_{F'}'/W_E' \to \mathbb{Z}/2$. Now $\widetilde{A}_4$ has no subgroup of index two, so the homomorphism $\widetilde{A}_4 \to \mathbb{Z}/2$ just considered is trivial and hence the action of $W_{F'}'/W_E'$ on the pair $\{\chi_0,\chi_0^{\sigma_0}\}$ is trivial. It follows in particular that $\chi_0 \cong \chi_0^{\sigma_0}$ and, moreover, that $\chi_0$ is isomorphic to all of its $\mathrm{Gal}(E/F')$-conjugates. If additionally $\mathrm{Hom}_{E}(\phi'|_{E},\phi'|_{E}^{\tau}) \neq 0$ then $\chi_0$ is fixed under $\mathrm{Gal}(E/F')$ and $\tau$, and hence it is isomorphic to all of its $\mathrm{Gal}(E/F)$-conjugates by Lemma \ref{lem-gen}. Thus $\chi_0$ descends to a character $\chi:W_F' \to {}^L\mathrm{GL}_{1F}$ by Lemma \ref{lem-bc-param}.
Let $\rho_2$ be a rank two irreducible representation of $\mathrm{Gal}(E/F)$ with character $\theta_2$ in the notation of \S \ref{appendix} and let $\langle \xi \rangle =\mathrm{Gal}(\mathbb{Q}(\sqrt{5})/\mathbb{Q})$. Then $$ \rho_2 \otimes \chi \quad \textrm{ and } \quad \xi \circ \rho_2 \otimes \chi $$ are two nonisomorphic $L$-parameters from $W_F'$ with restriction to $W_E'$ isomorphic to $\phi'|_E$. If $\phi:W_F' \to {}^L\mathrm{GL}_{2F}$ is any $L$-parameter with $\phi|_E \cong \phi'|_{E}$, then \begin{align} \label{1-frob} 2=\mathrm{dim}(\mathrm{Hom}_{E}(\chi|_{E},\phi|_{E}))= \mathrm{dim}(\mathrm{Hom}_{F}(\mathrm{Ind}_{E}^{F}(\chi|_E),\phi)). \end{align} Now by Lemma \ref{lem-basic} $$ \mathrm{Ind}_{E}^{F}(\chi|_{E})\cong \mathrm{Ind}_{E}^{F}(1)\otimes \chi. $$ This combined with \eqref{1-frob} implies that $$ \phi \cong \rho_2 \otimes \chi \textrm{ or }\phi \cong \xi \circ \rho_2 \otimes \chi. $$ \end{proof} Motivated by Proposition \ref{prop-A5-EF} and Proposition \ref{prop-bij-EF} we propose the following conjecture. It is the (conjectural) translation of Proposition \ref{prop-A5-EF} and part (1) of Proposition \ref{prop-bij-EF} into a statement on automorphic representations. \begin{conj} \label{conj-2} In the setting of Proposition \ref{prop-A5-EF} above, each cuspidal automorphic representation $\pi$ of $\mathrm{GL}_2(\mathbb{A}_F)$ admits a unique cuspidal base change to $\mathrm{GL}_2(\mathbb{A}_{F'})$ and a unique base change to an isobaric automorphic representation of $\mathrm{GL}_2(\mathbb{A}_E)$. If $\pi'$ is a cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_{F'})$ such that $\pi'_E$ is cuspidal and $\mathrm{Hom}_I(\pi'_E,\pi'^{\tau}_E) \neq 0$, then there is a unique cuspidal automorphic representation $\pi$ of $\mathrm{GL}_2(\mathbb{A}_F)$ that has $\pi'_E$ as a base change.
If $\pi'_E$ is not cuspidal and $\mathrm{Hom}_I(\pi'_E,\pi'^{\tau}_E) \neq 0$, then there are precisely two isomorphism classes of cuspidal automorphic representations of $\mathrm{GL}_2(\mathbb{A}_F)$ that base change to $\pi'_E$. \end{conj} \begin{rem} In understanding the analogy between Proposition \ref{prop-A5-EF} and Conjecture \ref{conj-2} it is helpful to recall that if $\pi'_E$ is cuspidal and $\mathrm{Hom}_I(\pi'_E,\pi'^{\tau}_E)\neq 0$, then $\pi'_E$ is isomorphic to all of its twists under elements of $\langle \mathrm{Gal}(E/F'),\tau\rangle=\mathrm{Gal}(E/F)$. \end{rem} \subsection{Motivating conjectures \ref{conj-32} and \ref{conj-33}} \label{ssec-artin-conj} In this section we prove some lemmas on restriction of $L$-parameters along subfields of an $\widetilde{A}_5$-extension and then state the conjectures (namely conjectures \ref{conj-32} and \ref{conj-33}) that are the translation of these statements to the context of automorphic representations. These conjectures are used in the statement of Theorem \ref{main-thm-3} above. As above, we identify $\mathrm{Gal}(E/F) =\widetilde{A}_5$. Fix an embedding $\mathbb{Z}/2 \times \mathbb{Z}/2 \hookrightarrow A_5$ and let $Q \hookrightarrow \widetilde{A}_5$ be its inverse image under the quotient map $\widetilde{A}_5 \to A_5$; it is isomorphic to the quaternion group. \begin{lem} \label{lem-A5-EF} Let $F'=E^Q$. For all quasi-characters $\chi_0:W_{E}' \to {}^L\mathrm{GL}_{1E}$ invariant under $\mathrm{Gal}(E/F')$ there is an irreducible parameter $\varphi':W_{F'}' \to {}^L\mathrm{GL}_{2F'}$ such that $\varphi'|_E \cong \chi_0 \oplus \chi_0$. The parameter $\varphi'$ is unique up to isomorphism. Let $\varphi:W_F' \to {}^L\mathrm{GL}_{2F}$ be an irreducible $L$-parameter such that $\varphi|_E\cong \chi_0 \oplus \chi_0$ where $\chi_0:W_E' \to {}^L\mathrm{GL}_{1E}$ is $\mathrm{Gal}(E/F)$-invariant.
Then $\varphi|_{F'}$ is irreducible, and there are precisely two distinct equivalence classes of $L$-parameters in $\Phi^0_2(F)$ that restrict to $\varphi|_{F'}$. Conversely, if $\varphi': W_{F'}' \to {}^L\mathrm{GL}_{2F'}$ is an irreducible parameter such that $\varphi'|_E \cong \chi_0 \oplus \chi_0$ for some quasi-character $\chi_0:W_{E}' \to {}^L\mathrm{GL}_{1E}$ invariant under $\mathrm{Gal}(E/F)$, then $\varphi'$ extends to an $L$-parameter on $W_F'$. \end{lem} \begin{proof} One can twist by $\chi_0^{-1}$ and its extension to $W_F'$ to reduce the lemma to the case where $\chi_0$ is trivial (recall that a $\mathrm{Gal}(E/F')$ (resp.~$\mathrm{Gal}(E/F)$)-invariant quasi-character descends by Lemma \ref{lem-bc-param} and the fact that both of these groups have trivial Schur multiplier). In this case the lemma follows immediately from the character tables included in \S \ref{appendix} below (see Lemma \ref{lem-Q} in particular). \end{proof} The following is the conjectural translation of this statement (via Langlands functoriality) into the language of automorphic representations: \begin{conj} \label{conj-32} Let $F'=E^Q$. Let $\pi$ be a cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_F)$ with base change $\chi_0 \boxplus \chi_0$ to an isobaric automorphic representation of $\mathrm{GL}_2(\mathbb{A}_E)$. Then $\pi$ admits a base change $\pi_{F'}$ to $\mathrm{GL}_2(\mathbb{A}_{F'})$ that is cuspidal. There are precisely two distinct isomorphism classes of cuspidal automorphic representations of $\mathrm{GL}_2(\mathbb{A}_{F})$ that base change to $\pi_{F'}$. Conversely, if $\pi'$ is a cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_{F'})$ such that $\pi'_E \cong \chi_0 \boxplus \chi_0$ where $\chi_0$ is $\mathrm{Gal}(E/F)$-invariant, then $\pi'$ descends to a cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_{F})$.
\end{conj} The situation for $n=3$ is similar: \begin{lem} \label{lem-A5-EF3} Let $F'=E^{\widetilde{A}_4}$. Let $\chi_0:W_{E}' \to {}^L\mathrm{GL}_{1E}$ be a character invariant under $\mathrm{Gal}(E/F')$. There is an irreducible parameter $\varphi':W_{F'}' \to {}^L\mathrm{GL}_{3F'}$ such that $\varphi'|_{W_E'} \cong \chi_0^{\oplus 3}$, unique up to isomorphism. Let $\varphi:W_F' \to {}^L\mathrm{GL}_{3F}$ be an irreducible $L$-parameter such that $\varphi|_E\cong \chi_0^{\oplus 3}$ where $$ \chi_0:W_E' \to {}^L\mathrm{GL}_{1E} $$ is $\mathrm{Gal}(E/F)$-invariant. Then $\varphi|_{F'}$ is irreducible, and there are precisely two distinct isomorphism classes of $L$-parameters in $\Phi^0_3(F)$ that restrict to the isomorphism class of $\varphi|_{F'}$. Conversely, if $\varphi': W_{F'}' \to {}^L\mathrm{GL}_{3F'}$ is an irreducible parameter such that $\varphi'|_E \cong \chi_0^{\oplus 3}$ for some quasi-character $\chi_0:W_{E}' \to {}^L\mathrm{GL}_{1E}$ invariant under $\mathrm{Gal}(E/F)$, then $\varphi'$ extends to an $L$-parameter on $W_F'$. \end{lem} \begin{proof} One can twist by $\chi_0^{-1}$ and its extension to $W_F'$ to reduce the lemma to the case where $\chi_0$ is trivial (recall that a $\mathrm{Gal}(E/F')$ (resp.~$\mathrm{Gal}(E/F)$)-invariant quasi-character descends by Lemma \ref{lem-bc-param} and the fact that both of these groups have trivial Schur multiplier). In this case the lemma follows immediately from the character tables included in \S \ref{appendix} (see Lemma \ref{lem-tetra-reps} in particular). \end{proof} The corresponding conjecture is the following: \begin{conj} \label{conj-33} Let $F'=E^{\widetilde{A}_4}$. Let $\pi$ be a cuspidal automorphic representation of $\mathrm{GL}_3(\mathbb{A}_F)$ with base change $\chi_0^{\boxplus 3}$ to an isobaric automorphic representation of $\mathrm{GL}_3(\mathbb{A}_E)$.
Then $\pi$ admits a base change $\pi_{F'}$ to $\mathrm{GL}_3(\mathbb{A}_{F'})$ that is cuspidal. There are precisely two nonisomorphic cuspidal automorphic representations of $\mathrm{GL}_3(\mathbb{A}_{F})$ that base change to $\pi_{F'}$. Conversely, if $\pi'$ is a cuspidal automorphic representation of $\mathrm{GL}_3(\mathbb{A}_{F'})$ such that $\pi'_E \cong \chi_0^{\boxplus 3}$ where $\chi_0$ is $\mathrm{Gal}(E/F)$-invariant, then $\pi'$ descends to a cuspidal automorphic representation of $\mathrm{GL}_3(\mathbb{A}_{F})$. \end{conj} \subsection{Appendix: The representations of some binary groups} \label{appendix} In \S \ref{ssec-icosa-gp} and \S \ref{ssec-artin-conj} we studied the problem of base change along an extension $E/F$ where $\mathrm{Gal}(E/F)$ was isomorphic to the binary icosahedral group, that is, the universal perfect central extension $\widetilde{A}_5$ of the alternating group $A_5$ on $5$ letters. Fix an embedding $A_4 \hookrightarrow A_5$, and let $\widetilde{A}_4$ be the inverse image of $A_4$ under the quotient map $\widetilde{A}_5 \to A_5$. Similarly fix an embedding $\mathbb{Z}/2 \times \mathbb{Z}/2 \hookrightarrow A_5$ and let $Q$ be the inverse image of $\mathbb{Z}/2 \times \mathbb{Z}/2$ under the quotient map $\widetilde{A}_5 \to A_5$. In \S \ref{ssec-icosa-gp} and \S \ref{ssec-artin-conj} we required various properties of the representations of $\widetilde{A}_5$, $\widetilde{A}_4$, and $Q$. We collect these properties in this subsection for ease of reference. We now write down the character table of $\widetilde{A}_5$. For $n \in \{1,2,3,4,6\}$ let $C_n$ be the unique conjugacy class of $\widetilde{A}_5$ consisting of the elements of order $n$. Let $C_5$ and $C_5'$ be the two conjugacy classes of elements of order $5$, and if $g \in C_5$ (resp.~$C_5'$) let $C_{10}$ (resp.~$C_{10}'$) be the conjugacy class of $-g$ (viewed as a matrix in $\mathrm{SL}_2(\mathbb{Z}/5) \cong \widetilde{A}_5$).
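For orientation, and as a consistency check on this enumeration (our computation, not part of the source), the sizes of the nine classes just listed, computed under the identification $\widetilde{A}_5 \cong \mathrm{SL}_2(\mathbb{Z}/5)$, add up to the group order:

```latex
% Conjugacy class sizes in SL_2(Z/5), listed in the order
%   C_1, C_2, C_4, C_3, C_6, C_5, C_10, C_5', C_10':
\[
1 + 1 + 30 + 20 + 20 + 12 + 12 + 12 + 12 \;=\; 120 \;=\; |\widetilde{A}_5|.
\]
```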
The degree of an irreducible representation is given by its subscript. We let $u,v$ be the distinct roots of the polynomial $x^2-x-1$, so that $\{u,v\}=\{(1+\sqrt{5})/2,(1-\sqrt{5})/2\}$. The following character table is in \cite[\S 7]{Buhler} (see \cite[Proof of Lemma 5.1]{KimIcos} for corrections). \begin{center} \begin{tabular}{ l | c |c |c | c | c | c | c| c |c |} & $C_1$ & $C_2$ & $C_4$ & $C_3$ & $C_6$ & $C_5$ & $C_{10}$ & $C_{5}'$ & $C_{10}'$ \\ \hline $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$\\ $\theta_3$ & $3$ & $3$ & $-1$ & $0$ & $0$ & $u$ & $u$ & $v$ & $v$ \\ $\theta_3'$ & $3$ & $3$ & $-1$ & $0$ & $0$ & $v$ & $v$ & $u$ & $u$ \\ $\theta_4$ & $4$ & $4$ & $0$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$\\ $\theta_5$ & $5$ &$5$ & $1$ & $-1$ & $-1$ & 0 & 0 & 0 & 0\\ $\theta_2$ & $2$ & $-2$ & $0$ & $-1$ & $1$ & $u-1$ & $1-u$ & $v-1$ & $1-v$ \\ $\theta_2'$ & $2$ & $-2$ & $0$ & $-1$ & $1$ & $v-1$ & $1-v$ & $u-1$ & $1-u$ \\ $\theta_4'$ & $4$ & $-4$ & $0$ & $1$ & $-1$ & $-1$ & $1$ & $-1$ & $1$ \\ $\theta_6$ & $6$ & $-6$ & $0$ & $0$ & $0$ & $1$ & $-1$ & $1$ & $-1$ \end{tabular} \end{center} Let $\chi$ be a nontrivial character of $\widetilde{A}_4$. It is of order $3$, as $\widetilde{A}_4^{\mathrm{ab}} \cong \mathbb{Z}/3$. Using the character table above, one proves the following lemma \cite[Lemmas 5.1--5.3]{KimIcos}: \begin{lem} \label{lem-icosa-reps} Let $\langle \xi \rangle =\mathrm{Gal}(\mathbb{Q}(\sqrt{5})/\mathbb{Q})$.
The following is a complete list of irreducible characters of $\widetilde{A}_5$: \begin{enumerate} \item the trivial character \item $\theta_2$, $\xi \circ \theta_2$ ($2$-dimensional) \item $\mathrm{Sym}^2(\theta_2)$, $\mathrm{Sym}^2(\xi \circ \theta_2)$ ($3$-dimensional) \item $\mathrm{Sym}^3(\theta_2)=\mathrm{Sym}^3(\xi \circ \theta_2)$, $\theta_2 \otimes \xi \circ \theta_2$ ($4$-dimensional) \item $\mathrm{Ind}_{\widetilde{A}_4}^{\widetilde{A}_5}(\chi) = \mathrm{Sym}^4(\theta_2) = \mathrm{Sym}^4(\xi \circ \theta_2)$ ($5$-dimensional) \item $\mathrm{Sym}^2(\theta_2) \otimes \xi \circ \theta_2 =\theta_2 \otimes \mathrm{Sym}^2(\xi \circ \theta_2)=\mathrm{Sym}^5(\theta_2)$ ($6$-dimensional) \end{enumerate} The two characters of each of the degrees $2$, $3$, and $4$ given above are inequivalent. \end{lem} \begin{rem} The fact that $\mathrm{Sym}^4(\theta_2) = \mathrm{Ind}_{\widetilde{A}_4}^{\widetilde{A}_5}(\chi)$ was observed by D.~Ramakrishnan (see \cite{KimIcos}). We point this out because it turns out to be an important fact for the arguments of \S \ref{ssec-trace-to-func2} below. \end{rem} Next we discuss the representations of $\widetilde{A}_4$. Write $t=(123)$ and for $i \in \{1,2\}$ let $\overline{C}_{t^i}$ be the conjugacy class of $t^i$ in $A_4$. The inverse image of $\overline{C}_{t^i}$ is a union of two conjugacy classes $C_{t^i},C_{t^i}'$ for each $i \in \{1,2\}$. We assume that for $c \in C_{t^i}$ one has $|c|=3$ and for $c' \in C_{t^i}'$ one has $|c'|=6$. Write $C_2$ for the conjugacy class of elements of order $2$ and $C_4$ for the conjugacy class of elements of order $4$.
One has the following character table: \begin{center} \begin{tabular}{ l | c |c |c | c | c | c | c } & $C_1$ & $C_2$ & $C_4$ & $C_t$ & $C_t'$ & $C_{t^2}$ & $C_{t^2}'$ \\ \hline $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$\\ $\psi_1$ & $1$ & $1$ & $1$ & $e^{2 \pi i/3}$ & $e^{2 \pi i/3}$ & $e^{4 \pi i/3}$ & $e^{4 \pi i/3}$ \\ $\psi_1^2$ & $1$& $1$ & $1$ & $e^{4 \pi i/3}$ & $e^{4 \pi i/3}$ & $e^{2 \pi i/3}$ & $e^{2 \pi i/3}$ \\ $\psi_3$ & $3$ & $3$ & $-1$ & $0$ & $0$ & 0 & $0$\\ $\psi_2$ & $2$ & $-2$ & $0$ & $-1$ & $1$ & $-1$ & $1$\\ $\psi_2 \psi_1$ &$2$ &$-2$ &$0$& $-e^{2 \pi i/3}$& $e^{2 \pi i/3}$& $-e^{4\pi i/3}$& $e^{4 \pi i/3}$\\ $\psi_2\psi_1^2$ &$2$ & $-2$&$0$& $-e^{4 \pi i/3}$ & $e^{4 \pi i/3}$ & $-e^{2 \pi i/3}$ & $e^{2 \pi i/3}$ \end{tabular} \end{center} We make a few comments on the computation of this table. First, the characters that are lifts of characters of $A_4$ are computed in \cite[\S 5.7]{SerreFG}. Second, we note that $\psi_2:=\theta_2|_{\widetilde{A}_4}$ is irreducible. Indeed, the only proper nontrivial normal subgroup of $\widetilde{A}_5$ is the center, and $\theta_2$ is not the lift of a character of $A_5$ since there are no degree two irreducible characters of $A_5$. Thus any representation with character $\theta_2$ is faithful. Since $\theta_2$ is of degree $2$, if the representation with character $\theta_2|_{\widetilde{A}_4}$ were reducible, it would provide an injection of $\widetilde{A}_4$ into an abelian group. Since $\widetilde{A}_4$ is nonabelian, this shows that $\theta_2|_{\widetilde{A}_4}$ is irreducible. Its character values therefore follow from the character table for $\widetilde{A}_5$ above. The fact that the characters $\psi_2\psi_1^i$ are distinct for distinct $i \in \{0,1,2\}$ follows by considering determinants.
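As a quick completeness check (our computation), the degrees in the table recover the order of the group:

```latex
% Sum of squares of the degrees of the seven irreducible characters
% of the binary tetrahedral group:
\[
1^2 + 1^2 + 1^2 + 3^2 + 2^2 + 2^2 + 2^2 \;=\; 24 \;=\; |\widetilde{A}_4|.
\]
```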
Using the fact that the sum of the squares of the degrees of the irreducible characters must equal the order of the group, we see that the table is complete. \begin{lem} \label{lem-tetra-reps} Let $\langle \xi \rangle=\mathrm{Gal}(\mathbb{Q}(\sqrt{5})/\mathbb{Q})$. One has $$ \theta_2|_{\widetilde{A}_4}=\xi \circ \theta_2|_{\widetilde{A}_4}=\psi_2. $$ Moreover $$ \theta_3|_{\widetilde{A}_4}=\psi_3. $$ \end{lem} \begin{proof} This follows immediately from the character tables above. \end{proof} Finally we record the character table for the quaternion group $Q$. We present the group as $$ Q= \langle i, j : i^4=1,\, i^2=j^2,\, i^{-1}ji=j^{-1} \rangle. $$ Denoting by $C_x$ the conjugacy class of an element $x \in Q$, one has the following character table \cite[\S 19.1]{DF}: \begin{center} \begin{tabular}{ l | c |c |c | c | c | } & $C_1$ & $C_{-1}$ & $C_{i}$ & $C_{j}$ & $C_{ij}$ \\ \hline $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\ $\Theta_1$ & $1$ & $1$ & $-1$ & $1$ & $-1$ \\ $\Theta_1'$ & $1$ & $1$ & $1$ & $-1$ & $-1$ \\ $\Theta_1\Theta_1'$ & $1$ & $1$ & $-1$ & $-1$ & $1$\\ $\Theta_2$ & $2$ & $-2$ & $0$ & $0$ & $0$ \end{tabular} \end{center} We note that as before the subscript indicates the degree of the representation. By examining the character tables of $\widetilde{A}_5$ and $Q$ one immediately deduces the following lemma: \begin{lem} \label{lem-Q} Let $\langle \xi \rangle=\mathrm{Gal}(\mathbb{Q}(\sqrt{5})/\mathbb{Q})$. One has $\theta_2|_{Q}=\xi \circ \theta_2|_Q=\Theta_2$. \qed \end{lem} \section{Proofs of the main theorems} \label{sec-proofs} In this section we prove the theorems stated in the introduction. \subsection{Preparation} The propositions of this subsection will be used in the proof of our main theorems in \S \ref{ssec-func-to-trace} and \S \ref{ssec-trace-to-func} below.
\begin{prop} \label{prop-solv} Let $E/F'$ be a Galois extension with $\mathrm{Gal}(E/F') \cong \widetilde{A}_4$ and let $\pi'$ be a cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_{F'})$. There are precisely $|\mathrm{Gal}(E/F')^{\mathrm{ab}}|$ non-isomorphic cuspidal automorphic representations of $\mathrm{GL}_2(\mathbb{A}_{F'})$ that have $\pi'_E$ as a base change. \end{prop} \begin{prop} \label{prop-solv3} Let $E/F'$ be a Galois extension with $\mathrm{Gal}(E/F') \cong \widetilde{A}_4$ and let $\pi'$ be a cuspidal automorphic representation of $\mathrm{GL}_3(\mathbb{A}_{F'})$. If $\pi'_E\cong \chi_0^{\boxplus 3}$ where $\chi_0$ is a quasi-character invariant under $\mathrm{Gal}(E/F')$, then there is a unique cuspidal automorphic representation of $\mathrm{GL}_3(\mathbb{A}_{F'})$ that has $\pi'_E$ as a base change. It is of $\rho_3$-type, where $\rho_3: W_{F'}' \to {}^L\mathrm{GL}_{3F'}$ is a representation trivial on $W_E'$ that has character equal to the unique degree three irreducible character of $\widetilde{A}_4$. \end{prop} These propositions correspond to the first (and easiest) assertions on $L$-parameters in lemmas \ref{lem-A5-EF} and \ref{lem-A5-EF3}, respectively. They will be proven in a moment after some preparation. Let $P \leq \mathrm{GL}_{n}$ be a parabolic subgroup and let $P=MN$ be its Levi decomposition. Suppose that $\Pi_M$ is a cuspidal automorphic representation of $M(E) A_{\mathrm{GL}_{nE}} \backslash M(\mathbb{A}_E)$ and that $$ \Pi=\mathrm{Ind}_{M(\mathbb{A}_E)}^{\mathrm{GL}_{n}(\mathbb{A}_E)}(\Pi_M) $$ is an (irreducible) automorphic representation of $\mathrm{GL}_n(E) A_{\mathrm{GL}_{nE}} \backslash \mathrm{GL}_n(\mathbb{A}_E)$. Here $\Pi_M$ is extended to a representation of $P(\mathbb{A}_E)$ by letting the action of $N(\mathbb{A}_E)$ be trivial. We note that $\Pi$ is irreducible and unitary \cite[Chapter 3, \S 4]{AC}.
Write $M=\prod_{i} \mathrm{GL}_{n_i}$ for some set of integers $n_i \geq 1$ and $\Pi_M=\otimes_i\Pi_i$ where the $\Pi_i$ are cuspidal automorphic representations of $\mathrm{GL}_{n_i}(E) \backslash \mathrm{GL}_{n_i}(\mathbb{A}_E)$. \begin{lem} \label{lem-const-term} Suppose that $E/F'$ is a Galois extension of number fields and $\Pi^{\sigma} \cong \Pi$ for all $\sigma \in \mathrm{Gal}(E/F')$. Then $\{\Pi_i\}=\{\Pi_i^{\sigma}\}$ for all $\sigma \in \mathrm{Gal}(E/F')$. \end{lem} \begin{proof} Since $\Pi$ is induced from cuspidal, we use the theory of Eisenstein series to view $\Pi$ as a subrepresentation (not just a subquotient) of $L^2(\mathrm{GL}_n(E) A_{\mathrm{GL}_{nE}} \backslash \mathrm{GL}_n(\mathbb{A}_E))$. Let $V_{\Pi} \leq L^2(\mathrm{GL}_n(E) A_{\mathrm{GL}_{nE}} \backslash \mathrm{GL}_n(\mathbb{A}_E))$ be the subspace of $\Pi$-isotypic automorphic forms. Consider the constant term $$ \phi_P(m):=\int_{N(E) \backslash N(\mathbb{A}_E)} \phi(nm)\,dn. $$ It is an automorphic form on $M(\mathbb{A}_E)$ \cite[Lemma 4]{LanglNotion}. There is a natural action of $\mathrm{Gal}(E/F')$ on $L^2( M(E)A_{\mathrm{GL}_{nE}} \backslash M(\mathbb{A}_E))$. By the normal basis theorem one has $d(n^{\sigma})=dn$ for all $\sigma \in \mathrm{Gal}(E/F')$, and hence the map \begin{align*} V_{\Pi} &\longrightarrow L^2(A_{\mathrm{GL}_{nE}} M(E) \backslash M(\mathbb{A}_E))\\ \phi &\longmapsto \phi_P \end{align*} is $\mathrm{Gal}(E/F')$-equivariant. Using the theory of Eisenstein series, specifically \cite[Propositions II.1.7 and IV.1.9]{MW}, it follows that $\mathrm{Gal}(E/F')$ preserves the set of representations $\Pi_{1M}$ of $M(E) A_{\mathrm{GL}_{nE}} \backslash M(\mathbb{A}_E)$ such that $\Pi$ is a constituent of $\mathrm{Ind}_{M(\mathbb{A}_E)}^{\mathrm{GL}_{n}(\mathbb{A}_E)}(\Pi_{1M})$.
Here, as before, we are extending $\Pi_{1M}$ to a representation of $P(\mathbb{A}_E)$ by letting $N(\mathbb{A}_E)$ act trivially. To make this statement easier for the reader to check, we note that our assumptions imply that $\Pi$ is not in the discrete spectrum, so no residues of Eisenstein series come into play (see \cite{MW2} for the classification of the discrete non-cuspidal spectrum of $\mathrm{GL}_n$). By the results contained in \cite[(4.3)]{JSII} on isobaric automorphic representations, the lemma follows. \end{proof} We now prove Proposition \ref{prop-solv}: \begin{proof}[Proof of Proposition \ref{prop-solv}] Recall that $H^2(\mathrm{Gal}(E/F'),\mathbb{C}^{\times})=H^2(\widetilde{A}_4,\mathbb{C}^{\times})=0$. Thus if $\pi'_E$ is cuspidal then the proposition is \cite[Theorem 2]{Rajan3}. In the remainder of the proof we will constantly use facts on cyclic prime degree base change established in \cite{Langlands}. A convenient list of the basic properties (in a more general setting) is given in \cite[Chapter 3, Theorems 4.2 and 5.1]{AC}. Assume that $\pi'_E$ is not cuspidal. By the theory of prime degree base change we must then have $\pi'_E \cong \chi_0 \boxplus \chi_0^{\sigma_0}$ for some quasi-character $\chi_0:E^{\times} \backslash \mathbb{A}_E^{\times} \to \mathbb{C}^{\times}$ and some $\sigma_0 \in \mathrm{Gal}(E/F')$. Therefore we can apply Lemma \ref{lem-const-term} to see that $\widetilde{A}_4$ permutes the two-element set $\{\chi_0,\chi_0^{\sigma_0}\}$. Since $\mathrm{Gal}(E/F') \cong \widetilde{A}_4$ has no subgroup of index two, one has $\chi_0^{\sigma}=\chi_0$ for all $\sigma \in \mathrm{Gal}(E/F')$. Since $\chi_0$ is $\mathrm{Gal}(E/F')$-invariant and $H^2(\mathrm{Gal}(E/F'),\mathbb{C}^{\times})=0$, Lemma \ref{lem-bc-param} implies that $\chi_0$ extends to a quasi-character $\chi'$ of $F'^{\times} \backslash \mathbb{A}_{F'}^{\times}$.
Thus, upon replacing $\pi'$ by $\pi' \otimes \chi'^{-1}$ if necessary, we see that to complete the proof of the proposition it suffices to show that there are $|\mathrm{Gal}(E/F')^{\mathrm{ab}}|$ distinct isomorphism classes of cuspidal automorphic representations $\pi'$ of $\mathrm{GL}_{2}(\mathbb{A}_{F'})$ such that $\pi'_E \cong 1 \boxplus 1$. We now look more closely at the structure of $\widetilde{A}_4$. Let $V = \mathbb{Z}/2 \times \mathbb{Z}/2$ denote the Klein four-group and fix an embedding $V \hookrightarrow A_4$. The inverse image $Q$ of $V$ under the covering map $\widetilde{A}_4 \to A_4$ is isomorphic to the quaternion group; it is a nonabelian group of order $8$. The subgroup $Q \leq \widetilde{A}_4$ is normal and the quotient $\widetilde{A}_4 \to \widetilde{A}_4/Q \cong \mathbb{Z}/3$ induces an isomorphism $$ \widetilde{A}_4^{\mathrm{ab}} \tilde{\longrightarrow} \mathbb{Z}/3. $$ Let $\mu$ be a nontrivial character of $F'^{\times} \backslash \mathbb{A}_{F'}^{\times}$ trivial on $\mathrm{N}_{E/F'}\mathbb{A}_E^{\times}$. Then since $\widetilde{A}_4^{\mathrm{ab}} \tilde{\longrightarrow}\mathbb{Z}/3$ we have $\mu^3=1$. The three cuspidal automorphic representations $\pi', \pi' \otimes \mu$ and $\pi' \otimes \mu^2$ are all nonisomorphic (as can be seen by examining central characters) and all have the property that their base changes to $E$ are isomorphic to $1 \boxplus 1$. Therefore our task is to show that there are no other isomorphism classes of cuspidal automorphic representations of $\mathrm{GL}_2(\mathbb{A}_{F'})$ that base change to $1 \boxplus 1$. We note that $(\pi'\otimes \mu^i)_{E^Q}$ is independent of $i$ and is cuspidal by prime degree base change. Therefore it suffices to show that there is at most one cuspidal automorphic representation $\pi_0$ of $\mathrm{GL}_2(\mathbb{A}_{E^{Q}})$ whose base change to $\mathrm{GL}_2(\mathbb{A}_E)$ is $1 \boxplus 1$.
Let $\pi_0$ be a cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_{E^Q})$ whose base change to $\mathrm{GL}_2(\mathbb{A}_E)$ is $1 \boxplus 1$. Choose a chain of subfields $E >E_1>E_2>E^Q$. We denote by $\chi_1 \in \mathrm{Gal}(E/E_2)^{\wedge}$ a character that restricts nontrivially to $\mathrm{Gal}(E/E_1)$. The theory of prime degree base change implies that $\pi_{0E_1}$ cannot be cuspidal since $1$ is invariant under $\mathrm{Gal}(E/E_1)$. Hence $\pi_{0E_1}$ must be isomorphic to one of \begin{align} 1 \boxplus 1, \quad 1 \boxplus \chi_1|_{\mathbb{A}_{E_1}^{\times}},\quad \textrm{ or } \quad \chi_1|_{\mathbb{A}_{E_1}^{\times}} \boxplus \chi_1|_{\mathbb{A}_{E_1}^{\times}}. \end{align} Thus applying the theory of prime degree base change again we see that $\pi_{0E_2}$ cannot be cuspidal. Now by assumption $\pi_0$ is cuspidal, and since $\pi_{0E_2}$ is not cuspidal, $\pi_0$ is $E_2$-induced. In particular, $\pi_0=\pi(\phi)$ for an irreducible $L$-parameter $\phi:W_{E^Q}' \to {}^L\mathrm{GL}_{2E^Q}$ (compare \cite[\S 2 C)]{Langlands}). Note that $\phi$ is necessarily trivial on $W_E'$, and hence can be identified with a two-dimensional irreducible representation of $\mathrm{Gal}(E/E^Q)$. There is just one isomorphism class of such representations by the character table for $Q$ recorded in \S \ref{appendix}. It follows that $\pi_0$ is the unique cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_{E^Q})$ whose base change to $\mathrm{GL}_{2}(\mathbb{A}_E)$ is $1 \boxplus 1$. As mentioned above, this implies the proposition. \end{proof} As a corollary of the proof, we have the following: \begin{cor} \label{cor-rho-type} Let $E/F'$ be a Galois extension with $\mathrm{Gal}(E/F') \cong \widetilde{A}_4$, and let $\pi'$ be a cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_{F'})$.
If $\pi'_E$ is not cuspidal, then $\pi'$ is of $\rho$-type for some $L$-parameter $\rho$ trivial on $W_E'$. \end{cor} \begin{proof} The proof of Proposition \ref{prop-solv} implies that there is a character $\chi'$ of $F'^{\times} \backslash \mathbb{A}_{F'}^{\times}$ such that $(\pi' \otimes \chi'^{-1})_{E} \cong 1 \boxplus 1$, so it suffices to treat the case where $\pi'_E \cong 1 \boxplus 1$. By the argument in the proof of Proposition \ref{prop-solv}, and using the notation therein, we have that $\pi'_{E^Q}=\pi(\phi)$, where $\phi:W_{E^Q}' \to {}^L\mathrm{GL}_{2E^Q}$ is the unique irreducible $L$-parameter trivial on $W_E'$. Twisting $\pi'$ by an abelian character of $\widetilde{A}_4$ if necessary, we can and do assume that the central character of $\pi'$ is trivial. Thus we can apply \cite[\S 3]{Langlands} to conclude that $\pi'=\pi(\phi')$ for some $L$-parameter $\phi':W_{F'}' \to{}^L\mathrm{GL}_{2F'}$. \end{proof} We now prove Proposition \ref{prop-solv3}: \begin{proof}[Proof of Proposition \ref{prop-solv3}] The quasi-character $\chi_0$ descends to a quasi-character $\chi':F'^{\times} \backslash \mathbb{A}_{F'}^{\times} \to \mathbb{C}^{\times}$ by Lemma \ref{lem-bc-param} and the fact that $H^2(\mathrm{Gal}(E/F'),\mathbb{C}^{\times})=H^2(\widetilde{A}_4,\mathbb{C}^{\times})=0$. Replacing $\pi'$ by $\pi' \otimes \chi'^{-1}$ if necessary, we can and do assume that $\pi'_E\cong 1^{\boxplus 3}$. In the remainder of the proof we will constantly use facts on cyclic prime degree base change established in \cite{AC}. A convenient list of the basic properties is given in \cite[Chapter 3, Theorems 4.2 and 5.1]{AC}. Let $Q \hookrightarrow \widetilde{A}_4$ be as in the proof of Proposition \ref{prop-solv}.
By the theory of cyclic prime-degree base change $$ \pi'_{E^Q} =\chi_1 \boxplus \chi_2 \boxplus \chi_3 $$ for some characters $\chi_i:(E^Q)^{\times} \backslash \mathbb{A}_{E^Q}^{\times} \to \mathbb{C}^{\times}$. Thus, by \cite[Chapter 3, Theorem 6.2]{AC} and its proof, since $\pi'$ is cuspidal we conclude that $$ \pi'\cong\mathrm{Ind}_{E^Q}^{F'}(\chi_1) $$ and hence is of $\rho$-type for some irreducible degree three $L$-parameter $\rho:W_{F'}' \to {}^L\mathrm{GL}_{3F'}$ trivial on $W_E'$. By the character table of $\widetilde{A}_4$ recorded in \S \ref{appendix}, we conclude that $\rho \cong \rho_3$ for $\rho_3$ as in the proposition. \end{proof} We also require the following linear independence statement: \begin{lem} \label{lem-lin-ind} Let $M \leq \mathrm{GL}_n$ be the maximal torus of diagonal matrices. Let $v$ be a place of $F$. Suppose that there is a countable set $\mathcal{X}$ of quasi-characters of $M(F_v)$ that is stable under the natural action of $W(M,\mathrm{GL}_n)$. Suppose moreover that $\{a(\chi_v)\}_{\chi_v \in \mathcal{X}}$ is a set of complex numbers such that for all $f_v \in C_c^{\infty}(M(F_v))^{W(M,\mathrm{GL}_n)}$ one has $$ \sum_{\chi_v \in \mathcal{X}} a(\chi_v)\mathrm{tr}(\chi_v)(f_v)=0, $$ where the sum is absolutely convergent. Then $$ \sum_{W \in W(M,\mathrm{GL}_n)}a(\chi_v \circ W)=0 $$ for each $\chi_v \in \mathcal{X}$. \end{lem} \begin{proof} By assumption \begin{align*} 0&=\sum_{\chi_v \in \mathcal{X}} \sum_{W \in W(M,\mathrm{GL}_n)}a(\chi_v)\mathrm{tr}(\chi_v)(f_v \circ W)\\ &=\sum_{\chi_v \in \mathcal{X}}\sum_{W \in W(M,\mathrm{GL}_n)} a(\chi_v)\mathrm{tr}(\chi_v \circ W^{-1})(f_v)\\ &=\sum_{\chi_v \in \mathcal{X}}\mathrm{tr}(\chi_v)(f_v)\sum_{W \in W(M,\mathrm{GL}_n)} a(\chi_v \circ W) \end{align*} for all $f_v \in C_c^{\infty}(M(F_v))$.
The result now follows from generalized linear independence of characters (see \cite[Lemma 6.1]{LabLan} and \cite[Lemma 16.1.1]{JacquetLanglands}). \end{proof} \subsection{Functoriality implies the trace identities} \label{ssec-func-to-trace} In this subsection we prove theorems \ref{main-thm-1}, \ref{main-thm-2}, and \ref{main-thm-3}, namely that the cases of Langlands functoriality explicated in conjectures \ref{conj-1} and \ref{conj-solv} in the first case, Conjecture \ref{conj-2} in the second case, and conjectures \ref{conj-32} and \ref{conj-33} in the third case imply the stated trace identities. By Corollary \ref{cor-aut-trace} the sum \begin{align*} \sum_{\pi'} \mathrm{tr}(\pi')(h^1b_{E/F'}(\Sigma_{\phi}^{S_0}(X))) \end{align*} is equal to $o(X)$ plus \begin{align*} \sum_{\pi'} \mathrm{tr}(\pi')(h^1) \mathrm{Res}_{s=1}\left( \widetilde{\phi}(s)X^sL(s,(\pi'_E \times \pi'^{\tau \vee}_E)^{S_0})\right). \end{align*} Here the sum is over a set of equivalence classes of automorphic representations of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$. Specifically, for \eqref{11} of Theorem \ref{main-thm-1} we take it to be over $E$-primitive representations, for \eqref{A21} of Theorem \ref{main-thm-2} and \eqref{31} of Theorem \ref{main-thm-3} we take it to be over all cuspidal representations, and for \eqref{B21} of Theorem \ref{main-thm-2} we take it to be over cuspidal representations not of $\rho$-type for any two-dimensional representation $\rho:W_{F'}' \to {}^L\mathrm{GL}_{2F'}$ trivial on $W_E'$. The only nonzero contributions to this sum occur when $L(s,(\pi'_E \times \pi'^{\tau\vee}_E)^{S_0})$ has a pole, which implies that $\mathrm{Hom}_I(\pi'_E,\pi'^{\tau}_E) \neq 0$ (see \eqref{ord-pole}).
In this case, if $\pi'_E$ is cuspidal then it is invariant under $\langle \mathrm{Gal}(E/F'),\tau \rangle=\mathrm{Gal}(E/F)$ and the pole is simple (see \eqref{ord-pole}). In view of conjectures \ref{conj-1} and \ref{conj-2}, in the setting of theorems \ref{main-thm-1} and \ref{main-thm-2} this implies that if $L(s,(\pi'_E \times \pi'^{\tau\vee}_E)^{S_0})$ has a pole then $\pi'_E$ descends to a cuspidal representation $\pi$ of $\mathrm{GL}_n(\mathbb{A}_F)$, whether or not $\pi'_E$ is cuspidal. On the other hand, the only nonzero contributions to the quantity \eqref{31} in Theorem \ref{main-thm-3} come from representations where $\dim \mathrm{Hom}_I(\pi'_E,\pi'^{\tau}_E)=n^2$, and this is the case if and only if $\pi'_E \cong \chi_0^{\boxplus n}$ where $\chi_0:E^{\times} \backslash \mathbb{A}_E^{\times} \to \mathbb{C}^{\times}$ is a quasi-character invariant under $\mathrm{Gal}(E/F) =\langle \mathrm{Gal}(E/F'),\tau \rangle$. In these cases $\pi'_E$ descends to a cuspidal representation of $\mathrm{GL}_n(\mathbb{A}_F)$ by conjectures \ref{conj-32} and \ref{conj-33}. Assume for the moment that we are in the setting of Theorem \ref{main-thm-1}. In this case by Conjecture \ref{conj-solv} there are precisely $|\mathrm{Gal}(E/F')^{\mathrm{ab}}|$ inequivalent cuspidal representations of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$ that base change to $\pi'_E$, since $\pi'_E$ is cuspidal by the theory of prime degree base change \cite[Chapter 3, Theorems 4.2 and 5.1]{AC}. With this in mind, the definition of transfer (see \S \ref{ssec-transfers} and Lemma \ref{lem-unr-transf}) completes the proof of the claimed trace identity. We only remark that the absolute convergence of the two sums follows from Corollary \ref{cor-aut-trace} and the fact that $L(s,(\pi'_E \times \pi'^{\tau \vee}_E)^{S_0})$ has a pole of order $\dim \mathrm{Hom}_I(\pi'_E,\pi'^{\tau}_E)$ (see \eqref{ord-pole}).
Now assume that we are in the setting of Theorem \ref{main-thm-2}. In this case $\pi'_E$ may not be cuspidal, but by Proposition \ref{prop-solv} there are still exactly $|\mathrm{Gal}(E/F')^{\mathrm{ab}}|$ non-isomorphic cuspidal automorphic representations of $\mathrm{GL}_2(\mathbb{A}_{F'})$ that base change to $\pi'_E$. With this in mind, the claimed trace identity follows as before. The proof of Theorem \ref{main-thm-3} is essentially the same. We only point out the most significant difference, namely that we are claiming that one can consider arbitrary Hecke functions in $C_c^{\infty}(\mathrm{GL}_n(F'_{S'_1})//\mathrm{GL}_n(\mathcal{O}_{F'S'_1}))$ instead of just those that are base changes of Hecke functions in $C_c^{\infty}(\mathrm{GL}_n(E_{S_{10}})//\mathrm{GL}_n(\mathcal{O}_{ES_{10}}))$. The reason this is possible is that for each $\mathrm{Gal}(E/F')$-invariant quasi-character $\chi_0:E^{\times} \backslash \mathbb{A}_E^{\times} \to \mathbb{C}^{\times}$ there is a unique cuspidal automorphic representation $\pi'$ of $\mathrm{GL}_n(\mathbb{A}_{F'})$ such that $\pi'_E \cong \chi_0^{\boxplus n}$ by propositions \ref{prop-solv} and \ref{prop-solv3}. \qed \subsection{The trace identity implies functoriality: first two cases} \label{ssec-trace-to-func} In this subsection we prove theorems \ref{main-thm-1-conv} and \ref{main-thm-2-conv}, namely that the trace identities of theorems \ref{main-thm-1} and \ref{main-thm-2} imply the corresponding cases of functoriality under the assumption of a supply of transfers (more precisely, under Conjecture \ref{conj-transf}).
By assumption, for all $h$ unramified outside of $S'$ with transfer $\Phi$ unramified outside of $S$ one has an identity \begin{align} \label{id1} &\lim_{X \to \infty}|\mathrm{Gal}(E/F')^{\mathrm{ab}}|^{-1}X^{-1} \sum'_{\pi'} \mathrm{tr}(\pi')(h^1b_{E/F'}(\Sigma^{S_0}_{\phi}(X))) \\&= \lim_{X \to \infty} X^{-1}\sum'_{\pi} \mathrm{tr}(\pi)( \Phi^1b_{E/F}(\Sigma^{S_0}_{\phi}(X))). \nonumber \end{align} Here for the proof of Theorem \ref{main-thm-1-conv}, the sums are over a set of representatives for the equivalence classes of $E$-primitive automorphic representations of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$ and $A_{\mathrm{GL}_{nF}} \backslash \mathrm{GL}_n(\mathbb{A}_F)$, respectively. For the proof of Theorem \ref{main-thm-2-conv}, the sums are over a set of representatives for the equivalence classes of cuspidal automorphic representations of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$ and $A_{\mathrm{GL}_{nF}} \backslash \mathrm{GL}_n(\mathbb{A}_F)$, respectively, that are not of $\rho$-type for $\rho$ trivial on $W_E'$. We start by refining \eqref{id1}. Notice that each representation $\pi'$ appearing in \eqref{id1} above admits a base change $\pi'_E$ to $\mathrm{GL}_n(\mathbb{A}_E)$ by a series of cyclic base changes. We claim that $\pi'_E$ is cuspidal. In Theorem \ref{main-thm-1-conv} we have assumed that $\pi'$ is $E$-primitive. Hence, by the theory of cyclic base change, $\pi'_E$ must be cuspidal \cite[Chapter 3, Theorem 4.2 and Theorem 5.1]{AC}. In Theorem \ref{main-thm-2-conv} we have assumed that $\pi'$ is not of $\rho$-type for any $\rho$ trivial on $W_E'$. Thus $\pi'_E$ is cuspidal by Corollary \ref{cor-rho-type}.
Now applying Corollary \ref{cor-aut-trace} and \eqref{ord-pole} we see that the top line of \eqref{id1} is equal to \begin{align*} |\mathrm{Gal}(E/F')^{\mathrm{ab}}|^{-1} \sum'_{\pi':\pi'_E \cong \pi'^{\tau}_E} \mathrm{tr}(\pi')(h^1)\widetilde{\phi}(1)\mathrm{Res}_{s=1}L(s,(\pi'_E \times \pi'^{\tau \vee}_E)^{S_0}). \end{align*} Note that the given residue is nonzero and that this sum is absolutely convergent by Corollary \ref{cor-aut-trace}. At this point we assume that the function $\Phi_S$ is chosen so that at finite places $v \in S$ where $\Phi_v \not \in C_c^{\infty}(\mathrm{GL}_n(F_v)//\mathrm{GL}_n(\mathcal{O}_{F_v}))$ the function $\Phi_v$ is of positive type (this is possible by Conjecture \ref{conj-transf}). Under this assumption we claim that the second line of \eqref{id1} is absolutely convergent. Indeed, the $L$-function $L(s,(\pi_E \times \pi_E^{\vee})^{S_0})$ of the admissible representation $\pi_E^{S_0} \times \pi_E^{\vee S_0}$ is defined and convergent in some half plane \cite[Theorem 13.2]{Borel}, \cite{LanglProb}, and its Dirichlet series coefficients are positive \cite[Lemma a]{HR}. Thus the smoothed partial sums $\mathrm{tr}(\pi)(b_{E/F}(\Sigma^{S_0}_{\phi}(X)))$ have positive coefficients. The fact that the second line of \eqref{id1} converges absolutely follows. Now we have refined \eqref{id1} to an identity of absolutely convergent sums \begin{align} \label{id2} &|\mathrm{Gal}(E/F')^{\mathrm{ab}}|^{-1} \sum'_{\pi':\pi'_E \cong \pi'^{\tau}_E} \mathrm{tr}(\pi')(h^1)\widetilde{\phi}(1)\mathrm{Res}_{s=1}L(s,(\pi'_E \times \pi'^{\tau \vee}_E)^{S_0}) \\&= \sum'_{\pi} \mathrm{tr}(\pi)(\Phi^1)\lim_{X \to \infty} X^{-1}\mathrm{tr}(\pi)(b_{E/F}(\Sigma^{S_0}_{\phi}(X))), \nonumber \end{align} where the residues in the top line are nonzero.
Before starting the proof in earnest, we wish to refine \eqref{id2} yet again to an identity where only representations of a given infinity type are involved. Let $\Psi=\otimes_{w|\infty}\Psi_w \in \otimes_{w |\infty}C_c^{\infty}(M(E_{w}))^{W(M,\mathrm{GL}_n)}$, where $M \leq \mathrm{GL}_n$ is the standard maximal torus of diagonal matrices. For an irreducible unitary generic admissible representation $\Pi_{\infty}$ of $\mathrm{GL}_n(E_{\infty})$ (resp.~$\pi_{\infty}$ of $\mathrm{GL}_n(F_{\infty})$) write $\chi_{\Pi_{\infty}}: M(E_{\infty}) \to \mathbb{C}^{\times}$ (resp.~$\chi_{\pi_{\infty}}: M(F_{\infty}) \to \mathbb{C}^{\times}$) for a choice of quasi-character whose unitary induction to $\mathrm{GL}_n(E_{\infty})$ (resp.~$\mathrm{GL}_n(F_{\infty})$) is $\Pi_{\infty}$ (resp.~$\pi_{\infty}$). Here we are using our assumption that $F$ is totally complex. The quasi-characters $\chi_{\Pi_{\infty}w}$ and $\chi_{\pi_{\infty}v}$ for infinite places $w$ of $E$ and $v$ of $F$ are determined by $\Pi_{w}$ and $\pi_{v}$, respectively, up to the action of $W(M,\mathrm{GL}_n)$. Moreover, they determine $\Pi_w$ and $\pi_v$, respectively.
We note that by an application of the descent arguments proving Lemma \ref{lem-archi-transf} the identity \eqref{id2} implies \begin{align} \label{id3} &|\mathrm{Gal}(E/F')^{\mathrm{ab}}|^{-1} \sum'_{\pi':\pi'_E \cong \pi'^{\tau}_E} \mathrm{tr}(\chi_{\pi'_E})(\Psi)\mathrm{tr}(h^{\infty})\widetilde{\phi}(1)\mathrm{Res}_{s=1}L(s,(\pi'_E \times \pi'^{\tau \vee}_E)^{S_0})\\ &= \sum'_{\pi} \mathrm{tr}(\chi_{\pi})(\otimes_{v |\infty} (*_{w|v}\Psi_w))\mathrm{tr}(\pi)(\Phi^{\infty})\lim_{X \to \infty} X^{-1}\mathrm{tr}(\pi)(b_{E/F}(\Sigma^{S_0}_{\phi}(X))) \nonumber \end{align} where the $*$ denotes convolution in $M(F_{\infty})$ (note we are implicitly choosing isomorphisms $M(E \otimes_F F_{v}) \cong \times_{w|v} M(F_{v})$ for each $v|\infty$ to make sense of this). Let $\chi_0:M(F_{\infty}) \to \mathbb{C}^{\times}$ be a given quasi-character. By Lemma \ref{lem-lin-ind} the identity \eqref{id3} can be refined to \begin{align} \label{id3.5} &|\mathrm{Gal}(E/F')^{\mathrm{ab}}|^{-1} \sum'_{\substack{\pi':\chi_{\pi'_{E\infty}w}=\chi_{0Ew}^{W}\\ \textrm{ for some }W \in W(M,\mathrm{GL}_n)\\ \textrm{for all }w|\infty}} \mathrm{tr}(\chi_{\pi'_E})(\Psi)\mathrm{tr}(h^{\infty})\widetilde{\phi}(1)\mathrm{Res}_{s=1}L(s,(\pi'_E \times \pi'^{\tau \vee}_E)^{S_0})\\ &= \sum'_{\substack{\pi:\chi_{\pi_{\infty}v}=\chi_{0v}^W\\ \textrm{ for some }W \in W(M,\mathrm{GL}_n)\\\textrm{for all }v|\infty}} \mathrm{tr}(\chi_{\pi})(\otimes_{v |\infty} (*_{w|v}\Psi_w))\mathrm{tr}(\pi)(\Phi^{\infty})\lim_{X \to \infty} X^{-1}\mathrm{tr}(\pi)(b_{E/F}(\Sigma^{S_0}_{\phi}(X))).\nonumber \end{align} Now by descent \eqref{id3.5} implies the identity \begin{align} \label{id4} &|\mathrm{Gal}(E/F')^{\mathrm{ab}}|^{-1} \sum'_{\substack{\pi':\pi'_E \cong \pi'^{\tau}_E\\\pi'_E
\cong \pi_{0\infty E}}} \mathrm{tr}(\pi')(h^1)\widetilde{\phi}(1)\mathrm{Res}_{s=1}L(s,(\pi'_E \times \pi'^{\tau \vee}_E)^{S_0}) \\&= \sum'_{\pi:\pi_{\infty} \cong \pi_{0\infty}} \mathrm{tr}(\pi)(\Phi_S^1)\lim_{X \to \infty} X^{-1}\mathrm{tr}(\pi)(b_{E/F}(\Sigma^{S_0}_{\phi}(X))) \nonumber \end{align} for all irreducible admissible generic unitary representations $\pi_{0\infty}$ of $\mathrm{GL}_n(F_{\infty})$; here, as before, for finite $v$ the functions $\Phi_v$ are assumed to be of positive type when they are ramified (i.e.~not in $C_c^{\infty}(\mathrm{GL}_n(F_v)//\mathrm{GL}_n(\mathcal{O}_{F_v}))$). Note in particular that for any given $\Phi_S$ and $h_S$ the sums in \eqref{id4} are finite. We now start to work with \eqref{id4}. First consider descent of primitive representations. Suppose that $\Pi$ is a $\mathrm{Gal}(E/F)$-invariant primitive automorphic representation of $A_{\mathrm{GL}_{nE}} \backslash \mathrm{GL}_n(\mathbb{A}_{E})$. Then by Conjecture \ref{conj-solv} $\Pi$ descends to a representation $\pi'$ of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$. Here in the $n=2$ case we are using the fact that $H^2(\mathrm{Gal}(E/F'),\mathbb{C}^{\times})=H^2(\widetilde{A}_4,\mathbb{C}^{\times})=0$. The existence of a primitive automorphic representation $\pi$ of $A_{\mathrm{GL}_{nF}} \backslash \mathrm{GL}_n(\mathbb{A}_F)$ that is a weak descent of $\Pi$ now follows from \eqref{id4} and a standard argument using the transfer of unramified functions (Lemma \ref{lem-unr-transf}). In more detail, assume that $\Pi$ and $E/F$ are unramified outside of $S$.
Then choosing $h^{S'}=b_{E/F'}(f^{S_0})$ and $\Phi^{S}=b_{E/F}(f^{S_0})$ for $f^{S_0} \in C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_{E}^{S_0})//\mathrm{GL}_n(\widehat{\mathcal{O}}_{E}^{S_0}))$ (which are transfers of each other by Lemma \ref{lem-unr-transf}) the identity \eqref{id4} implies an identity of the form \begin{align*} &\sum'_{\substack{\pi':\pi'_E \cong \pi'^{\tau}_E\\\pi'_E \cong \pi_{0\infty E}}} a(\pi')\mathrm{tr}(\pi')(b_{E/F'}(f^{S_{0}})) = \sum'_{\pi:\pi_{\infty} \cong \pi_{0\infty}} c(\pi)\mathrm{tr}(\pi)(b_{E/F}(f^{S_{0}}))\nonumber \end{align*} for some $a(\pi') \in \mathbb{R}_{>0}$, $c(\pi) \in \mathbb{R}_{\geq 0}$ (here we are using the fact that we assumed the functions $\Phi_v$ to be of positive type if they are ramified). Applying linear independence of characters, this implies a refined identity \begin{align*} &\sum a(\pi')\mathrm{tr}(\pi')(b_{E/F'}(f^S))= \sum c(\pi)\mathrm{tr}(\pi)(b_{E/F}(f^S)) \end{align*} where the sum on top (resp.~bottom) is over cuspidal automorphic representations $\pi'$ (resp.~$\pi$) such that the character $\mathrm{tr}(\pi' \circ b_{E/F'})$ (resp.~$\mathrm{tr}(\pi \circ b_{E/F})$) of $C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_E^{S})//\mathrm{GL}_n(\mathcal{O}_{E}^{S}))$ is equal to $\mathrm{tr}(\Pi)$. Thus any of the representations $\pi$ on the right is a weak descent of $\Pi$, and there must be some representation on the right because the sum on the left is not identically zero as a function of $f^{S_0}$. We also note that the base change is compatible at places $v$ where $\pi$ is an abelian twist of the Steinberg representation by Lemma \ref{lem-EP}. This proves the statements on descent of cuspidal automorphic representations contained in theorems \ref{main-thm-1-conv} and \ref{main-thm-2-conv}.
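In summary, the linear-independence step just used may be sketched as follows: grouping the terms of the identity above according to the character of $C_c^{\infty}(\mathrm{GL}_n(\mathbb{A}_E^{S})//\mathrm{GL}_n(\mathcal{O}_{E}^{S}))$ that they define yields, for each such character $\lambda$,

```latex
\sum_{\pi'\,:\,\mathrm{tr}(\pi'\circ b_{E/F'})=\lambda} a(\pi')
\;=\;
\sum_{\pi\,:\,\mathrm{tr}(\pi\circ b_{E/F})=\lambda} c(\pi).
```

Taking $\lambda=\mathrm{tr}(\Pi)$, the left-hand side is a nonempty sum of strictly positive terms, so the right-hand side is nonzero and some $\pi$ with $c(\pi)>0$ exists; any such $\pi$ is a weak descent of $\Pi$.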
Now assume that $\pi$ is an $E$-primitive automorphic representation of $A_{\mathrm{GL}_{nF}} \backslash \mathrm{GL}_n(\mathbb{A}_F)$, and if $n=2$ assume that $\pi$ is not of $\rho$-type for any $\rho:W_F' \to {}^L\mathrm{GL}_{2F}$ trivial on $W_E'$. By assumption we have that \begin{align} \label{claim-45} \lim_{X \to \infty}X^{-1}\mathrm{tr}(\pi)(b_{E/F}(\Sigma^{S_0}_{\phi}(X))) \neq 0. \end{align} Let $\pi'$ be a cuspidal automorphic representation of $A_{\mathrm{GL}_{nF'}} \backslash \mathrm{GL}_n(\mathbb{A}_{F'})$ that is not of $\rho'$-type for any $\rho':W_{F'}' \to {}^L\mathrm{GL}_{2F'}$ trivial on $W_E'$. By Lemma \ref{lem-unr-transf} one has $$ \mathrm{tr}(\pi'_{v'})(b_{E/F'}(f_w))=\mathrm{tr}(\pi'_{Ew})(f_w) $$ whenever $w$ is a finite place of $E$ dividing $v'$ and $f_w \in C_c^{\infty}(\mathrm{GL}_n(E_w)//\mathrm{GL}_n(\mathcal{O}_{E_w}))$. Thus by \eqref{claim-45}, the existence of a weak base change of $\pi$ to $A_{\mathrm{GL}_{nE}} \backslash \mathrm{GL}_n(\mathbb{A}_E)$ follows as before. This completes the proof of Theorem \ref{main-thm-1-conv} and Theorem \ref{main-thm-2-conv}. \qed \subsection{Artin representations: Theorem \ref{main-thm-3-conv}} \label{ssec-trace-to-func2} Let $E/F$ be a Galois extension such that $\mathrm{Gal}(E/F)\cong \widetilde{A}_5$. We assume that $F$ is totally complex. As above, we fix embeddings $A_4 \hookrightarrow A_5$ and $\mathbb{Z}/2 \times \mathbb{Z}/2 \hookrightarrow A_4 \hookrightarrow A_5$ and let $\widetilde{A}_4,Q \leq \widetilde{A}_5$ denote the pull-backs of these groups under the quotient map $\widetilde{A}_5 \to A_5$. Throughout this subsection we assume the hypotheses of Theorem \ref{main-thm-3-conv}. We also fix a representation $\rho_2:W_F' \to {}^L\mathrm{GL}_{2F}$ trivial on $W_E'$ that has character $\theta_2$ in the notation of \S \ref{appendix}.
There is exactly one other irreducible degree-two character of $\mathrm{Gal}(E/F)$, namely $\xi \circ \theta_2$, where $\xi \in \mathrm{Gal}(\mathbb{Q}(\sqrt{5})/\mathbb{Q})$ is the nontrivial element. In this subsection we prove Theorem \ref{main-thm-3-conv}, which asserts that the trace identities of Theorem \ref{main-thm-3} imply that $\rho_2 \oplus \xi \circ \rho_2$ has an associated isobaric automorphic representation. We note at the outset that the argument is modeled on a well-known argument of Langlands in the tetrahedral case \cite[\S 3]{Langlands}. The trace identities of Theorem \ref{main-thm-3} involve two different fields that were both denoted by $F'$; it is now necessary to distinguish between them. We let $$ F':=E^{\widetilde{A}_4} \leq K :=E^Q. $$ We require the following lemma: \begin{lem} \label{lem-Q-autom} There is a cuspidal automorphic representation $\pi'$ of $\mathrm{GL}_2(\mathbb{A}_{F'})$ and a cuspidal automorphic representation $\sigma$ of $\mathrm{GL}_2(\mathbb{A}_K)$ such that $\pi'=\pi(\rho_2|_{F'})$ and $\sigma=\pi(\rho_2|_K)=\pi'_K$. \end{lem} \begin{proof} One has an automorphic representation $\pi'$ such that $\pi'=\pi(\rho_2|_{F'})$ by Langlands' work \cite[\S 3]{Langlands}; see also \cite[\S 6]{GerLab}. By its construction $\pi'_K$ is isomorphic to $\sigma:=\pi(\rho_2|_K)$. \end{proof} Choose $\sigma$ and $\pi'$ as in the lemma. Assuming the trace identities of Theorem \ref{main-thm-3} in the $n=2$ case, there are precisely two distinct isomorphism classes of cuspidal automorphic representations, represented by, say, $\pi_1,\pi_2$, such that $\pi_{iK} \cong \sigma$.
Using our assumption that $F$ is totally complex this can be proven by arguments analogous to those used in \S \ref{ssec-trace-to-func}; we only note that $$ \lim_{X \to \infty} \left(\frac{d^{3}}{ds^3}(\widetilde{\phi}(s)X^s)\big|_{s=1}\right)^{-1}\mathrm{tr}(\sigma)(b_{E/F'}(\Sigma^{S_0}(X))) \neq 0 $$ since $\mathrm{dim}(\mathrm{Hom}_I(\pi'_E,\pi'^{\tau}_E)) =4$ by construction of $\pi'$ (compare Proposition \ref{Perron-prop}). We emphasize that the trace identity of Theorem \ref{main-thm-3} tells us that $\sigma$ is the unique weak base change of $\pi_i$, which is stronger than the statement that $\sigma_E$ is the unique weak base change of $\pi_i$. We note in particular that using the transfers supplied in \S \ref{ssec-transfers} we have that the base changes are compatible at finite places $v$ that are unramified in $E/F$ and at all infinite places (which are complex by assumption). Moreover the $\pi_i$ are unramified outside of the set of places where $E/F$ is ramified. One expects that upon reindexing if necessary one has \begin{align*} \pi_{1} &\stackrel{?}{\cong} \pi(\rho_{2})\\ \pi_{2} &\stackrel{?}{\cong} \pi(\xi \circ \rho_{2}). \end{align*} We do not know how to prove this, but we will prove something close to it, namely Corollary \ref{cor-isob} below. Consider $\mathrm{Sym}^2(\pi')$ and $\mathrm{Sym}^2(\sigma)$; the first is a cuspidal automorphic representation of $\mathrm{GL}_3(\mathbb{A}_{F'})$ by \cite[Theorem 9.3]{GJ} and the second is an isobaric (noncuspidal) automorphic representation of $\mathrm{GL}_3(\mathbb{A}_K)$ \cite[Remark 9.9]{GJ}. \begin{lem} \label{lem-sym} For $i \in \{1,2\}$ one has $$ \mathrm{Sym}^2(\pi_i)_K \cong \mathrm{Sym}^2(\sigma) $$ and $$ \mathrm{Sym}^2(\pi_i)_{F'} \cong \mathrm{Sym}^2(\pi').
$$ \end{lem} \begin{proof} Since $\pi_{iK} \cong \sigma$, it is easy to see that $\mathrm{Sym}^2(\pi_{i})_{Kv_K} \cong \mathrm{Sym}^2(\sigma)_{v_K}$ for all places $v_K$ of $K$ that are finite and such that $K/F$ and $\sigma$ are unramified. The first statement then follows from strong multiplicity one for isobaric automorphic representations \cite[Theorem 4.4]{JSII}. Since the $\pi_i$ were defined to be weak descents of $\sigma$, they are in particular weak descents of the isobaric representation $1 \boxplus 1$ of $\mathrm{GL}_2(\mathbb{A}_E)$. Thus $$ \lim_{X \to \infty} \left(\frac{d^{8}}{ds^8}(\widetilde{\phi}(s)X^s)\big|_{s=1}\right)^{-1}\mathrm{tr}(\mathrm{Sym}^2(\pi_i))(b_{E/F}(\Sigma^{S_0}(X))) \neq 0 $$ since $\mathrm{tr}(\mathrm{Sym}^2(\pi_i))(b_{E/F}(\Sigma^{S_0}(X)))$ is a smoothed partial sum of the Dirichlet series $\zeta_E^{S_0}(s)^9$. Applying the trace identities of Theorem \ref{main-thm-3} we conclude that $\mathrm{Sym}^2(\pi_{i})$ admits a weak base change $\mathrm{Sym}^2(\pi_i)_{F'}$ to $F'$. Now $\mathrm{Sym}^2(\pi_i)_{F'}$ and $\mathrm{Sym}^2(\pi')$ both base change to $\mathrm{Sym}^2(\sigma)$. Since $\mathrm{Sym}^2(\sigma)$ is not cuspidal, this implies that $\mathrm{Sym}^2(\pi') \cong \mathrm{Sym}^2(\pi_i)_{F'}$ \cite[Chapter 3, Theorems 4.2 and 5.1]{AC}. \end{proof} For convenience, let $S$ be the set of finite places where $E/F$ is ramified. Thus the base change from $\pi_i$ to $\sigma$ is compatible outside of $S$, and the base changes from $\mathrm{Sym}^2(\pi_i)$ to $\mathrm{Sym}^2(\pi')$ and $\mathrm{Sym}^2(\sigma)$ are all compatible outside of $S$. \begin{lem} \label{lem-F'} For $i \in \{1,2\}$ the cuspidal automorphic representation $\pi'$ is a weak base change of $\pi_i$: $$ \pi_{iF'} \cong \pi'.
$$ The base change is compatible for $v \not \in S$. \end{lem} \begin{proof} Fix $i \in \{1,2\}$. We will verify that the local base change $\pi_{iF'v'}$ is isomorphic to $\pi'_{v'}$ for all places $v'$ of $F'$ not dividing places in $S$; this will complete the proof of the lemma (notice that the local base change is well-defined even though we do not yet know that $\pi_{iF'}$ exists as an automorphic representation). Notice that $\pi_{iK} \cong \pi'_K \cong \sigma$ by construction of $\pi_i$. Thus if $v'$ is a place of $F'$ split in $K/F'$ and not lying above a place of $S$, then $\pi_{iF'v'} \cong \pi'_{v'}$. Suppose that $v'$ is a place of $F'$ that is nonsplit in $K/F'$ and not lying above a place of $S$. Then there is a unique place $v_K|v'$ and $[K_{v_K}:F'_{v'}]=3$. Notice that $\pi_{i}$ and $\pi'$ have trivial central character by construction. Thus their Langlands classes are of the form \begin{align*} A(\pi_{iF'v'})=\begin{pmatrix} a\zeta & \\ & a^{-1} \zeta^{-1}\end{pmatrix}\quad \textrm{ and }\quad A(\pi'_{v'})=\begin{pmatrix} a &\\ & a^{-1}\end{pmatrix} \end{align*} for some $a \in \mathbb{C}^{\times}$ and some third root of unity $\zeta$. By Lemma \ref{lem-sym} we have that $$ \mathrm{Sym}^2(A(\pi_{iF'v'}))=\begin{pmatrix}a^2\zeta^2 & & \\ & 1 & \\&&a^{-2} \zeta^{-2} \end{pmatrix} $$ is conjugate under $\mathrm{GL}_3(\mathbb{C})$ to $$ \mathrm{Sym}^2(A(\pi'_{v'}))=\begin{pmatrix} a^2 & & \\ & 1 & \\ & & a^{-2} \end{pmatrix}. $$ Thus $\{a^2,a^{-2}\}=\{a^2\zeta^2,a^{-2}\zeta^{-2}\}$. If $a^2=a^2\zeta^2$ then $\zeta=1$, proving that $\pi_{iF'v'} \cong \pi'_{v'}$.
If on the other hand $a^2=a^{-2}\zeta^{-2}$ and $\zeta \neq 1$, then $$ a^4=\zeta^{-2} $$ and the matrix $\mathrm{Sym}^2(A(\pi'_{v'}))$ has order $6$. On the other hand, $\mathrm{Sym}^2(A(\pi'_{v'}))$ is the image of a Frobenius element of $\mathrm{Gal}(E/F')$ under the Galois representation corresponding to $\mathrm{Sym}^2(\pi')$. This Galois representation is the symmetric square of a representation of $\widetilde{A}_4$ with trivial determinant, and hence factors through $A_4$. As $A_4$ has no elements of order $6$, we arrive at a contradiction, proving that $\zeta=1$. Hence $\pi_{iF'v'} \cong \pi'_{v'}$. \end{proof} Let $\chi \in \widetilde{A}_4^{\wedge}$ be a nontrivial (abelian) character. Then for all places $v$ of $F$ one has an admissible representation $\mathrm{Ind}_{F'}^{F}(\chi)_v$. It is equal to $\otimes_{v'|v} \mathrm{Ind}_{F'_{v'}}^{F_v}(\chi_{v'})$. Note that one does not know a priori whether or not $\mathrm{Ind}_{F'}^F(\chi)$ is automorphic; proving this in a special case is the subject matter of \cite{KimIcos}. By class field theory we can also view $\mathrm{Ind}_{F'}^F(\chi)$ as an $L$-parameter $$ \mathrm{Ind}_{F'}^{F}(\chi):W_{F}' \longrightarrow {}^L\mathrm{GL}_{5F} $$ and with this viewpoint in mind we prove the following lemma: \begin{lem} \label{lem-ind-autom} For each $i \in \{1,2\}$ the $L$-parameter $\mathrm{Ind}_{F'}^F(\chi)$ is associated to $\mathrm{Sym}^4(\pi_i)$. More precisely, $\mathrm{Ind}_{F'}^F(\chi)_v=\pi(\mathrm{Ind}_{F'}^F(\chi)_{v}) \cong \mathrm{Sym}^4(\pi_{iv})$ for all $v \not \in S$. \end{lem} We note that $\mathrm{Sym}^4(\pi_i)$ is an automorphic representation of $\mathrm{GL}_5(\mathbb{A}_F)$ by work of Kim \cite{Kim} and Kim-Shahidi \cite[Theorem 3.3.7]{KScusp}.
\begin{proof} At the level of admissible representations for $v \not \in S$ one has \begin{align} \label{first-frob} \mathrm{Sym}^4(\pi_i)^{\vee}_v \otimes \mathrm{Ind}_{F'}^F(\chi)_v \cong \otimes_{v'|v}\mathrm{Ind}_{F'_{v'}}^{F_v}(\mathrm{Sym}^4(\pi'_{v'})^{\vee} \otimes \chi_{v'}) \end{align} by Frobenius reciprocity. On the other hand $\pi'^{\vee}=\pi(\rho_{2}|_{F'}^{\vee})$ and $$ \mathrm{Sym}^4(\rho_2)^{\vee} \cong \mathrm{Sym}^4(\rho_2^{\vee}) \cong \mathrm{Sym}^4(\rho_2) \cong \mathrm{Ind}_{F'}^F(\chi) \cong \mathrm{Ind}_{F'}^F(\chi)^{\vee} $$ at the level of Galois representations (see Lemma \ref{lem-icosa-reps}). Thus the right hand side of \eqref{first-frob} is isomorphic to $$ \otimes_{v'|v}\mathrm{Ind}_{F'_{v'}}^{F_v}(\mathrm{Ind}_{F'_{v'}}^{F_v}(\chi_{v'})^{\vee}|_{F'_{v'}} \otimes \chi_{v'}) \cong \otimes_{v'|v}\mathrm{Ind}_{F'_{v'}}^{F_v}(\chi_{v'})^{\vee} \otimes \mathrm{Ind}_{F'_{v'}}^{F_v}(\chi_{v'}) $$ and we conclude that \begin{align} \label{sym-isom} \mathrm{Sym}^4(\pi_i)^{\vee}_v \otimes \mathrm{Ind}_{F'}^F(\chi)_v \cong \otimes_{v'|v}\left(\mathrm{Ind}_{F'_{v'}}^{F_v}(\chi_{v'})^{\vee} \otimes \mathrm{Ind}_{F'_{v'}}^{F_v}(\chi_{v'})\right). \end{align} Now if $A$ and $B$ are invertible diagonal $n \times n$ matrices, the eigenvalues of $A$ can be recovered from knowledge of the eigenvalues of $A \otimes B$ and the eigenvalues of $B$. With this remark in hand, we see that \eqref{sym-isom} implies that $$ \mathrm{Sym}^4(\pi_i)_v \cong \mathrm{Ind}_{F'}^F(\chi)_v $$ for all $v \not \in S$. \end{proof} With this preparation in place, we make a step towards proving that $\rho_2$ and $\pi_1$ are associated: \begin{lem} \label{lem-places} Let $v \not \in S$. One has $\pi_{1v} \cong \pi(\rho_{2v})$ or $\pi_{1v} \cong \pi(\xi \circ \rho_{2v})$.
\end{lem} \begin{proof} For infinite places we use our running assumption that $F$ is totally complex together with Lemma \ref{lem-F'}. This allows one to deduce the lemma in this case. Assume now that $v$ is finite and choose $v'|v$. By Lemma \ref{lem-F'}, up to conjugation, the Langlands class of $\pi_{1v}$ and the matrix $\rho_2(\mathrm{Frob}_v)$ satisfy \begin{align*} A(\pi_{1v})=\begin{pmatrix} a\zeta & \\ & a^{-1} \zeta^{-1}\end{pmatrix}\quad \textrm{ and } \quad \rho_2(\mathrm{Frob}_v)=\begin{pmatrix} a &\\ & a^{-1}\end{pmatrix} \end{align*} where $\zeta$ is a $[F'_{v'}:F_v]$th root of unity. Thus if there is a place $v'|v$ such that $[F'_{v'}:F_v]=1$, then we are done. Since $[F':F]=5$, there are two other cases to consider, namely when there is a single $v'|v$ of relative degree $5$ and when there are two places $v'_2,v'_3|v$, one of them of relative degree $2$ and the other of relative degree $3$. By Lemma \ref{lem-ind-autom} the two matrices \begin{align} \label{are-conj} \begin{pmatrix} a^4\zeta^4 & & & & \\ & a^2\zeta^2 & & &\\ & & 1 & & \\ & & & a^{-2} \zeta^{-2} & \\ & & & & a^{-4}\zeta^{-4}\end{pmatrix} \quad \textrm{ and }\quad \begin{pmatrix} a^4 & & & & \\ & a^2 & & &\\ & & 1 & & \\ & & & a^{-2} & \\ & & & & a^{-4}\end{pmatrix} \end{align} are conjugate. We will use this fact and a case-by-case argument to prove the lemma. Assume $[F'_{v'}:F_{v}]=5$. In this case $a+a^{-1}$ is $\pm \frac{\sqrt{5}-1}{2}$ or $\pm \frac{-\sqrt{5}-1}{2}$ by the character table of $\widetilde{A}_5$ in \S \ref{appendix} above, which implies that $a=\pm\nu$ for a primitive fifth root of unity $\nu$.
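Indeed, the values just cited pin down $a$ explicitly: writing $\nu=e^{2\pi i/5}$, one has

```latex
\nu+\nu^{-1}=2\cos\tfrac{2\pi}{5}=\frac{\sqrt{5}-1}{2},
\qquad
\nu^{2}+\nu^{-2}=2\cos\tfrac{4\pi}{5}=\frac{-\sqrt{5}-1}{2},
```

so the four possible values of $a+a^{-1}$ correspond to $a=\nu^{\pm 1}$, $a=-\nu^{\pm 1}$, $a=\nu^{\pm 2}$, and $a=-\nu^{\pm 2}$, respectively, each of the form $\pm\nu_0$ with $\nu_0$ a primitive fifth root of unity.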
We conclude from the conjugacy of the two matrices \eqref{are-conj} that $\zeta \neq \nu^{-1}$. On the other hand, if $\zeta$ is any other fifth root of unity then the matrix $A(\pi_{1v})$ is conjugate to either $\rho_2(\mathrm{Frob}_v)$ or $\xi \circ \rho_2(\mathrm{Frob}_v)$, where as above $\xi$ is the generator of $\mathrm{Gal}(\mathbb{Q}(\sqrt{5})/\mathbb{Q})$. Thus the lemma follows in this case. Assume now that $[F'_{v'}:F_v]=3$; this is the last case we must check. By consulting the character table of $\widetilde{A}_5$ in \S \ref{appendix} we see that $a+a^{-1}=\pm 1$, which implies that $a$ is a primitive $6$th root of unity or a primitive $3$rd root of unity. By the conjugacy of the matrices \eqref{are-conj} we conclude that $\zeta \neq \pm a^{-1}$. Thus if $a$ is a primitive $3$rd root of unity the matrices $$ \begin{pmatrix} a \zeta & \\ & a^{-1} \zeta^{-1} \end{pmatrix}, \, \, \, \begin{pmatrix} a & \\ & a^{-1} \end{pmatrix} $$ are either equal (if $\zeta=1$) or conjugate (if $\zeta \neq a^{-1}$ is a nontrivial $3$rd root of unity). Now suppose that $a$ is a primitive $6$th root of unity; by replacing $a$ by $a^{-1}$ if necessary we may assume that $a=e^{2 \pi i/6}$. In this case the right matrix in \eqref{are-conj} has eigenvalues $e^{\pm 2 \pi i/3},1$ (the first two with multiplicity two and the last with multiplicity one). Since $\zeta \neq \pm a^{-1}$, we must have $\zeta=1$ or $\zeta=e^{-2\pi i /3}$. In the former case $A(\pi_{1v})$ and $\rho_2(\mathrm{Frob}_v)$ are equal and in the latter case they are conjugate.
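The eigenvalue count invoked for the right-hand matrix can be checked directly: with $a=e^{2\pi i/6}$ one has $a^{2}=e^{2\pi i/3}$ and $a^{4}=e^{4\pi i/3}=e^{-2\pi i/3}$, so

```latex
\mathrm{diag}\bigl(a^{4},a^{2},1,a^{-2},a^{-4}\bigr)
=\mathrm{diag}\bigl(e^{-2\pi i/3},\,e^{2\pi i/3},\,1,\,e^{-2\pi i/3},\,e^{2\pi i/3}\bigr),
```

with eigenvalues $e^{\pm 2\pi i/3}$ of multiplicity two each and $1$ of multiplicity one. Likewise, in the case $\zeta=e^{-2\pi i/3}$ one computes $a\zeta=e^{\pi i/3}e^{-2\pi i/3}=e^{-\pi i/3}=a^{-1}$, so $A(\pi_{1v})=\mathrm{diag}(a^{-1},a)$ is indeed conjugate to $\rho_2(\mathrm{Frob}_v)=\mathrm{diag}(a,a^{-1})$.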
\end{proof} A more ``global'' way of stating the lemma is the following corollary: \begin{cor} \label{cor-isob} One has $$ \pi_1 \boxplus \pi_2 \cong \pi(\rho_2 \oplus \xi \circ \rho_2). $$ \end{cor} This is precisely Theorem \ref{main-thm-3-conv}. \begin{proof} This follows from Lemma \ref{lem-places} and \cite[Proposition 4.5]{HennCyc}. To apply \cite[Proposition 4.5]{HennCyc} one uses the fact that the isobaric sum $\pi_1 \boxplus \pi_2$ is necessarily locally generic (see \cite[\S 0.2]{Bernstein} for the nonarchimedean case, which is all we use). \end{proof} Finally, we prove Corollary \ref{cor-artin-cases}: \begin{proof}[Proof of Corollary \ref{cor-artin-cases}] In the notation above, Corollary \ref{cor-isob} implies the following isomorphisms at the level of admissible representations: \begin{align*} \mathrm{Sym}^3(\pi_1) &\cong \pi(\mathrm{Sym}^3(\rho_2))\\ \pi_1 \boxtimes \pi_2 &\cong \pi(\rho_2 \otimes \xi \circ \rho_2)\\ \mathrm{Sym}^4(\pi_1) &\cong \pi(\mathrm{Sym}^4(\rho_2))\\ \mathrm{Sym}^2(\pi_1) \boxtimes \pi_2 &\cong \pi(\mathrm{Sym}^2(\rho_2) \otimes \xi \circ \rho_2). \end{align*} Notice that any irreducible representation of $\mathrm{Gal}(E/F)$ of dimension greater than $3$ is on this list by Lemma \ref{lem-icosa-reps}. Therefore to complete the proof of the corollary it suffices to recall that all of the representations on the left are known to be automorphic. More precisely, the $\mathrm{Sym}^3$ lift was treated by work of Kim and Shahidi \cite[Theorem B]{KiSh}. The Rankin product $\pi_1 \boxtimes \pi_2$ is automorphic by work of Ramakrishnan \cite[Theorem M]{RRS}. The fact that the symmetric fourth is automorphic follows from \cite[Theorem 3.3.7]{KScusp} (see also \cite[Theorem 4.2]{KimIcos}). Finally, for the last case, one can invoke \cite[Theorem A]{KiSh} and \cite[Proposition 4.1]{SW}.
\end{proof} \section{Some group theory} \label{sec-groups} In this section we explain why two group-theoretic assumptions made in theorems \ref{main-thm-1} and \ref{main-thm-1-conv} are essentially no loss of generality. \subsection{Comments on universal perfect central extensions} \label{ssec-upce} The underlying goal of this paper is to study the functorial transfer conjecturally attached to the map of $L$-groups $$ b_{E/F}:{}^L\mathrm{GL}_{nF} \longrightarrow {}^L\mathrm{Res}_{E/F}\mathrm{GL}_{nE} $$ for Galois extensions $E/F$. We explain how ``in principle'' this can be reduced to the study of Galois extensions $E/F$ where $\mathrm{Gal}(E/F)$ is the universal perfect central extension of a finite simple group. Given a Galois extension $E/F$, we can find a chain of subextensions $E_0=E \geq E_1 \geq \cdots \geq E_m=F$ such that $E_i/E_{i+1}$ is Galois with simple Galois group. Using this, one can in principle reduce the study of arbitrary Galois extensions to the study of extensions with simple Galois group\footnote{Of course, this reduction will be subtle; see \cite{LR} and \cite{Rajan3} for the solvable case.}. If the extension is cyclic, then we can apply the body of work culminating in the book of Arthur and Clozel \cite{AC}. We therefore assume for the moment that $\mathrm{Gal}(E/F)$ is a finite simple nonabelian group.
There exists an extension $L/E$ such that $L/F$ is Galois, \begin{align} \label{star} \begin{CD} 1 @>>> \mathrm{Gal}(L/E) @>>>\mathrm{Gal}(L/F) @>>> \mathrm{Gal}(E/F)@>>>1 \end{CD} \end{align} is a central extension and \begin{align} \label{abundant} \mathrm{Gal}(L/L \cap EF^{\mathrm{ab}}) \cong H^2(\mathrm{Gal}(E/F),\mathbb{C}^{\times})^{\wedge}, \end{align} where $F^{\mathrm{ab}}$ is the maximal abelian extension of $F$ (in some algebraic closure) \cite[Theorem 5]{Miyake} (in fact Miyake's theorem is valid for an arbitrary Galois extension $E/F$)\footnote{Here the $\wedge$ denotes the dual, so $H^2(\mathrm{Gal}(E/F),\mathbb{C}^{\times})^{\wedge} \cong H^2(\mathrm{Gal}(E/F),\mathbb{C}^{\times})$ since $H^2(\mathrm{Gal}(E/F),\mathbb{C}^{\times})$ is finite abelian.}. Such an extension $L$ is called an abundant finite central extension in loc.~cit. Choose an abelian extension $F'/F$ such that $$ L \cap EF^{\mathrm{ab}}=EF'. $$ We claim that $\mathrm{Gal}(L/F')$ is the universal perfect central extension of a finite simple group. Indeed, the central extension \eqref{star} restricts to induce a central extension $$ \begin{CD} 1 @>>> \mathrm{Gal}(L/EF') @>>>\mathrm{Gal}(L/F') @>>> \mathrm{Gal}(E/F)@>>>1 \end{CD} $$ Moreover, $L \cap EF^{\mathrm{ab}}=EF'$ implies $L \cap F^{\mathrm{ab}}=F'$; since $\mathrm{Gal}(E/F)$ is a simple nonabelian group, it follows that $\mathrm{Gal}(L/F')$ is perfect. By \eqref{abundant}, we conclude that $\mathrm{Gal}(L/F')$ is the universal perfect central extension of the finite simple group $\mathrm{Gal}(E/F)$ \cite[Proposition 4.228]{Gorenstein}. We observe that if we understand the functorial lifting conjecturally defined by $b_{L/F'}$, then we can ``in principle'' use abelian base change to understand the functorial lifting conjecturally defined by $b_{E/F}$. 
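The smallest instance of the object just constructed may help fix ideas: for $\mathrm{Gal}(E/F) \cong A_5$, the universal perfect central extension is $\mathrm{SL}(2,5) = 2.A_5$. The following brute-force check (standard group theory shown only as an illustration, not a claim of the paper) verifies that $\mathrm{SL}(2,5)$ has order $120$, center of order $2$, and is perfect, so its central quotient is $A_5$ of order $60$:

```python
# Brute-force check that SL(2,5) over F_5 is a perfect central extension of
# a group of order 60 (namely A_5).  Matrices [[a,b],[c,d]] are tuples (a,b,c,d).
from itertools import product

p = 5
def mul(A, B):
    return ((A[0]*B[0] + A[1]*B[2]) % p, (A[0]*B[1] + A[1]*B[3]) % p,
            (A[2]*B[0] + A[3]*B[2]) % p, (A[2]*B[1] + A[3]*B[3]) % p)

def inv(A):  # inverse of a det-1 matrix [[a,b],[c,d]] is [[d,-b],[-c,a]]
    return (A[3] % p, -A[1] % p, -A[2] % p, A[0] % p)

SL25 = [M for M in product(range(p), repeat=4) if (M[0]*M[3] - M[1]*M[2]) % p == 1]
center = [Z for Z in SL25 if all(mul(Z, M) == mul(M, Z) for M in SL25)]

# Commutator subgroup: closure of all commutators [A,B] = A B A^-1 B^-1.
comms = {mul(mul(A, B), mul(inv(A), inv(B))) for A in SL25 for B in SL25}
closure, frontier = set(comms), set(comms)
while frontier:
    new = ({mul(a, b) for a in frontier for b in closure}
           | {mul(b, a) for a in frontier for b in closure})
    frontier = new - closure
    closure |= frontier

# order 120, center {I, -I}, commutator subgroup everything (perfect)
print(len(SL25), len(center), len(closure))
```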
Thus assuming that $\mathrm{Gal}(E/F)$ is the universal perfect central extension of a finite simple group from the outset is essentially no loss of generality. \subsection{Generating $\mathrm{Gal}(E/F)$} \label{gen-gal} In the statement of Theorems \ref{main-thm-1} and \ref{main-thm-1-conv}, we required that $\mathrm{Gal}(E/F)=\langle \tau, \mathrm{Gal}(E/F') \rangle$ for some subfield $E \geq F' \geq F$ with $E/F'$ solvable and some element $\tau$. We also placed restrictions on the order of $\mathrm{Gal}(E/F')$. The theorems we recall in this subsection indicate that these restrictions are little or no loss of generality, and also demonstrate that one has a great deal of freedom in choosing generators of universal perfect central extensions of finite simple groups. To state some results, recall that a finite group $G$ is quasi-simple if $G/Z_G$ is a nonabelian simple group and $G$ is perfect. Thus universal perfect central extensions of simple nonabelian groups are quasi-simple. \begin{thm}[Guralnick and Kantor] \label{thm-GK} Let $G$ be a quasi-simple group, and let $x \in G$ be an element that is not in the center $Z_G$ of $G$. Then there is an element $g \in G$ such that $\langle x,g \rangle=G$. \end{thm} \begin{proof} Let $\overline{x}$ be the image of $x$ in $G/Z_G$. Then there exists a $\overline{g} \in G/Z_G$ such that $\langle \overline{x}, \overline{g}\rangle=G/Z_G$ by \cite[Corollary]{GurKant}. We simply let $g \in G$ be any element mapping to $\overline{g}$. \end{proof} For applications to base change and descent of automorphic representations of $\mathrm{GL}_2$, preliminary investigation indicates that the primes $2$ and $3$ are troublesome. With this in mind, the following theorem might be useful (see \cite[Corollary 8.3]{GurM}): \begin{thm}[Guralnick and Malle] \label{thm-good} Let $G$ be a quasi-simple group. Then $G$ can be generated by two elements of order prime to $6$. 
\qed \end{thm} \section*{Acknowledgments} The author would like to thank A.~Adem for information on finite simple groups, R.~Guralnick and G.~Malle for including a proof of Theorem \ref{thm-good} in \cite{GurM}, and R.~Guralnick for correcting mistakes in \S \ref{gen-gal}. H.~Hahn, R.~Langlands, S.~Morel, Ng\^o B.~C., P.~Sarnak, and N.~Templier deserve thanks for many useful conversations. The author is also grateful for the encouragement of R.~Langlands, Ng\^o B.~C., D.~Ramakrishnan, and P.~Sarnak. R.~Langlands deserves additional thanks in particular for encouraging the author to record the results in this paper. \quash{ He also thanks J.~C.~for everything. } \begin{thebibliography}{} \bibitem[AC]{AC} J.~Arthur and L.~Clozel, \textbf{Simple Algebras, Base Change, and the Advanced Theory of the Trace Formula}, Princeton University Press, 1989. \bibitem[Be]{Bernstein} J.~Bernstein, \emph{$P$-invariant distributions on $\mathrm{GL}(N)$ and the classification of unitary representations of $\mathrm{GL}(N)$ (non-archimedean case)}, in \textbf{Lie Group Representations II}, Springer LNM {\bf 1041}, 1984. \bibitem[B]{Booker} A.~Booker, \emph{Numerical tests of modularity}, J.~Ramanujan Math.~Soc. {\bf 20} No.~4 (2005) 283-339. \bibitem[Bo]{Borel} A.~Borel, \emph{Automorphic $L$-functions}, in \textbf{Automorphic Forms, Representations, and $L$-functions}, Proceedings of Symposia in Pure Mathematics {\bf 33.2}, AMS 1979. \bibitem[Buh]{Buhler} J.~Buhler, \textbf{Icosahedral Galois representations}, LNM {\bf 654}, Springer-Verlag, 1978. \bibitem[C1]{Cog1} J.~Cogdell, \emph{$L$-functions and converse theorems for $\mathrm{GL}_n$}, in \textbf{Automorphic Forms and Applications}, IAS/Park City Math.~Ser. {\bf 12}, AMS 2007. \bibitem[D]{Donnely} H.~Donnelly, \emph{On the cuspidal spectrum for finite volume symmetric spaces}, J.~Differential Geometry {\bf 17} (1982) 239-253. 
\bibitem[DF]{DF} D.~S.~Dummit and R.~M.~Foote, \textbf{Abstract Algebra}, second edition, Prentice Hall, 1999. \bibitem[FLN]{FLN} E.~Frenkel, R.~Langlands, and Ng\^o B.~C., \emph{Formule des traces et fonctorialit\'e: Le d\'ebut d'un programme}, preprint (2010). \bibitem[GJ]{GJ} S.~Gelbart and H.~Jacquet, \emph{A relation between automorphic representations of $\mathrm{GL}(2)$ and $\mathrm{GL}(3)$}, Ann.~scient.~\'Ec.~Norm.~Sup., $4^e$ s\'erie, t.~11 (1978) 471-542. \bibitem[GL]{GerLab} P.~G\'erardin and J.~P.~Labesse, \emph{The solution to a base change problem for $\mathrm{GL}(2)$ (following Langlands, Saito, Shintani)}, in \textbf{Automorphic Forms, Representations, and $L$-functions}, Proceedings of Symposia in Pure Mathematics {\bf 33.2}, AMS 1979. \bibitem[Go]{Gorenstein} D.~Gorenstein, \textbf{Finite Simple Groups: An Introduction to their Classification}, Plenum Press, 1982. \bibitem[GrRe]{GR} B.~Gross and M.~Reeder, \emph{Arithmetic invariants of discrete Langlands parameters}, Duke Math. J. {\bf 154} (2010) 431-508. \bibitem[GuK]{GurKant} R.~M.~Guralnick and W.~M.~Kantor, \emph{Probabilistic generation of finite simple groups}, J.~Algebra {\bf 234} (2000) 743-792. \bibitem[GuM]{GurM} R.~M.~Guralnick and G.~Malle, \emph{Products of conjugacy classes and fixed point spaces}, JAMS {\bf 25} (2012) 77-121. \bibitem[HT]{HT} M.~Harris and R.~Taylor, \textbf{The Geometry and Cohomology of Some Simple Shimura Varieties}, Annals of Math. Studies, Princeton University Press, 2001. \bibitem[H1]{HennCyc} G.~Henniart, \emph{On the local Langlands conjecture for $\mathrm{GL}(n)$: The cyclic case}, Annals of Math. {\bf 123} No.~1 (1986) 145-203. \bibitem[H2]{PreuveHenn} G.~Henniart, \emph{Une preuve simple des conjectures de Langlands pour $\mathrm{GL}(n)$ sur un corps $p$-adique}, Invent.~Math. {\bf 139} (2000) 439-455. \bibitem[HoR]{HR} J.~Hoffstein and D.~Ramakrishnan, \emph{Siegel zeros and cusp forms}, IMRN No.~6 (1995) 279-308. 
\bibitem[IS]{IS} H.~Iwaniec and P.~Sarnak, \emph{Perspectives on the analytic theory of $L$-functions}, GAFA Special Volume (2000) 705-741. \bibitem[JL]{JacquetLanglands} H.~Jacquet and R.~Langlands, \textbf{Automorphic Forms on $\mathrm{GL}(2)$}, LNM {\bf 114}, Springer-Verlag, 1970. \bibitem[JShI]{JS} H.~Jacquet and J.~Shalika, \emph{On Euler products and the classification of automorphic representations I}, AJM {\bf 103} No.~3 (1981) 499-558. \bibitem[JShII]{JSII} H.~Jacquet and J.~Shalika, \emph{On Euler products and the classification of automorphic representations II}, AJM {\bf 103} No.~4 (1981) 777-815. \bibitem[J2]{JacquetRS} H.~Jacquet, \emph{Archimedean Rankin-Selberg integrals}, in \textbf{Automorphic Forms and $L$-functions II, Local Aspects}, AMS 2009. \bibitem[K1]{Kim} H.~Kim, \emph{Functoriality for the exterior square of $\mathrm{GL}_4$ and the symmetric fourth of $\mathrm{GL}_2$}, JAMS {\bf 16} No.~1 (2003) 139-183. \bibitem[K2]{KimIcos} H.~Kim, \emph{An example of non-normal quintic automorphic induction and modularity of symmetric powers of cusp forms of icosahedral type}, Invent.~Math. {\bf 156} (2004) 495-502. \bibitem[KSh1]{KiSh} H.~Kim and F.~Shahidi, \emph{Functorial products for $\mathrm{GL}_2 \times \mathrm{GL}_3$ and the symmetric cube for $\mathrm{GL}_2$}, Annals of Math. {\bf 155} (2002) 837-893. \bibitem[KSh2]{KScusp} H.~Kim and F.~Shahidi, \emph{Cuspidality of symmetric powers with applications}, Duke Math.~J. {\bf 112} No.~1 (2002) 177-197. \bibitem[Kn]{Knapp} A.~Knapp, \textbf{Representation Theory of Semisimple Groups: An Overview Based on Examples}, Princeton University Press, 1986. \bibitem[Ko]{KottTama} R.~E.~Kottwitz, \emph{Tamagawa numbers}, Annals of Math. {\bf 127} No.~3 (1988) 629-646. \bibitem[La1]{LanglProb} R.~P.~Langlands, \emph{Problems in the theory of automorphic forms}, in \textbf{Lectures in Modern Analysis and Applications}, LNM {\bf 170}, Springer 1970. 
\bibitem[La2]{LanglEinM} R.~P.~Langlands, \emph{Automorphic representations, Shimura varieties and motives. Ein M\"archen}, in \textbf{Automorphic Forms, Representations, and $L$-functions}, Proceedings of Symposia in Pure Mathematics {\bf 33.2}, AMS 1979. \bibitem[La3]{LanglNotion} R.~P.~Langlands, \emph{On the notion of an automorphic representation}, in \textbf{Automorphic Forms, Representations, and $L$-functions}, Proceedings of Symposia in Pure Mathematics {\bf 33.1}, AMS 1979. \bibitem[La4]{Langlands} R.~P.~Langlands, \textbf{Base Change for $GL(2)$}, Annals of Mathematics Studies {\bf 96}, Princeton University Press, 1980. \bibitem[La5]{LanglandsArch} R.~P.~Langlands, \emph{The classification of representations of real reductive groups}, in {\bf Math.~Surveys and Monographs 31}, AMS 1988. \bibitem[La6]{LanglBeyond} R.~P.~Langlands, \emph{Beyond endoscopy}, in \textbf{Contributions to Automorphic Forms, Geometry, and Number Theory: A Volume in Honor of Joseph Shalika}, The Johns Hopkins University Press, 2004. \bibitem[La7]{LSing} R.~P.~Langlands, \emph{Singularit\'es et transfert}, to appear in Annales des Sciences Math\'ematiques du Qu\'ebec. \bibitem[LLa]{LabLan} J.-P.~Labesse and R.~P.~Langlands, \emph{$L$-indistinguishability for $\mathrm{SL}(2)$}, Canad. J.~Math. {\bf 31} (1979) 726-785. \bibitem[LapRo]{LR} E.~Lapid and J.~Rogawski, \emph{On twists of cuspidal representations of $\mathrm{GL}(2)$}, Forum Mathematicum {\bf 10} (1998) 175-197. \bibitem[Lau]{Laumon} G.~Laumon, \textbf{Cohomology of Drinfeld Modular Varieties, Part I: Geometry, counting of points and local harmonic analysis}, Cambridge University Press, 1996. \bibitem[LiSha]{LS} M.~W.~Liebeck and A.~Shalev, \emph{Fuchsian groups, finite simple groups and representation varieties}, Invent.~Math. {\bf 159} (2005) 317-367. \bibitem[LuRS]{LRS} W.~Luo, Z.~Rudnick, and P.~Sarnak, \emph{On the generalized Ramanujan conjectures for $\mathrm{GL}(n)$}, in Proc. Symp.~Pure Math. {\bf 66} Part 2 (1999) 301-311. 
\bibitem[Mi]{Miyake} K.~Miyake, \emph{Central extensions and Schur's multiplicators of Galois groups}, Nagoya Math.~J. {\bf 90} (1983) 137-144. \bibitem[MoeW1]{MW} C.~Moeglin and J.-L.~Waldspurger, \textbf{Spectral Decomposition and Eisenstein Series}, Cambridge University Press, 1995. \bibitem[MoeW2]{MW2} C.~Moeglin and J.-L.~Waldspurger, \emph{Le spectre r\'esiduel de $\mathrm{GL}(n)$}, Annales Scientifiques de l'\'E.~N.~S. {\bf 22} No.~4 (1989) 605-674. \bibitem[Mo]{Moreno} C.~J.~Moreno, \textbf{Advanced Analytic Number Theory: $L$-functions}, AMS 2005. \bibitem[M\"uSp]{MS} W.~M\"uller and B.~Speh, \emph{On the absolute convergence of the spectral side of the Arthur trace formula for $\mathrm{GL}_2$}, GAFA {\bf 14} (2004) 58-93. \bibitem[R1]{Rajan2} C.~S.~Rajan, \emph{On the vanishing of the measurable Schur cohomology groups of Weil groups}, Compos.~Math. {\bf 140} No.~1 (2004) 84-98. \bibitem[R2]{Rajan3} C.~S.~Rajan, \emph{On the image and fibres of solvable base change}, MRL {\bf 9} (2002) 499-508. \bibitem[Ra1]{RRS} D.~Ramakrishnan, \emph{Modularity of the Rankin-Selberg $L$-series and multiplicity one for $\mathrm{SL}(2)$}, Ann. of Math. {\bf 152} No.~1 (2000) 45-111. \bibitem[S]{Sarnak} P.~Sarnak, \emph{Comments on Robert Langlands' lecture: ``Endoscopy and Beyond''}, available at www.math.princeton.edu/sarnak. \bibitem[Se]{SerreFG} J.-P.~Serre, \textbf{Linear Representations of Finite Groups}, Springer, 1977. \bibitem[T]{Tate} J.~Tate, \emph{Number theoretic background}, in \textbf{Automorphic Forms, Representations, and $L$-functions}, Proceedings of Symposia in Pure Mathematics {\bf 33.2}, AMS (1979) 3-26. \bibitem[V]{Venk} A.~Venkatesh, \emph{Beyond endoscopy and the classification of special forms on $\mathrm{GL}(2)$}, J.~f\"ur die reine und angew. Math. {\bf 577} (2004) 23-80. \bibitem[Vo]{Vogan} D.~Vogan, \emph{Gel'fand-Kirillov dimension for Harish-Chandra modules}, Invent.~Math. {\bf 48} (1978) 449-505. 
\bibitem[W]{SW} S.~Wang, \emph{On the symmetric powers of cusp forms on $\mathrm{GL}(2)$ of icosahedral type}, IMRN {\bf 44} (2003) 2373-2390. \end{thebibliography} \end{document}
\begin{document} \title{Electric Current Induced by Microwave Stark Effect of Electrons on Liquid Helium} \author{T. Wang\inst{1} \and M. Zhang\inst{1} \thanks{\emph{e-mail:} [email protected] (corresponding author)} \and L. F. Wei\inst{2} \thanks{\emph{e-mail:} [email protected]} } \institute{School of Physics Science and Technology, Southwest Jiaotong University, Chengdu 610031, China \and School of Information Science and Technology, Southwest Jiaotong University, Chengdu 610031, China} \date{Received: date / Revised version: date} \abstract{ We propose a frequency-mixing effect between terahertz (THz) and gigahertz (GHz) electromagnetic waves in the cryogenic system of electrons floating on a liquid helium surface. The THz wave is near-resonant with the transition frequency between the lowest two levels of the surface-state electrons. The GHz wave does not excite transitions but, owing to the symmetry-broken eigenstates of electrons on liquid helium, generates a GHz-varying Stark shift. We show that an effective coupling between the input THz and GHz waves appears at the critical point where the detuning between the electrons and the THz wave equals the frequency of the GHz wave. Through this coupling, the THz and GHz waves cooperatively excite the electrons and generate low-frequency ac currents perpendicular to the liquid helium surface, which can be detected experimentally by the image-charge approach [Phys. Rev. Lett. {\bf 123}, 086801 (2019)]. This offers an alternative approach to THz detection. 
\PACS{ {73.20.-r}{Electron states at surfaces and interfaces} \and {42.50.Ct}{Quantum description of interaction of light and matter; related experiments} \and {42.50.-p}{Quantum optics} \and {85.35.Be}{Quantum well devices (quantum dots, quantum wires, etc.)} } } \maketitle \section{Introduction} \label{sec:1} Electrons floating on a liquid helium surface have been considered as an alternative physical system for implementing quantum computation~\cite{Platzman1967,PhysRevX.6.011031, PhysRevLett.89.245301,PhysRevB.67.155402, PhysRevLett.107.266803,PhysRevLett.101.220501, PhysRevLett.105.040503,Monarkha2019, PhysRevLett.124.126803,Spin,PhysRevB.86.205408}. Sommer first noticed that there is a barrier at the surface of liquid helium that prevents electrons from entering its interior~\cite{PhysRevLett.12.271}. Together with the attractive force of the image charge in the liquid helium, this barrier makes a surface electron behave as a one-dimensional (1D) hydrogen atom above the liquid helium surface~\cite{RevModPhys.46.451,PhysRevLett.32.280,PhysRevB.13.140}. The two lowest levels of these 1D hydrogen atoms have been proposed to encode qubits, which offer good scalability due to the strong Coulomb interactions between electrons~\cite{Platzman1967}. However, single-electron detection is still very difficult in experiments. Recently, E.~Kawakami et al. used the image-charge approach to experimentally detect the Rydberg states of surface electrons on liquid helium~\cite{PhysRevLett.123.086801}. Compared with the conventional microwave-absorption method, this method may be more effective for detecting the quantum state of surface electrons~\cite{Elarabi2021,PhysRevLett.126.106802}. The transition frequency between the lowest two levels of surface electrons lies in the terahertz (THz) regime, which suggests applications in the fields of THz detection and generation. 
The THz spectrum lies between the usual microwave and infrared regions and has many practical applications~\cite{NaturePhotonics}, such as THz wireless communication~\cite{THZ-Wireless Communication}, astronomical observation~\cite{EPJP-astronomy}, monitoring of the Earth's atmosphere~\cite{watter}, and food safety~\cite{food}. In particular, in biological and medical fields~\cite{biology,APL}, THz imaging causes less damage to the body than X-rays while offering higher resolution than mechanical ultrasound. In this paper, we show a frequency-mixing effect of THz and gigahertz (GHz) electromagnetic waves in the system of electrons floating on liquid helium. This effect could be applied to detect THz radiation. In our mixer, the input THz wave near-resonantly excites the transition between the lowest two levels of the surface-state electrons. The GHz wave does not excite transitions but generates a GHz-varying Stark shift. This Stark effect has no counterpart in systems of natural atoms: an effective coupling between natural atoms and microwaves requires either highly excited Rydberg states of $n\approx47$~\cite{Jing2020} or magnetic coupling (with strong microwave input). Here, the surface-bound states of the electrons are symmetry broken due to the barrier at the liquid surface, and therefore the GHz wave can produce a GHz-varying energy gap of the electrons on liquid helium through the lowest-order electric dipole interaction. In particular, we find an effective coupling between the input THz and GHz waves, which appears under the frequency condition that the detuning between the electrons and the THz wave equals the frequency of the GHz wave. Through this coupling, the THz and GHz waves cooperatively excite the electrons and generate low-frequency ac currents perpendicular to the liquid helium surface, which can be detected experimentally using the image-charge approach~\cite{PhysRevLett.123.086801,Elarabi2021,PhysRevLett.126.106802}. 
\section{The Hamiltonian} \label{sec:2} Due to the image potential and the barrier at the liquid helium surface, the electronic motion along the direction perpendicular to the surface can be treated as a 1D hydrogen atom. Its energy levels are well known~\cite{PhysRevA.61.034901,PhysRevA.80.055801}: \begin{equation} \begin{aligned} E_n=-\frac{\Lambda^{2}e^{4}m_e}{2n^{2}\hbar^{2}}\,, \end{aligned} \label{E1} \end{equation} with $\Lambda=(\varepsilon-1)/[4(\varepsilon+1)]$. Here $e$ is the electron charge, $m_e$ the electron mass, and $\varepsilon=1.0568$ the dielectric constant of liquid helium. Numerically, the transition frequency between the lowest two levels is $\omega_{12}=(E_2-E_1)/\hbar\simeq0.1~$THz (corresponding to a wavelength of $\lambda\simeq3$~mm). The eigenfunction of the above energy level reads \begin{equation} \begin{aligned} \psi_{n}(z)=2n^{-5/2}r_{B}^{-3/2}z \exp\left(\frac{-z}{nr_B}\right)L_{n-1}^{(1)}\left(\frac{2z}{nr_{B}}\right)\,, \end{aligned} \label{E2} \end{equation} with $r_{B}=\hbar^{2}/(m_{e}e^{2}\Lambda)\simeq76\rm{\AA}$ being the effective Bohr radius, and \begin{equation} \begin{aligned} L_{n}^{(\alpha)}(x)=\frac{e^{x}x^{-\alpha}}{n!}\frac{{\rm d}^{n}}{{\rm d}x^{n}} (e^{-x}x^{n+\alpha})\,, \end{aligned} \label{E3} \end{equation} the generalized Laguerre polynomial. Keeping only the lowest two levels of Eq.~(\ref{E1}), the Hamiltonian of an electron driven by classical electromagnetic fields can be written as \begin{equation} \begin{aligned} \hat{H}_0=\hat{H}_{00}+\hat{V}\,, \end{aligned} \label{E4} \end{equation} with the bare-atomic part \begin{equation} \begin{aligned} \hat{H}_{00}=E_1|1\rangle\langle1|+E_2|2\rangle\langle2|\,, \end{aligned} \label{E5} \end{equation} and the electric dipole interaction \begin{equation} \begin{aligned} \hat V&=e\textbf{z}\cdot \textbf{E} =(\textbf{u}_{11}\hat{\sigma}_{11}+\textbf{u}_{22}\hat{\sigma}_{22} +\textbf{u}_{12}\hat{\sigma}_{12}+\textbf{u}_{21}\hat{\sigma}_{21})\cdot\textbf{E}\,. 
\end{aligned} \label{E6} \end{equation} Here, $\textbf{u}_{ij}=e\langle i|\textbf{z}|j\rangle=e\textbf{z}_{ij}$ is the electric dipole moment, and $\hat{\sigma}_{ij}=|i\rangle\langle j|$ the two-level operator, with $i,j=1,2$. Note that $\langle 1|\textbf{z}|1\rangle \neq 0$ and $\langle 2|\textbf{z}|2\rangle \neq 0$ for the wave function (\ref{E2}). For the electric field $\textbf{E}$, we first consider the case of two-wave mixing, i.e., \begin{equation} \begin{aligned} \textbf{E}(t)&=\textbf{E}_{T}\cos(\omega_{T}t+\theta_{T}) +\textbf{E}_{G}\cos(\omega_{G}t+\theta_{G})\,, \end{aligned} \label{E7} \end{equation} with $\omega_{T}\simeq 0.1$~THz and $\omega_{G}\simeq 1$~GHz. Here $\textbf{E}_{T}$ and $\textbf{E}_{G}$ are the amplitudes of the input THz and GHz waves, and $\theta_{T}$ and $\theta_{G}$ their initial phases, respectively. Within the usual rotating-wave approximation, $\omega_{G}$ acts only on the first two terms in Eq.~(\ref{E6}), and $\omega_{T}$ only on the last two terms. Then, the Hamiltonian (\ref{E4}) in the usual interaction picture [defined by the unitary operator $\hat{U}_0=\exp(-i\hat{H}_{00}t/\hbar)$] can be approximately written as \begin{equation} \begin{aligned} \hat{H}=\hbar\left[\Omega_{11}(t) \hat{\sigma}_{11}+\Omega_{22}(t)\hat{\sigma}_{22} +\Omega_{12}(t)\hat{\sigma}_{12}+\Omega_{21}(t)\hat{\sigma}_{21}\right]\,, \end{aligned} \label{E8} \end{equation} with the time-dependent coupling frequencies $\Omega_{11}(t)=2\Omega_{110} \cos(\omega_{G}t+\theta_{G})$, $\Omega_{22}(t)=2\Omega_{220}\cos(\omega_{G}t+\theta_{G})$, and $\Omega_{12}(t)=\Omega_{21}^{\ast}(t)=\Omega_{120}\exp(it\Delta)\exp(i\theta_{T})$. The $\Omega_{110}=\textbf{u}_{11}\cdot\textbf{E}_{G}/(2\hbar)$ and $\Omega_{220}=\textbf{u}_{22}\cdot\textbf{E}_{G}/(2\hbar)$ arise from the symmetry-broken eigenstates of the electron floating on the liquid helium surface. 
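The numerical scales quoted above, $\omega_{12}\simeq0.1$~THz and $r_B\simeq76~\rm{\AA}$, are easy to verify. The following sketch uses the hydrogen scaling $E_n=-\Lambda^2\,{\rm Ry}/n^2$ and $r_B=a_0/\Lambda$ with standard CODATA constants (the constant values are assumptions of this sketch, not data from the paper):

```python
# Sanity check of the level formula E_n = -Lambda^2 e^4 m_e / (2 n^2 hbar^2):
# scaled hydrogen with coupling Lambda, so E_n = -Lambda^2 * Ry / n^2.
eps = 1.0568                        # dielectric constant of liquid helium
Lam = (eps - 1) / (4 * (eps + 1))   # image-charge coupling Lambda

Ry_eV = 13.605693                   # hydrogen Rydberg energy (eV), CODATA
a0_m = 5.2917721e-11                # hydrogen Bohr radius (m), CODATA
eV_J = 1.6021766e-19                # 1 eV in joules
h_Js = 6.6260702e-34                # Planck constant (J s)

E1 = -Lam**2 * Ry_eV                # n = 1 level (eV)
E2 = E1 / 4                         # n = 2 level (eV)
f12_THz = (E2 - E1) * eV_J / h_Js / 1e12
rB_A = a0_m / Lam * 1e10            # effective Bohr radius (angstrom)

print(f"f12 ~ {f12_THz:.3f} THz, r_B ~ {rB_A:.1f} A")
assert 0.10 < f12_THz < 0.13        # matches the quoted ~0.1 THz
assert 75 < rB_A < 78               # matches the quoted ~76 angstrom
```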
The $\Omega_{120}=\textbf{u}_{12}\cdot\textbf{E}_{T}/(2\hbar)$ is the standard Rabi frequency of the electric dipole transition, and $\Delta=\omega_T-\omega_{12}$ is the detuning between the THz wave and the resonant frequency $\omega_{12}$ of the electronic transition $|1\rangle\rightleftharpoons|2\rangle$. To analyze the effects of THz-GHz mixing, we apply a second unitary transformation $\hat{U}=\exp[-i\hat{h}(t)/\hbar]$ to the electronic states. Here, \begin{equation} \begin{aligned} \hat{h}(t) =\hbar[\theta_{11}(t)\hat{\sigma}_{11}+\theta_{22}(t)\hat{\sigma}_{22}]\,, \end{aligned} \label{E9} \end{equation} \begin{equation} \begin{aligned} \theta_{11}(t)=\int_{0}^{t}\Omega_{11}(t){\rm d}t =\frac{2\Omega_{110}}{\omega_{G}}\sin(\omega_{G}t+\theta_{G})\,, \end{aligned} \label{E10} \end{equation} and \begin{equation} \begin{aligned} \theta_{22}(t)=\int_{0}^{t}\Omega_{22}(t){\rm d}t =\frac{2\Omega_{220}}{\omega_{G}}\sin(\omega_{G}t+\theta_{G})\,. \end{aligned} \label{E11} \end{equation} As a consequence, Hamiltonian (\ref{E8}) in the new interaction picture reads \begin{equation} \begin{aligned} \hat{H}_{I}=\hbar\Omega_{120} \left[e^{i\phi(t)}\hat{\sigma}_{12}+ e^{-i\phi(t)}\hat{\sigma}_{21}\right]\,, \end{aligned} \label{E12} \end{equation} with \begin{equation} \begin{aligned} e^{i\phi(t)}=e^{i\theta_T} e^{i\Delta t}e^{i\xi\sin(\omega_{G}t+\theta_{G})}\,, \end{aligned} \label{E13} \end{equation} and \begin{equation} \begin{aligned} \xi=\frac{2(\Omega_{110}-\Omega_{220})}{\omega_{G}}\,. 
\end{aligned} \label{E14} \end{equation} According to the well-known Jacobi-Anger expansion, we can write (\ref{E13}) as \begin{equation} \begin{aligned} e^{i\phi(t)}=&e^{i\theta_T} e^{i\Delta t}\sum\limits_{n=-\infty}^{\infty} J_{n}(\xi)e^{in(\omega_{G}t+\theta_{G})}\\ \\ =&J_0(\xi)e^{i\theta_T}e^{i\Delta t} +J_{-1}(\xi) e^{i[(\Delta-\omega_G)t+(\theta_T-\theta_G)]} +J_{1}(\xi) e^{i[(\Delta+\omega_G)t+(\theta_T+\theta_G)]}\\ \\ &+\cdots\,, \end{aligned} \label{E15} \end{equation} with $J_{n}(\xi)=(-1)^nJ_{-n}(\xi)$ being the Bessel function of the first kind. If the field strength of the GHz wave is weak, i.e., $\xi\ll1$, and the frequency condition $\Delta=\omega_G\gg\Omega_{120}$ is satisfied, then Eq.~(\ref{E15}) reduces to \begin{equation} \begin{aligned} e^{i\phi(t)}\approx&J_{-1}(\xi) e^{i(\theta_T-\theta_G)}\approx-\frac{\xi}{2}e^{i(\theta_T-\theta_G)}\,, \end{aligned} \label{E16} \end{equation} within the rotating-wave approximation. As a consequence, Hamiltonian (\ref{E12}) reduces to \begin{equation} \begin{aligned} \hat{H}_{R}=\hbar\Omega_{\rm eff} \left[e^{i(\theta_T-\theta_G)}\hat{\sigma}_{12}+ e^{-i(\theta_T-\theta_G)}\hat{\sigma}_{21}\right]\,, \end{aligned} \label{E17} \end{equation} which generates the standard Rabi oscillation at the frequency $\Omega_{\rm eff}=\Omega_{120}(\Omega_{220}-\Omega_{110})/\omega_G$. This Hamiltonian implies that the electrons floating on liquid helium may serve as a frequency mixer generating a low-frequency signal with $\Omega_{\rm eff}\ll \omega_G\ll \omega_T$. \section{The master equation} \label{sec:3} We use the standard master equation for classical electromagnetic waves interacting with a two-level system~\cite{RevModPhys.77.633} to numerically solve the dynamical evolution of the surface-state electrons on liquid helium. 
The master equation reads \begin{equation} \begin{aligned} \frac{\rm{d}\hat{\rho}}{\rm{d}t}=-\frac{i}{\hbar}[\hat{H},\hat{\rho}] +\hat{L}(\hat{\rho})\,, \end{aligned} \label{E18} \end{equation} with the decoherence operator \begin{equation} \begin{aligned} \hat{L}(\hat{\rho})=&\frac{\Gamma_{21}}{2} (2\hat{\sigma}_{12}\hat{\rho}\hat{\sigma}_{21}-\hat{\sigma}_{22}\hat{\rho}-\hat{\rho}\hat{\sigma}_{22})\\ &+\frac{\gamma_{1}}{2}(2\hat{\sigma}_{11}\hat{\rho}\hat{\sigma}_{11}-\hat{\sigma}_{11}\hat{\rho}-\hat{\rho}\hat{\sigma}_{11})\\ &+\frac{\gamma_{2}}{2}(2\hat{\sigma}_{22}\hat{\rho}\hat{\sigma}_{22}-\hat{\sigma}_{22}\hat{\rho}-\hat{\rho}\hat{\sigma}_{22})\,. \end{aligned} \label{E19} \end{equation} Above, $\hat{\rho}=|\psi\rangle\langle\psi|=\sum_{i,j}^2\rho_{ij}|i\rangle\langle j|$, with $\sum_{i=1}^2\rho_{ii}=1$ and $\rho_{ij}=\rho_{ji}^{\ast}$. The $\rho_{ij}$ are the density matrix elements of the two-level system, $\Gamma_{21}$ is the spontaneous decay rate of $|2\rangle\rightarrow|1\rangle$, and $\gamma_{1}$ and $\gamma_{2}$ describe the energy-conserving dephasing~\cite{RevModPhys.77.633}. It is worth noting that the unitary operator $\hat{U}=\exp[-i\hat{h}(t)/\hbar]$ used for the Hamiltonian transformation (\ref{E8}) $\rightarrow$ (\ref{E12}) does not change the form of the master equation. The proof is as follows. First, we rewrite the density operator as $\hat{\rho}=|\psi\rangle\langle\psi| =\hat{U}\hat{\rho}_{I}\hat{U}^{\dagger}$ with $\hat{\rho}_{I}=|\psi'\rangle\langle\psi'|$, and rewrite the Hamiltonian (\ref{E8}) as $\hat{H}=\hat{H}_{I0}+\hat{H}_{\rm int}$, with $\hat{H}_{I0}=\hbar\left[\Omega_{11}(t) \hat{\sigma}_{11}+\Omega_{22}(t)\hat{\sigma}_{22}\right]$ and $\hat{H}_{\rm int}=\hbar\left[\Omega_{12}(t)\hat{\sigma}_{12} +\Omega_{21}(t)\hat{\sigma}_{21}\right]$. 
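For later use in the proof, note that conjugating the transition operator by $\hat{U}$ produces only a phase, since $\hat{U}$ is diagonal in the $\{|1\rangle,|2\rangle\}$ basis:

```latex
% U = exp[-i(theta_11 sigma_11 + theta_22 sigma_22)] acts diagonally, so
\begin{align*}
\hat{U}^{\dagger}\hat{\sigma}_{12}\hat{U}
&= e^{i\theta_{11}(t)}\,|1\rangle\langle 2|\,e^{-i\theta_{22}(t)}
 = \hat{\sigma}_{12}\,e^{i[\theta_{11}(t)-\theta_{22}(t)]}
 = \hat{\sigma}_{12}\,e^{i\xi\sin(\omega_{G}t+\theta_{G})},
\end{align*}
```

because $\theta_{11}(t)-\theta_{22}(t)=[2(\Omega_{110}-\Omega_{220})/\omega_{G}]\sin(\omega_{G}t+\theta_{G})=\xi\sin(\omega_{G}t+\theta_{G})$ by Eqs.~(\ref{E10}), (\ref{E11}), and (\ref{E14}); the adjoint relation for $\hat{\sigma}_{21}$ follows by Hermitian conjugation.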
Our unitary operator obeys the commutation relation $[\hat{U},\hat{H}_{I0}]=0$, so \begin{equation} \begin{aligned} \frac{d\hat{\rho}}{dt}&=\frac{d\hat{U}}{dt}\hat{\rho}_{I} \hat{U}^{\dagger}+\hat{U}\frac{d\hat{\rho}_{I}}{dt}\hat{U}^{\dagger} +\hat{U}\hat{\rho}_{I}\frac{d\hat{U}^{\dagger}}{dt}\\ &=\hat{U}\frac{d\hat{\rho}_{I}}{dt}\hat{U}^{\dagger} +\frac{-i}{\hbar}[\hat{H}_{I0},\hat{U}\hat{\rho}_{I}\hat{U}^{\dagger}]\,. \end{aligned} \label{E20} \end{equation} Second, according to the original master equation (\ref{E18}), we have \begin{equation} \begin{aligned} \frac{d\hat{\rho}}{dt}&=\frac{-i}{\hbar}[\hat{H},\hat{\rho}]+\hat{L}(\hat{\rho})\\ &=\frac{-i}{\hbar} [\hat{H}_{\rm int},\hat{U}\hat{\rho}_{I}\hat{U}^{\dagger}] +\frac{-i}{\hbar}[\hat{H}_{I0},\hat{U}\hat{\rho}_{I}\hat{U}^{\dagger}] +\hat{L}(\hat{\rho})\\ &=\frac{-i}{\hbar}\hat{U}[\hat{H}_{I}, \hat{\rho}_{I}]\hat{U}^{\dagger} +\frac{-i}{\hbar}[\hat{H}_{I0},\hat{U}\hat{\rho}_{I}\hat{U}^{\dagger}] +\hat{L}(\hat{\rho})\,, \end{aligned} \label{E21} \end{equation} where $\hat{H}_{I}=\hat{U}^\dagger\hat{H}_{\rm int}\hat{U}$ is nothing but Eq.~(\ref{E12}). The decoherence operator obeys \begin{equation} \begin{aligned} \hat{L}(\hat{\rho})&=\hat{U}\hat{L}(\hat{\rho}_{I})\hat{U}^{\dagger}\,, \end{aligned} \label{E22} \end{equation} because $\hat{U}^{\dagger}\hat{\sigma}_{11}\hat{U}=\hat{\sigma}_{11}$, $\hat{U}^{\dagger}\hat{\sigma}_{22}\hat{U}=\hat{\sigma}_{22}$, and $\hat{\sigma}_{12}\hat{\rho}\hat{\sigma}_{21} =\hat{U}\hat{\sigma}_{12}\hat{\rho}_{I}\hat{\sigma}_{21}\hat{U}^\dagger$, with $\hat{U}^{\dagger}\hat{\sigma}_{12}\hat{U} =\hat{\sigma}_{12}\exp[i\xi\sin(\omega_{G}t+\theta_{G})]$ and $\hat{U}^{\dagger}\hat{\sigma}_{21}\hat{U} =\hat{\sigma}_{21}\exp[-i\xi\sin(\omega_{G}t+\theta_{G})]$. Finally, comparing Eqs. 
(\ref{E20}) and (\ref{E21}), we find \begin{equation} \begin{aligned} \hat{U}\frac{d\hat{\rho}_{I}}{dt}\hat{U}^{\dagger} =\frac{-i}{\hbar}\hat{U}[\hat{H}_{I}, \hat{\rho}_{I}]\hat{U}^{\dagger} +\hat{U}\hat{L}(\hat{\rho}_{I})\hat{U}^{\dagger}\,, \end{aligned} \label{E23} \end{equation} and obtain the master equation in the interaction picture, \begin{equation} \begin{aligned} \frac{d\hat{\rho}_{I}}{dt} =\frac{-i}{\hbar}[\hat{H}_{I},\hat{\rho}_{I}] +\hat{L}(\hat{\rho}_{I})\,, \end{aligned} \label{E24} \end{equation} which has the same form as (\ref{E18}). Moreover, using the above approach one easily finds that the usual unitary transformation $\hat{U}_0$ also does not change the form of the master equation. Compared with the original Hamiltonian (\ref{E8}), the Hamiltonian $\hat{H}_I$ in the interaction picture shows the frequency-matching condition between the electron and the microwaves more clearly; as proved above, $\hat{H}_I$ is equally valid for numerically (exactly) solving the master equation. \section{The results and discussion} \label{sec:4} According to the master equation (\ref{E24}), we obtain the following equations for the density matrix elements: \begin{align} \label{E25} \frac{{\rm d}\rho_{22}}{{\rm d}t} & =i\rho_{21}\Omega_{120}e^{i\phi(t)} -i\rho_{12}\Omega_{120}e^{-i\phi(t)}-\Gamma_{21}\rho_{22}\,,\\ \label{E26} \frac{{\rm d}\rho_{12}}{{\rm d}t}&= i(1-2\rho_{22})\Omega_{120}e^{i\phi(t)} -\frac{1}{2}(\Gamma_{21}+\gamma_{1}+\gamma_{2})\rho_{12}\,. \end{align} In the experimental systems~\cite{PhysRevLett.123.086801,Elarabi2021,PhysRevLett.126.106802}, the liquid helium surface (with the surface-state electrons) is set approximately midway between the two plates of a parallel-plate capacitor, and the image charge induced on one of the capacitor plates is described by $Q_{\rm image}\approx Q_{e}\langle z\rangle/D$. Here $Q_{e}=eN$ is the charge of the $N$ electrons on the liquid helium surface, and $\langle z\rangle$ is their average height. 
$D$ is the distance between the two parallel plates of the capacitor. In terms of density matrix elements, the expectation value $\langle z\rangle$ in the Schr\"{o}dinger picture is described by \begin{equation} \begin{aligned} \langle z\rangle =&\rho_{11}z_{11}+\rho_{22}z_{22}+2 z_{21}{\rm Re}\left[\rho_{12}e^{i\omega_{12}t} e^{-i\xi\sin(\omega_{G}t+\theta_{G})}\right]\\ =&z_{11}+(z_{22}-z_{11})\rho_{22}\\ &+2 z_{21}{\rm Re}\left[{\rm Re}(\rho_{12})e^{i\omega_{12}t} e^{-i\xi\sin(\omega_{G}t+\theta_{G})} +{\rm Im}(\rho_{12})e^{i\omega_{12}t} e^{-i\xi\sin(\omega_{G}t+\theta_{G})}e^{i\frac{\pi}{2}}\right]\,. \end{aligned} \label{E27} \end{equation} Numerically, the matrix elements are $z_{11}=1.5r_{B}$, $z_{22}=6r_{B}$, and $z_{21}=-0.5587r_{B}$. Note that the term $\exp(i\omega_{12}t)$ in Eq.~(\ref{E27}) is due to the standard unitary transformation $\hat{U}_0$ in Sec.~\ref{sec:2} applied to the interaction Hamiltonian (\ref{E8}), and $\exp[-i\xi\sin(\omega_{G}t+\theta_{G})]$ is due to the second unitary transformation $\hat{U}$ employed for Hamiltonian (\ref{E12}). In particular, the term $\exp(i\omega_{12}t)=\exp[i(\omega_{T}-\Delta)t]$ oscillates rapidly at the intrinsic frequency of the two-level electrons on liquid helium and generates the THz radiation. According to our rotating-wave approximated Hamiltonian (\ref{E17}), the density matrix elements $\rho_{22}$ and $\rho_{12}$ vibrate with the Rabi frequency $2\Omega_{\rm eff}\ll\Omega_G\ll\Omega_T$. The rotating-wave terms in Eq.~(\ref{E15}), such as $\exp(i\Delta t)$, vibrate with much higher frequencies than $\Omega_{\rm eff}$, but have small amplitudes as shown in Fig.~\ref{fig:1}.
The frequency distribution functions in Fig.~\ref{fig:1} are obtained by numerically evaluating the following Fourier transforms \begin{equation} \begin{aligned} \rho_{22}(T)=\int_{-\infty}^{\infty}\psi_{22}(\omega)e^{-i\omega T}{\rm d}\omega\,, \end{aligned} \label{E28} \end{equation} \begin{equation} \begin{aligned} {\rm Re}(\rho_{12}(T))=\int_{-\infty}^{\infty}\psi_{R12}(\omega)e^{-i\omega T}{\rm d}\omega\,, \end{aligned} \label{E29} \end{equation} and \begin{equation} \begin{aligned} {\rm Im}(\rho_{12}(T))=\int_{-\infty}^{\infty}\psi_{I12}(\omega)e^{-i\omega T}{\rm d}\omega\,. \end{aligned} \label{E30} \end{equation} Here $\psi_{22}(\omega)$ is the amplitude of a harmonic vibrational component in the function $\rho_{22}(T)$, with $T=\Omega_{120}t$ being the dimensionless time. The frequency of this component (harmonic vibration) is $\omega \Omega_{120}$, with $\omega$ being a dimensionless coefficient. The explanations for $\psi_{R12}(\omega)$ and $\psi_{I12}(\omega)$ are similar. \begin{figure} \caption{The numerical solutions of the frequency distribution functions $\psi_{22}$, $\psi_{R12}$, and $\psi_{I12}$.} \label{fig:1} \end{figure} As shown in Fig.~\ref{fig:1}, the high-frequency vibrational components in $\rho_{12}$ are negligible, so the off-diagonal term in Eq.~(\ref{E27}) is still the THz vibration due to the characteristic term $\exp(i\omega_{12} t)$. The THz waves travel in free space, and thus the off-diagonal term in Eq.~(\ref{E27}) is negligible for the microwave circuit experiments~\cite{PhysRevLett.123.086801,Elarabi2021,PhysRevLett.126.106802}, and the formula of the vibrational image charge reduces to \begin{equation} \begin{aligned} Q_{\rm image}(t)&= \frac{Q_{e}(z_{22}-z_{11})}{D}\rho_{22}\,. \end{aligned} \label{E31} \end{equation} As was mentioned at the beginning of this paper, the GHz wave cannot directly excite the transition $|1\rangle\rightleftharpoons|2\rangle$; the THz wave acts as a trigger to set the system to work.
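In practice the spectra in Eqs.~(\ref{E28})--(\ref{E30}) can be estimated from a sampled trajectory with a discrete Fourier transform. The sketch below does this for a synthetic stand-in for $\rho_{22}(T)$; the two component frequencies ($0.1$ and $5$ in units of $\Omega_{120}$) are assumptions chosen only to mimic a slow Rabi component plus a weak, fast rotating-wave component.

```python
import numpy as np

# Synthetic stand-in for rho_22(T): a slow Rabi-like part plus a weak
# fast part (both frequencies are illustrative assumptions).
T = np.linspace(0.0, 200.0, 4096, endpoint=False)
sig = 0.5 * (1 - np.cos(0.1 * T)) + 0.02 * np.cos(5.0 * T)

dT = T[1] - T[0]
psi = np.fft.rfft(sig) * dT / (2 * np.pi)           # discrete analogue of psi_22
omega = 2 * np.pi * np.fft.rfftfreq(sig.size, d=dT)  # angular frequency axis

# The dominant non-dc peak should sit near the slow frequency 0.1.
mask = omega > 0.05
peak_omega = omega[mask][np.argmax(np.abs(psi[mask]))]
```

The weak component at $\omega=5$ appears in the spectrum with an amplitude far below the slow peak, which is the behaviour the figure illustrates.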
This mechanism is shown in Fig.~\ref{fig:2}, where the detuning between the THz wave and the electron is set at $\Delta=\omega_G$. The electric field strength and the frequency of the GHz microwave are set at $E_{G0}=1$~V/cm and $\omega_G=1$~GHz, respectively. \begin{figure} \caption{Numerical solutions of $\rho_{22}$.} \label{fig:2} \end{figure} In Fig.~\ref{fig:2}, the red line is the solution of the rotating-wave approximated Hamiltonian $\hat{H}_{R}$, i.e., Eq.~(\ref{E17}). This low-frequency signal is damped because the Hamiltonian $\hat{H}_{R}$ is time-independent. For a time-independent Hamiltonian, the master equation has the steady-state solution, i.e., ${\rm d}\rho_{ij}/{\rm d}t=0$ with $t\rightarrow\infty$~\cite{ZHANG201412}. The same phenomenon can be found in the blue line in Fig.~\ref{fig:2}, which describes the excitation by the THz wave only. In this case, the Hamiltonian (\ref{E12}) reduces to $\hat{H}_{\rm blue}=\hbar\Omega_{120} [\exp(i\Delta t+i\theta_T)\hat{\sigma}_{12}+ \exp(-i\Delta t-i\theta_T)\hat{\sigma}_{21}]$ with $\xi=0$. This Hamiltonian can also be written in a time-independent form, i.e., $\hat{H}'_{\rm blue}=\hbar\Omega_{120} [\exp(i\theta_T)\hat{\sigma}_{12}+ \exp(-i\theta_T)\hat{\sigma}_{21}]-\hbar\Delta |1\rangle\langle 1|$, by simply employing the unitary transformation $\exp(i\Delta t|1\rangle\langle 1|)$. So the blue line is also damped. Readers may notice that the blue line vibrates much faster than the red line. This can be simply explained by the analytic solution of the Schr\"{o}dinger equation for $\hat{H}'_{\rm blue}$. The solution gives the electron excitation probability $P_e=[\Omega_{120}^2/(2\Delta^2_{\Omega})]\times[1-\cos(2\Delta_{\Omega}t)]$ with the frequency $\Delta_{\Omega}=\sqrt{(\Delta^2/4)+\Omega_{120}^2}$\,.
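The amplitude and frequency of these detuned Rabi oscillations follow directly from the expression for $P_e$. Evaluating it numerically with the illustrative values $\Omega_{120}=1$ and $\Delta=20$ (same units; assumptions, not the experimental parameters) shows the maximum excitation probability $\Omega_{120}^2/\Delta_{\Omega}^2\approx 1/101$:

```python
import numpy as np

# Detuned Rabi oscillation from the analytic solution for H'_blue.
# Omega_120 = 1 and Delta = 20 (in the same units) are illustrative.
Om, Delta = 1.0, 20.0
D_Om = np.sqrt(Delta**2 / 4 + Om**2)   # generalized Rabi frequency Delta_Omega
t = np.linspace(0.0, 10.0, 100001)
P_e = Om**2 / (2 * D_Om**2) * (1 - np.cos(2 * D_Om * t))

P_max = P_e.max()   # approaches Om**2 / D_Om**2, i.e. about 0.0099 here
```

This makes the trade-off explicit: a large detuning speeds up the oscillation ($2\Delta_\Omega\approx\Delta$) while strongly suppressing its amplitude.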
In the large-detuning regime, i.e., $\Delta\gg\Omega_{120}$, the vibrational frequency $\Delta_{\Omega}$ is obviously larger than the standard Rabi frequency $\Omega_{120}$, but the amplitude is small (the rotating-wave effect). The green line is the numerical solution of Hamiltonian (\ref{E12}) without the rotating-wave approximation. In this case, the vibration contains not only the low-frequency component but also high-frequency components. A persistent oscillation exists, because Hamiltonian (\ref{E12}) contains the rotating-wave terms and cannot be written in a time-independent form by any unitary transformation, nor can the original Hamiltonian (\ref{E8}). However, the frequency of the persistent oscillation is on the order of that of the input GHz wave and is not the usable signal that a frequency mixer should produce. Thus we suggest using the low-frequency signal predicted by the red line to detect the THz input, although the signal is damped. This means that a pulsed source of GHz microwave is needed. Furthermore, we note that the above detuning is set at $\Delta=\omega_G$. This condition may not be exactly satisfied in practical experiments, and therefore the effective Hamiltonian (\ref{E17}) becomes \begin{equation} \begin{aligned} \hat{H}_{R}=\hbar\Omega_{\rm eff} \left[e^{i(\theta_T-\theta_G)}e^{i\delta t}\hat{\sigma}_{12}+ e^{-i\delta t}e^{-i(\theta_T-\theta_G)}\hat{\sigma}_{21}\right]\,, \end{aligned} \label{E32} \end{equation} with $\delta=\Delta-\omega_G$. The solution of this Hamiltonian is similar to that of $\hat{H}_{\rm blue}$, but it can still generate the low-frequency signal with frequency $\sqrt{(\delta^2/4)+\Omega_{\rm eff}^2}$, given $\delta\sim\Omega_{\rm eff}$. \begin{figure} \caption{Numerical solutions of $\rho_{22}$.} \label{fig:3} \end{figure} Finally, we describe a possibility for generating the persistent signal, i.e., monitoring the incoming THz wave online.
Based on the above discussion, we apply another GHz wave $\textbf{E}_{G2}\cos(\omega_{G2}t+\theta_{G2})$ to the electrons, so that the input field (\ref{E7}) extends to \begin{equation} \begin{aligned} \textbf{E}'(t)&=\textbf{E}_{T}\cos(\omega_{T}t+\theta_{T}) +\textbf{E}_{G}\cos(\omega_{G}t+\theta_{G}) +\textbf{E}_{G2}\cos(\omega_{G2}t+\theta_{G2})\,, \end{aligned} \label{E33} \end{equation} and consequently, Eqs.~(\ref{E12}) and (\ref{E13}) become, respectively, \begin{equation} \begin{aligned} \hat{H}'_{I}=\hbar\Omega_{120} \left[e^{i\phi'(t)}\hat{\sigma}_{12}+ e^{-i\phi'(t)}\hat{\sigma}_{21}\right]\,, \end{aligned} \label{E34} \end{equation} and \begin{equation} \begin{aligned} e^{i\phi'(t)}=e^{i\theta_T} e^{i\Delta t}e^{i\xi\sin(\omega_{G}t+\theta_{G})} e^{i\xi_2\sin(\omega_{G2}t+\theta_{G2})}\,. \end{aligned} \label{E35} \end{equation} The derivation for $\xi_2=2[\Omega_{110}(\textbf{E}_{G2})-\Omega_{220}(\textbf{E}_{G2})]/\omega_{G2}$ is similar to that of the parameter $\xi$ in Eq.~(\ref{E14}). Considering also $\xi_2\ll1$, we have \begin{equation} \begin{aligned} e^{i\phi'(t)}\approx-\frac{\xi}{2} e^{i(\Delta-\omega_G)t}e^{i\theta_{TG}} -\frac{\xi_2}{2}e^{i(\Delta-\omega_{G2})t}e^{i\theta_{TG2}}\,, \end{aligned} \label{E36} \end{equation} by employing the rotating-wave approximation again, where $\theta_{TG}=\theta_{T}-\theta_{G}$ and $\theta_{TG2}=\theta_{T}-\theta_{G2}$. Furthermore, considering the typical case where $\xi\approx\xi_2$ and $\delta=\Delta-\omega_G=\omega_{G2}-\Delta$, the effective Hamiltonian (\ref{E17}) becomes \begin{equation} \begin{aligned} \hat{H}'_{R}\approx\hbar\Omega_{\rm eff} \left\{e^{i\theta_{TG}}\left[e^{i\delta t}+e^{-i\delta t}e^{i(\theta_{G}-\theta_{G2})}\right]\hat{\sigma}_{12}+ e^{-i\theta_{TG}}\left[e^{-i\delta t}+e^{i\delta t}e^{-i(\theta_{G}-\theta_{G2})}\right]\hat{\sigma}_{21}\right\}\,.
\end{aligned} \label{E37} \end{equation} Due to the small detuning $\delta$, this Hamiltonian can never be written in a time-independent form by any unitary transformation, and therefore the master equation has no steady-state solution. This causes the persistently oscillating charge, as shown in Fig.~\ref{fig:3}, which serves as a source of low-frequency microwave output and could be applied to realize real-time detection of the incoming THz wave. \section{Conclusion} In summary, we have studied the frequency-mixing effects of THz and GHz waves in the cryogenic system of electrons floating on liquid helium. Unlike in natural atoms, the transition frequency between the lowest two levels of electrons floating on liquid helium is in the THz regime. Moreover, the surface states of the electrons are symmetry-broken due to the barrier at the liquid surface. Therefore, both the THz and GHz waves can effectively drive the electrons via the electric dipole interaction. Specifically, the THz wave near-resonantly excites the transition between the lowest two levels of the surface-bound electrons, while the GHz wave does not excite the transition but generates the GHz-varying Stark effect. Using the unitary transformation approach and the rotating-wave approximation, we found an effective Hamiltonian (\ref{E17}) of two-wave mixing with the detuning $\Delta=\omega_{12}-\omega_T=\omega_G$, which can generate a significant ac current with a frequency much lower than GHz, as shown in Fig.~\ref{fig:2}. The Hamiltonian is time-independent, so the generated low-frequency signal is damped due to the decay of the surface-state electrons. To generate a persistent low-frequency signal, a time-dependent Hamiltonian is required, for example, the effective Hamiltonian (\ref{E37}) proposed for the case of three-wave mixing. \textbf{Acknowledgements}: This work was supported by the National Natural Science Foundation of China, Grants No.~12047576 and No.~11974290.
\textbf{Data Availability Statement}: The data generated or analyzed during this study are included in this published article. \end{document}
\begin{document} \title{A Systematic Study of Online Class Imbalance Learning with Concept Drift} \author{Shuo~Wang,~\IEEEmembership{Member,~IEEE,} Leandro L.~Minku,~\IEEEmembership{Member,~IEEE,} and Xin~Yao,~\IEEEmembership{Fellow,~IEEE} \thanks{S. Wang and X. Yao are with the Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA), School of Computer Science, The University of Birmingham, Edgbaston, Birmingham B15 2TT, UK. E-mail: \{S.Wang, X.Yao\}@cs.bham.ac.uk.} \thanks{L. L. Minku is with the Department of Informatics, University of Leicester, Leicester LE1 7RH, UK. E-mail: [email protected].}} \markboth{IEEE Transactions on Neural Networks and Learning Systems,~Vol.~xx, No.~x, February~2017} {Wang \MakeLowercase{\textit{et al.}}: A Systematic Study of Online Class Imbalance Learning with Concept Drift} \maketitle \begin{abstract} As an emerging research topic, online class imbalance learning often combines the challenges of both class imbalance and concept drift. It deals with data streams having very skewed class distributions, where concept drift may occur. It has recently received increased research attention; however, very little work addresses the combined problem where both class imbalance and concept drift coexist. As the first systematic study of handling concept drift in class-imbalanced data streams, this paper first provides a comprehensive review of current research progress in this field, including current research focuses and open challenges. Then, an in-depth experimental study is performed, with the goal of understanding how to best overcome concept drift in online learning with class imbalance. Based on the analysis, a general guideline is proposed for the development of an effective algorithm. \end{abstract} \begin{IEEEkeywords} Online learning, class imbalance, concept drift, resampling.
\end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} \label{sec:intro} With the wide application of machine learning algorithms to the real world, class imbalance and concept drift have become crucial learning issues. Applications in various domains such as risk management~\cite{Sousa2016pe}, anomaly detection~\cite{Meseguer2010bo}, software engineering~\cite{Wang2013hi}, and social media mining~\cite{Sun2016ba} are affected by both class imbalance and concept drift. Class imbalance happens when the data categories are not equally represented, i.e., at least one category is a minority compared to the other categories~\cite{He2009kx}. It can cause learning bias towards the majority class and poor generalization. Concept drift is a change in the underlying distribution of the problem, and is a significant issue especially when learning from data streams~\cite{Minku:2010uq}. It requires learners to be adaptive to dynamic changes. Class imbalance and concept drift can significantly hinder predictive performance, and the problem becomes particularly challenging when they occur simultaneously. This challenge arises from the fact that one problem can affect the treatment of the other. For example, drift detection algorithms based on the traditional classification error may be sensitive to the imbalanced degree and become less effective, while class imbalance techniques need to be adaptive to changing imbalance rates, otherwise the class receiving the preferential treatment may not be the correct minority class at the current moment. Although there have been papers studying data streams with an imbalanced distribution and data streams with concept drift respectively, very little work discusses the cases when both class imbalance and concept drift exist. This paper aims to provide a systematic study of handling concept drift in class-imbalanced data streams. We focus on online (i.e.
one-by-one) learning, which is a more difficult case than chunk-based learning, because only a single instance is available at a time. We first give a comprehensive review of current research progress in this field, including problem definitions, problem and approach categorization, performance evaluation and up-to-date approaches. It reveals new challenges and research gaps. Most existing work focuses on the concept drift in posterior probabilities (i.e. real concept drift~\cite{Gama2014re}, changes in $P\left(y \mid \mathbf{x} \right)$). The challenges in other types of concept drift have not been fully discussed and addressed. In particular, the change in prior probabilities $P\left( y \right)$ is closely related to class imbalance and has been overlooked by most existing work. Most proposed concept drift detection approaches are designed for and tested on balanced data streams. Very few approaches aim to tackle class imbalance and concept drift simultaneously. Among the limited solutions, it is still unclear which approach is better and when. It is also unknown whether and how applying class imbalance techniques (e.g. resampling methods) affects concept drift detection and online prediction. To fill in the research gaps, we then provide an experimental insight into how to best overcome concept drift in online learning with class imbalance, by focusing on three research questions: 1) What are the challenges in detecting each type of concept drift when the data stream is imbalanced? 2) Among the proposed methods designed for online class imbalance learning with concept drift, which one performs better for which type of concept drift? 3) Would applying class imbalance techniques (e.g. resampling methods) facilitate concept drift detection and online prediction?
Six recent approaches, DDM-OCI~\cite{Wang2013bp}, LFR~\cite{Wang2015zo}, PAUC-PH~\cite{Brzezinski2015ib}~\cite{Brzezinski2017el}, OOB~\cite{Wang2014vb}, RLSACP~\cite{Ghazikhani2013wm} and ESOS-ELM~\cite{Mirza2015nt}, are compared and analysed in depth under each of the three fundamental types of concept drift (i.e. changes in the prior probability $P\left( y \right)$, the class-conditional probability density function (pdf) $p\left(\mathbf{x}\mid y\right)$ and the posterior probability $P\left(y \mid \mathbf{x} \right)$) in artificial data streams, as well as real-world data sets. To the best of our knowledge, they are among the very few methods that have been explicitly designed for online learning problems with class imbalance and concept drift so far. Finally, based on the review and experimental results, we provide some guidelines for developing an effective algorithm for learning from imbalanced data streams with concept drift. We stress the importance of studying the mutual effect of class imbalance and concept drift. The contributions of this paper include: this is the first comprehensive study that looks into concept drift detection in class-imbalanced data streams; data problems are categorized into different types of concept drift and class imbalance with illustrative applications; existing approaches are compared and analysed systematically for each type; pros and cons of each approach are investigated; the results provide guidance for choosing the appropriate technique and developing better algorithms for future learning tasks; this is also the first work exploring the role of class imbalance techniques in concept drift detection, which sheds light on whether and how to tackle class imbalance and concept drift simultaneously. The rest of this paper is organized as follows. Section~\ref{sec:framework} formulates the learning problem, including a learning framework, detailed problem descriptions, and an introduction to class imbalance and concept drift individually.
Section~\ref{sec:overcomeboth} reviews the combined issue of class imbalance and concept drift, including example applications and existing solutions. Section~\ref{sec:exp} carries out the experimental study, aiming to find out the answers to the three research questions. Section~\ref{sec:con} draws the conclusions and points out potential future directions. \section{Online Learning Framework with Class Imbalance and Concept Drift} \label{sec:framework} In data stream applications, data arrives over time in streams of examples or batches of examples. The information up to a specific time step $t$ is used to build/update predictive models, which then predict the new example(s) arriving at time step $t+1$. Learning under such conditions needs chunk-based learning or online learning algorithms, depending on the number of training examples available at each time step. According to the most widely agreed definitions~\cite{Minku:2010uq}~\cite{Ditzler2015hq}, chunk-based learning algorithms process a batch of data examples at each time step, such as the case of daily internet usage from a set of users; online learning algorithms process examples one by one and the predictive model is updated after receiving each example~\cite{Oza2001fk}, such as the case of sensor readings at every second in engineering systems. The term ``incremental learning'' is also frequently used under this scenario. It usually refers to any algorithm that can process data streams, provided certain criteria are met~\cite{Polikar2001oq}. On one hand, online learning can be viewed as a special case of chunk-based learning. Online learning algorithms can be used to deal with data coming in batches. They both build and continuously update a learning model to accommodate newly available data, and simultaneously maintain its performance on old data, giving rise to the stability-plasticity dilemma~\cite{Grossberg1988as}.
On the other hand, the way of designing online and chunk-based learning algorithms can be very different~\cite{Minku:2010uq}. Most chunk-based learning algorithms are not suitable for online learning tasks, because batch learners process a chunk of data each time, possibly using an offline learning algorithm for each chunk. Online learning requires the model to be adapted immediately upon seeing a new example, after which the example is immediately discarded; this makes it possible to process high-speed data streams. From this point of view, designing online learning algorithms can be more challenging, but it has so far received much less attention than chunk-based learning. First, the online learner needs to learn from a single data example, so it needs a more sophisticated training mechanism. Second, data streams are often non-stationary (concept drift). The limited availability of training examples at the current moment in online learning hinders the detection of such changes and the application of techniques to overcome the change. Third, it is often seen that data is class imbalanced in many classification tasks, such as the fault detection task in an engineering system, where the fault is always the minority. Class imbalance aggravates the learning difficulty~\cite{He2009kx} and complicates the data status~\cite{Wang2013po}. However, there is a severe lack of research addressing the combined issue of class imbalance and concept drift in online learning. To fill in this research gap, this paper aims at a comprehensive review of the work done to overcome class imbalance and concept drift, a systematic study of learning challenges, and an in-depth analysis of the performance of current approaches. We begin by formalizing the learning problem in this section.
\subsection{Learning Procedure} \label{subsec:procedure} In supervised online classification, suppose a data generating process provides a sequence of examples $\left( \mathbf{x_t}, y_t \right)$ arriving one at a time from an unknown probability distribution $p_t\left(\mathbf{x},y \right)$. $\mathbf{x_t}$ is the input vector belonging to an input space $X$, and $y_t$ is the corresponding class label belonging to the label set $Y = \left\{ c_1, \ldots, c_N \right\}$. We build an online classifier $F$ that receives the new input $\mathbf{x_t}$ at time step $t$ and then makes a prediction. The predicted class label is denoted by $\hat{y}_t$. After some time, the classifier receives the true label $y_t$, which is used to evaluate the predictive performance and to further train the classifier. This whole process is repeated at the following time steps. It is worth pointing out that we do not assume that new training examples always arrive at regular and pre-defined intervals here. In other words, the actual time interval between time steps $t$ and $t+1$ may be different from the actual time interval between $t+1$ and $t+2$. One challenge arises when data is class imbalanced. Class imbalance is an important data feature, commonly seen in applications such as spam filtering~\cite{Nishida2008fk} and fault diagnosis~\cite{Meseguer2010bo}~\cite{Wang2013hi}. It is the phenomenon when some classes of data are highly under-represented (i.e. minority) compared to other classes (i.e. majority). For example, if $P\left ( c_i \right )\ll P\left ( c_j \right )$, then $c_j$ is a majority class and $c_i$ is a minority class. The difficulty in learning from imbalanced data is that the relatively or absolutely underrepresented class cannot draw equal attention from the learning algorithm, which often leads to very specific classification rules or missing rules for this class without much generalization ability for future prediction.
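A minimal sketch of this test-then-train (prequential) loop, using an online logistic-regression learner on a synthetic stream whose minority class arrives with probability $0.1$ (the learner, the stream and all parameter values are illustrative assumptions, not part of the framework itself):

```python
import numpy as np

rng = np.random.default_rng(42)
w, b, lr = np.zeros(2), 0.0, 0.5      # online logistic-regression learner
n_correct, n_steps = 0, 2000

for t in range(n_steps):
    # A new labelled example (x_t, y_t) arrives; class 1 is the ~10% minority.
    y = int(rng.random() < 0.1)
    x = rng.normal(loc=2.0 * y, scale=1.0, size=2)
    # 1) Predict on x_t first ...
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    y_hat = int(p >= 0.5)
    n_correct += int(y_hat == y)
    # 2) ... then receive y_t, evaluate, and update the model with one SGD
    #    step, after which the example is discarded.
    w += lr * (y - p) * x
    b += lr * (y - p)

prequential_accuracy = n_correct / n_steps
```

Note that under class imbalance this overall accuracy can look deceptively high even when the minority class is poorly recognized, which is why class-wise performance measures are preferred in this setting.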
It has been well-studied in offline learning~\cite{Japkowicz:2002ul}, and has attracted growing attention in data stream learning in recent years~\cite{Hoens2012uq}. In many applications, such as energy forecasting and climate data analysis~\cite{Monteiro2009nw}, the data generator operates in nonstationary environments. This gives rise to another challenge, called ``concept drift''. It means that the probability density function (pdf) of the data generating process is changing over time. For such cases, the fundamental assumption of traditional data mining -- that the training and testing data are sampled from the same static and unknown distribution -- does not hold anymore. Therefore, it is crucial to monitor the underlying changes, and adapt the model to accommodate the changes accordingly. When both issues exist, the online learner needs to be carefully designed for effectiveness, efficiency and adaptivity. An online class imbalance learning framework was proposed in~\cite{Wang2013po} as a guide for algorithm design. The framework breaks down the learning procedure into three modules -- a class imbalance detector, a concept drift detector and an adaptive online learner, as illustrated in Fig.~\ref{fig:framework}. \begin{figure} \caption{Learning framework for online class imbalance learning~\cite{Wang2013po}.} \label{fig:framework} \end{figure} The class imbalance detector reports the current class imbalance status of data streams. The concept drift detector captures concept drifts involving changes in classification boundaries. Based on the information provided by the first two modules, the adaptive online learner determines when and how to respond to the detected class imbalance and concept drift, in order to maintain its performance. The learning objective of an online class imbalance algorithm can be described as ``recognizing minority-class data effectively, adaptively and timely without sacrificing the performance on the majority class''~\cite{Wang2013po}.
\subsection{Problem Descriptions} \label{subsec:probdesp} A more detailed introduction to class imbalance and concept drift is given here individually, including the terminology, research focuses and state-of-the-art approaches. The purpose of this section is to understand the fundamental issues that we need to take extra care of in online class imbalance learning. We also aim at understanding whether and how current research in class imbalance learning and concept drift detection is individually related to the combined issue elaborated later in Section~\ref{sec:overcomeboth}, rather than to provide an exhaustive list of approaches in the literature. Among others, we will answer the following questions: \textit{can existing class imbalance techniques process data streams? Would existing concept drift detectors be able to handle imbalanced data streams?} \subsubsection{\textbf{Class imbalance}} In class imbalance problems, the minority class is usually much more difficult or expensive to collect than the majority class, such as the spam class in spam filtering and the fraud class in credit card application. Thus, misclassifying a minority-class example is more costly. Unfortunately, the performance of most conventional machine learning algorithms is significantly compromised by class imbalance, because they assume or expect balanced class distributions or equal misclassification costs. Their training procedure, with the aim of maximizing overall accuracy, often leads to a high probability of the induced classifier predicting an example as the majority class, and a low recognition rate on the minority class. In reality, it is common to see that the majority class has accuracy close to 100\% and the minority class has very low accuracy between 0\% and 10\%~\cite{Kubat:1998bs}.
The negative effect of class imbalance on classifiers, such as decision trees~\cite{Japkowicz:2002ul}, neural networks~\cite{Visa:2005ai}, k-Nearest Neighbour (kNN)~\cite{Kubat:1997yg}~\cite{Batista:2004oq}~\cite{Zhang:2003ve} and SVM~\cite{Yan:2003vn}~\cite{Wu:2003qe}, has been studied. A classifier that provides a balanced degree of predictive performance for all classes is required. The major research questions in this area are summarized and answered as follows:\\ \noindent (a) \textit{How do we define the imbalanced degree of data?} It seems to be a trivial question. However, there is no consensus on the definition in the literature. To describe how imbalanced the data is, researchers choose to use the percentage of the minority class in the data set~\cite{Hulse:2007eu}, the size ratio between classes~\cite{Lopez2013ls}, or simply a list of the number of examples in each class~\cite{Chawla:2002yq}. The coefficient of variance is used in~\cite{Hoens2012lq}, which is less straightforward. The description of imbalance status may not be a crucial issue in offline learning, but becomes more important in online learning, because there is no static data set in online scenarios. It is necessary to have some measurement automatically describing the up-to-date imbalanced degree and techniques monitoring the changes in class imbalance status. This will help the online learner to decide when and how to tackle class imbalance. The issue of changes in class imbalance status is relevant to concept drift, which will be further discussed in the next subsection. To define the imbalanced degree suitable for online learning, a real-time indicator was proposed -- time-decayed class size~\cite{Wang2013po}, expressing the size percentage of each class in the data stream. It is updated incrementally at each time step by using a time decay (forgetting) factor, which emphasizes the current status of data and weakens the effect of old data. 
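A sketch of this indicator: each class's size percentage $w_k$ is updated at every time step with a decay factor $\theta$. The update rule below, $w_k(t)=\theta\, w_k(t-1)+(1-\theta)\,[y_t=c_k]$, is one common form of such a time-decayed estimate; the value $\theta=0.9$ and the two-class stream whose imbalance flips halfway are illustrative assumptions.

```python
import numpy as np

def update_class_sizes(w, y_t, theta=0.9):
    """One incremental update of the time-decayed class sizes w[k]."""
    ind = np.zeros_like(w)
    ind[y_t] = 1.0
    return theta * w + (1.0 - theta) * ind

# A stream whose imbalance flips halfway: class 1 is first absent, then
# dominant (an illustrative change in the prior P(y)).
w = np.full(2, 0.5)
history = []
for y in [0] * 300 + [1] * 300:
    w = update_class_sizes(w, y)
    history.append(w.copy())
# history[299] is dominated by class 0, history[-1] by class 1, while
# the entries of w always sum to one.
```

Because old examples are forgotten exponentially, the indicator tracks the current imbalance status instead of averaging over the whole stream, which is what lets an online learner react to changes in $P(y)$.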
Based on this, a class imbalance detector was proposed to determine which classes should be regarded as the minority/majority and how imbalanced the current data stream is, and then used for designing better online classifiers~\cite{Wang2014vb}~\cite{Wang2013hi}. The merit of this indicator is that it is suitable for data with an arbitrary number of classes. \noindent (b) \textit{When does class imbalance matter?} It has been shown that class imbalance is not the only problem responsible for the performance reduction of classifiers. Classifiers' sensitivity to class imbalance also depends on the complexity and overall size of the data set. Data complexity comprises issues such as overlapping~\cite{Batista:2005uq}~\cite{Prati:2004kx} and small disjuncts~\cite{Jo2004bh}. The degree of overlapping between classes and how the minority class examples distribute in data space aggravate the negative effect of class imbalance. The small disjunct problem is associated with within-class imbalance~\cite{Japkowicz:2003pd}. Regarding the size of the training data, a very large domain has a good chance that the minority class is represented by a reasonable number of examples, and thus may be less affected by imbalance than a small domain containing very few minority class examples. In other words, the rarity of the minority class can be in a relative or absolute sense in terms of the number of available examples~\cite{He2009kx}. In particular, the authors in~\cite{Napierala2012bo}~\cite{Napierala2016gr} distinguished and analysed four types of data distributions in the minority class -- safe, borderline, outliers and rare examples. Safe examples are located in homogeneous regions populated by examples from one class only; borderline examples are scattered in the boundary regions between classes, where the examples from both classes overlap; rare examples and outliers are singular examples located deeper in the regions dominated by the majority class.
Borderline, rare and outlier data sets were found to be the real source of difficulty in learning imbalanced data sets offline, and have also been shown to be the harder cases in online applications~\cite{Wang2014vb}. Therefore, for any algorithm dealing with imbalanced data online, it is worth discussing its performance on data with different types of distributions.\\ \noindent (c) \textit{How can we tackle class imbalance effectively (state-of-the-art solutions)?} A number of algorithms have been proposed to tackle class imbalance at the data and algorithm levels. Data-level algorithms include a variety of resampling techniques that manipulate the training data to rectify the skewed class distribution. They oversample minority-class examples (i.e. expand the minority class), undersample majority-class examples (i.e. shrink the majority class), or combine both, until the data set is relatively balanced. Random oversampling and random undersampling are the simplest and most popular resampling techniques, where examples are randomly chosen to be added or removed. There are also smart resampling techniques (a.k.a. guided resampling). For example, SMOTE~\cite{Chawla:2002yq} is a widely used oversampling method, which generates new minority-class data points based on the similarities between original minority-class examples in the feature space. Other smart oversampling techniques include Borderline-SMOTE~\cite{Han:2005sf}, ADASYN~\cite{He:2008cr} and MWMOTE~\cite{Barua2014mq}, to name but a few. Smart undersampling techniques include Tomek links~\cite{TOMEK:1976la}, one-sided selection~\cite{Kubat:1997kx}, the neighbourhood cleaning rule~\cite{Jorma:2001kx}, etc. The effectiveness of resampling techniques has been demonstrated in real-world applications~\cite{Hao2014po}. They work independently of classifiers, and are thus more versatile than algorithm-level methods.
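The core idea behind SMOTE can be sketched in a few lines: a synthetic minority example is placed on the line segment between a minority example and one of its $k$ nearest minority-class neighbours. The sketch below is a simplified illustration of that interpolation idea, not the published algorithm; the point set, $k$ and the function name are made up:

```python
import math
import random

def smote_like_oversample(minority, k=2, n_new=5, seed=0):
    """SMOTE-style sketch: interpolate between a random minority example and
    one of its k nearest minority-class neighbours (Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: math.dist(x, p),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random position on the segment between x and nb
        synthetic.append(tuple(xi + gap * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3)]
new_points = smote_like_oversample(minority, k=2, n_new=4)
```

Because each synthetic point is a convex combination of two existing minority examples, all generated points stay inside the convex hull of the minority class; this is also why such techniques need a static pool of minority examples and cannot be applied to online streams directly.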
The key is to choose an appropriate sampling rate~\cite{Estabrooks:2004ve}, which is relatively easy for two-class data sets, but becomes more complicated for multi-class data sets~\cite{Saez2016yq}. Empirical studies have been carried out to compare different resampling methods~\cite{Hulse:2007eu}. In particular, it has been shown that smart resampling techniques are not necessarily superior to random oversampling and undersampling; besides, they cannot be applied to online scenarios directly, because they rely on the relations among training examples in a static data set. Some initial effort has been made recently to extend smart resampling techniques to online learning~\cite{Mao2015xp}. Algorithm-level methods address class imbalance by modifying the training mechanism with the direct goal of better accuracy on the minority class, including one-class learning~\cite{Japkowicz:1995qf}, cost-sensitive learning~\cite{Liu:2006yq} and threshold methods~\cite{Weiss:2003eu}. They require different treatments for specific kinds of learning algorithms. In other words, they are algorithm-dependent, so they are not as widely used as data-level methods. Some online cost-sensitive methods have been proposed, such as CSOGD~\cite{Wang2014fn} and RLSACP~\cite{Ghazikhani2013wm}. They are restricted to perceptron-based classifiers, and require pre-defined misclassification costs of classes, which may or may not be updated during online learning. Finally, ensemble learning (also known as multiple classifier systems)~\cite{Polikar:2006xy} has become a major category of approaches to handling class imbalance~\cite{Galar:2011uq}. It combines multiple classifiers as base learners and aims to outperform every one of them.
It can be easily adapted to emphasize the minority class by integrating different resampling techniques~\cite{Li:2007hc}~\cite{Liu:2009kx}~\cite{Chawla:2003yq}~\cite{Blaszczynski2015ge} or by making base classifiers cost-sensitive~\cite{Joshi:2001kx}~\cite{Chawla:2007kx}~\cite{Guo:2004qe}~\cite{Fan:1999rm}. A few ensemble methods are available for online class imbalance learning, such as OOB and UOB~\cite{Wang2014vb}, which apply random oversampling and undersampling in Online Bagging~\cite{Oza:2005ve}, and WOS-ELM~\cite{Mirza2013la}, which trains a set of cost-sensitive online extreme learning machines. It is worth pointing out that the aforementioned online learning algorithms designed for imbalanced data are not suitable for non-stationary data streams. They do not involve any mechanism for handling drifts that affect classification boundaries, although OOB and UOB can detect and react to class imbalance changes.\\ \noindent (d) \textit{How do we evaluate the performance of class imbalance learning algorithms?} Traditionally, overall accuracy and error rate are the most frequently used metrics of performance evaluation. However, they are strongly biased towards the majority class when data is imbalanced. Therefore, other performance measures have been adopted. Most studies concentrate on two-class problems. By convention, the minority class is treated as the positive class, and the majority class is treated as the negative class. Table~\ref{tab:confusion} illustrates the confusion matrix of a two-class problem, which produces four numbers on the testing data.
\begin{table}[htp] \caption{Confusion matrix for a two-class problem.} \label{tab:confusion} \centering \begin{tabular}{|c|c|c|} \hline & Predicted as positive & Predicted as negative\\ \hline Actual positive & True positive (TP) & False negative (FN) \\ Actual negative & False positive (FP) & True negative (TN) \\ \hline \end{tabular} \end{table} From the confusion matrix, we can derive the expressions for \textit{recall} and \textit{precision}: \begin{equation} \label{eq:recall} recall = \frac{TP}{TP+FN}, \end{equation} \begin{equation} \label{eq:precision} precision = \frac{TP}{TP+FP}. \end{equation} Recall (i.e. TP rate) is a measure of completeness -- the proportion of positive class examples that are classified correctly to all positive class examples. Precision is a measure of exactness -- the proportion of positive class examples that are classified correctly to the examples predicted as positive by the classifier. The learning objective of class imbalance learning is to improve recall without hurting precision. However, improving recall and precision can be conflicting. Thus, F-measure is defined to show the trade-off between them. \begin{equation} \label{eq:F} Fm = \frac{\left ( 1+\beta^2 \right )\cdot recall\cdot precision}{\beta^2\cdot precision+recall}, \end{equation} where $\beta$ corresponds to the relative importance of recall and precision. It is usually set to 1. Kubat et al.~\cite{Kubat:1997kx} proposed to use G-mean to replace overall accuracy: \begin{equation} \label{eq:G} Gm = \sqrt{\frac{TP}{TP+FN}\times \frac{TN}{TN+FP}}. \end{equation} It is the geometric mean of positive accuracy (i.e. TP rate) and negative accuracy (i.e. TN rate). A good classifier should have high accuracies on both classes, and thus a high G-mean. According to~\cite{He2009kx}, any metric that uses values from both rows of the confusion matrix for addition (or subtraction) will be inherently sensitive to class imbalance. 
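The four measures above follow directly from the confusion matrix. A short sketch (the example counts are invented for illustration):

```python
def imbalance_metrics(tp, fn, fp, tn, beta=1.0):
    """Recall, precision, F-measure and G-mean from a two-class confusion
    matrix (positive = minority class), following the equations above."""
    recall = tp / (tp + fn)               # completeness on the positive class
    precision = tp / (tp + fp)            # exactness of positive predictions
    fm = (1 + beta**2) * recall * precision / (beta**2 * precision + recall)
    gm = (tp / (tp + fn) * tn / (tn + fp)) ** 0.5   # geometric mean of TP/TN rates
    return {"recall": recall, "precision": precision, "Fm": fm, "Gm": gm}

# hypothetical test set: 100 minority vs 900 majority examples
m = imbalance_metrics(tp=80, fn=20, fp=90, tn=810)
```

Note that scaling FP and TN by a common factor (i.e. changing only the class distribution) leaves recall and G-mean untouched, since each uses values from a single row of the matrix, but changes precision and F-measure, which mix both rows.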
In other words, the performance measure will change as the class distribution changes, even though the underlying performance of the classifier does not. This performance inconsistency can cause problems when we compare different algorithms over different data sets. Precision and F-measure, unfortunately, are sensitive to the class distribution. Therefore, recall and G-mean are better options. To compare classifiers over a range of sample distributions, AUC (the Area Under the ROC Curve) is the best choice. A ROC curve depicts all possible trade-offs between the TP rate and the FP rate, where FP rate $= FP/\left( TN+FP \right)$. The TP rate and FP rate can be understood as the benefits and costs of classification with respect to data distributions. Each point on the curve corresponds to a single trade-off. A better classifier should produce a ROC curve closer to the top left corner. AUC represents a ROC curve as a single scalar value by estimating the area under the curve, which lies in $[0, 1]$. It is insensitive to the class distribution, because both the TP rate and the FP rate use values from only one row of the confusion matrix. AUC is usually generated by varying the classification decision threshold for separating positive and negative classes in the testing data set~\cite{Maloof:2003dq}~\cite{Fawcett:2006uq}. In other words, calculating AUC requires a set of confusion matrices. Therefore, unlike other measures based on a single confusion matrix, AUC cannot be used as an evaluation metric in online learning without memorizing data. Although a recent study has modified AUC for evaluating online classifiers~\cite{Brzezinski2015ib}, it still needs to collect recently received examples. The properties of the above measures are summarized in Table~\ref{tab:imbalance-metric}. They are defined under the two-class context. They cannot be used to evaluate multi-class data directly, except for recall.
Their multi-class versions have been developed~\cite{Sokolova2009dl}~\cite{Sun:2006pd}~\cite{Hand:2001kx}. The ``multi-class" and ``online" columns in the table show whether the corresponding measure can be used directly, without modification, in multi-class and online data scenarios. \begin{table}[htp] \caption{Performance evaluation measures for class imbalance problems.} \label{tab:imbalance-metric} \centering \begin{tabular}{|c|c|c|c|} \hline Measures& Multi-class & Online & Sensitive to\\ &&&Imbalance\\ \hline recall & yes & yes & no\\ \hline precision & no~\cite{Sokolova2009dl} & yes & yes\\ \hline Fm & no~\cite{Sokolova2009dl} & yes & yes\\ \hline Gm & yes~\cite{Sun:2006pd} & yes & no\\ \hline AUC & no (See MAUC~\cite{Hand:2001kx}) & no (See PAUC~\cite{Brzezinski2015ib}) & no\\ \hline \end{tabular} \end{table} \subsubsection{\textbf{Concept drift}} Concept drift is said to occur when the joint probability $P\left( \mathbf{x},y \right)$ changes~\cite{Gama2014re}~\cite{Minku2012vn}~\cite{Minku2010uq2}. The key research topics in this area include:\\ \noindent (a) \textit{How many types of concept drift are there? Which type is more challenging?} Concept drift can manifest in three fundamental forms of change, corresponding to the three major variables in Bayes' theorem~\cite{Kelly1999fs}: 1) a change in the prior probability $P\left( y \right)$; 2) a change in the class-conditional pdf $p\left(\mathbf{x}\mid y \right)$; 3) a change in the posterior probability $P\left( y\mid \mathbf{x} \right)$. The three types of concept drift are illustrated in Figure~\ref{fig:drift}. Compared to the original data distribution shown in Figure~\ref{fig:drift}(a), \begin{figure} \caption{Illustration of 3 concept drift types.} \label{fig:drift} \end{figure} Fig.~\ref{fig:drift}(b) shows the $P\left( y \right)$ type of concept drift, which affects neither $p\left( \mathbf{x}\mid y \right)$ nor $P\left( y\mid \mathbf{x} \right)$. The decision boundary remains unaffected.
The prior probability of the circle class is reduced in this example. Such change can lead to class imbalance. A well-learnt discrimination function may drift away from the true decision boundary, due to the imbalanced class distribution. Fig.~\ref{fig:drift}(c) shows the $p\left( \mathbf{x}\mid y \right)$ type of concept drift without affecting $P\left( y \right)$ and $P\left( y\mid \mathbf{x} \right)$. The true decision boundary remains unaffected. Elwell and Polikar claimed that this type of drift is the result of an incomplete representation of the true distribution in current data, which simply requires providing supplemental data information to the learning model~\cite{Elwell2011dq}. Fig.~\ref{fig:drift}(d) shows the $P\left( y\mid \mathbf{x} \right)$ type of concept drift. The true boundary between classes changes after the drift, so that the previously learnt discrimination function does not apply any more. In other words, the old function becomes unsuitable or partially unsuitable, and the learning model needs to be adapted to the new knowledge. The posterior distribution change clearly indicates the most fundamental change in the data generating function. This is classified as \textit{real concept drift}. The other two types belong to \textit{virtual concept drift}~\cite{Hoens2012uq}, which does not change the decision (class) boundaries. In practice, one type of concept drift may appear in combination with other types. Existing studies primarily focus on the development of drift detection methods and techniques to overcome the real drift. There is a significant lack of research on virtual drift, which can also deteriorate classification performance. As illustrated in Fig.~\ref{fig:drift}(b), even though these types of drift do not affect the true decision boundaries, they can cause a well-learnt decision boundary to become unsuitable. 
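The three drift types can be mimicked with a toy stream generator. The sketch below is purely illustrative (the 1-D Gaussian classes, the drift point $t=500$, and the drift-type names are all invented for this example): the true labelling rule is $y = 1$ iff $x > 0$, so $P(y\mid\mathbf{x})$ is fixed unless the rule itself is flipped.

```python
import random

def sample(t, drift, rng):
    """One (x, y) example from a toy 1-D stream whose true labelling rule is
    y = 1 iff x > 0. A drift of the requested type starts at t = 500."""
    after = t >= 500
    # P(y) drift: probability mass moves to the negative class; P(y|x) untouched
    pos_mass = 0.1 if (after and drift == "prior") else 0.5
    positive = rng.random() < pos_mass
    # p(x|y) drift: the positive class spreads out within x > 0;
    # the labelling rule, hence the true boundary, is unchanged
    spread = 2.0 if (after and drift == "conditional" and positive) else 1.0
    x = abs(rng.gauss(2.0, spread)) if positive else -abs(rng.gauss(2.0, 1.0))
    y = 1 if x > 0 else 0
    # P(y|x) drift: the labelling rule flips, invalidating the old boundary
    if after and drift == "posterior":
        y = 1 - y
    return x, y
```

A detector watching only the error rate of a fixed classifier would react very differently to these three streams, which is the point made in the surrounding discussion: the first two drifts leave the true boundary in place, while the third makes the previously learnt function invalid.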
Unfortunately, the current techniques for handling real drift may not be suitable for virtual drift, because the two present very different learning difficulties and require different solutions. For instance, methods for handling real drift often choose to reset and retrain the classifier, in order to forget the old concept and better learn the new one. This is not an appropriate strategy for data with virtual drift, because examples from previous time steps may still remain valid and help the current classification. It would be more effective and efficient to calibrate the existing classifier than to retrain it. Besides, techniques for handling real drift typically rely on feedback about the performance of the classifier, while techniques for handling virtual drift can operate without such feedback~\cite{Gama2014re}. From our point of view, all three types are equally important. In particular, the two virtual types require more research effort than the community currently dedicates to them. A systematic study of the challenges in each type will be given in Section~\ref{sec:exp}. \begin{table*}[htp] \caption{Categorization of concept drift techniques. See~\cite{Ditzler2015hq} for the full list of techniques under each category.} \label{tab:driftdetector} \centering \begin{tabular}{|c|c|l|} \hline \multirow{14}{*}{\textbf{Active}} & \multirow{6}{*}{\textbf{Step1. Change}} & \textbf{Hypothesis tests}: assess the validity of a hypothesis by comparing the distributions of two sets of fixed-length \\ && data sequences. \\ \cline{3-3} && \textbf{Change-point methods}: identify the change point by analyzing all possible partitions of a fixed data sequence. \\ \cline{3-3} &\multirow{2}{*}{\textbf{detection}}& \textbf{Sequential hypothesis tests}: provide a one-off detection of change or no change, by inspecting incoming \\ && examples one by one (sequentially).
\\ \cline{3-3} && \textbf{Change detection tests}: analyze the statistical behavior of streams of data in a fully sequential manner, such \\ && as a feature value or classification error. They are either based on a pre-defined threshold or some statistical \\ && features representing current data. \\ \cline{2-3} & \multirow{5}{*}{\textbf{Step2. Classifier}} & \textbf{Windowing}: the classifier is retrained based on a window with up-to-date examples. The window length can \\ && be either fixed or adaptive. \\ \cline{3-3} && \textbf{Weighting}: all received examples are weighted according to time or classification error, which are then used to \\ &\multirow{1}{*}{\textbf{adaptation}}& update the classifier. \\ \cline{3-3} && \textbf{Random Sampling}: the examples used to retrain the classifier are randomly chosen based on certain rules. \\ \cline{3-3} && \textbf{Ensemble}: build a new model in the classifier for the new concept. \\ \hline \multirow{2}{*}{\textbf{Passive}} & \multicolumn{2}{|l|}{\textbf{Single classifier}: update a single classifier, such as decision trees, online information network, and extreme learning machine.} \\ \cline{2-3} & \multicolumn{2}{|l|}{\textbf{Ensemble}: add, remove or modify the models in an ensemble classifier.} \\ \hline \end{tabular} \end{table*} Concept drift has further been characterized by its speed, severity, cyclical nature, etc. A detailed and mutually exclusive categorization can be found in~\cite{Minku2010uq2}. For example, according to speed, concept drift can be either abrupt, when the generating function is changed suddenly (usually within one time step), or gradual, when the distribution evolves slowly over time. They are the most commonly discussed types in the literature, because the effectiveness of drift detection methods can vary with the drifting speed. 
While most methods are quite successful in detecting abrupt drifts, since future data is no longer related to old data~\cite{Ditzler2013nh}, gradual drifts are often more difficult to detect, because the slow change can delay or hide the hints left by the drift. Some drift detection methods are specifically designed for gradual concept drift, such as the Early Drift Detection Method (EDDM)~\cite{Baena-Garca2006tg}.\\ \noindent (b) \textit{How can we tackle concept drift effectively (state-of-the-art solutions)?} There is a wide range of algorithms for learning in non-stationary environments. Most of them assume and specialize in some specific types of concept drift, although real-world data often contains multiple types. They are commonly categorized into two major groups: active vs. passive approaches, depending on whether an explicit drift detection mechanism is employed. Active approaches (also known as trigger-based approaches) determine whether and when a drift has occurred before taking any actions. They operate based on two mechanisms -- a change detector aiming to sense the drift accurately and in a timely manner, and an adaptation mechanism aiming to maintain the performance of the classifier by reacting to the detected drift. Passive approaches (also known as adaptive classifiers) evolve the classifier continuously without an explicit trigger reporting the drift. A comprehensive review of up-to-date techniques for tackling concept drift is given by Ditzler et al.~\cite{Ditzler2015hq}. They further organise these techniques based on their core mechanisms, summarized in Table~\ref{tab:driftdetector}. This table will help us to understand how online class imbalance algorithms are designed, which will be introduced in detail in Section~\ref{sec:overcomeboth}.
There exist other ways to classify the proposed algorithms, such as Gama et al.'s taxonomy based on the four modules of an adaptive learning system~\cite{Gama2014re}, and Webb et al.'s quantitative characterization~\cite{Webb2016jq}. This paper adopts the one proposed by Ditzler et al.~\cite{Ditzler2015hq} for its simplicity. The best algorithm varies with the intended applications. A general observation is that, while active approaches are quite effective in detecting abrupt drift, passive approaches are very good at overcoming gradual drift~\cite{Elwell2011dq}~\cite{Ditzler2015hq}. It is worth noting that most algorithms do not consider class imbalance. It is unclear whether they will remain effective if data becomes imbalanced. For example, some algorithms determine concept drift based on the change in the classification error, including OLIN~\cite{Cohen2008vn}, DDM~\cite{Gama2004kl} and PERM~\cite{Harel2014oa}. As we have explained in Section~\ref{subsec:probdesp} 1), the classification error is sensitive to the imbalance degree of data, and does not reflect the performance of the classifier very well when there is class imbalance. Therefore, these algorithms may not perform well when concept drift and class imbalance occur simultaneously. Some other algorithms are specifically designed for data streams coming in batches, such as AUE~\cite{Brzezinski2014fo} and the Learn++ family~\cite{Elwell2011dq}. These algorithms cannot be applied to online cases directly. \noindent (c) \textit{How do we evaluate the performance of concept drift detectors and online classifiers?} To fully test the performance of drift detection approaches (especially an active detector), it is necessary to discuss both data with artificial concept drifts and real-world data with unknown drifts. 
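To make the error-monitoring idea behind detectors such as DDM concrete, the sketch below gives a minimal version of its thresholding logic (monitor the online error rate $p_i$ and its standard deviation $s_i$; warn when $p_i + s_i$ exceeds $p_{min} + 2 s_{min}$, signal drift at $p_{min} + 3 s_{min}$). This is a simplified sketch of the published method, not a reference implementation, and the warm-up length of 30 examples and the test stream are illustrative choices:

```python
import math

class DDM:
    """Sketch of the Drift Detection Method's thresholds (Gama et al., 2004)."""
    def __init__(self):
        self.n = 0
        self.p = 1.0                            # online error rate
        self.p_min, self.s_min = float("inf"), float("inf")

    def add(self, error):                       # error: 1 if misclassified, else 0
        self.n += 1
        self.p += (error - self.p) / self.n     # incremental mean of errors
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.n > 30 and self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s  # record the best state seen
        if self.n > 30 and self.p + s > self.p_min + 3 * self.s_min:
            return "drift"
        if self.n > 30 and self.p + s > self.p_min + 2 * self.s_min:
            return "warning"
        return "stable"

detector = DDM()
# 300 steps with ~5% error, then a burst where every prediction is wrong
stream = [1 if i % 20 == 0 else 0 for i in range(1, 301)] + [1] * 30
statuses = [detector.add(e) for e in stream]
```

Because the monitored statistic is the overall error rate, such a detector inherits the imbalance sensitivity discussed above: on a highly imbalanced stream, a collapse in minority-class recall barely moves the overall error, which is why DDM-style detectors may miss drifts affecting only the minority class.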
Using data with artificial concept drifts allows us to easily manipulate the type and timing of concept drifts, so as to obtain an in-depth understanding of the performance of approaches under various conditions. Testing on data from real-world problems helps us to understand their effectiveness from the practical point of view, but the information about when and how concept drift occurs is unknown in most cases. The following aspects are usually considered to assess the accuracy of active drift detectors. Their measurement is based on data with artificial concept drifts, where the drifts are known. \begin{itemize} \item True detection rate: the probability of detecting the true concept drift. It shows the accuracy of the detection approach. \item False alarm rate: the probability of reporting a concept drift that does not exist (false-positive rate). It characterizes the costs and reliability of the detection approach. \item Delay of detection: an estimate of how many time steps are required on average to detect a drift after its actual occurrence. It reflects how much time is taken before the drift is detected. \end{itemize} Wang and Abraham~\cite{Wang2015zo} use a histogram to visualize the distribution of detection points from the drift detection approach over multiple runs. It reflects all three aspects above in one plot. It is worth noting that there are trade-offs between these measures. For example, an approach with a high true detection rate may produce a high false alarm rate. A very recent algorithm, Hierarchical Change-Detection Tests (HCDTs), was proposed to explicitly deal with this trade-off~\cite{Alippi2017hw}. After the performance of drift detection approaches is better understood, we need to quantify the effect of those detections on the performance of predictive models. All the performance metrics introduced in the previous section on class imbalance can be used.
The key question here is how to calculate them in streaming settings with evolving data. The performance of the classifier may get better or worse every now and then. There are two common ways to depict such performance over time -- holdout and prequential evaluation~\cite{Gama2014re}. Holdout evaluation is mostly used when the testing data set (holdout set) is available in advance. At each time step or every few time steps, the performance measures are calculated based on the valid testing set, which must represent the same data concept as the training data at that moment. However, this is a very rigorous requirement for data from real-world applications. In prequential evaluation, data received at each time step is used for testing before it is used for training. From this, the performance measures can be incrementally updated for evaluation and comparison. This strategy does not require a holdout set, and the model is always tested on unseen data. When the data stream is stationary, the prequential performance measures can be computed based on the accumulated sum of a loss function from the beginning of the training. However, if the data stream is evolving, the accumulated measure can mask the fluctuations in performance and the adaptation ability of the classifier. For example, consider an online classifier that correctly predicts 90 out of 100 examples received so far (90\% accuracy on data with the original concept). Then, an abrupt concept drift occurs at time step 101, after which the classifier correctly predicts only 3 out of 10 examples from the new concept (30\% accuracy on data with the new concept). If we use the accumulated measure based on all the historical data, the overall accuracy will be 93/110, which seems high but does not reflect the true performance on the new data concept. This problem can be solved by using a sliding window or a time-based fading factor that weighs recent observations more heavily~\cite{Gama2013qp}.
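The worked example above can be reproduced with a few lines. The fading-factor variant below is one common formulation (decaying both the numerator and the denominator of the prequential accuracy); the value $\alpha=0.9$ and the ordering of outcomes are illustrative:

```python
def accumulated_accuracy(outcomes):
    """Prequential accuracy accumulated from the beginning of the stream."""
    return sum(outcomes) / len(outcomes)

def faded_accuracy(outcomes, alpha=0.99):
    """Prequential accuracy with a time-based fading factor: both the count
    of correct predictions and the count of examples are decayed, so old
    outcomes fade away and recent ones dominate."""
    num = den = 0.0
    for correct in outcomes:
        num = alpha * num + correct
        den = alpha * den + 1.0
    return num / den

# the worked example: 90/100 correct before the drift, then 3/10 after it
outcomes = [1] * 90 + [0] * 10 + [1] * 3 + [0] * 7
acc = accumulated_accuracy(outcomes)       # 93/110, deceptively high
fad = faded_accuracy(outcomes, alpha=0.9)  # dominated by the new concept
```

With $\alpha=0.9$, the faded accuracy falls well below the accumulated 93/110, reflecting the 30\% performance on the new concept rather than the pre-drift history.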
\section{Overcoming Class Imbalance and Concept Drift Simultaneously} \label{sec:overcomeboth} Following the review of class imbalance and concept drift in Section~\ref{sec:framework}, this section reviews the combined issue, including example applications and existing solutions. When both exist, one problem affects the treatment of the other. For example, drift detection algorithms based on the traditional classification error may be sensitive to the imbalanced degree and become less effective; class imbalance techniques need to be adaptive to a changing $P\left( y \right)$, otherwise the class receiving the preferential treatment may not be the correct minority class at the current moment. Therefore, their mutual effect should be considered during algorithm design. \subsection{Illustrative Applications} \label{subsec:application} The combined problems of concept drift and class imbalance have been found in many real-world applications. Three examples are given here, to help us understand each type of concept drift. \subsubsection{Environment monitoring with $P\left( y \right)$ drift} Environment monitoring systems usually consist of various sensors generating streaming data at high speed, and real-time prediction is required. For example, a smart building has sensors deployed to monitor hazardous events. Any sensor fault can cause catastrophic failures. Machine learning algorithms can be used to build models based on the sensor information, aiming to predict faults in sensors accurately and in a timely manner~\cite{Wang2013hi}. First, the data is characterized by class imbalance, because obtaining fault examples in such systems can be very expensive. Examples representing faults are the minority. Second, the number of faults varies with the faulty condition. If the damage gets worse over time, faults will occur more and more frequently. This implies a prior probability change, a type of virtual concept drift.
\subsubsection{Spam filtering with $p\left( \mathbf{x}\mid y \right)$ drift} Spam filtering is a typical classification problem involving class imbalance and concept drift~\cite{Lindstrom2010qg}. First of all, the spam class is the minority and suffers from a higher misclassification cost. Second, spammers are actively working on how to break through the filter, which means that the adversary actions are adaptive. For example, one spamming behaviour is to change email content and presentation in disguise, implying a possible class-conditional pdf ($p\left( \mathbf{x}\mid y \right)$) change~\cite{Gama2014re}. \subsubsection{Social media analysis with $P\left( y\mid \mathbf{x} \right)$ drift} Social media (e.g. Twitter, Facebook) is becoming a valuable source of timely information on the internet. It attracts a growing number of people, who share, communicate, connect and create user-generated data. Consider the example where a company would like to make relevant product recommendations to people who have shown some type of interest in their tweets. Machine learning algorithms can be used to discover who is interested in the product from the large number of tweets~\cite{Li2012la}. The number of users who have shown such interest is always very small, and their information tends to be overwhelmed by other unrelated messages. Thus, it is crucially important to overcome the imbalanced distribution and discover the hidden information. Another challenge is that users' interests change over time. Users may lose interest in the currently trendy product very quickly, causing posterior probability ($P\left( y\mid \mathbf{x} \right)$) changes. Although each of the above examples is associated with only one type of concept drift, different types often coexist in real-world problems, and which types are present is hard to know in advance. In the spam filtering example, whether an email belongs to spam also depends on the user's interpretation.
Users may re-label a particular category of normal emails as spam, which indicates a posterior probability change. \subsection{Approaches to Tackling Both Class Imbalance and Concept Drift} \label{subsec:approach} Some research efforts have been made to address the joint problem of concept drift and class imbalance, due to the rising need from practical problems~\cite{Pan2015wp}~\cite{Sousa2016pe}. Uncorrelated Bagging is one of the earliest algorithms; it builds an ensemble of classifiers trained on a more balanced set of data through resampling, and overcomes concept drift passively by weighting the base classifiers according to their discriminative power~\cite{Gao2008uq}~\cite{Gao2007fk}~\cite{Wu2014nq}. The selectively recursive approaches SERA~\cite{Chen2009dz} and REA~\cite{Chen2011tg} use ideas similar to Uncorrelated Bagging, building an ensemble of weighted classifiers, but with a ``smarter" oversampling technique. Learn++.CDS and Learn++.NIE are more recent algorithms, which tackle class imbalance through the oversampling technique SMOTE~\cite{Chawla:2002yq} or a sub-ensemble technique, and overcome concept drift through a dynamic weighting strategy~\cite{Ditzler2013mk}. HUWRS.IP~\cite{Hoens2013gh} improves HUWRS~\cite{Hoens:2011ys} to deal with imbalanced data streams by introducing an instance propagation scheme based on a Na\"{i}ve Bayes classifier, and uses Hellinger distance as a weighting measure for concept drift detection. This method relies on finding examples that are similar to the current minority-class concept, which, however, may not exist. The Hellinger Distance Decision Tree (HDDT) was therefore proposed, using Hellinger distance as an imbalance-insensitive decision-tree splitting criterion~\cite{Pozzolo2014wl}. All these approaches belong to chunk-based learning algorithms. Their core techniques work when a batch of data is received at each time step, i.e. they are not suitable for online processing.
Developing a true online algorithm for concept drift is very challenging because of the difficulties in measuring minority-class statistics using only one example at a time~\cite{Ditzler2015hq}. To handle class imbalance and concept drift in an online fashion, a few methods have been proposed recently. The Drift Detection Method for Online Class Imbalance (DDM-OCI)~\cite{Wang2013bp} is one of the very first algorithms detecting concept drift actively in imbalanced data streams online. It monitors the reduction in minority-class recall (i.e. true positive rate). If there is a significant drop, a drift will be reported. It was shown to be effective in cases where minority-class recall is affected by the concept drift, but not when the majority class is mainly affected. A Linear Four Rates (LFR) approach was then proposed to improve DDM-OCI; it monitors four rates from the confusion matrix -- minority-class recall and precision and majority-class recall and precision -- with statistically supported bounds for drift detection~\cite{Wang2015zo}. If any of the four rates exceeds its bound, a drift will be confirmed. Instead of tracking several performance rates for each class, prequential AUC (PAUC)~\cite{Brzezinski2015ib}~\cite{Brzezinski2017el} was proposed as an overall performance measure for online scenarios, and was used as the concept drift indicator in the Page-Hinkley (PH) test~\cite{Page1954qg}. However, it needs access to historical data. DDM-OCI, LFR and the PAUC-based PH test are active drift detectors designed for imbalanced data streams, and are independent of classification algorithms. They aim at concept drift with classification boundary changes by default. Therefore, if a concept drift is reported, they will reset and retrain the online model. Although these drift detectors are designed for imbalanced data, they themselves do not handle class imbalance. It is still unclear how they perform when working with class imbalance techniques.
\begin{table*}[htp] \caption{Online approaches to tackling concept drift and class imbalance, and their properties.} \label{tab:onlinemethod} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Approaches & Category? & Class & Access to & Additional data? & Multi-class? & $P\left( y \right)$ drift? \\ & & imbalance? & old data? & & & \\ \hline DDM-OCI~\cite{Wang2013bp} & Active (change detection test + windowing) & No & No & No & No & No \\ \hline LFR~\cite{Wang2015zo} & Active (change detection test + windowing) & No & No & No & No & No \\ \hline PAUC-PH~\cite{Brzezinski2015ib} & Active (change detection test + windowing) & No & Yes & No & No & No \\ \hline RLSACP~\cite{Ghazikhani2013wm}/ONN~\cite{Ghazikhani2014qb} & Passive (single classifier) & Yes & Yes & No & No & Yes \\ \hline ESOS-ELM~\cite{Mirza2015nt} & Passive+Active (ensemble) & Yes & No & Yes & No & No \\ \hline OOB/UOB using CID~\cite{Wang2014vb} & Active (weighting) & Yes & No & No & No & Yes \\ \hline \end{tabular} \end{table*} Besides the above active approaches, the perceptron-based algorithms RLSACP~\cite{Ghazikhani2013wm}, ONN~\cite{Ghazikhani2014qb} and ESOS-ELM~\cite{Mirza2015nt} adapt the classification model to non-stationary environments passively, and involve mechanisms to overcome class imbalance. RLSACP and ONN are single-model approaches with the same general idea. Their error function for updating the perceptron weights is modified, including a forgetting function for model adaptation and an error weighting strategy as the class imbalance treatment. The forgetting function has a pre-defined form, allowing the old data concept to be forgotten gradually. The error weights in RLSACP are incrementally updated based either on the classification performance or the imbalance rate from recently received data. It was shown that weight updating based on the imbalance rate leads to better performance. 
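The general idea shared by RLSACP and ONN -- scaling the perceptron update by a per-class error weight while discounting old weights -- can be sketched as follows. This is a hypothetical minimal illustration of that idea, not the published formulation: the class name, the weight derived from the running imbalance rate, and the forgetting factor applied on updates are all simplifying assumptions.

```python
class CostSensitivePerceptron:
    """Sketch: online perceptron whose update is scaled by a class-imbalance
    weight (rarer classes get larger updates) plus a forgetting factor."""
    def __init__(self, dim, lr=0.1, forget=0.999):
        self.w = [0.0] * dim
        self.b = 0.0
        self.lr, self.forget = lr, forget
        self.counts = {0: 1, 1: 1}             # per-class example counts

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) + self.b > 0 else 0

    def learn(self, x, y):
        self.counts[y] += 1
        pred = self.predict(x)
        if pred != y:
            # error weight from the running imbalance rate: rarer class,
            # proportionally larger update
            cost = sum(self.counts.values()) / (2.0 * self.counts[y])
            sign = 1.0 if y == 1 else -1.0
            self.w = [self.forget * wi + self.lr * cost * sign * xi
                      for wi, xi in zip(self.w, x)]
            self.b = self.forget * self.b + self.lr * cost * sign
        return pred

model = CostSensitivePerceptron(dim=2)
for i in range(200):                           # stream with 10% positives
    if i % 10 == 0:
        model.learn((1.0, 1.0), 1)
    else:
        model.learn((-1.0, -1.0), 0)
```

Updating the error weights from the running imbalance rate, rather than fixing them in advance, mirrors the reported finding that imbalance-rate-based weighting leads to better performance.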
ESOS-ELM is an ensemble approach, maintaining a set of online sequential extreme learning machines (OS-ELM)~\cite{Liang2006bp}. To tackle class imbalance, resampling is applied so that each OS-ELM is trained with an approximately equal number of minority- and majority-class examples. To tackle concept drift, the voting weights of the base classifiers are updated according to their G-mean performance on a separate validation data set from the same environment as the current training data. In addition to this passive drift adaptation, ESOS-ELM includes an independent module, ELM-Store, to handle recurring concept drift. ELM-Store maintains a pool of weighted extreme learning machines (WELM)~\cite{Mirza2013la} to retain old information. It adopts a threshold-based technique and hypothesis testing to actively detect abrupt and gradual concept drift. If a concept drift is reported, a new WELM is built and kept in ELM-Store. If any stored model performs better than the current OS-ELM ensemble, indicating a possible recurring concept, it is introduced into the ensemble. ESOS-ELM assumes that the imbalance rate is known in advance and fixed. It needs a separate data set, which must include examples from all classes, to initialize the OS-ELMs and WELMs. It also requires validation data sets reflecting every data concept for drift detection, which can be quite a restrictive requirement for real-world data. With a goal different from that of the above drift detectors, a class imbalance detection (CID) approach was proposed, aiming at $P\left( y \right)$ changes~\cite{Wang2013po}. It reports the current imbalance status and provides information about which classes belong to the minority and which to the majority. In particular, its key indicator is the real-time class size $w_k^{(t)}$, the percentage of class $c_k$ at time step $t$.
When a new example $\mathbf{x_t}$ arrives, $w_k^{(t)}$ is incrementally updated by the following equation~\cite{Wang2013po}: \begin{equation} w_k^{(t)}=\theta w_k^{(t-1)}+\left ( 1-\theta \right ) \left[ \left (\mathbf{x_t},c_k \right ) \right], \quad (k=1,\ldots,N) \label{eq:wk} \end{equation} where $\left[ \left( \mathbf{x_t}, c_k \right) \right] = 1$ if the true class label of $\mathbf{x_t}$ is $c_k$, and 0 otherwise. $\theta$ $\left(0<\theta<1\right)$ is a pre-defined time decay (forgetting) factor, which gradually reduces the contribution of older data to the calculation of class sizes over time. CID is independent of the learning algorithm, so it can be used with any type of online classifier. For example, it has been used in OOB and UOB~\cite{Wang2014vb} to decide the resampling rate adaptively and thereby overcome class imbalance effectively over time. OOB and UOB integrate oversampling and undersampling, respectively, into the ensemble algorithm Online Bagging (OB)~\cite{Oza:2005ve}. Oversampling and undersampling are among the simplest and most effective techniques for tackling class imbalance~\cite{Hulse:2007eu}. The properties of the above online approaches are summarized in Table~\ref{tab:onlinemethod}, answering the following six questions in order: \begin{itemize} \item How do they handle concept drift (the type based on the categorization in Table~\ref{tab:driftdetector})? \item Do they involve any class imbalance technique to improve the predictive performance of online models, in addition to concept drift detection? \item Do they need access to previously received data? \item Do they need additional data sets for initialisation or validation? \item Can they handle data streams with more than two classes (multi-class data)? \item Do they involve any mechanism handling $P\left( y \right)$ drift?
\end{itemize} \section{Performance Analysis} \label{sec:exp} Having completed the review of online class imbalance learning, we now aim at a deeper understanding of concept drift detection in imbalanced data streams and of the performance of the existing approaches introduced in Section~\ref{subsec:approach}. Three research questions will be investigated through experimental analysis: \textit{1) What are the difficulties in detecting each type of concept drift?} Little work has discussed the three fundamental types of concept drift separately, especially the $P\left( y \right)$ drift. It is important to understand their differences, so that the most suitable approaches can be used for the best performance. \textit{2) Among existing approaches designed for imbalanced data streams with concept drift, which approach is better and when?} Although a few approaches have been proposed to overcome concept drift and class imbalance together, it is still unclear how well they perform for each type of concept drift. \textit{3) Do class imbalance techniques affect concept drift detection and online prediction, and if so, how?} No study has looked into the mutual effect of applying class imbalance techniques and concept drift detection methods. Understanding the role of class imbalance techniques will help us develop more effective concept drift detection methods for imbalanced data. \subsection{Data Sets} \label{subsec:data} For an accurate analysis and comparable results, we choose the two most commonly used artificial data generators, SINE1~\cite{Gama2004kl} and SEA~\cite{Street2001bh}, to produce imbalanced data streams containing the three simulated types of concept drift. This is one of the very few studies that individually discuss the $P\left( y \right)$, $p\left( \mathbf{x}\mid y \right)$ and $P\left(y \mid \mathbf{x} \right)$ types of concept drift in depth. In addition, each generator produces two data streams with different drifting speeds -- abrupt and gradual drifts.
The drifting speed is defined as the inverse of the time taken for a new concept to completely replace the old one~\cite{Minku2010uq2}. According to speed, a drift is abrupt, when the generating function changes completely in a single time step, or gradual, otherwise. The data streams with a gradual concept drift are denoted by `g' in the following experiments, i.e. SINE1g~\cite{Baena-Garca2006tg} and SEAg. Every data stream has 3000 time steps, with one concept drift starting at time step 1501. The new concept in SINE1 and SEA fully takes over the data stream from time step 1501; the concept drift in SINE1g and SEAg takes 500 time steps to complete, which means that the new concept fully replaces the old one from time step 2001. The detailed settings for generating each type of concept drift are included in the individual subsections. After the detailed analysis of the three types of concept drift, three real-world data sets with unknown concept drift are included in our experiments: the PAKDD 2009 credit card data (PAKDD)~\cite{Linhart2010ln}, the Weather data~\cite{Ditzler2013nh} and the UDI TweeterCrawl data~\cite{LiWDWC12}. The data in PAKDD were collected from the private label credit card operation of a Brazilian retail chain. The task is to identify whether a client has good or bad credit. The ``bad'' credit class is the minority, taking 19.75\% of the provided modelling data. Because the data were collected over a time interval in the past, gradual market change occurs. The Weather data set aims to predict whether rain precipitation was observed on each day, with inherent seasonal changes. The ``rain'' class is the minority, at an imbalance rate of 31\%. The original Tweet data include 50 million tweets posted mainly from 2008 to 2011. The task is to predict the tweet topic. We choose a time interval containing 8774 examples and covering seven tweet topics~\cite{Wang2016wl}.
Then, we further reduce it to a 2-class data set by using only two of the seven topics in our experiment. These real-world data will help us understand the effectiveness of existing concept drift and class imbalance approaches in practical scenarios, which usually involve more complex data distributions and concept drift. \subsection{Experimental and Evaluation Settings} \label{subsec:setting} \begin{table*}[htp] \caption{Artificial data streams with $P\left( y \right)$ concept drift.} \label{tab:pydata} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline ID & Data& Speed & \multicolumn{3}{|c|}{Class +1} & \multicolumn{3}{|c|}{Class -1}\\ \cline{4-9} &&& Concept & Old $P\left( y \right)$ & New $P\left( y \right)$ & Concept & Old $P\left( y \right)$ & New $P\left( y \right)$ \\ \hline 1&SINE1 & Abrupt & \multirow{2}{*}{Points below $y=\sin \left ( x \right )$} & \multirow{2}{*}{0.1} & \multirow{2}{*}{0.9} & \multirow{2}{*}{Points above or on $y=\sin \left ( x \right )$} & \multirow{2}{*}{0.9} & \multirow{2}{*}{0.1} \\ \cline{1-3} 2&SINE1g & Gradual &&&&&& \\ \hline 3&SEA & Abrupt & \multirow{2}{*}{$x_1+x_2 \leq 7$} & \multirow{2}{*}{0.5} & \multirow{2}{*}{0.1} & \multirow{2}{*}{$x_1+x_2 > 7$} & \multirow{2}{*}{0.5} & \multirow{2}{*}{0.9} \\ \cline{1-3} 4&SEAg & Gradual &&&&&& \\ \hline \end{tabular} \end{table*} The approaches listed in Table~\ref{tab:onlinemethod}, which are explicitly designed for the combined problem of class imbalance and concept drift, are discussed in our experiments. Each of the three active drift detection methods -- DDM-OCI, LFR and PAUC-PH -- is combined with the traditional Online Bagging (abbr. OB)~\cite{Oza:2005ve} and with OOB using CID~\cite{Wang2014vb} for classification. Because OOB applies oversampling to overcome class imbalance and OB does not, this pairing helps us to observe the role of class imbalance techniques (oversampling in our experiments) in concept drift detection.
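As a concrete illustration of how the resampling rate can be adapted online, the sketch below combines the time-decayed class-size update of Eq.~\ref{eq:wk} with a Poisson-based oversampling rate in the spirit of OOB; the rate formula is our simplified stand-in, not necessarily the exact published one.

```python
class ClassSizeTracker:
    """Time-decayed class sizes, following Eq. (wk):
    w_k <- theta * w_k + (1 - theta) * [y == c_k]."""

    def __init__(self, classes, theta=0.99):
        self.theta = theta
        self.w = {k: 1.0 / len(classes) for k in classes}  # start balanced

    def update(self, y):
        for k in self.w:
            hit = 1.0 if k == y else 0.0
            self.w[k] = self.theta * self.w[k] + (1 - self.theta) * hit
        return self.w


def oversampling_rate(w, y):
    """Illustrative OOB-style rate: an example of class y is trained
    Poisson(lambda) times, with lambda growing as class y becomes rarer
    (lambda = 1 on a perfectly balanced stream)."""
    n_classes = len(w)
    return 1.0 / (n_classes * max(w[y], 1e-6))
```

In use, each arriving example $(\mathbf{x}_t, y_t)$ first updates the tracker, and the base learners are then trained `numpy.random.poisson(oversampling_rate(tracker.w, y_t))` times on it, so minority-class examples are replayed more often as the imbalance status evolves.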
UOB is not chosen, considering that undersampling may cause unstable performance, which could indirectly affect our observations~\cite{Wang2014vb}. Between RLSACP and ONN, only RLSACP is included in our experiments, due to their similarity and the stronger theoretical support behind RLSACP. Since RLSACP and ESOS-ELM are perceptron-based methods, we use the Multilayer Perceptron (MLP) classifier as the base learner of OB and OOB. The number of neurons in the hidden layer of the MLPs is set to the average of the number of attributes and the number of classes in the data, which is also the number of perceptrons in RLSACP and ESOS-ELM. All ensemble methods maintain 15 base learners. For ESOS-ELM, we disable the ELM-Store module, which is designed for recurring concept drift, and allow its ensemble size to grow to 20. In addition, ESOS-ELM requires an initialisation data set to initialize the ELMs, and validation data sets to adjust misclassification costs. When dealing with artificial data, we use the first 100 examples to initialize ESOS-ELM, and generate a separate validation data set for each concept stage. We track the performance of all methods from time step 101. In summary, ten algorithms from Table~\ref{tab:onlinemethod} join the comparison: OB, OOB, DDM-OCI+OB/OOB, PAUC-PH+OB/OOB, LFR+OB/OOB, RLSACP and ESOS-ELM. OB is the baseline, involving neither class imbalance nor concept drift techniques. To evaluate the effectiveness of the concept drift detection methods and online learners, we adopt the prequential test (as described in Section~\ref{sec:framework}) for its simplicity and popularity. Prequential recall of each class (defined in Eq.~\ref{eq:recall}) and prequential G-mean (defined in Eq.~\ref{eq:G}) are tracked over time for comparison, because they are insensitive to imbalance rates. When discussing the generated artificial data sets, whose ground truth is known, we also compare the true detection rate (abbr. TDR), total number of false alarms (abbr.
FA) and delay of detection (abbr. DoD) (as defined in Section~\ref{sec:framework}) among the methods using any of the three active drift detectors (i.e. DDM-OCI, LFR and PAUC-PH). The calculation of TDR, FA and DoD is based on the following understanding: before a real concept drift occurs, all reported alarms are counted as false alarms; after a real concept drift occurs, the first detection is counted as the true alarm; after that and before the next real concept drift, subsequent detections are counted as false alarms. Furthermore, because we are particularly interested in how the learner performs on the new data concept in the artificial data sets, we calculate the average recall and G-mean over all time steps before the concept drift starts and after it completely ends. It is worth noting that, for an accurate analysis, the recall and G-mean values are reset to 0 when the drift starts and when it ends. We use the Wilcoxon signed-rank test at the 95\% confidence level as our significance test in this paper. \subsection{Comparative Study on Artificial Data} \label{subsec:artificial_analysis} \noindent C.1. $\mathbf P\left( y \right)$ \textbf{Concept Drift} This section focuses on the $P\left( y \right)$ type of concept drift, without $p\left( \mathbf{x} \mid y \right)$ and $P\left( y \mid \mathbf{x} \right)$ changes. Data streams SINE1 and SINE1g have a severe class imbalance change, in which the minority (majority) class during the first half of the data stream becomes the majority (minority) during the latter half. SEA and SEAg have a less severe change, in which the data stream, balanced during the first half, becomes imbalanced during the latter half. The concrete setting for each data stream is summarized in Table~\ref{tab:pydata}. Table~\ref{tab:pyDetectors} compares the detection performance of the three active concept drift detectors, in terms of TDR, FA and DoD.
The first column is the data ID number, as denoted in Table~\ref{tab:pydata}. We can see that DDM-OCI and LFR are sensitive to class imbalance changes in the data. They present very high true detection rates; in particular, LFR achieves 100\% TDR in all cases, regardless of whether resampling is used to tackle class imbalance. PAUC-PH does not report any concept drift, showing 0\% TDR in all cases. This is because DDM-OCI and LFR use time-decayed metrics as drift indicators, which are in general more sensitive to performance change than the prequential AUC used by PAUC-PH. LFR shows even higher TDR than DDM-OCI, because it tracks four rates of the confusion matrix instead of one. For the same reason, DDM-OCI and LFR have a higher chance of issuing false alarms than PAUC-PH. For DDM-OCI, comparing the TDR on SEA and SEAg shows that oversampling in OOB increases the probability of reporting a concept drift, relative to OB. This is because more examples are used for training in OOB, which improves the performance on the minority class used for concept drift detection. \begin{table}[htp] \caption{Performance of the 3 active concept drift detectors on artificial data with $P\left( y \right)$ changes: TDR, FA and DoD.
The `-' symbol indicates that no concept drift is detected.} \label{tab:pyDetectors} \centering \begin{tabular}{|c|c|c|c|c|} \hline & Method & TDR & FA & DoD \\ \hline \multirow{6}{*}{\begin{turn}{90}SINE1\end{turn}} & DDM-OCI+OB & 100\% & 0 & 94 \\ \cline{2-5} &DDM-OCI+OOB & 100\% & 2.22 & 45 \\ \cline{2-5} &LFR+OB & 100\% & 24 & 91\\ \cline{2-5} &LFR+OOB & 100\% & 26.16 & 63\\ \cline{2-5} &PAUC-PH+OB & 0\% & 1.03 & -\\ \cline{2-5} &PAUC-PH+OOB & 0\% & 1.28 & - \\ \hline \multirow{6}{*}{\begin{turn}{90}SINE1g\end{turn}} & DDM-OCI+OB & 100\% & 1.09 & 281 \\ \cline{2-5} &DDM-OCI+OOB & 100\% & 4.38 & 118 \\ \cline{2-5} &LFR+OB & 100\% & 18.01 & 383 \\ \cline{2-5} &LFR+OOB & 100\% & 21.15 & 153 \\ \cline{2-5} &PAUC-PH+OB & 0\% & 1 & - \\ \cline{2-5} &PAUC-PH+OOB & 0\% & 1 & - \\ \hline \multirow{6}{*}{\begin{turn}{90}SEA\end{turn}} &DDM-OCI+OB & 45\% & 11.9 & 255\\ \cline{2-5} &DDM-OCI+OOB & 94\% & 14.1 & 301\\ \cline{2-5} &LFR+OB & 100\% & 0.73 & 35\\ \cline{2-5} &LFR+OOB & 100\% & 6.51 & 45\\ \cline{2-5} &PAUC-PH+OB & 0\% & 1 & -\\ \cline{2-5} &PAUC-PH+OOB & 0\% & 1 & - \\ \hline \multirow{6}{*}{\begin{turn}{90}SEAg\end{turn}} & DDM-OCI+OB & 92\% & 15.1 & 80\\ \cline{2-5} & DDM-OCI+OOB & 100\% & 16.56 & 93\\ \cline{2-5} & LFR+OB & 100\% & 2.27 & 121\\ \cline{2-5} & LFR+OOB & 100\% & 6.3 & 324\\ \cline{2-5} & PAUC-PH+OB & 0\% & 1 & -\\ \cline{2-5} & PAUC-PH+OOB & 0\% & 1.01 & - \\ \hline \end{tabular} \end{table} Table~\ref{tab:pyLearners} compares recall and G-mean of all models over the new data concept, i.e. performance over time steps 1501-3000 for data streams with an abrupt change and performance over time steps 2001-3000 for data streams with a gradual change, showing whether and how well the drift detector can help with learning after concept drift is completed. The first column is the data ID number, as denoted in Table~\ref{tab:pydata}. 
In SINE1 and SINE1g, the negative class becomes the minority after the change; in SEA and SEAg, the positive class becomes the minority after the change. \begin{table}[htp] \caption{Performance of online learners on artificial data with $P\left( y \right)$ changes: means and standard deviations of average recall of each class and average G-mean over the new data concept. The significantly best values among all methods are shown in bold italics.} \label{tab:pyLearners} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline & Method & Class+1 Recall & Class-1 Recall & G-mean \\ \hline \multirow{10}{*}{\begin{turn}{90}SINE1\end{turn}} & DDM-OCI+OB & 0.887$\pm$0.004 & 0.170$\pm$0.009 & 0.317$\pm$0.009 \\ \cline{2-5} &DDM-OCI+OOB & 0.979$\pm$0.007 & 0.049$\pm$0.016 & 0.188$\pm$0.033 \\ \cline{2-5} &LFR+OB & 0.870$\pm$0.004 & 0.183$\pm$0.019 & 0.334$\pm$0.022\\ \cline{2-5} &LFR+OOB & 0.952$\pm$0.011 & 0.061$\pm$0.023 & 0.221$\pm$0.042\\ \cline{2-5} &PAUC-PH+OB & 0.889$\pm$0.004 & 0.168$\pm$0.008 & 0.316$\pm$0.007\\ \cline{2-5} &PAUC-PH+OOB & \textbf{0.992$\pm$0.002} & 0.692$\pm$0.013 & 0.828$\pm$0.008 \\ \cline{2-5} &RLSACP & 0.962$\pm$0.004 & 0.072$\pm$0.014 & 0.217$\pm$0.026 \\ \cline{2-5} &ESOS-ELM & 0.176$\pm$0.136 & \textbf{0.999$\pm$0.001} & 0.358$\pm$0.192 \\ \cline{2-5} &OB & 0.889$\pm$0.004 & 0.170$\pm$0.009 & 0.318$\pm$0.009 \\ \cline{2-5} &OOB & \textbf{0.992$\pm$0.002} & 0.699$\pm$0.014 & \textbf{0.832$\pm$0.008}\\ \hline \multirow{10}{*}{\begin{turn}{90}SINE1g\end{turn}} & DDM-OCI+OB & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000 & 0.000$\pm$0.000\\ \cline{2-5} &DDM-OCI+OOB & 0.997$\pm$0.004 & 0.008$\pm$0.005 & 0.050$\pm$0.016\\ \cline{2-5} &LFR+OB & 0.972$\pm$0.006 & 0.031$\pm$0.027 & 0.138$\pm$0.079\\ \cline{2-5} &LFR+OOB & 0.956$\pm$0.011 & 0.036$\pm$0.026 & 0.150$\pm$0.076\\ \cline{2-5} &PAUC-PH+OB & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000 & 0.000$\pm$0.000\\ \cline{2-5} &PAUC-PH+OOB & 0.989$\pm$0.001 & 0.708$\pm$0.002
& \textbf{0.835$\pm$0.002}\\ \cline{2-5} &RLSACP & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.001 & 0.002$\pm$0.013\\ \cline{2-5} &ESOS-ELM & 0.109$\pm$0.102 & \textbf{0.997$\pm$0.000} & 0.273$\pm$0.165\\ \cline{2-5} &OB & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000 & 0.000$\pm$0.000\\ \cline{2-5} &OOB & 0.989$\pm$0.002 & 0.709$\pm$0.002 & \textbf{0.835$\pm$0.001} \\ \hline \multirow{10}{*}{\begin{turn}{90}SEA\end{turn}} &DDM-OCI+OB & 0.003$\pm$0.031 & \textbf{0.999$\pm$0.000} & 0.007$\pm$0.055\\ \cline{2-5} &DDM-OCI+OOB & 0.146$\pm$0.072 & 0.965$\pm$0.013 & 0.344$\pm$0.086\\ \cline{2-5} &LFR+OB & 0.020$\pm$0.009 & 0.996$\pm$0.001 & 0.113$\pm$0.053\\ \cline{2-5} &LFR+OOB & 0.059$\pm$0.031 & 0.981$\pm$0.007 & 0.221$\pm$0.054\\ \cline{2-5} &PAUC-PH+OB & 0.323$\pm$0.010 & 0.995$\pm$0.001 & 0.559$\pm$0.009\\ \cline{2-5} &PAUC-PH+OOB & 0.514$\pm$0.015 & 0.943$\pm$0.007 & \textbf{0.688$\pm$0.010}\\ \cline{2-5} &RLSACP & 0.021$\pm$0.023 & 0.993$\pm$0.007 & 0.070$\pm$0.077\\ \cline{2-5} &ESOS-ELM & \textbf{0.608$\pm$0.214} & 0.829$\pm$0.140 & \textbf{0.681$\pm$0.142}\\ \cline{2-5} &OB & 0.324$\pm$0.009 & 0.996$\pm$0.001 & 0.561$\pm$0.008\\ \cline{2-5} &OOB & 0.515$\pm$0.016 & 0.945$\pm$0.006 & \textbf{0.689$\pm$0.010} \\ \hline \multirow{10}{*}{\begin{turn}{90}SEAg\end{turn}} & DDM-OCI+OB & 0.040$\pm$0.073 & 0.998$\pm$0.001 & 0.124$\pm$0.136\\ \cline{2-5} &DDM-OCI+OOB & 0.142$\pm$0.071 & 0.973$\pm$0.014 & 0.334$\pm$0.096\\ \cline{2-5} &LFR+OB & 0.003$\pm$0.006 & \textbf{0.999$\pm$0.000} & 0.019$\pm$0.035\\ \cline{2-5} &LFR+OOB & 0.076$\pm$0.084 & 0.976$\pm$0.018 & 0.217$\pm$0.123\\ \cline{2-5} &PAUC-PH+OB & 0.365$\pm$0.029 & 0.997$\pm$0.000 & 0.600$\pm$0.023\\ \cline{2-5} &PAUC-PH+OOB & 0.489$\pm$0.024 & 0.951$\pm$0.011 & \textbf{0.679$\pm$0.017}\\ \cline{2-5} &RLSACP & 0.002$\pm$0.006 & \textbf{0.999$\pm$0.001} & 0.011$\pm$0.035\\ \cline{2-5} &ESOS-ELM & \textbf{0.562$\pm$0.208} & 0.809$\pm$0.143 & 0.646$\pm$0.130\\ \cline{2-5} &OB & 0.371$\pm$0.029 & 0.997$\pm$0.001 & 
0.605$\pm$0.023\\ \cline{2-5} &OOB & 0.484$\pm$0.032 & 0.951$\pm$0.012 & \textbf{0.675$\pm$0.022} \\ \hline \end{tabular} } \end{table} \begin{table*}[htp] \caption{Artificial data streams with $p\left( \mathbf{x} \mid y \right)$ concept drift.} \label{tab:pxydata} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline ID & Data& Speed & \multicolumn{2}{|c|}{Class +1} & \multicolumn{2}{|c|}{Class -1}\\ \cline{4-7} &&& Old concept & New concept & Old concept & New concept \\ \hline 1&SINE1 & Abrupt & \multirow{2}{*}{Points below $y=\sin \left ( x \right )$} & \multirow{2}{*}{Points below $y=\sin \left ( x \right )$} & Points above or on $y=\sin \left ( x \right )$ & Points above or on $y=\sin \left ( x \right )$ \\ \cline{1-3} 2&SINE1g & Gradual &&&and $P\left ( x<0.5 \right ) = 0.9$&and $P\left ( x<0.5 \right ) = 0.1$ \\ \hline 3&SEA & Abrupt & \multirow{2}{*}{$x_1+x_2 \leq 7$} & \multirow{2}{*}{$x_1+x_2 \leq 7$} & $x_1+x_2 > 7$ & $x_1+x_2 > 7$ \\ \cline{1-3} 4&SEAg & Gradual &&&and $P\left ( x_1<5 \right ) = 0.9$&and $P\left ( x_1<5 \right ) = 0.1$ \\ \hline \end{tabular} \end{table*} In terms of minority-class recall, ESOS-ELM performs significantly best, but it sacrifices majority-class recall, especially in SINE1 and SINE1g. In terms of G-mean, OOB and OOB using PAUC-PH perform significantly best, which shows that they best balance the performance between classes. It is worth noting that PAUC-PH is the drift detection method with 0\% TDR according to Table~\ref{tab:pyDetectors}, which means that OOB plays the main role in learning. This also explains why OOB and OOB using PAUC-PH have very close performance. None of the OB and OOB models using the other active drift detectors show competitive recall and G-mean.
Especially for those using DDM-OCI and LFR, the high number of false alarms causes too much resetting and performance loss; OOB can increase the chance of a false alarm, because more minority-class examples join the training. Therefore, we conclude that, for the $P\left( y \right)$ type of concept drift, it is not necessary to apply drift detection techniques that are not specifically designed for class imbalance changes; the use of these drift detectors could even be detrimental to the predictive performance, due to false alarms and performance resetting; the adaptive resampling in OOB is sufficient to deal with the change and maintain the predictive performance; and when using OOB with other active concept drift detectors, the number of false alarms and the resulting performance resetting need to be carefully considered. \noindent C.2. $\mathbf p\left(\mathbf{x} \mid y\right)$ \textbf{Concept Drift} The data streams in this section only involve the $p\left( \mathbf{x} \mid y \right)$ type of concept drift, without $P\left( y \right)$ and $P\left( y \mid \mathbf{x} \right)$ changes. The class imbalance ratio is fixed to 1:9 and we let the positive class be the minority, so that the data stream is constantly imbalanced. The concept drift in each data stream is controlled through $p\left ( \mathbf{x} \right )$ of the negative class, as shown in Table~\ref{tab:pxydata}. Table~\ref{tab:pxyDetectors} compares the detection performance of the three active concept drift detectors. Similar to our previous results, DDM-OCI and LFR are more sensitive to $p\left( \mathbf{x} \mid y \right)$ changes than PAUC-PH. When DDM-OCI and LFR work with OOB, their TDR reaches 100\%, and LFR has higher FA and shorter DoD than DDM-OCI, due to the larger number of indicators it monitors. PAUC-PH shows 0\% TDR in most cases when working with both OB and OOB.
Different from the $P\left( y \right)$ changes, when DDM-OCI and LFR work with OB, their TDR is rather low, which suggests that their sensitivity depends on the class imbalance technique used. Unlike the cases with class imbalance changes, where it is possible for minority-class examples to become more frequent, the data streams generated in this section have a fixed minority class with a constantly small prior probability. In other words, it is more difficult to recognize examples from this minority class, which indirectly affects the detection sensitivity of DDM-OCI and LFR. When oversampling is applied, introducing more training examples for the minority class, the performance metrics (G-mean, recall and precision) monitored by DDM-OCI and LFR can be substantially improved, which also increases the possibility of reporting a concept drift. This explains the low detection rate of DDM-OCI and LFR when working with OB and their high detection rate when working with OOB. \begin{table}[htp] \caption{Performance of the 3 active concept drift detectors on artificial data with $p\left(\mathbf{x} \mid y\right)$ changes: TDR, FA and DoD.
The `-' symbol indicates that no concept drift is detected.} \label{tab:pxyDetectors} \centering \begin{tabular}{|c|c|c|c|c|} \hline & Method & TDR & FA & DoD \\ \hline \multirow{6}{*}{\begin{turn}{90}SINE1\end{turn}} & DDM-OCI+OB & 0\% & 0 & - \\ \cline{2-5} &DDM-OCI+OOB & 100\% & 1.25 & 594 \\ \cline{2-5} &LFR+OB & 0\% & 0.05 & -\\ \cline{2-5} &LFR+OOB & 100\% & 3.99 & 528\\ \cline{2-5} &PAUC-PH+OB & 4\% & 0.45 & 232\\ \cline{2-5} &PAUC-PH+OOB & 0\% & 0.45 & - \\ \hline \multirow{6}{*}{\begin{turn}{90}SINE1g\end{turn}} & DDM-OCI+OB & 0\% & 0 & - \\ \cline{2-5} &DDM-OCI+OOB & 100\% & 1.37 & 387 \\ \cline{2-5} &LFR+OB & 0\% & 0 & -\\ \cline{2-5} &LFR+OOB & 100\% & 5.45 & 258 \\ \cline{2-5} &PAUC-PH+OB & 0\% & 1.04 & -\\ \cline{2-5} &PAUC-PH+OOB & 0\% & 1 & - \\ \hline \multirow{6}{*}{\begin{turn}{90}SEA\end{turn}} &DDM-OCI+OB & 16\% & 1 & 1394\\ \cline{2-5} &DDM-OCI+OOB & 100\% & 4.03 & 473\\ \cline{2-5} &LFR+OB & 100\% & 0.31 & 52\\ \cline{2-5} &LFR+OOB & 100\% & 13.48 & 59\\ \cline{2-5} &PAUC-PH+OB & 0\% & 0 & -\\ \cline{2-5} &PAUC-PH+OOB & 0\% & 0.85 & - \\ \hline \multirow{6}{*}{\begin{turn}{90}SEAg\end{turn}} & DDM-OCI+OB & 90\% & 0.15 & 238\\ \cline{2-5} & DDM-OCI+OOB & 100\% & 4.03 & 279\\ \cline{2-5} & LFR+OB & 29\% & 0 & 1154\\ \cline{2-5} & LFR+OOB & 100\% & 12.75 & 196\\ \cline{2-5} & PAUC-PH+OB & 0\% & 1 & -\\ \cline{2-5} & PAUC-PH+OOB & 0\% & 1 & - \\ \hline \end{tabular} \end{table} Table~\ref{tab:pxyLearners} compares recall and G-mean of all models over the new data concept. As we expected, almost all OB models show significantly worse minority-class recall and G-mean. On SINE1 and SINE1g data, minority-class recall of OB models is as low as 0, which may hinder the detection of any concept drift. Among the OOB models, those using DDM-OCI and LFR perform significantly worse than OOB using PAUC-PH and OOB itself, and the latter two show very close performance. 
This is because DDM-OCI and LFR trigger false alarms and cause the model to be reset multiple times. With each reset, useful and still-valid information learnt in the past is forgotten. As for the two passive models, RLSACP and ESOS-ELM do not perform well compared to OOB. Generally speaking, for imbalanced data streams with $p\left( \mathbf{x} \mid y \right)$ changes, class imbalance seems to be a more important issue than concept drift, considering that the learning model that triggers no concept drift detection achieves the best performance. Besides, while the adopted class imbalance technique can improve the final prediction, it can also improve the performance of active concept drift detection methods, depending on their working mechanism. \begin{table}[htp] \caption{Performance of online learners on artificial data with $p\left(\mathbf{x} \mid y\right)$ changes: means and standard deviations of average recall of each class and average G-mean over the new data concept.
The significantly best values among all methods are shown in bold italics.} \label{tab:pxyLearners} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline & Method & Class+1 Recall & Class-1 Recall & G-mean \\ \hline \multirow{10}{*}{\begin{turn}{90}SINE1\end{turn}} & DDM-OCI+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000 \\ \cline{2-5} &DDM-OCI+OOB & 0.036$\pm$0.025 & 0.997$\pm$0.002 & 0.145$\pm$0.052 \\ \cline{2-5} &LFR+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &LFR+OOB & 0.061$\pm$0.036 & 0.994$\pm$0.005 & 0.200$\pm$0.066\\ \cline{2-5} &PAUC-PH+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &PAUC-PH+OOB & \textbf{0.689$\pm$0.038} & 0.985$\pm$0.004 & \textbf{0.811$\pm$0.027} \\ \cline{2-5} &RLSACP & 0.090$\pm$0.028 & 0.939$\pm$0.012 & 0.251$\pm$0.045 \\ \cline{2-5} &ESOS-ELM & 0.058$\pm$0.122 & \textbf{1.000$\pm$0.000} & 0.113$\pm$0.208 \\ \cline{2-5} &OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000 \\ \cline{2-5} &OOB & \textbf{0.696$\pm$0.020} & 0.985$\pm$0.004 & \textbf{0.817$\pm$0.013}\\ \hline \multirow{10}{*}{\begin{turn}{90}SINE1g\end{turn}} & DDM-OCI+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &DDM-OCI+OOB & 0.035$\pm$0.064 & 0.993$\pm$0.006 & 0.096$\pm$0.135\\ \cline{2-5} &LFR+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &LFR+OOB & 0.038$\pm$0.062 & 0.992$\pm$0.008 & 0.111$\pm$0.132\\ \cline{2-5} &PAUC-PH+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &PAUC-PH+OOB & \textbf{0.801$\pm$0.032} & 0.988$\pm$0.003 & \textbf{0.884$\pm$0.019}\\ \cline{2-5} &RLSACP & 0.072$\pm$0.049 & 0.952$\pm$0.009 & 0.173$\pm$0.102\\ \cline{2-5} &ESOS-ELM & 0.077$\pm$0.112 & 0.991$\pm$0.035 & 0.162$\pm$0.215\\ \cline{2-5} &OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &OOB & \textbf{0.802$\pm$0.034} & 
0.988$\pm$0.003 & \textbf{0.884$\pm$0.021} \\ \hline \multirow{10}{*}{\begin{turn}{90}SEA\end{turn}} &DDM-OCI+OB & 0.001$\pm$0.000 & \textbf{0.999$\pm$0.000} & 0.002$\pm$0.006\\ \cline{2-5} &DDM-OCI+OOB & 0.144$\pm$0.027 & 0.973$\pm$0.007 & 0.332$\pm$0.040\\ \cline{2-5} &LFR+OB & 0.036$\pm$0.012 & 0.984$\pm$0.005 & 0.144$\pm$0.048\\ \cline{2-5} &LFR+OOB & 0.085$\pm$0.039 & 0.971$\pm$0.015 & 0.243$\pm$0.069\\ \cline{2-5} &PAUC-PH+OB & 0.130$\pm$0.027 & 0.983$\pm$0.004 & 0.341$\pm$0.042\\ \cline{2-5} &PAUC-PH+OOB & 0.459$\pm$0.044 & 0.923$\pm$0.010 & 0.645$\pm$0.030\\ \cline{2-5} &RLSACP & 0.000$\pm$0.001 & \textbf{0.999$\pm$0.001} & 0.001$\pm$0.006\\ \cline{2-5} &ESOS-ELM & 0.202$\pm$0.158 & 0.967$\pm$0.071 & 0.394$\pm$0.167\\ \cline{2-5} &OB & 0.130$\pm$0.027 & 0.983$\pm$0.004 & 0.341$\pm$0.042\\ \cline{2-5} &OOB & \textbf{0.477$\pm$0.031} & 0.919$\pm$0.010 & \textbf{0.657$\pm$0.021} \\ \hline \multirow{10}{*}{\begin{turn}{90}SEAg\end{turn}} & DDM-OCI+OB & 0.002$\pm$0.007 & \textbf{1.000$\pm$0.000} & 0.010$\pm$0.035\\ \cline{2-5} &DDM-OCI+OOB & 0.100$\pm$0.040 & 0.978$\pm$0.008 & 0.257$\pm$0.066\\ \cline{2-5} &LFR+OB & 0.101$\pm$0.027 & 0.999$\pm$0.000 & 0.269$\pm$0.058\\ \cline{2-5} &LFR+OOB & 0.050$\pm$0.029 & 0.980$\pm$0.011 & 0.182$\pm$0.065\\ \cline{2-5} &PAUC-PH+OB & 0.107$\pm$0.025 & 0.999$\pm$0.000 & 0.278$\pm$0.046\\ \cline{2-5} &PAUC-PH+OOB & \textbf{0.348$\pm$0.023} & 0.939$\pm$0.017 & \textbf{0.553$\pm$0.019}\\ \cline{2-5} &RLSACP & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.002\\ \cline{2-5} &ESOS-ELM & 0.183$\pm$0.137 & 0.964$\pm$0.090 & 0.368$\pm$0.161\\ \cline{2-5} &OB & 0.106$\pm$0.021 & 0.999$\pm$0.000 & 0.279$\pm$0.040\\ \cline{2-5} &OOB & \textbf{0.345$\pm$0.027} & 0.943$\pm$0.018 & \textbf{0.552$\pm$0.022} \\ \hline \end{tabular} } \end{table} \noindent C.3. 
$\mathbf P\left(y \mid \mathbf{x}\right)$ \textbf{Concept Drift} \begin{table*}[htp] \caption{Artificial data streams with $P\left( y \mid \mathbf{x} \right)$ concept drift.} \label{tab:pyxdata} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline ID & Data& Speed & \multicolumn{2}{|c|}{Class +1} & \multicolumn{2}{|c|}{Class -1}\\ \cline{4-7} &&& Old concept & New concept & Old concept & New concept \\ \hline 1&SINE1 & Abrupt & \multirow{2}{*}{Points below $y=\sin \left ( x \right )$} & \multirow{2}{*}{Points above/on $y=\sin \left ( x \right )$} & \multirow{2}{*}{Points above/on $y=\sin \left ( x \right )$} & \multirow{2}{*}{Points below $y=\sin \left ( x \right )$} \\ \cline{1-3} 2&SINE1g & Gradual &&&& \\ \hline 3&SEA & Abrupt & \multirow{2}{*}{$x_1+x_2 \leq 7$} & \multirow{2}{*}{$x_1+x_2 \leq 13$} & \multirow{2}{*}{$x_1+x_2 > 7$} & \multirow{2}{*}{$x_1+x_2 > 13$} \\ \cline{1-3} 4&SEAg & Gradual &&&&\\ \hline \end{tabular} \end{table*} The data streams in this section only involve $P\left( y \mid \mathbf{x} \right)$ type of concept drift, without $P\left( y \right)$ and $p\left( \mathbf{x} \mid y \right)$ changes. Following the settings in Section~\ref{subsec:artificial_analysis}.2, we fix the class imbalance ratio to 1:9 and let the positive class be the minority, so that the data stream is constantly imbalanced. As shown in Table~\ref{tab:pyxdata}, the data distribution in SINE1 and SINE1g involves a concept swap, and this change occurs probabilistically in SINE1g; the data distribution in SEA and SEAg has a concept threshold moving, and this change occurs continuously in SEAg. The change in SEA and SEAg is less severe than the change in SINE1 and SINE1g, because some of the examples from the old concept are still valid under the new concept after the threshold moves completely. 
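The stream construction just described can be sketched in code. The following Python sketch is our own illustration (not the generator used in the experiments; the feature ranges and the rejection-sampling step are assumptions) of a SEA-type stream with a fixed 1:9 class ratio and an abrupt threshold move from $x_1+x_2 \leq 7$ to $x_1+x_2 \leq 13$:

```python
import random

def sea_stream(n, drift_at, theta_old=7.0, theta_new=13.0,
               pos_ratio=0.1, seed=0):
    """SEA-like stream with an abrupt P(y|x) drift: the boundary
    threshold on x1 + x2 moves from theta_old to theta_new at step
    `drift_at`, while the 1:9 class ratio stays fixed over time."""
    rng = random.Random(seed)
    stream = []
    for step in range(n):
        theta = theta_old if step < drift_at else theta_new
        # draw the label first so the imbalance ratio is constant
        label = +1 if rng.random() < pos_ratio else -1
        # rejection-sample features consistent with the current concept
        while True:
            x1, x2 = rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)
            if (label == +1) == (x1 + x2 <= theta):
                break
        stream.append(((x1, x2), label))
    return stream
```

A gradual variant in the spirit of SEAg could be obtained by mixing the two thresholds with a probability that grows during the drifting period.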
The concept drift discussed in this section belongs to the real concept drift category, which affects the classification boundary and is expected to be captured by all concept drift detectors. According to Table~\ref{tab:pyxDetectors}, DDM-OCI and LFR have difficulty detecting the concept drift when working with OB, because of the poor recall and G-mean produced by OB, as also observed and explained in Section~\ref{subsec:artificial_analysis}.2. When DDM-OCI and LFR work with OOB, their detection rate TDR is greatly improved (above 90\% in most cases), because the improved performance metrics facilitate the detection. LFR is more sensitive to the change, which produces higher FA and shorter DoD. In contrast to the previous observations on concept drift detection performance, PAUC-PH working with OB produces 100\% TDR and low FA on data streams SINE1 and SINE1g, but PAUC-PH does not work well with OOB on the same data. It is interesting to see that oversampling does not always play a positive role in drift detection. One possible reason is that a class imbalance technique may sometimes hide the performance drop caused by the real concept drift while it tries to maintain the overall predictive performance, especially for AUC-type metrics in our case. On data streams SEA and SEAg, PAUC-PH does not report any concept drift, probably due to the less severe concept drift. \begin{table}[htp] \caption{Performance of the 3 active concept drift detectors on artificial data with $P\left(y \mid \mathbf{x}\right)$ changes: TDR, FA and DoD.
The `-’ symbol indicates that no concept drift is detected.} \label{tab:pyxDetectors} \centering \begin{tabular}{|c|c|c|c|c|} \hline & Method & TDR & FA & DoD \\ \hline \multirow{6}{*}{\begin{turn}{90}SINE1\end{turn}} & DDM-OCI+OB & 0\% & 0 & - \\ \cline{2-5} &DDM-OCI+OOB & 97\% & 1.02 & 1166 \\ \cline{2-5} &LFR+OB & 0\% & 0 & -\\ \cline{2-5} &LFR+OOB & 91\% & 3.92 & 783\\ \cline{2-5} &PAUC-PH+OB & 100\% & 1.03 & 884\\ \cline{2-5} &PAUC-PH+OOB & 2\% & 1.28 & 1180 \\ \hline \multirow{6}{*}{\begin{turn}{90}SINE1g\end{turn}} & DDM-OCI+OB & 0\% & 0 & - \\ \cline{2-5} &DDM-OCI+OOB & 69\% & 2.16 & 165 \\ \cline{2-5} &LFR+OB & 0\% & 1 & -\\ \cline{2-5} &LFR+OOB & 85\% & 6.21 & 306 \\ \cline{2-5} &PAUC-PH+OB & 100\% & 1.03 & 1119\\ \cline{2-5} &PAUC-PH+OOB & 0\% & 1 & - \\ \hline \multirow{6}{*}{\begin{turn}{90}SEA\end{turn}} &DDM-OCI+OB & 61\% & 0.39 & 23\\ \cline{2-5} &DDM-OCI+OOB & 100\% & 3.87 & 151\\ \cline{2-5} &LFR+OB & 10\% & 0.02 & 865\\ \cline{2-5} &LFR+OOB & 100\% & 13.73 & 65\\ \cline{2-5} &PAUC-PH+OB & 0\% & 1 & -\\ \cline{2-5} &PAUC-PH+OOB & 0\% & 1 & -\\ \hline \multirow{6}{*}{\begin{turn}{90}SEAg\end{turn}} & DDM-OCI+OB & 100\% & 0 & 71\\ \cline{2-5} & DDM-OCI+OOB & 100\% & 3.9 & 342\\ \cline{2-5} & LFR+OB & 3\% & 0.02 & 1036\\ \cline{2-5} & LFR+OOB & 100\% & 13.59 & 123\\ \cline{2-5} & PAUC-PH+OB & 0\% & 1 & -\\ \cline{2-5} & PAUC-PH+OOB & 0\% & 1 & - \\ \hline \end{tabular} \end{table} \begin{table}[htp] \caption{Performance of online learners on artificial data with $P\left(y \mid \mathbf{x}\right)$ changes: means and standard deviations of average recall of each class and average G-mean over the new data concept. 
The significantly best values among all methods are shown in bold italics.} \label{tab:pyxLearners} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline & Method & Class+1 Recall & Class-1 Recall & G-mean \\ \hline \multirow{10}{*}{\begin{turn}{90}SINE1\end{turn}} & DDM-OCI+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000 \\ \cline{2-5} &DDM-OCI+OOB & 0.004$\pm$0.003 & 0.998$\pm$0.002 & 0.030$\pm$0.016 \\ \cline{2-5} &LFR+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &LFR+OOB & 0.013$\pm$0.010 & 0.996$\pm$0.006 & 0.062$\pm$0.036\\ \cline{2-5} &PAUC-PH+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &PAUC-PH+OOB & \textbf{0.031$\pm$0.013} & 0.941$\pm$0.009 & \textbf{0.098$\pm$0.026} \\ \cline{2-5} &RLSACP & 0.000$\pm$0.001 & \textbf{0.999$\pm$0.001} & 0.003$\pm$0.010 \\ \cline{2-5} &ESOS-ELM & 0.000$\pm$0.000 & 0.997$\pm$0.003 & 0.000$\pm$0.000 \\ \cline{2-5} &OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000 \\ \cline{2-5} &OOB & \textbf{0.033$\pm$0.012} & 0.942$\pm$0.009 & \textbf{0.102$\pm$0.022}\\ \hline \multirow{10}{*}{\begin{turn}{90}SINE1g\end{turn}} & DDM-OCI+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &DDM-OCI+OOB & 0.014$\pm$0.017 & 0.993$\pm$0.006 & 0.069$\pm$0.074\\ \cline{2-5} &LFR+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &LFR+OOB & 0.019$\pm$0.018 & 0.993$\pm$0.006 & 0.086$\pm$0.077\\ \cline{2-5} &PAUC-PH+OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &PAUC-PH+OOB & \textbf{0.031$\pm$0.011} & 0.993$\pm$0.002 & \textbf{0.103$\pm$0.026}\\ \cline{2-5} &RLSACP & 0.000$\pm$0.001 & \textbf{1.000$\pm$0.000} & 0.001$\pm$0.008\\ \cline{2-5} &ESOS-ELM & 0.000$\pm$0.000 & 0.907$\pm$0.140 & 0.000$\pm$0.000\\ \cline{2-5} &OB & 0.000$\pm$0.000 & \textbf{1.000$\pm$0.000} & 0.000$\pm$0.000\\ \cline{2-5} &OOB & 0.027$\pm$0.010 & 
0.995$\pm$0.002 & 0.093$\pm$0.028 \\ \hline \multirow{10}{*}{\begin{turn}{90}SEA\end{turn}} &DDM-OCI+OB & 0.013$\pm$0.022 & \textbf{0.999$\pm$0.001} & 0.050$\pm$0.085\\ \cline{2-5} &DDM-OCI+OOB & 0.110$\pm$0.031 & 0.968$\pm$0.008 & 0.311$\pm$0.057\\ \cline{2-5} &LFR+OB & 0.149$\pm$0.025 & \textbf{0.999$\pm$0.000} & 0.378$\pm$0.036\\ \cline{2-5} &LFR+OOB & 0.031$\pm$0.022 & 0.964$\pm$0.016 & 0.144$\pm$0.071\\ \cline{2-5} &PAUC-PH+OB & 0.153$\pm$0.023 & \textbf{0.999$\pm$0.000} & 0.384$\pm$0.031\\ \cline{2-5} &PAUC-PH+OOB & \textbf{0.292$\pm$0.017} & 0.967$\pm$0.008 & \textbf{0.530$\pm$0.015}\\ \cline{2-5} &RLSACP & 0.013$\pm$0.013 & 0.995$\pm$0.001 & 0.072$\pm$0.063\\ \cline{2-5} &ESOS-ELM & 0.065$\pm$0.068 & 0.997$\pm$0.022 & 0.222$\pm$0.106\\ \cline{2-5} &OB & 0.152$\pm$0.023 & \textbf{0.999$\pm$0.000} & 0.383$\pm$0.032\\ \cline{2-5} &OOB & 0.287$\pm$0.014 & 0.966$\pm$0.008 & 0.525$\pm$0.012 \\ \hline \multirow{10}{*}{\begin{turn}{90}SEAg\end{turn}} & DDM-OCI+OB & 0.000$\pm$0.002 & \textbf{1.000$\pm$0.000} & 0.001$\pm$0.013\\ \cline{2-5} &DDM-OCI+OOB & 0.042$\pm$0.022 & 0.988$\pm$0.006 & 0.163$\pm$0.059\\ \cline{2-5} &LFR+OB & 0.145$\pm$0.032 & 0.999$\pm$0.000 & 0.356$\pm$0.066\\ \cline{2-5} &LFR+OOB & 0.024$\pm$0.018 & 0.985$\pm$0.006 & 0.112$\pm$0.065\\ \cline{2-5} &PAUC-PH+OB & 0.152$\pm$0.019 & 0.999$\pm$0.000 & 0.370$\pm$0.027\\ \cline{2-5} &PAUC-PH+OOB & \textbf{0.288$\pm$0.034} & 0.974$\pm$0.010 & \textbf{0.512$\pm$0.036}\\ \cline{2-5} &RLSACP & 0.009$\pm$0.018 & \textbf{1.000$\pm$0.000} & 0.043$\pm$0.077\\ \cline{2-5} &ESOS-ELM & 0.138$\pm$0.088 & 0.993$\pm$0.057 & 0.336$\pm$0.106\\ \cline{2-5} &OB & 0.149$\pm$0.025 & 0.999$\pm$0.000 & 0.364$\pm$0.042\\ \cline{2-5} &OOB & \textbf{0.282$\pm$0.032} & 0.974$\pm$0.008 & \textbf{0.506$\pm$0.034} \\ \hline \end{tabular} } \end{table} \begin{figure*} \caption{Time-decayed G-mean curves (decay factor = 0.995) from OOB, PAUC-PH+OOB and ESOS-ELM on real-world data.} \label{fig:realdata} \end{figure*} The recall and 
G-mean over the new data concept in Table~\ref{tab:pyxLearners} further confirm the above analysis. The OB models produce very low minority-class recall and thus low G-mean. RLSACP and ESOS-ELM do not perform well on the new data concept either. Comparing the models that capture concept drifts (DDM-OCI+OOB, LFR+OOB, PAUC-PH+OB) with the models that do not report any concept drift (PAUC-PH+OOB, OOB), it seems that class imbalance poses a more difficult learning issue than the real concept drift in our cases. The models solely tackling class imbalance produce the significantly best recall and G-mean. The rather low imbalance ratio (i.e. 1:9) could be a reason. It would be worth examining various imbalance levels in data with concept drift in our future work, in order to find out when it is worthwhile considering concept drift in imbalanced data streams. Comparing the results in Table~\ref{tab:pyxLearners}, Table~\ref{tab:pxyLearners} and Table~\ref{tab:pyLearners}, the $P\left( y \mid \mathbf{x} \right)$ type of concept drift indeed leads to the largest performance reduction. This is consistent with our understanding that the real concept drift is the most radical type of change in data. However, existing approaches do not seem to tackle it well when data streams are very imbalanced. To develop better concept drift detection methods, the key issues include how best to make them work together with class imbalance techniques and how to tackle the performance loss caused by false alarms. \subsection{Comparative Study on Real-World Data} \label{subsec:real_analysis} After the detailed analysis of the three types of concept drift, we now look into the performance of the above learning models on the three real-world data sets (PAKDD~\cite{Linhart2010ln}, Weather~\cite{Ditzler2013nh} and Tweet~\cite{LiWDWC12}) described in Section~\ref{subsec:data}.
Based on the experimental results on the artificial data, we focus here on the best active (PAUC-PH+OOB) and the best passive (ESOS-ELM) concept drift detection methods for a clear observation, in comparison with OOB. The three methods use the same parameter settings as before. The initialisation and validation data required by ESOS-ELM are the first 2\% of examples of each data set. Without knowing the true concept drifts in real-world data, we calculate and track the time-decayed G-mean with the decay factor set to 0.995, which means that the old performance is forgotten at a rate of 0.5\%. All the compared metrics are averages over 100 runs in the following figures. Fig.~\ref{fig:realdata} presents the time-decayed G-mean curves from OOB, PAUC-PH+OOB and ESOS-ELM on the three real-world data sets. The average number of reported drifts by PAUC-PH is 1, 3 and 1 on Weather, PAKDD and Tweet data respectively. Compared to the artificial cases, we obtain some similar results: the passive approach ESOS-ELM does not perform as well as the other two methods; OOB and PAUC-PH+OOB show very close G-mean over time on Weather and PAKDD data, which suggests the importance of tackling class imbalance adaptively. In the PAKDD plot, we can see that the G-mean level is relatively stable without significant drops; in contrast, G-mean in the Tweet plot keeps decreasing. This may suggest that the concept drift in PAKDD is less significant or influential than that in Tweet. Compared to the gradual market and environment change in PAKDD, the tweet topic change can be much faster and more noticeable. Therefore, although PAUC-PH detects 3 concept drifts in PAKDD, the two methods OOB and PAUC-PH+OOB do not show much difference. On Tweet, PAUC-PH+OOB presents better G-mean than using OOB alone, showing the positive effect of the active concept drift detector in fast changing data streams.
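The time-decayed G-mean tracking described above can be sketched as follows (an illustrative implementation, not the code used in the experiments): each class keeps a recall estimate that decays with factor 0.995, so roughly 0.5\% of the old estimate is forgotten per example of that class, and G-mean is the geometric mean of the two recalls.

```python
import math

class DecayedGMean:
    """Track per-class time-decayed recall and their geometric mean.

    With decay factor 0.995, about 0.5% of the old recall estimate of
    a class is forgotten at each example of that class.
    """
    def __init__(self, decay=0.995):
        self.decay = decay
        self.recall = {+1: 0.0, -1: 0.0}

    def update(self, y_true, y_pred):
        # only the recall of the true class of this example is updated
        hit = 1.0 if y_pred == y_true else 0.0
        self.recall[y_true] = self.decay * self.recall[y_true] \
            + (1.0 - self.decay) * hit
        return self.gmean()

    def gmean(self):
        # geometric mean of the two decayed recalls
        return math.sqrt(self.recall[+1] * self.recall[-1])
```

A decayed metric of this kind reacts to recent mistakes quickly, which is why a drop in its curve can signal a drift even without a ground-truth drift location.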
\subsection{Further Discussions} \label{subsec:discussion} In this section, we summarize and further discuss the results of the above comparative study on the artificial and real-world data. We also answer the research questions proposed at the beginning of this paper. When dealing with imbalanced data streams with concept drift, we have obtained the following: \begin{itemize} \item When both class imbalance and concept drift exist, class imbalance status and class imbalance changes are shown to be more crucial issues than the traditional concept drift (i.e. $p\left( \mathbf{x} \mid y \right)$ and $P\left( y \mid \mathbf{x} \right)$ changes) in terms of the online prediction performance. It is necessary to adopt adaptive class imbalance techniques (e.g. OOB discussed in our experiment), rather than using concept drift detection methods alone (e.g. DDM-OCI, LFR). Most existing papers that have proposed new concept drift detection methods for imbalanced data did not consider the effect of class imbalance techniques on final prediction and concept drift detection. \item $P\left( y \mid \mathbf{x} \right)$ concept drift (i.e. real concept drift) is the most severe type of change in data, compared to $p\left( \mathbf{x} \mid y \right)$ and $P\left( y \right)$ concept drift. This is based on the observed final prediction performance. For all three types of concept drift, existing concept drift approaches do not show much benefit in performance improvement. Concept drift is hard to detect when no class imbalance technique is applied. Their drift detection performance is affected by the class imbalance technique, depending on their detection mechanism. \item For $P\left( y \right)$ concept drift, it is not necessary to apply any concept drift detection methods that are not designed for class imbalance changes, due to their false alarms and model resetting. It is crucial to detect and handle the class imbalance change in time.
\item From the results on real-world data, we see that the effectiveness of traditional concept drift detectors (e.g. PAUC-PH) depends on the type of concept drift. For fast and significant concept drift, applying PAUC-PH seems to be more beneficial to the prediction performance. \item Among existing methods designed for imbalanced data with concept drift (4 active methods and 2 passive methods), the passive methods (i.e. ESOS-ELM and RLSACP) do not perform well in general. Although they contain both class imbalance and concept drift techniques, firstly, their class imbalance technique is not effectively adaptive to class imbalance changes, so that a wrong imbalance status might be used during learning; secondly, they are restricted to the use of certain perceptron-based classifiers, so that the disadvantages of the classifiers are also inherited by the online model. For example, the training of OS-ELM in ESOS-ELM requires initialisation and validation data sets reflecting the correct data concepts, and the weighted OS-ELM was found to over-emphasize the minority class and sometimes present large performance variance in earlier studies~\cite{Wang2014vb}. \item Among the three active methods discussed in this work, namely DDM-OCI, LFR and PAUC-PH, DDM-OCI and LFR are more sensitive to concept drift than PAUC-PH, with a higher detection rate but also higher false alarms. In addition, the detection performance of DDM-OCI and LFR can be greatly improved by OOB. The explanation can be found in the previous analysis. \end{itemize} Overall, all these results suggest that class imbalance and concept drift need to be studied simultaneously when we design an algorithm to deal with imbalanced data with concept drift. Their mutual effect must be taken into consideration. Hence, we propose the following key issues to be considered for an effective algorithm: \begin{itemize} \item Is the class imbalance technique effective in predicting minority-class examples?
\item Is the class imbalance technique adaptive to class imbalance changes? \item Is the concept drift technique effective in detecting different types of concept drift, in terms of detection rate, false alarms and detection promptness? Which type of concept drift is it designed for? On which type of concept drift does it perform best? \item Is the detection performance of the concept drift technique affected by the class imbalance technique? And how? \item How can we have the class imbalance technique and concept drift technique work together, to achieve a higher detection rate, fewer false alarms, a shorter detection delay or better online prediction? \end{itemize} \section{Conclusion} \label{sec:con} This paper gives the first systematic study of handling concept drift in class-imbalanced data streams. In the context of online learning, we provide a thorough review and an experimental insight into this problem. First, a comprehensive review is given, including the problem description and definitions, the individual learning issues and solutions in class imbalance and concept drift respectively, the combined challenges and existing solutions in online class imbalance learning with concept drift, and example applications. The review reveals research gaps in the field of online class imbalance learning with concept drift. Specifically, little work has looked into the concept drift issue in imbalanced data streams systematically, although a few methods have been proposed for this purpose; the $P\left( y \right)$ type of concept drift is closely related to the class imbalance issue, but it has not been investigated properly so far; most existing concept drift detection methods are only designed for or tested on balanced data streams. Second, to fill in these research gaps, we carry out a thorough empirical study by looking into the following research questions: 1) What are the challenges in detecting each type of concept drift when the data stream is imbalanced (i.e.
changes in $P\left( y \right)$, $p\left( \mathbf{x} \mid y \right)$, and $P\left( y \mid \mathbf{x} \right)$)? 2) Among the proposed methods designed for online class imbalance learning with concept drift, i.e. DDM-OCI~\cite{Wang2013bp}, LFR~\cite{Wang2015zo}, PAUC-PH~\cite{Brzezinski2015ib}, OOB~\cite{Wang2014vb}, RLSACP~\cite{Ghazikhani2013wm} and ESOS-ELM~\cite{Mirza2015nt}, which one performs better for which type of concept drift? 3) Would applying class imbalance techniques (e.g. resampling methods) facilitate the concept drift detection and online prediction? By generating artificial data streams with different types of class imbalance and concept drift and experimenting on real-world data, we draw the following conclusions. For the first research question, a $P\left( y \right)$ change can be easily tackled by an adaptive class imbalance technique (e.g. OOB used in this work). The traditional concept drift detectors, such as LFR, DDM-OCI and PAUC-PH, do not perform well in detecting a $p\left( \mathbf{x} \mid y \right)$ change. The prediction performance on an imbalanced data stream with $p\left( \mathbf{x} \mid y \right)$ changes can be effectively improved by solely using an adaptive class imbalance technique. A $P\left( y \mid \mathbf{x} \right)$ change is the most challenging case for learning, where the traditional active and passive concept drift detection methods do not bring much performance improvement. Class imbalance is shown to be a more crucial issue in terms of final prediction performance. For the second research question, the two passive methods, RLSACP and ESOS-ELM, do not perform well in general. DDM-OCI and LFR are sensitive to different types of concept drift, with a high detection rate but also high false alarms. PAUC-PH is more conservative in terms of drift detection. Based on the observed minority-class recall and G-mean, the combination of PAUC-PH and OOB was shown to be the best approach among all.
For the third research question, it is necessary to apply adaptive class imbalance techniques when learning from imbalanced data streams with concept drift -- they bring the most prediction performance improvement. In our experiments, the class imbalance technique OOB facilitates the concept drift detection of DDM-OCI and LFR. This paper also provides guidelines for future algorithm design. Several important issues are pointed out for consideration. There are still many challenges and learning issues in this field that are worthy of ongoing research, such as more effective concept drift detection methods for imbalanced data streams, studying the mutual effect of class imbalance and concept drift, and more real-world applications with different types of class imbalance and concept drift. \end{document}
\begin{document} \input{amssym} \begin{frontmatter} \title{Symmetry Analysis of Telegraph Equation} \author[]{Mehdi Nadjafikhah}\ead{m\[email protected]}, \author[MN]{Seyed Reza Hejazi}\ead{reza\[email protected]}, \address[MN]{School of Mathematics, Iran University of Science and Technology, Narmak-16, Tehran, I.R.Iran} \begin{keyword} Lie group analysis, Symmetry group, Optimal system, Invariant solution. \end{keyword} \begin{abstract} The Lie symmetry group method is applied to study the Telegraph equation. The symmetry group and its optimal system are given, and group invariant solutions associated with the symmetries are obtained. Finally, the structure of the Lie algebra of symmetries is determined. \end{abstract} \end{frontmatter} \section{Introduction} The telegrapher's equations (or just telegraph equations) are a pair of linear differential equations which describe the voltage and current on an electrical transmission line as functions of distance and time. The equations are due to \textit{Oliver Heaviside}, who developed the transmission line model. Oliver Heaviside (May 18, 1850 – February 3, 1925) was a self-taught English electrical engineer, mathematician, and physicist who adapted complex numbers to the study of electrical circuits, invented mathematical techniques for the solution of differential equations (later found to be equivalent to Laplace transforms), reformulated Maxwell's field equations in terms of electric and magnetic forces and energy flux, and independently co-formulated vector analysis. Although at odds with the scientific establishment for most of his life, Heaviside changed the face of mathematics and science for years to come. The transmission line theory applies to high-frequency transmission lines (such as telegraph wires and radio frequency conductors) but is also important for designing high-voltage energy transmission lines. The model demonstrates that electromagnetic waves can be reflected on the wire, and that wave patterns can appear along the line.
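For reference (this system is not derived in the present paper), the transmission-line model is commonly written as the coupled first-order system, where $R$, $L$, $G$ and $C$ denote the resistance, inductance, conductance and capacitance per unit length of the line:

```latex
\begin{eqnarray*}
\frac{\partial V}{\partial x}=-L\frac{\partial I}{\partial t}-RI,\qquad
\frac{\partial I}{\partial x}=-C\frac{\partial V}{\partial t}-GV.
\end{eqnarray*}
```

Eliminating $I$ gives a single damped wave equation, $V_{xx}=LC\,V_{tt}+(RC+GL)V_t+RG\,V$, of the same second-order type as the equation studied below.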
The telegrapher's equations can be understood as a simplified case of Maxwell's equations. In a more practical approach, one assumes that the conductors are composed of an infinite series of two-port elementary components, each representing an infinitesimally short segment of the transmission line. \section{Lie Symmetries of the Equation} A PDE with $p$ independent and $q$ dependent variables admits the Lie point transformations \begin{eqnarray*} \widetilde{x}_i=x_i+\varepsilon\xi_i(x,u)+{\mathcal O}(\varepsilon^2),\qquad \widetilde{u}_{\alpha}=u_\alpha+\varepsilon\varphi_\alpha(x,u)+{\mathcal O}(\varepsilon^2) \end{eqnarray*} where $\displaystyle{\xi_i=\frac{\partial\widetilde{x}_i}{\partial\varepsilon}\Big|_{\varepsilon=0}}$ for $i=1,...,p$ and $\displaystyle{\varphi_\alpha=\frac{\partial\widetilde{u}_\alpha}{\partial\varepsilon}\Big|_{\varepsilon=0}}$ for $\alpha=1,...,q$. The action of the Lie group can be described by its associated infinitesimal generator \begin{eqnarray}\label{eq:18} \textbf{v}=\sum_{i=1}^p\xi_i(x,u)\frac{\partial}{\partial{x_i}}+\sum_{\alpha=1}^q\varphi_\alpha(x,u)\frac{\partial}{\partial{u_\alpha}} \end{eqnarray} on the total space of the PDE (the space containing the independent and dependent variables). Furthermore, the characteristic of the vector field (\ref{eq:18}) is given by \begin{eqnarray*} Q^\alpha(x,u^{(1)})=\varphi_\alpha(x,u)-\sum_{i=1}^p\xi_i(x,u)\frac{\partial u^\alpha}{\partial x_i}, \end{eqnarray*} and its $n$-th prolongation is determined by \begin{eqnarray*} \textbf{v}^{(n)}=\sum_{i=1}^p\xi_i(x,u)\frac{\partial}{\partial x_i}+\sum_{\alpha=1}^q\sum_{\sharp J=j=0}^n\varphi^J_\alpha(x,u^{(j)})\frac{\partial}{\partial u^\alpha_J}, \end{eqnarray*} where $\varphi^J_\alpha=D_JQ^\alpha+\sum_{i=1}^p\xi_iu^\alpha_{J,i}$. ($D_J$ is the total derivative operator described in (\ref{eq:19}).)
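As a concrete check of these formulas in the simplest setting $p=q=1$ (one independent variable $x$, one dependent variable $u$), the first prolongation coefficient $\varphi^x=D_xQ+\xi u_{xx}$ can be computed symbolically. The Python sketch below (using SymPy; the scaling generator $\textbf{v}=x\partial_x+u\partial_u$ is an illustrative choice of ours, not taken from the text) compares it with the classical expansion $\varphi^x=\varphi_x+(\varphi_u-\xi_x)u_x-\xi_u u_x^2$:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)        # dependent variable u(x)

# infinitesimals of the (illustrative) scaling generator v = x d/dx + u d/du
xi, phi = x, u

# characteristic Q = phi - xi * u_x
Q = phi - xi * u.diff(x)

# prolongation coefficient: phi^x = D_x Q + xi * u_xx
phi_x = sp.simplify(Q.diff(x) + xi * u.diff(x, 2))

# classical closed form phi_x + (phi_u - xi_x) u_x - xi_u u_x^2,
# with u and u_x treated as independent symbols U and ux
U, ux = sp.symbols('U u_x')
xi_s, phi_s = x, U
closed = (sp.diff(phi_s, x) + (sp.diff(phi_s, U) - sp.diff(xi_s, x)) * ux
          - sp.diff(xi_s, U) * ux**2)
```

Both expressions vanish for this generator, reflecting that the scaling $(x,u)\mapsto(\lambda x,\lambda u)$ leaves $u_x$ unchanged.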
The aim is to analyse the point symmetry structure of the Telegraph equation, which is \begin{equation}\label{eq:1} u_{tt}+ku_t=a^2\Big[\frac{1}{r}\Big(r\frac{\partial u}{\partial r}\Big)_r+\frac{1}{r^2}u_{xx}+u_{yy}\Big], \end{equation} where $u$ is a smooth function of $\displaystyle{(r,x,y,t)}$. Let us consider a one-parameter Lie group of infinitesimal transformations acting on $(r,x,y,t,u)$, given by \begin{eqnarray*}\begin{array}{lllll} \widetilde{r}=r+\varepsilon\xi_1(r,x,y,t,u)+{\mathcal O}(\varepsilon^2),&& \widetilde{x}=x+\varepsilon\xi_2(r,x,y,t,u)+{\mathcal O}(\varepsilon^2),&& \widetilde{y}=y+\varepsilon\xi_3(r,x,y,t,u)+{\mathcal O}(\varepsilon^2),\\ \widetilde{t}=t+\varepsilon\xi_4(r,x,y,t,u)+{\mathcal O}(\varepsilon^2),&& \widetilde{u}=u+\varepsilon\eta(r,x,y,t,u)+{\mathcal O}(\varepsilon^2),\end{array} \end{eqnarray*} where $\varepsilon$ is the group parameter. One then requires that these transformations leave the set of solutions of Eq. (\ref{eq:1}) invariant. This yields a linear system of equations for the infinitesimals $\xi_1(r,x,y,t,u)$, $\xi_2(r,x,y,t,u)$, $\xi_3(r,x,y,t,u)$, $\xi_4(r,x,y,t,u)$ and $\eta(r,x,y,t,u)$. The Lie algebra of infinitesimal symmetries is the set of vector fields of the form $\textbf{v}=\xi_1\partial_r+\xi_2\partial_x+\xi_3\partial_y+\xi_4\partial_t+\eta\partial_u$.
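Brackets of such vector fields, which are needed for the commutator table later on, can be computed componentwise as $[\textbf{v},\textbf{w}]^i=\textbf{v}(w^i)-\textbf{w}(v^i)$. A short SymPy sketch of ours (it uses the generators $\partial_x$ and $\sin x\,\partial_r+r^{-1}\cos x\,\partial_x$, which appear below as $\textbf{v}_1$ and $\textbf{v}_6$, restricted to their $(r,x,y,t)$ components since their $u$-components vanish):

```python
import sympy as sp

r, x, y, t = sp.symbols('r x y t')
coords = (r, x, y, t)

def bracket(v, w):
    """Lie bracket of vector fields given as coefficient tuples:
    [v, w]^i = sum_j (v^j * d(w^i)/dx_j - w^j * d(v^i)/dx_j)."""
    return tuple(
        sp.simplify(sum(v[j] * sp.diff(w[i], coords[j])
                        - w[j] * sp.diff(v[i], coords[j])
                        for j in range(len(coords))))
        for i in range(len(coords)))

# two of the symmetry generators (u-components are zero and omitted)
v1 = (0, 1, 0, 0)                       # d/dx
v6 = (sp.sin(x), sp.cos(x) / r, 0, 0)   # sin x d/dr + (cos x / r) d/dx
v7 = (-sp.cos(x), sp.sin(x) / r, 0, 0)  # -cos x d/dr + (sin x / r) d/dx
```

For these generators one finds $[\textbf{v}_1,\textbf{v}_6]=-\textbf{v}_7$, which can be matched against the first row of the commutator table below.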
This vector field has the second prolongation \begin{eqnarray*} \textbf{v}^{(2)}=\textbf{v}+\varphi^r\partial_{u_r}+\varphi^x\partial_{u_x}+\varphi^y\partial_{u_y}+\varphi^t\partial_{u_t}+\varphi^{rr}\partial_{u_{rr}} +\varphi^{rx}\partial_{u_{rx}}+\cdots+\varphi^{yy}\partial_{u_{yy}}+\varphi^{yt}\partial_{u_{yt}}+\varphi^{tt}\partial_{u_{tt}} \end{eqnarray*} with the coefficients \begin{eqnarray*}\begin{array}{lll} \varphi^r =D_rQ+\xi_1u_{rr}+\xi_2u_{rx}+\xi_3u_{ry}+\xi_4u_{rt},&& \varphi^x =D_xQ+\xi_1u_{rx}+\xi_2u_{xx}+\xi_3u_{xy}+\xi_4u_{xt},\\ \varphi^y =D_yQ+\xi_1u_{ry}+\xi_2u_{xy}+\xi_3u_{yy}+\xi_4u_{yt},&& \varphi^t =D_tQ+\xi_1u_{rt}+\xi_2u_{xt}+\xi_3u_{yt}+\xi_4u_{tt},\\ \varphi^{rr}=D^2_rQ+\xi_1u_{rrr}+\xi_2u_{rrx}+\xi_3u_{rry}+\xi_4u_{rrt},&& \varphi^{rx}=D_rD_xQ+\xi_1u_{rxr}+\xi_2u_{rxx}+\xi_3u_{rxy}+\xi_4u_{rxt},\\ \varphi^{ry}=D_rD_yQ+\xi_1u_{ryr}+\xi_2u_{rxy}+\xi_3u_{ryy}+\xi_4u_{ryt},&& \varphi^{rt}=D_rD_tQ+\xi_1u_{rtr}+\xi_2u_{rxt}+\xi_3u_{ryt}+\xi_4u_{rtt},\\ \varphi^{xx}=D^2_xQ+\xi_1u_{xxr}+\xi_2u_{xxx}+\xi_3u_{xxy}+\xi_4u_{xxt},&& \varphi^{xy}=D_xD_yQ+\xi_1u_{xyr}+\xi_2u_{xxy}+\xi_3u_{xyy}+\xi_4u_{xyt},\\ \varphi^{xt}=D_xD_tQ+\xi_1u_{xrt}+\xi_2u_{xxt}+\xi_3u_{xyt}+\xi_4u_{xtt},&& \varphi^{yy}=D^2_yQ+\xi_1u_{ryy}+\xi_2u_{xyy}+\xi_3u_{yyy}+\xi_4u_{yyt},\\ \varphi^{yt}=D_yD_tQ+\xi_1u_{ryt}+\xi_2u_{xyt}+\xi_3u_{yyt}+\xi_4u_{ytt},&& \varphi^{tt}=D^2_tQ+\xi_1u_{rtt}+\xi_2u_{xtt}+\xi_3u_{ytt}+\xi_4u_{ttt}, \end{array} \end{eqnarray*} where the operators $D_r,D_x,D_y$ and $D_t$ denote the total derivatives with respect to $r,x,y$ and $t$: \begin{eqnarray}\label{eq:19}\begin{array}{lll} D_r=\partial_r+u_r\partial_u+u_{rr}\partial_{u_r}+u_{rx}\partial_{u_x}+\cdots,&& D_x=\partial_x+u_x\partial_u+u_{xx}\partial_{u_x}+u_{rx}\partial_{u_r}+\cdots,\\ D_y=\partial_y+u_y\partial_u+u_{yy}\partial_{u_y}+u_{ry}\partial_{u_r}+\cdots,&& D_t=\partial_t+u_t\partial_u+u_{tt}\partial_{u_t}+u_{rt}\partial_{u_r}+\cdots.\end{array} \end{eqnarray} Using the invariance condition, i.e.,
applying the second prolongation $\textbf{v}^{(2)}$ to Eq. (\ref{eq:1}), we obtain the following system of 27 determining equations: \begin{eqnarray*} \begin{array}{lclclclc} {\xi_2}_u=0,&&{\xi_2}_{yy}=0,&&{\xi_3}_y=0,&&\hspace{-5cm}{\xi_3}_u=0,\\ {\xi_4}_t=0,&&{\xi_4}_u=0,&&{\xi_4}_{rr}=0,&&\hspace{-5cm}{\xi_4}_{xy}=0,\\ {\xi_4}_{yy}=0,&&{\xi_4}_{ry}=0,&&\eta_{tu}=0,&&\hspace{-5cm}\eta_{uu}=0,\\ k{\xi_4}_y+2\eta_{ru}=0,&&\xi_1+r{\xi_2}_x=0,&&{\xi_2}_x+r{\xi_2}_{rx}=0,&&\hspace{-5cm}{\xi_2}_y+r{\xi_2}_{ry}=0,\\ {\xi_2}_{xx}-r{\xi_2}_r=0,&&k{\xi_4}_y+2\eta_{yu}=0,&&2{\xi_2}_r+r{\xi_2}_{rr}=0,&&\hspace{-5cm}{\xi_3}_r-r{\xi_2}_{xy}=0,\\ {\xi_3}_x+r^2{\xi_2}_y=0,&&{\xi_3}_t-a^2{\xi_4}_y=0,&&{\xi_4}_x-r{\xi_4}_{rx}=0,&&\hspace{-5cm}{\xi_4}_{xx}+r{\xi_4}_r=0,\\ r^2{\xi_2}_t-a^2{\xi_4}_x=0,&&k{\xi_4}_x+2\eta_{ux}=0,&&a^2r^2\eta_{rr}-kr^2\eta_t+a^2r\eta_r+a^2r^2\eta_{yy}-r^2\eta_{tt}+a^2\eta_{xx}=0. \end{array} \end{eqnarray*} The solution of the above system gives the following coefficients of the vector field $\textbf{v}$: \begin{eqnarray*} \xi_1&=&c_6\sin x-c_7\cos x-c_8y\cos x-c_9y\sin x+2c_{10}a^2t\sin x-2c_{11}a^2t\cos x,\\ \xi_2&=&c_1+c_6r^{-1}\cos x+c_7r^{-1}\sin x+c_8yr^{-1}\sin x-c_9yr^{-1}\cos x+2c_{10}a^2tr^{-1}\cos x+2c_{11}a^2tr^{-1}\sin x,\\ \xi_3&=&c_2+2c_5a^2t+c_8r\cos x+c_9r\sin x,\\ \xi_4&=&c_3+2c_5y+2c_{10}r\sin x-2c_{11}r\cos x,\qquad \eta=c_4u-c_5kyu-c_{10}kru\sin x+c_{11}kru\cos x, \end{eqnarray*} where $c_1,...,c_{11}$ are arbitrary constants. Thus the Lie algebra ${\goth g}$ of the telegraph equation is spanned by the eleven vector fields \begin{eqnarray*} \begin{array}{lclc} \textbf{v}_1=\partial_x,\qquad\textbf{v}_2=\partial_y,&&\hspace{-3cm} \textbf{v}_3=\partial_t,\qquad\textbf{v}_4=u\partial_u,\\ \textbf{v}_5=2a^2t\partial_y+2y\partial_t-kyu\partial_u,&&\hspace{-3cm}\textbf{v}_6=\sin x\partial_r+r^{-1}\cos x\partial_x,\\ \textbf{v}_7=-\cos x\partial_r+r^{-1}\sin x\partial_x,&&\hspace{-3cm} \textbf{v}_8=-y\cos x\partial_r+r^{-1}y\sin
x\partial_x+r\cos x\partial_y,\\\textbf{v}_9=-y\sin x\partial_r-r^{-1}y\cos x\partial_x+r\sin x\partial_y,&&\hspace{-3cm}\textbf{v}_{10}=2a^2t\sin x\partial_r+2a^2tr^{-1}\cos x\partial_x+2r\sin x\partial_t-kru\sin x\partial_u,\\\textbf{v}_{11}=-2a^2t\cos x\partial_r+2a^2tr^{-1}\sin x\partial_x-2r\cos x\partial_t+kru\cos x\partial_u, \end{array} \end{eqnarray*} Here $\textbf{v}_1$, $\textbf{v}_2$ and $\textbf{v}_3$ are translations in $x$, $y$ and $t$, and $\textbf{v}_4$ is a scaling in $u$. The commutation relations between these vector fields are given in Table~\ref{table:1}, where the entry in row $i$ and column $j$ represents $[\textbf{v}_i,\textbf{v}_j]$. \begin{table} \caption{Commutation relations of $\goth g$ }\label{table:1} \begin{eqnarray*}\hspace{-0.75cm}\begin{array}{cccccccccccc} \hline [\,,\,] &\hspace{1cm}\textbf{v}_1 &\hspace{0.5cm}\textbf{v}_2 &\hspace{0.5cm}\textbf{v}_3 &\hspace{0.5cm}\textbf{v}_4 &\hspace{0.5cm}\textbf{v}_5 &\hspace{0.5cm}\textbf{v}_6 &\hspace{0.5cm}\textbf{v}_7 &\hspace{0.5cm}\textbf{v}_8 &\hspace{0.5cm}\textbf{v}_9 &\hspace{0.5cm}\textbf{v}_{10} &\hspace{0.5cm}\textbf{v}_{11} \\ \hline \textbf{v}_1 &\hspace{1cm} 0 &\hspace{0.5cm} 0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.3cm}-\textbf{v}_7 &\hspace{0.5cm}\textbf{v}_6 &\hspace{0.3cm}-\textbf{v}_9 &\hspace{0.5cm}\textbf{v}_8 &\hspace{0.3cm}-\textbf{v}_{11} &\hspace{0.5cm}\textbf{v}_{10} \\ \textbf{v}_2 &\hspace{1cm} 0 &\hspace{0.5cm} 0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}2a^2\textbf{v}_3-k\textbf{v}_4 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}\textbf{v}_7 &\hspace{0.3cm}-\textbf{v}_6 &\hspace{0.5cm}0 &\hspace{0.5cm}0\\ \textbf{v}_3 &\hspace{1cm} 0 &\hspace{0.5cm} 0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}\textbf{v}_2 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}\textbf{v}_6 &\hspace{0.5cm}\textbf{v}_7\\ \textbf{v}_4 &\hspace{1cm} 0 &\hspace{0.5cm} 0
&\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0\\ \textbf{v}_5 &\hspace{1cm} 0 &\hspace{-0.1cm} -2a^2\textbf{v}_3+k\textbf{v}_4&\hspace{0.3cm}-\textbf{v}_2 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}\textbf{v}_{11} &\hspace{0.3cm}-\textbf{v}_{10} &\hspace{0.3cm}-\textbf{v}_9 &\hspace{0.5cm}\textbf{v}_8\\ \textbf{v}_6 &\hspace{1cm} \textbf{v}_7 &\hspace{0.5cm} 0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}\textbf{v}_7 &\hspace{0.3cm}-\textbf{v}_6 &\hspace{0.5cm}0 &\hspace{0.5cm}0\\ \textbf{v}_7 &\hspace{0.9cm} -\textbf{v}_6 &\hspace{0.5cm} 0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.3cm}-\textbf{v}_2 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{-0.2cm}2a^2\textbf{v}_3-k\textbf{v}_4\\ \textbf{v}_8 &\hspace{1cm} \textbf{v}_9 &\hspace{0.3cm} -\textbf{v}_7 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.3cm}-\textbf{v}_{11} &\hspace{0.5cm}0 &\hspace{0.3cm}-\textbf{v}_2 &\hspace{0.5cm}0 &\hspace{0.3cm}-\textbf{v}_1 &\hspace{0.5cm}0 &\hspace{0.5cm}\textbf{v}_5\\ \textbf{v}_9 &\hspace{0.9cm} -\textbf{v}_8 &\hspace{0.5cm} \textbf{v}_6 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}\textbf{v}_{10} &\hspace{0.3cm}-\textbf{v}_2 &\hspace{0.5cm}0 &\hspace{0.5cm}\textbf{v}_1 &\hspace{0.5cm}0 &\hspace{0.3cm}-\textbf{v}_5 &\hspace{0.5cm}0\\ \textbf{v}_{10}&\hspace{1cm} \textbf{v}_{11} &\hspace{0.5cm} 0 &\hspace{0.3cm}-\textbf{v}_6 &\hspace{0.5cm}0 &\hspace{0.3cm}-\textbf{v}_9 &\hspace{-0.4cm}-2a^2\textbf{v}_3+k\textbf{v}_4 &\hspace{0.5cm}0 &\hspace{0.5cm}0 &\hspace{0.5cm}\textbf{v}_5 &\hspace{0.5cm}0 &\hspace{0.5cm}a^2\textbf{v}_1\\ \textbf{v}_{11}&\hspace{0.9cm} -\textbf{v}_{10}&\hspace{0.5cm} 0 &\hspace{0.3cm}-\textbf{v}_7 &\hspace{0.5cm}0 &\hspace{0.3cm}-\textbf{v}_8 &\hspace{0.5cm}0
&\hspace{-0.3cm}-2a^2\textbf{v}_3+k\textbf{v}_4&\hspace{0.3cm}-\textbf{v}_5 &\hspace{0.5cm}0 &\hspace{0.3cm}-a^2\textbf{v}_1 &\hspace{0.5cm}0\\ \hline\end{array}\end{eqnarray*}\end{table} The one-parameter groups $G_i$ generated by the basis of $\goth g$ are the following: \begin{eqnarray*} g_1 &:&(r,x,y,t,u)\longmapsto(r,x+s,y,t,u),\qquad\qquad g_2 :(r,x,y,t,u)\longmapsto(r,x,y+s,t,u),\\ g_3 &:&(r,x,y,t,u)\longmapsto(r,x,y,t+s,u),\qquad\qquad g_4 :(r,x,y,t,u)\longmapsto(r,x,y,t,ue^s),\\ g_5 &:&(r,x,y,t,u)\longmapsto\Big(r,x,y+st,\frac{1}{a^2}sy+t,-\frac{1}{2a^2}skyu+u\Big),\\ g_6 &:&(r,x,y,t,u)\longmapsto\Big(s\sin x+r,\frac{s}{r}\cos x+x,y,t,u\Big),\\ g_7 &:&(r,x,y,t,u)\longmapsto\Big(-s\cos x+r,\frac{s}{r}\sin x+x,y,t,u\Big),\\ g_8 &:&(r,x,y,t,u)\longmapsto\Big(-sy\cos x+r,\frac{s}{r}y\sin x+x,sr\cos x+y,t,u\Big),\\ g_9 &:&(r,x,y,t,u)\longmapsto\Big(-sy\sin x+r,-\frac{s}{r}y\cos x+x,sr\sin x+y,t,u\Big),\\ g_{10}&:&(r,x,y,t,u)\longmapsto\Big(st\sin x+r,\frac{s}{r}t\cos x+x,y,\frac{s}{a^2}r\sin x+t,-\frac{s}{2a^2}kru\sin x+u\Big),\\ g_{11}&:&(r,x,y,t,u)\longmapsto\Big(-st\cos x+r,\frac{s}{r}t\sin x+x,y,-\frac{s}{a^2}r\cos x+t,\frac{s}{2a^2}kru\cos x+u\Big).
\end{eqnarray*} Since each group $G_i$ is a symmetry group, if $u=U(r,x,y,t)$ is a solution of the telegraph equation, then so are the functions \begin{eqnarray*} u^1&=&U(r,x+\varepsilon,y,t),\qquad u^2=U(r,x,y+\varepsilon,t),\qquad u^3=U(r,x,y,t+\varepsilon),\qquad u^4=e^{-\varepsilon}U(r,x,y,t),\\ u^5 &=&(2a^2+\varepsilon ky)U\Big(r,x,y+\varepsilon t,\frac{1}{a^2}\varepsilon y+t\Big),\qquad u^6 =U\Big(\varepsilon\sin x+r,\frac{\varepsilon}{r}\cos x+x,y,t\Big),\\ u^7 &=&U\Big(-\varepsilon\cos x+r,\frac{\varepsilon}{r}\sin x+x,y,t\Big),\qquad\quad\; u^8=U\Big(-\varepsilon y\cos x+r,\frac{\varepsilon}{r}y\sin x+x,\varepsilon r\cos x+y,t\Big),\\ u^9 &=&U\Big(-\varepsilon y\sin x+r,-\frac{\varepsilon}{r}y\cos x+x,\varepsilon r\sin x+y,t\Big),\\ u^{10}&=&(2a^2+\varepsilon kr\sin x)U\Big(\varepsilon t\sin x+r,\frac{\varepsilon}{r}t\cos x+x,y,\frac{\varepsilon}{a^2}r\sin x+t\Big),\\ u^{11}&=&(2a^2-\varepsilon kr\cos x)U\Big(-\varepsilon t\cos x+r,\frac{\varepsilon}{r}t\sin x+x,y,-\frac{\varepsilon}{a^2}r\cos x+t\Big), \end{eqnarray*} where $\varepsilon$ is a real number. Here we can find the most general symmetry group by considering a general linear combination $c_1\textbf{v}_1+\cdots+c_{11}\textbf{v}_{11}$ of the given vector fields. In particular, if $g$ is the action of the symmetry group near the identity, it can be represented in the form $g=\exp(\varepsilon_{11}\textbf{v}_{11})\cdots\exp(\varepsilon_1\textbf{v}_1)$. \section{Optimal system of the telegraph equation} $~~~~~$As is well known, the theoretical Lie group method plays an important role in finding exact solutions and performing symmetry reductions of differential equations. Since any linear combination of infinitesimal generators is also an infinitesimal generator, there are always infinitely many different symmetry subgroups for the differential equation.
So, a means of determining which subgroups give essentially different types of solutions is necessary and significant for a complete understanding of the invariant solutions. As any transformation in the full symmetry group maps a solution to another solution, it is sufficient to find invariant solutions which are not related by transformations in the full symmetry group; this has led to the concept of an optimal system \cite{[6]}. The problem of finding an optimal system of subgroups is equivalent to that of finding an optimal system of subalgebras. For one-dimensional subalgebras, this classification problem is essentially the same as the problem of classifying the orbits of the adjoint representation. This problem is attacked by the naive approach of taking a general element in the Lie algebra and subjecting it to various adjoint transformations so as to simplify it as much as possible. The idea of using the adjoint representation for classifying group-invariant solutions is due to \cite{[1-1],[2-1],[5],[6]}. The adjoint action is given by the Lie series \begin{eqnarray}\label{eq:9} \mbox{Ad}(\exp(\varepsilon\textbf{v}_i))\textbf{v}_j=\textbf{v}_j-\varepsilon[\textbf{v}_i,\textbf{v}_j]+\frac{\varepsilon^2}{2}[\textbf{v}_i,[\textbf{v}_i,\textbf{v}_j]]-\cdots, \end{eqnarray} where $[\textbf{v}_i,\textbf{v}_j]$ is the commutator of the Lie algebra, $\varepsilon$ is a parameter, and $i,j=1,\cdots,11$. For $i=1,\cdots,11$, let $F^{\varepsilon}_i:{\goth g}\rightarrow{\goth g}$ be the linear map defined by $\textbf{v}\mapsto\mbox{Ad}(\exp(\varepsilon\textbf{v}_i))\textbf{v}$.
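The truncated Lie series (\ref{eq:9}) is easy to evaluate mechanically once a commutator table is known. The sketch below (Python; the algebra used is the three-dimensional Heisenberg algebra with $[e_1,e_2]=e_3$, a hypothetical stand-in rather than the eleven-dimensional $\goth g$ of the paper) implements the series directly from structure constants; supplying the constants of Table \ref{table:1} would reproduce the adjoint action used here.

```python
from itertools import product

# Structure constants for the Heisenberg algebra: [e1, e2] = e3.
# bracket_table[(i, j)] is the coordinate vector of [e_i, e_j] in (e1, e2, e3).
# (Hypothetical minimal example -- not the 11-dimensional algebra of the paper.)
DIM = 3
ZERO = [0.0, 0.0, 0.0]
bracket_table = {(0, 1): [0.0, 0.0, 1.0], (1, 0): [0.0, 0.0, -1.0]}

def bracket(v, w):
    """Bilinear extension of the commutator table to arbitrary vectors."""
    out = [0.0] * DIM
    for i, j in product(range(DIM), repeat=2):
        c = bracket_table.get((i, j), ZERO)
        for k in range(DIM):
            out[k] += v[i] * w[j] * c[k]
    return out

def ad_exp(eps, v, w, terms=10):
    """Ad(exp(eps*v))w = w - eps[v,w] + eps^2/2 [v,[v,w]] - ...  (Eq. (9))."""
    result, cur, sign, fact = list(w), list(w), 1.0, 1.0
    for m in range(1, terms):
        cur = bracket(v, cur)          # iterated commutator [v,[v,...,[v,w]]]
        sign, fact = -sign, fact * m
        for k in range(DIM):
            result[k] += sign * eps**m * cur[k] / fact
    return result

e1, e2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
print(ad_exp(0.5, e1, e2))   # series terminates: e2 - 0.5*e3
```

For a nilpotent bracket such as this one the series terminates after finitely many terms, which is exactly why the adjoint matrices $M^\varepsilon_i$ below have polynomial or trigonometric entries.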
The matrices $M^\varepsilon_i$ of $F^\varepsilon_i$, $i=1,\cdots,11$, with respect to the basis $\{\textbf{v}_1,\cdots,\textbf{v}_{11}\}$ are \begin{eqnarray*} &\displaystyle M^\varepsilon_1=\tiny\left(\begin{array}{ccccccccccc} 1&0&0&0&0&0&0&0&0&0&0\\0&1&0&0&0&0&0&0&0&0&0\\0&0&1&0&0&0&0&0&0&0&0\\0 &0&0&1&0&0&0&0&0&0&0\\0&0&0&0&1&0&0&0&0&0&0\\0&0&0&0&0&\cos s&\sin s&0&0&0&0\\0&0&0&0&0&-\sin s&\cos s&0&0&0&0\\0&0&0&0&0&0&0&\cos s&\sin s&0&0\\0&0&0&0&0&0&0&-\sin s&\cos s&0&0\\0&0&0&0&0&0&0&0&0&\cos s&-\sin s\\0&0&0&0&0&0&0&0&0&-\sin s&\cos s\end{array}\right), M^\varepsilon_2=\tiny\left(\begin{array}{ccccccccccc} 1&0&0&0&0&0&0&0&0&0&0\\0&1&0&0&0&0&0&0&0&0&0\\0&0&1&0&0&0&0&0&0&0&0\\0 &0&0&1&0&0&0&0&0&0&0\\0&0&-2a^2s&ks&1&0&0&0&0&0&0\\0&0&0&0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&1&0&0&0&0\\0&0&0&0&0&0&-s&1&0&0&0\\0&0&0&0&0&s&0&0&1&0&0\\0&0&0&0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0&0&0&1\end{array}\right), \qquad \cdots\\ &\displaystyle \mbox{8 matrices} \quad\cdots\quad M^\varepsilon_{11}=\tiny\left(\begin{array}{ccccccccccc} \cosh as&0&0&0&0&0&0&0&0&\frac{1}{a}\sinh as&0\\0&1&0&0&0&0&0&0&0&0&0\\0&0&\cosh \sqrt{2}as&\frac{k}{\sqrt{2}a}(1-\cosh \sqrt{2}as)&0&0&\frac{1}{\sqrt{2}a}\sinh \sqrt{2}as&0&0&0&0\\0 &0&0&1&0&0&0&0&0&0&0\\0&0&0&0&\cosh as&0&0&\sinh as&0&0&0\\0&0&0&0&0&1&0&0&0&0&0\\ 0&0&\sqrt{2}a\sinh \sqrt{2}as&-\frac{k}{\sqrt{2}a}\sinh \sqrt{2}as&0&0&\cosh \sqrt{2}as&0&0&0&0\\0&0&0&0&\sinh s&0&0&\cosh s&0&0&0\\0&0&0&0&0&0&0&0&1&0&0\\a\sinh as&0&0&0&0&0&0&0&0&\cosh as&0\\ 0&0&0&0&0&0&0&0&0&0&1\end{array}\right).
\end{eqnarray*} By acting with these matrices on a general vector field $\textbf{v}$ alternately, we can show that a one-dimensional optimal system of ${\goth g}$ is given by \begin{eqnarray*}\begin{array}{lll} X_1=a_1\textbf{v}_1+a_2\textbf{v}_3+a_3\textbf{v}_4+a_4\textbf{v}_8,&& \hspace{-4cm}X_2=a_1\textbf{v}_1+a_2\textbf{v}_3+a_3\textbf{v}_4+a_4\textbf{v}_5-a_6\textbf{v}_9,\\ X_3=a_1\textbf{v}_1+a_2\textbf{v}_2+a_3\textbf{v}_3+a_4\textbf{v}_4+a_6\textbf{v}_8,&& \hspace{-4cm}X_4=a_1\textbf{v}_1+a_2\textbf{v}_3+a_3\textbf{v}_4+a_4\textbf{v}_5-a_6\textbf{v}_8,\\ X_5=a_1\textbf{v}_1+a_2\textbf{v}_3+a_3\textbf{v}_5+a_4\textbf{v}_6+a_6\textbf{v}_8,&& \hspace{-4cm}X_6=a_1\textbf{v}_1+\textbf{v}_2+a_2\textbf{v}_3+a_3\textbf{v}_4+a_4(\textbf{v}_6-\textbf{v}_{10}),\\ X_7=a_1\textbf{v}_1+\textbf{v}_2+a_2\textbf{v}_3+a_3\textbf{v}_4+\textbf{v}_7-\textbf{v}_{11},&& \hspace{-4cm}X_8=a_1\textbf{v}_1+a_2\textbf{v}_2+a_3\textbf{v}_3+a_4\textbf{v}_4+a_5\textbf{v}_5+a_6\textbf{v}_6,\\ X_9=a_1\textbf{v}_1+a_2\textbf{v}_2+\textbf{v}_3+\textbf{v}_4-(2a^2-k)\textbf{v}_5+\textbf{v}_7,&& \hspace{-4cm}X_{10}=a_1\textbf{v}_1+a_2\textbf{v}_2+\textbf{v}_3+a_3\textbf{v}_4-(2a^2-k)\textbf{v}_5+\textbf{v}_6,\\ X_{11}=a_1\textbf{v}_1+a_2\textbf{v}_2+a_3\textbf{v}_3+a_4\textbf{v}_4+a_5\textbf{v}_5+a_6\textbf{v}_6+a_7\textbf{v}_{11},\\ X_{12}=a_1\textbf{v}_1+a_2\textbf{v}_2+a_3\textbf{v}_3+a_4\textbf{v}_4+a_5\textbf{v}_5+a_6\textbf{v}_6+a_7\textbf{v}_7+a_8\textbf{v}_{11},&&\\ X_{13}=a_1\textbf{v}_1+a_2\textbf{v}_2+a_3\textbf{v}_3+a_4\textbf{v}_4+a_5\textbf{v}_5+a_6\textbf{v}_6-a_7\textbf{v}_7-a_8\textbf{v}_9,&&\\ X_{14}=a_1\textbf{v}_1+a_2\textbf{v}_2+a_4\textbf{v}_3+a_4\textbf{v}_4+a_5\textbf{v}_5+a_6\textbf{v}_6+\textbf{v}_7-(2a+k)\textbf{v}_{11},&&\\ X_{15}=\frac{1}{2a^2-k}(\textbf{v}_1+\textbf{v}_8)+a_1\textbf{v}_2+a_2\textbf{v}_3+a_3\textbf{v}_4+a_5\textbf{v}_5+\textbf{v}_6+\textbf{v}_7,&&\\
X_{16}=a_1\textbf{v}_1+a_2\textbf{v}_2+\textbf{v}_3+\textbf{v}_4-(2a^2-k-1)\textbf{v}_5+a_3\textbf{v}_6+a_4\textbf{v}_7+a_5\textbf{v}_8+a_6\textbf{v}_9.&& \end{array}\end{eqnarray*} \section{Lie Algebra Structure} $~~~~~$In this part, we determine the structure of the symmetry Lie algebra of the telegraph equation.\\ $\goth g$ has a \textit{Levi decomposition} of the form ${\goth g}={\goth r}\ltimes{\goth h}$, where ${\goth r}=\langle\textbf{v}_2,\textbf{v}_3,\textbf{v}_4,\textbf{v}_5,\textbf{v}_6\rangle$ is the radical (the largest solvable ideal) of $\goth g$, and ${\goth h}=\langle\textbf{v}_1,\textbf{v}_7,\textbf{v}_8,\textbf{v}_9,\textbf{v}_{10},\textbf{v}_{11}\rangle$ is a subalgebra of $\goth g$ whose centralizer $\langle\textbf{v}_4\rangle$ is contained in the minimal ideal $\langle\textbf{v}_1,\textbf{v}_2,2a^2\textbf{v}_3-k\textbf{v}_4,\textbf{v}_5,...,\textbf{v}_{11}\rangle$. Here we can form the quotient algebra \begin{eqnarray}\label{eq:2} {\goth g}_1={\goth g}/{\goth r}, \end{eqnarray} with commutator relations given in Table \ref{table:2}, where the $\textbf{w}_i$ denote the images in ${\goth g}_1$ of the corresponding basis vectors of $\goth g$. The quotient (\ref{eq:2}) helps us to reduce differential equations. If we want to integrate an involutive distribution, the process decomposes into two steps: \begin{itemize} \item integration of the involutive distribution with symmetry Lie algebra ${\goth g}/{\goth r}$, and \item integration on integral manifolds with symmetry algebra $\goth r$. \end{itemize} $~~~~$First, applying this procedure to the radical $\goth r$ we decompose the integration problem into two parts: the integration of the distribution with the algebra ${\goth g}/{\goth r}$, then the integration of the restriction of the distribution to the integral manifold with the solvable symmetry algebra $\goth r$.\\ $~~~~$The last step can be performed by quadratures.
Moreover, every semisimple Lie algebra ${\goth g}/{\goth r}$ is a direct sum of simple ideals of ${\goth g}/{\goth r}$. Thus, the Lie-Bianchi theorem reduces the integration problem to involutive distributions equipped with simple algebras of symmetries. $~~~~$Both $\goth g$ and ${\goth g}_1$ are non-solvable: if ${\goth g}^{(1)}=\langle[\textbf{v}_i,\textbf{v}_j]\rangle=[\goth g, \goth g]$ and ${\goth g}_1^{(1)}=\langle[\textbf{w}_i,\textbf{w}_j]\rangle=[{\goth g}_1,{\goth g}_1]$ denote the derived subalgebras of $\goth g$ and ${\goth g}_1$, then we have \begin{eqnarray*} {\goth g}^{(1)}=[{\goth g},{\goth g}] =\langle \textbf{v}_1,\textbf{v}_2,2a^2\textbf{v}_3-k\textbf{v}_4,\textbf{v}_5,\textbf{v}_6,\textbf{v}_7,\textbf{v}_8,\textbf{v}_9, \textbf{v}_{10},\textbf{v}_{11}\rangle=[{\goth g}^{(1)},{\goth g}^{(1)}] ={\goth g}^{(2)}, \end{eqnarray*} and \begin{eqnarray*} {\goth g}_1^{(1)}=[{\goth g}_1,{\goth g}_1]=\langle \textbf{w}_1,\textbf{w}_2,\textbf{w}_3,\textbf{w}_4,\textbf{w}_5,\textbf{w}_6\rangle ={\goth g}_1. \end{eqnarray*} Thus, we have the chains ${\goth g}\supset{\goth g}^{(1)}={\goth g}^{(2)}\neq 0$ and ${\goth g}_1\supseteq{\goth g}_1^{(1)}={\goth g}_1\neq 0$, which show the non-solvability of $\goth g$ and ${\goth g}_1$.
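The derived-subalgebra computation can be checked mechanically: collect the coordinate vectors of all brackets $[\textbf{w}_i,\textbf{w}_j]$ from the commutator table of ${\goth g}_1$ and compute the dimension of their span. Below is a sketch (Python; we set $a=1$ for concreteness, an assumption that does not affect the rank, and the encoding of the table is our own transcription).

```python
from fractions import Fraction

# Coordinate vector with a single nonzero entry s at position kk.
def e(kk, s=1):
    v = [Fraction(0)] * 6
    v[kk] = Fraction(s)
    return v

# Commutator table of g1 = g/r in the basis (w1,...,w6), transcribed with
# a = 1.  Entry (i, j), for i < j, is the coordinate vector of [w_i, w_j];
# pairs with zero bracket are omitted.
table = {
    (0, 2): e(3, -1), (0, 3): e(2),     (0, 4): e(5, -1), (0, 5): e(4),
    (1, 2): e(5),     (1, 3): e(4, -1), (1, 4): e(3, -1), (1, 5): e(2),
    (2, 3): e(0, -1), (2, 5): e(1),     (3, 4): e(1, -1), (4, 5): e(0),
}

def rank(rows):
    """Rank over the rationals by Gaussian elimination."""
    rows = [list(row) for row in rows]
    r = 0
    for col in range(6):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

# The derived subalgebra [g1, g1] is the span of all the brackets above.
print(rank(table.values()))   # 6, so [g1, g1] = g1 and g1 is not solvable
```

Since the span already has full dimension after one bracket, the derived series is constant, confirming the non-solvability argument above.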
\begin{table} \caption{Commutation relations of ${\goth g}_1$ }\label{table:2} \begin{eqnarray*}\hspace{-0.75cm}\begin{array}{ccccccc} \hline [\,,\,] &\hspace{1cm}\textbf{w}_1 &\hspace{0.5cm}\textbf{w}_2 &\hspace{0.5cm}\textbf{w}_3 &\hspace{0.5cm}\textbf{w}_4 &\hspace{0.5cm}\textbf{w}_5 &\hspace{0.5cm}\textbf{w}_6 \\ \hline \textbf{w}_1 &\hspace{1cm} 0 &\hspace{0.5cm} 0 &\hspace{0.5cm}-\textbf{w}_4 &\hspace{0.6cm}\textbf{w}_3 &\hspace{0.5cm}-\textbf{w}_6 &\hspace{0.5cm}\textbf{w}_5 \\ \textbf{w}_2 &\hspace{1cm} 0 &\hspace{0.5cm} 0 &\hspace{0.7cm}\textbf{w}_6 &\hspace{0.5cm}-\textbf{w}_5 &\hspace{0.5cm}-\textbf{w}_4 &\hspace{0.5cm}\textbf{w}_3 \\ \textbf{w}_3 &\hspace{0.9cm} \textbf{w}_4 &\hspace{0.3cm} -\textbf{w}_6 &\hspace{0.7cm}0 &\hspace{0.5cm}-\textbf{w}_1 &\hspace{0.7cm}0 &\hspace{0.5cm}\textbf{w}_2 \\ \textbf{w}_4 &\hspace{0.8cm} -\textbf{w}_3 &\hspace{0.4cm} \textbf{w}_5 &\hspace{0.7cm}\textbf{w}_1 &\hspace{0.7cm}0 &\hspace{0.5cm}-\textbf{w}_2 &\hspace{0.5cm}0 \\ \textbf{w}_5 &\hspace{0.9cm} \textbf{w}_6 &\hspace{0.4cm}\textbf{w}_4 &\hspace{0.7cm}0 &\hspace{0.7cm}\textbf{w}_2 &\hspace{0.7cm}0 &\hspace{0.4cm}a^2\textbf{w}_1 \\ \textbf{w}_6 &\hspace{0.8cm} -\textbf{w}_5 &\hspace{0.3cm} -\textbf{w}_3 &\hspace{0.5cm}-\textbf{w}_2 &\hspace{0.7cm}0 &\hspace{0.5cm}-a^2\textbf{w}_1 &\hspace{0.5cm}0 \\ \hline\end{array}\end{eqnarray*}\end{table} \section{Conclusion} In this article, the group classification of the telegraph equation and the algebraic structure of its symmetry group have been considered. The classification of one-dimensional subalgebras was determined by constructing a one-dimensional optimal system, and the structure of the Lie algebra of symmetries was analyzed. \end{document}
\begin{document} \title{On non-empty cross-intersecting families} \author{Chao Shi$^1$, Peter Frankl$^{2}$\footnote{E-mail: [email protected] (P. Frankl)}, Jianguo Qian$^1$\footnote{Corresponding author. E-mail: [email protected] (J.G. Qian)}\\ \small 1. School of Mathematical Sciences, Xiamen University, Xiamen 361005, PR China\\ \small {2. R\'{e}nyi Institute, Budapest, Hungary}} \date{} \maketitle {\small{\bf Abstract.}\quad Let $2^{[n]}$ and $\binom{[n]}{i}$ be the power set and the class of all $i$-subsets of $\{1,2,\cdots,n\}$, respectively. We call two families $\mathscr{A}$ and $\mathscr{B}$ cross-intersecting if $A\cap B\neq \emptyset$ for any $A\in \mathscr{A}$ and $B\in \mathscr{B}$. In this paper we show that, for $n\geq k+l,l\geq r\geq 1,c>0$ and $\mathscr{A}\subseteq \binom{[n]}{k},\mathscr{B}\subseteq \binom{[n]}{l}$, if $\mathscr{A}$ and $\mathscr{B}$ are cross-intersecting and $\binom{n-r}{l-r}\leq|\mathscr{B}|\leq \binom{n-1}{l-1}$, then $$|\mathscr{A}|+c|\mathscr{B}|\leq \max\left\{\binom{n}{k}-\binom{n-r}{k}+c\binom{n-r}{l-r},\ \binom{n-1}{k-1}+c\binom{n-1}{l-1}\right\}$$ and the families $\mathscr{A}$ and $\mathscr{B}$ attaining the upper bound are also characterized. This generalizes the corresponding result of Hilton and Milner for $c=1$ and $r=k=l$, and implies a result of Tokushige and the second author (Theorem \ref{thm 1.3}). \vskip 0.3cm \noindent{\bf Keywords:} finite set; cross-intersecting; non-empty family \section{Introduction} For a natural number $n$, we write $[n]=\{1,2,\cdots,n\}$ and denote by $2^{[n]}$ the power set of $[n]$. In particular, for integer $i>0$ we denote by $\binom{[n]}{i}$ the collection of all $i$-subsets of $[n]$. Every subset of $2^{[n]}$ is called a {\it family}. 
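Bounds of the kind stated in the abstract can be sanity-checked by exhaustive search for very small parameters. The sketch below (Python; the parameter choice $n=4$, $k=l=2$, $r=1$, $c=1$ is a hypothetical tiny instance, and the search is exponential, so it is only feasible at this scale) enumerates every family $\mathscr{B}$ with $|\mathscr{B}|$ in the stated range, pairs it with the largest family cross-intersecting with it, and compares the maximum of $|\mathscr{A}|+c|\mathscr{B}|$ with the bound.

```python
from itertools import combinations
from math import comb

# Hypothetical tiny parameters (our own choice; n >= k+l as the theorem requires).
n, k, l, r, c = 4, 2, 2, 1, 1
K = [frozenset(s) for s in combinations(range(1, n + 1), k)]   # all k-subsets
Lf = [frozenset(s) for s in combinations(range(1, n + 1), l)]  # all l-subsets

def cross_intersecting(A, B):
    """Every member of A meets every member of B."""
    return all(a & b for a in A for b in B)

lo, hi = comb(n - r, l - r), comb(n - 1, l - 1)
bound = max(comb(n, k) - comb(n - r, k) + c * comb(n - r, l - r),
            comb(n - 1, k - 1) + c * comb(n - 1, l - 1))

best = 0
for m in range(lo, hi + 1):
    for B in combinations(Lf, m):            # every B with |B| in the range
        # The largest A that is cross-intersecting with this B.
        A = [a for a in K if cross_intersecting([a], B)]
        best = max(best, len(A) + c * len(B))

print(best, bound)   # 6 6: the exhaustive maximum matches the bound
```

Replacing $\mathscr{A}$ by the largest family compatible with $\mathscr{B}$ loses no generality here, which is the same maximal-pair reduction used in the proof below.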
We call a family $\mathscr{A}$ {\it intersecting} if $A\cap B\neq \emptyset$ for any $A, B\in\mathscr{A}$, and call $t$ ($t\geq2$) families $\mathscr{A}_1,\mathscr{A}_2,\cdots, \mathscr{A}_t$ {\it cross-intersecting} if $A_i\cap A_j\neq \emptyset$ for any $A_i\in \mathscr{A}_i$ and $A_j\in \mathscr{A}_j$ with $i\neq j$. For a set $A\in 2^{[n]}$, we define its {\it complement} as usual by $\overline{A}=[n]\setminus A$ and, for a family $\mathscr{A}\subset 2^{[n]}$, we denote $\overline{\mathscr{A}}=\{\overline{A}:A\in\mathscr{A}\}$. The following theorem, known as the Erd\H{o}s-Ko-Rado theorem, is a fundamental result in extremal set theory. \begin{thm}\label{thm 1.1} (Erd\H{o}s-Ko-Rado, \cite{E}). For two positive integers $n$ and $k$, if $n\geq 2k$ and $\mathscr{A}\subset \binom{[n]}{k}$ is an intersecting family, then \begin{align*} |\mathscr{A}| \leq \binom{n-1}{k-1}. \end{align*} \end{thm} The Erd\H{o}s-Ko-Rado theorem has a large number of variations and generalizations; see \cite{A,F0,Furedi,H1,G,Kwan} for examples. A natural direction is to extend the notion of an intersecting family to a class of cross-intersecting families. Notice that if $\mathscr{A}_1=\mathscr{A}_2=\cdots= \mathscr{A}_t$ ($t\geq 2$), then the families $\mathscr{A}_1,\mathscr{A}_2,\cdots, \mathscr{A}_t$ are cross-intersecting if and only if $\mathscr{A}_1$ is intersecting. In this sense, the notion of cross-intersecting for families is indeed a generalization of that of intersecting for a family. The following result was proved by Hilton; a simple proof was given later by Borg \cite{B}. \begin{thm}\label{HB} (Hilton, \cite{H1}) Let $n,k$ and $t$ be positive integers with $n\geq 2k$ and $t\geq 2$. If $\mathscr{A}_1,\mathscr{A}_2,\cdots, \mathscr{A}_t\subset \binom{[n]}{k}$ are cross-intersecting families, then \begin{equation*} \sum\limits_{i=1}^t|\mathscr{A}_i|\leq \left\{ \begin{aligned} &\binom{n}{k},&\ \mbox{if}\ t\leq\frac{n}{k};\\ &t\binom{n-1}{k-1},& \ \mbox{if}\ t\geq\frac{n}{k}.
\end{aligned} \right. \end{equation*} \end{thm} For $t=2$, many variations of Theorem \ref{HB} were also considered in the literature by imposing some particular restrictions on the families, e.g., the Sperner type restriction \cite{Wong}, the $r$-intersecting restriction \cite{F1,W} and the non-empty restriction \cite{F1,H3,W}. For the non-empty restriction, Hilton and Milner gave the following result: \begin{thm}\label{HM} (Hilton and Milner, \cite{H3}) Let $n$ and $k$ be two positive integers with $n\geq 2k$ and $\mathscr{A},\mathscr{B}\subseteq \binom{[n]}{k}$. If $\mathscr{A}$ and $\mathscr{B}$ are non-empty cross-intersecting, then \begin{align*} |\mathscr{A}|+|\mathscr{B}|\leq \binom{n}{k}-\binom{n-k}{k}+1. \end{align*} \end{thm} In this paper we focus on non-empty cross-intersecting families. Inspired by Theorem \ref{HM}, we prove the following generalization of it. \begin{thm}\label{mainthm} Let $n,k,l,r$ be any integers with $n\geq k+l, l\geq r\geq 1$, $c$ be a positive constant and $\mathscr{A}\subseteq \binom{[n]}{k},\mathscr{B}\subseteq \binom{[n]}{l}$.
If $\mathscr{A}$ and $\mathscr{B}$ are cross-intersecting and $\binom{n-r}{l-r}\leq|\mathscr{B}|\leq \binom{n-1}{l-1}$, then \begin{equation}\label{main2} |\mathscr{A}|+c|\mathscr{B}|\leq \max\left\{\binom{n}{k}-\binom{n-r}{k}+c\binom{n-r}{l-r},\ \binom{n-1}{k-1}+c\binom{n-1}{l-1}\right\} \end{equation} and the upper bound is attained if and only if one of the following holds:\\ {\bf(i).} \begin{equation}\label{iff} \binom{n}{k}-\binom{n-r}{k}+c\binom{n-r}{l-r}\geq\binom{n-1}{k-1}+c\binom{n-1}{l-1}, \end{equation} \hspace{9mm}$n>k+l,\mathscr{A}=\{A\in \binom{[n]}{k}:[r]\cap A\neq \emptyset\},\mathscr{B}=\{B\in\binom{[n]}{l}:[r]\subseteq B\}$;\\ {\bf(ii).} The `$\geq$' in (\ref{iff}) is `$\leq$', $n>k+l$, $\mathscr{A}=\{A\in \binom{[n]}{k}:1\in A\},\mathscr{B}=\{B\in \binom{[n]}{l}:1\in B\}$;\\ {\bf(iii).} $n=k+l,c< 1$, $\mathscr{B}\subset\binom{[n]}{l}$ with $|\mathscr{B}|=\binom{n-r}{l-r},\mathscr{A}=\binom{[n]}{k}\setminus\overline{\mathscr{B}}$;\\ {\bf(iv).} $n=k+l,c=1$, $\mathscr{B}\subset\binom{[n]}{l}$ with $\binom{n-r}{l-r}\leq|\mathscr{B}|\leq\binom{n-1}{l-1},\mathscr{A}=\binom{[n]}{k}\setminus\overline{\mathscr{B}}$;\\ {\bf(v).} \ $n=k+l,c> 1$, $\mathscr{B}\subset\binom{[n]}{l}$ with $|\mathscr{B}|=\binom{n-1}{l-1},\mathscr{A}=\binom{[n]}{k}\setminus\overline{\mathscr{B}}$. \end{thm} The following result is a simple consequence of Theorem \ref{mainthm}. It provides another generalization of Theorem \ref{HM} by extending the two families to an arbitrary number of families, and it is also a sharpening of Theorem \ref{HB}. \begin{cor}\label{thm 1.7} Let $n,k$ and $t$ be positive integers with $n\geq 2k$ and $t\geq 2$. If $\mathscr{A}_1,\mathscr{A}_2,\cdots,\mathscr{A}_t$ $\subseteq \binom{[n]}{k}$ are non-empty cross-intersecting families, then \begin{equation}\label{main} \sum\limits_{i=1}^t|\mathscr{A}_i|\leq \max\left\{\binom{n}{k}-\binom{n-k}{k}+t-1,\ t\binom{n-1}{k-1}\right\} \end{equation} and the upper bound is sharp.
\end{cor} \section{Proof of Theorem \ref{mainthm} and Corollary \ref{thm 1.7}} {\bf Proof of Theorem \ref{mainthm}} Let $\prec_L$, or $\prec $ for short, be the lexicographic order on $\binom{[n]}{i}$ where $i\in\{1,2,\cdots,n\}$, that is, for any two sets $A,B\in\binom{[n]}{i}$, $A\prec B$ if and only if $\min\{a:a\in A\setminus B\}<\min\{b:b\in B\setminus A\}$. For a family $\mathscr{A}\subseteq\binom{[n]}{k}$, let $\mathscr{A}_{L}$ denote the family consisting of the first $|\mathscr{A}|$ $k$-sets in order $\prec$, and call $\mathscr{A}$ $L$-{\it initial} if $\mathscr{A}_L=\mathscr{A}$. In our forthcoming argument, the well-known Kruskal-Katona theorem \cite{K1,K2} will play a key role, an equivalent formulation of which was given in \cite{F3,H2} as follows: \noindent {\bf Kruskal-Katona theorem.} {\it For $\mathscr{A}\subseteq\binom{[n]}{k}$ and $\mathscr{B}\subseteq\binom{[n]}{l}$, if $\mathscr{A}$ and $\mathscr{B}$ are cross-intersecting then $\mathscr{A}_{L}$ and $\mathscr{B}_{L}$ are cross-intersecting as well. } For any $i\in\{1,2,\cdots,k\}$, let $$\mathscr{P}^{(l)}_i=\left\{P\in \binom{[n]}{l}:P\supseteq[i]\right\}\ \ {\rm and}\ \ \mathscr{R}^{(k)}_i=\left\{R\in \binom{[n]}{k}:R\cap[i]\neq\emptyset\right\}.$$ \begin{lem}\label{RP} Let $n,k,l$ be any integers with $n\geq k+l$. For any $i\in\{1,2,\cdots,k\}$, $\mathscr{R}^{(k)}_{i}$ is the largest family that is cross-intersecting with $\mathscr{P}^{(l)}_{i}$, and vice versa. Moreover, $\mathscr{R}^{(k)}_{i}$ and $\mathscr{P}^{(l)}_{i}$ are both $L$-initial. \end{lem} \begin{proof} Assume that $A$ is a $k$-set that intersects every $l$-set in $\mathscr{P}^{(l)}_{i}$. Choose an arbitrary $(l-i)$-set $B$ from $\{i+1,i+2,\cdots,n\}\setminus A$ (such $B$ exists since $n\geq k+l$). Then $[i]\cup {B}\in\mathscr{P}^{(l)}_{i}$. Since $A\cap B=\emptyset$ and $A$ intersects every $l$-set in $\mathscr{P}^{(l)}_{i}$, we must have ${A}\cap [i]\not=\emptyset$.
Hence, $A\in \mathscr{R}^{(k)}_{i}$ and thus $\mathscr{R}^{(k)}_{i}$ is the largest. The converse is analogous. Finally, the last part follows directly from the definitions of $\mathscr{P}^{(l)}_{i}$ and $\mathscr{R}^{(k)}_{i}$. \end{proof} By the Kruskal-Katona theorem, when investigating the maximum of $|\mathscr{A}|+c|\mathscr{B}|$, we may assume that both $\mathscr{A}$ and $\mathscr{B}$ are $L$-initial families. Moreover, for given $\mathscr{B}$, $\mathscr{A}$ is the largest family that is cross-intersecting with $\mathscr{B}$ and vice versa. That is, $$\mathscr{A}=\left\{A\in \binom{[n]}{k}: A\cap B\not=\emptyset\ \ {\rm for\ all}\ B\in\mathscr{B}\right\},$$ $$\mathscr{B}=\left\{B\in \binom{[n]}{l}: A\cap B\not=\emptyset\ \ {\rm for\ all}\ A\in\mathscr{A}\right\}.$$ We call such a pair $(\mathscr{A},\mathscr{B})$ a {\it maximal pair}. Hence, the condition $|\mathscr{B}|\geq \binom{n-r}{l-r}$ implies $\mathscr{P}^{(l)}_r\subseteq \mathscr{B}$. Define $s$ to be the minimal integer such that $\mathscr{P}^{(l)}_s\ \subseteq \ \mathscr{B}$. Therefore, $1\leq s\leq r$. In the case that $s=1$, we have $\mathscr{P}^{(l)}_{1}=\{P\in \binom{[n]}{l}:1\in P\}$ and $\mathscr{R}^{(k)}_{1}=\{R\in \binom{[n]}{k}:1\in R\}$. Since $\mathscr{P}^{(l)}_1\subseteq \mathscr{B}$, $\binom{n-1}{l-1}=|\mathscr{P}^{(l)}_1|\leq|\mathscr{B}|\leq \binom{n-1}{l-1}$. This means that the only possibility is $\mathscr{B}=\mathscr{P}^{(l)}_1$. So by Lemma \ref{RP}, $\mathscr{A}=\mathscr{R}^{(k)}_1$ and, hence, $|\mathscr{A}|+c|\mathscr{B}|=\binom{n-1}{k-1}+c\binom{n-1}{l-1}$. Theorem \ref{mainthm} follows in this case. From now on we assume that $2\leq s\leq r$. By the minimality of $s$, we have \begin{equation}\label{P} \mathscr{P}^{(l)}_s\ \subseteq\ \mathscr{B}\subset \mathscr{P}^{(l)}_{s-1}. \end{equation} By Lemma \ref{RP}, $\mathscr{B}\subset \mathscr{P}^{(l)}_{s-1}$ means that $\mathscr{R}^{(k)}_{s-1}$ is cross-intersecting with $\mathscr{B}$.
Hence, $\mathscr{R}^{(k)}_{s-1}\subseteq \mathscr{A}$ since $\mathscr{A}$ is the largest. On the other hand, $\mathscr{P}^{(l)}_s\subseteq \mathscr{B}$ means that $\mathscr{A}$ is cross-intersecting with $\mathscr{P}^{(l)}_s$ since $\mathscr{A}$ is cross-intersecting with $\mathscr{B}$. So, again by Lemma \ref{RP}, we have $\mathscr{A}\subseteq \mathscr{R}^{(k)}_{s}$. In conclusion, (\ref{P}) implies \begin{equation} \mathscr{R}^{(k)}_{s-1}\subseteq \mathscr{A}\subseteq \mathscr{R}^{(k)}_s. \end{equation} Consider the cross-intersecting pair $\mathscr{B}_0:=\mathscr{B}\setminus \mathscr{P}^{(l)}_s$ and $\mathscr{A}_0:=\mathscr{A}\setminus \mathscr{R}^{(k)}_{s-1}$. By (\ref{P}), we have $B\cap[s]=[s-1]$ for any $B\in \mathscr{B}_0$, and $A\cap[s]=\{s\}$ for any $A\in \mathscr{A}_0$. Let \begin{equation*} Y=\binom{[s+1,n]}{l-s+1},\ \ X=\binom{[s+1,n]}{k-1}, \end{equation*} where $[s+1,n]=\{s+1,s+2,\cdots,n\}$. Define $G_s$ to be the bipartite graph with partite sets $X$ and $Y$, in which $PQ$ is an edge if and only if $P\in X$, $Q\in Y$ and $P\cap Q=\emptyset$. It is clear that $G_s$ is {\it biregular}, that is, the vertices in the same partite set have the same degree. \begin{lem}\label{bigraph} Let $G$ be a bipartite biregular graph with partite sets $P$ and $Q$, and let $c$ be a positive real constant. Let $P_0\subseteq P$ and $Q_0\subseteq Q$. If $P_0\cup Q_0$ is independent, then $|P_0|+c|Q_0|\leq \max\{|P|,c|Q|\}$. Moreover, if $G$ is connected, then equality is possible only for $P_0\cup Q_0=P$ or $Q$. \end{lem} \begin{proof} For a set $W$ of vertices, we denote by $N(W)$ the neighbourhood of $W$. Since $P_0\cup Q_0$ is independent, we have $N(P_0)\cap Q_0=\emptyset$ and $N(Q_0)\cap P_0=\emptyset$. Further, $|N(P_0)|\geq |P_0||Q|/|P|$ and $|N(Q_0)|\geq |Q_0||P|/|Q|$ since $G$ is biregular. Moreover, if $G$ is connected, then equality holds only if $P_0=P$ or $\emptyset$ and $Q_0=Q$ or $\emptyset$.
Hence, if $|P|\geq c|Q|$ then we have $$|P_0|+c|Q_0|\leq |P_0|+\frac{|P|}{|Q|}|Q_0|\leq |P_0|+\frac{|P|}{|Q|}\frac{|Q|}{|P|}|N(Q_0)|\leq |P|.$$ The discussion for the case that $|P|\leq c|Q|$ is analogous. \end{proof} Let us first consider the case $n>k+l$. Set $\mathscr{A}_1=\{A\backslash[s]:A\in \mathscr{A}_0\}$ and $\mathscr{B}_1=\{B\backslash[s]:B\in \mathscr{B}_0\}$. Then for any $A\in \mathscr{A}_1$ and $B\in \mathscr{B}_1$, we have $A\cap B\not=\emptyset$ since $A\cup \{s\}\in \mathscr{A},B\cup [s-1]\in \mathscr{B}$ while $\mathscr{A}$ and $\mathscr{B}$ are cross-intersecting. This means that $\mathscr{A}_1\cup\mathscr{B}_1$ is independent in $G_s$. Let us note that $G_s$ is connected for $n>k+l$. So by Lemma \ref{bigraph}, \begin{equation}\label{bound} |\mathscr{A}_0|+c|\mathscr{B}_0|=|\mathscr{A}_1|+c|\mathscr{B}_1|\leq\max\{|X|,c|Y|\}. \end{equation} Moreover, for $n>k+l$, equality is possible in (\ref{bound}) only if $\mathscr{A}_1=X,\mathscr{B}_1=\emptyset$ or $\mathscr{A}_1=\emptyset,\mathscr{B}_1=Y$. Consequently, either the maximal pair $(\mathscr{A},\mathscr{B})$ is $(\mathscr{R}^{(k)}_s,\mathscr{P}^{(l)}_s)$ or it is $(\mathscr{R}^{(k)}_{s-1},\mathscr{P}^{(l)}_{s-1})$. Hence, we have \begin{equation}\label{max} \max\{|\mathscr{A}|+c|\mathscr{B}|\}=|\mathscr{R}^{(k)}_i|+c|\mathscr{P}^{(l)}_i| \end{equation} for some $i\in\{1,2,\cdots, r\}$. We claim that the maximum in (\ref{max}) is achieved for $i=1$ or $i=r$. To prove this, it is sufficient to prove that there is no $i$ with $2\leq i<r$ satisfying both \begin{align} |\mathscr{R}^{(k)}_i|+c|\mathscr{P}^{(l)}_i|\geq |\mathscr{R}^{(k)}_{i-1}|+c|\mathscr{P}^{(l)}_{i-1}|,\\ |\mathscr{R}^{(k)}_i|+c|\mathscr{P}^{(l)}_i|\geq |\mathscr{R}^{(k)}_{i+1}|+c|\mathscr{P}^{(l)}_{i+1}|. \end{align} Equivalently, \begin{align*} &\binom{n-i}{k-1}\geq c\binom{n-i+1}{l-i+1}-c\binom{n-i}{l-i}=c\binom{n-i}{l-i+1},\\ &c\binom{n-i-1}{l-i}\geq \binom{n-i-1}{k-1}.
\end{align*} Multiplying the two inequalities yields \begin{equation} c\binom{n-i}{k-1}\binom{n-i-1}{l-i}\ \geq\ c\binom{n-i}{l-i+1}\binom{n-i-1}{k-1} \end{equation} or equivalently, \begin{equation*} \binom{n-i}{k-1}/\binom{n-i-1}{k-1}\ \geq\ \binom{n-i}{l-i+1}/\binom{n-i-1}{l-i}. \end{equation*} Hence, \begin{equation} \frac{1}{n-i-k+1}\geq \frac{1}{l-i+1}. \end{equation} This contradicts the assumption that $n> k+l$. Our claim follows. Thus we have proved that the only maximal pairs are $(\mathscr{R}^{(k)}_1,\mathscr{P}^{(l)}_1)$ or $(\mathscr{R}^{(k)}_r,\mathscr{P}^{(l)}_r)$. This concludes the proof of (\ref{main2}). The uniqueness for initial families follows as well. To extend uniqueness to general families, we will apply a result proved by F\"{u}redi and Griggs, and independently by M\"{o}rs. To state it we need a definition. For two integers $i,j$ with $n\geq i+j$ and a family $\mathscr{F}\subset \binom{[n]}{i}$, let us define $$\mathscr{D}_j(\mathscr{F})=\left\{D\in \binom{[n]}{j}:\exists F\in\mathscr{F},D\cap F=\emptyset\right\}.$$ With this terminology, $\mathscr{A}$ and $\mathscr{B}$ are cross-intersecting if and only if $\mathscr{A}\cap\mathscr{D}_k(\mathscr{B})=\emptyset$ or equivalently $\mathscr{B}\cap\mathscr{D}_l(\mathscr{A})=\emptyset$. They form a maximal pair if and only if $\mathscr{A}=\binom{[n]}{k}\setminus\mathscr{D}_k(\mathscr{B})$ and $\mathscr{B}=\binom{[n]}{l}\setminus\mathscr{D}_l(\mathscr{A})$. \begin{Proposition}\label{FM} (F\"{u}redi, Griggs \cite{Furedi2}, M\"{o}rs \cite{M}). Suppose that $n>k+l, \mathscr{B}\subset\binom{[n]}{l},|\mathscr{B}|=\binom{n-r}{l-r}$ for some $r$ with $1\leq r\leq l$. Then \begin{equation}\label{FGM} |\mathscr{D}_k(\mathscr{B})|\geq \binom{n-r}{k} \end{equation} with strict inequality unless for some $R\in\binom{[n]}{r},\mathscr{B}=\{B\in\binom{[n]}{l}:R\subset B\}$.
\end{Proposition} We should note that (\ref{FGM}) follows from the Kruskal-Katona theorem; the contribution of \cite{Furedi2} and \cite{M} is the uniqueness part. Actually, they proved analogous results for a much wider range, but we only need this special case. Let us continue with the proof of the uniqueness in the case $n>k+l,|\mathscr{A}|=\binom{n}{k}-\binom{n-r}{k},|\mathscr{B}|=\binom{n-r}{l-r}$. From Proposition \ref{FM} and $|\mathscr{A}|=\binom{n}{k}-|\mathscr{D}_k(\mathscr{B})|$, we infer $|\mathscr{D}_k(\mathscr{B})|=\binom{n-r}{k}$. Hence, for some $R\in \binom{[n]}{r}$, $\mathscr{B}=\{B\in\binom{[n]}{l}:R\subset B\}$ and $\mathscr{A}=\binom{[n]}{k}\setminus\mathscr{D}_k(\mathscr{B})=\{A\in\binom{[n]}{k}:A\cap R\not=\emptyset\}$. Let us next consider the case $n=k+l$. First note that every $k$-set (resp., $l$-set) $F$ is disjoint from exactly one $l$-set (resp., $k$-set), namely its complement $\overline{F}=[n]\setminus F$. Consequently, for a family $\mathscr{B}\subset\binom{[n]}{l}$, $\mathscr{D}_k(\mathscr{B})=\overline{\mathscr{B}}=\{\overline{B}:B\in\mathscr{B}\}$. Hence, for any maximal pair $(\mathscr{A}, \mathscr{B})$, $\mathscr{A}=\binom{[n]}{k}\setminus\overline{\mathscr{B}}$ and $|\mathscr{A}|+|\mathscr{B}|=\binom{n}{k}-|\overline{\mathscr{B}}|+|\mathscr{B}|=\binom{n}{k}$. This shows that for $c=1$, $|\mathscr{A}|+|\mathscr{B}|=\binom{n}{k}$ holds if and only if $\mathscr{B}$ is an arbitrary family with $\binom{n-r}{l-r}\leq |\mathscr{B}|\leq \binom{n-1}{l-1}$ and $\mathscr{A}=\binom{[n]}{k}\setminus\overline{\mathscr{B}}$. For $c>1$, the maximum in Theorem \ref{mainthm} is $\binom{n-1}{k-1}+c\binom{n-1}{l-1}$. It is realized by any pair $(\mathscr{A}, \mathscr{B})$ with $|\mathscr{B}|=\binom{n-1}{l-1},\mathscr{A}=\binom{[n]}{k} \setminus\overline{\mathscr{B}}$. For $c<1$, the maximum is $\binom{n}{k}-\binom{n-r}{k}+c\binom{n-r}{l-r}$.
To realize it we can choose an arbitrary $\mathscr{B}\subset\binom{[n]}{l}$ satisfying $|\mathscr{B}|=\binom{n-r}{l-r}$ and set $\mathscr{A}=\binom{[n]}{k}\setminus \overline{\mathscr{B}}$. This completes the proof of Theorem \ref{mainthm}. \noindent{\bf Proof of Corollary \ref{thm 1.7}} Without loss of generality we assume that $|\mathscr{A}_{1}|\geq|\mathscr{A}_2|\geq\cdots\geq|\mathscr{A}_t|$. For $i\in\{1,2,\cdots,t\}$, write $\mathscr{B}_i=(\mathscr{A}_i)_{L}$. Then, $\mathscr{B}_1\supseteq \mathscr{B}_2\supseteq\cdots\supseteq\mathscr{B}_t$ and $\sum_{i=1}^t|\mathscr{B}_i|=\sum_{i=1}^t|\mathscr{A}_i|$. Further, by the Kruskal-Katona Theorem, $\mathscr{B}_1, \mathscr{B}_2,\cdots,\mathscr{B}_t$ are cross-intersecting and, therefore, $\mathscr{B}_i$ is intersecting for $i\geq 2$ as $\mathscr{B}_1\supseteq\mathscr{B}_i$. So by the Erd\H{o}s-Ko-Rado theorem, we have $|\mathscr{B}_2|\leq\binom{n-1}{k-1}$. Further, the $t$ families $\mathscr{B}_1,\mathscr{B}_2,\cdots,\mathscr{B}_2$, in which $\mathscr{B}_2$ is repeated $t-1$ times, are cross-intersecting too. Hence, $$\sum\limits_{i=1}^t|\mathscr{A}_i|=\sum\limits_{i=1}^t|\mathscr{B}_i|\leq|\mathscr{B}_1|+(t-1)|\mathscr{B}_2|.$$ In Theorem \ref{mainthm}, setting $\mathscr{A}=\mathscr{B}_1, \mathscr{B}=\mathscr{B}_2,c=t-1$ and $r=k=l$, we obtain $$\sum\limits_{i=1}^t|\mathscr{A}_i|\leq|\mathscr{B}_1|+(t-1)|\mathscr{B}_2|\leq \max\left\{\binom{n}{k}-\binom{n-k}{k}+t-1,\ t\binom{n-1}{k-1}\right\}$$ and the upper bound is sharp. This completes our proof. \section{An application of Theorem \ref{mainthm}} Let us recall a related result. \begin{thm}\label{thm 1.3} (Frankl and Tokushige, \cite{F2}) Let $\mathscr{A}\subset\binom{[n]}{k}$ and $\mathscr{B}\subset\binom{[n]}{l}$ be non-empty cross-intersecting families with $n\geq k+l$ and $k\geq l$. Then \begin{equation}\label{FT} |\mathscr{A}|+|\mathscr{B}|\leq \binom{n}{k}-\binom{n-l}{k}+1.
\end{equation} \end{thm} We should mention that this result has found several applications; in particular, in \cite{F2} it is used to provide a simple proof of the following important result. \begin{thm}\label{stability} (Hilton-Milner stability Theorem \cite{H3}) Suppose that $\mathscr{F}\subset\binom{[n]}{k}$ is intersecting, $\bigcap_{F\in\mathscr{F}}F=\emptyset$ and $n>2k$. Then \begin{equation}\label{FMEQ} |\mathscr{F}|\leq \binom{n-1}{k-1}-\binom{n-k-1}{k-1}+1. \end{equation} \end{thm} Let us now derive (\ref{FT}) from Theorem \ref{mainthm}. Without loss of generality, we may assume that $\mathscr{A}=\mathscr{A}_L,\mathscr{B}=\mathscr{B}_L$, i.e., both families are initial. We distinguish two cases. \noindent{\bf (a).} $|\mathscr{A}|>\binom{n-1}{k-1}$. Since $\mathscr{A}$ is initial, $\mathscr{A}\supset\mathscr{P}^{(k)}_1=\mathscr{R}^{(k)}_1$ follows. Now the cross-intersecting property implies $\mathscr{B}\subset\mathscr{P}^{(l)}_1$, in particular $|\mathscr{B}|\leq\binom{n-1}{l-1}$. Applying Theorem \ref{mainthm} with $c=1$ and $r=l$ yields \begin{equation}\label{10} |\mathscr{A}|+|\mathscr{B}|\leq \max\left\{\binom{n}{k}-\binom{n-l}{k}+1,\ \binom{n-1}{k-1}+\binom{n-1}{l-1}\right\}. \end{equation} \noindent{\bf (b).} $|\mathscr{A}|\leq\binom{n-1}{k-1}$. Since $|\mathscr{A}|\geq\binom{n-k}{k-k}=1$, we may apply Theorem \ref{mainthm} with the roles of $\mathscr{A}$ and $\mathscr{B}$ interchanged, $r=k,c=1$, to obtain \begin{equation}\label{11} |\mathscr{B}|+|\mathscr{A}|\leq \max\left\{\binom{n}{l}-\binom{n-k}{l}+1,\ \binom{n-1}{l-1}+\binom{n-1}{k-1}\right\}. \end{equation} Comparing (\ref{10}) and (\ref{11}) with (\ref{FT}), to conclude the proof we must show the following two inequalities: \begin{equation}\label{12} \binom{n-1}{k-1}+\binom{n-1}{l-1}\leq \binom{n}{k}-\binom{n-l}{k}+1, \end{equation} \begin{equation}\label{13} \binom{n}{l}-\binom{n-k}{l}\leq \binom{n}{k}-\binom{n-l}{k}.
\end{equation} Using the formulae $$\binom{n-1}{l-1}=\binom{n-2}{l-1}+\binom{n-3}{l-2}+\cdots+\binom{n-l-1}{0}$$ and $$\binom{n}{k}-\binom{n-l}{k}=\binom{n-1}{k-1}+\cdots+\binom{n-l}{k-1},$$ (\ref{12}) is equivalent to $$\binom{n-2}{l-1}+\cdots+\binom{n-l}{1}\leq\binom{n-2}{k-1}+\cdots+\binom{n-l}{k-1}.$$ This inequality follows by the termwise comparison \begin{equation}\label{14} \binom{n-i}{l+1-i}\leq \binom{n-i}{k-1}, \ 2\leq i\leq l. \end{equation} Since for $i\geq 2, l+1-i\leq k-1$ and $(l+1-i)+(k-1)=k+l-i\leq n-i$, (\ref{14}) and thereby (\ref{12}) hold. Proving (\ref{13}) is not hard either. If $n=k+l$ then we have equality. Let us apply induction on $n$, supposing that (\ref{13}) holds for all triples $(\widetilde{n},\widetilde{k},\widetilde{l})$ with $\widetilde{k}+\widetilde{l}\leq\widetilde{n}<n$ and $\widetilde{k}\geq\widetilde{l}\geq 1$. Thus we may use the two instances of (\ref{13}) given by the triples $(n-1,k,l)$ and $(n-1,k-1,l-1)$, namely $$\binom{n-1}{l}-\binom{(n-1)-k}{l}\leq \binom{n-1}{k}-\binom{(n-1)-l}{k}$$ and $$\binom{n-1}{l-1}-\binom{(n-1)-(k-1)}{l-1}\leq \binom{n-1}{k-1}-\binom{(n-1)-(l-1)}{k-1},$$ together with the elementary inequality $$\binom{n-k-1}{l-2}\leq \binom{n-l-1}{k-2},$$ which holds since $k\geq l$ and $n\geq k+l$. Summing them up and applying Pascal's rule yields (\ref{13}) and concludes the new proof of Theorem \ref{thm 1.3}. \section{Remark and open problems} Let us recall that two families $\mathscr{A},\mathscr{B}$ are called cross-$q$-intersecting if $|A\cap B|\geq q$ for all $A\in\mathscr{A},B\in\mathscr{B}$. The following are two related results concerning cross-$q$-intersecting families. \begin{thm}\label{FK} (Frankl and Kupavskii, \cite{F1}) Let $\mathscr{A}, \mathscr{B}\subset\binom{[n]}{k}$ be non-empty cross-$q$-intersecting families with $k>q\geq 1$ and $n>2k-q$. Then \begin{align*} |\mathscr{A}|+|\mathscr{B}|\leq \binom{n}{k}-\sum\limits_{i=0}^{q-1}\binom{k}{i}\binom{n-k}{k-i}+1. \end{align*} \end{thm} \begin{thm}\label{WZ} (Wang and Zhang, \cite{W}) Let $n\geq 4$, $k,l\geq 2$, $q<\min\{k,l\}$, $n>k+l-q$, $(n,q)\neq(k+l,1),\binom{n}{k}\leq\binom{n}{l}$.
Then for any non-empty cross-$q$-intersecting families $\mathscr{A}\subset\binom{[n]}{k}$ and $\mathscr{B}\subset\binom{[n]}{l}$, \begin{align*} |\mathscr{A}|+|\mathscr{B}|\leq\binom{n}{k}-\sum\limits_{i=0}^{q-1}\binom{k}{i}\binom{n-k}{l-i}+1. \end{align*} \end{thm} Based on the two theorems above, the following three problems are inspired naturally by Corollary \ref{thm 1.7}: \begin{problem}\label{c1} Let $\mathscr{A}_1\subset\binom{[n]}{k_1},\mathscr{A}_2\subset\binom{[n]}{k_2},\cdots,\mathscr{A}_t\subset \binom{[n]}{k_t}$ be non-empty cross-intersecting families with $k_1\geq k_2\geq\cdots\geq k_t$, $n\geq k_1+k_2$ and $t\geq 2$. Is it true that \begin{align*} \sum\limits_{i=1}^t|\mathscr{A}_i|\leq\max\left\{\binom{n}{k_1}-\binom{n-k_t}{k_1}+\sum\limits_{i=2}^{t}\binom{n-k_t}{k_i-k_t},\ \sum\limits_{i=1}^t\binom{n-1}{k_i-1}\right\}? \end{align*} \end{problem} We note that if we set $c=t-1$ in Theorem \ref{mainthm}, then we obtain a positive answer to Problem \ref{c1} for the special case that $k_2=\cdots= k_t$. \begin{problem}\label{c2} Let $\mathscr{A}_1,\mathscr{A}_2,\cdots, \mathscr{A}_t\subset \binom{[n]}{k}$ be non-empty cross-$q$-intersecting families with $k>q\geq 1$, $n>2k-q$ and $t\geq 2$. Is it true that \begin{align*} \sum\limits_{i=1}^t|\mathscr{A}_i|\leq\max\left\{\binom{n}{k}-\sum\limits_{i=0}^{q-1}\binom{k}{i}\binom{n-k}{k-i}+t-1,\ t\binom{n-q}{k-q}\right\}? \end{align*} \end{problem} \begin{problem}\label{c3} Let $\mathscr{A}_1\subset\binom{[n]}{k_1},\mathscr{A}_2\subset\binom{[n]}{k_2},\cdots,\mathscr{A}_t\subset \binom{[n]}{k_t}$ be non-empty cross-$q$-intersecting families with $k_1\geq k_2\geq\cdots\geq k_t>q\geq 1$, $n>k_1+k_2-q$ and $t\geq 2$. Is it true that \begin{align*} \sum\limits_{i=1}^t|\mathscr{A}_i|\leq\max\left\{\binom{n}{k_1}-\sum\limits_{i=0}^{q-1}\binom{k_t}{i}\binom{n-k_t}{k_1-i}+\sum\limits_{i=2}^{t}\binom{n-k_t}{k_i-k_t} ,\ \sum\limits_{i=1}^t\binom{n-q}{k_i-q}\right\}? 
\end{align*} \end{problem} We note that a positive answer to Problem \ref{c3} would imply that to Problem \ref{c2} and, hence, to Problem \ref{c1}. Moreover, the upper bound in Problem \ref{c3} is attained by setting $\mathscr{A}_1=\{A\in \binom{[n]}{k_1}:|A\cap[k_t]|\geq q\}$ and $\mathscr{A}_i=\{A\in \binom{[n]}{k_i}:A\supseteq[k_t]\}$ for $i\in\{2,3,\cdots,t\}$ if \begin{equation}\label{p3} \binom{n}{k_1}-\sum\limits_{i=0}^{q-1}\binom{k_t}{i}\binom{n-k_t}{k_1-i}+\sum\limits_{i=2}^{t}\binom{n-k_t}{k_i-k_t}\geq \sum\limits_{i=1}^t\binom{n-q}{k_i-q}, \end{equation} or setting $\mathscr{A}_i=\{A\in \binom{[n]}{k_i}:A\supseteq[q]\}$ for all $i\in\{1,2,\cdots,t\}$ if the `$\geq$' in (\ref{p3}) is `$\leq$'. \end{document}
\begin{document} \title{Infinite Order Amphicheiral Knots} \author{Charles Livingston} \address{Department of Mathematics, Indiana University\\Bloomington, IN 47405, USA} \email{[email protected]} \begin{abstract} In answer to a question of Long, Flapan constructed an example of a prime strongly positive amphicheiral knot that is not slice. Long had proved that all such knots are algebraically slice. Here we show that the concordance group of algebraically slice knots contains an infinitely generated free subgroup that is generated by prime strongly positive amphicheiral knots. A simple corollary of this result is the existence of positive amphicheiral knots that are not of order two in concordance. \end{abstract} \keywords{Knot, amphicheiral, concordance, infinite order} \primaryclass{57M25}\secondaryclass{57M27} \maketitle \section{Introduction} In 1977 Gordon \cite{go1} asked whether every class of order two in the classical knot concordance group can be represented by an amphicheiral knot. The question remains open although counterexamples in higher dimensions are now known to exist \cite{cm}. This problem is more naturally stated in terms of {\em negative} amphicheiral knots, since such knots represent 2--torsion in concordance; that is, if $K$ is negative amphicheiral, $ K \# K$ is slice. On the other hand there is no reason to assume that {\em positive} amphicheiral knots represent 2--torsion. Surprisingly, until now no examples of positive amphicheiral knots that are not 2--torsion in concordance have been known. The first goal of this paper is to present an example of a positive amphicheiral knot that is of infinite order in the concordance group. Long \cite{lo} proved that every {\em strongly positive} amphicheiral knot is algebraically slice. (This result followed work of Hartley and Kawauchi \cite{hk} showing a close relationship between strongly positive amphicheiral knots and slice knots.) 
Long also used an example of the author \cite{li1} to show that such knots need not be slice. Long's example was composite and he asked whether such an example exists among prime strongly positive amphicheiral knots. Flapan provided such an example in \cite{fl}. Here we will prove that examples such as Flapan's are of infinite order in the concordance group. This, of course, provides an example of a positive amphicheiral knot that is not 2--torsion in concordance. We will in fact prove much more, describing an infinite set of strongly positive amphicheiral knots that are linearly independent in the concordance group. We restate this as the main theorem of this paper. \begin{thm}\label{mainthm} The group of concordance classes of algebraically slice knots contains an infinitely generated free subgroup generated by prime strongly positive amphicheiral knots. \end{thm} \noindent{\bf Acknowledgement}\qua This research was motivated by a question of Alexander Stoimenow asking the order of Flapan's knot in the concordance group. \section{Amphicheirality} We will assume the reader is familiar with basic knot theory; \cite{rol} serves as a good reference. Material concerning Casson--Gordon invariants will be referenced as needed. Since the details of orientation and amphicheirality may be less familiar, we review them briefly here. For our purposes it is simplest to view a knot as a smooth oriented pair, $(S, K)$, where $S$ is diffeomorphic to $S^3$ and $K$ is diffeomorphic to $S^1$. Such a pair is usually abbreviated simply as $K$. Associated to $K$ we have three other knots: the {\em mirror} of $K$, $ K^{*} = (-S ,K)$; the {\em reverse} of $K$, $K^r = (S,-K)$; and the (concordance) {\em inverse} of $K$, $-K = (-S, -K)$. A knot is called {\em positive amphicheiral} if $( S, K)$ is oriented diffeomorphic to $(-S , K)$, and is called {\em strongly positive amphicheiral} if that diffeomorphism can be taken to be an involution.
In terms of a knot diagram, a knot is strongly positive amphicheiral if changing all the crossings of some oriented diagram of the knot results in a new diagram that can be transformed back into the original diagram by a 180 degree rotation. In Figure 2 we illustrate a strongly positive amphicheiral knot. \section{ Casson--Gordon invariant results}\label{cgsection} To prove that algebraically slice knots are not slice we need Casson--Gordon invariants. We now summarize the results that we will use. The main result of \cite{cg1} is the following. Let $K$ be a knot in $S^3$ and let $M_q$ denote its $q$--fold cyclic branched cover with $q$ a prime power. \begin{prop} If a knot $K$ is slice then there is a subgroup $H \subset H_1(M_q,{\bf Z})$ such that: \begin{enumerate} \item $H^{\perp} = H$ with respect to the linking form on $H_1(M_q,{\bf Z})$ and, in particular, the order of $H$ is the square root of the order of $H_1(M_q,{\bf Z})$. \item If $\chi\co H_1(M_q,{\bf Z}) \rightarrow {\bf Z}_{p^k}$, $p$ prime, is a homomorphism that vanishes on $H$ then the associated Casson--Gordon invariant satisfies $\sigma(K,\chi) = 0$. \item $H$ is invariant under the group of deck transformations. \end{enumerate} \end{prop} The invariance of $H$ is not stated in \cite{cg1} but follows readily from the construction of $H$. Note that in \cite{cg1} the invariant $\sigma(K,\chi)$ is denoted $\sigma_1(\tau(K,\chi))$. To use this result we do not need the explicit definition of the Casson--Gordon invariants, but need only their properties given in Propositions \ref{cgsat} and \ref{trivialrep} below. We will be constructing our examples by starting with a knot $K$ and replacing the neighborhood of an unknotted circle $L$ in the complement of a Seifert surface for $K$ with the complement of a knot $J$. The identification map switches longitude and meridian so that the resulting manifold is still $S^3$.
The effect of this construction is to tie that portion of $K$ that passes through $L$ into a knot. Details can be found, for example, in \cite{gl1}. Note that the construction does not change the Seifert form of $K$. Furthermore, there is a natural isomorphism between the homology groups of the $q$--fold cyclic branched covers for any given $q$. To see this a simple Mayer--Vietoris argument can be used: the branched cover of the modified knot is built from the branched cover of $K $ by removing a collection of homology circles (solid tori) and replacing them with other homology circles (copies of the complement of $J$). Similarly, there is a correspondence between characters on the two homology groups. We denote the new knot by $K_J$. In the construction used below, this process is iterated; we start with a link $L$ and replace each component with a knot complement. In the next proposition $\sigma_{j/p^k}(J)$ denotes the classical signature of $J$ given as the signature of $(1-\omega)V + (1-\bar{\omega})V^t$, where $V$ is a Seifert matrix for $J$ and $\omega = e^{ {j 2 \pi i / p^{k} }}$. \begin{prop}\label{cgsat} Suppose that $K_J$ is obtained from $K$ by removing the neighborhood of an unknotted circle $L$ and replacing it with the complement of a knot $J$ as described above. Then $ \sigma(K_J,\chi) - \sigma(K,\chi) =\sum_{j=0}^{q-1} \sigma_{\chi(T^j(\tilde{L}))/p^k}(J)$, where $\tilde{L}$ is a lift of $L$, $T$ denotes the generator of the group of deck transformations, and $\chi \co H_1(M_q, {\bf Z}) \rightarrow {\bf Z}_{p^k}$ is a homomorphism. \end{prop} \begin{pf} The proof is contained in \cite{gl1}. It is based on a similar result for computing Casson--Gordon invariants of satellite knots first found by Litherland in \cite{lit2}. \end{pf} The following result is an immediate consequence of a result of Litherland in \cite{lit2}. \begin{prop}\label{trivialrep} If $\chi$ is the trivial character, then $\sigma(K, \chi) = 0$.
\end{prop} \begin{pf} In Corollary B2 of \cite{lit2} it is shown that for the trivial character $\chi$ one has $\sigma_\xi(K,\chi) = \sum_{\zeta^\nu = \xi} \sigma_\zeta(K) - \sum_{\omega^\nu = 1} \sigma_\omega(K) $. Here $\sigma_\xi(K,\chi)$ is a more general Casson--Gordon invariant than what we use here; it is related to $\sigma(K,\chi)$ and the proposition follows from the formula $ \lim_{\xi \to 1} \sigma_\xi(K,\chi) = \sigma(K,\chi)$. \end{pf} \section{Basic building blocks} Consider the knot $K_{ 2m+1}$ illustrated in Figure 1. In the figure a Seifert surface $F$ for $K_{2m+1}$ is evident. The bands are twisted in such a way that the Seifert form is $$ V_m = \left( \begin{matrix} 0 & m+1 \\ m & 0 \end{matrix} \right) . $$ Notice that, writing $\mu = 2m+1$, the knot $K_\mu$ is reversible and $-K_\mu = K_{-\mu}$, which has Seifert form $V_{-m-1}$ since $-2m -1 = 2(-m-1)+1$. \cfigure{2.4in}{fig1.eps}{The knot $K_{2m+1}$} The link $L_1 \cup L_2$ will be used later as follows. A neighborhood of $L_1$ will be removed from $S^3$ and replaced with the complement of a knot $J $ as described earlier. Similarly, a neighborhood of $L_2$ will be removed and replaced with the complement of $-J$. This new knot will be denoted $K_{J, 2m+1}$. As mentioned before, a full explication of this construction is contained in \cite{gl1}. We will need a detailed understanding of the homology of the 3--fold cyclic branched cover of $K_{2m+1}$. Here we give the general result. The proof appears in detail elsewhere (e.g.\ \cite{gl1}) but because it is simple we give a fairly complete outline. Let $M_q(K_{2m+1})$ denote the $q$--fold cyclic branched cover of $K_{2m+1}$ and let $\tilde{L}_i$ denote a fixed lift of $L_i$ to $M_q(K_{2m+1})$, $i=1, 2$. We will use $\tilde{L}_i$ to denote the homology classes represented by the $\tilde{L}_i$ also. \begin{prop}\label{homthm} $H_1(M_q(K_{2m+1}))= {\bf Z}_a \oplus {\bf Z}_a$ with $a = |(m+1)^q - m ^q|$.
The homology is generated by the classes represented by $\tilde{L}_1$ and $\tilde{L}_2$. On homology the deck transformation $T$ acts by $T( \tilde{L}_1) = m^*(m+1) \tilde{L}_1$ and $T( \tilde{L}_2) = (m+1)^* m \tilde{L}_2$, where the superscript $(^*)$ denotes the multiplicative inverse modulo $a$. \end{prop} \begin{pf} A formula of Seifert (see \cite[Section 8D]{rol}) gives the matrix $\Gamma^q - (\Gamma - I)^q$ as a presentation matrix for the homology, where $\Gamma = V_m (V_m^t - V_m)$. That $H_1(M_q(K_{2m+1}))= {\bf Z}_a \oplus {\bf Z}_a$ where $a = |(m+1)^q - m ^q|$ follows readily. A Mayer--Vietoris argument used to compute the homology of the cover (again see \cite[Section 8D]{rol}) shows that the $\tilde{L}_i$ and their translates generate the homology of the cover and also gives the relations $mT(\tilde{L}_1) = (m+1)(\tilde{L_1})$ and $(m+1)T(\tilde{L}_2) = m (\tilde{L_2})$. Hence, $ T(\tilde{L}_1) = m^*(m+1)(\tilde{L_1})$ and $ T(\tilde{L}_2) = (m+1)^*m (\tilde{L_2})$, as desired. (Notice that $m$ and $m+1$ are invertible modulo $a$ since $m$ (respectively $m+1$) and $|(m+1)^q - m ^q|$ have greatest common divisor 1.) \end{pf} \section{A positive amphicheiral knot of infinite order in concordance} Consider the knot $\bar{K}_{2m+1} = K_{2m+1} \# K^*_{2m+1}$ illustrated in Figure 2. The knot is easily seen to be strongly positive amphicheiral: reflection through the plane of the page followed by a 180 degree rotation about the axis perpendicular to the plane of the page gives the desired involution. \cfigure{2.2in}{fig2.eps}{The strongly positive amphicheiral knot, $\bar{K}_{2m+1}$} To form the knot $\bar{K}_{J, 2m+1}$ we replace neighborhoods of $L_1$ and $L_2'$ with the complement of $J$ and neighborhoods of $L_2$ and $L_1'$ with the complement of $-J$. In this way, $\bar{K}_{J, 2m+1} = K_{J,2m+1} \# K^*_{J, 2m+1}$. 
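The presentation matrix appearing in the proof of Proposition \ref{homthm} can also be checked numerically. The sketch below is my own verification, not part of the original argument: it computes $\Gamma^q - (\Gamma - I)^q$ with $\Gamma = V_m(V_m^t - V_m)$ and confirms that for $q=3$ the matrix is diagonal with both entries of absolute value $a = (m+1)^3 - m^3$, so that $H_1(M_3(K_{2m+1})) \cong {\bf Z}_a \oplus {\bf Z}_a$.

```python
# Numerical check (my own, not from the paper) of the presentation matrix
# Gamma^q - (Gamma - I)^q for the Seifert matrix V_m = [[0, m+1], [m, 0]].

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, q):
    R = [[1, 0], [0, 1]]
    for _ in range(q):
        R = mat_mul(R, A)
    return R

def presentation_matrix(m, q):
    V = [[0, m + 1], [m, 0]]
    W = [[0, -1], [1, 0]]                  # V^t - V
    G = mat_mul(V, W)                      # Gamma = diag(m+1, -m)
    GmI = [[G[0][0] - 1, G[0][1]], [G[1][0], G[1][1] - 1]]
    P, Q = mat_pow(G, q), mat_pow(GmI, q)
    return [[P[i][j] - Q[i][j] for j in range(2)] for i in range(2)]

# For q = 3 the matrix is diag(a, a) with a = (m+1)^3 - m^3, giving
# H_1 = Z_a + Z_a; the case m = 1 recovers a = 7, Flapan's example.
for m in range(1, 6):
    a = (m + 1) ** 3 - m ** 3
    M = presentation_matrix(m, 3)
    assert M[0][1] == M[1][0] == 0
    assert abs(M[0][0]) == abs(M[1][1]) == a
```

Since $\Gamma$ is diagonal, the same computation also exhibits the eigenvalue action of the deck transformation on the two generators.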
(To clarify that the picture is correct, observe that mirror reflection through the plane of the page, followed by a 180 degree rotation about a point in the center of the figure, carries the knot complement that replaces $L_1$ to the mirror image of the knot complement that replaces $L'_1$. Hence, for this knot to be strongly positive amphicheiral the knot complement that replaces $L_1$ should be the mirror image of the knot that replaces $L'_1$.) Let $p$ be an odd prime dividing $a = (m+1)^3 - m ^3 $ and suppose that $p$ has exponent 1 in $(m+1)^3 - m ^3 $. That there is an infinite set of such primes, occurring for suitable $m$, is proved in the appendix. In fact, any prime congruent to 1 modulo 3 suffices. \begin{prop} Suppose that $p$ divides $(m+1)^3 - m ^3 $ with exponent $1$. The $p$--torsion in the homology of the $3$--fold branched cover of $ \bar{K}_{2m+1}$ is $({\bf Z}_{p })^4$ generated by the lifts $\{ \tilde{L}_1, \tilde{L}_2, \tilde{ L}'_1, \tilde{ L}'_2\}$. The deck transformation $T$ has eigenvalue $\lambda_+ = m^*(m+1)$ with eigenvectors $\{ \tilde{L}_1, \tilde{ L}'_2 \}$ and eigenvalue $\lambda_- = (m+1)^* m$ with eigenvectors $\{ \tilde{L}_2, \tilde{ L}'_1 \}$. \end{prop} \begin{pf} This all follows readily from Proposition \ref{homthm}. \end{pf} Before proving the next theorem we need to make a simple number theoretic observation. \begin{lemma} If $p$ is a prime divisor of $|(m+1)^3 - m^3|$ then $m^*(m+1) \ne (m+1)^* m$ mod $p$. \end{lemma} \begin{pf} If $m^*(m+1) = (m+1)^* m$ mod $p$ then $m^2 = (m+1)^2$ mod $p$. It cannot be that $m = m+1$ mod $p$, so we would have $m = -m - 1$ mod $p$. In other words, $m = -1/2$ mod $p$. (Since $p$ is odd, 2 is invertible mod $p$.) Substituting this into $(m+1)^3 - m^3$ yields $1/4$ mod $p$. But then $(m+1)^3 - m^3$ is clearly not divisible by $p$. \end{pf} \begin{thm} Suppose that some odd prime $p$ divides $(m+1)^3 - m ^3 $ with exponent 1.
For an appropriate choice of $J$ the knot $\bar{K}_{J, 2m+1}$ has infinite order in the concordance group. \end{thm} \begin{pf} The necessary properties of $J$ will be developed in the course of the proof. Consider the connected sum of $n $ copies of $\bar{K}_{J, 2m+1}$. We want to apply the properties of Casson--Gordon invariants as described in Section \ref{cgsection}, working with the 3--fold cover. Since the $p$-torsion in $H_1(M_3,{\bf Z})$ is $({\bf Z}_{p })^{4n}$, the order of $H$ is $ {p } ^{2n}$. Notice that $(T-\lambda_+ )(T-\lambda_-)$ annihilates the homology of the cover, so $H$ splits into a summand annihilated by $T - \lambda_+$ and a summand annihilated by $T- \lambda_-$. Here we use that the eigenvalues are distinct, which follows from the previous lemma. Call these $H_+$ and $H_-$. One of these has dimension at least $ n$; we will assume that it is $H_+$, the other case is identical. Any $v$ in $H_+$ is a linear combination of the $\{ \tilde{L}_1, \tilde{ L}'_2 \}$ associated to each of the $n$ summands. We want to see that some such $v$ involves at least $n$ of these basis elements. This is an exercise in elementary linear algebra as follows. Write out a set of basis elements for $H_+$ in terms of the full set of $\{ \tilde{L}_1, \tilde{ L}'_2 \}$. Writing these as the rows of a matrix (with at least $n$ rows and exactly $2n$ columns) and applying Gauss-Jordan elimination, combining generators of $H_+$ and reordering the $\{ \tilde{L}_1, \tilde{ L}'_2 \}$ yields a generating set with matrix representation $$ \left( \begin{matrix} 1 & 0 & 0 & \ldots \\ 0 & 1 & 0 & \ldots \\ 0 & 0 & 1 & \ldots \\ \ldots & & & \end{matrix} \right).$$ The sum of these basis elements corresponding to the rows gives the desired element $v$. Linking with $v$ defines a character $\chi$ on the homology of the 3--fold cover of $n\bar{K}_{2m+1}$. Since $v$ is in the metabolizer, $\chi$ vanishes on the metabolizer.
Considering just a single building block, $K_{2m+1}$, we have that the linking form satisfies $\mbox{lk}(\tilde{L}_i,\tilde{L}_i) = 0 $, $i = 1$ and 2. This implies that $\tilde{L}_1$ and $\tilde{L}_2$ link nontrivially since the linking form is nonsingular. From the preceding discussion we see that $\chi$ evaluates nontrivially on at least $n$ of the $\{\tilde{ L}_2, \tilde{L}'_1 \}$ and trivially on all of the $\{ \tilde{L}_1, \tilde{ L}'_2 \}$. From Proposition \ref{cgsat} we have that $$\sigma(n\bar{K}_{ J,2m+1},\chi) = \sum_i \sum_{j=1}^3 \sigma_{\alpha_{i,j}/p}(-J)\ \ \ \ +\ \ \sigma(n\bar{K}_{2m+1},\chi).$$ In the double summation the index $i$ runs over a set with at least $n$ terms. The $\alpha_{i,j}$ are all nonzero modulo $p$. The additivity of Casson--Gordon invariants \cite{gi} applied to the second summation yields $$\sigma(n\bar{K}_{ J,2m+1},\chi) = \sum_i \sum_{j=1}^3 \sigma_{\alpha_{i,j}/p}(-J) + \sum_{i = 1}^n \sigma( \bar{K}_{2m+1},\chi_i).$$ Here the characters $\chi_i$ are unknown, but notice that the values of $\sigma( \bar{K}_{2m+1},\chi_i)$ are taken from some finite set of rational numbers. Suppose that the maximum value of the absolute values of this finite set of numbers is $C$. Then as long as all of the classical signatures $\sigma_{\alpha /p}( J)$ are greater than $C/3$, we have that the above sum cannot equal zero. Finding a knot $J$ with this property is simple and the proof is complete. \end{pf} \noindent{\bf Refinements}\qua A more detailed analysis of $\sigma( \bar{K}_{2m+1},\chi_i)$ can be made to show that in fact these terms are all 0. The idea is that the knots $K_{2m+1}$ are doubly slice, so that the Casson--Gordon invariants must vanish for both eigenspaces. Since we do not need this to provide examples for the main theorem, we do not include the details here. It is worth noting one particular case that this refinement addresses, that of Flapan's original example of a nonslice strongly positive amphicheiral knot in \cite{fl}.
In that case $m = 1$ and $J$ is the trefoil. The 3--fold branched cover has homology ${\bf Z}_7 \oplus {\bf Z}_7$ and we let $p = 7$. The associated Casson--Gordon invariants are then classical 7--signatures of the trefoil. Although not all 7--signatures are positive, all are nonnegative and the sum will necessarily involve some nonzero terms and is hence nontrivial. Hence Flapan's knot is of infinite order in the concordance group. \section{Completion of Theorem \ref{mainthm}} \noindent{\bf An Infinite Family of Linearly Independent Examples } To prove Theorem \ref{mainthm} we need to find an infinite family of knots such as those constructed in the previous section, all of which are independent in concordance. The argument is fairly simple, similar to the one by Jiang \cite{ji} showing that the concordance group of algebraically slice knots contains an infinitely generated free subgroup. Using the results of the appendix, one can find an infinite sequence of integers, $\{m_i\}$, with the property that each $(m_i+1)^3 - m_i^3$ is divisible by a prime $p_i$ with exponent 1 and no $p_j$ divides $(m_i+1)^3 - m_i^3$ if $i \ne j$. Next construct the $\bar{K}_{J_i,2m_i +1}$ as in the previous section. We observe that the $\bar{K}_{J_i,2m_i +1}$ form the desired set, as follows. Suppose that some linear combination of these was slice. Let $\bar{K}_{J_n,2m_n +1}$ be one of the knots in that linear combination. Apply the Casson--Gordon result, Proposition \ref{cgsat}, using characters on the 3--fold cover to ${\bf Z}_{p_n}$. Using additivity, the value of the Casson--Gordon invariants is reduced to that on the 3--fold cover of the multiple of $\bar{K}_{J_n,2m_n +1}$. (The character vanishes on the covers of the other summands by the choice of the $p_i$ so Proposition \ref{trivialrep} applies.) The calculations of the previous section show that some such Casson--Gordon invariants do not vanish.
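The number-theoretic input to this construction can be made concrete. The following sketch (my own illustration; the function names are mine, not from the paper) greedily searches for integers $m_i$ and primes $p_i$ such that $p_i$ divides $F(m_i) = (m_i+1)^3 - m_i^3$ with exponent exactly 1 and $p_j$ does not divide $F(m_i)$ for $j \ne i$:

```python
# Illustration (not from the paper): greedily build m_1, m_2, ... and primes
# p_1, p_2, ... with p_i dividing F(m_i) = (m_i+1)^3 - m_i^3 to exponent
# exactly 1 and p_j not dividing F(m_i) whenever i != j.

def F(m):
    return 3 * m * m + 3 * m + 1          # (m+1)^3 - m^3

def prime_factorization(x):
    fac, d = {}, 2
    while d * d <= x:
        while x % d == 0:
            fac[d] = fac.get(d, 0) + 1
            x //= d
        d += 1
    if x > 1:
        fac[x] = fac.get(x, 0) + 1
    return fac

def greedy_sequence(count):
    ms, ps = [], []
    m = 1
    while len(ms) < count:
        if all(F(m) % p for p in ps):     # no earlier prime divides F(m)
            fac = prime_factorization(F(m))
            cand = [p for p, e in fac.items()
                    if e == 1 and all(F(mj) % p for mj in ms)]
            if cand:
                ms.append(m)
                ps.append(min(cand))
        m += 1
    return ms, ps
```

For instance, `greedy_sequence(5)` begins with $m_i = 1, 2, 3, 4, 6$ and $p_i = 7, 19, 37, 61, 127$ (the value $m = 5$ is skipped since $7 \mid F(5) = 91$); the first pair recovers Flapan's case $m = 1$, $p = 7$.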
\noindent{\bf Primeness} The proof of Theorem \ref{mainthm} is now completed by showing that each of the $\bar{K}_{J_i,2m_i +1}$ is concordant to a prime knot. In \cite{fl} Flapan showed that one particular example of a $\bar{K}_{J,2m +1}$ is concordant to a prime knot that is strongly positive amphicheiral. Her proof applies in the present setting. This completes the proof. \appendix \section{Appendix: Prime divisors of $(m+1)^3 - m^3$} \begin{thm} The set of primes that divide $F(m) = (m+1)^3 - m^3 = 3m^2 + 3m +1$ for some $m$ is infinite. \end{thm} \begin{pf} The proof is reminiscent of Euclid's proof of the infinitude of primes. Suppose that the set of such primes is finite, $\{ {p_i} \}_{i = 1, \ldots , n}$. Let $N = \prod_{i= 1, \ldots , n}p_i$. Consider $F(N)$. Clearly none of the $p_i$ divide $F(N) = 3N^2 +3N + 1$, so there is a prime factor of $F(N)$ that is not among the $p_i$, contradicting the definition of $\{ {p_i} \}$. \end{pf} \begin{thm} The set of primes $p$ for which $p$ is a prime divisor of $F(m)$ with exponent one for some $m$ is infinite. \end{thm} \begin{pf} We prove that, in fact, if a prime $p$ divides $F(m)$ for some $m$, then it divides $F(m)$ or $F(m + p)$ with exponent 1. If $p$ has exponent 1 in $F(m)$ we are done, so suppose that $p$ has exponent greater than 1 in $F(m)$. Use Taylor's theorem to find $F(m +p) = F(m) + pF'(m) + p^2F''(m)/2$. ($F$ is quadratic.) The first and last terms are divisible by $p^2$, so $p$ has exponent 1 in $F(m + p)$ unless $p$ divides $F'(m)$. But if $F(m)$ and $F'(m)$ had a common root modulo $p$ then $F(m)$ would have a multiple root modulo $p$, and this can occur if and only if the discriminant of this quadratic vanishes modulo $p$. In this case the discriminant is $-3$, so a multiple root does not occur unless $p=3$. (As an alternative, to see that $F(m)$ and $F'(m)$ do not have a common root just note that a simple calculation shows that $4F(m) - (2m+1)F'(m) = 1$ for all $m$, so that $p$ cannot divide both $F(m)$ and $F'(m)$.)
\end{pf} Although we don't need a particular description of the primes that occur, such a calculation is possible. \begin{thm} A prime $p$ occurs as a divisor of $F(m)$ with exponent 1 for some $m$ if and only if $p$ is congruent to 1 mod 3. \end{thm} \begin{pf} By the previous theorem we need not concern ourselves with the exponent of $p$. Hence, we want the conditions for $3m^2 +3m +1$ to have a root modulo $p$. But, using the quadratic formula we quickly see that this will occur if and only if there is a square root of $-3$ in ${\bf Z}_p$. Using the language of quadratic symbols \cite{la2}, we are asking for which $p$ we have $\left( {-3 \over p} \right) = 1$. From the properties of the quadratic symbol we have $$\left( {-3 \over p} \right) = \left( {-1 \over p} \right) \left( {3 \over p} \right) = (-1)^{{p-1 \over 2}} \left( {3 \over p} \right) .$$ Applying quadratic reciprocity (again, see \cite{la2}) to the second term gives $$(-1)^{{p-1 \over 2}} \left( {3 \over p} \right) = (-1)^{{p-1 \over 2}} \left( {p \over 3} \right) (-1)^{({p-1 \over 2})({3-1 \over 2})} = \left( {p \over 3} \right).$$ This last term is 1 if and only if $p$ is a square modulo 3, which occurs if and only if $p$ is congruent to 1 modulo 3. \end{pf} \makeatletter\@thebibliography@{CG1}\small\parskip0pt plus2pt\relax\makeatother \bibitem[CG1]{cg1} {\bf A\, Casson}, {\bf C\, Gordon}, {\it Cobordism of classical knots}, in {\it A la recherche de la Topologie perdue}, ed. by Guillou and Marin, Progress in Mathematics, Volume 62, 1986, pp.181--199. (Originally published as Orsay Preprint, 1975.) \bibitem[CM]{cm} {\bf D\, Coray}, {\bf M\, Daniel}, {\it Knot cobordism and amphicheirality}, Comment. Math. Helv. 58(4) (1983) 601--616. \bibitem[F]{fl} {\bf E\, Flapan}, {\it A prime strongly positive amphicheiral knot which is not slice}, Math. Proc. Cambridge Philos. Soc. 100(3) (1986) 533--537. \bibitem[G2]{gi} {\bf P\, Gilmer}, {\it Slice knots in $S\sp{3}$}, Quart. J. Math. Oxford Ser.
(2) 34(135) (1983) 305--322. \bibitem[GL1]{gl1} {\bf P\, Gilmer}, {\bf C\, Livingston}, {\it The Casson--Gordon invariant and link concordance}, Topology 31(3) (1992) 475--492. \bibitem[G]{go1} {\bf C\, McA\, Gordon}, {\it Problems in Knot Theory}, in {\it Knot Theory}, ed. J. Hausmann, Springer Lecture Notes no. 685, 1977. \bibitem[HK]{hk} {\bf R\, Hartley}, {\bf A\, Kawauchi}, {\it Polynomials of amphicheiral knots}, Math. Ann. 243(1) (1979) 63--70. \bibitem[J]{ji} {\bf B\, Jiang}, {\it A simple proof that the concordance group of algebraically slice knots is infinitely generated}, Proc. Amer. Math. Soc. 83 (1981) 189--192. \bibitem[La]{la2} {\bf S\, Lang}, {\it Algebraic Number Theory}, Addison-Wesley Publishing Co., Reading, Mass., 1968. \bibitem[Lt]{lit2} {\bf R\, Litherland}, {\it Cobordism of satellite knots}, in {\it Four--Manifold Theory}, Contemporary Mathematics, eds. C. Gordon and R. Kirby, American Mathematical Society, Providence RI 1984, 327--362. \bibitem[L1]{li1} {\bf C\, Livingston}, {\it Knots which are not concordant to their reverses}, Oxford Q. J. of Math. 34 (1983), 323--328. \bibitem[L2]{li2} {\bf C\, Livingston}, {\it Examples in concordance}, preprint at http://front.\-math.\-uc\-davis.\-edu/math.GT/0101035. \bibitem[Lo]{lo} {\bf D\, D\, Long}, {\it Strongly plus-amphicheiral knots are algebraically slice}, Math. Proc. Cambridge Philos. Soc. 95(2) (1984) 309--312. \bibitem[Ro]{rol} {\bf D\, Rolfsen}, {\it Knots and Links}, Publish or Perish, Berkeley CA (1976). \endthebibliography \Addressesr \end{document}
\begin{document} \baselineskip17pt \title[Competition]{Does a population with the highest turnover coefficient win competition?} \author[R. Rudnicki]{Ryszard Rudnicki} \address{R. Rudnicki, Institute of Mathematics, Polish Academy of Sciences, Bankowa 14, 40-007 Katowice, Poland.} \email{[email protected]} \keywords{nonlinear Leslie model, competitive exclusion, periodic cycle, population dynamics} \subjclass[2010]{Primary: 92D25; Secondary: 37N25, 92D40} \begin{abstract} We consider a discrete time competition model. Populations compete for common limited resources but they have different fertility and mortality rates. We compare dynamical properties of this model with its continuous counterpart. We give sufficient conditions for competitive exclusion and the existence of periodic solutions related to the classical logistic, Beverton-Holt and Ricker models. \end{abstract} \maketitle \section{Introduction} \label{s:int} It is well known that if species compete for common limited resources, then usually they cannot coexist in the long term. This law was introduced by Gause \cite{Gause} and it is called \textit{the principle of competitive exclusion}. There are a lot of papers where the problem of competitive exclusion or coexistence is discussed. Most of them are described by continuous time models, but there is also a number of discrete time models devoted to this subject (see \cite{AS,CLCH} and the references given there). If we have continuous and discrete time versions of a similar competition model, it is interesting to compare the properties of both versions of the model, especially to check if they are dynamically consistent, i.e., if they possess the same dynamical properties, such as stability or chaos. In this paper we consider a discrete time competition model with overlapping generations. We prove a sufficient condition for competitive exclusion and compare it with its continuous counterpart. The model considered here is the following.
A population consists of $k$ different individual strategies. The state of the population at time $t$ is given by the vector $\mathbf x(t)=[x_{1}(t),\dots ,x_{k}(t)]$, where $x_{i}(t)$ is the size of the subpopulation with phenotype $i$. Individuals with different phenotypes do not mate and each phenotype $i$ is characterized by per capita reproduction $b_i$ and mortality $d_i$. We assume that the juvenile survival rate depends on the state $\mathbf x$ and it is given by a function $f\colon \mathbb R^k_+ \to [0,1]$. Therefore, $f$ describes the suppression of growth driven, for example, by competition for food or free nest sites for newborns. The time evolution of the state of the population is given by the system \begin{equation} \label{d-m} x_i(t+1)=x_i(t)-d_ix_i(t)+ b_i x_i(t)f(\mathbf x(t)). \end{equation} We assume that $0<d_i\le 1$, $d_i<b_i$ for $i=1,\dots,k$ and $f(\mathbf x)>0$ for $\mathbf x\ne 0$. The model is similar in spirit to that presented in \cite{AB} (a continuous version) and in \cite{AR} (a discrete version) but in those papers $f$ has a special form strictly connected with competition for free nest sites for newborns. A simplified Leslie/Gower model \cite{AlSR} is also of the form~(\ref{d-m}). The suppression function $f$ can be quite arbitrary. Usually, it is of the form $f(\mathbf x)=\varphi (w_1x_1+\dots+w_kx_k)$, where $\varphi$ is a decreasing function and $w_1,\dots,w_k$ are positive numbers, but e.g. in \cite{AB} it is of the form \[ f(\mathbf x)= \begin{cases} 1,&\textrm{if $\,(1+b_1)x_1+\dots+ (1+b_k)x_k\le K$},\\ \dfrac{K-x_1-\dots-x_k}{b_1x_1+\dots+b_kx_k}, &\textrm{if $\,(1+b_1)x_1+\dots +(1+b_k)x_k> K$}. \end{cases} \] Now we present some motivation for studying model (\ref{d-m}). We begin with a continuous time version of it. The time evolution of the state of the population is described by the system \begin{equation} \label{c-m} x_i'(t)=-d_ix_i(t)+ b_i x_i(t)f(\mathbf x(t)), \quad i=1,\dots,k.
\end{equation} We assume that $0<d_i< b_i$, $f$ has values in the interval $[0,1]$, and \begin{equation} \label{b:f} f(\mathbf x) \le \min\bigg\{\frac{d_i}{b_i} \colon \,\, i=1,\dots,k\bigg\} \quad\textrm{if $|\mathbf x|\ge M$}, \end{equation} where $|\mathbf x|=x_1+\dots+x_k$. From the last condition it follows that the total size $|\mathbf x(t)|$ of the population is bounded by $\max(M,|\mathbf x(0)|)$. We also assume that $f$ is sufficiently smooth to guarantee existence and uniqueness of the solutions of (\ref{c-m}); for example, it is enough to assume that $f$ satisfies a local Lipschitz condition. We denote by $L_i=b_i/d_i$ the turnover coefficient for the strategy $i$. We assume that \begin{equation} \label{ineq} L_1>L_2\ge\dots \ge L_k. \end{equation} It is well known that \begin{equation} \label{goto0} \lim_{t\to\infty} x_i(t)=0 \textrm{ \ for $i\ge 2$}. \end{equation} Indeed, from (\ref{c-m}) it follows that \begin{equation} \label{goto0-2} (b_i^{-1}\ln x_i(t))'= -L_i^{-1}+f(\mathbf x(t)). \end{equation} Thus \begin{equation} \label{goto0-3} (b_1^{-1}\ln x_1(t)-b_i^{-1}\ln x_i(t))'= L_i^{-1}-L_1^{-1}. \end{equation} Therefore \begin{equation} \label{goto0-4} \frac{d}{dt} \ln \bigg( \frac{x_1(t)^{b_i}}{x_i(t)^{b_1}}\bigg) = b_1b_i(L_i^{-1}-L_1^{-1})>0 \end{equation} and, consequently, \begin{equation} \label{goto0-5} \lim_{t\to\infty}\frac{x_1(t)^{b_i}}{x_i(t)^{b_1}}=\infty. \end{equation} Since $x_1(t)$ is a bounded function, from (\ref{goto0-5}) it follows that (\ref{goto0}) holds. Now we return to a discrete time version of model (\ref{c-m}). From (\ref{b:f}) it follows immediately that $x_i(t+1)\le x_i(t)$ if $|\mathbf x(t)|\ge M$ and, therefore, the sequence $(x_i(t))$ is bounded. Moreover, since $d_i \le 1$ we have $x_i(t)>0$ if $x_i(0)>0$. It is of interest to know whether (\ref{goto0}) also holds for the discrete model (\ref{d-m}).
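Before analysing the discrete model rigorously, the question can be probed numerically. The following sketch is not part of the paper: the logistic-type suppression and all parameter values are invented for illustration, chosen so that $L_1>L_2$ and the iterates stay bounded.

```python
# Illustrative simulation of the discrete model (d-m) -- not from the paper.
# Two strategies, logistic-type suppression f(x) = max(0, 1 - (x1 + x2)),
# so f(0) = 1 and f decreases with the total population size.
b = [1.0, 0.9]   # birth rates: L1 = b1/d1 = 5 > L2 = b2/d2 = 2.25
d = [0.2, 0.4]   # death rates

def step(x):
    f = max(0.0, 1.0 - sum(x))                      # suppression factor
    return [xi - di * xi + bi * xi * f for xi, bi, di in zip(x, b, d)]

x = [0.1, 0.1]
for _ in range(2000):
    x = step(x)

# The second strategy dies out, while the first settles at the value x*
# solving f(x*) = d1/b1, i.e. x* = 0.8 for these invented parameters.
print(x)
```

With these parameters the experiment is consistent with the competitive exclusion established below for the continuous model.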
Observe that (\ref{d-m}) can be written as \begin{equation} \label{d-m1} \frac{x_i(t+1)-x_i(t)}{b_ix_i(t)}= -L_i^{-1}+ f(\mathbf x(t)). \end{equation} Now (\ref{goto0-3}) takes the form \begin{equation} \label{d-m2} \frac{1}{b_1}D_l\, x_1(t)-\frac{1}{b_i}D_l\, x_i(t) = L_i^{-1}-L_1^{-1}. \end{equation} In the last formula the \textit{logarithmic derivative} $x_i'/x_i$ was replaced by its discrete version \[ D_l\, x_i(t) :=\dfrac{x_i(t+1)-x_i(t)}{x_i(t)}. \] Let $\alpha=b_1/b_i$ and $\beta=b_1(L_i^{-1}-L_1^{-1})=\alpha d_i-d_1$. Then $0<\beta<\alpha$ and (\ref{d-m2}) can be written in the following way \begin{equation} \label{d-m3} D_l\, x_1(t)= \alpha D_l\, x_i(t) +\beta. \end{equation} We want to find a sufficient condition for (\ref{goto0}). In order to do this we formulate the following general question, which can be investigated independently of the above biological models. \begin{problem} \label{prob1} Find parameters $\alpha$ and $\beta$, $0<\beta<\alpha$, such that the following condition holds:\\ (C) \ if $(u_n)$ and $(v_n)$ are arbitrary bounded sequences of positive numbers satisfying \begin{equation} \label{p1} \frac{u_{n+1}-u_n}{u_n}= \alpha \frac{v_{n+1}-v_n}{v_n} +\beta, \end{equation} for $n\in\mathbb{N}$, then $\lim\limits_{n\to\infty} v_n=0$. \end{problem} In the case when the model has the property of competitive exclusion (\ref{goto0}) one can ask if the dynamics of the $k$-dimensional model is the same as that of its restriction to the one-dimensional model. The answer to this question is positive for the continuous version, because the one-dimensional model has very simple dynamics. In Section~\ref{ss:one} we also show that both dynamics are similar if the one-dimensional model has the shadowing property. A more interesting question is what can happen when condition (C) does not hold. One can expect that then subpopulations with different strategies can coexist even if condition (\ref{ineq}) holds. But we do not have a coexistence equilibrium (i.e.
a positive stationary solution of (\ref{d-m})), which makes the problem more difficult. In Section~\ref{ss:periodic} we check that two-dimensional systems with $f$ related to the classical logistic, Beverton-Holt and Ricker models can have periodic solutions even in the case when one-dimensional versions of these models have stationary globally stable solutions and the two-dimensional model has a locally stable boundary equilibrium $(x_1^*,0)$. \section{Competitive exclusion} \label{s:mr} The solution of Problem~\ref{prob1} is formulated in the following theorem. \begin{theorem} \label{th1} If \ $\alpha\le 1+\beta$ then condition (C) is fulfilled. If $\alpha> 1+\beta$ then we can find periodic sequences $(u_n)$ and $(v_n)$ of period two with positive elements which satisfy $(\ref{p1})$. \end{theorem} \begin{lemma} \label{l:c} Consider the function \begin{equation} \label{l-1} g_n (x_1,\dots,x_n)=(\alpha x_1+\gamma)(\alpha x_2+\gamma)\cdots(\alpha x_n+\gamma) \end{equation} defined on the set $S_{n,m}=\{\mathbf x\in \mathbb R^n_+\colon \,x_1x_2\cdots x_n=m\}$, where $\alpha>0$, $\gamma\ge 0$, $m>0$ and $n$ is a positive integer. Then \begin{equation} \label{l-2} g_n (x_1,\dots,x_n)\ge \left(\alpha m^{1/n}+\gamma \right)^n. \end{equation} \begin{proof} The case $\gamma=0$ is obvious, so we assume that $\gamma>0$. We use the standard technique of Lagrange multipliers for constrained extremum problems. Let \[ L(x_1,\dots,x_n)=g_n (x_1,\dots,x_n)+\lambda(m-x_1x_2\cdots x_n). \] Then \[ \frac{\partial L(x_1,\dots,x_n)}{\partial x_i} =\frac{\alpha g_n (x_1,\dots,x_n)}{\alpha x_i+\gamma} -\frac{\lambda x_1x_2\cdots x_n}{x_i}. \] Observe that if $\dfrac{\partial L(x_1,\dots,x_n)}{\partial x_i}=0$ for $i=1,\dots,n$, then $x_1=\dots=x_n=m^{1/n}$. This means that the function $g_n$ has only one conditional critical point, and this point is the global minimum because $g_n(\mathbf x)$ converges to infinity as $\|\mathbf x\|\to \infty$.
\end{proof} \end{lemma} \begin{proof}[Proof of Theorem~\ref{th1}] Equation (\ref{p1}) can be written in the following form \begin{equation} \label{p2} \frac{u_{n+1}}{u_n}= \alpha \frac{v_{n+1}}{v_n} +\gamma, \end{equation} where $\gamma=\beta+1-\alpha$. Consider the case $\alpha\le 1+\beta$. Then $\gamma\ge 0$ and $\alpha+\gamma=\beta+1>1$. We show that if $(v_n)$ is a sequence of positive numbers such that $\limsup\limits_{n\to\infty} v_n>0$ and $(u_n)$ is a sequence of positive numbers such that (\ref{p2}) holds, then the sequence $(u_n)$ is unbounded. Indeed, then we can find an $\overline{m}>0$ and a subsequence $(v_{n_i})$ of $(v_n)$ such that $v_{n_i}\ge \overline m$ for $i\in\mathbb N$. We set $v_0=1$ and $x_i=v_{i}/v_{i-1}$ for $i\in\mathbb N$. Then $v_{n}=x_1\cdots x_n$ and $u_n=u_0g_n(x_1,\dots,x_n)$, where $u_0=u_1/(\alpha v_1+\gamma)$. If $m=x_1\cdots x_{n_i}$, then $m\ge \overline m$ and from Lemma~\ref{l:c} it follows that \[ u_{n_i}=u_0g_{n_i}(x_1,\dots,x_{n_i})\ge u_0\left(\alpha m^{1/n_i}+\gamma \right)^{n_i}\ge u_0\left(\alpha \overline m^{1/n_i}+\gamma \right)^{n_i}. \] Since $\lim\limits_{i\to\infty}\overline m^{1/n_i}=1$ and $\alpha+\gamma>1$ we obtain $\lim\limits_{i\to\infty} u_{n_i}=\infty$, which proves the first part of the theorem. Now we assume that $\alpha> 1+\beta$. Then $\gamma<0$. First we check that there exists $\theta>1$ such that \begin{equation} \label{per1} (\alpha \theta+\gamma)(\alpha \theta^{-1}+\gamma)=1. \end{equation} Equation (\ref{per1}) can be written in the following form \begin{equation} \label{per3} \theta+\theta^{-1}=L, \textrm{ \, where $L=\frac{\alpha^2+\gamma^2-1}{\alpha|\gamma|}$}. \end{equation} Since $\alpha+\gamma=\beta+1>1$ we have $\alpha^2+\gamma^2-1> 2\alpha|\gamma|$, which gives $L>2$ and implies that there exists $\theta>1$ such that (\ref{per1}) holds. 
Now we put $u_{2n-1}=c_1$, $u_{2n}=c_1(\alpha \theta+\gamma)$, $v_{2n-1}=c_2$, $v_{2n}=c_2\theta$ for $n\in\mathbb N$, where $c_1$ and $c_2$ are any positive constants. Then \[ \frac{u_{2n}}{u_{2n-1}}= \alpha \theta+\gamma=\alpha\frac{v_{2n}}{v_{2n-1}}+\gamma, \] and using (\ref{per1}) we obtain \[ \frac{u_{2n+1}}{u_{2n}}= \frac1{\alpha \theta+\gamma} =\alpha \theta^{-1}+\gamma=\alpha\frac{v_{2n+1}}{v_{2n}}+\gamma. \qedhere \] \end{proof} \begin{remark} \label{r:1} We have proved a slightly stronger condition than (C) in the case $\alpha\le 1+\beta$. Namely, if $(u_n)$ is a bounded sequence of positive numbers, $(v_n)$ is a sequence of positive numbers and they satisfy (\ref{p1}), then $\lim\limits_{n\to\infty} v_n=0$. In the proof of condition (C) we have not used the preliminary assumption that $\beta<\alpha$. \end{remark} \section{Applications} \label{s:appl} Now we return to the model given by (\ref{d-m}). We assume that $f\colon \mathbb R^k_+\to [0,1]$ is a continuous function which satisfies (\ref{b:f}). From (\ref{b:f}) it follows that there exists $\overline{M}>0$ such that the set \[ X=\{\mathbf x\in\mathbb R_+^k\colon x_1+\dots+x_k\le \overline M\} \] is invariant under (\ref{d-m}), i.e., if $\mathbf x(0)\in X$ then $\mathbf x(t)\in X$ for $t>0$. We restrict the domain of the model to the set $X$. Let $T\colon X\to X$ be the transformation given by $T_i(\mathbf x)=(1-d_i)x_i+b_if(\mathbf x)x_i$, for $i=1,\dots,k$. \subsection{Persistence} \label{ss:persistence} First we check that if $f(\mathbf 0)=1$ then the population is \textit{persistent}, i.e., $\liminf_{n \to\infty}\|T^n(\mathbf x)\|\ge \varepsilon_1 >0$ for all $\mathbf x\ne \mathbf 0$. This is a standard result from persistence theory but we check it to make the paper self-contained. 
Since $b_i>d_i$ for $i=1,\dots,k$ we find $\varepsilon>0$ and $\delta>0$ such that \begin{equation} \label{ej-f-p} T_i(\mathbf x)\ge (1+\delta)x_i \quad\textrm{for $i=1,\dots,k$ and $\mathbf x \in B(\mathbf 0,\varepsilon)$}, \end{equation} where $B(\mathbf 0,\varepsilon)$ denotes the open ball in $X$ with center $\mathbf 0$ and radius $\varepsilon$. Moreover, since $T(\mathbf x)\ne \mathbf 0$ for $\mathbf x\ne 0$ the closed set $T(X\setminus B(\mathbf 0,\varepsilon))$ is disjoint from some neighbourhood of $\mathbf 0$. Using (\ref{ej-f-p}) we find $\varepsilon_1\in (0,\varepsilon)$ such that for each $\mathbf x \ne \mathbf 0$ there is an integer $n_0(\mathbf x)$ with $T^n(\mathbf x)\notin B(\mathbf 0,\varepsilon_1)$ for $n\ge n_0(\mathbf x)$. \subsection{Convergence to one-dimensional dynamics} \label{ss:one} Now we present some corollaries of Theorem~\ref{th1} concerning the long-time behaviour of the population. The inequality $0<\alpha\le 1+\beta$ can be written in terms of birth and death coefficients as \begin{equation} \label{a:1} b_1(1-d_i)\le b_i(1-d_1)\quad\textrm{for $i=2,\dots,k$.} \end{equation} This means that if (\ref{ineq}) and (\ref{a:1}) hold, then all strategies except the first one become extinct. It suggests that the model should behave asymptotically, as $t\to\infty$, like a one-dimensional model corresponding to a population consisting of only the first strategy. This reduced model is given by the recurrence \begin{equation} \label{d-m-a} y(t+1)=S(y(t)), \end{equation} where $S(y)=y-d_1y+ b_1yf(y,0,\dots,0)$. In order to check that the model given by (\ref{d-m}) has the same asymptotic behaviour as the transformation $S$, we need some auxiliary definitions. A sequence $(y_k)$ is called an $\eta$-\textit{pseudo-orbit} of a transformation $S$ if $|S(y_k)- y_{k+1}|<\eta$ for all $k\ge 1$.
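The pseudo-orbit notion is easy to experiment with numerically. A minimal sketch (illustration only; the map and tolerances are invented, with $S(y)=y(1.8-y)$ being the reduced map $S$ for $b_1=1$, $d_1=0.2$ and a logistic-type $f$):

```python
# Illustration only: eta-pseudo-orbits of the logistic-type map S(y) = y*(1.8 - y),
# i.e. the reduced map S of (d-m-a) for b1 = 1, d1 = 0.2, f(y,0,...,0) = 1 - y.
def S(y):
    return y * (1.8 - y)

def is_pseudo_orbit(ys, eta):
    # (y_k) is an eta-pseudo-orbit of S when |S(y_k) - y_{k+1}| < eta for all k
    return all(abs(S(a) - nxt) < eta for a, nxt in zip(ys, ys[1:]))

orbit = [0.3]
for _ in range(20):
    orbit.append(S(orbit[-1]))          # a true orbit of S

# a small alternating perturbation still gives a pseudo-orbit for moderate eta
noisy = [y + 1e-4 * (-1) ** i for i, y in enumerate(orbit)]
```

A true orbit is an $\eta$-pseudo-orbit for every $\eta>0$, while the perturbed sequence is one only for tolerances above the perturbation size.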
The transformation $S$ is called \textit{shadowing} if for every $\delta>0$ there exists $\eta>0$ such that for each $\eta$-pseudo-orbit $(y_k)$ of $S$ there is a point $y$ such that $|y_k -S^k(y)|<\delta$ for all $k$. \begin{theorem} \label{th-as} Assume that $f(\mathbf 0)=1$ and that conditions $(\ref{ineq})$, $(\ref{a:1})$ hold. Then \[ \lim_{t\to\infty} x_i(t)=0 \ \textrm{ for $\,i=2,\dots,k$}. \] If $S$ is shadowing then for each $\delta>0$ and for each initial point $\mathbf x(0)=(x_1,\dots,x_k)$ with $x_1>0$ there exist $t_0\ge 0$ and a solution $(y(t))$ of $(\ref{d-m-a})$ with $y(t_0)>0$ such that \begin{equation} \label{wn-sh} \big| x_1(t) - y(t) \big| <\delta \textrm{ \ for $t\ge t_0$}. \end{equation} \end{theorem} \begin{proof} Let us fix a $\delta>0$ and let $\eta>0$ be a constant from the shadowing property of $S$. From the uniform continuity of the function $f$ there is an $\varepsilon>0$ such that \begin{equation} \label{a:t1} \overline M b_1\big|f(x_1,\dots,x_k)-f(x_1,0,\dots,0) \big| <\eta \textrm{ if $\,\mathbf x\in X$, $x_2+\dots+x_k<\varepsilon$.} \end{equation} Since all strategies except the first one become extinct, there exists $t_0\ge 0$ such that $x_2(t)+\dots+x_k(t)<\varepsilon$ for $t\ge t_0$. From (\ref{a:t1}) it follows that \[ |x_1(t+1)-S(x_1(t))|<\eta \textrm{ \ for $t\ge t_0$} \] and, consequently, the sequence $x_1(t_0),x_1(t_0+1),\dots$ is an $\eta$-pseudo-orbit. Since $S$ is shadowing we have (\ref{wn-sh}). \end{proof} The shadowing property has been intensively studied over the last thirty years and there are many results concerning the shadowing property for one-dimensional maps (cf. the survey paper by Ombach and Mazur \cite{OmbachMazur}). It is obvious that if $S$ has an asymptotically stable periodic orbit then $S$ is shadowing on the basin of attraction of this orbit. Moreover, for a continuous one-dimensional transformation the convergence of all iterates to a unique fixed point $x$ implies its global stability \cite{Sedeghat}.
Thus, as a simple consequence of Theorem~\ref{th-as} we obtain \begin{corollary} \label{cor-as} Assume that $f(\mathbf 0)=1$ and that conditions $(\ref{ineq})$, $(\ref{a:1})$ hold. If $S$ has a fixed point $x_*>0$ and $\lim\limits_{n\to\infty}S^n(x)=x_*$ for all $x>0$, then for each initial point $\mathbf x(0)=(x_1,\dots,x_k)$ with $x_1>0$, we have $\lim\limits_{t\to\infty}\mathbf x(t)=(x_*,0,\dots,0)$. \end{corollary} Some applications of shadowing to semelparous populations similar to Theorem~\ref{th-as} and Corollary~\ref{cor-as} can be found in \cite{RudnickiWieczorek2010}. The interested reader will also find there some observations concerning chaotic behaviour of such models. In particular, the model given by (\ref{d-m}) can exhibit chaotic behaviour if the suppression function is of the form $f(\mathbf x)=1-x_1-\dots-x_k$, i.e., it is a generalization of the logistic model. \begin{remark}[Dynamical consistency] \label{r:d-c} If we replace $x'_i(t)$ with $(x_i(t+h)-x_i(t))/h$ in (\ref{c-m}) then we get \begin{equation} \label{d-mh} x_i(t+h)=x_i(t)-d_ihx_i(t)+ b_ih x_i(t)f(\mathbf x(t)),\quad i=1,\dots,k. \end{equation} One can ask if this scheme is dynamically consistent with (\ref{c-m}). Observe that inequalities (\ref{ineq}) also hold if we replace $b_i$ with $b_ih$ and $d_i$ with $d_ih$. The difference equation (\ref{d-mh}) is said to be \textit{dynamically consistent} with (\ref{c-m}) if they possess the same dynamical behavior, such as local stability, bifurcations, and chaos \cite{LE}, or more specifically \cite{Mickens,RG} if they have the same given property, e.g. if the competitive exclusion takes place in both discrete and continuous models. The model (\ref{d-mh}) is biologically meaningful only if the death coefficients are $\le 1$, i.e., if \begin{equation} \label{w:h} 0<h\le h_{\max{}}=\min \{d_1^{-1},\dots,d_k^{-1}\}. \end{equation} We assume that $b_i$ and $d_i$ satisfy (\ref{ineq}), i.e., $b_1d_i>b_id_1$ for $i=2,\dots,k$.
Let $b_{i,h}=b_ih$, $d_{i,h}=d_ih$. Now, (\ref{a:1}) applied to $b_{i,h}$ and $d_{i,h}$ gives \begin{equation} \label{a:1-r} b_1-b_i\le (b_1d_i-b_id_1)h\quad\textrm{for $i=2,\dots,k$}. \end{equation} In particular if (\ref{ineq}) holds and $b_i\ge b_1$ for $i=2,\dots,k$ then for all $h$ satisfying (\ref{w:h}) all strategies except the first one become extinct, i.e., the difference equation (\ref{d-mh}) is dynamically consistent with (\ref{c-m}) with respect to this property. We cannot expect ``full'' dynamical consistency if the above conditions hold, because in the case of the logistic map, i.e., if $f(\mathbf x)=1-x_1-x_2$, the stationary point $\mathbf x_1^*=((b_1-d_1)b_1^{-1},0)$ of (\ref{c-m}) is globally stable but in the numerical scheme (\ref{d-mh}) this point loses stability when $b_1h>2+d_1h$. \end{remark} \subsection{Periodic solutions} \label{ss:periodic} Theorem~\ref{th1} can also be useful when we look for periodic oscillations in the model given by (\ref{d-m}). We restrict our investigation to the two-dimensional model. We recall that if $\alpha> 1+\beta$, then the periodic sequences given by $u_{2n-1}=c_1$, $u_{2n}=c_1(\alpha \theta+\gamma)$, $v_{2n-1}=c_2$, $v_{2n}=c_2 \theta$ for $n\in\mathbb N$, satisfy $(\ref{p1})$. Here $c_1$ and $c_2$ are any positive constants, $\theta>1$ is a solution of the equation \[ (\alpha \theta+\gamma)(\alpha \theta^{-1}+\gamma)=1, \] $\alpha=b_1/b_2$, $\beta=\alpha d_2-d_1>0$, and $\gamma=1+\beta-\alpha=\alpha(d_2-1)+(1-d_1)<0$. Under these assumptions we are looking for $c_1, c_2>0$ such that \begin{equation} \left\{ \label{ukl-2'} \begin{aligned} &\theta=1-d_2+b_2f(c_1,c_2), \\ &1=\theta(1-d_2)+b_2 \theta f(c_1(\alpha \theta+\gamma),c_2\theta). \end{aligned} \right. \end{equation} This system is equivalent to \begin{equation} \label{ukl-3'} \left\{ \begin{aligned} &f(c_1,c_2)=\left(\theta+d_2-1\right)b_2^{-1} \\ &f(c_1(\alpha \theta+\gamma),c_2\theta) =\left(\theta^{-1}+d_2-1\right)b_2^{-1}. \end{aligned} \right.
\end{equation} Since $f(\mathbf x)\in (0,1)$ for $\mathbf x\in X\setminus\{\mathbf 0\}$, we have the following necessary condition for the existence of a positive solution of system (\ref{ukl-3'}): \begin{equation} \label{e:wko} \theta<1+b_2-d_2\quad\textrm{and}\quad \theta<(1-d_2)^{-1}. \end{equation} Let $f(\mathbf x)=\varphi(x_1+x_2)$, where $\varphi$ is a strictly decreasing function defined on the interval $[0,K)$, $0<K\le \infty$, such that $\varphi(0)=1$ and $\lim_{x\to K}\varphi(x)=0$. Define $m_1=\left( \theta+d_2-1\right)b_2^{-1}$, $m_2=\left(\theta^{-1}+d_2-1\right)b_2^{-1}$, $p_1=\varphi^{-1}(m_1)$, and $p_2=\varphi^{-1}(m_2)$. If (\ref{e:wko}) holds then the constants $p_1$, $p_2$ are well defined and $0<p_1<p_2$. Thus, we find a positive solution of system (\ref{ukl-3'}) if and only if (\ref{e:wko}) holds and \begin{equation} \label{e:wkw} c_1+c_2=p_1 \quad\textrm{and}\quad c_1(\alpha \theta+\gamma)+c_2\theta=p_2. \end{equation} System (\ref{e:wkw}) has a unique solution \begin{equation} \label{e:wkw2} c_1= \frac{p_2-p_1\theta} {\alpha \theta+\gamma-\theta}, \quad c_2= \frac{p_1(\alpha \theta+\gamma)-p_2} {\alpha \theta+\gamma-\theta}. \end{equation} Since $\alpha>1$, $\theta>1$, and $\beta>0$ we have \[ \alpha \theta+\gamma-\theta=\alpha \theta+1+\beta-\alpha-\theta=(\alpha-1)(\theta-1)+\beta>0. \] Thus system (\ref{ukl-3'}) has a positive solution if and only if (\ref{e:wko}) holds and \begin{equation} \label{e:wkw3} p_1\theta< p_2<p_1(\alpha \theta+\gamma). \end{equation} Now we show how to find parameters $b_1,b_2,d_1,d_2$ such that (\ref{e:wko}) and (\ref{e:wkw3}) hold. Assume that $\beta$ is sufficiently small. Since $\gamma=\beta+1-\alpha$, from (\ref{per3}) it follows \[ \theta+\theta^{-1}=\frac{2\alpha(\alpha-1)-2(\alpha-1)\beta+\beta^2}{\alpha(\alpha-1)-\alpha\beta}= 2+\frac{2\beta}{\alpha(\alpha-1)}+O(\beta^2). \] Let $\theta=1+\varepsilon$.
Then $\varepsilon=\sqrt{2/(\alpha^2-\alpha)}\,\sqrt{\beta}+O(\beta)$ and we can assume that $\varepsilon$ is also sufficiently small. Hence $\theta^{-1}=1-\varepsilon+O(\varepsilon^2)$, $\alpha\theta+\gamma=1+\alpha\varepsilon+O(\varepsilon^2)$, $m_1=(d_2+\varepsilon)b_2^{-1}$, and $m_2=(d_2-\varepsilon)b_2^{-1}+O(\varepsilon^2)$. Assume that $\varphi^{-1}$ is a $C^2$-function in a neighbourhood of the point $\bar x=d_2b_2^{-1}$. Then \begin{equation} \label{e:wkw4} p_1= A+Bb_2^{-1}\varepsilon+O(\varepsilon^2), \quad p_2= A-Bb_2^{-1}\varepsilon+O(\varepsilon^2), \end{equation} where $A=\varphi^{-1}(\bar x)$ and $B=(\varphi^{-1})'(\bar x)=1/\varphi'(A)$. Substituting (\ref{e:wkw4}) into (\ref{e:wkw3}) we rewrite (\ref{e:wkw3}) as \begin{equation} \label{e:wkw5} A+O(\varepsilon)<-2 Bb_2^{-1}<\alpha A+O(\varepsilon). \end{equation} Taking sufficiently small $\beta$, we are also able to check condition (\ref{e:wko}). Thus, if $A<-2 Bb_2^{-1}$, $\beta$ is sufficiently small and $\alpha$ is sufficiently large, both conditions (\ref{e:wko}) and (\ref{e:wkw3}) are fulfilled and a non-trivial periodic solution exists. \begin{example} \label{ex1} We consider the two-dimensional model (\ref{d-m}) related to the logistic map, i.e., with $f(\mathbf x)=\varphi(x_1+x_2)$ and $\varphi(x)=1-x/K$ for $x\in [0,K]$ and $\varphi(x)=0$ for $x>K$. Then $\varphi^{-1}(x)=K(1-x)$, and so $A=K(1-d_2/b_2)$, $B=-K$. From (\ref{e:wkw5}) it follows that if $2b_2/b_1<b_2-d_2<2$, $d_1=b_1d_2/b_2-\beta$ and $\beta$ is sufficiently small, then there exists a periodic solution. Let us consider a special example with the following coefficients $b_1=2.02$, $b_2=0.505$, $d_1=0.0399$, $d_2=0.01$. Then $b_1/d_1>b_2/d_2$, $\alpha= b_1/b_2=4$, $\beta=\alpha d_2-d_1=0.0001$, $\gamma=-2.9999$, and $\theta\approx 1.00408$. Then all conditions hold and a positive periodic solution exists.
If $K=1$ then the periodic sequence is given by $x_1(2n-1)\approx 0.8482$, $x_1(2n)\approx 0.8622$, $x_2(2n-1)\approx 0.1099$, $x_2(2n)\approx 0.1103$ for $n\in\mathbb N$. It is interesting that in this case, one-dimensional models (i.e., with the birth and death coefficients $b_1,d_1$ or $b_2,d_2$) have positive and globally stable fixed points because $b_i<2+d_i$ for $i=1,2$ (see Section~\ref{ss:loc-stab}). Hence the two-dimensional model has a locally stable fixed point $(1-d_1/b_1,0)$ but this point is not globally stable. \end{example} \begin{example} \label{ex2} We consider now the two-dimensional Beverton-Holt model with harvesting, i.e., a model of type (\ref{d-m}) with $f(\mathbf x)=\varphi(x_1+x_2)$ and $\varphi(x)=c/(c+x)$ for $x\in [0,\infty)$, $c>0$. A one-dimensional version of this model always has one positive fixed point and this point is globally asymptotically stable (see Section~\ref{ss:loc-stab}). We have $\varphi^{-1}(x)=c/x-c$, and so $A=c(b_2/d_2-1)$, $B=-cb_2^2/d_2^2$. Inequality (\ref{e:wkw5}) takes the form \[ (b_2-d_2)+O(\varepsilon)<2b_2/d_2<\alpha (b_2-d_2)+O(\varepsilon). \] The first inequality is automatically fulfilled for sufficiently small $\beta$. The second inequality holds if $b_1>\dfrac{2b_2^2}{(b_2-d_2)d_2}$ and $\beta$ is sufficiently small and then a positive periodic solution exists. \end{example} \begin{example} \label{ex3} We consider now the two-dimensional model (\ref{d-m}) related to the Ricker map, i.e., with $f(\mathbf x)=\varphi(x_1+x_2)$ and $\varphi(x)=e^{-cx}$ for $x\in [0,\infty)$. We have $\varphi^{-1}(x)=- c^{-1}\ln x$, $(\varphi^{-1})'(x)=- (cx)^{-1}$ and so $A=c^{-1}\ln(b_2/d_2)$, $B=-b_2/(cd_2)$. Inequality (\ref{e:wkw5}) takes the form \[ \ln(b_2/d_2) +O(\varepsilon)<2/d_2<\alpha \ln(b_2/d_2)+O(\varepsilon). \] Thus if $d_2e^{2/(\alpha d_2)}<b_2<d_2e^{2/d_2}$ and $\beta$ is sufficiently small, then a positive periodic solution exists. 
Now we give an example in which $T$ has a positive periodic point and both one-dimensional models have globally stable fixed points, i.e., $b_r<d_re^{2/d_r}$ holds (see Section~\ref{ss:loc-stab}). Let $b_1=1.0001e^2$, $b_2=b_1/4$, $d_1=0.9999$, $d_2=0.25$. The coefficients $\alpha= b_1/b_2=4$, $\beta=\alpha d_2-d_1=0.0001$, $\gamma=-2.9999$ are the same as in Example~\ref{ex1}. Thus $\theta\approx 1.00408$, $\theta^{-1}\approx 0.99594$, and $\alpha\theta+\gamma=1.01642$. For $c=1$ we have $p_i=-\ln m_i$ for $i=1,2$ and we can check that the periodic sequence is given by $x_1(2n-1)\approx 1.49009$, $x_1(2n)\approx 1.51455$, $x_2(2n-1)\approx 0.000868$, $x_2(2n)\approx 0.000871$ for $n\in\mathbb N$. \end{example} \begin{remark} \label{r:per} We have restricted examples only to $f$ of the form $f(\mathbf x)=\varphi(x_1+x_2)$ with the typical $\varphi$ used in the classic competition models, to show that these models can have no coexistence equilibrium, but they can have a positive periodic solution. Formula (\ref{ukl-3'}) can be used to find periodic solutions of models with other $f$'s. \end{remark} \subsection{Stability of fixed points} \label{ss:loc-stab} In the previous sections we used some results concerning local and global stability of the transformation $T$, and to make our exposition self-contained we add a section concerning this subject. First we look for fixed points of the transformation $T$ and check their local stability. We assume that $f(\mathbf 0)=1$ and \begin{equation} \label{strict} L_1>L_2>\dots>L_k. \end{equation} Let $\mathbf x^*$ be a fixed point of $T$, i.e., $T({\mathbf x}^*)={\mathbf x}^*$. Then $x^*_i=0$ or $f(\mathbf x^*)=d_i/b_i=L_i^{-1}$ for $i=1,\dots,k$. From (\ref{strict}) it follows that $\mathbf x^*$ is a fixed point of $T$ if and only if $\mathbf x^*=\mathbf x^*_0=\mathbf 0$ or $\mathbf x^*=\mathbf x^*_r=(0,\dots,x^*_r,\dots,0)$, where $r\in\{1,\dots,k\}$ and $f(\mathbf x^*_r)=L_r^{-1}$.
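For concrete suppression functions the fixed points $\mathbf x^*_r$ can be written down explicitly. A short numerical check for the logistic case (parameters are illustrative, not from the paper):

```python
# Sketch for the logistic suppression f(x) = 1 - (x1+...+xk)/K, with
# invented parameters satisfying L1 > L2 > L3.  Besides the origin, each
# fixed point x*_r has one nonzero coordinate solving f(x*_r) = d_r/b_r,
# i.e. equal to K*(1 - d_r/b_r).
K = 1.0
b = [2.0, 1.0, 0.5]     # turnover coefficients L = [10, 5, 2.5]
d = [0.2, 0.2, 0.2]

def f(x):
    return 1.0 - sum(x) / K

def T(x):
    return [(1.0 - di) * xi + bi * f(x) * xi for xi, bi, di in zip(x, b, d)]

def fixed_points():
    pts = [[0.0] * len(b)]                 # x*_0 = 0
    for r in range(len(b)):
        p = [0.0] * len(b)
        p[r] = K * (1.0 - d[r] / b[r])     # so that f(p) = d_r / b_r
        pts.append(p)
    return pts

# each candidate is indeed fixed by T (up to rounding)
for p in fixed_points():
    assert all(abs(Ti - pi) < 1e-12 for Ti, pi in zip(T(p), p))
```

This confirms, for one concrete choice of parameters, that $T$ has exactly $k+1$ fixed points of the form described above.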
We assume that the functions $x\mapsto f(0,\dots,x,\dots,0)$ are strictly decreasing. Then $T$ has exactly $k+1$ fixed points $ \mathbf x^*_r$, $r=0,\dots,k$. Let $A_r$ be the matrix with $a_{ij}^r=\dfrac{\partial T_i}{\partial x_j}({\mathbf x_r^*})$. We have \begin{equation} \label{wzpoch} \dfrac{\partial T_i}{\partial x_j}({\mathbf x}) =\delta_{ij}(1-d_i+b_if(\mathbf x))+b_ix_i\dfrac{\partial f}{\partial x_j}(\mathbf x), \end{equation} where $\delta_{ii}=1$ and $\delta_{ij}=0$ if $i\ne j$. Since $f(\mathbf 0)=1$ we obtain $a_{ii}^0=1-d_i+b_i>1$ and $a_{ij}^0=0$ if $i\ne j$, and therefore $\mathbf x_0^*=\mathbf 0$ is a repulsive fixed point. Now we consider a point $\mathbf x_r^*$ with $r>0$. Then $f(\mathbf x_r^*)=d_r/b_r$ and from (\ref{wzpoch}) we obtain \[ a_{ij}^r= \begin{cases} \delta_{ij}(1-d_i+b_id_r/b_r), \quad \textrm{if $i\ne r$},\\ \delta_{ij}+b_rx_r\dfrac{\partial f}{\partial x_j}(\mathbf x_r^*), \quad \textrm{if $i= r$}. \end{cases} \] The matrix $A_r$ has $k$ eigenvalues $\lambda_i=a^r_{ii}$, $i=1,\dots,k$. We have \[ \begin{aligned} \lambda_i&=1-d_i+b_id_r/b_r=1 +b_i(L_r^{-1}-L_i^{-1}),\textrm{ if $i\ne r$},\\ \lambda_r&=1+b_rx_r\dfrac{\partial f}{\partial x_r}(\mathbf x_r^*). \end{aligned} \] Observe that if $r=1$ then $\lambda_i\in (0,1)$ for $i>1$. If we assume that $-2<b_1x_1\dfrac{\partial f}{\partial x_1}(\mathbf x_1^*)<0$, then also $\lambda_1\in (-1,1)$ and the fixed point $\mathbf x_1^*$ is locally asymptotically stable. If $r>1$ then $\lambda_i>1$ for $i<r$ and, consequently, the fixed point $\mathbf x_r^*$ is not asymptotically stable. But if $-2<b_rx_r\dfrac{\partial f}{\partial x_r}(\mathbf x_r^*)<0$, the point $\mathbf x_r^*$ is locally \textit{semi-stable}, i.e., it is stable for the transformation $T$ restricted to the set \[ X_r=\{{\mathbf x}\in X \colon \,x_1=\dots=x_{r-1}=0\}.
\] In the case of the logistic map $f(\mathbf x)=1-(x_1+\dots +x_k)/K$ we have $x_r=K(1-d_r/b_r)$, $\dfrac{\partial f}{\partial x_r}(\mathbf x_r^*)=-1/K$, and conditions for stability (or semi-stability) of $\mathbf x_r^*$ reduce to $b_r<2+d_r$. If the positive fixed point $x^*$ of a one-dimensional logistic map is locally asymptotically stable then this point is globally stable, i.e., $T^n(x)\to x^*$, for $x\in (0,K)$. Thus, Example~\ref{ex1} shows that the behaviour of a $k$-dimensional logistic map and its one-dimensional restrictions can be different. It can have a locally stable fixed point $\mathbf x_1^*$ but this point need not be globally asymptotically stable. Consider a model with the Beverton-Holt birth rate \[ f(\mathbf x)=\dfrac{c}{c+x_1+\dots +x_k}. \] Then we have $x_r=c\left(\dfrac{b_r}{d_r}-1\right)$, $\dfrac{\partial f}{\partial x_r}(\mathbf x_r^*)=-\dfrac{c}{(c+x_r)^2}$. Conditions for stability (or semi-stability) of $x_r$ reduce to the inequality $b_rx_rc< 2(c+x_r)^2$ or, equivalently, to $2b_r^2/d_r^2>b_r^2/d_r-b_r$, which always holds because $0< d_r\le 1$. The positive fixed point $x^*$ of the one-dimensional map $T$ is globally stable because $x<T(x)<x^*$ for $x\in (0,x^*)$ and $T(x)<x$ for $x>x^*$. Example~\ref{ex2} shows that a two-dimensional map can have a locally stable fixed point $\mathbf x_1^*$ but this point need not be globally asymptotically stable. In the case of the Ricker map $f(\mathbf x)=e^{-c(x_1+\dots +x_k)}$ we have $x_r=\dfrac 1c\ln\dfrac{b_r}{d_r}$, $\dfrac{\partial f}{\partial x_r}(\mathbf x_r^*)=-c\dfrac{d_r}{b_r}$, and conditions for stability (or semi-stability) of $x_r$ reduce to $cd_rx_r<2$, which takes place when $b_r<d_re^{2/d_r}$. The last inequality is also sufficient for global stability of the fixed point (see e.g. \cite[Th.\ 9.16]{Thieme}). \section{Conclusion} In this paper we consider a discrete time strong competition model.
While in its continuous counterpart a population having the maximal turnover coefficient drives the others to extinction, the discrete-time model need not have this property. We give sufficient conditions for competitive exclusion in a discrete model. Although this model does not have a coexistence equilibrium, it can have a positive periodic solution. It is interesting that this periodic solution can exist even when the restrictions of the model to the one-dimensional cases have globally stable stationary solutions. Theorem~\ref{th1} can also be applied to models in which the suppression function $f$ depends on other factors; for example, the suppression function $f$ can include resource density. It would be interesting to generalize Theorem~\ref{th1} to models with weaker competition, i.e., when the suppression function is not identical for all subpopulations, or to discrete-continuous hybrid models \cite{GHL,ML} or to equations on time scales \cite{BP}. \section*{Acknowledgments} The author is grateful to Dr. Magdalena Nockowska for several helpful discussions while this work was in progress. This research was partially supported by the National Science Centre (Poland) Grant No. 2014/13/B/ST1/00224. \end{document}
\begin{document} \newtheorem{lem}{Lemma}[section] \newtheorem{pro}[lem]{Proposition} \newtheorem{thm}[lem]{Theorem} \newtheorem{rem}[lem]{Remark} \newtheorem{cor}[lem]{Corollary} \newtheorem{df}[lem]{Definition} \title[The Toda System on Compact Surfaces] {A Variational Analysis of the Toda System on Compact Surfaces} \author{Andrea Malchiodi and David Ruiz} \address{SISSA, via Bonomea 265, 34136 Trieste (Italy) and Departamento de An\'alisis Matem\'atico, University of Granada, 18071 Granada (Spain).} \thanks{A. M. has been partially supported by GENIL (SPR) for a stay in Granada in 2011, and is supported by the FIRB project {\em Analysis and Beyond} from MIUR. D.R has been supported by the Spanish Ministry of Science and Innovation under Grant MTM2008-00988 and by J. Andalucia (FQM 116).} \email{[email protected], [email protected]} \keywords{Geometric PDEs, Variational Methods, Min-max Schemes.} \subjclass[2000]{35J50, 35J61, 35R01.} \begin{abstract} In this paper we consider the following {\em Toda system} of equations on a compact surface: $$ \left\{ \begin{array}{ll} - \Delta u_1 = 2 \rho_1 \left( \frac{h_1 e^{u_1}}{\int_\Sg h_1 e^{u_1} dV_g} - 1 \right) - \rho_2 \left( \frac{h_2 e^{u_2}}{\int_\Sg h_2 e^{u_2} dV_g} - 1 \right), \\ - \Delta u_2 = 2 \rho_2 \left( \frac{h_2 e^{u_2}}{\int_\Sg h_2 e^{u_2} dV_g} - 1 \right) - \rho_1 \left( \frac{h_1 e^{u_1}}{\int_\Sg h_1 e^{u_1} dV_g} - 1 \right). & \end{array} \right.$$ We will give existence results by using variational methods in a non-coercive case. A key tool in our analysis is a new Moser-Trudinger type inequality under suitable conditions on the center of mass and the scale of concentration of the two components $u_1, u_2$. \end{abstract} \maketitle \section{Introduction} Let $\Sg$ be a compact orientable surface without boundary, and $g$ a Riemannian metric on $\Sg$.
Consider the following system of equations: \begin{equation}\label{eq:toda} - \frac 1 2 \Delta u_i(x) = \sum_{j=1}^{N} a_{ij} e^{u_j(x)}, \qquad x \in \Sg, \ i = 1, \dots, N, \end{equation} where $\D=\D_g$ stands for the Laplace-Beltrami operator and $A = (a_{ij})_{ij}$ is the {\em Cartan matrix} of $SU(N+1)$, $$ A = \left( \begin{array}{cccccc} 2 & -1 & 0 & \dots & \dots & 0 \\ -1 & 2 & -1 & 0 & \dots & 0 \\ 0 & -1 & 2 & -1 & \dots & 0 \\ \dots & \dots & \dots & \dots & \dots & \dots \\ 0 & \dots & \dots & -1 & 2 & -1 \\ 0 & \dots & \dots & 0 & -1 & 2 \\ \end{array} \right). $$ Equation \eqref{eq:toda} is known as the {\em Toda system}, and has been extensively studied in the literature. This problem has a close relationship with geometry, since it arises in the description, via Frenet frames, of holomorphic curves in $\mathbb{CP}^N$ (see \cite{guest}). Moreover, it arises in the study of the non-abelian Chern-Simons theory in the self-dual case, when a scalar Higgs field is coupled to a gauge potential, see \cite{dunne, tar, yys}. Let us assume, for the sake of simplicity, that $\Sg$ has total area equal to $1$, i.e. $\int_{\Sg} 1 \, dV_g=1$. In this paper we study the following version of the Toda system for $N=2$: \begin{equation}\label{eq:gtodaaux} \left\{ \begin{array}{l} - \Delta u_1 = 2 \rho_1 \left( h_1 e^{u_1} - 1 \right) - \rho_2 \left( h_2 e^{u_2} - 1 \right), \\ - \Delta u_2 = 2 \rho_2 \left(h_2 e^{u_2} - 1 \right) - \rho_1 \left( h_1 e^{u_1} - 1 \right), \end{array} \right. \end{equation} where $h_i$ are smooth and strictly positive functions defined on $\Sg$. By integrating both equations on $\Sg$, we obtain that any solution $(u_1,u_2)$ of \eqref{eq:gtodaaux} satisfies: $$ \int_{\Sg} h_i e^{u_i} \, dV_g =1, \qquad i=1,\ 2.
$$ Hence, problem \eqref{eq:gtodaaux} is equivalent to: \begin{equation}\label{eq:gtoda} \left\{ \begin{array}{ll} - \Delta u_1 = 2 \rho_1 \left( \frac{h_1 e^{u_1}}{\int_\Sg h_1 e^{u_1} dV_g} - 1 \right) - \rho_2 \left( \frac{h_2 e^{u_2}}{\int_\Sg h_2 e^{u_2} dV_g} - 1 \right), \\ - \Delta u_2 = 2 \rho_2 \left( \frac{h_2 e^{u_2}}{\int_\Sg h_2 e^{u_2} dV_g} - 1 \right) - \rho_1 \left( \frac{h_1 e^{u_1}}{\int_\Sg h_1 e^{u_1} dV_g} - 1 \right). & \end{array} \right. \end{equation} Problem \eqref{eq:gtoda} is variational, and solutions can be found as critical points of a functional $J_\rho : H^1(\Sg) \times H^1(\Sg) \to \mathbb{R}$ ($\rho=(\rho_1,\rho_2)$) given by \begin{equation} \label{funzionale} J_\rho(u_1, u_2) = \int_\Sigma Q(u_1,u_2)\, dV_g + \sum_{i=1}^2 \rho_i \left ( \int_\Sigma u_i dV_g - \log \int_\Sigma h_i e^{u_i} dV_g \right ), \end{equation} where $Q(u_1,u_2)$ is defined as: \begin{equation}\label{eq:QQ} Q(u_1,u_2) = \frac{1}{3} \left ( |\nabla u_1|^2 + |\nabla u_2|^2 + \nabla u_1 \cdot \nabla u_2\right ). \end{equation} Here and throughout the paper $\nabla u= \n_g u$ stands for the gradient of $u$ with respect to the metric $g$, whereas $\cdot$ denotes the Riemannian scalar product. Observe that both \eqref{eq:gtoda} and \eqref{funzionale} are invariant under addition of constants to $u_1$, $u_2$. The structure of the functional $J_{\rho}$ strongly depends on the parameters $\rho_1$, $\rho_2$. To start with, the following analogue of the Moser-Trudinger inequality has been given in \cite{jw}: \begin{equation} \label{mtjw} 4\pi \sum_{i=1}^2 \left (\log \int_\Sigma h_i e^{u_i} dV_g - \int_\Sigma u_i dV_g \right ) \leq \int_\Sigma Q(u_1,u_2)\, dV_g +C, \end{equation} for some $C=C(\Sg)$. As a consequence, $J_{\rho}$ is bounded from below for $\rho_i \leq 4 \pi$ (see also \cite{sw, sw2, wang} for related inequalities).
In particular, if $\rho_i < 4 \pi$ ($i=1,2$), $J_{\rho}$ is coercive and a solution for \eqref{eq:gtoda} can be easily found as a minimizer. If $\rho_i>4\pi$ for some $i=1,\ 2$, then $J_{\rho}$ is unbounded from below and minimization is no longer possible. Let us point out that the Leray-Schauder degree associated to \eqref{eq:gtoda} is not known yet. For the scalar case, the Leray-Schauder degree has been computed in \cite{clin}. The only result on the topological degree for Liouville systems is \cite{lin-zhang}, but our case is not covered there. In this paper we use variational methods to obtain existence of critical points (generally of saddle type) for $J_{\rho}$. Before stating our results, let us comment briefly on some aspects of the problem under consideration. When one of the parameters $\rho_i$ equals $4 \pi$, the situation becomes more subtle. For instance, if we fix $\rho_1 < 4 \pi$ and let $\rho_2 \nearrow 4 \pi$, then $u_2$ could exhibit a blow-up behavior (see the proof of Theorem 1.1 in \cite{jlw}). In this case, $u_2$ would become close to a function $U_{\lambda, x}$ defined as: $$ U_{\lambda, x}(y)= \log \left( \frac{4 \l}{\left(1 + \lambda \, d(x,y)^2\right)^2} \right), $$ where $y \in \Sg$, $d(x,y)$ stands for the geodesic distance and $\lambda$ is a large parameter. The functions $U_{\l, x}$ are the unique entire solutions of the Liouville equation (see \cite{cl2}): $$ - \Delta U= 2 e^{U}, \qquad \int_{\mathbb{R}^2} e^{U} \, dx < +\infty.$$ In \cite{jlw} and \cite{ll} some conditions for existence are given when one of the $\rho_i$'s equals $4 \pi$. The proofs involve a delicate analysis of the limit behavior of the solutions as $\rho_i$ converges to $4\pi$ from below, in order to avoid bubbling of solutions. For that, some conditions on the functions $h_i$ are needed.
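For the reader's convenience, the Liouville equation above can be checked by a direct computation in the model case $x=0$, $d(x,y)=|y|$ in $\mathbb{R}^2$ (a standard verification, sketched here): writing $U_\lambda(y)=\log\frac{4\lambda}{(1+\lambda|y|^2)^2}$, one has

```latex
\[
\nabla U_\lambda(y) = -\frac{4\lambda\, y}{1+\lambda |y|^2},
\qquad
-\Delta U_\lambda(y) = \frac{8\lambda}{\left(1+\lambda |y|^2\right)^2} = 2\, e^{U_\lambda(y)},
\]
\[
\int_{\mathbb{R}^2} e^{U_\lambda}\, dy
 = \int_0^{+\infty} \frac{4\lambda}{\left(1+\lambda r^2\right)^2}\, 2\pi r\, dr
 = 4\pi \left[ -\frac{1}{1+\lambda r^2} \right]_0^{+\infty}
 = 4\pi < +\infty.
\]
```

In particular each $U_{\lambda,x}$ carries the same total mass $4\pi$, independently of $\lambda$.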
The scalar counterpart of \eqref{eq:gtoda} is a Liouville-type problem in the form: \begin{equation} \label{scalar} - \Delta u = 2 \rho \left( \frac{h(x) e^{u}}{\int_\Sigma h(x) e^{u} d V_g} - 1 \right),\end{equation} with $\rho \in \mathbb{R}$. This equation has been studied extensively in the literature; there are by now many results regarding existence, compactness of solutions, bubbling behavior, etc. We refer the interested reader to the reviews \cite{mreview, tar3}. Solutions of \eqref{scalar} correspond to critical points of the functional $I_{\rho}:H^1(\Sg) \to \mathbb{R}$, \begin{equation}\label{scalar2} I_\rho(u) = \frac 1 2 \int_{\Sigma} |\n_g u|^2 dV_g + 2 \rho \left ( \int_\Sigma u dV_g - \log \int_\Sigma h(x) e^{u} dV_g \right ), \qquad u \in H^{1}(\Sigma). \end{equation} The classical Moser-Trudinger inequality implies that $I_{\rho}$ is bounded from below for $\rho \leq 4 \pi$. For larger values of $\rho$, variational methods were applied to \eqref{scalar} for the first time in \cite{djlw}, \cite{st}. In \cite{dm} the $Q$-curvature prescription problem is addressed on a 4-dimensional compact manifold; however, the arguments of the proof can be easily translated to the Liouville problem \eqref{scalar}, see \cite{dja}. Let us briefly describe the proof of \cite{dm} in the case $\rho \in (4 \pi , 8 \pi)$, for simplicity. In \cite{dm} it is shown that, whenever $I_{\rho}(u_n) \to -\infty$, then (up to a subsequence) $$ \frac{e^{u_n}}{\int_{\Sg} e^{u_n}\, dV_g} \rightharpoonup \delta_{x}, \ x \in \Sg, $$ in the sense of measures. Moreover, for $L>0$ sufficiently large, one can define a homotopy equivalence (see also \cite{mal}): $$I_{\rho}^{-L}= \{u\in H^1(\Sg):\ I_{\rho}(u)< -L \} \simeq \{ \delta_x:\ x \in \Sg\} \simeq \Sg.$$ Therefore the sublevel $I_{\rho}^{-L}$ is not contractible, and this allows us to use a min-max argument to find a solution.
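The role of the threshold $4\pi$ in this scheme can be seen heuristically by testing $I_\rho$ on the bubbles $U_{\lambda,x}$ introduced above; the following asymptotics are only a standard sketch (precise versions can be found in the cited references): as $\lambda \to +\infty$,

```latex
\[
\frac 1 2 \int_{\Sigma} |\nabla U_{\lambda,x}|^2\, dV_g = 8\pi \log\lambda + O(1),
\qquad
\int_{\Sigma} U_{\lambda,x}\, dV_g = -\log\lambda + O(1),
\qquad
\log \int_{\Sigma} h\, e^{U_{\lambda,x}}\, dV_g = O(1),
\]
so that
\[
I_\rho(U_{\lambda,x}) = \left(8\pi - 2\rho\right)\log\lambda + O(1),
\]
```

which tends to $-\infty$ along this family precisely when $\rho > 4\pi$, consistently with the boundedness from below for $\rho \leq 4\pi$.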
We point out that \cite{dm} also deals with the case of higher values of $\rho$, whenever $\rho \notin 4 \pi \mathbb{N}$. Coming back to system \eqref{eq:gtoda}, there are very few results when $\rho_i>4\pi$ for some $i=1$, $2$. One of them is given in \cite{cheikh} and concerns the case $\rho_1<4\pi$ and $\rho_2 \in (4\pi m, 4 \pi (m+1))$, $m \in \mathbb{N}$. There, the situation is similar to \cite{dm}; in a certain sense, one can describe the set $J_{\rho}^{-L}$ from the behavior of the second component $u_2$, as in \cite{dm}. In Theorem 1.4 of \cite{jlw}, an existence result is stated for $\rho_i \in (0,4 \pi) \cup (4 \pi, 8 \pi)$ for a compact surface $\Sg$ with positive genus; however, the min-max argument used in the proof seems not to be correct. The main problem is that a one-dimensional linking argument is used to obtain conditions on both components of the system. In any case, the core of \cite{jlw} is the blow-up analysis for the Toda system (see Remark \ref{puffff} for more details). In particular, it is shown there that if the $\rho_i$'s are bounded away from $4 \pi \mathbb{N}$, the set of solutions of \eqref{eq:gtoda} is compact (up to addition of constants). This is an essential tool for our analysis. In this paper we deal with the case $\rho_i \in (4 \pi, 8 \pi)$, $i=1,2$. Our main result is the following: \begin{thm}\label{t:main} Assume that $\rho_i \in (4 \pi, 8 \pi)$ and that $h_1, h_2$ are two positive $C^1$ functions on $\Sg$. Then there exists a solution $(u_1, u_2)$ of \eqref{eq:gtoda}. \end{thm} Let us point out that we find existence of solutions also when $\Sg$ is a sphere. Moreover, our existence result is based on a detailed study of the topological properties of the low sublevels of $J_{\rho}$. This study is interesting in itself; in the scalar case an analogous one has been used to deduce multiplicity results (see \cite{demarchis}) and degree computation formulas (see \cite{mal}).
We shall see that the low sublevels of $J_{\rho}$ contain couples in which at least one component is very concentrated around some point of $\Sg$. Moreover, both components can concentrate at two points that could eventually coincide. However, we shall see that, in a certain sense, \begin{equation} \label{frase} \mbox{\em if } u_1,\ u_2 \ \mbox{\em concentrate around the same point at the same rate, then } J_{\rho} \, \mbox{\em is bounded from below.} \end{equation} To make this statement rigorous, we need several tools. The first is a definition of the rate of concentration of a positive function $f$ on $\Sg$, normalized in $L^1$, which is a refinement of the one given in \cite{mr}; this will be measured by a positive parameter called $\s=\s(f)$. In a sense, the smaller $\s$ is, the higher the rate of concentration of $f$. Compared to the classical concentration compactness arguments, our function $\s$ has the property of being continuous with respect to the $L^1$ topology (see Remark \ref{sigmacont}). Second, we also need to define a continuous center of mass when $\s\leq \delta $ for some fixed $\delta >0$: we will denote it by $\beta=\beta(f) \in \Sg$. When $\s \geq \delta $, the function is not concentrated and the center of mass cannot be defined. Hence, we have a map: $$\psi: H^1(\Sg) \to \overline{\Sg}_{\delta }, \ \psi(u_i)= (\beta(f_i), \s(f_i)),\ \mbox{ where } f_i=\frac{e^{u_i}}{\int_{\Sg} e^{u_i}\, dV_g}.$$ Here $\overline{\Sg}_{\delta }$ is the topological cone with base $\Sg$, where we make the identification to a point when $\sigma \geq \delta$. Third, we need an improvement of the Moser-Trudinger inequality in the following form: if $ \psi(u_1) = \psi(u_2)$, then $J_{\rho}(u_1,u_2)$ is bounded from below. In this sense, \eqref{frase} is made precise.
The proof uses local versions of the Moser-Trudinger inequality and applications of it to small balls (via a convenient dilation) and to annuli with small internal radius (via a Kelvin transform). Roughly speaking, on low sublevels one of the following alternatives holds: \begin{enumerate} \item one component concentrates at a point whereas the other does not concentrate ($\s_i< \delta \leq \s_j$), or \item the two components concentrate at different points ($\s_i < \delta,\ \beta_1 \neq \beta_2$), or \item the two components concentrate at the same point with different rates of concentration ($\s_i< \s_j<\delta$, $\beta_1=\beta_2$). \end{enumerate} With this at hand, for $L > 0$ large we are able to define a continuous map: $$ J_{\rho}^{-L} \quad \stackrel{\psi \oplus \psi}{\longrightarrow} \quad X:=(\overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta }) \setminus \overline{D}, $$ where $\overline{D}$ is the diagonal of $\overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta }$. We can also proceed in the opposite direction: in Section \ref{s:4} we construct a family of test functions modeled on $X$ on which $J_\rho$ attains arbitrarily low values, see Lemma \ref{l:dsmallIlow} for the precise result. Calling $\phi : X \to J_{\rho}^{-L}$ the corresponding map, we will prove that the composition \begin{equation} \label{compo} X \quad \stackrel{\phi}{\longrightarrow} \quad J_{\rho}^{-L} \quad \stackrel{\psi \oplus \psi}{\longrightarrow} \quad X \end{equation} is homotopically equivalent to the identity map. In this situation it is said that $J_{\rho}^{-L}$ {\em dominates} $X$ (see \cite{hat}, page 528). In a certain sense, these maps are natural, since they properly describe the topological properties of $J_{\rho}^{-L}$. We will see that for any compact orientable surface $\Sg$, $X$ is non-contractible; this is proved by estimating its cohomology groups.
As a consequence, $\phi(X)$ is not contractible in $J_{\rho}^{-L}$. This allows us to use a min-max argument to find a critical point of $J_{\rho}$. Here, the compactness of solutions proved in \cite{jlw} is an essential tool, since the Palais-Smale property for $J_{\rho}$ is an open problem (as it is for the scalar case). The rest of the paper is organized as follows. In Section \ref{s:pr} we present the notation that will be used in the paper, as well as some preliminary results. The definition of the map $\psi$, its properties, and the improvement of the Moser-Trudinger inequality will be exposed in Section \ref{s:3}. In Section \ref{s:4} we define the map $\phi$ and prove that the composition \eqref{compo} is homotopic to the identity. Here we also develop the min-max scheme that gives a critical point of $J_{\rho}$. The fact that $X$ is not contractible is proved in a final Appendix. \section{Notations and preliminaries}\label{s:pr} In this section we collect some useful notation and preliminary facts. Throughout the paper, $\Sg$ is a compact orientable surface without boundary; for simplicity, we assume $|\Sg|= \int_{\Sg} 1 \, dV_g =1$. Given $\delta>0$, we define the topological cone: \begin{equation} \label{cono} \overline{\Sg}_{\delta} = \left(\Sigma\times (0, +\infty) \right) \Big/ \left( \Sigma\times [\delta, + \infty) \right).\end{equation} For $x, y \in \Sg$ we denote by $d(x,y)$ the metric distance between $x$ and $y$ on $\Sg$. In the same way, for any $p \in \Sg $, $\Omega, \Omega' \subseteq \Sg$, we denote: $$ d(p, \Omega) = \inf \left\{ d(p,x) \; : x \in \Omega \right\}, \qquad d(\Omega,\Omega') = \inf \left\{ d(x,y) \; : \; x \in \Omega,\ y \in \Omega' \right\}. $$ Moreover, the symbol $B_p(r)$ stands for the open metric ball of radius $r$ and center $p$, and $A_p(r,R)$ for the open annulus of radii $r$ and $R$, $r<R$. The complement of a set $\Omega$ in $\Sg$ will be denoted by $\Omega^c$.
Given a function $u \in L^1(\Sg)$ and $\Omega \subset \Sg$, we consider the average of $u$ on $\Omega$: $$ \fint_{\Omega} u \, dV_g = \frac{1}{|\Omega|} \int_{\Omega} u \, dV_g.$$ We denote by $\overline{u}$ the average of $u$ on $\Sg$: since we are assuming $|\Sg| = 1$, we have $$ \overline{u}= \int_\Sigma u \, dV_g = \fint_\Sigma u\, dV_g. $$ Throughout the paper we will denote by $C$ large constants which are allowed to vary among different formulas or even within lines. When we want to stress the dependence of the constants on some parameter (or parameters), we add subscripts to $C$, as $C_\delta $, etc. Also constants with subscripts are allowed to vary. Moreover, sometimes we will write $o_{\alpha}(1)$ to denote quantities that tend to $0$ as $\alpha \to 0$ or $\alpha \to +\infty$, depending on the case. We will similarly use the symbol $O_\a(1)$ for bounded quantities. \ \noindent We begin by recalling the following compactness result from \cite{jlw}. \begin{thm}\label{th:jlw} (\cite{jlw}) Let $m_1, m_2$ be two non-negative integers, and suppose $\L_1, \L_2$ are two compact subsets of the intervals $(4 \pi m_1, 4 \pi (m_1 + 1))$ and $(4 \pi m_2, 4 \pi (m_2 + 1))$ respectively. Then, if $\rho_1 \in \L_1$ and $\rho_2 \in \L_2$ and if we impose $\int_{\Sg} u_i dV_g = 0$, $i = 1, 2$, the solutions of \eqref{eq:gtoda} stay uniformly bounded in $L^\infty(\Sg)$ (actually in every $C^l(\Sg)$ with $l \in \mathbb{N}$). \end{thm} \noindent Next, we also recall some Moser-Trudinger type inequalities. As commented in the introduction, problem \eqref{eq:gtoda} is the Euler-Lagrange equation of the energy functional $J_{\rho}$ given in \eqref{funzionale}. This functional is bounded from below only for certain values of $\rho_1, \rho_2$, as has been proved by Jost and Wang (see also \eqref{mtjw}): \begin{thm}\label{th:jw} (\cite{jw}) The functional $J_\rho$ is bounded from below if and only if $\rho_i \leq 4 \pi$, $i=1,\ 2$.
\end{thm} \noindent The next proposition can be thought of as a local version of Theorem \ref{th:jw}, and will be of use in Section \ref{s:3}. Let us recall the definition of the quadratic form $Q$ in \eqref{eq:QQ}. \begin{pro}\label{p:MTbd} Fix $\delta > 0$, and let $\Omega_1 \subset \Omega_2 \subset \Sg$ be such that $d(\Omega_1, \partial \Omega_2) \geq \delta $. Then, for any $\varepsilon > 0$ there exists a constant $C = C(\e, \delta)$ such that for all $u_1, u_2 \in H^1(\Sg)$ \begin{equation}\label{eq:ineqSg} 4 \pi \left ( \log \int_{\Omega_1} e^{u_1} dV_g + \log \int_{\Omega_1} e^{u_2} dV_g -\fint_{\Omega_2} u_1 dV_g - \fint_{\Omega_2} u_2 dV_g \right )\leq (1+\e) \int_{\Omega_2} Q(u_1, u_2) dV_g + C. \end{equation} \end{pro} \begin{pf} We can assume without loss of generality that $\fint_{\Omega_2} u_i dV_g = 0$ for $i = 1, 2$. Let us write $$ u_i = v_i + w_i, \quad \int_{\Omega_2} v_i \, dV_g = \int_{\Omega_2} w_i \, dV_g =0, $$ where $v_i \in L^\infty(\Omega_2)$ and $w_i \in H^1(\Omega_2)$ will be fixed later. We have \begin{equation}\label{eq:ddmm2} \log \int_{\Omega_1} e^{u_1} dV_g + \log \int_{\Omega_1} e^{u_2} dV_g \leq \| v_1\|_{L^{\infty}(\Omega_1)} + \| v_2\|_{L^{\infty}(\Omega_1)} + \log \int_{\Omega_1} e^{w_1} dV_g + \log \int_{\Omega_1} e^{w_2} dV_g. \end{equation} We next consider a smooth cutoff function $\chi$ with values into $[0,1]$ satisfying $$ \left\{ \begin{array}{ll} \chi(x) = 1 & \hbox{ for } x \in \Omega_1,\\ \chi(x) = 0 & \hbox{ if } d(x, \Omega_1) > \delta/2, \end{array} \right. $$ and then define $$ \tilde{w}_i(x) = \chi(x) w_i(x); \qquad \quad i = 1, 2. $$ Clearly $\tilde{w}_i$ belongs to $H^1(\Sg)$ and is supported in a compact set of the interior of $\Omega_2$.
Hence we can apply Theorem \ref{th:jw} to $\tilde{w}_i$ on $\Sg$, finding $$ \log \int_{\Omega_1} e^{w_1} dV_g + \log \int_{\Omega_1} e^{w_2} dV_g \leq \log \int_{{\Sg}} e^{\tilde{w}_1} dV_g + \log \int_{{\Sg}} e^{\tilde{w}_2} dV_g \leq $$$$\frac{1}{4\pi} \int_{{\Sg}} Q(\tilde{w}_1,\tilde{w}_2) dV_g + \fint_{\Sg} (\tilde{w}_1 + \tilde{w}_2)\, dV_g + C. $$ Using the Leibniz rule and H\"older's inequality we obtain $$ \int_{{\Sg}} Q(\tilde{w}_1,\tilde{w}_2) dV_g \leq (1+\e) \int_{\Omega_2} Q(w_1, w_2) dV_g + C_{\e} \int_{\Omega_2} (w_{1}^2 + w_{2}^2) dV_g. $$ Moreover, we can estimate the mean value of $\tilde{w}_i$ in the following way: $$\fint_{\Sg} \tilde{w}_i \, dV_g \leq C \left ( \int_{\Sg} |\nabla \tilde{w}_i|^2\, dV_g \right )^{1/2} \leq C_{\e} + \varepsilon \int_{\Omega_2} |\nabla \tilde{w}_i|^2\, dV_g \leq $$$$C_{\e} + C \varepsilon \left ( \int_{\Omega_2} |\nabla w_i|^2\, dV_g + C \int_{\Omega_2} w_{i}^2\, dV_g \right ).$$ From \eqref{eq:ddmm2} and the last formulas we find \begin{eqnarray}\label{eq:last} \nonumber \log \int_{\Omega_1} e^{u_1} dV_g + \log \int_{\Omega_1} e^{u_2} dV_g & \leq & \| v_1\|_{L^{\infty}(\Omega_1)} + \| v_2\|_{L^{\infty}(\Omega_1)} + \frac{1+\e}{4\pi} \int_{\Omega_2} Q(w_1, w_2) dV_g \\ & + & C_{\e} \int_{\Omega_2} (w_1^2 + w_2^2) dV_g + C. \end{eqnarray} To control the latter terms we use truncations in Fourier modes. Define $V_{\e}$ to be the direct sum of the eigenspaces of the Laplacian on $\Omega_2$ (with Neumann boundary conditions) with eigenvalues less than or equal to $C_{\e}\e^{-1}$. Take now $v_i$ to be the orthogonal projection of $u_i$ onto $V_{\e}$.
In $V_\e$ the $L^\infty$ norm is equivalent to the $L^2$ norm: by using Poincar{\'e}'s inequality we get $$ C_{\e} \int_{\Omega_2} (w_{1}^2 + w_{2}^2) dV_g \leq \varepsilon \int_{\Omega_2} Q(u_1,u_2) dV_g, $$$$ \|v_i\|_{L^\infty(\Omega_1)} \leq C_{\e} \|v_i\|_{L^2(\Omega_2)} \leq C_\varepsilon \left( \int_{\Omega_2} |\nabla u_i|^2 dV_g \right)^{\frac 12} \leq \varepsilon \int_{\Omega_2} Q(u_1,u_2) dV_g +C_{\e}. $$ Hence, from \eqref{eq:last} and the above inequalities we derive \eqref{eq:ineqSg} by renaming $\e$ properly. \end{pf} \begin{rem}\label{r:regeigenv} While the Fourier decomposition used in the above proof depends on $\Omega_2$, the constants only depend on $\Sg$, $\delta $ and $\e$. In fact, one can replace $\O_2$ by a domain $\check{\O}_2$, $\O_2 \subseteq \check{\O}_2 \subseteq B_{\O_2}(\delta /2)$, with boundary curvature depending only on $\delta $ and satisfying a uniform interior sphere condition with spheres of radius $\delta ^3$. For example, one can obtain such a domain $\check{\O}_2$ by triangulating $\Sg$ by simplices with diameters of order $\delta ^2$, taking suitable unions of triangles and smoothing the corners. For these domains, which are finitely many, the eigenvalue estimates will only depend on $\delta $. \end{rem} \ \noindent We next prove a criterion which gives us a first insight into the properties of the low sublevels of $J_{\rho}$. This result is in the spirit of an improved inequality in \cite{cl}, and we use an extra covering argument to track the concentration properties of both components of the system. We need first an auxiliary lemma. \begin{lem}\label{l:step1} Let $\delta _0 > 0$, $\g_0 > 0$, and let $\Omega_{i,j} \subseteq \Sg$, $i,j = 1, 2$, satisfy $d(\Omega_{i,j},\Omega_{i,k}) \geq \delta _0$ for $j \neq k$.
Suppose that $u_1, u_2 \in H^1(\Sg)$ are two functions verifying \begin{equation}\label{eq:ddmm} \frac{\int_{\Omega_{i,j}} e^{u_i} dV_g}{\int_\Sigma e^{u_i} dV_g} \geq \g_0, \qquad \qquad i,j = 1, 2. \end{equation} Then there exist positive constants $\tilde{\g}_0$, $\tilde{\delta }_0$, depending only on $\g_0$, $\delta _0$, and two sets $\tilde{\O}_1, \tilde{\O}_2 \subseteq \Sg$, depending also on $u_1, u_2$, such that \begin{equation}\label{eqLddmm2} d(\tilde{\O}_1, \tilde{\O}_2) \geq \tilde{\delta }_0; \qquad \quad \frac{\int_{\tilde{\Omega}_{i}} e^{u_1} dV_g}{\int_\Sigma e^{u_1} dV_g} \geq \tilde{\g}_0, \quad \frac{\int_{\tilde{\Omega}_{i}} e^{u_2} dV_g}{\int_\Sg e^{u_2} dV_g} \geq \tilde{\g}_0; \quad i = 1, 2. \end{equation} \end{lem} \begin{pf} First, we fix a number $r_0 < \frac{\delta _0}{80}$. Then we cover $\Sg$ with a finite union of metric balls $(B_{x_l}(r_0))_l$, whose number can be bounded by an integer $N_{r_0}$ which depends only on $r_0$ (and $\Sg$). Next we cover $\overline{\Omega}_{i,j}$ by a finite number of these balls, and we choose $y_{i,j} \in \{x_l\}_l$ such that $$ \int_{B_{y_{i,j}}(r_0)} e^{u_i} dV_g = \max \left\{ \int_{B_{x_l}(r_0)} e^{u_i} dV_g \; : \; B_{x_l}(r_0) \cap \overline{\Omega}_{i,j} \neq \emptyset \right\}. $$ Since the total number of balls is bounded by $N_{r_0}$ and since by our assumption the (normalized) integral of $e^{u_i}$ over $\Omega_{i,j}$ is greater than or equal to $\g_0$, it follows that \begin{equation}\label{eq:bdtg0} \frac{\int_{B_{y_{i,j}}(r_0)} e^{u_i} dV_g}{\int_{\Sg} e^{u_i} dV_g} \geq \frac{\g_0}{N_{r_0}}. \end{equation} By the properties of the sets $\Omega_{i,j}$, we have that: $$ B_{y_{i,j}}(2r_0) \cap B_{y_{i,k}}(r_0)=\emptyset \qquad \hbox{ for } j \neq k.
$$ Now, one of the following two possibilities occurs: \begin{description} \item[(a)] $B_{y_{1,1}}(5 r_0) \cap \left( B_{y_{2,1}}(5 r_0) \cup B_{y_{2,2}}(5 r_0) \right) \neq \emptyset$ or $B_{y_{1,2}}(5 r_0) \cap \left( B_{y_{2,1}}(5 r_0) \cup B_{y_{2,2}}(5 r_0) \right) \neq \emptyset$; \item[(b)] $B_{y_{1,1}}(5 r_0) \cap \left( B_{y_{2,1}}(5 r_0) \cup B_{y_{2,2}}(5 r_0) \right) = \emptyset$ and $B_{y_{1,2}}(5 r_0) \cap \left( B_{y_{2,1}}(5 r_0) \cup B_{y_{2,2}}(5 r_0) \right) = \emptyset$. \end{description} In case {\bf (a)} we define the sets $\tilde{\Omega}_i$ as $$ \tilde{\Omega}_1 = B_{y_{1,1}}(30 r_0), \qquad \quad \tilde{\Omega}_2 = B_{y_{1,1}}(40 r_0)^c, $$ while in case {\bf (b)} we define $$ \tilde{\Omega}_1 = B_{y_{1,1}}(r_0) \cup B_{y_{2,1}}(r_0); \qquad \quad \tilde{\Omega}_2 = B_{y_{1,2}}(r_0) \cup B_{y_{2,2}}(r_0). $$ We also set $\tilde{\g}_0 = \frac{\g_0}{N_{r_0}}$ and $\tilde{\delta }_0 = r_0$. We notice that $\tilde{\g}_0$ and $\tilde{\delta }_0$ depend only on $\g_0$ and $\delta _0$, as claimed, and that the sets $\tilde{\Omega}_i$ satisfy the required conditions. This concludes the proof of the lemma. \end{pf} \ \noindent We next derive the improvement of the constants in Theorem \ref{th:jw}, in the spirit of \cite{cl}. \begin{pro}\label{p:imprc} Let $u_1, u_2 \in H^1(\Sg)$ be a couple of functions satisfying the assumptions of Lemma \ref{l:step1} for some positive constants $\delta _0, \g_0$. Then for any $\varepsilon > 0$ there exists $C=C(\e) > 0$, depending on $\e, \delta _0$, and $\g_0$ such that $$ 8 \pi \left( \log \int_{\Sg} e^{u_1-\overline{u}_1} dV_g + \log \int_{\Sg} e^{u_2-\overline{u}_2} dV_g \right) \leq (1+\e) \int_\Sg Q(u_1, u_2) dV_g + C. $$ \end{pro} \begin{pf} Let $\tilde{\delta }_0, \tilde{\g}_0$ and $\tilde{\Omega}_1, \tilde{\Omega}_2$ be as in Lemma \ref{l:step1}, and assume without loss of generality that $\overline{u}_1 = \overline{u}_2 = 0$.
Let us define $U_i=\{x \in \Sg:\ d(x, \tilde{\Omega}_i) < \tilde{\delta }_0/2\}$. By applying Proposition \ref{p:MTbd}, we get: \begin{equation} \label{hola} 4 \pi \left (\log \int_{\tilde{\Omega}_i} e^{u_1} dV_g + \log \int_{\tilde{\Omega}_i} e^{u_2} dV_g - \fint_{U_i} (u_1+u_2) dV_g \right)\leq (1+\e) \int_{U_i} Q(u_1, u_2) dV_g + C.\end{equation} Observe that: $$ \log \int_{\tilde{\Omega}_i} e^{u_j} dV_g\geq \log \left ( \int_{\Sg} e^{u_j} dV_g\right ) + \log \tilde{\g}_0.$$ Since $U_1 \cap U_2 = \emptyset$, we can sum \eqref{hola} for $i=1,2$, to obtain $$8 \pi \left ( \log \int_{\Sg} e^{u_1} dV_g + \log \int_{\Sg} e^{u_2} dV_g -\sum_{i=1}^2 \fint_{U_i} (u_1+u_2) dV_g \right )\leq (1+\e) \int_{\Sg} Q(u_1, u_2) dV_g + C.$$ It suffices now to estimate the term $\fint_{U_i} (u_1+u_2) dV_g$. By using Poincar{\'e}'s inequality and the estimate $|U_i| \geq \tilde{\delta }_0^2$, we have: $$ \fint_{U_i} u_j \, dV_g \leq \tilde{\delta }_0^{-2} \int_{U_i} u_j \, dV_g \leq C \left ( \int_{\Sg} |\nabla u_j|^2\, dV_g \right )^{1/2} \leq C + \varepsilon \int_{\Sg} |\nabla u_j|^2\, dV_g.$$ To finish the proof it suffices to properly rename $\e$. \end{pf} Proposition \ref{p:imprc} implies that on low sublevels at least one of the components must be very concentrated around a certain point. A more precise description of the topological properties of $J_{\rho}^{-L}$ will be given later on. \section{Volume concentration and improved inequality} \label{s:3} In this section we give the main tools for the description of the sublevels of the energy functional $J_{\rho}$. These will be contained in Propositions \ref{covering} and \ref{mt}, whose proofs will be given in the subsequent subsections. First, we give continuous definitions of the center of mass and scale of concentration of positive functions normalized in $L^1$, which are adequate for our purposes. These are a refinement of the ones in \cite{mr}. Consider the set $$ A = \left\{ f \in L^1(\Sg) \; : \; f > 0 \ \hbox{ a. e.
and } \int_\Sigma f \, dV_g = 1 \right\}, $$ endowed with the topology inherited from $L^1(\Sg)$. Moreover, let us recall the definition \eqref{cono} of the cone $\overline{\Sg}_{\delta}$. \begin{pro} \label{covering} Let us fix a constant $R>1$. Then there exist $\delta= \delta(R) >0$ and a continuous map: $$ \psi : A \to \overline{\Sg}_{\delta}, \qquad \quad \psi(f)= (\beta, \sigma),$$ satisfying the following property: for any $f \in A$ there exists $p \in \Sg$ such that \begin{enumerate} \item[{\em a)}] $ d(p, \beta) \leq C' \sigma$ for $C' = \max\{3 R+1, \delta^{-1} \mathrm{diam}(\Sg) \}.$ \item[{\em b)}] There holds: $$ \int_{B_p(\sigma)} f \, dV_g > \tau, \qquad \quad \int_{B_p(R \sigma)^c} f \, dV_g > \tau, $$ where $\tau>0$ depends only on $R$ and $\Sg$. \end{enumerate} \end{pro} Roughly speaking, the above map $\psi(f)= (\beta, \sigma)$ gives us a center of mass of $f$ and its scale of concentration around that point. Indeed, the smaller $\sigma$ is, the higher the rate of concentration. Moreover, if $\sigma$ exceeds a certain positive constant, $\beta$ cannot be defined; so, it is natural to make the identification in $\overline{\Sg}_\delta $. Next, we state an improved Moser-Trudinger inequality for couples $(u_1,u_2)$ such that $e^{u_1}$ and $e^{u_2}$ are centered at the same point with the same rate of concentration.
More precisely, we have the following: \begin{pro} \label{mt} Given any $\e>0$, there exist $R=R(\e)>1$ and $\psi$ as given in Proposition \ref{covering}, such that for any $(u_1, u_2) \in H^1(\Sg)\times H^1(\Sg)$ with: $$\psi \left( \frac{e^{u_1}}{\int_{\Sigma} e^{u_1} dV_g} \right )= \psi \left( \frac{e^{u_2}}{\int_{\Sigma} e^{u_2} dV_g} \right ), $$ the following inequality holds: $$ (1+\e) \int_\Sigma Q(u_1,u_2)\, dV_g \geq 8 \pi \left (\log \int_\Sigma e^{u_1-\overline{u}_1} \, dV_g +\log \int_\Sigma e^{u_2-\overline{u}_2} \, dV_g \right )+ C, $$ for some $C=C(\e)$. \end{pro} The rest of the section is devoted to the proofs of these propositions. \subsection{Proof of Proposition \ref{covering}} Take $R_0=3R$, and define $ \sigma: A \times \Sigma \to (0,+\infty)$ such that: \begin{equation} \label{sigmax} \int_{B_x(\sigma(x,f))} f \, dV_g = \int_{B_x(R_0 \sigma(x,f))^c} f \, dV_g. \end{equation} It is easy to check that $\sigma(x,f)$ is uniquely determined and continuous. Moreover, $\sigma$ satisfies: \begin{equation} \label{dett} d(x,y) \leq R_0 \max \{ \sigma(x,f), \sigma(y,f)\} +\min \{ \sigma(x,f), \sigma(y,f)\}. \end{equation} Otherwise, we would have $ B_x(R_0 \sigma(x,f)) \cap B_y(\sigma(y,f)+\e) = \emptyset $ for some $\e>0$. Moreover, since $B_y(R_0 \sigma(y,f))$ does not cover the whole surface $\Sg$, $A_y(\sigma(y,f), \sigma(y,f)+\e)$ is a nonempty open set. Then: $$ \int_{B_x(\sigma(x,f))} f \, dV_g = \int_{B_x(R_0 \sigma(x,f))^c} f \, dV_g \geq \int_{B_y(\sigma(y,f)+\e)} f \, dV_g > \int_{B_y(\sigma(y,f))} f \, dV_g. $$ By interchanging the roles of $x$ and $y$, we would also obtain the reverse inequality. This contradiction proves \eqref{dett}. We now define: $$ T: A \times \Sigma \to \mathbb{R}, \qquad T(x,f) = \int_{B_x(\s(x,f))} f dV_g. $$ \begin{lem}\label{sigma} If $x_0 \in \Sg$ is such that $T(x_0,f) = \max_{y \in\Sg} T(y,f)$, then $\s(x_0,f) < 3\s(x, f)$ for any other $x \in \Sg$.
\end{lem} \begin{pf} Choose any $x \in \Sg$ and $\e>0$. First, observe that $B_x(R_0 \s(x,f)+\e) $ must intersect $B_{x_0}(\s(x_0,f))$. Otherwise, as above, we know that $A_x(R_0 \s(x,f), R_0 \s(x,f)+\e)$ is an open nonempty set. Then $$ T(x_0,f)= \int_{B_{x_0}(\s(x_0,f))} f \, dV_g < \int_{B_{x}(R_0\s(x,f))^c} f \, dV_g =T(x,f), $$ a contradiction. Arguing in the same way, we can also conclude that $B_x(R_0 \s(x,f)+\e) $ cannot be contained in $B_{x_0}(R_0\s(x_0,f))$. By the triangle inequality, we obtain that: $$ 2 (R_0 \sigma(x,f)+\e) > (R_0-1) \sigma(x_0,f).$$ Since $\e>0$ is arbitrary, it follows that: $$ \sigma(x,f) \geq \frac{R_0-1}{2 R_0} \sigma(x_0,f).$$ Recalling that $R_0>3$, we are done. \end{pf} As a consequence of the previous lemma, we obtain the following: \begin{lem} \label{tau} There exists a fixed $\tau > 0$ such that $$ \max_{x \in \Sg} T(x,f) > \tau > 0 \qquad \quad \hbox{ for all } f \in A. $$ \end{lem} \begin{pf} Let us fix $x_0 \in \Sigma$ such that $T(x_0,f)=\max_{x\in \Sigma} T(x,f)$. For any $x \in A_{x_0}(\sigma(x_0,f ), R\sigma(x_0,f))$, by Lemma \ref{sigma} we have that: $$ \int_{B_x(\sigma(x_0,f)/3)} f \, dV_g \leq \int_{B_x(\sigma(x,f))} f \, dV_g \leq T(x_0,f). $$ Let us take a finite covering: $$A_{x_0}(\sigma(x_0,f ), R\sigma(x_0,f)) \subset \bigcup_{i=1}^k B_{x_i}(\sigma(x_0,f)/3).$$ Observe that $k$ is independent of $f$ and $\sigma(x_0,f)$, and depends only on $\Sigma$ and $R$. Therefore: $$ 1 = \int_{\Sg} f \, dV_g \leq \int_{B_{x_0}(\sigma(x_0,f))} f \, dV_g + \int_{B_{x_0}(R\sigma(x_0,f))^c} f \, dV_g+ \sum_{i=1}^k \int_{B_{x_i}(\sigma(x_0,f)/3)} f \, dV_g \leq (k+2) T(x_0,f),$$ so it suffices to take $\tau$ slightly smaller than $(k+2)^{-1}$. \end{pf} Let us define: $$ \sigma : A \to \mathbb{R}, \qquad \quad \sigma(f)= 3 \min\{ \sigma(x,f): \ x \in \Sigma\},$$ which is obviously a continuous function.
\begin{rem} \label{sigmacont} In \cite{mr} (see Section 3 there) a sort of concentration parameter is defined, but it does not depend continuously on $f$. Moreover, the definition of barycenter given below has been modified compared to \cite{mr}. Finally, the map $\psi$ takes values in a cone; this interpretation, which is crucial in our framework, was missing in \cite{mr}. \end{rem} Given $\tau$ as in Lemma \ref{tau}, consider the set: \begin{equation} \label{defS} S(f) = \left\{ x \in \Sigma\; : \; T(x,f) > \t,\ \s(x,f) < \s(f) \right\}. \end{equation} If $x_0 \in \Sg$ is such that $T(x_0,f)= \max_{x\in \Sg} T(x,f)$, then Lemmas \ref{sigma} and \ref{tau} imply that $x_0 \in S(f)$. Therefore, $S(f)$ is a nonempty open set for any $f \in A$. Moreover, from \eqref{dett}, we have that: \begin{equation}\label{eq:diamS} diam(S(f)) \leq (R_0+1)\s(f).\end{equation} By the Nash embedding theorem, we can assume that $\Sigma\subset \mathbb{R}^N$ isometrically, $N \in \mathbb{N}$. Take an open tubular neighborhood $\Sg \subset U \subset \mathbb{R}^N$ of $\Sg$, and $\delta>0$ small enough so that: \begin{equation} \label{co} co \left [ B_x((R_0+1)\delta)\cap \Sigma\right ] \subset U \ \forall \, x \in \Sg, \end{equation} where $co$ denotes the convex hull in $\mathbb{R}^N$. We define now $$ \eta(f) = \frac{\displaystyle \int_\Sigma(T(x,f) - \t)^+ \left( \s(f) - \s(x,f) \right)^+ x \ dV_g}{\displaystyle \int_\Sigma(T(x,f) - \t)^+ \left( \s(f) - \s(x,f) \right)^+ dV_g}\in \mathbb{R}^N. $$ The map $\eta$ yields a sort of center of mass in $\mathbb{R}^N$. Observe that the integrands are nonzero only on the set $S(f)$.
However, whenever $\sigma(f) \leq \delta$, \eqref{eq:diamS} and \eqref{co} imply that $\eta(f) \in U$, and so we can define: $$ \beta: \{f \in A:\ \s(f)\leq \delta \} \to \Sg, \ \ \beta(f)= P \circ \eta (f),$$ where $P: U \to \Sg$ is the orthogonal projection. Now, let us check that $\psi(f)=(\beta(f), \sigma(f))$ satisfies the conditions given by Proposition \ref{covering}. If $\sigma(f) \leq \delta$, then $\beta(f) \in co [ S(f)] \cap \Sg$. Therefore, $d(\beta(f), S(f)) < (R_0+1)\sigma(f)$. Take any $p \in S(f)$. Recall that $R_0 = 3R$ and that $\s(f) \leq 3\s(x,f)<3 \s(f)$ for any $x \in S(f)$: {\it a)} and {\it b)} then follow easily. If $\sigma(f) \geq \delta$, $\beta$ is not defined. Observe that {\it a)} is then satisfied for any $\beta \in \Sg$. \subsection{Proof of Proposition \ref{mt}} First of all, we will need the following technical lemma: \begin{lem} \label{technical} There exists $C>0$ such that for any $x \in \Sg$ and any $d>0$ small, $$ \left | \fint_{B_x(d)} u\, dV_g - \fint_{\partial B_x(d)} u\, dS_g\right | \leq C \left ( \int_{B_{x}(d)} |\nabla u|^2 \, dV_g \right )^{1/2}.$$ Moreover, given $r\in (0,1)$, there exists $C=C(r, \Sg)>0$ such that for any $x_1$, $x_2 \in \Sg$, $d>0$ with $B_1 = B_{x_1}(r d) \subset B_2 = B_{x_2}(d)$, then: $$ \left | \fint_{B_1} u\, dV_g - \fint_{B_2} u\, dV_g\right | \leq C \left ( \int_{B_2} |\nabla u|^2 \, dV_g \right )^{1/2}.$$ \end{lem} \begin{pf} The existence of such a constant $C$ follows from the embedding of $H^1$ into $L^1$ and trace inequalities. Moreover, $C$ is independent of $d$ since both inequalities above are dilation invariant. \end{pf} In view of the statement of Proposition \ref{covering}, we now deduce a Moser-Trudinger type inequality for small balls, and also for annuli with small internal radius.
These inequalities are at the core of the proof of Proposition \ref{mt}, and are contained in the following two lemmas. The first one uses a dilation argument: \begin{lem} \label{ball} For any $\e>0$ there exists $C=C(\e)>0$ such that \begin{eqnarray*} (1+\e)\int_{B_p(s)}Q(u_1, u_2) \ dV_g + C & \geq & 4 \pi \left (\log \int_{B_p(s/2)} e^{u_1} dV_g +\log \int_{B_p(s/2)} e^{u_2} dV_g \right) \\ & - & 4 \pi (\bar{u}_1(s) + \bar{u}_2(s) + 4 \log s ), \end{eqnarray*} for any $u_1, u_2 \in H^1(\Sg)$, $p \in \Sg$, $s>0$ small and for $\bar{u}_i(s)= \displaystyle \fint_{B_p(s)} u_i\, dV_g$. \end{lem} \begin{pf} For $s > 0$ smaller than the injectivity radius, the result follows easily from Proposition \ref{p:MTbd}, for some $C=C(s, \e)$. We then need to prove that the constant $C$ can be taken independent of $s$ as $s \to 0$. Notice that, as $s \to 0$, we consider quantities defined on smaller and smaller geodesic balls on $\Sg$. Working in normal geodesic coordinates at $p$, gradients, averages and the volume element resemble the Euclidean ones. If we assume that near $p$ the metric of $\Sg$ is flat, we only introduce negligible error terms, which will be omitted for reasons of brevity. To prove the lemma, we simply make a dilation of the pair $(u_1, u_2)$ of the form: $$ v_i(x)= u_i(s x+ p).$$ Straightforward computations give: $$ \int_{B_p(s)} Q(u_1,u_2) \, dV_g= \int_{B(0,1)} Q(v_1, v_2) \, dV_g, $$ $$ \bar{u}_i(s)= \fint_{B(0,1)} v_i \, dV_g, $$ $$ \int_{B_p(s/2)} e^{u_i} \, dV_g = s^{2} \int_{B(0,1/2)} e^{v_i}\, dV_g.$$ Applying Proposition \ref{p:MTbd} to the pair $(v_1, v_2)$, we conclude the proof of the lemma. \end{pf} The next lemma gives us an estimate of the quadratic form $Q$ on annuli by using the Kelvin transform. This transformation is indeed very natural in this framework; see Remark \ref{r:kelvin} for a more detailed discussion.
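For the reader's convenience, here is a sketch of the computations behind the dilation identities above, written in the flat setting near $p$ (an assumption already made in the proof); the only ingredient is the change of variables $y = sx + p$, $dV_g(y) = s^2\,dx$:

```latex
% Sketch (flat metric near p, as assumed in the proof of Lemma \ref{ball}):
% with v_i(x) = u_i(sx+p) one has \nabla v_i(x) = s\,\nabla u_i(sx+p), hence
\int_{B(0,1)} |\nabla v_i(x)|^2 \, dx
  = \int_{B(0,1)} s^2\, |\nabla u_i(sx+p)|^2 \, dx
  = \int_{B_p(s)} |\nabla u_i(y)|^2 \, dy,
% the factor s^2 from the gradients cancelling the Jacobian s^{-2}:
% this is the conformal invariance of the Dirichlet energy in dimension 2.
% The same cancellation applies to the mixed term \nabla u_1 \cdot \nabla u_2,
% so Q scales identically. For the exponential terms no cancellation occurs:
\int_{B_p(s/2)} e^{u_i(y)} \, dy = s^2 \int_{B(0,1/2)} e^{v_i(x)} \, dx .
```

This is why the dilation leaves the quadratic part untouched while producing the additive term $4\log s$ (that is, $2\log s$ for each component) in the statement of the lemma.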
\begin{lem} \label{annulus} Given $\e>0$, there exists a fixed $r_0>0$ (depending only on $\Sg$ and $\e$) satisfying the following property: for any $r \in (0, r_0)$ fixed, there exists $C=C(r, \e)>0$ such that, for any $(u_1,u_2) \in H^1(\Sg) \times H^1(\Sg)$ with $u_i=0$ on $\partial B_p(2 r)$, $$ \int_{A_p(s/2,2 r)}Q(u_1, u_2) \ dV_g + \varepsilon \int_{B_p(2 r)}Q(u_1, u_2) \ dV_g + C \geq $$$$ 4 \pi \left (\log \int_{A_p(s, r)} e^{u_1} dV_g +\log \int_{A_p(s,r)} e^{u_2} dV_g + (\bar{u}_1(s) + \bar{u}_2(s) + 4 \log s )(1+\e)\right ), $$ with $p \in \Sg$, $s\in(0,r)$ and $\bar{u}_i(s)= \fint_{B_p(s)} u_i\, dV_g$. \end{lem} \begin{pf} As in the proof of Lemma \ref{ball}, we need to show that $C$ is independent of $s$ as $s \to 0$. By taking $r_0$ small enough, also here the metric becomes close to the Euclidean one. Reasoning as in the proof of Lemma \ref{ball}, we can then assume that the metric is flat around $p$. We can define the Kelvin transform: $$ K : A_p(s/2, 2r) \to A_p(s/2, 2r), \qquad K(x)= p+ r s \frac{x-p}{|x-p|^2}.$$ Observe that $K$ maps the interior boundary of $A_p(s/2, 2r)$ onto the exterior one and vice versa, and fixes the set $\partial B_p(\sqrt{s\, r})$. Let us define the functions $ \hat{u}_i \in H^1(B_p(2r))$ as: $$\hat{u}_i(x)= \left \{ \begin{array}{ll} u_i(K(x)) - 4 \log |x-p| & \mbox{ if } |x-p| \geq s/2, \\ -4 \log (s/2)& \mbox{ if } |x-p| \leq s/2. \end{array} \right.$$ Our goal is to apply the Moser-Trudinger inequality given by Proposition \ref{p:MTbd} to $(\hat{u}_1, \hat{u}_2)$. In order to do so, let us compute: \begin{equation} \label{exp} \int_{A_p(s, r)} e^{\hat{u}_i} \, dV_g = \int_{A_p(s, r)} e^{u_i(K(x))} |x-p|^{-4} \, dV_g =\frac{1}{s^2 r^2} \int_{A_p(s, r)} e^{u_i(x)} \, dV_g,\end{equation} since the Jacobian determinant of $K$ is $J(K(x)) = - r^2 s^2 |x-p|^{-4}$.
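As a sanity check (in the flat setting, and with the shorthand $z = x-p$, introduced here only for this computation), the differential of the inversion can be written explicitly; this is where the factor $|x-p|^{-4}$ in \eqref{exp} comes from:

```latex
% Writing z = x - p and \hat z = z/|z|, the map is K(x) = p + rs\, z/|z|^2, so
DK(x) = \frac{rs}{|z|^2}\left( \mathrm{Id} - 2\, \hat z \otimes \hat z \right),
% and, since \mathrm{Id} - 2\,\hat z \otimes \hat z is a reflection
% (with determinant -1), in dimension 2 one gets
\det DK(x) = -\,\frac{r^2 s^2}{|z|^4},
\qquad \hbox{hence} \qquad
\left| \det DK(x) \right| = r^2 s^2\, |x-p|^{-4},
% which is exactly the Jacobian factor used in the change of variables \eqref{exp}.
```

In particular $K$ is an involution of the annulus, consistent with the fact that it swaps the two boundary circles and fixes $\partial B_p(\sqrt{sr})$.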
Moreover, by Lemma \ref{technical}, we have that: $$ \left | \fint_{B_p(2r)} \hat{u}_i \, dV_g - \fint_{\partial B_p(r)} \hat{u}_i \, dS_x \right | \leq C \left (\int_{B_p(2r)} |\nabla \hat{u}_i| ^2 \, dV_g \right )^{1/2} \leq C + \varepsilon \int_{B_p(2r)} |\nabla \hat{u}_i| ^2 \, dV_g .$$ By using again a change of variables, $$ \fint_{\partial B_p(r)} \hat{u}_i \, dS_x = \fint_{\partial B_p(s)} u_i \, dS_x - 4 \log r.$$ Therefore, \begin{equation} \label{media} \left | \fint_{B_p(2r)} \hat{u}_i \, dV_g- \bar{u}_i(s) \right | \leq C + \varepsilon \int_{B_p(2r)} |\n \hat{u}_i| ^2 \, dV_g + \varepsilon \int_{B_p(s)} |\nabla u_i| ^2 \, dV_g. \end{equation} Let us now estimate the gradient terms. For $|x-p|\geq s/2$, $$|\nabla \hat{u}_i(x)|^2 = |\nabla u_i(K(x))|^2 \frac{s^2 r^2}{|x-p|^4} + \frac{16}{|x-p|^2} + 8 \nabla u_i (K(x)) \cdot \frac{x-p}{|x-p|^4}\, s r.$$ Therefore, $$\int_{B_p(2r)} |\nabla \hat{u}_i(x)|^2 \, dV_g= \int_{A_p(s/2, 2 r)} |\nabla \hat{u}_i(x)|^2 \, dV_g = \int_{A_p(s/2, 2 r)} |\nabla u_i(K(x))|^2 \frac{s^2 r^2}{|x-p|^4} \, dV_g +$$$$ 16 \int_{A_p(s/2, 2 r)} \frac{dV_g}{|x-p|^2} + 8 \int_{A_p(s/2, 2 r)} \nabla u_i (K(x)) \cdot \frac{x-p}{|x-p|^4}\ s \, r \, dV_g = $$$$ \int_{A_p(s/2, 2 r)} |\nabla u_i(x)|^2 \, dV_g + 32 \pi (\log (2r) - \log (s/2)) + 8 \int_{A_p(s/2, 2 r)} \nabla u_i (K(x)) \cdot \frac{K(x)-p}{|K(x)-p|^2} \ \frac{s^2 r^2}{|x-p|^{4}}\, dV_g = $$ $$ \int_{A_p(s/2, 2 r)} |\nabla u_i(x)|^2 \, dV_g + 32 \pi (\log (2r) - \log (s/2)) + 8 \int_{A_p(s/2, 2 r)} \nabla u_i (x) \cdot \frac{x-p}{|x-p|^{2}} \, dV_g = $$ $$ \int_{A_p(s/2, 2 r)} |\nabla u_i(x)|^2 \, dV_g + 32 \pi (\log (2r) - \log (s/2)) - 16 \pi \fint_{\partial B_p(s/2)} u_i \, dS_x. $$ In the last equality we have used integration by parts.
By using again Lemma \ref{technical}, \begin{equation} \label{dirich} \left | \int_{B_p(2 r)} |\nabla \hat{u}_i(x)|^2 \, dV_g - \int_{A_p(s/2, 2 r)} |\nabla u_i(x)|^2 \, dV_g + 32 \pi \log s + 16 \pi \bar{u}_i(s) \right | \leq C + \varepsilon \int_{B_p(s)} |\n {u}_i| ^2 \, dV_g.\end{equation} Regarding the mixed term $\nabla \hat{u}_1 \cdot \nabla \hat{u}_2$, we have that for $|x-p|\geq s/2$, $$\nabla \hat{u}_1(x) \cdot \nabla \hat{u}_2(x) = \nabla u_1(K(x)) \cdot \nabla u_2(K(x)) \frac{s^2 r^2}{|x-p|^4} + \frac{16}{|x-p|^2} + \frac{4sr}{|x-p|^4} \left(\nabla u_1(K(x)) + \n u_2(K(x))\right)\cdot (x-p).$$ Reasoning as above, we obtain the estimate: \begin{equation} \label{dirich2} \begin{array}{c} \left | \displaystyle \int_{B_p(2 r)} \nabla \hat{u}_1(x) \cdot \nabla \hat{u}_2(x) \, dV_g - \int_{A_p(s/2, 2 r)} \nabla u_1(x) \cdot \nabla u_2(x) \, dV_g + 32 \pi \log s + 8 \pi \bar{u}_1(s) + 8 \pi \bar{u}_2(s) \right | \leq \\ \\ C + \varepsilon \displaystyle \int_{B_p(s)} \left( |\nabla {u}_1| ^2 + |\n {u}_2|^2 \right )\, dV_g.
\end{array}\end{equation} We now apply Proposition \ref{p:MTbd} to $(\hat{u}_1, \hat{u}_2)$ and use the estimates \eqref{exp}, \eqref{media}, \eqref{dirich} and \eqref{dirich2}, to obtain: $$ 4 \pi \left [ \log \int_{A_p(s,r)} e^{u_1} \, dV_g+ \log \int_{A_p(s,r)} e^{u_2} \, dV_g -(4 \log s + \bar{u}_1(s) + \bar{u}_2(s)) \right ] \leq $$$$ 4 \pi \left [ \log \int_{A_p(s,r)} e^{\hat{u}_1} \, dV_g + \log \int_{A_p(s,r)} e^{\hat{u}_2}\, dV_g - \fint_{B_p(2r)} (\hat{u}_1 + \hat{u}_2) \, dV_g \right ] +$$$$ \varepsilon \int_{B_p(2r)} \left(|\nabla \hat{u}_1|^2 + |\nabla \hat{u}_2|^2 \right) \, dV_g + \varepsilon \int_{B_p(s)} \left ( |\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g + C \leq $$ $$ (1+C\e) \int_{B_p(2r)} Q(\hat{u}_1,\hat{u}_2)\, dV_g + \varepsilon \int_{B_p(s)} \left ( |\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g +C \leq $$ $$ (1+C\e) \left [ \int_{A_p(s/2,2r)} Q(u_1,u_2)\, dV_g - 8 \pi ( 4 \log s + \bar{u}_1(s) + \bar{u}_2(s) ) \right ] + \varepsilon \int_{B_p(s)} \left( |\nabla u_1|^2 + |\nabla u_2|^2 \right) \, dV_g+C.$$ By renaming $\e$ conveniently, we conclude the proof. \end{pf} \begin{rem} The term $\bar{u}(s) + 2 \log s$ has an easy interpretation: by the Jensen inequality we have the estimate $$ \log \int_{B_p(s)} e^u \, dV_g = \log \left (|B_p(s)| \fint_{B_p(s)} e^u \, dV_g \right) \geq \bar{u}(s) + 2 \log s - C.$$ \end{rem} \begin{rem}\label{r:kelvin} The transformation $K$ is used to exploit the geometric properties of the problem, in order to gain as much control as possible on the exponential terms. From the formulas in \cite{jw2} one has that both components of the entire solutions of the Toda system in $\mathbb{R}^2$ decay at infinity at the rate $- 4 \log |x|$. In this way, the Kelvin transform brings these functions to (nearly) constants at the origin, giving a sort of optimization in the Dirichlet part.
The minimal value of the Dirichlet energy needed to obtain concentration of volume at a scale $s$ (as in the statement of Lemma \ref{annulus}) is then transformed into a boundary integral which cancels exactly the extra terms in Lemma \ref{ball} due to the $s$-dilation. \end{rem} \begin{rem} Lemmas \ref{ball} and \ref{annulus}, together with Proposition \ref{covering}, give a precise idea of the proof. Indeed, assume that for some $p\in \Sg$, $\sigma >0$: \begin{equation} \label{1} \int_{B_{p}(\s)} e^{u_i} dV_g \geq \tau \int_\Sigma e^{u_i} dV_g, \ i=1,2;\end{equation} \begin{equation} \label{2} \int_{B_{p}(R \s)^c} e^{u_i} dV_g \geq \tau \int_\Sigma e^{u_i} dV_g, \ i=1,2. \end{equation} If we sum the inequalities given by Lemmas \ref{ball} and \ref{annulus}, the term $\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \s$ cancels and we deduce the estimate of Proposition \ref{mt}. The problem is that when $\psi \left( \frac{e^{u_1}}{\int_{\Sigma} e^{u_1} dV_g} \right )= \psi \left( \frac{e^{u_2}}{\int_{\Sigma} e^{u_2} dV_g} \right )$ we do not really have \eqref{1}, \eqref{2} around the same point $p$. Moreover, $u_i$ need not be zero on the boundary of a ball, as required in Lemma \ref{annulus}. Some technical work is needed to deal with these difficulties. \end{rem} We now prove Proposition \ref{mt}. Fix $\e>0$, take $R>1$ (depending only on $\e$) and let $\psi$ be the continuous map given by Proposition \ref{covering}. Fix also $\delta>0$ small (which will depend only on $\e$, too). Let $u_1$ and $u_2$ be two functions in $H^1(\Sg)$ with $\int_{\Sg} u_{i} \, dV_g =0$, such that: $$\psi \left( \frac{e^{u_1}}{\int_{\Sigma} e^{u_1} dV_g} \right )= \psi \left( \frac{e^{u_2}}{\int_{\Sigma} e^{u_2} dV_g} \right )= (\beta, \sigma) \in \overline{\Sg}_{\delta }. $$ If $\sigma \geq \frac{\delta}{R^2}$, then Proposition \ref{p:imprc} yields the result.
Therefore, assume $\sigma < \frac{\delta}{R^2}$; Proposition \ref{covering} implies the existence of $\tau>0$, $p_1,\ p_2 \in \Sg$ satisfying: \begin{equation} \label{dentro} \int_{B_{p_i}(\s)} e^{u_i} dV_g \geq \tau \int_\Sg e^{u_i} dV_g, \ i=1,2;\end{equation} \begin{equation} \int_{B_{p_i}(R \s)^c} e^{u_i} dV_g \geq \tau \int_\Sigma e^{u_i} dV_g, \ i=1,2; \end{equation} $$d(p_1, p_2) \leq (6R+2) \s. $$ The proof will be divided into two cases: \noindent {\bf CASE 1:} Assume that: \begin{equation} \label{fuera} \int_{A_{p_i}(R\s, \delta)} e^{u_i} dV_g \geq \tau/2 \int_{\Sg} e^{u_i} dV_g.\end{equation} In order to be able to apply Lemma \ref{annulus}, we need to modify our functions outside a certain ball. Choose $k \in \mathbb{N}$, $k \leq 2 \e^{-1}$, such that: $$ \int_{A_{p_1}(2^{k-1} \delta, 2^{k+1} \delta)} \left ( |\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g \leq \varepsilon \int_{\Sg} \left (|\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g. $$ We define $\tilde{u}_i \in H^1(\Sg)$ by: $$ \left \{ \begin{array}{ll} \tilde{u}_i(x) = u_i(x) & x \in B_{p_1}(2^k \delta), \\ \Delta \tilde{u}_i(x) =0 & x \in A_{p_1}(2^k \delta, 2^{k+1} \delta), \\ \tilde{u}_i(x) = 0 & x \notin B_{p_1}(2^{k+1} \delta). \end{array} \right. $$ Since we plan to apply Lemma \ref{annulus} to $(\tilde{u}_1, \tilde{u}_2)$, we need to choose $\delta$ small enough so that $2^{3\e^{-1}}\delta < r_0$, where $r_0$ is given by that lemma. It is easy to check, by using Lemma \ref{technical}, that $$ \int_{A_{p_1}(2^{k} \delta, 2^{k+1} \delta)} \left (|\nabla \tilde{u}_1|^2 + |\nabla \tilde{u}_2|^2 \right ) \, dV_g \leq $$$$ C \int_{A_{p_1}(2^{k-1} \delta, 2^{k} \delta)} \left (|\nabla u_1|^2 + |\nabla u_2|^2 \right ) \, dV_g \leq C \varepsilon \int_{\Sg} \left (|\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g, $$ where $C$ is a universal constant.
\noindent {\bf Case 1.1:} $d(p_1, p_2) \leq R^{\frac 12} \s$. By applying Lemma \ref{ball} to $u_i$ for $p=p_1$ and $s= 2 (R^{1/2}+1)\sigma$, and taking into account \eqref{dentro}, we obtain: \begin{equation} \label{dentro11} \begin{array}{c}(1+\e) \displaystyle \int_{B_p(s)}Q(u_1, u_2) \ dV_g + C \geq \\ \\4 \pi \left (\log \displaystyle \int_{B_p(s/2)} e^{u_1} dV_g +\log \int_{B_p(s/2)} e^{u_2} dV_g - (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma ) \right ) \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{\Sg} e^{u_1} dV_g +\log \int_{\Sg} e^{u_2} dV_g - (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )-C \right ),\end{array} \end{equation} where $\bar{u}_i(\s)= \fint_{B_p(\s)} u_i\, dV_g$. We now apply Lemma \ref{annulus} to $\tilde{u}_i$ for $p=p_1$, $s'=4(R^{1/2}+1)\s$ and $r = 2^{k+1}\delta$: \begin{equation} \label{fuera11} \begin{array}{c} \displaystyle \int_{A_p(s'/2,2 r)}Q(\tilde{u}_1, \tilde{u}_2) \ dV_g + \e \int_{\Sg} \left( |\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g+ C \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{A_p(s', r)} e^{\tilde{u}_1} dV_g +\log \int_{A_p(s',r)} e^{\tilde{u}_2} dV_g + (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )(1+\e)\right ). \end{array} \end{equation} Taking into account \eqref{fuera}, we conclude: \begin{equation} \label{fuera11-bis} \begin{array}{c} \displaystyle \int_{A_p(s'/2,2 r)}Q(\tilde{u}_1, \tilde{u}_2) \ dV_g + \e \int_{\Sg} \left( |\nabla u_1|^2 + |\nabla u_2|^2 \right ) \, dV_g+ C \geq \\ \\4 \pi \left (\log \displaystyle \int_{\Sg} e^{u_1} dV_g +\log \int_{\Sg} e^{{u}_2} dV_g + (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \s )(1+\e)\right ).\end{array} \end{equation} Combining \eqref{dentro11} and \eqref{fuera11-bis} we obtain our result (after properly renaming $\e$).
\noindent {\bf Case 1.2:} $d(p_1, p_2) \geq R^{\frac 12} \s$ and $\displaystyle \int_{B_{p_1}(R^{\frac 13} \s)} e^{u_2} dV_g \geq \tau/4 \int_\Sigma e^{u_2} dV_g$. Here we argue as in Case 1.1: as a first step, we apply Lemma \ref{ball} to $(u_1, u_2)$ for $p=p_1$ and $s = 2 (R^{1/3}+1)\s$. Then, we use Lemma \ref{annulus} with $(\tilde{u}_1, \tilde{u}_2)$ for $p=p_1$, $s'= 4 (R^{1/3}+1)\s$ and $r=2^{k+1} \delta$. \noindent {\bf Case 1.3:} $d(p_1, p_2) \geq R^{\frac 12} \s$ and $\displaystyle \int_{B_{p_2}(R^{\frac 13} \s)} e^{u_1} dV_g \geq \tau/4 \int_\Sigma e^{u_1} dV_g$. This case can be treated as Case 1.2, by just interchanging the indices $1$ and $2$. \noindent {\bf Case 1.4:} $d(p_1, p_2) \geq R^{\frac 12} \s$, $\displaystyle \int_{B_{p_2}(R^{\frac 13} \s)} e^{u_1} dV_g \leq \tau/4 \int_\Sigma e^{u_1} dV_g$ and $\displaystyle \int_{B_{p_1}(R^{\frac 13} \s)} e^{u_2} dV_g \leq \tau/4 \int_\Sg e^{u_2} dV_g$. Here we again need to use a harmonic lifting of our functions. Take $n \in \mathbb{N}$, $n \leq 2 \e^{-1}$, so that $$ \sum_{i=1}^2 \int_{A_{p_i}(2^{n-1} \s, 2^{n+1} \sigma )} \left( |\nabla u_1|^2 + |\nabla u_2|^2 \right ) \, dV_g \leq \varepsilon \int_{\Sg} \left (|\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g, $$ where we have chosen $R$ so that $2^{3\e^{-1}} <R^{1/3}$. We define the function $v_i$ of class $H^1$ by: $$ \left \{ \begin{array}{ll} v_i(x) = u_i(x) & x \in B_{p_1}(2^{n} \s) \cup B_{p_2}(2^{n} \s), \\ \Delta v_i(x) =0 & x \in A_{p_1}(2^{n} \s, 2^{n+1} \s) \cup A_{p_2}(2^{n}\s, 2^{n+1} \s), \\ v_i(x) = \bar{u}_i(\s) & x \notin B_{p_1}(2^{n+1} \s) \cup B_{p_2}(2^{n+1} \s). \end{array} \right.
$$ Again, $$ \sum_{i=1}^2 \int_{A_{p_i}(2^{n} \s, 2^{n+1} \s)} \left( |\nabla v_1|^2 + |\nabla v_2|^2 \right) \, dV_g \leq $$$$ C \sum_{i=1}^2 \int_{A_{p_i}(2^{n-1} \s, 2^{n+1} \s)} \left (|\n u_1|^2 + |\nabla u_2|^2 \right ) \, dV_g \leq C \varepsilon \int_{\Sg} \left (|\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g, $$ where $C$ is a universal constant. We now apply Lemma \ref{ball} to $(v_1,v_2)$ with $p=p_1$ and $s=2(6R+2)\s$, and take into account \eqref{dentro}: \begin{equation} \label{dentro14} \begin{array}{c} \displaystyle \int_{ B_{p_1}(2^{n} \s) \cup B_{p_2}(2^{n} \s)} Q(u_1,u_2) \, dV_g + C \varepsilon \int_{\Sg} \left ( |\nabla u_1|^2 + |\nabla u_2|^2 \right) \, dV_g +C \geq \\ \\ (1+\e)\displaystyle \int_{B_p(s)}Q(v_1, v_2) \ dV_g +C \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{B_p(s/2)} e^{v_1} dV_g +\log \int_{B_p(s/2)} e^{v_2} dV_g - (\bar{u}_1(s) + \bar{u}_2(s) + 4 \log s ) \right ) \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{\Sg} e^{u_1} dV_g +\log \int_{\Sg} e^{u_2} dV_g - (\bar{u}_1(s) + \bar{u}_2(s) + 4 \log s ) \right )-C. \end{array} \end{equation} Now, we define $w_i \in H^1(\Sg)$ as: $$ \left \{ \begin{array}{ll} w_i(x) = \bar{u}_i(\s) & x \in B_{p_1}(2^{n} \s) \cup B_{p_2}(2^{n} \s), \\ \Delta w_i(x) =0 & x \in A_{p_1}(2^{n} \s, 2^{n+1} \s) \cup A_{p_2}(2^{n}\s, 2^{n+1} \s), \\ w_i(x) = \tilde{u}_i(x) & x \notin B_{p_1}(2^{n+1} \s) \cup B_{p_2}(2^{n+1} \s). \end{array} \right. $$ As before, $$ \sum_{i=1}^2 \int_{A_{p_i}(2^{n} \s, 2^{n+1} \s)} \left (|\nabla w_1|^2 + |\nabla w_2|^2 \right ) \, dV_g \leq $$$$ C \sum_{i=1}^2 \int_{A_{p_i}(2^{n-1} \s, 2^{n+1} \s)} \left (|\n u_1|^2 + |\nabla u_2|^2 \right ) \, dV_g \leq C \varepsilon \int_{\Sg} \left (|\nabla u_1|^2 + |\nabla u_2|^2 \right ) \, dV_g, $$ where also here $C$ is a universal constant.
We apply Lemma \ref{annulus} to $(w_1,w_2)$ for any point $p'$ such that $d(p', p_1) = \frac 1 2 R^{1/3}\s$, $s'= \s$ and $r = 2^{k+1} \delta$: \begin{equation} \label{fuera14} \begin{array}{c}\displaystyle \int_{ ( B_{p_1}(2^{n+1} \s) \cup B_{p_2}(2^{n+1} \s))^c} Q(u_1,u_2) \, dV_g + C \varepsilon \int_{\Sg} \left (|\nabla u_1|^2 + |\n u_2|^2 \right )\, dV_g +C \geq \\ \\ (1+\e)\displaystyle \int_{A_{p'}(s'/2,2 r)}Q(w_1, w_2) \ dV_g +C \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{A_{p'}(s', r)} e^{w_1} dV_g +\log \int_{A_{p'}(s',r)} e^{w_2} dV_g + (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )(1+\e)\right ). \end{array} \end{equation} Taking into account \eqref{fuera} and the hypothesis of Case 1.4, \begin{equation} \label{fuera14-bis} \begin{array}{c}\displaystyle \int_{ ( B_{p_1}(2^n \s) \cup B_{p_2}(2^n \s))^c} Q(u_1,u_2) \, dV_g + C \varepsilon \int_{\Sg} \left ( |\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g +C \geq \\ \\ 4 \pi \left (\log \displaystyle \int_{\Sg} e^{u_1} dV_g +\log \int_{\Sg} e^{u_2} dV_g + (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )(1+\e)\right ). \end{array} \end{equation} Combining inequalities \eqref{dentro14} and \eqref{fuera14-bis}, we obtain our result. \noindent {\bf CASE 2:} Assume that for some $i=1,2$: $$ \int_{B_{p_i}(\delta)^c} e^{u_i} dV_g \geq \tau/2 \int_{\Sg} e^{u_i} dV_g.$$ Without loss of generality, let us consider $i=1$. Take $\delta'= \frac{\delta}{2^{3/\e}}$. If moreover: $$ \int_{B_{p_2}(\delta')^c} e^{u_2} dV_g \geq \tau/2,$$ then Proposition \ref{p:imprc} implies the desired inequality. So, we can assume that: \begin{equation} \label{hia} \int_{A_{p_2}(R\s, \delta')} e^{u_2} dV_g \geq \tau/2.\end{equation} We now apply the whole procedure of Case 1 to $u_1$, $u_2$, replacing $\delta$ with $\delta'$.
For instance, as in Case 1.1, we would get \eqref{dentro11} and \eqref{fuera11}. However, here \eqref{fuera11-bis} does not follow immediately, since now we do not know whether: $$ \int_{A_p(s',r)} e^{u_1} dV_g \geq \alpha \int_{\Sg} e^{u_1} dV_g,$$ for some fixed $\alpha>0$. This is needed to estimate: $$ \log \int_{A_p(s', r)} e^{\tilde{u}_1} dV_g \geq \log \int_{\Sg} e^{u_1} dV_g- C,$$ which allows us to obtain \eqref{fuera11-bis}. By applying the Jensen inequality and Lemma \ref{technical}, we get: $$ \log \int_{A_p(s', r)} e^{\tilde{u}_1} dV_g \geq \log \int_{A_p(r/8, r/4)} e^{u_1} dV_g\geq $$ $$ \log \fint_{A_p(r/8, r/4)} e^{u_1} dV_g-C \geq \fint_{A_p(r/8, r/4)} u_1 dV_g-C \geq -\varepsilon \int_{\Sg} |\nabla u_1|^2\, dV_g - C. $$ Therefore, from \eqref{hia} and \eqref{fuera11} we get: \begin{equation} \label{fuera2} \begin{array}{c} \displaystyle \int_{A_p(s'/2,2 r)}Q(\tilde{u}_1, \tilde{u}_2) \ dV_g + C\varepsilon \int_{\Sg} \left (|\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g+ C \geq \\ \\4 \pi \left (\log \displaystyle \int_{\Sg} e^{{u}_2} dV_g + (\bar{u}_1(\s) + \bar{u}_2(\s) + 4 \log \sigma )(1+\e)\right ).\end{array} \end{equation} Now, we apply Proposition \ref{p:MTbd} to find: $$ (1+C\e) \int_{B_{p_1}(\delta/2)^c} Q(u_1, u_2) dV_g +C \geq 4 \pi \left ( \log \int_{B_{p_1}(\delta)^c} e^{u_1} dV_g + \log \int_{B_{p_1}(\delta)^c} e^{u_2} dV_g \right ). $$ Again, here we can use the Jensen inequality and the hypothesis of Case 2 to deduce: \begin{equation} \label{fuera2-bis} \begin{array}{c} \displaystyle \int_{B_{p_1}(\delta/2)^c} Q(u_1, u_2) dV_g + C\varepsilon \int_{\Sg} \left (|\nabla u_1|^2 + |\nabla u_2|^2 \right )\, dV_g+ C\geq \\ \\4 \pi \left ( \log \displaystyle \int_{\Sg} e^{u_1} dV_g \right ). \end{array}\end{equation} We conclude now by combining \eqref{fuera2-bis}, \eqref{fuera2} and \eqref{dentro11}.
We can argue in the same way if we are under the conditions of Cases 1.2, 1.3 or 1.4. \begin{rem}\label{puffff} The improved inequality in Proposition \ref{mt} is consistent with the asymptotic analysis in \cite{jlw}. There the authors prove that when both $u_1, u_2$ blow up at the same rate at the same point, then the corresponding quantization of conformal volume is $(8\pi, 8\pi)$. On the other hand, when the blow-up rates are different, but occur at the same point, then the quantization values are $(4\pi, 8\pi)$ or $(8\pi,4\pi)$. \end{rem} \section{Min-max scheme}\label{s:4} \noindent Let $\overline{\Sg}_{\delta }$ be as in \eqref{cono}, and let us set \begin{equation}\label{eq:DX} \overline{D}_{\delta } = diag(\overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta }) := \left\{ (\vartheta_1, \vartheta_2) \in \overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta } \; : \; \vartheta_1 = \vartheta_2 \right\}; \qquad \quad X = \overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta } \setminus \overline{D}_{\delta }. \end{equation} Let $\varepsilon > 0$ be such that $\rho_i + \varepsilon < 8 \pi$ for $i = 1, 2$, and let $R, \delta , \psi$ be as in Proposition \ref{covering}. Consider then the map $\Psi : H^1(\Sg) \times H^1(\Sg) \to \overline{\Sg}_{\delta } \times \overline{\Sg}_{\delta }$ defined in the following way: \begin{equation}\label{eq:Psi} \Psi(u_1, u_2) = \left( \psi \left( \frac{e^{u_1}}{\int_{\Sg} e^{u_1} dV_g} \right), \psi \left( \frac{e^{u_2}}{\int_{\Sg} e^{u_2} dV_g} \right) \right). \end{equation} By Proposition \ref{mt}, and since $C \geq h_i(x) \geq \frac{1}{C}>0$ for any $x \in \Sg$, we have that $J_\rho(u_1, u_2)$ is bounded from below for any $(u_1, u_2)$ such that $\Psi(u_1, u_2) \in \overline{D}_{\delta }$. Therefore, there exists a large $L > 0$ such that \begin{equation}\label{eq:psilow} J_\rho(u_1, u_2) \leq - L \quad \Rightarrow \quad \Psi(u_1, u_2) \in X.
\end{equation} \ \noindent By our definition of $\overline{\Sg}_{\delta }$, the set $X$ is not compact; however, it retracts onto some compact subset $\mathcal{X}_\nu$, as shown in the next result. \begin{lem}\label{l:retr} For $\nu \ll \delta $, define $$ \mathcal{X}_{\nu,1} = \left\{ \left( (x_1, t_1), (x_2, t_2) \right) \in X \; : \; \left| t_1 - t_2 \right|^2 + d(x_1, x_2)^2 \geq \delta ^4, \max\{t_1, t_2\} < \delta , \min\{t_1, t_2\} \in \left[ \nu^2, \nu \right] \right\}; $$ $$ \mathcal{X}_{\nu,2} = \left\{ \left( (x_1, t_1), (x_2, t_2) \right) \in X \; : \; \max\{t_1, t_2\} = \delta , \min\{t_1, t_2\} \in \left[ \nu^2, \nu \right] \right\}, $$ and set \begin{equation}\label{eq:retrx} \mathcal{X}_{\nu} = \left( \mathcal{X}_{\nu,1} \cup \mathcal{X}_{\nu,2} \right) \subseteq X. \end{equation} Then there is a retraction $R_{\nu}$ of $X$ onto $\mathcal{X}_{\nu}$. \end{lem} \begin{pf} We proceed in two steps. First, we define a deformation of $X$ into itself, at the end of which every point satisfies: \begin{enumerate} \label{caracola} \item[a)] either $\max\{t_1, t_2\} < \delta $ and $\left| t_1 - t_2 \right|^2 + d(x_1, x_2)^2 \geq \delta ^4$, \item[b)] or $\max\{t_1, t_2\} = \delta $. \end{enumerate} Then another deformation will provide us with the condition $\min\{t_1, t_2\} \in \left[ \nu^2, \nu \right]$. Let us consider the following ODE in $(\Sigma\times (0,\delta])^2$: $$ \frac{d}{ds} \left( \begin{array}{c} x_1(s) \\ t_1(s) \\ x_2(s) \\ t_2(s) \\ \end{array} \right) = \left( \begin{array}{c} (\delta - \max_i \{t_i(s)\}) \n_{x_1} d(x_1(s), x_2(s))^2 \\ (t_1(s) - t_2(s)) t_1(s) (\delta - t_1(s)) \\ (\delta - \max_i \{t_i(s)\}) \n_{x_2} d(x_1(s), x_2(s))^2 \\ (t_2(s) - t_1(s)) t_2(s) (\delta - t_2(s)) \\ \end{array} \right).
$$ Notice that if $\left| t_1 - t_2 \right|^2 + d(x_1, x_2)^2 < \delta^4$ (and $\max\{t_1, t_2\} < \delta$) then $d(x_1, x_2)$ is small, so $d(x_1, x_2)^2$ is a smooth function on $(\Sigma \times \mathbb{R})^2$ and the above vector field is well defined. For each initial datum $(\vartheta_1, \vartheta_2) \in X$ we define $s_{\vartheta_1, \vartheta_2} \geq 0$ as the smallest value of $s$ for which the above flow satisfies either a) or b). To define the first homotopy $H_1(s,\cdot)$ one can then use the above flow, rescaling the evolution variable (depending on the initial datum) as $s \mapsto \tilde{s} = s_{\vartheta_1, \vartheta_2} s$. To define the second homotopy, we introduce two cutoff functions $\chi_1, \chi_2$: $$ \begin{array}{ll} \left\{ \begin{array}{ll} \chi_1(t) = 1 & \hbox{ for } t \leq \nu^2, \\ \chi_1 \mbox{ is non-increasing, } & \\ \chi_1(t) = -1 & \hbox{ for } t \geq \nu, \end{array} \right. & \left\{ \begin{array}{ll} \chi_2(t) = 1 & \hbox{ for } t \leq \delta/2, \\ \chi_2(t) = 2 \left( 1-\frac{t}{\delta}\right) & \hbox{ for } t \in (\delta/2, \delta), \\ \chi_2(t) = 0 & \hbox{ for } t \geq \delta, \end{array} \right. \end{array} $$ and consider the following ODE: $$ \frac{d}{ds} \left( \begin{array}{c} t_1(s) \\ t_2(s) \\ \end{array} \right) = \left( \begin{array}{c} \chi_1(\min_i\{t_i(s)\}) \chi_2(t_1(s)) \\ \chi_1(\min_i\{t_i(s)\}) \chi_2(t_2(s)) \end{array} \right). $$ As in the previous case, there exists $\hat{s}_{\vartheta_1, \vartheta_2}$ such that the condition $\min_i t_i \in [\nu^2, \nu]$ is reached for $s = \hat{s}_{\vartheta_1, \vartheta_2}$, and one can define the homotopy $H_2$ by rescaling in $s$ correspondingly. Observe that along the homotopy $H_2$ the distance $| t_1 - t_2 |$ is non-decreasing if $|t_1-t_2| \leq \delta/4$. The concatenation of the homotopies $H_1$ and $H_2$ gives the desired conclusion.
Note that both $H_1$ and $H_2$, by the way they are constructed, preserve the quotient relations in the definition of $X$. \end{pf} \ \noindent We next construct a family of test functions, parameterized by $\mathcal{X}_{\nu}$, on which $J_{\rho}$ attains large negative values. For $(\vartheta_1, \vartheta_2) = \left( (x_1, t_1), (x_2, t_2) \right) \in \mathcal{X}_\nu$ define \begin{equation}\label{eq:test} \var_{(\vartheta_1, \vartheta_2)}(y) = \left( \var_1(y), \var_2(y) \right), \end{equation} where we have set \begin{equation}\label{eq:varvar12} \var_1(y) = \log \frac{1 + \tilde{t}_2^2 d(x_2,y)^2}{\left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right)^2}, \qquad \qquad \var_2(y) = \log \frac{1 + \tilde{t}_1^2 d(x_1,y)^2}{\left( 1 + \tilde{t}_2^2 d(x_2,y)^2 \right)^2}, \end{equation} with \begin{equation}\label{eq:tti} \tilde{t}_1 = \tilde{t}_1(t_1) = \left\{ \begin{array}{ll} \frac{1}{t_1} & \hbox{ for } t_1 \leq \frac{\delta}{2}, \\ - \frac{4}{\delta^2} (t_1-\delta) & \hbox{ for } t_1 \geq \frac{\delta}{2}; \end{array} \right. \qquad \tilde{t}_2 = \tilde{t}_2(t_2) = \left\{ \begin{array}{ll} \frac{1}{t_2} & \hbox{ for } t_2 \leq \frac{\delta}{2}, \\ - \frac{4}{\delta^2} (t_2-\delta) & \hbox{ for } t_2 \geq \frac{\delta}{2}. \end{array} \right. \end{equation} Notice that, by our choice of $\tilde{t}_1, \tilde{t}_2$, this map is well defined on $\mathcal{X}_\nu$ (especially as regards the identifications in $\overline{\Sg}_\delta$). We have then the following result.
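\ \noindent Before stating it, let us make the well-definedness assertion explicit; this short verification is only a reading aid and uses nothing beyond \eqref{eq:tti}. If $t_i = \delta$ then $\tilde{t}_i = -\frac{4}{\delta^2}(t_i - \delta) = 0$, so that $$ 1 + \tilde{t}_i^2 d(x_i, y)^2 \equiv 1 \qquad \hbox{ for all } y \in \Sg, $$ and hence $\var_1, \var_2$ do not depend on $x_i$; this matches the fact that in $\overline{\Sg}_\delta$ all the points $(x_i, \delta)$, $x_i \in \Sg$, are identified. Moreover, both branches of \eqref{eq:tti} give $\tilde{t}_i = \frac{2}{\delta}$ at $t_i = \frac{\delta}{2}$, so $\tilde{t}_i$ depends continuously on $t_i$.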
\begin{lem}\label{l:integrals} For $\nu$ sufficiently small and for $(\vartheta_1, \vartheta_2) \in \mathcal{X}_{\nu}$, there exists a constant $C = C(\delta,\Sg) > 0$, depending only on $\Sg$ and $\delta$, such that \begin{equation}\label{eq:inttot} \frac{1}{C} \frac{t_i^2}{t_j^2} \leq \int_\Sigma e^{\var_i(y)} dV_g(y) \leq C \frac{t_i^2}{t_j^2}, \qquad \quad i \neq j. \end{equation} \end{lem} \begin{pf} First, we notice that by an elementary change of variables \begin{equation}\label{eq:intfalt} \int_{\mathbb{R}^2} \frac{1}{\left( 1 + \l^2 |x|^2 \right)^2} dx = \frac{C_0}{\l^2}, \qquad \quad \lambda > 0, \end{equation} for some fixed positive constant $C_0$ (indeed, substituting $x \mapsto x/\l$ and passing to polar coordinates gives $C_0 = \pi$). We distinguish next the two cases \begin{equation}\label{eq:alt1} |t_1 - t_2| \geq \delta^3 \qquad \quad \hbox{ and } \qquad \quad |t_1 - t_2| < \delta^3. \end{equation} In the first alternative, by the definition of $\mathcal{X}_{\nu}$ and by the fact that $\nu \ll \delta$, one of the $t_i$'s belongs to $[\nu^2, \nu]$, while the other is greater than or equal to $\frac{\delta^3}{2}$. If $t_1 \in [\nu^2, \nu]$ and $t_2 \geq \frac{\delta^3}{2}$, then the function $1 + \tilde{t}_2^2 d(x_2,y)^2$ is bounded above and below by two positive constants depending only on $\Sg$ and $\delta$. Therefore, working in geodesic normal coordinates centered at $x_1$ and using \eqref{eq:intfalt} we obtain $$ \frac{t_1^2}{C} \leq \frac{1}{C \tilde{t}_1^2} \leq \int_\Sg e^{\var_1(y)} dV_g(y) \leq \frac{C}{\tilde{t}_1^2} \leq C t_1^2.
$$ If instead $t_2 \in [\nu^2, \nu]$ and $t_1 \geq \frac{\delta^3}{2}$, then the function $1 + \tilde{t}_1^2 d(x_1,y)^2$ is bounded above and below by two positive constants depending only on $\Sg$ and $\delta$, hence one finds $$ \int_\Sigma e^{\var_1(y)} dV_g(y) \geq \frac{1}{C} \int_\Sigma (1 + \tilde{t}_2^2 d(x_2,y)^2) dV_g(y) \geq \frac{\tilde{t}_2^2}{C} = \frac{1}{C t_2^2}, $$ and similarly $$ \int_\Sigma e^{\var_1(y)} dV_g(y) \leq C \int_\Sigma (1 + \tilde{t}_2^2 d(x_2,y)^2) dV_g(y) \leq C \tilde{t}_2^2 = \frac{C}{t_2^2}. $$ In both of the last two cases we then obtain the conclusion. Suppose now that $|t_1 - t_2| < \delta^3$: then by the definition of $\mathcal{X}_{\nu}$ we have that $d(x_1, x_2) \geq \frac{\delta^2}{2}$ and that $t_1, t_2 \leq \nu + \delta^3$. Then, from \eqref{eq:intfalt} and some elementary estimates we derive $$ \int_\Sigma e^{\var_1(y)} dV_g(y) \geq \int_{B_{x_1}(\delta^3)} e^{\var_1(y)} dV_g(y) \geq \frac{1}{C} \frac{1 + \tilde{t}_2^2 d(x_1,x_2)^2}{\tilde{t}_1^2} \geq \frac{1}{C} \frac{t_1^2}{t_2^2}. $$ By the same argument we obtain $$ \int_{B_{x_1}(\delta^3)} e^{\var_1(y)} dV_g(y) \leq C \frac{1 + \tilde{t}_2^2 d(x_1,x_2)^2}{\tilde{t}_1^2} \leq C \frac{t_1^2}{t_2^2}. $$ Moreover, we have $$ \int_{(B_{x_1}(\delta^3))^c} e^{\var_1(y)} dV_g(y) \leq \frac{C}{\tilde{t}_1^4} \int_{(B_{x_1}(\delta^3))^c} (1 + \tilde{t}_2^2 d(x_2,y)^2) dV_g(y) \leq C \frac{t_1^4}{t_2^2}. $$ This concludes the proof. \end{pf} \begin{lem}\label{l:dsmallIlow} For $(\vartheta_1, \vartheta_2) \in \mathcal{X}_{\nu}$, let $\var_{(\vartheta_1, \vartheta_2)}$ be defined as in \eqref{eq:test}. Then $$ J_{\rho}(\var_{(\vartheta_1, \vartheta_2)}) \to - \infty \quad \hbox{ as } \nu \to 0 \qquad \quad \hbox{ uniformly for } (\vartheta_1, \vartheta_2) \in \mathcal{X}_{\nu}.
$$ \end{lem} \begin{pf} The statement follows from Lemma \ref{l:integrals} once the following three estimates are shown: \begin{equation}\label{eq:estQ} \int_{\Sg} Q\left( \var_{(\vartheta_1, \vartheta_2)} \right) dV_g \leq 8 \pi (1 + o_{\delta}(1)) \log \frac{1}{t_1} + 8 \pi (1 + o_{\delta}(1)) \log \frac{1}{t_2}; \end{equation} \begin{equation}\label{eq:estlin1} \fint_{\Sg} \var_1 dV_g = 4 (1 + o_{\delta}(1)) \log t_1 - 2 (1 + o_{\delta}(1)) \log t_2; \end{equation} \begin{equation}\label{eq:estlin2} \fint_{\Sg} \var_2 dV_g = 4 (1 + o_{\delta}(1)) \log t_2 - 2 (1 + o_{\delta}(1)) \log t_1. \end{equation} In fact, these yield the inequality $$ J_{\rho}(\var_{(\vartheta_1, \vartheta_2)}) \leq (2 \rho_1 - 8 \pi + o_\delta(1)) \log t_1 + (2 \rho_2 - 8 \pi + o_\delta(1)) \log t_2 \to - \infty \qquad \quad \hbox{ as } \nu \to 0, $$ uniformly for $(\vartheta_1, \vartheta_2) \in \mathcal{X}_\nu$, since $\rho_1, \rho_2 > 4 \pi$. Here again we are using that $C \geq h_i(x) \geq \frac{1}{C} > 0$ for any $x \in \Sg$. We begin by showing \eqref{eq:estlin1}, whose proof clearly also yields \eqref{eq:estlin2}. It is convenient to write $$ \var_1 = \log \left(1 + \tilde{t}_2^2 d(x_2,y)^2 \right) - 2 \log \left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right), $$ and to divide $\Sg$ into the two subsets $$ \mathcal{A}_1 = B_{x_1}(\delta) \cup B_{x_2}(\delta); \qquad \qquad \mathcal{A}_2 = \Sigma \setminus \mathcal{A}_1. $$ For $y \in \mathcal{A}_2$ we have that $$ \frac{1}{C_{\delta,\Sg} t_1^2} \leq 1 + \tilde{t}_1^2 d(x_1,y)^2 \leq \frac{C_{\delta,\Sg}}{t_1^2}; \qquad \qquad \frac{1}{C_{\delta,\Sg} t_2^2} \leq 1 + \tilde{t}_2^2 d(x_2,y)^2 \leq \frac{C_{\delta,\Sg}}{t_2^2}, $$ which implies \begin{equation}\label{eq:int1111} \frac{1}{|\Sg|} \int_{\mathcal{A}_2} \var_1 dV_g = 4 (1 + o_{\delta}(1)) \log t_1 - 2 (1 + o_{\delta}(1)) \log t_2.
\end{equation} On the other hand, working in geodesic normal coordinates at $x_i$ one also finds $$ \int_{B_{x_i}(\delta)} \log \left( 1 + \tilde{t}_i^2 d(x_i,y)^2 \right) dV_g = o_\delta(1) \log t_i. $$ Using \eqref{eq:int1111} and the last formula we then obtain \eqref{eq:estlin1}. Let us now show \eqref{eq:estQ}. We clearly have that \begin{eqnarray*} \nabla \var_1 & = & \nabla \log \left( 1 + \tilde{t}_2^2 d(x_2,y)^2 \right) - 2 \nabla \log \left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right) \\ & = & \frac{2 \tilde{t}_2^2 d(x_2,y) \n_y d(x_2,y)}{1 + \tilde{t}_2^2 d(x_2,y)^2} - \frac{4 \tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2}, \end{eqnarray*} and similarly \begin{eqnarray*} \nabla \var_2 & = & \nabla \log \left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right) - 2 \nabla \log \left( 1 + \tilde{t}_2^2 d(x_2,y)^2 \right) \\ & = & \frac{2 \tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2} - \frac{4 \tilde{t}_2^2 d(x_2,y) \n_y d(x_2,y)}{1 + \tilde{t}_2^2 d(x_2,y)^2}. \end{eqnarray*} From now on we will assume, without loss of generality, that $t_1 \leq t_2$. We distinguish between the cases $t_2 \geq \delta^3$ and $t_2 \leq \delta^3$. In the first case the function $1 + \tilde{t}_2^2 d(x_2,y)^2$ is uniformly Lipschitz, with bounds depending only on $\delta$, and therefore we can write $$ \nabla \var_1 = - \frac{4 \tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2} + O_\delta(1); \qquad \quad \nabla \var_2 = \frac{2 \tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2} + O_\delta(1). $$ Given a large but fixed constant $C_1 > 0$, we divide the surface $\Sg$ into the three regions \begin{equation}\label{eq:3regions} \mathcal{B}_1 = B_{x_1}(C_1 t_1); \qquad \quad \mathcal{B}_2 = B_{x_2}(C_1 t_2); \qquad \quad \mathcal{B}_3 = \Sigma \setminus (\mathcal{B}_1 \cup \mathcal{B}_2).
\end{equation} In $\mathcal{B}_1$ we have that $|\nabla \var_i| \leq C \tilde{t}_1$, while \begin{equation}\label{eq:sim1} \frac{\tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2} = (1 + o_{C_1}(1)) \frac{ \n_y d(x_1,y)}{d(x_1,y)} \qquad \quad \hbox{ in } \Sigma \setminus \mathcal{B}_1. \end{equation} The last gradient estimates imply that \begin{eqnarray}\label{eq:estQt2large} \nonumber \int_{\Sg} Q(\var_{(\vartheta_1, \vartheta_2)}) dV_g & = & \int_{\Sigma \setminus \mathcal{B}_1} Q(\var_{(\vartheta_1, \vartheta_2)}) dV_g + o_{\delta}(1) \log \frac{1}{t_1} + O_\delta(1) \\ & = & 8 \pi \int_{C_1 t_1}^1 \frac{dt}{t} + o_{\delta}(1) \log \frac{1}{t_1} + O_\delta(1) \\ & = & 8 \pi (1 + o_{\delta}(1)) \log \frac{1}{t_1} + 8 \pi (1 + o_{\delta}(1)) \log \frac{1}{t_2} + O_\delta(1); \qquad \quad t_2 \geq \delta^3. \nonumber \end{eqnarray} Assume now that $t_2 \leq \delta^3$. Then by the definition of $\mathcal{X}_{\nu}$ we have that $d(x_1, x_2) \geq \frac{\delta^2}{2}$, and therefore $\mathcal{B}_1 \cap \mathcal{B}_2 = \emptyset$. Similarly to \eqref{eq:sim1} we find $$ \left\{ \begin{array}{ll} \frac{\tilde{t}_1^2 d(x_1,y) \n_y d(x_1,y)}{1 + \tilde{t}_1^2 d(x_1,y)^2} = (1 + o_{C_1}(1)) \frac{ \n_y d(x_1,y)}{d(x_1,y)}; & \\ \frac{\tilde{t}_2^2 d(x_2,y) \n_y d(x_2,y)}{1 + \tilde{t}_2^2 d(x_2,y)^2} = (1 + o_{C_1}(1)) \frac{ \n_y d(x_2,y)}{d(x_2,y)} & \end{array} \right. \qquad \quad \hbox{ in } \mathcal{B}_3. $$ Moreover we have the estimates $$ \left| \nabla \var_i \right| \leq C \tilde{t}_i \quad \hbox{ in } \mathcal{B}_i, \ i=1,\ 2; \qquad \qquad \left| \nabla \var_i \right| \leq C \quad \hbox{ in } \mathcal{B}_j, \ i \neq j.
$$ Then it follows that \begin{eqnarray}\label{eq:estQt2small} \nonumber \int_{\Sg} Q(\var_{(\vartheta_1, \vartheta_2)}) dV_g & = & \int_{\mathcal{B}_3} Q(\var_{(\vartheta_1, \vartheta_2)}) dV_g + o_{\delta}(1) \log \frac{1}{t_1} + o_{\delta}(1) \log \frac{1}{t_2} + O_\delta(1) \\ & = & 8 \pi (1 + o_{\delta}(1)) \log \frac{1}{t_1} + 8 \pi (1 + o_{\delta}(1)) \log \frac{1}{t_2} + O_\delta(1); \qquad \quad t_2 \leq \delta^3. \end{eqnarray} Formulas \eqref{eq:estQt2large} and \eqref{eq:estQt2small} conclude the proof of \eqref{eq:estQ}, and hence that of the lemma. \end{pf} \ \noindent Since the functional $J_\rho$ attains large negative values on the above test functions $\var_{(\vartheta_1, \vartheta_2)}$, these are mapped into $X$ by $\Psi$. We next evaluate the image under $\Psi$ with more precision, beginning with the following technical lemma. \begin{lem}\label{l:concscale} Let $\var_1, \var_2$ be as in \eqref{eq:varvar12}. Then, for some $C = C(\delta,\Sg) > 0$, the following estimate holds uniformly in $(\vartheta_1, \vartheta_2) \in \mathcal{X}_\nu$: \begin{equation}\label{eq:noconct1d} \sup_{x \in \Sg} \int_{B_x(r t_i)} e^{\var_i} dV_g \leq C r^2 \frac{t_i^2}{t_j^2} \qquad \quad \forall r > 0,\ i \neq j. \end{equation} Moreover, given any $\e > 0$ there exists $C = C(\e, \delta, \Sg)$, depending only on $\e$, $\delta$ and $\Sg$ (but not on $\nu$), such that \begin{equation}\label{eq:noconct1d3} \int_{B_{x_i}(C t_i)} e^{\var_i} dV_g \geq (1 - \e) \int_{\Sg} e^{\var_i(y)} dV_g, \qquad i=1,\ 2, \end{equation} uniformly in $(\vartheta_1, \vartheta_2) \in \mathcal{X}_\nu$. \end{lem} \begin{pf} We prove the case $i=1$. Observe that $1 + \tilde{t}_2^2 d(x_2,y)^2 \leq \frac{C}{t_2^2}$ and that $1 + \tilde{t}_1^2 d(x_1,y)^2 \geq 1$.
Therefore we immediately find $$ \int_{B_x(t_1 r)} e^{\var_1} dV_g \leq \frac{C}{t_2^2} \int_{B_x(t_1 r)} \frac{1}{\left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right)^2} dV_g(y) \leq C r^2 \frac{t_1^2}{t_2^2} \qquad \quad \hbox{ for all } x \in \Sg, $$ which gives the inequality in \eqref{eq:noconct1d}. We now show \eqref{eq:noconct1d3} by evaluating the integral in the complement of $B_{x_1}(R t_1)$ for some large $R$. Using again the fact that $1 + \tilde{t}_2^2 d(x_2,y)^2 \leq \frac{C}{t_2^2}$, we clearly have that \begin{equation}\label{eq:miao} \int_{\Sigma \setminus B_{x_1}(R t_1)} e^{\var_1(y)} dV_g(y) \leq \frac{C}{t_2^2} \int_{\Sigma \setminus B_{x_1}(R t_1)} \frac{1}{\left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right)^2} dV_g(y). \end{equation} To evaluate the last integral one can use geodesic normal coordinates centered at $x_1$ and \eqref{eq:intfalt} with a change of variables, to find that $$ \lim_{t_1 \to 0^+} t_1^{-2} \int_{\Sigma \setminus B_{x_1}(R t_1)} \frac{1}{\left( 1 + \tilde{t}_1^2 d(x_1,y)^2 \right)^2} dV_g = o_R(1) \qquad \quad \hbox{ as } R \to + \infty. $$ This and \eqref{eq:miao}, together with the second inequality in \eqref{eq:inttot}, conclude the proof of \eqref{eq:noconct1d3}, choosing $R$ sufficiently large depending on $\e$, $\delta$ and $\Sg$. \end{pf} \ \noindent We next show that, parameterizing the test functions by $\mathcal{X}_{\nu}$ and composing with $R_{\nu} \circ \Psi$, we obtain a map homotopic to the identity on $\mathcal{X}_{\nu}$. This step will be fundamental in order to run the variational scheme later in this section. \begin{lem}\label{l:homid} Let $L > 0$ be so large that $\Psi(\{ J_\rho \leq - L \}) \subseteq X$, and let $\nu$ be so small that $J_\rho(\var_{(\vartheta_1, \vartheta_2)}) < - L$ for $(\vartheta_1, \vartheta_2) \in \mathcal{X}_{\nu}$ (see Lemma \ref{l:dsmallIlow}). Let $R_{\nu}$ be the retraction given in Lemma \ref{l:retr}.
Then the map $T_\nu : \mathcal{X}_{\nu} \to \mathcal{X}_{\nu}$ defined as $$ T_\nu((\vartheta_1, \vartheta_2)) = R_{\nu} (\Psi(\var_{(\vartheta_1, \vartheta_2)})) $$ is homotopic to the identity on $\mathcal{X}_{\nu}$. \end{lem} \begin{pf} Let us write $\vartheta_i = (x_i, t_i)$ and $$ f_i = \frac{e^{\var_i}}{\int_{\Sg} e^{\var_i} dV_g}, \qquad \psi(f_i) = (\b_i, \s_i), $$ where $\psi$ is given in Proposition \ref{covering}. First, we claim that there is a constant $C = C(\delta,\Sg) > 0$, depending only on $\Sg$ and $\delta$, such that \begin{equation}\label{eq:betatest} \frac{1}{C} \leq \frac{\s_i}{t_i} \leq C, \qquad \qquad d \left( \b_i , x_i \right) \leq C t_i. \end{equation} By \eqref{eq:noconct1d3} we have that $$ \s\left(x_i, f_i \right) \leq C t_i, $$ where $\s(x,f)$ is the continuous map defined in \eqref{sigmax}. From this we get that $\s_i \leq C t_i$. Using now \eqref{eq:noconct1d}, we get the relation $t_i \leq C \s_i$. Taking into account that $\s(x_i, f_i) \leq C t_i$ and \eqref{dett}, we obtain that $$ d \left(x_i, S\left(f_i \right) \right) \leq C t_i, $$ where $S(f)$ is the set defined in \eqref{defS}. But since the inequality $$ d\left(\b_i, S\left(f_i \right) \right) \leq C \s_i $$ is always satisfied, we conclude the proof of \eqref{eq:betatest}. We are now ready to prove the lemma. Let us define a first deformation $H_1$ in the following form: $$ \left( \left( \begin{array}{c} (\b_1, \s_1) \\ (\b_2, \s_2) \\ \end{array} \right), s \right) \;\; \stackrel{\small H_1}{\longmapsto} \;\; \left( \begin{array}{c} \left( \b_1,\ (1-s) \s_1 + s \kappa_1 \right) \\ \\ \left( \b_2,\ (1-s) \s_2 + s \kappa_2 \right) \end{array} \right), $$ where $\kappa_i = \min \left\{ \delta, \frac{\s_i}{\sqrt{\nu}} \right\}$.
A second deformation $H_2$ is defined in the following way: $$ \left( \left( \begin{array}{c} (\b_1, \kappa_1) \\ (\b_2, \kappa_2) \\ \end{array} \right), s \right) \;\; \stackrel{\small H_2}{\longmapsto} \;\; \left( \begin{array}{c} \left( (1-s)\b_1 + s x_1, \ \kappa_1 \right) \\ \\ \left( (1-s)\b_2 + s x_2,\ \kappa_2 \right) \end{array} \right), $$ where $(1-s)\b_i + s x_i$ stands for the point on the geodesic joining $\b_i$ and $x_i$ in unit time. A comment is in order here. If $\kappa_i < \delta$, then $\s_i < \sqrt{\nu} \delta$; by choosing $\nu$ small enough, this implies that $\b_i$ and $x_i$ are close to each other (recall \eqref{eq:betatest}). If instead $\kappa_i = \delta$, the identification in $\overline{\Sg}_\delta$ makes the above deformation trivial. We also use a third deformation $H_3$: $$ \left( \left( \begin{array}{c} (x_1, \kappa_1) \\ (x_2, \kappa_2) \\ \end{array} \right), s \right) \;\; \stackrel{\small H_3}{\longmapsto} \;\; \left( \begin{array}{c} \left( x_1,\ (1-s) \kappa_1 + s t_1 \right) \\ \\ \left( x_2,\ (1-s) \kappa_2 + s t_2 \right) \end{array} \right). $$ We define $H$ as the concatenation of these three homotopies. Then $$ ((\vartheta_1, \vartheta_2), s) \mapsto R_{\nu} \circ H(\Psi(\var_{(\vartheta_1, \vartheta_2)}), s) $$ gives the desired homotopy to the identity. Observe that, since $\nu \ll \delta$, $H(\Psi(\var_{(\vartheta_1, \vartheta_2)}), s)$ always stays in $X$, so that $R_{\nu}$ can be applied. \end{pf} \ \noindent We now introduce the variational scheme which yields existence of solutions; this remaining part follows the ideas of \cite{djlw} (see also \cite{mal}).
Let $\overline{\mathcal{X}}_{\nu}$ denote the (contractible) cone over $\mathcal{X}_{\nu}$, which can be represented as $$ \overline{\mathcal{X}}_{\nu} = \left( \mathcal{X}_{\nu} \times [0,1] \right)|_{\sim}, $$ where the equivalence relation $\sim$ collapses $\mathcal{X}_{\nu} \times \{1\}$ to a single point. We choose $L > 0$ so large that \eqref{eq:psilow} holds, and then $\nu$ so small that $$ J_{\rho}(\var_{(\vartheta_1, \vartheta_2)}) \leq - 4L \qquad \quad \hbox{ uniformly for } (\vartheta_1, \vartheta_2) \in \mathcal{X}_{\nu}, $$ the latter being possible by Lemma \ref{l:dsmallIlow}. Fixing this value of $\nu$, consider the following class of maps: \begin{equation}\label{eq:PiPi} \Gamma = \left\{ \eta : \overline{\mathcal{X}}_{\nu} \to H^1(\Sg) \; : \; \eta \hbox{ is continuous and } \eta(\cdot \times \{0\}) = \var_{(\vartheta_1, \vartheta_2)} \hbox{ on } \mathcal{X}_{\nu} \right\}. \end{equation} Then we have the following properties. \begin{lem}\label{l:min-max} The set $\Gamma$ is non-empty and moreover, letting $$ \alpha = \inf_{\eta \in \Gamma} \; \sup_{m \in \overline{\mathcal{X}}_{\nu}} J_\rho(\eta(m)), \qquad \hbox{ one has } \qquad \alpha > - 2 L. $$ \end{lem} \begin{pf} To prove that $\Gamma \neq \emptyset$, we just notice that the map \begin{equation}\label{eq:ovPi} \tilde{\eta}(\vartheta,s) = (1-s) \var_{(\vartheta_1, \vartheta_2)}, \qquad \qquad (\vartheta,s) \in \overline{\mathcal{X}}_{\nu}, \end{equation} belongs to $\Gamma$ (note that it is constant on $\mathcal{X}_{\nu} \times \{1\}$, hence well defined on the quotient). Suppose by contradiction that $\alpha \leq - 2L$: then there would exist a map $\eta \in \Gamma$ satisfying the condition $\sup_{m \in \overline{\mathcal{X}}_{\nu}} J_\rho(\eta(m)) \leq - L$.
Then, since Lemma \ref{l:homid} applies, writing $m = (\vartheta, s)$ with $\vartheta \in \mathcal{X}_{\nu}$, the map $$ s \mapsto R_{\nu} \circ \Psi \circ \eta(\cdot,s) $$ would be a homotopy in $\mathcal{X}_{\nu}$ between $R_{\nu} \circ \Psi \circ \var_{(\vartheta_1, \vartheta_2)}$ and a constant map. But this is impossible, since $\mathcal{X}_{\nu}$ is non-contractible (by the results in Section \ref{s:app} and by the fact that $\mathcal{X}_{\nu}$ is a retract of $X$) and since $R_{\nu} \circ \Psi \circ \var_{(\vartheta_1, \vartheta_2)}$ is homotopic to the identity on $\mathcal{X}_{\nu}$. Therefore $\alpha > - 2 L$, which is the desired conclusion. \end{pf} \ From the above lemma, the functional $J_{\rho}$ satisfies suitable structural properties for min-max theory. However, we cannot directly conclude the existence of a critical point, since it is not known whether the Palais-Smale condition holds. The conclusion needs a different argument, which has been used intensively in the literature (see for instance \cite{djlw, dm}), so we will be sketchy. \ \noindent We take $\mu > 0$ such that $\mathcal{J}_i := [\rho_i - \mu, \rho_i + \mu]$ is contained in $(4 \pi, 8 \pi)$ for both $i = 1, 2$. We then consider $\tilde{\rho}_i \in \mathcal{J}_i$ and the functional $J_{\tilde{\rho}}$ corresponding to these values of the parameters. Following the estimates of the previous sections, one easily checks that the above min-max scheme applies uniformly for $\tilde{\rho}_i \in \mathcal{J}_i$ and for $\nu$ sufficiently small. More precisely, given any large number $L > 0$, there exists $\nu$ so small that for $\tilde{\rho}_i \in \mathcal{J}_i$ \begin{equation}\label{eq:min-maxrho} \sup_{m \in \partial \overline{\mathcal{X}}_{\nu}} J_{\tilde{\rho}}(m) < - 4 L; \qquad \qquad \alpha_{\tilde{\rho}} := \inf_{\eta \in \Gamma} \; \sup_{m \in \overline{\mathcal{X}}_{\nu}} J_{\tilde{\rho}}(\eta(m)) > - 2L, \qquad \tilde{\rho} = (\tilde{\rho}_1, \tilde{\rho}_2),
\end{equation} where $\Gamma$ is defined in \eqref{eq:PiPi}. Moreover, using for example the test map \eqref{eq:ovPi}, one shows that for $\mu$ sufficiently small there exists a large constant $\overline{L}$ such that \begin{equation}\label{eq:ovlovl} \alpha_{\tilde{\rho}} \leq \overline{L} \qquad \qquad \hbox{ for } \tilde{\rho}_i \in \mathcal{J}_i. \end{equation} \ \noindent Under these conditions, the following lemma is well known, and usually goes under the name of ``monotonicity trick''. This technique was first introduced by Struwe in \cite{struwe}, and was given a general form in \cite{jeanjean} (see also \cite{djlw, lucia}). \begin{lem}\label{l:arho} Let $\nu$ be so small that \eqref{eq:min-maxrho} holds. Then the functional $J_{t \rho}$ possesses a bounded Palais-Smale sequence $(u_l)_l$ at level $\alpha_{t \rho}$ for almost every $t \in \left[ 1 - \frac{\mu}{16 \pi}, 1 + \frac{\mu}{16 \pi} \right]$. \end{lem} \ \begin{pfn} {\sc of Theorem \ref{t:main}.} The existence of a bounded Palais-Smale sequence for $J_{t \rho}$ implies, by standard arguments, that this functional possesses a critical point. Let now $t_j \to 1$, with $t_j$ belonging to the set $\Lambda$ of parameters for which Lemma \ref{l:arho} applies, and let $(u_{1,j}, u_{2,j})$ denote the corresponding solutions. It is then sufficient to apply the compactness result in Theorem \ref{th:jlw}, which yields convergence of $(u_{1,j}, u_{2,j})_j$ by the fact that $\rho_1, \rho_2$ are not multiples of $4 \pi$. \end{pfn} \section{Appendix: the set $X = \overline{\Sg}_{\delta} \times \overline{\Sg}_{\delta} \setminus \overline{D}_\delta$ is not contractible}\label{s:app} Without loss of generality, we consider the case $\delta = 1$ (see \eqref{cono}), and we denote $\overline{\Sg} = \overline{\Sg}_{1}$. If $\Sg = \mathbb{S}^2$, we have a complete description of $X$. Indeed, in this case $\overline{\Sg}$ can be identified with $B(0,1) \subset \mathbb{R}^3$.
Therefore we have $$ X = (B(0,1) \times B(0,1)) \setminus E, $$ where $E = \{x \in \mathbb{R}^6 :\ x_i = x_{i+3},\ i=1,\ 2,\ 3\}$. By taking the orthogonal projection onto $E^{\bot}$, we see that $X \simeq U \setminus \{0\}$ (here $\simeq$ stands for homotopy equivalence), where $U \subset E^{\bot}$ is a convex neighborhood of $0$. And, clearly, $U \setminus \{0\} \simeq \mathbb{S}^2$. The case of positive genus is not so easy, and we have a less complete description of $X$. However, we will prove that it is non-contractible by studying its cohomology groups $H^*(X)$, where the coefficients are taken in $\mathbb{R}$. Indeed, we will show: \begin{pro} \label{pro51} If the genus of $\Sg$ is positive, then $H^4(X)$ is nontrivial. \end{pro} \begin{pf} In what follows, the elements of $\overline{\Sg}$ will be written as $(x,t)$, with $x \in \Sg$, $t \in (0,1]$. Clearly $X = Y \cup Z$, where $Y$, $Z$ are the open sets defined as: $$ Y = \{((x_1,t_1), (x_2,t_2)) \in \overline{\Sg} \times \overline{\Sg} :\ t_1 \neq t_2 \}, $$ $$ Z = \{((x_1,t_1), (x_2,t_2)) \in \overline{\Sg} \times \overline{\Sg} :\ t_1 < 1,\ t_2 < 1,\ x_1 \neq x_2 \}. $$ Then the Mayer-Vietoris theorem gives the exactness of the sequence $$ \cdots \rightarrow H^3(X) \rightarrow H^3(Y) \oplus H^3(Z) \rightarrow H^3(Y \cap Z) \rightarrow H^4(X) \rightarrow \cdots $$ Since our coefficients are real, the above cohomology groups are in fact real vector spaces. The exactness of the sequence then gives: \begin{equation} \label{mv} \dim(H^3(Y \cap Z)) \leq \dim(H^4(X)) + \dim(H^3(Y) \oplus H^3(Z)). \end{equation} Let us describe the sets involved above.
First of all, observe that $Y = Y_1 \cup Y_2$ has two connected components: $$ Y_i = \{((x_1,t_1), (x_2,t_2)) \in \overline{\Sg} \times \overline{\Sg} :\ t_i > t_j, \ j \neq i \}. $$ To study $Y_1$, we define the following deformation retraction: $$ r_1 : Y_1 \to Y_1, \qquad r_1((x_1,t_1), (x_2,t_2)) = ((x_1,1), (x_2,1/2)). $$ Clearly, $r_1(Y_1) = \{0\} \times \left(\Sigma \times \{1/2\}\right)$, which is homeomorphic to $\Sg$. Analogously, $Y_2 \simeq \Sg$, and so $Y \simeq \Sg \,\dot{\cup}\, \Sg$ (here $\dot{\cup}$ stands for the disjoint union). For what concerns $Z$, we can define the deformation retraction $$ r : Z \to Z, \qquad r((x_1,t_1), (x_2,t_2)) = ((x_1,1/2), (x_2,1/2)). $$ Observe that $r(Z) = \left( \Sigma \times \{1/2\} \times \Sigma \times \{1/2\} \right) \setminus \overline{D}$, which is homeomorphic to $\Sg \times \Sigma \setminus D$, where $D$ is the diagonal of $\Sigma \times \Sg$. Let us set $$ A = \Sigma \times \Sigma \setminus D, $$ since this set will appear many times in what follows. Moreover, $Y \cap Z = (Y_1 \cap Z) \cup (Y_2 \cap Z)$, and so it also has two connected components. Here too we have a deformation retraction: $$ r'_1 : Y_1 \cap Z \to Y_1 \cap Z, \qquad r'_1((x_1,t_1), (x_2,t_2)) = ((x_1,1/2), (x_2,1/3)). $$ It is clear that $r'_1(Y_1 \cap Z)$ is homeomorphic to $A = \Sg \times \Sigma \setminus D$. Analogously we can argue for $Y_2 \cap Z$; therefore, $Y \cap Z \simeq A \,\dot{\cup}\, A$. Hence, from \eqref{mv} we obtain: \begin{equation} \label{H30} \dim(H^4(X)) \geq \dim(H^3(A)). \end{equation} Let us now compute the cohomology of $A = \Sigma \times \Sigma \setminus D$. Given $\e > 0$, let us define $$ B = \{(x,y) \in \Sigma \times \Sg :\ d(x,y) < \e \}, $$ which is an open neighborhood of $D$. Clearly, we can use the local contractibility of $\Sg$ to retract $B$ onto $D$. Moreover, $A \cup B = \Sigma \times \Sg$.
The Mayer-Vietoris theorem yields the exact sequence \begin{equation} \label{mv2} \cdots \rightarrow H^2(A \cap B) \rightarrow H^3(\Sigma \times \Sg) \rightarrow H^3(A) \oplus H^3(B) \rightarrow H^3(A \cap B) \rightarrow \cdots \end{equation} Therefore, in order to study $H^3(A)$ we need some information on $H^*(A \cap B)$. By using the exponential map, we can define a homeomorphism $$ h : A \cap B = \{(x,y) \in \Sigma \times \Sg :\ 0 < d(x,y) < \e \} \to \{(x,v) \in T\Sg :\ 0 < \|v\| < \e\}, $$ $$ h(x,y) = (x,v) \in T\Sigma \mbox{ such that } \exp_x(v) = y, $$ where $T\Sg$ is the tangent bundle of $\Sg$. Therefore, $A \cap B$ is homotopically equivalent to the unit tangent bundle $UT\Sg$. The cohomology of $UT\Sg$ is certainly well known, but we have not been able to find a precise reference, so we state and prove the following lemma. \begin{lem} Let us denote by $g = g(\Sg)$ the genus of $\Sg$. Then: \begin{enumerate} \item if $g = 1$, $H^0(UT\Sg) \cong H^3(UT\Sg) \cong \mathbb{R}$ and $H^1(UT\Sg) \cong H^2(UT\Sg) \cong \mathbb{R}^3$; \item if $g \neq 1$, $H^0(UT\Sg) \cong H^3(UT\Sg) \cong \mathbb{R}$ and $H^1(UT\Sg) \cong H^2(UT\Sg) \cong \mathbb{R}^{2g}$. \end{enumerate} \end{lem} \begin{pf} We only need to compute $H^1(UT\Sg)$ and $H^2(UT\Sg)$. If $g = 1$, that is, $\Sigma \simeq \mathbb{T}^2$, then $T\Sg$ is trivial and hence $UT\Sigma \simeq \mathbb{T}^2 \times \mathbb{S}^1 \simeq \mathbb{T}^3$. The K{\"u}nneth formula then gives the result. If $g \neq 1$, we use the Gysin exact sequence (see Proposition 14.33 of \cite{bott-tu}): $$ 0 \rightarrow H^1(\Sg) \rightarrow H^1(UT\Sg) \rightarrow H^0(\Sg) \stackrel{\wedge e}{\rightarrow} H^2(\Sg) \rightarrow H^2(UT\Sg) \rightarrow H^1(\Sg) \rightarrow H^3(\Sg) = 0. $$ In the above sequence, $\wedge e$ denotes the wedge product with the Euler class $e$.
Since we are working with real coefficients and the Euler characteristic of $\Sg$ is different from zero, $\wedge e$ is an isomorphism. Therefore we conclude that $$ H^1(UT\Sg) \cong H^1(\Sg) \cong \mathbb{R}^{2g}, \qquad H^2(UT\Sg) \cong H^1(\Sg) \cong \mathbb{R}^{2g}. $$ \end{pf} \begin{rem} We have chosen real coefficients since they simplify our arguments and are enough for our purposes. As a counterpart, the above computations do not take into account the torsion part. For instance, it is known that $UT\mathbb{S}^2 = \mathbb{R}\mathbb{P}^3$ (see \cite{montesinos}). \end{rem} We now come back to the proof of Proposition \ref{pro51}. With this information, \eqref{mv2} becomes $$ \cdots \rightarrow H^2(A \cap B) \rightarrow \mathbb{R}^{4g} \rightarrow H^3(A) \rightarrow \mathbb{R} \rightarrow \cdots $$ In the above sequence we have computed $H^3(\Sigma \times \Sg) \cong \mathbb{R}^{4g}$ using the K{\"u}nneth formula; indeed, since $H^3(\Sg) = 0$, only the terms $H^1(\Sg) \otimes H^2(\Sg)$ and $H^2(\Sg) \otimes H^1(\Sg)$ contribute, each of dimension $2g$. Then $4g \leq \dim(H^2(UT\Sg)) + \dim(H^3(A))$. Therefore $\dim(H^3(A)) \geq 2g$ if $g > 1$, and $\dim(H^3(A)) \geq 1$ if $g = 1$. In either case we conclude by \eqref{H30}. \end{pf} \begin{thebibliography}{99} \bibitem{bott-tu} R. Bott and L. W. Tu, Differential forms in algebraic topology, Graduate Texts in Mathematics, 82, Springer-Verlag, New York-Berlin, 1982. \bibitem{cl} W. X. Chen and C. Li, Prescribing Gaussian curvatures on surfaces with conical singularities, J. Geom. Anal. 1-4 (1991), 359-372. \bibitem{cl2} W. X. Chen and C. Li, Classification of solutions of some nonlinear elliptic equations, Duke Math. J. 63 (1991), no. 3, 615-622. \bibitem{clin} C. C. Chen and C. S. Lin, Topological degree for a mean field equation on Riemann surfaces, Comm. Pure Appl. Math. 56-12 (2003), 1667-1727. \bibitem{sw} M. Chipot, I. Shafrir and G. Wolansky, On the solutions of Liouville systems, J. Differential Equations 140 (1997), no. 1, 59-105. \bibitem{demarchis} F.
De Marchis, }{Generic multiplicity for a scalar field equation on compact surfaces, }{J. Funct. Anal. 259 (2010), 2165-2192.} \bibitem{djlw}{W. Ding, J. Jost, J. Li and G. Wang, }{Existence results for mean field equations, }{Ann. Inst. Henri Poincar\'e, Anal. Non Lin\'eaire 16-5 (1999), 653-666.} \bibitem{dja}{Z. Djadli, }{Existence result for the mean field problem on Riemann surfaces of all genus, }{Comm. Contemp. Math. 10 (2008), no. 2, 205-220.} \bibitem{dm}{Z. Djadli and A. Malchiodi, }{Existence of conformal metrics with constant $Q$-curvature, }{Annals of Math. 168 (2008), no. 3, 813-858.} \bibitem{dunne}{G. Dunne, }{Self-dual Chern-Simons Theories, }{Lecture Notes in Physics, vol. 36, Berlin: Springer-Verlag, 1995.} \bibitem{guest}{M. A. Guest, }{Harmonic maps, loop groups, and integrable systems, }{London Mathematical Society Student Texts, 38. Cambridge University Press, Cambridge, 1997.} \bibitem{hat}{A. Hatcher, }{Algebraic Topology, }{Cambridge University Press, 2002.} \bibitem{jeanjean}{L. Jeanjean, }{On the existence of bounded Palais-Smale sequences and applications to a Landesman-Lazer type problem set on $R^N$, }{Proc. Roy. Soc. Edinburgh A 129 (1999), 787-809.} \bibitem{jlw}{J. Jost, C. S. Lin and G. Wang, }{Analytic aspects of the Toda system II. Bubbling behavior and existence of solutions, }{Comm. Pure Appl. Math. 59 (2006), 526-558.} \bibitem{jw2}{J. Jost and G. Wang, }{Classification of solutions of a Toda system in $\mathbb{R}^2$, }{Int. Math. Res. Not. 2002 (2002), 277-290.} \bibitem{jw}{J. Jost and G. Wang, }{Analytic aspects of the Toda system I. A Moser-Trudinger inequality, }{Comm. Pure Appl. Math. 54 (2001), 1289-1319.} \bibitem{ll}{J. Li and Y. Li, }{Solutions for Toda systems on Riemann surfaces, }{Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 4 (2005), no. 4, 703-728.} \bibitem{lin-zhang}{C. S. Lin and L. Zhang, }{A Topological Degree Counting for Some Liouville Systems of Mean Field Type, }{Comm. Pure Appl. Math.
64 (2011), 556-590.} \bibitem{lucia}{M. Lucia, }{A deformation lemma with an application to a mean field equation, }{Topol. Methods Nonlinear Anal. 30 (2007), no. 1, 113-138.} \bibitem{mreview}{A. Malchiodi, }{Topological methods for an elliptic equation with exponential nonlinearities, }{Discrete Contin. Dyn. Syst. 21 (2008), no. 1, 277-294.} \bibitem{mal}{A. Malchiodi, }{Morse theory and a scalar field equation on compact surfaces, }{Adv. Diff. Eq. 13 (2008), 1109-1129.} \bibitem{cheikh}{A. Malchiodi and C. B. Ndiaye, }{Some existence results for the Toda system on closed surfaces, }{Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. Rend. Lincei (9) Mat. Appl. 18 (2007), no. 4, 391-412.} \bibitem{mr}{A. Malchiodi and D. Ruiz, }{New improved Moser-Trudinger inequalities and singular Liouville equations on compact surfaces, }{to appear in GAFA, preprint arXiv:1007.3861.} \bibitem{montesinos}{J. M. Montesinos, }{Classical Tessellations and Three-Manifolds, }{Springer-Verlag, Berlin-Heidelberg, 1987.} \bibitem{sw2}{I. Shafrir and G. Wolansky, }{Moser-Trudinger and logarithmic HLS inequalities for systems, }{J. Eur. Math. Soc. 7-4 (2005), 413-448.} \bibitem{struwe}{M. Struwe, }{On the evolution of harmonic mappings of Riemannian surfaces, }{Comment. Math. Helv. 60 (1985), 558-581.} \bibitem{st}{M. Struwe and G. Tarantello, }{On multivortex solutions in Chern-Simons gauge theory, }{Boll. Unione Mat. Ital., Sez. B, Artic. Ric. Mat. (8)-1 (1998), 109-121.} \bibitem{tar}{G. Tarantello, }{Self-Dual Gauge Field Vortices: An Analytical Approach, }{PNLDE 72, Birkh\"auser Boston, Inc., Boston, MA, 2007.} \bibitem{tar3}{G. Tarantello, }{Analytical, geometrical and topological aspects of a class of mean field equations on surfaces, }{Discrete Contin. Dyn. Syst. 28 (2010), no. 3, 931-973.} \bibitem{wang}{G. Wang, }{Moser-Trudinger inequalities and Liouville systems, }{C. R. Acad. Sci. Paris S{\'e}r. I Math. 328 (1999), no. 10, 895-900.} \bibitem{yys}{Y.
Yang, }{Solitons in Field Theory and Nonlinear Analysis, }{Springer-Verlag, 2001.} \end{thebibliography} \end{document}
\begin{document} \title[On the Lodha--Moore groups]{HNN decompositions of the Lodha--Moore groups, and topological applications} \date{\today} \subjclass[2010]{Primary 20F65; Secondary 57M07, 20E06, 20F69} \keywords{Thompson group, HNN extension, BNS-invariant, finiteness properties, fundamental group at infinity} \author{Matthew C.~B.~Zaremsky} \address{Department of Mathematical Sciences, Binghamton University, Binghamton, NY 13902} \email{[email protected]} \begin{abstract} The Lodha--Moore groups provide the first known examples of type $\F_\infty$ groups that are non-amenable and contain no non-abelian free subgroups. These groups are related to Thompson's group $F$ in certain ways, for instance they contain it as a subgroup in a natural way. We exhibit decompositions of four Lodha--Moore groups, $G$, $G_y$, ${}_yG$ and ${}_yG_y$, into ascending HNN extensions of isomorphic copies of each other, both in ways reminiscent of such decompositions for $F$ and also in quite different ways. This allows us to prove two new topological results about the Lodha--Moore groups. First, we prove that they all have trivial homotopy groups at infinity; in particular they are the first examples of groups satisfying all four parts of Geoghegan's 1979 conjecture about $F$. Second, we compute the Bieri--Neumann--Strebel invariant $\Sigma^1$ for the Lodha--Moore groups, and get some partial results for the Bieri--Neumann--Strebel--Renz invariants $\Sigma^m$, including a full computation of $\Sigma^2$. \end{abstract} \maketitle \thispagestyle{empty} \section*{Introduction} The Lodha--Moore groups constructed by Lodha and Moore in \cite{lodha13,lodha14} provide the first known examples of type $\F_\infty$ groups that are non-amenable and contain no non-abelian free subgroups. These groups are closely related to Thompson's group $F$, and all contain it in a natural way.
One Lodha--Moore group, which we denote by $G$ and which was denoted $G_0$ in \cite{lodha13}, admits a presentation with only three generators and nine relations. The examples arise as subgroups of Monod's groups of piecewise projective homeomorphisms of the circle \cite{monod13}. In 1979 Geoghegan made four conjectures about $F$, namely: \begin{itemize} \item[(1)] It is of type $\F_\infty$. \item[(2)] It has no non-abelian free subgroups. \item[(3)] It is non-amenable. \item[(4)] It has trivial homotopy groups at infinity. \end{itemize} Brown and Geoghegan proved (1) and (4) \cite{brown84}, and Brin and Squier proved (2) \cite{brin85}. Conjecture (3) remains famously open. One can consider these four conjectures for the Lodha--Moore groups as well, in which case (2) and (3) were proved in \cite{lodha13} and (1) in \cite{lodha14}. This leaves only (4). In this paper we prove that the Lodha--Moore groups do indeed satisfy conjecture (4), and so provide the first examples of groups satisfying all four parts of Geoghegan's conjecture for $F$. \begin{main:LM_htpy_infty_zero} The homotopy groups at infinity of any Lodha--Moore group are trivial. \end{main:LM_htpy_infty_zero} The key tool is to exhibit decompositions of the Lodha--Moore groups into ascending HNN extensions of isomorphic copies of each other. In addition to $G$, we consider Lodha--Moore-style groups $G_y$, ${}_yG$ and ${}_yG_y$. The last of these was considered in \cite{lodha13}, denoted $G$ there, and the other two are obvious additions to the family. The groups are arranged via $F\subset G \subset {}_yG,G_y \subset {}_yG_y$. We prove: \begin{theorem*} Both $G$ and ${}_yG_y$ decompose as ascending HNN extensions of $G_y$, and also of ${}_yG$. Both $G_y$ and ${}_yG$ decompose as ascending HNN extensions of $G$, and also of ${}_yG_y$. \end{theorem*} More precise formulations are found in Lemmas~\ref{lem:F_like_hnn} and~\ref{lem:weird_hnn}, and Corollaries~\ref{cor:F_like_hnns} and~\ref{cor:weird_hnns}. 
Note that $F$ decomposes as an ascending HNN extension of an isomorphic copy of \emph{itself}. This was one key observation toward proving conjecture (4) for $F$ in \cite{brown84}. In particular, we consider this fact to be an example of the Lodha--Moore groups being ``$F$-like.'' In this same vein, the fact that these groups contain no non-abelian free subgroups follows in an essentially identical way to the same fact for $F$, from \cite{brin85}. In other ways though, the groups are not very $F$-like. For instance the non-amenability of the groups follows via arguments based on work of Ghys and Carri\`ere \cite{ghys85}, which have no chance of being adapted to $F$. Obtaining HNN decompositions helps to prove the fourth part of the Geoghegan conjecture for these groups (Theorem~\ref{thrm:LM_htpy_infty_zero}), and has another interesting application as well. Namely, it helps us to compute their Bieri--Neumann--Strebel (BNS) invariants. The BNS-invariant of a group is a geometric object that reveals more subtle finiteness properties of the group; in particular it encodes precisely which normal subgroups with abelian quotient are finitely generated. We prove: \begin{main:BNS} The BNS-invariant $\Sigma^1(H)$ of any Lodha--Moore group $H$ is of the form $S^2 \setminus P$ for some set $P$ with $|P|=2$. \end{main:BNS} The actual formulation of this theorem, e.g., describing $P$, involves a lot of notation, which will be introduced in the sections leading up to the precise formulation, and for the sake of this introduction we will not give these details yet. Our computation of $\Sigma^1$ again falls into the category of ways in which the Lodha--Moore groups are $F$-like, in that their BNS-invariants are obtained from their character spheres by removing exactly two points. After computing the BNS-invariants, we consider the higher BNSR-invariants $\Sigma^m$.
We give strong evidence that they are all obtained just by removing the convex hull of the two points missing from $\Sigma^1$, as is also the case for $F$ \cite{bieri87,bieri10}. We prove this for the case $m=2$; see Theorem~\ref{thrm:BNSR} and Section~\ref{sec:Sigma^2}. The paper is organized as follows. In Section~\ref{sec:groups} we recall the construction of the groups, compute their abelianizations, and exhibit some important discrete characters, i.e., homomorphisms to $\mathbb{Z}$, for the groups. In Section~\ref{sec:HNN} we inspect how these groups decompose as strictly ascending HNN extensions of each other. In Section~\ref{sec:htpy_infty} we show that they have trivial homotopy groups at infinity. In Section~\ref{sec:inv} we recall the BNS-invariant, and some important tools to compute it, and then compute the BNS-invariants of all the Lodha--Moore groups. In Section~\ref{sec:bnsr} we recall the higher BNSR-invariants $\Sigma^m$ for $m\in\mathbb{N}\cup\{\infty\}$, and reduce the problem of computing them for the Lodha--Moore groups to proving a single conjecture, about a certain subgroup being of type $\F_\infty$ (Conjecture~\ref{conj:ker_psi_F_m}). We prove that this subgroup is at least finitely presented, and so obtain a complete computation of $\Sigma^2$ for each Lodha--Moore group (Theorem~\ref{thrm:BNSR} for $m=2$). As a remark, it is recommended, though not strictly necessary, that the reader be familiar with Lodha and Moore's paper \cite{lodha13} before attempting to read the present work. On the other hand, specialized knowledge of homotopy at infinity or BNS-invariants is not particularly necessary before reading the present work. \subsection*{Acknowledgments} I am grateful to Matt Brin, Ross Geoghegan and Yash Lodha for many helpful discussions. Thanks also to Ross Geoghegan and Mike Mihalik, who both independently asked whether Theorem~\ref{thrm:LM_htpy_infty_zero} was true (and hence motivated what became Section~\ref{sec:htpy_infty}).
\section{The groups}\label{sec:groups} We will first define ${}_yG_y$, and then define $G,{}_yG,G_y$ as subgroups of ${}_yG_y$. We will view elements of the group ${}_yG_y$ as functions on the Cantor set $2^\mathbb{N}$. Thus an element will be specified by declaring how it acts on an arbitrary infinite string of $0$s and $1$s. The starting point is two primitive functions, $x$ and $y$, defined as follows. (Our actions are on the right, following \cite{lodha13}.) \begin{align*} \xi.x \mathbin{\vcentcolon =} \left\{\begin{array}{ll} 0\eta & \text{ if } \xi=00\eta\\ 10\eta & \text{ if } \xi=01\eta\\ 11\eta & \text{ if } \xi=1\eta\end{array}\right. \end{align*} \begin{align*} \xi.y \mathbin{\vcentcolon =} \left\{\begin{array}{ll} 0(\eta.y) & \text{ if } \xi=00\eta\\ 10(\eta.y^{-1}) & \text{ if } \xi=01\eta\\ 11(\eta.y) & \text{ if } \xi=1\eta\end{array}\right. \end{align*} One can extrapolate how $x^{-1}$ and $y^{-1}$ act on $2^\mathbb{N}$ as well. Now let $2^{<\mathbb{N}}$ be the set of finite binary sequences. For any $s\in 2^{<\mathbb{N}}$, we can define functions $x_s$ and $y_s$ that act like $x$ or $y$ ``at'' the address $s$ and otherwise act like the identity: \begin{align*} \xi.x_s \mathbin{\vcentcolon =} \left\{\begin{array}{ll} s(\eta.x) & \text{ if } \xi=s\eta\\ \xi & \text{ otherwise}\end{array}\right. \end{align*} \begin{align*} \xi.y_s \mathbin{\vcentcolon =} \left\{\begin{array}{ll} s(\eta.y) & \text{ if } \xi=s\eta\\ \xi & \text{ otherwise}\end{array}\right. \end{align*} The group $\langle x_s\mid s\in 2^{<\mathbb{N}}\rangle$ is Thompson's group $F$. We now define the Lodha--Moore groups as follows. Here $0^n$ and $1^n$ denote words consisting of a string of $n$ such symbols. In particular $0^0=1^0=\emptyset$, the empty word. Also, $\mathbb{N}_0$ denotes $\mathbb{N}\cup\{0\}$.
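To make the recursive definitions of $x$ and $y$ concrete, here is a short Python sketch (our own illustration, not part of the paper) of the right actions on finite prefixes of infinite binary strings. For $y$ and $y^{-1}$ it returns the portion of the image that the given prefix already determines; the rules for $y^{-1}$ are the ones obtained by inverting those for $y$.

```python
# Illustrative sketch (not from the paper): right actions of x and y on
# finite prefixes of infinite binary sequences, written as strings of '0'/'1'.

def act_x(xi):
    """xi.x: 00eta -> 0eta, 01eta -> 10eta, 1eta -> 11eta."""
    if xi.startswith("00"):
        return "0" + xi[2:]
    if xi.startswith("01"):
        return "10" + xi[2:]
    if xi.startswith("1"):
        return "11" + xi[1:]
    raise ValueError("prefix too short to determine xi.x")

def act_y(xi):
    """Determined prefix of xi.y: 00eta -> 0(eta.y), 01eta -> 10(eta.y^-1), 1eta -> 11(eta.y)."""
    if xi.startswith("00"):
        return "0" + act_y(xi[2:])
    if xi.startswith("01"):
        return "10" + act_y_inv(xi[2:])
    if xi.startswith("1"):
        return "11" + act_y(xi[1:])
    return ""  # remainder of the image is not yet determined by this prefix

def act_y_inv(xi):
    """Determined prefix of xi.y^-1: 0eta -> 00(eta.y^-1), 10eta -> 01(eta.y), 11eta -> 1(eta.y^-1)."""
    if xi.startswith("10"):
        return "01" + act_y(xi[2:])
    if xi.startswith("11"):
        return "1" + act_y_inv(xi[2:])
    if xi.startswith("0"):
        return "00" + act_y_inv(xi[1:])
    return ""
```

For instance `act_y("001")` returns `"011"`, reflecting $001\eta \mapsto 0((01\eta).y) = 011(\eta.y)$, and composing `act_y` with `act_y_inv` recovers the original prefix.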
$${}_yG_y\mathbin{\vcentcolon =}\langle x_s, y_t\mid s,t\in 2^{<\mathbb{N}}\rangle \text{.}$$ $$G_y\mathbin{\vcentcolon =}\langle x_s, y_t\mid s,t\in 2^{<\mathbb{N}}\text{, } t\not\in\{0^n\}_{n\in\mathbb{N}_0}\rangle$$ $${}_yG\mathbin{\vcentcolon =}\langle x_s, y_t\mid s,t\in 2^{<\mathbb{N}}\text{, } t\not\in\{1^n\}_{n\in\mathbb{N}_0}\rangle$$ $$G\mathbin{\vcentcolon =}\langle x_s, y_t\mid s,t\in 2^{<\mathbb{N}}\text{, } t\not\in\{0^n,1^n\}_{n\in\mathbb{N}_0}\rangle$$ In words, the differences between the groups involve at which addresses in $2^{<\mathbb{N}}$ we allow a $y$ to act. In ${}_yG_y$, a $y$ can act anywhere; in $G_y$, a $y$ can act anywhere except at addresses of the form $0^n$; in ${}_yG$, a $y$ can act anywhere except at addresses $1^n$; and in $G$, a $y$ can act anywhere except at addresses $0^n$ or $1^n$. The notation is a visual reminder of where $y$'s can act. For instance ${}_yG$ contains the element $y_0$ but not $y_1$. In \cite{lodha13}, presentations are given for $G$ and ${}_yG_y$. It is straightforward to extrapolate these to find presentations for ${}_yG$ and $G_y$. We state the defining relations for ${}_yG_y$, and to restrict to the others we just restrict which subscripts are used for the $y$ generators. The relations are: \begin{itemize} \item[(LM1)] $x_s^2=x_{s0} x_s x_{s1}$ for $s\in 2^{<\mathbb{N}}$ \item[(LM2)] $x_t x_s = x_s x_{t.x_s}$ for $s,t\in 2^{<\mathbb{N}}$ with $t.x_s$ defined \item[(LM3)] $y_t x_s = x_s y_{t.x_s}$ for $s,t\in 2^{<\mathbb{N}}$ with $t.x_s$ defined \item[(LM4)] $y_s y_t = y_t y_s$ for $s,t\in 2^{<\mathbb{N}}$ with neither $s$ nor $t$ a prefix of the other \item[(LM5)] $y_s = x_s y_{s0} y_{s10}^{-1} y_{s11}$ for $s\in 2^{<\mathbb{N}}$ \end{itemize} The action of $F$ on $2^\mathbb{N}$ restricts to a partial action of $F$ on $2^{<\mathbb{N}}$. When we say ``$t.x_s$ is defined,'' we mean that $x_s$ can act on $t$.
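The partial action on addresses and the subscript restrictions defining the four groups can be sketched in Python as follows (again our own illustrative code, not notation from the paper; the group tags `"G"`, `"Gy"`, `"yG"`, `"yGy"` are ad hoc names):

```python
# Illustrative sketch (not from the paper) of the partial action t.x_s on
# finite addresses, and of which subscripts t are allowed for y_t in each group.

def act_x_on_address(t, s):
    """The partial action t.x_s; returns None when t.x_s is undefined."""
    if t.startswith(s):
        u = t[len(s):]
        if u.startswith("00"):
            return s + "0" + u[2:]
        if u.startswith("01"):
            return s + "10" + u[2:]
        if u.startswith("1"):
            return s + "11" + u[1:]
        return None  # u is empty or "0": x_s does not send the cone at t to a cone
    if s.startswith(t):
        return None  # t is a proper prefix of s
    return t  # incomparable addresses: x_s fixes the cone at t

def y_subscript_allowed(group, t):
    """Whether y_t is among the generators of the given Lodha--Moore group."""
    pow0 = t == "0" * len(t)  # t = 0^n, n >= 0 (includes the empty word)
    pow1 = t == "1" * len(t)  # t = 1^n, n >= 0
    return {"yGy": True,
            "Gy": not pow0,
            "yG": not pow1,
            "G": not (pow0 or pow1)}[group]
```

For example, `act_x_on_address("10", "")` returns `"110"`, matching the instance $y_{10} x_\emptyset = x_\emptyset y_{110}$ of relation (LM3), and `y_subscript_allowed("yG", "0")` is `True` while `y_subscript_allowed("yG", "1")` is `False`, matching the remark that ${}_yG$ contains $y_0$ but not $y_1$.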
\subsection{Strand diagrams}\label{sec:strands} In \cite{belk14}, Belk and Matucci develop the \emph{strand diagram} model for elements of $F$. This is related to the well known \emph{paired tree diagram} model. We are using right actions, and so here to get from a paired tree diagram to a strand diagram, pictorially we reflect the first (domain) tree upside down and attach its leaves to those of the second (range) tree. In \cite{lodha13}, Lodha and Moore discuss a paired tree diagram model for elements of their groups. It is natural then to also develop a strand diagram model, which is what we do in this subsection. We will not be overly rigorous, since this is just a helpful tool that makes computations easier. First we discuss elements of $F$ represented as strand diagrams (see \cite{belk14} for more details). A strand diagram is a picture of a single strand splitting into multiple strands, always via bifurcation, and then the multiple strands merging two at a time, with possible further splitting and merging until ultimately everything merges back down into one strand. With our present convention, the merges represent the carets of the (now upside down) first tree, and the splits represent the carets of the second tree. Two strand diagrams may represent the same element; the equivalence relation on such diagrams is generated by reduction moves of two forms, namely, those in the following picture: \begin{figure} \caption{Fundamental reduction moves in strand diagrams for $F$.} \label{fig:F_strand_moves} \end{figure} Multiplication in $F$ is modeled by stacking strand diagrams and performing reduction moves. A product $ab$ corresponds to stacking the strand diagrams for $a$ and $b$. Since we are using right actions, in the product $ab$ we stack $b$ on top of $a$. Now we introduce strand diagrams for elements of Lodha--Moore groups. Since $F=\langle x_s\rangle_s$, we already have pictures for the $x_s$ and we just need to invent pictures for the $y_s$.
Moreover, the nodes of a finite binary tree are labeled by finite binary sequences, so once we have a picture for $y_\emptyset$, we will have one for any $y_s$, by putting the picture at the address $s$. So, we just need to declare how to represent $y_\emptyset$, and then check that the relations are correctly modeled. We choose as a strand diagram for $y_\emptyset$ a picture of a single strand with a counterclockwise ``cyclone'' $\circlearrowleft$ in the middle. Similarly $y_\emptyset^{-1}$ is a single strand with a clockwise cyclone $\circlearrowright$. Now elements of ${}_yG_y$ are represented by strand diagrams with cyclones on some strands. See Figure~\ref{fig:LM_strand_example} for an example. \begin{figure} \caption{Strand diagram for $y_{10011} \label{fig:LM_strand_example} \end{figure} It will not come up for us, but for an element like $y_s^2$, one could just draw two counterclockwise cyclones on the same strand. These strand diagrams are still considered up to equivalence. The old reduction moves from $F$ still hold, and the new reduction moves include: if a counterclockwise cyclone and a clockwise cyclone share the same segment of a strand, we may delete both (corresponding to $y_s y_s^{-1}=y_s^{-1}y_s=1$), and the following ``expansion'' moves shown in Figure~\ref{fig:LM_strand_moves}: \begin{figure} \caption{Four expansion moves in strand diagrams for Lodha--Moore groups.} \label{fig:LM_strand_moves} \end{figure} The point of using these cyclones is that the direction of spinning tells us whether to ``push'' $\{0,10,11\}$ to $\{00,01,1\}$ or vice versa. Our convention of reading the diagrams bottom-to-top ensures that positive powers of $y_s$ generators correspond to counterclockwise (positive) cyclones. To demonstrate that our conventions for strand diagrams correctly model the groups, we draw some examples of the defining relations in Figures~\ref{fig:pent},~\ref{fig:y_conj} and~\ref{fig:expand}. 
\begin{figure} \caption{The relation $x_0 x_\emptyset x_1 = x_\emptyset^2$.} \label{fig:pent} \end{figure} \begin{figure} \caption{The relation $x_\emptyset^{-1} \label{fig:y_conj} \end{figure} \begin{figure} \caption{The relation $x_\emptyset^{-1} \label{fig:expand} \end{figure} \subsection{Abelianizations and characters}\label{sec:chars} In this subsection we will abelianize the four Lodha--Moore groups; it turns out all of them abelianize to $\mathbb{Z}^3$. We will use bars to indicate the abelianized elements, and will write $H^{ab}$ for the abelianization of a group $H$. \begin{lemma}\label{lem:abln_gens} $G^{ab}$ is generated by $\overline{x}_0$, $\overline{x}_1$ and $\overline{y}_{10}$. ${}_yG^{ab}$ is generated by $\overline{y}_0$, $\overline{x}_1$ and $\overline{y}_{10}$. $G_y^{ab}$ is generated by $\overline{x}_0$, $\overline{y}_1$ and $\overline{y}_{10}$. ${}_yG_y^{ab}$ is generated by $\overline{y}_0$, $\overline{y}_1$ and $\overline{y}_{10}$. \end{lemma} \begin{proof} We have \emph{a priori} that $G^{ab}$ is generated by the $\overline{x}_s$ for all $s$ and the $\overline{y}_s$ for all $s\not\in\{0^n,1^n\}_{n\in\mathbb{N}_0}$. Note that the partial action of $F$ on $2^{<\mathbb{N}}$ has four orbits, represented by $\emptyset$, $0$, $1$ and $10$. By relation (LM3), every $\overline{y}_s$ equals $\overline{y}_{10}$. By relation (LM2), every $\overline{x}_s$ equals one of $\overline{x}_\emptyset$, $\overline{x}_0$, $\overline{x}_1$ or $\overline{x}_{10}$. Also, $\overline{x}_{10}=0$ and $\overline{x}_\emptyset=\overline{x}_0+\overline{x}_1$ since this is true in $F$. Now consider ${}_yG^{ab}$. Similar to before we reduce the generating set down to just $\overline{x}_0$, $\overline{x}_1$, $\overline{y}_0$ and $\overline{y}_{10}$. Then we use relation (LM5) to get $$\overline{y}_0=\overline{x}_0 + \overline{y}_{00} - \overline{y}_{010} + \overline{y}_{011} = \overline{x}_0 + \overline{y}_{00} = \overline{x}_0 + \overline{y}_0$$ whence $\overline{x}_0=0$. 
This finishes ${}_yG$. For $G_y$ and ${}_yG_y$ we use parallel arguments. \end{proof} Now to see that these abelianizations are all $\mathbb{Z}^3$, we need to show that in each case the three generators are linearly independent. For each group it suffices to exhibit, for each of its three generators, an epimorphism from the group to $\mathbb{Z}$, i.e., a \emph{discrete character}, that takes that generator to $1$ and the other two generators to $0$. We define such characters now. After giving them names we will discuss which ones are well defined on which groups, and which generators they ``detect'' for the purpose of proving linear independence. The characters we now define are denoted $$\chi_0 \text{, } \chi_1 \text{, } \psi_0 \text{, } \psi_1 \text{ and } \psi \text{.}$$ First define $\chi_0$ by setting $\chi_0(x_{0^n})=-1$ for $n\ge0$, $\chi_0(x_s)=0$ for $s\neq 0^n$ and $\chi_0(y_s)=0$ for all $s$. Restricted to $F$, this is the character typically called $\chi_0$. Next define $\chi_1$ via $\chi_1(x_{1^n})=1$ for $n\ge0$, $\chi_1(x_s)=0$ for $s\neq 1^n$ and $\chi_1(y_s)=0$ for all $s$. Again, this is the usual character called $\chi_1$ when restricted to $F$. Next define $\psi_0$ via $\psi_0(y_{0^n})=1$ for $n\ge0$, $\psi_0(y_s)=0$ for $s\neq 0^n$ and $\psi_0(x_s)=0$ for all $s$. Similarly define $\psi_1$ via $\psi_1(y_{1^n})=1$ for $n\ge0$, $\psi_1(y_s)=0$ for $s\neq 1^n$ and $\psi_1(x_s)=0$ for all $s$. Lastly define $\psi$ via $\psi(y_s)=1$ for all $s$ and $\psi(x_s)=0$ for all $s$. For example, if $w=x_0^2 y_{000}^{-1} y_{11}^4$ then the five functions $\chi_0$, $\chi_1$, $\psi_0$, $\psi_1$ and $\psi$ read $-2$, $0$, $-1$, $4$ and $3$ respectively. (Though, as the next observation points out, we shouldn't try to apply $\chi_0$ or $\chi_1$ to this $w$ as an element of ${}_yG_y$, since it will not be well defined.) \begin{observation} The characters $\chi_0,\chi_1,\psi$ are well defined on $G$. The characters $\chi_0,\psi_1,\psi$ are well defined on $G_y$.
The characters $\psi_0,\chi_1,\psi$ are well defined on ${}_yG$. The characters $\psi_0,\psi_1,\psi$ are well defined on ${}_yG_y$. \end{observation} \begin{proof} We just need to verify that the relations (LM1) through (LM5) remain valid after applying any of these characters, and this is a straightforward exercise. For the reader interested in checking this, we reiterate that we only need to check the relations with $y$-subscripts allowed in whichever Lodha--Moore group we are working with. For example, relation (LM5) for $s=\emptyset$ is $y_\emptyset = x_\emptyset y_0 y_{10}^{-1} y_{11}$, and applying $\chi_0$ we get $0=-1$; hence $\chi_0$ is not well defined on ${}_yG_y$. In $G$ though, this relation does not appear, and in fact $\chi_0$ is well defined on $G$. \end{proof} Characters provide a way to test linear independence of abelianized elements. For example, in $F$, if $a\overline{x}_0 + b\overline{x}_1 = 0$ then applying $\chi_0$ we see $-a+0=0$ and applying $\chi_1$ we see $0+b=0$, so in fact $a=b=0$ and we conclude that $\overline{x}_0$ and $\overline{x}_1$ are linearly independent. This shows that $F^{ab}\cong\mathbb{Z}^2$, and we have the following corollary for the Lodha--Moore groups: \begin{corollary} The abelianizations of the Lodha--Moore groups are all isomorphic to $\mathbb{Z}^3$. \end{corollary} \begin{proof} We know from Lemma~\ref{lem:abln_gens} that $G^{ab}$ is generated by $\overline{x}_0$, $\overline{x}_1$ and $\overline{y}_{10}$. We can tell these are linearly independent by applying the functions $\chi_0$, $\chi_1$ and $\psi$. For $G_y^{ab}$ with generators $\overline{x}_0$, $\overline{y}_1$ and $\overline{y}_{10}$ we use $\chi_0$, $\psi_1$ and $\psi$. For ${}_yG^{ab}$ with generators $\overline{y}_0$, $\overline{x}_1$ and $\overline{y}_{10}$ we use $\psi_0$, $\chi_1$ and $\psi$. Finally, for ${}_yG_y^{ab}$ with generators $\overline{y}_0$, $\overline{y}_1$ and $\overline{y}_{10}$ we use $\psi_0$, $\psi_1$ and $\psi$.
\end{proof} \section{HNN decompositions}\label{sec:HNN} In this section we decompose the Lodha--Moore groups into ascending HNN extensions of each other. The eight HNN decompositions we find, (HNN1) through (HNN8), are listed at the end of the section for reference. First we give some general background, and then we will inspect the Lodha--Moore groups in two subsections. \begin{definition}[(External) ascending HNN extension]\label{def:ext_hnn} Let $B$ be a group and $\phi\colon B\hookrightarrow B$ an injective endomorphism. The \emph{ascending HNN extension} of $B$ with respect to $\phi$ is the group $$B*_{\phi,t}=\langle B,t\mid b^t=\phi(b) \text{ for all } b\in B\rangle \text{,}$$ where $b^t$ means $t^{-1}bt$. We call $t$ the \emph{stable element} of $B*_{\phi,t}$. If $\phi$ is not surjective, we call this a \emph{strictly ascending HNN extension}. \end{definition} Since we will be asking whether pre-existing groups have the form of ascending HNN extensions, we will take the following lemma as an alternate definition. (See also \cite[Lemma~3.1]{geoghegan01}.) \begin{lemma}[(Internal) ascending HNN extension]\label{lem:int_hnn} Let $H$ be a group, $B\le H$ and $z\in H$. Suppose that $B$ and $z$ generate $H$, and that $B^z\subseteq B$. Suppose $z^n\in B$ only if $n=0$ (if $B^z\subsetneq B$ this is automatically satisfied). Let $\phi\colon B\hookrightarrow B$ be the map $b\mapsto b^z$. Then $H\cong B*_{\phi,t}$. \end{lemma} \begin{proof} Define an epimorphism $\Phi \colon B*_{\phi,t} \twoheadrightarrow H$ by sending $t$ to $z$ and $b$ to $b$ for $b\in B$. This is a well defined homomorphism since $\Phi(b^t)=b^z=\phi(b)=\Phi(\phi(b))$ for all $b\in B$. We just need it to be injective. Elements of the abstract HNN extension $B*_{\phi,t}$ admit the form $t^n b t^m$ for $n\ge 0$, $b\in B$ and $m\le 0$. Suppose such an element lies in $\ker(\Phi)$. Then $z^n b z^m=1$, so $b=z^{-(n+m)}$. Since $b\in B$ and $z^k\in B$ only for $k=0$, this forces $n+m=0$ and $b=1$, whence $t^n b t^m=t^{n+m}=1$. Hence indeed $\ker(\Phi)=\{1\}$.
\end{proof} We will sometimes suppress the $\phi$ from the notation and just write $B*_t$, since when $B$ and $t$ live in an ambient group, $\phi$ is just conjugation by $t$. We call $H=B*_t$ an \emph{HNN decomposition} of $H$. As we will see, the Lodha--Moore groups all decompose into interesting strictly ascending HNN extensions. Some of the decompositions are reminiscent of the case of Thompson's group $F$, but others are quite different. \subsection{$F$-like HNN decompositions}\label{sec:Flike_HNN} First we discuss the ``$F$-like'' decompositions. For $H\in\{G,{}_yG,G_y,{}_yG_y\}$ and $s\in 2^{<\mathbb{N}}$, define $H(s)$ to be the subgroup of $H$ generated by those $x_t$ and $y_t$ where $t$ extends $s$. Note that $G(0)\cong G_y$, via $x_{0t}\mapsto x_t$ and $y_{0t}\mapsto y_t$. (The important thing to note is that $y_{01^n}\mapsto y_{1^n}$, so $G_y$ is the correct target, not $G$.) More generally, such arguments reveal the following relationships: \begin{itemize} \item $G(0)\cong G_y$ \item $G(1)\cong{}_yG$ \item $G_y(0)\cong G_y$ \item $G_y(1)\cong{}_yG_y$ \item ${}_yG(0)\cong{}_yG_y$ \item ${}_yG(1)\cong{}_yG$ \item ${}_yG_y(0)\cong{}_yG_y$ \item ${}_yG_y(1)\cong{}_yG_y$ \end{itemize} We will call $H(s)$ the \emph{semi-deferred} subgroup of $H$ at $s$. The prefix ``semi-'' is a reminder that $H(s)$ might not be isomorphic to $H$, in contrast to the case of $F$, when $F(s)\cong F$ is appropriately called the deferred copy of $F$ at $s$. Recall that $F$ is a strictly ascending HNN extension of $F(1)\cong F$ with stable element $x_\emptyset$; it is also a strictly ascending HNN extension of $F(0)\cong F$ with stable element $x_\emptyset^{-1}$. We have some similar statements about the Lodha--Moore groups. We will prove the first such fact, and then list the other ones, which follow by parallel proofs, in a corollary. \begin{lemma}\label{lem:F_like_hnn} We have that $G$ is a strictly ascending HNN extension of $G(1)$ with stable element $x_\emptyset$.
\end{lemma} \begin{proof} First note that $G$ is generated by $G(1)$ and $x_\emptyset$. Indeed, the only generators missing are of the form $x_{0u}$ and $y_{0u}$; if $u$ features at least one $1$, then $x_{0u}$, respectively $y_{0u}$, is conjugate via $x_\emptyset$ to a generator of the form $x_{1t} \in G(1)$, respectively $y_{1t} \in G(1)$. Also, any $x_{0^n}$ for $n\ge1$ is conjugate via $x_\emptyset$ to $x_0 = x_\emptyset^2 x_1^{-1} x_\emptyset^{-1}$. So indeed we catch all the generators. Lastly note that $G(1)^{x_\emptyset} \subseteq G(11)$ since $x_{1t}^{x_\emptyset}=x_{11t}$ and $y_{1t}^{x_\emptyset}=y_{11t}$, so $G(1)^{x_\emptyset} \subsetneq G(1)$. \end{proof} \begin{corollary}\label{cor:F_like_hnns} By parallel proofs to that of the lemma, we have that $G$ is a strictly ascending HNN extension of $G(0)$ with stable element $x_\emptyset^{-1}$. Moreover, $G_y$ (respectively ${}_yG$) is a strictly ascending HNN extension of $G_y(1)$ (respectively ${}_yG(0)$) with stable element $x_\emptyset$ (respectively $x_\emptyset^{-1}$). \end{corollary} \subsection{Non-$F$-like HNN decompositions}\label{sec:nonFlike_HNN} Now we discuss some ways in which the Lodha--Moore groups decompose into strictly ascending HNN extensions that are not ``$F$-like''. More precisely, this time the base subgroup will not be a semi-deferred copy of a larger Lodha--Moore group, but rather the natural embedded subgroup copy of a smaller Lodha--Moore group. The stable elements will be $y_0^{-1}$ and $y_1$ instead of $x_\emptyset^\pm$. In this subsection, we treat $G$ as a subgroup of ${}_yG$ and $G_y$, and those as subgroups of ${}_yG_y$, all in the natural way. First, we have a technical lemma. \begin{lemma}\label{lem:weird_conj} We have $y_0 x_\emptyset y_0^{-1} \in G$ and $y_0^{-1} x_0 y_0 \not\in G_y$. \end{lemma} \begin{proof} Using the relations (LM1) through (LM5), we see that $y_s=y_{s1} y_{s01}^{-1} y_{s00} x_s$ for all $s$. 
Now using (LM1) through (LM5) and this new relation, we calculate: \begin{align*} y_0 x_\emptyset y_0^{-1} &= (y_{01} y_{001}^{-1} y_{000} x_0) x_\emptyset (y_{01} y_{001}^{-1} y_{000} x_0)^{-1} \\ &= y_{01} y_{001}^{-1} y_{000} (x_0 x_\emptyset x_0^{-1}) y_{000}^{-1} y_{001} y_{01}^{-1} \\ &= y_{01} y_{001}^{-1} y_{000} x_\emptyset^2 x_1^{-1} x_0^{-1} y_{000}^{-1} y_{001} y_{01}^{-1} \\ &= x_\emptyset^2 y_{110} y_{10}^{-1} x_1^{-1} (y_{0} x_0^{-1}) y_{000}^{-1} y_{001} y_{01}^{-1} \\ &= x_\emptyset^2 y_{110} y_{10}^{-1} x_1^{-1} y_{01} y_{001}^{-1} y_{000} y_{000}^{-1} y_{001} y_{01}^{-1} \\ &= x_\emptyset^2 y_{110} y_{10}^{-1} x_1^{-1} \in G \text{.} \end{align*} To see that $y_0^{-1} x_0 y_0 \not\in G_y$, we compute that $y_0^{-1} x_0 y_0 = y_{011}^{-1} y_{010} y_{00}^{-1} y_0$, and this is in $G_y$ if and only if $y_{00}^{-1}y_0$ is. But this word is in \emph{standard form} (\cite[Definition~5.1]{lodha13}), and uses generators of the form $y_{0^n}$, so cannot lie in $G_y$, by arguments similar to those in \cite[Section~5]{lodha13}. \end{proof} Figure~\ref{fig:weird_conj} shows that $y_0 x_\emptyset y_0^{-1} \in G$ using strand diagrams. \begin{figure} \caption{A visual proof that $y_0 x_\emptyset y_0^{-1} \label{fig:weird_conj} \end{figure} By similar proofs, we have that $y_1^{-1} x_\emptyset y_1 \in G$ and $y_1 x_1 y_1^{-1} \not\in {}_yG$. \begin{lemma}\label{lem:weird_hnn} We have that ${}_yG_y$ is a strictly ascending HNN extension of its subgroup $G_y$ with stable element $y_0^{-1}$. \end{lemma} \begin{proof} First note that ${}_yG_y$ is generated by $G_y=\langle x_\emptyset, x_1, y_1, y_{10}\rangle$ and $y_0$. By Lemma~\ref{lem:weird_conj}, we have $x_\emptyset^{y_0^{-1}}\in G_y$, and we also know that $x_1^{y_0^{-1}}=x_1$, $y_1^{y_0^{-1}}=y_1$ and $y_{10}^{y_0^{-1}}=y_{10}$, so in fact $G_y^{y_0^{-1}}\subseteq G_y$. That this inclusion is proper follows from Lemma~\ref{lem:weird_conj} as well, since the lemma says $x_0\not\in G_y^{y_0^{-1}}$. 
\end{proof} \begin{corollary}\label{cor:weird_hnns} By parallel proofs to the proofs of the lemmas, we also see that ${}_yG$ is a strictly ascending HNN extension of $G$ with stable element $y_0^{-1}$, and that ${}_yG_y$ (respectively $G_y$) is a strictly ascending HNN extension of ${}_yG$ (respectively $G$) with stable element $y_1$. \end{corollary} There are a lot of different HNN extensions to keep track of now, so we collect all the results of this section here, and name each HNN decomposition for reference. Here, whenever a smaller group is the base of a larger group, it is via the natural inclusion, e.g., $G\subseteq G_y$, and whenever a larger group is the base of a smaller group, it is via an isomorphism with a semi-deferred version, e.g., ${}_yG\cong G(1) \subseteq G$. \begin{align*}\begin{array}{lcll} (\textrm{HNN1}) &G&\cong&({}_yG)*_{x_\emptyset} \\ (\textrm{HNN2}) &G&\cong&(G_y)*_{x_\emptyset^{-1}} \\ (\textrm{HNN3}) &G_y&\cong&({}_yG_y)*_{x_\emptyset} \\ (\textrm{HNN4}) &G_y&\cong&G*_{y_1} \\ (\textrm{HNN5}) &{}_yG&\cong&G*_{y_0^{-1}} \\ (\textrm{HNN6}) &{}_yG&\cong&({}_yG_y)*_{x_\emptyset^{-1}} \\ (\textrm{HNN7}) &{}_yG_y&\cong&(G_y)*_{y_0^{-1}} \\ (\textrm{HNN8}) &{}_yG_y&\cong&({}_yG)*_{y_1} \end{array} \end{align*} The astute reader might notice some missing, like $G_y\cong (G_y)*_{x_\emptyset^{-1}}$ (via $G_y(0)\cong G_y$), but it turns out these eight are the only ones that will be usable for our purposes. \begin{remark} There are some other strictly ascending HNN extensions related to these groups that we will not make use of, but which seem worth mentioning. First, one can calculate that $y_1^{-1}y_0 x_\emptyset y_0^{-1} y_1 = x_\emptyset^2$ (this was essentially done at the end of Section~4 of~\cite{lodha13}, and the reader is encouraged to verify it using strand diagrams), and so we can build a strictly ascending HNN extension $G*_{y_0^{-1} y_1}$. 
This contains the group $\langle \sqrt{x_\emptyset},x_1\rangle$, where $\sqrt{x_\emptyset}\mathbin{\vcentcolon =} y_1 y_0^{-1} x_\emptyset y_0 y_1^{-1}$ (so called since $(\sqrt{x_\emptyset})^2 = x_\emptyset$), which is a non-amenable group containing no non-abelian free subgroups and generated by only two elements (it is not known whether it is finitely presented). \end{remark} \begin{remark} As the previous remark stated, $y_1^{-1}y_0 x_\emptyset y_0^{-1} y_1 = x_\emptyset^2$, and it is easy to check that $G$ contains a copy of the Baumslag--Solitar group $BS(2,1)$. On the other hand, Thompson's group $V$ does not contain $BS(2,1)$ \cite{roever99}, so $G$ does not embed into $V$. It is conjectured that every co$\mathcal{CF}$ group embeds into $V$ \cite{bleak13}, and so if $G$ turned out to be co$\mathcal{CF}$, then it would be a counterexample. \end{remark} \section{Vanishing homotopy at infinity}\label{sec:htpy_infty} Having decomposed the Lodha--Moore groups as ascending HNN extensions of each other, we can now apply results from \cite{mihalik85} and \cite{geoghegan08}, and techniques of Brown--Geoghegan \cite{brown84}, to quickly derive Theorem~\ref{thrm:LM_htpy_infty_zero}, that the groups have trivial homotopy groups at infinity. The reader interested in definitions and background is directed to Chapters 16 and 17 of \cite{geoghegan08}. We will at least quickly state a definition: \begin{definition}\cite[Sections~17.1, 17.2]{geoghegan08} Let $H$ be a group of type $\F_n$ and let $X$ be a $K(H,1)$ with compact $n$-skeleton. Let $\widetilde{X}$ be the universal cover of $X$. We say that $H$ is \emph{$(n-1)$-connected at infinity} if for any compact set $C\subseteq \widetilde{X}$ there exists a compact set $D$ with $C\subseteq D\subseteq \widetilde{X}$ such that the inclusion $\widetilde{X}\setminus D \hookrightarrow \widetilde{X}\setminus C$ induces the zero map in $\pi_k$ for all $k<n$. If a group is $n$-connected at infinity for all $n$ we say it has \emph{trivial homotopy groups at infinity}.
\end{definition} The property of being $0$-connected at infinity is called being \emph{one-ended}. The property of being $1$-connected at infinity is, naturally, called being \emph{simply connected at infinity}. We also have the obvious parallel notion of having trivial \emph{homology} groups at infinity. \begin{cit}\cite[Theorem~3.1]{mihalik85}\cite[Theorem~16.9.5]{geoghegan08}\label{cit:hnn_simp_conn_infty} Let $B$ be a $1$-ended finitely presented group and $\phi\colon B\to B$ a monomorphism. Then the HNN extension $B*_{\phi,t}$ is simply connected at infinity. \end{cit} \begin{corollary}\label{cor:LM_simp_conn_infty} The Lodha--Moore groups are simply connected at infinity. \end{corollary} \begin{proof} The Lodha--Moore groups are clearly $1$-ended, since they are not virtually cyclic and contain no non-abelian free groups. Also, they decompose as ascending HNN extensions of each other as seen in Section~\ref{sec:HNN}, so the result is immediate from Citation~\ref{cit:hnn_simp_conn_infty}. \end{proof} Now Theorems~13.3.3(ii) and~17.2.1 of \cite{geoghegan08} reduce our problem to the next proposition. A heuristic summary of this reduction is: since we already have simple connectivity at infinity, the homotopy at infinity of $H$ will vanish as soon as the homology at infinity of $H$ vanishes (thanks to the Hurewicz Theorem), and the homology at infinity of $H$ will vanish as soon as $H^*(H;\mathbb{Z} H)$ vanishes (thanks to some arguments from homological algebra). So, it suffices to prove: \begin{proposition}\label{prop:LM_htpy_infty_zero} For each Lodha--Moore group $H$, we have $H^*(H;\mathbb{Z} H)=0$. \end{proposition} \begin{proof} We proceed similarly to the proof of \cite[Theorem~7.2]{brown84}; all claims can be compared to the corresponding steps of that proof. We will actually show that for all $i\ge1$, $H^i(H;L)=0$ for any free $\mathbb{Z} H$-module $L$ ($H$ is of type $\F_\infty$ \cite{lodha14} so this is equivalent). We induct on $i$. 
The base case is that $H^1(H;L)=0$, which follows since $H$ is $1$-ended and finitely generated \cite[Theorem~13.3.3(ii)]{geoghegan08}. Now suppose $i>1$. Let $H=B*_{\phi,t}$ be some HNN decomposition of $H$ from the list (HNN1) through (HNN8). There is a Mayer--Vietoris sequence $$\cdots\to H^{i-1}(B;L) \to H^i(H;L) \to H^i(B;L) \stackrel{\phi^*}{\to} H^i(B;L) \to \cdots \text{.}$$ Since $B$ is itself a Lodha--Moore group, and since $L$ is free over $\mathbb{Z} B$, by induction we are assuming that $H^{i-1}(B;L)=0$. Also, $\phi^*$ is injective since $H$ is of type $\F_\infty$ \cite{brown85}. We conclude from the exactness of the sequence that $H^i(H;L)=0$. \end{proof} We conclude: \begin{theorem}\label{thrm:LM_htpy_infty_zero} All the homotopy groups at infinity of any Lodha--Moore group are trivial. \qed \end{theorem} As a remark, a crucial point in the proof of the proposition was that $B$ is itself \emph{a} Lodha--Moore group (though not necessarily isomorphic to $H$) and so we can apply the induction hypothesis to $B$. To reiterate, this proves that the Lodha--Moore groups satisfy all four components of the 1979 Geoghegan Conjecture for Thompson's group $F$, namely \begin{itemize} \item[(1)] They are of type $\F_\infty$. \item[(2)] They have no non-abelian free subgroups. \item[(3)] They are non-amenable. \item[(4)] They have trivial homotopy groups at infinity. \end{itemize} The first three of these were proved by Lodha \cite{lodha14} and Lodha--Moore \cite{lodha13}. The only part still open for $F$ itself is (3). \section{The BNS-invariant}\label{sec:inv} Our HNN decompositions also allow us to analyze the BNSR-invariants of the Lodha--Moore groups. We will first focus on the BNS-invariant $\Sigma^1$, and in Section~\ref{sec:bnsr} we will discuss the higher BNSR-invariants $\Sigma^m$. 
There are some elegant tools for computing $\Sigma^1$ that have no known generalization for $\Sigma^m$, and in this section we will use these tools to compute $\Sigma^1$ of the Lodha--Moore groups. The work done in Section~\ref{sec:bnsr} to compute $\Sigma^2$ (and parts of the higher $\Sigma^m$) could also be used to compute $\Sigma^1$, but, as the reader will notice, things get very technical in that section, and it is more pleasant to compute $\Sigma^1$ using the aforementioned tools. The Bieri--Neumann--Strebel (BNS) invariant of a finitely generated group $H$ is a subset of the \emph{character sphere} $S(H)$. A \emph{character} of $H$ is a homomorphism $\chi \colon H\to \mathbb{R}$. (Recall that if the image is infinite cyclic, we call $\chi$ a discrete character.) Two characters are \emph{equivalent} if they differ by multiplication by a positive real number. The equivalence classes $[\chi]$ of non-trivial characters of $H$ form a sphere $S(H)=S^{d-1}$, where $d$ is the rank of the torsion-free part of the abelianization of $H$. The BNS-invariant $\Sigma^1(H)$ is the subset of $S(H)$ given by: $$\Sigma^1(H)\mathbin{\vcentcolon =} \{[\chi]\mid \Gamma(H)^{\chi\ge0}\text{ is connected}\} \text{,}$$ where $\Gamma(H)$ is the Cayley graph of $H$ using any finite generating set, and $\Gamma(H)^{\chi\ge0}$ is the full subgraph spanned by those vertices on which $\chi$ takes non-negative values. It is standard notation to write $\Sigma^1(H)^c$ for $S(H)\setminus \Sigma^1(H)$. One main application of knowing the BNS-invariant of a group is that it reveals exactly which normal subgroups with abelian quotient are finitely generated. More precisely, we have: \begin{cit}\cite[Theorem~1.1]{bieri10}\label{cit:fin_props_bns} Let $H$ be a finitely generated group and $N$ a normal subgroup containing $[H,H]$. Then $N$ is finitely generated if and only if $[\chi]\in\Sigma^1(H)$ for every character $\chi$ with $\chi(N)=0$.
\end{cit} \subsection{Tools}\label{sec:tools} We collect here a variety of tools for deducing whether characters are in the BNS-invariant $\Sigma^1$ or its complement. Some generalize to higher $\Sigma^m$ and some do not. The first two results involve ascending HNN extensions. They deal with the cases of the base group either lying in the kernel of a character, or not. These both generalize to higher $\Sigma^m$, as we will mention in Section~\ref{sec:bnsr}. \begin{cit}\cite[Theorem~2.1]{bieri10}\label{cit:hnn_bns} Let $H=B*_t$ for $B$ finitely generated, and let $\chi \colon H\to\mathbb{R}$ be the character given by $\chi(B)=0$ and $\chi(t)=1$. Then $[\chi]\in\Sigma^1(H)$ and moreover if $B^t\subsetneq B$ then $[-\chi]\in\Sigma^1(H)^c$. \end{cit} \begin{cit}\cite[Theorem~2.3]{bieri10}\label{cit:push_Sigma_1} Let $H=B*_t$ for $B$ finitely generated, and let $\chi\colon H\to \mathbb{R}$ be a character. Suppose $\chi|_B \neq 0$ and $[\chi|_B]\in \Sigma^1(B)$. Then $[\chi]\in\Sigma^1(H)$. \end{cit} It is in general very difficult to compute the BNS-invariant using the definition. An alternate characterization of $\Sigma^1(H)$, due to Brown \cite{brown87bns}, involves inspecting actions of $H$ on $\mathbb{R}$-trees, and over the years this definition has led to some powerful technology for computing $\Sigma^1(H)$. A modern formulation is distilled in \cite{koban14}, and we review the details here. Analogues of this technique for higher $\Sigma^m$ have yet to be developed. Given a character $\chi$ of a group $H$, we call an element $h\in H$ \emph{$\chi$-hyperbolic} if $\chi(h)\neq0$. By the \emph{commuting graph} $C(J)$ of a subset $J\subseteq H$ we mean the graph whose vertex set is $J$ and which has an edge connecting $g,h\in J$ if and only if $[g,h]=1$. Finally, given $I,J\subseteq H$, we say $J$ \emph{dominates} $I$ if every element of $I$ commutes with some element of $J$. 
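Once the commuting pairs among a set of hyperbolic elements are known, checking connectivity of the commuting graph is purely mechanical. As a minimal illustration (a sketch of ours, not part of any proof below), the following checks that the commuting graph of the addresses $s$ containing both a $0$ and a $1$, truncated to length at most $4$, is connected, using only the edges coming from the relations $[y_{0t},y_{1u}]=1$:

```python
from itertools import product
from collections import deque

# Addresses s (binary strings containing both digits) index the elements y_s;
# truncate to |s| <= 4 to get a finite piece of the commuting graph C(J).
J = [''.join(p) for k in range(2, 5) for p in product('01', repeat=k)
     if '0' in p and '1' in p]

def commute(s, t):
    # Sufficient condition from the relations [y_{0t}, y_{1u}] = 1:
    # addresses lying in different halves of the tree always commute.
    return s[0] != t[0]

# Breadth-first search over the commuting graph.
seen, queue = {J[0]}, deque([J[0]])
while queue:
    s = queue.popleft()
    for t in J:
        if t not in seen and commute(s, t):
            seen.add(t)
            queue.append(t)

print(len(seen) == len(J))  # True: this truncation of C(J) is connected
```

The graph is in fact complete bipartite between the addresses starting with $0$ and those starting with $1$, which is exactly the connectivity pattern exploited in the propositions below.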
\begin{cit}[Hyperbolics dominating generators]\cite[Lemma~1.9]{koban14}\label{cit:conn_dom} Let $\chi$ be a character of a group $H$. If there exists a set $J$ of $\chi$-hyperbolic elements with $C(J)$ connected, and a set $I$ generating $H$ such that $J$ dominates $I$, then $[\chi]\in\Sigma^1(H)$. \end{cit} In summary, this provides a nice way to translate knowledge about generators and commutator relations into knowledge about the BNS-invariant. In the next subsection we compile all these tools together, ultimately computing the BNS-invariant for the Lodha--Moore groups. \subsection{Computations}\label{sec:computations} The main result of this subsection is: \begin{theorem}\label{thrm:BNS} The BNS-invariants for $G$, $G_y$, ${}_yG$ and ${}_yG_y$ are all of the form $S^2 \setminus P$ for some set $P$ with $|P|=2$. More precisely, for $G$ we have $P=\{[\chi_0],[\chi_1]\}$, for ${}_yG$ we have $P=\{[\psi_0],[\chi_1]\}$, for $G_y$ we have $P=\{[\chi_0],[-\psi_1]\}$ and for ${}_yG_y$ we have $P=\{[\psi_0],[-\psi_1]\}$. \end{theorem} We first focus on ${}_yG_y$. Recall that every character $\chi$ is of the form $\chi=a\psi_0+b\psi_1+c\psi$ for $a,b,c\in\mathbb{R}$. \begin{proposition}[North/south hemispheres]\label{prop:big_hemispheres} Let $\chi$ be a character of ${}_yG_y$ as above with $c\neq 0$. Then $[\chi]\in\Sigma^1({}_yG_y)$. \end{proposition} \begin{proof} We will apply Citation~\ref{cit:conn_dom}. Let $J\mathbin{\vcentcolon =} \{y_s \mid s\neq 0^n,1^n\}$, so every element of $J$ is $\psi$-hyperbolic. Also, every element of $J$ is in the kernels of $\psi_0$ and $\psi_1$, so in fact they are all $\chi$-hyperbolic. Note that $[y_{0t},y_{1u}]=1$ for any $t,u$, so $C(J)$ is connected. It remains to show that some generating set of ${}_yG_y$ is dominated by $J$, and we will use the generators $x_\emptyset x_1^{-1},x_1,y_0,y_{10},y_1$. These generators respectively commute with the $\chi$-hyperbolic elements $y_s$ for $s=110$, $01$, $10$, $01$ and $01$. 
\end{proof} \begin{proposition}[Most of equator]\label{prop:big_equator} Let $\chi = a\psi_0+b\psi_1+c\psi$ be a character of ${}_yG_y$ with $c=0$, and $a\neq0$ and $b\neq0$. Then $[\chi]\in\Sigma^1({}_yG_y)$. \end{proposition} \begin{proof} Let $J\mathbin{\vcentcolon =} \{y_{0^n},y_{1^n} \mid n>0\}$, so every element is $\chi$-hyperbolic. Every $y_{0^n}$ commutes with every $y_{1^m}$ for $n,m>0$, so $C(J)$ is connected. The generators $x_\emptyset x_1^{-1},x_1,y_0,y_{10},y_1$ respectively commute with the $\chi$-hyperbolic elements $y_s$ for $s=11$, $0$, $1$, $0$ and $0$. \end{proof} \begin{lemma}[Remaining four points]\label{lem:big_poles} We have $[-\psi_0],[\psi_1]\in\Sigma^1({}_yG_y)$ and $[\psi_0],[-\psi_1] \in \Sigma^1({}_yG_y)^c$. \end{lemma} \begin{proof} From (HNN7) and (HNN8) we know that ${}_yG_y = (G_y)*_{y_0^{-1}}$ and ${}_yG_y = ({}_yG)*_{y_1}$. Note that $-\psi_0(G_y)=0$ and $-\psi_0(y_0^{-1})=1$. Similarly $\psi_1({}_yG)=0$ and $\psi_1(y_1)=1$. The result now follows from Citation~\ref{cit:hnn_bns}. \end{proof} These three results combine to yield: \begin{corollary} $\Sigma^1({}_yG_y)^c = \{[\psi_0],[-\psi_1]\}$. \qed \end{corollary} Next we consider $G_y$. The cases proceed essentially the same as for ${}_yG_y$, except now the first case is shorter, thanks to having already handled ${}_yG_y$. \begin{lemma}[North/south hemispheres]\label{lem:right_hemispheres} Let $\chi=a\chi_0+b\psi_1+c\psi$ be a character of $G_y$ with $c\neq0$. Then $[\chi] \in \Sigma^1(G_y)$. \end{lemma} \begin{proof} From (HNN3) we have $G_y = ({}_yG_y)*_{x_\emptyset}$. The restriction of $\chi$ to ${}_yG_y$ has non-zero $\psi$ coefficient. The result now follows from Proposition~\ref{prop:big_hemispheres} and Citation~\ref{cit:push_Sigma_1}. \end{proof} \begin{proposition}[Most of equator]\label{prop:right_equator} Let $\chi = a\chi_0+b\psi_1+c\psi$ be a character of $G_y$ with $c=0$, and $a\neq0$ and $b\neq0$. Then $[\chi]\in\Sigma^1(G_y)$. 
\end{proposition} \begin{proof} We apply Citation~\ref{cit:conn_dom}. Let $J\mathbin{\vcentcolon =} \{x_{0^n},y_{1^n} \mid n>0\}$, so every element is $\chi$-hyperbolic. Every $x_{0^n}$ commutes with every $y_{1^m}$ for $n,m>0$, so $C(J)$ is connected. The generators $x_\emptyset x_1^{-1},x_1,y_0,y_{10},y_1$ respectively commute with the $\chi$-hyperbolic elements $y_{11}$, $x_0$, $y_1$, $x_0$ and $x_0$. \end{proof} \begin{lemma}[Remaining four points]\label{lem:right_poles} We have $[-\chi_0],[\psi_1]\in\Sigma^1(G_y)$ and $[\chi_0],[-\psi_1]\in\Sigma^1(G_y)^c$. \end{lemma} \begin{proof} By (HNN3) and (HNN4) we know that $G_y = ({}_yG_y)*_{x_\emptyset}$ and $G_y = G*_{y_1}$. Note that $-\chi_0({}_yG_y)=0$ (since here ${}_yG_y$ means $G_y(1)$) and $-\chi_0(x_\emptyset)=1$. Similarly $\psi_1(G)=0$ and $\psi_1(y_1)=1$. The result now follows from Citation~\ref{cit:hnn_bns}. \end{proof} \begin{corollary} $\Sigma^1(G_y)^c = \{[\chi_0],[-\psi_1]\}$. \qed \end{corollary} Parallel arguments applied to ${}_yG$ and $G$ now quickly give us: \begin{corollary} $\Sigma^1({}_yG)^c = \{[\psi_0],[\chi_1]\}$ and $\Sigma^1(G)^c = \{[\chi_0],[\chi_1]\}$. \qed \end{corollary} This finishes the proof of Theorem~\ref{thrm:BNS}. Note that $\Sigma^1(F)=S(F)\setminus\{[\chi_0],[\chi_1]\}$, so for $G$ we now know that $[\chi]\in\Sigma^1(G)$ if and only if $\chi(F)=0$ or else $[\chi|_F]\in\Sigma^1(F)$. We can phrase this as: the inclusion $F\hookrightarrow G$ induces an isomorphism in $\Sigma^1(-)^c$. \section{The BNSR-invariants}\label{sec:bnsr} Bieri and Renz \cite{bieri88} extended the invariant $\Sigma^1$ to a family of invariants $\Sigma^m$ for $m\in\mathbb{N}$. These form a nested chain of subsets of the character sphere $S(H)$ of a group $H$, namely $$S(H)\supseteq \Sigma^1(H) \supseteq \Sigma^2(H) \supseteq \cdots$$ with $\Sigma^\infty(H) \mathbin{\vcentcolon =} \bigcap\limits_{m\in\mathbb{N}}\Sigma^m(H)$.
The invariants $\Sigma^m(H)$ are called the \emph{Bieri--Neumann--Strebel--Renz (BNSR) invariants} of $H$. In this section we show that a full computation of all the $\Sigma^m$ for the Lodha--Moore groups would follow from a single conjecture, namely that $\ker(\psi)$ is of type $\F_\infty$. There is evidence that this is true, but it appears to be significantly more difficult than proving that the Lodha--Moore groups themselves are of type $\F_\infty$, which was already quite difficult in \cite{lodha14}. We can at least prove $\ker(\psi)$ is finitely presented; this allows us to fully compute $\Sigma^2$ of all the Lodha--Moore groups. We will make almost no use of the definition itself, but we state it here for completeness; see \cite[Section~1.2]{bieri10}. \begin{definition}[BNSR-invariants]\label{def:bnsr} Let $H$ be a group with finite presentation $\langle S\mid R\rangle$. Let $\Gamma$ be the Cayley graph of $H$ with respect to $S$. Pick an $H$-invariant orientation for each edge and glue in a $2$-cell for each relation in $R$, equivariantly along the appropriate loops in the graph. Call the resulting $2$-complex $\Gamma^2$. For a character $\chi$ of $H$ and $n\in\mathbb{Z}$, let $\Gamma^2_{\chi\ge n}$ be the full subcomplex spanned by vertices $h$ with $\chi(h)\ge n$. Now the definition of $\Sigma^2(H)$ is that $[\chi]\in\Sigma^2(H)$ if $\Gamma^2_{\chi\ge 0}$ is connected and there exists $n\le0$ such that the inclusion $\Gamma^2_{\chi\ge 0} \subseteq \Gamma^2_{\chi\ge n}$ induces the trivial map in $\pi_1$. For $H$ of type $\F_m$, the definition of $\Sigma^m(H)$ is similar. Let $\Gamma^m$ be the $m$-skeleton of the universal cover of a $K(H,1)$ with finite $m$-skeleton. Then $[\chi]$ is in $\Sigma^m(H)$ if and only if the above inclusion (for some $n$) induces the trivial map in $\pi_k$ for all $k<m$.
\end{definition} One main application of the BNSR-invariants is the following: \begin{cit}\cite[Theorem~1.1]{bieri10}\label{cit:bnsr_fin_props} Let $H$ be a group of type $\F_m$ and $[H,H]\le N\triangleleft H$. Then $N$ is of type $\F_m$ if and only if for every non-zero character $\chi$ of $H$ with $\chi(N)=0$, we have $[\chi]\in\Sigma^m(H)$. \end{cit} In particular the BNSR-invariants of a group of type $\F_\infty$ give a complete catalog of exactly which normal subgroups containing the commutator subgroup have which finiteness properties. The Lodha--Moore groups are all of type $\F_\infty$. Lodha has proved this in a recent preprint on arXiv \cite{lodha14} for $G$, and his proof works in parallel for $G_y$, ${}_yG$ and ${}_yG_y$. The idea is to construct a contractible space $X$ on which $G$ acts with stabilizers of type $\F_\infty$ and with finitely many orbits of cells in each dimension, after which it follows from classical results that $G$ is of type $\F_\infty$. The ``hard part'' is proving that $X$ is contractible. Since we already know that the Lodha--Moore groups are of type $\F_\infty$, and since we have shown earlier that they decompose as nice HNN extensions, it is not too hard to compute a large piece of the BNSR-invariants. The key tools are the following generalizations of Citations~\ref{cit:hnn_bns} and~\ref{cit:push_Sigma_1}. \begin{cit}\cite[Theorem~2.1]{bieri10}\label{cit:hnn_bnsr} Let $H=B*_t$ for $B$ of type $\F_m$ ($m\in\mathbb{N}\cup\{\infty\}$), and let $\chi \colon H\to\mathbb{R}$ be the character given by $\chi(B)=0$ and $\chi(t)=1$. Then $[\chi]\in\Sigma^m(H)$. \end{cit} \begin{lemma}\label{lem:push_Sigma_m} Let $H$ be a group that is an ascending HNN extension $H=B\ast_t$ for $B$ a subgroup of type $\F_m$ ($m\in\mathbb{N}\cup\{\infty\}$). Let $\chi\colon H\to \mathbb{R}$ be a character. Suppose $\chi|_B \neq 0$ and $[\chi|_B]\in \Sigma^m(B)$. Then $[\chi]\in\Sigma^m(H)$. 
\end{lemma} \begin{proof} This follows in the same way as for the $m=\infty$ case in \cite[Theorem~2.3]{bieri10}, namely by combining \cite[Proposition~4.1]{meinert96} with \cite[Theorem~B]{meinert97} (and observing that for $m\ge2$, $\Sigma^m(H)=\Sigma^2(H)\cap\Sigma^m(H,\mathbb{Z})$ for any $H$). \end{proof} If the following conjecture holds for all $m\in\mathbb{N}\cup\{\infty\}$, then, as we will soon show, we have a full computation of the BNSR-invariants for all four Lodha--Moore groups $G$, $G_y$, ${}_yG$, ${}_yG_y$. \begin{conjecture}[$\textrm{C}_m$]\label{conj:ker_psi_F_m} The kernel of $\psi$ in some Lodha--Moore group is of type $\F_m$. \end{conjecture} Note that for $m\ge\ell$, ($\textrm{C}_m$) implies ($\textrm{C}_\ell$). We have already seen that ($\textrm{C}_1$) is true, since $[\pm\psi]\in\Sigma^1(G)$. We have evidence to suggest that ($\textrm{C}_m$) holds for all $m$ (meaning that ($\textrm{C}_\infty$) holds), but verifying it has proved to be significantly more difficult than Lodha's proof that $G$ is of type $\F_\infty$, and this was already quite difficult. As such, for the rest of the section we leave these as conjectures and show how, for any $m$, if ($\textrm{C}_m$) is true then the full computation of $\Sigma^m$ falls out. Then at the end we will verify ($\textrm{C}_2$) ``by hand'' and so fully compute $\Sigma^2$ for all four Lodha--Moore groups. \begin{observation}\label{obs:one_for_all} For any $m$, if ($\textrm{C}_m$) holds for some Lodha--Moore group $G$, $G_y$, ${}_yG$, ${}_yG_y$, then it holds for all of them. \end{observation} \begin{proof} For any two Lodha--Moore groups $H$ and $H'$, (HNN1) through (HNN8) tell us that $H'$ is either an ascending HNN extension of $H$ or else an ascending HNN extension of an ascending HNN extension of $H$. (For example ${}_yG_y=(G*_{y_0^{-1}})*_{y_1}$.) For any such $H=B*_t$, $\pm\psi|_B=\pm\psi$. 
By Citation~\ref{cit:bnsr_fin_props}, $\ker(\psi)$ in some group is of type $\F_m$ if and only if $[\pm\psi]$ is in $\Sigma^m$ of that group. The result now follows from repeated applications of Lemma~\ref{lem:push_Sigma_m}. \end{proof} \begin{corollary}\label{cor:psi_in_Sigma_infty} Assume ($\textrm{C}_m$) is true. Then for any Lodha--Moore group $H$, $[\pm\psi]\in\Sigma^m(H)$. \end{corollary} \begin{proof} This is immediate from Observation~\ref{obs:one_for_all} and Citation~\ref{cit:bnsr_fin_props}. \end{proof} \begin{lemma}[Non-zero $\psi$ coefficient]\label{lem:small_hemispheres_bnsr} Suppose ($\textrm{C}_m$) holds. Let $\chi$ be a character of a Lodha--Moore group $H$, with non-zero $\psi$ coefficient. Then $[\chi]\in\Sigma^m(H)$. \end{lemma} \begin{proof} For the first case, suppose that if $H=G$ or ${}_yG$ then $\chi$ has $\chi_1$ coefficient zero, and if $H=G_y$ or ${}_yG_y$ then $\chi$ has $\psi_1$ coefficient zero (in other words, we suppose that the coefficient of whichever ``basis character on the right'' is defined for $H$ equals zero). For whichever $H$ we have, let $B$ and $t$ be such that $H=B*_t$ is the relevant one of (HNN1), (HNN3), (HNN5) or (HNN7). Since $\chi$ has non-zero $\psi$ coefficient, in all four cases $\chi|_B\neq0$, and thanks to our assumption in fact $\chi|_B = a\psi$ for some $a\in\mathbb{R}^\times$. But $[a\psi]\in\Sigma^m(B)$ by ($\textrm{C}_m$), Citation~\ref{cit:bnsr_fin_props} and Observation~\ref{obs:one_for_all}. Hence by Lemma~\ref{lem:push_Sigma_m} we conclude $[\chi]\in\Sigma^m(H)$. Now suppose $\chi$ has non-zero $\chi_1$ or $\psi_1$ coefficient, whichever of the two is defined for our $H$. Again we consider HNN decompositions, this time (HNN2), (HNN4), (HNN6) and (HNN8); let $B$ and $t$ be such that $H=B*_t$ appears on this list.
The restriction of $\chi$ to $B$ now has $\chi_1$ or $\psi_1$ coefficient zero, so by the previous paragraph $[\chi|_B]\in\Sigma^m(B)$ and hence $[\chi]\in\Sigma^m(H)$. \end{proof} Next we focus on the case when $\chi$ has $\psi$ coefficient zero. (At this point we do not need to care whether ($\textrm{C}_m$) is true.) Since $\Sigma^1(H)^c\subseteq \Sigma^m(H)^c$ for any group $H$, we already know some points that are not in $\Sigma^m$ of our groups; for example $[\chi_0]\in\Sigma^m(G)^c$, $[-\psi_1]\in\Sigma^m(G_y)^c$, etc. The next lemma looks at character classes of the form $[\pm\chi_i]$ and $[\pm\psi_i]$ and shows that they are in $\Sigma^m$ if and only if they are in $\Sigma^1$. \begin{lemma}\label{lem:poles_bnsr} We have $[-\chi_0],[-\chi_1]\in\Sigma^m(G)$, $[-\chi_0],[\psi_1]\in\Sigma^m(G_y)$, $[-\psi_0],[-\chi_1]\in\Sigma^m({}_yG)$ and $[-\psi_0],[\psi_1]\in\Sigma^m({}_yG_y)$. \end{lemma} \begin{proof} This is just a matter of applying Citation~\ref{cit:hnn_bnsr} to the right HNN decompositions. For example, (HNN1), which is $G=({}_yG)*_{x_\emptyset}$, shows $[-\chi_0]\in\Sigma^m(G)$, since $-\chi_0({}_yG)=0$ (recall here ${}_yG$ means $G(1)$) and $-\chi_0(x_\emptyset)=1$. As another example, (HNN8), which is ${}_yG_y={}_yG*_{y_1}$, shows that $[\psi_1]\in\Sigma^m({}_yG_y)$, since $\psi_1({}_yG)=0$ and $\psi_1(y_1)=1$. The other cases all follow similarly easily. \end{proof} With the multiples of the basis characters fully understood, we can now take care of the remaining case, when the $\psi$ coefficient is zero but the other two are not. It turns out that the results mirror the situation for $F$ done in \cite{bieri10}. First we handle ``three quarters'' of this situation (compare to Corollary~2.4 in \cite{bieri10}). \begin{lemma}\label{lem:long_interval} Let $H\in\{G,G_y,{}_yG,{}_yG_y\}$. If $H=G$ or $G_y$ let $\xi_0=\chi_0$ and if $H={}_yG$ or ${}_yG_y$ let $\xi_0=\psi_0$.
Similarly let $\xi_1$ be whichever of $\chi_1$ or $-\psi_1$ is defined and non-zero on $H$. Let $\chi=a\xi_0+b\xi_1$ be a character of $H$ with $\psi$ coefficient zero, and with $a,b\neq0$. If $a<0$ or $b<0$ then $[\chi]\in\Sigma^\infty(H)$. \end{lemma} \begin{proof} First suppose $a<0$. Let $B$ and $t$ be such that the expression $H=B*_t$ is the relevant one of (HNN2), (HNN4), (HNN6) or (HNN8). In all cases, $\xi_1(B)=0$, so $[\chi|_B]=[-\xi_0]$, which is in $\Sigma^\infty(B)$ by Lemma~\ref{lem:poles_bnsr}, and so by Lemma~\ref{lem:push_Sigma_m} we conclude that $[\chi]\in\Sigma^\infty(H)$. Now suppose $b<0$. This time let $B$ and $t$ be such that $H=B*_t$ is the relevant one of (HNN1), (HNN3), (HNN5) or (HNN7). Then $\xi_0(B)=0$ so $[\chi|_B]=[b\xi_1]$. This is either $[-\chi_1]$ or $[\psi_1]$, which in either case is in $\Sigma^\infty(B)$. Hence by Lemma~\ref{lem:push_Sigma_m}, $[\chi]\in\Sigma^\infty(H)$. \end{proof} Lastly we handle the remaining ``one quarter'' of this situation. We find that in this case, even though $[\chi]\in\Sigma^1(H)$, in fact $[\chi]\in\Sigma^2(H)^c$, so of course $[\chi]\in\Sigma^\infty(H)^c$. We could even show the stronger fact that $[\chi]\in\Sigma^2(H,R)^c$ for any $R$, but we have not (and will not) define the homological invariants $\Sigma^m(H,R)$, and proving this would require a long digression introducing them, plus a couple long proofs, all of which would be simple imitations of those found in Section~2.3 of \cite{bieri10}. As such, we will only cover the homotopical case here, for which the proof amounts to a citation. \begin{observation} Let $H\in\{G,G_y,{}_yG,{}_yG_y\}$. If $H=G$ or $G_y$ let $\xi_0=\chi_0$ and if $H={}_yG$ or ${}_yG_y$ let $\xi_0=\psi_0$. Similarly let $\xi_1$ be whichever of $\chi_1$ or $-\psi_1$ is defined and non-zero on $H$. Let $\chi=a\xi_0+b\xi_1$ be a character of $H$ with $\psi$ coefficient zero. Suppose that $a>0$ and $b>0$. Then $[\chi]\in\Sigma^2(H)^c$. 
\end{observation} \begin{proof} An equivalent way to phrase this is to say that the convex hull in $S(H)$ of the two points in $\Sigma^1(H)^c$ lies in $\Sigma^2(H)^c$. But this is immediate from \cite[Theorem~2.6]{bieri10}, since the Lodha--Moore groups are all finitely presented and contain no non-abelian free subgroups. \end{proof} In summary, for any $m\in\mathbb{N}\cup\{\infty\}$, assuming the truth of ($\textrm{C}_m$), we have fully computed $\Sigma^m(H)$ for all four Lodha--Moore groups $H$. Namely, ($\textrm{C}_1$) is true and $\Sigma^1(H)$ is computed in Theorem~\ref{thrm:BNS}, and: \begin{theorem}\label{thrm:BNSR} Let $m>1$, allowing for $m=\infty$. If ($\textrm{C}_m$) is true, then for each Lodha--Moore group $H$, we have that $\Sigma^m(H)$ equals $\Sigma^1(H)$ with the convex hull of $\Sigma^1(H)^c$ removed. \qed \end{theorem} \subsection{The case of $\Sigma^2$}\label{sec:Sigma^2} It turns out we can verify the conjecture ($\textrm{C}_2$) ``by hand'' and hence compute $\Sigma^2(H)$ for any Lodha--Moore group $H$. We need to show that $\ker(\psi)$ in some $H$ is finitely presented. We will do this for $H=G$, i.e., we will show that $K\mathbin{\vcentcolon =}\ker(\psi)\le G$ is finitely presented. The starting point is the finite presentation for $G$ (denoted $G_0$ there) at the end of \cite[Section~3]{lodha13}.
This presentation has three generators, $$a=x_\emptyset \text{, } b=x_1 \text{ and } c=y_{10} \text{,}$$ and nine relations, \begin{itemize} \item[($\textrm{R1}$)] $ba^{-2}ba^2 b^{-1}a^{-1}b^{-1}a = 1$ \item[($\textrm{R2}$)] $ba^{-3}ba^3 b^{-1}a^{-2}b^{-1}a^2 = 1$ \item[($\textrm{R3}$)] $ca^2 b^{-1}a^{-1}c^{-1}aba^{-2} = 1$ \item[($\textrm{R4}$)] $ab^2 a^{-1}b^{-1}ab^{-1}a^{-1}caba^{-1}bab^{-2}a^{-1}c^{-1} = 1$ \item[($\textrm{R5}$)] $ca^{-1}bac^{-1}a^{-1}b^{-1}a = 1$ \item[($\textrm{R6}$)] $ca^{-2}ba^2 c^{-1}a^{-2}b^{-1}a^2 = 1$ \item[($\textrm{R7}$)] $caca^{-1}c^{-1}ac^{-1}a^{-1} = 1$ \item[($\textrm{R8}$)] $ca^2 ca^{-2}c^{-1}a^2 c^{-1}a^{-2} = 1$ \item[($\textrm{R9}$)] $b^2 a^{-1}b^{-1}aca^{-1}bc^{-1}a^{-1}cab^{-1}ab^{-1}c^{-1} = 1$ \end{itemize} (Note that ($\textrm{R4}$) and ($\textrm{R9}$) are not written right in \cite{lodha13} (v3 on arXiv); as written here they are the correct translations into $a,b,c$ of the relations $[y_{10},x_{01}]=1$ and $y_{10} = x_{10} y_{100} y_{1010}^{-1} y_{1011}$.) Instead of using $b=x_1$, we first rephrase everything using $d\mathbin{\vcentcolon =} x_0$. We do this because $d$ and $c$ commute, and this ends up making everything that follows more elegant. 
We have $d=a^2 b^{-1}a^{-1}$ and $b=a^{-1}d^{-1}a^2$, so a finite presentation for $G$ with generating set $a,d,c$ has the nine relations: \begin{itemize} \item[($\textrm{R1}'$)] $a^{-1}d^{-1}a^{-1}d^{-1}a^2 da^{-2}da^2 = 1$ \item[($\textrm{R2}'$)] $a^{-1}d^{-1}a^{-2}d^{-1}a^3 da^{-3}da^3 = 1$ \item[($\textrm{R3}'$)] $cdc^{-1}d^{-1} = 1$ \item[($\textrm{R4}'$)] $d^{-1}ad^{-1}a^{-1}d^2 cd^{-2}ada^{-1}dc^{-1} = 1$ \item[($\textrm{R5}'$)] $ca^{-2}d^{-1}a^3 c^{-1}a^{-3}da^2 = 1$ \item[($\textrm{R6}'$)] $ca^{-3}d^{-1}a^4 c^{-1}a^{-4}da^3 = 1$ \item[($\textrm{R7}'$)] $caca^{-1}c^{-1}ac^{-1}a^{-1} = 1$ \item[($\textrm{R8}'$)] $ca^2 ca^{-2}c^{-1}a^2 c^{-1}a^{-2} = 1$ \item[($\textrm{R9}'$)] $a^{-1}d^{-1}ad^{-1}a^{-1}da^2 ca^{-2}d^{-1}a^2c^{-1}a^{-1}ca^{-1}d^2 ac^{-1} = 1$ \end{itemize} Let $R$ denote the set of these nine relations. Let $X$ be the presentation $2$-complex for this presentation, so $\pi_1(X)\cong G$. Let $Y\to X$ be the cover with $\pi_1(Y)\cong K$. The $1$-skeleton of $Y$ consists of vertices $v_n$ for $n\in\mathbb{Z}$ and edges $a_n,d_n,c_n$ for each $n\in\mathbb{Z}$, where $a_n$ and $d_n$ are loops based at $v_n$, and $c_n$ goes from $v_n$ to $v_{n+1}$. Write $c_n^{-1}$ for the opposite orientation of $c_n$. Fix $v_0$ as a basepoint. The fundamental group of $Y$ is clearly generated by edge paths of the form \begin{align*} & c_0,c_1,\dots,c_n,a_n,c_n^{-1},\dots,c_0^{-1} \text{, }\\ & c_0,c_1,\dots,c_n,d_n,c_n^{-1},\dots,c_0^{-1} \text{, }\\ & c_{-1}^{-1},\dots,c_{-n}^{-1},a_{-n},c_{-n},\dots,c_{-1} \text{ and }\\ & c_{-1}^{-1},\dots,c_{-n}^{-1},d_{-n},c_{-n},\dots,c_{-1} \end{align*} for $n\ge0$. By slight abuse of notation we will call these $a_n$ and $d_n$, so for example $a_n\in\pi_1(Y)$ is the loop that goes from $v_0$ to $v_n$ along $c$-edges, then loops around $a_n$, and then comes back to $v_0$ along $c$-edges.
We now have an infinite generating set for $K$, namely $$\{a_n,d_n\mid n\in\mathbb{Z}\}\text{.}$$ In fact this is just the generating set obtained for $K$ from Schreier's Lemma, using the generating set $\{a,d,c\}$ and the transversal $\{c^n\mid n\in\mathbb{Z}\}$ for $K\hookrightarrow\langle a,d,c\rangle\twoheadrightarrow\langle c\rangle$. We also know something about the $2$-cells of $Y$. For any $(w=1)\in R$, and every $n\in\mathbb{Z}$, let $w_n$ be the path in $Y^{(1)}$ traversed by reading $w$ starting at $v_n$. For example, if $w=ca^{-2}d^{-1}a^3c^{-1}a^{-3}da^2$ is from ($\textrm{R5}'$), then $w_n=a_{n+1}^{-2}d_{n+1}^{-1}a_{n+1}^3 a_n^{-3}d_n a_n^2$. These $w_n$ are loops since the net sum of exponents of $c$ for any $w$ is zero (in other words $\psi(w)=0$). Then $Y$ is obtained from $Y^{(1)}$ by attaching a $2$-cell along each boundary $w_n$. Since $\pi_1(Y)\cong K$, at this point we have an infinite presentation for $K$. The generators are the $a_n,d_n$ for $n\in\mathbb{Z}$, and the relations are $w_n=1$ for $w\in R$ and $n\in\mathbb{Z}$. 
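The bookkeeping behind $w_n$ can be automated: read the word $w$ letter by letter, let $c^{\pm1}$ shift the current vertex, and subscript every other letter by the vertex at which it is read. The sketch below is our own illustration (the function name is made up), checked against the ($\textrm{R5}'$) example from the text.

```python
# Illustration (ours): compute w_n by reading a relator word starting at
# vertex v_n.  Words are strings; uppercase = inverse; 'c' moves from v_m
# to v_{m+1} and 'C' moves back, while 'a'/'d' letters pick up a subscript.

def read_at(word, n):
    """Return [(letter, vertex), ...] for the non-c letters of `word`,
    read as an edge path starting at v_n."""
    out, m = [], n
    for ch in word:
        if ch == 'c':
            m += 1
        elif ch == 'C':
            m -= 1
        else:
            out.append((ch, m))
    return out

# w from (R5'): c a^{-2} d^{-1} a^3 c^{-1} a^{-3} d a^2
w = 'cAADaaaCAAAdaa'
expected = ([('A', 1)] * 2 + [('D', 1)] + [('a', 1)] * 3 +
            [('A', 0)] * 3 + [('d', 0)] + [('a', 0)] * 2)
# i.e. a_1^{-2} d_1^{-1} a_1^3 a_0^{-3} d_0 a_0^2, as stated above
assert read_at(w, 0) == expected
```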
Hence our presentation for $K$ has the nine infinite families of relations: \begin{itemize} \item[($\textrm{K1}_n'$)] $a_n^{-1}d_n^{-1}a_n^{-1}d_n^{-1}a_n^2 d_n a_n^{-2}d_n a_n^2 = 1$ \item[($\textrm{K2}_n'$)] $a_n^{-1}d_n^{-1}a_n^{-2}d_n^{-1}a_n^3 d_n a_n^{-3}d_n a_n^3 = 1$ \item[($\textrm{K3}_n'$)] $d_{n+1}d_n^{-1} = 1$ \item[($\textrm{K4}_n'$)] $d_n^{-1}a_n d_n^{-1}a_n^{-1}d_n^2 d_{n+1}^{-2}a_{n+1}d_{n+1}a_{n+1}^{-1}d_{n+1} = 1$ \item[($\textrm{K5}_n'$)] $a_{n+1}^{-2}d_{n+1}^{-1}a_{n+1}^3 a_n^{-3}d_n a_n^2 = 1$ \item[($\textrm{K6}_n'$)] $a_{n+1}^{-3}d_{n+1}^{-1}a_{n+1}^4 a_n^{-4}d_n a_n^3 = 1$ \item[($\textrm{K7}_n'$)] $a_{n+1}a_{n+2}^{-1}a_{n+1}a_n^{-1} = 1$ \item[($\textrm{K8}_n'$)] $a_{n+1}^2 a_{n+2}^{-2}a_{n+1}^2 a_n^{-2} = 1$ \item[($\textrm{K9}_n'$)] $a_n^{-1}d_n^{-1}a_n d_n^{-1}a_n^{-1}d_n a_n^2 a_{n+1}^{-2}d_{n+1}^{-1}a_{n+1}^2 a_n^{-1}a_{n+1}^{-1}d_{n+1}^2 a_{n+1} = 1$ \end{itemize} Our first goal is to reduce this presentation to one with finitely many generators. Define $$z\mathbin{\vcentcolon =} a_0^{-1}a_1 \text{.}$$ \begin{lemma}[Finite generating set]\label{lem:K_fg} For any $n\in\mathbb{Z}$ we have $a_n=a_0 z^n$ and $d_n=d_0$. \end{lemma} \begin{proof} By definition, $z^n = (a_0^{-1}a_1)^n$. By the relations ($\textrm{K7}_n'$), $a_n^{-1}a_{n+1} = a_{n+1}^{-1}a_{n+2}$ for all $n\in\mathbb{Z}$. Hence for $n\ge0$ we have $$z^n = (a_0^{-1}a_1)(a_1^{-1}a_2)\cdots(a_{n-1}^{-1}a_n) = a_0^{-1}a_n$$ and for $n<0$ we have $$z^n=(a_0^{-1}a_{-1})(a_{-1}^{-1}a_{-2})\cdots(a_{n+1}^{-1}a_n) = a_0^{-1}a_n \text{.}$$ In either case, $a_n=a_0 z^n$. The fact that $d_n=d_0$ for all $n$ is just the content of the relations ($\textrm{K3}_n'$). \end{proof} Setting $a=a_0$ and $d=d_0$ (which were their names in $G$ anyway), we can now convert our presentation for $K$ into one using just the generators $a$, $d$ and $z$. 
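Before carrying out the substitution symbolically, a mechanical check (ours, not from the paper) is reassuring: substituting $a_n=az^n$ and $d_n=d$ into the family ($\textrm{K4}_n'$) and freely and cyclically reducing always produces the same length-four word, a cyclic rotation of $d^{-1}zdz^{-1}$, independently of $n$. The Python sketch below does this for a range of $n$; uppercase letters denote inverses and the helper names are our own.

```python
# Check (ours): substituting a_n = a z^n, d_n = d into (K4_n') and
# cyclically reducing gives d^{-1} z d z^{-1} up to rotation, for every n.

def free_reduce(word):
    stack = []
    for ch in word:
        if stack and stack[-1] == ch.swapcase():
            stack.pop()
        else:
            stack.append(ch)
    return ''.join(stack)

def cyclic_reduce(word):
    """Free reduction plus cancellation of inverse first/last letters."""
    w = free_reduce(word)
    while len(w) >= 2 and w[0] == w[-1].swapcase():
        w = free_reduce(w[1:-1])
    return w

def inv(w):
    """Formal inverse: reverse the word and invert each letter."""
    return w[::-1].swapcase()

def a_(n):
    """The generator a_n rewritten as a z^n."""
    return 'a' + ('z' * n if n >= 0 else 'Z' * -n)

def k4_prime(n):
    """(K4_n'): d_n^{-1} a_n d_n^{-1} a_n^{-1} d_n^2 d_{n+1}^{-2}
    a_{n+1} d_{n+1} a_{n+1}^{-1} d_{n+1}, with every d_m replaced by d."""
    return ('D' + a_(n) + 'D' + inv(a_(n)) + 'dd' + 'DD'
            + a_(n + 1) + 'd' + inv(a_(n + 1)) + 'd')

for n in range(-5, 6):
    r = cyclic_reduce(k4_prime(n))
    # r is d^{-1} z d z^{-1} up to cyclic rotation, independent of n
    assert len(r) == 4 and r in 'DzdZ' * 2
```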
After the substitutions $a_n=az^n$ and $d_n=d$, and some free cyclic reductions, our nine families of relations become: \begin{itemize} \item[($\textrm{K1}_n$)] $d^{-1}z^{-n}a^{-1}d^{-1}az^n az^n dz^{-n}a^{-1}z^{-n}a^{-1}daz^n = 1$ \item[($\textrm{K2}_n$)] $d^{-1}z^{-n}a^{-1}z^{-n}a^{-1}d^{-1}az^n az^n az^n dz^{-n}a^{-1}z^{-n}a^{-1}z^{-n}a^{-1}daz^n az^n = 1$ \item[($\textrm{K3}_n$)] $1=1$ \item[($\textrm{K4}_n$)] $d^{-1}zdz^{-1} = 1$ \item[($\textrm{K5}_n$)] $z^{-1}a^{-1}z^{-(n+1)}a^{-1}d^{-1}az^{n+1}az^{n+1}aza^{-1}z^{-n}a^{-1}z^{-n}a^{-1}daz^n a = 1$ \item[($\textrm{K6}_n$)] $z^{-1}a^{-1}z^{-(n+1)}a^{-1}z^{-(n+1)}a^{-1}d^{-1}az^{n+1}az^{n+1}az^{n+1}aza^{-1}z^{-n}a^{-1}z^{-n}a^{-1} z^{-n}a^{-1}daz^n az^n a = 1$ \item[($\textrm{K7}_n$)] $1 = 1$ \item[($\textrm{K8}_n$)] $zaz^{-1}a^{-1}z^{-1}aza^{-1} = 1$ \item[($\textrm{K9}_n$)] $a^{-1}d^{-1}az^n d^{-1}z^{-n}a^{-1}daz^n az^{-1}a^{-1}z^{-(n+1)}a^{-1}d^{-1}az^{n+1}aza^{-1}z^{-(n+1)}a^{-1}d^2az = 1$ \end{itemize} Note that ($\textrm{K3}_n$) and ($\textrm{K7}_n$) are trivial, and that ($\textrm{K4}_n$) and ($\textrm{K8}_n$) are independent of $n$, so we may rename them ($\textrm{K4}$) and ($\textrm{K8}$). We can write them in the following more concise forms: \begin{itemize} \item[($\textrm{K4}$)] $[z,d]=1$. \item[($\textrm{K8}$)] $[z,aza^{-1}]=1$. \end{itemize} To recap, we have $K\cong\langle a,d,z\mid (\textrm{K1}_n) \text{ through }(\textrm{K9}_n) \text{ hold for all }n\rangle$ and we would like to find a finite presentation. We introduce four additional relations that will prove useful. \begin{itemize} \item[($\textrm{K10}$)] $[z,ada^{-1}]=1$ \item[($\textrm{K11}$)] $[z,a^2 da^{-2}]=1$ \item[($\textrm{K12}$)] $[z,a^2 za^{-2}]=1$ \item[($\textrm{K13}$)] $[z,a^3 za^{-3}]=1$ \end{itemize} \begin{observation} ($\textrm{K10}$) through ($\textrm{K13}$) hold in $K$. \end{observation} \begin{proof} Convert everything back into the generators $x_s$, $y_s$.
We have $z = y_{110}y_{10}^{-1}$, $ada^{-1} = x_{00}$, $a^2 da^{-2} = x_{000}$, $a^2 za^{-2} = y_{01}y_{001}^{-1}$ and $a^3 za^{-3} = y_{001}y_{0001}^{-1}$. From this it is clear that ($\textrm{K10}$) through ($\textrm{K13}$) are true statements. \end{proof} \begin{proposition}[Finite set of relations] The relations ($\textrm{K1}_n$) through ($\textrm{K9}_n$) ($n\in\mathbb{Z}$) are all deducible from the eleven relations ($\textrm{K1}_0$), ($\textrm{K2}_0$), ($\textrm{K4}$), ($\textrm{K5}_0$), ($\textrm{K6}_0$), ($\textrm{K8}$), ($\textrm{K9}_0$), ($\textrm{K10}$), ($\textrm{K11}$), ($\textrm{K12}$) and ($\textrm{K13}$). \end{proposition} \begin{proof} First look at ($\textrm{K1}_n$). Applying ($\textrm{K4}$), we find that ($\textrm{K1}_n$) is equivalent to $$d^{-1}a^{-1}d^{-1}az^n ada^{-1}z^{-n} a^{-1}da = 1$$ for all $n\in\mathbb{Z}$. Now applying ($\textrm{K10}$), this is equivalent to $$d^{-1}a^{-1}d^{-1}a^2 da^{-2}da = 1 \text{.}$$ Since this holds for all $n\in\mathbb{Z}$, and the last expression is independent of $n$, we conclude that the family ($\textrm{K1}_n$) is derivable from just ($\textrm{K1}_0$), ($\textrm{K4}$) and ($\textrm{K10}$). Now look at ($\textrm{K2}_n$). Applying ($\textrm{K4}$) and conjugating, ($\textrm{K2}_n$) is equivalent to $$z^n ad^{-1}a^{-1}z^{-n}a^{-1}d^{-1}az^n az^n ada^{-1}z^{-n}a^{-1}z^{-n} a^{-1}da = 1$$ for all $n\in\mathbb{Z}$. Now we apply ($\textrm{K10}$) to get $$ad^{-1}a^{-2}d^{-1}az^n a^2 da^{-2}z^{-n} a^{-1}da = 1$$ and then ($\textrm{K11}$) to get $$ad^{-1}a^{-2}d^{-1}a^3 da^{-3}da = 1 \text{.}$$ This is independent of $n$, so we conclude the family ($\textrm{K2}_n$) is derivable from ($\textrm{K2}_0$), ($\textrm{K4}$), ($\textrm{K10}$) and ($\textrm{K11}$). The next family is ($\textrm{K5}_n$). After conjugating and applying ($\textrm{K8}$), we see ($\textrm{K5}_n$) is equivalent to $$az^{-1}a^{-1}z^{-1}a^{-1}d^{-1}a^2 za^{-1}z^{n+1}a^2 za^{-2}z^{-n}a^{-1}da = 1$$ for any $n\in\mathbb{Z}$. 
Now we apply ($\textrm{K12}$) and get $$az^{-1}a^{-1}z^{-1}a^{-1}d^{-1}a^2 za^{-1}za^2 za^{-3}da = 1 \text{,}$$ which is derivable from ($\textrm{K5}_0$), ($\textrm{K8}$) and ($\textrm{K12}$). Now we move to ($\textrm{K6}_n$). After some cyclic reduction, ($\textrm{K6}_n$) is equivalent to $$z^n az^n az^{-1}a^{-1}z^{-(n+1)}a^{-1}z^{-(n+1)}a^{-1}d^{-1}az^{n+1}az^{n+1}az^{n+1}aza^{-1}z^{-n}a^{-1}z^{-n}a^{-1}z^{-n}a^{-1}da = 1 \text{.}$$ Repeated applications of ($\textrm{K8}$), ($\textrm{K12}$) and ($\textrm{K13}$) yield $$z^{-1}a^2 z^{-1}a^{-1}z^{-1}a^{-2}d^{-1}azazazaza^{-4}da = 1 \text{,}$$ which is derivable from ($\textrm{K6}_0$), ($\textrm{K8}$), ($\textrm{K12}$) and ($\textrm{K13}$). The last family is ($\textrm{K9}_n$). We start with $$a^{-1}d^{-1}az^n d^{-1}z^{-n}a^{-1}daz^n az^{-1}a^{-1}z^{-(n+1)}a^{-1}d^{-1}az^{n+1}aza^{-1}z^{-(n+1)}a^{-1}d^2az = 1$$ and apply ($\textrm{K4}$) and ($\textrm{K8}$) to get $$a^{-1}d^{-1}ad^{-1}a^{-1}da^2 z^{-1}a^{-1}z^{-1}a^{-1}d^{-1}a^2 za^{-2}d^2az = 1 \text{,}$$ which is derivable from ($\textrm{K4}$), ($\textrm{K8}$) and ($\textrm{K9}_0$). \end{proof} This proves that $K$ is finitely presented, which was conjecture ($\textrm{C}_2$), and so Theorem~\ref{thrm:BNSR} holds for $m=2$. \end{document}
\begin{document} \title{Formalizing Chemical Theory using the Lean Theorem Prover} \begin{abstract} \par Interactive theorem provers are computer programs that check whether mathematical statements are correct. We show how the mathematics of chemical theories can be written in the language of the Lean theorem prover, allowing chemical theory to be made even more rigorous and providing insight into the mathematics behind a theory. We use Lean to precisely define the assumptions and derivations of the Langmuir \cite{langmuir1918adsorption} and BET \cite{brunauer_emmett_teller_1938} theories of adsorption. We can also go further and create a network of definitions that build off of each other. This allows us to define a common basis for equations of motion or thermodynamics and derive many statements about them, like the kinematic equations of motion or gas laws such as Boyle's Law. This approach could be extended beyond chemistry, and we propose the creation of a library of formally-proven theories in all fields of science. \end{abstract} \keywords{Proof assistants \and Formal verification \and Theorem provers \and Logic \and Adsorption \and Thermodynamics \and Kinematics \and Theory} \section{Introduction} Theoretical derivations in the scientific literature are typically written in a semi-formal fashion and rely on human peer reviewers to catch mistakes. When these theories are implemented in software, the translation from mathematical model to executable code also requires humans to catch errors. This reflects the gap between the mathematical equations describing models in science and the software written to encode them \cite{hinsenComputationalScienceShifting2014}. This occurs because the computer doesn't understand the relationships among the scientific concepts and mathematical objects under study; it simply executes the code it is given.
Here, we recommend an alternative: interactive theorem provers that enable the mathematics and programming of science to be expressed in a rigorous way, with the logic checked by the computer. \subsection{Theorem provers for chemical theory} \par Interactive theorem provers are computer programs used to create formal proofs or derivations: sequences of logical deductions showing that a theorem\footnote{Mathematical terms that may be unfamiliar to the reader, like \textit{theorem}, are defined in the glossary, section \ref{Glossary}} is correct \cite{hales2008formal}. They provide a way to write a proof step by step, while the computer verifies that each step is logically correct \cite{rudnicki1992overview, wenzel2002isabelle, barras1997coq, gordon1993introduction, nipkow2002isabelle, owre1992pvs}. Formal proofs are used extensively in mathematics to prove theorems. Scientific work, on the other hand, tends to rely on informal proofs when deriving its theories, since they are easier to write and understand (see Table \ref{Table 1}). \begin{table}[ht] \centering \begin{tabular}{@{}ll@{}} \toprule \textbf{Hand-written proofs} & \textbf{Formal proofs} \\ \midrule Informal syntax & Strict, computer language syntax \\ Only for human readers & Machine-readable and executable \\ Might exclude information & Cannot miss assumptions or steps \\ Might contain mistakes & Rigorously verified by computer \\ Requires human to proofread & Automated proof checking \\ Easy to write & Challenging to write \\ \bottomrule \end{tabular} \caption{Comparison of hand-written and formalized proofs.} \label{Table 1} \end{table} Scientists are generally familiar with computer algebra systems (CAS) that can symbolically manipulate mathematical expressions (see Table \ref{Table 2}). These systems include SymPy \cite{meurer2017sympy} and Mathematica.
These systems are used frequently for scientific applications, but come at the cost of being unsound, meaning they can reach false conclusions. One example is asking WolframAlpha if \(\frac{a}{ab} = \frac{1}{b}\). WolframAlpha returns true since its simplification tool cancels the \(a\) term. However, this is not true for all $a$; it is only true if \(a\) is not equal to zero. If a theorem prover were used instead, it would not finish the proof until we explicitly proved or assumed that $a$ did not equal zero. CAS hide or gloss over assumptions that may be perceived to complicate the user experience. However, these assumptions are important for building a self-consistent library of mathematics. Claiming \(\frac{a}{ab} = \frac{1}{b}\) is true for all $a$ is a false statement and would allow us to prove anything, such as \(1=0\) (\emph{ex falso sequitur quodlibet}, `from falsehood, anything follows'). Theorem provers are more rigorous than computer algebra systems, because they require computer-checked proofs before permitting operations, thereby preventing false statements from being proven. For example, $a \times b = b \times a$ is true when $a$ and $b$ are scalars, but $A \times B \neq B \times A$ in general when $A$ and $B$ are matrices. CAS impose special conditions to disallow $A \times B = B \times A$ \cite{meurer2017sympy}, whereas theorem provers only allow changes that are proven to be valid. Theorem provers construct all of their math from a small, base kernel of mathematical axioms, requiring computer-checked proofs for objects constructed from the axioms. Even the most complicated math can be reduced back to that kernel. Since this kernel is small, verifying it by human experts or with other tools is manageable. Then, all higher-level math built and proved from the kernel is just as trustworthy.
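Returning to the \(\frac{a}{ab}\) example above, the hidden side condition can be demonstrated in a few lines of ordinary Python (our illustration; the function names are made up, and this is of course not a theorem prover):

```python
# Numeric illustration (ours) of why a/(a*b) = 1/b needs the side
# condition a != 0.

def naive_simplified(a, b):
    return 1 / b        # the "cancelled" form a CAS might return

def original(a, b):
    return a / (a * b)  # the expression as actually written

# The two agree whenever a is nonzero...
assert naive_simplified(2.0, 5.0) == original(2.0, 5.0) == 0.2

# ...but at a = 0 the original expression is undefined, while the
# simplified form silently returns a value.
try:
    original(0.0, 5.0)
    raise AssertionError('expected a division-by-zero error')
except ZeroDivisionError:
    pass
assert naive_simplified(0.0, 5.0) == 0.2
```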
This trustworthiness contrasts with how CAS represent and introduce mathematics; because proofs are not required when high-level math is introduced, mistakes could enter at any level, and would require humans to catch and debug them \cite{duranMisfortunesTrioMathematicians2014} (see Table \ref{Table 2}). \begin{table}[ht] \centering \begin{tabular}{@{}ll@{}} \toprule \textbf{Interactive theorem provers} & \textbf{Computer algebra systems} \\ \midrule Symbolically transform formulae & Symbolically transform formulae \\ Only permit correct transformations & Human-checked correctness \\ Verification tool & Computational tool \\ Explicit assumptions & Hidden assumptions \\ Built off a small, trusted, kernel & Large program with many algorithms \\ \includegraphics[width=5cm]{Figures/ITP.png} & \includegraphics[width=5cm]{Figures/CAS5.png} \\ \bottomrule \end{tabular} \captionsetup[table]{skip=10pt} \caption{Interactive Theorem Provers \cite{de_moura_kong_avigad_van_doorn_von_raumer_2015, wenzel2002isabelle} vs. Computer Algebra Systems \cite{wolfram1991mathematica, meurer2017sympy, toolbox1993matlab}.} \label{Table 2} \end{table} \par Historically, formal verification and the axiomatization of theories have mainly taken place in mathematics \cite{appel1977,gonthier2008, boldo2015coquelicot, hales2017, buzzard2020, scholzeproof2021}; however, there have recently been a few notable attempts at formalizing physics theories such as versions of relativity theory \cite{stannett2014, lu2017formalization}, electromagnetic optics \cite{khan2014formal}, and geometrical optics \cite{siddique2015formal}. Theorem provers have also been applied in the design of optical quantum experiments \cite{cervera2021design}. Our focus here is on formalizing fundamental theories in the chemical sciences. Inspired by Paleo's ideas for formalizing physics theories \cite{paleo2012physics}, we want to create a formal basis for chemical theories and use Lean to verify them.
\subsection{The Lean theorem prover} We have selected the Lean theorem prover \cite{avigad2015theorem} for its power as an interactive theorem prover, the coverage of its mathematics library, \texttt{mathlib} \cite{mathlib2020}, and the supportive online community of Lean enthusiasts \cite{zulipLean}, which aims to formalize the entire undergraduate math curriculum \cite{hartnettBuildingMathematicalLibrary2020, UGLean}. Interesting projects in modern mathematics have emerged from its foundations, including the Perfectoid Spaces \cite{10.1145/3372885.3373830}, Cap Set Problem \cite{dahmen_et_al2019} and Liquid Tensor \cite{leanliquidrepo} projects, which have garnered attention in the media \cite{hartnettProofAssistantMakes2021}. A web-based game, the Natural Number Game \cite{NNG_github}, has been widely successful in introducing newcomers to Lean. As executable code, Lean proofs can be read by language modeling algorithms that find patterns in math proof databases, enabling automated proofs of formal proof statements, including International Math Olympiad problems \cite{proofartifact2021,FormalMath2022}. \par We anticipate that Lean is expressive enough to formalize diverse and complex theories across quantum mechanics, fluid mechanics, reaction rate theory, statistical thermodynamics, and more. Lean gets its power from its ability to define mathematical objects and prove their properties, rather than just assuming premises for the sake of individual proofs. Lean is based on Type Theory \cite{whitehead2012elements, goerss2009simplicial}, where both mathematical objects and the relations between them are modeled with types (see Fig.~\ref{leanoverview} in the Supporting Information). Everything in Lean is a term of a \textit{Type}, and Lean checks to make sure that the \textit{Types} match.
Natural numbers, real numbers, functions, Booleans, and even proofs are types; examples of terms with these types include the number 1, Euler's number, \(f(x) = x^2\), \texttt{TRUE}, and the proof of BET theory, respectively. Lean also lets us define new types, just as mathematicians do \cite{avigad2015theorem}, which allows us to define specific scientific theories and prove statements about them. \par In this paper, we show how formalizing chemical theories may look, by demonstrating the tools of Lean through illustrative proofs in the chemical sciences. First, we introduce variables, types, premises, conjectures, and proof steps through a simple derivation of the Langmuir adsorption model. Next, we show how functions and definitions can be used to prove properties of mathematical objects by revising the Langmuir adsorption model through definitions and showing it has zero loading at zero pressure. Finally, we turn to more advanced topics, such as using geometric series to formalize the derivation of the BET equation and using structures to define and prove relationships in thermodynamics and motion. \section{Methods} \par Lean has a small kernel, based on dependent type theory \cite{whitehead2012elements, goerss2009simplicial}, with just over 6000 lines of code that allows it to instantiate a version of the Calculus of Inductive Constructions (CoIC) \cite{coquand1986calculus, coquand1988inductively}. The strongly normalizing character of the CoIC \cite{coquand1990proof} makes for a robust programming language that is consistent. The CoIC provides a constructive foundation for mathematics, allowing the entire field of mathematics to be built off of just 6000 lines of code. \par In Section \ref{proofs} we outline the proofs formalized using Lean version 3.45.0.
We host proofs on a website that provides a semi-interactive platform connecting to the Lean code in our GitHub repository \href{https://atomslab.github.io/LeanChemicalTheories/} {{\Large\texttwemoji{atom symbol}}}. An extended methods section introducing Lean is in the Supporting Information Section \ref{LeanAdditionalInfo}. \section{Formalized Proofs} \label{proofs} \subsection{Langmuir Adsorption: Introducing Lean Syntax and Proofs} \label{LangmuirProofSection} We begin with an easy proof to introduce Lean and the concept of formalization. The Langmuir adsorption model describes the loading of adsorbates onto a surface under isothermal conditions \cite{langmuir1918adsorption}. Several derivations have been developed \cite{langmuir1918adsorption,volmer1925thermodynamische,masel1996principles,kleman2003soft}; here we consider the original kinetic derivation \cite{langmuir1918adsorption}. First, we present a derivation of the Langmuir model, Eq.~\ref{Langmuir Model}, in \LaTeX; then we transfer it into Lean and rigorously prove it. We also discuss how these proofs can be improved to be more robust. The Langmuir model assumes that (1) all sites are thermodynamically equivalent, (2) the system is at equilibrium, and (3) the adsorption and desorption rates are first order. The adsorption and desorption rates are given by Eq. \ref{Langmuir Adsorption} and Eq. \ref{Langmuir Desorption}, respectively. \begin{equation} r_{ad} = k_{ad}p_A[S] \label{Langmuir Adsorption} \end{equation} \begin{equation} r_d = k_d[A_{ad}] \label{Langmuir Desorption} \end{equation} From assumption (2), \(r_{ad} = r_d\), and with some rearrangement, we get Eq. \ref{Langmuir Equillibrium}. \begin{equation} [S] = \frac {k_d[A_{ad}]}{k_{ad}p_A} \label{Langmuir Equillibrium} \end{equation} Using the site balance, \([S_0] = [S] + [A_{ad}]\), we arrive at Eq. \ref{Langmuir Intermidiate}.
\begin{equation} [S_0] = \frac {[A_{ad}]} {\frac{k_{ad}}{k_d}p_A} + [A_{ad}] \label{Langmuir Intermidiate} \end{equation} We can rearrange Eq. \ref{Langmuir Intermidiate} into a familiar form, Eq. \ref{Langmuir Intermidiate 2}. \begin{equation} \frac{[A_{ad}]}{[S_0]}= \frac{\frac{k_{ad}}{k_d}p_A}{1 + \frac{k_{ad}}{k_d}p_A} \label{Langmuir Intermidiate 2} \end{equation} Using the definition of the fraction of adsorption, \(\theta = \frac {[A_{ad}]}{[S_0]}\), and the definition of the equilibrium constant, \(K_{eq}^A = \frac{k_{ad}}{k_d}\), we arrive at the familiar Langmuir Model, Eq.~\ref{Langmuir Model}. \begin{equation} \label{Langmuir Model} \theta _{A}=\frac{K_{eq} p_{A}}{1+K_{eq} p_{A}} \end{equation} \par This informal proof is done in natural language, and it does not make explicit which equations are premises to the proof and which are intermediate steps. While the key steps going from the premises to the conclusion are shown, the fine details of the algebra are excluded. In contrast, Lean requires premises and the conjecture to be precisely defined, and requires that each rearrangement and cancellation is shown or performed computationally using a tactic. The next part shows how this proof is translated into Lean. \begin{figure} \caption{A formalization of Langmuir's adsorption model, shown as screenshots from Lean operating in VSCode. The left side of the figure shows the ``Code Window,'' while the right side shows variables and goals at each step in the ``Tactic State''. When the user places the cursor at one of the numbered locations in the ``Code Window,'' VSCode displays the ``Tactic State'' of the proof. The turnstile symbol represents the state of the goal after each step. As each tactic is applied, hypotheses and/or the goal is updated in the tactic state as the proof proceeds. For clarity, we only show the hypothesis that changes after a tactic is applied and how that changes the goal.
As an example, the goal state is the same in steps 1 and 2, since the first tactic rewrites (rw) the equation of adsorption (\textit{hrad}).} \label{LangmuirProof} \end{figure} As shown in Figure \ref{LangmuirProof}, every premise must be explicitly stated in Lean, along with the final conjecture, and proof tactics are used to show that the conjecture follows from the premises. The central premises of the proof are expressions of the adsorption rate (\textit{hrad}), the desorption rate (\textit{hrd}), the equilibrium relation (\textit{hreaction}) and the adsorption site balance (\textit{h$S_{0}$}). Additional premises include the definition of the adsorption constant (\textit{hK}) and surface coverage (\textit{h$\theta$}) from the first four premises, as well as mathematical constraints (\textit{hc1}, \textit{hc2}, and \textit{hc3}) that appear during the formalization. The model assumes the system is in equilibrium, so the adsorption rate, \(r_{ad} = k\_ad*Pₐ*S\), and the desorption rate, \(r_d = k\_d*A\), are equal to each other, where \(k_{ad}\) and \(k_d\) are the adsorption and desorption rate constants respectively, $S$ is the concentration of empty sites, and $A$ is the concentration of sites occupied by $A$. After \texttt{begin}, a sequence of tactics rearranges the goal state until the conjecture is proved. Note that when performing division, Lean requires the denominator terms to be nonzero. \par An interesting part of the proof is that only certain variables or their combinations are required to be nonzero. When building this proof, Lean imports the real numbers and the formalized theorems and tactics for them in \texttt{mathlib}. Lean does not permit division by zero, and it will flag issues when a number is divided by another number that could be zero. Consequently, we must include additional hypotheses \textit{hc1}-\textit{hc3} in order to complete the proof.
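The algebra that the Lean tactics carry out can also be spot-checked numerically. The short Python sketch below is our own illustration (with arbitrary positive values chosen for the constants, not part of the formalization): starting from the equilibrium premise and the site balance, the computed coverage matches \(K p_A/(1+K p_A)\).

```python
# Numeric spot-check (ours) of the Langmuir algebra: from the equilibrium
# premise k_ad*P*S = k_d*A and the site balance S0 = S + A, the coverage
# theta = A/S0 should equal K*P/(1 + K*P) with K = k_ad/k_d.

k_ad, k_d, P, S = 2.0, 3.0, 0.5, 4.0  # arbitrary positive values (ours)
A = k_ad * P * S / k_d                # from r_ad = r_d
S0 = S + A                            # site balance
K = k_ad / k_d

theta = A / S0
assert abs(theta - K * P / (1 + K * P)) < 1e-12
```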
Hypotheses \textit{hc1}-\textit{hc3} provide the minimum mathematical requirements for the proof; stricter constraints requiring rate constants and concentrations to be positive would also suffice. These ambiguities are better addressed by using \textit{definitions} and \textit{structures}, which enable us to prove properties about the object. Nonetheless, this version of the Langmuir proof is still a machine-readable, executable, formalized proof. \par Though this is a natural way to write the proof, we can condense the premises by using local definitions. For instance, the first two premises \textit{hrad} and \textit{hrd} can be written into \textit{hreaction} to yield \textit{k\_ad*Pₐ*S = k\_d*A}, and we can also write the expressions of \textit{h\(\theta\)} and \textit{hK} in the goal statement. While \textit{hrad}, \textit{hrd}, \textit{h\(\theta\)}, and \textit{hK} each have scientific significance, in this proof, they are just combinations of real numbers. Alternative versions of this proof are described in SI Section \ref{LangmuirSI}. \subsection{Langmuir Revisited: Introducing Functions and Definitions in Lean} Functions in Lean are similar to functions in imperative programming languages like Python and C, in that they take in arguments and map them to outputs. However, functions in Lean (like everything in Lean) are also objects with properties that can be formally proved. \par Formally, a function is defined as a mapping of one set (the domain) to another set (the co-domain). The notation for a function is given by the arrow "$\rightarrow$". For instance, a function conventionally written as $Y = f(X)$ or $Y(X)$, mapping set \textit{X} to set \textit{Y}, is written as $X \rightarrow Y$ in arrow notation.\footnote{These types are easily extended to functionals, which are central to density functional theory. A function that takes a function as an input can be simply defined by \((\mathbb{R} \rightarrow \mathbb{R}) \rightarrow \mathbb{R}\).}
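As a rough analogy (ours, in Python rather than Lean), a function of type $X \rightarrow Y$ is an ordinary callable, and the functional type \((\mathbb{R} \rightarrow \mathbb{R}) \rightarrow \mathbb{R}\) from the footnote corresponds to a function that consumes another function; the names below are made up for illustration.

```python
# Python analogy (ours) for arrow types: a function R -> R is a callable,
# and a functional (R -> R) -> R takes such a callable and returns a number.

def square(x: float) -> float:
    """Plays the role of f : R -> R."""
    return x * x

def evaluate_at_two(f) -> float:
    """Plays the role of a functional (R -> R) -> R."""
    return f(2.0)

assert evaluate_at_two(square) == 4.0
```

Unlike Lean, of course, Python checks none of these types; the annotations are only documentation.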
Importantly, the arrow "\(\rightarrow\)" is also used to represent the conditional statement (if-then) in logic, but this is not a duplication of syntax. Because everything is a term of \textit{Type} in Lean, functions map type \textit{X} to type \textit{Y}; when each type is a proposition, the resulting function is an if-then statement. \par As stated in the introduction, Lean's power comes from the ability to define objects globally, not just postulate them for the purpose of a local proof. When a mathematical object is formally defined in Lean, multiple theorems can be written about it with certainty that all proofs pertain to the same object. In Lean, we use \textit{def} to define new objects and then prove statements about these objects. The \textit{def} command has three parts: the arguments it takes in (which are the properties of the object), the type of the output, and the proof that the object has such a type. In Lean: \begin{alltt} \textcolor{indigo}{def} \textcolor{maroon}{name} properties : \textcolor{tpurple}{type} := proof of that type \end{alltt} For instance, we can define a function that doubles a natural number: \begin{alltt} \textcolor{indigo}{def} \textcolor{maroon}{double} : \textcolor{tpurple}{\(\mathbb{N}\)} \(\rightarrow\) \textcolor{tpurple}{\(\mathbb{N}\)} := \(\lambda\) n : \textcolor{tpurple}{\(\mathbb{N}\)}, n + n \end{alltt} The \(\lambda\) symbol comes from lambda calculus and is how an explicit function is defined. After the lambda symbol is the variable of the function, \textit{n} with type \(\mathbb{N}\). After the comma is the actual function. By hand, we would write this as \(f(n) = n + n\). This function doubles any natural number, as the name suggests. 
We could use it, for example, to show: \begin{alltt} \textcolor{maroon}{double} \textcolor{indigo}{(}3 : \textcolor{tpurple}{\(\mathbb{N}\)}\textcolor{indigo}{)} = \textcolor{indigo}{(}6 : \textcolor{tpurple}{\(\mathbb{N}\)}\textcolor{indigo}{)} \end{alltt} \par In the previous section, we showed an easy-to-read derivation of Langmuir adsorption, and in SI Section \ref{LangmuirSI}, we improved the proof using local definitions. Here, we improve it further by defining the Langmuir model as an object in Lean and then showing the kinetic derivation of that object. This way, the object defining the single-site Langmuir model can be reused in subsequent proofs, and all are certain to refer to the same object. \par We define the model as a function that takes in pressure as a variable. Given a pressure value, the function will compute the fractional occupancy of the adsorption sites. In Lean, this looks like \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/langmuir_kinetics.html#langmuir_single_site_model}{{\Large\texttwemoji{atom symbol}}}: \begin{alltt} \textcolor{indigo}{def} \textcolor{maroon}{langmuir_single_site_model} \textcolor{indigo}{(}equilibrium_constant : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)} : \textcolor{tpurple}{\(\mathbb{R}\)} → \textcolor{tpurple}{\(\mathbb{R}\)} := \(\lambda\) P : \textcolor{tpurple}{\(\mathbb{R}\)}, equilibrium_constant*P/\textcolor{indigo}{(}1+equilibrium_constant*P\textcolor{indigo}{)} \end{alltt} The \(\lambda\) symbol comes from \(\lambda\)-calculus \cite{barendregt2013lambda} and is one way to construct functions. It declares that P is a real number that can be specified. When the real number is specified, it will take the place of P in the equation. The definition also requires the equilibrium constant to be specified\footnote{This definition can be specified in multiple ways. 
The pressure could be required as an input like the equilibrium constant, or the equilibrium constant can be specified as a variable in the function like pressure. Any of these definitions work, and it is possible to prove congruence between them. We chose this way to purposefully show both definitions and functions in Lean.}. \par With this, the kinetic derivation of Langmuir can be set up in Lean like this \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/langmuir_kinetics.html#langmuir_single_site_kinetic_derivation}{{\Large\texttwemoji{atom symbol}}}: \begin{alltt} \textcolor{indigo}{theorem} \textcolor{maroon}{langmuir_single_site_kinetic_derivation} \textcolor{indigo}{\{}Pₐ k_ad k_d A S : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{\}} \textcolor{indigo}{(}hreaction : let r_ad := k_ad*Pₐ*S, r_d := k_d*A in r_ad = r_d\textcolor{indigo}{)} \textcolor{indigo}{(}hS : S \(\ne\) 0\textcolor{indigo}{)} \textcolor{indigo}{(}hk_d : k_d \(\ne\) 0\textcolor{indigo}{)} : \textcolor{indigo}{let} \(\theta\) := A/\textcolor{indigo}{(}S+A\textcolor{indigo}{)}, K := k_ad/k_d \textcolor{indigo}{in} \(\theta\) = \textcolor{maroon}{langmuir_single_site_model} K Pₐ := \end{alltt} This derivation is almost exactly like the proof in SI Section \ref{LangmuirSI}; the only difference is the use of the Langmuir model as an object. After the \texttt{langmuir\_single\_site\_model} simplifies to the Langmuir equation, the proof steps are the same. Using the definition makes it possible to write multiple theorems about the same Langmuir object. We can also prove that the Langmuir expression has zero loading at zero pressure, and in the future we can show that it has a finite loading in the limit of infinite pressure, and converges to Henry's Law in the limit of zero pressure \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/langmuir_kinetics.html#langmuir_zero_loading_at_zero_pressure}{{\Large\texttwemoji{atom symbol}}}. 
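A Python stand-in for \texttt{langmuir\_single\_site\_model} (ours, not the Lean definition, and numeric rather than proved) makes the properties just mentioned easy to check by evaluation:

```python
# Python stand-in (ours) for langmuir_single_site_model: given K, return
# the loading function P |-> K*P/(1 + K*P), then check its limits numerically.

def langmuir_single_site_model(K):
    """Return the fractional-loading function of pressure."""
    return lambda P: K * P / (1 + K * P)

theta = langmuir_single_site_model(1.5)

assert theta(0.0) == 0.0                            # zero loading at P = 0
P = 1e-9
assert abs(theta(P) - 1.5 * P) / (1.5 * P) < 1e-6   # Henry's-law behaviour
assert theta(1e9) < 1.0                             # loading stays below 1
```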
Definitions and structures, as we will see in later sections, are crucial to building a web of interconnected scientific objects and theorems.
\subsection{BET Adsorption: Formalizing a complex proof}
Brunauer, Emmett, and Teller introduced the BET theory of multilayer adsorption (see Fig.~\ref{LangmuirBET}) in 1938 \cite{brunauer_emmett_teller_1938}. We formalize this derivation, beginning with Equation 26 from the paper, which is shown here in Eq.~\ref{BET 26}:
\begin{equation} \label{BET 26}
\frac{V}{A V_0} = \frac{Cx}{(1-x)(1-x+Cx)}
\end{equation}
\begin{figure}
\caption{Langmuir model vs BET model. The BET model, unlike Langmuir, allows particles to create infinite layers on top of previously adsorbed particles. Here $\theta$ is the fraction of the surface adsorbed, $V$ is the total volume adsorbed, \(V_0\) is the volume of a complete unimolecular layer adsorbed in unit area, and \(s_i\) is the surface area of the i\textsuperscript{th} layer.}
\label{LangmuirBET}
\end{figure}
Here $A$ is the total area adsorbed by all (infinitely many) layers, expressed as the sum of an infinite series:
\begin{equation} \label{Catalyst Surface Area}
A = \sum_{i=0} ^{\infty} s_i = s_0(1+C\sum_{i=1} ^{\infty} x^i)
\end{equation}
and $V$, the total volume adsorbed, is given by:
\begin{equation} \label{Volume Adsorbed}
V = V_0 \sum_{i=0} ^{\infty} is_i = Cs_0V_0 \sum_{i=1} ^{\infty} ix^i
\end{equation}
The variables \textit{y}, \textit{x} and \textit{C} are expressed in the original paper as shown through Eq.~\ref{BET y} to \ref{BET C}:
\begin{equation} \label{BET y}
y = PC_1, \quad \mathrm{where}\; C_1 = (a_1/b_1)e^{E_1/RT}
\end{equation}
\begin{equation} \label{BET x}
x = PC_L, \quad \mathrm{where}\; C_L = e^{E_L/RT}/g
\end{equation}
\begin{equation} \label{BET C}
C = y/x = C_1/C_L
\end{equation}
where \(a_1\), \(b_1\), and \(g\) are fitted constants, \(E_1\) is the heat of adsorption of the first layer, \(E_L\) is that of the second (and higher) layers (also the same as the heat of liquefaction of the adsorbate at constant temperature), \(R\) is the
universal gas constant, and \(T\) is temperature. In Eq.~\ref{BET y} and \ref{BET x}, everything besides the pressure term is constant, since we are dealing with an isotherm, so we group the constants together into one term.
\par
These constants, along with the surface area of the zeroth layer \(s_0\), the saturation pressure, and three positivity constraints, are defined using the \textit{constant} declaration in Lean. Mathematical objects can also be defined in other ways, such as \textit{def}, \textit{class} or \textit{structure} \cite{avigad2015theorem}, but for this proof we use \textit{constant}, which is convenient for such simple objects. We will illustrate later, in our thermodynamics proof, how constants can be merged into a Lean \textit{structure} for reusability. In Lean, this is \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/BETInfinite.html#C_L}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{constants} \textcolor{indigo}{(}C_1 C_L s_0 P_0 : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)}
\textcolor{indigo}{(}hCL : 0 < C_L\textcolor{indigo}{)} \textcolor{indigo}{(}hC1 : 0 < C_1\textcolor{indigo}{)} \textcolor{indigo}{(}hs_0 : 0 < s_0\textcolor{indigo}{)}
\end{alltt}
\par
With these \textit{constant} declarations, we can now define \textit{y, x,} and \textit{C} in Lean as \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/BETInfinite.html#BET_first_layer_adsoprtion_rate}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{def} \textcolor{maroon}{BET_first_layer_adsorption_rate} \textcolor{indigo}{(}P : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)} := \textcolor{indigo}{(}C_1\textcolor{indigo}{)}*P
\textcolor{indigo}{local notation} `y' := BET_first_layer_adsorption_rate
\textcolor{indigo}{def} \textcolor{maroon}{BET_n_layer_adsorption_rate} \textcolor{indigo}{(}P : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)} := \textcolor{indigo}{(}C_L\textcolor{indigo}{)}*P
\textcolor{indigo}{local notation} `x' := BET_n_layer_adsorption_rate
\textcolor{indigo}{def} \textcolor{maroon}{BET_constant} := C_1/C_L
\textcolor{indigo}{local notation} `C' := BET_constant
\end{alltt}
Since \textit{y} and \textit{x} are both functions of pressure, their definitions require pressure as an input. Alternatively, the input could be omitted if we wanted to deal with $x$ as a function, rather than as a number. Notice that the symbols we declared using \textit{constant} do not need to be supplied in the inputs, as they already exist in the global workspace.
\par
We formalize Eq.~\ref{BET 26} by recognizing that the main math behind the BET expression is an infinite sequence that describes the surface area of adsorbed particles for each layer. The series is defined as a function that maps the natural numbers to the real numbers; the natural numbers represent the indexing. It is defined in two cases: if the index is zero, it outputs the surface area of the zeroth layer, and if the index is \(n+1\), it outputs \(x^{n+1}s_0C\).
\begin{equation} \label{BET ith layer}
s_i = Cx^i s_0, \quad i \in [1,\infty)
\end{equation}
In Lean, we define this sequence as \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/BETInfinite.html#seq}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{def} \textcolor{maroon}{seq} \textcolor{indigo}{(}P : \textcolor{tpurple}{nnreal}\textcolor{indigo}{)} : \textcolor{tpurple}{\(\mathbb{N}\)} → \textcolor{tpurple}{\(\mathbb{R}\)}
|\textcolor{indigo}{(}0 : \(\mathbb{N}\)\textcolor{indigo}{)} := s_0
|\textcolor{indigo}{(}nat.succ n\textcolor{indigo}{)} := x^\textcolor{indigo}{(}n+1\textcolor{indigo}{)}*s_0*C
\end{alltt}
Where \(s_i\) is the surface area of the i\textsuperscript{th} layer, \(C\) and \(x\) are given by Eq.~\ref{BET C} and Eq.~\ref{BET x}, respectively, and \(s_0\) is the surface area of the zeroth layer.
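For reference (a standard identity rather than a step shown explicitly in the original paper), the two geometric series that drive the derivation have the closed forms, valid for \(0 < x < 1\),
\[
\sum_{i=1}^{\infty} x^i = \frac{x}{1-x}, \qquad \sum_{i=1}^{\infty} ix^i = \frac{x}{(1-x)^2},
\]
so that
\[
\frac{C\sum_{i=1}^{\infty} ix^i}{1+C\sum_{i=1}^{\infty} x^i}
= \frac{Cx/(1-x)^2}{1+Cx/(1-x)}
= \frac{Cx}{(1-x)(1-x+Cx)},
\]
which is exactly the right-hand side of Eq.~\ref{BET 26}.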
The zeroth layer is the base surface and is constant. We now have the area and volume equations both in terms of geometric series with well-defined solutions. The BET equation is defined as the ratio of the volume adsorbed to the volume of a complete unimolecular layer, given by Eq.~\ref{BET Equation}.
\begin{equation} \label{BET Equation}
\frac{V}{A V_0} = \frac{Cs_0\sum_{i=1} ^{\infty} ix^i}{s_0(1+C\sum_{i=1} ^{\infty} x^i)}
\end{equation}
The main transformation in BET is simplifying this expression into a simple fraction, which involves solving the geometric series. The main math goal is given by Eq.~\ref{BET Main Math}.
\begin{equation} \label{BET Main Math}
\frac{C\sum_{i=1} ^{\infty} ix^i}{(1+C\sum_{i=1} ^{\infty} x^i)} = \frac{Cx}{(1-x)(1-x+Cx)}
\end{equation}
Before doing the full derivation, we prove Eq.~\ref{BET Main Math}, which we call \textit{sequence\_math}. In Lean, this is \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/BETInfinite.html#sequence_math}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{lemma} \textcolor{maroon}{BET.sequence_math} \textcolor{indigo}{\{}P : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{\}} \textcolor{indigo}{(}hx1: \textcolor{dgreen}{(}x P\textcolor{dgreen}{)} < 1\textcolor{indigo}{)} \textcolor{indigo}{(}hx2 : 0 < \textcolor{dgreen}{(}x P\textcolor{dgreen}{)}\textcolor{indigo}{)} :
\textcolor{indigo}{(}\(\sum\)' k : \(\mathbb{N}\), \textcolor{dgreen}{(}\textcolor{iibrown}{(}k + 1\textcolor{iibrown}{)}*\textcolor{iibrown}{(}\textcolor{maroon}{seq} P \textcolor{dorange}{(}k+1\textcolor{dorange}{)}\textcolor{iibrown}{)}\textcolor{dgreen}{)}\textcolor{indigo}{)}/\textcolor{indigo}{(}s_0 + \(\sum\)' k, \textcolor{dgreen}{(}\textcolor{maroon}{seq} P \textcolor{iibrown}{(}k+1\textcolor{iibrown}{)}\textcolor{dgreen}{)}\textcolor{indigo}{)}
= C*\textcolor{indigo}{(}x P\textcolor{indigo}{)}/\textcolor{indigo}{(}\textcolor{dgreen}{(}1 - \textcolor{iibrown}{(}x P\textcolor{iibrown}{)}\textcolor{dgreen}{)}*\textcolor{dgreen}{(}1 - \textcolor{iibrown}{(}x P\textcolor{iibrown}{)} + \textcolor{iibrown}{(}x P\textcolor{iibrown}{)}*C\textcolor{dgreen}{)}\textcolor{indigo}{)} :=
\end{alltt}
In Lean, the apostrophe after the sum symbol denotes an infinite sum, which is defined to start at zero since it is indexed by the natural numbers, which start at zero. Since the infinite sum of Eq.~\ref{BET Main Math} starts at one, we add one to each index, \textit{k}, so that when $k$ is zero, we get one, and so on. We also define two new theorems that derive the solution to these geometric series with an index starting at one. After expanding \textit{seq}, we use those two theorems and then rearrange the goal to get two sides that are equal. We also use the tag \textit{lemma} instead of \textit{theorem}, just to communicate that it is a lower-priority theorem, intended to prove other theorems. The tag \textit{lemma} has no functional difference from \textit{theorem} in Lean; its purpose is to let mathematicians label their proofs.
\par
With this, we can formalize the derivation of Eq.~\ref{BET 26}. First we define Eq.~\ref{BET 26} as a new object and then prove a theorem showing we can derive this object from the sequence.
In Lean, the definition looks like this \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/BETInfinite.html#brunauer_26}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{def} \textcolor{maroon}{brunauer_26} := \(\lambda\) P : \textcolor{tpurple}{\(\mathbb{R}\)},
C*\textcolor{indigo}{(}x P\textcolor{indigo}{)}/\textcolor{indigo}{(}\textcolor{dgreen}{(}1-\textcolor{iibrown}{(}x P\textcolor{iibrown}{)}\textcolor{dgreen}{)}*\textcolor{dgreen}{(}1-\textcolor{iibrown}{(}x P\textcolor{iibrown}{)}+C*\textcolor{iibrown}{(}x P\textcolor{iibrown}{)}\textcolor{dgreen}{)}\textcolor{indigo}{)}
\end{alltt}
Here, we explicitly define this as a function, because we want to deal with Eq.~\ref{BET 26} as a function of pressure, rather than just a number. Now we can prove a theorem that formalizes the derivation of this equation \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/BETInfinite.html#brunauer_26_from_seq}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{theorem} \textcolor{maroon}{brunauer_26_from_seq} \textcolor{indigo}{\{}P V_0 : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{\}} \textcolor{indigo}{(}hx1 : \textcolor{dgreen}{(}x P\textcolor{dgreen}{)} < 1\textcolor{indigo}{)} \textcolor{indigo}{(}hx2 : 0 < \textcolor{dgreen}{(}x P\textcolor{dgreen}{)}\textcolor{indigo}{)} :
\textcolor{indigo}{let} Vads := V_0 * \(\sum'\) \textcolor{indigo}{(}k : \(\mathbb{N}\)\textcolor{indigo}{)}, ↑k * \textcolor{indigo}{(}seq P k\textcolor{indigo}{)},
A := \(\sum'\) \textcolor{indigo}{(}k : \(\mathbb{N}\)\textcolor{indigo}{)}, \textcolor{indigo}{(}seq P k\textcolor{indigo}{)}
\textcolor{indigo}{in} Vads/A = V_0*\textcolor{indigo}{(}\textcolor{maroon}{brunauer_26} P\textcolor{indigo}{)} :=
\end{alltt}
Unlike the Langmuir proof introduced earlier in Fig.~\ref{LangmuirProof}, the BET derivation uses definitions that can be reused across the proof structure. The proof starts by showing that \textit{seq} is summable.
This just means the sequence has some infinite sum, and the \(\sum'\) symbol is used to get the value of that infinite series. We show in the proof that both \textit{seq} and \textit{k*seq} are summable, where the first is needed for the area sum and the second is needed for the volume sum. After that, we simplify our definitions, move the index of the sum from zero to one so we can simplify the sequence, and apply the \textit{BET.sequence\_math} lemma we proved above. Finally, we use the \textit{field\_simp} tactic to rearrange and close the goal. With that, we were able to formalize the derivation of Eq.~\ref{BET 26}, just as Brunauer, \emph{et al.} did in 1938.
\par
In the SI, we continue formalizing BET theory by deriving Equation 28 from the paper, given by Eq.~\ref{BET 28.2}:
\begin{equation} \label{BET 28.2}
\frac{V}{A V_0} = \frac{CP}{(P_0-P)(1+(C-1)(P/P_0))}
\end{equation}
This follows from recognizing that $1/C_L = P_0$. While Brunauer, \emph{et al.} attempt to show this in the paper, we discuss the trouble with implementing the logic they present. Instead, we show a similar proof that Eq.~\ref{BET 26} approaches infinity as pressure approaches $1/C_L$, and assume as a premise in the derivation of Eq.~\ref{BET 28.2} that $1/C_L \equiv P_0$.
\subsection{Classical Thermodynamics and Gas Laws: Introducing Lean Structures}
Lean is expressive enough to encode precise relationships between mathematical objects. We can use this functionality to precisely define and relate \emph{scientific concepts} with mathematical certainty. We illustrate this by formalizing proofs of gas laws in classical thermodynamics. We can prove that Boyle's Law, $P_1 V_1 = P_2 V_2$, follows from the ideal gas law, $PV=nRT$, following the style of our derivation of Langmuir's theory: demonstrating that a conjecture follows from the premises \href{https://atomslab.github.io/LeanChemicalTheories/thermodynamics/boyles_law.html}{{\Large\texttwemoji{atom symbol}}}.
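A minimal sketch of this premise-to-conclusion style (our own illustrative example, not the library's exact code): if two states satisfy the ideal gas law with the same amount of substance and temperature, their pressure--volume products must agree.
\begin{alltt}
example \textcolor{indigo}{\{}P₁ P₂ V₁ V₂ n R T : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{\}}
\textcolor{indigo}{(}h₁ : P₁*V₁ = n*R*T\textcolor{indigo}{)} \textcolor{indigo}{(}h₂ : P₂*V₂ = n*R*T\textcolor{indigo}{)} :
P₁*V₁ = P₂*V₂ :=
\textcolor{indigo}{by} rw [h₁, h₂]
\end{alltt}
Rewriting with both hypotheses reduces the goal to reflexivity.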
However, this proof style doesn't facilitate interoperability among proofs and limits the mathematics that can be expressed. In contrast, we can prove the same result more systematically, by first formalizing the concepts of thermodynamic systems and states, extending that system to a specific ideal gas system, defining Boyle's Law in light of these thermodynamic states, and then proving that the ideal gas obeys Boyle's Law (see Fig.~\ref{thermo}).
\begin{figure}
\caption{Thermodynamic system in Lean. Here the \textit{thermo\_system} structure defines the thermodynamic properties of a system as functions of its states.}
\label{thermo}
\end{figure}
Classical thermodynamics describes the macroscopic properties of thermodynamic states and relationships between them \cite{dahm_visco_2015, sandler2017chemical}. We formalize the concept of ``thermodynamic system'' by defining a Lean \texttt{structure} called \textit{thermo\_system} over the real numbers, with thermodynamic properties (e.g. pressure, volume, etc.) defined as functions from the natural numbers to the real numbers, \(\mathbb{N} \rightarrow \mathbb{R}\). This represents picking out the state of the system, where the natural numbers index the states. $\mathbb{N} \rightarrow \mathbb{R}$ is an appropriate type for the variables $P_1$ and $P_2$; they map state 1 and state 2 (represented as natural numbers), respectively, to values of pressure (which are real numbers). Since these are state variables, we are only concerned with what happens at specific states, not what happens in between states.
In Lean, this is \href{https://atomslab.github.io/LeanChemicalTheories/thermodynamics/basic.html#thermo_system}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{structure} \textcolor{maroon}{thermo_system} :=
\textcolor{indigo}{(}pressure : \textcolor{tpurple}{\(\mathbb{N}\)} → \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)}
\textcolor{indigo}{(}volume : \textcolor{tpurple}{\(\mathbb{N}\)} → \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)}
\textcolor{indigo}{(}temperature : \textcolor{tpurple}{\(\mathbb{N}\)} → \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)}
\textcolor{indigo}{(}substance_amount : \textcolor{tpurple}{\(\mathbb{N}\)} → \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)}
\textcolor{indigo}{(}energy : \textcolor{tpurple}{\(\mathbb{N}\)} → \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)}
\end{alltt}
We define six descriptions of the system: isobaric (constant pressure); isochoric (constant volume); isothermal (constant temperature); adiabatic (constant energy); closed (constant mass); and isolated (constant mass and energy). Each of these conditions has the type \textit{Prop}, or proposition, treating them as assertions about the system. We formally define these by stating that, for all ($\forall$) pairs of states $n$ and $m$, the property at those states is equal. We define these six descriptions to take in a \textit{thermo\_system}, since we need to specify which system we are ascribing the property to.
In Lean, this is \href{https://atomslab.github.io/LeanChemicalTheories/thermodynamics/basic.html#isobaric}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{def} \textcolor{maroon}{isobaric} \textcolor{indigo}{(}M : \textcolor{maroon}{thermo_system}\textcolor{indigo}{)} : \textcolor{tpurple}{Prop} := \(\forall\) n m : \textcolor{tpurple}{\(\mathbb{N}\)}, pressure n = pressure m
\textcolor{indigo}{def} \textcolor{maroon}{isochoric} \textcolor{indigo}{(}M : \textcolor{maroon}{thermo_system}\textcolor{indigo}{)} : \textcolor{tpurple}{Prop} := \(\forall\) n m : \textcolor{tpurple}{\(\mathbb{N}\)}, volume n = volume m
\textcolor{indigo}{def} \textcolor{maroon}{isothermal} \textcolor{indigo}{(}M : \textcolor{maroon}{thermo_system}\textcolor{indigo}{)} : \textcolor{tpurple}{Prop} := \(\forall\) n m : \textcolor{tpurple}{\(\mathbb{N}\)}, temperature n = temperature m
\textcolor{indigo}{def} \textcolor{maroon}{adiabatic} \textcolor{indigo}{(}M : \textcolor{maroon}{thermo_system}\textcolor{indigo}{)} : \textcolor{tpurple}{Prop} := \(\forall\) n m : \textcolor{tpurple}{\(\mathbb{N}\)}, energy n = energy m
\textcolor{indigo}{def} \textcolor{maroon}{closed_system} \textcolor{indigo}{(}M : \textcolor{maroon}{thermo_system}\textcolor{indigo}{)} : \textcolor{tpurple}{Prop} := \(\forall\) n m : \textcolor{tpurple}{\(\mathbb{N}\)}, substance_amount n = substance_amount m
\textcolor{indigo}{def} \textcolor{maroon}{isolated_system} \textcolor{indigo}{(}M : \textcolor{maroon}{thermo_system}\textcolor{indigo}{)} : \textcolor{tpurple}{Prop} := adiabatic M \(\wedge\) closed_system M
\end{alltt}
We define an isolated system directly as the conjunction ($\wedge$) of an adiabatic and a closed system, rather than restating the universal quantifiers ($\forall$), since that would be redundant.
\par
Now that the basics of a thermodynamic system have been defined, we can define models that attempt to mathematically describe the system.
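As an aside, because \textit{isolated\_system} is defined as a conjunction, simple facts about it follow by projection. For instance (an illustrative example of our own, not part of the library), every isolated system is closed:
\begin{alltt}
example \textcolor{indigo}{(}M : \textcolor{maroon}{thermo_system}\textcolor{indigo}{)} \textcolor{indigo}{(}h : \textcolor{maroon}{isolated_system} M\textcolor{indigo}{)} : \textcolor{maroon}{closed_system} M := h.2
\end{alltt}
Here \texttt{h.2} extracts the right-hand component of the conjunction.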
These models can be defined as another structure, which \textit{extends} the \textit{thermo\_system} structure. When a structure extends another structure, it inherits the properties of the structure it extended. This allows us to create a hierarchy of structures, so we don't have to redefine properties over and over again. The most well-known model is the ideal gas model, which comes with the ideal gas law equation of state. We define the ideal gas model to have two properties: the universal gas constant, \(R\), and the ideal gas law. In the future, we plan to add more properties to the definition, especially as we expand on the idea of energy. We define the ideal gas law as an equation relating the product of pressure and volume to the product of the amount of substance, the gas constant, and temperature. In Lean, this is \href{https://atomslab.github.io/LeanChemicalTheories/thermodynamics/basic.html#ideal_gas}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{structure} \textcolor{maroon}{ideal_gas} \textcolor{indigo}{extends} \textcolor{maroon}{thermo_system} :=
\textcolor{indigo}{(}R : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)}
\textcolor{indigo}{(}ideal_gas_law : \(\forall\) n : \textcolor{tpurple}{\(\mathbb{N}\)}, \textcolor{dgreen}{(}pressure n\textcolor{dgreen}{)}*\textcolor{dgreen}{(}volume n\textcolor{dgreen}{)} = \textcolor{dgreen}{(}substance_amount n\textcolor{dgreen}{)}*R*\textcolor{dgreen}{(}temperature n\textcolor{dgreen}{)}\textcolor{indigo}{)}
\end{alltt}
To define a system modeled as an ideal gas, we write in Lean: \textit{(M : ideal\_gas)}. Now we have a system, M, modeled as an ideal gas.
\par
Boyle's law states that the pressure of an ideal gas is inversely proportional to the system's volume in an isothermal and closed system \cite{levine_1978}.
This is mathematically given by Eq.~\ref{Boyle's Law}, where \(P\) is pressure, \(V\) is volume, and \(k\) is a constant whose value depends on the system.
\begin{equation} \label{Boyle's Law}
PV = k
\end{equation}
In Lean, we define Boyle's Law as \href{https://atomslab.github.io/LeanChemicalTheories/thermodynamics/basic.html#boyles_law}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{def} \textcolor{maroon}{boyles_law} \textcolor{indigo}{(}M : \textcolor{maroon}{thermo_system}\textcolor{indigo}{)} := \(\exists\) \textcolor{indigo}{(}k : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)}, \(\forall\) n : \textcolor{tpurple}{\(\mathbb{N}\)}, \textcolor{indigo}{(}pressure n\textcolor{indigo}{)} * \textcolor{indigo}{(}volume n\textcolor{indigo}{)} = k
\end{alltt}
We use the existential quantifier ($\exists$) on \(k\), which can be read as \textit{there exists a \(k\)}, because each system has a specific constant. We also place the existential before the universal quantifier so the statement is logically correct. As written, it reads, \textit{there exists a \(k\), such that for all states, this relation holds}. If we wrote it the other way, it would say \textit{for all states, there exists a \(k\), such that this relation holds}. The second reading means that \(k\) depends on the state of the system, which isn't true: the constant is the same for every state of a system. Also, even though Boyle's law is a statement about an ideal gas, we define it for a general system so that, in the future, we can look at what assumptions are needed for other models to obey Boyle's Law.
\par
Next, we prove a couple of theorems about the relations that can be derived from Boyle's law. From Eq.~\ref{Boyle's Law}, we can derive a relation between any two states, given by Eq.~\ref{Boyle's Relation}, where \(n\) and \(m\) are two states of the system.
\begin{equation} \label{Boyle's Relation}
P_nV_n = P_mV_m
\end{equation}
The first theorem we prove shows how Eq.~\ref{Boyle's Relation} follows from Eq.~\ref{Boyle's Law}. In Lean, this looks like \href{https://atomslab.github.io/LeanChemicalTheories/thermodynamics/basic.html#boyles_law_relation}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{theorem} \textcolor{maroon}{boyles_law_relation} \textcolor{indigo}{(}M : \textcolor{maroon}{thermo_system}\textcolor{indigo}{)} :
\textcolor{maroon}{boyles_law} M → \(\forall\) n m : \textcolor{tpurple}{\(\mathbb{N}\)}, pressure n * volume n = pressure m * volume m :=
\end{alltt}
The right arrow can be read as \textit{implies}, so the statement says that \textit{Boyle's law implies Boyle's relation}. This is achieved using modus ponens, introducing two new names for the universal quantifier, then rewriting Boyle's law into the goal by specializing Boyle's law with \(n\) and \(m\). We also want to show that the converse holds, such that Eq.~\ref{Boyle's Relation} implies Eq.~\ref{Boyle's Law}. In Lean, this is \href{https://atomslab.github.io/LeanChemicalTheories/thermodynamics/basic.html#boyles_law_relation'}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{theorem} \textcolor{maroon}{boyles_law_relation'} \textcolor{indigo}{(}M : \textcolor{maroon}{thermo_system}\textcolor{indigo}{)} :
\textcolor{indigo}{(}\(\forall\) n m, pressure n * volume n = pressure m * volume m\textcolor{indigo}{)} → \textcolor{maroon}{boyles_law} M :=
\end{alltt}
We begin in the same way, by using modus ponens and simplifying Boyle's law to be in the form of Eq.~\ref{Boyle's Law}. Next, we satisfy the existential by providing a witness. In our proof, we use \(P_1V_1\) as the witness for \(k\), then we specialize the relation with \(n\) and \(1\) and close the goal.
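The steps just described can be sketched as a tactic proof (our own approximation; the library's actual script may differ):
\begin{alltt}
\textcolor{indigo}{begin}
  intro h,                    -- assume the two-state relation
  unfold boyles_law,          -- expose the existential statement
  use pressure 1 * volume 1,  -- provide the witness k := P₁V₁
  intro n,                    -- fix an arbitrary state n
  exact h n 1,                -- specialize the relation at n and 1
\textcolor{indigo}{end}
\end{alltt}
After supplying the witness, the remaining goal is exactly the hypothesis specialized at states \(n\) and \(1\).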
\par
Finally, with these two theorems, we show that Boyle's law can be derived from the ideal gas law, under the assumption of an isothermal and closed system. In Lean, this is \href{https://atomslab.github.io/LeanChemicalTheories/thermodynamics/basic.html#boyles_from_ideal_gas}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{theorem} \textcolor{maroon}{boyles_from_ideal_gas} \textcolor{indigo}{(}M : \textcolor{maroon}{ideal_gas}\textcolor{indigo}{)}
\textcolor{indigo}{(}iso1 : \textcolor{maroon}{isothermal} M.to_\textcolor{maroon}{thermo_system}\textcolor{indigo}{)}
\textcolor{indigo}{(}iso2 : \textcolor{maroon}{closed_system} M.to_\textcolor{maroon}{thermo_system}\textcolor{indigo}{)} : \textcolor{maroon}{boyles_law} M :=
\end{alltt}
This proof is completed by using the second theorem for Boyle's relation, along with simplifying the ideal gas relation using the two \textit{iso} constraints. We have implemented this framework to prove both Charles' and Avogadro's Laws \href{https://atomslab.github.io/LeanChemicalTheories/thermodynamics/basic.html}{{\Large\texttwemoji{atom symbol}}}, illustrating the interoperability of these proofs. In the future, we plan to define energy and prove theorems relating to it, including the laws of thermodynamics \cite{atkins2010laws}.
\subsection{Kinematic equations: Calculus in Lean}
Calculus and differential equations are ubiquitous in chemical theory, and much has been formalized in \texttt{mathlib}. To illustrate Lean's calculus capabilities and motivate future formalization efforts, we formally prove that the kinematic equations follow from calculus-based definitions of motion, assuming constant acceleration. Equations of motion are the basis for many chemical theories, such as reaction kinetics \cite{frost1961kinetics} and molecular dynamics \cite{haile1993molecular}, that use Newtonian mechanics.
\par
The equations of motion are a set of two coupled differential equations that relate the position, velocity, and acceleration of an object in an n-dimensional vector space \cite{beggs1983kinematics}. The differential equations are given by Eq.~\ref{Differential Equation 1} and \ref{Differential Equation 2}, where \textbf{x}, \textbf{v}, and \textbf{a} represent position, velocity, and acceleration, respectively (bold typeface signifies a vector quantity). All three variables are parametric equations, where each dimension of the vector is a function of time.\footnote{These proofs could also be constructed using partial differential equations, but \texttt{mathlib} doesn't currently have enough theorems for partial derivatives.}
\begin{equation} \label{Differential Equation 1}
\textbf{v}(t) = \frac{d(\textbf{x}(t))}{dt}
\end{equation}
\begin{equation} \label{Differential Equation 2}
\textbf{a}(t) = \frac{d(\textbf{v}(t))}{dt}
\end{equation}
\begin{figure}
\caption{Kinematics in Lean. Here we define the \textit{motion} structure, which relates position, velocity, and acceleration.}
\label{kinematicsFig}
\end{figure}
As in the thermodynamics section, we can define a structure, \texttt{motion}, to encompass these concepts. This structure defines three new elements: position, velocity, and acceleration, which are functions, as well as two differential equations relating these three functions. This structure also requires the vector space to form an inner product space, a real or complex vector space equipped with an additional operator, the inner product, over the field. The inner product is a generalization of the dot product to arbitrary vector spaces. By requiring \texttt{inner\_product\_space}, the motion structure inherits all of \texttt{inner\_product\_space}'s properties and allows us to access the calculus theorems in \texttt{mathlib}.
In Lean, this is \href{https://atomslab.github.io/LeanChemicalTheories/physics/kinematic_equations.html#motion}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{structure} \textcolor{maroon}{motion} \textcolor{indigo}{(}\textcolor{tpurple}{\(\mathbb{K}\)} : Type u_1\textcolor{indigo}{)} \textcolor{indigo}{(}\textcolor{tpurple}{E} : Type u_2\textcolor{indigo}{)}
\textcolor{indigo}{[}is_R_or_C \textcolor{tpurple}{\(\mathbb{K}\)}\textcolor{indigo}{]} \textcolor{indigo}{[}inner_product_space \textcolor{tpurple}{\(\mathbb{K}\)} \textcolor{tpurple}{E}\textcolor{indigo}{]}
\textcolor{indigo}{extends} inner_product_space \textcolor{tpurple}{\(\mathbb{K}\)} \textcolor{tpurple}{E} :=
\textcolor{indigo}{(}position velocity acceleration : \textcolor{tpurple}{\(\mathbb{K}\)} → \textcolor{tpurple}{E}\textcolor{indigo}{)}
\textcolor{indigo}{(}hvel : velocity = deriv position\textcolor{indigo}{)}
\textcolor{indigo}{(}hacc : acceleration = deriv velocity\textcolor{indigo}{)}
\end{alltt}
$\mathbb{K}$ represents a field, which we require to be either the real ($\mathbb{R}$) or complex ($\mathbb{C}$) numbers, and \(E\) symbolizes a general vector space. In mathematics, a field is an algebraic structure with addition, subtraction, multiplication, and division operations. Our vector space could be an n-dimensional Euclidean vector space, but we instead use a general vector space, to be as general as possible. This allows us to describe motion in a Euclidean vector space, as well as a hyperbolic vector space, or a vector space with special properties.
\par
Lean has two definitions of the derivative: the familiar ``high school'' derivative for single-variable functions, called \textit{deriv}, and the more general Fr\'echet derivative, called \textit{fderiv}.
The Fr\'echet derivative can be thought of as the generalization of the derivative of a function of a single variable to the derivative of a function of multiple variables, which gives the total derivative of a function as a linear map. For the purposes of the kinematic equations, the ``high school'' derivative suffices.
\par
In Lean, if a function is not differentiable at a point, the derivative at that point returns zero\footnote{Likewise, division by zero is defined to return zero, instead of something like ``undefined'' or ``NaN.'' Though this is an unfamiliar convention for scientists and engineers, many formal theorem provers do this for nuanced reasons, which are elaborated \href{https://xenaproject.wordpress.com/2020/07/05/division-by-zero-in-type-theory-a-faq/}{\color{blue}{in this blog post}}.}. During our first formalization attempt, we tried to define a function to be constant by setting its derivative to zero. However, $df/dx = 0$ may also arise if a function is not differentiable at that point. To avoid this edge case, we define another structure to require the equations of motion to be n-times continuously differentiable everywhere. We only require the equations to be n-times differentiable, instead of infinitely differentiable, for generality; a theorem can still instantiate this structure and assume infinite differentiability. We also declare this as a separate structure, instead of putting it in the \textit{motion} structure, to allow future proofs that require the equations to be n-times continuously differentiable on a set or an interval, rather than everywhere (e.g. a molecular mechanics force field with a non-smoothed cutoff is not differentiable at the cutoff). That way, depending on the theorem, the user can choose which extension is appropriate.
In Lean, this structure looks like \href{https://atomslab.github.io/LeanChemicalTheories/physics/kinematic_equations.html#motion_cont_diff_everywhere}{{\Large\texttwemoji{atom symbol}}}:
\begin{alltt}
\textcolor{indigo}{structure} \textcolor{maroon}{motion_cont_diff_everywhere} \textcolor{indigo}{(}\textcolor{tpurple}{\(\mathbb{K}\)} : Type u_1\textcolor{indigo}{)} \textcolor{indigo}{(}\textcolor{tpurple}{E} : Type u_2\textcolor{indigo}{)}
\textcolor{indigo}{[}is_R_or_C \textcolor{tpurple}{\(\mathbb{K}\)}\textcolor{indigo}{]} \textcolor{indigo}{[}inner_product_space \textcolor{tpurple}{\(\mathbb{K}\)} \textcolor{tpurple}{E}\textcolor{indigo}{]} \textcolor{indigo}{extends} \textcolor{maroon}{motion} \textcolor{tpurple}{\(\mathbb{K}\)} \textcolor{tpurple}{E} :=
\textcolor{indigo}{(}contdiff : \(\forall\) n : with_top \textcolor{tpurple}{\(\mathbb{N}\)}, \(\forall\) m : \textcolor{tpurple}{\(\mathbb{N}\)}, \textcolor{dgreen}{(}m < n\textcolor{dgreen}{)} \(\rightarrow\) \textcolor{dgreen}{(}cont_diff \textcolor{tpurple}{\(\mathbb{K}\)} n \textcolor{iibrown}{(}deriv^\textcolor{indigo}{[}m\textcolor{indigo}{]} position\textcolor{iibrown}{)}\textcolor{dgreen}{)}\textcolor{indigo}{)}
\end{alltt}
The field \textit{contdiff} states that for all $n$, defined as a natural number extended with positive infinity, and for all $m$, defined as a natural number, if $m$ is less than $n$, then the \(m^{th}\) derivative of position is continuously differentiable $n$ times.
\par
When acceleration is constant, this set of differential equations has four useful analytical solutions, the kinematic equations, Eq.~\ref{Kinematic Equation 1}--\ref{Kinematic Equation 4}, where the subscript naught denotes variables evaluated at $t=0$.
\begin{equation} \label{Kinematic Equation 1} \textbf{v}(t) = \textbf{a}t+\textbf{v}_0 \end{equation} \begin{equation} \label{Kinematic Equation 2} \textbf{x}(t) = \frac{\textbf{a}t^2}{2}+\textbf{v}_0t+\textbf{x}_0 \end{equation} \begin{equation} \label{Kinematic Equation 3} \textbf{x}(t) = \frac{\textbf{v}(t)+\textbf{v}_0}{2}t+\textbf{x}_0 \end{equation} \begin{equation} \label{Kinematic Equation 4} v^2(t) = v_0^2+2\textbf{a}\cdot \textbf{d} \end{equation} Under the assumption of one-dimensional motion, these equations simplify to the familiar introductory kinematic equations. Eq.~\ref{Kinematic Equation 4}, also known as the Torricelli equation, uses the shorthand square to represent the dot product, \(v^2(t) \equiv \textbf{v}(t)\cdot \textbf{v}(t)\), and \(\textbf{d} \equiv \textbf{x}(t)-\textbf{x}_0\) denotes the displacement. \par With this, we can now begin deriving the four kinematic equations. The derivations of the first three, Eq.~\ref{Kinematic Equation 1}--\ref{Kinematic Equation 3}, all use the same premises, given below: \begin{alltt} \textcolor{indigo}{(}\textcolor{tpurple}{\(\mathbb{K}\)} : Type u_1\textcolor{indigo}{)} \textcolor{indigo}{(}\textcolor{tpurple}{E} : Type u_2\textcolor{indigo}{)} [is_R_or_C \textcolor{tpurple}{\(\mathbb{K}\)}] [inner_product_space \textcolor{tpurple}{\(\mathbb{K}\)} E] \textcolor{indigo}{(}M : motion_cont_diff_everywhere \textcolor{tpurple}{\(\mathbb{K}\)} \textcolor{tpurple}{E}\textcolor{indigo}{)} \textcolor{indigo}{(}A : \textcolor{tpurple}{E}\textcolor{indigo}{)} \textcolor{indigo}{(}n : with_top \textcolor{tpurple}{\(\mathbb{N}\)}\textcolor{indigo}{)} \textcolor{indigo}{(}accel_const : motion.acceleration = \(\lambda\) \textcolor{dgreen}{(}t : \textcolor{tpurple}{\(\mathbb{K}\)}\textcolor{dgreen}{)}, A\textcolor{indigo}{)} \end{alltt} The first line contains four premises to declare the field and vector space the motion space is defined on. The next line defines a motion space, M.
The third line contains two premises: a variable, A, which represents the value of the constant acceleration, and n, the number of times position can be differentiated. When applying these theorems, the \textit{top} function, which means positive infinity in Lean, can be used to specify n. The final line is a premise that assumes acceleration is constant. The lambda function is constant because A is not a function of t, so for any value of t, the function outputs the same value, A. The three kinematic equations in Lean \href{https://atomslab.github.io/LeanChemicalTheories/physics/kinematic_equations.html#const_accel}{{\Large\texttwemoji{atom symbol}}} are given below (note that the premises are omitted, since they have already been given above). \begin{alltt} \textcolor{indigo}{theorem} \textcolor{maroon}{const_accel} premises : velocity = \(\lambda\) \textcolor{indigo}{(}t : \textcolor{tpurple}{\(\mathbb{K}\)}\textcolor{indigo}{)}, t\(\cdot\)A + velocity 0 := \textcolor{indigo}{theorem} \textcolor{maroon}{const_accel'} premises : position = \(\lambda\) \textcolor{indigo}{(}t : \textcolor{tpurple}{\(\mathbb{K}\)}\textcolor{indigo}{)}, \textcolor{indigo}{(}t^2\textcolor{indigo}{)}/2\(\cdot\)A + t\(\cdot\)\textcolor{indigo}{(}velocity 0\textcolor{indigo}{)} + position 0 := \textcolor{indigo}{theorem} \textcolor{maroon}{const_accel''} premises : \(\forall\) t : \textcolor{tpurple}{\(\mathbb{K}\)}, position t = \textcolor{indigo}{(}t/2\textcolor{indigo}{)}\(\cdot\)\textcolor{indigo}{(}\textcolor{dgreen}{(}velocity t\textcolor{dgreen}{)} + \textcolor{dgreen}{(}velocity 0\textcolor{dgreen}{)}\textcolor{indigo}{)} + position 0 := \end{alltt} The \textit{\(\cdot\)} symbol indicates scalar multiplication, such as when a vector is multiplied by a scalar. We normally use the \textit{\(\cdot\)} symbol for the dot product, but Lean uses the \texttt{inner} function for the dot product. Also, \texttt{velocity 0} means the velocity function evaluated at 0.
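As an informal cross-check of these statements (a paper-and-pencil derivation, not part of the Lean development), Eq.~\ref{Kinematic Equation 3} follows from the first two: solving Eq.~\ref{Kinematic Equation 1} for \(\textbf{a}t = \textbf{v}(t)-\textbf{v}_0\) and substituting into Eq.~\ref{Kinematic Equation 2} gives
\begin{equation*}
\textbf{x}(t) = \frac{\left(\textbf{v}(t)-\textbf{v}_0\right)t}{2}+\textbf{v}_0t+\textbf{x}_0 = \frac{\textbf{v}(t)+\textbf{v}_0}{2}t+\textbf{x}_0,
\end{equation*}
which is exactly Eq.~\ref{Kinematic Equation 3}.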
Lean uses parentheses for orders of operations, not for function inputs, so \(f(x)\) in normal notation converts to \texttt{f x} in Lean. The proofs of the first two theorems use the two differential equations from the \texttt{motion} structure, and the antiderivative, whose formalization we explain in the supplementary information (these theorems weren't available in \texttt{mathlib} at the time of writing, so we proved them ourselves). The third theorem is proved by rearranging the previous two theorems. Because we declared the field \texttt{is\_R\_or\_C}, the above proofs hold for both real and complex time. However, we were unable to prove Eq.~\ref{Kinematic Equation 4}, due to the complex conjugate that arises when simplifying the proof. Eq.~\ref{Kinematic Equation 4} uses the inner product, a function that takes in two vectors from a vector space and outputs a scalar. If the vector space is a Euclidean vector space, this is just the dot product. The inner product is sesquilinear: linear in its first argument, Equation \ref{Inner Product Linear}, but conjugate-linear in its second argument, Equation \ref{Inner Product Sesquilinear}. \begin{gather} \label{Inner Product Linear} \langle ax+by, z \rangle = a \langle x,z \rangle + b \langle y,z \rangle \\ \label{Inner Product Sesquilinear} \langle x,ay + bz \rangle = \Bar{a}\langle x,y \rangle + \Bar{b}\langle x,z \rangle \end{gather} The bar denotes the complex conjugate: for a complex number, \(g = a+bi\), the complex conjugate is \(\Bar{g} = a-bi\). If \(g\) is a real number, then \(g = \Bar{g}\). For the proof of Eq.~\ref{Kinematic Equation 4}, we reach a form where one of the inner products has a sum in its second argument that we have to break up, and no matter how we rewrite the proof line, some inner product ends up with a sum in its second argument, producing conjugates. To proceed, we instead defined the final kinematic equation to hold only for real time.
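To see informally where the conjugate obstructs the complex case (a paper-and-pencil expansion, not part of the Lean development), substitute \(\textbf{v}(t)=t\textbf{a}+\textbf{v}_0\) into \(v^2(t)=\langle \textbf{v}(t),\textbf{v}(t)\rangle\) and expand with Equations \ref{Inner Product Linear} and \ref{Inner Product Sesquilinear}:
\begin{equation*}
\langle t\textbf{a}+\textbf{v}_0,\, t\textbf{a}+\textbf{v}_0\rangle = t\Bar{t}\,\langle \textbf{a},\textbf{a}\rangle + t\,\langle \textbf{a},\textbf{v}_0\rangle + \Bar{t}\,\langle \textbf{v}_0,\textbf{a}\rangle + \langle \textbf{v}_0,\textbf{v}_0\rangle.
\end{equation*}
Over the real numbers, \(\Bar{t}=t\) and \(\langle \textbf{v}_0,\textbf{a}\rangle = \langle \textbf{a},\textbf{v}_0\rangle\), so the cross terms combine into \(2t\,\langle \textbf{a},\textbf{v}_0\rangle\) and the whole expression rearranges to \(v_0^2 + 2\,\langle \textbf{a},\, \textbf{x}(t)-\textbf{x}_0\rangle\), recovering Eq.~\ref{Kinematic Equation 4}; over the complex numbers, the \(\Bar{t}\) terms remain and the simplification fails.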
In Lean, this looks like \href{https://atomslab.github.io/LeanChemicalTheories/physics/kinematic_equations.html#real_const_accel'''}{{\Large\texttwemoji{atom symbol}}}: \begin{alltt} \textcolor{indigo}{theorem} \textcolor{maroon}{real_const_accel'''} \textcolor{indigo}{(}N : motion_cont_diff_everywhere \(\mathbb{R}\) E\textcolor{indigo}{)} \textcolor{indigo}{(}accel_const : N.to_motion.acceleration = \(\lambda\) \textcolor{dgreen}{(}t : \(\mathbb{R}\)\textcolor{dgreen}{)}, A\textcolor{indigo}{)} \{n : with_top \(\mathbb{N}\)\} : \(\forall\) t : \(\mathbb{R}\), inner \textcolor{indigo}{(}motion.velocity t\textcolor{indigo}{)} \textcolor{indigo}{(}motion.velocity t\textcolor{indigo}{)} = inner \textcolor{indigo}{(}motion.velocity 0\textcolor{indigo}{)} \textcolor{indigo}{(}motion.velocity 0\textcolor{indigo}{)} + 2 * inner A \textcolor{indigo}{(}\textcolor{dgreen}{(}motion.position t\textcolor{dgreen}{)} - \textcolor{dgreen}{(}motion.position 0\textcolor{dgreen}{)}\textcolor{indigo}{)} := \end{alltt} While we haven't proved that Eq.~\ref{Kinematic Equation 4} doesn't hold for complex time, we encountered difficulties and contradictions when attempting to prove the complex case. Thus, Eq.~\ref{Kinematic Equation 4} currently holds only for real time. An imaginary-time framework can be used to derive equations of motion from non-standard Lagrangians \cite{popov2005imaginary, rami2013non}, and may in the future help examine hidden properties in classical and quantum dynamical systems. By exploring these proofs in both real and complex time, we illustrate how a proof in one case can be adapted for related cases. Here, the first three proofs over the real numbers extend to the complex numbers simply by changing the type declared up front, and the validity of the proofs in the more general context is immediately apparent. \section{Conclusions and Outlook} \par In this paper, we demonstrate how interactive theorem proving can be used to formally verify the mathematics in science and engineering.
We found that, although formalization is slower and more challenging than writing hand-written derivations, our resulting proofs are more rigorous and complete. We observed that in some cases, translating scientific statements into formal language revealed hidden assumptions behind the mathematical derivations. For example, we make explicit common implicit assumptions, such as the requirement that a denominator be nonzero wherever we divide. More abstractly, we have attempted to reveal the formal definitions behind equations, such as exactly how pressure is defined as a function, or the differentiability assumptions needed for kinematics. All of these insights are a result of formalizing these theorems. We concur with others who have discussed the limitations of hand-written proofs and their reliability \cite{bundy2005proof, hales2008formal, avigad2014formally}; formalized proofs can provide greater assurance and robustness. The Lean Theorem Prover is especially powerful, as it facilitates re-use of theorems and construction of higher-level mathematical objects from lower-level ones. We showed how this feature can be leveraged in scientific proofs; after a fundamental theory is formally verified, it can then be used in the development of other theories. This can be approached in two ways: \emph{definitions} can be directly reused in subsequent proofs, and \emph{structures} can enable hierarchies of related concepts, from general to more specific. Thus we have not just proved a couple of theorems about scientific objects, but have begun to create an interconnected structure of formally verified proofs relating fields of science. Beyond being formally grounded in axiomatic mathematics, Lean proofs are machine-readable instances of correct mathematical logic that can serve as a substrate for machine learning.
Machine learning tools designed for text prediction \cite{arXiv:2005.14165, arXiv:2107.03374} can be fine-tuned to ``auto-complete'' mathematical proofs, given a formal problem statement \cite{proofartifact2021, GPT-f_lean}, even to the point of generating correct solutions to International Math Olympiad problems \cite{polu2022formal}. Recently, large language models have demonstrated capabilities in solving chemistry problems \cite{hocky2022natural, white2022large}, as well as answering scientific question-and-answer problems invoking quantitative reasoning \cite{10.48550/ARXIV.2206.14858}. Formal proofs in science and engineering mathematics may in the future provide useful, high-quality data for artificial intelligences aiming to learn, reason, and discover in science \cite{bradshaw1983studying, kitano2021nobel, krenn2022scientific}. These proofs were written in Lean 3 \cite{avigad2014formally}, because we needed access to the extensive \texttt{mathlib} library. While Lean 3 was designed for theorem proving and management of large-scale proof libraries, the new version, Lean 4 \cite{moura2021lean}, is also a fully-fledged functional programming language \cite{moura2021lean, ProgLean}. The Lean community is working on porting \texttt{mathlib} to Lean 4; when that is complete, we recommend future proofs be written in Lean 4, which is more capable, more versatile, and easier to use than Lean 3. Our next goals are to continue building out classical thermodynamics, formalize statistical mechanics, and eventually construct proofs relating the two fields. We are also interested in laying the foundations for classical mechanics in Lean, and in formalizing more difficult proofs like Noether's theorem \cite{kosmann2011noether}. We hope these expository proofs in adsorption, thermodynamics, and kinematics will inspire others to consider what proofs and derivations could be formalized in their fields of expertise.
Virtually all mathematical concepts can be established using dependent type theory; the density functionals, partial derivatives, N-dimensional integrals, and random variables appearing in our favorite theories \emph{should be expressible in Lean}. Just as an ever-growing online community of mathematicians and computer scientists is building \texttt{mathlib} \cite{zulipLean}, we anticipate a similar group of scientists building a library of formally-verified scientific theories and engineering mathematics. To join, start learning Lean, join the online community, and see what we can prove! \section*{Supporting Information} The supporting information provides additional background on how Lean works (Section \ref{LeanAdditionalInfo}) and all the additional proofs (Section \ref{SIproofs}), including an improved version of the Langmuir adsorption model (\ref{LangmuirSI}), the final derived form of the BET adsorption model (\ref{BETSI}), and the antiderivative proofs (\ref{antiderivSI}) that were used for the kinematic equations. All code and proofs for this project are available in our GitHub repository \href{https://atomslab.github.io/LeanChemicalTheories/}{{\Large\texttwemoji{atom symbol}}}. \section*{Conflicts of Interest} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. \raggedbottom \section*{Glossary of Mathematical Terms and Symbols}\label{Glossary} \begin{table}[H] \begin{tabular}{p{0.1\textwidth} p{0.42\textwidth} p{0.41 \textwidth}} \hline \textbf{Term} & \textbf{Definition} & \textbf{Example} \\ \hline Axiom & A self-evident truth which is assumed to be true and does not require proof. & Two sets are equal if they have the same elements; this is not proved, it is assumed. \\ Theorem & A proposition or statement in math that can be demonstrated to be true by accepted mathematical operations and arguments.
& The Pythagorean theorem, $a^2 + b^2 = c^2$ for all right triangles \\ Lemma & A true statement which is used as a stepping stone to prove other true statements. A lemma is a smaller, less important result than a theorem. & All numbers multiplied by 2 are even. Proving this could be an intermediate result used for other proofs. \\ Proposition & A true or false statement. & Socrates is mortal, all swans are white, and $3 < 4$ are propositions. \\ Hypothesis (Premise) & A statement assumed to be true that the proof follows from. It can also be thought of as the conditions or prerequisites for the theorem to hold. We emphasize that the math community uses ``hypothesis'' somewhat differently than the scientific community. & If all four sides of a rectangle have the same length, it is a square. The hypotheses would be ``the shape is a rectangle (has four sides and four right angles)'' and ``all sides have the same length.'' \\ Conjecture & A statement which is proposed to be true, but no proof has been found yet. & Goldbach's conjecture: every even number greater than 2 is the sum of two prime numbers. This hasn't been proven true or false yet. \\ Proof & A sequence of logical steps which conclude that a statement is true from its hypotheses. & The proof of the Pythagorean theorem using geometry \\ Function & An expression that defines a relation between a set of inputs and a set of outputs. & \(f(x)=x^2\) relates (or maps) the set of real numbers $x$ to their square. \\ Type & A type can be thought of as a set, or category, that contains terms. In other programming languages, types define the category of data certain objects have (e.g. floats, strings, integers). Types in Lean work this way, too, and have more features: they can depend on values, as well as be the subject of proofs. & The natural numbers are a type. The Booleans (True and False) are also a type. Functions from integers to reals are also a type. \\ Term & Terms are members of a type.
& Considering the type of natural numbers, numbers like 1, 2, 3, and 8 are terms of that type. \\ \(\mathbb{N}\) & Symbol for the set of natural numbers & The numbers 0, 1, 2, 3, 4... \\ \(\mathbb{Z}\) & Symbol for the set of integers & The numbers -3, -2, -1, 0, 1, 2 ... \\ \(\mathbb{Q}\) & Symbol for the set of rational numbers & The numbers \(\frac{1}{2}, \frac{3}{4}, \frac{5}{9}\), etc. \\ \(\mathbb{R}\) & Symbol for the set of real numbers & -1, 3.6, Euler's number, $\pi$, $\sqrt{2}$, etc. \\ \(\mathbb{C}\) & Symbol for the set of complex numbers & -1, 5 + 2$i$, $\sqrt{2} + 5i$, etc. \\ \(\forall\) & Logical symbol for ``for all'' & \\ \(\exists\) & Logical symbol for ``there exists'' & \end{tabular} \end{table} \section{Supporting Information} \subsection{Additional Background}\label{LeanAdditionalInfo} Lean is an open-source theorem prover developed by Microsoft Research and Carnegie Mellon University, based on dependent type theory, with the goal of formalizing theorems in an expressive way \cite{de_moura_kong_avigad_van_doorn_von_raumer_2015}. Lean supports user interaction and constructs axiomatic proofs through user input, allowing it to bridge the gap between interactive and automated theorem proving. Like Mizar \cite{rudnicki1992overview} and Isabelle \cite{wenzel2002isabelle}, Lean allows users to state definitions and theorems, but it also supports the more imperative tactic styles of Coq \cite{barras1997coq}, HOL-Light \cite{gordon1993introduction}, Isabelle \cite{nipkow2002isabelle} and PVS \cite{owre1992pvs} for constructing proofs. The ability to define mathematical objects, rather than just postulate them, is where Lean gets its power \cite{avigad2015theorem}. It can be used to create an interconnected system of mathematics where the relationships between objects from different fields can be easily shown without losing generality.
\begin{figure} \caption{Overview of Lean Theorem Prover} \label{leanoverview} \end{figure} \par As mentioned above, the power of Lean comes from the ability to define objects and prove properties about them. In Lean, there are three ways to define new types: type universes, Pi types, and inductive types. The first two are used to construct the basis of dependent type theory and serve more theoretical, foundational purposes; here we focus on inductive types. Standard inductive types are built from a set of constructors and well-founded \textit{recursion}. Non-recursive inductive types that contain only one constructor are called \textit{structures}. \par Many mathematical objects in Lean are constructed through inductive types \cite{dybjer1994inductive}. The natural numbers are an inductive type, defined using Peano's encoding \cite{skolem1955peano}. This requires two constructors: a constant element, 0 : nat, and a function called the successor function, S. Then one can be constructed as S(0), two can be constructed as S(S(0)), etc. In Lean, the natural numbers are defined as: \begin{alltt} \textcolor{indigo}{inductive} \textcolor{maroon}{nat} | zero : nat | succ \textcolor{indigo}{(}n : nat\textcolor{indigo}{)} : nat \end{alltt} Here, the type \textit{nat} is defined through recursion by a constant element, zero, and a function. With this, the \textit{def} command is used to define properties about the type, like addition or multiplication.
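As a small sanity check of the Peano encoding (a sketch of our own, relying on Lean 3's built-in numeral elaboration), the usual numerals reduce to iterated applications of the successor constructor, so simple identities close by \texttt{rfl}, definitional reflexivity:
\begin{alltt}
example : nat.succ nat.zero = 1 := rfl
example : nat.succ (nat.succ nat.zero) = 2 := rfl
\end{alltt}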
For instance, the addition of the natural numbers is defined as: \begin{alltt} \textcolor{dgreen}{protected} \textcolor{indigo}{def} \textcolor{maroon}{add} : nat → nat → nat | a zero := a | a \textcolor{indigo}{(}succ b\textcolor{indigo}{)} := succ \textcolor{indigo}{(}add a b\textcolor{indigo}{)} \end{alltt} Addition is defined as a function that takes in two natural numbers and outputs a natural number. Since the natural numbers are created from two constructors, there are two cases of addition that must be handled. The first is a general natural number plus zero, which yields the general natural number, and the next is a general natural number plus the successor of a general natural number. The second case uses recursion and calls \textit{add} again until it reduces to zero. \par The other way to define types is with \textit{structure}, which bundles data and constraints on a type variable; marking such a structure as a \textit{class} lets Lean infer instances automatically. For instance, the class \textit{has\_add} constrains a type to have a function called \textit{add} which represents addition. \begin{alltt} \textcolor{indigo}{class} \textcolor{maroon}{has_add} \textcolor{indigo}{(}\(\alpha\) : Type u\textcolor{indigo}{)} := \textcolor{indigo}{(}add : \(\alpha\) → \(\alpha\) → \(\alpha\)\textcolor{indigo}{)} \end{alltt} This can be used for more advanced ideas, like defining rings or abelian groups. We can use classes to define areas of science as new types constrained to follow certain rules. \subsection{Additional Proofs} \label{SIproofs} \subsubsection{Langmuir Adsorption}\label{LangmuirSI} The first Langmuir proof introduced earlier states every premise explicitly; however, we can condense it by rewriting \textit{hrad} and \textit{hrd} into \textit{hreaction} to yield \textit{k\_ad*Pₐ*S = k\_d*A}, and we can then rewrite \textit{h\(\theta\)} and \textit{hK} in the goal statement.
While \textit{hrad}, \textit{hrd}, \textit{h\(\theta\)}, and \textit{hK} have scientific significance, they do not have any mathematical significance. In Lean, it looks like: \begin{alltt} \textcolor{indigo}{theorem} \textcolor{maroon}{Langmuir_single_site2} \textcolor{indigo}{(}Pₐ k_ad k_d A S : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)} \textcolor{indigo}{(}hreaction : k_ad*Pₐ*S = k_d*A\textcolor{indigo}{)} \textcolor{indigo}{(}hS : S \(\ne\) 0\textcolor{indigo}{)} \textcolor{indigo}{(}hk_d : k_d \(\ne\) 0\textcolor{indigo}{)} : A/\textcolor{indigo}{(}S+A\textcolor{indigo}{)} = k_ad/k_d*Pₐ/\textcolor{indigo}{(}1+k_ad/k_d*Pₐ\textcolor{indigo}{)} := \end{alltt} \par However, while those four variables have no mathematical significance, and only serve to hinder our proofs, they do have scientific significance, and we do not want to simply omit them. Instead, we can use the \textit{let} command to create an in-line, local definition. This allows us to retain the applicability of the theorem while still keeping the scientifically important variables. In Lean, this looks like \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/langmuir_kinetics.html} {{\Large\texttwemoji{atom symbol}}}: \begin{alltt} \textcolor{indigo}{theorem} \textcolor{maroon}{Langmuir_single_site} \textcolor{indigo}{(}Pₐ k_ad k_d A S : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{)} \textcolor{indigo}{(}hreaction : \textcolor{indigo}{let} r_ad := k_ad*Pₐ*S, r_d := k_d*A \textcolor{indigo}{in} r_ad = r_d\textcolor{indigo}{)} \textcolor{indigo}{(}hS : S \(\ne\) 0\textcolor{indigo}{)} \textcolor{indigo}{(}hk_d : k_d \(\ne\) 0\textcolor{indigo}{)} : \textcolor{indigo}{let} \(\theta\) := A/(S+A), K := k_ad/k_d \textcolor{indigo}{in} \(\theta\) = K*Pₐ/\textcolor{indigo}{(}1+K*Pₐ\textcolor{indigo}{)} := \end{alltt} The first line after the \textit{theorem} statement gives the variables used in the proof.
Notice that \(r_{ad}\), \(r_d\), \(K\), and \(\theta\) are not defined as variables. Instead, the \textit{let} statement defines those four variables in their respective premise or goal. Then, in the proof, we can simplify the \textit{let} statement to get local definitions of those variables, just like \textit{hrad}, \textit{hrd}, \textit{h\(\theta\)}, and \textit{hK}. This version follows the same proof logic as the earlier one, minus the two initial rewrites, but it is far easier to apply, because the first proof carries all of those extra hypotheses. Suppose we wanted to prove \textit{langmuir\_single\_site2} and had already proven the first, premise-heavy version: we would find it impossible to apply, because we would be missing premises like \textit{hrad} or \textit{hrd}. Yet the other direction works, i.e., we can use \textit{langmuir\_single\_site} to prove \textit{langmuir\_single\_site2}. Having all of those extra premises that define the relations between variables only serves to hinder the applicability of our proofs. \subsubsection{BET Adsorption}\label{BETSI} \par We continue the derivation of Equation 27 from the paper, which aims to redefine \textit{x} as \(x = P/P_0\), by recognizing that the volume should approach infinity at the saturation pressure; mathematically, it approaches infinity as \textit{x} approaches one from the left. For \textit{x} to approach one, pressure must approach \(1/C_L\). First, we show that Equation 26 from the paper approaches infinity as \textit{P} approaches \(1/C_L\). We specifically require it to approach from the left, because the volume approaches negative infinity if we come from the right.
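Schematically (our paraphrase of the isotherm from the original BET paper; the precise Lean definition of \textit{brunauer\_26} appears earlier in the text), with \(x = C_L P\) and \(C \geq 1\),
\begin{equation*}
V = \frac{V_0\,Cx}{(1-x)\left(1+(C-1)x\right)},
\end{equation*}
so as \(P \to (1/C_L)^{-}\) we have \(x \to 1^{-}\), the factor \(1-x \to 0^{+}\), and \(V \to +\infty\); approaching from the right instead makes \(1-x\) negative, sending \(V\) to \(-\infty\).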
In Lean, this looks like \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/BETInfinite.html#tendsto_at_top_at_inv_CL}{{\Large\texttwemoji{atom symbol}}}: \begin{alltt} \textcolor{indigo}{lemma} \textcolor{maroon}{BET.tendsto_at_top_at_inv_CL} : filter.tendsto \textcolor{maroon}{brunauer_26} \textcolor{indigo}{(}nhds_within \textcolor{dgreen}{(}1/C_L\textcolor{dgreen}{)} \textcolor{dgreen}{(}set.Ioo 0 \textcolor{iibrown}{(}1/C_L\textcolor{iibrown}{)}\textcolor{dgreen}{)}\textcolor{indigo}{)} filter.at_top := \end{alltt} The function \textit{filter.tendsto} is the generic definition of the limit. It has three inputs: the function, what the independent variable approaches, and what the function approaches, in that order. We split this into three lines to better visualize what is happening. First, we are using the object \textit{brunauer\_26}, which is the BET equation as a function of pressure in terms of x. Next, \textit{(nhds\_within (1/C\_L) (set.Ioo 0 (1/C\_L)))} is how we say \textit{P} approaches \(1/C_L\) from the left. \textit{nhds\_within} means the intersection of a neighborhood, abbreviated as \textit{nhds}, and a set. A neighborhood of a point is an open set around that point. \textit{set.Ioo} designates a left-open right-open interval; here we have the interval \((0,1/C_L)\). The intersection of the neighborhood and this set constrains us to approach \(1/C_L\) from the left. The final part is \textit{filter.at\_top}, which is a generalization of infinity, and says our function approaches infinity. \par In the original derivation by Brunauer et al., they wish to show that \(P_0 = 1/C_L\): because the volume approaches infinity as the pressure approaches either of these values, the two values must be equal. It should be noted that this holds only if \textit{C}, the BET constant, is greater than or equal to one. If not, the function has two points in the positive pressure region where it diverges to infinity.
We also have problems showing such a congruence in Lean, since this relation has yet to be formalized and the congruence of two \textit{nhds\_within} filters has not been shown. For now, we use the lemma above to prove a simpler version of the theorem, where we assume \(P_0=1/C_L\) and show that, with this assumption, \(V\) approaches infinity. In Lean, this looks like \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/BETInfinite.html#brunauer_27}{{\Large\texttwemoji{atom symbol}}}: \begin{alltt} \textcolor{indigo}{theorem} \textcolor{maroon}{brunauer_27} \textcolor{indigo}{(}h1 : P_0 = 1/C_L\textcolor{indigo}{)} : filter.tendsto \textcolor{maroon}{brunauer_26} \textcolor{indigo}{(}nhds_within \textcolor{iibrown}{(}P_0\textcolor{iibrown}{)} \textcolor{dgreen}{(}set.Ioo 0 \textcolor{iibrown}{(}P_0\textcolor{iibrown}{)}\textcolor{dgreen}{)}\textcolor{indigo}{)} filter.at_top := \end{alltt} The proof of this theorem involves rewriting h1 and then applying the lemma proved above. While we would prefer to prove that \(P_0 = 1/C_L\), this proof will serve as a placeholder until \texttt{mathlib} builds out more mathematics related to this congruence. This theorem does not use a local definition, like Langmuir, because \(P_0\) is already defined as a variable using \textit{constant}. \par Finally, we formalize the derivation of Equation 28 from the paper, given by Equation \ref{BET 28}. \begin{equation} \label{BET 28} \frac{V}{AV_0} = \frac{CP}{(P_0-P)\left(1+(C-1)(P/P_0)\right)} \end{equation} Just like Equation \ref{BET 26}, we first define Equation \ref{BET 28} as an object, then formalize the derivation of this object.
In Lean, the object looks like \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/BETInfinite.html#brunauer_28}{{\Large\texttwemoji{atom symbol}}}: \begin{alltt} \textcolor{indigo}{def} \textcolor{maroon}{brunauer_28} := \(\lambda\) P : \(\mathbb{R}\), C*P/\textcolor{indigo}{(}\textcolor{dgreen}{(}P_0-P\textcolor{dgreen}{)}*\textcolor{dgreen}{(}1+\textcolor{iibrown}{(}C-1\textcolor{iibrown}{)}*\textcolor{iibrown}{(}P/P_0\textcolor{iibrown}{)}\textcolor{dgreen}{)}\textcolor{indigo}{)} \end{alltt} Now we can prove a theorem that formalizes the derivation of this object \href{https://atomslab.github.io/LeanChemicalTheories/adsorption/BETInfinite.html#brunauer_28_from_seq}{{\Large\texttwemoji{atom symbol}}}: \begin{alltt} \textcolor{indigo}{theorem} \textcolor{maroon}{brunauer_28_from_seq} \textcolor{indigo}{\{}P V_0 : \textcolor{tpurple}{\(\mathbb{R}\)}\textcolor{indigo}{\}} \textcolor{indigo}{(}h27 : P_0 = 1/C_L\textcolor{indigo}{)} \textcolor{indigo}{(}hx1 : \textcolor{dgreen}{(}x P\textcolor{dgreen}{)} < 1\textcolor{indigo}{)} \textcolor{indigo}{(}hx2 : 0 < \textcolor{dgreen}{(}x P\textcolor{dgreen}{)}\textcolor{indigo}{)} : let Vads := V_0 * \(\sum'\) \textcolor{indigo}{(}k : \(\mathbb{N}\)\textcolor{indigo}{)}, ↑k * \textcolor{indigo}{(}seq P k\textcolor{indigo}{)}, A := \(\sum'\) \textcolor{indigo}{(}k : \(\mathbb{N}\)\textcolor{indigo}{)}, \textcolor{indigo}{(}seq P k\textcolor{indigo}{)} in Vads/A = V_0*\textcolor{indigo}{(}\textcolor{maroon}{brunauer_28} P\textcolor{indigo}{)} := \end{alltt} Rather than explicitly solving the sequence ratio, as we did for Equation \ref{BET 26}, we can now use the theorem that derived Equation \ref{BET 26} to solve the left-hand side of our new goal. We then have a goal where we show that Equation \ref{BET 28} is just a rearranged version of Equation \ref{BET 26}, which is done through algebraic manipulation.
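The algebraic step can also be checked by hand (an informal check; the Lean proof performs the corresponding rewrites). Writing Equation \ref{BET 26} in the standard BET form \(V/A = V_0\,Cx/\bigl((1-x)(1+(C-1)x)\bigr)\) (our paraphrase) and substituting \(x = P/P_0\) gives
\begin{equation*}
\frac{V_0\,C(P/P_0)}{\left(1-P/P_0\right)\left(1+(C-1)(P/P_0)\right)} = \frac{V_0\,CP}{(P_0-P)\left(1+(C-1)(P/P_0)\right)},
\end{equation*}
after multiplying the numerator and denominator by \(P_0\), which is \(V_0\) times \textit{brunauer\_28}.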
\subsubsection{The antiderivative in Lean} \label{antiderivSI} \par For a function \(f\), the antiderivative of that function, given by \(F\), is a differentiable function such that the derivative of \(F\) is the original function \(f\). In Lean, we formalize the general antiderivative and show how it can be used for several specific applications, including the antiderivative of a constant, of a natural power, and of an integer power. We generalize our functions as functions from a general field to a vector space, \(f : \mathbb{K} \rightarrow E\). This allows us to apply the theorems to any parametric vector function, including scalar functions. \par Our goal is to show that, from the assumptions that \(f(t)\) is the derivative of \(F(t)\) and \(f(t)\) is the derivative of \(G(t)\), we obtain the equation \(F(t) = G(t) + F(0)\), which is the antiderivative of \(f(t)\). Here \(G(t)\) is the variable portion of the equation. For example, if the antiderivative is of the form \(F(t) = t^3 + t + 6\), then \(G(t) = t^3 + t\) and \(F(0) = 6\). \(F(0)\) is the constant of integration, but written in a more explicit relation to the function. Since \(G(t)\) contains only the variable terms, we have as another premise \(G(0) = 0\). \par The first goal is to show that the difference \(F(t)-G(t)\) is constant. Moving \(G(t)\) to the left-hand side leaves an expression equal to a constant. \begin{equation} \label{Constant Function} F(t)-G(t) = C \end{equation} Thus, we can relate any two points along this function: \(\forall x\ y,\ F(x)-G(x)=F(y)-G(y)\). To show this holds, we recognize that if Equation \ref{Constant Function} is constant, then its derivative is equal to zero.
\begin{equation} \label{Deriv of Constant} \frac{d}{dt}(F(t)-G(t)) = 0 \end{equation} Next, we apply the linearity of differentiation to Equation \ref{Deriv of Constant} to get a new form: \(\frac{d}{dt}F(t) - \frac{d}{dt}G(t) = 0\), and rearrange to get: \begin{equation} \label{Antideriv sub} \frac{d}{dt}F(t) = \frac{d}{dt}G(t) \end{equation} From the first premise, we assumed that \(f(t)\) is the derivative of \(F(t)\). From our second premise, we assumed that \(f(t)\) is also the derivative of \(G(t)\). Thus, applying both premises, we can simplify Equation \ref{Antideriv sub} to: \begin{equation} f(t) = f(t) \end{equation} which is trivially true. \par Now that we have a new premise to use, given by Equation \ref{Antideriv Const}, we can specialize this statement to get our final form. \begin{equation} \label{Antideriv Const} \forall x y, F(x)-G(x)=F(y)-G(y) \end{equation} We specialize the universally quantified variables by supplying concrete values: for x, we use t (the variable we have been differentiating with respect to), and for y, we use 0. Thus, Equation \ref{Antideriv Const} becomes: \begin{equation} \label{Antideriv Const2} F(t) - G(t) = F(0) - G(0) \end{equation} Our third premise was that \(G(0) = 0\), so we can simplify and rearrange Equation \ref{Antideriv Const2} to get our final form: \begin{equation} \label{Antideriv Final Form} F(t) = G(t) + F(0) \end{equation} which satisfies the goal we laid out at the beginning.
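To make the recipe concrete (a worked check by hand, not a Lean proof), take the example from above with \(f(t) = 3t^2 + 1\) and \(G(t) = t^3 + t\). Both premises hold, since \(G'(t) = 3t^2 + 1 = f(t)\) and \(G(0) = 0\), so Equation \ref{Antideriv Final Form} yields
\begin{equation*}
F(t) = t^3 + t + F(0),
\end{equation*}
recovering \(F(t) = t^3 + t + 6\) when \(F(0) = 6\).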
In Lean, the statement of this theorem looks like \href{https://atomslab.github.io/LeanChemicalTheories/math/antideriv.html#antideriv}{{\Large\texttwemoji{atom symbol}}}: \begin{alltt} \textcolor{indigo}{theorem} \textcolor{maroon}{antideriv} \textcolor{indigo}{\{}\textcolor{tpurple}{E} : Type u_2\textcolor{indigo}{\}} \textcolor{indigo}{\{}\textcolor{tpurple}{\(\mathbb{K}\)}: Type u_3\textcolor{indigo}{\}} \textcolor{indigo}{[}is_R_or_C \textcolor{tpurple}{\(\mathbb{K}\)}\textcolor{indigo}{]} \textcolor{indigo}{[}normed_add_comm_group \textcolor{tpurple}{E}\textcolor{indigo}{]} \textcolor{indigo}{[}normed_space \textcolor{tpurple}{\(\mathbb{K}\)} \textcolor{tpurple}{E}\textcolor{indigo}{]} \textcolor{indigo}{\{}f F G: \textcolor{tpurple}{\(\mathbb{K}\)} → \textcolor{tpurple}{E}\textcolor{indigo}{\}} \textcolor{indigo}{(}hf : \(\forall\) t, has_deriv_at F \textcolor{dgreen}{(}f t\textcolor{dgreen}{)} t\textcolor{indigo}{)} \textcolor{indigo}{(}hg : \(\forall\) t, has_deriv_at G \textcolor{dgreen}{(}f t\textcolor{dgreen}{)} t\textcolor{indigo}{)} \textcolor{indigo}{(}hg' : G 0 = 0\textcolor{indigo}{)} : F = \(\lambda\) t, G t + F 0 := \end{alltt} Applying the \textit{antideriv} theorem to examples is straightforward. We will show an example by deriving the antiderivative of a constant function.
In Lean, we would state this as \href{https://atomslab.github.io/LeanChemicalTheories/math/antideriv.html#antideriv_const}{{\Large\texttwemoji{atom symbol}}}: \begin{alltt} \textcolor{indigo}{theorem} \textcolor{maroon}{antideriv_const} \textcolor{indigo}{(}F : \textcolor{tpurple}{\(\mathbb{K}\)} → \textcolor{tpurple}{E}\textcolor{indigo}{)} \textcolor{indigo}{\{}k : \textcolor{tpurple}{E}\textcolor{indigo}{\}} \textcolor{indigo}{(}hf : \(\forall\) t, has_deriv_at F k t\textcolor{indigo}{)}: \textcolor{indigo}{(}F = λ \textcolor{dgreen}{(}x : \textcolor{tpurple}{\(\mathbb{K}\)}\textcolor{dgreen}{)}, x\(\cdot\)k + F 0\textcolor{indigo}{)} := \end{alltt} Here we say that the derivative of \(F(x)\) is the constant \(k\), and want to show that \(F(x) = x\cdot k + F(0)\), where the ``\(\cdot\)'' operator stands for scalar multiplication. To use the \textit{antideriv} theorem, we must show that its premises hold, meaning we must show: \begin{alltt} \(\forall\) t, has_deriv_at F k t \(\forall\) t, has_deriv_at (λ x, x\(\cdot\)k) k t 0\(\cdot\)k = 0 \end{alltt} The first goal is explicitly given in our premises, \textit{hf}. The next goal can be derived by pulling out the constant \(k\) and showing that the identity function \(x \mapsto x\) has derivative equal to \(1\). The final goal can be easily proven by recognizing that zero multiplied by anything is zero. Thus, we have formalized the antiderivative of a constant function, and can use this same process for any other function. The antiderivative is especially important for deriving the kinematic equations, as seen in the next section. \end{document}
\begin{document} \title{Is affine-invariance well defined on SPD matrices? A principled continuum of metrics} \titlerunning{Power-affine and deformed-affine metrics on SPD matrices} \author{Yann Thanwerdas \and Xavier Pennec} \authorrunning{Y. Thanwerdas, X. Pennec} \institute{Universit\'e C\^ote d'Azur, Inria, Epione, France} \maketitle \begin{abstract} Symmetric Positive Definite (SPD) matrices have been widely used in medical data analysis and a number of different Riemannian metrics were proposed to compute with them. However, there are very few methodological principles guiding the choice of one particular metric for a given application. Invariance under the action of the affine transformations was suggested as a principle. Another concept is based on symmetries. However, the affine-invariant metric and the recently proposed polar-affine metric are both invariant and symmetric. Comparing these two cousin metrics leads us to introduce much wider families: power-affine and deformed-affine metrics. Within this continuum, we investigate other principles to restrict the family size. \keywords{SPD matrices \and Riemannian symmetric space.} \end{abstract} \section{Introduction} Symmetric positive definite (SPD) matrices have been used in many different contexts. In diffusion tensor imaging for instance, a diffusion tensor is a 3-dimensional SPD matrix \cite{lenglet_statistics_2006,pennec_riemannian_2006,Fletcher07}; in brain-computer interfaces (BCI) \cite{Barachant13}, in functional MRI \cite{Deligianni11} or in computer vision \cite{Cheng13}, an SPD matrix can represent a covariance matrix of a feature vector, for example a spatial covariance of electrodes or a temporal covariance of signals in BCI. 
In order to make statistical operations on SPD matrices like interpolations, computing the mean or performing a principal component analysis, it has been proposed to consider the set of SPD matrices as a manifold and to provide it with some geometric structures like a \textit{Riemannian metric}, a transitive \textit{group action} or some \textit{symmetries}. These structures can be more or less natural depending on the context of the applications, and they can provide closed-form formulas and consistent algorithms \cite{pennec_riemannian_2006,Dryden09}. Many Riemannian structures have been introduced over the manifold of SPD matrices \cite{Dryden09}: Euclidean, log-Euclidean, affine-invariant, Cholesky, square root, power-Euclidean, Procrustes... Each of them has different mathematical properties that can fit the data in some problems but can be inappropriate in some other contexts: for example the curvature can be null, positive, negative, constant, not constant, covariantly constant... These curvature properties have important consequences on the way we interpolate two points, on the consistency of algorithms, and more generally on every statistical operation one could want to do with SPD matrices. Therefore, a natural question one can ask is: given the practical context of an application, how should one choose the metric on SPD matrices? Are there some relations between the mathematical properties of the geometric structure and the intrinsic properties of the data? In this context, the affine-invariant metric \cite{pennec_riemannian_2006,Lenglet06,Fletcher07} was introduced to give a computing framework that is invariant under affine transformations of the variables. This metric endows the manifold of SPD matrices with the structure of a Riemannian symmetric space. Such spaces have a covariantly constant curvature, thus they share some convenient properties with constant curvature spaces but with fewer constraints.
It was actually shown that there exists not only one but a one-parameter family that is invariant under these affine transformations \cite{Pennec08}. More recently, \cite{Su12,Su14,Zhang18} introduced another Riemannian symmetric structure that does not belong to the previous one-parameter family: the polar-affine metric. In this work, we unify these two frameworks by showing that the polar-affine metric is a square deformation of the affine-invariant metric (Section 2). We generalize this construction in Section 3.1 to a family of power-affine metrics that comprises the two previous metrics, and in Section 3.2 to the wider family of deformed-affine metrics. Finally, we propose in Section 4 a theoretical approach to the choice of subfamilies of the deformed-affine metrics with relevant properties. \enlargethispage{8mm} \section{Affine-invariant versus polar-affine} The affine-invariant metric \cite{pennec_riemannian_2006,Lenglet06,Fletcher07} and the polar-affine metric \cite{Zhang18} are different but they both provide a Riemannian symmetric structure to the manifold of SPD matrices. Moreover, both are claimed to arise very naturally. The former uses only the action of the real general linear group ${GL}_n$ on covariance matrices. The latter uses the canonical left action of ${GL}_n$ on the left coset space ${GL}_n/O_n$ and the polar decomposition ${GL}_n\simeq{SPD}_n\times O_n$, where $O_n$ is the orthogonal group. Furthermore, the affine-invariant framework is exhaustive in the sense that it provides \textit{all} the metrics invariant under the chosen action \cite{Pennec08} whereas the polar-affine framework only provides \textit{one} invariant metric. In this work, we show that the two frameworks coincide on the same quotient manifold ${GL}_n/O_n$ but differ because of the choice of the diffeomorphism between this quotient and the manifold of SPD matrices.
In particular, we show that there exists a one-parameter family of polar-affine metrics and that any polar-affine metric is a square deformation of an affine-invariant metric. In 2.1 and 2.2, we build the affine-invariant metrics $g^1$ and the polar-affine metric $g^2$ in a unified way, using indices $i\in\{1,2\}$ to differentiate them. First, we give explicitly the action $\eta^i:{GL}_n\times{SPD}_n\longrightarrow{SPD}_n$ and the quotient diffeomorphism $\tau^i:{GL}_n/O_n\longrightarrow{SPD}_n$; then, we explain the construction of the orthogonal-invariant scalar product $g^i_{I_n}$ that characterizes the metric $g^i$; finally, we give the expression of the metrics $g^1$ and $g^2$. In 2.3, we summarize the results and we focus on the Riemannian symmetric structures of ${SPD}_n$. \subsection{The one-parameter family of affine-invariant metrics} \subsubsection{Affine action and quotient diffeomorphism} In many applications, one would like the analysis of covariance matrices to be invariant under affine transformations $X\longmapsto AX+B$ of the random vector $X\in\mathbb{R}^n$, where $A\in{GL}_n$ and $B\in\mathbb{R}^n$. Then the covariance matrix $\Sigma=\mathrm{Cov}(X)$ is modified under the transformation $\Sigma\longmapsto A\Sigma A^\top$. This transformation can be thought of as a transitive Lie group action $\eta^1$ of the general linear group on the manifold of SPD matrices: \begin{equation} \eta^1:\fun{{GL}_n\times{SPD}_n}{{SPD}_n}{(A,\Sigma)}{\eta^1_A(\Sigma)=A\Sigma A^\top}. \end{equation} This transitive action induces a diffeomorphism between the manifold ${SPD}_n$ and the quotient of the acting group ${GL}_n$ by the stabilizing group $\mathrm{Stab}^1(\Sigma)=\{A\in{GL}_n,\eta^1(A,\Sigma)=\Sigma\}$ at any point $\Sigma$. It reduces to the orthogonal group at $\Sigma=I_n$ so we get the quotient diffeomorphism $\tau^1$: \begin{equation} \tau^1:\fun{{GL}_n/O_n}{{SPD}_n}{[A]=A.O_n}{\eta^1(A,I_n)=AA^\top}.
\end{equation} \subsubsection{Orthogonal-invariant scalar product} We want to endow the manifold $\mathcal{M}={SPD}_n$ with a metric $g^1$ invariant under the affine action $\eta^1$, i.e. an affine-invariant metric. As the action is transitive, the metric at any point $\Sigma$ is characterized by the metric at one given point $I_n$. As the metric is affine-invariant, this scalar product $g_{I_n}$ has to be invariant under the stabilizing group of $I_n$. As a consequence, the metric $g^1$ is characterized by a scalar product $g^1_{I_n}$ on the tangent space $T_{I_n}\mathcal{M}$ that is invariant under the action of the orthogonal group. The tangent space $T_{I_n}\mathcal{M}$ is canonically identified with the vector space $\mathrm{Sym}_n$ of symmetric matrices by the differential of the canonical embedding $\mathcal{M}\hookrightarrow\mathrm{Sym}_n$. Thus we are now looking for all the scalar products on symmetric matrices that are invariant under the orthogonal group. Such scalar products are given by the following formula \cite{Pennec08}, where $\alpha>0$ and $\beta>-\frac{\alpha}{n}$: for all tangent vectors $V_1,W_1\in T_{I_n}\mathcal{M}$, $g^1_{I_n}(V_1,W_1)=\alpha\,\mathrm{tr}(V_1W_1)+\beta\,\mathrm{tr}(V_1)\mathrm{tr}(W_1)$. \subsubsection{Affine-invariant metrics} To give the expression of the metric, we need a linear isomorphism between the tangent space $T_\Sigma\mathcal{M}$ at any point $\Sigma$ and the tangent space $T_{I_n}\mathcal{M}$. Since the action $\eta^1_{\Sigma^{-1/2}}$ sends $\Sigma$ to $I_n$, its differential given by $T_\Sigma\eta^1_{\Sigma^{-1/2}}:V\in T_\Sigma\mathcal{M}\longmapsto V_1=\Sigma^{-1/2}V\Sigma^{-1/2}\in T_{I_n}\mathcal{M}$ is such a linear isomorphism. 
Combining this transformation with the expression of the metric at $I_n$ and reordering the terms in the trace, we get the general expression of the affine-invariant metric: for all tangent vectors $V,W\in T_\Sigma\mathcal{M}$, \begin{equation} g^1_\Sigma(V,W)=\alpha\,\mathrm{tr}(\Sigma^{-1}V\Sigma^{-1}W)+\beta\,\mathrm{tr}(\Sigma^{-1}V)\mathrm{tr}(\Sigma^{-1}W). \end{equation} As the geometry of the manifold is not much affected by a scalar multiplication of the metric, we often drop the parameter $\alpha$, as if it were equal to 1, and we consider that this is a one-parameter family indexed by $\beta>-\frac{1}{n}$. \subsection{The polar-affine metric} \subsubsection{Quotient diffeomorphism and affine action} In \cite{Zhang18}, instead of defining a metric directly on the manifold of SPD matrices, a metric is defined on the left coset space ${GL}_n/O_n=\{[A]=A.O_n,A\in{GL}_n\}$, on which the general linear group ${GL}_n$ naturally acts by the left action $\eta^0:(A,[A'])\longmapsto[AA']$. Then this metric is pushed forward on the manifold ${SPD}_n$ into the polar-affine metric $g^2$ thanks to the polar decomposition $\mathrm{pol}:A\in{GL}_n\longmapsto(\sqrt{AA^\top},{\sqrt{AA^\top}}^{-1}A)\in{SPD}_n\times O_n$ or more precisely by the quotient diffeomorphism $\tau^2$: \begin{equation} \tau^2:\fun{{GL}_n/O_n}{{SPD}_n}{A.O_n}{\sqrt{AA^\top}}. \end{equation} This quotient diffeomorphism induces an action of the general linear group ${GL}_n$ on the manifold ${SPD}_n$, under which the polar-affine metric will be invariant: \begin{equation} \eta^2:\fun{{GL}_n\times{SPD}_n}{{SPD}_n}{(A,\Sigma)}{\eta^2_A(\Sigma)=\sqrt{A\Sigma^2A^\top}}. \end{equation} It is characterized by $\eta^2(A,\tau^2(A'.O_n))=\tau^2(\eta^0(A,A'.O_n))$ for $A,A'\in{GL}_n$. \subsubsection{Orthogonal-invariant scalar product} The polar-affine metric $g^2$ is characterized by the scalar product $g^2_{I_n}$ on the tangent space $T_{I_n}\mathcal{M}$. 
This scalar product is obtained by pushforward of a scalar product $g^0_{[I_n]}$ on the tangent space $T_{[I_n]}({GL}_n/O_n)$. It is itself induced by the Frobenius scalar product on $\mathfrak{gl}_n=T_{I_n}{GL}_n$, defined by $\dotprod{v}{w}_\mathrm{Frob}=\mathrm{tr}(vw^\top)$, which is orthogonal-invariant. This is summarized in the following diagram. $$ \begin{array}{ccccc} {GL}_n & \overset{s}{\longrightarrow} & {GL}_n/O_n & \overset{\tau^2}{\longrightarrow} & \mathcal{M}={SPD}_n\\ \relax A & \longmapsto & A.O_n & \longmapsto & \sqrt{AA^\top}\\ \relax \dotprod{\cdot}{\cdot}_\mathrm{Frob} & & g^0_{[I_n]} & & g^2_{I_n} \end{array} $$ Finally, we get the scalar product $g^2_{I_n}(V_2,W_2)=\mathrm{tr}(V_2W_2)$ for $V_2,W_2\in T_{I_n}\mathcal{M}$. \subsubsection{Polar-affine metric} Since the action $\eta^2_{\Sigma^{-1}}$ sends $\Sigma$ to $I_n$, a linear isomorphism between tangent spaces is given by the differential of the action $T_\Sigma\eta^2_{\Sigma^{-1}}:V\in T_\Sigma\mathcal{M}\longrightarrow V_2=\Sigma^{-1}T_\Sigma\mathrm{pow}_2(V)\Sigma^{-1}\in T_{I_n}\mathcal{M}$. Combined with the above expression of the scalar product at $I_n$, we get the following expression for the polar-affine metric: for all tangent vectors $V,W\in T_\Sigma\mathcal{M}$, \begin{equation} g^2_\Sigma(V,W)=\mathrm{tr}(\Sigma^{-2}\,T_\Sigma\mathrm{pow}_2(V)\,\Sigma^{-2}\,T_\Sigma\mathrm{pow}_2(W)). \end{equation} \subsection{The underlying Riemannian symmetric manifold} In the affine-invariant framework, we started by defining the affine action $\eta^1$ (on covariance matrices) and we inferred the quotient diffeomorphism $\tau^1:({GL}_n/O_n,\eta^0)\longrightarrow({SPD}_n,\eta^1)$. In the polar-affine framework, we started by defining the quotient diffeomorphism $\tau^2:{GL}_n/O_n\longrightarrow{SPD}_n$ (corresponding to the polar decomposition) and we inferred the affine action $\eta^2$. The two actually correspond to the same underlying affine action $\eta^0$ on the quotient ${GL}_n/O_n$.
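This correspondence can be illustrated numerically (our own sketch with assumed conventions, not part of the paper): for the \(\alpha=1,\beta=0\) member, the affine-invariant distance \(d_1(\Sigma,\Lambda)=\lVert\log(\Sigma^{-1/2}\Lambda\Sigma^{-1/2})\rVert_F\) is invariant under \(\eta^1\), and the induced polar-affine distance \(d_2(\Sigma,\Lambda)=\frac{1}{2}d_1(\Sigma^2,\Lambda^2)\) is invariant under \(\eta^2\).

```python
# Numerical sanity check (illustration, not from the paper): the affine-
# invariant distance d1 is invariant under Sigma -> M Sigma M^T, and the
# polar-affine distance d2(S, L) = (1/2) d1(S^2, L^2) is invariant under
# the action eta2_M(S) = sqrtm(M S^2 M^T).  Assumes numpy and scipy.
import numpy as np
from scipy.linalg import sqrtm, logm, inv

rng = np.random.default_rng(0)

def spd(n):
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)          # well-conditioned SPD matrix

def d1(S, L):
    """Affine-invariant distance (alpha = 1, beta = 0 member of the family)."""
    iS = inv(sqrtm(S))
    return np.linalg.norm(logm(iS @ L @ iS), 'fro')

def d2(S, L):
    """Polar-affine distance: half the affine-invariant distance of the squares."""
    return 0.5 * d1(S @ S, L @ L)

S, L = spd(4), spd(4)
M = rng.normal(size=(4, 4))                 # a generic invertible matrix

# invariance of d1 under the action eta1_M : S -> M S M^T
assert np.isclose(d1(M @ S @ M.T, M @ L @ M.T), d1(S, L))

# invariance of d2 under the action eta2_M : S -> sqrtm(M S^2 M^T)
eta2 = lambda A, X: sqrtm(A @ X @ X @ A.T)
assert np.isclose(d2(eta2(M, S), eta2(M, L)), d2(S, L))
```

The second assertion is exactly the square deformation at work: squaring \(\eta^2_M(\Sigma)\) gives \(M\Sigma^2M^\top\), so the invariance of \(d_2\) reduces to the affine-invariance of \(d_1\).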
Then there is also a one-parameter family of affine-invariant metrics on the quotient ${GL}_n/O_n$ and a one-parameter family of polar-affine metrics on the manifold ${SPD}_n$. This is stated in the following theorems. \begin{theorem}[Polar-affine is a square deformation of affine-invariant] \begin{enumerate} \item There exists a one-parameter family of affine-invariant metrics on the quotient ${GL}_n/O_n$. \item This family is in bijection with the one-parameter family of affine-invariant metrics on the manifold of SPD matrices thanks to the diffeomorphism $\tau^1:A.O_n\longmapsto AA^\top$. The corresponding action is $\eta^1:(A,\Sigma)\longmapsto A\Sigma A^\top$. \item This family is also in bijection with a one-parameter family of polar-affine metrics on the manifold of SPD matrices thanks to the diffeomorphism $\tau^2:A.O_n\longmapsto\sqrt{AA^\top}$. The corresponding action is $\eta^2:(A,\Sigma)\longmapsto\sqrt{A\Sigma^2A^\top}$. \item The diffeomorphism $\mathrm{pow}_2:\fun{({SPD}_n,4g^2)}{({SPD}_n,g^1)}{\Sigma}{\Sigma^2}$ is an isometry between polar-affine metrics $g^2$ and affine-invariant metrics $g^1$. \end{enumerate} \end{theorem} In other words, performing statistical analyses (e.g. a principal component analysis) with the polar-affine metric on covariance matrices is equivalent to performing these statistical analyses with the classical affine-invariant metric on the \textit{square} of our covariance matrix dataset.\\ All the metrics mentioned in Theorem 1 endow their respective space with a structure of a Riemannian symmetric manifold. We recall the definition of that geometric structure and we give the formal statement. \begin{definition}[Symmetric manifold, Riemannian symmetric manifold] A manifold $\mathcal{M}$ is symmetric if it is endowed with a family of involutions $(s_x)_{x\in\mathcal{M}}$ called symmetries such that $s_x\circ s_y\circ s_x=s_{s_x(y)}$ and $x$ is an isolated fixed point of $s_x$. 
It implies that $T_xs_x=-\mathrm{Id}_{T_x\mathcal{M}}$. A Riemannian manifold $(\mathcal{M},g)$ is symmetric if it is endowed with a family of symmetries that are isometries of $\mathcal{M}$, i.e. that preserve the metric: $g_{s_x(y)}(T_ys_x(v),T_ys_x(w))=g_y(v,w)$ for $v,w\in T_y\mathcal{M}$. \end{definition} \begin{theorem}[Riemannian symmetric structure on ${SPD}_n$] The Riemannian manifold $({SPD}_n,g^1)$, where $g^1$ is an affine-invariant metric, is a Riemannian symmetric space with symmetry $s_\Sigma:\Lambda\longmapsto\Sigma\Lambda^{-1}\Sigma$. The Riemannian manifold $({SPD}_n,g^2)$, where $g^2$ is a polar-affine metric, is also a Riemannian symmetric space whose symmetry is $s_\Sigma:\Lambda\longmapsto\sqrt{\Sigma^2\Lambda^{-2}\Sigma^2}$. \end{theorem} This square deformation of affine-invariant metrics can be generalized into a power deformation to build a family of affine-invariant metrics that we call power-affine metrics. It can even be generalized into any diffeomorphic deformation of SPD matrices. We now develop these families of affine-invariant metrics. \section{Families of affine-invariant metrics} There is a theoretical interest in building families comprising some of the known metrics on SPD matrices to understand how one can be deformed into another. For example, power-Euclidean metrics \cite{Dryden10} comprise the Euclidean metric and tend to the log-Euclidean metric \cite{Fillard07} when the power tends to 0. We recall that the log-Euclidean metric is the pullback of the Euclidean metric on symmetric matrices by the symmetric matrix logarithm $\log:{SPD}_n\longrightarrow\mathrm{Sym}_n$. There is also a practical interest in defining families of metrics: for example, it is possible to optimize the power to better fit the data with a certain distribution \cite{Dryden10}.
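The limiting behaviour just recalled can be checked numerically (a sketch under assumed conventions, not taken from the cited works: we take the power-Euclidean distance as \(d_\theta(\Sigma,\Lambda)=\frac{1}{\theta}\lVert\Sigma^\theta-\Lambda^\theta\rVert_F\) and the log-Euclidean distance as \(\lVert\log\Sigma-\log\Lambda\rVert_F\)):

```python
# Power-Euclidean distances reduce to the Euclidean distance at theta = 1
# and approach the log-Euclidean distance as theta -> 0.
# (Illustrative conventions, assumes numpy and scipy.)
import numpy as np
from scipy.linalg import logm, expm

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)); S = A @ A.T + 3 * np.eye(3)
B = rng.normal(size=(3, 3)); L = B @ B.T + 3 * np.eye(3)

mpow = lambda X, th: expm(th * logm(X))     # matrix power X^theta for SPD X
d_pe = lambda X, Y, th: np.linalg.norm(mpow(X, th) - mpow(Y, th), 'fro') / th
d_le = np.linalg.norm(logm(S) - logm(L), 'fro')

# theta = 1: plain Euclidean distance
assert np.isclose(d_pe(S, L, 1.0), np.linalg.norm(S - L, 'fro'))

# theta -> 0: approaches the log-Euclidean distance
assert abs(d_pe(S, L, 1e-4) - d_le) < 1e-2 * d_le
```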
First, we generalize the square deformation by deforming the affine-invariant metrics with a power function $\mathrm{pow}_\theta:\Sigma\in{SPD}_n\longmapsto\Sigma^\theta=\exp(\theta\log\Sigma)$ to define the power-affine metrics. Then we deform the affine-invariant metrics by any diffeomorphism $f:{SPD}_n\longrightarrow{SPD}_n$ to define the deformed-affine metrics. \subsection{The two-parameter family of power-affine metrics} We recall that $\mathcal{M}={SPD}_n$ is the manifold of SPD matrices. For a power $\theta\ne 0$, we define the $\theta$-power-affine metric $g^\theta$ as the pullback by the diffeomorphism $\mathrm{pow}_\theta:\Sigma\longmapsto\Sigma^\theta$ of the affine-invariant metric, scaled by a factor ${1}/{\theta^2}$. Equivalently, the $\theta$-power-affine metric is the metric invariant under the $\theta$-affine action $\eta^\theta:(A,\Sigma)\longmapsto(A\Sigma^\theta A^\top)^{1/\theta}$ whose scalar product at $I_n$ coincides with the scalar product $g^1_{I_n}:(V,W)\longmapsto\alpha\,\mathrm{tr}(VW)+\beta\,\mathrm{tr}(V)\mathrm{tr}(W)$. The $\theta$-affine action induces an isomorphism $V\in T_\Sigma M\longmapsto V_\theta=\frac{1}{\theta}\Sigma^{-\theta/2}\,\partial_V\mathrm{pow}_\theta(\Sigma)\,\Sigma^{-\theta/2}\in T_{I_n}\mathcal{M}$ between tangent spaces. The $\theta$-power-affine metric is given by: \begin{equation} g^\theta_\Sigma(V,W)=\alpha\,\mathrm{tr}(V_\theta W_\theta)+\beta\,\mathrm{tr}(V_\theta)\mathrm{tr}(W_\theta). \end{equation} Because a scaling factor is of low importance, we can set $\alpha=1$ and consider that this family is a two-parameter family indexed by $\beta>-{1}/{n}$ and $\theta\ne 0$. We have chosen to define the metric $g^\theta$ so that the power function $\mathrm{pow}_\theta:(\mathcal{M},\theta^2g^\theta)\longrightarrow(\mathcal{M},g^1)$ is an isometry. Why this factor $\theta^2$? 
The first reason is for consistency with previous work: the analogous power-Euclidean metrics have been defined with that scaling \cite{Dryden10}. The second reason is for continuity: when the power tends to 0, the power-affine metric tends to the log-Euclidean metric. \begin{theorem}[Power-affine tends to log-Euclidean for $\theta\rightarrow 0$] Let $\Sigma\in\mathcal{M}$ and $V,W\in T_\Sigma\mathcal{M}$. Then $\lim_{\theta\rightarrow 0}{g^\theta_\Sigma(V,W)}=g^{LE}_\Sigma(V,W)$ where the log-Euclidean metric is $g^{LE}_\Sigma(V,W)=\alpha\, \mathrm{tr}(\partial_V\!\log(\Sigma)\,\partial_W\!\log(\Sigma))+\beta\,\mathrm{tr}(\partial_V\!\log(\Sigma))\mathrm{tr}(\partial_W\!\log(\Sigma))$. \end{theorem} \subsection{The continuum of deformed-affine metrics} In the following, we call a diffeomorphism $f:{SPD}_n\longrightarrow{SPD}_n$ a deformation. We define the $f$-deformed-affine metric $g^f$ as the pullback by the diffeomorphism $f$ of the affine-invariant metric, so that $f:(\mathcal{M},g^f)\longrightarrow(\mathcal{M},g^1)$ is an isometry. (Regarding the discussion before Theorem 3, $g^{\mathrm{pow}_\theta}=\theta^2g^\theta$.) The $f$-deformed-affine metric is invariant under the $f$-affine action $\eta^f:(A,\Sigma)\longmapsto f^{-1}(Af(\Sigma)A^\top)$. It is given by $g^f_\Sigma(V,W)=\alpha\mathrm{tr}(V_fW_f)+\beta\mathrm{tr}(V_f)\mathrm{tr}(W_f)$ where $V_f=f(\Sigma)^{-1/2}\, \partial_Vf(\Sigma) \, f(\Sigma)^{-1/2}$. The basic Riemannian operations are obtained by pulling back the affine-invariant operations.
\begin{theorem}[Basic Riemannian operations] For SPD matrices $\Sigma,\Lambda\in\mathcal{M}$ and a tangent vector $V\in T_\Sigma\mathcal{M}$, we have for all $t\in\mathbb{R}$: \begin{center} $\begin{array}{|c|l|} \hline \mathrm{Geodesics} & \gamma^f_{(\Sigma,V)}(t)=f^{-1}(f(\Sigma)^{1/2}\exp(tf(\Sigma)^{-1/2}T_\Sigma f(V)f(\Sigma)^{-1/2})f(\Sigma)^{1/2}) \\ \hline \mathrm{Logarithm} & \mathrm{Log}^f_\Sigma(\Lambda)=(T_\Sigma f)^{-1}(f(\Sigma)^{1/2}\log(f(\Sigma)^{-1/2}f(\Lambda)f(\Sigma)^{-1/2})f(\Sigma)^{1/2})\\ \hline \mathrm{Distance} & d_f(\Sigma,\Lambda)=d_1(f(\Sigma),f(\Lambda))=\left(\sum_{k=1}^n{(\log\lambda_k)^2}\right)^{1/2}\\ \hline \end{array}$ \end{center} where $\lambda_1,...,\lambda_n$ are the eigenvalues of the symmetric matrix $f(\Sigma)^{-1/2}f(\Lambda)f(\Sigma)^{-1/2}$. \end{theorem} All tensors are modified thanks to the pushforward $f_*$ and pullback $f^*$ operators, e.g. the Riemann tensor of the $f$-deformed metric is $R^f(X,Y)Z=f^*(R(f_*X,f_*Y)(f_*Z))$. As a consequence, the deformation $f$ does not affect the values taken by the sectional curvature and these metrics are negatively curved. From a computational point of view, it is very interesting to notice that the identification $\mathcal{L}'_\Sigma:V\in T_\Sigma\mathcal{M}\longmapsto V'=T_\Sigma f(V)\in T_{f(\Sigma)}\mathcal{M}$ simplifies the above expressions by removing the differential $T_\Sigma f$. This change of basis avoids numerical approximations of the differential, but one must keep in mind that $V\ne V'$ in general. This identification was already used for the polar-affine metric ($f=\mathrm{pow}_2$) in \cite{Zhang18} without explicitly mentioning it. \section{Interesting subfamilies of deformed-affine metrics} Some deformations have already been used in applications.
For example, the family $A_r:\mathrm{diag}(\lambda_1,\lambda_2,\lambda_3)\longmapsto\mathrm{diag}(a_1(r)\lambda_1,a_2(r)\lambda_2,a_3(r)\lambda_3)$ where $\lambda_1\geqslant\lambda_2\geqslant\lambda_3>0$ was proposed to map the anisotropy of water measured by diffusion tensors to that of the diffusion of tumor cells in tumor growth modeling \cite{Jbabdi05}. The inverse function $\mathrm{inv}=\mathrm{pow}_{-1}:\Sigma\longmapsto\Sigma^{-1}$ or the adjugate function $\mathrm{adj}:\Sigma\longmapsto\det(\Sigma)\Sigma^{-1}$ were also proposed in the context of DTI \cite{Lenglet04,Fuster16}. Let us find some properties satisfied by some of these examples. We define the following subsets of the set $\mathcal{F}=\mathrm{Diff}({SPD}_n)$ of diffeomorphisms of ${SPD}_n$. (Spectral) $\mathcal{S}=\{f\in\mathcal{F}|\forall U\in O_n,\forall D\in\mathrm{Diag}^{++}_n,f(UDU^\top)=Uf(D)U^\top\}$. Spectral deformations are characterized by their values on sorted diagonal matrices so the deformations described above are spectral: $A_r,\mathrm{adj},\mathrm{pow}_\theta\in\mathcal{S}$.\\ For a spectral deformation $f\in\mathcal{S}$, $f(\mathbb{R}_+^*I_n)=\mathbb{R}_+^*I_n$ so we can uniquely define a smooth diffeomorphism $f_0:\mathbb{R}_+^*\longrightarrow\mathbb{R}_+^*$ by $f(\lambda I_n)=f_0(\lambda)I_n$. (Univariate) $\mathcal{U}=\{f\in\mathcal{S}|f(\mathrm{diag}(\lambda_1,...,\lambda_n))=\mathrm{diag}(f_0(\lambda_1),...,f_0(\lambda_n))\}$. The power functions are univariate. Any polynomial $P=\lambda X\prod_{i=1}^p(X-a_i)$ vanishing at 0, with non-positive roots $a_i\leqslant 0$ and positive coefficient $\lambda>0$, also gives rise to a univariate deformation. (Diagonally-stable) $\mathcal{D}=\{f\in\mathcal{F}|f(\mathrm{Diag}^{++}_n)\subset\mathrm{Diag}^{++}_n\}$.
The deformations described above $A_r,\mathrm{adj},\mathrm{pow}_\theta$ and the univariate deformations are clearly diagonally-stable: $A_r,\mathrm{adj},\mathrm{pow}_\theta\in\mathcal{D}$ and $\mathcal{U}\subset\mathcal{D}\cap\mathcal{S}$. (Log-linear) $\mathcal{L}=\{f\in\mathcal{F}|\log_*f=\log\circ\, f\circ\exp\,\,\text{is linear}\}$. The adjugate function and the power functions are log-linear deformations. More generally, the functions $f_{\lambda,\mu}:\Sigma\longmapsto(\det\Sigma)^{\frac{\lambda-\mu}{n}}\Sigma^\mu$ for $\lambda,\mu\ne0$, are log-linear deformations. We can notice that the $f_{\lambda,\mu}$-deformed-affine metric belongs to the one-parameter family of $\mu$-power-affine metrics with $\beta=\frac{\lambda^2-\mu^2}{n\mu^2}>-\frac{1}{n}$. The deformations $f_{\lambda,\mu}$ just introduced are also spectral and the following result states that they are the only spectral log-linear deformations. \begin{theorem}[Characterization of the power-affine metrics] If $f\in\mathcal{S}\cap\mathcal{L}$ is a spectral log-linear diffeomorphism, then there exist real numbers $\lambda,\mu\in\mathbb{R}^*$ such that $f=f_{\lambda,\mu}$ and the $f$-deformed-affine metric is a $\mu$-power-affine metric. \end{theorem} The interest of this theorem comes from the fact that the group of spectral deformations and the vector space of log-linear deformations have large dimensions while their intersection is reduced to a two-parameter family. This strong result is a consequence of the theory of Lie group representations because the combination of the spectral property and the linearity makes $\log_*f$ a homomorphism of $O_n$-modules (see the sketch of proof below).\\ \textit{Sketch of the proof.} Thanks to Lie group representation theory, the linear map $F=\log_*f:{Sym}_n\longrightarrow{Sym}_n$ appears as a homomorphism of $O_n$-modules for the representation $\rho:P\in O_n\longmapsto(V\longmapsto PVP^\top)\in GL({Sym}_n)$. 
Once shown that ${Sym}_n=\mathrm{span}(I_n)\oplus\ker{\mathrm{tr}}$ is a $\rho$-irreducible decomposition of ${Sym}_n$ and that each subspace is stable under $F$, then according to Schur's lemma, $F$ is a homothety on each subspace, i.e. there exist $\lambda,\mu\in\mathbb{R}^*$ such that for $V\in\mathrm{Sym}_n$, $F(V)=\lambda\frac{\mathrm{tr}(V)}{n}I_n+\mu\left(V-\frac{\mathrm{tr}(V)}{n}I_n\right)=\log_*f_{\lambda,\mu}(V)$, so $f=f_{\lambda,\mu}$.\\ \section{Conclusion} We have shown that the polar-affine metric is a square deformation of the affine-invariant metric and that this process can be generalized to any power function or any diffeomorphism on SPD matrices. It follows that the principles of invariance and symmetry are not sufficient to distinguish all these metrics, so we should find other principles to limit the scope of acceptable metrics in statistical computing. We have proposed a few characteristics (spectral, diagonally-stable, univariate, log-linear) that include some functions on tensors previously introduced. Future work will focus on studying the effect of such deformations on real data and on extending this family of metrics to positive semi-definite matrices. Finding families that comprise two non-cousin metrics could also help understand the differences between them and bring principles to make choices in applications. \paragraph{Acknowledgements.} This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant G-Statistics agreement No 786854). This work has been supported by the French government, through the UCAJEDI Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-15-IDEX-01. \end{document}
\begin{document} \title{Large Degree Asymptotics of Generalized Bernoulli and Euler Polynomials} \author{Jos\'e Luis L\'opez \\ Departamento de Mat\'ematica e Inform\'atica,\\ Universidad P\'ublica de Navarra, 31006 Pamplona, Spain\\ \and Nico M. Temme\\ CWI, Science Park 123, 1098 GX Amsterdam, The Netherlands. \\ { \small e-mail: {\tt [email protected], [email protected]}} } \date{\today} \maketitle \begin{abstract} \noindent Asymptotic expansions are given for large values of $n$ of the generalized Bernoulli polynomials $B_n^\mu(z)$ and Euler polynomials $E_n^\mu(z)$. In a previous paper, L\'opez and Temme (1999), these polynomials were considered for large values of $\mu$, with $n$ fixed. In the literature no complete description of the large $n$ asymptotics of the considered polynomials is available. We give the general expansions, summarize known results of special cases and give more details about these results. We use two-point Taylor expansions for obtaining a new type of expansion. The analysis is based on contour integrals that follow from the generating functions of the polynomials. \end{abstract} \vskip 0.8cm \noindent {\small 2000 Mathematics Subject Classification: 11B68, 30E10, 33E20, 41A60. \par\noindent Keywords \& Phrases: asymptotic expansions, generalized Bernoulli polynomials, generalized Euler polynomials. } \section{Introduction}\label{sec:intro} Generalized Bernoulli and Euler polynomials of degree $n$, complex order $\mu$ and complex argument $z$, denoted respectively by $B_n^{\mu}(z)$ and $E_n^{\mu}(z)$, can be defined by their generating functions. We have \cite{Milne:1951:CFD,Temme:1996:SFI} \begin{equation}\label{Berdef} \frac{w^{\mu}e^{wz}}{ (e^w-1)^{\mu}}= \sum_{n=0}^\infty \frac {B_n^{\mu}(z)}{ n!}w^n, \quad \vert w\vert<2\pi, \end{equation} and \begin{equation}\label{Euldef} \frac{2^{\mu}e^{wz}}{ (e^w+1)^{\mu}}= \sum_{n=0}^\infty \frac{E_n^{\mu}(z)}{ n!}w^n, \quad \vert w\vert<\pi.
\end{equation} These polynomials play an important role in the calculus of finite differences. In fact, the coefficients in all the usual central-difference formulae for interpolation, numerical differentiation and integration, and differences in terms of derivatives can be expressed in terms of these polynomials (see \cite{Milne:1951:CFD,Norlund:1924:VUD}). An explicit formula for the generalized Bernoulli polynomials can be found in \cite{Srivastava:1988:EFB}. Properties and explicit formulas for the generalized Bernoulli and Euler numbers can be found in \cite{Lundell:1987:OTD,Todorov:1985:UFS,Todorov:1993:ERF} and in the references cited there. In a previous paper \cite{Lopez:1999:HPA} we considered these polynomials for large values of $\mu$, with $n$ fixed, and in the present paper we consider $n$ as the large parameter, with the other parameters fixed. We summarize known results from the literature for integer values of $\mu$, and give more details about these results. We describe the method for obtaining the coefficients in the expansion for general $\mu$. Finally, we use two-point Taylor expansions for obtaining new types of expansions for general $\mu$. The analysis is based on contour integrals that follow from the generating functions of the polynomials. \section{The generalized Bernoulli polynomials}\label{sec:Ber} Three different cases arise, depending on whether $\mu=0,-1,-2,\ldots$, $\mu=1,2,3,\ldots$, or $\mu$ is otherwise real or complex. Our approach is based on the Cauchy integral \begin{equation}\label{Berint} B_n^{\mu}(z)=\frac{n!}{ 2\pi i} \int_{{\cal C}}{w^{\mu}e^{wz}\over(e^w-1)^{\mu}} \frac{dw}{ w^{n+1}}, \end{equation} where ${\cal C}$ is a circle around the origin, with radius less than $2\pi$. This follows from \eqref{Berdef}.
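As an aside, the Cauchy integral \eqref{Berint} also provides a convenient numerical check on everything that follows: on a circle $|w|=r<2\pi$ the trapezoidal rule converges very rapidly for periodic analytic integrands. The sketch below (plain Python; the function name, the radius and the number of nodes are our choices, not the paper's) evaluates \eqref{Berint} for integer $\mu$, where no branch cuts arise, and can be compared with the classical $B_3(z)=z^3-\tfrac32z^2+\tfrac12z$.

```python
import cmath
from math import factorial, pi

def bernoulli_cauchy(n, z, mu=1, radius=1.0, points=64):
    """Trapezoidal-rule approximation of the Cauchy integral (Berint):
    B_n^mu(z) = n!/(2 pi i) * oint_C w^mu e^{wz} (e^w - 1)^{-mu} dw / w^{n+1},
    with C the circle |w| = radius < 2*pi (integer mu only, so the
    integrand is single-valued on C)."""
    acc = 0j
    for k in range(points):
        w = radius * cmath.exp(2j * pi * k / points)   # node on the circle
        f = (w / (cmath.exp(w) - 1)) ** mu * cmath.exp(w * z)
        acc += f / w ** n   # dw = i*w*dtheta cancels one power of w
    return factorial(n) * (acc / points).real

# Compare with the classical cubic Bernoulli polynomial (mu = 1):
z = 0.7
approx = bernoulli_cauchy(3, z)
exact = z**3 - 1.5 * z**2 + 0.5 * z   # B_3(z)
```

With 64 nodes the quadrature error is already far below double precision for small $n$: the aliasing error of the trapezoidal rule involves the coefficient $B_{n+64}(z)/(n+64)!$, which is of order $(2\pi)^{-n-64}$.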
\subsection{Asymptotic form when \protectbold{\mu=0,-1,-2,\ldots}}\label{subsec:Ber1} In this case the generating series in \eqref{Berdef} converges for all finite values of $z$, and, hence, the polynomials $B_n^{\mu}(z)$ have a completely different behavior from that in the general case. We first follow the approach given in \cite{Weinmann:1963:AEB}, and observe that when $\mu$ is a negative integer or zero, say $\mu=-m$ $(m=0,1,2,\ldots)$, we can express $B_n^{\mu}(z)$ in terms of a finite sum. We expand by the binomial theorem \begin{equation}\label{binom} \left(e^w-1\right)^m=\sum_{r=0}^m(-1)^{m-r}{m\choose r}e^{rw}. \end{equation} This gives \begin{equation}\label{Bersum} B_n^{-m}(z)=\frac{n!}{(n+m)!} \sum_{r=0}^m (-1)^{m-r}{m\choose r} (z+r)^{n+m}. \end{equation} For any given $m\in {\mathbb N}$ and complex $z$ only the term or terms with the largest values of $|z+r|$ will give a large contribution to the sum in \eqref{Bersum}, the other terms being exponentially small in comparison. We conclude that \eqref{Bersum} gives the asymptotic form as $n\to\infty$, with $\mu=-m$ and $z$ fixed. In particular, when $z>0$, the term with index $r=m$ is maximal, and we have \begin{equation}\label{Bersum0} B_n^{-m}(z)=\frac{n!}{(n+m)!} (z+m)^{n+m}\left[1+{\cal O}\left(\frac{z+m-1}{z+m}\right)^{n+m}\right]. \end{equation} The error term can also be estimated by ${\cal O}(\exp(-(n+m)/(z+m)))$, which is indeed exponentially small compared with unity. For general complex $z=x+iy$ and $x>-m/2$ the term with index $r=m$ is again maximal and the same estimate as in \eqref{Bersum0} is valid. When $x=-m/2$ the terms with $r=0$ and $r=m$ give the maximal contributions, and we have \begin{equation}\label{Bersum1} B_n^{-m}(z)\sim\frac{n!}{(n+m)!} \left[(-1)^m(-\tfrac12m+iy)^{n+m}+(\tfrac12m+iy)^{n+m} \right].
\end{equation} When $x<-m/2$ the term with index $r=0$ is maximal, and we have \begin{equation}\label{Bersum2} B_n^{-m}(z)=(-1)^m\frac{n!}{(n+m)!} z^{n+m}\left[1+{\cal O}\left(\frac{z+1}{z}\right)^{n+m}\right]. \end{equation} By using the saddle point method we can obtain an estimate similar to the one in \eqref{Bersum0}. We write \begin{equation}\label{Bersum3} B_n^{-m}(z)=\frac{n!}{2\pi i}\int_{{\cal C}}\left(1-e^{-w}\right)^m e^{\phi(w)}\frac{dw}{w}, \end{equation} where $\phi(w)=(z+m)w-(n+m)\ln w$. This function has a saddle point at $w_0=(n+m)/(z+m)$, and when $\Re w_0$ is large and positive we replace $(1-e^{-w})$ by its value at this point. Then we have \renewcommand{\arraystretch}{1.75} \begin{equation}\label{Bersum4} \begin{array}{lll} B_n^{-m}(z)&\sim&\dsp{\left(1-e^{-w_0}\right)^m\frac{n!}{2\pi i}\int_{{\cal C}} e^{(z+m)w}\frac{dw}{w^{n+m+1}}}\\ &=& \dsp{\left(1-e^{-w_0}\right)^m \frac{n!}{(n+m)!}(z+m)^{n+m},} \end{array} \renewcommand{\arraystretch}{1.0} \end{equation} in which $e^{-w_0}$ is exponentially small. When $\Re w_0$ is not positive we can modify this method to obtain the estimates given in \eqref{Bersum1} and \eqref{Bersum2}. Also, we can carry out further steps in the saddle point analysis, and show that the next terms in the expansion are exponentially small compared with unity, similar to what is shown in \eqref{Bersum0}. However, the representation in \eqref{Bersum} describes the asymptotic behavior very elegantly. \subsection{Asymptotic form when \protectbold{\mu= 1, 2,3,\ldots}}\label{subsec:Ber2} We write $\mu=m$. The starting point is the expansion for $m=1$: \begin{equation}\label{Ber1ser} B_n^{1}(z)=B_n(z)=-n!\sum_{{k=-\infty \atop k\ne0}}^\infty\frac{e^{2\pi ikz}}{(2\pi ik)^n}, \end{equation} which for $z\in(0,1)$ can be viewed as a Fourier expansion of $B_n^{1}(z)$. This expansion follows from taking the radius of the circle ${\cal C}$ in \eqref{Berint} equal to $(2K+1)\pi$ ($K$ an integer).
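Before completing the derivation of \eqref{Ber1ser}, we note that the finite representation \eqref{Bersum} of the previous subsection, and the dominance of its $r=m$ term for $z>0$ expressed by \eqref{Bersum0}, are easy to probe numerically. A minimal sketch (plain Python; the helper names are ours):

```python
from math import comb, factorial

def bern_neg_order(n, m, z):
    # The finite sum (Bersum): B_n^{-m}(z) for m = 0, 1, 2, ...
    return factorial(n) / factorial(n + m) * sum(
        (-1) ** (m - r) * comb(m, r) * (z + r) ** (n + m) for r in range(m + 1)
    )

def bern_neg_order_leading(n, m, z):
    # The leading term in (Bersum0): for z > 0 the r = m term dominates.
    return factorial(n) / factorial(n + m) * (z + m) ** (n + m)
```

For $m=0$ the sum reduces to $B_n^0(z)=z^n$, and already for moderate $n$ the ratio of \eqref{Bersum} to its leading term is $1+{\cal O}\bigl((\frac{z+m-1}{z+m})^{n+m}\bigr)$, exponentially close to $1$.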
Taking $K$ large, we take into account the poles of the integrand at $w=2\pi ik$ ($k=\pm1,\pm2,\ldots$), and calculate the residues at these poles. The integral around the circle ${\cal C}$ tends to zero as $K\to\infty$, provided $n>1$ and $0\le z\le 1$. This gives the expansions ($n=1,2,3,\ldots; 0\le z\le1$) \begin{equation}\label{Bersereven} B_{2n}(z)=2(-1)^{n+1}(2n)!\,\sum_{k=1}^\infty\frac{\cos(2\pi kz)}{(2\pi k)^{2n}}, \end{equation} and \begin{equation}\label{Berserodd} B_{2n+1}(z)=2(-1)^{n+1}(2n+1)!\,\sum_{k=1}^\infty\frac{\sin(2\pi kz)}{(2\pi k)^{2n+1}}. \end{equation} In \eqref{Berserodd} we can take $n=0$, provided $0<z<1$. This gives the well-known Fourier expansion of $B_{1}(z)=z-1/2$, $0<z<1$. In \eqref{Ber1ser} only the terms with $k=\pm1$ are relevant for the asymptotic behavior, and we obtain for fixed complex $z$ \begin{equation}\label{Ber1as} B_n^{1}(z)=\frac{2(-1)^{n+1}n!}{(2\pi)^n}\left[\cos\left(2\pi z+\tfrac12\pi n\right) +{\cal O}\left(2^{-n}\right)\right],\quad n\to\infty. \end{equation} For general fixed real or complex $z$ the series in \eqref{Bersereven} and \eqref{Berserodd} can be viewed as asymptotic expansions for large $n$, as easily follows from the ratio test. For general $\mu=m=1,2,3,\ldots$ an expansion similar to \eqref{Ber1ser} can be given. In that case the poles of the integrand in \eqref{Berint} are of higher order. We can write \begin{equation}\label{Bermser} B_n^{m}(z)=-n!\sum_{{k=-\infty \atop k\ne0}}^\infty \beta_k^m(n,z)\frac{e^{2\pi ikz}}{(2\pi ik)^n}, \end{equation} where $\beta_k^1(n,z)=1, \forall k$. This is a Fourier expansion for $z\in(0,1)$ when $m<n$. For other values of $z$ it can be used as an asymptotic expansion for large $n$. An explicit form of $\beta_k^{m}(n,z)$ follows from calculating the residues of the poles at $2\pi ik$ of order $m$ of the integrand in \eqref{Berint}.
For this we compute the coefficient $c_{m-1}$ in the expansion \begin{equation}\label{Berres1} \frac{(w-2\pi ik)^m w^m e^{zw}}{(e^w-1)^mw^{n+1}}=\sum_{r=0}^\infty c_r (w-2\pi ik)^r, \end{equation} from which we obtain \begin{equation}\label{Bermser1a} \beta_k^m(n,z)\frac{e^{2\pi ikz}}{(2\pi ik)^n}=c_{m-1}. \end{equation} We substitute $w=s+2\pi ik$ and write the expansion as \begin{equation}\label{Berres2} e^{2\pi ikz}\frac{s^m e^{zs}}{(e^s-1)^m(s+2\pi ik)^{n+1-m}}=\sum_{r=0}^\infty c_r s^r. \end{equation} We use \eqref{Berdef} and write the left-hand side in the form \begin{equation}\label{Berres2a} \frac{e^{2\pi ikz}}{(2\pi ik)^{n+1-m}}\sum_{\nu=0}^\infty \frac {B_\nu^{m}(z)}{ n!}s^\nu \sum_{\nu=0}^\infty{m-n-1\choose \nu}\frac{s^\nu}{(2\pi ik)^\nu}. \end{equation} Hence, $c_r$ of \eqref{Berres2} can be written as \begin{equation}\label{Berres2b} c_{r}=\frac{e^{2\pi ikz}}{(2\pi ik)^{n+1-m}}\sum_{\nu=0}^{r} \frac{B_\nu^m(z)}{\nu!}{m-n-1\choose r-\nu}(2\pi ik)^{\nu-r}, \end{equation} and we conclude that \begin{equation}\label{Berres3} c_{m-1}=\frac{e^{2\pi ikz}}{(2\pi ik)^{n}}\sum_{\nu=0}^{m-1} \frac{B_\nu^m(z)}{\nu!}{m-n-1\choose m-1-\nu}(2\pi ik)^{\nu}. \end{equation} It follows from \eqref{Bermser1a} that \begin{equation}\label{Berres4} \beta_k^m(n,z)=\sum_{\nu=0}^{m-1} \frac{B_\nu^m(z)}{\nu!} {m-n-1\choose m-1-\nu}(2\pi ik)^{\nu}. \end{equation} To avoid binomials with negative integers, and to extract the main asymptotic factor, we write \begin{equation}\label{Berres5} \beta_k^m(n,z)=(-1)^{m-1}{n-1\choose m-1}\sum_{\nu=0}^{m-1} B_\nu^m(z) {m-1\choose \nu}\frac{(n-\nu-1)!}{(n-1)!}(-2\pi ik)^{\nu}. \end{equation} For large $n$ the main term occurs for $\nu=0$. We have \begin{equation}\label{Berres6} \beta_k^m(n,z)=\frac{(-1)^{m-1}n^{m-1}}{(m-1)!}\left[1+{\cal O}(n^{-1})\right]. 
\end{equation} Observing that, as in \eqref{Ber1ser}, only the terms with $k=\pm1$ are relevant for the asymptotic behavior, we obtain \begin{equation}\label{Bermser2} B_n^{m}(z)=\frac{(-1)^{n+1}n!}{(2\pi)^n}\left[\beta_1^{m}(n,z)e^{2\pi iz+\frac12\pi in}+ \beta_{-1}^{m}(n,z)e^{-2\pi iz-\frac12\pi in}+\ldots\right], \end{equation} and by using \eqref{Berres5} we obtain for fixed $m$ and complex $z$ (cf. \eqref{Ber1as}) \renewcommand{1.0}{1.5} \begin{equation}\label{Bermas1} \begin{array}{ll} \dsp{B_n^{m}(z)=\frac{2(-1)^{m+n}}{(2\pi)^n}{n-1\choose m-1} \,\times} \\ \quad\quad\quad \dsp{\left[ \sum_{\nu=0}^{m-1} B_\nu^m(z) {m-1\choose \nu}\frac{(n-\nu-1)!}{(n-1)!}(2\pi )^{\nu}\cos\sigma+{\cal O}\left(2^{-n}\right)\right]}, \end{array} \renewcommand{1.0}{1.0} \end{equation} as $n\to\infty$, where $\sigma=(2z+\tfrac12n-\tfrac12\nu)\pi$. To obtain $\beta_k^m(n,z)$ for $m> 1$ we can also use a recurrence relation. We have the relation \begin{equation}\label{Berrec} \mu B_n^{\mu+1}(z)=(\mu-n)B_n^{\mu}(z)+n (z-\mu)B_{n-1}^{\mu}(z),\quad n\ge 1, \end{equation} which follows from \eqref{Berdef} by differentiating both members with respect to $w$. By differentiation with respect to $z$ we find \begin{equation}\label{Berdif} nB_{n-1}^{\mu}(z)=\frac{d}{dz}B_n^{\mu}(z), \end{equation} giving \begin{equation}\label{Berrecd} \mu B_n^{\mu+1}(z)=(\mu-n)B_n^{\mu}(z)+ (z-\mu)\frac{d}{dz}B_n^{\mu}(z),\quad n\ge 0. \end{equation} This gives the recurrence relation for $m=1,2,3,\ldots$ \begin{equation}\label{betarec} m \beta_k^{m+1}(n,z)=[m-n+2\pi ik(z-m)]\beta_k^{m}(n,z)+ (z-m)\frac{d}{dz}\beta_k^{m}(n,z). \end{equation} \subsection{Asymptotic form for general complex \protectbold{\mu}}\label{subsec:Ber3} We consider \eqref{Berint} and observe that the singularities at $\pm2\pi i$ are the sources for the main asymptotic contributions. We integrate around a circle with radius $3\pi$, avoiding branch cuts running from $\pm2\pi i$ to $+\infty$. See Figure~\ref{Ber.fig1}. 
The contribution from the circular arc is ${\cal O}((3\pi)^{-n})$, which is exponentially small with respect to the main contributions. \begin{figure} \caption{\small Contour for \eqref{Berint}.} \label{Ber.fig1} \end{figure} We denote the loops by ${\cal L}_{\pm}$ and the contributions from the loops by $I_{\pm}$. For the upper loop we substitute $w=2\pi i e^s$. This gives \begin{equation}\label{Ip} I_+=\frac{n!}{2\pi i}\frac{e^{2\pi iz}}{(2\pi i)^n}\int_{{\cal C}_+} g(s) s^{-\mu}e^{-ns}\,ds, \end{equation} where \begin{equation}\label{gs} g(s)=\left(\frac{2\pi is}{e^u-1}\right)^\mu e^{zu+\mu s},\quad u=2\pi i\left(e^s-1\right), \end{equation} and ${\cal C}_+$ is the image of ${\cal L}_+$; it is a contour that encircles the origin in the clockwise direction. To obtain an asymptotic expansion we apply Watson's lemma for loop integrals, see \cite[p.~120]{Olver:1997:ASF}. We expand \begin{equation}\label{gsexp} g(s)=\sum_{k=0}^\infty g_k s^k, \end{equation} substitute this in \eqref{Ip}, and interchange summation and integration. This gives \begin{equation}\label{Ipas1} I_+\sim n! \,\frac{e^{2\pi iz}}{(2\pi i)^n}\,\sum_{k=0}^\infty g_k F_k, \end{equation} where \begin{equation}\label{Fk} F_k=\frac{1}{2\pi i}\int_{{\cal C}_+} s^{k-\mu}e^{-ns}\,ds, \end{equation} with ${\cal C}_+$ extended to $+\infty$. That is, we start the integration along the contour ${\cal C}_+$ at $s=+\infty$, with ${\rm ph}\,s=2\pi$, turn around the origin in the clockwise direction, and return to $+\infty$ with ${\rm ph}\,s=0$. To evaluate the integrals we turn the path by writing $s=e^{\pi i}t$, and use the representation of the reciprocal gamma function in terms of the Hankel contour; see \cite[p.~48]{Temme:1996:SFI}.
The result is \begin{equation}\label{Fkgam} F_k=n^{\mu-k-1}e^{\pi i \mu}\frac{(-1)^k}{\Gamma(\mu-k)}=n^{\mu-k-1}e^{\pi i \mu}\frac{(1-\mu)_k}{\Gamma(\mu)}, \end{equation} where $(a)_n$ is the shifted factorial, or Pochhammer's symbol, defined by \begin{equation}\label{poch} (a)_n=a\cdot(a+1)\cdots(a+n-1)=\frac{\Gamma(a+n)} {\Gamma(a)}, \quad n=0,1,2,\ldots\,. \end{equation} This gives the expansion \begin{equation}\label{Ipas2} I_+\sim \frac{n! \,n^{\mu-1}}{(2\pi )^n\,\Gamma(\mu)}\,e^{i\chi}\,\sum_{k=0}^\infty \frac{(1-\mu)_k g_k}{n^k}, \end{equation} where \begin{equation}\label{chizeta} \chi=2\zeta-\tfrac12n\pi,\quad \zeta=(z+\tfrac12\mu)\pi. \end{equation} The contribution $I_-$ can be obtained in a similar way. However, it is the complex conjugate of $I_+$ (not considering $z$ and $\mu$ as complex numbers). The result $I_++I_-$ can be obtained by taking twice the real part of $I_+$. We write $g_k=g_k^{(r)}+ig_k^{(i)}$ (with $g_k^{(r)},g_k^{(i)}$ real when $z$ and $\mu$ are real), and obtain \begin{equation}\label{Beras} B_n^\mu(z)\sim \frac{2\,n! \,n^{\mu-1}}{(2\pi )^n\,\Gamma(\mu)}\left[ \cos\chi\,\sum_{k=0}^\infty \frac{(1-\mu)_k g_k^{(r)}}{n^k}- \sin\chi\,\sum_{k=0}^\infty \frac{(1-\mu)_k g_k^{(i)}}{n^k}\right], \end{equation} as $n\to\infty$, with $z$ and $\mu$ fixed complex numbers ($\mu\notin{\mathbb Z}$). The first few coefficients $g_k^{(r)},g_k^{(i)}$ are \renewcommand{1.0}{1.5} \begin{equation}\label{gkri} \begin{array}{ll} g_0^{(r)}=1, &g_0^{(i)}=0,\\ g_1^{(r)}=\tfrac12\mu, &g_1^{(i)}=2\zeta,\\ g_2^{(r)}=\tfrac1{24}(3\mu^2+(4\pi^2-1)\mu-48\zeta^2), &g_2^{(i)}=(1+\mu)\zeta,\\ g_3^{(r)}=\tfrac1{48}(\mu^3+(4\pi^2-1)\mu^2+8(\pi^2-6\zeta^2)\mu-96\zeta^2), &\\ g_3^{(i)}=\tfrac1{12}\zeta(3\mu^2+(4\pi^2+5)\mu-16\zeta^2+4).&\\ \end{array} \end{equation} \renewcommand{1.0}{1.0} The first-order approximation reads \begin{equation}\label{Berasf} B_n^\mu(z)= \frac{2\,n! 
\,n^{\mu-1}}{(2\pi )^n\,\Gamma(\mu)}\left[ \cos\pi(2z+\mu-\tfrac12n)+{\cal O}(1/n)\right], \quad n\to\infty. \end{equation} N\"orlund \cite[p.~39]{Norlund:1961:SVA} describes the same method as in this section, giving only the first-order approximation. \subsubsection{An alternative expansion}\label{subsec:Ber4} As observed in the previous method, the main contributions to \eqref{Berint} come from the singular points of the integrand at $\pm2\pi i$. In this section we expand part of the integrand of \eqref{Berint} in a two-point Taylor expansion. In this way a simpler asymptotic representation can be obtained. For more details on this topic we refer to \cite{Lopez:2002:TPT,Lopez:2004:MPT} and for the evaluation of coefficients of such expansions to \cite{Vidunas:2002:SEC}. We write \begin{equation}\label{Ber41} f(w)=2^{-3\mu}\pi^{-2\mu}\left(w^2+4\pi^2\right)^\mu\left(\frac{w}{e^w-1}\right)^\mu e^{wz} \end{equation} and expand \begin{equation}\label{Ber42} f(w) =\sum_{k=0}^\infty\left(\alpha_k+w\beta_k\right)\left(w^2+4\pi^2\right)^k. \end{equation} The function $f(w)$ is analytic inside the disk $|w|<4\pi$ and the series converges in the same domain. The coefficients $\alpha_0$ and $\beta_0$ can be found by substituting $w=\pm 2\pi i$. This gives \renewcommand{\arraystretch}{1.5} \begin{equation}\label{Ber43} \begin{array}{l} \dsp{\alpha_0=\frac{f(2\pi i)+f(-2\pi i)}{2}=\cos2\zeta}, \\ \dsp{ \beta_0=\frac{f(2\pi i)-f(-2\pi i)}{4\pi i}=\frac{1}{2\pi}\sin2\zeta, } \end{array} \renewcommand{\arraystretch}{1.0} \end{equation} where $\zeta$ is defined in \eqref{chizeta}. The next coefficients can be obtained by writing $f_0(w)=f(w)$ and \renewcommand{\arraystretch}{1.5} \begin{equation}\label{Ber44} \begin{array}{lll} f_{j+1}(w)&=&\dsp{\frac{f_j(w)-(\alpha_j+w\beta_j)}{w^2+4\pi^2}} \\ &=&\dsp{\sum_{k=j+1}^\infty\left(\alpha_k+w\beta_k\right)\left(w^2+4\pi^2\right)^{k-j-1},} \end{array} \renewcommand{\arraystretch}{1.0} \end{equation} $ j=0,1,2,\ldots$, and by taking the limits as $w\to\pm2\pi i$.
We have \renewcommand{1.0}{1.5} \begin{equation}\label{Ber45} \begin{array}{l} \dsp{\alpha_{j+1}=\frac{f_j^\prime(2\pi i)-f_j^\prime(-2\pi i)}{8\pi i}}, \\ \dsp{ \beta_{j+1}=-\frac{f_j^\prime(2\pi i)+f_j^\prime(-2\pi i)-2\beta_j}{16\pi^2}}. \end{array} \renewcommand{1.0}{1.0} \end{equation} This gives {\small \renewcommand{1.0}{1.75} \begin{equation}\label{Ber46} \begin{array}{l} \dsp{ \alpha_1=-\frac{1}{16\pi^2}[3\mu\cos2\zeta+2\pi\eta\sin2\zeta],}\\ \dsp{ \beta_1=\frac{1}{32\pi^3}[2\pi\eta\cos2\zeta+(2-3\mu)\sin2\zeta],}\\ \dsp{\alpha_2=\frac{1}{1536\pi^4}[(-12\pi^2\eta^2+4\mu\pi^2-33\mu+27\mu^2)\cos2\zeta+12\pi\eta(3\mu-1)\sin2\zeta]}, \\ \dsp{ \beta_2=\frac{1}{3072\pi^5}[-36\pi\eta(\mu-1)\cos2\zeta+(36-69\mu+27\mu^2+4\mu\pi^2-12\pi^2\eta^2)\sin2\zeta],} \end{array} \renewcommand{1.0}{1.0} \end{equation} } where $\eta=\mu-2z$. Substituting the expansion in \eqref{Ber42} into \eqref{Berint} we obtain \begin{equation}\label{Ber47} B_n^{\mu}(z)=n!\, 2^{3\mu}\pi^{2\mu}\sum_{k=0}^\infty \left[\alpha_k \Phi_k^{(n)}+\beta_k \Phi_k^{(n-1)}\right], \end{equation} where \begin{equation}\label{Ber48} \Phi_k^{(n)}=\frac{1}{2\pi i} \int_{\cal C} \left(w^2+4\pi^2\right)^{k-\mu}\frac{dw}{w^{n+1}}. \end{equation} We have $\Phi_k^{(2n+1)}=0$ and \begin{equation}\label{Ber49} \Phi_k^{(2n)}=(2\pi)^{2k-2\mu-2n}\,{k-\mu\choose n}=(-1)^n(2\pi)^{2k-2\mu-2n}\frac{(\mu-k)_n}{n!}. \end{equation} Hence, \renewcommand{1.0}{1.5} \begin{equation}\label{Ber410} \begin{array}{l} \dsp{B_{2n}^{\mu}(z)=(2n)! \,2^{3\mu}\pi^{2\mu}\sum_{k=0}^\infty \alpha_k \Phi_k^{(2n)},} \\ \dsp{B_{2n+1}^{\mu}(z)=(2n+1)!2^{3\mu}\pi^{2\mu}\,\sum_{k=0}^\infty \beta_k \Phi_k^{(2n)}.} \end{array} \renewcommand{1.0}{1.0} \end{equation} These convergent expansions have an asymptotic character for large $n$. 
This follows from (see \eqref{poch}) \renewcommand{1.0}{1.75} \begin{equation}\label{Ber411} \begin{array}{@{}r@{\;}c@{\;}l@{}} \dsp{\frac{ \Phi_{k+1}^{(2n)}}{ \Phi_k^{(2n)}}} &=& \dsp{4\pi^2\frac{(\mu-k-1)_n}{(\mu-k)_n}}\\ &=& \dsp{4\pi^2\frac{\Gamma(\mu-k-1+n)}{\Gamma(\mu-k-1)}\,\frac{\Gamma(\mu-k)}{\Gamma(\mu-k+n)}}\\ &=& \dsp{4\pi^2\frac{\mu-k-1}{\mu-k+n-1}={\cal O}\left(n^{-1}\right), \quad n\to\infty.} \end{array} \renewcommand{1.0}{1.0} \end{equation} We compare the first term approximations given in \eqref{Berasf} and those from \eqref{Ber410}. From \eqref{Berasf} we obtain \begin{equation}\label{Ber414} B_{2n}^\mu(z)\sim (-1)^n\frac{(2n)! \,2^\mu n^{\mu-1}}{(2\pi )^{2n}\,\Gamma(\mu)} \cos\pi(2z+\mu)+\ldots, \end{equation} and from \eqref{Ber410} \begin{equation}\label{Ber415} B_{2n}^\mu(z)=(-1)^n\frac{(2n)! \,2^\mu }{(2\pi )^{2n}\,\Gamma(\mu)}\frac{\Gamma(n+\mu)}{n!} \cos\pi(2z+\mu)+\ldots. \end{equation} Because $\Gamma(n+\mu)/n!\sim n^{\mu-1}$ as $n\to\infty$, we see that the first approximations give the same asymptotic estimates, and they are exactly the same when $\mu=1$. \paragraph{Integer values of \protectbold{\mu}}\label{par:Bintmu} Comparing the expansions in \eqref{Beras} and \eqref{Ber410}, we observe that those in \eqref{Ber410} do not vanish when $\mu=0,-1,-2,\ldots$, whereas the expansion in \eqref{Beras} does. We have when $\mu=m$ (integer) \renewcommand{1.0}{1.5} \begin{equation}\label{Ber412} \Phi_k^{(2n)}= \left\{ \begin{array}{ll} \dsp{(2\pi)^{2k-2m-2n}\,{k-m\choose n}}, \quad &k\ge n+m,\\ 0, \quad & k<n+m. \end{array} \right. \renewcommand{1.0}{1.0} \end{equation} Hence, the summation in \eqref{Ber410} starts with $k=n+m$. 
The scale $\{\Phi_k^{(2n)}\}$ loses its asymptotic property, because now \begin{equation}\label{Ber413} \frac{ \Phi_{k+1}^{(2n)}}{ \Phi_k^{(2n)}}=4\pi^2\frac{n+\ell+1}{\ell+1}={\cal O}\left(n\right), \quad n\to\infty, \end{equation} where $k=n+m+\ell$, and a possible asymptotic character of the series in \eqref{Ber410} has to be furnished by the coefficients $\alpha_k, \beta_k$, which depend on $n$ when $k\ge n+m$. Because $n$ is assumed to be large, and the coefficients $\alpha_k, \beta_k$ in \eqref{Ber410} become quite complicated when $k\ge n+m$, these expansions are of no use when $\mu$ is an integer. When we replace the expansion in \eqref{Ber42} with \begin{equation}\label{Ber42a} f(w) =\sum_{k=0}^\infty\left(\widetilde\alpha_k+w\widetilde\beta_k\right)\left(\frac{w^2+4\pi^2}{w^2}\right)^k \end{equation} we obtain \renewcommand{\arraystretch}{1.5} \begin{equation}\label{Ber410a} \begin{array}{l} \dsp{B_{2n}^{\mu}(z)\sim(2n)! \,2^{3\mu}\pi^{2\mu}\sum_{k=0}^\infty \widetilde\alpha_k \widetilde\Phi_k^{(2n)},} \\ \dsp{B_{2n+1}^{\mu}(z)\sim(2n+1)!2^{3\mu}\pi^{2\mu}\,\sum_{k=0}^\infty\widetilde\beta_k \widetilde\Phi_k^{(2n)},} \end{array} \renewcommand{\arraystretch}{1.0} \end{equation} where $\widetilde\alpha_k$ and $\widetilde\beta_k$ can be obtained from a scheme similar to \eqref{Ber44}-\eqref{Ber45}. The functions $\widetilde\Phi_k^{(2n)}$ are given by \begin{equation}\label{Ber49a} \widetilde \Phi_k^{(2n)}=\frac{(-1)^{n+k}}{(2\pi)^{2\mu+2n}}\frac{(\mu-k)_{n+k}}{(n+k)!} =\frac{(-1)^{n+k}}{(2\pi)^{2\mu+2n}}\frac{\Gamma(\mu+n)}{\Gamma(\mu-k)\,(n+k)!}. \end{equation} When $\mu=m$ (integer) these functions vanish if $k-m=0,1,2,\ldots$, which is more useful than the earlier choice \eqref{Ber42}. When $m<0$ all terms vanish; when $m>0$ the series have a finite number of terms.
With the expansion in \eqref{Ber42a}, which converges in certain neighborhoods of the points $w=\pm2\pi i$ (and not in a domain that contains any allowed deformation of the curve ${\cal C}$ in \eqref{Berint}), the expansions in \eqref{Ber410a} do not converge, but they have an asymptotic character for large $n$. As an example, when $m=1$ the expansions in \eqref{Ber410a} have just one term $(k=0)$. In this case $\widetilde\Phi_0^{(2n)}=(-1)^n/(2\pi)^{2n+2}$ and $\widetilde\alpha_0=\alpha_0$, $\widetilde\beta_0=\beta_0$ (see \eqref{Ber43}). These approximations correspond exactly to the first terms in the expansions in \eqref{Bersereven} and \eqref{Berserodd}. \section{The generalized Euler polynomials}\label{sec:Eul} We can use the same methods as for the Bernoulli polynomials, and, therefore, we give fewer details. Again, three different cases arise, depending on whether $\mu=0,-1,-2,\ldots$, $\mu=1,2,3,\ldots$, or $\mu$ is otherwise real or complex. In the first and third cases we use the Cauchy integral \begin{equation}\label{Eulint} E_n^{\mu}(z)=\frac{n!}{ 2\pi i} \int_{{\cal C}}\frac{2^{\mu}e^{wz}}{(e^w+1)^{\mu}} \frac{dw}{ w^{n+1}}, \end{equation} where ${\cal C}$ is a circle around the origin, with radius less than $\pi$. This follows from \eqref{Euldef}. \subsection{Asymptotic form when \protectbold{\mu=0,-1,-2,\ldots}}\label{subsec:Eul1} We proceed as in \S\ref{subsec:Ber1} and write $\mu=-m$ $(m=0,1,2,\ldots)$. We expand $E_n^{\mu}(z)$ in terms of a finite sum. We have \begin{equation}\label{Eulsum} E_n^{-m}(z)=2^{-m} \sum_{r=0}^m {m\choose r} (z+r)^{n}. \end{equation} For any given $m\in {\mathbb N}$ and complex $z$ only the term or terms with the largest values of $|z+r|$ will give a large contribution to the sum in \eqref{Eulsum}, the other terms being exponentially small in comparison. We conclude that \eqref{Eulsum} gives the asymptotic form as $n\to\infty$, with $m$ and $z$ fixed.
In particular, when $z>0$, the term with index $r=m$ is maximal, and we have \begin{equation}\label{Eulsum0} E_n^{-m}(z)=2^{-m} (z+m)^{n}\left[1+{\cal O}\left(\frac{z+m-1}{z+m}\right)^{n}\right]. \end{equation} The error term can also be estimated by ${\cal O}(\exp(-n/(z+m)))$, which is indeed exponentially small compared with unity. For general complex $z=x+iy$ and $x>-m/2$ the term with index $r=m$ again is maximal and the same estimate as in \eqref{Eulsum0} is valid. When $x=-m/2$ the terms with $r=0$ and $r=m$ give the maximal contributions, and we have \begin{equation}\label{Eulsum1} E_n^{-m}(z)\sim2^{-m} \left[(-\tfrac12m+iy)^{n}+(\tfrac12m+iy)^{n} \right]. \end{equation} When $x<-m/2$ the term with index $r=0$ is maximal, and we have \begin{equation}\label{Eulsum2} E_n^{-m}(z)=2^{-m}z^{n}\left[1+{\cal O}\left(\frac{z+1}{z}\right)^{n}\right]. \end{equation} As explained at the end of \S\ref{subsec:Ber1} these estimates can also be derived by using the saddle point method. \subsection{Asymptotic form when \protectbold{\mu= 1, 2,3,\ldots}}\label{subsec:Eul2} We write $\mu=m$. For $m=1$ we have \renewcommand{1.0}{1.75} \begin{equation}\label{Eul1ser} \begin{array}{@{}r@{\;}c@{\;}l@{}} E_n^{1}(z)=E_n(z) &=& \dsp{2\,n!\sum_{{k=-\infty}}^\infty\frac{e^{(2k+1)\pi iz}}{((2k+1)\pi i)^{n+1}}}\\ &=& \dsp{4\,n!\sum_{{k=0}}^\infty\frac{\sin((2k+1)\pi z-\frac12\pi n)}{((2k+1)\pi)^{n+1}}}, \end{array} \renewcommand{1.0}{1.0} \end{equation} where $z\in(0,1)$ if $n=0$ and $z\in[0,1]$ if $n>0$. This expansion follows from \eqref{Eulint} as in \S\ref{subsec:Ber2}. In the second series in \eqref{Eul1ser} only the term with $k=0$ is relevant for the asymptotic behavior, and we obtain for fixed complex $z$ \begin{equation}\label{Eul1as} E_n^{1}(z)=\frac{4\,n!}{\pi^{n+1}}\left[\sin\left(\pi z-\tfrac12\pi n\right) +{\cal O}\left(3^{-n}\right)\right],\quad n\to\infty. 
\end{equation} For general fixed real or complex $z$ the series in \eqref{Eul1ser} can be viewed as an asymptotic expansion for large $n$, as easily follows from the ratio test. For general $\mu=m=1,2,3,\ldots$ an expansion similar to \eqref{Eul1ser} can be given. In that case the poles of the integrand in \eqref{Eulint} are of higher order. We can write \begin{equation}\label{Eulmser} E_n^{m}(z)=2\,n!\sum_{{k=-\infty}}^\infty\varepsilon_k^m(n,z)\frac{e^{(2k+1)\pi iz}}{((2k+1)\pi i)^{n+1}}, \end{equation} where $\varepsilon_k^1(n,z)=1, \forall k$. To obtain $\varepsilon_k^m(n,z)$ for $m> 1$ we compute the residues of the poles at $(2k+1)\pi i$ of order $m$ of the integrand in \eqref{Eulint}. For this we compute the coefficient $d_{m-1}$ in the expansion \begin{equation}\label{Eulres1} \frac{(w-(2k+1)\pi i)^m e^{zw}}{(e^w+1)^mw^{n+1}}=\sum_{r=0}^\infty d_r (w-(2k+1)\pi i)^r. \end{equation} We substitute $w=s+(2k+1)\pi i$ and write the expansion as \begin{equation}\label{Eulres2} (-1)^me^{z(2k+1)\pi i}\frac{s^m e^{zs}}{(e^s-1)^m(s+(2k+1)\pi i)^{n+1}}=\sum_{r=0}^\infty d_r s^r. \end{equation} We use \eqref{Berdef} and conclude that \begin{equation}\label{Eulres3} d_{m-1}=\frac{(-1)^me^{z(2k+1)\pi i}}{((2k+1)\pi i)^{n+1}}\sum_{\nu=0}^{m-1} \frac{B_\nu^m(z)}{\nu!}{-n-1\choose m-1-\nu}((2k+1)\pi i)^{\nu+1-m}. \end{equation} It follows that \begin{equation}\label{Eulres4} \varepsilon_k^m(n,z)=(-1)^{m-1}2^{m-1}\sum_{\nu=0}^{m-1} \frac{B_\nu^m(z)}{\nu!}{-n-1\choose m-1-\nu}((2k+1)\pi i)^{\nu+1-m}, \end{equation} which we write in the form \renewcommand{\arraystretch}{1.5} \begin{equation}\label{Eulres5} \begin{array}{l} \dsp{\varepsilon_k^m(n,z)=\frac{2^{m-1}}{((2k+1)\pi i)^{m-1}}{n+m-1\choose m-1}}\,\times \\ \quad\quad \dsp{\sum_{\nu=0}^{m-1}B_\nu^m(z){m-1\choose \nu}\frac{(n+m-\nu-1)!}{(n+m-1)!}(-(2k+1)\pi i)^{\nu}}.
\end{array} \end{equation} \renewcommand{\arraystretch}{1.0} For large $n$ the main term occurs for $\nu=0$, giving \begin{equation}\label{Eulres6} \varepsilon_k^m(n,z)= \frac{2^{m-1}n^{m-1}}{(m-1)!\,((2k+1)\pi i)^{m-1}}\left[1+{\cal O}\left(n^{-1}\right)\right], \end{equation} and in \eqref{Eulmser} the terms with $k=0, -1$ give the main terms, and we obtain for fixed $m$ and complex $z$ (cf. \eqref{Eul1as}) \renewcommand{\arraystretch}{1.5} \begin{equation}\label{Eulmas1} \begin{array}{ll} \dsp{E_n^{m}(z)=\frac{2^{m+1}n!}{\pi^{n+m}}{n+m-1\choose m-1} \,\times} \\ \quad\quad \dsp{\left[ \sum_{\nu=0}^{m-1} B_\nu^m(z) {m-1\choose \nu}\frac{(n+m-\nu-1)!}{(n+m-1)!}\pi^{\nu}\sin\tau+{\cal O}\left(3^{-n}\right)\right]}, \end{array} \renewcommand{\arraystretch}{1.0} \end{equation} as $n\to\infty$, where $\tau=(z-\tfrac12n-\tfrac12(m-1)-\tfrac12\nu)\pi$. \subsection{Asymptotic form for general complex \protectbold{\mu}}\label{subsec:Eul3} The analysis is as in \S\ref{subsec:Ber3}. We use a contour for the integral \eqref{Eulint} as in Figure~\ref{Ber.fig1}, now with loops around the branch points $\pm\pi i$, and with the radius of the large circle smaller than $3\pi$. We denote the integrals around the loops by $I_\pm$. After the substitution $w=\pi i\exp(s)$ we obtain for the upper loop \begin{equation}\label{IpEul} I_+=\frac{2^\mu n!}{2\pi i}\frac{e^{\pi iz-\mu\pi i}}{(\pi i)^{n+\mu}}\int_{{\cal C}_+} h(s) s^{-\mu}e^{-ns}\,ds, \end{equation} where \begin{equation}\label{hs} h(s)=e^{zu}\left(\frac{\pi i s}{e^u-1}\right)^\mu,\quad u=\pi i \left(e^s-1\right). \end{equation} We expand $h(s)=\sum_{k=0}^\infty h_k s^k $ and interchange summation and integration in \eqref{IpEul}. By using \eqref{Fk} and \eqref{Fkgam} we obtain the result \begin{equation}\label{Eulas} E_n^\mu(z)\sim\frac{2^{\mu+1}\,n!
\,n^{\mu-1}}{\pi ^{n+\mu}\,\Gamma(\mu)}\left[ \cos\chi\,\sum_{k=0}^\infty \frac{(1-\mu)_k h_k^{(r)}}{n^k}- \sin\chi\,\sum_{k=0}^\infty \frac{(1-\mu)_k h_k^{(i)}}{n^k}\right], \end{equation} as $n\to\infty$, with $z$ and $\mu$ fixed complex numbers ($\mu\notin{\mathbb Z}$), where \begin{equation}\label{chizeta2} \chi=\zeta-\tfrac12n\pi,\quad \zeta=(z-\tfrac12\mu)\pi. \end{equation} The first few coefficients $h_k^{(r)}, h_k^{(i)}$ are \renewcommand{1.0}{1.5} \begin{equation}\label{hkri} \begin{array}{ll} h_0^{(r)}=1, &h_0^{(i)}=0,\\ h_1^{(r)}=-\tfrac12\mu, &h_1^{(i)}=\zeta,\\ h_2^{(r)}=\tfrac1{24}(3(1-2\pi^2)\mu^2+(13\pi^2-12\zeta\pi-1)\mu-12\zeta^2), &h_2^{(i)}=\tfrac12(1-\mu)\zeta, \\ h_3^{(r)}=\tfrac{1}{48}z(-\mu^3 +(1-\pi^2)\mu^2+2(\pi^2+6\zeta^2)\mu-24\zeta^2), &\\ h_3^{(i)}=\tfrac1{24}\zeta(3\mu^2+(\pi^2-7)\mu-4\zeta^2+4).&\\ \end{array} \end{equation} \renewcommand{1.0}{1.0} The first-order approximation reads \begin{equation}\label{Eulasf} E_n^\mu(z)= \frac{2^{\mu+1}\,n! \,n^{\mu-1}}{\pi ^{n+\mu}\,\Gamma(\mu)}\left[ \cos\pi(z-\tfrac12\mu-\tfrac12n)+{\cal O}(1/n)\right],\quad n\to\infty. \end{equation} \subsubsection{An alternative expansion}\label{subsec:Eul4} We repeat the steps of \S\ref{subsec:Ber4}. We write \begin{equation}\label{Eul41} g(w)=\left(\frac{w^2+\pi^2}{2\pi}\right)^\mu\left(\frac{1}{e^w+1}\right)^\mu e^{wz} \end{equation} and expand \begin{equation}\label{Eul42} g(w) =\sum_{k=0}^\infty\left(\gamma_k+w\delta_k\right)\left(w^2+\pi^2\right)^k. \end{equation} We have \renewcommand{1.0}{1.5} \begin{equation}\label{Eul43} \begin{array}{l} \dsp{\gamma_0=\frac{g(\pi i)+g(-\pi i)}{2}=\cos \zeta}, \\ \dsp{ \delta_0=\frac{g(\pi i)-g(-\pi i)}{2\pi i}=\frac{1}{\pi}\sin \zeta. } \end{array} \renewcommand{1.0}{1.0} \end{equation} where $\zeta=(z-\frac12\mu)\pi$. 
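In the same spirit as for \eqref{Berasf}, the first-order estimate \eqref{Eulasf} can be checked against an exact evaluation of $E_n^\mu(z)$ taken directly from the generating function \eqref{Euldef}. A minimal sketch (plain Python in exact rational arithmetic; computing $((e^w+1)/2)^{-\mu}$ by the J.C.P. Miller power-series recurrence is our own choice, not the paper's):

```python
from fractions import Fraction as F
from math import cos, factorial, gamma, pi

def gen_euler(n, mu, z):
    """Exact E_n^mu(z) for rational mu, z, from (Euldef):
    (2/(e^w+1))^mu e^{wz} = sum_k E_k^mu(z) w^k / k!."""
    # Taylor coefficients of A(w) = (e^w + 1)/2: a_0 = 1, a_k = 1/(2 k!).
    a = [F(1)] + [F(1, 2 * factorial(k)) for k in range(1, n + 1)]
    # C = A^p with p = -mu, via the Miller recurrence (valid since a[0] = 1).
    p = -F(mu)
    c = [F(1)]
    for k in range(1, n + 1):
        s = sum(((p + 1) * j - k) * a[j] * c[k - j] for j in range(1, k + 1))
        c.append(s / k)
    # Coefficient of w^n in C(w) e^{zw}, times n!.
    z = F(z)
    return factorial(n) * sum(
        c[k] * z ** (n - k) / factorial(n - k) for k in range(n + 1)
    )

def first_order_euler(n, mu, z):
    # The first-order approximation (Eulasf).
    return (2 ** (mu + 1) * factorial(n) * n ** (mu - 1.0)
            / (pi ** (n + mu) * gamma(mu)) * cos(pi * (z - mu / 2 - n / 2)))
```

For $n=60$, $\mu=\tfrac12$, $z=\tfrac14$ the agreement is again at the level of the ${\cal O}(1/n)$ error term, and small cases such as the classical $E_1(z)=z-\tfrac12$ are reproduced exactly.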
The next coefficients follow from writing $g_0(w)=g(w)$ and \renewcommand{1.0}{1.5} \begin{equation}\label{Eul44} \begin{array}{lll} g_{j+1}(w)&=&\dsp{\frac{g_j(w)-(\gamma_j+w\delta_j)}{w^2+\pi^2}} \\ &=&\dsp{\sum_{k=j+1}^\infty\left(\gamma_k+w\delta_k\right)\left(w^2+\pi^2\right)^{k-j-1}, \quad j=0,1,2,\ldots.} \end{array} \renewcommand{1.0}{1.0} \end{equation} This gives \renewcommand{1.0}{1.5} \begin{equation}\label{Eul45} \begin{array}{lll} \dsp{\gamma_{j+1}=\frac{g_j^\prime(\pi i)-g_j^\prime(-\pi i)}{4\pi i}}, \\ \dsp{ \delta_{j+1}=-\frac{g_j^\prime(\pi i)+g_j^\prime(-\pi i)-2\delta_j}{4\pi^2}}. \end{array} \renewcommand{1.0}{1.0} \end{equation} and {\small \renewcommand{1.0}{1.75} \begin{equation}\label{Eul46} \begin{array}{l} \dsp{ \gamma_1=-\frac{1}{4\pi^2}[\mu\cos\zeta+\pi\eta\sin \zeta],}\\ \dsp{ \delta_1=\frac{1}{4\pi^3}[\pi\eta\cos\zeta+(2-\mu)\sin\zeta],}\\ \dsp{\gamma_2=\frac{1}{96\pi^4}[(-9\mu-3\pi^2\eta^2+\pi^2\mu+3\mu^2)\cos\zeta+ 6\pi\eta(\mu-1)\sin\zeta]}, \\ \dsp{ \delta_2=\frac{1}{96\pi^5}[6\pi\eta(3-\mu)\cos\zeta+(36-21\mu+3\mu^2+\pi^2\mu-3\pi^2\eta^2)\sin\zeta],} \end{array} \renewcommand{1.0}{1.0} \end{equation} } where $\eta=\mu-2z$. Substituting the expansion in \eqref{Eul42} into \eqref{Eulint} we obtain \begin{equation}\label{Eul47} E_n^{\mu}(z)=(4\pi)^\mu n!\sum_{k=0}^\infty \left[\gamma_k \Psi_k^{(n)}+\delta_k \Psi_k^{(n-1)}\right], \end{equation} where \begin{equation}\label{Eul48} \Psi_k^{(n)}=\frac{1}{2\pi i} \int_{\cal C} \left(w^2+\pi^2\right)^{k-\mu}\frac{dw}{w^{n+1}}. \end{equation} We have $\Psi_k^{(2n+1)}=0$ and \begin{equation}\label{Eul49} \Psi_k^{(2n)}=\pi^{2k-2\mu-2n}\,{k-\mu\choose n}=(-1)^n\pi^{2k-2\mu-2n}\frac{(\mu-k)_n}{n!}. 
\end{equation} Hence, \renewcommand{\arraystretch}{1.5} \begin{equation}\label{Eul410} \begin{array}{l} \dsp{E_{2n}^{\mu}(z)=(2\pi)^\mu(2n)!\sum_{k=0}^\infty \gamma_k \Psi_k^{(2n)},} \\ \dsp{E_{2n+1}^{\mu}(z)=(2\pi)^\mu(2n+1)!\sum_{k=0}^\infty \delta_k \Psi_k^{(2n)}.} \end{array} \renewcommand{\arraystretch}{1.0} \end{equation} These convergent expansions have an asymptotic character for large $n$. This follows from \begin{equation}\label{Eul411} \frac{ \Psi_{k+1}^{(2n)}}{ \Psi_k^{(2n)}}=\pi^2\frac{\mu-k-1}{\mu-k-1+n}={\cal O}\left(n^{-1}\right), \quad n\to\infty. \end{equation} Comparing the first term approximations given in \eqref{Eulasf} and those from \eqref{Eul410} we obtain from \eqref{Eulasf} \begin{equation}\label{Eul413} E_{2n}^\mu(z)\sim (-1)^n\frac{(2n)! \,2^{2\mu} n^{\mu-1}}{\pi^{2n+\mu}\,\Gamma(\mu)} \cos\pi(z-\tfrac12\mu)+\ldots, \end{equation} and from \eqref{Eul410} \begin{equation}\label{Eul414} E_{2n}^\mu(z)=(-1)^n\frac{(2n)! \,2^{2\mu} }{\pi^{2n+\mu}\,\Gamma(\mu)}\frac{\Gamma(n+\mu)}{n!} \cos\pi(z-\tfrac12\mu)+\ldots, \end{equation} and we see that the first approximations give the same asymptotic estimates. \paragraph{Integer values of $\mu$}\label{par:Eintmu} The expansions in \eqref{Eul410} do not vanish when $\mu$ is a negative integer, as the expansion in \eqref{Eulas} does. We have when $\mu=m$ (integer) \renewcommand{\arraystretch}{1.5} \begin{equation}\label{Eul412} \Psi_k^{(2n)}= \left\{ \begin{array}{ll} \dsp{\pi^{2k-2m-2n}\,{k-m\choose n}}, \quad &k\ge n+m,\\ 0, \quad & k<n+m. \end{array} \right. \renewcommand{\arraystretch}{1.0} \end{equation} Hence, the summation in \eqref{Eul410} starts with $k=n+m$.
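The two expressions for $\Psi_k^{(2n)}$ in \eqref{Eul49} and the $O(1/n)$ decay of consecutive terms can be verified directly. In the Python sketch below the value $\mu=0.7$ is an arbitrary sample choice:

```python
import math

def binom_gen(a, n):
    # generalized binomial coefficient: a(a-1)...(a-n+1)/n!
    out = 1.0
    for j in range(n):
        out *= (a - j) / (j + 1)
    return out

def pochhammer(a, n):
    # Pochhammer symbol (a)_n = a(a+1)...(a+n-1)
    out = 1.0
    for j in range(n):
        out *= a + j
    return out

def psi(k, n, mu):
    # Psi_k^{(2n)} in its binomial form
    return math.pi ** (2 * k - 2 * mu - 2 * n) * binom_gen(k - mu, n)

mu = 0.7
# the binomial and the Pochhammer forms agree
for k in range(4):
    for n in range(1, 8):
        alt = ((-1) ** n * math.pi ** (2 * k - 2 * mu - 2 * n)
               * pochhammer(mu - k, n) / math.factorial(n))
        assert abs(psi(k, n, mu) - alt) <= 1e-10 * abs(alt)

# consecutive terms shrink like 1/n, which gives the series its asymptotic character
ratios = [abs(psi(1, n, mu) / psi(0, n, mu)) for n in (10, 50, 200)]
print(ratios)   # roughly proportional to 1/n
```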
When we expand \begin{equation}\label{Eul42a} g(w) =\sum_{k=0}^\infty\left(\widetilde\gamma_k+w\widetilde\delta_k\right)\left(\frac{w^2+\pi^2}{w^2}\right)^k \end{equation} we obtain the expansions \renewcommand{\arraystretch}{1.5} \begin{equation}\label{Eul410a} \begin{array}{l} \dsp{E_{2n}^{\mu}(z)\sim(2\pi)^\mu(2n)!\sum_{k=0}^\infty \widetilde\gamma_k \widetilde\Psi_k^{(2n)},} \\ \dsp{E_{2n+1}^{\mu}(z)\sim(2\pi)^\mu(2n+1)!\sum_{k=0}^\infty \widetilde\delta_k \widetilde\Psi_k^{(2n)},} \end{array} \renewcommand{\arraystretch}{1.0} \end{equation} where $\widetilde\gamma_k$ and $\widetilde\delta_k$ can be obtained from a scheme similar to that in \eqref{Eul45}. The functions $\widetilde\Psi_k^{(2n)}$ are given by \begin{equation}\label{Eul49a} \widetilde\Psi_k^{(2n)}=\pi^{-2\mu-2n}\,{k-\mu\choose n+k}=(-1)^{n+k}\pi^{-2\mu-2n}\frac{(\mu-k)_{n+k}}{(n+k)!}. \end{equation} When $\mu=m$ (integer) these functions vanish if $k-m=0,1,2,\ldots$, which is more useful than the earlier choice \eqref{Eul42}. For example, when $m=1$ the expansions in \eqref{Eul410a} have just one term $(k=0)$. In this case $\widetilde\Psi_0^{(2n)}=(-1)^n/\pi^{2n+2}$ and $\widetilde\gamma_0=\gamma_0$, $\widetilde\delta_0=\delta_0$ (see \eqref{Eul43}). These approximations correspond exactly to the first term in the second expansion in \eqref{Eul1ser}. \paragraph{Acknowledgments.} The authors thank the referee for the constructive remarks. The {\it Gobierno de Navarra, Res.\ 07/05/2008} is acknowledged for its financial support. JLL acknowledges financial support from {\emph{Ministerio de Educaci\'on y Ciencia}}, project MTM2007--63772. NMT acknowledges financial support from {\emph{Ministerio de Educaci\'on y Ciencia}}, project MTM2006--09050. \end{document}
\begin{document} \title[Shadows and Barriers]{Shadows and Barriers} \author{Martin Br\"uckerhoff \address[Martin Br\"uckerhoff]{Universit\"at M\"unster, Germany} \email{[email protected]} \hspace*{0.5cm} Martin Huesmann \address[Martin Huesmann]{Universit\"at M\"unster, Germany} \email{[email protected]} } \thanks{MB and MH are funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044 –390685587, Mathematics Münster: Dynamics–Geometry–Structure. } \begin{abstract} We show an intimate connection between solutions of the Skorokhod Embedding Problem which are given as the first hitting time of a barrier and the concept of shadows in martingale optimal transport. More precisely, we show that a solution $\tau$ to the Skorokhod Embedding Problem between $\mu$ and $\nu$ is of the form $\tau = \inf \{t \geq 0 : (X_t,B_t) \in \mathcal{R}\}$ for some increasing process $(X_t)_{t \geq 0}$ and a barrier $\mathcal{R}$ if and only if there exists a time-change $(T_l)_{l \geq 0}$ such that for all $l \geq 0$ the equation $$\mathbb{P}[B_{\tau} \in \cdot , \tau \geq T_l] = \shadow{\nu}{\mathbb{P}[B_{T_l} \in \cdot , \tau \geq T_l]}$$ is satisfied, i.e.\ the distribution of $B_{\tau}$ on the event that the Brownian motion is stopped after $T_l$ is the shadow of the distribution of $B_{T_l}$ on this event in the terminal distribution $\nu$. This equivalence allows us to construct new families of barrier solutions that naturally interpolate between two given barrier solutions. We exemplify this by an interpolation between the Root embedding and the left-monotone embedding. \normalem \noindent\emph{Keywords:} Skorokhod embedding, shadows, martingale optimal transport \\ \emph{Mathematics Subject Classification (2020):} Primary 60G40, 60G42, 60J45. 
\end{abstract} \date{\today} \maketitle \section{Introduction} Let $\mu$ be a probability measure on $\mathbb{R}$ and $(B_t)_{t \geq 0}$ an $\mathcal{F}$-Brownian motion with initial distribution $\mathrm{Law}(B_0) = \mu$ defined on a filtered probability space $(\Omega, \mathcal{A},\mathbb{P}, (\mathcal{F}_t)_{t \geq 0})$. We assume that the filtration $\mathcal{F} = (\mathcal{F}_t)_{t \geq 0}$ is right-continuous and completed w.r.t.\ $\mathbb{P}$. Given another probability measure $\nu$, a finite $\mathcal{F}$-stopping time $\tau$ is said to be a solution to the Skorokhod Embedding Problem w.r.t.\ $\mu$ and $\nu$, if \begin{equation} \tag{$\mathrm{SEP}(\mu,\nu)$} (B_{t \land \tau})_{t \geq 0} \text{ is uniformly integrable} \quad \text{and} \quad B_{\tau} \sim \nu. \end{equation} It is well known that there exists a solution to $\mathrm{SEP}(\mu,\nu)$ if and only if $\mu \leq_{c} \nu$, i.e.\ if we have $\int_{\mathbb{R}} \varphi \, \mathrm{d} \mu \leq \int _{\mathbb{R}} \varphi \, \mathrm{d} \nu$ for all convex functions $\varphi$. In general there exist many different solutions to $\mathrm{SEP}(\mu,\nu)$ (cf.\ \cite{Ob04}). \subsection*{Main Result} In this article, we focus on the subclass of ``barrier solutions'' to the Skorokhod Embedding Problem which includes for instance the Root embedding \cite{Ro69}, the Az\'{e}ma-Yor embedding \cite{AzYo79}, the Vallois embedding \cite{Va83}, and the left-monotone embedding \cite{BeHeTo17}. These solutions can be described as the first time the process $(X_t,B_t)_{t \geq 0}$ hits a barrier in $[0, \infty) \times \mathbb{R}$ (cf.\ Definition \ref{def:Intro}) where $X$ is monotonously increasing, non-negative and $\mathcal{F}$-adapted. We show that these embeddings are closely related to the concept of shadows introduced by Beiglb\"ock and Juillet in \cite{BeJu16}.
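The convex order $\leq_{c}$ is easy to test for discrete measures via potential functions: with the convention $U_\mu(x) = \int_{\mathbb{R}} \vert x-y \vert \, \mathrm{d}\mu(y)$ (sign conventions vary in the literature), two finite measures with equal mass and barycenter satisfy $\mu \leq_{c} \nu$ exactly when $U_\mu \leq U_\nu$ pointwise, and for atomic measures it suffices to compare at the atoms, since the potentials are piecewise linear. A minimal Python sketch (the example measures are our own, not from the text):

```python
def potential(atoms, x):
    # U_mu(x) = integral of |x - y| d mu(y) for a discrete measure {point: mass}
    return sum(m * abs(x - y) for y, m in atoms.items())

def leq_convex(mu, nu, tol=1e-12):
    # mu <=_c nu iff mass and barycenter agree and U_mu <= U_nu pointwise;
    # piecewise linearity lets us test only at the atoms of both measures
    mass_mu, mass_nu = sum(mu.values()), sum(nu.values())
    mean_mu = sum(y * m for y, m in mu.items())
    mean_nu = sum(y * m for y, m in nu.items())
    if abs(mass_mu - mass_nu) > tol or abs(mean_mu - mean_nu) > tol:
        return False
    pts = set(mu) | set(nu)
    return all(potential(mu, x) <= potential(nu, x) + tol for x in pts)

mu = {0.0: 1.0}                 # a Dirac mass at 0
nu = {-1.0: 0.5, 1.0: 0.5}      # its symmetric spread to +-1
print(leq_convex(mu, nu), leq_convex(nu, mu))   # True False
```

This matches the intuition that the convex order compares how spread out two measures with the same barycenter are.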
\begin{definition} \label{def:Intro} \begin{itemize} \item [(i)] A set $\mathcal{R} \subset [0, \infty) \times \mathbb{R}$ is called a barrier if $\mathcal{R}$ is closed and for all $(l,x) \in \mathcal{R}$ and $l \leq l'$ we have $(l',x) \in \mathcal{R}$. \item [(ii)] Let $\xi$ and $\zeta$ be finite measures on $\mathbb{R}$. We say that $\xi$ is a submeasure of $\zeta$ if $\xi[A] \leq \zeta[A]$ for all $A \in \mathcal{B}(\mathbb{R})$, denoted by $\xi \leq_{+} \zeta$. \item [(iii)] Let $\eta$ and $\zeta$ be finite measures on $\mathbb{R}$. A finite measure $\xi$ that satisfies $\eta \leq_{c} \xi \leq_{+} \zeta$ and $\xi \leq_{c} \xi'$ for all $\xi'$ with $\eta \leq_{c} \xi' \leq_{+} \zeta$ is called the shadow of $\eta$ in $\zeta$ and is denoted by $\shadow{\zeta}{\eta}$. \end{itemize} \end{definition} We want to mention that in the literature barriers defined as in Definition \ref{def:Intro}(i) are sometimes called ``right-barriers'' in contrast to ``left-barriers''. The shadow $\shadow{\zeta}{\eta}$ exists whenever the set of possible candidates is not empty, i.e.\ if there exists $\xi$ such that $\eta \leq_{c} \xi \leq_{+} \zeta$. This existence result was first shown by Rost \cite{Ro71}. Later Beiglb\"ock and Juillet \cite{BeJu16} rediscovered this object in the context of martingale optimal transport and coined the name shadow. In the following we use the notation $\mathrm{Law}(X;A)$ for the (sub-)probability measure which is given by the push-forward of $X$ under the restriction of $\mathbb{P}$ to the event $A$ (cf.\ Section \ref{ssec:Notation}). \begin{theorem} \label{thm:intro} Let $\mu \leq_{c} \nu$ and let $\tau$ be a solution of $\mathrm{SEP}(\mu,\nu)$.
The following are equivalent: \begin{itemize} \item [(i)] There exists a right-continuous $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ which is non-negative, monotonously increasing and satisfies $\mathbb{P}[\exists s < t : X_s = X_{t} = l] = 0$ for all $l \geq 0$, and a closed barrier $\mathcal{R} \subset [0, \infty) \times \mathbb{R}$ such that \begin{equation*} \tau = \inf \{ t \geq 0 : (X_t,B_t) \in \mathcal{R}\} \quad a.s. \end{equation*} \item [(ii)] There exists a left-continuous $\mathcal{F}$-time-change $(T_l)_{l \geq 0}$ with $T_0 = 0$, $T_\infty = \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$ such that for all $l \geq 0$ we have \begin{equation} \label{eq:ShadowResid} \mathrm{Law}(B_{\tau}; \tau \geq T_l) = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \tau \geq T_l)}. \end{equation} \end{itemize} Moreover, we may choose $T_l = \inf \{t \geq 0: X_t \geq l\}$ or $X_t := \sup \{l \geq 0: T_l \leq t\}$ and $\mathcal{R} := \{ (l,x) \in [0, \infty) \times \mathbb{R} : U_{\mathrm{Law}(B_{T_l \land \tau})} (x) = U_{\nu}(x) \}$, respectively, where $U_\cdot$ denotes the potential function of a finite measure (see Definition \ref{def:PotentialFunction}). \end{theorem} \begin{remark} We want to stress that this theorem also holds for randomized stopping times (cf.\ Theorem \ref{thm:MainEqui}). This concerns the implication $(ii) \Rightarrow (i)$, as part (i) already ensures that the randomized stopping time is induced by a (non-randomized) stopping time. \end{remark} To the best of our knowledge, the only known connection between shadows and the Skorokhod Embedding Problem is implicitly through the left-monotone embedding because it is uniquely characterized by the property that the induced martingale coupling between the initial and the terminal marginal distribution is precisely the left-curtain coupling (see below).
Theorem \ref{thm:intro} shows that this connection is not by accident, but just a special case of an intimate connection between shadows and barrier solutions rooted in potential theory. \begin{figure} \caption{This is a sketch of the support and densities of the measures appearing in \eqref{eq:RootExpl}.} \label{fig:RootExp} \end{figure} \begin{figure} \caption{This is a sketch of the support and densities of the measure appearing in \eqref{eq:LMExpl}.} \label{fig:LmExp} \end{figure} Since the time change in Theorem \ref{thm:intro} is given by $T_l = \inf \{t \geq 0 : X_t \geq l\}$ it is straightforward to compute the time changes for well-known examples. In the case of the Root-embedding, we have $X^{r}_t := t$, $T^r_l = l$ and Property \eqref{eq:ShadowResid} turns into \begin{equation}\label{eq:RootExpl} \mathrm{Law}(B_\tau; \tau \geq l) = \shadow{\nu}{\mathrm{Law}(B_l; \tau \geq l)} \end{equation} for all $l \geq 0$. The measure $\mathrm{Law}(B_\tau; \tau \geq l)$ is the projection of $\mathrm{Law}((\tau,B_{\tau}); \tau \geq l)$ onto the second (the spatial) component. In the SEP context the joint law of $(\tau,B_\tau)$ describes when and where the Brownian motion is stopped. Since $\tau$ is a barrier stopping time the support of $\mathrm{Law}((\tau,B_{\tau}); \tau \geq l)$ is on the boundary of the barrier intersected with $[l,\infty)\times \mathbb{R}$. This is depicted on the left hand side of Figure \ref{fig:RootExp}. By \eqref{eq:RootExpl} we can characterize this measure using information from time $l$ only. For each $l \geq 0$, it is given as the shadow of $\mathrm{Law}(B_l; \tau \geq l)$ in the prescribed terminal distribution $\nu$. We have a similar situation in the case of the left-monotone embedding.
We have $X^{lm}_t = \exp(-B_0)$, \begin{equation*} T^{lm}_l = \begin{cases} 0 & \exp(-B_0) \geq l \\ + \infty &\exp(-B_0) < l \end{cases} \end{equation*} and Property \eqref{eq:ShadowResid} becomes \begin{equation} \label{eq:LMExpl} \mathrm{Law}(B_{\tau}; B_0 \leq -\ln(l)) = \shadow{\nu}{\mathrm{Law}(B_0; B_0 \leq - \ln(l) )} \end{equation} for all $l \geq 0$. Again the measure $\mathrm{Law}(B_\tau; \tau \geq l)$ is the projection of $\mathrm{Law}((\tau,B_{\tau}); \tau \geq l)$ onto the second component and in the SEP context the latter measure is supported on the boundary of the barrier after time $l$ (left side of Figure \ref{fig:LmExp}). Recall that in the left-monotone phase space, the Brownian motion is only moving vertically. This time the characterization of $\mathrm{Law}(B_\tau; \tau \geq l)$ via the shadow of $\mathrm{Law}(B_0; B_0 \leq - \ln(l) )$ into $\nu$ is completely independent of $\tau$. In particular, \eqref{eq:LMExpl} yields that $\tau$ is the left-monotone embedding of $\mu$ into $\nu$ if and only if $(B_0,B_{\tau})$ is the left-curtain coupling of $\mu$ and $\nu$ (cf.\ \cite{BeJu16}). The shadow $\shadow{\nu}{\eta}$ of a measure $\eta$ in the probability measure $\nu$ is the most concentrated (in the sense of $\leq_{c}$) submeasure of $\nu$ which can be reached by an embedding of $\eta$ into $\nu$ via a (randomized) $\mathcal{F}$-stopping time (cf.\ Lemma \ref{lemma:ConvOrder}). Hence, Theorem \ref{thm:intro} characterizes general barrier solutions as those solutions $\tau$ for which there exists a random time given by $(T_l)_{l \geq 0}$ such that for all $l \geq 0$ the mass which is not stopped before $T_l$ under $\tau$ is allocated by $\tau$ as concentrated as possible in the target distribution $\nu$ without interference with the mass that is stopped before $T_l$.
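A toy illustration of the shadow from Definition \ref{def:Intro}(iii): for an atom $\eta = m\delta_x$, the relation $\eta \leq_{c} \xi$ only pins down the mass and the barycenter of $\xi$, so $\shadow{\nu}{\eta}$ is the submeasure of $\nu$ with that mass and barycenter which minimizes every convex integrand, in particular $y \mapsto y^2$. For a measure $\nu$ on four atoms (our own hypothetical example, not from the text) an exhaustive integer search recovers it:

```python
import itertools

# nu puts mass 1/4 on each of -2, -1, 1, 2; eta = (1/2) * delta_0.
# Candidate weights are k/40 with k = 0..10, so the cap 1/4 is k = 10.
ys = [-2, -1, 1, 2]
K = 10
best, best_cost = None, None
for ks in itertools.product(range(K + 1), repeat=4):
    if sum(ks) != 20:                              # total mass 20/40 = 1/2
        continue
    if sum(k * y for k, y in zip(ks, ys)) != 0:    # barycenter 0
        continue
    cost = sum(k * y * y for k, y in zip(ks, ys))  # integral of y^2
    if best_cost is None or cost < best_cost:
        best, best_cost = ks, cost

print(best)   # (0, 10, 10, 0): the shadow is (1/4)(delta_{-1} + delta_1)
```

The result $(1/4)(\delta_{-1}+\delta_1)$ is the least spread-out submeasure of $\nu$ with mass $1/2$ and mean $0$, exactly the ``most concentrated'' submeasure described above.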
\subsection*{Interpolation} If the time-change $(T_l)_{l \geq 0}$ is measurable w.r.t.\ the completion of the natural filtration $\mathcal{F}^B$ generated by the Brownian motion (as is the case for the Root-embedding and the left-monotone embedding), we can assume that the Brownian motion $B$ is defined on the canonical path space $\Omega = C([0, \infty))$ and we can consider the natural shift operator $\theta$. In this case, for all $\lambda \in (0,\infty)$ we obtain an interpolation $(R^\lambda_l)_{l \geq 0}$ between two $\mathcal{F}$-time-changes $(T_l ^1)_{l \geq 0}$ and $(T_l^2)_{l \geq 0}$ by \begin{equation*} R^\lambda_l := T^1 _{l \land \lambda} + (T^2_{l-\lambda} \circ \theta_{T^1_\lambda}) \mathds{1}_{\{l \geq \lambda\}} = \begin{cases} T_l^1 & l \leq \lambda \\ T_\lambda^1 + T^2 _{l-\lambda} \circ \theta_{T^1_\lambda} & l > \lambda \end{cases}. \end{equation*} For the Root time-change $(T_l ^{r})_{l \geq 0}$ and the left-monotone time-change $(T_l^{lm})_{l \geq 0}$ the interpolation becomes \begin{equation*} R^{\lambda} _l := T^{r}_{l \land \lambda} + (T^{lm} _{l - \lambda} \circ \theta _{T^{r}_{\lambda}}) \mathds{1}_{\{l \geq \lambda\}} = \begin{cases} l & l \leq \lambda \\ l & \exp(-B_\lambda) + \lambda \geq l > \lambda \\ + \infty & \exp(-B_\lambda) + \lambda < l, l > \lambda \end{cases}. \end{equation*} A solution $\tau ^{\lambda}$ to $\mathrm{SEP}(\mu,\nu)$ that satisfies property \eqref{eq:ShadowResid} w.r.t.\ $(R^\lambda _l)_{l \geq 0}$ is by Theorem \ref{thm:intro} a barrier solution w.r.t.\ the level-process \begin{equation} \label{eq:lvlPrc} X_t ^\lambda := \sup \{l \geq 0 : R_l^\lambda \leq t\} = \begin{cases} t & t < \lambda \\ \lambda + \exp(-B_0) & t \geq \lambda \end{cases}. \end{equation} A natural guess is that $\lambda \mapsto \tau ^{\lambda}$ is a reasonable interpolation between the left-monotone embedding ($\lambda \uparrow + \infty$) and the Root embedding $(\lambda \downarrow 0)$.
This is indeed the case: \begin{proposition} \label{prop:Interpolation} Let $\lambda \in (0, \infty)$. We define the stochastic process $(X^{\lambda}_t)_{t \geq 0}$ as in \eqref{eq:lvlPrc}. There exists a barrier $\mathcal{R}^{\lambda} \subset [0, \infty) \times \mathbb{R}$ such that the first hitting time \begin{equation*} \tau ^{\lambda} := \inf \{t \geq 0: (X_t ^{\lambda}, B_t) \in \mathcal{R}^{\lambda} \} \end{equation*} is a solution to $\mathrm{SEP}(\mu,\nu)$. Moreover, $\mathrm{Law}(B,\tau ^\lambda)$ (as a measure on $\Omega \times [0, \infty)$) converges weakly to $\mathrm{Law}(B,\tau ^r)$ as $\lambda \rightarrow \infty$ and, if $\mu$ is atomless, converges weakly to $\mathrm{Law}(B,\tau ^{lm})$ as $\lambda \rightarrow 0$. \end{proposition} \begin{figure} \caption{A sketch of two sample paths of $(X^\lambda_t,B_t)_{t \in [0,\tau^{\lambda}]}$.} \end{figure} \begin{remark} The choice of the Root embedding and the left-monotone embedding as the endpoints of the interpolation is partially arbitrary. As long as both time-changes are $\mathcal{F}^B$-measurable, this procedure can be applied to any two barrier solutions to obtain a new mixed barrier solution (see Lemma \ref{lemma:Nesting}). The continuity and convergence are then a question of the stability properties of the corresponding embeddings. Other approaches to interpolate (in some sense) between two different barrier solutions can be found in \cite{CoHo07} and \cite{GaObZo19}. \end{remark} \subsection*{Multi-Marginal Embeddings} Theorem \ref{thm:intro} can be extended to the case that the barrier solution is ``delayed'', in the sense that the solution can be written as the first hitting time of a barrier after it surpassed a fixed stopping time $\sigma$. \begin{proposition} \label{prop:ShiftedThm} Let $\tau$ be an $\mathcal{F}$-stopping time that solves $\mathrm{SEP}(\mu,\nu)$. Let $\sigma \leq \tau$ be another $\mathcal{F}$-stopping time.
The following are equivalent: \begin{itemize} \item [(i)] There exists a right-continuous $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ which is non-negative, monotonously increasing and satisfies $\mathbb{P}[\exists s < t : X_s = X_{t} = l] = 0$ for all $l \geq 0$, and a closed barrier $\mathcal{R} \subset [0, \infty) \times \mathbb{R}$ such that \begin{equation*} \tau = \inf \{t \geq \sigma : (X_t,B_t) \in \mathcal{R} \} \quad a.s. \end{equation*} \item [(ii)] There exists a left-continuous $\mathcal{F}$-time-change $(T_l)_{l \geq 0}$ with $T_0 = 0$, $T_\infty = \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$ such that for all $l \geq 0$ we have \begin{equation*} \mathrm{Law}(B_\tau; \tau \geq \sigma \lor T_l) = \shadow{\nu}{\mathrm{Law}(B_{\sigma \lor T_l}; \tau \geq \sigma \lor T_l)}. \end{equation*} \end{itemize} \end{proposition} Motivated by financial applications, there has been increased interest in the multi-marginal Skorokhod Embedding Problem and in particular in multi-marginal barrier solutions (cf.\ \cite{BeCoHu17b, NuStTa17}). Since this is essentially a sequence of delayed barrier solutions, we can extend Theorem \ref{thm:intro} to this case by an inductive application of Proposition \ref{prop:ShiftedThm}. \begin{corollary} \label{cor:MultiMarginal} Let $\mu \leq_{c} \nu _1 \leq_{c} ... \leq_{c} \nu_n$ be probability measures and $\tau _1 \leq ... \leq \tau _n$ an increasing sequence of uniformly integrable $\mathcal{F}$-stopping times such that $\tau _i$ is a solution to $\mathrm{SEP}(\mu, \nu_i)$ for all $1 \leq i \leq n$.
The following are equivalent: \begin{itemize} \item [(i)] There exists a suitable process $(X_t)_{t \geq 0}$, and closed barriers $\mathcal{R}^1,...,\mathcal{R}^n \subset [0, \infty) \times \mathbb{R}$ such that \begin{align*} \tau _1 &= \inf\{ t \geq 0: (X_t,B_t) \in \mathcal{R}^1\} \quad\text{ and} \\ \tau _i &= \inf\{ t \geq \tau _{i-1}: (X_t,B_t) \in \mathcal{R}^i\} \quad \text{for all } 2 \leq i \leq n. \end{align*} \item [(ii)] There exists a suitable time-change $(T_l)_{l \geq 0}$, such that for all $l\geq 0$ we have \begin{align*} \mathrm{Law}(B_{\tau _1}; \tau _1 \geq T_l) &= \shadow{\nu _1}{\mathrm{Law}(B_{T_l}; \tau_1 \geq T_l)} \quad\text{ and} \\ \mathrm{Law}(B_{\tau _i}; \tau _i \geq \tau_{i-1} \lor T_l) &= \shadow{\nu _i}{\mathrm{Law}(B_{\tau _{i-1} \lor T_l}; \tau _i \geq \tau_{i-1} \lor T_l)} \quad \text{for all } 2 \leq i \leq n. \end{align*} \end{itemize} \end{corollary} \subsection*{Another Perspective on Theorem \ref{thm:intro}} We will prove Theorem \ref{thm:intro} in Section \ref{sec:MainResult} using potential theory. However, there is an alternative point of view on this theorem using Choquet-type representations of the barrier stopping time $\tau$ and the terminal law $\mathsf{Law}(B_\tau)$ of the stopped process. The most primitive version of a barrier embedding is a first hitting time of the form \begin{equation*} \tau^F := \inf \{t \geq 0 : B_t \in F\} = \inf \{ t \geq 0 : (t,B_t) \in [0, \infty) \times F\} \end{equation*} where $F \subset \mathbb{R}$ is a closed set. The terminal distribution $\mathrm{Law}(B_{\tau ^F})$ w.r.t.\ this stopping time can be characterized using the notion of Kellerer dilations.
Given a closed set $F \subset \mathbb{R}$ the Kellerer dilation is defined as the probability kernel \begin{equation} \label{eq:KellererDilation} K^F(x,dy) = \begin{cases} \frac{x^+ - x}{x^+ - x^-} \, \delta_{x^-} + \frac{x - x^-}{x^+ - x^-} \, \delta_{x^+} \quad & x \not \in F \\ \delta_x & x \in F \end{cases} \end{equation} where $x^+ = \inf (F \cap [x, \infty))$ and $x^- = \sup (F \cap (-\infty,x])$. As a direct consequence of \cite[Satz 25]{Ke73}, for every closed set $F \subset \mathbb{R}$ a stopping time $\tau$ satisfies $\tau = \tau ^F$ a.e.\ if and only if $\mathrm{Law}(B_\tau) = \mathrm{Law}(B_0)K^F$. The main idea behind Theorem \ref{thm:intro} is now the following: In the same way that a barrier solution $\tau$ can be represented as a composition of first hitting times $(\tau ^{F_t})_{t \geq 0}$ for an increasing family of closed sets $(F_t)_{t \geq 0}$, the terminal law $\mathsf{Law}(B_\tau)$ w.r.t.\ a stopping time $\tau$ satisfying the shadow relation \eqref{eq:ShadowResid} can be represented using Kellerer dilations $(K^{F_a})_{a \in [0,1]}$ for an increasing family of closed sets $(F_a)_{a \in [0,1]}$. Since for fixed $F$, $\tau ^F$ and $K^F$ are in a one-to-one correspondence, these two representations, one on the level of stopping times and one on the level of target distributions, are two sides of the same coin. In fact, up to reparametrization of the index set, these two families can be chosen identical. Let us explain these two representations in more detail. To keep the notation simple, we will only consider the case of the Root-embedding ($X_t^r = t, T_l ^r = l$). For all $\mathcal{F}^B$-stopping times $\tau_1, \tau _2$ and $s \geq 0$, we define the composition \begin{equation*} C_{s}(\tau _1, \tau _2) := \tau _1 \land s + \tau _2 \circ \theta_{\tau _1 \land s} \end{equation*} where $\theta$ denotes the shift operator on the path space. The composition $C_{s}(\tau _1, \tau _2)$ is again a stopping time.
We also inductively define the stopping times \begin{equation*} C_{s_1, ... , s_n}(\tau_1, ... , \tau _n) := C_{s_n}(C_{s_1, ... , s_{n-1}}(\tau _1, ... , \tau _{n-1}), \tau _n). \end{equation*} for $0 \leq s_1 \leq ... \leq s_n$. For all $s > 0$ and for all closed sets $F$, we have $C_s(\tau^F,\tau^F) = \tau^F$. Conversely, if there exists $s \geq 0$ and stopping times $\tau_1, \tau_2$ s.t.\ $\tau^F = C_s(\tau_1,\tau_2)$ for a closed set $F \subset \mathbb{R}$, then $\tau_1 \land s = \tau ^F \land s$ and $\tau _2 \circ \theta_{\tau _1 \land s} = \tau^F \circ \theta_{\tau _1 \land s}$. Therefore, stopping times of the form $\tau^F$ are ``extremal'' or ``atomic'' w.r.t.\ the composition operation $C$. \begin{lemma} Let $\tau$ be a stopping time. The following are equivalent: \begin{itemize} \item[(i)] There exists a right-barrier $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ s.t.\ $\tau = \inf \{t \geq 0 : (t,B_t) \in \mathcal{R}\}$. \item[(ii)] There exists an increasing family of closed sets $(F_t)_{t \geq 0}$ such that \begin{equation*} \tau = \lim _{n \rightarrow \infty} C_{2^{-n}, ... , n} (\tau ^{F_{2^{-n}}}, ... , \tau ^{F_n}). \end{equation*} \end{itemize} In this case a possible right-barrier is given by $\mathcal{R} := \overline{\bigcup _{t \geq 0} [t, \infty) \times F_t}$. \end{lemma} The proof of this equivalence is straightforward using the continuity of Brownian motion. We omit the details. On the level of measures, we obtain a similar representation of the shadow. For two probability measures $\zeta_1, \zeta _2$ and all $\alpha \in [0,1]$ the convex combination $(1- \alpha) \zeta_1 + \alpha \zeta_2$ is again a probability measure. By a result of Kellerer \cite[Theorem 1]{Ke73}, for every probability measure $\eta$ the extremal elements of the convex set $\{ \zeta : \eta \leq_{c} \zeta\}$ are given by $\left\{ \eta K^F \, : \, F \subset \mathbb{R} \text{ closed}\right\}$. 
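The dilation \eqref{eq:KellererDilation} is elementary to implement for a finite closed set $F$. The Python sketch below (the measure and the set $F$ are our own sample choices, not from the text) pushes a discrete measure through $K^F$ and confirms that mass and barycenter are preserved, i.e.\ that $K^F$ is a martingale kernel:

```python
import bisect

def kellerer_dilation(x, F):
    # K^F(x, .): a point x in F stays put; otherwise its mass is split between
    # the neighbours x^- and x^+ in F so that the barycenter is preserved
    i = bisect.bisect_left(F, x)
    if i < len(F) and F[i] == x:
        return {x: 1.0}
    lo, hi = F[i - 1], F[i]     # x^- and x^+ (assumes min F < x < max F)
    return {lo: (hi - x) / (hi - lo), hi: (x - lo) / (hi - lo)}

F = [-1.0, 2.0, 3.0]            # sorted finite closed set
eta = {0.0: 0.5, 3.0: 0.5}      # discrete input measure

pushed = {}
for x, m in eta.items():
    for y, w in kellerer_dilation(x, F).items():
        pushed[y] = pushed.get(y, 0.0) + m * w

mass = sum(pushed.values())
mean_in = sum(x * m for x, m in eta.items())
mean_out = sum(y * m for y, m in pushed.items())
print(pushed, mass, mean_in, mean_out)
```

Here the atom at $0$ splits between its neighbours $-1$ and $2$ in $F$, while the atom at $3 \in F$ is left untouched, matching the two cases of \eqref{eq:KellererDilation}.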
\begin{lemma} Let $\tau$ be a stopping time and set $l_b := \sup \{l \geq 0: \mathbb{P}[\tau \geq l] \geq b\}$ for $b \in [0,1]$. The following are equivalent: \begin{itemize} \item [(i)] For all $l \geq 0$ we have $\mathrm{Law}(B_\tau; \tau \geq l) = \shadow{\nu}{\mathrm{Law}(B_l; \tau \geq l)}$. \item [(ii)] There exists an increasing family of closed sets $(F_a)_{a \in [0,1]}$ such that \begin{equation*} \mathrm{Law}(B_\tau) = \int _0 ^1 \eta_{1-a} K^{F_a} \, \mathrm{d} a \end{equation*} where the probability measures $\eta_a$ are defined by $\eta _a := \lim_{\varepsilon \rightarrow 0} \varepsilon ^{-1} \left( \overline{\eta}^{a + \varepsilon} - \overline{\eta}^a \right)$, $a \in [0,1]$, and $\overline{\eta}^\alpha := \mathrm{Law}(B_\tau; \tau \geq l_\alpha) - \frac{\mathbb{P}[\tau \geq l_\alpha] - \alpha}{\mathbb{P}[\tau = l_\alpha]}\mathrm{Law}(B_\tau; \tau = l_\alpha)$ for $\alpha \in [0,1]$. \end{itemize} In this case we have $\shadow{\nu}{\mathrm{Law}(B_{l_b}; \tau \geq l_b)} = \int_0 ^b \eta_{1-a} K^{F_a} \, \mathrm{d} a$ for all $b \in [0,1]$. \end{lemma} Similar to \cite[Proposition 2.7]{BeJu16b} one can show that (i) implies (ii). The reversed implication is an application of Lemma \ref{lemma:ShadowDecomp}. We leave the details to the reader. \section{Related Literature} The Skorokhod Embedding Problem goes back to Skorokhod's work \cite{Sk65} in 1965. After his own solution to the embedding problem, this problem gained considerable attention in the literature and a wide range of different embeddings exploiting different mathematical tools were found. The survey \cite{Ob04} alone covers more than 20 different solutions. Moreover, several interesting variants of the Skorokhod Embedding have been considered. Recently, there has been increased interest in a variant of the Skorokhod Embedding Problem which asks for embeddings that minimize or maximize a predetermined cost function of space and time.
This variant of the Skorokhod Embedding Problem has a direct connection to robust mathematical finance, first noticed by Hobson \cite{Ho98}. For further background we refer to \cite{Ho03}. A novel mathematical exploration of properties of the optimal Skorokhod Embedding Problem in combination with optimal transport can be found in \cite{BeCoHu17}. Further variants are for instance the extensions to the embedding of multiple distributions (cf.\ \cite{BeCoHu17b}) and to higher dimensions (cf.\ \cite{GhKiPa19}). Among the first solutions to the Skorokhod Embedding Problem was Root's construction \cite{Ro69} of a barrier solution in the time-space phase space in 1969. Shortly after, Rost \cite{Ro76} proved that the Root-embedding is the unique embedding with minimal variance among all embeddings and provided an alternative construction of this embedding based on the potential theory for Markov processes. The Root-embedding and properties of the corresponding barrier are still the subject of current research \cite{CoWa13,GaObZo19}. Moreover, the Root-embedding was recently used to construct a counterexample to the Cantelli conjecture \cite{KlKu15}. The Root-embedding is presumably the most prominent barrier solution to the Skorokhod Embedding Problem. However, there are several other embeddings which can be characterized as first hitting times of barriers in a different phase space \cite{AzYo79, Va83}. The shadow for finite measures on the real line was introduced by Beiglb\"ock and Juillet \cite{BeJu16} as the main tool in their construction of the left-curtain coupling. They established important properties such as the associativity law and continuity, and coined the name shadow. Nevertheless, the essential concept of the shadow as well as its existence in a very broad framework already appeared in \cite{Ro71}. The shadow is used to study properties of the left-curtain coupling (cf.\ \cite{BeJu16}, \cite{Ju14}, \cite{HoNo17}, \cite{HoNo21}).
Furthermore, the shadow can be used to construct and characterize a whole family of martingale couplings on the real line \cite{BeJu16b}, as well as finite-step martingales \cite{NuStTa17} and solutions to the peacock problem \cite{BrJuHu20}. To the best of our knowledge, the only known connection with the Skorokhod Embedding Problem so far is implicitly through the left-monotone embedding because it is uniquely characterized by the property that the induced martingale coupling between the initial and the terminal marginal distribution is precisely the left-curtain coupling. \section{Preliminary Results} \subsection{Notation} \label{ssec:Notation} $\Omega$ is a Polish space equipped with the Borel $\sigma$-algebra, $\mathcal{F}$ is a right-continuous filtration on $\Omega$ and $B$ is an $\mathcal{F}$-Brownian motion on the complete filtered probability space $(\Omega, \mathcal{B}(\Omega), \mathbb{P}, \mathcal{F})$. We use the notation $\mathrm{Law}(X;A)$ for the (sub-)probability measure which is given by the push-forward of the random variable $X$ under the restriction of $\mathbb{P}$ to the Borel set $A$. Alternatively, we sometimes use the notation $X_{\#}(\mathbb{P}_{|A})$ for this object. Further, we denote the set of finite (resp.\ probability) measures on a measurable space $\mathsf{X}$ by $\mathcal{M}(\mathsf{X})$ (resp.\ $\mathcal{P}(\mathsf{X})$). In the case $\mathsf{X} = \mathbb{R}$, we denote by $\mathcal{M}_1(\mathbb{R})$ (resp.\ $\mathcal{P}_1(\mathbb{R})$) the subset of finite (resp.\ probability) measures with finite first moment. We equip $\mathcal{M}_1(\mathbb{R})$ with the initial topology generated by the functionals $(I_f)_{f \in C_b(\mathbb{R}) \cup \{\vert \cdot \vert\}}$ where \begin{equation*} I_f : \mathcal{M}_1(\mathbb{R}) \ni \pi \mapsto \int _{\mathbb{R}} f \, \mathrm{d} \pi \in \mathbb{R}, \end{equation*} $C_b(\mathbb{R})$ is the set of continuous and bounded functions, and $\vert \cdot \vert$ denotes the absolute value function.
We denote this topology on $\mathcal{M}_1(\mathbb{R})$ by $\mathcal{T}_1$. Finally, we define two order relations on $\mathcal{M}_1(\mathbb{R})$. We say that $\mu \in \mathcal{M}_1(\mathbb{R})$ is smaller than or equal to $\mu ' \in \mathcal{M}_1(\mathbb{R})$ in convex order, $\mu \leq_{c} \mu'$, if \begin{equation} \label{eq:OrderRelation} \int _{\mathbb{R}} \varphi \, \mathrm{d} \mu \leq \int _{\mathbb{R}} \varphi \, \mathrm{d} \mu ' \end{equation} holds for all convex $\varphi$, and $\mu$ is smaller than or equal to $\mu'$ in positive order, $\mu \leq_{+} \mu'$, if \eqref{eq:OrderRelation} holds for all non-negative $\varphi$. \subsection{Randomized Stopping Times} The product space $\Omega \times [0,\infty)$ equipped with the product topology and Borel $\sigma$-algebra is again a Polish space. \begin{definition} A randomized stopping time (RST) w.r.t.\ $\mathbb{P}$ is a subprobability measure $\xi$ on $\Omega \times [0, \infty)$ such that the projection of $\xi$ onto $\Omega$ is $\mathbb{P}$ and there exists a disintegration $(\xi_\omega)_{\omega \in \Omega}$ of $\xi$ w.r.t.\ $\mathbb{P}$ such that \begin{equation} \label{eq:DecompRST} \rho _u : \omega \mapsto \inf\{t \geq 0 : \xi_\omega[0,t] \geq u\} \end{equation} is an $\mathcal{F}$-stopping time for all $u \in [0,1]$. We call a RST $\xi$ finite if $\xi$ is a probability measure. \end{definition} We equip the space of RST with the topology of weak convergence of measures on $\Omega \times [0, \infty)$, i.e.\ the continuity of functionals $\xi \mapsto \int \varphi \, \mathrm{d} \xi$ for all $\varphi \in C_b(\Omega \times [0, \infty))$. The RST-property is closed under this topology (cf.\ \cite[Corollary 3.10]{BeCoHu17}). Any $\mathcal{F}$-stopping time $\tau$ naturally induces a RST by $\xi^{\tau} := \mathrm{Law}_{\mathbb{P}}(B,\tau)$. Conversely, we can represent any randomized stopping time as a usual stopping time by enlarging the filtration.
\begin{lemma} [{\cite[Theorem 3.8]{BeCoHu17}}] \label{lemma:ReprRST} For every RST $\xi$ there exists a $(\mathcal{B}([0,1]) \times \mathcal{F}_t)_{t \geq 0}$-stopping time $\overline{\tau} ^\xi$ on the probability space $([0,1] \times \Omega, \mathcal{B}([0,1] \times \Omega), \overline{\mathbb{P}})$, where $\overline{\mathbb{P}}$ is the product of the Lebesgue measure and $\mathbb{P}$, such that \begin{equation*} \xi = \mathrm{Law}_{\overline{\mathbb{P}}}(\overline{\mathsf{Id}},\overline{\tau}^\xi) \end{equation*} where $\overline{\mathsf{Id}} : (u,\omega) \mapsto \omega$. Moreover, $\overline{B} : (u, \omega) \mapsto B(\omega)$ is a Brownian motion on $([0,1] \times \Omega, \mathcal{B}([0,1] \times \Omega), \overline{\mathbb{P}})$. \end{lemma} This representation is useful to justify the application of known results on stopping times to RSTs and will be used in the following. For further literature on randomized stopping times we refer to \cite{BeCoHu17} and references therein. Provided that $\mathrm{Law}(B_0) = \mu \leq_{c} \nu$, we say that $\xi$ is a solution of $\mathrm{SEP}(\mu,\nu)$ if \begin{align*} \sup _{s \geq 0} \int _{\Omega \times [0,\infty)} \vert B_{s \land t} \vert \, \mathrm{d} \xi(\omega,t) < + \infty \quad \text{and} \quad ((\omega,t) \mapsto B_t(\omega))_{\#} \xi = \nu. \end{align*} If $\xi$ is induced by an $\mathcal{F}$-stopping time $\tau$, this definition is consistent with the definition of $\mathrm{SEP}(\mu,\nu)$ in the introduction. Especially in Section \ref{sec:MainResult}, we will use the notational convention that $(\omega,t)$ always refers to an element of $\Omega \times [0, \infty)$. In particular, we will write $\xi[t \geq X]$ instead of $\xi[\{(\omega,t) : t \geq X(\omega)\}]$ where $X$ is a random variable and $\xi$ an RST. \subsection{Potential Theory} \label{ssec:PotentialTheory} Potential theory is known to be a useful tool when dealing with barrier solutions (cf.\ \cite{Ro76}, \cite{CoWa13}) and the shadow (cf.\ \cite{Ro71}, \cite{BeJu16}).
Since it is also a central part of our proof of Theorem \ref{thm:intro}, we recall some results below. \begin{definition} \label{def:PotentialFunction} Let $\eta \in \mathcal{M}_1$. The potential function of $\eta$ is defined by \begin{equation*} U_\eta : \mathbb{R} \rightarrow [0, \infty), \quad U_\eta (x) := \int _{\mathbb{R}} |y - x| \, \mathrm{d} \eta (y). \end{equation*} \end{definition} Since elements of $\mathcal{M}_1$ have finite first moments, the potential function is always well-defined. \begin{lemma}[{cf.\ \cite[Proposition 4.2]{BeJu16}, \cite[p.\ 335]{Ob04}}] \label{lemma:ConvOrder} Let $\mu, \nu \in \mathcal{P}_1(\mathbb{R})$. The following are equivalent: \begin{itemize} \item [(i)] $\mu \leq_{c} \nu$ \item [(ii)] $U_\mu \leq U_\nu$ \item [(iii)] There exists a solution to $\mathrm{SEP}(\mu,\nu)$. \end{itemize} \end{lemma} The equivalence between (i) and (ii) is not restricted to probability measures. Since both the convex order and the order of the potential functions are invariant w.r.t.\ scaling with positive factors, for all $\eta, \zeta \in \mathcal{M}_1$ with $\eta(\mathbb{R}) = \zeta(\mathbb{R})$ we have $\eta \leq_{c} \zeta$ if and only if $U_{\eta} \leq U_\zeta$. \begin{lemma}[{cf.\ \cite[Proposition 4.1]{BeJu16}}] \label{lemma:characPotF} Let $m \in [0,\infty)$ and $x^* \in \mathbb{R}$. For a function $u:\mathbb{R} \rightarrow \mathbb{R}$ the following statements are equivalent: \begin{enumerate} \item [(i)] There exists a finite measure $\mu \in \mathcal{M}_1$ with mass $\mu(\mathbb{R}) = m$ and barycenter $x^* = \int _{\mathbb{R}} x \, \mathrm{d} \mu (x)$ such that $U_\mu = u$. \item [(ii)] The function $u$ is non-negative, convex and satisfies \begin{equation} \label{eq:characPotF} \lim _{x \rightarrow \pm \infty} u(x) - m|x - x^*| = 0. \end{equation} \end{enumerate} Moreover, for all $\mu, \mu' \in \mathcal{M}_1$ we have $\mu = \mu'$ if and only if $U_\mu = U_{\mu'}$.
\end{lemma} \begin{lemma} \label{lemma:PropPotf} Let $\eta$ be a positive measure on $\mathbb{R}$ and let $x \in \mathbb{R}$. If there exists an $\varepsilon > 0$ such that $U_{\eta}$ is affine on $[x-\varepsilon, x+ \varepsilon]$, then $x \not \in \mathrm{supp}(\eta)$. \end{lemma} \begin{proof} The claim follows from the observation that the potential function of the measure $\eta$ satisfies $\frac{1}{2} U_\eta '' = \eta$ in a distributional sense (cf.\ \cite[Proposition 2.1]{HiRo12}). \end{proof} \begin{corollary} \label{cor:EquaPotfToZero} Let $\mu \leq_{c} \nu$ and let $\tau$ be a solution to $\mathrm{SEP}(\mu,\nu)$. We have \begin{equation*} \mathbb{P}[\tau > 0, U_{\mu}(B_0) = U_{\nu}(B_0)] = 0. \end{equation*} \end{corollary} \begin{proof} Let $A := \{x \in \mathbb{R} : U_{\mu}(x) = U_{\nu}(x)\}$ and set $\eta := \mathrm{Law}(B_{0}; B_{0} \in A)$. Since $\eta$ is concentrated on $A$ and $U_\mu = U_\nu$ on $A$, Fubini's Theorem yields \begin{align*} 0 = \int U_{\nu} - U_{\mu} \, \mathrm{d} \eta = \mathbb{E}[U_{\eta}(B_\tau) - U_{\eta}(B_0)] = \mathbb{E}\left[\left(U_{\eta}(B_\tau) - U_{\eta}(B_0)\right) \mathds{1}_{\{\tau > 0\}}\right]. \end{align*} Since $U_\eta$ is a convex function and $(B_{t \land \tau})_{t \geq 0}$ is a uniformly integrable martingale, the (conditional) Jensen inequality yields that $U_\eta$ is $\mathbb{P}$-a.s.\ affine at $B_0$ on the set $\{\tau > 0\}$. Hence, by Lemma \ref{lemma:PropPotf} the claim follows. \end{proof} \begin{lemma} \label{lemma:T1Conv} Let $(\mu_n)_{n \in \mathbb{N}}$ be a sequence in $\mathcal{M}_1(\mathbb{R})$. The following are equivalent: \begin{itemize} \item [(i)] The sequence $(\mu_n)_{n \in \mathbb{N}}$ is weakly convergent and there exists a finite measure $\eta \in \mathcal{M}_1(\mathbb{R})$ such that \begin{equation*} \int _{\mathbb{R}} \varphi \, \mathrm{d} \mu_n \leq \int _{\mathbb{R}} \varphi \, \mathrm{d} \eta \end{equation*} for all non-negative convex $\varphi$. \item [(ii)] The sequence $(\mu_n)_{n \in \mathbb{N}}$ is convergent under $\mathcal{T}_1$.
\item [(iii)] The sequence of potential functions is pointwise convergent and the limit is the potential function of a finite measure. \end{itemize} \end{lemma} \begin{proof} For the equivalence of (ii) and (iii) and the implication (i)$\Rightarrow$(ii) we refer to \cite[Lemma 3.6]{BrJuHu20} and \cite[Lemma 3.3]{BrJuHu20}. It remains to show that (ii) implies (i). Since $\mathcal{T}_1$ is by definition stronger than the weak topology, $(\mu_n)_{n \in \mathbb{N}}$ is weakly convergent. Moreover, by \cite[Proposition 7.1.5]{AmGiSa08} the convergence in $\mathcal{T}_1$ implies that \begin{equation*} \limsup _{K \rightarrow \infty} \sup _{n \in \mathbb{N}} \int _{\mathbb{R}} |x| \mathds{1}_{\{\vert x \vert \geq K\}} \, \mathrm{d} \mu_n(x) = 0. \end{equation*} Hence, there exists a sequence $(K_m)_{m \in \mathbb{N}}$ with $K_{m+1} \geq K_m \geq 1$ such that $$\sup _{n \in \mathbb{N}} \int _{\mathbb{R}} |x| \mathds{1}_{\{\vert x \vert \geq K_m\}} \, \mathrm{d} \mu_n(x) \leq 2^{-m}$$ for all $m \in \mathbb{N}$. Setting $K_0 := 0$, the measure \begin{equation*} \eta := \sum _{m = 1} ^{\infty} \sup _{n \in \mathbb{N}} \mu_n \left( [-K_m,-K_{m-1}] \cup [K_{m-1},K_m] \right) \left( \delta _{-K_m} + \delta_{K_m} \right) \end{equation*} is an element of $\mathcal{M}_1(\mathbb{R})$ which satisfies the desired properties. \end{proof} \subsection{Shadows} Recall the definition of the shadow in Definition \ref{def:Intro}. As direct consequences of this definition we obtain that \begin{equation*} \eta \leq_{+} \nu \Rightarrow \shadow{\nu}{\eta} = \eta \quad \text{and} \quad \eta \leq_{c} \eta' \Rightarrow \shadow{\nu}{\eta} \leq_{c} \shadow{\nu}{\eta'}. \end{equation*} In the following we collect further properties of the shadow.
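Before doing so, let us illustrate Definition \ref{def:Intro} with a simple example that can be computed by hand: let $\nu := \frac{1}{2} \delta_{-1} + \frac{1}{2} \delta_{1}$ and $\eta := \frac{1}{2} \delta_{0}$. Every measure $\theta \leq_{+} \nu$ is of the form $\theta = a \delta_{-1} + b \delta_{1}$ with $a, b \in [0,\frac{1}{2}]$, and $\eta \leq_{c} \theta$ forces $a + b = \frac{1}{2}$ (equal mass) and $b - a = 0$ (equal barycenter), i.e.\ $a = b = \frac{1}{4}$. The set of admissible measures is therefore a singleton and \begin{equation*} \shadow{\nu}{\eta} = \tfrac{1}{4} \delta_{-1} + \tfrac{1}{4} \delta_{1}. \end{equation*}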
\begin{lemma}[{\cite[Theorem 4.8]{BeJu16}}] \label{lemma:ShadowAssz} Let $\eta := \eta _1 + \eta _2 \leq_{c} \nu$. Then the shadow of $\eta_2$ in $\nu - \shadow{\nu}{\eta_1}$ exists and we have \begin{equation*} \shadow{\nu}{\eta} = \shadow{\nu}{\eta _1} + \shadow{\nu - \shadow{\nu}{\eta _1}}{\eta _2}. \end{equation*} \end{lemma} The statement in Lemma \ref{lemma:ShadowAssz} is the ``associativity law'' for shadows already mentioned in the introduction. \begin{corollary} \label{lemma:CharShad} Let $\mu \leq_{c} \nu$ be probability measures and $A \subset \mathbb{R}$ a Borel set such that $\mu(A) > 0$. If a solution $\tau$ of $\mathrm{SEP}(\mu,\nu)$ satisfies \begin{equation*} \forall \tau' \text{ solution of } \mathrm{SEP}(\mu,\nu) \, : \, \mathrm{Law}(B_{\tau}; B_0 \in A) \leq_{c} \mathrm{Law}(B_{\tau'}; B_0 \in A), \end{equation*} we have $\mathrm{Law}(B_\tau; B_0 \in A) = \shadow{\nu}{\mu _{|A}}$. \end{corollary} \begin{proof} If $\alpha := \mu(A) = 1$, there is nothing to show because $\shadow{\nu}{\mu_{|A}} = \nu = \mathrm{Law}(B_\tau; B_0 \in A)$. Assume $\alpha < 1$. Since $\tau$ is a solution to $\mathrm{SEP}(\mu,\nu)$, we have \begin{equation*} \mu_{|A} = \mathrm{Law}(B_0; B_0 \in A) \leq_{c} \mathrm{Law}(B_\tau; B_0 \in A) \leq_{+} \nu \end{equation*} and hence, by minimality of the shadow, we obtain $\shadow{\nu}{\mu_{|A}} \leq_{c} \mathrm{Law}(B_\tau; B_0 \in A)$. It remains to show that the reversed relation also holds. By definition of the shadow, we have $\mu_{|A} \leq_{c} \shadow{\nu}{\mu_{|A}}$ and Lemma \ref{lemma:ConvOrder} yields that there exists a solution $\tau^A$ to $\mathrm{SEP}(\alpha^{-1}\mu_{|A}, \alpha^{-1}\shadow{\nu}{\mu_{|A}})$. By Lemma \ref{lemma:ShadowAssz} we have \begin{equation*} \mu_{|A^c} \leq_c \nu - \shadow{\nu}{\mu_{|A}} \end{equation*} and again Lemma \ref{lemma:ConvOrder} yields the existence of a solution $\tau ^{A^c}$ to $\mathrm{SEP}((1-\alpha)^{-1}\mu_{|A^c}, (1-\alpha)^{-1}(\nu - \shadow{\nu}{\mu_{|A}}))$.
Since $\{B_0 \in A\} \in \mathcal{F}_0$, \begin{equation*} \tau' := \tau ^A \mathds{1}_{\{B_0 \in A\}} + \tau ^{A^c} \mathds{1}_{\{B_0 \not \in A\}} \end{equation*} is a solution to $\mathrm{SEP}(\mu,\nu)$ and thus \begin{equation*} \mathrm{Law}(B_{\tau}; B_0 \in A) \leq_{c} \mathrm{Law}(B_{\tau'}; B_0 \in A) = \alpha \mathrm{Law}(B_{\tau^A}) = \shadow{\nu}{\mu_{|A}}. \qedhere \end{equation*} \end{proof} \begin{corollary} \label{cor:ShadowOnEqualPart} Let $\mu \leq_{c} \nu$ and let $\tau$ be a solution to $\mathrm{SEP}(\mu,\nu)$. Let $A \in \mathcal{F}_0$ be such that $U_{\mu}(B_0) = U_{\nu}(B_0)$ on $A$. Then \begin{equation*} \shadow{\nu}{\mathrm{Law}(B_0; A^c)} = \mathrm{Law}(B_\tau; A^c). \end{equation*} \end{corollary} \begin{proof} Set $I := \{ x \in \mathbb{R} : U_{\mu}(x) < U_{\nu}(x)\}$. Since $I$ is the union of the irreducible components of $(\mu,\nu)$ (cf.\ \cite[Section A.1]{BeJu16}), for any solution $\tau'$ of $\mathrm{SEP}(\mu,\nu)$ the stopped process $(B_{\tau' \land s})_{s \geq 0}$ stays in the irreducible component in which it started. Hence, the measure $\mathrm{Law}(B_{\tau'};B_0 \in I)$ is independent of the specific solution $\tau'$. By Corollary \ref{lemma:CharShad}, we obtain \begin{equation} \label{eq:IrredShadow} \mathrm{Law}(B_{\tau'}; B_{0} \in I) = \shadow{\nu}{\mathrm{Law}(B_{0}; B_{0} \in I)} \end{equation} for any solution $\tau'$ of $\mathrm{SEP}(\mu,\nu)$. Since $\{B_0 \in I\} \subset A^c$ and $\tau = 0$ $\mathbb{P}$-a.s.\ on $\{B_0 \not \in I\}$ by Corollary \ref{cor:EquaPotfToZero}, we obtain \begin{align*} \mathrm{Law}(B_0; B_0 \not \in I, A^c) = \mathrm{Law}(B_\tau; B_0 \not \in I, A^c) \leq_+ \mathrm{Law}(B_\tau; B_0 \not \in I).
\end{align*} Thus, with Lemma \ref{lemma:ShadowAssz} and \eqref{eq:IrredShadow} we obtain \begin{align*} \shadow{\nu}{\mathrm{Law}(B_0; A^c)} &= \shadow{\nu}{\mathrm{Law}(B_0; B_0 \in I)} + \shadow{\nu - \shadow{\nu}{\mathrm{Law}(B_0; B_0 \in I)}}{\mathrm{Law}(B_0; B_0 \not \in I, A^c)} \\ &= \mathrm{Law}(B_\tau ; B_0 \in I) + \shadow{\mathrm{Law}(B_\tau; B_0 \not \in I)}{\mathrm{Law}(B_0; B_0 \not \in I, A^c)} \\ &= \mathrm{Law}(B_\tau ; B_0 \in I) + \mathrm{Law}(B_\tau; B_0 \not \in I, A^c) = \mathrm{Law}(B_\tau ; A^c). \qedhere \end{align*} \end{proof} The connection between shadows and potential theory is through the following characterization of the potential function of the shadow. \begin{lemma}[{\cite[Theorem 2]{BeHoNo20}}] \label{lemma:PotfShad} Let $\hat{\mu} \leq_{+} \mu \leq_{c} \nu$. The potential function of the shadow $\shadow{\nu}{\hat{\mu}}$ is given by \begin{equation*} U_{\shadow{\nu}{\hat{\mu}}} = U_{\nu} - \mathrm{conv} \left( U_{\nu} - U_{\hat{\mu}} \right) \end{equation*} where $\mathrm{conv}(f)$ denotes the convex hull of a function $f$, i.e.\ the largest convex function that is pointwise smaller than $f$. \end{lemma} \begin{lemma} [{\cite[Lemma 1]{BeHoNo20}}] \label{lemma:PropConv} Let $f$ be a continuous function bounded from below by an affine function. If $x \in \mathbb{R}$ satisfies $(\mathrm{conv}(f))(x) < f(x)$, there exists an $\varepsilon > 0$ such that $\mathrm{conv}(f)$ is affine on $[x - \varepsilon, x + \varepsilon]$. \end{lemma} \begin{lemma} \label{lemma:ShadowDecomp} Let $(\mu_a)_{a \in [0,1]}$ be a family of probability measures, let $(F_a)_{a \in [0,1]}$ be a decreasing family of closed subsets of $\mathbb{R}$ and set $\nu := \int _0 ^1 \mu_a K^{F_a} \, \mathrm{d} a$. For all $b \in [0,1]$ we have \begin{equation*} \shadow{\nu}{\int _0 ^b \mu_a \, \mathrm{d} a} = \int _0 ^b \mu_a K^{F_a} \, \mathrm{d} a.
\end{equation*} \end{lemma} \begin{proof} Let $\eta, \zeta \in \mathcal{M}_1(\mathbb{R})$ and let $F \subset \mathbb{R}$ be a closed set with $\mathrm{supp}(\zeta) \subset F$. Since we have \begin{equation*} \eta \leq_{c} \eta K^F \leq_{+} \eta K^F + \zeta, \end{equation*} we obtain $\shadow{\eta K^F + \zeta}{\eta} \leq_{c} \eta K^F$. Conversely, we also have \begin{equation*} \eta K^F \leq_{c} \eta K^{\mathrm{supp}(\eta K^F + \zeta)} \leq_{c} \shadow{\eta K^F + \zeta}{\eta} \end{equation*} because $\mathrm{supp}(\eta K^F + \zeta) \subset F$ and by definition $\eta K^{\mathrm{supp}(\eta K^F + \zeta)}$ is the smallest measure in convex order which dominates $\eta$ in convex order and is supported on $\mathrm{supp}(\eta K^F + \zeta)$ (cf.\ \eqref{eq:KellererDilation}). Hence, we have $ \shadow{\eta K^F + \zeta}{\eta} = \eta K^F$. Furthermore, for all $n \in \mathbb{N}$, $\mu_1, \ldots , \mu_n \in \mathcal{M}_1$ and decreasing closed sets $F_1 \supset \ldots \supset F_n$ we can apply this equality to get \begin{equation*} \mu_1 K^{F_1} = \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1} \end{equation*} and with Lemma \ref{lemma:ShadowAssz} we inductively obtain \begin{align*} &\shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1 + \ldots + \mu_{k}} \\ &= \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1 + \ldots + \mu_{k-1}} \\ & \quad \quad + \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n} - \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1 + \ldots + \mu_{k-1}} }{\mu_{k}} \\ &= \mu_1 K^{F_1} + \ldots + \mu_{k-1}K^{F_{k-1}} + \shadow{\mu_{k} K ^{F_{k}} + \ldots + \mu_n K^{F_n}}{\mu_{k}} \\ &= \mu_1 K ^{F_1} + \ldots + \mu_k K^{F_k} \end{align*} for all $2 \leq k \leq n$. Since the map $(\mu,\nu) \mapsto \shadow{\nu}{\mu}$ is continuous under $\mathcal{T}_1$ (cf.\ \cite{Ju14}), the claim follows. \end{proof} \section{Proof of the Main Result} \label{sec:MainResult} We split the proof of Theorem \ref{thm:intro} into three parts.
In Subsection \ref{ssec:adjoint} we show that the assumptions on the time-change and the level process in Theorem \ref{thm:intro} correspond to each other. In Subsection \ref{ssec:AprioriBound} we construct for every solution of the Skorokhod Embedding Problem an upper bound in the form of a barrier solution, and we prove in Subsection \ref{ssec:ActualProof} that this upper bound is attained if and only if the properties of Theorem \ref{thm:intro} are satisfied. \subsection{Monotonically Increasing Processes} \label{ssec:adjoint} \begin{definition} Two monotonically increasing, non-negative families of random variables $(X_t)_{t \geq 0}$ and $(T_l)_{l \geq 0}$ are adjoint if $\mathbb{P}[X_t \geq l \Leftrightarrow T_l \leq t] = 1$ for all $l,t \geq 0$. \end{definition} \begin{remark} If $(X_t)_{t \geq 0}$ is right-continuous or $(T_l)_{l \geq 0}$ is left-continuous and both families are adjoint, we have $\mathbb{P}[\forall l,t \geq 0 : X_t \geq l \Leftrightarrow T_l \leq t] = 1$. \end{remark} \begin{lemma} \label{lemma:ExAdjoint} \begin{itemize} \item [(i)] Let $(X_t)_{t \geq 0}$ be a right-continuous $\mathcal{F}$-adapted stochastic process which is non-negative, monotonically increasing and satisfies $\mathbb{P}[\exists s < t : X_s = X_t = l] = 0$ for all $l \geq 0$. Then the family $(T_l)_{l \geq 0}$ defined by \begin{equation*} T_l := \inf \{t \geq 0 : X_t \geq l \} \end{equation*} is a left-continuous $\mathcal{F}$-time-change with $T_0 = 0$, $T_\infty = + \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$, which is adjoint to $(X_t)_{t \geq 0}$. \item [(ii)] Let $(T_l)_{l \geq 0}$ be a left-continuous $\mathcal{F}$-time-change with $T_0 = 0$, $T_\infty = \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$. Then
the family $(X_t)_{t \geq 0}$ defined by \begin{equation*} X_t := \sup \{l \geq 0 : T_l \leq t\} \end{equation*} is a right-continuous $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ which is non-negative, monotonously increasing and satisfies $\mathbb{P}[\exists s < t: X_s = X_{t} = l] = 0$ for all $l \geq 0$, and which is adjoint to $(T_l)_{l \geq 0}$. \end{itemize} \end{lemma} \begin{proof} Item (i): Let $t,l \geq 0$. If $X_t \geq l$, $T_l \leq t$ directly by definition. Conversely, if $T_l \leq t$, for all $u > t$ we obtain $X_u \geq l$ and thus $X_t = \lim _{u \downarrow t} X_u \geq l$ by right-continuity of $X$. Hence, $(T_l)_{l \geq 0}$ is adjoint to $(X_t)_{t \geq 0}$. Clearly, $(T_l)_{l \geq 0}$ is monotonously increasing. Since $(T_l)_{l \geq 0}$ and $(X_t)_{t \geq 0}$ are adjoint, the symmetric difference $\{T_l \leq t\} \triangle \{X_t \geq l\}$ is a $\mathbb{P}$-null-set and therefore contained in the completed filtration $\mathcal{F}_t$. Thus, $(T_l)_{l \geq 0}$ is a $\mathcal{F}$-time-change. Since $X_t$ is non-negative and finite, we obtain $T_0 = 0$ and $T_{\infty} = + \infty$. Moreover, $l \mapsto T_l$ is left-continuous by definition. Furthermore, we have $\mathbb{P}[ \lim _{k \downarrow l} T_k > T_l ] \leq \mathbb{P}[\exists s < t : X_s = X_{t} = l] = 0$. Item (ii): Basically the same just in reverse. \end{proof} \textbf{In the following} we fix a $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ and an adjoint $\mathcal{F}$-time-change $(T_l)_{l \geq 0}$ that satisfy the properties listed in Lemma \ref{lemma:ExAdjoint}. \subsection{A-priori Bound} \label{ssec:AprioriBound} Let $B$ be a Brownian motion that starts in $\mu$. Fix a randomized stopping time $\xi$ that is a solution to $\mathrm{SEP}(\mu,\nu)$. 
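A simple admissible choice of such a pair is deterministic: $X_t := t$ and $T_l := l$. These families are adjoint, the conditions of Lemma \ref{lemma:ExAdjoint} hold trivially (indeed $X_s = s \neq t = X_t$ for $s < t$, so $\mathbb{P}[\exists s < t : X_s = X_t = l] = 0$), and for this choice a barrier for $(X_t,B_t)_{t \geq 0} = (t,B_t)_{t \geq 0}$ is an ordinary barrier in time-space.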
For measures derived from $\xi$ we will use the following notation: \begin{equation} \label{eq:DefRST2} \mathrm{Law}(B_{\sigma \land \xi}) := ((\omega,t) \mapsto B_{\sigma(\omega) \land t}(\omega))_{\#} \xi \end{equation} where $\sigma$ is an $\mathcal{F}$-stopping time. We set $u(l,x) := U_{\mathrm{Law}(B_{T_l \land \xi})}(x)$ and $v(x) := U_{\nu}(x)$ for $l \geq 0$ and $x \in \mathbb{R}$. In this part we will show that $\xi$ is bounded from above by the stopping time \begin{equation*} \hat{\tau} := \inf\{ t \geq 0 : u(X_t,B_t) = v(B_t)\}, \end{equation*} i.e.\ we have $\xi[t \leq \hat{\tau}] = 1$. Since $u$ depends on $\xi$, $\hat{\tau}$ is obviously not a global bound for all solutions to $\mathrm{SEP}(\mu,\nu)$. Nevertheless, Lemma \ref{lemma:uCont} implies that $\hat{\tau}$ is a barrier solution. \begin{lemma} \label{lemma:uCont} The function $u$ is continuous and monotonically increasing in the first component. Moreover, for all $x \in \mathbb{R}$ we have $v(x) = \lim_{l \rightarrow \infty} u(l,x)$. \end{lemma} \begin{proof} For all $x \in \mathbb{R}$ and $l \leq l'$, by Lemma \ref{lemma:ReprRST} we have \begin{align*} u(l',x) = \overline{\mathbb{E}} \left[ |\overline{B}_{T_{l'} \land \overline{\tau}^\xi} - x| \right] \geq \overline{\mathbb{E}} \left[ |\overline{B}_{T_{l} \land \overline{\tau}^\xi} - x| \right] = u(l,x) \end{align*} because $(\mathrm{Law}_{\overline{\mathbb{P}}}(\overline{B}_{T_{l} \land \overline{\tau}^\xi}))_{l \geq 0}$ is increasing in convex order by the optional stopping theorem. We chose $(T_l)_{l \geq 0}$ such that, for fixed $l_0 \geq 0$, $l \mapsto T_l$ is $\mathbb{P}$-a.s.\ continuous at $l_0$. Hence, $l \mapsto \mathrm{Law}(B_{T_l \land \xi})$ is weakly continuous and by Lemma \ref{lemma:T1Conv}, $u$ is continuous in the first component because $\mathrm{Law}(B_{T_l \land \xi}) \leq_c \nu$ for all $l \geq 0$.
Furthermore, $u$ is $1$-Lipschitz continuous in the second component because $u(l,\cdot)$ is the potential function of $\mathrm{Law}(B_{T_l \land \xi})$. \end{proof} \begin{lemma} \label{lemma:EquaToLeq} Let $l \geq 0$ and let $\sigma$ be a finite $\mathcal{F}$-stopping time. We have \begin{equation*} \xi \left[ u(l,B_{\sigma}) = v(B_{\sigma}), t > \sigma \geq T_l \right] = 0. \end{equation*} \end{lemma} \begin{proof} This is a direct consequence of Lemma \ref{lemma:ReprRST} and Corollary \ref{cor:EquaPotfToZero}. \end{proof} \begin{proposition} \label{prop:EquaToLeq} Let $\sigma$ be a finite $\mathcal{F}$-stopping time with $\mathbb{P}[u(X_{\sigma},B_{\sigma}) = v(B_{\sigma})] = 1$. We have $\xi[t \leq \sigma] = 1$. \end{proposition} \begin{proof} Let $r(x) := \inf \{l \geq 0 : u(l,x) = v(x)\}$ for all $x \in \mathbb{R}$ and let \begin{equation} \label{eq:DefOfL} L := \{ r(x) : x \in \mathbb{R}, \exists \varepsilon > 0 \text{ s.t. } r(x) \leq r(y) \text{ for all } y \in (x-\varepsilon,x+\varepsilon)\} \end{equation} be the value set of all local minima of $r$. The set $L$ is countable. Indeed, setting $I_{p,q} := \{ x \in (p,q) : r(x) \leq r(y) \text{ for all } y \in (p,q)\}$, we have $L = \bigcup _{(p,q) \in \mathbb{Q}^2} r(I_{p,q})$ where $r(I_{p,q})$ is either empty or a singleton. Since $u(X_\sigma,B_\sigma) = v(B_\sigma)$ and $X_\sigma = l \Rightarrow T_l \leq \sigma$ $\mathbb{P}$-a.s., we obtain \begin{align*} \xi[t > \sigma, X_{\sigma} \in L] &= \sum _{l \in L} \xi[t > \sigma, u(X_{\sigma},B_{\sigma}) = v(B_{\sigma}), X_{\sigma} = l] \\ &\leq \sum _{l \in L} \xi[t > \sigma \geq T_l, u(l,B_{\sigma}) = v(B_{\sigma})] \end{align*} and the r.h.s.\ is equal to $0$ by Lemma \ref{lemma:EquaToLeq}. It remains to show that $\xi[t > \sigma, X_{\sigma} \not \in L] = 0$.
To this end, we define $[l]_n := \max \{ i/2^n : i \in \mathbb{N} \cup \{0\}, i/2^n \leq l \}$ for all $n \in \mathbb{N}$ and $l \geq 0$, and \begin{equation*} \sigma ^n := \inf \{t \geq 0: u([X_t]_n,B_t) = v(B_t) \}. \end{equation*} We claim that \begin{equation} \label{eq:AuxClaim} \mathbb{P}\left[X_{\sigma} \not \in L, \sigma < \inf _{n \in \mathbb{N}} \sigma ^n\right] = 0. \end{equation} Admitting \eqref{eq:AuxClaim}, since for all $n \in \mathbb{N}$ the function $t \mapsto u([X_t]_n,B_t)$ is right-continuous, we have a.s.\ $u([X_{\sigma^n}]_n,B_{\sigma^n}) = v(B_{\sigma^n})$ and hence \eqref{eq:AuxClaim} yields \begin{align*} \xi[t > \sigma, X_{\sigma} \not \in L] &\leq \xi\left[ t > \inf _{n \in \mathbb{N}} \sigma^n \right] \\ &\leq \sum _{n \in \mathbb{N}} \xi[ t > \sigma^n, u([X_{\sigma^n}]_n,B_{\sigma^n}) = v(B_{\sigma^n})] \\ &= \sum _{n \in \mathbb{N}} \sum _{i = 0} ^{\infty} \xi \left[ t > \sigma^n, u([X_{\sigma^n}]_n,B_{\sigma^n}) = v(B_{\sigma^n}), \frac{i}{2^n} \leq X_{\sigma ^n} < \frac{i+1}{2^n} \right] \\ &\leq \sum _{n \in \mathbb{N}} \sum _{i = 0} ^{\infty} \xi[ t > \sigma^n \geq T_{i/2^n}, u(i/2^n,B_{\sigma^n}) = v(B_{\sigma^n})]. \end{align*} By Lemma \ref{lemma:EquaToLeq}, these summands are zero for all $n \in \mathbb{N}$ and $i \in \mathbb{N} \cup \{0\}$. We are left with verifying \eqref{eq:AuxClaim}. By the definition of $L$ in \eqref{eq:DefOfL}, we see that for every pair $(l,x)$ where $l \not \in L$ and $x \in \mathbb{R}$ with $u(l,x) = v(x)$, there exists a sequence $(x_n)_{n \in \mathbb{N}}$ that converges to $x$ such that $u([l]_n,x_n) = v(x_n)$ for all $n \in \mathbb{N}$ large enough. Indeed, since $u(l,x) = v(x)$, we have $r(x) \leq l$, which leaves us with two cases: If $r(x) < l$, we may take $x_n := x$ and just need to choose $n$ large enough such that $r(x) \leq [l]_n \leq l$.
If $r(x) = l \not \in L$, $x$ cannot be a local minimum of $r$, therefore there exists a sequence $(x_m)_{m \in \mathbb{N}}$ that converges to $x$ with $r(x_m) < l$, and we just need to pass to an appropriate subsequence $(x_{m_n})_{n \in \mathbb{N}}$ such that $r(x_{m_n}) \leq [l]_n \leq l$ for all $n$ large enough. Thus, since $u(X_\sigma,B_\sigma) = v(B_\sigma)$ $\mathbb{P}$-a.s., we obtain for $\mathbb{P}$-a.e.\ $\omega$ \begin{align*} X_\sigma(\omega) \not \in L \quad &\Rightarrow \quad \forall \delta > 0 \, \exists n \in \mathbb{N} \, \exists y \in \mathcal{B}_\delta (B_{\sigma}(\omega)) \, : u([X_\sigma(\omega)]_n,y) = v(y) \end{align*} where $\mathcal{B}_{\delta}(x)$ denotes the open ball of radius $\delta$ around $x$. Hence, for all $\varepsilon > 0$ we have \begin{equation} \label{eq:NastyIneq} \begin{split} &\mathbb{P}[\forall n \in \mathbb{N} \, \forall t \in (\sigma, \sigma + \varepsilon) : u([X_t]_n,B_t) < v(B_t), X_{\sigma} \not \in L ] \\ \leq \,&\mathbb{P}[\forall n \in \mathbb{N} \, \forall t \in (\sigma, \sigma + \varepsilon) : u([X_\sigma]_n,B_t) < v(B_t), X_{\sigma} \not \in L ] \\ \leq \,& \mathbb{P}[\forall \delta > 0 \, \exists y\in \mathcal{B}_\delta (B_{\sigma}) \, \forall t \in (\sigma, \sigma + \varepsilon) : B_t \neq y] \end{split} \end{equation} where we used the monotonicity of $u$ in the first component (cf.\ Lemma \ref{lemma:uCont}). By the strong Markov property and the continuity of Brownian motion, we can bound the last term in \eqref{eq:NastyIneq} by the sum of $\mathbb{P}[\forall t \leq \varepsilon : B_t \leq 0]$ and $\mathbb{P}[\forall t \leq \varepsilon : B_t \geq 0]$ for a Brownian motion started in $0$, and this is clearly $0$. Since $\varepsilon > 0$ is arbitrary, \eqref{eq:AuxClaim} is shown. \end{proof} Recall that $\hat{\tau} := \inf\{ t \geq 0 : u(X_t,B_t) = v(B_t)\}$ where $u(l,\cdot)$ is the potential function of $\mathrm{Law}(B_{\xi \land T_l})$ and $v$ is the potential function of $\nu$.
\begin{corollary} \label{cor:Leq} We have $\xi[t \leq \hat{\tau}] = 1$. If $\xi$ is induced by an $\mathcal{F}$-stopping time $\tau$, we have $\tau \leq \hat{\tau}$ $\mathbb{P}$-a.s. \end{corollary} \begin{proof} Since $u$ is continuous and $t \mapsto (X_t,B_t)$ is $\mathbb{P}$-a.s.\ right-continuous, we obtain $\mathbb{P}[u(X_{\hat \tau}, B_{\hat \tau}) = v(B_{\hat \tau})] = 1$ and therefore we can apply Proposition \ref{prop:EquaToLeq}. \end{proof} \subsection{Proof of Theorem \ref{thm:MainEqui}} \label{ssec:ActualProof} Recall once again the properties of $(X_t)_{t \geq 0}$ and $(T_l)_{l \geq 0}$ formulated at the end of Subsection \ref{ssec:adjoint}. Let $\xi$ be an RST which is a solution to $\mathrm{SEP}(\mu,\nu)$. In addition to $\mathrm{Law}(B_{T_l \land \xi})$ (cf.\ \eqref{eq:DefRST2}), we introduce notation for the measures \begin{equation} \label{eq:DefRST3} \begin{split} \mathrm{Law}(B_{\xi}; \xi \geq T_l) &:= ((\omega,t) \mapsto B_t(\omega))_{\#} \xi _{\vert \{t \geq T_l (\omega)\}} \quad \text{and} \\ \mathrm{Law}(B_{T_l}; \xi \geq T_l) &:= ((\omega,t) \mapsto B_{T_l(\omega)}(\omega))_{\#} \xi _{\vert \{t \geq T_l (\omega)\}}. \end{split} \end{equation} The following Lemma \ref{lemma:ShadToSupp2} is the main observation that allows us to show in Lemma \ref{lemma:ShadToEqua} a counterpart to the upper bound stated in Corollary \ref{cor:Leq}. \begin{lemma} \label{lemma:ShadToSupp2} Let $l \geq 0$ and suppose that $\xi$ satisfies \begin{equation} \label{eq:ShaodwProp} \mathrm{Law}(B_{\xi}; \xi \geq T_l) = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \xi \geq T_l)}. \end{equation} For all $x \in \mathbb{R}$, if $u(l,x) < v(x)$, then $x \not \in \mathrm{supp}(\nu - \mathrm{Law}(B_{\xi}; \xi \geq T_l))$. \end{lemma} \begin{proof} Fix $l \geq 0$. By \eqref{eq:DefRST2} and \eqref{eq:DefRST3}, we have \begin{equation} \label{eq:DefinitionId} \mathrm{Law}(B_{T_l \land \xi}) - \mathrm{Law}(B_{T_l}; \xi \geq T_l) = \nu - \mathrm{Law}(B_{\xi}; \xi \geq T_l).
\end{equation} Hence, Lemma \ref{lemma:PotfShad} and \eqref{eq:ShaodwProp} yield \begin{align*} v - u(l, \cdot) &= U_{\mathrm{Law}(B_{\xi}; \xi \geq T_l)} - U_{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} \\ &= v - U_{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} - \mathrm{conv} \left( v - U_{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} \right). \end{align*} Let $x \in \mathbb{R}$ with $u(l,x) < v(x)$. By Lemma \ref{lemma:PropConv}, there exists an $\varepsilon > 0$ such that on the interval $[x- \varepsilon, x + \varepsilon]$ the function \begin{align*} \mathrm{conv} \left( v - U _{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} \right) \end{align*} is affine. Rewriting with Lemma \ref{lemma:PotfShad} and \eqref{eq:DefinitionId}, we obtain that the function \begin{align*} u(l, \cdot) - U_{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} = U_{\mathrm{Law}(B_{T_l \land \xi}) - \mathrm{Law}(B_{T_l}; \xi \geq T_l)} = U_{\nu - \mathrm{Law}(B_{\xi}; \xi \geq T_l)} \end{align*} is affine around $x$. Hence, Lemma \ref{lemma:PropPotf} yields that $x \not \in \mathrm{supp}(\nu - \mathrm{Law}(B_{\xi}; \xi \geq T_l))$. \end{proof} \begin{lemma} \label{lemma:ShadToEqua} If $\xi$ satisfies $\mathrm{Law}(B_{\xi}; \xi \geq T_l) = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \xi \geq T_l)}$ for all $l \geq 0$, we have $\xi\left[ u(X_t, B_{t}) < v(B_{t}) \right] = 0$. \end{lemma} \begin{proof} Let $\varepsilon > 0$. Since both $u(0,\cdot)$ and $v$ are potential functions of probability measures with the same mass and barycenter, they are continuous and their difference vanishes at $\pm \infty$. Hence, there exists an $M_1 \in \mathbb{N}$ such that $v(x) - u(0,x) \leq \frac{\varepsilon}{2}$ for all $|x| \geq M_1$. On the compact interval $[-M_1,M_1]$ the monotonically increasing family $(u(l,\cdot))_{l \geq 0}$ converges pointwise to $v$ (see Lemma \ref{lemma:uCont}). Dini's theorem yields that there exists $M_2 \in \mathbb{N}$ such that $\sup _{x \in [-M_1,M_1]} v(x) - u(l,x) \leq \frac{\varepsilon}{2}$ for all $l \geq M_2$.
Moreover, since $u$ is jointly continuous on the compact set $[0,M_2] \times [-M_1,M_1]$ and by the two estimates above, there exists an $n \in \mathbb{N}$ such that \begin{equation*} \forall \, x \in \mathbb{R} \ \forall \, 0 \leq l \leq l' \leq l + \frac{1}{2^n} \, : \quad u(l',x) - u(l,x) \leq \frac{\varepsilon}{2}. \end{equation*} For this $n$, we obtain \begin{align*} \xi \left[ u(X_t, B_t) + \varepsilon \leq v(B_t) \right] &= \sum _{i = 1} ^{\infty} \xi \left[ u(X_t, B_t) + \varepsilon \leq v(B_t), \frac{i-1}{2^n} \leq X_t < \frac{i}{2^n} \right] \\ &\leq \sum _{i = 1} ^{\infty} \xi \left[ u\left(\frac{i}{2^n}, B_t\right) < v(B_t), X_t < \frac{i}{2^n} \right]. \end{align*} For each $i \in \mathbb{N}$, the summands on the r.h.s.\ are $0$ because Lemma \ref{lemma:ShadToSupp2} yields \begin{align*} \xi \left[ u\left(\frac{i}{2^n}, B_t\right) < v(B_t), X_t < \frac{i}{2^n} \right] = \xi \left[ u\left(\frac{i}{2^n}, B_t \right) < v(B_t), t < T_{\frac{i}{2^n}} \right] = 0. \end{align*} Since $\varepsilon >0$ is arbitrary, the claim follows. \end{proof} \begin{lemma} \label{lemma:RBtoEqua} Let $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ be a closed barrier and let $\tau$ be an $\mathcal{F}$-stopping time that induces $\xi$. If $\tau = \inf \{t \geq 0: (X_t,B_t) \in \mathcal{R} \}$ $\mathbb{P}$-a.s., we have $\mathbb{P}[u(X_\tau,B_\tau) = v(B_{\tau})] = 1$. \end{lemma} \begin{proof} For all $(l,x) \in \mathcal{R}$, the Brownian motion $B$ cannot pass through $x$ on $[T_l \land \tau, \tau]$. Indeed, if $ t \in (T_l \land \tau, \tau]$ we have $X_t \geq l$ and, since $\tau$ is by assumption the first time the process $(X_t,B_t)_{t \geq 0}$ hits the barrier $\mathcal{R}$, the Brownian motion is stopped at the latest when it reaches $[l, \infty) \times \{x\} \subset \mathcal{R}$.
Hence, we have $(B_{\tau} - x)(B_{\tau \land T_l} - x) \geq 0$ $\mathbb{P}$-a.s., and thus we obtain $$u(l,x) = \mathbb{E}[|B_{T_l \land \tau }-x|] = \mathbb{E}[|B_{\tau} - x|] = v(x).$$ Since $u$ is continuous (cf.\ Lemma \ref{lemma:uCont}) and $t \mapsto (X_t,B_t)$ is right-continuous, we get $\mathbb{P}[u(X_{\tau},B_{\tau}) = v(B_{\tau})] = 1$. \end{proof} \begin{lemma} \label{lemma:InfimumToEqual} Let $\hat{\tau} := \inf\{t \geq 0 : u(X_t,B_t) = v(B_t)\}$. For all $l \geq 0$ we have $\mathbb{P}[\hat{\tau} < T_l, u(l,B_{T_l \land \hat{\tau}}) < v(B_{T_l \land \hat{\tau}})] = 0$. \end{lemma} \begin{proof} By Lemma \ref{lemma:uCont}, $u$ is continuous and $t \mapsto (X_t,B_t)$ is $\mathbb{P}$-a.s.\ right-continuous, therefore we obtain from the definition of $\hat{\tau}$ \begin{equation*} \mathbb{P}[u(X_{\hat \tau}, B_{\hat \tau}) = v(B_{\hat \tau})] = 1. \end{equation*} Let $l \geq 0$. Since $(X_t)_{t \geq 0}$ is adjoint to $(T_l)_{l \geq 0}$, we obtain \begin{equation*} \hat{\tau} < T_l \quad \Rightarrow \quad X_{\hat{\tau}} < l \quad \mathbb{P}\text{-a.s.} \end{equation*} Since $u$ is also monotonically increasing in the first component and $B_{T_l \land \hat{\tau}} = B_{\hat{\tau}}$ on the set $\{\hat{\tau} < T_l\}$, the claim follows. \end{proof} Recall the definitions from the end of Subsection \ref{ssec:adjoint}: \begin{theorem} \label{thm:MainEqui} The following are equivalent: \begin{itemize} \item [(i)] There exists a closed barrier $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ such that $\xi$ is induced by the $\mathcal{F}$-stopping time $\tau := \inf \{ t \geq 0 : (X_t,B_t) \in \mathcal{R}\}$. \item [(ii)] For all $l \geq 0$ we have $\mathrm{Law}(B_{\xi}; \xi \geq T_l) = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \xi \geq T_l)}$. \item [(iii)] $\xi$ is induced by the $\mathcal{F}$-stopping time $\hat{\tau} := \inf\{t \geq 0 : u(X_t,B_t) = v(B_t)\}$.
\end{itemize} \end{theorem} \begin{proof} \textit{(i) $\Rightarrow$ (iii):} By Lemma \ref{lemma:RBtoEqua}, $\tau$ satisfies $\mathbb{P}[u(X_\tau,B_\tau) = v(B_{\tau})] = 1$ and thus we have $\mathbb{P}$-a.s. $\hat{\tau} \leq \tau$. The claim follows with Corollary \ref{cor:Leq}. \textit{(iii) $\Rightarrow$ (i):} Lemma \ref{lemma:uCont} yields that $u$ is a jointly continuous function which is monotonically increasing in $l$. Hence, the set $\mathcal{R} := \{(l,x) \in [0, \infty) \times \mathbb{R} : u(l,x) = v(x)\}$ is a closed barrier. \textit{(ii) $\Rightarrow$ (iii):} By Lemma \ref{lemma:ShadToEqua}, $\xi[t \geq \hat \tau] \geq \xi[u(X_t,B_t) = v(B_t)] = 1$ and Corollary \ref{cor:Leq} yields that $\xi [t \leq \hat{\tau}] = 1$. \textit{(iii) $\Rightarrow$ (ii):} Let $l \geq 0$. Since $\xi$ is induced by $\hat{\tau}$ and a solution to $\mathrm{SEP}(\mu,\nu)$, $\hat \tau - \hat{\tau} \land T_l$ is a solution of $\mathrm{SEP}(\mathrm{Law}(B_{\hat{\tau} \land T_l}),\nu)$ w.r.t.\ the Brownian motion $B'_s = B_{s + \hat{\tau} \land T_l}$. Moreover, Lemma \ref{lemma:InfimumToEqual} yields \begin{equation*} \hat{\tau} < T_l \quad \Rightarrow \quad u(l,B'_0) = v(B'_{0}) \quad \mathbb{P}\text{-a.s.} \end{equation*} Hence, by Corollary \ref{cor:ShadowOnEqualPart} we have \begin{align*} \mathrm{Law}(B_{\hat{\tau}}; \hat{\tau} \geq T_l) &= \mathrm{Law}(B'_{\hat{\tau} - \hat{\tau} \land T_l}; \hat \tau \geq T_l) \\ &= \shadow{\nu}{\mathrm{Law}(B'_{0}; \hat \tau \geq T_l)} = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \hat{\tau} \geq T_l)}. \qedhere \end{align*} \end{proof} \section{Proof of Proposition \ref{prop:ShiftedThm}} \begin{proof} Let $\tilde{\mathcal{F}}$ be the filtration defined by $\tilde{\mathcal{F}}_s := \mathcal{F}_{\sigma + s}$ and let $\tilde{B}$ be the process defined by $\tilde{B}_s := B_{\sigma + s}$. Then $\tilde{B}$ is an $\tilde{\mathcal{F}}$-Brownian motion.
Moreover, $\tilde{\tau} := \tau - \sigma$ is an $\tilde{\mathcal{F}}$-stopping time because $\{\tilde{\tau} \leq s \} = \{\tau \leq \sigma + s\} \in \tilde{\mathcal{F}}_s$ for all $s \geq 0$. Clearly, we have $\tilde{B}_{\tilde{\tau}} = B_\tau$. Suppose (i) is satisfied. We set $\tilde{X}_s := X_{\sigma + s}$. Since $X$ is $\mathcal{F}$-adapted, $\tilde{X}$ is $\tilde{\mathcal{F}}$-adapted and, furthermore, we have \begin{equation*} \tilde{\tau} := \tau - \sigma = \inf \{ s \geq 0 : (\tilde{X}_s,\tilde{B}_s) \in \mathcal{R} \}. \end{equation*} Applying Theorem \ref{thm:intro} yields the existence of an $\tilde{\mathcal{F}}$-time-change $(\tilde{T}_l)_{l \geq 0}$ such that for all $l \geq 0$ we have \begin{equation*} \mathrm{Law}(\tilde{B}_{\tilde{\tau}}; \tilde{\tau} \geq \tilde{T}_l) = \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{T}_l}; \tilde{\tau} \geq \tilde{T}_l)}. \end{equation*} In particular, by Theorem \ref{thm:intro} we can choose $\tilde{T}_l = \inf \{ s \geq 0 : \tilde{X}_s \geq l\}$. Moreover, we set $T_l = \inf \{t \geq 0 : X_t \geq l\}$ and see that we have \begin{align*} \sigma + \tilde{T}_l &= \sigma + \inf \{ s \geq 0 : \tilde{X}_s \geq l\} = \sigma + \inf \{ s \geq 0 : X_{\sigma + s} \geq l\} \\ &= \max\{\sigma, T_l \} \end{align*} where the last equality follows from the fact that $X$ is monotonically increasing. We easily verify that a.s. \begin{equation} \label{eq:RelationTilde} \tilde{B}_{\tilde{T}_l} = B_{\sigma + \tilde{T_l}} = B_{\sigma \lor T_l} \quad \text{ and } \quad \{ \tilde{\tau} \geq \tilde{T}_l\} = \{\tau - \sigma \geq \tilde{T}_l \} = \{\tau \geq \sigma \lor T_l\}. \end{equation} Hence, for all $l \geq 0$ we obtain \begin{equation*} \mathrm{Law}(B_\tau; \tau \geq \sigma \lor T_l) = \shadow{\nu}{\mathrm{Law}(B_{\sigma \lor T_l}; \tau \geq \sigma \lor T_l )}. \end{equation*} Conversely, suppose that (ii) is satisfied. We set $\tilde{T}_l := \max\{0,T_l - \sigma\}$.
Since $(T_l)_{l \geq 0}$ is an $\mathcal{F}$-time-change, $(\tilde{T}_l)_{l \geq 0}$ is an $\tilde{\mathcal{F}}$-time-change, and by definition we have $\sigma + \tilde{T}_l = \sigma \lor T_l$ such that \eqref{eq:RelationTilde} holds as well. In particular, we obtain \begin{align*} \mathrm{Law}(\tilde{B}_{\tilde{\tau}}; \tilde{\tau} \geq \tilde{T}_l) &= \mathrm{Law}(B_\tau; \tau \geq \sigma \lor T_l) \\ &= \shadow{\nu}{\mathrm{Law}(B_{\sigma \lor T_l}; \tau \geq \sigma \lor T_l)} = \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{T_l}}; \tilde{\tau} \geq \tilde{T}_l)}. \end{align*} Applying Theorem \ref{thm:intro} yields the existence of an $\tilde{\mathcal{F}}$-adapted stochastic process $(\tilde{X}_s)_{s \geq 0}$ and a closed barrier $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ such that \begin{equation*} \tilde{\tau} = \inf \{ s \geq 0: (\tilde{X}_s,\tilde{B}_s) \in \mathcal{R}\}. \end{equation*} In particular, by Theorem \ref{thm:intro} we can choose $\tilde{X}_s := \sup \{l \geq 0 : \tilde{T}_l \leq s\}$. Moreover, we set $X_t := \sup \{l \geq 0 : T_l \leq t\}$ and see that \begin{align*} \tilde{X}_s &= \sup \{l \geq 0: \tilde{T}_l \leq s\} = \sup \{l \geq 0: \max\{0, T_l - \sigma\} \leq s\} \\ &= \sup \{l \geq 0 : T_l \leq \sigma + s\} = X_{\sigma + s}. \end{align*} Thus, we have a.s. \begin{align*} \inf \{ t \geq \sigma: (X_t, B_t) \in \mathcal{R} \} &= \sigma + \inf\{ s \geq 0 : (X_{\sigma + s}, B_{\sigma + s}) \in \mathcal{R}\} \\ &= \sigma + \tilde{\tau} = \tau. \qedhere \end{align*} \end{proof} \begin{remark} \label{rem:CondLM} For the left-monotone time-change $(T_l^{lm})_{l \geq 0}$, we have \begin{equation*} \{\tau^i \geq \tau^{i-1} \lor T_l ^{lm} \} = \{\tau^i \geq \tau^{i-1}, T^{lm}_l = 0\} = \{ B_0 \leq q_l\} \end{equation*} where $q_l := -\ln(l)$ and $T_l ^{lm} = 0$ on this set. Thus, the stopping times $\tau ^1 \leq ...
\leq \tau ^n$ are $(T_l ^{lm})_{l \geq 0}$-shadow-residual if and only if \begin{align*} \mathrm{Law}(B_{\tau ^i}; B_0 \leq q_l) &= \mathrm{Law}(B_{\tau ^i}; \tau ^i \geq \tau^{i-1} \lor T_l ^{lm}) \\ &= \shadow{\nu _i}{\mathrm{Law}(B_{\tau ^{i-1} \lor T_l ^{lm}}; \tau ^i \geq \tau^{i-1} \lor T_l ^{lm})} \\ &= \shadow{\nu _i}{\mathrm{Law}(B_{\tau ^{i-1}}; B_0 \leq q_l)} \end{align*} for all $1 \leq i \leq n$. Applying this inductively, these stopping times are shadow-residual if and only if \begin{equation*} \mathrm{Law}(B_{\tau ^i}; B_0 \leq q_l) = \shadow{\nu _i}{ ... \, \shadow{\nu_1}{\mathrm{Law}(B_{0}; B_0 \leq q_l)}} =: \shadow{\nu_1,...,\nu _i}{\mathrm{Law}(B_{0}; B_0 \leq q_l)} \end{equation*} for all $1 \leq i \leq n$. This is the obstructed shadow defined by Nutz-Stebegg-Tan in \cite{NuStTa17}. Hence, $(\tau ^1, ... , \tau^n)$ is the multi-marginal lm-solution if and only if the joint distribution of $(B_0,B_{\tau ^1}, ... , B_{\tau ^n})$ is the multiperiod left-monotone transport. \end{remark} \section{Proof of Proposition \ref{prop:Interpolation}} \label{sec:Interpolation} In this section we suppose that $\Omega = C([0,\infty))$ is the path space of continuous functions and that $\mathbb{P}$ is a probability measure on the path space such that the canonical process $B: \omega \mapsto \omega$ is a Brownian motion with $\mathrm{Law}_{\mathbb{P}}(B_0) = \mu$. Moreover, we denote by $\theta$ the shift operator on $\Omega$, i.e.\ $\theta_r : (\omega_s)_{s \geq 0} \mapsto (\omega_{r+s})_{s \geq 0}$ for all $r \geq 0$. \subsection{Concatenation Method} To simplify notation, we say that a finite stopping time $\tau$ is shadow-residual w.r.t.\ a time-change $(T_l)_{l \geq 0}$ if for all $l \geq 0$ we have \begin{equation*} \mathrm{Law}(B_\tau; \tau \geq T_l) = \shadow{\mathrm{Law}(B_\tau)}{\mathrm{Law}(B_{T_l}; \tau \geq T_l)}. \end{equation*} This is precisely the condition in part (ii) of Theorem \ref{thm:intro}.
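To make the definition concrete, consider the two time-changes used later in this section; as an illustration only, we take for granted (as in the Application subsection) that the Root time-change is the deterministic family $T^{r}_l = l$, and that $T^{lm}_l = 0$ on $\{B_0 \leq -\ln(l)\}$ and $T^{lm}_l = +\infty$ otherwise (cf.\ Remark \ref{rem:CondLM}). A finite solution $\tau$ of $\mathrm{SEP}(\mu,\nu)$ is then shadow-residual w.r.t.\ these time-changes if and only if, for all $l \geq 0$, \begin{align*} \text{Root:} \quad & \mathrm{Law}(B_\tau; \tau \geq l) = \shadow{\nu}{\mathrm{Law}(B_{l}; \tau \geq l)}, \\ \text{left-monotone:} \quad & \mathrm{Law}(B_\tau; B_0 \leq -\ln(l)) = \shadow{\nu}{\mathrm{Law}(B_{0}; B_0 \leq -\ln(l))}, \end{align*} where for the second line we used that, since $\tau$ is finite, $\{\tau \geq T^{lm}_l\} = \{T^{lm}_l = 0\} = \{B_0 \leq -\ln(l)\}$ and $B_{T^{lm}_l} = B_0$ on this set.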
\begin{lemma} \label{lemma:CombinedStoppingTime} Let $\tau$ and $\sigma$ be two $\mathcal{F}$-stopping times such that $\tau$ is finite. The random variable $\tau + \sigma \circ \theta_{\tau}$ is again an $\mathcal{F}$-stopping time. \end{lemma} \begin{proof} If $\tau$ takes only values in the countable set $A \subset [0, \infty)$, for all $t \geq 0$ we obtain \begin{equation*} \{\tau + \sigma \circ \theta_\tau \leq t\} = \bigcup _{k \in A \cap [0,t]} \left( \{\tau = k\} \cap \{\sigma \circ \theta _k \leq t - k\} \right) \in \mathcal{F}_t \end{equation*} because $\{\sigma \circ \theta_k \leq t - k\} \in \mathcal{F}_{k + (t-k)} = \mathcal{F}_t$. A general $\tau$ can be approximated by discrete stopping times. \end{proof} \begin{corollary} \label{lemma:NestingPrep} Let $(T_l)_{l \geq 0}$ be a finite $\mathcal{F}$-time-change, $(S_l)_{l \geq 0}$ an $\mathcal{F}$-time-change and $\lambda > 0$. The family $(R_l)_{l \geq 0}$ defined by \begin{equation*} R_l := T_{l \land \lambda} + (S_{l-\lambda} \circ \theta _{T_\lambda}) \mathds{1} _{\{l \geq \lambda\}} = \begin{cases} T_l & l < \lambda \\ T_\lambda + S_{l - \lambda} \circ \theta _{T_\lambda} &l \geq \lambda \end{cases} \end{equation*} is an $\mathcal{F}$-time-change. If, additionally, both $(T_l)_{l \geq 0}$ and $(S_l)_{l \geq 0}$ are left-continuous, $T_0 = S_0 = 0$, $T_\infty = S_\infty = + \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] =\mathbb{P}[\lim _{k \downarrow l} S_k = S_l] = 1$ for all $l \geq 0$, then $(R_l)_{l \geq 0}$ satisfies these four properties as well. \end{corollary} \begin{lemma} \label{lemma:Nesting} Suppose we are in the setting of Corollary \ref{lemma:NestingPrep}. Additionally assume that $\tau$ is a solution of $\mathrm{SEP}(\mu,\nu)$ which is shadow-residual w.r.t.\ $(T_l)_{l \geq 0}$.
If $\sigma$ is an $\mathcal{F}$-stopping time such that $\sigma$ is a solution to $\mathrm{SEP}(\mathrm{Law}(B_{\tau \land T_\lambda}),\nu)$, then \begin{equation*} \rho := \tau \land T_{\lambda} + \sigma \circ \theta _{T_\lambda \land \tau} \end{equation*} is an $\mathcal{F}$-stopping time and a solution to $\mathrm{SEP}(\mu,\nu)$ which is shadow-residual w.r.t.\ $(R _l)_{l \geq 0}$. \end{lemma} \begin{proof} We set $\tilde{\mathcal{F}}_s = \mathcal{F}_{s + \tau \land T_\lambda}$, $\tilde{B} := B \circ \theta _{\tau \land T_\lambda}$, $\tilde{\sigma} := \sigma \circ \theta_{\tau \land T_\lambda}$ and $\tilde{S}_l := S_l \circ \theta_{T_\lambda \land \tau}$. $\tilde{\sigma}$ is a stopping time w.r.t.\ the filtration generated by $\tilde{B}$. We also have $\mathrm{Law}(\tilde{B}_{\tilde{\sigma}}) = \nu$ and $\tilde{\sigma}$ is $(\tilde{S}_l)_{l \geq 0}$-shadow-residual. \texttt{STEP 1:} We have $\mathrm{Law}(B_{\rho}) = \mathrm{Law}(B_{\tau \land T_\lambda + \tilde{\sigma}}) = \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}) = \nu$. By Lemma \ref{lemma:CombinedStoppingTime}, $\rho$ is an $\mathcal{F}$-stopping time and $(B_{s \land \rho})_{s \geq 0}$ is uniformly integrable because $\tau$ and $\sigma$ are solutions to $\mathrm{SEP}(\mu,\nu)$ and $\mathrm{SEP}(\mathrm{Law}(\tilde{B}_0), \nu)$. Thus, $\rho$ is a solution to $\mathrm{SEP}(\mu,\nu)$. Moreover, we claim that we can represent $\rho$ as \begin{equation} \label{eq:Step1} \rho = \begin{cases} \tau & \tau < T_\lambda \\ T_\lambda + \tilde{\sigma} & \tau \geq T_\lambda \end{cases} \quad \mathbb{P}\text{-a.s.} \end{equation} Indeed, since $\tau$ is $(T_l)_{l \geq 0}$-shadow-residual, by Theorem \ref{thm:MainEqui} (iii) and Lemma \ref{lemma:InfimumToEqual}, we have $\tau < T_\lambda \Rightarrow u(\lambda,\tilde{B}_0) = v(\tilde{B}_0)$ $\mathbb{P}$-a.s.\ where $u(l,\cdot) := U_{\mathrm{Law}(B_{T_l \land \tau})}$ and $v = U_{\nu}$ for all $l \geq 0$.
Thus, we get \begin{align*} \mathbb{P}[\tilde{\sigma} > 0, \tau < T_\lambda] &\leq \mathbb{P}[\tilde{\sigma} > 0, u(\lambda, \tilde{B}_0) = v(\tilde{B}_0)] \\ &= \mathbb{P}[\tilde{\sigma} > 0, U_{\mathrm{Law}(\tilde{B}_0)}(\tilde{B}_0) = U_{\nu}(\tilde{B}_0)] \end{align*} and the r.h.s.\ is equal to $0$ because $\tilde{\sigma}$ is an $\tilde{\mathcal{F}}$-stopping time that solves $\mathrm{SEP}(\mathrm{Law}(\tilde{B}_0),\nu)$ (cf.\ Lemma \ref{lemma:EquaToLeq}). It remains to show that $\rho$ is $(R_l)_{l \geq 0}$-shadow-residual. We split this up into the cases $l \geq \lambda$ and $l < \lambda$. \texttt{STEP 2:} Suppose $l \geq \lambda$. Since by \texttt{STEP 1} we have $\{ \rho \geq R_l \} = \{\tilde{\sigma} \geq \tilde{S}_{l - \lambda},\tau \geq T_\lambda\}$ $\mathbb{P}$-a.s.\ and $\tilde{\sigma}$ is $(\tilde{S}_l)_{l \geq 0}$-shadow-residual, Lemma \ref{lemma:ShadowAssz} yields \begin{align*} &\mathrm{Law}(B_\rho; \rho \geq R_l) + \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) \\ &\quad = \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tilde{\sigma} \geq \tilde{S}_{l - \lambda}) = \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l - \lambda}}; \tilde{\sigma} \geq \tilde{S}_{l - \lambda} )} \\ & \quad = \shadow{\nu}{\mathrm{Law}(B_{R_l}; \rho \geq R_l)} \\ & \hspace{2cm} + \shadow{\nu - \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)}}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda)}. \end{align*} Thus, we obtain $\mathrm{Law}(B_\rho; \rho \geq R_l) = \shadow{\nu}{\mathrm{Law}(B_{R_l}; \rho \geq R_l)}$ if we show \begin{align} &\mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) = \mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) \quad \text{and} \label{eq:Step2Eq1}\\
&\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) \leq_+ \nu - \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)}. \label{eq:Step2Eq2} \end{align} By \texttt{STEP 1} we have $\tilde{\sigma} = 0$ on $\{\tau < T_\lambda\}$ and therefore \eqref{eq:Step2Eq1} follows immediately. Moreover, Lemma \ref{lemma:ShadowAssz} yields \begin{align*} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)} &\leq_{+} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda} \land \tilde{\sigma}}; \tau \geq T_\lambda)}. \end{align*} On the one hand, by the definition of the shadow we have \begin{equation} \label{eq:Aux2} \begin{split} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{0}; \tau \geq T_\lambda)} &\leq_{c} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda} \land \tilde{\sigma}}; \tau \geq T_\lambda)} \\ &\leq_c \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda)} = \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda) \end{split} \end{equation} because $0 \leq \tilde{S}_{l-\lambda} \land \tilde{\sigma} \leq \tilde{\sigma}$ and $\mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda) \leq_{+} \nu$. On the other hand, since $u(\lambda, \tilde{B}_0) = v(\tilde{B}_0)$ on $\{\tau < T_\lambda\}$ by \texttt{STEP 1}, Corollary \ref{cor:ShadowOnEqualPart} yields \begin{equation*} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{0}; \tau \geq T_\lambda)} = \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda). \end{equation*} Thus, we have equality in \eqref{eq:Aux2}, which implies \begin{equation*} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)} \leq_{+} \nu - \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tau < T_\lambda) \end{equation*} and thereby \eqref{eq:Step2Eq2}.
\texttt{STEP 3:} Now suppose $l < \lambda$. Since $R_\lambda = T_\lambda$ and $\{\rho \geq R_\lambda\} = \{\tau \geq T_\lambda\}$ $\mathbb{P}$-a.s., by \texttt{STEP 2} we have \begin{align*} \mathrm{Law}(B_\rho; \rho \geq R_\lambda) &= \shadow{\nu}{\mathrm{Law}(B_{R_\lambda}; \rho \geq R_\lambda)} = \shadow{\nu}{\mathrm{Law}(B_{T_\lambda}; \tau \geq T_\lambda)} \\ &= \mathrm{Law}(B_\tau; \tau \geq T_\lambda) \end{align*} because $\tau$ is $(T_l)_{l \geq 0}$-shadow-residual. In particular, we get \begin{align*} \mathrm{Law}(B_\rho; \rho \geq R_l) &= \mathrm{Law}(B_\rho; \rho \geq R_l, \tau < T_\lambda ) + \mathrm{Law}(B_\rho; \rho \geq R_l, \tau \geq T_\lambda ) \\ &= \mathrm{Law}(B_\tau; T_\lambda > \tau \geq T_l) + \mathrm{Law}(B_\rho; \rho \geq T_\lambda) \\ &= \mathrm{Law}(B_\tau; \tau \geq T_l) \\ &= \shadow{\nu}{\mathrm{Law}(B_{T_l}; \tau \geq T_l)} \\ &= \shadow{\nu}{\mathrm{Law}(B_{R_l}; \rho \geq R_l)} \end{align*} because $\{\rho \geq R_l \} = \{\tau \geq T_\lambda\} \cup \{T_\lambda > \tau \geq T_l\} = \{\tau \geq T_l\}$. \end{proof} \subsection{Robustness of LM-Embedding} \begin{lemma} \label{lemma:SEPcompact} Let $(\mathbb{P}_n)_{n \in \mathbb{N}}$ be a sequence of probability measures on $\Omega$ such that $B$ is a Brownian motion under $\mathbb{P}_n$, $(\nu_n)_{n \in \mathbb{N}}$ a sequence of probability measures on $\mathbb{R}$ and $(\xi^n)_{n \in \mathbb{N}}$ a sequence of RST w.r.t.\ $\mathbb{P}_n$ that are solutions to $\mathrm{SEP}(\mathrm{Law}_{\mathbb{P}_n}(B_0), \nu _n)$ for all $n \in \mathbb{N}$. If $(\mathbb{P}_n)$ converges weakly to $\mathbb{P}$ and $(\nu_n)_{n \in \mathbb{N}}$ converges to the probability measure $\nu$ under $\mathcal{T}_1$, there exists a weakly convergent subsequence of $(\xi ^n)_{n \in \mathbb{N}}$. Moreover, the limit of every convergent subsequence is an RST w.r.t.\ $\mathbb{P}$ that solves $\mathrm{SEP}(\mathrm{Law}_\mathbb{P}(B_0),\nu)$. \end{lemma} \begin{proof} Let $\varepsilon > 0$.
Since $(\mathbb{P}_n)_{n \in \mathbb{N}}$ converges weakly, there exists a compact set $K_{\varepsilon} \subset \Omega$ such that $\mathbb{P}_n[K_\varepsilon] > 1 - \varepsilon$ for all $n \in \mathbb{N}$. Since $(\nu_n)_{n \in \mathbb{N}}$ converges in $\mathcal{T}_1$, by Lemma \ref{lemma:T1Conv} there exists $\eta \in \mathcal{M}_1$ such that $\int \varphi \, \mathrm{d} \nu _n \leq \int \varphi \, \mathrm{d} \eta$ for all $n \in \mathbb{N}$ and non-negative convex functions $\varphi$. Moreover, by the de la Vall\'ee-Poussin theorem there exists a non-negative convex function $V \in C^2(\mathbb{R})$ with $V'' \geq C > 0$ such that $\int_\mathbb{R} V \, \mathrm{d} \eta < \infty$. For all $s \geq 0$ and $n \in \mathbb{N}$ we have \begin{equation*} \xi ^n [t \geq s] = \overline{\mathbb{P}}[\overline{\tau}^{\xi^n} \geq s] \leq \frac{\overline{\mathbb{E}}[\overline{\tau} ^{\xi^n}]}{s} \leq \frac{2 \, \overline{\mathbb{E}}[V(\overline{B}_{\overline{\tau} ^{\xi^n}})]}{Cs} \leq \frac{2}{Cs} \int_{\mathbb{R}} V \, \mathrm{d} \eta \end{equation*} where we used the notation of Lemma \ref{lemma:ReprRST}, the Markov inequality and It\^o's formula. Hence, there exists $s_\varepsilon > 0$ such that $\xi^n[t \leq s_\varepsilon] > 1 - \varepsilon$ for all $n \in \mathbb{N}$. Then the mass of the compact set $K_\varepsilon \times [0,s_\varepsilon]$ under $\xi ^n$ is strictly greater than $1- 2\varepsilon$ for all $n \in \mathbb{N}$. Hence, the set $\{\xi ^n : n \in \mathbb{N} \}$ is tight. By Prokhorov's theorem there exists a weakly convergent subsequence. We denote the limit by $\xi$. Since the set of RST is closed under weak convergence (cf.\ \cite[Corollary 3.10]{BeCoHu17}), $\xi$ is an RST w.r.t.\ $\mathbb{P}$. Moreover, $(\omega,t) \mapsto \varphi(\omega_t)$ is a continuous and bounded function on $\Omega \times [0, \infty)$ for all $\varphi \in C_b(\mathbb{R})$, and therefore $\mathrm{Law}(B_\xi) = \nu$.
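Spelling out the last step (a standard argument; we use here that convergence in $\mathcal{T}_1$ implies weak convergence of $(\nu_n)_{n \in \mathbb{N}}$): for every $\varphi \in C_b(\mathbb{R})$ we have \begin{equation*} \int_{\mathbb{R}} \varphi \, \mathrm{d}\nu = \lim_{n \rightarrow \infty} \int_{\mathbb{R}} \varphi \, \mathrm{d}\nu_n = \lim_{n \rightarrow \infty} \int_{\Omega \times [0,\infty)} \varphi(\omega_t) \, \mathrm{d}\xi^n(\omega,t) = \int_{\Omega \times [0,\infty)} \varphi(\omega_t) \, \mathrm{d}\xi(\omega,t), \end{equation*} and the right-hand side is exactly the integral of $\varphi$ against $\mathrm{Law}(B_\xi)$.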
It remains to show that $(B_{\xi \land t})_{t \geq 0}$ is uniformly integrable. Since $|x| \mathds{1} _{|x| \geq K} \leq |x- K/2| + |x + K/2| - K$, we get \begin{equation*} \mathbb{E} \left[ |B_{\xi \land s}| \mathds{1} _{\{|B_{\xi \land s}| \geq K\}} \right] \leq U_{\mathrm{Law}(B_{\xi \land s})}(K/2) + U_{\mathrm{Law}(B_{\xi \land s})} (-K/2) - K \end{equation*} for all $s,K \geq 0$. Moreover, since $\xi ^n$ converges weakly to $\xi$ and $g_m$ defined by $g_m(y) := \min\{|y|,m\}$ is a continuous and bounded function, we obtain for all $x \in \mathbb{R}$ with monotone and dominated convergence \begin{align*} U_{\mathrm{Law}(B_{\xi \land s})}(x) &= \sup_{m \in \mathbb{N}} \lim_{n \rightarrow \infty} \int _{\Omega \times [0, \infty)} g_m(\omega_{t \land s} - x) \, \mathrm{d} \xi ^n(\omega,t) \\ &\leq \sup _{n \in \mathbb{N}} \mathbb{E}\left[ |B_{\xi^{n} \land s} - x| \right] \leq \sup _{n \in \mathbb{N}} \mathbb{E}\left[ |B_{\xi^{n}} - x| \right] = \sup _{n \in \mathbb{N}} U_{\nu_n}(x) \leq U_{\eta}(x) \end{align*} where we used that $(B_{\xi ^n \land t})_{t \geq 0}$ is uniformly integrable for all $n \in \mathbb{N}$. Thus, using the asymptotic behaviour of potential functions, we obtain \begin{equation*} \lim _{K \rightarrow \infty} \sup _{t \geq 0} \mathbb{E} \left[ |B_{\xi \land t}| \mathds{1} _{\{|B_{\xi \land t}| \geq K\}} \right] \leq \limsup _{K \rightarrow \infty} \left( U_{\eta}\left(-K/2\right) + U_{\eta}(K/2) - K \right) = 0 \end{equation*} and the claim follows. \end{proof} \begin{lemma} \label{lemma:StabilityLM} Let $(\nu_n)_{n \in \mathbb{N}}$ be a sequence of probability measures on $\mathbb{R}$, $(\mathbb{P}_n)_{n \in \mathbb{N}}$ a sequence of probability measures on $\Omega$ such that $B$ is a Brownian motion with initial distribution $\mu_n$ under $\mathbb{P}_n$ and $(\xi^n)_{n \in \mathbb{N}}$ a sequence of corresponding RST which are lm-monotone solutions to $\mathrm{SEP}(\mu_n, \nu_n)$.
If $(\mathbb{P}_n)_{n \in \mathbb{N}}$ converges weakly to $\mathbb{P}$, whose initial distribution $\mu$ is atomless, and $(\nu_n)_{n \in \mathbb{N}}$ converges to $\nu$ in $\mathcal{T}_1$, the sequence $(\xi ^ n)_{n \in \mathbb{N}}$ converges weakly to an RST $\xi$ w.r.t.\ $\mathbb{P}$ which is an lm-monotone solution to $\mathrm{SEP}(\mu,\nu)$. \end{lemma} \begin{proof} By Lemma \ref{lemma:SEPcompact}, any subsequence of $(\xi ^n)_{n \in \mathbb{N}}$ itself has a convergent subsequence and the limit is a solution to $\mathrm{SEP}(\mu,\nu)$. If we show that this limit is $(T_l^{lm})_{l \geq 0}$-shadow-residual, by uniqueness (see \cite[Lemma 4.3]{BeHeTo17}), it has to be the unique left-monotone solution to $\mathrm{SEP}(\mu,\nu)$ and the claim follows. For simplicity, we denote the convergent subsequence of a given subsequence again by $(\xi ^n)_{n \in \mathbb{N}}$ and the limit by $\xi$. Since $\xi^n$ is $(T_l ^{lm})$-shadow-residual, the probability measure $\mathrm{Law}(B_0,B_{\xi ^n})$ is the left-curtain coupling of $\mu_n$ and $\nu_n$. Indeed, as in Remark \ref{rem:CondLM} we have for all $l \geq 0$ \begin{equation} \label{eq:LmStability} \begin{split} \mathrm{Law}(B_{\xi^n}; B_0 \leq - \ln(l)) &= \mathrm{Law}(B_{\xi ^n}; \xi ^n \geq T_l ^{lm}) \\ &= \shadow{\nu _n}{\mathrm{Law}(B_0; \xi ^n \geq T_l ^{lm})} = \shadow{\nu _n}{\mathrm{Law}(B_0; B_0 \leq - \ln(l))} \end{split} \end{equation} because $\{t \geq T_l ^{lm}\} = \{T_l ^{lm} = 0\} = \{B_0 \leq - \ln(l) \}$ (where $-\ln(0) := + \infty$). As shown in \cite[Theorem 2.16]{Ju14} (and also as a consequence of stability of martingale optimal transport \cite[Theorem 1.1]{BaPa19}), the left-curtain coupling is stable under weak convergence, i.e.\ the weak limit $\mathrm{Law}(B_0,B_\xi)$ of $(\mathrm{Law}(B_0,B_{\xi^n}))_{n \in \mathbb{N}}$ is the left-curtain coupling of $\mu$ and $\nu$. Thus, analogously to \eqref{eq:LmStability}, $\xi$ is $(T_l^{lm})_{l \geq 0}$-shadow-residual.
\end{proof} \subsection{Application} Fix $\mu \leq_{c} \nu$, let $\tau ^r$ be the Root solution to $\mathrm{SEP}(\mu,\nu)$ and $(T_l^r)_{l \geq 0}$ the Root time-change. Let $\lambda > 0$. We set $ \tilde B^{\lambda} := B \circ \theta _{T^{r} _{\lambda} \land \tau ^{r}}$. By the strong Markov property, $ \tilde B^{\lambda}$ is a Brownian motion and there exists a left-monotone solution $\sigma ^{\lambda}$ of $\mathrm{SEP}(\mathrm{Law}(\tilde B^{\lambda}_0),\nu)$. We define \begin{align*} \tau ^{\lambda} :=& \tau ^{r} \mathds{1} _{\{ \tau ^{r} < T^{r}_{\lambda} \}} + \sigma^\lambda \circ \theta_{(T^{r} _{\lambda} \land \tau ^{r})} \mathds{1} _{\{ \tau ^{r} \geq T^{r} _{\lambda} \}} \\ =& \tau^r \mathds{1}_{\{\tau ^r < \lambda \}} + \sigma^\lambda \circ \theta _{\lambda} \mathds{1} _{\{\tau ^r \geq \lambda\}} . \end{align*} By Lemma \ref{lemma:Nesting}, $\tau ^\lambda$ is a solution to $\mathrm{SEP}(\mu,\nu)$ which is shadow-residual w.r.t.\ the time-change $(T_l ^\lambda)_{l \geq 0}$ defined as \begin{equation*} T^{\lambda} _l := T^{r}_{l \land \lambda} + (T^{lm} _{l - \lambda} \circ \theta _{T^{r}_{\lambda}}) \mathds{1}_{\{l \geq \lambda\}} = \begin{cases} l & l < \lambda \\ \lambda & \lambda \leq l \leq \exp(-B_\lambda) + \lambda \\ + \infty & l > \exp(-B_\lambda) + \lambda \end{cases}. \end{equation*} Thus, by Theorem \ref{thm:MainEqui}, there exists a barrier $\mathcal{R}^\lambda$ such that \begin{equation*} \tau ^{\lambda} = \inf \{t \geq 0 : (X_t ^\lambda, B_t) \in \mathcal{R}^\lambda\} \end{equation*} where $X^{\lambda}$ is defined as \begin{align*} X_t ^\lambda := \sup \{l \geq 0 : T_l^\lambda \leq t\} = \begin{cases} t & t < \lambda \\ \lambda + \exp(-B_\lambda) & t \geq \lambda \end{cases}. \end{align*} To complete the proof of Proposition \ref{prop:Interpolation}, it remains to show the convergence of $\tau ^\lambda$ to $\tau ^r$ and to $\tau ^{lm}$ as randomized stopping times as $\lambda$ tends to $+\infty$ and to $0$, respectively.
This is covered by Lemma \ref{lemma:ConvToRoot} and Lemma \ref{lemma:ConvToLM}. \begin{lemma} \label{lemma:ConvToRoot} The family $(\tau ^\lambda)_{\lambda > 0}$ converges a.s.\ to $\tau ^r$ as $\lambda$ tends to $+ \infty$. In particular, $\mathrm{Law}(B,\tau ^\lambda)$ converges weakly to $\mathrm{Law}(B,\tau ^r)$. \end{lemma} \begin{proof} Since $\tau^r < + \infty$ and $T_\lambda^r \rightarrow \infty$ as $\lambda \rightarrow \infty$, we have \begin{equation*} \lim _{\lambda \rightarrow + \infty} \tau ^\lambda = \lim _{\lambda \rightarrow + \infty} \left( \tau ^{r} \mathds{1} _{\{ \tau ^{r} < T^{r}_{\lambda} \}} + \sigma^\lambda \circ \theta_{(T^{r} _{\lambda} \land \tau ^{r})} \mathds{1} _{\{ \tau ^{r} \geq T^{r} _{\lambda} \}} \right) = \tau ^{r} \quad \mathbb{P}\text{-a.s.} \end{equation*} The weak convergence of $\mathrm{Law}(B,\tau ^\lambda)$ to $\mathrm{Law}(B,\tau ^r)$ follows immediately. \end{proof} \begin{lemma} \label{lemma:CompSupp} For all $\varphi \in C_c(\Omega \times [0, \infty))$ we have \begin{equation*} \lim_{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B, \tau ^\lambda) - \varphi (\tilde{B}^\lambda, \tilde{\sigma}^\lambda) \right\vert \mathds{1} _{\{\tau^r > 0\}}\right] = 0 \end{equation*} where $\tilde{B}^\lambda := B \circ \theta_{\tau ^r \land \lambda}$ and $\tilde{\sigma}^\lambda := \sigma^\lambda \circ \theta_{\tau ^r \land \lambda}$ for all $\lambda > 0$. \end{lemma} \begin{proof} For all $\lambda > 0$ we define the map $\Theta ^\lambda$ on $\Omega \times [0,\infty)$ by \begin{equation*} \Theta ^\lambda : (\omega,t) \mapsto (\omega \circ \theta_\lambda, \max\{t-\lambda,0\}). \end{equation*} A compatible metric on the Polish space $\Omega \times [0, \infty)$ is given by \begin{equation*} d((\omega,t),(\omega',t')) := |t-t'| + \sum _{n \in \mathbb{N}} 2^{-n} \min\Big\{ 1, \sup _{s \in [0,n]} |\omega_s - \omega'_s| \Big\} \end{equation*} and under this metric $\Theta ^\lambda$ is $2$-Lipschitz continuous for all $\lambda \in (0,1)$.
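For completeness, here is a direct verification of the Lipschitz constant (a routine estimate, included only as a sketch). Write $\delta_n(\omega,\omega')$ for the modulus appearing in the $n$-th summand of $d$; the argument only uses that $\delta_n$ is nondecreasing in $n$ and that $\delta_n(\omega \circ \theta_\lambda, \omega' \circ \theta_\lambda) \leq \delta_{n+1}(\omega,\omega')$ for $\lambda \in (0,1)$. Together with $|\max\{t - \lambda, 0\} - \max\{t' - \lambda, 0\}| \leq |t - t'|$ this gives \begin{align*} d(\Theta^\lambda(\omega,t),\Theta^\lambda(\omega',t')) &\leq |t-t'| + \sum_{n \in \mathbb{N}} 2^{-n} \, \delta_{n+1}(\omega,\omega') \\ &= |t-t'| + 2 \sum_{n \in \mathbb{N}} 2^{-(n+1)} \, \delta_{n+1}(\omega,\omega') \leq 2 \, d((\omega,t),(\omega',t')). \end{align*}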
Moreover, since $\lim _{\lambda \rightarrow 0}\Theta ^\lambda(\omega,t) = (\omega,t)$ for all $(\omega,t) \in \Omega \times [0, \infty)$, $\Theta^\lambda$ converges uniformly on compact sets to the identity on $\Omega \times [0, \infty)$. Thus, we have \begin{align*} &\lim _{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B,\lambda + \tilde{\sigma} ^\lambda) - \varphi(\tilde{B}^\lambda,\tilde{\sigma} ^\lambda) \right\vert \right] \\ & \quad = \lim _{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B,\lambda + \tilde{\sigma} ^\lambda) - \varphi(\Theta^\lambda(B,\lambda + \tilde{\sigma} ^\lambda)) \right\vert \right] = 0. \end{align*} By substituting the definition of $\tau^\lambda$, we obtain the estimate \begin{align*} &\mathbb{E}\left[ \left\vert \varphi(B, \tau ^\lambda) - \varphi (\tilde{B}^\lambda, \tilde{\sigma}^\lambda) \right\vert \mathds{1} _{\{\tau^r > 0\}}\right] \\ &\quad \leq 2 ||\varphi||_\infty \mathbb{P}[0 <\tau ^r < \lambda] + \mathbb{E}\left[ \left\vert \varphi(B,\lambda + \tilde{\sigma} ^\lambda) - \varphi(\tilde{B}^\lambda,\tilde{\sigma} ^\lambda) \right\vert \right] \end{align*} and therefore the claim follows. \end{proof} \begin{lemma} \label{lemma:ConvToLM} The family $(\mathrm{Law}(B,\tau ^\lambda))_{\lambda > 0}$ converges weakly to $\mathrm{Law}(B,\tau ^{lm})$ as $\lambda$ tends to $0$. \end{lemma} \begin{proof} On the set $\{\tau ^r = 0\}$, we have $U_\mu(B_0) = u^r(0,B_0) = v(B_0) = U_\nu(B_0)$ and thus $\tau ^{lm} = 0 = \tau ^r$. Hence, in conjunction with Lemma \ref{lemma:CompSupp} we obtain for all $\varphi \in C_c(\Omega \times [0, \infty))$ \begin{equation} \label{eq:ConvPHI} \lim_{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B, \tau ^\lambda) - \varphi (\tilde{B}^\lambda, \tilde{\sigma}^\lambda) \right\vert \right] = 0 \end{equation} where $\tilde{B}^\lambda := B \circ \theta_{\tau ^r \land \lambda}$ and $\tilde{\sigma}^\lambda := \sigma^\lambda \circ \theta_{\tau ^r \land \lambda}$ for all $\lambda > 0$.
Since both $(\mathrm{Law}(B,\tau ^\lambda))_{\lambda > 0}$ and $(\mathrm{Law}(\tilde{B}^\lambda,\tilde{\sigma}^\lambda))_{\lambda > 0}$ are families of solutions to $\mathrm{SEP}(\mu,\nu)$ and $\mathrm{SEP}(\mathrm{Law}(\tilde{B}_0^\lambda),\nu)$, respectively, both families are tight by Lemma \ref{lemma:SEPcompact}. Thus, \eqref{eq:ConvPHI} holds also for all $\varphi \in C_b(\Omega \times [0, \infty))$. Finally, Lemma \ref{lemma:StabilityLM} shows that $\mathrm{Law}(\tilde{B}^\lambda,\tilde{\sigma}^\lambda)$ converges weakly to $\mathrm{Law}(B,\tau ^{lm})$. \end{proof} \normalem \end{document}
\begin{document} \title[Cyclic coverings of virtual link diagrams] {Cyclic coverings of virtual link diagrams} \author{Naoko Kamada} \thanks{This work was supported by JSPS KAKENHI Grant Number 15K04879.} \address{ Graduate School of Natural Sciences, Nagoya City University\\ 1 Yamanohata, Mizuho-cho, Mizuho-ku, Nagoya, Aichi 467-8501 Japan } \date{} \begin{abstract} A virtual link diagram is called mod $m$ almost classical if it admits an Alexander numbering valued in integers modulo $m$, and a virtual link is called mod $m$ almost classical if it has a mod $m$ almost classical diagram as a representative. In this paper, we introduce a method of constructing a mod $m$ almost classical virtual link diagram from a given virtual link diagram, which we call an $m$-fold cyclic covering diagram. The main result is that $m$-fold cyclic covering diagrams obtained from two equivalent virtual link diagrams are equivalent. Thus we have a well-defined map from the set of virtual links to the set of mod $m$ almost classical virtual links. Some applications are also given. \end{abstract} \maketitle \section{Introduction} Virtual links, introduced by L. H. Kauffman \cite{rkauD}, correspond to abstract links \cite{rkk} and stable equivalence classes of links in thickened surfaces \cite{rCKS,rkk}. A virtual link diagram is called {\it almost classical} if it admits an Alexander numbering (cf. \cite{rsilver}), and it is called {\it mod $m$ almost classical} if it admits an Alexander numbering in $\mathbb{Z}_m$ (cf. \cite{rboden}). A virtual link is called almost classical (resp. mod $m$ almost classical) if it has an almost classical (resp. mod $m$ almost classical) virtual link diagram as a representative. Every classical link diagram is almost classical, and every almost classical virtual link diagram is mod $m$ almost classical. A virtual link diagram is checkerboard colorable if and only if it is mod 2 almost classical. 
It is known that Jones polynomials of mod $2$ almost classical virtual links have a property that Jones polynomials of classical links have (\cite{rkn0, rkn2004}). Alexander polynomials for mod $m$ almost classical virtual links can be defined in a similar way to those for almost classical link diagrams \cite{rboden}. In this paper, we introduce the notion of an oriented cut point and a cut system for a virtual link diagram, which is an extension of (unoriented) cut points introduced by H.~Dye in \cite{rdye2, rdye2017}. For any pair $(D,P)$ of a virtual link diagram $D$ and a cut system $P$, we construct a virtual link diagram $\varphi_m(D,P)$ which is mod $m$ almost classical. We call it an {\it $m$-fold cyclic covering (virtual link) diagram} of $(D,P)$. It turns out that the strong equivalence class of $\varphi_m(D,P)$ does not depend on $P$, namely, for any cut systems $P$ and $P'$ of the same virtual link diagram $D$, $\varphi_m(D,P)$ and $\varphi_m(D,P')$ are strongly equivalent (Lemma~\ref{lem2}). Our main theorem (Theorem~\ref{thm2}) states that if virtual link diagrams $D$ and $D'$ are equivalent, then $\varphi_m(D,P)$ and $\varphi_m(D', P')$ are equivalent. Thus, we obtain a well-defined map from the set of virtual links to the set of mod $m$ almost classical virtual links. As an application, we demonstrate how Theorem~\ref{thm2} is used to show that two virtual link diagrams are not equivalent. Theorem~\ref{thm2} implies Theorem~\ref{thm:appli}: if $\varphi_m(D,P)$ is not equivalent to a disjoint union of $m$ copies of $D$ itself, then $D$ is not equivalent to any mod $m$ almost classical virtual link diagram, i.e., the virtual link represented by $D$ is not mod $m$ almost classical. This paper is organized as follows: In Section~\ref{secnormal} we recall virtual link diagrams and Alexander numberings, and introduce the notions of an oriented cut point and a cut system. In~Section~\ref{sect:CyclicCover1} we give a method of construction of $\varphi_m(D,P)$.
It is shown that $\varphi_m(D,P)$ is a mod $m$ almost classical virtual link diagram. In Section~\ref{sect:main}, the main results, Lemma~\ref{lem2} and Theorem~\ref{thm2}, are stated and proved. In Section~\ref{sect:CyclicCover2} we give an alternative method of constructing cyclic covering virtual link diagrams. In Section~\ref{application} we show some applications. \section{Alexander numberings and cut systems}\label{secnormal} In this section we recall virtual link diagrams and Alexander numberings, and introduce the notions of an oriented cut point and a cut system, which are used for our construction of cyclic covering diagrams. A {\it virtual link diagram\/} is a generically immersed, closed and oriented 1-manifold in $\mathbb{R}^2$ with information of a positive, negative or virtual crossing at each double point. Here a {\it virtual crossing\/} means an encircled double point without over-under information \cite{rkauD}. {\it Generalized Reidemeister moves} are the local moves depicted in Figure~\ref{fgmoves}: The 3 moves on the top are {\it (classical) Reidemeister moves} and the 4 moves on the bottom are so-called {\it virtual Reidemeister moves}. Two virtual link diagrams $D$ and $D'$ are said to be {\it equivalent} (resp. {\it strongly equivalent}) if they are related by a finite sequence of generalized Reidemeister moves (resp. virtual Reidemeister moves) and isotopies of $\mathbb{R}^2$. A {\it virtual link\/} (resp. a {\it pre-virtual link}) is an equivalence class (resp. a strong equivalence class) of virtual link diagrams. \begin{figure} \caption{Generalized Reidemeister moves} \label{fgmoves} \end{figure} A {\it virtual path} of a virtual link diagram $D$ means a path (possibly a loop) on $D$ on which there are no classical crossings. A virtual link diagram $D'$ is said to be obtained from $D$ by a {\it detour move} if $D'$ is obtained by replacing a virtual path of $D$ with a path which is a virtual path of $D'$. 
Two diagrams $D$ and $D'$ are strongly equivalent if and only if they are related by a finite sequence of detour moves and isotopies of $\mathbb{R}^2$ (cf. \cite{rkk, rkauD}). Let $D$ be a virtual link diagram. A {\it semi-arc} of $D$ is a virtual path which is an immersed arc between two classical crossings of $D$ or an immersed loop. Let $m$ be a non-negative integer. An {\it Alexander numbering} (resp. a {\it mod $m$ Alexander numbering}) of $D$ is an assignment of a number in $\mathbb{Z}$ (resp. $\mathbb{Z}_m$) to each semi-arc of $D$ such that the numbers of the 4 semi-arcs around each classical crossing are as shown in Figure~\ref{fgalexnum} for some $i \in \mathbb{Z}$ (resp. $i \in \mathbb{Z}_m$). \begin{figure} \caption{Alexander numbering} \label{fgalexnum} \end{figure} Note that the numbers assigned to the semi-arcs around a virtual crossing are as depicted in Figure~\ref{fgalexnumv}. \begin{figure} \caption{Alexander numbering around a virtual crossing} \label{fgalexnumv} \end{figure} An example of an Alexander numbering is depicted in Figure~\ref{fgexAlexN1}. A classical link diagram always admits an Alexander numbering. \begin{figure} \caption{An Alexander numbering of a classical link diagram} \label{fgexAlexN1} \end{figure} Not every virtual link diagram admits an Alexander numbering. The virtual link diagram depicted in Figure~\ref{fgexAlexN2} (i) does not admit an Alexander numbering, and the virtual link diagram in Figure~\ref{fgexAlexN2} (ii) does. \begin{figure} \caption{Virtual link diagrams which do/do not admit an Alexander numbering} \label{fgexAlexN2} \end{figure} Figure~\ref{fgexMAlexN1} shows an example of a mod $3$ Alexander numbering, which is not an Alexander numbering. \begin{figure} \caption{A mod $3$ Alexander numbering of a virtual link diagram} \label{fgexMAlexN1} \end{figure} A virtual link diagram is {\it almost classical} (resp. {\it mod $m$ almost classical\/}) if it admits an Alexander numbering (resp. 
a mod $m$ Alexander numbering). A virtual link $L$ is {\it almost classical\/} (resp. {\it mod $m$ almost classical}) if there is an almost classical (resp. mod $m$ almost classical) virtual link diagram of $L$. H. Boden, R. Gaudreau, E. Harper, A. Nicas and L. White \cite{rboden} studied mod $m$ almost classical virtual links. By definition, any almost classical virtual link diagram is mod $m$ almost classical. A virtual link diagram is checkerboard colorable if and only if it is mod $2$ almost classical. It is shown in \cite{rboden} that for a mod $m$ almost classical virtual knot $K$, if $D$ is a minimal virtual knot diagram of $K$, then $D$ is mod $m$ almost classical. H. Dye introduced the notion of a cut point \cite{rdye2}, which is an \lq unoriented\rq \, cut point in our sense. The author \cite{rkn2004} generalized the Kauffman-Murasugi-Thistlethwaite theorem (\cite{rkau1987, rmurasugi, rthis}) on the span of the Jones polynomial of a classical link to checkerboard colorable and proper virtual links. Using cut points, H. Dye \cite{rdye2017} further extended this result to virtual link diagrams that are not checkerboard colorable. Using (unoriented) cut points, the author constructed in \cite{rkn2, rkn3} a map from the set of virtual links to the set of checkerboard colorable virtual links, i.e., the set of mod $2$ almost classical virtual links. In this paper, we generalize this to the mod $m$ case. An {\it oriented cut point} or simply a {\it cut point} is a point on an arc at which a local orientation of the arc is given. In this paper we denote it by a small triangle on the arc as in Figure~\ref{fgorientedcutpt}. Whenever cut points on a virtual link diagram are discussed, we assume that they are on semi-arcs of the diagram avoiding crossings. An oriented cut point is called {\it coherent} (resp. {\it incoherent}) if the local orientation indicated by the cut point is coherent (resp. incoherent) with the orientation of the virtual link diagram. 
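The distinction between coherent and incoherent cut points drives the bookkeeping used throughout: along a semi-arc, an Alexander numbering compatible with a cut system (defined below) jumps by $+1$ at each coherent cut point and by $-1$ at each incoherent one. The following Python sketch of this bookkeeping is our own illustration, not part of the paper; the encoding of cut points as signs $\pm 1$ in traversal order is an assumption of the sketch.

```python
# Toy model: a semi-arc carries a sequence of oriented cut points,
# encoded as +1 (coherent) or -1 (incoherent) in traversal order.

def numbers_along_semi_arc(start_value, signed_cut_points):
    """Values of the numbering on the pieces of a semi-arc, starting
    from f(a_-) = start_value; the value jumps by the sign of each
    cut point passed."""
    values = [start_value]
    for s in signed_cut_points:
        values.append(values[-1] + s)
    return values

# f(a_+) - f(a_-) = (#coherent) - (#incoherent):
cuts = [+1, +1, -1, +1]                      # 3 coherent, 1 incoherent
vals = numbers_along_semi_arc(0, cuts)
assert vals[-1] - vals[0] == sum(cuts) == 2

# On a semi-arc which is a loop, the signed count must vanish
# for the numbering to close up:
loop_cuts = [+1, -1, -1, +1]
assert sum(loop_cuts) == 0
```

This is exactly the computation behind Lemma~\ref{lem:CutNumberOnArc} below.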
\begin{figure} \caption{An oriented cut point on an arc} \label{fgorientedcutpt} \end{figure} Let $D$ be a virtual link diagram and $P$ a set of oriented cut points of $D$. We say that $P$ is a {\it cut system} if $D$ admits an Alexander numbering such that at each oriented cut point, the number increases by one in the direction of the oriented cut point (Figure~\ref{fgalexnumorict}). Such an Alexander numbering is called an {\it Alexander numbering of a virtual link diagram with a cut system}. See Figure~\ref{fgExcutPtAlex} for examples. \begin{figure} \caption{Alexander numbering of a virtual link diagram with a cut system} \label{fgalexnumorict} \end{figure} \begin{figure} \caption{Alexander numberings of virtual link diagrams with cut systems} \label{fgExcutPtAlex} \end{figure} For a virtual link diagram $D$ with a cut system $P$, let ${\rm Arc}(D,P)$ be the set of arcs (or loops) obtained from semi-arcs of $D$ by cutting along $P$. (If there is a semi-arc of $D$ which is a loop and has no cut points of $P$, then ${\rm Arc}(D,P)$ has the loop as an element.) An Alexander numbering of $D$ with $P$ is regarded as a map from ${\rm Arc}(D,P)$ to ${\mathbb Z}$. For a semi-arc $a$ of $D$ which is not a loop, we denote by $a_-$ (resp. $a_+$) the arc of ${\rm Arc}(D,P)$ which contains the starting point (resp. the terminal point) of $a$. \begin{lem}\label{lem:CutNumberOnArc} Let $f : {\rm Arc}(D,P) \to {\mathbb Z}$ be an Alexander numbering of a virtual link diagram $D$ with a cut system $P$. \begin{itemize} \item[(1)] For any semi-arc $a$ of $D$ which is not a loop, $f(a_+) - f(a_-)$ is the number of coherent cut points minus the number of incoherent cut points of $P$ appearing on $a$. \item[(2)] For any semi-arc $a$ of $D$ which is a loop, the number of coherent cut points minus the number of incoherent cut points of $P$ appearing on $a$ is $0$. 
\end{itemize} \end{lem} \begin{proof} This is clear, since when we move along $a$ from $a_-$ to $a_+$, the number assigned by $f$ changes by $+1$ (resp. $-1$) at each coherent (resp. incoherent) cut point. \end{proof} A {\it canonical cut system} of a virtual link diagram is a cut system which is obtained by introducing two oriented cut points as in Figure~\ref{fg:canocutsys} around each classical crossing. It is indeed a cut system, and an Alexander numbering looks as in Figure~\ref{fg:canocutsys} around each classical crossing. \begin{figure} \caption{Canonical cut system} \label{fg:canocutsys} \end{figure} The local transformations of oriented cut points depicted in Figure~\ref{fgOrientedCutmv} are called {\it oriented cut point moves}. For a virtual link diagram with a cut system, the result of an oriented cut point move is again a cut system of the same virtual link diagram. Note that the move III$'$ depicted in Figure~\ref{fgOrientedCutmv} is obtained from the move III modulo the moves II. \begin{figure} \caption{Oriented cut point moves} \label{fgOrientedCutmv} \end{figure} \begin{thm} \label{thm:cutpointmove} Two cut systems of the same virtual link diagram are related by a sequence of oriented cut point moves. \end{thm} \begin{proof} Let $P$ and $P'$ be cut systems of a virtual link diagram $D$. Let $f$ (resp. $f'$) be an Alexander numbering of $D$ with cut system $P$ (resp. $P'$). Applying a finite number of oriented cut point moves III to $P$, we obtain a cut system $P''$ and an Alexander numbering $f''$ such that the numbers on the 4 semi-arcs around each classical crossing are the same as those of $f'$. By Lemma~\ref{lem:CutNumberOnArc}, we see that for any semi-arc $a$ of $D$, the number of coherent cut points minus the number of incoherent cut points of $P''$ appearing on $a$ is equal to that of $P'$. Thus, by using oriented cut point moves I and II, $P''$ can be transformed to $P'$. 
\end{proof} \begin{cor}\label{cor:numbers} Let $D$ be a virtual link diagram and let $P$ be a cut system of $D$. The number of coherent cut points of $P$ equals that of incoherent cut points of $P$. \end{cor} \begin{proof} The canonical cut system for $D$ has the property that the number of coherent cut points equals that of incoherent cut points. Since each oriented cut point move preserves this property, by Theorem~\ref{thm:cutpointmove} we see that any cut system has the property. \end{proof} \section{Cyclic coverings of virtual link diagrams}\label{sect:CyclicCover1} In this section, we introduce a method of constructing a mod $m$ almost classical virtual link diagram $\varphi_m(D,P)$, which is determined up to strong equivalence, from a virtual link diagram $D$ with a cut system $P$. We denote by a pair $(D,P)$ a virtual link diagram $D$ with a cut system $P$. Moving $(D,P)$ slightly by an isotopy of ${\mathbb R}^2$, we assume that each cut point $p$ of $P$ is on a horizontal line $\ell(p)$ in ${\mathbb R}^2$ such that $\ell(p)$ intersects $D$ transversely avoiding all crossings of $D$ and $p$ is a unique cut point of $P$ on $\ell(p)$. Let $(D^0, P^0), (D^1, P^1), \dots, (D^{m-1}, P^{m-1})$ be $m$ parallel copies of $(D, P)$ with $(D^0, P^0)=(D,P)$ obtained from $(D,P)$ by sliding along the $x$-axis such that they appear from left to right in this order. For each cut point $p \in P$, we denote by $p^k$ the copy of $p$ in $P^k$ for $k \in \{0, \dots, m-1\}$. See Figure~\ref{fgAlexcov1p} for an example. (The Alexander numberings in the figure are used later.) \begin{figure} \caption{$3$ copies of a virtual link diagram with cut system} \label{fgAlexcov1p} \end{figure} For each $p \in P$, let $N(\ell(p))$ be a regular neighborhood of the horizontal line $\ell(p)$ in ${\mathbb R}^2$. In Figure~\ref{fgAlexcov2p}, $N(\ell(p))$ is the part between two dotted lines parallel to $\ell(p)$. 
The diagram $\cup_{k=0}^{m-1}D^k$ looks locally near $N(\ell(p))$ as in the upper part of Figure~\ref{fgAlexcov2p}. Replace it as in the lower part of the figure for every $p \in P$, where the dotted arc drawn at the bottom of the figure means a virtual path and we may put it anywhere as long as it contains only virtual crossings. The virtual link diagram obtained this way is denoted by $\varphi_m(D, P)$ and is called an {\it $m$-fold cyclic covering (virtual link) diagram} of $(D, P)$. In the early stage of this construction, we modified $(D,P)$ by an isotopy of ${\mathbb R}^2$. When we modify $(D,P)$ differently, the diagram $\varphi_m(D, P)$ may change. However, it is preserved up to strong equivalence. Although this fact can be seen by observing how the diagram $\varphi_m(D, P)$ changes by a modification of $(D,P)$, we will show it in a more general situation as Theorem~\ref{thm:General} in Section~\ref{sect:CyclicCover2}. \begin{figure} \caption{Construction of an $m$-fold cyclic covering} \label{fgAlexcov2p} \end{figure} For example, for $(D, P)$ depicted in Figure~\ref{fgAlexcov1p} (i), a $3$-fold cyclic covering virtual link diagram $\varphi_3(D, P)$ is shown in Figure~\ref{fgExAlexcov1p}. \begin{figure} \caption{A $3$-fold cyclic covering virtual link diagram of $(D,P)$} \label{fgExAlexcov1p} \end{figure} \begin{prop}\label{thm1} For a virtual link diagram $D$ with a cut system $P$, an $m$-fold cyclic covering virtual link diagram $\varphi_m(D, P)$ is mod $m$ almost classical. \end{prop} \begin{proof} Let $f$ be an Alexander numbering of $(D,P)$. For each $k \in \{0, \dots, m-1\}$, let $f^k$ denote the Alexander numbering of $(D^k, P^k)$ obtained from $f$ by shifting by $k$. As shown in Figure~\ref{fgAlexcovprfp}, the Alexander numberings $f^0, \dots, f^{m-1}$ induce a mod $m$ Alexander numbering of $\varphi_m(D, P)$. For example, see Figures~\ref{fgAlexcov1p} and~\ref{fgExAlexcov1p}. 
\end{proof} \begin{figure} \caption{Alexander numbering of a cyclic covering virtual link diagram} \label{fgAlexcovprfp} \end{figure} \section{The main theorem} \label{sect:main} In Section~\ref{sect:CyclicCover1}, we introduced an $m$-fold cyclic covering diagram $\varphi_m(D,P)$ for a virtual link diagram $D$ with a cut system $P$. In this section, we first show that $\varphi_m(D,P)$, up to strong equivalence, does not depend on $P$ (Lemma~\ref{lem2}). Hence we may denote it by $\varphi_m(D)$. Our main theorem is that if $D$ and $D'$ are equivalent then $\varphi_m(D,P)$ and $\varphi_m(D',P')$ are equivalent (Theorem~\ref{thm2}). This implies that we have a map $\varphi_m$ from the set of virtual links to the set of mod $m$ almost classical virtual links. \begin{lem}\label{lem2} Let $D$ be a virtual link diagram, and let $P_1$ and $P_2$ be cut systems of $D$. Then $\varphi_m(D,P_1)$ and $\varphi_m(D,P_2)$ are strongly equivalent. \end{lem} \begin{proof} By Theorem~\ref{thm:cutpointmove}, we may assume that $P_1$ and $P_2$ are related by a single oriented cut point move. Suppose that $P_1$ and $P_2$ are as in the left part of Figure~\ref{fgcmovprf}. Then $\varphi_m(D, P_1)$ and $\varphi_m(D, P_2)$ are as in the right part of the figure, which are related by detour moves. The other cases of oriented cut point moves are shown by a similar argument. \end{proof} \begin{figure} \caption{Results by an oriented cut point move} \label{fgcmovprf} \end{figure} The following is our main theorem. \begin{thm}\label{thm2} Let $(D,P)$ and $(D',P')$ be virtual link diagrams with cut systems. If $D$ and $D'$ are equivalent, then $\varphi_m(D,P)$ and $\varphi_m(D',P')$ are equivalent. \end{thm} \begin{proof} By Lemma~\ref{lem2}, it is sufficient to consider the case that $P$ and $P'$ are canonical cut systems. 
If $D'$ is related to $D$ by one of the Reidemeister moves, then $\varphi_m(D, P)$ and $\varphi_m(D',P')$ are related by $m$ Reidemeister moves, which are copies of the original Reidemeister move. Suppose that $D'$ is related to $D$ by a virtual Reidemeister move I (resp. II) as in Figure~\ref{fgovprfv} (i) (resp. (ii)). Let $P_*$ be the cut system obtained from $P$ by cut point moves I and II as in the figure. By Lemma~\ref{lem2}, $\varphi_m(D,P)$ and $\varphi_m(D,P_*)$ are equivalent. On the other hand, $\varphi_m(D',P')$ and $\varphi_m(D,P_*)$ are related by $m$ virtual Reidemeister moves I (resp. II). Thus $\varphi_m(D,P)$ and $\varphi_m(D',P')$ are equivalent. Suppose that $D'$ is related to $D$ by a virtual Reidemeister move III as in Figure~\ref{fgovprfv} (iii). Let $P_*$ (resp. $P'_*$) be the cut system obtained from $P$ (resp. $P'$) by cut point moves as in the figure. By Lemma~\ref{lem2}, $\varphi_m(D,P)$ (resp. $\varphi_m(D',P')$) and $\varphi_m(D,P_*)$ (resp. $\varphi_m(D',P'_*)$) are equivalent. On the other hand, $\varphi_m(D,P_*)$ and $\varphi_m(D',P'_*)$ are related by $m$ virtual Reidemeister moves III. Thus $\varphi_m(D,P)$ and $\varphi_m(D',P')$ are equivalent. Suppose that $D'$ is related to $D$ by a virtual Reidemeister move IV as in Figure~\ref{fgovprfv} (iv). Let $P_*$ (resp. $P'_*$) be the cut system obtained from $P$ (resp. $P'$) by cut point moves as in the figure. By Lemma~\ref{lem2}, $\varphi_m(D,P)$ (resp. $\varphi_m(D',P')$) and $\varphi_m(D,P_*)$ (resp. $\varphi_m(D',P'_*)$) are equivalent. On the other hand, $\varphi_m(D,P_*)$ and $\varphi_m(D',P'_*)$ are related by $m$ virtual Reidemeister moves IV. Thus $\varphi_m(D, P)$ and $\varphi_m(D', P')$ are equivalent. The other cases where the orientations of virtual link diagrams are different are shown by a similar argument. 
\begin{figure} \caption{Diagrams related by a virtual Reidemeister move} \label{fgovprfv} \end{figure} \end{proof} \section{An alternative construction of cyclic covering virtual link diagrams} \label{sect:CyclicCover2} In this section, we introduce two methods of constructing cyclic covering virtual link diagrams. The first one is a more general method, denoted by $\varphi_m^0(D,P)$, including the method introduced in Section~\ref{sect:CyclicCover1} as a special case. The second one is a method which is also a special case of the first one. The reader who does not need it might skip this section. In the construction of $\varphi_m(D,P)$ introduced in Section~\ref{sect:CyclicCover1}, we first modified $(D,P)$ so that each horizontal line $\ell(p)$ through $p \in P$ intersects $D$ transversely avoiding the crossings of $D$ and the other cut points of $P$, and then we considered $m$ parallel copies of $(D,P)$. However, we may define $\varphi_m(D,P)$ without this procedure. Let $(D,P)$ be a virtual link diagram with a cut system. Let $(D^k, P^k)$, $k=0, \dots, m-1$, be virtual link diagrams with cut systems such that each $(D^k, P^k)$ is a copy of $(D,P)$ and that the intersection of $D^k$ and $D^{k'}$ for $k \neq k'$ is empty or consists of virtual crossings. (Furthermore, we may weaken the assumption that $(D^k, P^k)$ is a copy of $(D,P)$ so that $(D^k, P^k)$ is isotopic to $(D,P)$ by an isotopy of ${\mathbb R}^2$ or even that $(D^k, P^k)$ is strongly equivalent to $(D,P)$.) For each $p \in P$, let $N(p)$ be a regular neighborhood of $p$ in $D$, which is a small arc on $D$ containing $p$. Let $p_-$ and $p_+$ be the endpoints of $N(p)$ such that the orientation of the virtual link diagram restricted to $N(p)$ is from $p_-$ to $p_+$. For each $k \in \{0, \dots, m-1\} = {\mathbb Z}_m$, let $p^k$, $N(p^k)$, $p^k_-$ and $p^k_+$ be the corresponding copy of $p$, $N(p)$, $p_-$ and $p_+$ in $D^k$. 
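The reconnection rule described in the next paragraph amounts to bookkeeping on the sheet indices $0, \dots, m-1$: each cut point $p$ sends the sheet index $k$ to $k-\epsilon(p)$ modulo $m$, where $\epsilon(p)=\pm 1$ according to whether $p$ is coherent or incoherent. The following Python sketch (our own illustration, not part of the construction; the encoding of cut points as signs is an assumption) counts the components of the covering of a knot diagram from the signs met along one traversal.

```python
from math import gcd

def covering_components(m, signs):
    """Number of link components of the m-fold cyclic covering of a
    KNOT diagram whose traversal meets cut points with the given signs
    (+1 coherent, -1 incoherent): each cut point shifts the sheet index
    by minus its sign, so one full traversal of the base knot permutes
    the sheets by k -> k + shift (mod m)."""
    shift = (-sum(signs)) % m
    # The orbits of k -> k + shift on Z_m number gcd(shift, m),
    # with the convention gcd(0, m) = m.
    return gcd(shift, m)

# A cut system always has equally many coherent and incoherent cut
# points, so the shift is 0 and the cover of a knot diagram has m
# components:
assert covering_components(3, [+1, -1, +1, -1]) == 3
# An unbalanced sign sequence (hence not a cut system) would merge sheets:
assert covering_components(4, [+1, +1]) == 2
```

This is the arithmetic behind Corollary~\ref{thmtwistknot} combined with Corollary~\ref{cor:numbers}.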
Remove $N(p^k)$ for all $p \in P$ and $k \in \{0, \dots, m-1\}$ from the diagram $\cup_{k=0}^{m-1}D^k$ and, for each $p \in P$ and $k \in \{0, \dots, m-1\}$, connect the endpoint $p^k_-$ to $p^{k-\epsilon(p)}_+$ by any virtual path. Here $\epsilon(p)$ is $+1$ (resp. $-1$) if $p$ is coherent (resp. incoherent). We denote by $\varphi_m^0(D,P)$ a virtual link diagram obtained this way. Consider an Alexander numbering $f$ of $(D,P)$ and let $f^k$ be the Alexander numbering of $(D^k, P^k)$ obtained from $f$ by shifting by $k$. Then $f^0, \dots, f^{m-1}$ induce a mod $m$ Alexander numbering of $\varphi_m^0(D,P)$. Thus $\varphi_m^0(D,P)$ is mod $m$ almost classical. The method of construction of $\varphi_m(D,P)$ introduced in Section~\ref{sect:CyclicCover1} is a special case of the construction of $\varphi_m^0(D,P)$. \begin{thm}\label{thm:General} For a virtual link diagram $D$ with a cut system $P$, the diagram $\varphi_m^0(D,P)$ is unique up to strong equivalence. \end{thm} \begin{proof} Let $D_1$ and $D_2$ be virtual link diagrams obtained from the same $(D, P)$ by the construction for $\varphi_m^0(D,P)$ introduced above. By the definition of $\varphi_m^0(D,P)$, every classical crossing of $D_1$ (or $D_2$) can be labelled uniquely with $c^k$ for a classical crossing $c$ of $D$ and $k \in \{0, \dots, m-1\}$. Thus there is a natural bijection between the classical crossings of $D_1$ and those of $D_2$. By an ambient isotopy of ${\mathbb R}^2$, we may assume that $D_1$ and $D_2$ coincide in a regular neighborhood of every classical crossing. Let $E$ denote the closure of the complement of the regular neighborhoods of all classical crossings of $D_1$ (or of $D_2$) in ${\mathbb R}^2$. The intersection $D_1 \cap E$ (or $D_2 \cap E$) consists of virtual paths which are properly immersed arcs or immersed loops in $E$. Let $A(D_1)$ (resp. $A(D_2)$) be the set of properly immersed arcs of $D_1 \cap E$ (resp. $D_2 \cap E$), and let $L(D_1)$ (resp. 
$L(D_2)$) be the set of immersed loops of $D_1 \cap E$ (resp. $D_2 \cap E)$. Let $a_1 \in A(D_1)$ and $a_2 \in A(D_2)$ be virtual paths starting at the same point $a_1(0)=a_2(0)$ in $\partial (D_1\cap E)= \partial (D_2\cap E)$, and let $a_1(1)$ and $a_2(1)$ be their terminal points in $\partial (D_1\cap E)= \partial (D_2\cap E)$. We assert that $a_1(1) = a_2(1)$. This is seen as follows. Let $E(D)$ be the complement of the regular neighborhoods of all classical crossings of $D$ in ${\mathbb R}^2$. The intersection $D \cap E(D)$ consists of virtual paths which are properly immersed arcs or immersed loops in $E(D)$. Let $a(0)$ be a point of $\partial (D \cap E(D))$ corresponding to $a_1(0)$ and let $a$ be the virtual path of $D \cap E(D)$ starting at $a(0)$. Let $a(1)$ be the terminal point of $a$ in $\partial (D \cap E(D))$. Then $a_1(0) = a_2(0) = a(0)^k$, $a_1(1)= a(1)^{k'}$ and $a_2(1) = a(1)^{k''}$ for some $k, k', k'' \in \{0, \dots, m-1\} = {\mathbb Z}_m$. Note that $k' -k$ is the sum of $-\epsilon(p)$ for all cut points $p \in P$ appearing on $a$, and so is $k'' -k$. Thus, we see that $k' =k''$ and $a_1(1) = a_2(1)$. Therefore, there is a bijection between $A(D_1)$ and $A(D_2)$ such that corresponding arcs $a_1 \in A(D_1)$ and $a_2 \in A(D_2)$ have the same starting point and the same terminal point. Every loop of $L(D_1)$ (or $L(D_2)$) can be labelled as $\ell^k$ for a virtual path $\ell$ of $D$ which is an immersed loop and $k \in \{0, \dots, m-1\}$. Thus there is a bijection between $L(D_1)$ and $L(D_2)$. Replacing, by detour moves, the virtual paths in $A(D_1)$ and $L(D_1)$ with the corresponding elements of $A(D_2)$ and $L(D_2)$, we obtain $D_2$ from $D_1$. This implies that $D_1$ and $D_2$ are strongly equivalent. \end{proof} We introduce another method of construction of cyclic covering virtual link diagrams, which is a special case of the method above. Let $(D, P)$ be a virtual link diagram with a cut system. 
Put $m$ copies of $(D,P)$ in ${\mathbb R}^2$, say $(D^0, P^0), \dots, (D^{m-1}, P^{m-1})$, such that all corresponding semi-arcs are in parallel as in Figure~\ref{fgAlexcov2p23} and all crossings between $D^k$ and $D^{k'}$ for $k \ne k'$ are virtual crossings. Here the semi-arcs of $D^{k+1}$ appear on the right of $D^{k}$ with respect to the orientation of $D$ as in Figure~\ref{fgAlexcov2p23}. See Figure~\ref{fgAlexcov1p2} (i) and (ii) for an example with $m=3$. \begin{figure} \caption{Parallel virtual link diagrams} \label{fgAlexcov2p23} \end{figure} \begin{figure} \caption{A $3$-fold cyclic covering virtual link diagram} \label{fgAlexcov1p2} \end{figure} For a cut point $p \in P$, let $p^k$ denote the corresponding cut point of $P^k$. Remove regular neighborhoods of all $p^k$ for $p \in P$ and $k \in \{0, \dots, m-1\} = {\mathbb Z}_m$ from $\cup_{k=0}^{m-1} D^k$, and connect the endpoints by virtual paths as in Figure~\ref{fgAlexcov2p24} (iii) (resp. (iv)) if the cut point is coherent (resp. incoherent) as in Figure~\ref{fgAlexcov2p24} (i) (resp. (ii)). \begin{figure} \caption{Replacement of neighborhoods of cut points} \label{fgAlexcov2p24} \end{figure} Then we obtain a virtual link diagram. Let us denote it by $\varphi_m^1(D,P)$. See Figure~\ref{fgAlexcov1p2} (iii) for an example. This concrete construction is also a special case of the general construction $\varphi^0_m(D,P)$. By Theorem~\ref{thm:General}, $\varphi_m(D,P)$, $\varphi_m^1(D,P)$ and $\varphi^0_m(D,P)$ are all strongly equivalent. We call them {\it cyclic covering (virtual link) diagrams}. From this construction we see the following. \begin{cor}\label{thmtwistknot} Let $(D,P)$ be a virtual knot diagram with a cut system. Then $\varphi_m(D,P)$ is an $m$-component virtual link diagram. \end{cor} \begin{proof} Consider $\varphi_m^1(D,P)$. 
Since the number of coherent cut points of $P$ equals that of incoherent cut points of $P$ (Corollary~\ref{cor:numbers}), the number of twists as in Figure~\ref{fgAlexcov2p24} (iii) appearing in $\varphi_m^1(D,P)$ equals that of the opposite twists as in Figure~\ref{fgAlexcov2p24} (iv). Thus $\varphi_m^1(D,P)$ is an $m$-component virtual link diagram, and so is $\varphi_m(D,P)$. \end{proof} \section{Applications}\label{application} First, we demonstrate how Theorem~\ref{thm2} is used to show that two virtual link diagrams are not equivalent. Let $(D,P)$ and $(D',P')$ be the virtual link diagrams with cut systems depicted in Figure~\ref{fgAlexapp1p} (i) and (ii). Then $\varphi_3(D,P)$ and $\varphi_3(D',P')$ are as in the figure. \begin{figure} \caption{Example of mod 3 cyclic covering virtual link diagram} \label{fgAlexapp1p} \end{figure} It is easily seen that $\varphi_3(D,P)$ and $\varphi_3(D',P')$ are not equivalent, since any pair of components of $\varphi_3(D,P)$ has linking number $0$ and any pair of components of $\varphi_3(D',P')$ has linking number $1$. By Theorem~\ref{thm2}, we conclude that $D$ and $D'$ are not equivalent. Theorem~\ref{thm2} implies Theorem~\ref{thm:appli} below, which can be used to show that some virtual link diagrams are never equivalent to mod $m$ almost classical virtual link diagrams. \begin{lem} \label{lem:appli} Let $D$ be a mod $m$ almost classical virtual link diagram. For any cut system $P$ of $D$, $\varphi_m(D, P)$ is strongly equivalent to a virtual link diagram which is a disjoint union of $m$ copies of $D$. \end{lem} \begin{proof} There is a cut system $P_0$ of $D$ such that for each semi-arc of $D$, there are no cut points on it or there are $m$ coherent (or incoherent) cut points on it. Each semi-arc of $D$ with $m$ coherent (or incoherent) cut points yields $m$ copies of such semi-arcs in the $m$ parallel copies of $D$, and $m$ virtual paths in $\varphi_m(D, P_0)$ as in Figure~\ref{fgAlexcov3p}. 
These $m$ virtual paths in $\varphi_m(D, P_0)$ can be replaced with $m$ straight virtual paths by detour moves, and we obtain a disjoint union of $m$ copies of $D$. This implies that $\varphi_m(D, P_0)$ is strongly equivalent to a disjoint union of $m$ copies of $D$. By Lemma~\ref{lem2} (or Theorem~\ref{thm:General}), we see that $\varphi_m(D, P)$ is strongly equivalent to a disjoint union of $m$ copies of $D$. \end{proof} \begin{figure} \caption{mod $m$ almost classical virtual link and its oriented cut points} \label{fgAlexcov3p} \end{figure} \begin{thm} \label{thm:appli} If $\varphi_m(D,P)$ is not equivalent to a disjoint union of $m$ copies of $D$, then $D$ is never equivalent to a mod $m$ almost classical virtual link diagram. \end{thm} \begin{proof} Suppose that $D$ is equivalent to a mod $m$ almost classical virtual link diagram $D'$. By Lemma~\ref{lem:appli}, $\varphi_m(D', P')$ is equivalent to a disjoint union of $m$ copies of $D'$. By Theorem~\ref{thm2}, $\varphi_m(D,P)$ and $\varphi_m(D', P')$ are equivalent. Thus, $\varphi_m(D, P)$ is equivalent to a disjoint union of $m$ copies of $D'$, and hence equivalent to a disjoint union of $m$ copies of $D$. This contradicts the hypothesis. \end{proof} Let $D'$ be the virtual link diagram depicted in Figure~\ref{fgAlexapp1p}. For the cut system $P'$ in the figure, $\varphi_3(D',P')$ is not equivalent to a disjoint union of $3$ copies of $D'$, since a pair of its components has linking number $1$. By Theorem~\ref{thm:appli}, we can conclude that $D'$ is never equivalent to a mod $3$ almost classical virtual link diagram. \noindent {\bf Acknowledgement}\\ The author would like to thank Seiichi Kamada and Shin Satoh for their fruitful conversations. \end{document}
\begin{document} \title{Apolarity for determinants and permanents of generic matrices} \begin{abstract} We show that the apolar ideals to the determinant and permanent of a generic matrix, the Pfaffian of a generic skew symmetric matrix and the Hafnian of a generic symmetric matrix are each generated in degree two. In each case we specify the generators and a Gr\"{o}bner basis of the apolar ideal. As a consequence, using a result of K.~Ranestad and F.-O. Schreyer we give lower bounds to the cactus rank and rank of each of these invariants. We compare these bounds with those obtained by J. Landsberg and Z. Teitler. \end{abstract} \section{Introduction} \label{intro} This paper is originally motivated by a question from Zach Teitler about the generating degree of the annihilator ideal of the determinant and the permanent of a generic $n\times n$ matrix. Here annihilator is meant in the sense of the apolar pairing, i.e. Macaulay's inverse system. Our main result is that the apolar ideals of the determinant and of the permanent of a generic matrix are generated in degree $2$ (Theorems \ref{thm:main-generic-determinant} and \ref{thm:main-generic-permanent}). The reason for Teitler's interest in this problem is the recent paper by Kristian Ranestad and Frank-Olaf~Schreyer \cite{RS}, which gives a lower bound for smoothable rank, border rank and cactus rank of a homogeneous polynomial in terms of the generating degree of the apolar ideal and the dimension of the Artinian apolar algebra defined by the apolar ideal. We apply this and our result to bounding the scheme/cactus length of the determinant and the permanent of the generic matrix (Theorem \ref{generic-det-perm-RS-rank}). 
In section \ref{Annihilator of the Pfaffian and Hafnian} we give the analogous result for the annihilator ideal of the Pfaffian of a generic skew symmetric matrix (Theorem \ref{thm:Pfaffian-main-theorem}) and the annihilator of the Hafnian of a generic symmetric matrix (Theorem \ref{thm:Hafnian-main-theorem-cor}). In a sequel paper \cite{Sh2} we study the apolar ideal of the determinant and permanent of the generic symmetric matrix. Let $\sf k$ be a field of characteristic zero or characteristic $p>2$, and let $A=(a_{ij})$ be a square matrix of size $n$ with $n^{2}$ distinct variables. The determinant and permanent of $A$ are homogeneous polynomials of degree $n$. Let $R={\sf k}[a_{ij}]$ be a polynomial ring and $S={\sf k}[d_{ij}]$ be the ring of inverse polynomials associated to $R$, and let $R_k$ and $S_k$ denote the degree-$k$ homogeneous summands. Then $S$ acts on $R$ by contraction: \begin{equation} ({d_{ij}})^k \circ (a_{uv})^\ell=\begin{cases} a_{uv}^{\ell-k} & \text{if $(i,j)=(u,v)$ and $k\le \ell$},\\ 0 & \text{otherwise}. \end{cases} \end{equation} If $h\in S_k$ and $F \in R_n$, then we have $h\circ F\in R_{n-k}$. This action extends multilinearly to the action of $S$ on $R$. When the characteristic of the field $\sf k$ is zero or $\mathrm{char}{\sf k}=p$ is greater than the degree of $F$, the contraction action can be replaced by the action of partial differential operators without coefficients (\cite{IK}, Appendix A, and \cite{Ge}). \begin{defi} To each degree-$j$ homogeneous element $F\in R_j$ we associate the ideal $I=\mathrm {Ann} (F)$ in $S={\sf k}[d_{ij}]$ consisting of the polynomials $\Phi$ such that $\Phi\circ F=0$. We call $I=\mathrm {Ann} (F)$ the \emph{apolar ideal} of $F$, and the quotient algebra $S/\mathrm {Ann} (F)$ the \emph{apolar algebra} of $F$.\par For $F \in R$, the set $\mathrm {Ann} (F) \subset S$ is an ideal and we have \begin{equation*} {(\mathrm {Ann}(F))}_{k}=\{ h \in S_k|h \circ F=0\}. 
\end{equation*} \end{defi} \begin{remark} Let $\phi: S_k\times R_k\rightarrow {\sf k}$ be the pairing $\phi(g,f)=g\circ f$, and let $V$ be a vector subspace of $R_k$; then we have \begin{equation} \label{eq:Intro-dim} \dim_{\sf k} (V^\perp)=\dim_{\sf k} S_k-\dim_{\sf k} V. \end{equation} \end{remark} For $V\subset R_k$, we denote by $V^\perp$ the space $\mathrm{Ann}(V)\cap S_k$. Let $F$ be a form of degree $j$ in $R$. We denote by $<F>_{j-k}$ the vector space $S_k \circ F \subset R_{j-k}$ (\cite{IK}). We denote by $M_k(A)$ the vector subspace of $R$ spanned by the $k \times k$ minors of $A$. \begin{lem}\label{lem:intro-gen} \begin{equation} S_k\circ(\det(A))=M_{n-k}(A) \subset R_{n-k}. \end{equation} \end{lem} \begin{proof} It is easy to see that \begin{equation*} S_k\circ(\det(A))\subset M_{n-k}(A) \subset R_{n-k}. \end{equation*} For the other inclusion, let $M_{\widehat I,\widehat J}(A)$, with $I=(i_1,\ldots ,i_k)$, $J=(j_1,\ldots ,j_k)$, $1\le i_1< i_2< \cdots < i_k\le n$, $1\le j_1< j_2< \cdots < j_k\le n$, be the $(n-k)\times (n-k)$ minor of $A$ one obtains by deleting the rows indexed by $I$ and the columns indexed by $J$ from $A$. Now it is easy to see that $$M_{\widehat{I},\widehat{J}}=\pm (d_{i_1,j_1}\cdot d_{i_2,j_2}\cdots d_{i_k,j_k}) \circ \det (A).$$ Hence $M_{\widehat{I},\widehat{J}}\in S_k \circ (\det(A))$. \end{proof} \begin{remark}(see \cite{IK}, page 69, Lemma 2.15)\label{remark:introIK} Let $F \in R$ with $\deg F=j$, and let $k \leq j$. Then we have \begin{equation}\label{eq:introIK} {(\mathrm {Ann}(F))}_{k}=\{h \in S_k\mid h \circ (S_{j-k}\circ F)=0\}=(\mathrm {Ann} (S_{j-k}\circ F))_k. \end{equation} \end{remark} \begin{remark}\label{generalnonsense-intro} By Lemma \ref{lem:intro-gen} and Remark \ref{remark:introIK} we have \begin{equation*} \mathrm {Ann}(\det(A))_k={M_k(A)}^{\perp}. \end{equation*} \end{remark} \begin{ex} Let $n=3$,\begin{center} $ A= \left( \begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{array} \right)$.
\end{center} Let $M_{ij}$ denote the signed $2\times 2$ minor (cofactor) of $A$ corresponding to the entry $a_{ij}$, and let $P_{ij}$ denote the corresponding $2\times 2$ permanent in the dual variables $d_{uv}$. \emph{Question}: Does $P_{11}=d_{22}d_{33}+d_{32}d_{23}$ annihilate $\det(A)= a_{11}M_{11}+a_{12}M_{12}+a_{13}M_{13}$? The following computations allow us to answer this. $P_{11}\circ a_{11}M_{11}= (d_{22}d_{33}+d_{23}d_{32})\circ (a_{11}a_{22}a_{33}-a_{11}a_{23}a_{32})=a_{11}-a_{11}=0.$ $P_{11}\circ a_{12}M_{12}= (d_{22}d_{33}+d_{23}d_{32})\circ (a_{12}a_{21}a_{33}-a_{12}a_{23}a_{31})=0.$ $P_{11}\circ a_{13}M_{13}= (d_{22}d_{33}+d_{23}d_{32})\circ (a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31})=0.$ Hence $P_{11}$ annihilates the determinant. \newline It is easy to see that when $n=3$, $P_{ij}\circ M_{kl}=0$ for each $1\leq i,j,k,l\leq 3$. So in the case $n=3$ the annihilator of the determinant of a generic matrix certainly contains all the $2 \times 2$ permanents of $D=(d_{ij})$. \end{ex} \section{The apolar algebras associated to the $n\times n$ generic matrix} \label{generic} In this section we determine the annihilator ideals of the determinant and the permanent of a generic $n\times n$ matrix. In section \ref{sec:Hilbert-generic} we review the dimension of the subspace of $k\times k$ minors and permanents of an $n\times n$ generic matrix. In section \ref{Generators of the apolar ideal} we determine the generators of the apolar ideal of the determinant and permanent of a generic matrix. We continue to employ the notations of section \ref{intro}, so $R={\sf k}[a_{ij}]$ is a polynomial ring and $S={\sf k}[d_{ij}]$ is the ring of inverse polynomials associated to $R$, and $S$ acts on $R$ by contraction. \subsection{Hilbert function and dimension of spaces of minors and permanents} \label{sec:Hilbert-generic} Denote by $\mathfrak A_A=S/\mathrm {Ann} (\det (A))$ the \emph{apolar algebra} of the determinant of the matrix $A$.
Recall that the Hilbert function of $\mathfrak A_A$ is defined by $H(\mathfrak A_A)_i=\dim_{\sf{k}} (\mathfrak A_A)_{i}$ for all $i=0,1,\ldots \,.$ \begin{defi} \label{def:degree} Let $F$ be a polynomial in $R$. We define $\deg(\mathrm {Ann}(F))$ to be the length of $S/\mathrm{Ann}(F)$. \end{defi} The number of $k \times k$ minors and of $k\times k$ permanents of a generic $n\times n$ matrix is ${n\choose k }^2$. The $k\times k$ minors form a linearly independent set (\cite{BC} Theorem 5.3 and Remark 5.4), and the $k\times k$ permanents form another linearly independent set. To show the linear independence of these two sets we choose a term order, for example the diagonal order, in which the main diagonal term is the Gr\"{o}bner initial term. Now the initial terms, being distinct, give the linear independence of each of the two sets (\cite{LS}, page 197). So the dimension of the space of $k\times k$ minors of an $n \times n$ matrix and the dimension of the space of $k\times k$ permanents of an $n\times n$ matrix are both ${n\choose k }^2$. By Lemma~\ref{lem:intro-gen} and Remark \ref{generalnonsense-intro} we have \begin{equation} \label{eq:Hilbert-generic} H(S/\mathrm {Ann}(\det{A}))_k=H(S/\mathrm {Ann}(\mathrm{Per}(A)))_k={n\choose k }^2. \end{equation} So the length $\dim_{\sf{k}} (\mathfrak A_A)$ satisfies \begin{equation} \label{eq:sum-Hilbert-generic} \dim_{\sf k}(\mathfrak A_A)=\sum_{k=0}^{k=n} {n\choose k }^2={2n\choose n }. \end{equation} A combinatorial proof of Equation \ref{eq:sum-Hilbert-generic} can be found in \cite{ST}, Example 1.1.17. \subsection{Generators of the apolar ideal} \label{Generators of the apolar ideal} In this section we determine the generators of the apolar ideal of the determinant and permanent of a generic matrix.
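The key degree-two relations behind these generators are easy to check by machine for small $n$. The following self-contained Python sketch (not part of the paper's own computations) implements the contraction action of section \ref{intro} directly on the square-free terms of $\det(A)$ and verifies, for $n=4$, that every $2\times 2$ permanent in the dual variables annihilates the determinant:

```python
from itertools import permutations, combinations

def det_poly(n):
    # det of the generic n x n matrix, stored as {frozenset of (row, col): sign};
    # every term of the determinant is square-free, so a frozenset of entries suffices
    poly = {}
    for sigma in permutations(range(n)):
        sign = 1
        for x, y in combinations(range(n), 2):
            if sigma[x] > sigma[y]:
                sign = -sign
        poly[frozenset((i, sigma[i]) for i in range(n))] = sign
    return poly

def contract(vs, poly):
    # contraction of a square-free polynomial by the monomial prod d_v for v in vs
    out = {}
    for mono, c in poly.items():
        if all(v in mono for v in vs):
            rest = frozenset(mono - set(vs))
            out[rest] = out.get(rest, 0) + c
    return {m: c for m, c in out.items() if c}

n = 4
F = det_poly(n)
# each 2x2 permanent d_ij*d_kl + d_il*d_kj of the dual matrix kills det(A)
for i, k in combinations(range(n), 2):       # rows i < k
    for j, l in combinations(range(n), 2):   # columns j < l
        p1 = contract([(i, j), (k, l)], F)
        p2 = contract([(i, l), (k, j)], F)
        total = {m: p1.get(m, 0) + p2.get(m, 0) for m in set(p1) | set(p2)}
        assert all(c == 0 for c in total.values())
print("every 2x2 permanent annihilates det(A) for n =", n)
```

The cancellation seen here is exactly the pairing of permutations $\sigma_1$ and $\tau\sigma_1$ used in the proof of Lemma \ref{lem:generic-1}.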
\begin{Notation} For a generic $n\times n$ matrix $A=(a_{ij})$, the permanent of $A$ is a polynomial of degree $n$ defined as follows: \begin{equation*} \mathrm{Per}(A)=\sum _ {\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}. \end{equation*} \end{Notation} \begin{lem} \label{lem:generic-1} Let $A=(a_{ij})$ be a generic $n\times n$ matrix. Then each $2\times 2$ permanent of $D=(d_{ij})$ annihilates the determinant of $A$. \end{lem} \begin{proof} Assume we have an arbitrary $2\times 2$ permanent $d_{ij}d_{kl}+d_{il}d_{kj}$ corresponding to \begin{center} $ P= \left( \begin{array}{cc} d_{ij} & d_{il} \\ d_{kj} & d_{kl} \\ \end{array} \right)$. \end{center} Recall that $\det(A)=\sum _ {\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}$. There are $n!$ terms in the expansion of the determinant. If a term contains neither the monomial $a_{ij} a_{kl}$ nor the monomial $a_{il} a_{kj}$, then the action of the permanent $d_{ij}d_{kl}+d_{il}d_{kj}$ on it is zero. There are $(n-2)!$ terms which contain the monomial $a_{ij} a_{kl}$ and $(n-2)!$ terms which contain the monomial $a_{il} a_{kj}$. So assume we have a permutation $\sigma_{1}$ of $n$ objects with $\sigma_1(i)=j$ and $\sigma_1(k)=l$, so that its term contains $a_{ij}$ and $a_{kl}$. Corresponding to $\sigma_{1}$ we also have the permutation $\sigma_{2}=\tau \sigma_{1}$, where $\tau=(j,l)$ is a transposition, and $\mathrm{sgn}(\sigma_{2})=\mathrm{sgn}(\tau\sigma_{1})=-\mathrm{sgn}(\sigma_{1})$. Thus corresponding to each term of the determinant which contains the monomial $a_{ij} a_{kl}$ there is a term containing $a_{il} a_{kj}$ that agrees with it outside these variables but carries the opposite sign; the two contributions cancel, so the action of the permanent $d_{ij}d_{kl}+d_{il}d_{kj}$ on $\det (A)$ is zero. \end{proof} \begin{defi} \label{def:generic-spaces} Let $A=(a_{ij})$ and $D=(d_{ij})$ be two generic matrices with entries in the polynomial ring $R={\sf k}[a_{ij}]$, and in the ring of differential operators $S={\sf k}[d_{ij}]$, respectively.
Let $\{\mathcal P_A\}$ and $\{\mathcal M_A\}$ denote the set of all $2\times 2$ permanents and the set of all $2 \times 2$ minors of $A$, and $\{\mathcal P_D\}$ and $\{\mathcal M_D\}$ the corresponding sets for $D$. Let $\mathcal P_A$, $\mathcal M_A=M_2(A)$, $\mathcal P_D$ and $\mathcal M_D=M_2(D)$ denote the spaces they span, respectively. \end{defi} \begin{cor}\label{cor:generic2by2} Each $2\times 2$ permanent of $D$ annihilates $\mathcal M_A$. \end{cor} \begin{proof} By Lemma \ref{lem:generic-1}, $\mathcal P_D\circ \det (A)=0$. Let $F=\det (A)$. We have $$ (\mathrm {Ann}\, F)_2= (\mathrm {Ann}(S_{n-2}\circ F))_2.$$ Hence $$\mathcal P_D \circ \det (A)=0 \Longleftrightarrow \mathcal P_D \circ (S_{n-2}\circ\det(A))=0 \Longleftrightarrow \mathcal P_D\circ \mathcal M_A=0.$$ \end{proof} We also know that the square of any element of $D$, and any product of two or more elements from the same row or column of $D$, annihilates $\det(A)$. \begin{defi} \label{def:generic-acceptable} A monomial in the $n^2$ variables of the ring $S={\sf k}[d_{ij}]$ is \emph{acceptable} if it is square-free and has no two variables from the same row or column of $D$. A polynomial is acceptable if it can be written as a sum of acceptable monomials. \end{defi} We denote by $<X>$ the ${\sf k}$-vector space span of the set $X$. \begin{lem}\label{lem:generic-acceptables} $\mathcal P_D\oplus \mathcal M_D=<$degree 2 acceptable polynomials in $S>$. \end{lem} \begin{proof} Let $d_{ij}d_{kl}$ be an arbitrary acceptable monomial of degree 2. Since $\mathrm{char} (\sf k)\not=2$ we have: \begin{equation*} d_{ij}d_{kl}=1/2((d_{ij}d_{kl}-d_{il}d_{kj})+(d_{ij}d_{kl}+d_{il}d_{kj})), \end{equation*} a sum of a $2\times 2$ minor and a $2\times 2$ permanent of $D$ (up to the factor $1/2$), so every acceptable monomial of degree 2 lies in $\mathcal P_D+\mathcal M_D$. By Equation \ref{eq:Hilbert-generic} \begin{equation*} \dim \mathcal P_D=\dim \mathcal M_D={n\choose2}^2. \end{equation*} Let $\Psi=<$degree 2 acceptable polynomials in $S>$, and let $\mathcal U_D$ denote the span of the unacceptable monomials of degree 2. Then \begin{equation*} \dim\Psi=\dim S_2-\dim \mathcal U_D= {n^2+1\choose 2}-(n^2+{n\choose 2}(2n)).
\end{equation*} Every $2\times 2$ minor and every $2\times 2$ permanent of $D$ is an acceptable polynomial, and each acceptable monomial of degree 2 is a linear combination of a minor and a permanent, so $\mathcal P_D+\mathcal M_D=\Psi$. So we have \begin{equation*} \dim(\mathcal P_D+ \mathcal M_D)={n^2+1\choose 2}-(n^2+{n\choose 2}(2n))=\dim \mathcal P_D+\dim \mathcal M_D. \end{equation*} Hence $\mathcal P_D\cap \mathcal M_D=0$. \end{proof} Denote the space of all unacceptable polynomials of degree 2 by $\mathcal U_D$. We have shown that $(\mathcal P_D+\mathcal U_D)\circ \mathcal M_A=0$, so $\mathcal P_D+\mathcal U_D \subset \mathrm {Ann} (\mathcal M_A)$. Then Equation \ref{eq:Intro-dim} implies $$\dim_{\sf k} (\mathrm {Ann}( \mathcal M_A))_2=\dim_{\sf k} S_2- \dim_{\sf k} \mathcal M_A.$$ \begin{lem}\label{lem:generic-pre1} $ \mathrm {Ann} (\mathcal M_A)\cap S_2 = \mathcal P_D+\mathcal U_D$. \end{lem} \begin{proof} By Lemma \ref{lem:generic-acceptables}, $\mathcal P_D+ \mathcal M_D$ is complementary to $\mathcal U_D$ in $S_2$. So, since $\dim \mathcal M_A=\dim \mathcal M_D$, we have $$ \dim ((\mathrm {Ann} (\mathcal M_A))_2)=\dim S_2 -\dim \mathcal M_A=\dim S_2-\dim \mathcal M_D=\dim (\mathcal P_D+\mathcal U_D). $$ \end{proof} \begin{Notation} We define the homomorphism $\xi:R\rightarrow S$ by setting $\xi(a_{ij})=d_{ij}$; for a monomial $v\in R$ we denote by $\hat{v}=\xi(v)$ the corresponding monomial of $S$. \end{Notation} \begin{remark}\label{remark:main-ann} Let $f= \sum_{i=1}^{i=k} \alpha_i v_i \in R_n$ with $\alpha_{i}\in \sf k$, $\alpha_1\neq 0$, and with the $v_i$ distinct monomials. Then we will have: \begin{equation}\label{eq:main-ann} \mathrm {Ann}(f) \cap S_n=<\{\alpha_j \hat{v}_1-\alpha_1\hat{v}_j\}_{j=2}^{k},\,<v_1,...,v_k>^\perp>, \end{equation} where $<v_1,...,v_k>^\perp=\mathrm {Ann}(<v_1,...,v_k>)\cap S_n$. \end{remark} \begin{lem}\label{lem:main-pre} \begin{equation}\label{eq:main-pre} (\mathcal P_D+\mathcal U_D)_k \subset \mathrm {Ann}(M_k(A)) \cap S_k.
\end{equation} \end{lem} \begin{proof} We have: (1) $ \mathcal P_D\circ \det (A)=0 \Longleftrightarrow \mathcal P_D\circ (S_{n-2}\circ\det(A))=0 \Longleftrightarrow \mathcal P_D\circ \mathcal M_A=0.$ (2) $(\mathrm {Ann}(\det(A))) \cap S_2= \mathcal P_D+\mathcal U_D \Rightarrow S_{k-2} (\mathcal P_D+\mathcal U_D)\circ (S_{n-k} \circ \det (A))=0$. $ \Rightarrow S_{k-2}(\mathcal P_D+\mathcal U_D) \circ M_k(A)=0$. $ \Rightarrow (\mathcal P_D+\mathcal U_D)_k \circ M_k(A)=0$. (By Remark \ref{remark:introIK}.) So Equation \ref{eq:main-pre} holds. \end{proof} \begin{prop}\label{prop:generic-n} For a generic $n\times n$ matrix $A$ with $n\geq2$, we have \begin{equation*} (\mathcal P_D+\mathcal U_D)_n= \mathrm {Ann}(\det (A)) \cap S_n. \end{equation*} \end{prop} \begin{proof} Using Equation \ref{eq:main-pre} we only need to show \begin{equation*} (\mathcal P_D+\mathcal U_D)_n \supset \mathrm {Ann}(\det (A)) \cap S_n. \end{equation*} We use induction on $n$. For $n=2$ the equality is easy to see. Next we verify that the proposition holds for the case $n=3$. We need to see that the space of $2\times 2$ permanents of $D$ generates $ \mathrm {Ann} (\det (A))_3/\mathcal U_D$, i.e., $\mathrm {Ann}(M_3(A))_3/\mathcal U_D$. Corresponding to each term in the determinant, there is a permutation $\sigma$ of three objects such that we can write the term as $a_{1\sigma(1)}a_{2\sigma(2)}a_{3\sigma(3)}$. Consider the degree three binomial $b=a_{1\sigma(1)}a_{2\sigma(2)}a_{3\sigma(3)}-a_{1\tau(1)}a_{2\tau(2)}a_{3\tau(3)}$, where $\tau\neq\sigma$. Without loss of generality we can assume that $\sigma$ is the identity, so we consider the binomial $b=a_{11}a_{22}a_{33}-a_{1\tau(1)}a_{2\tau(2)}a_{3\tau(3)}$.
If these two monomials have a common variable, i.e., $\tau(i)=i$ for some $i=1,2,3$, then the binomial will be of the form $ b= a_{ii} (a_{jj}a_{kk}-a_{jk}a_{kj})$ with $\{i,j,k\}=\{1,2,3\}$, so we will have $b=a_{ii}M_{ii}$ and, as we have shown previously, $P_{ii}=d_{jj}d_{kk}+d_{jk}d_{kj}$ annihilates it. Assume that the monomials $a_{11}a_{22}a_{33}$ and $a_{1\tau(1)}a_{2\tau(2)}a_{3\tau(3)}$ do not have any common factor. We can add and subtract another term $a_{1\beta(1)}a_{2\beta(2)}a_{3\beta(3)}$, where $\beta$ is a permutation, such that it has one common factor with $a_{11}a_{22}a_{33}$ and one common factor with $a_{1\tau(1)}a_{2\tau(2)}a_{3\tau(3)}$. By reindexing we can take $\beta(1)=\tau(1)$ and $\beta(2)=2$, and then $\beta(3)$ is determined by the other two choices. Then by factorizing we get a binomial of the form $a_{ij}M_{ij}+a_{kl}M_{kl}$, where the first term is annihilated by the $2\times 2$ permanent of $D$ corresponding to $d_{ij}$ and the second term is annihilated by the $2\times 2$ permanent of $D$ corresponding to $d_{kl}$. So by Equation \ref{eq:main-ann} we are done. For example, if we have the binomial $a_{11}a_{22}a_{33}-a_{13}a_{21}a_{32}$, we can add and subtract the term $a_{11}a_{23}a_{32}$, which has one common factor with $a_{11}a_{22}a_{33}$ and one common factor with $a_{13}a_{21}a_{32}$; we get $a_{11}(a_{22}a_{33}-a_{23}a_{32})+ a_{32}(a_{11}a_{23}-a_{13}a_{21})$, which is $a_{11}M_{11}+a_{32}M_{32}$. As we have shown before, it can be annihilated by the space of $2\times 2$ permanents. So by Equation~\ref{eq:main-ann} we are done. When $n$ is larger than $3$, by the induction assumption the proposition holds for all $k \leq n-1$. By Remark \ref{remark:main-ann} it is enough to show that if $b$ is a binomial of the form appearing in Equation \ref{eq:main-ann} lying in $\mathrm {Ann}(\det (A)) \cap S_n$, then $b\in (\mathcal P_D+\mathcal U_D)_n$.
Assume $b=b_1+b_2$ is of degree $n$. If the two terms $b_1$ and $b_2$, which are monomials in $S$, have a common factor $l$, i.e., $b_1=la_1$ and $b_2=la_2$, then $b=l(a_1+a_2)$, where $a_1$ and $a_2$ are of degree at most $n-1$. So by the induction assumption the proposition holds for the binomial $a_1+a_2$, i.e. $a_1+a_2 \in (\mathcal P_D+\mathcal U_D)_{n-1}$. Hence we have \begin{equation*} b=l(a_1+a_2)\in l(\mathcal P_D+\mathcal U_D)_{n-1}\subset (\mathcal P_D+\mathcal U_D)_n. \end{equation*} If the two terms $b_1$ and $b_2$ do not have any common factor, then with the same method as above we can rewrite the binomial $b$ by adding and subtracting a term $m$ of the determinant, of degree $n$, which has a common factor $m_1$ with $b_1$ and a common factor $m_2$ with $b_2$. Then we will have \begin{equation*} b_1+b_2= b_1+m+b_2-m=m_1(c_1+m')+m_2(c_2-m''), \end{equation*} where $b_1=m_1c_1$, $m=m_1m'=m_2m''$ and $b_2=m_2c_2$. Since $c_1+m'$ and $c_2-m''$ are of degree at most $n-1$, the induction assumption yields \begin{equation*} b_1+b_2=m_1(c_1+m')+m_2(c_2-m'')\in(\mathcal P_D+\mathcal U_D)_n. \end{equation*} This completes the induction step and hence the proof of the proposition. \end{proof} \begin{cor} \label{cor:generic-main-k} For a generic $n\times n$ matrix $A$ and each integer $k$, $1\leq k \leq n$, we have \begin{equation*} (\mathcal P_D+\mathcal U_D)_k= \mathrm {Ann}(\det (A)) \cap S_k. \end{equation*} We also have $(\mathcal U_D)_{n+1}=S_{n+1}$. \end{cor} \begin{proof} Using Equation \ref{eq:main-pre} we only need to show that \begin{equation*} \mathrm{Ann}(\det (A)) \cap S_k \subset (\mathcal P_D+\mathcal U_D)_k.
\end{equation*} By Lemma \ref{lem:intro-gen} and Remark \ref{remark:introIK} we have \begin{equation*} (\mathrm{Ann}(\det (A)))_k=(\mathrm{Ann}(S_{n-k}\circ (\det (A))))_k=(\mathrm{Ann}(M_{k}(A)))_k. \end{equation*} If we label the $k\times k$ minors of $A$ by $f_1,...,f_s$, we have \begin{equation*} (\mathrm{Ann}(M_{k}(A)))_k=\mathrm{Ann}(<f_1,...,f_s>)_k=\bigcap_{i=1}^{i=s}(\mathrm{Ann}(f_i))_k. \end{equation*} For each $f_i$, if we denote the polynomial ring in the variables of $f_i$ by $R^i$, then by Proposition \ref{prop:generic-n} we have \begin{equation*} (\mathcal P^i_D+\mathcal U^i_D)_{k}=\mathrm{Ann}(f_i)\cap S^i_k. \end{equation*} Hence \begin{equation*} \mathrm{Ann}(\det(A))\cap S_{k}\subset (\mathcal P_D+\mathcal U_D)_k. \end{equation*} Finally, every monomial of degree larger than $n$ is unacceptable, so we have $(\mathcal U_D)_{n+1}=S_{n+1}$. \end{proof} \begin{thm}\label{thm:main-generic-determinant} Let $A$ be a generic $n\times n$ matrix. Then the apolar ideal $\mathrm{Ann}(\det (A))\subset S $ is the ideal $(\mathcal P_D+\mathcal U_D)$, and is generated in degree two. \par \end{thm} \begin{proof} This follows directly from Proposition \ref{prop:generic-n} and Corollary \ref{cor:generic-main-k}. \end{proof} \begin{thm}\label{thm:main-generic-permanent} Let $A$ be a generic $n\times n$ matrix. Then the apolar ideal $\mathrm{Ann}(\mathrm{Per} (A))\subset S$ to $\mathrm{Per}(A)\in R$ is the ideal $(\mathcal M_D+\mathcal U_D)$, generated in degree two. \end{thm} \begin{proof} The proof follows directly from the proof of Proposition \ref{prop:generic-n} and Corollary \ref{cor:generic-main-k}, by interchanging the determinants and the permanents. \end{proof} \begin{cor} Let $A=(a_{ij})$ be an $m\times n$ matrix where $n\ge m$. Let $N$ denote the space generated by all $m\times m$ minors of $A$. Then $\mathrm{Ann}(N)$ is generated in degree two by all $2\times 2$ permanents of the dual matrix $D=(d_{ij})$ and the degree-two unacceptable monomials.
\end{cor} \begin{proof} Let $s={n\choose m}$, and let $f_1,...,f_s$ denote the $m\times m$ minors of $A$. We have $$\mathrm{Ann}(N)=\mathrm{Ann}(<f_1,...,f_s>)=\bigcap_{i=1}^{i=s}(\mathrm{Ann}(f_i)).$$ Let $R^i$ denote the polynomial ring in the variables of $f_i$. By Theorem \ref{thm:main-generic-determinant}, each $\mathrm{Ann}(f_i)\cap S^i $ is generated in degree 2, so $\mathrm{Ann}(N)$ is also generated in degree 2. \end{proof} \section{Application to the ranks of the determinant and the permanent} \label{rank-generic} \begin{Notation} Let $F\in R={\sf k}[a_{ij}]$ be a homogeneous form of degree $d$. A presentation \begin{equation}\label{eq:Waring decomposition} F=l_1^d+...+l_s^d \text{ with }l_i\in R_1 \end{equation} is called a \emph{Waring decomposition} of length $s$ of the polynomial $F$. The minimal number $s$ for which Equation \ref{eq:Waring decomposition} holds is called the \emph{rank} of $F$. The apolarity action of $S={\sf k}[d_{ij}]$ on $R$ defines $S$ as a natural coordinate ring on the projective space $\textbf{P}(R_1)$ of 1-dimensional subspaces of $R_1$, and vice versa. A finite subscheme $\Gamma\subset \textbf{P}(R_1)$ is apolar to $F$ if the homogeneous ideal $I_\Gamma \subset S$ is contained in $\mathrm {Ann}(F)$ (\cite{IK}, \cite{RS}). \begin{remark}\label{rmrk:rank-definition-ranestad}(\cite{IK} Def.~5.66, \cite{RS}) Let $\Gamma=\{[l_1],...,[l_s]\}$ be a collection of $s$ points in $\textbf{P}(R_1)$. Then $$ F=c_1l_1^d+...+c_sl_s^d \text{ with }c_i\in{\sf k} $$ if and only if $$I_\Gamma \subset \mathrm {Ann}(F)\subset S.$$ \end{remark} \end{Notation} \begin{defi}\label{def:ranks} We have the following ranks (\cite{IK} Def.~5.66, \cite{BR} and \cite{RS}). Here $\Gamma$ is a punctual scheme (possibly not smooth), and the degree of $\Gamma$ is the number of points (counting multiplicities) in $\Gamma$. a. the rank $r(F)$: \begin{equation*} r(F)=\min\{\deg \Gamma| \Gamma\subset \textbf{P}(R_1) \text{ smooth}, \dim \Gamma=0,I_\Gamma \subset \mathrm {Ann}(F)\}.
\end{equation*} Note that when $\Gamma$ is smooth, it is the set of points in Remark \ref{rmrk:rank-definition-ranestad} (\cite{IK}, page 135). b. the smoothable rank $sr(F)$: \begin{equation*} sr(F)=\min\{\deg \Gamma| \Gamma\subset \textbf{P}(R_1) \text{ smoothable}, \dim \Gamma=0,I_\Gamma \subset \mathrm {Ann}(F)\}. \end{equation*} Note that for the smoothable rank one considers smoothable schemes, i.e., schemes which are limits of smooth schemes of $s$ simple points (\cite{IK}, Definition 5.66). c. the cactus rank (scheme length in \cite{IK}, Definition 5.1 page 135) $cr(F)$: \begin{equation*} cr(F)=\min\{\deg \Gamma| \Gamma\subset \textbf{P}(R_1), \dim \Gamma=0,I_\Gamma \subset \mathrm {Ann}(F)\}. \end{equation*} d. the differential rank (Sylvester's catalecticant or apolarity bound) is the maximal dimension of a homogeneous component of $S/\mathrm {Ann}(F)$: \begin{equation*} l_{diff}(F)= \max_{i\in \mathbb{N}_0} \{ (H(S/\mathrm {Ann}(F)))_i\}. \end{equation*} \end{defi} Note that we give a lower bound for the cactus rank of the determinant and permanent of the generic matrix. We do not have information on the smoothable rank of the generic determinant or permanent. It is still open to find a bound for the smoothable rank. The work of A. Bernardi and K. Ranestad \cite{BR} in the case of generic forms of a given degree and number of variables shows that the cactus rank and smoothable rank can be very different. \begin{prop}\label{prop:ranks-inequality}(\cite{IK}, Proposition 6.7C) The above ranks satisfy \begin{equation*} l_{diff}(F) \leq cr(F) \leq sr(F) \leq r(F).
\end{equation*} \end{prop} \begin{prop}\label{prop-mainRS} \textbf{(Ranestad-Schreyer)} If the ideal $\mathrm {Ann}(F)$ is generated in degree $d$ and $\Gamma\subset \textbf{P}(R_1)$ is a finite (punctual) apolar subscheme to $F$, then \begin{equation*} \deg \Gamma \geq \frac{1}{d} \deg (\mathrm {Ann}(F)), \end{equation*} where $\deg (\mathrm {Ann}(F)) =\dim (S/\mathrm {Ann} (F))$ is the length of the 0-dimensional scheme defined by $\mathrm {Ann}(F)$. \end{prop} Since we have found that the apolar ideals of the determinant and the permanent of a generic matrix are generated in degree $d=2$, we can take $F= \det (A)$ or $F=\mathrm{Per}(A)$ in Proposition \ref{prop-mainRS} to find a lower bound for the above ranks of $F$. \begin{thm}\label{generic-det-perm-RS-rank} Let $F$ be the determinant or permanent of a generic $n\times n$ matrix $A$. We have \begin{equation*} {1\over{2}} {2n\choose n} \leq cr(F) \leq sr(F) \leq r(F). \end{equation*} \end{thm} \begin{proof} By Theorems \ref{thm:main-generic-determinant} and \ref{thm:main-generic-permanent}, Propositions \ref{prop-mainRS} and \ref{prop:ranks-inequality}, and Equations \ref{eq:Hilbert-generic} and \ref{eq:sum-Hilbert-generic} we have, for an apolar punctual scheme $\Gamma$, \begin{equation*} \deg \Gamma \geq \frac{1}{d} \deg (\mathrm {Ann}(F))= \frac{1}{2}\sum_{k=0}^{k=n} {{n\choose k}}^2= \frac{1}{2}{2n \choose n}. \end{equation*} \end{proof} \begin{Notation} \cite{LT} Let $\Phi \in S^d\mathbb C^n$ be a polynomial. We can polarize $\Phi$ and consider it as a multilinear form $\tilde{\Phi}$, where $\Phi(x)=\tilde{\Phi}(x,...,x)$, and consider the linear map $\Phi_{s,d-s}:S^s\mathbb C^{n*}\rightarrow S^{d-s}\mathbb C^{n}$, where $\Phi_{s,d-s}(x_1,...,x_s)(y_1,...,y_{d-s})=\tilde{\Phi}(x_1,...,x_s,y_1,...,y_{d-s})$. Define \begin{equation*} Zeros(\Phi)=\{[x]\in \mathbb P \mathbb C^{n*}| \Phi(x)=0\} \subset \mathbb P \mathbb C^{n*}.
\end{equation*} Let $x_1,...,x_n$ be linear coordinates on $\mathbb C^{n*}$ and define $$ \Sigma_s(\Phi):=\{[x] \in Zeros(\Phi)| \frac{\partial^I \Phi}{\partial x^I}(x)=0,\forall I \text{ such that } |I|\leq s\}. $$ \end{Notation} In this notation $\Phi_{s,d-s}$ is the map $S_s\to R_{n-s}$ taking $h$ to $h\circ \Phi$, hence its rank is $H(\mathfrak A_A)_s$. In the following theorem we use the convention that $\dim \emptyset=-1$. \begin{thm} \label{thm:LT-rank}\textbf{(Landsberg-Teitler)}(\cite{LT}) Let $\Phi \in S^d\mathbb C^{n}$ and let $1\leq s \leq d$. Then \begin{equation*} \mathrm{rank}(\Phi)\geq \mathrm{rank}\, \Phi_{s,d-s}+ \dim \Sigma_s(\Phi)+1. \end{equation*} \end{thm} \begin{rmk} (\textbf{Z. Teitler}) If we define $\Sigma_s(\Phi)$ to be a subset of affine rather than projective space, then the above theorem does not need $+1$ at the end, and does not need the statement that the dimension of the empty set is $-1$. \end{rmk} Applying this theorem to the determinant yields \begin{cor} \label{cor:-detLT-rank}\textbf{(Landsberg-Teitler)}(\cite{LT}) \begin{equation*} {r({\det}_{n})} \geq {n \choose {\lfloor{n/2}\rfloor}}^2+n^2-{(\lfloor{n/2}\rfloor+1)}^2. \end{equation*} \end{cor} \begin{prop} \label{prop:-BR-rank}\textbf{(Bernardi-Ranestad)}(\cite{BR}, Theorem 1) Let $F\in R^s$ be a homogeneous form of degree $d$, and let $l$ be any linear form in $S^s_1$. Let $F_l$ be a dehomogenization of $F$ with respect to $l$. Denote by $ \mathrm{Diff}(F)$ the subspace of $S^s$ generated by the partials of $F$ of all orders. Then \begin{equation*} cr(F)\leq \dim_{\sf k} \mathrm{Diff}(F_l). \end{equation*} \end{prop} We thank Pedro Marques for pointing out that it is easy to show that the length of a polynomial is an upper bound for the length of any dehomogenization of that polynomial.
So we have \begin{equation}\label{eq:Pedro} cr(F)\leq \dim_{\sf k} \mathrm{Diff}(F)= \deg (\mathrm {Ann}(F)). \end{equation} \begin{prop}\label{prop:CCG-rank} For the monomial $m=x_1^{b_1}...x_n^{b_n}$, where $1\leq b_1\leq...\leq b_n$, we have (a) (\cite{CCG}) \begin{equation*} r(x_1^{b_1}...x_n^{b_n})=\Pi_{i=2}^{i=n}(b_i+1) \end{equation*} (b) (\cite{RS}) \begin{equation*} sr(x_1^{b_1}...x_n^{b_n})=cr(x_1^{b_1}...x_n^{b_n})=\Pi_{i=1}^{i=n-1}(b_i+1) \end{equation*} (c) (\cite{BBT2}) Let $d=b_1+\ldots+b_n$, and $m=l_1^d+\ldots+l_s^d$ with $r(m)=s$. Let $I\subset S$ be the homogeneous ideal of functions vanishing on $Q=\{[l_1],\ldots,[l_s]\}\subset \textbf{P}^{ n-1}$. Then $I$ is a complete intersection of degrees $b_2+1,\ldots,b_n+1$ generated by $$y_2^{b_2+1}-\Phi_2y_1^{b_1+1},\ldots,y_n^{b_n+1}-\Phi_ny_1^{b_1+1},$$ for some homogeneous polynomials $\Phi_i\in S$ of degree $b_i-b_1$. \end{prop} \begin{ex} Let $n=2$, and \begin{center} $ A= \left( \begin{array}{cc} a & b \\ c & d\\ \end{array} \right)$, \end{center} $\det(A)=ad-bc=\frac{1}{4}\left((a+d)^2-(a-d)^2+(b-c)^2-(b+c)^2\right)$, so $r(\det(A))\leq 4$. The corresponding Hilbert sequence for $n=2$ is $(1,4,1)$. We have $l_{diff}(\det(A))=4$. Using Theorem \ref{generic-det-perm-RS-rank} we have: \begin{equation*} cr(\det(A)) \geq \frac{1}{d} \deg (\mathrm {Ann}(\det(A)))=\frac{1}{2} (6)=3. \end{equation*} So the lower bound we obtain using Theorem \ref{generic-det-perm-RS-rank} is 3. \newline\newline Using Corollary \ref{cor:-detLT-rank} (Landsberg-Teitler) we obtain: \begin{equation*} {r({\det}_{2})} \geq {2 \choose {\lfloor{2/2}\rfloor}}^2+2^2-{(\lfloor{2/2}\rfloor+1)}^2=4+4-4=4. \end{equation*} On the other hand we have \begin{equation*} \det(A)=ad-bc=1/4((a+d)^2-(a-d)^2)-1/4((b+c)^2-(b-c)^2). \end{equation*} Hence \begin{equation*} r(\det(A))=cr(\det (A))=sr(\det (A))=l_{diff}(\det(A))=4.
\end{equation*} \end{ex} \begin{ex} Let $n=3$, and \begin{center} $ A= \left( \begin{array}{ccc} a & b & e \\ c & d & f \\ g & h & i \\ \end{array} \right)$, \end{center} $\det(A)=g(bf-de)-h(af-ce)+i(ad-bc)$. Using Macaulay2 for the calculations we obtain the Hilbert sequence $(1,9,9,1)$, and by Theorem \ref{generic-det-perm-RS-rank} we have: \begin{equation*} cr(\det(A)) \geq \frac{1}{d} \deg (\mathrm {Ann}(\det(A)))=\frac{1}{2} (20)=10. \end{equation*} So the lower bound we find using Theorem \ref{generic-det-perm-RS-rank} is 10, which is greater than $l_{diff}(\det(A))=9$, so it is a better lower bound for the cactus and smoothable ranks introduced above. \newline\newline Using Corollary \ref{cor:-detLT-rank} we have: \begin{equation*} {r({\det}_{3})} \geq {3 \choose {\lfloor{3/2}\rfloor}}^2+3^2-{(\lfloor{3/2}\rfloor+1)}^2=9+9-4=14. \end{equation*} On the other hand, for every $x$, $y$ and $z$, it is easy to see that $r(xyz) \leq4$: \begin{equation*} xyz=1/24({(x+y+z)}^3+{(x-y-z)}^3-{(x-y+z)}^3-(x+y-z)^3). \end{equation*} Hence $14 \leq r(\det(A))\leq 24$. Setting $a=1$ in $\det(A)$, the algebra $S/\mathrm{Ann}(\det A_{a=1})$ has length $18$, with Hilbert function $(1,8,8,1)$. So by Proposition \ref{prop:-BR-rank} we have: $$cr(\det(A))\leq 18.$$ \end{ex} \begin{ex} Let $n=4$, and \begin{center} $ A= \left( \begin{array}{cccc} a & b & e & j\\ c & d & f & k\\ g & h & i & l \\ m & n & o & p\\ \end{array} \right)$. \end{center} Using Macaulay2 for the calculations we obtain the Hilbert sequence $(1,16,36,16,1)$. By Theorem \ref{generic-det-perm-RS-rank}, \begin{equation*} cr(\det(A)) \geq \frac{1}{d} \deg (\mathrm {Ann}(\det(A)))=\frac{1}{2} (70)=35, \end{equation*} which is less than $l_{diff}(\det(A))=36$. So in this case $l_{diff}$ is a better lower bound for the cactus rank.
Using Corollary \ref{cor:-detLT-rank} (Landsberg-Teitler) we have: \begin{equation*} {r({\det}_{4})} \geq {4 \choose {\lfloor{4/2}\rfloor}}^2+4^2-{(\lfloor{4/2}\rfloor+1)}^2=36+16-9=43. \end{equation*} So Corollary \ref{cor:-detLT-rank} (Landsberg-Teitler) gives the better lower bound for the rank in this case. Now using Proposition \ref{prop:CCG-rank} we have \begin{equation*} {r({\det}_{4})} \leq (4!) (2^3)=192. \end{equation*} \end{ex} \begin{ex} Let $n=5$, and \begin{center} $ A= \left( \begin{array}{ccccc} a & b & e & j & q\\ c & d & f & k & r\\ g & h & i & l & s\\ m & n & o & p &t\\ u & v & w & x & y\\ \end{array} \right)$. \end{center} Using Macaulay2 for the calculations we obtain the Hilbert sequence $(1,25,100,100,25,1)$. By Theorem \ref{generic-det-perm-RS-rank} \begin{equation*} cr(\det(A)) \geq \frac{1}{d} \deg (\mathrm {Ann}(\det(A)))=\frac{1}{2} (252)=126, \end{equation*} which is greater than $l_{diff}(\det(A))=100$. So it is a better lower bound for the cactus rank than $l_{diff}$. Using Corollary \ref{cor:-detLT-rank} (Landsberg-Teitler) we have: \begin{equation*} {r({\det}_{5})} \geq {5 \choose {\lfloor{5/2}\rfloor}}^2+5^2-{(\lfloor{5/2}\rfloor+1)}^2=116. \end{equation*} So for the first time, at $n=5$, Theorem \ref{generic-det-perm-RS-rank} gives us a better lower bound for the rank than Corollary \ref{cor:-detLT-rank} (Landsberg-Teitler). Now using Proposition \ref{prop:CCG-rank} we have \begin{equation*} {r({\det}_{5})} \leq (5!) (2^4)=1920. \end{equation*} \end{ex} \begin{ex} Let $n=6$. Using Macaulay2 for the calculations we obtain the Hilbert sequence $$H(S/\mathrm{Ann}(\det A))=(1,36,225,400,225,36,1).$$ Now using Theorem \ref{generic-det-perm-RS-rank} we have: \begin{equation*} cr(\det(A)) \geq \frac{1}{d} \deg (\mathrm {Ann}(\det(A)))=\frac{1}{2} (924)=462.
\end{equation*} So the lower bound we can find using Theorem \ref{generic-det-perm-RS-rank} is 462, which is greater than $l_{diff}(\det(A))=400$, and therefore is a better lower bound for the cactus rank than $l_{diff}$. Using Corollary \ref{cor:-detLT-rank} (Landsberg-Teitler) we have: \begin{equation*} {r({\det}_{6})} \geq {6 \choose {\lfloor{6/2}\rfloor}}^2+6^2-{(\lfloor{6/2}\rfloor+1)}^2=420. \end{equation*} So again at $n=6$ Theorem \ref{generic-det-perm-RS-rank} gives us a better lower bound than Corollary \ref{cor:-detLT-rank} (Landsberg-Teitler). Now using Proposition \ref{prop:CCG-rank} we have \begin{equation*} {r({\det}_{6})} \leq (6!) (2^5)=23040. \end{equation*} \end{ex} \begin{remark}\label{remark-stirling-generic} (a) Using Stirling's formula, $n!\sim\sqrt{2\pi n}\left({\frac{n}{e}}\right)^n$, we can approximate ${2n\choose n}$ for large $n$ by $4^n/\sqrt{n\pi}$. Hence for large $n$ Theorem \ref{generic-det-perm-RS-rank} gives us a lower bound asymptotic to $4^n/(2\sqrt{n\pi})\leq cr(\det(A))$, and the Landsberg-Teitler formula gives us the lower bound $2\cdot 4^n/(n\pi)\leq r(\det(A))$. The Landsberg-Teitler lower bound for $r(\det(A))$ is also asymptotic to $l_{diff}(\det (A))={n \choose {\lfloor{n/2}\rfloor}}^2$, which is a lower bound for $ cr(\det (A)).$ These are also lower bounds for the corresponding ranks of the permanent of a generic $n\times n$ matrix.\par (b) Using Proposition \ref{prop:CCG-rank}, the upper bound for the rank of the determinant and permanent of a generic $n\times n$ matrix is given by $(n!)2^{n-1}$. This can be approximated for large $n$, using Stirling's formula, by $\sqrt{2\pi n}{(\frac{n}{e})}^n(2^{n-1})$. (c) By Equation \ref{eq:Pedro} an upper bound for the cactus rank of both the determinant and permanent of a generic $n\times n$ matrix is ${2n\choose n}$, which is asymptotic to $4^n/\sqrt {n \pi}$.
\end{remark} In the following table we give lower bounds for the ranks of the determinant and permanent of an $n\times n$ generic matrix. \begin{table}[h] \begin{center} \caption{The determinant of the generic matrix}\label{table:generic} \begin{tabular}{l*{7}{c}r} \hline $n$ & 2 & 3 & 4 & 5 & 6 & $n\gg 0$\\ \hline lower bound for $cr(\det(A))$ by Theorem \ref{generic-det-perm-RS-rank} & 3 & 10 & 35 & 126 & 462 & $4^n/2\sqrt{n\pi}$ \\ \hline lower bound for $r(\det(A))$ by Corollary \ref{cor:-detLT-rank} & 4 & 14 & 43 & 116 & 420 & $2\cdot 4^n/(n\pi)$\\ \hline $l_{diff}(\det(A))$& 4 & 9 & 36 & 100 & 400 & ${n \choose {\lfloor{n/2}\rfloor}}^2$\\ \hline \end{tabular} \end{center} \end{table} \section{Annihilator of the Pfaffian and Hafnian} \label{Annihilator of the Pfaffian and Hafnian} In this section we discuss the annihilator ideals of the Pfaffians and of the Hafnians. We show that the annihilator ideal of the Pfaffian of a generic skew symmetric $2n\times 2n$ matrix and the annihilator ideal of the Hafnian of a generic symmetric $2n\times 2n$ matrix are both generated in degree 2. In the following discussion we let $X_m^{sk}=(x_{ij})$ with $x_{ij}=-x_{ji}$ be an $m\times m$ skew symmetric matrix of indeterminates in the polynomial ring $R^{sk}={\sf k}[x_{ij}]$, and let $Y_m^{sk}=(y_{ij})$ with $y_{ij}=-y_{ji}$ be an $m\times m$ skew symmetric matrix of indeterminates in the ring of differential operators $S^{sk}={\sf k}[y_{ij}]$. We denote the Pfaffian of the matrix $X_m^{sk}$ by $Pf(X_m^{sk})$. It is well known that for any odd number $m$ we have $\det(X_m^{sk})=0$. It is also well known that the determinant of a skew symmetric matrix is the square of its Pfaffian. So in the following we are going to consider the annihilator of the Pfaffian of generic $m\times m$ skew~symmetric matrices, where $m=2n$ is an even number.
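Since the numerical evidence below rests on these classical facts, here is a small pure-Python sanity check (the function names are mine, not from the paper or Macaulay2): it evaluates the Pfaffian both by the recursive row expansion and by the perfect-matching formula recalled below, confirms $\det(X^{sk})=Pf(X^{sk})^2$ on random integer skew symmetric matrices, and checks the count $\sum_{t}{2n\choose 2t}=2^{2n-1}$ that reappears in the Hilbert sequences of the apolar algebras.

```python
import random
from itertools import permutations
from math import comb, prod

def pfaffian(M):
    # Recursive expansion along the first remaining row:
    # Pf(M) = sum_j (-1)^{pos(j)} M[i0][j] * Pf(M with i0 and j removed).
    def rec(idx):
        if not idx:
            return 1
        i0, rest = idx[0], idx[1:]
        return sum((-1) ** pos * M[i0][j] * rec(rest[:pos] + rest[pos + 1:])
                   for pos, j in enumerate(rest))
    return rec(list(range(len(M))))

def det(M):
    # Cofactor expansion along the first row (fine for these small sizes).
    if not M:
        return 1
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def matchings(m):
    # Brute-force version of the set F_{2n} (0-indexed): permutations p of
    # {0,...,m-1} with p[0] < p[2] < ... and p[2i] < p[2i+1].
    return [p for p in permutations(range(m))
            if all(p[2 * i] < p[2 * i + 1] for i in range(m // 2))
            and all(p[2 * i] < p[2 * i + 2] for i in range(m // 2 - 1))]

def sign(p):
    # sign of a permutation via its inversion count
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def pfaffian_by_matchings(M):
    m = len(M)
    return sum(sign(p) * prod(M[p[2 * i]][p[2 * i + 1]] for i in range(m // 2))
               for p in matchings(m))

def random_skew(m, rng):
    A = [[0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            A[i][j] = rng.randint(-5, 5)
            A[j][i] = -A[i][j]
    return A

rng = random.Random(1)
for m in (2, 4, 6):
    X = random_skew(m, rng)
    pf = pfaffian(X)
    assert pf == pfaffian_by_matchings(X)   # the two definitions agree
    assert pf ** 2 == det(X)                # det of a skew matrix is Pf^2
# |F_{2n}| = (2n-1)!! perfect matchings: 1, 3, 15 for 2n = 2, 4, 6
assert [len(matchings(m)) for m in (2, 4, 6)] == [1, 3, 15]
# dimension count behind the Hilbert sequences of the apolar algebras
assert all(sum(comb(2 * n, 2 * t) for t in range(n + 1)) == 2 ** (2 * n - 1)
           for n in range(1, 7))
```

The recursion reproduces, e.g., $Pf = x_{12}x_{34}-x_{13}x_{24}+x_{14}x_{23}$ in the $4\times 4$ case, matching the signed-matching formula.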
Recall that \begin{Notation} Let $F_{2n}\subset S_{2n}$ be the set of all permutations $\sigma$ satisfying the following conditions: (1) $\sigma(1)<\sigma(3)<...<\sigma(2n-1)$; (2) $\sigma(2i-1)<\sigma(2i)$ for all $1\leq i \leq n$. \begin{itemize} \item For a $2n\times 2n$ generic skew symmetric matrix $X^{sk}$, we denote by $Pf(X^{sk})$ the Pfaffian of $X^{sk}$ defined by \begin{equation}\label{eq:definition of Pfaffian-4} Pf(X^{sk})=\sum_{\sigma\in F_{2n}} sgn({\sigma}) x_{\sigma(1)\sigma(2)}x_{\sigma(3)\sigma(4)}...x_{\sigma(2n-1)\sigma(2n)} \end{equation} \item (\cite{IKO}) We denote by $Hf(X^s)$ the Hafnian of a generic symmetric $2n\times 2n$ matrix $X^s$ defined by \begin{equation}\label{eq:definition of Hafnian-4} Hf(X^s)=\sum_{\sigma\in F_{2n}} x_{\sigma(1)\sigma(2)}x_{\sigma(3)\sigma(4)}...x_{\sigma(2n-1)\sigma(2n)} \end{equation} \end{itemize} \end{Notation} Let $J_{2n}=\mathrm{Ann}(Pf(X_{2n}^{sk}))$. We first give some examples and then some partial results concerning $\mathrm{Ann}(Pf(X_{2n}^{sk}))$. Using Macaulay2 for calculations we have the following results: (a) Let $X_2$ be a generic skew symmetric $2\times 2$ matrix; then we have $H(S^{sk}/J_2)=(1,1)$, and the maximum degree of the generators of the annihilator ideal $J_2$ is 2. So using the Ranestad-Schreyer Proposition we have: \begin{equation*} cr(Pf(X_2^{sk}))\ge \frac{1}{d}\deg(\mathrm{Ann}(Pf(X_2^{sk})))=\frac{1}{2}(2)=1, \end{equation*} which is the same as the differential length in this case. Evidently, in this case $r(Pf(X_2^{sk}))=1$, so we have \begin{equation*} r(Pf(X_2^{sk}))=cr(Pf(X_2^{sk}))=sr(Pf(X_2^{sk}))=l_{diff}(Pf(X_2^{sk}))=1. \end{equation*} (b) Let $X_4$ be a generic skew symmetric $4\times 4$ matrix. Using Macaulay2 for calculations we have $H(S^{sk}/J_4)=(1,6,1)$, and the maximum degree of the generators of the annihilator ideal $J_4$ is 2.
Using the Ranestad-Schreyer Proposition we have: \begin{equation*} cr(Pf(X_4^{sk}))\ge \frac{1}{d}\deg(\mathrm{Ann}(Pf(X_4^{sk})))=\frac{1}{2}(8)=4, \end{equation*} which is less than $l_{diff}=6$. (c) Let $X_6$ be a generic skew symmetric $6\times 6$ matrix. Using Macaulay2 for calculations we have $H(S^{sk}/J_6)=(1,15,15,1)$, and the maximum degree of the generators of the annihilator ideal $J_6$ is 2. Using the Ranestad-Schreyer Proposition we have: \begin{equation*} cr(Pf(X_6^{sk}))\ge \frac{1}{d}\deg(\mathrm{Ann}(Pf(X_6^{sk})))=\frac{1}{2}(32)=16, \end{equation*} which is larger than $l_{diff}=15$. (d) Let $X_8$ be a generic skew symmetric $8\times 8$ matrix. Using Macaulay2 for calculations we have $H(S^{sk}/J_8)=(1,28,70,28,1)$, and the maximum degree of the generators of the annihilator ideal $J_8$ is 2. From the Ranestad-Schreyer Proposition we have: \begin{equation*} cr(Pf(X_8^{sk}))\ge \frac{1}{d}\deg(\mathrm{Ann}(Pf(X_8^{sk})))=\frac{1}{2}(128)=64, \end{equation*} which is less than $l_{diff}=70$. (e) Let $X_{10}$ be a generic skew symmetric $10\times 10$ matrix. Using Macaulay2 for calculations we have $$H(S^{sk}/J_{10})=(1,45,210,210,45,1).$$ The maximum degree of the generators of the annihilator ideal $J_{10}$ is 2. From the Ranestad-Schreyer Proposition we have: \begin{equation*} cr(Pf(X_{10}^{sk}))\ge \frac{1}{d}\deg(\mathrm{Ann}(Pf(X_{10}^{sk})))=\frac{1}{2}(512)=256, \end{equation*} which is larger than $l_{diff}=210$. \begin{remark}\label{remark: Hilbert-Pfaffian} The Hilbert sequence for the apolar algebra of the Pfaffian of a generic skew symmetric $2n\times 2n$ matrix is given by ${2n\choose 2t}$, and we have $\sum_{t=0}^{t=n}{{2n}\choose {2t}}=2^{2n-1}$. \end{remark} \begin{defi}\label{def: 2t-Pfaffian} A $2t$-Pfaffian minor of a skew symmetric matrix $X$ is the Pfaffian of a submatrix of $X$ consisting of rows and columns indexed by $i_1,i_2,...,i_{2t}$ for some $i_1<i_2<...<i_{2t}$.
\end{defi} The number of $2t$-Pfaffian minors of a $2n\times 2n$ skew symmetric matrix is clearly ${2n\choose 2t}$. We denote by $\{P_{2t}(X^{sk})\}$ the set of the $2t$-Pfaffians of $X^{sk}$. Furthermore, we denote by $P_{2t}(X^{sk})$ the vector space generated by $\{P_{2t}(X^{sk})\}$ in $R^{sk}_t$ and we denote by $(P_{2t}(X^{sk}))$ the ideal generated by $\{P_{2t}(X^{sk})\}$ in $R^{sk}$. Let $\tau$ be the lexicographic term order on $R^{sk}={\sf k}[x_{ij}]$ induced by the following order on the indeterminates: \begin{equation*} x_{1,2n}\ge x_{1,2n-1}\ge...\ge x_{1,2}\ge x_{2,2n}\ge x_{2,2n-1} \ge...\ge x_{2n-1,2n}. \end{equation*} \begin{thm}\label{thm:Herzog-Trung basis Pfaffian}(Herzog-Trung \cite{HT}, Theorem 4.1) The set $\{P_{2t}(X^{sk})\}$ of the $2t$-Pfaffians of the matrix $X^{sk}$ is a Gr\"{o}bner basis of the ideal $(P_{2t}(X^{sk}))$ with respect to $\tau$. \end{thm} \begin{cor}\label{cor:dimension-degree-Pfaffian-Hilbert} The dimension of the space of $2t$-Pfaffians of a $2n\times 2n$ generic skew symmetric matrix $X^{sk}$ is ${2n\choose 2t}$. So we have \begin{equation*} \dim(S^{sk}/\mathrm{Ann}(Pf(X^{sk})))=2^{2n-1}. \end{equation*} \end{cor} \begin{proof} The proof follows directly from Theorem \ref{thm:Herzog-Trung basis Pfaffian} and the combinatorial identity: $$\sum_{t=0}^{t=n}{{2n}\choose {2t}}=2^{2n-1}.$$ This identity is easy to show; e.g., it follows immediately by evaluating the binomial expansion of $(x+1)^{2n}$ at $x=1$ and $x=-1$ and adding the two results. \end{proof} The examples strongly suggest that the apolar ideal of the Pfaffian is generated in degree~$2$. In the remaining part of this section we prove that this is always the case. \begin{defi}\label{def:PfaffianW} Let $W$ be the vector subspace of $S^{sk}$ spanned by the degree 2 elements of types (a), (b) and (c) defined as follows: (a) the square of each element of $Y^{sk}$. The number of these monomials is $2n^2-n$.
(b) the product of each element of $Y^{sk}$ with another element in the same row or column of the matrix $Y^{sk}$. The number of these monomials is $(2n^2-n)(2n-2)$. (c) Given any $4\times 4$ submatrix of $X^{sk}$ of the rows and columns $i_1,i_2,i_3$ and $i_4$, \begin{center} $ Q=\left( \begin{array}{cccc} 0 & x_{i_1i_2} & x_{i_1i_3} & x_{i_1i_4} \\ -x_{i_1i_2} & 0 & x_{i_2i_3} & x_{i_2i_4}\\ -x_{i_1i_3} & -x_{i_2i_3} & 0 & x_{i_3i_4}\\ -x_{i_1i_4} & -x_{i_2i_4} & -x_{i_3i_4} & 0\\ \end{array} \right),$ \end{center} we have $Pf(Q)=x_{i_1i_2}x_{i_3i_4}-x_{i_1i_3}x_{i_2i_4}+x_{i_1i_4}x_{i_2i_3}$. Corresponding to $Pf(Q)$ we have 3 binomials which annihilate $Pf(Q)$, hence annihilate $Pf(X^{sk})$. These binomials are $y_{i_1i_2}y_{i_3i_4}+y_{i_1i_3}y_{i_2i_4}$, $y_{i_1i_2}y_{i_3i_4}-y_{i_1i_4}y_{i_2i_3}$ and $y_{i_1i_3}y_{i_2i_4}+y_{i_1i_4}y_{i_2i_3}$. However, these three binomials are not linearly independent, and we can write one of them as the sum of the other two binomials. So corresponding to each $4\times 4$ Pfaffian we have 2 linearly independent binomials in the annihilator ideal, and using Theorem \ref{thm:Herzog-Trung basis Pfaffian}, the number of these binomials is $2\cdot{{2n}\choose 4}$. \end{defi} \begin{rmk}For a $2n\times 2n$ skew symmetric matrix $X^{sk}$, we have $W\subset \mathrm{Ann}(Pf(X^{sk}))$. \end{rmk} \begin{lem}\label{lem:Pfaffian-gen2W} For the generic skew symmetric $2n\times 2n$ matrix $X^{sk}$, we have \begin{equation*} W=\mathrm{Ann}(P_4(X^{sk}))\cap S^{sk}_2. \end{equation*} \end{lem} \begin{proof} The monomials of type (a) and (b) correspond to unacceptable monomials discussed earlier and are linearly independent from any binomial in (c). The binomials in (c) are linearly independent by Theorem \ref{thm:Herzog-Trung basis Pfaffian}. Hence we have \begin{equation}\label{eq:Pf-W-proof-1} \dim_{\sf k}(W)=2{{2n}\choose 4}+(2n^2-n)(2n-2)+2n^2-n={{2n^2-n+1}\choose 2}-{2n\choose4}.
\end{equation} According to Remark \ref{remark:introIK} we have \begin{equation*} \dim_{\sf k}\left(\mathrm{Ann}(P_4(X^{sk}))\cap S^{sk}_2\right)=\dim_{\sf k}S^{sk}_2-\dim_{\sf k}(P_4(X^{sk})). \end{equation*} So we have \begin{equation}\label{eq:Pf-W-proof-2} \dim_{\sf k}\left(\mathrm{Ann}(P_4(X^{sk}))\cap S^{sk}_2\right)={{2n^2-n+1}\choose 2}-{2n\choose4}. \end{equation} Using Equations \ref{eq:Pf-W-proof-1} and \ref{eq:Pf-W-proof-2} we obtain \begin{equation}\label{eq:Pf-W-proof-3} \dim_{\sf k}(W)=\dim_{\sf k}\left(\mathrm{Ann}(P_4(X^{sk}))\cap S^{sk}_2\right). \end{equation} On the other hand, evidently we have \begin{equation}\label{eq:Pf-W-proof-4} W\subset \mathrm{Ann}(P_4(X^{sk}))\cap S^{sk}_2. \end{equation} Using Equations \ref{eq:Pf-W-proof-3} and \ref{eq:Pf-W-proof-4} we have \begin{equation*} W=\mathrm{Ann}(P_4(X^{sk}))\cap S^{sk}_2. \end{equation*} \end{proof} \begin{lem} \label{lem:Pfaffian-Sn-2circlePfaf}Let $X^{sk}$ be a $2n\times 2n$ skew symmetric matrix ($n\ge 2$). We have \begin{equation*} S_{n-2}\circ Pf(X^{sk})=P_4(X^{sk})\subset R^{sk}_2. \end{equation*} \end{lem} \begin{proof} First we show \begin{equation}\label{eq:Pfaffian-lemproofSn-2-circle} S_{n-2}\circ Pf(X^{sk})\supset P_4(X^{sk}). \end{equation} We use induction on the size of the matrix. The base case is $2n=6$. We denote by $f=[i_1,i_2,i_3,i_4]\in P_4(X^{sk})$ the Pfaffian of the submatrix with the rows and columns $i_1,i_2,i_3$ and $i_4$. We have ${6\choose 4}=15$ choices for $f$. For any of these choices we get the Pfaffian of a $2\times 2$ submatrix of the form \begin{center} $ \left( \begin{array}{cc} 0 & x \\ -x & 0 \\ \end{array} \right),$ \end{center} as the coefficient of $f$ in the Pfaffian of the matrix $X^{sk}$. So if we differentiate the $6\times 6$ Pfaffian with respect to that variable $x$, we get the $4\times 4$ Pfaffian $f=[i_1,i_2,i_3,i_4]$ (up to sign). Assume that Equation \ref{eq:Pfaffian-lemproofSn-2-circle} holds for the generic skew symmetric $(2n-2)\times(2n-2)$ matrix.
We want to show it holds for the $2n\times 2n$ generic skew symmetric matrix. The Pfaffian of the skew symmetric $2n\times 2n$ matrix $X^{sk}$ can be computed recursively as \begin{equation}\label{eq:Pfaffian-lemproofSn-2-circle-eq2} Pf(X^{sk})=\sum_{i=2}^{i=2n}(-1)^i x^{sk}_{1i} Pf(X^{sk}_{\hat{1}\hat{i}}), \end{equation} where $X^{sk}_{\hat{1}\hat{i}}$ denotes the matrix $X^{sk}$ with both the first and the $i$-th rows and columns removed. So $X^{sk}_{\hat{1}\hat{i}}$ is a $(2n-2)\times (2n-2)$ matrix and Equation \ref{eq:Pfaffian-lemproofSn-2-circle} holds for it. So for each choice of $[i_1,i_2,i_3,i_4]$ of the matrix $X^{sk}_{\hat{1}\hat{i}}$ we can find $n-3$ variables of $X^{sk}_{\hat{1}\hat{i}}$ such that differentiating $Pf(X^{sk}_{\hat{1}\hat{i}})$ with respect to those variables gives us $[i_1,i_2,i_3,i_4]$. If we call those variables $a_1,...,a_{n-3}$, then, using Equation \ref{eq:Pfaffian-lemproofSn-2-circle-eq2}, adding $x^{sk}_{1i}$ to our set of $n-3$ variables yields a set of $n-2$ variables such that differentiating $Pf(X^{sk})$ with respect to those $n-2$ variables gives $[i_1,i_2,i_3,i_4]$. Since we could write the recursive formula for the Pfaffian with respect to any other row or column, the result follows. For the opposite inclusion to Equation \ref{eq:Pfaffian-lemproofSn-2-circle} we have \begin{equation*} W\subset (\mathrm{Ann}(Pf(X^{sk})))_2\subset (\mathrm{Ann}(P_4(X^{sk})))_2. \end{equation*} But we have shown in Lemma \ref{lem:Pfaffian-gen2W} that \begin{equation*} W=(\mathrm{Ann}(P_4(X^{sk})))_2. \end{equation*} So we have \begin{equation*} (\mathrm{Ann}(Pf(X^{sk})))_2=(\mathrm{Ann}(P_4(X^{sk})))_2. \end{equation*} By Remark \ref{remark:introIK} we have \begin{equation*} (\mathrm{Ann}(Pf(X^{sk})))_2=\mathrm{Ann}(S_{n-2}\circ Pf(X^{sk}))\cap S^{sk}_2. \end{equation*} Hence we have \begin{equation*} S_{n-2}\circ Pf(X^{sk})=P_4(X^{sk}).
\end{equation*} \end{proof} Recall that we denote by $P_{2k}(X^{sk})$ the vector subspace of $R^{sk}$ spanned by the $2k$-Pfaffian minors of $X^{sk}$ [Definition \ref{def: 2t-Pfaffian}]. \begin{lem}\label{lem:SkcirclePfaff} For $1\leq k\leq n-1$ we have \begin{equation}\label{eq:SkcirclePfaff} S_k\circ (Pf(X^{sk}))=P_{2n-2k}(X^{sk}). \end{equation} \end{lem} \begin{proof} First we want to show \begin{equation*} S_k\circ (Pf(X^{sk}))\subset P_{2n-2k}(X^{sk}). \end{equation*} We use induction on $k$. For $k=1$, we need to prove \begin{equation*} S_1\circ(Pf(X^{sk}))\subset P_{2n-2}(X^{sk}), \end{equation*} so we need to show that for any variable $y_{ij}\in S_1$ we have \begin{equation*} y_{ij}\circ(Pf(X^{sk}))\in P_{2n-2}(X^{sk}). \end{equation*} It is enough to show the above inclusion holds for $y_{12}$. Using Equation \ref{eq:Pfaffian-lemproofSn-2-circle-eq2}, and noting that $x_{12}$ does not occur in $X^{sk}_{\hat{1}\hat{i}}$ for $i\geq 3$, we have \begin{equation*} y_{12}\circ(Pf(X^{sk}))=y_{12}\circ \sum_{i=2}^{i=2n}(-1)^i x^{sk}_{1i} Pf(X^{sk}_{\hat{1}\hat{i}})=Pf(X^{sk}_{\hat{1}\hat{2}})\in P_{2n-2}(X^{sk}). \end{equation*} So indeed \begin{equation*} S_1\circ(Pf(X^{sk}))\subset P_{2n-2}(X^{sk}). \end{equation*} Next assume $S_k\circ (Pf(X^{sk}))\subset P_{2n-2k}(X^{sk})$. We want to show \begin{equation*} S_{k+1}\circ (Pf(X^{sk}))\subset P_{2n-2k-2}(X^{sk}). \end{equation*} We have \begin{equation*} S_{k+1}\circ (Pf(X^{sk}))=S_{1}\circ (S_k \circ (Pf(X^{sk})))\subset S_1\circ (P_{2n-2k}(X^{sk}))\subset P_{2n-2k-2}(X^{sk}), \end{equation*} where the last inclusion follows from the same computation as in the case $k=1$, applied to each Pfaffian minor. For the other inclusion, we again use induction on $k$. First we show the inclusion holds for $k=1$. Let $\eta\in P_{2n-2}(X^{sk})$ be a $(2n-2)\times (2n-2)$ Pfaffian minor of $X^{sk}$. Corresponding to $\eta$ there exists a $2\times 2$ submatrix of the form \begin{center} $ \left( \begin{array}{cc} 0 & x \\ -x & 0 \\ \end{array} \right),$ \end{center} where $x$ is not in the $2n-2$ rows and columns of $\eta$.
If we differentiate the Pfaffian of $X^{sk}$ with respect to $x$, we get $\eta$ (up to sign). So we have $\eta \in S_1\circ (Pf(X^{sk}))$. Next assume $P_{2n-2k}(X^{sk})\subset S_k\circ (Pf(X^{sk}))$; we have \begin{equation*} P_{2n-2k-2}(X^{sk})\subset S_1\circ (P_{2n-2k}(X^{sk}))\subset S_1\circ (S_k\circ (Pf(X^{sk})))=S_{k+1}\circ (Pf(X^{sk})). \end{equation*} Thus by induction the equality holds. \end{proof} Recall that $(W)$ is the ideal of $S^{sk}$ generated by the degree 2 elements of types (a), (b) and (c) as in Definition \ref{def:PfaffianW}. \begin{prop}\label{prop:Pfaffian-main-prop-n} For the $2n\times 2n$ generic skew symmetric matrix $X^{sk}$ we have \begin{equation} (W)_n=\mathrm{Ann}(Pf(X^{sk}))\cap S^{sk}_n. \end{equation} \end{prop} \begin{proof} Let $2\leq k\leq n$. By Remark \ref{remark:introIK} and Lemma \ref{lem:SkcirclePfaff} we have (1) $W \circ Pf(X^{sk})=0 \Longleftrightarrow W \circ (S^{sk}_{n-2}\circ Pf(X^{sk}))=0 \Longleftrightarrow W \circ P_{4}(X^{sk})=0$. (2) $(\mathrm {Ann}(Pf(X^{sk}))) \cap S_2= W \Rightarrow S_{k-2} W \circ (S_{n-k} \circ Pf(X^{sk}))=0$. \newline$ \Rightarrow S_{k-2}(W) \circ P_{2k}(X^{sk})=0$. \newline$ \Rightarrow (W)_k \circ P_{2k}(X^{sk})=0$. Therefore for all integers $k$, $2\leq k\leq n$, we have \begin{equation}\label{eq:Pfaffian-main-easy-k-inclusion} (W)_k \subset \mathrm {Ann}(P_{2k}(X^{sk})) \cap S^{sk}_k. \end{equation} We need to show \begin{equation}\label{eq:Pfaffian-main-difficult-n-inclusion} (W)_n\supset \mathrm{Ann}(Pf(X^{sk}))\cap S^{sk}_n. \end{equation} We use induction on $n$. For $n=1,2$, we have the $2\times 2$ and $4\times 4$ skew symmetric matrices and the equality is easy to see. Now we want to show that the proposition holds for $n=3$. We use Remark \ref{remark:main-ann}. Let $\eta$ be a binomial in $\mathrm{Ann}(Pf(X^{sk}))\cap S^{sk}_3$. Without loss of generality we can write \begin{equation*} \eta=y_{12}y_{34}y_{56}-y_{\sigma(1)\sigma(2)}y_{\sigma(3)\sigma(4)}y_{\sigma(5)\sigma(6)}.
\end{equation*} where $\sigma \in S_6$, $sgn(\sigma)=1$, and we have $\sigma(1)<\sigma(3)<\sigma(5)$ and $\sigma(1)<\sigma(2)$, $\sigma(3)<\sigma(4)$ and $\sigma(5)<\sigma(6)$. If the two terms of the binomial $\eta$ have a common factor, then without loss of generality we can assume that the common factor is $y_{12}$, so we can write $\eta$ as \begin{equation*} \eta=y_{12}(y_{34}y_{56}-y_{\sigma(3)\sigma(4)}y_{\sigma(5)\sigma(6)}). \end{equation*} But by the definition of $W$ the binomial $y_{34}y_{56}-y_{\sigma(3)\sigma(4)}y_{\sigma(5)\sigma(6)}$ is included in $W$, since it is of the form (c). So we have $\eta\in (W)_3$. On the other hand, assume that the two terms of $\eta$, i.e. $y_{12}y_{34}y_{56}$ and $y_{\sigma(1)\sigma(2)}y_{\sigma(3)\sigma(4)}y_{\sigma(5)\sigma(6)}$, do not have any common factor. We can add and subtract another term of the Pfaffian, $\tau=y_{\beta(1)\beta(2)}y_{\beta(3)\beta(4)}y_{\beta(5)\beta(6)}$, such that $\beta$ is a permutation in $S_6$ with $\beta(1)<\beta(3)<\beta(5)$ and $\beta(1)<\beta(2)$, $\beta(3)<\beta(4)$ and $\beta(5)<\beta(6)$, and $\tau$ has one common factor with $y_{12}y_{34}y_{56}$ and one common factor with $y_{\sigma(1)\sigma(2)}y_{\sigma(3)\sigma(4)}y_{\sigma(5)\sigma(6)}$. Without loss of generality we can take $\beta(5)=5$, $\beta(6)=6$ and $\beta(1)=\sigma(1)$, $\beta(2)=\sigma(2)$. So we have \begin{equation*} \eta -\tau+\tau=\eta-y_{\sigma(1)\sigma(2)}y_{\beta(3)\beta(4)}y_{5,6}+y_{\sigma(1)\sigma(2)}y_{\beta(3)\beta(4)}y_{5,6}. \end{equation*} Hence we have \begin{equation*} \eta=y_{5,6}(y_{12}y_{34}-y_{\sigma(1)\sigma(2)}y_{\beta(3)\beta(4)})+y_{\sigma(1)\sigma(2)}(y_{\beta(3)\beta(4)}y_{5,6}-y_{\sigma(3)\sigma(4)}y_{\sigma(5)\sigma(6)}). \end{equation*} But by the definition of $W$ we know that $y_{12}y_{34}-y_{\sigma(1)\sigma(2)}y_{\beta(3)\beta(4)}$ and $y_{\beta(3)\beta(4)}y_{5,6}-y_{\sigma(3)\sigma(4)}y_{\sigma(5)\sigma(6)}$ are both elements of $W$ of type (c). So we have $\eta\in (W)_3$.
When $n$ is larger than 3, by the induction assumption we can assume that the proposition holds for all integers $k\leq n-1$. Again we use Remark \ref{remark:main-ann}. Assume $b=b_1+b_2$ is of degree $n$. If the two terms $b_1$ and $b_2$, which are monomials in $S^{sk}$, have a common factor $l$, i.e. $b_1=la_1$ and $b_2=la_2$, then $b=l(a_1+a_2)$, where $a_1$ and $a_2$ are of degree at most $n-1$. By the induction assumption the proposition holds for the binomial $a_1+a_2$, i.e., $a_1+a_2 \in (W)_{n-1}$, hence we have \begin{equation*} b=l(a_1+a_2)\in l(W)_{n-1}\subset (W)_n. \end{equation*} If the two terms $b_1$ and $b_2$ do not have any common factor, then by the same method as above we can rewrite the binomial $b$ by adding and subtracting a term $m$ of degree $n$ which has a common factor $m_1$ with $b_1$ and a common factor $m_2$ with $b_2$, and we will have \begin{equation*} b_1+b_2= b_1+m+b_2-m=m_1(c_1+m')+m_2(c_2-m''), \end{equation*} where $b_1=m_1c_1$, $m=m_1m'=m_2m''$ and $b_2=m_2c_2$. Since $c_1+m'$ and $c_2-m''$ are of degree at most $n-1$, the induction assumption yields \begin{equation*} b_1+b_2=m_1(c_1+m')+m_2(c_2-m'')\in(W)_n. \end{equation*} This completes the induction step and the proof of the proposition. \end{proof} \begin{cor} \label{cor:Pfaffian-main-cor-k} For $1\leq k \leq n$ we have \begin{equation*} (W)_k=\mathrm{Ann}(Pf(X^{sk}))\cap S^{sk}_{k}. \end{equation*} We also have $(W)_{n+1}=S^{sk}_{n+1}$.
\end{cor} \begin{proof} Using Equation \ref{eq:Pfaffian-main-easy-k-inclusion} we only need to show that \begin{equation*} \mathrm{Ann}(Pf(X^{sk}))\cap S^{sk}_{k}\subset (W)_k. \end{equation*} By Remark \ref{remark:introIK} and Lemma \ref{lem:SkcirclePfaff} we have \begin{equation*} (\mathrm{Ann}(Pf(X^{sk})))_k=(\mathrm{Ann}(S_{n-k}\circ Pf(X^{sk})))_k=(\mathrm{Ann}(P_{2k}(X^{sk})))_k. \end{equation*} Now if we label the $2k\times 2k$ Pfaffians of $X^{sk}$ by $f_1,...,f_s$ we have \begin{equation*} (\mathrm{Ann}(P_{2k}(X^{sk})))_k=(\mathrm{Ann}\langle f_1,...,f_s\rangle)_k=\bigcap_{i=1}^{i=s}(\mathrm{Ann}(f_i))_k. \end{equation*} Let $S^i$ denote the ring of differential operators in the variables of $f_i$, and let $W(i)$ denote the space $W$ constructed from the variables involved in $f_i$. By Proposition~\ref{prop:Pfaffian-main-prop-n} we have \begin{equation*} (W(i))_k=\mathrm{Ann}(f_i)\cap S^i_k. \end{equation*} So we have \begin{equation*} \mathrm{Ann}(Pf(X^{sk}))\cap S^{sk}_{k}\subset (W)_k. \end{equation*} To prove the second part, it is easy to see that every monomial of degree larger than $n$ is unacceptable, of type (a) or (b), hence in $(W)$, and we have $(W)_{n+1}=S^{sk}_{n+1}$. \end{proof} \begin{thm}\label{thm:Pfaffian-main-theorem} Let $X^{sk}$ be a generic skew symmetric $2n\times 2n$ matrix. Then the apolar ideal $\mathrm{Ann}(Pf(X^{sk}))$ is the ideal $(W)$ and is generated in degree 2. \end{thm} \begin{proof} This follows directly from Proposition \ref{prop:Pfaffian-main-prop-n} and Corollary \ref{cor:Pfaffian-main-cor-k}. \end{proof} \begin{cor} Let $X^{sk}$ be a $2n\times 2n$ generic skew symmetric matrix. We have \begin{equation}\label{eq:Pfaffian-rank-equation} 2^{2n-2} \leq cr(Pf(X^{sk}))\leq 2^{2n-1}. \end{equation} \end{cor} \begin{proof} By the Ranestad-Schreyer Proposition, Corollary \ref{cor:dimension-degree-Pfaffian-Hilbert} and Theorem \ref{thm:Pfaffian-main-theorem} we have \begin{equation*} cr(Pf(X^{sk}))\geq \frac{1}{2} \dim(S^{sk}/\mathrm{Ann}(Pf(X^{sk})))=\frac{1}{2}(2^{2n-1})=2^{2n-2}.
\end{equation*} The second inequality is true by Equation \ref{eq:Pedro}. \end{proof} \begin{remark}\label{remark:l-diff-pfaffian} For $n\ge 5$ it can easily be seen that the lower bound for the cactus rank given by Corollary 4.12 is larger than $l_{diff}={{2n}\choose {2t_0}}$, where $t_0=\lfloor n/2\rfloor$. \end{remark} \begin{thm}\label{thm:Hafnian-main-theorem-cor} Let $X^{s}$ be a generic symmetric $2n\times 2n$ matrix. Then the apolar ideal $\mathrm{Ann}(Hf(X^{s}))$ is generated in degree 2, and Inequality \ref{eq:Pfaffian-rank-equation} also holds for $Hf(X^{s})$. \end{thm} \begin{proof} By the definition of the Hafnian, it is easy to see that none of the diagonal elements appears in $Hf(X^s)$, so for $1\leq i\leq 2n$ we have \begin{equation*} y_{ii}\circ Hf(X^s)=0. \end{equation*} Hence without loss of generality we can restrict our discussion to the case where $X^s$ is a generic zero-diagonal symmetric matrix. By changing the Pfaffians to Hafnians and vice versa, the proof follows directly from the proofs that we have for the Pfaffian of a generic skew symmetric matrix. \end{proof} \section{Gr\"{o}bner bases} In Section \ref{generic} we have shown that for a generic $n\times n$ matrix $A$, $\mathrm{Ann}(\det(A))=(\mathcal P_D+\mathcal U_D)$. In \cite{LS}, R. Laubenbacher and I. Swanson give a Gr\"{o}bner basis for the ideal of $2\times 2$ permanents of a matrix. In this section we first review their result (Theorem \ref{thm:generic-grobner-LS}) and then state our result for the ideal $\mathrm{Ann}(\det(A))$ and prove it independently (Theorem \ref{thm:generic-grobner-alternative-proof}). \begin{defi}\label{def:diagonal order}(\cite{LS}, page 197) Let $D=(d_{ij})$ be the matrix of the differential operators as defined in Section \ref{intro}. A monomial order on the $d_{ij}$ is \emph{diagonal} if for any square submatrix of $D$, the leading term of the permanent (or of the determinant) of that submatrix is the product of the entries on the main diagonal.
An example of such an order is the lexicographic order defined by: \begin{equation*} d_{ij}<d_{kl} \text{ if and only if } l>j, \text{ or } l=j \text{ and } k>i. \end{equation*} \end{defi} Throughout this section we use a lexicographic diagonal ordering. \begin{thm}\label{thm:generic-grobner-LS}(\cite{LS}, page 197) The following collection $G$ of polynomials is a minimal reduced Gr\"{o}bner basis for $\mathcal P_D$, with respect to any diagonal ordering: (1) The subpermanents $d_{ij}d_{kl}+d_{kj}d_{il}$, $i<k$, $j<l$; (2) $d_{i_1j_1}d_{i_1j_2}d_{i_2j_3}$, $i_1>i_2$, $j_1<j_2<j_3$; (3) $d_{i_1j_1}d_{i_2j_2}d_{i_2j_3}$, $i_1>i_2$, $j_1<j_2<j_3$; (4) $d_{i_1j_1}d_{i_2j_1}d_{i_3j_2}$, $i_1<i_2<i_3$, $j_1>j_2$; (5) $d_{i_1j_1}d_{i_2j_2}d_{i_3j_2}$, $i_1<i_2<i_3$, $j_1>j_2$; (6) $d_{i_1j_1}^{e_1}d_{i_2j_2}^{e_2}d_{i_3j_3}^{e_3}$, $i_1<i_2<i_3$, $j_2>j_3$, $e_1e_2e_3=2$. \end{thm} Monomials of type (2), (3), (4), (5) and (6) in the above theorem are in the ideal generated by all unacceptable monomials. \begin{thm}\label{thm:generic-grobner-alternative-proof} The collection of unacceptable degree 2 monomials and $2\times 2$ subpermanents of $D$ forms a Gr\"{o}bner basis for $\mathrm{Ann}(\det(A))$ with respect to any diagonal ordering. \end{thm} \begin{proof}We will denote $\mathcal U_D$ and $\mathcal P_D$ by $\mathcal U$, $\mathcal P$ respectively in the following, where $D$ is understood.\par The elements of $\mathcal U+\mathcal P$ generate $\mathrm{Ann}(\det(A))$. Since $\mathcal U$ is a set of monomials, it is already a Gr\"{o}bner basis of the ideal it generates. We use Buchberger's algorithm to find a Gr\"{o}bner basis for $\mathcal P+\mathcal U$. We consider several cases: a) Let $F=a_{ik}a_{jl}+a_{il}a_{jk}$ and $G=a_{uz}a_{vw}+a_{uw}a_{vz}$ be two distinct permanents in $\mathcal P$: \begin{center} $ F= \mathrm{perm}\left( \begin{array}{cc} a_{ik} & a_{il} \\ a_{jk} & a_{jl}\\ \end{array} \right)$.
\end{center} and \begin{center} $ G=\mathrm{perm} \left( \begin{array}{cc} a_{uz} & a_{uw} \\ a_{vz} & a_{vw}\\ \end{array} \right)$. \end{center} Let $f_1=a_{ik}a_{jl}$ be the leading term of $F$, and $g_1=a_{uz}a_{vw}$ be the leading term of $G$ with respect to the given diagonal ordering. Denote the least common multiple of $f_1$ and $g_1$ by $h_{11}$. Let \begin{equation*} S(F,G)=(h_{11}/f_1)F-(h_{11}/g_1)G=a_{uz}a_{vw}a_{il}a_{jk}-a_{ik}a_{jl}a_{uw}a_{vz}. \end{equation*} Now, using the multivariate division algorithm, we reduce each $S(F,G)$ relative to the set of all permanents. When there is no common factor in the initial terms of $F$ and $G$, the reduction is zero, as we now show, using $F$ and $G$ themselves. First we reduce $S(F,G)$ by dividing by $F\in\mathcal P$, so we will have \begin{equation*} S(F,G)+a_{uw}a_{vz}(a_{ik}a_{jl}+a_{il}a_{jk})=a_{uz}a_{vw}a_{il}a_{jk}+a_{uw}a_{vz}a_{il}a_{jk}. \end{equation*} Then we reduce the result using $G$ this time, so we will have \begin{equation*} a_{uz}a_{vw}a_{il}a_{jk}+a_{uw}a_{vz}a_{il}a_{jk}-a_{il}a_{jk}(a_{uz}a_{vw}+a_{uw}a_{vz})=0. \end{equation*} So we have shown that for all pairs $F$, $G$ of distinct permanents of $D$ whose initial terms have no common factor, the $S$-polynomial $S(F,G)$ reduces to zero with respect to $\mathcal P$. b) Let $F=a_{ik}a_{jl}+a_{il}a_{jk}$ and $G=a_{ik}a_{jm}+a_{im}a_{jk}$ be two permanents so that their initial terms have a common factor. We have \begin{equation*} S(F,G)=a_{il}a_{jk}a_{jm}-a_{im}a_{jk}a_{jl}\in (\mathcal U). \end{equation*} c) Let $F=a_{im}a_{jn}+a_{in}a_{jm}$ be a permanent and $M=a_{tk}a_{tl}$ be an unacceptable monomial. We have \begin{equation*} S(F,M)=a_{tk}a_{tl}a_{jm}a_{in} \in (\mathcal U). \end{equation*} d) Let $F=a_{il}a_{jm}+a_{im}a_{jl}$ be a permanent and $M=(a_{kn})^2$ be an unacceptable monomial. We have \begin{equation*} S(F,M)=a_{im}a_{jl}(a_{kn})^2 \in (\mathcal U).
\end{equation*} e) Let $F=a_{il}a_{jm}+a_{im}a_{jl}$ be a permanent and $M=(a_{il})^2$ be an unacceptable monomial which has a common factor with the initial term of $F$. We have \begin{equation*} S(F,M)=a_{il}a_{im}a_{jl} \in (\mathcal U). \end{equation*} f) Let $F=a_{il}a_{jm}+a_{im}a_{jl}$ be a permanent and $M=a_{jn}a_{kn}$ be an unacceptable monomial. We have \begin{equation*} S(F,M)=a_{im}a_{jl}a_{jn}a_{kn} \in (\mathcal U). \end{equation*} This exhausts all possibilities, so the generating set $\mathcal P+\mathcal U$ is itself a Gr\"{o}bner basis by Buchberger's criterion. \end{proof} \subsection{Discussion of connected sum} \begin{defi} (\cite{MS}) A polynomial $F$ in $r$ variables is a connected sum if we can write $F=F'+F''$ with $F'$ and $F''$ in $r'$ and $r''$ variables, where $r'+r''=r$. \end{defi} For example, if $A$ is a generic $2\times 2$ matrix, its determinant is a sum of two monomials in complementary sets of variables, hence a connected sum. \begin{prop}(Buczy{\'n}ska, Buczy{\'n}ski, Teitler \cite{BBT}) If a form $F$ of degree $d$ is a connected sum, then the apolar ideal has a minimal generator in degree $d$. (The converse does not hold.) \end{prop} In particular, since the generic determinant and permanent of size $n \ge 3$ have their annihilating ideals generated in degree 2, they are not connected sums. This is also true for the Pfaffian of generic skew symmetric matrices and the Hafnian of generic symmetric matrices of size $2n\ge 6$. \end{document}
\begin{document} \title[Sliding invariants and classification of holomorphic foliations]{Sliding invariants and classification of singular holomorphic foliations in the plane} \author{{\sc Truong} Hong Minh } \thanks{\textbf{Keywords}: sliding invariants, holomorphic foliations, classification.\\ \indent\textbf{2010 Mathematics Subject Classification}: 34M35, 32S65} \address{Institut de Mathématiques de Toulouse, UMR5219, Université Toulouse 3} \email{[email protected]} \begin{abstract}{ By introducing a new invariant called the set of slidings, we give a complete strict classification of the class of germs of non-dicritical holomorphic foliations in the plane whose Camacho-Sad indices are not rational. Moreover, we show that, in this class, the new invariant is finitely determined. Consequently, the finite determination of the class of isoholonomic non-dicritical foliations and of absolutely dicritical foliations that have the same Dulac maps is proved. }\end{abstract} \maketitle \section{Introduction} The problem of classification of germs of foliations in the complex plane was stated by Thom \cite{Zol}. He conjectured that the analytic type of a foliation defined in a neighborhood of a singular point is completely determined by its associated separatrix and its corresponding holonomy. R. Moussu in \cite{Mou} gave a counterexample for this statement and showed that one has to consider the holonomy representation of each irreducible component of the exceptional divisor in a desingularization instead of the holonomy of the separatrices. For this new version, Thom's problem is proved for cuspidal type singular points \cite{Mou}, \cite{Cer-Mou} and more generally for quasi-homogeneous foliations \cite{Gen}. However, the new statement of Thom's conjecture was refuted by J.F. Mattei by computing the dimension of the space of isoholonomic deformations \cite{Mat2}, \cite{Mat1}: there must be other invariants for the non quasi-homogeneous foliations.
This conclusion is confirmed by the number of free coefficients in the normal forms in \cite{Gen-Pau1}, \cite{Gen-Pau2} and in the hamiltonian part of the normal forms of vector fields in \cite{Ort-Ros-Vor}. By adding a new invariant called the \emph{set of slidings}, this paper solves the problem of strict classification for the non-dicritical foliations whose Camacho-Sad indices are not rational. Here, strict classification means classification up to diffeomorphisms tangent to the identity. \subsection{Preliminaries} A germ of singular foliation $\mathcal{F}$ in $(\mathbb{C}^2,0)$ is called \emph{reduced} if there exists a coordinate system in which it is defined by a $1$-form whose linear part is \begin{equation*} \lambda_1 y dx + \lambda_2 xdy,\; \frac{\lambda_2}{\lambda_1}\not\in\mathbb{Q}_{< 0}; \end{equation*} $\lambda=-\frac{\lambda_2}{\lambda_1}$ is called the \emph{Camacho-Sad index} of $\mathcal{F}$. When $\lambda=0$, the origin is called a \emph{saddle-node} singularity; otherwise it is called \emph{nondegenerate}. A theorem of A. Seidenberg \cite{Sei, Mat-Mou} says that any singular foliation $\mathcal{F}$ with isolated singularity admits a canonical \emph{desingularization}. More precisely, there is a holomorphic map \begin{equation}\label{sigma} \sigma: \mathcal{M}\rightarrow (\mathbb{C}^2,0) \end{equation} obtained as a composition of a finite number of blowing-ups at points such that any point $m$ of the \emph{exceptional divisor} $\mathcal{D}:=\sigma^{-1}(0)$ is either a regular point or a reduced singularity of the strict transform $\tilde{\mathcal{F}}=\sigma^*(\mathcal{F})$. An intersection of two irreducible components of $\mathcal{D}$ is called a \emph{corner}. An irreducible component of $\mathcal{D}$ is a \emph{dead branch} if in this component there is a unique singularity of $\tilde{\mathcal{F}}$ and that singularity is a corner.
A \emph{separatrix} of $\mathcal{F}$ is an irreducible analytic invariant curve of $\mathcal{F}$ through the origin. It is well known that any germ of singular foliation $\mathcal{F}$ in $(\mathbb{C}^2,0)$ possesses at least one separatrix \cite{Cam-Sad}. When the number of separatrices is finite, $\mathcal{F}$ is called \emph{non-dicritical}; otherwise it is called \emph{dicritical}. Denote by $\mathrm{Sing}(\tilde{\mathcal{F}})$ the set of all singularities of the strict transform $\tilde{\mathcal{F}}$. If $D$ is a non-dicritical irreducible component of the exceptional divisor $\mathcal{D}$, then $D^*=D\setminus\mathrm{Sing}(\tilde{\mathcal{F}})$ is a leaf of $\tilde{\mathcal{F}}$. Let $m$ be a regular point in $D^*$ and $\Sigma$ a small analytic section through $m$ transverse to $\tilde{\mathcal{F}}$. For any loop $\gamma$ in $D^*$ based at $m$ there is a germ of holomorphic return map $ h_\gamma : (\Sigma, m)\rightarrow (\Sigma, m)$ which only depends on the homotopy class of $\gamma$ in the fundamental group $\pi_1(D^*,m)$. The map $h:\pi_1(D^*,m)\rightarrow \mathrm{Diff}(\Sigma,m)$, $h(\gamma)=h_\gamma$, is called the \emph{vanishing holonomy representation} of $\mathcal{F}$ on $D$. Let $\mathcal{F}'$ be a foliation that also admits $\sigma$ as its desingularization map. Assume that $\mathrm{Sing}(\tilde{\mathcal{F}}')=\mathrm{Sing}(\tilde{\mathcal{F}})$, where $\mathrm{Sing}(\tilde{\mathcal{F}}')$ is the set of singularities of the strict transform $\tilde{\mathcal{F}}'$. Denote by $h'_\gamma$ in $\mathrm{Diff}(\Sigma,m)$ the vanishing holonomy representation of $\mathcal{F}'$.
We say that the vanishing holonomy representations of $\mathcal{F}$ and $\mathcal{F}'$ on $D$ are conjugated if there exists $\phi\in\mathrm{Diff}(\Sigma,m)$ such that $\phi\circ h_\gamma= h'_\gamma\circ\phi$ for every $\gamma\in\pi_1(D^*,m)$. The vanishing holonomy representations of $\mathcal{F}$ and $\mathcal{F}'$ are called conjugated if they are conjugated on every non-dicritical irreducible component of $\mathcal{D}$. \begin{notation}\label{no1} We denote by $\mathcal{M}$ the set of all non-dicritical foliations $\mathcal{F}$ defined on $(\mathbb{C}^2,0)$ such that the Camacho-Sad index of $\tilde{\mathcal{F}}$ at each singularity is not rational. \end{notation} If $\mathcal{F}$ is in $\mathcal{M}$ then, after desingularization, no singularity of $\tilde{\mathcal{F}}$ is a saddle-node. Moreover, the Chern class of an irreducible component of the divisor, which is an integer, is equal to the sum of the Camacho-Sad indices of the singularities in this component \cite{Cam-Sad}. Therefore the exceptional divisor of an element of $\mathcal{M}$ admits no dead branch: the unique singularity on a dead branch would have an integer Camacho-Sad index. \subsection{Absolutely dicritical foliations}\label{sec1.2} Let $\sigma$ be as in \eqref{sigma} a composition of a finite number of blowing-ups at points. A germ of singular holomorphic foliation $\mathcal{L}$ is said to be $\sigma$-\emph{absolutely dicritical} if the strict transform $\tilde{\mathcal{L}}=\sigma^*(\mathcal{L})$ is a regular foliation and the exceptional divisor $\mathcal{D}=\sigma^{-1}(0)$ is completely transverse to $\tilde{\mathcal{L}}$. When $\sigma$ is the standard blowing-up at the origin, we call $\mathcal{L}$ a \emph{radial foliation}. At each corner $p=D_i\cap D_j$ of $\mathcal{D}$, the diffeomorphism from $(D_i,p)$ to $(D_j,p)$ obtained by following the leaves of $\tilde{\mathcal{L}}$ is called the \emph{Dulac map} of $\tilde{\mathcal{L}}$ at $p$.
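To fix ideas, consider the simplest example: the pencil of lines through the origin, defined by $xdy-ydx=0$, is a radial foliation. In the chart $(x,t)$ of the blow-up, where $y=tx$, one computes
\begin{equation*}
\sigma^*(x dy - y dx)=x(t\,dx+x\,dt)-tx\,dx=x^2 dt,
\end{equation*}
so the strict transform is the regular foliation by the lines $\{t=\mathrm{const}\}$, which is everywhere transverse to the divisor $\{x=0\}$. Similarly, in the local model at a corner $p=D_i\cap D_j$ in which $D_i=\{y=0\}$, $D_j=\{x=0\}$ and $\tilde{\mathcal{L}}$ is given by the level curves of $x+y$, the leaf through a point $(t,0)\in D_i$ meets $D_j$ at $(0,t)$, so the Dulac map of this local model is the identity.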
The existence of such foliations for any given $\sigma$ is proved in \cite{Can-Cor}. In fact, the authors of \cite{Can-Cor} showed that if on each smooth component of $\mathcal{D}$ we take any two smooth curves transverse to $\mathcal{D}$, then there is always an absolutely dicritical foliation admitting them as integral curves. We will write $\mathrm{Sep}(\tilde{\mathcal{F}})\pitchfork\tilde{\mathcal{L}}$ if at any point $p\in\mathrm{Sing}(\tilde{\mathcal{F}})$ the separatrices of $\tilde{\mathcal{F}}$ through $p$ are transverse to $\tilde{\mathcal{L}}$. \begin{lemma}\label{lem1} Let $\mathcal{F}$ be a non-dicritical foliation such that $\sigma$ is its desingularization map. Then there exists a $\sigma$-absolutely dicritical foliation $\mathcal{L}$ satisfying $\mathrm{Sep}(\tilde{\mathcal{F}})\pitchfork\tilde{\mathcal{L}}$. \end{lemma} \begin{proof} Denote by $L_1,\ldots,L_k$ the strict transforms of the separatrices of $\mathcal{F}$. On each component $D$ of $\mathcal{D}$ that contains no singularity of $\tilde{\mathcal{F}}$ except the corners we take a smooth curve $L_{k+j}$ transverse to $D$. We thus obtain a set of curves $\{L_1,\ldots,L_n\}$ such that each component of $\mathcal{D}$ is transverse to at least one curve $L_i$. Denote $p_i=L_i\cap\mathcal{D}$. By \cite{Can-Cor}, for each $i$ there exists a $\sigma$-absolutely dicritical foliation $\mathcal{L}_i$ defined by a $1$-form $\omega_i$ such that $L_i$ is transverse to $\tilde{\mathcal{L}}_i$. Choose a local chart $(x_i,y_i)$ at $p_i$ such that $\mathcal{D}=\{x_i=0\}$, $L_i=\{y_i=0\}$ and \begin{equation*} \sigma^*\omega_i(x_i,y_i)=x_i^{m_i}d(x_i+y_i) + h.o.t., \end{equation*} where ``h.o.t.'' stands for higher order terms. Writing $\omega_i$ in the local chart $(x_j,y_j)$ gives \begin{equation} \sigma^*\omega_i(x_j,y_j)=x_j^{m_j}d(a_{ij}x_j+b_{ij}y_j) + h.o.t.
\end{equation} Because $\tilde{\mathcal{L}}_i$ is transverse to $\mathcal{D}$, we have $b_{ij}\neq 0$. We define $a_{ii}=b_{ii}=1$. There always exists a vector $(c_1,\ldots,c_n)\in\mathbb{C}^n$ such that, for $j=1,\ldots,n$, \begin{equation*} a_j=\sum_{i=1}^n c_i a_{ij}\neq 0 \;\;\text{and}\;\; b_j=\sum_{i=1}^n c_i b_{ij}\neq 0, \end{equation*} since each of these $2n$ conditions excludes only a hyperplane of $\mathbb{C}^n$. Set $\omega_0=\sum_i c_i\omega_i$. Then, in the local chart $(x_j,y_j)$, we have \begin{equation*} \sigma^*\omega_0(x_j,y_j)=x_j^{m_j}d(a_{j}x_j+b_{j}y_j) + h.o.t.,\;\;\text{where}\;\; a_j\ne 0,\ b_j\ne 0. \end{equation*} Because $\omega_0$ and the $\omega_i$, $i=1,\ldots, n$, have the same multiplicity on each component of $\mathcal{D}$, they have the same vanishing order there. Since each component of $\mathcal{D}$ contains at least one point $p_i$, and the strict transform $\tilde{\mathcal{L}}$ of the foliation $\mathcal{L}$ defined by $\omega_0$ is transverse to $\mathcal{D}$ in a neighborhood of each $p_i$, the foliation $\tilde{\mathcal{L}}$ is generically transverse to $\mathcal{D}$. By \cite{Can-Cor}, $\mathcal{L}$ is absolutely dicritical, and it satisfies $\mathrm{Sep}(\tilde{\mathcal{F}})\pitchfork\tilde{\mathcal{L}}$. \end{proof} \subsection{Slidings of foliations} Consider first a nondegenerate reduced foliation $\mathcal{F}$ in $(\mathbb{C}^2,0)$. By \cite{Mat-Mou}, there exists a coordinate system in which $\mathcal{F}$ is defined by $$\lambda y(1+A(x,y))dx + xdy,\;\lambda\notin\mathbb{Q}_{\leq 0},$$ where $A(0,0)=0$. Let $\mathcal{L}$ be a germ of regular foliation whose invariant curve through the origin (we call it the separatrix of $\mathcal{L}$) is transverse to the two separatrices of $\mathcal{F}$, which are denoted by $S_1$ and $S_2$. Then we have the following lemma, whose proof is a straightforward computation.
\begin{lemma}\label{lem2} The tangent curve of $\mathcal{F}$ and $\mathcal{L}$, denoted $T(\mathcal{F},\mathcal{L})$, is a smooth curve transverse to the two separatrices of $\mathcal{F}$. Moreover, if the separatrix of $\mathcal{L}$ is tangent to $\{x-cy=0\}$ then $T(\mathcal{F},\mathcal{L})$ is tangent to $\{x+c\lambda y=0\}$. \end{lemma} Indeed, if $\mathcal{L}$ is defined by a $1$-form $\eta=dx-c\,dy+h.o.t.$ then $\omega\wedge\eta=-(x+c\lambda y)dx\wedge dy+h.o.t.$, and $T(\mathcal{F},\mathcal{L})$ is the zero locus of the coefficient of $dx\wedge dy$. \begin{figure} \caption{Sliding of $\mathcal{F}$ and $\mathcal{L}$} \label{figure1} \end{figure} After a standard blowing-up $\sigma_1$ at the origin, the strict transform $\tilde{T}(\mathcal{F},\mathcal{L})$ of $T(\mathcal{F},\mathcal{L})$ is transverse to $\tilde{\mathcal{F}}$ and cuts $D_1=\sigma_1^{-1}(0)$ at a point $p$. We set $D_1^*=D_1\setminus \mathrm{Sing}(\sigma_1^*(\mathcal{F}))$ and denote by $\tilde{h}:\pi_1(D_1^*,p)\rightarrow \mathrm{Diff}(\tilde{T}(\mathcal{F},\mathcal{L}),p)$ the vanishing holonomy representation of $\mathcal{F}$. We choose a generator $\gamma$ of $\pi_1(D_1^*,p)\cong \mathbb{Z}$. Then $\sigma_1$ induces $$h_\gamma=\sigma_1\circ \tilde{h}(\gamma)\circ\sigma_1^{-1}\in\mathrm{Diff}(T(\mathcal{F},\mathcal{L}),0).$$ We call $h_\gamma$ the \emph{holonomy on the tangent curve $T(\mathcal{F},\mathcal{L})$}. Denote by $\pi_{S_1}$ and $\pi_{S_2}$ the projections along the leaves of $\mathcal{L}$ from $T(\mathcal{F},\mathcal{L})$ to $S_1$ and $S_2$ respectively. \begin{defi} The sliding of a reduced foliation $\mathcal{F}$ and a regular foliation $\mathcal{L}$ on $S_1$ (resp., $S_2$) is the diffeomorphism (figure \ref{figure1}) \begin{align*} g_{S_1}(\mathcal{F},\mathcal{L})&= \pi_{S_1*}(h_\gamma)=\pi_{S_1}\circ h_\gamma\circ\pi_{S_1}^{-1}\\ \big(\text{resp.,}\;\;g_{S_2}(\mathcal{F},\mathcal{L})&= \pi_{S_2*}(h_\gamma)=\pi_{S_2}\circ h_\gamma\circ\pi_{S_2}^{-1}\big).
\end{align*}\end{defi} Let $d:S_1\rightarrow S_2$ be the Dulac map of $\mathcal{L}$ (Section \ref{sec1.2}). Since $d=\pi_{S_2}\circ\pi_{S_1}^{-1}$, it is obvious that \begin{equation}\label{3} g_{S_2}(\mathcal{F},\mathcal{L})=d_*\left(g_{S_1}(\mathcal{F},\mathcal{L})\right). \end{equation} Now let $\mathcal{F}$ be a non-dicritical foliation such that, after desingularization by the map $\sigma$, all singularities of $\sigma^*(\mathcal{F})=\tilde{\mathcal{F}}$ are nondegenerate. By Lemma \ref{lem1} there exists a $\sigma$-absolutely dicritical foliation $\mathcal{L}_0$ such that $\mathrm{Sep}(\tilde{\mathcal{F}})\pitchfork\tilde{\mathcal{L}}_0$. \begin{notation}\label{no5} We denote by $\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$ the set of all $\sigma$-absolutely dicritical foliations $\mathcal{L}$ satisfying the two following properties: \begin{itemize} \item $\tilde{\mathcal{L}}$ and $\tilde{\mathcal{L}}_0$ have the same Dulac maps at any corner of $\mathcal{D}$. \item At each singularity $p$ of $\tilde{\mathcal{F}}$, the invariant curves of $\tilde{\mathcal{L}}$ and $\tilde{\mathcal{L}}_0$ through $p$ are tangent (figure \ref{p38}). \end{itemize} \end{notation} \begin{figure} \caption{An element $\mathcal{L}$ of $\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$} \label{p38} \end{figure} Let $\mathcal{L}$ be in $\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$ and let $D$ be an irreducible component of $\mathcal{D}$. Suppose that $p_1,\ldots,p_m$ are the singularities of $\tilde{\mathcal{F}}$ on $D$. Then we set $$S_D(\tilde{\mathcal{F}},\tilde{\mathcal{L}})=\{g_{D,p_1}(\tilde{\mathcal{F}},\tilde{\mathcal{L}}),\ldots,g_{D,p_m}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})\},$$ where $g_{D,p_i}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})$ is the sliding of $\tilde{\mathcal{F}}$ and $\tilde{\mathcal{L}}$ in a neighborhood of $p_i$.
\begin{defi} The sliding of $\mathcal{F}$ and $\mathcal{L}$ is \begin{equation*} S(\mathcal{F},\mathcal{L})=\bigcup_{D\in\mathrm{Comp}(\mathcal{D})}S_D(\tilde{\mathcal{F}},\tilde{\mathcal{L}}), \end{equation*} where $\mathrm{Comp}(\mathcal{D})$ is the set of all irreducible components of $\mathcal{D}$. The set of slidings of $\mathcal{F}$ relative to the direction $\mathcal{L}_0$ is the set \begin{equation*} \mathfrak{S}(\mathcal{F},\mathcal{L}_0)=\bigcup_{\mathcal{L}\in\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)}S(\mathcal{F},\mathcal{L}). \end{equation*} \end{defi} We will prove in Corollary \ref{cor3} that $\mathfrak{S}(\mathcal{F},\mathcal{L}_0)$ is an invariant of $\mathcal{F}$: if $\mathcal{F}$ and $\mathcal{F}'$ are conjugated by $\Phi$, whose lift through $\sigma$ we denote by $\tilde{\Phi}$, then for each $\mathcal{L}$ in $\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$ we have $S(\mathcal{F},\mathcal{L})=\tilde{\Phi}_{|\mathcal{D}}\circ S(\mathcal{F}',\Phi_*\mathcal{L})\circ\tilde{\Phi}^{-1}_{|\mathcal{D}}$. Under some conditions on $\mathcal{F}$ and $\mathcal{F}'$ (Theorem \ref{thr1}), we will have $\tilde{\Phi}_{|\mathcal{D}}=\mathrm{Id}$, and therefore $S(\mathcal{F},\mathcal{L})=S(\mathcal{F}',\Phi_*\mathcal{L})$. Moreover, $\Phi_*\mathcal{L}$ is also in $\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$. Consequently, $\mathfrak{S}(\mathcal{F},\mathcal{L}_0)=\mathfrak{S}(\mathcal{F}',\mathcal{L}_0)$. \begin{rema}\label{re7} For each singularity $p$ of $\tilde{\mathcal{F}}$ that is a corner, say $p=D_i\cap D_j$, there are two slidings $g_{D_i,p}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})$ and $g_{D_j,p}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})$.
However, by \eqref{3}, $g_{D_j,p}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})$ is completely determined by $g_{D_i,p}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})$ and the Dulac map of $\tilde{\mathcal{L}}$ at $p$.\\ \noindent This invariant is named ``sliding'' because it gives an obstruction to the construction of a local conjugacy of two foliations that fixes the points of the exceptional divisor (Corollary \ref{cor3}).\\ \noindent The definition of $\mathfrak{S}(\mathcal{F},\mathcal{L}_0)$ does not depend on the choice of the element $\mathcal{L}_0$: if $\mathcal{L}'_0\in\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$ then $\mathcal{L}_0\in\mathcal{R}_{\mathcal{F}}(\mathcal{L}'_0)$ and $\mathfrak{S}(\mathcal{F},\mathcal{L}_0)=\mathfrak{S}(\mathcal{F},\mathcal{L}'_0)$.\\ \noindent Although $S(\mathcal{F},\mathcal{L})$ is a set of local diffeomorphisms, it is not a merely local invariant: it also encodes the relations between these local diffeomorphisms, because they are all defined by holonomy projections along the global fibration $\mathcal{L}$. In some sense, any fibration $\mathcal{L}\in\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$ plays the role of a common global transversal coordinate on which all the sliding invariants are computed simultaneously. \end{rema} Let us clarify the role of the sliding invariant in the problem of classification of germs of foliations. Suppose that two non-dicritical foliations $\mathcal{F}$ and $\mathcal{F}'$ are such that their separatrices and their vanishing holonomies are conjugated.
Suppose moreover that, after desingularization, the Camacho-Sad indices of $\mathcal{F}$ and $\mathcal{F}'$ coincide. Then, after blowing-ups, $\mathcal{F}$ and $\mathcal{F}'$ are locally conjugated in a neighborhood of each of their singularities. Although their vanishing holonomies are conjugated, in general we cannot glue these local conjugacies together. The obstruction is that the local conjugacies induce local diffeomorphisms of the exceptional divisor, which we call the slidings; in general, there is no reason for these slidings to be restrictions of a global diffeomorphism of the divisor. \subsection{Statement of the main results} Let $\mathcal{F}, \mathcal{F}'\in\mathcal{M}$. We say that their \emph{strict separatrices are tangent}, denoted $\mathrm{Sep}(\tilde{\mathcal{F}})\mathbin{\!/\mkern-5mu/\!} \mathrm{Sep}(\tilde{\mathcal{F}}')$, if they have the same desingularization map and the same set of singularities and if, at each singularity that is not a corner of the divisor, the separatrices of $\tilde{\mathcal{F}}$ and $\tilde{\mathcal{F}}'$ are tangent. If $\mathrm{Sep}(\tilde{\mathcal{F}})\mathbin{\!/\mkern-5mu/\!} \mathrm{Sep}(\tilde{\mathcal{F}}')$ and $\mathcal{L}_0$ is an absolutely dicritical foliation satisfying $\mathrm{Sep}(\tilde{\mathcal{F}})\pitchfork\tilde{\mathcal{L}}_0$, then $\mathrm{Sep}(\tilde{\mathcal{F}}')\pitchfork\tilde{\mathcal{L}}_0$ and $\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)=\mathcal{R}_{\mathcal{F}'}(\mathcal{L}_0)$. We denote by $\mathrm{CS}(\tilde{\mathcal{F}})$ the set of Camacho-Sad indices of $\tilde{\mathcal{F}}$ at all its singularities.
We write $\mathrm{CS}(\tilde{\mathcal{F}})=\mathrm{CS}(\tilde{\mathcal{F}}')$ if at each singularity $\tilde{\mathcal{F}}$ and $\tilde{\mathcal{F}}'$ have the same Camacho-Sad index. \begin{theorem}\label{thr1} Let $\mathcal{F}$ and $\mathcal{F}'$ be two foliations in the class $\mathcal{M}$ (see Notation \ref{no1}) such that $\mathrm{Sep}(\tilde{\mathcal{F}})\mathbin{\!/\mkern-5mu/\!}\mathrm{Sep}(\tilde{\mathcal{F}}')$. Suppose that $\mathcal{L}_0$ is an absolutely dicritical foliation satisfying $\mathrm{Sep}(\tilde{\mathcal{F}})\pitchfork\tilde{\mathcal{L}}_0$. Let $\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$ be as in Notation \ref{no5} and $\mathfrak{S}(\mathcal{F},\mathcal{L}_0)$, $\mathfrak{S}(\mathcal{F}',\mathcal{L}_0)$ the corresponding sets of slidings. Then the three following statements are equivalent: \begin{enumerate} \item[(i)] $\mathcal{F}$ and $\mathcal{F}'$ are strictly analytically conjugated. \item[(ii)] Their vanishing holonomy representations are strictly analytically conjugated, $\mathrm{CS}(\tilde{\mathcal{F}})=\mathrm{CS}(\tilde{\mathcal{F}}')$ and $\mathfrak{S}(\mathcal{F},\mathcal{L}_0)=\mathfrak{S}(\mathcal{F}',\mathcal{L}_0)$. \item[(iii)] Their vanishing holonomy representations are strictly analytically conjugated, $\mathrm{CS}(\tilde{\mathcal{F}})=\mathrm{CS}(\tilde{\mathcal{F}}')$ and $\mathfrak{S}(\mathcal{F},\mathcal{L}_0)\cap\mathfrak{S}(\mathcal{F}',\mathcal{L}_0)\neq\emptyset$. \end{enumerate} \end{theorem} Here, a strict conjugacy means a conjugacy tangent to the identity.
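Let us note that the condition $\mathrm{CS}(\tilde{\mathcal{F}})=\mathrm{CS}(\tilde{\mathcal{F}}')$ in (ii) and (iii) is necessary: the Camacho-Sad index is invariant under strict conjugacy. Indeed, if $\Phi=\mathrm{Id}+h.o.t.$ conjugates two reduced foliations defined by $\omega=\lambda_1 ydx+\lambda_2 xdy+h.o.t.$ and $\omega'=\lambda'_1 ydx+\lambda'_2 xdy+h.o.t.$, then $\Phi^*\omega'=u\,\omega$ for some unit $u\in\mathbb{C}\{x,y\}$; comparing the linear parts yields
\begin{equation*}
\lambda'_1 ydx+\lambda'_2 xdy=u(0,0)\left(\lambda_1 ydx+\lambda_2 xdy\right),
\end{equation*}
and hence $-\lambda'_2/\lambda'_1=-\lambda_2/\lambda_1$.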
We will prove that the slidings of foliations are finitely determined: \begin{theorem}\label{thr2} Let $\mathcal{F}$ be a non-dicritical foliation without saddle-node singularities after desingularization. There exists a natural number $N$ such that if a non-dicritical foliation $\mathcal{F}'$ satisfies the two following conditions: \begin{enumerate} \item[(i)] $\mathcal{F}$ and $\mathcal{F}'$ have the same set of singularities after desingularization and, in a neighborhood of each singularity, $\tilde{\mathcal{F}}$ and $\tilde{\mathcal{F}}'$ are locally strictly analytically conjugated, \item[(ii)] there exist $\mathcal{L},\mathcal{L}'$ in $\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$ such that $J^N(S(\mathcal{F},\mathcal{L}))=J^N(S(\mathcal{F}',\mathcal{L}'))$, \end{enumerate} then there exists $\mathcal{L}''$ such that $\mathcal{L}''$ is strictly conjugated with $\mathcal{L}$ and $S(\mathcal{F},\mathcal{L}'')=S(\mathcal{F}',\mathcal{L}')$. \end{theorem} Here $J^N(S(\mathcal{F},\mathcal{L}))=J^N(S(\mathcal{F}',\mathcal{L}'))$ means that $J^{N}(g_{D,p}(\tilde{\mathcal{F}},\tilde{\mathcal{L}}))=J^{N}(g_{D,p}(\tilde{\mathcal{F}}',\tilde{\mathcal{L}}'))$ for all $g_{D,p}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})$ in $S(\mathcal{F},\mathcal{L})$ and $g_{D,p}(\tilde{\mathcal{F}}',\tilde{\mathcal{L}}')$ in $S(\mathcal{F}',\mathcal{L}')$, where $J^N(g_{D,p}(\tilde{\mathcal{F}},\tilde{\mathcal{L}}))$ stands for the $N$-jet, that is, the part of degree at most $N$ of the Taylor expansion of $g_{D,p}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})$.\\ These two theorems also give two corollaries on the finite determination of the class of isoholonomic non-dicritical foliations and of absolutely dicritical foliations that have the same Dulac maps (see Corollaries \ref{cor17} and \ref{cor19}).
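In concrete terms, if a sliding is written in a local coordinate $t$ as $g(t)=\sum_{k\geq 1}a_kt^k$, then
\begin{equation*}
J^N(g)(t)=\sum_{k=1}^{N}a_kt^k,
\end{equation*}
so hypothesis (ii) only requires that the Taylor coefficients of the corresponding slidings of $\mathcal{F}$ and $\mathcal{F}'$ agree up to degree $N$.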
\\ This paper is organized as follows. In Section 2 the local conjugacy of the pair $(\mathcal{F},\mathcal{L})$ is established. We prove Theorem \ref{thr1} in Section 3. Section 4 is devoted to the proof of Theorem \ref{thr2} and of the two corollaries on the finite determination of the class of isoholonomic non-dicritical foliations and of absolutely dicritical foliations that have the same Dulac maps. \section{Local conjugacy of the pair $(\mathcal{F},\mathcal{L})$} Let $\mathcal{F}$, $\mathcal{F}'$ be two germs of nondegenerate reduced foliations in $(\mathbb{C}^2,0)$. Denote by $S_1$, $S_2$ and $S'_1$, $S'_2$ the separatrices of $\mathcal{F}$ and $\mathcal{F}'$ respectively. Let $\mathcal{L}$ and $\mathcal{L}'$ be two germs of regular foliations such that their separatrices $L$ and $L'$ are transverse to the two separatrices of $\mathcal{F}$ and $\mathcal{F}'$ respectively. If $\Phi$ is a diffeomorphism conjugating $(\mathcal{F},\mathcal{L})$ and $(\mathcal{F}',\mathcal{L}')$, then the restriction of $\Phi$ to the tangent curves commutes with the holonomies of $\mathcal{F}$ and $\mathcal{F}'$ on $T(\mathcal{F},\mathcal{L})$ and $T(\mathcal{F}',\mathcal{L}')$. The converse is also true: \begin{prop}\label{pro8} Suppose that $\mathcal{F}$ and $\mathcal{F}'$ have the same Camacho-Sad index. If $\phi:T(\mathcal{F},\mathcal{L})\rightarrow T(\mathcal{F}',\mathcal{L}')$ is a diffeomorphism commuting with the holonomies of $\mathcal{F}$ and $\mathcal{F}'$ then $\phi$ extends to a diffeomorphism $\Phi$ of $(\mathbb{C}^2,0)$ sending $(\mathcal{F},\mathcal{L})$ to $(\mathcal{F}',\mathcal{L}')$. Moreover, if we require that $\Phi$ sends $S_1$ (resp. $S_2$) to $S'_1$ (resp.
$S'_2$) then this extension is unique. \end{prop} \begin{proof} By Lemma \ref{lem2}, the curves $S_1$, $S_2$, $L$, $T(\mathcal{F},\mathcal{L})$ (resp., $S'_1$, $S'_2$, $L'$, $T(\mathcal{F}',\mathcal{L}')$) are four pairwise transverse smooth curves. It is well known that there exist two radial foliations $\mathcal{R}$ and $\mathcal{R}'$ such that $S_1$, $S_2$, $L$, $T(\mathcal{F},\mathcal{L})$ and $S'_1$, $S'_2$, $L'$, $T(\mathcal{F}',\mathcal{L}')$ are invariant curves of $\mathcal{R}$ and $\mathcal{R}'$ respectively. After a blowing-up at the origin, denote by $p_1$, $p_2$, $p_L$, $p_T$ (resp., $p'_1$, $p'_2$, $p'_L$, $p'_T$) the intersections of the strict transforms of $S_1$, $S_2$, $L$, $T(\mathcal{F},\mathcal{L})$ (resp., $S'_1$, $S'_2$, $L'$, $T(\mathcal{F}',\mathcal{L}')$) with the exceptional divisor $\mathbb{C}P^1$. Take $\phi_1$ in $\mathrm{Aut}(\mathbb{C}P^1)$ sending $p_1$, $p_2$, $p_L$ to $p'_1$, $p'_2$, $p'_L$ respectively. By Lemma \ref{lem2}, the direction of $T(\mathcal{F},\mathcal{L})$ (resp., $T(\mathcal{F}',\mathcal{L}')$) is completely determined by the Camacho-Sad index and the direction of $L$ (resp., $L'$). Therefore $\phi_1(p_T)=p'_T$. Using the path lifting method after a blowing-up \cite{Mat-Mou}, $\phi$ extends to a diffeomorphism $\Phi_1$ of $(\mathbb{C}^2,0)$ sending $(\mathcal{F},\mathcal{R})$ to $(\mathcal{F}',\mathcal{R}')$. Denote by $\mathcal{L}_0=\Phi_{1*}^{-1}(\mathcal{L}')$.
Because $\Phi_1^{-1}$ sends $L'$ and $T(\mathcal{F}',\mathcal{L}')$ to $L$ and $T(\mathcal{F},\mathcal{L})$ respectively, $L$ is also the separatrix of $\mathcal{L}_0$ and $T(\mathcal{F},\mathcal{L}_0)=T(\mathcal{F},\mathcal{L})$. We denote by $T$ the tangent curve $T(\mathcal{F},\mathcal{L}_0)$. The proof reduces to showing that there exists a diffeomorphism fixing the points of $T$ and sending $(\mathcal{F},\mathcal{L})$ to $(\mathcal{F},\mathcal{L}_0)$. Choose a system of coordinates $(x,y)$ such that $\mathcal{L}_0$ is defined by $f_0=x+y$ and $\mathcal{F}$ is defined by a $1$-form $$\omega(x,y)=\lambda y(1+A(x,y))dx+xdy,\; \lambda\not\in\mathbb{Q}_{\leq 0}.$$ Then $T$ is defined by $$\tau(x,y)=x-\lambda y(1+A(x,y))=0.$$ We claim that there exist a natural number $n\geq 2$ and a holomorphic function $h$ such that $\mathcal{L}$ is defined by $$f(x,y)=\left(1+\tau^n(x,y)h(x,y)\right)(x+y).$$ Indeed, assume that $\mathcal{L}$ is defined by $$\bar{f}(x,y)=u(x,y)(x+y),$$ where $u$ is invertible. Rewrite the equation of $T$ as $$x-\bar\tau(y)=0,$$ where $\bar\tau(y)=\lambda y + \ldots$. Because $\lambda\neq -1$, the maps $u(\bar\tau(y),y)\cdot(\bar\tau(y)+y)$ and $\bar\tau(y)+y$ are diffeomorphisms. Hence there exists a diffeomorphism $g\in\mathbb{C}\{y\}$ such that $$g\Bigl(u(\bar\tau(y),y)\cdot(\bar\tau(y)+y)\Bigr)=\bar\tau(y)+y.$$ This is equivalent to \begin{equation*} \bigl(g\circ \bar{f}-(x+y)\bigr)_{| \tau=0}=0. \end{equation*} Therefore, there exist a natural number $n\geq 1$ and a function $h$ satisfying $h_{|\tau=0}\not\equiv 0$ such that $$g\circ \bar{f}(x,y)=\left(1+\tau^n(x,y)h(x,y)\right)(x+y).$$ Because $g$ is a diffeomorphism, $\mathcal{L}$ is also defined by $f=g\circ \bar{f}$. Let us prove that $n\geq 2$.
We have \begin{align*} df\wedge \omega&=\tau(x,y)(\ldots)+ n(x+y)h(x,y)\tau^{n-1}d\tau\wedge\omega \\ & =\tau(x,y)(\ldots)+ n(x+y)h(x,y)\tau^{n-1}(x+\lambda^2y+\ldots)dx\wedge dy. \end{align*} Because the tangent curve $T$ of $\mathcal{F}$ and $\mathcal{L}$ is defined by $\tau(x,y)=0$, we have \begin{equation*} n(x+y)h(x,y)\tau^{n-1}(x+\lambda^2y+\ldots)_{|\tau=0}\equiv 0. \end{equation*} The fact that $\lambda\neq 0,-1$ forces $x+\lambda^2y\neq x-\lambda y$ and $x+y\neq x-\lambda y$. This implies $(\tau^{n-1})_{|\tau=0}\equiv 0$. Consequently, $n\geq 2$. Now let $$X=x\frac{\partial}{\partial x}- \lambda y(1+A(x,y)) \frac{\partial}{\partial y}$$ be a vector field tangent to $\mathcal{F}$. We will show that there exists $\alpha\in\mathbb{C}\{x,y\}$ such that the diffeomorphism $\exp[\tau^{n-1}\alpha]X$ satisfies \begin{equation}\label{eq1} (x+y)\circ\exp[\tau^{n-1}\alpha]X(x,y)=\sum_{i\geq 0}\frac{\tau^{i(n-1)}\alpha^i}{i!}\mathrm{ad}^i_{X}(x+y)=f(x,y), \end{equation} where $\mathrm{ad}_X$ is the adjoint representation. Since $$\sum_{i\geq 0}\frac{\tau^{i(n-1)}\alpha^i}{i!}\mathrm{ad}^i_{X}(x+y)=x+y+\tau^n\alpha + \frac{n}{2}\tau^{2n-2}\alpha^2X(\tau)+\tau^{2n-1}(\ldots),$$ \eqref{eq1} becomes $$\alpha + \frac{n}{2}\tau^{n-2}\alpha^2X(\tau)+\tau^{n-1}(\ldots)=(x+y)h(x,y).$$ Hence the existence of $\alpha$ follows from the implicit function theorem.\\ Now we prove the uniqueness of $\Phi$. In fact, we only need to show that if a diffeomorphism $\Psi$ sends $(\mathcal{F},\mathcal{L}_0)$ to itself, preserves the two separatrices of $\mathcal{F}$ and fixes the points of $T$, then $\Psi=\mathrm{Id}$. Since $\Psi_{|T}=\mathrm{Id}$, $\Psi$ sends every leaf of $\mathcal{F}$ into itself.
By \cite{Ber-Cer-Mez}, there exists $\beta\in\mathbb{C}\{x,y\}$ such that $$\Psi=\exp[\beta]X.$$ Because $\mathcal{L}_0$ is defined by the function $x+y$ and $\Psi$ fixes the points of $T$, we get \begin{equation}\label{eq2} (x+y)\circ\exp[\beta]X=x+y. \end{equation} Decompose $\beta$ into homogeneous terms $$\beta=\beta_0+\beta_1+\beta_2+\ldots=\beta_0+\bar{\beta}.$$ Since $\mathrm{ad}^i_X(x)=x$ and $\mathrm{ad}^i_X(y)=(-\lambda)^iy+c_i$ for all $i$, where $c_i\in(x,y)^2$, we have \begin{align*}(x+y)\circ\exp[\beta]X&=\sum_{i=0}^{\infty}\frac{\beta_0^i}{i!}x+\sum_{i=0}^{\infty}\frac{\beta_0^i}{i!}((-\lambda)^iy)+h.o.t.\\ &=\exp(\beta_0)x+\exp(-\lambda\beta_0)y+h.o.t. \end{align*} So \eqref{eq2} leads to $$\exp(\beta_0)=\exp(-\lambda\beta_0)=1.$$ Hence, \begin{align} x\circ\exp[\beta_0]X&=\sum_{i=0}^{\infty}\frac{\beta_0^i}{i!}x=\exp(\beta_0) x=x, \label{5}\\ y\circ\exp[\beta_0]X&=\sum_{i=0}^{\infty}\frac{\beta_0^i}{i!}\left((-\lambda)^i y+c_i\right)=\exp(-\lambda \beta_0) y+c=y+c,\label{6} \end{align} where $c\in(x,y)^2$. We claim that \begin{equation}\label{7} \exp[\beta]X=\exp[\beta_0]X\circ\exp[\bar\beta]X. \end{equation} Indeed, for any $h\in\mathbb{C}\{x,y\}$ we have \begin{align*} h\circ\exp[\beta_0]X\circ\exp[\bar\beta]X &=\left(\sum_{i=0}^{\infty}\frac{\beta_0^i}{i!}\mathrm{ad}^i_X(h)\right)\circ\exp[\bar\beta]X =\sum_{j=0}^{\infty}\frac{\bar\beta^j}{j!}\mathrm{ad}^j_X\left(\sum_{i=0}^{\infty}\frac{\beta_0^i}{i!}\mathrm{ad}^i_X(h)\right)\\ &=\sum_{k=0}^{\infty}\sum_{i+j=k}\frac{\bar\beta^j \beta_0^i}{j!i!}\mathrm{ad}^k_X(h)=\sum_{k=0}^{\infty} \frac{(\bar\beta+\beta_0)^k}{k!}\mathrm{ad}^k_X(h)=h\circ\exp[\beta]X. \end{align*} We write $\mathrm{ad}^i_X(y+c)=(-\lambda)^iy+d_i$, where $d_i\in(x,y)^2$.
By \eqref{5}, \eqref{6}, \eqref{7} we get \begin{align} (x+y)\circ\exp[\beta]X&=x\circ\exp[\beta_0]X\circ \exp[\bar\beta]X+ y\circ\exp[\beta_0]X\circ\exp[\bar\beta]X\nonumber\\ &=x\circ\exp[\bar\beta]X+ (y+c)\circ\exp[\bar\beta]X\nonumber\\ &=\sum_{i=0}^{\infty}\frac{\bar\beta^i}{i!}x+\sum_{i=0}^{\infty}\frac{\bar\beta^i}{i!}\mathrm{ad}^i_X(y+c)\nonumber\\ &= \exp(\bar\beta)x+\exp(-\lambda\bar{\beta})y+ \sum_{i=0}^{\infty}\frac{\bar\beta^i}{i!}d_i\nonumber\\ &=x\prod_{i=1}^{\infty}\exp(\beta_i)+ y\prod_{i=1}^{\infty}\exp(-\lambda\beta_i)+ \sum_{i=0}^{\infty}\frac{\bar\beta^i}{i!}d_i.\label{8} \end{align} We will prove that $\bar{\beta}=0$ by induction. From \eqref{8} we have $$(x+y)\circ\exp[\beta]X=x(1+\beta_1)+ y(1-\lambda\beta_1)+h.o.t.$$ So \eqref{eq2} forces $\beta_1=0$. Suppose that $\beta_1=\ldots=\beta_{k-1}=0$; then $$(x+y)\circ\exp[\beta]X=x(1+\beta_k)+ y(1-\lambda\beta_k)+h.o.t.$$ Then \eqref{eq2} again leads to $\beta_k=0$ and consequently $\beta=\beta_0$. This implies that $$\Psi=\exp[\beta_0]X=(x,y+c).$$ Finally, \eqref{eq2} again gives $c=0$. So $\Psi=\mathrm{Id}$. \end{proof} \begin{coro}\label{cor11} Suppose that $\mathcal{F}$ and $\mathcal{F}'$ are two nondegenerate reduced foliations that are analytically conjugated. Let $\mathcal{L}$ and $\mathcal{L}'$ be two regular foliations that are transverse to the two separatrices of $\mathcal{F}$ and $\mathcal{F}'$ respectively. Then there exists a diffeomorphism that sends $(\mathcal{F},\mathcal{L})$ to $(\mathcal{F}',\mathcal{L}')$. \end{coro} \begin{proof} Let $\Psi$ be a conjugacy between $\mathcal{F}$ and $\mathcal{F}'$. Denote by $T'=\Psi(T(\mathcal{F},\mathcal{L}))$.
Then the restriction $\Psi_{|T(\mathcal{F},\mathcal{L})}$ commutes with the holonomies of $\mathcal{F}$ on $T(\mathcal{F},\mathcal{L})$ and of $\mathcal{F}'$ on $T'$. Moreover, by the holonomy transport, the holonomies of $\mathcal{F}'$ on $T'$ and on $T(\mathcal{F}',\mathcal{L}')$ are conjugated. Hence, the holonomies of $\mathcal{F}$ on $T(\mathcal{F},\mathcal{L})$ and of $\mathcal{F}'$ on $T(\mathcal{F}',\mathcal{L}')$ are conjugated. By Proposition \ref{pro8} there exists a diffeomorphism that sends $(\mathcal{F},\mathcal{L})$ to $(\mathcal{F}',\mathcal{L}')$. \end{proof} By projecting on $S_1$ and $S_2$ the holonomies defined on $T(\mathcal{F},\mathcal{L})$ and $T(\mathcal{F}',\mathcal{L}')$ respectively, we obtain \begin{coro}\label{cor3} If $\Phi$ is a diffeomorphism conjugating $(\mathcal{F},\mathcal{L})$ and $(\mathcal{F}',\mathcal{L}')$, then $$\Phi_{|S_1}\circ g_{S_1}(\mathcal{F},\mathcal{L})=g_{S'_1}(\mathcal{F}',\mathcal{L}')\circ\Phi_{|S_1}.$$ Conversely, if $\mathrm{CS}(\mathcal{F})=\mathrm{CS}(\mathcal{F}')$ and $\phi:S_1\rightarrow S'_1$ is a diffeomorphism satisfying $$\phi\circ g_{S_1}(\mathcal{F},\mathcal{L})=g_{S'_1}(\mathcal{F}',\mathcal{L}')\circ\phi,$$ then $\phi$ uniquely extends to a diffeomorphism $\Phi$ of $(\mathbb{C}^2,0)$ sending $(\mathcal{F},\mathcal{L})$ to $(\mathcal{F}',\mathcal{L}')$. \end{coro} \begin{proof} Because $\Phi$ conjugates $(\mathcal{F},\mathcal{L})$ and $(\mathcal{F}',\mathcal{L}')$, the restriction $\Phi_{|T(\mathcal{F},\mathcal{L})}$ commutes with the holonomies $h_\gamma$ and $h'_\gamma$ of $\mathcal{F}$ and $\mathcal{F}'$. 
Denote by $\pi_{S_1}$ (resp., $\pi_{S'_1}$) the projection along the leaves of $\mathcal{L}$ (resp., $\mathcal{L}'$) from $T(\mathcal{F},\mathcal{L})$ (resp., $T(\mathcal{F}',\mathcal{L}')$) to $S_1$ (resp., $S'_1$). Since $\Phi$ sends $(\mathcal{F},\mathcal{L})$ to $(\mathcal{F}',\mathcal{L}')$, we have \begin{equation*} \pi_{S'_1}\circ\Phi_{|T(\mathcal{F},\mathcal{L})}=\Phi_{|S_1}\circ\pi_{S_1}. \end{equation*} Therefore \begin{align*} \Phi_{|S_1}\circ g_{S_1}(\mathcal{F},\mathcal{L})&=\Phi_{|S_1}\circ\pi_{S_1}\circ h_\gamma\circ\pi_{S_1}^{-1}=\pi_{S'_1}\circ\Phi_{|T(\mathcal{F},\mathcal{L})}\circ h_\gamma\circ\pi_{S_1}^{-1}\\ &=\pi_{S'_1}\circ h'_\gamma\circ\Phi_{|T(\mathcal{F},\mathcal{L})}\circ\pi_{S_1}^{-1}=g_{S'_1}(\mathcal{F}',\mathcal{L}')\circ\pi_{S'_1}\circ\Phi_{|T(\mathcal{F},\mathcal{L})}\circ\pi_{S_1}^{-1}\\ &=g_{S'_1}(\mathcal{F}',\mathcal{L}')\circ\Phi_{|S_1}. \end{align*} Conversely, suppose $\phi:S_1\rightarrow S'_1$ is a diffeomorphism commuting with the slidings of $\mathcal{F}$ and $\mathcal{F}'$. Denote $\psi=\pi^{-1}_{S'_1} \circ\phi\circ\pi_{S_1}$; then \begin{align*} \psi\circ h_\gamma&= \pi^{-1}_{S'_1} \circ\phi\circ\pi_{S_1}\circ h_\gamma= \pi^{-1}_{S'_1} \circ\phi\circ g_{S_1}(\mathcal{F},\mathcal{L})\circ\pi_{S_1}\\&= \pi^{-1}_{S'_1} \circ g_{S'_1}(\mathcal{F}',\mathcal{L}')\circ\phi\circ\pi_{S_1}= h'_\gamma\circ \pi^{-1}_{S'_1} \circ\phi\circ\pi_{S_1}=h'_\gamma\circ\psi. \end{align*} By Proposition \ref{pro8}, $\psi$ uniquely extends to a diffeomorphism $\Phi$ that sends $(\mathcal{F},\mathcal{L})$ to $(\mathcal{F}',\mathcal{L}')$. 
\end{proof} \begin{rema} In particular, if in Corollary \ref{cor3} we have $S_1=S'_1$ and $g_{S_1}(\mathcal{F},\mathcal{L})=g_{S'_1}(\mathcal{F}',\mathcal{L}')$, then there exists a diffeomorphism sending $(\mathcal{F},\mathcal{L})$ to $(\mathcal{F}',\mathcal{L}')$ and fixing points in $S_1$. \end{rema} \section{Strict classification of foliations in $\mathcal{M}$}\label{sec3} This whole section is devoted to the proof of Theorem \ref{thr1}. \begin{proof}[Proof of Theorem \ref{thr1}] The direction ((ii)$\Rightarrow$(iii)) is obvious.\\ ((i)$\Rightarrow$(ii)) Since the Camacho-Sad index is an analytic invariant, it is obvious that $\mathrm{CS}(\tilde{\mathcal{F}})=\mathrm{CS}(\tilde{\mathcal{F}}')$. Let $\Phi$ be the strict conjugacy and $\tilde{\Phi}:(\mathcal{M},\mathcal{D})\rightarrow (\mathcal{M},\mathcal{D})$ be its lifting by $\sigma$. Suppose that a non-corner point $m$ of $\mathcal{D}$ is a fixed point of $\tilde{\Phi}$. Then the linear map $D\tilde{\Phi}(m)$ has two eigenvalues. One corresponds to the direction of the divisor. We denote by $v(\tilde{\Phi})(m)$ the other eigenvalue, and we define $v(\tilde{\Phi})(m)=1$ for each corner $m$. \begin{lemma}\label{lem9} $\tilde{\Phi}_{|\mathcal{D}}=\mathrm{Id}$, so $v(\tilde{\Phi})$ is a function defined on $\mathcal{D}$; moreover $v(\tilde{\Phi})\equiv 1$. \end{lemma} \begin{proof} Denote by $\sigma_1$ the standard blowing-up at the origin of $(\mathbb{C}^2,0)$, $$\sigma_1:(\mathcal{M}_1,D_1)\rightarrow(\mathbb{C}^2,0).$$ On $D_1$, we use the two standard charts $(x,\bar{y})$ and $(\bar{x}, y)$ together with the transition functions $\bar{x}=\bar{y}^{-1}$, $y=x\bar{y}$. 
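In the chart $(x,\bar{y})$ the blowing-up therefore takes the explicit form $$\sigma_1(x,\bar{y})=(x,x\bar{y}),$$ which is the substitution used in the computation of $\Phi_1$ below.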
Suppose that $$\Phi(x,y)=(x+\alpha(x,y), y+\beta(x,y)),\;\alpha,\beta\in(x,y)^2.$$ Then in the coordinate system $(x,\bar{y})$ we have \begin{align*}\Phi_1(x,\bar{y})=\sigma_1^*\Phi(x,\bar{y}) &=\left(x+\alpha(x,x\bar{y}),\frac{x\bar{y}+\beta(x,x\bar{y})}{x+\alpha(x,x\bar{y})}\right)\\&=(x(1+\ldots), \bar{y}+x(\beta_0+\ldots)), \end{align*} where $\beta_0=\frac{\partial^2\beta}{\partial x^2}(0,0)$. Therefore $\Phi_1:(\mathcal{M}_1,D_1)\rightarrow(\mathcal{M}_1,D_1)$ fixes points in $D_1$ and $v(\Phi_1)\equiv 1$. Let $p$ be a non-reduced singularity of $\sigma_1^*\mathcal{F}$ on $D_1$. We will show that $D\Phi_1(p)=\mathrm{Id}$ and apply the inductive hypothesis for $\Phi_1$ in a neighborhood of $p$. Indeed, let $\sigma_2$ be the blowing-up at $p$ and $D_2=\sigma_2^{-1}(p)$. Denote by $S_p$ and $S'_p$ the unions of all invariant curves of $\sigma_1^*\mathcal{F}$ and $\sigma_1^*\mathcal{F}'$ through $p$. Because every element in $\mathcal{M}$ after desingularization admits no dead component in its exceptional divisor, $D_2$ is not a dead component. Therefore there is at least one irreducible component $\ell_p$ of $S_p$ that is not tangent to $D_1$. Because $\Phi_{1*}(S_p)=S'_p$ and $\mathrm{Sep}(\tilde{\mathcal{F}})\mathbin{\!/\mkern-5mu/\!}\mathrm{Sep}(\tilde{\mathcal{F}}')$, $D\Phi_1(p)$ has an eigenvector different from the direction of $D_1$, namely the one corresponding to the direction of $\ell_p$. So $D\Phi_1(p)$ has two eigenvectors. Since both of their eigenvalues are $1$, we have $D\Phi_1(p)=\mathrm{Id}$. \end{proof} Now let $\mathcal{L}\in\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$ and denote by $\mathcal{L}'=\Phi_*(\mathcal{L})$. Since $\tilde{\Phi}_{|\mathcal{D}}=\mathrm{Id}$, the strict transforms $\tilde{\mathcal{L}}$ and $\tilde{\mathcal{L}}'$ have the same Dulac maps. 
Moreover, because $\mathrm{Sep}(\tilde{\mathcal{F}})\mathbin{\!/\mkern-5mu/\!}\mathrm{Sep}(\tilde{\mathcal{F}}')$, at each singularity $p$ of $\tilde{\mathcal{F}}$, $D\tilde{\Phi}(p)$ has two eigenvectors. As $v(\tilde{\Phi})\equiv 1$ and $\tilde{\Phi}_{|\mathcal{D}}=\mathrm{Id}$, we have $D\tilde{\Phi}(p)=\mathrm{Id}$. Therefore the invariant curves of $\tilde{\mathcal{L}}$ and $\tilde{\mathcal{L}}'$ through $p$ are tangent. This gives $\mathcal{L}'=\Phi_*(\mathcal{L})\in\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$. Because $\tilde{\Phi}$ fixes points in $\mathcal{D}$, by Corollary \ref{cor3} the identity map commutes with the slidings of $\mathcal{F}$ and $\mathcal{F}'$. This leads to $S(\mathcal{F},\mathcal{L})=S(\mathcal{F}',\mathcal{L}')$. Consequently, $\mathfrak{S}(\mathcal{F},\mathcal{L}_0)=\mathfrak{S}(\mathcal{F}',\mathcal{L}_0)$. Moreover, the vanishing holonomy representations of $\mathcal{F}$ and $\mathcal{F}'$ are conjugated by $\tilde{\Phi}$. Since $v(\tilde{\Phi})\equiv 1$, this conjugacy is strict.\\ ((iii)$\Rightarrow$(i)) Suppose that $\mathcal{L}$, $\mathcal{L}'\in\mathcal{R}_{\mathcal{F}}(\mathcal{L}_0)$ satisfy $S(\mathcal{F},\mathcal{L})=S(\mathcal{F}',\mathcal{L}')$. By Corollary \ref{cor3}, at each singularity $p_i$, $i\in\{1,\ldots,k\}$, of $\tilde{\mathcal{F}}$ there exist a neighborhood $U_i$ of $p_i$ and a local conjugacy $$\Phi_i:(\tilde{\mathcal{F}},\tilde{\mathcal{L}})_{|U_i}\rightarrow (\tilde{\mathcal{F}}',\tilde{\mathcal{L}}')_{|U_i}$$ such that $\Phi_{i|\mathcal{D}\cap U_i}=\mathrm{Id}$. Let $U_0$ be a neighborhood of $\mathcal{D}\setminus\cup_{i=1}^k U_i$ such that $U_0$ does not contain any singularity of $\tilde{\mathcal{F}}$. 
Note that $U_0$ is not connected and the restrictions of $\tilde{\mathcal{F}}$ and $\tilde{\mathcal{F}}'$ to $U_0$ are regular. The strict conjugacy of the vanishing holonomy representations can be extended, by the path lifting method, to a conjugacy $$\Phi_0:(\tilde{\mathcal{F}},\tilde{\mathcal{L}})_{|U_0}\rightarrow (\tilde{\mathcal{F}}',\tilde{\mathcal{L}}')_{|U_0}$$ such that the second eigenvalue function $v(\Phi_0)$ is identically $1$. We will show that on each intersection $V_i=U_i\cap U_0$, $\Phi_i$ and $\Phi_0$ coincide. Denote by $$\Psi_i=\Phi^{-1}_{i| V_i}\circ \Phi_{0| V_i}:(\tilde{\mathcal{F}},\tilde{\mathcal{L}})_{|V_i}\rightarrow (\tilde{\mathcal{F}},\tilde{\mathcal{L}})_{|V_i}.$$ We claim that $v(\Psi_i)\equiv 1$ on $\mathcal{D}\cap V_i$. Let $p,q$ be in $V_i\cap\mathcal{D}$. Denote by $l_p$ and $l_q$ the invariant curves of $\tilde{\mathcal{L}}$ through $p$ and $q$ respectively. As the two maps $\Psi_{i|l_p}$ and $\Psi_{i|l_q}$ are conjugated by the holonomy transport, we have $v(\Psi_i)(p)=v(\Psi_i)(q)$. Consequently, $v(\Psi_i)$ is constant on $V_i\cap\mathcal{D}$. Since $v(\Phi_0)\equiv 1$, it follows that $v(\Phi_i)$ is constant on $V_i\cap\mathcal{D}$. Therefore, $v(\Phi_i)$ is constant on $U_i\cap\mathcal{D}$. Moreover, at the singularity $p_i$, $D\Phi_i(p_i)$ has three eigenvectors, corresponding to the directions of the divisor and the directions of the invariant curves of $\tilde{\mathcal{F}}$ and $\tilde{\mathcal{L}}$ through $p_i$. Since $D\Phi_i(p_i)$ also has one eigenvalue $1$, corresponding to the direction of the divisor, we have $D\Phi_i(p_i)=\mathrm{Id}$. This gives $v(\Phi_i)\equiv 1$ and consequently $v(\Psi_i)\equiv 1$. Now at each point $p \in V_i\cap\mathcal{D}$, the map $\Psi_{i|l_p}$ commutes with the holonomy of $\tilde{\mathcal{F}}$ around $p_i$. 
Since the Camacho-Sad index $\lambda_i$ of $\tilde{\mathcal{F}}$ at $p_i$ is not rational, Lemma \ref{lem10} below says that $\Psi_{i|l_p}=\mathrm{Id}$, and so $\Psi_i=\mathrm{Id}$. Hence we can glue all the $\Phi_i$ together, and the strict conjugacy we need is the projection of this diffeomorphism on $(\mathbb{C}^2,0)$ by $\sigma$. \end{proof} \begin{lemma}\label{lem10} Let $h\in\mathrm{Diff}(\mathbb{C},0)$ be such that $h'(0)=\exp(2\pi i \lambda)$, where $\lambda\not\in\mathbb{Q}$. If $\psi\in\mathrm{Diff}(\mathbb{C},0)$ satisfies $\psi'(0)=1$ and $\psi\circ h=h\circ \psi$, then $\psi=\mathrm{Id}$. \end{lemma} \begin{proof} Since $\lambda\not\in\mathbb{Q}$, there is a formal diffeomorphism $\phi$ such that $\phi\circ h\circ\phi^{-1}(z)=\exp(2\pi i\lambda)z$. Denote by $\tilde{\psi}=\phi\circ \psi\circ\phi^{-1}$; then $\tilde{\psi}'(0)=1$ and $\tilde{\psi}(\exp(2\pi i\lambda)z)=\exp(2\pi i\lambda)\tilde{\psi}(z)$. The proof reduces to showing that $\tilde{\psi}=\mathrm{Id}$. Suppose that $\tilde{\psi}(z)=z+\sum_{j=2}^{\infty}a_jz^j$. Then $$\tilde{\psi}(\exp(2\pi i\lambda)z)=\exp(2\pi i\lambda)z+\sum_{j=2}^{\infty}a_j\exp(2j\pi i\lambda)z^j,$$ and $$\exp(2\pi i\lambda)\tilde{\psi}(z)=\exp(2\pi i\lambda)z+\sum_{j=2}^{\infty}a_j\exp(2\pi i\lambda)z^j.$$ Since $\lambda\not\in\mathbb{Q}$, we have $\exp(2j\pi i\lambda)\neq\exp(2\pi i\lambda)$ for all $j\ge 2$, which forces $a_j=0$ for all $j\ge 2$. Hence $\tilde{\psi}=\mathrm{Id}$. \end{proof} \section{Finite determinacy}\label{sec4} Let $S$ be a germ of curve at $p$ in a surface $X$. We denote by $\Sigma(S)$ the set of all germs of singular curves having the same desingularization map and having the same singularities as $S$ after desingularization. 
Here, the singularities of $S$ after desingularization are the singularities of the curve defined by the union of the strict transform of $S$ and the exceptional divisor. If $S$ is smooth, we denote by $\mathfrak{m}^n(S)$ the set of all holomorphic functions on $S$ whose vanishing orders at $p$ are at least $n$. \begin{prop}\label{pro11} Let $S$ be a germ of curve in $(\mathbb{C}^2,0)$ and $S_1,\ldots,S_k$ be its irreducible components. Suppose that $\sigma:(\mathcal{M},\mathcal{D})\rightarrow(\mathbb{C}^2,0)$ is a finite composition of blowing-ups such that all the transformed curves $\sigma^*S_i=\tilde{S}_i$ are smooth. Then there exists a natural number $N$ such that if $f_i\in\mathfrak{m}^N(\tilde{S}_i)$, $i=1,\ldots,k$, then there exists $F\in\mathbb{C}\{x,y\}$ such that $F\circ\sigma_{|\tilde{S}_i}=f_i$. Moreover, the same $N$ can be chosen for all elements in $\Sigma(S)$. \end{prop} \begin{proof} We first consider the statement when $S$ is irreducible. If $S$ is smooth then $\tilde{S}$ is diffeomorphic to $S$. So we can suppose that $S$ is singular. Denote by $p=\tilde{S}\cap\mathcal{D}$. Choose a coordinate system $(x_p,y_p)$ in a neighborhood of $p$ such that $\tilde{S}=\{y_p=0\}$ and $\mathcal{D}=\{x_p=0\}$. Then $\sigma^{-1}$ is defined by $$x_p=\frac{\alpha(x,y)}{\beta(x,y)}\;\; \text{and}\;\; y_p=\frac{\mu(x,y)}{\nu(x,y)},$$ where $\alpha,\beta,\mu,\nu\in\mathbb{C}\{x,y\}$, $\mathrm{gcd}(\alpha,\beta)=1$ and $\mathrm{gcd}(\mu,\nu)=1$. So we have \begin{equation} \frac{\alpha\circ\sigma(x_p,y_p)}{\beta\circ\sigma(x_p,y_p)}=x_p. \end{equation} Therefore, there exist a natural number $k$ and a holomorphic function $h$ such that \begin{equation}\label{equa11} \alpha\circ\sigma(x_p,y_p)=x_p^{k+1}h(x_p,y_p),\,\,\beta\circ\sigma(x_p,y_p)=x_p^{k}h(x_p,y_p), \end{equation} where $x_p\nmid h$. We claim that $h$ is a unit. 
Indeed, suppose $h(0,0)=0$ and denote by $\tilde{L}$ the curve $\{h(x_p,y_p)=0\}$. Let $L$ be a curve defined in $(\mathbb{C}^2,0)$ such that $\sigma^*(L)=\tilde{L}$. Let $\{\bar{h}(x,y)=0\}$ be a reduced equation of $L$. By \eqref{equa11}, $\bar{h}|\alpha$ and $\bar{h}|\beta$. This contradicts $\mathrm{gcd}(\alpha,\beta)=1$. Now denote by $u(x_p)=h(x_p,0)$, which is a unit; we have \begin{equation}\label{11} \alpha\circ\sigma(x_p,0)=u(x_p)x_p^{k+1},\,\beta\circ\sigma(x_p,0)=u(x_p)x_p^{k}. \end{equation} For each $m\ge (k-1)(k+1)$ there exists $j\in\{0,\ldots,k-1\}$ such that $k|(m-j(k+1))$. Thus \begin{equation*} m=ik+j(k+1),\, i,j\in\mathbb{N} \end{equation*} (for instance, for $k=3$ every $m\ge 8$ can be written as $3i+4j$ with $0\le j\le 2$). So \eqref{11} implies that if a holomorphic function $f(x_p)$ satisfies $x_p^{(k-1)(k+1)}|f(x_p)$ then there exists a holomorphic function $F(x,y)$ such that $F\circ\sigma(x_p,0)=f(x_p)$. Consequently, \begin{equation}\label{equation13} \mathfrak{m}^{(k-1)(k+1)}(\tilde{S})\subset\sigma^*\mathbb{C}\{x,y\}_{|\tilde{S}}. \end{equation} In the general case, suppose that $S_i$ is defined by $\{g_i=0\}$. If $f_i\in\mathfrak{m}^{N}(\tilde{S}_i)$, $i=1,\ldots,k$, with $N$ big enough, there exist $F_i$, $i=1,\ldots,k$, such that $F_i\circ\sigma_{|\tilde{S}_i}=f_i$. We will find a holomorphic function $F$ such that $F_{|S_i}=F_{i|S_i}$ for all $i=1,\ldots,k$. This reduces to showing that there exists a natural number $M$ such that the following morphism $\Theta$ is surjective: \begin{equation*} \frac{(x,y)^M}{{(g_1)\cap\ldots\cap (g_k)\cap (x,y)^M}}\rightarrow\frac{(x,y)^M}{{(g_1)\cap (x,y)^M}}\oplus\ldots\oplus\frac{(x,y)^M}{{(g_k)\cap (x,y)^M}}. \end{equation*} Indeed, by Hilbert's Nullstellensatz, there exists a natural number $M_1$ such that \begin{equation}\label{eq7} (x,y)^{M_1}\subset (g_i,g_j) \end{equation} for all $1\leq i < j \leq k$. 
We will show that for all $i=1,\ldots,k$, $j=0,\ldots,(k-1)M_1$, the elements $e_{ij}=(0,\ldots,\overline{x^jy^{(k-1)M_1-j}},\ldots,0)$, where $\overline{x^jy^{(k-1)M_1-j}}$ is in the $i^{th}$ position, are in $\mathrm{Im}\,\Theta$; then $M$ can be chosen as $(k-1)M_1$. We decompose $$x^jy^{(k-1)M_1-j}=\prod_{\substack{l=1,\ldots,k\\ l\neq i}}x^{j_l}y^{M_1-j_l},$$ where $0\le j_l\le M_1$. By \eqref{eq7}, there exist $a_{il},b_{il}\in\mathbb{C}\{x,y\}$ such that $a_{il}g_i+b_{il}g_l=x^{j_l}y^{M_1-j_l}$. This implies that \begin{equation*} e_{ij}=\Theta\left(\prod_{\substack{l=1,\ldots,k\\ l\neq i}} \left(x^{j_l}y^{M_1-j_l}-a_{il}g_i\right)\right)\in\mathrm{Im}\,\Theta. \end{equation*} Now we will show that the same $N$ can be chosen for all elements of $\Sigma(S)$. In the case where $S$ is irreducible, let $S'\in\Sigma(S)$ and let $\{y_p=s(x_p)\}$ be the equation of $\sigma^*(S')=\tilde{S}'$. We also have \begin{equation*} \alpha\circ\sigma(x_p,s(x_p))=v(x_p)x_p^{k+1},\,\beta\circ\sigma(x_p,s(x_p))=v(x_p)x_p^{k}, \end{equation*} where $v(x_p)=h(x_p,s(x_p))$, which is a unit. Consequently, \eqref{equation13} holds. In the general case, it is sufficient to show that the same $M_1$ in \eqref{eq7} can be chosen for all elements of $\Sigma(S)$. Let $M_{ij}$ be the smallest natural number satisfying \begin{equation*} (x,y)^{M_{ij}}\subset (g_i,g_j). \end{equation*} We claim that $$M_{ij}\le \mathrm{I}(g_i,g_j)=\mathrm{dim}_{\mathbb{C}}\frac{\mathbb{C}\{x,y\}}{(g_i,g_j)}.$$ Indeed, there exists $x^{l}y^{M_{ij}-1-l}\not\in(g_i,g_j)$. Let $P_m$, $m=1,\ldots,M_{ij}$, be a sequence of monomials such that $P_1=1$, $P_{M_{ij}}=x^{l}y^{M_{ij}-1-l}$, and either $P_{m+1}=x\cdot P_m$ or $P_{m+1}=y\cdot P_m$. Since $P_m|P_{M_{ij}}$, we have $P_m\not\in(g_i,g_j)$ for all $m=1,\ldots, M_{ij}$. 
We will show that $\{P_1,\ldots, P_{M_{ij}}\}$ is linearly independent in the vector space $\frac{\mathbb{C}\{x,y\}}{(g_i,g_j)}$ over $\mathbb{C}$. Suppose that $$c_1P_1+\ldots+c_{M_{ij}}P_{M_{ij}}\in(g_i,g_j)$$ and that there exists $c_m\ne 0$. Let $m_0$ be the smallest index such that $c_{m_0}\ne 0$. Then $$c_{m_0}P_{m_0}+\ldots+c_{M_{ij}}P_{M_{ij}}=P_{m_0}(c_{m_0}+\ldots)\in(g_i,g_j).$$ This implies that $P_{m_0}\in(g_i,g_j)$, which is a contradiction. Now, it is well known that the intersection number $\mathrm{I}(g_i,g_j)$ is a topological invariant: if two curves $\{g_i\cdot g_j=0\}$ and $\{g'_i\cdot g'_j=0\}$ are topologically conjugated, then $\mathrm{I}(g_i,g_j)=\mathrm{I}(g'_i,g'_j)$. Consequently, $M_1$ can be chosen as $\max_{1\le i<j\le k}\mathrm{I}(g_i,g_j)$, which does not depend on the element of $\Sigma(S)$. \end{proof} Now, we will prove the finite determinacy property of the slidings of foliations. \begin{proof}[Proof of Theorem \ref{thr2}] Suppose that $\tilde{T}(\mathcal{F},\mathcal{L})=\cup T_i$ and $\tilde{T}'(\mathcal{F}',\mathcal{L}')=\cup T'_i$, where $T_i$ and $T'_i$ are the strict transforms of the irreducible components of $T(\mathcal{F},\mathcal{L})$ and $T(\mathcal{F}',\mathcal{L}')$. Then the singularities of $\tilde{\mathcal{F}}$ and $\tilde{\mathcal{F}}'$ are $p_i=T_i\cap\mathcal{D}=T'_i\cap\mathcal{D}$. Denote by $h_{i\gamma}$ the holonomy of $\tilde{\mathcal{F}}$ on $T_i$. Now let $p_i$ be a singularity of $\tilde{\mathcal{F}}$. We first suppose that $p_i$ is not a corner. Denote by $D$ the irreducible component of $\mathcal{D}$ through $p_i$. 
Because $\tilde{\mathcal{F}}$ and $\tilde{\mathcal{F}}'$ are strictly conjugated in a neighborhood of $p_i$, by Corollaries \ref{cor11} and \ref{cor3}, there exists a diffeomorphism $\psi_i\in\mathrm{Diff}(D,p_i)$ tangent to the identity such that \begin{equation}\label{equa13} \psi_i\circ g_{D,p_i}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})=g_{D,p_i}(\tilde{\mathcal{F}}',\tilde{\mathcal{L}}')\circ\psi_i. \end{equation} Let $\pi_D$ (resp., $\pi'_D$) be the projection from $T_i$ (resp., $T'_i$) to $D$ that follows the leaves of $\tilde{\mathcal{L}}$ (resp., $\tilde{\mathcal{L}}'$). Denote by \begin{equation}\label{e13} \phi_i=\pi_D^{-1}\circ\psi_i^{-1}\circ\pi_D. \end{equation} Then $\phi_i\in\mathrm{Diff}(T_i,p_i)$. Since $J^{N}(S(\mathcal{F},\mathcal{L}))=J^{N}(S(\mathcal{F}',\mathcal{L}'))$, $\phi_i$ is tangent to the identity at order at least $N$. In the case where $p_i$ is a corner, let $D$ be one of the two irreducible components of $\mathcal{D}$ through $p_i$ and define $\phi_i$ as above. We also have that $\phi_i$ is tangent to the identity at order at least $N$. \begin{lemma}\label{lem17} Suppose that there exists a diffeomorphism $\Phi$ such that the lifting $\sigma^*(\Phi)=\tilde{\Phi}$ satisfies \begin{itemize} \item{} $\tilde{\Phi}_{|\mathcal{D}}=\mathrm{Id}$, \item{} $\tilde{\Phi}_{|T_i}=\phi_i$, \item{} $T(\mathcal{F},\Phi_*\mathcal{L})=T(\mathcal{F},\mathcal{L})$. \end{itemize} Then $\mathcal{L}''=\Phi_*(\mathcal{L})$ satisfies $S(\mathcal{F},\mathcal{L}'')=S(\mathcal{F}',\mathcal{L}')$. \end{lemma} \begin{proof} Let $p_i$ be a singularity of $\tilde{\mathcal{F}}$. In the case where $p_i$ is not a corner, we denote $D$, $\pi_D$, $\pi'_D$ as above. 
Let $\pi''_D$ be the projection following the leaves of $\tilde{\mathcal{L}}''$ from $T_i$ to $D$; then $$\pi''_D=\pi_D\circ\phi_i^{-1}.$$ We have \begin{align*} g_{D,p_i}(\tilde{\mathcal{F}},\tilde{\mathcal{L}}'')&=\pi''_D\circ h_{i\gamma} \circ\pi''^{-1}_D=\pi_D\circ\phi_i^{-1}\circ h_{i\gamma}\circ\phi_i\circ\pi_D^{-1}\\&= \psi_i\circ\pi_D\circ h_{i\gamma}\circ\pi_D^{-1}\circ\psi_i^{-1}= \psi_i\circ g_{D,p_i}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})\circ\psi_i^{-1}\\&=g_{D,p_i}(\tilde{\mathcal{F}}',\tilde{\mathcal{L}}'). \end{align*} If $p_i$ is a corner, $p_i=D\cap D'$, we also have \begin{equation*} g_{D,p_i}(\tilde{\mathcal{F}},\tilde{\mathcal{L}}'')=g_{D,p_i}(\tilde{\mathcal{F}}',\tilde{\mathcal{L}}'). \end{equation*} Since $\tilde{\Phi}_{|\mathcal{D}}=\mathrm{Id}$, the Dulac maps of $\tilde{\mathcal{L}}''$ and $\tilde{\mathcal{L}}'$ in a neighborhood of $p_i$ are the same. So Remark \ref{re7} leads to \begin{equation*} g_{D',p_i}(\tilde{\mathcal{F}},\tilde{\mathcal{L}}'')=g_{D',p_i}(\tilde{\mathcal{F}}',\tilde{\mathcal{L}}'). \end{equation*} \end{proof} Now we will prove the existence of $\Phi$ in Lemma \ref{lem17} for $N$ big enough. Suppose that $\mathcal{F}$ and $\mathcal{L}$ are respectively defined by \begin{align*} \omega&=a(x,y)dx+b(x,y)dy,\\ \omega_\mathcal{L}&=c(x,y)dx+d(x,y)dy. \end{align*} Then the tangent curve $T=T(\mathcal{F},\mathcal{L})$ is defined by $$q(x,y)=da-cb=0.$$ Let $X_q=\frac{\partial q}{\partial y}\frac{\partial}{\partial x}-\frac{\partial q}{\partial x}\frac{\partial}{\partial y}$ be the Hamiltonian vector field tangent to $T$ and $\tilde{X}_q$ be its lifting by $\sigma$. 
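Note that $T=\{q=0\}$ is indeed invariant by $X_q$, since $q$ is a first integral of $X_q$: $$X_q(q)=\frac{\partial q}{\partial y}\frac{\partial q}{\partial x}-\frac{\partial q}{\partial x}\frac{\partial q}{\partial y}=0.$$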
By the implicit function theorem, if $N$ is big enough, there exists $f_i$ defined on $T_i$ such that $$\exp[f_i]\left(\restr{\tilde{X}_{q}}{T_i}\right)=\phi_i.$$ Using Proposition \ref{pro11}, by choosing $N$ big enough, there exists $f\in\mathbb{C}\{x,y\}$ such that $$\exp[f\circ\sigma]\restr{\tilde{X}_{q}}{T_i}=\phi_i.$$ For each $\Phi=(\Phi_1,\Phi_2)\in\mathrm{Diff}(\mathbb{C}^2,0)$, denote by $$\langle\Phi\rangle=\frac{\omega_\mathcal{L}\wedge\Phi^*\omega_\mathcal{L}}{dx\wedge dy} =c\left(c\circ\Phi\frac{\partial\Phi_1}{\partial y}+d\circ\Phi\frac{\partial\Phi_2}{\partial y}\right)-d\left(c\circ\Phi\frac{\partial\Phi_1}{\partial x}+d\circ\Phi\frac{\partial\Phi_2}{\partial x}\right).$$ It is easy to see that $T(\mathcal{F},\Phi_*\mathcal{L})=T$ if and only if $q\,|\,\langle\Phi\rangle$. For each holomorphic function $f$, we denote by $$\Phi_f= \exp[f]X_q.$$ Lemma \ref{lemma18} below says that there exists a holomorphic function $u$ such that $\Phi_{f-uq}$ satisfies Lemma \ref{lem17} for $N$ big enough. Moreover, by Proposition \ref{pro11}, we can choose $N$ depending only on $\mathcal{F}$. \end{proof} \begin{lemma}\label{lemma18} If $N$ is big enough, then for all $f$ in $(x,y)^N$ there exists a holomorphic function $u$ such that $q\,|\,\langle\Phi_{f-uq}\rangle$. \end{lemma} \begin{proof} We have \begin{align*} \restr{\frac{\partial}{\partial x}x\circ\Phi_{f-uq}}{\{q=0\}}&=\restr{\frac{\partial}{\partial x}\sum_{i=0}^{\infty}\frac{(f-uq)^i}{i!}\mathrm{ad}_{X_q}^i(x)}{\{q=0\}}\\ &=\restr{\frac{\partial}{\partial x}x\circ\Phi_f}{\{q=0\}}-\restr{u\cdot\frac{\partial q}{\partial x}\cdot\sum_{i=1}^{\infty}\frac{f^{i-1}}{(i-1)!}\mathrm{ad}^i_{X_q}(x)}{\{q=0\}}\\ &=\restr{\frac{\partial}{\partial x}x\circ\Phi_f}{\{q=0\}}-\restr{u\cdot\frac{\partial q}{\partial x}\cdot\frac{\partial q}{\partial y}\circ\Phi_f}{\{q=0\}}. 
\end{align*} Similarly, \begin{align*} \restr{\frac{\partial}{\partial y}x\circ\Phi_{f-uq}}{\{q=0\}}&=\restr{\frac{\partial}{\partial y}x\circ\Phi_f}{\{q=0\}}-\restr{u\cdot\frac{\partial q}{\partial y}\cdot\frac{\partial q}{\partial y}\circ\Phi_f}{\{q=0\}},\\ \restr{\frac{\partial}{\partial x}y\circ\Phi_{f-uq}}{\{q=0\}}&=\restr{\frac{\partial}{\partial x}y\circ\Phi_f}{\{q=0\}}+\restr{u\cdot\frac{\partial q}{\partial x}\cdot\frac{\partial q}{\partial x}\circ\Phi_f}{\{q=0\}},\\ \restr{\frac{\partial}{\partial y}y\circ\Phi_{f-uq}}{\{q=0\}}&=\restr{\frac{\partial}{\partial y}y\circ\Phi_f}{\{q=0\}}+\restr{u\cdot\frac{\partial q}{\partial y}\cdot\frac{\partial q}{\partial x}\circ\Phi_f}{\{q=0\}}. \end{align*} This implies that \begin{align} \restr{\langle\Phi_{f-uq}\rangle}{\{q=0\}} &=\restr{\langle\Phi_f\rangle}{\{q=0\}}\nonumber \\&-\restr{u\cdot\left(c\frac{\partial q}{\partial y}-d\frac{\partial q}{\partial x}\right)\cdot\left(\left(c\frac{\partial q}{\partial y}-d\frac{\partial q}{\partial x}\right)\circ\Phi_f\right)}{\{q=0\}}.\label{12} \end{align} Denote $h=c\frac{\partial q}{\partial y}-d\frac{\partial q}{\partial x}$. Then $\{h=0\}$ is the tangent curve of $\mathcal{L}$ and the foliation defined by the level sets of $q$. Since at each singularity $p_i$ of $\tilde{\mathcal{F}}$ the irreducible component $T_i$ of $T$ is transverse to $\tilde{\mathcal{L}}$, the irreducible components of the strict transform of $\{h=0\}$ at $p_i$ are also transverse to $T_i$. This implies that $\mathrm{gcd}(q,h)=1$ and that the two curves $\{q\cdot h=0\}$ and $\{q\cdot (h\circ\Phi_f)=0\}$ are topologically conjugated. By Hilbert's Nullstellensatz and the proof of Proposition \ref{pro11}, there exists a natural number $M$ such that $(x,y)^M\subset(q,h)$ and $(x,y)^M\subset(q,h\circ \Phi_f)$. This implies that $(x,y)^{2M}\subset(q,h\cdot(h\circ \Phi_f))$. So if $\langle\Phi_{f}\rangle\in(x,y)^{2M}$, by \eqref{12} we can choose $u\in\mathbb{C}\{x,y\}$ such that $q|\langle\Phi_{f-uq}\rangle$; this holds once $N$ is big enough, since the vanishing order of $\langle\Phi_f\rangle$ increases with $N$. 
\end{proof} \begin{rema} If we replace the condition ``$\tilde{\mathcal{F}}$ and $\tilde{\mathcal{F}}'$ are locally strictly analytically conjugated'' in Theorem \ref{thr2} by the condition ``$\mathcal{F}$ and $\mathcal{F}'$ are in $\mathcal{M}$'', then the conclusion of Theorem \ref{thr2} becomes: ``For all natural numbers $M\ge N$ there exists $\mathcal{L}''_M$ such that $J^M(S(\mathcal{F},\mathcal{L}''_M))=J^M(S(\mathcal{F}',\mathcal{L}'))$''. Indeed, in that case, because the Camacho-Sad indices are not rational, $\tilde{\mathcal{F}}$ and $\tilde{\mathcal{F}}'$ are locally formally conjugated. So we can choose $\psi_i$ in \eqref{equa13} such that $$J^{M}(\psi_i\circ g_{D,p_i}(\tilde{\mathcal{F}},\tilde{\mathcal{L}})\circ\psi_i^{-1})=J^{M}(g_{D,p_i}(\tilde{\mathcal{F}}',\tilde{\mathcal{L}}')).$$ \end{rema} \begin{coro}\label{cor17} Let $\mathcal{F}\in\mathcal{M}$ be defined by a $1$-form $\omega$. Then there exists a natural number $N$ such that if $\mathcal{F}'\in\mathcal{M}$ is defined by a $1$-form $\omega'$ satisfying $J^N\omega=J^N\omega'$, and the vanishing holonomy representations of $\mathcal{F}$ and $\mathcal{F}'$ are strictly analytically conjugated, then $\mathcal{F}$ and $\mathcal{F}'$ are strictly analytically conjugated. \end{coro} \begin{proof} Let $\mathcal{L}\in\mathcal{R}_0$; then $J^{m(N)}S(\mathcal{F},\mathcal{L})=J^{m(N)}S(\mathcal{F}',\mathcal{L})$, where $m(N)$ is an increasing function of $N$ and $m(N)\rightarrow\infty$ as $N\rightarrow\infty$. By Theorem \ref{thr2}, if $N$ is big enough there exists $\mathcal{L}''\in\mathcal{R}_0$ such that $S(\mathcal{F},\mathcal{L}'')=S(\mathcal{F}',\mathcal{L})$. By Theorem \ref{thr1}, $\mathcal{F}$ and $\mathcal{F}'$ are strictly analytically conjugated. 
\end{proof} \begin{rema} This corollary is consistent with the result of J.F. Mattei \cite{Mat1}, which says that the dimension of the moduli space of equisingular unfoldings of a foliation is finite. Note that the vanishing holonomy representations of two foliations that are joined by an unfolding are conjugated, but the converse is not true. \end{rema} \begin{coro}\label{cor19} Let $\mathcal{L}$ be a $\sigma$-absolutely dicritical foliation defined by a $1$-form $\omega$. There exists a natural number $N$ such that if $\mathcal{L}'$ is a $\sigma$-absolutely dicritical foliation defined by $\omega'$ satisfying $J^N\omega=J^N\omega'$ and the Dulac maps of $\tilde{\mathcal{L}}$ and $\tilde{\mathcal{L}}'$ are the same, then $\mathcal{L}$ and $\mathcal{L}'$ are strictly analytically conjugated. \end{coro} \begin{proof} Suppose that $\mathcal{D}=\cup_{i=1,\ldots, k}D_i$, where $D_i$ is an irreducible component of $\mathcal{D}$. We take a pair of irreducible functions $f_i$ and $g_i$ for each $i=1,\ldots, k$, such that the curves $C_i=\{f_i=0\}$ and $C'_i=\{g_i=0\}$ satisfy the following properties: \begin{itemize} \item[1.] The strict transforms $\tilde{C_i}$ and $\tilde{C'_i}$ cut $D_i$ at two different points $p_i$, $q_i$, respectively, such that none of them is a corner. \item[2.] $\tilde{C_i}$, $\tilde{C'_i}$ are smooth and transverse to the invariant curve of $\tilde{\mathcal{L}}$ through $p_i$, $q_i$ respectively. 
\end{itemize} Because $\mathbb{C}$ is an infinite field extension of $\mathbb{Q}$, there exists $(\lambda_1,\lambda_2,\ldots,\lambda_k)\in\mathbb{C}^k$ such that $$\sum_{i=1}^k c_i\lambda_i\not\in\mathbb{Q},\;\forall (c_1,\ldots,c_k)\in\mathbb{Q}^k\setminus\{(0,\ldots,0)\}.$$ Now, let us consider the non-dicritical foliation $\mathcal{F}$ defined by the $1$-form $$\omega_0=\prod_{i=1}^k(f_ig_i)\cdot \left(\sum_{i=1}^k\left(\lambda_i\frac{df_i}{f_i}+\frac{dg_i}{g_i}\right)\right).$$ Then $\mathcal{F}$ admits $\sigma$ as its desingularization map, and the singularities of the strict transform $\tilde{\mathcal{F}}$ are the corners of $\mathcal{D}$ and the points $p_i,q_i$, $i=1,\ldots,k$. We claim that at each singularity, the Camacho-Sad index of $\tilde{\mathcal{F}}$ is not rational. Indeed, denote by $m_{ij}$ the multiplicity of $f_i\circ\sigma$ and $g_i\circ\sigma$ on $D_j$. At the corner $p_{ij}=D_i\cap D_j$, we take coordinates $(x,y)$ such that $D_i=\{x=0\}$, $D_j=\{y=0\}$. In this coordinate system, we can write $\sigma^*\omega_0$ as \begin{equation*} \sigma^*\omega_0=u(x,y)x^{2\sum_{l=1}^{k} m_{li}}y^{2\sum_{l=1}^{k} m_{lj}}\sum_{l=1}^k\left((\lambda_l+1)m_{li}\frac{dx}{x}+(\lambda_l+1)m_{lj}\frac{dy}{y}+ \alpha_{ij}\right), \end{equation*} where $u(x,y)$ is a unit and $\alpha_{ij}$ is a holomorphic form. So the Camacho-Sad index of $\tilde{\mathcal{F}}$ at $p_{ij}$ is \begin{equation*} \mathrm{I}(p_{ij})=\frac{\sum_{l=1}^k(\lambda_l+1)m_{lj}}{\sum_{l=1}^k(\lambda_l+1)m_{li}}\not\in\mathbb{Q}. 
\end{equation*} Similarly, the Camacho-Sad indices of $\tilde{\mathcal{F}}$ at $p_i$ and $q_i$, respectively, are \begin{equation*} \mathrm{I}(p_i)=\frac{\sum_{l=1}^k(\lambda_l+1)m_{li}}{\sum_{l=1}^k \lambda_l}\not\in\ensuremath{\mathbb{Q}},\; \mathrm{I}(q_i)=\frac{\sum_{l=1}^k(\lambda_l+1)m_{li}}{k}\not\in\ensuremath{\mathbb{Q}}. \end{equation*} Now if $J^N\omega=J^N\omega'$ then $J^{m(N)}S(\mathcal{F},\mathcal{L})=J^{m(N)}S(\mathcal{F},\mathcal{L}')$, where $m(N)$ is an increasing function of $N$ with $m(N)\rightarrow\infty$ as $N\rightarrow\infty$. Moreover, if $N$ is big enough, the invariant curves of $\tilde{\mathcal{L}}$ and $\tilde{\mathcal{L}}'$ through the singularities of $\tilde{\mathcal{F}}$ are tangent. By applying Theorem \ref{thr2} with $\mathcal{F}'=\mathcal{F}$, there exists a foliation $\mathcal{L}''$ strictly conjugated to $\mathcal{L}$ such that the two couples $(\mathcal{F},\mathcal{L}'')$ and $(\mathcal{F},\mathcal{L}')$ are strictly conjugated. Consequently, $\mathcal{L}$ and $\mathcal{L}'$ are strictly conjugated. \end{proof} \end{document}
\begin{document} \title{A large-scale service system with packing constraints:\\ Minimizing the number of occupied servers} \author{ Alexander L. Stolyar \\ Bell Labs, Alcatel-Lucent\\ 600 Mountain Ave., 2C-322\\ Murray Hill, NJ 07974 \\ \texttt{[email protected]} \and Yuan Zhong \\ University of California\\ 465 Soda Hall, MC-1776\\ Berkeley, CA 94720\\ \texttt{[email protected]} } \maketitle \begin{abstract} We consider a large-scale service system model proposed in \cite{St2012}, which is motivated by the problem of efficient placement of virtual machines to physical host machines in a network cloud, so that the total number of occupied hosts is minimized. Customers of different types arrive to a system with an infinite number of servers. A server packing {\em configuration} is the vector $\boldsymbol{k} = \{k_i\}$, where $k_i$ is the number of type-$i$ customers that the server ``contains''. Packing constraints are described by a fixed finite set of allowed configurations. Upon arrival, each customer is placed into a server immediately, subject to the packing constraints; the server can be idle or already serving other customers. After service completion, each customer leaves its server and the system.
It was shown in \cite{St2012} that a simple real-time algorithm, called {\em Greedy}, is asymptotically optimal in the sense of minimizing $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}^{1+\alpha}$ in the stationary regime, as the customer arrival rates grow to infinity. (Here $\alpha > 0$, and $X_{\boldsymbol{k}}$ denotes the number of servers with configuration $\boldsymbol{k}$.) In particular, when parameter $\alpha$ is small, {\em Greedy} approximately solves the problem of minimizing $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}$, the number of occupied hosts. In this paper we introduce the algorithm called {\em Greedy with sublinear Safety Stocks (GSS)}, and show that it asymptotically solves the exact problem of minimizing $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}$. An important feature of the algorithm is that sublinear safety stocks of $X_{\boldsymbol{k}}$ are created automatically -- when and where necessary -- without having to determine {\em a priori} where they are required. Moreover, we also provide a tight characterization of the rate of convergence to optimality under {\em GSS}. The {\em GSS} algorithm is as simple as {\em Greedy}, and uses no more system state information than {\em Greedy} does.
\end{abstract} \category{}{Network Services}{Cloud Computing} \category{}{Probability and Statistics}{Markov Processes, Queueing Theory, Stochastic Processes} \category{}{Design and Analysis of Algorithms}{Approximation Algorithms Analysis, Packing and Covering Problems} \terms{Algorithms, Performance, Theory} \keywords{Multi-dimensional Bin Packing, Infinite-Server System, Markov Chain, Safety Stocks, Fluid Scale Optimality, Local Fluid Scaling} \section{Introduction} We consider a service system model \cite{St2012} motivated by the problem of efficient placement of virtual machines (VMs) to physical host machines (servers) in a data center (DC) \cite{Gulati2012}. A {\em service policy} decides to which server each incoming VM will be placed. We are interested in service policies that minimize the total number of occupied servers in the system. It is further desirable that the policy be simple, so that placement decisions are made in real time, and depend only on the current system state, but not on system parameters. Consider the following description of a DC. It consists of a number of servers. While servers may potentially have different characteristics, in this paper we assume that they are all the same. More specifically, let there be $N$ different types of resources (for example, type-$1$ resource can be CPU, type-$2$ resource can be memory, etc.). For each $n \in \{1, 2, \ldots, N\}$, a server possesses amount $B_n > 0$ of type-$n$ resource. VMs of $I$ types arrive in a probabilistic fashion, and request service at the DC. Arriving VMs are placed into the servers, occupying certain resources. More specifically, for $i \in \{1, 2, \ldots, I\}$, a type-$i$ VM requires amount $b_{i, n} > 0$ of type-$n$ resource during service, where $n \in \{1, 2, \ldots, N\}$. Once a VM completes its service, it departs the system, freeing up the corresponding resources. We assume that service times of different VMs are independent.
For each $i \in \{1, 2, \ldots, I\}$, let $k_i$ be the number of type-$i$ VMs that a server contains. Then the following {\em vector packing constraints} must be observed at all times. Namely, a server can contain $k_i$ type-$i$ VMs ($i \in \{1, 2, \ldots, I\}$) simultaneously if and only if \begin{equation}\label{eq:vec-packing} \sum_{i} k_i b_{i,n} \le B_n, \end{equation} for each $n \in \{1, 2, \ldots, N\}$. In this case, the vector $\boldsymbol{k} = (k_1, \ldots, k_I)$ is called a {\em server configuration}. The model considered in this paper is similar to the DC described above, but differs in the following two aspects. \begin{itemize} \item[1.] While vector packing constraints (cf. Eq. \eqref{eq:vec-packing}) arise naturally in the context of VM placement, we make the more general assumption of so-called {\em monotone} packing constraints (cf. Section \ref{sec:model}) in our model. \item[2.] We consider a system with an infinite number of servers, where incoming VMs are immediately placed into a server. For large-scale DCs, the number of servers is not a bottleneck, hence an infinite-server system reasonably approximates such DCs. \end{itemize} We would also like to remark that an important assumption of our model is that the service requirement of a VM is not affected by other VMs potentially occupying the same server. This is a reasonable modeling assumption for multi-core servers, for example. There can be different performance objectives of interest. For example, we may be interested in minimizing the total energy consumption \cite{Gulati2012}, or maximizing system throughput \cite{Maguluri2012}. In this paper, we are interested in minimizing the total number of occupied servers. These objectives are different but related. For example, by switching off idle servers, or keeping them in stand-by mode, we can reduce energy consumption by minimizing the number of occupied servers.
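As a concrete illustration of the constraint \eqref{eq:vec-packing}, the following sketch checks whether a candidate configuration $\boldsymbol{k}$ is feasible; the capacities $B_n$ and demands $b_{i,n}$ are hypothetical values chosen only for this example, not parameters from the paper.

```python
# Feasibility under vector packing (Eq. (1)): a configuration k is feasible
# iff sum_i k_i * b[i][n] <= B[n] for every resource n.
def is_feasible(k, b, B):
    """k: tuple of VM counts per type; b[i][n]: demand of one type-i VM
    for resource n; B[n]: server capacity for resource n."""
    return all(
        sum(k_i * b_i[n] for k_i, b_i in zip(k, b)) <= B[n]
        for n in range(len(B))
    )

# Hypothetical example: 2 resources (CPU, memory) and 2 VM types.
B = [8.0, 16.0]                # server capacities
b = [[2.0, 4.0], [1.0, 8.0]]   # b[i] = per-VM demand vector of type i
print(is_feasible((2, 1), b, B))  # 2*2+1*1=5<=8 and 2*4+1*8=16<=16 -> True
print(is_feasible((2, 2), b, B))  # memory: 2*4+2*8=24>16 -> False
```

Note that feasibility is checked resource by resource, so adding a VM can only fail on the resource it saturates first.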
In the main results of the paper, we introduce the policy called {\em Greedy with sublinear Safety Stocks (GSS)}, and show that it asymptotically minimizes the total number of occupied servers in steady state, as the input flow rates of VMs grow to infinity. {\em GSS} is a simple policy that makes placement decisions in real time, based only on the current system state. Informally speaking, {\em GSS} places incoming VMs in a way that greedily minimizes a Lyapunov function, which asymptotically coincides with the total number of occupied servers. {\em GSS} maintains non-empty safety stocks at every server configuration $\boldsymbol{k}$ whenever $X_{\boldsymbol{k}}$ becomes ``too small'', so as to allow flexibility in VM placement. In other words, under {\em GSS}, there is a non-zero number of servers of every configuration, so that an incoming VM can potentially be placed into a server with any configuration. These safety stocks correspond to the discrepancy between the Lyapunov function and the total number of occupied servers, and grow ``sublinearly'' with the input flow rates. We also provide a characterization of the rate of convergence to optimality under {\em GSS}, which is tighter than the conventional fluid-scale convergence rate. \subsection{Related Works} In this section, we discuss related works, and put our results in perspective. The most closely related work is \cite{St2012}, where the model considered in this paper was proposed, and a related problem was studied.
In both this paper and \cite{St2012}, the asymptotic regime of interest is when the input flow rates grow to infinity, and the system is considered under the {\em fluid scaling}, i.e., when the system states are scaled down by the input flow rates. In \cite{St2012}, the problem of interest is minimizing $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}^{1+\alpha}$, where $\alpha > 0$, and $X_{\boldsymbol{k}}$ is the number of occupied servers with configuration $\boldsymbol{k}$. A simple policy called {\em Greedy} was introduced, which asymptotically minimizes the sum $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}^{1+\alpha}$, for any $\alpha > 0$, in the stationary regime. Policies {\em Greedy} and {\em GSS} differ in two important aspects. First, they try to minimize different objectives -- $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}^{1+\alpha}$ ($\alpha > 0$) and $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}$, respectively. When $\alpha > 0$ is small, {\em Greedy} approximately solves the problem of minimizing the total number of occupied servers $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}$, in the asymptotic regime where the input flow rates grow to infinity, and at the fluid scale. However, if minimizing $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}$ is the ``true'' desired objective, $\alpha > 0$ needs to be chosen carefully, depending on the system scale (input flow rates), which may be difficult to do. Therefore, we believe that asymptotically solving the exact problem of minimizing $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}$ is of substantial interest. Moreover, the policy {\em GSS} proposed in this paper is as simple as {\em Greedy}, and uses no more system state information than {\em Greedy} does. {Second, at a technical level, to prove the asymptotic optimality of {\em Greedy}, \cite{St2012} considered only the fluid scaling and the corresponding fluid limits.
In this paper, to prove the asymptotic optimality of {\em GSS}, it is no longer sufficient to consider the fluid-scale system behavior alone; a {\em local fluid scaling}, needed to study the dynamics of safety stocks, is also considered. In addition, this allows us to derive a tighter characterization of the rate of convergence to optimality under {\em GSS}, as opposed to the fluid-scale convergence shown in \cite{St2012} for {\em Greedy}.} On a broader level, the model considered in this paper is related to the vast literature on classical stochastic bin packing problems. In a bin packing system, random-sized items arrive, and need to be placed into finite-sized bins. The items do not leave or move between bins, and a typical objective is to minimize the number of occupied bins. A packing problem is {\em one-dimensional} if sizes of the items and bins are captured by scalars, and {\em multi-dimensional} if they are captured by vectors. Problems with the multi-dimensional packing constraints \eqn{eq:vec-packing} are called {\em vector packing}. For a good review of one-dimensional bin packing, see for example \cite{Csirik2006}, and see for example \cite{Bansal2009} for a recent review of multi-dimensional packing. In bin packing {\em service} systems, items (customers) arrive at random times to be placed into a bin (server), and leave after a random service time. The servers can process multiple customers as long as packing constraints are observed. Customers get queued, and a typical objective of a packing algorithm is to maximize system throughput.
(See for example \cite{Gamarnik2004} for a review of this line of work.) Our model is similar to the latter systems, except there are multiple bins (servers) -- in fact, an infinite number in our case. Models of this type are more recent (see for example, \cite{Jiang2012, Maguluri2012}). \cite{Jiang2012} addresses a joint routing and VM placement problem, which in particular includes packing constraints. The approach of \cite{Jiang2012} resembles Markov Chain algorithms used in combinatorial optimization. \cite{Maguluri2012} considers maximizing throughput of a queueing system with a finite number of bins (servers), where VMs can wait for service. Very recently, \cite{GR2012} has new results on the classical one-dimensional online bin packing; it also contains heuristics and simulations for the corresponding system with item departures, which is a special case of our model.
As mentioned earlier, we consider the asymptotic regime where the input flow rates scale up to infinity. In this respect, our work is related to the (also vast) literature on queueing systems in the {\em many servers} regime. (See e.g. \cite{ST2010_04} for an overview. The name ``many servers'' reflects the fact that the average number of occupied servers scales up to infinity as well, linearly with the input flow rates.) However, packing constraints are not present in earlier works (prior to \cite{St2012}) on the many servers regime, to the best of our knowledge. {The idea of maintaining sublinear safety stocks to increase system flexibility, and hence avoid ``resource'' starvation -- the approach taken by {\em GSS}, the policy proposed in this paper -- has also appeared in other works.
For example, see \cite{Meyn2005} and the references therein for an overview. However, to the best of our knowledge, the following feature of {\em GSS} is novel, and has not appeared in algorithms proposed in earlier works. Namely, {\em GSS} creates safety stocks {\em automatically}, in the sense that it does not require {\em a priori} knowledge of the subset of configurations for which the sublinear safety stocks need to be maintained. As a result, {\em GSS} does not require any {\em a priori} knowledge of the system parameters, because the safety stocks automatically adapt to parameter changes. We remark that the policy {\em Greedy} proposed in \cite{St2012} also creates safety stocks, but they scale linearly with the input flow rates, whereas {\em GSS} creates sublinear safety stocks. } Finally, an overview of some resource allocation issues that arise from VM placement in the context of cloud computing can be found in \cite{Gulati2012}. \subsection{Organization} The rest of the paper is organized as follows. In Section \ref{ssec:notation}, we introduce the notation and conventions adopted in this paper. The precise model and main results are described in Section \ref{sec:model-results}. The model is introduced in Section \ref{sec:model}. Here we describe two versions of the model, the closed and open system. In Section \ref{ssec:asymp-regime}, we describe the asymptotic regime of interest. The {\em GSS} policy is described in Section \ref{sec-GSS-definition}, and the main results, Theorems \ref{thm:main-closed} and \ref{th-main-res-open}, are stated in Section \ref{ssec:results}, for the closed and open system, respectively. Sections \ref{sec-closed} and \ref{sec-open} are devoted to proving Theorems \ref{thm:main-closed} and \ref{th-main-res-open}, respectively. A discussion of the results in this paper and some future directions is provided in Section \ref{sec-discussion}.
\subsection{Notation and Conventions} \label{ssec:notation} Let $\mathbb{R}$ be the set of real numbers, and let $\mathbb{R}_+$ be the set of nonnegative real numbers. Let $\mathbb{Z}$ be the set of integers, let $\mathbb{Z}_+$ be the set of nonnegative integers, and let $\mathbb{N}$ be the set of natural numbers. $\mathbb{R}^n$ denotes the real vector space of dimension $n$, and $\mathbb{R}_+^n$ denotes the nonnegative orthant of $\mathbb{R}^n$. $\mathbb{Z}^n$ and $\mathbb{Z}_+^n$ are similarly defined. We reserve bold letters for vectors, and plain letters for scalars and sets. For a scalar $x$, let $|x|$ denote its absolute value, and let $\lfloor x\rfloor$ denote the largest integer that does not exceed $x$. For two scalars $x$ and $y$, let $x \wedge y = \min\{x, y\}$, and let $x \vee y = \max\{x, y\}$. For a vector $\boldsymbol{x} = (x_i)_{i=1}^n \in \mathbb{R}^n$, let $\|\boldsymbol{x}\|$ denote its $1$-norm, i.e., $\|\boldsymbol{x}\| = \sum_{i=1}^n |x_i|$. The distance from vector $\boldsymbol{x} \in \mathbb{R}^n$ to a set $U \subset \mathbb{R}^n$ is denoted by $d(\boldsymbol{x},U)=\inf_{\boldsymbol{u}\in U} \|\boldsymbol{x}-\boldsymbol{u}\|$. We use $\boldsymbol{e}_i$ to denote the $i$-th standard unit vector, with only the $i$-th component being $1$, and all other components being $0$. For a set $\mathcal{N}$, let $\boldsymbol{1}_{\mathcal{N}}$ be the indicator function of $\mathcal{N}$. For a finite set $\mathcal{N}$, let $|\mathcal{N}|$ be its cardinality. For two sets $\mathcal{N}$ and $\mathcal{M}$, let $\mathcal{N} \backslash \mathcal{M}$ denote the set difference of $\mathcal{N}$ and $\mathcal{M}$, i.e., $\mathcal{N} \backslash \mathcal{M} = \{x\in \mathcal{N} : x \notin \mathcal{M}\}$.
For a set $\mathcal{N} \subset \mathbb{R}^n$, let $\hull{\mathcal{N}}$ denote its convex hull, i.e., the set of all $\boldsymbol{x} \in \mathbb{R}^n$ such that there exist $\gamma_1, \ldots, \gamma_m \in \mathbb{R}_+$ and $\boldsymbol{v}_1, \ldots, \boldsymbol{v}_m \in \mathcal{N}$ with $\boldsymbol{x} = \sum_{j=1}^m \gamma_j \boldsymbol{v}_j$ and $\sum_{j=1}^m \gamma_j = 1$. Symbol $\to$ means ordinary convergence in $\mathbb{R}^n$, and $\implies$ denotes convergence in distribution of random variables taking values in $\mathbb{R}^n$, equipped with the Borel $\sigma$-algebra. The abbreviation {\em w.p.1} means convergence {\em with probability 1}. We often write $x(\cdot)$ to mean the function (or random process) $\{x(t),~t\ge 0\}$. We write iff as a shorthand for ``if and only if'', i.o. for ``infinitely often'', LHS for ``left-hand side'' and RHS for ``right-hand side''. We also write WLOG for ``without loss of generality'', w.r.t. for ``with respect to'', and u.o.c. for ``uniformly on compact sets''. Throughout this paper, if $x(\cdot)$ is a random process (which in most cases will be Markov), we will denote by $x(\infty)$ its random state when the process is in the stationary regime; in other words, $x(\infty)$ is equal in distribution to $x(t)$ (for any $t$) when $x(\cdot)$ is stationary. We use the terms {\em steady state} and {\em stationary regime} interchangeably. \section{Model and Main Results}\label{sec:model-results} \subsection{Infinite Server System with Packing Constraints} \label{sec:model} We consider the following infinite server system that evolves in continuous time. There are $I$ types of customers, indexed by $i \in \{1,2,\ldots,I\} \equiv {\cal I}$, and an infinite number of homogeneous servers. A server can potentially serve more than one customer simultaneously. We use $\boldsymbol{k} = (k_1, k_2, \ldots, k_I) \in \mathbb{Z}_+^I$, an $I$-dimensional vector with nonnegative integer components, to denote a \emph{server configuration}.
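Before the packing constraints and the monotonicity assumption are stated formally below, a small sketch may help fix ideas. It enumerates the set of server configurations that are feasible under the vector packing constraints \eqref{eq:vec-packing}, and verifies the monotonicity property (removing a customer from a feasible configuration keeps it feasible); all numerical parameters are hypothetical, chosen only for this illustration.

```python
from itertools import product

def feasible_configs(b, B, k_max):
    """All k in Z_+^I with k_i <= k_max and sum_i k_i*b[i][n] <= B[n]
    for every resource n (vector packing)."""
    I = len(b)
    return {
        k for k in product(range(k_max + 1), repeat=I)
        if all(sum(k[i] * b[i][n] for i in range(I)) <= B[n]
               for n in range(len(B)))
    }

def is_monotone(configs):
    """A set of configurations is monotone iff removing one customer from
    any feasible configuration yields another feasible configuration."""
    for k in configs:
        for i in range(len(k)):
            if k[i] > 0:
                smaller = k[:i] + (k[i] - 1,) + k[i + 1:]
                if smaller not in configs:
                    return False
    return True

# Vector packing with one resource: type-1 demand 1, type-2 demand 2, capacity 4.
bar_K = feasible_configs(b=[[1.0], [2.0]], B=[4.0], k_max=4)
print(is_monotone(bar_K))  # vector packing is always monotone -> True

# A monotone set that is NOT given by vector packing (the example in the text):
bar_K2 = {(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)}
print(is_monotone(bar_K2))  # -> True, even though (1, 1) is excluded
```

The second set is monotone yet excludes $(1,1)$ while containing $(2,0)$ and $(0,2)$, which no choice of vector packing parameters can reproduce.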
The general packing constraints are captured by the finite set $\bar{\cal K} \subset \mathbb{Z}_+^I$ of \emph{feasible server configurations}. Thus, a server can simultaneously serve $k_i$ customers of type $i$, $i \in {\cal I}$, iff $\boldsymbol{k} = (k_1, k_2, \ldots, k_I) \in \bar{\cal K}$. From now on, we drop the word ``feasible'', and simply call $\bar{\cal K}$ the set of server configurations. In this paper, we assume that the set $\bar{\cal K}$ is \emph{monotone}. \begin{assumption}\label{asmp:monotone} $\bar{\cal K}$ is \emph{monotone} in the following sense. If $\boldsymbol{k} \in \bar{\cal K}$, and $\boldsymbol{k}' \in \mathbb{Z}_+^I$ has $\boldsymbol{k}' \leq \boldsymbol{k}$ component-wise, then $\boldsymbol{k}' \in \bar{\cal K}$ as well. \end{assumption} A simple consequence of the monotonicity assumption is that $\boldsymbol{0} \in \bar{\cal K}$. We now let ${\cal K} = \bar{\cal K} \backslash \{\boldsymbol{0}\}$ denote the set of non-zero server configurations. \\\\ {\bf Vector Packing is Monotone.} An important example of monotone packing is vector packing. Consider the vector packing constraints in \eqref{eq:vec-packing}. It is clear that if the server configuration $\boldsymbol{k} = (k_1, \ldots, k_I)$ satisfies \eqref{eq:vec-packing}, and if $\boldsymbol{k}' \le \boldsymbol{k}$ component-wise, then $\boldsymbol{k}'$ also satisfies \eqref{eq:vec-packing}. On the other hand, not all monotone packing is vector packing. For example, when $I = 2$, $\bar{{\cal K}} = \{(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)\}$ is monotone, but is not described by vector packing constraints. In the sequel, we will only assume monotone packing in our model, and all our results hold under this general setting. To exclude triviality, we also assume that for all $i \in {\cal I}$, $\boldsymbol{e}_i$ (the $i$-th standard unit vector) is an element of $\bar{\cal K}$. As discussed in the introduction, we make the following important assumption in this paper.
We assume that simultaneous services do {\em not} affect the service distributions of individual customers; in other words, the service time of a customer is unaffected by whether or not there are other customers served simultaneously by the same server. Let us also remark that, ideally, we would like to consider an open system, where each arriving customer is immediately placed for service in one of the servers, and leaves the system after service completion. However, we will first consider a ``closed'' version of this open system. The reason is twofold. First, the analysis of the closed system is a stepping stone to that of the open system, and illustrates the main ideas more clearly. Second, we will see shortly that the closed system can be used to model job migration in a cloud, and is therefore of independent interest. Denote by $X_{\boldsymbol{k}}$ the number of servers with configuration $\boldsymbol{k} \in {\cal K}$. The system state is then the vector $\boldsymbol{X} = \{X_{\boldsymbol{k}}, ~\boldsymbol{k} \in {\cal K}\}$. By convention, $X_{\boldsymbol{0}} \equiv 0$ at all times.\\\\ {\bf Closed System.} Here we describe the ``closed'' version of the model. Let $r \in \mathbb{N}$ be given. Suppose that there are in total $r$ customers in the system, and no exogenous arrivals. For each $i \in {\cal I}$, we suppose that there are $\rho_i r$ customers of type $i$ in the system at all times. This in particular implies that $\sum_{i \in {\cal I}} \rho_i = 1$. It is convenient to index the system by $r$, its total number of customers, and we use $\boldsymbol{X}^r = (X_{\boldsymbol{k}}^r,~\boldsymbol{k} \in {\cal K})$ to denote a system state. The system evolves as follows. Each customer is almost always in service, except at a discrete set of time instances, where it migrates from one server to another (possibly the same one), subject to the packing constraints imposed by $\bar{{\cal K}}$.
For a customer, the time between consecutive migrations is called its {\em service requirement}. Thus, one can alternatively think of a customer as departing the system after its service requirement, and then immediately arriving to the system, to be placed into a server. For each $i$, we assume that the service requirements of type-$i$ customers are i.i.d. exponential random variables with mean $1/\mu_i$, and that the service requirements are independent across different $i \in {\cal I}$. A (Markovian) \emph{service policy} (``packing rule'') decides to which server a customer will be placed after its service requirement, based only on the current system state $\boldsymbol{X}^r$. A service policy has to observe the packing constraints. Under any well-defined service policy, the system state at time $t$, $\boldsymbol{X}^r(t)$, is a continuous-time Markov chain on a finite state space. Hence, for each $r$, the process $\{\boldsymbol{X}^r(t), ~t\ge 0\}$ always has a stationary distribution. \newline \newline {\bf Open System.} In the open system, customers of type $i$ arrive exogenously as an independent Poisson flow of rate $\lambda_i r$, where $\lambda_i$ is fixed and $r$ is a scaling parameter. Each arriving customer has to be placed for service immediately in one of the servers, subject to the packing constraints imposed by $\bar{{\cal K}}$. Service times of all customers are independent. The service time of a type-$i$ customer is exponentially distributed with mean $1/\mu_i$. After a service completion, each customer leaves the system. If we denote $\rho_i = \lambda_i/\mu_i$, then in steady state, the average number of type-$i$ customers in the system is $\rho_i r$, and the average total number of customers is $\sum_i \rho_i r$.
We assume, WLOG, that $\sum_i \rho_i = 1$ -- this is equivalent to re-choosing the value of parameter $r$, if necessary. A (Markovian) \emph{service policy} (``packing rule'') in this case decides to which server an arriving customer will be placed, based only on the current system state. A service policy has to observe the packing constraints. Similar to the closed system, we let $X_{\boldsymbol{k}}^r(t)$ denote the number of servers with configuration $\boldsymbol{k}$ at time $t$ in the $r$th system. However, for the policy that we will study, $\boldsymbol{X}^r(t) = (X_{\boldsymbol{k}}^r(t))_{\boldsymbol{k} \in {\cal K}}$ will not be a Markov process. We postpone the discussion of a complete Markovian description of the system and the existence of the associated stationary distribution to Section \ref{sssec:gss-open}.
\subsection{Asymptotic Regime}\label{ssec:asymp-regime} We are interested in finding a service policy that minimizes the total number of occupied servers in the stationary regime. The exact problem is intractable, so instead we consider asymptotically optimal service policies. For both the closed and open systems, the asymptotic regime of interest is when $r \rightarrow \infty$. Informally speaking, in this limit, the {\em fluid-scaled} system state satisfies a conservation law (cf. Eq. \eqref{eq:conservation}), and the best that a policy can do is solve a linear program, subject to this conservation law. We now describe the asymptotic regime in more detail. First, we define the so-called {\em fluid scaling}. Recall that both the closed and open systems are indexed by $r$, and $\boldsymbol{X}^r(t)$ is the vector that denotes the numbers of servers at time $t$, in the $r$th system.
The {\em fluid-scaled} process is $\boldsymbol{x}^r(t)=\boldsymbol{X}^r(t)/r$. For each $r$, in the closed system, $\boldsymbol{X}^r(\cdot)$ has a (not necessarily unique) stationary distribution, so $\boldsymbol{x}^r(\cdot)$ also has a stationary distribution. We will see shortly that in an open system, $\boldsymbol{X}^r(\cdot)$ also has a stationary distribution (see Lemma \ref{lem-complete-state-tight}). Denote by $\boldsymbol{X}^r(\infty)$ and $\boldsymbol{x}^r(\infty)$ the random states of the corresponding processes in a stationary regime. (Recall the convention in Section~\ref{ssec:notation}.) We now argue that as $r \to \infty$, \begin{equation}\label{eq:approx-conservation} \sum_{\boldsymbol{k} \in {\cal K}} k_i x^r_{\boldsymbol{k}}(\infty) \implies \rho_i, \mbox{ for all } i. \end{equation} In a closed system, for each $i \in {\cal I}$, there are $\rho_i r$ customers of type $i$ in the system at all times, so on all sample paths, \[ \sum_{\boldsymbol{k} \in {\cal K}} k_i x^r_{\boldsymbol{k}}(t) = \rho_i, \mbox{ for all } r, t \mbox{ and } i. \] This implies that the same holds for $\boldsymbol{x}^r(\infty)$. In an open system, the total number of type-$i$ customers in steady state is $\sum_{\boldsymbol{k} \in {\cal K}} k_i X^r_{\boldsymbol{k}}(\infty)$. It is easy to see that, independently of the service policy, this quantity is a Poisson random variable with mean $\rho_i r$. Thus, as $r \rightarrow \infty$, $\sum_{\boldsymbol{k} \in {\cal K}} k_i x^r_{\boldsymbol{k}}(\infty) \implies \rho_i$. Now consider the following linear program (LP). \begin{eqnarray} \mbox{Minimize} & & \sum_{\boldsymbol{k}\in {\cal K}} x_{\boldsymbol{k}} \\ \mbox{subject to} & & \sum_{\boldsymbol{k} \in {\cal K}} k_i x_{\boldsymbol{k}} = \rho_i, \quad \mbox{for all } i \in {\cal I}, \label{eq:conservation} \\ & & \quad \quad \ \ x_{\boldsymbol{k}} \geq 0, \quad \mbox{ for all } \boldsymbol{k} \in {\cal K}.
\end{eqnarray} Denote by ${\cal X}$ the set of feasible solutions to LP: $${\cal X} = \{\boldsymbol{x} \in \mathbb{R}_+^{|{\cal K}|} : \sum_{\boldsymbol{k} \in {\cal K}} k_ix_{\boldsymbol{k}} = \rho_i, i \in {\cal I}\}.$$ Then ${\cal X}$ is a compact subset of $\mathbb{R}_+^{|{\cal K}|}$. Let ${\cal X}^*$ denote the set of optimal solutions of LP, and let $u^*$ denote its optimal value. In light of Eqs. \eqref{eq:approx-conservation} and \eqref{eq:conservation}, a service policy is asymptotically optimal if, roughly speaking, under this policy and for large $r$, $\sum_{\boldsymbol{k} \in {\cal K}} x^r_{\boldsymbol{k}}(\infty) \approx u^*$ with high probability (cf. Theorems \ref{thm:main-closed} and \ref{th-main-res-open}).

The following characterization of the set ${\cal X}^*$ by dual variables will be useful. The proof is elementary and omitted. \begin{lem}\label{lem:lp-dual-char} $\boldsymbol{x} = (x_{\boldsymbol{k}})_{\boldsymbol{k} \in {\cal K}} \in {\cal X}^*$ if and only if $\boldsymbol{x}$ is a feasible solution of LP, and there exist $\eta_i \in \mathbb{R}$, $i \in {\cal I}$, such that \begin{itemize} \item[(i)] $\sum_{i \in {\cal I}} k_i \eta_i \leq 1$ for all $\boldsymbol{k} \in {\cal K}$, and \item[(ii)] if $\sum_{i \in {\cal I}} k_i \eta_i < 1$, then $x_{\boldsymbol{k}} = 0$. \end{itemize} \end{lem} Lemma \ref{LEM:LP-RATE} below relates the distance between a point $\boldsymbol{x} \in {\cal X}$ and the optimal set ${\cal X}^*$ to the objective value of LP evaluated at $\boldsymbol{x}$.
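To make the LP and the dual characterization above concrete, here is a small numerical sketch. The configuration set and demand vector are made up purely for illustration (they are not part of the model above), and the brute-force enumeration of supports is viable only because the instance is tiny; any LP solver would do the same.

```python
from itertools import combinations

# Hypothetical toy instance: two customer types, feasible server
# configurations k = (k_1, k_2), and fluid demands rho.
K = [(1, 0), (0, 1), (1, 1), (2, 0)]
rho = (1.0, 1.0)

def solve_lp(K, rho):
    """Brute-force the LP: minimize sum_k x_k subject to
    sum_k k_i x_k = rho_i and x >= 0, by enumerating supports of
    size 2 (enough here, since there are two equality constraints)."""
    best_val, best_x = float("inf"), None
    for a, b in combinations(range(len(K)), 2):
        det = K[a][0] * K[b][1] - K[b][0] * K[a][1]
        if det == 0:
            continue
        # Cramer's rule for the 2x2 system  x_a * K[a] + x_b * K[b] = rho.
        xa = (rho[0] * K[b][1] - K[b][0] * rho[1]) / det
        xb = (K[a][0] * rho[1] - rho[0] * K[a][1]) / det
        if xa < -1e-9 or xb < -1e-9:
            continue  # infeasible basic solution
        if xa + xb < best_val:
            best_val, best_x = xa + xb, {K[a]: xa, K[b]: xb}
    return best_val, best_x

u_star, x_star = solve_lp(K, rho)
# The full configuration (1, 1) serves one unit of each type, so u* = 1.
assert abs(u_star - 1.0) < 1e-9

# Dual certificate in the sense of the lemma: eta = (1/2, 1/2) satisfies
# sum_i k_i eta_i <= 1 for every k, with equality on the support of x*.
eta = (0.5, 0.5)
assert all(k[0] * eta[0] + k[1] * eta[1] <= 1 + 1e-9 for k in K)
assert all(abs(k[0] * eta[0] + k[1] * eta[1] - 1.0) < 1e-9
           for k, x in x_star.items() if x > 1e-9)
```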
\begin{lem}\label{LEM:LP-RATE} There exists a constant $D \ge 1$ such that for any $\boldsymbol{x} \in {\cal X}$, \[ D\left(\sum_{\boldsymbol{k} \in {\cal K}} x_{\boldsymbol{k}} - u^*\right) \ge d\left(\boldsymbol{x}, {\cal X}^*\right). \] \end{lem} Note that $D \ge 1$ is necessary, since for every $\boldsymbol{x} \in {\cal X}$, $d\left(\boldsymbol{x}, {\cal X}^*\right) \ge \sum_{\boldsymbol{k} \in {\cal K}} x_{\boldsymbol{k}} - u^*$. \begin{proof} {See Appendix \ref{apdx:lp-rate}.} \end{proof}

\subsection{Greedy with {sublinear} Safety Stocks ({GSS})} \label{sec-GSS-definition} We now introduce the service policy, {\em Greedy with sublinear Safety Stocks (GSS)}, along with a variant, which we will prove to be asymptotically optimal.

\subsubsection{{GSS} Policy in a Closed System}\label{sssec:gss-closed} \noindent {\bf {\em GSS}.} Let $p \in (\frac{1}{2}, 1)$. For a given $r$, define a weight function $w^r : \mathbb{R}_+ \rightarrow \mathbb{R}_+$ by $w^r(X) = 1 \wedge \frac{X}{r^p}$. Let ${\cal M}$ denote the set of all pairs $(\boldsymbol{k}, i) \in {\cal K} \times {\cal I}$ such that $\boldsymbol{k} \in {\cal K}$ and $\boldsymbol{k} - \boldsymbol{e}_i \in \bar{\cal K}$. Given $\boldsymbol{X} = \{X_{\boldsymbol{k}'}, \boldsymbol{k}' \in {\cal K}\}$ and $(\boldsymbol{k}, i) \in {\cal M}$, define $\Delta^r_{(\boldsymbol{k}, i)}(\boldsymbol{X}) = w^r\left(X_{\boldsymbol{k}}\right) - w^r(X_{\boldsymbol{k} - \boldsymbol{e}_i})$. Under {\em GSS}, a customer of type $i$ is placed into a server with configuration $\boldsymbol{k} - \boldsymbol{e}_i$, where $X_{\boldsymbol{k} - \boldsymbol{e}_i} > 0$ or $\boldsymbol{k} - \boldsymbol{e}_i = \boldsymbol{0}$, such that $\Delta^r_{(\boldsymbol{k}, i)}(\boldsymbol{X})$ is minimal. Ties are broken arbitrarily. Note that the {\em GSS} policy makes decisions based only on the current system state.
The parameter $r$ that it uses is simply the total number of customers in the system, which is a function of the state and happens to be constant in the closed system. We now provide an intuitive explanation of the policy. Let $f^r$ be the anti-derivative of $w^r$, so that \[ f^r(X) = \left\{\begin{array}{ll} \frac{X^2}{2 r^p}, & \mbox{ if } X \in [0, r^p]; \\ X - \frac{r^p}{2}, & \mbox{ if } X > r^p. \end{array}\right. \] Let $F^r(\boldsymbol{X}) = \sum_{\boldsymbol{k} \in {\cal K}} f^r(X_{\boldsymbol{k}})$. Then $w^r$ and $\Delta^r_{(\boldsymbol{k}, i)}$ capture the first-order change in $F^r$. Suppose that the current system state is $\boldsymbol{X} = (X_{\boldsymbol{k}})_{\boldsymbol{k} \in {\cal K}}$. Then, placing a type-$i$ customer into a server with configuration $\boldsymbol{k} - \boldsymbol{e}_i$ only changes $X_{\boldsymbol{k}-\boldsymbol{e}_i}$ and $X_{\boldsymbol{k}}$: $X_{\boldsymbol{k} - \boldsymbol{e}_i}$ decreases by $1$ (if $X_{\boldsymbol{k} - \boldsymbol{e}_i} > 0$), and $X_{\boldsymbol{k}}$ increases by $1$. Thus, the first-order change in $F^r$ is \[ \frac{d}{dX} f^r(X) \Big|_{X = X_{\boldsymbol{k}}} - \frac{d}{dX} f^r(X) \Big|_{X = X_{\boldsymbol{k} - \boldsymbol{e}_i}} = \Delta^r_{(\boldsymbol{k}, i)}(\boldsymbol{X}). \] In this sense, {\em GSS} decreases $F^r$ greedily, by placing a customer into a server that results in the largest (first-order) decrease in $F^r$. The next lemma states that $F^r(\boldsymbol{X})$ only differs from $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}$ by $O(r^p)$. The proof is straightforward and omitted. \begin{lem}\label{lem:F-sum-close} For any $\boldsymbol{X} \in \mathbb{R}_+^{|{\cal K}|}$, \[ \sum_{\boldsymbol{k}\in {\cal K}} X_{\boldsymbol{k}} - \frac{|{\cal K}| r^p}{2} \leq F^r(\boldsymbol{X}) \leq \sum_{\boldsymbol{k} \in {\cal K}} X_{\boldsymbol{k}}.
\] \end{lem} Under the fluid scaling described earlier, the difference $O(r^p)$ between $F^r(\boldsymbol{X})$ and $\sum_{\boldsymbol{k} \in {\cal K}} X_{\boldsymbol{k}}$ becomes negligible, as it is of order $o(r)$. Thus, for a fluid-scaled process, minimizing $F^r(\boldsymbol{X})$ (what {\em GSS} tries to do) is ``equivalent'' to minimizing $\sum_{\boldsymbol{k} \in {\cal K}} X_{\boldsymbol{k}}$, when $r$ is large.

\subsubsection{{GSS} Policy in an Open System}\label{sssec:gss-open} First, we describe the ``pure'' {\em GSS} policy. \newline \newline {\bf {\em GSS}.} Let $p \in (\frac{1}{2}, 1)$. For a given system state $\boldsymbol{X}$, let $Z=Z(\boldsymbol{X})$ denote the total number of customers in the system. For a system with parameter $r$, define a weight function $\bar w^r(X)=\bar w^r(X;Z)$ as follows: $\bar w^r(X) = 1 \wedge \frac{X}{Z^p}$. (Note that $\bar w^r(X)$ generalizes the corresponding weight function $w^r(X) = 1 \wedge \frac{X}{r^p}$ for the closed system, because in the closed system with parameter $r$ the total number of customers is constant, $Z\equiv r$.) Let ${\cal M}$ denote the set of all pairs $(\boldsymbol{k}, i) \in {\cal K} \times {\cal I}$ such that $\boldsymbol{k} \in {\cal K}$ and $\boldsymbol{k} - \boldsymbol{e}_i \in \bar{\cal K}$.
Given $\boldsymbol{X} = \{X_{\boldsymbol{k}'}, \boldsymbol{k}' \in {\cal K}\}$ and $(\boldsymbol{k}, i) \in {\cal M}$, define $\bar \Delta^r_{(\boldsymbol{k}, i)}(\boldsymbol{X}) = \bar w^r\left(X_{\boldsymbol{k}}\right) - \bar w^r(X_{\boldsymbol{k} - \boldsymbol{e}_i})$. Under {\em GSS}, an arriving customer of type $i$ is placed into a server with configuration $\boldsymbol{k} - \boldsymbol{e}_i$, where $X_{\boldsymbol{k} - \boldsymbol{e}_i} > 0$ or $\boldsymbol{k} - \boldsymbol{e}_i = \boldsymbol{0}$, such that $\bar \Delta^r_{(\boldsymbol{k}, i)}(\boldsymbol{X})$ is minimal. Ties are broken arbitrarily. In this paper, for the open system, we will analyze not the ``pure'' {\em GSS} policy described above, but its slight modification, called {\em Modified GSS} ({\em GSS-M}). \newline\newline {\bf {\em GSS-M}.} Under this policy, a {\em token} of type $i$ is generated immediately upon each service completion of type $i$, and is placed for ``service'' immediately according to {\em GSS}. The system state $\boldsymbol{X} = \{X_{\boldsymbol{k}}, \boldsymbol{k}\in {\cal K}\}$ accounts for both tokens of type $i$ and actual type-$i$ customers, for all $i \in {\cal I}$. Each arriving type-$i$ customer first seeks to replace an existing token of type $i$ already in ``service'' (chosen arbitrarily), and if there is none, it is placed for service according to {\em GSS}. Each token that is not replaced by an actual arriving customer within an independent exponentially distributed timeout with mean $1/\mu_0$ leaves the system. (This modification is the same as the one introduced in \cite{St2012} for the {\em Greedy} algorithm, to obtain the {\em Greedy-M} policy.) We emphasize that {\em GSS} and {\em GSS-M} do {\em not} require the knowledge of parameter $r$. Since the system evolution under {\em GSS-M} involves both actual customers and tokens, we need to define the Markov chain describing this evolution more precisely.
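Before turning to the Markov chain, the placement rule shared by {\em GSS} and {\em GSS-M} can be sketched in code. This is an illustrative sketch only: the configuration set and state below are made up, and we adopt the convention that the $\boldsymbol{k} - \boldsymbol{e}_i = \boldsymbol{0}$ term contributes zero weight (empty servers are always available).

```python
def gss_place(X, i, K, r, p=0.75):
    """Illustrative sketch of the GSS placement rule (closed-system weights
    w^r(X) = min(1, X / r^p)). X maps configuration tuples to server counts,
    i is the type of the customer being placed, K lists the feasible nonzero
    configurations. Returns the target configuration k; the customer goes
    into a server with configuration k - e_i."""
    w = lambda v: min(1.0, v / r ** p)

    def minus_e(k, i):
        km = list(k)
        km[i] -= 1
        return tuple(km)

    best_k, best_delta = None, float("inf")
    for k in K:
        if k[i] < 1:
            continue
        km = minus_e(k, i)
        empty = all(c == 0 for c in km)
        if not empty and X.get(km, 0) <= 0:
            continue  # no server with configuration k - e_i is available
        # Convention of this sketch: an empty server contributes zero weight.
        delta = w(X.get(k, 0)) - (0.0 if empty else w(X.get(km, 0)))
        if delta < best_delta:  # ties broken by iteration order
            best_k, best_delta = k, delta
    return best_k

# Made-up state: many single-occupancy servers, few mixed servers.
K = [(1, 0), (0, 1), (1, 1)]
X = {(1, 0): 500, (0, 1): 2000, (1, 1): 50}
# A type-2 arrival (i = 1) joins an existing type-1 server rather than a new
# server, since that gives the largest first-order decrease of F^r.
assert gss_place(X, 1, K, r=10_000) == (1, 1)
```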
A {\em complete server configuration} is defined (in the same way as in \cite{St2012}) as a pair $(\boldsymbol{k},\hat \boldsymbol{k})$, where vector $\boldsymbol{k}=(k_1,\ldots,k_I)\in {\cal K}$ gives the numbers of all customers (both actual and tokens) in a server, while vector $\hat \boldsymbol{k} \in \bar{\cal K}$, $\hat \boldsymbol{k} \le \boldsymbol{k}$, gives the numbers of actual customers only. The Markov process state at time $t$ is the vector $\{X_{(\boldsymbol{k},\hat \boldsymbol{k})}^r(t)\}$, where the index $(\boldsymbol{k},\hat \boldsymbol{k})$ ranges over all possible complete server configurations, and superscript $r$, as usual, indicates the system with parameter $r$. Note that $\boldsymbol{X}^r(t) = \{X^r_{\boldsymbol{k}}(t), \boldsymbol{k} \in {\cal K}\}$ can be considered a ``projection'' of $\{X^r_{(\boldsymbol{k},\hat \boldsymbol{k})}(t)\}$, with $X^r_{\boldsymbol{k}}(t) = \sum_{\hat \boldsymbol{k}: \hat \boldsymbol{k} \le \boldsymbol{k}} X^r_{(\boldsymbol{k}, \hat \boldsymbol{k})}(t)$ for each $\boldsymbol{k} \in {\cal K}$. Let $\hat Y_i^r(t)$, $\tilde Y_i^r(t)$, and $Y_i^r(t)=\hat Y_i^r(t)+\tilde Y_i^r(t)$ denote the total number of actual type-$i$ customers, the total number of type-$i$ tokens, and the total number of all (both actual and tokens) type-$i$ customers in the $r$th system, respectively. The total number of actual customers of all types is then $Z^r(t)=\sum_i \hat Y^r_i(t)$. The behaviors of the processes $\{(Y_i^r(t), \hat Y_i^r(t)), ~t\ge 0\}$ are independent across all $i$, with $\hat Y_i^r(\infty)$ having a Poisson distribution with mean $\rho_i r$. The following fact has the same proof as Lemma 11 in \cite{St2012}. \begin{lem} \label{lem-complete-state-tight} The Markov chain $\{X_{(\boldsymbol{k},\hat \boldsymbol{k})}^r(t)\}, ~t\ge 0$, is irreducible and positive recurrent for each $r$.
\end{lem} \noindent {\bf Remark.} Informally, the reason (which is the same as in \cite{St2012}) for considering a modified version of {\em GSS} instead of pure {\em GSS} in an open system is as follows. Recall that in a closed system, a customer migration can also be thought of as its departure followed immediately by an arrival of the same type. As such, departures and arrivals in a closed system are perfectly ``synchronized'', which in particular means that in a closed system, for every departing customer, we always have the option of putting it right back into the server which it has just departed from. This means that a greedy control, pursuing minimization of a given objective function, cannot possibly increase (up to a first-order approximation) the objective function at any customer migration. In contrast, in an open system, departures and arrivals are not synchronized. Therefore, it is not immediately clear that a greedy algorithm will necessarily improve the objective. The tokens are introduced so that, informally speaking, the decisions on placements of new type-$i$ arrivals are made somewhat ``in advance'', at the times of prior type-$i$ departures. In this sense, the behavior of an open system ``emulates'' that of a corresponding closed system.

\subsection{Main Results}\label{ssec:results} \begin{thm}\label{thm:main-closed} Let $p \in (\frac{1}{2}, 1)$. For each $r$, consider the closed system operating under the {\em GSS} policy, in steady state.
Then there exists some constant $C > 0$, not depending on $r$, such that \[ \mathbb{P}\left(d(\boldsymbol{x}^r(\infty), {\cal X}^*) \le C r^{p-1}\right) \rightarrow 1 \] as $r \rightarrow \infty$. Consequently, we have fluid-scale asymptotic optimality: $$ d(\boldsymbol{x}^r(\infty), {\cal X}^*) \implies 0. $$ \end{thm} \begin{thm} \label{th-main-res-open} Let $p \in (\frac{1}{2}, 1)$. For each $r$, consider the open system operating under the {\em GSS-M} policy, in steady state. Then there exists some constant $C > 0$, not depending on $r$, such that as $r\to\infty$, \begin{equation}\label{eq-main-res-open} \mathbb{P}\left(d(\boldsymbol{x}^r(\infty),{\cal X}^*)\le C r^{p-1}\right) \to 1, \end{equation} and \begin{equation}\label{eq-main-res-open2} r^{-p} \sum_i \tilde Y_i^r(\infty) \implies 0. \end{equation} Consequently, we have fluid-scale asymptotic optimality: $$ d(\boldsymbol{x}^r(\infty),{\cal X}^*) \implies 0 ~~~~\mbox{and}~~~~ r^{-1} \sum_i \tilde Y_i^r(\infty) \implies 0. $$ \end{thm}

\section{Closed System: Asymptotic \\Optimality of {GSS}}\label{sec-closed} {We restrict our attention to closed systems and prove Theorem \ref{thm:main-closed} in this section.
As mentioned earlier, it is not sufficient to consider only the system states at the fluid scale, defined in Section \ref{ssec:asymp-regime}. We also need the concept of \emph{local fluid scaling}, introduced below. Proposition \ref{prop:opt-si-1} -- a key step in the proof of Theorem \ref{thm:main-closed} -- is established in Section \ref{ssec:key-prop}. In Section \ref{ssec:proof-closed}, we construct an appropriate probability space, quantify the drift of $F^r$ under {\em GSS} (cf. Propositions \ref{prop:loc-decr} and \ref{PROP:CONV-PROB}), and prove Theorem \ref{thm:main-closed}.}

\subsection{Local Fluid Scaling}\label{ssec:loc-fluid-scaling} Besides the fluid-scaled processes $\boldsymbol{x}^r(t)$ defined in Section \ref{ssec:asymp-regime}, it is also convenient to consider the system dynamics at the {\em local fluid scale}. More precisely, for each $r$ and $t$, define the corresponding {\em local fluid scale} process $\widetilde{\boldsymbol{x}}^r(t)$ by \[ \widetilde{\boldsymbol{x}}^r(t) = \frac{1}{r^p} \boldsymbol{X}^r (t). \] Recall that in the asymptotic regime $r \rightarrow \infty$, the fluid-scale process $\boldsymbol{x}^r(\cdot)$ always lives in the compact set ${\cal X}$ (defined in Section \ref{ssec:asymp-regime}). This is no longer true for the local fluid scale processes $\widetilde{\boldsymbol{x}}^r(\cdot)$: for a fixed $t$, $\{\widetilde{\boldsymbol{x}}^r(t)\}_r$ can be unbounded. However, at the local fluid scale, we will always consider the following weight function $\widetilde{w}$, which remains bounded. Define the local-fluid-scale weight function $\widetilde{w} : \mathbb{R}_+ \cup \{\infty\} \rightarrow \mathbb{R}_+$ by $\widetilde{w}(\widetilde{x}) = 1\wedge \widetilde{x}$. By the convention $1 < \infty$, we have $\widetilde{w}(\infty) = 1$, so $\widetilde{w}$ is well-defined. Note that for every $r$, $\widetilde{w}(\widetilde{x}^r) = w^r(X^r)$, where $\widetilde{x}^r = X^r/r^p$.
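The relation between the two scalings, and the identity just noted, are easy to check numerically. In the sketch below the values of $r$, $p$ and the counts are arbitrary, chosen only to illustrate that a component of order $r^p$ vanishes at the fluid scale but is of order one at the local fluid scale:

```python
# Arbitrary illustrative values: a component X_k of order r^p is small
# at the fluid scale (X_k / r) but order 1 at the local fluid scale.
r, p = 100_000_000, 0.75
X_k = 2.5 * r ** p                 # hypothetical unscaled count
x_k = X_k / r                      # fluid scale
xt_k = X_k / r ** p                # local fluid scale
assert x_k < 0.1 and abs(xt_k - 2.5) < 1e-12

# The weights are consistent across scalings: w~(X / r^p) = w^r(X).
w_r = lambda X: min(1.0, X / r ** p)
w_tilde = lambda x: min(1.0, x)
for X in [0.0, 12.0, 500.0, 1_000.0, 1_000_000.0]:
    assert abs(w_tilde(X / r ** p) - w_r(X)) < 1e-12
```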
For $(\boldsymbol{k}, i) \in {\cal M}$, we can also define the weight difference at the local fluid scale to be \[ \Delta_{(\boldsymbol{k}, i)}(\widetilde{\boldsymbol{x}}) = \widetilde{w}(\widetilde{x}_{\boldsymbol{k}}) - \widetilde{w}(\widetilde{x}_{\boldsymbol{k} - \boldsymbol{e}_i}). \] \noindent {\bf Remark}. In the sequel, we will always use lower case $x$ (or $\boldsymbol{x}$) to denote quantities at the fluid scale, $\widetilde{x}$ (or $\widetilde{\boldsymbol{x}}$) to denote quantities at the local fluid scale, and upper case $X$ (or $\boldsymbol{X}$) to denote quantities without scaling.

\subsection{Key Proposition}\label{ssec:key-prop} For a vector $\widetilde{\boldsymbol{x}} \in \left(\mathbb{R}_+ \cup \{\infty\} \right)^{|{\cal K}|}$ with components being possibly infinite, we can define the concept of a {\em Strictly Improving (SI) pair associated with $\widetilde{\boldsymbol{x}}$}. \begin{definition}[Strictly Improving (SI) pair]\label{df:si-pair} For \\$(\boldsymbol{k}, i)$, $(\boldsymbol{k}', i) \in {\cal M}$, $\{(\boldsymbol{k}, i), (\boldsymbol{k}', i)\}$ is an SI pair associated with $\widetilde{\boldsymbol{x}}$ if \begin{itemize} \item[(a)] $k_i \geq 1$, $\widetilde{x}_{\boldsymbol{k}} > 0$; \item[(b)] either $\boldsymbol{k}' = \boldsymbol{e}_i$, or $[k'_i > 0 \mbox{ and } \widetilde{x}_{\boldsymbol{k}' - \boldsymbol{e}_i} > 0]$; and \item[(c)] $\Delta_{(\boldsymbol{k}', i)}(\widetilde{\boldsymbol{x}}) < \Delta_{(\boldsymbol{k}, i)}(\widetilde{\boldsymbol{x}})$. \end{itemize} \end{definition} { The idea of SI pairs is as follows. Suppose that the current system state is $\boldsymbol{X}^r$, and a type-$i$ customer just completed its service requirement at a server with configuration $\boldsymbol{k}$. Then the first-order change in $F^r$ is $-\Delta^r_{(\boldsymbol{k}, i)}(\boldsymbol{X}^r)$. Suppose that this customer is then placed into a server with configuration $\boldsymbol{k}' - \boldsymbol{e}_i$ (so that the server's configuration becomes $\boldsymbol{k}'$), under {\em GSS}.
Then, the total (first-order) change in $F^r$ after this transition is $\Delta^r_{(\boldsymbol{k}', i)}(\boldsymbol{X}^r) - \Delta^r_{(\boldsymbol{k}, i)}(\boldsymbol{X}^r)$, or $\Delta_{(\boldsymbol{k}', i)}(\widetilde{\boldsymbol{x}}^r) - \Delta_{(\boldsymbol{k}, i)}(\widetilde{\boldsymbol{x}}^r)$. The existence of an SI pair ensures that we can always improve (up to first order) the current value of $F^r$.} Recall that for any feasible system state $\boldsymbol{X}^r$, $\boldsymbol{x}^r = \boldsymbol{X}^r/r$ denotes the fluid-scale system state, and $\widetilde{\boldsymbol{x}}^r = \boldsymbol{X}^r/r^p$ denotes the associated state at the local fluid scale. The following proposition establishes that whenever $\boldsymbol{x}^r$ is sufficiently far away from optimality, an SI pair exists. \begin{prop}\label{prop:opt-si-1} Let $D > 0$ be the same as in Lemma \ref{LEM:LP-RATE}. Then, there exists a positive constant $\varepsilon$ such that the following holds. For sufficiently large $r$, if $d(\boldsymbol{x}^r, {\cal X}^*) \ge 2D|{\cal K}|r^{p-1}$, then there exists an SI pair $\{(\boldsymbol{k}', i), (\boldsymbol{k}, i)\}$ (possibly depending on $r$) associated with $\widetilde{\boldsymbol{x}}^r = (\widetilde{x}_{\boldsymbol{k}}^r)_{\boldsymbol{k} \in {\cal K}}$, and furthermore, $\widetilde{x}_{\boldsymbol{k}}^r \geq \varepsilon$, $\widetilde{x}_{\boldsymbol{k}' - \boldsymbol{e}_i}^r \ge \varepsilon$, and $\Delta_{(\boldsymbol{k}', i)}(\widetilde{\boldsymbol{x}}^r) - \Delta_{(\boldsymbol{k}, i)}(\widetilde{\boldsymbol{x}}^r) \leq -\varepsilon$. \end{prop} Proposition \ref{prop:opt-si-1} follows from Lemmas \ref{lem:existence-si-1} and \ref{lem:existence-si-2} below.
\begin{lem}\label{lem:existence-si-1} Consider any sequence $\{\boldsymbol{x}^r\}$ and the associated states $\widetilde{\boldsymbol{x}}^r$.
Let $\boldsymbol{x} \in {\cal X}$ be a limit point of the sequence $\{\boldsymbol{x}^r\}$, so that some subsequence $\{r_n\}$ of $\{r\}$ satisfies $\boldsymbol{x}^{r_n} \rightarrow \boldsymbol{x}$ and $\widetilde{\boldsymbol{x}}^{r_n} \rightarrow \widetilde{\boldsymbol{x}}$ as $n \rightarrow \infty$, with some components of $\widetilde{\boldsymbol{x}}$ being possibly infinite. If there is no SI pair associated with $\widetilde{\boldsymbol{x}}$, then $\boldsymbol{x} \in {\cal X}^*$, i.e., $\boldsymbol{x}$ is an optimal solution of LP. \end{lem} \begin{proof}[Proof of Lemma \ref{lem:existence-si-1}] Suppose that there is no SI pair associated with $\widetilde{\boldsymbol{x}}$. We will show that $\boldsymbol{x} \in {\cal X}^*$, i.e., $\boldsymbol{x}$ is an optimal solution of the linear program LP. To this end, we will use Lemma \ref{lem:lp-dual-char}. In particular, we will construct $\eta_i \geq 0$, $i \in {\cal I}$, such that \begin{itemize} \item[(i)] $\sum_{i \in {\cal I}} k_i \eta_i \leq 1$ for all $\boldsymbol{k} \in {\cal K}$, and \item[(ii)] if $\sum_{i \in {\cal I}} k_i \eta_i < 1$, then $\widetilde{x}_{\boldsymbol{k}} < 1$. \end{itemize} Note that condition (ii) here is stronger than condition (ii) in Lemma \ref{lem:lp-dual-char} (indeed, if $x_{\boldsymbol{k}} > 0$, then $\widetilde{x}_{\boldsymbol{k}} = \infty$). Let $\eta_i = \widetilde{w}(\widetilde{x}_{\boldsymbol{e}_i})$ for all $i \in {\cal I}$. Then clearly $\eta_i \in [0, 1]$ for all $i \in {\cal I}$. We first show that condition (i) holds. To this end, we prove the following stronger statement: if $\boldsymbol{k} \in {\cal K}$ is such that $k_i \ge 1$ implies $\eta_i > 0$, then $\sum_{i \in {\cal I}} k_i \eta_i = \widetilde{w}(\widetilde{x}_{\boldsymbol{k}})$. Suppose not. Let $\boldsymbol{k} \in {\cal K}$ be a minimal counterexample, so that \begin{equation}\label{eq:si-exist-1} \sum_{i \in {\cal I}} k_i \eta_i \neq \widetilde{w}(\widetilde{x}_{\boldsymbol{k}}), \end{equation} and for each $i \in {\cal I}$, $k_i \ge 1$ implies $\eta_i > 0$.
Note that $\sum_{i \in {\cal I}} k_i \ge 2$, since $\eta_i = \widetilde{w}(\widetilde{x}_{\boldsymbol{e}_i})$ for each $i \in {\cal I}$, by definition. Thus, there exists $i \in {\cal I}$ such that $\eta_i > 0$, $\boldsymbol{k}' = \boldsymbol{k} - \boldsymbol{e}_i \in {\cal K}$, and, by the minimality of $\boldsymbol{k}$, \begin{equation}\label{eq:si-exist-2} \sum_{i \in {\cal I}} k'_i \eta_i = \widetilde{w}(\widetilde{x}_{\boldsymbol{k}'}). \end{equation} Subtracting Eq. \eqref{eq:si-exist-2} from Eq. \eqref{eq:si-exist-1}, we get that \[ \Delta_{(\boldsymbol{k}, i)} = \widetilde{w}(\widetilde{x}_{\boldsymbol{k}}) - \widetilde{w}(\widetilde{x}_{\boldsymbol{k}'}) \neq \eta_i. \] Thus either $\Delta_{(\boldsymbol{k}, i)} > \eta_i$, or $\Delta_{(\boldsymbol{k}, i)} < \eta_i$. If $\Delta_{(\boldsymbol{k}, i)} > \eta_i$, we verify that $\{(\boldsymbol{k}, i), (\boldsymbol{e}_i, i)\}$ is an SI pair associated with $\widetilde{\boldsymbol{x}}$. First, conditions (b) and (c) in Definition \ref{df:si-pair} are automatically satisfied. Second, $\Delta_{(\boldsymbol{k}, i)} > \eta_i > 0$. In particular, $\widetilde{x}_{\boldsymbol{k}} > 0$. We also have $k_i \geq 1$, so condition (a) in Definition \ref{df:si-pair} is also satisfied. If $\Delta_{(\boldsymbol{k}, i)} < \eta_i$, we verify that $\{(\boldsymbol{e}_i, i), (\boldsymbol{k}, i)\}$ is an SI pair associated with $\widetilde{\boldsymbol{x}}$. First, condition (c) in Definition \ref{df:si-pair} is automatically satisfied. Second, since $\eta_i > 0$, $\widetilde{x}_{\boldsymbol{e}_i} > 0$. Thus condition (a) in Definition \ref{df:si-pair} is satisfied. Finally, $k_i \ge 1$ by assumption, so to verify condition (b), we only need to verify that $\widetilde{x}_{\boldsymbol{k} - \boldsymbol{e}_i} > 0$. Since $\sum_{i \in {\cal I}} k_i \geq 2$, $\sum_{i \in {\cal I}} k'_i \geq 1$. This implies that there exists $i' \in {\cal I}$ such that $k'_{i'} \ge 1$. Thus $k_{i'} \ge k'_{i'} \ge 1$, so $\eta_{i'} > 0$. By Eq.
\eqref{eq:si-exist-2}, $\widetilde{w}(\widetilde{x}_{\boldsymbol{k}'}) \ge \eta_{i'} > 0$, so $\widetilde{x}_{\boldsymbol{k}'} > 0$. Thus, condition (b) in Definition \ref{df:si-pair} is verified. In either case, we have an SI pair associated with $\widetilde{\boldsymbol{x}}$, contradicting the assumption that there is no SI pair associated with $\widetilde{\boldsymbol{x}}$. Thus, for all $\boldsymbol{k} \in {\cal K}$ such that $k_i \ge 1$ implies $\eta_i > 0$, \[ \sum_{i \in {\cal I}} k_i \eta_i = \widetilde{w}(\widetilde{x}_{\boldsymbol{k}}). \] For all $\boldsymbol{k} \in {\cal K}$, we can find $\boldsymbol{k}' \le \boldsymbol{k}$ such that $\boldsymbol{k}' \in {\cal K}$, $k'_i \ge 1$ implies $\eta_i > 0$, and $\sum_{i \in {\cal I}} k_i \eta_i = \sum_{i \in {\cal I}} k'_i \eta_i$. Thus, \[ \sum_{i \in {\cal I}} k_i \eta_i = \sum_{i \in {\cal I}} k'_i \eta_i = \widetilde{w}(\widetilde{x}_{\boldsymbol{k}'}) \leq 1. \] This establishes condition (i). We now establish condition (ii). Suppose that condition (ii) does not hold. Let $\boldsymbol{k} \in {\cal K}$ be minimal such that \[ \widetilde{x}_{\boldsymbol{k}} \geq 1, ~~~\mbox{ and }~~~ \sum_{i \in {\cal I}} k_i \eta_i < 1. \] First, note that $\boldsymbol{k} \neq \boldsymbol{e}_i$ for any $i \in {\cal I}$, because if $\eta_i < 1$, then \[ 1 > \eta_i = \widetilde{w}(\widetilde{x}_{\boldsymbol{e}_i}) = 1 \wedge \widetilde{x}_{\boldsymbol{e}_i}, \] so that $\widetilde{x}_{\boldsymbol{e}_i} < 1$. Thus $\sum_{i \in {\cal I}} k_i \geq 2$. Second, if $\eta_i > 0$ for all $i \in {\cal I}$ with $k_i \geq 1$, then from the proof of condition (i), we have that \[ 1 > \sum_{i \in {\cal I}} k_i \eta_i = \widetilde{w}(\widetilde{x}_{\boldsymbol{k}}) = 1 \wedge \widetilde{x}_{\boldsymbol{k}}, \] so we have $\widetilde{x}_{\boldsymbol{k}} < 1$, reaching a contradiction. Thus, there exists $i \in {\cal I}$ such that $\eta_i = 0$ and $k_i \geq 1$. Let $\boldsymbol{k}' = \boldsymbol{k} - \boldsymbol{e}_i$.
Then $\boldsymbol{k}' \in {\cal K}$, since \[ \sum_{i \in {\cal I}} k'_i = \sum_{i \in {\cal I}} k_i - 1 \geq 1. \] Since $\eta_i = 0$, \[ \sum_{i \in {\cal I}} k'_i \eta_i = \sum_{i \in {\cal I}} k_i \eta_i < 1. \] By minimality of $\boldsymbol{k}$, we must have $\widetilde{x}_{\boldsymbol{k}'} < 1$. Thus, $\widetilde{w}(\widetilde{x}_{\boldsymbol{k}'}) = 1 \wedge \widetilde{x}_{\boldsymbol{k}'} < 1$, and $\widetilde{w}(\widetilde{x}_{\boldsymbol{k}}) = 1 \wedge \widetilde{x}_{\boldsymbol{k}} = 1$. This implies that \[ \Delta_{(\boldsymbol{k}, i)} > 0 = \eta_i, \] and that $\{(\boldsymbol{k}, i), (\boldsymbol{e}_i, i)\}$ is an SI pair associated with $\widetilde{\boldsymbol{x}}$. This is a contradiction, so condition (ii) is established. \end{proof} \begin{lem}\label{lem:existence-si-2} Consider any sequence $\{\boldsymbol{x}^r\}$ and associated states $\widetilde{\boldsymbol{x}}^r$. Let $\boldsymbol{x}^{r_n}$, $\boldsymbol{x}$, $\widetilde{\boldsymbol{x}}^{r_n}$ and $\widetilde{\boldsymbol{x}}$ be the same as in Lemma \ref{lem:existence-si-1}. If for all sufficiently large $n$, $d(\boldsymbol{x}^{r_n}, {\cal X}^*) \ge 2D|{\cal K}|r_n^{p-1}$, then there is an SI pair associated with $\widetilde{\boldsymbol{x}}$. \end{lem} \begin{proof}[Proof of Lemma \ref{lem:existence-si-2}] We prove the lemma by contradiction. Suppose that the lemma is not true. Then for all sufficiently large $n$, $d(\boldsymbol{x}^{r_n}, {\cal X}^*) \ge 2D |{\cal K}| r_n^{p-1}$, and there is no SI pair associated with $\widetilde{\boldsymbol{x}}$. By Lemma \ref{lem:existence-si-1}, $\boldsymbol{x}$ is an optimal solution of LP, and from the proof of Lemma \ref{lem:existence-si-1}, $\boldsymbol{\eta} = (\eta_i)_{i \in {\cal I}}$ is an optimal dual solution of LP, where $\eta_i = \widetilde{x}_{\boldsymbol{e}_i}$ for all $i \in {\cal I}$. For a given $r$, consider the following linear program, which we call $\mbox{LP}^r$.
\begin{eqnarray} \mbox{Minimize} & & \sum_{\boldsymbol{k}\in {\cal K}} \widetilde{x}_{\boldsymbol{k}} \\ \mbox{subject to} & & \sum_{\boldsymbol{k} \in {\cal K}} k_i \widetilde{x}_{\boldsymbol{k}} = \rho_i r^{1-p}, \quad \mbox{for all } i \in {\cal I}, \\ & & \quad \quad \ \ \widetilde{x}_{\boldsymbol{k}} \geq 0, \quad \quad \quad \mbox{ for all } \boldsymbol{k} \in {\cal K}. \end{eqnarray} $\mbox{LP}^r$ is just a scaled version of LP, defined in Section \ref{ssec:asymp-regime}. For each $r$, the feasible set of $\mbox{LP}^r$ is $r^{1-p}{\cal X}$, its set of optimal solutions is $r^{1-p}{\cal X}^*$, and its optimal value is $r^{1-p} u^*$. Moreover, $r^{1-p}\boldsymbol{x}$ is an optimal solution of $\mbox{LP}^r$, and $\boldsymbol{\eta}$ is an optimal dual solution. Furthermore, by Lemma \ref{LEM:LP-RATE}, for sufficiently large $n$, \begin{eqnarray*} \sum_{\boldsymbol{k} \in {\cal K}} \widetilde{x}^{r_n}_{\boldsymbol{k}} - r_n^{1-p}u^* & = & r_n^{1-p} \left(\sum_{\boldsymbol{k} \in {\cal K}} x^{r_n}_{\boldsymbol{k}} - u^*\right) \\ & \ge & r_n^{1-p} d(\boldsymbol{x}^{r_n}, {\cal X}^*)/D \\ & \ge & r_n^{1-p}\cdot (2D|{\cal K}|r_n^{p-1})/D = 2|{\cal K}|. \end{eqnarray*} For each $n$, consider the Lagrangian $L(\widetilde{\boldsymbol{x}}^{r_n}, \boldsymbol{\eta})$ of $\mbox{LP}^{r_n}$, evaluated at $\widetilde{\boldsymbol{x}}^{r_n}$ and $\boldsymbol{\eta}$: \[ L(\widetilde{\boldsymbol{x}}^{r_n}, \boldsymbol{\eta}) = \sum_{\boldsymbol{k} \in {\cal K}} \widetilde{x}^{r_n}_{\boldsymbol{k}} + \sum_{i \in {\cal I}} \eta_i\left(\rho_i r_n^{1-p} - \sum_{\boldsymbol{k} \in {\cal K}} k_i \widetilde{x}^{r_n}_{\boldsymbol{k}}\right). \] We calculate the Lagrangian in two ways. First, by feasibility of $\widetilde{\boldsymbol{x}}^{r_n}$, $L(\widetilde{\boldsymbol{x}}^{r_n}, \boldsymbol{\eta}) = \sum_{\boldsymbol{k} \in {\cal K}} \widetilde{x}^{r_n}_{\boldsymbol{k}}$.
Second, we rewrite $L(\widetilde{\boldsymbol{x}}^{r_n}, \boldsymbol{\eta})$ as \[ L(\widetilde{\boldsymbol{x}}^{r_n}, \boldsymbol{\eta}) = r_n^{1-p} \sum_{i \in {\cal I}} \rho_i \eta_i + \sum_{\boldsymbol{k} \in {\cal K}} \left(1 - \sum_{i \in {\cal I}} k_i \eta_i\right)\widetilde{x}^{r_n}_{\boldsymbol{k}}. \] The first term on the RHS equals $r_n^{1-p} u^*$, by the dual optimality of $\boldsymbol{\eta}$. For the second term on the RHS, note that in the proof of Lemma \ref{lem:existence-si-1}, we have established that for all $\boldsymbol{k} \in {\cal K}$, $\sum_{i \in {\cal I}} k_i \eta_i \le 1$, and if $\sum_{i\in {\cal I}} k_i \eta_i < 1$, then $\widetilde{x}_{\boldsymbol{k}} < 1$. Since $\widetilde{\boldsymbol{x}}^{r_n} \rightarrow \widetilde{\boldsymbol{x}}$, for all sufficiently large $n$, if $\sum_{i\in {\cal I}} k_i \eta_i < 1$, then $\widetilde{x}^{r_n}_{\boldsymbol{k}} \le 1$. Thus for all sufficiently large $n$, \[ \sum_{\boldsymbol{k} \in {\cal K}} \left(1 - \sum_{i \in {\cal I}} k_i \eta_i\right)\widetilde{x}^{r_n}_{\boldsymbol{k}} \leq |{\cal K}|, \] and \[ \sum_{\boldsymbol{k} \in {\cal K}} \widetilde{x}^{r_n}_{\boldsymbol{k}} = L(\widetilde{\boldsymbol{x}}^{r_n}, \boldsymbol{\eta}) \le r_n^{1-p} u^* + |{\cal K}|, \] contradicting the fact that \[ \sum_{\boldsymbol{k} \in {\cal K}} \widetilde{x}^{r_n}_{\boldsymbol{k}} - r_n^{1-p} u^* \ge 2|{\cal K}| \] for sufficiently large $n$. This establishes Lemma \ref{lem:existence-si-2}. \end{proof} \noindent {\bf Proof of Proposition \ref{prop:opt-si-1}.} We are now ready to prove Proposition \ref{prop:opt-si-1}. Suppose that the proposition does not hold. 
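The algebraic identity behind the two evaluations of the Lagrangian above can be sanity-checked numerically. The following sketch uses toy data only (the configurations, state, duals, and right-hand sides are all hypothetical, not derived from the model):

```python
# Sanity check of the two equal forms of the Lagrangian L(x, eta).
# All data are hypothetical toy values: K plays the role of the set of
# configurations k, x of a nonnegative state, eta of the dual variables,
# and b of the right-hand sides rho_i * r^(1-p).
K = [(1, 0), (0, 1), (1, 1), (2, 1)]
x = dict(zip(K, [0.3, 0.7, 0.2, 0.5]))
eta = [0.25, 0.5]
b = [0.9, 1.4]
I = range(len(eta))

# Form 1: objective plus dual-weighted constraint residuals.
L1 = sum(x.values()) + sum(
    eta[i] * (b[i] - sum(k[i] * x[k] for k in K)) for i in I)

# Form 2: dual objective plus "reduced costs" times the primal state.
L2 = sum(b[i] * eta[i] for i in I) + sum(
    (1 - sum(k[i] * eta[i] for i in I)) * x[k] for k in K)

assert abs(L1 - L2) < 1e-12  # the two forms agree identically
```

The identity is pure algebra, so it holds for any choice of data; the check merely guards against sign errors in the rearrangement.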
Then for all $\varepsilon > 0$, there exist infinitely many $r$ and $\boldsymbol{x}^r$ such that $d(\boldsymbol{x}^r, {\cal X}^*) \ge 2D|{\cal K}|r^{p-1}$, and for all SI pairs (if any) $\{(\boldsymbol{k}', i), (\boldsymbol{k}, i)\}$ associated with $\widetilde{\boldsymbol{x}}^r$, either $\widetilde{x}^r_{\boldsymbol{k}} < \varepsilon$, or $\widetilde{x}^r_{\boldsymbol{k}' - \boldsymbol{e}_i} < \varepsilon$, or $\Delta_{(\boldsymbol{k}', i)}(\widetilde{\boldsymbol{x}}^r) - \Delta_{(\boldsymbol{k}, i)}(\widetilde{\boldsymbol{x}}^r) > -\varepsilon$. Thus, we can find a subsequence $\{r_n\}$ of $\{r\}$ and states $\boldsymbol{x}^{r_n}$ such that \begin{itemize} \item[1.] $\boldsymbol{x}^{r_n} \rightarrow \boldsymbol{x} \in {\cal X}$ as $n \rightarrow \infty$, \item[2.] $\widetilde{\boldsymbol{x}}^{r_n} \rightarrow \widetilde{\boldsymbol{x}}$ as $n \rightarrow \infty$, with some components of $\widetilde{\boldsymbol{x}}$ being possibly infinite, \item[3.] $d(\boldsymbol{x}^{r_n}, {\cal X}^*) \ge 2D|{\cal K}|r_n^{p-1}$ for all $n$, and \item[4.] for all SI pairs $\{(\boldsymbol{k}', i), (\boldsymbol{k}, i)\}$ associated with $\widetilde{\boldsymbol{x}}^{r_n}$ (if any), either $\widetilde{x}^{r_n}_{\boldsymbol{k}} < 1/n$, or $\widetilde{x}^{r_n}_{\boldsymbol{k}' - \boldsymbol{e}_i} < 1/n$, or $\Delta_{(\boldsymbol{k}', i)}(\widetilde{\boldsymbol{x}}^{r_n}) - \Delta_{(\boldsymbol{k}, i)}(\widetilde{\boldsymbol{x}}^{r_n}) > -1/n$. \end{itemize} From Property 4, we can deduce that there is no SI pair associated with $\widetilde{\boldsymbol{x}}$. But by Property 3, this contradicts Lemma \ref{lem:existence-si-2}. This establishes Proposition \ref{prop:opt-si-1}. \(\Box\)

\subsection{Proof of Theorem \ref{thm:main-closed}}\label{ssec:proof-closed} Without loss of generality, we assume the following construction of the probability space. For each $(\boldsymbol{k},i) \in {\cal M}$, consider an independent unit-rate Poisson process $\{\Pi_{(\boldsymbol{k}, i)}(t), ~t\ge 0\}$. Assume that, for each $r$, the Markov process $\boldsymbol{X}^r(\cdot)$ is driven by this common set of Poisson processes $\Pi_{(\boldsymbol{k}, i)}(\cdot)$, as follows. For each $(\boldsymbol{k}, i)\in {\cal M}$, let us denote by $D^r_{(\boldsymbol{k}, i)}(t)$ the total number of type-$i$ service completions from servers of configuration $\boldsymbol{k}$, in the time interval $[0,t]$. Then \begin{equation}\label{eq-driving} D^r_{(\boldsymbol{k}, i)}(t) = \Pi_{(\boldsymbol{k}, i)} \left(\int_0^t X_{\boldsymbol{k}}^r(\xi) k_i \mu_i d\xi\right). \end{equation} \begin{lem}\label{lem:unif-conv} Let $T > 0$ be fixed. With probability $1$, the following property holds. Consider any sequence $\{t_0^r\}_r$ with $t_0^r \in [0, Tr^{2-p}]$. Then for any $\xi \in [0, 1]$, and for any $(\boldsymbol{k}, i) \in {\cal M}$, \[ \frac{1}{r^{2p-1}}\left(\Pi_{(\boldsymbol{k}, i)}\left(t_0^r + \xi r^{2p-1} \right) - \Pi_{(\boldsymbol{k}, i)}\left(t_0^r\right)\right) \rightarrow \xi \] as $r \rightarrow \infty$. The convergence is uniform over $t_0^r, \xi$, and $(\boldsymbol{k}, i)$ in the following sense.
For any $\varepsilon > 0$, there exists $r(\varepsilon)$ such that for all $r \geq r(\varepsilon)$, \[ \max_{(\boldsymbol{k}, i),\, \xi,\, t_0^r}\left| \frac{1}{r^{2p-1}}\left(\Pi_{(\boldsymbol{k}, i)}\left(t_0^r + \xi r^{2p-1} \right) - \Pi_{(\boldsymbol{k}, i)}\left(t_0^r\right)\right) - \xi \right| < \varepsilon, \] where the maximum is taken over $(\boldsymbol{k}, i) \in {\cal M}$, $\xi \in [0, 1]$, and $t_0^r \in [0, Tr^{2-p}]$. \end{lem} The proof of Lemma \ref{lem:unif-conv} relies on simple large-deviation-type estimates for Poisson random variables. The idea is essentially the same as that of Lemma 4.3 in \cite{SS2002}: we partition the interval $[0, Tr^{2p-1}]$ into subintervals of length $r^{p-1/2}$, and for each of them write the probability that the average increase rate of $\Pi_{(\boldsymbol{k}, i)}$ lies outside $(1-\varepsilon,1+\varepsilon)$. These probabilities are $\exp\left(-\mbox{poly}(r)\right)$, and we only have $\mbox{poly}(r)$ such subintervals (here $\mbox{poly}(r)$ means a polynomial in $r$). This is true for any $\varepsilon>0$. We can then cover {\em any} subinterval of length $r^{2p-1}$ by these subintervals of length $r^{p-1/2}$. We omit a detailed proof here. The following corollary is a simple consequence of Lemma \ref{lem:unif-conv}. \begin{cor}\label{cor:del-x} Let $T$ be fixed. With probability $1$, the following holds. For sufficiently large $r$, \begin{equation}\label{eq-bounded-rate} \max_{\substack{ \xi \in [0, 1], \\ t^r_0 \in [0, Tr^{1-p}]}} d\left(\boldsymbol{X}^r(t^r_0 + \xi r^{p-1}), \boldsymbol{X}^r(t^r_0) \right) \leq 2 \bar \mu |{\cal K}| r^p, \end{equation} where $\bar{\mu} = \max_{i\in{\cal I}} \mu_i$, and $\mu_i$ is the service rate for type-$i$ customers. \end{cor} \begin{proof} Consider the probability-$1$ event in Lemma \ref{lem:unif-conv}, in which we can and do replace $T$ with $2 \bar \mu T$. (We do this because the total ``instantaneous'' rate of all transitions is upper bounded by $2 \bar \mu r$.)
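As an aside, the law-of-large-numbers behavior asserted by Lemma \ref{lem:unif-conv} is easy to illustrate by simulation. The following sketch is not part of the proof; the window length $L$ and the fraction $\xi$ are illustrative stand-ins for $r^{2p-1}$ and the scaled time:

```python
import random

# Rough illustration of Lemma lem:unif-conv: over a window of length
# xi*L, a unit-rate Poisson process gains roughly xi*L points, with
# relative fluctuations of order 1/sqrt(L).  Parameters are illustrative.
random.seed(0)

def poisson_points_in(window):
    """Count points of a unit-rate Poisson process in [0, window]."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(1.0)
        if t > window:
            return n
        n += 1

L = 100_000
xi = 0.6
count = poisson_points_in(xi * L)
# Normalized increment is close to xi; std dev is about sqrt(xi/L) ~ 0.0025.
assert abs(count / L - xi) < 0.02
```

Increasing $L$ tightens the fluctuation, mirroring the $r \rightarrow \infty$ limit in the lemma.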
The rate of departure of type-$i$ customers is $\rho_i \mu_i r \le \rho_i \bar{\mu} r$, and the total rate of customer departure is no greater than $\sum_{i \in {\cal I}} \rho_i \bar{\mu} r = \bar{\mu} r$. Thus, for each $\boldsymbol{k} \in {\cal K}$, the rate of change in $X_{\boldsymbol{k}}$ is at most $\bar{\mu} r$. Over an interval of length $r^{p-1}$, the total change in $X_{\boldsymbol{k}}$ is at most $O(r\cdot r^{p-1}) = O(r^p)$. More precisely, with probability $1$, for each $\boldsymbol{k} \in {\cal K}$, \[ \limsup_{r \rightarrow \infty} \frac{1}{r^p} \max_{\substack{\xi \in [0, 1], \\t^r_0 \in [0, Tr^{1-p}]}} \left|X^r_{\boldsymbol{k}}(t^r_0 + \xi r^{p-1}) - X^r_{\boldsymbol{k}}(t^r_0) \right| \leq \bar{\mu}. \] Thus, for sufficiently large $r$, and for each $\boldsymbol{k} \in {\cal K}$, \[ \max_{\substack{\xi \in [0, 1],\\t^r_0 \in [0, Tr^{1-p}]}} \left|X^r_{\boldsymbol{k}}(t^r_0 + \xi r^{p-1}) - X^r_{\boldsymbol{k}}(t^r_0) \right| \leq 2 \bar{\mu} r^p. \] Summing the above bound over $\boldsymbol{k} \in {\cal K}$ establishes the corollary. \end{proof} \begin{prop}\label{prop:loc-decr} There exist positive constants $C_1$ and $\delta$ such that the following holds. Let $T > 0$ be given. Then w.p.$1$, for all sufficiently large $r$, and for any interval $[t_0, t_0+r^{p-1}] \subset [0, Tr^{1-p}]$, if $d\left(\boldsymbol{x}^r(t_0), {\cal X}^*\right) \ge C_1 r^{p-1}$, then \[ F^r\big(\boldsymbol{X}^r(t_0 + r^{p-1})\big) - F^r\big(\boldsymbol{X}^r(t_0)\big) \leq -\delta r^{2p-1}. \] \end{prop} \begin{proof} The proof idea is as follows. Consider the increase in $F^r$ at each state transition. For concreteness, suppose that the current system state is $\boldsymbol{X}^r$, and that a type-$i$ customer has just completed its service requirement on a server with configuration $\boldsymbol{k}$ and is placed into a server with configuration $\boldsymbol{k}'$.
Then it is a simple calculation to see that the increase in $F^r$ is at most \[ \Delta^r_{(\boldsymbol{k}', i)}(\boldsymbol{X}^r) - \Delta^r_{(\boldsymbol{k}, i)}(\boldsymbol{X}^r) + 4r^{-p}. \] The term $\Delta^r_{(\boldsymbol{k}', i)}(\boldsymbol{X}^r) - \Delta^r_{(\boldsymbol{k}, i)}(\boldsymbol{X}^r)$ captures the first-order increase in $F^r$, and the term $4r^{-p}$ bounds the second-order increase in $F^r$. We will see that over an interval of length $r^{p-1}$, the first-order terms contribute a net decrease in $F^r$ of order $r^{2p-1}$, while the second-order terms contribute at most a constant. We now proceed to the formal proof. From now on, we work with the probability-$1$ event defined in Lemma \ref{lem:unif-conv}, under which \[ \frac{1}{r^{2p-1}}\left(\Pi_{(\boldsymbol{k}, i)}\left(t_0 + \xi r^{2p-1} \right) - \Pi_{(\boldsymbol{k}, i)}\left(t_0\right)\right) \rightarrow \xi \] as $r \rightarrow \infty$, uniformly over $t_0, \xi$, and $(\boldsymbol{k}, i)$. Let $C_1 = 2(\bar{\mu} + D)|{\cal K}|$, where $\bar{\mu} = \max_{i\in {\cal I}} \mu_i$ and $D$ is the same as in Lemma \ref{LEM:LP-RATE}. Let $\varepsilon > 0$ be the same as in Proposition \ref{prop:opt-si-1}, and let $\delta > 0$ be such that $\delta < \frac{1}{8}\mu_i\varepsilon^2$ for all $i \in {\cal I}$. We claim that for all sufficiently large $r$, and for any interval $[t_0, t_0 + r^{p-1}] \subset [0, Tr^{1-p}]$, if $d\left(\boldsymbol{x}^r(t_0), {\cal X}^*\right) \ge C_1r^{p-1}$, then \[ F^r\big(\boldsymbol{X}^r(t_0 + r^{p-1})\big) - F^r\big(\boldsymbol{X}^r(t_0)\big) \leq -\delta r^{2p-1}. \] Suppose the contrary.
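The interplay among the constants just fixed ($\delta < \frac{1}{8}\mu_i\varepsilon^2$, together with the subinterval-length factor $c$ introduced later in the proof) can be sanity-checked numerically. The sketch below uses hypothetical values for $\mu_i$, $\varepsilon$, and $c$:

```python
# Toy check (hypothetical values) of the constants bookkeeping: with
# delta < mu_i * eps^2 / 8, the guaranteed per-subinterval first-order
# decrease (eps/2) * (1/2) * c * mu_i * eps^2 dominates 2 * c * eps * delta.
mu_i, eps, c = 1.3, 0.4, 0.01
delta = 0.9 * mu_i * eps**2 / 8          # any delta < mu_i * eps^2 / 8 works
per_subinterval_decrease = (eps / 2) * (0.5 * c * mu_i * eps**2)
claimed_bound = 2 * c * eps * delta
assert per_subinterval_decrease >= claimed_bound

# Summing the claimed bound over the 1/(c*eps) subintervals gives a total
# first-order decrease of at least 2*delta (per r^(2p-1)).
n_sub = 1 / (c * eps)
assert n_sub * claimed_bound >= 2 * delta - 1e-12
```

The factor-of-two slack between $2\delta r^{2p-1}$ and the final $\delta r^{2p-1}$ bound is what absorbs the $O(1)$ second-order contribution.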
Then there exists a subsequence of $\{r\}$ (which, with an abuse of notation, we still index by $r$), along which there are intervals $[t_0^{r}, t_0^{r} + r^{p-1}] \subset [0, Tr^{1-p}]$ such that $d\left(\boldsymbol{x}^r(t_0^{r}), {\cal X}^*\right) \geq C_1 r^{p-1}$, and \begin{equation}\label{eq:contrary-drift-F} F^{r}\big(\boldsymbol{X}^{r}(t_0^{r} + r^{p-1})\big) - F^{r}\big(\boldsymbol{X}^{r}(t_0^{r})\big) > -\delta r^{2p-1}. \end{equation} First, for sufficiently large $r$, and for all $\xi \in [0, 1]$, there exists an SI pair $\{(\boldsymbol{k}', i), (\boldsymbol{k}, i)\}$ associated with $\widetilde{\boldsymbol{x}}^{r}(t_0^{r} + \xi r^{p-1})$ (possibly depending on $r$ and $\xi$), such that \begin{align} & \widetilde{x}^{r}_{\boldsymbol{k}}(t_0^{r} + \xi r^{p-1}) \ge \varepsilon, \quad \widetilde{x}^{r}_{\boldsymbol{k}' - \boldsymbol{e}_i}(t_0^{r} + \xi r^{p-1}) \ge \varepsilon, \mbox{ and}\label{eq:si-pair-1}\\ & \Delta_{(\boldsymbol{k}', i)}(\widetilde{\boldsymbol{x}}^{r}(t_0^{r} + \xi r^{p-1})) - \Delta_{(\boldsymbol{k}, i)}(\widetilde{\boldsymbol{x}}^{r}(t_0^{r} + \xi r^{p-1})) \le -\varepsilon. \label{eq:si-pair-2} \end{align} By Corollary \ref{cor:del-x}, for all $\xi \in [0, 1]$, $d\left(\boldsymbol{X}^{r}(t_0^{r} + \xi r^{p-1}), \boldsymbol{X}^r(t_0^r)\right) \leq 2\bar{\mu} |{\cal K}| r^p$. Using the triangle inequality and the choice $C_1 = 2(\bar{\mu} + D) |{\cal K}|$, we have that for sufficiently large $r$, and for all $\xi \in [0, 1]$, \[ d\left(\boldsymbol{x}^{r}(t_0^{r} + \xi r^{p-1}), {\cal X}^*\right) \ge 2D|{\cal K}| r^{p-1}. \] Inequalities \eqref{eq:si-pair-1} and \eqref{eq:si-pair-2} now follow from Proposition \ref{prop:opt-si-1}. Fix a sufficiently large $r$ so that \eqref{eq:si-pair-1} and \eqref{eq:si-pair-2} hold. We then consider the first-order change in $F^r$ over the interval $[t_0^{r}, t_0^{r} + r^{p-1}]$ (i.e., the difference of $\Delta$).
To do this, we partition $[t_0^{r}, t_0^{r} + r^{p-1}]$ into subintervals of length $c\varepsilon r^{p-1}$, with $c > 0$ chosen small enough so that on each subinterval, there exists a {\em fixed} SI pair $\{(\boldsymbol{k}', i), (\boldsymbol{k}, i)\}$ such that \eqref{eq:si-pair-1} and \eqref{eq:si-pair-2} hold for this SI pair, with $\varepsilon$ replaced by $\varepsilon/2$. We now argue that this can be done. Consider the first such subinterval, for example. By Lemma \ref{lem:unif-conv}, for sufficiently large $r$, the number of state transitions over this subinterval is at most $(c\varepsilon r^{p-1})\cdot O(r) = O(c\varepsilon r^p)$, which is less than $\frac{1}{8}\varepsilon r^p$ for a sufficiently small choice of $c$. This implies that for each $\boldsymbol{k} \in {\cal K}$, the change in $\widetilde{x}^r_{\boldsymbol{k}}$ over this subinterval is at most $\frac{1}{8}\varepsilon$. Thus, throughout this subinterval, \eqref{eq:si-pair-1} and \eqref{eq:si-pair-2} hold, with $\varepsilon$ replaced by $\varepsilon/2$, for a fixed SI pair associated with $\widetilde{\boldsymbol{x}}^r (t_0^r)$. The same argument holds for other subintervals. Now concentrate on the subinterval $[t_0^r, t_0^r + c\varepsilon r^{p-1}]$, and a corresponding SI pair $\{(\boldsymbol{k}', i), (\boldsymbol{k}, i)\}$ associated with $\widetilde{\boldsymbol{x}}^r(t_0^r)$ for which \eqref{eq:si-pair-1} and \eqref{eq:si-pair-2} hold on this subinterval with $\varepsilon$ replaced by $\varepsilon/2$. The number of type-$i$ departures from servers of configuration $\boldsymbol{k}$ over this subinterval is at least $\mu_i \cdot \frac{\varepsilon r^p}{2}\cdot (c\varepsilon r^{p-1}) = \frac{1}{2} c\mu_i \varepsilon^2 r^{2p-1}$. At each such departure, the first-order increase (due to the difference of $\Delta$) in $F^{r}$ is at most $-\varepsilon/2$, since {\em GSS} results in a smaller first-order increase than moving the departure to a server with configuration $\boldsymbol{k}' - \boldsymbol{e}_i$.
Summing these first-order changes over the type-$i$ departures gives a first-order increase in $F^r$ of at most \[ -\frac{\varepsilon}{2} \cdot \left(\frac{1}{2}c \mu_i \varepsilon^2 r^{2p-1}\right) \le -2c \varepsilon \delta r^{2p-1}. \] Exactly the same argument holds for each of the $1/(c\varepsilon)$ subintervals, so the total first-order increase in $F^r$ is at most $-2\delta r^{2p-1}$. Finally, consider the second-order increase in $F^{r}$. As discussed at the beginning of the proof, the second-order increase in $F^{r}$ at each state transition is at most $4r^{-p}$. For sufficiently large $r$, the total number of state transitions over the interval $[t_0^{r}, t_0^{r} + r^{p-1}]$ is at most $r^{p-1}\cdot O(r) = O(r^p)$, and hence the total second-order increase in $F^{r}$ is at most $(4r^{-p}) \cdot O(r^p) = O(1)$. Thus, for sufficiently large $r$, \[ F^{r}\big(\boldsymbol{X}^{r}(t_0^{r} + r^{p-1})\big) - F^{r}\big(\boldsymbol{X}^{r}(t_0^{r})\big) \le -2\delta r^{2p-1} + O(1) \le -\delta r^{2p-1}. \] This contradicts \eqref{eq:contrary-drift-F}, and we have established the proposition. \end{proof} \begin{prop}\label{PROP:CONV-PROB} There exist positive constants $C$ and $T$ such that as $r \rightarrow \infty$, \[ \mathbb{P}\left(d\left(\boldsymbol{x}^r(Tr^{1-p}), {\cal X}^*\right) \leq Cr^{p-1} \right) \rightarrow 1. \] \end{prop} \begin{proof}[Sketch] { The proof is very intuitive. We keep track of the evolution of $F^r$ on the interval $[0, Tr^{1-p}]$, subdivided into $r^{p-1}$-long subintervals.
W.p.1, for all sufficiently large $r$, the following is true for each subinterval $[t_0,t_0+r^{p-1}]$: $F^r$ decreases by at least $\delta r^{2p-1}$ if $d\left(\boldsymbol{x}^r(t_0), {\cal X}^*\right) \ge C_1 r^{p-1}$ (by Proposition \ref{prop:loc-decr}), and it can never increase by more than $C_3 r^p$ over any such subinterval. Therefore, if we choose $T$ large enough, then $d\left(\boldsymbol{x}^r(t), {\cal X}^*\right) < C_1 r^{p-1}$ at some time $t\in[0,Tr^{1-p}]$ (because otherwise $F^r$ would eventually become negative), and $d\left(\boldsymbol{x}^r(t), {\cal X}^*\right)=O(r^{p-1})$ thereafter. We refer the reader to Appendix \ref{apdx:conv-prob} for details.} \end{proof} \noindent {\bf Proof of Theorem \ref{thm:main-closed}.} Theorem \ref{thm:main-closed} is now a simple consequence of Proposition \ref{PROP:CONV-PROB}. For each $r$, consider $\boldsymbol{x}^r(\cdot)$ in the stationary regime. In particular, for any $T > 0$, $\boldsymbol{x}^r(Tr^{1-p})$ has the same distribution as $\boldsymbol{x}^r(\infty)$. Therefore, by Proposition \ref{PROP:CONV-PROB}, \[ \mathbb{P}\left(d\left(\boldsymbol{x}^r(\infty), {\cal X}^*\right) \le C r^{p-1} \right) \rightarrow 1, \] as $r \rightarrow \infty$. This completes the proof of Theorem \ref{thm:main-closed}.
\(\Box\) \section{Open System: Asymptotic \\Optimality of (Modified) {GSS}} \label{sec-open} {We prove Theorem \ref{th-main-res-open} in this section. The proof ``extends'' that of Theorem \ref{thm:main-closed}. The main additional step is Theorem \ref{thm:local-fluid-tightness}, which shows that in steady state, for each $i \in {\cal I}$, the number $\tilde Y_i^r(t)$ of type-$i$ tokens remains $o(r^p)$ with high probability over $O(r^{1-p})$-long intervals.} As a starting point, we need the following facts. \begin{thm} \label{THM:SQRT-R-TIGHTNESS} Consider the sequence (in $r$) of open systems in steady state. Consider any fixed $i$. There exists a positive constant $c$ such that, uniformly in $r$, $$ \mathbb{E} \exp\{\|r^{-1/2} (\hat Y^r_i(\infty)-\rho_i r,\tilde Y^r_i(\infty))\| \} \le c. $$ \end{thm} \begin{proof} See Appendix \ref{apdx:sqrt-r-tightness}. \end{proof} For our purposes, the following corollary will suffice. \begin{cor} \label{cor-tightness} Consider the sequence (in $r$) of open systems in steady state. Consider any fixed $i$. Then, for any $q>1/2$, $$ \|r^{-q} (\hat Y^r_i(\infty)-\rho_i r,\tilde Y^r_i(\infty))\| \Longrightarrow 0. $$ \end{cor} Next we show that the property of Corollary~\ref{cor-tightness} holds not just at a given time, but uniformly over an $O(r^{1-q})$-long interval. \begin{thm} \label{thm:local-fluid-tightness} Consider the sequence (in $r$) of open systems in the stationary regime. Consider any fixed $i$. Let $q>1/2$ and $T>0$ be fixed. \iffalse
\fi Then, as $r\to\infty$, \beql{eq-local-fluid-tightness-weak} \sup_{t\in[0,Tr^{1-q}]} \|r^{-q} (\hat Y^r_i(t)-\rho_i r,\tilde Y^r_i(t))\| \Longrightarrow 0, \end{equation} and, consequently, \beql{eq-Z-approx-r-weak} \sup_{t\in[0,Tr^{1-q}]} r^{-q} \|Z^r(t) -r\|\Longrightarrow 0. \end{equation} \end{thm} Clearly, the statement of Theorem~\ref{thm:local-fluid-tightness} is equivalent to the following one: {\em Any subsequence of $\{r\}$ contains a further subsequence along which, w.p.1, \beql{eq-local-fluid-tightness} \sup_{t\in[0,Tr^{1-q}]} \|r^{-q} (\hat Y^r_i(t)-\rho_i r,\tilde Y^r_i(t))\|\to 0, \end{equation} and then \beql{eq-Z-approx-r} \sup_{t\in[0,Tr^{1-q}]} r^{-q} \|Z^r(t) -r\|\to 0. \end{equation} } In turn, to prove the latter statement it suffices to show that {\em there exists a construction of the underlying probability space for which the statement holds.} We will need some estimates, which can be obtained from the strong approximation of Poisson processes available in, for example, \cite[Chapters 1 and 2]{Csorgo_Horvath}: \begin{prop}\label{thm:strong approximation-111-clean} A unit rate Poisson process $\Pi(\cdot)$ and a standard Brownian motion $W(\cdot)$ can be constructed on a common probability space in such a way that the following holds. There exist fixed positive constants $C_1$, $C_2$, $C_3$ such that, for all $T>1$ and all $u \geq 0$, \[ \mathbb{P}\left(\sup_{0 \leq t \leq T} |\Pi(t) - t - W(t)| \geq C_1 \log T + u\right) \leq C_2 e^{-C_3 u}.
\] \end{prop} If in the above statement we replace $T$ with $rT$, and $u$ with $r^{1/4}$, we obtain \begin{align} & \mathbb{P}\left(\sup_{0 \leq t \leq rT} |(\Pi(t) - t) - W(t)| < C_1 \log (rT) + r^{1/4}\right) \nonumber \\ & > 1- C_2 e^{-C_3 r^{1/4}}. \label{eq-add1} \end{align} Note also that for a fixed $\delta\in (0,q-1/2)$ and all large $r$, \beql{eq-add2} \mathbb{P}\left(\sup_{0 \leq t \leq rT} |W(t)| \le r^{1/2+\delta}\right) \ge 1- e^{-cr^{2\delta}} \end{equation} for some constant $c>0$. If the events in \eqn{eq-add1} and \eqn{eq-add2} hold for all large $r$, then \beql{eq-add3} \sup_{0 \leq t \leq rT} r^{-q}|\Pi(t)-t| \to 0. \end{equation} To prove Theorem~\ref{thm:local-fluid-tightness}, consider the following construction of the probability space. (We want to strongly emphasize that this construction will be used only for the purpose of proving Theorem~\ref{thm:local-fluid-tightness}. For the proof of Theorem~\ref{th-main-res-open}, we can and will use a different probability space construction.) For each $r$, we divide the time interval $[0,Tr^{1-q}]$ into $r^{1-q}$ subintervals of length $T$, namely $[(m-1)T,mT]$ with $m=1,2,\ldots,r^{1-q}$. In each of the subintervals, and for each $r$, we consider independent unit rate Poisson processes $\Pi_i^{r,m}$, $\hat \Pi_i^{r,m}$, $\tilde \Pi_i^{r,m}$, driving type $i$ exogenous arrivals, actual customer departures, and token departures, respectively. More precisely, the numbers of type $i$ exogenous arrivals, actual customer departures, and token departures by time $t$ from the beginning of the $m$-th interval are given by $$ \Pi_i^{r,m}(\lambda_i r t), ~~~ \hat \Pi_i^{r,m}\left(\int_0^t {\mu_i}\hat Y_i^r(\xi)d\xi\right), ~~~\tilde \Pi_i^{r,m}\left(\int_0^t {\mu_0}\tilde Y_i^r(\xi)d\xi\right), $$ respectively.
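As a purely numerical sanity check of the scale separation behind \eqn{eq-add1}-\eqn{eq-add3} (an illustration, not part of the proof): the centered unit-rate Poisson process fluctuates on the order $r^{1/2}$ (up to logarithmic factors) over $[0,rT]$, which is negligible at the scale $r^q$ for $q>1/2$. The parameter values below are illustrative.

```python
# Illustration only: simulate a unit-rate Poisson process on [0, rT] and
# compare sup_t |Pi(t) - t| (order r^{1/2}, up to log factors) with r^q, q > 1/2.
import random

random.seed(1)
r, T, q = 10_000.0, 1.0, 0.75

t, n, sup_dev = 0.0, 0, 0.0
while True:
    t_next = t + random.expovariate(1.0)         # next jump time of Pi
    if t_next > r * T:
        sup_dev = max(sup_dev, abs(n - r * T))   # deviation at the horizon rT
        break
    # |Pi(t) - t| is extremal just before and just after each jump
    sup_dev = max(sup_dev, abs(n - t_next), abs(n + 1 - t_next))
    t, n = t_next, n + 1

print(sup_dev / r**q)   # small: O(r^{1/2}) fluctuations against the r^q scale
```

With these values the ratio is well below $1$, consistent with \eqn{eq-add3}.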
Using \eqn{eq-add1}-\eqn{eq-add3} we obtain the following property for $\Pi_i^{r,m}$ (and analogous ones for $\hat \Pi_i^{r,m}$ and $\tilde \Pi_i^{r,m}$): \beql{eq-key-for-tightness-proof} \max_{1 \le m \le r^{1-q}} ~~\max_{0\le t \le rT} |\Pi_i^{r,m}(t) - t|/r^{q} \to 0, ~~\mbox{as $r\to\infty$, ~~w.p.1}. \end{equation} We denote $$ g^r(t) = (\hat y^r_i(t),\tilde y^r_i(t)) = r^{-q} (\hat Y^r_i(t)-\rho_i r,\tilde Y^r_i(t)). $$ Then, we can prove the following. \begin{lem} \label{lem-local-fluid-conv} Consider fixed realizations (for each $r$) of the driving processes, such that the properties \eqn{eq-key-for-tightness-proof} hold with $q$ replaced by a smaller parameter $q'\in (1/2,q)$. Consider the corresponding sequence of realizations of $(g^r(t), ~t\ge 0)$, with bounded initial states $\|g^r(0)\|\le \epsilon$, $\epsilon>0$.
Then, there exists a subsequence of $r$ along which \beql{eq-conv-to-local-fluid} g^r(t) \to g(t), ~~~\mbox{u.o.c.}, \end{equation} where $(g(t), ~t\ge 0)$ is Lipschitz continuous, with $\|g(0)\|\le\epsilon$, and it satisfies the conditions \beql{eq-yhat} (d/dt) \hat y_i(t) = - \mu_i \hat y_i(t), \end{equation} \beql{eq-ytilde} (d/dt) \tilde y_i(t) = \left\{ \begin{array}{ll} \mu_i \hat y_i(t) - \mu_0 \tilde y_i(t), & \mbox{if}~\tilde y_i(t)>0\\ \max\{0, \mu_i \hat y_i(t) - \mu_0 \tilde y_i(t)\}, & \mbox{if}~\tilde y_i(t)=0 \end{array} \right. \end{equation} at the points $t\ge 0$ where the derivatives exist (which is almost everywhere w.r.t. the Lebesgue measure). Moreover, the convergence \beql{eq-y-converge} \|g(t)\| \to 0, ~~t\to \infty, \end{equation} holds and is uniform w.r.t. initial states with $\|g(0)\|\le \epsilon$, and \beql{eq-y-bounded} \sup_{\|g(0)\|\le \epsilon} \max_{t\ge 0} \|g(t)\| \to 0, ~~\epsilon \to 0. \end{equation} As a consequence of \eqn{eq-y-bounded}, \beql{eq-y-equilibrium} \|g(0)\| = 0 ~~\mbox{implies}~~ \|g(t)\| = 0, ~\forall t. \end{equation} \end{lem} Lemma~\ref{lem-local-fluid-conv} is analogous to Lemma 14 in \cite{St2012}, except that the space scaling by $r^{-q}$ is applied, as opposed to the fluid scaling by $r^{-1}$, and the number of actual customers $\hat Y^r_i(t)$ is centered before scaling. The proof is somewhat more involved -- the main issue is that (unlike for the fluid limit) the Lipschitz property of the limit is no longer automatic, because the rates of arrivals and departures in the system are $O(r)$, while the space is only scaled down by $r^q$. (That is why we need to use the properties \eqn{eq-key-for-tightness-proof}, as opposed to simply a strong law of large numbers.) However, this issue can be resolved as in, for example, the proof of Theorem 23 in \cite{SY2012}. We omit a detailed proof.
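The dynamics \eqn{eq-yhat}-\eqn{eq-ytilde} are easy to visualize numerically. The following Euler-scheme sketch (the rates $\mu_i$, $\mu_0$ and the initial state are illustrative) exhibits the convergence \eqn{eq-y-converge}: $\hat y_i$ relaxes exponentially to $0$, while $\tilde y_i$ tracks it subject to the reflection at $0$.

```python
# Euler sketch of the local-fluid ODEs (eq-yhat)-(eq-ytilde); parameter values
# are illustrative. Note that y_hat is a *centered* quantity and may be negative,
# while y_tilde (the scaled token count) is kept nonnegative by the reflection.
mu_i, mu_0 = 1.0, 0.5
dt, T_end = 1e-3, 40.0

y_hat, y_tilde = -0.8, 0.3
t = 0.0
while t < T_end:
    dy_hat = -mu_i * y_hat
    dy_tilde = mu_i * y_hat - mu_0 * y_tilde
    if y_tilde <= 0.0:
        y_tilde = 0.0
        dy_tilde = max(0.0, dy_tilde)   # reflection at the boundary, as in (eq-ytilde)
    y_hat += dt * dy_hat
    y_tilde += dt * dy_tilde
    t += dt

print(abs(y_hat) + abs(y_tilde))   # ||g(t)|| is near 0 for large t
```

Here $\tilde y_i$ is absorbed at $0$ once $\mu_i \hat y_i(t) - \mu_0 \tilde y_i(t)$ becomes nonpositive there, matching the second case of \eqn{eq-ytilde}.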
\newline\newline {\bf Proof of Theorem \ref{thm:local-fluid-tightness}.} By Corollary~\ref{cor-tightness}, we can choose a subsequence of $r$ (increasing sufficiently fast) so that $$ \|g^r(0)\|\to 0, ~~\mbox{w.p.1}. $$ Then, we use the construction of the probability space specified above, which guarantees that w.p.1 the properties \eqn{eq-key-for-tightness-proof} hold with $q$ replaced by a smaller parameter $q'\in (1/2,q)$ -- let us consider any element of the probability space for which the properties \eqn{eq-key-for-tightness-proof} do hold. We claim that, for this element, \eqn{eq-local-fluid-tightness} holds. Suppose not. Then, there exist $\epsilon>0$ and a further subsequence of $r$, along which $\tau^r=\min\{t ~|~ \|g^r(t)\|>\epsilon\} \le Tr^{1-q}$. By Lemma~\ref{lem-local-fluid-conv}, we can and do choose a time duration $T_1>0$ such that any limit trajectory $g(t)$ with $\|g(0)\|\le \epsilon$ satisfies $\|g(T_1)\|\le \epsilon/2$. For each $r$, consider the trajectory of $g^r$ on the time interval $[\tau^r-T_1,\tau^r]$. (Suppose for now that $\tau^r \ge T_1$ for all sufficiently large $r$.) Then we can choose a further subsequence of $r$ along which $g^r(\tau^r-T_1 +t)\to g(t)$ uniformly for $t\in [0,T_1]$, for a limit function $g(t)$ as in Lemma~\ref{lem-local-fluid-conv}. But this is impossible, because then $\|g^r(\tau^r)\| \to \|g(T_1)\| \le \epsilon/2$, while $\|g^r(\tau^r)\| \ge \epsilon$ by the definition of $\tau^r$. The case when $\tau^r <T_1$ for infinitely many $r$ is even simpler: we choose a further subsequence along which this is true, and consider the trajectories of $g^r$ on the fixed time interval $[0,T_1]$. In this case any limit trajectory $g(t)$ described in Lemma~\ref{lem-local-fluid-conv} stays at $0$ on the entire interval $[0,T_1]$, because $\|g(0)\|= \lim_r \|g^r(0)\|=0$. This means that $\|g^r(\tau^r)\| \to 0$, again a contradiction.
\(\Box\) From this point on, we assume the following structure of the probability space. (It is different from the one used for the proof of Theorem~\ref{thm:local-fluid-tightness}, which, as we discussed, was for that proof only.) There are common (for all $r$) unit rate Poisson processes driving the system, defined as follows. For each $(\boldsymbol{k},i)\in {\cal M}$ and $\hat{\boldsymbol{k}} \le \boldsymbol{k}$, consider an independent unit-rate Poisson process $\hat \Pi_{(\boldsymbol{k},\hat{\boldsymbol{k}}), i}(t), ~t\ge 0$, so that the number of actual type $i$ customer departures from configuration $(\boldsymbol{k},\hat{\boldsymbol{k}})$ in the interval $[0,t]$ is equal to $\hat \Pi_{(\boldsymbol{k},\hat{\boldsymbol{k}}), i}\left(\int_0^t {\mu_i} \hat k_i X_{(\boldsymbol{k},\hat{\boldsymbol{k}})}^r(\xi)d\xi\right)$.
Similarly, consider an independent unit-rate Poisson process $\left\{\tilde \Pi_{(\boldsymbol{k},\hat{\boldsymbol{k}}), i}(t), ~t\ge 0\right\}$, so that the number of type $i$ token departures from configuration $(\boldsymbol{k},\hat{\boldsymbol{k}})$, due to token expirations, is equal to $\tilde \Pi_{(\boldsymbol{k},\hat{\boldsymbol{k}}), i}\left(\int_0^t {\mu_0}(k_i-\hat k_i) X_{(\boldsymbol{k},\hat{\boldsymbol{k}})}^r(\xi)d\xi\right)$. Finally, for each $i\in {\cal I}$, let $\{\Pi_i(t), ~t\ge 0\}$ be an independent unit-rate Poisson process, such that the number of exogenous type $i$ arrivals in $[0,t]$ is equal to $\Pi_i(\lambda_i r t)$. For a fixed parameter $T>0$, whose value will be chosen later, each of the above Poisson processes satisfies Lemma~\ref{lem:unif-conv}, in which we can and do replace $T$ with $2T [(\bar \mu \vee \mu_0)+ \sum_i \lambda_i]$. (We do this because we will ``work'' with system sample paths such that $\sum_i Y^r_i = \sum_i (\hat Y_i^r +\tilde Y_i^r) < 2r$, and for these sample paths the total ``instantaneous'' rate of all transitions is upper bounded by $2 r [(\bar \mu \vee \mu_0)+ \sum_i \lambda_i]$.) Denote by $\tilde D^r_i(t_1,t_2)$ the number of type-$i$ token departures (due to their expirations), and by $\hat A^{**,r}_i(t_1,t_2)$ the total number of exogenous type-$i$ arrivals (of actual customers) that do {\em not} replace type-$i$ tokens, all in the interval $(t_1,t_2]$. Also, denote $Y^r_i(t_1,t_2)=Y^r_i(t_2)-Y^r_i(t_1)$. \begin{thm} \label{thm:non-typical-transitions} Consider the sequence (in $r$) of open systems in the stationary regime. Let $T>0$ be fixed.
Then, any subsequence of $r$ contains a further subsequence such that, w.p.1, the following holds: \beql{eq-non-typical-departures} \tilde D^r_i(t_0,t_0+r^{p-1})/[r^p r^{p-1}]\to 0, \end{equation} \beql{eq-non-typical-arrivals} \hat A^{**,r}_i(t_0,t_0+r^{p-1})/[r^p r^{p-1}]\to 0, \end{equation} uniformly on all intervals $[t_0,t_0+r^{p-1}]\subset [0,Tr^{1-p}]$. \end{thm} \begin{proof} Indeed, by Theorem~\ref{thm:local-fluid-tightness}, we can and do choose a subsequence of $r$ along which \eqn{eq-local-fluid-tightness}-\eqn{eq-Z-approx-r} hold w.p.1. Then, \eqn{eq-non-typical-departures} follows from \eqn{eq-local-fluid-tightness}, which states that the number of tokens $\tilde Y^r_i(t)$ is uniformly $o(r^p)$, and from the construction of the token departure processes, with the corresponding driving processes $\tilde \Pi_{(\boldsymbol{k},\hat{\boldsymbol{k}}), i}$ satisfying Lemma~\ref{lem:unif-conv}. From \eqn{eq-local-fluid-tightness} we also have the uniform convergence $$ Y^r_i(t_0,t_0+r^{p-1})/[r^p r^{p-1}]\to 0. $$ But this, along with \eqn{eq-non-typical-departures}, implies the uniform convergence \eqn{eq-non-typical-arrivals} as well, because we have the conservation law $$ Y^r_i(t_0,t_0+r^{p-1}) = \hat A^{**,r}_i(t_0,t_0+r^{p-1}) - \tilde D^r_i(t_0,t_0+r^{p-1}). $$ This proves the theorem. \end{proof} \noindent {\bf Proof of Theorem \ref{th-main-res-open}.} Consider the sequence of the system processes in the stationary regime. Consider a fixed $T>0$, chosen to be sufficiently large, as in Proposition~\ref{PROP:CONV-PROB}. Consider any subsequence of $r$. Then, we can and do choose a further subsequence of $r$ along which, w.p.1, \eqn{eq-local-fluid-tightness}-\eqn{eq-Z-approx-r} hold with some $q\in (1/2,p)$ (by Theorem~\ref{thm:local-fluid-tightness}), and the properties stated in Theorem~\ref{thm:non-typical-transitions} hold.
As in the proof of Proposition~\ref{PROP:CONV-PROB}, we will keep track of the evolution of the value of $F^r(\boldsymbol{X}^r(t))$. We emphasize that this is exactly the same function $F^r$ as defined in Section~\ref{sec-GSS-definition} and used in the analysis of the closed system; namely, it has the fixed parameter $r$ (in the system with index $r$), and {\em not} the random ``parameter'' $Z^r$. We claim that the following property holds. {\bf Claim:} {\em There exist positive constants $0 < C_1 < C_2$ and $\delta>0$ such that the following holds. For all sufficiently large $r$, uniformly on all intervals $[t_0,t_0+r^{p-1}]\subset [0,Tr^{1-p}]$, we have (a) $F^r(\boldsymbol{X}^r(t_0)) - r u^* \ge C_1 r^p$ implies \[F^r(\boldsymbol{X}^r(t_0+r^{p-1})) - F^r(\boldsymbol{X}^r(t_0)) \le - \delta r^{2p-1},\] and (b) $F^r(\boldsymbol{X}^r(t_0)) - r u^* \le C_1 r^p$ implies \[\sup_{\xi\in [0,1]} F^r(\boldsymbol{X}^r(t_0+\xi r^{p-1})) - r u^* \le C_2 r^p.\] } Clearly, (b) is analogous to Corollary~\ref{cor:del-x} for the closed system and is proved in exactly the same way, with $\bar \mu$ in \eqn{eq-bounded-rate} replaced by $\bar \mu \vee \mu_0$. Statement (a) is analogous to Proposition~\ref{prop:loc-decr} for the closed system, and we prove it below. It is also clear that the claim, along with \eqn{eq-local-fluid-tightness}-\eqn{eq-Z-approx-r}, implies the theorem statement via an argument that almost verbatim repeats the one in the proof of Proposition~\ref{PROP:CONV-PROB}. It remains to prove (a). The proof is the same as that of Proposition~\ref{prop:loc-decr}, except that we have to make additional estimates accounting for: (i) token departures due to their expiration and actual customer arrivals that do not find tokens; and (ii) the fact that {\em GSS-M} uses the weight function $\bar w^r=\bar w^r(X;Z^r)$, as opposed to the function $w^r=w^r(X)$ (which has the constant $r$ as a parameter, instead of the random variable $Z^r$).
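The effect in (ii) rests on a bound of the form $|\bar w^r(X) - w^r(X)|\le O(r^{q-1})$, which is easy to check numerically. The sketch below assumes the weight form $w^r(X)= 1 \wedge X/r^p$ and $\bar w^r(X;Z)= 1 \wedge X/Z^p$ (an illustrative assumption about the form of the weights; the exact definitions are in Section \ref{sec-GSS-definition}), with the worst perturbation $|Z-r| = r^q$.

```python
# Numeric check (illustration) that |w_bar^r(X) - w^r(X)| = O(r^{q-1}), under the
# ASSUMED weight form w^r(X) = min(1, X / r^p), w_bar^r(X; Z) = min(1, X / Z^p),
# with the worst perturbation |Z - r| = r^q allowed by the uniform bound on Z^r.
r, p, q = 1_000_000.0, 0.75, 0.6
Z = r + r**q

def w(X, scale):
    return min(1.0, X / scale**p)

# scan X over a grid reaching past both saturation points r^p and Z^p;
# the difference grows linearly in X until the smaller weight saturates at 1
grid = [j * max(r, Z)**p / 5000.0 for j in range(5001)]
sup_diff = max(abs(w(X, Z) - w(X, r)) for X in grid)

print(sup_diff, p * r**(q - 1))  # same order of magnitude: sup_diff ~ p * r^{q-1}
```

The supremum is attained near $X = r^p$, where it equals $1-(r/Z)^p \approx p\,r^{q-1}$, in line with the first-order error estimate used below.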
Indeed, {\em if the only transitions were those associated with actual customer departures and actual customer arrivals replacing tokens, and the assignment decisions were based on the weight $w^r$ as opposed to $\bar w^r$}, then exactly the same drift estimates as those in the proof of Proposition~\ref{prop:loc-decr} would apply. Note that in (i) we consider exactly those transitions for which we have properties \eqn{eq-non-typical-departures}-\eqn{eq-non-typical-arrivals}. Therefore, in any interval $[t_0,t_0+r^{p-1}]$ the ``worst case'' possible increase in $F^r(\boldsymbol{X}^r)$ due to such transitions is $o(r^{2p-1})$. (We omit obvious epsilon/delta formalities.) Now consider (ii). Since we have the uniform bound $|Z^r(t)-r|\le O(r^q)$, it is easy to check that $|\bar w^r(X) - w^r(X)|\le O(r^{q-1})$ for any $X\ge 0$. This means that the error in the calculation of the first-order contribution to the change of $F^r(\boldsymbol{X}^r)$ in any $[t_0,t_0+r^{p-1}]$, introduced by {\em GSS-M} using the weight $\bar w^r$ instead of $w^r$, is uniformly bounded by $O(r r^{p-1} r^{q-1})=O(r^{p+q-1})= o(r^{2p-1})$. (Again, we omit epsilon/delta formalities.) We see that the potential positive contribution of both (i) and (ii) to the change of the objective function in any interval $[t_0,t_0+r^{p-1}]$ is $o(r^{2p-1})$, uniformly in the choice of the interval. The estimate in (a) follows. This completes the proof of the claim, and of the theorem. \(\Box\) \iffalse
\fi \section{Discussion}\label{sec-discussion} We presented the policy {\em Greedy with sublinear Safety Stocks (GSS)}, along with a variant, which asymptotically minimize the steady-state total number of occupied servers at the fluid scale, as the input flow rates grow to infinity.
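To make the greedy idea concrete, here is a deliberately simplified caricature in code. It is {\em not} the exact {\em GSS} rule (it omits the safety stocks and the weights $w^r$); the toy feasible set, the weight $x^{1+\alpha}$ with $\alpha=1/2$, and all identifiers are illustrative. Each arrival is placed so as to minimize the increment of $\sum_{\boldsymbol{k}} X_{\boldsymbol{k}}^{1+\alpha}$.

```python
# A deliberately simplified caricature of greedy packing; NOT the exact GSS rule
# (no safety stocks, no w^r weights). The feasible set K, the weight x**1.5
# (i.e., the objective sum_k X_k^{1+alpha} with alpha = 0.5), and all names are
# illustrative. Each arrival is placed to minimize the objective increment.
from itertools import product

I, kmax = 2, 2
# toy monotone feasible set: at most kmax customers per server in total
K = [k for k in product(range(kmax + 1), repeat=I) if 0 < sum(k) <= kmax]
EMPTY = (0,) * I

def weight(x):
    return x ** 1.5

def place(X, i):
    """Place one type-i customer, greedily minimizing the objective increment."""
    best, best_inc = None, None
    for k in K:
        if k[i] == 0:
            continue
        src = tuple(k[j] - (j == i) for j in range(I))   # a server moves src -> k
        if src != EMPTY and X.get(src, 0) == 0:
            continue                 # no occupied server in configuration src
        inc = weight(X.get(k, 0) + 1) - weight(X.get(k, 0))
        if src != EMPTY:
            inc += weight(X.get(src, 0) - 1) - weight(X.get(src, 0))
        if best_inc is None or inc < best_inc:
            best, best_inc = (src, k), inc
    src, k = best
    if src != EMPTY:
        X[src] -= 1
        if X[src] == 0:
            del X[src]
    X[k] = X.get(k, 0) + 1

X = {}                     # X[k] = number of occupied servers in configuration k
for i in [0, 1, 0, 1]:     # four arrivals, alternating types
    place(X, i)
print(X)                   # the four customers end up packed on two servers
```

In this toy run the rule fills partially occupied servers rather than opening new ones, which is the mechanism the fluid-scale optimality results make precise.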
A technical novelty of {\em GSS} is that it \emph{automatically} creates non-zero safety stocks, {\em sublinear in the system ``size''}, at server configurations which have zero stocks on the fluid scale. It is important to note that the algorithm does this without {\em a priori} knowledge of the system parameters. To prove the fluid-scale optimality of {\em GSS}, we also need to consider a local fluid scaling, under which the sublinear safety stocks are ``visible''. This in turn allows us to obtain a tight asymptotic characterization of the algorithm's deviation from exact optimal packing. We can extend {\em GSS} to policies that asymptotically minimize the more general objective $\sum_{\boldsymbol{k}} c_{\boldsymbol{k}} X_{\boldsymbol{k}}$, where $c_{\boldsymbol{k}} > 0$ can be interpreted as the ``cost'' (for example, some estimated energy cost) of keeping a server in configuration $\boldsymbol{k}$, for each $\boldsymbol{k} \in {\cal K}$. Instead of the weight function $w^r(X^r_{\boldsymbol{k}})$ for each $\boldsymbol{k} \in {\cal K}$, consider the weight function $c_{\boldsymbol{k}} w^r(X^r_{\boldsymbol{k}})$, and define $\Delta^r$ as the difference between the new weight functions. We can then define {\em GSS} and {\em GSS-M} using the new $\Delta^r$. They asymptotically minimize the fluid-scale quantity $\sum_{\boldsymbol{k}} c_{\boldsymbol{k}} x_{\boldsymbol{k}}$, and similar convergence rates can be obtained. If we assume that the cost $c_{\boldsymbol{k}}$ is monotonically non-decreasing in $\boldsymbol{k}$ (i.e., $c_{\boldsymbol{k}'} \le c_{\boldsymbol{k}}$ if $\boldsymbol{k}' \le \boldsymbol{k}$), then all our results and proofs still hold essentially verbatim. If the costs $c_{\boldsymbol{k}}$ are not monotone in $\boldsymbol{k}$, most of the statements and proofs easily extend, except those of Lemmas \ref{lem:existence-si-1} and \ref{lem:existence-si-2}, where some dual variables $\eta_i$ may need to be negative.
These $\eta_i$ can be defined in a fashion similar to those in the proof of Lemma 6 in \cite{St2012}. There are some possible directions for future research. For example, one may expect asymptotic optimality of ``pure'' {\em GSS} in an open system, which seems more difficult to establish. Proving or disproving its optimality may require a better understanding of, and some new insight into, the system dynamics. Another direction is the investigation of policies other than (and possibly simpler than) {\em GSS}. {\em GSS} is {\em asymptotically} optimal as the system scale increases. However, if the number $|{\cal K}|$ of feasible configurations is large, the system scale may need to be very large for near-optimal performance. It is then of interest to design policies (e.g., some form of best-fit) that have provably good performance properties over a wide range of system scales. \iffalse
Service times of different customers are independent; after a service completion, each customer leaves its server and the system. Each new arriving customer is placed for service immediately; it can be placed into a server already serving other customers (as long as packing constraints are not violated), or into an idle server. We consider a simple parsimonious real-time algorithm, called {\em Greedy}, which attempts to minimize the increment of the objective function $\sum_k X_k^{1+\alpha}$, $\alpha>0$, caused by each new assignment; here $X_k$ is the number of servers in configuration $k$. (When $\alpha$ is small, $\sum_k X_k^{1+\alpha}$ approximates the total number $\sum_k X_k$ of occupied servers.) Our main results show that certain versions of the Greedy algorithm are {\em asymptotically optimal}, in the sense of minimizing $\sum_k X_k^{1+\alpha}$ in stationary regime, as the input flow rates grow to infinity. We also show that in the special case when the set of allowed configurations is determined by {\em vector-packing} constraints, Greedy algorithm can work with {\em aggregate configurations} as opposed to exact configurations $k$, thus reducing computational complexity while preserving the asymptotic optimality. \end{abstract} \section{Introduction} \label{sec-intro} The primary motivation for this work is the following problem arising in cloud computing: how to assign various types of virtual machines to physical host machines (in a data center) in real time, so that the total number of host machines in use is minimized. It is very desirable that an assignment algorithm is simple, does need to know the system parameters, and makes decisions based on the current system state only. (An excellent overview of this and other resource allocation issues arising in cloud computing can be found in {\cal I}te{Gulati2012}.) A data center (DC) in the ``cloud'' consists of a number of host machines. 
Assume that all hosts are same: each of them possesses the amount $B_n>0$ of resource $n$, where $n\in \{1,2,\ldots,N\}$ is a resource index. (For example, resource $1$ is CPU, resource $2$ is memory, etc.) The DC receives requests for virtual machine (VM) placements; VMs can be of different types $i\in\{1,\ldots,I\}$; a type $i$ VM requires the amounts $b_{i,n}> 0$ of each resource $n$. Several VMs can share the same host, as long as the host's capacity constraints are not violated; namely, a host can simultaneously contain a set of VMs given by a vector $k=(k_1,\ldots,k_I)$, where $k_i$ is the number of type $i$ VMs, as long as for each resource $n$ \boldsymbol{e}ql{eq-intro1} \sum_i k_i b_{i,n} \le B_n. \end{equation} Thus, VMs can be assigned to hosts already containing other VMs, subject to the above ``packing'' constraints. After a certain random sojourn (service) time each VM vacates its host (leaves the system), which increases the ``room'' for new arriving VMs to be potentially assigned to the host. A natural problem is to find a real-time algorithm for assigning VM requests to the hosts, which minimizes (in appropriate sense) the total number of hosts in use. Clearly, such a scheme will maximize the DC capacity; or, if it leaves a large number of hosts unoccupied, those hosts can be (at least temporarily) turned off to save energy. 
More specifically, the model assumptions that we make are as follows:\\ (a) The exact nature of "packing" constraints will not be important -- we just assume that the feasible configuration vectors $k$ (describing feasible sets of VMs that can simultaneously occupy one host) form a finite set ${\cal K}$; and assume monotonicity -- if $k\in{\cal K}$ then so is any $k'\le k$.\\ (b) There is no limit on the number of hosts that can be used and each new VM is assigned to a host immediately -- so it is an {\em infinite server} model, with no blocking or waiting.\\ (c) Service times of different VMs are independent of each other, even for VMs served simultaneously on the same host.\\ (d) We further assume in this paper that the arrival processes of VMs of each type are Poisson and service time distributions are exponential. These assumptions are not essential and can be much relaxed, as discussed in Section~\ref{sec-more-general}. The basic problem we address in this paper is: \boldsymbol{e}ql{eq-problem-basic} \mbox{minimize} ~~ \sum_k X_k^{1+\alpha}, \end{equation} where $\alpha> 0$ is a fixed parameter, and $X_k$ is the (random) number of hosts having configuration $k$ in the stationary regime. (Clearly, when $\alpha$ is small, $\sum_k X_k^{1+\alpha}$ approximates the total number $\sum_k X_k$ of occupied hosts.) We consider the {\em Greedy} real-time (on-line) VM assignment algorithm, which, roughly speaking, tries to minimize the increment of the objective function $\sum_k X_k^{1+\alpha}$ caused by each new assignment. Our main results show that certain versions of the Greedy algorithm are {\em asymptotically optimal}, as the input flow rates become large or, equivalently, the average number of VMs in the system becomes large. 
We also show (in Section~\ref{sec-aggregation}) that in the special case when feasible configurations are determined by constraints \eqn{eq-intro1}, the Greedy algorithm can work with ``aggregate configurations'' as opposed to exact configurations $k$, thus reducing computational complexity while preserving the asymptotic optimality. \subsection{Previous work} Our model is related to the vast literature on the classical {\em stochastic bin packing} problems. (For a good recent review of one-dimensional bin packing see e.g. \cite{Csirik2006}.) In particular, in {\em online} stochastic bin packing problems, random-size items arrive in the system and need to be placed according to an online algorithm into finite size bins; the items never leave or move between bins; the typical objective is to minimize the number of occupied bins. A bin packing problem is {\em multi-dimensional} when bins and item sizes are vectors; the problems with the packing constraints \eqn{eq-intro1} are called {\em multi-dimensional vector packing} (see e.g. \cite{Bansal2009} for a recent review). Bin packing {\em service} systems arise when there is a random-in-time input flow of random-sized items (customers), which need to be served by a bin (server) and leave after a random service time; the server can simultaneously process multiple customers as long as they can simultaneously fit into it; the customers waiting for service are queued; a typical problem is to determine the maximum throughput under a given (``packing'') algorithm for assigning customers for service. (See e.g. \cite{Gamarnik2004} for a review of this line of work.) Our model is similar to the latter systems, except there are multiple bins (servers) -- in fact, an infinite number in our case. Models of this type are more recent (see e.g. \cite{Jiang2012,Maguluri2012}).
The paper \cite{Jiang2012} addresses a real-time VM allocation problem, which in particular includes packing constraints; the approach of \cite{Jiang2012} is close in spirit to Markov chain algorithms used in combinatorial optimization. The paper \cite{Maguluri2012} is concerned mostly with maximizing throughput of a queueing system (where VMs can actually wait for service) with a finite number of bins. The asymptotic regime in this paper is such that the input flow rates scale up to infinity. In this respect, our work is related to the (also vast) literature on queueing systems in the {\em many servers} regime. (See e.g. \cite{ST2010_04} for an overview. The name ``many servers'' reflects the fact that the number of servers scales up to infinity as well, linearly with the input rates; this condition is irrelevant in our case of an infinite number of servers.) In particular, we study {\em fluid limits} of our system, obtained by scaling the system state down by the (large) total number of customers. We note, however, that packing constraints are not present in the previous work on the many servers regime, to the best of our knowledge. \section{Discussion} \label{sec-discussion} We have shown that (versions of) the Greedy algorithm are asymptotically optimal in the sense of minimizing the objective function $\sum_k X_k^{1+\alpha}$ with $\alpha>0$. When $\alpha$ is small (but positive), the algorithms produce an approximation of a solution minimizing the linear objective $\sum_k X_k$, i.e. the total number of occupied servers. If $\sum_k X_k$ is the ``real'' underlying objective, the ``price'' we pay by applying the Greedy algorithm with small $\alpha>0$ is that the algorithm will keep non-zero amounts (``safety stocks'') of servers in many ``unnecessary'' (from the point of view of the linear objective) configurations $k$, including many -- potentially all -- non-maximal configurations in ${\cal K}$.
What we gain for this ``price'' is the simplicity and agility of the algorithm. ``True'' minimization of the linear objective $\sum_k X_k$ requires that a linear program be solved (via an explicit offline or implicit dynamic approach), so that the system is prevented from using ``unnecessary'' configurations $k$ not employed in optimal LP solutions. The Greedy algorithm with $\alpha>0$ is asymptotically optimal as the average number $r$ of customers in the system goes to infinity. The fact that it maintains safety stocks of many configurations means in particular that the algorithm's performance is close to optimal when the ratio $r/|{\cal K}|$ is sufficiently large, so that there are enough customers in the system to keep non-negligible safety stocks of servers in potentially all configurations. If the number $|{\cal K}|$ of configurations is large, then $r$ needs to be very large to achieve near-optimality. The use of aggregate configurations in the special case of vector-packing constraints alleviates this scalability issue when the number $|{\cal Q}|$ of aggregate configurations is substantially smaller than $|{\cal K}|$. Finally, we note that the closed system, considered in Theorems~\ref{th-fluid-stat-gt1-closed} and \ref{th-fluid-stat-gt1-closed-aaa}, is not necessarily artificial. For example, it models the scenario where VMs do not leave the system, but can be moved (``migrated'') from one host to another. In this case, a ``service completion'' is a time point when a VM migration can be attempted. \fi \begin{thebibliography}{1} \bibitem{Bansal2009} N.~Bansal, A.~Caprara, M.~Sviridenko. \newblock A New Approximation Method for Set Covering Problems, with Applications to Multidimensional Bin Packing. \newblock \emph{SIAM J. Comput.}, 2009, Vol.39, No.4, pp.1256-1278. \bibitem{Csirik2006} J.~Csirik, D. S.~Johnson, C.~Kenyon, J. B.~Orlin, P. W.~Shor, and R. R.~Weber. \newblock On the Sum-of-Squares Algorithm for Bin Packing.
\newblock \emph{J.ACM}, 2006, Vol.53, pp.1-65. \bibitem{Csorgo_Horvath} M.~Cs{\"{o}}rg{\H{o}} and L.~Horv{\'{a}}th. \newblock \emph{Weighted Approximations in Probability and Statistics}, Wiley, 1993. \bibitem{Gamarnik2004} D.~Gamarnik. \newblock Stochastic Bandwidth Packing Process: Stability Conditions via Lyapunov Function Technique. \newblock \emph{Queueing Systems}, 2004, Vol.48, pp.339-363. \bibitem{GS2012} D.~Gamarnik and A. L.~Stolyar. \newblock Multiclass Multiserver Queueing System in the Halfin--Whitt Heavy Traffic Regime: Asymptotics of the Stationary Distribution. \newblock \emph{Queueing Systems}, 2012, Vol.71, pp.25-51. \bibitem{Gulati2012} A.~Gulati, A.~Holler, M.~Ji, G.~Shanmuganathan, C.~Waldspurger, and X.~Zhu. \newblock VMware Distributed Resource Management: Design, Implementation and Lessons Learned. \newblock \emph{VMware Technical Journal}, 2012, Vol.1, No.1, pp. 45-64. http://labs.vmware.com/publications/vmware-technical-journal \bibitem{GR2012} V.~Gupta and A.~Radovanovic. \newblock Online Stochastic Bin Packing. \newblock Preprint, 2012. \bibitem{Jiang2012} J. W.~Jiang, T.~Lan, S.~Ha, M.~Chen, and M.~Chiang. \newblock Joint VM Placement and Routing for Data Center Traffic Engineering. \newblock \emph{INFOCOM 2012}. \bibitem{Maguluri2012} S. T.~Maguluri, R.~Srikant, and L.~Ying. \newblock Stochastic Models of Load Balancing and Scheduling in Cloud Computing Clusters. \newblock \emph{INFOCOM 2012}. \bibitem{Meyn2005} S.~Meyn. \newblock Dynamic Safety-Stocks for Asymptotic Optimality in Stochastic Networks. \newblock \emph{Queueing Systems}, 2005, Vol. 50, pp.255-297. \iffalse \bibitem{St95} A. L.~Stolyar. \newblock On the Stability of Multiclass Queueing Networks: A Relaxed Sufficient Condition via Limiting Fluid Processes. \newblock {\em Markov Processes and Related Fields}, Vol. 1(4), 1995, pp.491-512. \bibitem{St2001} A. L.~Stolyar. 
\newblock MaxWeight Scheduling in a Generalized Switch: State Space Collapse and Workload Minimization in Heavy Traffic. \newblock \emph{Annals of Applied Probability}, 2004, Vol.14, No.1, pp.1-53. \fi \bibitem{SS2002} S.~Shakkottai and A. L.~Stolyar. \newblock Scheduling for Multiple Flows Sharing a Time-Varying Channel: The Exponential Rule. \newblock \emph{American Mathematical Society Translations}, 2002, Series 2, Vol.207, pp.185-202. \bibitem{ST2010_04} A. L.~Stolyar and T.~Tezcan. \newblock Shadow Routing Based Control of Flexible Multi-Server Pools in Overload. \newblock \emph{Operations Research}, 2011, Vol.59, No.6, pp.1427-1444. \bibitem{SY2012} A. L.~Stolyar and E.~Yudovina. \newblock Tightness of Invariant Distributions of a Large-Scale Flexible Service System under a Priority Discipline. \newblock Bell Labs Technical Memo, 2012. Submitted. http://arxiv.org/abs/1201.2978 \bibitem{St2012} A. L.~Stolyar. \newblock An Infinite Server System with General Packing Constraints. \newblock Bell Labs Technical Memo, 2012. Submitted. http://arxiv.org/abs/1205.4271 \end{thebibliography} \appendix \section{Proof of Lemma \ref{LEM:LP-RATE}}\label{apdx:lp-rate} Both ${\cal X}$ and ${\cal X}^*$ are convex and compact polytopes with a finite number of extreme points. Let ${\cal S}$ and ${\cal S}^*$ be the sets of extreme points of ${\cal X}$ and of ${\cal X}^*$, respectively. Note that for all $\boldsymbol{x}^* \in {\cal S}^*$, $\sum_{\boldsymbol{k}} x^*_{\boldsymbol{k}} = u^*$, and, since ${\cal S} \backslash {\cal S}^*$ is finite, there exists $\delta > 0$ such that for all $\boldsymbol{x}' \in {\cal S} \backslash {\cal S}^*$, $\sum_{\boldsymbol{k}} x'_{\boldsymbol{k}} \ge u^*+\delta$. Let $\hull{{\cal S}\backslash {\cal S}^*}$ be the convex hull of the set ${\cal S}\backslash {\cal S}^*$. Then for all $\boldsymbol{x}' \in \hull{{\cal S}\backslash {\cal S}^*}$, $\sum_{\boldsymbol{k}} x'_{\boldsymbol{k}} \ge u^* + \delta$.
Consider the function $g : {\cal X}^* \times \hull{{\cal S}\backslash {\cal S}^*} \rightarrow \mathbb{R}$ defined by $g(\boldsymbol{x}^*, \boldsymbol{x}') = \|\boldsymbol{x}^* - \boldsymbol{x}'\|/\left(\sum_{\boldsymbol{k} \in {\cal K}} x'_{\boldsymbol{k}} - u^*\right)$. The function $g$ is well-defined, always positive and clearly continuous. Since both ${\cal X}^*$ and $\hull{{\cal S}\backslash {\cal S}^*}$ are compact, so is their product space. Thus there exists $D > 0$ such that $g$ is bounded above by $D$. For every $\boldsymbol{x} \in {\cal X}$, there exists $\lambda \in [0, 1]$ such that $\boldsymbol{x} = \lambda \boldsymbol{x}' + (1-\lambda) \boldsymbol{x}^*$, with $\boldsymbol{x}' \in \hull{{\cal S}\backslash {\cal S}^*}$ and $\boldsymbol{x}^* \in {\cal X}^*$. Then \begin{eqnarray*} d\left(\boldsymbol{x}, {\cal X}^*\right) &\le& \|\boldsymbol{x} - \boldsymbol{x}^*\| = \lambda \|\boldsymbol{x}' - \boldsymbol{x}^*\| \\ & = & \lambda g(\boldsymbol{x}^*, \boldsymbol{x}') \left(\sum_{\boldsymbol{k} \in {\cal K}} x'_{\boldsymbol{k}} - u^*\right) \\ & \le & \lambda D\left(\sum_{\boldsymbol{k} \in {\cal K}} x'_{\boldsymbol{k}} - u^*\right) = D\left(\sum_{\boldsymbol{k} \in {\cal K}} x_{\boldsymbol{k}} - u^*\right). \end{eqnarray*} \section{Proof of Proposition \ref{PROP:CONV-PROB}}\label{apdx:conv-prob} Let $\delta > 0$ be the same as in Proposition \ref{prop:loc-decr}, and define $T = 3/\delta$. The constant $C > 0$ will be chosen sufficiently large; its value is determined later in the proof. Clearly, to prove the proposition, it suffices to prove the stronger property \[ \mathbb{P}\left(d\left(\boldsymbol{x}^{r}(Tr^{1-p}), {\cal X}^*\right) \leq Cr^{p-1} \mbox{ for all large $r$} \right) = 1.
\] By Proposition \ref{prop:loc-decr}, there exists $C_1 > 0$ such that w.p.$1$, for sufficiently large $r$ and for any interval $[t_0, t_0 + {r}^{p-1}] \subset [0, T{r}^{1-p}]$, if $d\left(\boldsymbol{x}^{r}(t_0), {\cal X}^*\right) \ge C_1 r^{p-1}$, then \begin{equation}\label{eq:local-drift-F} F^r\big(\boldsymbol{X}^r(t_0 + r^{p-1})\big) - F^r\big(\boldsymbol{X}^r(t_0)\big) \leq -\delta r^{2p-1}. \end{equation} We pick some $r$ such that the above statement holds, and such that furthermore, for every $t_0 \in [0, Tr^{1-p}]$ and $\xi \in [0, 1]$, \begin{equation}\label{eq:local-change-X} d\left(\boldsymbol{X}^r(t_0 + \xi r^{p-1}), \boldsymbol{X}^r(t_0) \right) \leq O(r^p). \end{equation} This can be done by Corollary \ref{cor:del-x}. We now claim that $d(\boldsymbol{x}^r(Tr^{1-p}), {\cal X}^*) \le C r^{p-1}$. To establish the claim, we consider the set $\mathcal{L} = \{\ell \in \mathbb{Z}_+: \ell r^{p-1} \in [0, T r^{1-p}]\}$, and prove that \begin{itemize} \item[(a)] there exists $\ell_0 \in \mathcal{L}$ such that $d(\boldsymbol{x}^r(\ell_0 r^{p-1}), {\cal X}^*) \le C_1 r^{p-1}$, and \item[(b)] there exists $C_2 > 0$ such that for all $\ell \in \mathcal{L}$ with $\ell \ge \ell_0$, $F^r\big(\boldsymbol{X}^r(\ell r^{p-1})\big) \le r u^* + C_2 r^p$. \end{itemize} First suppose that (a) does not hold. Then for every $\ell \in \mathcal{L}$, $d(\boldsymbol{x}^r(\ell r^{p-1}), {\cal X}^*) \ge C_1 r^{p-1}$, so \[ F^r\big(\boldsymbol{X}^r((\ell+1)r^{p-1})\big) - F^r\big(\boldsymbol{X}^r(\ell r^{p-1})\big) \leq -\delta r^{2p-1}. \] Let $\bar{\ell} = \lceil Tr^{2(1-p)} \rceil$. Summing these inequalities over $\ell$, we obtain \begin{align*} & F^r\big(\boldsymbol{X}^r(\bar{\ell} r^{p-1})\big) - F^r\big(\boldsymbol{X}^r(0)\big) \leq - \bar{\ell} \delta r^{2p-1} \\ & \le - (Tr^{2(1-p)}-1)\delta r^{2p-1} = -T\delta r + \delta r^{2p-1}.
\end{align*} Thus, \begin{align*} & F^r\big(\boldsymbol{X}^r(\bar{\ell} r^{p-1})\big) \le F^r\big(\boldsymbol{X}^r(0)\big) - T\delta r + \delta r^{2p-1} \\ & \le r - \frac{3}{\delta} \delta r + \delta r^{2p-1} < 0. \end{align*} This contradicts the nonnegativity of $F^r$, so statement (a) is established. To establish statement (b), we use the following simple lemma, whose proof is omitted. \begin{lem}\label{lem:max-seq} Let $K, \alpha$ and $\beta$ be given positive constants. Consider a sequence of real numbers $\{a_n\}$ that satisfies: (i) $a_0 \le K$, (ii) $a_{n+1} - a_n \le \alpha$, and (iii) if $a_n \ge K$, then $a_{n+1} - a_n \le -\beta$. Then $\max_n a_n \le K + \alpha$. \end{lem} We will establish the following corresponding statements: (i) $F^r \left(\boldsymbol{X}^r(\ell_0 r^{p-1})\right) \le r u^* + C_1 r^p$. Recall that we have $d\left(\boldsymbol{x}^r(\ell_0 r^{p-1}), {\cal X}^*\right) \le C_1 r^{p-1}$, so by Lemma \ref{lem:F-sum-close}, \begin{align*} & F^r \left(\boldsymbol{X}^r(\ell_0 r^{p-1})\right) - ru^* \le \sum_{\boldsymbol{k} \in {\cal K}} X^r_{\boldsymbol{k}} (\ell_0 r^{p-1}) - r u^* \\ & \le r d\left(\boldsymbol{x}^r(\ell_0 r^{p-1}), {\cal X}^*\right) \le C_1 r^p. \end{align*} (ii) There exists $C_3 > 0$ such that $F^r \left(\boldsymbol{X}^r((\ell+1) r^{p-1})\right) - F^r \left(\boldsymbol{X}^r(\ell r^{p-1})\right) \le C_3 r^p$. This is clear, since by Lemma \ref{lem:F-sum-close}, $F^r\left(\boldsymbol{X}^r\right)$ differs from $\sum_{\boldsymbol{k}} X^r_{\boldsymbol{k}}$ by $O(r^p)$, and the change in $\boldsymbol{X}^r$ is at most $O(r^p)$ over an interval of length $r^{p-1}$. (iii) If $F^r \left(\boldsymbol{X}^r(\ell r^{p-1})\right) \ge r u^* + C_1 r^p$, then \[F^r\big(\boldsymbol{X}^r((\ell+1)r^{p-1})\big) - F^r\big(\boldsymbol{X}^r(\ell r^{p-1})\big) \leq -\delta r^{2p-1}.\] To see this, suppose that $F^r \left(\boldsymbol{X}^r(\ell r^{p-1})\right) \ge r u^* + C_1 r^p$.
Then $d\left(\boldsymbol{x}^r(\ell r^{p-1}), {\cal X}^*\right) \ge \sum_{\boldsymbol{k} \in {\cal K}} x^r_{\boldsymbol{k}} - u^* \ge \frac{1}{r} F^r \left(\boldsymbol{X}^r(\ell r^{p-1})\right) - u^* \ge C_1 r^{p-1}$, and we must have \[ F^r\big(\boldsymbol{X}^r((\ell+1)r^{p-1})\big) - F^r\big(\boldsymbol{X}^r(\ell r^{p-1})\big) \leq -\delta r^{2p-1}. \] \iffalse \begin{itemize} \item[(i)] $F^r \left(\boldsymbol{X}^r(\ell_0 r^{p-1})\right) \le r u^* + C_1 r^p$, \item[(ii)] $F^r \left(\boldsymbol{X}^r((\ell+1) r^{p-1})\right) - F^r \left(\boldsymbol{X}^r(\ell r^{p-1})\right) \le \left(2\bar{\mu} + \frac{1}{2}\right) |{\cal K}| r^p$, and \item[(iii)] if $F^r \left(\boldsymbol{X}^r(\ell r^{p-1})\right) \ge r u^* + C_1 r^p$, then \[ F^r\big(\boldsymbol{X}^r((\ell+1)r^{p-1})\big) - F^r\big(\boldsymbol{X}^r(\ell r^{p-1})\big) \leq -\delta r^{2p-1}. \] \end{itemize} To establish statement (i), recall that $d\left(\boldsymbol{x}^r(\ell_0 r^{p-1}), {\cal X}^*\right) \le C_1 r^{p-1}$, so by Lemma \ref{lem:F-sum-close}, \[ F^r \left(\boldsymbol{X}^r(\ell_0 r^{p-1})\right) - ru^* \le \sum_{\boldsymbol{k} \in {\cal K}} X^r_{\boldsymbol{k}} (\ell_0 r^{p-1}) - r u^* \le r d\left(\boldsymbol{x}^r(\ell_0 r^{p-1}), {\cal X}^*\right) \le C_1 r^p. \] To establish statement (ii), we use Property 2 and Lemma \ref{lem:F-sum-close}, and deduce that \begin{eqnarray*} F^r \left(\boldsymbol{X}^r((\ell+1) r^{p-1})\right) - F^r \left(\boldsymbol{X}^r(\ell r^{p-1})\right) & \le & \sum_{\boldsymbol{k} \in {\cal K}} X^r_{\boldsymbol{k}} ((\ell+1) r^{p-1}) - \sum_{\boldsymbol{k} \in {\cal K}} X^r_{\boldsymbol{k}} (\ell r^{p-1}) + \frac{|{\cal K}|}{2} r^p \\ & \le & 2 \bar{\mu} |{\cal K}| r^p + \frac{|{\cal K}|}{2} r^p = \left(2\bar{\mu} + \frac{1}{2}\right)|{\cal K}| r^p. \end{eqnarray*} To establish statement (iii), suppose that $F^r \left(\boldsymbol{X}^r(\ell r^{p-1})\right) \ge r u^* + C_1 r^p$.
Then \[ d\left(\boldsymbol{x}^r(\ell r^{p-1}), {\cal X}^*\right) \ge \sum_{\boldsymbol{k} \in {\cal K}} x^r_{\boldsymbol{k}} - u^* \ge \frac{1}{r} F^r \left(\boldsymbol{X}^r(\ell r^{p-1})\right) - u^* \ge C_1 r^{p-1}, \] and by Property 1, we must have \[ F^r\big(\boldsymbol{X}^r((\ell+1)r^{p-1})\big) - F^r\big(\boldsymbol{X}^r(\ell r^{p-1})\big) \leq -\delta r^{2p-1}. \] \fi By Lemma \ref{lem:max-seq}, for all $\ell \in \mathcal{L}$ with $\ell \ge \ell_0$, we have \[ F^r\big(\boldsymbol{X}^r(\ell r^{p-1})\big) \le r u^* + \left(C_1 + C_3\right) r^p = r u^* + C_2 r^p, \] by letting $C_2 = C_1 + C_3$. This establishes statement (b). In particular, for $\bar{\ell} = \lceil T r^{2(1-p)}\rceil$, \[ F^r\big(\boldsymbol{X}^r(\bar{\ell} r^{p-1})\big) \le r u^* + C_2 r^p. \] Now by \eqref{eq:local-change-X}, the difference between $\boldsymbol{X}^r(T r^{1-p})$ and $\boldsymbol{X}^r(\bar{\ell}r^{p-1})$ is $O(r^p)$. Furthermore, the difference between $F^r\big(\boldsymbol{X}^r(\bar{\ell} r^{p-1})\big)$ and $\sum_{\boldsymbol{k} \in {\cal K}} X^r_{\boldsymbol{k}}(\bar{\ell}r^{p-1})$ is also $O(r^p)$. This implies that \[ \sum_{\boldsymbol{k} \in {\cal K}} X^r_{\boldsymbol{k}}(T r^{1-p}) - r u^* \le C_2 r^{p} + O(r^p). \] Thus, there exists $C > 0$ such that \[ \sum_{\boldsymbol{k} \in {\cal K}} x^r_{\boldsymbol{k}}(T r^{1-p}) - u^* \le \frac{C}{D} r^{p-1}. \] By Lemma \ref{LEM:LP-RATE}, \begin{align*} & ~d(\boldsymbol{x}^r(T r^{1-p}), {\cal X}^*) \le D\left(\sum_{\boldsymbol{k} \in {\cal K}} x^r_{\boldsymbol{k}}(T r^{1-p}) - u^*\right) \\ \le & ~D \cdot \frac{C}{D} r^{p-1} = C r^{p-1}, \end{align*} and we have established the claim. Therefore, w.p.$1$, \[ d(\boldsymbol{x}^{r}(T r^{1-p}), {\cal X}^*) \le Cr^{p-1}, \] for all sufficiently large $r$. This establishes the proposition. \iffalse \section{Proof of Lemma \ref{lem:unif-conv}}\label{apdx:unif-conv} The proof will proceed as follows.
We first construct a probability-$1$ event, and then show that the property that we wish to establish holds under this event. Let $r$ be given. We consider a finite cover of the interval $[0, Tr^{2-p}]$ by subintervals of the form $[\ell r^{p-1/2}, (\ell + 1)r^{p-1/2})$, where $\ell \in \mathbb{Z}_+$. It is clear that we need at most $\lceil \frac{Tr^{2-p}}{r^{p-1/2}}\rceil = \lceil Tr^{5/2 - 2p}\rceil$ such subintervals. Denote the set $\{1, 2, \ldots, \lceil Tr^{5/2 - 2p}\rceil\}$ by $\mathcal{T}^r$. Let $W_{\ell}^r = \Pi_{(\boldsymbol{k}, i)}\left(\ell r^{p-1/2}\right) - \Pi_{(\boldsymbol{k}, i)}\left((\ell-1)r^{p-1/2}\right)$. Then $W_{\ell}^r$ counts the number of jumps of a unit-rate Poisson process over the interval $[(\ell-1)r^{p-1/2}, \ell r^{p-1/2})$, and is a Poisson random variable with mean $r^{p-1/2}$. For each $\ell \in \mathcal{T}^r$, each $(\boldsymbol{k}, i) \in {\cal M}$, and each $q \in \mathbb{N}$, define the following event: \[ A_{(\boldsymbol{k}, i), \ell}^{q} = \left\{ \left|\frac{W_{\ell}^r}{r^{p-1/2}} - 1\right| \geq \frac{1}{q} \right\}. \] We have the following concentration inequality: \[ \mathbb{P}\left(A_{(\boldsymbol{k}, i), \ell}^{q}\right) \leq \exp\left(-g(q)r^{p-1/2}\right), \] where $g : \mathbb{N} \rightarrow \mathbb{R}_{++}$ is a positive function on $\mathbb{N}$ (the exact form of $g$ is not important). Thus, for each fixed $q$ and $r$, \[ \mathbb{P}\left(\bigcup_{(\boldsymbol{k}, i)\in {\cal M}} \bigcup_{\ell \in \mathcal{T}} A_{(\boldsymbol{k}, i), \ell}^q \right) \leq |{\cal M}| \lceil Tr^{5/2 - 2p}\rceil \exp\left(-g(q)r^{p-1/2}\right). \] If we denote the event $\bigcup_{(\boldsymbol{k}, i), \ell} A_{(\boldsymbol{k}, i), \ell}^q$ by $B^{r, q}$, then \[ \mathbb{P}\left(\bigcup_r B^{r, q}\right) \leq \sum_r |{\cal M}| \lceil Tr^{5/2 - 2p}\rceil \exp\left(-g(q)r^{p-1/2}\right) < \infty. 
\] For fixed $q \in \mathbb{N}$, by the Borel--Cantelli lemma, $\mathbb{P}\left(B^{r, q} \mbox{ i.o.}\right) = 0$, so \[ \mathbb{P}\left(\overline{B^{r, q}} \mbox{ eventually}\right) = 1, \] where for a measurable set $A$, $\overline{A}$ denotes its complement. Therefore, \[ \mathbb{P}\left(\bigcap_{q\in \mathbb{N}} \left\{\overline{B^{r, q}} \mbox{ eventually}\right\}\right) = 1. \] Now $\overline{B^{r, q}}$ is the event \[ \left\{ \max_{(\boldsymbol{k}, i)\in {\cal M}, \ell \in \mathcal{T}}\left|\frac{W_{\ell}^r}{r^{p-1/2}} - 1\right| < \frac{1}{q}\right\}, \] so $\left\{\overline{B^{r, q}} \mbox{ eventually}\right\}$ is the event that there exists $r_0$ such that for all $r \geq r_0$, \[ \left\{ \max_{(\boldsymbol{k}, i)\in {\cal M}, \ell \in \mathcal{T}}\left|\frac{W_{\ell}^r}{r^{p-1/2}} - 1\right| < \frac{1}{q}\right\}. \] Thus, $\left\{\bigcap_{q\in \mathbb{N}} \left\{\overline{B^{r, q}} \mbox{ eventually}\right\}\right\}$ is the event that $\frac{W_{\ell}^r}{r^{p-1/2}} \rightarrow 1$ uniformly over $\ell \in \mathcal{T}^r$, as $r\rightarrow \infty$. We can cover any interval $[t_0^r, t_0^r + Tr^{2p-1}] \subset [0, Tr^{2-p}]$ by $\lceil r^{p-1/2} \rceil$ subintervals of the form $\left[\ell r^{p-1/2}, (\ell+1)r^{p-1/2}\right]$, so under the event that $\frac{W_{\ell}^r}{r^{p-1/2}} \rightarrow 1$ uniformly over $\ell \in \mathcal{T}^r$ as $r\rightarrow \infty$, we have \[ \frac{1}{r^{2p-1}}\left(\Pi_{(\boldsymbol{k}, i)}\left(t_0^r + \xi r^{2p-1} \right) - \Pi_{(\boldsymbol{k}, i)}\left(t_0^r\right)\right) \rightarrow \xi \] as $r \rightarrow \infty$, uniformly over $t_0^r, \xi$, and $(\boldsymbol{k}, i)$. \fi \section{Proof of Theorem \ref{THM:SQRT-R-TIGHTNESS}}\label{apdx:sqrt-r-tightness} The general approach of the proof is similar to that of Theorem 2 (ii) in \cite{GS2012}, in that it is based on the process generator estimates for the exponent $e^{\Phi}$, where $\Phi$ is a function on the state space.
However, the function $\Phi$ in our case is quite different, and so are the specifics of the estimates. Consider fixed $i \in {\cal I}$ and $r$. For notational convenience, we drop the subscript $i$ and superscript $r$ from all quantities considered in this proof. The Markov chain $\boldsymbol{U}(\cdot) = (\hat Y(\cdot), \tilde Y(\cdot))$ has infinitesimal transition rate matrix $\xi$ given by \[ \xi(\boldsymbol{u}, \boldsymbol{u} + \boldsymbol{v}) = \left\{ \begin{array}{ll} \lambda r, & \mbox{ if } \boldsymbol{v} = (1, -1\cdot\boldsymbol{1}_{\{\tilde y > 0\}}),\\ \mu \hat{y}, & \mbox{ if } \boldsymbol{v} = (-1, 1), \\ \mu_0 \tilde{y}, & \mbox{ if } \boldsymbol{v} = (0, -1), \\ 0, & \mbox{ otherwise,} \end{array}\right. \] where $\boldsymbol{u} = (\hat y, \tilde y)$. We consider the infinitesimal generator $A$ of the Markov chain $\boldsymbol{U}(\cdot)$, defined by \begin{equation}\label{eq:generator-def} A G(\boldsymbol{u}) = \sum_{\boldsymbol{u}'} \xi(\boldsymbol{u}, \boldsymbol{u}')\left(G(\boldsymbol{u}') - G(\boldsymbol{u})\right), \end{equation} for all functions $G : \mathbb{Z}_+^2 \to \mathbb{R}$ in the domain of $A$. We also consider the formal operator $\bar{A}$, defined (similarly to Eq.~\eqref{eq:generator-def}) by \begin{equation}\label{eq:for-gen-def} \bar{A} G(\boldsymbol{u}) = \sum_{\boldsymbol{u}'} \xi(\boldsymbol{u}, \boldsymbol{u}')\left(G(\boldsymbol{u}') - G(\boldsymbol{u})\right), \end{equation} for all functions $G: \mathbb{Z}_+^2 \to \mathbb{R}$.
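As an aside, the dynamics of $\boldsymbol{U}(\cdot)$ are easy to simulate directly from the transition rates above. The following Gillespie-style sketch is our own illustration (function and parameter names are ours) and can be used, e.g., to sanity-check that both components stay nonnegative:

```python
import random

def simulate_chain(lam, mu, mu0, r, t_max, seed=0):
    """Simulate U = (Y_hat, Y_tilde) with rates: lam*r for the arrival jump
    (1, -1{y_tilde>0}), mu*y_hat for (-1, 1), mu0*y_tilde for (0, -1)."""
    rng = random.Random(seed)
    y_hat, y_tilde, t = 0, 0, 0.0
    while t < t_max:
        rates = [
            (lam * r, (1, -1 if y_tilde > 0 else 0)),  # arrival
            (mu * y_hat, (-1, 1)),
            (mu0 * y_tilde, (0, -1)),
        ]
        total = sum(q for q, _ in rates)  # > 0 since lam*r > 0
        t += rng.expovariate(total)
        u, acc = rng.random() * total, 0.0
        for q, v in rates:
            acc += q
            if u <= acc:
                y_hat += v[0]
                y_tilde += v[1]
                break
        # by construction neither component can go negative
        assert y_hat >= 0 and y_tilde >= 0
    return y_hat, y_tilde
```

Nonnegativity holds because each decrementing rate vanishes when the corresponding component is zero, and the arrival jump decrements $\tilde y$ only when $\tilde y > 0$.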
Similarly to \cite{GS2012}, it is easy to observe that the following property holds: if a function $G$ takes a fixed constant value on the entire state space, except maybe a finite subset, then $G$ is within the domain of $A$, $A G = \bar A G$, and moreover \begin{equation}\label{eq-generators} \mathbb{E}[AG(\boldsymbol{U})] = \mathbb{E}[\bar AG(\boldsymbol{U})] = 0, \end{equation} where the expectation is taken w.r.t.\ the stationary distribution of the Markov chain $\boldsymbol{U}(\cdot)$. \iffalse we choose an appropriate Lyapunov function $G$, and use the balance equation \[ \mathbb{E}[AG(\boldsymbol{U})] = 0 \] to derive the desired bounds. Here the expectation is taken w.r.t.\ the stationary distribution of the Markov chain $\boldsymbol{U}(\cdot)$. \fi First, define the (candidate) Lyapunov function $G : \mathbb{Z}_+^2 \to \mathbb{R}$ by \[ G(\boldsymbol{u}) = \exp \left(\frac{1}{\sqrt{r}}h(\boldsymbol{u})\right), \] where $h(\boldsymbol{u}) = \sqrt{(\hat y - \rho r)^2 + \frac{\mu_0}{\mu} \tilde y^2}$. Note that, for an arbitrary $b\ge 0$, the truncated function $$ G^{(b)}(\boldsymbol{u}) = \exp \left(\frac{h(\boldsymbol{u})}{\sqrt{r}} \wedge b \right) $$ is constant outside a finite subset and therefore, by \eqn{eq-generators}, \begin{equation}\label{eq-generators2} \mathbb{E}[\bar AG^{(b)}(\boldsymbol{U})] = 0. \end{equation} Also note that $$ \bar{A}G^{(b)}(\boldsymbol{u}) \le \bar{A}G(\boldsymbol{u}), ~~\mbox{if}~h(\boldsymbol{u})/\sqrt{r} \le b, $$ $$ \bar{A}G^{(b)}(\boldsymbol{u}) \le 0, ~~\mbox{if}~h(\boldsymbol{u})/\sqrt{r} \ge b. $$ \iffalse Note that $G$ as defined can be unbounded, so may not be in the domain of $A$. However, similar to the proof of Theorem 2 (ii) in \cite{GS2012}, it suffices to consider the formal equality $\mathbb{E}[\bar{A}G(\boldsymbol{U})] = 0$ and derive the desired bounds.
(Consider the ``truncated'' version $\bar{G}^{c_1} = G \wedge {c_1}$, for any $c_1 > 0$; then using $\mathbb{E}[A\bar{G}^{c_1}(\boldsymbol{U})] = 0$, we can derive an upper bound on $\mathbb{E}[\bar{G}^{c_1}(\boldsymbol{U})]$, independent of $c_1 > 0$. Then use the monotone convergence theorem to get the same bound on $\mathbb{E}[G(\boldsymbol{U})]$.) \fi Similarly to \cite{GS2012}, the following inequality can be derived using a Taylor expansion. There exists some constant $c_2 > 0$ such that for sufficiently large $r$, \begin{equation}\label{eq:drift-G-1} \bar{A}G(\boldsymbol{u}) \le G(\boldsymbol{u})\left(\frac{1}{\sqrt{r}}\bar{A}h(\boldsymbol{u}) + \frac{c_2}{r}(\lambda r + \mu \hat y + \mu_0 \tilde y)\right). \end{equation} The term $\frac{G(\boldsymbol{u})}{\sqrt{r}}\bar{A}h(\boldsymbol{u})$ captures the first-order change in $G(\boldsymbol{u})$, and $\frac{c_2 G(\boldsymbol{u})}{r}(\lambda r + \mu \hat y + \mu_0 \tilde y)$ bounds the second-order change. Here we used the fact that $h$ is Lipschitz continuous and that $\|\boldsymbol{u}\|$ is changed by at most $1$ by any single transition. Now consider the term $\bar{A}h(\boldsymbol{u})$. We use the following inequality to bound $\bar{A}h(\boldsymbol{u})$: \[ \sqrt{(x+a)^2 + (y+b)^2} - \sqrt{x^2 + y^2} \leq \frac{ax + by + a^2 + b^2}{\sqrt{x^2 + y^2}}. \] To verify this inequality, note first that \[ \left(\sqrt{(x+a)^2 + (y+b)^2}\right)^2 \leq \left(\sqrt{x^2 + y^2} + \frac{ax + by + a^2 + b^2}{\sqrt{x^2 + y^2}}\right)^2, \] and second that \[ \sqrt{x^2 + y^2} + \frac{ax + by + a^2 + b^2}{\sqrt{x^2 + y^2}} \geq 0.
\] Thus, \begin{align} \bar{A}h(\boldsymbol{u}) \le & ~\frac{(\lambda r - \mu \hat{y})(\hat{y} - \rho r) - (\lambda r - \mu \hat{y} + \mu_0 \tilde y)(\mu_0\tilde{y}/\mu)}{\sqrt{(\hat{y} - \rho r)^2 + \mu_0 \tilde{y}^2/\mu}} \nonumber \\ & ~+ \frac{c_3(\lambda r + \mu \hat y + \mu_0 \tilde y)}{\sqrt{(\hat{y} - \rho r)^2 + \mu_0 \tilde{y}^2/\mu}} \nonumber \\ = & ~\frac{-\frac{\mu}{2}(\hat y - \rho r)^2 - \frac{\mu_0^2}{2\mu}\tilde{y}^2 - \frac{\mu}{2}(\hat y - \rho r + \tilde y)^2}{\sqrt{(\hat{y} - \rho r)^2 + \mu_0 \tilde{y}^2/\mu}} \nonumber \\ & ~+ \frac{c_3(\lambda r + \mu \hat y + \mu_0 \tilde y)}{\sqrt{(\hat{y} - \rho r)^2 + \mu_0 \tilde{y}^2/\mu}} \nonumber \\ \le & ~\frac{-\frac{\mu}{2}(\hat y - \rho r)^2 - \frac{\mu_0^2}{2\mu}\tilde{y}^2}{h(\boldsymbol{u})} + \frac{c_3(\lambda r + \mu \hat y + \mu_0 \tilde y)}{h(\boldsymbol{u})} \nonumber \\ \le & ~-c_4h(\boldsymbol{u}) + \frac{c_3}{\sqrt{r}}(\lambda r + \mu \hat y + \mu_0 \tilde y), \label{eq:drift-G-2} \end{align} for some positive constants $c_3$ and $c_4$, whenever $h(\boldsymbol{u}) \ge \sqrt{r}$. Combining inequalities \eqref{eq:drift-G-1} and \eqref{eq:drift-G-2}, we have \[ \bar{A}G(\boldsymbol{u}) \le G(\boldsymbol{u})\left(-\frac{c_4}{\sqrt{r}} h(\boldsymbol{u}) + \frac{c_2 + c_3}{r}(\lambda r + \mu \hat y + \mu_0 \tilde y)\right). \] Consider the term in the brackets on the RHS. It is now an elementary calculation to see that there exists some positive constant $c_5$ such that, whenever $h(\boldsymbol{u}) \ge c_5 \sqrt{r}$, \[ -\frac{c_4}{\sqrt{r}} h(\boldsymbol{u}) + \frac{c_2 + c_3}{r}(\lambda r + \mu \hat y + \mu_0 \tilde y) \le -1. \] Also note that when $h(\boldsymbol{u}) < c_5 \sqrt{r}$, the maximum values of \[ G(\boldsymbol{u})~~~\mbox{ and }~~~G(\boldsymbol{u})\left(-\frac{c_4}{\sqrt{r}} h(\boldsymbol{u}) + \frac{c_2 + c_3}{r}(\lambda r + \mu \hat y + \mu_0 \tilde y)\right) \] are both bounded above by an absolute constant, say $c_6$, which does not depend on $r$.
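The elementary bound $\sqrt{(x+a)^2 + (y+b)^2} - \sqrt{x^2 + y^2} \leq (ax + by + a^2 + b^2)/\sqrt{x^2 + y^2}$ used in the estimate of $\bar{A}h(\boldsymbol{u})$ can also be probed numerically. The following script is our own verification aid, not part of the proof:

```python
import math
import random

def lhs_rhs(x, y, a, b):
    # LHS and RHS of:  sqrt((x+a)^2 + (y+b)^2) - sqrt(x^2 + y^2)
    #                  <= (a*x + b*y + a^2 + b^2) / sqrt(x^2 + y^2)
    s = math.hypot(x, y)
    lhs = math.hypot(x + a, y + b) - s
    rhs = (a * x + b * y + a * a + b * b) / s
    return lhs, rhs

rng = random.Random(0)
for _ in range(10_000):
    x, y = rng.uniform(-5, 5), rng.uniform(-5, 5)
    a, b = rng.uniform(-3, 3), rng.uniform(-3, 3)
    if math.hypot(x, y) < 1e-6:  # the bound needs (x, y) != (0, 0)
        continue
    lhs, rhs = lhs_rhs(x, y, a, b)
    assert lhs <= rhs + 1e-7 * (1.0 + abs(rhs))
```

Of course this is no substitute for the algebraic verification above, which shows the difference of the squares equals $(a^2 + b^2) + (ax+by+a^2+b^2)^2/(x^2+y^2) \ge 0$.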
In summary, \begin{align*} & \bar{A}G(\boldsymbol{u}) \le -G(\boldsymbol{u}) ~~\mbox{ whenever } ~~h(\boldsymbol{u}) \ge c_5 \sqrt{r}, \\ \mbox{and }~~& \bar{A}G(\boldsymbol{u}) \le c_6 ~~\mbox{ whenever } ~~h(\boldsymbol{u}) < c_5 \sqrt{r}. \end{align*} Thus, for any $b>c_5$, \begin{eqnarray*} 0 = \mathbb{E}[AG^{(b)}(\boldsymbol{U})] & \le & \mathbb{E}[\bar AG(\boldsymbol{U}) \boldsymbol{1}_{\{c_5 \sqrt{r} \le h(\boldsymbol{U}) \le b \sqrt{r}\}}] \\ & & + \mathbb{E}[\bar AG(\boldsymbol{U}) \boldsymbol{1}_{\{h(\boldsymbol{U}) < c_5 \sqrt{r}\}}] \\ & \le & - \mathbb{E}[G(\boldsymbol{U})\boldsymbol{1}_{\{c_5 \sqrt{r} \le h(\boldsymbol{U}) \le b \sqrt{r}\}}] + c_6. \end{eqnarray*} This implies that $\mathbb{E}[G(\boldsymbol{U})\boldsymbol{1}_{\{c_5 \sqrt{r} \le h(\boldsymbol{U}) \le b \sqrt{r}\}}] \le c_6$, and then $\mathbb{E}[G(\boldsymbol{U})\boldsymbol{1}_{\{ h(\boldsymbol{U}) \le b \sqrt{r}\}}] \le 2 c_6$. Finally, by the monotone convergence theorem, $\mathbb{E}[G(\boldsymbol{U})] \le 2c_6$. This completes the proof. \iffalse \appendix \begin{center} {\Large\textbf{Appendix}} \end{center} \section{Proof of Lemma~\ref{lem-bound-tail}} \label{prf-bound-tail} \fi \iffalse \noindent Bell Labs, Alcatel-Lucent \\ 600 Mountain Avenue, 2C-322 \\ Murray Hill, NJ 07974, USA \\ [email protected] \fi \end{document}
\begin{document} \title{Self-similarity and spectral asymptotics for the continuum random tree} \author{D.A. Croydon\footnote{Dept of Statistics, University of Warwick, Coventry CV4 7AL, UK; {[email protected]}.} and B. M. Hambly\footnote{Mathematical Institute, University of Oxford, 24-29 St Giles', Oxford, OX1 3LB, UK; [email protected]}} \date{5 June 2007} \maketitle \begin{abstract} We use the random self-similarity of the continuum random tree to show that it is homeomorphic to a post-critically finite self-similar fractal equipped with a random self-similar metric. As an application we determine the mean and almost-sure leading order behaviour of the high frequency asymptotics of the eigenvalue counting function associated with the natural Dirichlet form on the continuum random tree. We also obtain short time asymptotics for the trace of the heat semigroup and the annealed on-diagonal heat kernel associated with this Dirichlet form. \end{abstract} \section{Introduction}\label{intro} One of the reasons the continuum random tree of Aldous has attracted such great interest is that it connects together a number of diverse areas of probability theory. On one hand, it appears from discrete probability as the scaling limit of combinatorial graph trees and probabilistic branching processes; and on the other hand, it is intimately related with a continuous time process, namely the normalised Brownian excursion, \cite{Aldous3}. However, with both of these representations of the continuum random tree, there does not appear to be an obvious description of the structure of the set itself. In this paper we demonstrate that the continuum random tree has a recursive description as a random self-similar fractal and show that the set is always homeomorphic to a deterministic subset of the Euclidean plane. 
As an application of this precise description of the random self-similarity of the continuum random tree, we deduce results about the spectrum and on-diagonal heat kernel of the natural Dirichlet form on the set using techniques developed for random recursive self-similar fractals. From its graph tree scaling limit description, Aldous showed how the continuum random tree has a certain random self-similarity, \cite{Aldous5}. In this article, we use this result iteratively to label the continuum random tree, $\mathcal{T}$, using a shift space over a three letter alphabet. This enables us to show that there is an isometry from $\mathcal{T}$, with its natural metric $d_{\mathcal{T}}$ (see Section \ref{crtdef} for a precise definition of $\mathcal{T}$ and $d_\mathcal{T}$, and Section \ref{decompsec} for the decomposition of $\mathcal{T}$ that we apply), to a deterministic subset of $\mathbb{R}^2$, $T$ say, equipped with a random metric $R$, $\mathbf{P}$-a.s., where $\mathbf{P}$ is the probability measure on the probability space upon which all the random variables of the discussion are defined. This metric is constructed using random scaling factors in an adaptation of the now well-established techniques of \cite{Kigami} for building a resistance metric on a post-critically finite self-similar fractal. We note that on a tree the resistance and geodesic metrics are the same. Furthermore, we show that the isometry in question also links the natural Borel probability measures on the spaces $(\mathcal{T},d_\mathcal{T})$ and $(T,R)$. The relevant measures will be denoted by $\mu$ and $\mu^T$ respectively, with $\mu$ arising as the scaling limit of the uniform measures on the graph approximations of $\mathcal{T}$ (see \cite{Aldous3}, for example), and $\mu^T$ being the random self-similar measure that is associated with the construction of $R$. 
The result that we prove is the following; full descriptions of $(T,R,\mu^T)$ are given in Section \ref{selfsimsec}, and the isometry is defined in Section \ref{isosec}. {\thm \label{first} There exists a deterministic post-critically finite self-similar dendrite, $T$, equipped with a (random) self-similar metric, $R$, and Borel probability measure, $\mu^T$, such that $(T,R,\mu^T)$ is equivalent to $(\mathcal{T},d_\mathcal{T},\mu)$ as a measure-metric space, $\mathbf{P}$-a.s.} Previous analytic work on the continuum random tree in \cite{Croydoncrt} obtained estimates on the quenched and the annealed heat kernel for the tree. We can now adapt techniques of \cite{Hamasymp} to consider the spectral asymptotics of the tree. As a byproduct we are also able to refine the results on the annealed heat kernel to show the existence of a short time limit for $t^{2/3}\mathbf{E}p_t(\rho,\rho)$ at the root of the tree $\rho$, where the notation $\mathbf{E}$ is used to represent expectation under the probability measure $\mathbf{P}$. The natural Dirichlet form on $L^2(\mathcal{T},\mu)$ may be thought of simply as the electrical energy when we consider $(\mathcal{T},d_\mathcal{T})$ as a resistance network. We shall denote this form by $\mathcal{E}_\mathcal{T}$, and its domain $\mathcal{F}_\mathcal{T}$, and explain in Section \ref{crtdef} how it may be constructed using results of \cite{Kigamidendrite}. The eigenvalues of the triple $(\mathcal{E}_\mathcal{T},\mathcal{F}_\mathcal{T}, \mu)$ are defined to be the numbers $\lambda$ which satisfy \begin{equation}\label{evaluedef} \mathcal{E}_\mathcal{T}(u,v)=\lambda\int_\mathcal{T}uvd\mu,\hspace{20pt}\forall v\in\mathcal{F}_\mathcal{T} \end{equation} for some eigenfunction $u\in\mathcal{F}_\mathcal{T}$. 
The corresponding eigenvalue counting function, $N$, is obtained by setting \begin{equation} \label{ecf} N(\lambda):=\#\{\mbox{eigenvalues of }(\mathcal{E}_\mathcal{T},\mathcal{F}_\mathcal{T}, \mu)\leq\lambda\}, \end{equation} and we prove in Section \ref{specsec} that this is well-defined and finite for any $\lambda\in\mathbb{R}$, $\mathbf{P}$-a.s. In Section \ref{specsec}, we also prove the following result, which shows that asymptotically the mean and $\mathbf{P}$-a.s. behaviour of $N$ are identical. {\thm \label{second} There exists a deterministic constant $C_0\in(0,\infty)$ such that\\ (a) $\lambda^{-2/3}\mathbf{E}N(\lambda)\rightarrow C_0$, as $\lambda\rightarrow\infty$.\\ (b) $\lambda^{-2/3}N(\lambda)\rightarrow C_0$, as $\lambda\rightarrow\infty$, $\mathbf{P}$-a.s.} To provide some context for this result, we will now briefly discuss some related work. For the purposes of brevity, during the remainder of the introduction, we shall use the notation $N(\lambda)$ to denote the eigenvalue counting function of whichever problem is being considered. Classically, for the usual Laplacian on a bounded domain $\Omega\subseteq\mathbb{R}^n$, Weyl's famous theorem tells us that the eigenvalue counting function satisfies \begin{equation}\label{weyl} N(\lambda)=C_n|\Omega|\lambda^{n/2}+o(\lambda^{n/2}),\hspace{20pt}\mbox{as }\lambda\rightarrow\infty, \end{equation} where $C_n$ is a constant depending only on $n$, and $|\Omega|$ is the Lebesgue measure of $\Omega$, see \cite{Lap}. As a consequence, in this setting, there exists a limit for $\lambda^{-n/2}N(\lambda)$ as $\lambda\rightarrow\infty$. In the case of deterministic p.c.f. self-similar fractals it is known that \[ N(\lambda)=\lambda^{d_S/2}(G(\ln\lambda)+o(1)),\hspace{20pt}\mbox{as }\lambda\rightarrow\infty, \] where $G$ is a periodic function, see \cite{Kigami}, Theorem 4.1.5. 
The generic case has $G$ constant, but for fractals with a high degree of symmetry, such as the class of nested fractals (an example is the Sierpinski gasket), the function $G$ can be proved to be non-constant, and so no limit actually exists for $\lambda^{-d_S/2}N(\lambda)$ as $\lambda\rightarrow\infty$ for these fractals. In the case of random recursive Sierpinski gaskets, as studied in \cite{Hamasymp}, there are similar results; however, the function $G$ must be multiplied by a random weight variable, which can be thought of as a measure of the volume of the fractal, and roughly corresponds to the factor $|\Omega|$ in (\ref{weyl}). Again the generic case is that the limit of the rescaled counting function exists and, in this setting, there are no known examples of periodic behaviour. For the continuum random tree, no periodic fluctuations or random weight factors appear; this is due to the non-lattice distribution of the Dirichlet $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ random variables that are used in the self-similar construction, and also to the fact that the three elements of the triple sum to exactly one, $\mathbf{P}$-a.s. It is also worth commenting upon the values of the exponent of $\lambda$ in the leading order behaviour of $N(\lambda)$ in the classical and fractal settings. From Weyl's result for bounded domains in $\mathbb{R}^n$, we see that the limit \[d_S:=2\lim_{\lambda\rightarrow\infty}\frac{\ln N(\lambda)}{\ln\lambda}\] is precisely $n$, matching the Hausdorff dimension of $\Omega$. However, for deterministic and random self-similar fractals, this agreement does not generally hold. For a large class of finitely ramified fractals it has been proved that \begin{equation}\label{spectraldim} d_S=\frac{2d_H}{1+d_H}, \end{equation} where $d_H$ is the Hausdorff dimension of the fractal in the resistance metric (see \cite{Kigami}, Theorem 4.2.1, and \cite{Hamasymp}, Theorem 1.1).
Due to its definition from the spectral asymptotics, the quantity $d_S$ has become known as the spectral dimension of a (Laplacian on a) set. Clearly, from the previous theorem, we see that for the continuum random tree $d_S=4/3$. This result could have been predicted from the self-similar fractal picture of the set given in Theorem \ref{first}, and (\ref{spectraldim}), noting that $d_H=2$ for the continuum random tree (see \cite{LegallDuquesne}). Observe that to be able to apply the result of \cite{LegallDuquesne}, the equivalence of the resistance and geodesic metrics on trees must be used. Finally, let $X$ be the Markov process corresponding to $(\mathcal{E}_\mathcal{T},\mathcal{F}_\mathcal{T}, \mu)$ and denote by $(p_t(x,y))_{x,y\in\mathcal{T},\:t>0}$ its transition density; alternatively this is the heat kernel of the Laplacian associated with the Dirichlet form. The existence of $p_t$ for $t>0$ was proved in \cite{Croydoncrt}, where it was also shown that $t^{2/3}p_t(x,x)$ exhibits logarithmic fluctuations globally, and log-logarithmic fluctuations for $\mu$-a.e. $x\in\mathcal{T}$, as $t\rightarrow 0$. These fluctuations are caused by variations in the ``thickness'' of the measure $\mu$ over the space, which result in turn from the randomness of the construction. However, the result of Theorem \ref{second}(b) implies that these fluctuations must even out, when averaged over the entire space. In particular, applying an Abelian theorem in the way discussed in Remark 5.11 of \cite{Hamasymp}, we obtain the following limit result for the trace of the heat semigroup, which we state without proof. 
{\cor Let $C_0$ be the constant of Theorem \ref{second}, and $\Gamma$ be the standard gamma function, then\\ (a) \[t^{2/3}\mathbf{E}\int_\mathcal{T}p_t(x,x)\mu(dx)\rightarrow C_0\Gamma(5/3),\hspace{20pt}\mbox{as }t\rightarrow 0,\] (b) $\mathbf{P}$-a.s., \[t^{2/3}\int_\mathcal{T}p_t(x,x)\mu(dx)\rightarrow C_0\Gamma(5/3),\hspace{20pt}\mbox{as }t\rightarrow 0.\]} Another corollary, which follows from the invariance under random re-rooting of the continuum random tree, \cite{Aldous2}, allows us to deduce from part (a) of the preceding corollary the following limit for the annealed heat kernel at $\rho$, the root of $\mathcal{T}$ (see Section \ref{crtdef} for a definition). This tightens the result obtained in \cite{Croydoncrt}, Proposition 1.7, for the annealed heat kernel. {\cor Let $C_0$ be the constant of Theorem \ref{second}, and $\Gamma$ be the standard gamma function, then \[ t^{2/3}\mathbf{E}p_t(\rho,\rho)\rightarrow C_0\Gamma(5/3) \mbox{ as }t\rightarrow 0. \]} An outline of the paper is as follows. In Section 2 we introduce the continuum random tree and give the natural Dirichlet form associated with the tree. In Section 3 we use the decomposition of Aldous to give a description of the tree via a sequence space. Once we have established this we can map the continuum random tree into a post-critically finite self-similar tree with a random metric. Finally we show that the map ensures that the two sets are equivalent as metric measure spaces. Once we have the picture as a self-similar set with a random metric it is straightforward to deduce a decomposition of the Dirichlet form and from this a natural scaling in the eigenvalues. This leads to our results on the spectrum and, via an Abelian theorem, to results on the trace of the heat semigroup. \section{Continuum random tree}\label{crtdef} The connection between trees and excursions is an area that has been of much recent interest.
In this section, we provide a brief introduction to this link, a definition of the continuum random tree, and also describe how to construct the natural Dirichlet form on this set. We begin by defining the space of excursions, $U$, to be the set of continuous functions $f:\mathbb{R}_+\rightarrow\mathbb{R}_+$ for which there exists a $\tau(f)\in(0,\infty)$ such that $f(t)>0$ if and only if $t\in(0,\tau(f))$. Given a function $f\in U$, we define a distance on $[0,\tau(f)]$ by setting \begin{equation}\label{distance} d_f(s,t):=f(s)+f(t)-2m_f(s,t), \end{equation} where $m_f(s,t):=\inf\{f(r):\:r\in[s\wedge t,s\vee t]\}$. We then use the equivalence \begin{equation}\label{eq} s\sim t\hspace{20pt}\Leftrightarrow\hspace{20pt}d_f(s,t)=0, \end{equation} to define $\mathcal{T}_f:=[0,\tau(f)]/\sim$. Denoting by $[s]$ the equivalence class containing $s$, it is elementary (see \cite{LegallDuquesne}, Section 2) to check that $d_{\mathcal{T}_f}([s],[t]):=d_f(s,t)$ defines a metric on $\mathcal{T}_f$, and also that $\mathcal{T}_f$ is a {\it dendrite}, which is taken to mean a path-wise connected Hausdorff space containing no subset homeomorphic to the circle. Furthermore, the metric $d_{\mathcal{T}_f}$ is a shortest path metric on $\mathcal{T}_f$, which means that it is additive along the paths of $\mathcal{T}_f$. The {\it root} of the tree $\mathcal{T}_f$ is defined to be the equivalence class $[0]$, and is denoted by $\rho_f$. A natural volume measure to impose upon $\mathcal{T}_f$ is the projection of Lebesgue measure on $[0,\tau(f)]$. In particular, for open $A\subseteq\mathcal{T}_f$, let \[\mu_f(A):=\ell\left(\{t\in[0,\tau(f)]:\:[t]\in A\}\right),\] where, throughout this article, $\ell$ is the usual 1-dimensional Lebesgue measure. This defines a Borel measure on $(\mathcal{T}_f,d_{\mathcal{T}_f})$, with total mass equal to $\tau(f)$. 
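The passage from an excursion to a tree metric can be made concrete on a discretely sampled excursion. The following sketch is purely illustrative (it is not part of the paper's argument): `tree_distance` evaluates $d_f$ of (\ref{distance}) for a function $f$ given by its values on a grid of times, with grid indices playing the role of $s$ and $t$.

```python
def tree_distance(f, s, t):
    """Evaluate d_f(s, t) = f(s) + f(t) - 2 m_f(s, t), where m_f(s, t) is
    the minimum of f between times s and t; here f is a list of sampled
    values and s, t are grid indices."""
    lo, hi = min(s, t), max(s, t)
    return f[lo] + f[hi] - 2 * min(f[lo:hi + 1])

# A toy discrete excursion: positive strictly between the endpoints.
f = [0, 1, 2, 1, 2, 3, 2, 1, 0]

# Times 1 and 3 both sit at height 1 with minimum 1 between them, so
# d_f(1, 3) = 0 and they project to the same point of the dendrite.
print(tree_distance(f, 1, 3))  # 0
# Times 2 and 5: descend from height 2 to the minimum 1, then climb to 3.
print(tree_distance(f, 2, 5))  # 3
```

Note that $d_f(0,\tau(f))=0$, reflecting the fact that both endpoints of the excursion project to the root of the tree.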
We are now able to define the {\it continuum random tree}\index{continuum random tree} as the random dendrite that we get when the function $f$ is chosen according to the law of a suitably scaled Brownian excursion. More precisely, we shall assume that there exists an underlying probability space, with probability measure $\mathbf{P}$, upon which is defined a process $W=(W_t)_{t=0}^1$ which has the law of the normalised Brownian excursion, where, throughout this article ``normalised'' is taken to mean ``scaled to return to the origin for the first time at time 1''. In keeping with the notation used so far in this section, the measure-metric space of interest should be written $(\mathcal{T}_W, d_{\mathcal{T}_W}, \mu_W)$, the distance on $[0,\tau(W)]$, defined at (\ref{distance}), $d_W$, and the root, $\rho_W$. However, we shall omit the subscripts $W$ with the understanding that we are discussing the continuum random tree in this case. We note that $\tau(W)=1$, $\mathbf{P}$-a.s., and so $[0,\tau(W)]=[0,1]$ and $\mu$ is a probability measure on $\mathcal{T}$, $\mathbf{P}$-a.s. Moreover, that $\mu$ is non-atomic is readily checked using simple path properties of $W$. Note that our definition differs slightly from the Aldous continuum random tree, which is based on the random function $2W$. Since this extra factor only has the effect of increasing distances by a factor of 2, our results are readily adapted to apply to Aldous' tree. A further observation that will be useful to us is that between any three points of a dendrite there is a unique branch-point. We shall denote the branch-point of $x,y,z\in\mathcal{T}$ by $b(x,y,z)$, which is the unique point in $\mathcal{T}$ lying on the arcs between $x$ and $y$, $y$ and $z$, and $z$ and $x$. Finally, we note that it is easy to check the conditions of \cite{Kigamidendrite}, Theorem 5.4 to deduce that it is possible to build a natural Dirichlet form on the continuum random tree. {\thm $\mathbf{P}$-a.s. 
there exists a local regular Dirichlet form $(\mathcal{E}_\mathcal{T},\mathcal{F}_\mathcal{T})$ on $L^2(\mathcal{T},\mu)$, which is associated with the metric $d_\mathcal{T}$ through, for every $x\neq y$, \begin{equation}\label{dtrecover} d_\mathcal{T}(x,y)^{-1}=\inf\{\mathcal{E}_\mathcal{T}(f,f):\:f\in\mathcal{F}_\mathcal{T},\:f(x)=0,\:f(y)=1\}. \end{equation}} This final property means that the metric $d_\mathcal{T}$ is indeed the resistance metric associated with $(\mathcal{E}_\mathcal{T},\mathcal{F}_\mathcal{T})$. It will be the eigenvalue counting function defined from $(\mathcal{E}_\mathcal{T},\mathcal{F}_\mathcal{T},\mu)$ as at (\ref{ecf}) for which we deduce asymptotic results in this article. \section{Decomposition of the continuum random tree}\label{decompsec} To make precise the decomposition of the continuum random tree that we shall apply, we use the excursion description of the set introduced in the previous section. This allows us to prove rigorously the independence properties that are important to our argument. However, it may not be immediately obvious exactly what the excursion picture is telling us about the continuum random tree, and so, after Lemma \ref{aldousdecomp}, we present a more heuristic discussion of the procedure we use in terms of the related dendrites. The initial object of consideration is a triple $(W,U,V)$, where $W$ is the normalised Brownian excursion, and $U$ and $V$ are independent $U[0,1]$ random variables, independent of $W$. From this triple it is possible to define three independent Brownian excursions. The following decomposition is rather awkward to write down, but is made clearer by Figure \ref{bed}. First, suppose $U<V$. On this set, it is $\mathbf{P}$-a.s. possible to define $H\in [0,1]$ by \begin{equation}\label{hdef} \{H\}:=\{t\in[U,V]:\:W_t=\inf_{s\in[U,V]}W_s\}. 
\end{equation} We also define \begin{equation}\label{hmindef} H_-:=\sup\{t<U:\:W_t=W_H\},\hspace{20pt}H_+:=\inf\{t>V:\:W_t=W_H\}, \end{equation} \[\Delta_1:=1+H_--H_+,\hspace{20pt}\Delta_2:=H-H_-,\hspace{20pt}\Delta_3:=H_+-H,\] \[\tilde{U}_1:=\frac{H_-}{\Delta_1},\hspace{20pt}U_2:=\frac{U-H_-}{\Delta_2},\hspace{20pt}{U}_3:=\frac{V-H}{\Delta_3},\] and for $t\in[0,1]$, \[\tilde{W}^1_t:=\Delta_{1}^{-1/2}(W_{t\Delta_1}\mathbf{1}_{\{t\leq \tilde{U}_1\}}+W_{H_++(t-\tilde{U}_1)\Delta_1}\mathbf{1}_{\{t> \tilde{U}_1\}}),\] \[{W}^2_t:=\Delta_{2}^{-1/2}(W_{H_-+t\Delta_2}-W_{H}),\] \[{W}^3_t:=\Delta_{3}^{-1/2}(W_{H+t\Delta_3}-W_{H}).\] Finally, it will be convenient to shift $\tilde{W}^1$ by $\tilde{U}_1$ so that the root of the corresponding tree is chosen differently. Thus, we define $W^1$ by \[W^1_t:= \left\{\begin{array}{ll}W_{\tilde{U}_1}+W_{\tilde{U}_1+t}-2m(\tilde{U}_1,\tilde{U}_1+t),&\hspace{20pt}0\leq t \leq 1-\tilde{U}_1\\ W_{\tilde{U}_1}+W_{\tilde{U}_1+t-1}-2m(\tilde{U}_1+t-1,\tilde{U}_1),&\hspace{20pt}1-\tilde{U}_1\leq t \leq 1,\end{array}\right.\] and set $U_1:=1-\tilde{U}_1$. If $U>V$, the definition of these quantities is similar, with $W^1$ again being the rescaled, shifted excursion containing $t=0$, $W^2$ being the rescaled excursion containing $t=U$, and $W^3$ being the rescaled excursion containing $t=V$. A minor adaptation of \cite{Aldous5}, Corollary 3, using the invariance under random re-rooting of the continuum random tree (see \cite{Aldous2}, Section 2.7), then gives us the following result, which we state without proof. \begin{figure} \caption{Brownian excursion decomposition.} \label{bed} \end{figure} {\lem \label{aldousdecomp} The quantities $W^1,W^2,W^3,U_1,U_2,U_3$ and $(\Delta_1,\Delta_2,\Delta_3)$ are independent. 
Each $W^i$ is a normalised Brownian excursion, each $U_i$ is $U[0,1]$, and $(\Delta_1,\Delta_2,\Delta_3)$ has the Dirichlet $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ distribution.} Describing the result in terms of the corresponding trees gives a much clearer picture of what the above decomposition does. Using the notation of Section \ref{crtdef}, let $(\mathcal{T},d_\mathcal{T}, \mu)$ be the continuum random tree associated with $W$, and $\rho=[0]$ its root. Again, we use $[t]$, for $t\in[0,1]$, to represent the equivalence classes of $[0,1]$ under the equivalence relation defined at (\ref{eq}). If we define $Z^1:=[U]$ and $Z^2:=[V]$, then $Z^1$ and $Z^2$ are two independent $\mu$-random vertices of $\mathcal{T}$. We now split the tree $\mathcal{T}$ at the branch-point $b(\rho,Z^1,Z^2)$, which may be checked to be equal to $[H]$, and denote by $\mathcal{T}^1$, $\mathcal{T}^2$ and $\mathcal{T}^3$ the components of $\mathcal{T}$ containing $\rho$, $Z^1$ and $Z^2$ respectively. Choose the root of each subtree to be equal to $b(\rho,Z^1,Z^2)$ and, for $i=1,2,3$, let $\mu^i$ be the probability measure on $\mathcal{T}^i$ defined by $\mu^i(A)=\mu(A)/\Delta_i$, for measurable $A\subseteq\mathcal{T}^i$, where $\Delta_i:=\mu(\mathcal{T}^i)$. The previous result tells us precisely that $(\mathcal{T}^i,\Delta_i^{-1/2}d_\mathcal{T},\mu^i)$, $i=1,2,3$, are three independent copies of $(\mathcal{T},d_\mathcal{T},\mu)$. Furthermore, if $Z_i:=\rho,Z^1,Z^2$ for $i=1,2,3$, respectively, then $Z_i$ is a $\mu^i$-random variable in $\mathcal{T}^i$. Finally, all these quantities are independent of the masses $(\mu(\mathcal{T}^1),\mu(\mathcal{T}^2),\mu(\mathcal{T}^3))$, which form a Dirichlet $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ triple. 
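The splitting at (\ref{hdef}) and (\ref{hmindef}) can be sketched numerically for a discretely sampled excursion. The helper below is ours, not the paper's: it locates the minimum point $H$ on $[U,V]$ and the hitting times $H_-$, $H_+$ of that level (with $\leq$ in place of exact equality, since a grid walk may skip a level), and returns the mass fractions $\Delta_1,\Delta_2,\Delta_3$, which sum to one by construction.

```python
def split_excursion(W, u, v):
    """For a sampled excursion W (W[0] = W[-1] = 0) and grid times u, v,
    locate H, H_- and H_+, and return H together with the mass fractions
    (Delta_1, Delta_2, Delta_3) of the three sub-excursions."""
    if u > v:
        u, v = v, u
    H = min(range(u, v + 1), key=lambda t: W[t])      # argmin of W on [u, v]
    m = W[H]
    H_minus = max(t for t in range(u) if W[t] <= m)            # last hit before u
    H_plus = min(t for t in range(v, len(W)) if W[t] <= m)     # first hit after v
    n = len(W) - 1
    d2, d3 = (H - H_minus) / n, (H_plus - H) / n
    return H, (1.0 - d2 - d3, d2, d3)   # Delta_1 = 1 + H_- - H_+

W = [0, 1, 2, 1, 2, 3, 2, 1, 2, 1, 0]
H, (d1, d2, d3) = split_excursion(W, 2, 8)
print(H, d1, d2, d3)   # the three mass fractions sum to one
```

For this toy walk the minimum over $[2,8]$ is attained at $H=3$, with $H_-=1$ and $H_+=9$, so the fractions are $(0.2,0.2,0.6)$.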
Although it is possible to deal with the subtrees directly using conditional definitions of the random variables to decompose the continuum random tree in this way, the excursion description allows us to keep track of exactly what is independent more easily, and it is to this setting that we return. However, we shall not completely neglect the tree description of the algorithm we now introduce, and a summary in this vein appears after Proposition \ref{decompprop}. We continue by applying inductively the decomposition map from $U^{(1)}\times[0,1]^2$ to ${U^{(1)}}^3\times [0,1]^3\times\Delta$ (where $\Delta$ is the standard 2-simplex) that takes the triple $(W,U,V)$ to the collection $(W^1,W^2,W^3,U_1,U_2,U_3,(\Delta_1,\Delta_2,\Delta_3))$ of excursions and uniform and Dirichlet random variables. We shall denote this decomposition map by $\Upsilon$. To label objects in our consideration it will be useful to use, as an address space, sequences of $\{1,2,3\}$. In particular, we will write the collections of finite sequences as, for $n\geq 0$, \[\Sigma_n:=\{1,2,3\}^n,\hspace{20pt}\Sigma_*:=\bigcup_{m\geq 0}\Sigma_m,\] where $\Sigma_0:=\{\emptyset\}$. Later, we will refer to the space of infinite sequences of $\{1,2,3\}$, which we denote by $\Sigma$, and also apply some further notation, which we introduce now. For $i\in\Sigma_m, j\in\Sigma_n, k\in\Sigma$, write $ij=i_1\dots i_m j_1 \dots j_n$, and $ik=i_1\dots i_m k_1 k_2 \dots$. For $i\in\Sigma_*$, denote by $|i|$ the integer $n$ such that $i\in\Sigma_n$ and call this the {\it length}\index{length} of $i$. For $i\in \Sigma_n\cup\Sigma$, $n\geq m$, the {\it truncation}\index{truncation} of $i$ to length $m$ is written as $i|m:=i_1\dots i_m$. Now, suppose we are given an independent collection $(W,U,(V_i)_{i\in\Sigma_*})$, where $W$ is a normalised Brownian excursion, $U$ is $U[0,1]$, and $(V_i)_{i\in\Sigma_*}$ is a family of independent $U[0,1]$ random variables. Set $(W^\emptyset,U_\emptyset):=(W,U)$. 
Given $(W^i, U_i)$, define \[(W^{i1},W^{i2},W^{i3},U_{i1},U_{i2},U_{i3},(\Delta_{i1},\Delta_{i2},\Delta_{i3})):=\Upsilon(W^i,U_i,V_i),\] and denote the filtration associated with $(\Delta_i)_{i\in\Sigma_*\backslash\{\emptyset\}}$ by $(\mathcal{F}_n)_{n\geq0}$. In particular, $\mathcal{F}_n:=\sigma(\Delta_i:\:|i|\leq n)$. The subsequent result is easily deduced by applying the previous lemma repeatedly. {\thm For each $n$, $((W^i,U_i, V_i))_{i\in\Sigma_n}$ is an independent collection of independent triples consisting of a normalised Brownian excursion and two $U[0,1]$ random variables, and moreover, the entire family of random variables is independent of $\mathcal{F}_n$.} Resulting from this construction, the collection $(\Delta_i)_{i\in\Sigma_*\backslash\{\emptyset\}}$ has some particularly useful independence properties, which we will use in the next section to build a random self-similar fractal related to $\mathcal{T}$. Furthermore, Lemma \ref{aldousdecomp} implies that each triple of the form $(\Delta_{i1},\Delta_{i2},\Delta_{i3})$ has the Dirichlet $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ distribution. Subsequently we will also be interested in the collection $(w(i))_{i\in\Sigma_*\backslash\{\emptyset\}}$, where for each $i$, we define \[w(i):=\Delta_i^{1/2},\] and will write $l(i)$ to represent the product $w(i|1)w(i|2)\dots w(i||i|)$, where $l(\emptyset):=1$. The reason for considering such families is that, in our decomposition of the continuum random tree, $(\Delta_i)_{i\in\Sigma_*\backslash\{\emptyset\}}$ and $(w(i))_{i\in\Sigma_*\backslash\{\emptyset\}}$ represent the mass and length scaling factors respectively. 
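Since each triple $(\Delta_{i1},\Delta_{i2},\Delta_{i3})$ sums to one, the masses $l(i)^2$ over any level $\Sigma_n$ always sum to one. The sketch below (illustrative helper names only) checks this by recursive splitting, sampling the Dirichlet $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ triples as normalised Gamma$(\frac{1}{2})$ variables, realised here as squared standard normals.

```python
import random

def dirichlet_half():
    """Dirichlet(1/2, 1/2, 1/2) sample: normalise three independent
    Gamma(1/2) variables, realised as squared standard normals (the
    common scale factor cancels in the normalisation)."""
    g = [random.gauss(0.0, 1.0) ** 2 for _ in range(3)]
    s = sum(g)
    return tuple(x / s for x in g)

def level_masses(n):
    """Masses l(i)^2 = Delta_{i|1} * ... * Delta_i for all addresses
    i in Sigma_n, built by splitting each level-(k-1) mass into three."""
    masses = {(): 1.0}
    for _ in range(n):
        masses = {i + (j + 1,): m * d
                  for i, m in masses.items()
                  for j, d in enumerate(dirichlet_half())}
    return masses

random.seed(1)
m = level_masses(4)
print(len(m), sum(m.values()))   # 3^4 = 81 addresses; masses sum to 1 (up to rounding)
```

This is exactly the identity $\sum_{i\in\Sigma_n}l(i)^2=1$ used later in the proof of Proposition \ref{decompprop}(g).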
By viewing the inductive procedure for decomposing excursions as the repeated splitting of trees in the way described after Lemma \ref{aldousdecomp}, it is possible to use the above algorithm to break the continuum random tree into smaller components, with the subtrees in the $n$th level of construction being described by the excursions $(W^i)_{i\in\Sigma_n}$. The maps we now introduce will make this idea precise. For the remainder of this section, the arguments that we give hold $\mathbf{P}$-a.s. First, denote by $H^i$, $H_-^i$ and $H_+^i$ the random variables in $[0,1]$ associated with $(W^i,U_i,V_i)$ by the formulae at (\ref{hdef}) and (\ref{hmindef}). Let $i\in\Sigma_*$. Define, for $t\in[0,1]$, \[\phi_{i1}(t):=(H_+^i+t\Delta_{i1})\mathbf{1}_{\{t<U_{i1}\}}+(t-U_{i1})\Delta_{i1}\mathbf{1}_{\{t\geq U_{i1}\}},\] and if $U_i<V_i$, define $\phi_{i2}$ and $\phi_{i3}$ to be the linear contractions from $[0,1]$ to $[H_-^i,H^i]$ and $[H^i,H_+^i]$ respectively. If $U_i>V_i$, the images of $\phi_{i2}$ and $\phi_{i3}$ are reversed. Note that, for each $i$, the map $\phi_{i}$ satisfies, for any measurable $A\subseteq[0,1]$, \begin{equation}\label{measurescal} \ell(\phi_i(A))=\Delta_i\ell(A), \end{equation} where $\ell$ is the usual Lebesgue measure on $[0,1]$. Importantly, these maps also satisfy a certain distance scaling property. In particular, it is elementary to check from the definitions of the excursions that, for any $i\in\Sigma_*$, $j\in \{1,2,3\}$, \begin{equation}\label{disscal} d_{W^i}(\phi_{ij}(s),\phi_{ij}(t))=w(ij)d_{W^{ij}}(s,t),\hspace{20pt}\forall s,t\in [0,1], \end{equation} where $d_{W^i}$ is the distance on $[0,1]$ associated with $W^i$ by the definition at (\ref{distance}). This equality allows us to define a map on the trees related to the excursions. Let $(\tilde{\mathcal{T}}_i,d_{\tilde{\mathcal{T}}_i})$ be the metric space dendrite determined from $W^i$ by the equivalence relation given at (\ref{eq}). 
Denote the corresponding equivalence classes $[t]_i$ for $t\in[0,1]$. Now define, for $i\in\Sigma_*$, $j\in \{1,2,3\}$, \begin{eqnarray*} \tilde{\phi}_{ij}:\tilde{\mathcal{T}}_{ij}&\rightarrow&\tilde{\mathcal{T}}_i\\ \mbox{$[t]_{ij}$}&\mapsto&[\phi_{ij}(t)]_{i}. \end{eqnarray*} The following result is readily deduced from the distance scaling property at (\ref{disscal}), and so we state it without proof. {\lem\label{scalyo} $\mathbf{P}$-a.s., for every $i\in\Sigma_*$, $j\in \{1,2,3\}$, $\tilde{\phi}_{ij}$ is well-defined and moreover, \[d_{\tilde{\mathcal{T}}_i}(\tilde{\phi}_{ij}(x),\tilde{\phi}_{ij}(y))=w(ij)d_{\tilde{\mathcal{T}}_{ij}}(x,y),\hspace{20pt}\forall x,y\in \tilde{\mathcal{T}}_{ij}.\]} By iterating the functions $(\tilde{\phi}_{i})_{i\in\Sigma_*\backslash\{\emptyset\}}$, we can map any $\tilde{\mathcal{T}}_i$ to the original continuum random tree, $\mathcal{T}\equiv\tilde{\mathcal{T}}_\emptyset$, which is the object of interest. We will denote the map from $\tilde{\mathcal{T}}_i$ to $\mathcal{T}$ by $\tilde{\phi}_{*i}:=\tilde{\phi}_{i|1}\circ\tilde{\phi}_{i|2}\circ\dots\circ\tilde{\phi}_{i}$, and its image by $\mathcal{T}_i:=\tilde{\phi}_{*i}(\tilde{\mathcal{T}}_i)$. It is these sets that form the basis of our decomposition of $\mathcal{T}$. We will also have cause to refer to the following points in $\mathcal{T}_i$: \[\rho_i:=\tilde{\phi}_{*i}([0]_i),\hspace{20pt}Z^1_i:=\tilde{\phi}_{*i}([U_i]_i),\hspace{20pt}Z^2_i:=\tilde{\phi}_{*i}([V_i]_i).\] Although it has been quite hard work arriving at the definition of $(\mathcal{T}_i)_{i\in\Sigma_*}$, the properties of this family of sets that we will need are derived without too many difficulties from the construction. 
The proposition we now prove includes the following results: the sets $(\mathcal{T}_i)_{i\in\Sigma_n}$ cover $\mathcal{T}$; $\mathcal{T}_i$ is simply a rescaled copy of $\tilde{\mathcal{T}}_i$ with $\mu$-measure $l(i)^2$; the overlaps of sets in the collection $(\mathcal{T}_i)_{i\in\Sigma_n}$ are small; and also describes various relationships between points of the form $\rho_{i}$, $Z_i^1$ and $Z_i^2$. This result is summarised in Figure \ref{crtd}. \begin{figure} \caption{Continuum random tree decomposition.} \label{crtd} \end{figure} {\propn \label{decompprop} $\mathbf{P}$-a.s., for every $i\in\Sigma_*$,\\ (a) $\mathcal{T}_i=\cup_{j\in\Sigma_n}\mathcal{T}_{ij}$, for all $n\geq 0$.\\ (b) $(\mathcal{T}_i,d_{\mathcal{T}})$ and $(\tilde{\mathcal{T}}_i,l(i)d_{\tilde{\mathcal{T}}_i})$ are isometric.\\ (c) $\rho_{i1}=\rho_{i2}=\rho_{i3}=b(\rho_{i},Z_i^1,Z_i^2)$.\\ (d) $Z_{ij}^1=\rho_{i},Z_i^1,Z_i^2$, for $j=1,2,3$ respectively.\\ (e) $\rho_{i}\not\in\mathcal{T}_{i2}\cup\mathcal{T}_{i3}$, $Z^1_{i}\not\in\mathcal{T}_{i1}\cup\mathcal{T}_{i3}$ and $Z^2_{i}\not\in\mathcal{T}_{i1}\cup\mathcal{T}_{i2}$.\\ (f) if $|j|=|i|$, but $j\neq i$, then $\mathcal{T}_i\cap\mathcal{T}_j=\{\rho_i\}$ when $j|(|j|-1)=i|(|i|-1)$, and $\mathcal{T}_i\cap\mathcal{T}_j=\emptyset$ or $\{Z_i^1\}$ otherwise.\\ (g) $\mu(\mathcal{T}_i)=l(i)^2$.} \begin{proof} By induction, it suffices to show that (a) holds for $n=1$. By definition, we have $\cup_{j\in \{1,2,3\}}\phi_{ij}([0,1])=[0,1)$, and so \[\tilde{\mathcal{T}}_i=\cup_{j\in \{1,2,3\}}\{[\phi_{ij}(t)]_i:\:t\in[0,1]\}=\cup_{j\in \{1,2,3\}}\tilde{\phi}_{ij}(\tilde{\mathcal{T}}_{ij}),\] where we apply the definition of $\tilde{\phi}_{ij}$ for the final equality. Applying $\tilde{\phi}_{*i}$ to both sides of this equation completes the proof of (a). Part (b) is an immediate consequence of the definition of $\mathcal{T}_i$ and the distance scaling property of $\tilde{\phi}_{*i}$ proved in Lemma \ref{scalyo}. 
Analogous to the remark made after Lemma \ref{aldousdecomp}, the point $[H^i]_i$ represents the branch-point of $[0]_i$, $[U_i]_i$ and $[V_i]_{i}$ in $\tilde{\mathcal{T}}_i$. Thus, since $\tilde{\phi}_{*i}$ is simply a rescaling map, we have that \[b(\rho_{i},Z_i^1,Z_i^2)=b(\tilde{\phi}_{*i}([0]_i),\tilde{\phi}_{*i}([U_i]_i),\tilde{\phi}_{*i}([V_i]_i))=\tilde{\phi}_{*i}([H^i]_i).\] Now, note that for any $j\in \{1,2,3\}$, we have by definition that $\phi_{ij}(0)\in\{H^i,H^i_-,H^i_+\}$, and so $[\phi_{ij}(0)]_i=[H^i]_i$. Consequently, \begin{equation}\label{groggy} \tilde{\phi}_{*i}([H^i]_i)=\tilde{\phi}_{*i}([\phi_{ij}(0)]_i)=\tilde{\phi}_{*ij}([0]_{ij})=\rho_{ij}, \end{equation} which proves (c). Parts (d) and (e) are easy to check from the construction using similar ideas, and so their proofs are omitted. Now note that, for $k\in\Sigma_*$, the decomposition of the excursions, and the fact that the local minima of a Brownian excursion are distinct, imply that for $j_1,j_2\in \{1,2,3\}$, $j_1\neq j_2$, we have $\tilde{\phi}_{kj_1}(\tilde{\mathcal{T}}_{kj_1})\cap\tilde{\phi}_{kj_2}(\tilde{\mathcal{T}}_{kj_2})=\{[H^k]_k\}$. Applying the injection $\tilde{\phi}_{*k}$ to this equation yields \begin{equation}\label{intersection} \mathcal{T}_{kj_1}\cap\mathcal{T}_{kj_2}=\{\tilde{\phi}_{*k}([H^k]_k)\}=\{\rho_{k1}\}, \end{equation} with the second equality following from (\ref{groggy}). This fact will allow us to prove (f) by induction on the length of $i$. Obviously, there is nothing to prove for $|i|=0$. Suppose now that $|i|\geq 1$ and the desired result holds for any index of length strictly less than $|i|$. Suppose $|j|=|i|$, but $j\neq i$, and define $k:=i|(|i|-1)$. If $j|(|j|-1)\neq k$, then the inductive hypothesis implies that $\mathcal{T}_i\cap\mathcal{T}_j\subseteq\mathcal{T}_k\cap\mathcal{T}_{j|(|j|-1)}\subseteq\{\rho_k,Z_k^1\}$, where we apply part (a) to obtain the first inclusion.
Using parts (d) and (e) of the proposition it is straightforward to deduce from this that $\mathcal{T}_i\cap\mathcal{T}_j\subseteq\{Z_i^1\}$ in this case. If $j|(|j|-1)=k$, then we can apply the equality at (\ref{intersection}) to obtain that $\mathcal{T}_i\cap\mathcal{T}_j=\{\rho_{k1}\}=\{\rho_i\}$, which completes the proof of part (f). Finally, $\mu$ is non-atomic and so $\mu(\mathcal{T}_i)=\mu(\mathcal{T}_i\backslash\{\rho_i,Z_i^1\})$. Hence, by the disjointness of the sets and the fact that $\mu$ is a probability measure, we have $1\geq\sum_{i\in\Sigma_n}\mu(\mathcal{T}_i\backslash\{\rho_i,Z_i^1\})=\sum_{i\in\Sigma_n}\mu(\mathcal{T}_i)$. Now, by definition, for each $i$, \[\mathcal{T}_i=\{\tilde{\phi}_{*i}([t]_i):\:t\in[0,1]\}=\{[t]:\:t\in\phi_{i|1}\circ\phi_{i|2}\circ\dots\circ\phi_{i}([0,1])\}.\] Thus, since $\mu$ is the projection of Lebesgue measure, this implies that $\mu(\mathcal{T}_i)$ is no smaller than $\ell(\phi_{i|1}\circ\phi_{i|2}\circ\dots\circ\phi_{i}([0,1]))$. By repeated application of (\ref{measurescal}), this lower bound is equal to $\Delta_{i|1}\Delta_{i|2}\dots\Delta_i=l(i)^2$. Now observe that, because $(\Delta_{i1},\Delta_{i2},\Delta_{i3})$ are Dirichlet $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ random variables, we have $\Delta_{i1}+\Delta_{i2}+\Delta_{i3}=1$ for every $i\in\Sigma_*$, and from this it is simple to show that $\sum_{i\in\Sigma_n}l(i)^2=1$. Hence $\sum_{i\in\Sigma_n}\mu(\mathcal{T}_i)\geq\sum_{i\in\Sigma_n}l(i)^2=1$. Thus $\sum_{i\in\Sigma_n}\mu(\mathcal{T}_i)$ is actually equal to 1, and moreover, (g) must hold. \end{proof} With regards to Figure \ref{crtd}, note that the fact that sets from $(\mathcal{T}_{ij})_{j\in \{1,2,3\}}$ only intersect at $\rho_{i1}$ is proved in part (f) of the above proposition, and so the diagram is representative of the set structure of the decomposition. 
Furthermore, it is clear that the sets $\mathcal{T}_i$ are all compact dendrites, because they are simply rescaled versions of the compact dendrites $\tilde{\mathcal{T}}_i$. The tree description of the inductive algorithm runs as follows. Suppose that the triples $((\mathcal{T}_i,l(i)^{-1}d_{\mathcal{T}},\mu^i))_{i\in\Sigma_n}$ are independent copies of $(\mathcal{T},d_{\mathcal{T}},\mu)$, independent of $\mathcal{F}_n$, where $\mu^i(A):=\mu(A)/\mu(\mathcal{T}_i)$ for measurable $A\subseteq\mathcal{T}_i$. Furthermore, suppose $\mathcal{T}_i$ has root $\rho_i$, and $Z^1_i$ and $Z^2_i$ are two $\mu^i$-random variables in $\mathcal{T}_i$. For $j=1,2,3$, define $\mathcal{T}_{ij}$ to be the component of $\mathcal{T}_i$ (when split at $b(\rho_i,Z_i^1,Z_i^2)$) containing $\rho_i,Z^1_i,Z^2_i$ respectively. Define $\Delta_{ij}:=\mu^i(\mathcal{T}_{ij})$, and equip the sets with the metrics $\Delta_{ij}^{-1/2}l(i)^{-1}d_{\mathcal{T}}=l(ij)^{-1}d_{\mathcal{T}}$ and measures $\mu^{ij}$, defined by \[\mu^{ij}(A):=\frac{\mu^i(A)}{\Delta_{ij}}=\frac{\mu(A)}{\mu(\mathcal{T}_{ij})}.\] Then the triples $((\mathcal{T}_i,l(i)^{-1}d_{\mathcal{T}},\mu^i))_{i\in\Sigma_{n+1}}$ are independent copies of the continuum random tree, independent of $\mathcal{F}_{n+1}$. Moreover, for $i\in\Sigma_{n+1}$, the algorithm gives us the root $\rho_i$ of $\mathcal{T}_i$ and also a $\mu^i$-random vertex, $Z_i^1$. To continue the algorithm, we pick independently for each $i\in\Sigma_{n+1}$ a second $\mu^i$-random vertex, $Z_i^2$. Note that picking this extra $\mu^i$-random vertex is the equivalent of picking the $U[0,1]$ random variable $V_i$ in the excursion picture. To complete this section, we introduce one further family of variables associated with the decomposition of the continuum random tree. From Proposition \ref{decompprop}(f), observe that the sets in $(\mathcal{T}_i)_{i\in\Sigma_n}$ only intersect at points of the form $\rho_i$ or $Z_i^1$. 
Consequently, it is possible to consider the two-point set $\{\rho_i,Z_i^1\}$ to be the boundary of $\mathcal{T}_i$. For $i\in \Sigma_*$, denote the renormalised distance between the boundary points by \[D_i:=l(i)^{-1}d_\mathcal{T}(\rho_i,Z_i^1).\] By construction, we have that $d_\mathcal{T}(\rho_i,Z_i^1)=l(i)d_{W^i}(0,U_i)$. Hence we can also write $D_i=d_{W^i}(0,U_i)$, and so, for each $n$, $(D_i)_{i\in\Sigma_n}$ is a collection of independent random variables, independent of $\mathcal{F}_n$. Moreover, each of the random variables $(D_i)_{i\in\Sigma_*}$ has the same distribution as $D_\emptyset$, which represents the height of a $\mu$-random vertex in $\mathcal{T}$. It is known that such a random variable has mean $\sqrt{\pi/8}$, and finite variance (see \cite{Aldous2}, Section 3.3). Finally, we have the following recursive relationship \begin{equation}D_i=w(i1)D_{i1}+w(i2)D_{i2},\label{drecurs} \end{equation} which may be deduced by decomposing the path from $\rho_i$ to $Z_i^1$ at $b(\rho_i,Z_i^1,Z_i^2)$, and applying parts (c) and (d) of Proposition \ref{decompprop}. \section{Self-similar dendrite in $\mathbb{R}^2$}\label{selfsimsec} The subset of $\mathbb{R}^2$ to which we will map the continuum random tree is a simple self-similar fractal, and is described as the fixed point of a collection of contraction maps. In particular, for $(x,y)\in\mathbb{R}^2$, set \[F_1(x,y):=\frac{1}{2}(1-x,y),\hspace{10pt}F_2(x,y):=\frac{1}{2}(1+x,-y),\] \[F_3(x,y):=\left(\frac{1}{2}+cy,cx\right),\] where $c\in(0,1/2)$ is a constant, and define $T$ to be the unique non-empty compact set satisfying $T=\bigcup_{i=1}^3F_i(T)$. The existence and uniqueness of $T$, which is shown in Figure \ref{ssd}, is guaranteed by an extension of the usual contraction principle for metric spaces, see \cite{Kigami}, Theorem 1.1.4.
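To make the contractions concrete, here is a small sketch (an illustration of our own, not part of the argument; the value $c=1/4$ is an arbitrary choice from the permitted range $(0,1/2)$) computing the images of the two points $(0,0)$ and $(1,0)$ under each map. All three maps send one of these points to $(\frac{1}{2},0)$, where the three first-level copies of $T$ are glued together.

```python
# Sketch (not from the paper): the three contractions defining the
# self-similar dendrite T; c = 1/4 is an arbitrary choice in (0, 1/2).
c = 0.25

def F1(p):
    x, y = p
    return ((1 - x) / 2, y / 2)

def F2(p):
    x, y = p
    return ((1 + x) / 2, -y / 2)

def F3(p):
    x, y = p
    return (0.5 + c * y, c * x)

# Images of the two boundary points (0,0) and (1,0) under each map.
V0 = [(0.0, 0.0), (1.0, 0.0)]
V1 = {F(p) for F in (F1, F2, F3) for p in V0}

# Each map sends one endpoint to the gluing point (1/2, 0);
# F3 lifts (1, 0) off the x-axis to (1/2, c).
assert (0.5, 0.0) in V1 and (0.5, c) in V1
assert len(V1) == 4
```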
For a wide class of self-similar fractals, which includes $T$, there is now a well-established approximation procedure for defining an intrinsic Dirichlet form and associated resistance metric on the relevant space, see \cite{Barlow} and \cite{Kigami} for details. However, to capture the randomness of the continuum random tree, we will need to randomise this construction, and this section is devoted to describing how this is done. \begin{figure} \caption{Self-similar dendrite.} \label{ssd} \end{figure} The scaling factors that will be useful in defining a sequence of compatible Dirichlet forms on subsets of $T$ will be the family $(w(i))_{i\in\Sigma_*\backslash\{\emptyset\}}$, as defined in the previous section. Although we would like to simply replace the deterministic scaling factors that are used in the method of \cite{Kigami} with this collection of random variables, following this course of action would result in a sequence of non-compatible quadratic forms, and taking limits would not be straightforward. To deal with the offending tail fluctuations caused by using random scaling factors, we introduce another collection of random variables \begin{equation}\label{ridef} R_i:=\lim_{n\rightarrow\infty}\sum_{j\in\{1,2\}^n}\frac{l(ij)}{l(i)},\hspace{20pt}i\in\Sigma_*, \end{equation} which we shall term resistance perturbations. Clearly these are identically distributed, and, by appealing to the independence properties of $(w(i))_{i\in\Sigma_*\backslash\{\emptyset\}}$, various questions regarding the convergence and distribution of the $(R_i)_{i\in\Sigma_*}$ may be answered by standard multiplicative cascade techniques. Consequently we provide only a brief explanation and suitable references for the proof of the following result. Crucially, part (d) reveals an important identity between the resistance perturbations and the family $(D_i)_{i\in\Sigma_*}$, which was defined from the continuum random tree.
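Before the lemma, one computation worth recording (a short check of our own, writing $w(ij)=l(ij)/l(i)=\Delta_{ij}^{1/2}$): the cascade defining $R_i$ is mean-one, which is what underlies part (b) below. Each marginal of a Dirichlet $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ vector is Beta$(\frac{1}{2},1)$, with density $\frac{1}{2}x^{-1/2}$ on $(0,1)$, so that \[\mathbf{E}\left(w(i1)+w(i2)\right)=2\,\mathbf{E}\sqrt{\Delta_{1}}=2\int_0^1\sqrt{x}\cdot\tfrac{1}{2}x^{-1/2}\,dx=1,\] and hence each partial sum in (\ref{ridef}) has expectation one.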
{\lem \label{resprop} (a) $\mathbf{P}$-a.s., the limit at (\ref{ridef}) exists in $(0,\infty)$ for every $i\in\Sigma_*$.\\ (b) $\mathbf{E}R_\emptyset =1$, and $\mathbf{E}R_\emptyset^d<\infty$ for every $d\geq 0$.\\ (c) $\mathbf{P}$-a.s., for every $i\in\Sigma_*$, the identity $R_i=w(i1)R_{i1}+w(i2)R_{i2}$ holds.\\ (d) $\mathbf{P}$-a.s., $(R_i)_{i\in\Sigma_*}\equiv (HD_i)_{i\in\Sigma_*}$, where $H:=\sqrt{8/\pi}$.} \begin{proof} The finite limit result of (a) and part (b) are immediate applications of Theorem 2.0 of \cite{Liu}. Part (c) is immediate from the definition of $(R_i)_{i\in\Sigma_*}$. Using the independence and identical distribution of the family of resistance perturbations, part (c) implies that $\mathbf{P}(R_i=0)=\mathbf{P}(R_i=0)^2$. Since $\mathbf{E}R_i=1$, it follows that $\mathbf{P}(R_i=0)=0$, which completes the proof of (a). Checking the $\mathbf{P}$-a.s. equivalence of (d) is straightforward. First, from an elementary application of a conditional version of Chebyshev's inequality it may be deduced that, for each $i$, \begin{eqnarray*} \lefteqn{\mathbf{P}\left(\left|HD_i-\sum_{j\in\{1,2\}^n}\frac{l(ij)}{l(i)}\right|>\lambda\:\middle|\:\mathcal{F}_{n+|i|}\right)}\\ &\leq&\lambda^{-2}\mathrm{Var}\left(\sum_{j\in\{1,2\}^n}\frac{l(ij)}{l(i)}(HD_{ij}-1)\:\middle|\:\mathcal{F}_{n+|i|}\right)\\ &\leq&H^{2}\lambda^{-2}\sum_{j\in\{1,2\}^n}\frac{l(ij)^2}{l(i)^2}\mathrm{Var}D_{\emptyset} \end{eqnarray*} where we have also used the facts that $\mathbf{E}(HD_i)=1$, the identity at (\ref{drecurs}) and the independence properties of the relevant random variables. Taking expectations yields \[\mathbf{P}\left(\left|HD_i-\sum_{j\in\{1,2\}^n}\frac{l(ij)}{l(i)}\right|>\lambda\right)\leq H^2\lambda^{-2}(\mathbf{E}(\Delta_1+\Delta_2))^n\mathrm{Var}D_\emptyset.\] As remarked in the previous section, $D_\emptyset$ has finite variance. Furthermore, a simple symmetry argument yields that the expectation on the right-hand side is precisely $2/3$.
Hence the sum of probabilities over $n$ is finite, and applying a Borel-Cantelli argument yields the result.\end{proof} The sequence of vertex sets upon which we will define our Dirichlet forms is the one commonly used for p.c.f.s.s. fractals; see \cite{Kigami} for more examples. Thus we shall not detail the reason for the choice, but start by simply stating that the boundary of $T$ may be taken to be the two-point set $V^0:=\{(0,0), (1,0)\}$. Our initial Dirichlet form is defined by \[D(f,f):=\sum_{x,y\in V^0,\:x\neq y}H(f(x)-f(y))^2,\hspace{20pt}\forall f\in C(V^0),\] where, for a countable set, $A$, we denote $C(A):=\{f:A\rightarrow \mathbb{R}\}$. The constant $H$ is defined as in the previous lemma and is necessary to achieve the correct scaling in the metric we shall later define. We now introduce an increasing family of subsets of $T$ by setting $V^n:=\bigcup_{i\in\Sigma_n}F_i(V^0)$, where for $i\in\Sigma_*$, $F_i:=F_{i_1}\circ\dots\circ F_{i_{|i|}}$. By defining \[\mathcal{E}^n(f,f):=\sum_{i\in\Sigma_n}\frac{1}{l(i)R_i}D(f\circ F_i,f\circ F_i),\hspace{20pt}\forall f\in C(V^n),\] we obtain Dirichlet forms on each of the appropriate finite subsets of $T$, $\mathbf{P}$-a.s. By applying the identity of Lemma \ref{resprop}(c), it is straightforward to check that the family $(V^n,\mathcal{E}^n)$ is compatible in the sense that the trace of $\mathcal{E}^{n+1}$ on $V^n$ is precisely $\mathcal{E}^n$ for each $n$ (cf. \cite{Kigami}, Definition 2.2.1), and from this fact we may take a limit in a sensible way. Specifically, let \[\mathcal{E}'(f,f):=\lim_{n\rightarrow\infty}\mathcal{E}^n(f,f),\hspace{20pt}\forall f\in\mathcal{F}',\] where $\mathcal{F}'$ is the set of functions on the countable set $V^*:=\bigcup_{n\geq 0}V^n$ for which this limit exists finitely.
Note that we have abused notation slightly by using the convention that if a form $\mathcal{E}$ is defined for functions on a set $A$ and $f$ is a function defined on $B\supseteq A$, then we write $\mathcal{E}(f,f)$ to mean $\mathcal{E}(f|_A, f|_A)$. The quadratic form $(\mathcal{E}', \mathcal{F}')$ is actually a resistance form (see \cite{Kigami}, Definition 2.3.1), and we can use it to define a (resistance) metric $R'$ on $V^*$ using a formula analogous to (\ref{dtrecover}): \[ R'(x,y)^{-1} = \inf\{\mathcal{E}'(f,f):f\in\mathcal{F}', f(x)=0,f(y)=1\}, \] for $x,y\in V^*$, $x\neq y$, and setting $R'(x,x)=0$. We note that for sets of the form $F_i(V_0)$ with $i\in\Sigma_*$ we have \begin{equation}\label{edgeres} R'(F_i(0,0),F_i(1,0))=\frac{l(i)R_i}{H}. \end{equation} Proving that this metric may be extended to $T$ in a natural way (at least $\mathbf{P}$-a.s.) requires a similar argument to that of the deterministic case, and so we omit the full details here. The most crucial fact that is needed is the following: \begin{equation}\label{diamdecay} \lim_{n\rightarrow\infty}\sup_{i\in\Sigma_n}\mathrm{diam}_{R'}F_i(V^*)=0, \hspace{20pt}\mathbf{P}\mbox{-a.s.}, \end{equation} where, in general, $\mathrm{diam}_{d}(A)$ represents the diameter of a set $A$ with respect to a metric $d$. The proof follows the chaining argument of \cite{Barlow}, Proposition 7.10, and full details of the proof of the following proposition can be found in \cite{thesis}. {\propn \label{rextend} There exists a unique metric $R$ on $T$ such that $(T,R)$ is the completion of $(V^*, R')$, $\mathbf{P}$-a.s. Moreover, the topology induced upon $T$ by $R$ is the same as that induced by the Euclidean metric, $\mathbf{P}$-a.s.} To complete this section, we introduce the natural stochastic self-similar measure on $T$, and note that $(\mathcal{E}',\mathcal{F}')$ may be extended to a Dirichlet form on the corresponding $L^2$ space.
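As an aside, the variational formula for the resistance metric can be sanity-checked on a toy network of our own choosing (a three-vertex path with arbitrary conductances; none of these objects come from the construction above): the infimum of the energy recovers the usual series law for resistances.

```python
# Toy check of R(x,y)^{-1} = inf{ E(f,f) : f(x)=0, f(y)=1 } on a
# three-vertex path 0 - 1 - 2 with arbitrarily chosen conductances.
cond = {(0, 1): 2.0, (1, 2): 4.0}   # resistances 1/2 and 1/4

def energy(f):
    # E(f,f) = sum over edges (u,v) of c_uv * (f(u) - f(v))**2
    return sum(c * (f[u] - f[v]) ** 2 for (u, v), c in cond.items())

# Minimise over the single free value f(1) by a fine grid search.
best = min(energy({0: 0.0, 1: t / 10000, 2: 1.0}) for t in range(10001))
R = 1.0 / best

# Series law for resistors: R = 1/2 + 1/4 = 0.75.
assert abs(R - 0.75) < 1e-3
```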
In particular, by proceeding exactly as in the deterministic case, see \cite{Kigami}, Section 1.4, it is possible to prove that, $\mathbf{P}$-a.s., there exists a unique non-atomic Borel probability measure, $\mu^T$ say, on $(T,R)$ that satisfies \begin{equation}\label{selfsimmeas} \mu^T(F_i(T))=l(i)^2,\hspace{20pt}\forall i\in\Sigma_*. \end{equation} Again, full details of this result are given in \cite{thesis}. If we extend $(\mathcal{E}',\mathcal{F}')$ in the natural way by setting $\mathcal{E}(f,f):=\mathcal{E}'(f,f)$, for $f \in \mathcal{F}:=\{f\in C(T):\:f|_{V^*}\in\mathcal{F}'\}$, where we use $C(T)$ to represent the continuous functions on $T$ (with respect to the Euclidean metric or $R$), then the following result holds (for a proof, see \cite{thesis}). {\propn \label{rdescrp} $\mathbf{P}$-a.s., $(\mathcal{E},\mathcal{F})$ is a local, regular Dirichlet form on $L^2(T,\mu^T)$ and, moreover, it may be associated with the metric $R$ through \[ R(x,y)^{-1} = \inf\{\mathcal{E}(f,f): f\in\mathcal{F}, f(x)=0,f(y)=1\}. \]} \section{Equivalence of measure-metric spaces}\label{isosec} In this section, we demonstrate how the decomposition of the continuum random tree presented in Section \ref{decompsec} allows us to define an isometry from the continuum random tree to the random self-similar dendrite, $(T,R)$, described in the previous section. An important consequence of the decomposition is that it allows us to label points in $\mathcal{T}$ using the shift space of infinite sequences, $\Sigma:=\{1,2,3\}^\mathbb{N}$. The following lemma defines the projection $\pi_{\mathcal{T}}:\Sigma\rightarrow\mathcal{T}$ that we will use, which is analogous to the well-known projection map for self-similar fractals, see \cite{Barlow}, Lemma 5.10. We include the result for the corresponding projection $\pi_T:\Sigma\rightarrow T$ to allow us to introduce the necessary notation, and provide a direct comparison of the two maps. 
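Before proceeding, a numerical sanity check of our own on the deterministic projection (again with the arbitrary choice $c=1/4$; `pi_T` is a hypothetical helper that approximates $\pi_T$ by composing a long prefix of the contractions): the three addresses $11\dot{2}$, $21\dot{2}$ and $31\dot{2}$ all project to $(\frac{1}{2},0)$.

```python
c = 0.25  # arbitrary choice in (0, 1/2)

def F(k, p):
    # The three contractions F_1, F_2, F_3 from the previous section.
    x, y = p
    if k == 1:
        return ((1 - x) / 2, y / 2)
    if k == 2:
        return ((1 + x) / 2, -y / 2)
    return (0.5 + c * y, c * x)

def pi_T(address, depth=60):
    # Approximate pi_T by applying the first `depth` letters to a base
    # point; the error is at most (1/2)**depth since each map contracts.
    p = (0.0, 0.0)
    for k in reversed(address[:depth]):
        p = F(k, p)
    return p

# The three critical addresses all project to the gluing point (1/2, 0).
for head in ([1, 1], [2, 1], [3, 1]):
    x, y = pi_T(head + [2] * 60)
    assert abs(x - 0.5) < 1e-12 and abs(y) < 1e-12
```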
Henceforth, we shall use the notation $T_i:=F_i(T)$, for $i\in\Sigma_*$, and assume that $\Sigma$ is endowed with the usual ultra-metric topology generated by the sets $\{ij:j\in\Sigma\}$, $i\in\Sigma_*$. {\lem \label{5.1} (a) There exists a map $\pi_{T}:\Sigma\rightarrow {T}$ such that $\pi_{{T}}\circ\sigma_i(\Sigma)={T}_i$, for every $i\in\Sigma_*$, where $\sigma_i:\Sigma\rightarrow\Sigma$ is defined by $\sigma_i(j)=ij$ for $j\in\Sigma$. Furthermore, this map is continuous, surjective and unique.\\ (b) $\mathbf{P}$-a.s., there exists a map $\pi_\mathcal{T}:\Sigma\rightarrow \mathcal{T}$ such that $\pi_{\mathcal{T}}\circ\sigma_i(\Sigma)=\mathcal{T}_i$, for every $i\in\Sigma_*$, where $\sigma_i$ is defined as in (a). Furthermore, this map is continuous, surjective and unique.} \begin{proof} Part (a) is proved in \cite{Barlow} and \cite{Kigami}, so we prove only (b). $\mathbf{P}$-a.s., for each $i\in\Sigma$, the sets in the collection $(\mathcal{T}_{i|n})_{n\geq 0}$ are compact, non-empty subsets of $(\mathcal{T},d_\mathcal{T})$, and by Proposition \ref{decompprop}(a), the sequence is decreasing. Hence, to show that $\cap_{n\geq0}\mathcal{T}_{i|n}$ contains exactly one point for each $i\in\Sigma$, $\mathbf{P}$-a.s., it will suffice to demonstrate that, $\mathbf{P}$-a.s., \begin{equation}\label{diamshrink} \lim_{n\rightarrow\infty}\sup_{i\in\Sigma_n}\mathrm{diam}_{d_\mathcal{T}}\mathcal{T}_i= 0. \end{equation} From Proposition \ref{decompprop}(b), we have that $\mathrm{diam}_{d_\mathcal{T}}\mathcal{T}_i=l(i)\mathrm{diam}_{d_{\tilde{\mathcal{T}}_i}}\tilde{\mathcal{T}}_i$. Using the similarity that this implies, the above result may be proved in the same way as (\ref{diamdecay}). To enable us to apply this argument, we note that $\mathrm{diam}_{d_{\tilde{\mathcal{T}}_i}}\tilde{\mathcal{T}}_i\leq2\sup_{t\in[0,1]}W^i_t$. 
The upper bound here is simply twice the maximum of a normalised Brownian excursion, and has finite positive moments of all orders as required (see \cite{Aldous2}, for example). Using the result of the previous paragraph, it is $\mathbf{P}$-a.s. possible to define a map $\pi_\mathcal{T}:\Sigma\rightarrow\mathcal{T}$ such that, for $i\in\Sigma$, $\{\pi_\mathcal{T}(i)\}=\bigcap_{n\geq0}\mathcal{T}_{i|n}$. That $\pi_\mathcal{T}$ satisfies the claims of the lemma, and is the unique map to do so, may be proved in exactly the same way as in the self-similar fractal case. \end{proof} Heuristically, the isometry that we will define between the two dendrites under consideration can be thought of as simply ``$\varphi=\pi_T\circ\pi_\mathcal{T}^{-1}$''. However, to introduce the map rigorously, so that it is well-defined, we first need to prove some simple, but fundamental, results about the geometry of the sets and the maps $\pi_T$ and $\pi_\mathcal{T}$. From here on we use the notation $\dot{2}=222\dots$. {\lem \label{lemmadrop} $\mathbf{P}$-a.s.,\\ (a) $\pi_\mathcal{T}^{-1}(\rho_{k1})=\{k11\dot{2},k21\dot{2},k31\dot{2}\}$, for all $k\in\Sigma_*$.\\ (b) For every $i,j\in\Sigma$, $\pi_\mathcal{T}(i)=\pi_{\mathcal{T}}(j)$ if and only if $\pi_T(i)=\pi_T(j)$.} \begin{proof} The proof we give holds on the $\mathbf{P}$-a.s. set for which the decomposition of $\mathcal{T}$ and the definition of $\pi_\mathcal{T}$ is possible. Recall that $\rho_{k1}=b(\rho_k,Z_k^1,Z_k^2)$. For this branch-point to equal $\rho_{k}$ or $Z_k^1$, we would require at least two of its arguments to be equal, which happens with zero probability. Thus $\rho_{k1}\in\mathcal{T}_{k}\backslash\{\rho_k,Z_k^1\}$, and so Proposition \ref{decompprop}(f) implies that if $\pi_\mathcal{T}(i)=\rho_{k1}$ for some $i\in\Sigma$, then $i||k|=k$. 
Given this fact, it is elementary to apply the defining property of $\pi_\mathcal{T}$ and the results about $\rho_i$ and $Z_i^1$ that were deduced in Proposition \ref{decompprop} to conclude that part (a) of this lemma also holds. It now remains to prove part (b). Fix $i,j\in\Sigma$, $i\neq j$, and let $m$ be the unique integer satisfying $i|m=j|m$ and $i_{m+1}\neq j_{m+1}$. Furthermore, define $k=i_1\dots i_m\in\Sigma_*$. Now, by standard arguments for p.c.f.s.s. fractals (see \cite{Kigami}, Proposition 1.2.5 and the subsequent remark) we have that $\pi_T(i)=\pi_T(j)$ implies that $\sigma^m(i),\sigma^m(j)\in\mathcal{C}$, where $\mathcal{C}$ is the critical set for the self-similar structure, $T$, as defined in \cite{Kigami}, Definition 1.3.4. Here, we use the notation $\sigma$ to represent the shift map, which is defined by $\sigma(i)=i_2i_3\dots$. Note that it is elementary to calculate that $\mathcal{C}=\{11\dot{2},21\dot{2},31\dot{2}\}$ for this structure. Thus $i,j\in\{k11\dot{2},k21\dot{2},k31\dot{2}\}$, and so, by part (a), $\pi_\mathcal{T}(i)=\rho_{k1}=\pi_\mathcal{T}(j)$, which completes one implication of the desired result. Now suppose $\pi_\mathcal{T}(i)=\pi_\mathcal{T}(j)$. From the definition of $\pi_\mathcal{T}$, we have that $\pi_\mathcal{T}(i)\in\mathcal{T}_{ki_{m+1}}$ and also $\pi_\mathcal{T}(j)\in\mathcal{T}_{kj_{m+1}}$. Hence $\pi_\mathcal{T}(i),\pi_\mathcal{T}(j)\in\mathcal{T}_{ki_{m+1}}\cap\mathcal{T}_{kj_{m+1}}=\{\rho_{k1}\}$, where we use (\ref{intersection}) to deduce the above equality. In particular, this allows us to apply part (a) to deduce that $i,j\in\{k11\dot{2},k21\dot{2},k31\dot{2}\}$. Applying the shift map to this $m$ times yields $\sigma^m(i),\sigma^m(j)\in\mathcal{C}$. It is easy to check that $\pi_T(\mathcal{C})$ contains only the single point $(\frac{1}{2},0)$. Thus $\pi_T(i)=F_k\circ \pi_T(\sigma^m(i))=F_k\circ\pi_T(\sigma^m(j))=\pi_T(j)$, which completes the proof.
\end{proof} We are now able to define the map $\varphi$ precisely on a $\mathbf{P}$-a.s. set by\begin{eqnarray*} \varphi:\mathcal{T}&\rightarrow& T\\ x&\mapsto & \pi_T(i),\hspace{20pt}\mbox{for any $i\in \Sigma$ with $\pi_\mathcal{T}(i)=x$.} \end{eqnarray*} By part (b) of the previous lemma, this is a well-defined injection. Furthermore, since $\pi_T$ is surjective, so is $\varphi$. Hence we have constructed a bijection from $\mathcal{T}$ to $T$ and it remains to show that it is also an isometry. We start by checking that $\varphi$ is continuous, which will enable us to deduce that it maps geodesic paths in $\mathcal{T}$ to geodesic paths in $T$. However, before we proceed with the lemma, we introduce the following notation for $x\in\mathcal{T}$, $n\geq 0$, \[\mathcal{T}_n(x):=\bigcup\{\mathcal{T}_i:\:i\in\Sigma_n,\:x\in\mathcal{T}_i\}.\] Define $(T_n(x))_{x\in T,n\geq 0}$ similarly, replacing $\mathcal{T}_i$ with $T_i$ in the above definition where appropriate. From the properties $\pi_T(i\Sigma)=T_i$, $\pi_\mathcal{T}(i\Sigma)=\mathcal{T}_i$, and the definition of $\varphi$, it is straightforward to deduce that \begin{equation}\label{setmap} \varphi(\mathcal{T}_i)=T_i,\hspace{20pt}\forall i\in\Sigma_*, \end{equation} on the $\mathbf{P}$-a.s. set that we can define all the relevant objects. {\lem $\mathbf{P}$-a.s., $\varphi$ is a continuous map from $(\mathcal{T},d_\mathcal{T})$ to $(T,R)$.} \begin{proof} By \cite{Kigami}, Proposition 1.3.6, for each $x\in T$, the collection $(T_n(x))_{n\geq 0}$ is a base of neighbourhoods of $x$ with respect to the Euclidean metric on $\mathbb{R}^2$. Since, by Proposition \ref{rextend}, $R$ is topologically equivalent to this metric, $\mathbf{P}$-a.s., then the same is true when we consider the collections of neighbourhoods with respect to $R$, $\mathbf{P}$-a.s. 
Similarly, we may use (\ref{diamshrink}), $\mathbf{P}$-a.s., to imitate the proofs of these results to deduce that $\mathbf{P}$-a.s., for each $x\in\mathcal{T}$, the collection $(\mathcal{T}_n(x))_{n\geq 0}$ is a base of neighbourhoods of $x$ with respect to $d_\mathcal{T}$. The remaining argument applies $\mathbf{P}$-a.s. Let $U$ be an open subset of $(T,R)$ and $x\in\varphi^{-1}(U)$. Define $y=\varphi(x)\in U$. Now, since $U$ is open, there exists an $n$ such that $T_n(y)\subseteq U$. Also, by (\ref{setmap}), for each $i\in\Sigma_n$, we have that $x\in \mathcal{T}_i$ implies that $y\in T_i$. Hence \[\varphi(\mathcal{T}_n(x))=\varphi\left(\cup_{i\in\Sigma_n,\:x\in\mathcal{T}_i} \mathcal{T}_i\right)\subseteq\cup_{i\in\Sigma_n,\:y\in T_i} T_i=T_n(y)\subseteq U.\] Consequently, $\mathcal{T}_n(x)\subseteq\varphi^{-1}(U)$. Since $\mathcal{T}_n(x)$ is a $d_\mathcal{T}$-neighbourhood of $x$ it follows that $\varphi^{-1}(U)$ is open in $(\mathcal{T},d_\mathcal{T})$. The lemma follows. \end{proof} We are now ready to proceed with the main result of this section. In the proof, we will use the notation $\gamma_{xy}^{\mathcal{T}}:[0,1]\rightarrow\mathcal{T}$ to denote a geodesic path (continuous injection) from $x$ to $y$, where $x$ and $y$ are points in the dendrite $\mathcal{T}$. Clearly, because $\varphi$ is a continuous injection, $\varphi\circ\gamma_{xy}^{\mathcal{T}}$ describes a geodesic path from $\varphi(x)$ to $\varphi(y)$ in $T$. {\thm $\mathbf{P}$-a.s., the map $\varphi$ is an isometry, and the metric spaces $(\mathcal{T},d_\mathcal{T})$ and $(T,R)$ are isometric.} \begin{proof} Obviously, the second statement of the theorem is an immediate consequence of the first. The following argument, in which we demonstrate that $\varphi$ is indeed an isometry, holds $\mathbf{P}$-a.s. 
Given $\varepsilon>0$, by (\ref{diamdecay}) and (\ref{diamshrink}), we can choose an $n\geq 1$ such that \[\sup_{i\in\Sigma_n}\mathrm{diam}_{d_\mathcal{T}}\mathcal{T}_i,\:\sup_{i\in\Sigma_n}\mathrm{diam}_R T_i<\frac{\varepsilon}{4}.\] Now, fix $x,y\in\mathcal{T}$, define $t_0:=0$ and set \[t_{m+1}:=\inf\{t>t_m\: :\:\gamma_{xy}^\mathcal{T}(t)\not\in\mathcal{T}_n(\gamma^\mathcal{T}_{xy}(t_m))\},\] where $\inf\emptyset:=1$. We will also denote $x_m:=\gamma_{xy}^\mathcal{T}(t_m)$. Since, for each $x'\in\mathcal{T}$, the collection $(\mathcal{T}_n(x'))_{n\geq 0}$ forms a base of neighbourhoods of $x'$, we must have that $t_{m-1}<t_m$ whenever $t_{m-1}<1$. We now claim that for any $m$ with $t_{m-1}<1$ there exists a unique $i(m)\in\Sigma_n$ such that \begin{equation}\label{claimjop} \gamma_{xy}^\mathcal{T}(t)\in\mathcal{T}_{i(m)},\hspace{20pt}t_{m-1}\leq t\leq t_m. \end{equation} Let $m$ be such that $t_{m-1}<1$. By the continuity of $\gamma_{xy}^\mathcal{T}$, we have that $x_m\in\mathcal{T}_n(x_{m-1})$, and hence there exists an $i(m) \in\Sigma_n$ such that $x_{m-1},x_m\in\mathcal{T}_{i(m)}$. Clearly, the image of $\gamma^\mathcal{T}_{xy}$ restricted to $t\in[t_{m-1},t_m]$ is the same as the image of $\gamma_{x_{m-1}x_{m}}^\mathcal{T}$, which describes the unique path in $\mathcal{T}$ from $x_{m-1}$ to $x_m$. Note also that $\mathcal{T}_{i(m)}$ is a path-connected subset of $\mathcal{T}$, and so the path from $x_{m-1}$ to $x_m$ lies in $\mathcal{T}_{i(m)}$. Consequently, the set $\gamma_{xy}^\mathcal{T}([t_{m-1},t_m])$ is contained in $\mathcal{T}_{i(m)}$. Thus to prove the claim at (\ref{claimjop}), it remains to show that $i(m)$ is unique. Suppose that there exists $j\in\Sigma_n$, $j\neq i(m)$ for which the inclusion at (\ref{claimjop}) holds. Then the uncountable set $\gamma_{xy}^\mathcal{T}([t_{m-1},t_m])$ is contained in $\mathcal{T}_{i(m)}\cap\mathcal{T}_j$, which, by Proposition \ref{decompprop}(f), contains at most two points. Hence no such $j$ can exist.
Now assume that $m_1<m_2$ and that $t_{m_2-1}<1$. Suppose that $i({m_1})=i({m_2})$; then $x_{m_1-1},x_{m_2}\in\mathcal{T}_{i({m_1})}$. By a similar argument to the previous paragraph, it follows that $\gamma_{xy}^\mathcal{T}([t_{m_1-1},t_{m_2}])\subseteq\mathcal{T}_{i({m_1})}$. By definition, this implies that $t_{m_1}\geq t_{m_2}$, which cannot be true. Consequently, we must have that $i({m_1})\neq i({m_2})$. Since $\Sigma_n$ is a finite set, it follows from this observation that $N:=\inf\{m:\:t_{m}=1\}$ is finite, and moreover, the elements of $(i(m))_{m=1}^N$ are distinct. The conclusion of the previous paragraph provides us with a useful decomposition of the path from $x$ to $y$, which we will be able to use to complete the proof. The fact that $d_\mathcal{T}$ is a shortest path metric allows us to write $d_\mathcal{T}(x,y)=\sum_{m=1}^Nd_{\mathcal{T}}(x_{m-1},x_m)$. For $m\in\{2,\dots,N-1\}$, we have that $i({m})\neq i(m+1)$, and so by applying Proposition \ref{decompprop}(f), we can deduce that $x_{m}\in\mathcal{T}_{i(m)}\cap\mathcal{T}_{i({m+1})}\subseteq\{\rho_{i(m)},Z_{i(m)}^1\}$. Similarly, we have $x_{m-1}\in\mathcal{T}_{i(m-1)}\cap\mathcal{T}_{i(m)}\subseteq\{\rho_{i(m)},Z_{i(m)}^1\}$. Thus, by the injectivity of $\gamma_{xy}^\mathcal{T}$, we must have that $\{x_{m-1},x_{m}\}=\{\rho_{i(m)},Z_{i(m)}^1\}$, which implies $d_\mathcal{T}(x_{m-1},x_m)=d_{\mathcal{T}}(\rho_{i(m)},Z_{i(m)}^1)=l(i(m))D_{i(m)}$. Hence we can conclude that \begin{equation}\label{dtsplit} d_\mathcal{T}(x,y)-\sum_{m=2}^{N-1}l(i(m))D_{i(m)}=d_{\mathcal{T}}(x_0,x_1)+d_\mathcal{T}(x_{N-1},x_N). \end{equation} As remarked before this theorem, $\varphi\circ\gamma_{xy}^\mathcal{T}$ is a geodesic path from $\varphi(x)$ to $\varphi(y)$. Thus the shortest path property of $R$ allows us to write \begin{equation}\label{rsplit} R(\varphi(x),\varphi(y))=\sum_{m=1}^N R(\varphi(x_{m-1}),\varphi(x_m)). \end{equation} Let $m\in\{2,\dots,N-1\}$.
By applying $\varphi$ to the expression for $\{x_{m-1},x_m\}$ that was deduced above, we obtain that $\{\varphi(x_{m-1}),\varphi({x_m})\}=\{\varphi(\rho_{i(m)}),\varphi(Z_{i(m)}^1)\}$. Now, part (a) of Lemma \ref{lemmadrop} implies that \[\varphi(\rho_{i(m)})=\pi_T(k11\dot{2})=F_{k}(\pi_T(11\dot{2}))=F_k((\frac{1}{2},0))=F_{i(m)}((0,0)),\] where $k:=i(m)|(|i(m)|-1)$. In Proposition \ref{decompprop}(d) it was shown that $Z_i^1=Z_{i2}^1$, for every $i\in\Sigma_*$. It follows that $i(m)\dot{2}\in\pi_{\mathcal{T}}^{-1}(Z_{i(m)}^1)$, and so \[\varphi(Z^1_{i(m)})=\pi_T(i(m)\dot{2})=F_{i(m)}(\pi_T(\dot{2}))=F_{i(m)}((1,0)).\] Thus $R(\varphi(x_{m-1}),\varphi(x_m))=R(F_{i(m)}((0,0)),F_{i(m)}((1,0)))$, and so from the expression at (\ref{edgeres}), we can deduce that $R(\varphi(x_{m-1}),\varphi(x_m))=\sqrt{{\pi}/{8}}l(i(m))R_{i(m)}$, which, by Lemma \ref{resprop}(d), is equal to $l(i(m))D_{i(m)}$. Substituting this into (\ref{rsplit}), and combining the resulting equation with the equality at (\ref{dtsplit}) yields \begin{eqnarray*} \lefteqn{|d_\mathcal{T}(x,y)-R(\varphi(x),\varphi(y))|\leq}\\ &&\sum_{m\in\{1,N\}}\left(d_\mathcal{T}(x_{m-1},x_m)+R(\varphi(x_{m-1}),\varphi(x_m))\right). \end{eqnarray*} Now, $x_0$ and $x_1$ are both contained in $\mathcal{T}_{i(1)}$, and so the choice of $n$ implies that $d_\mathcal{T}(x_0,x_1)<\varepsilon/4$. Furthermore, $\varphi(x_0)$ and $\varphi(x_1)$ are both contained in $\varphi(\mathcal{T}_{i(1)})=T_{i(1)}$, and so we also have $R(\varphi(x_{0}),\varphi(x_1))<\varepsilon/4$. Thus the summand with $m=1$ is bounded by $\varepsilon/2$. Similarly for $m=N$. Hence $|d_\mathcal{T}(x,y)-R(\varphi(x),\varphi(y))|<\varepsilon$. Since the choice of $x,y$ and $\varepsilon$ was arbitrary, the proof is complete. 
\end{proof} The final result that we present in this section completes the proof of the fact that $(\mathcal{T},d_{\mathcal{T}},\mu)$ and $(T,R,\mu^T)$ are equivalent measure-metric spaces, where we continue to use the notation $\mu^T$ to represent the stochastic self-similar measure on $(T,R)$, as defined in Section \ref{selfsimsec}. {\thm $\mathbf{P}$-a.s., the probability measures $\mu$ and $\mu^T\circ\varphi$ agree on the Borel $\sigma$-algebra of $(\mathcal{T},d_\mathcal{T})$.} \begin{proof} That both $\mu^T\circ\varphi$ and $\mu$ are non-atomic Borel probability measures on $(\mathcal{T},d_\mathcal{T})$, $\mathbf{P}$-a.s., is obvious. Recall from Proposition \ref{decompprop}(g) that $\mu(\mathcal{T}_i)=l(i)^2$, for every $i\in\Sigma_*$, $\mathbf{P}$-a.s. Furthermore, from the identities of (\ref{selfsimmeas}) and (\ref{setmap}), we also have $\mu^T\circ\varphi(\mathcal{T}_i)=\mu^T(T_i)=l(i)^2$, for every $i\in\Sigma_*$, $\mathbf{P}$-a.s. The result is readily deduced from these facts. \end{proof} \section{Spectral asymptotics}\label{specsec} Due to the construction of the natural Dirichlet form on the continuum random tree from the natural metric on the space, the results of the previous section imply that the spectrum of $(\mathcal{E}_\mathcal{T},\mathcal{F}_\mathcal{T},\mu)$ is $\mathbf{P}$-a.s. identical to that of $(\mathcal{E},\mathcal{F},\mu^T)$, the random Dirichlet form and self-similar measure on $T$, as defined in Section \ref{selfsimsec}. Consequently, to deduce the results of the introduction, it will suffice to show that the analogous results hold for $(\mathcal{E},\mathcal{F},\mu^T)$, which is possible using techniques developed for related self-similar fractals. For this argument, it will be helpful to apply various decomposition and comparison inequalities for the Dirichlet and Neumann eigenvalues associated with this Dirichlet form, and we shall start by introducing these. 
To define the Dirichlet eigenvalues for $(\mathcal{E},\mathcal{F},\mu^T)$, we first introduce the related Dirichlet form $(\mathcal{E}^D,\mathcal{F}^D)$ by setting \[\mathcal{E}^D(f,f):=\mathcal{E}(f,f),\hspace{20pt}\forall f\in\mathcal{F}^D,\] where \[\mathcal{F}^D:=\{f\in\mathcal{F}:\:f|_{V^0}=0\}.\] The Dirichlet eigenvalues of the original form, $(\mathcal{E},\mathcal{F},\mu^T)$, are then defined to be the eigenvalues of $(\mathcal{E}^D,\mathcal{F}^D, \mu^T)$. We shall use the term Neumann eigenvalues to refer to the usual eigenvalues of $(\mathcal{E},\mathcal{F},\mu^T)$, defined analogously to (\ref{evaluedef}). Before continuing, note that the description of $R$ in Proposition \ref{rdescrp} easily leads to the well-known inequality \begin{equation}\label{resineq} |f(x)-f(y)|^2\leq R(x,y)\mathcal{E}(f,f),\hspace{20pt}\forall x,y\in T,\:f\in\mathcal{F}. \end{equation} By applying this fact (and using $\|\cdot\|_p$ to represent the corresponding $L^p(T,\mu^T)$ norm), we find that, for $x\in T$, $f\in\mathcal{F}$, \[|f(x)|^2 \leq 2\int_T (|f(x)-f(y)|^2+|f(y)|^2)d\mu^T(y)\leq 2 \mathrm{diam}_R T \mathcal{E}(f,f)+2\|f\|^2_2,\] and so, $\mathbf{P}$-a.s., $\|f\|_\infty^2\leq C(\mathcal{E}(f,f)+\|f\|^2_2)$, for some constant $C$. Combining this inequality with (\ref{resineq}), we can imitate the argument of \cite{KigLap}, Lemma 5.4, to deduce that the natural inclusion map from $(\mathcal{F},\mathcal{E}+\|\cdot\|_2^2)$ to $L^2(T,\mu^T)$ is a compact operator. It follows that the Dirichlet and Neumann spectra of $(\mathcal{E},\mathcal{F},\mu^T)$ are discrete, and so the associated eigenvalue counting functions, $N^D(\lambda)$ and $N^N(\lambda)$, are well-defined and finite for all $\lambda\in\mathbb{R}$.
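A discrete toy analogue of our own (the path-graph Laplacian, whose Dirichlet and Neumann spectra have classical closed forms; it is not one of the forms on $T$) illustrates how imposing boundary conditions on a two-point boundary, as with $V^0$ here, shifts the counting functions by at most $2$:

```python
import math

n = 12  # number of vertices on the path; the two endpoints form the boundary

# Neumann spectrum of the path-graph Laplacian (classical closed form).
neumann = [2 - 2 * math.cos(math.pi * k / n) for k in range(n)]
# Dirichlet spectrum: Laplacian restricted to the n-2 interior vertices.
dirichlet = [2 - 2 * math.cos(math.pi * k / (n - 1)) for k in range(1, n - 1)]

def count(spec, lam):
    # eigenvalue counting function N(lam) = #{eigenvalues <= lam}
    return sum(1 for e in spec if e <= lam)

# The two counting functions differ by at most 2, the size of the boundary.
for lam in [j / 10 for j in range(45)]:
    nd, nn = count(dirichlet, lam), count(neumann, lam)
    assert nd <= nn <= nd + 2
```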
From the definitions in the previous paragraph, we can easily see that $N(\lambda)=N^N(\lambda)$, $\mathbf{P}$-a.s., and so, using the terminology introduced above, the eigenvalues of $(\mathcal{E}_\mathcal{T}, \mathcal{F}_\mathcal{T}, \mu)$ may be thought of as Neumann eigenvalues. Of course, this definition does not provide any justification for using the name Neumann, so we will now give an explanation of why it is sensible to do so. Since we will not actually apply this interpretation, we only sketch the relevant results. Analogously to \cite{Kigami}, Definition 3.7.1, let $\mathcal{D}$ be the collection of functions $f\in C(T)$ such that there exists a function $g\in C(T)$ satisfying \begin{equation}\label{limit}\lim_{n\rightarrow\infty}\max_{x\in V^n\backslash V^0}\left|\mu_{n,x}^{-1}\Delta_n f(x)-g(x)\right|=0,\end{equation} where $\Delta_n$ is the discrete Laplacian on $V^n$ associated with $\mathcal{E}^n$, $\mu_{n,x}:=\int_T\psi_x^nd\mu^T$, and $\psi_x^n$ is the unique harmonic extension (with respect to $(\mathcal{E},\mathcal{F})$) of $\mathbf{1}_{\{x\}}$ from $V^n$ to $T$. For a function $f\in \mathcal{D}$ satisfying (\ref{limit}), we write $\Delta f =g$, so that $\Delta$ is essentially the limit operator of the rescaled discrete Laplacians $\Delta_n$. Furthermore, for $f\in\mathcal{D}$, we can also define a function, $df$ say, with domain $V^0$, which represents the Neumann derivative on the boundary of $T$ (similarly to \cite{Kigami}, Definition 3.7.6) by setting $(df)(x):=\lim_{n\rightarrow\infty}-\Delta_n f(x)$. By using a Green's function argument as in the proof of \cite{Kigami}, Theorem 3.7.9, it is possible to deduce that the Friedrichs extension of $\Delta$ on $\mathcal{D}_D:=\{f\in\mathcal{D}:\:f|_{V^0}=0\}$ is precisely $\Delta_D$, the Laplacian associated with $(\mathcal{E}^D, \mathcal{F}^D, \mu^T)$.
Similarly, the Friedrichs extension of $\Delta$ on $\mathcal{D}_N:=\{f\in\mathcal{D}:\:(df)(x)=0,\:\forall x\in V^0\}$ is $\Delta_N$, the Laplacian associated with $(\mathcal{E},\mathcal{F}, \mu^T)$. Note that the construction of the relevant Green's function may be accomplished more easily than in \cite{Kigami} by, instead of imitating the analytic definition used there, applying a probabilistic definition, with $g(x,y)$ being the Green's kernel for the Markov process associated with $(\mathcal{E}, \mathcal{F})$ killed on hitting $V^0$ (the existence of which follows from an argument similar to that used in \cite{Kumagai}, Proposition 4.2). Applying the relationships between the various operators introduced in the previous paragraph (and also the continuity of the Green's function), we are able to emulate the argument of \cite{Kigami}, Proposition 4.1.2, to deduce that the eigenvalues of $(\mathcal{E}^D, \mathcal{F}^D, \mu^T)$ are precisely the solutions to \[-\Delta u=\lambda u,\hspace{20pt}u|_{V^0}=0,\] for some eigenfunction $u\in\mathcal{D}$. Furthermore, the eigenvalues of $(\mathcal{E}, \mathcal{F}, \mu^T)$ are precisely the solutions to \begin{equation}\label{Neu} -\Delta u=\lambda u,\hspace{20pt}(du)|_{V^0}=0, \end{equation} for some eigenfunction $u\in\mathcal{D}$. From these characterisations, it is clear that the Dirichlet and Neumann eigenvalues of $(\mathcal{E}, \mathcal{F}, \mu^T)$ that we have defined are exactly the eigenvalues of $-\Delta$ with the usual Dirichlet (zero function on boundary) and Neumann (zero derivative on boundary) boundary conditions respectively, where the analytic boundary of $T$ is taken to be $V^0$. By mapping these results to the continuum random tree, we are able to deduce, $\mathbf{P}$-a.s., the existence of a Laplace operator $\Delta_\mathcal{T}$ on $\mathcal{T}$, and also a Neumann boundary derivative, so that the eigenvalues of $(\mathcal{E},\mathcal{F},\mu)$ satisfy a result analogous to (\ref{Neu}). 
In the continuum random tree setting, observe that the natural analytic boundary is the two-point set consisting of the root and one $\mu$-random vertex, $\{\rho, Z^1_\emptyset\}$. Consequently, the results we prove also demonstrate that the Dirichlet spectrum corresponding to this boundary satisfies the same asymptotics as the original (Neumann) spectrum. Another point of interest is that by replicating the argument of \cite{Kigami}, Theorem 3.7.14, we are able to uniquely solve the Dirichlet problem for Poisson's equation (with respect to $\Delta_\mathcal{T}$) on the continuum random tree, again taking $\{\rho,Z^1_\emptyset\}$ as our boundary. We now return to our main argument. From the construction of $(\mathcal{E},\mathcal{F})$, it is possible to deduce the following self-similar decomposition using the same proof as in Lemma 4.5 of \cite{Hamasymp}. {\lem $\mathbf{P}$-a.s., we have, for every $n\geq 1$, \[\mathcal{E}(f,g)=\sum_{i\in\Sigma_n}\frac{1}{l(i)}\mathcal{E}_i(f\circ F_i,g\circ F_i),\hspace{20pt}\forall f,g\in\mathcal{F},\] where $(\mathcal{E}_i)_{i\in\Sigma_n}$ are independent copies of $\mathcal{E}$, independent of $\mathcal{F}_n$.} The operators of the above lemma each have a Dirichlet version, $\mathcal{E}_i^D$, defined in the same way as $\mathcal{E}^D$ was from $\mathcal{E}$. We shall denote by $N^D_i(\lambda)$ and $N^N_i(\lambda)$ the corresponding Dirichlet and Neumann eigenvalue counting functions. {\lem \label{6.2} $\mathbf{P}$-a.s., we have, for every $\lambda>0$, \begin{equation}\label{ineq}\sum_{i=1}^3N_i^D(\lambda w(i)^3)\leq N^D(\lambda)\leq N^N(\lambda)\leq \sum_{i=1}^3 N_{i}^N(\lambda w(i)^3),\end{equation} and also $N^D(\lambda)\leq N^N(\lambda)\leq N^D(\lambda)+2$.} \begin{proof} Since the proof of this result can be completed by repeating the argument of \cite{Hamasymp}, Lemma 5.1, we will only present a brief outline here.
First, define a quadratic form $(\tilde{\mathcal{E}}^D,\tilde{\mathcal{F}}^D)$ by setting $\tilde{\mathcal{E}}^D=\mathcal{E}^D|_{\tilde{\mathcal{F}}^D\times\tilde{\mathcal{F}}^D}$, where $\tilde{\mathcal{F}}^D$ is the set $\{f\in\mathcal{F}^D:\:f|_{V^1}=0\}$. It is straightforward to check that $(\tilde{\mathcal{E}}^D,\tilde{\mathcal{F}}^D)$ is a local Dirichlet form on $L^2(T,\mu^T)$ and the natural inclusion map from $\tilde{\mathcal{F}}^D$ to $L^2(T,\mu^T)$ is compact, $\mathbf{P}$-a.s. Thus we can define the related eigenvalue counting function $\tilde{N}^D(\lambda)$ and, by \cite{Hamasymp}, Lemma 5.4, we have $\tilde{N}^D(\lambda)\leq N^D(\lambda)$ for all $\lambda$, $\mathbf{P}$-a.s. Now, fix $i\in\{1,2,3\}$ and suppose $f$ is an eigenfunction of $(\mathcal{E}_i^D,\mathcal{F}_i^D,\mu^T_i)$ with eigenvalue $\lambda w(i)^3$, where the domain $\mathcal{F}_i^D$ of $\mathcal{E}_i^D$ is defined analogously to $\mathcal{F}^D$ and $\mu^T_i:=\mu^T\circ F_i(\cdot) /\mu^T(T_i)$. If we set \[g(x):=\left\{\begin{array}{ll} f\circ F_i^{-1} (x), & \mbox{for }x\in T_i, \\ 0 & \mbox{otherwise,} \end{array}\right.\] then, by definition, we have that, for $h\in\tilde{\mathcal{F}}^D$, \[\tilde{\mathcal{E}}^D(g,h)=\frac{1}{w(i)}\mathcal{E}_i^D(f,h\circ F_i)=\lambda w(i)^2\int_T f (h\circ F_i)d\mu^T_i=\lambda \int_T gh d\mu^T.\] Thus $g$ is an eigenfunction of $(\tilde{\mathcal{E}}^D,\tilde{\mathcal{F}}^D,\mu^T)$ with eigenvalue $\lambda$. Hence it is clear that $\sum_{i=1}^3N_i^D(\lambda w(i)^3)\leq \tilde{N}^D(\lambda)$ for all $\lambda$, $\mathbf{P}$-a.s., which completes the proof of the left-hand inequality of (\ref{ineq}). The right-hand inequality of (\ref{ineq}) is proved using a similar decomposition of a suitable enlargement of $(\mathcal{E},\mathcal{F})$, see \cite{Hamasymp}, Proposition 5.2, for details. The remaining parts of the lemma are a simple application of Dirichlet-Neumann bracketing, see \cite{Hamasymp}, Lemma 5.4, for example. 
\end{proof} For the remainder of this section, we will continue to follow \cite{Hamasymp}, and proceed by defining a time-shifted general branching process, $X$. Although the results we shall prove will be in terms of $N^D$, the second set of inequalities in the above lemma imply that the asymptotics of $N^N$, and consequently $N$, are the same. Define the functions $(\eta_i)_{i\in\Sigma_*}$ by, for $t\in\mathbb{R}$, \[\eta_i(t):=N_i^D(e^t)-\sum_{j=1}^3N_{ij}^D(e^t w(ij)^3),\] and let $\eta:=\eta_\emptyset$. Clearly, the paths of $\eta_i(t)$ are cadlag, and Lemma \ref{6.2} implies that the functions take values in $[0,6]$, $\mathbf{P}$-a.s. If we set $X_i(t):=N_i(e^t)$, and $X:=X_\emptyset$, then it is possible to check that the following evolution equation holds: \begin{equation}\label{evo} X(t)=\eta(t)+\sum_{i=1}^3X_i(t+3\ln w(i)); \end{equation} and also that \begin{equation}\label{alter} X(t)=\sum_{i\in\Sigma_*}\eta_i(t+3\ln l(i)). \end{equation} The equation at (\ref{evo}) is particularly important, as it will allow us to use branching process and renewal techniques to obtain the results of interest. We start by investigating the mean behaviour of $X$, and will now introduce the notation necessary to do this. Set $\gamma=2/3$, and define, for $t\in\mathbb{R}$, \begin{equation}\label{mdef} m(t):=e^{-\gamma t}\mathbf{E}X(t),\hspace{20pt}u(t):=e^{-\gamma t}\mathbf{E}\eta(t). \end{equation} Furthermore, define the measure $\nu$ by $\nu([0,t])=\sum_{i=1}^3 \mathbf{P}(w(i)^3\geq e^{-t})$, and let $\nu_\gamma$ be the measure that satisfies $\nu_\gamma(dt)=e^{-\gamma t}\nu(dt)$. Some properties of these objects are collected in the following lemma. 
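The normalisation $\gamma=2/3$ can be probed numerically. Purely as an illustration, assume that the squared weights $(w(1)^2,w(2)^2,w(3)^2)$ follow a Dirichlet$(\frac12,\frac12,\frac12)$ law (the choice of parameters is an assumption of this sketch, not taken from the text). Then the total mass of $\nu_\gamma$ is $\sum_i\mathbf{E}(w(i)^{3\gamma})=\sum_i\mathbf{E}(w(i)^2)=1$, so $\nu_\gamma$ is a probability measure, while $\mathbf{E}(w(1)^{3\theta})<\frac13$ for $\theta>\gamma$, the bound used in the proof of Lemma \ref{props}:

```python
import random

# Monte Carlo sanity check on nu_gamma with gamma = 2/3.  Assumption of
# this sketch (not stated in the text): the squared weights
# (w(1)^2, w(2)^2, w(3)^2) follow a Dirichlet(1/2,1/2,1/2) law.  Then
# sum_i E[w(i)^(3*gamma)] = sum_i E[w(i)^2] = 1, so nu_gamma is a
# probability measure, while for theta = 1 > gamma one has
# E[w(1)^(3*theta)] < E[w(1)^2] = 1/3.
random.seed(1)
gamma = 2.0 / 3.0
N = 50000

total = 0.0          # Monte Carlo estimate of sum_i E[w(i)^(3*gamma)]
theta_moment = 0.0   # Monte Carlo estimate of E[w(1)^3]  (theta = 1)
for _ in range(N):
    g = [random.gammavariate(0.5, 1.0) for _ in range(3)]
    s = sum(g)
    masses = [x / s for x in g]               # (w(1)^2, w(2)^2, w(3)^2)
    total += sum(m ** (3 * gamma / 2) for m in masses)
    theta_moment += masses[0] ** 1.5          # w(1)^3 = (w(1)^2)^(3/2)

print(total / N)         # total mass of nu_gamma: should be 1
print(theta_moment / N)  # strictly below 1/3
```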
{\lem \label{props} (a) The function $m$ is bounded and measurable, and $m(t)\rightarrow0$ as $t\rightarrow -\infty$.\\ (b) The function $u$ is in $L^1(\mathbb{R})$ and $u(t)\rightarrow0$ as $|t|\rightarrow \infty$.\\ (c) The measure $\nu_\gamma$ is a Borel probability measure on $[0,\infty)$, and the integral $\int_0^\infty t\nu_\gamma(dt)$ is finite.} \begin{proof} A fact that may be deduced from (\ref{resineq}), and will be important in proving parts (a) and (b), is that $\mathbf{P}$-a.s., $\|f\|_2^2\leq \mathcal{E}(f,f)\mathrm{diam}_R T$, for every $f\in\mathcal{F}^D$. In particular, this implies that the bottom of the Dirichlet spectrum is bounded below by $(\mathrm{diam}_R T)^{-1}$, and consequently we must have $\eta(t)=0$ for $t<-\ln\mathrm{diam}_RT$, $\mathbf{P}$-a.s. Hence, \begin{equation}\label{etazeronound} \mathbf{E}\eta(t)\leq 6 \mathbf{P}(t\geq -\ln\mathrm{diam}_RT). \end{equation} Applying this result, the alternative representation of $X$ at (\ref{alter}), and the independence of $N_i^D$ and $\mathcal{F}_{|i|}$, we obtain \begin{eqnarray*} m(t)&=&\sum_{i\in\Sigma_*}e^{-\gamma t}\mathbf{E}\eta_i(t+3\ln l(i))\\ &\leq&6e^{-\gamma t}\mathbf{E}(\#\{i\in\Sigma_*:\:t+3\ln l(i)\geq-\ln\mathrm{diam}_{R'} T\}), \end{eqnarray*} where $R'$ is an independent copy of $R$. Applying standard branching process techniques to the process with particles $i\in\Sigma_*$, where $i\in\Sigma_*$ has offspring $ij$ at time $-\ln w(ij)$ after its birth, $j=1,2,3$, it is possible to show that $\mathbf{E}(\#\{i\in\Sigma_*:\:\ln l(i)\geq -t\})\leq Ce^{2t}$, for every $t\in\mathbb{R}$; the exponent 2 that arises is the Malthusian parameter for the relevant branching process. 
Thus, for $t\in\mathbb{R}$, \[m(t)\leq 6C\mathbf{E}((\mathrm{diam}_RT)^\gamma).\] Since $\mathrm{diam}_RT\buildrel{d}\over{=}\mathrm{diam}_{d_\mathcal{T}}\mathcal{T}$, and, as remarked in the proof of Lemma \ref{5.1}, $\mathrm{diam}_{d_\mathcal{T}}\mathcal{T}$ has finite positive moments, we are able to deduce that the right-hand side of the above inequality is finite. Thus, $m$ is bounded. The measurability of $m$ follows from the fact that $X$ has cadlag paths, $\mathbf{P}$-a.s. To demonstrate the limit result, we recall the bound at (\ref{etazeronound}), which we apply to (\ref{alter}) to obtain \[m(t)\leq \sum_{i\in\Sigma_*}6e^{-\gamma t}\mathbf{P}(l(i)^3\mathrm{diam}_{R'}T\geq e^{-t}),\] where $R'$ is again an independent copy of $R$. Applying Markov's inequality to this expression, we find that, for $\theta>0$, \begin{eqnarray*} m(t)&\leq&\sum_{i\in\Sigma_*} 6 e^{(\theta-\gamma)t}\left(\mathbf{E}(w(i)^{3\theta})\right)^{|i|}\mathbf{E}((\mathrm{diam}_RT)^\theta)\\ &=& 6 e^{(\theta-\gamma)t} \mathbf{E}((\mathrm{diam}_RT)^\theta)\sum_{n\geq 0}3^n\left(\mathbf{E}(w(1)^{3\theta})\right)^{n}. \end{eqnarray*} Taking $\theta>\gamma$, we have $\mathbf{E}(w(1)^{3\theta})<\mathbf{E}(w(1)^2)=\frac{1}{3}$, so the sum over $n$ is finite, as is the expectation involving $\mathrm{diam}_RT$. Consequently, the upper bound converges to zero as $t\rightarrow -\infty$, which completes the proof of (a). That $u(t)$ is finite for $t\in\mathbb{R}$ follows from the fact that $\eta(t)$ is, and the measurability of $u$ is a result of $\eta$ having cadlag paths, $\mathbf{P}$-a.s.
Observe that, for $t\geq 0$, \begin{equation}\label{bound} \mathbf{P}\left(\mathrm{diam}_RT>t\right)=\mathbf{P} \left( \mathrm{diam}_{d_\mathcal{T}} \mathcal{T}>t\right)\leq \mathbf{P}\left(\sup_{s\in[0,1]}W_s>\frac{t}{2}\right)\leq Ce^{-t^2/4}, \end{equation} for some constant $C$, where the final inequality is obtained by applying the exact distribution of the supremum of a normalised Brownian excursion (see \cite{Aldous2}, Section 3.1). Thus, again applying (\ref{etazeronound}), we see that $u(t)$ is bounded above by $6 e^{-\gamma t}\left(1\wedge Ce^{-e^{-2t}/4}\right)$ for all $t$, which readily implies the remaining claims of (b). Part (c) is easily deduced using simple properties of the Dirichlet distribution of the triple $(w(1)^2,w(2)^2,w(3)^2)$. \end{proof} The importance of the previous lemma is that it allows us to apply the renewal theorem to deduce the mean behaviour of $X$, with the precise result being presented in the following proposition. Part (a) of Theorem \ref{second} is an easy corollary of this. {\propn \label{meanconv} The function $m$ converges as $t\rightarrow \infty$ to the finite and non-zero constant \[m(\infty):=\frac{\int_{-\infty}^\infty u(t)dt}{\int_0^\infty t\nu_\gamma(dt)}.\]} \begin{proof} After multiplying by $e^{-\gamma t}$ and taking expectations, the equation at (\ref{evo}) may be rewritten, for $t\in\mathbb{R}$, \[m(t)=u(t)+\int_0^\infty m(t-s)\nu_\gamma(ds),\] which is the double-sided renewal equation of \cite{Karlin}. The results that are proved about $m$, $u$ and $\nu_\gamma$ in Lemma \ref{props} mean that the conditions of the renewal theorem stated in \cite{Karlin} are satisfied, and the proposition follows from this. \end{proof} To determine the $\mathbf{P}$-a.s. behaviour of $X$, and prove part (b) of Theorem \ref{second}, the argument of \cite{Hamasymp}, Section 5, may be used. 
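The renewal theorem behind Proposition \ref{meanconv} is easy to see in action in a discretised toy model (the data $u$ and $p$ below are invented for illustration and bear no relation to the actual $u$ and $\nu_\gamma$): for an aperiodic lattice renewal equation $m(n)=u(n)+\sum_k p(k)m(n-k)$, the solution tends to $\sum_n u(n)\big/\sum_k k\,p(k)$, the lattice analogue of $m(\infty)$.

```python
# Toy lattice renewal equation m(n) = u(n) + sum_k p(k) m(n-k), with an
# aperiodic step law p on {1, 2}; the renewal theorem gives the limit
# sum_n u(n) / sum_k k p(k), mirroring m(infinity) in the text.
# All numbers here are hypothetical illustration data.
u = {0: 1.0, 1: 0.5, 2: 0.25}
p = {1: 0.5, 2: 0.5}
mean_step = sum(k * pk for k, pk in p.items())  # analogue of int t nu_gamma(dt)

m = []
for n in range(201):
    val = u.get(n, 0.0)
    for k, pk in p.items():
        if n - k >= 0:
            val += pk * m[n - k]
    m.append(val)

limit = sum(u.values()) / mean_step   # = 1.75 / 1.5
print(m[200], limit)
```

The error here decays geometrically (the second root of the characteristic equation is $-\tfrac12$), so the computed value agrees with the renewal limit to machine precision.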
Note that this method is in turn an adaptation of Nerman's results on the almost-sure behaviour of general branching processes, see \cite{Nerman}. Since the steps of our proof are almost identical to those of \cite{Hamasymp}, we shall omit many of the details here. One point that should be highlighted, however, is that in the proof of Lemma 5.7 of \cite{Hamasymp} there is an error, with one of the relevant terms being omitted from consideration. We shall explain how to deal with this term, and also correct the limiting procedure that should be used at the end of the argument. For the purposes of the proof, we introduce the following notation to represent a cut-set of $\Sigma_*$: for $t>0$, \[\Lambda_t:=\{i\in\Sigma_*:\:-3\ln l(i)\geq t > -3\ln l(i|(|i|-1))\}.\] We will also have cause to refer to the subset of $\Lambda_t$ defined by, for $t,c>0$, \[\Lambda_{t,c}:=\{i\in\Sigma_*:\:-3\ln l(i)\geq t+c,\: t> -3\ln l(i|(|i|-1))\}.\] {\propn $\mathbf{P}$-a.s., we have \[e^{-\gamma t}X(t)\rightarrow m(\infty),\hspace{20pt}\mbox{as }t\rightarrow \infty,\] where $m(\infty)$ is the constant defined in Proposition \ref{meanconv}.} \begin{proof} First, we truncate the characteristics $\eta_i$ by defining, for fixed $c>0$, $\eta^c_i(t):=\eta_i(t)\mathbf{1}_{\{t<n_0c\}}$, where $n_0$ is an integer that will be chosen later in the proof (we are using the term ``characteristic'' in the generalised sense of \cite{Nerman}, Section 7, to describe a random function that, if the characteristic is indexed by $i$, can depend on $(w(ij))_{j\in\Sigma_*}$). From these truncated characteristics, construct the processes $X_i^c$, by \[X_i^c(t):=\sum_{j\in\Sigma_*}\eta_{ij}^c(t+3\ln (l(ij)/l(i))),\] and set $X^c:=X_\emptyset^c$. The corresponding discounted mean process is $m^c(t):=e^{-\gamma t}\mathbf{E}X^c(t)$, and this may be checked to converge to $m^c(\infty)\in(0,\infty)$ as $t\rightarrow\infty$ using the argument of Proposition \ref{meanconv}.
From a branching process decomposition of $X^c$, we can deduce the following bound for $n_1\geq n_0$, $n\in\mathbb{N}$, \[|e^{-\gamma c(n+n_1)}X^c(c(n+n_1))-m^c(\infty)|\leq S_1(n,n_1)+S_2(n,n_1)+S_3(n,n_1),\] where \begin{eqnarray*} \lefteqn{S_1(n,n_1):=}\\ &&\left|\sum_{i\in\Lambda_{cn}\backslash\Lambda_{cn,cn_1}} \left(e^{-\gamma c (n+n_1)} X_i^c(c(n+n_1)+3\ln l(i))-l(i)^2 m^c(c(n+n_1)+3\ln l(i))\right)\right|, \end{eqnarray*} \[S_2(n,n_1):=\left|\sum_{i\in\Lambda_{cn}\backslash\Lambda_{cn,cn_1}}l(i)^2 m^c(c(n+n_1)+3\ln l(i))-m^c(\infty)\right|,\] \[S_3(n,n_1):=e^{-\gamma c(n+n_1)}\sum_{i\in\Lambda_{cn,cn_1}}X_i^c(c(n+n_1)+3\ln l(i)).\] The first two of these terms are dealt with in \cite{Hamasymp}, and using the arguments from that article, we have that, $\mathbf{P}$-a.s., \[\lim_{n_1\rightarrow\infty}\limsup_{n\rightarrow\infty} S_j(n,n_1)=0,\hspace{20pt}\mbox{for }j=1,2.\] We now show how $S_3(n,n_1)$ decays in a similar fashion. First, introduce a set of characteristics, $\phi_i^{c,n_1}$, defined by \[\phi_i^{c,n_1}(t):=\sum_{j=1}^3X_{ij}(0)\mathbf{1}_{\{-3\ln w(ij)>t+cn_1, \:t>0\}},\] and, for $t>0$, set \[Y^{c,n_1}(t):=\sum_{i\in\Sigma_*}\phi_i^{c,n_1}(t+3\ln l(i)).\] Note that from the definition of the cut-sets $\Lambda_{cn}$ and $\Lambda_{cn,cn_1}$ we can deduce that \[Y^{c,n_1}(cn)=\sum_{i\in\Lambda_{cn,cn_1}}X_i(0)\geq e^{\gamma c (n+n_1)}S_3(n,n_1),\] where the inequality follows from the monotonicity of the $X_i$s. Now, $Y^{c,n_1}$ is a branching process with random characteristics $\phi^{c,n_1}_i$, and we are able to check that the conditions of the extension of \cite{Nerman}, Theorem 5.4, stated as \cite{Hamasymp}, Theorem 3.2, are satisfied.
By applying this result, we find that $\mathbf{P}$-a.s., \[e^{-\gamma t}Y^{c,n_1}(t)\rightarrow \frac{\int_0^\infty e^{-\gamma t}\mathbf{E}\phi^{c,n_1}_\emptyset(t)dt}{\int_0^{\infty}t\nu_\gamma(dt)},\hspace{20pt}\mbox{as }t\rightarrow\infty.\] It is obvious that $\mathbf{E}\phi^{c,n_1}_\emptyset(t)\leq 3\mathbf{E}X(0)\leq 3m(0)<\infty$, where $m$ is the function defined at (\ref{mdef}). Consequently, there exists a constant $C$ that is an upper bound for the above limit uniformly in $n_1$, and so $\mathbf{P}$-a.s., \[\lim_{n_1\rightarrow\infty}\limsup_{n\rightarrow\infty}S_3(n,n_1)\leq \lim_{n_1\rightarrow\infty} Ce^{-\gamma cn_1}=0.\] Combining the three limit results for $S_1$, $S_2$ and $S_3$, it is easy to deduce that $\mathbf{P}$-a.s., \[\lim_{n\rightarrow\infty}|e^{-\gamma cn}X^c(cn)-m^c(\infty)|=0.\] We continue by showing how the process $X$, when suitably scaled, converges along the subsequence $(cn)_{n\geq 0}$. Applying the conclusion of the previous paragraph, we find that $\mathbf{P}$-a.s., \begin{eqnarray} \lefteqn{\limsup_{n\rightarrow\infty}|e^{-\gamma cn}X(cn)-m(\infty)|\leq}\nonumber\\ && |m(\infty)-m^c(\infty)|+\limsup_{n\rightarrow\infty}e^{-\gamma cn}|X(cn)-X^c(cn)|.\label{upper} \end{eqnarray} Recall that the process $X^c$ and its discounted mean process $m^c$ depend on the integer $n_0$. By applying the dominated convergence theorem, it is straightforward to check that, as we let $n_0\rightarrow \infty$, the first of the terms in the above estimate, which is deterministic, converges to zero. For the second term, observe that \begin{eqnarray*} |X(t)-X^c(t)|&=&\sum_{i\in\Sigma_*}\eta_i(t+3\ln l(i))\mathbf{1}_{\{t+3\ln l(i)>cn_0 \}}\\ &\leq& 6 \#\{i\in\Sigma_*:\:t+ 3\ln l(i) > cn_0\}. 
\end{eqnarray*} Applying standard branching process results to the process described in the proof of Lemma \ref{props}, we are able to deduce the existence of a finite constant $C$ such that, as $t\rightarrow\infty$, we have $e^{-2t}\#\{i\in\Sigma_*:\:-\ln l(i) < t\}\rightarrow C$, $\mathbf{P}$-a.s., from which it follows that $\mathbf{P}$-a.s., \[\limsup_{n\rightarrow\infty}e^{-\gamma cn}|X(cn)-X^c(cn)|\leq 6C e^{-\gamma cn_0}.\] Consequently, by choosing $n_0$ suitably large, the upper bound in (\ref{upper}) can be made arbitrarily small, and hence $e^{-\gamma cn}X(cn)\rightarrow m(\infty)$ as $n\rightarrow \infty$, $\mathbf{P}$-a.s., for each $c$. The proposition is readily deduced from this using the monotonicity of $X$. \end{proof} \end{document}
\begin{document} \numberwithin{equation}{section} \newtheorem{THEOREM}{Theorem} \newtheorem{PRO}{Proposition} \newtheorem{XXXX}{\underline{Theorem}} \newtheorem{CLAIM}{Claim} \newtheorem{COR}{Corollary} \newtheorem{LEMMA}{Lemma} \newtheorem{REM}{Remark} \newtheorem{EX}{Example} \newenvironment{PROOF}{{\bf Proof}.}{{\ \vrule height7pt width4pt depth1pt} \par } \newcommand{\Bibitem}[1]{\bibitem{#1} \ifnum\thelabelflag=1 \marginpar{ \hspace{-1.08\textwidth}\fbox{\rm#1}} \fi} \newcounter{labelflag} \setcounter{labelflag}{0} \newcommand{\labelon}{\setcounter{labelflag}{1}} \newcommand{\Label}[1]{\label{#1} \ifnum\thelabelflag=1 \ifmmode \makebox[0in][l]{\qquad\fbox{\rm#1}} \else\marginpar{ \hspace{-1.15\textwidth}\fbox{\rm#1}} \fi \fi} \newcommand{\RIGHTLINE}[1]{\ifhmode\newline\else\noindent\fi\rightline{#1}} \newcommand{\CENTERLINE}[1]{\ifhmode\newline\else\noindent\fi\centerline{#1}} \def\BOX #1 #2 {\framebox[#1in]{\parbox{#1in}{ }}} \parskip=8pt plus 2pt \def\AUTHOR#1{\author{#1} \maketitle} \def\Title#1{\begin{center} \Large\bf #1 \end{center} \vskip 1ex } \def\Author#1{\vspace*{-2ex}\begin{center} #1 \end{center} \vskip 2ex \par} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \def\bdk#1{\makebox[0pt][l]{#1}\hspace*{0.03ex}\makebox[0pt][l]{#1}\hspace*{0.03ex}\makebox[0pt][l]{#1}\hspace*{0.03ex}\makebox[0pt][l]{#1}\mbox{#1} } \def\psbx#1 #2 {\mbox{\psfig{file=#1,height=#2}}} \newcommand{\FG}[2]{{\includegraphics[height=#1mm]{#2.eps}}} \Title{Improved Vietoris Sine Inequalities for \\ Non-Monotone, Non-Decaying Coefficients} \par\vspace*{-4mm}\par \Author{(Draft version 1, April 25, 2015)} \begin{center} MAN KAM KWONG\footnote{The research of this author is supported by the Hong Kong Government GRF Grant PolyU 5003/12P and the Hong Kong Polytechnic University Grants G-UC22 and G-UA10} \end{center} \begin{center} \emph{Department of Applied
Mathematics\\ The Hong Kong Polytechnic University, Hunghom, Hong Kong}\\ \tt{[email protected]} \end{center} \par\vspace*{\baselineskip}\par \begin{abstract} \parskip=6pt The classical Vietoris sine inequality states that for any non-increasing sequence of positive real numbers $ \left\{ a_k\right\} _{k=1}^\infty $ satisfying $$ \hspace*{20mm} a_{2j-1} \, \geq \frac{2j}{2j-1} \,\,a_{2j} \qquad (j=1,2,3,\cdots) , \eqno (*) $$ the following sine polynomials are nonnegative in $ [0,\pi ] $, $$ \hspace*{9mm} \sum_{k=1}^{n} \, a_k \, \sin(kx) \geq 0, \qquad x\in[0,\pi ], \quad \mbox{for all } n = 1,2,3,\cdots . \eqno (\dagger) $$ Recently, the author has improved this result to include non-monotone sequences. In this paper, we establish two further extensions. The first states that if $ \left\{ a_k\right\} $ is a sequence of positive numbers satisfying $$ \hspace*{20mm} a_{2}\, \geq \, 0.5869890995\cdots \,\, a_{3}, \quad \mbox{and} \quad a_{2j}\, \geq \, \frac{2j+1}{2j+2} \,\,a_{2j+1} \qquad (j=2,3,\cdots) , $$ then ($*$) implies ($\dagger$). An example is $ \left\{ a_k\right\} = \left\{ \frac{8}{5} , \frac{4}{5} , \frac{4}{3} , 1, \frac{6}{5} , 1, \frac{8}{7} , 1, \cdots \right\} , $ with $ a_k=1 $ for even $ k\geq 4 $ and $ a_k=(k+1)/k $ for odd $ k\geq 3 $. A second, independent, extension affirms that ($\dagger$) also holds under ($*$) and $$ \hspace*{20mm} a_{2j}\, \geq \, \frac{(2j+1)(4j-1)}{2j(4j+3)} \,\,a_{2j+1} \qquad (j=1,2,\cdots). $$ An example is $ \left\{ 3, \frac{3}{2} , \frac{7}{3} , \frac{7}{4} , \frac{11}{5} , \frac{11}{6} , \cdots \right\} $ where $ a_{k}=2-\frac{(-1)^k}{k} $. The coefficients in these examples are not monotone and do not converge to 0. \end{abstract} {\bf{Mathematics Subject Classification (2010).}} 26D05, 42A05. {\bf{Keywords.}} Trigonometric sums, positivity, inequalities.
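The first example in the abstract can be probed numerically. The following grid check (a finite-grid sanity check, not a proof) evaluates the partial sums of $\sum a_k\sin(kx)$ for $a_1=\frac85$, $a_2=\frac45$, $a_k=1$ for even $k\geq4$ and $a_k=(k+1)/k$ for odd $k\geq3$, and confirms that none dips below zero on $[0,\pi]$ for $n\leq30$:

```python
import math

# Grid-based check (not a proof) of the abstract's first example:
# a_1 = 8/5, a_2 = 4/5, a_k = 1 for even k >= 4, a_k = (k+1)/k for odd
# k >= 3.  All partial sums of sum a_k sin(kx) should be nonnegative
# on [0, pi]; we test n <= 30 on a grid of 2001 points.
def a(k):
    if k == 1:
        return 8 / 5
    if k == 2:
        return 4 / 5
    return 1.0 if k % 2 == 0 else (k + 1) / k

worst = 0.0
for j in range(2001):
    x = math.pi * j / 2000
    s = 0.0
    for n in range(1, 31):
        s += a(n) * math.sin(n * x)
        worst = min(worst, s)

print("smallest partial-sum value found:", worst)
```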
\section{Introduction} Excellent surveys on the history and applications of nonnegative trigonometric polynomials can be found, for example, in Alzer, Koumandos and Lamprecht \cite{akl}, Askey et al.\ \cite{A}--\cite{AS}, Brown \cite{b1}, and Koumandos \cite{Ks2}, and the references therein. For convenience, we use the acronyms NN to stand for ``non-negative'', and PS for P-Sum (a sum with all its partial sums NN). These can be interpreted as an adjective or a noun depending on the context. A sequence of real numbers is denoted by $ \left\{ a_k\right\} _{k=1}^\infty $, or simply, $ \left\{ a_k\right\} $. A finite $ n $-tuple of numbers can be interpreted as an infinite sequence by adding $ 0 $ to the end. The symbol $ \searrow $ means non-increasing. Following the convention adopted in \cite{Kw2}, we use bold capital letters such as $ \mathbf F $ and $ \mathbf\Phi $ to denote sums of numbers or functions. One of the earliest known PS is \begin{equation} \mathbf F = \sum \frac{\sin(kx)}{k} \,, \Label{F} \end{equation} first conjectured by Fej\'er in 1910, and confirmed independently by Jackson and Gronwall. Vietoris, in 1958, established a deep result that includes $ \mathbf F $. { \renewcommand{C}{A} \begin{THEOREM}[Vietoris \cite{V}] The sum \, $ \sum a_k\,\sin(kx) $ is a PS in $ [0,\pi ] $ $($i.e. $(\dagger)$ holds$)$ if $a_k \searrow 0$ and \begin{equation} a_{2j-1} \,\, \geq \,\, \frac{2j}{2j-1} \,\, a_{2j} \,, \qquad \mbox{for} \quad j=1,2,\cdots \,. \Label{v} \end{equation} \end{THEOREM} } \begin{REM} \em There is an analogous cosine inequality ( $ a_1 + \sum a_k\,\cos(kx) $ is also a PS), but we are only concerned with the sine sum in this paper. \end{REM} Belov, in 1995, greatly improved Vietoris' sine inequality, by establishing, under the monotonicity requirement, a necessary and sufficient condition for PS. { \renewcommand{C}{B} \begin{THEOREM}[Belov \cite{be}] Assume $ \displaystyle a_k \searrow 0.
$ Then $\sum a_k\,\sin(kx) $ is a PS in $ [0,\pi ] $ iff \begin{equation} \sum_{k=1}^{n} (-1)^{k-1}\,ka_k \geq 0, \qquad \mbox{for all } n\geq 2. \Label{bel} \end{equation} \end{THEOREM} } \begin{REM} \em For the cosine analog, condition (\ref{bel}) is sufficient but not necessary. \end{REM} Belov's Theorem leaves no more room for improvement, unless the $ \searrow $ assumption on $ a_k $ is lifted. In this less restrictive situation, (\ref{bel}) is no longer sufficient for PS (it is still necessary). It is not difficult to construct examples of PS sine sums with non-monotone coefficients, as we will see in Section~2. However, no useful general conditions applicable to non-monotone coefficients were known until recently. In \cite{Kw2}, the following result was established. { \renewcommand{C}{C} \begin{THEOREM} Vietoris' result remains valid when $ \searrow $ (still need (\ref{v})) is relaxed to $$ \frac{(2j-1)\sqrt{j+1}}{2j\sqrt{j}} \,\,a_{2j+1}\, \leq \,a_{2j}, \qquad j=1,2,\cdots . $$ \end{THEOREM} } An example is given by the non-monotone sequence of coefficients: \begin{eqnarray} && 1\,, \hspace*{6mm} \frac{1}{2} \,, \hspace*{7mm} \frac{1}{\sqrt{2}} \,, \hspace*{12mm} \frac{3}{4\sqrt{2}} \,, \hspace*{12mm} \frac{1}{\sqrt{3}} \,, \hspace*{12mm} \frac{5}{6\sqrt{3}} \,, \hspace*{10mm} \cdots \Label{Cm} \\[1.2ex] = \hspace*{-4mm} && 1\,, \hspace*{4mm} 0.5\,, \hspace*{4mm} 0.707\cdots \,, \hspace*{4mm} 0.530\cdots \,, \hspace*{4mm} 0.577\cdots \,, \hspace*{4mm} 0.481\cdots \,, \hspace*{6mm} \nonumber \cdots \end{eqnarray} An important tool used in the proof is the well-known Comparison Principle (CP for short). It will continue to play an important role in this paper. Since a non-zero scalar multiple of a PS is still a PS, we consider two sequences of coefficients equivalent if they only differ by a non-zero multiple.
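Belov's condition (\ref{bel}) is easy to test in exact arithmetic. As an illustration, for the extremal Vietoris coefficients ($c_1=1$, $c_{2j}=\frac{2j-1}{2j}c_{2j-1}$, $c_{2j+1}=c_{2j}$), the identity $(2j-1)c_{2j-1}=2j\,c_{2j}$ makes consecutive pairs cancel, so the even-order partial sums in (\ref{bel}) vanish and the odd-order ones are positive; a short script confirms this:

```python
from fractions import Fraction

# Exact check of Belov's condition sum_{k<=n} (-1)^(k-1) k c_k >= 0 for
# the extremal Vietoris coefficients c_1 = 1, c_{2j} = c_{2j-1}(2j-1)/(2j),
# c_{2j+1} = c_{2j}.  Pairs cancel exactly, so even-order partial sums are 0.
c = [Fraction(1)]
for k in range(2, 41):
    if k % 2 == 0:
        c.append(c[-1] * Fraction(k - 1, k))   # c_{2j} = c_{2j-1}(2j-1)/(2j)
    else:
        c.append(c[-1])                        # c_{2j+1} = c_{2j}

sums = []
partial = Fraction(0)
for k, ck in enumerate(c, start=1):
    partial += (-1) ** (k - 1) * k * ck
    sums.append(partial)

assert all(s >= 0 for s in sums)
assert all(sums[i] == 0 for i in range(1, 40, 2))   # even-order sums vanish
print("Belov's condition holds (with equality at even n) up to n =", len(c))
```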
We say that \par\vspace*{-4mm}\par \begin{center} $ \left\{ a_k\right\} \succeq\left\{ b_k\right\} $ $ \Longleftrightarrow $ (1) $ a_k=0\Longrightarrow b_k=0 $, and (2) after skipping those $ a_k=0 $, $\,\,\displaystyle\frac{b_k}{a_k} \searrow\,0$. \end{center} \par\vspace*{-4mm}\par \noindent This defines a partial ordering among equivalence classes of sequences. With this notation, the CP can be restated as follows. \begin{LEMMA}\Label{cmp} Let $ \sigma _k(x) $ be a sequence of functions defined on an interval $ I $. \begin{center} $ \sum a_k\sigma _k(x) $ PS in $ I $ and $ \left\{ a_k\right\} \succeq\left\{ b_k\right\} \,\,\Longrightarrow \,\,\sum b_k\sigma _k(x) $ PS in $ I $. \end{center} \end{LEMMA} \begin{REM} \em Among all the sequences of coefficients satisfying Vietoris' conditions, there is a maximal one, namely \begin{equation} \left\{ c_k\right\} = \left\{ 1\,, \hspace*{6mm} \frac{1}{2} \,, \hspace*{6mm} \frac{1}{2} \,, \hspace*{6mm} \frac{3}{8} \,, \hspace*{6mm} \frac{3}{8} \,, \hspace*{6mm} \frac{5}{16} \,, \hspace*{6mm} \cdots \right\} \end{equation} obtained by replacing the inequality sign in (\ref{v}) by equality and letting $ a_{2j}=a_{2j+1} $. The CP reduces the proof of the general Vietoris inequality to just showing that the maximal sum $ \mathbf V=\sum\,c_k\,\sin(kx) $ is PS. In the same sense, (\ref{Cm}) is the maximal sequence for Theorem~C. On the other hand, there is no maximal sequence for Belov's result.
\end{REM} In this paper, we present two further improvements of Theorem~C. In order to better illustrate some of the main ideas, we first establish, in Section~\ref{sc3}, a slightly weaker NN criterion that is associated with the sequence of coefficients \begin{equation} \left\{ \gamma _k\right\} = \left\{ 2, \, 1, \, \frac{4}{3} , \, 1, \, \frac{6}{5} , \, 1, \, \frac{8}{7} , \, 1 , \, \cdots \right\} , \qquad \gamma _k = \begin{cases} \mbox{\footnotesize $\displaystyle \frac{k+1}{k} $} \quad & k \mbox{ is odd} \\[1.5ex] 1 \quad & k \mbox{ is even} \end{cases} \,\, . \Label{ak1} \end{equation} \setcounter{THEOREM}{0} \begin{LEMMA} $ \mathbf\Psi =\sum a_k\sin(kx) $ is a PS in $ [0,\pi ] $ if (\ref{v}) holds and \begin{equation} a_{2j}\, \geq \, \frac{2j+1}{2j+2} \,\,a_{2j+1}, \qquad \mbox{for } j=1,2,\cdots . \Label{kv} \end{equation} The maximal sum is given by $ \mathbf{\Phi }=\sum\,\gamma _k\,\sin(kx) $. \end{LEMMA} \par\vspace*{4mm}\par \begin{REM} \em Lemma 2 is already a significant improvement over Theorems~A and~C. The coefficients $ a_k $ that satisfy the hypotheses of these theorems must decay at least as fast as $ 1/\sqrt{k} $. The coefficients of $ \mathbf\Phi $, on the other hand, converge to $ 1 $. \end{REM} Lemma~2 can be sharpened in two different ways. Let $ \displaystyle \alpha \approx 0.78265213271\cdots $ be the second largest real root of the polynomial \begin{equation} 54675\,{a}^{4}-2442195\,{a}^{3}+2182800\,{a}^{2}-115424\,a-96429 = 0. \Label{alp} \end{equation} \par\vspace*{1mm}\par \begin{THEOREM} $ \mathbf\Psi =\sum a_k\sin(kx) $ is a PS in $ [0,\pi ] $ if $ \left\{ a_k\right\} $ satisfies (\ref{v}), \begin{equation} a_{2}\, \geq \, \frac{3\alpha }{4} \, a_{3} , \quad \mbox{and (\ref{kv}) \,\,for\,\, } j=2,3,\cdots. \end{equation} The maximal sum is $ \mathbf\Phi _1 $ with coefficients $ \displaystyle \left\{ 2\alpha \,,\, \alpha \,,\, \gamma _3 \,,\, \gamma _4 \,,\, \gamma _5 \,,\, \cdots \right\} .
$ The value $ \alpha $ is best possible; if it is replaced by any smaller positive number, then $ \mathbf\Phi _1(5) $ is not NN. \end{THEOREM} \par\vspace*{1mm}\par \begin{REM} \em Note that even though the coefficients of $ \mathbf\Phi _1 $ are not monotone, the subsequence of odd-order coefficients is decreasing, while the even-order coefficients are constant. Contrast this with $ \mathbf\Phi _2 $ defined below. Its subsequence of even-order coefficients is increasing. \end{REM} Let \par\vspace*{-9mm}\par \begin{equation} \left\{ \delta _k\right\} = \left\{ 3, \, \frac{3}{2} , \, \frac{7}{3} , \, \frac{7}{4} , \, \frac{11}{5} , \, \frac{11}{6} , \, \cdots \right\} , \qquad \delta _k = 2 - \frac{(-1)^k}{k} \, . \end{equation} \par\vspace*{4mm}\par \begin{THEOREM} $ \mathbf\Psi =\sum a_k\sin(kx) $ is a PS in $ [0,\pi ] $ if $ \left\{ a_k\right\} $ satisfies (\ref{v}) and \begin{equation} a_{2j}\, \geq \, \frac{(2j+1)(4j-1)}{2j(4j+3)} \,\,a_{2j+1}, \qquad \mbox{for } j=1,2,\cdots. \Label{kv2} \end{equation} The maximal sum is $ \mathbf\Phi _2=\sum \delta _k\,\sin(kx) $. \end{THEOREM} \par\vspace*{1mm}\par \begin{REM} \em Theorems~1 and 2 are independent of each other as their extremal sums are not related to each other by $ \succeq $. The same is true for Lemma 2 and Theorem~C. On the other hand, each of Theorems~1 and~2 implies both Lemma 2 and Theorem~C. Yet, neither extends Belov's result. It would be ideal if Belov's result could be combined with Theorems~1 and~2 in a general unified way, but that remains a future goal for now. \end{REM} By applying the reflection $ x\mapsto(\pi -x) $ to $ \mathbf\Phi $ (or $ \mathbf\Phi _1 $ and $ \mathbf\Phi _2 $), we see that its PS property is equivalent to that of \begin{equation} \mathbf{\Theta } = \sum (-1)^{k+1}\,\gamma _k\sin(kx) \Label{H} \end{equation} (and the corresponding $ \mathbf\Theta _i $, $ i=1,2 $).
For any $ k\in(1,\infty ) $, define \begin{equation} \phi _k(x) = \sin((k-1)x) + \frac{k-1}{k} \, \sin(kx) \, , \Label{fk} \end{equation} \begin{equation} \theta _k(x) = \sin((k-1)x) - \frac{k-1}{k} \, \sin(kx) \, . \Label{hk} \end{equation} The partial sums $ \mathbf{\Phi }(n) $ and $ \mathbf{\Theta }(n) $ have the representations \begin{equation} \mathbf{\Phi }(n) = 2\,\phi _2(x) + \frac{4}{3} \,\phi _4(x) + \cdots + \frac{2\tilde n}{2\tilde n-1} {\,\phi _{2\tilde n}(x)} + \left[ \, \frac{(2\tilde n+2)\sin(nx)}{2\tilde n +1} \,\right] , \Label{Fj} \end{equation} \begin{equation} \mathbf{\Theta }(n) = 2\,\theta _2(x) + \frac{4}{3} \,\theta _4(x) + \cdots + \frac{2\tilde n}{2\tilde n-1} {\,\theta _{2\tilde n}(x)} + \left[ \, \frac{(2\tilde n+2)\sin(nx)}{2\tilde n +1} \,\right] , \Label{Hj} \end{equation} where $ \tilde n $ denotes the largest integer less than or equal to $ n/2 $, and the notation $ \left[ \, \cdot \, \right] $ means that the term is present only if $ n $ is an odd integer. \begin{REM} \em An alternative way to see that Theorem~1 implies Lemma~2 is to note that $$ \mathbf \Phi = 2(1-\alpha )\phi _2 + \mathbf \Phi _1 . $$ The first term on the right-hand side is NN and the second term is a PS. Likewise, \begin{equation} 2\mathbf \Phi = \mathbf F + \mathbf \Phi _2 , \Label{Ph1} \end{equation} where $ \mathbf F $ is the Fej\'er-Jackson-Gronwall PS, shows that Theorem~2 implies Lemma~2. \end{REM} The following well-known identities will be used in subsequent proofs. {\footnotesize \begin{eqnarray} \sin(x)+\sin(3x)+\sin(5x)+\cdots+\sin((2n-1)x) \hspace*{1.4mm} &=& \frac{1-\cos(2nx)}{2\sin(x)} \,. \Label{s3} \\ \sin(x)\,+\,\sin(2x)\,+\,\sin(3x)\,+\,\cdots\,+\,\sin(nx) \hspace*{5.8mm} &=& \frac{\cos(\frac{x}{2} )-\cos(\frac{(2n+1)x}{2} )}{2\sin(\frac{x}{2} )} \,. \Label{s1} \\ \cos(x)+\cos(3x)+\cos(5x)+\cdots+\cos((2n-1)x) \hspace*{0.4mm} &=& \frac{\sin(2nx)}{2\sin(x)} \,.
\Label{c3} \\ \cos(x)-\cos(2x)+\cos(3x)-\cdots+(-1)^{n+1}\cos(nx) &=& \frac{1}{2} + (-1)^{n+1} \frac{\cos(\frac{(2n+1)x}{2} )}{2\cos(\frac{x}{2} )} \,. \Label{c1} \end{eqnarray} } The rest of the paper is organized as follows. In Section~\ref{sc2}, we give some examples of PS sine sums with non-monotone coefficients that can be easily constructed using known results. These examples should be contrasted with those covered by Theorems~1 and~2. The proofs of Lemma~2 and Theorems~1 and~2 are given in Sections 3, 4, and 5, respectively. Section~\ref{sc6} presents some further examples and remarks. \section{Trivial Examples of PS with Non-Monotone Coefficients \label{sc2}} \begin{EX} \em Assume $ b_k\searrow0 $. Then $ \mathbf{B} = \sum b_k\sin((2k-1)x) $ is a PS in $ [0,\pi ] $. \end{EX} Consider \begin{equation} \mathbf{C}(n)=\sin(x)+\sin(3x)+ \cdots +\sin((2n-1)x) . \end{equation} From (\ref{s3}), we see that \begin{equation} 2\sin(x) \, \mathbf{C}(n) = 1 - \cos(2nx) \geq 0 . \end{equation} Since $ \sin(x)\geq 0 $ in $ [0,\pi ] $, $ \mathbf{C} $ is PS. It follows from the CP that $ \mathbf{B} $ is also PS. Even though the sequence $ \left\{ b_k\right\} $ is decreasing, from the point of view of the full sine sum, the coefficient sequence is actually $ \left\{ b_1,0,b_2,0,b_3,0,\cdots\right\} $, which is not monotone. \par\vspace*{\baselineskip}\par \begin{EX} \em Let $ \mathbf B $ and $ \mathbf C $ be as in Example 1 and $ \mathbf V $ be the Vietoris sum as in Remark 3. \begin{equation} \mathbf{C}+\mathbf{V} = 2\sin(x)+\frac{1}{2} \,\sin(2x) + \frac{3}{2} \,\sin(3x) + \frac{3}{8} \,\sin(4x) + \cdots \Label{SV} \end{equation} is a PS with non-monotone coefficients. More generally, $ \beta \mathbf{B}+\mathbf{V} $ is a PS for any $ \beta >0 $.
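The coefficients displayed in (\ref{SV}) follow directly from Vietoris' sequence $ \left\{ c_k\right\} $: adding $ \mathbf{C} $ increases each odd-order coefficient by $ 1 $ and leaves the even-order ones unchanged, e.g. $$ 1+c_1 = 2 , \qquad c_2 = \frac{1}{2} \, , \qquad 1+c_3 = 1+\frac{1}{2} = \frac{3}{2} \, , \qquad c_4 = \frac{3}{8} \, . $$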
\end{EX} \par\vspace*{\baselineskip}\par \begin{EX} \em By applying the reflection $ x\mapsto\pi -x $ to $ \mathbf{V} $, we see that \begin{equation} \mathbf{V}_2 = \sum (-1)^{k+1} c_k\,\sin(kx) \end{equation} is a PS in $ [0,\pi ] $; so is $ 2\mathbf{V} + \mathbf{V}_2 $, with coefficients \begin{equation} 3\,, \hspace*{2mm} \,\frac{1}{2} \,, \hspace*{2mm} \,\frac{3}{2} \,, \hspace*{1mm} \,\frac{3}{8} \,, \hspace*{1mm} \,{\frac {9}{8}}\,, \hspace*{1mm} \, {\frac {5}{16}}\,, \hspace*{1mm} \,\cdots \end{equation} \end{EX} \par\vspace*{\baselineskip}\par \begin{EX} \em It is easy to construct specific sine polynomials with a finite number of terms and non-monotone coefficients that are PS in $ [0,\pi ] $. For example, \begin{equation} 2\sin(x) + \sin(2x) + \left( 1+\frac{\sqrt3}{2} \right) \,\sin(3x) \Label{po1} \end{equation} and \begin{equation} 3\sin(x) + \sin(2x) + \left( \frac{3}{2} + \sqrt2 \right) \,\sin(3x) \Label{po2} \end{equation} are both PS in $ [0,\pi ] $ with non-monotone coefficients. We refer the reader to \cite{Kw} for a discussion of how these and similar polynomials can be constructed. It is also easy to prove that for any positive integer $ m $, $$ \sin(x) + \frac{\sin(mx)}{m} $$ is a PS in $ [0,\pi ] $ with non-monotone coefficients. If one insists on constructing examples with an infinite number of terms, simply add an appropriate multiple of one of these to $ \mathbf{V} $. \end{EX} We consider all such examples trivial because they are easy corollaries of Vietoris' result and other known examples. \section{Proof of Lemma 2 \label{sc3}} Lemma 2 is obviously true for $ n=1,2 $ and $ 3 $. Hence, we assume $ n\geq 4 $ in the following. \begin{LEMMA}\Label{jk} For all $ k>1 $, \begin{equation} \theta _k(x)\geq 0 \quad \mbox{ for } x \in \left[ 0,\frac{\sigma }{k} \right] , \Label{jp} \end{equation} where $ \sigma \approx 4.493409458 $ is the first positive zero of the function \begin{equation} f(z) = \sin(z) - z\cos(z) .
\Label{f} \end{equation} \end{LEMMA} \begin{PROOF} Let $ \displaystyle \mu =1-\frac{1}{k} \in \left( 0 ,1\right) $ and $ y=kx $. Then, from the definition (\ref{hk}), \begin{equation} \frac{\theta _k(x)}{\mu } = \frac{\sin(\mu y)}{\mu } - \sin(y) . \end{equation} Differentiating with respect to $ \mu $, with $ y $ held fixed, gives \begin{equation} \frac{\partial }{\partial \mu } \left( \frac{\theta _k(x)}{\mu } \right) = - \frac{\sin(\mu y)-\mu y\cos(\mu y)}{\mu ^2} = - \frac{f(\mu y)}{\mu ^2} \,. \Label{jpa} \end{equation} For $ x\in[0,\sigma /k] $, $ \mu y\in[0,\sigma ] $. Since $ f(z) $ is positive in $ (0,\sigma ) $, the righthand side of (\ref{jpa}) is negative, implying that $ \theta _k(x)/\mu $ is a decreasing function of $ \mu $. Hence, \begin{equation} \frac{\theta _k(x)}{\mu } \geq \lim_{\mu \rightarrow 1^-} \left( \frac{\sin(\mu y)}{\mu } - \sin(y) \right) = 0 . \end{equation} \end{PROOF} \begin{LEMMA}\Label{H1} For any integer $ n>0 $, \begin{equation} \mathbf{\Phi }(n)\geq 0 \quad \mbox{in } \left[ 0, \frac{\pi }{n} \right] \cup \left[ \pi -\frac{\pi }{n} \, , \pi \right] . \end{equation} \end{LEMMA} \begin{PROOF} In $ [0,\pi /n] $, every term in $ \mathbf\Phi (n) $ is NN and so is their sum. The assertion $ \mathbf\Phi (n)\geq 0 $ in $ [\pi -\pi /n,\pi ] $ is equivalent to $ \mathbf\Theta (n) $ being NN in $ [0,\pi /n] $. We make use of the representation (\ref{Hj}) of $ \mathbf\Theta (n) $. If $ n $ is even, $ \mathbf\Theta (n) $ is a sum of positive multiples of $ \theta _{2j}(x) $, for $ j=1,\cdots,\tilde n $. By Lemma~\ref{jk}, each of these is NN in $ [0,\sigma /(2\tilde n)]\supset[0,\pi /n] $. Hence, their sum is NN in $ [0,\pi /n] $. If $ n $ is odd, there is an extra term $ \sin(nx) $ which is also NN in $ [0,\pi /n] $ and the conclusion still holds. \end{PROOF} In view of Lemma~\ref{H1}, to complete the proof of Lemma~2, it remains to show that $ \mathbf\Phi (n) $ is NN in $ I_n=[\pi /n,\pi -\pi /n] $ for all $ n $. Let $ m=n $ if $ n $ is odd, and $ m=n-1 $ otherwise; thus $ m $ is the largest odd integer $ {}\leq n $.
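The estimate of $ \mathbf S(n) $ below rests on the elementary half-angle identity $$ \frac{1-\cos(\frac{x}{2} )}{2\sin(\frac{x}{2} )} = \frac{1}{2} \,\tan\left( \frac{x}{4} \right) , $$ which follows from $ 1-\cos(\theta )=2\sin^2(\frac{\theta }{2} ) $ and $ \sin(\theta )=2\sin(\frac{\theta }{2} )\cos(\frac{\theta }{2} ) $, applied with $ \theta =x/2 $.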
Then $ \mathbf\Phi (n)=\mathbf{S}(n)+\mathbf{T}(m) $, where \begin{equation} \mathbf S(n) = \sin(x)+\sin(2x)+\cdots+\sin(nx) \Label{S1} \end{equation} and \begin{equation} \mathbf{T}(m) = \sin(x)+\frac{\sin(3x)}{3} + \cdots + \frac{\sin(m x)}{m}. \Label{s2} \end{equation} Identity (\ref{s1}) gives a lower bound for $ \mathbf S(n) $: \begin{eqnarray} \mathbf S(n) &\geq & \frac{\cos(x/2)-1}{2\sin(x/2)} \nonumber \\[1.2ex] &=& - \frac{\tan(x/4)}{2} \Label{Stan} \\[1.2ex] &\geq & - \frac{1}{2} \,. \end{eqnarray} The proof of Lemma 2 is thus complete if we can show that \begin{equation} \mathbf{T}(m) \geq \frac{1}{2} \, , \qquad x \in I_n, \quad n\geq 4 . \Label{S23} \end{equation} When $ n $ is even, $ n $ and $ n-1 $ use the same $ \mathbf{T}(m) $, but $ I_{n-1}\subset I_n $. Hence, if (\ref{S23}) can be proved for $ n $, then it will also hold for $ n-1 $. In other words, we only have to establish (\ref{S23}) for even $ n $, in which case $ m=n-1 $. Note that $ \mathbf{T}(m) $ is symmetric about $ x=\pi /2 $. Thus, it suffices to show (\ref{S23}) for odd $ m $ and $ x\in J_n=[\pi /n,\pi /2] $. An alternative representation for $ \mathbf{T}(m) $ can be given using (\ref{c3}). \begin{eqnarray} \mathbf{T}(m) = f_n(x) &:=& \int_{0}^{x} \left( \cos(s)+\cos(3s) + \cdots + \cos((n-1) s) \right) \,ds \nonumber \\[1.2ex] &=& \int_{0}^{x} \frac{\sin(ns)}{2\sin(s)} \,ds \, . \Label{sx} \end{eqnarray} For convenience, we revert to using $ n=m+1 $ instead of $ m $. Besides being easier to estimate, another advantage of the alternative representation is that the definition of $ f_n(x) $ can be extended to all real $ n\in(0,\infty ) $. Even though we only need (\ref{S23}) for even integers $ n $, we are going to prove the stronger inequality \begin{equation} f_n(x) \geq \frac{1}{2} \, , \qquad x \in J_n, \quad n\geq 4 .
\Label{S24} \end{equation} \begin{center} \FG{59}{fn} \\ \par\vspace*{-7.8mm}\par {\footnotesize \hspace*{4.0mm} $ x_1 $\hspace*{3.6mm} $ x_2 $\hspace*{3.6mm} $ x_3 $\hspace*{58mm} } \par\vspace*{1mm}\par Figure 1. Graph of $ f_{23}(x) $. \end{center} \par\vspace*{3mm}\par The graph of one of these functions, $ f_{23}(x) $, is depicted in Figure 1. Since $ f_n'(x)=\sin(nx)/\sin(x) $, the critical points of $ f_n(x) $ in $ J_n $ are $ \pi /n,\,2\pi /n,\,3\pi /n,\,\cdots $. The first is the left endpoint of $ J_n $ and is a local maximum, as are all other odd-order points. The even-order points $ x_2=2\pi /n,\,x_4=4\pi /n,\,\cdots $ are local minima. The minimum of $ f_n(x) $ over $ J_n $ is therefore attained at one of these local minima: \begin{equation} \min_{x\in J_n} f_n(x) = \min \left\{ f_n(x_2), f_n(x_4), \cdots \right\} . \Label{s4} \end{equation} Integration by parts gives \begin{eqnarray} f_n(x_{2k+2}) - f_n(x_{2k}) &=& \int_{x_{2k}}^{x_{2k+2}} \frac{\sin(ns)}{2\sin(s)} \,ds \nonumber \\[1.2ex] &=& \int_{x_{2k}}^{x_{2k+2}} \frac{(1-\cos(ns))\cos(s)}{2n\sin^2(s)} \,ds \nonumber \\[1.2ex] &>& 0 . \Label{s6} \end{eqnarray} Hence, $ f_n(x_2)<f_n(x_4)<f_n(x_6)<\cdots $ and it follows from (\ref{s4}) that \begin{equation} f_n(x) \geq f_n(x_2) . \Label{s5} \end{equation} Now (\ref{S24}) follows from the next Lemma and the proof of Lemma~2 is complete. \begin{LEMMA}\Label{S2} The sequence $ f_n(x_2) $, $ n=4,5,\cdots $, is increasing: \begin{equation} \frac{2}{3} = f_4(x_2) < f_5(x_2) < \cdots < f_n(x_2) < f_{n+1}(x_2) < \cdots \end{equation} \end{LEMMA} \begin{PROOF} That $ f_4(x_2)=2/3 $ can be verified directly. In fact, each $ f_n(x_2) $ can be computed exactly using Maple. The change of variable $ s=t/n $ gives \begin{equation} f_n(x_2) = \int_{0}^{2\pi /n} \frac{\sin(ns)}{2\sin(s)} \,ds = \int_{0}^{2\pi } \frac{\sin(t)}{2n\sin(t/n)} \,dt = \int_{0}^{2\pi } k_n(t)\sin(t)\,dt, \end{equation} where \begin{equation} k_n(t) = \frac{1}{2n\sin(t/n)} \,.
\end{equation} Thus, \begin{equation} f_{n+1}(x_2)-f_n(x_2) = \int_{0}^{2\pi } \big( k_{n+1}(t) - k_n(t) \big) \sin(t) \,dt . \end{equation} In the next Lemma, we show that \begin{equation} h_n(t)=k_{n}(t)-k_{n+1}(t) \Label{ht} \end{equation} is a positive increasing function of $ t\in[0,2\pi ] $. Anticipating this fact, we see that \begin{eqnarray} f_{n+1}(x_2)-f_n(x_2) &=& \int_{\pi }^{2\pi } |\sin(t)|h_n(t) \,dt - \int_{0}^{\pi } \sin(t)h_n(t) \,dt \nonumber \\ &>& h_n(\pi ) \int_{\pi }^{2\pi } |\sin(t)| \,dt - h_n(\pi ) \int_{0}^{\pi } \sin(t) \,dt \nonumber \\ &=& 0 , \end{eqnarray} as desired. \end{PROOF} \begin{LEMMA} For $ n\geq 4 $, $ h_n(t) $ is a positive increasing function of $ t $ in $ [0,2\pi ] $. \end{LEMMA} \begin{PROOF} The NN of $ h_n(t) $ follows from the fact that, for fixed $ t $, $ k_n(t) $ is a decreasing function of $ n $, which is equivalent to the fact that $ n\sin(t/n) $ is an increasing function of $ n $. The increasing property of $ h_n(t) $ holds if we can prove that \begin{equation} \frac{\partial ^2}{\partial n\partial t} \, k_n(t) \leq 0. \Label{hnt} \end{equation} Direct computation gives the numerator of $ -\,\frac{\partial ^2}{\partial n\partial t} \, k_n(t) $ as the function \begin{equation} \xi (t) = 2t\,\cos^2(t/n) +t\,\sin^2(t/n) - 2n \cos(t/n) \sin(t/n) . \end{equation} For convenience, we have suppressed the dependence of $ \xi (t) $ on $ n $. Now it suffices to show that $ \xi (t)\geq 0 $ for $ t\in[0,2\pi ] $. Since $ \xi (0)=0 $, if we can show that $ \xi '(t)\geq 0 $, the proof is complete. \begin{equation} \xi '(t) = \sin\left( \frac{t}{n} \right) \left[ 3 \sin \left( \frac{t}{n} \right) - \frac{2t}{n} \,\cos\left( \frac{t}{n} \right) \right] = \sin\left( \frac{t}{n} \right) \xi _2(t). \end{equation} It now suffices to show that $ \xi _2(t) $ is NN.
The desired conclusion follows from the facts $ \xi _2(0)=0 $ and \begin{equation} \xi _2'(t) = \frac{1}{n} \, \cos\left( \frac{t}{n} \right) + \frac{2t}{n^2} \sin\left( \frac{t}{n} \right) \geq 0 \,. \end{equation} \end{PROOF} \section{Proof of Theorem 1 \label{sc4}} We first take care of $ n>20 $. The partial sums $ \mathbf\Phi _1(n) $ can be represented as \begin{eqnarray} \mathbf{\Phi }_1(n) &=& \mathbf\Phi (n) - \lambda \phi _2(x) \nonumber \\ &=& \mathbf{S}(n) + f_n(x) - \lambda \phi _2(x), \end{eqnarray} where $ \lambda =2-2\alpha \approx 0.434695735 $. In view of (\ref{Stan}) and Lemma~\ref{S2}, we get, for all $ n>20 $, $ x\in[0,\pi ] $, \begin{equation} \mathbf{\Phi }_1(n) \geq F(x) := - \,\frac{\tan(x/4)}{2} + f_{20}(x_2) -\frac{4347}{10000} \, \phi _2(x). \Label{F20} \end{equation} Maple gives \begin{equation} f_{20}(x_2) = \frac{2}{15} +{\frac {1580}{4641}}\,\cos \left( \frac{\pi }{5} \right) +{\frac {1820 }{1881}}\,\cos \left(\frac{2\pi }{5} \right) > \frac{73542}{103909} \, . \Label{F20a} \end{equation} It follows from (\ref{F20}) and (\ref{F20a}) that \begin{equation} \mathbf{\Phi }_1(n) \geq F_1(x) := \frac{73542}{103909} - \,\frac{\tan(x/4)}{2} -\frac{4347}{10000} \left( \sin(x) + \frac{\sin(2x)}{2} \right) . \Label{F2a} \end{equation} Let $ T=\tan(x/4) $. Since $ x\in[0,\pi ] $, we have $ T\in[0,1] $. Then \begin{eqnarray*} F_1(x) &=& {\frac {73542}{103909}} - \frac{T}{2} -{\frac {4347}{1250}}\,{\frac {T \left( 1-{T}^{2} \right) ^{3}}{ \left( 1+{T}^{2} \right) ^{4}}} \\[1.5ex] &=& \frac{P(T)}{(1+T^2)^4} \, , \end{eqnarray*} where {\small \begin{eqnarray*} P(T) \!\! &=& \!\! -45963750\,{T}^{9}+91927500\,{T}^{8}+267837423\,{T}^{7}+367710000\,{T} ^{6}-1630859769\,{T}^{5} \\ && +551565000\,{T}^{4}+1171222269\,{T}^{3}+ 367710000\,{T}^{2}-497656173\,T+91927500 . \end{eqnarray*} } The classical Sturm Theorem provides a way to find the number of real roots of an algebraic polynomial with real coefficients within any given subinterval of the real line.
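In its standard form: for a square-free polynomial $ P $, set $ P_0=P $, $ P_1=P' $, and let $ P_{i+1} $ be the negative of the remainder upon dividing $ P_{i-1} $ by $ P_i $; if $ V(t) $ denotes the number of sign changes in the sequence $ P_0(t), P_1(t), \cdots $, then the number of real roots of $ P $ in the interval $ (a,b] $ equals $ V(a)-V(b) $. In particular, checking $ V(0)=V(1) $, together with the single sample value $ P(0)=91927500>0 $, certifies that $ P(T)>0 $ throughout $ [0,1] $.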
It can be used (see \cite{Kw} and the discussion below) to show that $ P(T)>0 $ in $ [0,1] $. With this fact, we conclude that $ \mathbf\Phi _1(n)>0 $ for $ x\in[0,\pi ],\,n>20 $. For $ n\leq 20 $, the above argument does not work because when $ f_{20}(x_2) $ is replaced by any $ f_n(x_2) $ with $ n<20 $, the resulting $ F(x) $ is no longer NN in $ [0,\pi ] $. Our verification of Theorem~1 for $ n\leq 20 $ relies on a brute-force technique based on the Sturm Theorem. The method is explained in great detail in \cite{Kw}. See also \cite{AK}, which discusses its use in the study of Rogosinski-Szeg\"o-type inequalities \cite{AK2}. In a nutshell, given any specific sine polynomial, we can expand it into a product of $ \sin(x) $ and an algebraic polynomial $ p(Y) $ of the variable $ Y=\cos(x)\in[-1,1] $. The Sturm Theorem can then be invoked to check if $ p(Y) $ is NN or not. This procedure works with one polynomial at a time. It is, therefore, not adequate to prove general results like Theorem~1, which involves an infinite number of polynomials. Nevertheless, we can comfortably use this technique to deal with the first 20 of such polynomials. The procedure we implemented in Maple, however, has one limitation. It works only when the coefficients of the sine polynomial are given rational numbers. For this reason, it cannot be directly applied to the sine polynomials of Theorem~1 because they involve the irrational number $ \alpha $. The procedure is modified as follows. For $ n\leq 20 $, except $ n=5 $, we replace $ \alpha $ by a slightly smaller rational number $ \underline{\alpha }<\alpha $. The corresponding partial sums $ \underline{\mathbf\Phi }_1(n) $ are shown to be NN using the Maple procedure. It then follows from the CP that $ \mathbf\Phi _1(n) $ is also NN. With $ \mathbf\Phi _1(5) $, the above approach encounters a different problem. No matter what $ \underline\alpha <\alpha $ is chosen, $ \underline{\mathbf\Phi }_1(5) $ is not NN.
In fact, $ \alpha $ has been chosen to be critical in some sense, namely, $$ \alpha = \inf \left\{ a \,\left |\, p_a(Y) \geq 0 \mbox{ in } [-1,1] \right .\right\} . $$ Here $ p_a(Y) $ is the algebraic polynomial \begin{equation} p_a(Y) = 144\,{Y}^{4}+60\,{Y}^{3}-68\,{Y}^{2}+ \left( 15\,a-30 \right) Y+(15\,a-1) \end{equation} associated with the sine polynomial \begin{equation} 2a \,\sin(x) + a \,\sin(2x) + \sum_{k=3}^{5} \gamma _k \,\sin(kx) . \end{equation} For large $ a $, for example $ a=2 $, $ p_a(Y) $ is NN in $ [-1,1] $; its graph lies above and away from the $ Y $-axis. On the other hand, when $ a=0 $, the graph crosses the $ Y $-axis. As $ a $ increases from 0, the graph of $ p_a(Y) $ rises monotonically. By continuity, there is a value $ a=\alpha $ at which the graph is about to leave the $ Y $-axis; it is tangent to the $ Y $-axis at one or more points. To determine $ \alpha $, note that each point of tangency corresponds to a double root of $ p_\alpha (Y)=0 $. A necessary condition for having a double root is the vanishing of the discriminant. With the help of Maple, the discriminant, after deleting a numerical factor, is found to be (\ref{alp}). Numerical computation yields four real roots of (\ref{alp}): $ \displaystyle -0.17, \, 0.30, \, 0.78, $ and $ 43.76 $. Hence, $ \alpha $ is the second largest root. \section{Proof of Theorem 2 \label{c5}} As in the proof of Theorem 1, we can use the Sturm procedure to confirm Theorem~2 for small $ n $; more specifically, we have done that for $ n\leq 20 $. Hence, we assume $ n>20 $ in the rest of this section.
The partial sums of $ \mathbf\Phi _2 $ have the representation \begin{equation} \mathbf \Phi _2(n) = 2\mathbf S(n) + \mathbf U(n) , \Label{F2} \end{equation} where $ \mathbf{S} $ is given by (\ref{S1}) and \begin{eqnarray} \mathbf U(n) &=& \sin(x) - \frac{\sin(2x)}{2} + \cdots - \frac{(-1)^n\sin(nx)}{n} \nonumber \\ &=& \int_{0}^{x} \big(\cos(s) - \cos(2s) + \cdots - (-1)^n\cos(ns) \big) \,ds \nonumber \\ &=& \frac{x}{2} +(-1)^n \int_{0}^{x} \frac{\cos\big(\frac{(2n+1)s}2\big)}{2\cos\big(\frac{s}2\big)} \,ds . \Label{Un} \end{eqnarray} We have used (\ref{c1}) to derive the last equality. By Lemma~\ref{H1}, we only have to show that $ \mathbf\Phi _2(n)\geq 0 $ in $ I_n=[\pi /n,\pi -\pi /n] $. Using (\ref{Stan}), (\ref{F2}) and (\ref{Un}), we see that \begin{equation} \mathbf\Phi _2(n) \geq -\tan\left( \frac{x}{4} \right) + \frac{x}{2} - h_n(x) , \Label{F2aa} \end{equation} where \begin{equation} h_n(x) = (-1)^{n+1} \int_{0}^{x} \frac{\cos\big(\frac{(2n+1)s}2\big)}{2\cos\big(\frac{s}2\big)} \,ds . \end{equation} Since $ \tan(x/4)\leq 0.32x $ for $ x\in[0,\pi ] $, (\ref{F2aa}) leads to \begin{equation} \mathbf\Phi _2(n) \geq 0.18 \,x - h_n(x) . \Label{F2b} \end{equation} Hence, Theorem~2 is proved if we can show that \begin{equation} h_n(x) \leq 0.18\,x , \quad x\in I_n, \quad n\geq 21. \Label{hn} \end{equation} With the change of variables $ s\mapsto 2t $, $ x\mapsto 2y $ and $ 2n+1\mapsto \hat m $, (\ref{hn}) becomes \begin{equation} \hspace*{32mm} g_{\hat{m}}(y) \leq 0.18\,y , \qquad y\in I_{\hat m}, \quad \hat m=43,45,47,\cdots, \Label{gm1} \end{equation} where \begin{equation} g_{\hat{m}}(y) = (-1)^{(\hat{m}+1)/2} \int_{0}^{y} \frac{\cos(\hat{m}t)}{2\cos(t)} \,dt \Label{gm3} \end{equation} and $ I_{\hat m}=[\pi /(\hat m-1),\pi /2-\pi /(\hat m-1)] $. In fact, we claim that (\ref{gm1}) holds in the bigger interval $ J_{\hat m}=[\pi /\hat{m},\pi /2] $.
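The bookkeeping in this change of variables is worth recording: since $ ds=2\,dt $, $$ \int_{0}^{x} \frac{\cos\big(\frac{(2n+1)s}2\big)}{2\cos\big(\frac{s}2\big)} \,ds = 2 \int_{0}^{y} \frac{\cos(\hat{m}t)}{2\cos(t)} \,dt , \qquad x=2y , \quad \hat{m}=2n+1 , $$ and $ (-1)^{(\hat{m}+1)/2}=(-1)^{n+1} $, so that $ h_n(x)=2\,g_{\hat{m}}(y) $ and the bound $ h_n(x)\leq 0.18x=0.36y $ reduces to (\ref{gm1}).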
\begin{center} \FG{59}{gm} \\ \par\vspace*{-4.0mm}\par {\footnotesize \hspace*{65mm} $ y_3 $\hspace*{4mm} $ y_{2} $\hspace*{4mm} $ y_{1} $} \par\vspace*{1mm}\par \par\vspace*{1mm}\par Figure 2. Graphs of $ u=g_{23}(y) $ and $ u=0.18y $. \end{center} \par\vspace*{3mm}\par The wavy curve in Figure 2 depicts the graph of $ g_{23}(y) $ and the dashed line is the graph of $ 0.18y $. It is clear from the figure that, in this case, (\ref{gm1}) fails for small positive $ y $. When $ \hat{m}\equiv 1 \ (\mbox{mod } 4) $, however, $ g_{\hat{m}}(y) $ is negative for $ y\in[0,\pi /\hat{m}] $ and it can be shown that (\ref{gm1}) holds in the whole interval $ [0,\pi /2] $. The shape of $ g_{\hat{m}}(y) $ is strikingly similar to that of $ f_n(x) $ in Figure 1. Indeed, by using the reflection mapping $ y=\pi /2-x $, one can show that $ g_{\hat{m}}(y)=f_{\hat{m}}(\pi /2-y)-f_{\hat{m}}(\pi /2) $. With this observation, we can deduce many of the properties of $ g_{\hat{m}}(y) $ from those of $ f_n(x) $. For instance, the critical points of $ g_{\hat{m}}(y) $ are given by the sequence $$ y_{(\hat{m}-1)/2}= \frac{\pi }{2\hat{m}} \quad < \quad y_{(\hat{m}-3)/2}=\frac{3\pi }{2\hat{m}} \quad < \quad \cdots \quad < \quad y_1= \frac{(\hat{m}-2)\pi }{2\hat{m}} \,. $$ Note that we have numbered the critical points $ y_k $ from right to left. The first one, $ y_1 $, is always a local maximum, and then they alternate as local minimum and maximum. The last one, $ y_{(\hat{m}-1)/2} $, is a minimum or maximum depending on whether $ (\hat{m}+1)/2 $ is odd or even. The sequence of local maximum (minimum) values $ g_{\hat{m}}(y_i) $ is decreasing (increasing) as $ i $ increases. The global maximum of $ g_{\hat{m}}(y) $ is attained at $ y_1 $. \begin{LEMMA}\Label{gm} For all odd integers $ \hat{m}\geq 43 $, \begin{equation} g_{\hat{m}}(y) \leq 0.22, \quad y\in[0,\pi /2].
\Label{gmy} \end{equation} \end{LEMMA} \begin{PROOF} Let us estimate \begin{eqnarray*} g_{\hat{m}}(y_1)-g_{\hat{m}}(y_2) &=& \int_{y_2}^{y_1} \frac{|\cos(\hat{m}t)|}{2\cos(t)} \,dt \, \\[1.5ex] &=& \int_{\pi }^{2\pi } \frac{|\sin(s)|}{2\hat{m}\,\sin(s/{\hat{m}})} \,ds \,. \end{eqnarray*} For fixed $ s\in[\pi ,2\pi ] $, $ \hat{m}\sin(s/\hat{m}) $ is an increasing function of $ \hat{m}\geq 43 $. As a result, $ g_{\hat{m}}(y_1)-g_{\hat{m}}(y_2) $ is a decreasing function of $ \hat{m} $. In particular, \begin{equation} g_{\hat{m}}(y_1)-g_{\hat{m}}(y_2) \leq g_{43}(y_1)-g_{43}(y_2) = 0.21731814075\cdots . \end{equation} Here we have abused the notation: $ y_1 $ and $ y_2 $ on the lefthand side of the inequality are different from those on the other side. Since $ g_{\hat{m}}(y_2)<0 $, the desired conclusion follows. \end{PROOF} Obviously, Lemma~\ref{gm} implies that (\ref{gm1}) holds on $ [11/9,\pi /2] $. It remains to show (\ref{gm1}) on $ [\pi /\hat{m},11/9] $. Our next Lemma shows that in this subinterval, (\ref{gmy}) can be greatly improved. \begin{LEMMA}\Label{gm2} For all odd integers $ \hat{m}\geq 43 $, \begin{equation} g_{\hat{m}}(y) \leq 0.06, \quad y\in[0,11/9]. \Label{gmy2} \end{equation} \end{LEMMA} \begin{PROOF} For $ \hat{m}=43 $, the first (counting from $ y_1 $) local maximum that falls within the subinterval $ [0,11/9] $ is $ y_5 $, and we compute \begin{equation} g_{43}(y_5) - g_{43}(y_6) = 0.059552923006\cdots . \end{equation} Using the same arguments as in the proof of Lemma~\ref{gm}, we see that $ g_{\hat{m}}(y_5)-g_{\hat{m}}(y_6) $ is a decreasing function of $ \hat{m} $. Hence, \begin{equation} g_{{\hat{m}}}(y_5) - g_{{\hat{m}}}(y_6) < 0.059552923006\cdots \end{equation} and the desired conclusion follows. \end{PROOF} Lemma~\ref{gm2} implies that (\ref{gm1}) holds on $ [1/3,11/9] $. It remains to show (\ref{gm1}) on $ [\pi /\hat{m},1/3] $. We use a different method to estimate $ g_{\hat{m}}(y) $ in this interval.
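To summarize the numerical bookkeeping so far: for $ y\geq 11/9 $ we have $ 0.18y\geq 0.18\cdot\frac{11}{9} =0.22 $, so (\ref{gmy}) yields (\ref{gm1}) on $ [11/9,\pi /2] $, while for $ y\geq 1/3 $ we have $ 0.18y\geq 0.06 $, so (\ref{gmy2}) yields (\ref{gm1}) on $ [1/3,11/9] $.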
For $ t\in[0,1/3] $, \begin{equation} 1 \leq \frac{1}{\cos(t)} \leq \frac{1}{\cos(1/3)} < 1.06 . \end{equation} It follows that \begin{equation} \frac{\cos(\hat{m}t)}{2\cos(t)} \leq \frac{1}{2} \,\cos(\hat{m}t) + 0.03 \Label{gm4} \end{equation} and \begin{equation} - \, \frac{\cos(\hat{m}t)}{2\cos(t)} \leq - \frac{1}{2} \,\cos(\hat{m}t) + 0.03 . \Label{gm5} \end{equation} We consider two cases. When $ (\hat{m}+1)/2 $ is even, then from (\ref{gm3}) and (\ref{gm4}), we obtain \begin{equation} g_{\hat{m}}(y) \leq \frac{\sin(\hat{m}y)}{2\hat{m}} + 0.03y . \end{equation} It is not difficult to see that this implies (\ref{gm1}) in $ [\pi /\hat{m},1/3] $. In the complementary case, when $ (\hat{m}+1)/2 $ is odd, we use (\ref{gm3}) and (\ref{gm5}) to obtain \begin{equation} g_{\hat{m}}(y) \leq - \,\frac{\sin(\hat{m}y)}{2\hat{m}} + 0.03y . \end{equation} This implies (\ref{gm1}) in $ [0,1/3]\supset[\pi /\hat{m},1/3] $, and completes the proof of Theorem~2. \section{Further Examples and Remarks \label{sc6}} \begin{EX} \em Theorem~1 can be applied to show that the sum $$ \frac{\phi _2(x)}{\sqrt{2}} + \frac{\phi _4(x)}{\sqrt{3}} + \frac{\phi _6(x)}{\sqrt{4}} + \cdots + \frac{\phi _{2\tilde{n}}(x)}{\sqrt{\tilde n +1}} + \left[ \, \frac{\sin(nx)}{\sqrt{\tilde n+2}} \,\right] $$ is a PS. It is not covered by Theorem~2. More generally, Theorem~1 implies that $$ \frac{\phi _2(x)}{\sqrt{\beta +1}} + \frac{\phi _4(x)}{\sqrt{\beta +2}} + \frac{\phi _6(x)}{\sqrt{\beta +3}} + \cdots + \frac{\phi _{2\tilde{n}}(x)}{\sqrt{\beta +\tilde n}} + \left[ \, \frac{\sin(nx)}{\sqrt{\beta +\tilde n+1}} \,\right] $$ is PS for $ \beta \geq \displaystyle\frac{8-9\alpha ^2}{9\alpha ^2-4} \approx 1.64393 $. Numerical experiments suggest that the sum is PS for $ \beta >1.76923 $.
\end{EX} \par\vspace*{\baselineskip}\par \begin{EX} \em Theorem~1 implies that $$ \phi _2(x) + \frac{\phi _4(x)}{2^\gamma } + \cdots + \frac{\phi _{2\tilde{n}}(x)}{{\tilde n}^\gamma } + \left[ \, \frac{\sin(nx)}{(\tilde n + 1)^\gamma } \,\right] $$ is PS for $ \gamma \geq 0.26 $. Theorem~2 performs worse in this case, giving only $ \gamma \geq 0.36258 $. Numerical experiments suggest that the sum may be a PS for $ 0.24\leq \gamma <0.26 $, but not for $ \gamma =0.23 $. In the latter case, all partial sums except the sixth are NN in $ [0,\pi ] $. These two examples indicate that Theorems~1 and~2 are not best possible. \end{EX} \par\vspace*{2mm}\par \begin{REM} \em In Theorems A, C, 1 and 2, the extremal sums are characterized by their respective subsequences of odd-order coefficients, namely $$ \left\{ c_{2j-1}\right\} = \left\{ 1,\,\frac{1}{2} ,\,\frac{3}{8} ,\,\frac{5}{16} ,\,\cdots \right\} , $$ $$ \hspace*{17mm} \left\{ 1,\, \frac{1}{\sqrt2} ,\, \frac{1}{\sqrt3} ,\, \cdots \right\} , $$ $$ \left\{ 2\alpha , \gamma _{2j+1} \right\} = \left\{ 2\alpha , \,\frac{4}{3} ,\, \frac{6}{5} ,\, \frac{8}{7} ,\,\cdots \right\} , \hspace*{7mm} $$ and $$ \hspace*{1mm} \left\{ \delta _{2j-1}\right\} = \left\{ 3, \,\frac{7}{3} ,\, \frac{11}{5} ,\, \frac{15}{7} ,\,\cdots \right\} . $$ The relative strength of the various results can be determined by comparing these sequences according to the CP. For instance, sequence 1 $ \succeq $ sequence 2, while each of sequences 3 and 4 is $ \succeq $ sequences 1 and 2. To look for an improvement of Theorems~1 and 2, one searches for a sequence $ \succeq $ sequence 3 or 4 that yields a PS. Note that $ \left\{ 1,1,\cdots \right\} $ $ \succeq $ sequences 3 and 4, but its associated sine sum is not a PS. In other words, $ \left\{ 1,1,\cdots \right\} $ is a strict upper bound of all possible improvements of Vietoris' sine result.
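Indeed, the failure of $ \left\{ 1,1,\cdots\right\} $ can be seen already in its second partial sum: $$ \sin(x)+\sin(2x) = \sin(x)\big( 1+2\cos(x)\big) < 0 \qquad \mbox{for } x\in\left( \frac{2\pi }{3} \, , \pi \right) . $$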
\end{REM} \par\vspace*{2mm}\par \begin{REM} \em Theorem~1 relaxes the first condition in (\ref{v}) of the Vietoris result. It is natural to ask whether the second condition in (\ref{v}) can also be relaxed by replacing some of the factors $ \rho _j=\frac{2j-1}{2j} $ with larger constants. The following observation concerning Belov's necessary condition (\ref{bel}) leads to the answer no. \end{REM} \begin{LEMMA} {\rm (i)} A necessary condition for any sine polynomial $ \sum_{k=1}^{n} a_k\,\sin(kx) $ to be NN in some neighborhood $ [\pi -\epsilon ,\pi ] $, $ 0<\epsilon <\pi $, is \begin{equation} \sum_{k=1}^{n} (-1)^{k-1}\,ka_k \geq 0. \Label{bel1} \end{equation} {\rm (ii)} A necessary condition for $ \sum_{k=1}^{n} a_k\,\sin(kx) $ to be NN in some neighborhood $ [0,\epsilon ] $, $ 0<\epsilon <\pi $, is \begin{equation} \sum_{k=1}^{n} ka_k \geq 0 . \Label{bel2} \end{equation} \end{LEMMA} \begin{PROOF} Let us prove (i). By assumption, \begin{equation} 0 \leq \sum_{k=1}^{n} \frac{a_k\,\sin(kx)}{\pi -x} \end{equation} for all $ x\in[\pi -\epsilon ,\pi ) $. By taking the limit as $ x\rightarrow \pi $, we get (using, for example, L'H\^opital's rule and the identity $ \sin(kx)=(-1)^{k+1}\sin(k(\pi -x)) $) \begin{equation} 0 \leq \lim_{x\rightarrow \pi } \sum_{k=1}^{n} \frac{a_k\,\sin(kx)}{\pi -x} = \sum_{k=1}^{n} (-1)^{k+1}k\,a_k . \end{equation} The proof of (ii) is similar. \end{PROOF} \begin{REM} \em In the hypotheses of the Lemma, $ a_k $ are not required to be of the same sign or monotone. Also note that unlike in the Belov condition, we are assuming in the hypothesis only that the sine polynomial itself (not any of its proper partial sums) is NN, and only one inequality (\ref{bel1}) is required to hold (not for all $ n $). \end{REM} \begin{REM} \em As Belov already pointed out, his condition (\ref{bel}) is no longer sufficient without the additional monotonicity requirement on the coefficients. We give an example related to our sum $ \mathbf\Phi $.
It is easy to verify that the polynomial $$ 2\sin(x)+\sin(2x)+ \frac{4}{3} \sin(3x) + \sin(4x) + \frac{6}{5} \sin(5x) + \frac{6}{8} \sin(8x) $$ is not NN in $ [0,\pi ] $, although it satisfies (\ref{bel}). This polynomial is constructed by taking $ \mathbf\Phi (5) $, the first five terms of $ \mathbf\Phi $, skipping the terms involving $ \sin(6x) $ and $ \sin(7x) $, and adding the next term with a suitable coefficient to satisfy (\ref{bel}). The same is true for the polynomial constructed using $ \mathbf\Phi (5) $ and $ \sin(10x) $. However, we notice that, after that, all polynomials of the form $$ \mathbf\Phi (5)+ \frac{6}{n} \,\sin(nx), \quad n=12,14,16,\cdots $$ are PS. \end{REM} \begin{REM} \em Another natural question to ask is whether our Theorem~1 has a cosine counterpart, namely, whether $ \sum\,{\gamma _k}\,\cos(kx) $ is a PS, if $ \gamma _0=\gamma _1 $ and $ \gamma _k $ is given by (\ref{ak1}) for $ k=1,2,\cdots $. The answer is also no. For $ x=\pi $, since $ \gamma _0=\gamma _1 $, the cosine series becomes $$ \gamma _2 - \gamma _3 + \gamma _4 - \gamma _5 + \cdots $$ and every partial sum with an even number of terms is negative, because $ \gamma _2<\gamma _3 $, $ \gamma _4<\gamma _5 $, etc. A similar observation applies to the analogous sum $ \sum\,{\delta _k}\,\cos(kx) $. \end{REM} {\bf Acknowledgments} The author is thankful to Horst Alzer for many inspiring discussions on the subject of inequalities, in particular, trigonometric inequalities. Many of the technical computations mentioned in the article were carried out using the excellent Maple symbolic computation software. \end{document}
\begin{document} \title{\LARGE \bf Robustness of Leader-Follower Networked Dynamical Systems} \author{Mohammad Pirani, Ebrahim Moradi Shahrivar, Baris Fidan and Shreyas Sundaram \thanks{This material is based upon work supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). M. Pirani and B. Fidan are with the Department of Mechanical and Mechatronics Engineering at the University of Waterloo, Waterloo, ON, Canada. E-mail: \texttt{\{mpirani, fidan\}@uwaterloo.ca}. E. M. Shahrivar is with the Department of Electrical and Computer Engineering at the University of Waterloo, Waterloo, ON, Canada. E-mail: {\texttt{[email protected]}}. S. Sundaram is with the School of Electrical and Computer Engineering at Purdue University, W. Lafayette, IN, USA. E-mail: {\texttt{[email protected]}}. } } \maketitle \thispagestyle{empty} \pagestyle{empty} \begin{abstract} We present a graph-theoretic approach to analyzing the robustness of leader-follower consensus dynamics to disturbances and time delays. Robustness to disturbances is captured via the system $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ norms, and robustness to time delay is defined as the maximum allowable delay for the system to remain asymptotically stable. Our analysis is built on understanding certain spectral properties of the grounded Laplacian matrix that play a key role in such dynamics. Specifically, we give graph-theoretic bounds on the extreme eigenvalues of the grounded Laplacian matrix which quantify the impact of disturbances and time-delays on the leader-follower dynamics. We then provide tight characterizations of these robustness metrics in Erd\H{o}s-R\'enyi random graphs and random regular graphs. Finally, we view robustness to disturbances and time delay as network centrality metrics, and provide conditions under which a leader in a network optimizes each robustness objective.
Furthermore, we propose a sufficient condition under which a single leader optimizes both robustness objectives simultaneously. \end{abstract} \section{Introduction} \label{sec:intro} In networked dynamical systems, the influence of each individual agent on the global dynamics is determined by: (i) the location of the agent in the network, (ii) its local behavior and dynamics, and (iii) the overall network structure. One class of local behavior that has received attention in the literature (particularly in the context of social and economic networks) is the notion of agent {\it stubbornness} \cite{Friedkin2, Frasca, Ghaderi13, Ozdaglar, Acemoglu2013opinion}, where an agent influences other agents but is not affected in return. Such agents have also been studied from the perspective of acting as {\it leaders} in multi-agent systems \cite{Rahmani,Chapman, Ni}. In this paper, our goal is to study the impact of {\it disturbances} (such as noise, faults, attacks and other external inputs) and {\it time-delays} in networked dynamical systems that contain stubborn (or leader) agents. From a control-theoretic viewpoint, robustness to disturbances is investigated by studying their impact on the state or output of the system, and is often quantified via system $\mathcal{H}_{2}$ or $\mathcal{H}_{\infty}$ norms. In this direction, a large literature has recently investigated the robustness of networked dynamical systems to disturbances from an input-output standpoint \cite{Leonard,Fitch2,Bamieh2,Jovanovic,Patterson1,Clark5,Summers,Scardovi,Scardovi2,Siami,Hinf1,Hinf2,Hinf3}. Similarly, robustness of a consensus network to time-delays in the communication between agents is quantified in terms of the maximum allowable time-delay for the system to remain asymptotically stable \cite{Olfati2}.
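To recall the classical instance of the latter metric (stated for leaderless single-integrator consensus; in this paper the role of $ L $ is played by the grounded Laplacian): for the delayed dynamics $$ \dot{x}(t) = -Lx(t-\tau ) $$ over a connected undirected graph, the agents reach consensus if and only if $ \tau <\pi /(2\lambda _n(L)) $, where $ \lambda _n(L) $ is the largest eigenvalue of the graph Laplacian \cite{Olfati2}.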
As we describe in the paper, the robustness metrics of interest are a function of the spectrum of the grounded Laplacian matrix (obtained by removing certain rows and columns from the Laplacian matrix \cite{Barooah,Miekkala,PiraniSundaramArxiv}). Hence, one of the main contributions of this paper, which is an extended version of the conference paper \cite{piranicdc}, is to propose graph-theoretic bounds on the extreme eigenvalues of this matrix. These eigenvalue bounds consequently provide bounds on the robustness metrics in general graphs, and tight bounds on such metrics in random graphs. After characterizing graph-theoretic bounds on these robustness metrics, we turn our attention to selecting leaders in the network in order to optimize robustness. Leader selection algorithms for multi-agent systems have attracted much attention in recent years \cite{Patterson2, Clarkbook}, and optimal leaders for certain metrics have been characterized in terms of either known network centrality measures, or via the introduction of new centrality measures \cite{Pasqualetti,Leonard,ACC}. We contribute to this literature by investigating the leader selection problem in a given network to optimize network robustness to disturbances and time delay. More specifically, we provide conditions under which a leader in a network optimizes each robustness objective. Furthermore, we propose a sufficient graph-theoretic condition for a particular leader to optimize all of the robustness objectives simultaneously. The paper is organized as follows. We start by introducing our notation in Section~\ref{sec:notation}, and in Section~\ref{sec:influence}, we formally state the leader-follower consensus dynamics that we will be studying in this paper. There, we also describe how the spectrum of the grounded Laplacian matrix plays a role in the robustness of such dynamics to disturbances and time-delays. 
In Section~\ref{sec:grounded_laplacian}, we provide graph-theoretic characterizations of the salient eigenvalues of the grounded Laplacian matrix; we then use these characterizations to provide bounds on the robustness metrics in general graphs (Section~\ref{sec:application}) and in random graphs (Section~\ref{sec:random}). We consider the problem of selecting leaders to optimize these metrics (viewed in terms of centrality measures) in Section~\ref{sec:robustleader}, validate our analysis via simulations in Section~\ref{sec:simulations}, and conclude in Section~\ref{sec:conc}. \section{Notation} \label{sec:notation} We denote an undirected graph by $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$, where $\mathcal{V} = \{v_1, v_2, \ldots, v_n\}$ is a set of nodes (or vertices) and $\mathcal{E} \subset \mathcal{V}\times\mathcal{V}$ is the set of edges. The neighbors of node $v_i \in \mathcal{V}$ are given by the set $\mathcal{N}_i = \{v_j \in \mathcal{V} \mid (v_i, v_j) \in \mathcal{E}\}$. The adjacency matrix of the graph is given by a symmetric and binary $n \times n$ matrix $A$, where element $A_{ij}=1$ if $(v_i, v_j) \in \mathcal{E}$ and zero otherwise. The degree of node $v_i$ is denoted by $d_i \triangleq \sum_{j=1}^nA_{ij}$. For a given set of nodes $X \subset \mathcal{V}$, the {\it edge-boundary} (or just boundary) of the set is given by $\partial{X} \triangleq \{(v_i,v_j) \in \mathcal{E} \mid v_i \in X, v_j \in \mathcal{V}\setminus{X}\}$. The Laplacian matrix of the graph is given by $L \triangleq D - A$, where $D = \diag(d_1, d_2, \ldots, d_n)$. The eigenvalues of the Laplacian are real and nonnegative, and are denoted by $0 = \lambda_1(L) \le \lambda_2(L) \le \ldots \le \lambda_n(L)$. 
For a given subset $\mathcal{S} \subset \mathcal{V}$ of nodes (which we term {\it grounded nodes}), the {\it grounded Laplacian} induced by $\mathcal{S}$ is denoted by $L_g(\mathcal{S})$ or simply $L_g$, and is obtained by removing the rows and columns of $L$ corresponding to the nodes in $\mathcal{S}$. \section{Problem Statement} \label{sec:influence} Consider a connected network consisting of $n$ agents $\mathcal{V} = \{v_1, v_2,\ldots, v_n\}$. The set of agents is partitioned into a set of followers $\mathcal{F}$, and a set of leaders\footnote{These agents may also be referred to as {\it{anchors}} \cite{Rahmani} or {\it{stubborn agents}} \cite{Ghaderi13,piranicdc} depending on the context.} $\mathcal{S}$. Each agent $v_i$ has a scalar, real-valued state $x_i(t)$, where $t$ is the time index. The state of each follower agent $v_j\in \mathcal{F}$ evolves based on the interactions with its neighbors as \begin{align} \dot{x}_{j}(t)&=\sum_{v_i\in \mathcal{N}_j}(x_i(t)-x_j(t)). \label{eqn:partial} \end{align} The state of the leaders (which should be tracked by the followers) is assumed to be constant\footnote{The results in this paper can be extended to the case where the states of the leaders are time-varying \cite{Rahmani}, and given by $\dot{x}_{\mathcal{S}}(t) = u(t)$.} and thus \begin{equation} \dot{x}_{j}(t) = 0, \enspace \forall v_j \in \mathcal{S}. \label{eqn:fully} \end{equation} If the graph is connected, the states of the follower agents will converge to some convex combination of the states of the leaders \cite{Clark1}. We assume without loss of generality that the leader agents are placed last in the ordering of the agents.
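As a concrete illustration of the grounded Laplacian and the dynamics \eqref{eqn:partial}--\eqref{eqn:fully}, the following NumPy sketch builds $L$, $L_g$ and the follower-leader coupling block for a small graph, and checks that each follower equilibrates to a convex combination of the leader states. The five-node topology, leader set and leader states are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 5-node connected graph; leaders S = {v3, v4} placed last,
# as assumed in the text.
n = 5
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (0, 4), (1, 4)]
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian L = D - A

F, S = [0, 1, 2], [3, 4]
Lg = L[np.ix_(F, F)]                      # grounded Laplacian induced by S
A_fs = A[np.ix_(F, S)]                    # follower-leader adjacency block

# Lg is positive definite for a connected graph with a nonempty leader set
assert np.linalg.eigvalsh(Lg)[0] > 0

# With constant leader states x_S(0), the follower equilibrium solves
# Lg x_F = A_fs x_S(0)  (the leader-coupling term of the text, with the
# Laplacian sign convention absorbed into A_fs).
xS = np.array([0.0, 1.0])
xF_inf = np.linalg.solve(Lg, A_fs @ xS)

# each follower ends at a convex combination of the leader states:
# the rows of W = Lg^{-1} A_fs are nonnegative and sum to 1
W = np.linalg.solve(Lg, A_fs)
assert np.allclose(W.sum(axis=1), 1.0)
assert xS.min() - 1e-9 <= xF_inf.min() and xF_inf.max() <= xS.max() + 1e-9
```

The row-sum check reflects the convergence property cited from \cite{Clark1}: grounding the Laplacian at $\mathcal{S}$ turns the consensus dynamics into a stable linear system driven by the leader states.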
Aggregating the states of all followers into a vector $x_\mathcal{F}(t) \in \mathbb{R}^{n-|\mathcal{S}|}$, and the states of all leaders into a vector $x_{\mathcal{S}}(t)\in \mathbb{R}^{|\mathcal{S}|}$ (note that $x_{\mathcal{S}}(t) = x_{\mathcal{S}}(0)$ for all $t \ge 0$), equations \eqref{eqn:partial} and \eqref{eqn:fully} yield the following dynamics \begin{equation} \begin{bmatrix} \dot{x}_\mathcal{F}(t) \\[0.3em] \dot{x}_{\mathcal{S}}(t) \end{bmatrix}=-\underbrace{\begin{bmatrix} L_g & L_{12} \\[0.3em] L_{21} & L_{22} \end{bmatrix}}_L\begin{bmatrix} {x}_\mathcal{F}(t) \\[0.3em] {x}_{\mathcal{S}}(t) \end{bmatrix}. \label{eqn:mat} \end{equation} Given equation \eqref{eqn:fully}, we have that $L_{21}=0$ and $L_{22}=0$. Hence the dynamics of the follower agents are given by \begin{align} \dot{x}_\mathcal{F}(t) &= -{L}_gx_\mathcal{F}(t) + L_{12}x_{\mathcal{S}}(0). \label{eqn:partial4} \end{align} Here, $L_g$ is the grounded Laplacian induced by the leaders, representing the interaction between the followers. The submatrix $L_{12}$ of the graph Laplacian captures the influence of the leaders on the followers. \begin{remark} For the case where the underlying network is connected and there exists at least one leader, the matrix ${L}_g$ is a diagonally dominant matrix with at least one strictly diagonally dominant row. Hence, from \cite{Horn} it is a positive definite matrix and in this case, the dynamics given by \eqref{eqn:partial4} will be asymptotically stable and the convergence rate is determined by the smallest eigenvalue of ${L}_g$. Moreover, ${L}_g^{-1}$ is a nonnegative matrix and based on the Perron-Frobenius theorem \cite{Horn} its largest eigenvalue $\lambda_{n-|\mathcal{S}|}({L}_g^{-1})$ has an eigenvector $\mathbf{x}$ with nonnegative components. Thus, the smallest eigenvalue $\lambda_1(L_g)$ of $L_g$ also has an eigenvector $\mathbf{x}$ with nonnegative components.
\label{rem:deffin} \end{remark} In this paper, we will consider the impact of two perturbations to the above nominal dynamics\footnote{We analyze these two cases separately, since the objective of this study is to show the explicit dependency of these two robustness metrics on the spectrum of $L_g$.}: \begin{itemize} \item The case where the update rule of each follower agent $v_j \in \mathcal{F}$ is affected by a disturbance $w_j(t)$. In this case, we extend \eqref{eqn:partial4} to \begin{align} \dot{{x}}_\mathcal{F}(t) &= -{L}_g{x}_\mathcal{F}(t)+ L_{12}x_{\mathcal{S}}(0) + w(t). \label{eqn:padgrtial43} \end{align} Here $w(t)$ is a vector representing the disturbances. It is assumed that the leader agents are unaffected by the disturbances. This is a reasonable assumption due to the fact that they do not update their state. \item The case where the communication between the agents is affected by some time delay. In this case, we have the dynamics \begin{align} \dot{{x}}_\mathcal{F}(t) &= -{L}_g{x}_\mathcal{F}(t-\tau) + L_{12}x_{\mathcal{S}}(0), \label{eqn:partiweal43} \end{align} where $0<\tau \leq \tau_{max}$ for some $\tau_{max}>0$. \end{itemize} \begin{remark} If each agent has instantaneous access to its own state, the dynamics have the form \begin{equation} \dot{{x}}_\mathcal{F}(t) = -D_g{x}_\mathcal{F}(t)+ A_g{x}_\mathcal{F}(t-\tau) + L_{12}x_{\mathcal{S}}(0), \label{eqn:offdiag} \end{equation} where $L_g=D_g-A_g$. In this case since all of the principal minors of $L_g$ are nonnegative and $L_g$ is non-singular, \eqref{eqn:offdiag} is asymptotically stable independent of the magnitude of the delays in the off-diagonal terms of $L_g$ (Theorem 1 in \cite{Hofbauer}). \end{remark} In the following subsections, we analyze the robustness of system \eqref{eqn:padgrtial43} to the disturbances and \eqref{eqn:partiweal43} to the time delay. 
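The subsections that follow reduce all three robustness metrics to spectral functions of $L_g$: the $\mathcal{H}_{\infty}$ disorder $1/\lambda_1(L_g)$, the $\mathcal{H}_2$ disorder $\frac{1}{2}\trace(L_g^{-1})$, and the delay threshold $\pi/(2\lambda_{max}(L_g))$. The minimal NumPy sketch below previews these quantities on a hypothetical $3\times 3$ grounded Laplacian (an illustrative example, not from the paper) and cross-checks the $\mathcal{H}_{\infty}$ value against a direct frequency sweep of $G(j\omega)=(j\omega I + L_g)^{-1}$:

```python
import numpy as np

# Hypothetical grounded Laplacian of a small toy graph (illustrative only);
# its eigenvalues are 1, 2 and 4.
Lg = np.array([[2.0, -1.0,  0.0],
               [-1.0, 3.0, -1.0],
               [0.0, -1.0,  2.0]])
eig = np.linalg.eigvalsh(Lg)              # ascending order

h_inf_disorder = 1.0 / eig[0]             # 1 / lambda_1(Lg)
h2_disorder = 0.5 * np.trace(np.linalg.inv(Lg))
tau_hat = np.pi / (2.0 * eig[-1])         # delay threshold pi/(2 lambda_max)

# sanity check: the largest singular value of G(jw) = (jwI + Lg)^{-1}
# over a frequency sweep is attained at w = 0 and equals 1/lambda_1(Lg)
sweep = max(np.linalg.norm(np.linalg.inv(1j * w * np.eye(3) + Lg), ord=2)
            for w in np.linspace(0.0, 10.0, 201))
assert abs(sweep - h_inf_disorder) < 1e-8
```

The sweep confirms numerically that, for a symmetric positive definite $L_g$, the worst-case frequency is $\omega=0$, which is the content of the $\mathcal{H}_{\infty}$ computation below.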
\subsection{Robustness of \eqref{eqn:padgrtial43} to Disturbances} \label{sec:robust_disturbances} Let $\bar{x}_\mathcal{F}(t)$ be the state of \eqref{eqn:padgrtial43} when $w(t)=0$, and define the error between the nominal and disturbed state as $e(t)={x}_\mathcal{F}(t)-\bar{x}_\mathcal{F}(t)$. The transfer function from the disturbance $w(t)$ to $e(t)$ is obtained from \eqref{eqn:padgrtial43} as $G(s)= (sI+L_g)^{-1}$. In order to discuss the robustness of \eqref{eqn:padgrtial43} to disturbances, a typical approach (e.g. \cite{Bamieh,Jovanovic,piranicdc}) is to consider system $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ norms, defined as \cite{Doyle} \begin{align} ||G||_2 &\triangleq \left( \frac{1}{2\pi}\trace\int_0^{\infty}G^*(j\omega)G(j\omega)d\omega \right)^{\frac{1}{2}},\nonumber \\ ||G||_{\infty} &\triangleq \sup_{\omega\in \mathbb{R}}{\lambda_{max}^{\frac{1}{2}}(G^*(j\omega)G(j\omega))}. \end{align} The system $\mathcal{H}_2$ norm can also be calculated based on the controllability Gramian $\mathcal{W}_c$, which is the solution of a Lyapunov equation. In particular, for the error dynamics of \eqref{eqn:padgrtial43} we have $||G||_2^2=\trace \mathcal{W}_c$, which becomes \cite{Bamieh2} \begin{equation} ||G||_2=\left(\frac{1}{2}\trace(L_g^{-1})\right)^{\frac{1}{2}}. \label{eqn:htwo} \end{equation} For the system $\mathcal{H}_{\infty}$ norm of the error dynamics of \eqref{eqn:padgrtial43}, we present the following proposition. \begin{proposition} The system $\mathcal{H}_{\infty}$ norm of the error dynamics of \eqref{eqn:padgrtial43} is \begin{equation} ||G||_{\infty}=\frac{1}{\lambda_1({L}_g)}. \label{eqn:hinfti} \end{equation} \label{prop:hinf} \end{proposition} \begin{IEEEproof} We have $G(j\omega)=(j\omega I+{L}_g)^{-1}$, which gives \begin{align} G^*(j\omega)G(j\omega)&=(-j\omega I+{L}_g)^{-1}(j\omega I+{L}_g)^{-1} \nonumber \\ &=\underbrace{(\omega^2I+{L}_g^2)^{-1}}_{\mathcal{C}^{-1}}. 
\end{align} We know that $\mathcal{C}>0$ (positive definite) which yields $\mathcal{C}^{-1}>0$. Thus finding $\sup_{\omega} \lambda_{n-|\mathcal{S}|}(\mathcal{C}^{-1})$ is equivalent to finding $\inf_{\omega} \lambda_{1}(\mathcal{C})$. Since $\lambda_{1}(\mathcal{C})=\omega^2+\lambda_{1}({L}_g^2)$, we have $\inf_{\omega} \lambda_{1}(\mathcal{C})=\lambda_{1}({L}_g^2)=\lambda_{1}^2({L}_g)$, and hence $||G||_{\infty}=\sup_{\omega}\lambda_{max}^{\frac{1}{2}}(\mathcal{C}^{-1})=\frac{1}{\lambda_1({L}_g)}$, proving the proposition. \end{IEEEproof} Based on Proposition \ref{prop:hinf} and Remark \ref{rem:deffin}, minimizing the $\mathcal{H}_{\infty}$ norm of the error dynamics of \eqref{eqn:padgrtial43} is equivalent to maximizing the convergence rate of \eqref{eqn:padgrtial43}. This equivalence between the two metrics will be revisited in Section \ref{sec:robustleader}. \begin{remark} The recent literature mainly works with $\frac{1}{2}\trace(L_g^{-1})$ instead of its square root, and refers to this metric as the {\it {network disorder}} \cite{Bamieh2}. To maintain consistency, we also adopt this terminology and refer to $\frac{1}{2}\trace(L_g^{-1})$ as $\mathcal{H}_2$ disorder and $\frac{1}{\lambda_1({L}_g)}$ as $\mathcal{H}_{\infty}$ disorder. \end{remark} \subsection{Robustness of \eqref{eqn:partiweal43} to Time Delay} The other robustness metric that we analyze in this paper is the robustness of \eqref{eqn:partiweal43} to time delay. The following theorem gives a necessary and sufficient condition for the asymptotic stability of \eqref{eqn:partiweal43}. \begin{theorem}[\cite{Buslowicz}] The dynamical system \eqref{eqn:partiweal43} is asymptotically stable if and only if \begin{equation}\label{thm:ineq1} \tau_{max}< \frac{\tan^{-1}\left(\frac{Re(\lambda_i(L_g))}{Im(\lambda_i(L_g))}\right)}{\lambda_i(L_g)}, \end{equation} for all $i=1,2, ..., n-|\mathcal{S}|$.
\label{thm:delll} \end{theorem} Since the eigenvalues of $L_g$ are real, the numerator of the right-hand side term in \eqref{thm:ineq1} is $\frac{\pi}{2}$, and thus a necessary and sufficient condition for asymptotic stability of \eqref{eqn:partiweal43} is \begin{equation} \tau_{max}<\frac{\pi}{2\lambda_{max}(L_g)}=\frac{\pi}{2\lambda_{n-|\mathcal{S}|}(L_g)}. \label{eqn:ti} \end{equation} In this paper, we refer to the quantity $\hat{\tau}_{max}=\frac{\pi}{2\lambda_{n-|\mathcal{S}|}(L_g)}$ as the {\it{delay threshold}}. The above characterization of the robustness of \eqref{eqn:padgrtial43} to disturbances, based on system $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ norms in \eqref{eqn:htwo} and \eqref{eqn:hinfti}, and the robustness of \eqref{eqn:partiweal43} to delay given by the quantity in \eqref{eqn:ti}, illustrates the role that the spectrum of the grounded Laplacian matrix $L_g$ plays in such robustness metrics. Thus, we analyze the spectrum of the grounded Laplacian matrix in this paper, and consequently give bounds on the above robustness metrics. More specifically, our contributions are as follows. \begin{itemize} \item We extend existing bounds on the smallest eigenvalue, $\lambda_1$, and provide new bounds on the largest eigenvalue, $\lambda_{n-|\mathcal{S}|}$, of the grounded Laplacian matrix. \item Based on the graph-theoretic bounds on the extreme eigenvalues of $L_g$, we present graph-theoretic necessary and sufficient conditions for robustness of leader-follower dynamics to disturbances and time delay. \item We characterize the system $\mathcal{H}_2$ disorder, $\mathcal{H}_{\infty}$ disorder, and the robustness to delay ($\hat{\tau}_{max}$) in random graphs.
\item We look at these robustness metrics for the disturbance and time delay as different network centrality metrics and give sufficient conditions for a node in a network to be the best leader in the sense of minimizing $\mathcal{H}_2$ disorder, minimizing $\mathcal{H}_{\infty}$ disorder (or maximizing convergence rate) and maximizing $\hat{\tau}_{max}$ simultaneously. \end{itemize} \section{On the Spectrum of the Grounded Laplacian Matrix} \label{sec:grounded_laplacian} In this section, we present graph-theoretic bounds on the smallest eigenvalue and the spectral radius (largest eigenvalue) of the grounded Laplacian matrix. \subsection{Smallest Eigenvalue of $L_g$} There is a vast literature dedicated to analyzing the spectrum of the Laplacian matrix \cite{Mohar}, \cite{Anderson}, \cite{Chung}. The following theorem gives bounds on $\lambda_1(L_g)$ (the smallest eigenvalue of the grounded Laplacian matrix) based on graph theoretic properties. \begin{theorem} Consider a connected graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$ with a set of leaders $\mathcal{S} \subset \mathcal{V}$. Let $L_g$ be the grounded Laplacian matrix induced by $\mathcal{S}$, and for each $v_i \in \mathcal{F}$, let $\beta_i$ be the number of leaders in follower $v_i$'s neighborhood. 
Then \begin{multline} \max\left\{ \min_{i\in \mathcal{V}\setminus \mathcal{S}}\beta_i, \left(\frac{|\partial \mathcal{S}|}{n-|\mathcal{S}|}\right)x_{min}\right\} \leq \lambda_1({L}_g) \leq \min_{\emptyset \ne X\subseteq \mathcal{V}\setminus \mathcal{S}} \frac{|\partial X|}{|X|} \leq \frac{|\partial \mathcal{S}|}{n-|\mathcal{S}|}\leq \max_{i\in \mathcal{V}\setminus \mathcal{S}}\beta_i, \label{eqn:maineq} \end{multline} where $x_{min}$ is the smallest eigenvector component of $\mathbf{x}$, a nonnegative eigenvector corresponding to $\lambda_1({L}_g)$.\footnote{Throughout the paper, we take eigenvector $\mathbf{x}$ to be normalized such that its largest component is $x_{max}=1$.} \label{thm:main} \end{theorem} \begin{IEEEproof} The lower bound $\frac{|\partial \mathcal{S}|}{n-|\mathcal{S}|}x_{min}$ and the tightest and the second tightest upper bounds are given in Theorem 1 in \cite{PiraniSundaramArxiv}. The extreme upper bound is due to the fact that $\sum_{i=1}^{n-|\mathcal{S}|}\beta_i=|\partial \mathcal{S}|$ which gives $\frac{|\partial \mathcal{S}|}{n-|\mathcal{S}|}\leq \max_{i\in \mathcal{V}\setminus \mathcal{S}}\beta_i$. For the lower bound $\min_{i\in \mathcal{V}\setminus \mathcal{S}}\beta_i$, we left-multiply the eigenvector equation $\lambda_1\textbf{x}={L}_g\textbf{x}$ by the vector consisting of all 1's, and use the fact that $\mathbf{1}^TL_g=[\beta_1, \beta_2, ..., \beta_{n-|\mathcal{S}|}]$ to get \begin{equation*} \lambda_1({L}_g)\sum_{i=1}^{n-|\mathcal{S}|}x_i=\sum_{i=1}^{n-|\mathcal{S}|}\beta_ix_i\geq \min_{i\in \mathcal{V}\setminus \mathcal{S}}\beta_i\sum_{i=1}^{n-|\mathcal{S}|}x_i, \end{equation*} since $\mathbf{x}$ is nonnegative, which gives $\lambda_1({L}_g)\geq \min_{i\in \mathcal{V}\setminus \mathcal{S}}\beta_i$ as required.
\end{IEEEproof} The following lemma from \cite{PiraniSundaramArxiv} provides a sufficient condition under which the smallest component of the eigenvector corresponding to $\lambda_1({L}_g)$ goes to $1$ and consequently the bound \eqref{eqn:maineq} becomes tight. \begin{lemma}[\cite{PiraniSundaramArxiv}] Let $\mathbf{x}$ be a nonnegative eigenvector corresponding to the smallest eigenvalue of ${L}_g$. Then the smallest eigenvector component of $\mathbf{x}$ satisfies \begin{equation} x_{min} \ge 1 - \frac{2\sqrt{|\mathcal{S}||\partial \mathcal{S}|}}{\lambda_2({\bar{L}})}, \label{eqn:eige12} \end{equation} where $\bar{L} \in \mathbb{R}^{(n-|\mathcal{S}|)\times (n-|\mathcal{S}|)}$ is the Laplacian matrix formed by removing the leaders and their incident edges. \label{lem:mainlem} \end{lemma} Thus, in networks where the number of leaders (and their incident edges) grows slowly compared to the algebraic connectivity of the network induced by the followers, the bounds on the smallest eigenvalue in \eqref{eqn:maineq} become tight. In order to obtain another condition under which $\lambda_1({L}_g)$ is bounded away from zero, we use the following definition. \begin{definition}[\cite{Kutten}] A subset of vertices $\mathcal{X}\subset \mathcal{V}$ is an $f$-dominating set if each vertex $v_i\in \mathcal{V}\setminus \mathcal{X}$ is connected to at least $f$ vertices in $\mathcal{X}$. \end{definition} Based on the above definition and the lower bound $\min_{i\in \mathcal{V}\setminus \mathcal{S}}\{\beta_i\}$ in \eqref{eqn:maineq}, we have the following corollary. \begin{corollary} If the set of leaders is an $f$-dominating set, then $\lambda_1({L}_g)\geq f$, regardless of the connectivity of the network. \label{cor:akjdbvao} \end{corollary} The following proposition introduces a condition under which $\lambda_1({L}_g)$ remains unchanged when some edges are added or removed.
\begin{proposition} Consider a connected graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$ with a set of leaders $\mathcal{S} \subset \mathcal{V}$. Let $L_g$ be the grounded Laplacian matrix induced by $\mathcal{S}$. If each follower is connected to exactly $\beta$ leaders, then regardless of the interconnection topology inside $\mathcal{F}$ or $\mathcal{S}$, we have $\lambda_1(L_g)=\beta$. Moreover, in this case $\lambda_1(L_g)$ strictly decreases when any edge $(v_i,v_j)\in \mathcal{E}$ is removed, where $v_i\in \mathcal{F}$ and $v_j\in \mathcal{S}$. \label{prop:robus} \end{proposition} \begin{IEEEproof} The proof of the first part is clear due to \eqref{eqn:maineq} since $\min_{i\in \mathcal{F}}\{\beta_i\}=\max_{i\in \mathcal{F}}\{\beta_i\}=\beta$. Furthermore, in this case if we remove an edge between $\mathcal{F}$ and $\mathcal{S}$, then based on \eqref{eqn:maineq} we have $\lambda_1(L_g) \leq \frac{|\partial \mathcal{S}|}{n-|\mathcal{S}|}=\frac{(n-|\mathcal{S}|-1)\beta+\beta-1}{n-|\mathcal{S}|}<\beta$, which proves the claim. \end{IEEEproof} Proposition \ref{prop:robus} is important in two respects. First, it gives freedom in designing connections between the follower agents. Second, it introduces a notion of robustness of the network under edge failures within the set of follower (or leader) agents. \subsection{Spectral Radius of $L_g$} Bounds on the spectral radius of the Laplacian matrix are discussed in \cite{Anderson, Shiiii}. Here we discuss graph-theoretic bounds on the spectral radius, $\lambda_{n-|\mathcal{S}|}(L_g)$, of the grounded Laplacian matrix. We start with the definition of the {\it incidence matrix}. \begin{definition} Given a connected graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$, an orientation of the graph $\mathcal{G}$ is defined by assigning a direction (arbitrarily) to each edge in $\mathcal{E}$.
For graph $\mathcal{G}$ with $m$ edges, numbered as $e_1, e_2, ..., e_m$, its node-edge incidence matrix $\mathcal{B}(\mathcal{G})\in \mathbb{R}^{n\times m}$ is defined as $$[\mathcal{B}(\mathcal{G})]_{kl}= \begin{cases} 1 & \quad \text{if node $k$ is the head of edge $l$},\\ -1 & \quad \text{if node $k$ is the tail of edge $l$},\\ 0 & \quad \text{otherwise}.\\ \end{cases} $$ The graph Laplacian satisfies $L=\mathcal{B}(\mathcal{G})\mathcal{B}(\mathcal{G})^T$. \end{definition} Partitioning the rows of the incidence matrix into the sets of followers and leaders yields \begin{equation} \mathcal{B}(\mathcal{G})=\begin{bmatrix} \mathcal{B}_{\mathcal{F}} \\[0.3em] \mathcal{B}_{\mathcal{S}} \end{bmatrix}, \end{equation} where $\mathcal{B}_{\mathcal{F}}\in \mathbb{R}^{(n-|\mathcal{S}|)\times m}$ and $\mathcal{B}_{\mathcal{S}}\in \mathbb{R}^{|\mathcal{S}|\times m}$. As a result, $L_g=\mathcal{B}_{\mathcal{F}} \mathcal{B}_{\mathcal{F}} ^T$. Defining the matrix $N=\mathcal{B}_{\mathcal{F}}^T \mathcal{B}_{\mathcal{F}}$, we have the following lemma. \begin{lemma} If $\lambda_k$ is an eigenvalue of $L_g$, it is also an eigenvalue of $N$. \label{lem:fer} \end{lemma} \begin{IEEEproof} If we have $L_g\mathbf{x}_k=\lambda_k\mathbf{x}_k$ for any eigenvalue $\lambda_k(L_g)$ and corresponding eigenvector $\mathbf{x}_k$, then $N\mathcal{B}_{\mathcal{F}}^T\mathbf{x}_k=\mathcal{B}_{\mathcal{F}}^TL_g\mathbf{x}_k=\lambda_k\mathcal{B}_{\mathcal{F}}^T\mathbf{x}_k$. If $\mathcal{B}_{\mathcal{F}}^T\mathbf{x}_k=\mathbf{0}$ then $L_g\mathbf{x}_k=\mathbf{0}$ which is impossible since $L_g$ is positive definite by Remark \ref{rem:deffin}. Thus $\lambda_k$ is also an eigenvalue of $N$ with eigenvector $\mathcal{B}_{\mathcal{F}}^T\mathbf{x}_k$. \end{IEEEproof} This leads to the following bounds on $\lambda_{n-|\mathcal{S}|}(L_g)$. \begin{theorem} Consider a connected graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$ with a set of leaders $\mathcal{S} \subset \mathcal{V}$. 
Let $L_g$ be the grounded Laplacian matrix induced by $\mathcal{S}$. The spectral radius of $L_g$ satisfies \begin{align} d_{max}^{\mathcal{F}} \leq \lambda_{n-|\mathcal{S}|}(L_g) \leq \max\left\{d_{max}^{\mathcal{F}}, \max_{\substack{(u,v)\in \mathcal{E}\\{u,v\in \mathcal{F}}}}\{d_u+d_v\} \right\}, \label{eqn:up} \end{align} where $d_{max}^{\mathcal{F}}$ is the maximum degree over the follower agents. \label{thm:bounddmax} \end{theorem} \begin{IEEEproof} For the lower bound, the Rayleigh quotient inequality \cite{Horn} indicates $$ \lambda_{n-|\mathcal{S}|}(L_g) \geq z^T L_g z , $$ for all $z\in \mathbb{R}^{n-|\mathcal{S}|}$ with $z^Tz=1$. By choosing $z=e_i$, where $e_i$ is a vector of zeros except for a single 1 at an element corresponding to a vertex with maximum degree, the lower bound is obtained. In order to show the upper bound, we use the property of matrix $N$ mentioned in Lemma \ref{lem:fer}. Thus we show the same upper bound for the spectral radius of $N$, $\lambda_{max}(N)$. We have \begin{equation} \lambda_{max}(N)\leq \lambda_{max}(|N|) \leq \max_{i=1, \dots, m}[|N|]_i, \label{eqn:pf} \end{equation} where $[|N|]_i$ is the row sum of the $i$-th row of $|N|$. The first inequality in \eqref{eqn:pf} is due to the properties of nonnegative matrices (Theorem 8.1.18 in \cite{Horn}) and the second inequality is due to the Perron-Frobenius theorem. We know that the $i$-th row of $N$ belongs to edge $e_i$. This edge is either connecting two follower agents or a follower with a leader. Moreover, when $e$ is an edge in $\mathcal{G}$ connecting $u,v\in \mathcal{F}$, the row sum of $|N|$ for the row corresponding to $e$ is $d_u+d_v$ \cite{Anderson}. Furthermore, when $e$ is an edge in $\mathcal{G}$ which connects $u\in \mathcal{F}$ to $v\in \mathcal{S}$, then the row sum is $d_u$. This gives the upper bound in \eqref{eqn:up}.
\end{IEEEproof} \begin{remark} Unlike the case where there is no leader in the network (i.e., traditional consensus dynamics) where we always have $d_{max}< \max_{(u,v)\in \mathcal{E}}\{d_u+d_v\}$, for the upper bound in \eqref{eqn:up}, it is possible to have \begin{equation} d_{max}^{\mathcal{F}} > \max_{\substack{(u,v)\in \mathcal{E}\\{u,v\in \mathcal{F}}}}\{d_u+d_v\}, \label{eqn:wrnh} \end{equation} and \eqref{eqn:up} becomes tight, i.e., $\lambda_{n-|\mathcal{S}|}(L_g)=d_{max}^{\mathcal{F}}$. This corresponds to the case where the vertex with maximum degree among the followers is not connected to any follower (its neighbors are all leaders) and the degrees of the rest of the followers are small enough such that \eqref{eqn:wrnh} is satisfied. \label{rem:aetng} \end{remark} Based on Theorem \ref{thm:bounddmax} and Remark \ref{rem:aetng}, we introduce a class of graphs in which the bound \eqref{eqn:up} is tight, i.e., $\lambda_{n-|\mathcal{S}|}(L_g)=d_{max}^{\mathcal{F}}$. \begin{definition}[\cite{Godsil}] An independent vertex set of a graph $\mathcal{G}$ is a subset of the vertices such that no two vertices in the subset are connected to each other via an edge. \end{definition} \begin{corollary} If the set of followers $\mathcal{F}$ is an independent set, then $\lambda_{n-|S|}(L_g)=d_{max}^{\mathcal{F}}$. \label{cor:remm} \end{corollary} \begin{IEEEproof} We can prove this statement in two ways. The first is based on the proof of Theorem \ref{thm:bounddmax}: since there is no row in $N$ which belongs to an edge connecting two followers, we have $\lambda_{n-|S|}(L_g)=d_{max}^{\mathcal{F}}$. The second proof is to note that in the case where $\mathcal{F}$ is an independent set, the grounded Laplacian matrix will be a diagonal matrix and $\lambda_{n-|S|}(L_g)$ will be the largest diagonal element, namely $d_{max}^{\mathcal{F}}$. 
\end{IEEEproof} A simple example that satisfies the condition in Corollary \ref{cor:remm} is a bipartite graph in which one partition consists of the leaders and the other set contains the followers. \section{Application of the Spectrum of $L_g$ to Network Robustness to Disturbances and Time Delay} \label{sec:application} In the previous section, we analyzed the spectral properties of the grounded Laplacian matrix $L_g$. In this section, we use those results to give bounds on the network robustness metrics we identified earlier. \subsection{Robustness to Disturbances} By Proposition \ref{prop:hinf}, we know that the system $\mathcal{H}_{\infty}$ norm of the error dynamics of \eqref{eqn:padgrtial43} is equal to $\frac{1}{\lambda_1(L_g)}$. Hence, based on Theorem \ref{thm:main}, we have the following bounds for the $\mathcal{H}_{\infty}$ disorder: \begin{equation} \frac{1}{\max_{i\in \mathcal{F}}\{\beta_i\}} \leq \frac{n-|\mathcal{S}|}{|\partial \mathcal{S}|} \leq \frac{1}{\lambda_1(L_g)} \leq \frac{1}{\min_{i\in \mathcal{F}}\{\beta_i\}}. \label{eqn:hinfff} \end{equation} Note that the upper bound is taken to be $\infty$ if $\min_{i\in \mathcal{F}}\{\beta_i\}=0$. Based on \eqref{eqn:hinfff}, for a leader-follower multi-agent system with leader set $\mathcal{S}$ and follower set $\mathcal{F}$, a necessary condition to have $||G||_{\infty}\leq \gamma$ is to have $\frac{1}{\max_{i\in \mathcal{F}}\{\beta_i\}} \leq \gamma$ and a sufficient condition is to have $\frac{1}{\min_{i\in \mathcal{F}}\{\beta_i\}}\leq \gamma$. Based on Corollary \ref{cor:akjdbvao}, a sufficient condition for $||G||_{\infty}\leq \gamma$ is that the leader set is a $\lceil \frac{1}{\gamma} \rceil$-dominating set. \subsection{Robustness to Time Delay} Based on the bounds discussed in Theorem \ref{thm:bounddmax} and \eqref{eqn:ti}, a necessary condition for asymptotic stability of \eqref{eqn:partiweal43} is \begin{align} \tau_{max} < \frac{\pi}{2 d_{max}^{\mathcal{F}}}.
\label{eqn:de} \end{align} Moreover, a sufficient condition for asymptotic stability of \eqref{eqn:partiweal43} is \begin{equation} \tau_{max} < \frac{\pi}{2 \mathcal{M}}, \label{eqn:dwede} \end{equation} where $\mathcal{M}=\max\left\{d_{max}^{\mathcal{F}}, \max_{\substack{(u,v)\in \mathcal{E}\\{u,v\in \mathcal{F}}}} d_u+d_v \right\}$. \begin{remark} Since $ d_{max}^{\mathcal{F}} \geq 1$ for connected graphs, condition \eqref{eqn:de} implies that the delay must necessarily be strictly less than $\frac{\pi}{2}$ for \eqref{eqn:partiweal43} to be stable. \end{remark} \begin{remark} Based on Corollary \ref{cor:remm}, if the set of followers is an independent set, then necessary and sufficient conditions \eqref{eqn:de} and \eqref{eqn:dwede} coincide, i.e., $\hat{\tau}_{max} = \frac{\pi}{2 d_{max}^{\mathcal{F}}}$. \end{remark} \subsection{Trade-off Between Minimizing $\mathcal{H}_{\infty}$-Norm and Maximizing $\hat{\tau}_{max}$ } There is a trade-off between maximizing $\lambda_1(L_g)$ (minimizing $\mathcal{H}_{\infty}$ disorder) and minimizing $\lambda_{n-|\mathcal{S}|}(L_g)$ (maximizing delay threshold). The same situation is discussed for the algebraic connectivity ($\lambda_2(L)$) and the spectral radius of the Laplacian matrix in \cite{Olfati2}. In this subsection, we address this problem. More formally, we introduce the following network design problem. Suppose we are given a set of agents consisting of at least $\beta$ leaders. The objective is to design a network to solve the optimization problem, \begin{equation} \begin{aligned} & \underset{X}{\text{minimize}} & & J(X)=\lambda_{n-|\mathcal{S}|}(L_g) \\ & \text{subject to} & & \lambda_1(L_g) \geq \beta, \\ &&& X_i\in \{0,1\}, \\ \end{aligned} \label{eqn:ebf} \end{equation} where $X_{\binom{n}{2}\times 1}$ is an indicator vector in which each edge $e_i$ in the graph is assigned to an element in $X$, i.e., $X_i=1$ if $e_i$ exists and $X_i=0$ otherwise.
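The tension captured by \eqref{eqn:ebf} can be checked numerically using Proposition \ref{prop:robus} and Corollary \ref{cor:remm}: with each follower tied to exactly $\beta$ leaders and no follower-follower edges, $L_g=\beta I$; adding an edge inside $\mathcal{F}$ leaves $\lambda_1(L_g)$ fixed but raises $\lambda_{n-|\mathcal{S}|}(L_g)$, shrinking the delay threshold without improving the $\mathcal{H}_{\infty}$ disorder. The sizes below ($\beta=2$, four followers) are illustrative assumptions.

```python
import numpy as np

# Independent follower set, each follower connected to exactly beta leaders:
# the grounded Laplacian is beta * I (Proposition prop:robus, Cor. cor:remm).
beta, nf = 2, 4
Lg0 = beta * np.eye(nf)

# Now add one follower-follower edge (v0, v1): Lg becomes beta*I + L_FF,
# where L_FF is the (positive semidefinite) Laplacian of that single edge.
L_ff = np.zeros((nf, nf))
L_ff[0, 0] = L_ff[1, 1] = 1.0
L_ff[0, 1] = L_ff[1, 0] = -1.0
Lg1 = Lg0 + L_ff

e0, e1 = np.linalg.eigvalsh(Lg0), np.linalg.eigvalsh(Lg1)
assert np.isclose(e1[0], beta)   # lambda_1 unchanged: H-infinity disorder same
assert e1[-1] > e0[-1]           # lambda_max grows: delay threshold shrinks
```

This illustrates why keeping $\mathcal{F}$ an independent set is the favorable choice for the design problem: it attains the constraint $\lambda_1(L_g)\geq\beta$ while keeping $\lambda_{n-|\mathcal{S}|}(L_g)$ at its minimum value $\beta$.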
Despite the Boolean constraint $X_i\in \{0,1\}$, Proposition \ref{prop:robus} provides an efficient way to solve this problem, namely connecting each follower to exactly $\beta$ leaders.\footnote{ Note that based on Theorem \ref{thm:main}, $\lambda_1(L_g)$ is upper bounded by the total number of leaders, and thus at least $\beta$ leaders are needed to make the problem feasible.} Based on that proposition, in this case the value of $\lambda_1(L_g)$ will be independent of the interconnections between the followers. If we make the set of follower agents an independent set, from Corollary \ref{cor:remm}, we have $\lambda_{n-|{\mathcal{S}}|}(L_g)=d_{max}^{\mathcal{F}}=\beta$. In other words, this design makes $L_g$ a diagonal matrix whose diagonal elements are $\beta$, and the optimal solution is attained. \section{Robustness in Random Graphs} \label{sec:random} In this section we discuss $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ disorders and the delay threshold $\hat{\tau}_{max}$ when the underlying network structure is a random graph. We analyze two well known random graphs, namely Erdos-Renyi (ER) random graphs and random regular graphs (RRG). \subsection{Erdos-Renyi Random Graphs} \begin{definition} An Erdos-Renyi (ER) random graph $\mathcal{G}(n,p)$ is a graph on $n$ nodes, where each edge between two distinct nodes is present independently with probability $p$ (which could be a function of $n$). We say that a graph property holds {\it asymptotically almost surely} if the probability of drawing a graph with that property goes to $1$ as $n \rightarrow \infty$. Let $\Omega_n$ be the set of all undirected graphs on $n$ nodes. 
For a given graph function $f: \Omega_n \rightarrow \mathbb{R}_{\ge 0}$ and another function $g: \mathbb{N} \rightarrow \mathbb{R}_{\ge 0}$, we say $f(\mathcal{G}(n,p)) \le (1+o(1))g(n)$ asymptotically almost surely if there exists some function $h(n) \in o(1)$ such that $f(\mathcal{G}(n,p)) \le (1+h(n))g(n)$ with probability tending to $1$ as $n \rightarrow \infty$. Lower bounds of the above form have an essentially identical definition. \end{definition} \subsubsection{Network Disorder in ER Random Graphs} Before discussing network disorder in ER random graphs we recall the following theorem for the smallest eigenvalue of the grounded Laplacian in such graphs. \begin{theorem}[\cite{PiraniSundaramArxiv}] Consider the Erdos-Renyi random graph $\mathcal{G}(n,p)$, where the edge probability $p$ satisfies $p(n) \ge \frac{c\ln{n}}{n}$, for constant $c>1$. Let $\mathcal{S}$ be a set of grounded nodes chosen uniformly at random with $|\mathcal{S}| = o(\sqrt{np})$. Then the smallest eigenvalue $\lambda_1(L_g)$ of the grounded Laplacian satisfies $(1-o(1))|\mathcal{S}|p \le \lambda_1(L_g) \le (1+o(1))|\mathcal{S}|p$ asymptotically almost surely. \label{thm:erdos1} \end{theorem} The above theorem covers a broad range of edge-probability functions, and includes constant $p$ as a special case. Based on the above theorem, we obtain the following result for $\mathcal{H}_{\infty}$ disorder in random graphs. \begin{theorem} Consider a random graph $\mathcal{G}(n,p)$ with $p(n) \ge \frac{c\ln{n}}{n}$, for constant $c>1$. Let $\mathcal{S} \subset \mathcal{V}$ be a set of grounded nodes chosen uniformly at random with $|\mathcal{S}| = o(\sqrt{np})$. Then for $\mathcal{H}_{\infty}$ disorder we have \begin{equation} (1 - o(1))\frac{1}{|\mathcal{S}|p} \le \frac{1}{\lambda_1(L_g)} \le(1 + o(1))\frac{1}{|\mathcal{S}|p}, \label{eqn:per4} \end{equation} asymptotically almost surely. 
\label{thm:cohh} \end{theorem} Moreover, we have the following result for $\mathcal{H}_{2}$ disorder in random graphs with constant edge probability $p$. \begin{theorem} Consider a random graph $\mathcal{G}(n,p)$ with constant $p$. Let $\mathcal{S} \subset \mathcal{V}$ be a set of grounded nodes chosen uniformly at random with $|\mathcal{S}|=o(\sqrt{n})$. Then for $\mathcal{H}_{2}$ disorder we have \begin{equation} (1 - o(1))\frac{|\mathcal{S}|+1}{2|\mathcal{S}|p} \le \frac{1}{2}\trace(L_g^{-1}) \le(1 + o(1))\frac{|\mathcal{S}|+1}{2|\mathcal{S}|p}, \label{eqn:per44} \end{equation} asymptotically almost surely. \label{thm:coh} \end{theorem} \begin{IEEEproof} For each node $v_i \in \mathcal{V}\setminus\mathcal{S}$, let $\beta_i$ denote the number of grounded nodes that are in the neighborhood of $v_i$. We can then write the grounded Laplacian matrix $L_g$ as $L_g=\bar{L}+E$, where $E = \diag(\beta_1, \beta_2, \ldots, \beta_{n-|\mathcal{S}|})$ and $\bar{L}$ is the Laplacian matrix for the graph induced by the nodes $\mathcal{V}\setminus\mathcal{S}$. Using Weyl's inequality for $i=1,2,..., n-|\mathcal{S}|$, we have \begin{equation} \lambda_i(\bar{L})\leq \lambda_i(L_g)\leq \lambda_i(\bar{L})+|\mathcal{S}|. \label{eqn:weylll} \end{equation} Thus we have, \begin{multline} \frac{1}{2}\left(\sum_{i=2}^{n-|\mathcal{S}|}\left(\frac{1}{\lambda_i(\bar{L})+|\mathcal{S}|}\right)+\frac{1}{\lambda_1(L_g)}\right) \leq \frac{1}{2}\trace(L_g^{-1}) \leq \frac{1}{2}\left(\sum_{i=2}^{n-|\mathcal{S}|}\frac{1}{\lambda_i(\bar{L})}+\frac{1}{\lambda_1(L_g)}\right). \label{eqn:per3} \end{multline} Noting that $\bar{L}$ is the Laplacian matrix for an Erdos-Renyi random graph on $n-|\mathcal{S}|$ nodes with constant $p$, for $n-|\mathcal{S}|= \Omega(n)$ we have $(1- o(1))(n-|\mathcal{S}|)p \le \lambda_2(\bar{L})\le(1+o(1))(n-|\mathcal{S}|)p$ and $(1 - o(1))(n-|\mathcal{S}|)p \le \lambda_{n-|\mathcal{S}|}(\bar{L}) \le (1+ o(1))(n-|\mathcal{S}|)p$ asymptotically almost surely \cite{Mesbahi}. 
Thus, by \eqref{eqn:per3}, Theorem~\ref{thm:cohh}, and the fact that $|\mathcal{S}|=o(\sqrt{n})$, the result is obtained. \end{IEEEproof} \begin{remark} With a single leader and constant $p$, both $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ disorders are within $(1\pm o(1))\frac{1}{p}$ asymptotically almost surely (from \eqref{eqn:per4} and \eqref{eqn:per44}). \end{remark} Theorems \ref{thm:erdos1} and \ref{thm:cohh} apply to any edge probability $p$ satisfying $p(n) \ge \frac{c\ln{n}}{n}$, for constant $c>1$. For any $p$ in this range, the second smallest eigenvalue of the graph Laplacian satisfies $\lambda_2({L})=\Theta(np)$ asymptotically almost surely \cite{PiraniSundaramArxiv}. Thus, for this more general class of edge probabilities, we have the following looser bound on $\mathcal{H}_2$ disorder. \begin{corollary} Consider a random graph $\mathcal{G}(n,p)$ where the edge probability satisfies $p\ge\frac{c\ln n}{n}$ for any $c>1$ and a set of grounded nodes $\mathcal{S} \subset \mathcal{V}$ chosen uniformly at random such that $|\mathcal{S}|=o(\sqrt{np})$. Then for $\mathcal{H}_{2}$ disorder we have \begin{equation} \frac{1}{2}\trace(L_g^{-1})=\Theta(\frac{1}{p}), \label{eqn:per5} \end{equation} asymptotically almost surely. \end{corollary} \begin{IEEEproof} By Theorem \ref{thm:erdos1} we have $(1-o(1))|\mathcal{S}|p \le \lambda_1(L_g) \le (1+o(1))|\mathcal{S}|p$ for the regime of $p$ mentioned in the corollary. According to the Cauchy interlacing theorem and \cite{PiraniSundaramArxiv} we have $\beta np \geq 2d_{max}\geq \lambda_n(L) \geq \lambda_i(L_g)\geq \lambda_i(L)\geq \lambda_2(L) \geq \alpha np$ asymptotically almost surely for some $\alpha, \beta >0$ and $i= 2, ..., n-|\mathcal{S}|$. Summing the inverses of these eigenvalues to obtain $\trace(L_g^{-1})$ gives the result. \end{IEEEproof} \subsubsection{Delay Threshold in ER Random Graphs} The following result discusses the value of $\hat{\tau}_{max}$ in ER random graphs. 
\begin{theorem} Consider a random graph $\mathcal{G}(n,p)$ and let $\mathcal{S} \subset \mathcal{V}$ be a set of grounded nodes chosen uniformly at random with $|\mathcal{S}|=o({np})$. Then for constant $p$ we have \begin{equation} (1 - o(1))\frac{\pi}{2np}\leq \hat{\tau}_{max} \leq (1 + o(1))\frac{\pi}{2np}, \end{equation} asymptotically almost surely. Moreover, for $p \geq \frac{c\ln n}{n}$ and $c>1$ we have \begin{equation} \hat{\tau}_{max} = \Theta(\frac{1}{np}), \end{equation} asymptotically almost surely. \label{thm:jsbvh} \end{theorem} \begin{IEEEproof} Similar to the proof of Theorem \ref{thm:coh}, we write the grounded Laplacian matrix $L_g$ as $L_g=\bar{L}+E$. Using Weyl's inequality \eqref{eqn:weylll} for constant $p$ we have $(1 - o(1))(n-|\mathcal{S}|)p \le \lambda_{n-|\mathcal{S}|}(\bar{L}) \le (1+ o(1))(n-|\mathcal{S}|)p$ asymptotically almost surely \cite{Mesbahi}. By considering $|\mathcal{S}|=o({np})$ we have \begin{equation} (1 - o(1))np \le \lambda_{n-|\mathcal{S}|}({L}_g) \le (1+ o(1))np, \label{eqn:ght} \end{equation} asymptotically almost surely, which yields the result. For $p \geq \frac{c\ln n}{n}$ according to Theorem \ref{thm:bounddmax} and considering the fact that in this regime of $p$ we have $d_i=\Theta(np)$ for all $v_i\in \mathcal{V}$ \cite{PiraniSundaramArxiv}, we have $\lambda_{n-|\mathcal{S}|}(\bar{L})=\Theta(np)$ asymptotically almost surely. Moreover, based on Weyl's inequality \eqref{eqn:weylll} and according to the fact that $|\mathcal{S}|=o({np})$ we have $\lambda_{n-|{\mathcal{S}}|}(L_g)=\Theta(np)$, which yields the result. \end{IEEEproof} \subsection{Random Regular Graphs} \begin{definition} Let $\Omega_{n,d}$ be the set of all undirected graphs on $n$ nodes where every node has degree $d$ (note that this assumes that $nd$ is even). A {\it random $d$-regular graph} (RRG), denoted $\mathcal{G}_{n,d}$ is a graph drawn uniformly at random from $\Omega_{n,d}$. 
\end{definition} \subsubsection{Network disorder in RRG} We have the following result for the disorder in random regular graphs. \begin{theorem} Let $\mathcal{G}_{n,d}$ be a random $d$-regular graph on $n$ nodes, with a set of leaders $\mathcal{S}$. Then for sufficiently large (constant) $d$, both $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ disorders are $O(n)$ asymptotically almost surely. \label{thm:ry} \end{theorem} \begin{IEEEproof} Let $v_i\in \mathcal{S}$ be any arbitrary vertex in the leader set. It was shown in \cite{PiraniSundaramArxiv} that $\lambda_1(L_{gi})=\Theta(\frac{1}{n})$ asymptotically almost surely for a random $d$-regular graph with sufficiently large $d$, where $L_{gi}$ is the grounded Laplacian induced by node $v_i$. Based on the interlacing theorem we have $\lambda_1(L_{g}(\mathcal{S})) \geq \lambda_1(L_{gi})$. This implies that the $\mathcal{H}_{\infty}$ disorder (given by \eqref{eqn:hinfti}) in this case is $O(n)$. Moreover, for $j=2,3,...,n-1$, we have \begin{equation} \lambda_j(L_{g}(\mathcal{S})) \geq \lambda_j(L_{gi})\geq \lambda_2(L_{gi}) \ge \lambda_2(L)\geq \alpha d, \label{eqn:bunchofinq} \end{equation} asymptotically almost surely for some $\alpha>0$ and sufficiently large $d$. The first (left) and the second last inequalities in \eqref{eqn:bunchofinq} are due to the interlacing theorem, and the last inequality is a direct consequence of the result in \cite{Friedman}. Thus for $\mathcal{H}_2$ disorder we have $\frac{1}{2}\trace(L_{gi}^{-1}) = \frac{1}{2}\sum_{j = 1}^{n-1}\frac{1}{\lambda_j(L_{gi})} = \frac{1}{2\lambda_1(L_{gi})} + \frac{1}{2}\sum_{j = 2}^{n-1}\frac{1}{\lambda_j(L_{gi})} = \Theta(n) + O(\frac{n}{d}) = \Theta(n)$. Hence, from the first inequality in \eqref{eqn:bunchofinq} we have $\frac{1}{2}\sum_{j = 1}^{n-1}\frac{1}{\lambda_j(L_{g}(\mathcal{S}))}=O(n)$, which gives the result. 
\end{IEEEproof} Based on Theorem~\ref{thm:ry}, for any combination of leaders in a random regular graph, both $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ disorders will grow at most linearly with the network size.\footnote{There are some graphs in which the network disorder grows faster than the network size, e.g., $d$-dimensional grids for $d=1,2$ in which the network disorder is $O(n^2)$ and $O(n\log (n))$, respectively \cite{Bamieh2}.} \subsubsection{Delay Threshold in RRG} The following result discusses the value of $\hat{\tau}_{max}$ in random regular graphs. It follows immediately from Theorem \ref{thm:bounddmax} and \eqref{eqn:ti} and thus we skip the proof. \begin{theorem} Let $\mathcal{G}_{n,d}$ be a random $d$-regular graph on $n$ nodes, with a set of leaders $\mathcal{S}$. Then we have $\frac{\pi}{4d}\leq \hat{\tau}_{max}\leq \frac{\pi}{2d}$. \end{theorem} The values of system $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ disorders and $\hat{\tau}_{max}$ in ER random graphs and RRGs are summarized in Table \ref{tab:nscvh}. \begin{table}[h] \begin{tabular}{c c c c c} \textbf{ER} \\ \hline $p$ & $\mathcal{H}_2$ & $\mathcal{H}_{\infty}$ & $\hat{\tau}_{max}$\\ \hline Constant & $(1\pm o(1))\frac{|\mathcal{S}|+1}{2|\mathcal{S}|p}$ & $(1\pm o(1))\frac{1}{|\mathcal{S}|p}$ & $(1\pm o(1))\frac{\pi}{2np}$\\ $\frac{c\ln n}{n},c>1$ & $\Theta(\frac{1}{p})$ & $(1\pm o(1))\frac{1}{|\mathcal{S}|p}$ & $\Theta(\frac{1}{np})$\\ \hline\\ \\ \textbf{RRG} \\ \hline $d$ & $\mathcal{H}_2$ & $\mathcal{H}_{\infty}$ & $\hat{\tau}_{max}$\\ \hline Constant & $O(n)$ & $O(n)$ & $\in [\frac{\pi}{4d},\frac{\pi}{2d}]$\\ \end{tabular} \centering \caption {Disorder and Delay Threshold in random graphs. 
For network disorders in ER graphs, it is assumed that $|\mathcal{S}|=o(\sqrt{np})$.} \label{tab:nscvh} \end{table} \section{System Robustness as Network Centrality Metrics} \label{sec:robustleader} In this section, we look at the system robustness to disturbances and time delay via network centrality metrics. In particular, we seek to choose a leader in order to maximize robustness to disturbances \cite{Bamieh}, \cite{Bamieh2}, \cite{Leonard}, \cite{piranicdc} and time delay. As argued in Section~\ref{sec:robust_disturbances}, minimizing the $\mathcal{H}_{\infty}$ disorder is equivalent to maximizing convergence rate, and thus we will make connections to existing work that has looked at this latter metric \cite{Clark1, Ghaderi13,piranicdc}. \subsection{Optimal Leaders for Each Objective} In this subsection, we provide conditions for a single leader in a network to optimize each robustness metric separately. \subsubsection{Minimizing $\mathcal{H}_2$ disorder} As shown in \cite{Leonard}, the optimal single leader for minimizing $\mathcal{H}_2$ disorder is the node with maximal {\it information centrality} defined as $IC(\mathcal{G})=\max_{i\in \mathcal{V}}[\frac{1}{n}\sum_j\gamma_{ij}]^{-1}$, where $\gamma_{ij}$ is the sum of the lengths of {\it all} paths between nodes $v_i$ and $v_j$ in the network. For the case of trees, the information central vertex and the closeness central vertex (a vertex whose summation of distances to the rest of the vertices is minimum) in the network are the same. The result is extended to the case of multiple leaders in \cite{Fitch2}. \subsubsection{Minimizing $\mathcal{H}_{\infty}$ disorder} In order to discuss optimal leaders for minimizing the $\mathcal{H}_{\infty}$ disorder (or maximizing the convergence rate), we define a {\it grounding centrality} for each node $v_i$, which is equal to the smallest eigenvalue of the grounded Laplacian induced by that node. 
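The grounding centrality just defined is straightforward to compute for small graphs. A minimal sketch follows (Python with numpy; the star graph is an illustrative choice of ours, not an example from the paper), grounding each node in turn and reporting the grounding central vertex:

```python
import numpy as np

def grounding_centrality(A, i):
    """Smallest eigenvalue of the grounded Laplacian induced by node i."""
    L = np.diag(A.sum(axis=1)) - A
    keep = [j for j in range(len(A)) if j != i]
    return np.linalg.eigvalsh(L[np.ix_(keep, keep)])[0]

# star graph S_5: node 0 is the center, nodes 1..5 are leaves
n = 6
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1.0
gc = [grounding_centrality(A, i) for i in range(n)]
best = int(np.argmax(gc))   # the grounding central vertex
print(best)                 # node 0, the center
```

For the star, grounding the center isolates the leaves and gives $\lambda_1(L_{gi})=1$, while grounding a leaf keeps the high-degree center in place and $\lambda_1$ drops to $3-2\sqrt{2}\approx 0.17$, so the center is the grounding central vertex.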
Thus, the single best leader in terms of minimizing $\mathcal{H}_{\infty}$ disorder (maximizing convergence rate) is the one with largest grounding centrality \cite{ACC}. In the following example, we show that the grounding central vertex and the information central vertex can be far from each other in a graph and consequently the best leader to minimize $\mathcal{H}_{2}$ disorder can be different from the one that minimizes $\mathcal{H}_{\infty}$ disorder. \begin{example} A {\it broom tree}, $B_{n,\Delta}$, is a star $S_{\Delta}$ with $\Delta$ leaf vertices and a path of length $n-\Delta-1$ attached to the center of the star, as illustrated in Fig.~\ref{fig:broom} \cite{Stevanovic10}. \begin{figure} \caption{Broom tree with $\Delta=4$, $n=9$.} \label{fig:broom} \end{figure} Consider the broom tree $B_{2\Delta+1,\Delta}$. By numbering the vertices as shown in Fig.~\ref{fig:broom}, for $\Delta=500$, we find (numerically) that the grounding central vertex is vertex 614. The information central vertex is located at the middle of the star (vertex 501). The deviation of the grounding central vertex from the information central vertex increases as $\Delta$ increases.\footnote{In this example, some other well known centrality metrics, e.g. degree centrality, betweenness centrality, closeness centrality and eigenvector centrality, are all optimized at the center of the star. } \label{exm:1} \end{example} \subsubsection{Maximizing Delay Threshold $\hat{\tau}_{max}$} In the following lemma, we provide a sufficient condition for a leader in a network to maximize the delay threshold $\hat{\tau}_{max}$. \begin{lemma} Consider a connected graph $\mathcal{G} = \{\mathcal{V},\mathcal{E}\}$. Node $v_k \in \mathcal{V}$ is the optimal leader for maximizing $\hat{\tau}_{\max}$ if $d_k\geq 2d_i$ for all $v_i\in \mathcal{V}\setminus \{v_k\}$. 
\label{lem:aergtn} \end{lemma} \begin{IEEEproof} Based on \eqref{eqn:ti}, we need to show that \begin{equation} \lambda_{n-|\mathcal{S}|}(L_{gk})\leq \lambda_{n-|\mathcal{S}|}(L_{gi}), \label{eqn:sgb} \end{equation} for all $v_i\in \mathcal{V}\setminus \{v_k\}$. Here $L_{gi}$ and $L_{gk}$ are the grounded Laplacian matrices induced by nodes $v_i$ and $v_k$, respectively. Based on Theorem \ref{thm:bounddmax}, a sufficient condition for \eqref{eqn:sgb} is \begin{equation} \max_{{u\in \mathcal{V}\setminus\{v_i\}}}d_u \geq \max \left\{\max_{{u\in \mathcal{V}\setminus\{v_k\}}}d_u, \max_{\substack{(u,v)\in \mathcal{E}\\{u,v\in \mathcal{V}\setminus\{v_k\}}}}\{d_u+d_v\}\right\}, \label{eqn:kohig} \end{equation} for all $v_i\in \mathcal{V}$. We know that $$2 \max_{u\in \mathcal{V}\setminus\{v_k\}}d_u \geq \max_{\substack{(u,v)\in \mathcal{E}\\{u,v\in \mathcal{V}\setminus\{v_k\}}}}\{d_u+d_v\}.$$ Thus, a sufficient condition for \eqref{eqn:kohig} is \begin{equation} \max_{{u\in \mathcal{V}\setminus\{v_i\}}}d_u \geq 2 \max_{u\in \mathcal{V}\setminus\{v_k\}}d_u, \end{equation} for all $v_i\in \mathcal{V}\setminus \{v_k\}$, which is equivalent to $d_k\geq 2d_i$ for all $v_i\in \mathcal{V}\setminus \{v_k\}$. \end{IEEEproof} Based on Lemma \ref{lem:aergtn}, the leader which maximizes $\hat{\tau}_{max}$ in Example \ref{exm:1} is the center of the star, which is the same leader that minimizes $\mathcal{H}_2$ disorder. However, these two robustness metrics ($\mathcal{H}_2$ disorder and $\hat{\tau}_{max}$) do not always share the same optimal leader. For example, in the graph shown in Example \ref{exm:1}, if we fix the degree of the star and increase the length of the tail, the information central vertex will no longer remain at the center of the star, while the center of the star still maximizes $\hat{\tau}_{max}$, provided its degree is at least 4. 
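Lemma \ref{lem:aergtn} can also be illustrated numerically. In the hypothetical graph below (Python with numpy; our own toy example, not one from the paper), node $0$ is the center of a star with a short tail: it has degree $7$ while every other node has degree at most $2$, so $d_k\geq 2d_i$ holds, and grounding node $0$ indeed yields the smallest $\lambda_{n-|\mathcal{S}|}(L_{gi})$, i.e., the largest delay threshold $\frac{\pi}{2\lambda_{n-|\mathcal{S}|}(L_{gi})}$ per \eqref{eqn:ti}:

```python
import numpy as np

def lambda_max_grounded(A, i):
    """Largest eigenvalue of the grounded Laplacian induced by node i."""
    L = np.diag(A.sum(axis=1)) - A
    keep = [j for j in range(len(A)) if j != i]
    return np.linalg.eigvalsh(L[np.ix_(keep, keep)])[-1]

# node 0: center of a star with 6 leaves (nodes 1..6), plus a tail 0-7-8;
# deg(0) = 7 >= 2 * deg(v) for every other node v
edges = [(0, k) for k in range(1, 7)] + [(0, 7), (7, 8)]
n = 9
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
lam = [lambda_max_grounded(A, i) for i in range(n)]
print(int(np.argmin(lam)))   # node 0: grounding it minimizes lambda_max,
                             # hence maximizes the delay threshold
```

Grounding the center leaves only isolated leaves and the two tail nodes, giving $\lambda_{max}=(3+\sqrt{5})/2\approx 2.62$; grounding any other node keeps the degree-$7$ center in the grounded Laplacian, forcing $\lambda_{max}\geq 7$.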
Lemma \ref{lem:aergtn} indicates that if a node in a network has substantially higher degree than the other nodes, then it is the optimal leader for $\hat{\tau}_{max}$. However, when the differences in degrees are moderate, the highest-degree node need not be the best leader, as shown in the following example. \begin{example} In the graph shown in Fig.~\ref{fig:brooms}, the black nodes have degree 3, which is the highest degree in the graph. However, we have $\lambda_{n-|\mathcal{S}|}^{Gray}=3.7321$ and $\lambda_{n-|\mathcal{S}|}^{Black}=4.1149$, where $\lambda_{n-|\mathcal{S}|}^{Gray}$ is the largest eigenvalue of the grounded Laplacian induced by the gray node (and the same for $\lambda_{n-|\mathcal{S}|}^{Black}$). \begin{figure} \caption{An example which shows that a leader with maximum degree does not maximize $\hat{\tau}_{max}$.} \label{fig:brooms} \end{figure} \label{exm:2} \end{example} Our discussion and results in this section have shown that the optimal leaders for each of the robustness metrics will, in general, be different. In the following subsection, we discuss conditions under which a single leader can optimize all of the three objectives ($\mathcal{H}_{\infty}$ disorder or convergence rate, $\mathcal{H}_{2}$ disorder, and delay threshold) simultaneously. \subsection{A Sufficient Condition for a Leader to Minimize Network Disorder and Maximize Delay Threshold} In this subsection we provide a sufficient condition for a leader to simultaneously minimize $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ disorders and maximize $\hat{\tau}_{max}$. We require the following concept. \begin{definition} The {\it resistance distance} $r_{ij}$ between two vertices $v_i$ and $v_j$ in a graph is the equivalent resistance between these two vertices when we treat each edge of the graph as a $1 \Omega$ resistor. The {\it effective resistance} of vertex $v_i$ is $R_i=\sum_{j\neq i}r_{ij}$. 
\end{definition} It can be shown that the resistance distance between $v_i$ and $v_j$ is the $j$-th diagonal element of $L_{gi}^{-1}$, where $v_i$ is a single grounded vertex \cite{Bapat}. Thus, the effective resistance of vertex $v_i$ is \begin{equation} R_i=\trace(L_{gi}^{-1}). \label{eqn:dop1} \end{equation} Moreover, the resistance distance between vertices $v_i$ and $v_j$ is given by \cite{Bapat} \begin{equation} r_{ij}=(e_i-e_j)^T L_{gk}^{-1} (e_i-e_j), \label{eqn:resis} \end{equation} where $k \notin \{i,j\}$ is the index of an arbitrary vertex which becomes grounded and $e_i$ is a vector of zeros except for a $1$ in the element corresponding to the $i$-th vertex. \begin{theorem} Consider a connected graph $\mathcal{G} = \{\mathcal{V},\mathcal{E}\}$. Node $v_k \in \mathcal{V}$ will simultaneously be the best single leader to minimize $\mathcal{H}_{\infty}$ disorder, minimize $\mathcal{H}_2$ disorder, and maximize $\hat{\tau}_{max}$ and convergence rate if $d_k \ge \frac{2d_i}{x^2_{min}}$ for all $v_i \in \mathcal{V}\setminus\{v_k\}$, where $x_{min}$ is the smallest component of a nonnegative eigenvector corresponding to the smallest eigenvalue of the grounded Laplacian $L_{gk}$. \label{thm:jvfvv} \end{theorem} \begin{IEEEproof} From \eqref{eqn:dop1} and \eqref{eqn:resis}, the effective resistance of $v_i$ is \begin{align} R_i = \trace(L_{gi}^{-1}) &=r_{ik}+\sum_{j\neq i}(e_i-e_j)^T L_{gk}^{-1} (e_i-e_j)\nonumber \\ &= \trace(L_{gk}^{-1})+nr_{ik}-2S_{i}^k, \label{eqn:resis2} \end{align} where $S_{i}^k$ is the sum of the elements of the $i$-th row (or column) in $L_{gk}^{-1}$. From \eqref{eqn:resis2} we have $\trace(L_{gk}^{-1})-\trace(L_{gi}^{-1})=2S_{i}^k-nr_{ik}$. Thus for $v_k$ to be a better leader than $v_i$ for network $\mathcal{H}_2$ disorder, it is sufficient to have \begin{equation} 2\bar{S}-nr_{ik}\leq 0, \label{eqn:resis4} \end{equation} where $\bar{S}=\max_j S_{j}^k$ is the maximum row sum in $L_{gk}^{-1}$. 
On the other hand, from \cite{ACC} we know that $\bar{S}x_{min}\leq \lambda_{max}(L_{gk}^{-1})\leq \bar{S}$. Combining this with \eqref{eqn:resis4} yields $\lambda_{max}(L_{gk}^{-1})\leq \frac{nr_{ik}x_{min}}{2}$ as a sufficient condition for $v_k$ to be a better leadership candidate than $v_i$ for the objective of minimizing $\mathcal{H}_2$ network disorder. This sufficient condition can be more conveniently framed as $\lambda_1(L_{gk}) \geq \frac{2}{nr_{ik}x_{min}}$. From \cite{Bapat} we know that $r_{ik}\geq \max\{\frac{1}{d_i},\frac{1}{d_k}\}$ where $d_i$ and $d_k$ are the degrees of vertices of $v_i$ and $v_k$ respectively. Thus a sufficient condition for the above inequality to hold is $\lambda_1(L_{gk}) \geq \frac{2\min\{d_i,d_k\}}{nx_{min}}$. A sufficient condition for this, based on \eqref{eqn:maineq} (with $\mathcal{S}=\{v_k\}$), is \begin{equation} \frac{d_kx_{min}}{n-1} \geq \frac{2\min\{d_i,d_k\}}{nx_{min}}. \label{eqn:obj} \end{equation} On the other hand, for $v_k$ to be a better leader compared to $v_i$ for optimizing $\mathcal{H}_{\infty}$ disorder (or equivalently maximizing convergence rate), according to \eqref{eqn:maineq} it is sufficient to have $\frac{d_k x_{min}}{n-1} \geq \frac{d_i}{n-1}$ which gives $d_k \geq \frac{d_i}{x_{min}}$, where $x_{min}$ is again the smallest eigenvector component of $\mathbf{x}$, a nonnegative eigenvector corresponding to $ \lambda_1(L_{gk})$. Combining this with \eqref{eqn:obj}, a sufficient condition for $v_k$ to be a better leader than $v_i$ for both objectives simultaneously is \begin{equation} d_k \geq \max\left\{\frac{d_i}{x_{min}}, \frac{2d_i(n-1)}{nx^2_{min}}\right\}. \end{equation} Since $\frac{2(n-1)}{nx_{min}} \ge 1$ for $n \ge 2$, we conclude that $d_k \ge \frac{2d_i}{x^2_{min}}$ for all $v_i \in \mathcal{V}\setminus\{v_k\}$ is a sufficient condition for $v_k$ to be the optimal leader for minimizing $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ disorders simultaneously. 
Based on Lemma \ref{lem:aergtn} and considering the fact that $\frac{2d_i}{x^2_{min}}\geq 2d_i$, the condition mentioned in the theorem is also a sufficient condition for $d_k\geq 2d_i$ and thus it is sufficient for $v_k$ to be a leader which maximizes $\hat{\tau}_{max}$, which yields the result. \end{IEEEproof} \begin{remark} Based on the sufficient condition $d_k \ge \frac{2d_i}{x^2_{min}}$ and the bound given in Lemma \ref{lem:mainlem} for $x_{min}$, as the algebraic connectivity of the graph induced by the follower agents becomes larger, $x_{min}$ becomes closer to 1 and the condition $d_k \ge \frac{2d_i}{x^2_{min}}$ will be less demanding in terms of the degree that the leader agent $d_k$ is required to have. However, the condition for the optimal leader for $\hat{\tau}_{max}$ given in Lemma \ref{lem:aergtn} is independent of the connectivity of the graph induced by the follower agents. \end{remark} In the following example, we describe a graph such that there exists a node satisfying $d_k \ge \frac{2d_i}{x^2_{min}}$ and consequently becomes the optimal leader to optimize all of the objectives, i.e. $\mathcal{H}_{2}$ disorder, $\mathcal{H}_{\infty}$ disorder (and convergence rate), and delay threshold $\hat{\tau}_{max}$. \begin{example} Consider an ER random graph $\mathcal{G}(n,p)$ with $p \geq \frac{c\ln n}{n}$ for some $c>1$. The degree of each vertex in the graph is $d_i=\Theta(np)$ and the algebraic connectivity is $\lambda_2(L) = \Theta(np)$ asymptotically almost surely \cite{PiraniSundaramArxiv}. Suppose we wish to connect a single leader node $v_{n+1}$ to this network in such a way that it is the single best leader for optimizing $\mathcal{H}_2$ disorder, $\mathcal{H}_{\infty}$ disorder (or convergence rate) and the delay threshold. Pick any $\epsilon > 0$ and connect $v_{n+1}$ to any $(2+\epsilon)d_{max}$ nodes in the network, where $d_{max}$ is the maximum degree of any node in the network. 
Let $L_{g,n+1}$ be the grounded Laplacian induced by $v_{n+1}$. From \eqref{eqn:eige12}, the eigenvector for $\lambda_1(L_{g,n+1})$ has smallest component $x_{min} \ge 1 - \frac{2\sqrt{d_{n+1}}}{\lambda_2({L})}$ which goes to $1$ asymptotically almost surely (since $d_{n+1} = (2+\epsilon) d_{max} = \Theta(np)$ and $\lambda_2(L) = \Theta(np)$). Thus the condition $d_{n+1} \ge \frac{2d_i(n-1)}{nx^2_{min}}$ will be satisfied for this node asymptotically almost surely. \end{example} \section{Simulation Results} \label{sec:simulations} In this section, we provide some simulation results for the robustness of the leader-follower consensus dynamics when the underlying network is an ER random graph $\mathcal{G}(n,p)$ for constant $p$. We set the number of leaders in a network as $|\mathcal{S}|=2$ and the probability of edge formation as $p=0.1$. In Fig. \ref{fig:h2dis}, the value of the $\mathcal{H}_2$ disorder converges to $\frac{|\mathcal{S}|+1}{2|\mathcal{S}|p}=7.5$, as predicted by Theorem \ref{thm:coh}. In Fig. \ref{fig:hinfdis}, the value of the $\mathcal{H}_{\infty}$ disorder converges to $\frac{1}{|\mathcal{S}|p}=5$, in accordance with Theorem \ref{thm:cohh}. In Fig. \ref{fig:delayfig}, $\frac{\frac{\pi}{2np}}{\hat{\tau}_{max}}$ converges to $1$ (although slowly), as specified by Theorem \ref{thm:jsbvh}. \begin{figure} \caption{$\mathcal{H}_2$ disorder in $\mathcal{G}(n,p)$ with $p=0.1$ and $|\mathcal{S}|=2$.} \label{fig:h2dis} \end{figure} \begin{figure} \caption{$\mathcal{H}_{\infty}$ disorder in $\mathcal{G}(n,p)$ with $p=0.1$ and $|\mathcal{S}|=2$.} \label{fig:hinfdis} \end{figure} \begin{figure} \caption{$\frac{\frac{\pi}{2np}}{\hat{\tau}_{max}}$ in $\mathcal{G}(n,p)$ with $p=0.1$ and $|\mathcal{S}|=2$.} \label{fig:delayfig} \end{figure} \section{Summary and Conclusions} \label{sec:conc} We investigated the robustness of leader-follower consensus dynamics to uncertainty and time delay. The analysis was built on the spectrum of the grounded Laplacian matrix, which allowed us to provide tight characterizations of these robustness metrics for random graphs. 
Moreover, we analyzed the problem of leader selection to optimize each robustness metric and provided a sufficient condition for a single leader to optimize all robustness metrics simultaneously. An interesting avenue for future work is to extend the leader selection for these robustness metrics for the case of multiple leaders and analyze these robustness metrics for other classes of networks. \end{document}
\begin{document} \hyphenation{corres-pond} \title{Lower bound for the Complexity of the Boolean Satisfiability Problem} \author{Carlos Barr\'{o}n-Romero \thanks{Universidad Aut\'{o}noma Metropolitana, Unidad Azcapotzalco, Av. San Pablo No. 180, Col. Reynosa Tamaulipas, C.P. 02200. MEXICO. } } \date{February 12, 2015} \maketitle \begin{abstract} This paper presents algorithms for solving the decision Boolean Satisfiability Problem. An extreme case of the problem is formulated in order to analyze the complexity of algorithms for solving it. A novel and simple reformulation of this extreme case as a lottery is presented, which establishes a stable complexity around $2^n$. The reformulation points out that the decision Boolean Satisfiability Problem can only be solved in exponential time. This implies that there is no efficient algorithm for the NP class. \end{abstract} Algorithms, Complexity, SAT, NP, Quantum Computation. 68Q10, 68Q12, 68Q19, 68Q25. \pagestyle{myheadings} \thispagestyle{plain} \markboth{Carlos Barr\'{o}n-Romero}{Lower bound for SSAT} \section{Introduction} My previous work on the NP class appears in~\cite{arXiv:Barron2005}, \cite{arXiv:Barron2010}, and \cite{arXiv:Barron2015b}. In the last of these, the classical decision problem known as the Boolean Satisfiability Problem (SAT) was used to state a lower bound for its complexity. As a general framework, my technique consists of: 1) studying the general problem, 2) determining a simple reduction, and 3) analyzing the simple problem while trying to build an efficient algorithm for it. I would like to explain that building algorithms to determine a complexity bound involves more than what I depicted in~\cite{arXiv:Barron2015b}. I take an approach based on similarities with applied mathematics: the well-known optimality conditions, the search inside a region or outside of it, and the fixed point method. My article \cite{arXiv:Barron2015b} focuses on describing the fixed point and probabilistic approaches. 
This article is a commentary study of the decision problem SAT; the changes in the algorithms presented here do not alter my main result on NP's complexity, but they clarify details. This paper focuses on SAT's properties, and on objections and proofs concerning the algorithms for solving an extreme case of SAT (also called reduced SAT or Simple SAT; see Section~\ref{sc:SAS_SSAT}). Hereafter, SSAT stands for Simple SAT. The following section describes SAT and SSAT and their properties. The next section presents the algorithms for SSAT, and the complexity of an extreme SSAT is analyzed in the section after that. Some parts of \cite{arXiv:Barron2015b} are repeated here to make this article self-contained. \section{SAT and Simple SAT}~\label{sc:SAS_SSAT} A Boolean variable only takes the values $0$ (false) or $1$ (true). The logical operators are \textbf{not}: $\overline{x};$ \textbf{and}: $\wedge ,$ and \textbf{or}: $\vee .$ Hereafter, $\Sigma =\left\{0,1\right\}$ is the corresponding alphabet, and a binary string $x \in \Sigma^n$ is identified with its corresponding number in $[0,2^n-1]$, and reciprocally. The inner or fixed point approach means taking the data from the translation of the problem's formulas, and the outside or probabilistic approach means taking the data randomly from the problem's search space. A SAT$(n,m)$ problem consists in answering whether a system of $m$ Boolean formulas in conjunctive normal form over $n$ Boolean variables has an assignation of logical values such that the system of formulas is true. The system of formulas is represented as a matrix, where each row corresponds to a disjunctive formula. For example, let SAT$(4,4)$ be \begin{equation*} \begin{array}{ccccc} \ \ & (x_{3}& \vee \ \overline{x}_{2}& & \vee \ x_{0}) \\ \wedge & & ( x_{2}& \vee \ x_{1} & \vee \ x_{0})\\ \wedge & \ & ( \overline{x}_{2} & \vee \ x_{1} & \vee \ x_{0}) \\ \wedge &(x_3 & & & \ \vee \ \overline{x}_{0}). \end{array} \end{equation*} This problem is satisfiable. 
The assignation $x_{0}=1,x_{1}=0,x_{2}=1,$ and $x_{3}=1$ is a solution, as is seen by substituting the Boolean values: \begin{equation*} \begin{array}{ccccc} \ \ & (1& \vee \ 0 & & \vee \ 1) \\ \wedge & & ( 1 & \vee \ 0 & \vee\ 1)\\ \wedge & & ( 0 & \vee \ 0 & \vee \ 1) \\ \wedge & (1 & & & \vee \ 0) \end{array} \equiv 1. \end{equation*} It is important to note that the requirement of rows with the same number of Boolean variables in a given order is a simple reduction for studying SAT. This paper focuses on this simple formulation of SAT. SSAT$(n,m)$ is a SAT in which the $m$ Boolean row formulas have the same length and the Boolean variables appear in each row in the same order, $x_{n-1},\ldots,x_0$. For any SSAT, each row of the system of Boolean formulas can be translated into a set of binary numbers. Each row of SSAT maps to a binary string in $\Sigma^n$, with the convention: $\overline{x}_{i}$ to $0$ (false), and $x_{i}$ to $1$ (true) in position $i$. Hereafter, any binary string in $\Sigma^n$ represents a binary number and reciprocally. For example, given SAT$(2,2)$: \begin{equation*} \begin{array}{ccc} \ \ & ( \overline{x}_{1} & \vee \ x_{0}) \\ \wedge & ( x_{1} & \vee \ x_{0} ). \end{array} \end{equation*} It translates to: \begin{equation*} \begin{array}{l} 01 \\ 11. \end{array} \end{equation*} The problem is to determine, without previous knowledge, whether SSAT$(n,m)$ has a solution. \section{Characteristics and properties of SSAT}~\label{sc:CharPrpSSAT} \begin{proposition} ~\label{prop:SATvsSSAT} 1) A problem SAT can be transformed into an equivalent SSAT. 2) A problem SSAT is a SAT. 3) A SSAT could be a subproblem of a problem SAT. \begin{proof} 1) A SAT is transformed into an equivalent SSAT by algebraic procedures based on $F \equiv F \wedge (v \vee \overline{v})$, where $F$ is a formula and $v$ is a Boolean variable. 
2) Any SSAT is a SAT whose formulas have the same number of variables. 3) On the other hand, a SAT may contain a subset of Boolean formulas that can be arranged as an SSAT. \end{proof} \end{proposition} In cases 2) and 3), the complexity of solving the SSAT is at most the complexity of solving the SAT. Case 1) opens the possibility that some SAT can be solved with less complexity than the corresponding SSAT. For example, compare SAT$(2,2)$ over $x_1,x_0$ given by $(x_0) \wedge (\overline{x}_0) \equiv 0$ with the SSAT$(2,4)$ $(\overline{x}_1 \vee \overline{x}_0) \wedge (\overline{x}_1 \vee x_0) \wedge (x_1 \vee \overline{x}_0)\wedge (x_1 \vee x_0) \equiv 0.$ However, the first system can also be seen as the SSAT$(1,2)$ $(x_0) \wedge (\overline{x}_0)$, which has no solution. This article focuses on the study of SSAT; in my next article, the reduction SSAT $\preceq$ SAT is treated in detail. \begin{proposition} ~\label{prop:TradSAT} \begin{enumerate} \item Any SAT$(n,m)$ can be translated into a matrix of ternary numbers, the ternary numbers being strings in $\{0,1,2\}^n$. \item The search space of SSAT is smaller than the search space of SAT. \end{enumerate} \begin{proof} \begin{enumerate} \item Take the alphabet $\left\{ 0,1,2\right\}.$ Each row of SAT maps to a ternary number, with the convention: $\overline{x}_{i}$ to $0$ (false), $x_{i}$ to $1$ (true), and $2$ when the variable $x_{i}$ is not present. \item By construction, $|\Sigma^n|$ $=$ $|\left\{ 0,1\right\}^n|$ $=$ $2^n$ $\leq$ $3^n$ $=$ $|\left\{ 0,1,2\right\}^n|.$ \end{enumerate} \end{proof} \end{proposition} The previous propositions justify focusing on SSAT. The first one states that sections of a SAT can be seen as subproblems of type SSAT; moreover, it is sufficient to prove that there is no polynomial-time algorithm for SSAT.
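Proposition~\ref{prop:TradSAT} can be illustrated with a short Python sketch; the representation of a clause as a dictionary from variable index to polarity is my own assumption:

```python
def row_to_ternary(n, clause):
    """Map a SAT row to the ternary string x_{n-1}...x_0:
    '0' for a negated variable, '1' for a plain one, '2' if absent."""
    digits = []
    for i in range(n - 1, -1, -1):
        if i not in clause:
            digits.append('2')
        else:
            digits.append('1' if clause[i] else '0')
    return ''.join(digits)

# First row of the SAT(4,4) example: x3 v ~x2 v x0 (x1 is absent).
print(row_to_ternary(4, {3: True, 2: False, 0: True}))  # '1021'
```

An SSAT row, where every variable is present, uses only the digits '0' and '1', recovering the binary translation of Section~\ref{sc:SAS_SSAT}.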
For example, the previous SAT$(4,4)$ contains the following SSAT$(3,2)$: \begin{equation*} \begin{array}{cccc} & (\ x_{2}& \vee \ x_{1} & \vee \ x_0) \\ \wedge & (\ \overline{x}_{2} & \vee \ x_{1} & \vee \ x_0). \end{array} \end{equation*} \begin{proposition} ~\label{prop:binNumBlock} Let $b$ $=$ $b_{n-1}b_{n-2}\ldots b_0$ be a binary number. Then the Boolean disjunctive formula corresponding to the translation of $\overline{b}$ evaluates to $0$ under the assignment $b$. \begin{proof} Without loss of generality, let $x$ $=$ $x_{n-1}\vee \overline{x}_{n-2}\vee \ldots \vee \overline{x}_0$ be the translation of $b$ and $\overline{x}$ $=$ $\overline{x}_{n-1}\vee x_{n-2}\vee \ldots \vee x_0$ the translation of $\overline{b}$. Under the assignment $b$, every literal of $x$ is $1$ and every literal of $\overline{x}$ is $0$; hence $\overline{x}(b)$ $=$ $0 \vee 0 \vee \ldots \vee 0$ $=$ $0$, and $b$ and $\overline{b}$ block each other in the conjunction $x \wedge \overline{x}$. \end{proof} \end{proposition} The translation of the row formulas allows one to define a table of binary numbers for SSAT; this matrix of binary values is an equivalent visual formulation of SSAT$(n,m)$. The following boards admit no set of values in $\Sigma$ satisfying them: \begin{equation*} \begin{tabular}{|c|} \hline $x_{1}$ \\ \hline 1 \\ \hline 0 \\ \hline \end{tabular} \ \ \ \begin{tabular}{|l|l|} \hline $x_{2}$ & $x_{1}$ \\ \hline 0 & 0 \\ \hline 1 & 1 \\ \hline 0 & 1 \\ \hline 1 & 0 \\ \hline \end{tabular} \end{equation*} I call the previous boards unsatisfactory. It is clear that they have no solution, because each binary number appears together with its binary complement. Building an unsatisfactory board amounts to pairing each number with its complement; for example, $000,$ $101,$ $110,$ $001,$ $010,$ $111,$ $011,$ and $100$ correspond to the unsatisfactory board \begin{equation*} \begin{array}{c} 000 \\ 111 \\ 001 \\ 110 \\ 010 \\ 101 \\ 011 \\ 100.
\end{array} \end{equation*} By inspection, one can verify that these binary numbers correspond to an SSAT$(3,8)$ with no solution, because every binary number is blocked by its complement (see Prop.~\ref{prop:binNumBlock}). For example, $000$ and $111$ correspond to $(x_2 \vee x_1 \vee x_0) \wedge (\overline{x}_2 \vee \overline{x}_1 \vee \overline{x}_0)$; substituting, e.g., $x_2=1,x_1=1,x_0=1$ gives $(1 \vee 1 \vee 1) \wedge (0 \vee 0 \vee 0)$ $\equiv$ $(1) \wedge (0)$ $\equiv$ $0.$ \begin{proposition} ~\label{prop:SolSAT_binary} Let SSAT$(n,m)$ have different rows and $m<2^{n}$. Then there is a satisfying assignment corresponding to a binary string in $\Sigma^n$, i.e., to a number from $0$ to $2^{n}-1$. \begin{proof} Let $s$ be a binary string corresponding to a number from $0$ to $2^n-1$ whose complement does not occur among the translated formulas of the given SSAT$(n,m)$; such an $s$ exists, because the $m<2^n$ different rows cannot contain the complement of every string. Then $s$ coincides in at least one binary digit with each translated row, and the corresponding Boolean variable of that row evaluates to $1$. Therefore every row evaluates to $1$, i.e., $s$ makes SSAT$(n,m)$ $=1$. \end{proof} \end{proposition} The previous proposition points out when a solution $s\in[0, 2^n-1]$ exists for SSAT. More importantly, SSAT can be seen as the problem of looking for a number $s$ whose complement does not occur among the translated numbers of the SSAT's formulas. \begin{proposition} ~\label{prop:NoSolSAT_binary} If the rows of SSAT$(n,2^n)$ correspond to all the binary numbers from $0$ to $2^{n}-1$, then it is an unsatisfactory board. \begin{proof} The binary strings of the values from $0$ to $2^n-1$ are all the possible assignments of values for the board. These strings are all the combinations of $\Sigma^n$, so every candidate is blocked by its complement row, and by Prop.~\ref{prop:binNumBlock} SSAT$(n,2^n)$ has no solution. \end{proof} \end{proposition} Proposition~\ref{prop:NoSolSAT_binary} states that if $m=2^n$ and SSAT has different rows, then there is no solution.
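Propositions~\ref{prop:binNumBlock} and~\ref{prop:SolSAT_binary} suggest a direct search: a string $s$ satisfies the SSAT if and only if its complement is not among the translated rows. A Python sketch of that criterion (the function and its encoding are mine, for illustration):

```python
def find_solution(n, rows):
    """rows: n-bit strings (the translated SSAT rows).
    Return a satisfying string, or None if every candidate is blocked."""
    mask = (1 << n) - 1
    row_ints = {int(r, 2) for r in rows}
    for s in range(2 ** n):
        if (s ^ mask) not in row_ints:  # complement of s absent => s satisfies
            return format(s, f'0{n}b')
    return None

# The SSAT(3,8) board above pairs every number with its complement:
blocked = ['000', '111', '001', '110', '010', '101', '011', '100']
print(find_solution(3, blocked))        # None: an unsatisfactory board
print(find_solution(2, ['01', '11']))   # '01' solves the SAT(2,2) example
```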
These are necessary conditions for any SSAT, but conditions a) different row formulas and b) the number of row formulas constitute previous knowledge. As depicted below, it is possible to evaluate SSAT$(n,m)$ as a logic circuit, without substituting into and evaluating the Boolean formulas one by one, i.e., without knowing the rows of SSAT. \begin{proposition} ~\label{prop:NoSolSAT} Given SAT$(n,m)$, there is no solution if there exists a subset $L$ of the Boolean variables whose row formulas are isomorphic to an unsatisfactory board. \begin{proof} The subset $L$ satisfies Proposition~\ref{prop:NoSolSAT_binary}; therefore it is not possible to find a satisfying set of $n$ values for SAT$(n,m)$. \end{proof} \end{proposition} The last proposition depicts a sufficient condition for the nonexistence of a solution of SSAT. It is easy to understand, but it is quite another matter to accept that an SSAT has no solution, i.e., that it is equivalent to an unsatisfactory board. The next propositions justify focusing on an extreme SSAT, because some easy cases of SSAT can be solved very efficiently, although without producing a satisfying assignment as a witness. \begin{proposition} ~\label{prop:O1SSAT} Given SSAT$(n,m)$, if $m < 2^n$ then SSAT$(n,m)$ has a solution, and this answer is found with complexity $\mathbf{O}(1)$. \begin{proof} The rows of the given SSAT$(n,m)$ do not correspond to all the numbers in the search space $[0,2^n-1]$, even if rows are repeated. Hence there exists a number that is not blocked. The complexity is $\mathbf{O}(1)$: the only step is "if $m < 2^n$ then SSAT$(n,m)$ has a solution".
\end{proof} \end{proposition} \begin{proposition} ~\label{prop:OKSSAT} Given SSAT$(n,m)$, let $k$ be the number of failed candidates from the search space $[0,2^n-1]$, with $k=k_1+k_2$, where $k_1$ is the number of candidates whose translation is a formula of SSAT$(n,m)$ and $k_2$ is the number of candidates whose translation is a repeated formula of SSAT$(n,m)$. If $m-k_2 < 2^n$ then SSAT$(n,m)$ has a solution, and this answer is found with complexity $\mathbf{O}(k)$. \begin{proof} $k_1+ m-k$ is an estimate of the number of distinct rows of the given SSAT$(n,m)$. They do not correspond to all the numbers in the search space $[0,2^n-1]$, because $k_1+ m-k$ $=$ $k_1+ m-(k_1 + k_2)$ $=$ $m - k_2$ $< 2^n$. Then SSAT$(n,m)$ has a satisfying assignment, i.e., it is not a blocked board. The complexity is $\mathbf{O}(k)$, corresponding to the $k$ tested candidates. \end{proof} \end{proposition} The previous propositions do not produce a witness $x$ verifying SSAT$(n,m)(x) = 1$; that SSAT$(n,m)$ has a satisfying assignment follows only from the fact that it is not a blocked board. SSAT can be seen as a logic circuit: it depends only on the selection of the binary values assigned to $n$ lines, each line feeding the corresponding binary value into its Boolean variable $x_i$. This is an important consideration, because the complexity of one evaluation of the circuit is $\mathbf{O}(1)$. Figure~\ref{fig:BoxSATnxm} depicts SAT$(n,m)$ as a logic circuit. \begin{figure} \caption{SAT$(n,m)$ as a white box containing a circuit of logic gates, where each row has the same number of Boolean variables.} \label{fig:BoxSATnxm} \end{figure} \begin{proposition} ~\label{prop:SAT_TwoOne} Consider SSAT$(n,m)$ as a circuit, and let $M_{n \times m}$ be the set of numbers obtained by translating the rows of SSAT$(n,m)$. Consider the two cases: \begin{enumerate} \item $k$ is the translation of a row formula of SSAT$(n,m)$; \item $k$ is any binary number, $k$ $\in$ $[0, 2^n-1]$.
\end{enumerate} If SSAT$(n,m)$$(k)=0$, then, respectively, \begin{enumerate} \item SSAT$(n,m)$ $(\overline{k})=0$ and $k, \overline{k}\in M_{n \times m}$; \item $\overline{k}\in M_{n \times m}$. \end{enumerate} \begin{proof} In the second case there is no previous knowledge; the only information is that SSAT$(n,m)$ $(k)=0$, and this is caused by the presence of the translation $\overline{k}$ in SSAT$(n,m).$ In the first case, SSAT$(n,m)$ $(k)=0$ because the complement of $k$ blocks the system, i.e., SSAT$(n,m)$ $(\overline{k})=0$ (see Prop.~\ref{prop:binNumBlock}) and $k, \overline{k}\in M_{n \times m}$. \end{proof} \end{proposition} \begin{proposition} Let $\Sigma=\{0,1\}$ be the alphabet. Given SSAT$(n,m)$, the set $\mathcal{S}$ $=$ $\{ x \in \Sigma^n \, | \,$ SSAT$(n,m)$$(x)$$=1$ $\}$ $\subset \Sigma^n$ of the satisfying assignments is a regular expression. \begin{proof} $\mathcal{S} \subset \Sigma^n$, and any finite set of strings is a regular language. \end{proof} \end{proposition} The last proposition shows that the set of binary strings $\mathcal{S}$ of the satisfying assignments can be computed by testing SSAT$(n,m)$$(x)$$=1$, and the cost of determining $\mathcal{S}$ is $2^n$, the number of different strings in $\Sigma^n$. When $\mathcal{S}$ $\neq$ $\emptyset$ there is no objection to accepting that SSAT$(n,m)$ has a solution, no matter how large $m$ is and no matter whether the formulas are disordered or repeated: it is necessary and sufficient to evaluate SSAT$(n,m)$$(x)$ for some $x\in \mathcal{S}$. On the other hand, when $\mathcal{S}$ $=$ $\emptyset$ there is no direct verification; it is necessary to validate how $\mathcal{S}$ was constructed. Solving SSAT$(n,m)$ would be easy if we had the binary numbers that have no complement among its translated rows. Also, because $|\Sigma^n|=2^n$ has exponential size, it may not be convenient to focus on the information of SSAT$(n,m)$ when $m\gg 2^n$.
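The brute-force computation of $\mathcal{S}$, at the stated cost of $2^n$ tests, can be sketched in Python using the complement criterion of Prop.~\ref{prop:SolSAT_binary} (the function name and string encoding are mine):

```python
def solution_set(n, rows):
    """All n-bit strings whose complement is not a translated row."""
    mask = (1 << n) - 1
    row_ints = {int(r, 2) for r in rows}
    return {format(s, f'0{n}b')
            for s in range(2 ** n)
            if (s ^ mask) not in row_ints}

# The SAT(2,2) example, rows 01 and 11:
print(solution_set(2, ['01', '11']))  # {'01', '11'}
```

For an unsatisfactory board the set comes out empty, e.g. `solution_set(1, ['0', '1'])`.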
~\label{rem:ComBoxSAT} The complexity of evaluating SAT$(n,m)(y=y_{n-1}y_{n-2}\cdots $ $y_{1}$ $y_{0})$ can be considered $\mathbf{O}(1)$: instead of using a cycle, it is plausible to regard SSAT$(n,m)$ as a circuit of logic gates, as depicted in Figure~\ref{fig:BoxSATnxm}. Hereafter, SAT$(n,m)$ corresponds to a logic circuit of "and" and "or" gates, and the complexity of its evaluation is $\mathbf{O}(1)$. \begin{proposition} ~\label{prop:BuildSolSSAT} Let SSAT$(n,m)$ have different row formulas and $m \leq 2^{n}$. Any subset of $\Sigma^n$ is the solution set of an appropriate SSAT$(n,m)$. \begin{proof} $\emptyset$ is the solution set of a blocked board, i.e., of any SSAT$(n,m)$ with $m=2^n$ different rows. For $m=2^n-1$ it is possible to build an SSAT$(n,m)$ with only $x$ as solution: the blocked numbers $[0,2^n-1]$ $\setminus$ $\{x, \overline{x}\}$ and $x$ itself are translated and added to the SSAT; by construction, SSAT$(n,m)$$(x)=1$ and every other candidate is blocked. For $f$ different solutions, let $S$=$\{x_1,\ldots,x_f\}$ be the given set. Take as rows the translations of all the numbers in $[0,2^n-1]$ $\setminus$ $\{\overline{x} \, | \, x \in S\}$; then a candidate $y$ is a solution if and only if $\overline{y}$ is not a row, i.e., if and only if $y \in S$. \end{proof} \end{proposition} It is prohibitive to analyze the SSAT's formulas for more than one iteration: for example, when $m\approx 2^{n},$ any strategy for solving SSAT may have $m$ as a factor in the cost of the later iterations. \begin{proposition} ~\label{prop:EvalMatchFixedPoint} Let $y\in \Sigma ^{n}$, $ y=y_{n-1}y_{n-2}\cdots y_{1}y_{0}.$ The following strategies for the resolution of SAT$(n,m)$ are equivalent. \begin{enumerate} \item The evaluation of SAT$(n,m)(y)$ as a logic circuit. \item~\label{stp:match} A matching procedure verifying that, for each $k=1,\ldots ,m$, some digit $y_{i}$ matches the corresponding digit $s_{i}^{k}$ of the row $s^{k}\in M_{n\times m}$.
\end{enumerate} \begin{proof} SAT$(n,m)(y)=1$ means that at least one variable of each row is $1$, i.e., for each $k=1,\ldots,m$ at least one bit $y_i$ matches the corresponding bit $s^k_i$ of the row. \end{proof} \end{proposition} The evaluation strategies are equivalent, but their computational cost is not: strategy~\ref{stp:match} implies at least $m \cdot n$ iterations. This is the case when each step of a cycle is used to analyze each variable in a row formula, or to count how many times a Boolean variable is used. \begin{proposition} An equivalent formulation of SSAT$(n,m)$ is to look for a binary number $x^{\ast }$ from $0$ to $2^{n}-1.$ \begin{enumerate} \item If $x^{\ast }\in M_{n\times m}$ and $\overline{x}^{\ast }\notin M_{n\times m}$ then SAT$(n,m)(x^{\ast })=1.$ \item If $x^{\ast }\in M_{n\times m}$ and $\overline{x}^{\ast }\in M_{n\times m} $ then SAT$(n,m)(x^{\ast })=0.$ If moreover $m<2^{n}-1$, then $\exists y^{\ast} \in [0, 2^{n}-1]$ with $\overline{y}^{\ast }\notin M_{n\times m} $ and SAT$(n,m)(y^{\ast })=1.$ \item If 2) holds, then $\exists$ SAT$(n,m+1)$ such that 1) is fulfilled. \end{enumerate} \begin{proof} \ \begin{enumerate} \item When $x^{\ast }\in M_{n\times m}$ and $\overline{x}^{\ast }\notin M_{n\times m}$, the corresponding formula of $x^{\ast }$ is not blocked, and in each Boolean formula of SAT$(n,m)$ at least one Boolean variable coincides with the corresponding value of $x^{\ast }.$ Therefore SAT$(n,m)(x^{\ast })=1.$ \item Since $m<2^{n}-1$, $\exists y^{\ast } \in[0, 2^{n}-1]$ with $\overline{y}^{\ast }\notin M_{n\times m}.$ Therefore SSAT$(n,m)$$(y^\ast)=1$. \item Adding the corresponding formula of $y^{\ast }$ to SAT$(n,m)$, a new SAT$(n,m+1)$ is obtained; by 1), the case is proved. \end{enumerate} \end{proof} \end{proposition} \begin{proposition} ~\label{prop:NumbSolSSAT} Let SSAT$(n,m)$ have different row formulas and $m \leq 2^{n}$.
Then the complexity of solving SSAT$(n,m)$ is $\mathbf{O}(1).$ \begin{proof} With the knowledge that $m < 2^n$, the Boolean formulas of SSAT$(n,m)$ do not correspond to a blocked board, so there is a solution. There is no solution when $m=2^n$ and the SSAT$(n,m)$'s rows are different, i.e., when the board is blocked. \end{proof} \end{proposition} This approach allows one to verify and obtain an answer for any SSAT$(n,m)$. For example, SAT$(6,4)$ corresponds to the set $M_{6\times 4}$: \begin{equation*} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & $x_{5}=0$ & $x_{4}=0$ & $x_{3}=0$ & $x_{2}=0$ & $x_{1}=0$ & $x_{0}=0$ \\ \hline $($ & $\overline{x}_{5}\vee $ & $\overline{x}_{4}\vee $ & $\overline{x}_{3}\vee $ & $\overline{x}_{2}\vee $ & $\overline{x}_{1}\vee $ & $\overline{x}_{0})$ \\ \hline $\wedge ($ & $\overline{x}_{5}\vee $ & $\overline{x}_{4}\vee $ & $\overline{x}_{3}\vee $ & $\overline{x}_{2}\vee $ & $\overline{x}_{1}\vee $ & $x_{0})$ \\ \hline $\wedge ($ & $x_{5}\vee $ & $x_{4}\vee $ & $x_{3}\vee $ & $x_{2}\vee $ & $ x_{1}\vee $ & $\overline{x}_{0})$ \\ \hline $\wedge ($ & $\overline{x}_{5}\vee $ & $x_{4}\vee $ & $x_{3}\vee $ & $ \overline{x}_{2}\vee $ & $x_{1}\vee $ & $x_{0})$ \\ \hline \end{tabular} \text{ \ } \end{equation*} \begin{equation*} \begin{tabular}{|l|l|l|l|l|l|} \hline $x_{5}$ & $x_{4}$ & $x_{3}$ & $x_{2}$ & $x_{1}$ & $x_{0}$ \\ \hline $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline $0$ & $0$ & $0$ & $0$ & $0$ & $1$ \\ \hline $1$ & $1$ & $1$ & $1$ & $1$ & $0$ \\ \hline $0$ & $1$ & $1$ & $0$ & $1$ & $1$ \\ \hline \end{tabular} \text{.} \end{equation*} The first table shows that SAT$(6,4)(y=000000)=1$. The second table shows the set $M_{6\times 4}$ as an array of binary numbers.
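The digit-matching check of strategy~\ref{stp:match} applied to this example can be sketched in Python, writing $M_{6\times 4}$ as bit strings (an encoding assumption of mine):

```python
def matches_all_rows(y, rows):
    """y satisfies the SSAT iff y agrees with at least one digit of each row."""
    return all(any(yd == rd for yd, rd in zip(y, row)) for row in rows)

# M_{6x4} from the two tables above, as bit strings x5...x0.
M = ['000000', '000001', '111110', '011011']
print(matches_all_rows('000000', M))  # True: y = 000000 satisfies SAT(6,4)
```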
The assignment $y$ corresponds to the first row of $M_{6\times 4}$: at least one digit of $y$ coincides with each number of $M_{6\times 4}$, i.e., with each Boolean formula of SAT$(6,4).$ Finally, $y$ $=$ $000000$ can be interpreted as the satisfying assignment $x_{5}=0,$ $x_{4}=0,$ $x_{3}=0,$ $x_{2}=0,$ $x_{1}=0,$ and $x_{0}=0.$ SSAT$(n,m)$ can be used as an array of $m$ indexed Boolean formulas. In fact, the previous proposition gives an interpretation of SSAT$(n,m)$ as a type of fixed point problem. For convenience, instead of exploring the formulas of the SAT algebraically, my strategy is to look at each formula once and to keep its information in a Boolean array, using the binary number of the formula as an index into the array. At this point, the resolution of SSAT$(n,m)$ is equivalent to looking for a binary number $x$ such that SSAT$(n,m)$$\left( x\right) =1$, and the strategy is to use the binary number representation of the formulas of SSAT$(n,m)$ in $M_{n\times m}.$ SSAT as a function can be seen as the function of a fixed point method; note, however, that a satisfying assignment need not belong to the binary translations of the SSAT's formulas. The advantage of taking the candidates from the translations of the SSAT's formulas is that each failure discards two numbers (see Prop.~\ref{prop:SAT_TwoOne}). Furthermore, the equivalence between SSAT and the alternative formulation, determining whether there is a binary string that is not blocked by the binary translations of the SSAT's formulas, points out the lack of relationship between the rows of SSAT. \section{Extreme SSAT Problem}~\label{sc:exSSATCon} In Section~\ref{sc:SAS_SSAT}, Prop.~\ref{prop:NumbSolSSAT} shows that if the SSAT's information includes the fact that its rows are different, then answering is not a complex problem; in fact, it is not even necessary to review the SSAT's formulas. The number of rows $m$, together with the fact that the SSAT's rows are different, implies the answer without looking inside the given SSAT.
Here, let us be critical, in order to build an extreme problem with precision. The extreme SSAT includes the parameters $n$ (the number of Boolean variables) and $m$ (the number of SSAT's rows); no information about the specific SSAT's rows is given. Moreover, the extreme problem is an SSAT with only one binary string as solution, or none, and it may include duplicated and disordered rows. The selection of the unique solution is arbitrary, i.e., it can be any $s \in [0,2^n-1]$. Hereafter, $\mathcal{S} = \left\{x \in [0,2^n-1] \, | \, \text{SSAT}(n,m)(x)=1\right\}.$ The next proposition depicts the difficulty of determining a satisfying assignment for an extreme SSAT. \begin{proposition}~\label{prop:probsel} Let $n$ be large, and let SSAT$(n,m)$ be an extreme problem, i.e., $|\mathcal{S}|$ $\leq 1$ and $m \gg 2^n$. \begin{enumerate} \item The probability of selecting a solution ($\mathcal{P}$$_{ss}(f)$) after testing $f$ different candidates ($f \ll 2^n$) is $ \approx 1 / 2^{2n}$ (insignificant). \item Given $C$ $\subset$ $[0,2^n-1]$ of polynomial cardinality, i.e., $|C|$ $=$ $n^k$ with a constant $k >0$, the probability that the solution belongs to $C$ ($\mathcal{P}$$_{s}(C)$) is insignificant, and becomes more and more insignificant as $n$ grows. \item Solving SSAT$(n,m)$ is not efficient. \end{enumerate} \begin{proof} \ Assume first that $|\mathcal{S}|=1$. \begin{enumerate} \item The probability $\mathcal{P}$$_{ss}(f)$ is the product of the probability of being selected and the probability of being the solution.
For the inner approach (i.e., the $f$ candidates come from the translations of the SSAT$(n,m)$'s rows), $\mathcal{P}$$_{ss}(f)$ $=$ $1/\left( 2^{n}-2f \right) \cdot 1 /2^n \approx 1/2^{2n} \approx 0.$ For the outside approach (i.e., the $f$ candidates come from the search space $[0,2^n-1]$), $\mathcal{P}$$_{ss}(f)$ $=$ $1/\left( 2^{n}-f \right) \cdot 1 /2^n \approx 1 /2^{2n} \approx 0.$ \item $\mathcal{P}$$(C)$ $=$ $n^k / 2^n.$ Then $\mathcal{P}$$_s(C)$ $=$ $n^k / 2^n$ $\cdot$ $1 /2^n$, and $\lim_{n \to \infty} K n^k / 2^{n}$ $=$ $0^+$ for $K>0$ (L'H\^{o}\-pi\-tal's rule). For $n$ large, $2^n-Kn^k \approx 2^n$ and $Kn^k \ll 2^n.$ Moreover, for the inner approach, $\mathcal{P}$$_{ss}(n^k)$ $=$ $1/\left( 2^{n}-2n^k \right) \cdot 1 /2^n \approx 1/2^{2n} \approx 0,$ and for the outside approach, $\mathcal{P}$$_{ss}(n^k)$ $=$ $1/\left( 2^{n}-n^k \right) \cdot 1 /2^n \approx 1 /2^{2n} \approx 0.$ \item In either approach, inner or outside, many rows of SSAT$(n,m)$ have a large probability of being blocked, because there is only one solution. Then the probability after $f$ iterations remains $1/ 2^{2n} \approx 0$; it is almost impossible to find the solution with $f$ small or polynomial in $n$. \end{enumerate} Assume now that $|\mathcal{S}|=0$; then $\mathcal{P}$$_s$ $=$ $0.$ \begin{enumerate} \item[1,2] For the inner approach and for the outside approach, $\mathcal{P}$$_{ss}(f)$ $=$ $0.$ \item[3] The equivalence $\mathcal{S}=\emptyset$ $\Leftrightarrow$ SSAT$(n,m)(x)$ $=$ $0,$ $\forall x \in [0,2^n-1],$ means that it is necessary to test all the numbers in $[0,2^n-1].$ \end{enumerate} \end{proof} \end{proposition} An important feature of the extreme SSAT as a numerical problem (see Prop.~\ref{prop:BuildSolSSAT}) with one solution or none is the interpretation of guessing such a solution: it is like a lottery, but with the possibility that there is no winning number.
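The decay of $\mathcal{P}_{ss}$ can be verified numerically. A Python sketch of the inner-approach formula $\mathcal{P}_{ss}(f) = 1/(2^n-2f)\cdot 1/2^n$ (the function name is mine):

```python
def p_ss_inner(n, f):
    """Inner-approach probability P_ss(f) = 1/(2^n - 2f) * 1/2^n."""
    return 1.0 / (2 ** n - 2 * f) / 2 ** n

# Even after f = n^2 tested candidates the probability stays near 1/2^(2n):
for n in (10, 20, 30):
    print(n, p_ss_inner(n, n ** 2))
```

For $f$ polynomial in $n$ the printed values remain of order $2^{-2n}$, matching the proposition.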
The exponential constant $2^n$ causes a rapid decay, as depicted in Fig.~\ref{fig:probDecy} for $t=2^n-1, 2^n-8, 2^n-32$. \begin{figure} \caption{Behavior of the functions $P_e(t)$ and $P_i(t)$.} \label{fig:probDecy} \end{figure} The interpretation of the extreme SSAT as the circuit of an electronic lottery behaves differently when there is one winning number than when there is none. One probably waits a long time (an exponential waiting time) to get the winning number. People accept the winning ticket $x^\ast$ because a judge can show on an electronic board the result SSAT$(x^\ast)$ $=$ $1$; it is unlikely to get the winning ticket in a short time, but most people accept this case by testing the winning ticket. However, the case when there is no winning number is rejected, because of the long time needed to test all the numbers; and who has the time, and can be the unconditional and unbiased witness, to testify that the electronic board always shows SSAT$(x)$ $=$ $0,$ $\forall x\in[0,2^n-1]$? Both cases are similar, and they point out that solving an extreme SSAT takes exponential time, no matter whether a group of people runs a lottery or a computer performs an algorithm. \begin{proposition} There is no efficient algorithm for solving extreme \break SSAT$(n,m)$. \begin{proof} If such an algorithm existed, it would be capable of solving in polynomial time the equivalent number problem with one winning number or none, in contradiction with the exponential time. \end{proof} \end{proposition} \section{Algorithms for SAT} ~\label{sc:algthmsSSAT} The previous sections depict characteristics and properties of SSAT. Solving any SSAT$(n,m)$ needs at least one careful review of the SSAT's rows, i.e., its complexity is related to the number $m$, and any algorithm for solving SSAT can have $m$ as a factor of its complexity.
If the algorithm also uses the columns for substituting and simplifying algebraically, the factor grows to at least $m \cdot n$; ordering and discarding repeated rows increases the complexity by $m\log_2(m)$. The properties depicted in Section~\ref{sc:SAS_SSAT} indicate two sources of data for solving SSAT$(n,m)$: 1) its $m$ rows, or 2) the search space of all possible Boolean values of its variables ($\Sigma^n$). The second is large, and $m$ may be large as well. Therefore, efficient algorithms for solving SSAT must work in one pass, without nested cycles, and with the constraint that the total number of iterations is related to $m$, $2^{n-1}$, or $2^n$. This covers the fixed point approach, or inner search (taking candidates from the translations of the SSAT's formulas), and the outside or probabilistic approach (taking candidates from the search space $[0,2^n-1]$). It is necessary to be sceptical and impartial in order to accept the answer of a computer algorithm or of a person: no matter whether $m$ is huge or SSAT is an extreme problem, without a proof or a clear explanation I refuse to accept such an answer. This imposes another requirement on algorithms for solving SSAT: they must provide a witness, or something that corroborates without objections that the SSAT has been solved. A very simple algorithm to determine whether an SSAT has a solution appears in~\cite{arXiv:Barron2015b}. The algorithm solves SSAT by using the equivalent numerical formulation, more precisely by building an unsatisfactory board in a table $T$. \begin{algorithm} ~\label{alg:SATBoard_1} \textbf{Input:} SSAT$(n,m)$. \textbf{Output:} The answer whether SSAT$(n,m)$ has a solution or not; $T$ is an unsatisfactory board when SSAT$(n,m)$ has no solution. \textbf{Variables in memory}: $T[0:2^{n}-1]$=$-1$: array of binary integer; $address$: integer; $ct=0$: integer; $k$: binary integer.
\begin{enumerate} \item \textbf{if} $m < 2^n$ \textbf{then} \item \hspace{0.5cm} \textbf{output:} "SSAT$(n,m)$ has a solution; \hspace{0.5cm} its formulas do not cover $\Sigma^n$."; \item \textbf{end if} \item \textbf{while not end(}SSAT$(n,m)$\textbf{)} \item \hspace{0.5cm} $k=b_{n-1}b_{n-2}\ldots b_{0}$= \textbf{Translate to binary formula} (SSAT$(n,m)$); \item \hspace{0.5cm} \textbf{if }$k.[b_{n-1}]$ \textbf{equals} $0$ \textbf{then} \item \qquad \qquad $address=2\ast k.[b_{n-2}\ldots b_{0}];$ \item \qquad \textbf{else} \item \qquad \qquad $address=2\ast (2^{n-1}-k.[b_{n-2}\ldots b_{0}])-1;$ \item \hspace{0.5cm} \textbf{end if} \item \hspace{0.5cm} \textbf{if} T[$address$] \textbf{equals} $-1$ \textbf{then} \item \hspace{0.5cm}\hspace{0.5cm} $ct=ct+1$; \item \hspace{0.5cm}\hspace{0.5cm} $T[address]=k;$ \item \hspace{0.5cm} \textbf{end if} \item \hspace{0.5cm}\textbf{if} $ct$ \textbf{equals} $2^{n}$ \textbf{then} \item \hspace{0.5cm} \hspace{0.5cm} \textbf{output:} "There is no solution for SSAT$(n,m)$; \hspace{0.5cm} it has $2^n$ different formulas."; \item \hspace{0.5cm} \hspace{0.5cm} \textbf{stop} \item \hspace{0.5cm} \textbf{end if} \item \textbf{end while} \item \textbf{output:} "SSAT$(n,m)$ has a solution; \hspace{0.5cm} its formulas do not cover $\Sigma^n$."; \end{enumerate} \end{algorithm} The previous algorithm is quite simple and does not require evaluating SSAT. When the input SSAT has no solution, the output includes an equivalent formulation of it, the table of an unsatisfactory board, and the algorithm writes "There is no solution for SSAT$(n,m)$". On the other hand, when the algorithm writes "SSAT$(n,m)$ has a solution", it does so without any additional information or witness. It is reasonable to ask: do I accept the result of the previous algorithm? The answer is "yes", but only after carefully reviewing and verifying the correctness of the algorithm. If the answer of the algorithm is forgotten, it is possible to recover it from the table $T$, but that is not cheap.
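A Python sketch of Algorithm~\ref{alg:SATBoard_1} may clarify the table construction; the row encoding as bit strings is my assumption, and the pairing addresses follow the algorithm's two formulas:

```python
def ssat_has_solution(n, rows):
    """rows: n-bit strings (translated SSAT rows). The addressing places
    k and its complement in adjacent slots of T; if the count of distinct
    rows reaches 2^n, the board is unsatisfactory."""
    T = [-1] * 2 ** n
    ct = 0
    for r in rows:
        k = int(r, 2)
        low = k & (2 ** (n - 1) - 1)
        if k >> (n - 1) == 0:                    # top bit 0
            address = 2 * low
        else:                                     # top bit 1
            address = 2 * (2 ** (n - 1) - low) - 1
        if T[address] == -1:
            ct += 1
            T[address] = k
        if ct == 2 ** n:
            return False   # all 2^n formulas present: unsatisfactory board
    return True            # the rows do not cover Sigma^n

print(ssat_has_solution(3, ['000', '111', '001', '110',
                            '010', '101', '011', '100']))  # False
print(ssat_has_solution(2, ['01', '11']))                  # True
```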
It is necessary to review $T$ in order to determine whether there is a binary number without its complement, or whether every binary number is accompanied by its complement; in the former case SSAT has a solution, in the latter it does not. The objection is that this verification using $T$ after running the algorithm is quite expensive. Using the property of evaluating SSAT as a circuit, the previous algorithm is modified into the next one. \begin{algorithm} ~\label{alg:SATModBoard_1} \textbf{Input:} SSAT$(n,m)$. \textbf{Output:} An unsatisfactory board $T$ when SSAT$(n,m)$ has no solution; a satisfying assignment $k$ when SSAT$(n,m)$ has a solution. \textbf{Variables in memory}: $T[0:2^{n}-1]$=$-1$: array of binary integer; $address$: integer; $ct=0$: integer; $k$: binary integer. \begin{enumerate} \item \textbf{if} $m < 2^n$ \textbf{then} \item \hspace{0.5cm} \textbf{output:} "SSAT$(n,m)$ has a solution; \hspace{0.5cm} its formulas do not cover $\Sigma^n$."; \item \textbf{end if} \item \textbf{while not end(}SSAT$(n,m)$\textbf{)} \item \hspace{0.5cm} $k=b_{n-1}b_{n-2}\ldots b_{0}$= \textbf{Translate to binary formula} (SSAT$(n,m)$); \item \hspace{0.5cm} \textbf{if} SSAT$(n,m)$($k$) \textbf{equals} 1 \textbf{then} \item \hspace{0.5cm} \hspace{0.5cm} \textbf{output:} "$k$ is a solution for SSAT$(n,m).$"; \item \hspace{0.5cm} \hspace{0.5cm} \textbf{stop}; \item \hspace{0.5cm} \textbf{end if}; \item \hspace{0.5cm} \textbf{if }$k.[b_{n-1}]$ \textbf{equals} $1$ \textbf{then} \item \hspace{0.5cm} \hspace{0.5cm} $k=\overline{k}$; \item \hspace{0.5cm} \textbf{end if}; \item \hspace{0.5cm} $address=2\ast k.[b_{n-2}\ldots b_{0}];$ \item \hspace{0.5cm} \textbf{if} T[$address$] \textbf{equals} $-1$ \textbf{then} \item \hspace{0.5cm}\hspace{0.5cm} $T[address]=k;$ \item \hspace{0.5cm}\hspace{0.5cm} $address=2\ast (2^{n-1}-\overline{k}.[b_{n-2}\ldots b_{0}])-1;$ \item \hspace{0.5cm}\hspace{0.5cm} $T[address]=\overline{k};$ \item \hspace{0.5cm}\hspace{0.5cm} $ct=ct+2$; \item \hspace{0.5cm}
\textbf{end if} \item \hspace{0.5cm}\textbf{if} $ct$ \textbf{equals} $2^{n}$ \textbf{then} \item \hspace{0.5cm} \hspace{0.5cm} \textbf{output:} "There is no solution for SSAT$(n,m)$; \hspace{0.5cm} it has $2^n$ different formulas."; \item \hspace{0.5cm} \hspace{0.5cm} \textbf{stop} \item \hspace{0.5cm} \textbf{end if} \item \textbf{end while} \item \textbf{for} $k=0$ to $2^{n}-1$ \textbf{do} \item \hspace{0.5cm} \textbf{if} T[$k$] \textbf{equals} $-1$ \textbf{then} \item \hspace{0.5cm} \hspace{0.5cm} \textbf{output:} "$k$ is a solution of SSAT$(n,m)$."; \item \hspace{0.5cm} \hspace{0.5cm} \textbf{stop}; \item \hspace{0.5cm} \textbf{end if}; \item \textbf{end for}; \end{enumerate} \end{algorithm} The previous algorithm solves the problem and provides two types of witness: 1) an unsatisfactory board $T$ when there is no solution, and 2) a satisfying assignment $k$ when there is one. It exploits the properties of SSAT$(n,m)$ as a circuit and the inner search (the candidates come from the SSAT's formulas): each failure eliminates two binary numbers, so the table $T$ is built faster than in Algorithm~\ref{alg:SATBoard_1}. The algorithm does not use a double linked list, as Algorithms 2 and 3 in~\cite{arXiv:Barron2015b} do. The drawback of this algorithm is its last lines: there the satisfying assignment is found, but at an expensive cost of up to $2^n$ additional iterations. This could be changed by using a double linked list as in Algorithms 2 and 3 in~\cite{arXiv:Barron2015b}, which requires a lot of memory. The difference between those two is that the former stops at one satisfying assignment, while the second stops after building $\mathcal{S}$. Algorithms 3 and 4 in~\cite{arXiv:Barron2015b} are built using the deterministic and the probabilistic approach; they provide different types of witness to corroborate whether SSAT has a solution or not.
The former gives a double linked list with the elements of $\mathcal{S}$, and the other gives a Boolean table $T$ where the elements of $\mathcal{S}$ correspond to the indices $i\in[0,2^n-1]$ such that $T[i]=0$. The situation for solving SSAT$(n,m)$ is subtle: its number of rows can be exponential, but any SSAT$(n,m)$ has no more than $2^n$ different rows, so $m \gg 2^n$ means duplicated rows. It is possible to account for duplicated rows, but this is not as important as determining at least one solution in $\Sigma^n$. The search space $\Sigma^n$ corresponds to a regular expression and is easy to generate, in order, by a deterministic finite automaton (Kleene's theorem). However, testing the binary numbers in order is not adequate: for $m$ very large, any source of candidate binary numbers should be random and cheap to construct. The next algorithm generates a random permutation of the numbers from $0$ to $Mi$. \begin{algorithm}~\label{alg:ParraBinNum} \textbf{Input:} $T[0: Mi]=[0:Mi]$. \textbf{Output:} $T[0: Mi]$ contains a permutation of the numbers from $0$ to $Mi$. \textbf{Variables in memory}: $i=0$: integer; $rdm, a$=0 : integer; \begin{enumerate} \item \textbf{for} i:=0 to $Mi-1$ \item \hspace{0.5cm} \textbf{if} $T[i]$ \textbf{equals} $i$ \textbf{then} \item \hspace{0.5cm} \hspace{0.5cm} \textbf{select uniform randomly} $rdm \in [i+1, Mi]$; \item \hspace{0.5cm} \hspace{0.5cm} $a$ $=$ $T[rdm]$; \item \hspace{0.5cm} \hspace{0.5cm} $T[rdm]$ $=$ $T[i]$; \item \hspace{0.5cm} \hspace{0.5cm} $T[i]$ $=$ $a$; \item \hspace{0.5cm} \textbf{end if} \item \textbf{end for} \item \textbf{stop} \end{enumerate} \end{algorithm} An important property of this algorithm is that it builds a permutation of the numbers $0$ to $Mi$ in which no index $i<Mi$ keeps its original value. Let floor() be the function returning the greatest integer less than or equal to a given number, and let rand() be a function returning a random real number in $(0,1)$.
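Algorithm~\ref{alg:ParraBinNum} can be sketched in Python; as an implementation substitution, I use the standard library's random.randint in place of floor() and rand():

```python
import random

def permute_no_fixed_points(Mi):
    """Whenever T[i] still holds i, swap it with a uniformly chosen later
    entry, so that no index i < Mi keeps its original value."""
    T = list(range(Mi + 1))
    for i in range(Mi):                      # i = 0 .. Mi-1
        if T[i] == i:
            rdm = random.randint(i + 1, Mi)  # select uniformly in [i+1, Mi]
            T[i], T[rdm] = T[rdm], T[i]
    return T

T = permute_no_fixed_points(7)
print(sorted(T) == list(range(8)))       # a permutation of 0..7
print(all(T[i] != i for i in range(7)))  # no index i < Mi keeps its value
```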
The line \textbf{select uniform randomly} $rdm \in [0, Mi-1]$; could be implemented as $k$ $=$ floor($r$ $\cdot$ $Mi$), where $r=rand()$ and $Mi>0$ is an integer. Then $0$ $\leq$ $k$ $\leq Mi-1$. In a similar way, \textbf{select uniform randomly} $rdm \in [i+1, Mi]$; could be implemented as $k$ $=$ floor($r$ $\cdot$ $(Mi-i)$) $+$ $(i+1)$, which gives $i+1 \leq k \leq Mi$. The previous algorithm is an alternative to line 4 in the probabilistic algorithm 4 in~\cite{arXiv:Barron2015b}: 4. \hspace{0.5cm} \textbf{select uniform randomly} $k \in [0,2^{n}-1] \setminus \{ i \, |\, T[i] =1\}$; Using the approach of algorithm~\ref{alg:ParraBinNum}, the next algorithm solves SSAT$(n,m)$ in a straightforward way using an outside approach. Here, each candidate is a random selection from $[0,2^{n-1}-1].$ \begin{algorithm}~\label{alg:SAT_Perm} \textbf{Input:} n, SSAT$(n,m)$. \textbf{Output:} $rdm$, such that SSAT$(n,m)$$(rdm)=1$, or the message that SSAT has no solution. \textbf{Variables in memory}: $T[0: 2^{n-1}-1]$=$[0: 2^{n-1}-1]$: integer; $Mi$=$2^{n-1}-1$: integer; $rdm, a$: integer.
\begin{enumerate} \item \textbf{if} $m < 2^n$ \textbf{then} \item \hspace{0.5cm} \textbf{output:} "SSAT$(n,m)$ has a solution, \hspace{0.5cm} its formulas do not cover $\Sigma^n.$"; \item \textbf{end if} \item \textbf{for} i:=0 to $Mi-1$ \item \hspace{0.5cm} \textbf{if} $T[i]$ \textbf{equals} $i$ \textbf{then} \hspace{0.5cm} // \textbf{select uniform randomly} $rdm \in [i+1, Mi]$; \item \hspace{0.5cm} \hspace{0.5cm} $rdm$ $=$ floor($rand()$ $\cdot$ $(Mi-i)$) $+$ $(i+1)$; \item \hspace{0.5cm} \hspace{0.5cm} $a$ $=$ $T[rdm]$; \item \hspace{0.5cm} \hspace{0.5cm} $T[rdm]$ $=$ $T[i]$; \item \hspace{0.5cm} \hspace{0.5cm} $T[i]$ $=$ $a$; \item \hspace{0.5cm} \textbf{end if} \item \hspace{0.5cm} $rdm$ $=$ $0T[i]$; \item \hspace{0.5cm} \textbf{if} SSAT$(n,m)$($rdm$) \textbf{equals} 0 \textbf{and} \hspace{0.9cm} SSAT$(n,m)$($\overline{rdm}$) \textbf{equals} 0 \textbf{then} \item \hspace{0.5cm} \hspace{0.5cm} \textbf{continue} \item \hspace{0.5cm} \textbf{end if} \item \hspace{0.5cm} \textbf{if} SSAT$(n,m)$($rdm$) \textbf{equals} 1 \textbf{then} \item \hspace{0.5cm} \hspace{0.5cm} \textbf{output:} "$rdm$ is a solution for SSAT$(n,m)$."; \item \hspace{0.5cm} \hspace{0.5cm} \textbf{stop}; \item \hspace{0.5cm} \textbf{else} \item \hspace{0.5cm} \hspace{0.5cm} \textbf{output:} "$\overline{rdm}$ is a solution for SSAT$(n,m)$."; \item \hspace{0.5cm} \hspace{0.5cm} \textbf{stop}; \item \hspace{0.5cm} \textbf{end if} \item \textbf{end for} \item $rdm$ = $0T[Mi]$; \item \textbf{if} SSAT$(n,m)$($rdm$) \textbf{equals} 1 \textbf{then} \item \hspace{0.5cm} \textbf{output:} "$rdm$ is a solution for SSAT$(n,m)$."; \item \hspace{0.5cm} \textbf{stop}; \item \textbf{end if} \item \textbf{if} SSAT$(n,m)$($\overline{rdm}$) \textbf{equals} 1 \textbf{then} \item \hspace{0.5cm} \textbf{output:} "$\overline{rdm}$ is a solution for SSAT$(n,m)$."; \item \hspace{0.5cm} \textbf{stop}; \item \textbf{end if} \item
\textbf{output:} "There is no solution for SSAT$(n,m)$: SSAT$(n,m)(x)=0,$ $\forall x \in$ $[0,2^{n}-1]$."; \item \textbf{stop}; \end{enumerate} \end{algorithm} The limit of the iterations to reach the answer is $Mi+1=(2^{n-1}-1)+1=2^{n-1}$. Therefore, the complexity of the previous algorithm is $\mathbf{O}\left(Mi\right)$=$\mathbf{O}\left( 2^{n-1} \right)$. It does not matter whether the rows of SSAT$(n,m)$ are duplicated or disordered, or whether $m \gg 2^n$. The upper bound of the iterations is $2^{n-1}$ while the search space is $[0,2^n-1]$, because each value $x\in[0,2^{n-1}-1]$ is used to build $rdm=0x \in [0, 2^n-1]$, and both $rdm$ and $\overline{rdm} \in [0, 2^n-1]$ are tested in the same iteration. \section{Complexity for SSAT} ~\label{sc:compleForSSAT} Proposition~\ref{prop:BuildSolSSAT} depicts the complexity of solving SSAT and how to build an SSAT with some given set of solutions. For example, the following SSAT$(3,7)$ has the unique solution $x_2=0$, $x_1=1$, and $x_0=1$: \begin{equation*} \begin{tabular}{llllccc} & & & & & $\Sigma^3$ & $[0,7]$ \\ $($ & $\overline{x}_{2}\vee $ & $\overline{x}_{1}\vee $ & $\overline{x}_{0})$ & & 000 & 0\\ $\wedge ($ & $\overline{x}_{2}\vee $ & $\overline{x}_{1}\vee $ & $x_{0})$ & & 001 & 1 \\ $\wedge ($ & $\overline{x}_{2}\vee $ & $ x_{1}\vee $ & $\overline{x}_{0})$ & &010 & 2\\ $\wedge ($ & $ \overline{x}_{2}\vee $ & $x_{1}\vee $ & $x_{0})$ & & 011 & 3 \\ $\wedge ($ & $x_2\vee $ & $\overline{x}_{1}\vee $ & $x_{0})$ & & 101 & 5\\ $\wedge ($ & $x_2\vee $ & $x_{1}\vee $ & $\overline{x}_{0})$ & & 110 & 6\\ $\wedge ($ & $x_2\vee $ & $x_{1}\vee $ & $x_{0})$ & & 111 & 7\\ \end{tabular} \text{ \ } \end{equation*} By construction, the unique solution is the binary string of $3$. It corresponds to the translation $( \overline{x}_{2}\vee x_{1}\vee x_{0})$. The assignation $x_2=0$, $x_1=1$, and $x_0=1$ satisfies SSAT$(3,7)$. It is not blocked by $100$, which corresponds to the missing formula $(x_2 \vee \overline{x}_{1}\vee \overline{x}_{0})$ (the complement of formula $3$).
The other numbers $0,1,2$ are blocked by $5,6,7$. \begin{proposition}~\label{prop:SSATBynSearch} Let SSAT$(n,2^n-1)$ be a problem with only one solution and its rows in ascending order. Then the complexity of determining the unique solution by a binary search is $\mathbf{O}\left(\log_2(2^n-1)\right)$ $\approx$ $\mathbf{O}\left(n \right)$. \begin{proof} Without loss of generality, the rows can be arranged as in the previous example SSAT$(3,7)$, in a table with indexes in $[0,2^n-2]$. The following algorithm determines the unique solution: \begin{algorithm} ~\label{alg:SATBynSearch} \textbf{Input:} {SSAT}$(n,m)$\ with only one solution and its rows in ascending order. \textbf{Output:} The unique satisfactory assignation $k$. \textbf{Variables in memory}: $T[0:2^{n}-2]$ = (translated SSAT's rows) : array of binary integer; $l_i,r_i,m_i$: integer. \begin{enumerate} \item \textbf{if} $T[0]$ \textbf{is not equal to} $0$ \textbf{then} \item \hspace{0.5cm} \textbf{output:} "$0$ is the solution."; \item \hspace{0.5cm} \textbf{stop}; \item \textbf{end if} \item \textbf{if} $T[2^n-2]$ \textbf{is not equal to} $2^n-1$ \textbf{then} \item \hspace{0.5cm} \textbf{output:} "$2^n-1$ is the solution."; \item \hspace{0.5cm} \textbf{stop}; \item \textbf{end if} \item $l_i=0$; \item $r_i=2^n-2$; \item \textbf{while} $((r_i - l_i) > 1)$ \textbf{do} \item \hspace{0.5cm} $m_i = (l_i + r_i)/2$; \item \hspace{0.5cm} \textbf{if} $T[m_i]$ \textbf{equals} $m_i$ \textbf{then} \item \hspace{0.5cm} \hspace{0.5cm} $l_i=m_i$; \item \hspace{0.5cm} \textbf{else} \item \hspace{0.5cm} \hspace{0.5cm} $r_i=m_i$; \item \hspace{0.5cm} \textbf{end if} \item \textbf{end while} \item \textbf{output:} "$l_i+1$ is the solution."; \item \textbf{stop}; \end{enumerate} \end{algorithm} \end{proof} \end{proposition} The previous proposition is based on the numerical translation of SSAT.
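As a sketch (the helper name \texttt{unique\_solution} is ours), the binary search above can be coded over the table of sorted numerical translations, where the unique value of $[0,2^n-1]$ missing from the table is the solution:

```python
def unique_solution(T):
    # T holds 2**n - 1 distinct values of [0, 2**n - 1] in ascending order;
    # the single missing value is the satisfactory assignation.
    last = len(T) - 1              # index 2**n - 2
    if T[0] != 0:
        return 0                   # the gap is at the very front
    if T[last] != last + 1:        # last row is 2**n - 2: 2**n - 1 is missing
        return last + 1
    li, ri = 0, last
    while ri - li > 1:
        mi = (li + ri) // 2
        if T[mi] == mi:            # no gap up to index mi: go right
            li = mi
        else:                      # the gap is at or before index mi: go left
            ri = mi
    return li + 1                  # T[li] == li and T[li + 1] == li + 2
```

For the SSAT$(3,7)$ example above, the table $[0,1,2,4,5,6,7]$ yields the solution $3$ after a few halving steps.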
The drawback of the previous binary search is that it only applies for solving the special SSAT$(n,2^n-1)$ with different rows in ascending order. When the SSAT$(n,2^n-1)$'s rows are in disorder, the cost of sorting adds $\textbf{O}(2^n-1)$ by using Address Calculation Sorting (R. Singleton, 1956)~\cite{Agt:pena}. It has linear complexity and is, to my knowledge, the least expensive sorting method. In this case the complexity of determining the unique solution is $\textbf{O}(2^n).$ On the other hand, the no-solution case has complexity $\textbf{O}(1)$: knowing that SSAT$(n,2^n)$ has different rows, there is nothing to look for. But again, knowing that SSAT$(n,2^n)$ has different rows costs at least $\textbf{O}(2^{n-1})$, by verifying the SSAT$(n,2^n)$'s rows at least once using algorithm 3 in~\cite{arXiv:Barron2015b}. The extreme SSAT problem is designed to test how difficult it is to determine one solution or none with no more knowledge than $n$, the number of variables, and $m$, the number of rows. It is extreme because $m \gg 2^n$ could be huge. This implies that SSAT's rows are repeated, and the inner approach is not convenient: it could take more than $2^n$ iterations. Also, it does not help to know that SSAT could have one or no solution. As mentioned before, any algorithm must solve SSAT without loops. Algorithms 1, 2, and 3 in~\cite{arXiv:Barron2015b} are based on the inner or fixed point approach; therefore solving the extreme SSAT could take more than $2^n$ iterations ($m \gg 2^n$ is huge). They do not behave stably for the extreme SSAT. The number of iterations varies quite widely, depending on $m \gg 2^n$. With many of SSAT's rows repeated, the inner approach or fixed point type method gains no advantage from the elimination of two candidates when solving the extreme SSAT: it has to review the SSAT's rows, but duplicate rows provide no information about whether the solution, or the certainty of no solution, has been reached.
It has the lower bound $2^{n-1}$ for the special SSAT$(n,2^n-1)$ because it eliminates $k$ and $\overline{k}$ when $k$ comes from the translation of the SSAT's rows. But depending on whether the SSAT's rows are duplicated and disordered, it can behave quite differently and make a huge number of iterations ($\gg 2^n$) for an extreme SSAT. For example, if SSAT has the same row $2^n$ times at the beginning, after $2^n$ iterations the algorithm is still far from solving SSAT. This phenomenon does not happen with the outside approach: after at most $2^{n-1}$ iterations the answer is reached. The algorithm~\ref{alg:SAT_Perm} is based on the outside approach. It uses a random search in $[0,2^n-1]$ by creating two candidates from $[0,2^{n-1}-1]$. The candidates are $0x$ and $\overline{0x}$, $x\in [0,2^{n-1}-1].$ The payoff is a stable behavior, no matter how extreme the SSAT is. Each candidate provides information that slowly and consistently reduces the distance to the answer. When there is no solution, this algorithm always takes $2^{n-1}$ iterations, and it performs fewer than $2^{n-1}$ when there is one solution. The algorithm takes advantage of the evaluation of SSAT as a logic function in a circuit (see fig.~\ref{fig:BoxSATnxm}); it cannot use the inner approach's property of eliminating two candidates on each failed test, but it tests two candidates at the same time. The narrow behavior of the outside approach comes from the size of the search space $[0,2^{n-1}-1]$. The wide behavior of the inner approach is caused by $m \gg 2^n$ and by the possibility of testing all SSAT's rows. \begin{proposition}~\label{prop:alghtmsRange} Let $n$ be large, and let SSAT be an extreme problem, i.e., $|\mathcal{S}|$ $\leq 1$. Algorithms 1, 2, and 3 in~\cite{arXiv:Barron2015b} and algorithm~\ref{alg:SATBoard_1} (inner approach) behave wide, and algorithm 4 in~\cite{arXiv:Barron2015b} and algorithm~\ref{alg:SAT_Perm} (probabilistic and outside approach) behave narrow.
\begin{proof} \ The property depicted in prop.~\ref{prop:binNumBlock} relates $k$ and its complement; it allows eliminating two numbers when the candidate comes from the translation of an SSAT formula. This is the inner approach or fixed point type method. For the extreme SSAT, any of the algorithms 1, 2, and 3 in~\cite{arXiv:Barron2015b} and algorithm~\ref{alg:SATBoard_1} could iterate more than $2^n$ times when the given SSAT's rows are repeated. In this case, after $2^n$ iterations, it is possible to be far away from the solution. When there is no solution, the number of iterations could be around $m \gg 2^n$. It is a wide range of iterations, from 1 to $m$ with $m$ $\gg$ $2^n$. On the other hand, algorithm 4 in~\cite{arXiv:Barron2015b} and algorithm~\ref{alg:SAT_Perm} (probabilistic and outside approach) use SSAT as a function and explore the search space $[0,2^n-1]$ by creating two candidates from $[0,2^{n-1}-1].$ It means that at most $2^{n-1}$ iterations are needed for solving any SSAT, even in the case of an extreme SSAT with $m \gg 2^n.$ \end{proof} \end{proposition} \begin{proposition} ~\label{prop:SATVerifyNot} Given an extreme SSAT$(n,m)$, it is not possible to verify its solution in polynomial time. \begin{proof} This result follows from an extreme SSAT$(n,m)$, $m\gg 2^n.$ A sceptical person or a computer program must match the huge data of SSAT$(n,m)$ against the answer of the algorithms. He or it does not execute any of the algorithms; they just receive the results. When there is no solution, a table or a structure provided by the algorithm means that $\mathcal{S}$ is empty. All the algorithms here give an answer and a witness. It is simple to verify when there is a solution $s^\ast$: SSAT$(n,m)(s^\ast)=1$. But when the answer is no solution, he or it has an equivalent formulation of $\mathcal{S}=\emptyset$, or that the extreme SSAT$(n,m)$ is equivalent to the special SSAT$(n,2^n)$ with different rows.
The corroboration cannot consist in blindly accepting the answer $\mathcal{S}=\emptyset$, or that the extreme SSAT$(n,m)$ is equivalent to the special SSAT$(n,2^n)$. Also, it is not sufficient to test some candidates with SSAT$(n,m)$; all of them must be tested. The corroboration of the equivalence between the extreme SSAT$(n,m)$ and the special SSAT$(n,2^n)$ needs at least $2^{n}$ iterations to match their rows. Without executing a complete and careful checking and matching, the results of the algorithms themselves are not a corroboration that the original extreme SSAT$(n,m)$ fulfills SSAT$(n,m)(x)$ $=$ $0,$ $\forall x \in [0,2^n-1]$, when there is no solution. \end{proof} \end{proposition} \begin{table} \centering \[ \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multicolumn{2}{|l}{ $m\approx 2^{n}$} & \multicolumn{3}{|l}{Existence} & \multicolumn{3}{|l|}{Construction} \\ \hline \multicolumn{2}{|l}{SSAT$(n,m)$} & \multicolumn{3}{|l}{Test: $m-r\leq 2^{n}$} & \multicolumn{3}{|l|}{$x\in $SSAT$(n,m)$} \\ \hline rows & \vbox{\hbox{$r$ duplicate} \hbox{rows}} & min & avg & max & min & avg & max \\ \hline $m=2$ & $0$ & $1$ & $1$ & $1$ & $1$ & $2$ & $3$ \\ \hline $m<2^{n}$ & $0$ & $1$ & $1$ & $1$ & $1$ & ${m}/{2}$ & $m+1$ \\ \hline $m=2^{n}-1$ & $0$ & $1$ & $1$ & $1$ & $1$ & $2^{n-1}$ & $2^{n}$ \\ \hline \multicolumn{2}{|l}{$m=2^{n}$ (different rows)} & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\ \hline \multicolumn{8}{|l|}{$2^{n}$ (unknown rows, $2^{n}$ different rows) SSAT$(n,m)$ no solution} \\ \hline $m=2^{n}$ & $0$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ \\ \hline $m=2^{n}+1$ & $1$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ \\ \hline $m=2^{n}+r$ & $r$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ \\ \hline \multicolumn{8}{|l|}{$2^{n}$ (unknown rows, $2^{n}-1$ different rows) SSAT$(n,m)$ unique solution} \\ \hline $m=2^{n}-1+1$ & $1$ & $2$ & $2^{n-2}$ & $2^{n-1}$ & $1$ & $ 2^{n-1}$ & $2^{n-1}$ \\ \hline $m=2^{n}-1+r$ &
$r$ & $2$ & $2^{n-2}$ & $2^{n-1}$ & $1$ & $ 2^{n-1}$ & $2^{n-1}$ \\ \hline \end{tabular} \] \caption{Behavior of algorithm~\ref{alg:SATModBoard_1} for the extreme SSAT}\label{tb:algInternal} \end{table} \begin{table} \centering \[ \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multicolumn{2}{|l}{$m\gg 2^{n}$} & \multicolumn{3}{|l}{Existence} & \multicolumn{3}{|l|}{Construction} \\ \hline \multicolumn{2}{|l}{SSAT$(n,m)$} & \multicolumn{3}{|l}{ } & \multicolumn{3}{|l|}{$x\in \left[ 0,2^{n-1}\right] $} \\ \hline rows &\vbox{\hbox{$r$ duplicate} \hbox{rows}} & min & avg & max & min & avg & max \\ \hline \multicolumn{8}{|l|}{$2^{n}$ (unknown rows, $2^{n}$ different rows) SSAT$(n,m)$ no solution} \\ \hline $m=2^{n}+r$ & $r$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ & $2^{n-1}$ \\ \hline \multicolumn{8}{|l|}{$2^{n}$ (unknown rows, $2^{n}-1$ different rows) SSAT$(n,m)$ unique solution} \\ \hline $m=2^{n}-1+r$ & $r$ & $1$ & $2^{n-2}$ & $2^{n-1}$ & $1$ & $2^{n-2}$ & $2^{n-1}$ \\ \hline \end{tabular} \] \caption{Behavior of algorithm~\ref{alg:SAT_Perm} for the extreme SSAT}\label{tb:algexternal} \end{table} The tables~\ref{tb:algInternal} and~\ref{tb:algexternal} summarize the complexity of solving the extreme SSAT. For solving the extreme SSAT, the Existence column depicts that the complexity is $\mathbf{O}(1)$ for almost all cases except $m=2^n$ with unknown rows. This is because there is no property implying $\forall x \in [0,2^n-1],$ SSAT$(n,m)(x)=0$ other than verifying that all SSAT's rows are different. For this case, algorithms~\ref{alg:SATModBoard_1} and~\ref{alg:SAT_Perm} prove that there is no solution after testing all possible candidates. For more details, see the end of section~\ref{sc:exSSATCon}. This means that there is no shortcut for verifying $\mathcal{S} = \emptyset$ for a given extreme SSAT$(n,m)$.
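A minimal sketch of the outside approach of algorithm~\ref{alg:SAT_Perm}: the random permutation is replaced here by a library shuffle, and \texttt{ssat} is assumed to be a $0/1$ evaluator of the formula on integer assignations:

```python
import random

def outside_search(n, ssat):
    # Visit [0, 2**(n-1) - 1] in random order; each candidate x (leading
    # bit 0) is tested together with its n-bit complement (leading bit 1),
    # so the whole space [0, 2**n - 1] is covered in <= 2**(n-1) iterations.
    half = 2 ** (n - 1)
    mask = 2 ** n - 1
    candidates = list(range(half))
    random.shuffle(candidates)
    for x in candidates:
        if ssat(x) == 1:
            return x
        comp = x ^ mask            # the n-bit complement of x
        if ssat(comp) == 1:
            return comp
    return None                    # no satisfactory assignation exists
```

Whether there is one solution or none, the loop never exceeds $2^{n-1}$ iterations, which is the narrow behavior summarized in table~\ref{tb:algexternal}.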
\section*{Conclusions and future work} ~\label{sc:conclusions and future work} The results here do not change the SAT complexity results of the article~\cite{arXiv:Barron2015b}. It was interesting to analyze in more detail how SSAT problems and algorithms can behave quite widely. In particular, the inner or fixed point approach gains no advantage from eliminating two candidates for the extreme SSAT$(n,m)$, and it gives the wide behaviour. However, the outside or probabilistic approach behaves stably, with the upper bound $2^{n-1}.$ The outside approach and the evaluation of SSAT as a circuit, which correspond to the probabilistic type of method, allow building the stable algorithm~\ref{alg:SAT_Perm}. This algorithm is a more detailed version of the probabilistic algorithm 4 of~\cite{arXiv:Barron2015b}. Moreover, for an extreme SSAT$(n,m)$ with $m \approx 2^n$, the complexity inside (alg.~\ref{alg:SATModBoard_1}) is similar to the outside (alg.~\ref{alg:SAT_Perm}), i.e., $\mathbf{O}\left( 2^{n-1}\right).$ The main result is the impossibility of building an efficient algorithm for solving the decision SSAT, i.e., for knowing whether it has a satisfactory assignation or not. The sceptical point of view needs proof to confirm or deny an answer. The algorithms in this paper always give some kind of witness or proof. When $m<2^n,$ there is a solution because the formulas of the given SSAT$(n,m)$ do not cover the binary combinations of the search space $\Sigma^n.$ A satisfactory assignation is sufficient when SSAT has a solution. But a message when there is no solution does not substitute for the detailed corroboration that SSAT$(n,m)$ has $2^n$ different formulas, or that $\forall x \in[0,2^n-1]$, SSAT$(n,m)(x)=0$ with $m \gg 2^n$. The lack of an easy test to verify when there is no solution points out that there is no way to verify a solution in polynomial time. Extreme SSAT states that, in order to solve it, at least one review of its search space ($\Sigma^n$) is necessary.
This is done by splitting it into two spaces: $\mathbf{0}\Sigma^{n-1}$ and $\mathbf{1}\Sigma^{n-1}$ in at most $2^{n-1}$ iterations. Finally, this implies $\mathbf{O}(\text{SSAT})=\mathbf{O}(2^{n-1})$ $\preceq$ $\mathbf{O}(\text{NP-Soft})$ $\preceq$ $\mathbf{O}(\text{NP-Hard}).$ \end{document}
\begin{document} \title[Nested Canalyzing Functions And Their Average Sensitivities]{Nested Canalyzing Functions And Their Average Sensitivities} \author[Yuan Li, John O. Adeyeye, Reinhard Laubenbacher]{Yuan Li$^{1\ast}$, John O. Adeyeye$^{2\ast}$, Reinhard Laubenbacher$^{3}$} \address{{\small $^{1}$Department of Mathematics, Winston-Salem State University, NC 27110, USA}\\ {\small email: [email protected] }\\ $^{2}$Department of Mathematics, Winston-Salem State University, NC 27110, USA, {\small email: [email protected]}\\ {\small $^{3}$Virginia Bioinformatics Institute, Virginia Tech, Blacksburg, VA 24061, USA }\\ {\small email: [email protected]}} \thanks{$^{\ast}$ Supported by an award from the USA DoD $\#$ W911NF-11-10166} \keywords{Nested Canalyzing Function, Layer Number, Extended Monomial, Multinomial Coefficient, Dynamical System, Hamming Weight, Activity, Average Sensitivity.} \date{} \begin{abstract} In this paper, we obtain a complete characterization of nested canalyzing functions (NCFs) by deriving their unique algebraic normal form (polynomial form). We introduce a new concept, the LAYER NUMBER of an NCF. Based on this, we obtain explicit formulas for the following important parameters: 1) the number of all nested canalyzing functions, 2) the number of all NCFs with a given LAYER NUMBER, 3) the Hamming weight of any NCF, 4) the activity number of any variable of any NCF, 5) the average sensitivity of any NCF. Based on these formulas, we show that the activity number is greater for variables in outer layers and equal for variables in the same layer. We show that the average sensitivity attains its minimal value when the NCF has only one layer. We also prove that the average sensitivity of any NCF (no matter how many variables it has) is between $0$ and $2$. Hence, theoretically, we show why an NCF is stable, since a random Boolean function has average sensitivity $\frac{n}{2}$.
Finally, we conjecture that an NCF attains the maximal average sensitivity if it has the maximal LAYER NUMBER $n-1$. Hence, we conjecture that the uniform upper bound for the average sensitivity of any NCF can be reduced to $\frac{4}{3}$, which is tight. \end{abstract} \maketitle \section{Introduction} \label{sec-intro} Canalyzing functions were introduced by Kauffman \cite{Kau1} as appropriate rules in Boolean network models of gene regulatory networks. Canalyzing functions are known to have other important applications in physics, engineering and biology. In \cite{Mor} it was shown that the dynamics of a Boolean network which operates according to canalyzing rules are robust with regard to small perturbations. In \cite{Win2}, W. Just, I. Shmulevich and J. Konvalina derived an exact formula for the number of canalyzing functions. In \cite{Yua2}, the definition of canalyzing functions was generalized to arbitrary finite fields $\mathbb{F}_{q}$, where $q$ is a power of a prime. Both the exact formulas and the asymptotics of the number of generalized canalyzing functions were obtained. Nested canalyzing functions (NCFs) were introduced recently in \cite{Kau2}. One important characteristic of (nested) canalyzing functions is that they exhibit a stabilizing effect on the dynamics of a system. That is, small perturbations of an initial state should not grow in time and must eventually end up in the same attractor as the initial state. The stability is typically measured using so-called Derrida plots, which monitor the Hamming distance between a random initial state and its perturbed state as both evolve over time. If the Hamming distance decreases over time, the system is considered stable. The slope of the Derrida curve is used as a numerical measure of stability. Roughly speaking, the phase space of a stable system has few components and the limit cycle of each component is short.
In \cite{Kau3}, the authors studied the dynamics of nested canalyzing Boolean networks over a variety of dependency graphs. That is, for a given random graph on $n$ nodes, where the in-degree of each node is chosen at random between $0$ and $k$, where $k\leq n$, a nested canalyzing function is assigned to each node in terms of the in-degree variables of that node. The dynamics of these networks were then analyzed and the stability measured using Derrida plots. It is shown that nested canalyzing networks are remarkably stable regardless of the in-degree distribution and that the stability increases as the average number of inputs of each node increases. An extensive analysis of available biological data on gene regulation (about 150 genes) showed that 139 of them are regulated by canalyzing functions \cite{Har}. In \cite{Kau3, Nik}, it was shown that 133 of the 139 are in fact nested canalyzing. Most published molecular networks are given in the form of a wiring diagram, or dependency graph, constructed from experiments and prior published knowledge. However, for most of the molecular species in the network, little knowledge, if any, could be deduced about their regulatory mechanisms, for instance in the gene transcription networks in yeast \cite{Herr} and E. coli \cite{Bar}. Each one of these networks contains more than 1000 genes. Kauffman et al. \cite{Kau2} investigated the effect of the topology of a sub-network of the yeast transcriptional network where many of the transcriptional rules are not known. They generated ensembles of different models where all models have the same dependency graph. Their heuristic results imply that the dynamics of those models which used only nested canalyzing functions were far more stable than those of the randomly generated models. Since it is already established that the yeast transcriptional network is stable, this suggests that the unknown interaction rules are very likely nested canalyzing functions.
In a recent article \cite{Bal}, the whole transcriptional network of yeast, which has 3459 genes, as well as the transcriptional networks of E. coli (1481 genes) and B. subtilis (840 genes), has been analyzed in a similar fashion, with similar findings. These heuristic and statistical results show that the class of nested canalyzing functions is very important in systems biology. It is shown in \cite{Jar} that this class is identical to the class of so-called unate cascade Boolean functions, which has been studied extensively in engineering and computer science. It was shown in \cite{But} that this class produces the binary decision diagrams with the shortest average path length. Thus, a more detailed mathematical study of this class of functions has applications to problems in engineering as well. In \cite{Abd2}, the authors provided a description of nested canalyzing functions. As a corollary of the equivalence, a formula in the literature for the number of unate cascade functions also provides such a formula for the number of nested canalyzing functions. Recently, in \cite{Mur2}, those results were generalized to the multi-state nested canalyzing functions on finite fields $\mathbb{F}_{p}$, where $p$ is a prime. They obtained the formula for the number of the generalized NCFs as a recursive relation. In \cite{Coo}, Cook et al. introduced the notion of sensitivity as a combinatorial measure for Boolean functions providing lower bounds on the time needed by a CREW PRAM (concurrent-read, exclusive-write parallel random access machine). It was extended by Nisan \cite{Nis} to block sensitivity. It is still open whether sensitivity and block sensitivity are polynomially related (they are equal for monotone Boolean functions). Although the definition is straightforward, the sensitivity is understood only for a few classes of functions. For monotone functions, Ilya Shmulevich \cite{Shm2} derived asymptotic formulas for typical monotone Boolean functions.
Recently, Shengyu Zhang \cite{Zha} found a formula for the average sensitivity of any monotone Boolean function; hence, a tight bound is derived. In \cite{Shm}, Ilya Shmulevich and Stuart A. Kauffman considered the activities of the variables of Boolean functions with only one canalyzing variable. They obtained the average sensitivity of this kind of Boolean function. In this paper, we revisit the NCFs, obtaining a more explicit characterization of the Boolean NCFs than the one in \cite{Abd2}. We introduce a new concept, the $LAYER$ $NUMBER$, in order to classify all the variables. Hence, the dominance of the variables can be quantified. As a consequence, we obtain an explicit formula for the number of NCFs. Thus, a nonlinear recursive relation (the original formula) is solved, which may be of independent mathematical interest. Using our unique algebraic normal form of an NCF, we get, for any NCF, the formula for the activity of its variables. We show that the variables in a more dominant layer have a greater activity number, and variables in the same layer have the same activity numbers. Consequently, we obtain the formula for any NCF's average sensitivity; its lower bound is $\frac{n}{2^{n-1}}$ and its upper bound is $2$ (no matter what $n$ is), which is much less than $\frac{n}{2}$, the average sensitivity of a random Boolean function. So, theoretically, we prove why an NCF is ``stable''. We also find the formula for the Hamming weight of each NCF. Finally, we conjecture that an NCF attains the maximal average sensitivity if it has the maximal LAYER NUMBER $n-1$. Hence, we guess the tight upper bound is $\frac{4}{3}$. In the next section, we introduce some definitions and notations. \section{Preliminaries} \label{2} In this section we introduce the definitions and notations. Let $\mathbb{F}=\mathbb{F}_{2}$ be the Galois field with $2$ elements.
If $f$ is an $n$-variable function from $\mathbb{F}^{n}$ to $\mathbb{F}$, it is well known \cite{Lid} that $f$ can be expressed as a polynomial, called the algebraic normal form (ANF): \[ f(x_{1},x_{2},\ldots,x_{n})=\bigoplus_{0\leq k_i\leq 1,i=1,\ldots,n}a_{k_{1}k_{2}\ldots k_{n}}{x_{1}}^{k_{1}}{x_{2}}^{k_{2} }\cdots{x_{n}}^{k_{n}} \] where each coefficient $a_{k_{1}k_{2}\ldots k_{n}}\in\mathbb{F}$ is a constant. The number $k_{1}+k_{2}+\cdots+k_{n}$ is the multivariate degree of the term $a_{k_{1}k_{2}\ldots k_{n}}{x_{1}}^{k_{1}}{x_{2}}^{k_{2}}\cdots {x_{n}}^{k_{n}}$ with nonzero coefficient $a_{k_{1}k_{2}\ldots k_{n}}$. The greatest degree of all the terms of $f$ is called the algebraic degree, denoted by $deg(f)$. \begin{defn} \label{def2.1} $f(x_{1},x_{2},\ldots, x_{n})$ is essential in variable $x_{i}$ if there exist $r, s\in\mathbb{F}$ and $x_{1}^{*},\ldots,x_{i-1}^{*}$ $,x_{i+1}^{*},\ldots, x_{n}^{*}$ such that $f(x_{1}^{*},\ldots,x_{i-1} ^{*},r,x_{i+1}^{*},\ldots,x_{n}^{*})\neq f(x_{1}^{*},\ldots,x_{i-1} ^{*},s,x_{i+1}^{*},\ldots,x_{n}^{*})$. \end{defn} \begin{defn} \label{def2.2} A function $f(x_{1},x_{2},\ldots,x_{n})$ is $<i:a:b>$ canalyzing if $f(x_{1},\ldots,x_{i-1},a,x_{i+1},\ldots,x_{n})=b$, for all $x_{j}$, $j\neq i$, where $i\in\{1,\dots,n\}$, $a$,$b\in\mathbb{F}$. \end{defn} The definition is reminiscent of the concept of “canalisation” introduced by the geneticist C. H. Waddington \cite{Wad} to represent the ability of a genotype to produce the same phenotype regardless of environmental variability. \begin{defn} \label{def2.3} Let $f$ be a Boolean function in $n$ variables. Let $\sigma$ be a permutation on $\{1,2,\ldots,n\}$.
The function $f$ is a nested canalyzing function (NCF) in the variable order $x_{\sigma(1)},\ldots,x_{\sigma(n)}$ with canalyzing input values $a_{1},\ldots,a_{n}$ and canalyzed values $b_{1},\ldots,b_{n}$, if it can be represented in the form $f(x_{1},\ldots,x_{n})=\left\{ \begin{array} [c]{ll} b_{1} & x_{\sigma(1)}=a_{1},\\ b_{2} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}=a_{2},\\ b_{3} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}= \overline{ a_{2}}, x_{\sigma(3)}=a_{3},\\ \ldots & \\ b_{n} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}= \overline{ a_{2} },\ldots,x_{\sigma(n-1)}= \overline{ a_{n-1}}, x_{\sigma(n)}=a_{n},\\ \overline{b_{n}} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}= \overline{ a_{2}},\ldots,x_{\sigma(n-1)}= \overline{ a_{n-1}}, x_{\sigma(n)}=\overline{ a_{n}}, \end{array} \right. $ where $\overline{a}=a\oplus 1$. The function $f$ is nested canalyzing if $f$ is nested canalyzing in the variable order $x_{\sigma(1)},\ldots,x_{\sigma(n)}$ for some permutation $\sigma$. \end{defn} Let $\alpha=(a_{1},a_{2},\ldots,a_{n})$ and $\beta=(b_{1},b_{2},\ldots,b_{n} )$. We say $f$ is a $\{\sigma:\alpha:\beta\}$ NCF if it is an NCF in the variable order $x_{\sigma(1)},\ldots,x_{\sigma(n)}$ with canalyzing input values $\alpha=(a_{1},\ldots,a_{n})$ and canalyzed values $\beta=(b_{1},\ldots ,b_{n})$. Given a vector $\alpha=(a_{1},a_{2},\ldots,a_{n})$, we define $\alpha ^{i_{1},\ldots,i_{k}}=(a_{1},\ldots,\overline{a_{i_{1}}},\ldots,\overline {a_{i_{k}}},\ldots,a_{n})$. From the above definition, we immediately have the following: \begin{prop} $f$ is a $\{\sigma:\alpha:\beta\}$ NCF $\Longleftrightarrow$ $f$ is a $\{\sigma:\alpha^{n}:\beta^{n}\}$ NCF. \end{prop} \begin{example} \label{exa2.1} $f(x_{1},x_{2},x_{3})=x_{1}(x_{2}\oplus 1)x_{3}\oplus 1$ is a $\{(1,2,3):(0,1,0):(1,1,1)\}$ NCF. Actually, one can check that this function is nested canalyzing in any variable order.
\end{example} \begin{example} \label{exa2.2} $f(x_{1},x_{2},x_{3})=(x_{1}\oplus 1)(x_{2}(x_{3}\oplus 1)\oplus 1)\oplus 1$. This function is $\{(1,2,3):(1,0,1):(1,0,0)\}$ NCF. It is also $\{(1,3,2):(1,1,1):(1,0,1)\}$ NCF. One can check that this function is nested canalyzing in only two variable orders, $(x_{1},x_{2},x_{3})$ and $(x_{1},x_{3},x_{2})$. \end{example} From the above definitions, we see that if a function is an NCF, then all $n$ variables must be essential. However, a constant function $b$ can be $<i:a:b>$ canalyzing for any $i$ and $a$. \section{A Complete Characterization of NCFs} \label{3} In \cite{Lor}, the author introduced partially nested canalyzing functions (PNCFs), a generalization of NCFs, and the nested canalyzing depth, which measures the extent to which a function retains a nested canalyzing structure. In \cite{Win}, the author introduced the extended monomial system. As we will see, in a nested canalyzing function some variables are more dominant than others. We will classify all the variables of an NCF into different levels according to the extent of their dominance, and thus give a more detailed description of NCFs. We obtain this clearer description by introducing a new concept: the LAYER NUMBER. As a by-product, we also obtain some enumeration results; eventually, we will find an explicit formula for the number of all NCFs. First, we have \begin{defn} \label{def3.1} \cite{Win} $M(x_{1},\ldots,x_{n})$ is an extended monomial of the essential variables $x_{1},\ldots,x_{n}$ if $M(x_{1},\ldots,x_{n})=(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})\cdots(x_{n}\oplus a_{n})$, where $a_{i}\in\mathbb{F}_{2}$. \end{defn} Basically, we will rewrite Theorem 3.1 in \cite{Abd2} with more information. \begin{lemma} \label{lm3.1} $f(x_{1},x_{2},\ldots,x_{n})$ is $<i:a:b>$ canalyzing iff $f(X)=f(x_{1},x_{2},\ldots,x_{n})=(x_{i}\oplus a)Q(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n})\oplus b$.
\end{lemma} \begin{proof} From the algebraic normal form of $f$, we rewrite it as $f=x_{i}g_{1}(X_{i})\oplus g_{0}(X_{i})$, where $X_{i}=(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n})$. Hence, $f(X)=f(x_{1},x_{2},\ldots,x_{n})=(x_{i}\oplus a)g_{1}(X_{i})\oplus ag_{1}(X_{i})\oplus g_{0}(X_{i})$. Let $g_{1}(X_{i})=Q(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n})$ and $r(X_{i})=ag_{1}(X_{i})\oplus g_{0}(X_{i})$. Then $f(X)=f(x_{1},\ldots,x_{n})=(x_{i}\oplus a)Q(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n})\oplus r(X_{i})$. Since $f(X)$ is $<i:a:b>$ canalyzing, we get $f(x_{1},\ldots,x_{i-1},a,x_{i+1},\ldots,x_{n})=b$ for any $x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n}$, i.e., $r(X_{i})=b$ for any $X_{i}$, so $r(X_{i})$ must be the constant $b$. This proves necessity; sufficiency is obvious. \end{proof} \begin{remark}\label{remark1} \label{re1} 1) In contrast to the first part of Theorem 3.1 in \cite{Abd2}, we make explicit that $x_{i}$ is not essential in $Q$. 2) In \cite{Yua2}, there is a general version of this lemma over arbitrary finite fields. 3) In the above lemma, if $f$ is constant, then $Q=0$. \end{remark} From Definition \ref{def2.3}, we have the following. \begin{prop} \label{prop3.1} Suppose $f(x_{1},\ldots,x_{n})$ is $\{\sigma:\alpha:\beta\}$ NCF, i.e., NCF in the variable order $x_{\sigma(1)},\ldots,x_{\sigma(n)}$ with canalyzing input values $\alpha=(a_{1},\ldots,a_{n})$ and canalyzed values $\beta=(b_{1},\ldots,b_{n})$. Then, for $1\leq k\leq n-1$, setting $x_{\sigma(1)}=\overline{a_{1}},\ldots,x_{\sigma(k)}=\overline{a_{k}}$, the function $f(x_{1},\ldots,\overset{\sigma(1)}{\overline{a_{1}}},\ldots,\overset{\sigma(k)}{\overline{a_{k}}},\ldots,x_{n})$ is $\{\sigma^{*}:\alpha^{*}:\beta^{*}\}$ NCF in the remaining variables, where $\sigma^{*}$ is the order $x_{\sigma(k+1)},\ldots,x_{\sigma(n)}$, $\alpha^{*}=(a_{k+1},\ldots,a_{n})$ and $\beta^{*}=(b_{k+1},\ldots,b_{n})$.
\end{prop} \begin{defn} \label{def3.2} Let $f(x_{1},\ldots,x_{n})$ be an NCF. We call the variable $x_{i}$ a most dominant variable of $f$ if there is an order $(x_{i},\ldots)$ such that $f$ is NCF in this variable order (in other words, if $f$ is $<i:a:b>$ canalyzing for some $a$ and $b$). \end{defn} In Example \ref{exa2.1} all three variables are most dominant; in Example \ref{exa2.2} only $x_{1}$ is most dominant. We have \begin{theorem}\label{th3.1} Given an NCF $f(x_{1},\ldots,x_{n})$, all the variables are most dominant iff $f=M(x_{1},\ldots,x_{n})\oplus b$, where $M$ is an extended monomial, i.e., $M=(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})\cdots(x_{n}\oplus a_{n})$. \end{theorem} \begin{proof} Since $x_{1}$ is most dominant, by Lemma \ref{lm3.1} there exist $a_{1}$ and $b$ such that $f(x_{1},x_{2},\ldots,x_{n})=(x_{1}\oplus a_{1})Q(x_{2},\ldots,x_{n})\oplus b$, i.e., $(x_{1}\oplus a_{1})|(f\oplus b)$. Since $x_{2}$ is also most dominant, there exist $a_{2}$ and $b^{\prime}$ such that $f(x_{1},a_{2},x_{3},\ldots,x_{n})=b^{\prime}$ for any $x_{1},x_{3},\ldots,x_{n}$. In particular, setting $x_{1}=a_{1}$ gives $f(a_{1},a_{2},x_{3},\ldots,x_{n})=b=b^{\prime}$. Hence we also get $(x_{2}\oplus a_{2})|(f\oplus b)=(x_{1}\oplus a_{1})Q(x_{2},\ldots,x_{n})$. Since $x_{1}\oplus a_{1}$ and $x_{2}\oplus a_{2}$ are coprime, we get $(x_{2}\oplus a_{2})|Q(x_{2},\ldots,x_{n})$, hence $f(x_{1},x_{2},\ldots,x_{n})=(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})Q^{\prime}(x_{3},\ldots,x_{n})\oplus b$. By induction, necessity is proved. Sufficiency is evident. \end{proof} We are ready to prove the following main result of this section. \begin{theorem}\label{th2} \label{th3.2} Given $n\geq2$, $f(x_{1},x_{2},\ldots,x_{n})$ is nested canalyzing iff it can be uniquely written as \begin{equation}\label{eq3.1} f(x_{1},x_{2},\ldots,x_{n})=M_{1}(M_{2}(\ldots(M_{r-1} (M_{r}\oplus 1)\oplus 1)\ldots)\oplus 1)\oplus b.
\end{equation} Here each $M_{i}$ is an extended monomial of a set of disjoint variables. More precisely, $M_{i}=\prod_{j=1}^{k_{i}}(x_{i_{j}}\oplus a_{i_{j}})$, $i=1,\ldots,r$, $k_{i}\geq1$ for $i=1,\ldots,r-1$, $k_{r}\geq2$, $k_{1}+\cdots+k_{r}=n$, $a_{i_{j}}\in\mathbb{F}_{2}$, $\{i_{j}|j=1,\ldots,k_{i}, i=1,\ldots,r\}=\{1,\ldots,n\}$. \end{theorem} \begin{proof} We use induction on $n$. When $n=2$, there are 16 Boolean functions, 8 of which are NCFs, namely $(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})\oplus c=M_{1}\oplus b$, where $b=c$ and $M_{1}=(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})$. If $(x_{1}\oplus a_{1})(x_{2}\oplus a_{2})\oplus c=(x_{1}\oplus a_{1}^{\prime})(x_{2}\oplus a_{2}^{\prime})\oplus c^{\prime}$, then by equating coefficients we immediately obtain $a_{1}=a_{1}^{\prime}$, $a_{2}=a_{2}^{\prime}$ and $c=c^{\prime}$, so uniqueness holds. We have proved that equation \ref{eq3.1} is true for $n=2$, with $r=1$. Assume that equation \ref{eq3.1} is true for any nested canalyzing function with at most $n-1$ essential variables, and consider an NCF $f(x_{1},\ldots,x_{n})$. Suppose $x_{\sigma(1)},\ldots,x_{\sigma(k_{1})}$ are all the most dominant canalyzing variables of $f$, $1\leq k_{1}\leq n$. Case 1: $k_{1}=n$. By Theorem \ref{th3.1}, the conclusion is true with $r=1$. Case 2: $k_{1}<n$. By the same argument as in Theorem \ref{th3.1}, we get $f=M_{1}g\oplus b$, where $M_{1}=(x_{\sigma(1)}\oplus a_{\sigma(1)})\cdots(x_{\sigma(k_{1})}\oplus a_{\sigma(k_{1})})$. Setting $x_{\sigma(1)}=\overline{a_{\sigma(1)}},\ldots,x_{\sigma(k_{1})}=\overline{a_{\sigma(k_{1})}}$ in $f$, the function $g\oplus b$, and hence $g$, of the remaining variables is also nested canalyzing by Proposition \ref{prop3.1}. Since $g$ has $n-k_{1}\leq n-1$ variables, by the induction hypothesis we get $g=M_{2}(M_{3}(\ldots(M_{r-1}(M_{r}\oplus 1)\oplus 1)\ldots)\oplus 1)\oplus b_{1}$, where $b_{1}$ must be $1$.
Otherwise, all the variables in $M_{2}$ would also be most dominant variables of $f$. Hence, we are done. \end{proof} Because each NCF can be uniquely written as \ref{eq3.1} and the number $r$ is uniquely determined by $f$, we have \begin{defn} \label{def3.3} For an NCF written as equation \ref{eq3.1}, the number $r$ is called its LAYER NUMBER. Essential variables of $M_{1}$ are called the most dominant variables (canalyzing variables); they belong to the first layer of this NCF. Essential variables of $M_{2}$ are called the second most dominant variables and belong to the second layer of this NCF, and so on. \end{defn} The function in Example \ref{exa2.1} has LAYER NUMBER 1 and the function in Example \ref{exa2.2} has LAYER NUMBER 2. \begin{remark}\label{remark2} In Theorem \ref{th2}: 1) $k_r\geq 2$; it is impossible that $k_r=1$. Otherwise $M_r\oplus 1$ would itself be an extended monomial and could be absorbed into $M_{r-1}$, which means the LAYER NUMBER would be $r-1$. 2) If the variable $x_i$ is in the first layer and $x_i\oplus a_i$ is a factor of $M_{1}$, then this NCF is $<i:a_i:b>$ canalyzing; we simply say $x_i$ is a canalyzing variable of this NCF. \end{remark} Let $\mathbb{NCF}(n,r)$ stand for the set of all $n$-variable nested canalyzing functions with LAYER NUMBER $r$ and $\mathbb{NCF}(n)$ for the set of all $n$-variable nested canalyzing functions.
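Definition \ref{def2.3} lends itself to direct machine checking. The following Python sketch (all function names are ours) tests whether a truth table is nested canalyzing by trying every variable order, and counts NCFs by exhaustive search over all Boolean functions; it reproduces the counts for $n=2$ and $n=3$ quoted below.

```python
from itertools import permutations, product

def is_ncf_in_order(tt, order):
    # tt: truth table as a dict mapping 0/1-tuples to 0/1.
    # Implements Definition 2.3 recursively: the first variable in the
    # order must canalyze (some input value a forces a constant output),
    # and the restriction to x = 1-a must be nested canalyzing in the
    # remaining order.
    i = order[0]
    if len(order) == 1:
        # Last variable: the output must be b_n on x_i = a_n and its
        # complement otherwise, i.e. f here depends exactly on x_i.
        v0 = {y for x, y in tt.items() if x[i] == 0}
        v1 = {y for x, y in tt.items() if x[i] == 1}
        return len(v0) == 1 and len(v1) == 1 and v0 != v1
    for a in (0, 1):
        if len({y for x, y in tt.items() if x[i] == a}) == 1:
            rest = {x: y for x, y in tt.items() if x[i] == 1 - a}
            if is_ncf_in_order(rest, order[1:]):
                return True
    return False

def is_ncf(tt, n):
    return any(is_ncf_in_order(tt, order) for order in permutations(range(n)))

def count_ncfs(n):
    points = list(product((0, 1), repeat=n))
    return sum(is_ncf(dict(zip(points, bits)), n)
               for bits in product((0, 1), repeat=2 ** n))
```

For instance, `count_ncfs(2)` and `count_ncfs(3)` return the cardinalities of $\mathbb{NCF}(2)$ and $\mathbb{NCF}(3)$ by brute force; exhaustive search becomes infeasible quickly, which is one motivation for the closed formula below.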
We have \begin{cor} \label{co3.1} Given $n\geq2$, \[ |\mathbb{NCF}(n,r)|=2^{n+1}\sum_{\substack{k_{1}+\ldots+k_{r}=n\\k_{i} \geq1,i=1,\ldots,r-1, k_{r}\geq2}}\binom{n}{k_{1},\ldots,k_{r-1}} \] and \[ |\mathbb{NCF}(n)|=2^{n+1}\sum_{r=1}^{n-1}\sum_{\substack{k_{1} +\ldots+k_{r}=n\\k_{i}\geq1,i=1,\ldots,r-1, k_{r}\geq2}}\binom{n}{k_{1} ,\ldots,k_{r-1}}, \] where the multinomial coefficient $\binom{n}{k_{1},\ldots,k_{r-1}}=\frac {n!}{k_{1}!\ldots k_{r}!}$. \end{cor} \begin{proof} From Equation \ref{eq3.1}, for each choice of $k_{1},\ldots,k_{r}$ with $k_{1}+\ldots+k_{r}=n$, $k_{i}\geq1$, $i=1,\ldots,r-1$ and $k_{r}\geq2$, there are $2^{k_{1}}\binom{n}{k_{1}}$ ways to form $M_{1}$, $2^{k_{2}}\binom{n-k_{1}}{k_{2}}$ ways to form $M_{2}$, $\ldots$, $2^{k_{r}}\binom{n-k_{1}-\ldots-k_{r-1}}{k_{r}}$ ways to form $M_{r}$, and $b$ can be chosen in two ways. Hence, \[ |\mathbb{NCF}(n,r)|=2\sum_{\substack{k_{1}+\ldots+k_{r}=n\\k_{i} \geq1,i=1,\ldots,r-1, k_{r}\geq2}}2^{k_{1}+\ldots+k_{r}}\binom{n}{k_{1}} \binom{n-k_{1}}{k_{2}}\ldots\binom{n-k_{1}-\ldots-k_{r-1}}{k_{r}} \] \[ =2^{n+1}\sum_{\substack{k_{1}+\ldots+k_{r}=n\\k_{i}\geq1,i=1,\ldots,r-1, k_{r}\geq2}}\frac{n!}{(k_{1})!(n-k_{1})!}\frac{(n-k_{1})!}{(k_{2} )!(n-k_{1}-k_{2})!}\ldots\frac{(n-k_{1}-\ldots-k_{r-1})!}{k_{r}!(n-k_{1} -\ldots-k_{r})!} \] \[ =2^{n+1}\sum_{\substack{k_{1}+\ldots+k_{r}=n\\k_{i}\geq1,i=1,\ldots,r-1, k_{r}\geq2}}\frac{n!}{k_{1}!k_{2}!\ldots k_{r}!}=2^{n+1}\sum_{\substack{k_{1} +\ldots+k_{r}=n\\k_{i}\geq1,i=1,\ldots,r-1, k_{r}\geq2}}\binom{n}{k_{1} ,\ldots,k_{r-1}}. \] Since $\mathbb{NCF}(n)=\bigcup_{r=1}^{n-1}\mathbb{NCF}(n,r)$ and $\mathbb{NCF}(n,i)\cap\mathbb{NCF}(n,j)=\emptyset$ when $i\neq j$, we get the formula for $|\mathbb{NCF}(n)|$. \end{proof} One can check that $|\mathbb{NCF}(2)|=8$, $|\mathbb{NCF}(3)|=64$, $|\mathbb{NCF}(4)|=736$, $|\mathbb{NCF}(5)|=10624$, $\ldots$; these values are consistent with those in \cite{Ben, Sas}.
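The closed formula for $|\mathbb{NCF}(n)|$ is easy to evaluate numerically. The following small sketch (function name is ours) sums the multinomial coefficients over all compositions $k_1+\cdots+k_r=n$ with $k_i\geq 1$ for $i<r$ and $k_r\geq 2$:

```python
from itertools import product
from math import factorial

def ncf_count(n):
    # |NCF(n)| = 2^(n+1) * sum over r = 1..n-1 and over compositions
    # k_1 + ... + k_r = n with k_i >= 1 (i < r) and k_r >= 2
    # of the multinomial n! / (k_1! ... k_r!).
    total = 0
    for r in range(1, n):  # the LAYER NUMBER runs from 1 to n-1
        for ks in product(range(1, n + 1), repeat=r):
            if sum(ks) == n and ks[-1] >= 2:
                m = factorial(n)
                for k in ks:
                    m //= factorial(k)
                total += m
    return 2 ** (n + 1) * total
```

For $n=2,3,4,5$ this returns $8$, $64$, $736$ and $10624$, matching the values quoted above.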
By equating our formula to the recursive relation in \cite{Ben, Sas}, we have the following. \begin{cor} \label{co3.2} The solution of the nonlinear recursive sequence \[ a_{2}=8, \quad a_{n}=\sum_{r=2}^{n-1}\binom{n}{r-1}2^{r-1}a_{n-r+1}+2^{n+1}, \quad n\geq3 \] is \[ a_{n}=2^{n+1}\sum_{r=1}^{n-1}\sum_{\substack{k_{1}+\ldots +k_{r}=n\\k_{i}\geq1,i=1,\ldots,r-1, k_{r}\geq2}}\binom{n}{k_{1} ,\ldots,k_{r-1}}. \] \end{cor} \section{Activity, Sensitivity and Hamming Weight} A Boolean function is balanced if exactly half of its values are zero; equivalently, the Hamming weight of such an $n$-variable Boolean function is $2^{n-1}$. There are $\binom{2^n}{2^{n-1}}$ balanced functions. It is easy to show that a Boolean function with canalyzing variables is not balanced, i.e., it is biased; in fact, very biased. For example, the two constant functions are trivially canalyzing, and they are the most biased. Extended monomial functions are the second most biased, since each of them takes a nonzero value at exactly one point. But a biased function may have no canalyzing variables: for example, $f(x_1,x_2,x_3)=x_1x_2x_3\oplus x_1x_2\oplus x_1x_3\oplus x_2x_3$ is biased but has no canalyzing variables. Among the variables of a Boolean function, some have greater influence over the output than others. To formalize this, a concept called \emph{activity} was introduced. Let $\frac{\partial f(x_1,\ldots,x_n)}{\partial x_i}=f(x_1,\ldots,x_i\oplus 1,\ldots,x_n)\oplus f(x_1,\ldots,x_i,\ldots,x_n)$.
The \emph{activity} of the variable $x_i$ of $f$ is defined as \begin{equation}\label{act1} \alpha_i^f=\frac{1}{2^n}\sum_{(x_1,\ldots,x_n)\in \mathbb{F}_2^n}\frac{\partial f(x_1,\ldots,x_n)}{\partial x_i}. \end{equation} Note that the above definition can also be written as \begin{equation}\label{act2} \alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}(f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\oplus f(x_1,\ldots,\overset{i}{1},\ldots,x_n)). \end{equation} The activity of any variable of a constant function is 0. For the affine function $f(x_1,\ldots,x_n)=x_1\oplus \ldots\oplus x_n\oplus b$, $\alpha_i^f=1$ for any $i$. Clearly, for any $f$ and $i$ we have $0\leq \alpha_i^f\leq 1$. Another important quantity is the sensitivity of a Boolean function, which measures how sensitive the output of the function is to changes of the input (this concept was introduced in \cite{Coo}). The sensitivity $s^f(x_1,\ldots,x_n)$ of $f$ on the vector $(x_1,\ldots,x_n)$ is defined as the number of Hamming neighbors of $(x_1,\ldots,x_n)$ on which the function value differs from $f(x_1,\ldots,x_n)$. That is, \begin{equation*} s^f(x_1,\ldots,x_n)=|\{i|f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\neq f(x_1,\ldots,\overset{i}{1},\ldots,x_n), i=1,\ldots,n \}|. \end{equation*} Obviously, $s^f(x_1,\ldots,x_n)=\sum_{i=1}^n\frac{\partial f(x_1,\ldots,x_n)}{\partial x_i}$. The average sensitivity of the function $f$ is defined as \begin{equation*} s^f=E[s^f(x_1,\ldots,x_n)]=\frac{1}{2^n}\sum_{(x_1,\ldots,x_n)\in \mathbb{F}_2^n}s^f(x_1,\ldots,x_n)=\sum_{i=1}^n\alpha_i^f. \end{equation*} It is clear that $0\leq s^f\leq n$. The average sensitivity is one of the most studied concepts in the analysis of Boolean functions and has recently received a lot of attention; see \cite{Ama, Ber, Ber2, Bop, Che, Chr, Kel, Liu, Li, Qia, Shm2, Shm, Shp, Sch, Vir}. Bernasconi \cite{Ber} showed that a random Boolean function has average sensitivity $\frac{n}{2}$.
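Both the activity and the average sensitivity just defined translate directly into a brute-force computation over $\mathbb{F}_2^n$. The following sketch (function names are ours) computes them for the parity function, for which every activity equals $1$ as noted above, and for the NCF of Example \ref{exa2.1}, for which each activity turns out to be $1/4$ and the average sensitivity $3/4$:

```python
from itertools import product

def activity(f, n, i):
    # alpha_i^f = 2^(-n) * sum over F_2^n of
    # f(..., x_i, ...) xor f(..., x_i xor 1, ...).
    total = 0
    for x in product((0, 1), repeat=n):
        y = list(x)
        y[i] ^= 1
        total += f(*x) ^ f(*y)
    return total / 2 ** n

def avg_sensitivity(f, n):
    # s^f is the sum of the activities of all n variables.
    return sum(activity(f, n, i) for i in range(n))

parity = lambda *x: sum(x) % 2                   # x_1 xor ... xor x_n
g = lambda x1, x2, x3: (x1 & (x2 ^ 1) & x3) ^ 1  # NCF of Example 2.1
```

The values obtained for $g$ are consistent with the closed formulas derived below ($A_1^f=1/2^{n-1}$ per variable when $r=1$).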
That is, the mean of the average sensitivities over all $n$-variable Boolean functions is $\frac{n}{2}$. In \cite{Shm}, Ilya Shmulevich and Stuart A. Kauffman calculated the activities of all the variables of a Boolean function with exactly one canalyzing variable and unbiased inputs for the other variables; summing the activities, they also obtained the average sensitivity of such a function. In the following, using Equation \ref{eq3.1}, we will obtain formulas for the Hamming weight of any NCF, for the activities of all its variables, and for its average sensitivity (which is bounded by a constant). First, we have \begin{lemma}\label{lm4.1} $(x_1\oplus a_1)\ldots (x_k\oplus a_k)=$ $\left\{ \begin{array}{ll} 1, & (x_1,\ldots,x_k)=(\overline{a_1},\ldots,\overline{a_k})\\ 0, & \mbox{otherwise,} \end{array}\right.$ i.e., exactly one value is $1$ and all the other $2^k-1$ values are $0$. \end{lemma} \begin{theorem}\label{th4.1} Given $n\geq 2$, let $f_1=M_1$ and $f_r=M_{1}(M_{2}(\ldots(M_{r-1} (M_{r}\oplus 1)\oplus 1)\ldots)\oplus 1)$ for $r\geq 2$, where each $M_i$ is as in Theorem \ref{th3.2}. Then the Hamming weight of $f_r$ is \begin{equation}\label{eq4.3} W(f_r)=\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i} \end{equation} and the Hamming weight of $f_r\oplus 1$ is \begin{equation}\label{eq4.4} W(f_r\oplus 1)=\sum_{j=0}^r(-1)^{j}2^{n-\sum_{i=1}^jk_i}, \end{equation} where $\sum_{i=1}^0k_i$ is interpreted as $0$. \end{theorem} \begin{proof} First, let us consider the Hamming weight of $f_r$. When $r=1$, the result is true by Lemma \ref{lm4.1}. When $r>1$, we consider two cases. Case A: $r$ is odd, $r=2t+1$. All the vectors making $f=1$ are divided into the following disjoint groups.
Group $1$: $M_1=1$, $M_2=0$; Group $2$: $M_1=1$, $M_2=1$, $M_3=1$, $M_4=0$; $\ldots$ Group $j$: $M_1=1$, $M_2=1$, $\ldots$, $M_{2j-1}=1$, $M_{2j}=0$; $\ldots$ Group $t$: $M_1=1$, $M_2=1$, $\ldots$, $M_{2t-1}=1$, $M_{2t}=0$; Group $t+1$: $M_1=1$, $M_2=1$, $\ldots$, $M_{2t}=1$, $M_{2t+1}=M_r=1$. In Group $1$, the number of vectors is $(2^{k_2}-1)2^{n-k_1-k_2}=2^{n-k_1}-2^{n-k_1-k_2}$. In Group $2$, the number of vectors is $(2^{k_4}-1)2^{n-k_1-k_2-k_3-k_4}=2^{n-k_1-k_2-k_3}-2^{n-k_1-k_2-k_3-k_4}$. \ldots In Group $t$, the number of vectors is $(2^{k_{2t}}-1)2^{n-k_1-\ldots-k_{2t}}=2^{n-k_1-\ldots-k_{2t-1}}-2^{n-k_1-\ldots -k_{2t}}$. In Group $t+1$, the number of vectors is $2^{n-k_1-\ldots -k_r}=1$. Adding all of them, we get Equation \ref{eq4.3}. Case B: $r$ is even, $r=2t$. All the vectors making $f=1$ are divided into the following disjoint groups. Group $1$: $M_1=1$, $M_2=0$; Group $2$: $M_1=1$, $M_2=1$, $M_3=1$, $M_4=0$; $\ldots$ Group $j$: $M_1=1$, $M_2=1$, $\ldots$, $M_{2j-1}=1$, $M_{2j}=0$; $\ldots$ Group $t-1$: $M_1=1$, $M_2=1$, $\ldots$, $M_{2t-3}=1$, $M_{2t-2}=0$; Group $t$: $M_1=1$, $M_2=1$, $\ldots$, $M_{2t-1}=1$, $M_{2t}=M_r=0$. In Group $1$, the number of vectors is $(2^{k_2}-1)2^{n-k_1-k_2}=2^{n-k_1}-2^{n-k_1-k_2}$. In Group $2$, the number of vectors is $(2^{k_4}-1)2^{n-k_1-k_2-k_3-k_4}=2^{n-k_1-k_2-k_3}-2^{n-k_1-k_2-k_3-k_4}$. \ldots In Group $t-1$, the number of vectors is $(2^{k_{2t-2}}-1)2^{n-k_1-\ldots-k_{2t-2}}=2^{n-k_1-\ldots-k_{2t-3}}-2^{n-k_1-\ldots -k_{2t-2}}$. In Group $t$, the number of vectors is $2^{n-k_1-\ldots -k_{2t-1}}-2^{n-k_1-\ldots -k_{2t}}=2^{k_{2t}}-1$. Adding all of them, we get Equation \ref{eq4.3} again. Because $|\{(x_1,\ldots,x_n)|f(x_1,\ldots,x_n)=0\}|+|\{(x_1,\ldots,x_n)|f(x_1,\ldots,x_n)=1\}|=2^n$, the Hamming weight of $f_r\oplus 1$ is \begin{equation*} W(f_r\oplus 1)=2^n-W(f_r)=2^n-\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}=\sum_{j=0}^{r}(-1)^{j}2^{n-\sum_{i=1}^jk_i},
\end{equation*} where $\sum_{i=1}^0k_i$ is interpreted as $0$. \end{proof} In the following, we calculate the activities of the variables of any NCF. Let $f$ be an NCF written in the form of Theorem \ref{th3.2}. Without loss of generality (to avoid complicated notation), we assume $M_1=(x_1\oplus a_1)(x_2\oplus a_2)\ldots (x_{k_1}\oplus a_{k_1})$ and $m_1=(x_1\oplus a_1)\ldots(x_{i-1}\oplus a_{i-1})(x_{i+1}\oplus a_{i+1})\ldots (x_{k_1}\oplus a_{k_1})$, i.e., $M_1=(x_i\oplus a_i)m_1$. If $r=1$, i.e., $k_1=n$, then \begin{equation*} \alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}(f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\oplus f(x_1,\ldots,\overset{i}{1},\ldots,x_n)) \end{equation*} \begin{equation*} =\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}m_1=\frac{1}{2^{n-1}}W(m_1)=\frac{1}{2^{n-1}} \end{equation*} by Lemma \ref{lm4.1}. If $1<r\leq n-1$, let us consider the activity of $x_i$ in the first layer, i.e., $1\leq i\leq k_1$. We have \begin{equation*} \alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}(f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\oplus f(x_1,\ldots,\overset{i}{1},\ldots,x_n)) \end{equation*} \begin{equation*} =\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}m_1(M_{2}(\ldots(M_{r-1} (M_{r}\oplus 1)\oplus 1)\ldots)\oplus 1) \end{equation*} \begin{equation*} =\frac{1}{2^{n-1}}W(m_1(M_{2}(\ldots(M_{r-1} (M_{r}\oplus 1)\oplus 1)\ldots)\oplus 1)) \end{equation*} $=\left\{ \begin{array}{ll} \frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-1-(\sum_{i=1}^jk_i-1)}, & k_1>1\\ \frac{1}{2^{n-1}}\sum_{j=0}^{r-1}(-1)^{j}2^{n-1-\sum_{i=1}^jk_{i+1}}, & k_1=1. \end{array}\right.$ $=\left\{ \begin{array}{ll} \frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}, & k_1>1\\ \frac{1}{2^{n-1}}\sum_{j=0}^{r-1}(-1)^{j}2^{n-\sum_{i=0}^jk_{i+1}}, & k_1=1.
\end{array}\right.=\left\{ \begin{array}{ll} \frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}, & k_1>1\\ \frac{1}{2^{n-1}}\sum_{j=0}^{r-1}(-1)^{j}2^{n-\sum_{i=1}^{j+1}k_{i}}, & k_1=1. \end{array}\right.$ $=\left\{ \begin{array}{ll} \frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}, & k_1>1\\ \frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}, & k_1=1 \end{array}\right.=\frac{1}{2^{n-1}}\sum_{j=1}^r(-1)^{j-1}2^{n-\sum_{i=1}^jk_i}$ by Theorem \ref{th4.1}. Note that in the above, $k_1=1$ means $m_1=1$; we used Equation \ref{eq4.4} with layer number $r-1$ and first layer $M_2$ for functions of $n-1$ variables. Now let us consider the variables in the second layer, i.e., let $x_i$ be an essential variable of $M_2$. We have $M_2=(x_i\oplus a_i)m_2$ and \begin{equation*} \alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}(f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\oplus f(x_1,\ldots,\overset{i}{1},\ldots,x_n)) \end{equation*} \begin{equation*} =\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}M_1(m_{2}(\ldots(M_{r-1} (M_{r}\oplus 1)\oplus 1)\ldots)) \end{equation*} \begin{equation*} =\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}M_1m_{2}(\ldots(M_{r-1} (M_{r}\oplus 1)\oplus 1)\ldots) \end{equation*} \begin{equation*} =\frac{1}{2^{n-1}}\sum_{j=1}^{r-1}(-1)^{j-1}2^{n-1-((k_1+k_2-1)+k_3+\ldots +k_{j+1})}=\frac{1}{2^{n-1}}\sum_{j=1}^{r-1}(-1)^{j-1}2^{n-\sum_{i=1}^{j+1}k_i} \end{equation*} by Equation \ref{eq4.3} in Theorem \ref{th4.1}. Note that $M_1m_2$ is the first layer, $M_3$ is the second layer, and so on. Now let us consider the variables in the $l$th layer, i.e., let $x_i$ be an essential variable of $M_l$, $2\leq l\leq r-1$.
We have $M_l=(x_i\oplus a_i)m_l$ and \begin{equation*} \alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}(f(x_1,\ldots,\overset{i}{0},\ldots,x_n)\oplus f(x_1,\ldots,\overset{i}{1},\ldots,x_n)) \end{equation*} \begin{equation*} =\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}M_1\ldots M_{l-1}m_l(M_{l+1}(\ldots (M_r\oplus 1)\ldots )\oplus 1) \end{equation*} \begin{equation*} =\frac{1}{2^{n-1}}\sum_{j=1}^{r-l+1}(-1)^{j-1}2^{n-1-((k_1+\ldots + k_{l}-1)+k_{l+1}+\ldots +k_{j+l-1})}=\frac{1}{2^{n-1}}\sum_{j=1}^{r-l+1}(-1)^{j-1}2^{n-\sum_{i=1}^{j+l-1}k_i} \end{equation*} by Equation \ref{eq4.3} in Theorem \ref{th4.1}. Note that $M_1\ldots M_{l-1}m_l$ is the first layer, $M_{l+1}$ is the second layer, and so on. Finally, let $x_i$ be a variable in the last layer $M_{r}$ and write $M_r=(x_i\oplus a_i)m_r$; we have \begin{equation*} \alpha_i^f=\frac{1}{2^{n-1}}\sum_{(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\in \mathbb{F}_2^{n-1}}M_1M_2\ldots\ M_{r-1}m_r=\frac{1}{2^{n-1}} \end{equation*} by Lemma \ref{lm4.1}. Variables in the same layer have the same activity, so we use $A_l^f$ to denote the activity of each variable in the $l$th layer $M_l$, $1\leq l\leq r$. The formula for $A_l^f$ obtained for $2\leq l\leq r-1$ is also valid when $l=r$ or $r=1$. Hence we summarize all of the above as follows. \begin{theorem}\label{th4.2} Let $f$ be an NCF written as in Theorem \ref{th3.2}.
Then the activity of each variable in the $l$th layer, $1\leq l\leq r$, is \begin{equation}\label{4.5} A_l^f=\frac{1}{2^{n-1}}\sum_{j=1}^{r-l+1}(-1)^{j-1}2^{n-\sum_{i=1}^{j+l-1}k_i} \end{equation} and the average sensitivity of $f$ is \begin{equation}\label{eq4.6} s^f=\sum_{l=1}^rk_lA_l^f=\frac{1}{2^{n-1}}\sum_{l=1}^r k_l\sum_{j=1}^{r-l+1}(-1)^{j-1}2^{n-\sum_{i=1}^{j+l-1}k_i}. \end{equation} \end{theorem} Analyzing the formulas in Theorem \ref{th4.2}, we obtain the following. \begin{cor}\label{co4.1} For $n\geq 3$, $A_1^f>A_2^f>\ldots >A_r^f$ and $\frac{n}{2^{n-1}}\leq s^f < 2- \frac{1}{2^{n-2}}$. \end{cor} \begin{proof} \begin{equation*} A_l^f=\frac{1}{2^{n-1}}\sum_{j=1}^{r-l+1}(-1)^{j-1}2^{n-\sum_{i=1}^{j+l-1}k_i}=\frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_l}-2^{n-k_1-\ldots -k_{l+1}}+\ldots (-1)^{r-l}) \end{equation*} Since the sum is alternating with terms of decreasing magnitude and $k_{l+1}\geq 1$, we have \begin{equation*} \frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_l-1})\leq \frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_l}-2^{n-k_1-\ldots -k_{l+1}})< A_l^f< \frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_l}) \end{equation*} Hence, \begin{equation*} A_{l+1}^f< \frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_{l+1}})\leq \frac{1}{2^{n-1}}(2^{n-k_1-\ldots -k_l-1})<A_l^f.
\end{equation*} We have \begin{equation*} k_1A_1^f=\frac{k_1}{2^{n-1}}(2^{n-{k_1}}-2^{n-{k_1}-k_2}+2^{n-{k_1}-k_2-k_3}-\ldots (-1)^{r-1}) \end{equation*} \begin{equation*} k_2A_2^f=\frac{k_2}{2^{n-1}}(2^{n-k_1-k_2}-2^{n-k_1-k_2-k_3}+2^{n-k_1-k_2-k_3-k_4}-\ldots (-1)^{r-2}) \end{equation*} \begin{equation*} \ldots \ldots \end{equation*} \begin{equation*} k_lA_l^f=\frac{k_l}{2^{n-1}}(2^{n-k_1-\ldots-k_l}-2^{n-k_1-\ldots-k_l-k_{l+1}}+\ldots (-1)^{r-l}) \end{equation*} \begin{equation*} \ldots \ldots \end{equation*} \begin{equation*} k_rA_r^f=\frac{k_r}{2^{n-1}} \end{equation*} Hence, $s^f=\sum_{l=1}^rk_lA_l^f\geq \frac{k_1}{2^{n-1}}+\frac{k_2}{2^{n-1}}+\ldots +\frac{k_r}{2^{n-1}}=\frac{n}{2^{n-1}}$, so the NCF with LAYER NUMBER 1 has the minimal average sensitivity. On the other hand, $s^f=\sum_{l=1}^rk_lA_l^f<\frac{k_1}{2^{n-1}}2^{n-k_1}+\frac{k_2}{2^{n-1}}2^{n-k_1-k_2}+\ldots +\frac{k_l}{2^{n-1}}2^{n-k_1-\ldots -k_l}+\ldots+\frac{k_r}{2^{n-1}}=U(k_1,\ldots,k_r)$, where $k_1+\ldots +k_r=n$, $k_i\geq 1$ for $i=1,\ldots, r-1$ and $k_r\geq 2$. We now find the maximal value of $U(k_1,\ldots,k_r)$. First, we claim that $k_r=2$ when $U(k_1,\ldots,k_r)$ reaches its maximal value. Indeed, if $k_r$ is increased by $1$, the last term contributes an extra $\frac{1}{2^{n-1}}$ to $U(k_1,\ldots,k_r)$, but then some $k_l$ must be decreased by $1$ (since $k_1+\ldots+k_r=n$), and the term \begin{equation*} \frac{k_l}{2^{n-1}}2^{n-k_1-\ldots -k_l} \end{equation*} decreases by more than $\frac{1}{2^{n-1}}$. Now look at $\frac{k_1}{2^{n-1}}2^{n-k_1}$: it attains its maximal value only when $k_1=1$ or $2$, and $k_1=1$ is the right choice since it also makes all the other terms larger. Next look at $\frac{k_2}{2^{n-1}}2^{n-k_1-k_2}$: it attains its maximal value when $k_1=k_2=1$ or $k_1=1$ and $k_2=2$; again, $k_2=1$ is the best choice, as it makes all the other terms larger.
In general, if $k_1=\ldots=k_{l-1}=1$, then $\frac{k_l}{2^{n-1}}2^{n-k_1-\ldots -k_l}$ attains its maximal value when $k_l=1$, where $1\leq l\leq r-1$. In summary, we have shown that $U(k_1,\ldots,k_r)$ reaches its maximal value when $r=n-1$, $k_1=\ldots=k_{n-2}=1$, $k_{n-1}=2$, and $\max U(k_1,\ldots,k_r)=U(1,\ldots,1,2)=\frac{1}{2^{n-1}}(2^{n-1}+2^{n-2}+\ldots+ 2^{2}+2)=2-\frac{1}{2^{n-2}}$. \end{proof} \begin{remark} Thus the average sensitivity of any NCF, with any number of variables, is bounded by constants: the minimal value approaches $0$ and the maximal value of $U(k_1,\ldots,k_r)$ approaches $2$ as $n\rightarrow \infty$. Hence $0<s^f<2$ for any NCF with an arbitrary number of variables. \end{remark} In the following we evaluate Equation \ref{eq4.6} for some parameters $k_1,\ldots,k_r$. \begin{lemma}\label{lm4.2} 1) When $r=n-1$, $k_1=\ldots=k_{n-2}=1$, $k_{n-1}=2$, $s^f=\frac{4}{3}-\frac{3+(-1)^n}{3\times 2^n}$; 2) Given $n\geq 4$, when $r=n-2$, $k_1=\ldots=k_{n-3}=1$, $k_{n-2}=3$, $s^f=\frac{4}{3}-\frac{9+5(-1)^{n-1}}{3\times 2^n}$; 3) If $n$ is even and $n\geq 6$, when $r=\frac{n}{2}$, $k_1=1$, $k_2=\ldots=k_{\frac{n}{2}-1}=2$, $k_{\frac{n}{2}}=3$, $s^f=\frac{4}{3}-\frac{4}{3\times 2^n}$. Hence, these three values are equal if $n$ is even. \end{lemma} \begin{proof} When $r=n-1$, $k_1=\ldots=k_{n-2}=1$ and $k_{n-1}=2$, we apply Equation \ref{eq4.6}.
We have \begin{equation*} s^f=\sum_{l=1}^{r}k_lA_l^f=\frac{1}{2^{n-1}}\sum_{l=1}^{n-1} k_l\sum_{j=1}^{n-l}(-1)^{j-1}2^{n-\sum_{i=1}^{j+l-1}k_i} \end{equation*} \begin{equation*} =\frac{1}{2^{n-1}}\sum_{l=1}^{n-1} k_l(\sum_{j=1}^{n-l-1}(-1)^{j-1}2^{n-j-l+1}+(-1)^{n-l-1})=\frac{1}{2^{n-1}}\sum_{l=1}^{n-1} k_l(\frac{1}{3}2^{n-l+1}+\frac{1}{3}(-1)^{n-l}) \end{equation*} \begin{equation*} =\frac{1}{2^{n-1}}(\sum_{l=1}^{n-2} (\frac{1}{3}2^{n-l+1}+\frac{1}{3}(-1)^{n-l})+2)=\frac{4}{3}-\frac{3+(-1)^n}{3\times 2^n} \end{equation*} The other two formulas are also obtained by routine simplification of Equation \ref{eq4.6}. \end{proof} Based on our numerical calculations, Lemma \ref{lm4.2} and the proof of Corollary \ref{co4.1}, we have the following. \begin{conj}\label{conj4.1} The maximal value of $s^f$ is $s^f=\frac{4}{3}-\frac{3+(-1)^n}{3\times 2^n}$. It is reached if the NCF has the maximal LAYER NUMBER $n-1$, i.e., if $r=n-1$, $k_1=\ldots=k_{n-2}=1$, $k_{n-1}=2$. When $n$ is even, this maximal value is also reached by NCFs with parameters $n\geq 4$, $r=n-2$, $k_1=\ldots=k_{n-3}=1$, $k_{n-2}=3$, or $n\geq 6$, $r=\frac{n}{2}$, $k_1=1$, $k_2=\ldots=k_{\frac{n}{2}-1}=2$ and $k_{\frac{n}{2}}=3$. \end{conj} \begin{remark}\label{re4.2} When $n=6$, the NCF with $k_1=1$, $k_2=2$, $k_3=1$ and $k_4=2$ also has the maximal average sensitivity $\frac{21}{16}$. But this cannot be generalized. If the above conjecture is true, then $0<s^f<\frac{4}{3}$ for any NCF with an arbitrary number of variables; in other words, both $0$ and $\frac{4}{3}$ are uniform tight bounds for any NCF. \end{remark} We point out that, given the algebraic normal form of $f$, it is easy to find all of its canalyzing variables (the first layer $M_1$) and write $f=M_1g\oplus b$; repeating this procedure on $g$, we can easily determine whether $f$ is an NCF and, if so, write it in the form of Theorem \ref{th3.2}. We end this section with the following example.
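The values claimed in the example below for the function $Y$ (Hamming weight $5$ by Equation \ref{eq4.3} and average sensitivity $15/16$ by Equation \ref{eq4.6}) can also be confirmed by exhaustive enumeration. The following sketch (names ours) uses the layered form $M_1=x_1(x_3\oplus 1)$, $M_2=x_4$, $M_3=x_2(x_5\oplus 1)$ derived in the example:

```python
from itertools import product

def Y(x1, x2, x3, x4, x5):
    # Layered form M1*(M2*(M3 + 1) + 1) with b = 0:
    # M1 = x1(x3+1), M2 = x4, M3 = x2(x5+1).
    m1 = x1 & (x3 ^ 1)
    m2 = x4
    m3 = x2 & (x5 ^ 1)
    return m1 & ((m2 & (m3 ^ 1)) ^ 1)

pts = list(product((0, 1), repeat=5))
weight = sum(Y(*p) for p in pts)   # Hamming weight of Y

def flip(p, i):
    q = list(p)
    q[i] ^= 1
    return tuple(q)

# Average sensitivity: average number of Hamming neighbors on which
# the function value differs.
s = sum(Y(*p) ^ Y(*flip(p, i)) for p in pts for i in range(5)) / 2 ** 5
```

Running this gives `weight == 5` and `s == 15/16`, in agreement with the closed formulas.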
\begin{example} Let $N(x_1,x_2,x_3,x_4)=x_1x_2x_3\oplus x_2x_3x_4\oplus x_1x_3\oplus x_3x_4\oplus 1$ and $Y(x_1,x_2,x_3,x_4,x_5)=x_1x_2x_3x_4x_5\oplus x_1x_2x_3x_4\oplus x_1x_2x_4x_5\oplus x_1x_2x_4\oplus x_1x_3x_4\oplus x_1x_3\oplus x_1x_4\oplus x_1$. For $N(x_1,x_2,x_3,x_4)$, checking all four variables, we find that when $x_2=1$ or $x_3=0$ the function becomes the constant $1$, so $N(x_1,x_2,x_3,x_4)=(x_2\oplus 1)x_3N_1\oplus 1$. Indeed, $N(x_1,x_2,x_3,x_4)=x_1x_2x_3\oplus x_2x_3x_4\oplus x_1x_3\oplus x_3x_4\oplus 1=x_3(x_1x_2\oplus x_2x_4\oplus x_1\oplus x_4)\oplus 1=x_3(x_2(x_1\oplus x_4)\oplus x_1\oplus x_4)\oplus 1=x_3((x_2\oplus 1)(x_1\oplus x_4))\oplus 1$. Since $x_1\oplus x_4$ has no canalyzing variable, $N$ is not an NCF, but a partially nested canalyzing function. For $Y(x_1,x_2,x_3,x_4,x_5)$, we find that when $x_1=0$ or $x_3=1$ the function reduces to $0$, so $Y=x_1(x_3\oplus 1)Y_1$, where $Y_1=x_2x_4x_5\oplus x_2x_4\oplus x_4\oplus 1$. For this function we find that only when $x_4=0$ does $Y_1$ reduce to $1$, so $Y_1=x_4Y_2\oplus 1$, where $Y_2=x_2x_5\oplus x_2\oplus 1$; finally, $Y_2=x_2(x_5\oplus 1)\oplus 1$. So $Y$ is an NCF with $n=5$, $r=3$, $k_1=2$, $k_2=1$, $k_3=2$, $M_1=x_1(x_3\oplus 1)$, $M_2=x_4$ and $M_3=x_2(x_5\oplus 1)$; hence its Hamming weight is $5$ by Equation \ref{eq4.3} and its average sensitivity is $\frac{15}{16}$ by Equation \ref{eq4.6}. \end{example} \section{Conclusion} We obtained a complete characterization of nested canalyzing functions (NCFs) by deriving their unique algebraic normal form (polynomial form). We introduced a new invariant, the LAYER NUMBER, for nested canalyzing functions, which quantifies the dominance of nested canalyzing variables. Consequently, we obtained an explicit formula for the number of nested canalyzing functions. Based on the polynomial form, we also obtained a formula for the Hamming weight of each NCF, and an explicit formula for the activity of each variable of an NCF.
Consequently, we proved that the average sensitivity of any NCF is less than $2$, which provides a theoretical explanation of the stability of NCFs. Finally, we conjectured that the tight upper bound for the average sensitivity of any NCF is $\frac{4}{3}$. \end{document}
\begin{document} \title{Scalability of Shor's algorithm with a limited set of rotation gates} \author{Austin G. Fowler and Lloyd C. L. Hollenberg} \affiliation{Centre for Quantum Computer Technology,\\ School of Physics, University of Melbourne, Victoria 3010, AUSTRALIA.} \date{\today} \begin{abstract} Typical circuit implementations of Shor's algorithm involve controlled rotation gates of magnitude $\pi/2^{2L}$ where $L$ is the binary length of the integer $N$ to be factored. Such gates cannot be implemented exactly using existing fault-tolerant techniques. Approximating a given controlled $\pi/2^{d}$ rotation gate to within $\delta=O(1/2^{d})$ currently requires both a number of qubits and a number of fault-tolerant gates that grow polynomially with $d$. In this paper we show that this additional growth in space and time complexity would severely limit the applicability of Shor's algorithm to large integers. Consequently, we study in detail the effect of using only controlled rotation gates with $d$ less than or equal to some $d_{\rm max}$. It is found that integers up to length $L_{\rm max} = O(4^{d_{\rm max}})$ can be factored without significant performance penalty, implying that the cumbersome techniques of fault-tolerant computation only need to be used to create controlled rotation gates of magnitude $\pi/64$ if factoring integers thousands of bits long is desired. Explicit fault-tolerant constructions of such gates are also discussed. \end{abstract} \pacs{PACS number : 03.67.Lx} \maketitle Shor's factoring algorithm \cite{Shor94b,Shor00} is arguably the driving force of much experimental quantum computing research. It is therefore crucial to investigate whether the algorithm has a realistic chance of being used to factor commercially interesting integers. This paper focuses on the difficulty of implementing the quantum Fourier transform (QFT) -- an integral part of the algorithm.
Specifically, the controlled $\pi/2^{d}$ rotations that comprise the QFT are extremely difficult to implement using fault-tolerant gates protected by quantum error correction (QEC). To factor an $L$-bit integer $N$, a $2L$-qubit QFT is required that at first glance involves controlled rotation gates of magnitude $\pi/2^{2L}$. Prior work on simplifying the QFT so that it only involves controlled rotation gates of magnitude $\pi/2^{d_{\rm max}}$ has been performed by Coppersmith \cite{Copp94} with the conclusion that the maximum length $L_{\rm max}$ of an integer that can be factored scales as $O(2^{d_{\rm max}})$ and that factoring an integer thousands of bits long would require the implementation of controlled rotations as small as $\pi/10^{6}$. This paper refines this work with the conclusion that $L_{\rm max}$ scales as $O(4^{d_{\rm max}})$, with $\pi/64$ rotations sufficient to enable the factoring of integers thousands of bits long. The discussion is organized as follows. In Section~\ref{shor} Shor's algorithm is reviewed with emphasis on extracting useful output from the quantum period finding (QPF) subroutine. This subroutine is described in detail in this section. In Section~\ref{aqft} Coppersmith's approximate quantum Fourier transform (AQFT) is described, followed by Section~\ref{ftrotation} which outlines the techniques used to implement the gate set required by the AQFT using only fault-tolerant gates protected by QEC. In Section~\ref{sVr} the relationship between the probability of success $s$ of the QPF subroutine and the period $r$ being sought is investigated. In Section~\ref{sVLd} the relationship between the probability of success $s$ and both the length $L$ of the integer being factored and the minimum angle controlled rotation $\pi/2^{d_{\rm max}}$ is studied. This is then used to relate $L_{\rm max}$ to $d_{\rm max}$. Section~\ref{conc} concludes with a summary of results.
\section{Shor's Algorithm} \label{shor} Shor's algorithm factors an integer $N=N_{1}N_{2}$ by finding the period $r$ of a function $f(k)=m^{k}\bmod N$ where $1<m<N$ and ${\rm gcd}(m,N)=1$. Provided $r$ is even and $f(r/2)\neq N-1$ the factors are $N_{1}={\rm gcd}(f(r/2)+1,N)$ and $N_{2}={\rm gcd}(f(r/2)-1,N)$, where ${\rm gcd}$ denotes the greatest common divisor. The probability of finding a suitable $r$ given a randomly selected $m$ such that ${\rm gcd}(m,N)=1$ is greater than $0.75$ \cite{Niel00}. Thus on average very few values of $m$ need to be tested to factor $N$. The quantum heart of Shor's algorithm can be viewed as a subroutine that generates numbers of the form $j \simeq c 2^{2L}/r$. To distinguish this from the necessary classical pre- and postprocessing, this subroutine will be referred to as QPF (quantum period finding). For physical reasons, the probability $s$ that QPF will successfully generate useful data may be quite low with many repetitions required to work out the period $r$ of a given $f(k)=m^{k}\bmod N$. Using this terminology, Shor's algorithm consists of classical preprocessing, potentially many repetitions of QPF with classical postprocessing and possibly a small number of repetitions of this entire cycle. This cycle is summarized in Fig~\ref{figure:flowchart}. \begin{figure} \caption{The complete Shor's algorithm including classical pre- and postprocessing. The first branch is highly likely to fail, resulting in many repetitions of the quantum heart of the algorithm, whereas the second branch is highly likely to succeed.} \label{figure:flowchart} \end{figure} A number of different quantum circuits implementing QPF have been designed \cite{Vedr96,Goss98,Beau03,Zalk98}. Table~\ref{table:circuit_comparison} summarizes the number of qubits required and the depth of each of these circuits. The depth of a circuit has been defined to be the minimum number of 2-qubit gates that must be applied sequentially to complete the circuit.
It has been assumed that multiple disjoint 2-qubit gates can be implemented in parallel, hence the total number of 2-qubit gates can be significantly greater than the depth. For example, the Beauregard circuit has a 2-qubit gate count of $8L^{4}$ to first order in $L$. \begin{table} \begin{tabular}{c|c|c} Circuit & Qubits & Depth \\ \hline Beauregard \cite{Beau03} & $2L$ & $32L^{3}$ \\ Vedral \cite{Vedr96} & $5L$ & $240L^{3}$ \\ Zalka \cite{Zalk98}& $\sim 50L$ & $\sim 2^{17}L^{2}$ \\ Gossett \cite{Goss98} & $O(L^{2})$ & $O(L \log L)$ \\ \end{tabular} \caption{Number of qubits required and circuit depth of different implementations of Shor's algorithm. Where possible, figures are accurate to first order in $L$.} \label{table:circuit_comparison} \end{table} Note that in general the depth of the circuit can be reduced at the cost of additional qubits. The underlying algorithm common to each circuit begins by initializing the quantum computer to a single pure state $|0\rangle_{2L}|0\rangle_{L}$. Note that for clarity the computer state has been broken into a $2L$-qubit $k$-register and an $L$-qubit $f$-register. The meaning of this will become clearer below. Step two is to Hadamard transform each qubit in the $k$-register yielding \begin{equation} \label{two} \frac{1}{2^{L}}\sum_{k=0}^{2^{2L}-1}|k\rangle_{2L}|0\rangle_{L}. \end{equation} Step three is to calculate and store the corresponding values of $f(k)$ in the $f$-register \begin{equation} \label{three} \frac{1}{2^{L}}\sum_{k=0}^{2^{2L}-1}|k\rangle_{2L}|f(k)\rangle_{L}. \end{equation} Note that this step requires additional ancilla qubits. The exact number depends heavily on the circuit used. Step four can actually be omitted but it explicitly shows the origin of the period $r$ being sought.
Measuring the $f$-register yields \begin{equation} \label{four} \frac{\sqrt{r}}{2^{L}}\sum_{n=0}^{ 2^{2L}/r-1}|k_{0}+nr\rangle_{2L}|f_{M}\rangle_{L} \end{equation} where $k_{0}$ is the smallest value of $k$ such that $f(k)$ equals the measured value $f_{M}$. Step five is to apply the quantum Fourier transform \begin{equation} \label{qft1} |k\rangle \rightarrow \frac{1}{2^{L}}\sum_{j=0}^{2^{2L}-1}\exp(\frac{2\pi i}{2^{2L}} jk)|j\rangle \end{equation} to the $k$-register resulting in \begin{equation} \label{fivesum} \frac{\sqrt{r}}{2^{2L}}\sum_{j=0}^{2^{2L}-1}\sum_{p=0}^{ 2^{2L}/r-1}\exp(\frac{2\pi i}{2^{2L}} j(k_{0}+pr))|j\rangle_{2L}|f_{M}\rangle_{L}. \end{equation} The probability of measuring a given value of $j$ is thus \begin{equation} \label{prj} {\rm Pr}(j,r,L)=\left|\frac{\sqrt{r}}{2^{2L}}\sum_{p=0}^{ 2^{2L}/r-1}\exp(\frac{2\pi i}{2^{2L}} jpr)\right|^{2}. \end{equation} If $r$ divides $2^{2L}$, Eq~(\ref{prj}) can be evaluated exactly. In this case the probability of observing $j=c2^{2L}/r$ for some integer $0\leq c<r$ is $1/r$, whereas if $j\neq c2^{2L}/r$ the probability is 0. This situation is illustrated in Fig~\ref{figure:period}(a). However, if $r$ divides $2^{2L}$ exactly, a quantum computer is not needed as $r$ would then be a power of 2 and easily calculable. When $r$ is not a power of 2 the perfect peaks of Fig~\ref{figure:period}(a) become slightly broader as shown in Fig~\ref{figure:period}(b). All one can then say is that with high probability the value $j$ measured will satisfy $j\simeq c2^{2L}/r$ for some $0\leq c<r$. \begin{figure} \caption{Probability of different measurements $j$ at the end of quantum period finding with total number of states $2^{2L}$.} \label{figure:period} \end{figure} Given a measurement $j\simeq c2^{2L}/r$ with $c\neq 0$, classical postprocessing is required to extract information about $r$. The process begins with a continued fraction expansion. To illustrate, consider factoring 143 ($L=8$).
Suppose we choose $m=2$ and the output $j$ of QPF is 31674. The relation $j\simeq c2^{2L}/r$ becomes $31674\simeq c65536/r$. The continued fraction expansion of $c/r$ is \begin{equation} \label{contfrac854} \frac{31674}{65536}=\frac{1}{\frac{32768}{15837}}=\frac{1}{2+\frac{1094}{15837}}=\frac{1}{2+\frac{1}{14+\frac{1}{2+\frac{1}{10+1/52}}}}. \end{equation} The continued fraction expansion of any number between 0 and 1 is completely specified by the list of denominators which in this case is $\{2,14,2,10,52\}$. The $n$th convergent of a continued fraction expansion is the proper fraction equivalent to the first $n$ elements of this list. An introductory exposition and further properties of continued fractions are described in Ref~\cite{Niel00}. \begin{eqnarray} \label{convergeants427} \{2\} & = & \frac{1}{2} \nonumber \\ \{2,14\} & = & \frac{14}{29} \nonumber \\ \{2,14,2\} & = & \frac{29}{60} \nonumber \\ \{2,14,2,10\} & = & \frac{304}{629} \nonumber \\ \{2,14,2,10,52\} & = & \frac{15837}{32768} \end{eqnarray} The period $r$ can be sought by substituting each denominator into the function $f(k)= 2^{k} \bmod 143$. With high probability only the largest denominator less than $2^{L}$ will be of interest. In this case $2^{60}\bmod 143=1$ and hence $r=60$. Two modifications to the above are required. Firstly, if $c$ and $r$ have common factors, none of the denominators will be the period but rather one will be a divisor of $r$. After repeating QPF a number of times, let $\{j_{m}\}$ denote the set of measured values. Let $\{c_{mn}/d_{mn}\}$ denote the set of convergents associated with each measured value $\{j_{m}\}$. If a pair $c_{mn}$, $c_{m'n'}$ exists such that ${\rm gcd}(c_{mn}, c_{m'n'})=1$ and $d_{mn}$, $d_{m'n'}$ are divisors of $r$ then $r={\rm lcm}(d_{mn}, d_{m'n'})$, where ${\rm lcm}$ denotes the least common multiple.
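The classical postprocessing just described is simple to implement. The following Python sketch (ours, standard library only, not from the original text) reproduces the worked example: from $j=31674$ it recovers the coefficient list $\{2,14,2,10,52\}$, the period $r=60$, and finally the factors $11\times 13=143$.

```python
from fractions import Fraction
from math import gcd

def cf_coeffs(num, den):
    # Continued fraction coefficients of num/den in (0,1):
    # num/den = 1/(a1 + 1/(a2 + ...)), computed by the Euclidean algorithm.
    coeffs = []
    a, b = num, den
    while a:
        coeffs.append(b // a)
        a, b = b % a, a
    return coeffs

def convergent(coeffs, n):
    # Value of the first n coefficients as an exact fraction.
    val = Fraction(0)
    for a in reversed(coeffs[:n]):
        val = 1 / (a + val)
    return val

N, m, L, j = 143, 2, 8, 31674
coeffs = cf_coeffs(j, 2**(2*L))                     # [2, 14, 2, 10, 52]
dens = [convergent(coeffs, n).denominator for n in range(1, len(coeffs) + 1)]

# Test each denominator below 2**L as a candidate period.
r = next(d for d in dens if d < 2**L and pow(m, d, N) == 1)   # r = 60
f = pow(m, r // 2, N)
factors = sorted((gcd(f - 1, N), gcd(f + 1, N)))              # [11, 13]
print(coeffs, r, factors)
```

The final two lines carry out the ${\rm gcd}$ step of Section~\ref{shor}, completing the factorization $143=11\times 13$.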
It can be shown that given any two divisors $d_{mn}$, $d_{m'n'}$ with corresponding $c_{mn}$, $c_{m'n'}$ the probability that ${\rm gcd}(c_{mn},c_{m'n'})=1$ is at least $1/4$ \cite{Niel00}. Thus only $O(1)$ different divisors are required. In practice, it will not be known which denominators are divisors so every pair $d_{mn}$, $d_{m'n'}$ with ${\rm gcd}(c_{mn}, c_{m'n'})=1$ must be tested. The second modification is simply allowing for the output $j$ of QPF being useless. Let $s$ denote the probability that $j=\lfloor c2^{2L}/r\rfloor$ or $\lceil c2^{2L}/r\rceil$ for some $0<c<r$ where $\lfloor \rfloor$, $\lceil \rceil$ denote rounding down and up respectively. Such values of $j$ will be called useful as the denominators of the associated convergents are guaranteed to include a divisor of $r$. The period $r$ sought can always be found provided $O(1/s)$ runs of QPF can be performed. To summarize, as each new value $j_{m}$ is measured, the denominators $d_{mn}$ less than $2^{L}$ of the convergents of the continued fraction expansion of $j_{m}/2^{2L}$ are substituted into $f(k)=m^{k} \bmod N$ to determine whether any $f(d_{mn})=1$ which would imply that $r=d_{mn}$. If not, every pair $d_{mn}$, $d_{m'n'}$ with associated numerators $c_{mn}$, $c_{m'n'}$ satisfying ${\rm gcd}(c_{mn}, c_{m'n'})=1$ must be tested to see whether $r={\rm lcm}(d_{mn}, d_{m'n'})$. Note that as shown in Fig~\ref{figure:flowchart}, if $r$ is odd or $m^{r/2}\bmod N = N-1$ then the entire process needs to be repeated $O(1)$ times. Thus Shor's algorithm always succeeds provided $O(1/s)$ runs of QPF can be performed. \section{Approximate Quantum Fourier Transform} \label{aqft} A circuit that implements the QFT of Eq~(\ref{qft1}) is shown in Fig~\ref{figure:qftboth}(a). Note the use of controlled rotations of magnitude $\pi/2^{d}$.
In matrix notation these 2-qubit operations correspond to \begin{equation} \label{contphase} \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{i\pi/2^{d}} \end{array} \right). \end{equation} \begin{figure} \caption{Circuit for a 4-qubit (a) quantum Fourier transform and (b) approximate quantum Fourier transform with $d_{\rm max}=1$.} \label{figure:qftboth} \end{figure} The approximate QFT (AQFT) circuit is very similar with just the deletion of rotation gates with $d$ greater than some $d_{\rm max}$. For example, Fig~\ref{figure:qftboth}(b) shows an AQFT with $d_{\rm max}=1$. Let $[j]_{m}$ denote the $m$th bit of $j$. The AQFT equivalent to Eq~(\ref{qft1}) is \cite{Copp94} \begin{equation} \label{aqfteq} |k\rangle \rightarrow \frac{1}{\sqrt{2^{2L}}}\sum_{j=0}^{2^{2L}-1}|j\rangle \exp(\frac{2\pi i}{2^{2L}} {\textstyle \tilde{\sum}_{mn}}[j]_{m}[k]_{n}2^{m+n}) \end{equation} where $\tilde{\sum}_{mn}$ denotes a sum over all $m$, $n$ such that $0\leq m,n<2L$ and $2L-d_{\rm max}+1\leq m+n<2L$. It has been shown by Coppersmith that the AQFT is a good approximation of the QFT \cite{Copp94} in the sense that the phase of individual computational basis states in the output of the AQFT differ in angle from those in the output of the QFT by at most $2\pi L2^{-d_{\rm max}}$. The purpose of this paper is to investigate in detail the effect of using the AQFT in Shor's algorithm. \section{Fault-Tolerant Construction of Small Angle Rotation Gates} \label{ftrotation} When the 7-qubit Steane code \cite{Stea96,Cald95,Niel00} and its concatenated generalizations are used to do computation, only the limited set of gates \textsc{cnot}, Hadamard ($H$), $X$, $Z$, $S$ and $S^{\dag}$ can be implemented easily, where \begin{equation} \label{Sgate} S = \left( \begin{array}{cc} 1 & 0 \\ 0 & i \\ \end{array} \right).
\end{equation} Complicated circuits of depth in the hundreds and requiring a minimum of 22 qubits are required to implement the $T$ and $T^{\dag}$ gates \cite{Niel00} \begin{equation} \label{Tgate} T = \left( \begin{array}{cc} 1 & 0 \\ 0 & e^{i\pi/4} \\ \end{array} \right). \end{equation} Note however that if it is acceptable to add an additional 15 qubits for every $T$ and $T^{\dag}$ gate in a sequence of fault-tolerant single-qubit gates (see for example Eq~(\ref{U31})), the effective depth of each $T$ and $T^{\dag}$ gate circuit can be reduced to 2. Together, the set \textsc{cnot}, $H$, $X$, $Z$, $S$, $S^{\dag}$, $T$ and $T^{\dag}$ enables the implementation of arbitrary 1- and 2-qubit gates via the Solovay-Kitaev theorem \cite{Kita97,Niel00}. For example, the controlled $\pi/2^{d}$ gate can be decomposed into a single \textsc{cnot}\ and three single-qubit rotations as shown in Fig~\ref{figure:phasedec}. \begin{figure} \caption{Decomposition of a controlled rotation into single-qubit gates and a \textsc{cnot}.} \label{figure:phasedec} \end{figure} Approximating single-qubit $\pi/2^{d}$ rotations using the fault-tolerant gate set is much more difficult. For convenience, such rotations will henceforth be denoted by $R_{2^{d}}$. The simplest (smallest number of fault-tolerant gates) approximation of the $R_{128}$ single-qubit rotation gate that is more accurate than simply the identity matrix is the 31 gate product \begin{eqnarray} \label{U31} U_{31} & = & HTHT^{\dag}HTHTHTHT^{\dag}HT^{\dag}HT\nonumber \\ & & HTHT^{\dag}HT^{\dag}HTHT^{\dag}HT^{\dag}HT^{\dag}H. \end{eqnarray} Eq~(\ref{U31}) was determined via an exhaustive search minimizing the metric \begin{equation} \label{dist} {\rm dist}(U,V) = \sqrt{\frac{2- |{\rm tr}(U^{\dag}V)|}{2}}. \end{equation} The rationale of Eq~(\ref{dist}) is that if $U$ and $V$ are similar, $U^{\dag}V$ will be close to the identity matrix (possibly up to some global phase) and the absolute value of the trace will be close to 2.
By subtracting this absolute value from 2 and dividing by 2 a number between 0 and 1 is obtained. The overall square root is required to ensure that the triangle inequality \begin{equation} \label{triangle} {\rm dist}(U,W) \leq {\rm dist}(U,V)+{\rm dist}(V,W) \end{equation} is satisfied. The identity matrix is a good approximation of $R_{128}$ in the sense that ${\rm dist}(R_{128},I) = 8.7\times 10^{-3}$. Eq~(\ref{U31}) is only slightly better with ${\rm dist}(R_{128},U_{31}) = 8.1\times 10^{-3}$. A 46 gate sequence has been found satisfying ${\rm dist}(R_{128},U_{46}) = 7.5\times 10^{-4}$. Note that this is still only 10 times better than doing nothing. Further investigation of the properties of fault-tolerant approximations of arbitrary single-qubit unitaries will be performed in the near future. For the present discussion it suffices to know that the number of gates grows somewhere between linearly and quadratically with $\ln(1/{\delta})$ \cite{Niel00} where $\delta={\rm dist}(R,U)$, $R$ is the rotation being approximated, and $U$ is the approximating product of fault-tolerant gates (the exact scaling is not known). In particular, this means that approximating a rotation gate $R_{2^{d}}$ with accuracy $\delta=1/2^{d}$ requires a number of gates that grows linearly or quadratically with $d$. In addition to the inconveniently large number of fault-tolerant gates $n_{\delta}$ required to achieve a given approximation $\delta$, each individual gate in the approximating sequence must be implemented with probability of error $p$ less than $O(\delta/n_{\delta})$. Note that $\delta$ is not a probability of error but rather a measure of the distance between the ideal gate and the approximating product so this relationship is not exact. If the required probability $p\sim \delta/n_{\delta}=1/(n_{\delta}2^{d})$ is too small to be achieved using a single level of QEC, the technique of concatenated QEC must be used. 
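To make the metric of Eq~(\ref{dist}) concrete, the following NumPy sketch (ours, not from the original text; $R_{128}$ is simply the diagonal $\pi/128$ phase gate defined above) evaluates ${\rm dist}(R_{128},I)$ and reproduces the quoted value of approximately $8.7\times 10^{-3}$.

```python
import numpy as np

def dist(U, V):
    # dist(U,V) = sqrt((2 - |tr(U^dag V)|) / 2), the metric of Eq. (dist).
    return np.sqrt((2 - abs(np.trace(U.conj().T @ V))) / 2)

# Single-qubit pi/128 rotation R_128 and the identity.
R128 = np.diag([1.0, np.exp(1j * np.pi / 128)])
I = np.eye(2)

print(dist(R128, I))   # ~ 8.7e-3, matching the value quoted in the text
```

Analytically, ${\rm dist}(R_{128},I)=\sqrt{2}\sin(\pi/512)$, which the numerical value confirms.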
Roughly speaking, if a given gate can be implemented with probability of error $p$, adding an additional level of concatenation \cite{Knil96b} leads to an error rate of $cp^{2}$ where $c<1/p$. If the Steane code is used with seven qubits for the code and an additional five qubits for fault-tolerant correction, every additional level of concatenation requires 12 times as many qubits. This implies that if a gate is to be implemented with accuracy $1/(n_{\delta}2^{d})$, the number of qubits $q$ scales as $O(d^{\log_{2} 12}) = O(d^{3.58})$. While this is a polynomial number of qubits, for even moderate values of $d$ this leads to thousands of qubits being used to achieve the required gate accuracy. Given the complexity of implementing $T$ and $T^{\dag}$ gates, the number of fault-tolerant gates required to achieve good approximations of arbitrary rotation gates and the large number of qubits required to achieve sufficiently reliable operation, it is clear that for practical reasons the use of $\pi/2^{d}$ rotations must be restricted to those with very small $d$. \section{Dependence of Output Reliability on Period of $f(k)=m^{k}\bmod N$} \label{sVr} Different values of $r$ (the period of $f(k)=m^{k}\bmod N$) imply different probabilities $s$ that the value $j$ measured at the end of QPF will be useful (see Fig~\ref{figure:flowchart}). In particular, as discussed in Section~\ref{shor} if $r$ is a power of 2 the probability of useful output is much higher (see Fig~\ref{figure:period}). This section investigates how sensitive $s$ is to variations in $r$. Recall Eq~(\ref{prj}) for the probability of measuring a given value of $j$.
When the AQFT (Eq~(\ref{aqfteq})) is used this becomes \begin{equation} \label{aqftprj} \begin{array}{l} {\rm Pr}(j,r,L,d_{\rm max}) = \\ {\displaystyle \left|\frac{\sqrt{r}}{2^{2L}}\sum_{p=0}^{ 2^{2L}/r-1}\exp(\frac{2\pi i}{2^{2L}}{\textstyle \tilde{\sum}_{mn}[j]_{m}[pr]_{n}2^{m+n}})\right|^{2}} \end{array} \end{equation} The probability $s$ of useful output is thus \begin{equation} \label{s} s(r,L,d_{\rm max})=\sum_{\{{\rm useful}~j\}}{\rm Pr}(j,r,L,d_{\rm max}) \end{equation} where $\{{\rm useful}~j\}$ denotes all $j=\lfloor c2^{2L}/r\rfloor$ or $\lceil c2^{2L}/r\rceil$ such that $0<c<r$. Fig~\ref{figure:Ldarray} shows $s$ for $r$ ranging from 2 to $2^{L}-1$ and for various values of $L$ and $d_{\rm max}$. The decrease in $s$ for small values of $r$ is more a result of the definition of $\{{\rm useful}~j\}$ than an indication of poor data. When $r$ is small there are few useful values of $j\simeq c2^{2L}/r$, $0<c<r$, and a large range of states likely to be observed around each one, resulting superficially in a low probability of useful output $s$, as $s$ is the sum of the probabilities of observing only the values $j=\lfloor c2^{2L}/r\rfloor$ or $\lceil c2^{2L}/r\rceil$, $0<c<r$. However, in practice values much further from $j\simeq c2^{2L}/r$ can be used to obtain useful output. For example if $r=4$ and $j=16400$ the correct output value (4) can still be determined from the continued fraction expansion of $16400/65536$ which is far from the ideal case of $16384/65536$. To simplify subsequent analysis each pair $(L, d_{\rm max})$ will from now on be associated with $s(2^{L-1}+2,L,d_{\rm max})$ which corresponds to the minimum value of $s$ to the right of the central peak. The choice of this point as a meaningful characterization of the entire graph is justified by the discussion above.
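The exactly-divisible case discussed in Section~\ref{shor} provides a convenient sanity check on these probability expressions. The sketch below (ours, with toy parameters $L=4$ and $r=4$ of our choosing) implements the exact-QFT probability of Eq~(\ref{prj}) directly and confirms that when $r$ divides $2^{2L}$, the probability of observing $j=c2^{2L}/r$ is exactly $1/r$ and is zero elsewhere.

```python
import numpy as np

def pr(j, r, L):
    # Eq. (prj): probability of measuring j for period r on a 2L-qubit
    # register, assuming r divides 2**(2L) so the sum has 2**(2L)/r terms.
    M = 2**(2 * L)
    amps = np.exp(2 * np.pi * 1j * j * r * np.arange(M // r) / M)
    return abs(np.sqrt(r) / M * amps.sum())**2

L, r = 4, 4                      # toy size: 2L = 8 qubits, 256 states
peak = pr(2**(2 * L) // r, r, L)  # j = c*2^{2L}/r with c = 1
off = pr(2**(2 * L) // r + 1, r, L)
print(peak, off)                  # peak = 1/r = 0.25, off-peak ~ 0
```

Repeating this with the restricted phase sum $\tilde{\sum}_{mn}$ of Eq~(\ref{aqftprj}) in place of the full exponent gives the AQFT curves of Fig~\ref{figure:Ldarray}.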
For completeness, Fig~\ref{figure:Ldarray}(e) shows the case of noisy controlled rotation gates of the form \begin{equation} \label{contphase_delta} \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{i(\pi/2^{d}+\delta)} \end{array} \right). \end{equation} where $\delta$ is a normally distributed random variable of standard deviation $\sigma$. This has been included to simulate the effect of using approximate rotation gates built out of a finite number of fault-tolerant gates. The general form and probability of successful output can be seen to be similar despite $\sigma=\pi/32$. This $\sigma$ corresponds to $\pi/2^{d_{\rm max}+2}$. For a controlled $\pi/64$ rotation, single-qubit rotations of angle $\pi/128$ are required, as shown in Fig~\ref{figure:phasedec}. Fig~\ref{figure:Ldarray}(e) implies that it is acceptable for this rotation to be implemented within $\pi/512$, implying \begin{equation} \label{contphase_U} U = \left( \begin{array}{cc} 1 & 0 \\ 0 & e^{i(\pi/128+\pi/512)} \end{array} \right) \end{equation} is an acceptable approximation of $R_{128}$. Given that ${\rm dist}(R_{128}, U) = 2.1\times 10^{-3}$, the 46 fault-tolerant gate approximation of $R_{128}$ mentioned above is adequate. \begin{figure} \caption{Probability $s$ of obtaining useful output from quantum period finding as a function of period $r$ for different integer lengths $L$ and rotation gate restrictions $\pi/2^{d_{\rm max}}$.} \label{figure:Ldarray} \end{figure} \section{Dependence of Output Reliability on Integer Length and Gate Restrictions} \label{sVLd} In order to determine how the probability of useful output $s$ depends on both the integer length $L$ and the minimum allowed controlled rotation $\pi/2^{d_{\rm max}}$, Eq~(\ref{s}) was solved with $r=2^{L-1}+2$ as discussed in Section \ref{sVr}. Fig~\ref{figure:darray} contains semilog plots of $s$ versus $L$ for different values of $d_{\rm max}$.
Note that Eq~(\ref{s}) grows exponentially more difficult to solve as $L$ increases. \begin{figure*} \caption{Dependence of the probability of useful output from the quantum part of Shor's algorithm on the length $L$ of the integer being factored for different levels of restriction of controlled rotation gates of angle $\pi/2^{d_{\rm max}}$.} \label{figure:darray} \end{figure*} For $d_{\rm max}$ from 0 to 5, the exponential decrease of $s$ with increasing $L$ is clear. Asymptotic lines of best fit of the form \begin{equation} \label{fit} s \propto 2^{-L/t} \end{equation} have been shown. Note that for $d_{\rm max}>0$, the value of $t$ increases by more than a factor of 4 when $d_{\rm max}$ increases by 1. This enables one to generalize Eq~(\ref{fit}) to an asymptotic lower bound valid for all $d_{\rm max}>0$ \begin{equation} \label{genfit} s \propto 2^{-L/4^{d_{\rm max}-1}} \end{equation} with the constant of proportionality approximately equal to 1. Keeping in mind that the required number of repetitions of QPF is $O(1/s)$, one can relate $L_{\rm max}$ to $d_{\rm max}$ by introducing an additional parameter $f_{\rm max}$ characterizing the acceptable number of repetitions of QPF \begin{equation} \label{Leq} L_{\rm max}\simeq 4^{d_{\rm max}-1}\log_{2}f_{\rm max}. \end{equation} Available RSA \cite{Rive78} encryption programs such as PGP typically use integers of length $L$ up to 4096. The circuit in \cite{Vedr96} runs in $150L^3$ steps when an architecture that can interact arbitrary pairs of qubits in parallel is assumed and fault-tolerant gates are used. Note that to first order in $L$ the number of steps does not increase as additional levels of QEC are used. Thus $\sim$$10^{13}$ steps are required to perform a single run of QPF. On an electron spin or charge quantum computer \cite{Burk00,Holl03} running at 10GHz this corresponds to $\sim$$15$ minutes of computing. If we assume $\sim$24 hours of computing is acceptable then $f_{\rm max}\sim 10^2$.
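Eq~(\ref{Leq}) can be inverted to estimate the required $d_{\rm max}$ for given $L_{\rm max}$ and $f_{\rm max}$; the short sketch below (ours) simply performs this arithmetic for the values just assumed, $L_{\rm max}=4096$ and $f_{\rm max}\sim 10^{2}$.

```python
from math import ceil, log2

def d_max_needed(L_max, f_max):
    # Invert Eq. (Leq), L_max ~ 4**(d_max - 1) * log2(f_max):
    # d_max = log4(L_max / log2(f_max)) + 1, rounded up.
    return ceil(log2(L_max / log2(f_max)) / 2 + 1)

print(d_max_needed(4096, 100))   # -> 6
```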
Substituting these values of $L_{\rm max}$ and $f_{\rm max}$ into Eq~(\ref{Leq}) gives $d_{\rm max}=6$ after rounding up. Thus provided controlled $\pi/64$ rotations can be implemented accurately, implying the need to accurately implement $\pi/128$ single-qubit rotations, it is conceivable that a quantum computer could one day be used to break a 4096-bit RSA encryption in a single day. \section{Conclusion} \label{conc} We have demonstrated the robustness of Shor's algorithm when a limited set of rotation gates is used. The length $L_{\rm max}$ of the longest factorable integer can be related to the maximum acceptable runs of quantum period finding $f_{\rm max}$ and the smallest accurately implementable controlled rotation gate $\pi/2^{d_{\rm max}}$ via $L_{\rm max}\sim 4^{d_{\rm max}-1}\log_{2}f_{\rm max}$. Integers thousands of bits in length can be factored provided controlled $\pi/64$ rotations can be implemented with rotation angle accurate to $\pi/256$. Sufficiently accurate fault-tolerant constructions of such controlled rotation gates have been described. \end{document}
\begin{document} \author[Robert Laterveer]{Robert Laterveer} \address{Institut de Recherche Math\'ematique Avanc\'ee, CNRS -- Universit\'e de Strasbourg,\ 7 Rue Ren\'e Des\-car\-tes, 67084 Strasbourg CEDEX, FRANCE.} \email{[email protected]} \title{A remark on the Chow ring of K\"uchle fourfolds of type $d3$} \begin{abstract} We prove that K\"uchle fourfolds $X$ of type d3 have a multiplicative Chow--K\"unneth decomposition. We present some consequences for the Chow ring of $X$. \end{abstract} \keywords{Algebraic cycles, Chow ring, motives, Beauville ``splitting property''} \subjclass{Primary 14C15, 14C25, 14C30.} \maketitle \section{Introduction} K\"uchle \cite{Ku} has classified Fano fourfolds that are obtained as sections of globally generated homogeneous vector bundles on Grassmannians. In K\"uchle's list, a Fano fourfold {\em of type d3\/} is defined as a smooth $4$--dimensional section of the vector bundle \[ (\wedge^2 \mathcal U^\vee)^{\oplus 2}\oplus \mathcal O_{G}(1)\ \to\ G:=\operatorname{Gr}(5,V_{10})\ ,\] where $\mathcal U$ denotes the tautological bundle on the Grassmannian $G$ (of $5$--dimensional subspaces of a fixed $10$--dimensional complex vector space). Let $X$ be a K\"uchle fourfold of type d3. The Hodge diamond of $X$ is \[ \begin{array}[c]{ccccccccccccc} &&&&&& 1 &&&&&&\\ &&&&&0&&0&&&&&\\ &&&&0&&5&&0&&&&\\ &&&0&&0&&0&&0&&&\\ &&0&&1&&26&&1&&0&&\\ &&&0&&0&&0&&0&&&\\ &&&&0&&5&&0&&&&\\ &&&&&0&&0&&&&&\\ &&&&&& 1 &&&&&&\\ \end{array}\] This Hodge diamond looks like the one of a $K3$ surface.
Interestingly, the relation \[ \hbox{K\"uchle\ fourfolds\ of\ type\ d3}\ \leftrightsquigarrow \ \hbox{$K3$\ surfaces} \] goes further than a similarity of Hodge numbers: Kuznetsov \cite[Corollary 3.6]{Kuz} has shown that K\"uchle fourfolds $X$ of type d3 are related to $K3$ surfaces on the level of derived categories (in the sense that the derived category of $X$ admits a semi--orthogonal decomposition, and the interesting part of this decomposition is isomorphic to the derived category of a $K3$ surface). The main result of the present note is that K\"uchle fourfolds of type d3 behave like $K3$ surfaces from a Chow--theoretic point of view: \begin{nonumbering}[=theorem \ref{main}] Let $X$ be a K\"uchle fourfold of type d3. Then $X$ has a multiplicative Chow--K\"unneth decomposition (in the sense of \cite{SV}). \end{nonumbering} This is very easily proven, provided one uses Kuznetsov's alternative description \cite{Kuz} of K\"uchle fourfolds of type d3. Theorem \ref{main} has interesting consequences for the Chow ring $A^\ast(X)_{\mathbb{Q}}$: \begin{nonumberingc}[=corollary \ref{cor}] Let $X$ be a K\"uchle fourfold of type d3. Let $R^3(X)\subset A^3(X)_{\mathbb{Q}}$ be the subgroup generated by the Chern class $c_3(T_X)$ and intersections $A^1(X)\cdot A^2(X)$ of divisors with $2$--cycles. The cycle class map induces an injection \[ R^3(X)\ \hookrightarrow\ H^6(X,\mathbb{Q})\ .\] \end{nonumberingc} This is reminiscent of the famous result of Beauville--Voisin describing the Chow ring of a $K3$ surface \cite{BV}. More generally, there is a similar injectivity result for the Chow ring of certain self--products $X^m$ (corollary \ref{cor}). There are two other families of fourfolds on K\"uchle's list which have a Hodge diamond of $K3$ type: the families of type c5 and c7. There is also the complete intersection eightfold in $\operatorname{Gr}(2,V_8)$ of \cite[Proposition 5.2]{FM} which has a Hodge diamond of $K3$ type. 
It would be interesting to establish a multiplicative Chow--K\"unneth decomposition for those varieties as well; I hope to return to this in the future. \vskip0.6cm \begin{convention} In this note, the word {\sl variety\/} will refer to a reduced irreducible scheme of finite type over $\mathbb{C}$. For a smooth variety $X$, we will denote by $A^j(X)$ the Chow group of codimension $j$ cycles on $X$ with $\mathbb{Q}$--coefficients. The notation $A^j_{hom}(X)$ will be used to indicate the subgroups of homologically trivial cycles. For a morphism between smooth varieties $f\colon X\to Y$, we will write $\Gamma_f\in A^\ast(X\times Y)$ for the graph of $f$. The contravariant category of Chow motives (i.e., pure motives with respect to rational equivalence as in \cite{Sc}, \cite{MNP}) will be denoted $\mathcal M_{\rm rat}$. We will write $H^\ast(X):=H^\ast(X,\mathbb{Q})$ for singular cohomology with $\mathbb{Q}$--coefficients. \end{convention} \vskip0.6cm \section{Multiplicative Chow--K\"unneth decomposition} \begin{definition}[Murre \cite{Mur}]\label{ck} Let $X$ be a smooth projective variety of dimension $n$. We say that $X$ has a {\em CK decomposition\/} if there exists a decomposition of the diagonal \[ \Delta_X= \pi^0_X+ \pi^1_X+\cdots +\pi^{2n}_X\ \ \ \hbox{in}\ A^n(X\times X)\ ,\] such that the $\pi^i_X$ are mutually orthogonal idempotents and $(\pi^i_X)_\ast H^\ast(X)= H^i(X)$. Given a CK decomposition for $X$, we set $$A^i(X)_{(j)} := (\pi_X^{2i-j})_\ast A^i(X).$$ The CK decomposition is said to be {\em self-dual\/} if \[ \pi^i_X = {}^t \pi^{2n-i}_X\ \ \ \hbox{in}\ A^n(X\times X)\ \ \ \forall i\ .\] (Here ${}^t \pi$ denotes the transpose of a cycle $\pi$.) (NB: ``CK decomposition'' is short-hand for ``Chow--K\"unneth decomposition''.) \end{definition} \begin{remark} \label{R:Murre} The existence of a Chow--K\"unneth decomposition for any smooth projective variety is part of Murre's conjectures \cite{Mur}, \cite{MNP}.
It is expected that for any $X$ with a CK decomposition, one has \begin{equation*}\label{hope} A^i(X)_{(j)}\stackrel{??}{=}0\ \ \ \hbox{for}\ j<0\ ,\ \ \ A^i(X)_{(0)}\cap A^i_{num}(X)\stackrel{??}{=}0. \end{equation*} These are Murre's conjectures B and D, respectively. \end{remark} \begin{definition}[Definition 8.1 in \cite{SV}]\label{mck} Let $X$ be a smooth projective variety of dimension $n$. Let $\Delta_X^{sm}\in A^{2n}(X\times X\times X)$ be the class of the small diagonal \[ \Delta_X^{sm}:=\bigl\{ (x,x,x) : x\in X\bigr\}\ \subset\ X\times X\times X\ .\] A CK decomposition $\{\pi^i_X\}$ of $X$ is {\em multiplicative\/} if it satisfies \[ \pi^k_X\circ \Delta_X^{sm}\circ (\pi^i_X\otimes \pi^j_X)=0\ \ \ \hbox{in}\ A^{2n}(X\times X\times X)\ \ \ \hbox{for\ all\ }i+j\not=k\ .\] In that case, \[ A^i(X)_{(j)}:= (\pi_X^{2i-j})_\ast A^i(X)\] defines a bigraded ring structure on the Chow ring\,; that is, the intersection product has the property that \[ \operatorname{im} \Bigl(A^i(X)_{(j)}\otimes A^{i^\prime}(X)_{(j^\prime)} \xrightarrow{\cdot} A^{i+i^\prime}(X)\Bigr)\ \subseteq\ A^{i+i^\prime}(X)_{(j+j^\prime)}\ .\] (For brevity, we will write {\em MCK decomposition\/} for ``multiplicative Chow--K\"unneth decomposition''.) \end{definition} \begin{remark} The property of having an MCK decomposition is severely restrictive, and is closely related to Beauville's ``(weak) splitting property'' \cite{Beau3}. For more ample discussion, and examples of varieties admitting an MCK decomposition, we refer to \cite[Chapter 8]{SV}, as well as \cite{V6}, \cite{SV2}, \cite{FTV}, \cite{LV}. \end{remark} There are the following useful general results: \begin{proposition}[Shen--Vial \cite{SV}]\label{product} Let $M,N$ be smooth projective varieties that have an MCK decomposition. Then the product $M\times N$ has an MCK decomposition.
\end{proposition} \begin{proof} This is \cite[Theorem 8.6]{SV}, which shows more precisely that the {\em product CK decomposition\/} \[ \pi^i_{M\times N}:= \sum_{k+\ell=i} \pi^k_M\times \pi^\ell_N\ \ \ \in A^{\dim M+\dim N}\bigl((M\times N)\times (M\times N)\bigr) \] is multiplicative. \end{proof} \begin{proposition}[Shen--Vial \cite{SV2}]\label{blowup} Let $M$ be a smooth projective variety, and let $f\colon\widetilde{M}\to M$ be the blow--up with center a smooth closed subvariety $N\subset M$. Assume that \begin{enumerate} \item $M$ and $N$ have a self--dual MCK decomposition; \item the Chern classes of the normal bundle $\mathcal N_{N/M}$ are in $A^\ast_{(0)}(N)$; \item the graph of the inclusion morphism $N\to M$ is in $A^\ast_{(0)}(N\times M)$; \item the Chern classes $c_j(T_M)$ are in $A^\ast_{(0)}(M)$. \end{enumerate} Then $\widetilde{M}$ has a self--dual MCK decomposition, the Chern classes $c_j(T_{\widetilde{M}})$ are in $A^\ast_{(0)}(\widetilde{M})$, and the graph $\Gamma_f$ is in $A^\ast_{(0)}( \widetilde{M}\times M)$. \end{proposition} \begin{proof} This is \cite[Proposition 2.4]{SV2}. \end{proof} \section{Main result} \begin{theorem}\label{main} Let $X$ be a K\"uchle fourfold of type d3. Then $X$ has a self--dual MCK decomposition. Moreover, the Chern classes $c_j(T_X)$ are in $A^\ast_{(0)}(X)$. \end{theorem} \begin{proof} The argument relies on the following alternative description of $X$ (this is \cite[Corollary 3.5]{Kuz}): \begin{theorem}[Kuznetsov \cite{Kuz}]\label{alter} Let $X$ be a K\"uchle fourfold of type d3. Then $X$ is isomorphic to the blow--up of $(\mathbb{P}^1)^4$ in a $K3$ surface $S$ obtained as smooth intersection of two divisors of multidegree $(1,1,1,1)$. \end{theorem} (The argument of \cite[Corollary 3.5]{Kuz} also shows that conversely, any blow--up of $(\mathbb{P}^1)^4$ in a $K3$ surface $S$ of this type is a K\"uchle fourfold of type d3. We will not need this.)
We wish to apply the general result proposition \ref{blowup} to $M=(\mathbb{P}^1)^4$ and $N=S$. All we need to do is to check that the assumptions of proposition \ref{blowup} are met. Assumption (1) is satisfied, since varieties with trivial Chow groups and $K3$ surfaces have a self--dual MCK decomposition \cite[Example 8.17]{SV}. Assumption (4) is trivially satisfied, since $A^\ast_{hom}((\mathbb{P}^1)^4)=0$ and so $A^\ast_{}((\mathbb{P}^1)^4)=A^\ast_{(0)}((\mathbb{P}^1)^4)$. To check assumptions (2) and (3), we consider things family--wise. That is, we write \[ \bar{B}:= \mathbb{P} H^0\bigl( (\mathbb{P}^1)^4, \mathcal O(1,1,1,1)^{\oplus 2}\bigr) \] and we consider the universal complete intersection \[\bar{\mathcal S}\ \to\ \bar{B}\ .\] We write $B\subset\bar{B}$ for the Zariski open parametrizing smooth dimensionally transversal intersections, and $\mathcal S\to B$ for the base change (so the fibres $S_b$ of $\mathcal S\to B$ are exactly the $K3$ surfaces considered in theorem \ref{alter}). We make the following claim: \begin{claim}\label{gfc} Let $\Gamma\in A^i(\mathcal S)$ be such that \[ \Gamma\vert_{S_b}=0\ \ \ \hbox{in}\ H^{2i}(S_b)\ \ \ \forall b\in B\ .\] Then also \[ \Gamma\vert_{S_b}=0\ \ \ \hbox{in}\ A^i(S_b)\ \ \ \forall b\in B\ .\] \end{claim} We argue that the claim implies that assumptions (2) and (3) are met (and so proposition \ref{blowup} can be applied to prove theorem \ref{main}). Indeed, let $p_j\colon \mathcal S\times_B \mathcal S\to \mathcal S$, $j=1,2$, denote the two projections.
We observe that \[ \pi^0_\mathcal S:={1\over 24} (p_1)^\ast c_2(T_{\mathcal S/B})\ ,\ \ \pi^4_\mathcal S:={1\over 24} (p_2)^\ast c_2(T_{\mathcal S/B})\ ,\ \ \pi^2_\mathcal S:= \Delta_\mathcal S-\pi^0_\mathcal S-\pi^4_\mathcal S\ \ \ \in A^4(\mathcal S\times_B \mathcal S) \] defines a ``relative MCK decomposition'', in the sense that for any $b\in B$, the restriction $\pi^i_\mathcal S\vert_{S_b\times S_b}$ defines an MCK decomposition (which we denote $\pi^i_{S_b}$) for $S_b$. To check that assumption (2) is satisfied, we need to check that for any $b\in B$ there is vanishing \begin{equation}\label{need} (\pi^2_{S_b})_\ast c_2(\mathcal N_{S_b/(\mathbb{P}^1)^4})\stackrel{??}{=}0\ \ \ \hbox{in}\ A^2(S_b)\ .\end{equation} But we can write \[ (\pi^2_{S_b})_\ast c_2(\mathcal N_{S_b/(\mathbb{P}^1)^4}) = \Bigl( (\pi^2_\mathcal S)_\ast c_2(\mathcal N_{\mathcal S/((\mathbb{P}^1)^4\times B)})\Bigr)\vert_{S_b} \ \ \ \hbox{in}\ A^2(S_b)\ \] (for the formalism of relative correspondences, cf. \cite[Chapter 8]{MNP}), and besides we know that $ (\pi^2_{S_b})_\ast c_2(\mathcal N_{S_b/(\mathbb{P}^1)^4})$ is homologically trivial ($\pi^2_{S_b}$ acts as zero on $H^4(S_b)$). Thus, the claim implies the necessary vanishing (\ref{need}). Assumption (3) is checked similarly. Let $\iota_b\colon S_b\to (\mathbb{P}^1)^4$ and $\iota\colon \mathcal S\to (\mathbb{P}^1)^4\times B$ denote the inclusion morphisms. To check assumption (3), we need to convince ourselves of the vanishing \begin{equation}\label{need2} (\pi^\ell_{S_b\times(\mathbb{P}^1)^4})_\ast (\Gamma_{\iota_b})\stackrel{??}{=}0\ \ \ \hbox{in}\ A^4(S_b\times (\mathbb{P}^1)^4)\ \ \ \forall \ell\not= 8\ ,\ \forall b\in B\ .\end{equation} The fact that $\ell\not=8$ implies that $ (\pi^\ell_{S_b\times(\mathbb{P}^1)^4})_\ast (\Gamma_{\iota_b})$ is homologically trivial.
Furthermore, we can write the cycle we are interested in as the restriction of a universal cycle: \[ (\pi^\ell_{S_b\times(\mathbb{P}^1)^4})_\ast (\Gamma_{\iota_b}) = \Bigl( (\sum_{j+k=\ell} \pi^j_\mathcal S\times \pi^k_{(\mathbb{P}^1)^4}) _\ast (\Gamma_\iota)\Bigr)\vert_{S_b\times (\mathbb{P}^1)^4} \ \ \ \hbox{in}\ A^4(S_b\times (\mathbb{P}^1)^4)\ .\] For any $b\in B$, there is a commutative diagram \[ \begin{array}[c]{ccc} A^4(\mathcal S\times (\mathbb{P}^1)^4) &\to& A^4(S_b\times (\mathbb{P}^1)^4) \\ &&\\ \ \ \ \downarrow{\cong}&&\ \ \ \downarrow{\cong}\\ &&\\ \bigoplus A^\ast(\mathcal S) &\to& \bigoplus A^\ast(S_b)\\ \end{array} \] where horizontal arrows are restriction to a fibre, and where vertical arrows are isomorphisms by repeated application of the projective bundle formula for Chow groups. Claim \ref{gfc} applied to the lower horizontal arrow shows the vanishing (\ref{need2}), and so assumption (3) holds. It is left to prove the claim. Since $A^i_{hom}(S_b)=0$ for $i\le 1$, the only non--trivial case is $i=2$. Given $\Gamma\in A^2(\mathcal S)$ as in the claim, let $\bar{\Gamma}\in A^2(\bar{\mathcal S})$ be a cycle restricting to $\Gamma$. We consider the two projections \[ \begin{array}[c]{ccc} \bar{\mathcal S}&\xrightarrow{\pi}& (\mathbb{P}^1)^4 \\ \ \ \ \ \downarrow{\scriptstyle \phi}&&\\ \ \ \bar{B}\ &&\\ \end{array}\] Since any point of $(\mathbb{P}^1)^4$ imposes exactly one condition on $\bar{B}$, the morphism $\pi$ has the structure of a projective bundle. As such, any $\bar{\Gamma}\in A^{2}(\bar{\mathcal S})$ can be written \[ \bar{\Gamma}= \sum_{\ell=0}^2 \pi^\ast( a_\ell) \cdot \xi^\ell \ \ \ \hbox{in}\ A^{2}(\bar{\mathcal S})\ ,\] where $a_\ell\in A^{2-\ell}( (\mathbb{P}^1)^4)$ and $\xi\in A^1(\bar{\mathcal S})$ is the relative hyperplane class. Let $h:=c_1(\mathcal O_{\bar{B}}(1))\in A^1(\bar{B})$. 
There is a relation \[ \phi^\ast(h)=\alpha \xi + \pi^\ast(h_1)\ \ \ \hbox{in}\ A^1(\bar{\mathcal S})\ ,\] where $\alpha\in\mathbb{Q}$ and $h_1\in A^1((\mathbb{P}^1)^4)$. As in \cite[Proof of Lemma 1.1]{PSY}, one checks that $\alpha\not=0$ (if $\alpha$ were $0$, we would have $\phi^\ast(h^{\dim \bar{B}})=\pi^\ast(h_1^{\dim \bar{B}})$, which is absurd since $\dim \bar{B}>4$ and so the right--hand side is $0$). Hence, there is a relation \[ \xi = {1\over \alpha} \bigl(\phi^\ast(h)-\pi^\ast(h_1)\bigr)\ \ \ \hbox{in}\ A^1(\bar{\mathcal S})\ .\] For any $b\in B$, the restriction of $\phi^\ast(h)$ to the fibre $S_b$ vanishes, and so it follows that \[ \bar{\Gamma}\vert_{S_b} = a_0^\prime\vert_{S_b}\ \ \ \hbox{in}\ A^{2}(S_b)\ \] for some $a_0^\prime\in A^2( (\mathbb{P}^1)^4)$. But \[ A^2( (\mathbb{P}^1)^4)= \bigoplus_{i+j+k+\ell=2} A^i(\mathbb{P}^1)\otimes A^j(\mathbb{P}^1)\otimes A^k(\mathbb{P}^1)\otimes A^\ell(\mathbb{P}^1) \] is generated by intersections of divisors, and so Beauville--Voisin's result \cite{BV} implies that \[ \bar{\Gamma}\vert_{S_b} = a_0^\prime\vert_{S_b} \ \ \in\ \mathbb{Q} {\mathfrak{o}}_{S_b}\ \ \ \subset\ A^{2}(S_b)\ ,\] where ${\mathfrak{o}}_{S_b}\in A^2(S_b)$ is the distinguished $0$--cycle of \cite{BV}. This proves the claim. \end{proof} \section{A consequence} \begin{corollary}\label{cor} Let $X$ be a K\"uchle fourfold of type d3, and let $m\in\mathbb{N}$. Let $R^\ast(X^m)\subset A^\ast(X^m)$ be the $\mathbb{Q}$--subalgebra \[ R^\ast(X^m):= < (p_i)^\ast A^1(X), (p_i)^\ast A^2(X), (p_{ij})^\ast(\Delta_X), (p_i)^\ast c_3(T_X)>\ \ \ \subset\ A^\ast(X^m)\ .\] (Here $p_i\colon X^m\to X$ and $p_{ij}\colon X^m\to X^2$ denote projection to the $i$th factor, resp. to the $i$th and $j$th factor.) The cycle class map induces injections \[ R^j(X^m)\ \hookrightarrow\ H^{2j}(X^m)\] in the following cases: \begin{enumerate} \item $m=1$ and $j$ arbitrary; \item $m=2$ and $j\ge 5$; \item $m=3$ and $j\ge 9$.
\end{enumerate} \end{corollary} \begin{proof} Theorem \ref{main}, in combination with proposition \ref{product}, ensures that $X^m$ has an MCK decomposition, and so $A^\ast(X^m)$ has the structure of a bigraded ring under the intersection product. The corollary is now implied by the combination of the two following claims: \begin{claim}\label{c1} There is an inclusion \[ R^\ast(X^m)\ \ \subset\ A^\ast_{(0)}(X^m)\ .\] \end{claim} \begin{claim}\label{c2} The cycle class map induces injections \[ A^j_{(0)}(X^m)\ \hookrightarrow\ H^{2j}(X^m)\ \] provided $m=1$, or $m=2$ and $j\ge 5$, or $m=3$ and $j\ge 9$. \end{claim} To prove claim \ref{c1}, we note that $A^k_{hom}(X)=0$ for $k\le 2$, which readily implies the equality $A^k(X)=A^k_{(0)}(X)$ for $k\le 2$. The fact that $c_3(T_X)$ is in $A^3_{(0)}(X)$ is part of theorem \ref{main}. That $\Delta_X\in A^4_{(0)}(X\times X)$ holds for any $X$ with a self--dual MCK decomposition \cite[Lemma 1.4]{SV2}. Since the projections $p_i$ and $p_{ij}$ are pure of grade $0$ \cite[Corollary 1.6]{SV2}, and $A^\ast_{(0)}(X^m)$ is a ring under the intersection product, this proves claim \ref{c1}. To prove claim \ref{c2}, we observe that Manin's blow--up formula \cite[Theorem 2.8]{Sc} gives an isomorphism of motives \[ h(X)\cong h(S)(1)\oplus {\mathds{1}} \oplus {\mathds{1}}(1)^{\oplus 4} \oplus {\mathds{1}}(2)^{\oplus 6} \oplus {\mathds{1}}(3)^{\oplus 4} \oplus {\mathds{1}}(4)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] Moreover, in view of proposition \ref{blowup} (cf. also \cite[Proposition 2.4]{SV2}), the correspondence inducing this isomorphism is of pure grade $0$. In particular, for any $m\in\mathbb{N}$ we have isomorphisms of Chow groups \[ A^j(X^m)\cong A^{j-m}(S^m)\oplus \bigoplus_{k=0}^4 A^{j-m+1-k}(S^{m-1})^{b_k} \oplus \bigoplus A^\ast(S^{m-2})\oplus \bigoplus_{\ell\ge 3} A^\ast(S^{m-\ell}) \ , \] and this isomorphism respects the $A^\ast_{(0)}()$ parts.
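As a quick numerical cross-check of this decomposition (a sanity check of my own, not part of the argument), one can recover the even Betti numbers of $X$ from the blow-up description: blowing up the $K3$ surface $S$ inside $(\mathbb{P}^1)^4$ adds a shifted copy of $H^\ast(S)$. A short pure-Python sketch:

```python
# Sanity check (my own, not part of the argument): Betti numbers of the
# Kuechle fourfold X of type d3, computed from the blow-up description
# X = Bl_S (P^1)^4 and the motivic decomposition displayed above.

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# even Betti numbers of P^1, as coefficients of a polynomial in t^2
p1 = [1, 1]

# (P^1)^4: b_0, b_2, b_4, b_6, b_8 = 1, 4, 6, 4, 1
p14 = [1]
for _ in range(4):
    p14 = poly_mul(p14, p1)

# K3 surface: b_0, b_2, b_4 = 1, 22, 1
k3 = [1, 22, 1]

# blowing up S inside (P^1)^4 adds H^{2k-2}(S) to H^{2k}(X)
x = p14[:]
for k, b in enumerate(k3):
    x[k + 1] += b

print(x)  # [1, 5, 28, 5, 1]
```

All odd Betti numbers vanish, and one finds $(b_0,b_2,b_4,b_6,b_8)=(1,5,28,5,1)$, consistent with the $K3$-type Hodge diamond mentioned in the introduction.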
Claim \ref{c2} now follows from the fact that for any surface $S$ with an MCK decomposition, and any $m\in\mathbb{N}$, the cycle class map induces injections \[ A^i_{(0)}(S^m)\ \hookrightarrow\ H^{2i}(S^m)\ \ \ \forall i\ge 2m-1\ \] (this is noted in \cite[Introduction]{V6}, cf. also \cite[Proof of Lemma 2.20]{acs}). \end{proof} \vskip1cm \begin{nonumberingt} Thanks to my colleagues Kai--kun and Len--kun at the Schiltigheim Centre for Pure Mathematics. \end{nonumberingt} \vskip1cm \end{document}
\begin{document} \title{Refined approximation for a class of Landau-de Gennes energy minimizers} \author{Luc Nguyen \thanks{OxPDE, Mathematical Institute, University of Oxford, 24--29 St Giles', Oxford OX1 3LB, UK; email: [email protected]}~ and Arghir Zarnescu \thanks{OxPDE, Mathematical Institute, University of Oxford, 24--29 St Giles', Oxford OX1 3LB, UK; email: [email protected]}} \maketitle \begin{abstract} We study a class of Landau-de Gennes energy functionals in the asymptotic regime of small elastic constant $L>0$. We revisit and sharpen the results in [18] on the convergence to the limit Oseen-Frank functional. We examine how the Landau-de Gennes global minimizers are approximated by the Oseen-Frank ones by determining the first order term in their asymptotic expansion as $L\to 0$. We identify the appropriate functional setting in which the asymptotic expansion holds, the sharp rate of convergence to the limit and determine the equation for the first order term. We find that the equation has a ``normal component'' given by an algebraic relation and a ``tangential component'' given by a linear system. \end{abstract} {\it 2000 Mathematics Subject Classification.} Primary 35J60, 35Q56; Secondary 76A15, 58E20. {\bf Keywords.} Nematic liquid crystals, De Gennes, $3$-d Ginzburg-Landau, Oseen--Frank limit, $Q$-tensors, harmonic maps. \section{Introduction} The complexity of nematic liquid crystals has led to the existence of several major competing theories attempting to describe them, ranging from simple and popular theories such as the Oseen-Frank theory \cite{of} to more involved ones such as the Landau-de Gennes theory \cite{dg}. Because of intertwining features of these two theories, it is of considerable interest to consider the asymptotics of the Landau-de Gennes theory to the Oseen-Frank limit. The difference between different theories exhibits itself in many ways. 
For our purpose, we will restrict our discussion to the Oseen-Frank and the Landau-de Gennes theories. For other theories, we refer the readers to the stimulating review by Lin and Liu \cite{Lin-Liu}. First and foremost, different theories use different mathematical descriptions of the molecular alignment. For example, to describe a certain liquid crystal contained in a region $\Omega$ $\subset$ ${\mathbb R}^d$, $d$ $=$ $2$, $3$, the Oseen-Frank theory uses vector fields $n:\Omega\to\mathbb{S}^2$, whereas the Landau-de Gennes theory uses matrix-valued functions $Q:\Omega\to {\mycal{S}}_0$, referred to as $Q$-tensors, where ${\mycal{S}}_0$ denotes the set of three-by-three symmetric and traceless matrices. The different number of degrees of freedom used in the different theories is reflected in their different predictive capacities. The simple and very popular Oseen-Frank description, which uses only two degrees of freedom, is easy to work with but misses certain interesting features of liquid crystals. The first deficiency is that it ignores the important ``head-to-tail'' symmetry of the material. The consequences of this deficiency of the Oseen-Frank theory are analysed in \cite{BallZar}. Moreover, the Oseen-Frank theory can only explain some of the so-called ``defect'' patterns present in nematic liquid crystals, namely the ``point'' defects but not the more complicated ``line'' or ``wall'' defects. Part of this limitation of the Oseen-Frank theory could be due to the fact that it only considers uniaxial nematic liquid crystals. The more complex Landau-de Gennes theory allows for both uniaxial and biaxial nematic liquid crystals and has five degrees of freedom, and hence could have better predictive capacities.
The additional complexity of Landau-de Gennes theory can, potentially, change dramatically the interpretation of defects: while in the Oseen-Frank theory the defects are discontinuities of the vector fields, in Landau-de Gennes theory one could interpret defects as discontinuities of eigenvectors. Although this possibility was considered in the mathematical literature [16], [18] and is consistent with the view of P.G. de Gennes [6], there does not yet seem to be a generally accepted definition of defects, and the relevance of this proposed definition remains to be explored. Despite many differences, it is of interest from both mathematical and practical points of view to see how and to what extent the mathematically simpler Oseen-Frank theory can be used to ``approximate'' the Landau-de Gennes theory. In \cite{Ma-Za}, Majumdar and Zarnescu showed that in the Oseen-Frank limit, and under suitable boundary conditions, a global ``Landau-de Gennes energy minimizer'' $Q_L$, which is parametrized by an elastic constant $L$, can be approximated in a suitable sense by a global ``Oseen-Frank energy minimizer'' $Q_*$ for $L$ sufficiently small. However, many interesting features of $Q_L$ are not necessarily captured by $Q_*$, which has only two degrees of freedom. For example, $Q_*$ has only point defects and is uniaxial, and therefore does not reflect the appearance of optical ``line defects'' that was observed in Schlieren texture \cite{dg}, regardless of whether one interprets defects as discontinuities of eigenvectors or as the uniaxial-biaxial interface. From the foregoing discussion, it is therefore necessary to consider the difference $D_L$ $:=$ $Q_L - Q_*$ which potentially encodes the use of the three additional degrees of freedom. Even though $D_L$ is a small quantity, its presence might nevertheless create significant physical effects, e.g. in the eigenvalues and eigenvectors of $Q_L$. Evidence for this behavior has been observed numerically, see \cite[Fig.
1]{SonKilHes}. From the theoretical point of view, as far as the eigenvectors of $Q_L$ are concerned, the following behavior is possible. Consider $A_*, A_L:[0,1]^3\to {\mycal{S}}_0$ which are defined by \[ A_*(x,y,z)=\left(\begin{array}{ccc} 1 & y & 0\\ y & 1 & 0\\ 0 & 0 & -2 \end{array}\right) \text{ and } A_L=\left(\begin{array}{ccc} 1+ Lx & y & 0\\ y & 1-Lx & 0\\ 0 & 0 & -2 \end{array}\right). \] It is straightforward to show that there exist smooth functions $e_i:[0,1]^3\to \mathbb{S}^2,i=1,2,3$ such that at each point in space, $\{e_1, e_2, e_3\}$ is an orthonormal frame of eigenvectors of $A_*$. In contrast, no such smooth eigenframe exists for $A_L$. Moreover, if one regards defects as discontinuities of eigenvectors, then $A_L$ has a line defect while $A_*$ does not have any defect, even though it is close to $A_L$. We nevertheless note that, in the above example, neither $A_*$ nor $A_L$ is a minimizer of the relevant functionals. The purpose of this paper is twofold. Firstly, we revisit and sharpen the convergence result in \cite{Ma-Za}. Secondly, we prove in a suitable setting that $D_L$ is of order $L$ and derive the equations that govern the limit of $\frac{D_L} {L}$. From a mathematical point of view, our problem bears strong analogies to the Ginzburg-Landau theory for superconductors, which has been investigated intensively in the literature. In fact, the work of B\'{e}thuel, Brezis and H\'{e}lein \cite{BBH} has provided much guidance in our work. The new complexity and challenge in our problem come mainly from having to deal with ``high dimensional'' objects, $Q$-tensors defined on a three-dimensional domain with values in a five-dimensional space.
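The eigenvector example above can be made concrete numerically. The following pure-Python sketch (an illustration of mine, not from the paper) checks that the in-plane eigenvalues of $A_L$ are $1\pm\sqrt{L^2x^2+y^2}$, so they collide exactly on the segment $\{x=y=0\}$, and that the leading eigenvector makes half a turn around that segment, so no continuous eigenframe exists:

```python
# Numerical illustration (not taken from the paper): for A_L as above, the
# eigenvalues of the upper-left 2x2 block [[1+L*x, y], [y, 1-L*x]] are
# 1 +/- sqrt(L^2 x^2 + y^2); tracking the leading eigenvector around the
# segment {x = y = 0} shows it cannot be chosen continuously -- a "line defect".
import math

def top_eigenpair(L, x, y):
    """Largest eigenvalue and a unit eigenvector of [[1+L*x, y], [y, 1-L*x]]."""
    lam = 1.0 + math.hypot(L * x, y)
    phi = 0.5 * math.atan2(y, L * x)   # eigen-direction angle: tan(2*phi) = y/(L*x)
    return lam, (math.cos(phi), math.sin(phi))

L = 0.1
lam, _ = top_eigenpair(L, 0.3, 0.4)
assert abs(lam - (1 + math.hypot(L * 0.3, 0.4))) < 1e-12  # closed form

# walk once around the z-axis, flipping signs to keep the eigenvector
# continuous; after a full loop it returns to MINUS its starting value
prev = None
vecs = []
for k in range(1001):
    t = 2 * math.pi * k / 1000
    _, v = top_eigenpair(L, math.cos(t), math.sin(t))
    if prev is not None and v[0] * prev[0] + v[1] * prev[1] < 0:
        v = (-v[0], -v[1])
    vecs.append(v)
    prev = v
dot = vecs[0][0] * vecs[-1][0] + vecs[0][1] * vecs[-1][1]
print(round(dot))  # -1: no continuous eigenframe around {x = y = 0}
```

The sign flip after a full loop is exactly the obstruction to a smooth eigenframe for $A_L$, while for $A_*$ (where the off-diagonal entry does not wind) no such obstruction arises.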
The geometry of the five dimensional linear space ${\mycal{S}}_0$ and its appropriate decompositions that take into account the behaviour near the two dimensional ``limit manifold'' (see \eqref{LimitSurf::Def} for a precise definition) are crucial in obtaining certain cancellations that allow one to bypass the singular character of the limit $L\to 0$. In order to obtain estimates independent of the elastic constant $L$ we need to use in various ways the maximum principle for scalar equations, and the choice of the right scalar quantities is a highly nontrivial task that is strongly influenced by the understanding of the appropriate geometry of the $Q$-tensor and its relations with the equations. The use of the physically significant ``spectral {\it scalar} quantities'' (i.e., those that depend only on the spectrum of $Q$) was already recognized in \cite{Ma-Za} but the analysis here goes significantly further by also using certain specific {\it tensorial} quantities $X_L$, $Y_L$ and $Z_L$ defined by \eqref{XDef}, \eqref{YDef} and \eqref{ZDef}. Note that all these quantities are directly related to the matrix minimal polynomial associated to (matrices belonging to) the limit manifold. The rest of the paper is organized as follows. In Section \ref{Statements}, we set up the mathematical framework of our problem and state our main results. In Section \ref{GeomDes}, we quickly survey some geometry of the limit manifold ${\mycal{S}}_*$. In the first part of Section \ref{ProjForm}, we derive the equations that govern the limit of $\frac{D_L}{L}$ provided it exists. In the second part we derive the equations for the orthogonal projection of $Q_L$ onto the limit manifold, if such a projection exists. These derivations use crucially the geometry of the limit manifold. In Sections \ref{SEC-C1AlphaConv}, \ref{SEC-CjConv} we prove the $C^{1,\alpha}$ and $C^j$ convergence of $Q_L$ to $Q_*$, which extends the previous convergence results in \cite{Ma-Za}.
Sections \ref{SEC-HDEst} and \ref{SEC-Rate} are devoted to proving the convergence of $\frac{D_L}{L}$ in a suitable setting. \section{Notations and main results}\label{Statements} Let $M_{3 \times 3}^{\rm sym}$ denote the set of real $3\times 3$ symmetric matrices, ${\mycal{S}}_0$ the set of traceless symmetric $3 \times 3$ matrices, and $\Omega$ an open bounded subset of ${\mathbb R}^3$ with smooth boundary. Consider a Landau-de Gennes functional of the form \begin{equation} I_L[Q]= \int_\Omega \Big[\frac{L}{2}|\nabla Q|^2 + f_B(Q)\Big]\,dx, \qquad Q \in H^1(\Omega, {\mycal{S}}_0). \label{L-DGFunc} \end{equation} Here $f_B$ is the bulk energy density that accounts for the bulk effects, $|\nabla Q|^2$ is the elastic energy density that penalizes spatial inhomogeneities and $L$ $>$ $0$ is a material-dependent elastic constant. In this paper, we will confine ourselves to the case where the bulk energy density is a quartic polynomial of the form \begin{equation} f_B(Q) = -\frac{a^2}{2}{\rm tr}(Q^2) - \frac{b^2}{3}{\rm tr}(Q^3) + \frac{c^2}{4}[{\rm tr}(Q^2)]^2, \label{BulkEDensity} \end{equation} where $a^2$, $b^2$ and $c^2$ are material- and temperature-dependent positive constants. It is well-known that this type of bulk energy density is the simplest form that allows multiple local minima and a first order nematic-isotropic phase transition \cite{dg}, \cite{Virga}. The Euler-Lagrange equation for the functional $I_L$ is \begin{equation} L\,\Delta Q_L = -a^2\,Q_L - b^2[Q_L^2 - \frac{1}{3}{\rm tr}(Q_L^2)\,{\rm Id}] + c^2\,{\rm tr}(Q_L^2)\,Q_L, \label{L-DG::ELEqn} \end{equation} where the term $\frac{1}{3}b^2\,{\rm tr}(Q_L^2)\,{\rm Id}$ is a Lagrange multiplier that accounts for the tracelessness constraint. By standard arguments using elliptic theory, it can be seen that any $H^1$ solution of \eqref{L-DG::ELEqn} is real analytic, see e.g. \cite[Proposition 13]{Ma-Za}.
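As a consistency check on \eqref{L-DG::ELEqn} (my own sketch, with arbitrary sample constants), one can verify that the Lagrange multiplier term makes the right-hand side traceless, so that the equation is compatible with the constraint ${\rm tr}(Q)=0$:

```python
# Quick check (illustrative, not from the paper): the Lagrange multiplier term
# (b^2/3) tr(Q^2) Id makes the right-hand side of the Euler-Lagrange equation
# traceless, consistent with the constraint tr(Q) = 0.
import random

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def trace(P):
    return sum(P[i][i] for i in range(3))

def rhs(Q, a2=1.3, b2=0.7, c2=2.1):
    """-a^2 Q - b^2 (Q^2 - tr(Q^2)/3 Id) + c^2 tr(Q^2) Q, entrywise."""
    Q2 = mat_mul(Q, Q)
    t2 = trace(Q2)
    return [[-a2 * Q[i][j]
             - b2 * (Q2[i][j] - (t2 / 3.0 if i == j else 0.0))
             + c2 * t2 * Q[i][j]
             for j in range(3)] for i in range(3)]

random.seed(0)
# random symmetric traceless Q
s = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
Q = [[(s[i][j] + s[j][i]) / 2 for j in range(3)] for i in range(3)]
d = trace(Q) / 3.0
for i in range(3):
    Q[i][i] -= d

print(abs(trace(rhs(Q))) < 1e-12)  # True
```

The trace of the right-hand side vanishes identically (up to floating-point roundoff), whatever the values of $a^2$, $b^2$, $c^2$.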
In \cite{Ma-Za}, it was shown that, subject to a given suitable boundary value $Q_b$, along a subsequence, the minimizers $Q_L$ of $I_L$ converge uniformly away from a set of discrete points to a minimizer $Q_*$ of the functional \begin{equation} I_*[Q] = \int_\Omega |\nabla Q|^2\,dx, \qquad Q \in H^1(\Omega,{\mycal{S}}_*) \label{LimitFunc} \end{equation} where \begin{equation} {\mycal{S}}_* = \{Q \in {\mycal{S}}_0: f_B(Q) = \min_{R \in {\mycal{S}}_0} f_B(R)\}. \label{LimitSurf::Def} \end{equation} Even though \eqref{LimitSurf::Def} gives a rather implicit definition for ${\mycal{S}}_*$, this surface can in fact be represented as (see \cite{Maj}) \begin{equation} {\mycal{S}}_{*} = \Big\{Q = s_+(n \otimes n - \frac{1}{3}{\rm Id}), n \in {\mathbb S}^2\Big\}, \qquad s_+ = \frac{b^2 + \sqrt{b^4 + 24 a^2 c^2}}{4c^2}. \label{LimitSurf::Rep} \end{equation} It is readily seen that ${\mycal{S}}_*$ is isometric (modulo a scaling) to the projective plane ${\mathbb R} P^2$, which is not orientable. If $\Omega$ is simply connected, it follows from \cite{BallZar} that $Q_*$ $=$ $s_+(n_* \otimes n_* - \frac{1}{3}{\rm Id})$ where $n_*$ $\in$ $H^1(\Omega,{\mathbb S}^2)$ minimizes the so-called Oseen-Frank functional, \begin{equation} I_{OF}[n] = \int_\Omega |\nabla n|^2\,dx, \qquad n \in H^1(\Omega,{\mathbb S}^2). \label{O-FFunc} \end{equation} Note that such $n_*$ is usually called a minimizing ${\mathbb S}^2$-valued harmonic map. Analogous to $n_*$, $Q_*$ is also an ``${\mycal{S}}_*$-valued harmonic map''. It satisfies the Euler-Lagrange equation for $I_*$, \begin{multline} \Delta Q_* = -\frac{2}{s_+^2}|\nabla Q_*|^2\,Q_* + \frac{2}{s_+}\Big[\sum_{\alpha = 1}^3 (\nabla_\alpha Q_*)^2 - \frac{1}{3}|\nabla Q_*|^2\,{\rm Id}\Big]\\ = -\frac{4}{s_+^2}(Q_* - \frac{1}{6}\,s_+\,{\rm Id})\sum_{\alpha=1}^3 (\nabla_\alpha Q_*)^2. \label{Limit::ELEqn} \end{multline} The set of discrete points away from which $Q_L$ converges to $Q_*$ mentioned above consists precisely of points of discontinuity of $Q_*$.
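The value of $s_+$ in \eqref{LimitSurf::Rep} can be checked numerically: on uniaxial tensors $Q = s(n\otimes n - \frac{1}{3}{\rm Id})$ one has ${\rm tr}(Q^2) = \frac{2s^2}{3}$ and ${\rm tr}(Q^3) = \frac{2s^3}{9}$, so $f_B$ reduces to a quartic in $s$ whose positive critical point is exactly $s_+$. A pure-Python sketch (the material constants are arbitrary sample values):

```python
# Numerical check (illustrative): along uniaxial tensors Q = s (n x n - Id/3),
# the bulk energy reduces to g(s) = -a^2 s^2/3 - 2 b^2 s^3/27 + c^2 s^4/9, and
# s_+ = (b^2 + sqrt(b^4 + 24 a^2 c^2)) / (4 c^2) is its positive critical point,
# with g(s_+) below the isotropic value g(0) = 0.
import math

a2, b2, c2 = 1.1, 0.8, 1.7  # arbitrary positive material constants a^2, b^2, c^2

def g(s):
    return -a2 * s**2 / 3 - 2 * b2 * s**3 / 27 + c2 * s**4 / 9

def dg(s):
    return -2 * a2 * s / 3 - 2 * b2 * s**2 / 9 + 4 * c2 * s**3 / 9

s_plus = (b2 + math.sqrt(b2**2 + 24 * a2 * c2)) / (4 * c2)

print(abs(dg(s_plus)) < 1e-9, g(s_plus) < 0)  # True True

# the trace identities behind the reduction, at s = s_+:
s = s_plus
lam = [-s / 3, -s / 3, 2 * s / 3]          # eigenvalues of s (n x n - Id/3)
tr2 = sum(v**2 for v in lam)
tr3 = sum(v**3 for v in lam)
assert abs(tr2 - 2 * s**2 / 3) < 1e-12
assert abs(tr3 - 2 * s**3 / 9) < 1e-12
```

This also makes visible why the constant $24$ (and not $4$) appears under the square root: the critical points of $g$ solve $2c^2 s^2 - b^2 s - 3a^2 = 0$.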
Presumably, these points correspond to point defects of the nematics. In this context, there is existing literature on the location of singularities. See \cite{AlLi} and the references therein. To state our main results, let $Q_b:$ $\partial\Omega$ $\rightarrow$ ${\mycal{S}}_*$ be a given smooth boundary datum and consider the minimization problems \begin{align} &\min\big\{I_L[Q]: Q \in H^1(\Omega,{\mycal{S}}_0), Q\big|_{\partial\Omega} \equiv Q_b\big\}, \label{MinProb}\\ &\min\big\{I_*[Q]: Q \in H^1(\Omega,{\mycal{S}}_*), Q\big|_{\partial\Omega} \equiv Q_b\big\}. \label{MinLimProb} \end{align} Our first result sharpens the $C^0$ convergence result proved in \cite{Ma-Za}. \begin{thm}\label{MainThm1} Let $\Omega$ be an open bounded subset of ${\mathbb R}^3$ and $Q_L$ be a minimizer of the minimization problem \eqref{MinProb}. For any sequence $L_k$ $\rightarrow$ $0$, there exists a subsequence $L_{k'}$ such that $Q_{L_{k'}}$ converges strongly in $H^1$-norm to a minimizer $Q_*$ of the minimization problem \eqref{MinLimProb}. Moreover, if ${\rm Sing}(Q_*)$ is the singular set of $Q_*$, i.e. the set of points $x$ $\in$ $\bar\Omega$ where $Q_*$ is not smooth in any neighborhood of $x$, then \begin{align*} &Q_{L_{k'}} \rightarrow Q_* \text{ in } C^{1,\alpha}_{\rm loc}(\bar\Omega\setminus{\rm Sing}(Q_*),{\mycal{S}}_0), \alpha \in (0,1),\\ &Q_{L_{k'}} \rightarrow Q_* \text{ in } C^j_{\rm loc}(\Omega\setminus {\rm Sing}(Q_*),{\mycal{S}}_0), j \geq 2. \end{align*} In addition, for $K$ $\Subset$ $\bar\Omega\setminus{\rm Sing}(Q_*)$, there exists $\bar L$ $>$ $0$ such that for $L_{k'}$ $<$ $\bar L$, we can rewrite \eqref{L-DG::ELEqn} as \[ \Delta Q_{L_{k'}} = -\frac{4}{s_+^2}(Q_{L_{k'}} - \frac{1}{6}\,s_+\,{\rm Id})\sum_{\alpha = 1}^3(\nabla_\alpha Q_{L_{k'}})^2 + \,R_{L_{k'}} \text{ in } K, \] where $R_{L_{k'}}$ satisfies \[ \big|\nabla^j R_{L_{k'}}(y)\big| \leq \frac{C(a^2,b^2,c^2,\Omega,K,Q_b,Q_*,j)}{{\rm dist}(y,\partial\Omega)^{j+2}}\,L_{k'} \text{ for }y \in K.
\] \end{thm} \begin{remark} (a) For the sake of clarity, let us highlight that, in Theorem \ref{MainThm1}, the $C^{1,\alpha}$-convergence of $Q_{L_{k'}}$ to $Q_*$ is up to the boundary. (b) The bound for $R_{L_{k'}}$ can be slightly improved. See Corollary \ref{ImprRemEst}. \end{remark} Having a good convergence of $Q_L$ to $Q_*$, we turn to the question of how to extract information about the other three degrees of freedom that $Q_*$ misses. A reasonable way is to look for some asymptotic expansion of the form $Q_L$ $=$ $Q_* + L\,Q_\bullet + O(L^2)$. Intuitively, such an expansion is only possible if $Q_*$ is an isolated solution to the limiting harmonic map problem \eqref{Limit::ELEqn}. It is thus reasonable to assume that the linearized operator ${\mycal{L}}_{Q_*}$ of the harmonic map problem has some kind of bijectivity. On a different note, we point out that, in general, one does not expect $\frac{1}{L}(Q_L - Q_*)$ to converge in the energy space $H^1_0(\Omega, {\mycal{S}}_0)$. For example, consider the case where $\Omega$ is the unit ball, $a^2$ $=$ $b^2$ $=$ $c^2$ $=$ $1$ and $Q_*$ is the so-called ``hedgehog'', i.e. \[ Q_* = \frac{3}{2}\Big(\frac{x \otimes x}{|x|^2} - \frac{1}{3}\,{\rm Id}\Big). \] By a direct computation using \eqref{FAE::Id01} in Theorem \ref{MainThm2} below, one finds that, if $Q_\bullet$ exists, then the normal component $A$ of $Q_\bullet$ (with respect to the decomposition ${\mycal{S}}_0$ $\approx$ $T_{Q_*}{\mycal{S}}_* \oplus (T_{Q_*} {\mycal{S}}_*)^\perp_{{\mycal{S}}_0}$) is given by \[ A = \frac{81}{20|x|^2}\Big(\frac{x \otimes x}{|x|^2} - \frac{1}{3}\,{\rm Id}\Big). \] This implies in particular that $Q_\bullet$ does not vanish on $\partial\Omega$. As the $\frac{1}{L}(Q_L - Q_*)$'s vanish on $\partial\Omega$, they cannot converge to $Q_\bullet$ in $H^1_0(\Omega)$. For the purpose of getting a convergence result for $\frac{1}{L}(Q_L - Q_*)$, we confine ourselves to the case where $Q_*$ is smooth.
Even though our assumption on $Q_*$ is very restrictive in the sense that it already bans the appearance of point defects in $Q_*$, it is not too restrictive in the study of higher dimensional defects of nematics. For, in practice, it has been observed that line defects can occur either close to point defects or far from point defects. We prove: \begin{thm}\label{MainThm2} Let $\Omega$ be an open bounded subset of ${\mathbb R}^3$ and $Q_L$ be a minimizer of the minimization problem \eqref{MinProb}. Assume that $Q_{L_k}$ converges strongly in $H^1(\Omega, {\mycal{S}}_0)$ to a minimizer $Q_*$ $\in$ $H^1(\Omega,{\mycal{S}}_*)$ of the limit minimization problem \eqref{MinLimProb} for some sequence $L_k$ $\rightarrow$ $0$. Assume in addition that $Q_*$ is smooth and that the linearized operator ${\mycal{L}}_{Q_*}:$ $H^1_0(\Omega,M_{3\times 3})$ $\rightarrow$ $H^{-1}(\Omega,M_{3\times 3})$ of the limit harmonic map problem (see \eqref{LinearizedOp}) is bijective. Then there exists $Q_\bullet$ $\in$ $C^\infty(\Omega,{\mycal{S}}_0) \cap H^s(\Omega,{\mycal{S}}_0)$ for any $0$ $<$ $s$ $<$ $1/2$ such that \begin{align*} &\frac{1}{L_{k}}(Q_{L_{k}} - Q_*) \rightarrow Q_\bullet \text{ in } H^s(\Omega), 0 < s < 1/2,\\ &\frac{1}{L_{k}}(Q_{L_{k}} - Q_*) \rightarrow Q_\bullet \text{ in } C^j_{\rm loc}(\Omega), j \geq 0. 
\end{align*} Moreover, if we split $Q_\bullet$ $=$ $A + B$ where $A$ belongs to the normal space $(T_{Q_*} {\mycal{S}}_*)^\perp_{{\mycal{S}}_0}$ to ${\mycal{S}}_*$ at $Q_*$ with respect to ${\mycal{S}}_0$ and $B$ belongs to the tangent space $T_{Q_*} {\mycal{S}}_*$ to ${\mycal{S}}_*$ at $Q_*$, then \begin{enumerate}[(i)] \item ${A}$ is given by \begin{equation} {A} = -\frac{2}{b^2s_+^2}\Big[\frac{6}{6a^2 + b^2\,s_+}|\nabla Q_*|^2\big(c^2\,Q_* + \frac{1}{3}b^2{\rm Id}\big)\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big) - \sum_{\alpha = 1}^3(\nabla_\alpha Q_*)^2\Big], \label{FAE::Id01} \end{equation} \item and ${B}$ satisfies in $\Omega$ the equations \begin{multline} \Delta {B} = \Big[-b^2({B}\,{A} + {A}\,{B}) - \frac{6c^2}{6a^2 + b^2\,s_+}|\nabla Q_*|^2\,{B}\Big]\\ -\frac{4}{s_+^2}\big[\big(\nabla {B}\big)^\parallel\,\nabla Q_* + \nabla Q_*\,\big(\nabla {B}\big)^\parallel\big]\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big) - \big(\Delta {A}\big)^\parallel, \label{FAE::Id02} \end{multline} where $\big(\nabla {B}\big)^\parallel$ and $\big(\Delta {A}\big)^\parallel$ are the tangential components of $\nabla {B}$ and $\Delta {A}$ with respect to the decomposition ${\mycal{S}}_0$ $\approx$ $T_{Q_*}{\mycal{S}}_* \oplus (T_{Q_*} {\mycal{S}}_*)^\perp_{{\mycal{S}}_0}$, respectively. \end{enumerate} \end{thm} \begin{remark} As an example, we note that, if $\Omega$ is simply connected and \begin{equation} \|Q_b - p\|_{C^2(\partial\Omega)} \leq \epsilon \label{intr::SmallData} \end{equation} for some fixed point $p$ $\in$ ${\mycal{S}}_*$ and some sufficiently small $\epsilon$ $>$ $0$, then ${\mycal{L}}_{Q_*}:$ $H^1_0(\Omega,M_{3\times 3})$ $\rightarrow$ $H^{-1}(\Omega,M_{3\times 3})$ is bijective. To see this, observe first that the constant map $p$ is the unique minimizing harmonic map that has constant boundary values $p$. 
Thus, by the stability of minimizing ${\mathbb S}^2$-harmonic maps \cite{HardtLin} and the lifting of maps in $H^1(\Omega,{\mycal{S}}_{*})$ to maps in $H^1(\Omega,{\mathbb S}^2)$ \cite{BallZar}, the minimization problem \eqref{MinLimProb} has a unique solution $Q_*$ provided that \eqref{intr::SmallData} holds for some small $\epsilon$. (For expository purposes, we note that by \cite{Strw}, when $\Omega$ is a ball, $Q_*$ is actually the unique weakly harmonic map admitting $Q_b$ as its boundary values for all sufficiently small $\epsilon$.) Furthermore, by classical regularity results for harmonic maps \cite{su}, \cite{SU-Bdry}, $Q_*$ is smooth and satisfies $|Q_* - p| + |\nabla Q_*| + |\nabla^2 Q_*|$ $\leq$ $o(1)$ as $\epsilon$ $\rightarrow$ $0$. On the other hand, the linearized operator corresponding to a constant map is the Laplace operator and hence is bijective. Thus, by \cite[Chapter IV, Theorem 5.17]{Kato}, ${\mycal{L}}_{Q_*}$ is also bijective provided $\epsilon$ is chosen appropriately small. \end{remark} \section{Preliminaries}\label{GeomDes} We begin with a brief study of the geometry of the limit manifold ${\mycal{S}}_*$ defined in \eqref{LimitSurf::Def}. Using \eqref{LimitSurf::Rep} and some simple algebraic manipulations, we find the following equivalent definitions for ${\mycal{S}}_*$. \begin{lemma}\label{DiffRepLimSurf} Let $s_+$ $=$ $\frac{b^2 + \sqrt{b^4 + 24a^2\,c^2}}{4c^2}$. A matrix $Q$ belongs to ${\mycal{S}}_*$ if and only if one of the following holds. \begin{enumerate}[(i)] \item $f_B(Q)$ $=$ $\min\{f_B(R): R \in {\mycal{S}}_0\}$. \item $Q$ $=$ $s_+(n \otimes n - \frac{1}{3}{\rm Id})$ for some $n$ $\in$ ${\mathbb S}^2$. \item $Q$ $\in$ ${\mycal{S}}_0$, ${\rm tr}(Q^2)$ $=$ $\frac{2s_+^2}{3}$ and ${\rm tr}(Q^3)$ $=$ $\frac{2s_+^3}{9}$. \item $Q$ $\in$ ${\mycal{S}}_0$ and its minimal polynomial is $\lambda^2 - \frac{1}{3}s_+\,\lambda - \frac{2}{9}s_+^2$.
\end{enumerate} \end{lemma} For a matrix $Q$ in ${\mycal{S}}_*$, let $T_Q {\mycal{S}}_*$ denote the tangent space to ${\mycal{S}}_*$ at $Q$ in ${\mycal{S}}_0$, and let $(T_Q {\mycal{S}}_*)^\perp_{{\mycal{S}}_0}$ and $(T_Q {\mycal{S}}_*)^\perp$ denote the orthogonal complements of $T_Q {\mycal{S}}_*$ in the tangent spaces $T_Q {\mycal{S}}_0$ and $T_Q M_{3\times 3}^{\rm sym}$, respectively. We will often identify ${\mycal{S}}_0$ with $T_Q {\mycal{S}}_0$ and $M_{3 \times 3}^{\rm sym}$ with $T_Q M_{3 \times 3}^{\rm sym}$. We have the following characterization of these spaces. \begin{lemma}\label{Tangent-NormalSpaces} For a point $Q$ $\in$ ${\mycal{S}}_*$, the tangent and normal spaces to ${\mycal{S}}_*$ at $Q$ are \begin{align} T_Q {\mycal{S}}_* &= \{\dot Q \in M_{3 \times 3}^{\rm sym}: \frac{1}{3}s_+\dot Q = \dot Q\,Q + Q\,\dot Q\},\label{TNS::Id01}\\ (T_Q {\mycal{S}}_*)^\perp_{{\mycal{S}}_0} &= \{Q^\perp \in {\mycal{S}}_0: Q^\perp\,Q = Q\,Q^\perp\},\label{TNS::Id02}\\ (T_Q {\mycal{S}}_*)^\perp &= \{Q^\perp \in M_{3\times 3}^{\rm sym}: Q^\perp\,Q = Q\,Q^\perp\}.\label{TNS::Id03} \end{align} Moreover, any matrix $A$ $\in$ $T_Q M_{3\times 3}^{\rm sym}$ $\approx$ $M_{3\times 3}^{\rm sym}$ can be decomposed uniquely as $A$ $=$ $\dot A + A^\perp$ $\in$ $T_Q {\mycal{S}}_* \oplus (T_Q {\mycal{S}}_*)^\perp$ by \begin{align} A^\perp &= -\frac{2}{s_+^2}\big(\frac{1}{3}s_+\,A - Q\,A - A\,Q\big)\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)\nonumber\\ &= -\frac{2}{s_+^2}\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)\big(\frac{1}{3}s_+\,A - Q\,A - A\,Q\big).\label{TNS::Id05} \end{align} \end{lemma} \noindent{\bf Proof.\;} Set \[ \mycal{A} = \{\dot Q \in M_{3 \times 3}^{\rm sym}: \frac{1}{3}s_+\dot Q = \dot Q\,Q + Q\,\dot Q\}. \] By Lemma \ref{DiffRepLimSurf}(ii), we have $Q$ $=$ $s_+\big(n \otimes n - \frac{1}{3}{\rm Id}\big)$ and \[ T_Q {\mycal{S}}_* = \{n \otimes \dot n + \dot n \otimes n: \dot n \in T_{n}{\mathbb S}^2\}.
\] It is thus immediate that $T_Q {\mycal{S}}_*$ $\subset$ $\mycal{A}$. To see the converse, pick $\dot Q$ $\in$ $\mycal{A}$. Then by Lemma \ref{DiffRepLimSurf}(iv), \begin{equation} 9Q\,\dot Q\,Q = -2\,s_+^2 \dot Q. \label{TNS::Eqn01} \end{equation} Choose an orthonormal frame $\{n, \hat n, \check n\}$ of ${\mathbb R}^3$ at $n$ $\in$ ${\mathbb S}^2$ $\subset$ ${\mathbb R}^3$. Using \eqref{TNS::Eqn01}, we get \[ Q(\dot Q n) = - \frac{s_+}{3}\dot Q n, Q(\dot Q\hat n) = \frac{2s_+}{3} \dot Q\hat n, \text{ and } Q(\dot Q\check n) = \frac{2s_+}{3}\dot Q\check n. \] By the explicit form of $Q$, we infer that $\dot Q n$ $=$ $\alpha \hat n + \beta\check n$, $\dot Q\hat n$ $=$ $\gamma_1 n$, and $\dot Q\check n$ $=$ $\gamma_2 n$. As $\dot Q$ is symmetric, it is necessary that $\alpha$ $=$ $\gamma_1$ and $\beta$ $=$ $\gamma_2$. We then arrive at $\dot Q$ $=$ $n \otimes (\alpha \hat n + \beta\check n) + (\alpha \hat n + \beta\check n) \otimes n$, which shows $\dot Q$ $\in$ $T_Q {\mycal{S}}_*$. \eqref{TNS::Id01} follows. Next, let \[ \mycal{B} := \{Q^\perp \in {\mycal{S}}_0: Q^\perp\,Q = Q\,Q^\perp\}. \] We have \[ (T_Q {\mycal{S}}_*)^\perp_{{\mycal{S}}_0} = \{\alpha n \otimes n + \beta \hat n \otimes \hat n + \gamma \check n \otimes \check n + \lambda(\hat n \otimes \check n + \check n \otimes \hat n), \alpha + \beta + \gamma = 0\} \subset \mycal{B}. \] Now pick $Q^\perp$ $\in$ $\mycal{B}$. Then \[ Q(Q^\perp n) = \frac{2s_+}{3}Q^\perp n, Q(Q^\perp\hat n) = -\frac{s_+}{3} Q^\perp \hat n, \text{ and } Q(Q^\perp \check n) = -\frac{s_+}{3}Q^\perp \check n. \] Putting $Q^\perp n$ $=$ $\alpha n$, $Q^\perp\hat n$ $=$ $\beta \hat n + \lambda \check n$, and $Q^\perp \check n$ $=$ $\lambda \hat n + \gamma\check n$, we reach \eqref{TNS::Id02}. The proof of \eqref{TNS::Id03} is similar to the above and is thus omitted. Now let $A$ $\in$ $T_Q M_{3 \times 3}^{\rm sym}$ and decompose $A$ $=$ $\dot A + A^\perp$. 
By \eqref{TNS::Id03} and \eqref{TNS::Eqn01}, \[ 9Q\,A\,Q = -2\,s_+^2\dot A + 9A^\perp\,Q^2, \] and so \[ 2s_+^2\,A + 9Q\,A\,Q = 2s_+^2A^\perp + 9 A^\perp\,Q^2 = A^\perp(2s_+^2{\rm Id} + 9Q^2). \] Hence, by Lemma \ref{DiffRepLimSurf}(iv), \begin{align*} A^\perp &= (2s_+^2\,A + 9Q\,A\,Q)(2s_+^2{\rm Id} + 9Q^2)^{-1}\\ &= \frac{1}{s_+}(2s_+^2\,A + 9Q\,A\,Q)(3Q + 4s_+{\rm Id})^{-1}\\ &= -\frac{1}{18s_+^3}(2s_+^2\,A + 9Q\,A\,Q)(3Q - 5s_+{\rm Id})\\ &= \frac{1}{9s_+^2}(-3s_+(AQ + QA) + 18 QAQ + 5 s_+^2 A)\\ &= -\frac{2}{s_+^2}\big(\frac{1}{3}s_+\,A - Q\,A - A\,Q\big)\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)\\ &= -\frac{2}{s_+^2}\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)\big(\frac{1}{3}s_+\,A - Q\,A - A\,Q\big). \end{align*} The last assertion follows. $\square$ The next lemma states some interrelations between the tangent and normal spaces of ${\mycal{S}}_*$. \begin{lemma}\label{InterRel} Let $Q$ be a point in ${\mycal{S}}_*$. For $X$, $Y$ in $T_Q {\mycal{S}}_*$ and $Z$, $W$ in $(T_Q {\mycal{S}}_*)^\perp$, we have $XY + YX$, $ZW + WZ$ $\in$ $(T_Q {\mycal{S}}_*)^\perp$ and $XZ + ZX$ $\in$ $T_Q {\mycal{S}}_*$. \end{lemma} \noindent{\bf Proof.\;} That $ZW + WZ$ $\in$ $(T_Q {\mycal{S}}_*)^\perp$ follows immediately from Lemma \ref{Tangent-NormalSpaces}. To see that $XY + YX$ $\in$ $(T_Q {\mycal{S}}_*)^\perp$, we note that Lemma \ref{Tangent-NormalSpaces} gives \[ \frac{1}{3}s_+\,X = XQ + QX \text{ and }\frac{1}{3}s_+\,Y = YQ + QY. \] Hence \[ XYQ = X\Big[\frac{1}{3}s_+\,Y - QY\Big] = \Big[\frac{1}{3}s_+\,X - XQ\Big]Y = QXY. \] Similarly, we have $YXQ$ $=$ $QYX$. It then follows from Lemma \ref{Tangent-NormalSpaces} that $XY + YX$ $\in$ $(T_Q {\mycal{S}}_*)^\perp$. For the last assertion, we calculate \begin{multline*} (XZ + ZX)Q = XQZ + Z\Big[\frac{1}{3}s_+\,X - QX\Big]\\ = \Big[\frac{1}{3}s_+\,X - QX\Big]Z + Z\Big[\frac{1}{3}s_+\,X - QX\Big] = \frac{1}{3}s_+(XZ + ZX) - Q(XZ + ZX).
\end{multline*} By Lemma \ref{Tangent-NormalSpaces}, this implies that $XZ + ZX$ $\in$ $T_Q {\mycal{S}}_*$. $\square$ \begin{lemma}\label{MysIds} For any $X$, $Y$ $\in$ $T_Q {\mycal{S}}_*$ and $Z$ $\in$ $(T_Q {\mycal{S}}_*)^\perp$, there holds \begin{align} {\rm tr}((XY + YX)Q) &= \frac{s_+}{3}{\rm tr}(XY),\label{MysIds::Id1}\\ (XY + YX)Q &= -\frac{s_+}{3}(XY + YX) + {\rm tr}(XY)Q + \frac{s_+}{3}{\rm tr}(XY)\,{\rm Id},\label{MysIds::Id2}\\ \Big[\frac{1}{s_+}Q + \frac{1}{3}{\rm Id}\Big]Z &= \Big(\frac{1}{s_+}{\rm tr}(QZ) + \frac{1}{3}{\rm tr}(Z)\Big)\,\Big[\frac{1}{s_+}Q + \frac{1}{3}{\rm Id}\Big].\label{MysIds::Id3} \end{align} \end{lemma} \noindent{\bf Proof.\;} Since $X$ $\in$ $T_Q {\mycal{S}}_*$, $XQ + QX$ $=$ $\frac{1}{3}s_+\,X$. It follows that \[ XQY + QXY = \frac{1}{3}s_+\,XY. \] Taking trace, we get \eqref{MysIds::Id1}. We next prove \eqref{MysIds::Id3}. Set \[ P_1 = \frac{1}{s_+}Q + \frac{1}{3}{\rm Id} \text{ and } P_2 = -\frac{1}{s_+}Q + \frac{2}{3}{\rm Id}. \] Then $P_1^2$ $=$ $P_1$, $P_2^2$ $=$ $P_2$, $P_1\,P_2$ $=$ $0$ and $P_1 + P_2$ $=$ ${\rm Id}$. Set \[ V_i = \{W \in (T_Q {\mycal{S}}_*)^\perp: P_i\,W = W\}, \qquad i = 1,2. \] Then $(T_Q {\mycal{S}}_*)^\perp$ $=$ $V_1 \oplus V_2$, $V_1$ $\perp$ $V_2$, and $P_i$ $\in$ $V_i$. Write $Q$ $=$ $s_+(n \otimes n - \frac{1}{3}{\rm Id})$ for some $n$ $\in$ ${\mathbb S}^2$ and pick an orthonormal basis $\{n, \hat n, \check n\}$ of ${\mathbb R}^3$ at $n$. It is easy to check that $\hat n \otimes \hat n$, $\check n \otimes \check n$, and $\hat n \otimes \check n + \check n \otimes \hat n$ belong to $V_2$. It follows that $\dim V_2$ $\geq$ $3$. Since $\dim V_1 + \dim V_2$ $=$ $\dim (T_Q {\mycal{S}}_*)^\perp$ $=$ $4$, and $\dim V_1$ $\geq$ $1$ (as $P_1$ $\in$ $V_1$), we infer that $\dim V_1$ $=$ $1$. Now, for any $Z$ $\in$ $(T_Q {\mycal{S}}_*)^\perp$, $P_1\,Z$ $\in$ $V_1$ and so \[ P_1\,Z = k\,P_1 \text{ for some } k \in {\mathbb R}. \] Taking trace we get \[ \frac{1}{s_+}{\rm tr}(QZ) + \frac{1}{3}{\rm tr}(Z) = k. 
\] \eqref{MysIds::Id3} is established. Finally, to get \eqref{MysIds::Id2}, we note that $W$ $=$ $XY + YX - \frac{2}{3}{\rm tr}(XY)\,{\rm Id}$ $\in$ $(T_Q {\mycal{S}}_*)^\perp$. Applying \eqref{MysIds::Id3} to $Z$ $=$ $W$, we get \[ (XY + YX)P_1 = \frac{2}{3}{\rm tr}(XY)P_1 + \frac{1}{s_+}{\rm tr}[(XY + YX)Q]\,P_1. \] \eqref{MysIds::Id2} follows immediately from the above identity and \eqref{MysIds::Id1}. $\square$ \begin{lemma}\label{2ndForm} The second fundamental form of ${\mycal{S}}_*$ in ${\mycal{S}}_0$ is given by \begin{align*} \mathrm{II}(X,Y)(Q) &= -\frac{2}{s_+^2}\,{\rm tr}(X\,Y)\,Q + \frac{1}{s_+}\Big[X\,Y + Y\,X - \frac{2}{3}{\rm tr}(X\,Y)\,{\rm Id}\Big]\\ &= -\frac{1}{s_+^2}\,(X\,Y + Y\,X)(2Q - \frac{1}{3}s_+\,{\rm Id}) = -\frac{1}{s_+^2}\,(2Q - \frac{1}{3}s_+\,{\rm Id})(X\,Y + Y\,X) \end{align*} where $X$ and $Y$ are tangent vectors to ${\mycal{S}}_*$ at $Q$ $\in$ ${\mycal{S}}_*$. \end{lemma} \noindent{\bf Proof.\;} Since $SO(3)$ acts transitively on ${\mycal{S}}_*$ by conjugation, it suffices to verify the conclusion at \[ Q_0 = \frac{s_+}{3}\mathrm{diag}(2,-1,-1) \in {\mycal{S}}_*. \] Also, by linearity, it suffices to consider $X$ and $Y$ in a set of basis vectors. For the ease in calculation, we isometrically embed ${\mycal{S}}_0$ into $M_{3\times 3}$ by the obvious embedding. We parametrize $M_{3\times 3}$ by \[ A = \left[\begin{array}{ccc} y_1 & y_2 & y_3\\ y_4 & y_5 & y_6\\ y_7 & y_8 & y_9 \end{array}\right]. \] To pick tangential vectors at a point $Q_0$, we note that $B^t\,Q_0 + Q_0\,B$ is tangent to ${\mycal{S}}_*$ at $Q_0$ for any skew-symmetric matrix $B$. Choosing $B$ $=$ $e_1 \otimes e_2 - e_2 \otimes e_1$, we get \begin{align*} v_1 &= \left[\begin{array}{ccc} -y_2 - y_4 & y_1-y_5 & -y_6\\ y_1-y_5 & y_2 + y_4 & y_3\\ -y_8 & y_7 & 0 \end{array}\right]\\ &= -(y_2 + y_4)\partial_1 + (y_1 - y_5)\partial_2 - y_6\partial_3 + (y_1 - y_5)\partial_4 + (y_2+ y_4)\partial_5 + y_3\partial_6 - y_8\partial_7 + y_7\partial_8.
\end{align*} Choosing $B$ $=$ $e_1 \otimes e_3 - e_3 \otimes e_1$, we get \begin{align*} v_2 &= \left[\begin{array}{ccc} -y_3 - y_7 & -y_8 & y_1 - y_9\\ -y_6 & 0 & y_4\\ y_1 - y_9 & y_2 & y_3 + y_7 \end{array}\right] \\ &= -(y_3 + y_7)\partial_1 - y_8 \partial_2 + (y_1 - y_9)\partial_3 - y_6\partial_4 + y_4\partial_6 + (y_1 - y_9)\partial_7 + y_2\partial_8 + (y_3 + y_7)\partial_9. \end{align*} It is readily seen that $v_1$ and $v_2$ form a local frame for ${\mycal{S}}_*$ in a neighborhood of $Q_0$. Let $\bar\nabla$ denote the connection of $M_{3\times 3}$ $\cong$ ${\mathbb R}^9$. We calculate \begin{align*} \bar\nabla_{v_1} v_1 &= 2(y_1 - y_5)[-\partial_1 + \partial_5] + ...,\\ \bar\nabla_{v_1} v_2 &= (y_1 - y_5)[\partial_6 + \partial_8] + ...,\\ \bar\nabla_{v_2} v_2 &= 2(y_1 - y_9)[-\partial_1 + \partial_9] + ... \end{align*} where the dots consist of terms that vanish at $Q_0$. Since $\mathrm{II}(X,Y)$ is the normal component of $\bar\nabla_X Y$, we thus have \begin{align*} \mathrm{II}(v_1, v_1)(Q_0) &= 2s_+(-\partial_1 + \partial_5) = 2s_+\mathrm{diag}(-1,1,0),\\ \mathrm{II}(v_1, v_2)(Q_0) &= s_+(\partial_6 + \partial_8) = s_+\left[\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{array}\right],\\ \mathrm{II}(v_2, v_2)(Q_0) &= 2s_+(-\partial_1 + \partial_9) = 2s_+\mathrm{diag}(-1,0,1). \end{align*} It is elementary to check that the assertion holds for $X$, $Y$ $\in$ $\{v_1, v_2\}$ and $Q$ $=$ $Q_0$. The proof is complete. $\square$ \begin{corollary}\label{HarMapEqn} A map $Q_0:$ ${\mathbb R}^n$ $\rightarrow$ ${\mycal{S}}_*$ is harmonic if and only if one of the following occurs.
\begin{align*} &\text{(i) $\Delta Q_0$ commutes with $Q_0$.}\\ &\text{(ii) } \Delta Q_0 = -\frac{2}{s_+^2}\,|\nabla Q_0|^2\,Q_0 + \frac{2}{s_+}\Big[\sum_{\alpha=1}^n(\nabla_\alpha Q_0)^2 - \frac{1}{3}|\nabla Q_0|^2\,{\rm Id}\Big].\\ &\text{(iii) } \Delta Q_0 = -\frac{4}{s_+^2}\,\sum_{\alpha=1}^n(\nabla_\alpha Q_0)^2\,(Q_0 - \frac{1}{6}s_+\,{\rm Id}),\\ &\text{(iv) } \Delta Q_0 = -\frac{4}{s_+^2}\,(Q_0 - \frac{1}{6}s_+\,{\rm Id})\sum_{\alpha=1}^n(\nabla_\alpha Q_0)^2. \end{align*} \end{corollary} \noindent{\bf Proof.\;} The conclusion follows immediately from well-known forms of harmonic map equations (see e.g. \cite{Helein}) and Lemmas \ref{Tangent-NormalSpaces}, \ref{2ndForm}. $\square$ \section{Equations for the first order term in the asymptotic expansion and for the projection onto the limit manifold}\label{ProjForm} The goal of this section is twofold. Let $Q_L$ be a critical point of $I_L$. In the first part, we will assume that $Q_L$ has a formal asymptotic form $Q_L$ $=$ $Q_* + L\,Q_\bullet + o(L)$ for some harmonic map $Q_*$ and derive the equations that govern $Q_\bullet$. In the second part, we derive the equations for the orthogonal projection $Q_L^\sharp$ of $Q_L$ onto the limit manifold ${\mycal{S}}_*$ provided that such a projection is well defined. Roughly speaking, these equations are of the form ``harmonic map plus corrector terms''. This latter part will be useful when we prove uniform $C^{1,\alpha}$ bounds for $Q_L$ up to the boundary. \begin{proposition}\label{FirstApproxEqn} Assume that $Q_{L_k}$ $\in$ $C^2(\Omega, {\mycal{S}}_0)$ is a critical point of $I_{L_k}$, and that as $L_k$ $\rightarrow$ $0$, $Q_{L_k}$ converges on compact subsets of $\Omega$ in $C^2$-norm to $Q_*$ $\in$ $C^2(\Omega,{\mycal{S}}_*)$ which is a critical point of $I_*$ and $\frac{1}{L_k}(Q_{L_k} - Q_*)$ converges in $C^2$-norm to some $Q_\bullet$ $\in$ $C^2(\Omega, {\mycal{S}}_0)$.
If we write $Q_\bullet$ $=$ ${A} + {B}$ with ${A}$ $\in$ $(T_{Q_*} {\mycal{S}}_*)^\perp$ and ${B}$ $\in$ $T_{Q_*} {\mycal{S}}_*$, then \eqref{FAE::Id01} and \eqref{FAE::Id02} hold. \end{proposition} \noindent{\bf Proof.\;} For simplicity, we will drop the subscript $L_k$. We begin with the proof of \eqref{FAE::Id01}. Split \[ Q = Q_* + L_k\,\hat Q. \] Noting that $Q_*$ $\in$ ${\mycal{S}}_*$, we calculate using \eqref{L-DG::ELEqn} and $2c^2\,s_+^2 = 3a^2 + b^2\,s_+$, \begin{align} L_k\,\Delta\hat Q &= \Big[b^2\big(\frac{1}{3}s_+\hat Q - Q_*\,\hat Q - \hat Q\,Q_*\big) + \frac{2}{3}b^2{\rm tr}(Q_*\,\hat Q){\rm Id} + 2c^2{\rm tr}(Q_*\,\hat Q)Q_* - \Delta Q_*\Big]\nonumber\\ &\qquad\qquad + L_k\Big[-b^2\big(\hat Q^2 - \frac{1}{3}\,{\rm tr}(\hat Q^2)\,{\rm Id}\big) + c^2\,{\rm tr}(\hat Q^2)Q_* + 2c^2\,{\rm tr}(Q_*\,\hat Q)\,\hat Q\Big]\nonumber\\ &\qquad\qquad + L_k^2\,c^2\,{\rm tr}(\hat Q^2)\,\hat Q. \label{FAE::Eqn01} \end{align} Sending $L_k$ $\rightarrow$ $0$, we thereby obtain \begin{equation} \frac{1}{3}s_+\,Q_\bullet - Q_*\,Q_\bullet - Q_\bullet\,Q_* = -\frac{2}{3}{\rm tr}(Q_*\,Q_\bullet){\rm Id} - \frac{2c^2}{b^2}{\rm tr}(Q_*\,Q_\bullet)Q_* + \frac{1}{b^2}\Delta Q_*. \label{FAE::Eqn02} \end{equation} Multiplying \eqref{FAE::Eqn02} by $Q_*$ to the left, taking trace and using Lemma \ref{DiffRepLimSurf}(iv), we get \begin{equation} {\rm tr}(Q_*\,Q_\bullet) = \frac{3}{6a^2 + b^2\,s_+}{\rm tr}(Q_*\,\Delta Q_*). \label{FAE::Eqn03} \end{equation} Substituting this into \eqref{FAE::Eqn02} and using Lemma \ref{Tangent-NormalSpaces}, we hence get \begin{align} {A} &= -\frac{2}{s_+^2}\big(\frac{1}{3}s_+\,Q_\bullet - Q_*\,Q_\bullet - Q_\bullet\,Q_*\big)\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big)\nonumber\\ &= -\frac{2}{s_+^2}\Big[-\frac{6}{6a^2 + b^2\,s_+}{\rm tr}(Q_*\,\Delta Q_*)\big(\frac{c^2}{b^2}\,Q_* + \frac{1}{3}{\rm Id}\big) + \frac{1}{b^2}\Delta Q_*\Big]\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big). 
\label{FAE::Eqn04} \end{align} On the other hand, by Corollary \ref{HarMapEqn}(iii) and Lemma \ref{DiffRepLimSurf}(iv), \begin{equation} \Delta Q_*\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big) = - \sum_{\alpha=1}^3 (\nabla_\alpha Q_*)^2 =: -(\nabla Q_*)^2. \label{FAE::Eqn05} \end{equation} Taking trace yields \begin{equation} {\rm tr}(Q_*\,\Delta Q_*) = -|\nabla Q_*|^2. \label{FAE::Eqn06} \end{equation} Substituting \eqref{FAE::Eqn05} and \eqref{FAE::Eqn06} into \eqref{FAE::Eqn04} we get \eqref{FAE::Id01}. We now turn to the proof of \eqref{FAE::Id02}. We note that \[ \Delta {B} = \big(\Delta {B}\big)^\perp + \big(\Delta Q_\bullet\big)^\parallel - \big(\Delta {A}\big)^\parallel. \] It is therefore enough to establish \begin{align} \big(\Delta {B}\big)^\perp &= -\frac{4}{s_+^2}\big[\big(\nabla {B}\big)^\parallel\,\nabla Q_* + \nabla Q_*\,\big(\nabla {B}\big)^\parallel\big]\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big), \label{FAE::(ii)-1}\\ \big(\Delta Q_\bullet\big)^\parallel &= -b^2({B}\,{A} + {A}\,{B}) - \frac{6c^2}{6a^2 + b^2\,s_+}|\nabla Q_*|^2\,{B}. \label{FAE::(ii)-2} \end{align} To prove \eqref{FAE::(ii)-1}, we note that, by Lemma \ref{Tangent-NormalSpaces}, \begin{equation} \frac{1}{3}s_+\,{B} - {B}\,Q_* - Q_*\,{B} = 0. \label{FAE::Eqn08} \end{equation} This implies \[ \frac{1}{3}s_+\,\Delta {B} - \Delta {B}\,Q_* - Q_*\,\Delta {B} = {B}\,\Delta\,Q_* + \Delta Q_*\,{B} + 2\nabla {B}\,\nabla Q_* + 2\nabla Q_*\,\nabla {B}. \] Hence, by \eqref{TNS::Id05} in Lemma \ref{Tangent-NormalSpaces}, \begin{align} \big(\Delta {B}\big)^\perp &= -\frac{1}{s_+^2}\big[{B}\,\Delta\,Q_* + \Delta Q_*\,{B} + 2\nabla {B}\,\nabla Q_* + 2\nabla Q_*\,\nabla {B}\big]\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big)\nonumber\\ &\qquad\qquad -\frac{1}{s_+^2}\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big)\big[{B}\,\Delta\,Q_* + \Delta Q_*\,{B} + 2\nabla {B}\,\nabla Q_* + 2\nabla Q_*\,\nabla {B}\big].
\label{FAE::Eqn09} \end{align} To proceed, note that $\Delta Q_*$ $\in$ $(T_{Q_*}{\mycal{S}}_*)^\perp$ (by Lemma \ref{Tangent-NormalSpaces} and Corollary \ref{HarMapEqn}(i)) and $\nabla Q_*$ $\in$ $T_{Q_*}{\mycal{S}}_*$. Applying Lemma \ref{InterRel} we see that \begin{multline*} \big[{B}\,\Delta\,Q_* + \Delta Q_*\,{B} + 2(\nabla {B})^\perp\,\nabla Q_* + 2\nabla Q_*\,(\nabla {B})^\perp\big]\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big)\\ + \big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big)\big[{B}\,\Delta\,Q_* + \Delta Q_*\,{B} + 2(\nabla {B})^\perp\,\nabla Q_* + 2\nabla Q_*\,(\nabla {B})^\perp\big] \in T_{Q_*} {\mycal{S}}_*, \end{multline*} and \begin{multline*} \big[2(\nabla {B})^\parallel\,\nabla Q_* + 2\nabla Q_*\,(\nabla {B})^\parallel\big]\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big)\\ + \big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big)\big[2(\nabla {B})^\parallel\,\nabla Q_* + 2\nabla Q_*\,(\nabla {B})^\parallel\big] \in (T_{Q_*} {\mycal{S}}_*)^\perp. \end{multline*} Hence, by projecting both sides of \eqref{FAE::Eqn09} onto the normal space $(T_{Q_*}{\mycal{S}}_*)^\perp$, we arrive at \begin{multline} \big(\Delta {B}\big)^\perp = -\frac{2}{s_+^2}\big[(\nabla {B})^\parallel\,\nabla Q_* + \nabla Q_*\,(\nabla {B})^\parallel\big]\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big)\\ - \frac{2}{s_+^2}\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big)\big[(\nabla {B})^\parallel\,\nabla Q_* + \nabla Q_*\,(\nabla {B})^\parallel\big] \in (T_{Q_*} {\mycal{S}}_*)^\perp. \label{FAE::Eqn09x} \end{multline} Observe that, by Lemma \ref{InterRel} again, $(\nabla {B})^\parallel\,\nabla Q_* + \nabla Q_*\,(\nabla {B})^\parallel$ belongs to $(T_{Q_*}{\mycal{S}}_*)^\perp$ and so commutes with $Q_*$. Thus, the right hand side of \eqref{FAE::Eqn09x} is equal to \[ -\frac{4}{s_+^2}\big[\big(\nabla {B}\big)^\parallel\,\nabla Q_* + \nabla Q_*\,\big(\nabla {B}\big)^\parallel\big]\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big). \] \eqref{FAE::(ii)-1} follows. Finally, we prove \eqref{FAE::(ii)-2}.
Recall that by Lemma \ref{Tangent-NormalSpaces} and Corollary \ref{HarMapEqn}, the first bracket term on the right hand side of \eqref{FAE::Eqn01} belongs to $(T_{Q_*} {\mycal{S}}_*)^\perp$. Thus, by projecting \eqref{FAE::Eqn01} onto $T_{Q_*}{\mycal{S}}_*$, dividing by $L_k$ and then letting $L_k$ $\rightarrow$ $0$, we get \[ \big(\Delta Q_\bullet\big)^\parallel = \Big[-b^2Q_\bullet^2 + 2c^2\,{\rm tr}(Q_*\,Q_\bullet)\,Q_\bullet\Big]^\parallel. \] Using Lemma \ref{InterRel}, \eqref{FAE::Eqn03} and \eqref{FAE::Eqn06}, we then obtain \eqref{FAE::(ii)-2}. The proof is complete. $\square$ \begin{proposition}\label{ProjEqn} Let $Q_L$ $\in$ $H^1(\Omega, {\mycal{S}}_0)$ be a critical point for $I_L$. Assume furthermore that $Q_L(\Omega)$ is contained in a tubular neighborhood of ${\mycal{S}}_*$ that projects smoothly onto ${\mycal{S}}_*$. Then the orthogonal projection $Q_L^\sharp$ of $Q_L$ onto ${\mycal{S}}_*$ satisfies in $\Omega$ the equation \begin{multline} \Delta Q_L^\sharp = -\frac{2}{s_+^2}|\nabla Q_L^\sharp|^2\,Q_L^\sharp + \frac{2}{s_+}\Big[\sum_{\alpha = 1}^3(\nabla_\alpha Q_L^\sharp)^2 - \frac{1}{3}|\nabla Q_L^\sharp|^2\,{\rm Id}\Big]\\ - \Big[T_L^{-1}\big(\frac{1}{s_+}\,Q_L^\sharp - \frac{2}{3}\,{\rm Id}\big)W_L - W_L\big(\frac{1}{s_+}\,Q_L^\sharp - \frac{2}{3}\,{\rm Id}\big)T_L^{-1}\Big], \label{ProjEqn::Est1} \end{multline} where \begin{align} W_L &= 2\nabla Q_L^\sharp\,\nabla[(Q_L^\sharp)^{-1}\,Q_L]\,Q_L^\sharp - 2Q_L^\sharp\,\nabla[(Q_L^\sharp)^{-1}\,Q_L]\,\nabla Q_L^\sharp\nonumber\\ &\qquad\qquad - \frac{1}{s_+}Q_L\sum_{\alpha=1}^3 (\nabla_\alpha Q_L^\sharp)^2 + \frac{1}{s_+}\sum_{\alpha=1}^3 (\nabla_\alpha Q_L^\sharp)^2\,Q_L,\label{ProjEqn::Est2}\\ T_L &= Q_L - \frac{2}{9}s_+\,{\rm tr}[(Q_L^\sharp)^{-1}\,Q_L]\,{\rm Id} + \beta\Big[\frac{1}{s_+}\,Q_L^\sharp + \frac{1}{3}\,{\rm Id}\Big],\label{ProjEqn::Est3} \end{align} and $\beta$ is an arbitrary nonzero real number. \end{proposition} \noindent{\bf Proof.\;} We will drop the subscript $L$ for convenience.
Being a critical point of $I_L$, $Q$ satisfies \eqref{L-DG::ELEqn}, i.e. \[ L\Delta Q = -a^2Q - b^2\big[Q^2 - \frac{1}{3}{\rm tr}(Q^2){\rm Id}\big] + c^2\,{\rm tr}(Q^2)\,Q. \] Let $K$ $=$ $(Q^\sharp)^{-1}Q$. By definition, $Q - Q^\sharp$ is normal to the tangent plane to ${\mycal{S}}_*$ at $Q^\sharp$, which implies in view of Lemma \ref{Tangent-NormalSpaces} that $Q$, $Q^\sharp$ and $K$ commute with one another. In particular, \[ Q = K\,Q^\sharp = Q^\sharp\,K, \] and so, \[ \Delta Q = K\,\Delta Q^\sharp + 2\nabla K\,\nabla Q^\sharp + \Delta K\,Q^\sharp = \Delta Q^\sharp\, K + 2\nabla Q^\sharp\,\nabla K + Q^\sharp\,\Delta K. \] On the other hand, by \eqref{L-DG::ELEqn}, $\Delta Q$ commutes with $Q^\sharp$. It follows that \[ Q^\sharp\big[K\,\Delta Q^\sharp + 2\nabla K\,\nabla Q^\sharp + \Delta K\,Q^\sharp\big] = \big[\Delta Q^\sharp\, K + 2\nabla Q^\sharp\,\nabla K + Q^\sharp\,\Delta K\big]Q^\sharp, \] which implies \begin{equation} Q\,\Delta Q^\sharp - \Delta Q^\sharp\,Q = 2\nabla Q^\sharp\,\nabla K\,Q^\sharp - 2Q^\sharp\,\nabla K\,\nabla Q^\sharp =: \hat W. \label{ProjEqn01} \end{equation} Now write \begin{equation} \Delta Q^\sharp = X + Y, \text{ where } X \in T_{Q^\sharp}{\mycal{S}}_* \text{ and } Y \in (T_{Q^\sharp}{\mycal{S}}_*)^\perp. \label{ProjEqn01bis} \end{equation} It is well-known that $Y$ $=$ $\textrm{II}(\nabla Q^\sharp,\nabla Q^\sharp)(Q^\sharp)$ (see e.g. \cite{Helein}). Hence, by Lemma \ref{2ndForm}, \begin{equation} Y = -\frac{2}{s_+^2}|\nabla Q^\sharp|^2\,Q^\sharp + \frac{2}{s_+}\Big[\sum_{\alpha = 1}^3(\nabla_\alpha Q^\sharp)^2 - \frac{1}{3}|\nabla Q^\sharp|^2\,{\rm Id}\Big]. \label{ProjEqn02} \end{equation} Therefore, by \eqref{ProjEqn01} and \eqref{ProjEqn::Est2}, we have \begin{equation} Q\,X - X\,Q = \hat W - Q\,Y + Y\,Q = W. \label{ProjEqn03} \end{equation} In addition, as $X$ $\in$ $T_{Q^\sharp}{\mycal{S}}_*$, Lemma \ref{Tangent-NormalSpaces} gives \begin{equation} Q^\sharp\,X + X\,Q^\sharp = \frac{1}{3}s_+\,X.
\label{ProjEqn04} \end{equation} The conclusion then can be derived directly from \eqref{ProjEqn02}, \eqref{ProjEqn03} and \eqref{ProjEqn04} using the identities established in Lemma \ref{MysIds}. Since the calculation is lengthy and of somewhat minor importance to our goal, we defer it to the appendix. $\square$ \section{$C^{1,\alpha}$-convergence}\label{SEC-C1AlphaConv} Let $Q_L$ be a minimizer of the minimization problem \eqref{MinProb}. It is standard to show that for any $L_{k}$ $\rightarrow$ $0$, there exists a subsequence $L_{k'}$ such that $Q_{L_{k'}}$ converges strongly in $H^1$ to some $Q_*$ $\in$ $H^1(\Omega,{\mycal{S}}_*)$ which is $I_*$-minimizing. In this section, we are interested in establishing $C^{1,\alpha}$ convergence for such a sequence of minimizers. We will use the following notation. For a function $u$ defined on $\bar\Omega$, we denote by ${\rm Sing}(u)$ the singular set of $u$, i.e. the set of points $x$ $\in$ $\bar\Omega$ such that there is no neighborhood $U$ of $x$ for which $u\big|_{U}$ is smooth. \begin{proposition}\label{C1AlphaConv} Assume that $Q_{L_k}$ converges strongly in $H^1(\Omega, {\mycal{S}}_0)$ to a minimizer $Q_*$ $\in$ $H^1(\Omega,{\mycal{S}}_*)$ of $I_*$ for some sequence $L_k$ $\rightarrow$ $0$. For any compact subset $K$ of $\bar\Omega\setminus {\rm Sing}(Q_*)$, there exists $\bar L$ $=$ $\bar L(a^2,b^2,c^2,\Omega,K,Q_b,Q_*)$ $>$ $0$ such that for any $\alpha$ $\in$ $(0,1)$ there holds \[ \|Q_{L_k}\|_{C^{1,\alpha}(K)} \leq C(a^2,b^2,c^2,\Omega,K,Q_b,Q_*,\alpha) \text{ for any } L_k \leq \bar L. \] In particular, away from ${\rm Sing}(Q_*)$, $Q_{L_k}$ converges to $Q_*$ in $C^{1,\alpha}$-norm. \end{proposition} It is convenient to introduce \begin{equation} \tilde f_B(Q) = f_B(Q) - \min_{{\mycal{S}}_0} f_B, \label{VarBulkEDensity} \end{equation} and \begin{equation} \tilde I_L[Q]= \int_\Omega \Big[\frac{L}{2}|\nabla Q|^2 + \tilde f_B(Q)\Big]\,dx, \qquad Q \in H^1(\Omega, {\mycal{S}}_0).
\label{L-DGFuncVar} \end{equation} Clearly, $Q$ is a minimizer for $I_L$ if and only if it is a minimizer for $\tilde I_L$. The proof of Proposition \ref{C1AlphaConv} consists of two main steps. First, we prove a uniform bound for $\Delta Q_{L_k}$ under an additionally assumed uniform $C^1$ bound. Second, we prove the required uniform $C^1$ bound. It is readily seen that Proposition \ref{C1AlphaConv} follows from such estimates. We will frequently use the following two results, which are variants of \cite[Lemma 2]{BBH}. \begin{lemma}\label{BBHMaxPrin} Let $B_R$ $\subset$ ${\mathbb R}^3$ be a ball centered at the origin and of radius $R$. Assume that $u$ $\in$ ${\rm Lip}(B_R) \cap C^0(\bar B_R)$ satisfies in the weak sense \[ -L\,\Delta u + a\,u \leq C \text{ in } B_R, \] where $L$, $a$ and $C$ are positive constants. Then, for $\alpha$ $\geq$ $0$, there holds in $B_R$, \[ u(x) \leq \frac{C}{a} + 2\exp\Big(-\sqrt{\frac{a}{L}}\,\frac{R - |x|}{2}\Big)\,\sup_{\partial B_R}u^+ \leq \frac{C}{a} + \frac{C(\alpha)\,L^\alpha}{a^\alpha\,(R - |x|)^{2\alpha}}\,\sup_{\partial B_R}u^+. \] \end{lemma} \noindent{\bf Proof.\;} Define a function $\phi$ in $B_R$ by \[ \phi(x) = \frac{\sinh\big(\sqrt{\frac{a}{L}}|x|\big)\,R}{\sinh\big(\sqrt{\frac{a}{L}}R\big)\,|x|}. \] It is routine to check that $\phi$ $\in$ $C^2(B_R)$ and $-L\,\Delta\phi + a\,\phi$ $=$ $0$. It thus follows from the maximum principle that \[ u(x) \leq \frac{C}{a} + \Big[\sup_{\partial B_R}u^+\Big]\,\phi(x) \text{ in } B_R. \] Next, as $-L\,\Delta\phi + a\,\phi$ $=$ $0$, the maximum principle implies that, for $|x|$ $\leq$ $R/2$, \[ \phi(x) \leq \phi(R/2) = \frac{2\sinh\big(\sqrt{\frac{a}{L}}\frac{R}{2}\big)}{\sinh\big(\sqrt{\frac{a}{L}}R\big)} \leq 2\exp\Big(-\sqrt{\frac{a}{L}}\frac{R}{2}\Big) \leq \frac{C(\alpha)\,L^\alpha}{a^\alpha\,R^{2\alpha}}.
\] In addition, for $R/2$ $<$ $|x|$ $<$ $R$, we have \[ \phi(x) \leq 2\exp\Big(-\sqrt{\frac{a}{L}}(R - |x|)\Big) \leq \frac{C(\alpha)\,L^\alpha}{a^\alpha\,(R - |x|)^{2\alpha}}. \] The assertion follows from the last three estimates. $\square$ \begin{lemma}\label{BBHMaxPrin::Bdry} Let $\Omega$ be a domain in ${\mathbb R}^3$ and $x_0$ $\in$ $\partial\Omega$. Assume that $u$ $\in$ ${\rm Lip}(B_R(x_0) \cap \Omega) \cap C^0(\bar B_R(x_0) \cap \bar \Omega)$ satisfies in the weak sense \begin{align*} &-L\,\Delta u + a\,u \leq C \text{ in } B_R(x_0) \cap \Omega,\\ &u = 0 \text{ on } B_R(x_0) \cap \partial\Omega \end{align*} where $L$, $a$ and $C$ are positive constants. Then, for $\alpha$ $\geq$ $0$, \[ u(x) \leq \frac{C}{a} + \frac{C(\alpha)\,L^\alpha}{a^\alpha\,(R - |x - x_0|)^{2\alpha}}\,\sup_{\partial B_R(x_0) \cap \Omega}u^+ \text{ in } B_R(x_0) \cap \Omega. \] \end{lemma} \noindent{\bf Proof.\;} Extend $u$ to a function $\tilde u$ defined on $B_R(x_0)$ by zero in $B_R(x_0) \setminus \Omega$. The result then follows from Lemma \ref{BBHMaxPrin}. $\square$ We turn now to the proof of Proposition \ref{C1AlphaConv}. For our purpose, it is useful to note that, by Lemma \ref{DiffRepLimSurf}(iv), \begin{equation} {\mycal{S}}_* = \Big\{Q \in {\mycal{S}}_0: Q^2 - \frac{1}{3}s_+\,Q - \frac{2}{9}s_+^2\,{\rm Id} = 0\Big\}, \label{LimitSurf::AlterRep} \end{equation} where $s_+$ is given in \eqref{LimitSurf::Rep}. In other words, the minimal polynomial of any matrix in ${\mycal{S}}_*$ is $\lambda^2 - \frac{1}{3}s_+\,\lambda - \frac{2}{9}s_+^2$. Furthermore, for $Q$ belonging to a small tubular neighborhood of the limit manifold ${\mycal{S}}_*$, the norm of $Q^2 - \frac{1}{3}s_+\,Q - \frac{2}{9}s_+^2\,{\rm Id}$ is comparable to the distance from $Q$ to ${\mycal{S}}_*$, which is a consequence of the following lemma.
\begin{lemma}\label{DistComp} Let $\alpha$, $\beta$, $\gamma$, $\mu$, $\nu$ and $\delta$ be real numbers and define for $Q$ $\in$ ${\mycal{S}}_0$, \begin{multline*} h(Q) = \alpha\,({\rm tr}(Q^2))^3 + \beta\,s_+\,{\rm tr}(Q^2)\,{\rm tr}(Q^3)+ \gamma\,s_+^2\,({\rm tr}(Q^2))^2\\ + \mu\,s_+^3\,{\rm tr}(Q^3) + \nu\,s_+^4\,{\rm tr}(Q^2) + \delta\,s_+^6. \end{multline*} If \begin{align*} &\frac{8}{27}\alpha + \frac{4}{27}\beta + \frac{4}{9}\gamma + \frac{2}{9}\mu + \frac{2}{3}\nu + \delta = 0,\\ &\frac{8}{3}\alpha + \frac{10}{9}\beta + \frac{8}{3}\gamma + \mu + 2\nu = 0,\\ &(16\alpha + 4\beta + 8\gamma)^2 - (16\alpha + 6\beta + 8\gamma + 3\mu)^2 > 0, \end{align*} then there exist $\epsilon$ $>$ $0$ and $C$ $>$ $0$ such that \[ \frac{1}{C}\,{\rm dist}(Q,{\mycal{S}}_*)^2 \leq h(Q) \leq C\,{\rm dist}(Q,{\mycal{S}}_*)^2 \] for any $Q$ $\in$ ${\mycal{S}}_0$ satisfying ${\rm dist}(Q,{\mycal{S}}_*)$ $<$ $\epsilon$. \end{lemma} \noindent{\bf Proof.\;} Let $x$, $y$, $-(x+y)$ be the eigenvalues of $Q$. For $Q$ close to ${\mycal{S}}_*$, we can further assume that $x$ and $y$ are close to $-\frac{s_+}{3}$ while $-(x+y)$ is close to $\frac{2s_+}{3}$. We have \begin{multline*} h(Q) = 8\alpha\,(x^2 + y^2 + xy)^3 - 6\beta\,s_+\,(x^2 + y^2 + xy)xy(x+y)\\ + 4\gamma\,s_+^2\,(x^2 + y^2 + xy)^2 - 3\,s_+^3\mu\,xy(x+y) + 2\nu\,s_+^4\,(x^2 + y^2 + xy) + \delta\,s_+^6. 
\end{multline*} By a simple calculation using the given constraints on $\alpha$, $\beta$, $\gamma$, $\mu$ and $\nu$ we have \begin{align*} h\big(-\frac{s_+}{3},-\frac{s_+}{3}\big) &= 0,\\ \partial_x h\big(-\frac{s_+}{3},-\frac{s_+}{3}\big) &= \partial_y h\big(-\frac{s_+}{3},-\frac{s_+}{3}\big) = -\Big[\frac{8}{3}\alpha + \frac{10}{9}\beta + \frac{8}{3}\gamma + \mu + 2\nu\Big]s_+^5 = 0,\\ \partial_{xx} h\big(-\frac{s_+}{3},-\frac{s_+}{3}\big) &= \partial_{yy} h\big(-\frac{s_+}{3},-\frac{s_+}{3}\big)\\ &= \Big[2\big(\frac{8}{3}\alpha + \frac{10}{9}\beta + \frac{8}{3}\gamma + \mu + 2\nu\big) + 16\alpha + 4\beta + 8\gamma\Big]s_+^4\\ &= \big[16\alpha + 4\beta + 8\gamma\big]s_+^4,\\ \partial_{xy} h\big(-\frac{s_+}{3},-\frac{s_+}{3}\big) &= \Big[\big(\frac{8}{3}\alpha + \frac{10}{9}\beta + \frac{8}{3}\gamma + \mu + 2\nu\big) + 16\alpha + 6\beta + 8\gamma + 3\mu\Big]s_+^4\\ &= \big[16\alpha + 6\beta + 8\gamma + 3\mu\big]s_+^4. \end{align*} The assertion follows from a simple application of Taylor's theorem. $\square$ As argued above, it is relevant to see how $Q_L^2 - \frac{1}{3}s_+\,Q_L - \frac{2}{9}s_+^2\,{\rm Id}$ converges to zero. It is convenient to introduce \begin{equation} X_L = \frac{1}{L} \Big[Q_L^2 - \frac{1}{3}s_+\,Q_L - \frac{2}{9}s_+^2\,{\rm Id}\Big]. \label{XDef} \end{equation} The following result gives a rate of convergence for $Q_L^2 - \frac{1}{3}s_+\,Q_L - \frac{2}{9}s_+^2\,{\rm Id}$ provided that $Q_L$ is sufficiently close to ${\mycal{S}}_*$ and a gradient bound is known. \begin{proposition}\label{MinPolyConv} There exists $\delta_0$ $=$ $\delta_0(a^2,b^2,c^2)$ $>$ $0$ such that if ${\rm dist}(Q_L,{\mycal{S}}_*)$ $\leq$ $\delta_0$ and $|\nabla Q_L|$ $\leq$ $C_1$ in some $B_r(x_0) \cap \Omega$ for some $x_0$ $\in$ $\bar\Omega$, $L$ $\in$ $(0,1)$ and $C_1$ $>$ $0$, then \[ |X_L| \leq C(a^2,b^2,c^2,C_1)[1 + r^{-2}] \text{ in } B_{r/2}(x_0) \cap \Omega.
\] \end{proposition} \noindent{\bf Proof.\;} For convenience, we write $Q$ and $X$ for $Q_L$ and $X_L$, respectively. For $\delta_0$ sufficiently small, we define $Q^\sharp$ as the orthogonal projection of $Q$ onto ${\mycal{S}}_*$, i.e. \[ |Q(x) - Q^\sharp(x)| = {\rm dist}(Q(x),{\mycal{S}}_*). \] Set $Y = L^2\,|X|^2$. We have \begin{align*} Y &= {\rm tr}\Big(Q^4 - \frac{2}{3}s_+\,Q^3 - \frac{1}{3}s_+^2\,Q^2 + \frac{4}{27}s_+^3\,Q + \frac{4}{81}s_+^4\,{\rm Id}\Big)\\ &= {\rm tr}(Q^4) - \frac{2}{3}s_+\,{\rm tr}(Q^3) - \frac{1}{3}s_+^2\,{\rm tr}(Q^2) + \frac{4}{27}\,s_+^4\\ &= \frac{1}{2}{\rm tr}(Q^2)^2 - \frac{2}{3}s_+\,{\rm tr}(Q^3) - \frac{1}{3}s_+^2\,{\rm tr}(Q^2) + \frac{4}{27}\,s_+^4\\ &=: g(Q), \end{align*} where the last equality uses the identity ${\rm tr}(Q^4) = \frac{1}{2}{\rm tr}(Q^2)^2$, valid for trace-free $Q$ by the Cayley--Hamilton theorem. Applying Lemma \ref{DistComp}, we can choose $\delta_0$ sufficiently small such that \begin{equation} \frac{1}{C}\,Y \leq |Q - Q^\sharp|^2 \leq C\,Y \text{ in } B_r(x_0) \cap \Omega. \label{MinPolyConv::WeakBound} \end{equation} We have \begin{equation} \Delta Y = g_{ij}(Q)\,\Delta Q_{ij} + g_{ij,pq}(Q)\,\nabla Q_{ij} \cdot \nabla Q_{pq}, \label{MinPolyConv::Eqn01} \end{equation} where $g_{ij}(Q)$ $=$ $\frac{\partial g}{\partial Q_{ij}}(Q)$ and $g_{ij,pq}(Q)$ $=$ $\frac{\partial^2 g}{\partial Q_{ij}\partial Q_{pq}}(Q)$. Since $g$ vanishes on ${\mycal{S}}_*$ (in view of \eqref{LimitSurf::AlterRep}), it achieves its minimum everywhere on ${\mycal{S}}_*$ and in particular at $Q^\sharp$ (as a function on $M_{3\times 3}$). Hence, by \eqref{MinPolyConv::WeakBound} and the given gradient bound, we have \begin{equation} g_{ij,pq}(Q)\,\nabla Q_{ij} \cdot \nabla Q_{pq} \geq g_{ij,pq}(Q^\sharp)\,\nabla Q_{ij} \cdot \nabla Q_{pq} - C\,\sqrt{Y} \geq -C\,\sqrt{Y} \text{ in } B_r(x_0) \cap \Omega.
\label{MinPolyConv::Eqn02} \end{equation} On the other hand, by recalling \eqref{L-DG::ELEqn}, we have \begin{align*} g_{ij}(Q)\,L\,\Delta Q_{ij} &= \Big[2\,{\rm tr}(Q^2)\,Q_{ji} - 2s_+(Q^2)_{ji} - \frac{2}{3}s_+^2\,Q_{ji}\Big]\\ &\qquad\qquad \times \Big[- a^2 Q_{ij} - b^2[(Q^2)_{ij} - \frac{1}{3}{\rm tr}(Q^2)\delta_{ij}] + c^2\,{\rm tr}(Q^2)\,Q_{ij}\Big]\\ &= 2c^2\,({\rm tr}(Q^2))^3 + (- 2b^2 - 2c^2\,s_+)\,{\rm tr}(Q^2)\,{\rm tr}(Q^3)\\ &\qquad\qquad + (-2a^2 + \frac{1}{3}b^2\,s_+ - \frac{2}{3}c^2\,s_+^2)\,({\rm tr}(Q^2))^2 + (2a^2\,s_+ + \frac{2}{3}b^2s_+^2){\rm tr}(Q^3)\\ &\qquad\qquad + \frac{2}{3}a^2\,s_+^2\,{\rm tr}(Q^2)\\ &= (3a^2\frac{1}{s_+^2} + b^2\frac{1}{s_+})\,({\rm tr}(Q^2))^3 + (-3a^2\frac{1}{s_+} - 3b^2)\,{\rm tr}(Q^2)\,{\rm tr}(Q^3)\\ &\qquad\qquad - 3a^2\,({\rm tr}(Q^2))^2 + (2a^2\,s_+ + \frac{2}{3}b^2s_+^2){\rm tr}(Q^3) + \frac{2}{3}a^2\,s_+^2\,{\rm tr}(Q^2). \end{align*} Applying Lemma \ref{DistComp} and using \eqref{MinPolyConv::WeakBound}, we get, for $\delta_0$ sufficiently small, \begin{equation} g_{ij}(Q)\,L\,\Delta Q_{ij} \geq C\,|Q - Q^\sharp|^2 \geq CY \text{ in } B_r(x_0) \cap \Omega. \label{MinPolyConv::Eqn03} \end{equation} Combining \eqref{MinPolyConv::Eqn01}, \eqref{MinPolyConv::Eqn02} and \eqref{MinPolyConv::Eqn03} we get \[ L\,\Delta Y \geq 2C_2\,Y - C_2'\,L\,\sqrt{Y} \geq C_2\,Y - C_3\,L^2 \text{ in } B_r(x_0) \cap \Omega, \] where $C_2$, $C_2'$, $C_3$ are positive constants. We thus have \begin{equation} -L\,\Delta Y + C_2\,Y \leq C_3\,L^2 \text{ in } B_r(x_0) \cap \Omega. \label{MinPolyConv::Eqn04} \end{equation} It is readily seen that the assertion follows from Lemmas \ref{BBHMaxPrin} and \ref{BBHMaxPrin::Bdry}. $\square$ As a consequence of Proposition \ref{MinPolyConv}, we have the following estimate for $\Delta Q_L$. 
\begin{corollary}\label{UniLapBnd} There exists $\delta_0$ $=$ $\delta_0(a^2,b^2,c^2)$ $>$ $0$ such that if ${\rm dist}(Q_L,{\mycal{S}}_*)$ $\leq$ $\delta_0$ and $|\nabla Q_L|$ $\leq$ $C_1$ in some $B_r(x_0) \cap \Omega$ for some $x_0$ $\in$ $\bar\Omega$, $L$ $\in$ $(0,1)$ and $C_1$ $>$ $0$, then \[ |\Delta Q_L| \leq C(a^2,b^2,c^2,C_1)[1 + r^{-2}] \text{ in } B_{r/2}(x_0) \cap \Omega. \] \end{corollary} \noindent{\bf Proof.\;} By \eqref{L-DG::ELEqn}, \[ L^2|\Delta Q_L|^2 = \Big|-a^2Q - b^2\big(Q^2 - \frac{1}{3}{\rm tr}(Q^2)\,{\rm Id}\big) + c^2\,{\rm tr}(Q^2)\,Q\Big|^2 =: h(Q). \] A direct computation using Lemma \ref{DistComp} shows that $h(Q)$ and $|Q^2 - \frac{1}{3}s_+Q - \frac{2}{9}s_+^2\,{\rm Id}|^2$ are comparable near ${\mycal{S}}_*$. The assertion follows from Proposition \ref{MinPolyConv}. $\square$ \begin{corollary}\label{TildeFBConv} There exists $\delta_0$ $=$ $\delta_0(a^2,b^2,c^2)$ $>$ $0$ such that if ${\rm dist}(Q_L,{\mycal{S}}_*)$ $\leq$ $\delta_0$ and $|\nabla Q_L|$ $\leq$ $C_1$ in some $B_r(x_0) \cap \Omega$ for some $x_0$ $\in$ $\bar\Omega$, $L$ $\in$ $(0,1)$ and $C_1$ $>$ $0$, then \[ \tilde f_B(Q_{L}) \leq C(a^2,b^2,c^2,C_1)[1 + r^{-4}]\,L^2 \text{ in } B_{r/2}(x_0) \cap \Omega. \] \end{corollary} \noindent{\bf Proof.\;} Using Lemma \ref{DistComp}, we see that $\tilde f_B(Q)$ and $|Q^2 - \frac{1}{3}s_+Q - \frac{2}{9}s_+^2\,{\rm Id}|^2$ are comparable near ${\mycal{S}}_*$. The assertion follows from Proposition \ref{MinPolyConv}. $\square$ \begin{corollary}\label{|Q2|Conv} There exists $\delta_0$ $=$ $\delta_0(a^2,b^2,c^2)$ $>$ $0$ such that if ${\rm dist}(Q_L,{\mycal{S}}_*)$ $\leq$ $\delta_0$ and $|\nabla Q_L|$ $\leq$ $C_1$ in some $B_r(x_0) \cap \Omega$ for some $x_0$ $\in$ $\bar\Omega$, $L$ $\in$ $(0,1)$ and $C_1$ $>$ $0$, then \[ 0 \leq \frac{2}{3}s_+^2 - {\rm tr}(Q_{L}^2) \leq C[1 + r^{-2}]\,L \text{ in } B_{r/2}(x_0) \cap \Omega.
\] \end{corollary} \noindent{\bf Proof.\;} The second inequality follows from Proposition \ref{MinPolyConv} and the identity ${\rm tr}(Q^2) - \frac{2}{3}s_+^2$ $=$ ${\rm tr}(Q^2 - \frac{1}{3}s_+\,Q - \frac{2}{9}s_+^2\,{\rm Id})$. The first inequality follows from \cite[Proposition 3]{Ma-Za}. $\square$ We turn to establishing a uniform gradient bound for $Q_L$. We will use the Bochner technique. Note that a uniform interior gradient estimate was already established in \cite{Ma-Za}. To adapt the argument therein to our situation here, we need some additional information about the gradient on the boundary. To this end, we split $Q_L$ into $Q_L^\sharp$, the projection of $Q_L$ onto ${\mycal{S}}_*$, and $Q_L - Q_L^\sharp$, and obtain boundary gradient estimates for each of them separately. It turns out that $Q_L - Q_L^\sharp$ can be controlled rather easily by the minimal polynomial. For the other part, we use Proposition \ref{ProjEqn}. We start with: \begin{lemma}\label{C1BndAuxRslt} Given $x$ $\in$ $\Omega$ and $r$ $>$ $10\,{\rm dist}(x,\partial\Omega)$, assume that $Q_L$ satisfies \begin{equation} \sup_{B_r(x) \cap \Omega} |\nabla Q_L| \leq C_1. \label{CBAR::Hyp1} \end{equation} There exist $\delta_0$ $>$ $0$ and $C$ $>$ $0$ such that if ${\rm dist}(Q_L,{\mycal{S}}_*)$ $\leq$ $\delta_0$ in $B_r(x) \cap \Omega$, then \begin{multline*} \sup_{\partial\Omega \cap B_{r/2}(x)}|\nabla (Q_L - Q_L^\sharp)| \leq C\Big[r^{-1}\|Q_L - Q_L^\sharp\|_{L^\infty(B_r(x) \cap \Omega)}\\ + r^{-3/2}\|\nabla Q_L\|_{L^2(B_r(x) \cap \Omega)} + r^{1/4}\|\nabla Q_L\|_{L^2(B_r(x) \cap \Omega)}^{1/2}\Big], \end{multline*} where $Q_L^\sharp$ is the orthogonal projection of $Q_L$ onto ${\mycal{S}}_*$, defined by \[ |Q_L(y) - Q_L^\sharp(y)| = \min\big\{|Q_L(y) - R|: R \in {\mycal{S}}_*\big\}. \] \end{lemma} We will need a few simple results whose proof will be given after the proof of Lemma \ref{C1BndAuxRslt}.
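Before turning to these auxiliary results, we record a quick numerical sanity check (an illustration only, not part of the proof) of the algebraic identity $\big|Q^2 - \frac{1}{3}s_+\,Q - \frac{2}{9}s_+^2\,{\rm Id}\big|^2 = g(Q)$ derived in the proof of Proposition \ref{MinPolyConv} and used again below. The sketch assumes numpy and an arbitrary placeholder value of $s_+$.

```python
import numpy as np

rng = np.random.default_rng(0)
s_plus = 1.7  # placeholder value; the identity is algebraic, so any s_+ > 0 works

def random_traceless_symmetric():
    # a random element of S_0: symmetric, trace-free, 3x3
    A = rng.standard_normal((3, 3))
    Q = (A + A.T) / 2
    return Q - (np.trace(Q) / 3) * np.eye(3)

for _ in range(100):
    Q = random_traceless_symmetric()
    # |Q^2 - (1/3) s_+ Q - (2/9) s_+^2 Id|^2  (squared Frobenius norm)
    M = Q @ Q - (s_plus / 3) * Q - (2 * s_plus**2 / 9) * np.eye(3)
    lhs = np.sum(M * M)
    # g(Q) = (1/2) tr(Q^2)^2 - (2/3) s_+ tr(Q^3) - (1/3) s_+^2 tr(Q^2) + (4/27) s_+^4
    t2 = np.trace(Q @ Q)
    t3 = np.trace(Q @ Q @ Q)
    rhs = 0.5 * t2**2 - (2 * s_plus / 3) * t3 - (s_plus**2 / 3) * t2 + (4 / 27) * s_plus**4
    assert abs(lhs - rhs) < 1e-9 * (1 + abs(rhs))
```

The identity is exact (it rests on the Cayley--Hamilton consequence ${\rm tr}(Q^4) = \frac{1}{2}{\rm tr}(Q^2)^2$ for trace-free $Q$), so the check passes to machine precision.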
\begin{lemma}\label{CalcLemma} Let $g:$ $B_1(0)$ $\subset$ ${\mathbb R}^m$ $\rightarrow$ ${\mathbb R}$ be a smooth function which has a local minimum at the origin and $g(0)$ $=$ $0$. Then there exist some positive constants $\epsilon$ $>$ $0$ and $C$ $>$ $0$ depending only on $g$ such that \[ \Big(2g\,\nabla_{ij} g - \nabla_i g\,\nabla_j g\Big)(x)\,a_i\,a_j \geq -C\,|x|^3\,|a|^2 \] for any $x$ $\in$ $B_\epsilon(0)$ and $a$ $\in$ ${\mathbb R}^m$. \end{lemma} \begin{lemma}\label{SimpleEllEst} Assume that $f$ $\in$ $L^p(B_r^+(0))$, $p$ $>$ $3$, $g$ $\in$ $C^2(B_r(0) \cap \{x_3 = 0\})$ and $u$ $\in$ $W^{1,\infty}(B_r^+(0))$ satisfy \begin{align*} -a_{ij}\,\nabla_{ij} u + b_i\,\nabla_i u &= f \text{ in } B_r^+(0),\\ u &= g \text{ on } B_r(0) \cap \{x_3 = 0\}, \end{align*} where the coefficients $a_{ij}$ and $b_i$ are continuous and satisfy $\lambda$ $\leq$ $(a_{ij})$ $\leq$ $\Lambda$ and $|b_i|$ $\leq$ $\Lambda$ for some $0$ $<$ $\lambda$ $\leq$ $\Lambda$ $<$ $\infty$. Then \begin{multline*} \sup_{B_{r/2}^+(0)} |\nabla u| \leq C\big[\sup_{B_r(0) \cap \{x_3 = 0\}} |\nabla_T g| + r\,\sup_{B_r(0) \cap \{x_3 = 0\}}|\nabla_T^2 g|\\ + r^{-3/2}\|\nabla u\|_{L^2(B_r^+(0))} + r^{1-3/p}\,\|f\|_{L^p(B_r^+(0))}\big], \end{multline*} where $\nabla_T$, $\nabla_T^2$ denote the horizontal gradient and Hessian, respectively. \end{lemma} \noindent{\bf Proof of Lemma \ref{C1BndAuxRslt}.} We will drop the subscript $L$. Also we will abbreviate $B_r$ for $B_r(x)$. In the proof we will use, without explicit mention, the following two facts: (a) $|Q|$ $\leq$ $\sqrt{\frac{2}{3}}s_+$ in $\Omega$ (see \cite[Proposition 3]{Ma-Za}), and (b) $|R|$ $=$ $\sqrt{\frac{2}{3}}s_+$ for any $R$ $\in$ ${\mycal{S}}_*$. Set $\hat X = L\,X$. Arguing as in the proof of Proposition \ref{MinPolyConv} (cf. \eqref{MinPolyConv::WeakBound}), for $\delta_0$ sufficiently small, there holds \begin{equation} \frac{1}{C}\,|\hat X| \leq |Q - Q^\sharp| \leq C\,|\hat X| \text{ in } B_r \cap \Omega.
\label{CBAR::Eqn02} \end{equation} Note that $|\hat X|$ is smooth whenever $|\hat X|$ $>$ $0$ and, by \eqref{CBAR::Hyp1}, \begin{equation} \big|\nabla|\hat X|\big| = \frac{\big|\nabla |\hat X|^2\big|}{2|\hat X|} = \frac{\big|{\rm tr}(\nabla \hat X\,\hat X)\big|}{|\hat X|} \leq |\nabla \hat X| \leq C|\nabla Q| \leq C \text{ in } B_r \cap \Omega \cap \{|\hat X| > 0\}. \label{CBAR::Eqn01} \end{equation} Since $|\hat X|$ is continuous in $B_r \cap \Omega$, it follows that $|\hat X|$ $\in$ $W^{1,\infty}(B_r \cap \Omega)$. In particular (see e.g. \cite[p. 84]{EG}), \begin{equation} \nabla |\hat X| = 0 \text{ a.e. in } B_r \cap \Omega \cap \{|\hat X| = 0\}. \label{CBAR::Eqn01x} \end{equation} As in the proof of Proposition \ref{MinPolyConv}, we have \[ |\hat X|^2 = \frac{1}{2}{\rm tr}(Q^2)^2 - \frac{2}{3}s_+\,{\rm tr}(Q^3) - \frac{1}{3}s_+^2\,{\rm tr}(Q^2) + \frac{4}{27}\,s_+^4 =: g(Q). \] Hence, in the region where $|\hat X|$ $>$ $0$, \begin{equation} \Delta |\hat X| = \frac{g_{ij}(Q)\,\Delta Q_{ij}}{2|\hat X|} + \frac{2g(Q)\,g_{ij,pq}(Q)\,\nabla Q_{ij} \cdot \nabla Q_{pq} - g_{ij}(Q)\,\nabla Q_{ij}\,g_{pq}(Q)\,\nabla Q_{pq}}{4|\hat X|^3}, \label{CBAR::Eqn03} \end{equation} where $g_{ij}(Q)$ $=$ $\frac{\partial g}{\partial Q_{ij}}(Q)$ and $g_{ij,pq}(Q)$ $=$ $\frac{\partial^2 g}{\partial Q_{ij}\partial Q_{pq}}(Q)$. Since $g$ vanishes on ${\mycal{S}}_*$ (in view of \eqref{LimitSurf::AlterRep}), it achieves its minimum everywhere on ${\mycal{S}}_*$ and in particular at $Q^\sharp$ (as a function on $M_{3\times 3}$). Hence, by \eqref{CBAR::Eqn02} and Lemma \ref{CalcLemma}, for $\delta_0$ sufficiently small, we have in $B_r \cap \Omega$, \begin{equation} \frac{2g(Q)\,g_{ij,pq}(Q)\,\nabla Q_{ij} \cdot \nabla Q_{pq} - g_{ij}(Q)\,\nabla Q_{ij}\,g_{pq}(Q)\,\nabla Q_{pq}}{|\hat X|^3} \geq -C\,|\nabla Q|^2.
\label{CBAR::Eqn04} \end{equation} On the other hand, by recalling \eqref{MinPolyConv::Eqn03}, we have for $\delta_0$ sufficiently small that \begin{equation} g_{ij}(Q)\,L\,\Delta Q_{ij} \geq C\,|Q - Q^\sharp|^2 \geq 0 \text{ in } B_r \cap \Omega. \label{CBAR::Eqn05} \end{equation} We now fix $\delta_0$. Combining \eqref{CBAR::Eqn01}, \eqref{CBAR::Eqn03}, \eqref{CBAR::Eqn04} and \eqref{CBAR::Eqn05} we get \begin{equation} -\Delta|\hat X| \leq C|\nabla Q|^2 \text{ in } B_r \cap \Omega \cap \{|\hat X| > 0\}. \label{CBAR::Eqn05x1} \end{equation} We next show that the above inequality can be extended to \begin{equation} -\Delta|\hat X| \leq C|\nabla Q|^2 \text{ in } B_r \cap \Omega \text{ in the sense of distributions}. \label{CBAR::Eqn05x2} \end{equation} Indeed, since $|\hat X|$ is smooth in $\{|\hat X| > 0\}$, by Sard's theorem, we can select a decreasing sequence $\delta_j$ $\rightarrow$ $0$ such that the level sets $\{|\hat X| = \delta_j\}$ are smooth. Fix some $\varphi$ $\in$ $C_c^{\infty}(B_r \cap \Omega)$ with $\varphi$ $\geq$ $0$ in $B_r \cap \Omega$. Then, by \eqref{CBAR::Eqn01x}, we compute \begin{align*} \int_{B_r \cap \Omega} \nabla|\hat X|\,\nabla \varphi\,dx &= -\int_{B_r \cap \Omega \cap \{|\hat X| \geq \delta_j\}}\Delta |\hat X|\,\varphi\,dx + \int_{B_r \cap \Omega \cap \{|\hat X| = \delta_j\}} \nabla|\hat X| \cdot \nu\,\varphi\,d\sigma(x)\\ &\qquad\qquad + \int_{B_r \cap \Omega \cap \{0 < |\hat X| < \delta_j\}} \nabla|\hat X|\,\nabla \varphi\,dx, \end{align*} where $\nu$ is the outer normal to $\{|\hat X| = \delta_j\}$ relative to the set $B_r \cap \Omega \cap \{|\hat X| > \delta_j\}$. Observe that $\nabla |\hat X| \cdot \nu$ $\leq$ $0$ on $\{|\hat X| = \delta_j\}$. Thus, by \eqref{CBAR::Eqn01} and \eqref{CBAR::Eqn05x1}, \[ \int_{B_r \cap \Omega} \nabla|\hat X|\,\nabla \varphi\,dx \leq C\int_{B_r \cap \Omega}|\nabla Q|^2\,\varphi\,dx + C\sup_{B_r \cap \Omega}|\nabla \varphi|\,\big|B_r \cap \Omega \cap \{0 < |\hat X| < \delta_j\}\big|.
\] Sending $j$ $\rightarrow$ $\infty$, we obtain \eqref{CBAR::Eqn05x2}. By \eqref{CBAR::Eqn05x2}, $|\hat X|$ $\leq$ $w$ where $w$ is the solution to \begin{align*} -\Delta w &= C|\nabla Q|^2 \text{ in } B_r \cap \Omega,\\ w &= |\hat X| \text{ on } \partial (B_r \cap \Omega). \end{align*} Using \eqref{CBAR::Hyp1} and applying Lemma \ref{SimpleEllEst}, we obtain, for $y \in B_{r/2} \cap \Omega$, \begin{align} w(y) &\leq C\Big[r^{-3/2}\|\nabla w\|_{L^2(B_{3r/4} \cap \Omega)} + r^{1/4}\|\nabla Q\|_{L^8(B_{3r/4} \cap \Omega)}^2\Big]{\rm dist}(y,\partial\Omega)\nonumber\\ &\leq C\Big[r^{-3/2}\|\nabla w\|_{L^2(B_{3r/4} \cap \Omega)} + r^{1/4}\|\nabla Q\|_{L^2(B_{3r/4} \cap \Omega)}^{1/2}\Big]{\rm dist}(y,\partial\Omega). \label{CBAR::Eqn06} \end{align} To proceed we split $w$ $=$ $w_1 + w_2$ where $w_1$ is the solution to \begin{align*} -\Delta w_1 &= C|\nabla Q|^2 \text{ in } B_r \cap \Omega,\\ w_1 &= 0 \text{ on } \partial (B_r \cap \Omega). \end{align*} By elliptic estimates and \eqref{CBAR::Hyp1}, we have \begin{equation} \|\nabla w_1\|_{L^2(B_{3r/4} \cap \Omega)} \leq C\,\|\Delta w_1\|_{L^2(B_r \cap \Omega)} \leq C\,\|\nabla Q\|_{L^2(B_r \cap \Omega)}. \label{CBAR::Eqn07} \end{equation} Also, as $w_2$ satisfies \begin{align*} -\Delta w_2 &= 0 \text{ in } B_r \cap \Omega,\\ w_2 &= 0 \text{ on } B_r \cap \partial \Omega,\\ w_2 &= |\hat X| \text{ on } \partial B_r \cap \Omega, \end{align*} we infer that \begin{multline} \|\nabla w_2\|_{L^2(B_{3r/4} \cap \Omega)} \leq C\,r^{-1}\,\|w_2\|_{L^2(B_r \cap \Omega)}\\ \leq C\,r^{1/2}\|w_2\|_{L^\infty(B_r \cap \Omega)} \leq C\,r^{1/2}\|\hat X\|_{L^\infty(B_r \cap \Omega)}. 
\label{CBAR::Eqn08} \end{multline} Taking \eqref{CBAR::Eqn02}, \eqref{CBAR::Eqn06}-\eqref{CBAR::Eqn08} into account, we arrive at \begin{multline*} |Q - Q^\sharp|(y) \leq C\Big[r^{-1}\,\|Q - Q^\sharp\|_{L^\infty(B_r \cap \Omega)} + r^{-3/2}\|\nabla Q\|_{L^2(B_{3r/4} \cap \Omega)}\\ + r^{1/4}\|\nabla Q\|_{L^2(B_{3r/4} \cap \Omega)}^{1/2}\Big]{\rm dist}(y,\partial\Omega) \text{ in } B_{r/2} \cap \Omega. \end{multline*} As $|Q - Q^\sharp|$ vanishes on $\partial\Omega$, the conclusion follows. $\square$ \noindent{\bf Proof of Lemma \ref{CalcLemma}.} Set \[ A = 2g\,\nabla^2 g - \nabla g \otimes \nabla g. \] Since $g$ has a local minimum at the origin, we have the following Taylor expansions: \begin{align*} g(x) &= \frac{1}{2}\nabla_{ij} g(0)\,x_i\,x_j + O(|x|^3),\\ \nabla_{ij} g(x) &= \nabla_{ij} g(0) + O(|x|),\\ \nabla_i g(x)\,\nabla_j g(x) &= \nabla_{ip}g(0)\,x_p\,\nabla_{jq}g(0)\,x_q + O(|x|^3), \end{align*} where the error terms are meant for small $|x|$. Hence \[ A_{ij}(x)\,a_i\,a_j = \big[\nabla_{ij} g(0)\,x_i\,x_j\big]\big[\nabla_{ij} g(0)\,a_i\,a_j\big] - \big[\nabla_{ip}g(0)\,x_p\,a_i\big]\big[\nabla_{jq}g(0)\,x_q\,a_j\big] + O(|x|^3\,|a|^2). \] Now observe that $\nabla^2 g(0)$ is non-negative and so, by the Cauchy--Schwarz inequality applied to the semi-definite bilinear form $\nabla^2 g(0)$, the difference of the first two terms on the right hand side is non-negative as well. The assertion follows. $\square$ \noindent{\bf Proof of Lemma \ref{SimpleEllEst}.} By scaling, it suffices to consider $r$ $=$ $1$. Also, for simplicity, we will only present a proof for the case $(a_{ij})$ $=$ $(\delta_{ij})$ and $b_i$ $=$ $0$. The general case can be done in exactly the same way. We first observe that we can assume without loss of generality that $g$ $\equiv$ $0$. Indeed, extend $g$ to a function $G$ on $B_1^+$ by setting $G(x_1,x_2,x_3)$ $=$ $g(x_1,x_2)$. Then the function $\tilde u$ $:=$ $u - G$ belongs to $W^{1,\infty}(B_1^+)$ and satisfies $-\Delta \tilde u$ $=$ $\tilde f$ where $\tilde f$ $=$ $f + \Delta G$.
Moreover, there hold \begin{align*} |\nabla u| &\leq |\nabla \tilde u| + \sup_{B_1 \cap \{x_3 = 0\}} |\nabla_T g|,\\ |\tilde f| &\leq |f| + \sup_{B_1 \cap \{x_3 = 0\}} |\nabla_T^2 g|. \end{align*} Hence, if the assertion holds for $g$ $\equiv$ $0$, we can apply it to $\tilde u$ and then use the above inequalities to recover the general case. We therefore assume henceforth that $g$ $\equiv$ $0$. Fix $0$ $<$ $r_1$ $\leq$ $r_2$ $<$ $1$ and select a smooth cut-off function $\eta$ which is identically $1$ in $B_{r_1}$ and vanishes outside $B_{r_2}$. It is readily seen that the function $\hat u$ $:=$ $\eta u$ belongs to $W^{1,\infty}(B_1^+) \cap W^{1,1}_0(B_1^+)$, vanishes near $\partial B_1 \cap \{x_3 > 0\}$ and satisfies $-\Delta \hat u$ $=$ $\hat f$ where $\hat f$ $=$ $\eta\,f - 2\nabla\eta\,\nabla u - u\,\Delta\eta$ $\in$ $L^p(B_1^+)$. Hence, by the unique solvability in $W^{2,p}$ and by $W^{2,p}$-estimates (see \cite[Theorems 9.13, 9.15]{GT}), $\hat u$ $\in$ $W^{2,p}(B_{1}^+) \cap W^{1,p}_0(B_{1}^+)$ and \begin{align*} \|\nabla^2 \hat u\|_{L^p(B_{r_1}^+)} &\leq C\,\|\hat f\|_{L^p(B_1^+)}\\ &\leq C\Big[\frac{1}{(r_2 - r_1)^2}\|u\|_{L^p(B_{r_2}^+)} + \frac{1}{r_2 - r_1}\|\nabla u\|_{L^p(B_{r_2}^+)} + \|f\|_{L^p(B_1^+)}\Big]. \end{align*} Thus, since $u|_{B_1 \cap \{x_3 = 0\}}$ $\equiv$ $0$, Poincar\'{e}'s inequality gives \[ \|\nabla^2 \hat u\|_{L^p(B_{r_1}^+)} \leq C\Big[\frac{r_2}{(r_2 - r_1)^2}\|\nabla u\|_{L^p(B_{r_2}^+)} + \|f\|_{L^p(B_1^+)}\Big].
\] Applying Morrey's inequality (see \cite[Theorem 7.19]{GT}), we hence get \begin{align*} \mathop\textrm{osc}_{B_{r_1}^+}|\nabla u| &= \mathop\textrm{osc}_{B_{r_1}^+}|\nabla \hat u| \leq C\,r_1^{1-3/p}\|\nabla^2 \hat u\|_{L^p(B_{r_1}^+)}\\ &\leq C\Big[\frac{r_2^{2 - 3/p}}{(r_2 - r_1)^2}\|\nabla u\|_{L^p(B_{r_2}^+)} + \|f\|_{L^p(B_1^+)}\Big], \end{align*} and so \begin{align*} \sup_{B_{r_1}^+}|\nabla u| &\leq \mathop\textrm{osc}_{B_{r_1}^+}|\nabla u| + C\,r_1^{-3/p}\|\nabla u\|_{L^p(B_{r_1}^+)}\\ &\leq C\Big[\frac{r_2^{2 - 3/p}}{(r_2 - r_1)^2}\|\nabla u\|_{L^p(B_{r_2}^+)} + \|f\|_{L^p(B_1^+)}\Big]. \end{align*} Using Young's inequality, we hence arrive at \begin{equation} \sup_{B_{r_1}^+}|\nabla u| \leq \frac{1}{2}\sup_{B_{r_2}^+}|\nabla u| + C_1\Big[\frac{1}{(r_2 - r_1)^p}\|\nabla u\|_{L^2(B_{1}^+)} + \|f\|_{L^p(B_1^+)}\Big]. \label{SEE::Eqn01} \end{equation} To derive the conclusion from \eqref{SEE::Eqn01}, we use a standard iteration procedure (see e.g. \cite[Lemma 6.1]{Giu}). Fix some $\lambda$ $>$ $1$ and set $r_i$ $=$ $\frac{3}{4} - \frac{1}{4\lambda^i}$. Applying \eqref{SEE::Eqn01} repeatedly, we get \begin{align*} \sup_{B_{1/2}^+}|\nabla u| &= \sup_{B_{r_0}^+}|\nabla u|\\ &\leq \frac{1}{2}\sup_{B_{r_{1}}^+}|\nabla u| + C_1\Big[\frac{4^p}{(\lambda-1)^p}\|\nabla u\|_{L^2(B_{1}^+)} + \|f\|_{L^p(B_1^+)}\Big]\lambda^p\\ &\leq \frac{1}{4}\sup_{B_{r_{2}}^+}|\nabla u| + C_1\Big[\frac{4^p}{(\lambda-1)^p}\|\nabla u\|_{L^2(B_{1}^+)} + \|f\|_{L^p(B_1^+)}\Big]\big\{\lambda^p + 2^{-1}\lambda^{2p}\big\}\\ &\leq \ldots \leq \frac{1}{2^i}\sup_{B_{r_{i}}^+}|\nabla u| + C_1\Big[\frac{4^p}{(\lambda-1)^p}\|\nabla u\|_{L^2(B_{1}^+)} + \|f\|_{L^p(B_1^+)}\Big]\big\{\lambda^p + \ldots + 2^{1-i}\lambda^{ip}\big\}. \end{align*} Choosing $\lambda$ $<$ $2^{1/p}$ and letting $i$ $\rightarrow$ $\infty$, we thus get \begin{align*} \sup_{B_{1/2}^+}|\nabla u| &\leq C_2\Big[\|\nabla u\|_{L^2(B_{1}^+)} + \|f\|_{L^p(B_1^+)}\Big], \end{align*} which completes the proof.
$\square$ The next lemma is an extension of \cite[Lemma 7]{Ma-Za} to cover boundary estimates. \begin{lemma}\label{BochnerEst} There exist $\epsilon_0$ $>$ $0$, $\delta_0$ $>$ $0$ and $C$ $>$ $0$ such that if $Q_L$ satisfies ${\rm dist}(Q_L,{\mycal{S}}_*)$ $\leq$ $\delta_0$ in $B_{2r_0}(x_0) \cap \Omega$ for some $x_0$ $\in$ $\bar \Omega$ and \[ \bar E := \sup_{x \in B_{r_0}(x_0) \cap \Omega} \sup_{0 < r < r_0}\frac{1}{r}\int_{B_r(x) \cap \Omega} \Big[\frac{1}{2}|\nabla Q_L|^2 + \frac{1}{L}\tilde f_B(Q_L)\Big]\,dy \leq \epsilon_0, \] then \[ \sup_{B_{r_0/2}(x_0) \cap \Omega} |\nabla Q_L| \leq C\Big[\sup_{\partial\Omega}\big[|\nabla_T Q_b| + |\nabla_T^2 Q_b|\big] + \frac{1}{r_0}\|{\rm dist}(Q_L,{\mycal{S}}_*)\|_{L^\infty(B_{r_0}(x_0))} + \frac{1}{r_0}\bar E^{1/2}\Big]. \] Here $\nabla_T$ denotes the tangential derivative along $\partial\Omega$. \end{lemma} \noindent{\bf Proof.\;} For simplicity, we write $Q$ $=$ $Q_L$ and define \begin{align*} &v = \frac{1}{2}|\nabla Q|^2 + \frac{1}{L}\tilde f_B(Q), \quad B = \sup_{\partial\Omega} \big[|\nabla_T Q_b| + |\nabla_T^2 Q_b|\big],\\ &\bar d = \sup_{B_{r_0}(x_0) \cap \Omega} {\rm dist}(Q,{\mycal{S}}_*) \text{ and } R = \bar E^{1/2}. \end{align*} By \cite[Lemma 6]{Ma-Za}, for $\delta_1$ $>$ $0$ sufficiently small, the following Bochner-type inequality holds provided $\delta_0$ $<$ $\delta_1$, which we will henceforth assume: \begin{equation} -\Delta v \leq C\,v^2 \text{ in } B_{2r_0}(x_0) \cap \Omega. \label{BE::Eqn01} \end{equation} The proof then proceeds by a standard Bochner-type argument. It suffices to establish \begin{equation} \sup_{B_{r_0/2}(x_0) \cap \Omega} v \leq C\Big[B^2 + \frac{1}{r_0^2}\bar d^2 + \frac{1}{r_0^2}\,R^2\Big]. \label{BE::Eqn02} \end{equation} To this end, we show for appropriate choice of $\epsilon_0$, $\delta_0$ and $r_0$ that \begin{equation} M := \max_{0 \leq r \leq r_0}(r_0 - r)^2\sup_{B_r(x_0) \cap \Omega} \big[v - C_*\,B^2\big] \leq C\,(R + \bar d)^2.
\label{BE::Eqn03} \end{equation} where $C_*$ is some positive constant to be determined. Evidently, \eqref{BE::Eqn03} implies \eqref{BE::Eqn02}. Pick $r_1$ $\in$ $[0,r_0]$ and $x_1$ $\in$ $\overline{B_{r_1}(x_0) \cap \Omega}$ such that \begin{align*} (r_0 - r_1)^2\sup_{B_{r_1}(x_0) \cap \Omega} \big[v - C_*\,B^2\big] &= \max_{0 \leq r \leq r_0}(r_0 - r)^2\sup_{B_r(x_0) \cap \Omega} \big[v - C_*\,B^2\big],\\ V &:= v(x_1) = \sup_{B_{r_1}(x_0) \cap \Omega} v. \end{align*} Then for $r_2$ $=$ $\frac{1}{2}(r_0 - r_1)$, \begin{align} \sup_{B_{r_2}(x_1) \cap \Omega} v &\leq \sup_{B_{r_2 + r_1}(x_0) \cap \Omega} v \leq 4V - 3C_*\,B^2 \leq 4V,\label{BE::Eqn04}\\ M &= 4r_2^2\,\big[V - C_*\,B^2\big].\label{BE::Eqn05} \end{align} We do the following rescaling: \begin{align*} \tilde \Omega &= \big\{y \in {\mathbb R}^3: x_1 + V^{-1/2}\,y \in \Omega\big\},\\ \rho_0 &= V^{1/2}\,r_2 \geq \sqrt{M}/2,\\ \tilde v(y) &= \frac{1}{V}\,v(x_1 + V^{-1/2}\,y) \text{ for } y \in B_{\rho_0}(0) \cap \tilde\Omega. \end{align*} Then by \eqref{BE::Eqn01}, \eqref{BE::Eqn04} and our hypotheses, we have \begin{align} -\Delta \tilde v &\leq C_0\,\tilde v \text{ in } B_{\rho_0}(0) \cap \tilde\Omega,\label{BE::Eqn06}\\ \sup_{B_{\rho_0}(0) \cap \tilde\Omega} \tilde v &\leq 4\,\tilde v(0) = 4,\label{BE::Eqn07}\\ \frac{1}{\rho}\int_{B_\rho(0) \cap \tilde\Omega}\,\tilde v\,dy &\leq R^2 \leq \epsilon_0 \text{ for } 0 < \rho \leq \rho_0. \label{BE::Eqn08} \end{align} For simplicity, in the sequel, we will write $B_r$ for $B_r(0)$. Let $\rho_1$ $=$ $\min\{{\rm dist}(0,\partial\tilde\Omega),\frac{1}{10}\rho_0\}$. We first claim that $\rho_1$ $\leq$ $1$ for $\epsilon_0$ sufficiently small. For otherwise, by the Harnack inequality for \eqref{BE::Eqn06} and by \eqref{BE::Eqn08} applied to the ball $B_1(0)$, we have \[ 1 = \tilde v(0) \leq C\int_{B_1(0)} \tilde v(y)\,dy \leq C\,R^2 \leq C\,\epsilon_0, \] which is impossible for $\epsilon_0$ small. The claim thus follows.
Now let $\hat v(z)$ $=$ $\tilde v(\rho_1\,z)$ and apply the Harnack inequality for $\hat v$ on $B_1$ to get \[ 1 = \tilde v(0) = \hat v(0) \leq C\int_{B_1} \hat v(z)\,dz = \frac{C}{\rho_1^3}\int_{B_{\rho_1}} \tilde v(y)\,dy \leq \frac{C\,R^2}{\rho_1^2}. \] Hence \[ \min\big\{{\rm dist}(0,\partial\tilde\Omega),\frac{1}{10}\rho_0\big\} = \rho_1 \leq C_1\,R. \] If $\rho_1$ $=$ $\frac{1}{10}\rho_0$, this implies that $M$ $\leq$ $4\rho_0^2$ $\leq$ $400C_1^2\,R^2$, which proves \eqref{BE::Eqn03} and we are done. Hence, we assume that \begin{equation} {\rm dist}(0,\partial\tilde\Omega) = \rho_1 < \frac{1}{10}\rho_0. \label{BE::Eqn08bis} \end{equation} Note that we can assume in addition that \begin{equation} V \geq C_*\,B^2, \label{BE::Eqn08Add} \end{equation} for otherwise \eqref{BE::Eqn03} follows from \eqref{BE::Eqn05} and we are also done. Next, we note that if we define $\tilde Q(y)$ $=$ $Q(x_1 + V^{-1/2}\,y)$ for $y$ $\in$ $B_{\rho_0} \cap \tilde\Omega$ and $\tilde L$ $=$ $V\,L$, then \begin{equation} \tilde v = \frac{1}{2}|\nabla \tilde Q|^2 + \frac{1}{\tilde L}\,\tilde f_B(\tilde Q), \label{BE::Eqn09} \end{equation} and so, by \eqref{BE::Eqn07}, \begin{equation} |\nabla\tilde Q| \leq 4 \text{ in } B_{\rho_0} \cap \tilde \Omega. \label{BE::Eqn10} \end{equation} Hence, by \eqref{BE::Eqn08bis} and Lemma \ref{C1BndAuxRslt}, we have for $\rho$ $\in$ $(0,\rho_0)$ that \begin{equation} \sup_{\partial\tilde\Omega \cap B_{\rho/2}} |\nabla(\tilde Q - \tilde Q^\sharp)| \leq C\Big[\rho^{-1}\|\tilde Q - \tilde Q^\sharp\|_{L^\infty(B_\rho \cap \tilde\Omega)} + \rho^{-3/2}\|\nabla \tilde Q\|_{L^2(B_{\rho} \cap \tilde\Omega)} + \rho^{1/4}\|\nabla \tilde Q\|_{L^2(B_{\rho} \cap \tilde\Omega)}^{1/2}\Big]. \label{BE::Eqn11} \end{equation} Here $\tilde Q^\sharp$ denotes the orthogonal projection of $\tilde Q$ onto ${\mycal{S}}_*$. Also, note that, as $\tilde Q^\sharp$ $=$ $P \circ \tilde Q$ for some smooth projection map $P$, \[ |\nabla \tilde Q^\sharp| \leq C\,|\nabla \tilde Q|.
\] Thus, by Proposition \ref{ProjEqn}, \[ |\Delta \tilde Q^\sharp| \leq C\,|\nabla \tilde Q|^2. \] Noting \eqref{BE::Eqn08bis} and $\rho\,V^{-1/2}$ $\leq$ $\rho_0\,V^{-1/2}$ $=$ $r_2$ $\leq$ $r_0$, we can apply Lemma \ref{SimpleEllEst} to get \begin{align} \sup_{B_{\rho/2} \cap \tilde\Omega} |\nabla \tilde Q^\sharp| &\leq C\Big[\|\nabla_T\tilde Q\|_{L^\infty(\partial\tilde\Omega \cap B_{\rho})} + \rho\,\|\nabla_T^2 \tilde Q\|_{L^\infty(\partial\tilde\Omega \cap B_{\rho})}\nonumber\\ &\qquad\qquad + \rho^{-3/2}\|\nabla \tilde Q\|_{L^2(B_\rho \cap \tilde\Omega)} + \rho^{1/4}\|\Delta \tilde Q^\sharp\|_{L^4(B_{\rho} \cap \tilde\Omega)}\Big]\nonumber\\ &\leq C\Big[V^{-1/2}B + \rho^{-3/2}\|\nabla \tilde Q\|_{L^2(B_{\rho} \cap \tilde\Omega)} + \rho^{1/4}\|\nabla \tilde Q\|_{L^2(B_{\rho} \cap \tilde\Omega)}^{1/2}\Big]. \label{BE::Eqn12} \end{align} Summing up \eqref{BE::Eqn11} and \eqref{BE::Eqn12} and using \eqref{BE::Eqn08}, \eqref{BE::Eqn08Add} and \eqref{BE::Eqn09}, we infer that \[ \sup_{\partial\tilde\Omega \cap B_{\rho/2}} \tilde v \leq C_2\Big[\frac{1}{C_*} + \frac{R^2 + \bar d^2}{\rho^2} + R\,\rho\Big]. \] We therefore conclude that the function \[ w_\rho := \left\{\begin{array}{l} \max\{C_2\big[\frac{1}{C_*} + \frac{R^2 + \bar d^2}{\rho^2} + R\,\rho\big], \tilde v\} \text{ in } B_{\rho/2} \cap \tilde\Omega,\\ C_2\big[\frac{1}{C_*} + \frac{R^2 + \bar d^2}{\rho^2} + R\,\rho\big] \text{ in } B_{\rho/2} \setminus \tilde\Omega \end{array}\right. \] is Lipschitz in $B_{\rho/2}$ and satisfies \[ -\Delta w_\rho \leq C\,w_\rho \text{ in } B_{\rho/2}. \] As before, the Harnack inequality implies that $\rho_0$ $<$ $1$ if we choose $\epsilon_0$ sufficiently small and $C_*$ sufficiently large.
Now, define $\hat w_\rho(z)$ $=$ $w_\rho(\rho\,z)$ and apply the Harnack inequality together with \eqref{BE::Eqn08} to get \begin{multline*} 1 = \tilde v(0) \leq w_\rho(0) = \hat w_\rho(0) \leq C\int_{B_{1/2}} \hat w_\rho(z)\,dz\\ = \frac{C}{\rho^3}\int_{B_{\rho/2}} w_\rho(y)\,dy \leq C\Big[\frac{1}{C_*} + \frac{R^2 + \bar d^2}{\rho^2} + R\,\rho\Big]. \end{multline*} To sum up, we have shown that \begin{equation} 1 \leq C_3\Big[\frac{1}{C_*} + \frac{(R + \bar d)^2}{\rho^2} + (R + \bar d)\,\rho\Big] \text{ for any } \rho \in (0,\rho_0). \label{BE::Eqn13} \end{equation} Now select $C_*$ $>$ $4C_3$, $\epsilon_0$ $<$ $32^{-1}\,C_3^{-3/2}$ and $\delta_0$ $<$ $\epsilon_0^{1/2}$ so that \[ C_3\Big[\frac{1}{C_*} + \frac{(R + \bar d)^2}{\rho_*^2} + (R + \bar d)\,\rho_*\Big] < \frac{1}{2} \text{ for } \rho_* = (R + \bar d)^{1/3}. \] \eqref{BE::Eqn13} then implies that \[ \rho_0 < \rho_* \leq 2\,\epsilon_0^{1/6}, \] and so \[ 1 \leq C_3\Big[\frac{1}{C_*} + \frac{(R + \bar d)^2}{\rho_0^2} + \epsilon_0^{2/3}\Big] \leq \frac{1}{2} + C_3\,\frac{(R + \bar d)^2}{\rho_0^2}. \] Therefore, by \eqref{BE::Eqn05}, \[ M \leq 4\rho_0^2 \leq 8C_3\,(R + \bar d)^2. \] The proof is complete. $\square$ \begin{remark}\label{BochnerEst::Interior} Regarding interior estimates, the proof given above yields the following statement. There exist $\epsilon_0$ $>$ $0$, $\delta_0$ $>$ $0$ and $C$ $>$ $0$ such that if $Q_L$ satisfies ${\rm dist}(Q_L,{\mycal{S}}_*)$ $\leq$ $\delta_0$ in $B_{2r_0}(x_0)$ $\Subset$ $\Omega$ and \[ \bar E := \sup_{x \in B_{r_0}(x_0)} \sup_{0 < r < r_0}\frac{1}{r}\int_{B_r(x)} \Big[\frac{1}{2}|\nabla Q_L|^2 + \frac{1}{L}\tilde f_B(Q_L)\Big]\,dy \leq \epsilon_0, \] then \[ \sup_{B_{r_0/2}(x_0)} |\nabla Q_L| \leq \frac{C}{r_0}\bar E^{1/2}. \] \end{remark} \begin{proposition}\label{UniC1Bnd} Assume that $Q_{L_k}$ converges strongly in $H^1(\Omega, {\mycal{S}}_0)$ to a minimizer $Q_*$ $\in$ $H^1(\Omega,{\mycal{S}}_*)$ of $I_*$ for some sequence $L_k$ $\rightarrow$ $0$.
For any compact subset $K$ of $\bar\Omega\setminus {\rm Sing}(Q_*)$, there exists $\bar L$ $=$ $\bar L(a^2,b^2,c^2,\Omega,K,Q_b,Q_*)$ $>$ $0$ such that \[ \sup_K |\nabla Q_{L_k}| \leq C(a^2,b^2,c^2,\Omega,K,Q_b,Q_*) \text{ for any } L_k \leq \bar L. \] \end{proposition} \noindent{\bf Proof.\;} It suffices to consider $K$ $=$ $K_{2\eta}$ $:=$ $\{x \in \bar\Omega: {\rm dist}(x,{\rm Sing}(Q_*)) \geq 2\eta\}$ where $\eta$ is some arbitrarily small positive number. Let $\epsilon_0$, $\delta_0$ and $r_0$ be as in Lemma \ref{BochnerEst}. First, by \cite[Propositions 4 and 6]{Ma-Za} and Lemma \ref{DistComp}, we can select $\bar L$ such that \[ {\rm dist}(Q_{L_k},{\mycal{S}}_*) \leq C\sqrt{\tilde f_B(Q_{L_k})} \leq \delta_0 \text{ for any } x \in K \text{ and } L_k \leq \bar L, \] where here and below $C$ denotes a constant that may depend on $\Omega$, $a^2$, $b^2$, $c^2$, and $Q_b$ but is otherwise independent of $L_k$ and $Q_{L_k}$. By standard regularity results for harmonic maps (see e.g. \cite{su} and \cite{SU-Bdry}), for some $\epsilon_1$ $>$ $0$ to be chosen, there exists $0$ $<$ $r_0$ $<$ $\frac{1}{4}\eta$ such that \begin{equation} \frac{1}{r_0}\int_{B_{2r_0}(x) \cap \Omega}|\nabla Q_*|^2\,dx \leq \epsilon_1 \text{ for any }x \in K_\eta. \label{UniC1Bnd::Eqn01} \end{equation} Also, note that $Q_{L_k} - Q_*$ $\in$ $H^1_0(\Omega, {\mycal{S}}_0)$. Hence, since $Q_{L_k}$ is $\tilde I_{L_k}$-minimizing, \[ \int_{\Omega} \Big[|\nabla Q_{L_k}|^2 + \frac{1}{L_k}\tilde f_B(Q_{L_k})\Big]\,dx \leq \int_{\Omega} |\nabla Q_*|^2\,dx. \] On the other hand, since $Q_{L_k}$ $\rightarrow$ $Q_*$ in $H^1$, we have \begin{equation} \int_\Omega |\nabla Q_*|^2\,dx = \lim_{k \rightarrow \infty}\int_{\Omega} |\nabla Q_{L_k}|^2\,dx. \label{UniC1Bnd::Eqn02} \end{equation} It follows that \begin{equation} \lim_{k \rightarrow \infty} \frac{1}{L_k}\int_\Omega \tilde f_B(Q_{L_k})\,dx = 0.
\label{UniC1Bnd::Eqn03} \end{equation} From \eqref{UniC1Bnd::Eqn01}-\eqref{UniC1Bnd::Eqn03}, we infer that there exists some $\bar L$ $>$ $0$ such that \[ \frac{1}{r_0}\int_{B_{2r_0}(x) \cap \Omega} e_{L_k}\,dx \leq \epsilon_1 \text{ for any } x \in K \text{ and } L_k \leq \bar L, \] where \[ e_{L_k} = \frac{1}{2}|\nabla Q_{L_k}|^2 + \frac{1}{L_k}\tilde f_B(Q_{L_k}). \] Applying the monotonicity formulas in \cite[Lemmas 2 and 9]{Ma-Za}, we arrive at \[ \frac{1}{r}\int_{B_{r}(x) \cap \Omega} e_{L_k}\,dx \leq C\,\epsilon_1 \text{ for any } x \in K, r \in (0,r_0] \text{ and } L_k \leq \bar L. \] We now fix $\epsilon_1$ so that $C\epsilon_1$ $\leq$ $\epsilon_0$. The assertion follows from Lemma \ref{BochnerEst} with the appropriate choice of $\bar L$ so that the above argument goes through. $\square$ \noindent{\bf Proof of Proposition \ref{C1AlphaConv}.} Fix $K$ $\Subset$ $K'$ $\Subset$ $\bar\Omega\setminus{\rm Sing}(Q_*)$. By Proposition \ref{UniC1Bnd} and Corollary \ref{UniLapBnd}, there exist $\bar L$ $>$ $0$ and $C$ $>$ $0$ depending only on $a^2$, $b^2$, $c^2$, $\Omega$, $K$, $K'$, $Q_b$ and $Q_*$ such that \[ \sup_{K'}\big[|\nabla Q_{L_k}| + |\Delta Q_{L_k}|\big] \leq C \text{ for } L_k \leq \bar L. \] Also, by \cite[Proposition 3]{Ma-Za}, \[ \sup_\Omega |Q_{L_k}| \leq \sqrt{\frac{2}{3}}s_+. \] The conclusion follows from standard $W^{2,p}$-estimates for Poisson equations and Morrey's inequality (see e.g. \cite{GT}). $\square$ \section{$C^{j}$-convergence}\label{SEC-CjConv} In the previous section, we showed that the $H^1$-convergence of a sequence of minimizers $Q_{L_k}$ $\in$ $H^1(\Omega, {\mycal{S}}_0)$ of $I_{L_k}$ to a minimizer $Q_*$ $\in$ $H^1(\Omega,{\mycal{S}}_*)$ of $I_*$ improves to a $C^{1,\alpha}$-convergence on compact subsets of $\bar\Omega\setminus {\rm Sing}(Q_*)$. In this section, we study $C^j$-convergence.
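Before proceeding, we record a quick consistency check (an illustration only, not part of the argument): the function $g(Q)$ from the proof of Proposition \ref{MinPolyConv}, which controls the minimal polynomial, does satisfy the hypotheses of Lemma \ref{DistComp}. Identifying $s_+^2\,g(Q)$ with $h(Q)$ of that lemma gives the rational coefficients below (the identification, and hence the coefficients, are our own bookkeeping), and the three conditions can be verified in exact arithmetic with only the standard library.

```python
from fractions import Fraction as F

# Coefficients identifying s_+^2 * g(Q) with h(Q) from Lemma DistComp, written as
# alpha (tr Q^2)^3 + beta s_+ tr(Q^2) tr(Q^3) + gamma s_+^2 (tr Q^2)^2
#   + mu s_+^3 tr(Q^3) + nu s_+^4 tr(Q^2) + delta s_+^6.
alpha, beta = F(0), F(0)
gamma = F(1, 2)
mu = F(-2, 3)
nu = F(-1, 3)
delta = F(4, 27)

# the two linear conditions of Lemma DistComp
cond1 = F(8, 27) * alpha + F(4, 27) * beta + F(4, 9) * gamma + F(2, 9) * mu + F(2, 3) * nu + delta
cond2 = F(8, 3) * alpha + F(10, 9) * beta + F(8, 3) * gamma + mu + 2 * nu
assert cond1 == 0 and cond2 == 0

# the strict inequality guaranteeing a nondegenerate Hessian: here 16 > 4
assert (16 * alpha + 4 * beta + 8 * gamma) ** 2 > (16 * alpha + 6 * beta + 8 * gamma + 3 * mu) ** 2
```

Since scaling by the positive factor $s_+^2$ does not affect two-sided comparability, this is consistent with the comparison \eqref{MinPolyConv::WeakBound} obtained from Lemma \ref{DistComp}.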
As in Ginzburg-Landau theory, we do not expect $C^2$-convergence up to the boundary: $\Delta Q_L$ vanishes on $\partial \Omega$ while $\Delta Q_*$ need not. \begin{proposition}\label{CjConv} Assume that $Q_{L_k}$ converges strongly in $H^1(\Omega, {\mycal{S}}_0)$ to a minimizer $Q_*$ $\in$ $H^1(\Omega,{\mycal{S}}_*)$ of $I_*$ for some sequence $L_k$ $\rightarrow$ $0$. Then $Q_{L_k}$ converges to $Q_*$ in $C^j$-norm on compact subsets of $\Omega\setminus{\rm Sing}(Q_*)$. \end{proposition} The approach we take closely follows \cite{BBH}. A key step is to show the convergence of the ``normal component'', i.e. the ``minimal polynomial'' $Q_L^2 - \frac{1}{3}s_+\,Q_L - \frac{2}{9}s_+^2\,{\rm Id}$. It is clear that Proposition \ref{CjConv} follows from Propositions \ref{MinPolyConv}, \ref{UniC1Bnd} and the following lemma. \begin{lemma}\label{PrepUniHighDerEst} Assume for some $B_r(x)$ $\Subset$ $\Omega$, $L$ $\in$ $(0,1)$ and $C_1$ $>$ $0$ that \[ \sup_{B_r(x)} \Big[|\nabla Q_L| + |X_L|\Big] \leq C_1. \] Then, for $0$ $<$ $s$ $<$ $r$ and $j$ $\geq$ $0$, \begin{align} \sup_{B_s(x)}\big|\nabla^j X_L\big| &\leq C(a^2,b^2,c^2,C_1,j)[1 + (r-s)^{-j}],\label{PUHDE::Est01}\\ \sup_{B_s(x)}|\nabla^{j+1} Q_{L}| &\leq C(a^2,b^2,c^2,C_1,j)[1 + (r-s)^{-j}].\label{PUHDE::Est02} \end{align} \end{lemma} In the proof of the above lemma, obtaining \eqref{PUHDE::Est01} is central. As we have said before, $X_L$ ``measures'' how fast $Q_L$ approaches the limit manifold. Hence, intuitively speaking, $X_L$ should ``behave'' similarly to a vector in the normal space $(T_{Q_*} {\mycal{S}}_*)^\perp$. Now, consider the linear map $\Lambda$ on $(T_{Q_*} {\mycal{S}}_*)^\perp$ given by left multiplication by $Q_*$. This map has two eigenvalues, $-\frac{1}{3}s_+$ and $\frac{2}{3}s_+$, of which the former is double and the latter is simple. (See the proof of Lemma \ref{MysIds}.) 
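The identity behind these eigenvalues can be checked directly (a short verification, not part of the original argument; we use the standard uniaxial representation $Q_*$ $=$ $s_+\big(n\otimes n - \frac{1}{3}{\rm Id}\big)$ with $|n|$ $=$ $1$). Since $(n\otimes n)^2$ $=$ $n\otimes n$, \[ Q_*^2 = s_+^2\Big(\frac{1}{3}\,n\otimes n + \frac{1}{9}\,{\rm Id}\Big) = \frac{1}{3}s_+\,Q_* + \frac{2}{9}s_+^2\,{\rm Id}, \] that is, \[ \Big(Q_* + \frac{1}{3}s_+\,{\rm Id}\Big)\Big(Q_* - \frac{2}{3}s_+\,{\rm Id}\Big) = 0. \] Thus any eigenvalue of left multiplication by $Q_*$ is either $-\frac{1}{3}s_+$ or $\frac{2}{3}s_+$, and the same identity explains why $X_L$ $=$ $\frac{1}{L}\big(Q_L^2 - \frac{1}{3}s_+\,Q_L - \frac{2}{9}s_+^2\,{\rm Id}\big)$ precisely measures the failure of $Q_L$ to take values in ${\mycal{S}}_*$. 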
The eigenspace decomposition of $(T_{Q_*} {\mycal{S}}_*)^\perp$ with respect to $\Lambda$ is given by \[ (T_{Q_*} {\mycal{S}}_*)^\perp = \Big[\big(Q_* + \frac{1}{3}s_+\,{\rm Id}\big)(T_{Q_*} {\mycal{S}}_*)^\perp\Big] \oplus \Big[\big(Q_* - \frac{2}{3}s_+\,{\rm Id}\big)(T_{Q_*} {\mycal{S}}_*)^\perp\Big]. \] We will momentarily see that this decomposition is very useful in the analysis of $X_L$. (See Steps 4 and 5 of the following proof.) \noindent{\bf Proof.\;} As usual, we drop the subscript $L$ in $Q_{L}$ and $X_L$. Also, we will abbreviate $B_r(x)$ to $B_r$. Throughout the proof we will frequently use the estimate $|Q|$ $\leq$ $\sqrt{\frac{2}{3}}s_+$ without explicitly mentioning it (see \cite[Proposition 3]{Ma-Za}). For $j \geq 0$ and $s > 0$, define \begin{align*} m(s,j) &= \sup_{B_{s}}\Big[\sum_{l=0}^j\big|\nabla^l X\big| + \sum_{l=0}^{j+1}\big|\nabla^l Q\big|\Big],\\ M(s,j) &= m(s,j) + \sum_{l_1 + \ldots + l_p = j+1,\, l_q > 0,\,q > 1}m(s,l_1)\ldots m(s,l_p). \end{align*} Fix $\epsilon$ $\in$ $(0,1)$ for the moment. In the sequel, $C$ denotes a constant that depends only on $a^2$, $b^2$, $c^2$, $C_1$, $j$ and $\epsilon$. We will consecutively show the following six estimates for $0$ $<$ $r_1$ $<$ $r_2$ $<$ $r$. 
\begin{align} \|\nabla^{j+2} Q\|_{L^\infty(B_{r_1})} &\leq \epsilon\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + C\Big[1 + \frac{1}{r_2 - r_1}\Big]M(r_2,j), \label{PUHDE::Step1-2}\\ \|\nabla^{j+1} X\|_{L^\infty(B_{r_1})} &\leq C\Big[\frac{1}{\sqrt{L}} + \frac{1}{r_2 - r_1}\Big]M(r_2,j), \label{PUHDE::Step2}\\ \|\nabla^{j+1}{\rm tr}(X)\|_{L^\infty(B_{r_1})} &\leq \epsilon\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + C\Big[1 + \frac{1}{r_2 - r_1}\Big]M(r_2,j), \label{PUHDE::Step3}\\ \big\|\nabla^{j+1} Y^{(-2)}\big\|_{L^\infty(B_{r_1})} &\leq \epsilon\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + C\Big[1 + \frac{1}{r_2 - r_1}\Big]M(r_2,j), \label{PUHDE::Step4}\\ \|\nabla^{j+1}X\|_{L^\infty(B_{r_1})} &\leq \epsilon\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + C\Big[1 + \frac{1}{r_2 - r_1}\Big]M(r_2,j), \label{PUHDE::Step5}\\ m(r_1,j+1) &\leq C\Big[1 + \frac{1}{r_2 - r_1}\Big]M(r_2,j), \label{PUHDE::Step6} \end{align} where in \eqref{PUHDE::Step4}, $Y^{(-2)}$ $=$ $\big(Q - \frac{2}{3}s_+\,{\rm Id}\big)X$. It is readily seen that the conclusion follows from \eqref{PUHDE::Step6}. Among the above, estimate \eqref{PUHDE::Step4} is crucial. \noindent\underline{Step 1:} Proof of \eqref{PUHDE::Step1-2}. By \eqref{L-DG::ELEqn}, we have \begin{align} \Delta Q &= \frac{1}{L}\Big[-a^2\,Q - b^2[Q^2 - \frac{1}{3}{\rm tr}(Q^2)\,{\rm Id}] + c^2\,{\rm tr}(Q^2)\,Q\Big]\nonumber\\ &= c^2\,{\rm tr}(X)\,Q + \frac{1}{3}b^2\,{\rm tr}(X)\,{\rm Id} - b^2\,X. \label{PUHDE::Eqn01} \end{align} Differentiating \eqref{PUHDE::Eqn01} up to $j+1$ order, we get \[ \|\Delta (\nabla^{j+1} Q)\|_{L^\infty(B_{r_2})} \leq C\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + M(r_2,j). \] Therefore, by a standard interpolation inequality (see e.g. 
\cite[Lemma A.1]{BBH}), \begin{align*} \|\nabla^{j+2} Q\|_{L^\infty(B_{r_1})}^2 &\leq C\|\nabla^{j+1}Q\|_{L^\infty(B_{r_2})}\big[\|\Delta (\nabla^{j+1} Q)\|_{L^\infty(B_{r_2})} + \frac{1}{(r_2 - r_1)^2}\|\nabla^{j+1}Q\|_{L^\infty(B_{r_2})}\big]\\ &\leq C\,M(r_2,j)\big[\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + [1 + (r_2 - r_1)^{-2}]M(r_2,j)\big]. \end{align*} This implies \eqref{PUHDE::Step1-2}. \noindent\underline{Step 2:} Proof of \eqref{PUHDE::Step2}. Using \eqref{PUHDE::Eqn01}, we calculate \begin{multline} L\,\Delta X = 2Q\Big[c^2\,{\rm tr}(X)\,Q + \frac{1}{3}b^2\,{\rm tr}(X)\,{\rm Id} - b^2\,X\Big]\\ - \frac{1}{3}s_+\Big[c^2\,{\rm tr}(X)\,Q + \frac{1}{3}b^2\,{\rm tr}(X)\,{\rm Id} - b^2\,X\Big] + 2\sum_{\alpha = 1}^3(\nabla_\alpha Q)^2. \label{PUHDE::Eqn02} \end{multline} It thus follows from a standard interpolation inequality (see e.g. \cite[Lemma A.1]{BBH}) that \begin{align*} \|\nabla^{j+1} X\|_{L^\infty(B_{r_1})}^2 &\leq C\|\nabla^j X\|_{L^\infty(B_{r_2})}\big[\|\Delta \nabla^j X\|_{L^\infty(B_{r_2})} + \frac{1}{(r_2 - r_1)^2}\|\nabla^j X\|_{L^\infty(B_{r_2})} \big]\\ &\leq C\big[L^{-1} + (r_2 - r_1)^{-2}\big]M(r_2,j)^2, \end{align*} which proves \eqref{PUHDE::Step2}. \noindent\underline{Step 3:} Proof of \eqref{PUHDE::Step3}. Taking trace of \eqref{PUHDE::Eqn02}, we get \begin{equation} L\,\Delta {\rm tr}(X) = 2c^2\,{\rm tr}(X)\,{\rm tr}(Q^2) - 2b^2\,{\rm tr}(QX) + 2|\nabla Q|^2. \label{PUHDE::Eqn03} \end{equation} Using \[ {\rm tr}(Q^3) = \frac{3}{4s_+}L^2({\rm tr}(X))^2 + \frac{1}{2}s_+\,{\rm tr}(Q^2) - \frac{1}{9}s_+^3 - \frac{3}{2s_+}L^2\,{\rm tr}(X^2), \] we write \begin{align} {\rm tr}(QX) &= \frac{1}{L}\Big[{\rm tr}(Q^3) - \frac{1}{3}s_+\,{\rm tr}(Q^2)\big]\nonumber\\ &= \frac{1}{L}\Big[\frac{3}{4s_+}L^2({\rm tr}(X))^2 + \frac{1}{6}s_+\,{\rm tr}(Q^2) - \frac{1}{9}s_+^3 - \frac{3}{2s_+}L^2\,{\rm tr}(X^2)\Big]\nonumber\\ &= \frac{3}{4s_+}L({\rm tr}(X))^2 + \frac{1}{6}s_+\,{\rm tr}(X) - \frac{3}{2s_+}L\,{\rm tr}(X^2). 
\label{PUHDE::Eqn04} \end{align} We thus rewrite \eqref{PUHDE::Eqn03} as \begin{align} L\,\Delta {\rm tr}(X) &= (\frac{4}{3}c^2\,s_+^2 - \frac{1}{3}b^2s_+){\rm tr}(X) + 2|\nabla Q|^2\nonumber\\ &\qquad\qquad + L\big[2c^2\,({\rm tr}(X))^2 - \frac{3}{2s_+}\,b^2\,({\rm tr}(X))^2 + \frac{3}{s_+}\,b^2\,{\rm tr}(X^2)\big]\nonumber\\ &= (2a^2 + \frac{1}{3}b^2s_+){\rm tr}(X) + 2|\nabla Q|^2\nonumber\\ &\qquad\qquad + L\big[2c^2\,({\rm tr}(X))^2 - \frac{3}{2s_+}\,b^2\,({\rm tr}(X))^2 + \frac{3}{s_+}\,b^2\,{\rm tr}(X^2)\big]. \label{PUHDE::Eqn04Bis} \end{align} We arrive at \begin{equation} -L\,\Delta {\rm tr}(X) + (2a^2 + \frac{1}{3}b^2s_+){\rm tr}(X) = A + L B, \label{PUHDE::Eqn05} \end{equation} where, by \eqref{PUHDE::Step1-2} and \eqref{PUHDE::Step2} \begin{multline*} \|\nabla^{j+1}A\|_{L^\infty(B_{(r_1 + r_2)/2})} + \sqrt{L}\|\nabla^{j+1}B\|_{L^\infty(B_{(r_1 + r_2)/2})}\\ \leq \epsilon\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + C[1 + (r_2 - r_1)^{-1}]M(r_2,j). \end{multline*} Hence, by differentiating \eqref{PUHDE::Eqn05} to $(j+1)$ order and applying Lemma \ref{BBHMaxPrin}, we infer that \begin{align*} \|\nabla^{j+1}{\rm tr}(X)\|_{L^\infty(B_{r_1})} &\leq C\,\|\nabla^{j+1}(A + LB)\|_{L^\infty(B_{(r_1 + r_2)/2})}\\ &\qquad\qquad + C\min\Big\{1,\frac{L^{1/2}}{r_2 - r_1}\Big\}\,\|\nabla^{j+1}{\rm tr}(X)\|_{L^\infty(B_{(r_1 + r_2)/2})}\\ &\leq \epsilon\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + C[1 + (r_2 - r_1)^{-1}]M(r_2,j)\\ &\qquad\qquad + C\,\min\Big\{1,\frac{L^{1/2}}{r_2 - r_1}\Big\}\,[L^{-1/2} + (r_2 - r_1)^{-1}]\,M(r_2,j)\\ &\leq \epsilon\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + C[1 + (r_2 - r_1)^{-1}]M(r_2,j). \end{align*} Estimate \eqref{PUHDE::Step3} is proved. \noindent\underline{Step 4:} Proof of \eqref{PUHDE::Step4}. Set \[ Y^{(\alpha)} = \Big(Q\, + \frac{\alpha}{3}\,s_+\,{\rm Id}\Big)\,X \] where $\alpha$ is to be determined. 
We derive from \eqref{PUHDE::Eqn01} that \begin{align*} L\,\Delta Y^{(\alpha)} &= \Delta\big[Q^3 - \frac{1}{3}s_+(1 - \alpha)Q^2 - \frac{1}{9}s_+^2(2 + \alpha)Q\big]\nonumber\\ &= -3b^2\,Q^2\,X + \frac{2}{3}b^2\,s_+(1 - \alpha)\,Q\,X\nonumber\\ &\qquad\qquad+ \frac{1}{9}b^2\,s_+^2(2 + \alpha)\,X + R(Q, \nabla Q, {\rm tr}(X)), \end{align*} where $R$ is an explicit polynomial. Noting that \begin{equation} Q^2\,X = L\,X^2 + \frac{1}{3}s_+\,Q\,X + \frac{2}{9}\,s_+^2\,X, \label{PUHDE::Eqn06Bis} \end{equation} we get \begin{align*} L\,\Delta Y^{(\alpha)} &= -\frac{1}{3}b^2\,s_+(1 + 2\alpha)\,Q\,X\nonumber\\ &\qquad\qquad - \frac{1}{9}b^2\,s_+^2(4 - \alpha)\,X - 3b^2\,L\,X^2 + R(Q, \nabla Q, {\rm tr}(X)). \end{align*} Choosing $\alpha$ $=$ $-2$, we arrive at \begin{equation} -L\,\Delta Y^{(-2)}_{ij} + b^2\,s_+\,Y^{(-2)}_{ij} = \big[3b^2\,L\,X^2 - R(Q, \nabla Q, {\rm tr}(X))\big]_{ij}. \label{PUHDE::Eqn08} \end{equation} As before, by differentiating \eqref{PUHDE::Eqn08} to $(j+1)$-st order and applying Lemma \ref{BBHMaxPrin} together with \eqref{PUHDE::Step1-2}-\eqref{PUHDE::Step3}, we get \begin{align*} \|\nabla^{j+1} Y^{(-2)}\|_{L^\infty(B_{r_1})} &\leq \epsilon\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + C[1 + (r_2 - r_1)^{-1}]M(r_2,j). \end{align*} Recalling the definition of $Y^{(-2)}$, we arrive at \eqref{PUHDE::Step4}. \noindent\underline{Step 5:} Proof of \eqref{PUHDE::Step5}. We have \[ {\rm tr}(Y^{(-2)}) = \frac{1}{L}\big[{\rm tr}(Q^3) - s_+{\rm tr}(Q^2) + \frac{4}{9}s_+^3\big] = \frac{1}{L}\big[{\rm tr}(Q^3) - \frac{2}{9}s_+^3\big] - s_+\,{\rm tr}(X). \] Hence, by \eqref{PUHDE::Step3} and \eqref{PUHDE::Step4}, \begin{equation} \Big\|\nabla^{j+1}\big(\frac{1}{L}\big[{\rm tr}(Q^3) - \frac{2}{9}s_+^3\big]\big)\Big\|_{L^\infty(B_{r_1})} \leq \epsilon\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + C[1 + (r_2 - r_1)^{-1}]M(r_2,j). 
\label{PUHDE::Eqn09} \end{equation} On the other hand, as $Q$ is a traceless $3 \times 3$ matrix, the Cayley-Hamilton theorem gives \[ Q^3 - \frac{1}{2}\,{\rm tr}(Q^2)\,Q - \frac{1}{3}{\rm tr}(Q^3)\,{\rm Id} = 0. \] Therefore \begin{align*} Y^{(1)} &= (Q + \frac{1}{3}s_+\,{\rm Id})X = \frac{1}{L}(Q^3 - \frac{1}{3}s_+^2\,Q - \frac{2}{27}s_+^3\,{\rm Id})\\ &= \frac{1}{2}{\rm tr}(X)Q + \frac{1}{3L}\big[{\rm tr}(Q^3) - \frac{2}{9}s_+^3\big]\,{\rm Id}. \end{align*} It thus follows from \eqref{PUHDE::Step3} and \eqref{PUHDE::Eqn09} that \[ \|\nabla^{j+1}Y^{(1)}\|_{L^\infty(B_{r_1})} \leq \epsilon\,\|\nabla^{j+1} X\|_{L^\infty(B_{r_2})} + C[1 + (r_2 - r_1)^{-1}]M(r_2,j). \] As $s_+X$ $=$ $Y^{(1)} - Y^{(-2)}$, we get \eqref{PUHDE::Step5}. \noindent\underline{Step 6:} Proof of \eqref{PUHDE::Step6}. Let \[ \Phi(s) = \|\nabla^{j+1} X\|_{L^\infty(B_s)}, \qquad 0 \leq s \leq r. \] Then $\Phi$ is monotonically non-decreasing and, by \eqref{PUHDE::Step5} with $\epsilon$ $=$ $\frac{1}{2}$, \begin{equation} \Phi(r_1) \leq \frac{1}{2}\,\Phi(r_2) + C[1 + (r_2 - r_1)^{-1}]M(r_2,j), \qquad 0 \leq r_1 < r_2 \leq r. \label{PUHDE::Eqn10} \end{equation} The proof proceeds by a standard iteration argument (cf. \cite[Lemma 6.1]{Giu}). Define a sequence $\rho_i$ by \begin{align*} \rho_0 &= r_1,\\ \rho_{i+1} - \rho_i &= \frac{2^i}{3^{i+1}}(r_2 - r_1). \end{align*} By a simple induction, we get from \eqref{PUHDE::Eqn10} that \[ \Phi(r_1) \leq \frac{1}{2^i}\Phi(\rho_i) + C[1 + (r_2 - r_1)^{-1}]M(r_2,j)\sum_{l=0}^{i-1}\frac{3^l}{4^l}. \] Sending $i$ $\rightarrow$ $\infty$, we get \[ \Phi(r_1) \leq C[1 + (r_2 - r_1)^{-1}]M(r_2,j). \] Estimate \eqref{PUHDE::Step6} follows immediately in view of \eqref{PUHDE::Step1-2}. The proof is complete. $\square$ \begin{remark} The proof given above can be adapted to give a different (though much longer) proof of Proposition \ref{MinPolyConv}. \end{remark} The following is an easy consequence of Lemma \ref{PrepUniHighDerEst}. 
\begin{corollary}\label{PUHDE::Cor} Assume for some $x$ $\in$ $\partial \Omega$, $r$ $>$ $0$, $L$ $\in$ $(0,1)$ and $C_1$ $>$ $0$ that \[ \sup_{B_r(x) \cap \Omega} \Big[|\nabla Q_L| + |X_L|\Big] \leq C_1. \] Then, for $y$ $\in$ $B_{r/2}(x) \cap \Omega$, there holds \begin{align} |\nabla^j X_L|(y) &\leq \frac{C(a^2,b^2,c^2,C_1,j)}{{\rm dist}(y,\partial(B_r(x) \cap\Omega))^{j}},\label{PUHDE::Cor::Est01}\\ |\nabla^{j+1} Q_{L}|(y) &\leq \frac{C(a^2,b^2,c^2,C_1,j)}{{\rm dist}(y,\partial(B_r(x) \cap\Omega))^{j}}.\label{PUHDE::Cor::Est02} \end{align} \end{corollary} So far, we have shown the first part of Theorem \ref{MainThm1}. It remains to write $\Delta Q_L$ in the form stated in the theorem. By \eqref{PUHDE::Eqn01}, this can be done if we know what the limit of $X_L$ is. By a careful inspection in the proof of Lemma \ref{PrepUniHighDerEst} (namely \eqref{PUHDE::Eqn02} and \eqref{PUHDE::Eqn04Bis}), one expects that \begin{align*} {\rm tr}(X_L) \text{ ``converges to'' } & - \frac{6}{6a^2 + b^2s_+}|\nabla Q_*|^2 \end{align*} and \begin{align*} X_L \text{ ``converges to'' } & \frac{1}{b^2\,s_+^2}\Big\{-\frac{6s_+^2}{6a^2 + b^2s_+}|\nabla Q_*|^2\big[c^2\,Q_* + \frac{1}{3}\,b^2\,{\rm Id}\big]\nonumber\\ &\qquad\qquad + 4\big(Q_* - \frac{1}{6}s_+\,{\rm Id}\big)\sum_{\alpha = 1}^3(\nabla_\alpha Q_*)^2\Big\}. \end{align*} A more natural and systematic way of getting this information is to assume that there is a formal asymptotic expansion $Q_L$ $=$ $Q_* + L\,Q_\bullet + O(L^2)$ and expand \eqref{L-DG::ELEqn} accordingly. Compare \eqref{FAE::Id01}. 
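For illustration, the first step of this formal expansion can be carried out explicitly (this is only a heuristic computation; it presumes the existence of the expansion $Q_L$ $=$ $Q_* + L\,Q_\bullet + O(L^2)$ and uses the identity $Q_*^2 - \frac{1}{3}s_+\,Q_* - \frac{2}{9}s_+^2\,{\rm Id}$ $=$ $0$, valid since $Q_*$ takes values in ${\mycal{S}}_*$): \[ X_L = \frac{1}{L}\Big(Q_L^2 - \frac{1}{3}s_+\,Q_L - \frac{2}{9}s_+^2\,{\rm Id}\Big) = Q_*\,Q_\bullet + Q_\bullet\,Q_* - \frac{1}{3}s_+\,Q_\bullet + O(L), \] so $X_L$ has a formal limit determined by the first order term $Q_\bullet$. Inserting this limit into \eqref{PUHDE::Eqn01} and comparing with the Euler-Lagrange equation satisfied by $Q_*$ then recovers the expressions for the limits of ${\rm tr}(X_L)$ and $X_L$ displayed above. 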
Motivated by the above discussion, we consider \begin{align} Y_L &= {\rm tr}(X_L) + \frac{6}{6a^2 + b^2s_+}|\nabla Q_L|^2,\label{YDef}\\ Z_L &= X_L - \frac{1}{b^2\,s_+^2}\Big\{-\frac{6s_+^2}{6a^2 + b^2s_+}|\nabla Q_L|^2\big[c^2\,Q_L + \frac{1}{3}\,b^2\,{\rm Id}\big]\nonumber\\ &\qquad\qquad + 4\big(Q_L - \frac{1}{6}s_+\,{\rm Id}\big)\sum_{\alpha = 1}^3(\nabla_\alpha Q_L)^2\Big\}.\label{ZDef} \end{align} The following lemma establishes the rate of convergence of $Y_L$ and $Z_L$. \begin{lemma}\label{EqnConv-Ez} Assume for some $x$ $\in$ $\bar\Omega$, $r$ $>$ $0$, $L$ $\in$ $(0,1)$ and $C_1$ $>$ $0$ that \[ \sup_{B_r(x) \cap \Omega} \big[|\nabla Q_L| + |X_L|\big] \leq C_1. \] Then, for $j$ $\geq$ $0$ and $y$ $\in$ $B_{r/2}(x) \cap \Omega$, there holds \begin{align} |\nabla^j Y_L|(y) + |\nabla^j Z_L|(y) \leq \frac{C(a^2,b^2,c^2,C_1,j)}{{\rm dist}(y,\partial(B_r(x) \cap\Omega))^{j+2}}\,L. \label{EC-Ez::Est01} \end{align} \end{lemma} \noindent{\bf Proof.\;} We will drop the subscript $L$ in $Q_{L}$, $X_L$, $Y_L$ and $Z_L$. By Corollary \ref{PUHDE::Cor} and \eqref{PUHDE::Eqn04Bis}, we have for $y$ $\in$ $B_{r/2} \cap \Omega$, \begin{equation} |\nabla^j Y|(y) \leq \frac{C}{{\rm dist}(y,\partial(B_r(x) \cap\Omega))^{j+2}}\,L. \label{EC-Ez::Eqn01} \end{equation} To continue, we introduce the following convention: $R(A_1, \ldots, A_t; A_{t+1}, \ldots, A_s)$ will be used to denote various explicitly computable polynomials in the $A_l$'s which are (jointly) linear in $(A_{t+1}, \ldots, A_s)$. By \eqref{PUHDE::Eqn02}, we have \begin{align} L\,\Delta X &= -b^2\big(2Q - \frac{1}{3}s_+\,{\rm Id}\big)X + 2(\nabla Q)^2\nonumber\\ &\qquad\qquad + {\rm tr}(X)\big[\frac{1}{3}(2b^2 + c^2\,s_+)Q + \frac{1}{9}(-b^2\,s_+ + 4c^2\,s_+^2)\,{\rm Id}\big] + L\,R(X), \label{EC-Ez::Eqn02} \end{align} where \[ (\nabla Q)^2 = \sum_{\alpha = 1}^3 (\nabla_\alpha Q)^2. 
\] Multiplying \eqref{EC-Ez::Eqn02} on the left by $Q - \frac{1}{6}s_+\,{\rm Id}$, we get \begin{align} L\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)\Delta X &= -\frac{1}{2}b^2\,s_+^2\,X + 2\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)(\nabla Q)^2\nonumber\\ &\qquad\qquad+ {\rm tr}(X)\big[\frac{1}{2}c^2\,s_+^2\,Q + \frac{1}{6}\,b^2\,s_+^2\,{\rm Id}\big] + L\,R(Q,X)\nonumber\\ &= -\frac{1}{2}b^2\,s_+^2\,Z + L\,R(Q,X;\frac{1}{L}Y). \label{EC-Ez::Eqn03} \end{align} It is readily seen that the assertion follows from Corollary \ref{PUHDE::Cor}, \eqref{EC-Ez::Eqn01} and \eqref{EC-Ez::Eqn03}. $\square$ \noindent{\bf Proof of Theorem \ref{MainThm1}.} The strong convergence in $H^1(\Omega,{\mycal{S}}_0)$ follows from \cite[Lemma 3]{Ma-Za}. (The statement therein requires an additional hypothesis that $\Omega$ be simply connected. However, this condition was used there only to obtain a lifting of a map in $H^1(\Omega,{\mycal{S}}_*)$ to a map in $H^1(\Omega,{\mathbb S}^2)$, which we do not need here.) The convergence in $C^{1,\alpha}$ and $C^j$ follows directly from Propositions \ref{C1AlphaConv} and \ref{CjConv}. It remains to show the last assertion. Using \eqref{PUHDE::Eqn01}, \eqref{YDef} and \eqref{ZDef}, we compute \begin{align} \Delta Q &= c^2\,{\rm tr}(X)\,Q + \frac{1}{3}b^2\,{\rm tr}(X)\,{\rm Id} - b^2\,X\nonumber\\ &= - \frac{4}{s_+^2}\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)\sum_{\alpha = 1}^3(\nabla_\alpha Q)^2 + c^2\,Y\,Q + \frac{1}{3}b^2\,Y\,{\rm Id} - b^2\,Z. \label{ExplicitApproxEqn} \end{align} The conclusion then follows from Lemma \ref{EqnConv-Ez}. $\square$ \section{Higher derivative estimates near the boundary}\label{SEC-HDEst} By Proposition \ref{UniC1Bnd} and Corollary \ref{UniLapBnd}, $\Delta Q_L$ is uniformly bounded up to the boundary. This suggests that estimate \eqref{PUHDE::Cor::Est02} for the derivatives of $Q_L$ might not be optimal near $\partial\Omega$. 
In this section, we will first study higher derivative estimates near $\partial\Omega$ and then use them to slightly improve estimate \eqref{EC-Ez::Est01} in Lemma \ref{EqnConv-Ez}. This will be useful when we study the existence of the ``first order term'' in the asymptotic expansion of $Q_L$. \begin{proposition}\label{BdryCjEst} Assume for some $x$ $\in$ $\partial\Omega$, $r$ $>$ $0$, $L$ $\in$ $(0,1)$ and $C_1$ $>$ $0$ that $\Omega \cap B_r(x)$ is connected and \[ \sup_{B_r(x) \cap \Omega} \Big[|\nabla Q_L| + |X_L|\Big] \leq C_1. \] Then, for any $\mu$ $\in$ $(0,1]$ and $j$ $\geq$ $0$, there holds \[ |\nabla^{j+2} Q_L|(y) \leq \frac{C(a^2,b^2,c^2,C_1,\Omega,Q_b,r,\mu,j)}{{\rm dist}(y,\partial\Omega)^{j + \mu}} \text{ in } B_{r/2}(x) \cap \Omega. \] \end{proposition} We start with a simple lemma, which is a local version of \cite[Theorem 4.9]{GT}. \begin{lemma}\label{VanNearBdry} Assume that $u$ $\in$ $C^2(B_r^+) \cap C^0(\overline{B_r^+})$ satisfies $u\big|_{B_r \cap \{x_n = 0\}}$ $\equiv$ $0$ and \[ |\Delta u|(x) \leq x_n^{-\mu-1} \text{ in } B_r^+, \] for some $\mu$ $\in$ $(0,1)$. Then there exists a universal constant $C$ such that \[ |u(x)| \leq C\,r^{-1}\,\sup_{B_r^+} |u|\,x_n + \frac{1}{\mu - \mu^2}\,x_n^{1-\mu} \text{ in } B_{r/2}^+. \] \end{lemma} \noindent{\bf Proof.\;} It is enough to consider $r$ $=$ $1$. Let $w$ be the solution to \begin{align*} \Delta w &= 0 \text{ in } B_1^+,\\ w &= 0 \text{ on } B_1 \cap \{x_n = 0\},\\ w &= 1 \text{ on } \partial B_1 \cap \{x_n > 0\}. \end{align*} By standard elliptic estimates, we have \[ w(x) \leq C\,x_n \text{ in } B_{1/2}^+. \] Since the function $v$ $=$ $\sup_{B_1^+}|u|\,w + \frac{1}{\mu - \mu^2}x_n^{1-\mu}$ satisfies \begin{align*} \Delta v &= - x_n^{-\mu - 1} \text{ in } B_1^+,\\ v &\geq |u| \text{ on } \partial B_1^+, \end{align*} the assertion follows from the maximum principle applied to $v \pm u$. 
$\square$ \begin{lemma}\label{BdryC2Est} Under the assumption of Proposition \ref{BdryCjEst}, we have for any $\mu$ $\in$ $(0,1]$ that \begin{equation} |\nabla^2 Q_{L}|(y) \leq \frac{C(a^2, b^2, c^2, C_1,\Omega,Q_b,\mu)}{r^{1-\mu}\,{\rm dist}(y,\partial\Omega)^\mu} \text{ for } y \in B_{r/4}(x) \cap \Omega.\label{BC2E::Est01} \end{equation} \end{lemma} \noindent{\bf Proof.\;} As usual, we will write $Q$ for $Q_L$ and $B_r$ for $B_r(x)$. We can assume that $r$ $<$ $1$. Also, the conclusion for $\mu$ $=$ $1$ is true in view of Corollary \ref{PUHDE::Cor}. We will now fix some $\mu$ $\in$ $(0,1)$. For simplicity of presentation, we will only consider the case where $x$ $=$ $0$ and $B_r \cap \Omega$ $=$ $B_r^+$. The general case requires minor changes due to the procedure of flattening the boundary. By \eqref{PUHDE::Eqn01} and our hypothesis, $Q$ satisfies an equation of the form \[ \Delta Q = f \text{ in } B_r \cap \Omega, \] where $f$ satisfies $|f|(y)$ $\leq$ $C$ in $B_r \cap \Omega$. Moreover, by Corollary \ref{PUHDE::Cor}, \begin{equation} |\nabla f|(y) \leq \frac{C}{{\rm dist}(y,\partial\Omega)} \text{ in } B_{3r/4} \cap \Omega. \label{BC2E::Eqn00} \end{equation} Now, pick $\alpha$ $\in$ $\{1,2\}$ and let $U$ $=$ $\nabla_\alpha Q$. Let $W$ be the solution to \begin{align*} \Delta W &= 0 \text{ in } B_{3r/4}^+,\\ W &= U \text{ on } \partial B_{3r/4}^+. \end{align*} Define $W_0$ by $W_0(x_1, x_2, x_3)$ $=$ $U(x_1, x_2, 0)$. Applying Lemma \ref{SimpleEllEst} to $W - W_0$, we get \[ \sup_{B_{r/2}^+}|\nabla W| \leq C\Big[\sup_{B_{3r/4} \cap \{x_3 = 0\}} |\nabla_T U| + r\,\sup_{B_{3r/4} \cap \{x_3 = 0\}} |\nabla_T^2 U| + \frac{1}{r^{3/2}}\|\nabla(W - W_0)\|_{L^2(B_{2r/3}^+)}\Big]. 
\] On the other hand, as $W - W_0$ vanishes on $B_{3r/4}^+ \cap \{x_3 = 0\}$, a simple integration by parts using a cut-off function supported in $B_{3r/4}^+$ which is identically $1$ in $B_{2r/3}^+$ gives \[ \frac{1}{r^{3/2}}\|\nabla(W - W_0)\|_{L^2(B_{2r/3}^+)} \leq C\Big[r\,\sup_{B_{3r/4}^+} |\Delta(W - W_0)| + \frac{1}{r}\sup_{B_{3r/4}^+} |W - W_0|\Big]. \] Taking the last two inequalities into account and using the maximum principle we get \begin{equation} \sup_{B_{r/2}^+}|\nabla W| \leq C\Big[\sup_{B_{3r/4} \cap \{x_3 = 0\}} |\nabla_T U| + r\,\sup_{B_{3r/4} \cap \{x_3 = 0\}} |\nabla_T^2 U| + \frac{1}{r}\sup_{\partial B_{3r/4}^+} |U|\Big] \leq \frac{C}{r}. \label{BC2E::Eqn01} \end{equation} Let $V$ $=$ $U - W$. Then $|V(y)|$ $\leq$ $C\sup_{B_{3r/4}^+}|U|$ $\leq$ $C$ and, by \eqref{BC2E::Eqn00}, \begin{align*} |\Delta V(y)| &\leq C\,r^{\mu}\,y_3^{-\mu - 1} \text{ in } B_{3r/4}^+,\\ V &= 0 \text{ on } \partial B_{3r/4}^+. \end{align*} Hence, by Lemma \ref{VanNearBdry}, \[ |V|(y) \leq \frac{C}{r}\sup_{B_r^+}|V|\,y_3 + C\,r^{\mu}\,y_3^{1-\mu} \leq C\,r^{\mu-1}\,y_3^{1-\mu} \text{ in } B_{r/2}^+. \] It thus follows by interpolating (see e.g. \cite[Lemma A.1]{BBH}) that \begin{equation} |\nabla V(y)|^2 \leq C\sup_{B_{y_3/2}(y)}|V|\Big[\sup_{B_{y_3/2}(y)}|\Delta V| + \frac{1}{y_3^2}\sup_{B_{y_3/2}(y)}|V|\Big] \leq C\,r^{2\mu-2}\,y_3^{-2\mu} \text{ in } B_{r/4}^+. \label{BC2E::Eqn02} \end{equation} Summing up \eqref{BC2E::Eqn01} and \eqref{BC2E::Eqn02} we get \[ |\nabla U|(y) \leq C\,r^{\mu-1}\,y_3^{-\mu} \text{ in } B_{r/4}^+. \] As $U$ $=$ $\nabla_\alpha Q$ with $\alpha$ $\in$ $\{1,2\}$, we have obtained the required bound for $\nabla_{ij} Q$ with $(i,j)$ $\neq$ $(3,3)$. The remaining estimate for $\nabla_{33} Q$ follows from $|\Delta Q|$ $=$ $|f|$ $\leq$ $C$ in $B_r \cap \Omega$. We conclude the proof. $\square$ \noindent{\bf Proof of Proposition \ref{BdryCjEst}.} We will do an induction on $j$. The assertion for $j$ $=$ $0$ follows from Lemma \ref{BdryC2Est}. 
By differentiating \eqref{PUHDE::Eqn01} to $(j+2)$-nd order, we see that \[ \Delta(\nabla^{j+2} Q) = f_j \] where, by Corollary \ref{PUHDE::Cor}, \[ |f_j|(y) \leq \frac{C}{{\rm dist}(y,\partial\Omega)^{j+2}} \text{ in } B_{r/2}(x) \cap \Omega. \] Therefore, by interpolation (see e.g. \cite[Lemma A.1]{BBH}), we get the required estimate for $\nabla^{j+3} Q$. $\square$ \begin{proposition}\label{EqnConv-Part2} Assume for some $B_r(x)$ $\Subset$ $\Omega$, $L$ $\in$ $(0,1)$ and $C_1$ $>$ $0$ that \[ \sup_{B_r(x)} \big[|\nabla Q_L| + |X_L|\big] \leq C_1. \] Then, there exist constants $C$ and $C'$ depending on $(a^2,b^2,c^2,C_1,j)$ such that for $0$ $<$ $s$ $<$ $r$, there holds \begin{align} \sup_{B_s(x)}\Big(|\nabla^j Y_L| + |\nabla^j Z_L|\Big) \leq \frac{C}{(r-s)^j}\Big[(r-s)^{-1}\,L + \exp\Big(-\frac{C'}{\sqrt{L}}(r-s)\Big)\Big], \label{ECP2P::Est02} \end{align} where $Y_L$ and $Z_L$ are defined by \eqref{YDef} and \eqref{ZDef}, respectively. \end{proposition} \noindent{\bf Proof.\;} We use some ideas from the proof of Lemma \ref{PrepUniHighDerEst}. As usual, we write $Q$ $=$ $Q_L$, $X$ $=$ $X_L$, $Y$ $=$ $Y_L$ and $Z$ $=$ $Z_L$. Fix some $\mu$ $\in$ $(0,1)$ for the moment. By Corollary \ref{PUHDE::Cor} and Proposition \ref{BdryCjEst}, we have the following estimates: \begin{align} \sup_{B_s(x)}|\nabla^{j+2}Q| \leq \frac{C}{(r-s)^{j+\mu}} \text{ and } \sup_{B_s(x)}\big[|\nabla^{j}X| + |\nabla^j Y| + |\nabla^j Z|\big] \leq \frac{C}{(r-s)^{j}}. \label{ECP2P::Eqn00} \end{align} We split the proof into three steps, which bear some analogy to Steps 3-5 in the proof of Lemma \ref{PrepUniHighDerEst}. In the first step, we prove an estimate for $Y$. The remaining steps are to prove \eqref{ECP2P::Est02}. In the proof, $R(A_1, \ldots, A_t; A_{t+1}, \ldots, A_s)$ will be used to denote various explicitly computable polynomials in the $A_l$'s which are (jointly) linear in $(A_{t+1}, \ldots, A_s)$. 
\noindent\underline{Step 1:} We show that \begin{equation} \sup_{B_s(x)}|\nabla^j Y| \leq \frac{C}{(r-s)^j}\Big[\frac{L}{r-s} + \exp\Big(-\frac{C'}{\sqrt{L}}(r-s)\Big)\Big]. \label{ECP2P1::Eqn01} \end{equation} By \eqref{PUHDE::Eqn04Bis}, we have \[ L\Delta{\rm tr}(X) = (2a^2 + \frac{1}{3}b^2\,s_+)Y + L\,R(X). \] Hence, by noting that $Y$ $=$ ${\rm tr}(X) + \frac{6}{6a^2 + b^2s_+}|\nabla Q|^2$ and using \eqref{PUHDE::Eqn01}, \begin{align*} L\Delta Y &= (2a^2 + \frac{1}{3}b^2\,s_+)Y + L\,R(Q,\nabla Q,X;|\nabla^2 Q|^2,\nabla X). \end{align*} Note that by choosing $\mu$ sufficiently small in \eqref{ECP2P::Eqn00}, we have \[ \sup_{B_{(r+s)/2}(x)}|\nabla^{j} R(Q,\nabla Q,X;|\nabla^2 Q|^2,\nabla X)| \leq \frac{C}{(r-s)^{j+1}}. \] Hence, by Lemma \ref{BBHMaxPrin} and \eqref{ECP2P::Eqn00}, \[ \sup_{B_{s}(x)}|\nabla^{j}Y| \leq \frac{C\,L}{(r-s)^{j+1}} + \frac{C}{(r-s)^j}\,\exp\Big(-\frac{C'}{\sqrt{L}}(r-s)\Big), \] which implies \eqref{ECP2P1::Eqn01} and so completes Step 1. \noindent\underline{Step 2:} We will show that \begin{equation} \sup_{B_s(x)}\Big|\nabla^j\Big(\big(Q - \frac{2}{3}s_+\,{\rm Id}\big)Z\Big)\Big| \leq \frac{C}{(r-s)^j}\Big[\frac{L}{r-s} + \exp\Big(-\frac{C'}{\sqrt{L}}(r-s)\Big)\Big]. \label{ECP2P2::Est02-1} \end{equation} Multiplying \eqref{EC-Ez::Eqn03} on the left by $Q - \frac{1}{6}s_+\,{\rm Id}$, we get \begin{equation} L\big(\frac{1}{4}s_+^2\,{\rm Id} + L\,X\big)\Delta X = -\frac{1}{2}b^2\,s_+^2\,\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)Z + L\,\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)\,R(Q,X;\frac{1}{L}Y). \label{ECP2P2::Est02-2} \end{equation} Recalling the definition of $X$ and \eqref{PUHDE::Eqn01}, we see that the above equation implies \begin{align} L\,\Delta X &=\frac{4}{s_+^2}\Big[\text{Right hand side of \eqref{ECP2P2::Est02-2}} - L\,X\,\Delta(L\,X)\Big]\nonumber\\ &= - 2b^2\,\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)Z + L\,R(Q, \nabla Q, X; \frac{1}{L}Y). 
\label{ECP2P2::Eqn03} \end{align} It then follows from \eqref{EC-Ez::Eqn03}, \eqref{ECP2P2::Eqn03} and \eqref{PUHDE::Eqn01} that \[ L\big(Q - \frac{2}{3}s_+\,{\rm Id}\big)\Delta X = b^2\,s_+\big(Q - \frac{2}{3}s_+\,{\rm Id}\big)Z + L\,R(Q, \nabla Q, X; \frac{1}{L}Y), \] and \[ L\Delta\Big[\big(Q - \frac{2}{3}s_+\,{\rm Id}\big)X\Big] = b^2\,s_+\big(Q - \frac{2}{3}s_+\,{\rm Id}\big)Z + L\,R(Q, \nabla Q, X, \nabla X; \frac{1}{L}Y). \] Recalling the definition of $Z$ and \eqref{PUHDE::Eqn01}, we then arrive at \begin{multline} L\Delta\Big[\big(Q - \frac{2}{3}s_+\,{\rm Id}\big)Z\Big] = b^2\,s_+\big(Q - \frac{2}{3}s_+\,{\rm Id}\big)Z\\ + L\,R(Q, \nabla Q, X; \nabla^2 Q, (\nabla^2 Q)^2, \nabla X, \frac{1}{L}Y). \label{ECP2P2::Eqn04} \end{multline} Applying Lemma \ref{BBHMaxPrin} and using \eqref{ECP2P::Eqn00} and \eqref{ECP2P1::Eqn01}, we arrive at \eqref{ECP2P2::Est02-1} (by choosing $\mu$ appropriately small). This completes Step 2. \noindent\underline{Step 3:} We will show that \begin{equation} \sup_{B_s(x)}\Big|\nabla^j\Big(\big(Q + \frac{1}{3}s_+\,{\rm Id}\big)Z\Big)\Big| \leq \frac{C}{(r-s)^j}\Big[\frac{L}{r-s} + \exp\Big(-\frac{C'}{\sqrt{L}}(r-s)\Big)\Big]. \label{ECP2P0::Est02-2} \end{equation} It is readily seen that \eqref{ECP2P::Est02} is a consequence of \eqref{ECP2P2::Est02-1} and \eqref{ECP2P0::Est02-2}. Since $Q$ is a traceless $3\times 3$ matrix, the Cayley-Hamilton theorem gives \begin{equation} Q^3 - \frac{1}{2}{\rm tr}(Q^2)\,Q - \frac{1}{3}{\rm tr}(Q^3)\,{\rm Id} = 0. \label{ECP2P0::Eqn01} \end{equation} Noting that $\Delta Q$ commutes with $Q$ by \eqref{L-DG::ELEqn}, we compute \begin{equation} \Delta Q^3 = 3\,Q^2\,\Delta Q + 2\sum_{\alpha=1}^3\big[\nabla_\alpha Q\,Q\,\nabla_\alpha Q + (\nabla_\alpha Q)^2\,Q + Q\,(\nabla_\alpha Q)^2\big] =: 3\,Q^2\,\Delta Q + 2V. 
\label{ECP2P0::Eqn02} \end{equation} To simplify $V$, we note that the definition of $X$ implies \begin{equation} Q\,\nabla_\alpha Q + \nabla_\alpha Q\,Q = \frac{1}{3}s_+\,\nabla_\alpha Q + L\,\nabla_\alpha X, \label{ECP2P0::Eqn03} \end{equation} and so \begin{align*} V &= \sum_{\alpha=1}^3\big[\nabla_\alpha Q\,(Q\,\nabla_\alpha Q + \nabla_\alpha Q\,Q) + Q\,(\nabla_\alpha Q)^2\big]\\ &= \big(Q +\frac{1}{3}s_+\,{\rm Id}\big)(\nabla Q)^2 + L\,R(\nabla Q;\nabla X) =: U + L\,R(\nabla Q;\nabla X). \end{align*} Substituting this into \eqref{ECP2P0::Eqn02} and using \eqref{PUHDE::Eqn01}, we get \begin{align} \Delta Q^3 &= 3\,Q^2\,\Delta Q + 2U + L\,R(\nabla Q;\nabla X)\nonumber\\ &= s_+\big(Q + \frac{2}{3}s_+\,{\rm Id}\big)\,\Delta Q + 2U + L\,R(Q,\nabla Q, X;\nabla X)\nonumber\\ &= s_+\big(\frac{1}{3}b^2 + c^2\,s_+\big){\rm tr}(X)\,Q + \frac{2}{9}s_+^2(b^2 + c^2\,s_+){\rm tr}(X)\,{\rm Id} - b^2\,s_+\,Q\,X - \frac{2}{3}b^2\,s_+^2\,X\nonumber\\ &\qquad\qquad+ 2U + L\,R(Q,\nabla Q, X;\nabla X). \label{ECP2P0::Eqn04} \end{align} Taking trace of \eqref{ECP2P0::Eqn04}, we get \begin{multline} \Delta {\rm tr}(Q^3) = \frac{2}{3}s_+^2(b^2 + c^2\,s_+){\rm tr}(X) - b^2\,s_+\,{\rm tr}(Q\,X) - \frac{2}{3}b^2\,s_+^2\,{\rm tr}(X)\\ \qquad\qquad+ 2\,{\rm tr}(U) + L\,R(Q,\nabla Q, X;\nabla X). 
\label{ECP2P0::Eqn05} \end{multline} Similarly, we have \[ \Delta Q^2 = 2\,Q\,\Delta Q + 2\sum_{\alpha = 1}^3 (\nabla_\alpha Q)^2, \] and so, in view of \eqref{ECP2P0::Eqn03} and \eqref{PUHDE::Eqn01}, \begin{align} \Delta\big[{\rm tr}(Q^2)Q\big] &= 2\,{\rm tr}(Q\,\Delta Q)\,Q + 2|\nabla Q|^2\,Q + {\rm tr}(Q^2)\,\Delta Q + 2\sum_{\alpha = 1}^3 \nabla_\alpha {\rm tr}(Q^2)\,\nabla_\alpha Q\nonumber\\ &= 2\,{\rm tr}(Q\,\Delta Q)\,Q + 2|\nabla Q|^2\,Q + \frac{2}{3}s_+^2\,\Delta Q + L\,R(Q,\nabla Q, X;\nabla X)\nonumber\\ &= 2\,\big[c^2\,{\rm tr}(X)\,{\rm tr}(Q^2) - b^2{\rm tr}(Q\,X)\big]\,Q + 2|\nabla Q|^2\,Q\nonumber\\ &\qquad\qquad + \frac{2}{3}s_+^2\big[c^2\,{\rm tr}(X)\,Q + \frac{1}{3}b^2\,{\rm tr}(X)\,{\rm Id} - b^2\,X\big] + L\,R(Q,\nabla Q, X;\nabla X)\nonumber\\ &= 2c^2\,s_+^2\,{\rm tr}(X)\,Q - 2b^2{\rm tr}(Q\,X)\,Q + \frac{2}{9}b^2\,s_+^2\,{\rm tr}(X)\,{\rm Id} - \frac{2}{3}b^2\,s_+^2\,X\nonumber\\ &\qquad\qquad + 2|\nabla Q|^2\,Q + L\,R(Q,\nabla Q, X;\nabla X). \label{ECP2P0::Eqn06} \end{align} Combining \eqref{ECP2P0::Eqn01}, \eqref{ECP2P0::Eqn04}, \eqref{ECP2P0::Eqn05} and \eqref{ECP2P0::Eqn06} together, we get \begin{align*} 0 &= \Big[s_+\big(\frac{1}{3}b^2 + c^2\,s_+\big){\rm tr}(X)\,Q + \frac{2}{9}s_+^2(b^2 + c^2\,s_+){\rm tr}(X)\,{\rm Id} - b^2\,s_+\,Q\,X - \frac{2}{3}b^2\,s_+^2\,X + 2U\Big]\\ &\qquad\qquad - \frac{1}{2}\Big[2c^2\,s_+^2\,{\rm tr}(X)\,Q - 2b^2{\rm tr}(Q\,X)\,Q + \frac{2}{9}b^2\,s_+^2\,{\rm tr}(X)\,{\rm Id} - \frac{2}{3}b^2\,s_+^2\,X + 2|\nabla Q|^2\,Q\Big]\\ &\qquad\qquad - \frac{1}{3}\Big[\frac{2}{3}s_+^2(b^2 + c^2\,s_+){\rm tr}(X) - b^2\,s_+\,{\rm tr}(Q\,X) - \frac{2}{3}b^2\,s_+^2\,{\rm tr}(X) + 2\,{\rm tr}(U)\Big]{\rm Id}\\ &\qquad\qquad + L\,R(Q,\nabla Q, X;\nabla X), \end{align*} which simplifies to \begin{align} 0 &= \big(Q + \frac{1}{3}s_+\,{\rm Id}\big)\Big[-b^2\,s_+\,X + \frac{1}{3}b^2s_+\,{\rm tr}(X)\,{\rm Id} + b^2\,{\rm tr}(Q\,X)\,{\rm Id}\Big]+ 2U\nonumber\\ &\qquad\qquad - |\nabla Q|^2\,Q - \frac{2}{3}\,{\rm tr}(U)\,{\rm Id} 
+ L\,R(Q,\nabla Q, X;\nabla X). \label{ECP2P0::Eqn07} \end{align} Multiplying \eqref{ECP2P0::Eqn07} to the left by $Q + \frac{1}{3}s_+\,{\rm Id}$ and noting that $(Q + \frac{1}{3}s_+\,{\rm Id})^2$ $=$ $s_+\big(Q + \frac{1}{3}s_+\,{\rm Id}\big) + L\,X$, we get \begin{align} 0 &= \big(Q + \frac{1}{3}s_+\,{\rm Id}\big)\Big[-b^2\,s_+\,X + \frac{1}{3}b^2s_+\,{\rm tr}(X)\,{\rm Id} + b^2\,{\rm tr}(Q\,X)\,{\rm Id} + 2\sum_{\alpha=1}^3 (\nabla_\alpha Q)^2\nonumber\\ &\qquad\qquad - \frac{2}{3}|\nabla Q|^2\,{\rm Id} - \frac{2}{3s_+}\,{\rm tr}(U)\,{\rm Id}\Big] + L\,R(Q,\nabla Q, X;\nabla X). \label{ECP2P0::Eqn07bis} \end{align} Comparing \eqref{ECP2P0::Eqn07} and \eqref{ECP2P0::Eqn07bis}, we get \[ |\nabla Q|^2\,Q + \frac{2}{3}\,{\rm tr}(U)\,{\rm Id} = \Big[\frac{2}{3}|\nabla Q|^2 + \frac{2}{3s_+}\,{\rm tr}(U)\Big]\big(Q + \frac{1}{3}s_+\,{\rm Id}\big) + L\,R(Q,\nabla Q, X;\nabla X). \] Taking the trace of the above yields \begin{equation} {\rm tr}(U) = \frac{1}{2}s_+|\nabla Q|^2 + L\,R(Q,\nabla Q, X;\nabla X). \label{ECP2P0::Eqn08} \end{equation} Substituting \eqref{ECP2P0::Eqn08} into \eqref{ECP2P0::Eqn07bis} and using \eqref{PUHDE::Eqn04} and ${\rm tr}(X)$ $=$ $Y - \frac{6}{6a^2 + b^2\,s_+}\,|\nabla Q|^2$, we then obtain \begin{align} 0 &= \big(Q + \frac{1}{3}s_+\,{\rm Id}\big)\Big[-b^2\,s_+\,X - \frac{2b^2\,s_+ + 4c^2\,s_+^2}{6a^2 + b^2\,s_+}|\nabla Q|^2\,{\rm Id} + 2\sum_{\alpha=1}^3 (\nabla_\alpha Q)^2\Big]\nonumber\\ &\qquad\qquad + L\,R(Q,\nabla Q, X;\nabla X,\frac{1}{L}Y)\nonumber\\ &= -b^2\,s_+\big(Q + \frac{1}{3}s_+\,{\rm Id}\big)Z + L\,R(Q,\nabla Q, X;\nabla X,\frac{1}{L}Y). \end{align} Estimate \eqref{ECP2P0::Est02-2} follows, whence \eqref{ECP2P::Est02}. $\square$ As a consequence of Proposition \ref{EqnConv-Part2}, we have \begin{corollary}\label{ECP2P::Cor} Assume \[ \sup_{B_r(x) \cap \Omega} \big[|\nabla Q_L| + |X_L|\big] \leq C_1 \] for some $x$ $\in$ $\bar\Omega$, $r$ $>$ $0$, $L$ $\in$ $(0,1)$ and $C_1$ $>$ $0$.
Then, for $j$ $\geq$ $0$, there exist constants $C$ and $C'$ depending on $(a^2,b^2,c^2,C_1,j)$ such that, in $B_{r/2}(x) \cap \Omega$, there holds \[ |\nabla^j Y_L|(y) + |\nabla^j Z_L|(y) \leq \frac{C}{\tilde d_{r/2}(y)^{j}}\Big[\frac{L}{\tilde d_{r/2}(y)} + \exp\Big(-\frac{C'}{\sqrt{L}}\,\tilde d_{r/2}(y)\Big)\Big], \] where $\tilde d_s(y)$ $=$ ${\rm dist}(y,\partial(B_s(x) \cap \Omega))$. \end{corollary} \begin{corollary}\label{ImprRemEst} Under the setting of Theorem \ref{MainThm1}, the remainders $R_{L_{k'}}$ satisfy, for $y$ $\in$ $K$, \[ |\nabla^j R_{L_{k'}}|(y) \leq \frac{C}{d(y)^j}\Big[\frac{L_{k'}}{d(y)} + \exp\Big(-\frac{C'}{\sqrt{L_{k'}}}\,d(y)\Big)\Big], \] where $d(y)$ $=$ ${\rm dist}(y,\partial\Omega)$. \end{corollary} \noindent{\bf Proof.\;} The assertion follows from \eqref{ExplicitApproxEqn} and Corollary \ref{ECP2P::Cor}. $\square$ \section{Rate of convergence for a class of minimizers}\label{SEC-Rate} In this section, we will show that for a class of limit maps $Q_*$, the difference $Q_L - Q_*$ is of order $L$. It is conceivable that such a statement may not hold when $Q_*$ is not an isolated minimizer of the limit functional $I_*$. Recall that the Euler-Lagrange equation for $I_*$ is \[ \Delta Q_* = -\frac{4}{s_+^2}(Q_* - \frac{1}{6}\,s_+\,{\rm Id})\sum_{\alpha=1}^3 (\nabla_\alpha Q_*)^2. \] It is thus reasonable to assume some kind of bijectivity for the ``linearized operator'' \begin{equation} {\mycal{L}}_{Q_*}\Psi := \Delta\Psi + \frac{4}{s_+^2}(Q_* - \frac{1}{6}\,s_+\,{\rm Id})\sum_{\alpha=1}^3 (\nabla_\alpha Q_*\,\nabla_\alpha \Psi + \nabla_\alpha \Psi\,\nabla_\alpha Q_*) + \frac{4}{s_+^2} \Psi\sum_{\alpha=1}^3 (\nabla_\alpha Q_*)^2. \label{LinearizedOp} \end{equation} In this paper, we will restrict our attention to the case where $Q_*$ is smooth in the entire domain $\Omega$. The general case will be considered in a forthcoming paper. We establish: \begin{proposition}\label{Rate} Let $\Omega$ be an open bounded subset of ${\mathbb R}^3$.
Assume that $Q_{L_k}$ converges strongly in $H^1(\Omega, {\mycal{S}}_0)$ to a minimizer $Q_*$ $\in$ $H^1(\Omega,{\mycal{S}}_*)$ of $I_*$ for some sequence $L_k$ $\rightarrow$ $0$. Furthermore, assume that $Q_*$ is smooth and the linearized harmonic map operator ${\mycal{L}}_{Q_*}:$ $H^1_0(\Omega,M_{3\times 3})$ $\rightarrow$ $H^{-1}(\Omega,M_{3\times 3})$ defined by \eqref{LinearizedOp} is bijective. Then there exists $\bar L > 0$ such that, for $L_k < \bar L$, \begin{equation} \|Q_{L_{k}}- Q_*\|_{H^s(\Omega)}\le C(a^2, b^2, c^2, \Omega, Q_b, \mu, s)\,L_{k} \text{ for } 0 < s < \frac{1}{2}, \label{est:Hsrate} \end{equation} and \begin{equation} \|\nabla^j(Q_{L_{k}}- Q_*)\|_{L^\infty(K)}\le C(a^2, b^2, c^2, \Omega, K, Q_b, \mu, j)L_{k} \text{ for } K \Subset \Omega, j \geq 0. \label{est:Higherrate} \end{equation} \end{proposition} We start with a few lemmas. \begin{lemma}\label{Trick} Let $\Omega$ $\subset$ ${\mathbb R}^3$ be a bounded domain with smooth boundary. Consider the elliptic operator \begin{equation} ({\mycal{L}} u)_\alpha := \Delta u_\alpha + b_\alpha^\beta \cdot \nabla u_\beta + c_\alpha^\beta\,u_\beta, \label{LES} \end{equation} where $b_\alpha^\beta$ $\in$ $W^{1,p}(\bar\Omega)$ for some $p \geq 2$, $c_\alpha^\beta$ $\in$ $L^\infty(\Omega)$. If ${\mycal{L}}:$ $H^1_0(\Omega)$ $\rightarrow$ $H^{-1}(\Omega)$ is bijective, then \begin{equation} \|u\|_{L^2(\Omega)} \leq C\,\|{\mycal{L}} u\|_{\tilde H^{-2}(\Omega)} \text{ for any } u \in H^1_0(\Omega), \label{HsEst::Est01} \end{equation} where $\tilde H^{-2}(\Omega)$ denotes the dual space of $H^2(\Omega) \cap H^1_0(\Omega)$, and $C$ is a constant that depends only on $\Omega$ and the spectral properties of ${\mycal{L}}$. \end{lemma} \noindent{\bf Proof.\;} Let ${\mycal{L}}^*:$ $H^1_0(\Omega)$ $\rightarrow$ $H^{-1}(\Omega)$ denote the adjoint of ${\mycal{L}}$, i.e. \[ ({\mycal{L}}^* v)^\alpha = \Delta v^\alpha + \div(b_\gamma^\alpha\,v^\gamma) + c^\alpha_\gamma\,v^\gamma.
\] Since ${\mycal{L}}$ is bijective, so is ${\mycal{L}}^*$. Let $v$ $\in$ $H^1_0(\Omega)$ be the unique solution to \[ {\mycal{L}}^* v = u. \] By regularity theory for elliptic operators (see e.g. \cite[Chapter 3, Theorem 10.1]{L-U}), we have $v$ $\in$ $H^2(\Omega)$ and \[ \|v\|_{H^2(\Omega)} \leq C(\|u\|_{L^2(\Omega)} + \|v\|_{L^2(\Omega)}). \] Moreover, because the kernel of ${\mycal{L}}^*:$ $H^1_0(\Omega)$ $\rightarrow$ $H^{-1}(\Omega)$ is trivial, the $\|v\|_{L^2(\Omega)}$-term on the right hand side can be dropped. Hence \[ \|v\|_{H^2(\Omega)} \leq C\,\|u\|_{L^2(\Omega)}. \] It thus follows from integrating by parts that \begin{align*} \|u\|_{L^2(\Omega)}^2 &= \int_{\Omega} u\,{\mycal{L}}^* v\,dx = \left<{\mycal{L}} u, v\right>_{(H^{-1}(\Omega),H^1_0(\Omega))}\\ &\leq \|{\mycal{L}} u\|_{\tilde H^{-2}(\Omega)}\,\|v\|_{H^2(\Omega) \cap H^1_0(\Omega)}\\ &\leq C\|{\mycal{L}} u\|_{\tilde H^{-2}(\Omega)}\,\|u\|_{L^2(\Omega)}. \end{align*} The assertion follows. $\square$ \begin{lemma}\label{NormEst} Let $\Omega\subset\mathbb{R}^3$ be a smooth bounded domain. We denote by $\tilde H^{-2}(\Omega)$ the dual space of $H^2(\Omega)\cap H^1_0(\Omega)$. If $|R^\sharp(x)| \le Ce^{-\frac{Cd(x)}{\sqrt{L}}}$ where $d(x)$ denotes the distance to the boundary $\partial \Omega$, then \begin{equation} \|R^\sharp \|_{\tilde H^{-2}(\Omega)}\le CL \label{Rintilde} \end{equation} with the constant $C$ independent of $L$. \end{lemma} \noindent{\bf Proof.\;} \underline{Step 1:} We consider first the case when $\Omega=[0,1]^3$ and ``$d(x)=x_3$''. We show that if $|R^\sharp (x)|\le Ce^{-\frac{Cx_3}{\sqrt{L}}}$ then (\ref{Rintilde}) holds. We take a test function $\varphi\in H^2(\Omega) \cap H^1_0(\Omega)$. 
We compute \begin{align} |\langle R^\sharp, \varphi\rangle| &\le \underbrace{\Big|\int_{[0,1]^3} \Big(\int_{x_3}^1\big(\int_z^1 R^\sharp(x',w)\,dw\big)\,dz\Big)\partial^2_{x_3}\varphi(x',x_3)\,dx_3\,dx'\Big|}_{\mathcal{I}}\nonumber\\ &\qquad\qquad +\underbrace{\Big|\int_{[0,1]^2}\Big(\int_0^1\big(\int_z^1 R^\sharp(x',w)\,dw\big)\,dz\Big)\partial_{x_3}\varphi(x',0)\,dx'\Big|}_{\mathcal{II}}. \label{Rest} \end{align} $\mathcal{I}$ can be estimated as follows. \begin{align} |\mathcal{I}| &\le \bigg\{\int_{[0,1]^3}\Big(\int_{x_3}^1\big(\int_z^1 |R^\sharp(x',w)|\,dw\big)\,dz\Big)^2\,dx\bigg\}^{\frac{1}{2}}\bigg\{\int_{[0,1]^3}|\Delta\varphi|^2\,dx\bigg\}^{\frac{1}{2}}\nonumber\\ &\le C\bigg\{\int_{[0,1]^3}\Big(\int_{x_3}^1\big(\int_z^1 e^{-\frac{Cw}{\sqrt{L}}}\,dw\big)\,dz\Big)^2\,dx\bigg\}^{\frac{1}{2}}\bigg\{\int_{[0,1]^3}|\Delta\varphi|^2\,dx\bigg\}^{\frac{1}{2}}\nonumber\\ &\le C\sqrt{L}\bigg\{\int_{[0,1]^3}\big(\int_{x_3}^1 e^{-\frac{Cz}{\sqrt{L}}}\,dz\big)^2\,dx\bigg\}^{\frac{1}{2}}\bigg\{\int_{[0,1]^3}|\Delta\varphi|^2\,dx\bigg\}^{\frac{1}{2}}\nonumber\\ &\le CL \|\varphi\|_{H^2([0,1]^3)}, \label{-221} \end{align} Similarly, \begin{align} |\mathcal{II}| &\le \bigg\{\int_{[0,1]^2}\Big(\int_0^1\big(\int_z^1 R^\sharp(x',w)dw\big)dz\Big)^2\,dx'\bigg\}^{\frac{1}{2}}\bigg\{\int_{[0,1]^2} |\partial_{x_3} \varphi(x',0)|^2\,dx'\bigg\}^{\frac{1}{2}}\nonumber\\ &\le C\sqrt{L}\bigg\{\int_{[0,1]^2}\big(\int_0^1 e^{-\frac{z}{\sqrt{L}}}\,dz\big)^2\,dx'\bigg\}^{\frac{1}{2}}\|\textrm{Tr}(\nabla\varphi)\|_{L^2(\partial\Omega)}\nonumber\\ &\le CL\|\varphi\|_{H^2(\Omega)}. \label{-222} \end{align} Using (\ref{-221}) and (\ref{-222}) in (\ref{Rest}) we obtain the claimed estimate (\ref{Rintilde}) with the constant $C$ independent of $L$. \noindent\underline{Step 2:} We now consider the general case. 
Select $\delta$ $>$ $0$ sufficiently small so that for any $x$ $\in$ $\Omega$ with $d(x)$ $<$ $\delta$, there is a unique $\pi(x)$ $\in$ $\partial\Omega$ such that $d(x)$ $=$ $|x - \pi(x)|$. Cover $\partial\Omega$ by finitely many open sets $O_i$ such that each $O_i$ is diffeomorphic to $[0,1]^2$ by some map $\psi_i:$ $[0,1]^2$ $\rightarrow$ $O_i$. Define $\phi_i:$ $[0,1]^2 \times [0,\delta]$ $\rightarrow$ $\Omega$ by mapping $(\xi,\eta)$ $\in$ $[0,1]^2 \times [0,\delta]$ to the unique point $x$ in $\Omega$ such that $\pi(x)$ $=$ $\psi_i(\xi)$ and $d(x)$ $=$ $\eta$. Let $U_i$ $=$ $\phi_i([0,1]^2 \times [0,\delta])$ and $U_*$ $=$ $\{x\in\Omega: d(x) > \delta/2\}$. Then $\{U_*\} \cup \{U_i\}$ forms a covering of $\Omega$. Let $\mathbf{1}$ $=$ $\theta_* + \sum \theta_i$ be a partition of unity of $\Omega$ associated to this covering. Fix $\varphi$ $\in$ $H^2(\Omega) \cap H^1_0(\Omega)$. By our hypothesis, in $U_*$, $|R^\sharp|$ $\leq$ $C\,L$ where $C$ depends on $\delta$. Thus, \begin{equation} \Big|\int_{\Omega} R^\sharp\,\varphi\,\theta_*\,dx\Big| \leq C\,L\,\|\varphi\|_{L^2(\Omega)}. \label{NE::Est1} \end{equation} Next, by performing a change of variables, we have \[ \int_{\Omega} R^\sharp\,\varphi\,\theta_i\,dx = \int_{[0,1]^2 \times [0,\delta]} \tilde R_i\,\varphi \circ \phi_i\,dx, \] where $\tilde R_i$ satisfies $|\tilde R_i(x)|$ $\leq$ $C\,e^{-\frac{Cx_3}{\sqrt{L}}}$. Thus, by Step 1, \begin{equation} \Big|\int_{\Omega} R^\sharp\,\varphi\,\theta_i\,dx\Big| \leq C\,L\,\|\varphi\|_{H^2(\Omega)}. \label{NE::Est2} \end{equation} Summing up \eqref{NE::Est1} and \eqref{NE::Est2}, we get \eqref{Rintilde}. $\square$ It turns out that we will also need an estimate similar to that in Lemma \ref{NormEst}, but with the norm measured in the fractional Sobolev spaces $H^s(\Omega)$. For clarity of exposition, we state the estimate here while deferring its proof until after the proof of Proposition \ref{Rate}.
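The $L$-scaling behind Lemma \ref{NormEst} rests on the elementary bound $\int_{x_3}^1\int_z^1 e^{-Cw/\sqrt{L}}\,dw\,dz \le L/C^2$, uniformly in $x_3$, which drives \eqref{-221}. As a quick numerical sanity check (ours, not part of the proofs; the function names are our own), the iterated integral can be evaluated in closed form and compared against the bound:

```python
import math

def iterated_integral(x3, L, C=1.0):
    """Closed form of int_{x3}^1 int_z^1 exp(-C w / sqrt(L)) dw dz."""
    a = C / math.sqrt(L)
    return ((math.exp(-a * x3) - math.exp(-a)) / a**2
            - (1.0 - x3) * math.exp(-a) / a)

def scaling_holds(L, C=1.0):
    # The proof bounds the iterated integral by L / C^2, uniformly in x3.
    return all(0.0 <= iterated_integral(x3, L, C) <= L / C**2 + 1e-15
               for x3 in [0.0, 0.01, 0.1, 0.5, 0.9])

print(all(scaling_holds(L) for L in [1e-1, 1e-2, 1e-4, 1e-6]))  # True
```

The same computation explains the factor $\sqrt{L}$ gained at each of the two integrations in \eqref{-221} and \eqref{-222}.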
\begin{lemma}\label{Remainder::HsNorm} Let $\Omega\subset\mathbb{R}^3$ be a smooth bounded domain. If $|R^\sharp(x)|\le Ce^{-\frac{Cd(x)}{\sqrt{L}}}$, where $d(x)$ denotes the distance to the boundary $\partial\Omega$, then \begin{equation} \|R^\sharp\|_{H^s(\Omega)}\le CL \label{RinX} \end{equation} for all $s\in [-2,-\frac{3}{2})$. \end{lemma} \begin{lemma}\label{NormEstEz} Let $\Omega\subset\mathbb{R}^3$ be a smooth bounded domain. If $|R^\flat(x)| \le \frac{CL}{d(x)}$, where $d(x)$ denotes the distance to the boundary $\partial \Omega$, then \begin{equation} \|R^\flat\|_{H^{-1}(\Omega)}\le CL \label{NEE::Est1} \end{equation} with the constant $C$ independent of $L$. \end{lemma} \noindent{\bf Proof.\;} The assertion is a consequence of the well-known Hardy inequality. We give a proof here for completeness. Similar to the proof of Lemma \ref{NormEst}, it suffices to show that if $\Omega$ $=$ $[0,1]^3$ and $|R(x)|$ $\leq$ $\frac{CL}{x_3}$, then \eqref{NEE::Est1} holds. Moreover, we can assume that $R$ is non-negative. For $\varphi$ $\in$ ${\rm Lip}_0(\Omega)$, we estimate \begin{align*} \Big|\int_{[0,1]^3} R\,\varphi\,dx\Big| &\leq \Big|\int_{[0,1]^3} \int_{x_3}^1 R(x',z)\,dz\,\partial_{x_3} \varphi(x',x_3)\,dx_3\,dx'\Big|\\ &\leq C\,L\Big|\int_{[0,1]^3} \ln x_3\,\partial_{x_3} \varphi(x',x_3)\,dx_3\,dx'\Big| \leq C\,L\,\|\varphi\|_{H^1_0(\Omega)}, \end{align*} where the last step uses that $\ln x_3$ $\in$ $L^2([0,1]^3)$ together with the Cauchy-Schwarz inequality. Using this inequality, one can argue using density, Fatou's lemma and the splitting $f$ $=$ $f^+ - f^-$ to show that $R\,\varphi$ is integrable for any $\varphi$ $\in$ $H^1_0(\Omega)$ and the above inequality extends to $\varphi$ $\in$ $H^1_0(\Omega)$. The assertion follows. $\square$ \noindent{\bf Proof of Proposition \ref{Rate}.} As usual, we drop the subscript $L_k$. By Propositions \ref{C1AlphaConv} and \ref{CjConv} and the smoothness of $Q_*$, we can assume that $Q$ converges in $C^{1,\alpha}(\bar\Omega)$ and $C^j_{\rm loc}(\Omega)$, $j$ $\geq$ $2$, to $Q_*$.
Moreover, by Theorem \ref{MainThm1} and Corollary \ref{ImprRemEst}, we have \begin{equation} \Delta Q = -\frac{4}{s_+^2}\big(Q - \frac{1}{6}s_+\,{\rm Id}\big)\sum_{\alpha = 1}^3 (\nabla_\alpha Q)^2 + R, \label{Rate::ApproxEqn} \end{equation} where $R$ $\in$ $C^\infty(\Omega)$ and satisfies the estimate \begin{equation} |R(y)| \leq C\,\Big[\frac{L}{d(y)} + \exp\Big(-\frac{d(y)}{\sqrt{L}}\Big)\Big] \text{ and } |\nabla^l R(y)| \leq \frac{C\,L_k}{d(y)^{l+2}} \text{ for } y \in \Omega, l\geq 0. \label{Rate::ErrEst} \end{equation} Here $d(y)$ $=$ ${\rm dist}(y,\partial\Omega)$. Let $S$ $=$ $Q - Q_*$. By \eqref{Rate::ApproxEqn} and Corollary \ref{HarMapEqn}(iv), \begin{multline} \Delta S + \frac{4}{s_+^2}\left(Q-\frac{1}{6}\,s_+\,{\rm Id}\right)\sum_{\alpha=1}^3\big[\nabla_\alpha Q\,\nabla_{\alpha} S +\nabla_\alpha S\,\nabla_\alpha Q_*\big]\\ +\frac{4}{s_+^2} S\,\sum_{\alpha=1}^3(\nabla_\alpha Q_*)^2 = R, \label{Rate::DeltaS} \end{multline} For our purpose, we use the convergence of $Q$ to $Q_*$ to rewrite \eqref{Rate::DeltaS} in the following form: \begin{equation} \tilde {\mycal{L}}\,S_{ij} := {\mycal{L}}_{Q_*}\,S_{ij} + \tilde b_{ij}^{\alpha\beta}\,\nabla S_{\alpha\beta} + \tilde c_{ij}^{\alpha\beta}\,S_{\alpha\beta} = R_{ij}, \label{Rate::DSLinearForm} \end{equation} where, by Propositions \ref{C1AlphaConv}, \ref{CjConv} and \ref{BdryCjEst}, $\tilde b_{ij}^{\alpha\beta}$ $\in$ $C^{\alpha}(\bar\Omega) \cap C^\infty(\Omega) \cap W^{1,p}(\Omega)$, $\tilde c_{ij}^{\alpha\beta}$ $\in$ $C^{\alpha}(\bar\Omega) \cap C^\infty(\Omega)$ for any $\alpha$ $\in$ $(0,1)$ and $p$ $>$ $1$, and satisfy \begin{align} \|\tilde b_{ij}^{\alpha\beta}\|_{W^{1,p}(\Omega)} \leq C \text{ and } \|\tilde b_{ij}^{\alpha\beta}\|_{C^{\alpha}(\bar\Omega)} + \|\tilde c_{ij}^{\alpha\beta}\|_{C^{\alpha}(\bar\Omega)} \leq o(1).\label{Rate::CoefEst1} \end{align} Here $o(1)$ is such that $\lim_{L_k \rightarrow 0} o(1)$ $=$ $0$. 
In particular, by our hypotheses, for all $L_k$ sufficiently small and $t$ $\in$ $[0,1]$, the operators $\tilde{\mycal{L}}_t:$ $H^1_0(\Omega)$ $\rightarrow$ $H^{-1}(\Omega)$ defined by \[ \tilde {\mycal{L}}_t\,S_{ij} := {\mycal{L}}_{Q_*}\,S_{ij} + t\,\tilde b_{ij}^{\alpha\beta}\,\nabla S_{\alpha\beta} + t\,\tilde c_{ij}^{\alpha\beta}\,S_{\alpha\beta} \] are injective. The stability of the Fredholm index (see e.g. \cite[Chapter IV, Theorem 5.17]{Kato}) then implies that the $\tilde {\mycal{L}}_t$'s are bijective. Thus, by Lemma \ref{Trick}, \[ \|S\|_{L^2(\Omega)} \leq C\,\|R\|_{(H^2(\Omega) \cap H^1_0(\Omega))^*}. \] Now, by \eqref{Rate::ErrEst} and Lemmas \ref{NormEst} and \ref{NormEstEz}, we have \[ \|R\|_{(H^2(\Omega) \cap H^1_0(\Omega))^*} \leq CL_k. \] We thus have \begin{equation} \|S\|_{L^2(\Omega)} \leq C\,L_k. \label{Rate::L2Est} \end{equation} On the other hand, note that, by \eqref{Rate::DeltaS}, \[ \|S\|_{H^s(\Omega)} \leq C\,\big[\|\Delta S\|_{H^{s-2}(\Omega)} + \|S\|_{H^{s-2}(\Omega)}\big] \leq \frac{1}{2}\|S\|_{H^s(\Omega)} + C\big[\|R\|_{H^{s-2}(\Omega)} + \|S\|_{H^{s-2}(\Omega)}\big] \] for any $s$ $>$ $0$. Thus, by \eqref{Rate::ErrEst} and Lemmas \ref{Remainder::HsNorm} and \ref{NormEstEz}, \[ \|S\|_{H^s(\Omega)} \leq C\,L_k \text{ for any } 0 < s < \frac{1}{2}. \] Estimate \eqref{est:Hsrate} follows. To obtain the $C^j$-convergence, we use \eqref{Rate::L2Est}, \eqref{Rate::ErrEst} and apply standard elliptic estimates to \eqref{Rate::DSLinearForm} to get \[ \|S\|_{H^m(K)} \leq C\,L_k, \] for any $K$ $\Subset$ $\Omega$ and $m$ $\geq$ $1$. This completes the proof. $\square$ \noindent{\bf Proof of Theorem \ref{MainThm2}.} The conclusion follows immediately from Propositions \ref{Rate} and \ref{FirstApproxEqn}. $\square$ To finish this section, we furnish the proof of Lemma \ref{Remainder::HsNorm}.
For the convenience of the reader, we recall that, for $s\in [-2,-\frac{3}{2})$, $H^s(\Omega)$ can be viewed as an interpolation space between $H^{-2}(\Omega)$ and $H^{-1}(\Omega)$ (see e.g. \cite[Theorem 6.2.4]{interpolation}), \begin{align*} H^s(\Omega) &= [H^{-1}(\Omega),H^{-2}(\Omega)]_{s+2,2}\\ &= \Big\{ u \in H^{-2}(\Omega)+H^{-1}(\Omega) \Big| u = \sum_{i\in\mathbb{Z}}u_i \text{ with } u_i\in H^{-1}(\Omega)\\ &\qquad\qquad \text{ and } \sum_{i\in\mathbb{Z}} 2^{-2(s+2)i}\|u_i\|_{H^{-2}(\Omega)}^2+\sum_{i\in\mathbb{Z}}2^{-2(s+1)i}\|u_i\|_{H^{-1}(\Omega)}^2 < \infty \Big\}. \end{align*} The $H^s$-norm is defined to be \[ \|u\|_{H^s} = \inf_{u=\sum_{i\in\mathbb{Z}}u_i}\Big(\sum_{i\in\mathbb{Z}} 2^{-2(s+2)i}\|u_i\|_{H^{-2}(\Omega)}^2+\sum_{i\in\mathbb{Z}}2^{-2(s+1)i}\|u_i\|_{H^{-1}(\Omega)}^2\Big)^{\frac{1}{2}}. \] \noindent{\bf Proof of Lemma \ref{Remainder::HsNorm}.} As in the proof of Lemma \ref{NormEst}, it suffices to consider the case where $\Omega=[0,1]^3$ and ``$d(x)=x_n$''. We show that if $|R^\sharp(x)|\le Ce^{-\frac{Cx_n}{\sqrt{L}}}$ then (\ref{RinX}) holds. To this end we consider first a sequence $(y_i)_{i\in\mathbb{N}}$ with $y_i\stackrel{def}{=}2^{-i}$. We take $\chi\in C_c^\infty\big([0,1]^2\times [-1,1]\big)$ with $\chi\equiv 1$ on $[0,1]^2\times [-\frac{1}{2},\frac{1}{2}]$ and $\chi\equiv 0$ on $[0,1]^2\times [-1,-\frac{3}{4}]\cup [0,1]^2\times [\frac{3}{4},1]$ such that $\frac{\partial\chi}{\partial x_n}(x',x_n)<0$ for $x_n>0$ . Define \[ R_i(x)\stackrel{def}{=}\left\{\begin{array}{ll} R^\sharp(x)\Big(\chi(x',x_n2^{-i})-\chi(x',x_n2^{-i+1})\Big),& \,i\in\mathbb{Z}, i\le 0\\ 0,\, i\in\mathbb{Z},\, i>0 \end{array}\right. \] One can easily check that $R^\sharp(x)=\sum_{i\in\mathbb{Z}}R_i(x),\forall x\in [0,1]^2\times (0,1]$. Moreover we have, for $i\ge 0$, that the support of $R_{-i}$, ${\rm Supp}\, R_{-i}$, is a subset of $\Omega_i := [0,1]^2\times [y_{i+2},y_i]$ and $|R_{-i}(x)|\le |R(x)|$ for all $ x \in {\rm Supp}\, R_{-i}$. 
For $\varphi\in C_c^\infty(\Omega_i)$ and $i\ge 0$ we have \begin{align} \Big|\int_{\Omega_i} R_{-i}(x) \varphi(x)\,dx\Big| &= \Big|\int_{\Omega_i}\Big(\int_{x_n}^{y_i}\big(\int_z^{y_i} R_{-i}(x',w_n)\,dw_n\big)\,dz\Big)\partial^2_{x_n}\varphi(x',x_n)\,dx'dx_n\Big |\nonumber\\ &\le \bigg(\int_{\Omega_i}\Big(\int_{x_n}^{y_i}\big(\int_z^{y_i} |R_{-i}(x',w_n)|\,dw_n\big)\,dz\Big)^2\,dx\bigg)^{\frac{1}{2}}\Big(\int_{\Omega_i}|\Delta\varphi|^2\,dx\Big)^{\frac{1}{2}}\nonumber\\ &\leq C\,\bigg(\int_{\Omega_i}\Big(\int_{x_n}^{y_i}\big(\int_z^{y_i} e^{-\frac{C\,w_n}{\sqrt{L}}}\,dw_n\big)\,dz\Big)^2\,dx\bigg)^{\frac{1}{2}}\,\|\varphi\|_{H^2(\Omega_i)}\nonumber\\ &\leq C\,\sqrt{L}\bigg(\int_{\Omega_i}\big(\int_{x_n}^{y_i} e^{-\frac{C\,z}{\sqrt{L}}}\,dz\big)^2\,dx\bigg)^{\frac{1}{2}}\,\|\varphi\|_{H^2(\Omega_i)}\nonumber\\ &\leq C\,L\bigg( \int_{y_{i+2}}^{y_i} (e^{-\frac{C\,x_n}{\sqrt{L}}}-e^{-\frac{C\,y_i}{\sqrt{L}}})^2\,dx_n\bigg)^{\frac{1}{2}}\|\varphi\|_{H^2(\Omega_i)}\nonumber\\ &\leq C\,L\Big(e^{-\frac{C\,y_{i+2}}{\sqrt{L}}}-e^{-\frac{C\,y_i}{\sqrt{L}}}\Big)\sqrt{y_i-y_{i+1}}\|\varphi\|_{H^2(\Omega_i)}. \label{-22longest} \end{align} We have shown that $\|R_{-i}\|_{H^{-2}(\Omega_i)} \le CL\Big(e^{-\frac{Cy_{i+2}}{\sqrt{L}}}-e^{-\frac{Cy_i}{\sqrt{L}}}\Big)\sqrt{y_i-y_{i+1}}$. On the other hand, as ${\rm Supp}\, R_{-i}\subset\Omega_i$, one can check that $\|R_{-i}\|_{H^{-2}(\Omega_i)}=\|R_{-i}\|_{H^{-2}(\Omega)}$. We thus have \begin{equation} \|R_{-i}\|_{H^{-2}(\Omega)} \le CL\Big(e^{-\frac{Cy_{i+2}}{\sqrt{L}}}-e^{-\frac{Cy_i}{\sqrt{L}}}\Big)\sqrt{y_i-y_{i+1}}, \label{-22est} \end{equation} with the constant $C$ depending only on the dimension. We next estimate $\|R_{-i}\|_{H^{-1}(\Omega)}$. 
For $\varphi\in C_c^\infty(\Omega_i)$ and $i\ge 0$ we have: \begin{align*} \Big|\int_{\Omega_i} R_{-i}(x)\varphi(x)\,dx\Big| &= \Big|\int_{\Omega_i} \Big(\int_{x_n}^{y_i} R_{-i}(x', w_n)\,dw_n\Big)\partial_{x_n}\varphi(x)\,dx\Big|\nonumber\\ &\le \Big(\int_{\Omega_i}\big(\int_{x_n}^{y_i} |R_{-i}(x', w_n)|\,dw_n\big)^2\,dx\Big)^{\frac{1}{2}}\Big(\int_{\Omega_i} |\nabla\varphi(x)|^2\,dx\Big)^{\frac{1}{2}}\nonumber\\ &\leq C\,\Big(\int_{\Omega_i}\big(\int_{x_n}^{y_i} e^{-\frac{C\,w_n}{\sqrt{L}}}\,dw_n\big)^2\,dx\Big)^{\frac{1}{2}}\,\|\varphi\|_{H^1_0(\Omega_i)}\nonumber\\ &\leq C\,\sqrt{L}\Big(\int_{y_{i+2}}^{y_i}\big(e^{-\frac{C\,x_n}{\sqrt{L}}}-e^{-\frac{C\,y_i}{\sqrt{L}}}\big)^2\,dx_n\Big)^{\frac{1}{2}}\|\varphi\|_{H^1_0(\Omega_i)}\nonumber\\ &\leq C\,{\sqrt{L}}\big(e^{-\frac{C\,y_{i+2}}{\sqrt{L}}}-e^{-\frac{C\,y_i}{\sqrt{L}}}\big)\sqrt{y_i-y_{i+1}}\|\varphi\|_{H^1_0(\Omega_i)}. \end{align*} On the other hand, noting that $y_i=2^{-i}$, one finds \[ \sqrt{L}\Big(e^{-\frac{C\,y_{i+2}}{\sqrt{L}}}-e^{-\frac{C\,y_i}{\sqrt{L}}}\Big)\sqrt{y_i-y_{i+1}} \leq C\,L\Big(e^{-\frac{C'\,y_{i+2}}{\sqrt{L}}}-e^{-\frac{C'\,y_i}{\sqrt{L}}}\Big)\frac{\sqrt{2}}{\sqrt{y_i-y_{i+1}}}. \] Combining the last two inequalities, we get \begin{equation} \|R_{-i}\|_{H^{-1}(\Omega)} \le \frac{CL\Big(e^{-\frac{C'\,y_{i+2}}{\sqrt{L}}}-e^{-\frac{C'\,y_i}{\sqrt{L}}}\Big)}{\sqrt{y_i-y_{i+1}}}.
\label{-12est} \end{equation} Taking (\ref{-22est}) and (\ref{-12est}) into account and noting that $s$ $\in$ $[-2,-3/2)$ and $y_i=2^{-i}$ for $i\ge 0$, we obtain: \begin{align*} \|R^\sharp\|_{H^{s}(\Omega)} &\leq \Big\{ \sum_{i\ge 0} 2^{2(s+2)i}\|R_{-i}\|_{H^{-2}}^2+\sum_{i\ge 0}2^{2(s+1)i}\|R_{-i}\|_{H^{-1}}^2 \Big\}^{1/2}\\ &\leq CL\Big\{\sum_{i\ge 0} 2^{(2s+3)i}\big(e^{-\frac{C\,y_{i+2}}{\sqrt{L}}}-e^{-\frac{C\,y_i}{\sqrt{L}}}\big) + \sum_{i \ge 0} 2^{(2s+3)i}\,\big(e^{-\frac{C'\,y_{i+2}}{\sqrt{L}}}-e^{-\frac{C'\,y_i}{\sqrt{L}}}\big)\Big\}^{1/2}\\ &\leq CL\Big\{\sum_{i\ge 0} \big(e^{-\frac{C\,y_{i+2}}{\sqrt{L}}}-e^{-\frac{C\,y_i}{\sqrt{L}}}\big) + \sum_{i \ge 0} \big(e^{-\frac{C'\,y_{i+2}}{\sqrt{L}}}-e^{-\frac{C'\,y_i}{\sqrt{L}}}\big)\Big\}^{1/2}\\ &\leq CL, \end{align*} which proves \eqref{RinX}. $\square$ \appendix \section{Proof of Proposition \ref{ProjEqn} (completed)} Recall that we need to solve for $X$ from \eqref{ProjEqn03} and \eqref{ProjEqn04}. We note that by \eqref{MysIds::Id3}, \[ K\Big[\frac{1}{s_+}\,Q^\sharp + \frac{1}{3}\,{\rm Id}\Big] = \frac{1}{3}{\rm tr}(K)\Big[\frac{1}{s_+}Q^\sharp + \frac{1}{3}\,{\rm Id}\Big], \] which implies \begin{equation} Q = K\,Q^\sharp = -\frac{1}{3}s_+\,K + \frac{1}{3}{\rm tr}(K)\,Q^\sharp + \frac{1}{9}s_+{\rm tr}(K)\,{\rm Id}. \label{ProjEqn05} \end{equation} Hence, \eqref{ProjEqn03} is equivalent to \begin{equation} -\frac{1}{3}s_+(K\,X - X\,K) + \frac{1}{3}{\rm tr}(K)(Q^\sharp\,X - X\,Q^\sharp) = W. \label{ProjEqn06} \end{equation} Using \eqref{ProjEqn04}, we thus get \begin{equation} -\frac{1}{3}s_+(K\,X - X\,K) + \frac{1}{3}{\rm tr}(K)(2Q^\sharp\,X - \frac{1}{3}s_+\,X) = W. 
\label{ProjEqn06bis} \end{equation} Multiplying \eqref{ProjEqn06} to the left and to the right by $Q^\sharp$ and then taking difference yields \begin{multline} -\frac{1}{3}s_+(Q\,X + X\,Q) + \frac{1}{3}s_+(Q^\sharp\,X\,K + K\,X\,Q^\sharp)\\ + \frac{1}{3}{\rm tr}(K)((Q^\sharp)^2\,X + X\,(Q^\sharp)^2 - 2Q^\sharp\,X\,Q^\sharp) = Q^\sharp\,W - W\,Q^\sharp. \label{ProjEqn07} \end{multline} On the other hand, by \eqref{ProjEqn04} and $(Q^\sharp)^2 - \frac{1}{3}s_+\,Q^\sharp - \frac{2}{9}s_+^2\,{\rm Id}$ $=$ $0$ (see Lemma \ref{DiffRepLimSurf}(iv)), we have \[ (Q^\sharp)^2\,X + X\,(Q^\sharp)^2 = \frac{5}{9}s_+^2\,X, \] and \[ Q^\sharp\,X\,K + K\,X\,Q^\sharp = \frac{s_+}{3}(K\,X + X\,K) - (Q\,X + X\,Q). \] Substituting these relations into \eqref{ProjEqn07} and noting that $Q^\sharp\,X\,Q^\sharp$ $=$ $-\frac{2}{9}s_+^2\,X$ (by \eqref{ProjEqn04} and Lemma \ref{DiffRepLimSurf}(iv)), we get \begin{equation} - \frac{2}{3}s_+(Q\,X + X\,Q) + \frac{1}{9}s_+^2(K\,X + X\,K) + \frac{1}{3}s_+^2\,{\rm tr}(K)\,X = Q^\sharp\,W - W\,Q^\sharp. \label{ProjEqn08} \end{equation} Performing $\eqref{ProjEqn08} - \eqref{ProjEqn03}\times \frac{2}{3}s_+ - \eqref{ProjEqn06bis} \times \frac{1}{3}s_+$, we arrive at \[ - \frac{4}{3}s_+\,Q\,X + \frac{2}{9}s_+^2\,K\,X + \frac{1}{3}s_+^2\,{\rm tr}(K)\,X - \frac{1}{9}s_+\,{\rm tr}(K)(2Q^\sharp\,X - \frac{1}{3}s_+\,X) = Q^\sharp\,W - W\,Q^\sharp - s_+\,W, \] i.e. \[ \Big[-\frac{4}{3}s_+\,K\,Q^\sharp + \frac{2}{9}s_+^2\,K - \frac{2}{9}s_+\,{\rm tr}(K)\,Q^\sharp + \frac{10}{27}s_+^2\,{\rm tr}(K)\,{\rm Id}\Big]X = Q^\sharp\,W - W\,Q^\sharp - s_+\,W. \] It thus follows from \eqref{ProjEqn05} that \begin{equation} -2s_+\Big[Q - \frac{2}{9}s_+\,{\rm tr}(K)\,{\rm Id}\Big]X = Q^\sharp\,W - W\,Q^\sharp - s_+\,W. 
\label{ProjEqn09} \end{equation} Note that $Q - \frac{2}{9}s_+\,{\rm tr}(K)\,{\rm Id}$ is singular because \eqref{ProjEqn05} implies that \begin{equation} \Big[Q - \frac{2}{9}s_+\,{\rm tr}(K)\,{\rm Id}\Big]\Big[\frac{1}{s_+}\,Q^\sharp + \frac{1}{3}\,{\rm Id}\Big] = 0. \label{ProjEqn10} \end{equation} To solve \eqref{ProjEqn09} for $X$, consider $T$ defined by \eqref{ProjEqn::Est3}. Observe that $T$ is invertible and \begin{equation} T^{-1}\Big[Q - \frac{2}{9}s_+\,{\rm tr}(K)\,{\rm Id}\Big] = {\rm Id} - \beta\,T^{-1}\,\Big[\frac{1}{s_+}\,Q^\sharp + \frac{1}{3}\,{\rm Id}\Big]. \label{ProjEqn11} \end{equation} Multiplying \eqref{ProjEqn11} to the right by $X$ and using \eqref{ProjEqn09}, we get \begin{equation} X - \beta\,T^{-1}\,\Big[\frac{1}{s_+}\,Q^\sharp + \frac{1}{3}\,{\rm Id}\Big]X = -\frac{1}{2s_+}T^{-1}(Q^\sharp\,W - W\,Q^\sharp - s_+\,W). \label{ProjEqn12} \end{equation} Multiplying \eqref{ProjEqn11} to the right by $\big[\frac{1}{s_+}\,Q^\sharp + \frac{1}{3}\,{\rm Id}\big]X$ and noting that $\big[\frac{1}{s_+}\,Q^\sharp + \frac{1}{3}\,{\rm Id}\big]^2$ $=$ $\big[\frac{1}{s_+}\,Q^\sharp + \frac{1}{3}\,{\rm Id}\big]$ and using \eqref{ProjEqn10}, we get \begin{equation} 0 = \Big[\frac{1}{s_+}\,Q^\sharp + \frac{1}{3}\,{\rm Id}\Big]X - \beta\,T^{-1}\,\Big[\frac{1}{s_+}\,Q^\sharp + \frac{1}{3}\,{\rm Id}\Big]X. \label{ProjEqn13} \end{equation} Combining \eqref{ProjEqn12} and \eqref{ProjEqn13}, we obtain \begin{equation} \Big[\frac{1}{s_+}\,Q^\sharp - \frac{2}{3}\,{\rm Id}\Big]X = \frac{1}{2s_+}T^{-1}(Q^\sharp\,W - W\,Q^\sharp - s_+\,W). \label{ProjEqn14} \end{equation} On the other hand, \eqref{ProjEqn03} and \eqref{ProjEqn09} imply \[ -2s_+\,X\Big[Q - \frac{2}{9}s_+\,{\rm tr}(K)\,{\rm Id}\Big] = Q^\sharp\,W - W\,Q^\sharp + s_+\,W. \] Arguing as before, we get \[ X\Big[\frac{1}{s_+}\,Q^\sharp - \frac{2}{3}\,{\rm Id}\Big] = \frac{1}{2s_+}(Q^\sharp\,W - W\,Q^\sharp + s_+\,W)T^{-1}. 
\] Together with \eqref{ProjEqn04}, this implies that \begin{equation} \Big[\frac{1}{s_+}\,Q^\sharp + \frac{1}{3}\,{\rm Id}\Big]X = -\frac{1}{2s_+}(Q^\sharp\,W - W\,Q^\sharp + s_+\,W)T^{-1}. \label{ProjEqn15} \end{equation} Subtracting \eqref{ProjEqn14} from \eqref{ProjEqn15}, we arrive at \[ X = -\frac{1}{2s_+}\Big[T^{-1}(Q^\sharp\,W - W\,Q^\sharp - s_+\,W) + (Q^\sharp\,W - W\,Q^\sharp + s_+\,W)T^{-1}\Big]. \] To rewrite this in a better form, note that \eqref{ProjEqn03} and \eqref{ProjEqn04} imply that $Q^\sharp\,W + W\,Q^\sharp$ $=$ $\frac{1}{3}s_+\,W$. Therefore \begin{equation} X = -\Big[T^{-1}\big(\frac{1}{s_+}\,Q^\sharp - \frac{2}{3}\,{\rm Id}\big)W - W\big(\frac{1}{s_+}\,Q^\sharp - \frac{2}{3}\,{\rm Id}\big)T^{-1}\Big]. \label{ProjEqn16} \end{equation} By \eqref{ProjEqn01bis}, \eqref{ProjEqn02} and \eqref{ProjEqn16}, we conclude the proof. $\square$ \def$'${$'$} \end{document}
\begin{document} \title{On quadratic optimization problems and canonical duality theory} \author{C. Z\u{a}linescu\\Faculty of Mathematics, University Al. I. Cuza Iasi, Iasi, Romania} \maketitle Canonical duality theory (CDT) is advertised by its author DY Gao as \textquotedblleft a breakthrough methodological theory that can be used not only for modeling complex systems within a unified framework, but also for solving a large class of challenging problems in multidisciplinary fields of engineering, mathematics, and sciences.\textquotedblright\ DY Gao, solely or together with some of his collaborators, applied CDT for solving some quadratic optimization problems with quadratic constraints. Unfortunately, in almost all papers we read on CDT there are unclear definitions, unconvincing arguments in the proofs, and even false results. The aim of this paper is to treat rigorously quadratic optimization problems by the method suggested by CDT and to compare what we get with the results obtained by DY Gao and his collaborators on this topic in several papers. \section{Notations and preliminary results\label{sec1}} Let us consider the quadratic functions $q_{k}:\mathbb{R}^{n}\rightarrow\mathbb{R}$ for $k\in\overline{0,m}$, that is, $q_{k}(x):=\tfrac{1}{2}\left\langle x,A_{k}x\right\rangle -\left\langle b_{k},x\right\rangle +c_{k}$ for $x\in\mathbb{R}^{n}$ with given $A_{k}\in\mathfrak{S}_{n}$, $b_{k}\in\mathbb{R}^{n}$ (seen as a column vector) and $c_{k}\in\mathbb{R}$ for $k\in\overline{0,m}$, where $\mathfrak{S}_{n}$ denotes the class of symmetric matrices from $\mathfrak{M}_{n}:=\mathbb{R}^{n\times n}$, and $\left\langle \cdot,\cdot\right\rangle$ denotes the usual inner product on $\mathbb{R}^{n}$. For $k\in\mathbb{N}^{\ast}$ we set \[ \mathbb{R}_{+}^{k}:=\{\eta\in\mathbb{R}^{k}\mid\eta_{i}\geq0~\forall i\in\overline{1,k}\},\quad\mathbb{R}_{-}^{k}:=-\mathbb{R}_{+}^{k},\quad\mathbb{R}_{++}^{k}:=\operatorname*{int}\mathbb{R}_{+}^{k},\quad\mathbb{R}_{--}^{k}:=-\mathbb{R}_{++}^{k}.
\] The fact that $A\in\mathfrak{S}_{n}$ is positive (semi)definite is denoted by $A\succ0$ $(A\succeq0)$ and we set $\mathfrak{S}_{n}^{+}:=\{A\in\mathfrak{S}_{n}\mid A\succeq0\}$, $\mathfrak{S}_{n}^{++}:=\{A\in\mathfrak{S}_{n}\mid A\succ0\}$; it is well known that $\mathfrak{S}_{n}^{++}=\operatorname*{int}\mathfrak{S}_{n}^{+}$. In this paper we consider quadratic minimization problems with (quadratic) equality and inequality constraints. With this aim, we fix a set $J\subset\overline{1,m}$ corresponding to the equality constraints; the set $J^{c}:=\overline{1,m}\setminus J$ will correspond to the inequality constraints. So, the general problem is $(P_{J})$ $~~\min$ $q_{0}(x)$ ~s.t. $x\in X_{J}$, \noindent where \[ X_{J}:=\{x\in\mathbb{R}^{n}\mid\left[ \forall j\in J:q_{j}(x)=0\right] ~\wedge~\left[ \forall j\in J^{c}:q_{j}(x)\leq0\right] \}. \] For later use we also introduce the set \[ \Gamma_{J}:=\{(\lambda_{1},...,\lambda_{m})\in\mathbb{R}^{m}\mid\lambda_{j}\geq0~\forall j\in J^{c}\}. \label{r-gamma} \] Clearly, for $J=\overline{1,m}$ $(P_{J})$ becomes the quadratic minimization problem with (quadratic) equality constraints, denoted $(P_{e})$, with $X_{e}:=X_{\overline{1,m}}$ its feasible set, while for $J=\emptyset$ $(P_{J})$ becomes the quadratic minimization problem with inequality constraints, denoted $(P_{i})$, with $X_{i}:=X_{\emptyset}$ its feasible set. Clearly $X_{e}\subset X_{J}\subset X_{i}$, the inclusions being strict in general when $\emptyset\neq J\neq\overline{1,m}$. Observe that any optimization problem with equality constraints can be seen as a problem with inequality constraints because the equality constraint $h(x)=0$ can be replaced by the inequality constraints $g_{1}(x):=h(x)\leq0$ and $g_{2}(x):=-h(x)\leq0$. Excepting linear programming, such a procedure is not used in general because the constraint qualification conditions are very different for problems with equality constraints and those with inequality constraints.
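For concreteness, membership in the feasible set $X_J$ can be tested numerically. The following sketch (the toy data and function names are ours, not taken from the papers under discussion) takes $n=m=2$, with the equality constraint given by the unit circle and one linear inequality constraint:

```python
# Toy illustration of the feasible set X_J:
# q_k(x) = (1/2) <x, A_k x> - <b_k, x> + c_k, with J indexing the equality constraints.

def q(x, A, b, c):
    """Evaluate a quadratic q_k at x (plain lists; <.,.> is the dot product)."""
    quad = 0.5 * sum(x[i] * sum(A[i][j] * x[j] for j in range(len(x)))
                     for i in range(len(x)))
    return quad - sum(bi * xi for bi, xi in zip(b, x)) + c

def in_X_J(x, quads, J, tol=1e-9):
    """x in X_J  iff  q_j(x) = 0 for j in J and q_j(x) <= 0 for j in J^c."""
    return all(abs(q(x, *quads[j])) <= tol if j in J else q(x, *quads[j]) <= tol
               for j in range(len(quads)))

# m = 2: q_1(x) = |x|^2/2 - 1/2 (the unit circle) and q_2(x) = x_1 (a half-space).
quads = [([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], -0.5),
         ([[0.0, 0.0], [0.0, 0.0]], [-1.0, 0.0], 0.0)]
J = {0}  # zero-based index: the first constraint is the equality one

print(in_X_J([-1.0, 0.0], quads, J))  # True: on the circle, x_1 <= 0
print(in_X_J([1.0, 0.0], quads, J))   # False: x_1 > 0 violates the inequality
print(in_X_J([0.0, 0.0], quads, J))   # False: the equality constraint fails
```

For this $J$ the feasible set is the left half of the unit circle; taking $J=\emptyset$ instead enlarges it to the left half-disc, illustrating the inclusions $X_{e}\subset X_{J}\subset X_{i}$.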
To the family $(q_{k})_{k\in\overline{0,m}}$ we associate the Lagrangian $L:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}$ defined by \[ L(x,\lambda):=q_{0}(x)+\sum\nolimits_{j=1}^{m}\lambda_{j}q_{j}(x)=\tfrac{1}{2}\left\langle x,A(\lambda)x\right\rangle -\left\langle x,b(\lambda)\right\rangle +c(\lambda), \label{r-L} \] where $A(\lambda)x:=[A(\lambda)]\cdot x$ and \[ A(\lambda):=\sum\nolimits_{k=0}^{m}\lambda_{k}A_{k},\quad b(\lambda):=\sum\nolimits_{k=0}^{m}\lambda_{k}b_{k},\quad c(\lambda):=\sum\nolimits_{k=0}^{m}\lambda_{k}c_{k}, \] with $\lambda_{0}:=1$ and $\lambda:=(\lambda_{1},...,\lambda_{m})^{T}\in\mathbb{R}^{m}$. Clearly, $A:\mathbb{R}^{m}\rightarrow\mathfrak{S}_{n}$, $b:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}$, $c:\mathbb{R}^{m}\rightarrow\mathbb{R}$ defined by the above formulas are affine mappings. Moreover, one considers the sets \begin{gather} Y_{0}:=\{\lambda\in\mathbb{R}^{m}\mid\det A(\lambda)\neq0\},\label{r-s0}\\ Y^{+}:=\{\lambda\in\mathbb{R}^{m}\mid A(\lambda)\succ0\},\quad Y^{-}:=\{\lambda\in\mathbb{R}^{m}\mid A(\lambda)\prec0\}.\label{r-s0pm} \end{gather} Observe that $Y_{0}$ is a (possibly empty) open set, while $Y^{+}$ and $Y^{-}$ are (possibly empty) open and convex sets. Sometimes one also uses the sets \begin{gather} Y_{\operatorname{col}}:=\{\lambda\in\mathbb{R}^{m}\mid b(\lambda)\in\operatorname{Im}A(\lambda)\},\label{r-yext}\\ Y_{\operatorname{col}}^{+}:=\{\lambda\in Y_{\operatorname{col}}\mid A(\lambda)\succeq0\},\quad Y_{\operatorname{col}}^{-}:=\{\lambda\in Y_{\operatorname{col}}\mid A(\lambda)\preceq0\},\label{r-yextp} \end{gather} where for $F\in\mathbb{R}^{m\times n}$ we set $\operatorname{Im}F:=\{Fx\mid x\in\mathbb{R}^{n}\}$ and $\ker F:=\{x\in\mathbb{R}^{n}\mid Fx=0\}$. Clearly, $Y_{0}\subset Y_{\operatorname{col}}$, $Y^{+}\subset Y_{\operatorname{col}}^{+}$, $Y^{-}\subset Y_{\operatorname{col}}^{-}$, and $Y_{\operatorname{col}}$ is neither open nor closed (in general).
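The affine maps $A(\cdot)$, $b(\cdot)$, $c(\cdot)$ and the Lagrangian can be assembled mechanically. A small Python sketch (numpy assumed; $\lambda_{0}:=1$ fixed as above; the toy data at the end are ours, chosen only to exercise the functions):

```python
import numpy as np

def A_of(lam, A_list):
    """A(lambda) = A_0 + sum_j lambda_j * A_j  (lambda_0 := 1)."""
    return A_list[0] + sum(l * Aj for l, Aj in zip(lam, A_list[1:]))

def b_of(lam, b_list):
    return b_list[0] + sum(l * bj for l, bj in zip(lam, b_list[1:]))

def c_of(lam, c_list):
    return c_list[0] + sum(l * cj for l, cj in zip(lam, c_list[1:]))

def lagrangian(x, lam, A_list, b_list, c_list):
    """L(x, lambda) = 1/2 <x, A(lambda) x> - <x, b(lambda)> + c(lambda)."""
    A, b, c = A_of(lam, A_list), b_of(lam, b_list), c_of(lam, c_list)
    return 0.5 * x @ A @ x - b @ x + c

# toy data: q0(x, y) = xy, q1(x, y) = 1/2(x^2 + y^2 - 1), so m = 1
A_list = [np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2)]
b_list = [np.zeros(2), np.zeros(2)]
c_list = [0.0, -0.5]
```

For this toy instance $A(\lambda)$ has entries $\lambda$ off the diagonal replaced by $1$ and diagonal $\lambda$, i.e.\ $A_{0}+\lambda I$, so membership in $Y^{+}$ or $Y_{0}$ can be read off from the eigenvalues of $A_{0}$ shifted by $\lambda$.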
Unlike for $Y^{+}$, the convexity of $Y_{\operatorname{col}}^{+}$ is less obvious. In fact the next (probably known) result holds. \begin{lemma} \label{lem-im}\emph{(i)} Let $A,B\in\mathfrak{S}_{n}^{+}$. Then $\operatorname{Im}(A+B)=\operatorname{Im}A+\operatorname{Im}B$. \emph{(ii)} Let $A\in\mathfrak{S}_{n}$ and $a\in\mathbb{R}^{n}$, and set $q(x):=\tfrac{1}{2}\left\langle x,Ax\right\rangle -\left\langle a,x\right\rangle $. Then $q(x_{1})=q(x_{2})$ for all $x_{1},x_{2}\in\mathbb{R}^{n}$ such that $Ax_{1}=Ax_{2}=a$. \end{lemma} Proof. (i) It is known that $\operatorname{Im}F=(\ker F)^{\perp}$, and so $\mathbb{R}^{n}=\operatorname{Im}F+\ker F$, provided $F\in\mathfrak{S}_{n}$. Moreover, using Schwarz' inequality for positive semi-definite matrices (operators) we have that $\ker F=\{x\in\mathbb{R}^{n}\mid\left\langle x,Fx\right\rangle =0\}$ whenever $F\in\mathfrak{S}_{n}^{+}$. Since $A+B\in\mathfrak{S}_{n}^{+}$ we get \begin{align*} \left( \operatorname{Im}(A+B)\right) ^{\perp} & =\ker(A+B)=\{x\in\mathbb{R}^{n}\mid\left\langle x,(A+B)x\right\rangle =0\}\\ & =\ker A\cap\ker B=\left( \operatorname{Im}A\right) ^{\perp}\cap\left( \operatorname{Im}B\right) ^{\perp}=\left( \operatorname{Im}A+\operatorname{Im}B\right) ^{\perp}, \end{align*} whence the conclusion. (ii) Take $x_{1},x_{2}\in\mathbb{R}^{n}$ such that $Ax_{1}=Ax_{2}=a;$ setting $x:=x_{1}$ and $u:=x_{2}-x_{1}$, we have that $x_{2}=x+u$ and $Au=0$. It follows that $\left\langle a,u\right\rangle =\left\langle Ax,u\right\rangle =\left\langle x,Au\right\rangle =0$, and so \[ q(x+u)=\tfrac{1}{2}\left\langle x+u,A(x+u)\right\rangle -\left\langle a,x+u\right\rangle =\tfrac{1}{2}\left\langle x,Ax\right\rangle -\left\langle a,x\right\rangle =q(x), \] whence $q(x_{2})=q(x_{1})$. $\square$ \begin{corollary} \label{c-yqi-ycoli}With the previous notations and assumptions, $Y_{\operatorname{col}}^{+}$ and $Y_{\operatorname{col}}^{-}$ are convex.
Moreover, if $Y^{+}$ (resp.~$Y^{-}$) is nonempty, then $Y^{+} =\operatorname*{int}Y_{\operatorname{col}}^{+}$ (resp.~$Y^{-} =\operatorname*{int}Y_{\operatorname{col}}^{-}$). \end{corollary} Proof. Take $\lambda,\lambda^{\prime}\in Y_{\operatorname{col}}^{+}$ and $\alpha\in(0,1)$. From the definition of $Y_{\operatorname{col}}^{+}$ and Lemma \ref{lem-im}~(i), taking into account that $A$ and $b$ are affine, we get \begin{align*} b(\alpha\lambda+(1-\alpha)\lambda^{\prime}) & =\alpha b(\lambda )+(1-\alpha)b(\lambda^{\prime})\in\alpha\operatorname{Im}A(\lambda )+(1-\alpha)\operatorname{Im}A(\lambda^{\prime})\\ & =\operatorname{Im}[\alpha A(\lambda)]+\operatorname{Im}[(1-\alpha )A(\lambda^{\prime})]=\operatorname{Im}[\alpha A(\lambda)+(1-\alpha )A(\lambda^{\prime})]\\ & =\operatorname{Im}A(\alpha\lambda+(1-\alpha)\lambda^{\prime}), \end{align*} and so $\alpha\lambda+(1-\alpha)\lambda^{\prime}\in Y_{\operatorname{col}} ^{+}$. The proof of the convexity of $Y_{\operatorname{col}}^{-}$ is similar. Assume now that $Y^{+}\neq\emptyset$ and take $\lambda_{0}\in Y^{+}$, $\lambda\in Y_{\operatorname{col}}^{+}$ and $\alpha\in(0,1)$. Then $A(\alpha\lambda_{0}+(1-\alpha)\lambda)=\alpha A(\lambda_{0})+(1-\alpha )A(\lambda)\succ0$, and so $\alpha\lambda_{0}+(1-\alpha)\lambda\in Y^{+}$. Taking the limit for $\alpha\rightarrow0$ we obtain that $\lambda \in\operatorname*{cl}Y^{+}$. Hence $Y^{+}\subset Y_{\operatorname{col}} ^{+}\subset\operatorname*{cl}Y^{+}$, and so \[ Y^{+}=\operatorname*{int}Y^{+}\subset\operatorname*{int}Y_{\operatorname{col} }^{+}\subset\operatorname*{int}(\operatorname*{cl}Y^{+})=Y^{+}. \] The proof is complete. $\square$ Of course, for every $(x,\lambda)\in\mathbb{R}^{n}\times\mathbb{R}^{m}$ we have that \begin{equation} \nabla_{x}L(x,\lambda)=A(\lambda)\cdot x-b(\lambda),\quad\nabla_{xx} ^{2}L(x,\lambda)=A(\lambda),\quad\nabla_{\lambda}L(x,\lambda)=\left( q_{j}(x)\right) _{j\in\overline{1,m}}. 
\label{r-nxL} \end{equation} Hence $L(\cdot,\lambda)$ is (strictly) convex for $\lambda\in Y_{\operatorname{col}}^{+}$ $(\lambda\in Y^{+})$ and (strictly) concave for $\lambda\in Y_{\operatorname{col}}^{-}$ $(\lambda\in Y^{-})$. Moreover, for $\lambda\in Y_{0}$ we have that $\nabla_{x}L(x,\lambda)=0$ iff $x=[A(\lambda )]^{-1}\cdot b(\lambda)$, written $A(\lambda)^{-1}b(\lambda)$ in the sequel. Let us consider now the (dual objective) function \begin{equation} D:Y_{\operatorname{col}}\rightarrow\mathbb{R},\quad D(\lambda):=L(x,\lambda )\text{ with }A(\lambda)x=b(\lambda);\label{r-pd0} \end{equation} $D$ is well defined because for $x_{1},x_{2}\in\mathbb{R}^{n}$ with $A(\lambda)x_{1}=A(\lambda)x_{2}=b(\lambda)$, by Lemma \ref{lem-im}~(ii), we have that $L(x_{2},\lambda)=L(x_{1},\lambda)$. In particular, \[ \big[\lambda\in Y_{0}~\text{~}\wedge~~x=\left( A(\lambda)\right) ^{-1}\cdot b(\lambda)\big]\Longrightarrow L(x,\lambda)=D(\lambda).\label{r-lpd} \] Of course \begin{equation} D(\lambda)=L\big(A(\lambda)^{-1}b(\lambda),\lambda\big)=-\tfrac{1} {2}\big\langle b(\lambda),A(\lambda)^{-1}b(\lambda)\big\rangle+c(\lambda )\quad\forall\lambda\in Y_{0}.\label{r-pd} \end{equation} \begin{lemma} \label{lem-qperfdual}Let $(\overline{x},\overline{\lambda})\in\mathbb{R} ^{n}\times\mathbb{R}^{m}$ be such that $\nabla_{x}L(\overline{x} ,\overline{\lambda})=0$ and $\left\langle \overline{\lambda},\nabla_{\lambda }L(\overline{x},\overline{\lambda})\right\rangle =0$. Then $\overline{\lambda }\in Y_{\operatorname{col}}$ and \begin{equation} q_{0}(\overline{x})=L(\overline{x},\overline{\lambda})=D(\overline{\lambda}). \label{r-qlpd} \end{equation} In particular, $\overline{x}\in X_{e}$ and (\ref{r-qlpd}) hold if $(\overline{x},\overline{\lambda})$ is a critical point of $L$, that is $\nabla L(\overline{x},\overline{\lambda})=0$. \end{lemma} Proof. 
Because $0=\nabla_{x}L(\overline{x},\overline{\lambda})=A(\overline{\lambda})\overline{x}-b(\overline{\lambda})$, it is clear that $\overline{\lambda}\in Y_{\operatorname{col}}$ and $L(\overline{x},\overline{\lambda})=D(\overline{\lambda})$ by the definition of $D$. On the other hand, \[ L(\overline{x},\overline{\lambda})=q_{0}(\overline{x})+\sum\nolimits_{j=1}^{m}\overline{\lambda}_{j}q_{j}(\overline{x})=q_{0}(\overline{x})+\left\langle \overline{\lambda},\nabla_{\lambda}L(\overline{x},\overline{\lambda})\right\rangle =q_{0}(\overline{x}). \] The last assertion follows from the expression of $\nabla_{\lambda}L(\overline{x},\overline{\lambda})$ in (\ref{r-nxL}). $\square$ Formula (\ref{r-qlpd}) is related to the so-called \textquotedblleft complementary-dual principle\textquotedblright\ (see \cite[p.~NP11]{GaoRuaLat:16}, \cite[p.~13]{GaoRuaLat:17}) and is sometimes called the \textquotedblleft perfect duality formula\textquotedblright. \begin{proposition} \label{lem-pd}\emph{(i)} The following representation of $D$ holds: \begin{equation} D(\lambda)=\left\{ \begin{array} [c]{ll} \min_{x\in\mathbb{R}^{n}}L(x,\lambda) & \text{if }\lambda\in Y_{\operatorname{col}}^{+},\\ \max_{x\in\mathbb{R}^{n}}L(x,\lambda) & \text{if }\lambda\in Y_{\operatorname{col}}^{-}, \end{array} \right. \label{r-1} \end{equation} the value of $D(\lambda)$ being attained at any $x\in\mathbb{R}^{n}$ such that $A(\lambda)x=b(\lambda)$ whenever $\lambda\in Y_{\operatorname{col}}^{+}\cup Y_{\operatorname{col}}^{-};$ in particular, $D(\lambda)$ is attained uniquely at $x:=A(\lambda)^{-1}b(\lambda)$ for $\lambda\in Y^{+}\cup Y^{-}$. \emph{(ii)} $D$ is concave and upper semi\-continuous on $Y_{\operatorname{col}}^{+}$, and convex and lower semi\-continuous on $Y_{\operatorname{col}}^{-}$.
\emph{(iii)} Let $J\subset\overline{1,m}$ and $(\overline{x},\overline {\lambda})\in X_{J}\times\mathbb{R}^{m}$ be such that $\nabla_{x} L(\overline{x},\overline{\lambda})=0$ and $\left\langle \overline{\lambda },\nabla_{\lambda}L(\overline{x},\overline{\lambda})\right\rangle =0$. Then $\overline{\lambda}\in Y_{\operatorname{col}};$ moreover \begin{gather*} \overline{\lambda}\in\Gamma_{J}\cap Y_{\operatorname{col}}^{+}\Longrightarrow D(\overline{\lambda})=\max\left\{ D(\lambda)\mid\lambda\in\Gamma_{J}\cap Y_{\operatorname{col}}^{+}\right\} ,\label{r-2a}\\ \overline{\lambda}\in(-\Gamma_{J})\cap Y_{\operatorname{col}}^{-} \Longrightarrow D(\overline{\lambda})=\min\left\{ D(\lambda)\mid\lambda \in(-\Gamma_{J})\cap Y_{\operatorname{col}}^{-}\right\} . \label{r-2b} \end{gather*} \emph{(iv)} Assume that $(\overline{x},\overline{\lambda})\in\mathbb{R} ^{n}\times\mathbb{R}^{m}$ is such that $\nabla L(\overline{x},\overline {\lambda})=0$. Then \begin{equation} D(\overline{\lambda})=\left\{ \begin{array} [c]{ll} \max_{\lambda\in Y_{\operatorname{col}}^{+}}D(\lambda) & \text{if } \overline{\lambda}\in Y_{\operatorname{col}}^{+},\\ \min_{\lambda\in Y_{\operatorname{col}}^{-}}D(\lambda) & \text{if } \overline{\lambda}\in Y_{\operatorname{col}}^{-}. \end{array} \right. \label{r-2c} \end{equation} In particular, (\ref{r-2c}) holds if $\overline{\lambda}\in Y^{+}\cup Y^{-}$ is a critical point of $D$ and $\overline{x}:=x(\overline{\lambda})$. \end{proposition} Proof. (i) Consider $\lambda\in Y_{\operatorname{col}}^{+};$ then there exists $u\in\mathbb{R}^{n}$ such that $A(\lambda)u=b(\lambda)$, and so $\nabla _{x}L(u,\lambda)=A(\lambda)u-b(\lambda)=0$. Because $L(\cdot,\lambda)$ is convex we obtain that $L(u,\lambda)\leq L(u^{\prime},\lambda)$ for every $u^{\prime}\in\mathbb{R}^{n}$, whence $D(\lambda)=L(u,\lambda)=\min _{u^{\prime}\in\mathbb{R}^{n}}L(u^{\prime},\lambda)$. 
Of course, if $\lambda\in Y^{+}$ then $L(\cdot,\lambda)$ is strictly convex and $u=A(\lambda)^{-1}b(\lambda)$, and so $A(\lambda)^{-1}b(\lambda)$ is the unique minimizer of $L(\cdot,\lambda)$ on $\mathbb{R}^{n}$. The case $\overline{\lambda}\in Y^{-}$ is treated similarly. (ii) Because $L(x,\cdot)$ is affine (hence concave and convex) for every $x\in\mathbb{R}^{n}$, from (\ref{r-1}) we obtain that $D$ is concave and u.s.c.~on $Y_{\operatorname{col}}^{+}$ as an infimum of affine continuous functions. The argument is similar for the other situation. (iii) Assume that $\overline{\lambda}\in Y_{\operatorname{col}}^{+}$ (hence $\overline{\lambda}\in\Gamma_{J}\cap Y_{\operatorname{col}}^{+}$), and take $\lambda\in\Gamma_{J}\cap Y_{\operatorname{col}}^{+}$. Using (\ref{r-1}) and the fact that $\overline{x}\in X_{J}$, we have that \[ D(\lambda)\leq L(\overline{x},\lambda)=q_{0}(\overline{x})+\sum_{j\in J^{c}}\lambda_{j}q_{j}(\overline{x})\leq q_{0}(\overline{x})=q_{0}(\overline{x})+\left\langle \overline{\lambda},\nabla_{\lambda}L(\overline{x},\overline{\lambda})\right\rangle =L(\overline{x},\overline{\lambda})=D(\overline{\lambda}), \] and so $D(\overline{\lambda})=\sup_{\lambda\in\Gamma_{J}\cap Y_{\operatorname{col}}^{+}}D(\lambda)$. The proof for $\overline{\lambda}\in(-\Gamma_{J})\cap Y_{\operatorname{col}}^{-}$ is similar. (iv) One applies (iii) for $J:=\overline{1,m}$. $\square$ Observe that $D$ is a $C^{\infty}$ function on the open set $Y_{0}$ (assumed to be nonempty). Indeed, the operator $\varphi:\{U\in\mathfrak{M}_{n}\mid U$ invertible$\}\rightarrow\mathfrak{M}_{n}$ defined by $\varphi(U)=U^{-1}$ is Fr\'{e}chet differentiable and $d\varphi(U)(S)=-U^{-1}SU^{-1}$ for $U,S\in\mathfrak{M}_{n}$ with $U$ invertible.
It follows that \begin{align} \frac{\partial D(\lambda)}{\partial\lambda_{j}} & =\tfrac{1}{2}\left\langle b(\lambda),A(\lambda)^{-1}A_{j}A(\lambda)^{-1}b(\lambda)\right\rangle -\left\langle b_{j},A(\lambda)^{-1}b(\lambda)\right\rangle +c_{j}\nonumber\\ & =\tfrac{1}{2}\left\langle x(\lambda),A_{j}x(\lambda)\right\rangle -\left\langle b_{j},x(\lambda)\right\rangle +c_{j}=q_{j}\left( x(\lambda )\right) \quad\forall j\in\overline{1,m}\label{r-d1pdq} \end{align} for $\lambda\in Y_{0}$, where \[ x(\lambda):=A(\lambda)^{-1}b(\lambda)\quad\left( \lambda\in Y_{0}\right) ;\label{r-xl} \] hence \begin{equation} \nabla D(\lambda^{\prime})=\nabla_{\lambda}L(x(\lambda^{\prime}),\lambda ^{\prime})\quad\forall\lambda^{\prime}\in Y_{0}.\label{r-grpdl} \end{equation} Consequently, \begin{equation} \forall\lambda^{\prime}\in Y_{0}:\left[ \nabla D(\lambda^{\prime} )=0\iff\nabla_{\lambda}L\left( x(\lambda^{\prime}),\lambda^{\prime}\right) =0\iff\nabla L\left( x(\lambda^{\prime}),\lambda^{\prime}\right) =0\right] .\label{r-cppdL} \end{equation} A similar computation gives \begin{align*} \frac{\partial^{2}D(\lambda)}{\partial\lambda_{j}\partial\lambda_{k}}= & -\left\langle A_{j}A(\lambda)^{-1}b(\lambda),A(\lambda)^{-1}A_{k} A(\lambda)^{-1}b(\lambda)\right\rangle \\ & +\left\langle A_{j}A(\lambda)^{-1}b_{k}+A_{k}A(\lambda)^{-1}b_{j} ,A(\lambda)^{-1}b(\lambda)\right\rangle -\left\langle b_{j},A(\lambda )^{-1}b_{k}\right\rangle \\ & =-\left\langle A_{j}x(\lambda)-b_{j},A(\lambda)^{-1}\left( A_{k} x(\lambda)-b_{k}\right) \right\rangle \quad\forall j,k\in\overline{1,m} \end{align*} for $\lambda\in Y_{0}$. Omitting $\lambda$ $(\in Y_{0})$, for $v\in \mathbb{R}^{m}$ and $A_{v}:=\sum_{j=1}^{m}v_{j}A_{j}$, $b_{v}:=\sum_{j=1} ^{m}v_{j}b_{j}$, we get \[ \big\langle v,\nabla^{2}Dv\big\rangle=\sum\nolimits_{j,k=1}^{m}\frac {\partial^{2}D}{\partial\lambda_{j}\partial\lambda_{k}}v_{j}v_{k} =-\left\langle A_{v}A^{-1}b-b_{v},A^{-1}\left( A_{v}A^{-1}b-b_{v}\right) \right\rangle . 
\] Therefore, $\nabla^{2}D(\lambda)\preceq0$ if $\lambda\in Y^{+}$ and $\nabla^{2}D(\lambda)\succeq0$ if $\lambda\in Y^{-}$, confirming that $D$ is concave on $Y^{+}$ and convex on $Y^{-}$. \section{Quadratic minimization problems with equality constraints} As mentioned above, for $J:=\overline{1,m}$, $(P_{J})$ becomes the quadratic minimization problem $(P_{e})$ $~~\min$ $q_{0}(x)$ ~s.t. $x\in X_{e}:=X_{\overline{1,m}}$. Using the previous facts we are in a position to state and prove the following result. \begin{proposition} \label{p1}Let $(\overline{x},\overline{\lambda})\in\mathbb{R}^{n} \times\mathbb{R}^{m}$. \emph{(i)} Assume that $(\overline{x},\overline{\lambda})$ is a critical point of $L$. Then $\overline{x}\in X_{e}$, $\overline{\lambda}\in Y_{\operatorname{col}}$, and (\ref{r-qlpd}) holds; moreover, for $\overline{\lambda}\in Y_{\operatorname{col}}^{+}$ we have that \begin{equation} q_{0}(\overline{x})=\inf_{x\in X_{e}}q_{0}(x)=L(\overline{x},\overline {\lambda})=\sup_{\lambda\in Y_{\operatorname{col}}^{+}}D(\lambda )=D(\overline{\lambda}), \label{r-minmaxqe} \end{equation} while for $\overline{\lambda}\in Y_{\operatorname{col}}^{-}$ we have that \begin{equation} q_{0}(\overline{x})=\sup_{x\in X_{e}}q_{0}(x)=L(\overline{x},\overline {\lambda})=\inf_{\lambda\in Y_{\operatorname{col}}^{-}}D(\lambda )=D(\overline{\lambda}). \label{r-maxminqe} \end{equation} \emph{(ii)} Assume that $(\overline{x},\overline{\lambda})$ is a critical point of $L$ with $\overline{\lambda}\in Y_{0}$. Then $\nabla D(\overline {\lambda})=0$ and $\overline{x}=A(\overline{\lambda})^{-1}b(\overline{\lambda })$; moreover, $\overline{x}$ is the unique global minimizer of $q_{0}$ on $X_{e}$ when $\overline{\lambda}\in Y^{+}$, and $\overline{x}$ is the unique global maximizer of $q_{0}$ on $X_{e}$ when $\overline{\lambda}\in Y^{-}$. Conversely, assume that $\overline{\lambda}\in Y_{0}$ is a critical point of $D$. 
Then $(\overline{x},\overline{\lambda})$ is a critical point of $L$, where $\overline{x}=A(\overline{\lambda})^{-1}b(\overline{\lambda})$; consequently \emph{(i)} and \emph{(ii)} apply. \end{proposition} Proof. (i) Assume that $(\overline{x},\overline{\lambda})$ is a critical point of $L$; hence $\nabla_{x}L(\overline{x},\overline{\lambda})=0$ and $\nabla_{\lambda}L(\overline{x},\overline{\lambda})=0$. Using Lemma \ref{lem-qperfdual} we obtain that $\overline{\lambda}\in Y_{\operatorname{col}}$, $\overline{x}\in X_{e}$, and (\ref{r-qlpd}) holds. Assume moreover that $\overline{\lambda}\in Y_{\operatorname{col}}^{+}$. Because $L(\cdot,\overline{\lambda})$ is convex, its infimum is attained at $\overline{x}$. Therefore, for $x\in X_{e}$ we have that $q_{0}(\overline{x})=L(\overline{x},\overline{\lambda})\leq L(x,\overline{\lambda})=q_{0}(x)$, and so $q_{0}(\overline{x})=\inf_{x\in X_{e}}q_{0}(x)$. Using Proposition \ref{lem-pd}~(iii) for $J:=\overline{1,m}$ (hence $\Gamma_{J}=\mathbb{R}^{m}$), we get the last equality in (\ref{r-minmaxqe}). Hence (\ref{r-minmaxqe}) holds. The proof of (\ref{r-maxminqe}) in the case $\overline{\lambda}\in Y_{\operatorname{col}}^{-}$ is similar; an alternative proof is to apply the previous case with $q_{k}$ replaced by $-q_{k}$ for $k\in\overline{0,m}$, which changes the sign of $L$ and of $A(\cdot)$, so that $\overline{\lambda}\in Y_{\operatorname{col}}^{-}$ for the original data becomes $\overline{\lambda}\in Y_{\operatorname{col}}^{+}$ for the new ones. (ii) Assume that $(\overline{x},\overline{\lambda})$ is a critical point of $L$ with $\overline{\lambda}\in Y_{0}$. Since $A(\overline{\lambda})\overline{x}-b(\overline{\lambda})=\nabla_{x}L(\overline{x},\overline{\lambda})=0$, clearly $\overline{x}=x(\overline{\lambda})=A(\overline{\lambda})^{-1}b(\overline{\lambda})$. Using (\ref{r-grpdl}) we obtain that $\nabla D(\overline{\lambda})=\nabla_{\lambda}L(\overline{x},\overline{\lambda})=0$. Moreover, suppose that $\overline{\lambda}\in Y^{+}$.
Then $L(\cdot,\overline{\lambda})$ is strictly convex, and so $q_{0}(\overline{x})=L(\overline{x},\overline{\lambda})<L(x,\overline{\lambda})=q_{0}(x)$ for $x\in X_{e}\setminus\{\overline{x}\}$. Hence $\overline{x}$ is the unique global minimizer of $q_{0}$ on $X_{e}$. The proof in the case $\overline{\lambda}\in Y^{-}$ is similar. Conversely, let $\overline{\lambda}\in Y_{0}$ be a critical point of $D$ and take $\overline{x}:=A(\overline{\lambda})^{-1}b(\overline{\lambda});$ then $\nabla_{x}L(\overline{x},\overline{\lambda})=0$ by (\ref{r-nxL}). Using (\ref{r-d1pdq}) we obtain that $\overline{x}\in X_{e}$, and so $\nabla_{\lambda}L(\overline{x},\overline{\lambda})=0$. Therefore, $(\overline{x},\overline{\lambda})$ is a critical point of $L$. $\square$ The next example shows that $(P_{e})$ might have several solutions when $\overline{\lambda}\in Y_{\operatorname{col}}^{+}$. \begin{example} \label{ex-qe}Take $q_{0}(x,y):=xy$, $q_{1}(x,y):=\tfrac{1}{2}(x^{2}+y^{2}-1)$ for $x,y\in\mathbb{R}$. Then $L(x,y,\lambda)=xy+\tfrac{\lambda}{2}\left( x^{2}+y^{2}-1\right) $. It follows that $A(\lambda)=\left( \begin{array} [c]{ll} \lambda & 1\\ 1 & \lambda \end{array} \right) $, $b(\lambda)=\left( \begin{array} [c]{l} 0\\ 0 \end{array} \right) $, $c(\lambda)=-\tfrac{1}{2}\lambda$, $Y_{0}=\mathbb{R}\setminus\{-1,1\}$, $Y^{+}=-Y^{-}=(1,\infty)$, $Y_{\operatorname{col}}=\mathbb{R}$, $Y_{\operatorname{col}}^{+}=-Y_{\operatorname{col}}^{-}=[1,\infty)$, $D(\lambda)=-\tfrac{1}{2}\lambda$. Clearly, $D$ has no critical points, and the only critical points of $L$ are $(\pm2^{-1/2},\mp2^{-1/2},1)$ and $(\pm2^{-1/2},\pm2^{-1/2},-1)$.
For $(\pm2^{-1/2},\mp2^{-1/2},1)$ we can apply Proposition \ref{p1}~(i) with $\overline{\lambda}:=1\in Y_{\operatorname{col}}^{+}$, and so both $\pm2^{-1/2}(1,-1)$ are solutions for problem $(P_{e})$, while for $(\pm2^{-1/2},\pm2^{-1/2},-1)$ we can apply Proposition \ref{p1}~(i) with $\overline{\lambda}:=-1\in Y_{\operatorname{col}}^{-}$, and so $\pm2^{-1/2}(1,1)$ are global maximizers of $q_{0}$ on $X_{e}$. \end{example} \section{Quadratic minimization problems with equality and inequality constraints} Let us now consider the general quadratic minimization problem $(P_{J})$ introduced at the beginning of Section \ref{sec1}. To $(P_{J})$ we associate the sets \begin{gather*} Y^{J}:=\Gamma_{J}\cap Y_{0},\quad Y^{J+}:=\Gamma_{J}\cap Y^{+},\quad Y^{J-}:=(-\Gamma_{J})\cap Y^{-},\\ Y_{\operatorname{col}}^{J}:=\Gamma_{J}\cap Y_{\operatorname{col}},\quad Y_{\operatorname{col}}^{J+}:=\Gamma_{J}\cap Y_{\operatorname{col}}^{+},\quad Y_{\operatorname{col}}^{J-}:=(-\Gamma_{J})\cap Y_{\operatorname{col}}^{-}, \end{gather*} where $Y_{0}$, $Y^{+}$ and $Y^{-}$, $Y_{\operatorname{col}}$, $Y_{\operatorname{col}}^{+}$ and $Y_{\operatorname{col}}^{-}$, are defined in (\ref{r-s0}), (\ref{r-s0pm}), (\ref{r-yext}) and (\ref{r-yextp}), respectively. Unlike $Y_{0}$, $Y^{+}$, $Y^{-}$, the sets $Y^{J}$, $Y^{J+}$ and $Y^{J-}$ are (generally) not open. Because $Y^{+}$, $Y_{\operatorname{col}}^{+}$ and $Y_{\operatorname{col}}^{-}$ are convex, so are $Y^{J+}$, $Y_{\operatorname{col}}^{J+}$ and $Y_{\operatorname{col}}^{J-}$, and so $L(\cdot,\lambda)$ is (strictly) convex for $\lambda\in Y_{\operatorname{col}}^{J+}$ $(\lambda\in Y^{J+})$ and (strictly) concave for $\lambda\in Y_{\operatorname{col}}^{J-}$ $(\lambda\in Y^{J-});$ moreover, $\operatorname*{int}Y^{J+}=\operatorname*{int}Y_{\operatorname{col}}^{J+}$ (resp.~$\operatorname*{int}Y^{J-}=\operatorname*{int}Y_{\operatorname{col}}^{J-}$) provided $Y^{J+}\neq\emptyset$ (resp.~$Y^{J-}\neq\emptyset$).
As observed already, for $J=\overline{1,m}$ we have that $\Gamma _{J}=\mathbb{R}^{m}$, and so $Y^{J}$, $Y^{J+}$, $Y^{J-}$, $Y_{\operatorname{col}}^{J}$, $Y_{\operatorname{col}}^{J+}$ and $Y_{\operatorname{col}}^{J-}$ reduce to $Y_{0}$, $Y^{+}$, $Y^{-}$, $Y_{\operatorname{col}}$, $Y_{\operatorname{col}}^{+}$ and $Y_{\operatorname{col}}^{-}$, respectively. Suggested by the well known necessary optimality conditions for minimization problems with equality and inequality constraints, we say that $(\overline {x},\overline{\lambda})\in\mathbb{R}^{n}\times\mathbb{R}^{m}$ is a $J$-LKKT point of $L$ (that is a Lagrange--Karush--Kuhn--Tucker\footnote{It seems that the term Lagrange--Karush--Kuhn--Tucker multiplier was introduced by J.-P. Penot in \cite{Pen:81}.} point of $L$ with respect to $J$) if $\nabla _{x}L(\overline{x},\overline{\lambda})=0$ and \[ \textstyle\left[ \forall j\in J^{c}:\overline{\lambda}_{j}\geq0~~\wedge ~~\frac{\partial L}{\partial\lambda_{j}}(\overline{x},\overline{\lambda} )\leq0~~\wedge~~\overline{\lambda}_{j}\cdot\frac{\partial L}{\partial \lambda_{j}}(\overline{x},\overline{\lambda})=0\right] ~\wedge~\left[ \forall j\in J:\frac{\partial L}{\partial\lambda_{j}}(\overline{x} ,\overline{\lambda})=0\right] ,\label{r-kkt-lqm} \] or, equivalently, \begin{equation} \overline{x}\in X_{J}~~\wedge~~\overline{\lambda}\in\Gamma_{J}~~\wedge ~~\left[ \forall j\in J^{c}:\overline{\lambda}_{j}q_{j}(\overline {x})=0\right] ;\label{r-kkt-pqei} \end{equation} we say that $\overline{x}\in\mathbb{R}^{n}$ is a $J$-LKKT point for $(P_{J})$ if there exists $\overline{\lambda}\in\mathbb{R}^{m}$ such that $(\overline {x},\overline{\lambda})$ verifies (\ref{r-kkt-pqei}); moreover, for $D$ defined in (\ref{r-pd0}), we say that $\overline{\lambda}\in Y_{0}$ is a $J$-LKKT point for $D$ if \begin{equation} \textstyle\left[ \forall j\in J^{c}:\overline{\lambda}_{j}\geq0~~\wedge ~~\frac{\partial D}{\partial\lambda_{j}}(\overline{\lambda})\leq 
0~~\wedge~~\overline{\lambda}_{j}\cdot\frac{\partial D}{\partial\lambda_{j}}(\overline{\lambda})=0\right] ~\wedge~\left[ \forall j\in J:\frac{\partial D}{\partial\lambda_{j}}(\overline{\lambda})=0\right] .\label{r-kkt-dqm} \end{equation} Of course, when $J=\overline{1,m}$, $(\overline{x},\overline{\lambda})\in\mathbb{R}^{n}\times\mathbb{R}^{m}$ is a $J$-LKKT point of $L$ iff $\nabla L(\overline{x},\overline{\lambda})=0$, while $\overline{\lambda}\in Y_{0}$ is a $J$-LKKT point for $D$ iff $\nabla D(\overline{\lambda})=0$. \begin{remark} \label{rem-kktLkktpd}Notice that $\overline{\lambda}\in Y_{0}$ is a $J$-LKKT point of $D$ if and only if $(x(\overline{\lambda}),\overline{\lambda})$ is a $J$-LKKT point of $L$; for this just take into account (\ref{r-d1pdq}). Moreover, taking into account (\ref{r-cppdL}), if $\overline{\lambda}\in Y_{0}$ is a critical point of $D$ then $\overline{\lambda}$ is a $J$-LKKT point of $D$ and $(x(\overline{\lambda}),\overline{\lambda})$ is a $J$-LKKT point of $L$ (being a critical point of $L$). \end{remark} In general, for distinct $J$ and $J^{\prime}$, the sets of $J$-LKKT and $J^{\prime}$-LKKT points of $L$ (resp. $D$) are not comparable. For comparable $J$ and $J^{\prime}$ we have the following result, whose simple proof is omitted; its second part follows from the first one and the previous remark. \begin{lemma} \label{fact1}Let $J\subset J^{\prime}\subset\overline{1,m}$ and $(\overline{x},\overline{\lambda})\in\mathbb{R}^{n}\times\mathbb{R}^{m}$. \emph{(i)} If $(\overline{x},\overline{\lambda})$ is a $J^{\prime}$-LKKT point of $L$ and $\overline{\lambda}_{j}\geq0$ for all $j\in J^{\prime}\setminus J$, then $(\overline{x},\overline{\lambda})$ is a $J$-LKKT point of $L$. Conversely, if $(\overline{x},\overline{\lambda})$ is a $J$-LKKT point of $L$ and $\overline{\lambda}_{j}>0$ for all $j\in J^{\prime}\setminus J$, then $(\overline{x},\overline{\lambda})$ is a $J^{\prime}$-LKKT point of $L$.
\emph{(ii)} If $\overline{\lambda}\in Y_{0}$ is a $J^{\prime}$-LKKT point of $D$ and $\overline{\lambda}_{j}\geq0$ for all $j\in J^{\prime}\setminus J$, then $\overline{\lambda}$ is a $J$-LKKT point of $D$. Conversely, if $\overline{\lambda}$ is a $J$-LKKT point of $D$ and $\overline{\lambda}_{j}>0$ for all $j\in J^{\prime}\setminus J$, then $\overline{\lambda}$ is a $J^{\prime}$-LKKT point of $D$. \end{lemma} The result below corresponds to Proposition \ref{p1}. \begin{proposition} \label{p1ei}Let $(\overline{x},\overline{\lambda})\in\mathbb{R}^{n}\times\mathbb{R}^{m}$. \emph{(i)} Assume that $(\overline{x},\overline{\lambda})$ is a $J$-LKKT point of $L$. Then $\overline{x}$ is a $J$-LKKT point of $(P_{J})$, $\overline{x}\in X_{J}$, $\overline{\lambda}\in Y_{\operatorname{col}}^{J}$, and (\ref{r-qlpd}) holds; moreover, if $\overline{\lambda}\in Y_{\operatorname{col}}^{J+}$ then \begin{equation} q_{0}(\overline{x})=\inf_{x\in X_{J}}q_{0}(x)=L(\overline{x},\overline{\lambda})=\sup_{\lambda\in Y_{\operatorname{col}}^{J+}}D(\lambda)=D(\overline{\lambda}). \label{r-minmaxqei} \end{equation} \emph{(ii) }Assume that $(\overline{x},\overline{\lambda})$ is a $J$-LKKT point of $L$ with $\overline{\lambda}\in Y_{0}$ (or, equivalently, $\overline{\lambda}\in Y^{J}$). Then $\overline{x}=A(\overline{\lambda})^{-1}b(\overline{\lambda})$, and $\overline{\lambda}$ is a $J$-LKKT point of $D$; moreover, $\overline{x}$ is the unique global minimizer of $q_{0}$ on $X_{J}$ if $\overline{\lambda}\in Y^{J+}$. Conversely, assume that $\overline{\lambda}\in Y_{0}$ is a $J$-LKKT point of $D$. Then $(\overline{x},\overline{\lambda})$ is a $J$-LKKT point of $L$, where $\overline{x}:=A(\overline{\lambda})^{-1}b(\overline{\lambda})$. Consequently, \emph{(i)} and \emph{(ii)} apply. \emph{(iii)} Assume that $\overline{\lambda}\in Y^{J+}$.
Then \[ D(\overline{\lambda})=\sup_{\lambda\in Y_{\operatorname{col}}^{J+}} D(\lambda)\Longleftrightarrow D(\overline{\lambda})=\sup_{\lambda\in Y^{J+}}D(\lambda)\Longleftrightarrow\overline{\lambda}\text{ is a $J$-LKKT point of $D$.} \] \end{proposition} Proof. (i) By hypothesis, (\ref{r-kkt-pqei}) holds. The fact that $\overline{x}$ is a $J$-LKKT point of $(P_{J})$ is obvious from its very definition; hence $\overline{x}\in X_{J}$. On the other hand, because $(\overline{x},\overline{\lambda})$ is a $J$-LKKT point of $L$ we have that $\overline{\lambda}\in Y_{\operatorname{col}}^{J}$ and (\ref{r-qlpd}) holds by Lemma \ref{lem-qperfdual}. Assume that $\overline{\lambda}\in Y_{\operatorname{col}}^{J+}$ $(=\Gamma_{J}\cap Y_{\operatorname{col}}^{+})$. The last equality in (\ref{r-minmaxqei}) follows from Proposition \ref{lem-pd}~(iii). Because $L(\cdot,\overline{\lambda})$ is convex, its infimum is attained at $\overline{x}$. Therefore, for $x\in X_{J}$ we have that \[ q_{0}(\overline{x})=L(\overline{x},\overline{\lambda})\leq L(x,\overline{\lambda})=q_{0}(x)+\sum\nolimits_{j=1}^{m}\overline{\lambda}_{j}q_{j}(x)\leq q_{0}(x), \label{r-siqei} \] whence $q_{0}(\overline{x})=\inf_{x\in X_{J}}q_{0}(x)$. Hence (\ref{r-minmaxqei}) holds. (ii) Because $(\overline{x},\overline{\lambda})$ is a $J$-LKKT point of $L$ with $\overline{\lambda}\in Y_{0}$, we have that $A(\overline{\lambda})\overline{x}-b(\overline{\lambda})=\nabla_{x}L(\overline{x},\overline{\lambda})=0$, and so $\overline{x}=x(\overline{\lambda})$. As observed in Remark \ref{rem-kktLkktpd}, (\ref{r-kkt-dqm}) is verified. Suppose now, moreover, that $\overline{\lambda}\in Y^{+}$ (and so $\overline{\lambda}\in Y^{J+}$). Then $L(\cdot,\overline{\lambda})$ is strictly convex, and so $q_{0}(\overline{x})=L(\overline{x},\overline{\lambda})<L(x,\overline{\lambda})\leq q_{0}(x)$ for $x\in X_{J}\setminus\{\overline{x}\}$. Hence $\overline{x}$ is the unique global minimizer of $q_{0}$ on $X_{J}$.
Conversely, let $\overline{\lambda}\in Y_{0}$ be a $J$-LKKT point of $D$, and take $\overline{x}:=x(\overline{\lambda});$ then $(\overline{x},\overline {\lambda})$ is a $J$-LKKT point of $L$ by Remark \ref{rem-kktLkktpd}. (iii) If $\overline{\lambda}$ is a $J$-LKKT point of $D$, we have that $D(\overline{\lambda})=\sup_{\lambda\in Y_{\operatorname{col}}^{J+}} D(\lambda)$ by Remark \ref{rem-kktLkktpd} and (i), while $D(\overline{\lambda })=\sup_{\lambda\in Y_{\operatorname{col}}^{J+}}D(\lambda)$ implies $D(\overline{\lambda})=\sup_{\lambda\in Y^{J+}}D(\lambda)$ because $Y^{J+}\subset Y_{\operatorname{col}}^{J+}$. Assume that $D(\overline{\lambda })=\sup_{\lambda\in Y^{J+}}D(\lambda)$. Setting $Q:=-D$, we have that $Q$ is convex and $\overline{\lambda}$ is a global minimizer of $Q$ on (the convex set) $Y^{J+}$. Using \cite[Prop.\ 4]{Zal:89} we have that \[ 0\leq Q^{\prime}(\overline{\lambda},\lambda-\overline{\lambda}):=\lim _{t\rightarrow0+}\frac{Q(\overline{\lambda}+t(\lambda-\overline{\lambda }))-Q(\overline{\lambda})}{t}=\left\langle \lambda-\overline{\lambda},\nabla Q(\overline{\lambda})\right\rangle \quad\forall\lambda\in Y^{J+}. \] It follows that $\left\langle y,v\right\rangle \leq0$ for all $y\in \mathbb{R}_{+}(Y^{J+}-\overline{\lambda})$, where $v:=\nabla D(\overline {\lambda})$. Because $\Gamma_{J}$ and $Y^{+}$ are convex sets, $Y^{J+} =\Gamma_{J}\cap Y^{+}$, and $\overline{\lambda}\in\operatorname*{int} Y^{+}=Y^{+}$, we have that \begin{align*} \mathbb{R}_{+}(Y^{J+}-\overline{\lambda}) & =\mathbb{R}_{+}\left[ (\Gamma_{J}-\overline{\lambda})\cap(Y^{+}-\overline{\lambda})\right] =\mathbb{R}_{+}(\Gamma_{J}-\overline{\lambda})\\ & =\left\{ \mu\in\mathbb{R}^{m}\mid\forall j\in J^{c}:\overline{\lambda} _{j}=0\Rightarrow\mu_{j}\geq0\right\} . 
\end{align*} Therefore, $\frac{\partial D}{\partial\lambda_{j}}(\overline{\lambda})=v_{j}=0$ for $j\in J\cup\{j\in J^{c}\mid\overline{\lambda}_{j}>0\}$ and $\frac{\partial D}{\partial\lambda_{j}}(\overline{\lambda})=v_{j}\leq0$ for $j\in\{j^{\prime}\in J^{c}\mid\overline{\lambda}_{j^{\prime}}=0\}$. This shows that condition (\ref{r-kkt-dqm}) is verified. $\square$ \begin{corollary} \label{fact2}Let $\emptyset\neq J\subset\overline{1,m}$ and let $(\overline{x},\overline{\lambda})\in\mathbb{R}^{n}\times\mathbb{R}^{m}$ be a $J$-LKKT point of $L$ such that $A(\overline{\lambda})\succeq0;$ hence $\overline{x}\in X_{J}$, $\overline{\lambda}\in Y_{\operatorname{col}}^{J+}$ and (\ref{r-minmaxqei}) holds. If $J_{\geq}:=\{j\in J\mid\overline{\lambda}_{j}\geq0\}$ is nonempty, then $(\overline{x},\overline{\lambda})$ is a $(J\setminus J_{\geq})$-LKKT point of $L$, and so $\overline{x}$ is a global minimizer of $q_{0}$ on $X_{J\setminus J_{\geq}}\supset X_{J}$. \end{corollary} Proof. The first assertion holds by Proposition \ref{p1ei}~(i) because $\overline{\lambda}\in Y_{\operatorname{col}}^{J+}$. Concerning the second assertion, it is sufficient to observe that for $j\in J^{c}\cup J_{\geq}=(J\setminus J_{\geq})^{c}$ we have that $\overline{\lambda}_{j}\geq0$, and $\overline{\lambda}_{j}\cdot\frac{\partial L}{\partial\lambda_{j}}(\overline{x},\overline{\lambda})=0$ by the definition of a $J$-LKKT point of $L$, then to apply Proposition \ref{p1ei}~(i) for $J$ replaced by $J\setminus J_{\geq}$. $\square$ \begin{corollary} \label{fact3}If $(\overline{x},\overline{\lambda})\in\mathbb{R}^{n}\times\mathbb{R}^{m}$ is a critical point of $L$ (in particular if $\overline{\lambda}\in Y_{0}$ is a critical point of $D$ and $\overline{x}:=x(\overline{\lambda})$), then $(\overline{x},\overline{\lambda})$ is a $J^{c}$-LKKT point of $L$, where $J:=\{j\in\overline{1,m}\mid\overline{\lambda}_{j}\geq0\}$.
Consequently, if moreover $A(\overline{\lambda})\succeq0$, then $\overline{x}$ $(\in X_{e})$ is a global minimizer of $q_{0}$ on $X_{J^{c}}\supset X_{e}$. \end{corollary} Proof. Apply Corollary \ref{fact2} for $J:=\overline{1,m}$. $\square$ The next result is the variant of Proposition \ref{p1ei} for maximizing $q_{0}$ on $X_{J}$. \begin{proposition} \label{p1eimax}Let $(\overline{x},\overline{\lambda})\in\mathbb{R}^{n}\times\mathbb{R}^{m}$. \emph{(i)} Assume that $\nabla_{x}L(\overline{x},\overline{\lambda})=0$ and the condition \begin{equation} \textstyle \left[ \forall j\in J^{c}:\overline{\lambda}_{j}\leq0~~\wedge~~\frac{\partial L}{\partial\lambda_{j}}(\overline{x},\overline{\lambda})\leq0~~\wedge~~\overline{\lambda}_{j}\cdot\frac{\partial L}{\partial\lambda_{j}}(\overline{x},\overline{\lambda})=0\right] ~\wedge~\left[ \forall j\in J:\frac{\partial L}{\partial\lambda_{j}}(\overline{x},\overline{\lambda})=0\right] \label{r-lkktmax} \end{equation} is verified. Then $\overline{x}\in X_{J}$, $\overline{\lambda}\in Y_{\operatorname{col}}$, and \begin{equation} \textstyle \left[ \forall j\in J^{c}:\overline{\lambda}_{j}\leq0~~\wedge~~\frac{\partial D}{\partial\lambda_{j}}(\overline{\lambda})\leq0~~\wedge~~\overline{\lambda}_{j}\cdot\frac{\partial D}{\partial\lambda_{j}}(\overline{\lambda})=0\right] ~\wedge~\left[ \forall j\in J:\frac{\partial D}{\partial\lambda_{j}}(\overline{\lambda})=0\right] ; \label{r-qlpdmax} \end{equation} moreover, if $\overline{\lambda}\in Y_{\operatorname{col}}^{-}$ (or equivalently $\overline{\lambda}\in Y_{\operatorname{col}}^{J-}$), then \[ q_{0}(\overline{x})=\sup_{x\in X_{J}}q_{0}(x)=L(\overline{x},\overline{\lambda})=\inf_{\lambda\in Y_{\operatorname{col}}^{J-}}D(\lambda)=D(\overline{\lambda}). \] \emph{(ii)} Assume that $\overline{\lambda}\in Y_{0}$, $\nabla_{x}L(\overline{x},\overline{\lambda})=0$ and $(\overline{x},\overline{\lambda})$ verifies (\ref{r-lkktmax}).
Then $\overline{x}=x(\overline{\lambda})$, and $\overline{\lambda}$ verifies condition (\ref{r-qlpdmax}); moreover, $\overline{x}$ is the unique global maximizer of $q_{0}$ on $X_{J}$ if $\overline{\lambda}\in Y^{J-}$. \emph{(iii)} Assume that $\overline{\lambda}\in Y^{J-}$. Then \[ D(\overline{\lambda})=\inf_{\lambda\in Y^{J-}}D(\lambda)\Longleftrightarrow D(\overline{\lambda})=\inf_{\lambda\in Y_{\operatorname{col}}^{J-}} D(\lambda)\Longleftrightarrow\overline{\lambda}\text{ verifies condition (\ref{r-qlpdmax}).} \] \end{proposition} The proof of the above result is an easy adaptation of the proof of Proposition \ref{p1ei}, so we omit it. \section{Quadratic minimization problems with inequality constraints} We consider now the particular case of $(P_{J})$ in which $J=\emptyset;$ the problem is denoted by $(P_{i})$ and the set of its feasible solutions by $X_{i}$. In this case $\Gamma_{J}=\mathbb{R}_{+}^{m}$, and the sets $Y^{J}$, $Y^{J+}$, $Y^{J-}$, $Y_{\operatorname{col}}^{J}$, $Y_{\operatorname{col}}^{J+}$ and $Y_{\operatorname{col}}^{J-}$ are denoted by $Y^{i}$, $Y^{i+}$, $Y^{i-}$, $Y_{\operatorname{col}}^{i}$, $Y_{\operatorname{col}}^{i+}$ and $Y_{\operatorname{col}}^{i-}$, respectively. Moreover, in this situation we shall use KKT instead of $J$-LKKT.
So, we say that $(\overline{x},\overline{\lambda})\in\mathbb{R}^{n}\times\mathbb{R}^{m}$ is a Karush--Kuhn--Tucker point of $L$ if $\nabla_{x}L(\overline{x},\overline{\lambda})=0$ and \begin{equation} \overline{\lambda}\in\mathbb{R}_{+}^{m}~~\wedge~~\nabla_{\lambda}L(\overline{x},\overline{\lambda})\in\mathbb{R}_{-}^{m}~~\wedge~~\big\langle\overline{\lambda},\nabla_{\lambda}L(\overline{x},\overline{\lambda})\big\rangle=0,\label{r-kkt-lq} \end{equation} or, equivalently, \begin{equation} \overline{x}\in X_{i}~~\wedge~~\overline{\lambda}\in\mathbb{R}_{+}^{m}~~\wedge~~\left[ \forall j\in\overline{1,m}:\overline{\lambda}_{j}q_{j}(\overline{x})=0\right] ;\label{r-kkt-pqi} \end{equation} we say that $\overline{x}$ is a KKT point for $(P_{i})$ if there exists $\overline{\lambda}\in\mathbb{R}^{m}$ such that (\ref{r-kkt-pqi}) holds; we say that $\overline{\lambda}\in Y_{0}$ is a KKT point for $D$ if \begin{equation} \overline{\lambda}\in\mathbb{R}_{+}^{m}~~\wedge~~\nabla D(\overline{\lambda})\in\mathbb{R}_{-}^{m}~~\wedge~~\big\langle\overline{\lambda},\nabla D(\overline{\lambda})\big\rangle=0.\label{r-kkt-dqi} \end{equation} Proposition \ref{p1ei} becomes the next result when $J=\emptyset$. \begin{proposition} \label{p1i}Let $(\overline{x},\overline{\lambda})\in\mathbb{R}^{n}\times\mathbb{R}^{m}$. \emph{(i)} Assume that $(\overline{x},\overline{\lambda})$ is a KKT point of $L$. Then $\overline{x}$ is a KKT point of $(P_{i})$, and so $\overline{x}\in X_{i}$, $\overline{\lambda}\in Y_{\operatorname{col}}^{i}$, and (\ref{r-qlpd}) holds; moreover, for $\overline{\lambda}\in Y_{\operatorname{col}}^{i+}$ we have that \begin{equation} q_{0}(\overline{x})=\inf_{x\in X_{i}}q_{0}(x)=L(\overline{x},\overline{\lambda})=\sup_{\lambda\in Y_{\operatorname{col}}^{i+}}D(\lambda)=D(\overline{\lambda}). \label{r-minmaxqi} \end{equation} \emph{(ii)} Assume that $(\overline{x},\overline{\lambda})$ is a KKT point of $L$ with $\overline{\lambda}\in Y_{0}$.
Then $\overline{x}=x(\overline{\lambda})$ and $\overline{\lambda}$ is a KKT point of $D$; moreover, $\overline{x}$ is the unique global minimizer of $q_{0}$ on $X_{i}$ provided $\overline{\lambda}\in Y^{i+}$. Conversely, assume that $\overline{\lambda}\in Y_{0}$ is a KKT point of $D$. Then $(\overline{x},\overline{\lambda})$ is a KKT point of $L$, where $\overline{x}:=A(\overline{\lambda})^{-1}b(\overline{\lambda})$. \emph{(iii)} Assume that $\overline{\lambda}\in Y^{i+}$. Then \[ D(\overline{\lambda})=\sup_{\lambda\in Y_{\operatorname{col}}^{i+}} D(\lambda)\Longleftrightarrow D(\overline{\lambda})=\sup_{\lambda\in Y^{i+} }D(\lambda)\Longleftrightarrow\overline{\lambda}\text{ is a KKT point of $D$.} \] \end{proposition} \begin{remark} \label{r-jrw}Jeyakumar, Rubinov and Wu (see \cite[Prop.~3.2]{JeyRubWu:07}) proved that $\overline{x}$ is a (global) solution of $(P_{i})$ when there exists $\overline{\lambda}\in Y_{\operatorname{col}}^{i}$ such that $(\overline{x},\overline{\lambda})$ is a KKT point of $L$; this result was established previously by Hiriart-Urruty in \cite[Th.~4.6]{Hir:98} when $m=2$. \end{remark} \begin{remark} \label{rem-adv}Having in view Propositions \ref{p1}, \ref{p1ei} and \ref{p1i}, it is more advantageous to use their assertions (i) than the second parts of their assertions (ii) with $\overline{\lambda}\in Y_{0}$: in assertions (i) one needs to know only the Lagrangian (hence only the data of the problem), and one obtains both $\overline{x}$ and $\overline{\lambda}$ without having to compute $D$ explicitly, then to determine $\overline{\lambda}$ (and, after that, $\overline{x}$). Using $D$ could be useful, perhaps, when the number of constraints is much smaller than $n$. As seen in the proofs, the consideration of the dual function is not essential for finding the optimal solutions of the primal problem(s).
\end{remark} \section{Comparisons with results on quadratic optimization problems obtained by using CDT\label{sec4}} In this section we analyze results obtained by DY Gao and his collaborators in papers dedicated to quadratic optimization problems, or obtained as particular cases of more general results. To identify the papers in which quadratic problems are considered, we mainly looked at survey papers such as \cite{Gao:03a} and \cite{Gao:09} (which is almost the same as \cite{Gao:08}, both of them being cited in Gao's papers), \cite{GaoShe:09} (which is very similar to \cite{Gao:09b}), as well as at the recent book \cite{GaoLatRua:17}. We present the results in chronological order, using our notation (when possible) and equivalent formulations; however, we sometimes quote the original formulations in order to convey the flavor of those papers. When we lack notation for some sets we introduce it, often as in the respective papers; similarly for some notions. Because $c_{0}$ in the definition of $q_{0}$ may always be taken to be $0$, we shall not mention it in the sequel. Before beginning our analysis, we consider it worth keeping in view the following remark from the very recent paper \cite{RuaGao:17b}, and observing that Propositions \ref{p1}, \ref{p1ei} and \ref{p1i} contain no assumption that some multiplier $\overline{\lambda}_{j}$ be nonzero. \begin{quote} ``\emph{Remark 1}. As we have demonstrated that by the generalized canonical duality (32), all KKT conditions can be recovered for both equality and inequality constraints. Generally speaking, the nonzero Lagrange multiplier condition for the linear equality constraint is usually ignored in optimization textbooks. But it can not be ignored for nonlinear constraints. It is proved recently [26] that the popular augmented Lagrange multiplier method can be used mainly for linear constrained problems.
Since the inequality constraint $\mu\not =0$ produces a nonconvex feasible set $\mathcal{E}_{a}^{\ast}$, this constraint can be replaced by either $\mu<0$ or $\mu>0$. But the condition $\mu<0$ is corresponding to $y\circ(y-e_{K})\geq0$, this leads to a nonconvex open feasible set for the primal problem. By the fact that the integer constraints $y_{i}(y_{i}-1)=0$ are actually a special case (boundary) of the boxed constraints $0\leq y_{i}\leq1$, which is corresponding to $y\circ(y-e_{K})\geq0$, we should have $\mu>0$ (see [8] and [12, 16]). In this case, the KKT condition (43) should be replaced by $\mu>0,~~y\circ(y-e_{K})\leq0,~~\mu^{T}[y\circ(y-e_{K})]=0.\quad$ (47) \noindent Therefore, as long as $\mu\neq0$ is satisfied, the complementarity condition in (47) leads to the integer condition $y\circ(y-e_{K})=0$. Similarly, the inequality $\tau\neq0$ can be replaced by $\tau>0$." \end{quote} Notice that in many papers (co-)authored by DY Gao, mostly those made public in the last five years, the multipliers corresponding to nonlinear constraints (but not only those) are assumed to be positive. So, in most cases Eq.~(\ref{r-2c}) is true. Moreover, it is worth observing that $\overline{x}\in X_{J}$ is a local minimizer as well as a local maximizer of $q_{0}$ on $X_{J}$ whenever $X_{J}$ is a finite set; this is the case in many optimization problems mentioned in this section. The quadratic problem considered by Gao in \cite[Sect.~5.1]{Gao:03a} is of type $(P_{i})$ in which $A_{1}:=I_{n}:=\operatorname*{diag}e$ with $e:=(1,...,1)^{T}\in\mathbb{R}^{n}$, $b_{1}=0$, $c_{1}<0$, and $A_{j}=0$ for $j\in\overline{2,m}$. Below, $X_{i1}:=\{x\in X_{i}\mid q_{1}(x)=0\}$ and $Y_{1}^{i+}:=\{\lambda\in Y^{i+}\mid\lambda_{1}>0\}$. Theorem 4 in \cite{Gao:03a} (attributed to \cite{Gao:03b}) asserts: \emph{Let $\overline{\lambda}\in Y^{i}$ be a KKT point of $D$ and $\overline{x}:=x(\overline{\lambda})$.
Then $\overline{x}$ is a KKT point of $(P_{i})$ and $q_{0}(\overline{x})=D(\overline{\lambda})$.} Theorem 6 in \cite{Gao:03a} asserts: \emph{Assume that $A_{0}$ has at least one negative eigenvalue and $(\overline{x},\overline{\lambda})$ is a KKT point of $L$. If $\overline{\lambda}\in Y_{1}^{i+}$, then $\overline{x}\in X_{i1}$ and $q_{0}(\overline{x})=\min_{x\in X_{i1}}q_{0}(x)=\max_{\lambda\in Y_{1}^{i+}}D(\lambda)=D(\overline{\lambda})$. If $\overline{\lambda}\in\mathbb{R}_{+}^{m}\cap Y^{-}$ then $q_{0}(\overline{x})=\max_{x\in X_{i}}q_{0}(x)=\max_{\lambda\in\mathbb{R}_{+}^{m}\cap Y^{-}}D(\lambda)=D(\overline{\lambda})$. } Clearly, the conclusion of \cite[Th.~4]{Gao:03a} follows from Proposition \ref{p1i} (ii) and (i). Let us look at \cite[Th.~6]{Gao:03a}. Because $(\overline{x},\overline{\lambda})$ is a KKT point of $L$ with $\overline{\lambda}\in Y_{1}^{i+}\subset Y^{i+}$, (\ref{r-minmaxqi}) holds. Moreover, because $\overline{\lambda}_{1}>0$, it follows that $q_{1}(\overline{x})=0$, hence $\overline{x}\in X_{i1}$ $(\subset X_{i})$; thus the first assertion of \cite[Th.\ 6]{Gao:03a} holds, but (\ref{r-minmaxqi}) is stronger. Consider now the particular case in which $b_{j}=0$ and $c_{j}=0$ for $j\in\overline{2,m}$ (or, equivalently, $m=1$); in this case the preceding problem becomes a \textquotedblleft quadratic programming problem over a sphere\textquotedblright, considered in \cite[Sect.~6]{Gao:03a}. Assume that $Y^{-}\ni\overline{\lambda}=\overline{\lambda}_{1}>0$. Then $\nabla D(\overline{\lambda})=0$, and so $\overline{x}\in X_{e}$. Using Proposition \ref{p1} we get \[ \max_{x\in X_{i}}q_{0}(x)\geq q_{0}(\overline{x})=\max_{x\in X_{e}} q_{0}(x)=\min_{\lambda\in Y^{-}}D(\lambda)=\min_{\lambda\in Y_{\operatorname{col}}^{-}}D(\lambda)=D(\overline{\lambda}), \] which does not agree with the second assertion of \cite[Th.~6]{Gao:03a} because $\mathbb{R}_{+}\cap Y^{-}\subset Y^{-}\subset Y_{\operatorname{col}}^{-}$.
\begin{example} \label{ex-gao03}Let $n=1$, $q_{0}(x)=-\tfrac{1}{2}(x^{2}+x)$ and $q_{1}(x)=\tfrac{1}{2}(x^{2}-1)$. It follows that $X_{e}=\{-1,1\}$, $X_{i}=[-1,1]$, $Y^{+}=(1,\infty)=Y^{i+}$ and $Y^{-}=(-\infty,1)\supset\lbrack0,1)=\mathbb{R}_{+}\cap Y^{-}$. In this case we have that $A(\lambda)=\lambda-1$, $b(\lambda)=\tfrac{1}{2}$, $c(\lambda)=-\frac{\lambda}{2}$, $L(x,\lambda)=\frac{\lambda-1}{2}x^{2}-\tfrac{1}{2}x-\frac{\lambda}{2}$, $\nabla L(x,\lambda)=\left( (\lambda-1)x-\tfrac{1}{2},\tfrac{1}{2}x^{2}-\tfrac{1}{2}\right) $, $\nabla L(x,\lambda)=0$ $\Leftrightarrow$ $(x,\lambda)\in\left\{ (-1,\tfrac{1}{2}),(1,\tfrac{3}{2})\right\} $, $D(\lambda)=\frac{1}{8(1-\lambda)}-\frac{\lambda}{2}$. For $(\overline{x},\overline{\lambda})=(1,\tfrac{3}{2})$ we have that \[ q_{0}(\overline{x})=\min_{x\in X_{i}}q_{0}(x)=\max_{\lambda\in Y_{\operatorname{col}}^{i+}}D(\lambda)=D(\overline{\lambda}), \] which confirms the first assertion of \cite[Th.~6]{Gao:03a}, while for $(\overline{x},\overline{\lambda})=(-1,\tfrac{1}{2})$ we have that \[ \tfrac{1}{8}=\max_{x\in\lbrack-1,1]}q_{0}(x)>0=q_{0}(-1)=\max_{x\in\{-1,1\}}q_{0}(x)=\min_{\lambda\in\lbrack0,1)}D(\lambda)=D(\tfrac{1}{2})<\sup_{\lambda\in\lbrack0,1)}D(\lambda)=\infty. \] This shows that the second assertion of \cite[Th.~6]{Gao:03a} is false. \end{example} Of course, in \cite[Th.~6]{Gao:03a} there is no need to assume $A$ (i.e.\ our $A_{0}$) \textquotedblleft has at least one negative eigenvalue\textquotedblright; probably this hypothesis was added in order that problem $(\mathcal{P}_{\lambda})$ not be a convex one.
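The computations in Example \ref{ex-gao03} are easily checked symbolically. The following sketch is an editorial verification aid, not part of the argument; it assumes the convention $q_{j}(x)=\tfrac{1}{2}\langle A_{j}x,x\rangle-\langle b_{j},x\rangle+c_{j}$ used throughout, and recovers the critical points of $L$, the dual function $D$, and the blow-up of $D$ as $\lambda\to1^{-}$:

```python
import sympy as sp

x, lam = sp.symbols("x lambda", real=True)

# Data of Example ex-gao03 (n = m = 1): q0(x) = -(x^2 + x)/2, q1(x) = (x^2 - 1)/2.
q0 = -(x**2 + x) / 2
q1 = (x**2 - 1) / 2
L = q0 + lam * q1  # Lagrangian L(x, lambda)

# Critical points of L: expect (x, lambda) in {(-1, 1/2), (1, 3/2)}.
crit = sp.solve([sp.diff(L, x), sp.diff(L, lam)], [x, lam], dict=True)

# Dual function D(lambda) = L(x(lambda), lambda) with x(lambda) = 1/(2(lambda - 1)).
x_lam = sp.solve(sp.diff(L, x), x)[0]
D = sp.simplify(L.subs(x, x_lam))  # expect 1/(8(1 - lambda)) - lambda/2

# D(3/2) = q0(1) = -1 and D(1/2) = q0(-1) = 0; moreover D(lambda) -> +oo
# as lambda -> 1-, so the supremum of D over [0, 1) is indeed +infinity.
```

In particular, the one-sided limit $\lim_{\lambda\to1^{-}}D(\lambda)=+\infty$ confirms the value $\sup_{\lambda\in[0,1)}D(\lambda)=\infty$ appearing in the last display of the example.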
The problems considered by DY Gao in his survey papers \cite[Sect.~4]{Gao:08} and \cite[Sect.~4]{Gao:09} (which are almost the same) refer to the \textquotedblleft box constrained problem\textquotedblright\ (\cite{Gao:07}, \cite{GaoRua:10}), \textquotedblleft integer programming\textquotedblright\ (\cite{FanGaoSheWu:08}, \cite{Gao:07}, \cite{GaoRua:10}, \cite{WanFanGaoXin:08}), \textquotedblleft mixed integer programming with fixed charge\textquotedblright\ (\cite{GaoRuaShe:10}) and \textquotedblleft quadratic constraints\textquotedblright\ (\cite{GaoRuaShe:09}). In these survey papers the results are stated without proofs, and their statements are generally different from the corresponding ones in the papers mentioned above; moreover, for some results the statements differ between the two survey papers, even if the wording is almost the same. We shall mention those results from \cite[Sect.~4]{Gao:08} and/or \cite[Sect.~4]{Gao:09} which do not have equivalent statements in other papers. It seems that the first paper dedicated completely to quadratic problems with quadratic equality constraints using CDT is \cite{FanGaoSheWu:08}, even if \cite{Gao:07} was published earlier; note that \cite{FanGaoSheWu:08} is cited in \cite{Gao:07} as Ref.~6 with a slightly different title (see also Ref. Fang SC, Gao DY, Sheu RL, Wu SY (2007a) in \cite{GaoRua:08}). The problems considered by Fang, Gao, Sheu and Wu in \cite{FanGaoSheWu:08} are of type $(P_{e})$ with $m=n$. Setting $e_{j}:=(\delta_{jk})_{k\in\overline{1,n}}\in\mathbb{R}^{n}$, one has $A_{j}:=2\operatorname*{diag}e_{j}$, $b_{j}:=e_{j}$, $c_{j}:=0$ for $j\in\overline{1,n}$. Of course, $X_{e}=\{0,1\}^{n}$. Theorem 1 in \cite{FanGaoSheWu:08} asserts: \emph{Let $\overline{\lambda}\in Y_{0}\cap\mathbb{R}_{++}^{n}$ be a critical point of $D$ and $\overline{x}:=x(\overline{\lambda})$. Then $\overline{x}$ is a KKT point for problem $(P_{e})$ and $q_{0}(\overline{x})=D(\overline{\lambda})$.
} Theorem 2 in \cite{FanGaoSheWu:08} asserts: \emph{Let $\overline{\lambda}\in Y_{0}\cap\mathbb{R}_{--}^{n}$ be a critical point of $D$ and $\overline{x}:=x(\overline{\lambda})$. Then $\overline{x}$ is a KKT point for the problem $(\mathcal{P}_{\max})$ of maximizing $q_{0}$ on $X_{e}$ and $q_{0}(\overline{x})=D(\overline{\lambda})$. } Theorem 3 in \cite{FanGaoSheWu:08} asserts: \emph{Let $\overline{\lambda}\in Y_{0}$ be a critical point of $D$ and $\overline{x}:=x(\overline{\lambda})$. } \emph{(a) If $\overline{\lambda}\in\mathcal{S}_{\natural}^{+}:=Y^{+}\cap\mathbb{R}_{++}^{n}$, then $q_{0}(\overline{x})=\min_{x\in X_{e}}q_{0}(x)=\max_{\lambda\in\mathcal{S}_{\natural}^{+}}D(\lambda)=D(\overline{\lambda})$. } \emph{(b) If $\overline{\lambda}\in\mathcal{S}_{\natural}^{-}:=Y^{-}\cap\mathbb{R}_{++}^{n}$, then in a neighborhood $\mathcal{X}_{0}\times\mathcal{S}_{0}\subset X_{e}\times\mathcal{S}_{\natural}^{-}$ of $(\overline{x},\overline{\lambda})$, $q_{0}(\overline{x})=\min_{x\in\mathcal{X}_{0}}q_{0}(x)=\min_{\lambda\in\mathcal{S}_{0}}D(\lambda)=D(\overline{\lambda})$. } \emph{(c) If $\overline{\lambda}\in\mathcal{S}_{\flat}^{-}:=Y^{-}\cap\mathbb{R}_{--}^{n}$, then $q_{0}(\overline{x})=\max_{x\in X_{e}}q_{0}(x)=\min_{\lambda\in\mathcal{S}_{\flat}^{-}}D(\lambda)=D(\overline{\lambda})$. } \emph{(d) If $\overline{\lambda}\in\mathcal{S}_{\flat}^{+}:=Y^{+}\cap\mathbb{R}_{--}^{n}$, then in a neighborhood $\mathcal{X}_{0}\times\mathcal{S}_{0}\subset X_{e}\times\mathcal{S}_{\flat}^{+}$ of $(\overline{x},\overline{\lambda})$, $q_{0}(\overline{x})=\max_{x\in\mathcal{X}_{0}}q_{0}(x)=\max_{\lambda\in\mathcal{S}_{0}}D(\lambda)=D(\overline{\lambda})$.
} Using Proposition \ref{p1} for $\overline{\lambda}\in Y_{0}$ with $\nabla D(\overline{\lambda})=0$ we have: (i) $q_{0}(\overline{x})=D(\overline{\lambda})$ without supplementary conditions on $\overline{\lambda};$ (ii) because $\mathcal{S}_{\natural}^{+}\subset Y^{+}$, Eq.~(\ref{r-minmaxqe}) is stronger than the minmax relation in (a); (iii) because $\mathcal{S}_{\flat}^{-}\subset Y^{-}$, Eq.~(\ref{r-maxminqe}) is stronger than the maxmin relation in (c); (iv) because $q_{0}$ is locally constant on $X_{e}$, (b) and (d) are true, but their conclusions are much weaker than those provided by Eq.~(\ref{r-maxminqe}) and Eq.~(\ref{r-minmaxqe}), respectively. The quadratic problem $(\mathcal{P}_{b})$ considered by Gao in \cite[Th.~4]{Gao:07} is of type $(P_{e})$ in which $m\geq n$, $q_{0}(x):=-\tfrac{1}{2}\left\Vert Ax-c\right\Vert ^{2}$ for some $A\in\mathbb{R}^{p\times n}$ and $c\in\mathbb{R}^{p}$, $A_{j}:=\operatorname*{diag}e_{j}$, $b_{j}:=0$, $c_{j}:=-\tfrac{1}{2}$ for $j\in\overline{1,n}$, $A_{j}=0$ for $j\in\overline{n+1,m}$; hence $X_{e}\subset\{-1,1\}^{n};$ problem $(\mathcal{P}_{bo})$ is $(\mathcal{P}_{b})$ in the case $m=n$. The problem of maximizing $D$ on $\mathcal{S}_{b}:=Y_{0}\cap(\mathbb{R}_{++}^{n}\times\mathbb{R}^{m-n})$ is denoted by $(\mathcal{P}_{b}^{d})$ in the general case, and by $(\mathcal{P}_{bo}^{d})$ for $m=n$ (when $\mathcal{S}_{b}:=Y_{0}\cap\mathbb{R}_{++}^{n}$). Theorem 4 in \cite{Gao:07} asserts: \emph{Let $\overline{\lambda}\in\mathcal{S}_{b}$ be \textquotedblleft a critical point of $(\mathcal{P}_{b}^{d})$\textquotedblright\ and $\overline{x}:=x(\overline{\lambda})$. Then $\overline{x}$ \textquotedblleft is a critical point of $(\mathcal{P}_{b})$" and $q_{0}(\overline{x})=D(\overline{\lambda})$. Moreover, if $\overline{\lambda}\in\mathcal{S}_{b}^{+}:=Y^{+}\cap(\mathbb{R}_{++}^{n}\times\mathbb{R}^{m-n})$, then $q_{0}(\overline{x})=\min_{x\in X_{e}}q_{0}(x)=\max_{\lambda\in\mathcal{S}_{b}^{+}}D(\lambda)=D(\overline{\lambda})$.
} Corollary 2 in \cite{Gao:07} asserts: \emph{Let $\overline{\lambda}\in\mathcal{S}_{b}$ be \textquotedblleft a KKT point of the canonical dual problem $(\mathcal{P}_{bo}^{d})$\textquotedblright\ and $\overline{x}:=x(\overline{\lambda})$. Then $\overline{x}$ \textquotedblleft is a KKT point of the Boolean least squares problem $(\mathcal{P}_{bo})$". If $\overline{\lambda}\in\mathcal{S}_{b}^{+}$, then $q_{0}(\overline{x})=\min_{x\in X_{e}}q_{0}(x)=\max_{\lambda\in\mathcal{S}_{b}^{+}}D(\lambda)=D(\overline{\lambda})$. } Unfortunately, it is not defined what is meant by critical points of problems $(\mathcal{P}_{b}^{d})$ and $(\mathcal{P}_{b})$, respectively. However, because $\mathcal{S}_{b}$ and $\mathcal{S}_{b}^{+}$ are open sets, by \textquotedblleft critical point of $(\mathcal{P}_{b}^{d})$\textquotedblright\ one must mean \textquotedblleft critical point of $D$\textquotedblright; in this situation the conclusions of \cite[Th.~4]{Gao:07}, except for the assertion that $\overline{x}$ \textquotedblleft is a critical point of $(\mathcal{P}_{b})$", are true, but they are much weaker than those provided by Proposition \ref{p1}. Similarly, in \cite[Cor.~2]{Gao:07}, $\overline{\lambda}\in\mathcal{S}_{b}$ being \textquotedblleft a KKT point of the canonical dual problem $(\mathcal{P}_{bo}^{d})$\textquotedblright\ is equivalent to $\overline{\lambda}$ being a \textquotedblleft critical point of $D$\textquotedblright. The difference between the problems $(\mathcal{P}_{b})$ considered by Wang, Fang, Gao and Xing in \cite[p.~215]{WanFanGaoXin:08} and in \cite[Th.~4]{Gao:07} is that in the former $q_{0}$ is a general quadratic function (hence $X_{e}\subset\{-1,1\}^{n}$). Theorem 2.2 in \cite{WanFanGaoXin:08} asserts: \emph{Let $\overline{\lambda}\in\mathcal{S}_{b}:=Y_{0}\cap(\mathbb{R}_{+}^{n}\times\mathbb{R}^{m-n})$ be a critical point of $D$ and $\overline{x}:=x(\overline{\lambda})$. Then $\overline{x}$ is a KKT point of $(P_{e})$ and $q_{0}(\overline{x})=D(\overline{\lambda})$.
} Theorem 2.3 in \cite{WanFanGaoXin:08} asserts: \emph{Let $\overline{\lambda}\in\mathcal{S}_{b}^{+}:=Y^{+}\cap(\mathbb{R}_{+}^{n}\times\mathbb{R}^{m-n})$ be a critical point of $D$ and $\overline{x}:=x(\overline{\lambda})$. Then $q_{0}(\overline{x})=\min_{x\in X_{e}}q_{0}(x)=\max_{\lambda\in\mathcal{S}_{b}^{+}}D(\lambda)=D(\overline{\lambda})$. } Theorems 3.2 and 3.3 from \cite{WanFanGaoXin:08} are the versions of Theorems 2.2 and 2.3 for $n=m$, respectively. Of course, the conclusions of Theorems 2.2 and 2.3 remain valid when replacing $\mathcal{S}_{b}$ and $\mathcal{S}_{b}^{+}$ by $Y_{0}$ and $Y^{+}$, respectively. The general quadratic problem with inequality constraints $(P_{i})$ is considered by Gao in \cite{Gao:08} and \cite{Gao:09}. In the sequel, the Moore--Penrose generalized inverse of $F\in\mathfrak{M}_{n}$ is denoted by $F^{\dag}$ or $F^{+}$, as in the corresponding cited papers authored by Gao and his collaborators. Theorem 7 in \cite{Gao:08} and Theorem 10 in \cite{Gao:09} assert: \emph{Let $\overline{\lambda}\in Y_{\operatorname{col}}^{i+}$ be a solution of the problem $(\mathcal{P}_{q}^{d})$ of maximizing $D$ on $Y_{\operatorname{col}}^{i}$ and $\overline{x}:=\left[ A(\overline{\lambda})\right] ^{\dag}b(\overline{\lambda})$. Then $\overline{x}$ is a KKT point of $(P_{i})$ and $q_{0}(\overline{x})=D(\overline{\lambda})$. If $A(\overline{\lambda})\succeq0$ then $\overline{\lambda}$ is a global maximizer of the problem $(\mathcal{P}_{q}^{d})$ and $\overline{x}$ is a global minimizer of $(P_{i})$.
If $A(\overline{\lambda})\prec0$, then $\overline{x}$ is a local minimizer (or maximizer) of $(P_{i})$ if and only if $\overline{\lambda}$ is a local minimizer (or maximizer) of $D$ on $Y_{\operatorname{col}}^{i+}$.} The \textquotedblleft box constrained problem" $(\mathcal{P}_{b})$ considered by Gao in \cite[Th.~3]{Gao:08} and \cite{Gao:09} is of type $(P_{i})$ in which $m=n$, $A_{j}:=2\operatorname*{diag}e_{j}$, $b_{j}:=0$, $c_{j}:=-1$ for $j\in\overline{1,n}$; hence $X_{i}=[-1,1]^{n}$. Theorem 3 in \cite{Gao:08} asserts: \emph{Let $\overline{\lambda}\in Y_{\operatorname{col}}^{i+}$ be a critical point of $D$ and $\overline{x}:=\left[ A(\overline{\lambda})\right] ^{\dag}b(\overline{\lambda})$. Then $\overline{x}$ is a KKT point of $(P_{i})$ and $q_{0}(\overline{x})=D(\overline{\lambda})$. Moreover, if $A(\overline{\lambda})\succeq0$ then $q_{0}(\overline{x})=\min_{x\in X_{i}}q_{0}(x)=\max_{\lambda\in Y_{\operatorname{col}}^{i+}}D(\lambda)=D(\overline{\lambda})$. If $A(\overline{\lambda})\prec0$, then on a neighborhood $\mathcal{X}_{0}\times\mathcal{S}_{0}$ of $(\overline{x},\overline{\lambda})$ we have either $q_{0}(\overline{x})=\min_{x\in\mathcal{X}_{0}}q_{0}(x)=\min_{\lambda\in\mathcal{S}_{0}}D(\lambda)=D(\overline{\lambda})$, or $q_{0}(\overline{x})=\max_{x\in\mathcal{X}_{0}}q_{0}(x)=\max_{\lambda\in\mathcal{S}_{0}}D(\lambda)=D(\overline{\lambda})$.} The only difference between \cite[Th.~3]{Gao:08} and \cite[Th.~5]{Gao:09} is that in the latter the case $A(\overline{\lambda})\prec0$ is missing.
Probably, the intention was to take $\overline{\lambda}\in Y_{\operatorname{col}}^{i}$ instead of $\overline{\lambda}\in Y_{\operatorname{col}}^{i+}$ in the first assertions of \cite[Ths.~3, 7]{Gao:08} and \cite[Ths.~5, 10]{Gao:09}; in fact, there is no $\overline{\lambda}\in Y_{\operatorname{col}}^{i+}$ such that $A(\overline{\lambda})\prec0$! It is not clear how the criticality of $D$ at $\lambda\in Y_{\operatorname{col}}\setminus Y_{0}$ is defined in \cite[Th.~3]{Gao:08} and \cite[Th.~5]{Gao:09}. Let us assume that $\overline{\lambda}\in Y_{0}$ is a critical point of $D$ in the mentioned results from \cite{Gao:08} and \cite{Gao:09}; in this situation \cite[Th.~3]{Gao:08} is a particular case of \cite[Th.~7]{Gao:08}. Then $\overline{x}$ is a KKT point of $(P_{i})$ iff $\overline{\lambda}\in\mathbb{R}_{+}^{n};$ assuming moreover that $A(\overline{\lambda})\succeq0$, the conclusion of the second assertion of \cite[Th.~7]{Gao:08} is true. However, in the case $A(\overline{\lambda})\prec0$ the conclusions of \cite[Ths.~3, 7]{Gao:08} are false, as the next example shows. \begin{example} \label{ex-gao08} Consider $n:=m:=2$, $A_{0}:=\left[
\begin{array}
[c]{cc}
-1 & 1\\
1 & -3
\end{array}
\right] $, $A_{1}:=\operatorname*{diag}e_{1}$, $A_{2}:=\operatorname*{diag}e_{2}$, $b_{0}:=(0,-1)^{T}$, $b_{1}:=b_{2}:=0$, $c_{1}:=c_{2}:=-\frac{1}{2}$. Then $A(\lambda)=A_{0}+\lambda_{1}A_{1}+\lambda_{2}A_{2}$, $b(\lambda)=b_{0}$, $c(\lambda)=-\tfrac{1}{2}(\lambda_{1}+\lambda_{2})$. We have that $Y_{\operatorname{col}}=Y_{0}=\{(\lambda_{1},\lambda_{2})\in\mathbb{R}^{2}\mid(\lambda_{1}-1)(\lambda_{2}-3)\neq1\}$. The critical points $(\overline{x},\overline{\lambda})$ of $L$ are: $\left( (-1,-1)^{T},(0,3)^{T}\right) $, $\left( (-1,1)^{T},(2,3)^{T}\right) $, $\left( (1,-1)^{T},(2,5)^{T}\right) $, $\left( (1,1)^{T},(0,1)^{T}\right) $.
Applying Proposition \ref{p1i} we obtain that $\overline{x}:=(1,-1)^{T}$ is the global minimizer of $q_{0}$ on $X_{i}=[-1,1]^{2}$ and $\overline{\lambda}:=(2,5)^{T}$ is the global maximizer of $D$ on $Y_{\operatorname{col}}^{i+}=Y^{i+}=\{(\lambda_{1},\lambda_{2})\in\mathbb{R}_{+}^{2}\mid\lambda_{1}>1$, $(\lambda_{1}-1)(\lambda_{2}-3)>1\}$. Take now $(\overline{x},\overline{\lambda}):=\left( (1,1)^{T},(0,1)^{T}\right) ;$ we have that $\overline{\lambda}\in\mathbb{R}_{+}^{2}$ and $A(\overline{\lambda})\prec0$. From Proposition \ref{lem-pd}~(iv), we have that $\overline{\lambda}$ is a global minimizer of $D$ on $Y_{\operatorname{col}}^{-}=Y^{-}=\{(\lambda_{1},\lambda_{2})\in\mathbb{R}^{2}\mid\lambda_{1}<1$, $(\lambda_{1}-1)(\lambda_{2}-3)>1\}$. Assuming that $\overline{\lambda}$ is a local maximizer of $D$, because $D$ is convex on $Y_{\operatorname{col}}^{-}=Y^{-}$, $D$ is constant on an open neighborhood $U\subset Y^{-}$ of $\overline{\lambda}$, and so $\nabla D(\lambda)=0$ for $\lambda\in U;$ taking into account (\ref{r-cppdL}), this is a contradiction. Observe also that $\overline{x}=(1,1)^{T}$ is not a local minimizer of $q_{0}$ on $X_{i}$: taking $x:=(1-u,1)^{T}\in X_{i}$ with $u\in(0,2)$, we get $q_{0}(x)=-\tfrac{1}{2}u^{2}<0=q_{0}(\overline{x})$. \end{example} Gao and Sherali in \cite[Th.\ 8.16]{GaoShe:09} (attributed to \cite{Gao:05}) assert: \emph{Suppose that $m=1$, $A_{1}>0$, $b_{1}=0$, $c_{1}<0$. Let $\overline{\lambda}\in Y^{i}$ be a critical point of $D$ and $\overline{x}:=x(\overline{\lambda})$. If $\overline{\lambda}\in Y^{i+}$, then $\overline{x}$ is a global minimizer of $q_{0}$ on $X_{i}$. If $\overline{\lambda}\in\mathbb{R}_{+}\cap Y^{-}$ then $\overline{x}$ is a local minimizer of $q_{0}$ on $X_{i}$. } As in the case of \cite[Th.~6]{Gao:03a} above, the first assertion of \cite[Th.~8.16]{GaoShe:09} follows from Proposition \ref{p1i}.
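The assertions of Example \ref{ex-gao08} can likewise be checked symbolically. The sketch below is again an editorial verification aid (not part of the argument), assuming the convention $q_{j}(x)=\tfrac{1}{2}\langle A_{j}x,x\rangle-\langle b_{j},x\rangle+c_{j}$; it recovers the four critical points of $L$ and the segment $x=(1-u,1)^{T}$ along which $q_{0}$ drops below $q_{0}((1,1)^{T})=0$:

```python
import sympy as sp

x1, x2, l1, l2, u = sp.symbols("x1 x2 lambda1 lambda2 u", real=True)

# Data of Example ex-gao08: A0 = [[-1, 1], [1, -3]], b0 = (0, -1),
# A1 = diag(1, 0), A2 = diag(0, 1), b1 = b2 = 0, c1 = c2 = -1/2.
q0 = sp.Rational(1, 2) * (-x1**2 + 2 * x1 * x2 - 3 * x2**2) + x2  # -<b0, x> = +x2
q1 = sp.Rational(1, 2) * x1**2 - sp.Rational(1, 2)                # forces x1 in [-1, 1]
q2 = sp.Rational(1, 2) * x2**2 - sp.Rational(1, 2)                # forces x2 in [-1, 1]
L = q0 + l1 * q1 + l2 * q2

# Critical points of L: expect the four points listed in the example.
crit = sp.solve([sp.diff(L, v) for v in (x1, x2, l1, l2)], [x1, x2, l1, l2], dict=True)
pts = {(d[x1], d[x2], d[l1], d[l2]) for d in crit}

# Along x = (1 - u, 1) with u in (0, 2): q0 = -u^2/2 < 0 = q0(1, 1),
# so (1, 1) is not a local minimizer of q0 on X_i.
val = sp.simplify(q0.subs({x1: 1 - u, x2: 1}))
```

The same computation also confirms that $q_{0}((1,-1)^{T})=-4$ is the smallest value of $q_{0}$ among the four critical points.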
However, the second assertion of \cite[Th.~8.16]{GaoShe:09} is false, as the next example shows. \begin{example} \emph{(see \cite[Ex.~1]{VoiZal:10})}\label{ex-th16-gs} Consider $n:=2$, $m:=1$, $A_{0}:=\left[
\begin{array}
[c]{cc}
-2 & -1\\
-1 & -3
\end{array}
\right] $, $A_{1}:=I_{2}$, $b_{0}:=(-1,-1)^{T}$, $b_{1}:=0$, $c_{1}:=-\frac{1}{2}$. Then $D(\lambda)=-\frac{1}{2}\lambda-\frac{1}{2}\frac{2\lambda-3}{\lambda^{2}-5\lambda+5}$ and $D^{\prime}(\lambda)=-\frac{1}{2}\frac{\left( \lambda-2\right) ^{2}}{\left( \lambda^{2}-5\lambda+5\right) ^{2}}(\lambda-1)(\lambda-5)$. Hence the set of critical points of $D$ is $\{1,2,5\}\subset\mathbb{R}_{+}$. For $\overline{\lambda}=1$ we have that $A(\overline{\lambda})=\left[
\begin{array}
[c]{cc}
-1 & -1\\
-1 & -2
\end{array}
\right] \prec0$ and $\overline{x}=x(\overline{\lambda})=(1,0)^{T}$. Since $\{(\cos t,\sin t)^{T}\mid t\in(-\pi,\pi]\}\subset X_{i}$ and \[ q_{0}((\cos t,\sin t)^{T})=-(3+\cos t-2\sin t)\sin^{2}\tfrac{1}{2}t\leq(\sqrt{5}-3)\sin^{2}\tfrac{1}{2}t<0=q_{0}(\overline{x}) \] for all $t\in(-\pi,\pi]\setminus\{0\}$, we have that $\overline{x}$ is the unique global maximizer of $q_{0}$ on the unit circle; in particular, $\overline{x}$ is not a local minimizer of $q_{0}$ on $X_{i}$, in contradiction with the second assertion of \cite[Th.~8.16]{GaoShe:09}. \end{example} The problem considered by Zhang, Zhu and Gao in \cite{ZhaZhuGao:09} is of type $(P_{i})$ in which $m\geq n$, $A_{j}:=\operatorname*{diag}e_{j}$, $b_{j}:=0$, $c_{j}\leq0$ for $j\in\overline{1,n}$, $A_{j}=0$ for $j\in\overline{n+1,m}$. Theorem 1 in \cite{ZhaZhuGao:09} asserts: \emph{Let $\overline{\lambda}\in Y^{i}$ be a KKT point of $D$ and $\overline{x}:=x(\overline{\lambda})$. Then $\overline{x}$ is a KKT point of $(P_{i})$ and $q_{0}(\overline{x})=D(\overline{\lambda})$. } Theorem 2 in \cite{ZhaZhuGao:09} asserts: \emph{Let $\overline{\lambda}\in Y^{i}$ be a KKT point of $D$ and $\overline{x}:=x(\overline{\lambda})$.
If $\overline{\lambda}\in Y^{i+}$, then $\overline{\lambda}$ \textquotedblleft is a global maximizer of\textquotedblright\ $D$ on $Y^{i+}$ \textquotedblleft if and only if the vector" $\overline{x}$ \textquotedblleft is a global minimizer of\textquotedblright\ $(P_{i})$ on $X_{i}$, and $q_{0}(\overline{x})=\min_{x\in X_{i}}q_{0}(x)=\max_{\lambda\in Y^{i+}}D(\lambda)=D(\overline{\lambda})$. If $\overline{\lambda}\in\mathbb{R}_{+}^{m}\cap Y^{-}$, \textquotedblleft then in a neighborhood $\mathcal{X}_{0}\times S_{0}\subset$\textquotedblright$X_{i}\times(\mathbb{R}_{+}^{m}\cap Y^{-})$ of $(\overline{x},\overline{\lambda})$, \textquotedblleft we have that either\textquotedblright\ $q_{0}(\overline{x})=\min_{x\in\mathcal{X}_{0}}q_{0}(x)=\min_{\lambda\in S_{0}}D(\lambda)=D(\overline{\lambda})$, or $q_{0}(\overline{x})=\max_{x\in\mathcal{X}_{0}}q_{0}(x)=\max_{\lambda\in S_{0}}D(\lambda)=D(\overline{\lambda})$. } Clearly, \cite[Th.~1]{ZhaZhuGao:09} and the conclusion of \cite[Th.~2]{ZhaZhuGao:09} in the case $\overline{\lambda}\in Y^{i+}$ follow from Proposition \ref{p1i}. As shown in \cite[Ex.\ 2]{Zal:11} and Example \ref{ex-gao08}, each of the alternative conclusions of \cite[Th.~2]{ZhaZhuGao:09} in the case $\overline{\lambda}\in\mathbb{R}_{+}^{m}\cap Y^{-}$ is false. Observe that \cite{GaoRuaShe:09} is cited in \cite{ZhaZhuGao:09} as a paper to appear, but not in connection with the previous result. The problem $(P_{i})$ is considered also by Gao, Ruan and Sherali in \cite[p.~486]{GaoRuaShe:09}; the problem of maximizing $D$ on $Y_{\operatorname{col}}^{i}$ is denoted by $(\mathcal{P}_{q}^{d})$. Theorem 4 in \cite{GaoRuaShe:09} asserts: \emph{Let $\overline{\lambda}\in Y_{\operatorname{col}}^{i}$ be a critical point of $(\mathcal{P}_{q}^{d})$ and $\overline{x}:=\left[ A(\overline{\lambda})\right] ^{+}b(\overline{\lambda})$. Then $\overline{x}$ is a KKT point of $(P_{i})$ and $q_{0}(\overline{x})=D(\overline{\lambda})$.
If $\overline{\lambda}\in Y_{\operatorname{col}}^{i+}$, then $q_{0}(\overline{x})=\min_{x\in X_{i}} q_{0}(x)=\max_{\lambda\in Y_{\operatorname{col}}^{i+}}D(\lambda)=D(\overline {\lambda})$. If $\overline{\lambda}\in Y^{i+}$ then $\overline{\lambda}$ \textquotedblleft is a unique global maximizer of $(\mathcal{P}_{q}^{d})$ and the vector $\overline{x}$ is a unique global minimizer of $(P_{i} )$\textquotedblright. If $\overline{\lambda}\in\mathbb{R}_{+}^{m}\cap Y^{-}$, then $\overline{\lambda}$ \textquotedblleft is a local minimizer of\textquotedblright\ $D$ \textquotedblleft on the neighborhood $S_{o}\subset $\textquotedblright$\mathbb{R}_{+}^{m}\cap Y^{-}$ \textquotedblleft if and only if $\overline{x}$ is a local minimizer of\textquotedblright\ $q_{0}$ \textquotedblleft on the neighborhood $X_{o}\subset$\textquotedblright$X_{i}$, i.e., $q_{0}(\overline{x})=\min_{x\in\mathcal{X}_{o}}q_{0}(x)=\min_{\lambda \in\mathcal{S}_{o}}D(\lambda)=D(\overline{\lambda})$.} As noticed before Lemma \ref{lem-im}, $Y_{\operatorname{col}}$ is not open in general, so it is not possible to speak about the differentiability of $D$ at $\lambda\in Y_{\operatorname{col}}\setminus Y_{0} $. As in \cite[Th.~4]{Gao:07}, it is not explained what is meant by critical point of $(\mathcal{P}_{q}^{d})$; we interpret it as being a critical point of $D$. With the above interpretation for \textquotedblleft critical point of $(\mathcal{P}_{q}^{d})$\textquotedblright, we agree with the first two assertions of \cite[Th.~4]{GaoRuaShe:09}. However, the third assertion of \cite[Th.~4]{GaoRuaShe:09} that $\overline{\lambda}$ is the unique global maximizer of $(\mathcal{P}_{q}^{d})$ provided that $\overline{\lambda}\in Y^{i+}$ is false, as seen in Example \ref{ex-qi-grs} below. The same example shows that the fourth assertion of \cite[Th.~4]{GaoRuaShe:09} is false, too; another counterexample is provided by Example \ref{ex-gao08}. 
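The computations in Example \ref{ex-th16-gs} can be double-checked numerically. The sketch below is an illustration we add here, not part of the analysis above; it assumes the conventions $q_{j}(x)=\tfrac{1}{2}x^{T}A_{j}x-b_{j}^{T}x+c_{j}$, $x(\lambda)=[A(\lambda)]^{-1}b(\lambda)$ and $D(\lambda)=c(\lambda)-\tfrac{1}{2}b(\lambda)^{T}[A(\lambda)]^{-1}b(\lambda)$, chosen to match the displayed formulas.

```python
import numpy as np

# Data of Example ex-th16-gs; the conventions q_j(x) = x^T A_j x / 2
# - b_j^T x + c_j, x(lam) = A(lam)^{-1} b(lam) and
# D(lam) = c(lam) - b(lam)^T A(lam)^{-1} b(lam) / 2 are assumptions made
# to match the displayed formulas.
A0 = np.array([[-2.0, -1.0], [-1.0, -3.0]])
b0 = np.array([-1.0, -1.0])
c1 = -0.5

def D(lam):
    return c1 * lam - 0.5 * b0 @ np.linalg.solve(A0 + lam * np.eye(2), b0)

# D'(lam) vanishes at the critical points {1, 2, 5}
eps = 1e-6
for lam in (1.0, 2.0, 5.0):
    assert abs((D(lam + eps) - D(lam - eps)) / (2 * eps)) < 1e-5

# at lam = 1: x(lam) = (1, 0) and q0(x(lam)) = 0 = D(1)
x_bar = np.linalg.solve(A0 + np.eye(2), b0)
assert np.allclose(x_bar, [1.0, 0.0])
assert abs(D(1.0)) < 1e-9

# (1, 0) is the global maximizer of q0 on the unit circle X_i
t = np.linspace(-np.pi, np.pi, 10001)
pts = np.stack([np.cos(t), np.sin(t)])
vals = 0.5 * np.einsum('ik,ij,jk->k', pts, A0, pts) - b0 @ pts
assert vals.max() <= 1e-12
```

The check confirms the set of critical points of $D$ and that $q_{0}$ does not exceed $q_{0}(\overline{x})=0$ on the circle.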
\begin{example} \label{ex-qi-grs}Let us take $n=m=2$, $q_{0}(x,y):=xy-x$, and $q_{1}(x,y):=-q_{2}(x,y):=\tfrac{1}{2}\left( x^{2}+y^{2}-1\right) $ for $(x,y)\in\mathbb{R}^{2}$. Clearly, the problems $(P_{e})$ for $(q_{0},q_{1})$ and $(P_{i})$ for $(q_{0},q_{1},q_{2})$ are equivalent in the sense that they have the same objective functions and the same feasible sets (hence the same solutions). Denoting by $L^{e}$, $A^{e}$, $b^{e}$, $c^{e}$, $D^{e}$ and $L^{i}$, $A^{i}$, $b^{i}$, $c^{i}$, $D^{i}$ the functions associated to problems $(P_{e})$ and $(P_{i})$ mentioned above, we get: $L^{e}(x,y,\lambda)=xy-x+\tfrac{\lambda}{2}\left( x^{2}+y^{2}-1\right) $, $A^{e}(\lambda)=\left(
\begin{array}
[c]{ll}
\lambda & 1\\
1 & \lambda
\end{array}
\right) $, $b^{e}(\lambda)=(1,0)^{T}$, $c^{e}(\lambda)=-\tfrac{1}{2}\lambda$, $Y_{\operatorname{col}}=Y_{0}=\mathbb{R}\setminus\{-1,1\}$, $Y_{\operatorname{col}}^{+}=-Y_{\operatorname{col}}^{-}=Y^{+}=-Y^{-}=(1,\infty)$, $D^{e}(\lambda)=\frac{-\lambda}{2\left( \lambda^{2}-1\right) }-\tfrac{1}{2}\lambda$ [for the problem $(P_{e})$] and $L^{i}(x,y,\lambda_{1},\lambda_{2})=L^{e}(x,y,\lambda_{1}-\lambda_{2})$, $A^{i}(\lambda_{1},\lambda_{2})=A^{e}(\lambda_{1}-\lambda_{2})$, $b^{i}(\lambda_{1},\lambda_{2})=b^{e}(\lambda_{1}-\lambda_{2})$, $c^{i}(\lambda_{1},\lambda_{2})=c^{e}(\lambda_{1}-\lambda_{2})$, $Y_{\operatorname{col}}^{i}=Y^{i}=\{(\lambda_{1},\lambda_{2})\in\mathbb{R}_{+}^{2}\mid\lambda_{1}-\lambda_{2}\neq\pm1\}$, $Y_{\operatorname{col}}^{i+}=Y^{i+}=\{(\lambda_{1},\lambda_{2})\in\mathbb{R}_{+}^{2}\mid\lambda_{1}-\lambda_{2}>1\}$, $\mathbb{R}_{+}^{2}\cap Y^{-}=\{(\lambda_{1},\lambda_{2})\in\mathbb{R}_{+}^{2}\mid\lambda_{1}-\lambda_{2}<-1\}$, $D^{i}(\lambda_{1},\lambda_{2})=D^{e}(\lambda_{1}-\lambda_{2})$. The critical points of $L^{e}$ are $(0,1,0)$ and $\left( \pm\sqrt{3}/2,-1/2,\pm\sqrt{3}\right) $.
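These critical points, and the extremal values of $q_{0}$ on the unit circle used below, can be verified with a short numerical check (an illustration we add here; the Python names are ours, not from the cited papers):

```python
import numpy as np

# Example ex-qi-grs: q0(x,y) = xy - x on the unit circle.
def q0(x, y):
    return x * y - x

def grad_L(x, y, lam):
    # gradient of the Lagrangian L^e(x,y,lam) = xy - x + (lam/2)(x^2+y^2-1)
    return np.array([y - 1 + lam * x, x + lam * y, 0.5 * (x**2 + y**2 - 1)])

# the three critical points of L^e listed above
crit = [(0.0, 1.0, 0.0),
        (np.sqrt(3) / 2, -0.5, np.sqrt(3)),
        (-np.sqrt(3) / 2, -0.5, -np.sqrt(3))]
for x, y, lam in crit:
    assert np.allclose(grad_L(x, y, lam), 0)

# q0 attains its global minimum -3*sqrt(3)/4 on the circle at
# (sqrt(3)/2, -1/2) and its global maximum 3*sqrt(3)/4 at (-sqrt(3)/2, -1/2)
t = np.linspace(-np.pi, np.pi, 100001)
vals = q0(np.cos(t), np.sin(t))
assert abs(vals.min() + 3 * np.sqrt(3) / 4) < 1e-6
assert abs(vals.max() - 3 * np.sqrt(3) / 4) < 1e-6
assert abs(q0(np.sqrt(3) / 2, -0.5) + 3 * np.sqrt(3) / 4) < 1e-12
```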
Using Proposition \ref{p1}, it follows that $(\sqrt{3}/2,-1/2)$ is the unique global minimizer of $q_{0}$ on $X_{e}$ and $\sqrt{3}$ is a global maximizer of $D^{e}$ on $Y_{\operatorname{col}}^{+}$ $(=Y^{+})$, while $(-\sqrt{3}/2,-1/2)$ is the unique global maximizer of $q_{0}$ on $X_{e}$ and $-\sqrt{3}$ is a global minimizer of $D^{e}$ on $Y_{\operatorname{col}}^{-}$ $(=Y^{-})$. Note that $(\overline{x},\overline{y},\overline{\lambda}_{1},\overline {\lambda}_{2})$ is a KKT point of $L^{i}$ iff $(\overline{x},\overline {y},\overline{\lambda}_{1},\overline{\lambda}_{2})$ is a critical point of $L^{i}$ with $(\overline{\lambda}_{1},\overline{\lambda}_{2})\in\mathbb{R} _{+}^{2}$, iff $(\overline{x},\overline{y},\overline{\lambda}_{1} -\overline{\lambda}_{2})$ is a critical point of $L^{e}$ with $(\overline {\lambda}_{1},\overline{\lambda}_{2})\in\mathbb{R}_{+}^{2}$. Using Proposition \ref{p1i}~(ii) we obtain that $(\sqrt{3}/2,-1/2)$ is the unique global minimizer of $q_{0}$ on $X_{i}$ and any $(\overline{\lambda}_{1} ,\overline{\lambda}_{2})\in\mathbb{R}_{+}^{2}$ with $\overline{\lambda} _{1}-\overline{\lambda}_{2}=\sqrt{3}$ is a global maximizer of $D^{i}$ on $Y^{i+}$ $(=Y_{\operatorname{col}}^{i+})$, the latter assertion contradicting the third assertion of \cite[Th.~4]{GaoRuaShe:09}. On the other hand, as seen above, $(-\sqrt{3}/2,-1/2)$ is the unique global maximizer of $q_{0}$ on $X_{e}=X_{i}$ and $(\sqrt{3},2\sqrt{3})\in\mathbb{R}_{+}^{2}\cap Y^{-}$ is a global minimizer of $(\mathcal{P}_{q}^{d})$, contradicting the fourth assertion of \cite[Th.~4]{GaoRuaShe:09}. \end{example} The problem considered by Lu, Wang, Xin and Fang in \cite{LuWanXinFan:10} is of type $(P_{e})$ with $m=n$. More precisely, $A_{j}=2\operatorname*{diag} e_{j}$, $b_{j}:=e_{j}$, $c_{j}:=0$ for $j\in\overline{1,n};$ hence $X_{e}=\{0,1\}^{n}$. One must emphasize the fact that the authors use the usual Lagrangian, even if CDT is invoked. 
Theorem 2.2 (resp.\ Theorem 2.3) of \cite{LuWanXinFan:10} asserts: \emph{If $\overline{\lambda}\in Y_{0}$ (resp.\ $\overline{\lambda}\in Y^{+}$) is such that $\nabla D(\overline{\lambda})=0$ and $\overline{x}:=x(\overline {\lambda})$, then $q_{0}(\overline{x})=D(\overline{\lambda})$ (resp.\ $q_{0} (\overline{x})=\min_{x\in X_{e}}q_{0}(x)$). } Gao and Ruan \cite{GaoRua:10} considered problems $(P_{e})$ and $(P_{i})$ when $m=n$ and $A_{j}:=\operatorname*{diag}e_{j}$, $b_{j}:=0$, $c_{j}:=-\tfrac {1}{2}$ for $j\in\overline{1,n}$. Of course, $X_{e}=\{-1,1\}^{n}$ and $X_{i}=[-1,1]^{n}$. The problem of maximizing $D$ on $Y^{i+}$ is denoted by $(\mathcal{P}^{d})$. Theorem 1 in \cite{GaoRua:10} (attributed to \cite{Gao:07}) asserts: \emph{\textquotedblleft If $\overline{\sigma}$ is a critical point of\textquotedblright\ $D$, \textquotedblleft the vector $\overline{x} $"$:=x(\overline{\sigma})$ \textquotedblleft is a KKT point of" $(P_{i})$ and $q_{0}(\overline{x})=D(\overline{\sigma})$. \textquotedblleft If the critical point $\overline{\sigma}>0$, then the vector $\overline{x}$"$\in X_{e}$ \textquotedblleft is a local optimal solution of the integer programming problem" $(P_{e})$. If $\overline{\sigma}\in Y^{i+}$, then $q_{0}(\overline {x})=\min_{x\in X_{i}}q_{0}(x)=\max_{\sigma\in Y^{i+}}D(\sigma)=D(\overline {\sigma})$. \textquotedblleft If the critical point $\overline{\sigma}\in $"$Y^{i+}$ \textquotedblleft and $\overline{\sigma}>0$, then the vector $\overline{x}$"$\in X_{e}$ \textquotedblleft is a global minimizer to the integer programming problem" $(P_{e})$. 
If $\overline{\sigma}\in\mathbb{R}_{+}^{n}\cap Y^{-}$, \textquotedblleft then $\overline{\sigma}$ is a local minimizer of $(\mathcal{P}^{d})$, the vector $\overline{x}$ is a local minimizer of" $(P_{i})$, \textquotedblleft and on the neighborhood $\mathcal{X}_{o}\times\mathcal{S}_{o}$ of $(\overline{x},\overline{\sigma})$, $q_{0}(\overline{x})=\min_{x\in\mathcal{X}_{o}}q_{0}(x)=\min_{\sigma\in\mathcal{S}_{o}}D(\sigma)=D(\overline{\sigma})$. } Concerning \cite[Th.~1]{GaoRua:10} we observe the following: In the first assertion it is not clear whether $\overline{\sigma}$ belongs to $\mathbb{R}_{+}^{n}$; of course, $\overline{x}$ is not a KKT point of $(P_{i})$ if $\overline{\sigma}\notin\mathbb{R}_{+}^{n}$. The second assertion is true because $X_{e}$ is finite (without any condition on $\overline{\sigma}$). The third assertion is false without assuming that $\overline{\sigma}$ is at least a KKT point of $D$. The fourth assertion is true without assuming $\overline{\sigma}>0$. The fifth assertion is false if $\overline{\sigma}>0$ and $\nabla D(\overline{\sigma})\neq0$. The main difference between \cite[Th.~1]{GaoRua:10} and the conjunction of \cite[Th.~2 \& Th.~3]{GaoRua:10} is that in the latter $Y_{0}$ is replaced by $Y_{\operatorname{col}}$, but their statements are no clearer. This is the reason for not analyzing them here. The problem considered by Gao, Ruan and Sherali in \cite{GaoRuaShe:10} is of type $(P_{J})$ with $n=m=2k$ $(k\in\mathbb{N}^{\ast})$ and $J:=\overline{k+1,n}$.
In \cite{GaoRuaShe:10} $A_{0}$ is such that $(A_{0})_{ij}=0$ if $\max\{i,j\}>k$, $A_{j}:=2\operatorname*{diag}e_{j}$ and $c_{j}:=0$ for $j\in\overline{1,m}$, $b_{j}:=e_{j+k}$ for $j\in J^{c}$ $(=\overline{1,k})$ and $b_{j}:=e_{j}$ for $j\in J;$ moreover, $\mathcal{S}_{\natural }:=Y_{\operatorname{col}}\cap(\mathbb{R}_{+}^{k}\times\mathbb{R}_{++}^{k})$ $(\subset Y_{\operatorname{col}}^{i}\subset Y_{\operatorname{col}}^{J})$, $\mathcal{S}_{\natural}^{+}:=Y^{+}\cap\mathcal{S}_{\natural}$ $(\subset Y^{i+}\subset Y^{J+})$, $\mathcal{S}_{\flat}:=Y_{\operatorname{col}} \cap(\mathbb{R}_{-}^{k}\times\mathbb{R}_{--}^{k})$, $\mathcal{S}_{\flat} ^{-}:=Y^{-}\cap\mathcal{S}_{\flat}$ $(\subset Y^{i-}\subset Y^{J-})$. Theorem 1 of \cite{GaoRuaShe:10} asserts: \emph{Let $\overline {\lambda}\in\mathcal{S}_{\natural}$ be a KKT point of $D$ and $\overline {x}:=x(\overline{\lambda})$. Then $\overline{x}$ is feasible to the primal problem $(P_{J})$ and $q_{0}(\overline{x})=L(\overline{x},\overline{\lambda })=D(\overline{\lambda})$.} Theorem 2 in \cite{GaoRuaShe:10} asserts: \emph{Let $\overline {\lambda}\in\mathcal{S}_{\natural}^{+}\cup\mathcal{S}_{\flat}^{-}$ be a critical point of $D$ and $\overline{x}:=x(\overline{\lambda})$. If $\overline{\lambda}\in\mathcal{S}_{\natural}^{+}$ then $q_{0}(\overline {x})=\min_{x\in X_{J}}q_{0}(x)=\max_{\lambda\in\mathcal{S}_{\natural}^{+} }D(\lambda)=D(\overline{\lambda})$. If $\overline{\lambda}\in\mathcal{S} _{\flat}^{-}$ then $q_{0}(\overline{x})=\max_{x\in X_{J}}q_{0}(x)=\min _{\lambda\in\mathcal{S}_{\flat}^{-}}D(\lambda)=D(\overline{\lambda})$. } In \cite[Th.~1]{GaoRuaShe:10} it is not clear what is meant by KKT point of $D$ because $D$ is not differentiable for $\overline{\lambda} \in\mathcal{S}_{\natural}\setminus Y_{0}$. Propositions \ref{p1ei} and \ref{p1eimax} confirm \cite[Th.~2]{GaoRuaShe:10}, but the conclusions of the latter are much weaker than those of the former. 
The quadratic problems $(\mathcal{P}_{b})$ and $(\mathcal{P}_{bo})$ considered by Ruan and Gao in \cite{RuaGao:17} (and \cite{RuaGao:16}) are those from \cite{Gao:07}. The statement of \cite[Th.\ 5]{RuaGao:17} is that of \cite[Cor.~2]{Gao:07} in which $\mathcal{S}_{b}$ is now $Y_{0}\cap\{\lambda \in\mathbb{R}^{m}\mid\lambda_{j}\neq0$ $\forall j\in\overline{1,n}\}$, $\mathcal{S}_{b}^{+}$ being the same, that is $Y^{+}\cap\mathbb{R}_{++}^{m}$. The statement of \cite[Th.\ 6]{RuaGao:17} is that of \cite[Th.~4]{Gao:07} in which \textquotedblleft a critical point of $(\mathcal{P}_{b}^{d} )$\textquotedblright\ is replaced by \textquotedblleft a KKT point of $(\mathcal{P}_{b}^{d})$\textquotedblright. The quadratic problem considered by Ruan and Gao in \cite{RuaGao:17b} is of type $(P_{J})$ in which $m>n$, and $\overline{1,n+1}\subset J$. In \cite{RuaGao:17b} $A_{j}:=2\operatorname*{diag}e_{j}$, $b_{j}:=e_{j}$, $c_{j}:=0$ for $j\in\overline{1,n}$, $A_{j}:=0$ for $j\in\overline{n+1,m};$ hence $X_{J}\subset\{0,1\}^{n}$. One considers $\mathcal{S}_{a}:=\{\lambda\in Y^{J}\mid\lambda_{j}\neq0$ $\forall j\in J\}$ and $\mathcal{S}_{a} ^{+}:=\{\lambda\in Y^{J+}\mid\lambda_{j}>0$ $\forall j\in J\}$. Theorem 3 of \cite{RuaGao:17b} asserts: \emph{Let $\overline {\lambda}\in\mathcal{S}_{a}$ be a $J$-LKKT point of $D$ and $\overline {x}:=x(\overline{\lambda})$. Then $\overline{x}$ is a $J$-LKKT point of $(P_{J})$ and $q_{0}(\overline{x})=D(\overline{\lambda})$. } Theorem 4 in \cite{RuaGao:17b} asserts: \emph{Let $\overline {\lambda}\in\mathcal{S}_{a}^{+}$ be a $J$-LKKT point of $D$ and $\overline {x}:=x(\overline{\lambda})$. Then $q_{0}(\overline{x})=\min_{x\in X_{J}} q_{0}(x)=\max_{\lambda\in\mathcal{S}_{a}^{+}}D(\lambda)=D(\overline{\lambda} )$. } Clearly, \cite[Th.~3]{RuaGao:17b} is an immediate consequence of Lemma \ref{lem-qperfdual}, while \cite[Th.~4]{RuaGao:17b} is a very particular case of Proposition \ref{p1ei}. 
The quadratic problem considered by Gao in \cite{Gao:17}, \cite{Gao:18} and \cite{GaoAli:18} is of type $(P_{J})$ in which $m=n+1$ and $J:=\overline{1,n}$. In these papers $A_{0}:=0$, $A_{j}:=2\operatorname*{diag}e_{j}$, $b_{j}:=e_{j}$, $c_{j}:=0$ for $j\in J$, and $A_{n+1}:=0$; hence $X_{J}\subset\{0,1\}^{n}$. Theorem 2 of \cite{Gao:17} asserts: \emph{Let $\overline{\lambda}\in Y^{J+}$ be a global maximizer of $D$ on $Y^{J+}$. Then $\overline{x}:=x(\overline{\lambda})\in X_{J}$ and $q_{0}(\overline{x})=\min_{x\in X_{J}}q_{0}(x)=\max_{\lambda\in Y^{J+}}D(\lambda)=D(\overline{\lambda})$. } The differences between \cite[Th.~2]{Gao:17} and \cite[Th.~2]{Gao:18} are: in the latter $b_{n+1}:=-c(u)\in\mathbb{R}_{-}^{n}$, $c_{n+1}:=-V_{c}<0$, and $Y^{J+}$ is replaced by $\{\lambda\in Y^{J+}\mid\lambda_{n+1}>0\}$. The differences between \cite[Th.~2]{Gao:17} and \cite[Th.~1]{GaoAli:18} are: in the latter $c_{n+1}:=-V_{c}<0$, and $\min_{\rho\in\mathcal{Z}_{a}}P_{u}(\rho)$ is replaced by $\min_{\rho\in\mathbb{R}^{n}}P_{u}(\rho);$ of course, $\min_{\rho\in\mathbb{R}^{n}}P_{u}(\rho)=-\infty$ if $c_{u}\neq0$. Proofs of the mentioned results are provided in all three papers. Using Proposition \ref{p1ei} (iii) in the context of \cite[Th.~2]{Gao:17} we have that $\overline{\lambda}$ is a $J$-LKKT point of $D;$ using Proposition \ref{p1ei} (ii) and (i) we get the conclusion of \cite[Th.~2]{Gao:17}. Yuan \cite{Yua:17} (the same as \cite{Yua:16}) considers problem $(P_{i})$ in its general form. In \cite[p.~340]{Yua:17} one asserts: \emph{\textquotedblleft One hard restriction is given" by $b_{0}\neq0$. \textquotedblleft The restriction is very important to guarantee the uniqueness of a globally optimal solution of" $(P_{i})$.} Theorem 1 of \cite{Yua:17} asserts: \emph{Let $\mathcal{Y}:=\{\sigma\in Y^{i}\mid x(\sigma)\in X_{i}\}\neq\emptyset$, and let $(\mathcal{P}^{d})$ be the problem of maximizing $D$ on $\mathcal{Y}$.
If $\overline{\sigma}$ is a solution of $(\mathcal{P}^{d})$, then $\overline{x}:=x(\overline{\sigma})$ is a solution of $(P_{i})$ and $q_{0}(\overline{x})=D(\overline{\sigma})$. } Theorem 2 of \cite{Yua:17} asserts: \emph{Assume that ($C_{1}$) $\sum_{k=0}^{m}A_{k}\succ0$, and ($C_{2}$) there exists $k\in\overline{1,m}$ such that $A_{k}\succ0$, $A_{0}+A_{k}\succ0$, and $\left\Vert D_{k}A_{0}^{-1}b_{0}\right\Vert >\left\Vert b_{k}^{T}D_{k}^{-1}\right\Vert +\sqrt{\left\Vert b_{k}^{T}D_{k}^{-1}\right\Vert ^{2}+2|c_{k}|}$, where $A_{k}=D_{k}^{T}D_{k}$ and $\left\Vert \cdot\right\Vert $ is some vector norm. Then problem $(\mathcal{P}^{d})$ has a unique non-zero solution $\overline{\sigma}$ in the space $Y^{i+}$. } Counterexamples to both theorems of \cite{Yua:17}, as well as to the assertion on the \textquotedblleft hard restriction" $b_{0}\neq0$ from \cite[p.~340]{Yua:17}, are provided in \cite{Zal:18}. \section{Conclusions} \null -- We made a complete study of quadratic minimization problems with quadratic equality and/or inequality constraints using the method suggested by the canonical duality theory (CDT) introduced by DY Gao. This method is based on the introduction of a dual function. Our study uses only the usual Lagrangian associated to minimization problems with equality and/or inequality constraints, without any reference to CDT; CDT is presented (or, at least, referred to) in all the papers cited in Section \ref{sec4}. -- As observed in Remark \ref{rem-adv}, it is more advantageous to use the assertions (i) of Propositions \ref{p1}, \ref{p1ei}, \ref{p1i}, than the second part of (ii) with $\overline{\lambda}\in Y_{0}$ because in versions (i) one must know only the Lagrangian (hence only the data of the problems), and this provides both $\overline{x}$ and $\overline{\lambda}$. Using $D$ could be useful, possibly, if the number of constraints is much smaller than $n$.
-- As seen in Section \ref{sec4}, many results obtained by DY Gao and his collaborators on quadratic optimization problems are not stated clearly, and some of them are even false; some statements were made clearer in subsequent papers, but we did not find any warning about the false assertions. For the great majority of the correct assertions the use of the usual direct method provides stronger versions. -- Requiring strict positivity of the multipliers corresponding to the nonlinear constraints (and not only those, as in \cite{RuaGao:17b}) is very demanding, even for inequality constraints. Just observe that for $k$ equality constraints one has $2^{k}$ distinct possibilities to obtain the feasible set, but at most one of them can produce strictly positive multipliers. \textbf{Acknowledgement} We thank Prof. Marius Durea for reading a previous version of the paper and for his useful remarks. \end{document}
\begin{document} \title{On a Combination of the Cyclic Nimhoff and Subtraction Games} \author{Tomoaki Abuku\footnote{University of Tsukuba, Ibaraki, Japan} \and Masanori Fukui\footnote{Hiroshima University, Hiroshima, Japan}\footnote{Osaka Electro-Communication University, Osaka, Japan} \and Ko Sakai\footnote{University of Tsukuba, Ibaraki, Japan} \and Koki Suetsugu\footnote{Kyoto University, Kyoto, Japan}\footnote{Presently with National Institute of Informatics, Tokyo, Japan}} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{Lem}{Lemma}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{corollary}{Corollary}[section] \theoremstyle{definition} \newtheorem{conjecture}{Conjecture}[section] \newtheorem{defn}{Definition}[section] \newtheorem{exam}{Example}[section] \newtheorem{rem}{Remark}[section] \newtheorem{pict}{Figure}[section] \footnotetext[1]{{\bf Key words.} Combinatorial Game, Impartial Game, Nim, Nimhoff, Subtraction Game}\footnotetext[2]{{\bf AMS 2000 subject classifications.} 05A99, 05E99} \date{} \maketitle \centerline{\bf Abstract} \noindent In this paper, we study a combination (called the generalized cyclic Nimhoff) of the cyclic Nimhoff and subtraction games. We give the $\mathcal{G}$-value of the game when all the $\mathcal{G}$-value sequences of the subtraction games have a common $h$-stair structure.\\ \section{Introduction} \label{intro} \subsection{Impartial game} This paper discusses ``impartial'' combinatorial games under the normal play rule, namely games with the following properties: \begin{itemize} \item\ Two players alternately make a move. \item\ No chance elements (the possible moves in any given position are determined in advance). \item\ Both players have complete knowledge of the game states. \item\ The game terminates in finitely many moves. \item\ Both players have the same set of possible moves in any position.
(impartial) \item\ The player who makes the last move wins. (normal) \end{itemize} Throughout this paper, we suppose that all game positions are ``short'', namely only finitely many positions can be reached from the initial position, and no position can appear twice in a play. \begin{defn}[outcome classes] A game position is called an $\mathcal{N}$-position (resp. a $\mathcal{P}$-position) if the first player (resp. the second player) has a winning strategy. \end{defn} Clearly, every impartial game position is either an $\mathcal{N}$-position or a $\mathcal{P}$-position. If $G$ is an $\mathcal{N}$-position, there exists a move from $G$ to a $\mathcal{P}$-position. If $G$ is a $\mathcal{P}$-position, there exists no move from $G$ to a $\mathcal{P}$-position. \subsection{Nim and $\mathcal{G}$-value} Nim is a well-known impartial game with the following rules: \begin{itemize} \item\ It is played with several heaps of tokens. \item\ The legal move is to remove any number of tokens (but at least one token) from any single heap. \item\ The end position is the state with no tokens left. \end{itemize} We denote by $\mathbb{N}_{0}$ the set of all nonnegative integers. \begin{defn}[nim-sum] The value obtained by adding numbers in binary form without carrying is called the nim-sum. The nim-sum of nonnegative integers $m_{1},\ldots,m_{n}$ is written as \begin{align*} m_{1}\oplus \cdots \oplus m_{n}. \end{align*} \end{defn} Under nim-sum, the set $\mathbb{N}_{0}$ is a group isomorphic to the direct sum of countably many copies of $\mathbb{Z}/2\mathbb{Z}$. \begin{defn}[minimum excluded number] Let $T$ be a proper subset of $\mathbb{N}_{0}$. Then $\mathrm{mex}\ T$ is defined to be the least nonnegative integer not contained in $T$, namely \begin{align*} \mathrm{mex}\ T=\mathrm{min} (\mathbb{N}_0 \setminus T). \end{align*} \end{defn} \begin{defn}[$\mathcal{G}$-value] Let $G$ and $G'$ be game positions. The notation $G \rightarrow G'$ means that $G'$ can be reached from $G$ by a single move.
The value $\mathcal{G}(G)$, called the $\mathcal{G}$-value (also the nim-value, Grundy value, or SG-value, depending on the author) of $G$, is defined as follows: \begin{align*} \mathcal{G}(G)=\mathrm{mex} \{\mathcal{G}(G')\mid G \rightarrow G'\}. \end{align*} \end{defn} The following theorem is well-known. \begin{thm}[\cite{Grundy}, \cite{Sprague}] $\mathcal{G}(G)=0$ if and only if $G$ is a $\mathcal{P}$-position. \end{thm} Therefore, to find a winning strategy in an impartial game it suffices to determine the $\mathcal{G}$-values of its positions; the $\mathcal{G}$-value is also useful for analyzing disjunctive sums of games. If $G$ and $H$ are any positions of (possibly different) impartial games, the disjunctive sum of $G$ and $H$ (written as $G + H$) is defined as follows: each player must make a move in either $G$ or $H$ (but not both) on his turn. \begin{thm}[\cite{Grundy}, \cite{Sprague}] Let $G$ and $H$ be two game positions. Then \begin{align*} \mathcal{G}(G+H)=\mathcal{G}(G)\oplus \mathcal{G}(H). \end{align*} \end{thm} \if0 \begin{thm}[\cite{Grundy}, \cite{Sprague}] The $\mathcal{G}$-value of Nim position $(m_1,\ldots,m_n)$ is the following: \begin{align*} \mathcal{G}(m_1,\ldots,m_n)=m_1\oplus \cdots \oplus m_n. \end{align*} \end{thm} \fi \begin{defn}[periodic] Let $a=(a_0, a_1, a_2, \ldots)$ be a sequence of integers. We say that $a$ is periodic with period $p$ and preperiod $n_0$, if we have \begin{center} $a_{n+p}=a_n$ for all $n\geq n_0$. \end{center} We say that $a$ is purely periodic if it is periodic with preperiod $0$. \end{defn} \begin{defn}[arithmetic periodic] Let $a=(a_0, a_1, a_2, \ldots)$ be a sequence of integers. We say that $a$ is arithmetic periodic with period $p$, preperiod $n_0$, and saltus $s$, if we have \begin{center} $a_{n+p}=a_n+s$ for all $n\geq n_0$.
\end{center} \end{defn} \begin{defn}[$\mathcal{G}$-value sequence of a heap game] Assume $H$ is a heap game and let $\mathcal{G}_{H}(m)$ be the $\mathcal{G}$-value of a single heap with $m$ tokens. Then we call the sequence \begin{align*} \mathcal{G}_{H}(0), \mathcal{G}_{H}(1), \ldots \end{align*} the $\mathcal{G}$-value sequence of $H$. \end{defn} Shortly after Bouton published studies on Nim \cite{Bouton}, Wythoff conducted research on the $\mathcal{P}$-positions of a game which is nowadays called Wythoff's Nim \cite{Wythoff}. Wythoff's Nim is a well-known impartial game with the following rules: \begin{itemize} \item\ The legal move is to remove any number of tokens from a single heap (as in Nim) or remove the same number of tokens from both heaps. \item\ The end position is the state with no tokens left. \end{itemize} \if0 The $\mathcal{P}$-positions are as follows: let $(m,n)$ $(m\leq n)$ be a position of Wythoff's Nim. For $n-m=k$, the $\mathcal{P}$-positions of Wythoff's Nim are given by $m=\lfloor k\Phi \rfloor$, $n= \lfloor k\Phi \rfloor +k$, where $\Phi$ is the golden ratio, i.e. $\Phi=\frac{1+\sqrt{5}}{2}$. \fi Wythoff's work was one of the earliest studies on heap games which permit the players to remove tokens from more than one heap at the same time. Another early study was by Moore \cite{Moore}. Let $k$ be a fixed given number. In Moore's game the player can remove tokens from fewer than $k$ heaps at the same time without any restriction. \if0 Connell considered Wythoff-type games in which the players can only remove a multiple of $k$ tokens from a single heap and can remove any (but the same) numbers of tokens from the both heaps \cite{Con59}. He characterized $\mathcal{P}$-position of the game. Holladay considered various versions of Wythoff's Nim and characterized their $\mathcal{P}$-positions \cite{Hol68}.
The game of type $E$ in his paper is the 2-heap version of the cyclic Nimhoff, whose $\mathcal{G}$-values will be explained in the next subsection. The 2-heap version of the cyclic Nimhoff is studied in \cite{DASR09} as well. \fi \subsection{The Cyclic Nimhoff} Nimhoff was extensively researched by Fraenkel and Lorberbom \cite{Fraenkel}. Let $R$ be a subset of $\mathbb{N}_0^n$ not containing $(0,\dots,0)$. A position of Nimhoff consists of $n$ heaps of tokens. The moves are of two types: each player can remove any positive number of tokens from a single heap, or remove $s_i$ tokens from the $i$th heap for $i=1,\dots,n$ such that $(s_1,\ldots,s_n)\in R$. In particular, they studied the cyclic Nimhoff, the case that $R=\{(s_1,\ldots,s_n) \mid 0<\displaystyle \sum_{i=1}^{n} s_i<h\}$, where $h$ is a fixed positive integer \cite{Fraenkel}. The $\mathcal{G}$-value of position $(x_1,x_2,\ldots,x_n)$ in the cyclic Nimhoff is $$ \mathcal{G}(x_1,x_2,\ldots,x_n) = \left(\displaystyle \bigoplus_i^{} \left\lfloor \frac{x_i}{h}\right\rfloor\right)h + \left(\displaystyle \sum_i^{} x_i \right)\bmod h, $$ where $\displaystyle \bigoplus_i^{}{a_i}$ denotes the nim-sum of all $a_i$'s. \subsection{The Subtraction Games} Let $S$ be a set of positive integers. In the subtraction game $\mathrm{Subtraction}(S)$, the only legal moves are to remove $s$ tokens from a heap for some $s\in S$. In particular, Nim is $\mathrm{Subtraction}(\mathbb{N}_+)$, where $\mathbb{N}_{+}$ is the set of all positive integers. There are many preceding studies on subtraction games \cite{Berlekamp}. \if0 For example,in the case $S=\{s_1,s_2\}$ or $S=\{s_1,s_2,s_1+s_2\}$ a study was done, that is they studied about period of the $\mathcal{G}$- \fi For example, the all-but subtraction games All-but$(S)$ (i.e., $\mathrm{Subtraction}(\mathbb{N}_+\setminus S)$ for a finite set $S$) were studied in detail by Angela Siegel \cite{ASiegel}.
She proved that the $\mathcal{G}$-value sequence is arithmetic periodic and characterized some cases in which the sequence is purely periodic. \section{The Generalized Cyclic Nimhoff} We define the generalized cyclic Nimhoff as a combination of the cyclic Nimhoff and subtraction games as follows. \begin{defn}[generalized cyclic Nimhoff] Let $h$ be a fixed positive integer and let $S_1, S_2, \ldots ,S_n$ be sets of positive integers. Let $(x_1,x_2,\ldots,x_n)$ be an ordered $n$-tuple of non-negative integers. We define subsets $X_1$, $X_2$, $\ldots$, $X_n$, $Y$ of the set of $n$-tuples of non-negative integers as follows (tuples with a negative entry are excluded): \begin{align*} X_1&=\{(x_1-s_1,x_2,\ldots,x_n)\mid s_1 \in S_1\}\\ X_2&=\{(x_1,x_2-s_2,\ldots,x_n)\mid s_2 \in S_2\}\\ &\vdots\\ X_n&=\{(x_1,x_2,\ldots,x_n-s_n)\mid s_n \in S_n\}\\ Y&=\{(x_1-s_1,x_2-s_2,\ldots,x_n-s_n)\mid 0<\displaystyle \sum_{i=1}^{n} s_i< h\}. \end{align*} In the generalized cyclic Nimhoff GCN$(h; S_1,S_2,\ldots,S_n)$, the set of legal moves from position $(x_1,x_2,\ldots,x_n)$ is $X_1\cup X_2\cup\cdots\cup X_n\cup Y$. \end{defn} \begin{defn} Let $a=\{a(x)\}^{\infty}_{x=0}$ be an arbitrary sequence of non-negative integers. The $h$-stair $b=\{b(x)\}^{\infty}_{x=0}$ of $a$ is defined by the following: \begin{align*} b(xh+r)=a(x)h+r \end{align*} for all $x\in \mathbb{N}_0$ and for all $r=0,1,\ldots,h-1$. \end{defn} \begin{exam} If $a=0, 0, 1, 5, 4, \ldots$, then the $3$-stair of $a$ is\\ $b=0,1,2,0,1,2,3,4,5,15,16,17,12,13,14,\ldots$. \end{exam} Let us denote the $\mathcal{G}$-value sequence of $\mathrm{Subtraction}(S)$ by $\{G_{S}(x)\}_{x=0}^{\infty}$. \begin{thm}\label{generalizednimhoff} Let $a_1,a_2, \ldots, a_n$ be arbitrary sequences of non-negative integers. Let $(x_1,x_2,\ldots, x_n)$ be a game position of the generalized cyclic Nimhoff GCN$(h; S_1,S_2,\ldots,S_n)$.
If $\{G_{S_i}(x)\}$ is the $h$-stair of sequence $a_i$ for all $i$ $(1\leq i \leq n)$, then \begin{align*} \mathcal{G}(x_1,x_2,\ldots,x_n) = \left(\displaystyle \bigoplus_i^{}\left\lfloor \frac{G_{S_i}(x_i)}{h} \right\rfloor\right)h+\left(\displaystyle \sum_i^{} x_i \right)\bmod h. \end{align*} \end{thm} \begin{proof} For each $i=1,\dots,n$, let $x_i = q_i h + r_i$ where $0 \leq r_i < h$. Since $G_{S_i}$ is the $h$-stair of sequence $a_i$, $G_{S_i}(x_i)$ = $a_i(q_i)h + r_i$. In other words, note that $$ \left\lfloor \dfrac{x_i}{h}\right\rfloor = q_i,\quad \left\lfloor \dfrac{G_{S_i}(x_i)}{h}\right\rfloor = a_i(q_i),\quad x_i\equiv G_{S_i}(x_i)\equiv r_i\pmod h. $$ The proof is by induction on $(x_1,x_2, \ldots, x_n)$. Let \begin{align*} Q(x_1,x_2, \ldots, x_n) &= \displaystyle \bigoplus_i^{}\left\lfloor \frac{G_{S_i}(x_i)}{h} \right\rfloor = \displaystyle \bigoplus_i^{}{a_i}(q_i),\\ R(x_1,x_2,\ldots,x_n)&=\displaystyle \sum_i^{} x_i \bmod h = \displaystyle \sum_i^{} G_{S_i}(x_i) \bmod h = \displaystyle \sum_i^{} r_i \bmod h. \end{align*} Then, it is sufficient to prove that \begin{align*} \mathcal{G}(x_1,x_2,\ldots,x_n)= Q(x_1,x_2, \ldots, x_n)h+R(x_1,x_2,\ldots,x_n). \end{align*} First, we show that for any $k < Q(x_1,x_2, \ldots, x_n)h +R(x_1,x_2,\ldots,x_n)$, there exists a position $(x'_1,x'_2, \ldots, x'_n) \in X_1\cup X_2\cup\cdots\cup X_n\cup Y$ such that $\mathcal{G}(x'_1,x'_2,\ldots,x'_n) =k$. There are two cases. Case that $Q(x_1,x_2, \ldots, x_n)h \leq k < Q(x_1,x_2, \ldots, x_n)h+R(x_1,x_2,\ldots,x_n)$: In this case, $k$ can be written in form $Q(x_1,x_2, \ldots, x_n)h + k'$ by $k'$ such that $0 \leq k' < R(x_1,x_2,\ldots,x_n)$. Since $0 < R(x_1,x_2,\ldots,x_n) - k' \leq R(x_1,x_2,\ldots,x_n) = \sum_i^{} r_i \bmod h$ and $0 < R(x_1,x_2,\ldots,x_n) - k' < h$, there exist $(k_1,k_2,\ldots, k_n)$ such that $k_1 + k_2 + \cdots + k_n= R(x_1,x_2,\ldots,x_n) - k'$ and $k_j \leq r_j$ for each $j$. Then $(x_1 - k_1, x_2-k_2, \ldots x_n-k_n) \in Y$. 
In addition, $Q(x_1-k_1,x_2-k_2,\ldots,x_n-k_n) = Q(x_1,x_2,\ldots,x_n)$ and $R(x_1-k_1,x_2-k_2,\ldots, x_n-k_n) = R(x_1,x_2,\ldots,x_n) - (k_1 + k_2 + \cdots + k_n) = k'$. Therefore, $\mathcal{G}(x_1-k_1,x_2-k_2,\ldots,x_n-k_n) = Q(x_1,x_2,\ldots,x_n)h + k' = k$ from the induction hypothesis. Case that $k<Q(x_1,x_2,\ldots,x_n)h$: In this case, $k$ can be written in form $Q'h+k'$ by $Q'$ and $k'$ such that $Q' < Q(x_1,x_2,\ldots,x_n) = \bigoplus_i^{}{a_i}(q_i)$ and $0 \leq k' <h$. By a standard property of the nim-sum, there exist $j$ and $g$ which satisfy $Q' = a_1(q_1) \oplus a_2(q_2) \oplus \cdots \oplus a_{j-1}(q_{j-1}) \oplus g \oplus a_{j+1}(q_{j+1})\oplus \cdots \oplus a_n(q_n)$ and $g < a_j(q_j)$. Without loss of generality, we assume $j=1$. That is, there exists $g<{a_1}(q_1)$ which satisfies $Q' = g \oplus {a_2}(q_2) \oplus \cdots \oplus {a_n}(q_n) $. If we choose $r'_1$ such that $(r'_1 + r_2 + r_3 + \cdots + r_n) \bmod h = k'$ and $0 \leq r'_1 < h$, then $gh+r_1'<a_1(q_1)h\leq a_1(q_1)h+r_1=G_{S_1}(x_1)$, and therefore, there exists $x'_1$ such that $G_{S_1}(x_1')=gh+r_1'$ and $x_1-x'_1 \in S_1$. Thus, we have $(x'_1,x_2,\ldots, x_n) \in X_1$. Therefore, \begin{align*} \mathcal{G}(x'_1,x_2,\ldots,x_n) &=\left(\left\lfloor\dfrac{G_{S_1}(x_1')}{h} \right\rfloor \oplus\left\lfloor\dfrac{G_{S_2}(x_2)}{h} \right\rfloor\oplus \cdots \oplus\left\lfloor\dfrac{G_{S_n}(x_n)}{h} \right\rfloor\right)h\\ &\quad +(x'_1 + x_2 + \cdots + x_n)\bmod h\\ &=(g \oplus {a_{2}}(q_{2})\oplus \cdots \oplus {a_n}(q_n))h+k'=Q'h+k'= k \end{align*} from the induction hypothesis. Next, we show that, if $(x_1,x_2,\ldots,x_n)$ $\rightarrow$ $(x'_1,x'_2,\ldots,x'_n)$, then $$ Q(x_1,x_2,\ldots,x_n)h + R(x_1,x_2,\ldots,x_n)\ne Q(x'_1,x'_2,\ldots,x'_n)h + R(x'_1,x'_2,\ldots,x'_n). $$ Clearly, the claim is true if $(x'_1,x'_2,\ldots,x'_n)$ is in $Y$, since $R(x'_1,x'_2,\ldots,x'_n) \neq R(x_1,x_2,\ldots,x_n)$.
Therefore, we assume without loss of generality that $(x'_1,x'_2,\ldots,x'_n)$ is in $X_1$, namely $x'_j= x_j$ $(j>1)$ and $x_1-x'_1 \in S_{1}$. Let $x'_1 = q'_1 h + r'_1\ (0\leq r'_1<h)$. If $Q(x_1,x_2,\ldots,x_n)h + R(x_1,x_2,\ldots,x_n) = Q(x'_1,x_2,\ldots,x_n)h + R(x'_1,x_2,\ldots,x_n)$, then we have $Q(x_1,x_2,\ldots,x_n) = Q(x'_1,x_2,\ldots,x_n)$ and $R(x_1,x_2,\ldots,x_n) = R(x'_1,x_2,\ldots,x_n)$. Then $r_1 = r'_1$ since $R(x_1,x_2,\ldots,x_n) = R(x'_1,x_2,\ldots,x_n)$, and $\lfloor{G_{S_1}(x_1)}/{h}\rfloor =\lfloor{G_{S_1}(x_1')}/{h} \rfloor$ since $Q(x_1,x_2,\ldots,x_n) = Q(x'_1,x_2,\ldots,x_n)$. Therefore $G_{S_1}(x_1)=G_{S_1}(x_1')$, which is impossible because $x_1-x'_1 \in S_{1}$. \end{proof} There are a variety of subtraction games whose $\mathcal{G}$-value sequence is the $h$-stair of a simple integer sequence. \begin{exam}[Nim] For any $h$, $G_{\mathbb{N}_{+}}(x) = x = \left(\left\lfloor \frac{x}{h} \right\rfloor\right)h+(x \bmod h)$. \end{exam} \begin{exam}[Subtraction($\{1,\ldots,l-1\}$) and its variants] If $\{1,\ldots,l-1\} \subset S \subset \mathbb{N}_+\setminus\{kl\mid k\in \mathbb{N}_+\}$ and $h\mid l$, then \begin{align*} G_{S}(x) = x\bmod l = \left(\left\lfloor \frac{x \bmod l}{h} \right\rfloor\right)h+((x \bmod l)\bmod h). \end{align*} \end{exam} \begin{exam}[All-but($\{h,2h,\ldots,kh\}$)] If $S=\mathbb{N}_{+}\setminus \{h,2h,\ldots,kh\}$, then $G_{S}(x)$ is the $h$-stair of $\{\underbrace{0,0,\ldots,0}_{k+1}, \underbrace{1,1,\ldots,1}_{k+1},\underbrace{2,2,\ldots,2}_{k+1},\ldots\}$. \end{exam} \begin{exam}[All-but($\{s_1,s_2\}$) \cite{ASiegel}] If $s_2>s_1$, then $G_{S}(x)$ is the $s_1$-stair of a sequence of positive integers. \end{exam} Theorem \ref{generalizednimhoff} allows us to combine several subtraction games whose $\mathcal{G}$-value sequences are $h$-stairs for a common $h$. 
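The combination formula can also be checked computationally. The following Python sketch (ours, not part of the paper) compares brute-force Grundy values of a small GCN game with the closed formula of Theorem \ref{generalizednimhoff}. It assumes the move set of GCN($h$; $S_1,\dots,S_n$) is: subtract an element of $S_i$ from a single pile, or remove a positive total of fewer than $h$ tokens distributed arbitrarily over the piles (a cyclic-Nimhoff move); all helper names are ours.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the least non-negative integer not in `values`."""
    m = 0
    while m in values:
        m += 1
    return m

def grundy_subtraction(S, limit):
    """Grundy values G_S(0), ..., G_S(limit-1) of Subtraction(S)."""
    g = []
    for x in range(limit):
        g.append(mex({g[x - s] for s in S if s <= x}))
    return g

def gcn_formula(h, gvals, pos):
    """Closed formula: (XOR_i floor(G_{S_i}(x_i)/h)) * h + (sum_i x_i) mod h."""
    q = 0
    for g, x in zip(gvals, pos):
        q ^= g[x] // h
    return q * h + sum(pos) % h

def gcn_bruteforce(h, sets, pos):
    """Brute-force Grundy value of a GCN(h; S_1, ..., S_n) position,
    assuming the moves are: subtract some s in S_i from one pile, or
    remove a total of t tokens with 0 < t < h, split over the piles."""
    def cyclic(p, t):
        # every position obtained from p by removing exactly t tokens in total
        if len(p) == 1:
            if t <= p[0]:
                yield (p[0] - t,)
            return
        for k in range(min(t, p[0]) + 1):
            for rest in cyclic(p[1:], t - k):
                yield (p[0] - k,) + rest

    @lru_cache(maxsize=None)
    def g(p):
        opts = set()
        for i, S in enumerate(sets):
            opts.update(g(p[:i] + (p[i] - s,) + p[i + 1:]) for s in S if s <= p[i])
        for t in range(1, h):
            opts.update(g(q) for q in cyclic(p, t))
        return mex(opts)

    return g(tuple(pos))
```

For instance, with $h=2$, $S_1=\mathbb{N}_+$ (Nim) and $S_2=\{1\}$ (so $G_{S_2}(x)=x\bmod 2$, a $2$-stair), both computations agree on every small position.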
For example, for GCN($4$; $\mathbb{N}_{+}$, $\{1,2,3,4,5,6,7\}$, $\mathbb{N}_{+}\setminus \{4,8\}$), we have the following: \begin{align*} \mathcal{G}(x_1,x_2,x_3) =\left(\left\lfloor \frac{x_1}{4}\right\rfloor\oplus\left\lfloor \frac{x_2 \bmod 8}{4}\right\rfloor\oplus\left\lfloor \frac{x_3}{12} \right\rfloor\right)\times 4+(x_1+x_2+x_3) \bmod 4. \end{align*} Suppose that a subtraction set $S$ is given. Then we can define a new subtraction set $S'$ such that the $\mathcal{G}$-value sequence of $\mathrm{Subtraction}(S')$ is the $h$-stair of the $\mathcal{G}$-value sequence of $\mathrm{Subtraction}(S)$. \begin{thm} Let $S$ be an arbitrary subtraction set and let $S'=\mathbb{N}_{+}\setminus \{(\mathbb{N}_{+}\setminus S)h\}$, where $(\mathbb{N}_{+}\setminus S)h$ denotes $\{mh \mid m \in \mathbb{N}_{+}\setminus S\}$. Then \begin{align*} G_{S'}(n) =G_{S}\left(\left\lfloor \frac{n}{h} \right\rfloor\right)h+(n \bmod h). \end{align*} \end{thm} \begin{proof} Let $n=qh+i$ where $0\leq i<h$. Then the formula to be shown is $G_{S'}(qh+i)=G_{S}(q)h+i$. The proof is by induction on $n$ $(=qh+i)$.\\ First, we show that for every value $rh+k$ smaller than $G_{S}(q)h+i$, there exists a move to a position with $\mathcal{G}$-value $rh+k$. There are two cases.\\ Case that $r=G_{S}(q)$ and $k < i$:\\ Since $0< i-k <h$, the difference $i-k$ is not a multiple of $h$ and hence belongs to $S'$, so there exists a move $qh+i \rightarrow qh+k$, and we have \begin{align*} G_{S'}(qh+k)=G_{S}(q)h+k=rh+k \end{align*} by the induction hypothesis.\\ Case that $r<G_{S}(q)$ and $0\leq k<h$:\\ By the definition of $G_{S}(q)$, there exists $q'$ such that $q-q'\in S$, $G_{S}(q')=r$ and \begin{align*} G_{S'}(q'h+k)=G_{S}(q')h+k=rh+k \end{align*} by the induction hypothesis. So we only need to prove that there is a move to $q'h+k$.\\ If $i \neq k$, the difference $qh+i-(q'h+k)$ is positive and not a multiple of $h$, hence belongs to $S'$, so there exists a move $qh+i \rightarrow q'h+k$.\\ Assume that $i = k$ and that there does not exist a move $qh+i \rightarrow q'h+k$. 
Then \begin{align*} (q-q')h \notin S'\Rightarrow (q-q')h \in (\mathbb{N}_{+}\setminus S)h\Rightarrow q-q' \in (\mathbb{N}_{+} \setminus S)\Rightarrow q-q' \notin S, \end{align*} which is a contradiction.\\ Next, we show that, if $n=qh+i \rightarrow n'=q'h+k$, then \begin{center} $G_{S'}(n') \neq G_{S}(q)h+i$. \end{center} Suppose that $G_{S'}(n')=G_{S}(q)h+i$. By the induction hypothesis, $G_{S'}(n')=G_{S}(q')h+k$, so $G_{S}(q)=G_{S}(q')$ and $k=i$. But this is impossible: by the definition of the Grundy value $G_{S}(q)$, the equality $G_{S}(q)=G_{S}(q')$ forces $q-q' \notin S$, and then \begin{align*} (q-q') \notin S\Rightarrow (q-q') \in (\mathbb{N}_{+}\setminus S)\Rightarrow (q-q')h \in (\mathbb{N}_{+} \setminus S)h\\ \Rightarrow (q-q')h \notin \mathbb{N}_{+}\setminus\{(\mathbb{N}_{+} \setminus S)h\}\Rightarrow n-n'\notin S', \end{align*} which contradicts the assumption that $n \rightarrow n'$ is a move. \end{proof} \addcontentsline{toc}{section}{{\bf References}} Affiliations of the authors:\\ Tomoaki Abuku: University of Tsukuba, Ibaraki, Japan.\\ Masanori Fukui: Hiroshima University, Hiroshima, Japan and Osaka Electro-Communication University, Osaka, Japan.\\ Ko Sakai: University of Tsukuba, Ibaraki, Japan.\\ Koki Suetsugu: Kyoto University, Kyoto, Japan. Presently with National Institute of Informatics, Tokyo, Japan. \end{document}
\begin{document} \title[Admissible fundamental operators] {Admissible fundamental operators associated with two domains related to $\mu$-synthesis} \author[Bappa Bisai]{Bappa Bisai} \address{Mathematics group, Harish-Chandra Research Institute, HBNI, Chhatnag Road, Jhunsi, Allahabad, 211019, India.} \email{[email protected], [email protected]} \keywords{Symmetrized polydisc, Tetrablock, $\Gamma_{n}$-contraction, $\mathbb E$-contraction, Fundamental operator tuple, Fundamental operator pair, Conditional dilation.} \subjclass[2010]{47A13, 47A20, 47A25, 47A45} \begin{abstract} A commuting $n$-tuple of operators $(S_1, \dots, S_{n-1},P)$ on a Hilbert space $\mathcal{H}$ is called a $\Gamma_n$-contraction if the closed symmetrized polydisc \[ \Gamma_n=\left\{\left(\sum\limits_{1\leq i\leq n}z_i, \sum\limits_{1\leq i<j\leq n}z_iz_j, \dots, \prod\limits_{i=1}^nz_i\right): |z_i|\leq 1,\, i=1,\dots, n \right\} \] is a spectral set. Also a commuting triple of operators $(A,B,P)$ for which the closed tetrablock $\overline{\mathbb E}$ is a spectral set is called an $\mathbb E$-contraction, where \[ \mathbb E=\left\{ (\beta_1+\bar{\beta}_2x_3,\beta_2+\bar{\beta}_1x_3,x_3)\in \mathbb C^3: |x_3|< 1 \text{ and }|\beta_1|+|\beta_2|< 1 \right\}. \] To every $\Gamma_{n}$-contraction, there is a unique operator tuple $(A_1, \dots, A_{n-1})$, defined on $\overline{\text{Ran}}D_P$, such that \[ S_i-S_{n-i}^*P=D_{P}A_iD_P, \quad D_{P}=(I-P^*P)^{1/2}, \quad i=1, \dots, n-1. \] This is called the \textit{fundamental operator tuple} (or in short the $\mathcal{F}_O$-tuple) associated with the $\Gamma_{n}$-contraction. Similarly, for every $\mathbb{E}$-contraction there is a unique $\mathcal{F}_O$-pair, defined on $\overline{\text{Ran}}D_P$, such that \[ A-B^*P=D_PF_1D_P, \quad B-A^*P=D_PF_2D_P. 
\] In this article, we discuss a necessary condition for conditional dilation for both completely non-unitary (c.n.u) $\Gamma_{n}$-contractions and c.n.u $\mathbb E$-contractions. Consider two tuples, $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$, of operators defined on two Hilbert spaces. One of the principal goals is to identify a necessary and a sufficient condition guaranteeing the existence of a c.n.u $\Gamma_n$-contraction $(S_1, \dots, S_{n-1},P)$ such that $(A_1, \dots, A_{n-1})$ becomes the $\mathcal{F}_O$-tuple of $(S_1, \dots, S_{n-1},P)$ and $(B_1, \dots,B_{n-1})$ becomes the $\mathcal{F}_O$-tuple of $(S_1^*,\dots, S_{n-1}^*,P^*)$. Also, for two given pairs of operators $(F_1,F_2)$ and $(G_1,G_2)$ defined on two Hilbert spaces, we examine when there is a c.n.u $\mathbb E$-contraction $(A,B,P)$ such that $(F_1,F_2)$ becomes the $\mathcal{F}_O$-pair of $(A,B,P)$ and $(G_1,G_2)$ becomes the $\mathcal{F}_O$-pair of $(A^*,B^*,P^*)$. \end{abstract} \maketitle \section{Introduction and preliminaries} Throughout this article all operators are bounded linear transformations defined on separable complex Hilbert spaces.\\ In \cite{Agler}, Agler and Young defined the symmetrized bidisc $\mathbb G_2$ to study the $2 \times 2$ spectral Nevanlinna-Pick interpolation problem. Later Costara in \cite{costara} analyzed the symmetrized polydisc $\mathbb G_n$ to study the generalized $n \times n$ spectral Nevanlinna-Pick interpolation problem, where \[ \mathbb G_n =\left\{ \left(\sum_{1\leq i\leq n} z_i,\sum_{1\leq i<j\leq n}z_iz_j,\dots, \prod_{i=1}^n z_i \right): \,|z_i|< 1, i=1,\dots,n \right \}. \] In 2007, Abouhajar, White and Young introduced the tetrablock $\Bbb E$ (see \cite{A:W:Y}), where \[ \mathbb E=\left\{ (\beta_1+\overline{\beta}_2x_3,\beta_2+\overline{\beta}_1x_3, x_3): |x_3|< 1 \text{ and }|\beta_1|+|\beta_2|< 1 \right\}. 
\] The motivation for studying this domain comes from a class of interpolation problems in $H^{\infty}$ control theory. \\ The domains $\mathbb G_n$ and $\mathbb E$ originate in the $\mu$-synthesis problem. Given a linear subspace $E\subseteq \mathcal{M}_n(\mathbb C)$, the space of all $n \times n$ complex matrices, the structured singular value of $A \in \mathcal{M}_n(\mathbb{C})$ is the number \[ \mu_E(A)=\left(\inf\{\|X\|:X\in E \text{ and }(I-AX) \text{ is singular}\}\right)^{-1}. \] If $E= \mathcal{M}_n(\Bbb C)$, then $\mu_E(A)=\|A\|$, the operator norm of $A$. In this case the $\mu$-synthesis problem is the classical Nevanlinna-Pick interpolation problem. If $E$ is the space of all scalar multiples of the identity matrix $I_{n \times n}$, then $\mu_E(A)=r(A)$, the spectral radius of $A$. For any linear subspace $E$ of $\mathcal{M}_n(\Bbb C)$ that contains the identity matrix, $r(A)\leq \mu_{E}(A)\leq \|A\|$. For the control-theory motivations behind $\mu_E$ an interested reader can see \cite{doyel}. For a given subspace $E\subseteq \mathcal{M}_n(\mathbb C)$, the aim of the $\mu$-synthesis problem is to construct an analytic $n \times n$-matrix-valued function $F$ on the open unit disk $\mathbb D$ (with center at the origin) subject to a finite number of interpolation conditions such that $\mu_E(F(\lambda))<1$ for all $\lambda\in \Bbb D$. If $E=\{aI_{n\times n}: a \in \Bbb C\}$, then $\mu_E(A)=r(A)<1$ if and only if $\pi_n(\lambda_1, \dots, \lambda_n)\in \Bbb G_n$ (see \cite{costara}), where $\lambda_1,\dots, \lambda_n$ are the eigenvalues of $A$ and $\pi_n$ is the symmetrization map on $\Bbb C^n$ defined by \[ \pi_n(z_1, \dots, z_n)=\left(\sum_{1\leq i\leq n} z_i,\sum_{1\leq i<j\leq n}z_iz_j,\dots, \prod_{i=1}^n z_i\right). 
\] If $E$ is the linear subspace of $2 \times 2$ diagonal matrices, then for any $A=(a_{ij})\in \mathcal{M}_2(\Bbb C)$, $\mu_E(A)<1$ if and only if $(a_{11},a_{22}, \det A)\in \Bbb E$ (see \cite{A:W:Y}). Though the origin of these two domains is control engineering, the domains have been well studied by numerous mathematicians over the past two decades from the perspectives of complex geometry, function theory, and operator theory. An interested reader is referred to some exciting works of the recent past \cite{A:W:Y,Agler,costara,E:Z,T:B,PalAdv,B:P,B:P5,B:P6,S:B,S:Pdecom,PalNagy} and the references therein. In this article, we shall particularly focus on the operator theoretic side of $\mathbb G_n$ and $\Bbb E$. \begin{defn} A compact subset $X\subset \Bbb C^n$ is said to be a \textit{spectral set} for a commuting $n$-tuple of operators $\underline{T}=(T_1, \dots, T_n)$ if the Taylor joint spectrum $\sigma(\underline{T})$ of $\underline{T}$ is a subset of $X$ and the von Neumann inequality holds for every rational function, that is, \[ \|f(\underline{T})\|\leq \|f\|_{\infty, X}, \] for all rational functions $f$ with poles off $X$. \end{defn} For a detailed discussion of the Taylor joint spectrum, an interested reader is referred to Taylor's works \cite{Taylor, Taylor1} or Curto's survey article \cite{curto}. \begin{defn} A commuting $n$-tuple of operators $(S_1, \dots, S_{n-1},P)$ for which the closed symmetrized polydisc $\Gamma_{n}(=\overline{\Bbb G}_n)$ is a spectral set is called a \textit{$\Gamma_{n}$-contraction}. In a similar fashion, a commuting triple of operators $(A,B, P)$ for which $\overline{\Bbb E}$ is a spectral set is called an \textit{$\Bbb E$-contraction}. \end{defn} One of the most remarkable inventions in operator theory on these two domains is the existence and uniqueness of \textit{fundamental operator tuples} or \textit{fundamental operator pairs}. 
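For orientation, both fundamental equations can be solved by hand in the scalar case; the following computation is our illustration and not part of the cited results. For a point $(s,p)=(z+w,zw)\in\Gamma_2$ with $|z|,|w|\leq 1$ and $|p|<1$, the equation $S-S^*P=D_PAD_P$ reduces to the number

```latex
\[
\alpha=\frac{s-\bar{s}\,p}{1-|p|^{2}}
      =\frac{z(1-|w|^{2})+w(1-|z|^{2})}{1-|z|^{2}|w|^{2}},
\qquad
|\alpha|\leq\frac{(|z|+|w|)(1-|z||w|)}{(1-|z||w|)(1+|z||w|)}
        =\frac{|z|+|w|}{1+|z||w|}\leq 1.
\]
```

Similarly, for a scalar point $(a,b,p)\in\overline{\mathbb E}$ with $|p|<1$ the fundamental pair reduces to $f_1=(a-\bar{b}p)/(1-|p|^{2})$ and $f_2=(b-\bar{a}p)/(1-|p|^{2})$; one of the known characterizations of $\overline{\mathbb E}$ in \cite{A:W:Y}, namely $|a-\bar{b}p|+|b-\bar{a}p|\leq 1-|p|^{2}$, then gives $|f_1|+|f_2|\leq 1$.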
The concept of the fundamental operator of a $\Gamma$-contraction was first introduced in \cite{PalAdv}. It is proved in \cite{PalAdv} that for every $\Gamma$-contraction $(S,P)$ there is a unique operator $A$ on $\mathcal{D}_P$ such that \[ S-S^*P=D_PAD_P, \] where $D_P=(I-P^*P)^{1/2}$ and $\mathcal{D}_P=\overline{\textit{Ran}}D_P$. Later Pal in \cite{PalNagy} (see also Theorem 4.4 of \cite{A:Pal}) proved that for any $\Gamma_{n}$-contraction $(S_1, \dots, S_{n-1},P)$, there is a unique operator tuple $(A_1, \dots, A_{n-1})$ on $\mathcal{D}_P$ such that \[ S_i-S_{n-i}^*P=D_PA_iD_P, \text{ for each }i= 1, \dots, n-1. \] It is known (\cite{T:B}) that for every $\mathbb E$-contraction $(A,B,P)$, there are unique operators $F_1, F_2\in \mathcal{B}(\mathcal{D}_P)$ such that \[ A-B^*P=D_PF_1D_P, \qquad B-A^*P=D_PF_2D_P. \] The fundamental operator tuple of a $\Gamma_{n}$-contraction or the fundamental operator pair of an $\mathbb E$-contraction plays the central role in almost all results in the existing operator theoretic literature on these two domains. For this reason such a tuple or a pair was named the fundamental operator tuple or the fundamental operator pair respectively. \\ A contraction $P$ is called completely non-unitary (c.n.u) if it has no non-trivial proper reducing subspace on which its restriction is unitary. A $\Gamma_{n}$-contraction $(S_1, \dots,S_{n-1},P)$ is called c.n.u if $P$ is c.n.u. Sz.-Nagy and Foias constructed an explicit minimal isometric dilation (see \cite{Nagy}, Ch.~VI) for a c.n.u contraction. 
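A standard example may help fix the notion (this illustration is ours): the unilateral shift $M_z$ on $H^2(\mathbb D)$ is c.n.u, and in fact pure, since

```latex
\[
\|M_z^{*n}f\|^{2}=\sum_{k\geq n}|\hat{f}(k)|^{2}\longrightarrow 0
\quad (n\to\infty)
\qquad \text{for every } f\in H^{2}(\mathbb D),
\]
```

so $M_z^{*n}\to 0$ strongly. Consequently, any reducing subspace on which $M_z$ acts unitarily must lie in $\bigcap_{n\geq 1}z^{n}H^{2}(\mathbb D)=\{0\}$.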
\begin{defn} A commuting $n$-tuple of operators $\underline{T}=(T_1, \dots, T_{n})$ on a Hilbert space $\mathcal{H}$, having $X$ as a spectral set, is said to have a rational dilation if there exist a Hilbert space $\mathcal{K}$, an isometry $V:\mathcal{H}\to \mathcal{K}$ and an $n$-tuple of commuting normal operators $\underline{N}=(N_1, \dots,N_n)$ on $\mathcal{K}$ with $\sigma(\underline{N})\subseteq bX$ such that $f(\underline{T})=V^*f(\underline{N})V$, for every rational function $f$ on $X$, where $bX$ is the \textit{distinguished boundary} (to be defined in Section \ref{background}) of $X$. \end{defn} \begin{defn} Let $(S_1, \dots,S_{n-1},P)$ be a $\Gamma_{n}$-contraction on $\mathcal{H}$. A commuting tuple $(V_1,\dots, V_{n-1},V)$ on $\mathcal{K}$ is said to be a $\Gamma_{n}$-isometric dilation of $(S_1, \dots, S_{n-1},P)$ if $\mathcal{H}\subseteq\mathcal{K}$, $(V_1, \dots, V_{n-1},V)$ is a $\Gamma_{n}$-isometry and \[ P_{\mathcal{H}}\left(V_1^{m_1}\dots V_{n-1}^{m_{n-1}}V^m\right)|_{\mathcal{H}}=S_1^{m_1}\dots S_{n-1}^{m_{n-1}}P^m, \] for all non-negative integers $m_i, m$. Moreover, the dilation is called minimal if $\mathcal{K}=\overline{\text{span}}\left\{V_1^{m_1}\dots V_{n-1}^{m_{n-1}}V^mh:h \in \mathcal{H} \text{ and }m_i, m\in \mathbb N\cup \{0\}\right\}$. \end{defn} It was established in \cite{Agler} that rational dilation does hold for the symmetrized bidisc. In \cite{SPrational}, a Schaffer type explicit dilation and functional model were obtained for a $\Gamma_{n}$-contraction under certain conditions. The author of this paper and Pal constructed an explicit Sz.-Nagy-Foias type $\Gamma_{n}$-isometric dilation for a particular class of c.n.u $\Gamma_{n}$-contractions in \cite{B:P5}. 
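To have a concrete instance of the objects just defined, here is a simple $\Gamma_2$-isometry (our illustration, not taken from the cited papers):

```latex
\[
(S,P)=(I+M_z,\;M_z)=\pi_2(I,M_z)\quad\text{on } H^2(\mathbb D).
\]
```

Indeed, if $U$ denotes the bilateral shift on $L^2(\mathbb T)$, then $(I+U,U)=\pi_2(I,U)$ is a $\Gamma_2$-unitary (a symmetrization of commuting unitaries), $H^2(\mathbb D)$ is a joint invariant subspace of $I+U$ and $U$, and the restriction of $(I+U,U)$ to $H^2(\mathbb D)$ is $(S,P)$.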
In Section \ref{dilationforGamma} (see Theorem \ref{bisai}), we present a necessary condition for the existence of a $\Gamma_{n}$-isometric dilation of a c.n.u $\Gamma_{n}$-contraction. In dilation theory and in Theorem \ref{bisai}, the $\mathcal{F}_O$-tuples play a pivotal role.\\ We now know how important the role of $\mathcal{F}_O$-tuples is in the operator theory on the symmetrized polydisc $\mathbb G_n$. So it is important to know which pairs of operator tuples qualify as the $\mathcal{F}_O$-tuples of a $\Gamma_{n}$-contraction. As a consequence of Theorem \ref{bisai}, we develop a necessary condition on the $\mathcal{F}_O$-tuples of a c.n.u $\Gamma_{n}$-contraction $(S_1, \dots, S_{n-1},P)$ and its adjoint $(S_1^*, \dots, S_{n-1}^*,P^*)$. Indeed, Theorem \ref{necessarytheorem} gives a necessary condition on $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ for them to be the $\mathcal{F}_O$-tuples of $(S_1, \dots, S_{n-1},P)$ and $(S_1^*, \dots, S_{n-1}^*,P^*)$ respectively. Theorem \ref{necessarytheorem} of this article is a generalized version of Lemma 3.1 in \cite{B:P}, which deals with pure $\Gamma_{n}$-contractions. A contraction $P$ is called \textit{pure} if $P^{*n}$ converges strongly to $0$ as $n$ tends to infinity. A $\Gamma_{n}$-contraction $(S_1, \dots, S_{n-1},P)$ is called pure if the last component, that is $P$, is pure. A natural question arises: what about the converse of Theorem \ref{necessarytheorem}? 
That is, given two tuples of operators $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ defined on certain Hilbert spaces, does there exist a c.n.u $\Gamma_{n}$-contraction $(S_1, \dots, S_{n-1},P)$ such that $(A_1, \dots, A_{n-1})$ becomes the $\mathcal{F}_O$-tuple of $(S_1, \dots, S_{n-1},P)$ and $(B_1, \dots, B_{n-1})$ becomes the $\mathcal{F}_O$-tuple of $(S_1^*, \dots, S_{n-1}^*,P^*)$? We answer this question in Theorem \ref{sufficient}. Again, Theorem \ref{sufficient} can be regarded as a generalized version of Theorem 3.4 in \cite{B:P}, which deals with pure contractions. Also our results generalize the existing similar results for $\Gamma$-contractions \cite{TirthaHari}.\\ Let $(F_1, F_2)$ and $(G_1,G_2)$ be two pairs of operators defined on two Hilbert spaces. All the results obtained for $\Gamma_{n}$-contractions are then applied to $\mathbb E$-contractions to decipher when there is a c.n.u $\mathbb E$-contraction $(A,B,P)$ such that $(F_1,F_2)$ becomes the $\mathcal{F}_O$-pair of $(A,B,P)$ and $(G_1, G_2)$ becomes the $\mathcal{F}_O$-pair of $(A^*,B^*,P^*)$. This is the content of Theorem \ref{tetrablock} and it is a generalization of Theorem 3 in \cite{TirthaHari}.\\ By virtue of the unitary map $U:H^2(\mathbb D) \to H^2(\mathbb T)$ defined by \[ z^n\mapsto e^{int}, \] the Hilbert spaces $H^2(\mathbb D)$ and $H^2(\mathbb T)$ are unitarily equivalent. Also, for a Hilbert space $\mathcal{E}$, $H^2(\mathcal{E})$ is unitarily equivalent to $H^2(\mathbb D)\otimes \mathcal{E}$. In the sequel, we shall identify these unitarily equivalent spaces and use them, without mention, interchangeably as per notational convenience. \section{Background Material and Preparatory Results}\label{background} This section contains a few known facts about c.n.u contractions, $\Gamma_{n}$-contractions and $\mathbb E$-contractions. 
\subsection{Isometric dilations of a c.n.u contraction} \noindent In this subsection we recall two minimal isometric dilations of a c.n.u contraction. Let $P$ on $\mathcal{H}$ be a c.n.u contraction.\\ \begin{enumerate} \item[(i)] If $P$ is a c.n.u contraction, then $\{P^nP^{*n}: n \geq 1\}$ is a non-increasing sequence of positive operators, so it converges strongly. Suppose $\mathcal{A}_* $ is the strong limit of $\{P^nP^{*n}:n \geq 1\}$. Then $P^n\mathcal{A}_* P^{*n}=\mathcal{A}_* $ for every $n\geq 1$. This defines an isometry \begingroup \allowdisplaybreaks \begin{align*} V_*:& \overline{\textit{Ran}}(\mathcal{A}_*) \to \overline{\textit{Ran}}(\mathcal{A}_*)\\ & \mathcal{A}_*^{1/2}x \mapsto \mathcal{A}_*^{1/2}P^*x. \end{align*} \endgroup An interested reader can see Ch.~3 of \cite{Kubrusly} for more details of the map $V_*$. Since $V_*$ is an isometry on $\overline{\textit{Ran}}(\mathcal{A}_*)$, let $U$ on $\mathcal{K}$ be the minimal unitary extension of $V_*$. Let $D_P=(I-P^*P)^{1/2}$, $D_{P^*}=(I-PP^*)^{1/2}$, $\mathcal{D}_P=\overline{\textit{Ran}}D_P$ and $\mathcal{D}_{P^*}=\overline{\textit{Ran}}D_{P^*}$. Define an isometry \begingroup \allowdisplaybreaks \begin{align*} \varphi_1 :& \mathcal{H} \to H^2(\mathcal{D}_{P^*}) \oplus \mathcal{K}\\ & h \mapsto \sum\limits_{n=0}^{\infty}z^nD_{P^*}P^{*n}h \oplus \mathcal{A}_*^{1/2}h. \end{align*} \endgroup It was proved in \cite{Douglas} that $\begin{pmatrix} M_z & 0\\ 0 & U^* \end{pmatrix}$ on $H^2(\mathcal{D}_{P^*})\oplus \mathcal{K}$ is a minimal isometric dilation of $\varphi_1 P\varphi_1 ^*$ and \begin{equation}\label{eqn8} \varphi_1 P^*= \begin{pmatrix} M_z & 0\\ 0 & U^* \end{pmatrix}^*\varphi_1. \end{equation} \item[(ii)] The notion of the \textit{characteristic function} of a contraction was introduced by Sz.-Nagy and Foias \cite{Nagy}. 
For a contraction $P$ on $\mathcal{H}$, let $\Lambda_P=\{z \in \mathbb C: (I-zP^*) \text{ is invertible}\}$. For $z\in \Lambda_P$, the characteristic function of $P$ is defined as \[ \Theta_P(z)= [-P + zD_{P^*}(I-zP^*)^{-1}D_P]|_{\mathcal{D}_P}. \] By virtue of the relation $PD_P=D_{P^*}P$ (Section I.3 of \cite{Nagy}), $\Theta_P(z)$ maps $\mathcal{D}_P$ into $\mathcal{D}_{P^*}$ for every $z\in \Lambda_P$. Clearly, $\Theta_P$ induces a multiplication operator $M_{\Theta_P}$ from $H^2(\mathcal{D}_P)$ into $H^2(\mathcal{D}_{P^*})$ by \[ M_{\Theta_P}g(z)=\Theta_P(z)g(z), \text{ for all }g \in H^2(\mathcal{D}_P) \text{ and }z\in \mathbb D. \] Consider the operator \[ \Delta_P(t)=(I_{\mathcal{D}_P}-\Theta_P(e^{it})^*\Theta_P(e^{it}))^{1/2} \] on $L^2(\mathcal{D}_P)$, defined for those $t$ at which $\Theta_P(e^{it})$ exists, and the subspaces \begingroup \allowdisplaybreaks \begin{align*} &\textbf{K}=H^2(\mathcal{D}_{P^*}) \oplus \overline{\Delta_P L^2(\mathcal{D}_P)},\\ &\mathcal{Q}_P=\{\Theta_P f \oplus \Delta_P f: f \in H^2(\mathcal{D}_P)\},\\ &\mathcal{H}_P=\textbf{K}\ominus \mathcal{Q}_P. \end{align*} \endgroup If $P$ is a c.n.u contraction defined on a Hilbert space $\mathcal{H}$, then (see Ch.~VI of \cite{Nagy}) there exists an isometry $\varphi_2: \mathcal{H}\to \textbf{K}$ with $\varphi_2\mathcal{H}=\mathcal{H}_P$ such that $\begin{pmatrix} M_z & 0\\ 0 & M_{e^{it}} \end{pmatrix}$ on $\textbf{K}$ is a minimal isometric dilation of $\varphi_2P\varphi_2^*$ and \begin{equation}\label{eqn9} \varphi_2 P^*= \begin{pmatrix} M_z & 0\\ 0 & M_{e^{it}} \end{pmatrix}^*\varphi_2. \end{equation} \end{enumerate} Before proceeding further, we recall a result from \cite{B:P6} which will be used in the sequel. 
\begin{lem}[\cite{B:P6}, Lemma 3.1]\label{BisaiPallem} Let $E_i, K_i$ for $i=1,2$, be Hilbert spaces. Suppose $U$ is a unitary from $(H^2(\mathbb D)\otimes E_1) \oplus K_1$ to $(H^2(\mathbb D)\otimes E_2) \oplus K_2$ such that \[ (M_z \otimes I_{E_2}) \oplus W_2 = U((M_z \otimes I_{E_1}) \oplus W_1)U^*, \] where $W_1$ on $K_1$ and $W_2$ on $K_2$ are unitaries. Then $U$ is of the form $(I_{H^2(\mathbb D)}\otimes U_1)\oplus U_2$ for some unitaries $U_1:E_1 \to E_2$ and $U_2:K_1 \to K_2$. \end{lem} Since the isometries $\varphi_1$ and $\varphi_2$ give two minimal isometric dilations of the c.n.u contraction $P$, and the minimal isometric dilation of a contraction is unique up to unitary equivalence, there is a unitary $\mathfrak{U}: H^2(\mathcal{D}_{P^*})\oplus\overline{\Delta_P L^2(\mathcal{D}_P)}\to H^2(\mathcal{D}_{P^*})\oplus \mathcal{K}$ such that $\mathfrak{U}\varphi_2=\varphi_1$ and \begin{equation}\label{eqn10} \mathfrak{U}\begin{pmatrix} M_z & 0\\ 0 & M_{e^{it}} \end{pmatrix}^*= \begin{pmatrix} M_z & 0\\ 0 & U^* \end{pmatrix}^*\mathfrak{U}. \end{equation} Therefore, by Lemma \ref{BisaiPallem}, $\mathfrak{U}$ has the block matrix form \begin{equation}\label{eqnU} \mathfrak{U}=\begin{pmatrix} I_{H^2(\mathbb D)}\otimes V_1 & 0\\ 0 & V_2 \end{pmatrix}, \end{equation} for some unitaries $V_1\in \mathcal{B}(\mathcal{D}_{P^*})$ and $V_2\in \mathcal{B}(\overline{\Delta_P L^2(\mathcal{D}_P)},\mathcal{K})$. Equations \eqref{eqn8}, \eqref{eqn9}, \eqref{eqn10} and \eqref{eqnU} will be used frequently in the sequel.\\ \subsection{$\Gamma_n$-contractions, $\mathbb E$-contractions and their special classes} We recall from the literature a few definitions and facts about the operators associated with $\Gamma_{n}$ and $\overline{\mathbb E}$. 
\begin{defn} A commuting $n$-tuple of Hilbert space operators $(S_1, \dots, S_{n-1},P)$ for which $\Gamma_{n}$ is a spectral set is called a $\Gamma_{n}$-contraction. Similarly, a commuting triple of Hilbert space operators $(A,B,P)$ for which $\overline{\mathbb{E}}$ is a spectral set is called an $\mathbb E$-contraction. \end{defn} The sets $\Gamma_{n}$ and $\overline{\mathbb E}$ are not convex but polynomially convex. By an application of the Oka-Weil theorem, we have the following characterizations of $\Gamma_{n}$-contractions and of $\mathbb E$-contractions. \begin{lem}[\cite{S:Pdecom}, Theorem 2.4]\label{SPLem} A commuting tuple of bounded operators $(S_1, \dots,S_{n-1},P)$ is a $\Gamma_{n}$-contraction if and only if \[ \|f(S_1, \dots, S_{n-1},P)\|\leq \|f\|_{\infty, \Gamma_{n}} \] for any holomorphic polynomial $f$ in $n$ variables. \end{lem} \begin{lem}[\cite{T:B}, Lemma 3.3]\label{TBLem} A commuting triple of bounded operators $(A,B,P)$ is a tetrablock contraction if and only if \[ \|f(A,B,P)\|\leq \|f\|_{\infty,\overline{\mathbb E}} \] for any holomorphic polynomial $f$ in three variables. \end{lem} It is evident from the above results that the adjoint of a $\Gamma_{n}$-contraction or an $\mathbb E$-contraction is again a $\Gamma_{n}$-contraction or an $\mathbb E$-contraction, and that if $(S_1, \dots, S_{n-1},P)$ is a $\Gamma_{n}$-contraction or $(A,B,P)$ is an $\mathbb E$-contraction, then $P$ is a contraction. Unitaries, isometries, c.n.u contractions, pure contractions etc. are special classes of contractions. There are analogous classes for $\Gamma_{n}$-contractions and $\mathbb E$-contractions in the literature. Before discussing them, we shall recollect a few facts from the literature. 
For a compact subset $X$ of $\mathbb C^n$, let $\mathcal{A}(X)$ be the algebra of functions continuous on $X$ and holomorphic in the interior of $X$. A \textit{boundary} of $X$ (with respect to $\mathcal{A}(X)$) is a closed subset of $X$ on which every function in $\mathcal{A}(X)$ attains its maximum modulus. It follows from the theory of uniform algebras that the intersection of all boundaries of $X$ is also a boundary of $X$ (with respect to $\mathcal{A}(X)$) (see Theorem 9.1 of \cite{wermer}). This smallest boundary is called the \textit{distinguished boundary} of $X$ and is denoted by $bX$. The distinguished boundary of $\Gamma_{n}$, denoted by $b\Gamma_{n}$, was determined in \cite{E:Z} to be the symmetrization of the $n$-torus, i.e., \[ b\Gamma_{n}=\pi_n(\mathbb T^n)=\left\{\left(\sum\limits_{1\leq i\leq n}z_i, \sum\limits_{1\leq i<j\leq n}z_iz_j, \dots, \prod_{i=1}^{n} z_i\right): |z_i|=1, i= 1, \dots, n\right\}. \] Also, it is known from the literature (see \cite{A:W:Y}) that the distinguished boundary of $\overline{\mathbb{E}}$ is the following set: \[ b\mathbb E=\left\{(a,b,p)\in \overline{\mathbb E}: |p|=1\right\}. \] Now we are in a position to present the special classes of $\Gamma_{n}$-contractions. \begin{defn} Let $S_1,\dots, S_{n-1},P$ be commuting operators on $\mathcal H$. 
Then $(S_1,\dots, S_{n-1},P)$ is called \begin{itemize} \item[(i)] a $\Gamma_n$-\textit{unitary} if $S_1,\dots, S_{n-1},P$ are normal operators and the Taylor joint spectrum $\sigma_T (S_1,\dots, S_{n-1},P)$ is a subset of $b\Gamma_n$; \item[(ii)] a $\Gamma_n$-isometry if there exist a Hilbert space $\mathcal K \supseteq \mathcal H$ and a $\Gamma_n$-unitary $(T_1,\dots,T_{n-1},U)$ on $\mathcal K$ such that $\mathcal H$ is a joint invariant subspace of $T_1,\dots, T_{n-1},U$ and that $(T_1|_{\mathcal H},\dots, T_{n-1}|_{\mathcal H},U|_{\mathcal H})=(S_1,\dots, S_{n-1},P)$; \item[(iii)] a c.n.u $\Gamma_n$-contraction if $(S_1,\dots, S_{n-1},P)$ is a $\Gamma_n$-contraction and $P$ is a c.n.u contraction; \item[(iv)] a pure $\Gamma_{n}$-contraction if $(S_1, \dots, S_{n-1},P)$ is a $\Gamma_{n}$-contraction and $P$ is a pure contraction. \end{itemize} \end{defn} We obtain from the literature the following analogous classes of $\mathbb E$-contractions. \begin{defn} Let $A,B,P$ be commuting operators on $\mathcal H$. Then $(A,B,P)$ is called \begin{itemize} \item[(i)] an $\mathbb E$-\textit{unitary} if $A,B,P$ are normal operators and the Taylor joint spectrum $\sigma_T (A,B,P)$ is a subset of $b\mathbb{E}$; \item[(ii)] an $\mathbb E$-isometry if there exists a Hilbert space $\mathcal K \supseteq \mathcal H$ and an $\mathbb E$-unitary $(Q_1,Q_2,V)$ on $\mathcal K$ such that $\mathcal H$ is a joint invariant subspace of $Q_1,Q_2,V$ and that $(Q_1|_{\mathcal H}, Q_2|_{\mathcal H},V|_{\mathcal H})=(A,B,P)$; \item[(iii)] a c.n.u $\mathbb E$-contraction if $(A,B,P)$ is an $\mathbb E$-contraction and $P$ is a c.n.u contraction; \item[(iv)] a pure $\mathbb E$-contraction if $(A,B,P)$ is an $\mathbb E$-contraction and $P$ is a pure contraction. 
\end{itemize} \end{defn} The theorem below provides a set of characterizations of a $\Gamma_n$-unitary. \begin{thm}[\cite{SPrational}, Theorem 4.2]\label{gammauni} Let $S_1, \dots, S_{n-1},P$ be commuting operators on a Hilbert space $\mathcal{H}$. Then the following are equivalent: \begin{itemize} \item[(i)] $(S_1, \dots, S_{n-1},P)$ is a $\Gamma_n$-unitary; \item[(ii)] there exist commuting unitary operators $U_1, \dots, U_n$ on $\mathcal{H}$ such that \[ (S_1, \dots, S_{n-1},P)=\pi_n(U_1, \dots, U_n); \] \item[(iii)] $(S_1, \dots, S_{n-1},P)$ is a $\Gamma_n$-contraction and $P$ is a unitary; \item[(iv)] $P$ is a unitary, $S_i=S_{n-i}^*P$ for each $i=1, \dots, n-1$ and $\bigg(\dfrac{n-1}{n}S_1, \dfrac{n-2}{n}S_2, \dots, \dfrac{1}{n}S_{n-1}\bigg)$ is a $\Gamma_{n-1}$-contraction. \end{itemize} \end{thm} For a given tuple, how does one determine whether it is a $\Gamma_{n}$-isometry or not? The next theorem gives necessary and sufficient conditions. \begin{thm}[\cite{SPrational}, Theorem 4.4]\label{gammaiso} Let $S_1, \dots, S_{n-1},P$ be commuting operators on a Hilbert space $\mathcal{H}$. Then the following are equivalent: \begin{itemize} \item[(i)] $(S_1, \dots, S_{n-1},P)$ is a $\Gamma_n$-isometry; \item[(ii)] $(S_1, \dots, S_{n-1},P)$ is a $\Gamma_n$-contraction and $P$ is an isometry; \item[(iii)] $P$ is an isometry, $S_i=S_{n-i}^*P$ for each $i=1, \dots, n-1$ and $\Big(\dfrac{n-1}{n}S_1, \dfrac{n-2}{n}S_2, \dots, \dfrac{1}{n}S_{n-1}\Big)$ is a $\Gamma_{n-1}$-contraction. \end{itemize} \end{thm} We conclude this section by stating analogues of Theorems \ref{gammauni} and \ref{gammaiso} for $\mathbb E$-contractions. 
\begin{thm}[{\cite{T:B}, Theorem 5.4}]\label{thm:tu}
Let $\underline N = (N_1, N_2, N_3)$ be a commuting triple of bounded operators. Then the following are equivalent.
\begin{enumerate}
\item $\underline N$ is an $\mathbb E$-unitary,
\item $N_3$ is a unitary and $\underline N$ is an $\mathbb E$-contraction,
\item $N_3$ is a unitary, $N_2$ is a contraction and $N_1 = N_2^* N_3$.
\end{enumerate}
\end{thm}
\begin{thm}[{\cite{T:B}, Theorem 5.7}]\label{thm:ti}
Let $\underline V = (V_1, V_2, V_3)$ be a commuting triple of bounded operators. Then the following are equivalent.
\begin{enumerate}
\item $\underline V$ is an $\mathbb E$-isometry.
\item $V_3$ is an isometry and $\underline V$ is an $\mathbb E$-contraction.
\item $V_3$ is an isometry, $V_2$ is a contraction and $V_1=V_2^* V_3$.
\end{enumerate}
\end{thm}
\section{Necessary condition of conditional dilation for c.n.u $\Gamma_n$-contractions}\label{dilationforGamma}
Let $(S_1, \dots, S_{n-1},P)$ be a $\Gamma_n$-contraction on $\mathcal{H}$ with $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ being the $\mathcal{F}_O$-tuples of $(S_1, \dots, S_{n-1},P)$ and $(S_1^*, \dots, S_{n-1}^*,P^*)$ respectively. Suppose
\[
\Sigma_1(z)=\left(\dfrac{n-1}{n}(A_1+zA_{n-1}^*), \dfrac{n-2}{n}(A_2+zA_{n-2}^*), \dots, \dfrac{1}{n}(A_{n-1}+zA_{1}^*)\right)
\]
and
\[
\Sigma_2(z)=\left(\dfrac{n-1}{n}(B_1^*+zB_{n-1}), \dfrac{n-2}{n}(B_2^*+zB_{n-2}), \dots, \dfrac{1}{n}(B_{n-1}^*+zB_{1})\right).
\]
The following theorem provides a conditional dilation of a c.n.u $\Gamma_n$-contraction.
\begin{thm}[\cite{B:P6}, Corollary 3.6]\label{BisaiPalDilation}
Let $(S_1, \dots, S_{n-1},P)$ be a c.n.u $\Gamma_n$-contraction on a Hilbert space $\mathcal{H}$ with $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ being the $\mathcal{F}_O$-tuples of $(S_1, \dots, S_{n-1},P)$ and $(S_1^*, \dots, S_{n-1}^*,P^*)$ respectively. Let $\Sigma_1(z)$ and $\Sigma_2(z)$ be $\Gamma_{n-1}$-contractions for all $z\in \mathbb D$. Then there is an isometry $\varphi_{BS}$ from $\mathcal{H}$ into $H^2(\mathcal{D}_{P^*})\oplus\overline{\Delta_P L^2(\mathcal{D}_P)}$ such that for $i=1, \dots, n-1$,
\[
\varphi_{BS}S_i^*=\left(M_{G_i^*+zG_{n-i}}\oplus\widetilde{S}_{i2}\right)^*\varphi_{BS} \text{ and }\varphi_{BS}P^*=\left(M_z\oplus M_{e^{it}}\right)^*\varphi_{BS},
\]
where $(M_{G_1^*+zG_{n-1}}, \dots, M_{G_{n-1}^*+zG_1},M_z)$ on $H^2(\mathcal{D}_{P^*})$ is a pure $\Gamma_n$-isometry and $(\widetilde{S}_{12},\dots, \widetilde{S}_{(n-1)2},M_{e^{it}})$ on $\overline{\Delta_P L^2(\mathcal{D}_P)}$ is a $\Gamma_n$-unitary.
\end{thm}
The following result from \cite{S:B} will be useful.
\begin{lem}[\cite{S:B}, Theorem 4.10]\label{lem1}
If $(T_1, \dots, T_{n-1},M_z)$ on $H^2(\mathcal{E})$ is a $\Gamma_n$-isometry, then for each $i=1, \dots, n-1$,
$$T_i=M_{Y_i+zY_{n-i}^*}$$
for all $z\in \mathbb D$ and for some $Y_i\in\mathcal{B}(\mathcal{E})$.
\end{lem}
In order to prove the main result (Theorem \ref{bisai}) of this section, we need the following lemma.
\begin{lem}\label{lemmain}
Let $P$ be a c.n.u contraction on $\mathcal{H}$.
Suppose $(M_{X_1^*+zX_{n-1}}, \dots, M_{X_{n-1}^*+zX_1},M_z)$ on $H^2(\mathcal{D}_{P^*})$ is a $\Gamma_n$-isometry and $(R_1, \dots, R_{n-1},M_{e^{it}})$ is a $\Gamma_n$-unitary on $\overline{\Delta_P L^2(\mathcal{D}_P)}$. If for each $i=1, \dots, n-1,$
\begin{equation}\label{eqn1}
\begin{pmatrix} M_{X_i^*+zX_{n-i}} & 0\\ 0 & R_i \end{pmatrix}\mathcal{Q}_P \subseteq \mathcal{Q}_P,
\end{equation}
then there exists a $\Gamma_n$-isometry $(M_{Y_1+zY_{n-1}^*}, \dots, M_{Y_{n-1}+zY_1^*},M_z)$ on $H^2(\mathcal{D}_P)$ such that for each $i=1, \dots, n-1,$
\[
\begin{pmatrix} M_{X_i^*+zX_{n-i}} & 0\\ 0 & R_i \end{pmatrix}\begin{pmatrix} M_{\Theta_P} \\ \Delta_P \end{pmatrix}=\begin{pmatrix} M_{\Theta_P} \\ \Delta_P \end{pmatrix}M_{Y_i+zY_{n-i}^*}.
\]
\begin{proof}
From Equation \eqref{eqn1}, we can define an operator $T_i$ on $H^2(\mathcal{D}_P)$ in the following way:
\begin{equation}\label{eqn2}
\begin{pmatrix} M_{X_i^*+zX_{n-i}} & 0\\ 0 & R_i \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}=\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}T_i
\end{equation}
for each $i$. Since $\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}$ is an isometry on $H^2(\mathcal{D}_P)$,
\begin{equation}\label{eqn3}
T_i=\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}^*\begin{pmatrix} M_{X_i^*+zX_{n-i}} & 0\\ 0 & R_i \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}.
\end{equation}
\textbf{Claim.} $(T_1, \dots, T_{n-1},M_z)$ on $H^2(\mathcal{D}_P)$ is a $\Gamma_n$-isometry.
\\ \noindent\textit{Proof of Claim.} Using Equation \eqref{eqn3} and the facts that $\mathcal{Q}_P$ is invariant under $\begin{pmatrix} M_{X_i^*+zX_{n-i}} & 0\\ 0 & R_i \end{pmatrix}$ for each $i$ and that $\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}$ is an isometry, we have
\begingroup
\allowdisplaybreaks
\begin{align*}
&T_iT_j\\
=& \begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}^*\begin{pmatrix} M_{X_i^*+zX_{n-i}} & 0\\ 0 & R_i \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}^*\begin{pmatrix} M_{X_j^*+zX_{n-j}} & 0\\ 0 & R_j \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}\\
= & \begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}^*\begin{pmatrix} M_{X_i^*+zX_{n-i}} & 0\\ 0 & R_i \end{pmatrix}\begin{pmatrix} M_{X_j^*+zX_{n-j}} & 0\\ 0 & R_j \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}\\
= & \begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}^*\begin{pmatrix} M_{X_j^*+zX_{n-j}} & 0\\ 0 & R_j \end{pmatrix}\begin{pmatrix} M_{X_i^*+zX_{n-i}} & 0\\ 0 & R_i \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}\\
(&\text{since } M_{X_i^*+zX_{n-i}}M_{X_j^*+zX_{n-j}}=M_{X_j^*+zX_{n-j}}M_{X_i^*+zX_{n-i}}\text{ and }R_iR_j=R_jR_i)\\
=&\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}^*\begin{pmatrix} M_{X_j^*+zX_{n-j}} & 0\\ 0 & R_j \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}^*\begin{pmatrix} M_{X_i^*+zX_{n-i}} & 0\\ 0 & R_i \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}\\
=&T_jT_i.
\end{align*}
\endgroup
Again from Equation \eqref{eqn2}, we have
\begingroup
\allowdisplaybreaks
\begin{align}
M_{X_i^*+zX_{n-i}}M_{\Theta_P}&=M_{\Theta_P}T_i, \label{eqn4}\\
R_i\Delta_P&=\Delta_PT_i. \label{eqn5}
\end{align}
\endgroup
Now applying $M_z$ and $M_{e^{it}}|_{\overline{\Delta_P L^2(\mathcal{D}_P)}}$ on both sides of Equations \eqref{eqn4} and \eqref{eqn5} respectively, we get
\begingroup
\allowdisplaybreaks
\begin{align*}
M_{\Theta_P}M_zT_i&=M_{\Theta_P}T_iM_z,\\
\Delta_PM_zT_i&=\Delta_PT_iM_z,
\end{align*}
\endgroup
that is, $\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}M_zT_i=\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}T_iM_z$, and consequently $M_zT_i=T_iM_z$. Therefore, $(T_1, \dots, T_{n-1},M_z)$ on $H^2(\mathcal{D}_P)$ is a commuting tuple. Clearly, the tuple
$$\underline{\textbf{M}}=\left(\begin{pmatrix} M_{X_1^*+zX_{n-1}} & 0\\ 0 & R_1 \end{pmatrix}, \dots, \begin{pmatrix} M_{X_{n-1}^*+zX_{1}} & 0\\ 0 & R_{n-1} \end{pmatrix}, \begin{pmatrix} M_{z} & 0\\ 0 & M_{e^{it}} \end{pmatrix}\right),$$
being a direct sum of two $\Gamma_{n}$-contractions, is a $\Gamma_{n}$-contraction.
Now using Equation \eqref{eqn2} and the fact that $\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}$ is an isometry, for any holomorphic polynomial $f$ in $n$ variables we have
$$
\|f(T_1, \dots, T_{n-1},M_z)\| =\left\|\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}^*f(\underline{\textbf{M}})\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}\right\|\leq\|f\|_{\infty,\Gamma_{n}}.
$$
Therefore, by Lemma \ref{SPLem}, $(T_1, \dots, T_{n-1},M_z)$ on $H^2(\mathcal{D}_P)$ is a $\Gamma_n$-contraction. Since $M_z$ is an isometry, by Theorem \ref{gammaiso}, $(T_1, \dots, T_{n-1},M_z)$ is a $\Gamma_{n}$-isometry. This completes the proof of the claim.\\
\noindent By Lemma \ref{lem1}, $T_i=M_{Y_i+zY_{n-i}^*}$ for some $Y_i\in \mathcal{B}(\mathcal{D}_P)$. This completes the proof.
\end{proof}
\end{lem}
Combining Lemma \ref{lemmain} and Theorem \ref{BisaiPalDilation}, we have the following theorem, which is the main result of this section.
\begin{thm}\label{bisai}
Let $(S_1, \dots, S_{n-1},P)$ be a c.n.u $\Gamma_n$-contraction on a Hilbert space $\mathcal{H}$ with $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ being the $\mathcal F_O$-tuples of $(S_1, \dots, S_{n-1},P)$ and $(S_1^*,\dots, S_{n-1}^*,P^*)$ respectively. Let $\Sigma_1(z)$ and $\Sigma_2(z)$ be $\Gamma_{n-1}$-contractions for all $z\in \mathbb D$.
Then there exist $Y_i\in \mathcal{B}(\mathcal{D}_P)$ for each $i=1, \dots, n-1$ such that $(M_{Y_1+zY_{n-1}^*},\dots, M_{Y_{n-1}+zY_1^*},M_z)$ on $H^2(\mathcal{D}_P)$ is a $\Gamma_n$-isometry, and a unitary $\widetilde{V}_2\in \mathcal{B}(\overline{\Delta_P L^2(\mathcal{D}_P)})$ such that
\[
\begin{pmatrix} M_{\widetilde{G}_i^*+z\widetilde{G}_{n-i}} & 0\\ 0 & \widetilde{V}_2^*\widetilde{S}_{i2}\widetilde{V}_2 \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}=\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}M_{Y_i+zY_{n-i}^*},
\]
where the tuple $(\widetilde{G}_1, \dots, \widetilde{G}_{n-1})$ is unitarily equivalent to $(B_1,\dots, B_{n-1})$.
\end{thm}
\begin{proof}
By Theorem \ref{BisaiPalDilation}, for each $i=1, \dots, n-1$,
\begin{equation}\label{eqn6}
\varphi_{BS}S_i^*=\begin{pmatrix} M_{G_i^*+zG_{n-i}} & 0\\ 0 & \widetilde{S}_{i2} \end{pmatrix}^*\varphi_{BS}
\end{equation}
and
\[
\varphi_{BS}P^*=\begin{pmatrix} M_z & 0\\ 0 & M_{e^{it}} \end{pmatrix}^*\varphi_{BS}.
\]
Since $P$ is a c.n.u contraction and the minimal isometric dilation of a contraction is unique, there exists a unitary $\Phi:H^2(\mathcal{D}_{P^*})\oplus \overline{\Delta_P L^2(\mathcal{D}_P)} \to H^2(\mathcal{D}_{P^*})\oplus \overline{\Delta_P L^2(\mathcal{D}_P)}$ such that $\Phi\varphi_2=\varphi_{BS}$ and $\Phi \begin{pmatrix} M_z & 0\\ 0 & M_{e^{it}} \end{pmatrix}^*=\begin{pmatrix} M_z & 0\\ 0 & M_{e^{it}} \end{pmatrix}^*\Phi$. Then by Lemma \ref{BisaiPallem}, $\Phi$ has the block matrix form $\Phi=\begin{pmatrix} I\otimes\widetilde{V}_1 & 0\\ 0 & \widetilde{V}_2 \end{pmatrix}$ for some unitaries $\widetilde{V}_1\in\mathcal{B}(\mathcal{D}_{P^*})$ and $\widetilde{V}_2\in \mathcal{B}(\overline{\Delta_P L^2(\mathcal{D}_P)})$.
Using this fact and Equation \eqref{eqn6}, we have
\begingroup
\allowdisplaybreaks
\begin{align*}
\varphi_2 S_i^* & = \begin{pmatrix} I\otimes \widetilde{V}_1^* & 0\\ 0 & \widetilde{V}_2^* \end{pmatrix}\begin{pmatrix} I\otimes G_i + M_z^* \otimes G_{n-i} & 0\\ 0 & \widetilde{S}_{i2}^* \end{pmatrix}\begin{pmatrix} I\otimes \widetilde{V}_1 & 0\\ 0 & \widetilde{V}_2 \end{pmatrix}\varphi_2\\
&=\begin{pmatrix} I \otimes \widetilde{V}_1^*G_i\widetilde{V}_1+M_z^*\otimes \widetilde{V}_1^*G_{n-i}\widetilde{V}_1 & 0\\ 0 & \widetilde{V}_2^*\widetilde{S}_{i2}^*\widetilde{V}_2 \end{pmatrix}\varphi_2.
\end{align*}
\endgroup
Set $\widetilde{G}_i=\widetilde{V}_1^*G_i\widetilde{V}_1$ and $S_{i2}=\widetilde{V}_2^*\widetilde{S}_{i2}\widetilde{V}_2$. Then
\begin{equation}\label{eqn7}
\varphi_2 S_i^* = \begin{pmatrix} M_{\widetilde{G}_i^*+z\widetilde{G}_{n-i}} & 0\\ 0 & S_{i2} \end{pmatrix}^*\varphi_2.
\end{equation}
By Theorem \ref{BisaiPalDilation}, $(M_{G_1^*+zG_{n-1}}, \dots, M_{G_{n-1}^*+zG_1},M_z)$ on $H^2(\mathcal{D}_{P^*})$ is a pure $\Gamma_n$-isometry and $(\widetilde{S}_{12}, \dots, \widetilde{S}_{(n-1)2},M_{e^{it}})$ on $\overline{\Delta_P L^2(\mathcal{D}_P)}$ is a $\Gamma_n$-unitary. Since $\widetilde{V}_1$ is unitary, $(M_{\widetilde{G}_1^*+z\widetilde{G}_{n-1}}, \dots, M_{\widetilde{G}_{n-1}^*+z\widetilde{G}_1},M_z)$ on $H^2(\mathcal{D}_{P^*})$, being unitarily equivalent to $(M_{G_1^*+zG_{n-1}}, \dots, M_{G_{n-1}^*+zG_1},M_z)$, is a pure $\Gamma_n$-isometry. Similarly, one can prove that $(S_{12}, \dots, S_{(n-1)2},M_{e^{it}})$ on $\overline{\Delta_P L^2(\mathcal{D}_P)}$ is a $\Gamma_n$-unitary.
From Equation \eqref{eqn7}, it is clear that $\varphi_2\mathcal{H}=\mathcal{H}_P$ is invariant under $\begin{pmatrix} M_{\widetilde{G}_i^*+z\widetilde{G}_{n-i}} & 0\\ 0 & S_{i2} \end{pmatrix}^*$, that is, $\mathcal{Q}_P$ is invariant under $\begin{pmatrix} M_{\widetilde{G}_i^*+z\widetilde{G}_{n-i}} & 0\\ 0 & S_{i2} \end{pmatrix}$. Then by Lemma \ref{lemmain}, there exists a $\Gamma_n$-isometry $(M_{Y_1+zY_{n-1}^*}, \dots, M_{Y_{n-1}+zY_1^*},M_z)$ on $H^2(\mathcal{D}_P)$ such that $\begin{pmatrix} M_{\widetilde{G}_i^*+z\widetilde{G}_{n-i}} & 0\\ 0 & S_{i2} \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}=\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}M_{Y_i+zY_{n-i}^*}.$ Using Equations \eqref{eqn7} and \eqref{eqn9}, we have
\begingroup
\allowdisplaybreaks
\begin{align*}
&S_i^*-S_{n-i}P^*\\
=&\varphi_2^*\begin{pmatrix} M_{\widetilde{G}_i^*+z\widetilde{G}_{n-i}} & 0\\ 0 & \widetilde{S}_{i2} \end{pmatrix}^*\varphi_2-\varphi_2^*\begin{pmatrix} M_{\widetilde{G}_{n-i}^*+z\widetilde{G}_{i}} & 0\\ 0 & \widetilde{S}_{(n-i)2} \end{pmatrix}\varphi_2\varphi_2^*\begin{pmatrix} M_z & 0\\ 0 & M_{e^{it}} \end{pmatrix}^*\varphi_2\\
=&\varphi_2^*\begin{pmatrix} P_{\mathbb{C}}\otimes \widetilde{G}_i & 0\\ 0 & 0 \end{pmatrix}\varphi_2\; (\text{since }\widetilde{S}_{i2}^*=M_{e^{it}}^*\widetilde{S}_{(n-1)2}=\widetilde{S}_{(n-1)2}M_{e^{it}}^*)\\
=& \varphi_1\begin{pmatrix} P_{\mathbb C}\otimes V_1\widetilde{G}_iV_1^* & 0\\ 0 & 0 \end{pmatrix}\varphi_1\\
=& D_{P^*}(V_1\widetilde{G}_iV_1^*)D_{P^*}.
\end{align*}
\endgroup
Since the $\mathcal F_O$-tuple of a $\Gamma_n$-contraction is unique, $(V_1\widetilde{G}_1V_1^*, \dots, V_1\widetilde{G}_{n-1}V_1^*)=(B_1, \dots, B_{n-1})$.
Therefore, the tuple $(\widetilde{G}_1, \dots, \widetilde{G}_{n-1})$ is unitarily equivalent to the $\mathcal F_O$-tuple $(B_1, \dots, B_{n-1})$ of $(S_1^*, \dots, S_{n-1}^*,P^*)$. This completes the proof.
\end{proof}
\section{The $\mathcal F_O$-tuples of c.n.u $\Gamma_n$-contractions: a necessary condition and a sufficient condition}
We start with a technical lemma that will be required in the proof of Theorem \ref{necessarytheorem}.
\begin{lem}\label{lem}
Let $(S_1, \dots, S_{n-1},P)$ be a c.n.u $\Gamma_n$-contraction as in Theorem \ref{bisai} with $S_{j2}=M_{{{n-1} \choose {j}} + {{n-1} \choose {j-1}}e^{it}}$ in the representation \eqref{eqn7} of $S_j$ for each $j$. Then for $j =1, \dots, n-1$,
\[
{{n-1} \choose {j-1}}\mathcal{A}_*+{{n-1}\choose {j}}\mathcal{A}_*P^*=\mathcal{A}_*S_{n-j}^*
\]
and
\[
{{n-1} \choose {j-1}}P\mathcal{A}_* + {{n-1} \choose {j}}\mathcal{A}_*=P\mathcal{A}_*S_{n-j}^*.
\]
\end{lem}
\begin{proof}
From Equations \eqref{eqn10} and \eqref{eqnU} we have $U^*=V_2M_{e^{it}}V_2^*$. Again using \eqref{eqn7}, \eqref{eqnU} and $\mathfrak{U}\varphi_2=\varphi_1$, we have
\begingroup
\allowdisplaybreaks
\begin{align*}
\varphi_1 S_j^* & = \begin{pmatrix} M_{V_1} & 0\\ 0 & V_2 \end{pmatrix}\begin{pmatrix} M_{\widetilde{G}_j^* + z \widetilde{G}_{n-j}} & 0\\ 0 & S_{j2} \end{pmatrix}^*\begin{pmatrix} M_{V_1} & 0\\ 0 & V_2 \end{pmatrix}^*\varphi_1\\
&= \begin{pmatrix} M_{V_1\widetilde{G}_j^*V_1^* + zV_{1}\widetilde{G}_{n-j}V_1^*} & 0\\ 0 & {{n-1}\choose {j}}+{{n-1}\choose {j-1}}V_2M_{e^{it}}V_2^* \end{pmatrix}^*\varphi_1\\
& = \begin{pmatrix} M_{B_j^* + zB_{n-j}} & 0\\ 0 & {{n-1}\choose {j}}+{{n-1}\choose {j-1}}U^* \end{pmatrix}^*\varphi_1 \quad (\text{since }V_1\widetilde{G}_{j}V_1^*=B_j).
\end{align*}
\endgroup
That is, $S_j^*=\varphi_1^*\begin{pmatrix} M_{B_j^*+zB_{n-j}} & 0\\ 0 & {{n-1}\choose {j}} + {{n-1}\choose {j-1}}U^* \end{pmatrix}^*\varphi_1$. Similarly, we have $\varphi_1P^*=\begin{pmatrix} M_z & 0\\ 0 & U^* \end{pmatrix}^*\varphi_1$, that is, $P^*=\varphi_1^*\begin{pmatrix} M_z & 0\\ 0 & U^* \end{pmatrix}^*\varphi_1$. Clearly $\textit{Ran}(\varphi_1)$ is invariant under $\begin{pmatrix} M_z & 0\\ 0 & U^* \end{pmatrix}^*$ and $\begin{pmatrix} M_{B_j^*+zB_{n-j}} & 0\\ 0 & {{n-1}\choose {j}} + {{n-1}\choose {j-1}}U^* \end{pmatrix}^*$ for $j=1, \dots, n-1$. Since $\varphi_1$ is an isometry, we have $P^{*n}= \varphi_1^*\begin{pmatrix} M_z^n & 0\\ 0 & U^{*n} \end{pmatrix}^*\varphi_1.$ Therefore, $P^nP^{*n}=\varphi_1^*\begin{pmatrix} M_z^nM_z^{*n} & 0\\ 0 & I \end{pmatrix}\varphi_1$ and hence $\mathcal{A}_*=\varphi_1^*\begin{pmatrix} 0 & 0\\ 0 & I \end{pmatrix}\varphi_1$. Now
\begingroup
\allowdisplaybreaks
\begin{align*}
&{{n-1} \choose {j-1}}\mathcal{A}_*+{{n-1}\choose {j}}\mathcal{A}_*P^*-\mathcal{A}_*S_{n-j}^*\\
=& {{n-1} \choose {j-1}}\varphi_1^*\begin{pmatrix} 0 & 0\\ 0 & I \end{pmatrix}\varphi_1+{{n-1} \choose {j}}\varphi_1^*\begin{pmatrix} 0 & 0\\ 0 & I \end{pmatrix}\varphi_1\varphi_1^*\begin{pmatrix} M_z^* & 0\\ 0 & U \end{pmatrix}\varphi_1\\
& \qquad\qquad - \varphi_1^*\begin{pmatrix} 0 & 0\\ 0 & I \end{pmatrix}\varphi_1\varphi_1^*\begin{pmatrix} M_{B_{n-j}^*+zB_{j}} & 0\\ 0 & {{n-1} \choose {j-1}} +{{n-1} \choose {j}}U^* \end{pmatrix}^*\varphi_1\\
=&\varphi_1^*\begin{pmatrix} 0 & 0\\ 0 & {{n-1} \choose {j-1}}+{{n-1} \choose {j}}U \end{pmatrix}\varphi_1 - \varphi_1^*\begin{pmatrix} 0 & 0\\ 0 & {{n-1} \choose {j-1}}+{{n-1} \choose {j}}U \end{pmatrix}\varphi_1\\
=& 0.
\end{align*}
\endgroup
Therefore, ${{n-1} \choose {j-1}}\mathcal{A}_*+{{n-1}\choose {j}}\mathcal{A}_*P^*=\mathcal{A}_*S_{n-j}^*$. Since $P\mathcal{A}_*P^*=\mathcal{A}_*$, we have ${{n-1} \choose {j-1}}P\mathcal{A}_*+{{n-1}\choose {j}}\mathcal{A}_*=P\mathcal{A}_*S_{n-j}^*$. This completes the proof.
\end{proof}
Before proceeding further, we recall a necessary and sufficient condition under which a tuple of operators becomes the $\mathcal F_O$-tuple of a $\Gamma_n$-contraction.
\begin{thm}[\cite{B:P}, Theorem 2.1]\label{BisaiPal1}
A tuple of operators $(A_1, \dots, A_{n-1})$ defined on $\mathcal{D}_P$ is the $\mathcal F_O$-tuple of a $\Gamma_n$-contraction $(S_1, \dots, S_{n-1},P)$ if and only if $(A_1, \dots, A_{n-1})$ satisfies the following operator equations in $X_1, \dots, X_{n-1}$:
\[
D_PS_i=X_iD_P+X_{n-i}^*D_PP, \qquad i= 1,\dots, n-1.
\]
\end{thm}
The next two results provide relations between the $\mathcal F_O$-tuples of the $\Gamma_{n}$-contractions $(S_1, \dots, S_{n-1},P)$ and $(S_1^*, \dots, S_{n-1}^*,P^*)$.
\begin{lem}[\cite{B:P}, Lemma 2.4]\label{BPlem1}
Let $(S_1, \dots, S_{n-1},P)$ be a $\Gamma_n$-contraction on a Hilbert space $\mathcal H$ and $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ be respectively the $\mathcal F_O$-tuples of $(S_1, \dots, S_{n-1},P)$ and $(S_1^*, \dots, S_{n-1}^*,P^*)$. Then
\[
D_PA_i = (S_iD_P-D_{P^*}B_{n-i}P)|_{\mathcal{D}_P}, \quad \text{for }i=1, \dots, n-1.
\]
\end{lem}
\begin{lem}[\cite{B:P}, Lemma 2.5]\label{BPlem}
Let $(S_1, \dots, S_{n-1},P)$ be a $\Gamma_n$-contraction on a Hilbert space $\mathcal H$ and $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ be respectively the $\mathcal F_O$-tuples of $(S_1, \dots, S_{n-1},P)$ and $(S_1^*, \dots, S_{n-1}^*,P^*)$. Then
\[
PA_i = B_i^*P|_{\mathcal D_P}, \quad \text{for }i=1, \dots, n-1.
\]
\end{lem}
The following result gives a relation between the $\mathcal F_O$-tuples of a $\Gamma_n$-contraction $(S_1, \dots, S_{n-1},P)$ and its adjoint $(S_1^*, \dots, S_{n-1}^*,P^*)$ and the characteristic function $\Theta_P$ of $P$.
\begin{lem}[\cite{B:P}, Lemma 3.1]\label{BisaiPal}
Let $(A_1,\dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ be the $\mathcal F_O$-tuples of a $\Gamma_n$-contraction $(S_1, \dots, S_{n-1},P)$ and its adjoint $(S_1^*,\dots, S_{n-1}^*,P^*)$ respectively. Then for each $i = 1, \dots, n-1$,
\[
(B_i^* + zB_{n-i})\Theta_P(z)=\Theta_P(z)(A_i + zA_{n-i}^*), \quad \text{for all }z \in \mathbb D.
\]
\end{lem}
The next theorem provides a necessary condition on the $\mathcal F_O$-tuples of a c.n.u $\Gamma_n$-contraction $(S_1,\dots,S_{n-1},P)$ and its adjoint $(S_1^*,\dots, S_{n-1}^*,P^*)$.
\begin{thm}\label{necessarytheorem}
Let $(S_1, \dots, S_{n-1},P)$ be a c.n.u $\Gamma_n$-contraction on a Hilbert space $\mathcal{H}$ with $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ being the $\mathcal F_O$-tuples of $(S_1, \dots, S_{n-1},P)$ and $(S_1^*, \dots, S_{n-1}^*,P^*)$ such that $S_{j2}={{n-1}\choose {j}}I+M_{{{n-1}\choose {j-1}}e^{it}}$ in the representation \eqref{eqn7} of $S_{j}$. Let $\Sigma_1(z)$ and $\Sigma_2(z)$ be $\Gamma_{n-1}$-contractions for all $z\in \mathbb{D}$. Then
\begin{equation}\label{eqnnecessary}
\begin{pmatrix} M_{B_j^* + z B_{n-j}} & 0\\ 0 & M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}} \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}=\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}M_{A_j+zA_{n-j}^*}.
\end{equation}
Moreover, if $V_1$ is as in \eqref{eqnU}, then there exists a $\Gamma_n$-isometry $(M_{Y_1+zY_{n-1}^*}, \dots, M_{Y_{n-1}+zY_1^*},M_z)$ on $H^2(\mathcal{D}_P)$ such that
\begin{equation}\label{necessary1}
\begin{pmatrix} M_{B_j^* + zB_{n-j}} & 0\\ 0 & M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}} \end{pmatrix}\begin{pmatrix} M_{V_1}M_{\Theta_P}\\ \Delta_P \end{pmatrix}=\begin{pmatrix} M_{V_1}M_{\Theta_P}\\ \Delta_P \end{pmatrix}M_{Y_j+zY_{n-j}^*}.
\end{equation}
\end{thm}
\begin{proof}
By Lemma \ref{BisaiPal}, we have for each $j=1, \dots, n-1$,
\begin{equation}\label{relation}
M_{B_j^* + z B_{n-j}}M_{\Theta_P}=M_{\Theta_P}M_{A_j+zA_{n-j}^*}.
\end{equation}
Now we have to prove that
\begin{equation}\label{relation1}
M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}}\Delta_P=\Delta_P M_{A_j + z A_{n-j}^*}.
\end{equation}
Since $M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}}$ commutes with $\Delta_P$ and $\Delta_P$ is non-negative, it suffices to prove that
\begin{equation}\label{relation2}
\Delta_P^2 M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}}= \Delta_P^2 M_{A_j + z A_{n-j}^*}.
\end{equation}
\noindent Notice that (for simplification see the Appendix)
\begingroup
\allowdisplaybreaks
\begin{align}\label{expression1}
&\Delta_P(t)^2\left[{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}\right]\notag\\
=&\left[{{n-1}\choose{j}}D_P\mathcal{A}_*D_P+{{n-1}\choose{j-1}}D_PP\mathcal{A}_*D_P\right] \notag\\
&+ \sum\limits_{m=1}^{\infty}e^{imt}\left[{{n-1}\choose {j-1}}D_P\mathcal{A}_*P^{*(m-1)}D_P+{{n-1}\choose{j}}D_P\mathcal{A}_*P^{*m}D_P\right]\notag\\
&+\sum\limits_{m=-\infty}^{-1}e^{imt}\left[{{n-1}\choose{j}}D_PP^{-m}\mathcal{A}_*D_{P}+{{n-1}\choose {j-1}}D_PP^{-m+1}\mathcal{A}_*D_P\right].
\end{align}
\endgroup
\noindent Again (see the Appendix)
\begingroup
\allowdisplaybreaks
\begin{align}\label{expression2}
& \Delta_P(t)^2(A_j+e^{it}A_{n-j}^*)\notag\\
=& \left[D_P^2A_j-D_PS_jD_P+D_PD_{P^*}B_{n-j}P+D_PP\mathcal{A}_*S_{n-j}^*D_P\right]\notag\\
&+e^{it}\left[A_{n-j}^*D_P^2+P^*B_j^*D_{P^*}D_{P}-D_PS_{n-j}^*D_P+D_P\mathcal{A}_*S_{n-j}^*D_P\right]\notag\\
&+\sum\limits_{m=2}^{\infty}e^{imt}D_P\mathcal{A}_*P^{*(m-1)}S_{n-j}^*D_P + \sum\limits_{m=-\infty}^{-1}e^{imt}D_PP^{-m+1}\mathcal{A}_*S_{n-j}^*D_P.
\end{align}
\endgroup
Suppose $\mathcal{I}_m$ and $\mathcal{J}_m$ denote the coefficients of $e^{imt}$ on the right hand side of Equations \eqref{expression1} and \eqref{expression2} respectively. Clearly $\mathcal{I}_m,\,\mathcal{J}_m\in \mathcal{B}(\mathcal{D}_P)$.
Now
\begingroup
\allowdisplaybreaks
\begin{align*}
\mathcal{J}_0D_P=&D_P^2A_jD_P-D_PS_jD_P^2+D_PD_{P^*}B_{n-j}PD_P+D_PP\mathcal{A}_*S_{n-j}^*D_P^2\\
=&D_P(D_PA_jD_P)-D_{P}S_j(I-P^*P)+D_P(D_{P^*}B_{n-j}D_{P^*})P+D_PP\mathcal{A}_*S_{n-j}^*D_P^2\\
=&D_P(S_j-S_{n-j}^*P)-D_PS_j(I-P^*P)+D_P(S_{n-j}^*-S_jP^*)P+D_PP\mathcal{A}_*S_{n-j}^*D_P^2\\
=&D_PP\mathcal{A}_*S_{n-j}^*D_P^2\\
=&{{n-1} \choose {j-1}}D_PP\mathcal{A}_*D_P^2 + {{n-1} \choose {j}}D_P\mathcal{A}_*D_P^2\quad (\text{by Lemma }\ref{lem})\\
=&\mathcal{I}_0D_P.
\end{align*}
\endgroup
Since $\mathcal{I}_0,\,\mathcal{J}_0\in\mathcal{B}(\mathcal{D}_P)$, it follows that $\mathcal{I}_0=\mathcal{J}_0$.\\
Again
\begingroup
\allowdisplaybreaks
\begin{align*}
&D_P\mathcal{J}_1\\
=&D_PA_{n-j}^*D_P^2+D_PP^*B_j^*D_{P^*}D_{P}-D_P^2S_{n-j}^*D_P+D_P^2\mathcal{A}_*S_{n-j}^*D_P\\
=&(D_PA_{n-j}^*D_P)D_P+P^*(D_{P^*}B_j^*D_{P^*})D_P-(I-P^*P)S_{n-j}^*D_P + D_P^2\mathcal{A}_*S_{n-j}^*D_P\\
=&(S_{n-j}-S_j^*P)^*D_P+P^*(S_j^*-S_{n-j}P^*)^*D_P-S_{n-j}^*D_P+P^*PS_{n-j}^*D_P+D_P^2\mathcal{A}_*S_{n-j}^*D_P\\
=&D_P^2\mathcal{A}_*S_{n-j}^*D_P\\
=&{{n-1}\choose{j-1}}D_P^2\mathcal{A}_*D_P+{{n-1}\choose{j}}D_P^2\mathcal{A}_*P^*D_P\\
=&D_P\mathcal{I}_1.
\end{align*}
\endgroup
Therefore, if $T=\mathcal{J}_1-\mathcal{I}_1$, then $T:\mathcal{D}_P\to \mathcal{D}_P$ and $D_PT=0$. Now
\[
\langle TD_Ph, D_Ph' \rangle=\langle D_PTD_Ph,h' \rangle=0 \quad \text{for all }h,h'\in \mathcal{H}.
\]
This implies that $T=0$ and thus $\mathcal{I}_1=\mathcal{J}_1$.\\
For $m\geq 2$,
\begingroup
\allowdisplaybreaks
\begin{align*}
\mathcal{I}_m=&{{n-1}\choose {j-1}}D_P\mathcal{A}_*P^{*(m-1)}D_P+{{n-1}\choose{j}}D_P\mathcal{A}_*P^{*m}D_P\\
=&D_P\left[{{n-1}\choose {j-1}}\mathcal{A}_*+ {{n-1}\choose{j}}\mathcal{A}_*P^{*}\right]P^{*(m-1)}D_P\\
=&D_P\mathcal{A}_*S_{n-j}^*P^{*(m-1)}D_P\quad (\text{by Lemma }\ref{lem})\\
=&\mathcal{J}_m.
\end{align*}
\endgroup
Further, for $m\leq -1$,
\begingroup
\allowdisplaybreaks
\begin{align*}
\mathcal{I}_m=&{{n-1}\choose{j}}D_PP^{-m}\mathcal{A}_*D_{P}+{{n-1}\choose {j-1}}D_PP^{-m+1}\mathcal{A}_*D_P\\
=&D_PP^{-m}\left[{{n-1}\choose{j}}\mathcal{A}_*+{{n-1}\choose{j-1}}P\mathcal{A}_*\right]D_P\\
=&D_PP^{-m+1}\mathcal{A}_*S_{n-j}^*D_P\quad (\text{by Lemma }\ref{lem})\\
=&\mathcal{J}_m.
\end{align*}
\endgroup
Therefore, $\mathcal{I}_m=\mathcal{J}_m$ for all $m$. Hence
\[
\Delta_P^2M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}}=\Delta_P^2M_{A_j+zA_{n-j}^*},
\]
which implies that
\begin{equation}\label{eqnmain1}
M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}}\Delta_P=\Delta_PM_{A_j+zA_{n-j}^*}.
\end{equation}
Therefore, combining Equations \eqref{relation} and \eqref{eqnmain1}, we obtain Equation \eqref{eqnnecessary}.\\
It is obvious from Equation \eqref{eqn7} that $\textit{Ran}(\varphi_2)=\mathcal{H}_P=\mathcal{Q}_P^{\perp}$ is invariant under
$$\begin{pmatrix} M_{\widetilde{G}_j^*+z\widetilde{G}_{n-j}} & 0\\ 0 & M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}} \end{pmatrix}^*.$$
Then by Lemma \ref{lemmain}, there exists a $\Gamma_n$-isometry $(M_{Y_1+zY_{n-1}^*}, \dots, M_{Y_{n-1}+zY_1^*},M_z)$ on $H^2(\mathcal{D}_P)$ such that for each $j=1, \dots, n-1$,
\[
\begin{pmatrix} M_{\widetilde{G}_j^*+z\widetilde{G}_{n-j}} & 0\\ 0 & M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}} \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}=\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}M_{Y_j+zY_{n-j}^*}.
\]
From Theorem \ref{bisai}, we have $\widetilde{G}_i=V_1^*B_iV_1$, where $V_1$ is as in \eqref{eqnU}.
Therefore,
\[
\begin{pmatrix} M_{V_1^*B_j^*V_1 + z V_1^*B_{n-j}V_1} & 0\\ 0 & M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}} \end{pmatrix}\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}=\begin{pmatrix} M_{\Theta_P}\\ \Delta_P \end{pmatrix}M_{Y_j+zY_{n-j}^*},
\]
that is,
\[
\begin{pmatrix} M_{B_j^* + z B_{n-j}} & 0\\ 0 & M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}} \end{pmatrix}\begin{pmatrix} M_{V_1}M_{\Theta_P}\\ \Delta_P \end{pmatrix}=\begin{pmatrix} M_{V_1}M_{\Theta_P}\\ \Delta_P \end{pmatrix}M_{Y_j+zY_{n-j}^*}.
\]
\end{proof}
A straightforward corollary of Theorem \ref{necessarytheorem} is the following.
\begin{cor}\label{bappacor}
Let $(S_1, \dots, S_{n-1},P)$ be a c.n.u $\Gamma_n$-contraction on a Hilbert space $\mathcal{H}$ with $(A_1,\dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ being the $\mathcal F_O$-tuples of $(S_1, \dots, S_{n-1},P)$ and $(S_1^*,\dots,S_{n-1}^*,P^*)$ such that
\[
\varphi_2S_j^*=\begin{pmatrix} M_{V_1^*B_j^*V_1+zV_1^*B_{n-j}V_1}& 0\\ 0 & M_{{{n-1}\choose {j}}+{{n-1}\choose{j-1}}e^{it}} \end{pmatrix}^*\varphi_2,
\]
where $V_1$ is as in \eqref{eqnU}. Suppose $\Sigma_2(z)$ is a $\Gamma_{n-1}$-contraction for all $z\in \mathbb D$. Then Equation \eqref{eqnnecessary} holds.
\end{cor}
\begin{proof}
This can be proved easily by following the proof of Theorem \ref{necessarytheorem}.
\end{proof}
Theorem \ref{necessarytheorem} provides a necessary condition on $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ to be the $\mathcal F_O$-tuples of $(S_1, \dots, S_{n-1},P)$ and $(S_1^*,\dots, S_{n-1}^*,P^*)$, respectively.
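To get a concrete feel for the operator equation of Theorem \ref{BisaiPal1}, consider the simplest, one-dimensional situation. The sketch below is our own toy illustration (not from \cite{B:P}): for $n=2$ every operator is a scalar, $D_P=\sqrt{1-|p|^2}$, adjoints are complex conjugates, and the equation $D_PS=XD_P+X^*D_PP$ reduces to $s=x+\bar{x}p$, a $2\times 2$ real linear system in $(\operatorname{Re}x,\operatorname{Im}x)$.

```python
import numpy as np

# Scalar illustration of D_P S = X D_P + X^* D_P P (the n = 2 case):
# every operator is a complex number and X^* is complex conjugation.
z1, z2 = 0.4 + 0.3j, -0.2 + 0.5j   # two points of the open unit disc
s, p = z1 + z2, z1 * z2            # symmetrization (s, p)
dp = np.sqrt(1 - abs(p) ** 2)      # the scalar defect D_P

# Solve s = x + conj(x) * p as a real linear system in (Re x, Im x):
# Re: a(1 + Re p) + b Im p = Re s,  Im: a Im p + b(1 - Re p) = Im s.
A = np.array([[1 + p.real, p.imag],
              [p.imag, 1 - p.real]])
b = np.array([s.real, s.imag])
xr, xi = np.linalg.solve(A, b)
x = xr + 1j * xi

# The defining equation holds up to floating-point error.
assert abs(dp * s - (x * dp + np.conj(x) * dp * p)) < 1e-12
print("scalar fundamental equation solved: x =", x)
```

The system is solvable whenever its determinant $1-|p|^2$ is nonzero, i.e.\ whenever $p$ lies in the open disc, which matches the intuition that the $\mathcal F_O$-tuple lives on the defect space of $P$.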
It is natural to ask about sufficiency, that is, given two tuples of operators $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ defined on two Hilbert spaces, under what conditions does there exist a c.n.u $\Gamma_n$-contraction $(S_1, \dots, S_{n-1},P)$ such that $(A_1, \dots, A_{n-1})$ becomes the $\mathcal F_O$-tuple of $(S_1, \dots, S_{n-1},P)$ and $(B_1, \dots, B_{n-1})$ becomes the $\mathcal F_O$-tuple of $(S_1^*, \dots, S_{n-1}^*,P^*)$.
\begin{thm}\label{sufficient}
Let $P$ be a c.n.u contraction on a Hilbert space $\mathcal{H}$. Let $A_1, \dots, A_{n-1}\in \mathcal{B}(\mathcal{D}_P)$ and $B_1, \dots, B_{n-1}\in \mathcal{B}(\mathcal{D}_{P^*})$ be such that they satisfy Equations \eqref{eqnnecessary} and \eqref{necessary1}. Suppose $\Sigma_2(z)$ is a $\Gamma_{n-1}$-contraction for all $z\in \mathbb D$. Then there exists a $\Gamma_n$-contraction $(S_1, \dots, S_{n-1},P)$ such that $(A_1, \dots, A_{n-1})$ becomes the $\mathcal F_O$-tuple of $(S_1, \dots, S_{n-1},P)$ and $(B_1, \dots, B_{n-1})$ becomes the $\mathcal F_O$-tuple of $(S_1^*, \dots, S_{n-1}^*,P^*)$.
\end{thm}
\begin{proof}
Let us define
\[
S_j=\varphi_2^*W_j\varphi_2 \quad \text{for }j=1, \dots, n-1,
\]
where
\[
W_j=\begin{pmatrix}
M_{V_1^*B_j^*V_1+zV_1^*B_{n-j}V_1}& 0\\
0 & M_{{{n-1}\choose {j}}+{{n-1}\choose{j-1}}e^{it}}
\end{pmatrix}\quad \text{for }j=1, \dots, n-1.
\]
Equation \eqref{necessary1} tells us that $\mathcal{Q}_P \left(=(\textit{Ran}(\varphi_2))^{\perp}\right)$ is invariant under $W_j$ for $j=1, \dots, n-1$, that is, $\textit{Ran}(\varphi_2)$ is invariant under $W_j^*$ for $j=1, \dots, n-1$. Also, from Equation \eqref{eqn9}, $P=\varphi_2^*W\varphi_2$ and $\textit{Ran}(\varphi_2)$ is invariant under $W^*$, where $W=\begin{pmatrix}
M_z & 0\\
0 & M_{e^{it}}
\end{pmatrix}$.
Since $\Sigma_2(z)$ is a $\Gamma_{n-1}$-contraction for all $z\in \mathbb D$, by Theorem \ref{gammaiso}, $\left(M_{B_1^*+zB_{n-1}}, \dots, M_{B_{n-1}^*+zB_1},M_z\right)$ on $H^2(\mathcal{D}_{P^*})$ is a $\Gamma_n$-isometry. Therefore, $\left(M_{V_1^*B_1^*V_1+zV_1^*B_{n-1}V_1}, \dots, M_{V_1^*B_{n-1}^*V_1+zV_1^*B_{1}V_1},M_z\right)$ $(=\Omega(z), \text{ say})$ is a $\Gamma_n$-isometry on $H^2(\mathcal{D}_{P^*})$, as $V_1$ on $\mathcal{D}_{P^*}$ is a unitary. Again by Theorem \ref{gammauni},
\begingroup
\allowdisplaybreaks
\begin{align*}
&\pi_n(I,\dots, I,M_{e^{it}})\\
=&\left(M_{{{n-1}\choose {1}}+e^{it}}, \dots,M_{{{n-1}\choose {j}}+{{n-1}\choose{j-1}}e^{it}}, \dots, M_{I+{{n-1}\choose{n-2}}e^{it}},M_{e^{it}} \right)
\end{align*}
\endgroup
on $\overline{\Delta_P L^2(\mathcal{D}_P)}$ is a $\Gamma_n$-unitary. Hence the tuple $(W_1, \dots, W_{n-1},W)$ is a $\Gamma_{n}$-contraction.\\
\noindent \textbf{Claim.} $\left(S_1, \dots, S_{n-1},P\right)$ is a $\Gamma_n$-contraction on $\mathcal{H}$.\\
\noindent\textit{Proof of Claim.} Clearly, $\left(S_1^*, \dots, S_{n-1}^*,P^*\right)$ is a commuting tuple of operators and for any $f\in \mathbb C[z_1, \dots, z_{n}]$,
\[
f(S_1^*,\dots, S_{n-1}^*,P^*)=\varphi_2^*f(W_1^*, \dots, W_{n-1}^*,W^*)\varphi_2.
\]
Since $(W_1^*,\dots, W_{n-1}^*,W^*)$ is a $\Gamma_n$-contraction,
\[
\|f(S_1^*, \dots, S_{n-1}^*,P^*)\|=\left\|\varphi_2^*f(W_1^*, \dots, W_{n-1}^*,W^*)\varphi_2\right\| \leq\|f\|_{\infty, \Gamma_n}.
\]
Therefore, by Lemma \ref{SPLem}, $(S_1^*, \dots, S_{n-1}^*,P^*)$ is a $\Gamma_n$-contraction on $\mathcal{H}$ and again by Lemma \ref{SPLem}, it follows that $(S_1, \dots, S_{n-1},P)$ is a $\Gamma_{n}$-contraction as well.
This completes the proof of the claim.\\
We now show that $(B_1, \dots, B_{n-1})$ is the $\mathcal F_O$-tuple of $(S_1^*, \dots, S_{n-1}^*,P^*)$. For each $j=1, \dots, n-1$,
\begingroup
\allowdisplaybreaks
\begin{align*}
S_j^*-S_{n-j}P^*=&\varphi_2^*W_j^*\varphi_2-\varphi_2^*W_{n-j}\varphi_2\varphi_2^*W^*\varphi_2\\
=&\varphi_2^*W_j^*\varphi_2-\varphi_2^*W_{n-j}W^*\varphi_2\\
=&\varphi_2^*\begin{pmatrix}
P_{\mathbb C}\otimes V_1^*B_jV_1 & 0\\
0 & 0
\end{pmatrix}\varphi_2\\
=&\varphi_1^*\begin{pmatrix}
P_{\mathbb C}\otimes B_j & 0\\
0 & 0
\end{pmatrix}\varphi_1\\
=&D_{P^*}B_jD_{P^*},
\end{align*}
\endgroup
where $P_{\mathbb C}$ is the orthogonal projection from $H^2(\mathbb D)$ onto the subspace consisting of constant functions in $H^2(\mathbb D)$. Since the $\mathcal F_O$-tuple of a $\Gamma_n$-contraction is unique, $(B_1, \dots, B_{n-1})$ is the $\mathcal F_O$-tuple of $(S_1^*, \dots, S_{n-1}^*,P^*)$. \\
Suppose $(Y_1, \dots, Y_{n-1})$ is the $\mathcal F_O$-tuple of $(S_1, \dots, S_{n-1},P)$. Then by Corollary \ref{bappacor}, we have
\begin{equation}\label{eqnnecessary1}
\begin{pmatrix}
M_{B_j^* + z B_{n-j}} & 0\\
0 & M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}}
\end{pmatrix}\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}=\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}M_{Y_j+zY_{n-j}^*}.
\end{equation}
Then from Equation \eqref{eqnnecessary} and Equation \eqref{eqnnecessary1}, we have for each $j=1, \dots, n-1$ that
\begin{equation}\label{eqnnecessary2}
\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}M_{A_j+zA_{n-j}^*}=\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}M_{Y_j+zY_{n-j}^*}.
\end{equation}
Now from Equation \eqref{eqnnecessary2} and the fact that $\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}$ is an isometry, we have
$$A_j+zA_{n-j}^*=Y_j+zY_{n-j}^* \quad \text{for all }j=1, \dots, n-1$$
and for all $z \in \mathbb D$. Therefore, $Y_j=A_j$ for all $j$ and hence $(A_1, \dots, A_{n-1})$ is the $\mathcal F_O$-tuple of $(S_1, \dots, S_{n-1},P)$. The proof is complete.
\end{proof}
Combining Theorems \ref{necessarytheorem} \& \ref{sufficient}, we get the following theorem, which is one of the main results of this paper.
\begin{thm}\label{mainthm}
Let $(S_1, \dots, S_{n-1},P)$ be a c.n.u $\Gamma_n$-contraction on a Hilbert space $\mathcal{H}$ with $(A_1, \dots, A_{n-1})$ and $(B_1, \dots, B_{n-1})$ being the $\mathcal F_O$-tuples of $(S_1, \dots, S_{n-1},P)$ and $(S_1^*, \dots, S_{n-1}^*,P^*)$ such that $S_{j2}={{n-1}\choose {j}}I+M_{{{n-1}\choose {j-1}}e^{it}}$ in the representation \eqref{eqn7} of $S_{j}$. Suppose $\Sigma_1(z)$ and $\Sigma_2(z)$ are $\Gamma_{n-1}$-contractions for all $z\in \mathbb{D}$. Then
\begin{equation}\label{Eqnnecessary}
\begin{pmatrix}
M_{B_j^* + z B_{n-j}} & 0\\
0 & M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}}
\end{pmatrix}\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}=\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}M_{A_j+zA_{n-j}^*}.
\end{equation}
Moreover, if $V_1$ is as in \eqref{eqnU}, then there exists a $\Gamma_n$-isometry $(M_{Y_1+zY_{n-1}^*}, \dots, M_{Y_{n-1}+zY_1^*},M_z)$ on $H^2(\mathcal{D}_P)$ such that
\begin{equation}\label{Necessary1}
\begin{pmatrix}
M_{B_j^* + zB_{n-j}} & 0\\
0 & M_{{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}}
\end{pmatrix}\begin{pmatrix}
M_{V_1}M_{\Theta_P}\\
\Delta_P
\end{pmatrix}=\begin{pmatrix}
M_{V_1}M_{\Theta_P}\\
\Delta_P
\end{pmatrix}M_{Y_j+zY_{n-j}^*}.
\end{equation}
Conversely, if $P$ is a c.n.u contraction on a Hilbert space $\mathcal{H}$ and $A_1, \dots, A_{n-1}\in \mathcal{B}(\mathcal{D}_P)$ and $B_1, \dots, B_{n-1}\in \mathcal{B}(\mathcal{D}_{P^*})$ are such that they satisfy Equations \eqref{Eqnnecessary} \& \eqref{Necessary1} and $\Sigma_2(z)$ is a $\Gamma_{n-1}$-contraction for all $z\in \mathbb D$, then there exists a $\Gamma_n$-contraction $(S_1, \dots, S_{n-1},P)$ such that $(A_1, \dots, A_{n-1})$ becomes the $\mathcal F_O$-tuple of $(S_1, \dots, S_{n-1},P)$ and $(B_1, \dots, B_{n-1})$ becomes the $\mathcal F_O$-tuple of $(S_1^*, \dots, S_{n-1}^*,P^*)$.
\end{thm}
\section{Results about $\mathbb E$-contractions}
This section is devoted to proving a theorem for c.n.u $\mathbb E$-contractions which is the analogue of Theorem \ref{mainthm} for c.n.u $\Gamma_n$-contractions. Before proceeding further, we recall a few results from the literature which will be useful in proving the main result of this section.
\begin{lem}[\cite{T:B}, Corollary 4.2]\label{Fopair1}
The fundamental operators $F_1$ and $F_2$ of a tetrablock contraction $(A,B,P)$ are the unique bounded linear operators $X_1$ and $X_2$ on $\mathcal{D}_P$ that satisfy the pair of operator equations
\[
D_PA=X_1D_P+X_2^*D_PP \quad\text{ and }\quad D_PB=X_2D_P+X_1^*D_PP.
\]
\end{lem}
Here is an analogue of Lemma \ref{BisaiPal} in the tetrablock setting.
\begin{lem}[\cite{TirthaHari}, Lemma 17]\label{Hari}
Let $F_1$ and $F_2$ be the fundamental operators of a tetrablock contraction $(A,B,P)$ and $G_1$ and $G_2$ be the fundamental operators of the tetrablock contraction $(A^*,B^*,P^*)$. Then for all $z\in \mathbb D$,
\[
(F_1^*+F_2z)\Theta_{P^*}(z)=\Theta_{P^*}(z)(G_1+G_2^*z)
\]
and
\[
(F_2^*+F_1z)\Theta_{P^*}(z)=\Theta_{P^*}(z)(G_2+G_1^*z).
\]
\end{lem}
\subsection{Necessary condition of conditional dilation for c.n.u $\Bbb E$-contractions} \text{ }
\noindent We recall a few facts (up to Theorem 3.5) from \cite{B:P5} concerning a conditional $\mathbb E$-isometric dilation of a c.n.u $\mathbb E$-contraction $(A,B,P)$ on the minimal Sz.-Nagy and Foias isometric dilation space $\textbf{K}$ of the c.n.u contraction $P$. Let $(A,B,P)$ be a c.n.u $\Bbb E$-contraction on a Hilbert space $\mathcal H$ with $\mathcal F_O$-pair $(F_1,F_2)$. Let $[F_1,F_2]=0,\,[F_1,F_1^*]=[F_2,F_2^*]$. Then we have from Theorem 6.1 of \cite{T:B} that $(V_1, V_2, V_3)$ on $\mathcal{K}=\mathcal{H}\oplus \ell^2(\mathcal{D}_P)$ is the minimal $\mathbb E$-isometric dilation of $(A,B,P)$, where $V_3$ on $\mathcal{K}$ is the minimal isometric dilation of $P$. By Theorem 5.6 of \cite{T:B}, there is a decomposition of $\mathcal{K}$ into a direct sum $\mathcal{K}=\mathcal{K}_1 \oplus \mathcal{K}_2$ such that $\mathcal{K}_1$ and $\mathcal{K}_2$ reduce $V_1$, $V_2$, $V_3$, and $({V}_{11},{V}_{12},{V}_{13})=(V_1|_{\mathcal{K}_1}, V_2|_{\mathcal{K}_1},V_3|_{\mathcal{K}_1})$ on $\mathcal{K}_1$ is a pure $\mathbb E$-isometry while $({V}_{21},{V}_{22},{V}_{23})=(V_1|_{\mathcal{K}_2}, V_2|_{\mathcal{K}_2},V_3|_{\mathcal{K}_2})$ on $\mathcal{K}_2$ is an $\mathbb E$-unitary. Taking a cue from this fact, we have the following conditional $\mathbb E$-isometric dilation for a c.n.u $\mathbb E$-contraction $(A,B,P)$.
\begin{thm}[\cite{B:P5}, Theorem 3.5]\label{commutativemodel}
Let $(A,B,P)$ be a c.n.u $\mathbb{E}$-contraction on a Hilbert space $\mathcal{H}$ with $\mathcal F_O$-pair $(F_1,F_2)$. Let $[F_1,F_2] = 0$, $[F_1,F_1^*] = [F_2,F_2^*]$. Suppose $(G_1,G_2)$ is the $\mathcal F_O$-pair of $({V}_{11}^*,{V}_{12}^*,{V}_{13}^*)$. Then $(\widetilde{A}_1 \oplus \widetilde{A}_2,\widetilde{B}_1 \oplus \widetilde{B}_2,\widetilde{P}_1 \oplus \widetilde{P}_2)$ on $\textbf{K}$ is a minimal $\mathbb E$-isometric dilation of $(A,B,P)$, where
\begin{equation*}
\left.\begin{aligned}
\widetilde{A}_1&=I_{H^2(\mathbb D)}\otimes \mathcal V_1^* G_1^* \mathcal V_1\, +\, M_z\otimes \mathcal V_1^*G_2 \mathcal V_1\\
\widetilde{B}_1&=I_{H^2(\mathbb D)}\otimes \mathcal V_1^* G_2^* \mathcal V_1 \,+\, M_z\otimes \mathcal V_1^* G_1 \mathcal V_1 \\
\widetilde{P_1} &= M_z\otimes I_{\mathcal{D}_{P^*}}
\end{aligned}
\right\} \quad \text{on } \; \; H^2(\mathbb D)\otimes \mathcal{D}_{P^*}
\end{equation*}
and
\[
\widetilde{A}_2 = \mathcal V_2^*V_{21}\mathcal V_2\, , \, \widetilde{B}_2 = \mathcal V_2^*V_{22}\mathcal V_2\,, \, \widetilde{P_2}=M_{e^{it}}|_{\overline{\Delta_PL^2(\mathcal{D}_P)}} \quad \text{ on } \;\; \overline{\Delta_PL^2(\mathcal{D}_P)},
\]
for some unitaries $\mathcal V_1: \mathcal D_{P^*} \rightarrow \mathcal D_{V_3^*}$ and $\mathcal V_2: \overline{\Delta_PL^2(\mathcal{D}_P)} \rightarrow \mathcal{K}_2$. If $\underline{S}= (S_1,S_2,S_3)$ is an $\mathbb E$-isometric dilation of $(A,B,P)$ such that $S_3$ is the minimal isometric dilation of $P$, then $\underline{S}$ is unitarily equivalent to $(\widetilde{A}_1 \oplus \widetilde{A}_2,\widetilde{B}_1 \oplus \widetilde{B}_2,\widetilde{P}_1 \oplus \widetilde{P}_2)$.
Finally,
\begingroup
\allowdisplaybreaks
\begin{align*}
A &= P_{\mathcal{H}}(\widetilde{A}_1\oplus\widetilde{A}_2)|_{\mathcal{H}}\,,\\
B & = P_{\mathcal{H}}(\widetilde{B}_1\oplus\widetilde{B}_2)|_{\mathcal{H}}\,,\\
P & = P_{\mathcal{H}}((M_z\otimes I_{\mathcal{D}_{P^*}}) \oplus M_{e^{it}}|_{\overline{\Delta_PL^2(\mathcal{D}_P)}})|_{\mathcal{H}}\,.
\end{align*}
\endgroup
\end{thm}
As a direct consequence of the above theorem we have the following result.
\begin{thm}\label{bisaipalEdilation}
Let $(A,B,P)$ be a c.n.u $\Bbb E$-contraction on a Hilbert space $\mathcal H$ with $\mathcal F_O$-pair $(F_1,F_2)$. Let $[F_1,F_2]=0,\,[F_1,F_1^*]=[F_2,F_2^*]$. Then there is an isometry $\varphi$ from $\mathcal{H}$ into $H^2(\mathcal{D}_{P^*}) \oplus \overline{\Delta_P L^2(\mathcal{D}_P)}$ such that
\[
\varphi A^*= \left(M_{X_1^*+zX_2}\oplus\widetilde{A}_2\right)^*\varphi,\qquad \varphi B^*= \left(M_{X_2^*+zX_1}\oplus\widetilde{B}_2\right)^*\varphi
\]
\[
\text{and }\;\; \varphi P^*=\left(M_z \oplus M_{e^{it}}\right)^*\varphi,
\]
where $\left(M_{X_1^*+zX_2}, M_{X_2^*+zX_1},M_z\right)$ on $H^2(\mathcal{D}_{P^*})$ is a pure $\Bbb E$-isometry and $\big(\widetilde{A}_2,\widetilde{B}_2,M_{e^{it}}\big)$ on $\overline{\Delta_P L^2(\mathcal{D}_P)}$ is an $\Bbb E$-unitary.
\end{thm}
\begin{proof}
We apply the same argument as in the proof of Theorem \ref{commutativemodel} and the proof follows.
\end{proof}
Here is an analogue of Lemma \ref{lem1} for $\Bbb E$-contractions.
\begin{lem}\label{lem01}
If $(A_1,A_{2},M_z)$ on $H^2(\mathcal{E})$ is an $\Bbb E$-isometry, then there exist $Y_1,\,Y_2\in\mathcal{B}(\mathcal{E})$ such that $A_i=M_{Y_i+zY_{3-i}^*}$ for $i=1,\,2$.
\end{lem}
\begin{proof}
The proof is similar to the proof of Lemma \ref{lem1} and we skip it.
\end{proof}
We present an analogue of Lemma \ref{lemmain} for $\Bbb E$-contractions.
\begin{lem}\label{lemmain01}
Let $P$ be a c.n.u contraction on a Hilbert space $\mathcal{H}$. Let $(M_{X_1^*+zX_{2}}, M_{X_{2}^*+zX_1},M_z)$ on $H^2(\mathcal{D}_{P^*})$ be an $\Bbb E$-isometry and $(R_1, R_{2},M_{e^{it}})$ be an $\Bbb E$-unitary on $\overline{\Delta_P L^2(\mathcal{D}_P)}$. If for each $i=1,\,2,$
\begin{equation}\label{eqn01}
\begin{pmatrix}
M_{X_i^*+zX_{3-i}} & 0\\
0 & R_i
\end{pmatrix}\begin{pmatrix}
M_{\Theta_P} \\
\Delta_P
\end{pmatrix}H^2(\mathcal{D}_P)\subseteq \begin{pmatrix}
M_{\Theta_P} \\
\Delta_P
\end{pmatrix}H^2(\mathcal{D}_P),
\end{equation}
then there exists an $\Bbb E$-isometry $(M_{Y_1+zY_{2}^*}, M_{Y_{2}+zY_1^*},M_z)$ on $H^2(\mathcal{D}_P)$ such that for each $i=1, 2,$
\[
\begin{pmatrix}
M_{X_i^*+zX_{3-i}} & 0\\
0 & R_i
\end{pmatrix}\begin{pmatrix}
M_{\Theta_P} \\
\Delta_P
\end{pmatrix}=\begin{pmatrix}
M_{\Theta_P} \\
\Delta_P
\end{pmatrix}M_{{Y_i+zY_{3-i}^*}}.
\]
\end{lem}
\begin{proof}
The proof is similar to that of Theorem 4.10 of \cite{S:B}.
\end{proof}
Combining Theorem \ref{bisaipalEdilation} and Lemma \ref{lemmain01} we have the following result.
\begin{thm}\label{bisai1}
Let $(A,B,P)$ be a c.n.u $\Bbb E$-contraction on a Hilbert space $\mathcal{H}$ with $(F_1,F_2)$ and $(G_1,G_2)$ being the $\mathcal F_O$-pairs of $(A,B,P)$ and $(A^*,B^*,P^*)$ respectively. Let $[F_1,F_2]=0,\,[F_1,F_1^*]=[F_2,F_2^*]$.
Then there exist $Y_1,\,Y_2\in \mathcal{B}(\mathcal{D}_P)$ such that $(M_{Y_1+zY_{2}^*},M_{Y_{2}+zY_1^*},M_z)$ on $H^2(\mathcal{D}_P)$ is an $\Bbb E$-isometry and for $i=1,\,2$,
\[
\begin{pmatrix}
M_{V_1^*G_i^*V_1+zV_1^*G_{3-i}V_1} & 0\\
0 & \widetilde{V}_2^*R_i\widetilde{V}_2
\end{pmatrix}\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}=\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}M_{Y_i+zY_{3-i}^*}.
\]
\end{thm}
\begin{proof}
We may imitate the proof of Theorem \ref{bisai}.
\end{proof}
\subsection{Admissible $\mathcal F_O$-pairs}
We have an analogue of Lemma \ref{lem} for c.n.u $\Bbb E$-contractions.
\begin{lem}\label{lemEcontraction}
Let $(A,B,P)$ be a c.n.u $\Bbb E$-contraction as in Theorem \ref{bisai1} with
\begingroup
\allowdisplaybreaks
\begin{align*}
&\varphi_2A^*=\begin{pmatrix}
M_{V_1^*G_1^*V_1+zV_1^*G_{2}V_1} & 0\\
0 & \frac{1}{2}(I+M_{e^{it}})
\end{pmatrix}^*\varphi_2,\\
& \varphi_2B^*=\begin{pmatrix}
M_{V_1^*G_2^*V_1+zV_1^*G_{1}V_1} & 0\\
0 & \frac{1}{2}(I+M_{e^{it}})
\end{pmatrix}^*\varphi_2\\
\text{and }\;\;& \varphi_2P^*=\begin{pmatrix}
M_z & 0\\
0 & M_{e^{it}}
\end{pmatrix}^*\varphi_2.
\end{align*}
\endgroup
Then
\[
\mathcal{A}_* + \mathcal{A}_*P^*=2\mathcal{A}_*A^*=2\mathcal{A}_*B^*
\]
and
\[
P\mathcal{A}_* + \mathcal{A}_* = 2P\mathcal{A}_*A^*=2P\mathcal{A}_*B^*.
\]
\end{lem}
\begin{proof}
Follows immediately by proceeding as in the proof of Lemma \ref{lem}.
\end{proof}
\begin{lem}
The triple $\left(M_{(I+e^{it})/2},M_{(I+e^{it})/2},M_{e^{it}}\right)$ on $\overline{\Delta_P L^2(\mathcal{D}_P)}$ is an $\mathbb E$-unitary.
\end{lem}
\begin{proof}
This follows directly from Theorem \ref{thm:tu}; we omit the details.
\end{proof}
The following theorem gives a necessary condition on the $\mathcal F_O$-pairs of a c.n.u $\Bbb E$-contraction $(A,B,P)$ and its adjoint $(A^*,B^*,P^*)$, and is the analogue of Theorem \ref{necessarytheorem} in the tetrablock setting.
\begin{thm}\label{necessarytheorem1}
Let $(A,B,P)$ be a c.n.u $\Bbb E$-contraction on a Hilbert space $\mathcal{H}$ as in Lemma \ref{lemEcontraction}. Suppose $(F_1,F_2)$ is the $\mathcal F_O$-pair of $(A,B,P)$ such that $[F_1,F_2]=0,\,[F_1,F_1^*]=[F_2,F_2^*]$. Then for $j=1,\,2$,
\begin{equation}\label{eqnnecessaryE}
\begin{pmatrix}
M_{G_j^* + z G_{3-j}} & 0\\
0 & M_{(I+e^{it})/2}
\end{pmatrix}\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}=\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}M_{F_j+zF_{3-j}^*}.
\end{equation}
Moreover, if $V_1$ is as in \eqref{eqnU}, then there exists an $\Bbb E$-isometry $(M_{Y_1+zY_{2}^*}, M_{Y_{2}+zY_1^*},M_z)$ on $H^2(\mathcal{D}_P)$ such that
\begin{equation}\label{necessary1E}
\begin{pmatrix}
M_{G_j^* + z G_{3-j}} & 0\\
0 & M_{(I+e^{it})/2}
\end{pmatrix}\begin{pmatrix}
M_{V_1}M_{\Theta_P}\\
\Delta_P
\end{pmatrix}=\begin{pmatrix}
M_{V_1}M_{\Theta_P}\\
\Delta_P
\end{pmatrix}M_{Y_j+zY_{3-j}^*}.
\end{equation}
\end{thm}
\begin{proof}
One can prove it easily by using Lemmas \ref{Hari} \& \ref{lemEcontraction} and mimicking the proof of Theorem \ref{necessarytheorem}.
\end{proof}
The following result is an analogue of Theorem \ref{sufficient}.
\begin{thm}\label{sufficientE}
Let $P$ be a c.n.u contraction on a Hilbert space $\mathcal{H}$. Let $F_1, F_{2}\in \mathcal{B}(\mathcal{D}_P)$ and $G_1,G_{2}\in \mathcal{B}(\mathcal{D}_{P^*})$ be such that they satisfy Equations \eqref{eqnnecessaryE} and \eqref{necessary1E}.
Suppose $[F_1,F_2]=0,\,[F_1,F_1^*]=[F_2,F_2^*]$. Then there exists an $\Bbb E$-contraction $(A,B,P)$ such that $(F_1,F_2)$ becomes the $\mathcal F_O$-pair of $(A,B,P)$ and $(G_1,G_2)$ becomes the $\mathcal F_O$-pair of $(A^*,B^*,P^*)$.
\end{thm}
\begin{proof}
We apply the same argument as in the proof of Theorem \ref{sufficient} and the proof follows.
\end{proof}
Combining Theorems \ref{necessarytheorem1} \& \ref{sufficientE}, we get the following theorem, which is the main result of this section.
\begin{thm}\label{tetrablock}
Let $(A,B,P)$ be a c.n.u $\Bbb E$-contraction on a Hilbert space $\mathcal{H}$ as in Lemma \ref{lemEcontraction}. Suppose $(F_1,F_2)$ is the $\mathcal F_O$-pair of $(A,B,P)$ such that $[F_1,F_2]=0,\,[F_1,F_1^*]=[F_2,F_2^*]$. Then for $j=1,\,2$,
\begin{equation}\label{eqnnecessaryEcont}
\begin{pmatrix}
M_{G_j^* + z G_{3-j}} & 0\\
0 & M_{(I+e^{it})/2}
\end{pmatrix}\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}=\begin{pmatrix}
M_{\Theta_P}\\
\Delta_P
\end{pmatrix}M_{F_j+zF_{3-j}^*}.
\end{equation}
Moreover, if $V_1$ is as in \eqref{eqnU}, then there exists an $\Bbb E$-isometry $(M_{Y_1+zY_{2}^*}, M_{Y_{2}+zY_1^*},M_z)$ on $H^2(\mathcal{D}_P)$ such that
\begin{equation}\label{necessary1Econt}
\begin{pmatrix}
M_{G_j^* + z G_{3-j}} & 0\\
0 & M_{(I+e^{it})/2}
\end{pmatrix}\begin{pmatrix}
M_{V_1}M_{\Theta_P}\\
\Delta_P
\end{pmatrix}=\begin{pmatrix}
M_{V_1}M_{\Theta_P}\\
\Delta_P
\end{pmatrix}M_{Y_j+zY_{3-j}^*}.
\end{equation}
Conversely, if $P$ is a c.n.u contraction on a Hilbert space $\mathcal{H}$ and $F_1, F_{2}\in \mathcal{B}(\mathcal{D}_P)$ and $G_1,G_{2}\in \mathcal{B}(\mathcal{D}_{P^*})$ are such that they satisfy Equations \eqref{eqnnecessaryEcont} and \eqref{necessary1Econt} with $[F_1,F_2]=0,\,[F_1,F_1^*]=[F_2,F_2^*]$, then there exists an $\Bbb E$-contraction $(A,B,P)$ such that $(F_1,F_2)$ becomes the $\mathcal F_O$-pair of $(A,B,P)$ and $(G_1,G_2)$ becomes the $\mathcal F_O$-pair of $(A^*,B^*,P^*)$.
\end{thm}
\begin{center}
\textbf{Acknowledgment}
\end{center}
The author acknowledges Harish-Chandra Research Institute, Prayagraj, for warm hospitality. The author is grateful to the Indian Statistical Institute, Kolkata, and Prof. Debashish Goswami for a visiting scientist position under which most of this work was done. He wishes to thank the Stat-Math Unit, ISI Kolkata, for all the facilities provided to him. The author has been supported by Prof. Debashish Goswami's J C Bose grant and the institute post-doctoral fellowship of HRI, Prayagraj.
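The computations in the Appendix below freely use the standard Sz.-Nagy--Foias objects attached to a contraction; as a convenience to the reader, we record them here. This is a standard reminder following \cite{Nagy}, with the sign convention for the characteristic function matching the series expansions used in the Appendix:
\[
D_P=(I-P^*P)^{1/2},\qquad D_{P^*}=(I-PP^*)^{1/2},\qquad
\mathcal{D}_P=\overline{\textit{Ran}}\,D_P,\qquad
\mathcal{D}_{P^*}=\overline{\textit{Ran}}\,D_{P^*},
\]
and
\[
\Theta_P(z)=\left[-P+zD_{P^*}(I-zP^*)^{-1}D_P\right]\Big|_{\mathcal{D}_P}
=-P+\sum_{m=0}^{\infty}z^{m+1}D_{P^*}P^{*m}D_P,\qquad
\Delta_P(t)=\bigl(I-\Theta_P(e^{it})^*\Theta_P(e^{it})\bigr)^{1/2}.
\]
The Neumann-series expansion of $\Theta_P$ evaluated at $z=e^{it}$ is exactly the form substituted in the proofs of Equations \eqref{expression1} and \eqref{expression2}.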
\text{ }\\
\noindent \textbf{Appendix}\\
\noindent \textit{Proof of Equation \eqref{expression1}:}
\begingroup
\allowdisplaybreaks
\begin{align*}
&\Delta_P(t)^2\left[{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}\right]\\
=& \left[I-\Theta_P(e^{it})^*\Theta_P(e^{it})\right]\left[{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}\right]\\
=&\left[{{n-1}\choose {j}}+{{n-1}\choose {j-1}}e^{it}\right]-\left[-P^*+\sum\limits_{m=0}^{\infty}e^{-i(m+1)t}D_PP^mD_{P^*}\right]\\
& \times \left[-P + \sum\limits_{m=0}^{\infty}e^{i(m+1)t}D_{P^*}P^{*m}D_P\right]\left[{{n-1}\choose j} + {{n-1}\choose {j-1}}e^{it}\right]\\
=&{{n-1}\choose j} + {{n-1}\choose {j-1}}e^{it} - \left[-P^*+\sum\limits_{m=-\infty}^{-1}e^{imt}D_PP^{-m-1}D_{P^*}\right]\\
& \times \Bigg[-{{n-1}\choose j}P + e^{it}\left({{n-1}\choose {j}}D_{P^*}D_P-{{n-1}\choose {j-1}}P\right)\\
&+\sum\limits_{m=2}^{\infty}e^{imt}D_{P^*}P^{*(m-2)}\textbf{E}\Bigg],\,\text{where }\textbf{E}=\left({{n-1}\choose {j}}P^* +{{n-1}\choose {j-1}}\right)D_P\\
=&\Bigg[{{n-1}\choose {j}}D_P^2-D_PD_{P^*}\left({{n-1}\choose {j}}D_{P^*}D_P-{{n-1}\choose {j-1}}P\right) -\sum\limits_{k=2}^{\infty}D_PP^{k-1}D_{P^*}^2P^{*(k-2)}\textbf{E}\Bigg]\\
+&\sum\limits_{m=1}^{\infty}e^{imt}\Bigg[D_{P}P^{*(m-1)}\textbf{E}-\sum\limits_{k=1}^{\infty}D_PP^{k-1}D_{P^*}^2P^{*(m+k-2)}\textbf{E}\Bigg] + \sum\limits_{m=-\infty}^{-1}e^{imt}\Bigg[{{n-1}\choose{j}}D_PP^{-m-1}D_{P^*}P\\
-&D_PP^{-m}D_{P^*}\left({{n-1}\choose {j}}D_{P^*}D_P-{{n-1}\choose{j-1}}P\right) -\sum\limits_{k=-\infty}^{m-2}D_PP^{-k-1}D_{P^*}^2P^{*(m-k-2)}\textbf{E}\Bigg].
\end{align*}
\endgroup
The last equality follows from the fact that $P^*D_{P^*}=D_PP^*$.
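A small bookkeeping step in the computation above deserves to be made explicit: in passing to the expression with negative frequencies, the first bracketed factor is reindexed by the substitution $m\mapsto -m-1$, which converts the negative powers of $e^{it}$ into a sum over $m\leq -1$:
\[
\sum_{m=0}^{\infty}e^{-i(m+1)t}D_PP^{m}D_{P^*}
=\sum_{m=-\infty}^{-1}e^{imt}D_PP^{-m-1}D_{P^*}.
\]
After this substitution the product of the two series can be collected according to the frequency $e^{imt}$, with the cases $m=0$, $m\geq 1$ and $m\leq -1$ treated separately.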
Now consider the coefficients of the above expression:\\
\textbf{Constant term:}
\begingroup
\allowdisplaybreaks
\begin{align*}
&{{n-1}\choose {j}}D_P^2-D_PD_{P^*}\left({{n-1}\choose {j}}D_{P^*}D_P-{{n-1}\choose {j-1}}P\right)-\sum\limits_{k=2}^{\infty}D_PP^{k-1}D_{P^*}^2P^{*(k-2)}\textbf{E}\\
=&{{n-1}\choose {j}}D_P^2-{{n-1}\choose {j}}D_P^2+{{n-1}\choose {j}}D_PPP^*D_{P}+{{n-1}\choose {j-1}}D_PD_{P^*}P\\
&-\sum\limits_{k=2}^{\infty}D_PP\left(P^{k-2}P^{*(k-2)}-P^{k-1}P^{*(k-1)}\right)\textbf{E}\\
=&{{n-1}\choose {j}}D_PPP^*D_{P}+{{n-1}\choose {j-1}}D_PD_{P^*}P-D_PP(I-\mathcal{A}_*)\textbf{E}\\
=& {{n-1}\choose {j}}D_PPP^*D_{P}+{{n-1}\choose {j-1}}D_PPD_{P}-{{n-1}\choose {j}}D_PPP^*D_P\\
&-{{n-1}\choose{j-1}}D_PPD_P+{{n-1}\choose{j}}D_PP\mathcal{A}_*P^*D_P+{{n-1}\choose{j-1}}D_PP\mathcal{A}_*D_P\\
=&{{n-1}\choose{j}}D_P\mathcal{A}_*D_P+{{n-1}\choose{j-1}}D_PP\mathcal{A}_*D_P \;\;\;\; (\text{since }P\mathcal{A}_*P^*=\mathcal{A}_*).
\end{align*}
\endgroup
The second last equality follows from the fact that $D_{P^*}P=PD_P$.\\
\noindent \textbf{Coefficient of $e^{imt}$, $m\geq 1$:}
\begingroup
\allowdisplaybreaks
\begin{align*}
&D_{P}P^{*(m-1)}\textbf{E} -\sum\limits_{k=1}^{\infty}D_PP^{k-1}D_{P^*}^2P^{*(m+k-2)}\textbf{E}\\
=& {{n-1}\choose {j}}D_{P}P^{*m}D_P+{{n-1}\choose {j-1}}D_{P}P^{*(m-1)}D_P-\sum\limits_{k=1}^{\infty}D_P\left(P^{k-1}P^{*(k-1)}-P^{k}P^{*k}\right)P^{*(m-1)}\textbf{E}\\
=& {{n-1}\choose {j}}D_{P}P^{*m}D_P+{{n-1}\choose {j-1}}D_{P}P^{*(m-1)}D_P \\
&-D_P(I-\mathcal{A}_*)P^{*(m-1)}\left({{n-1}\choose{j-1}}+{{n-1}\choose{j}}P^*\right)D_P\\
=&{{n-1}\choose {j}}D_PP^{*m}D_P + {{n-1}\choose {j-1}}D_PP^{*(m-1)}D_P-{{n-1}\choose {j-1}}D_PP^{*(m-1)}D_P\\
&-{{n-1}\choose {j}}D_PP^{*m}D_P+{{n-1}\choose {j-1}}D_P\mathcal{A}_*P^{*(m-1)}D_P+{{n-1}\choose{j}}D_P\mathcal{A}_*P^{*m}D_P\\
=&{{n-1}\choose {j-1}}D_P\mathcal{A}_*P^{*(m-1)}D_P+{{n-1}\choose{j}}D_P\mathcal{A}_*P^{*m}D_P.
\end{align*}
\endgroup
\textbf{Coefficient of $e^{imt}$, $m\leq -1$:}
\begingroup
\allowdisplaybreaks
\begin{align*}
&{{n-1}\choose{j}}D_PP^{-m-1}D_{P^*}P -D_PP^{-m}D_{P^*}\left({{n-1}\choose {j}}D_{P^*}D_P-{{n-1}\choose{j-1}}P\right)\\
&-\sum\limits_{k=-\infty}^{m-2}D_PP^{-k-1}D_{P^*}^2P^{*(m-k-2)}\textbf{E}\\
=&{{n-1}\choose{j}}D_PP^{-m}D_{P}-{{n-1}\choose{j}}D_PP^{-m}D_{P^*}^2D_P+{{n-1}\choose{j-1}}D_PP^{-m+1}D_P\\
&-\sum\limits_{k=2-m}^{\infty}D_PP^{-m+1}\left(P^{m+k-2}P^{*(m+k-2)}-P^{m+k-1}P^{*(m+k-1)}\right)\textbf{E}\\
=&{{n-1}\choose{j}}D_PP^{-m}D_{P}-{{n-1}\choose{j}}D_PP^{-m}D_{P}+{{n-1}\choose{j}}D_PP^{-m+1}P^*D_{P}\\
&+ {{n-1}\choose{j-1}}D_PP^{-m+1}D_{P}-D_PP^{-m+1}(I-\mathcal{A}_*)\textbf{E}\\
=&{{n-1}\choose{j}}D_PP^{-m+1}P^*D_{P}+{{n-1}\choose{j-1}}D_PP^{-m+1}D_{P}-{{n-1}\choose{j}}D_PP^{-m+1}P^*D_{P}\\
&-{{n-1}\choose{j-1}}D_PP^{-m+1}D_{P}+{{n-1}\choose{j}}D_PP^{-m+1}\mathcal{A}_*P^*D_{P}+{{n-1}\choose {j-1}}D_PP^{-m+1}\mathcal{A}_*D_P\\
=&{{n-1}\choose{j}}D_PP^{-m}\mathcal{A}_*D_{P}+{{n-1}\choose {j-1}}D_PP^{-m+1}\mathcal{A}_*D_P.
\end{align*}
\endgroup
The last equality follows from the fact that $P\mathcal{A}_*P^*=\mathcal{A}_*$.
Therefore, Equation \eqref{expression1} holds.\\
\noindent \textit{Proof of Equation \eqref{expression2}:}
\begingroup
\allowdisplaybreaks
\begin{align*}
&\Delta_P(t)^2(A_j+e^{it}A_{n-j}^*)\\
=&(I-\Theta_P(e^{it})^*\Theta_P(e^{it}))(A_j+e^{it}A_{n-j}^*)\\
=&(A_j+e^{it}A_{n-j}^*)-\Theta_P(e^{it})^*(B_j^*+ e^{it}B_{n-j})\Theta_P(e^{it})\;\;(\text{using Lemma } \ref{BisaiPal})\\
=&(A_j+e^{it}A_{n-j}^*)-\left[-P^*+\sum\limits_{m=0}^{\infty}e^{-i(m+1)t}D_PP^mD_{P^*}\right][B_j^*+ e^{it}B_{n-j}]\\
& \qquad\qquad\qquad\times \left[-P+\sum\limits_{m=0}^{\infty}e^{i(m+1)t}D_{P^*}P^{*m}D_P\right]\\
=&A_j+e^{it}A_{n-j}^* -\left[-P^*+\sum\limits_{m=0}^{\infty}e^{-i(m+1)t}D_PP^mD_{P^*}\right] \Bigg[-B_j^*P\\
&+e^{it}(B_j^*D_{P^*}D_P-B_{n-j}P)+\sum\limits_{m=2}^{\infty}e^{imt}(B_j^*D_{P^*}P^*+B_{n-j}D_{P^*})P^{*(m-2)}D_P\Bigg]\\
=&A_j+e^{it}A_{n-j}^* -\left[-P^*+\sum\limits_{m=-\infty}^{-1}e^{imt}D_PP^{-m-1}D_{P^*}\right] \Bigg[-B_j^*P+e^{it}(B_j^*D_{P^*}D_P-B_{n-j}P)+\\
&\sum\limits_{m=2}^{\infty}e^{imt}D_{P^*}S_{n-j}^*P^{*(m-2)}D_P\Bigg]\big(\text{using Theorem }\ref{BisaiPal1} \text{ for }(S_1^*, \dots, S_{n-1}^*,P^*)\big)\\
=& \Bigg[A_j-P^*B_j^*P - D_PD_{P^*}(B_j^*D_{P^*}D_P-B_{n-j}P)-\sum\limits_{k=-\infty}^{-2}D_PP^{-k-1}D_{P^*}^2P^{*(-k-2)}S_{n-j}^*D_P\Bigg]\\
&+e^{it}\Bigg[A_{n-j}^*-P^*B_{n-j}P+P^*B_j^*D_{P^*}D_P-\sum\limits_{k=1}^{\infty}D_PP^{k-1}D_{P^*}^2P^{*(k-1)}S_{n-j}^*D_P\Bigg]\\
&+\sum\limits_{m=2}^{\infty}e^{imt}\Bigg[P^*D_{P^*}S_{n-j}^*P^{*(m-2)}D_P-\sum\limits_{k=1}^{\infty}D_PP^{k-1}D_{P^*}^2P^{*(m+k-2)}S_{n-j}^*D_P\Bigg]\\
&+\sum\limits_{m=-\infty}^{-1}e^{imt}\Bigg[D_PP^{-m-1}D_{P^*}B_j^*P-D_PP^{-m}D_{P^*}(B_j^*D_{P^*}D_P-B_{n-j}P)\\
&-\sum\limits_{k=2-m}^{\infty}D_PP^{k-1}D_{P^*}^2P^{*(m+k-2)}S_{n-j}^*D_P\Bigg].
\end{align*}
\endgroup
Let us simplify the coefficients of the above expression:\\
\textbf{Constant term:}
\begingroup
\allowdisplaybreaks
\begin{align*}
&A_j-P^*B_j^*P - D_PD_{P^*}(B_j^*D_{P^*}D_P-B_{n-j}P)-\sum\limits_{k=-\infty}^{-2}D_PP^{-k-1}D_{P^*}^2P^{*(-k-2)}S_{n-j}^*D_P\\
=&A_j-P^*PA_j-D_P(S_j^*-S_{n-j}P^*)^*D_P+D_PD_{P^*}B_{n-j}P\\
&-\sum\limits_{k=2}^{\infty}D_{P}P\left(P^{k-2}P^{*(k-2)}-P^{k-1}P^{*(k-1)}\right)S_{n-j}^*D_P\;\;(\text{by Lemma }\ref{BPlem})\\
=&D_P^2A_j-D_PS_jD_P+D_PPS_{n-j}^*D_P+D_PD_{P^*}B_{n-j}P-D_PP(I-\mathcal{A}_*)S_{n-j}^*D_P\\
=&D_P^2A_j-D_PS_jD_P+D_PD_{P^*}B_{n-j}P+D_PP\mathcal{A}_*S_{n-j}^*D_P.
\end{align*}
\endgroup
\textbf{Coefficient of $e^{it}$:}
\begingroup
\allowdisplaybreaks
\begin{align*}
&A_{n-j}^*-P^*B_{n-j}P +P^*B_j^*D_{P^*}D_P-\sum\limits_{k=1}^{\infty}D_PP^{k-1}D_{P^*}^2P^{*(k-1)}S_{n-j}^*D_P\\
=&A_{n-j}^*-A_{n-j}^*P^*P+P^*B_j^*D_{P^*}D_{P}-D_P(I-\mathcal{A}_*)S_{n-j}^*D_P\;(\text{by Lemma }\ref{BPlem})\\
=&A_{n-j}^*D_P^2+P^*B_j^*D_{P^*}D_{P}-D_PS_{n-j}^*D_P+D_P\mathcal{A}_*S_{n-j}^*D_P.
\end{align*}
\endgroup
\textbf{Coefficient of $e^{imt}$, $m\geq 2$:}
\begingroup
\allowdisplaybreaks
\begin{align*}
&P^*D_{P^*}S_{n-j}^*P^{*(m-2)}D_P-\sum\limits_{k=1}^{\infty}D_PP^{k-1}D_{P^*}^2P^{*(m+k-2)}S_{n-j}^*D_P\\
=&D_PP^*S_{n-j}^*P^{*(m-2)}D_P-\sum\limits_{k=1}^{\infty}D_P\left(P^{k-1}P^{*(k-1)}-P^kP^{*k}\right)P^{*(m-1)}S_{n-j}^*D_P\\
=&D_PS_{n-j}^*P^{*(m-1)}D_P-D_P(I-\mathcal{A}_*)P^{*(m-1)}S_{n-j}^*D_P\\
=&D_P\mathcal{A}_*P^{*(m-1)}S_{n-j}^*D_P.
\end{align*}
\endgroup
\textbf{Coefficient of $e^{imt}$, $m\leq -1$:}
\begingroup
\allowdisplaybreaks
\begin{align*}
&D_PP^{-m-1}D_{P^*}B_j^*P-D_PP^{-m}D_{P^*}(B_j^*D_{P^*}D_P-B_{n-j}P)\\
&-\sum\limits_{k=2-m}^{\infty}D_PP^{k-1}D_{P^*}^2P^{*(m+k-2)}S_{n-j}^*D_P\\
=&D_PP^{-m-1}D_{P^*}PA_j-D_PP^{-m}(S_j^*-S_{n-j}P^*)^*D_P+D_PP^{-m}D_{P^*}B_{n-j}P\\
&-\sum\limits_{k=2-m}^{\infty}D_PP^{1-m}\left(P^{m+k-2}P^{*(m+k-2)}-P^{m+k-1}P^{*(m+k-1)}\right)S_{n-j}^*D_P\\
=&D_PP^{-m}D_PA_j-D_PP^{-m}S_jD_P+D_PP^{-m+1}S_{n-j}^*D_P\\
&+D_PP^{-m}D_{P^*}B_{n-j}P-D_PP^{1-m}(I-\mathcal{A}_*)S_{n-j}^*D_P\\
=&D_PP^{-m}D_PA_j-D_PP^{-m}S_jD_P+D_PP^{-m+1}S_{n-j}^*D_P\\
&+D_PP^{-m}D_{P^*}B_{n-j}P-D_PP^{-m+1}S_{n-j}^*D_P+D_PP^{-m+1}\mathcal{A}_*S_{n-j}^*D_P\\
=&D_{P}P^{-m}(D_PA_j+D_{P^*}B_{n-j}P)-D_PP^{-m}S_jD_P+D_PP^{-m+1}\mathcal{A}_*S_{n-j}^*D_P\\
=& D_PP^{-m+1}\mathcal{A}_*S_{n-j}^*D_P, \quad(\text{using Lemma }\ref{BPlem1}).
\end{align*}
\endgroup
Thus Equation \eqref{expression2} holds.
\begin{thebibliography}{99}
\bibitem{A:W:Y} A. A. Abouhajar, M. C. White and N. J. Young, A Schwarz lemma for a domain related to $\mu$-synthesis, \textit{J. Geom. Anal.}, 17 (2007), 717 - 750.
\bibitem{Agler} J. Agler and N. J. Young, A commutant lifting theorem for a domain in $\Bbb C^2$ and spectral interpolation, \textit{J. Funct. Anal.}, 161 (1999), 452 - 477.
\bibitem{wermer} H. Alexander and J. Wermer, Several complex variables and Banach algebras, \textit{Graduate Texts in Mathematics, 35, 3rd Edition}, Springer, 1997.
\bibitem{Nagy} H. Bercovici, C. Foias, L. Kerchy and B. Sz.-Nagy, Harmonic Analysis of Operators on Hilbert Space, \textit{Universitext, Springer}, New York, 2010.
\bibitem{T:B} T. Bhattacharyya, The tetrablock as a spectral set, \textit{Indiana Univ. Math. J.}, 63 (2014), 1601 - 1629.
\bibitem{TirthaHari} T. Bhattacharyya, S. Lata and H. Sau, Admissible fundamental operators, \textit{J. Math. Anal. Appl.}, 425 (2015), 983 - 1003.
\bibitem{PalAdv} T. Bhattacharyya, S. Pal and S. Shyam Roy, Dilations of $\widetilde{\mathbb{G}}_3amma$-contractions by solving operator equations, \textit{Adv. Math.}, 230 (2012), 577 - 606. \bibitem{B:P} B. Bisai and S. Pal, The fundamental operator tuples associated with the symmetrized polydisc, \textit{New York J. Math.}, 27 (2021), 349 - 362. \bibitem{B:P5} B. Bisai and S. Pal, A model theory for operators associated with a domain related to $\mu$-synthesis, \textit{Collect. Math.} (To appear), https://doi.org/10.1007/s13348-021-00341-6. \bibitem{B:P6} B. Bisai and S. Pal, A Nagy-Foias program for a c.n.u $\widetilde{\mathbb{G}}_3amma_{n}$-contractions, \textit{Preprint}, arXiv:2110.03436. \bibitem{S:B} S. Biswas and Subrata S. Roy, Functional models of $\widetilde{\mathbb{G}}_3amma_n$-contractions and characterization of $\widetilde{\mathbb{G}}_3amma_n$-isometries, \textit{J. Funct. Anal} 266, 6224 - 6255 (2014). \bibitem{costara} C. Costara, On the spectral Nevanlinna-Pick problem, \textit{Stud. Math.} 170 (2005) 23 - 55. \bibitem{curto} R. E. Curto, Applications of several complex variables to multiparameter spectral theory in: Surveys of Some Recent Results in Operator Theory, Vol. II, in: Pitman Res. Notes Math. Ser., vol. 192, Longman Sci. Tech., Harlow, 1988, pp. 25 - 90. \bibitem{Douglas} R. G. Douglas, Structure theory for operators. I., \textit{J. Reine Angew. Math.} 232 (1968) 180 - 193. \bibitem{doyel} J. Doyel, ANalysis of feedback systems with structured uncertainties, \textit{IEE Proc.}, Control Theory Appl. 129 (1982) 242 - 250. \bibitem{E:Z} A. Edigarian and W. Zwonek, Geometry of the symmetrized polydisc, \textit{Arch. Math.} 84 (2005) 364 - 374. \bibitem{Kubrusly} Carlos S. Kubrusly, An Introduction to Models and Decompositions in Operator Theory, \textit{Birkhauser Boston}, 1997. \bibitem{A:Pal} A. Pal, On $\widetilde{\mathbb{G}}_3amma_n$-contractions and their conditional dilations, \textit{J. Math. Anal. Appl.}, 510 (2022), no. 
2, Paper No. 126016. \bibitem{S:Pdecom} S. Pal, Canonical decomposition of operators associated with the symmetrized polydisc, \textit{Complex Anal. Oper. Theory}, 12 (2018), 931 - 943. \bibitem{S:P} S. Pal, Operator theory and distinguished varieties in the symmetrized $n$-disk, \textit{https://arxiv.org/abs/1708.06015}. \bibitem{SPrational} S. Pal, Rational dilation for operators associated with spectral interpolation and distinguished varieties, \textit{https://arxiv.org/abs/1712.05707}. \bibitem{PalNagy} S. Pal, Dilation, functional model and a complete unitary invariant for $C_{.0}$ $\widetilde{\mathbb{G}}_3amma_{n}$-contractions, \textit{https://arxiv.org/abs/1708.06015}. \bibitem{Taylor} J. L. Taylor, The analytic -functional calculus for several commuting opoerators, \textit{Acta Math.} 125 (1970) 1 - 38. \bibitem{Taylor1} J. L. Taylor, A joint spectrum for several commuting operators, \textit{J. Func. Anal.} 6 (1970) 172 - 191. \end{thebibliography} \end{document}
\begin{document} \maketitle \begin{abstract} We consider the problem of unification modulo an equational theory ACh, which consists of a function symbol $h$ that is homomorphic over an associative-commutative operator $+$. Since unification modulo the ACh theory is undecidable, we define a variant of the problem called \textit{bounded ACh unification}. In this bounded version of ACh unification, we essentially bound the number of times $h$ can be applied to a term recursively, and only allow solutions that satisfy this bound. There is no bound on the number of occurrences of $h$ in a term, and the $+$ symbol can be applied an unlimited number of times. We give inference rules for solving the bounded version of the problem and prove that the rules are sound, complete, and terminating. We have implemented the algorithm in Maude and give experimental results. We argue that this algorithm is useful in cryptographic protocol analysis. \end{abstract}  \section{Introduction} Unification is a method to find a solution for a set of equations. For instance, consider an equation $x + y \overset{?}= a + b$, where $x$ and $y$ are variables, and $a$ and $b$ are constants. If $+$ is an uninterpreted function symbol, then the equation has one solution $\{ x \mapsto a,\,y \mapsto b\}$, and this unification is called syntactic unification. If the function symbol $+$ has the property of commutativity, then the equation has two solutions: $\{ x \mapsto a,\,y \mapsto b\}$ and $\{ x \mapsto b,\,y \mapsto a\}$; this is called unification modulo the commutativity theory.\par Unification modulo equational theories plays a significant role in symbolic cryptographic protocol analysis~\cite{EMM}. An overview and references for some of the algorithms may be seen in~\cite{KNW, EKLMMNS, NMM}.
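To make the two notions concrete, here is a minimal sketch (our illustration, not code from the paper; terms are hypothetically encoded as strings for variables and constants, and as tuples for sums) that brute-forces the unifiers of $x + y \overset{?}= a + b$ over the constants $a$ and $b$, once syntactically and once modulo commutativity:

```python
# Illustrative sketch only (hypothetical encoding): variables/constants are
# strings; a sum term is the tuple ('+', left, right).
def subst(term, s):
    """Apply the substitution s (a dict from variables to terms) to a term."""
    if isinstance(term, str):
        return s.get(term, term)
    return (term[0],) + tuple(subst(a, s) for a in term[1:])

def eq_mod_c(s, t):
    """Term equality modulo commutativity of '+'."""
    if s == t:
        return True
    if isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] == '+':
        return (eq_mod_c(s[1], t[1]) and eq_mod_c(s[2], t[2])) or \
               (eq_mod_c(s[1], t[2]) and eq_mod_c(s[2], t[1]))
    return False

lhs, rhs = ('+', 'x', 'y'), ('+', 'a', 'b')
candidates = [{'x': u, 'y': v} for u in ('a', 'b') for v in ('a', 'b')]
syntactic = [s for s in candidates if subst(lhs, s) == rhs]      # one unifier
modulo_c = [s for s in candidates if eq_mod_c(subst(lhs, s), rhs)]  # two unifiers
```

Brute force over ground candidates is, of course, only a toy illustration of the solution sets described above, not a unification algorithm.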
One such equational theory is given by the distributivity axioms: $x \times (y + z) = (x \times y) + (x \times z); (y + z)\times x = (y \times x) + (z \times x).$ A decision algorithm for unification modulo two-sided distributivity is presented in~\cite{MS}. A sub-problem of this, unification modulo one-sided distributivity, is of greater interest since many cryptographic protocol algorithms satisfy one-sided distributivity. In their paper~\cite{TA}, Tiden and Arnborg presented an algorithm for unification modulo one-sided distributivity, $x \times (y + z) = (x \times y) + (x \times z),$ and also showed that the problem becomes undecidable if we add the properties of associativity $x + (y + z) = (x+y)+z$ and a one-sided unit element $x\times1=x$. However, counterexamples have been presented~\cite{NMM} showing that the complexity of their algorithm is exponential, although it was originally believed to be polynomial-time bounded. \par  For practical purposes, one-sided distributivity can be viewed as the homomorphism theory, $h(x+y) = h(x) +h(y)$, where the unary operator $h$ distributes over the binary operator $+$. Homomorphisms are widely used in cryptographic protocol analysis. In fact, homomorphism is a common property that many electronic voting protocols satisfy~\cite{KRS}. \par Our goal is to present a novel construction of an algorithm to solve unification modulo the homomorphism theory over a binary symbol $+$ that also has the properties of associativity and commutativity (ACh), which is an undecidable unification problem~\cite{PN}. Given that ACh unification is undecidable but necessary for analyzing cryptographic protocols, we developed an approximation of ACh unification, which we show to be decidable. \par In this paper, we present an algorithm to solve a modified general unification problem modulo the ACh theory, which we call {\em bounded ACh unification}. We define the {\em h-height} of a term to be essentially the number of $h$ symbols recursively applied to each other.
We then only search for ACh unifiers of a bounded h-height. We do not restrict the h-height of terms in unification problems. Moreover, the number of occurrences of the $+$ symbol is bounded neither in a problem nor in its solutions. In order to accomplish this, we define the {\em h-depth} of a variable, which is the number of $h$ symbols applied above the variable. We develop a set of inference rules for ACh unification that keep track of the h-depth of variables. If the h-depth of any variable exceeds the bound $\kappa$, then the algorithm terminates with no solution. Otherwise, it gives all the unifiers or solutions of the problem.  \section{Preliminaries}  \ignore{We assume that the reader is familiar with the basic notation of unification theory and term rewriting systems (see for example~\cite{BN, BS}).\\ \\}  \subsection{Basic Notation}  We briefly recall the standard notation of unification theory and term rewriting systems from~\cite{BN, BS}. \par  Given a finite or countably infinite set of function symbols $\mathcal{F}$, also known as a signature, and a countable set of variables $\mathcal{V}$, the set of $\mathcal{F}$-terms over $\mathcal{V}$ is denoted by $\mathcal{T}( \mathcal{F}, \mathcal{V})$. The set of variables appearing in a term $t$ is denoted by $Var(t)$, and this notation is extended to sets of equations. A term $t$ is called ground if $Var(t)=\emptyset$.  Let $Pos(t)$ be the set of positions of a term $t$, including the root position $\epsilon$~\cite{BS}. For any $p\in Pos(t)$, $t|_p$ is the subterm of $t$ at the position $p$, and $t[s]_p$ is the term $t$ in which $t|_p$ is replaced by $s$.
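The notions $Pos(t)$, $t|_p$, and $t[s]_p$ can be sketched as follows (our hypothetical tuple encoding of terms, not the paper's implementation; positions are tuples of argument indices, with $\epsilon$ encoded as the empty tuple):

```python
# Sketch only: strings are variables/constants; a compound term f(t1, ..., tn)
# is the tuple ('f', t1, ..., tn). A position is a tuple of 0-based indices.
def positions(t):
    """Pos(t): all positions of t, root first."""
    if isinstance(t, str):
        return [()]
    return [()] + [(i,) + p for i, a in enumerate(t[1:]) for p in positions(a)]

def subterm(t, p):
    """t|_p: the subterm of t at position p."""
    for i in p:
        t = t[i + 1]          # skip the function-symbol slot at index 0
    return t

def replace(t, p, s):
    """t[s]_p: t with the subterm at position p replaced by s."""
    if p == ():
        return s
    i, rest = p[0], p[1:]
    return t[:i + 1] + (replace(t[i + 1], rest, s),) + t[i + 2:]
```

For example, with $t = f(h(x), y)$, the position $(0,0)$ addresses $x$, and replacing at position $(0)$ yields $f(z, y)$.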
\ignore{\blue{For any position~$p$ in a term $t$ (including the root position $\epsilon$), $t(p)$ is the symbol at position $p$, $t|_p$ is the subterm of $t$ at position $p$, and $t[u]_p$ is the term $t$ in which $t|_p$ is replaced by $u$.}}A substitution is a mapping from $\mathcal{V}$ to $\mathcal{T}( \mathcal{F}, \mathcal{V})$ with only finitely many variables not mapped to themselves and is denoted by $\sigma = \{x_1 \mapsto t_1,\ldots, x_n \mapsto t_n\}$, where the domain of $\sigma$ is $Dom(\sigma):=\{x_1,\ldots, x_n\}$. The range of $\sigma$, denoted by $Range(\sigma)$, is defined as the union of the sets $\{x\sigma\}$, where $x$ is a variable in $Dom(\sigma)$. The identity substitution is the substitution that maps all the variables to themselves.  The application of a substitution $\sigma$ to a term $t$, denoted by $t\sigma$, is defined by induction on the structure of terms: \begin{itemize} \item $x\sigma$, where $t$ is a variable $x$ \item $c$, where $t$ is a constant symbol $c$ \item $f(t_1\sigma, \ldots,t_n\sigma)$, where $t=f(t_{1}, \ldots, t_{n})$ with $n \geq 1$ \end{itemize} \ignore{$$t\sigma = \left\{ \begin{array}{lcl} x\sigma & \mbox{if}& t = x, \\  f(t_1\sigma, \ldots,t_n\sigma) & \mbox{if} & t=f( t_1, \ldots,t_n) \end{array} \right.$$ In the second case of this definition, $n=0$ is allowed: in this case, $f$ is a constant symbol and we have $f\sigma = f$.  An application of a substitution $\sigma$ to a term $t$ is denoted as $t\sigma$ and defined as, $x\sigma$, $f(t_1\sigma,\ldots, t_n\sigma)$ when $t$ is a variable $x$ or otherwise. Of course, an application on constant symbol gives the constant back as a result.} \par  The restriction of a substitution $\sigma$ to a set of variables $\mathcal{V}$, denoted by $\sigma|\mathcal{V}$, is the substitution which is equal to the identity everywhere except over $\mathcal{V} \cap Dom(\sigma)$, where it coincides with $\sigma$.
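This inductive definition, together with restriction, can be transcribed directly (an illustrative sketch with a hypothetical encoding: variables and constants are strings, compound terms are tuples headed by their function symbol, and a substitution is a dict whose keys are variables only):

```python
# Sketch only: strings are variables or constants; f(t1, ..., tn) is the
# tuple ('f', t1, ..., tn); sigma is a dict over variables.
def apply_subst(t, sigma):
    """t.sigma, by induction on the structure of t."""
    if isinstance(t, str):
        return sigma.get(t, t)   # variable case; constants are left unchanged
    return (t[0],) + tuple(apply_subst(a, sigma) for a in t[1:])

def restrict(sigma, vs):
    """sigma|vs: identity outside vs, coincides with sigma on vs."""
    return {x: t for x, t in sigma.items() if x in vs}
```

Constants never occur as keys of the dict, so the string case covers both the variable and constant clauses of the definition.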
\begin{definition}[More General Substitution] \emph{A substitution $\sigma$ is more general than a substitution $\theta$ if there exists a substitution $\eta$ such that $\theta = \sigma \eta$, denoted as $\sigma \lesssim \theta$. Note that the relation $\lesssim$ is a quasi-ordering, i.e., reflexive and transitive.} \end{definition} \begin{definition}[Unifier, Most General Unifier] \emph{A substitution $\sigma$ is a unifier or solution of two terms $s$ and $t$ if $s\sigma = t\sigma$; it is a most general unifier if for every unifier $\theta$ of $s$ and $t$, $\sigma \lesssim \theta$. Moreover, a substitution $\sigma$ is a solution of a set of equations if it is a solution of each of the equations. If a substitution $\sigma$ is a solution of a set of equations $\Gamma$, then this is denoted by $\sigma \models \Gamma$.} \end{definition} A set of identities $E$ is a subset of $\mathcal{T}( \mathcal{F}, \mathcal{V}) \times \mathcal{T}( \mathcal{F}, \mathcal{V})$, and its elements are written in the form  $s\thickapprox t$. An equational theory $=_E$ is induced by a fixed set of identities $E$: it is the least congruence relation that is closed under substitution and contains $E$. \begin{definition}[\textit{E}-Unification Problem, \textit{E}-Unifier, \textit{E}-Unifiable] \emph{Let $\mathcal{F}$ be a signature and $E$ be an equational theory. An \textit{E}-unification problem over $\mathcal{F}$ is a finite set of equations $ \Gamma = \{ s_1 \overset{?}=_E t_1,\ldots, s_n \overset{?}=_E t_n\}$ between terms. An \textit{E}-unifier or \textit{E}-solution of two terms $s$ and $t$ is a substitution $\sigma$ such that $s\sigma =_E t\sigma$.  An \textit{E}-unifier of $\Gamma$ is a substitution $\sigma$ such that $s_i\sigma =_E t_i \sigma$ for   $i= 1,\ldots,n$. The set of all \textit{E}-unifiers is denoted by $\mathcal{U}_E(\Gamma)$, and $\Gamma$ is called \textit{E}-unifiable  if $\mathcal{U}_E(\Gamma) \neq \emptyset$.
If $E = \emptyset$ then $\Gamma$ is a syntactic unification problem.} \end{definition} Let $\Gamma = \{ s_1 \overset{?}=_E t_1,\ldots,s_n \overset{?}=_E t_n\}$ be a set of equations, and let $\theta$ be a substitution. \ignore{We say that $\theta$ satisfies $\Gamma$ modulo equational theory $E$ if $\theta$ is an \textit{E}-solution of each equation in $\Gamma$, that is,  $ s_i \theta =_E t_i\theta$ for $i=1,\ldots,n$.} We write $\theta \models_E \Gamma$ when $\theta$ is an \textit{E}-unifier of $\Gamma$. Let $\sigma = \{ x_1 \mapsto t_1,\ldots,x_n \mapsto t_n\}$ and $\theta$ be substitutions, and let $E$ be an equational theory. We say that $\theta$ satisfies $\sigma$ in the equational theory $E$ if $x_i \theta =_E t_i \theta $ for $i=1,\ldots,n$. We write this as $\theta \models_E \sigma$. \begin{definition}\emph{ Let $E$ be an equational theory and $\mathcal{X}$ be a set of variables. The substitution $\sigma$ is \textit{more general modulo E on $\mathcal{X}$} than $\theta$ iff there exists a substitution $\sigma'$ such that $x\theta =_E x \sigma \sigma'$ for all $x \in \mathcal{X}$. We write this as $\sigma\lesssim^\mathcal{X}_E \theta$.} \end{definition} \begin{definition}[Complete Set of \textit{E}-Unifiers] \emph{Let $\Gamma$ be an \textit{E}-unification problem over $\mathcal{F}$ and let $Var(\Gamma)$ be the set of all variables occurring in $\Gamma$.
A complete set of \textit{E}-unifiers of $\Gamma$ is a set $S$ of substitutions such that each element of $S$ is an \textit{E}-unifier of $\Gamma$, i.e., $S \subseteq \mathcal{U}_E(\Gamma)$, and for each $\theta \in \mathcal{U}_E(\Gamma)$ there exists a $\sigma \in S$ such that $\sigma$ is more general modulo $E$ on $Var(\Gamma)$ than $\theta$, i.e., $\sigma \lesssim^{Var(\Gamma)} _E\theta$.} \end{definition} A complete set $S$ of \textit{E}-unifiers is minimal if for any two distinct unifiers $\sigma$ and $\theta$ in $S$, one is not more general modulo $E$ than the other, i.e., $\sigma \lesssim^{Var(\Gamma)}_E \theta $ implies $\sigma = \theta$. A minimal complete set of unifiers for a syntactic unification problem $\Gamma$ has only one element if it is not empty. It is denoted by {$mgu(\Gamma)$} and is called the most general unifier of the unification problem $\Gamma$. \begin{definition} \emph{Let $E$ be an equational theory. We say that a multi-set of equations $\Gamma'$ is a conservative $E$-extension of another multi-set of equations $\Gamma$ if any solution of $\Gamma'$ is also a solution of $\Gamma$ and any solution of $\Gamma$ can be extended to a solution of $\Gamma'$. This means that for any solution $\sigma$ of $\Gamma$, there exists $\theta$ whose domain is the set of variables in $Var(\Gamma') \setminus Var(\Gamma)$ such that $\sigma\theta$ is a solution of $\Gamma'$. The property of being a conservative $E$-extension is transitive.} \end{definition} \par Let $\mathcal{F}$ be a signature, and $l,r$ be $\mathcal{F}$-terms. A \emph{rewrite rule} is an identity, denoted as $l \rightarrow r$, where $l$ is not a variable and $\mathit{Var}(r) \subseteq \mathit{Var}(l)$. A \emph{term rewriting system} (TRS) is a pair $(\mathcal{F}, R)$, where $R$ is a finite set of rewrite rules. In general, a TRS is represented by $R$.
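As a small illustration of rewrite rules in action, the following sketch (ours, with a hypothetical tuple encoding of terms) exhaustively applies the homomorphism rule $h(x_1+x_2) \rightarrow h(x_1) + h(x_2)$, i.e., the rewriting system $R_1$ introduced below, to compute a normal form:

```python
# Sketch only: terms are strings or tuples such as ('h', t) and ('+', l, r).
def normalize_r1(t):
    """Rewrite with h(x1 + x2) -> h(x1) + h(x2) until no redex remains."""
    if isinstance(t, str):
        return t
    args = tuple(normalize_r1(a) for a in t[1:])
    # A redex h(s1 + s2) at the root: push h inside and keep rewriting.
    if t[0] == 'h' and isinstance(args[0], tuple) and args[0][0] == '+':
        plus = args[0]
        return ('+', normalize_r1(('h', plus[1])), normalize_r1(('h', plus[2])))
    return (t[0],) + args
```

Since every step strictly moves $h$ symbols below $+$ symbols, this process terminates, matching the claim below that $R_1$ is convergent.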
A term $u$ rewrites to a term $v$ with respect to $R$, denoted by $u \rightarrow_R v$ (or simply $u \rightarrow v$), if there exist a position $p$ of $u$, a rule $l \rightarrow r \in R$, and a substitution $\sigma$ such that $u|_p = l\sigma$ and $v = u[r\sigma]_p$. A TRS $R$ is said to be \emph{terminating} if there is no infinite reduction sequence of the form $u_0 \rightarrow_R u_1 \rightarrow_R \ldots $. A TRS $R$ is \emph{confluent} if, whenever $u \rightarrow_R^{*} s_1$ and $u \rightarrow_R^{*} s_2$, there exists a term $v$ such that $s_1 \rightarrow_R^{*} v$ and $s_2 \rightarrow_R^{*} v$. A TRS $R$ is \emph{convergent} if it is both confluent and terminating. \subsection{ACh Theory} The equational theory we consider is the theory of a homomorphism over a binary function symbol $+$ that satisfies the properties of associativity and commutativity. We abbreviate this theory as ACh. The signature $\mathcal{F}$ includes a unary symbol $h$, a binary symbol $+$, and other uninterpreted function symbols with fixed arities.  \par The function symbols $h$ and $+$ in the signature $\mathcal{F}$ satisfy the following identities: \begin{itemize} \item$x + (y+z) \thickapprox (x+y)+z \text{ (Associativity, A for short)}$  \item $x + y \thickapprox y+x \text{ (Commutativity, C for short)}$  \item $h(x + y) \thickapprox h(x) + h(y) \text{ (Homomorphism, h for short)}$  \end{itemize}  \subsection{Rewriting Systems} We consider two convergent rewriting systems $R_1$ and $R_2$ for the homomorphism $h$ modulo associativity and commutativity: \begin{itemize} \item $R_1 := \{h(x_1+x_2) \rightarrow h(x_1) + h(x_2)\}$ and  \item $R_2 := \{h(x_1)+h(x_2) \rightarrow h(x_1+ x_2)\}$.
\end{itemize}  \subsection{h-Depth Set}  \label{sec:hdset}  For convenience, we assume that our unification problem is in {\em flattened} form, i.e., that every equation in the problem is in one of the following forms: $x \overset{?} = y$, $x \overset{?}= h(y)$, $x \overset{?}= y_1 + \cdots + y_n$, and $x \overset{?}= f ( x_1,\ldots, x_n)$, where $x$ and $y$ are variables, the $y_i$s and $x_i$s are pairwise distinct variables,  and $f$ is a free symbol with $n\geq0$. The first kind of equations are called {\em VarVar equations}.  The second kind are called {\em $h$-equations}. The third kind are called   {\em $+$-equations}. The fourth kind are called {\em free equations}.  \ignore{It is well-known that how to convert a unification problem into flattened form.} \begin{definition}[Graph $\mathbb{G}(\Gamma)$] \emph{Let $\Gamma$ be a unification problem. We define $\mathbb{G}(\Gamma)$ as the graph in which each node represents a variable in $\Gamma$ and each edge represents a function symbol in $\Gamma$. To be exact, if an equation $y\overset{?}=f(x_1,\ldots, x_n)$, where $f$ is a symbol with $n\geq1$, is in $\Gamma$ then the graph $\mathbb{G}(\Gamma)$ contains $n$ edges $y\overset{f}\rightarrow x_1,\ldots,y\overset{f}\rightarrow x_n$. For a constant symbol $c$, if an equation $y\overset{?}= c$ is in $\Gamma$ then the graph $\mathbb{G}(\Gamma)$ contains a vertex $y$. Finally, the graph $\mathbb{G}(\Gamma)$ contains two vertices $y$ and $x$ if an equation $y\overset{?} = x$ is in $\Gamma$.} \end{definition} \begin{definition}[h-Depth] \emph{Let $\Gamma$ be a unification problem and let $x$ be a variable that occurs in $\Gamma$. Let $h$ be a unary symbol and let $f$ be a symbol (distinct from $h$) with arity greater than or equal to 1 and occurring in $\Gamma$. We define the h-depth of a variable $x$ as the maximum number of $h$-symbols along a path to $x$ in $\mathbb{G}(\Gamma)$, and it is denoted by $h_d(x, \Gamma)$.
That is, $$h_d(x, \Gamma) := \max \{h_{dh}(x, \Gamma), h_{df}(x, \Gamma), 0 \}, $$ where $h_{dh}(x, \Gamma) := \max\{ 1+ h_d(y, \Gamma) \mid y\overset{h} \rightarrow x \text{ is an edge in } \mathbb{G}(\Gamma)\}$ and $h_{df}(x, \Gamma) := \max\{ h_d(y, \Gamma) \mid \text{there exists $f \neq h$ such that }y\overset{f}\rightarrow x \text{ is in } \mathbb{G}(\Gamma) \}$}.  \end{definition} \begin{definition}[h-Height] \ignore{Let $\Gamma$ be a unification problem and let $t$ be a term that occurs in $\Gamma$.} \emph{We define the h-height of a term $t$ as follows: $$h_h(t) := \left\{ \begin{array}{lcl} h_h(t') + 1& \mbox{if}& t = h(t') \\  \max\{h_h(t_1),\ldots, h_h(t_n) \} & \mbox{if} & t = f (t_1,\ldots,t_n), f\neq h\\  0 & \mbox{if}& t= x \,\text{or}\, c \end{array} \right.$$ where $f$ is a function symbol with arity greater than or equal to 1.}  \end{definition}  \ignore{Without loss of generality, we assume that h-depth and h-height is not defined for a variable that occurs on both sides of the equation. This is because the occur check rule--- which concludes the problem with no solution---presented in the next section has higher priority over the h-depth updating rules.}   \begin{definition}[h-Depth Set] \emph{Let $\Gamma$ be a set of equations. The h-depth set of $\Gamma$, denoted $h_{ds}(\Gamma)$, is defined as $h_{ds}(\Gamma) := \{ (x, h_{d}(x, \Gamma)) \mid x \text{ is a variable appearing in } \Gamma\}$. \ignore{Let $\mathcal{V}$ be a set of variables occurring in $\Gamma$.
We define a set h-depth set of $\Gamma$ whose elements are  pairs of a variable from $\mathcal{V}$ and a non-negative integer.}In other words, the elements of the h-depth set are of the form $(x,\,c)$, where $x$ is a variable that occurs in $\Gamma$ and $c$ is a natural number representing the h-depth of $x$.} \end{definition} The maximum value of an h-depth set $\triangle$ is the maximum of all its $c$ values, and it is denoted by $MaxVal(\triangle)$, i.e., $MaxVal(\triangle) := \max\{c \,|\, (x,\, c ) \in \triangle \text{ for some } x\}.$ \begin{definition}[\textit{ACh}-Unification Problem, Bounded \textit{ACh}-Unifier]  \emph{An \textit{ACh}-unification problem over $\mathcal{F}$ is a finite set of equations  $ \Gamma = \{ s_1 \overset{?}=_{ACh} t_1,\ldots,s_n \overset{?}=_{ACh} t_n\}, s_i , t_i \in \mathcal{T}( \mathcal{F}, \mathcal{V}),$ where $ACh$ is the equational theory defined above. A $\kappa$ bounded \textit{ACh}-unifier or $ \kappa$ bounded \textit{ACh}-solution of $\Gamma$ is a substitution $\sigma$ such that $s_i\sigma =_{ACh} t_i \sigma$, $h_h(s_i \sigma) \leq \kappa$, and $h_h( t_i\sigma) \leq \kappa$ for all $i$.} \par \emph{Notice that the bound $\kappa$ plays no role in the problem itself, only in its solutions.} \end{definition} \setlength{\fboxsep}{1pt} \section{Inference System $\mathfrak{I}_{ACh}$} \subsection{Problem Format} An inference system is a set of inference rules that transforms an equational unification problem into another. In our inference procedure, we use a set triple $ {\Gamma||\triangle||\sigma}$ similar to the format presented in~\cite{LL}, where $\Gamma$ is a unification problem modulo the ACh theory, $\triangle$ is an h-depth set of $\Gamma$, and $\sigma$ is a substitution. Let $\kappa \in \mathbb{N}$ be a bound on the h-depth of the variables.
A substitution $\theta$ satisfies the set triple $ {\Gamma||\triangle||\sigma}$ if $\theta$ satisfies $\sigma$ and every equation in $\Gamma$, and $MaxVal(\triangle) \leq \kappa$; we write this relation as $\theta \models {\Gamma||\triangle||\sigma}$. We also use a special set triple $\bot$ to denote that there is no solution in the inference procedure. Generally, the inference procedure is based on the priority of rules and also uses \textit{don't care} non-determinism when there is no priority, i.e., any rule may be applied from a set of rules without priority. Initially, $\Gamma$ is the non-empty set of equations to solve, $\triangle$ is an empty set, and $\sigma$ is the identity substitution. The inference rules are applied until either the set of equations is empty, yielding the most general unifier $\sigma$, or $\bot$ is reached, meaning that there is no solution. Of course, the substitution $\sigma$ is a $\kappa$ bounded \textit{E}-unifier of $\Gamma$. An inference rule is written as $ \frac{\Gamma||\triangle|| \sigma}{\Gamma'||\triangle'|| \sigma'}.$ This means that if something matches the top of this rule, then it is to be replaced with  the bottom of the rule. \ignore{In the proofs we will write inference rules as follows: $$\Gamma ||\triangle||\sigma \Rightarrow_{\mathfrak{I}_{ACh}} \{\Gamma_1 ||\triangle_1||\sigma_1, \cdots, \Gamma_n ||\triangle_n|\sigma_n\}$$ meaning to branch and replace the left hand side with one of the right hand sides in each branch. The only inference rule that has more than one branch is {\em AC Unification}. So we often just write inference rules as follows: $\Gamma ||\triangle||\sigma \Rightarrow_{\mathfrak{I}_{ACh}} \Gamma' ||\triangle'||\sigma'$.} \par  Let $\mathcal{OV}$ be the set of variables occurring in the unification problem $\Gamma$ and let $\mathcal{NV}$ be a new set of variables such that $\mathcal{NV}=\mathcal{V}\setminus\mathcal{OV}$.
Unless otherwise stated, we assume that $x,\,x_1,\ldots,x_n,\,\text{and}\,y,\,y_1,\ldots,y_n,\,z$ are variables in $\mathcal{V}$, that $v,\,v_1,\ldots,v_n$ are in $\mathcal{NV}$,  that $w, t,\,t_1,\ldots,t_n,\,s,\,s_1,\ldots,s_n$ are terms in $\mathcal{T}( \mathcal{F}, \mathcal{V})$, and that $f$ and $g$ are uninterpreted function symbols. \ignore{Recall that $h$ is a unary, and the associativity and the commutativity operator $+$. }A fresh variable is a variable that is generated by the current inference rule and has never been used before. \par For convenience, we assume that every equation in the problem is in one of the flattened forms (see Section~\ref{sec:hdset}). If not, we apply flattening rules to put the equations into that form. These rules are performed before any other inference rule. They put the problem into flattened form, and all the other inference rules leave the problem in flattened form, so there is no need to perform these rules again later. It is necessary to update the h-depth set $\triangle$ with the h-depth values for each variable during the inference procedure. \subsection{Inference Rules} \label{sec:inf:system} We present a set of inference rules to solve a unification problem modulo the associativity, commutativity, and homomorphism theory.\ignore{our inference procedure looking for the solutions within the given bound $\kappa$.} We also present some examples that illustrate the applicability of these rules. \subsubsection{Flattening} \par Firstly, we present a set of inference rules for flattening the given set of equations. The variable $v$ represents a fresh variable in the following rules.
\\ \\ \fbox{ \begin{minipage}{\textwidth} \textbf{Flatten Both Sides (FBS)}$$ \frac{\{ t_1 \overset{?}= t_2\}\,\cup \,\Gamma||\triangle|| \sigma }{\{ v \overset{?}= t_1,\,v \overset{?}= t_2\}\,\cup \,\Gamma||\{(v,\, 0)\}\cup\triangle|| \sigma }  \text{\hspace{1pt} if $t_1$ and $t_2$ $\notin\mathcal{V}$}$$  \end{minipage} } \\ \fbox{ \begin{minipage}{\textwidth} \textbf{Flatten Left $+$ (FL)} $$\frac{\{ t\overset{?}= t_1 + t_2\}\,\cup \,\Gamma||\triangle|| \sigma }{\{ t\overset{?}= v + t_2,\,v \overset{?}= t_1 \}\,\cup \,\Gamma||\{(v,\, 0)\}\cup\triangle|| \sigma }  \text{\hspace{35pt} if $t_1 \notin \mathcal{V}$}$$ \end{minipage} }\\ \\ \fbox{ \begin{minipage}{\textwidth} \textbf{Flatten Right $+$ (FR)} $$\frac{\{ t\overset{?}= t_1 + t_2\}\,\cup \,\Gamma||\triangle|| \sigma }{\{ t\overset{?}= t_1 + v,\,v \overset{?}= t_2\}\,\cup \,\Gamma||\{(v,\, 0)\}\cup\triangle|| \sigma }  \text{\hspace{35pt} if $t_2 \notin \mathcal{V}$}$$ \end{minipage} } \\ \\ \fbox{ \begin{minipage}{\textwidth} \textbf{Flatten Under $h$ (FU)} $$\frac{\{ t_1 \overset{?}= h(t)\}\,\cup \,\Gamma||\triangle|| \sigma }{\{ t_1\overset{?}= h(v),\,v \overset{?}= t \}\,\cup \,\Gamma||\{(v,\, 0)\}\cup\triangle|| \sigma }  \text{\hspace{35pt} if $t \notin \mathcal{V}$}$$ \end{minipage} } \\  \par We demonstrate the applicability of these rules using the example below. 
\begin{example} \label{ex:flatten} \emph{Solve the unification problem $\{h(h(x)) \overset{?}= (s + w)+(y+z) \}.$\\ \\ We only consider the set of equations $\Gamma$ here, not the full triple.\\ \\  $\{h(h(x)) \overset{?}= (s + w)+(y+z) \}\overset{FBS}\Rightarrow\\  \{ v \overset{?}= h(h(x)),\, v \overset{?}= (s + w)+(y+z) \} \overset{FL}\Rightarrow\\  \{v \overset{?}= h(h(x)),\,v \overset{?}= v_1 +(y+z),\, v_1 \overset{?}= s+w\} \overset{FL}\Rightarrow\\  \{v \overset{?}= h(h(x)),\,v \overset{?}= v_1 +(y+z),\, v_1 \overset{?}= v_2+w, v_2 \overset{?}=s\} \overset{FR}\Rightarrow\\  \{v \overset{?}= h(h(x)),\,v \overset{?}= v_1 + v_3,\, v_1 \overset{?}= v_2+w,\,v_3 \overset{?}= y+z, v_2 \overset{?}=s\}\overset{FR}\Rightarrow\\  \{v \overset{?}= h(h(x)),\,v \overset{?}= v_1 + v_3,\, v_1 \overset{?}= v_2+v_4,\,v_3 \overset{?}= y+z, v_2 \overset{?}=s, v_4 \overset{?}=w\} \overset{FU}\Rightarrow\\   \{v \overset{?}= h(v_5),\,v \overset{?}= v_1 + v_3,\, v_1 \overset{?}= v_2+v_4,\,v_3 \overset{?}= y+z, v_2 \overset{?}=s, v_4 \overset{?}=w, v_5 \overset{?}=h(x)\}.$\\  We see that each equation in the set  $ \{v \overset{?}= h(v_5),\,v \overset{?}= v_1 + v_3,\, v_1 \overset{?}= v_2+v_4,\,v_3 \overset{?}= y+z,\\ v_2 \overset{?}=s, v_4 \overset{?}=w, v_5 \overset{?}=h(x)\}$  is in the flattened form.}  \end{example}  \par  \noindent  \subsubsection{Update h-Depth Set}   \par  We also present a set of inference rules to update the h-depth set. These rules are performed eagerly. 
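Before giving the rules themselves, here is a sketch of how this eager bookkeeping can be realized (our illustration, not the paper's Maude code, using a hypothetical encoding of flattened equations: ('h', x, y) for $x \overset{?}= h(y)$ and ('+', x, [y_1, ..., y_n]) for $x \overset{?}= y_1 + \cdots + y_n$); h-depths are propagated to a fixpoint, and exceeding the bound $\kappa$ signals failure:

```python
# Sketch only (hypothetical encoding of flattened equations; see lead-in).
def update_depths(eqs, variables, kappa):
    """Propagate h-depths to a fixpoint; return None if the bound is exceeded."""
    d = {v: 0 for v in variables}
    changed = True
    while changed:
        changed = False
        for eq in eqs:
            if eq[0] == 'h':                 # x = h(y): require d(y) >= d(x)+1
                x, y = eq[1], eq[2]
                if d[y] < d[x] + 1:
                    d[y] = d[x] + 1
                    changed = True
            else:                            # x = y1+...+yn: require d(yi) >= d(x)
                x, ys = eq[1], eq[2]
                for y in ys:
                    if d[y] < d[x]:
                        d[y] = d[x]
                        changed = True
        if max(d.values()) > kappa:
            return None                      # bound kappa exceeded: no solution
    return d
```

On the flattened chain $x = h(v)$, $v = h(v_1)$, $v_1 = h(y)$, this assigns depths $0, 1, 2, 3$, matching the example above.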
\ignore{We apply these rules immediately after applying any other rule in the inference system.}\\ \\  \fbox{ \begin{minipage}{\textwidth} \textbf{Update $h$ (U$h$)} $$ \frac{ \{x \overset{?}= h(y)\}\,\cup \,\Gamma || \{(x,\,c_1),\,(y,\,c_2)\}\cup \triangle || \sigma}{ \{ x \overset{?}= h(y)\}\,\cup \,\Gamma || \{(x,\,c_1 ),\, (y,\,c_1 + 1)\}\cup \triangle || \sigma}  \text{\hspace{35pt} If $ c_2 < (c_1 + 1)$}$$  \end{minipage} }\\ \begin{example} \emph{Solve the unification problem: $\{x\overset{?}=h(h(h(y)))\}$.\\ \\ We only consider the pair $\Gamma||\triangle$ since $\sigma$ does not change at this step.\\ \\  $\{x\overset{?}=h(h(h(y)))\}|| \{(x,\,0),\,(y,\,0)\} \overset{FU^{+}}\Rightarrow \\    \{x\overset{?}=h(v),\,v\overset{?}= h(v_1),\, v_1\overset{?}=h(y) \}|| \{(x,\,0),\,(y,\,0),\,(v,\,0),\,(v_1,\,0)\} \overset{Uh}\Rightarrow\\ \{x\overset{?}=h(v),\,v\overset{?}= h(v_1),\, v_1\overset{?}=h(y) \}|| \{(x,\,0),\,(y,\,0),\,(v,\,1),\,(v_1,\,0)\}\overset{Uh}\Rightarrow\\ \{x\overset{?}=h(v),\,v\overset{?}= h(v_1),\, v_1\overset{?}=h(y) \}|| \{(x,\,0),\,(y,\,0),\,(v,\,1),\,(v_1,\,2)\}\overset{Uh}\Rightarrow\\ \{x\overset{?}=h(v),\,v\overset{?}= h(v_1),\, v_1\overset{?}=h(y) \}|| \{(x,\,0),\,(y,\,3),\,(v,\,1),\,(v_1,\,2)\},$\\ where $\overset{FU^{+}}\Rightarrow$ represents the application of the $FU$ rule one or more times.\\ \indent It is true that the h-Depth of $y$ is 3 since there are three edges labeled $h$ from $x$ to $y$ in the graph $\mathbb{G}(\Gamma)$.} \end{example} \noindent  \fbox{ \begin{minipage}{\textwidth} \textbf{Update $+$} \begin{enumerate} \item \textbf{Update Left $+$ (UL)} $$\frac{ \{x_1 \overset{?}= y_1 + y_2\}\,\cup \,\Gamma || \{(x_1,\,c_1),\, (y_1,\,c_2),\,
(y_2,\,c_3)\} \cup \triangle || \sigma}{ \{x_1 \overset{?}= y_1 + y_2\}\,\cup \,\Gamma || \{(x_1,\, c_1),\,(y_1,\,c_1),\, (y_2,\,c_3)\} \cup \triangle || \sigma}   \text{\hspace{5pt} If $ c_2 < c_1 $} $$ \item \textbf{Update Right $+$ (UR)} $$\frac{ \{x_1 \overset{?}= y_1 + y_2\}\,\cup \,\Gamma || \{(x_1,\,c_1),\, (y_1,\,c_2),\, (y_2,\,c_3)\} \cup \triangle || \sigma}{ \{x_1 \overset{?}= y_1 + y_2\}\,\cup \,\Gamma || \{(x_1,\, c_1),\, (y_1,\,c_2),\,(y_2,\,c_1)\} \cup \triangle || \sigma}   \text{\hspace{5pt} If $ c_3 < c_1 $}$$   \end{enumerate}     \end{minipage} }  \begin{example} \emph{Solve the unification problem $\{z\overset{?}= x + y,\, x_1 \overset{?}= h(h(z)) \}.$ \\ \\ Similar to the last example, we only consider the pair $\Gamma||\triangle,$\\ \\  $\{z\overset{?}= x + y,\, x_1 \overset{?}= h(h(z)) \}||\{(x,\,0),(y,\,0),(z,\,0),(x_1,\,0)\}\overset{FU}{\Rightarrow}\\    \{z\overset{?}= x + y,\, x_1 \overset{?}= h(v),\, v \overset{?}= h(z) \}||\{(x,\,0),(y,\,0),(z,\,0),(x_1,\,0),(v,\,0)\}\overset{Uh^{+}}\Rightarrow\\  \{z\overset{?}= x + y,\, x_1 \overset{?}= h(v),\, v \overset{?}= h(z) \}||\{(x,\,0),(y,\,0),(z,\,2),(x_1,\,0),(v,\,1)\}\overset{UL}{\Rightarrow}\\  \{z\overset{?}= x + y,\, x_1 \overset{?}= h(v),\, v \overset{?}= h(z) \}||\{(x,\,2),(y,\,0),(z,\,2),(x_1,\,0),(v,\,1)\}\overset{UR}{\Rightarrow}\\  \{z\overset{?}= x + y,\, x_1 \overset{?}= h(v),\, v \overset{?}= h(z) \}||\{(x,\,2),(y,\,2),(z,\,2),(x_1,\,0),(v,\,1)\}.$\\   \indent Since there are two edges labeled $h$ from $x_1$ to $z$ in the graph $\mathbb{G}(\Gamma)$, the h-Depth of $z$ is 2.  
The h-Depths of $x$ and $y$ are also updated accordingly.} \end{example} Now, we resume the inference procedure for Example~\ref{ex:flatten}; this time we also display $\triangle$, because it is updated at this step.\\  $\{v \overset{?}= h(v_3),\, v_3 \overset{?}= h(x),\,v \overset{?}= v_1 + v_2,\, v_1 \overset{?}= s+w,\,v_2 \overset{?}= y+z\}||\\\{(x,\,0),(y,\,0),(z,\,0),(s,\,0),(w,\,0),(v,\,0),(v_1,\,0),(v_2,\,0),(v_3,\,0)\} \overset{Uh}\Rightarrow\\ \{v \overset{?}= h(v_3),\, v_3 \overset{?}= h(x),\,v \overset{?}= v_1 + v_2,\, v_1 \overset{?}= s+w,\,v_2 \overset{?}= y+z\}||\\\{(x,\,1),(y,\,0),(z,\,0),(s,\,0),(w,\,0),(v,\,0),(v_1,\,0),(v_2,\,0),(v_3,\,0)\} \overset{Uh}\Rightarrow\\ \{v \overset{?}= h(v_3),\, v_3 \overset{?}= h(x),\,v \overset{?}= v_1 + v_2,\, v_1 \overset{?}= s+w,\,v_2 \overset{?}= y+z\}||\\\{(x,\,1),(y,\,0),(z,\,0),(s,\,0),(w,\,0),(v,\,0),(v_1,\,0),(v_2,\,0),(v_3,\,1)\} \overset{Uh}\Rightarrow\\ \{v \overset{?}= h(v_3),\, v_3 \overset{?}= h(x),\,v \overset{?}= v_1 + v_2,\, v_1 \overset{?}= s+w,\,v_2 \overset{?}= y+z\}||\\\{(x,\,2),(y,\,0),(z,\,0),(s,\,0),(w,\,0),(v,\,0),(v_1,\,0),(v_2,\,0),(v_3,\,1)\}.$\\ \par \noindent \subsubsection{Splitting Rule} \par This rule takes the homomorphism theory into account. In this theory, we cannot solve the equation $h(y)\overset{?}= x_1 + x_2$ unless $y$ can be written as the sum of two new variables, $y=v_1+v_2$, where $v_1$ and $v_2$ are in $\mathcal{NV}$. The rule below generalizes this to $n$ variables $x_1,\ldots,x_n$.\\ \\  \fbox{ \begin{minipage}{\textwidth} \textbf{Splitting}  $$\frac{\{x\overset{?}=h(y),x\overset{?}=x_1+ \cdots + x_n \}\cup \Gamma||\triangle|| \sigma}{\{x\overset{?}=h(y), y \overset{?}= v_1+ \cdots + v_n, x_1 \overset{?}= h(v_1), \ldots, x_n \overset{?}= h(v_n)\} \cup \Gamma||\triangle'|| \sigma}$$ where $n>1$, $x \neq y$ and $x \neq x_{i}$ for any $i$, $\triangle'= \{(v_1,\,0),\ldots,(v_n,\,0)\}\cup\triangle$, and $v_1,\ldots, v_n$ are fresh variables in $\mathcal{NV}$.
\end{minipage}   }\\  \begin{example} \emph{Solve the unification problem  $\{ h(h(x)) \overset{?}= y_1+y_2\}.$}\\ \par  \emph{Again, we only consider the pair $\Gamma || \triangle$, since the rules modifying $\sigma$ have not been introduced yet.}\\ \\ $ \{ h(h(x)) \overset{?}= y_1+y_2 \}||\{(x,\,0),(y_1,\,0),(y_2,\,0)\}\, \overset{FBS^{+}}\Rightarrow$ \\ $\{ v \overset{?}=h(v_{1}), \, v_{1}\overset{?}=h(x),\, v\overset{?}=y_{1}+y_{2} \}||\{(x,\,0), (y_{1}, \, 0), (y_{2}, \, 0), (v,\,0), (v_{1}, \,0)\} \, \overset{Uh^{+}}\Rightarrow$\\  $\{ v \overset{?}=h(v_{1}), \, v_{1}\overset{?}=h(x),\, v\overset{?}=y_{1}+y_{2} \}||\{(x,\,2),(y_1,\,0),(y_2,\,0),(v,\,0),(v_1,\,1)\} \,\overset{Splitting}\Rightarrow$\\      $\{ v \overset{?}=h(v_1),\, v_1 \overset{?}= v_{11} + v_{12},\, y_1\overset{?}= h(v_{11}),\, y_2 \overset{?}=h(v_{12}),\, v_1 \overset{?}= h(x) \, \}||\\\{(x,\,2),(y_1,\,0),(y_2,\,0),(v,\,0),(v_1,\,1),(v_{11},\,0),(v_{12},\,0)\} \,\overset{Uh^{+} }\Rightarrow$\\ $ \{ v \overset{?}=h(v_1),\, v_1 \overset{?}= v_{11} + v_{12},\, y_1\overset{?}= h(v_{11}),\, y_2 \overset{?}=h(v_{12}),\, v_1 \overset{?}= h(x) \, \}||\\\{(x,\,2),(y_1,\,0),(y_2,\,0),(v,\,0),(v_1,\,1),(v_{11},\,1),(v_{12},\,1)\} \,\overset{Splitting}\Rightarrow$\\   $ \{ v \overset{?}=h(v_1),\, y_1\overset{?}= h(v_{11}),\, y_2 \overset{?}=h(v_{12}),\, v_1 \overset{?}= h(x),\, x \overset{?}= v_{13} + v_{14}, \, v_{11}\overset{?}=h(v_{13}),\\v_{12}\overset{?}=h(v_{14}) \, \}||\{(x,\,2),(y_1,\,0),(y_2,\,0),(v,\,0),(v_1,\,1),(v_{11},\,1),(v_{12},\,1),(v_{13},\,0),(v_{14},\,0)\}\overset{Uh^{+}}\Rightarrow$\\  $ \{ v \overset{?}=h(v_1),\, y_1\overset{?}= h(v_{11}),\, y_2 \overset{?}=h(v_{12}),\, v_1 \overset{?}= h(x),\, x \overset{?}= v_{13} + v_{14}, \, v_{11}\overset{?}=h(v_{13}),\\v_{12}\overset{?}=h(v_{14}) \, \}||\{(x,\,2),(y_1,\,0),(y_2,\,0),(v,\,0),(v_1,\,1),(v_{11},\,1),(v_{12},\,1),(v_{13},\,2),(v_{14},\,2)\}$.
\end{example} \subsubsection{Trivial} \par The Trivial inference rule removes trivial equations from the given problem $\Gamma$.  \\ \\  \fbox{ \begin{minipage}{\textwidth} $$\frac{\{ t\overset{?}= t\}\,\cup \,\Gamma || \triangle || \sigma}{\Gamma || \triangle || \sigma} $$      \end{minipage}   }\\ \par \noindent \subsubsection{Variable Elimination (VE)} \par The Variable Elimination rule converts equations into assignments; in other words, it is used to build the most general unifier. \\ \\  \fbox{ \begin{minipage}{\textwidth} \begin{enumerate} \item \textbf{VE1}$$\frac{\{ x \overset{?}= y \}\,\cup \, \Gamma || \triangle || \sigma}{\Gamma \{x \mapsto y\} || \triangle || \sigma \{ x \mapsto y \} \cup \{x \mapsto y \} } \text{\hspace{25pt} if $x$ and $y$ are distinct variables} $$ \item \textbf{VE2} $$\frac{\{ x \overset{?}= t \}\,\cup \,\Gamma || \triangle || \sigma}{\Gamma \{ x \mapsto t\} || \triangle || \sigma \{ x \mapsto t\}\cup\{x \mapsto t\} } \text{\hspace{25pt} if $t \notin \mathcal{V}$ and $x$ does not occur in $t$}$$ \end{enumerate}   \end{minipage}   }\\ \par The rule VE1 is applied eagerly, whereas VE2 is applied last, after all other inference rules have been performed.
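Before the example, the combined effect of the two VE rules can be sketched in code. The following Python sketch is ours, not the paper's: terms are encoded as strings (variables) or tuples (applications), equations are assumed to be oriented, and `variable_elimination` moves solved equations from $\Gamma$ into $\sigma$, instantiating both as the rules prescribe.

```python
# A toy sketch (not the paper's implementation) of the Variable
# Elimination rules VE1/VE2.  A term is a variable (a string) or a
# tuple (function_symbol, arg_1, ..., arg_n); an equation is a pair
# (lhs, rhs) and is assumed to be already oriented, i.e. lhs is a
# variable whenever possible.

def apply_binding(term, x, t):
    """Replace every occurrence of variable x in term by t."""
    if isinstance(term, str):
        return t if term == x else term
    return (term[0],) + tuple(apply_binding(a, x, t) for a in term[1:])

def occurs(x, term):
    """Occur check: does variable x occur in term?"""
    if isinstance(term, str):
        return term == x
    return any(occurs(x, a) for a in term[1:])

def variable_elimination(gamma, sigma):
    """Exhaustively move solved equations x =? t from gamma into the
    substitution sigma, instantiating both (VE1 when t is a variable,
    VE2 otherwise)."""
    gamma, sigma = list(gamma), dict(sigma)
    while True:
        eq = next(((x, t) for (x, t) in gamma
                   if isinstance(x, str) and not occurs(x, t)), None)
        if eq is None:
            return gamma, sigma
        x, t = eq
        gamma = [(apply_binding(l, x, t), apply_binding(r, x, t))
                 for (l, r) in gamma if (l, r) != eq]
        sigma = {y: apply_binding(s, x, t) for y, s in sigma.items()}
        sigma[x] = t
```

On the problem $\{x \overset{?}= y,\, x \overset{?}= h(z)\}$ the sketch computes the unifier $\{x \mapsto h(z),\, y \mapsto h(z)\}$, in agreement with the example below.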
\begin{example} \emph{Solve the unification problem $\{ x \overset{?}= y ,\, x \overset{?}= h(z) \}$.} \\ \par\noindent  $\{ x \overset{?}= y ,\, x \overset{?}= h(z) \}||\{(x,\,0),\,(y,\,0),\,(z,\,0)\}||\emptyset \overset{Uh}\Rightarrow \\  \{ x \overset{?}= y ,\, x \overset{?}= h(z) \}||\{(x,\,0),\,(y,\,0),\,(z,\,1)\}||\emptyset \overset{VE1}\Rightarrow \\  \{y \overset{?}= h(z)\}||\{(x,\,0),\,(y,\,0),\,(z,\,1)\}||\{ x \mapsto y\}\overset{VE2}\Rightarrow\\  \emptyset||\{(x,\,0),\,(y,\,0),\,(z,\,1)\}||\{ x \mapsto h(z),\,y \mapsto h(z)\}.$\par \emph{The substitution $\{ x \mapsto h(z),\,y \mapsto h(z)\}$ is the most general unifier of the given problem $\{ x \overset{?}= y ,\,x \overset{?}= h(z) \}$.}  \end{example}  \par \noindent \subsubsection{Decomposition (Decomp)} \par  The Decomposition rule decomposes an equation into several sub-equations when the top symbols on both sides match.\\ \\   \fbox{ \begin{minipage}{\textwidth} \textbf{Decomp} $$\frac{\{ x \overset{?}= f( s_1,\ldots,s_n), x \overset{?}= f( t_1,\ldots,t_n)\}\cup \Gamma || \triangle || \sigma}{\{x \overset{?}= f( t_1, \ldots,t_n),\, s_1 \overset{?}= t_1,\ldots,s_n \overset{?}= t_n \} \, \cup \,\Gamma|| \triangle || \sigma} \text{\hspace{25pt} if $f \neq +$ }$$   \end{minipage}   }\\ \begin{example} \emph{Solve the unification problem $\{h(h(x)) \overset{?}= h(h(y))\}$.}\\ \\ $\{h(h(x)) \overset{?}= h(h(y))\}||\{(x,\,0),\,(y,\,0)\}||\emptyset \overset{Flatten^{+}}\Rightarrow \\ \{v \overset{?}= h(v_1),\,v_1 \overset{?}= h(x),\,v \overset{?}= h(v_2),\,v_2 \overset{?}= h(y)\}||\{(x,\,0),\,(y,\,0),(v,\,0),(v_1,\,0),(v_2,\,0)\}||\emptyset\overset{Uh^{+}}\Rightarrow\\ \{v \overset{?}= h(v_1),\,v_1 \overset{?}= h(x),\,v \overset{?}= h(v_2),\,v_2 \overset{?}= h(y)\}||\{(x,\,2),\,(y,\,2),(v,\,0),(v_1,\,1),(v_2,\,1)\}||\emptyset\overset{Decomp}\Rightarrow\\ \{v \overset{?}= h(v_1),\, v_1 \overset{?}= v_2,\, v_1 \overset{?}= h(x),\,v_2 \overset{?}= h(y)\}||\{(x,\,2),\,(y,\,2),(v,\,0),(v_1,\,1),(v_2,\,1)\}||\emptyset
\overset{VE1}\Rightarrow \{v \overset{?}= h(v_2),\, v_2 \overset{?}= h(x),\,v_2 \overset{?}= h(y)\}||\{(x,\,2),\,(y,\,2),(v,\,0),(v_1,\,1),(v_2,\,1)\}||\{v_1 \mapsto v_2\}\overset{Decomp}\Rightarrow \{v \overset{?}= h(v_2),\, v_2 \overset{?}= h(x),\,x\overset{?}= y\}||\{(x,\,2),\,(y,\,2),(v,\,0),(v_1,\,1),(v_2,\,1)\}||\{v_1 \mapsto v_2\}\overset{VE2^{+}}\Rightarrow\\ \emptyset||\{(x,\,2),\,(y,\,2),(v,\,0),(v_1,\,1),(v_2,\,1)\}||\{v_1 \mapsto h(y),\,x\mapsto y,\,v \mapsto h(h(y)),\,v_2 \mapsto h(y)\},$\\ \emph{where $\{x\mapsto y\}$ is the most general unifier of the problem $\{h(h(x)) \overset{?}= h(h(y))\}$.} \end{example} \par \noindent  \subsubsection{AC Unification}   \par  The AC Unification rule calls an AC unification algorithm to unify the AC part of the problem. Notice that we apply AC unification only once when no other rule except VE-2 can apply.  In this inference rule $\Psi$ represents the set of all equations with the $+$ symbol on the right hand side.  $\Gamma$ represents the set of equations not containing a $+$ symbol. \textit{Unify} is a function that returns  one of the complete set of unifiers returned by the AC unification algorithm. \textit{GetEqs} is a function that  takes a substitution and returns the equational form of that substitution. In other words,   $GetEqs(\{x_1 \mapsto t_1, \ldots, x_n \mapsto t_n\}) = \{x_1 \overset{?}= t_1, \ldots, x_n \overset{?}= t_n\}$. \\ \\    \fbox{ \begin{minipage}{\textwidth} \textbf{AC Unification} $$\frac{\Psi \cup \Gamma || \triangle || \sigma}{GetEqs(\theta_{1}) \cup \Gamma|| \triangle || \sigma \vee\ldots \vee GetEqs(\theta_{n}) \cup \Gamma|| \triangle || \sigma }$$ \text{where $\textit{Unify}(\Psi) = \{\theta_{1}, \ldots, \theta_{n}\}.$}   \end{minipage}   } \\ \\   We illustrate the applicability of the AC unification rule using the example below. For convenience, we only consider $\Gamma$ from the problem.   
\begin{example} \emph{Solve the unification problem   $\{x + y \overset{?}= z + y_{1}, \, x_{1} \overset{?}= x_{2} \}$, where $x, y, z, x_{1}, x_{2}$, and $y_{1}$ are pairwise distinct.\\ \\   $\{x + y \overset{?}= z + y_{1}, \, x_{1} \overset{?}= x_{2} \} \,\overset{FBS}\Rightarrow$    $\{ v \overset{?}= x + y, \, v \overset{?}= z + y_{1}\} \cup\{ x_{1} \overset{?}= x_{2} \}\,\overset{AC\,Unification}\Rightarrow$\\   $\{ v \overset{?}= c_1 + c_2 + c_3+c_{4} , \, x \overset{?}= c_{1} + c_{2},\, y \overset{?}= c_{3} + c_{4}, z \overset{?}= c_{1} + c_{3},\, y_{1} \overset{?}= c_{2} + c_{4} \} \cup\{ x_{1} \overset{?}= x_{2} \}\, \vee$\\   $\{ v \overset{?}= c + z + y,\, x \overset{?}= c + z, \, y_{1} \overset{?}= c + y \} \cup\{ x_{1} \overset{?}= x_{2} \}\, \vee$\\     $\{ v \overset{?}= z + c + y,\, x \overset{?}= z+c, \, y_{1} \overset{?}= c + y \} \cup\{ x_{1} \overset{?}= x_{2} \}\, \vee$\\     $\{ v \overset{?}= x + c + z,\, y \overset{?}= c+z, \, y_{1} \overset{?}= x+c \} \cup\{ x_{1} \overset{?}= x_{2} \}\, \vee$\\      $\{ v \overset{?}= x + z + c,\, y \overset{?}= z+c, \, y_{1} \overset{?}= x+c \} \cup\{ x_{1} \overset{?}= x_{2} \}\, \vee$\\      $\{ v \overset{?}= z + y_{1},\, x \overset{?}= z, \, y \overset{?}= y_{1} \} \cup\{ x_{1} \overset{?}= x_{2} \}\, \vee$\\      $\{ v \overset{?}= y_{1} + z,\, x \overset{?}= y_{1}, \, y \overset{?}= z \} \cup\{ x_{1} \overset{?}= x_{2} \},$\\      where $c, c_{1}, c_{2}, c_{3}$, and $c_{4}$ are constant symbols.}   \end{example} \ignore{Note that we have written the rule for one member of the complete set of AC unifiers of $\Psi$. This will branch on every member of the complete set of AC unifiers of $\Psi$.} \par \noindent \subsubsection{Occur Check (OC)} \par OC checks if a variable on the left-hand side of an equation occurs on the other side of the equation. If it does, then the problem has no solution. 
This rule has the highest priority.\\ \\    \fbox{ \begin{minipage}{\textwidth} \textbf{OC} $$\frac{\{ x \overset{?}= f( t_1,\ldots,t_n)\}\cup \Gamma || \triangle || \sigma}{ \bot} \text{\hspace{35pt} If $x \in \mathcal{V}ar(f( t_1,\ldots,t_n)\sigma)$}$$ where $\mathcal{V}ar(f( t_1,\ldots,t_n)\sigma)$ represents the set of all variables that occur in $f( t_1,\ldots,t_n)\sigma$.   \end{minipage}   } \begin{example} \emph{Solve the following unification problem $\{x \overset{?}= y, \, y \overset{?}= z + x \}.$\\ \\ $\{x \overset{?}= y, \, y \overset{?}= z+x\}||\{(x,\,0),(y,\,0),(z,\,0)\}||\emptyset \,\overset{VE1}\Rightarrow\\ \{ y \overset{?}= z + y\}||\{(x,\,0),(y,\,0),(z,\,0)\}||\{x \mapsto y \}\,\overset{OC}\Rightarrow\, \textit{Fail}.$\\ Hence, the problem $\{x \overset{?}= y, \, y \overset{?}= z + x\}$ has no solution.} \end{example} \par \noindent \subsubsection{Clash} \par This rule checks whether the \textit{top symbols} on both sides of an equation are the same. If they are not, then the problem has no solution, unless one of them is $h$ and the other is $+$.
\\ \\  \fbox{ \begin{minipage}{\textwidth} \textbf{Clash} $$\frac{\{ x \overset{?}= f( s_1,\ldots,s_m), \, x \overset{?}= g( t_1,\ldots,t_n)\}\cup \Gamma || \triangle || \sigma}{\bot}  \text{\hspace{25pt} If $f \neq g$, and $f \notin \{h,\,+\}$ or $g \notin \{h,\,+\}$ } $$    \end{minipage}   } \begin{example} \emph{Solve the unification problem $\{ f(x,\, y) \overset{?}= g(h(z)) \}$, where $f$ and $g$ are two distinct uninterpreted function symbols.\\ \\ $\{ f(x,\, y) \overset{?}= g(h(z)) \}||\{(x,\,0),(y,\,0),(z,\,0) \} ||\emptyset \overset{Flatten^{+}}\Rightarrow\\ \{ v \overset{?}= f(x,\, y),\,v \overset{?}= g(v_1), v_1 \overset{?}=h(z) \} ||\{(x,\,0),(y,\,0),(z,\,0),(v,\,0),(v_1,\,0) \} ||\emptyset \overset{Uh^{+}}\Rightarrow\\ \{ v \overset{?}= f(x,\, y),\,v \overset{?}= g(v_1),\, v_1 \overset{?}=h(z) \} ||\{(x,\,0),(y,\,0),(z,\,1),(v,\,0),(v_1,\,0) \} ||\emptyset \overset{Clash}\Rightarrow \textit{Fail}.$ Hence, the problem $\{ f(x,\, y) \overset{?}= g(h(z)) \}$ has no solution.} \end{example} \par \noindent \subsubsection{Bound Check (BC)} \par The Bound Check determines whether a solution can exist within the bound $\kappa$, a given maximum h-depth for any variable in $\Gamma$. If one of the h-depths in the h-depth set $\triangle$ exceeds the bound $\kappa$, then the problem has no solution.
We apply this rule immediately after the rules of update h-depth set.\\ \\  \fbox{ \begin{minipage}{\textwidth} \textbf{BC} $$ \frac{ \Gamma || \triangle || \sigma}{\bot}  \text{\hspace{35pt} If $MaxVal(\triangle) > \kappa$}$$   \end{minipage}   }\\ \begin{example}  \emph{Solve the following unification problem $\{ h(y) \overset{?}= y + x \}$.\\ \\  Let the bound be $\kappa =2$.\\    $\{ h(y) \overset{?}= y + x \}||\{(x,\,0),(y,\,0)\}||\emptyset\overset{FBS}\Rightarrow\\  \{ v\overset{?}= h(y),\,v \overset{?}= y + x\}||\{(x,\,0),(y,\,0),(v,\,0)\}||\emptyset\overset{Uh}\Rightarrow\\  \{ v\overset{?}= h(y),\,v \overset{?}= y + x\}||\{(x,\,0),(y,\,1),(v,\,0)\}||\emptyset\overset{Splitting}\Rightarrow\\    \{ v\overset{?}= h(y),\, y \overset{?}= v_{11} + v_{12},\, y \overset{?}= h(v_{11}),\, x \overset{?}= h(v_{12})||\{(x,\,0),(y,\,1),(v,\,0),(v_{11},\,0),(v_{12},\,0)\}||\emptyset \overset{Uh^{+}}\Rightarrow   \{ v\overset{?}= h(y),\, y \overset{?}= v_{11} + v_{12},\, y \overset{?}= h(v_{11}),\, x \overset{?}= h(v_{12})||\{(x,\,0),(y,\,1),(v,\,0),(v_{11},\,2),(v_{12},\,1)\}||\emptyset \overset{\scriptsize{Splitting}}\Rightarrow    \{ v\overset{?}= h(y),\, v_{11} \overset{?}= v_{13} + v_{14},\, v_{11} \overset{?}= h(v_{13}),\, v_{12}\overset{?}= h(v_{14}),\, y \overset{?}= h(v_{11}),\, x \overset{?}= h(v_{12})||\\\{(x,\,0),(y,\,1),(v,\,0),(v_{11},\,2),(v_{12},\,1),(v_{13},\,0),(v_{14},\,0)\}||\emptyset \overset{Uh^{+}}\Rightarrow\\   \{ v\overset{?}= h(y),\, v_{11} \overset{?}= v_{13} + v_{14},\, v_{11} \overset{?}= h(v_{13}),\, v_{12}\overset{?}= h(v_{14}),\, y \overset{?}= h(v_{11}),\, x \overset{?}= h(v_{12})||\\\{(x,\,0),(y,\,1),(v,\,0),(v_{11},\,2),(v_{12},\,1),  (v_{13},\,3),(v_{14},\,2)\}||\emptyset \overset{BC}\Rightarrow \textit{Fail}.$\\   Since $MaxVal(\triangle) = 3 > \kappa$, the problem $\{ h(y) \overset{?}= y + x\}$ has no solution within the given bound.} \end{example} \par \noindent \subsubsection{Orient} \par The Orient rule swaps the left side term of 
an equation with the right side term; in particular, it applies when the right side term is a variable but the left side term is not.\\ \\  \fbox{ \begin{minipage}{\textwidth} \textbf{Orient} $$ \frac{ \{t \overset{?}{=} x\} \cup\Gamma || \triangle || \sigma}{\{x \overset{?}{=} t\} \cup\Gamma || \triangle || \sigma}  \text{\hspace{35pt} If $t$ is not a variable}$$   \end{minipage}   }\\ \begin{algorithm} \caption{AChUnify} \paragraph{\textbf{Input:}} \begin{itemize} \item An equation set $\Gamma$, a bound $\kappa$, an empty set $\sigma$, and an empty h-depth set $\triangle$. \end{itemize} \paragraph{\textbf{Output:}} \begin{itemize} \item A complete set of $\kappa$-bounded $ACh$ unifiers $\{\sigma_1,\ldots, \sigma_n\}$ or $\bot$ indicating that the problem has no solution. \end{itemize} 1: Apply $Trivial$ to eliminate equations of the form $t \overset{?}{=} t$.\\ 2: Apply $OC$ to see if any variable on the left side occurs on the right. If yes, \\ \hspace*{0.4cm}then return $\bot$.\\ 3: Flatten the set of equations $\Gamma$ using the flattening rules.\\ 4: Update the h-depth set $\triangle$.\\ 5: Apply $BC$ to see if $MaxVal(\triangle) > \kappa$. If yes, then return $\bot$.\\ 6: Apply the $Orient$ rule.\\ 7: Apply the $Splitting$ rule.\\ 8: Apply the $Clash$ rule.\\ 9: Apply the $Decomposition$ rule.\\ 10: Apply the $AC\: Unification$ rule.\\ 11: Finally, apply the $Variable\,Elimination$ rule and get the output.\\ \end{algorithm} \section{Proof of Correctness} We prove that the proposed inference system is terminating, sound, and complete. \subsection{Termination} Before presenting the proof of termination, we introduce a few notations that will be used in the subsequent sections.
For two set triples, $ \Gamma||\triangle|| \sigma$ and $\Gamma'||\triangle'|| \sigma'$, \begin{itemize} \item $\Gamma||\triangle|| \sigma \Rightarrow_{\mathfrak{I}_{ACh}} \Gamma'||\triangle'|| \sigma'$ means that the set triple $\Gamma'||\triangle'|| \sigma'$ is deduced from $\Gamma||\triangle|| \sigma$ by applying a rule from $\mathfrak{I}_{ACh}$ once. We call this one step. \item $\Gamma||\triangle|| \sigma \overset{*}{\Rightarrow_{\mathfrak{I}_{ACh}}} \Gamma'||\triangle'|| \sigma'$ means that the set triple $\Gamma'||\triangle'|| \sigma'$ is deduced from $\Gamma||\triangle|| \sigma$ by zero or more steps. \item $\Gamma||\triangle|| \sigma \overset{+}{\Rightarrow_{\mathfrak{I}_{ACh}}} \Gamma'||\triangle'|| \sigma'$ means that the set triple $\Gamma'||\triangle'|| \sigma'$ is deduced from $\Gamma||\triangle|| \sigma$ by one or more steps. \end{itemize} As noted above, AC unification divides $\Gamma||\triangle|| \sigma$ into a finite number of branches $\Gamma_{1}||\triangle_{1}|| \sigma_{1}, \ldots, \Gamma_{n}||\triangle_{n}|| \sigma_{n}$. Hence, for a triple $\Gamma||\triangle|| \sigma$, after applying some inference rules, the result is a disjunction of triples $\bigvee_{i}(\Gamma_{i}||\triangle_{i}|| \sigma_{i})$. Accordingly, we introduce the following notation: \begin{itemize} \item $\Gamma||\triangle||\sigma {\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_{i}||\triangle_{i}|| \sigma_{i})$, where $\bigvee_{i}(\Gamma_{i}||\triangle_{i}|| \sigma_{i})$ is a disjunction of triples, means that the set triple $\Gamma||\triangle||\sigma$ becomes $\bigvee_{i}(\Gamma_{i}||\triangle_{i}|| \sigma_{i})$ by a single rule application. \item $\Gamma||\triangle||\sigma \overset{+} {\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_{i}||\triangle_{i}|| \sigma_{i})$ means that $\Gamma||\triangle||\sigma$ becomes $\bigvee_{i}(\Gamma_{i}||\triangle_{i}|| \sigma_{i})$ after one or more rule applications.
\item $\Gamma||\triangle||\sigma \overset{*} {\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_{i}||\triangle_{i}|| \sigma_{i})$ means that $\Gamma||\triangle||\sigma$ becomes $\bigvee_{i}(\Gamma_{i}||\triangle_{i}|| \sigma_{i})$ after zero or more rule applications. \end{itemize} \par Here, we define a measure of $\Gamma||\triangle||\sigma$ for proving termination: \begin{itemize} \ignore{\item Let $|\Gamma|$ be the cardinality of $\Gamma$. Since $|\Gamma|$ is a natural number, $|\Gamma|$ with $\leq$ on natural numbers is a well-founded ordering.} \item Let $Sym(\Gamma)$ be the multi-set of non-variable symbols occurring in $\Gamma$. Since $|Sym(\Gamma)|$ is a natural number, the standard ordering $\leq$ on natural numbers is a well-founded ordering on it. \ignore{\item Let $Top(\Gamma)$ be the set of all top symbols of $\Gamma$. Since $|Top(\Gamma)|$ is a natural number, $|Top(\Gamma)|$ with $\leq$ on natural numbers is a well-founded ordering.} \item Let $\kappa$ be a natural number. Let $\overline{h_{d}}(\Gamma):= \{ (\kappa+1) - h_{d}(x, \Gamma)\mid(x, h_{d}(x, \Gamma))\in h_{d}(\Gamma)\}$ be a multi-set. Since every element of this multi-set is a natural number, the multi-set order for $\overline{h_{d}}(\Gamma)$ is a well-founded ordering. \item Let $p$ be the number of non-solved variables in $\Gamma$.  \item Let $m$ be the number of equations of the form $f(t)\overset{?}=x$ in $\Gamma$. \item Let $n$ be the number of $+$-equations with $x$ occurring on the left side, i.e., $x \overset{?}= x_1+\cdots+x_n$. \end{itemize} \par Then we define the measure of $\Gamma||\triangle||\sigma$ as follows: $$\mathbb{M}_{\Iach}(\Gamma, \triangle, \sigma) = (n, |Sym(\Gamma)|, p, m, |\Gamma|, \overline{h_{d}}(\Gamma)).$$ Since each element in this tuple with its corresponding order is well-founded, the lexicographic order on this tuple is well-founded as well.
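To make the measure concrete, the following Python sketch computes the tuple $(n, |Sym(\Gamma)|, p, m, |\Gamma|, \overline{h_{d}}(\Gamma))$ on a toy encoding of flattened problems. The encoding is ours, $p$ is approximated by the number of distinct variables, and the multi-set component is linearized as a descending sorted list (a simplification) so that Python's lexicographic tuple comparison can stand in for the lexicographic order used here:

```python
# A sketch of the termination measure on a toy encoding (ours, not
# the paper's): Gamma is a list of flattened equations (lhs, rhs),
# Delta a dict {variable: h-depth}, kappa the bound.  Terms are
# variables (strings) or tuples (symbol, args...).

def symbols(term):
    """Multi-set (as a list) of non-variable symbols in a term."""
    if isinstance(term, str):
        return []
    return [term[0]] + [s for a in term[1:] for s in symbols(a)]

def variables(term):
    """Set of variables occurring in a term."""
    if isinstance(term, str):
        return {term}
    return {v for a in term[1:] for v in variables(a)}

def measure(gamma, delta, kappa):
    """The tuple (n, |Sym|, p, m, |Gamma|, h_d-bar), compared
    lexicographically by Python's tuple ordering."""
    n = sum(1 for _, r in gamma if not isinstance(r, str) and r[0] == '+')
    sym = sum(len(symbols(l)) + len(symbols(r)) for l, r in gamma)
    p = len({v for l, r in gamma for v in variables(l) | variables(r)})
    m = sum(1 for l, _ in gamma if not isinstance(l, str))
    hbar = sorted(((kappa + 1) - d for d in delta.values()), reverse=True)
    return (n, sym, p, m, len(gamma), hbar)
```

For instance, an Update $h$ step only raises an h-depth in $\triangle$, which shrinks the last component, and removing an equation shrinks $|Sym(\Gamma)|$ and $|\Gamma|$; in both cases the tuple strictly decreases.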
\ignore{As we apply the rules of flattening whenever needed, we assume that the equations of $\Gamma$ are in flattened form.} We show that $\mathbb{M}_{\Iach}(\Gamma, \triangle, \sigma)$ decreases with the application of each rule of the inference system $\mathfrak{I}_{ACh}$ except AC unification. The reader can find the proof of termination of AC unification in~\cite{FF}. \begin{lemma} \label{lem:ter1} \emph{Let $\Gamma||\triangle||\sigma$ and $\Gamma'||\triangle'||\sigma'$ be two set triples, where $\Gamma$ and $\Gamma'$ are in flattened form, such that $\Gamma||\triangle||\sigma \Rightarrow_{\mathfrak{I}_{ACh}} \Gamma'||\triangle'||\sigma'$. Then $\mathbb{M}_{\Iach}(\Gamma, \triangle, \sigma) > \mathbb{M}_{\Iach}(\Gamma', \triangle', \sigma')$.} \end{lemma} \begin{proof} \textbf{Trivial.} The cardinality of $\Gamma$, $|\Gamma |$, decreases while every other component of the measure either stays the same or decreases. Hence, $\mathbb{M}_{\Iach}(\Gamma, \triangle, \sigma) > \mathbb{M}_{\Iach}(\Gamma', \triangle', \sigma')$.\\ \textbf{Decomposition.} The number of $f$ symbols decreases by one, and hence $|Sym(\Gamma)|$ decreases while $n$ stays the same. Hence, $\mathbb{M}_{\Iach}(\Gamma, \triangle, \sigma) > \mathbb{M}_{\Iach}(\Gamma', \triangle', \sigma')$.\\ \textbf{Update h-Depth Set.} An application of one of the update rules increases the h-depth of some variable $x$ from $c$ to $c+1$. Since $(\kappa+1) - c > (\kappa+1) - (c+1)$, the multi-set $\overline{h_{d}}(\Gamma)$ decreases while the other components stay the same. Hence, $\mathbb{M}_{\Iach}(\Gamma, \triangle, \sigma) > \mathbb{M}_{\Iach}(\Gamma', \triangle', \sigma')$.\\ \noindent \textbf{Splitting.} On the application of the Splitting rule, $n$, the number of $+$-equations with $x$ on the left side, decreases by one. So, $\mathbb{M}_{\Iach}(\Gamma, \triangle, \sigma) > \mathbb{M}_{\Iach}(\Gamma', \triangle', \sigma')$.\\ \textbf{Orient.} It is not difficult to see that $m$ decreases.
\\ \textbf{Variable elimination.} Of course, the number of non-solved variables, $p$, decreases on the application of this rule.\\ \end{proof} \begin{theorem}[\textbf{Termination}] \emph{For any set triple $\Gamma||\triangle||\sigma$, there is a set triple $\Gamma'||\triangle'||\sigma'$ such that $\Gamma||\triangle|| \sigma \overset{*}{\Rightarrow_{\mathfrak{I}_{ACh}}} \Gamma'||\triangle'|| \sigma'$ and none of the rules of $\mathfrak{I}_{ACh}$ can be applied to $\Gamma'||\triangle'||\sigma'$.} \end{theorem} \begin{proof} This theorem follows from Lemma~\ref{lem:ter1} by induction. \ignore{First notice that at some point all the Decomposition rules not involving $h$ will eventually be performed. That is because when we perform Decomposition on the top symbol $f$, one occurrence of $f$ disappears, and none of the rules can make them come back. So from now on, we assume that all such rules have been performed. Let us call $x \overset{?}= h(y)$ an {\em $h$-rule of depth $i$} if $h_d(x) = i$. A splitting rule involving $x \overset{?}= h(y)$ and $x \overset{?}= v_1 +\cdots+ v_n$  is called a {\em splitting rule of depth $i$}. We say that the variable $x$ is split. A Decomposition rule involving $x \overset{?}= h(y)$ and $x \overset{?}= h(z)$ is called an {\em $h$-Decomp rule of depth $i$}. We say that $x$ is $h$-decomposed.   We first show that at some point all splitting rules of depth 0 and $h$-Decomp rules of depth 0 will have been performed. We notice that after AC Unification is called for the first time, any variable appearing in a VarVar equation or a $+$-equation will either appear exactly once in the left-hand side of one of those equations and never on the right-hand side, or else it will never appear on the left-hand side of one of those equations. This is because the AC unification rule creates equations from substitutions, which have this property. Also, the AC unification rule, the VE1 rule, and the Trivial rule will not change this property.
Therefore when a variable $x$ is split, all of the occurrences of $x$ in $+$ equations and VarVar equations disappear. If $x$ has depth 0, then it cannot occur on the right-hand side of an $h$-equation. So after this split, variable $x$ cannot be split anymore. Also, $h$ can then only be $h$-decomposed a finite number of times, because each time eliminates an $h$-equation with $x$ on the left-hand side, and no new ones of depth 0 can be created. We want to show by induction that all splits and $h$-Decomps will eventually be performed. Suppose that all of them at depth $i$ has been performed at some point. We will show that at some point all of them at depth $i+1$ will be performed.  Again we notice that after AC Unification is called for the first time, any variable appearing in a VarVar equation or a $+$-equation will either appear exactly once in the left-hand side of one of those equations and never on the right-hand side, or else it will never appear on the left-hand side of one of those equations. This is because the AC unification rule creates equations from substitutions, which have this property. Also, the AC unification rule, the VE1 rule, and the Trivial rule will not change this property. Therefore when a variable $x$ is split, all of the occurrences of $x$ in $+$ equations and VarVar equations disappear. If $x$ has depth $i+1$, then it cannot occur on the right-hand side of an $h$-equation that can possibly be split again. This is a result of our induction hypothesis. So after this split, variable $x$ cannot be split anymore. Also, $h$ can then only be $h$-decomposed a finite number of times, because each time eliminates an $h$-equation with $x$ on the left-hand side, and no new ones of depth $i+1$ can be created.    Because of our Bound Check rule, all splits and $h$-Decomp rules will eventually be performed. From now on, we assume that they have all been performed. Assume all VE1 rules and Trivial rules that currently exist been performed. 
Then suppose we perform AC Unification. This will not create any applications of Trivial or VE1. Therefore the process will be finished here. The only thing left is the performance of VE2 rules at the end, which trivially halts because they reduce the number of equations. } \end{proof} \subsection{Soundness} \noindent In this section, we show that our inference system $\mathfrak{I}_{ACh}$ is truth-preserving. \begin{lemma}\label{lem:soundness1} \emph{Let $\Gamma||\triangle||\sigma$ and $\Gamma'||\triangle'||\sigma'$ be two set triples such that $\Gamma||\triangle||\sigma \Rightarrow_{\mathfrak{I}_{ACh}} \Gamma'||\triangle'||\sigma'$ via any of the rules of $\mathfrak{I}_{ACh}$ except AC unification. Let $\theta$ be a substitution such that $\theta \models \Gamma'||\triangle'||\sigma'$. Then $\theta \models \Gamma ||\triangle||\sigma$.} \end{lemma} \begin{proof} \textbf{Trivial.} It is trivially true.\\ \ignore{Splitting.  $$\frac{\{w\overset{?}=h(y),w\overset{?}=x_1+ \cdots + x_n \}\cup \Gamma||\triangle|| \sigma}{\{w\overset{?}=h(y), y \overset{?}= v_1+ \cdots + v_n, x_1 \overset{?}= h(v_1), \ldots, x_n \overset{?}= h(v_n)\} \cup \Gamma||\triangle'|| \sigma}$$ where $n>1$, $y \neq w$, $\triangle'= \{(v_1,\,0),\ldots,(v_n,\,0)\}\cup\triangle\}$, and $v_1,\ldots, v_n$ are fresh variables in $\mathcal{NV}$. \ignore{$$\frac{\{w\overset{?}=h(y),w\overset{?}=x_1+x_2 \}\cup \Gamma||\triangle|| \sigma}{\{w\overset{?}=h(y), y \overset{?}= v_1+v_2, x_1 \overset{?}= h(v_1), x_2 \overset{?}= h(v_2)\} \cup \Gamma||\triangle'|| \sigma}$$}} \textbf{Splitting.} Let $\theta$ be a substitution. Assume that $\theta$ satisfies $\{w\overset{?}=h(y), y \overset{?}= v_1+ \cdots + v_n, x_1 \overset{?}= h(v_1), \ldots, x_n \overset{?}= h(v_n)\} \cup \Gamma$. Then we have that $w\theta\overset{?}=h(y)\theta$, $y\theta\overset{?}= (v_1+\cdots +v_n)\theta$, $x_1\theta \overset{?}= h(v_1)\theta$, \ldots, $x_n \theta \overset{?}= h(v_n)\theta$.
This implies that $w\theta\overset{?}=h(y\theta)$, $y\theta\overset{?}= v_1\theta+\cdots+v_n\theta$, $x_1\theta \overset{?}= h(v_1\theta)$, \ldots, $x_n \theta \overset{?}= h(v_n\theta)$. In order to prove that $\theta$ satisfies $\{w\overset{?}=h(y),w\overset{?}=x_1+\cdots+x_n \}$, it is enough to prove that $\theta$ satisfies the equation $w\overset{?}=x_1+\cdots+x_n$. Applying the substitution to the right side term $x_1+\cdots+x_n$, we get $(x_1+\cdots+x_n)\theta \overset{?}= x_1\theta +\cdots+ x_n \theta \overset{?}= h(v_1\theta) +\cdots+ h(v_n\theta)$. By the homomorphism theory, we write that $h(v_1\theta) +\cdots+ h(v_n\theta) \overset{?}= h(v_1\theta +\cdots+ v_n \theta)$. Then $h(v_1\theta +\cdots+ v_n \theta)\overset{?}= h(y\theta)\overset{?}= w\theta$. Hence, $\theta$ satisfies $w\overset{?}=x_1+\cdots+x_n$.\\ \textbf{Variable Elimination.}\\ {\textbf{VE1.}} Assume that $\theta \models \Gamma\{x \mapsto y\} || \triangle || \sigma\{ x \mapsto y\} \cup \{x \mapsto y \}$. This means that $\theta$ satisfies $\Gamma\{x \mapsto y\}$ and $\sigma \{ x \mapsto y\} \cup \{x \mapsto y \}$. Now, we have to prove that $\theta$ satisfies $\{ x \overset{?}= y \},\Gamma,$ and $\sigma$. Since $\theta$ satisfies $x \mapsto y$, we have $x \theta \overset{?}= y\theta$. The sets $\Gamma$ and $\sigma$ differ from $\Gamma\{x \mapsto y\}$ and $\sigma\{x \mapsto y\}$ only in that occurrences of $x$ have not been replaced by $y$; since $x\theta \overset{?}= y\theta$, the substitution $\theta$ satisfies an equation or assignment of the former if and only if it satisfies the corresponding one of the latter. Hence, we conclude that $\theta$ satisfies $\Gamma$ and $\sigma$.\\ \textbf{VE2.} We have that $\theta$ satisfies $\Gamma\{ x \mapsto t\}$ and $\sigma\{ x \mapsto t\} \cup \{x \mapsto t \}$. Now, we have to prove that $\theta$ satisfies $\{ x \overset{?}= t \}$ and $\sigma$. Since $\theta$ satisfies $x \mapsto t$, we have $x \theta \overset{?}= t\theta$, so it is enough to prove that $\theta$ satisfies $\sigma$. Let $w \mapsto s[x]$ be an assignment in $\sigma$.
After applying $x \mapsto t$ to $\sigma$, the assignment $w \mapsto s$ with $s|_p=x$, where $p$ is a position, becomes $w \mapsto s[t]_{p}$. We also know that $\theta$ satisfies $\sigma\{ x \mapsto t\}$, which implies that $\theta$ also satisfies $w \mapsto s[t]_{p}$. Then, by the definition, we write that $w \theta \overset{?}= s[t\theta]_{p} \overset{?}= s[x\theta]_{p}$. This means that $\theta$ satisfies the assignment $w \mapsto s[x]$. Hence, $\theta$ satisfies $\sigma$.\\ \noindent \textbf{Decomposition.} Assume that $\theta \models \{x \overset{?}= f( t_1, \ldots,t_n),\, s_1 \overset{?}= t_1,\ldots,s_n \overset{?}= t_n \} \, \cup \,\Gamma|| \triangle || \sigma.$ This means that $\theta$ satisfies $\{x \overset{?}= f( t_1, \ldots,t_n),\, s_1 \overset{?}= t_1,\ldots,s_n \overset{?}= t_n \} \, \cup \,\Gamma$. Now we have to prove that $\theta$ satisfies $\{ x \overset{?}= f( s_1, \ldots,s_n), x \overset{?}= f( t_1, \ldots,t_n)\}\cup \Gamma $. Given that $\theta$ satisfies $x \overset{?}= f( t_1, \ldots, t_n)$, it is enough to show that $\theta$ also satisfies $x \overset{?}=f( s_1, \ldots,s_n)$. We write $x \theta \overset{?}= f( t_1, \ldots,t_n)\theta \overset{?}= f( t_1\theta, t_2\theta,\ldots,t_n\theta) \overset{?}=  f( s_1\theta, s_2\theta,\ldots,s_n\theta)$ since $s_1 \theta \overset{?}= t_1 \theta ,\,\ldots,s_n \theta \overset{?}= t_n \theta$. So, $\theta$ satisfies $x \overset{?}= f( t_1, \ldots,t_n)$ and $x \overset{?}=f( s_1, \ldots,s_n)$. Hence, $\theta \models \{x \overset{?}=f( s_1, \ldots,s_n),\, x \overset{?}= f( t_1, \ldots,t_n)\}$.\\ \end{proof} \begin{lemma} \label{lem:soundness2} \emph{Let $\Gamma||\triangle||\sigma$ be a set triple such that \\$\Gamma||\triangle||\sigma {\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ via AC unification. Let $\theta$ be a substitution such that $\theta \models \Gamma_i||\triangle_i||\sigma_i$ for some $i$.
Then $\theta \models \Gamma ||\triangle||\sigma$.} \end{lemma} \begin{proof} \textbf{AC Unification.} $$\frac{\Psi \cup \Gamma || \triangle || \sigma}{GetEqs(\theta_{1}) \cup \Gamma|| \triangle || \sigma \vee\ldots \vee GetEqs(\theta_{n}) \cup \Gamma|| \triangle || \sigma }$$ Assume that $\theta \models GetEqs(\theta_{j}) \cup \Gamma|| \triangle || \sigma$ for some $j \in \{1,\ldots,n\}$. Since $\theta_{j}$ is a unifier of $\Psi$ computed by the AC unification algorithm, any substitution satisfying $GetEqs(\theta_{j})$ also satisfies $\Psi$. Hence, $\theta$ satisfies $\Psi \cup \Gamma$, that is, $\theta \models \Psi \cup \Gamma || \triangle || \sigma$. \end{proof} By combining Lemma~\ref{lem:soundness1} and Lemma~\ref{lem:soundness2}, we have: \begin{lemma}\label{lem:soundness3} \emph{Let $\Gamma||\triangle||\sigma$ be a set triple and $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ a disjunction of set triples such that \\$\Gamma||\triangle||\sigma {\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$. Let $\theta$ be a substitution such that $\theta \models \Gamma_i||\triangle_i||\sigma_i$ for some $i$. Then $\theta \models \Gamma ||\triangle||\sigma$.} \end{lemma} Then, by induction using Lemma~\ref{lem:soundness3}, we get the following theorem: \begin{theorem}\label{thm:soundness} \emph{Let $\Gamma||\triangle||\sigma$ be a set triple and $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ a disjunction of set triples such that \\$\Gamma||\triangle||\sigma \overset{*}{\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$. Let $\theta$ be a substitution such that $\theta \models \Gamma_i||\triangle_i||\sigma_i$ for some $i$. Then $\theta \models \Gamma ||\triangle||\sigma$.} \end{theorem} We have the following corollary from Theorem~\ref{thm:soundness}: \begin{theorem}[\textbf{Soundness}] \emph{Let $\Gamma$ be a set of equations.
Suppose that we get $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ after exhaustively applying the rules from $\mathfrak{I}_{ACh}$ to $\Gamma||\triangle||\sigma$, i.e., $\Gamma||\triangle||\sigma \overset{*}{\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$, where for each $i$, no rule is applicable to $\Gamma_i||\triangle_i||\sigma_i$. Let $\Sigma = \{\sigma_{i} \mid \Gamma_{i} = \emptyset\}$. Then any member of $\Sigma$ is an $ACh$-unifier of $\Gamma$.} \end{theorem} \subsection{Completeness} Before proving the completeness of our inference system, we present the following definition: \begin{definition}[Directed conservative extension] \emph{Let $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ and $\bigvee_{i}(\Gamma'_i||\triangle'_i||\sigma'_i)$ be two disjunctions of set triples. $\bigvee_{i}(\Gamma'_i||\triangle'_i||\sigma'_i)$ is called a directed conservative extension of $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ if, for any substitution $\theta$ such that $\theta \models \Gamma_i||\triangle_i||\sigma_i$ for some $i$, there exist an index $k$ and a substitution $\rho$, whose domain is the set of variables $Var(\Gamma'_{k})\setminus Var(\Gamma_{k})$, such that $\theta\rho \models \Gamma'_k||\triangle'_k||\sigma'_k$. If $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ (resp. $\bigvee_{i}(\Gamma'_i||\triangle'_i||\sigma'_i)$) only contains one set triple $\Gamma||\triangle||\sigma$ (resp. $\Gamma||\triangle||\sigmap$), we say that $\bigvee_{i}(\Gamma'_i||\triangle'_i||\sigma'_i)$ (resp. $\Gamma||\triangle||\sigmap$) is a directed conservative extension of $\Gamma||\triangle||\sigma$.} \end{definition} Next, we show that our inference procedure never loses any solution. \begin{lemma}\label{lem:comACU} \emph{Let $\Gamma||\triangle||\sigma$ be a set triple.
If there exists a set triple $\Gamma'||\triangle||\sigmap$ such that $\Gamma||\triangle||\sigma \Rightarrow_{\mathfrak{I}_{ACh}} \Gamma'||\triangle||\sigmap$ via any of the rules of $\mathfrak{I}_{ACh}$ except AC unification, then $\Gamma'||\triangle||\sigmap$ is a directed conservative extension of $\Gamma||\triangle||\sigma$. } \end{lemma} \begin{proof} \textbf{Trivial.} It is trivially true.\\  \noindent \textbf{Occur Check.} In the homomorphism theory, no term can be equal to a proper subterm of itself. This is because the number of $+$ symbols and the h-depth of each variable stay the same under applications of the homomorphism equation $h(x_1+\cdots+x_n)\overset{?}= h(x_1)+\cdots+h(x_n)$. So, the given problem has no solution in the homomorphism theory.\\ \textbf{Bound Check.} There exists a variable $y$ with h-depth $\kappa+1$ in the graph, that is, there is a variable $x$ above $y$ with $\kappa+1$ h-symbols below it. Let $\theta$ be a solution of the unification problem $\Gamma$. Then the term $x\theta$ has h-height $\kappa +1$, but the term $x\theta$ is also a subterm of some $s_i\theta$ or $t_i \theta$ in the original unification problem. Hence, the unification problem $\Gamma$ has no solution within the given bound $\kappa$.\\ \ignore{Variable Elimination. It is trivially true.\\} \textbf{Clash.} We do not have a rewrite rule that deals with the uninterpreted function symbols, i.e., the function symbols which are not in $\{ h,\,+\}$. So the given problem has no solution.\\ \textbf{Splitting.} We have to make sure that we never lose any solution with this rule. Here we consider the rewrite system $R_1$, which has the rewrite rule $h(x_1+\cdots+x_n) \rightarrow h(x_1)+\cdots+h(x_n)$. In order to apply this rule, the term under the $h$ should be a sum of $n$ variables.
The problem $\{h(y) \overset{?}= x_1 +\cdots+x_n\}$ is replaced by the set $\{h(v_1+\cdots+v_n) \overset{?}= x_1 +\cdots+x_n\}$ together with the substitution $\{y\mapsto v_1 + \cdots+v_n \}$. Reducing in $R_1$ then yields the equation $h(v_1)+\cdots+h(v_n) \overset{?}= x_1 +\cdots+x_n$ and the substitution $\{y\mapsto v_1 +\cdots+ v_n, x_1 \mapsto h(v_1),\ldots, x_n \mapsto h(v_n)\}$. Hence, we never lose any solution here. \\ \textbf{Decomposition.} If $f$ is the top symbol on both sides of an equation, where $f\neq h$ and $f\neq +$, then there is no rule other than Decomposition to solve it. So, we never lose any solution.\\ \par To cover the case where the top symbol on both sides of an equation is $h$, we consider the rewrite system $R_2$, which has the rewrite rule $h(x_1)+h(x_2) \rightarrow h(x_1+x_2)$. In the homomorphism theory with the rewrite system $R_2$, we cannot reduce the term $h(t)$. So, we solve an equation of the form $h(t_1) \overset{?}=h(t_2)$ only with the Decomposition rule. Hence, we never lose any solution here either. \end{proof} \begin{lemma}\label{lem:ACU} \emph{Let $\Gamma||\triangle||\sigma$ be a set triple. If there exists a disjunction of set triples $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ such that $\Gamma||\triangle||\sigma {\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ via AC unification, then $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ is a directed conservative extension of $\Gamma||\triangle||\sigma$. } \end{lemma} \begin{proof} Since the built-in AC unification algorithm is complete, we never lose any solutions on the application of this rule. \ignore{Assume that there are equations in $\Psi$ that contain both $h$ and $+$ symbols. Then the built-in unification algorithm on $\Psi$ may lose solutions since $h$ is considered as an uninterpreted symbol.
However, the missing solutions are regained on the other branch of the unification problem.} \end{proof} By combining Lemma~\ref{lem:comACU} and Lemma~\ref{lem:ACU}, we have: \begin{lemma}\label{lem:1step} \emph{Let $\Gamma||\triangle||\sigma$ be a set triple. If there exists a disjunction of set triples $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ such that $\Gamma||\triangle||\sigma {\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$, then $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ is a directed conservative extension of $\Gamma||\triangle||\sigma$. } \end{lemma} By induction using Lemma~\ref{lem:1step}, we get: \begin{theorem}\label{lem:oneormore} \emph{Let $\Gamma||\triangle||\sigma$ be a set triple. If there exists a disjunction of set triples $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ such that $\Gamma||\triangle||\sigma \overset{+}{\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$, then $\bigvee_{i}(\Gamma_i||\triangle_i||\sigma_i)$ is a directed conservative extension of $\Gamma||\triangle||\sigma$. } \end{theorem} We get the following corollary from the above theorem: \begin{theorem}[\textbf{Completeness}] \label{thm:completeness} \emph{Let $\Gamma$ be a set of equations. Suppose that we get $\bigvee_{i}(\Gamma_{i}||\triangle_{i}|| \sigma_{i})$ after applying the rules from $\mathfrak{I}_{ACh}$ to $\Gamma||\triangle||\sigma$ exhaustively, that is, \\ $\Gamma||\triangle||\sigma \overset{*}{\scriptstyle \implies_{\mathfrak{I}_{ACh}}} \bigvee_{i}(\Gamma_{i}||\triangle_{i}|| \sigma_{i})$, where for each $i$, no rule is applicable to $\Gamma_{i}||\triangle_{i}|| \sigma_{i}$. Let $\Sigma = \{\sigma_{i} \mid \Gamma_{i} = \emptyset\}$.
Then for any $ACh$-unifier $\theta$ of $\Gamma$, there exists a $\sigma \in \Sigma$ such that $\sigma \lesssim^{Var(\Gamma)} _{ACh}\theta$.} \end{theorem} \section{Implementation} We have implemented the algorithm in the Maude programming language\footnote{\url{http://maude.cs.illinois.edu/w/index.php/The_Maude_System}}. The implementation of this inference system is available\footnote{\url{https://github.com/ajayeeralla/Unification_ACh}}. We chose the Maude language because it provides a convenient environment for expressing the inference rules of this algorithm. \ignore{and the implementation of this algorithm will be integrated into the Maude-NPA\footnote{\url{http://maude.cs.illinois.edu/w/index.php/Maude_Tools:_Maude-NPA}}, a protocol analyzer written in Maude and developed by Naval Research Laboratory (NRL), USA, at some time.}The system specifications of this implementation are Ubuntu 14.04 LTS, Intel Core i5 3.20 GHz, and 8 GiB RAM with Maude 2.6.\par Table~\ref{tab:imp} shows some of our results. It has five columns: the unification problem; Real Time, the time for the program to terminate, in ms (milliseconds); Solution, which is $\bot$ if there is no solution and Yes otherwise; \# Sol., the number of solutions; and the bound $\kappa$. As expected, the real time increases with the bound $\kappa$ for the first problem, which has no solution, while the other problems yield solutions; in every case the program terminates.
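To make the h-depth bookkeeping behind the Bound Check rule concrete, the following small Python sketch (ours, purely illustrative; it is not the Maude implementation, and the term encoding and function names are our own) computes, for every variable, the maximal number of $h$-symbols above one of its occurrences, and fails a problem as soon as some variable exceeds the bound $\kappa$:

```python
# Illustrative sketch of the h-depth bound check (not the authors' Maude code).
# Terms are nested tuples such as ('h', ('+', 'x', 'y')); variables are strings.

def h_depth(term, depth=0, acc=None):
    """Map each variable in `term` to the maximum number of h-symbols
    on the path from the root down to one of its occurrences."""
    if acc is None:
        acc = {}
    if isinstance(term, str):                 # a variable occurrence
        acc[term] = max(acc.get(term, 0), depth)
        return acc
    head, *args = term
    for a in args:
        # only the homomorphism symbol h increases the h-depth
        h_depth(a, depth + 1 if head == 'h' else depth, acc)
    return acc

def bound_check(equations, kappa):
    """Bound Check rule, roughly: fail (return False) once some variable
    reaches h-depth kappa + 1 on either side of an equation."""
    for lhs, rhs in equations:
        for side in (lhs, rhs):
            if any(d > kappa for d in h_depth(side).values()):
                return False
    return True

# h(y) =? y + x, the first problem of Table 1
eqs = [(('h', 'y'), ('+', 'y', 'x'))]
print(bound_check(eqs, 10))   # prints True: depths are still within the bound
```

This mirrors, roughly, why the first entry of Table~\ref{tab:imp} takes longer as $\kappa$ grows: each rule application pushes variables deeper under $h$, and only the bound eventually cuts the search off.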
\begin{table*}[!t]  \footnotesize  \begin{tabular} {|ccccc|} \hline \hline  Unification Problem & Real Time & Solution & \# Sol. & Bound $\kappa$ \\ \hline \hline  $\{h(y) \overset{?}= y + x\}$ & 674ms & $\bot$ & 0 & 10 \\ \hline  $\{h(y) \overset{?}= y + x\}$ & 15880ms & $\bot$ & 0 & 20 \\ \hline  $\{h(y) \overset{?}= x_1 + x_2\}$ & 5ms & Yes & 1 & 10 \\ \hline  $\{h(h(x)) \overset{?}= h(h(y))\}$ & 2ms & Yes & 1 & 10 \\ \hline  $\{x + y_1 \overset{?}= x + y_2\}$ & 3ms & Yes & 1 & 10 \\ \hline  $\{v \overset{?}= x + y, v \overset{?}= w+z , s \overset{?}= h(t)\}$ & 46ms & Yes & 10 & 10 \\ \hline  $\{v \overset{?}= x_1 + x_2 , v \overset{?}= x_3 + x_4 , x_1 \overset{?}= h(y), x_2 \overset{?}= h(y)\}$ & 100ms & Yes & 6 & 10 \\ \hline  $\{h(h(x)) \overset{?}= v+w+y + z\}$ & 224ms & Yes & 1 & 10 \\ \hline  $\{v \overset{?}= (h(x)+y),v\overset{?}= w+z\}$ & 55ms & Yes & 7 & 10 \\ \hline   $\{f(x , y) \overset{?}= h(x_1)\}$ & 0ms & $\bot$ & 0 & 10 \\ \hline   $\{f(x_1 , y_1) \overset{?}= f(x_2 , y_2)\}$ & 1ms & Yes & 1 & 10 \\ \hline   $\{v \overset{?}= x_1 + x_2 , v \overset{?}= x_3 + x_4\} $ & 17ms & Yes & 7 & 10 \\ \hline   $\{f(x_1 , y_1) \overset{?}= g(x_2 , y_2)\}$ & 0ms & $\bot$ & 0 & 10 \\ \hline   $\{h(y) \overset{?}= x , y \overset{?}= h(x)\}$ & 0ms & $\bot$ & 0 & 10 \\ \hline   \end{tabular}   \caption{\emph{Tested results with the bounded ACh-unification algorithm}}   \label{tab:imp}  \end{table*}   \section{Conclusion} We introduced a set of inference rules to solve the unification problem modulo the homomorphism theory $h$ over an AC symbol $+$, by enforcing a threshold $\kappa$ on the h-depth of any variable. \ignore{Our algorithm finds all the solutions or unifiers within the given h-depth $\kappa$. We implemented the algorithm in Maude because the inference rules are easy to write in Maude, and also we hope to incorporate them in the Maude-NPA tool, a protocol analyzer written in Maude.
Our work on this topic actually came out of work on the Maude-NPA tool.} Homomorphism is a property that is very common in cryptographic algorithms. So, it is important to analyze cryptographic protocols in the homomorphism theory. Some of the algorithms and details in this direction can be seen in~\cite{ALLNR2, EKLMMNS, ALLNR1}. However, none of those results perform ACh unification because that is undecidable. \ignore{, One way around this, is to assume that identity and an inverse exist, but because of the way the Maude-NPA works, it would still be necessary to unify modulo ACh. So a unification algorithm there becomes crucial.} We believe that our approximation is a good way to deal with it. We also tested some problems and the results are shown in Table~\ref{tab:imp}.\\ \textbf{Acknowledgments.} We thank anonymous reviewers who have provided useful comments. Ajay Kumar Eeralla was partially supported by NSF Grant CNS 1314338. \end{document}
\begin{document} \title{Generalized Fourier--Feynman transforms and generalized convolution products on Wiener space II} \titlerunning{Generalized Fourier--Feynman transforms and generalized convolution products II} \author{Sang Kil Shim \and Jae Gil Choi} \institute{Sang Kil Shim \at Department of Mathematics, Dankook University, Cheonan 330-714, Republic of Korea\\ \email{[email protected]} \and Jae Gil Choi (Corresponding author)\at School of General Education, Dankook University, Cheonan 330-714, Republic of Korea\\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} The purpose of this article is to present the second type fundamental relationship between the generalized Fourier--Feynman transform and the generalized convolution product on Wiener space. The relationships in this article are also natural extensions (to the case on an infinite dimensional Banach space) of the structure which exists between the Fourier transform and the convolution of functions on Euclidean spaces. \keywords{Wiener space \and Gaussian process \and generalized Fourier--Feynman transform \and generalized convolution product.} \subclass{Primary 46G12; Secondary 28C20 \and 60G15 \and 60J65} \end{abstract} \setcounter{equation}{0} \section{Introduction}\label{sec:introduction} \par Given a positive real $T>0$, let $C_0[0,T]$ denote one-parameter Wiener space, that is, the space of all real-valued continuous functions $x$ on $[0,T]$ with $x(0)=0$. Let $\mathcal{M}$ denote the class of all Wiener measurable subsets of $C_0[0,T]$ and let $\mathfrak{m}$ denote Wiener measure. Then, as is well-known, $(C_0[0,T],\mathcal{M},\mathfrak{m})$ is a complete measure space. 
\par In \cite{HPS95,HPS96,HPS97-1,PSS98} Huffman, Park, Skoug and Storvick established fundamental relationships between the analytic Fourier--Feynman transform (FFT) and the convolution product (CP) for functionals $F$ and $G$ on $C_0[0,T]$, as follows: \begin{equation}\label{eq:offt-ocp} T_{q}^{(p)}\big((F*G)_q\big)(y) = T_{q}^{(p)}(F)\bigg(\frac{y}{\sqrt2}\bigg) T_{q}^{(p)}(G)\bigg(\frac{y}{\sqrt2}\bigg) \end{equation} and \begin{equation}\label{eq:ocp-offt} \big(T_{q}^{(p)}(F)*T_{q}^{(p)}(G)\big)_{-q}(y) = T_{q}^{(p)}\bigg(F\bigg(\frac{\cdot}{\sqrt2}\bigg)G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{equation} for scale-almost every $y\in C_{0}[0,T]$, where $T_{q}^{(p)}(F)$ and $(F*G)_q$ denote the $L_p$ analytic FFT and the CP of functionals $F$ and $G$ on $C_0[0,T]$. For an elementary introduction to the FFT and the corresponding CP, see \cite{SS04}. \par For $f\in L_2(\mathbb R)$, let the Fourier transform of $f$ be given by \[ \mathcal{F}(f)(u)=\int_{\mathbb R}e^{iuv}f(v)dm_L^{\mathfrak{n}}(v) \] and for $f, g\in L_2(\mathbb R)$, let the convolution of $f$ and $g$ be given by \[ (f*g)(u)=\int_{\mathbb R} f(u-v)g(v)dm_L^{\mathfrak{n}}(v), \] where $dm_L^{\mathfrak{n}} (v)$ denotes the normalized Lebesgue measure $(2\pi)^{-1/2}dv$ on $\mathbb R$. As commented in \cite{Indag}, the Fourier transform $\mathcal{F}$ acts like a homomorphism between convolution $*$ and ordinary multiplication on $L_2(\mathbb R)$: for $f, g \in L_2(\mathbb R)$, \begin{equation}\label{worthy} \mathcal{F}(f*g)=\mathcal{F}(f)\mathcal{F}(g). \end{equation} The Fourier transform $\mathcal{F}$ and the convolution $*$ also enjoy the dual property \begin{equation}\label{eq:F-02} \mathcal{F}(f)*\mathcal{F}(g)=\mathcal{F}(f g). \end{equation} Equations \eqref{eq:offt-ocp} and \eqref{eq:ocp-offt} above are natural extensions (to the case on an infinite dimensional Banach space) of equations \eqref{worthy} and \eqref{eq:F-02}, respectively.
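As a quick numerical sanity check of the multiplicative property $\mathcal{F}(f*g)=\mathcal{F}(f)\mathcal{F}(g)$ under the normalized measure $(2\pi)^{-1/2}dv$ (our own illustration, not part of the original development; the grid and tolerance are ad hoc), one can verify it for the Gaussian $f=g=e^{-v^2/2}$ by direct quadrature:

```python
import numpy as np

# Riemann-sum check of F(f*g) = F(f)F(g) with the normalized measure
# (2*pi)^(-1/2) dv, for f = g = exp(-v^2/2).
v = np.linspace(-20.0, 20.0, 4001)
dv = v[1] - v[0]
norm = 1.0 / np.sqrt(2.0 * np.pi)
f = lambda t: np.exp(-t ** 2 / 2.0)

def fourier(vals, u):
    # F at the point u of a function sampled as `vals` on the grid v
    return norm * np.sum(np.exp(1j * u * v) * vals) * dv

def convolution(u):
    # (f*f)(u) with the same normalization
    return norm * np.sum(f(u - v) * f(v)) * dv

u0 = 1.3
conv_vals = np.array([convolution(ui) for ui in v])
lhs = fourier(conv_vals, u0)        # F(f*f)(u0)
rhs = fourier(f(v), u0) ** 2        # (F(f)(u0))^2
assert abs(lhs - rhs) < 1e-8
```

With this normalization the identity holds without any stray constant factors; here both sides come out numerically equal to $e^{-u_0^2}$, since $\mathcal{F}(e^{-v^2/2})(u)=e^{-u^2/2}$.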
\par In \cite{CCKSY05,HPS97-2}, the authors extended the relationships \eqref{eq:offt-ocp} and \eqref{eq:ocp-offt} to relationships between the generalized FFT (GFFT) and the generalized CP (GCP) of functionals on $C_0[0,T]$. The definitions of the ordinary FFT and the corresponding CP are based on the Wiener integral, see \cite{HPS95,HPS96,HPS97-1}, while the definitions of the GFFT and the GCP studied in \cite{CCKSY05,HPS97-2} are based on the generalized Wiener integral \cite{CPS93,PS91}. The generalized Wiener integral (associated with a Gaussian process) is defined by $\int_{C_0[0,T]} F(\mathcal Z_h(x,\cdot))d\mathfrak{m}(x)$, where $\mathcal Z_h$ is the Gaussian process on $C_0[0,T]\times[0,T]$ given by $\mathcal Z_h (x,t)=\int_0^t h(s)\tilde{d}x(s)$, $h$ is a nonzero function in $L_2[0,T]$, and $\int_0^t h(s)\tilde{d}x(s)$ denotes the Paley--Wiener--Zygmund stochastic integral \cite{PWZ33,Park69,PS88}. On the other hand, in \cite{Indag}, the authors defined a more general CP (see Definition \ref{def:cp} below) and developed a relationship, analogous to \eqref{eq:offt-ocp}, between their GFFT and the GCP (see Theorem \ref{thm:gfft-gcp-compose} below). Equation \eqref{eq:gfft-gcp} in Theorem \ref{thm:gfft-gcp-compose} is useful in that it permits one to calculate the GFFT of the GCP of functionals on $C_0[0,T]$ without actually calculating the GCP. In this paper we establish the second relationship, analogous to equation \eqref{eq:ocp-offt}, between the GFFT and the GCP of functionals on $C_0[0,T]$. Our new result corresponds to equation \eqref{eq:F-02} rather than equation \eqref{worthy}. It turns out, as noted in Remark \ref{re:meaning-main} below, that our second relationship between the GFFT and the GCP also permits one to calculate the GCP of the GFFTs of functionals on $C_0[0,T]$ without actually calculating the GCP.
\setcounter{equation}{0} \section{Preliminaries}\label{sec:preliminaries} \par In order to present our relationship between the GFFT and the GCP, we follow the exposition of \cite{Indag}. \par A subset $B$ of $C_0[0,T]$ is said to be scale-invariant measurable provided $\rho B\in \mathcal{M}$ for all $\rho>0$, and a scale-invariant measurable set $N$ is said to be scale-invariant null provided $\mathfrak{m}(\rho N)=0$ for all $\rho>0$. A property that holds except on a scale-invariant null set is said to hold scale-invariant almost everywhere (s-a.e.). A functional $F$ is said to be scale-invariant measurable provided $F$ is defined on a scale-invariant measurable set and $F(\rho\,\cdot\,)$ is Wiener-measurable for every $\rho> 0$. If two functionals $F$ and $G$ are equal s-a.e., we write $F\approx G$. \par Let $\mathbb C$, $\mathbb C_+$ and $\mathbb{\widetilde C}_+$ denote the set of complex numbers, the set of complex numbers with positive real part, and the set of nonzero complex numbers with nonnegative real part, respectively. For each $\lambda \in \mathbb C$, $\lambda^{1/2}$ denotes the principal square root of $\lambda$; i.e., $\lambda^{1/2}$ is always chosen to have positive real part, so that $\lambda^{-1/2}=(\lambda^{-1})^{1/2}$ is in $\mathbb C_+$ for all $\lambda\in\widetilde{\mathbb C}_+$. \par Let $h$ be a function in $L_2[0,T]\setminus\{0\}$ and let $F$ be a $\mathbb C$-valued scale-invariant measurable functional on $C_0[0,T]$ such that \[ \int_{C_0[0,T]} F\big(\lambda^{-1/2}\mathcal Z_h(x,\cdot)\big)d\mathfrak{m}(x) =J(h;\lambda) \] exists as a finite number for all $\lambda>0$.
If there exists a function $J^* (h;\lambda)$ analytic on $\mathbb C_+$ such that $J^*(h;\lambda)=J(h;\lambda)$ for all $\lambda>0$, then $J^*(h;\lambda)$ is defined to be the generalized analytic Wiener integral (associated with the Gaussian process $\mathcal{Z}_h$) of $F$ over $C_0[0,T]$ with parameter $\lambda$, and for $\lambda \in \mathbb C_+$ we write \[ \int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}} F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak{m}(x) = J^*(h;\lambda). \] Let $q\ne 0$ be a real number and let $F$ be a functional such that \[ \int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}} F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak{m}(x) \] exists for all $\lambda \in \mathbb C_+$. If the following limit exists, we call it the generalized analytic Feynman integral of $F$ with parameter $q$ and we write \[ \int_{C_0[0,T]}^{\mathrm{anf}_{q}} F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak m(x) = \lim_{\substack{ \lambda\to -iq \\ \lambda\in \mathbb C_+}} \int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}} F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak m(x). \] \par Next (see \cite{CCKSY05,Indag,HPS97-2}) we state the definition of the GFFT. \renewcommand{\thedefinition}{\thesection.1} \begin{definition} Let $h$ be a function in $L_2[0,T]\setminus\{0\}$. For $\lambda\in\mathbb{C}_+$ and $y \in C_{0}[0,T]$, let \[ T_{\lambda,h}(F)(y) =\int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}} F\big(y+\mathcal{Z}_h(x,\cdot)\big)d\mathfrak{m}(x). \] For $p\in (1,2]$ we define the $L_p$ analytic GFFT (associated with the Gaussian process $\mathcal{Z}_h$), $T^{(p)}_{q,h}(F)$ of $F$, by the formula \[ T^{(p)}_{q,h}(F)(y) =\operatorname*{l.i.m.}_{\substack{ \lambda\to -iq \\ \lambda\in \mathbb C_+}} T_{\lambda,h} (F)(y) \] if it exists; i.e., for each $\rho>0$, \[ \lim_{\substack{ \lambda\to -iq \\ \lambda\in \mathbb C_+}} \int_{C_{0}[0,T]}\big| T_{\lambda,h} (F)(\rho y) -T^{(p)}_{q, h }(F)(\rho y) \big|^{p'} d\mathfrak m (y)=0 \] where $1/p+1/p' =1$.
We define the $L_1$ analytic GFFT, $T_{q, h }^{(1)}(F)$ of $F$, by the formula \[ T_{q, h }^{(1)}(F)(y) = \lim_{\substack{ \lambda\to -iq \\ \lambda\in \mathbb C_+}} T_{\lambda,h} (F)(y) \] for s-a.e. $y\in C_0[0,T]$ whenever this limit exists. \end{definition} \par We note that for $p \in [1,2]$, $T_{q,h}^{(p)}(F)$ is defined only s-a.e. We also note that if $T_{q,h}^{(p)}(F)$ exists and if $F\approx G$, then $T_{q,h}^{(p)}(G)$ exists and $T_{q,h}^{(p)}(G)\approx T_{q,h }^{(p)}(F)$. One can see that for each $h\in L_2[0,T]$, $T_{q,h}^{(p)}(F)\approx T_{q,-h}^{(p)}(F)$ since \[ \int_{C_0[0,T]}F(x)d\mathfrak{m}(x)=\int_{C_0[0,T]}F(-x)d\mathfrak{m}(x). \] \renewcommand{\theremark}{\thesection.2} \begin{remark}\label{remark:ordinary-fft} Note that if $h\equiv 1$ on $[0,T]$, then the generalized analytic Feynman integral and the $L_p$ analytic GFFT, $T_{q,1}^{(p)}(F)$, agree with the previous definitions of the analytic Feynman integral and the analytic FFT, $T_{q}^{(p)}(F)$, respectively \cite{HPS95,HPS96,HPS97-1,PSS98}, because $\mathcal Z_1(x,\cdot)=x$ for all $x \in C_0[0,T]$. \end{remark} \par Next (see \cite{Indag}) we give the definition of our GCP. \renewcommand{\thedefinition}{\thesection.3} \begin{definition}\label{def:cp} Let $F$ and $G$ be scale-invariant measurable functionals on $C_{0}[0,T]$.
For $\lambda \in \widetilde{\mathbb C}_+$ and $h_1,h_2\in L_2[0,T]\setminus\{0\}$, we define their GCP with respect to $\{\mathcal{Z}_{h_1},\mathcal{Z}_{h_2}\}$ (if it exists) by \begin{equation}\label{eq:gcp-Z} \begin{aligned} (F*G)_{\lambda}^{(h_1,h_2)}(y) = \begin{cases} \int_{C_0[0,T]}^{\mathrm{ anw}_{\lambda}} F\big(\frac{y+{\mathcal Z}_{h_1} (x,\cdot)}{\sqrt2}\big) G\big(\frac{y-{\mathcal Z}_{h_2} (x,\cdot)}{\sqrt2}\big)d \mathfrak m(x), \quad \lambda \in \mathbb C_+ \\ \int_{C_0[0,T]}^{\mathrm{ anf}_{q}} F\big(\frac{y+{\mathcal Z}_{h_1} (x,\cdot)}{\sqrt2}\big) G\big(\frac{y-{\mathcal Z}_{h_2} (x,\cdot)}{\sqrt2}\big)d \mathfrak{m}(x),\\ \qquad \qquad \qquad \qquad \qquad \qquad \lambda=-iq,\,\, q\in \mathbb R, \,\,q\ne 0. \end{cases} \end{aligned} \end{equation} When $\lambda =-iq$, we denote $(F*G)_{\lambda}^{(h_1,h_2)}$ by $(F*G)_{q}^{(h_1,h_2)}$. \end{definition} \renewcommand{\theremark}{\thesection.4} \begin{remark}\label{remark:ordinary-cp} (i) Given a function $h$ in $L_2[0,T]\setminus\{0\}$ and letting $h_1=h_2\equiv h$, equation \eqref{eq:gcp-Z} yields the convolution product studied in \cite{CCKSY05,HPS97-2}: \[ \begin{aligned} (F*G)_{q}^{(h,h)}(y)& \equiv(F*G)_{q,h}(y)\\ &=\int_{C_0[0,T]}^{\mathrm{ anf}_{q}} F\bigg(\frac{y+ \mathcal Z_{h} (x,\cdot)}{\sqrt2}\bigg) G\bigg(\frac{y- \mathcal Z_{h} (x,\cdot)}{\sqrt2}\bigg)d \mathfrak{m}(x) . \end{aligned} \] (ii) Choosing $h_1=h_2\equiv 1$, equation \eqref{eq:gcp-Z} yields the convolution product studied in \cite{HPS95,HPS96,HPS97-1,PSS98}: \[ \begin{aligned} (F*G)_{q}^{(1,1) }(y) & \equiv (F*G)_{q}(y)\\ & =\int_{C_0[0,T]}^{\mathrm{ anf}_{q}} F\bigg(\frac{y+ x}{\sqrt2}\bigg) G\bigg(\frac{y- x}{\sqrt2}\bigg)d \mathfrak{m}(x). \end{aligned} \] \end{remark} \par In order to establish our assertion we adopt the following conventions. Let $h_1$ and $h_2$ be nonzero functions in $L_2[0,T]$.
Then there exists a function $\mathbf{s}\in L_2[0,T]$ such that \begin{equation}\label{eq:fn-rot} \mathbf{s}^2(t)=h_1^2(t)+h_2^2(t) \end{equation} for $m_L$-a.e. $t\in [0,T]$, where $m_L$ denotes Lebesgue measure on $[0,T]$. Note that the function `$\mathbf{s}$' satisfying \eqref{eq:fn-rot} is not unique. We will use the symbol $\mathbf{s}(h_1,h_2)$ for the functions `$\mathbf{s}$' that satisfy \eqref{eq:fn-rot} above. Given nonzero functions $h_1$ and $h_2$ in $L_{2}[0,T]$, infinitely many functions $\mathbf{s}(h_1,h_2)$ exist in $L_{2}[0,T]$. Thus $\mathbf{s}(h_1,h_2)$ can be considered as an equivalence class of the equivalence relation $\sim$ on $L_2[0,T]$ given by \[ \mathbf{s}_1\sim \mathbf{s}_2 \,\,\Longleftrightarrow\,\, \mathbf{s}_1^2=\mathbf{s}_2^2 \,\,\,m_L\mbox{-a.e.}. \] We observe that for every function $\mathbf{s}$ in the equivalence class $\mathbf{s}(h_1,h_2)$, the Gaussian random variable ${\mathcal {Z}}_{\mathbf{s}}(x,T)$ has the normal distribution $N(0,\|h_1\|_2^2+\|h_2\|_2^2)$. Inductively, given a sequence $\mathcal H=\{h_1,\ldots, h_n\}$ of nonzero functions in $L_2[0,T]$, let $\mathbf{s}(\mathcal H)\equiv \mathbf{s}(h_1,h_2,\ldots,h_n)$ be the equivalence class of the functions $\mathbf{s}$ which satisfy the relation \[ \mathbf{s}^2(t)=h_1^2(t)+\cdots+h_n^2(t) \] for $m_L$-a.e. $t\in[0,T]$. Throughout the rest of this paper, for convenience, we will regard $\mathbf{s}(\mathcal H)$ as a function in $L_2[0,T]$. We note that if the functions $h_1,\ldots, h_n$ are in $L_{\infty}[0,T]$, then we can take $\mathbf{s}(\mathcal H)$ to be in $L_{\infty}[0,T]$. By an induction argument it follows that \[ \mathbf{s}(\mathbf{s}(h_1,h_2,\ldots,h_{k-1}),h_k) =\mathbf{s}(h_1,h_2,\ldots,h_k) \] for all $k\in\{2,\ldots,n\}$. \renewcommand{\theexample}{\thesection.5} \begin{example} Let $h_1(t)=t^4$, $h_2(t)=\sqrt{2}t^3$, $h_3(t)=\sqrt{3}t^2$, $h_4(t)={\sqrt{2}}t$, $h_5(t)=1$, and $\mathbf{s}(t)=t^4 +t^2 +1$ for $t\in [0,T]$.
Then $\mathcal H=\{h_1,h_2,h_3,h_4,h_5\}$ is a sequence of functions in $L_2[0,T]$ and it follows that \[ \mathbf{s}^2(t)= h_1^2(t)+ h_2^2(t)+ h_3^2(t)+ h_4^2(t)+ h_5^2(t). \] Thus we can write $\mathbf{s}\equiv \mathbf{s}(h_1,h_2,h_3,h_4,h_5)$. Furthermore, one can see that \[ (-1)^{m}\mathbf{s}\equiv \mathbf{s}((-1)^{n_1}h_1,(-1)^{n_2}h_2,(-1)^{n_3}h_3,(-1)^{n_4}h_4,(-1)^{n_5}h_5) \] with $m, n_1,n_2,n_3,n_4,n_5 \in \{1,2\}$. On the other hand, it also follows that \[ \mathbf{s}(h_1,h_2,h_3,h_4,h_5)(t)\equiv \mathbf{s}(g_1,g_2,g_3) (t) \] for each $t\in [0,T]$, where $g_1(t)=-t^4-1$, $g_2(t)={\sqrt2} t\sqrt{t^4+1}$, and $g_3(t)=t^2$ for $t\in [0,T]$. \end{example} \renewcommand{\theexample}{\thesection.6} \begin{example} Let $h_1(t)=t^4+t^2$, $h_2(t)=t^4-t^2$, $h_3(t)=\sqrt2 t^3$, and $\mathbf{s}(t)=\sqrt{2(t^8 +t^4)}$ for $t\in [0,T]$. Then, by the convention for $\mathbf{s}$, it follows that \[ \mathbf{s}(t)\equiv\mathbf{s}(h_1,h_2) (t) \equiv \mathbf{s}({\sqrt2} h_2 ,{\sqrt2} h_3)(t). \] \end{example} \renewcommand{\theexample}{\thesection.7} \begin{example} Using the well-known formulas for trigonometric and hyperbolic functions, it follows that \[ \begin{aligned} \sec \big(\tfrac{\pi}{4 T} t\big) &=\mathbf{s}\big(1, \tan\big(\tfrac{\pi}{4 T} \cdot\big)\big)(t)\\ &=\mathbf{s}\big(\sin,\cos,\tan\big(\tfrac{\pi}{4 T} \cdot\big)\big)(t) \\ & =\mathbf{s}\big(\sin\big(\tfrac{\pi}{4 T} \cdot\big),\cos\big(\tfrac{\pi}{4 T} \cdot\big),\tan\big(\tfrac{\pi}{4 T} \cdot\big)\big)(t), \end{aligned} \] \[ \cosh t =\mathbf{s}(1, \sinh)(t)=\mathbf{s}(-1, \sinh)(t)=\mathbf{s}(\sin,\cos,\sinh)(t) , \] and \[ -\coth \big(t+\tfrac12\big) =\mathbf{s}\big(1, \mathrm{csch}\big(\cdot+\tfrac12\big)\big)(t) =\mathbf{s}(-\sin,\cos,- \mathrm{csch}\big(\cdot+\tfrac12\big))(t) \] for each $t\in [0,T]$.
\end{example} \setcounter{equation}{0} \section{The relationship between the GFFT and the GCP} \par The Banach algebra $\mathcal S(L_2[0,T])$ consists of functionals on $C_0[0,T]$ expressible in the form \begin{equation}\label{eq:element} F(x)=\int_{L_2[0,T]}\exp\{i\langle{u,x}\rangle\}df(u) \end{equation} for s-a.e. $x\in C_0[0,T]$, where the associated measure $f$ is an element of $\mathcal M(L_2[0,T])$, the space of $\mathbb C$-valued countably additive (and hence finite) Borel measures on $L_2[0,T]$, and the pair $\langle{u,x}\rangle$ denotes the Paley--Wiener--Zygmund stochastic integral $\mathcal Z_u(x,T) \equiv \int_0^T u(s)\tilde{d}x(s)$. For more details, see \cite{CS80,CPS93,HPS97-2,PSS98}. \par We first present two known results for the GFFT and the GCP of functionals in the Banach algebra $\mathcal S(L_2[0,T])$. \renewcommand{\thetheorem}{\thesection.1} \begin{theorem}[\cite{HPS97-2}]\label{thm:gfft} Let $h$ be a nonzero function in $L_\infty[0,T]$, and let $F\in\mathcal S(L_2[0,T])$ be given by equation \eqref{eq:element}. Then, for all $p\in[1,2]$, the $L_p$ analytic GFFT, $T_{q,h}^{(p)}(F)$ of $F$, exists for all nonzero real numbers $q$, belongs to $\mathcal S(L_2[0,T])$, and is given by the formula \[ T_{q,h}^{(p)}(F)(y) = \int_{L_2[0,T]}\exp\{i\langle{u,y}\rangle\}df_t^h(u) \] for s-a.e. $y\in C_{0}[0,T]$, where $f_t^h$ is the complex measure in $\mathcal M(L_2[0,T])$ given by \[ f_t^{h}(B)=\int_B \exp\bigg\{-\frac{i}{2q}\|uh\|_2^2\bigg\}df(u) \] for $B \in \mathcal B(L_2[0,T])$. \end{theorem} \renewcommand{\thetheorem}{\thesection.2} \begin{theorem}[\cite{Indag}] \label{thm:gcp} Let $k_1$ and $k_2$ be nonzero functions in $L_\infty[0,T]$ and let $F$ and $G$ be elements of $\mathcal S(L_2[0,T])$ with corresponding finite Borel measures $f$ and $g$ in $\mathcal M(L_2[0,T])$.
Then, the GCP $(F*G)_q^{(k_1,k_2)}$ exists for all nonzero real $q$, belongs to $\mathcal S(L_2[0,T])$, and is given by the formula \[ (F*G)_q^{(k_1,k_2)}(y) = \int_{L_2[0,T]}\exp\{i\langle{w,y}\rangle\}d\varphi^{k_1,k_2}_c(w) \] for s-a.e. $y\in C_{0}[0,T]$, where \[ \varphi^{k_1,k_2}_c =\varphi^{k_1,k_2}\circ\phi^{-1}, \] $\varphi^{k_1,k_2}$ is the complex measure in $\mathcal M(L_2^2[0,T])$ given by \[ \varphi^{k_1,k_2}(B) =\int_B \exp\bigg\{-\frac{i}{4q}\|uk_1-vk_2\|_2^2\bigg\}df(u)dg(v) \] for $B \in \mathcal B(L_2^2[0,T])$, and $\phi:L_2^2[0,T]\to L_2[0,T]$ is the continuous function given by $\phi(u,v)=(u+v)/\sqrt2$. \end{theorem} \par The following corollary and theorem will be very useful in proving our main theorem (namely, Theorem \ref{thm:cp-tpq02}), in which we establish the relationship between the GFFT and the GCP analogous to equation \eqref{eq:ocp-offt}. The following corollary is a simple consequence of Theorem \ref{thm:gfft}. \renewcommand{\thecorollary}{\thesection.3} \begin{corollary}\label{thm:afft-inverse} Let $h$ and $F$ be as in Theorem \ref{thm:gfft}. Then, for all $p\in[1,2]$ and all nonzero real $q$, \begin{equation}\label{eq:inverse} T_{-q, h}^{(p)}\big(T_{q,h}^{(p)}(F)\big)\approx F. \end{equation} As such, the GFFT, $T_{q,h}^{(p)}$, has the inverse transform $\{T_{q,h}^{(p)}\}^{-1}=T_{-q,h}^{(p)}$. \end{corollary} \par The following theorem is due to Chang, Chung and Choi \cite{Indag}. \renewcommand{\thetheorem}{\thesection.4} \begin{theorem} \label{thm:gfft-gcp-compose} Let $k_1$, $k_2$, $F$, and $G$ be as in Theorem \ref{thm:gcp}, and let $h$ be a nonzero function in $L_\infty[0,T]$. Assume that $h^2=k_1k_2$ $m_L$-a.e. on $[0,T]$.
Then, for all $p\in[1,2]$ and all nonzero real $q$, \begin{equation}\label{eq:gfft-gcp} \begin{aligned} &T_{q,h}^{(p)}\big((F*G)_q^{(k_1,k_2)}\big)(y) \\ & = T_{q,\mathbf{s}(h,k_1)/\sqrt2}^{(p)}(F)\bigg(\frac{y}{\sqrt2}\bigg) T_{q,\mathbf{s}(h,k_2)/\sqrt2}^{(p)}(G)\bigg(\frac{y}{\sqrt2}\bigg) \end{aligned} \end{equation} for s-a.e. $y\in C_{0}[0,T]$, where $\mathbf{s}(h,k_j)$, $j\in \{1,2\}$, are the functions satisfying the relation \eqref{eq:fn-rot}. \end{theorem} \renewcommand{\theremark}{\thesection.5} \begin{remark} In equation \eqref{eq:gfft-gcp}, choosing $h=k_1=k_2\equiv 1$ yields equation \eqref{eq:offt-ocp} above. Also, letting $h=k_1=k_2$ yields the results studied in \cite{CCKSY05,HPS97-2}. As mentioned above, equation \eqref{eq:gfft-gcp} extends equation \eqref{worthy} to the setting of an infinite-dimensional Banach space. \end{remark} \par We are now ready to establish the main theorem of this paper. \renewcommand{\thetheorem}{\thesection.6} \begin{theorem} \label{thm:cp-tpq02} Let $k_1$, $k_2$, $F$, $G$, and $h$ be as in Theorem \ref{thm:gfft-gcp-compose}. Then, for all $p\in[1,2]$ and all nonzero real $q$, \begin{equation}\label{eq:cp-fft-basic} \begin{aligned} &\Big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)* T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) \Big)_{-q}^{(k_1,k_2)}(y) \\ &=T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg) G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \end{equation} for s-a.e. $y\in C_0[0,T]$, where $\mathbf{s}(h,k_j)$, $j\in \{1,2\}$, are the functions satisfying the relation \eqref{eq:fn-rot}. \end{theorem} \begin{proof} Applying \eqref{eq:inverse}, then \eqref{eq:gfft-gcp} with $F$, $G$, and $q$ replaced with $T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)$, $T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G)$, and $-q$, respectively, and then \eqref{eq:inverse} again, it follows that for s-a.e.
$y\in C_0[0,T]$, \[ \begin{aligned} &\Big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)* T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) \Big)_{-q}^{(k_1,k_2)}(y)\\ &= T_{q,h}^{(p)}\Big(T_{-q,h}^{(p)}\Big(\big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)* T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) \big)_{-q}^{(k_1,k_2)}\Big)\Big)(y)\\ &= T_{q,h}^{(p)}\bigg( T_{-q,\mathbf{s}(h,k_1)/\sqrt2}^{(p)}\Big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)}(F)\Big) \bigg(\frac{\cdot}{\sqrt2}\bigg)\\ &\qquad\quad\times T_{-q,\mathbf{s}(h,k_2)/\sqrt2}^{(p)}\Big(T_{q,\mathbf{s}(h,k_2)/\sqrt{2} }^{(p)}(G)\Big) \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg) (y)\\ &=T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg) G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \] as desired. \qed\end{proof} \renewcommand{\theremark}{\thesection.7} \begin{remark}\label{re:meaning-main} (i) Equation \eqref{eq:gfft-gcp} shows that the GFFT of the GCP of two functionals is the ordinary product of their transforms. On the other hand, equation \eqref{eq:cp-fft-basic} above shows that the GCP of the GFFTs of two functionals is the GFFT of the product of the functionals. These equations are useful in that they permit one to evaluate $T_{q,h}^{(p)}((F*G)_q^{(k_1,k_2)})$ and $(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)* T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G))_{-q}^{(k_1,k_2)}$ without actually calculating the GCPs involved. In practice, equation \eqref{eq:cp-fft-basic} tells us that calculating $T_{q,h}^{(p)}(F(\frac{\cdot}{\sqrt2}) G(\frac{\cdot}{\sqrt2} ))$ is easier than calculating $T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)$, $T_{q,\mathbf{s}(h,k_2)/\sqrt{2} }^{(p)} (G)$, and then their GCP $(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)* T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) )_{-q}^{(k_1,k_2)}$. (ii) Equation \eqref{eq:cp-fft-basic} extends equation \eqref{eq:F-02} to the setting of an infinite-dimensional Banach space.
\end{remark} \renewcommand{\thecorollary}{\thesection.8} \begin{corollary}[Theorem 3.1 in \cite{PSS98}] Let $F$ and $G$ be as in Theorem \ref{thm:gcp}. Then, for all $p\in[1,2]$ and all nonzero real $q$, \[ \Big( T_q^{(p)}(F)*T_q^{(p)}(G)\Big)_{-q} (y) =T_q^{(p)}\bigg(F \bigg(\frac{\cdot}{\sqrt2}\bigg) G \bigg(\frac{\cdot}{\sqrt2}\bigg) \bigg)(y) \] for s-a.e. $y\in C_{0}[0,T]$, where $T_q^{(p)}(F)$ denotes the ordinary analytic FFT of $F$ and $(F*G)_q$ denotes the CP of $F$ and $G$ (see Remarks \ref{remark:ordinary-fft} and \ref{remark:ordinary-cp}). \end{corollary} \begin{proof} In equation \eqref{eq:cp-fft-basic}, simply choose $h=k_1=k_2\equiv 1$. \qed\end{proof} \renewcommand{\thecorollary}{\thesection.9} \begin{corollary}[Theorem 3.2 in \cite{CCKSY05}] Let $F$, $G$, and $h$ be as in Theorem \ref{thm:gfft-gcp-compose}. Then, for all $p\in[1,2]$ and all nonzero real $q$, \[ \Big(T_{q,h}^{(p)}(F)*T_{q,h}^{(p)}(G)\Big)_{-q} (y) =T_{q,h}^{(p)}\bigg(F \bigg(\frac{\cdot}{\sqrt2}\bigg) G \bigg(\frac{\cdot}{\sqrt2}\bigg) \bigg)(y) \] for s-a.e. $y\in C_{0}[0,T]$, where $(F*G)_q\equiv (F*G)_q^{(h,h)}$ denotes the GCP of $F$ and $G$ studied in \cite{CCKSY05,HPS97-2} (see Remark \ref{remark:ordinary-cp}). \end{corollary} \begin{proof} In equation \eqref{eq:cp-fft-basic}, simply choose $h=k_1=k_2$. \qed\end{proof} \setcounter{equation}{0} \section{Examples} The assertion of Theorem \ref{thm:cp-tpq02} above applies to many Gaussian processes $\mathcal Z_h$ with $h\in L_\infty[0,T]$.
In view of the assumptions in Theorems \ref{thm:gfft-gcp-compose} and \ref{thm:cp-tpq02}, we have to check that there exist solutions $\{h,k_1,k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of the system \[ \begin{cases} \mbox{(i)} &h^2 =k_1k_2,\\ \mbox{(ii)} &\mathbf{s}_1=\mathbf{s}(h,k_1) \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T],\\ \mbox{(iii)} &\mathbf{s}_2=\mathbf{s}(h,k_2) \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T], \end{cases} \] or, equivalently, \begin{equation}\label{system} \begin{cases} \mbox{(i)} &h^2 =k_1k_2,\\ \mbox{(ii)} &\mathbf{s}_1^2=h^2 +k_1^2 \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T],\\ \mbox{(iii)} &\mathbf{s}_2^2=h^2 +k_2^2 \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T]. \end{cases} \end{equation} Throughout this section we present several solution sets of the system \eqref{system}. For simplicity, we work with the Wiener space $C_0[0,1]$ and the Hilbert space $L_2[0,1]$. \renewcommand{\theexample}{\thesection.1} \begin{example} (Polynomials) The set $\mathcal P =\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases} &h(t) = 2t(t^2-1), \\ &k_1(t) =(t^2-1)^2, \\ &k_2(t) =4t^2, \\ &\mathbf{s}_1(t) = (t^2-1)(t^2+1), \\ &\mathbf{s}_2(t) = 2 t(t^2+1) \end{cases} \] is a solution set of the system \eqref{system}. Thus \[ \mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t) =(t^2-1)(t^2+1), \] and \[ \mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t) =2 t(t^2+1) \] for all $t\in [0,1]$. In this case, equation \eqref{eq:cp-fft-basic} with the functions in $\mathcal P$ holds for any functionals $F$ and $G$ in $\mathcal S(L_2[0,1])$.
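As a sanity check (not part of the original argument), the following minimal Python sketch verifies conditions (i)--(iii) of the system \eqref{system} for the polynomial set $\mathcal P$ on a grid of points in $[0,1]$; the helper names are ours, chosen for illustration only:

```python
import math

# Polynomial solution set P from the example, as functions on [0, 1]
h  = lambda t: 2 * t * (t**2 - 1)
k1 = lambda t: (t**2 - 1)**2
k2 = lambda t: 4 * t**2
s1 = lambda t: (t**2 - 1) * (t**2 + 1)
s2 = lambda t: 2 * t * (t**2 + 1)

# Conditions (i)-(iii): h^2 = k1*k2, s1^2 = h^2 + k1^2, s2^2 = h^2 + k2^2
for i in range(101):
    t = i / 100
    assert math.isclose(h(t)**2, k1(t) * k2(t), abs_tol=1e-12)
    assert math.isclose(s1(t)**2, h(t)**2 + k1(t)**2, abs_tol=1e-12)
    assert math.isclose(s2(t)**2, h(t)**2 + k2(t)**2, abs_tol=1e-12)
```

Since the three conditions hold exactly as polynomial identities, the numerical check passes at every grid point.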
\end{example} \renewcommand{\theexample}{\thesection.2} \begin{example} (Trigonometric functions I) The set $\mathcal T_1=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases} h(t)=\sin 2t=2\sin t\cos t, \\ k_1(t)=2\sin^2t, \\ k_2(t)=2\cos^2t, \\ \mathbf{s}_1(t)=2\sin t, \\ \mathbf{s}_2(t)=2\cos t \end{cases} \] is a solution set of the system \eqref{system}. Thus \[ \mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t) =\mathbf{s}(2\sin\cos,2\sin^2)(t) =2 \sin t, \] and \[ \mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t) =\mathbf{s}(2\sin\cos,2\cos^2)(t) =2 \cos t \] for all $t\in [0,1]$. Also, using equation \eqref{eq:cp-fft-basic}, it follows that for all $p\in[1,2]$, all nonzero real $q$, and all functionals $F$ and $G$ in $\mathcal S(L_2[0,1])$, \[ \Big(T_{q,\sqrt{2}\sin }^{(p)} (F)* T_{q,\sqrt{2}\cos}^{(p)}(G) \Big)_{-q}^{(2\sin^2,2\cos^2)}(y) =T_{q,2\sin\cos}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg) G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \] for s-a.e. $y\in C_0[0,1]$. \end{example} \renewcommand{\theexample}{\thesection.3} \begin{example} (Trigonometric functions II) The set $\mathcal T_2=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases} h(t)=\sqrt2\sin t, \\ k_1(t)=\sqrt2\sin t\tan t, \\ k_2(t)=\sqrt2\cos t, \\ \mathbf{s}_1(t)=\sqrt2\tan t, \\ \mathbf{s}_2(t)=\sqrt2 \end{cases} \] is a solution set of the system \eqref{system}. Thus \[ \mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t) =\mathbf{s}(\sqrt2\sin,\sqrt2\sin\tan)(t) = \sqrt2\tan t, \] and \[ \mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t) =\mathbf{s}(\sqrt2\sin,\sqrt2\cos)(t) =\sqrt2 \,\,\,\,\,(\mbox{constant function}) \] for all $t\in [0,1]$. \end{example} \renewcommand{\theexample}{\thesection.4} \begin{example} (Hyperbolic functions) The hyperbolic functions are defined in terms of the exponential functions $e^{x}$ and $e^{-x}$.
The set $\mathcal H=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases} h(t)=1, \\ k_1(t)= \sinh\big(t+\tfrac12\big), \\ k_2(t)= \mathrm{csch} \big(t+\tfrac12\big), \\ \mathbf{s}_1(t)= \cosh\big(t+\tfrac12\big), \\ \mathbf{s}_2(t)= \coth\big(t+\tfrac12\big) \end{cases} \] is a solution set of the system \eqref{system}. Thus \[ \mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t) =\mathbf{s}\big(1,\sinh\big(\cdot+\tfrac12\big)\big)(t) = \cosh\big(t+\tfrac12\big), \] and \[ \mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t) =\mathbf{s}\big(1,\mathrm{csch} \big(\cdot+\tfrac12\big)\big)(t) = \coth\big(t+\tfrac12\big) \] for all $t\in [0,1]$. \end{example} \setcounter{equation}{0} \section{Iterated GFFTs and GCPs}\label{relation2} \par In this section, we present general relationships between the iterated GFFT and the GCP for functionals in $\mathcal S(L_2[0,T])$ which extend \eqref{eq:cp-fft-basic}. To do this we quote a result from \cite{Indag}. \renewcommand{\thetheorem}{\thesection.1} \begin{theorem}\label{thm:2018-step1} Let $F\in \mathcal S(L_2[0,T])$ be given by equation \eqref{eq:element}, and let $\mathcal H=\{h_1,\ldots,h_n\}$ be a finite sequence of nonzero functions in $L_\infty[0,T]$. Then, for all $p\in[1,2]$ and all nonzero real $q$, the iterated $L_p$ analytic GFFT \[ T_{q,h_n}^{(p)}\big(T_{q,h_{n-1}}^{(p)}\big( \cdots\big(T_{q,h_2}^{(p)}\big(T_{q,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) \] of $F$ exists, belongs to $\mathcal S(L_2[0,T])$, and is given by the formula \[ T_{q,h_n}^{(p)}\big(T_{q,h_{n-1}}^{(p)}\big( \cdots\big(T_{q,h_2}^{(p)}\big(T_{q,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) (y) = \int_{L_2[0,T]}\exp\{i\langle{u,y}\rangle\}df_t^{h_1,\ldots,h_n}(u) \] for s-a.e. $y\in C_{0}[0,T]$, where $f_t^{h_1,\ldots,h_n}$ is the complex measure in $\mathcal M(L_2[0,T])$ given by \[ f_t^{h_1,\ldots,h_n}(B) =\int_B \exp\bigg\{-\frac{i}{2q}\sum_{j=1}^n\|uh_j\|_2^2\bigg\}df(u) \] for $B \in \mathcal B(L_2[0,T])$.
Moreover, it follows that \begin{equation}\label{eq:gfft-n-fubini-add} T_{q,h_n}^{(p)}\big(T_{q,h_{n-1}}^{(p)}\big( \cdots\big(T_{q,h_2}^{(p)}\big(T_{q,h_1}^{(p)}(F)\big)\big)\cdots\big)\big)(y) =T_{q, \mathbf{s}(\mathcal H)}^{(p)}(F)(y) \end{equation} for s-a.e. $y\in C_{0}[0,T]$, where $\mathbf{s}(\mathcal H)\equiv \mathbf{s}(h_1,\ldots,h_n)$ is a function in $L_{\infty}[0,T]$ satisfying the relation \begin{equation}\label{eq:fn-rot-ind} \mathbf{s}(\mathcal H)^2(t)=h_1^2(t)+\cdots+h_n^2(t) \end{equation} for $m_L$-a.e. $t\in [0,T]$. \end{theorem} We next establish two types of extensions of Theorem \ref{thm:cp-tpq02} above. \renewcommand{\thetheorem}{\thesection.2} \begin{theorem} \label{thm:iter-gfft-gcp-compose} Let $k_1$, $k_2$, $F$, and $G$ be as in Theorem \ref{thm:gcp}, and let $\mathcal H=\{h_1,\ldots,h_n\}$ be a finite sequence of nonzero functions in $L_{\infty}[0,T]$. Assume that \[ \mathbf{s}(\mathcal H)^2 \equiv \mathbf{s} (h_1,\ldots,h_n)^2=k_1k_2 \] $m_L$-a.e. on $[0,T]$, where $\mathbf{s}(\mathcal H)$ is the function in $L_{\infty}[0,T]$ satisfying \eqref{eq:fn-rot-ind} above. Then, for all $p\in[1,2]$ and all nonzero real $q$, \begin{equation} \label{eq:multi-rel-01} \begin{aligned} &\Big(T_{q,k_1/\sqrt2}^{(p)} \big(T_{q,h_n/\sqrt2}^{(p)}\big(\cdots\big(T_{q,h_2/\sqrt2}^{(p)} \big(T_{q,h_1/\sqrt2}^{(p)}(F)\big)\big)\cdots\big)\big)\\ &\qquad *T_{q,k_2/\sqrt2}^{(p)}\big( T_{q,h_n/\sqrt2}^{(p)}\big( \cdots\big(T_{q,h_2/\sqrt2}^{(p)} \big(T_{q,h_1/\sqrt2}^{(p)}(G)\big)\big)\cdots\big)\big)\Big)_{-q}^{(k_1,k_2)}(y)\\ &=\Big(T_{q, \mathbf{s}(\mathcal H,k_1)/\sqrt2}^{(p)}(F) *T_{q, \mathbf{s}(\mathcal H,k_2)/\sqrt2}^{(p)}(G)\Big)_{-q}^{(k_1,k_2)}(y)\\ &= T_{q,\mathbf{s}(\mathcal H)}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg) G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \end{equation} for s-a.e.
$y\in C_0[0,T]$, where $\mathbf{s}(\mathcal H,k_1)$ and $\mathbf{s}(\mathcal H,k_2)$ are functions in $L_{\infty}[0,T]$ satisfying the relations \[ \mathbf{s}(\mathcal H,k_1)^2 \equiv \mathbf{s}(h_1,\ldots,h_n,k_1)^2 =h_1^2 +\cdots+h_n^2 +k_1^2 \] and \[ \mathbf{s}(\mathcal H,k_2)^2\equiv \mathbf{s}(h_1,\ldots,h_n,k_2)^2 =h_1^2 +\cdots+h_n^2 +k_2^2 \] $m_L$-a.e. on $[0,T]$, respectively. \end{theorem} \begin{proof} Applying \eqref{eq:gfft-n-fubini-add}, the first equality of \eqref{eq:multi-rel-01} follows immediately. Next, using \eqref{eq:cp-fft-basic} with $h$ replaced by $\mathbf{s}(\mathcal H)$, the second equality of \eqref{eq:multi-rel-01} also follows. \qed\end{proof} \par In view of equations \eqref{eq:gfft-n-fubini-add} and \eqref{eq:cp-fft-basic}, we also obtain the following assertion. \renewcommand{\thetheorem}{\thesection.3} \begin{theorem} \label{thm:iter-gfft-gcp-compose-2nd} Let $F$ and $G$ be as in Theorem \ref{thm:gcp}. Given a nonzero function $h$ in $L_{\infty}[0,T]$ and finite sequences $\mathcal K_1=\{k_{11},k_{12},\ldots,k_{1n}\}$ and $\mathcal K_2=\{k_{21},k_{22},\ldots,k_{2m}\}$ of nonzero functions in $L_{\infty}[0,T]$, assume that \[ h^2=\mathbf{s}(\mathcal K_1)\mathbf{s}(\mathcal K_2) \] $m_L$-a.e. on $[0,T]$.
Then, for all $p\in[1,2]$ and all nonzero real $q$, \begin{equation} \label{eq:multi-rel-02-2nd} \begin{aligned} &\Big(T_{q,h/\sqrt2}^{(p)} \big(T_{q,k_{1n}/\sqrt2}^{(p)} \big(\cdots \big(T_{q,k_{12}/\sqrt2}^{(p)}\big(T_{q,k_{11}/\sqrt2}^{(p)}(F)\big)\big)\cdots\big)\big)\\ &\quad *T_{q,h/\sqrt2}^{(p)}\big( T_{q,k_{2m}/\sqrt2}^{(p)} \big( \cdots \big(T_{q,k_{22}/\sqrt2}^{(p)}\big(T_{q,k_{21}/\sqrt2}^{(p)}(G)\big)\big)\cdots\big)\big)\Big)_{-q}^{(\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2))}(y)\\ &=\Big(T_{q,h/\sqrt2}^{(p)}\big(T_{q,\mathbf{s}(\mathcal K_1)/\sqrt2}^{(p)}(F)\big) *T_{q,h/\sqrt2}^{(p)}\big( T_{q,\mathbf{s}(\mathcal K_2)/\sqrt2}^{(p)}(G)\big) \Big)_{-q}^{(\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2))}(y)\\ &=\Big(T_{q, \mathbf{s}(h,\mathbf{s}(\mathcal K_1))/\sqrt2}^{(p)}(F) *T_{q,\mathbf{s}(h,\mathbf{s}(\mathcal K_2))/\sqrt2}^{(p)}(G)\Big)_{-q}^{(\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2))}(y)\\ &= T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg) G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \end{equation} for s-a.e. $y\in C_0[0,T]$, where $\mathbf{s}(h,\mathbf{s}(\mathcal K_1))$ and $\mathbf{s}(h,\mathbf{s}(\mathcal K_2))$ are functions in $L_{\infty}[0,T]$ satisfying the relations \[ \mathbf{s}(h,\mathbf{s}(\mathcal K_1))^2 =h^2 +\mathbf{s} (\mathcal K_1)^2=h^2+k_{11}^2 + \cdots+k_{1n}^2, \] and \[ \mathbf{s}(h,\mathbf{s}(\mathcal K_2))^2 =h^2 +\mathbf{s} (\mathcal K_2)^2=h^2+k_{21}^2 +\cdots+k_{2m}^2 \] $m_L$-a.e. on $[0,T]$, respectively.
\end{theorem} \renewcommand{\theremark}{\thesection.4} \begin{remark} Note that given the functions $\{\mathbf{s}(\mathcal H),k_1,k_2, \mathbf{s}(\mathcal H,k_1),\mathbf{s}(\mathcal H,k_2) \}$ in Theorem \ref{thm:iter-gfft-gcp-compose}, the set $\mathcal F=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,T]$ with \[ \begin{cases} h(t)=\mathbf{s}(\mathcal H)(t), \\ \mathbf{s}_1(t)=\mathbf{s}(\mathcal H,k_1)(t), \\ \mathbf{s}_2(t)=\mathbf{s}(\mathcal H,k_2)(t) \end{cases} \] is a solution set of the system \eqref{system}. Also, given the functions \[ \{h,\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2),\mathbf{s}(h,\mathbf{s}(\mathcal K_1)),\mathbf{s}(h,\mathbf{s}(\mathcal K_2))\} \] in Theorem \ref{thm:iter-gfft-gcp-compose-2nd}, the set $\mathcal F=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,T]$ with \[ \begin{cases} k_1(t)=\mathbf{s}(\mathcal K_1)(t), \\ k_2(t)=\mathbf{s}(\mathcal K_2)(t), \\ \mathbf{s}_1(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_1))(t), \\ \mathbf{s}_2(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_2))(t) \end{cases} \] is a solution set of the system \eqref{system}. \end{remark} In the following two examples, we also consider the Wiener space $C_0[0,1]$ and the Hilbert space $L_2[0,1]$ for simplicity. \renewcommand{\theexample}{\thesection.5} \begin{example} Let $h_1(t)=\sin \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$, $h_2(t)=\cos \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$, $h_3(t)=\tan\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$, $k_1(t)=\tan \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$, and $k_2(t)= \sec \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$ on $[0,1]$.
Then $\{h_1,h_2,h_3,k_1,k_2\}$ is a set of functions in $L_\infty[0,1]$, and given the set $\mathcal H=\{h_1,h_2,h_3\}$, it follows that \[ \begin{aligned} \mathbf{s}(\mathcal H)^2(t) &\equiv\mathbf{s}(h_1,h_2,h_3)^2(t)\\ &=\mathbf{s}\big(\sin\tfrac{\pi}{4}\big(\cdot+\tfrac{1}{2} \big),\cos\tfrac{\pi}{4} \big(\cdot+\tfrac{1}{2} \big),\tan\tfrac{\pi}{4}\big(\cdot+\tfrac{1}{2} \big)\big)^2(t)\\ &=\sec^2 \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\\ &=k_1(t)k_2(t), \end{aligned} \] \[ \begin{aligned} \mathbf{s}(\mathcal H,k_1)^2(t) &\equiv \mathbf{s}(h_1,h_2,h_3,k_1)^2(t)\\ &=\sec^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)+\tan^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) =\mathbf{s}(\mathbf{s}(\mathcal H),k_1)^2(t), \end{aligned} \] and \[ \begin{aligned} \mathbf{s}(\mathcal H,k_2)^2(t) &\equiv \mathbf{s}(h_1,h_2,h_3,k_2)^2(t)\\ &=\sec^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) +\sec^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\csc^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\\ &=\mathbf{s}(\mathbf{s}(\mathcal H),k_2)^2(t), \end{aligned} \] for all $t\in [0,1]$. From this we see that the set $\mathcal F_1=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases} h(t)= \mathbf{s}(h_1,h_2,h_3)(t)=\sec \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big), \\ k_1(t)=\tan \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) ,\\ k_2(t)= \sec \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\\ \mathbf{s}_1(t)=\mathbf{s}(\mathcal H,k_1)(t), \\ \mathbf{s}_2(t)=\mathbf{s}(\mathcal H,k_2)(t) \end{cases} \] is a solution set of the system \eqref{system}, and equation \eqref{eq:multi-rel-01} holds with the sequence $\mathcal H=\{h_1,h_2,h_3\}$ and the functions $k_1$ and $k_2$. \end{example} In the next example, the kernel functions of the Gaussian processes defining the transforms and convolutions involve trigonometric and hyperbolic (and hence exponential) functions.
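First, though, the trigonometric identities behind the preceding computation reduce to $\sin^2+\cos^2=1$ and $1+\tan^2=\sec^2$, and a short numerical check of $\mathbf{s}(\mathcal H)^2=\sec^2\tfrac{\pi}{4}(t+\tfrac12)=k_1k_2$ is easy to run (a sketch with our own helper names, not from the paper):

```python
import math

a = lambda t: (math.pi / 4) * (t + 0.5)  # common argument pi/4 (t + 1/2)

for i in range(101):
    t = i / 100
    x = a(t)
    # s(H)^2 = h1^2 + h2^2 + h3^2 with h1 = sin, h2 = cos, h3 = tan (of x)
    sH_sq = math.sin(x)**2 + math.cos(x)**2 + math.tan(x)**2
    sec_sq = 1 / math.cos(x)**2
    k1k2 = math.tan(x) * (1 / math.cos(x)) * (1 / math.sin(x))  # tan * sec * csc
    assert math.isclose(sH_sq, sec_sq, rel_tol=1e-9)  # s(H)^2 = sec^2
    assert math.isclose(sH_sq, k1k2, rel_tol=1e-9)    # = k1 * k2
```

The argument $\tfrac{\pi}{4}(t+\tfrac12)$ stays in $(\tfrac{\pi}{8},\tfrac{3\pi}{8})$ for $t\in[0,1]$, so all the functions involved are finite there.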
\renewcommand{\theexample}{\thesection.6} \begin{example} Consider the function \[ h(t)= 2\sqrt{\csc\tfrac{\pi}{4}\big( t+\tfrac{1}{2}\big) \mathrm{cosh}\tfrac{\pi}{4}\big( t+\tfrac{1}{2}\big)} \] on $[0,1]$, and the finite sequences \[ \mathcal K_1=\big\{2\mathrm{tanh}\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big), 2\mathrm{sech} \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),2 \cot\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\big\} \] and \[ \mathcal K_2=\big\{\sqrt2\sin\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\sqrt2\cos\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big), \sqrt2\mathrm{sinh}\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\sqrt2\mathrm{cosh}\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\big\} \] of functions in $L_{\infty}[0,1]$. Then using the relationships among hyperbolic functions and among trigonometric functions, one can see that \[ \mathbf{s}(\mathcal K_1)(t)=2\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\quad \mbox{ and } \quad \mathbf{s}(\mathcal K_2)(t)=2\mathrm{cosh} \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) \] on $[0,1]$. From this we also see that the set $\mathcal F_2=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases} h(t)= 2\sqrt{\csc\frac{\pi}{4}( t+\frac{1}{2}) \mathrm{cosh}\frac{\pi}{4}( t+\frac{1}{2})}, \\ k_1(t)=\mathbf{s}(\mathcal K_1)(t)=2\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) ,\\ k_2(t)=\mathbf{s}(\mathcal K_2)(t)=2\mathrm{cosh} \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\\ \mathbf{s}_1(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_1))(t), \\ \mathbf{s}_2(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_2))(t) \end{cases} \] is a solution set of the system \eqref{system}, and equation \eqref{eq:multi-rel-02-2nd} holds with the function $h$ and the sequences $\mathcal K_1$ and $\mathcal K_2$. \end{example} \setcounter{equation}{0} \section{Further results} \par In this section, we derive a more general relationship between the iterated GFFT and the GCP for functionals in $\mathcal S(L_2[0,T])$.
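As an aside, the hyperbolic and trigonometric identities used in the preceding example ($\tanh^2+\mathrm{sech}^2=1$, $1+\cot^2=\csc^2$, and $\cosh^2-\sinh^2=1$) can likewise be confirmed numerically; this is a sketch with our own helper names, not part of the paper:

```python
import math

a = lambda t: (math.pi / 4) * (t + 0.5)  # common argument pi/4 (t + 1/2)

for i in range(101):
    t = i / 100
    x = a(t)
    # s(K1)^2 for K1 = {2 tanh x, 2 sech x, 2 cot x} should equal (2 csc x)^2
    sK1_sq = (2 * math.tanh(x))**2 + (2 / math.cosh(x))**2 + (2 / math.tan(x))**2
    assert math.isclose(sK1_sq, (2 / math.sin(x))**2, rel_tol=1e-9)
    # s(K2)^2 for K2 = {sqrt2 sin x, sqrt2 cos x, sqrt2 sinh x, sqrt2 cosh x}
    sK2_sq = 2 * (math.sin(x)**2 + math.cos(x)**2 + math.sinh(x)**2 + math.cosh(x)**2)
    assert math.isclose(sK2_sq, (2 * math.cosh(x))**2, rel_tol=1e-9)
    # h^2 = 4 csc x cosh x should equal s(K1) * s(K2)
    assert math.isclose(4 * math.cosh(x) / math.sin(x),
                        math.sqrt(sK1_sq * sK2_sq), rel_tol=1e-9)
```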
To do this we also quote a result from \cite{Indag}. \renewcommand{\thetheorem}{\thesection.1} \begin{theorem} \label{thm:iter-gfft-more-1} Let $F$ and $\mathcal H=\{h_1,\ldots,h_n\}$ be as in Theorem \ref{thm:2018-step1}. Assume that $q_1,q_2,\ldots, q_n$ are nonzero real numbers with $\mathrm{sgn}(q_1)=\cdots=\mathrm{sgn}(q_n)$, where `$\mathrm{sgn}$' denotes the sign function. Then, for all $p\in[1,2]$, \[ \begin{aligned} &T_{q_n,h_n}^{(p)}\big(T_{q_{n-1},h_{n-1}}^{(p)}\big( \cdots\big(T_{q_2,h_2}^{(p)}\big(T_{q_1,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) (y)\\ &=T_{\alpha_n,\tau_n^{(n)}h_n}^{(p)}\Big(T_{\alpha_{n},\tau_n^{(n-1)}h_{n-1}}^{(p)} \Big(\cdots \big(T_{\alpha_n,\tau_n^{(2)}h_2}^{(p)} \big(T_{\alpha_n,\tau_n^{(1)}h_1}^{(p)}(F) \big)\big)\cdots\Big)\Big) (y) \end{aligned} \] for s-a.e. $y\in C_{0}[0,T]$, where $\alpha_n$ is given by \[ \alpha_n=\frac{1}{\frac{1}{q_1}+\frac{1}{q_2}+\cdots+\frac{1}{q_n}} \] and $\tau_n^{(j)}=\sqrt{{\alpha_n}/{q_j}}$ for each $j\in \{1,\ldots,n\}$. Moreover, it follows that \[ T_{q_n,h_n}^{(p)}\big(T_{q_{n-1},h_{n-1}}^{(p)}\big( \cdots\big(T_{q_2,h_2}^{(p)}\big(T_{q_1,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) (y) =T_{\alpha_n,\mathbf{s}(\tau\mathcal H)}^{(p)}(F)(y) \] for s-a.e. $y\in C_{0}[0,T]$, where $\mathbf{s}(\tau\mathcal H) \equiv \mathbf{s}(\tau_n^{(1)}h_1, \ldots, \tau_n^{(n)}h_n )$ is a function in $L_{\infty}[0,T]$ satisfying the relation \[ \mathbf{s}(\tau\mathcal H)^2(t) =(\tau_n^{(1)}h_1)^2(t)+ \cdots+ (\tau_n^{(n)}h_n)^2(t) \] for $m_L$-a.e. $t\in [0,T]$. \end{theorem} \par Next, by a careful examination we see that for all $F\in \mathcal S(L_2[0,T])$ and any real $\beta>0$, \begin{equation}\label{eq:2018-new-parameter-change} T_{\beta q,h}^{(p)} (F) \approx T_{q,h/\sqrt{\beta}}^{(p)}(F) . \end{equation} Using \eqref{eq:2018-new-parameter-change} and \eqref{eq:cp-fft-basic}, we have the following lemma.
\renewcommand{\thelemma}{\thesection.2} \begin{lemma} \label{thm:2018-last-pre} Let $k_1$, $k_2$, $F$, $G$, and $h$ be as in Theorem \ref{thm:gfft-gcp-compose}. Let $q$, $q_1$, and $q_2$ be nonzero real numbers with $\mathrm{sgn}(q)=\mathrm{sgn}(q_1)=\mathrm{sgn}(q_2)$. Then, for all $p\in [1,2]$, \[ \begin{aligned} & \big(T_{q_1,\sqrt{q_1/(2q)} \mathbf{s}(h,k_1)}^{(p)} (F)* T_{q_2,\sqrt{q_2/(2q)}\mathbf{s}(h,k_2)}^{(p)}(G) \big)_{-q}^{(k_1,k_2)}(y)\\ & =T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg) G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \] for s-a.e. $y\in C_{0}[0,T]$. \end{lemma} \par Finally, in view of Theorem \ref{thm:iter-gfft-more-1} and Lemma \ref{thm:2018-last-pre}, we obtain the following assertion. \renewcommand{\thetheorem}{\thesection.3} \begin{theorem} \label{thm:2018-last} Let $k_1$, $k_2$, $F$, $G$, and $h$ be as in Theorem \ref{thm:gfft-gcp-compose}. Let $\mathcal H_1=\{h_{1j}\}_{j=1}^n$ and $\mathcal H_2=\{h_{2l}\}_{l=1}^m$ be finite sequences of nonzero functions in $L_{\infty}[0,T]$. Given nonzero real numbers $q$, $q_1$, $q_{11}$, $\ldots$, $q_{1n}$, $q_2$, $q_{21}$, $\ldots$, $q_{2m}$ with \[ \begin{aligned} \mathrm{sgn}(q) &=\mathrm{sgn}(q_1)=\mathrm{sgn}(q_{11})=\cdots=\mathrm{sgn}(q_{1n})\\ &=\mathrm{sgn}(q_2)=\mathrm{sgn}(q_{21})=\cdots=\mathrm{sgn}(q_{2m}), \end{aligned} \] let \[ \alpha_{1n}=\frac{1}{\frac{1}{q_{11}}+\frac{1}{q_{12}}+\cdots+\frac{1}{q_{1n}}}, \] \[ \alpha_{2m}=\frac{1}{\frac{1}{q_{21}}+\frac{1}{q_{22}}+\cdots+\frac{1}{q_{2m}}}, \] \[ \beta_{1n}=\frac{1}{\frac{1}{q_{1}}+\frac{1}{q_{11}}+\frac{1}{q_{12}}+\cdots+\frac{1}{q_{1n}}}, \] and \[ \beta_{2m}=\frac{1}{\frac{1}{q_{2}}+\frac{1}{q_{21}}+\frac{1}{q_{22}}+\cdots+\frac{1}{q_{2m}}}. \] Furthermore, assume that \[ h^2 = \mathbf{s}(\tau_{1n}\mathcal H_1)\mathbf{s}(\tau_{2m}\mathcal H_2) \] for $m_L$-a.e.
on $[0,T]$, where $\mathbf{s}(\tau_{1n}\mathcal H_1)$ and $\mathbf{s}(\tau_{2m}\mathcal H_2)$ are functions in $L_{\infty}[0,T]$ satisfying the relation \[ \mathbf{s}(\tau_{1n}\mathcal H_1)^2 \equiv \mathbf{s}(\tau_{1n}^{(1)}h_{11}, \ldots, \tau_{1n}^{(n)}h_{1n} )^2 =(\tau_{1n}^{(1)}h_{11})^2 + \cdots+ (\tau_{1n}^{(n)}h_{1n})^2 \] and \[ \mathbf{s}(\tau_{2m}\mathcal H_2)^2 \equiv \mathbf{s}(\tau_{2m}^{(1)}h_{21}, \ldots, \tau_{2m}^{(m)}h_{2m} )^2 =(\tau_{2m}^{(1)}h_{21})^2 + \cdots+ (\tau_{2m}^{(m)}h_{2m})^2 , \] respectively, and where $\tau_{1n}^{(j)}=\sqrt{{\alpha_{1n}}/{q_{1j}}}$ for each $j\in \{1,\ldots,n\}$, and $\tau_{2m}^{(l)}=\sqrt{{\alpha_{2m}}/{q_{2l}}}$ for each $l\in \{1,\ldots,m\}$. For notational convenience, let \[ h_1'=\sqrt{q_{1}/(2q)}h,\quad h_{1j}'=\sqrt{\alpha_{1n}/(2q)}h_{1j}, \quad j=1,\ldots,n, \] and let \[ h_2' =\sqrt{q_{2}/(2q)}h,\quad h_{2l}'=\sqrt{\alpha_{2m}/(2q)}h_{2l}, \quad l=1,\ldots,m. \] Then, for all $p\in[1,2]$, \[ \begin{aligned} &\Big(T_{q_1,h_1'}^{(p)}\big(T_{q_{1n},h_{1n}'}^{(p)}\big( \cdots\big(T_{q_{11},h_{11}'}^{(p)} (F)\big)\cdots \big)\big) \\ & \quad\quad\quad *T_{q_2,h_2'}^{(p)}\big( T_{q_{2m}, h_{2m}'}^{(p)}\big( \cdots\big(T_{q_{21}, h_{21}' }^{(p)} (G)\big)\cdots \big)\big) \Big)_{-q}^{(\mathbf{s}(\tau_{1n}\mathcal H_1),\mathbf{s}(\tau_{2m}\mathcal H_2))} (y)\\ &=\Big(T_{q_1,\sqrt{q_{1}/(2q)} h}^{(p)} \big( T_{\alpha_{1n},\sqrt{\alpha_{1n}/(2q)}\mathbf{s}(\tau_{1n}\mathcal H_1)}^{(p)}(F)\big)\\ &\quad \quad \quad *T_{q_2,\sqrt{q_{2}/(2q)}h}^{(p)} \big(T_{\alpha_{2m},\sqrt{\alpha_{2m}/(2q)} \mathbf{s}(\tau_{2m}\mathcal H_2)}^{(p)}(G)\big) \Big)_{-q}^{(\mathbf{s}(\tau_{1n}\mathcal H_1),\mathbf{s}(\tau_{2m}\mathcal H_2))} (y)\\ &=\Big( T_{\beta_{1n},\sqrt{\beta_{1n}/(2q)}\mathbf{s}(h,\mathbf{s}(\tau_{1n}\mathcal H_1))}^{(p)}(F)\\ &\qquad \qquad \qquad\qquad\,\,\, *T_{\beta_{2m},\sqrt{\beta_{2m}/(2q)}\mathbf{s}(h,\mathbf{s}(\tau_{2m}\mathcal H_2))}^{(p)}(G) \Big)_{-q}^{(\mathbf{s}(\tau_{1n}\mathcal 
H_1),\mathbf{s}(\tau_{2m}\mathcal H_2))} (y)\\ &= T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg) G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \] for s-a.e. $y\in C_{0}[0,T]$. \end{theorem} \end{document}
\begin{document} \title{The $E_2$-term of the $K(n)$-local $E_n$-Adams spectral sequence} \author{Tobias Barthel} \address{Max Planck Institute for Mathematics, Bonn, Germany} \author{Drew Heard} \address{Max Planck Institute for Mathematics, Bonn, Germany} \date{\today} \begin{abstract} Let $E=E_n$ be Morava $E$-theory of height $n$. In~\cite{devhop04} Devinatz and Hopkins introduced the $K(n)$-local $E_n$-Adams spectral sequence and showed that, under certain conditions, the $E_2$-term of this spectral sequence can be identified with continuous group cohomology. We work with the category of $L$-complete $E^\vee_* E$-comodules, and show that in a number of cases the $E_2$-term of the above spectral sequence can be computed by a relative $\mathrm{Ext}$ group in this category. We give suitable conditions for when we can identify this $\mathrm{Ext}$ group with continuous group cohomology. \end{abstract} \maketitle \section*{Introduction} Let $E_n$ denote the $n$-th Morava $E$-theory (at a fixed prime $p$), the Landweber exact cohomology theory with coefficient ring \[ \pi_*(E_n) = \mathbb{W}(\mathbb{F}_{p^n})[\![u_1,\ldots,u_{n-1}]\!][u^{\pm 1}], \] where each $u_i$ is in degree $0$, and $u$ has degree $-2$. Here $\mathbb{W}(\mathbb{F}_{p^n})$ refers to the Witt vectors over the finite field $\mathbb{F}_{p^n}$ (an unramified extension of $\mathbb{Z}_p$ of degree $n$). Note that $E_0$ is a complete local regular Noetherian ring with maximal ideal $\mathfrak{m} = (p,u_1,\ldots,u_{n-1})$. Unless indicated otherwise, let us fix an integer $n \ge 1$ and write $E$ instead of $E_n$ throughout. The cohomology theory $E$ plays a very important role in the chromatic approach to stable homotopy theory, in particular in the understanding of the $K(n)$-local homotopy category (see, for example,~\cite{hs99}). The formal group law associated to $E_*$ is the universal deformation of the Honda formal group law $\Gamma_n$ of height $n$ over $\mathbb{F}_{p^n}$.
Let $\mathbb{G}_n = \Aut(\Gamma_n) \rtimes \Gal(\mathbb{F}_{p^n}/\mathbb{F}_p)$ denote the $n$-th (extended) Morava stabilizer group. Lubin-Tate theory implies that $\mathbb{G}_n$ acts on the ring $E_*$, and Brown representability implies that $\mathbb{G}_n$ acts on $E$ itself in the stable homotopy category. The Goerss--Hopkins--Miller theorem~\cite{rezkhm,gh04} implies that this action can be taken to be via $E_\infty$-ring maps. In general $\mathbb{G}_n$ is a profinite group, and it is not clear how to form the homotopy fixed points with respect to such groups (although progress has been made in this area; see~\cite{davis06,behdav,quicklt}). Nonetheless, in~\cite{devhop04} Devinatz and Hopkins defined $E_\infty$-ring spectra $E^{hG}$ for $G \subset \mathbb{G}_n$ a closed subgroup of the Morava stabilizer group, which behave like continuous homotopy fixed point spectra (and indeed if $G$ is finite they agree with the usual construction of homotopy fixed points). Remarkably they showed that there is an equivalence $E^{h\mathbb{G}_n} \simeq L_{K(n)}S^0$, a result expected since the work of Morava~\cite{mor85}. Davis~\cite{davis06}, Behrens--Davis~\cite{behdav} and Quick~\cite{quicklt} have given constructions of homotopy fixed point spectra with respect to the continuous action of $G$ on $E$, and these agree with the construction of Devinatz and Hopkins. Devinatz and Hopkins additionally showed that for any spectrum $Z$ there is a strongly convergent spectral sequence \[ E_2^{\ast,\ast}=H_c^*(G,E^\ast Z) \Rightarrow (E^{hG})^\ast Z \] which is a particular case of a spectral sequence known as the $K(n)$-local $E$-Adams spectral sequence~\cite[Appendix A]{devhop04}. Here, for a closed subgroup $G \subset \mathbb{G}_n$ the continuous cohomology of $G$ with coefficients in a topological $\mathbb{G}_n$-module $N$ is defined using the cochain complex $\operatorname{Hom}^c(G^\bullet,N)$ (see the discussion before the proof of~\Cref{thm:COR}).
Using homology instead of cohomology Devinatz and Hopkins identified conditions~\cite[Proposition 6.7]{devhop04} under which the $K(n)$-local $E$-Adams spectral sequence takes the form \[ E_2^{\ast,\ast}=H_c^*(\mathbb{G}_n,E_*Z) \Rightarrow \pi_*L_{K(n)}Z. \] It was remarked that this was probably not the most general result. In many cases the $E_2$-term of Adams-type spectral sequences can be calculated by $\mathrm{Ext}$ groups (for example~\cite[Chapter 2]{ravgreen}). Thus we ask the following two questions: \begin{enumerate}[(a)] \item Can the $E_2$-term of the $K(n)$-local $E$-Adams spectral sequence be calculated by a suitable $\mathrm{Ext}$ group? \item In what generality can we identify the $E_2$-term with continuous group cohomology? \end{enumerate} In this document we give partial answers to both these questions. Some work on the second problem has been done previously, and we provide a comparison between some known results and our results. In the $K(n)$-local setting the natural functor to consider for a spectrum $X$ is not $E_*X$ but rather $E^\vee_* X \coloneqq \pi_*L_{K(n)}(E \wedge X)$. The use of this completed version of $E$-homology becomes very important in understanding the $E_2$-term of this spectral sequence.\footnote{This also gives one reason why the case of continuous cohomology with coefficients in $E^*Z$ is easier than in $E^\vee_* Z$ for any spectrum $Z$; since $F(Z,\Sigma^k E)$ is already $K(n)$-local for any $k \in \mathbb{Z}$ (since $E$ is), there is no need for a `completed' version of $E$-cohomology.} This is not just an $E_*$-module, but rather an $L$-complete $E_*$-module, and based on work of Baker~\cite{bakerlcomplete} we work in the category of $L$-complete $E^\vee_* E$-comodules.
This category is not abelian, and so we use the methods of relative homological algebra to define a relative $\mathrm{Ext}$ functor for certain classes of objects in the category, which we denote by $\widehat{\Ext}_{E^\vee_* E}^{s,t}(-,-)$. The following is our answer to (a). \theoremstyle{plain} \newtheorem*{thm:ass}{\Cref{thm:ass}} \begin{thm:ass} Let $X$ and $Y$ be spectra, and suppose that $E^\vee_* X$ is pro-free and $E^\vee_* Y$ is either a finitely-generated $E_*$-module, pro-free, or has bounded $\mathfrak{m}$-torsion (i.e., is annihilated by some power of $\mathfrak{m}$). Then the $E_2$-term of the $K(n)$-local $E$-Adams spectral sequence with abutment $\pi_{t-s}F(X,L_{K(n)}Y)$ is \[ E_2^{s,t} = \widehat{\Ext}^{s,t}_{E^\vee_* E}(E^\vee_* X,E^\vee_* Y). \] \end{thm:ass} Our answer to question (b) is the following. \theoremstyle{plain} \newtheorem*{thm:COR}{\Cref{thm:COR}} \begin{thm:COR} Suppose that $X$ is a spectrum such that $E^\vee_* X$ is either a finitely-generated $E_*$-module, pro-free, or has bounded $\mathfrak{m}$-torsion. Then, for the $K(n)$-local $E$-Adams spectral sequence with abutment $\pi_*L_{K(n)}X$, there is an isomorphism \[ E_2^{\ast,\ast} = \widehat{\Ext}^{\ast,\ast}_{E^\vee_* E}(E_*,E^\vee_* X) \simeq H_c^\ast(\mathbb{G}_n,E^\vee_* X). \] \end{thm:COR} We then compare this to some of the known results in the literature. We have two applications of these results. Firstly, we can almost immediately extend a result of Goerss--Henn--Mahowald--Rezk, used in their construction of a resolution of the $K(2)$-local sphere at the prime 3~\cite{GHMR}, from finite subgroups of $\mathbb{G}_n$ to arbitrary closed subgroups. The second application appears to work at height $n=1$ only. Here we construct a spectral sequence with $E_2$-term $L_i\mathrm{Ext}^{s,t}_{E_*E}(E_*X,E_*Y)$, where $E_*X$ is a projective $E_*$-module, $E_*Y$ is a flat $E_*$-module, and $L_i$ denotes the $i$-th derived functor of completion on the category of $\mathbb{Z}_p$-modules.
We show that the abutment of this spectral sequence is $\widehat{\Ext}_{E^\vee_* E}^{s-i,t}(E^\vee_* X,E^\vee_* Y)$ and calculate this when $X = Y = S^0$ at the prime 2. \section{$L$-completion and $L$-complete comodules} \subsection{$L$-completion}\label{sec:lcompletion} It is now well understood (see, for example,~\cite{hs99}) that in the $K(n)$-local setting the functor $E^\vee_*(-) = \pi_*L_{K(n)}(E \wedge -)$, from spectra to $E_*$-modules, mentioned in the introduction is a more natural covariant analogue of $E^*(-)$ than ordinary $E_*$-homology, despite the fact that it is not a homology theory. It is equally well understood that this functor is naturally thought of as landing in the category $\widehat{\operatorname{Mod}}_{E_*}$ of $L$-complete $E_*$-modules, rather than the category of all $E_*$-modules. We review the basics of this category now; for more details see~\cite{hs99,barthel2013completed,hovey2004ss,rezk2013}. \begin{rem}\label{rem:bousfieldlocalisation} Since we always work with $E$-modules there is some ambiguity about the type of Bousfield localisation we are using. Recall that $L_{K(n)}$ denotes Bousfield localisation with respect to Morava $K$-theory $K(n)$ on the category of spectra. Let $L_{K(n)}^E$ denote Bousfield localisation on the category of $E$-modules. Suppose now that $M$ is an $E$-module. Then by~\cite[Lemma 4.3]{bakerjean} there is an equivalence $L_{K(n)}M \simeq L^{E}_{K(n) \wedge E}M$. But by~\cite[Proposition 2.2]{hovey08fil} the latter is just $L^E_{K(n)}M$, and so it does not matter whether we use $L_{K(n)}$ or $L_{K(n)}^E$. \end{rem} To keep the theory general, suppose that $R$ is a complete local Noetherian graded ring with a unique maximal homogeneous ideal $\mathfrak{m}$, generated by a regular sequence of $n$ homogeneous elements. Our assumptions imply that the (Krull) dimension of $R$ is $n$.
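For concreteness (recalling a standard description, see for example~\cite{devhop04,hs99}), the motivating example is $R = E_*$ itself:
\[
E_* \cong W(\mathbb{F}_{p^n})[[u_1,\dots,u_{n-1}]][u^{\pm 1}], \qquad \mathfrak{m} = (p,u_1,\dots,u_{n-1}),
\]
where $W(\mathbb{F}_{p^n})$ denotes the Witt vectors. Here $\mathfrak{m}$ is generated by the regular sequence $p,u_1,\dots,u_{n-1}$ of length $n$, so the hypotheses above are satisfied.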
Let $\operatorname{Mod}_R$ denote the category of graded $R$-modules, where the morphisms are the morphisms of $R$-modules that preserve the grading. Recall that given an $R$-module $M$, the completion of $M$ (at $\mathfrak{m}$) is \[ M^\wedge_\mathfrak{m} = \varprojlim_k M/\mathfrak{m}^kM. \] Here we must take the limit in the graded sense. This is functorial, but completion is neither right nor left exact; the idea is then to replace completion with its zeroth derived functor. \begin{definition} For $s \ge 0$ let $L_s(-):\operatorname{Mod}_R \to \operatorname{Mod}_R$ be the $s$-th left derived functor of the completion functor $(-)^\wedge_\mathfrak{m}$. \end{definition} Since completion is not right exact it is \emph{not} true in general that $L_0M \simeq M^\wedge_\mathfrak{m}$. In fact the natural map $M \to M^\wedge_\mathfrak{m}$ factors as the composite $M \xrightarrow{\eta_M} L_0M \xrightarrow{\epsilon_M} M^\wedge_\mathfrak{m}$. \begin{definition} We say that $M$ is $L$-complete if $\eta_M$ is an isomorphism of $R$-modules. \end{definition} The map $\epsilon_M$ is surjective with kernel \begin{equation}\label{eq:kernel} \varprojlim{}^1_k \operatorname{Tor}_1^{R}(R/\mathfrak{m}^k,M); \end{equation} in general these derived functors fit into an exact sequence~\cite[Theorem A.2]{hs99} \[ 0 \to \varprojlim{}^1_k \operatorname{Tor}^R_{s+1}(R/\mathfrak{m}^k,M) \to L_sM \to \varprojlim_k \operatorname{Tor}^R_s(R/\mathfrak{m}^k,M) \to 0, \] and they vanish if $s<0$ or $s>n$. Let $\widehat{\operatorname{Mod}}_R$ denote the subcategory of $\operatorname{Mod}_R$ consisting of those graded $R$-modules $M$ for which $\eta_M$ is an isomorphism. This category is a bicomplete full abelian subcategory of the category of graded $R$-modules, and is closed under extensions and inverse limits formed in $\operatorname{Mod}_R$.
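As a quick illustration of the failure of $\eta_M$ to be injective (our example, not taken from the cited sources): let $R = \mathbb{Z}_p$ with $\mathfrak{m} = (p)$ and $M = \mathbb{Q}_p$. Then $M/\mathfrak{m}^kM = 0$ for all $k$, and since $M$ is flat the higher $\operatorname{Tor}$ groups $\operatorname{Tor}^R_s(R/\mathfrak{m}^k,M)$ vanish as well, so the exact sequence above gives
\[
L_s\mathbb{Q}_p = 0 \quad \text{for all } s \ge 0.
\]
In particular $\eta_M : \mathbb{Q}_p \to L_0\mathbb{Q}_p = 0$ is far from an isomorphism, even though $L_0$ is the best $L$-complete approximation to $M$.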
One salient feature of this category is that $\mathrm{Ext}^s_{\widehat{\operatorname{Mod}}_R}(M,N) \simeq \mathrm{Ext}^s_{R}(M,N)$ for all $s \ge 0$ whenever $M$ and $N$ are $L$-complete $R$-modules~\cite[Theorem 1.11]{hovey2004ss}. The tensor product of $L$-complete modules need not be $L$-complete; we write $M \boxtimes_R N \coloneqq L_0(M \otimes_R N)$. By~\cite[Proposition A.6]{hs99} this gives $\widehat{\operatorname{Mod}}_R$ the structure of a symmetric monoidal category. \begin{rem}\label{rem:tamemodules} We will use the following properties of $L$-completion repeatedly: \begin{enumerate}[(i)] \item If $M$ is a flat $R$-module, then $L_0M = M^\wedge_\mathfrak{m}$ is flat as an $R$-module and thus $L_sM = 0$ for $s>0$ (see~\cite[Corollary 1.3]{hovey2004ss} or~\cite[Proposition A.15]{barthel2013completed}); \item If $M$ is a finitely-generated $R$-module, then $L_0M = M$ and $L_sM = 0$ for $s>0$~\cite[Proposition A.4, Theorem A.6]{hs99}; and, \item If $M$ is a bounded $\mathfrak{m}$-torsion module, then $L_0M = M$ and $L_sM = 0$ for $s>0$. \end{enumerate} The last item follows from~\cite[Theorem A.6]{hs99} and the observation that for large enough $k$ there are equivalences (by~\cite[Proposition A.4]{hs99}) \begin{equation*} \begin{split} L_0M & \simeq L_0(M \otimes_{R} R/\mathfrak{m}^k) \\ & \simeq L_0M \otimes_R R/\mathfrak{m}^k \\ & \simeq M \otimes_R R/\mathfrak{m}^k \\ & \simeq M, \end{split} \end{equation*} so that $M$ is $L$-complete. Modules $M$ with $L_sM = 0$ for $s>0$ are known as \emph{tame}. For example, $L$-complete modules are always tame. \end{rem} \begin{example}\label{examp:zp} Let $R = \mathbb{Z}_p$ and $\mathfrak{m} = (p)$. Since $\mathbb{Z}_p$ has Krull dimension 1, the only potentially non-zero derived functors are $L_0$ and $L_1$. By~\cite[Proposition 5.2]{barthel2013completed}, $L$-completion with respect to $\mathbb{Z}_p$ naturally lands in the category of $\mathbb{Z}_p$-modules.
It is immediate from the remark above that $L_0\mathbb{Z}_p = \mathbb{Z}_p$ and $L_i\mathbb{Z}_p = 0$ for $i>0$. By~\cite[Theorem A.2]{hs99}, for any $\mathbb{Z}_p$-module $M$ we have \[ L_0M = \mathrm{Ext}^1_{\mathbb{Z}_p}(\mathbb{Z}/p^\infty,M)\quad \text{ and } \quad L_1M = \operatorname{Hom}_{\mathbb{Z}_p}(\mathbb{Z}/p^\infty,M) \simeq \varprojlim_r \operatorname{Hom}_{\mathbb{Z}_p}(\mathbb{Z}/p^r,M). \] If $M$ is any injective $\mathbb{Z}_p$-module, for example $M = \mathbb{Q}_p/\mathbb{Z}_p$, then it follows from this description that $L_0M = 0$. On the other hand, the inverse system defined above gives $L_1(\mathbb{Q}_p/\mathbb{Z}_p) = \mathbb{Z}_p$. The discussion above also shows that if $M$ is any bounded $p$-torsion $\mathbb{Z}_p$-module then it is $L$-complete and hence tame. \end{example} Suppose now that $M$ is a flat $R$-module so that, by Lazard's theorem, we can write it canonically as a filtered colimit of finitely generated free modules, $M = \varinjlim_j F_j$. Since $\operatorname{Hom}_R(F_j,L_0N)$ is $L$-complete for any $j$, the same is true for $\varprojlim_j \operatorname{Hom}_R(F_j,L_0N) = \operatorname{Hom}_R(M,L_0N)$, and hence we get a natural factorization \[\xymatrix{& L_0\operatorname{Hom}_R(M,N) \ar@{-->}[d] \\ \operatorname{Hom}_R(M,N) \ar[ru] \ar[r] & \operatorname{Hom}_R(M,L_0N)}\] for arbitrary $N$. \begin{prop} If $M$ is projective and $N$ is flat, then the natural map \[\xymatrix{L_0\operatorname{Hom}_R(M,N) \ar[r] & \operatorname{Hom}_R(M,L_0N)}\] is an isomorphism of $L$-complete $R$-modules. \end{prop} \begin{proof} It is enough to show the claim for $M = \bigoplus_I R$ free.
Since $R$ is Noetherian, products of flat modules are flat, so we get \[L_0\operatorname{Hom}_R(M,N) = L_0\prod_I N = \varprojlim_k ((\prod_I N) \otimes R/\mathfrak{m}^k)\] and similarly \[\operatorname{Hom}_R(M,L_0N) = \prod_I \varprojlim_k (N \otimes R/\mathfrak{m}^k) = \varprojlim_k \prod_I (N \otimes R/\mathfrak{m}^k).\] Therefore, it suffices to show that the natural map \[ \epsilon: (\prod_I N) \otimes R/\mathfrak{m}^k \to \prod_I (N \otimes R/\mathfrak{m}^k)\] is an isomorphism for all $k$. Since $R/\mathfrak{m}^k$ is finitely presented, this is true by~\cite[Proposition 4.44]{MR1653294}, and the proposition follows. \end{proof} \begin{cor}\label{cor:homlcompletion} For $M$ projective and $N$ flat, there are isomorphisms \[L_s\operatorname{Hom}_R(M,N) = \begin{cases} \operatorname{Hom}_{\widehat{\operatorname{Mod}}_R}(L_0M,L_0N) & \text{if } s=0 \\ 0 & \text{otherwise.} \end{cases}\] \end{cor} \begin{proof} The first statement is a direct consequence of the previous proposition. For the case $s>0$ note that $\operatorname{Hom}_R(M,N)$ is flat, hence tame. \end{proof} \begin{rem} Using work of Valenzuela~\cite{valenzuela} it is possible to construct a spectral sequence \[ E_2^{p,q} = L_p \mathrm{Ext}^q_R(M,N) \Rightarrow \mathrm{Ext}_R^{p+q}(M,L_{R/\mathfrak{m}}N), \] where $M$ and $N$ are arbitrary $R$-modules and $L_{R/\mathfrak{m}}$ is the total left derived functor of $L_0$. Specialising to $M$ projective and $N$ tame recovers the above corollary. \end{rem} \subsection{Completed $E$-homology} We now specialise to the case where $R = E_*$. By~\cite[Proposition 8.4]{hs99} the functor $E^\vee_*(-)$ always takes values in $\widehat{\operatorname{Mod}}_{E_*}$. This is in fact a special case of the following theorem. \begin{prop}\cite[Corollary 3.14]{barthel2013completed}\label{prop:lcompletion} An $E$-module $M$ is $K(n)$-local if and only if $\pi_*M$ is an $L$-complete $E_*$-module.
\end{prop} \begin{rem} The case where $M = E \wedge X$, for $X$ an arbitrary spectrum, proved in~\cite{hs99}, uses a different method. In particular there is a tower of generalised Moore spectra $M_I$ such that $L_{K(n)}X \simeq \operatorname{holim}_I L_nX \wedge M_I$~\cite[Proposition 7.10]{hs99}. This gives rise to a Milnor sequence \begin{equation}\label{eq:milnor} 0 \to \varprojlim_I{}^1 E_{\ast +1}(X \wedge M_I) \to E^\vee_* X \to \varprojlim_I E_\ast (X \wedge M_I) \to 0, \end{equation} which by~\cite[Theorem A.6]{hs99} implies $E^\vee_* X$ is $L$-complete. \end{rem} The projective objects in $\widehat{\operatorname{Mod}}_{E_*}$ will be important for us. These are characterised in~\cite[Theorem A.9]{hs99} and~\cite[Proposition A.15]{barthel2013completed}. \begin{definition} An $L$-complete $E_*$-module is pro-free if it is isomorphic to the completion (or, equivalently, $L$-completion) of a free $E_*$-module. Equivalently, these are the projective objects in $\widehat{\operatorname{Mod}}_{E_*}$. \end{definition} \begin{prop}\label{prop:completemodules} If $E^\vee_* X$ is either finitely-generated as an $E_*$-module, pro-free, or has bounded $\mathfrak{m}$-torsion, then $E^\vee_* X$ is complete in the $\mathfrak{m}$-adic topology. \end{prop} \begin{proof} The case where $E^\vee_* X$ is finitely-generated follows from the fact that $E_*$ is complete and Noetherian. Since $E^\vee_* X$ is always $L$-complete and $L_0$-completion is idempotent, when $E^\vee_* X$ is pro-free (and hence flat) $L_0(E^\vee_* X) \simeq E^\vee_* X = (E^\vee_* X)^{\wedge}_\mathfrak{m}$, so that $E^\vee_* X$ is complete. The case where $E^\vee_* X$ has bounded $\mathfrak{m}$-torsion is clear. \end{proof} \begin{rem}\label{rem:ktheory} The condition that $E^\vee_* X$ is pro-free is not overly restrictive. Let $K$ denote the 2-periodic version of Morava $K$-theory with coefficient ring $K_* = E_*/\mathfrak{m} = \mathbb{F}_{p^n}[u^{\pm 1} ]$. 
If $K_*X$ is concentrated in even degrees, then $E^\vee_* X$ is pro-free~\cite[Proposition 8.4]{hs99}. For example, this implies that $E^\vee_* E^{hF}$ is pro-free for any closed subgroup $F \subset \mathbb{G}_n$. By~\cite[Theorem 8.6]{hs99}, $E^\vee_* X$ is finitely generated if and only if $X$ is $K(n)$-locally dualisable. \end{rem} We will need the following version of the universal coefficient theorem (for $Y=S$ this is~\cite[Corollary 4.2]{hovey2004ss}). \begin{prop}\label{lem:UCT} Let $X$ and $Y$ be spectra. If $E^\vee_* X$ is pro-free, then \[ \operatorname{Hom}_{E_*}(E^\vee_{\ast} X,E^\vee_{\ast} Y) \simeq \pi_*F(X,L_{K(n)}(E \wedge Y)). \] \end{prop} \begin{proof} Let $M,N$ be $K(n)$-local $E$-module spectra. Note that $\pi_*M$ and $\pi_*N$ are always $L$-complete by~\Cref{prop:lcompletion}. Under these conditions Hovey~\cite[Theorem 4.1]{hovey2004ss} has constructed a natural, strongly and conditionally convergent, spectral sequence of $E_*$-modules\footnote{Note that we have regraded the spectral sequence in~\cite{hovey2004ss} to reflect the fact that we use homology rather than cohomology.} \[ E_2^{s,t} = \mathrm{Ext}^{s,t}_{\widehat{\operatorname{Mod}}_{E_*}}(\pi_{*}M,\pi_{*}N) \simeq \mathrm{Ext}_{E_*}^{s,t}(\pi_{*}M,\pi_{*}N) \Rightarrow \pi_{t-s}F_E(M,N). \] Set $M = L_{K(n)}(E \wedge X)$ and $N = L_{K(n)}(E \wedge Y)$. Note then that \[ F_E(L_{K(n)}(E \wedge X),L_{K(n)}(E \wedge Y)) \simeq F_E(E \wedge X,L_{K(n)}(E \wedge Y)) \simeq F(X,L_{K(n)}(E \wedge Y)), \] where the second equivalence is~\cite[Corollary III.6.7]{EKMM}, giving a spectral sequence \[ E_2^{s,t} = \mathrm{Ext}^{s,t}_{\widehat{\operatorname{Mod}}_{E_*}}(E^\vee_{\ast} X,E^\vee_{\ast} Y) \simeq \mathrm{Ext}_{E_*}^{s,t}(E^\vee_{\ast} X,E^\vee_{\ast} Y) \Rightarrow \pi_{t-s}F(X,L_{K(n)}(E \wedge Y)). \] Since $E^\vee_* X$ is pro-free it is projective in $\widehat{\operatorname{Mod}}_{E_*}$, and so the spectral sequence collapses, giving the desired isomorphism.
\end{proof} \begin{rem} The map above can be described in the following way: given \[ f:X \to L_{K(n)}(E \wedge Y), \] the corresponding homomorphism takes \[ g:S \to L_{K(n)}(E \wedge X) \] to the element \[ S \xrightarrow{g} L_{K(n)}(E \wedge X) \xrightarrow{1 \wedge f} L_{K(n)}(E \wedge E \wedge Y) \xrightarrow{\mu \wedge 1} L_{K(n)} (E\wedge Y). \] \end{rem} \subsection{$L$-complete Hopf algebroids} Since $E^\vee_* X$ always lands in the category of $L$-complete $E_*$-modules, one is led to wonder whether $E^\vee_* X$ is a comodule over a suitable $L$-complete Hopf algebroid. The category of $L$-complete Hopf algebroids has previously been studied by Baker~\cite{bakerlcomplete}, and we now briefly review this work. Suppose that $R$ is as in~\Cref{sec:lcompletion} and, additionally, that $R$ is an algebra over some local subring $(k_0,\mathfrak{m}_0)$ of $(R,\mathfrak{m})$ such that $\mathfrak{m}_0 = k_0 \cap \mathfrak{m}$. We say $A \in \widehat{\operatorname{Mod}}_{k_0}$ is a ring object if it has an associative product $\phi:A \otimes_{k_0} A \to A$. An $R$-unit for $\phi$ is a $k_0$-algebra homomorphism $\eta:R \to A$. A ring object $A$ is $R$-biunital if it has two units $\eta_L,\eta_R:R \to A$ which extend to give a morphism $\eta_L \otimes \eta_R: R \otimes_{k_0} R \to A$. Such an object is called $L$-complete if it is $L$-complete as both a left and a right $R$-module. \begin{definition}\cite[Definition 2.3]{bakerlcomplete} Suppose that $\Gamma$ is an $L$-complete commutative $R$-biunital ring object with left and right units $\eta_L,\eta_R:R \to \Gamma$, along with the following maps: \begin{align*} \Delta&: \Gamma \to \Gamma \boxtimes_R \Gamma \text{ (composition) } \\ \epsilon&: \Gamma \to R \text{ (identity) } \\ c&: \Gamma \to \Gamma \text{ (inverse) } \end{align*} satisfying the usual identities (as in~\cite[Appendix A]{ravgreen}) for a Hopf algebroid.
Then the pair $(R,\Gamma)$ is an \emph{$L$-complete Hopf algebroid} if $\Gamma$ is pro-free as a left $R$-module, and the ideal $\mathfrak{m}$ is invariant, i.e., $\mathfrak{m}\Gamma = \Gamma \mathfrak{m}$. \end{definition} \begin{lemma}~\cite[Proposition 5.3]{bakerlcomplete} The pair $(E_*,E^\vee_* E)$ is an $L$-complete Hopf algebroid. \end{lemma} \begin{definition}~\cite[Definition 2.4]{bakerlcomplete} Let $(R,\Gamma)$ be an $L$-complete Hopf algebroid. A left $(R,\Gamma)$-comodule $M$ is an $L$-complete $R$-module $M$ together with a left $R$-linear map $\psi:M \to \Gamma \boxtimes_R M$ which is counitary and coassociative. \end{definition} We will usually refer to a left $(R,\Gamma)$-comodule as an $L$-complete $\Gamma$-comodule, and we write $\widehat{\Comod}_\Gamma$ for the category of such comodules. \begin{rem} In all cases we will consider, $E^\vee_* X$ will be a complete $E_*$-module, and so we could work in the category of complete $E^\vee_* E$-comodules, as studied previously by Devinatz~\cite{mordevinatz}. However, whilst the category of $L$-complete $E_*$-modules is abelian, the same is \emph{not} true for the category of complete $E_*$-modules, so we prefer to work with $L$-complete $E^\vee_* E$-comodules. \end{rem} Given an $L$-complete $R$-module $N$, let $\Gamma \boxtimes_R N$ be the comodule with structure map $\psi = \Delta \boxtimes_R \mathrm{id}_N$. This is called an \emph{extended $L$-complete $\Gamma$-comodule}. The following is the standard adjunction between extended comodules and ordinary modules. \begin{lemma}\label{lem:adjoint} Let $N$ be an $L$-complete $R$-module and let $M$ be an $L$-complete $\Gamma$-comodule. Then there is an isomorphism \[ \operatorname{Hom}_{\widehat{\operatorname{Mod}}_R}(M,N) = \operatorname{Hom}_{\widehat{\Comod}_\Gamma}(M,\Gamma \boxtimes_R N).
\] \end{lemma} Suppose that $F$ is a ring spectrum (in the stable homotopy category) such that $F_*F$ is a flat $F_*$-module. In this case the pair $(F_*,F_*F)$ is an (ordinary) Hopf algebroid. To show that $F_*(X)$ is an $F_*F$-comodule for any spectrum $X$ requires knowing that $F_*(F \wedge X) \simeq F_*F \otimes_{F_*} F_*X$. The same is true here; to show that $E^\vee_* X$ is an $L$-complete $E^\vee_* E$-comodule we need to show that $E^\vee_* (E \wedge X) \simeq E^\vee_* E \boxtimes_{E_*} E^\vee_* X$. We do not know whether this holds in general; our next goal will be to give the examples of $L$-complete $E^\vee_* E$-comodules that we need. We start with a preliminary lemma. \begin{lemma}\label{lem:tameness} Let $M$ and $N$ be $E_*$-modules such that $M$ is flat and $N$ is either a finitely-generated $E_*$-module, pro-free, or has bounded $\mathfrak{m}$-torsion. Then $M \otimes_{E_*} N$ is tame. \end{lemma} \begin{proof} First assume $N$ is finitely-generated. Since $E_*$ is Noetherian there is a short exact sequence \[ 0 \to K \to F \to N \to 0 \] where $F = \oplus_I E_*$ is free and $K$ and $F$ are finitely-generated. Tensoring with the flat module $M$ gives another short exact sequence, and by~\cite[Theorem A.2]{hs99} there is a long exact sequence \begin{equation}\label{eq:les} \cdots \to L_{k+1}(M \otimes_{E_*} N) \to L_k(M \otimes_{E_*}K) \to L_k(M \otimes_{E_*} F) \to L_k(M \otimes_{E_*} N) \to \cdots .\end{equation} The functors $L_k$ are additive for all $k \ge 0$, and since $M$ is flat we see that $L_0(M \otimes_{E_*} F) = \oplus_I M^\wedge_\mathfrak{m}$ and $L_k(M \otimes_{E_*}F) = 0$ for $k>0$. It follows that $L_{k+1}(M \otimes_{E_*}N) \simeq L_k(M \otimes_{E_*} K)$ for $k \ge 1$.
Since $K,F$ and $N$ are all finitely-generated $E_*$-modules, we use~\cite[Theorem A.4]{hs99} to see that the end of the long exact sequence~\eqref{eq:les} takes the form \[ 0 \to L_1(M \otimes_{E_*}N) \to L_0(M) \otimes_{E_*} K \to L_0(M) \otimes_{E_*} F \to L_0(M) \otimes_{E_*} N \to 0. \] Since $M$ is flat, $L_0(M)$ is pro-free, and hence flat~\cite[Proposition A.15]{barthel2013completed}, so $L_0(M) \otimes_{E_*} K \to L_0(M) \otimes_{E_*} F$ is injective, forcing $L_1(M \otimes_{E_*}N) = 0$. Since $N$ was an arbitrary finitely-generated $E_*$-module and $K$ is finitely generated, we see that $L_1(M \otimes_{E_*} K) = 0$ as well. It follows that $L_2(M \otimes_{E_*} N) \simeq L_1(M \otimes_{E_*} K) = 0$, and arguing inductively we see that $L_k(M \otimes_{E_*} N) = 0$ for $k>0$, so that $M \otimes_{E_*} N$ is tame. Now assume that $N$ is pro-free, and hence flat. It follows that $M \otimes_{E_*} N$ is also flat, and hence tame. For the final case, where $N$ has bounded $\mathfrak{m}$-torsion, note that $M \otimes_{E_*} N$ also has bounded $\mathfrak{m}$-torsion, and so is tame (see~\Cref{rem:tamemodules}). \qedhere \end{proof} We now identify conditions on a spectrum $X$ so that $E^\vee_* X$ is an $L$-complete $E^\vee_* E$-comodule. \begin{prop}\label{lem:comodules} Let $X$ be a spectrum. If $E^\vee_* E \otimes_{E_*} E^\vee_* X$ is tame, then \begin{equation}\label{eq:kunneth} E^\vee_*(E \wedge X) \simeq E^\vee_* E \boxtimes_{E_*} E^\vee_* X \end{equation} and $E^\vee_* X$ is an $L$-complete $E^\vee_* E$-comodule. In particular this occurs when $E^\vee_* X$ is either a finitely-generated $E_*$-module, pro-free, or has bounded $\mathfrak{m}$-torsion. \end{prop} \begin{proof} There is a spectral sequence~\cite[Theorem IV.4.1]{EKMM} \begin{equation}\label{eq:ekmm} E^2_{s,t} = \operatorname{Tor}^{E_*}_{s,t}(E^\vee_* E,E^\vee_* X) \Rightarrow \pi_{s+t}(L_{K(n)}(E \wedge E)\wedge_E L_{K(n)}(E \wedge X)).
\end{equation} For any $E$-module $M$ we also have the spectral sequence of $E_*$-modules~\cite[Theorem 2.3]{hovey08fil} \[ E^2_{s,t} = (L_s\pi_*M)_t \Rightarrow \pi_{s+t} L_{K(n)}M. \] In particular there is a spectral sequence starting from the abutment of~\eqref{eq:ekmm} that has the form \[ (L_i\pi_{\ast}(L_{K(n)}(E \wedge E)\wedge_E L_{K(n)}(E \wedge X)))_{s+t} \Rightarrow \pi_{i+s+t}L_{K(n)}(L_{K(n)}(E \wedge E)\wedge_E L_{K(n)}(E \wedge X)). \] By~\Cref{rem:bousfieldlocalisation} we deduce that there is an equivalence \[ L_{K(n)}(L_{K(n)}(E \wedge E)\wedge_E L_{K(n)}(E \wedge X)) \simeq L_{K(n)}(E \wedge E \wedge X), \] and so the latter spectral sequence abuts to $E^\vee_* (E \wedge X)$. Since $E^\vee_* E$ is a flat $E_*$-module the first spectral sequence always collapses, and the second spectral sequence becomes \begin{equation}\label{eq:ss} (L_i(E^\vee_* E \otimes_{E_*} E^\vee_* X))_{s+t} \Rightarrow E^\vee_{i+s+t} (E \wedge X). \end{equation} Thus, if $E^\vee_* E \otimes_{E_*} E^\vee_* X$ is tame, this gives an isomorphism \[ E^\vee_*(E \wedge X) \simeq E^\vee_* E \boxtimes_{E_*} E^\vee_* X, \] and so $E^\vee_* X$ is an $L$-complete $E^\vee_* E$-comodule. Since $E^\vee_* E$ is pro-free it is flat, and~\Cref{lem:tameness} applies to show that $E^\vee_* E \otimes_{E_*} E^\vee_* X$ is tame in the given cases. \end{proof} \begin{rem} This raises the question: what is the most general class of $L$-complete comodules $M$ such that $E^\vee_* E \otimes_{E_*} M$ is tame? In light of Baker's example~\cite[Appendix B]{bakerlcomplete} of an $L$-complete (and hence tame) module $N$ such that $L_1(\bigoplus_{i=0}^{\infty}N) \ne 0$, this seems to be a subtle problem. In particular, we note that this example implies that the collection of tame modules itself need not satisfy the above condition. \end{rem} The following corollary shows that the equivalence of~\eqref{eq:kunneth} can be iterated.
\begin{cor}\label{lem:iteratedkunneth} Let $Y$ be a spectrum such that $E^\vee_* Y$ is either a finitely-generated $E_*$-module, pro-free or has bounded $\mathfrak{m}$-torsion. Then for all $s \ge 0$ there is an isomorphism \[ E^\vee_* (E^{\wedge s} \wedge Y ) \simeq (E^\vee_* E)^{\boxtimes s} \boxtimes_{E_*} E^\vee_* Y. \] \end{cor} \begin{proof} We will prove this by induction on $s$, the case $s=0$ being trivial. Assume now that $E^\vee_* (E^{\wedge (s-1)} \wedge Y) \simeq (E^\vee_* E)^{\boxtimes (s-1)} \boxtimes_{E_*} E^\vee_* Y $; we will show that $E^\vee_* E \otimes_{E_*} ((E^\vee_* E)^{\boxtimes (s-1)} \boxtimes_{E_*} E^\vee_* Y)$ is tame. We claim that this is true in the three cases we consider. \begin{enumerate} \item If $E^\vee_* Y$ is flat, then so is $(E^\vee_* E)^{\boxtimes(s-1)}\boxtimes_{E_*}E^\vee_* Y$, and we can apply~\Cref{lem:tameness} to see that $E^\vee_* E \otimes_{E_*} ((E^\vee_* E)^{\boxtimes (s-1)} \boxtimes_{E_*} E^\vee_* Y)$ is tame. \item If $E^\vee_* Y$ is finitely-generated then $(E^\vee_* E)^{\boxtimes(s-1)}\boxtimes_{E_*}E^\vee_* Y \simeq (E^\vee_* E)^{\boxtimes(s-1)}\otimes_{E_*}E^\vee_* Y$~\cite[Theorem A.4]{hs99}. Since $E^\vee_* E \otimes_{E_*} (E^\vee_* E)^{\boxtimes(s-1)}$ is a flat $E_*$-module, once again we can apply~\Cref{lem:tameness} to see that $E^\vee_* E \otimes_{E_*} ((E^\vee_* E)^{\boxtimes (s-1)} \boxtimes_{E_*} E^\vee_* Y)$ is tame. \item If $E^\vee_* Y$ has bounded $\mathfrak{m}$-torsion, then the same is true for $E^\vee_* E \otimes_{E_*} ((E^\vee_* E)^{\boxtimes (s-1)} \boxtimes_{E_*} E^\vee_* Y)$, and it follows that it is tame, as required. \end{enumerate} Therefore, \Cref{lem:comodules} applied to $X =E^{\wedge (s-1)} \wedge Y$ implies that \[ E^\vee_*(E^{\wedge s} \wedge Y) \simeq E^\vee_* E \boxtimes_{E_*} E^\vee_*(E^{\wedge (s-1)} \wedge Y) \simeq (E^\vee_* E)^{\boxtimes s} \boxtimes_{E_*} E^\vee_* Y, \] where the last isomorphism uses the inductive hypothesis once more. 
\end{proof} \section{Relative homological algebra} \subsection{Motivation} Recall~\cite[Appendix A]{ravgreen} that the category of comodules over a Hopf algebroid $(A,\Gamma)$ is abelian whenever $\Gamma$ is flat over $A$, and that if $I$ is an injective $A$-module then $\Gamma \otimes_A I$ is an injective $\Gamma$-comodule. This implies that the category of $\Gamma$-comodules has enough injectives. Given $\Gamma$-comodules $M$ and $N$ we can define $\mathrm{Ext}^i_\Gamma(M,N)$ in the usual way as the $i$-th derived functor of $\operatorname{Hom}_\Gamma(M,N)$, functorial in $N$. However, the category of $L$-complete $\Gamma$-comodules need not be abelian. In this case, in order to define $L$-complete $\mathrm{Ext}$-groups, we need to use relative homological algebra, for which the following is meant to provide some motivation. The following two lemmas show that we can form a resolution by~\emph{relative injective} objects, instead of absolute injectives. \begin{lemma}\label{lem:relres} Let $(A,\Gamma)$ be a Hopf algebroid (over a commutative ring $K$) such that $\Gamma$ is a flat $A$-module, and let \[ 0 \to N \to R^0 \to R^1 \to \cdots \] be a sequence of left $\Gamma$-comodules which is exact (over $K$) and such that for each $i$, $\mathrm{Ext}^n_\Gamma(M,R^i) = 0$ for all $n>0$. Then $\mathrm{Ext}_\Gamma(M,N)$ is the cohomology of the complex \[ \mathrm{Ext}_\Gamma^0(M,R^0) \to \mathrm{Ext}_\Gamma^0(M,R^1) \to \cdots. \] \end{lemma} \begin{proof} See~\cite[Lemma 1.1]{mrw77} or~\cite[Lemma A1.2.4]{ravgreen}.
\end{proof} \begin{definition} A $\Gamma$-comodule $S$ is a relative injective $\Gamma$-comodule if it is a direct summand of an extended comodule, i.e., one of the form $\Gamma \otimes_A N$. \end{definition} \begin{lemma}\label{lem:proj} Let $S$ be a relatively injective comodule. If $M$ is a projective $A$-module, then $\mathrm{Ext}_\Gamma^i(M,S) = 0$ for $i>0$. Hence if $I^*$ is a resolution of $N$ by relatively injective comodules then \begin{equation}\label{eq:extequiv} \mathrm{Ext}^n_\Gamma(M,N) = H^n(\operatorname{Hom}_\Gamma(M,I^*)) \end{equation} for all $n \ge 0$. \end{lemma} \begin{proof} The second statement follows from the first and~\Cref{lem:relres}. For the first statement proceed as in~\cite[A1.2.8(b)]{ravgreen}. \end{proof} In the case of $L$-complete $\Gamma$-comodules, we will take the analogue of~\Cref{eq:extequiv} as a definition of $\widehat{\Ext}_{\Gamma}(-,-)$ (see~\Cref{def:lcompleteext}). \begin{rem} The reader may wonder about projective objects. In general, comodules over a Hopf algebra do not have enough projectives. For example, when $(A,\Gamma) = (\mathbb{F}_p,\mathcal{A})$, where $\mathcal{A}$ is the dual of the Steenrod algebra, it is believed that there are no nonzero projective objects~\cite{stablepalmeri}. \end{rem} \subsection{Homological algebra for $L$-complete comodules} The category $\widehat{\Comod}_\Gamma$ of $L$-complete $\Gamma$-comodules is not abelian; it is an additive category with cokernels. The absence of kernels is due to the failure of the functor $\Gamma \boxtimes_R (-)$ to be exact.
If $\theta: M \to N$ is a morphism of $L$-complete comodules, then there is a commutative diagram~\cite{bakerlcomplete} \[ \begin{tikzcd} 0 \arrow{r} & \ker \theta \arrow[dashed]{d} \arrow{r} & M \arrow{r}{\theta} \arrow{d}{\psi_M} & N \arrow{d}{\psi_N} \\ & \Gamma \boxtimes_R \ker \theta \arrow{r} & \Gamma \boxtimes_R M \arrow{r}{\text{id} \boxtimes_R \theta} & \Gamma \boxtimes_R N, \end{tikzcd} \] but the dashed arrow need not exist or be unique. Since $\widehat{\Comod}_\Gamma$ is not abelian we need to use the methods of relative homological algebra to define a suitable Ext functor, which we briefly review now. For a more thorough exposition see~\cite{eilenbergmoore} (although in general one needs to dualise what they say, since they mainly work with relative projective objects). Our work is in fact similar to that of Miller and Ravenel~\cite{mrw77}. \begin{definition}\label{def:injclass} An injective class $\mathcal{I}$ in a category $\mathcal{C}$ is a pair $(\mathcal{D},\mathcal{S})$ where $\mathcal{D}$ is a class of objects and $\mathcal{S}$ is a class of morphisms such that: \begin{enumerate} \item $I$ is in $\mathcal{D}$ if and only if for each $f:A \to B$ in $\mathcal{S}$ \[ f^\ast:\operatorname{Hom}_{\mathcal{C}}(B,I) \to \operatorname{Hom}_{\mathcal{C}}(A,I) \] is an epimorphism. We call such objects relative injectives. \item A morphism $f:A \to B$ is in $\mathcal{S}$ if and only if for each $I \in \mathcal{D}$ \[ f^\ast:\operatorname{Hom}_{\mathcal{C}}(B,I) \to \operatorname{Hom}_{\mathcal{C}}(A,I) \] is an epimorphism. These are called the relative monomorphisms. \item\label{item:relinj} For any object $A \in \mathcal{C}$ there exists an object $Q \in \mathcal{D}$ and a morphism $f:A \to Q$ in $\mathcal{S}$.
\end{enumerate} \end{definition} \begin{rem} Note that given either $\mathcal{S}$ or $\mathcal{D}$, the other class is determined by the requirements above, and that the third condition ensures the existence of enough relative injectives. \end{rem} It is not hard to check that $\mathcal{D}$ is closed under retracts and that if the composite morphism $A \xrightarrow{f}B \to C$ is in $\mathcal{S}$ then so is $f:A \to B$. \begin{example}[The split injective class] The \emph{split injective class} $\mathcal{I}_s = (\mathcal{D}_s,\mathcal{S}_s)$ has $\mathcal{D}_s$ equal to all objects of $\mathcal{C}$ and $\mathcal{S}_s$ all morphisms that satisfy~\Cref{def:injclass}, i.e., those $f:A \to B$ for which $f^\ast:\operatorname{Hom}_{\mathcal{C}}(B,I) \to \operatorname{Hom}_{\mathcal{C}}(A,I)$ is an epimorphism for every object $I$. One can easily check that this is equivalent to the requirement that $f:A \to B$ is a split monomorphism. \end{example} \begin{example}[The absolute injective class] Let $\mathcal{S}$ be the class of all monomorphisms and then let $\mathcal{D}$ be the objects as needed. This satisfies condition (3) if there are enough categorical injectives. \end{example} One way to construct an injective class is via a method known as reflection of adjoint functors. \begin{prop} Suppose that $\mathcal{C}$ and $\mathcal{F}$ are additive categories with cokernels, and there is a pair of adjoint functors \[ T:\mathcal{C} \rightleftarrows \mathcal{F}:U. \] Then, if $(\mathcal{D},\mathcal{S})$ is an injective class in $\mathcal{C}$, we obtain an injective class $(\mathcal{D}',\mathcal{S}')$ in $\mathcal{F}$, where the class of objects is given by all retracts of objects in $T(\mathcal{D})$ and the class of morphisms is given by all morphisms whose image (under $U$) is in $\mathcal{S}$. 
\end{prop} \begin{proof}[Sketch of proof.\footnotemark] \footnotetext{For full details see~\cite[p.~15]{eilenbergmoore}; there it is proved for relative projectives, but it is essentially formal to dualise the given argument.} First note that, since relative injectives are closed under retracts, to show that $\mathcal{D}'$ is as claimed, it suffices to show that $T(I)$ is relatively injective whenever $I \in \mathcal{D}$. Let $A \to B$ be in $\mathcal{S}'$ and $I \in \mathcal{D}$; then the map \[ \operatorname{Hom}_\mathcal{F}(B,T(I)) \to \operatorname{Hom}_\mathcal{F}(A,T(I)) \] is identified, under the adjunction, with the epimorphism \[ \operatorname{Hom}_{\mathcal{C}}(U(B),I) \to \operatorname{Hom}_{\mathcal{C}}(U(A),I). \] A similar method shows that the relative monomorphisms are as claimed. Finally we observe that for all $A \in \mathcal{F}$ there exists a $Q \in \mathcal{D}$ such that $U(A) \to Q \in \mathcal{S}$. Then the adjoint $A \to T(Q)$ satisfies condition (3). To see this note that $U(A) \to Q$ factors as $U(A) \to U(T(Q)) \to Q$; since relative monomorphisms are closed under left factorisation (see above), $U(A) \to U(T(Q)) \in \mathcal{S}$. Then $A \to T(Q) \in \mathcal{S}'$ as required. \end{proof} We recall the following definition. \begin{definition} An extended $L$-complete $E^\vee_* E$-comodule is a comodule isomorphic to one of the form $E^\vee_* E \boxtimes_{E_*} M$, where $M$ is an $L$-complete $E_*$-module. Here the comultiplication is given by the map \[ E^\vee_* E \boxtimes_{E_*} M \xrightarrow{\Delta \boxtimes \text{id}} E^\vee_* E\boxtimes_{E_*} E^\vee_* E \boxtimes_{E_*} M. \] \end{definition} \begin{example}\label{examp:Morava} Give $\widehat{\operatorname{Mod}}_{E_*}$ the split injective class. Then the adjunction \[ \operatorname{Hom}_{\widehat{\operatorname{Mod}}_{E_*}}(A,B) = \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(A,E^\vee_* E \boxtimes_{E_*} B) \] produces an injective class in $\widehat{\Comod}_{E^\vee_* E}$. 
In particular we have \begin{enumerate} \item $\mathcal{S}$ is the class of all comodule morphisms $f: A \to B$ whose underlying map of $L$-complete $E_*$-modules is a split monomorphism. \item $\mathcal{D}$ is the class of $L$-complete $E^\vee_* E$-comodules which are retracts of extended $L$-complete $E^\vee_* E$-comodules. \end{enumerate} Note that for any $L$-complete $E^\vee_* E$-comodule $M$ the coaction map $M \xrightarrow{\psi} E^\vee_* E \boxtimes_{E_*} M$ is a relative monomorphism into a relative injective. \end{example} We will say that a three term complex $M \xrightarrow{f} N \xrightarrow{g} P$ of comodules is relative short exact if $gf = 0$ and $f:M \to N$ is a relative monomorphism. A relative injective resolution of a comodule $M$ is a complex of the form \[ 0 \to M \to J^0 \to J^1 \to \cdots \] where each $J^i$ is relatively injective, and each three-term subsequence \[ J^{s-1} \to J^s \to J^{s+1}, \] where $J^{-1} = M$ and $J^s = 0$ for $s< -1$, is relative short exact. Note that, by definition, relative exact sequences are precisely those that give exact sequences of abelian groups after applying $\operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(-,I)$, whenever $I$ is relative injective. We have the usual comparison theorem for relative injective resolutions. The proof is nearly identical to the standard inductive homological algebra proof; in this context see~\cite[Theorem 2.2]{husmoore}. \begin{prop}\label{prop:comparasion} Let $M$ and $M'$ be objects in an additive category $\mathcal{C}$ with relative injective resolutions $P^\ast$ and $P'^\ast$, respectively. Suppose there is a map $f: M \to M'$. Then, there exists a chain map $f^{\ast}:P^\ast \to P'^\ast$ extending $f$ that is unique up to chain homotopy. \end{prop} \begin{definition}(cf.~\cite[p.~7]{eilenbergmoore}.)\label{def:lcompleteext} Let $M$ and $N$ be $L$-complete $E^\vee_* E$-comodules, and let $M$ be pro-free. Let $I^\ast$ be a relative injective resolution of $N$. 
Then, for all $s \ge 0$, we define \[ \widehat{\Ext}_{\widehat{\Comod}_{E^\vee_* E}}^s(M,N) = H^s(\operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(M,I^\ast)). \] \end{definition} For brevity we will write $\widehat{\Ext}_{E^\vee_* E}^s(M,N)$ for this Ext functor. Note that~\Cref{prop:comparasion} implies that the derived functor is independent of the choice of relative injective resolution. \begin{rem}\begin{enumerate} \item The reader should compare this definition to~\Cref{lem:proj}. \item The category of $L$-complete $E_*$-modules has no non-zero injectives~\cite[p.~40]{barthel2013completed}; this suggests that the same is true of $L$-complete $E^\vee_* E$-comodules, which is yet another reason we need to use relative homological algebra. \end{enumerate} \end{rem} Let $M$ be an $L$-complete comodule. As in~\cite{mrw77} we have the standard, or cobar, resolution of $M$, denoted $\Omega^*(E^\vee_* E,M)$, with \[ \Omega^n(E^\vee_* E,M) = \underbrace{E^\vee_* E \boxtimes_{E_*} \cdots \boxtimes_{E_*} E^\vee_* E}_{n+1 \text{ times }} \boxtimes_{E_*} M \] and differential \begin{equation*} \begin{split} d(e_0 \boxtimes \cdots \boxtimes e_n \boxtimes m) &= \sum_{i=0}^n (-1)^i e_0 \boxtimes \cdots \boxtimes e_{i-1} \boxtimes \Delta(e_i) \boxtimes e_{i+1} \boxtimes \cdots \boxtimes m \\ & + (-1)^{n+1} e_0 \boxtimes \cdots \boxtimes e_n \boxtimes \psi(m). \end{split} \end{equation*} The usual contracting homotopy of~\cite{mrw77} given by \[ s(e_0 \boxtimes \cdots \boxtimes e_n \boxtimes m) = \epsilon(e_0) e_1 \boxtimes \cdots \boxtimes e_n \boxtimes m \] shows that $(\Omega^*(E^\vee_* E,M),d)$ defines a relative injective resolution of $M$. \begin{lemma}\label{lem:ext0} Let $M$ and $N$ be $L$-complete $E^\vee_* E$-comodules. Then there is an isomorphism \[ \widehat{\Ext}_{E^\vee_* E}^0(M,N) \simeq \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(M,N). 
\] \end{lemma} \begin{proof} Let $\psi_M:M \to E^\vee_* E \boxtimes_{E_*} M$ and $\psi_N:N \to E^\vee_* E \boxtimes_{E_*} N$ be the comodule structure maps. Define \[ \psi_M^\ast,\psi_N^\ast:\operatorname{Hom}_{\widehat{\operatorname{Mod}}_{E_*}}(M,N) \to \operatorname{Hom}_{\widehat{\operatorname{Mod}}_{E_*}}(M,E^\vee_* E \boxtimes_{E_*} N) \] by \[ \psi_M^\ast(f) = (1 \boxtimes f)\psi_M \quad \text{ and } \quad \psi_N^\ast(f) = \psi_Nf. \] Note that (see~\cite[Proof of A1.1.6]{ravgreen} for the case of an ordinary Hopf algebroid) \[ \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(M,N) = \ker(\psi_M^\ast - \psi_N^\ast). \] The cobar complex begins \[ \xymatrix{ N \ar[r] & E^\vee_* E \boxtimes_{E_*} N \ar[rr]^-{\Delta \boxtimes 1 - 1 \boxtimes \psi_N} && E^\vee_* E \boxtimes_{E_*} E^\vee_* E \boxtimes_{E_*} N. } \] Applying $\operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(M,-)$ and using the adjunction of~\Cref{lem:adjoint} between extended $L$-complete $E^\vee_* E$-comodules and $L$-complete $E_*$-modules we see that \[ \widehat{\Ext}_{E^\vee_* E}^0(M,N) = \ker\left(\operatorname{Hom}_{\widehat{\operatorname{Mod}}_{E_*}}(M,N) \xrightarrow{f} \operatorname{Hom}_{\widehat{\operatorname{Mod}}_{E_*}}(M,E^\vee_* E \boxtimes_{E_*} N)\right). \] One can check that the map $f$ is precisely $\psi_M^\ast - \psi_N^\ast$, and the claim follows. \end{proof} \section{The $K(n)$-local $E_n$-Adams spectral sequence} \subsection{Adams spectral sequences} Here we present some standard material on Adams-type spectral sequences following~\cite{millerrelations,devhop04}. Throughout this section we always work in the homotopy category of spectra. Let $R$ be a ring spectrum. We say that a spectrum $I$ is $R$-injective if the map $I \to R \wedge I$ induced by the unit is split. 
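As a basic example (a standard observation, not taken from the discussion above), any spectrum of the form $R \wedge Z$ is $R$-injective, since the unit map is split by the multiplication of $R$:

```latex
% Sketch: for any spectrum Z, the spectrum I = R \wedge Z is R-injective.
% The unit map I -> R \wedge I is \eta \wedge 1_{R \wedge Z}, and the
% multiplication \mu of the ring spectrum R provides a retraction:
\[
R \wedge Z \xrightarrow{\ \eta \wedge 1\ } R \wedge R \wedge Z
\xrightarrow{\ \mu \wedge 1\ } R \wedge Z,
\qquad
(\mu \wedge 1) \circ (\eta \wedge 1)
  = \bigl(\mu \circ (\eta \wedge 1_R)\bigr) \wedge 1_Z
  = 1_{R \wedge Z},
\]
% by the unit axiom for R. In particular every term of the standard
% resolution I^k = R^{\wedge(k+1)} \wedge X is R-injective.
```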
A sequence of spectra $X' \to X \to X''$ is called $R$-exact if the composition is trivial and \[ [X',I] \leftarrow [X,I] \leftarrow [X'',I] \] is exact as a sequence of abelian groups for each $R$-injective spectrum $I$. An $R$-resolution of a spectrum $X$ is then an $R$-exact sequence of spectra (i.e., each three term subsequence is $R$-exact) \[ \ast \to X \to I^0 \to I^1 \to \cdots \] such that each $I^s$ is $R$-injective. Given an $R$-resolution of $X$ we can always form an Adams resolution of $X$; that is, a diagram \[ \begin{tikzcd} X=X_0\arrow[swap]{dr}{j} && \arrow[swap]{ll}{i} X_1 \arrow[swap]{dr}{j} && \arrow[swap]{ll}{i} X_2 \arrow[swap]{dr}{j} && \arrow[swap]{ll}{i} X_3 \\ & I^0 \arrow[dashed,swap]{ur}{k} && \Sigma^{-1}I^1 \arrow[dashed,swap]{ur}{k} && \Sigma^{-2}I^2 \arrow[dashed,swap]{ur}{k} \end{tikzcd} \cdots \] such that each $\Sigma^{-s} I^s$ is $R$-injective and each $X_{k+1} \to X_k \to \Sigma^{-k}I^k$ is a fiber sequence. Note that the composition $I^k \to \Sigma^{k+1} X_{k+1} \to I^{k+1}$ corresponds to the original morphism in the $R$-resolution of $X$. Given such a diagram we can always form the following exact couple \[ \begin{tikzcd} D^{s+1,t+1} = \pi_{t-s}(X_{s+1}) \arrow{rr}{i} && \pi_{t-s}(X_s) = D^{s,t} \arrow{dl}{j} & \\ & E_1^{s,t} = \pi_{t-s}(\Sigma^{-s}I^s). \arrow[dashed]{ul}{k} \end{tikzcd} \] If we form the standard resolution, where $I^k = R^{\wedge(k+1)} \wedge X$ for $k \ge 0$, and if $R_*R$ is a flat $R_*$-module, then it is not hard to see that on the $E_1$-page we get the following sequence \[ 0 \to R_*X \to R_*R \otimes_{R_*} R_*X \to R_*R^{\otimes 2} \otimes_{R_*} R_*X \to \cdots. \] By explicitly checking the maps one can see that this is the cobar complex for computing $\mathrm{Ext}$, and so we get the usual Adams spectral sequence \[ \mathrm{Ext}^{\ast,\ast}_{R_*R}(R_*,R_*X) \Rightarrow \pi_* X^\wedge_R. 
\] Here $X^\wedge_R$ is the $R$-nilpotent completion of $X$~\cite{bousfield79}. This construction can be suitably modified to construct the $F$-local $R$-Adams spectral sequence (see~\cite[Appendix A]{devhop04}), where $F$ is any spectrum. Following Devinatz and Hopkins, we say that an $F$-local spectrum $I$ is $R$-injective if the map $I \to L_F(R \wedge I)$ is split. The definitions of $R$-exact sequences and $R$-exact resolutions then follow in the same way as in the unlocalised case. We specialise to the case where $F = K(n)$ and $R = E$ is Morava $E$-theory. Following~\cite[Remark A.9]{devhop04} we take $I^j = L_{K(n)}(E^{\wedge (j+1)} \wedge X)$. The formulas of~\cite[Construction 4.11]{devhop04} actually show that the $I^j$ form an Adams resolution (in fact they can be assembled into a cosimplicial resolution). Here is our main result. \begin{theorem}\label{thm:ass} Let $X$ and $Y$ be spectra and suppose that $E^\vee_* X$ is pro-free, and $E^\vee_* Y$ is either a finitely-generated $E_*$-module, pro-free, or has bounded $\mathfrak{m}$-torsion (i.e., is annihilated by some power of $\mathfrak{m}$). Then the $E_2$-term of the $K(n)$-local $E$-Adams spectral sequence with abutment $\pi_{t-s}F(X,L_{K(n)}Y)$ is \[ E_2^{s,t} = \widehat{\Ext}^{s,t}_{E^\vee_* E}(E^\vee_* X,E^\vee_* Y). \] \end{theorem} \begin{proof} By mapping $X$ into an Adams resolution of $L_{K(n)}Y$ we obtain an exact couple with $E_1^{s,t} = \pi_{t-s}F(X,\Sigma^{-s}I^s) \simeq \pi_tF(X,I^s)$. Unwinding the exact couple we see that the $E_2$-page is the cohomology of the complex \[ \pi_*F(X,I^0) \to \pi_*F(X,I^1) \to \pi_*F(X,I^2) \to \cdots. \] As usual, the Adams spectral sequence is independent of the choice of resolution from the $E_2$-page onwards, so we use the standard resolution, i.e., we let $I^s = L_{K(n)}(E^{\wedge (s+1)} \wedge Y)$. 
Applying~\Cref{lem:UCT} (which we can do under the assumption that $E^\vee_* X$ is pro-free) we see that \[ \begin{split} \pi_\ast F(X,I^s)&\simeq \operatorname{Hom}_{E_*}(E^\vee_* X,E^\vee_* (E^{\wedge s} \wedge Y)) \\ &\simeq \operatorname{Hom}_{\widehat{\operatorname{Mod}}_{E_*}}(E^\vee_* X,E^\vee_* (E^{\wedge s} \wedge Y)), \end{split} \] where the latter follows from the fact that $E^\vee_*(-)$ is always $L$-complete. By~\Cref{lem:iteratedkunneth} we have $E^\vee_*(E^{ \wedge s} \wedge Y) \simeq (E^\vee_* E)^{\boxtimes s} \boxtimes_{E_*} E^\vee_* Y$. Using the adjunction between extended comodules and $L$-complete $E_*$-modules we get \[ \operatorname{Hom}_{\widehat{\operatorname{Mod}}_{E_*}}(E^\vee_* X,E^\vee_* (E^{\wedge s} \wedge Y)) \simeq \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(E^\vee_* X, (E^\vee_* E)^{\boxtimes (s+1)} \boxtimes_{E_*} E^\vee_* Y). \] This implies that the $E_2$-page is the cohomology of the complex \[ \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(E^\vee_* X,E^\vee_* E \boxtimes_{E_*} E^\vee_* Y) \to \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(E^\vee_* X,(E^\vee_* E)^{\boxtimes 2} \boxtimes_{E_*} E^\vee_* Y) \to \cdots \] which is precisely $\widehat{\Ext}^{\ast,\ast}_{E^\vee_* E}(E^\vee_* X,E^\vee_* Y)$. \end{proof} The following is now a consequence of~\cite[Theorem 2]{devhop04} and uniqueness of the $E_2$-term. \begin{cor}\label{cor:e2htfp} Let $X$ be a spectrum such that $E^\vee_* X$ is pro-free. If $F$ is a closed subgroup of $\mathbb{G}_n$, then there is an isomorphism \[ \widehat{\Ext}^{s,t}_{E^\vee_* E}(E^\vee_* X,E^\vee_* E^{hF}) \simeq H_c^s(F,\pi_tF(X,E)) \simeq H_c^s(F,E^{-t}X). \] \end{cor} \section{Identification of the $E_2$-term with group cohomology}\label{sec:morCOR} It has been known since the work of Morava~\cite{mor85}, that completed $\mathrm{Ext}$ groups (as considered in~\cite{mordevinatz}) can, under some circumstances, be identified with continuous group cohomology. 
The results of this section say that our $L$-complete $\mathrm{Ext}$ groups can also be identified with continuous group cohomology. In fact, in many cases complete and $L$-complete $\mathrm{Ext}$ groups coincide, although we do not make this statement precise. Before we can give our result identifying $L$-complete $\mathrm{Ext}$ groups and group cohomology, we need two preliminary lemmas. We write $M \widehat{\otimes}_{E_*} N$ for the $\mathfrak{m}$-adic completion of the ordinary tensor product. \begin{lemma}\label{lem:completefg} Let $M$ be a pro-free $E_*$-module, and $N$ a finitely-generated $E_*$-module. Then \[ M \widehat{\otimes}_{E_*}N \simeq M \otimes_{E_*} N. \] \end{lemma} \begin{proof} For a fixed finitely generated $E_*$-module $N$, the class of $E_*$-modules $M$ for which the conclusion of the lemma holds is closed under retracts. By~\cite[Proposition A.13]{hs99} $M = L_0(\oplus_I E_*)$ is a retract of $\prod_I E_*$, and so it suffices to prove the lemma for $M=\prod_I E_*$. Note that by~\cite[Proposition 4.44]{MR1653294} an $E_*$-module $N$ is finitely-presented (equivalently, finitely-generated, since $E_*$ is Noetherian) if and only if for any collection $\{ C_\alpha \}$ of $E_*$-modules, the natural map $N \otimes_{E_*} \prod C_\alpha \to \prod(N \otimes_{E_*} C_\alpha)$ is an isomorphism. We then have a series of isomorphisms \[ \begin{split} (\prod_I E_*) \widehat{\otimes}_{E_*} N &= \varprojlim_k (E_*/\mathfrak{m}^k \otimes_{E_*} (\prod_I E_*) \otimes_{E_*} N) \\ & \simeq \varprojlim_k \prod_I (E_*/\mathfrak{m}^k \otimes_{E_*} N)\\ & \simeq \prod_I \varprojlim_k (E_*/\mathfrak{m}^k \otimes_{E_*} N) \\ & \simeq \prod_I N^\wedge_{\mathfrak{m}} \\ & \simeq \prod_I N \\ & \simeq (\prod_I E_*) \otimes_{E_*} N. \qedhere \end{split} \] \end{proof} \begin{lemma}\label{lemma:lcompletetotensor} Let $M$ and $N$ be $E_*$-modules. 
Suppose that $M$ is pro-free and $N$ is either pro-free, finitely-generated as an $E_*$-module, or has bounded $\mathfrak{m}$-torsion. Then \[ M \widehat{\otimes}_{E_*} N \simeq M \boxtimes_{E_*} N. \] \end{lemma} \begin{proof} Note that in each case~\Cref{prop:completemodules} implies that $M$ and $N$ are both complete in the $\mathfrak{m}$-adic topology. When $N$ is finitely generated there is an isomorphism~\cite[Proposition A.4]{hs99} \[ M \boxtimes_{E_*} N \simeq L_0(M) \otimes_{E_*} N \simeq M \otimes_{E_*} N, \] where the last isomorphism follows from the fact that $M$ is already $L$-complete. Since $N$ is finitely-generated,~\Cref{lem:completefg} implies that $M \otimes_{E_*} N \simeq M \widehat{\otimes}_{E_*} N$. Now suppose that $N$ has bounded $\mathfrak{m}$-torsion. Note that $M \otimes_{E_*} N$ still has bounded $\mathfrak{m}$-torsion, and so $M \boxtimes_{E_*} N \simeq M \otimes_{E_*} N$. Furthermore, there is an isomorphism $M \otimes_{E_*} N \simeq M \widehat{\otimes}_{E_*} N$, since the inverse system defining the completed tensor product is eventually constant. For the final case, assume that $N$ is pro-free. Since both $M$ and $N$ are flat $E_*$-modules the same is true for $M \otimes_{E_*} N$. This implies (see~\Cref{rem:tamemodules}) that $M \widehat{\otimes}_{E_*} N \simeq M \boxtimes_{E_*} N$. \end{proof} We can now identify when the $E_2$-term of the $K(n)$-local $E$-Adams spectral sequence is given by continuous group cohomology. \begin{theorem}\label{thm:COR} Suppose that $X$ is a spectrum such that $E^\vee_* X$ is either a finitely-generated $E_*$-module, pro-free, or has bounded $\mathfrak{m}$-torsion. Then, for the $K(n)$-local $E$-Adams spectral sequence with abutment $\pi_*L_{K(n)}X$, there is an isomorphism \[ E_2^{s,t} = \widehat{\Ext}^{s,t}_{E^\vee_* E}(E_*,E^\vee_* X) \simeq H_c^s(\mathbb{G}_n,E^\vee_t X). 
\] \end{theorem} To prove this we will need to be explicit about the definition of continuous group cohomology we use, following~\cite[Section 2]{tate_k2}. Let $N$ be a topological $\mathbb{G}_n$-module and define \[ C^k(\mathbb{G}_n,N) = \operatorname{Hom}^c(\mathbb{G}_n^k,N), \] the group of continuous functions from $\mathbb{G}_n^k$ to $N$, where $\mathbb{G}_n^k = \underbrace{\mathbb{G}_n \times \cdots \times \mathbb{G}_n}_{k\text{ times}}$ and $k\ge 0$. Define morphisms $d:C^k(\mathbb{G}_n,N) \to C^{k+1}(\mathbb{G}_n,N)$ by \[ \begin{split} (df)(g_1,\ldots,g_{k+1}) = g_1f(g_2,\ldots,g_{k+1}) &+ \sum_{j=1}^k (-1)^jf(g_1,\ldots,g_j g_{j+1},\ldots,g_{k+1})\\ & + (-1)^{k+1}f(g_1,\ldots,g_{k}). \end{split} \] One can check that $d^2=0$ and thus we obtain a complex $C^\ast(\mathbb{G}_n,N)$. We then define $H_c^*(\mathbb{G}_n,N)$ as $H^*(C^\ast(\mathbb{G}_n,N),d)$. Of course, the same definition holds for any closed subgroup $G \subset \mathbb{G}_n$. We refer the reader to~\cite[Section 2]{davis06} for a more thorough discussion of various notions of continuous group cohomology used in chromatic homotopy theory. \begin{proof} As previously we have that \[ \begin{split} \widehat{\Ext}_{E^\vee_* E}^{\ast,\ast}(E_*,E^\vee_* X) &\simeq H^*(\operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(E_*, (E^\vee_* E)^{\boxtimes (\ast+1)}\boxtimes_{E_*} E^\vee_* X)) \\ & \simeq H^*(\operatorname{Hom}_{E_*}(E_*,(E^\vee_* E)^{\boxtimes \ast}\boxtimes_{E_*} E^\vee_* X)) \\ & \simeq H^*( (E^\vee_* E)^{\boxtimes \ast} \boxtimes_{E_*} E^\vee_* X). \end{split} \] By \Cref{lem:iteratedkunneth} we see that there is an isomorphism $(E^\vee_* E)^{\boxtimes \ast} \simeq E^\vee_* (E^{\wedge \ast})$, which is isomorphic to $\operatorname{Hom}^c(\mathbb{G}_n^{ \ast},E_*)$~\cite[p.~9]{devhop04}. 
This is pro-free by~\cite[Theorem 2.6]{hovoperations} and so applying~\Cref{lemma:lcompletetotensor} we see that \[ \begin{split} (E^\vee_* E)^{\boxtimes \ast} \boxtimes_{E_*} E^\vee_* X &\simeq \operatorname{Hom}^c(\mathbb{G}_n^\ast,E_*)\boxtimes_{E_*} E^\vee_* X \\ &\simeq \operatorname{Hom}^c(\mathbb{G}_n^\ast,E_*) \widehat{\otimes}_{E_*} E^\vee_* X. \end{split} \] Since, under our assumptions, $E^\vee_* X$ is $\mathfrak{m}$-adically complete we have that \[ \operatorname{Hom}^c(\mathbb{G}_n^\ast,E_*) \widehat{\otimes}_{E_*} E^\vee_* X \simeq \operatorname{Hom}^c(\mathbb{G}_n^\ast,E^\vee_* X). \] Then \[ H^*((E^\vee_* E)^{\boxtimes \ast} \boxtimes_{E_*} E^\vee_* X ) \simeq H^*(\operatorname{Hom}^c(\mathbb{G}_n^\ast,E^\vee_* X)). \] As in~\cite[Proof of Theorem 5.1]{lawsonnaumann} one can see that the latter is precisely $H_c^\ast(\mathbb{G}_n,E^\vee_* X)$. \end{proof} \begin{rem}\label{rem:specseq} In~\cite{davislawson} the authors give several examples where the $E_2$-term of the $K(n)$-local $E$-Adams spectral sequence for $\pi_*L_{K(n)}X$ can be identified with continuous group cohomology; we compare~\Cref{thm:COR} with these. The following cases are considered. \begin{enumerate}[(a)] \item By~\cite[Theorem 2(ii)]{devhop04}, if $X$ is finite then $E_2^{s,\ast} = H_c^s(\mathbb{G}_n,E_*X)$. If $X$ is finite, then smashing with it commutes with localisation and so $E_*X = E^\vee_* X$. By induction on the number of cells one can check that if $X$ is finite then $K_*X$ is finite (in each degree), where $K$ is the 2-periodic version of Morava $K$-theory used in~\Cref{rem:ktheory}. By~\cite[Theorem 8.6]{hs99} this is equivalent to $E^\vee_* X$ being finitely generated. \item By~\cite[Theorem 5.1]{lawsonnaumann} if $E_*X$ is a flat $E_*$-module then $E_2^{s,\ast} = H^s_c(\mathbb{G}_n,E^\vee_* X)$. But by~\Cref{rem:tamemodules} if $E_*X$ is flat then $E^\vee_* X = L_0(E_*X) = (E_*X)^\wedge_\mathfrak{m}$ is pro-free. 
\item By~\cite[Proposition 7.4]{hms} if $\mathcal{K}_{n,\ast}(X)$ is finitely generated as an $E_*$-module then $E_2^{s,\ast}=H^s_c(\mathbb{G}_n,\mathcal{K}_{n,\ast}(X))$. Here $\mathcal{K}_{n,\ast}(X) = \varprojlim_I E_*(X \wedge M_I)$. We suspect, but have been unable to prove, that if $\mathcal{K}_{n,\ast}(X)$ is finitely generated then $\mathcal{K}_{n,\ast}(X) \simeq E^\vee_* X$. We note that if $E^\vee_* X$ is finitely generated, then $X$ is dualisable~\cite[Theorem 8.6]{hs99}, and in this case the $\lim^1$ term in the Milnor exact sequence~\eqref{eq:milnor} vanishes~\cite[Proposition 6.2]{barthel2013completed}, so that $E^\vee_* X \simeq \mathcal{K}_{n,\ast}(X)$. \item The last case considered is more complex. Let $X$ be a spectrum such that, for each $E(n)$-module spectrum $M$, there exists a $k$ with $\mathfrak{m}^kM_*X =0$. Here $E(n)$ is the $n$-th Johnson-Wilson theory. Then, by~\cite[Proposition 6.7]{devhop04}, $E_2^{\ast,\ast} = H^\ast_c(\mathbb{G}_n,E_*X)$. Note that $E$ is an $E(n)$-module spectrum and so the proof of~\cite[Proposition 6.11]{devhop04} implies that $E \wedge X$ is $K(n)$-local, so that $E_*X = E^\vee_* X$. Since $E$ is an $E(n)$-module spectrum, $E^\vee_* X$ is a bounded $\mathfrak{m}$-torsion module and so~\Cref{thm:COR} applies. \end{enumerate} \end{rem} \section{The category of Morava modules} In this section we will show how~\Cref{cor:e2htfp} allows us to easily extend a result originally proved in~\cite{GHMR} for finite subgroups of $\mathbb{G}_n$ to arbitrary closed subgroups. First we need a definition. \begin{definition}~\cite{GHMR} A \emph{Morava module} is a complete $E_*$-module $M$ equipped with a continuous $\mathbb{G}_n$-action such that, if $g \in \mathbb{G}_n,a \in E_*$ and $x\in M$, then \[ g(ax) = g(a)g(x). \] \end{definition} We denote the category of Morava modules by $\mathcal{EG}_n$. 
Here a homomorphism $\phi:M \to N$ of Morava modules is a continuous (with respect to the $\mathfrak{m}$-adic topology) $E_*$-module homomorphism such that the following diagram commutes, where $g \in \mathbb{G}_n$: \[ \xymatrix{ M \ar[r]^{\phi} \ar[d]_g & N \ar[d]^g\\ M \ar[r]_{\phi} & N. }\] For example, if $X$ is any spectrum such that $E^\vee_* X$ is either finitely-generated, pro-free, or has bounded $\mathfrak{m}$-torsion, then $E^\vee_* X$ is a complete $E_*$-module (by~\Cref{prop:completemodules}) and the $\mathbb{G}_n$-action on $E$ defines a compatible action on $E^\vee_* X$. This gives $E^\vee_* X$ the structure of a Morava module. The category $\mathcal{EG}_n$ is a symmetric monoidal category; given Morava modules $M$ and $N$ their monoidal product is given by $M \widehat{\otimes}_{E_*} N$ with the diagonal $\mathbb{G}_n$-action. A homomorphism of complete $E_*$-modules is a homomorphism of $E_*$-modules that is continuous with respect to the $\mathfrak{m}$-adic topology. However, it turns out that any homomorphism between complete $E_*$-modules is automatically continuous. We learnt this from Charles Rezk, who also provided the following proof. \begin{lemma}\label{lem:cont} Let $f: M \to N$ be an $E_*$-module homomorphism between complete $E_*$-modules. Then $f$ is continuous with respect to the $\mathfrak{m}$-adic topology. \end{lemma} \begin{proof} The map $f$ is an $E_*$-module homomorphism and so $f(\mathfrak{m}^k M)$ is a subset of $\mathfrak{m}^k N$ for any $k \ge 0$; thus $\mathfrak{m}^kM$ is a subset of $f^{-1}(\mathfrak{m}^kN)$. Therefore $f^{-1}(\mathfrak{m}^kN)$ is a union of $\mathfrak{m}^kM$-cosets. It follows from the fact that $\mathfrak{m}^kM$ is open that $f^{-1}(\mathfrak{m}^kN)$ is open in the $\mathfrak{m}$-adic topology. \end{proof} Let $\{U_i\}$ be a system of open normal subgroups of $\mathbb{G}_n$ such that $\bigcap_i U_i = \{ e \}$ and $\mathbb{G}_n = \varprojlim_i \mathbb{G}_n/U_i$. Then we define $E_*[\![\mathbb{G}_n]\!] 
= \varprojlim_i E_*[\mathbb{G}_n/U_i]$, the completed group ring, with diagonal $\mathbb{G}_n$-action. If $H$ is a closed subgroup of $\mathbb{G}_n$, then we define $E_*[\![\mathbb{G}_n/H]\!]$ in a similar way, with diagonal $\mathbb{G}_n$-action. With this in mind, there is the following result. \begin{prop}~\cite[Theorem 2.7]{GHMR}\label{prop:ghmr2.7} Let $H_1$ and $H_2$ be closed subgroups of $\mathbb{G}_n$ and suppose that $H_2$ is finite. Then there is an isomorphism \[ E_\ast[\![\mathbb{G}_n/H_1]\!]^{H_2} \mathop{\longrightarrow}^{\cong} \operatorname{Hom}_{\mathcal{EG}_n}(E^\vee_* E^{hH_1},E^\vee_* E^{hH_2}). \] \end{prop} We will in fact see that this holds more generally whenever $H_2$ is a closed subgroup of $\mathbb{G}_n$. We will need the following relationship between homomorphisms of Morava modules and $L$-complete comodules. \begin{prop} If $M$ and $N$ are both Morava modules and $L$-complete comodules such that the underlying $L$-complete $E_*$-modules are pro-free, then \[ \operatorname{Hom}_{\mathcal{EG}_n}(M,N) \simeq \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(M,N). \] \end{prop} \begin{proof} Let $\phi:M \to N$ be a homomorphism of Morava modules. Note that $M$ and $N$ are complete, and hence also $L$-complete, and so $\phi$ defines a morphism in $\widehat{\operatorname{Mod}}_{E_*}$. We wish to show that this is in fact a comodule homomorphism. Let $\psi_M:M \to \operatorname{Hom}^c(\mathbb{G}_n,M) \simeq E^\vee_* E \widehat{\otimes}_{E_*} M$ be the adjoint of the $\mathbb{G}_n$-action map, and similarly for $\psi_N$. By~\Cref{lemma:lcompletetotensor} $E^\vee_* E \widehat{\otimes}_{E_*} M \simeq E^\vee_* E \boxtimes_{E_*} M$, and equivariance of $\phi$ implies that the following diagram commutes \[ \xymatrix{ M \ar[r]^-{\phi} \ar[d]_{\psi_M} & N\ar[d]^{\psi_N}\\ E^\vee_* E \boxtimes_{E_*} M \ar[r]_-{\text{id} \boxtimes \phi} & E^\vee_* E \boxtimes_{E_*} N, } \] so that $\phi$ defines a morphism of comodules. 
Conversely, suppose that we are given an $L$-complete comodule homomorphism $\Phi: M \to N$. Since $M$ and $N$ are complete,~\Cref{lem:cont} implies that $\Phi$ is a homomorphism of complete $E_*$-modules. Given the structure map $\psi_M$ we define a $\mathbb{G}_n$-action on $M$ using the retract diagram \[ M \xrightarrow{\psi_M} E^\vee_* E \boxtimes_{E_*} M \simeq \operatorname{Hom}^c(\mathbb{G}_n,E_*) \widehat{\otimes}_{E_*} M \xrightarrow{\text{ev}(g) \widehat{\otimes} \text{id}} M, \] where $\text{ev}(g):\operatorname{Hom}^c(\mathbb{G}_n,E_*) \to E_*$ is the evaluation map at $g \in \mathbb{G}_n$. The fact that $\Phi$ is an $L$-complete comodule homomorphism shows that, with this $\mathbb{G}_n$-action, $\Phi$ is in fact a morphism of Morava modules. These constructions define maps $\operatorname{Hom}_{\mathcal{EG}_n}(M,N) \to \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(M,N)$ and vice versa, and it is not hard to see that these are inverse to each other. \end{proof} \begin{cor}\label{cor:homequivalence} If $E^\vee_* X$ and $E^\vee_* Y$ are pro-free, then \[ \operatorname{Hom}_{\mathcal{EG}_n}(E^\vee_* X,E^\vee_* Y) \simeq \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(E^\vee_* X,E^\vee_* Y). \] \end{cor} \begin{proof} The condition that $E^\vee_* X$ and $E^\vee_* Y$ are pro-free ensures that they are Morava modules; they are also $L$-complete $E^\vee_* E$-comodules by~\Cref{prop:completemodules}. \end{proof} We can now easily derive our version of the Goerss--Henn--Mahowald--Rezk result. \begin{prop}\label{prop:GHMR} Let $H_1$ and $H_2$ be closed subgroups of $\mathbb{G}_n$. Then there is an isomorphism \[ E_\ast[\![\mathbb{G}_n/H_1]\!]^{H_2} \mathop{\longrightarrow}^{\cong} \operatorname{Hom}_{\mathcal{EG}_n}(E^\vee_* E^{hH_1},E^\vee_* E^{hH_2}). \] \end{prop} \begin{proof} By~\Cref{cor:e2htfp} we have \[ \widehat{\Ext}^{s,\ast}_{E^\vee_* E}(E^\vee_* E^{hH_1},E^\vee_* E^{hH_2}) \simeq H_c^s(H_2,E^{-\ast} E^{hH_1}). 
\] From the results of~\cite{hovoperations,devhop04} it can be deduced that $E^{-*}E^{hH_1} \simeq E_*[\![\mathbb{G}_n/H_1]\!]$ for any closed subgroup $H_1 \subset \mathbb{G}_n$, and that this isomorphism respects the $\mathbb{G}_n$-actions on both sides. Using~\Cref{lem:ext0} \[ \begin{split} \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(E^\vee_* E^{hH_1},E^\vee_* E^{hH_2}) &\simeq H_c^0(H_2,E_{\ast}[\![\mathbb{G}_n/H_1]\!])\\& \simeq E_{*}[\![\mathbb{G}_n/H_1]\!]^{H_2}. \end{split} \] Since $E^\vee_* E^{hH_1}$ and $E^\vee_* E^{hH_2}$ are pro-free,~\Cref{cor:homequivalence} implies that \[ \begin{split} \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(E^\vee_* E^{hH_1},E^\vee_* E^{hH_2}) \simeq \operatorname{Hom}_{\mathcal{EG}_n} (E^\vee_* E^{hH_1},E^\vee_* E^{hH_2}), \end{split} \] so that \[ \operatorname{Hom}_{\mathcal{EG}_n} (E^\vee_* E^{hH_1},E^\vee_* E^{hH_2}) \simeq E_*[\![\mathbb{G}_n/H_1]\!]^{H_2} \] as required. \end{proof} \begin{rem} Let $H$ be a topological group and assume that $R$ is an $H$-spectrum and $X = \lim_i X_i$ is an inverse limit of a sequence of finite discrete $H$-sets $X_i$, such that $X$ has a continuous $H$-action. Following~\cite{GHMR} we define the $H$-spectrum \[ R[\![X]\!] = \operatorname{holim}_i R \wedge (X_i)_+, \] with the diagonal $H$-action. In~\cite{behdav} Behrens and Davis show that if $H_1$ and $H_2$ are as above then there is an equivalence \[ F(E^{hH_1},E^{hH_2}) \simeq E[\![\mathbb{G}_n/H_1]\!]^{hH_2}. \] This was originally proved for $H_2$ finite in~\cite{GHMR}. Combined with~\Cref{prop:GHMR} it is easy to see that there is a commutative diagram \[ \begin{tikzcd} \pi_* E[\![\mathbb{G}_n/H_1]\!]^{hH_2} \arrow{r} \arrow[swap]{d}{\simeq} & \left( E_*[\![\mathbb{G}_n/H_1]\!]\right)^{H_2} \arrow{d}{\simeq} \\ \pi_*F(E^{hH_1},E^{hH_2}) \arrow{r} & \operatorname{Hom}_{\mathcal{EG}_n}(E^\vee_* E^{hH_1},E^\vee_* E^{hH_2}), \end{tikzcd} \] for $H_1,H_2$ closed subgroups of $\mathbb{G}_n$. 
Again this is proved in~\cite{GHMR} under the additional assumption that $H_2$ is finite. \end{rem} \begin{comment} \begin{theorem}[Shapiro's Lemma] There are isomorphisms \[ \begin{split} \mathrm{Ext}^{s,\ast}_{E^\vee_* E}(E_*,E^\vee_* E_n^{hF})&\simeq \mathrm{Ext}^{s,\ast}_{\operatorname{Hom}^c(F,E_*)}(E_*,E_*) \\ & \simeq H^s(\mathbb{G}_n,E^\vee_* E^{hF}) \\&\simeq H^s(F,E_*). \end{split} \] \begin{proof} \end{proof} \end{theorem} \end{comment} \section{The $E_1$ and $K(1)$-local $E_1$-Adams spectral sequences} Since $E_*E$ is a flat $E_*$-module we have the $E$-Adams spectral sequence (see, for example~\cite{hovsad}) \[ E_2^{s,t} = \mathrm{Ext}^{s,t}_{E_*E}(E_*,E_*X) \Rightarrow \pi_*L_nX. \] This is a spectral sequence of $\mathbb{Z}_p$-modules and our goal in this section is to use the derived functor of $p$-completion on $\mathbb{Z}_p$-modules to construct a spectral sequence abutting to the $E_2$-term of the $K(n)$-local $E$-Adams spectral sequence. Unfortunately our proof only works when $n=1$ and $p$ is an arbitrary prime. We shall see that, for a spectrum $X$, the spectral sequence naturally carries copies of $\mathbb{Q}/\mathbb{Z}_{(p)}$ in $\mathrm{Ext}^{\ast,\ast}_{E_*E}(E_*,E_*X)$ to copies of $\mathbb{Z}_p$ in $H^{\ast}_c(\mathbb{G}_n,E^\vee_* X)$. Already at height 2, for primes greater than or equal to 5, the calculations of~\cite{shimyabe} imply that there are 3 copies of $\mathbb{Q}/\mathbb{Z}_{(p)}$ which lie in bidegree $(4,0),(4,0)$ and $(5,0)$, whilst in $H^*_c(\mathbb{G}_2,(E_2)_*)$ there are copies of $\mathbb{Z}_p$ in bidegrees $(0,0),(1,0)$ and $(3,0)$. The grading on the spectral sequence we construct will imply that there is no possible class that could give rise to the copy of $\mathbb{Z}_p$ in bidegree $(1,0)$. If one accepts the chromatic splitting conjecture~\cite{hovbous} then an analogue of our spectral sequence cannot exist at all when $n \ge 2$.
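The passage from $\mathbb{Q}/\mathbb{Z}_{(p)}$-classes to $\mathbb{Z}_p$-classes is governed by the standard values of the derived $p$-completion functors on basic $\mathbb{Z}_{(p)}$-modules; we record them here for orientation (these are well-known facts, and $L_i = 0$ for $i \geq 2$ over $\mathbb{Z}$):

```latex
% Derived functors of p-completion on basic Z_(p)-modules.
% A copy of Q/Z_(p) in homological degree s contributes a Z_p to L_1
% and hence, via the grading of the spectral sequence constructed
% below, to cohomological degree s-1 in the abutment.
\begin{align*}
L_0\,\mathbb{Z}_{(p)} &\cong \mathbb{Z}_p, & L_1\,\mathbb{Z}_{(p)} &= 0,\\
L_0\,(\mathbb{Z}/p^k) &\cong \mathbb{Z}/p^k, & L_1\,(\mathbb{Z}/p^k) &= 0,\\
L_0\,(\mathbb{Q}/\mathbb{Z}_{(p)}) &= 0, & L_1\,(\mathbb{Q}/\mathbb{Z}_{(p)}) &\cong \mathbb{Z}_p,\\
L_0\,\mathbb{Q} &= 0, & L_1\,\mathbb{Q} &= 0.
\end{align*}
```

In particular a divisible torsion group in filtration $s$ is shifted down by one upon completion, which is exactly the mechanism behind the height 1 computation at the end of this section.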
\textbf{Note:} From now on, unless otherwise stated, it is implicit that $E$ refers to Morava $E$-theory at height 1 only. The reason that the spectral sequence exists when $n=1$ is due to the fact that $E_* \simeq \mathbb{Z}_p[u^{\pm 1}]$. As Hovey shows in~\cite[Lemma 3.1]{hovey08fil}, given a graded $E_*$-module $M$, there is an isomorphism $(L_0M)_k \simeq L_0M_k$ where the second $L_0$ is taken in the category of $\mathbb{Z}_p$-modules. A similar result holds for completion with respect to $\mathbb{Z}_p$-modules. We will often use this implicitly to pass between ungraded $\mathbb{Z}_p$-completion and graded $E_*$-completion. \begin{theorem}\label{thm:specseq} If $E_*X$ is a projective $E_*$-module and $E_*Y$ is a flat $E_*$-module, then there is a spectral sequence \[ E_2^{i,s} = L_i \mathrm{Ext}_{E_*E}^{s,t}(E_*X,E_*Y) \Rightarrow \widehat{\Ext}_{E^\vee_* E}^{s-i,t}(E^\vee_* X,E^\vee_* Y), \] where $L_i$ is taken in the category of $\mathbb{Z}_p$-modules. \end{theorem} \begin{rem} The assumption that $E_*X$ is projective ensures that $\mathrm{Ext}_{E_*E}^{\ast,\ast}(E_*X,E_*Y)$ can be computed by a relative injective resolution of $E_*Y$. Additionally, if $E_*X$ is projective, the spectral sequence of \cite[Theorem 2.3]{hovey08fil} collapses, so $E^\vee_* X \cong L_0E_*X$ is pro-free by~\cite[Proposition A.15]{barthel2013completed}. \end{rem} \begin{proof} Let $M^\ast$ be the cobar complex with $M^s = E_*E^{\otimes (s+1)} \otimes_{E_*} E_*Y$. Then \[ \begin{split} \mathrm{Ext}_{E_*E}^{\ast,\ast}(E_*X,E_*Y) &= H^*(\operatorname{Hom}_{E_*E}(E_*X,E_*E^{\otimes (\ast+1)} \otimes_{E_*} E_*Y)) \\ &\simeq H^*(\operatorname{Hom}_{E_*}(E_*X,E_*E^{\otimes \ast} \otimes_{E_*} E_*Y)). \end{split} \] For brevity, we denote $\operatorname{Hom}_{E_*}(E_*X,E_*E^{\otimes \ast} \otimes_{E_*} E_*Y)$ by $N^\ast$.
Since $E_*E$ and $E_*Y$ are flat $E_*$-modules, so is the iterated tensor product $E_*E^{\otimes \ast} \otimes_{E_*} E_*Y$; since $E_*X$ is projective, and $E_*$ is Noetherian, each $N^s$ is flat, and hence tame. Under such a condition there is a spectral sequence\footnote{Note that we switch from a chain complex to a cochain complex, which accounts for the shift in grading in the abutment.}~\cite[Proposition 8.7]{rezk2013} \[ E_2^{i,s} = L_i(H^{s}(N^\ast)) \simeq L_i\mathrm{Ext}_{E_*E}^{s,\ast}(E_*X,E_*Y) \Rightarrow H^{s-i}(L_0(N^\ast)). \] To identify the abutment use the spectral sequence of~\cite[Theorem 2.3]{hovey08fil} to see that $L_0(E_*X) = E^\vee_* X$ and $L_0(E_*Y) = E^\vee_* Y$, since both $E_*X$ and $E_*Y$ are flat. It also follows from the symmetric monoidal structure on $L$-complete $E_*$-modules~\cite[Corollary A.7]{hs99} that $L_0(E_*E^{\otimes \ast} \otimes_{E_*} E_*Y) = L_0(E_*E)^{\boxtimes \ast} \boxtimes_{E_*} L_0(E_*Y)$. Applying~\Cref{cor:homlcompletion} we see that \[ \begin{split} L_0(N^\ast) &= \operatorname{Hom}_{\widehat{\operatorname{Mod}}_{E_*}}(E^\vee_* X,E^\vee_* E^{\boxtimes \ast} \boxtimes_{E_*} E^\vee_* Y) \\ &\simeq \operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(E^\vee_* X,E^\vee_* E^{\boxtimes (\ast+1)} \boxtimes_{E_*} E^\vee_* Y). \end{split} \] The cohomology of the latter is precisely $\widehat{\Ext}^{\ast,\ast}_{E^\vee_* E}(E^\vee_* X,E^\vee_* Y)$. \end{proof} \begin{rem} This spectral sequence can also be obtained as a Grothendieck spectral sequence. Again assuming that $E_*X$ is projective, consider the following functors, and their derived functors: \[ G: \operatorname{Hom}_{E_*E}(E_*X,-), \quad R^tG: \mathrm{Ext}^t_{E_*E}(E_*X,-) \] from $E_*E$-comodules with flat underlying $E_*$-module to $\mathbb{Z}_p$-modules, and \[ F: L_0(-), \quad L^tF : L_t(-), \] from $\mathbb{Z}_p$-modules to $\mathbb{Z}_p$-modules. Then \[ FG(-) = L_0\operatorname{Hom}_{E_*E}(E_*X,-).
\] Let $E_*E \otimes_{E_*} N$ be an extended $E_*E$-comodule, where $N$ is a flat $E_*$-module; this implies $E_*E \otimes_{E_*} N$ is still flat. Then, for $s>0$, \[ \begin{split} L^sF(G(E_*E \otimes_{E_*} N)) & \simeq L_s(\operatorname{Hom}_{E_*E}(E_*X,E_*E \otimes_{E_*} N)) \\ & \simeq L_s \operatorname{Hom}_{E_*}(E_*X,N) = 0 \\ \end{split} \] by~\Cref{cor:homlcompletion}. This implies that the Grothendieck spectral sequence exists. To identify the abutment we just need to identify the derived functors of $FG$. Once again we can use the cobar resolution $M \to E_*E \otimes_{E_*} M \to \cdots $, where $M$ is an $E_*E$-comodule that is flat as an $E_*$-module. Then \[ \begin{split} R^sFG(M) &= H^s(L_0\operatorname{Hom}_{E_*E}(E_*X,(E_*E)^{\otimes (\ast+1)}\otimes_{E_*}M)) \\ &\simeq H^s(L_0\operatorname{Hom}_{E_*}(E_*X,(E_*E)^{\otimes \ast}\otimes_{E_*}M)) \\ & \simeq H^s(\operatorname{Hom}_{\widehat{\operatorname{Mod}}_{E_*}}(E^\vee_* X,E^\vee_* E^{\boxtimes \ast} \boxtimes_{E_*} L_0M )) \\ & \simeq H^s(\operatorname{Hom}_{\widehat{\Comod}_{E^\vee_* E}}(E^\vee_* X,E^\vee_* E^{\boxtimes (\ast+1)}\boxtimes_{E_*} L_0M) ). \end{split} \] As we have seen previously, this identifies $R^sFG(M)$ with $\widehat{\Ext}^{s,\ast}_{E^\vee_* E}(E^\vee_* X,L_0M)$. \end{rem} \subsection{Height 1 calculations} As an example we will show how the calculation of $H^*_c(\mathbb{G}_1,E_*)$ follows from the corresponding calculation of $\mathrm{Ext}^{\ast,\ast}_{E_*E}(E_*,E_*)$. We work at the prime 2 since the calculations are more interesting here, due to the presence of 2-torsion in $\mathbb{G}_1 = \mathbb{Z}_2^\times \simeq \mathbb{Z}_2 \times \mathbb{Z}/2$. We first need the following lemma that relates the $E(n)$ and $E$-Adams spectral sequences. This result holds for \emph{all} heights $n$ and primes $p$. \begin{lemma}[Hovey-Strickland]\label{prop:Enss} Let $M$ and $N$ be $E(n)_*E(n)$-comodules.
Then, for all $s$ and $t$, there is an isomorphism \[ \mathrm{Ext}^{s,t}_{E(n)_*E(n)}(M,N) \simeq \mathrm{Ext}^{s,t}_{E_*E}(M \otimes_{E(n)_*} E_{*},N \otimes_{E(n)_*} E_{*}). \] \end{lemma} \begin{proof} By~\cite[Theorem C]{hoveystricklandcomod} the functor that takes $M$ to $M \otimes_{E(n)_*}E_*$ defines an equivalence of categories between $E(n)_*E(n)$-comodules and $E_*E$-comodules. \end{proof} This implies that $\mathrm{Ext}_{E(1)_*E(1)}^{s,t}(E(1)_*,E(1)_*) = \mathrm{Ext}_{E_*E}^{s,t}(E_{\ast},E_{\ast})$ for all $s$ and $t$. We start with a calculation described in~\cite[Section 6]{hovsad}.\footnote{Note that there is a small typo in~\cite{hovsad}. Namely the $\mathbb{Z}/16k$ referred to by Hovey-Sadofsky in filtration degree 1 of the $(8k-1)$-stem should actually refer to the 2-primary part of $\mathbb{Z}/16k$.} \begin{prop} Let $p = 2$. Then \[ \mathrm{Ext}_{E(1)_*E(1)}^{s,t}(E(1)_*,E(1)_*)= \begin{cases} \mathbb{Z}_{(2)} & t=0,s=0 \\ \mathbb{Q}/\mathbb{Z}_{(2)} & t=0,s=2\\ \mathbb{Z}/2^{k+2} & t=2^{k+1}m,m \not \equiv 0 \mod (2),k \ne 0, s=1 \\ \mathbb{Z}/2 & t = 4t'+2, s=1, t' \in \mathbb{Z} \\ \mathbb{Z}/2 & s \ge 2, t=\text{even} \\ 0 & \text{else.} \end{cases} \] \end{prop} We now run the spectral sequence of~\Cref{thm:specseq}. Note that~\Cref{examp:zp} computes the $E_2$-term of this spectral sequence. It can be checked that the differentials are $d_r:E_r^{i,s} \to E_r^{i+r,s+r-1}$; since the spectral sequence is non-zero only for $i=0$ and $i=1$ we see that there are no differentials in the spectral sequence, and that it collapses at the $E_2$-page. We deduce the following: \begin{theorem} Let $p = 2$. Then \[ H^s_c(\mathbb{G}_1,E_t) = \begin{cases} \mathbb{Z}_2 & t=0,s=0,1 \\ \mathbb{Z}/2^{k+2} & t=2^{k+1}m,m \not \equiv 0 \mod (2),k \ne 0, s=1 \\ \mathbb{Z}/2 & t = 4t'+2, s=1,t' \in \mathbb{Z} \\ \mathbb{Z}/2 & s \ge 2, t=\text{even} \\ 0 & \text{else.} \end{cases} \] \end{theorem} \end{document}
\begin{document} \title{Projective transformations of rotation sets} \begin{abstract} We give a new proof and extend a result of J. Kwapisz: whenever a set $C$ is realized as the rotation set of some torus homeomorphism, the image of $C$ under certain projective transformations is also realized as a rotation set. \end{abstract} The concept of \emph{rotation set}, introduced by M. Misiurewicz and K. Ziemian in \cite{MisZie}, is one of the most important tools to study the global dynamics of homeomorphisms of the torus $\mathbb{T}^2$. If $f$ is a homeomorphism of $\mathbb{T}^2$ isotopic to the identity, and $F$ is a lift of $f$ to $\mathbb{R}^2$, the rotation set of $F$ is a compact convex subset of the plane which describes ``at what speeds and in what directions the orbits of $f$ rotate around the torus". One of the main problems in the theory is to determine which compact convex subsets of $\mathbb{R}^2$ can be realized as the rotation sets of some torus homeomorphisms. For compact convex subsets with empty interiors (\emph{i.e.} singletons and segments), a conjectural answer to the problem has been formulated by J. Franks and M. Misiurewicz (see \cite{FraMis}). Fifteen years ago, J. Kwapisz introduced a technical tool which allows one to simplify the problem. Namely, he observed that, if a compact convex set $C\subset \mathbb{R}^2$ is realized as the rotation set of a certain torus diffeomorphism, and if a projective transformation $L$ maps $C$ to a bounded set of the plane, then $L(C)$ can be realized as the rotation set of another torus diffeomorphism (see \cite[section 2]{Kwa}). Kwapisz's proof requires one to consider the suspension of the initial torus homeomorphism, and to apply a theorem of D. Fried to find a new surface of section for this flow, in the appropriate cohomology class.
Fried's theorem works only for $C^1$ flows; this forces Kwapisz to consider only rotation sets of $C^1$ diffeomorphisms, whereas the natural setting for his result would be rotation sets of homeomorphisms. The purpose of the present note is to provide a more elementary proof of Kwapisz's result. Our proof remains at the level of surface homeomorphisms, \emph{i.e.} does not require considering a flow on a three-dimensional manifold. It does not make use of Fried's theorem (in some sense, we replace it by the more classical fact that the only surface with fundamental group isomorphic to $\mathbb{Z}^2$ is the torus $\mathbb{T}^2$). As a consequence, it works for surface homeomorphisms without any differentiability assumption. This might be of interest in relation to some recent works related to the Franks-Misiurewicz conjecture (see~\cite{LecTal,Kor}, and the example of Avila quoted in these papers). \begin{theo*} Let $C$ be a compact subset of the plane which is realized as the rotation set of some torus homeomorphism. Let $L \in \mathrm{SL}(3,\mathbb{Z})$ be a projective transformation such that the image $C'$ of $C$ under $L$ is a bounded subset of the plane. Then $C'$ is also realized as the rotation set of some torus homeomorphism. \end{theo*} In this statement, we use the usual affine chart to embed the plane in the projective plane. The requirement that the image of $C$ under $L$ is a bounded subset of the plane means that we demand that $L(C)$ does not meet the line at infinity. A more precise version of the above theorem will be given below. We now recall the classical definition of the rotation set by Misiurewicz and Ziemian. We consider a self-homeomorphism $f$ of the torus $\mathbb{T}^2 = \mathbb{R}^2/\mathbb{Z}^2$, and a lift $F:\mathbb{R}^2 \to \mathbb{R}^2$. We assume that $f$ is isotopic to the identity, which amounts to saying that $F$ commutes with the deck transformations $S : (x,y)\mapsto (x+1,y)$ and $T : (x,y)\mapsto (x,y+1)$.
The rotation set of $F$ is defined as the set of $w \in {\mathbb{R}}^2$ such that there exists a sequence $(z_k)_{k \geq 0}$ of points of the plane, and a sequence $(n_k)_{k \geq 0}$ of integers tending to $\infty$ such that $$ \frac{F^{n_k}(z_k)-z_k}{n_k} $$ converges to $w$ as $k$ goes to infinity. Note that this definition depends on the choice of coordinates on the torus (in order to identify the universal cover of the torus with $\mathbb{R}^2$). In particular, it depends on the choice of a basis $(S,T)$ of the fundamental group of the torus. To make things clear, we need a definition of the rotation set that makes explicit this dependence. \begin{defi*}\label{d.rotation-set} Consider an action of $\mathbb{Z}^3$ on $\mathbb{R}^2$ generated by three commuting homeomorphisms $G,U,V$. We define \emph{the rotation set of $G$ with respect to $U$ and $V$} as the set $\rho_{U,V}(G)\subset \mathbb{R}^2$ of all vectors $w$ such that there exists a compact subset $K$ of the plane, and a sequence $(m_k,n_k,p_k)_{k\geq 0}$ of elements of $\mathbb{Z}^3$ so that: \begin{enumerate} \item for every $k$, $U^{-m_k}V^{-n_k}G^{p_k}(K) \cap K \neq \emptyset$, \item the sequence $(m_k,n_k,p_k)$ tends to infinity, \item the sequence $\left(\frac{m_k}{p_k},\frac{n_k}{p_k}\right)$ tends to $w$. \end{enumerate} \end{defi*} \begin{rema} In the case where $F$ is a lift of a homeomorphism of $\mathbb{T}^2$, and $S,T$ are the elementary translations $(x,y)\mapsto (x+1,y)$ and $(x,y)\mapsto (x,y+1)$, one easily checks that the rotation set $\rho_{S,T}(F)$ coincides with the classical rotation set of $F$. \end{rema} In order to prove the above theorem, we will consider a lift $F$ of a torus homeomorphism whose rotation set (in the sense of Misiurewicz and Ziemian) is the given compact convex set $C$.
In order to realize the set $C'=L(C)$, we will not only replace $F$ by a new homeomorphism $G$; we will also replace the elementary translations $S:(x,y)\mapsto (x+1,y)$ and $T:(x,y)\mapsto (x,y+1)$ by some ``non-linear translations" $U,V$. \begin{rema} The above definition immediately extends to the case of a $\mathbb{Z}^p$ action on a non-compact topological space $X$. In this more general setting, to get a more symmetric definition, it is tempting to replace $U^{-m_k}V^{-n_k}G^{p_k}$ in the first item by $U^{-m_k}V^{-n_k}G^{-p_k}$, and to define the ``rotation set" as a subset of $\mathbb{RP}^{p-1}$, instead of looking in a specific affine chart. The definition depends on a choice of basis of $\mathbb{Z}^p$, but two different choices give two ``rotation sets" that differ under a projective transformation, thus we get a conjugacy invariant which is a subset of $\mathbb{RP}^{p-1}$ up to projective isomorphisms (see the argument at the end of the paper). Going back to the case of an action of $\mathbb{Z}^3$ on $\mathbb{R}^2$, one could wonder which results of the classical rotation set theory for torus homeomorphisms (in the sense of Misiurewicz and Ziemian) can be generalized to rotation sets of $\mathbb{Z}^3$ actions on the plane. \end{rema} Now we are in a position to give a more precise statement of the theorem above. We denote by $\Delta_\infty=\{[x:y:0]\}$ the ``line at infinity" in $\mathbb{R}\mathbb{P}^2$, and by $\Phi: \mathbb{R}\mathbb{P}^2\setminus \Delta_\infty \to \mathbb{R}^2$ the affine chart mapping $[x:y:z]$ to $(x/z,y/z)$. If $L\in \mathrm{SL}(3,\mathbb{R})$ is a projective transformation, we denote by $\widehat L$ the ``restriction of this map to the affine plane": more formally, $$\widehat L=\Phi L \Phi^{-1} :\mathbb{R}^2\setminus (\Phi L^{-1}(\Delta_\infty)) \longrightarrow \mathbb{R}^2\setminus (\Phi L(\Delta_\infty)).$$ \begin{theo*} Let $S : (x,y)\mapsto (x+1,y)$ and $T : (x,y)\mapsto (x,y+1)$.
Let $F$ be a lift of a homeomorphism of the torus $\mathbb{T}^2 = \mathbb{R}^2/\mathbb{Z}^2$ isotopic to the identity. Let $L \in \mathrm{SL}(3,\mathbb{Z})$ be a projective transformation such that $\rho_{S,T}(F)$ is disjoint from the line $\Phi L^{-1}(\Delta_\infty)$. Let $$L^{-1}=\left(\begin{array}{ccc}a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{array}\right)\quad\quad\mbox{and} \quad\quad \begin{array}{ccc} U = S^{a_1}T^{b_1}F^{-c_1}\\ V = S^{a_2}T^{b_2}F^{-c_2}\\ G= S^{-a_3}T^{-b_3}F^{c_3}\end{array}.\text{ Then:}$$ \begin{enumerate} \item the quotient space $\mathbb{R}^2/\langle U,V\rangle$ is homeomorphic to the torus $\mathbb{T}^2$; \item the rotation set $\rho_{U,V}(G)$ is equal to $\widehat L(\rho_{S,T}(F))$. \end{enumerate} \end{theo*} \begin{rema} Note that, since $G$ obviously commutes with $U$ and $V$, it can be seen as a lift of a homeomorphism $g$ of $\mathbb{T}^2$ which is isotopic to the identity. Thus this second theorem implies the first one. From the definition, one easily deduces that $g$ is as smooth as $f$: if $f$ is $C^r$ for some $r\in \mathbb{N}\cup\{\infty\}$ or analytic, then so is $g$. Moreover, every invariant finite measure for $f$ induces an invariant finite measure for $g$. For example, if $f$ preserves a measure in the Lebesgue class, then so does $g$. \end{rema} \begin{rema} \label{r.surfaces-classification} Consider a $\mathbb{Z}^2$ action on $\mathbb{R}^2$ generated by some homeomorphisms $U$ and $V$. Assume this action is properly discontinuous. Then the quotient space $\mathbb{R}^2/\langle U,V\rangle$ is a topological surface (\emph{i.e.} a Hausdorff topological manifold of dimension 2) whose fundamental group is isomorphic to $\mathbb{Z}^2$. According to the classification of surfaces (see \emph{e.g.} \cite{Ric}), it follows that this quotient space must be homeomorphic to $\mathbb{T}^2$.
This is a key ingredient of the following proof that will play the part of Fried's theorem in Kwapisz's original proof. \end{rema} \begin{proof}[Proof of Item 1 of the theorem.] In view of Remark~\ref{r.surfaces-classification}, it is enough to prove that the action of $\mathbb{Z}^2$ on $\mathbb{R}^2$ generated by the homeomorphisms $U$ and $V$ is properly discontinuous: we consider a ball $B(0,R)$ in $\mathbb{R}^2$, and we aim to prove that $U^mV^n(B(0,R))$ is disjoint from $B(0,R)$ whenever $\|(m,n)\|$ is large enough. We denote by $\mathrm{D}(H)$ the \emph{displacement set} of the homeomorphism $H$ of the plane, that is, the set of all vectors of the type $H(z)-z$ where $z$ ranges over $\mathbb{R}^2$. Obviously $\mathrm{D}(S)=\{(1,0)\}$ and $\mathrm{D}(T)=\{(0,1)\}$. By assumption, the rotation set $\rho_{S,T}(F)$ is disjoint from the line $\Phi L^{-1}(\Delta_\infty)$. Therefore, we may consider a compact neighbourhood $\mathcal{O}$ of $\rho_{S,T}(F)$ so that $$\mathrm{dist}(\mathcal{O},\mathbb{R}^2\cap \Phi L^{-1}(\Delta_\infty))>\epsilon>0.$$ From the definition of the rotation set, one immediately sees that there exists an integer $k_0$ so that $\mathrm{D}(F^k)\subset k\mathcal{O}$ for $|k|\geq k_0$. Since $\mathrm{D}(F^k)$ is bounded for every $|k|<k_0$, one gets that there exists $R'$ so that, for every $k\in\mathbb{Z}$, $$\mathrm{D}(F^k)\subset B(0,R')+k\mathcal{O}.$$ Now recall that $U = S^{a_1}T^{b_1}F^{-c_1}$ and $V = S^{a_2}T^{b_2}F^{-c_2}$.
Since $S$, $T$ and $F$ commute, one immediately gets, for every $(m,n)\in\mathbb{Z}^2$, $$\mathrm{D}(U^mV^n)=(ma_1+na_2,mb_1+nb_2)-\mathrm{D}(F^{mc_1+nc_2}).$$ Using the inclusion above, we obtain that, for every $(m,n)\in\mathbb{Z}^2$, $$\mathrm{D}(U^mV^n)\subset B(0,R')+(ma_1+na_2,mb_1+nb_2)-(mc_1+nc_2) \mathcal{O},$$ and therefore \begin{eqnarray*} U^mV^n(B(0,R)) & \subset & B(0,R+R')+(ma_1+na_2,mb_1+nb_2)-(mc_1+nc_2) \mathcal{O}\\ & = & B(0,R+R')+(mc_1+nc_2)\Phi L^{-1}([m:n:0])-(mc_1+nc_2)\mathcal{O}\\ & \subset & B(0,R+R')+(mc_1+nc_2)\left(\Phi L^{-1}(\Delta_\infty)-\mathcal{O}\right)\\ & \subset & B(0,R+R')+(mc_1+nc_2)(\mathbb{R}^2\setminus B(0,\epsilon)). \end{eqnarray*} (The last inclusion comes from the definition of the neighbourhood $\mathcal{O}$.) On the one hand, if $|mc_1+nc_2|$ is larger than $\frac{2R+R'}{\epsilon}$, the last inclusion above implies that $U^mV^n(B(0,R))$ is disjoint from $B(0,R)$, as desired. On the other hand, since $\mathcal{O}$ is compact, we can find $R''$ so that $(mc_1+nc_2) \mathcal{O}\subset B(0,R'')$ whenever $|mc_1+nc_2|\leq \frac{2R+R'}{\epsilon}$. As a consequence, if $|mc_1+nc_2|$ is smaller than $\frac{2R+R'}{\epsilon}$, but $\|(ma_1+na_2,mb_1+nb_2)\|$ is larger than $2R+R'+R''$, then the first inclusion above implies that $U^mV^n(B(0,R))$ is disjoint from $B(0,R)$ as desired. To conclude, it remains to notice that since the vectors $(a_1,b_1,c_1)$ and $(a_2,b_2,c_2)$ are non-collinear (recall that $L^{-1}$ has rank three), the map $(m,n) \mapsto (ma_1+na_2,mb_1+nb_2,mc_1+nc_2)$ is a proper embedding of $\mathbb{Z}^2$ into $\mathbb{R}^3$. Thus the two quantities $|mc_1+nc_2|$ and $\|(ma_1+na_2,mb_1+nb_2)\|$ cannot remain bounded at the same time when $\|(m,n)\|$ is large. This shows that $U^mV^n(B(0,R))$ is disjoint from $B(0,R)$ provided that $\|(m,n)\|$ is bigger than some constant. In other words, the action of $\mathbb{Z}^2$ on $\mathbb{R}^2$ generated by the homeomorphisms $U$ and $V$ is properly discontinuous.
According to Remark~\ref{r.surfaces-classification}, this implies that $\mathbb{R}^2/\langle U,V\rangle$ is homeomorphic to $\mathbb{T}^2$. \end{proof} \begin{proof}[Proof of Item 2 of the theorem.] Consider a compact subset $K$ of $\mathbb{R}^2$ and two sequences $(m_k,n_k,p_k)_{k\geq 0}$ and $(\mu_k,\nu_k,\pi_k)_{k\geq 0}$ of elements of $\mathbb{Z}^3$ which are related by $$(\mu_k,\nu_k,\pi_k)=L(m_k,n_k,p_k).$$ Obviously, $(m_k,n_k,p_k)$ tends to infinity if and only if $(\mu_k,\nu_k,\pi_k)$ tends to infinity. Now observe that $$S^{-m_k}T^{-n_k}F^{p_k}=U^{-\mu_k}V^{-\nu_k}G^{\pi_k}.$$ In particular, $(S^{-m_k}T^{-n_k}F^{p_k}(K))\cap K \neq \emptyset$ if and only if $(U^{-\mu_k}V^{-\nu_k}G^{\pi_k}(K))\cap K \neq \emptyset$. Finally, $\left(m_k/p_k,n_k/p_k\right)$ converges to $w\in\mathbb{R}^2$ if and only if $\left(\mu_k/\pi_k,\nu_k/\pi_k\right)=\widehat L\left(m_k/p_k,n_k/p_k\right)$ converges to the vector $\widehat L(w)$. This shows that $\rho_{U,V}(G)=\widehat L(\rho_{S,T}(F))$. \end{proof} \noindent \emph{Fran\c{c}ois B\'eguin}\\ LAGA, CNRS UMR 7539, Universit\'e Paris 13, 93430 Villetaneuse, France. \noindent \emph{Sylvain Crovisier}\\ LMO, CNRS UMR 8628, Universit\'e Paris-Sud 11, 91405 Orsay, France. \noindent \emph{Fr\'ed\'eric Le Roux}\\ IMJ-PRG, CNRS UMR 7586, Université Pierre et Marie Curie, 75005 Paris, France. \end{document}
\begin{document} \title{\sc On the strong chromatic index and maximum induced matching of tree-cographs, permutation graphs and chordal bipartite graphs} \begin{abstract} We show that there exist linear-time algorithms that compute the strong chromatic index and a maximum induced matching of tree-cographs when the decomposition tree is a part of the input. We also show that there exist efficient algorithms for the strong chromatic index of (bipartite) permutation graphs and of chordal bipartite graphs. \end{abstract} \setcounter{page}{0} \section{Introduction} \begin{definition}[\cite{kn:cameron}] An {\em induced matching\/} in a graph $G$ is a set of edges, no two of which meet a common vertex or are joined by an edge of $G$. The size of an induced matching is the number of edges in the induced matching. An induced matching is maximum if its size is largest among all possible induced matchings. \end{definition} \begin{definition}[\cite{kn:fouquet}] Let $G=(V,E)$ be a graph. A {\em strong edge coloring\/} of $G$ is a proper edge coloring such that no edge is adjacent to two edges of the same color. A strong edge coloring of a graph is a partition of its edges into induced matchings. The {\em strong chromatic index\/} of $G$ is the minimal integer $k$ such that $G$ has a strong edge coloring with $k$ colors. We denote the strong chromatic index of $G$ by $s\chi^{\prime}(G)$. \end{definition} Equivalently, a strong edge coloring of $G$ is a vertex coloring of $L(G)^2$, the square of the linegraph of $G$. The strong chromatic index problem can be solved in polynomial time for chordal graphs~\cite{kn:cameron} and for partial $k$-trees~\cite{kn:salavatipour}, and can be solved in linear time for trees~\cite{kn:faudree}. However, it is NP-complete to find the strong chromatic index for general graphs \cite{kn:cameron,kn:mahdian,kn:stockmeyer} or even for planar bipartite graphs~\cite{kn:hocquard}.
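To make the definition of an induced matching concrete, the following sketch (a hypothetical helper for illustration only, not one of the paper's algorithms) checks the two conditions above directly on an edge list:

```python
def is_induced_matching(edges, matching):
    """Check the definition: no two edges of the matching meet a
    common vertex or are joined by an edge of G."""
    adj = {frozenset(e) for e in edges}
    m = list(matching)
    for i in range(len(m)):
        for j in range(i + 1, len(m)):
            a, b = m[i]
            c, d = m[j]
            if {a, b} & {c, d}:          # the two edges share a vertex
                return False
            # the two edges are joined by an edge of G
            if any(frozenset((u, v)) in adj
                   for u in (a, b) for v in (c, d)):
                return False
    return True

# In the path 1-2-3-4-5, the edges {1,2} and {3,4} form a matching but
# not an induced one, since the edge {2,3} of G joins them; {1,2} and
# {4,5} do form an induced matching.
path = [(1, 2), (2, 3), (3, 4), (4, 5)]
```

The quadratic pair check mirrors the definition literally; it is meant as a specification, not as the linear-time machinery developed below.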
In this paper, we show that there exist linear-time algorithms that compute the strong chromatic index and a maximum induced matching of tree-cographs when the decomposition tree is a part of the input. We also show that there exist efficient algorithms for the strong chromatic index of (bipartite) permutation graphs and of chordal bipartite graphs. The class of tree-cographs was introduced by Tinhofer in~\cite{kn:tinhofer}. \begin{definition} Tree-cographs are defined recursively by the following rules. \begin{enumerate}[\rm 1.] \item Every tree is a tree-cograph. \item If $G$ is a tree-cograph then the complement $\Bar{G}$ of $G$ is also a tree-cograph. \item For $k \geq 2$, if $G_1,\ldots,G_k$ are connected tree-cographs then their disjoint union is also a tree-cograph. \end{enumerate} \end{definition} Let $G$ be a tree-cograph. A decomposition tree for $G$ consists of a rooted binary tree $T$ in which each internal node, including the root, is labeled as a join node $\otimes$ or a union node $\oplus$. The leaves of $T$ are labeled by trees or complements of trees. It is easy to see that a decomposition tree for a tree-cograph $G$ can be obtained in $O(n^3)$ time. \section{The strong chromatic index of tree-cographs} The {\em linegraph} $L(G)$ of a graph $G$ is the intersection graph of the edges of $G$~\cite{kn:beineke}. It is well known that if $G$ is a tree, then the linegraph $L(G)$ of $G$ is a claw-free blockgraph~\cite{kn:harary}. A graph is {\em chordal} if it has no induced cycles of length more than three~\cite{kn:dirac}. Notice that blockgraphs are chordal. A vertex $x$ in a graph $G$ is {\em simplicial} if its neighborhood $N(x)$ induces a clique in $G$. Chordal graphs are characterized by the property of having a {\em perfect elimination ordering}, which is an ordering $[v_1,\ldots,v_n]$ of the vertices of $G$ such that $v_i$ is simplicial in the graph induced by $\{v_i,\ldots,v_n\}$.
A perfect elimination ordering of a chordal graph can be computed in linear time~\cite{kn:rose}. This implies that chordal graphs have at most $n$ maximal cliques, and the clique number can be computed in linear time, where the {\em clique number} of $G$, denoted by $\omega(G)$, is the number of vertices in a maximum clique of $G$. \begin{theorem}[\cite{kn:cameron}] If $G$ is a chordal graph then $L(G)^2$ is also chordal. \end{theorem} \begin{theorem}[\cite{kn:cameron3}] \label{weakly chordal} Let $k \in \mathbb{N}$ and let $k \geq 4$. Let $G$ be a graph and assume that $G$ has no induced cycles of length at least $k$. Then $L(G)^2$ has no induced cycles of length at least $k$. \end{theorem} \begin{lemma} Tree-cographs have no induced cycles of length more than four. \end{lemma} \begin{proof} Let $G$ be a tree-cograph. First observe that trees are bipartite. It follows that complements of trees have no induced cycles of length more than four. We prove the claim by induction on the depth of a decomposition tree for $G$. If $G$ is the union of two tree-cographs $G_1$ and $G_2$ then the claim follows by induction since any induced cycle is contained in one of $G_1$ and $G_2$. Assume $G$ is the join of two tree-cographs $G_1$ and $G_2$. Assume that $G$ has an induced cycle $C$ of length at least five. We may assume that $C$ has at least one vertex in each of $G_1$ and $G_2$. As one of $G_1$ and $G_2$ has more than two vertices of $C$, $C$ has a vertex of degree at least three, which is a contradiction. \qed\end{proof} \begin{lemma} \label{complement of tree} Let $T$ be a tree. Then $L(\Bar{T})^2$ is a clique. \end{lemma} \begin{proof} Consider two non-edges $\{a,b\}$ and $\{p,q\}$ of $T$. If the non-edges share an endpoint then they are adjacent in $L(\Bar{T})^2$ since they are already adjacent in $L(\Bar{T})$. Otherwise, since $T$ is a tree, at least one pair of $\{a,p\}$, $\{a,q\}$, $\{b,p\}$ and $\{b,q\}$ is a non-edge in $T$, otherwise $T$ has a 4-cycle. 
By definition, $\{a,b\}$ and $\{p,q\}$ are adjacent in $L(\Bar{T})^2$. \qed\end{proof} If $G$ is the union of two tree-cographs $G_1$ and $G_2$ then \[\omega(L(G)^2)=\max\{\omega(L(G_1)^2),\omega(L(G_2)^2)\}.\] The following lemma deals with the join of two tree-cographs. \begin{lemma} \label{join} Let $P$ and $Q$ be tree-cographs and let $G$ be the join of $P$ and $Q$. Let $X$ be the set of edges that have one endpoint in $P$ and one endpoint in $Q$. Then \begin{enumerate}[\rm (a)] \item $X$ forms a clique in $L(G)^2$, \item every edge of $X$ is adjacent in $L(G)^2$ to every edge of $P$ and to every edge of $Q$, and \item every edge of $P$ is adjacent in $L(G)^2$ to every edge of $Q$. \end{enumerate} \end{lemma} \begin{proof} This is an immediate consequence of the definitions. \qed\end{proof} For $k \geq 3$, a {\em $k$-sun} is a graph which consists of a clique with $k$ vertices and an independent set with $k$ vertices. There exist orderings $c_1,\ldots,c_k$ and $s_1,\ldots,s_k$ of the vertices in the clique and independent set such that each $s_i$ is adjacent to $c_i$ and to $c_{i+1}$ for $i=1,\ldots,k-1$ and such that $s_k$ is adjacent to $c_k$ and $c_1$. A graph is {\em strongly chordal} if it is chordal and has no $k$-sun, for $k \geq 3$~\cite{kn:farber}. \begin{figure} \caption{A $3$-sun, a gem and a claw} \label{3-sun} \end{figure} \begin{lemma} \label{strongly chordal} Let $T$ be a tree. Then $L(T)^2$ is strongly chordal. \end{lemma} \begin{proof} When $T$ is a tree then $L(T)$ is a blockgraph. Obviously, blockgraphs are strongly chordal. Lubiw proves in~\cite{kn:lubiw} that all powers of strongly chordal graphs are strongly chordal. \qed\end{proof} We strengthen the result of Lemma~\ref{strongly chordal} as follows. {\em Ptolemaic graphs} are graphs that are both distance hereditary and chordal~\cite{kn:howorka}. Ptolemaic graphs are gem-free chordal graphs. The following theorem characterizes ptolemaic graphs. 
\begin{theorem}[\cite{kn:howorka}] A connected graph is ptolemaic if and only if for all pairs of maximal cliques $C_1$ and $C_2$ with $C_1 \cap C_2 \neq \varnothing$, the intersection $C_1 \cap C_2$ separates $C_1 \setminus C_2$ from $C_2 \setminus C_1$. \end{theorem} \begin{lemma} \label{ptolemaic} Let $T$ be a tree. Then $L(T)^2$ is ptolemaic. \end{lemma} \begin{proof} Consider $L(T)$. Let $C$ be a block and let $P$ and $Q$ be two blocks that each intersects $C$ in one vertex. Since $L(T)$ is claw-free, the intersections of $P \cap C$ and $Q \cap C$ are distinct vertices. The intersection of the maximal cliques $P \cup C$ and $Q \cup C$, which is $C$, separates $P \setminus Q$ and $Q \setminus P$ in $L(T)^2$. Since all intersecting pairs of maximal cliques are of this form, this proves the lemma. \qed\end{proof} \begin{corollary} \label{char} Let $G$ be a tree-cograph. Then $L(G)^2$ has a decomposition tree with internal nodes labeled as join nodes and union nodes and where the leaves are labeled as ptolemaic graphs. \end{corollary} {F}rom Corollary~\ref{char} it follows that $L(G)^2$ is perfect~\cite{kn:chudnovsky}, that is, $L(G)^2$ has no odd holes or odd antiholes~\cite{kn:lovasz}. This implies that the chromatic number of $L(G)^2$ is equal to the clique number. Therefore, to compute the strong chromatic index of a tree-cograph $G$ it suffices to compute the clique number of $L(G)^2$. \begin{theorem} Let $G$ be a tree-cograph and let $T$ be a decomposition tree for $G$. There exists a linear-time algorithm that computes the strong chromatic index of $G$. \end{theorem} \begin{proof} First assume that $G=(V,E)$ is a tree. Then the strong chromatic index of $G$ is \begin{equation} \label{form1} s\chi^{\prime}(G)=\max \;\{\; d(x)+d(y)-1 \;|\; (x,y) \in E\;\} \end{equation} where $d(x)$ is the degree of the vertex $x$. To see this notice that Formula~(\ref{form1}) gives the clique number of $L(G)^2$. Assume that $G$ is the complement of a tree. 
By Lemma~\ref{complement of tree} the strong chromatic index is the number of edges of $G$, which is \[s\chi^{\prime}(G)=\binom{n}{2} - (n-1).\] Assume that $G$ is the union of two tree-cographs $G_1$ and $G_2$. Then, obviously, \[s\chi^{\prime}(G)= \max \;\{\; s\chi^{\prime}(G_1), \;s\chi^{\prime}(G_2)\;\}.\] Finally, assume that $G$ is the join of two tree-cographs $G_1$ and $G_2$. Let $X$ be the set of edges of $G$ that have one endpoint in $G_1$ and the other in $G_2$. Then, by Lemma~\ref{join}, we have \[s\chi^{\prime}(G) = |X|+s\chi^{\prime}(G_1) + s\chi^{\prime}(G_2).\] The decomposition tree for $G$ has $O(n)$ nodes. For trees, the strong chromatic index can be computed in linear time. In all other cases, the evaluation of $s\chi^{\prime}(G)$ takes constant time. It follows that this algorithm runs in $O(n)$ time when a decomposition tree is a part of the input. \qed\end{proof} \section{Induced matching in tree-cographs} Consider a strong edge coloring of a tree-cograph $G$. Then each color class is an induced matching in $G$, that is, an independent set in $L(G)^2$~\cite{kn:cameron}. In this section we show that the maximum size of an induced matching in $G$ can be computed in linear time. Again, we assume that a decomposition tree is a part of the input. \begin{theorem} \label{induced matching} Let $G$ be a tree-cograph and let $T$ be a decomposition tree for $G$. Then the maximum number of edges in an induced matching in $G$ can be computed in linear time. \end{theorem} \begin{proof} In this proof we denote the cardinality of a maximum induced matching in a graph $G$ by $i\nu(G)$. First assume that $G$ is a tree. Since the maximum induced matching problem can be formulated in monadic second-order logic, there exists a linear-time algorithm to compute the cardinality of a maximum induced matching in $G$ \cite{kn:brandstadt,kn:fricke}. Assume that $G$ is the complement of a tree. By Lemma~\ref{complement of tree} $L(G)^2$ is a clique.
Thus the cardinality of a maximum induced matching in $G$ is one if $G$ has at least one edge and otherwise it is zero. Assume that $G$ is the union of two tree-cographs $G_1$ and $G_2$. Then \[i\nu(G) = i\nu(G_1)+i\nu(G_2).\] Assume that $G$ is the join of two tree-cographs $G_1$ and $G_2$. Then \[i\nu(G)= \max\;\{\;i\nu(G_1), \;i\nu(G_2), \;1\;\}.\] This proves the theorem. \qed\end{proof} \section{Permutation graphs} A permutation diagram on $n$ points is obtained as follows. Consider two horizontal lines $L_1$ and $L_2$ in the Euclidean plane. For each line $L_i$ consider a linear ordering $\prec_i$ of $\{1,\ldots,n\}$ and put points $1,\ldots,n$ on $L_i$ in this order. For $k=1,\ldots,n$ connect the two points with the label $k$ by a straight line segment. \begin{definition}[\cite{kn:golumbic}] A graph $G$ is a permutation graph if it is the intersection graph of the line segments in a permutation diagram. \end{definition} \begin{figure} \caption{A permutation graph and a permutation diagram} \end{figure} Consider two horizontal lines $L_1$ and $L_2$ and on each line $L_i$ choose $n$ intervals. Connect the left and right endpoints of the $k^{\mathrm{th}}$ interval on $L_1$ with the left and right endpoints, respectively, of the $k^{\mathrm{th}}$ interval on $L_2$. Thus we obtain a collection of $n$ trapezoids. We call this a trapezoid diagram. \begin{definition} A graph is a trapezoid graph if it is the intersection graph of a collection of trapezoids in a trapezoid diagram. \end{definition} \begin{lemma} \label{permutation} If $G$ is a permutation graph then $L(G)^2$ is a trapezoid graph. \end{lemma} \begin{proof} Consider a permutation diagram for $G$. Each edge of $G$ corresponds to two intersecting line segments in the diagram. The four endpoints of a pair of intersecting line segments define a trapezoid. Two vertices in $L(G)^2$ are adjacent exactly when the corresponding trapezoids intersect (see Proposition~1 in~\cite{kn:cameron2}).
\qed\end{proof} \begin{theorem} There exists an $O(n^4)$ algorithm that computes a strong edge coloring in permutation graphs. \end{theorem} \begin{proof} Dagan, {\em et al.\/},~\cite{kn:dagan} show that a trapezoid graph can be colored by a greedy coloring algorithm. It is easy to see that this algorithm can be adapted so that it finds a strong edge coloring in permutation graphs. \qed\end{proof} \begin{remark} A somewhat faster coloring algorithm for trapezoid graphs appears in~\cite{kn:felsner}. Their algorithm runs in $O(n \log n)$ time where $n$ is the number of vertices in the trapezoid graph. An adaptation of their algorithm yields a strong edge coloring for permutation graphs that runs in $O(m \log n)$ time, where $n$ and $m$ are the number of vertices and edges in the permutation graph. \end{remark} \subsection{Bipartite permutation graphs} A graph is a {\em bipartite permutation graph} if it is not only a bipartite graph but also a permutation graph~\cite{kn:spinrad}. Let $G=(A,B,E)$ be a bipartite permutation graph with color classes $A$ and $B$. \begin{lemma} \label{bip perm} Let $G$ be a bipartite permutation graph. Then $L(G)^2$ is an interval graph. \end{lemma} \begin{proof} We first show that $L(G)^2$ is chordal. We may assume that $L(G)^2$ is connected. Let $x$ and $y$ be two non-adjacent vertices in a graph $H$. An $x,y$-separator is a set $S$ of vertices whose removal leaves $x$ and $y$ in distinct components. The separator is a minimal $x,y$-separator if no proper subset of $S$ separates $x$ and $y$. A set $S$ is a minimal separator if there exist non-adjacent vertices $x$ and $y$ such that $S$ is a minimal $x,y$-separator. Recall that Dirac characterizes chordal graphs by the property that every minimal separator is a clique~\cite{kn:dirac}. Consider the trapezoid diagram. Let $S$ be a minimal separator in the trapezoid graph $L(G)^2$ and consider removing the trapezoids that are in $S$ from the diagram.
Every component of $L(G)^2-S$ is a connected part in the diagram. Consider the left-to-right ordering of the components in the diagram. Since $S$ is a minimal separator there must exist two {\em consecutive\/} components $C_1$ and $C_2$ such that every vertex of $S$ has a neighbor in both $C_1$ and $C_2$~\cite{kn:bodlaender}. Assume that $S$ has two non-adjacent trapezoids $t_1$ and $t_2$. Each $t_i$ is characterized by two crossing line segments $\{a_i,b_i\}$ of the permutation diagram. Since $t_1$ and $t_2$ are not adjacent, every line segment in $\{a_1,b_1\}$ is parallel to every line segment in $\{a_2,b_2\}$. Each trapezoid $t_i$ intersects both components $C_1$ and $C_2$. Since these pairs of line segments are parallel, we have that, for some $i \in \{1,2\}$, \begin{enumerate}[\rm (1)] \item $N_G(a_i) \cap C_1 \subseteq N_G(a_{3-i}) \cap C_1$, \item $N_G(a_i) \cap C_1 \subseteq N_G(b_{3-i}) \cap C_1$, \item $N_G(b_i) \cap C_1 \subseteq N_G(a_{3-i}) \cap C_1$ and \item $N_G(b_i) \cap C_1 \subseteq N_G(b_{3-i}) \cap C_1$, \end{enumerate} and the reverse inclusions hold for $C_2$. Each trapezoid $t_i$ has at least one line segment of $\{a_i,b_i\}$ intersecting a line segment of $C_i$. By the neighborhood containments this implies that $G$ has a triangle, which contradicts that $G$ is bipartite. This proves that $S$ is a clique and by Dirac's characterization $L(G)^2$ is chordal. Lekkerkerker and Boland prove in~\cite{kn:lekkerkerker} that a graph $H$ is an interval graph if and only if $H$ is chordal and $H$ has no asteroidal triple. It is easy to see that a permutation graph has no asteroidal triple~\cite{kn:kloks}. Cameron proves in~\cite{kn:cameron2} (and independently Chang proves in~\cite{kn:chang}) that $L(H)^2$ is AT-free whenever a graph $H$ is AT-free. Thus, since $L(G)^2$ is chordal and AT-free, $L(G)^2$ is an interval graph. This proves the lemma.
\qed\end{proof} Chang proves in~\cite{kn:chang} that there exists a linear-time algorithm that computes a maximum induced matching in bipartite permutation graphs. We show that there is a simple linear-time algorithm that computes the strong chromatic index of bipartite permutation graphs. \begin{theorem} There exists a linear-time algorithm that computes the strong chromatic index of bipartite permutation graphs. \end{theorem} \begin{proof} Let $G=(A,B,E)$ be a bipartite permutation graph and consider a permutation diagram for $G$. Let $[a_1,\ldots,a_s]$ and $[b_1,\ldots,b_t]$ be left-to-right orderings of the vertices of $A$ and $B$ on the top line of the diagram. Assume that $a_1$ is the left-most endpoint of a line segment on the top line. We may assume that the line segment of $a_1$ intersects the line segment of $b_1$. Consider the maximal complete bipartite subgraph $M$ of $G$ that consists of the following vertices. \begin{enumerate}[\rm (a)] \item $M$ contains $a_1$ and $b_1$, \item $M$ contains all the vertices of $A$ whose endpoint on the top line is to the left of $b_1$, and \item $M$ contains all the vertices of $B$ whose endpoint on the bottom line is to the left of $a_1$. \end{enumerate} Notice that the edges of $M$ are the edges of the complete bipartite subgraph of $G$ induced by \[N[a_1] \cup N[b_1].\] Extend the set of edges in $M$ with the edges in $G$ that have one endpoint in $M$. That is, call the set of edges in $M$ plus the edges with one endpoint in $N(a_1) \cup N(b_1)$ the extension $\Bar{M}$ of $M$: \[\Bar{M}=\{\;\{p,q\}\in E\;|\; p \in M \quad\text{or}\quad q \in M\;\}.\] Notice that $\Bar{M}$ is the unique maximal clique of $L(G)^2$ that contains the simplicial edge $\{a_1,b_1\}$. The second maximal clique in $L(G)^2$ is found by the process described above for the line segments induced by $(A \setminus \{a_1\}) \cup B$.
Likewise, the third maximal clique in $L(G)^2$ is found by repeating the process for the line segments induced by $A \cup (B \setminus \{b_1\})$. Next, remove the vertices $a_1$ {\em and\/} $b_1$ and repeat the three steps described above. It is easy to see that the list obtained in this manner contains all the maximal cliques of $L(G)^2$. Notice also that this algorithm can be implemented to run in linear time. Since $L(G)^2$ is perfect, the chromatic number is equal to the clique number, so it suffices to keep track of the cardinalities of the maximal cliques that are found in the process described above. \qed\end{proof} \section{Chordal bipartite graphs} \begin{definition}[\cite{kn:golumbic2,kn:huang}] A bipartite graph is chordal bipartite if it has no induced cycles of length more than four. \end{definition} In contrast to bipartite permutation graphs, $L(G)^2$ is not necessarily chordal when $G$ is chordal bipartite. A counterexample is shown in Figure~\ref{counterexample}. \begin{figure} \caption{A chordal bipartite graph $G$ for which $L(G)^2$ is not chordal.} \label{counterexample} \end{figure} A graph is {\em weakly chordal} if it has no induced cycle of length more than four and no induced complement of such a cycle~\cite{kn:hayward2}. Weakly chordal graphs are perfect. Notice that chordal bipartite graphs are weakly chordal. Cameron, Sritharan and Tang prove in~\cite{kn:cameron3} (and independently Chang proves in~\cite{kn:chang}) that $L(G)^2$ is weakly chordal whenever $G$ is weakly chordal. Thus, if $G$ is chordal bipartite then $L(G)^2$ is perfect and so, in order to compute the strong chromatic index of $G$, it is sufficient to compute the clique number of $L(G)^2$ (see also~\cite{kn:abueida}).
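To make this reduction concrete, the following Python sketch (our illustration, not one of the algorithms of this paper; all function names are ours, and brute force replaces the polynomial-time methods discussed here) builds $L(G)^2$ for a small chordal bipartite graph, a path on five vertices, and checks that the clique number of $L(G)^2$ equals its chromatic number, that is, the strong chromatic index of $G$.

```python
from itertools import combinations, product

def strong_adjacent(e, f, edges):
    # e and f are adjacent in L(G)^2 iff they share an endpoint,
    # or some edge of G meets both of them (distance two in L(G)).
    if set(e) & set(f):
        return True
    return any(set(g) & set(e) and set(g) & set(f) for g in edges)

def clique_number(verts, adj):
    # Brute-force clique number, trying sizes from large to small.
    for r in range(len(verts), 0, -1):
        for s in combinations(verts, r):
            if all(adj(a, b) for a, b in combinations(s, 2)):
                return r
    return 0

def chromatic_number(verts, adj):
    # Brute-force chromatic number over all colorings with k colors.
    n = len(verts)
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[i] != col[j]
                   for i, j in combinations(range(n), 2)
                   if adj(verts[i], verts[j])):
                return k
    return n

# A path on five vertices: a tree, hence chordal bipartite.
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
adj = lambda e, f: strong_adjacent(e, f, edges)

omega = clique_number(edges, adj)
chi = chromatic_number(edges, adj)
# Perfection of L(G)^2 gives chi == omega; for a tree this also
# matches max d(x)+d(y)-1 over edges (x,y), here 2+2-1 = 3.
print(omega, chi)  # prints: 3 3
```

Brute force is, of course, exponential in the number of edges; it serves only to illustrate the objects $L(G)^2$, $\omega$, and $\chi$ on a toy instance.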
It is well-known that the clique number of a perfect graph can be computed in polynomial time~\cite{kn:grotschel}.\footnote{Actually, this paper shows that for any graph $G$ with $\omega(G)=\chi(G)$ the values of these parameters can be determined in polynomial time. The reason is that Lov\'asz' bound $\vartheta(G)$ for the Shannon capacity of a graph can be computed in polynomial time for all graphs, {\em via\/} the ellipsoid method, and the parameter $\vartheta(G)$ is sandwiched between $\omega(G)$ and $\chi(G)$.} The algorithm presented in~\cite{kn:hayward} to compute the clique number of weakly chordal graphs runs in $O(n^3)$ time, where $n$ is the number of vertices in the graph. A direct application of their algorithm to the problem of computing the strong chromatic index of a chordal bipartite graph $G$ involves computing the graph $L(G)^2$. This graph has $m$ vertices, where $m$ is the number of edges in $G$. This gives a time bound of $O(n^6)$ for computing the strong chromatic index of a chordal bipartite graph (see also~\cite{kn:abueida}). In this section we show that there is a more efficient method. \begin{definition}[\cite{kn:yannakakis}] A bipartite graph $G=(A,B,E)$ is a chain graph if there exists an ordering $a_1, a_2, \ldots, a_{|A|}$ of the vertices in $A$ such that $N(a_1) \subseteq N(a_2) \subseteq \cdots \subseteq N(a_{|A|})$. \end{definition} Chain graphs are sometimes called {\em difference graphs}~\cite{kn:hammer}. Equivalently, a graph $G=(V,E)$ is a chain graph if there exists a positive real number $T$ and a real number $w(x)$ for every vertex $x \in V$ such that $|w(x)| < T$ for every $x \in V$ and such that, for any pair of vertices $x$ and $y$, $\{x,y\} \in E$ if and only if $|w(x)-w(y)| \geqslant T$ $($see Figure~\ref{fig difference} for an example$)$. \begin{figure} \caption{An example of a difference graph.\label{fig difference}} \end{figure} Chain graphs can be characterized in many ways~\cite{kn:hammer}.
For example, a graph is a chain graph if and only if it has no induced $K_3$, $2K_2$ or $C_5$~\cite[Proposition~2.6]{kn:hammer}. Thus a bipartite graph is a chain graph if and only if it has no induced $2K_2$. Abueida, {\em et al.\/}, prove that, if $G$ is a bipartite graph that does not contain an induced $C_6$, then a maximal clique in $L(G)^2$ is a maximal chain subgraph of $G$~\cite{kn:abueida}. Notice that the three pairwise nonadjacent edges of a $C_6$ form a clique in $L(G)^2$; however, these edges do not form a chain subgraph. Thus, computing the clique number of $L(G)^2$ for $C_6$-free bipartite graphs $G$ is equivalent to finding the maximum number of edges that form a chain graph in $G$. In~\cite{kn:abueida} the authors prove that, if $G$ is a $C_6$-free bipartite graph, then \[\chi(G^{\ast})=ch(G),\] where $G^{\ast}$ is the complement of $L(G)^2$ and $ch(G)$ is the minimum number of chain subgraphs of $G$ that cover the edges of $G$. An {\em antimatching} in a graph $G$ is a collection of edges which forms a clique in $L(G)^2$~\cite{kn:mahdian}. It is easy to see that finding a chain subgraph with the maximum number of edges in general graphs is NP-complete. Mahdian mentions in his paper that the complexity of maximum antimatching in simple bipartite graphs is open~\cite{kn:mahdian}. \begin{lemma} Let $G$ be a bipartite graph. Finding a maximum set of edges that form a chain subgraph of $G$ is NP-complete. \end{lemma} \begin{proof} Let $G=(A,B,E)$ be a bipartite graph. Let $C(G)$ be the graph obtained from $G$ by making cliques of $A$ and $B$. Notice that $G$ is a chain graph if and only if $C(G)$ is chordal. Yannakakis shows in~\cite{kn:yannakakis} that adding a minimum set of edges to $C(G)$ such that this graph becomes chordal is NP-complete. \noindent Consider the bipartite complement $G^{\prime}$ of $G$.
Adding a minimum set of edges such that $G$ becomes a chain graph is equivalent to removing a minimum set of edges from $G^{\prime}$ such that the remaining graph is a chain graph. This completes the proof. \qed\end{proof} In the following theorem we present our result for chordal bipartite graphs. \begin{theorem} \label{chordal bip} There exists an $O(n^4)$ algorithm that computes the strong chromatic index of chordal bipartite graphs. \end{theorem} \begin{proof} Let $G$ be chordal bipartite with color classes $C$ and $D$. Consider the bipartite adjacency matrix $A$ in which rows correspond to vertices of $C$ and columns correspond to vertices of $D$. An entry of this matrix is one if the corresponding vertices are adjacent and zero otherwise. \noindent It is well-known that $G$ is chordal bipartite if and only if $A$ is totally balanced. Notice that a chain graph has a bipartite adjacency matrix that is triangular. So we look for a maximal submatrix of $A$ which is triangular after permuting rows and columns. \noindent Anstee and Farber and Lehel prove that a totally balanced matrix which has no repeated columns can be completed into a `maximal totally balanced matrix'~\cite{kn:anstee2,kn:lehel}. If $A$ has $n$ rows then this completion has $\binom{n+1}{2}+1$ columns. The rows and columns of a maximal totally balanced matrix can be permuted such that the adjacency matrix gets the following form. \begin{figure} \caption{The standard form of a maximal totally balanced matrix.\label{fig totallybalanced}} \end{figure} \noindent One can easily deal with repeated columns in $A$ by giving the vertices a weight. When the matrix has the desired form, one can easily find the maximal triangular submatrix in linear time. Anstee and Farber give a rough bound of $O(n^5)$ for finding the completion, but faster algorithms are given by Paige and Tarjan and by Lubiw~\cite{kn:paige,kn:lubiw}. \qed\end{proof} \end{document}
\begin{document} \title{\textbf{Bures and Sj\"{o}qvist Metrics over Thermal State Manifolds for Spin Qubits and Superconducting Flux Qubits}} \author{\textbf{Carlo Cafaro}$^{1}$ and \textbf{Paul M.\ Alsing}$^{2}$} \affiliation{$^{1}$SUNY Polytechnic Institute, 12203 Albany, New York, USA} \affiliation{$^{2}$Air Force Research Laboratory, Information Directorate, 13441 Rome, New York, USA} \begin{abstract} The interplay among differential geometry, statistical physics, and quantum information science has been increasingly gaining theoretical interest in recent years. In this paper, we present an explicit analysis of the Bures and Sj\"{o}qvist metrics over the manifolds of thermal states for specific spin qubit and superconducting flux qubit Hamiltonian models. While the two metrics both reduce to the Fubini-Study metric in the asymptotic limiting case of the inverse temperature approaching infinity for both Hamiltonian models, we observe that the two metrics are generally different when departing from the zero-temperature limit. In particular, we discuss this discrepancy in the case of the superconducting flux Hamiltonian model. We conclude that the two metrics differ in the presence of a nonclassical behavior specified by the noncommutativity of neighboring mixed quantum states. Such a noncommutativity, in turn, is quantified by the two metrics in different manners. Finally, we briefly discuss possible observable consequences of this discrepancy between the two metrics when using them to predict critical and/or complex behavior of physical systems of interest in quantum information science.
\end{abstract} \pacs{Quantum Computation (03.67.Lx), Quantum Information (03.67.Ac), Quantum Mechanics (03.65.-w), Riemannian Geometry (02.40.Ky), Statistical Mechanics (05.20.-y).} \maketitle \fancyhead[R]{\ifnum\value{page}<2\relax\else\thepage\fi} \thispagestyle{fancy} \section{Introduction} Geometry plays a special role in the description and, to a certain extent, in the understanding of various physical phenomena \cite{pettini07,karol06}. The concepts of length, area, and volume are ubiquitous in physics and their meaning can prove quite helpful in explaining physical phenomena from a more intuitive perspective \cite{cafaroprd22,cafaropre22}. The notions of \textquotedblleft\emph{longer}\textquotedblright\ and \textquotedblleft \emph{shorter}\textquotedblright\ are extensively used in virtually all disciplines \cite{cafarophysicaa22}. Indeed, geometric formulations of classical and quantum evolutions along with geometric descriptions of classical and quantum mechanical aspects of thermal phenomena are becoming increasingly important in science. Concepts, such as thermodynamic length, area law, and statistical volumes are omnipresent in geometric thermodynamics, general relativity, and statistical physics, respectively. The concept of entropy finds application in essentially any realm of science, from classical thermodynamics to quantum information science. The notions of \textquotedblleft\emph{hotter}\textquotedblright\ and \textquotedblleft \emph{cooler}\textquotedblright\ are widely used in many fields. Entropy can be used to provide measures of distinguishability of classical probability distributions, as well as pure and mixed quantum states. It can also be used to propose measures of complexity for classical motion, quantum evolution, and entropic motion on curved statistical manifolds underlying the entropic dynamics of physical systems for which only partial knowledge of relevant information can be obtained \cite{cafaroPhD,cafaroCSF,felice18}. 
Furthermore, entropy can also be used to express the degree of entanglement in a quantum state specifying a composite quantum system. For instance, concepts such as Shannon entropy, von Neumann entropy, and Umegaki relative entropy are ubiquitous in classical information science, quantum information theory, and information geometric formulations of mixed quantum state evolutions \cite{amari}, respectively. In this paper, inspired by the increasing theoretical interest in the interplay among differential geometry, statistical physics, and quantum information science \cite{zanardiprl07,zanardi07,pessoa21,silva21,silva21B,mera22}, we present an explicit analysis of the Bures \cite{bures69,uhlman76,hubner92} and Sj\"{o}qvist \cite{erik20} metrics over the manifolds of thermal states for the spin qubit and the superconducting flux qubit Hamiltonian models. From a chronological standpoint, the first physical application of the Sj\"{o}qvist interferometric metric occurs in the original paper by Sj\"{o}qvist himself in Ref. \cite{erik20}. Here, the author considered his newly proposed interferometric metric to quantify changes in behavior of a magnetic system in a thermal state under modifications of temperature and magnetic field intensity. When the temperature is kept constant while the externally applied magnetic field is changed, the Sj\"{o}qvist interferometric metric was shown to be physically linked to the magnetic susceptibility. This quantity, in turn, quantifies how much a material becomes magnetized when immersed in a magnetic field. A second application of the Sj\"{o}qvist interferometric metric appears in Refs. \cite{silva21,silva21B}. Here, this metric is used to characterize finite-temperature phase transitions in the framework of band insulators. In particular, the authors considered the massive Dirac Hamiltonian model for a band insulator in two spatial dimensions.
The corresponding Sj\"{o}qvist interferometric metric was calculated and expressed in terms of two physical parameters, the temperature, and the hopping parameter. Furthermore, the Sj\"{o}qvist interferometric metric was physically regarded as an interferometric susceptibility. Interestingly, it was observed in Refs. \cite{silva21,silva21B} a dramatic difference between the Sj\"{o}qvist interferometric metric and the Bures metric when studying topological phase transitions in these types of systems. Specifically, while the topological phase transition is captured for all temperatures in the case of the Sj\"{o}qvist interferometric metric, the topological phase transition is captured only at zero temperature in the case of the Bures metric. Clearly, the authors leave as an unsolved question the experimental observation of the singular behavior of the Sj\"{o}qvist interferometric metric in actual laboratory experiments in Refs. \cite{silva21,silva21B}. A third interesting work is the one presented in Ref. \cite{mera22}. Here the authors focus on the zero-temperature aspects of certain quantum systems in their (pure) ground quantum state. They consider two systems. The first system is described by the $XY$ anisotropic spin-$1/2$ chain with $N$-sites on a circle in the presence of an external magnetic field. The second model is the Haldane model, a two-dimensional condensed-matter lattice model \cite{haldane88}. In the first case, the two parameters that determine both the Hamiltonian and the parameter manifold are the anisotropy degree and the magnetic field intensity. In the second case, instead, the two key parameters become the on-site energy and the phase in the model being considered. Expressing the so-called quantum metric in terms of these tunable parameters, they study the thermodynamical limit of this metric along critical submanifolds of the whole parameter manifold. 
They observe a singular (regular) behavior of the metric along normal (tangent) directions to the critical submanifolds. Therefore, they conclude that tangent directions to critical manifolds are special. Finally, the authors also point out that it would be interesting to understand how their findings generalize to the finite-temperature case where states become mixed. Interestingly, without reporting any explicit analysis, the authors state they expect the Bures and the Sj\"{o}qvist metrics to assume different functional forms. In this paper, inspired by the previously mentioned relevance of comprehending the physical significance of choosing one metric over another in such geometric characterizations of physical aspects of quantum systems, we report a complete and straightforward analysis of the link between the Sj\"{o}qvist interferometric metric and the Bures metric for two special classes of nondegenerate mixed quantum states. Specifically, focusing on manifolds of thermal states for the spin qubit and the superconducting flux qubit Hamiltonian models, we observe that while the two metrics both reduce to the Fubini-Study metric \cite{provost80,wootters81,braunstein94} in the zero-temperature asymptotic limiting case of the inverse temperature $\beta\overset{\text{def}}{=}\left( k_{B}T\right) ^{-1}$ (with $k_{B}$ being the Boltzmann constant) approaching infinity for both Hamiltonian models, the two metrics are generally different. Furthermore, we observe this different behavior in the case of the superconducting flux Hamiltonian model. More generally, we note that the two metrics seem to differ when a nonclassical behavior is present since, in this case, the metrics quantify the noncommutativity of neighboring mixed quantum states in different manners. Finally, we briefly discuss the possible observable consequences of this discrepancy between the two metrics when using them to predict critical and/or complex behavior of physical systems of interest.
We acknowledge that despite the fact that most of the preliminary background results presented in this paper are partially known in the literature, they appear in a scattered fashion throughout several papers written by researchers working in distinct fields of physics who may not necessarily be aware of each other's findings. For this reason, we present here an explicit and unified comparative formulation of the Bures and Sj\"{o}qvist metrics for simple quantum systems in mixed states. In particular, as mentioned earlier, we illustrate our results on the examples of thermal state manifolds for spin qubits and superconducting flux qubits. These applications are original and, to the best of our knowledge, do not appear anywhere in the literature. The layout of the rest of this paper is as follows. In Section II, we present, with explicit derivations, the expressions of the Bures metric in H\"{u}bner's \cite{hubner92} and Zanardi's \cite{zanardi07} forms. In Section III, we present an explicit derivation of the Sj\"{o}qvist interferometric metric \cite{erik20} between two neighboring nondegenerate density matrices. In Section IV, we present two Hamiltonian models. The first Hamiltonian model describes a spin-$1/2$ particle in a uniform and time-independent external magnetic field oriented along the $z$-axis. The second Hamiltonian model, instead, specifies a superconducting flux qubit. Bringing these two systems into thermal equilibrium with a reservoir at finite and non-zero temperature $T$, we construct the two corresponding parametric families of thermal states. In Section V, we present an explicit calculation of both the Sj\"{o}qvist and the Bures metrics for each one of the two distinct families of parametric thermal states. From our comparative analysis, we find that the two metrics coincide for the first Hamiltonian model (electron in a constant magnetic field along the $z$-direction), while they differ for the second Hamiltonian model (superconducting flux qubit).
In Section VI, we discuss the consequences of the comparative analysis carried out in Section V concerning the Bures and Sj\"{o}qvist metrics for the spin qubit and superconducting flux qubit Hamiltonian models introduced in Section IV. Finally, we conclude with our final remarks along with a summary of our main findings in Section VII. \section{The Bures metric} In this section, we present two explicit calculations. In the first calculation, we carry out a detailed derivation of the Bures metric by following the original work presented by H\"{u}bner in Ref. \cite{hubner92}. In the second calculation, we recast the expression of the Bures metric obtained by H\"{u}bner in a way that is more suitable in the framework of geometric analyses on thermal state manifolds. Here, we follow the original work presented by Zanardi and collaborators in Ref. \cite{zanardi07}. \subsection{The explicit derivation of H\"{u}bner's general expression} We begin by carrying out an explicit derivation of the Bures metric inspired by H\"{u}bner \cite{hubner92}. Recall that the squared Bures distance between two density matrices infinitesimally far apart is given by \begin{equation} \left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+d\rho\right) \right] ^{2}=2-2\mathrm{tr}\left[ \rho^{1/2}\left( \rho+d\rho\right) \rho ^{1/2}\right] ^{1/2}\text{.} \label{buri1} \end{equation} To find a useful expression for $\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+d\rho\right) \right] ^{2}$, we follow the line of reasoning used by H\"{u}bner in Ref. \cite{hubner92}. Consider a Hermitian matrix $A\left( t\right) $ with $t\in \mathbb{R} $ defined as \begin{equation} A\left( t\right) \overset{\text{def}}{=}\left[ \rho^{1/2}\left( \rho+td\rho\right) \rho^{1/2}\right] ^{1/2}\text{,} \label{buri2} \end{equation} with $A\left( t\right) A\left( t\right) =\rho^{1/2}\left( \rho +td\rho\right) \rho^{1/2}$. Note that $A\left( 0\right) =\rho$ and, for later use, we assume $\rho$ to be invertible.
At this point we observe that knowledge of the metric tensor $g_{ij}\left( \rho\right) $ at $\rho$ requires knowing $\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho +td\rho\right) \right] ^{2}$ up to the second order in $t$. H\"{u}bner's ansatz (i.e., educated guess) is \begin{equation} \left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+td\rho\right) \right] ^{2}=t^{2}g_{ij}\left( \rho\right) d\rho^{i}d\rho^{j}\text{,} \label{buri3} \end{equation} with $\left\{ \rho^{i}\right\} $ denoting a given set of coordinates on the manifold of density matrices. From Eq. (\ref{buri3}), we note that \begin{equation} g_{ij}\left( \rho\right) d\rho^{i}d\rho^{j}=\frac{1}{2}\left( \frac{d^{2} }{dt^{2}}\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+td\rho\right) \right] ^{2}\right) _{t=0}\text{.} \label{buri4} \end{equation} Observe that using Eq. (\ref{buri2}), the RHS\ of Eq. (\ref{buri4}) becomes \begin{align} \frac{1}{2}\left( \frac{d^{2}}{dt^{2}}\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+td\rho\right) \right] ^{2}\right) _{t=0} & =\frac{1} {2}\frac{d^{2}}{dt^{2}}\left\{ 2-2\mathrm{tr}\left[ \rho^{1/2}\left( \rho+td\rho\right) \rho^{1/2}\right] ^{1/2}\right\} _{t=0}\nonumber\\ & =\frac{1}{2}\frac{d^{2}}{dt^{2}}\left\{ 2-2\mathrm{tr}\left[ A\left( t\right) \right] \right\} _{t=0}\nonumber\\ & =-\mathrm{tr}\left[ \ddot{A}\left( t\right) \right] _{t=0}\text{,} \end{align} that is, \begin{equation} \frac{1}{2}\left( \frac{d^{2}}{dt^{2}}\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+td\rho\right) \right] ^{2}\right) _{t=0}=-\mathrm{tr} \left[ \ddot{A}\left( t\right) \right] _{t=0}\text{.} \label{buri5} \end{equation} From Eqs. 
(\ref{buri4}) and (\ref{buri5}), we get \begin{equation} g_{ij}\left( \rho\right) d\rho^{i}d\rho^{j}=-\mathrm{tr}\left[ \ddot {A}\left( t\right) \right] _{t=0}\text{.} \label{buri6} \end{equation} Differentiating two times the relation $A\left( t\right) A\left( t\right) =\rho^{1/2}\left( \rho+td\rho\right) \rho^{1/2}$, setting $t=0$ and, finally, assuming $\rho$ diagonalized in the form \begin{equation} \rho=\sum_{i}\lambda_{i}\left\vert i\right\rangle \left\langle i\right\vert \text{,} \end{equation} we have \begin{equation} \left\{ \frac{d^{2}}{dt^{2}}\left[ A\left( t\right) A\left( t\right) \right] \right\} _{t=0}=\left\{ \frac{d^{2}}{dt^{2}}\left[ \rho ^{1/2}\left( \rho+td\rho\right) \rho^{1/2}\right] \right\} _{t=0}\text{.} \end{equation} More explicitly, we notice that \begin{equation} \frac{d}{dt}\left[ A\left( t\right) A\left( t\right) \right] =\dot {A}\left( t\right) A\left( t\right) +A\left( t\right) \dot{A}\left( t\right) \label{buri7} \end{equation} and, \begin{equation} \frac{d}{dt}\left[ \rho^{1/2}\left( \rho+td\rho\right) \rho^{1/2}\right] =\rho^{1/2}d\rho\rho^{1/2}\text{.} \label{buri8} \end{equation} Setting $t=0$, from Eqs. (\ref{buri7}) and (\ref{buri8}) we obtain \begin{equation} \dot{A}\left( 0\right) A\left( 0\right) +A\left( 0\right) \dot{A}\left( 0\right) =\rho^{1/2}d\rho\rho^{1/2}\text{.} \label{buri8a} \end{equation} After the second differentiation of $A\left( t\right) A\left( t\right) $ and $\rho^{1/2}\left( \rho+td\rho\right) \rho^{1/2}$, we get \begin{equation} \ddot{A}\left( 0\right) A\left( 0\right) +2\text{ }\dot{A}\left( 0\right) \dot{A}\left( 0\right) +A\left( 0\right) \ddot{A}\left( 0\right) =0\text{.} \label{pauli} \end{equation} Multiplying both sides of Eq. 
(\ref{pauli}) by $A^{-1}\left( 0\right) $ from the right and using the cyclicity of the trace operation, we get \begin{equation} \mathrm{tr}\left[ \ddot{A}\left( 0\right) \right] =-\mathrm{tr}\left[ A^{-1}\left( 0\right) \dot{A}\left( 0\right) ^{2}\right] \text{.} \label{buri9} \end{equation} From Eqs. (\ref{buri6}) and (\ref{buri9}), the Bures metric $ds_{\mathrm{Bures}}^{2}$ becomes \begin{align} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) & =g_{ij}\left( \rho\right) d\rho^{i}d\rho^{j}\nonumber\\ & =-\mathrm{tr}\left[ \ddot{A}\left( t\right) \right] _{t=0}\nonumber\\ & =\mathrm{tr}\left[ A^{-1}\left( 0\right) \dot{A}\left( 0\right) ^{2}\right] \nonumber\\ & =\sum_{i}\left\langle i\left\vert A^{-1}\left( 0\right) \dot{A}\left( 0\right) ^{2}\right\vert i\right\rangle \text{,} \end{align} that is, \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =\frac{1} {2}\sum_{i,k,l}\left[ \left\langle i\left\vert A^{-1}\left( 0\right) \right\vert k\right\rangle \left\langle k\left\vert \dot{A}\left( 0\right) \right\vert l\right\rangle \left\langle l\left\vert \dot{A}\left( 0\right) \right\vert i\right\rangle +\left\langle i\left\vert A^{-1}\left( 0\right) \right\vert l\right\rangle \left\langle l\left\vert \dot{A}\left( 0\right) \right\vert k\right\rangle \left\langle k\left\vert \dot{A}\left( 0\right) \right\vert i\right\rangle \right] \text{.} \label{buri9a} \end{equation} Observe that $A\left( 0\right) \left\vert k\right\rangle =\rho\left\vert k\right\rangle =\lambda_{k}\left\vert k\right\rangle $ and, therefore, $A^{-1}\left( 0\right) \left\vert k\right\rangle =\rho^{-1}\left\vert k\right\rangle =\lambda_{k}^{-1}\left\vert k\right\rangle $ with $\lambda _{k}\neq0$ for any $k$. We need to find an expression for $\left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle $. From Eq. 
(\ref{buri8a}), we get
\begin{equation}
\left\langle i\left\vert \dot{A}\left( 0\right) A\left( 0\right) +A\left( 0\right) \dot{A}\left( 0\right) \right\vert j\right\rangle =\left\langle i\left\vert \rho^{1/2}d\rho\rho^{1/2}\right\vert j\right\rangle \text{.} \label{buri10}
\end{equation}
We note that
\begin{align}
\left\langle i\left\vert \dot{A}\left( 0\right) A\left( 0\right) +A\left( 0\right) \dot{A}\left( 0\right) \right\vert j\right\rangle & =\left\langle i\left\vert \dot{A}\left( 0\right) \rho+\rho\dot{A}\left( 0\right) \right\vert j\right\rangle \nonumber\\
& =\sum_{k}\lambda_{k}\left\langle i\left\vert \dot{A}\left( 0\right) \right\vert k\right\rangle \left\langle k\left\vert j\right. \right\rangle +\sum_{k}\lambda_{k}\left\langle i\left\vert k\right. \right\rangle \left\langle k\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle \nonumber\\
& =\lambda_{j}\left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle +\lambda_{i}\left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle \nonumber\\
& =\left( \lambda_{i}+\lambda_{j}\right) \left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle \text{,}
\end{align}
that is,
\begin{equation}
\left\langle i\left\vert \dot{A}\left( 0\right) A\left( 0\right) +A\left( 0\right) \dot{A}\left( 0\right) \right\vert j\right\rangle =\left( \lambda_{i}+\lambda_{j}\right) \left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle \text{.} \label{buri11}
\end{equation}
Moreover, we observe that
\begin{align}
\left\langle i\left\vert \rho^{1/2}d\rho\rho^{1/2}\right\vert j\right\rangle & =\left\langle i\left\vert \left( \sum_{k}\sqrt{\lambda_{k}}\left\vert k\right\rangle \left\langle k\right\vert \right) d\rho\left( \sum_{m}\sqrt{\lambda_{m}}\left\vert m\right\rangle \left\langle m\right\vert \right) \right\vert j\right\rangle \nonumber\\
& =\sum_{k,m}\sqrt{\lambda_{k}}\sqrt{\lambda_{m}}\left\langle
i\left\vert k\right. \right\rangle \left\langle k\left\vert d\rho\right\vert m\right\rangle \left\langle m\left\vert j\right. \right\rangle \nonumber\\ & =\sum_{k,m}\sqrt{\lambda_{k}}\sqrt{\lambda_{m}}\delta_{ik}\delta _{mj}\left\langle k\left\vert d\rho\right\vert m\right\rangle \nonumber\\ & =\sqrt{\lambda_{i}}\sqrt{\lambda_{j}}\left\langle i\left\vert d\rho\right\vert j\right\rangle \text{,} \end{align} that is, \begin{equation} \left\langle i\left\vert \rho^{1/2}d\rho\rho^{1/2}\right\vert j\right\rangle =\sqrt{\lambda_{i}}\sqrt{\lambda_{j}}\left\langle i\left\vert d\rho\right\vert j\right\rangle \text{.} \label{buri12} \end{equation} Using Eqs. (\ref{buri10}), (\ref{buri11}), and (\ref{buri12}), we have \begin{equation} \left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle =\frac{\sqrt{\lambda_{i}}\sqrt{\lambda_{j}}}{\lambda_{i}+\lambda_{j} }\left\langle i\left\vert d\rho\right\vert j\right\rangle \text{.} \label{buri13} \end{equation} Finally, using Eq. (\ref{buri13}), $ds_{\mathrm{Bures}}^{2}$ in Eq. (\ref{buri9a}) becomes \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =\frac{1} {2}\sum_{k,l}\frac{\lambda_{l}+\lambda_{k}}{\left( \lambda_{l}+\lambda _{k}\right) ^{2}}\left\vert \left\langle k\left\vert d\rho\right\vert l\right\rangle \right\vert ^{2}\text{,} \end{equation} that is, relabelling the dummy indices (i.e., $k\rightarrow i$ and $l\rightarrow j$), \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =\frac{1} {2}\sum_{i,j}\frac{\left\vert \left\langle i\left\vert d\rho\right\vert j\right\rangle \right\vert ^{2}}{\lambda_{i}+\lambda_{j}}\text{.} \label{buri14} \end{equation} The derivation of Eq. (\ref{buri14}) ends our revisitation of H\"{u}bner's original analysis presented in Ref. \cite{hubner92}. Note that in obtaining Eq. (\ref{buri14}), there is no need to introduce any Hamiltonian \textrm{H} that might be responsible for the changes from $\rho$ to $\rho+d\rho$. 
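As a numerical consistency check (our own sketch, not part of H\"{u}bner's analysis), Eq. (\ref{buri14}) can be compared with the defining distance in Eq. (\ref{buri1}): for a small traceless Hermitian perturbation $\varepsilon\, d\rho$, the squared distance should reproduce $\varepsilon^{2}\, ds_{\mathrm{Bures}}^{2}$ up to $O(\varepsilon^{3})$. Consistently with the remark above, only $\rho$ and $d\rho$ enter the computation; the helper names below are illustrative.

```python
import numpy as np

def sqrtm_psd(M):
    # Square root of a Hermitian PSD matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def bures_dist_sq(rho, sigma):
    # Eq. (buri1): 2 - 2 tr[(rho^{1/2} sigma rho^{1/2})^{1/2}]
    s = sqrtm_psd(rho)
    return 2.0 - 2.0 * np.trace(sqrtm_psd(s @ sigma @ s)).real

def bures_metric(rho, drho):
    # Eq. (buri14): (1/2) sum_{i,j} |<i|drho|j>|^2 / (lambda_i + lambda_j)
    lam, U = np.linalg.eigh(rho)
    D = U.conj().T @ drho @ U            # drho in the eigenbasis of rho
    return 0.5 * np.sum(np.abs(D) ** 2 / np.add.outer(lam, lam))

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = A @ A.conj().T
rho = rho / np.trace(rho).real
rho = 0.5 * rho + 0.5 * np.eye(n) / n    # mix with the maximally mixed state: rho invertible
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
drho = B + B.conj().T
drho = drho - (np.trace(drho).real / n) * np.eye(n)   # traceless, so tr(rho + t*drho) = 1

eps = 1e-5
exact = bures_dist_sq(rho, rho + eps * drho)          # defining distance, Eq. (buri1)
approx = eps ** 2 * bures_metric(rho, drho)           # metric prediction, Eq. (buri14)
```

The two numbers agree to relative order $\varepsilon$, as expected from the expansion.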
For this reason, the expression of the Bures metric in Eq. (\ref{buri14}) is said to be general.

\subsection{The explicit derivation of Zanardi's general expression}

In Ref. \cite{zanardi07}, Zanardi and collaborators provided an alternative expression of the Bures metric in Eq. (\ref{buri14}). Interestingly, this alternative expression decomposes the Bures metric into its classical and nonclassical parts. To begin, in view of future geometric investigations in statistical physics, let us use a different notation and rewrite the Bures metric in Eq. (\ref{buri14}) as
\begin{equation}
ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) \overset{\text{def}}{=}\frac{1}{2}\sum_{n\text{, }m}\frac{\left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2}}{p_{m}+p_{n}}\text{,} \label{Bures}
\end{equation}
with $1\leq m$, $n\leq N$. Let us assume that the quantities $\rho$ and $d\rho$ in Eq. (\ref{Bures}) are given by
\begin{equation}
\rho\overset{\text{def}}{=}\sum_{n}p_{n}\left\vert n\right\rangle \left\langle n\right\vert \text{,}
\end{equation}
and
\begin{equation}
d\rho\overset{\text{def}}{=}\sum_{n}\left[ dp_{n}\left\vert n\right\rangle \left\langle n\right\vert +p_{n}\left\vert dn\right\rangle \left\langle n\right\vert +p_{n}\left\vert n\right\rangle \left\langle dn\right\vert \right] \text{,} \label{dro}
\end{equation}
respectively, with $\left\langle n\left\vert m\right. \right\rangle =\delta_{n,m}$. Let us use Eqs. (\ref{dro}) and (\ref{Bures}) to find a more explicit expression for the Bures metric $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $.
Observe that the quantity $\left\langle i\left\vert d\rho\right\vert j\right\rangle $ can be recast as, \begin{align} \left\langle i\left\vert d\rho\right\vert j\right\rangle & =\left\langle i\left\vert \left( \sum_{n}\left[ dp_{n}\left\vert n\right\rangle \left\langle n\right\vert +p_{n}\left\vert dn\right\rangle \left\langle n\right\vert +p_{n}\left\vert n\right\rangle \left\langle dn\right\vert \right] \right) \right\vert j\right\rangle \nonumber\\ & =\sum_{n}dp_{n}\left\langle i|n\right\rangle \left\langle n|j\right\rangle +p_{n}\left\langle i|dn\right\rangle \left\langle n|j\right\rangle +p_{n}\left\langle i|n\right\rangle \left\langle dn|j\right\rangle \nonumber\\ & =\sum_{n}dp_{n}\delta_{in}\delta_{nj}+p_{n}\left\langle i|dn\right\rangle \delta_{nj}+p_{n}\delta_{in}\left\langle dn|j\right\rangle \nonumber\\ & =dp_{i}\delta_{ij}+p_{j}\left\langle i|dj\right\rangle +p_{i}\left\langle di|j\right\rangle \text{,} \end{align} that is, \begin{equation} \left\langle i\left\vert d\rho\right\vert j\right\rangle =dp_{i}\delta _{ij}+p_{j}\left\langle i|dj\right\rangle +p_{i}\left\langle di|j\right\rangle \text{.} \label{1} \end{equation} Note that the orthonormality condition $\left\langle i|j\right\rangle =\delta_{ij}$ implies $\left\langle di|j\right\rangle +\left\langle i|dj\right\rangle =0$, that is \begin{equation} \left\langle di|j\right\rangle =-\left\langle i|dj\right\rangle \text{.} \label{2} \end{equation} Using Eq. (\ref{2}), $\left\langle i\left\vert d\rho\right\vert j\right\rangle $ in Eq. 
(\ref{1}) becomes \begin{equation} \left\langle i\left\vert d\rho\right\vert j\right\rangle =dp_{i}\delta _{ij}+p_{j}\left\langle i|dj\right\rangle -p_{i}\left\langle i|dj\right\rangle =\delta_{ij}dp_{i}+\left( p_{j}-p_{i}\right) \left\langle i|dj\right\rangle \text{,}\nonumber \end{equation} that is, \begin{equation} \left\langle i\left\vert d\rho\right\vert j\right\rangle =\delta_{ij} dp_{i}+\left( p_{j}-p_{i}\right) \left\langle i|dj\right\rangle \text{.} \label{3} \end{equation} Making use of Eq. (\ref{3}), we can now find an explicit expression of the quantity $\left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2}$ in Eq. (\ref{Bures}). Indeed, from Eq. (\ref{3}) we have \begin{align} \left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2} & =\left\langle m|d\rho|n\right\rangle \left\langle m|d\rho|n\right\rangle ^{\ast}\nonumber\\ & =\left\langle m|d\rho|n\right\rangle \left\langle n|d\rho|m\right\rangle \nonumber\\ & =\left[ \delta_{mn}dp_{m}+\left( p_{n}-p_{m}\right) \left\langle m|dn\right\rangle \right] \left[ \delta_{nm}dp_{n}+\left( p_{m} -p_{n}\right) \left\langle n|dm\right\rangle \right] \nonumber\\ & =\delta_{mn}dp_{m}\delta_{nm}dp_{n}+\delta_{mn}dp_{m}\left( p_{m} -p_{n}\right) \left\langle n|dm\right\rangle +\left( p_{n}-p_{m}\right) \left\langle m|dn\right\rangle \delta_{nm}dp_{n}+\nonumber\\ & +\left( p_{n}-p_{m}\right) \left\langle m|dn\right\rangle \left( p_{m}-p_{n}\right) \left\langle n|dm\right\rangle \nonumber\\ & =\delta_{nm}dp_{n}^{2}-\left( p_{n}-p_{m}\right) ^{2}\left\langle m|dn\right\rangle \left\langle n|dm\right\rangle \text{,} \label{4} \end{align} that is, \begin{equation} \left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2}=\delta _{nm}dp_{n}^{2}-\left( p_{n}-p_{m}\right) ^{2}\left\langle m|dn\right\rangle \left\langle n|dm\right\rangle \text{.} \label{5} \end{equation} Using Eq. (\ref{2}), Eq. 
(\ref{5}) reduces to \begin{align} \left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2} & =\delta_{nm}dp_{n}^{2}+\left( p_{n}-p_{m}\right) ^{2}\left\langle dm|n\right\rangle \left\langle n|dm\right\rangle \nonumber\\ & =\delta_{nm}dp_{n}^{2}+\left( p_{n}-p_{m}\right) ^{2}\left\vert \left\langle n|dm\right\rangle \right\vert ^{2}\text{,} \end{align} that is, \begin{equation} \left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2}=\delta _{nm}dp_{n}^{2}+\left( p_{n}-p_{m}\right) ^{2}\left\vert \left\langle n|dm\right\rangle \right\vert ^{2}\text{.} \label{6} \end{equation} Finally, substituting Eq. (\ref{6}) into Eq. (\ref{Bures}), $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ becomes \begin{align} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) & =\frac {1}{2}\sum_{n\text{, }m}\frac{\delta_{nm}dp_{n}^{2}+\left( p_{n} -p_{m}\right) ^{2}\left\vert \left\langle n|dm\right\rangle \right\vert ^{2} }{p_{m}+p_{n}}\nonumber\\ & =\frac{1}{4}\sum_{n}\frac{dp_{n}^{2}}{p_{n}}+\frac{1}{2}\sum_{n\neq m}\left\vert \left\langle n|dm\right\rangle \right\vert ^{2}\frac{\left( p_{n}-p_{m}\right) ^{2}}{p_{m}+p_{n}}\text{,} \end{align} that is, \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =\frac{1} {4}\sum_{n}\frac{dp_{n}^{2}}{p_{n}}+\frac{1}{2}\sum_{n\neq m}\left\vert \left\langle n|dm\right\rangle \right\vert ^{2}\frac{\left( p_{n} -p_{m}\right) ^{2}}{p_{m}+p_{n}}\text{.} \label{7} \end{equation} Eq. (\ref{7}) is the explicit expression of $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ we were searching for. As a side remark, note that if both $\left\vert n\right\rangle $ and $\left\vert m\right\rangle \in\ker\left( \rho\right) $, we have that $\left\langle n|d\rho |m\right\rangle =0$. Indeed, from $\rho\left\vert m\right\rangle =\left\vert 0\right\rangle $, we have $d\rho\left\vert m\right\rangle +\rho\left\vert dm\right\rangle =\left\vert 0\right\rangle $. 
Therefore, we have $\left\langle n|d\rho|m\right\rangle +\left\langle n|\rho|dm\right\rangle =\left\langle n|0\right\rangle $, that is, $\left\langle n|d\rho|m\right\rangle =0$ since $\left\langle n|0\right\rangle =0$ and $\left\langle n|\rho|dm\right\rangle =0$. The expression of the Bures metric in Eq. (\ref{7}) can be regarded as given by two contributions, a classical and a nonclassical term. The first term in $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{7}) is the classical one and is represented by the classical Fisher-Rao information metric between the two probability distributions $\left\{ p_{n}\right\} _{1\leq n\leq N}$ and $\left\{ p_{n}+dp_{n}\right\} _{1\leq n\leq N}$. The second term is the nonclassical one and emerges from the noncommutativity of the density matrices $\rho$ and $\rho+d\rho$ (i.e., $\left[ \rho\text{, }\rho+d\rho\right] =\left[ \rho\text{, }d\rho\right] \neq0$, in general). When $\left[ \rho\text{, }\rho+d\rho\right] =0$, the problem becomes classical and the Bures metric reduces to the classical Fisher-Rao metric.

\subsection{The explicit derivation of Zanardi's expression for thermal states}

In what follows, we specialize to the functional form of $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{7}) for thermal states. Specifically, let us focus on mixed quantum states $\rho\left( \beta\text{, }\lambda\right) $ of the form
\begin{equation}
\rho\left( \beta\text{, }\lambda\right) \overset{\text{def}}{=}\frac{e^{-\beta\mathrm{H}\left( \lambda\right) }}{\mathcal{Z}}=\frac{e^{-\beta\mathrm{H}\left( \lambda\right) }}{\mathrm{tr}\left( e^{-\beta\mathrm{H}\left( \lambda\right) }\right) }\text{,} \label{ro}
\end{equation}
with $\mathcal{Z}\overset{\text{def}}{=}\mathrm{tr}\left( e^{-\beta\mathrm{H}\left( \lambda\right) }\right) $ denoting the partition function of the system. The Hamiltonian $\mathrm{H}$ in Eq.
(\ref{ro}) depends on a set of parameters $\left\{ \lambda\right\} $ and is such that $\mathrm{H} \left\vert n\right\rangle =E_{n}\left\vert n\right\rangle $ or, equivalently \begin{equation} \mathrm{H}=\sum_{n}E_{n}\left\vert n\right\rangle \left\langle n\right\vert \text{,} \label{spectral} \end{equation} with $1\leq n\leq N$. Using the spectral decomposition of $\mathrm{H}$ in Eq. (\ref{spectral}), $\rho\left( \beta\text{, }\lambda\right) $ in Eq. (\ref{ro}) can be recast as \begin{equation} \rho\left( \beta\text{, }\lambda\right) =\sum_{n}p_{n}\left\vert n\right\rangle \left\langle n\right\vert =\sum_{n}\frac{e^{-\beta E_{n}} }{\mathcal{Z}}\left\vert n\right\rangle \left\langle n\right\vert \text{,} \label{rouse} \end{equation} with $p_{n}\overset{\text{def}}{=}e^{-\beta E_{n}}/\mathcal{Z}$. Note that the $\lambda$-dependence of $\rho$ in Eq. (\ref{rouse}) appears, in general, in both $E_{n}=E_{n}\left( \lambda\right) $ and $\left\vert n\right\rangle =\left\vert n\left( \lambda\right) \right\rangle $. Before presenting the general case where both $\beta$ and the set of $\left\{ \lambda\right\} $ can change, we focus on the sub-case where $\beta$ is kept constant while $\left\{ \lambda\right\} $ is allowed to change. \subsubsection{Case: $\beta$-constant and $\lambda$-nonconstant} In what follows, assuming that $\beta$ is fixed, we wish to find the expression of $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{7}) when $\rho$ is given as in Eq. (\ref{rouse}). \ Clearly, we note that the two key quantities that we need to find are $dp_{i}^{2}$ and $\left\langle i|dj\right\rangle $. Let us start with the latter. 
From $\mathrm{H}\left\vert j\right\rangle =E_{j}\left\vert j\right\rangle $, we have $d\mathrm{H}\left\vert j\right\rangle +\mathrm{H}\left\vert dj\right\rangle =dE_{j}\left\vert j\right\rangle +E_{j}\left\vert dj\right\rangle $. Assuming $i\neq j$, we have
\begin{equation}
\left\langle i\right\vert d\mathrm{H}\left\vert j\right\rangle +\left\langle i\right\vert \mathrm{H}\left\vert dj\right\rangle =\left\langle i\right\vert dE_{j}\left\vert j\right\rangle +\left\langle i\right\vert E_{j}\left\vert dj\right\rangle =dE_{j}\delta_{ij}+E_{j}\left\langle i|dj\right\rangle =E_{j}\left\langle i|dj\right\rangle \text{,}
\end{equation}
that is,
\begin{equation}
\left\langle i\right\vert d\mathrm{H}\left\vert j\right\rangle +\left\langle i\right\vert \mathrm{H}\left\vert dj\right\rangle =E_{j}\left\langle i|dj\right\rangle \text{.} \label{8}
\end{equation}
Observe that,
\begin{equation}
\left\langle i\right\vert \mathrm{H}\left\vert dj\right\rangle =\left\langle dj\right\vert \mathrm{H}^{\dagger}\left\vert i\right\rangle ^{\ast}=\left\langle dj\right\vert \mathrm{H}\left\vert i\right\rangle ^{\ast}=E_{i}\left\langle dj|i\right\rangle ^{\ast}=E_{i}\left\langle i|dj\right\rangle \text{.} \label{9}
\end{equation}
Substituting Eq. (\ref{9}) into Eq. (\ref{8}), we get
\begin{equation}
\left\langle i\right\vert d\mathrm{H}\left\vert j\right\rangle +E_{i}\left\langle i|dj\right\rangle =E_{j}\left\langle i|dj\right\rangle \text{,}
\end{equation}
that is,
\begin{equation}
\left\langle i|dj\right\rangle =\frac{\left\langle i\right\vert d\mathrm{H}\left\vert j\right\rangle }{E_{j}-E_{i}}\text{.} \label{10}
\end{equation}
Eq. (\ref{10}) is the first piece of relevant information we were looking for. Let us now focus on calculating $dp_{i}^{2}$.
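Before moving on, the first-order relation in Eq. (\ref{10}) can be sanity-checked numerically against a finite difference of the eigenvectors (the random Hamiltonian and perturbation below are illustrative choices of ours). Taking moduli removes the arbitrary phase that a numerical eigensolver attaches to each eigenvector.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H0 = A + A.conj().T                     # reference Hamiltonian, assumed nondegenerate
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
dH = B + B.conj().T                     # Hermitian perturbation direction

E, V = np.linalg.eigh(H0)               # H0 |j> = E_j |j>
eps = 1e-6
E2, V2 = np.linalg.eigh(H0 + eps * dH)  # eigenvectors of the perturbed Hamiltonian

# For i != j:  |<i|dj>| = |<i|dH|j>| / |E_j - E_i|  (Eq. (10))
i, j = 0, N - 1                         # widely separated levels, so the gap is large
fd = abs(np.vdot(V[:, i], V2[:, j])) / eps                  # finite-difference |<i|dj>|
pt = abs(np.vdot(V[:, i], dH @ V[:, j])) / abs(E[j] - E[i]) # perturbative prediction
```

The two estimates agree to relative order $\varepsilon$, confirming Eq. (\ref{10}) for this instance.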
From $p_{i}\overset{\text{def} }{=}e^{-\beta E_{i}}/\mathcal{Z}$, we get \begin{align} dp_{i} & =d\left( \frac{e^{-\beta E_{i}}}{\mathcal{Z}}\right) \nonumber\\ & =\frac{1}{\mathcal{Z}}d\left( e^{-\beta E_{i}}\right) +e^{-\beta E_{i} }d\left( \frac{1}{\mathcal{Z}}\right) \nonumber\\ & =\frac{1}{\mathcal{Z}}\frac{d}{dE_{i}}\left( e^{-\beta E_{i}}\right) dE_{i}+e^{-\beta E_{i}}\left( -\frac{1}{\mathcal{Z}^{2}}d\mathcal{Z}\right) \nonumber\\ & =-\beta\frac{e^{-\beta E_{i}}}{\mathcal{Z}}dE_{i}-\frac{e^{-\beta E_{i}} }{\mathcal{Z}}\frac{d\mathcal{Z}}{\mathcal{Z}}\nonumber\\ & =-\beta\frac{e^{-\beta E_{i}}}{\mathcal{Z}}dE_{i}-\frac{e^{-\beta E_{i}} }{\mathcal{Z}}\sum_{j}\left( \frac{d\mathcal{Z}}{dE_{j}}\frac{dE_{j} }{\mathcal{Z}}\right) \text{,} \end{align} that is, \begin{equation} dp_{i}=-\beta\frac{e^{-\beta E_{i}}}{\mathcal{Z}}dE_{i}-\frac{e^{-\beta E_{i} }}{\mathcal{Z}}\sum_{j}\left( \frac{d\mathcal{Z}}{dE_{j}}\frac{dE_{j} }{\mathcal{Z}}\right) \text{.} \label{11} \end{equation} At this point, note that \begin{align} \sum_{j}\frac{d\mathcal{Z}}{dE_{j}}\frac{dE_{j}}{\mathcal{Z}} & =\sum _{j}\frac{d}{dE_{j}}\left( \sum_{k}e^{-\beta E_{k}}\right) \frac{dE_{j} }{\mathcal{Z}}\nonumber\\ & =-\beta\sum_{j}\frac{e^{-\beta E_{j}}}{\mathcal{Z}}dE_{j}\nonumber\\ & =-\beta\sum_{j}p_{j}dE_{j}\text{.} \label{12} \end{align} Substituting Eq. (\ref{12}) into Eq. (\ref{11}), we get \begin{equation} dp_{i}=-\beta p_{i}dE_{i}+\beta p_{i}\sum_{j}p_{j}dE_{j}=-\beta p_{i}\left[ dE_{i}-\sum_{j}p_{j}dE_{j}\right] \text{.} \label{13} \end{equation} Eq. (\ref{13}) is the second piece of relevant information we were looking for. We can now calculate $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, } \rho+d\rho\right) $ in Eq. (\ref{7}) by means of Eqs. (\ref{10}) and (\ref{13}). 
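Before doing so, Eq. (\ref{13}) itself can be verified by finite-differencing the Gibbs weights $p_{i}=e^{-\beta E_{i}}/\mathcal{Z}$ directly; the energy levels and perturbation direction below are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(4)
N, beta = 5, 0.7
E = rng.normal(size=N)                  # illustrative energy levels E_i
dE = rng.normal(size=N)                 # perturbation direction dE_i

def gibbs(E):
    """Gibbs weights p_i = exp(-beta E_i) / Z."""
    w = np.exp(-beta * E)
    return w / w.sum()

p = gibbs(E)
eps = 1e-6
fd = (gibbs(E + eps * dE) - p) / eps                 # finite-difference dp_i / dt
formula = -beta * p * (dE - np.sum(p * dE))          # Eq. (13)
```

Note that `formula` sums to zero identically, as it must, since the perturbed weights remain normalized.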
We obtain, \begin{align} \frac{1}{4}\sum_{i}\frac{dp_{i}^{2}}{p_{i}} & =\frac{1}{4}\sum_{i}\beta ^{2}\frac{p_{i}^{2}}{p_{i}}\left[ dE_{i}-\sum_{j}p_{j}dE_{j}\right] ^{2}\nonumber\\ & =\frac{\beta^{2}}{4}\sum_{i}p_{i}\left[ dE_{i}-\left\langle dE\right\rangle _{\beta}\right] ^{2}\nonumber\\ & =\frac{\beta^{2}}{4}\left( \left\langle dE^{2}\right\rangle _{\beta }-\left\langle dE\right\rangle _{\beta}^{2}\right) \nonumber\\ & =\frac{\beta^{2}}{4}\left( \left\langle d\mathrm{H}_{d}^{2}\right\rangle _{\beta}-\left\langle d\mathrm{H}_{d}\right\rangle _{\beta}^{2}\right) \text{,} \end{align} that is, \begin{equation} \frac{1}{4}\sum_{i}\frac{dp_{i}^{2}}{p_{i}}=\frac{\beta^{2}}{4}\left( \left\langle d\mathrm{H}_{d}^{2}\right\rangle _{\beta}-\left\langle d\mathrm{H}_{d}\right\rangle _{\beta}^{2}\right) \text{.} \label{14} \end{equation} The quantity $d\mathrm{H}_{d}$ in Eq. (\ref{14}) is defined as \begin{equation} d\mathrm{H}_{d}\overset{\text{def}}{=}\sum_{j}dE_{j}\left\vert j\right\rangle \left\langle j\right\vert \text{,} \end{equation} and is different from $d\mathrm{H}$. For clarity, we also observe that \begin{equation} \left\langle d\mathrm{H}_{d}\right\rangle _{\beta}\overset{\text{def}}{=} \sum_{i}p_{i}dE_{i}\text{, and }\left\langle d\mathrm{H}_{d}^{2}\right\rangle _{\beta}\overset{\text{def}}{=}\sum_{i}p_{i}dE_{i}^{2}\text{.} \end{equation} Finally, using Eqs. 
(\ref{10}) and (\ref{14}), we get \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =\frac {\beta^{2}}{4}\left( \left\langle d\mathrm{H}_{d}^{2}\right\rangle _{\beta }-\left\langle d\mathrm{H}_{d}\right\rangle _{\beta}^{2}\right) +\frac{1} {2}\sum_{n\neq m}\left\vert \frac{\left\langle n|d\mathrm{H}|m\right\rangle }{E_{n}-E_{m}}\right\vert ^{2}\frac{\left( e^{-\beta E_{n}}-e^{-\beta E_{m} }\right) ^{2}}{\mathcal{Z}\left( e^{-\beta E_{n}}+e^{-\beta E_{m}}\right) }\text{.} \label{bures1} \end{equation} The Bures metric $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho +d\rho\right) $ in Eq. (\ref{bures1}) is the Bures metric in Eq. (\ref{7}) between two mixed thermal states $\rho\left( \beta\text{, }\lambda\right) $ and $\left( \rho+d\rho\right) \left( \beta\text{, }\lambda\right) $ when only changes in $\lambda$ are permitted. \subsubsection{Case: $\beta$-nonconstant and $\lambda$-nonconstant} In what follows, we consider the general case where both $\beta$ and the set of $\left\{ \lambda\right\} $ can change. The sub-case where $\beta$ changes while the set of $\left\{ \lambda\right\} $ is kept constant is then obtained as a special case. For simplicity, let us assume we have two parameters, $\beta$ and a single parameter $\lambda$ that we denote with $h$ (a magnetic field intensity, for instance). In this two-dimensional parametric case, we generally have that \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \beta\text{, }h\right) =\left( \begin{array} [c]{cc} d\beta & dh \end{array} \right) \left( \begin{array} [c]{cc} g_{\beta\beta} & g_{\beta h}\\ g_{h\beta} & g_{hh} \end{array} \right) \left( \begin{array} [c]{c} d\beta\\ dh \end{array} \right) =g_{\beta\beta}d\beta^{2}+g_{hh}dh^{2}+2g_{\beta h}d\beta dh\text{,} \label{15} \end{equation} where we used the fact that $g_{h\beta}=g_{\beta h}$. From Eq. 
(\ref{15}), we note that
\begin{equation}
ds_{\mathrm{Bures}}^{2}\left( \beta\text{, }h\right) =\left\{
\begin{array}
[c]{c}
g_{\beta\beta}\left( \beta\text{, }h\right) d\beta^{2}\text{, if }h=\text{\textrm{const}.}\\
g_{hh}\left( \beta\text{, }h\right) dh^{2}\text{, if }\beta=\text{\textrm{const}.}\\
g_{\beta\beta}\left( \beta\text{, }h\right) d\beta^{2}+g_{hh}\left( \beta\text{, }h\right) dh^{2}+2g_{\beta h}\left( \beta\text{, }h\right) d\beta dh\text{, if }\beta\neq\text{\textrm{const}. and }h\neq\text{\textrm{const}.}
\end{array}
\right. \text{.}
\end{equation}
Recalling $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{7}), we start by calculating the expression of $dp_{n}$ with $p_{n}=p_{n}\left( h\text{, }\beta\right) \overset{\text{def}}{=}e^{-\beta E_{n}}/\mathcal{Z}$. We observe that $dp_{n}$ can be written as
\begin{equation}
dp_{n}=\frac{\partial p_{n}}{\partial h}dh+\frac{\partial p_{n}}{\partial\beta}d\beta\text{,} \label{util1}
\end{equation}
where $\partial p_{n}/\partial\beta$ is given by
\begin{align}
\frac{\partial p_{n}}{\partial\beta} & =\frac{\partial}{\partial\beta}\left( \frac{e^{-\beta E_{n}}}{\mathcal{Z}}\right) \nonumber\\
& =\frac{1}{\mathcal{Z}}\frac{\partial}{\partial\beta}\left( e^{-\beta E_{n}}\right) +e^{-\beta E_{n}}\frac{\partial}{\partial\beta}\left( \frac{1}{\mathcal{Z}}\right) \nonumber\\
& =-\frac{E_{n}}{\mathcal{Z}}e^{-\beta E_{n}}+e^{-\beta E_{n}}\frac{\partial}{\partial\mathcal{Z}}\left( \frac{1}{\mathcal{Z}}\right) \frac{\partial\mathcal{Z}}{\partial\beta}\nonumber\\
& =-p_{n}E_{n}-\frac{e^{-\beta E_{n}}}{\mathcal{Z}}\frac{1}{\mathcal{Z}}\frac{\partial\mathcal{Z}}{\partial\beta}\nonumber\\
& =-p_{n}E_{n}-p_{n}\frac{\partial\ln\mathcal{Z}}{\partial\beta}\nonumber\\
& =-p_{n}E_{n}+p_{n}\frac{1}{\mathcal{Z}}\sum_{m}E_{m}e^{-\beta E_{m}}\nonumber\\
& =-p_{n}E_{n}+p_{n}\sum_{m}p_{m}E_{m}\nonumber\\
& =-p_{n}E_{n}+p_{n}\left\langle \mathrm{H}\right\rangle \text{,}
\end{align}
that is,
\begin{equation} \frac{\partial p_{n}}{\partial\beta}d\beta=p_{n}\left[ \left\langle \mathrm{H}\right\rangle -E_{n}\right] d\beta\text{.} \label{util2} \end{equation} Note that the expectation value $\left\langle \mathrm{H}\right\rangle $ in Eq. (\ref{util2}) is defined as $\left\langle \mathrm{H}\right\rangle $ $\overset{\text{def}}{=}\sum_{n}p_{n}E_{n}$. From Eq. (\ref{13}), we also have \begin{equation} \frac{\partial p_{n}}{\partial h}dh=-\beta p_{n}\left[ \frac{\partial E_{n} }{\partial h}-\sum_{j}p_{j}\frac{\partial E_{j}}{\partial h}\right] dh=\beta p_{n}\left[ \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\partial_{h}E_{n}\right] dh\text{,} \label{util3} \end{equation} where $\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle $ is defined as \begin{equation} \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \overset{\text{def}}{=}\sum_{j}p_{j}\partial_{h}E_{j}\text{.} \label{masini} \end{equation} Using Eqs. (\ref{util1}), (\ref{util2}), and (\ref{util3}), we wish to calculate the term $\left( 1/4\right) \sum_{n}dp_{n}^{2}/p_{n}$ in $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{7}). 
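As a quick consistency check before assembling these pieces, the $\beta$-derivative in Eq. (\ref{util2}) can be tested against a finite difference of the Gibbs weights; the energy levels below are an illustrative choice of ours.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 5
E = rng.normal(size=N)                  # illustrative energy levels E_n

def gibbs(beta):
    """Gibbs weights p_n(beta) = exp(-beta E_n) / Z(beta)."""
    w = np.exp(-beta * E)
    return w / w.sum()

beta, eps = 0.9, 1e-6
p = gibbs(beta)
fd = (gibbs(beta + eps) - p) / eps      # finite-difference dp_n / dbeta
formula = p * (np.sum(p * E) - E)       # Eq. (util2): p_n (<H> - E_n)
```

The agreement is to relative order $\varepsilon$, confirming the variance structure that appears in the $d\beta^{2}$ term below.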
Let us begin by observing that \begin{equation} dp_{n}^{2}=\left( \frac{\partial p_{n}}{\partial h}dh+\frac{\partial p_{n} }{\partial\beta}d\beta\right) ^{2}=\left( \partial_{h}p_{n}dh\right) ^{2}+\left( \partial_{\beta}p_{n}d\beta\right) ^{2}+2\partial_{\beta} p_{n}\partial_{h}p_{n}d\beta dh\text{.} \end{equation} Therefore, we get \begin{align} \frac{1}{4}\sum_{n}\frac{dp_{n}^{2}}{p_{n}} & =\frac{1}{4}\sum_{n} \frac{\left( \partial_{h}p_{n}dh\right) ^{2}+\left( \partial_{\beta} p_{n}d\beta\right) ^{2}+2\partial_{\beta}p_{n}\partial_{h}p_{n}d\beta dh}{p_{n}}\nonumber\\ & =\frac{1}{4}\sum_{n}\frac{\left( \partial_{h}p_{n}\right) ^{2}}{p_{n} }dh^{2}+\frac{1}{4}\sum_{n}\frac{\left( \partial_{\beta}p_{n}\right) ^{2} }{p_{n}}d\beta^{2}+\frac{1}{4}\sum_{n}\frac{2\partial_{\beta}p_{n}\partial _{h}p_{n}}{p_{n}}d\beta dh\text{.} \end{align} First, note that \begin{equation} \frac{1}{4}\sum_{n}\frac{\left( \partial_{\beta}p_{n}\right) ^{2}}{p_{n} }d\beta^{2}=\frac{1}{4}\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}\text{,} \label{A} \end{equation} where $\left\langle \mathrm{H}\right\rangle $ and $\left\langle \mathrm{H} ^{2}\right\rangle $ are defined as \begin{equation} \left\langle \mathrm{H}\right\rangle \overset{\text{def}}{=}\sum_{i}p_{i} E_{i}\text{, and }\left\langle \mathrm{H}^{2}\right\rangle \overset{\text{def} }{=}\sum_{i}p_{i}E_{i}^{2}\text{,} \end{equation} respectively. Indeed, using Eq. 
(\ref{util2}), we have \begin{align} \sum_{n}\frac{\left( \partial_{\beta}p_{n}\right) ^{2}}{p_{n}} & =\sum _{n}\frac{p_{n}^{2}\left[ \left\langle \mathrm{H}\right\rangle -E_{n}\right] ^{2}}{p_{n}}\nonumber\\ & =\sum_{n}\frac{p_{n}^{2}\left\langle \mathrm{H}\right\rangle ^{2}+p_{n} ^{2}E_{n}^{2}-2\left\langle \mathrm{H}\right\rangle p_{n}^{2}E_{n}}{p_{n} }\nonumber\\ & =\left\langle \mathrm{H}\right\rangle ^{2}+\left\langle \mathrm{H} ^{2}\right\rangle -2\left\langle \mathrm{H}\right\rangle ^{2}\nonumber\\ & =\left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H} \right\rangle ^{2}\text{.} \end{align} Second, observe that \begin{equation} \frac{1}{4}\sum_{n}\frac{\left( \partial_{h}p_{n}\right) ^{2}}{p_{n}} dh^{2}=\frac{1}{4}\beta^{2}\left\{ \left\langle \left[ \left( \partial _{h}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} dh^{2}\text{,} \label{B} \end{equation} where $\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle $ is given in Eq. (\ref{masini}) and $\left\langle \left[ \left( \partial _{h}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle $ is defined as \begin{equation} \left\langle \left[ \left( \partial_{h}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle \overset{\text{def}}{=}\sum_{i}p_{i}\left( \partial _{h}E_{i}\right) ^{2}\text{.} \end{equation} Indeed, using Eq. 
(\ref{util3}), we have \begin{align} \sum_{n}\frac{\left( \partial_{h}p_{n}\right) ^{2}}{p_{n}} & =\sum _{n}\frac{\left( \beta p_{n}\right) ^{2}\left[ \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\partial_{h}E_{n}\right] ^{2}}{p_{n}}\nonumber\\ & =\beta^{2}\sum_{n}\left[ p_{n}\left\langle \left( \partial_{h} \mathrm{H}\right) _{d}\right\rangle ^{2}+p_{n}\left( \partial_{h} E_{n}\right) ^{2}-2p_{n}\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \partial_{h}E_{n}\right] \nonumber\\ & =\beta^{2}\left\{ \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle ^{2}+\left\langle \left[ \left( \partial_{h} \mathrm{H}\right) _{d}\right] ^{2}\right\rangle -2\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} \nonumber\\ & =\beta^{2}\left\{ \left\langle \left[ \left( \partial_{h}\mathrm{H} \right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial _{h}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} \text{.} \end{align} Third, we note that \begin{equation} \frac{1}{4}\sum_{n}\frac{2\partial_{\beta}p_{n}\partial_{h}p_{n}}{p_{n}}d\beta dh=\frac{1}{4}2\beta\left[ \left\langle \mathrm{H}\left( \partial _{h}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \right] d\beta dh\text{.} \label{C} \end{equation} Indeed, using Eqs. 
(\ref{util2}) and (\ref{util3}), we get \begin{align} \sum_{n}\frac{2\partial_{\beta}p_{n}\partial_{h}p_{n}}{p_{n}} & =\sum _{n}\frac{2p_{n}\left[ \left\langle \mathrm{H}\right\rangle -E_{n}\right] \beta p_{n}\left[ \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\partial_{h}E_{n}\right] }{p_{n}}\nonumber\\ & =\sum_{n}2\beta\left[ \left\langle \mathrm{H}\right\rangle -E_{n}\right] \left[ \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\partial_{h}E_{n}\right] p_{n}\nonumber\\ & =\sum_{n}2\beta\left[ \left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \partial_{h}E_{n}-E_{n}\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle +E_{n}\partial_{h} E_{n}\right] p_{n}\nonumber\\ & =2\beta\left[ \left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H} \right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle +\left\langle \mathrm{H} \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \right] \nonumber\\ & =2\beta\left[ \left\langle \mathrm{H}\left( \partial_{h}\mathrm{H} \right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \right] \text{,} \end{align} where $\left\langle \mathrm{H}\left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle $ is defined as \begin{equation} \left\langle \mathrm{H}\left( \partial_{h}\mathrm{H}\right) _{d} \right\rangle \overset{\text{def}}{=}\sum_{i}p_{i}E_{i}\partial_{h} E_{i}\text{.} \end{equation} Finally, employing Eqs. (\ref{A}), (\ref{B}), and (\ref{C}), the most general expression of $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. 
(\ref{7}) between two mixed thermal states $\rho\left( \beta\text{, }h\right) $ and $\left( \rho+d\rho\right) \left( \beta\text{, }h\right) $ when changes in either parameter $\beta$ or $h$ are allowed becomes \begin{align} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) & =\frac{1}{4}\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}\nonumber\\ & +\frac{1}{4}\left\{ \beta^{2}\left\{ \left\langle \left[ \left( \partial_{h}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} +2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{h}\mathrm{H}|m\right\rangle }{E_{n}-E_{m}}\right\vert ^{2}\frac{\left( e^{-\beta E_{n}}-e^{-\beta E_{m}}\right) ^{2}}{\mathcal{Z}\left( e^{-\beta E_{n}}+e^{-\beta E_{m}}\right) }\right\} dh^{2}\nonumber\\ & +\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \right] \right\} d\beta dh\text{.} \label{general} \end{align} Note that $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ is the sum of two contributions, the classical Fisher-Rao information metric contribution and the non-classical metric contribution expressed in the summation term on the right-hand side of Eq. (\ref{general}). For later convenience, we also remark that the quadratic term $\left\vert \left\langle n|\partial_{h}\mathrm{H}|m\right\rangle \right\vert ^{2}$ in the summation term on the right-hand side of Eq. (\ref{general}) is invariant under a change of sign of the Hamiltonian of the system. Clearly, from Eq.
(\ref{general}) we find that $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =(1/4)\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}$ when $h=$\textrm{const}. and only $\beta$ can change. If $\beta=$\textrm{const}., $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{general}) reduces to Eq. (\ref{bures1}). The explicit derivation of Eq. (\ref{general}) ends our calculation of the Bures metric between neighboring thermal states undergoing temperature and/or magnetic field intensity changes as originally presented by Zanardi and collaborators in Ref. \cite{zanardi07}. \section{The Sj\"{o}qvist metric} In this section, we introduce the Sj\"{o}qvist metric \cite{erik20} for nondegenerate mixed states, together with an explicit derivation. Consider two neighboring rank-$N$ nondegenerate density operators $\rho\left( t\right) $ and $\rho\left( t+dt\right) $ linked by means of a smooth path $t\mapsto \rho\left( t\right) $ specifying the evolution of a given quantum system. The nondegeneracy property implies that the phases of the eigenvectors represent the gauge freedom in the spectral decomposition of the density operators. As a consequence, there exists a one-to-one correspondence between the set of orthogonal rays $\left\{ e^{i\phi_{k}\left( t\right) }\left\vert e_{k}\left( t\right) \right\rangle :0\leq\phi_{k}\left( t\right) <2\pi\right\} _{1\leq k\leq N}$ that specify the spectral decomposition along the path $t\mapsto\rho\left( t\right) $ and the rank-$N$ nondegenerate density operator $\rho\left( t\right) $. Obviously, if some nonzero eigenvalue of $\rho\left( t\right) $ were degenerate, this correspondence would no longer exist. We present next the explicit derivation of the Sj\"{o}qvist metric.
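The gauge freedom just described can be checked numerically. The following minimal sketch (the $2\times2$ nondegenerate density matrix and the phase values are purely illustrative) verifies that re-phasing each eigenvector leaves the reconstructed density operator unchanged:

```python
import numpy as np

# Hypothetical 2x2 nondegenerate density matrix (rank 2, distinct eigenvalues).
rho = np.array([[0.7, 0.1], [0.1, 0.3]])

p, V = np.linalg.eigh(rho)        # eigenvalues p_k, eigenvectors as columns
assert abs(p[0] - p[1]) > 1e-12   # nondegeneracy

# Re-phase each eigenvector by an arbitrary phase e^{i phi_k}: the rays
# {e^{i phi_k} |e_k>} reconstruct the same density operator.
phases = np.exp(1j * np.array([0.4, 2.1]))
W = V * phases                    # column k multiplied by e^{i phi_k}
rho_rebuilt = (W * p) @ W.conj().T  # sum_k p_k |e_k><e_k|
assert np.allclose(rho_rebuilt, rho)
```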
\subsection{The explicit derivation} Consider two neighboring states $\rho\left( t\right) $ and $\rho\left( t+dt\right) $ with spectral decompositions given by $\mathcal{B}\left( t\right) =\left\{ \sqrt{p_{k}\left( t\right) }e^{if_{k}\left( t\right) }\left\vert n_{k}\left( t\right) \right\rangle \right\} _{1\leq k\leq N}$ and $\mathcal{B}\left( t+dt\right) =\left\{ \sqrt{p_{k}\left( t+dt\right) }e^{if_{k}\left( t+dt\right) }\left\vert n_{k}\left( t+dt\right) \right\rangle \right\} _{1\leq k\leq N}$, respectively. The quantity $N$ denotes the rank of the nondegenerate density operator $\rho\left( t\right) $. Consider the infinitesimal distance $d^{2}\left( t\text{, }t+dt\right) $ between $\rho\left( t\right) $ and $\rho\left( t+dt\right) $ defined as \begin{equation} d^{2}\left( t\text{, }t+dt\right) \overset{\text{def}}{=}\sum_{k=1} ^{N}\left\Vert \sqrt{p_{k}\left( t\right) }e^{if_{k}\left( t\right) }\left\vert n_{k}\left( t\right) \right\rangle -\sqrt{p_{k}\left( t+dt\right) }e^{if_{k}\left( t+dt\right) }\left\vert n_{k}\left( t+dt\right) \right\rangle \right\Vert ^{2}\text{.} \label{distb} \end{equation} The Sj\"{o}qvist metric is defined as the minimum of $d^{2}\left( t\text{, }t+dt\right) $ in Eq. (\ref{distb}). 
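The minimization over the gauge phases in Eq. (\ref{distb}) can be illustrated numerically. The sketch below (assuming $\hslash=1$, a diagonal qubit Hamiltonian, and illustrative parameter values) brute-forces the minimum of $d^{2}$ over a phase grid and compares it with the closed-form minimum $2-2\sum_{k}\sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert $:

```python
import numpy as np

def thermal_qubit(beta, w):
    """Spectral data of rho = e^{-beta H}/Z for H = (w/2) sigma_z (hbar = 1)."""
    H = 0.5 * w * np.diag([1.0, -1.0])
    E, V = np.linalg.eigh(H)          # eigenvalues E_k, eigenvectors as columns
    p = np.exp(-beta * E)
    return p / p.sum(), V

# Two neighboring states along a hypothetical path t -> rho(beta(t), w).
p1, V1 = thermal_qubit(1.0, 2.0)
p2, V2 = thermal_qubit(1.0 + 1e-3, 2.0)

def d2(f2):
    """Squared distance of Eq. (distb), with phases f_k(t) = 0 and f_k(t+dt) = f2."""
    A = np.sqrt(p1) * V1                     # columns sqrt(p_k) e^{i f_k} |n_k>
    B = np.sqrt(p2) * np.exp(1j * f2) * V2
    return np.sum(np.abs(A - B) ** 2)

# Brute-force minimum over the gauge phases.
grid = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
dmin = min(d2(np.array([a, b])) for a in grid for b in grid)

# Closed-form minimum: 2 - 2 sum_k sqrt(p_k p'_k) |<n_k(t)|n_k(t+dt)>|.
overlaps = np.abs(np.sum(V1.conj() * V2, axis=0))
closed = 2.0 - 2.0 * np.sum(np.sqrt(p1 * p2) * overlaps)
assert abs(dmin - closed) < 1e-9
```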
Note that the squared norm term $\left\Vert \sqrt{p_{k}\left( t\right) }e^{if_{k}\left( t\right) }\left\vert n_{k}\left( t\right) \right\rangle -\sqrt{p_{k}\left( t+dt\right) }e^{if_{k}\left( t+dt\right) }\left\vert n_{k}\left( t+dt\right) \right\rangle \right\Vert ^{2}$ can be written as \begin{align} & \left( \sqrt{p_{k}\left( t\right) }e^{-if_{k}\left( t\right) }\left\langle n_{k}\left( t\right) \right\vert -\sqrt{p_{k}\left( t+dt\right) }e^{-if_{k}\left( t+dt\right) }\left\langle n_{k}\left( t+dt\right) \right\vert \right) \nonumber\\ & \left( \sqrt{p_{k}\left( t\right) }e^{if_{k}\left( t\right) }\left\vert n_{k}\left( t\right) \right\rangle -\sqrt{p_{k}\left( t+dt\right) }e^{if_{k}\left( t+dt\right) }\left\vert n_{k}\left( t+dt\right) \right\rangle \right) \nonumber\\ & =p_{k}\left( t\right) +p_{k}\left( t+dt\right) -\sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }e^{i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle +\nonumber\\ & -\sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }e^{-i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\langle n_{k}\left( t+dt\right) |n_{k}\left( t\right) \right\rangle \text{,} \end{align} with $e^{i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle +e^{-i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\langle n_{k}\left( t+dt\right) |n_{k}\left( t\right) \right\rangle $ equal to \begin{align} & 2\operatorname{Re}\left\{ e^{i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\} \nonumber\\ & =2\operatorname{Re}\left\{ e^{i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert e^{i\arg\left( \left\langle n_{k}\left( t\right) 
|n_{k}\left( t+dt\right) \right\rangle \right) }\right\} \nonumber\\ & =2\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert \operatorname{Re}\left\{ e^{i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }e^{i\arg\left( \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right) }\right\} \nonumber\\ & =2\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert \cos\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) +\arg\left( \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right) \right] \text{.} \end{align} Note that $f_{k}\left( t+dt\right) =f_{k}\left( t\right) +\dot{f} _{k}\left( t\right) dt+O\left( dt^{2}\right) $ and $\left\vert n_{k}\left( t+dt\right) \right\rangle =\left\vert n_{k}\left( t\right) \right\rangle +\left\vert \dot{n}_{k}\left( t\right) \right\rangle dt+O\left( dt^{2}\right) $. Therefore, we have \begin{equation} \cos\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) +\arg\left( \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right) \right] =\cos\left\{ \dot{f}_{k}\left( t\right) dt+\arg\left[ 1+\left\langle n_{k}\left( t\right) |\dot{n}_{k}\left( t\right) \right\rangle dt\right] +O\left( dt^{2}\right) \right\} \text{.} \end{equation} Setting $\dot{f}_{k}\left( t\right) dt+\arg\left[ 1+\left\langle n_{k}\left( t\right) |\dot{n}_{k}\left( t\right) \right\rangle dt\right] +O\left( dt^{2}\right) \overset{\text{def}}{=}\lambda_{k}\left( t\text{, }t+dt\right) $, the infinitesimal distance $d^{2}\left( t\text{, }t+dt\right) $ becomes \begin{equation} d^{2}\left( t\text{, }t+dt\right) =2-2\sum_{k=1}^{N}\sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert \cos\left[ \lambda_{k}\left( t\text{, }t+dt\right) \right] \text{.} \end{equation} Then, the Sj\"{o}qvist metric 
$ds_{\mathrm{Sj\ddot{o}qvist}}^{2}$ is the minimum $d_{\min}^{2}\left( t\text{, }t+dt\right) $ of $d^{2}\left( t\text{, }t+dt\right) $, and is obtained when $\lambda_{k}\left( t\text{, }t+dt\right) $ equals zero for any $1\leq k\leq N$. Its expression is given by \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}=2-2\sum_{k=1}^{N}\sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert \text{.} \label{zero1} \end{equation} It is worthwhile emphasizing that the minimum of $d^{2}\left( t\text{, }t+dt\right) $ is achieved by selecting phases $\left\{ f_{k}\left( t\right) \text{, }f_{k}\left( t+dt\right) \right\} $ such that \begin{equation} \dot{f}_{k}\left( t\right) dt+\arg\left[ 1+\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle dt+O\left( dt^{2}\right) \right] =0\text{.} \label{condo1} \end{equation} Observing that $e^{\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle dt}=1+\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle dt+O\left( dt^{2}\right) $ is such that $\arg\left[ e^{\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle dt}\right] =-i\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle dt$, Eq. (\ref{condo1}) can be rewritten to first order in $dt$ as \begin{equation} \dot{f}_{k}\left( t\right) -i\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle =0\text{.} \label{condo2} \end{equation} Eq. (\ref{condo2}) expresses the parallel transport condition $\left\langle \psi_{k}\left( t\right) \left\vert \dot{\psi}_{k}\left( t\right) \right.
\right\rangle =0$ where $\left\vert \psi_{k}\left( t\right) \right\rangle \overset{\text{def}}{=}e^{if_{k}\left( t\right) }\left\vert n_{k}\left( t\right) \right\rangle $ is associated with individual pure state paths in the chosen ensemble that defines the mixed state $\rho\left( t\right) $ \cite{aharonov87}. To find a more useful expression of $ds_{\mathrm{Sj\ddot{o}qvist}}^{2}$, let us start by observing that \begin{equation} \sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }=p_{k}\left( t\right) \sqrt{1+\frac{dp_{k}\left( t\right) }{p_{k}\left( t\right) }}=p_{k}+\frac{1}{2}\dot{p}_{k}dt+\frac{1}{4}\ddot{p}_{k}dt^{2}-\frac{1}{8}\frac{\dot{p}_{k}^{2}}{p_{k}}dt^{2}+O\left( dt^{3}\right) \text{.} \label{zero2} \end{equation} The term $\frac{1}{4}\ddot{p}_{k}dt^{2}$ drops out of the final metric upon summation over $k$, since $\sum_{k}p_{k}=1$ implies $\sum_{k}\ddot{p}_{k}=0$, and is therefore omitted in what follows. Furthermore, to the second order in $dt$, the state $\left\vert n_{k}\left( t+dt\right) \right\rangle $ can be written as \begin{equation} \left\vert n_{k}\left( t+dt\right) \right\rangle =\left\vert n_{k}\left( t\right) \right\rangle +\left\vert \dot{n}_{k}\left( t\right) \right\rangle dt+\frac{1}{2}\left\vert \ddot{n}_{k}\left( t\right) \right\rangle dt^{2}+O\left( dt^{3}\right) \text{.} \label{oro1} \end{equation} Therefore, to the second order in $dt$, the quantum overlap $\left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle $ becomes \begin{equation} \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle =\left\langle n_{k}\left( t\right) |n_{k}\left( t\right) \right\rangle +\left\langle n_{k}\left( t\right) |\dot{n}_{k}\left( t\right) \right\rangle dt+\frac{1}{2}\left\langle n_{k}\left( t\right) |\ddot{n}_{k}\left( t\right) \right\rangle dt^{2}+O\left( dt^{3}\right) \text{.} \end{equation} Let us focus now on calculating $\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert =\sqrt{\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert ^{2}}$, where \begin{equation} \left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle
\right\vert ^{2}=\left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \left\langle n_{k}\left( t+dt\right) |n_{k}\left( t\right) \right\rangle \text{.} \label{oro2} \end{equation} Using Eq. (\ref{oro1}), Eq. (\ref{oro2}) becomes \begin{align} \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \left\langle n_{k}\left( t+dt\right) |n_{k}\left( t\right) \right\rangle & \approx\left[ \left\langle n_{k}\left( t\right) |n_{k}\left( t\right) \right\rangle +\left\langle n_{k}\left( t\right) |\dot{n}_{k}\left( t\right) \right\rangle dt+\frac{1}{2}\left\langle n_{k}\left( t\right) |\ddot{n}_{k}\left( t\right) \right\rangle dt^{2}\right] \cdot\nonumber\\ & \cdot\left[ \left\langle n_{k}\left( t\right) |n_{k}\left( t\right) \right\rangle +\left\langle \dot{n}_{k}\left( t\right) |n_{k}\left( t\right) \right\rangle dt+\frac{1}{2}\left\langle \ddot{n}_{k}\left( t\right) |n_{k}\left( t\right) \right\rangle dt^{2}\right] \nonumber\\ & =\left[ 1+\left\langle n_{k}|\dot{n}_{k}\right\rangle dt+\frac{1} {2}\left\langle n_{k}|\ddot{n}_{k}\right\rangle dt^{2}\right] \left[ 1+\left\langle \dot{n}_{k}|n_{k}\right\rangle dt+\frac{1}{2}\left\langle \ddot{n}_{k}|n_{k}\right\rangle dt^{2}\right] \nonumber\\ & \approx1+\left\langle \dot{n}_{k}|n_{k}\right\rangle dt+\frac{1} {2}\left\langle \ddot{n}_{k}|n_{k}\right\rangle dt^{2}+\left\langle n_{k} |\dot{n}_{k}\right\rangle dt+\left\langle n_{k}|\dot{n}_{k}\right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle dt^{2}+\frac{1}{2}\left\langle n_{k}|\ddot{n}_{k}\right\rangle dt^{2}\nonumber\\ & =1+\left[ \left\langle \dot{n}_{k}|n_{k}\right\rangle +\left\langle n_{k}|\dot{n}_{k}\right\rangle \right] dt+\left\langle n_{k}|\dot{n} _{k}\right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle dt^{2}+\frac {1}{2}\left[ \left\langle n_{k}|\ddot{n}_{k}\right\rangle +\left\langle \ddot{n}_{k}|n_{k}\right\rangle \right] dt^{2}\nonumber\\ & =1+\left\langle n_{k}|\dot{n}_{k}\right\rangle \left\langle \dot{n} 
_{k}|n_{k}\right\rangle dt^{2}-\left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle dt^{2}\text{,} \end{align} that is, \begin{equation} \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \left\langle n_{k}\left( t+dt\right) |n_{k}\left( t\right) \right\rangle =1+\left\langle n_{k}|\dot{n}_{k}\right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle dt^{2}-\left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle dt^{2}+O\left( dt^{3}\right) \text{,} \label{zero3} \end{equation} since $\left\langle n_{k}|n_{k}\right\rangle =1$ implies $\left\langle \dot{n}_{k}|n_{k}\right\rangle +\left\langle n_{k}|\dot{n}_{k}\right\rangle =0$ and $\left\langle n_{k}|\ddot{n}_{k}\right\rangle +\left\langle \ddot{n}_{k}|n_{k}\right\rangle =-2\left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle $. Finally, using Eqs. (\ref{zero2}) and (\ref{zero3}) along with noting that $\sum_{k}\dot{p}_{k}=0$, the Sj\"{o}qvist metric $ds_{\mathrm{Sj\ddot{o}qvist}}^{2}$ in Eq. (\ref{zero1}) becomes \begin{align} ds_{\mathrm{Sj\ddot{o}qvist}}^{2} & \approx2-2\sum_{k=1}^{N}\left( p_{k}+\frac{1}{2}\dot{p}_{k}dt-\frac{1}{8}\frac{\dot{p}_{k}^{2}}{p_{k}}dt^{2}\right) \left( 1+\frac{1}{2}\left\langle n_{k}|\dot{n}_{k}\right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle dt^{2}-\frac{1}{2}\left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle dt^{2}\right) \nonumber\\ & \approx2-2\sum_{k=1}^{N}p_{k}-\sum_{k=1}^{N}p_{k}\left\langle n_{k}|\dot{n}_{k}\right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle dt^{2}+\sum_{k=1}^{N}p_{k}\left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle dt^{2}+\frac{1}{4}\sum_{k=1}^{N}\frac{\dot{p}_{k}^{2}}{p_{k}}dt^{2}\text{,} \end{align} that is, \begin{align} ds_{\mathrm{Sj\ddot{o}qvist}}^{2} & \approx\frac{1}{4}\sum_{k=1}^{N}\frac{\dot{p}_{k}^{2}}{p_{k}}dt^{2}+\sum_{k=1}^{N}p_{k}\left[ \left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle -\left\langle n_{k}|\dot{n}_{k}\right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle \right] dt^{2}\nonumber\\ &
\approx\frac{1}{4}\sum_{k=1}^{N}\frac{\dot{p}_{k}^{2}}{p_{k}}dt^{2}+\sum_{k=1}^{N}p_{k}\left[ \left\langle \dot{n}_{k}|\left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) |\dot{n}_{k}\right\rangle \right] dt^{2}\text{,} \label{89} \end{align} where \textrm{I} in Eq. (\ref{89}) denotes the identity operator on the $N$-dimensional Hilbert space. Finally, neglecting terms of order higher than $dt^{2}$ in Eq. (\ref{zero1}) and defining $ds_{k}^{2}\overset{\text{def}}{=}\left[ \left\langle \dot{n}_{k}|\left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) |\dot{n}_{k}\right\rangle \right] dt^{2}$, the expression of the Sj\"{o}qvist metric is formally taken to be \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\overset{\text{def}}{=}\frac{1}{4}\sum_{k=1}^{N}\frac{\dot{p}_{k}^{2}}{p_{k}}dt^{2}+\sum_{k=1}^{N}p_{k}ds_{k}^{2}\text{.} \label{vetta} \end{equation} The derivation of Eq. (\ref{vetta}) concludes our explicit calculation of the Sj\"{o}qvist metric for nondegenerate mixed states. Interestingly, note that $ds_{k}^{2}\overset{\text{def}}{=}\left\langle \dot{n}_{k}\left\vert \left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) \right\vert \dot{n}_{k}\right\rangle dt^{2}$ in Eq. (\ref{vetta}) can be written as $ds_{k}^{2}=\left\langle \nabla n_{k}\left\vert \nabla n_{k}\right. \right\rangle $ with $\left\vert \nabla n_{k}\right\rangle \overset{\text{def}}{=}\mathrm{P}_{\bot}^{\left( k\right) }\left\vert \dot{n}_{k}\right\rangle $ being the covariant derivative of $\left\vert n_{k}\right\rangle $ and $\mathrm{P}_{\bot}^{\left( k\right) }\overset{\text{def}}{=}\mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert $ denoting the projector onto states perpendicular to $\left\vert n_{k}\right\rangle $. In analogy to the Bures metric case (see the comment right below Eq.
(\ref{general})), we stress for later convenience that the quadratic term $ds_{k}^{2}$ does not change under a change of sign of the Hamiltonian of the system. The expression of the Sj\"{o}qvist metric in Eq. (\ref{vetta}) can be viewed as the sum of two contributions, a classical and a nonclassical term. The first term in Eq. (\ref{vetta}) is the classical one and is represented by the classical Fisher-Rao information metric between the two probability distributions $\left\{ p_{k}\right\} _{1\leq k\leq N}$ and $\left\{ p_{k}+dp_{k}\right\} _{1\leq k\leq N}$. The second term is the nonclassical one and is represented by a weighted average of pure state Fubini-Study metrics along directions specified by state vectors $\left\{ \left\vert n_{k}\right\rangle \right\} _{1\leq k\leq N}$. We are now ready to introduce our Hamiltonian models. \section{The Hamiltonian Models} In this section, we present two Hamiltonian models. The first Hamiltonian model specifies a spin-$1/2$ particle in a uniform and time-independent external magnetic field oriented along the $z$-axis. The second Hamiltonian model, instead, describes a superconducting flux qubit. Finally, we construct the two corresponding parametric families of thermal states by bringing these two systems into thermal equilibrium with a reservoir at finite and non-zero temperature $T$.\begin{figure} \caption{Schematic depiction of a spin qubit (a) and a superconducting flux qubit (b) in thermal equilibrium with a reservoir at non-zero temperature $T$. The spin qubit in (a) has opposite orientations of the spin along the quantization axis as its two states. The superconducting flux qubit in (b), instead, has circulating currents of opposite sign as its two states.} \end{figure} \subsection{Spin-1/2 qubit Hamiltonian} Consider a spin-$1/2$ particle represented by an electron of mass $m$ and charge $-e$ with $e\geq0$ immersed in an external magnetic field $\vec{B}\left( t\right) $.
From a quantum-mechanical perspective, the Hamiltonian of this system can be described by the Hermitian operator $\mathrm{H}\left( t\right) $ given by $\mathrm{H}\left( t\right) \overset{\text{def}}{=}-\vec{\mu}\mathbf{\cdot}\vec{B}\left( t\right) $ \cite{sakurai}, with $\vec{\mu}$ denoting the electron magnetic moment operator. The quantity $\vec{\mu}$ is defined as $\vec{\mu}\overset{\text{def}}{=}-\left( e/m\right) \vec{s}$ with $\vec{s}\overset{\text{def}}{=}\left( \hslash/2\right) \vec{\sigma}$ being the spin operator. Clearly, $\hslash\overset{\text{def}}{=}h/(2\pi)$ is the reduced Planck constant and $\vec{\sigma}\overset{\text{def}}{=}\left( \sigma_{x}\text{, }\sigma_{y}\text{, }\sigma_{z}\right) $ is the usual Pauli spin vector operator. Assuming a time-independent magnetic field along the $z$-direction given by $\vec{B}\left( t\right) =B_{0}\hat{z}$ and introducing the frequency $\omega\overset{\text{def}}{=}(e/m)B_{0}$, the spin-$1/2$ qubit (SQ) Hamiltonian becomes \begin{equation} \mathrm{H}_{\mathrm{SQ}}\left( \omega\right) \overset{\text{def}}{=}\frac{\hslash\omega}{2}\sigma_{z}\text{,} \label{spinH} \end{equation} where $\sigma_{z}\overset{\text{def}}{=}\left\vert \uparrow\right\rangle \left\langle \uparrow\right\vert -\left\vert \downarrow\right\rangle \left\langle \downarrow\right\vert $ with $\left\vert \uparrow\right\rangle $ and $\left\vert \downarrow\right\rangle $ denoting the spin-up and the spin-down quantum states, respectively. Observe that with the sign convention used for $\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) $ in Eq.
(\ref{spinH}), we have that $\left\vert \downarrow\right\rangle $ ($\left\vert \uparrow\right\rangle $) denotes the ground (excited) state of the system with energy $-\hslash\omega/2$ ($+\hslash\omega/2$). \subsection{Superconducting flux qubit Hamiltonian} It is known that a qubit is a two-level (or, a two-state) quantum system and, moreover, it is possible to realize the two levels in a number of ways. For example, the two levels can be regarded as the spin-up and spin-down of an electron, or as the vertical and horizontal polarization of a single photon. Interestingly, the two levels of a qubit can also be realized as the supercurrent flowing in the anti-clockwise and clockwise directions in a superconducting loop \cite{clarke08,devoret13}. A flux qubit is a superconducting loop interrupted by one or three Josephson junctions (i.e., dissipationless devices with a nonlinear inductance). An arbitrary flux qubit can be described as a superposition of two persistent current basis states. The two quantum states are total magnetic flux $\Phi$ pointing up $\left\vert \uparrow\right\rangle $ and $\Phi$ pointing down $\left\vert \downarrow\right\rangle $. Alternatively, as previously mentioned, the two levels of the quantum system can be described as the supercurrent $I_{q}$ circulating in the loop anti-clockwise and $I_{q}$ circulating clockwise. The Hamiltonian of a superconducting flux qubit (\textrm{SFQ}) in the persistent current basis $\left\{ \left\vert \uparrow\right\rangle \text{, }\left\vert \downarrow\right\rangle \right\} $ is given by \cite{chiorescu03,pekola07,paauw09,pekola16}, \begin{equation} \mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) \overset{\text{def}}{=}-\frac{\hslash}{2}\left( \Delta\sigma_{x}+\epsilon\sigma_{z}\right) \text{.} \label{superH} \end{equation} In Eq.
(\ref{superH}), $\hslash\overset{\text{def}}{=}h/\left( 2\pi\right) $ is the reduced Planck constant, while $\sigma_{x}$ and $\sigma_{z}$ are Pauli matrices. Furthermore, $\hslash\epsilon\overset{\text{def}}{=}2I_{q}\left( \Phi_{e}-\frac{\Phi_{0}}{2}\right) $ is the magnetic energy bias defined in terms of the supercurrent $I_{q}$, the externally applied magnetic flux $\Phi_{e}$, and the magnetic flux quantum $\Phi_{0}\overset{\text{def} }{=}h/\left( 2e\right) $ with $e$ being the absolute value of the electron charge. Finally, $\hslash\Delta$ is the energy gap at the degeneracy point specified by the relation $\Phi_{e}=\Phi_{0}/2$ (i.e., $\epsilon=0$) and represents the minimum splitting of the energy levels of the ground state $\left\vert g\right\rangle $ and the first excited state $\left\vert e\right\rangle $ of the superconducting qubit. At the gap, the coherence properties of the qubit are optimal. Away from the degeneracy point, $\epsilon\neq0$ and the energy-level splitting becomes $\hslash\nu \overset{\text{def}}{=}\hslash\sqrt{\epsilon^{2}+\Delta^{2}}$, with $\nu$ being the transition angular frequency of the qubit. The energy level splitting $\hslash\Delta$ depends on the critical current of the three Josephson junctions and their capacitance \cite{paauw09}. For flux qubits one has $\Delta\sim E_{C}/E_{J}$ with $E_{C}$ and $E_{J}$ denoting the Cooper pair charging energy and the Josephson coupling energy \cite{pekola16}, respectively. In summary, a flux qubit can be represented by a double-well potential whose shape (symmetrical versus asymmetrical) can be tuned with the externally applied magnetic flux $\Phi_{e}$. 
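As a quick consistency check of the stated level splitting, the sketch below (with $\hslash=1$ and purely illustrative values of $\Delta$ and $\epsilon$) diagonalizes $\mathrm{H}_{\mathrm{SFQ}}$ of Eq. (\ref{superH}) and verifies that the gap equals $\nu=\sqrt{\epsilon^{2}+\Delta^{2}}$, reducing to $\Delta$ at the degeneracy point:

```python
import numpy as np

# Sketch (hbar = 1): eigenvalues of H_SFQ = -(1/2)(Delta sigma_x + eps sigma_z)
# are -+ nu/2 with nu = sqrt(eps^2 + Delta^2), so the level splitting is nu.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def splitting(Delta, eps):
    H = -0.5 * (Delta * sx + eps * sz)
    E = np.linalg.eigvalsh(H)        # eigenvalues in ascending order
    return E[1] - E[0]

for Delta, eps in [(1.0, 0.0), (1.0, 0.5), (2.0, -3.0)]:
    assert np.isclose(splitting(Delta, eps), np.hypot(eps, Delta))

# At the degeneracy point (eps = 0) the gap is minimal and equals Delta.
assert np.isclose(splitting(1.0, 0.0), 1.0)
```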
When $\Phi_{e}=\Phi_{0}/2$, the double-well is symmetric, the energy eigenstates (i.e., the ground state and first excited state $\left\vert g\right\rangle $ and $\left\vert e\right\rangle $, respectively) are the symmetric (i.e., $\left\vert g\right\rangle \overset{\text{def}}{=}\left[ \left\vert \uparrow\right\rangle +\left\vert \downarrow\right\rangle \right] /\sqrt{2}$) and antisymmetric (i.e., $\left\vert e\right\rangle \overset{\text{def}}{=}\left[ \left\vert \uparrow\right\rangle -\left\vert \downarrow\right\rangle \right] /\sqrt{2}$) superpositions of the two states $\left\vert \uparrow\right\rangle $ and $\left\vert \downarrow\right\rangle $ and, finally, the splitting of the energy levels of $\left\vert g\right\rangle $ and $\left\vert e\right\rangle $ is $\hslash\Delta$. Instead, when $\Phi_{e}\neq\Phi_{0}/2$, the double-well is not symmetric, the energy eigenstates are arbitrary superpositions of the basis states $\left\vert \uparrow\right\rangle $ and $\left\vert \downarrow\right\rangle $ (i.e., $\alpha\left\vert \uparrow\right\rangle \pm\beta\left\vert \downarrow\right\rangle $ with $\left\vert \alpha\right\vert ^{2}+\left\vert \beta\right\vert ^{2}=1$) and, finally, the energy gap becomes $\hslash\nu\overset{\text{def}}{=}\hslash\sqrt{\epsilon^{2}+\Delta^{2}}$. For more details on the theory underlying superconducting flux qubits, we refer to Ref. \cite{clarke08}. The transition from (isolated) physical systems specified by pure states evolving according to the Hamiltonians in Eqs. (\ref{spinH}) and (\ref{superH}) to the same (open) physical systems described by mixed quantum states can be explained as follows. Assume a quantum system specified by a Hamiltonian $\mathrm{H}$ is in thermal equilibrium with a reservoir at non-zero temperature $T$.
Then, following the principles of quantum statistical mechanics \cite{huang87}, the system has temperature $T$ and its state is described by a thermal state \cite{strocchi08} specified by a density matrix $\rho$ given by, \begin{equation} \rho\overset{\text{def}}{=}\frac{e^{-\beta\mathrm{H}}}{\mathrm{tr}\left( e^{-\beta\mathrm{H}}\right) }\text{.} \label{densityma} \end{equation} In Eq. (\ref{densityma}), $\beta\overset{\text{def}}{=}\left( k_{B}T\right) ^{-1}$ denotes the so-called inverse temperature, while $k_{B}$ is the Boltzmann constant. In what follows, we shall consider two families of mixed quantum thermal states given by \begin{equation} \rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) \overset{\text{def}}{=}\frac{e^{-\beta\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) }}{\mathrm{tr}\left( e^{-\beta\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) }\right) }\text{ and, }\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) \overset{\text{def}}{=}\frac{e^{-\beta\mathrm{H}_{\mathrm{SFQ}}\left( \epsilon\right) }}{\mathrm{tr}\left( e^{-\beta\mathrm{H}_{\mathrm{SFQ}}\left( \epsilon\right) }\right) }\text{.} \label{densita} \end{equation} Note that in $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{densita}), we assume that the parameter $\Delta$ is fixed. For work on how to tune the energy gap $\Delta$ in a flux qubit from an experimental standpoint, we refer to Ref. \cite{paauw09}. In Fig. $1$, we present a schematic depiction of a spin qubit and a superconducting flux qubit in thermal equilibrium with a reservoir at non-zero temperature $T$. \section{Applications} In this section, we calculate both the Sj\"{o}qvist and the Bures metrics for each one of the two distinct families of parametric thermal states mentioned in the previous section.
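Before specializing to each model, the Gibbs-state construction of Eq. (\ref{densityma}) can be realized numerically. The following sketch (with $\hslash\omega=2$ and $\beta=0.7$ as purely illustrative values) builds the thermal state via the spectral decomposition of $\mathrm{H}$ and checks it against the closed form $\rho_{\mathrm{SQ}}=\frac{1}{2}\left[ \mathrm{I}-\tanh\left( \beta\hslash\omega/2\right) \sigma_{z}\right] $ derived below:

```python
import numpy as np

def thermal_state(H, beta):
    """Gibbs state rho = e^{-beta H}/tr(e^{-beta H}) via spectral decomposition."""
    E, V = np.linalg.eigh(H)
    p = np.exp(-beta * E)
    p /= p.sum()                       # Boltzmann weights p_n = e^{-beta E_n}/Z
    return (V * p) @ V.conj().T        # sum_n p_n |n><n|

# Illustrative values: hbar*omega = 2 (so hbar*omega/2 = 1) and beta = 0.7.
sz = np.diag([1.0, -1.0])
rho = thermal_state(1.0 * sz, 0.7)

assert np.isclose(np.trace(rho), 1.0)          # unit trace
assert np.all(np.linalg.eigvalsh(rho) > 0.0)   # full rank: a genuinely mixed state
# Agreement with rho_SQ = (1/2)[I - tanh(beta hbar omega / 2) sigma_z]:
assert np.allclose(rho, 0.5 * (np.eye(2) - np.tanh(0.7) * sz))
```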
From our comparative investigation, we find that the two metrics coincide for the first Hamiltonian model (electron in a constant magnetic field along the $z$-direction), while they differ for the second Hamiltonian model (superconducting flux qubit). \subsection{Spin qubits} Let us consider a system with Hamiltonian $\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) \overset{\text{def}}{=}\left( \hslash\omega/2\right) \sigma_{z}$ in Eq. (\ref{spinH}). Observe that $\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) $ can be recast as \begin{equation} \mathrm{H}_{\mathrm{SQ}}\left( \omega\right) =\sum_{n=0}^{1}E_{n}\left\vert n\right\rangle \left\langle n\right\vert =\frac{\hslash\omega}{2}\left\vert 0\right\rangle \left\langle 0\right\vert -\frac{\hslash\omega}{2}\left\vert 1\right\rangle \left\langle 1\right\vert \text{,} \label{h1} \end{equation} where $E_{0}\overset{\text{def}}{=}\hslash\omega/2$, $E_{1}\overset{\text{def}}{=}-\hslash\omega/2$, and $\left\{ \left\vert n\right\rangle \right\} \overset{\text{def}}{=}\left\{ \left\vert 0\right\rangle =\left\vert \uparrow\right\rangle \text{, }\left\vert 1\right\rangle =\left\vert \downarrow\right\rangle \right\} $. For clarity, note that $\left\vert 1\right\rangle =\left\vert \downarrow\right\rangle $ ($\left\vert 0\right\rangle =\left\vert \uparrow\right\rangle $) denotes here the ground (excited) state corresponding to the lowest (highest) energy level with $E_{1}\overset{\text{def}}{=}-\hslash\omega/2$ ($E_{0}\overset{\text{def}}{=}\hslash\omega/2$). Observe that the thermal state $\rho_{\mathrm{SQ}}$ emerging from the Hamiltonian $\mathrm{H}_{\mathrm{SQ}}$ in Eq.
(\ref{h1}) can be written as \begin{equation} \rho_{\mathrm{SQ}}=\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) \overset{\text{def}}{=}\frac{e^{-\beta\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) }}{\mathrm{tr}\left( e^{-\beta\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) }\right) }\text{.} \label{mike1} \end{equation} The thermal state $\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{mike1}) can be rewritten as, \begin{align} \rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) & =\frac {e^{-\beta\frac{\hslash\omega}{2}\sigma_{z}}}{\mathrm{tr}\left( e^{-\beta\frac{\hslash\omega}{2}\sigma_{z}}\right) }\nonumber\\ & =\frac{\left( \begin{array} [c]{cc} e^{-\beta\frac{\hslash\omega}{2}} & 0\\ 0 & e^{\beta\frac{\hslash\omega}{2}} \end{array} \right) }{e^{-\beta\frac{\hslash\omega}{2}}+e^{\beta\frac{\hslash\omega}{2}} }\nonumber\\ & =\frac{\left( \begin{array} [c]{cc} \cosh\left( \beta\frac{\hslash\omega}{2}\right) -\sinh\left( \beta \frac{\hslash\omega}{2}\right) & 0\\ 0 & \cosh\left( \beta\frac{\hslash\omega}{2}\right) +\sinh\left( \beta \frac{\hslash\omega}{2}\right) \end{array} \right) }{2\cosh\left( \beta\frac{\hslash\omega}{2}\right) }\nonumber\\ & =\frac{1}{2}\left( \begin{array} [c]{cc} 1-\tanh\left( \beta\frac{\hslash\omega}{2}\right) & 0\\ 0 & 1+\tanh\left( \beta\frac{\hslash\omega}{2}\right) \end{array} \right) \nonumber\\ & =\frac{1}{2}\left[ \mathrm{I}-\tanh\left( \beta\frac{\hslash\omega} {2}\right) \sigma_{z}\right] \text{,} \end{align} that is, \begin{equation} \rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) =\frac{1}{2}\left[ \mathrm{I}-\tanh\left( \beta\frac{\hslash\omega}{2}\right) \sigma _{z}\right] \text{.} \label{ro1} \end{equation} In what follows, we shall use $\rho_{\mathrm{SQ}}\left( \beta\text{, } \omega\right) $ in Eq. (\ref{ro1}) to calculate the Bures and the Sj\"{o}qvist metrics. \subsubsection{The Bures metric} We begin by noticing that $ds_{\mathrm{Bures}}^{2}$ in Eq. 
(\ref{general}) becomes in our case \begin{align} ds_{\mathrm{Bures}}^{2} & =\frac{1}{4}\left[ \left\langle \mathrm{H} ^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}\nonumber\\ & +\frac{1}{4}\left\{ \beta^{2}\left\{ \left\langle \left[ \left( \partial_{\omega}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} +2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial _{\omega}\mathrm{H}|m\right\rangle }{E_{n}-E_{m}}\right\vert ^{2}\frac{\left( e^{-\beta E_{n}}-e^{-\beta E_{m}}\right) ^{2}}{\mathcal{Z}\cdot\left( e^{-\beta E_{n}}+e^{-\beta E_{m}}\right) }\right\} d\omega^{2}+\nonumber\\ & +\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{\omega}\mathrm{H} \right) _{d}\right\rangle \right] \right\} d\beta d\omega\text{,} \label{general1} \end{align} where, for simplicity of notation, we denote $\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) $ in Eq. (\ref{h1}) with $\mathrm{H}$. To calculate $ds_{\mathrm{Bures}}^{2}$ in Eq. (\ref{general}), we perform three distinct calculations. 
Specifically, we compute the metric tensor components $g_{\beta\beta}$, $2g_{\beta\omega}$, and $g_{\omega\omega}$ defined as \begin{equation} g_{\beta\beta}\left( \beta\text{, }\omega\right) \overset{\text{def} }{=}\frac{1}{4}\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] \text{, }2g_{\beta\omega}\left( \beta\text{, }\omega\right) \overset{\text{def}}{=}\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H} \right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle \right] \right\} \text{, } \label{secondo} \end{equation} and, \begin{equation} g_{\omega\omega}\left( \beta\text{, }\omega\right) \overset{\text{def} }{=}\frac{1}{4}\left\{ \beta^{2}\left\{ \left\langle \left[ \left( \partial_{\omega}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} +2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial _{\omega}\mathrm{H}|m\right\rangle }{E_{n}-E_{m}}\right\vert ^{2}\frac{\left( e^{-\beta E_{n}}-e^{-\beta E_{m}}\right) ^{2}}{\mathcal{Z}\cdot\left( e^{-\beta E_{n}}+e^{-\beta E_{m}}\right) }\right\} \text{,} \label{terzo} \end{equation} respectively. \paragraph{First sub-calculation} Let us begin with calculating $(1/4)\left[ \left\langle \mathrm{H} ^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}$. 
Observe that the expectation value $\left\langle \mathrm{H}^{2}\right\rangle $ of $\mathrm{H}^{2}$ is given by, \begin{align} \left\langle \mathrm{H}^{2}\right\rangle & =\mathrm{tr}\left( \mathrm{H}^{2}\rho\right) =\sum_{i=0}^{1}p_{i}E_{i}^{2}=p_{0}E_{0}^{2}+p_{1}E_{1}^{2}\nonumber\\ & =\frac{e^{-\beta E_{0}}}{\mathcal{Z}}\left( \frac{\hslash\omega}{2}\right) ^{2}+\frac{e^{-\beta E_{1}}}{\mathcal{Z}}\left( -\frac{\hslash\omega}{2}\right) ^{2}\nonumber\\ & \nonumber\\ & =\frac{\hslash^{2}\omega^{2}}{4}\left( \frac{e^{-\beta\frac{\hslash\omega}{2}}}{\mathcal{Z}}+\frac{e^{\beta\frac{\hslash\omega}{2}}}{\mathcal{Z}}\right) \nonumber\\ & =\frac{\hslash^{2}\omega^{2}}{4}\text{,} \end{align} that is, \begin{equation} \left\langle \mathrm{H}^{2}\right\rangle =\frac{\hslash^{2}\omega^{2}}{4}\text{,} \label{a} \end{equation} where the partition function is $\mathcal{Z}\overset{\text{def}}{=}e^{-\beta\frac{\hslash\omega}{2}}+e^{\beta\frac{\hslash\omega}{2}}=2\cosh\left( \beta\frac{\hslash\omega}{2}\right) $. 
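As a quick numerical sanity check of Eq. (\ref{a}) and of the partition function, the Boltzmann averages can be evaluated directly; the following minimal Python sketch works in natural units ($\hslash=1$), with illustrative values of $\beta$ and $\omega$ chosen purely for the check:

```python
import numpy as np

# Natural units hbar = 1 and the values of beta, omega are assumptions of this sketch.
hbar, beta, omega = 1.0, 0.8, 1.5
E = np.array([hbar * omega / 2, -hbar * omega / 2])  # E_0, E_1 from Eq. (h1)

Z = np.exp(-beta * E).sum()        # partition function, expected 2 cosh(beta*hbar*omega/2)
p = np.exp(-beta * E) / Z          # Boltzmann probabilities p_0, p_1
H2_avg = (p * E**2).sum()          # <H^2> = sum_i p_i E_i^2

assert np.isclose(Z, 2 * np.cosh(beta * hbar * omega / 2))
assert np.isclose(H2_avg, hbar**2 * omega**2 / 4)
```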
Furthermore, we note that the expectation value $\left\langle \mathrm{H}\right\rangle $ of the Hamiltonian is \begin{align} \left\langle \mathrm{H}\right\rangle & =\mathrm{tr}\left( \rho \mathrm{H}\right) =\sum_{i=0}^{1}p_{i}E_{i}=p_{0}E_{0}+p_{1}E_{1} =\frac{e^{-\beta E_{0}}}{\mathcal{Z}}\frac{\hslash\omega}{2}-\frac{e^{-\beta E_{1}}}{\mathcal{Z}}\frac{\hslash\omega}{2}\nonumber\\ & =\frac{e^{-\beta E_{0}}-e^{-\beta E_{1}}}{\mathcal{Z}}\frac{\hslash\omega }{2}=\frac{e^{-\beta\frac{\hslash\omega}{2}}-e^{\beta\frac{\hslash\omega}{2}} }{e^{-\beta\frac{\hslash\omega}{2}}+e^{\beta\frac{\hslash\omega}{2}}} \frac{\hslash\omega}{2}=-\frac{2\sinh\left( \beta\frac{\hslash\omega} {2}\right) }{2\cosh\left( \beta\frac{\hslash\omega}{2}\right) } \frac{\hslash\omega}{2}=-\frac{\hslash\omega}{2}\tanh\left( \beta \frac{\hslash\omega}{2}\right) \text{,} \end{align} that is, \begin{equation} \left\langle \mathrm{H}\right\rangle =-\frac{\hslash\omega}{2}\tanh\left( \beta\frac{\hslash\omega}{2}\right) \text{.} \label{b} \end{equation} Therefore, using Eqs. (\ref{a}) and (\ref{b}), $(1/4)\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}$ becomes \begin{equation} g_{\beta\beta}\left( \beta\text{, }\omega\right) d\beta^{2} \overset{\text{def}}{=}\frac{1}{4}\left[ \left\langle \mathrm{H} ^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}=\frac{\hslash^{2}}{16}\omega^{2}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta^{2}\text{.} \label{A1} \end{equation} For completeness, we remark that\textbf{ }$1-\tanh^{2}\left[ \beta\left( \hslash\omega/2\right) \right] $\textbf{ }in Eq. (\ref{A1}) can also be expressed as\textbf{ }$1$\textbf{/}$\cosh^{2}\left[ \beta\left( \hslash\omega/2\right) \right] $. The calculation of $g_{\beta\beta}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{A1}) ends our first sub-calculation. 
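Since $\rho_{\mathrm{SQ}}$ is diagonal, $g_{\beta\beta}$ can also be recovered from the squared Bures distance between neighboring Boltzmann distributions, $D_{\mathrm{Bures}}^{2}=2(1-\sum_{k}\sqrt{p_{k}q_{k}})$ for commuting states. The Python sketch below (natural units $\hslash=1$; the parameter values are illustrative assumptions) compares a finite-difference estimate of $g_{\beta\beta}$ with the closed form of Eq. (\ref{A1}):

```python
import numpy as np

hbar = 1.0  # natural units (assumption of this sketch)

def probs(beta, omega):
    # Boltzmann probabilities of E_0 = +hbar*omega/2 and E_1 = -hbar*omega/2
    E = np.array([hbar * omega / 2, -hbar * omega / 2])
    w = np.exp(-beta * E)
    return w / w.sum()

def g_bb(beta, omega):
    # closed-form g_{beta beta} of Eq. (A1)
    return (hbar**2 * omega**2 / 16) * (1 - np.tanh(beta * hbar * omega / 2)**2)

def bures_dist2(p, q):
    # squared Bures distance 2(1 - sqrt(F)); for commuting states sqrt(F) = sum_k sqrt(p_k q_k)
    return 2.0 * (1.0 - np.sum(np.sqrt(p * q)))

beta, omega, db = 0.7, 1.3, 1e-4   # illustrative point and step size
g_fd = bures_dist2(probs(beta, omega), probs(beta + db, omega)) / db**2
assert abs(g_fd / g_bb(beta, omega) - 1) < 1e-3
```

The finite-difference estimate agrees with Eq. (\ref{A1}) to better than one part in a thousand for this step size.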
\paragraph{Second sub-calculation} Let us focus on the second term in Eq. (\ref{secondo}). We start by noting that $\left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle $ is given by \begin{align} \left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle & =\sum_{i=0}^{1}p_{i}E_{i}\partial_{\omega}E_{i} =p_{0}E_{0}\partial_{\omega}E_{0}+p_{1}E_{1}\partial_{\omega}E_{1}\nonumber\\ & =p_{0}\frac{\hslash\omega}{2}\partial_{\omega}\left( \frac{\hslash\omega }{2}\right) +p_{1}\left( -\frac{\hslash\omega}{2}\right) \partial_{\omega }\left( -\frac{\hslash\omega}{2}\right) \nonumber\\ & =\frac{\hslash^{2}}{4}\omega p_{0}+\frac{\hslash^{2}}{4}\omega p_{1} =\frac{\hslash^{2}}{4}\omega\left( p_{0}+p_{1}\right) =\frac{\hslash^{2}} {4}\omega\text{,} \end{align} that is, \begin{equation} \left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle =\frac{\hslash^{2}}{4}\omega\text{.} \label{c} \end{equation} Moreover, $\left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle $ can be expressed as \begin{align} \left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle & =\sum_{i=0}^{1}p_{i}\partial_{\omega}E_{i}=p_{0}\partial_{\omega}E_{0} +p_{1}\partial_{\omega}E_{1}=\frac{\hslash}{2}p_{0}-\frac{\hslash}{2} p_{1}\nonumber\\ & =\frac{\hslash}{2}\frac{e^{-\beta E_{0}}-e^{-\beta E_{1}}}{\mathcal{Z} }=\frac{\hslash}{2}\frac{e^{-\beta\frac{\hslash\omega}{2}}-e^{\beta \frac{\hslash\omega}{2}}}{e^{-\beta\frac{\hslash\omega}{2}}+e^{\beta \frac{\hslash\omega}{2}}}=-\frac{\hslash}{2}\frac{2\sinh\left( \beta \frac{\hslash\omega}{2}\right) }{2\cosh\left( \beta\frac{\hslash\omega} {2}\right) }\nonumber\\ & =-\frac{\hslash}{2}\tanh\left( \beta\frac{\hslash\omega}{2}\right) \text{,} \end{align} that is, \begin{equation} \left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle =-\frac{\hslash}{2}\tanh\left( \beta\frac{\hslash\omega}{2}\right) \text{.} \label{d} 
\end{equation} Therefore, using Eqs. (\ref{b}), (\ref{c}), and (\ref{d}), we obtain \begin{equation} 2g_{\beta\omega}\left( \beta\text{, }\omega\right) d\beta d\omega\overset{\text{def}}{=}\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle \right] \right\} d\beta d\omega=\frac{\hslash^{2}}{8}\beta\omega\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta d\omega\text{.} \label{B1} \end{equation} The calculation of $2g_{\beta\omega}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{B1}) ends our second sub-calculation. \paragraph{Third sub-calculation} Let us now calculate the term in Eq. (\ref{terzo}). Recall from Eq. (\ref{d}) that $\left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle =-\frac{\hslash}{2}\tanh\left( \beta\frac{\hslash\omega}{2}\right) $. Therefore, we have \begin{equation} \left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle ^{2}=\frac{\hslash^{2}}{4}\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \text{.} \label{f} \end{equation} Moreover, we note that $\left\langle \left[ \left( \partial_{\omega}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle $ can be rewritten as \begin{align} \left\langle \left[ \left( \partial_{\omega}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle & =\sum_{i=0}^{1}p_{i}\left( \partial_{\omega}E_{i}\right) ^{2}=p_{0}\left( \partial_{\omega}E_{0}\right) ^{2}+p_{1}\left( \partial_{\omega}E_{1}\right) ^{2}\nonumber\\ & =p_{0}\left[ \partial_{\omega}\left( \frac{\hslash\omega}{2}\right) \right] ^{2}+p_{1}\left[ \partial_{\omega}\left( -\frac{\hslash\omega}{2}\right) \right] ^{2}\nonumber\\ & =\frac{\hslash^{2}}{4}p_{0}+\frac{\hslash^{2}}{4}p_{1}=\frac{\hslash^{2}}{4}\left( p_{0}+p_{1}\right) =\frac{\hslash^{2}}{4}\text{,} \end{align} that is, \begin{equation} \left\langle \left[ 
\left( \partial_{\omega}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle =\frac{\hslash^{2}}{4}\text{.} \label{g} \end{equation} Finally, note that \begin{align} 2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{\omega}\mathrm{H}|m\right\rangle }{E_{n}-E_{m}}\right\vert ^{2}\frac{\left( e^{-\beta E_{n}}-e^{-\beta E_{m}}\right) ^{2}}{\mathcal{Z}\cdot\left( e^{-\beta E_{n}}+e^{-\beta E_{m}}\right) } & =\frac{2}{\mathcal{Z}}\left\vert \frac{\left\langle 0|\partial_{\omega}\mathrm{H}|1\right\rangle }{E_{0}-E_{1}}\right\vert ^{2}\frac{\left( e^{-\beta E_{0}}-e^{-\beta E_{1}}\right) ^{2}}{e^{-\beta E_{0}}+e^{-\beta E_{1}}}+\frac{2}{\mathcal{Z}}\left\vert \frac{\left\langle 1|\partial_{\omega}\mathrm{H}|0\right\rangle }{E_{1}-E_{0}}\right\vert ^{2}\frac{\left( e^{-\beta E_{1}}-e^{-\beta E_{0}}\right) ^{2}}{e^{-\beta E_{1}}+e^{-\beta E_{0}}}\nonumber\\ & =0\text{,} \label{h} \end{align} since $\left\langle 0|\partial_{\omega}\mathrm{H}|1\right\rangle =\left\langle 1|\partial_{\omega}\mathrm{H}|0\right\rangle =0$ as a consequence of the fact that $\mathrm{H}=\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) $ in Eq. (\ref{spinH}) is diagonal. Therefore, using Eqs. (\ref{f}), (\ref{g}), and (\ref{h}), we finally get that Eq. (\ref{terzo}) becomes \begin{equation} g_{\omega\omega}\left( \beta\text{, }\omega\right) d\omega^{2}=\frac{\hslash^{2}}{16}\beta^{2}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\omega^{2}\text{.} \label{C1} \end{equation} The calculation of $g_{\omega\omega}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{C1}) ends our third sub-calculation. In conclusion, exploiting Eqs. (\ref{A1}), (\ref{B1}), and (\ref{C1}), the Bures metric $ds_{\mathrm{Bures}}^{2}=g_{\beta\beta}\left( \beta\text{, }\omega\right) d\beta^{2}+g_{\omega\omega}\left( \beta\text{, }\omega\right) d\omega^{2}+2g_{\beta\omega}\left( \beta\text{, }\omega\right) d\beta d\omega$ in Eq. 
(\ref{general1}) becomes \begin{equation} ds_{\mathrm{Bures}}^{2}=\frac{\hslash^{2}\omega^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta^{2}+\frac{\hslash^{2}\beta^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\omega^{2}+\frac{\hslash^{2}\beta\omega}{8}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta d\omega\text{.} \label{07} \end{equation} Using Einstein's summation convention, $ds_{\mathrm{Bures}}^{2}=g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\omega\right) d\theta^{i}d\theta^{j}$ with $\theta^{1}\overset{\text{def}}{=}\beta$ and $\theta^{2}\overset{\text{def}}{=}\omega$. Finally, using Eq. (\ref{07}), the Bures metric tensor $g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\omega\right) $ becomes \begin{equation} g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\omega\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] \left( \begin{array} [c]{cc} \omega^{2} & \beta\omega\\ \beta\omega & \beta^{2} \end{array} \right) \text{,} \label{f2} \end{equation} with $1\leq i$, $j\leq2$. Note that $g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\omega\right) $ in Eq. (\ref{f2}) equals the classical Fisher-Rao metric since there is no non-classical contribution in this case. The derivation of $g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\omega\right) $ in Eq. (\ref{f2}) ends our calculation of the Bures metric tensor for spin qubits. Interestingly, we observe that setting $k_{B}=1$, $\beta=t^{-1}$, and $\omega_{z}=t$, our Eq. (\ref{f2}) reduces to the last relation obtained by Zanardi and collaborators in Ref. \cite{zanardi07}. \subsubsection{The Sj\"{o}qvist metric} Given the expression of $\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) $ in Eq. 
(\ref{ro1}), we can proceed with the calculation of the Sj\"{o}qvist metric given by \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}=\frac{1}{4}\sum_{k=0}^{1}\frac{dp_{k}^{2}}{p_{k}}+\sum_{k=0}^{1}p_{k}\left\langle dn_{k}|\left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) |dn_{k}\right\rangle \text{.} \label{smetric} \end{equation} In our case, we note that the probabilities $p_{0}$ and $p_{1}$ are given by \begin{equation} p_{0}=p_{0}\left( \beta\text{, }\omega\right) \overset{\text{def}}{=}\frac{1-\tanh\left( \beta\frac{\hslash\omega}{2}\right) }{2}\text{, and }p_{1}=p_{1}\left( \beta\text{, }\omega\right) \overset{\text{def}}{=}\frac{1+\tanh\left( \beta\frac{\hslash\omega}{2}\right) }{2}\text{,} \label{prob} \end{equation} respectively. Furthermore, the states $\left\vert n_{0}\right\rangle $ and $\left\vert n_{1}\right\rangle $ are \begin{equation} \left\vert n_{0}\right\rangle \overset{\text{def}}{=}\left\vert 0\right\rangle \text{, and }\left\vert n_{1}\right\rangle \overset{\text{def}}{=}\left\vert 1\right\rangle \text{.} \label{stati} \end{equation} In general, since $n_{k}=n_{k}\left( \beta\text{, }\omega\right) $, we have that $dn_{k}\overset{\text{def}}{=}\frac{\partial n_{k}}{\partial\beta}d\beta+\frac{\partial n_{k}}{\partial\omega}d\omega$. In our case, however, the eigenstates in Eq. (\ref{stati}) do not depend on $\beta$ or $\omega$, so $\left\vert dn_{k}\right\rangle =0$ is the null vector. From Eq. 
(\ref{smetric}), $ds_{\mathrm{Sj\ddot{o}qvist}} ^{2}$ reduces to \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}=\frac{1}{4}\sum_{k=0}^{1}\frac{dp_{k}^{2} }{p_{k}}=\frac{1}{4}\left( \frac{dp_{0}^{2}}{p_{0}}+\frac{dp_{1}^{2}}{p_{1} }\right) \text{,} \label{n2} \end{equation} where the differentials $dp_{0}$ and $dp_{1}$ are given by \begin{equation} dp_{0}\overset{\text{def}}{=}\frac{\partial p_{0}}{\partial\beta}d\beta +\frac{\partial p_{0}}{\partial\omega}d\omega\text{, and }dp_{1} \overset{\text{def}}{=}\frac{\partial p_{1}}{\partial\beta}d\beta +\frac{\partial p_{1}}{\partial\omega}d\omega\text{,} \label{n1} \end{equation} respectively. Therefore, substituting Eq. (\ref{n1}) into Eq. (\ref{n2}), we get \begin{align} ds_{\mathrm{Sj\ddot{o}qvist}}^{2} & =\frac{1}{4}\frac{\left( \partial _{\beta}p_{0}d\beta+\partial_{\omega}p_{0}d\omega\right) ^{2}}{p_{0}} +\frac{1}{4}\frac{\left( \partial_{\beta}p_{1}d\beta+\partial_{\omega} p_{1}d\omega\right) ^{2}}{p_{1}}\nonumber\\ & =\frac{\left( \partial_{\beta}p_{0}\right) ^{2}d\beta^{2}+\left( \partial_{\omega}p_{0}\right) ^{2}d\omega^{2}+2\partial_{\beta}p_{0} \partial_{\omega}p_{0}d\beta d\omega}{4p_{0}}+\frac{\left( \partial_{\beta }p_{1}\right) ^{2}d\beta^{2}+\left( \partial_{\omega}p_{1}\right) ^{2}d\omega^{2}+2\partial_{\beta}p_{1}\partial_{\omega}p_{1}d\beta d\omega }{4p_{1}}\nonumber\\ & =\left[ \frac{\left( \partial_{\beta}p_{0}\right) ^{2}}{4p_{0}} +\frac{\left( \partial_{\beta}p_{1}\right) ^{2}}{4p_{1}}\right] d\beta ^{2}+\left[ \frac{\left( \partial_{\omega}p_{0}\right) ^{2}}{4p_{0}} +\frac{\left( \partial_{\omega}p_{1}\right) ^{2}}{4p_{1}}\right] d\omega^{2}+\left[ \frac{2\partial_{\beta}p_{0}\partial_{\omega}p_{0}} {4p_{0}}+\frac{2\partial_{\beta}p_{1}\partial_{\omega}p_{1}}{4p_{1}}\right] d\beta d\omega\text{,} \end{align} that is, \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}=\left[ \frac{\left( \partial_{\beta} p_{0}\right) ^{2}}{4p_{0}}+\frac{\left( \partial_{\beta}p_{1}\right) ^{2} 
}{4p_{1}}\right] d\beta^{2}+\left[ \frac{\left( \partial_{\omega} p_{0}\right) ^{2}}{4p_{0}}+\frac{\left( \partial_{\omega}p_{1}\right) ^{2} }{4p_{1}}\right] d\omega^{2}+\left[ \frac{2\partial_{\beta}p_{0} \partial_{\omega}p_{0}}{4p_{0}}+\frac{2\partial_{\beta}p_{1}\partial_{\omega }p_{1}}{4p_{1}}\right] d\beta d\omega\text{.} \label{chi2} \end{equation} From Eq. (\ref{prob}), we observe that \begin{align} \partial_{\beta}p_{0} & =-\frac{\hslash\omega}{4}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] \text{, }\partial_{\omega} p_{0}=-\frac{\hslash\beta}{4}\left[ 1-\tanh^{2}\left( \beta\frac {\hslash\omega}{2}\right) \right] \text{,}\nonumber\\ & \nonumber\\ \partial_{\beta}p_{1} & =\frac{\hslash\omega}{4}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] \text{, }\partial_{\omega} p_{1}=\frac{\hslash\beta}{4}\left[ 1-\tanh^{2}\left( \beta\frac {\hslash\omega}{2}\right) \right] \text{.} \label{chi1} \end{align} Finally, substituting Eq. (\ref{chi1}) into Eq. (\ref{chi2}), we obtain \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}=\frac{\hslash^{2}\omega^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta ^{2}+\frac{\hslash^{2}\beta^{2}}{16}\left[ 1-\tanh^{2}\left( \beta \frac{\hslash\omega}{2}\right) \right] d\omega^{2}+\frac{\hslash^{2} \beta\omega}{8}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta d\omega\text{.} \label{f0} \end{equation} Using Einstein's summation convention, $ds_{\mathrm{Sj\ddot{o}qvist}} ^{2}=g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, }\omega\right) d\theta^{i}d\theta^{j}$ with $\theta^{1}\overset{\text{def} }{=}\beta$ and $\theta^{2}\overset{\text{def}}{=}\omega$. Finally, using Eq. 
(\ref{f0}), the Sj\"{o}qvist metric tensor becomes \begin{equation} g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, }\omega\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] \left( \begin{array} [c]{cc} \omega^{2} & \beta\omega\\ \beta\omega & \beta^{2} \end{array} \right) \text{,} \label{f1} \end{equation} with $1\leq i$, $j\leq2$. Note that $g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, }\omega\right) $ in Eq. (\ref{f1}) is equal to the classical Fisher-Rao metric since the non-classical contribution is absent in this case. The derivation of $g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, }\omega\right) $ in Eq. (\ref{f1}) ends our calculation of the Sj\"{o}qvist metric tensor for spin qubits. Recalling the general expressions of the Bures and Sj\"{o}qvist metrics in Eqs. (\ref{7}) and (\ref{vetta}), together with our first set of explicit calculations, a few remarks are in order. First, both metrics have a classical and a non-classical contribution. Second, the classical Fisher-Rao metric contribution is related to changes $dp_{n}=\partial_{\beta}p_{n}d\beta+\partial_{h}p_{n}dh$ in the probabilities $p_{n}\left( \beta\text{, }h\right) \propto e^{-\beta E_{n}\left( h\right) }$ with $\left\{ E_{n}\left( h\right) \right\} $ being the eigenvalues of the Hamiltonian. Finally, the non-classical contribution in the two metrics is linked to changes $\left\vert dn\right\rangle =\partial_{h}\left\vert n\right\rangle dh=\left\vert \partial_{h}n\right\rangle dh$ in the eigenvectors $\left\{ \left\vert n\left( h\right) \right\rangle \right\} $ of the Hamiltonian. In our first Hamiltonian model, $\mathrm{H}\propto\sigma_{z}$ is diagonal and, thus, its eigenvectors do not depend on any parameter. 
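For this diagonal model, the coincidence of both tensors with the classical Fisher-Rao metric can be checked numerically from the probabilities in Eq. (\ref{prob}) alone; a minimal Python sketch (natural units $\hslash=1$; the evaluation point is an illustrative assumption) compares a finite-difference Fisher-Rao tensor with the common closed form of Eqs. (\ref{f2}) and (\ref{f1}):

```python
import numpy as np

hbar = 1.0  # natural units (assumption of this sketch)

def p(beta, omega):
    # Boltzmann probabilities p_0, p_1 of Eq. (prob)
    t = np.tanh(beta * hbar * omega / 2)
    return np.array([(1 - t) / 2, (1 + t) / 2])

def fisher_rao(beta, omega, h=1e-6):
    # classical Fisher-Rao tensor (1/4) sum_k (d_i p_k)(d_j p_k)/p_k via central differences
    dpdb = (p(beta + h, omega) - p(beta - h, omega)) / (2 * h)
    dpdw = (p(beta, omega + h) - p(beta, omega - h)) / (2 * h)
    grads = [dpdb, dpdw]
    pk = p(beta, omega)
    return np.array([[np.sum(gi * gj / pk) / 4 for gj in grads] for gi in grads])

def g_closed(beta, omega):
    # common closed form of Eqs. (f2) and (f1)
    c = (hbar**2 / 16) * (1 - np.tanh(beta * hbar * omega / 2)**2)
    return c * np.array([[omega**2, beta * omega], [beta * omega, beta**2]])

b, w = 0.9, 1.4  # illustrative evaluation point
assert np.allclose(fisher_rao(b, w), g_closed(b, w), rtol=1e-5)
```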
Therefore, we found that both the Bures and Sj\"{o}qvist metrics reduce to the classical Fisher-Rao metric. However, one expects that if $\mathrm{H}$ is not proportional to the Pauli matrix operator $\sigma_{z}$, the non-classical contributions no longer vanish and the two metrics may yield different quantum (i.e., non-classical) metric contributions. Indeed, if one considers a spin qubit Hamiltonian specified by a magnetic field with an orientation that is not constrained to be along the $z$-axis, the Bures and Sj\"{o}qvist metrics happen to be different. In particular, for a time-independent and uniform magnetic field given by $\vec{B}=B_{x}\hat{x}+B_{z}\hat{z}$, the spin qubit Hamiltonian becomes $\mathrm{H}_{\mathrm{SQ}}\left( \omega_{x}\text{, }\omega_{z}\right) \overset{\text{def}}{=}\left( \hslash/2\right) (\omega_{x}\sigma_{x}+\omega_{z}\sigma_{z})$. Assuming a fixed $\omega_{x}\neq0$, tuning only the parameters $\beta$ and $\omega_{z}$, and repeating our metric calculations, it can be shown that the Bures and Sj\"{o}qvist metric tensor components $g_{ij}^{\mathrm{Bures}}\left( \beta\text{, }\omega_{z}\right) $ and $g_{ij}^{\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, }\omega_{z}\right) $ are \begin{equation} g_{ij}^{\mathrm{Bures}}\left( \beta\text{, }\omega_{z}\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) \right] \left( \begin{array} [c]{cc} \omega_{x}^{2}+\omega_{z}^{2} & \beta\omega_{z}\\ \beta\omega_{z} & \beta^{2}\frac{\omega_{z}^{2}}{\omega_{x}^{2}+\omega_{z}^{2}}+\frac{4}{\hslash^{2}}\frac{\omega_{x}^{2}}{\left( \omega_{x}^{2}+\omega_{z}^{2}\right) ^{2}}\frac{\tanh^{2}\left( \beta\frac{\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) }{1-\tanh^{2}\left( \beta\frac{\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) } \end{array} \right) \text{,} \label{G1A} \end{equation} and, \begin{equation} 
g_{ij}^{\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, }\omega_{z}\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) \right] \left( \begin{array} [c]{cc} \omega_{x}^{2}+\omega_{z}^{2} & \beta\omega_{z}\\ \beta\omega_{z} & \beta^{2}\frac{\omega_{z}^{2}}{\omega_{x}^{2}+\omega_{z}^{2}}+\frac{4}{\hslash^{2}}\frac{\omega_{x}^{2}}{\left( \omega_{x}^{2}+\omega_{z}^{2}\right) ^{2}}\frac{1}{1-\tanh^{2}\left( \beta\frac{\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) } \end{array} \right) \text{,} \label{G1B} \end{equation} respectively. For completeness, we remark that the calculational techniques needed to arrive at expressions such as those in Eqs. (\ref{G1A}) and (\ref{G1B}) will be presented in the next subsection, where $\mathrm{H}_{\mathrm{SQ}}\left( \omega_{x}\text{, }\omega_{z}\right) $ will be replaced by the superconducting flux qubit Hamiltonian $\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) \overset{\text{def}}{=}\left( -\hslash/2\right) \left( \Delta\sigma_{x}+\epsilon\sigma_{z}\right) $. Returning to our considerations, recall that for any $x\in\mathbb{R}$, we have \begin{equation} \frac{\tanh^{2}\left( x\right) }{1-\tanh^{2}\left( x\right) }=\sinh^{2}\left( x\right) \text{, and }\frac{1}{1-\tanh^{2}\left( x\right) }=\cosh^{2}\left( x\right) \text{.} \end{equation} Then, using Eqs. 
(\ref{G1A}) and (\ref{G1B}), we obtain \begin{equation} 0\leq\frac{g_{\omega_{z}\omega_{z}}^{\mathrm{nc}\text{, \textrm{Bures}}}\left( \beta\text{, }\omega_{z}\right) }{g_{\omega_{z}\omega_{z}}^{\mathrm{nc}\text{, }\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, }\omega_{z}\right) }=\frac{\sinh^{2}\left( \beta\frac{\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) }{\cosh^{2}\left( \beta\frac{\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) }=\tanh^{2}\left( \beta\frac{\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) \leq1\text{,} \label{G1C} \end{equation} with $g_{\omega_{z}\omega_{z}}^{\mathrm{nc}\text{, \textrm{Bures}}}\left( \beta\text{, }\omega_{z}\right) $ and $g_{\omega_{z}\omega_{z}}^{\mathrm{nc}\text{, }\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, }\omega_{z}\right) $ denoting the non-classical contributions in the Bures and Sj\"{o}qvist metric cases, respectively. From Eqs. (\ref{G1A}) and (\ref{G1B}), we conclude that a nonvanishing component of the magnetic field along the $x$-direction introduces a visible non-commutative probabilistic structure in the quantum mechanics of the system, characterized by a non-classical scenario with $\left[ \rho\text{, }\rho+d\rho\right] \neq0$. In such a case, the Bures and the Sj\"{o}qvist metrics exhibit a different behavior, as is evident from their non-classical metric tensor components (i.e., $g_{\omega_{z}\omega_{z}}^{\mathrm{nc}}\left( \beta\text{, }\omega_{z}\right) $) in Eq. (\ref{G1C}). \subsection{Superconducting flux qubits} Let us consider a system with Hamiltonian $\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) \overset{\text{def}}{=}\left( -\hslash/2\right) \left( \Delta\sigma_{x}+\epsilon\sigma_{z}\right) $ as in Eq. (\ref{superH}). 
The thermal state $\rho_{\mathrm{SFQ} }\left( \beta\text{, }\epsilon\right) $ corresponding to $\mathrm{H} _{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) $ with $\Delta$ assumed to be constant is given by \begin{equation} \rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) \overset{\text{def} }{=}\frac{e^{-\beta\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, } \epsilon\right) }}{\mathrm{tr}\left( e^{-\beta\mathrm{H}_{\mathrm{SFQ} }\left( \Delta\text{, }\epsilon\right) }\right) }\text{.} \label{anto1} \end{equation} Observe that $\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) $ is diagonalizable and can be recast as $\mathrm{H}_{\mathrm{SFQ} }=M_{\mathrm{H}_{\mathrm{SFQ}}}\mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1}$ where\textbf{ }$M_{\mathrm{H}_{\mathrm{SFQ}}}$\textbf{ }and\textbf{ }$M_{\mathrm{H} _{\mathrm{SFQ}}}^{-1}$\textbf{ }are the eigenvector matrix and its inverse, respectively. Therefore, after some algebra, $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. 
(\ref{anto1}) can be rewritten as \begin{align} \rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) & =\frac {e^{-\beta\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) } }{\mathrm{tr}(e^{-\beta\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) })}=\frac{e^{-\beta M_{\mathrm{H}_{\mathrm{SFQ}}} \mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }M_{\mathrm{H} _{\mathrm{SFQ}}}^{-1}}}{\mathrm{tr}(e^{-\beta M_{\mathrm{H}_{\mathrm{SFQ}} }\mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }M_{\mathrm{H} _{\mathrm{SFQ}}}^{-1}})}\nonumber\\ & =\frac{M_{\mathrm{H}_{\mathrm{SFQ}}}e^{-\beta\mathrm{H}_{\mathrm{SFQ} }^{\left( \mathrm{diagonal}\right) }}M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1} }{\mathrm{tr}(M_{\mathrm{H}_{\mathrm{SFQ}}}e^{-\beta\mathrm{H}_{\mathrm{SFQ} }^{\left( \mathrm{diagonal}\right) }}M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1} )}\nonumber\\ & =M_{\mathrm{H}_{\mathrm{SFQ}}}\frac{e^{-\beta\mathrm{H}_{\mathrm{SFQ} }^{\left( \mathrm{diagonal}\right) }}}{\mathrm{tr}(e^{-\beta\mathrm{H} _{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }})}M_{\mathrm{H} _{\mathrm{SFQ}}}^{-1}\text{,} \end{align} that is, \begin{equation} \rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) =M_{\mathrm{H} _{\mathrm{SFQ}}}\frac{e^{-\beta\mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }}}{\mathrm{tr}(e^{-\beta\mathrm{H}_{\mathrm{SFQ} }^{\left( \mathrm{diagonal}\right) }})}M_{\mathrm{H}_{\mathrm{SFQ}}} ^{-1}=M_{\mathrm{H}_{\mathrm{SFQ}}}\rho_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }\left( \beta\text{, }\epsilon\right) M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1}\text{.} \label{anto0} \end{equation} The quantity $\mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }$ in Eq. 
(\ref{anto0}) is defined as, \begin{equation} \mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }\overset{\text{def}}{=}E_{0}\left\vert n_{1}\right\rangle \left\langle n_{1}\right\vert +E_{1}\left\vert n_{0}\right\rangle \left\langle n_{0}\right\vert \text{.} \label{anto2} \end{equation} The eigenvalues $E_{0}$ and $E_{1}$ are given by $E_{0}\overset{\text{def}}{=}-\left( \hslash/2\right) \nu$ and $E_{1}\overset{\text{def}}{=}+\left( \hslash/2\right) \nu$, respectively, with $\nu\overset{\text{def}}{=}\sqrt{\Delta^{2}+\epsilon^{2}}$. For later use, it is convenient to introduce the notation $\tilde{E}_{0}\overset{\text{def}}{=}E_{1}$ and $\tilde{E}_{1}\overset{\text{def}}{=}E_{0}$ so that $\mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }\overset{\text{def}}{=}\tilde{E}_{0}\left\vert n_{0}\right\rangle \left\langle n_{0}\right\vert +\tilde{E}_{1}\left\vert n_{1}\right\rangle \left\langle n_{1}\right\vert $. The two orthonormal eigenvectors corresponding to $E_{0}$ and $E_{1}$ are $\left\vert n_{1}\right\rangle $ and $\left\vert n_{0}\right\rangle $, respectively. They are given by \begin{equation} \left\vert n_{0}\right\rangle \overset{\text{def}}{=}\frac{1}{\sqrt{2}}\left( \begin{array} [c]{c} \frac{\epsilon-\sqrt{\epsilon^{2}+\Delta^{2}}}{\sqrt{\epsilon^{2}+\Delta^{2}-\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}}}\\ \frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}-\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}}} \end{array} \right) \text{ and, }\left\vert n_{1}\right\rangle \overset{\text{def}}{=}\frac{1}{\sqrt{2}}\left( \begin{array} [c]{c} \frac{\epsilon+\sqrt{\epsilon^{2}+\Delta^{2}}}{\sqrt{\epsilon^{2}+\Delta^{2}+\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}}}\\ \frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}+\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}}} \end{array} \right) \text{,} \label{anto444} \end{equation} respectively. 
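The eigenvalue assignments above can be verified numerically. In the Python sketch below (natural units $\hslash=1$; the values of $\Delta$ and $\epsilon$ are illustrative assumptions, and the eigenvectors of Eq. (\ref{anto444}) are taken up to normalization), $\left\vert n_{1}\right\rangle $ is checked against $E_{0}=-\hslash\nu/2$ and $\left\vert n_{0}\right\rangle $ against $E_{1}=+\hslash\nu/2$:

```python
import numpy as np

hbar, Delta, eps = 1.0, 0.8, 0.6   # illustrative values (assumption of this sketch)
nu = np.hypot(Delta, eps)          # nu = sqrt(Delta^2 + eps^2)

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H = -(hbar / 2) * (Delta * sx + eps * sz)   # H_SFQ of Eq. (superH)

def normed(v):
    return v / np.linalg.norm(v)

# Eq. (anto444), up to normalization
n0 = normed(np.array([eps - nu, Delta]))
n1 = normed(np.array([eps + nu, Delta]))

# |n_1> pairs with E_0 = -hbar*nu/2 and |n_0> with E_1 = +hbar*nu/2
assert np.allclose(H @ n1, (-hbar * nu / 2) * n1)
assert np.allclose(H @ n0, (+hbar * nu / 2) * n0)
assert np.isclose(n0 @ n1, 0.0)  # orthonormality
```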
A suitable choice for the eigenvector matrix $M_{\mathrm{H} _{\mathrm{SFQ}}}$ and its inverse $M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1}$ in Eq. (\ref{anto0}) can be expressed as \begin{equation} M_{\mathrm{H}_{\mathrm{SFQ}}}\overset{\text{def}}{=}\left( \begin{array} [c]{cc} \frac{\epsilon+\nu}{\Delta} & \frac{\epsilon-\nu}{\Delta}\\ 1 & 1 \end{array} \right) \text{ and, }M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1}\overset{\text{def} }{=}\left( \begin{array} [c]{cc} \frac{\Delta}{2\nu} & \frac{\nu-\epsilon}{2\nu}\\ -\frac{\Delta}{2\nu} & \frac{\nu+\epsilon}{2\nu} \end{array} \right) \text{,} \label{anto44} \end{equation} respectively. Using Eqs. (\ref{anto2}) and (\ref{anto44}), $\rho _{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{anto0}) becomes \begin{equation} \rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) =\frac{1}{2}\left( \begin{array} [c]{cc} 1+\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}}\tanh\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) & \frac{\Delta} {\sqrt{\epsilon^{2}+\Delta^{2}}}\tanh\left( \beta\hslash\frac{\sqrt {\epsilon^{2}+\Delta^{2}}}{2}\right) \\ \frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}}}\tanh\left( \beta\hslash \frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) & 1-\frac{\epsilon} {\sqrt{\epsilon^{2}+\Delta^{2}}}\tanh\left( \beta\hslash\frac{\sqrt {\epsilon^{2}+\Delta^{2}}}{2}\right) \end{array} \right) \text{,} \end{equation} that is, \begin{equation} \rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) =\frac{1}{2}\left[ \mathrm{I}+\left( \frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}}}\sigma _{x}+\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}}\sigma_{z}\right) \tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \text{.} \label{anto5} \end{equation} For completeness, we note here that the spectral decomposition of $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) =M_{\mathrm{H} _{\mathrm{SFQ}}}\rho_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }\left( \beta\text{, 
}\epsilon\right) M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1}$ in Eq. (\ref{anto5}) is given by $\rho_{\mathrm{SFQ}}\overset{\text{def}}{=} p_{0}\left\vert n_{0}\right\rangle \left\langle n_{0}\right\vert +p_{1}\left\vert n_{1}\right\rangle \left\langle n_{1}\right\vert $. The probabilities $p_{0}$ and $p_{1}$ are \begin{equation} p_{0}\overset{\text{def}}{=}\frac{e^{-\beta\tilde{E}_{0}}}{\mathcal{Z}} =\frac{1}{2}\left[ 1-\tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2} +\Delta^{2}}}{2}\right) \right] \text{, and }p_{1}\overset{\text{def} }{=}\frac{e^{-\beta\tilde{E}_{1}}}{\mathcal{Z}}=\frac{1}{2}\left[ 1+\tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \text{,} \label{sorry} \end{equation} respectively, with $\mathcal{Z}\overset{\text{def}}{\mathcal{=}}e^{-\beta E_{0}}+e^{-\beta E_{1}}=e^{-\beta\tilde{E}_{0}}+e^{-\beta\tilde{E}_{1}} =2\cosh(\beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2})$ denoting the partition function of the system. In what follows, we shall use $\rho _{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{anto5}) to calculate the Bures and the Sj\"{o}qvist metrics. \subsubsection{The Bures metric} For simplicity of notation, we replace $\mathrm{H}_{\mathrm{SFQ}}$ with $\mathrm{H}$ in the forthcoming calculation. We begin by noting that, in our case, the general expression of the Bures metric $ds_{\mathrm{Bures}}^{2}$ in Eq. 
(\ref{general}) becomes \begin{align} ds_{\mathrm{Bures}}^{2} & =\frac{1}{4}\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}\nonumber\\ & +\frac{1}{4}\left\{ \beta^{2}\left\{ \left\langle \left[ \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} +2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{\epsilon}\mathrm{H}|m\right\rangle }{\tilde{E}_{n}-\tilde{E}_{m}}\right\vert ^{2}\frac{\left( e^{-\beta\tilde{E}_{n}}-e^{-\beta\tilde{E}_{m}}\right) ^{2}}{\mathcal{Z}\cdot\left( e^{-\beta\tilde{E}_{n}}+e^{-\beta\tilde{E}_{m}}\right) }\right\} d\epsilon^{2}+\nonumber\\ & +\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle \right] \right\} d\beta d\epsilon\text{.} \label{zarrillo} \end{align} As previously pointed out in this manuscript, $ds_{\mathrm{Bures}}^{2}$ is the sum of two contributions, the classical Fisher-Rao information metric contribution and the non-classical metric contribution described in the summation term in the right-hand side of Eq. (\ref{zarrillo}). In what follows, we shall see that the presence of nonvanishing terms $\left\vert \left\langle n|\partial_{\epsilon}\mathrm{H}|m\right\rangle \right\vert ^{2}$ leads to the existence of a non-classical contribution in $ds_{\mathrm{Bures}}^{2}$. Following our previous line of reasoning, we partition our calculation into three parts.
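Before carrying out these sub-calculations, the closed form of $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{anto5}), on which they all rely, can be sanity-checked numerically against a direct exponentiation of the Hamiltonian; a minimal Python/NumPy sketch (test values arbitrary, outside the manuscript):

```python
import numpy as np

# Check that the Gibbs state exp(-beta*H)/tr(exp(-beta*H)), with
# H_SFQ = -(hbar/2)*(Delta*sigma_x + eps*sigma_z), reproduces the closed
# form of rho_SFQ in Eq. (anto5).
hbar = 1.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def rho_thermal(beta, eps, Delta):
    """exp(-beta*H)/tr(exp(-beta*H)) via the spectral decomposition of H."""
    H = -(hbar / 2.0) * (Delta * sx + eps * sz)
    w, V = np.linalg.eigh(H)
    E = (V * np.exp(-beta * w)) @ V.T
    return E / np.trace(E)

def rho_closed_form(beta, eps, Delta):
    """Eq. (anto5): (1/2)[I + (Delta*sx + eps*sz)/nu * tanh(beta*hbar*nu/2)]."""
    nu = np.hypot(eps, Delta)
    t = np.tanh(beta * hbar * nu / 2.0)
    return 0.5 * (np.eye(2) + (Delta * sx + eps * sz) / nu * t)

for beta, eps, Delta in [(0.7, 0.3, 1.1), (2.0, -0.5, 0.4)]:
    assert np.allclose(rho_thermal(beta, eps, Delta),
                       rho_closed_form(beta, eps, Delta))
```
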
In particular, since $ds_{\mathrm{Bures}}^{2}=g_{\beta\beta}\left( \beta\text{, }\epsilon\right) d\beta^{2}+g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) d\epsilon^{2}+2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon$, we focus on computing $g_{\beta\beta}\left( \beta\text{, }\epsilon\right) $, $2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) $, and $g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) $. \subsubsection{First sub-calculation} We proceed here with the first sub-calculation. We recall that $g_{\beta\beta}\left( \beta\text{, }\epsilon\right) d\beta^{2}=(1/4)\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}$. Note that $\left\langle \mathrm{H}^{2}\right\rangle $ and $\left\langle \mathrm{H}\right\rangle ^{2}$ are given by, \begin{equation} \left\langle \mathrm{H}^{2}\right\rangle =\mathrm{tr}\left( \mathrm{H}^{2}\rho\right) =\frac{\hslash^{2}}{4}\left( \epsilon^{2}+\Delta^{2}\right) \text{,} \label{jo2} \end{equation} and, \begin{equation} \left\langle \mathrm{H}\right\rangle ^{2}=\left[ \mathrm{tr}\left( \mathrm{H}\rho\right) \right] ^{2}=\frac{\hslash^{2}}{4}\left( \epsilon^{2}+\Delta^{2}\right) \tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \text{,} \label{jo3} \end{equation} respectively. Therefore, using Eqs. (\ref{jo2}) and (\ref{jo3}), $g_{\beta\beta}\left( \beta\text{, }\epsilon\right) d\beta^{2}$ becomes \begin{equation} g_{\beta\beta}\left( \beta\text{, }\epsilon\right) d\beta^{2}=\frac{\hslash^{2}}{16}\left( \epsilon^{2}+\Delta^{2}\right) \left[ 1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta^{2}\text{.} \label{J1} \end{equation} Our first sub-calculation ends with the derivation of Eq. (\ref{J1}).
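The energy variance underlying Eq. (\ref{J1}) admits a quick numerical spot-check using only the probabilities $p_{k}$ and eigenvalues $\tilde{E}_{k}$; a minimal Python sketch (test values arbitrary, outside the manuscript):

```python
import math

# Check of Eqs. (jo2), (jo3), and (J1): the energy variance of the thermal
# state equals (hbar^2/4)*(eps^2+Delta^2)*(1 - tanh^2(beta*hbar*nu/2)),
# so g_bb = Var(H)/4 reproduces the coefficient in Eq. (J1).
hbar = 1.0

def g_bb(beta, eps, Delta):
    nu = math.sqrt(eps**2 + Delta**2)
    Et = [hbar * nu / 2.0, -hbar * nu / 2.0]          # tilde-E_0, tilde-E_1
    Z = sum(math.exp(-beta * E) for E in Et)
    p = [math.exp(-beta * E) / Z for E in Et]
    mean = sum(pk * E for pk, E in zip(p, Et))        # sqrt of Eq. (jo3)
    mean_sq = sum(pk * E**2 for pk, E in zip(p, Et))  # Eq. (jo2)
    return (mean_sq - mean**2) / 4.0

for beta, eps, Delta in [(0.7, 0.3, 1.1), (2.0, -0.5, 0.4)]:
    nu2 = eps**2 + Delta**2
    t2 = math.tanh(beta * hbar * math.sqrt(nu2) / 2.0)**2
    assert math.isclose(g_bb(beta, eps, Delta),
                        hbar**2 / 16.0 * nu2 * (1.0 - t2), rel_tol=1e-12)
```
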
\subsubsection{Second sub-calculation} In our second calculation, we focus on calculating the term $2g_{\beta \epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon$ defined as \begin{equation} 2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon \overset{\text{def}}{=}\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{\epsilon }\mathrm{H}\right) _{d}\right\rangle \right] \right\} d\beta d\epsilon \text{.} \label{jo4} \end{equation} Note that $\left\langle \mathrm{H}\left( \partial_{\epsilon}\mathrm{H} \right) _{d}\right\rangle $ can be recast as \begin{align} \left\langle \mathrm{H}\left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle & =\sum_{i=0}^{1}p_{i}\tilde{E}_{i}\partial_{\epsilon }\tilde{E}_{i}=p_{0}\tilde{E}_{0}\partial_{\epsilon}\tilde{E}_{0}+p_{1} \tilde{E}_{1}\partial_{\epsilon}\tilde{E}_{1}\nonumber\\ & =p_{0}\left( \frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2}}\right) \partial_{\epsilon}\left( \frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2} }\right) +p_{1}\left( -\frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2} }\right) \partial_{\epsilon}\left( -\frac{\hslash}{2}\sqrt{\epsilon ^{2}+\Delta^{2}}\right) \nonumber\\ & =\left( p_{0}+p_{1}\right) \left( \frac{\hslash}{2}\sqrt{\epsilon ^{2}+\Delta^{2}}\right) \partial_{\epsilon}\left( \frac{\hslash}{2} \sqrt{\epsilon^{2}+\Delta^{2}}\right) \nonumber\\ & =\left( \frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2}}\right) \partial_{\epsilon}\left( \frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2} }\right) \nonumber\\ & =\frac{\hslash^{2}}{4}\epsilon\text{,} \end{align} that is, \begin{equation} \left\langle \mathrm{H}\left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle =\frac{\hslash^{2}}{4}\epsilon\text{.} \label{mas1} \end{equation} We also note that the expectation value $\left\langle \mathrm{H}\right\rangle $ of the Hamiltonian 
$\mathrm{H}$ equals \begin{equation} \left\langle \mathrm{H}\right\rangle =-\frac{\hslash}{2}\sqrt{\epsilon ^{2}+\Delta^{2}}\tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}} }{2}\right) \text{.} \label{mas2} \end{equation} Finally, the quantity $\left\langle \left( \partial_{\epsilon}\mathrm{H} \right) _{d}\right\rangle $ can be rewritten as \begin{align} \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle & =\sum_{i=0}^{1}p_{i}\partial_{\epsilon}\tilde{E}_{i}=p_{0}\partial _{\epsilon}\tilde{E}_{0}+p_{1}\partial_{\epsilon}\tilde{E}_{1}\nonumber\\ & =p_{0}\partial_{\epsilon}\left( \frac{\hslash}{2}\sqrt{\epsilon^{2} +\Delta^{2}}\right) +p_{1}\partial_{\epsilon}\left( -\frac{\hslash}{2} \sqrt{\epsilon^{2}+\Delta^{2}}\right) \nonumber\\ & =\frac{e^{-\beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}} }{\mathcal{Z}}\left( \frac{\hslash}{2}\right) \frac{\epsilon}{\sqrt {\epsilon^{2}+\Delta^{2}}}+\frac{e^{\beta\hslash\frac{\sqrt{\epsilon ^{2}+\Delta^{2}}}{2}}}{\mathcal{Z}}\left( -\frac{\hslash}{2}\right) \frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}}\nonumber\\ & =-\frac{\hslash}{2}\frac{2\sinh\left( \beta\hslash\frac{\sqrt{\epsilon ^{2}+\Delta^{2}}}{2}\right) }{2\cosh\left( \beta\hslash\frac{\sqrt {\epsilon^{2}+\Delta^{2}}}{2}\right) }\frac{\epsilon}{\sqrt{\epsilon ^{2}+\Delta^{2}}}\nonumber\\ & =-\frac{\hslash}{2}\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}} \tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \text{,} \end{align} that is, \begin{equation} \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle =-\frac{\hslash}{2}\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}}\tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \text{.} \label{mas3} \end{equation} Finally, using Eqs. (\ref{mas1}), (\ref{mas2}), and (\ref{mas3}), $2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon$ in Eq. 
(\ref{jo4}) becomes \begin{align} 2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon & =\frac{1}{4}\left\{ 2\beta\left[ \begin{array} [c]{c} \frac{\hslash^{2}}{4}\epsilon+\frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2} }\tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \cdot\\ \left( -\frac{\hslash}{2}\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}} \tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right) \end{array} \right] \right\} d\beta d\epsilon\nonumber\\ & =\frac{1}{4}\left\{ 2\beta\left[ \frac{\hslash^{2}}{4}\epsilon -\frac{\hslash^{2}}{4}\epsilon\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \right\} d\beta d\epsilon\nonumber\\ & =\frac{\hslash^{2}}{8}\beta\epsilon\left[ 1-\tanh^{2}\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta d\epsilon\text{,} \end{align} that is, \begin{equation} 2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon =\frac{\hslash^{2}}{8}\beta\epsilon\left[ 1-\tanh^{2}\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta d\epsilon\text{.} \label{J2} \end{equation} Our second sub-calculation ends with the derivation of Eq. (\ref{J2}). 
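The chain of identities in Eqs. (\ref{mas1})--(\ref{mas3}) leading to Eq. (\ref{J2}) can be spot-checked numerically as well; a minimal Python sketch built from the probabilities $p_{k}$ and eigenvalues $\tilde{E}_{k}$ (test values arbitrary, outside the manuscript):

```python
import math

# Check of Eq. (J2): the mixed coefficient 2*g_be, assembled from
# <H (dH)_d> (Eq. (mas1)), <H> (Eq. (mas2)), and <(dH)_d> (Eq. (mas3)),
# matches (hbar^2/8)*beta*eps*(1 - tanh^2(beta*hbar*nu/2)).
hbar = 1.0

def two_g_be(beta, eps, Delta):
    nu = math.sqrt(eps**2 + Delta**2)
    Et = [hbar * nu / 2.0, -hbar * nu / 2.0]                 # tilde-E_k
    dEt = [hbar * eps / (2.0 * nu), -hbar * eps / (2.0 * nu)]  # d(tilde-E_k)/d(eps)
    Z = sum(math.exp(-beta * E) for E in Et)
    p = [math.exp(-beta * E) / Z for E in Et]
    HdH = sum(pk * E * dE for pk, E, dE in zip(p, Et, dEt))  # Eq. (mas1)
    H_mean = sum(pk * E for pk, E in zip(p, Et))             # Eq. (mas2)
    dH_mean = sum(pk * dE for pk, dE in zip(p, dEt))         # Eq. (mas3)
    return 2.0 * beta / 4.0 * (HdH - H_mean * dH_mean)

def j2_formula(beta, eps, Delta):
    nu = math.sqrt(eps**2 + Delta**2)
    return hbar**2 / 8.0 * beta * eps * (1.0 - math.tanh(beta * hbar * nu / 2.0)**2)

for beta, eps, Delta in [(0.7, 0.3, 1.1), (2.0, -0.5, 0.4)]:
    assert math.isclose(two_g_be(beta, eps, Delta), j2_formula(beta, eps, Delta))
```
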
\subsubsection{Third sub-calculation} In what follows, we focus on the calculation of $g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) d\epsilon^{2}$, \begin{equation} \ g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) d\epsilon ^{2}\ \overset{\text{def}}{=}\frac{1}{4}\left\{ \beta^{2}\left\{ \left\langle \left[ \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{\epsilon }\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} +2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{\epsilon}\mathrm{H}|m\right\rangle }{\tilde {E}_{n}-\tilde{E}_{m}}\right\vert ^{2}\frac{\left( e^{-\beta\tilde{E}_{n} }-e^{-\beta\tilde{E}_{m}}\right) ^{2}}{\mathcal{Z}\cdot\left( e^{-\beta \tilde{E}_{n}}+e^{-\beta\tilde{E}_{m}}\right) }\right\} d\epsilon ^{2}\text{.} \label{zar1} \end{equation} Let us recall that $\left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle $ is given in Eq. (\ref{mas3}). Therefore, we get \begin{equation} \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle ^{2}=\frac{\hslash^{2}}{4}\frac{\epsilon^{2}}{\epsilon^{2}+\Delta^{2}} \tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \text{.} \label{zar2} \end{equation} Moreover, $\left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d} ^{2}\right\rangle $ is given by \begin{align} \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}^{2} \right\rangle & =\sum_{i=0}^{1}p_{i}\left( \partial_{\epsilon}\tilde{E} _{i}\right) ^{2}=p_{0}\left( \partial_{\epsilon}\tilde{E}_{0}\right) ^{2}+p_{1}\left( \partial_{\epsilon}\tilde{E}_{1}\right) ^{2}\nonumber\\ & =p_{0}\frac{\hslash^{2}}{4}\frac{\epsilon^{2}}{\epsilon^{2}+\Delta^{2} }+p_{1}\frac{\hslash^{2}}{4}\frac{\epsilon^{2}}{\epsilon^{2}+\Delta^{2} }\nonumber\\ & =\left( p_{0}+p_{1}\right) \frac{\hslash^{2}}{4}\frac{\epsilon^{2} }{\epsilon^{2}+\Delta^{2}}\nonumber\\ & 
=\frac{\hslash^{2}}{4}\frac{\epsilon^{2}}{\epsilon^{2}+\Delta^{2}}\text{,} \end{align} that is, \begin{equation} \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}^{2} \right\rangle =\frac{\hslash^{2}}{4}\frac{\epsilon^{2}}{\epsilon^{2} +\Delta^{2}}\text{.} \label{zar3} \end{equation} Finally, let us focus on the term in Eq. (\ref{zar1}) given by \begin{align} 2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{\epsilon} \mathrm{H}|m\right\rangle }{\tilde{E}_{n}-\tilde{E}_{m}}\right\vert ^{2} \frac{\left( e^{-\beta\tilde{E}_{n}}-e^{-\beta\tilde{E}_{m}}\right) ^{2} }{\mathcal{Z}\cdot\left( e^{-\beta\tilde{E}_{n}}+e^{-\beta\tilde{E}_{m} }\right) } & =\frac{2}{\mathcal{Z}}\left\vert \frac{\left\langle n_{0}|\partial_{\epsilon}\mathrm{H}|n_{1}\right\rangle }{\tilde{E}_{0} -\tilde{E}_{1}}\right\vert ^{2}\frac{\left( e^{-\beta\tilde{E}_{0}} -e^{-\beta\tilde{E}_{1}}\right) ^{2}}{\left( e^{-\beta\tilde{E}_{0} }+e^{-\beta\tilde{E}_{1}}\right) }+\nonumber\\ & +\frac{2}{\mathcal{Z}}\left\vert \frac{\left\langle n_{1}|\partial _{\epsilon}\mathrm{H}|n_{0}\right\rangle }{\tilde{E}_{1}-\tilde{E}_{0} }\right\vert ^{2}\frac{\left( e^{-\beta\tilde{E}_{1}}-e^{-\beta\tilde{E}_{0} }\right) ^{2}}{\left( e^{-\beta\tilde{E}_{0}}+e^{-\beta\tilde{E}_{1} }\right) }\nonumber\\ & =\frac{2}{\mathcal{Z}}\frac{\left( e^{-\beta\tilde{E}_{0}}-e^{-\beta \tilde{E}_{1}}\right) ^{2}}{\left( e^{-\beta\tilde{E}_{0}}+e^{-\beta \tilde{E}_{1}}\right) }\frac{\left( \left\vert \left\langle n_{0} |\partial_{\epsilon}\mathrm{H}|n_{1}\right\rangle \right\vert ^{2}+\left\vert \left\langle n_{1}|\partial_{\epsilon}\mathrm{H}|n_{0}\right\rangle \right\vert ^{2}\right) }{\left\vert \tilde{E}_{0}-\tilde{E}_{1}\right\vert ^{2}}\nonumber\\ & =2\frac{\left( e^{-\beta\tilde{E}_{0}}-e^{-\beta\tilde{E}_{1}}\right) ^{2}}{\left( e^{-\beta\tilde{E}_{0}}+e^{-\beta\tilde{E}_{1}}\right) ^{2} }\frac{\frac{\hslash^{2}}{4}\frac{\Delta^{2}}{\Delta^{2}+\epsilon^{2}} 
+\frac{\hslash^{2}}{4}\frac{\Delta^{2}}{\Delta^{2}+\epsilon^{2}}}{\hslash ^{2}\left( \epsilon^{2}+\Delta^{2}\right) }\nonumber\\ & =\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}\tanh ^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \text{,} \label{alsing} \end{align} that is, \begin{equation} 2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{\epsilon} \mathrm{H}|m\right\rangle }{\tilde{E}_{n}-\tilde{E}_{m}}\right\vert ^{2} \frac{\left( e^{-\beta\tilde{E}_{n}}-e^{-\beta\tilde{E}_{m}}\right) ^{2} }{\mathcal{Z}\cdot\left( e^{-\beta\tilde{E}_{n}}+e^{-\beta\tilde{E}_{m} }\right) }d\epsilon^{2}=\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon ^{2}\right) ^{2}}\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2} +\Delta^{2}}}{2}\right) d\epsilon^{2}\text{.} \label{zar4} \end{equation} For clarity, note that $\partial_{\epsilon}\mathrm{H}$ in Eq. (\ref{zar4}) equals $\left( -\hslash/2\right) \sigma_{z}$ in the standard computational basis $\left\{ \left\vert 0\right\rangle \text{, }\left\vert 1\right\rangle \right\} $. Therefore, combining Eqs. (\ref{zar2}), (\ref{zar3}), and (\ref{zar4}) we get that $g_{\epsilon\epsilon}\left( \beta\text{, } \epsilon\right) d\epsilon^{2}$ in Eq. (\ref{zar1}) equals \begin{equation} g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) d\epsilon^{2} =\frac{\hslash^{2}}{16}\left\{ \frac{4\Delta^{2}}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) +\frac{\beta^{2}\epsilon^{2} }{\epsilon^{2}+\Delta^{2}}\left[ 1-\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \right\} d\epsilon ^{2}\text{.} \label{J3} \end{equation} Then, using Eqs. (\ref{J1}), (\ref{J2}), and (\ref{J3}), $ds_{\mathrm{Bures} }^{2}$ in Eq. 
(\ref{zarrillo}) becomes \begin{align} ds_{\mathrm{Bures}}^{2} & =\frac{\hslash^{2}}{16}\left( \epsilon^{2} +\Delta^{2}\right) \left[ 1-\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta^{2}+\nonumber\\ & +\frac{\hslash^{2}}{8}\beta\epsilon\left[ 1-\tanh^{2}\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta d\epsilon+\nonumber\\ & +\frac{\hslash^{2}}{16}\left\{ \frac{4\Delta^{2}}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) +\frac{\beta^{2}\epsilon^{2} }{\epsilon^{2}+\Delta^{2}}\left[ 1-\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \right\} d\epsilon ^{2}\text{.} \label{f4} \end{align} Finally, using Eq. (\ref{f4}), the Bures metric tensor in the case of a superconducting flux qubit becomes \begin{equation} g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\epsilon\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \left( \begin{array} [c]{cc} \epsilon^{2}+\Delta^{2} & \beta\epsilon\\ \beta\epsilon & \frac{\beta^{2}\epsilon^{2}}{\epsilon^{2}+\Delta^{2}} +\frac{4\Delta^{2}}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2} }\frac{\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}} {2}\right) }{1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2} +\Delta^{2}}}{2}\right) } \end{array} \right) \text{,} \label{chetu1} \end{equation} with $1\leq i$, $j\leq2$. The derivation of Eqs. (\ref{f4}) and (\ref{chetu1}) completes our calculation of the Bures metric structure for a superconducting flux qubit. \subsubsection{The Sj\"{o}qvist metric} Let us observe that the Sj\"{o}qvist metric in Eq. 
(\ref{vetta}) can be rewritten in our case as \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\overset{\text{def}}{=}\frac{1}{4}\sum_{k=0}^{1}\frac{dp_{k}^{2}}{p_{k}}+\sum_{k=0}^{1}p_{k}ds_{k}^{2}\text{,} \label{cami0} \end{equation} where $ds_{k}^{2}\overset{\text{def}}{=}\left[ \left\langle dn_{k}|\left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) |dn_{k}\right\rangle \right] $ and $\left\langle n_{k}\left\vert n_{k^{\prime}}\right. \right\rangle =\delta_{kk^{\prime}}$. From Eq. (\ref{anto444}), the states $\left\vert dn_{0}\right\rangle $ and $\left\vert dn_{1}\right\rangle $ become \begin{equation} \left\vert dn_{0}\right\rangle \overset{\text{def}}{=}\frac{1}{\sqrt{2}}\left( \begin{array} [c]{c} \frac{1}{2}\frac{\Delta^{2}}{\sqrt{\epsilon^{2}+\Delta^{2}}}\frac{\sqrt{\epsilon^{2}+\Delta^{2}}-\epsilon}{\left( \epsilon^{2}+\Delta^{2}-\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}\right) ^{3/2}}d\epsilon\\ \frac{1}{2}\frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}}}\frac{\left( \sqrt{\epsilon^{2}+\Delta^{2}}-\epsilon\right) ^{2}}{\left( \epsilon^{2}+\Delta^{2}-\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}\right) ^{3/2}}d\epsilon \end{array} \right) \text{, and }\left\vert dn_{1}\right\rangle \overset{\text{def}}{=}\frac{1}{\sqrt{2}}\left( \begin{array} [c]{c} \frac{1}{2}\frac{\Delta^{2}}{\sqrt{\epsilon^{2}+\Delta^{2}}}\frac{\epsilon+\sqrt{\epsilon^{2}+\Delta^{2}}}{\left( \epsilon^{2}+\Delta^{2}+\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}\right) ^{3/2}}d\epsilon\\ -\frac{1}{2}\frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}}}\frac{\left( \epsilon+\sqrt{\epsilon^{2}+\Delta^{2}}\right) ^{2}}{\left( \epsilon^{2}+\Delta^{2}+\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}\right) ^{3/2}}d\epsilon \end{array} \right) \text{,} \label{cami1} \end{equation} respectively. Eqs. (\ref{anto444}) and (\ref{cami1}) will be used to calculate the nonclassical contribution that appears in the Sj\"{o}qvist metric in Eq. (\ref{cami0}).
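The differentials $\left\vert dn_{k}\right\rangle $ can be cross-checked by central finite differences of the eigenvectors in Eq. (\ref{anto444}); a minimal Python sketch (the analytic derivatives implemented below are $d\left\vert n_{k}\right\rangle /d\epsilon$ computed directly from Eq. (\ref{anto444}); note the single power of $\Delta$ in the second components; test values arbitrary):

```python
import math

# Finite-difference check of the epsilon-derivatives of the eigenvectors
# |n_k> of Eq. (anto444), i.e. of the coefficients of d(epsilon) in |dn_k>.
def n_vec(eps, Delta, k):
    """|n_k> of Eq. (anto444); k = 0 or 1."""
    nu = math.sqrt(eps**2 + Delta**2)
    sgn = -1.0 if k == 0 else 1.0
    d = math.sqrt(eps**2 + Delta**2 + sgn * eps * nu)
    return [(eps + sgn * nu) / (math.sqrt(2.0) * d),
            Delta / (math.sqrt(2.0) * d)]

def dn_vec(eps, Delta, k):
    """Analytic d|n_k>/d(epsilon), differentiated from Eq. (anto444)."""
    nu = math.sqrt(eps**2 + Delta**2)
    sgn = -1.0 if k == 0 else 1.0
    s = eps**2 + Delta**2 + sgn * eps * nu
    pref = 0.5 / (math.sqrt(2.0) * nu * s**1.5)
    return [pref * Delta**2 * (nu + sgn * eps),
            -sgn * pref * Delta * (nu + sgn * eps)**2]

h = 1e-6
for eps, Delta in [(0.3, 1.1), (-0.5, 0.4)]:
    for k in (0, 1):
        fd = [(a - b) / (2.0 * h)
              for a, b in zip(n_vec(eps + h, Delta, k),
                              n_vec(eps - h, Delta, k))]
        an = dn_vec(eps, Delta, k)
        assert all(math.isclose(x, y, rel_tol=1e-6, abs_tol=1e-9)
                   for x, y in zip(fd, an))
```
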
In what follows, however, let us consider the classical contribution $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \mathrm{classical}\right) }$ in Eq. (\ref{cami0}). We note that $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \mathrm{classical}\right) }$ equals \begin{align} \frac{1}{4}\sum_{k=0}^{1}\frac{dp_{k}^{2}}{p_{k}} & =\frac{1}{4}\frac{dp_{0}^{2}}{p_{0}}+\frac{1}{4}\frac{dp_{1}^{2}}{p_{1}}\nonumber\\ & =\left[ \frac{\left( \partial_{\beta}p_{0}\right) ^{2}}{4p_{0}}+\frac{\left( \partial_{\beta}p_{1}\right) ^{2}}{4p_{1}}\right] d\beta^{2}+\left[ \frac{\left( \partial_{\epsilon}p_{0}\right) ^{2}}{4p_{0}}+\frac{\left( \partial_{\epsilon}p_{1}\right) ^{2}}{4p_{1}}\right] d\epsilon^{2}+\left[ \frac{2\partial_{\beta}p_{0}\partial_{\epsilon}p_{0}}{4p_{0}}+\frac{2\partial_{\beta}p_{1}\partial_{\epsilon}p_{1}}{4p_{1}}\right] d\beta d\epsilon\text{.} \label{q2} \end{align} Using Eq. (\ref{sorry}), $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \mathrm{classical}\right) }$ in Eq. (\ref{q2}) reduces to \begin{align} \left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \mathrm{classical}\right) } & =\frac{1}{4}\sum_{k=0}^{1}\frac{dp_{k}^{2}}{p_{k}}\nonumber\\ & =\frac{\hslash^{2}}{16}\left( \epsilon^{2}+\Delta^{2}\right) \left[ 1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta^{2}+\nonumber\\ & +\frac{\hslash^{2}}{16}\frac{\beta^{2}\epsilon^{2}}{\epsilon^{2}+\Delta^{2}}\left[ 1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\epsilon^{2}+\nonumber\\ & +\frac{\hslash^{2}}{8}\beta\epsilon\left[ 1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta d\epsilon\text{.} \label{classical} \end{align} We can now return our focus to the nonclassical contribution $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum}}\right) }$ that specifies the Sj\"{o}qvist metric.
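The three coefficients in Eq. (\ref{classical}) can be recovered numerically from Eq. (\ref{q2}) by finite-differencing the probabilities $p_{k}$ of Eq. (\ref{sorry}); a minimal Python sketch (test point and step size arbitrary, outside the manuscript):

```python
import math

# Check of Eq. (classical): the Fisher-Rao part (1/4)*sum_k dp_k^2/p_k,
# with p_k from Eq. (sorry) and partial derivatives taken by central
# finite differences, reproduces the three displayed coefficients.
hbar = 1.0

def p0(beta, eps, Delta):
    nu = math.sqrt(eps**2 + Delta**2)
    return 0.5 * (1.0 - math.tanh(beta * hbar * nu / 2.0))

def classical_coeffs(beta, eps, Delta, h=1e-6):
    def partials(f):
        db = (f(beta + h, eps) - f(beta - h, eps)) / (2.0 * h)
        de = (f(beta, eps + h) - f(beta, eps - h)) / (2.0 * h)
        return db, de
    P0 = p0(beta, eps, Delta)
    P1 = 1.0 - P0
    d0b, d0e = partials(lambda b, e: p0(b, e, Delta))
    d1b, d1e = -d0b, -d0e                      # since p_1 = 1 - p_0
    gbb = d0b**2 / (4.0 * P0) + d1b**2 / (4.0 * P1)
    gee = d0e**2 / (4.0 * P0) + d1e**2 / (4.0 * P1)
    gbe = 2.0 * d0b * d0e / (4.0 * P0) + 2.0 * d1b * d1e / (4.0 * P1)
    return gbb, gee, gbe

beta, eps, Delta = 0.9, 0.4, 0.7
nu = math.sqrt(eps**2 + Delta**2)
f = 1.0 - math.tanh(beta * hbar * nu / 2.0)**2
gbb, gee, gbe = classical_coeffs(beta, eps, Delta)
assert math.isclose(gbb, hbar**2 / 16.0 * nu**2 * f, rel_tol=1e-6)
assert math.isclose(gee, hbar**2 / 16.0 * beta**2 * eps**2 / nu**2 * f, rel_tol=1e-6)
assert math.isclose(gbe, hbar**2 / 8.0 * beta * eps * f, rel_tol=1e-6)
```
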
We have \begin{align} \left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum}}\right) } & =\sum_{k=0}^{1}p_{k}\left\langle dn_{k}|\left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) |dn_{k}\right\rangle \nonumber\\ & =p_{0}\left\langle dn_{0}|dn_{0}\right\rangle -p_{0}\left\vert \left\langle dn_{0}|n_{0}\right\rangle \right\vert ^{2}+p_{1}\left\langle dn_{1} |dn_{1}\right\rangle -p_{1}\left\vert \left\langle dn_{1}|n_{1}\right\rangle \right\vert ^{2}\text{.} \end{align} A simple check allows us to verify that $\left\langle dn_{0}|n_{0} \right\rangle =0$ and $\left\langle dn_{1}|n_{1}\right\rangle =0$. Therefore, $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum}}\right) }$ becomes \begin{align} \left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum}}\right) } & =p_{0}\left\langle dn_{0}|dn_{0} \right\rangle +p_{1}\left\langle dn_{1}|dn_{1}\right\rangle \nonumber\\ & =p_{0}\frac{1}{8}\frac{\Delta^{2}}{\Delta^{2}+\epsilon^{2}}\left( \epsilon-\sqrt{\Delta^{2}+\epsilon^{2}}\right) ^{2}\frac{2\Delta ^{2}+2\epsilon^{2}-2\epsilon\sqrt{\Delta^{2}+\epsilon^{2}}}{\left( \Delta ^{2}+\epsilon^{2}-\epsilon\sqrt{\Delta^{2}+\epsilon^{2}}\right) ^{3} }d\epsilon^{2}+\nonumber\\ & +p_{1}\frac{1}{8}\frac{\Delta^{2}}{\Delta^{2}+\epsilon^{2}}\left( \epsilon+\sqrt{\Delta^{2}+\epsilon^{2}}\right) ^{2}\frac{2\Delta ^{2}+2\epsilon^{2}+2\epsilon\sqrt{\Delta^{2}+\epsilon^{2}}}{\left( \Delta ^{2}+\epsilon^{2}+\epsilon\sqrt{\Delta^{2}+\epsilon^{2}}\right) ^{3} }d\epsilon^{2}\nonumber\\ & =\frac{1}{4}\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2} }d\epsilon^{2}\nonumber\\ & =\frac{\hslash^{2}}{16}\frac{4\Delta^{2}}{\hslash^{2}\left( \Delta ^{2}+\epsilon^{2}\right) ^{2}}d\epsilon^{2}\text{,} \end{align} that is, \begin{equation} \left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum}}\right) }=\frac{\hslash^{2}}{16}\frac{4\Delta^{2} 
}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}d\epsilon ^{2}\text{.} \label{quantum} \end{equation} Finally, combining Eqs. (\ref{classical}) and (\ref{quantum}), the Sj\"{o}qvist metric $ds_{\mathrm{Sj\ddot{o}qvist}}^{2}$ becomes \begin{align} ds_{\mathrm{Sj\ddot{o}qvist}}^{2} & =\frac{\hslash^{2}}{16}\left( \epsilon^{2}+\Delta^{2}\right) \left[ 1-\tanh^{2}\left( \beta\hslash \frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta ^{2}+\nonumber\\ & +\frac{\hslash^{2}}{8}\beta\epsilon\left[ 1-\tanh^{2}\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta d\epsilon+\nonumber\\ & +\frac{\hslash^{2}}{16}\left\{ \frac{4\Delta^{2}}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}+\frac{\beta^{2}\epsilon^{2}} {\epsilon^{2}+\Delta^{2}}\left[ 1-\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \right\} d\epsilon ^{2}\text{.} \label{f3} \end{align} The metric tensor $g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, }\epsilon\right) $ from Eq. (\ref{f3}) is given by \begin{equation} g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, }\epsilon\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \left( \begin{array} [c]{cc} \epsilon^{2}+\Delta^{2} & \beta\epsilon\\ \beta\epsilon & \frac{\beta^{2}\epsilon^{2}}{\epsilon^{2}+\Delta^{2}} +\frac{4\Delta^{2}}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2} }\frac{1}{1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}} }{2}\right) } \end{array} \right) \text{,} \label{chetu} \end{equation} with $1\leq i$, $j\leq2$. The derivation of Eqs. (\ref{f3}) and (\ref{chetu}) completes our calculation of the Sj\"{o}qvist metric structure for superconducting flux qubits. 
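The value of $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum}}\right) }$ in Eq. (\ref{quantum}) can be verified independently by finite-differencing the explicit eigenvectors of Eq. (\ref{anto444}); a minimal Python sketch, using $\tilde{E}_{0}=+\hslash\nu/2$ paired with $\left\vert n_{0}\right\rangle $ as in Eq. (\ref{sorry}) (test values arbitrary, outside the manuscript):

```python
import math

# Check that p_0 <dn_0|dn_0> + p_1 <dn_1|dn_1> (with the <dn_k|n_k> pieces
# projected out, which vanish here) equals (1/4)*Delta^2/(Delta^2+eps^2)^2
# per unit d(epsilon)^2, as in Eq. (quantum).
hbar = 1.0

def n_vec(eps, Delta, k):
    """|n_k> of Eq. (anto444); k = 0 or 1."""
    nu = math.sqrt(eps**2 + Delta**2)
    sgn = -1.0 if k == 0 else 1.0
    d = math.sqrt(eps**2 + Delta**2 + sgn * eps * nu)
    return [(eps + sgn * nu) / (math.sqrt(2.0) * d),
            Delta / (math.sqrt(2.0) * d)]

def quantum_term(beta, eps, Delta, h=1e-6):
    nu = math.sqrt(eps**2 + Delta**2)
    Z = 2.0 * math.cosh(beta * hbar * nu / 2.0)
    p = [math.exp(-beta * hbar * nu / 2.0) / Z,     # p_0, Eq. (sorry)
         math.exp(+beta * hbar * nu / 2.0) / Z]     # p_1, Eq. (sorry)
    total = 0.0
    for k in (0, 1):
        nk_p, nk_m = n_vec(eps + h, Delta, k), n_vec(eps - h, Delta, k)
        dn = [(a - b) / (2.0 * h) for a, b in zip(nk_p, nk_m)]
        n = n_vec(eps, Delta, k)
        overlap = sum(a * b for a, b in zip(dn, n))  # <dn_k|n_k>, ~0
        total += p[k] * (sum(a * a for a in dn) - overlap**2)
    return total

for beta, eps, Delta in [(0.7, 0.3, 1.1), (2.0, -0.5, 0.4)]:
    target = 0.25 * Delta**2 / (Delta**2 + eps**2)**2
    assert math.isclose(quantum_term(beta, eps, Delta), target, rel_tol=1e-6)
```
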
\section{Considerations from the comparative analysis} In this section, we discuss the outcomes of the comparative analysis carried out in Section V concerning the Bures and Sj\"{o}qvist metrics for the spin qubit and superconducting flux qubit Hamiltonian models presented in Section IV. \subsection{The asymptotic limit of $\beta$ approaching infinity} We begin by discussing the asymptotic limit of $\beta$ approaching infinity. In the case of a spin qubit with Hamiltonian \textrm{H}$_{\mathrm{SQ}}\left( \omega\right) $ in Eq. (\ref{spinH}), the density matrix $\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{ro1}) approaches $\rho_{\mathrm{SQ}}^{\beta\rightarrow\infty}\left( \omega\right) \overset{\text{def}}{=}\left\vert 1\right\rangle \left\langle 1\right\vert $ as $\beta\rightarrow\infty$. Observe that $\left\vert 1\right\rangle $ denotes here the ground state, the state of lowest energy $-\hslash\omega/2$. Since $\rho_{\mathrm{SQ}}^{\beta\rightarrow\infty}\left( \omega\right) $ is a constant in $\omega$, the Bures and Sj\"{o}qvist metrics in Eqs. (\ref{07}) and (\ref{f0}), respectively, both vanish in this limiting scenario. In this regard, the case of the superconducting flux qubit specified by the Hamiltonian \textrm{H}$_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) $ in Eq. (\ref{superH}) is more interesting. Indeed, in this case the density matrix $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{anto5}) approaches $\rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) $ when $\beta$ approaches infinity.
The quantity $\rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) $ represents a pure (ground) state of lowest energy $\left( -\hslash/2\right) \sqrt{\Delta^{2}+\epsilon^{2}}$ and is given by \begin{equation} \rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) =\frac {1}{2}\left( \begin{array} [c]{cc} 1+\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}} & \frac{\Delta} {\sqrt{\epsilon^{2}+\Delta^{2}}}\\ \frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}}} & 1-\frac{\epsilon} {\sqrt{\epsilon^{2}+\Delta^{2}}} \end{array} \right) \text{,} \end{equation} with $\rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) =(\rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) )^{2}$ and \textrm{tr}$\left( \rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) \right) =1$. Furthermore, when $\beta\rightarrow\infty$, the Bures and Sj\"{o}qvist metrics in Eqs. (\ref{f4}) and (\ref{f3}), respectively, reduce to the same expression \begin{equation} ds_{\mathrm{Bures}}^{2}\overset{\beta\rightarrow\infty}{\rightarrow}\frac {1}{4}\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2}} d\epsilon^{2}\text{, and }ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\overset{\beta \rightarrow\infty}{\rightarrow}\frac{1}{4}\frac{\Delta^{2}}{\left( \Delta ^{2}+\epsilon^{2}\right) ^{2}}d\epsilon^{2}\text{.} \label{betalim} \end{equation} The limiting expressions assumed by the Bures and Sj\"{o}qvist metrics in Eq. (\ref{betalim}) are, modulo an unimportant constant factor, the Fubini-Study metric $ds_{\mathrm{FS}}^{2}$ for pure states. 
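Before making the Fubini-Study connection explicit, the limiting behavior in Eq. (\ref{betalim}) can be illustrated numerically; a minimal Python/NumPy sketch (large-$\beta$ value and test point arbitrary, outside the manuscript):

```python
import numpy as np

# Check of the beta -> infinity limit: rho_SFQ tends to a rank-one
# projector, and the epsilon-epsilon components of the Bures (chetu1) and
# Sjoqvist (chetu) metric tensors both approach Delta^2/(4*(Delta^2+eps^2)^2).
hbar = 1.0
eps, Delta, beta = 0.4, 0.8, 200.0
nu = np.hypot(eps, Delta)
t2 = np.tanh(beta * hbar * nu / 2.0)**2

rho_inf = 0.5 * np.array([[1.0 + eps / nu, Delta / nu],
                          [Delta / nu, 1.0 - eps / nu]])
assert np.allclose(rho_inf @ rho_inf, rho_inf)   # pure (ground) state
assert np.isclose(np.trace(rho_inf), 1.0)

limit = Delta**2 / (4.0 * (Delta**2 + eps**2)**2)
classical = hbar**2 / 16.0 * (1.0 - t2) * beta**2 * eps**2 / nu**2
gee_bures = classical + limit * t2               # g_ee of Eq. (chetu1)
gee_sjoqvist = classical + limit                 # g_ee of Eq. (chetu)
assert np.isclose(gee_bures, limit, rtol=1e-6)
assert np.isclose(gee_sjoqvist, limit, rtol=1e-6)
```
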
Indeed, we have \begin{equation} ds_{\mathrm{FS}}^{2}\overset{\text{def}}{=}\mathrm{tr}\left[ \left( \frac{\partial\rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) }{\partial\epsilon}\right) ^{2}\right] d\epsilon^{2}=\frac{1}{2}\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}d\epsilon^{2}\text{.} \label{FSlimit} \end{equation} In the next subsection, we discuss the discrepancy in the Bures (Eqs. (\ref{07}) and (\ref{f4})) and Sj\"{o}qvist (Eqs. (\ref{f0}) and (\ref{f3})) metrics emerging from the different nature of the nonclassical contributions in the two metrics. \subsection{The metrics discrepancy} \begin{figure} \caption{Plot of the metric discrepancy $\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) $ between the Sj\"{o}qvist and Bures metrics for the superconducting flux qubit Hamiltonian model.} \end{figure} We begin by noting that in the case of the spin qubit Hamiltonian model in Eq. (\ref{spinH}), there is no discrepancy since the Bures and the Sj\"{o}qvist metrics in Eqs. (\ref{07}) and (\ref{f0}), respectively, coincide. Indeed, in this case, both metrics reduce to the classical Fisher-Rao information metric. The nonclassical/quantum terms vanish in both metrics due to the commutativity of $\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) $ and $\left( \rho_{\mathrm{SQ}}+d\rho_{\mathrm{SQ}}\right) \left( \beta\text{, }\omega\right) $, with $\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{ro1}). In the case of the superconducting flux qubit Hamiltonian model in Eq. (\ref{superH}), instead, the nonclassical/quantum terms vanish in neither the Bures nor the Sj\"{o}qvist metrics due to the non-commutativity of $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ and $\left( \rho_{\mathrm{SFQ}}+d\rho_{\mathrm{SFQ}}\right) \left( \beta\text{, }\epsilon\right) $, with $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{anto5}). However, these nonclassical contributions differ in the two metrics and this leads to a discrepancy in the Bures and Sj\"{o}qvist metrics in Eqs.
(\ref{f4}) and (\ref{f3}), respectively. More specifically, we have \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}-ds_{\mathrm{Bures}}^{2}=\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) d\epsilon^{2}\geq0\text{,} \label{dis1} \end{equation} for any $\beta$ and $\epsilon$. Note that $\Delta g_{\epsilon\epsilon }^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) \overset{\text{def} }{=}g_{\epsilon\epsilon}^{\mathrm{nc,}\text{ }\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, }\epsilon\right) -g_{\epsilon\epsilon}^{\mathrm{nc,}\text{ }\mathrm{Bures}}\left( \beta\text{, }\epsilon\right) $ is the difference between the nonclassical (\textrm{nc}) contributions in the metric components $g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) \overset{\text{def}}{=}g_{\epsilon\epsilon}^{\mathrm{c}}\left( \beta\text{, }\epsilon\right) +g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) $ and is given by \begin{equation} \Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon \right) \overset{\text{def}}{=}\frac{1}{4}\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}\left[ 1-\tanh^{2}\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \text{,} \label{discrepancy} \end{equation} with $0\leq$ $\tanh^{2}\left( x\right) \leq1$ for any $x\in \mathbb{R} $. 
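As a numerical cross-check of the two properties just stated (a sketch of ours, not part of the paper; we set $\hslash=1$ and pick an arbitrary parameter grid), the nonnegativity of $\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}$ and its vanishing as $\beta\rightarrow\infty$ can be verified directly from Eq. (\ref{discrepancy}):

```python
import numpy as np

HBAR = 1.0  # units with hbar = 1 (our choice, for illustration only)

def delta_g_nc(beta, eps, delta):
    # metric discrepancy of Eq. (discrepancy):
    # (1/4) Delta^2/(Delta^2+eps^2)^2 * [1 - tanh^2(beta*hbar*sqrt(eps^2+Delta^2)/2)]
    r = np.sqrt(eps**2 + delta**2)
    return (0.25 * delta**2 / (delta**2 + eps**2)**2
            * (1.0 - np.tanh(beta * HBAR * r / 2.0)**2))

delta = 1.0
betas = np.linspace(0.1, 20.0, 200)
epss = np.linspace(-5.0, 5.0, 201)
B, E = np.meshgrid(betas, epss)

# ds^2_Sjoqvist - ds^2_Bures >= 0 for any beta and eps ...
assert np.all(delta_g_nc(B, E, delta) >= 0.0)
# ... and the discrepancy vanishes in the beta -> infinity limit
assert np.all(delta_g_nc(1e3, epss, delta) < 1e-12)
```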
To make this point explicit, it is useful to recast the metric tensor $g_{ij}\left( \beta\text{, }\epsilon\right) $ with $1\leq i$, $j\leq2$ (i.e., $\theta ^{1}\overset{\text{def}}{=}\beta$ and $\theta^{2}\overset{\text{def} }{=}\epsilon$) as \begin{equation} g_{ij}\left( \beta\text{, }\epsilon\right) =\left( \begin{array} [c]{cc} g_{\beta\beta}\left( \beta\text{, }\epsilon\right) & g_{\beta\epsilon }\left( \beta\text{, }\epsilon\right) \\ g_{\epsilon\beta}\left( \beta\text{, }\epsilon\right) & g_{\epsilon\epsilon }^{\mathrm{c}}\left( \beta\text{, }\epsilon\right) +g_{\epsilon\epsilon }^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) \end{array} \right) \text{.} \end{equation} The discrepancy between the Bures and Sj\"{o}qvist metrics arises only because $g_{\epsilon\epsilon}^{\mathrm{nc,}\text{ }\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, }\epsilon\right) \neq g_{\epsilon\epsilon}^{\mathrm{nc,}\text{ }\mathrm{Bures}}\left( \beta\text{, }\epsilon\right) $. However, the metric discrepancy $\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{discrepancy}) vanishes in the asymptotic limit of $\beta$ approaching infinity. In Fig. $1$, we plot the discrepancy between the Bures and the Sj\"{o}qvist metrics for the superconducting flux qubit Hamiltonian model. 
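For completeness, the vanishing of the metric discrepancy at zero temperature can be seen directly from Eq. (\ref{discrepancy}), since $1-\tanh^{2}\left( x\right) =\operatorname{sech}^{2}\left( x\right) \rightarrow0$ as $x\rightarrow\infty$,
\begin{equation}
\lim_{\beta\rightarrow\infty}\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) =\frac{1}{4}\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}\lim_{\beta\rightarrow\infty}\operatorname{sech}^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) =0\text{,}
\end{equation}
in agreement with the fact that both metrics reduce to the Fubini-Study metric in this limit.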
In Table I, instead, we summarize the links between the Bures and the Sj\"{o}qvist metrics.\begin{table}[t] \centering \begin{tabular} [c]{c|c|c|c}\hline\hline \textbf{Description of quantum states} & \textbf{Quantum states} & \textbf{Bures metric} & \textbf{Sj\"{o}qvist metric}\\\hline Pure & $\rho=\rho^{2}$ & Fubini-Study metric & Fubini-Study metric\\ Mixed, classical scenario & $\rho\neq\rho^{2}$, $\left[ \rho\text{, } \rho+d\rho\right] =0$ & Fisher-Rao metric & Fisher-Rao metric\\ Mixed, nonclassical scenario & $\rho\neq\rho^{2}$, $\left[ \rho\text{, } \rho+d\rho\right] \neq0$ & $ds_{\mathrm{Bures}}^{2}\neq ds_{\mathrm{Sj\ddot {o}qvist}}^{2}$ & $ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\neq ds_{\mathrm{Bures} }^{2}$\\\hline \end{tabular} \caption{Relation between the Bures and the Sj\"{o}qvist metrics. The Bures and the Sj\"{o}qvist metrics are identical when considering pure quantum states $\left( \rho=\rho^{2}\right) $ or mixed quantum states $\left( \rho\neq\rho^{2}\right) $ for which the non-commutative probabilistic structure underlying quantum theory is not visible (i.e., in the classical scenario with $\left[ \rho\text{, }\rho+d\rho\right] =0$). In particular, in the former and latter cases, they become the Fubini-Study and the Fisher-Rao information metrics, respectively. However, the Bures and the Sj\"{o}qvist metrics are generally distinct when considering mixed quantum states $\left( \rho\neq\rho^{2}\right) $ for which the non-commutative probabilistic structure of quantum mechanics is visible (i.e., in the nonclassical scenario with $\left[ \rho\text{, }\rho+d\rho\right] \neq0$).} \end{table} \section{Conclusive Remarks} In this paper, building on our recent scientific effort in Ref. \cite{cafaroprd22}, we presented an explicit analysis of the Bures and Sj\"{o}qvist metrics over the manifolds of thermal states for the spin qubit (Eq. (\ref{spinH})) and the superconducting flux qubit Hamiltonian (Eq. (\ref{superH})) models. 
We observed that while both metrics (Eqs. (\ref{07}) and (\ref{f0})) reduce to the Fubini-Study metric in the (zero-temperature) asymptotic limiting case of the inverse temperature $\beta$ approaching infinity for both Hamiltonian models, the two metrics are generally different. We observed this different behavior in the case of the superconducting flux Hamiltonian model. In general, we note that the two metrics (Eqs. (\ref{f4}) and (\ref{f3})) seem to differ when nonclassical behavior is present since they quantify noncommutativity of neighboring mixed quantum states in different manners (Eqs. (\ref{dis1}) and (\ref{discrepancy})). In summary, we reach (see Table I) the conclusion that for pure quantum states $\left( \rho=\rho^{2}\right) $ and for mixed quantum states $\left( \rho\neq\rho ^{2}\right) $ for which the non-commutative probabilistic structure underlying quantum theory is not visible (i.e., in the classical scenario with $\left[ \rho\text{, }\rho+d\rho\right] =0$), the Bures and the Sj\"{o}qvist metrics are the same (Eqs. (\ref{f2}) and (\ref{f1})). Indeed, in the former and latter cases, they reduce to the Fubini-Study and Fisher-Rao information metrics, respectively. Instead, when investigating mixed quantum states $\left( \rho\neq\rho^{2}\right) $ for which the non-commutative probabilistic structure of quantum mechanics is visible (i.e., in the non-classical scenario with $\left[ \rho\text{, }\rho+d\rho\right] \neq0$), the Bures and the Sj\"{o}qvist metrics exhibit a different behavior (Eqs. (\ref{G1A}) and (\ref{G1B}); Eqs. (\ref{chetu1}) and (\ref{chetu})). Our main conclusions can be outlined as follows: \begin{enumerate} \item[{[i]}] We presented an explicit derivation of Bures metric for arbitrary density matrices in H\"{u}bner's form (Eq. (\ref{buri14})) and in Zanardi's form (Eq. (\ref{7})). Moreover, we presented a clear derivation of Zanardi's form of the Bures metric suitable for the special class of thermal states (Eq. (\ref{general})). 
Finally, we reported an explicit derivation of the Sj\"{o}qvist metric for nondegenerate density matrices (Eq. (\ref{vetta})). \item[{[ii]}] Using our explicit derivations outlined in [i], we performed detailed analytical calculations yielding the expressions of the Bures (Eqs. (\ref{07}) and (\ref{f4})) and Sj\"{o}qvist (Eqs. (\ref{f0}) and (\ref{f3})) metrics on manifolds of thermal states (Eqs. (\ref{ro1}) and (\ref{anto5})) that correspond to the spin qubit (Eq. (\ref{spinH})) and the superconducting flux qubit (Eq. (\ref{superH})) Hamiltonian models. \item[{[iii]}] In the absence of nonclassical features, i.e., when the neighboring density matrices $\rho$ and $d\rho$ commute, the Bures and the Sj\"{o}qvist metrics lead to an identical metric expression exemplified by the classical Fisher-Rao metric tensor. We have explicitly verified this similarity in the case of a manifold of thermal states for spin qubits in the presence of a constant magnetic field along the quantization $z$-axis. \item[{[iv]}] In general, the Bures and the Sj\"{o}qvist metrics are expected to yield different expressions. Indeed, the Bures and Sj\"{o}qvist metrics seem to quantify the noncommutativity of neighboring mixed states $\rho$ and $d\rho$ in different manners. We have explicitly verified this difference in the case of a manifold of thermal states for superconducting flux qubits (see Fig. $2$). \item[{[v]}] In the asymptotic limit of $\beta\rightarrow\infty$, both the Bures and Sj\"{o}qvist metric tensors reduce to the same limiting value (Eq. (\ref{betalim})) specified by, modulo an unimportant constant factor, the Fubini-Study metric tensor (Eq. (\ref{FSlimit})) for the zero-temperature pure states. 
\item[{[vi]}] In the superconducting flux qubit Hamiltonian model considered here, we observe that the difference $ds_{\mathrm{Sj\ddot{o}qvist}} ^{2}-ds_{\mathrm{Bures}}^{2}$ is a nonnegative quantity that depends solely on the difference between the nonclassical contributions to the metric tensor component $g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) $. Indeed, we have shown that $ds_{\mathrm{Sj\ddot{o}qvist}}^{2}-ds_{\mathrm{Bures}}^{2}=\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) d\epsilon^{2}\geq0$ (Eqs. (\ref{dis1}) and (\ref{discrepancy})). \item[{[vii]}] The existence of nonclassical contributions in the Bures and Sj\"{o}qvist metrics is related to the presence of non-vanishing quadratic terms like $\left\vert \left\langle n\left\vert \partial _{h}\mathrm{H}\right\vert m\right\rangle \right\vert ^{2}$ and $\left\vert \left\langle dn\left\vert n\right. \right\rangle \right\vert ^{2}$, respectively. The former term is related to the modulus squared of suitable quantum overlaps defined in terms of parametric variations in the Hamiltonian of the system. The latter term, instead, is specified by the modulus squared of suitable quantum overlaps characterized by parametric variations of the eigenstates of the Hamiltonian of the system. It is not unreasonable to expect a formal connection between these two types of terms causing the noncommutativity between $\rho$ and $\rho+d\rho$ (see, for instance, Eq. (15.30) in Ref. \cite{karol06}) and to find a deeper relation between the Bures and Sj\"{o}qvist metrics for the class of thermal quantum states. Indeed, for a more quantitative discussion on the link between these two terms, see Ref. \cite{alsing23}. 
\item[{[viii]}] The differential $d\rho\left( \beta\text{, }h\right) \overset{\text{def}}{=}\partial_{\beta}\rho d\beta+\partial_{h}\rho dh$ depends on parametric variations of both the eigenvalues and the eigenvectors. However, the noncommutativity between $\rho$ and $d\rho$ is related to that part of $d\rho$ that emerges from the parametric variations of the eigenvectors. These changes, in turn, can be related to the existence of a nonvanishing commutator between the Hamiltonian of the system and the density matrix specifying the thermal state. Indeed, in the two main examples studied in this paper, we have $\left[ \mathrm{H}_{\mathrm{SQ}}\left( \omega\right) \text{, }\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) \right] =0$ and $\left[ \mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) \text{, }\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) \right] \neq0$, respectively. In the former case, unlike the latter case, there is no contribution to $d\rho$ arising from a variation in the eigenvectors of the Hamiltonian. \end{enumerate} For the set of pure states, the scenario is rather unambiguous. The Fubini--Study metric represents the only natural option for a measure that characterizes \textquotedblleft random states\textquotedblright. Alternatively, for mixed-state density matrices, the geometry of the state space is more complicated \cite{karol06,brody19}. There is a collection of distinct metrics that can be used, each with its own physical motivation, benefits, and disadvantages that can depend on the particular application one might be interested in examining. 
Specifically, both simple and complicated geometric quantities (for instance, path, path length, volume, curvature, and complexity) seem to depend on the measure selected on the space of mixed states that specify the quantum system being investigated \cite{karol99,cafaroprd22}. Therefore, our work in this paper can be particularly important in offering an explicit comparative study between the (emerging) Sj\"{o}qvist interferometric geometry and the (established) Bures geometry for mixed quantum states. Encouragingly, the importance of this kind of comparative investigation was recently emphasized in Refs. \cite{mera22} and \cite{cafaroprd22} as well. From a mathematical standpoint, it would be interesting to formally prove (or, alternatively, disprove with an explicit counterexample) the monotonicity \cite{petz96a,petz99} of the Sj\"{o}qvist metric in an arbitrary finite-dimensional space of mixed quantum states. From a physics perspective that relies on empirical evidence, instead, it would be very important to understand the physical significance of employing either metric \cite{mera22,cafaroprd22}. In conclusion, despite its relatively limited scope, we hope this work will inspire both young and senior scientists to pursue ways to deepen our understanding (both mathematical and physical) of this fascinating connection among information geometry, statistical physics, and quantum mechanics \cite{cafaropre20,gassner21,hasegawa21,miller20,cc,saito20,ito20}. \begin{acknowledgments} C.C. is grateful to the United States Air Force Research Laboratory (AFRL) Summer Faculty Fellowship Program for providing support for this work. C. C. acknowledges helpful discussions with Orlando Luongo, Cosmo Lupo, Stefano Mancini, and Hernando Quevedo. P.M.A. acknowledges support from the Air Force Office of Scientific Research (AFOSR). 
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Air Force Research Laboratory (AFRL). \end{acknowledgments} \end{document}
\begin{document} \markboth{A. Ayd{\i}n}{$mo$-convergence} \title{Multiplicative order convergence in $f$-algebras} \author{Abdullah Ayd{\i}n\coraut$^{1}$} \address{$^1$Department of Mathematics, Mu\c{s} Alparslan University, Mu\c{s}, Turkey\\} \emails{[email protected]} \maketitle \begin{abstract} A net $(x_\alpha)$ in an $f$-algebra $E$ is said to be multiplicative order convergent to $x\in E$ if $\left|x_\alpha-x\right|u\oc 0$ for all $u\in E_+$. In this paper, we introduce the notions of $mo$-convergence, $mo$-Cauchy nets, $mo$-completeness, $mo$-continuity and $mo$-KB-spaces. Moreover, we study the basic properties of these notions. \end{abstract} \subjclass{46A40, 46E30} \keywords{$mo$-convergence, $f$-algebra, $mo$-KB-space, vector lattice} \hinfoo{DD.MM.YYYY}{DD.MM.YYYY} \section{Introductory facts}\label{sec1} Despite the rich classical theory of Riesz algebras and $f$-algebras, as far as we know, a concept of convergence in $f$-algebras based on the multiplication has not been studied before. However, there are some closely related studies under the name of unbounded convergence in several kinds of vector lattices; see for example \cite{AAyd,AEEM2,AGG,GTX}. Our aim is to introduce the concept of $mo$-convergence by using the multiplication in $f$-algebras. First of all, let us recall some notation and terminology used in this paper. Let $E$ be a real vector space. Then $E$ is called an \textit{ordered vector space} if it has an order relation $\leq$ (i.e., $\leq$ is reflexive, antisymmetric and transitive) that is compatible with the algebraic structure of $E$, in the sense that $y\leq x$ implies $y+z\leq x+z$ for all $z\in E$ and $\lambda y\leq \lambda x$ for each $\lambda\geq 0$. An ordered vector space $E$ is said to be a {\em vector lattice} (or, {\em Riesz space}) if, for each pair of vectors $x,y\in E$, the supremum $x\vee y=\sup\{x,y\}$ and the infimum $x\wedge y=\inf\{x,y\}$ both exist in $E$. 
Moreover, $x^+:=x\vee 0$, $x^-:=(-x)\vee0$, and $\lvert x\rvert:=x\vee(-x)$ are called the {\em positive} part, the {\em negative} part, and the {\em absolute value} of $x\in E$, respectively. Also, two vectors $x$, $y$ in a vector lattice are said to be {\em disjoint} whenever $\lvert x\rvert\wedge\lvert y\rvert=0$. A vector lattice $E$ is called \textit{order complete} if $0\leq x_\alpha\uparrow\leq x$ implies the existence of $\sup{x_\alpha}$ in $E$. A subset $A$ of a vector lattice is called \textit{solid} whenever $\left|x\right|\leq\left|y\right|$ and $y\in A$ imply $x\in A$. A solid vector subspace is referred to as an \textit{order ideal}. An order closed ideal is referred to as a \textit{band}. A sublattice $Y$ of a vector lattice $E$ is called {\em majorizing} if, for every $x\in E$, there exists $y\in Y$ with $x\leq y$. A partially ordered set $I$ is called {\em directed upward} (respectively, {\em downward}) if, for each $a_1,a_2\in I$, there is another $a\in I$ such that $a\geq a_1$ and $a\geq a_2$ (respectively, $a\leq a_1$ and $a\leq a_2$). A function from a directed set $I$ into a set $E$ is called a {\em net} in $E$. A net $(x_\alpha)_{\alpha\in A}$ in a vector lattice $X$ is called \textit{order convergent} (or shortly, \textit{$o$-convergent}) to $x\in X$ if there exists another net $(y_\beta)_{\beta\in B}$ satisfying $y_\beta \downarrow 0$, and for any $\beta\in B$ there exists $\alpha_\beta\in A$ such that $|x_\alpha-x|\leq y_\beta$ for all $\alpha\geq\alpha_\beta$. In this case, we write $x_\alpha\xrightarrow{o} x$; for more details see for example \cite{ABPO,Vul,Za}. A vector lattice $E$ under an associative multiplication is said to be a \textit{Riesz algebra} whenever the multiplication makes $E$ an algebra (with the usual properties) and, in addition, satisfies the following property: $x,y\in E_+$ implies $xy\in E_+$. A Riesz algebra $E$ is called \textit{commutative} if $xy=yx$ for all $x,y\in E$. 
A Riesz algebra $E$ is called an \textit{$f$-algebra} if $E$ additionally has the property that $x\wedge y=0$ implies $(xz)\wedge y=(zx)\wedge y=0$ for all $z\in E_+$; see for example \cite{ABPO}. A vector lattice $E$ is called \textit{Archimedean} whenever $\frac{1}{n}x\downarrow 0$ holds in $E$ for each $x\in E_+$. Every Archimedean $f$-algebra is commutative; see Theorem 140.10 \cite{Za}. Assume $E$ is an Archimedean $f$-algebra with a multiplicative unit vector $e$. Then, by applying Theorem 142.1(v) \cite{Za}, in view of $e=ee=e^2\geq0$, it can be seen that $e$ is a positive vector. In this article, unless otherwise stated, all vector lattices are assumed to be real and Archimedean, and so all $f$-algebras are commutative. Recall that a net $(x_\alpha)$ in a vector lattice $E$ is \textit{unbounded order convergent} (or shortly, \textit{$uo$-convergent}) to $x\in E$ if $|x_\alpha-x|\wedge u\xrightarrow{o}0$ for every $u\in E_+$. In this case, we write $x_\alpha\xrightarrow{uo}x$; see for example \cite{GTX} and \cite{AAyd,AEEM2,AGG}. Motivated by this definition, we give the following notion. \begin{definition} Let $E$ be an $f$-algebra. A net $(x_\alpha)$ in $E$ is said to be {\em multiplicative order convergent} to $x\in E$ (shortly, $(x_\alpha)$ $mo$-converges to $x$) if $\left|x_\alpha-x\right|u\oc 0$ for all $u\in E_+$. We abbreviate this as $x_\alpha\fc x$. \end{definition} It is clear that $x_\alpha\fc x$ in an $f$-algebra $E$ implies $x_\alpha y\fc xy$ for all $y\in E$ because $\left|xy\right|=\left|x\right|\left|y\right|$ holds for all $x,y\in E$. We shall keep in mind the following useful lemma, which follows from the fact that $xy\in E_+$ for every $x,y\in E_+$. \begin{lemma}\label{basic monotonicity} If $y\leq x$ in an $f$-algebra $E$, then $uy\leq ux$ for all $u\in E_+$. 
\end{lemma} Recall that multiplication by a positive element in an $f$-algebra is a vector lattice homomorphism, i.e., $u(x\wedge y)=(ux)\wedge(uy)$ and $u(x\vee y)=(ux)\vee(uy)$ for every positive element $u$; see for example Theorem 142.1(i) \cite{Za}. We will refer to an $f$-algebra $E$ as an \textit{infinite distributive} $f$-algebra whenever the following condition holds: if $\inf(A)$ exists for a subset $A$ of $E_+$ then, for each positive vector $u\in E_+$, the infimum of the subset $uA$ exists and $\inf(uA)=u\inf(A)$. For a net $(x_\alpha)\downarrow 0$ in an infinite distributive $f$-algebra, the net $(ux_\alpha)$ is therefore also decreasing to zero for every positive vector $u$. \begin{remark}\label{order con iff f-conv} The order convergence implies the $mo$-convergence in infinite distributive $f$-algebras. The converse holds true in $f$-algebras with a multiplicative unit. Indeed, assume a net $(x_\alpha)_{\alpha \in A}$ order converges to $x$ in an infinite distributive $f$-algebra $E$. Then there exists another net $(y_\beta)_{\beta\in B}$ satisfying $y_\beta \downarrow 0$, and, for any $\beta\in B$, there exists $\alpha_\beta\in A$ such that $|x_\alpha-x|\leq y_\beta$. Hence, we have $|x_\alpha-x|u\leq y_\beta u$ for all $\alpha\geq\alpha_\beta$ and for each $u\in E_+$. Since $y_\beta \downarrow$, we have $uy_\beta\downarrow$ for each $u\in E_+$ by Lemma \ref{basic monotonicity}, and $\inf(uy_\beta)=u\inf(y_\beta)=0$ because of $\inf(y_\beta)=0$. Therefore, $\left|x_\alpha-x\right|u\oc 0$ for each $u\in E_+$. That means $x_\alpha\fc x$. For the converse, assume $E$ is an $f$-algebra with multiplicative unit $e$ and $x_\alpha\fc x$ in $E$. That is, $\left|x_\alpha-x\right|u\oc 0$ for all $u\in E_+$. Since $e\in E_+$, in particular, choose $u=e$, and so we have $\left|x_\alpha-x\right|=\left|x_\alpha-x\right|e\oc 0$, that is, $x_\alpha\oc x$ in $E$. \end{remark} By considering Example 141.5 \cite{Za}, we give the following example. 
\begin{example} Let $[a,b]$ be a closed interval in $\mathbb{R}$ and let $E$ be the vector lattice of all real continuous functions on $[a,b]$ whose graphs consist of a finite number of line segments. In view of Theorem 141.1 \cite{Za}, every positive orthomorphism $\pi$ on $E$ is a trivial orthomorphism, i.e., there is a real number $\lambda$ such that $\pi (f)=\lambda f$ for all $f\in E$. Therefore, a net of positive orthomorphisms $(\pi_\alpha)$ is order convergent to $\pi$ iff it is $mo$-convergent to $\pi$ whenever the multiplication is the natural one given by composition, i.e., $\pi_1\pi_2(f)=\pi_1(\pi_2f)$ for all $\pi_1,\pi_2\in Orth(E)$ and all $f\in E$. Indeed, $Orth(E)$ is an Archimedean $f$-algebra with the identity operator as a unit element; see Theorem 140.4 \cite{Za}. So, by applying Remark \ref{order con iff f-conv}, the $mo$-convergence implies the order convergence of the net $(\pi_\alpha)$. Conversely, assume the net of positive orthomorphisms $\pi_\alpha\oc \pi$ in $Orth(E)$. Then we have $\pi_\alpha(f)\oc \pi(f)$ for all $f\in E$; see Theorem VIII.2.3 \cite{Vul}. For fixed $0\leq\mu\in Orth(E)$, there is a real number $\lambda_\mu$ such that $\mu(f)=\lambda_\mu f$ for all $f\in E$. Since $\left|\pi_\alpha(f)-\pi(f)\right|=\left|\lambda_{\pi_\alpha}f-\lambda_\pi f\right|\oc 0$, we have $$ \left|\pi_\alpha(f)-\pi(f)\right|\mu=\left|\mu\lambda_{\pi_\alpha}f-\mu\lambda_\pi f\right|=\left|\lambda_\mu\lambda_{\pi_\alpha}f-\lambda_\mu\lambda_\pi f\right|=\left|\lambda_\mu\right|\left|\lambda_{\pi_\alpha}f-\lambda_\pi f\right|\oc 0 $$ for all $f\in E$. Since $\mu$ is arbitrary, we get $\pi_\alpha\fc \pi$. 
\end{example} \section{Main results} We begin the section with the next list of properties of the $mo$-convergence, which follows directly from Lemma \ref{basic monotonicity} and the inequalities $\left|x-y\right| \leq \left|x-x_\alpha\right|+\left|x_\alpha-y\right|$ and $\left| \left|x_\alpha\right|-\left|x\right| \right|\leq\left| x_\alpha-x\right|$. \begin{lemma} Let $(x_\alpha)$ and $(y_\beta)$ be nets in an $f$-algebra $E$. Then the following statements hold: \begin{enumerate} \item[(i)] $x_\alpha\fc x$ iff $(x_\alpha- x)\fc 0$; \item[(ii)] if $x_\alpha\fc x$ then $y_\beta\fc x$ for each subnet $(y_\beta)$ of $(x_\alpha)$; \item[(iii)] if $x_\alpha\fc x$ and $y_\beta\fc y$ then $ax_\alpha+by_\beta\fc ax+by$ for any $a,b\in \mathbb{R}$; \item[(iv)] if $x_\alpha \fc x$ and $x_\alpha \fc y$ then $x=y$; \item[(v)] if $x_\alpha \fc x$ then $\lvert x_\alpha\rvert \fc \lvert x \rvert$. \end{enumerate} \end{lemma} Recall that an order complete vector lattice $E^\delta$ is said to be an order completion of the vector lattice $E$ whenever $E$ is Riesz isomorphic to a majorizing order dense vector lattice subspace of $E^\delta$. Every Archimedean Riesz space has a unique order completion; see Theorem 2.24 \cite{ABPO}. \begin{proposition} Let $(x_\alpha)$ be a net in an $f$-algebra $E$. Then $x_\alpha\fc 0$ in $E$ iff $x_\alpha\fc 0$ in the order completion $E^\delta$ of $E$. \end{proposition} \begin{proof} Assume $x_\alpha\fc 0$ in $E$. Then $\left|x_\alpha\right|u\oc 0$ in $E$ for all $u\in E_+$, and so $\left|x_\alpha\right|u\oc 0$ in $E^\delta$ for all $u\in E_+$; see Corollary 2.9 \cite{GTX}. Now, fix $v\in E^\delta_+$. Then there exists $x_v\in E_+$ such that $v\leq x_v$ because $E$ majorizes $E^\delta$. Then we have $\left|x_\alpha\right|v\leq \left|x_\alpha\right|x_v$. 
From $\left|x_\alpha\right|x_v\oc 0$ in $E^\delta$ it follows that $\left|x_\alpha\right|v\oc 0$ in $E^\delta$; that is, $x_\alpha\fc 0$ in the order completion $E^\delta$ because $v\in E^\delta_+$ is arbitrary. Conversely, assume $x_\alpha\fc 0$ in $E^\delta$. Then, for all $u\in E^\delta_+$, we have $\left|x_\alpha\right|u\oc 0$ in $E^\delta$. In particular, for all $x\in E_+$, $\left|x_\alpha\right|x\oc 0$ in $E^\delta$. By Corollary 2.9 \cite{GTX}, we get $\left|x_\alpha\right|x\oc 0$ in $E$ for all $x\in E_+$. Hence $x_\alpha\fc 0$ in $E$. \end{proof} The multiplication in an $f$-algebra is $mo$-continuous in the following sense. \begin{theorem} Let $E$ be an infinite distributive $f$-algebra, and let $(x_\alpha)_{\alpha \in A}$ and $(y_\beta)_{\beta \in B}$ be two nets in $E$. If $x_\alpha\fc x$ and $y_\beta\fc y$ for some $x,y\in E$, and each positive element of $E$ can be written as a product of two positive elements, then $x_\alpha y_\beta\fc xy$. \end{theorem} \begin{proof} Assume $x_\alpha\fc x$ and $y_\beta\fc y$. Then $\left| x_\alpha-x\right|u\oc 0$ and $\left| y_\beta-y\right|u\oc 0$ for every $u\in E_+$. Fix $u\in E_+$. Then there exist two other nets $(z_\gamma)_{\gamma\in\Gamma}\downarrow 0$ and $(z_\xi)_{\xi\in\Xi}\downarrow 0$ in $E$ such that, for all $(\gamma,\xi)\in\Gamma\times\Xi$, there are $\alpha_\gamma\in A$ and $\beta_\xi\in B$ with $\left|x_\alpha-x\right|u\leq z_\gamma$ and $\left|y_\beta-y\right|u\leq z_\xi$ for all $\alpha\geq\alpha_\gamma$ and $\beta\geq\beta_\xi$. Next, we show the $mo$-convergence of $(x_\alpha y_\beta)$ to $xy$. 
By considering the equality $\left|xy\right| =\left|x\right| \left|y\right|$ and Lemma \ref{basic monotonicity}, we have \begin{eqnarray*} \left|x_\alpha y_\beta-xy\right|u&=&\left|x_\alpha y_\beta-x_\alpha y+x_\alpha y-xy\right|u\\&\leq& \left|x_\alpha\right|\left|y_\beta-y\right|u+\left| x_\alpha -x\right|\left|y\right|u\\&\leq& \left|x_\alpha-x\right|\left|y_\beta-y\right|u+\left|x\right|\left|y_\beta-y\right|u+\left| x_\alpha -x\right|\left|y\right|u. \end{eqnarray*} The second and the third terms in the last inequality order converge to zero as $\beta\to\infty$ and $\alpha\to \infty$, respectively, because $\left|x\right|u,\left|y\right|u\in E_+$, $x_\alpha\fc x$ and $y_\beta\fc y$. Now, let's show the convergence of the first term of the last inequality. There are two positive elements $u_1,u_2\in E_+$ such that $u=u_1u_2$ because every positive element of $E$ can be written as a product of two positive elements. So, we get $\left|x_\alpha-x\right|\left|y_\beta-y\right|u=(\left|x_\alpha-x\right|u_1)(\left|y_\beta-y\right|u_2)$. Since $(z_\gamma)_{\gamma\in\Gamma}\downarrow 0$ and $(z_\xi)_{\xi\in\Xi}\downarrow 0$, the product $(z_\gamma z_\xi)\downarrow 0$. Indeed, we first show that the product is decreasing. For indices $(\gamma_1,\xi_1)\leq(\gamma_2,\xi_2)$ in $\Gamma\times\Xi$, we have $z_{\gamma_2}\leq z_{\gamma_1}$ and $z_{\xi_2}\leq z_{\xi_1}$ because both nets are decreasing. Since the nets are positive, it follows from $z_{\xi_2}\leq z_{\xi_1}$ that $z_{\gamma_2}z_{\xi_2}\leq z_{\gamma_2}z_{\xi_1}\leq z_{\gamma_1}z_{\xi_1}$. As a result, $(z_\gamma z_\xi)_{(\gamma,\xi)\in\Gamma\times\Xi}\downarrow$. Now, we show that the infimum of the product is zero. For a fixed index $\gamma_0$, we have $z_\gamma z_\xi\leq z_{\gamma_0}z_\xi$ for $\gamma\geq \gamma_0$ because $(z_\gamma)$ is decreasing. Thus, we get $\inf(z_\gamma z_\xi)=0$ because of $\inf(z_{\gamma_0}z_\xi)=z_{\gamma_0}\inf(z_\xi)=0$. 
Therefore, we see that $(\left|x_\alpha-x\right|u_1)(\left|y_\beta-y\right|u_2)\oc 0$. Hence, we get $x_\alpha y_\beta\fc xy$. \end{proof} The lattice operations in an $f$-algebra are $mo$-continuous in the following sense. \begin{proposition}\label{LO are $mo$-continuous} Let $(x_\alpha)_{\alpha \in A}$ and $(y_\beta)_{\beta \in B}$ be two nets in an $f$-algebra $E$. If $x_\alpha\fc x$ and $y_\beta\fc y$ then $(x_\alpha\vee y_\beta)_{(\alpha,\beta)\in A\times B} \fc x\vee y$. In particular, $x_\alpha\fc x$ implies $x_\alpha^+\fc x^+$. \end{proposition} \begin{proof} Assume $x_\alpha\fc x$ and $y_\beta\fc y$, and fix $u\in E_+$. Then there exist two nets $(z_\gamma)_{\gamma\in\Gamma}$ and $(w_\lambda)_{\lambda\in\Lambda}$ in $E$ satisfying $z_\gamma\downarrow 0$ and $w_\lambda\downarrow 0$, and for all $(\gamma,\lambda)\in\Gamma\times\Lambda$ there are $\alpha_\gamma\in A$ and $\beta_\lambda\in B$ such that $\left|x_\alpha-x\right|u\leq z_\gamma$ and $\left|y_\beta-y\right|u\leq w_\lambda$ for all $\alpha\geq\alpha_\gamma$ and $\beta\geq\beta_\lambda$. It follows from the inequality $|a\vee b-a\vee c|\leq |b-c|$ in vector lattices that \begin{equation*} \begin{split} \left|x_\alpha \vee y_\beta - x\vee y\right|u&\leq \left|x_\alpha \vee y_\beta -x_\alpha \vee y\right|u+\left|x_\alpha \vee y- x\vee y\right|u\\ &\leq \left|y_\beta -y\right|u+\left|x_\alpha-x\right|u\leq w_\lambda+z_\gamma \end{split} \end{equation*} for all $\alpha\geq\alpha_\gamma$ and $\beta\geq\beta_\lambda$. Since $(w_\lambda+z_\gamma)\downarrow 0$ and $u\in E_+$ was arbitrary, $\left|x_\alpha \vee y_\beta - x\vee y\right|u$ order converges to $0$ for all $u\in E_+$. That is, $(x_\alpha\vee y_\beta)_{(\alpha,\beta)\in A\times B} \fc x\vee y$. \end{proof} \begin{lemma}\label{mo convergence positive} Let $(x_\alpha)$ be a net in an $f$-algebra $E$. Then \begin{enumerate} \item[(i)] $0\leq x_\alpha\fc x$ implies $x\in E_+$. 
\item[(ii)] if $(x_\alpha)$ is monotone and $x_\alpha\fc x$ then $x_\alpha\oc x$. \end{enumerate} \end{lemma} \begin{proof} $(i)$ Assume $0\leq x_\alpha\fc x$. Then we have $x_\alpha=x_\alpha^+\fc x^+$ by Proposition \ref{LO are $mo$-continuous}, and so $x=x^+$ by the uniqueness of the $mo$-limit. Hence, we get $x\in E_+$. $(ii)$ We show that $x_\alpha\uparrow$ and $x_\alpha\fc x$ imply $x_\alpha\uparrow x$. Fix an index $\alpha$. Then we have $x_\beta-x_\alpha\in E_+$ for $\beta\ge\alpha$. By $(i)$, $x_\beta-x_\alpha\fc x-x_\alpha\in E_+$. Therefore, $x\geq x_\alpha$ for any $\alpha$. Since $\alpha$ is arbitrary, $x$ is an upper bound of $(x_\alpha)$. Assume $y$ is another upper bound of $(x_\alpha)$, i.e., $y\geq x_\alpha$ for all $\alpha$. So, $y-x_\alpha\fc y-x\in E_+$, or $y\ge x$, and so $x_\alpha \uparrow x$. \end{proof} The following simple observation is useful in its own right. \begin{proposition} Every disjoint decreasing sequence in an $f$-algebra $mo$-converges to zero. \end{proposition} \begin{proof} Suppose $(x_n)$ is a disjoint and decreasing sequence in an $f$-algebra $E$. Then $(\left|x_n\right|u)$ is also a disjoint sequence in $E$ for all $u\in E_+$; see Theorem 142.1(iii) \cite{Za}. Fix $u\in E_+$. By Corollary 3.6 \cite{GTX}, we have $\left|x_n\right|u\uoc 0$ in $E$. So, $\left|x_n\right|u\wedge w\oc 0$ in $E$ for all $w\in E_+$. In particular, for fixed $n_0$, take $w=\left|x_{n_0}\right|u$. Then, for all $n\geq n_0$, we get $$ \left|x_n\right|u=\left|x_n\right|u\wedge \left|x_{n_0}\right|u=\left|x_n\right|u\wedge w\oc 0 $$ because of $\left|x_n\right|u\leq \left|x_{n_0}\right|u$. Therefore, $x_n \fc 0$ in $E$. \end{proof} For the next two results, observe the following fact. Let $E$ be a vector lattice, $I$ be an order ideal of $E$ and $(x_\alpha)$ be a net in $I$. If $x_\alpha\oc x$ in $I$ then $x_\alpha\oc x$ in $E$. Conversely, if $(x_\alpha)$ is order bounded in $I$ and $x_\alpha\oc x$ in $E$ then $x_\alpha\oc x$ in $I$. 
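Before proceeding, the notions above can be illustrated in a concrete finite-dimensional example of our own (not taken from the paper): $\mathbb{R}^{k}$ with coordinatewise order and coordinatewise multiplication is an Archimedean $f$-algebra with multiplicative unit $e=(1,\ldots,1)$. For a net of the form $x_{n}=x+\frac{1}{n}v$ with $v\geq0$, the products $\left|x_{n}-x\right|u$ decrease to zero coordinatewise for every $u\geq0$, and the choice $u=e$ recovers order convergence, as in Remark \ref{order con iff f-conv}. A minimal numerical sketch:

```python
import numpy as np

# R^k with pointwise order and multiplication is an Archimedean
# f-algebra whose multiplicative unit is e = (1, ..., 1).
k = 4
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, k)   # limit vector
v = rng.uniform(0.0, 1.0, k)    # positive "dominating" direction
e = np.ones(k)                  # multiplicative unit

def x_n(n):
    # x_n -> x in order, since |x_n - x| = v / n is dominated by v / n
    return x + v / n

for u in (e, rng.uniform(0.0, 2.0, k)):   # u = e recovers order convergence
    errs = [np.abs(x_n(n) - x) * u for n in (1, 10, 100, 1000)]
    # |x_n - x| u decreases to zero coordinatewise (mo-convergence witness)
    for a, b in zip(errs, errs[1:]):
        assert np.all(b <= a)
    assert np.all(errs[-1] <= 1e-3 * v * u + 1e-12)
```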
\begin{proposition} Let $E$ be an $f$-algebra, $B$ be a projection band of $E$ and $P_B$ be the corresponding band projection. If $x_\alpha\fc x$ in $E$ then $P_B(x_\alpha)\fc P_B(x)$ in both $E$ and $B$. \end{proposition} \begin{proof} It is known that $P_B$ is a lattice homomorphism and $0\leq P_B\leq I$. It follows from $\lvert P_B(x_\alpha)-P_B(x)\rvert=P_B\lvert x_\alpha-x\rvert\leq \lvert x_\alpha-x\rvert$ that $\lvert P_B(x_\alpha)-P_B(x)\rvert u\leq \lvert x_\alpha-x\rvert u$ for all $u\in E_+$. Then it follows easily that $P_B(x_\alpha)\fc P_B(x)$ in both $E$ and $B$. \end{proof} \begin{theorem}\label{ideal iff vector lattice} Let $E$ be an $f$-algebra and $I$ be an order ideal and sub-$f$-algebra of $E$. For an order bounded net $(x_\alpha)$ in $I$, $x_\alpha\fc 0$ in $I$ iff $x_\alpha\fc 0$ in $E$. \end{theorem} \begin{proof} Suppose $x_\alpha\fc 0$ in $E$. Then for any $u\in I_+$, we have $\lvert x_\alpha\rvert u\oc 0$ in $E$. So, the preceding observation implies $\lvert x_\alpha\rvert u\oc 0$ in $I$ because $\lvert x_\alpha\rvert u$ is order bounded in $I$. Therefore, we get $x_\alpha\fc 0$ in $I$. Conversely, assume that $(x_\alpha)$ $mo$-converges to zero in $I$. For any $u\in I_+$, we have $\lvert x_\alpha\rvert u\oc 0$ in $I$, and so in $E$. Since $(x_\alpha)$ lies in $I$, by applying Theorem 142.1(iv) \cite{Za}, we have $x_\alpha w=0$ for all $w\in I^d=\{x\in E:x\perp y \text{ for all } y\in I\}$ and for each $\alpha$. For any $u\in I_+$ and each $0\leq w\in I^d$, it follows that $$ \lvert x_\alpha\rvert (u+w)=\lvert x_\alpha\rvert u+\lvert x_\alpha\rvert w=\lvert x_\alpha\rvert u\oc 0 $$ in $E$. Hence, for each $z\in (I\oplus I^d)_+$, we get $\lvert x_\alpha\rvert z\oc 0$ in $E$. It is known that $I\oplus I^d$ is order dense in $E$; see Theorem 1.36 \cite{ABPO}. Fix $v\in E_+$. Then there exists some $u\in (I\oplus I^d)_+$ such that $v\leq u$. Thus, we have $\lvert x_\alpha\rvert v \leq \lvert x_\alpha\rvert u\oc 0$ in $E$.
Therefore, $\lvert x_\alpha\rvert v\oc 0$, and so $x_\alpha\fc 0$ in $E$. \end{proof} The following proposition extends Theorem 3.8 \cite{AAyd} to the general setting. \begin{theorem} Let $E$ be an infinitely distributive $f$-algebra with a unit $e$ and let $(x_n)\downarrow$ be a sequence in $E$. Then $x_n\fc 0$ iff $\left|x_n\right|(u\wedge e)\oc 0$ for all $u\in E_+$. \end{theorem} \begin{proof} For the forward implication, assume $x_n\fc 0$. Hence, $\left|x_n\right| u\oc 0$ for all $u\in E_+$, and so $\left|x_n\right|(u\wedge e)\leq \left|x_n\right|u\oc 0$ because $u\wedge e\leq u$. Therefore, $\left|x_n\right|(u\wedge e)\oc 0$. For the reverse implication, fix $u\in E_+$. By applying Theorem 2.57 \cite{ABPO} and Theorem 142.1(i) \cite{Za}, note that \begin{eqnarray*} \left| x_n\right| u\leq\left| x_n\right|(u-u\wedge ne)+\left| x_n\right|(u\wedge ne)\leq\frac{1}{n}u^2\left|x_n\right|+n\left|x_n\right|(u\wedge e). \end{eqnarray*} Since $(x_n)\downarrow$ and $E$ is Archimedean, we have $\frac{1}{n}u^2\left| x_n\right|\downarrow 0$. Furthermore, it follows from $\left|x_n\right|(u\wedge e)\oc 0$ for each $u\in E_+$ that there exists a sequence $(y_m)_{m\in B}$ satisfying $y_m \downarrow 0$ such that, for any $m\in B$, there exists $n_m$ with $\left|x_n\right|(u\wedge e)\leq \frac{1}{n}y_m$, i.e., $n\left|x_n\right|(u\wedge e)\leq y_m$, for all $n \geq n_m$. Hence, we get $n\left|x_n\right|(u\wedge e)\oc 0$. Therefore, we have $\left|x_n\right|u\oc 0$, and so $x_n\fc 0$. \end{proof} The $mo$-convergence passes obviously to any sub-$f$-algebra $Y$ of $E$, i.e., for any net $(y_\alpha)$ in $Y$, $y_\alpha\fc0$ in $E$ implies $y_\alpha\fc0$ in $Y$. For the converse, we give the following theorem. \begin{theorem}\label{$up$-regular} Let $Y$ be a sub-$f$-algebra of an $f$-algebra $E$ and $(y_\alpha)$ be a net in $Y$.
If $y_\alpha\fc 0$ in $Y$ then it $mo$-converges to zero in $E$ in each of the following cases: \begin{enumerate} \item[(i)] $Y$ is majorizing in $E$; \item[(ii)] $Y$ is a projection band in $E$; \item[(iii)] for each $u\in E$, there exist elements $x,y\in Y$ such that $\left|u-y\right|\leq |x|$. \end{enumerate} \end{theorem} \begin{proof} Assume $(y_\alpha)$ is a net in $Y$ and $y_\alpha\fc 0$ in $Y$. Fix $u\in E_+$. $(i)$ Since $Y$ is majorizing in $E$, there exists $v\in Y_+$ such that $u\leq v$. It follows from $$ 0\leq |y_\alpha|u\leq |y_\alpha| v\oc0, $$ that $|y_\alpha|u\oc 0$ in $E$. That is, $y_\alpha\fc 0$ in $E$. $(ii)$ Since $Y$ is a projection band in $E$, we have $Y=Y^{\bot\bot}$ and $E=Y\oplus Y^{\bot}$. Hence $u=u_1+u_2$ with $u_1\in Y_+$ and $u_2\in Y^{\bot}_+$. Thus, we have $\left|y_\alpha\right|\wedge u_2=0$ because $(y_\alpha)$ lies in $Y$ and $u_2\in Y^{\bot}$. Hence, by applying Theorem 142.1(iii) \cite{Za}, we see that $y_\alpha u_2=0$ for every index $\alpha$. It follows from $$ \left|y_\alpha\right|u=\left|y_\alpha\right|(u_1+u_2)=\left|y_\alpha\right|u_1\oc 0 $$ that $\left|y_\alpha\right|u\oc 0$ in $E$. Therefore, $y_\alpha\fc 0$ in $E$. $(iii)$ For the given $u\in E_+$, there exist elements $x,y\in Y$ with $\left|u-y\right|\leq |x|$. Then $$ \left|y_\alpha\right|u\leq \left|y_\alpha\right|\left|u-y\right|+\left|y_\alpha\right|\left|y\right|\leq\left|y_\alpha\right|\left|x\right|+\left|y_\alpha\right|\left|y\right|. $$ By the $mo$-convergence of $(y_\alpha)$ in $Y$, we have $\left|y_\alpha\right|\left|x\right|\oc 0$ and $\left|y_\alpha\right|\left|y\right|\oc 0$, and so $\left|y_\alpha\right|u\oc 0$. That means $y_\alpha\fc 0$ in $E$ because $u$ is arbitrary in $E_+$. \end{proof} We continue with some basic notions in $f$-algebras, motivated by their analogues in vector lattice theory. \begin{definition}\label{$mo$-notions} Let $(x_\alpha)_{\alpha \in A}$ be a net in an $f$-algebra $E$.
Then \begin{enumerate} \item[(i)] $(x_\alpha)$ is said to be {\em $mo$-Cauchy} if the net $(x_\alpha-x_{\alpha'})_{(\alpha,\alpha') \in A\times A}$ $mo$-converges to $0$; \item[(ii)] $E$ is called {\em $mo$-complete} if every $mo$-Cauchy net in $E$ is $mo$-convergent; \item[(iii)] $E$ is called {\em $mo$-continuous} if $x_\alpha\oc0$ implies $x_\alpha\fc 0$; \item[(iv)] $E$ is called an {\em $mo$-KB-space} if every order bounded increasing net in $E_+$ is $mo$-convergent. \end{enumerate} \end{definition} \begin{remark}\label{$mo$-cont-0} An $f$-algebra $E$ is $mo$-continuous iff $x_\alpha\downarrow 0$ in $E$ implies $x_\alpha\fc 0$. Indeed, one implication is obvious. For the converse, consider a net $x_\alpha\oc 0$. Then there exists a net $z_\beta\downarrow 0$ in $E$ such that, for any $\beta$, there exists $\alpha_\beta$ so that $|x_\alpha|\leq z_\beta$ for all $\alpha\geq\alpha_\beta$. Hence, by the assumption, we have $z_\beta\fc 0$, and so $x_\alpha\fc 0$. \end{remark} \begin{proposition} Let $(x_\alpha)$ be a net in an $f$-algebra $E$. If $x_\alpha\fc x$ and $(x_\alpha)$ is an $o$-Cauchy net then $x_\alpha\oc x$. Moreover, if $x_\alpha\fc x$ and $(x_\alpha)$ is $uo$-Cauchy then $x_\alpha\uoc x$. \end{proposition} \begin{proof} Assume $x_\alpha\fc x$ and $(x_\alpha)$ is an order Cauchy net in $E$. Then $x_\alpha-x_\beta\oc 0$ as $\alpha,\beta\to\infty$. Thus, there exists a net $z_\gamma\downarrow 0$ in $E$ such that, for every $\gamma$, there exists $\alpha_\gamma$ satisfying $$ |x_\alpha-x_\beta|\leq z_\gamma $$ for all $\alpha,\beta\geq \alpha_\gamma$. Taking the $mo$-limit over $\beta$ in the above inequality and applying Proposition \ref{LO are $mo$-continuous}, i.e., $\left|x_\alpha-x_\beta\right|\fc \left|x_\alpha-x\right|$, we get $|x_\alpha-x|\leq z_\gamma$ for all $\alpha\geq\alpha_\gamma$. That means $x_\alpha\oc x$. A similar argument applies in the $uo$-convergence case, so we omit the details.
\end{proof} For $mo$-complete $f$-algebras, we have the following characterization of $mo$-continuity. \begin{theorem}\label{of-contchar} For an $mo$-complete $f$-algebra $E$, the following statements are equivalent: \begin{enumerate} \item[(i)] $E$ is $mo$-continuous; \item[(ii)] if $0\leq x_\alpha\uparrow\leq x$ holds in $E$ then $(x_\alpha)$ is an $mo$-Cauchy net; \item[(iii)] $x_\alpha\downarrow 0$ implies $x_\alpha\fc 0$ in $E$. \end{enumerate} \end{theorem} \begin{proof} $(i)\Rightarrow(ii)$ Consider the increasing and bounded net $0\leq x_\alpha\uparrow\leq x$ in $E$. Then there exists a net $(y_\beta)$ in $E$ such that $(y_\beta-x_\alpha)_{\alpha,\beta}\downarrow 0$; see Lemma 12.8 \cite{ABPO}. Thus, by applying Remark \ref{$mo$-cont-0}, we have $(y_\beta-x_\alpha)_{\alpha,\beta}\fc0$, and so the net $(x_\alpha)$ is $mo$-Cauchy because $\left|x_\alpha-x_{\alpha'}\right|\leq\left|x_\alpha-y_\beta\right|+\left|y_\beta-x_{\alpha'}\right|$ for all $\alpha,\alpha'\in A$. $(ii)\Rightarrow(iii)$ Suppose that $x_\alpha\downarrow 0$ in $E$, and fix an arbitrary $\alpha_0$. Then we have $x_\alpha\leq x_{\alpha_0}$ for all $\alpha\geq\alpha_0$, so that $0\leq(x_{\alpha_0}-x_\alpha)_{\alpha\geq\alpha_0}\uparrow\leq x_{\alpha_0}$. It follows from $(ii)$ that the net $(x_{\alpha_0}-x_\alpha)_{\alpha\geq\alpha_0}$ is $mo$-Cauchy, i.e., $(x_{\alpha'}-x_\alpha)\fc 0$ as $\alpha_0\le\alpha,\alpha'\to\infty$. Then there exists $x\in E$ satisfying $x_\alpha\fc x$ because $E$ is $mo$-complete. Since $x_\alpha\downarrow$ and $x_\alpha\fc x$, it follows from Lemma \ref{mo convergence positive} that $x_\alpha\downarrow x$; as $x_\alpha\downarrow 0$, we get $x=0$. Therefore, $x_\alpha\fc 0$. $(iii)\Rightarrow(i)$ This is precisely the content of Remark \ref{$mo$-cont-0}. \end{proof} \begin{corollary}\label{of + f implies o} Let $E$ be an $mo$-continuous and $mo$-complete $f$-algebra. Then $E$ is order complete.
\end{corollary} \begin{proof} Suppose $0\leq x_\alpha\uparrow\leq u$ in $E$. We show that $(x_\alpha)$ has a supremum. By Theorem \ref{of-contchar} $(ii)$, the net $(x_\alpha)$ is $mo$-Cauchy. Hence, there is $x\in E$ such that $x_\alpha\fc x$ because $E$ is $mo$-complete. It follows from Lemma \ref{mo convergence positive} that $x_\alpha\uparrow x$ because $x_\alpha\uparrow$ and $x_\alpha\fc x$. Therefore, $E$ is order complete. \end{proof} \begin{proposition}\label{f-KB is of} Every $mo$-KB-space is $mo$-continuous. \end{proposition} \begin{proof} Assume $x_\alpha\downarrow 0$ in $E$. By Theorem \ref{of-contchar}, it is enough to show $x_\alpha\fc 0$. Fix an index $\alpha_0$ and define the net $y_\alpha:=x_{\alpha_0}-x_\alpha$ for $\alpha\ge\alpha_0$. Then it is clear that $0\leq y_\alpha\uparrow\le x_{\alpha_0}$, i.e., $(y_\alpha)$ is an increasing and order bounded net in $E$. Since $E$ is an $mo$-KB-space, there exists $y\in E$ such that $y_\alpha\fc y$. Thus, by Lemma \ref{mo convergence positive}, we have $y_\alpha\uparrow y$. Hence, $y=\sup\limits_{\alpha\geq\alpha_0}y_\alpha=\sup\limits_{\alpha\geq\alpha_0}(x_{\alpha_0}-x_\alpha)=x_{\alpha_0}$ because $x_\alpha\downarrow 0$. Therefore, $x_{\alpha_0}-x_\alpha=y_\alpha\fc y=x_{\alpha_0}$, that is, $x_\alpha\fc0$. \end{proof} \begin{proposition} Every $mo$-KB-space is order complete. \end{proposition} \begin{proof} Suppose $0\leq x_\alpha\uparrow\leq z$ is an order bounded and increasing net in an $mo$-KB-space $E$ for some $z\in E_+$. Then $x_\alpha\fc x$ for some $x\in E$ because $E$ is an $mo$-KB-space. By Lemma \ref{mo convergence positive}, we have $x_\alpha\uparrow x$ because $x_\alpha\uparrow$ and $x_\alpha\fc x$. So, $E$ is order complete. \end{proof} \begin{proposition}\label{$o$-closed sublattice of $mo$-KB} Let $Y$ be a sub-$f$-algebra and order closed sublattice of an $mo$-KB-space $E$. Then $Y$ is also an $mo$-KB-space.
\end{proposition} \begin{proof} Let $(y_\alpha)$ be a net in $Y$ such that $0\leq y_\alpha\uparrow\leq y$ for some $y\in Y_+$. Since $E$ is an $mo$-KB-space, there exists $x\in E_+$ such that $y_\alpha\fc x$. By Lemma \ref{mo convergence positive}, we have $y_\alpha\uparrow x$, and so $x\in Y$ because $Y$ is order closed. Thus $Y$ is an $mo$-KB-space. \end{proof} \end{document}
\begin{document} \title[]{Complex symmetry of composition operators on weighted Bergman spaces } \author{Osmar R Severiano} \address{IMECC, Universidade Estadual de Campinas, Campinas-SP, Brazil.} \email{$\mathrm{[email protected]}$} \begin{abstract} In this article, we study the complex symmetry of composition operators $C_{\phi}f=f\circ \phi$ induced on weighted Bergman spaces $A^2_{\beta}(\mathbb{D}),\ \beta\geq -1,$ by analytic self-maps of the unit disk. One of our main results shows that $\phi$ has a fixed point in $\mathbb{D}$ whenever $C_{\phi}$ is complex symmetric. Our work establishes a strong relation between complex symmetry and cyclicity. Assuming that $\beta\in \mathbb{N}$ and that $\phi$ is an elliptic automorphism of $\mathbb{D}$ which is not a rotation, we show that $C_{\phi}$ is not complex symmetric whenever $\phi$ has order at least $2(3+\beta).$ \end{abstract} {\subjclass[2010]{Primary; Secondary}} \keywords{Complex symmetry, composition operator, weighted Bergman space, cyclicity, linear fractional maps.} \maketitle{} \section*{Introduction} If $X$ is a Banach space of analytic functions on an open set $U\subset \mathbb{D}$ and if $\phi$ is an analytic self-map of $U,$ the \textit{composition operator} with \textit{symbol} $\phi$ is defined by $C_{\phi}f=f \circ \phi$ for any $f \in X.$ The emphasis here is on the comparison of properties of $C_{\phi}$ with those of the symbol $\phi.$ Composition operators have been studied on a variety of spaces, and the majority of the literature is concerned with sets $U$ that are open and connected. It is clear that the set $U$ strongly influences the properties of the operator $C_{\phi}.$ For example, if $U$ is the open unit disk $\mathbb{D},$ it is well known that the operator $C_{\phi}$ is bounded on the Hardy space $H^2(\mathbb{D}).$ More generally, this boundedness holds on each weighted Bergman space $A^2_{\beta}(\mathbb{D})$ (see \cite[page 532]{anna}).
We refer to \cite{cow} and \cite{hed} for more details about the Hardy and weighted Bergman spaces, respectively. The concept of \textit{complex symmetric operators} on separable Hilbert spaces is a natural generalization of complex symmetric matrices, and their general study was initiated by Garcia, Putinar, and Wogen (see \cite{Garc2}, \cite{Garc3}, \cite{Wogen1} and \cite{Wogen2}). The class of complex symmetric operators includes a large number of concrete examples including all normal operators, binormal operators, Hankel operators, finite Toeplitz matrices, compressed shift operators, and the Volterra integral operator. The study of complex symmetry of composition operators on the Hardy space of the unit disc $H^2 (\mathbb{D})$ was initiated by Garcia and Hammond in \cite{Garc1}. In this work, they showed that for each $\alpha\in \mathbb{D},$ the involutive automorphism of $\mathbb{D}$ given by \begin{align}\label{0} \phi_{\alpha}(z):=\frac{\alpha-z}{1-\overline{\alpha}z} \end{align} induces a non-normal complex symmetric composition operator. In particular, we see that the class of complex symmetric operators is strictly larger than that of the normal operators. Another important work on complex symmetry of composition operators on $H^2(\mathbb{D})$ was carried out by P. S. Bourdon and S. W. Noor in \cite{wal}. They showed that if $\phi$ is an elliptic automorphism of order $N>3$ (including $N=\infty$) then $C_{\phi}$ is not complex symmetric (\cite[Proposition 3.1]{wal} and \cite[Proposition 3.3]{wal}). It is worth mentioning that, for a complete classification of the automorphisms of $\mathbb{D}$ that induce complex symmetric composition operators, it suffices to classify the elliptic automorphisms of order $3.$ Based on \cite{wal}, T. Eklund, M. Lindström and P.
Mleczko also investigated which automorphisms of $\mathbb{D}$ induce complex symmetric composition operators on the classical Bergman space $A^2.$ They showed that if $\phi$ is an elliptic automorphism of order $N>5$ then $C_{\phi}$ is not complex symmetric. Our first main result is the following: \begin{thm}\label{pps1} Let $\phi$ be an analytic self-map of $\mathbb{D}.$ If $C_{\phi}$ is complex symmetric on $A^2_{\beta}(\mathbb{D}),$ then $\phi$ must fix a point in $\mathbb{D}.$ \end{thm} We will use Theorem \ref{pps1} to prove that the complex symmetry of $C_{\phi}$ strongly influences the dynamics of $C_{\phi}$ and $C_{\phi}^*$ on $A^2_{\beta}(\mathbb{D})$ (see Propositions \ref{pps4} and \ref{pps3}). As a consequence, we will show that \textit{hyperbolic linear fractional maps} of $\mathbb{D}$ never induce complex symmetric composition operators. Hence, when $\phi$ is a \textit{parabolic} or \textit{hyperbolic} automorphism of $\mathbb{D},$ we will see that $C_{\phi}$ is not complex symmetric. Our main result on the complex symmetry of composition operators induced by elliptic automorphisms generalizes the results in \cite{wal} and \cite{Ted} on $A^2_{-1}:=H^2(\mathbb{D})$ and $A^2:=A^2_{0}(\mathbb{D}),$ respectively, to all $A^2_{\beta}(\mathbb{D})$ with $\beta\in \mathbb{N}.$ We prove the following result: \begin{thm} Let $\beta\in \mathbb{N}.$ If $\phi$ is an elliptic automorphism of $\mathbb{D}$ which is not a rotation and has order $N\geq2(3+\beta),$ then $C_{\phi}$ is not complex symmetric on $A^2_{\beta}(\mathbb{D}).$ \end{thm} \section{Notations and Preliminaries } In this section, we present some preliminary definitions and results.
Throughout this article we will use the following notations: $\mathbb{D}:=\{z\in\mathbb{C}:\left| z\right| <1\}$ is the open unit disc, $\mathbb{T}:=\{z\in \mathbb{C}:\left| z\right| =1\}$ is the unit circle, $\mathbb{N}:=\{0,1,2,\ldots\},$ and for an operator $T$ on a Hilbert space we denote the orbit of $f$ under $T$ by $\mathrm{Orb}(T,f)=\{T^nf:n=0,1,\ldots\}.$ \subsection{Complex symmetric operators} A bounded operator $T$ on a separable Hilbert space $\mathcal{H}$ is \textit{complex symmetric} if there exists an orthonormal basis for $\mathcal{H}$ with respect to which $T$ has a self-transpose matrix representation. An equivalent definition also exists. A conjugate-linear operator $C$ on $\mathcal{H}$ is said to be a \textit{conjugation} if $C^2=I$ and $\left\langle Cf,Cg\right\rangle =\left\langle g,f\right\rangle $ for all $f, g\in \mathcal{H}.$ We say that $T$ is $C$-\textit{symmetric} if $CT = T^*C,$ and complex symmetric if there exists a conjugation $C$ with respect to which $T$ is $C$-symmetric. In general, complex symmetric operators enjoy the following \textit{spectral symmetry} property: \begin{align}\label{37} f\in Ker(T-\lambda I)\Longleftrightarrow Cf\in Ker(T^*-\overline{\lambda}I). \end{align} This follows from $CT=T^*C,$ where $C$ is a conjugation. Another well-known fact is that $f$ is orthogonal to $Cg$ whenever $f$ and $g$ are eigenvectors corresponding to distinct eigenvalues of a $C$-symmetric operator (see \cite{pamona}).
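A standard illustration of a conjugation, which we include only for the reader's convenience (it is not specific to the operators studied below), is the following.
\begin{remark}
On any Hilbert space of analytic functions on $\mathbb{D}$ whose inner product is computed from Maclaurin coefficients with positive real weights $w_n$ (such as $A^2_{\beta};$ see \eqref{eq1} below), the map $\mathcal{J}\left(\sum_{n=0}^{\infty} a_nz^n\right):=\sum_{n=0}^{\infty} \overline{a_n}z^n,$ that is, $(\mathcal{J}f)(z)=\overline{f(\overline{z})},$ is a conjugation. Indeed, $\mathcal{J}$ is conjugate-linear, $\mathcal{J}^2=I,$ and, denoting by $(a_n)$ and $(b_n)$ the Maclaurin coefficients of $f$ and $g,$ we have $\left\langle \mathcal{J}f,\mathcal{J}g\right\rangle =\sum_{n=0}^{\infty} w_n\overline{a_n}b_n=\overline{\left\langle f,g\right\rangle }=\left\langle g,f\right\rangle .$
\end{remark}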
\subsection{Weighted Bergman space $A^2_{\beta}(\mathbb{D})$} Let $dA(z)$ be the normalized area measure on $\mathbb{D}$ and $-1<\beta<\infty.$ The weighted Bergman space $A^2_{\beta}:=A^2_{\beta}(\mathbb{D})$ is the space of all analytic functions $f$ in $\mathbb{D}$ for which the norm \begin{align*} \left\|f\right\|:= \left( \int_{\mathbb{D}}\left| f(z)\right| ^2dA_{\beta}(z)\right) ^{1/2} \end{align*} is finite, where $dA_{\beta}(z)=(\beta+1)(1-\left|z\right|^2)^{\beta}dA(z).$ The weighted Bergman space $A^2_{\beta}$ is a Hilbert space with the inner product \begin{align}\label{eq1} \displaystyle\left\langle f,g\right\rangle=\sum_{n=0}^{\infty}\frac{n!\Gamma(2+\beta)}{\Gamma(n+2+\beta)} \widehat{f}(n) \overline{\widehat{g}(n)}, \end{align} where $(\widehat{f}(n))_{n\in \mathbb{N}}$ and $(\widehat{g}(n))_{n\in \mathbb{N}}$ are the sequences of Maclaurin coefficients of $f$ and $g$ respectively, and $\Gamma$ is the Gamma function. Hence, the norm of $f\in A^2_{\beta}$ is also given by \begin{align}\label{eq2} \left\|f\right\|=\sqrt{\displaystyle \sum_{n=0}^{\infty}\frac{n!\Gamma(2+\beta)}{\Gamma(n+2+\beta)}\left|\widehat{f}(n)\right|^2}. \end{align} For convenience we write $A^2_0:=A^2,$ and we interpret the classical Hardy space $H^2(\mathbb{D})$ as the \textit{limit case} of the weighted Bergman spaces $A^2_{\beta}$ as $\beta\longrightarrow -1,$ that is, $H^2(\mathbb{D}):=A^2_{-1}$ (see \cite{rikka}). For each $\alpha\in \mathbb{D},$ let $K_{\alpha}$ denote the \textit{reproducing kernel} for $A^2_{\beta}$ at $\alpha,$ that is, \begin{align*} K_{\alpha}(z)=\frac{1}{(1-\overline{\alpha}z)^{2+\beta}}.
\end{align*} These functions play an important role in the theory of weighted Bergman spaces, namely: $\left\langle f, K_{\alpha}\right\rangle =f(\alpha)$ for all $f\in A^2_{\beta}.$ Moreover, if $\phi$ is an analytic self-map of $\mathbb{D},$ then a simple computation gives $C_{\phi}^*K_{\alpha}=K_{\phi(\alpha)}.$ For more details about $A^2_{\beta}$ we refer to \cite{hed} and \cite{peter}. The space $H^{\infty}:=H^{\infty}(\mathbb{D})$ is the Banach space of all analytic and bounded functions on $\mathbb{D}.$ The norm of a function $f\in H^{\infty}$ is defined by $\left\| f\right\|_{\infty} =\sup\left\lbrace \left| f(z)\right| :z\in \mathbb{D}\right\rbrace.$ It is straightforward to verify that $H^{\infty}$ is a subspace of $A^2_{\beta}$ and $ \left\|f\right\|\leq \left\| f\right\| _{\infty}.$ Moreover, for each $\psi \in H^{\infty},$ we define the bounded operator $M_{\psi}:A^2_{\beta}\longrightarrow A^2_{\beta}$ by $(M_{\psi}f)(z)=\psi(z)f(z).$ This operator is called an (\textit{analytic}) \textit{Toeplitz operator}; it is also called the multiplication operator by $\psi.$ \subsection{Denjoy-Wolff point} Let $\phi^{[n]}$ denote the $n$-th iterate of the analytic self-map $\phi,$ defined inductively by $\phi^{[0]}=id,\ \phi^{[1]}=\phi$ and $\phi^{[n]}=\phi^{[n-1]}\circ \phi$ for each positive integer $n.$ Since $C_{\phi}^n=C_{\phi^{[n]}},$ the dynamics of $\phi$ strongly influence the dynamics of $C_{\phi}.$ If $\omega \in \overline{\mathbb{D}}$ is a fixed point for $\phi$ such that the sequence $\phi^{[n]}$ converges uniformly on compact subsets of $\mathbb{D}$ to $\omega,$ then $\omega$ is said to be an \textit{attractive fixed point} for $\phi.$ The next result is concerned with the existence of attractive fixed points for analytic self-maps of $\mathbb{D}$ which are not elliptic automorphisms.
\begin{thm} If $\phi$ is an analytic self-map of $\mathbb{D}$ which is not an elliptic automorphism, then there is a unique point $\omega\in \overline{\mathbb{D}}$ such that \begin{align}\label{t7} \omega=\lim_{n\longrightarrow \infty}\phi^{[n]}(z) \end{align} for each $z\in \mathbb{D}.$ \end{thm} This result is proved in \cite{wolf1} and \cite{wolf2}. The point $\omega$ in \eqref{t7} is called the \textit{Denjoy-Wolff} point of $\phi.$ If $\omega\in \mathbb{D},$ then $\omega$ is the unique fixed point of $\phi$ in $\mathbb{D}.$ \subsection{Linear fractional composition operators} Recall that a \textit{linear fractional self-map} of $\mathbb{D}$ is a mapping of the form \begin{align}\label{frac1} \phi(z)=\frac{az+b}{cz+d} \end{align} satisfying $ \left| b\overline{d}-a\overline{c}\right| +\left| ad-bc\right| \leq \left| d\right| ^2-\left| c\right| ^2 $ (see \cite{ele}). Let $\mathrm{LFT}(\mathbb{D})$ denote the set of all linear fractional self-maps of $\mathbb{D}.$ Since these maps have at least one and at most two fixed points in the extended plane, we classify them according to the location of their fixed points, namely: \begin{itemize} \item \textit{Parabolic maps:} $\phi$ has a unique fixed point, which lies on $\mathbb{T}.$ \item \textit{Hyperbolic maps:} $\phi$ has an attractive fixed point in $\overline{\mathbb{D}}$ and the other fixed point outside of $\mathbb{D};$ both fixed points are on $\mathbb{T}$ if and only if $\phi$ is an automorphism of $\mathbb{D}.$ \item \textit{Loxodromic and elliptic maps:} $\phi$ has a fixed point in $\mathbb{D}$ and the other fixed point outside of $\overline{\mathbb{D}}.$ The elliptic ones are always automorphisms of $\mathbb{D}.$ \end{itemize} It is worth mentioning that the automorphisms of $\mathbb{D}$ are the linear fractional maps of the form \begin{align} \phi(z)=e^{i\theta} \frac{\alpha-z}{1-\overline{\alpha}z} \end{align} for some $\theta\in \mathbb{R}$ and $\alpha\in \mathbb{D}.$ If $\phi$ is an elliptic automorphism of
$\mathbb{D},$ then $\phi$ has a unique fixed point $\alpha\in\mathbb{D}.$ In this case, $\phi$ is conjugate to a rotation via the Schwarz Lemma, that is, \begin{align}\label{pre2} \phi=\phi_{\alpha}\circ(\lambda\phi_{\alpha}) \end{align} for some $\lambda$ on the unit circle $\mathbb{T}.$ Next, we highlight some properties of the involutive automorphism $\phi_{\alpha},$ since it will play an important role in Section \ref{5}. Since $C_{\phi_{\alpha}}$ is an invertible operator with $C_{\phi_{\alpha}}^2=I$ and $(z^n)_{n\in \mathbb{N}}$ has dense span in $A^2_{\beta},$ it follows that the sequence formed by the vectors $\phi_{\alpha}^n=C_{\phi_{\alpha}}z^n$ has dense span in $A^2_{\beta}$ as well; moreover, putting $v_n:=C_{\phi_{\alpha}}^*z^n$ for each non-negative integer $n,$ we have $\left\langle v_n,v_m\right\rangle =0$ if $n\neq m.$ If $\phi$ is an analytic self-map of $\mathbb{D}$, then no general formula for $C_{\phi}^{*}$ is known. A first result in this direction is due to Carl Cowen (see \cite[Theorem 9.2]{cow}). He showed that if $\phi$ is a linear fractional self-map of $\mathbb{D},$ then such a formula is given in terms of compositions of Toeplitz operators and composition operators. Based on the work of Carl Cowen, Hurst in \cite[Theorem 2]{Hurst} generalized the formula to the weighted Bergman spaces $A^2_{\beta}$: \textit{if $\phi$ is as in \eqref{frac1}, then} \begin{align}\label{frac2} C_{\phi}=M_{g}C_{\sigma}M_{h}^{*}, \end{align} \textit{where the functions $g,h$ and $\sigma$ are defined as} \begin{align*} \sigma(z)=\frac{\overline{a}z-\overline{c}}{-\overline{b}z+\overline{d}}, \quad g(z)=\frac{1}{(-\overline{b}z+\overline{d})^{\beta+2}}\quad\text{and}\quad h(z)=(cz+d)^{\beta+2}.
\end{align*} Hence, if $\phi=\phi_{\alpha},$ the formula \eqref{frac2} gives $C_{\phi_{\alpha}}^*=M_{K_{\alpha}}C_{\phi_{\alpha}}M_{1/K_{\alpha}}^*.$ So, assuming $2+\beta$ is a natural number, we determine an expression for $M_{1/K_{\alpha}}$ in terms of the multiplication operator $M_z$ by observing that \begin{align}\label{sym2} \frac{1}{K_{\alpha}(z)}=(1-\overline{\alpha}z)^{2+\beta}=\sum_{k=0}^{2+\beta} {2+\beta\choose k}(-\overline{\alpha}z)^{k}. \end{align} Then \begin{align}\label{pre3} M_{1/K_{\alpha}}=\displaystyle \sum_{k=0}^{2+\beta}{2+\beta\choose k}(-\overline{\alpha}M_z)^k. \end{align} By combining the formula for the adjoint of $C_{\phi_{\alpha}}$ and \eqref{pre3}, we obtain the following expression: \begin{align}\label{658} C_{\phi_{\alpha}}^*= \sum_{k=0}^{2+\beta}{2+\beta\choose k} (-\alpha)^k M_{K_{\alpha}}C_{\phi_{\alpha}}(M_z^*)^k. \end{align} The equality \eqref{658} plays an important role in Section \ref{5}; it will provide a general way to study the orthogonality between the vectors $v_n:=C_{\phi_{\alpha}}^*z^n,$ as compared to \cite[Lemma 2.2]{wal} and \cite[Lemma 5]{Ted}, where the authors considered the same problem on the Hardy space $H^2(\mathbb{D})$ and the Bergman space $A^2,$ respectively. \section{Cyclic vectors on $A_{\beta}^2$}\label{2} In this section, we focus on the study of \textit{cyclic vectors} for the weighted Bergman spaces $A^2_{\beta};$ we first establish some basic results. If $f\in A^2_{\beta},$ we let $[f]$ denote the closure in $A^2_{\beta}$ of $\{pf:\text{$p$ is a polynomial in $z$}\}.$ A function $f\in A^2_{\beta}$ is said to be \textit{cyclic} if it generates the whole space, that is, $[f]=A^2_{\beta}.$ For the Hardy space $H^2(\mathbb{D})$ or the Bergman space $A^2,$ various necessary and sufficient conditions are known to decide whether a given function is cyclic. For example, see \cite[Chapter 7]{hed} and \cite[Chapter 2]{mar}.
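Before proceeding, we record a basic example of a cyclic vector, included only for illustration.
\begin{remark}
The constant function $1$ is cyclic in $A^2_{\beta}.$ Indeed, by \eqref{eq2}, the partial sums of the Maclaurin series of any $f\in A^2_{\beta}$ converge to $f$ in the norm of $A^2_{\beta},$ so the polynomials are dense in $A^2_{\beta}$ and hence $[1]=A^2_{\beta}.$
\end{remark}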
\begin{lem}\label{lem1} Let $M_z$ be the multiplication operator on $A_{\beta}^2.$ Then \begin{align}\label{lem13}\left( M_z^*f\right) (z)=\sum_{n=0}^{\infty}\displaystyle\frac{\Gamma(n+2+\beta)(n+1)}{\Gamma(n+3+\beta)} \widehat{f}(n+1) z^n \end{align} for each $f\in A^2_{\beta}.$ \end{lem} \begin{proof} It is enough to determine the Maclaurin coefficients of the function $M_{z}^*f,$ that is, the values $\widehat{M_{z}^*f}(n)$ for each non-negative integer $n.$ We note that \begin{align*} \frac{n!\Gamma(2+\beta)}{\Gamma(n+2+\beta)}\widehat{M^*_zf}(n) = \left\langle M^*_{z}f,z^n\right\rangle =\left\langle f,M_zz^{n}\right\rangle =\frac{(n+1)!\Gamma(2+\beta)}{\Gamma(n+3+\beta)}\widehat{f}(n+1). \end{align*} Hence, $\widehat{M^*_zf}(n)=\displaystyle\frac{\Gamma(n+2+\beta)(n+1)}{\Gamma(n+3+\beta)}\widehat{f}(n+1).$ \end{proof} The relation \eqref{lem13} provides a simple formula to compute the $n$-th Maclaurin coefficient of $M^*_zf$ for each $f\in A^2_{\beta}.$ Below we use it to establish the injectivity of the operator $M_{\omega-z}^*.$ More precisely: \begin{pps}\label{pps2} The adjoint of the multiplication operator $M_{\omega-z}$ on $A^2_{\beta}$ is injective for each $\omega\in \mathbb{T}.$ In particular, $(\omega-z)f$ is cyclic whenever $f$ is cyclic. \end{pps} \begin{proof} Suppose $M_{\omega-z}^*g=0$ for some $g\in A^2_{\beta}.$ Then by Lemma \ref{lem1} we have \begin{align*} \displaystyle \sum_{n=0}^{\infty}\overline{\omega}\widehat{g}(n)z^n=\sum_{n=0}^{\infty}\displaystyle\frac{\Gamma(n+2+\beta)(n+1)}{\Gamma(n+3+\beta)}\widehat{g}(n+1)z^n. \end{align*} Hence for each non-negative integer $n,$ we have \begin{align}\label{lem11}\widehat{g}(n+1)=\displaystyle \overline{\omega}\,\frac{\Gamma(n+3+\beta)}{\Gamma(n+2+\beta)(n+1)}\widehat{g}(n). \end{align} By using \eqref{lem11} recursively, we obtain the relation $ \widehat{g}(n)=\displaystyle \overline{\omega}^n\,\frac{\Gamma(n+2+\beta)}{n!\Gamma(\beta+2)}\widehat{g}(0).
$ Thus a simple computation provides \begin{align*} \left\|g\right\|^2= \sum_{n=0}^{\infty}\frac{n!\Gamma(2+\beta)}{\Gamma(n+2+\beta)}\left|\widehat{g}(n)\right|^2 = \left[ \sum_{n=0}^{\infty} \frac{\Gamma(n+2+\beta)}{\Gamma(\beta+2)n!}\right] \left|\widehat{g}(0)\right|^2. \end{align*} Since $\displaystyle\sum_{n=0}^{\infty}\frac{\Gamma(n+2+\beta)}{\Gamma(\beta+2)n!}$ diverges, $\left\| g\right\| $ is finite if and only if $\widehat{g}(0)=0.$ Thus all Maclaurin coefficients of $g$ must vanish. Hence $g\equiv 0,$ and therefore $M_{\omega-z}^*$ is injective. Now suppose that $f$ is cyclic and $g\in [(\omega-z)f]^{\perp}.$ Then $ 0=\left\langle z^n(\omega -z)f,g\right\rangle$ for each non-negative integer $n.$ Hence \begin{align}\label{lem12}0=\left\langle z^n(\omega -z)f,g\right\rangle =\left\langle M_{\omega -z}(z^nf),g\right\rangle =\left\langle z^nf,M_{\omega -z}^*g\right\rangle. \end{align} Since $f$ is cyclic, \eqref{lem12} forces $M_{\omega-z}^*g=0.$ Since $M_{\omega-z}^*$ is injective, we get $g\equiv 0$. In particular, we conclude that $(\omega-z)f$ is cyclic. \end{proof} Theorem \ref{thm1} is the main result of this section. It generalizes similar results for composition operators on $H^2(\mathbb{D})$ and $A^2$ proved in \cite[Proposition 2.1]{bourdon} and \cite[Lemma 1]{Ted}, respectively. The main tool is the cyclicity of $(\omega -z)g,$ where $g$ is a cyclic eigenvector for $C_{\phi}$ and $\omega\in \mathbb{T}$ is the Denjoy-Wolff point of $\phi.$ \begin{thm}\label{thm1} Suppose that the analytic self-map $\phi$ of $\mathbb{D}$ has its Denjoy-Wolff point $\omega $ in $\mathbb{T}.$ If $\lambda$ is an eigenvalue of $C_{\phi}:A^2_{\beta}\longrightarrow A^2_{\beta}$ with a cyclic function as a corresponding eigenvector, then $C_{\phi}-\lambda I$ has dense range. \end{thm} \begin{proof} Since $\phi$ has its Denjoy-Wolff point in $\mathbb{T},$ it follows that $\phi$ is nonconstant, and therefore $\phi$ is an open function.
Let $g$ be a cyclic eigenvector of $C_{\phi}$ corresponding to the eigenvalue $\lambda.$ It is worth noting that $\lambda\neq 0.$ Indeed, if $\lambda=0$ then $g(\phi(z))=0$ for all $z\in \mathbb{D},$ so $g$ vanishes on the open set $\phi(\mathbb{D}),$ and hence $g\equiv 0,$ a contradiction. We recall that the operator $C_{\phi}-\lambda I$ has dense range if and only if $C_{\phi}^{*}-\overline{\lambda}I$ is injective. Suppose, therefore, that $\overline{\lambda}$ is an eigenvalue of $C_{\phi}^{*},$ so that there is a nonzero vector $h\in A^2_{\beta}$ with $C_{\phi}^{*}h=\overline{\lambda}h.$ Write $\phi_k:=\phi^{[k]}$ for the $k$-th iterate of $\phi.$ Then for any non-negative integers $n$ and $k,$ we have \begin{align}\label{the31} \lambda^k\left\langle z^n(\omega -z)g,h\right\rangle &=\left\langle z^n(\omega -z)g, \overline{\lambda}^kh\right\rangle =\left\langle z^n(\omega -z)g,(C_{\phi}^{*})^{k}h\right\rangle\nonumber\\ &=\left\langle C_{\phi_k}(z^n(\omega -z)g),h\right\rangle =\left\langle \phi_{k}^n(\omega -\phi_k)(g\circ \phi_{k}),h\right\rangle\nonumber\\ &=\left\langle \phi_{k}^n(\omega -\phi_k)C_{\phi}^kg,h\right\rangle =\left\langle \phi_k^n(\omega -\phi_k)\lambda^kg,h\right\rangle\nonumber\\ &=\lambda^k\left\langle \phi_k^{n}(\omega -\phi_k)g,h\right\rangle. \end{align} By combining \eqref{the31} and $\lambda\neq 0,$ we obtain \begin{equation}\label{eq12} \left\langle z^n(\omega -z)g,h\right\rangle=\left\langle \phi^n_{k}(\omega -\phi_k)g,h\right\rangle. \end{equation} Note that $\left|\phi_{k}^n(\omega -\phi_k)g\overline{h}\right|\leq 2\left|g\overline{h}\right|\in L^1(\mathbb{D},dA_{\beta}),$ and that the iterated sequence $(\phi_k)_{k=1}^{\infty}$ converges pointwise to $\omega $ on $\mathbb{D}$ (even uniformly on compact subsets of $\mathbb{D};$ see \cite[Theorem 2.51]{cow}).
By applying the Lebesgue Dominated Convergence Theorem, we obtain from \eqref{eq12} that \begin{equation*} \left\langle z^n(\omega -z)g,h\right\rangle=\displaystyle\lim _{k\longrightarrow \infty}\left\langle \phi_k^n(\omega -\phi_k)g,h\right\rangle=0, \end{equation*} for each non-negative integer $n.$ Hence, $h\in [(\omega-z)g]^{\perp}.$ By Proposition \ref{pps2}, $(\omega-z)g$ is cyclic, and therefore $h$ is identically zero. However, this contradicts the fact that $h$ is an eigenvector. \end{proof} \begin{cor}\label{cor1}Suppose that $\phi$ is an analytic self-map of $\mathbb{D}.$ If $\phi$ has its Denjoy-Wolff point in $\mathbb{T},$ then $C_{\phi}-I$ has dense range on $A^2_{\beta}.$ \end{cor} \begin{proof} This is an immediate consequence of Theorem \ref{thm1}, since $\lambda=1$ is an eigenvalue for $C_{\phi}$ having the cyclic eigenvector $g\equiv 1.$ \end{proof} \section{Cyclicity and Hypercyclicity}\label{3} The next main result shows that if $C_{\phi}$ is complex symmetric on $A^2_{\beta},$ then $\phi$ must fix a point in $\mathbb{D}.$ Naturally, this implies results about the dynamics of $C_{\phi}$ and $C_{\phi}^*.$ \begin{thm}\label{thm2} Let $\phi$ be an analytic self-map of $\mathbb{D}.$ If $C_{\phi}:A^2_{\beta}\longrightarrow A^{2}_{\beta}$ is complex symmetric, then $\phi$ is either an elliptic automorphism of the unit disc or has its Denjoy-Wolff point in $\mathbb{D}.$ \end{thm} \begin{proof} Suppose on the contrary that $\phi$ has its Denjoy-Wolff point in $\mathbb{T}.$ By Corollary \ref{cor1} it follows that $C_{\phi}-I$ has dense range on $A^2_{\beta},$ or equivalently, that $C_{\phi}^*-I$ is injective. If $C_{\phi}$ were complex symmetric, we would have $CC_{\phi}^*C=C_{\phi}$ for some conjugation $C,$ and then $C_{\phi}^*-I$ would not be injective, because $C_{\phi}^*CK_0=CK_{0}$ with $CK_0\neq 0.$ Hence $C_{\phi}$ is not complex symmetric.
\end{proof} An operator $T$ on $\mathcal{H}$ is said to be \textit{cyclic} if there exists a vector $f\in \mathcal{H}$ for which the linear span of its orbit $(T^nf)_{n\in \mathbb{N}}$ is dense in $\mathcal{H}.$ If the orbit $(T^nf)_{n\in \mathbb{N}}$ itself is dense in $\mathcal{H},$ then $T$ is said to be \textit{hypercyclic}. In these cases $f$ is called a \textit{cyclic} or \textit{hypercyclic} vector for $T,$ respectively. The book \cite{caos} contains a systematic study of cyclic and hypercyclic operators. In particular, it shows how spectral properties influence dynamical properties. For example, if $T$ has eigenvalues then $T^*$ is never hypercyclic (see \cite[Proposition 5.1.]{caos}). Hence $C_{\phi}^*$ is never hypercyclic, since $1$ is always an eigenvalue for $C_{\phi}.$ Additionally, if $C_{\phi}$ is complex symmetric then we have: \begin{pps}\label{pps4} Let $\phi$ be an analytic self-map of $\mathbb{D}$ such that $C_{\phi}$ is complex symmetric on $A^2_{\beta}.$ Then $C_{\phi}$ and $C_{\phi}^*$ are not hypercyclic. \end{pps} \begin{proof} By the comments above it is enough to show that $C_{\phi}$ is not hypercyclic. If $C_{\phi}$ is complex symmetric, then Theorem \ref{thm2} gives $\phi(\alpha)=\alpha$ for some $\alpha \in \mathbb{D}.$ So $C_{\phi}^*K_{\alpha}=K_{\alpha},$ and therefore $C_{\phi}$ is not hypercyclic. \end{proof} Proposition \ref{pps4} says that the Bergman space $A^2_{\beta}$ does not support composition operators that are simultaneously complex symmetric and hypercyclic. As we saw, this is strongly influenced by the existence of fixed points for $\phi$ in $\mathbb{D}.$ In contrast to Proposition \ref{pps4}, we will see that each complex symmetric $C_{\phi}$ with non-automorphic $\phi$ is cyclic.
This is a consequence of the following result: \textit{If $\phi$ is an analytic self-map of the disk with $\phi(\alpha)=\alpha$ for some $\alpha$ in the unit disk, and $\phi$ is neither constant nor an elliptic automorphism, then $C_{\phi}^*$ is cyclic on $H^2(\mathbb{D})$ with cyclic vector $K_z$ for each $z\neq \alpha$}. The proof of this result appears in the work of T. Worner (see \cite{Worner}), where the author studied the commutant of certain composition operators on $H^2(\mathbb{D}).$ It is worth mentioning that the proof given in \cite[Theorem 3]{Worner} works for $A^2_{\beta}$ as well. \begin{pps}\label{pps3} Let $\phi$ be an analytic self-map of $\mathbb{D}$ such that $C_{\phi}$ is complex symmetric on $A^2_{\beta}.$ If $\phi$ is neither constant nor an elliptic automorphism, then $C_{\phi}$ and $C_{\phi}^*$ are cyclic. \end{pps} \begin{proof} Let $\alpha$ be the Denjoy-Wolff point of $\phi.$ According to Theorem \ref{thm2}, the complex symmetry of $C_{\phi}$ implies $\alpha\in \mathbb{D}.$ In particular, $\phi(\alpha)=\alpha.$ So $C_{\phi}^*$ is cyclic and $K_z$ is a cyclic vector for $C_{\phi}^*$ for each $z\neq\alpha.$ Since $C_{\phi}$ is complex symmetric, we have $C_{\phi}=CC_{\phi}^*C$ for some conjugation $C.$ Let $f$ be in the orthogonal complement of the span of $\mathrm{Orb}(C_{\phi},CK_{z}).$ Then \begin{equation*}0=\left\langle f,C_{\phi}^{n}CK_{z}\right\rangle=\left\langle f,C(C_{\phi}^{n})^*K_z\right\rangle=\left\langle (C_{\phi}^{n})^*K_z,Cf\right\rangle \end{equation*} for each non-negative integer $n.$ So $Cf=0,$ since $K_z$ is a cyclic vector for $C_{\phi}^*.$ Because $C$ is an isometry, we obtain $f\equiv 0.$ Hence the span of $\mathrm{Orb}(C_{\phi},CK_{z})$ is dense in $A^2_{\beta},$ and therefore $C_{\phi}$ is cyclic. \end{proof} It is worth mentioning that Propositions \ref{pps4} and \ref{pps3} are generalizations of \cite[Theorem 5.1.]{jung}.
Moreover, we need not suppose that $\phi$ has a fixed point in $\mathbb{D},$ because this is guaranteed by Theorem \ref{thm2}. \section{Hyperbolic linear fractional non-automorphisms}\label{4} As we saw in Section \ref{3}, the complex symmetry of $C_{\phi}$ strongly influences the location of the Denjoy-Wolff point of $\phi.$ More precisely, if $C_{\phi}$ is complex symmetric then $\phi$ has a fixed point inside the disk (see Theorem \ref{thm2}). Hence parabolic linear fractional maps never induce complex symmetric composition operators on $A^2_{\beta}.$ So, for a complete classification of the linear fractional self-maps that induce complex symmetric composition operators on $A^2_{\beta},$ we must study the hyperbolic, loxodromic and elliptic maps. In this section we deal with the case in which $\phi$ is a hyperbolic linear fractional map. We will see that in this case $C_{\phi}$ is not complex symmetric. We begin our study by showing that each hyperbolic linear fractional map is similar to a map of the form \begin{align*} \psi_s(z)=\frac{sz}{1-(1-s)z} \ \text{for some} \ 0<\left|s\right|<1. \end{align*} \begin{lem}\label{11} Let $\phi$ be a hyperbolic linear fractional map.
Then $\phi$ is similar to $\psi_s$ for some $0<\left|s\right|<1.$ \end{lem} \begin{proof} Let $\phi(\alpha)=\alpha$ for some $\alpha\in \mathbb{D}$ and $\phi(e^{i\theta})=e^{i\theta}.$ Putting $\phi_{\alpha}(e^{i\theta})=\lambda,$ we have $\left| \lambda\right|=1.$ Hence, the mapping $\Phi=\overline{\lambda}\phi_{\alpha}$ is an automorphism of $\mathbb{D}$ with $\Phi(\alpha)=0$ and $\Phi(e^{i\theta})=1.$ Since $\mathrm{LFT}(\mathbb{D})$ is closed under composition, it follows that $\psi=\Phi\circ \phi\circ \Phi^{-1}$ is a linear fractional map of $\mathbb{D}.$ Also $\psi(1)=1$ and $\psi(0)=0.$ Let $a,b,c,d$ be complex numbers such that $\psi(z)=(az+b)(cz+d)^{-1}.$ Then we obtain $b=d\psi(0)=0$ and $a/(c+d)=\psi(1)=1.$ Putting $s=(c+d)/d,$ we can rewrite $\psi$ as follows: \begin{center} $ \displaystyle \psi(z)=\frac{(c+d)z}{cz+d}=\frac{ \left( \frac{c+d}{d}\right)z}{1-\left( -\frac{c}{d}\right)z }=\frac{sz}{1-(1-s)z}=\psi_s(z).$ \end{center} Now it is enough to prove that $0<\left|s\right|<1.$ If $s=0,$ the function $\psi_s$ is identically zero, which contradicts $\psi_s(1)=1.$ If $s=1,$ then $\psi_s$ is the identity, and therefore $\phi$ is the identity too, which is impossible since $\phi$ is hyperbolic. As $\psi_s$ is a linear fractional self-map of $\mathbb{D},$ we have \begin{align}\label{36} \left|s\right|\left(1+\left|1-s\right|\right)\leq \left(1-\left|1-s\right|\right)\left(1+\left|1-s\right|\right). \end{align} Since $s\neq 1,$ the inequality \eqref{36} forces $\left|s\right|\leq 1-\left|1-s\right|<1.$ This completes the proof. \end{proof} Since each hyperbolic linear fractional non-automorphism is similar to $\psi_s,$ we focus our study on the map $\psi_s.$ It is clear that $\psi_s$ has its Denjoy-Wolff point inside the unit disk, so by the discussion before Proposition \ref{pps3} the operator $C_{\psi_s}^*$ is cyclic on $A^2_{\beta}.$ According to the work of P. S. Bourdon and J. H.
Shapiro (see \cite[Proposition 2.7]{sha}): if $T^*$ has an \textit{eigenvalue of infinite multiplicity} (that is, $\mathrm{Ker}(T^*-\lambda I)$ is infinite dimensional for some complex number $\lambda$), then $T$ is not cyclic. Based on this result we show the following: \begin{lem} The operator $C_{\psi_s}$ is not cyclic on $A_{\beta}^2$ for each $0<\left|s\right|<1.$ \end{lem} \begin{proof} Denoting $H_0=\{f\in A^2_{\beta}:f(0)=0\},$ P. R. Hurst in \cite[Theorem 5]{Hurst} showed that $C_{\psi_s}^*|_{H_0}$ is similar to $sC_{\sigma},$ where $\sigma(z)=sz+1-s.$ In this work he also showed that for each $\mathrm{Re}(\lambda)>-(\beta+2)/2,$ the function $f_{\lambda} \in A^2_{\beta},$ $f_{\lambda}(z)=(1-z)^{\lambda},$ is an eigenvector for $C_{\sigma}$ corresponding to the eigenvalue $s^{\lambda}.$ So, for each integer $k$ and $\lambda(k)=\lambda+2\pi i k/\log s,$ we see that $C_{\sigma}f_{\lambda(k)}=s^{\lambda}f_{\lambda(k)}.$ In particular, this last equality shows that $s^{\lambda}$ is an eigenvalue of infinite multiplicity for $C_{\sigma},$ since the set $\{f_{\lambda(k)}:k\in \mathbb{Z}\}$ is linearly independent, and therefore $C_{\sigma}^*$ is not cyclic. Since cyclicity is invariant under similarity and $H_0$ reduces the operator $C_{\psi_s},$ it follows that $C_{\psi_s}$ is not cyclic either. \end{proof} It follows that hyperbolic linear fractional non-automorphisms do not induce complex symmetric composition operators. This is a consequence of the strong relation between cyclicity and complex symmetry. \begin{thm}\label{12} Let $\phi$ be a hyperbolic linear fractional map. Then $C_{\phi}$ is not complex symmetric on $A^2_{\beta}$. \end{thm} \begin{proof} First we consider the non-automorphism case.
By Lemma \ref{11}, $C_{\phi}$ is similar to $C_{\psi_s}$ for some $0<\left|s\right|<1.$ Since cyclicity is invariant under similarity, it follows that $C_{\phi}^*$ and $C_{\phi}$ are not simultaneously cyclic, and therefore by Proposition \ref{pps3} we conclude that $C_{\phi}$ is not complex symmetric. The automorphism case follows from Theorem \ref{thm2}. \end{proof} Due to Theorem \ref{12}, it remains to classify the complex symmetric composition operators induced by linear fractional self-maps with a fixed point inside $\mathbb{D}$ and outside $\overline{\mathbb{D}}.$ It is worth mentioning that even in the Hardy space $H^2(\mathbb{D})$ this case remains open. \section{The symbol $\phi=\phi_{\alpha}\circ(\lambda\phi_{\alpha})$} \label{5} In this section the map $\phi$ denotes the linear fractional self-map $\phi_{\alpha}\circ (\lambda \phi_{\alpha}),$ where $\lambda \in \overline{\mathbb{D}}$ and $\phi_\alpha$ is the involutive automorphism (see \eqref{0}). As we saw in \eqref{pre2}, the map $\phi$ is an automorphism of $\mathbb{D}$ whenever $\lambda\in \mathbb{T}$. Here, we will see that the location of the numbers $\alpha$ and $\lambda$ plays an important role in deciding whether $C_{\phi}$ is complex symmetric. The first result on the map $\phi$ appeared in \cite[Proposition 2.1.]{Garc1}. In this work S. R. Garcia and C. Hammond showed that $\phi_{\alpha}$ and constants always induce complex symmetric composition operators on $A^2_{\beta}.$ In this direction, we must also highlight the work \cite{wal}, where P. S. Bourdon and S. W. Noor presented an almost complete characterization of the automorphisms of $\mathbb{D}$ which induce complex symmetric composition operators on $H^2(\mathbb{D}).$ The composition operators induced by automorphisms were studied on $A^2$ by T. Eklund, M. Lindström, and P.
Mleczko, where they showed that the techniques used on $H^2(\mathbb{D})$ can be adapted to $A^2.$ By comparing the results on $A_{-1}^2:=H^2(\mathbb{D})$ and $A^2:=A^2_0(\mathbb{D}),$ we note that the index $\beta$ strongly influences the complex symmetry of the invertible bounded composition operators. Here we will unify these results by proving a general version of \cite[Proposition 3.3]{wal} and \cite[Theorem 10]{Ted}. We start by studying $\phi=\phi_{\alpha}\circ(\lambda\phi_{\alpha})$ with $\lambda\in \mathbb{D}.$ If $\lambda=0$ then the symbol $\phi$ is constant, while $\alpha=0$ gives $\phi(z)=\lambda z.$ So, we assume that $\alpha$ and $\lambda$ are non-zero. \begin{lem}\label{lem2} Let $\beta\in \mathbb{N}.$ If $\alpha\in \mathbb{D}\backslash\{0\}$ and $v_n:=C_{\phi_{\alpha}}^*z^n$ for each non-negative integer $n,$ then $v_n$ is orthogonal to $v_0$ whenever $n>2+\beta.$ \end{lem} \begin{proof} A simple computation shows that the action of $C_{\phi_{\alpha}}$ on $K_{\alpha}$ is given by \begin{equation}\label{q51} C_{\phi_{\alpha}}K_{\alpha}(z)= \frac{(1-\overline{\alpha}z)^{2+\beta}}{(1-\left|\alpha\right|^2)^{2+\beta}}, \end{equation} and therefore the computation \eqref{sym2} provides \begin{align}\label{sym3} \left\langle v_n,v_0\right\rangle=& \left\langle C_{\phi_{\alpha}}^{*}z^n,K_{\alpha}\right\rangle\nonumber = \left\langle z^n,C_{\phi_{\alpha}}K_{\alpha}\right\rangle\nonumber =\displaystyle \frac{1}{(1-\left|\alpha\right|^2)^{2+\beta}}\left\langle z^n,(1-\overline{\alpha}z)^{2+\beta}\right\rangle\nonumber\\ =&\frac{1}{(1-\left|\alpha\right|^2)^{2+\beta}}\sum_{k=0}^{2+\beta} {{2+\beta}\choose k }\left\langle z^n,(-\overline{\alpha}z)^k\right\rangle .
\end{align} From \eqref{sym3} we see that $v_n$ is orthogonal to $v_0$ whenever $n>2+\beta,$ since then $\left\langle z^n,z^k\right\rangle=0$ for every $0\leq k\leq 2+\beta.$ \end{proof} Due to the generalized Newton binomial formula, we see that the conclusion of Lemma \ref{lem2} fails if $2+\beta\notin \mathbb{N},$ since \begin{align}\label{rem1} (1-\overline{\alpha}z)^{2+\beta}=\sum_{k=0}^{\infty}{{2+\beta}\choose k }(-\overline{\alpha}z)^k \ \text{and} \ \left\langle v_n,v_0\right\rangle =\frac{(-\alpha)^{n}}{(1-\left|\alpha\right|^{2})^{2+\beta}}{2+\beta\choose n}\left\| z^n\right\| ^2\neq 0. \end{align} Theorem \ref{thm6} shows that in general the converse of Proposition \ref{pps3} is not true. More precisely, the cyclicity of $C_{\phi}$ and $C_{\phi}^*$ is not enough to guarantee the complex symmetry of $C_{\phi}.$ In fact, if we consider the symbol $\phi=\phi_{\alpha}\circ(\lambda \phi_{\alpha})$ with $\alpha, \lambda\in \mathbb{D}\backslash \{0\},$ then $\phi$ is neither constant nor an elliptic automorphism of $\mathbb{D},$ and $\phi(\alpha)=\alpha.$ These conditions together with \cite[Theorem 3]{Worner} and \cite[Theorem 3.2.]{sha} guarantee that $C_{\phi}^*$ and $C_{\phi}$ are cyclic on $H^2(\mathbb{D}).$ In \cite{anna}, A. Gori studied cyclic phenomena for composition operators on weighted Bergman spaces; in particular, she showed that cyclic composition operators on $H^2(\mathbb{D})$ are cyclic on $A^2_{\beta}$ too (see \cite[Theorem 1.2]{anna}). Therefore, $C_{\phi}$ and $C_{\phi}^*$ are cyclic on $A^2_{\beta}.$ \begin{thm}\label{thm6} Let $\beta\in\mathbb{N}.$ If $\phi=\phi_{\alpha}\circ(\lambda \phi_{\alpha})$ with $\alpha, \lambda\in \mathbb{D}\backslash \{0\},$ then $C_{\phi}$ is not complex symmetric on $A^2_{\beta}.$ \end{thm} \begin{proof} Since $C_{\phi}$ is cyclic, the eigenvalues of $C_{\phi}^*$ are simple.
Putting $v_n:=C_{\phi_{\alpha}}^*z^n$ for each non-negative integer $n,$ we have $C_{\phi}^*v_n=\overline{\lambda}^nv_n,$ hence $Ker(C_{\phi}^*-\overline{\lambda}^nI)$ is generated by $v_n.$ Suppose that $C_{\phi}$ is complex symmetric, and let $C$ be a conjugation on $A^2_{\beta}$ such that $CC_{\phi}C=C_{\phi}^*.$ As we saw above, $C_{\phi}\phi_{\alpha}^n=\lambda^n\phi_{\alpha}^n$ for each non-negative integer $n,$ so the relation \eqref{37} implies that $C_{\phi}^*C\phi_{\alpha}^n=\overline{\lambda}^nC\phi_{\alpha}^n.$ From what we saw earlier, this last equality implies that $C\phi_{\alpha}^n=r_nv_n$ for some complex number $r_n.$ Let $n>2+\beta.$ Then Lemma \ref{lem2} implies that $v_n$ is orthogonal to $v_0,$ hence \begin{align*} \alpha^n=[\phi_{\alpha}(0)]^n=\left\langle \phi_{\alpha}^n,K_0\right\rangle =\left\langle CK_0,C\phi_{\alpha}^n\right\rangle =r_0\overline{r_n}\left\langle v_0,v_n\right\rangle =0. \end{align*} This last equality forces $\alpha=0,$ contradicting the hypothesis $\alpha\neq 0.$ \end{proof} \subsection{Elliptic automorphisms} Let $\phi$ be an elliptic automorphism of $\mathbb{D}.$ Then it has the form $\phi=\phi_{\alpha}\circ(\lambda\phi_{\alpha})$ for some $\alpha\in \mathbb{D}$ and $\lambda\in \mathbb{T}$ (see \eqref{frac2}). Since $\lambda$ is a unimodular number, we define the order of $\phi$ through the number $\lambda.$ More precisely, we say that $\phi$ has \textit{finite order} $N$ if $N$ is the smallest positive integer for which $\lambda^N=1.$ If no such integer exists, then $\phi$ is said to have \textit{infinite order}. Our next goal is to prove that elliptic automorphisms of infinite order which are not rotations do not induce complex symmetric composition operators on $A^2_{\beta}.$ \begin{thm}\label{thm13} Suppose that $\phi$ is an elliptic automorphism of infinite order and not a rotation.
Then $C_{\phi}$ is not complex symmetric on $A^2_{\beta}.$ \end{thm} \begin{proof} We get a contradiction by assuming that $C_{\phi}$ is complex symmetric. Let $C$ be a conjugation on $A^2_{\beta}$ such that $CC_{\phi}C=C_{\phi}^*.$ Since $(\phi_{\alpha}^n)_{n\in \mathbb{N}}$ is a sequence of eigenvectors for $C_{\phi}$ corresponding to distinct eigenvalues, we have $\left\langle C\phi_{\alpha}^n, \phi_{\alpha}^m\right\rangle =0$ if $n\neq m.$ Moreover, a simple computation shows that $\left\langle C\phi_{\alpha}^n,\phi_{\alpha}^n\right\rangle \neq 0$ for each $n.$ Putting $b_n:=\left\langle \phi_{\alpha}^n, C\phi_{\alpha}^n\right\rangle $ we obtain \begin{align}\label{thm5} \left\langle C\phi_{\alpha}^n-\left\| z^n\right\| ^{-2}\overline{b_n}\;v_n, \phi_{\alpha}^m\right\rangle =&\left\langle C\phi_{\alpha}^n, \phi_{\alpha}^m\right\rangle - \frac{1}{\left\| z^n\right\| ^{2}}\overline{b_n}\;\left\langle C_{\phi_{\alpha}}^*z^n,\phi_{\alpha}^m\right\rangle \nonumber\\ =& \left\langle C\phi_{\alpha}^n, \phi_{\alpha}^m\right\rangle - \frac{1}{\left\| z^n\right\| ^{2}}\overline{b_n}\;\left\langle z^n,C_{\phi_{\alpha}}C_{\phi_{\alpha}}z^m\right\rangle \nonumber\\ =& \left\langle C\phi_{\alpha}^n, \phi_{\alpha}^m\right\rangle - \frac{1}{\left\| z^n\right\| ^{2}}\left\langle C\phi_{\alpha}^n, \phi_{\alpha}^m\right\rangle\left\| z^n\right\| ^{2}=0.
\end{align} By combining \eqref{thm5} and the density of the span of $(\phi_{\alpha}^n)_{n\in \mathbb{N}}$ in $A^2_{\beta},$ we reach the relation \begin{align*} C\phi_{\alpha}^n=\frac{1}{\left\| z^n\right\| ^{2}}\overline{b_n}v_n \end{align*} for each non-negative integer $n.$ Now consider the map $\psi=\phi_{\alpha}\circ (\delta\phi_{\alpha})$ with $\delta\in \mathbb{D}\backslash \{0\},$ and observe that \begin{align}\label{thm7} CC_{\psi}^*C\phi_{\alpha}^n=\frac{b_n\delta^n}{\left\| z^n\right\| ^{2}}Cv_n =\frac{b_n\delta^n}{\left\| z^n\right\| ^{2}}\frac{\left\| z^n\right\| ^2}{b_n}\phi_{\alpha}^n=\delta^n\phi_{\alpha}^n=C_{\psi}\phi_{\alpha}^n, \end{align} for each non-negative integer $n.$ In particular, \eqref{thm7} forces $CC_{\psi}^*C=C_{\psi},$ that is, $C_{\psi}$ is complex symmetric, which contradicts Theorem \ref{thm6}. \end{proof} To treat the finite order elliptic automorphism case, we will use a lemma which describes the action of the iterates of the operator $M_z^*$ on the monomials $z^n.$ \begin{lem}\label{216} Let $M_{z}:A^2_{\beta}\longrightarrow A_{\beta}^{2}$ be the multiplication by $z$. Then for all non-negative integers $n$ and $m$ we have $$ \left(M_{z}^{*}\right)^{m}z^{n}=\left\{\begin{array}{rl} 0 &\hspace{0.2cm} \text{for} \hspace{0.2cm} m>n\hspace{0.2cm} \\ \\ c_{m,n}z^{n-m}&\hspace{0.2cm} \text{for}\hspace{0.2cm} 0<m\leq n\\ \\ z^{n}&\hspace{0.2cm}\text{for} \hspace{0.2cm} m=0 \end{array}\right.$$ where $c_{m,n}=\displaystyle\frac{\Gamma(2+n-m+\beta)n!}{(n-m)!\Gamma(2+n+\beta)}.$ \end{lem} \begin{proof} We first fix $m.$ Next, we note that \begin{align}\label{v50} \left\langle (M_z^{*})^mz^n,f\right\rangle= \left\langle M_{z^m}^{*}z^n,f\right\rangle = \left\langle z^n,M_{z^m}f\right\rangle =\left\langle z^n, z^mf\right\rangle \end{align} for each non-negative integer $n$ and $f\in A^2_{\beta}.$ From \eqref{v50}, we see that the result is immediate if $m=0.$ So, we assume $m>0.$ Additionally,
if $m>n$ the equality \eqref{v50} implies that $z^mf$ is orthogonal to $z^n,$ and therefore $(M_z^*)^mz^n=0.$ If $m\leq n,$ a simple computation using the $A_{\beta}^2$ inner product provides \begin{align} \left\langle (M_z^{*})^mz^n,f\right\rangle=& \displaystyle \frac{n!\Gamma(2+\beta)}{\Gamma(2+n+\beta)}\widehat{f}(n-m)\nonumber\\ =&\displaystyle \frac{(n-m)!\Gamma(2+\beta)}{\Gamma(2+n-m+\beta)}\frac{\Gamma(2+n-m+\beta)}{(n-m)!\Gamma(2+\beta)}\frac{n!\Gamma(2+\beta)}{\Gamma(2+n+\beta)}\widehat{f}(n-m)\nonumber\\ =& \displaystyle \left\langle \frac{\Gamma(2+n-m+\beta)}{(n-m)!\Gamma(2+\beta)}\frac{n!\Gamma(2+\beta)}{\Gamma(2+n+\beta)}z^{n-m},f\right\rangle\nonumber\\ =&\displaystyle \left\langle \frac{\Gamma(2+n-m+\beta)n!}{(n-m)!\Gamma(2+n+\beta)}z^{n-m},f\right\rangle\nonumber. \end{align} Therefore, $\displaystyle (M_z^{*})^mz^n=\frac{\Gamma(2+n-m+\beta)n!}{(n-m)!\Gamma(2+n+\beta)}z^{n-m}.$ \end{proof} The next lemma generalizes \cite[Lemma 2.2.]{wal} and \cite[Lemma 5.]{Ted}. \begin{lem} Let $\beta\in \mathbb{N}.$ If $\alpha \in \mathbb{D}\backslash\{0\} $ and $v_n:=C_{\phi_{\alpha}}^*z^n$ for each non-negative integer $n,$ then $v_n$ is orthogonal to $v_m$ whenever $\left| n-m\right| \geq \beta+3.$ \end{lem} \begin{proof} Recall that in \eqref{658} we computed an expression for the adjoint of $C_{\phi_{\alpha}},$ \begin{align}\label{q61} C_{\phi_{\alpha}}^*= \sum_{k=0}^{2+\beta}{2+\beta\choose k} (-\alpha)^k M_{K_{\alpha}}C_{\phi_{\alpha}}(M_z^*)^k. \end{align} We use \eqref{q61} to compute the action of $C_{\phi_{\alpha}}^*$ on $z^n.$ In view of Lemma \ref{216}, we have two cases to consider, namely $n>2+\beta$ and $ 2+\beta\geq n.$ To simplify the study of these cases, we use the notation $r_k={2+\beta\choose k} (-\alpha)^k$ for each $k\in \left\lbrace 0,1,\ldots,2+\beta\right\rbrace.
$ If $n>\beta+2$, we obtain \begin{align}\label{thm8} C_{\phi_{\alpha}}^*z^n= \displaystyle \sum_{k=0}^{2+\beta}r_kM_{K_{\alpha}}C_{\phi_{\alpha}}\left(c_{k,n}z^{n-k}\right)=\displaystyle \sum_{k=0}^{2+\beta}r_kc_{k,n} K_{\alpha}(z)\left[\phi_{\alpha}(z)\right]^{n-k}. \end{align} For the second case, that is $2+\beta\geq n,$ we have \begin{align}\label{thm9} C_{\phi_{\alpha}}^*z^n= \displaystyle \sum_{k=0}^{n}r_kM_{K_{\alpha}}C_{\phi_{\alpha}}\left(c_{k,n}z^{n-k}\right)=\displaystyle \sum_{k=0}^{2+\beta}r_kc_{k,n} K_{\alpha}(z)\left[\phi_{\alpha}(z)\right]^{n-k}, \end{align} where $c_{k,n}=0$ if $k=n+1, \ldots,2+\beta.$ So, a simple computation from \eqref{thm8} and \eqref{thm9} provides \begin{align}\label{thm10} C_{\phi_{\alpha}}C_{\phi_{\alpha}}^*z^n=\displaystyle \sum_{k=0}^{2+\beta}r_kc_{k,n} C_{\phi_{\alpha}}\left(K_{\alpha}\phi_{\alpha}^{n-k}\right)(z)=\displaystyle \sum_{k=0}^{2+\beta}r_kc_{k,n}(C_{\phi_{\alpha}}K_{\alpha})(z)z^{n-k}. \end{align} To obtain the power series of $C_{\phi_{\alpha}}C_{\phi_{\alpha}}^*z^n$ we apply \eqref{q51} in \eqref{thm10} and we again use the Newton binomial formula, getting \begin{align} C_{\phi_{\alpha}}C_{\phi_{\alpha}}^*z^n=&\displaystyle \sum_{k=0}^{2+\beta} \frac{r_kc_{k,n}}{(1-\left|\alpha\right|^2)^{2+\beta}}(1-\overline{\alpha}z)^{2+\beta}z^{n-k}\nonumber\\ =& \displaystyle \sum_{k=0}^{2+\beta} \frac{r_kc_{k,n}}{(1-\left|\alpha\right|^2)^{2+\beta}}\left[\sum_{j=0}^{2+\beta}{2+\beta\choose j}(-\overline{\alpha}z)^j\right]z^{n-k}\nonumber\\ =&\displaystyle \sum_{k=0}^{2+\beta}\sum_{j=0}^{2+\beta}\frac{\overline{r_j}r_kc_{k,n}}{(1-\left|\alpha\right|^2)^{2+\beta}}z^{n-k+j}.\nonumber \end{align} By using this last equality we compute the inner product between $v_n$ and $v_m,$ as follows \begin{equation}\label{thm11} \left\langle v_n,v_m\right\rangle=\left\langle z^n,C_{\phi_{\alpha}}C_{\phi_{\alpha}}^*z^m\right\rangle =
\sum_{k=0}^{2+\beta}\sum_{j=0}^{2+\beta}\frac{r_j\overline{r_k}\;\overline{c_{k,m}}}{(1-\left|\alpha\right|^2)^{2+\beta}} \left\langle z^n,z^{m-k+j}\right\rangle. \end{equation} From \eqref{thm11} we see that the orthogonality between $v_n$ and $v_m$ is closely linked to the indices $j,k,m$ and $n;$ more precisely, \begin{center}$v_n\perp v_m$ whenever $\left| m-k+j- n\right| \neq 0$ for all $0\leq j,k\leq 2+\beta.$ \end{center} Since $0\leq j,k\leq 2+\beta,$ the inequality $-\left|j-k\right|\geq -(2+\beta)$ holds. Moreover, by assuming $\left|n-m\right|\geq 3+\beta$ we get \begin{equation*}\label{218} \left|m-k+j-n\right|\geq \left|n-m\right|-\left|j-k\right|\geq 3+\beta-(2+\beta)=1. \end{equation*} Therefore, $v_n\perp v_m.$ \end{proof} Although the statement above is an analogue of \cite[Lemma 2.2]{wal}, our proof is very different. \begin{lem}\label{234} Let $\beta \in \mathbb{N} $ and $\alpha\in \mathbb{D}\backslash\{0\}.$ Suppose also that $\phi=\phi_{\alpha}\circ (\lambda \phi_{\alpha})$ is an elliptic automorphism of finite order $N$ and define $V_n=Ker(C_{\phi}^{*}-\overline{\lambda}^nI)$ for $n=0,1,\ldots,N-1.$ If $N\geq 2(3+\beta)$ then $V_0\perp V_{3+\beta}.$ \end{lem} \begin{proof} For each non-negative integer $n$ we put $v_n:=C_{\phi_{\alpha}}^*z^n.$ We first show that $V_n=\overline{span}(v_{kN+n})_{k\in \mathbb{N}}.$ Since $C_{\phi}^*v_{kN+n}=\overline{\lambda}^{n}v_{kN+n}$ for each non-negative integer $k,$ the inclusion $\overline{span}(v_{kN+n})_{k\in \mathbb{N}}\subset V_{n}$ is immediate.
On the other hand, $C_{\phi_{\alpha}}^*f\in Ker(C_{\overline{\lambda}z}-\overline{\lambda}^nI)$ for each $f\in V_n,$ since $C_{\phi_{\alpha}}^*(C_{\overline{\lambda} z}-\overline{\lambda}^nI)C_{\phi_{\alpha}}^*=C_{\phi}^*-\overline{\lambda}^nI.$ Since \begin{align*} Ker(C_{\overline{\lambda}z}-\overline{\lambda}^nI)=\overline{span}(z^{kN+n})_{k\in \mathbb{N}} \end{align*} for $n=0,1,\ldots, N-1,$ we have $C_{\phi_{\alpha}}^*f\in \overline{span}(z^{kN+n})_{k\in \mathbb{N}},$ or equivalently $f\in \overline{span}(v_{kN+n})_{k\in \mathbb{N}}.$ This forces $V_n=\overline{span}(v_{kN+n})_{k\in \mathbb{N}}.$ Now, given $v_{kN}\in V_{0}$ and $v_{jN+(3+\beta)}\in V_{3+\beta}$ with non-negative integers $k\neq j,$ we have \begin{center} $\left|kN-\left[jN+(3+\beta)\right]\right|\geq \left|N(k-j)\right|-(3+\beta)\geq 2(3+\beta)-(3+\beta)=3+\beta,$ \end{center} while for $k=j$ the left-hand side equals $3+\beta$ directly. By the orthogonality lemma above we conclude that $v_{kN}\perp v_{jN+(3+\beta)},$ or more precisely $V_0\perp V_{3+\beta}.$ \end{proof} The next result is analogous to \cite[Proposition 3.3]{wal} and generalizes it to all $A_{\beta}^2$ with $\beta\in \mathbb{N}.$ \begin{thm}\label{thm12}Let $\beta\in \mathbb{N} $ and $\alpha\in \mathbb{D}\backslash\{0\}.$ Suppose also that $\phi$ is an elliptic automorphism of finite order $N\geq 2(3+\beta)$ and not a rotation. Then $C_{\phi}$ is not complex symmetric on $A_{\beta}^2.$ \end{thm} \begin{proof} First we observe that Lemma \ref{234} guarantees that $V_0\perp V_{3+\beta},$ because $N\geq 2(3+\beta).$ We get a contradiction by assuming that $C_{\phi}$ is complex symmetric. Let $C$ be a conjugation on $A^2_{\beta}$ such that $C_{\phi}C=CC_{\phi}^*.$ Then $C$ maps $V_0$ and $V_{3+\beta}$ onto $Ker(C_{\phi}-I)$ and $Ker(C_{\phi}-\lambda^{3+\beta}I)$ respectively.
Since $C$ preserves orthogonality, we have \begin{equation}\label{235} Ker(C_{\phi}-I)\perp Ker(C_{\phi}-\lambda^{3+\beta}I), \end{equation} and in particular \eqref{235} implies that $K_0$ is orthogonal to $\phi_{\alpha}^{3+\beta},$ hence \begin{align*}{\alpha}^{3+\beta}=\left[\phi_{\alpha}(0)\right]^{3+\beta}=\left\langle \phi_{\alpha}^{3+\beta},K_0\right\rangle=0. \end{align*} This last equality forces $\alpha=0,$ contradicting the hypothesis $\alpha\neq 0.$ \end{proof} Due to Theorem \ref{thm12}, the elliptic cases of order $2,3,\ldots, 5+2\beta$ remain open; more precisely: \\ \\ \textbf{Problem:} Let $\beta\in \mathbb{N}$ and let $\phi$ be an elliptic automorphism of $\mathbb{D}.$ Is $C_{\phi}$ complex symmetric on $A^2_{\beta}$ when $\phi$ has order $N=2,3,\ldots , 5+2\beta$? \\ \\ \textbf{Acknowledgments} \\ The author is grateful to Professor Sahibzada Waleed Noor for the discussions and suggestions. This work is part of the doctoral thesis of the author. \end{document}
\begin{document} \title{It\^{o}'s formula, the stochastic exponential and change of measure on general time scales.} \author{Wenqing Hu. \thanks{Department of Mathematics and Statistics, Missouri University of Science and Technology (formerly University of Missouri, Rolla). Email: \texttt{[email protected]}}} \date{ } \maketitle \begin{abstract} We provide an It\^{o}'s formula for stochastic dynamical equations on general time scales. Based on this It\^{o}'s formula we give a closed form expression for the stochastic exponential on general time scales. We then demonstrate a Girsanov's change of measure formula in the case of general time scales. Our result is applied to a Brownian motion on the quantum time scale ($q$--time scale). \end{abstract} \textit{Keywords}: It\^{o}'s formula, stochastic exponential, change of measure, Girsanov's theorem, quantum time scale. \textit{2010 Mathematics Subject Classification Numbers}: 60J60, 60J65, 34N05, 60H10. \section{Introduction} The theory of dynamical equations on time scales (\cite{[Bohner Peterson book]}) has attracted much research recently. In particular, attempts at extending it to stochastic dynamical equations and stochastic analysis on general time scales have been made in several previous works (\cite{[SDtE Bohner]}, \cite{[SD Exponential Bohner-Sanyal]}, \cite{[SdtE Grow-Sanyal]}, \cite{[time scale BM]}, \cite{[first attempt time scale stochastic]}).
In the work \cite{[SD Exponential Bohner-Sanyal]} the authors mainly work with a discrete time scale; in \cite{[SDtE Bohner]} the authors introduce an extension of a function and define the stochastic as well as deterministic integrals as the usual integrals for the extended function; in \cite{[SdtE Grow-Sanyal]} the authors make use of their results on the quadratic variation of a Brownian motion (\cite{[quadratic variation BM time scale]}) on time scales and, based on this, they define the stochastic integral via a generalized version of the It\^{o} isometry; in \cite{[first attempt time scale stochastic]} the authors introduce the so-called $\grad$--stochastic integral via the backward jump operator, and they also derive an It\^{o}'s formula based on this definition of the stochastic integral. We notice that different previous works adopt different notions of the stochastic integral, and a uniform and coherent theory of stochastic calculus on general time scales is still lacking. The purpose of the present article is to fill in this gap. We will be mainly working under the framework of \cite{[SDtE Bohner]}, in that we define our stochastic integral using the definition given in \cite{[SDtE Bohner]}. We then present a general It\^{o}'s formula for stochastic dynamical equations under the framework of \cite{[SDtE Bohner]}. Our It\^{o}'s formula works for general time scales, and thus fills the gap left in \cite{[SD Exponential Bohner-Sanyal]}, which deals only with discrete time scales. By making use of the It\^{o}'s formula we obtain a closed--form expression for the stochastic exponential on general time scales. We will then demonstrate a change of measure (Girsanov's) theorem for stochastic dynamical equations on time scales.
We would like to point out that our change of measure formula is different from the continuous process case in that the density function is not given by the stochastic exponential, but rather is obtained from the fact that the process on the time scale can be extended to a continuous process simply by linear extension. It is also worth mentioning that our construction is different from \cite{[Bhamidi et al]} in that we are working with the case that the time parameter of the process runs on a time scale, whereas in \cite{[Bhamidi et al]} and related works (e.g. \cite{[Short time asymptotic BM fractal]}, \cite{[LDP for BM fractal]}, \cite{[Cont-Fournie]}) the authors work with the case that the state space of the process is a time scale. We note that stochastic calculus on the so-called $q$--Brownian motion has been considered in \cite{[Haven2009]}, \cite{[Haven2011]}, \cite{[BrycqBM]}. As an application, we will also work out our It\^{o}'s formula for a Brownian motion on the quantum time scale ($q$--time scale) in the last section of the paper. The paper is organized as follows. In Section 2 we discuss some basic set--up for time scales calculus. In Section 3 we briefly review the results in \cite{[SDtE Bohner]} and define the stochastic integral and stochastic dynamical equation on time scales. In Section 4 we present and prove our It\^{o}'s formula. In Section 5 we discuss the formula for the stochastic exponential. In Section 6 we prove the change of measure (Girsanov's) formula. Finally, in Section 7 we consider an example of a Brownian motion on a quantum time scale. \section{Set--up: Basics of time scales calculus.} A \textit{time scale} $\T$ is an arbitrary nonempty closed subset of the real numbers $\R$, where we assume that $\T$ has the topology that it inherits from the real numbers $\R$ with the standard topology.
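For orientation, we mention some standard examples (illustrations only, following the usual conventions of the time scales literature such as \cite{[Bohner Peterson book]}, not results of the present paper): \begin{align*} \T=\R \ , \quad \T=\mathbb{Z} \ , \quad \T=\{q^k: k\in \mathbb{Z}\}\cup\{0\} \ \text{ with } q>1 \ , \end{align*} the last being the quantum time scale to which we return in Section 7. An open interval such as $(0,1)$ is not a time scale, since it is not a closed subset of $\R$.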
We define the \textit{forward jump operator} by $$\sm(t)=\inf\{s\in \T: s>t\} \text{ for all } t\in \T \text{ such that this set is non--empty} \ ,$$ and the \textit{backward jump operator} by $$\rho(t)=\sup\{s\in \T: s<t\} \text{ for all } t\in \T \text{ such that this set is non--empty} \ .$$ Let $t\in \T$. If $\sm(t)>t$, then $t$ is called \textit{right--scattered}. If $\sm(t)=t$, then $t$ is called \textit{right--dense}. If $\rho(t)<t$, then $t$ is called \textit{left--scattered}. If $\rho(t)=t$, then $t$ is called \textit{left--dense}. Moreover, the sets $\T^\kp$ and $\T_\kp$ are derived from $\T$ as follows: If $\T$ has a left--scattered maximum, then $\T^\kp$ is the set $\T$ without that left--scattered maximum; otherwise, $\T^\kp=\T$. If $\T$ has a right--scattered minimum, then $\T_\kp$ is the set $\T$ without that right--scattered minimum; otherwise, $\T_\kp=\T$. The \textit{graininess function} is defined by $\mu(t)=\sm(t)-t$ for all $t\in \T^\kp$. Notice that since $\T$ is closed, for any $t\in \T$ the points $\sm(t)$ and $\rho(t)$ belong to $\T$. For a set $A\subset \R$ we write $A_\T=A\cap \T$. \ Given a time scale $\T$ and a function $f: \T\ra \R$, the \textit{delta} (or \textit{Hilger}) \textit{derivative} $f^\Dt(t)$ of $f$ at $t\in \T$ is defined as follows (\cite[Definition 1.10]{[Bohner Peterson book]}). \ \textbf{Definition 1.} \textit{Assume $f: \T\ra \R$ is a function and let $t\in \T^\kp$. Then we define $f^\Dt(t)$ to be the number (provided that it exists) with the property that given any $\ve>0$, there is a neighborhood $U$ of $t$ (i.e., $U=(t-\dt, t+\dt)\cap \T$ for some $\dt>0$) such that} $$\left|[f(\sm(t))-f(s)]-f^\Dt(t)[\sm(t)-s]\right|\leq \ve |\sm(t)-s| \text{ \textit{for all} } s\in U \ .$$ \ The delta derivative is characterized by the following theorem, which is \cite[Theorem 1.16]{[Bohner Peterson book]}. \ \textbf{Theorem 1.} \textit{Assume that $f: \T\ra \R$ is a function and let $t\in \T^\kp$.
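To make these operators concrete, here is a minimal computational sketch. It assumes, purely for illustration, a time scale stored as a finite sorted list of points (which can represent purely discrete scales, but not right--dense portions); the helper names \texttt{sigma}, \texttt{rho} and \texttt{mu} are ours, and the endpoint conventions $\inf\emptyset=\sup\T$, $\sup\emptyset=\inf\T$ are used:

```python
import bisect

# A time scale is represented here as a finite sorted list of reals
# (a purely discrete example; a general closed subset of R cannot be
# stored exactly in this form).

def sigma(T, t):
    """Forward jump operator sigma(t) = inf{s in T : s > t};
    with the convention inf(empty set) = sup T, so sigma(max T) = max T."""
    i = bisect.bisect_right(T, t)
    return T[i] if i < len(T) else T[-1]

def rho(T, t):
    """Backward jump operator rho(t) = sup{s in T : s < t};
    with the convention sup(empty set) = inf T, so rho(min T) = min T."""
    i = bisect.bisect_left(T, t)
    return T[i - 1] if i > 0 else T[0]

def mu(T, t):
    """Graininess function mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

T = [0.0, 0.5, 1.0, 2.0, 4.0]
# Every interior point of this T is both left-scattered and right-scattered.
```

On this scale, for example, $\sm(1)=2$, $\rho(2)=1$ and $\mu(0.5)=0.5$.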
Then we have the following:} (i) \textit{If $f$ is differentiable at $t$, then $f$ is continuous at $t$.} (ii) \textit{If $f$ is continuous at $t$ and $t$ is right--scattered, then $f$ is differentiable at $t$ with} $$f^\Dt(t)=\dfrac{f(\sm(t))-f(t)}{\sm(t)-t} \ .$$ (iii) \textit{If $t$ is right--dense, then $f$ is differentiable at $t$ if and only if the limit } $$\lim\li_{s\ra t}\dfrac{f(t)-f(s)}{t-s}$$ \textit{exists as a finite number. In this case} $$f^\Dt(t)=\lim\li_{s\ra t}\dfrac{f(t)-f(s)}{t-s} \ .$$ (iv) \textit{If $f$ is differentiable at $t$, then} $$f(\sm(t))=f(t)+\mu(t)f^\Dt(t) \ .$$ \ \section{Stochastic integrals and stochastic differential equations on time scales.} We adopt the definitions introduced in \cite{[SDtE Bohner]} as our definitions of a Brownian motion and of It\^{o}'s stochastic integral on time scales. In the next section we will derive an It\^{o}'s formula corresponding to the stochastic integral defined in this way. \ \textbf{Definition 2.} \textit{A Brownian motion indexed by a time scale $\T\subset \R$ is an adapted stochastic process $\{W_t\}_{t\in \T\cup \{0\}}$ on a filtered probability space $(\Om, \cF_t, \Prob)$ such that} (1) \textit{$\Prob(W_0=0)=1$}; (2) \textit{If $s<t$ and $s,t\in \T$ then the increment $W_t-W_s$ is independent of $\cF_s$ and is normally distributed with mean $0$ and variance $t-s$}; (3) \textit{The process $W_t$ is almost surely continuous on $\T$.} \ Note that property (3) is proved in \cite{[time scale BM]}. For a random function $f: [0,\infty)_{\T}\times \Om\ra \R$ we define the \textit{extension} $\widetilde{f}: [0,\infty)\times \Om\ra \R$ by \begin{equation} \label{Extension}\widetilde{f}(t,\om)=f(\sup[0,t]_{\T},\om) \end{equation} for all $t\in [0,\infty)$. We shall make use of the definitions given in \cite{[SDtE Bohner]} for the classical Lebesgue and Riemann integral.
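As an illustration of the extension \eqref{Extension}, the following sketch freezes the value of $f$ at the last time--scale point not exceeding $t$ (again assuming, for illustration only, a time scale stored as a finite sorted list of points with minimum $0$; the name \texttt{extend} is ours):

```python
import bisect

def extend(T, f):
    """Extension f_tilde(t) = f(sup [0, t] ∩ T) of f : T -> R to [0, infinity):
    f_tilde freezes the value of f attained at the last time-scale point
    not exceeding t.  T is a finite sorted list of points with min(T) == 0."""
    def f_tilde(t):
        i = bisect.bisect_right(T, t)  # number of points of T that are <= t
        return f(T[i - 1])             # value at sup [0, t] ∩ T
    return f_tilde

f_tilde = extend([0.0, 1.0, 3.0], lambda t: t * t)
# f_tilde is constant on the gap (1, 3): f_tilde(2.0) == f(1.0) == 1.0
```

In particular, on each gap interval of the time scale the extended function is constant, which is exactly what makes the $\Dt$--integrals of the next paragraphs reduce to left--endpoint expressions.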
For any random function $f: [0,\infty)_{\T}\times \Om \ra \R$ and $T<\infty$ we define its $\Dt$--Riemann (Lebesgue) integral as $$\int_0^T f(t,\om)\Dt t=\int_0^T \widetilde{f}(t,\om)dt \ ,$$ where the integral on the right hand side of the above equation is interpreted as a standard Riemann (Lebesgue) integral. In a similar way, the work \cite{[SDtE Bohner]} defines a stochastic integral for an $L^2([0,T]_\T)$--progressively measurable random function $f(t,\om)$ as $$\int_0^T f(t,\om)\Dt W_t=\int_0^T \widetilde{f}(t,\om)dW_t \ ,$$ where again the right hand side of the above equation is interpreted as a standard It\^{o}'s stochastic integral. Note that the definition \eqref{Extension} of the extension guarantees that the function $\widetilde{f}(t,\om)$ is progressively measurable. In \cite{[SDtE Bohner]} the authors then defined the solution of the $\Dt$--stochastic differential equation indicated by the notation \begin{equation}\label{SDeltaE} \Dt X_t=b(t,X)\Dt t+\sm(t,X)\Dt W_t \ , \end{equation} as the process $\{X_t\}_{t\in [0,T]_\T}$ such that $$X_{t_2}-X_{t_1}=\int_{t_1}^{t_2}b(t,X_t)\Dt t+\int_{t_1}^{t_2}\sm(t, X_t)\Dt W_t \ ,$$ with the deterministic and stochastic integrals on the right--hand side of the above equality interpreted as was just mentioned. Under the condition of continuity in the $t$--variable and uniform Lipschitz continuity in the $x$--variable of the functions $b(t,x)$ and $\sm(t,x)$, together with at most linear growth in the $x$--variable, existence and pathwise uniqueness of a strong solution to \eqref{SDeltaE} are proved in \cite{[SDtE Bohner]}. \ \section{It\^{o}'s formula for stochastic integrals on time scales.} We will make use of the following simple fact. \ \textbf{Proposition 1.} \textit{The set of all left--scattered or right--scattered points of $\T$ is at most countable.} \ \textbf{Proof.} If $x\in \T$ is a right--scattered point, then $I_x=(x,\sm(x))$ is an open interval such that $I_x\cap \T=\emptyset$.
Similarly, if $x\in \T$ is a left--scattered point, then $I_x=(\rho(x), x)$ is an open interval such that $I_x\cap \T=\emptyset$. Suppose $x<y$ and $x,y\in \T$. We then distinguish four different cases. Case 1: both $x$ and $y$ are right--scattered. We argue that in this case we have $I_x\cap I_y=\emptyset$. If this were not the case, we would have $\sm(x)>y$. But $\sm(x)=\inf\{s>x: s\in \T\}$ and $y\in \T$, so we must have $\sm(x)\leq y$, a contradiction; Case 2: both $x$ and $y$ are left--scattered. This case is similar to Case 1 and we conclude that $I_x\cap I_y=\emptyset$; Case 3: $x$ is left--scattered, $y$ is right--scattered. In this case we see that $I_x=(\rho(x), x)$ and $I_y=(y, \sm(y))$, as well as $x<y$. This implies that $I_x\cap I_y=\emptyset$; Case 4: $x$ is right--scattered, $y$ is left--scattered. In this case $I_x=(x, \sm(x))$ and $I_y=(\rho(y), y)$. If $\sm(x)\leq\rho(y)$, then $I_x\cap I_y=\emptyset$. If $\sm(x)>\rho(y)$, then we see that $(x,y)=I_x\cup I_y$ so that $(x,y)\cap \T=\emptyset$. That implies further $\sm(x)=y$ and $\rho(y)=x$, i.e., $I_x=I_y$. Thus we see that, over all left-- or right--scattered points $x\in \T$, the open intervals of the form $I_x$ are pairwise disjoint subsets of $\R$ (up to coincidence, as in Case 4). Hence there are at most countably many such intervals. Each such interval corresponds to one or two endpoints in $\T$ that are either left-- or right--scattered. Thus the total number of left-- or right--scattered points in $\T$ is at most countable. $\square$ \ Let $C$ be the (at most) countable set of all left--scattered or right--scattered points of $\T$.
As we have already seen in the proof of the previous Proposition, the set $C$ corresponds to at most countably many open intervals $\cI=\{I_1, I_2,...\}$ such that (1) for any $k\neq l$, $I_k\cap I_l=\emptyset$; (2) either the left endpoint or right endpoint or both endpoints of any of the $I_k$'s are in $\T$, and are left-- or right--scattered; (3) $I_k\cap \T=\emptyset$ for any $k=1,2,...$; (4) any point in $C$ is a left or right--endpoint of one of the $I_k$'s. We will denote $I_k=(s_{I_k}^-, s_{I_k}^+)$. Since for any $x\in \T$, the points $\sm(x)$ and $\rho(x)$ are in $\T$, we further infer that for any such interval $I_k$, both $s_{I_k}^-$ and $s_{I_k}^+$ are in $\T$, so that $s_{I_k}^-$ is right--scattered and $s_{I_k}^+$ is left--scattered. We can now establish an It\^{o}'s formula. First observe that for any two points $t_1, t_2\in \T$, $t_1\leq t_2$, and any open interval $I_k\in \cI$, such that $I_k\cap [t_1, t_2]\neq \emptyset$, we have $I_k\subset (t_1,t_2)$. Indeed, otherwise $t_1$ or $t_2$ would belong to $I_k$, contradicting the fact that $I_k\cap \T=\emptyset$. We conclude that $$\{I_k \in \cI: I_k\cap [t_1,t_2]\neq \emptyset\}=\{I_k\in \cI: I_k\subset (t_1, t_2)\} \ .$$ Let us consider a function $f(t,x): \T\times \R\ra \R$. Let $f^\Dt(t,x)$, $f^{\Dt^2}(t,x)$ be the first and second order delta (Hilger) derivatives of $f$ with respect to time variable $t$ at $(t,x)$ and let $\dfrac{\pt f}{\pt x}(t,x)$ and $\dfrac{\pt^2 f}{\pt x^2}(t,x)$ be the first and second order partial derivatives of $f$ with respect to space variable $x$ at $(t,x)$. \ \textbf{Theorem 2.} (It\^{o}'s formula) \textit{Let $f: \T\times \R\ra \R$ be a function such that $f^\Dt(t,x)$, $f^{\Dt^2}(t,x)$, $\dfrac{\pt f}{\pt x}(t,x)$, $\dfrac{\pt^2 f}{\pt x^2}(t,x)$, $\dfrac{\pt f^\Dt}{\pt x}(t,x)$ and $\dfrac{\pt^2 f^\Dt}{\pt x^2}(t,x)$ are continuous on $\T\times \R$.
For any $t_1\leq t_2$ with $t_1, t_2\in [0, \infty)_\T$ we have} \begin{equation}\label{ItoFormula}\begin{array}{ll} &f(t_2,W_{t_2})-f(t_1, W_{t_1}) \\ =&\play{\int_{t_1}^{t_2}f^\Dt(s, W_s)\Dt s+ \int_{t_1}^{t_2}\dfrac{\pt f}{\pt x}(s,W_s)\Dt W_s +\dfrac{1}{2}\int_{t_1}^{t_2}\dfrac{\pt^2 f}{\pt x^2}(s, W_s)\Dt s} \\ &\qquad +\play{\sum\li_{I_k\in \cI, I_k\subset (t_1,t_2)} \left[f(s_{I_k}^+,W_{s_{I_k}^+})-f(s_{I_k}^+, W_{s_{I_k}^-}) \right.} \\ &\play{\qquad \qquad \qquad \qquad \qquad \left.-\dfrac{\pt f}{\pt x}(s_{I_k}^-,W_{s_{I_k}^-})(W_{s_{I_k}^+}-W_{s_{I_k}^-}) -\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{I_k}^-,W_{s_{I_k}^-})(s_{I_k}^+-s_{I_k}^-)\right] \ .} \end{array}\end{equation} \ \textbf{Proof.} We will make use of the following classical version (Peano form) of Taylor's theorem: for any function $f: \T\times \R\ra \R$ such that $\dfrac{\pt f}{\pt x}(t,x)$ and $\dfrac{\pt^2 f}{\pt x^2}(t,x)$ are continuous on $\T\times \R$, and any $s\in \T$ and $x_1, x_2\in \R$ we have \begin{equation}\label{ClassicalTaylorExpansion} f(s, x_2)-f(s, x_1)=\dfrac{\pt f}{\pt x}(s,x_1)(x_2-x_1) +\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s,x_1)(x_2-x_1)^2+R^f_\text{C}(s; x_1,x_2) \ , \end{equation} where $$|R^f_\text{C}(s; x_1, x_2)|\leq r(|x_2-x_1|)(x_2-x_1)^2 \ ,$$ and $r: \R_+\ra \R_+$ is an increasing function with $\lim\li_{u\da 0}r(u)=0$. We will also make use of the time scale Taylor formula (see \cite[Theorem 1.113]{[Bohner Peterson book]} as well as \cite{[AgarwalBohnerBasicTimeScaleCalc]}) applied to $f(t,x)$ up to first order in $t$: for any $s_2>s_1$ and $s_1,s_2\in \T$ we have \begin{equation}\label{TimeScaleTaylorExpansion} f(s_2, x)-f(s_1, x)= f^{\Dt}(s_1,x)(s_2-s_1)+R^f_{\text{TS}}(x; s_1,s_2) \ , \end{equation} where $$|R^f_{\text{TS}}(x; s_1,s_2)|=\left|\int_{s_1}^{\rho(s_2)}(s_2-\sm(s))f^{\Dt^2}(s,x)\Dt s\right|\leq r(|s_2-s_1|)|s_2-s_1|$$ with $r(\bullet)$ as before.
Combining \eqref{ClassicalTaylorExpansion} and \eqref{TimeScaleTaylorExpansion} we see that we have \begin{equation}\label{TaylorExpansion} \begin{array}{ll} & f(s_2, x_2)-f(s_1, x_1) \\ =& [f(s_2,x_2)-f(s_1,x_2)]+[f(s_1,x_2)-f(s_1,x_1)] \\ =& f^{\Dt}(s_1,x_2)(s_2-s_1)+\dfrac{\pt f}{\pt x}(s_1,x_1)(x_2-x_1) +\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_1,x_1)(x_2-x_1)^2 +R^f_{\text{TS}}(x_2;s_1,s_2)+R^f_\text{C}(s_1; x_1,x_2) \\ =& \left[f^{\Dt}(s_1,x_1)+\dfrac{\pt f^\Dt}{\pt x}(s_1,x_1)(x_2-x_1)+\dfrac{1}{2}\dfrac{\pt^2 f^\Dt}{\pt x^2}(s_1,x_1)(x_2-x_1)^2 +R_C^{f^\Dt}(s_1; x_1,x_2)\right](s_2-s_1) \\ & \qquad +\dfrac{\pt f}{\pt x}(s_1,x_1)(x_2-x_1)+\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_1,x_1)(x_2-x_1)^2 +R^f_{\text{TS}}(x_2;s_1,s_2)+R^f_\text{C}(s_1; x_1,x_2) \\ =& f^{\Dt}(s_1,x_1)(s_2-s_1)+\dfrac{\pt f}{\pt x}(s_1,x_1)(x_2-x_1) +\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_1,x_1)(x_2-x_1)^2 +R(s_1,s_2;x_1,x_2) \ , \end{array} \end{equation} with $$|R(s_1,s_2;x_1,x_2)|\leq r(|s_2-s_1|)|s_2-s_1|+r(|x_2-x_1|)(x_2-x_1)^2$$ for another function $r: \R_+\ra \R_+$ increasing with $\lim\li_{u\da 0}r(u)=0$. Consider a partition $\pi^{(n)}$: $t_1=s_0<s_1<...<s_n=t_2$, such that (1) each $s_i\in \T$; (2) $\max\li_{i}(\rho(s_i)-s_{i-1})\leq \dfrac{1}{2^n}$ for $i=1,2,...,n$. Notice that by definition $\rho(s_i)=\sup\{s<s_i: s\in \T\}$, so that we can always find $s_{i-1}\in \T$ so that $\rho(s_i)-s_{i-1}$ is sufficiently small. Let the sets $C$ and $\cI$ be defined as before. Let us fix a partition $\pi^{(n)}$, and consider a classification of its corresponding intervals $(s_{i-1}, s_i) \ , \ i=1,2,...,n$. We will classify all intervals $(s_{i-1}, s_i)$ such that for all $I_k\in \cI$ we have $I_k\cap (s_{i-1}, s_i)=\emptyset$ as class $(a)$; and we classify all intervals $(s_{i-1}, s_i)$ such that there exist some $I_k\in \cI$ with $(s_{i-1}, s_i)\cap I_k \neq \emptyset$ as class $(b)$. 
For an interval $(s_{i-1}, s_i)$ in class $(a)$, since for all $I_k\in \cI$ we have $I_k\cap (s_{i-1}, s_i)=\emptyset$, we see that $\rho(s_i)=s_i$, since otherwise $(\rho(s_i), s_i)$ would be one of the $I_k$'s. Thus in this case we have $s_i-s_{i-1}\leq \dfrac{1}{2^n}$. For an interval $(s_{i-1}, s_i)$ in class $(b)$, since both $s_{i-1}$ and $s_i$ are in $\T$, we in fact have $I_k\subseteq (s_{i-1}, s_i)$. In this case either $I_k=(s_{i-1}, s_i)$, or $I_k$ is a proper subset of $(s_{i-1}, s_i)$; in the latter case $(\rho(s_i), s_i)\in \cI$ is one of the $I_k$'s and $\rho(s_i)-s_{i-1}\leq \dfrac{1}{2^n}$. We also see from the above analysis that all $I_k$'s are contained in intervals $(s_{i-1}, s_i)$ that belong to class $(b)$. On the other hand, each interval $(s_{i-1}, s_i)$ in class $(b)$ is either entirely one of the $I_k$'s, or it contains an interval $(\rho(s_i), s_i)$ that is one of the $I_k$'s. For the latter case, i.e., when $s_{i-1}<\rho(s_i)<s_i$, the set of intervals of the form $(s_{i-1}, \rho(s_i))$ are disjoint open intervals such that \begin{equation}\label{EstimateOfRemainderIntervals} \sum\li_{(s_{i-1}, s_i)\in (b), \, s_{i-1}<\rho(s_i)<s_i}(\rho(s_i)-s_{i-1})\leq \dfrac{n}{2^n} \ . \end{equation} Now we have \begin{equation}\label{ItoFormulaExpansion} \begin{array}{l} f(t_2, W_{t_2})-f(t_1, W_{t_1}) \\ \play{=\sum\li_{i=1}^n[f(s_i, W_{s_i})-f(s_{i-1}, W_{s_{i-1}})]} \\ \play{=\sum\li_{(a)}[f(s_i, W_{s_i})-f(s_{i-1}, W_{s_{i-1}})] +\sum\li_{(b)}[f(s_i, W_{s_i})-f(s_{i-1}, W_{s_{i-1}})]} \\ \play{=(I)+(II) \ .} \end{array} \end{equation} We apply \eqref{TaylorExpansion} term by term in part $(I)$ of \eqref{ItoFormulaExpansion}, and we get \begin{equation} \begin{array}{l}\label{TaylorExpansionOfII} \play{(I)} \\ \play{=\sum\li_{(a)}[f(s_i, W_{s_i})-f(s_{i-1}, W_{s_{i-1}})]} \\ \play{=\sum\li_{(a)}\left[f^\Dt(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1}) +\dfrac{\pt f}{\pt x}(s_{i-1}, W_{s_{i-1}})(W_{s_i}-W_{s_{i-1}})\right.} \\ \play{\left.
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{i-1}, W_{s_{i-1}})(W_{s_i}-W_{s_{i-1}})^2+R(s_{i-1}, s_i; W_{s_{i-1}}, W_{s_i})\right]} \\ \play{=\sum\li_{i=1}^n \left[f^\Dt(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1}) +\dfrac{\pt f}{\pt x}(s_{i-1}, W_{s_{i-1}})(W_{s_i}-W_{s_{i-1}})\right]} \\ \play{\ \ \ \ \ \ +\left(\sum\li_{(a)}\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{i-1}, W_{s_{i-1}})(W_{s_i}-W_{s_{i-1}})^2 +\sum\li_{(b)}\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1})\right)} \\ \play{\ \ \ \ \ \ -\sum\li_{(b)}\left[f^\Dt(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1}) +\dfrac{\pt f}{\pt x}(s_{i-1}, W_{s_{i-1}})(W_{s_i}-W_{s_{i-1}})+\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1})\right]} \\ \play{\ \ \ \ \ \ +\sum\li_{(a)}R(s_{i-1}, s_i; W_{s_{i-1}}, W_{s_i}) \ .} \\ \play{= (III)_1+(III)_2+(IV)+(V) \ .} \end{array} \end{equation} We have the following four convergence results: \textit{Convergence Result} 1.1. By Lemma 1 \eqref{ConvergenceDeterministicDeltaIntegral} and \eqref{ConvergenceStochasticDeltaIntergal} established below we have \begin{equation}\label{ConvergenceResult1-1} \Prob\left((III)_1\ra \play{\int_{t_1}^{t_2}f^\Dt(s, W_s)\Dt s+\int_{t_1}^{t_2}\dfrac{\pt f}{\pt x}(s, W_s)\Dt W_s} \text{ as $n \ra \infty$}\right)=1 \ . \end{equation} \textit{Convergence Result} 1.2. By Lemma 2 \eqref{ContinuousPartQuadraticVariation} and Lemma 1 \eqref{ConvergenceDeterministicDeltaIntegral} established below we have \begin{equation}\label{ConvergenceResult1-2} \Prob\left((III)_2\ra \play{\int_{t_1}^{t_2}\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s, W_s)\Dt s} \text{ as $n \ra \infty$}\right)=1 \ . \end{equation} \textit{Convergence Result} 2. We have, with probability $1$, that $$(V)=\sum\li_{(a)}R(s_{i-1}, s_i; W_{s_{i-1}}, W_{s_i})\ra 0$$ as $n \ra \infty$. 
In fact, by the Kolmogorov--\v{C}entsov theorem proved in Theorem 3.1 of \cite{[time scale BM]} we know that for almost every trajectory $W_t(\om)$ of $W_t$ on $\T$, there exists an $n_0=n_0(\om)$ such that for all $n\geq n_0$, for a partition $\pi^{(n)}$ with a classification of its intervals $(s_{i-1}, s_i)$ into classes $(a)$ and $(b)$ as above, $\sup\li_{(a)}|W_{s_i}-W_{s_{i-1}}|\leq \dfrac{\dt}{2^{\gm n/5}}$ for some fixed $\dt>0$ and $\gm>0$. From here, for such a trajectory and all $n\geq n_0$, we can estimate $$\begin{array}{l} \play{\sum\li_{(a)}\left|R(s_{i-1}, s_i; W_{s_{i-1}}, W_{s_i})\right|} \\ \leq \play{\sum\li_{(a)}\left[r(s_i-s_{i-1})(s_i-s_{i-1})+r(|W_{s_i}-W_{s_{i-1}}|)(W_{s_i}-W_{s_{i-1}})^2\right]} \\ \leq r\left(\dfrac{1}{2^n}\right)(t_2-t_1)+r\left(\dfrac{\dt}{2^{\gm n/5}}\right)\sum\li_{(a)}(W_{s_i}-W_{s_{i-1}})^2 \ . \end{array}$$ Since by Lemma 2 below (applied with $f\equiv 1$) the sums $\sum\li_{(a)}(W_{s_i}-W_{s_{i-1}})^2$ are almost surely bounded in $n$, and $r(u)\da 0$ as $u\da 0$, we conclude \begin{equation}\label{ConvergenceResult2} \Prob\left(\lim\li_{n \ra \infty}\sum\li_{(a)}R(s_{i-1}, s_i; W_{s_{i-1}}, W_{s_i})=0\right)=1 \ . \end{equation} \textit{Convergence Result} 3. Let $$\begin{array}{ll} (II)+(IV)=A_n=A_n(\om)&\play{=\sum\li_{(b)}\left[f(s_i, W_{s_i})-f(s_{i-1}, W_{s_{i-1}})-f^\Dt(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1})\right.} \\ &\play{\left. \ \ \ \ \ \ -\dfrac{\pt f}{\pt x}(s_{i-1}, W_{s_{i-1}})(W_{s_i}-W_{s_{i-1}})-\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1})\right] \, ,} \end{array}$$ and $$\begin{array}{ll} B_n=B_n(\om)&=\play{\sum\li_{I_k\in \cI, I_k\subset (t_1,t_2)} \left[f(s_{I_k}^+,W_{s_{I_k}^+})-f(s_{I_k}^-, W_{s_{I_k}^-}) -f^\Dt(s_{I_k}^-, W_{s_{I_k}^-})(s_{I_k}^+-s_{I_k}^-) \right.} \\ &\play{\qquad \qquad \qquad \qquad \left.-\dfrac{\pt f}{\pt x}(s_{I_k}^-,W_{s_{I_k}^-})(W_{s_{I_k}^+}-W_{s_{I_k}^-}) -\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{I_k}^-,W_{s_{I_k}^-})(s_{I_k}^+-s_{I_k}^-)\right]} \ . \end{array}$$ We claim that we have \begin{equation}\label{ConvergenceResult3} \Prob(|A_n(\om)-B_n(\om)|\ra 0 \text{ as } n \ra \infty)=1 \ .
\end{equation} In fact, from the analysis that leads to the estimate \eqref{EstimateOfRemainderIntervals} we see that we can write $A_n-B_n$ as $$\begin{array}{ll} &A_n-B_n \\ =&\play{ \sum\li_{(s_{i-1}, s_i)\in (b), \, s_{i-1}<\rho(s_i)<s_i} \left[f(s_i, W_{s_i})-f(s_{i-1}, W_{s_{i-1}})-f^\Dt(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1})\right.} \\ &\play{\left. \ \ \ \ \ \ -\dfrac{\pt f}{\pt x}(s_{i-1}, W_{s_{i-1}})(W_{s_i}-W_{s_{i-1}})-\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1})\right]} \\ &\play{-\sum\li_{(s_{i-1}, s_i) \in (b), \, s_{i-1}<\rho(s_i)<s_i} \left[f(s_i, W_{s_i})-f(\rho(s_i), W_{\rho(s_i)})-f^\Dt(\rho(s_i), W_{\rho(s_i)})(s_i-\rho(s_i))\right.} \\ &\play{\left. \ \ \ \ \ \ -\dfrac{\pt f}{\pt x}(\rho(s_i), W_{\rho(s_i)})(W_{s_i}-W_{\rho(s_i)}) -\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(\rho(s_i), W_{\rho(s_i)})(s_i-\rho(s_i))\right]} \\ &\play{-\sum\li_{I_k\in \cI, I_k\subset (s_{i-1}, \rho(s_i)) \text{ for some } (s_{i-1}, s_i) \in (b) } \left[f(s_{I_k}^+,W_{s_{I_k}^+})-f(s_{I_k}^-, W_{s_{I_k}^-}) \right.} \\ &\play{\ \ \ \ \ \ \left.-f^\Dt(s_{I_k}^-, W_{s_{I_k}^-})(s_{I_k}^+-s_{I_k}^-)-\dfrac{\pt f}{\pt x}(s_{I_k}^-,W_{s_{I_k}^-})(W_{s_{I_k}^+}-W_{s_{I_k}^-}) -\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{I_k}^-,W_{s_{I_k}^-})(s_{I_k}^+-s_{I_k}^-)\right]} \\ =&(VI)_1+(VI)_2+(VI)_3+(VI)_4-(VII) \ . \end{array}$$ Here $$(VI)_1=\sum\li_{(s_{i-1}, s_i)\in (b), \, s_{i-1}<\rho(s_i)<s_i} [f(\rho(s_i), W_{\rho(s_i)})-f(s_{i-1}, W_{s_{i-1}})] \ , $$ $$\begin{array}{ll} (VI)_2&=\sum\li_{(s_{i-1}, s_i)\in (b), \, s_{i-1}<\rho(s_i)<s_i} \left[f^\Dt(\rho(s_i), W_{\rho(s_i)})(s_i-\rho(s_i)) -f^\Dt(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1})\right] \\ &=\sum\li_{(s_{i-1}, s_i)\in (b), \, s_{i-1}<\rho(s_i)<s_i} \left[\left(f^\Dt(\rho(s_i), W_{\rho(s_i)})-f^\Dt(s_{i-1}, W_{s_{i-1}})\right)(s_i-s_{i-1}) \right.
\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.-f^\Dt(\rho(s_i), W_{\rho(s_i)})(\rho(s_i)-s_{i-1})\right] \ , \end{array}$$ $$\begin{array}{ll} (VI)_3&=\sum\li_{(s_{i-1}, s_i)\in (b), \, s_{i-1}<\rho(s_i)<s_i} \left[\dfrac{\pt f}{\pt x}(\rho(s_i), W_{\rho(s_i)})(W_{s_i}-W_{\rho(s_i)}) -\dfrac{\pt f}{\pt x}(s_{i-1}, W_{s_{i-1}})(W_{s_i}-W_{s_{i-1}})\right] \\ &=\sum\li_{(s_{i-1}, s_i)\in (b), \, s_{i-1}<\rho(s_i)<s_i} \left[\left(\dfrac{\pt f}{\pt x}(\rho(s_i), W_{\rho(s_i)})-\dfrac{\pt f}{\pt x}(s_{i-1}, W_{s_{i-1}})\right)(W_{s_i}-W_{s_{i-1}}) \right. \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.-\dfrac{\pt f}{\pt x}(\rho(s_i), W_{\rho(s_i)})(W_{\rho(s_i)}-W_{s_{i-1}})\right] \ , \end{array}$$ $$\begin{array}{ll} (VI)_4&=\dfrac{1}{2}\sum\li_{(s_{i-1}, s_i)\in (b), \, s_{i-1}<\rho(s_i)<s_i} \left[\dfrac{\pt^2 f}{\pt x^2}(\rho(s_i), W_{\rho(s_i)})(s_i-\rho(s_i)) -\dfrac{\pt^2 f}{\pt x^2}(s_{i-1}, W_{s_{i-1}})(s_i-s_{i-1})\right] \\ &=\dfrac{1}{2}\sum\li_{(s_{i-1}, s_i)\in (b), \, s_{i-1}<\rho(s_i)<s_i} \left[\dfrac{\pt^2 f}{\pt x^2}(s_{i-1}, W_{s_{i-1}})\left(s_{i-1}-\rho(s_i)\right) \right. \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.+\left(\dfrac{\pt^2 f}{\pt x^2}(\rho(s_i), W_{\rho(s_i)})-\dfrac{\pt^2 f}{\pt x^2}(s_{i-1}, W_{s_{i-1}})\right) (s_i-\rho(s_i))\right] \ , \end{array}$$ $$\begin{array}{l} (VII) \\ =\play{\sum\li_{I_k\in \cI, I_k\subset (s_{i-1}, \rho(s_i)) \text{ for some } (s_{i-1}, s_i) \in (b) } \left[f(s_{I_k}^+,W_{s_{I_k}^+})-f(s_{I_k}^-, W_{s_{I_k}^-}) \right.} \\ \play{\ \ \ \ \ \ \left.-f^\Dt(s_{I_k}^-, W_{s_{I_k}^-})(s_{I_k}^+-s_{I_k}^-)-\dfrac{\pt f}{\pt x}(s_{I_k}^-,W_{s_{I_k}^-})(W_{s_{I_k}^+}-W_{s_{I_k}^-}) -\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{I_k}^-,W_{s_{I_k}^-})(s_{I_k}^+-s_{I_k}^-)\right]} \ .
\end{array}$$ From \eqref{EstimateOfRemainderIntervals}, the Kolmogorov--\v{C}entsov theorem proved in Theorem 3.1 of \cite{[time scale BM]}, as well as the assumptions on the function $f$, we see that $$\Prob\left(|(VI)_1|+|(VI)_2|+|(VI)_3|+|(VI)_4|+|(VII)|\ra 0 \text{ as } n \ra \infty\right)=1 \ .$$ From here we immediately see the claim \eqref{ConvergenceResult3}. Note that for any interval $I_k=(s_{I_k}^-, s_{I_k}^+)$ we have $f^\Dt(s_{I_k}^-, W_{s_{I_k}^-})=\dfrac{f(s_{I_k}^+, W_{s_{I_k}^-})-f(s_{I_k}^-, W_{s_{I_k}^-})}{s_{I_k}^+-s_{I_k}^-}$, therefore we see that \begin{equation}\label{SubtractionItoBn} \begin{array}{ll} B_n=B_n(\om)&=\play{\sum\li_{I_k\in \cI, I_k\subset (t_1,t_2)} \left[f(s_{I_k}^+,W_{s_{I_k}^+})-f(s_{I_k}^-, W_{s_{I_k}^-}) -[f(s_{I_k}^+, W_{s_{I_k}^-})-f(s_{I_k}^-, W_{s_{I_k}^-})] \right.} \\ &\play{\qquad \qquad \qquad \qquad \left.-\dfrac{\pt f}{\pt x}(s_{I_k}^-,W_{s_{I_k}^-})(W_{s_{I_k}^+}-W_{s_{I_k}^-}) -\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{I_k}^-,W_{s_{I_k}^-})(s_{I_k}^+-s_{I_k}^-)\right]} \\ &=\play{\sum\li_{I_k\in \cI, I_k\subset (t_1,t_2)} \left[f(s_{I_k}^+,W_{s_{I_k}^+})-f(s_{I_k}^+, W_{s_{I_k}^-}) \right.} \\ &\play{\qquad \qquad \qquad \qquad \left.-\dfrac{\pt f}{\pt x}(s_{I_k}^-,W_{s_{I_k}^-})(W_{s_{I_k}^+}-W_{s_{I_k}^-}) -\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(s_{I_k}^-,W_{s_{I_k}^-})(s_{I_k}^+-s_{I_k}^-)\right]} \ . \end{array} \end{equation} Combining the convergence results \eqref{ConvergenceResult1-1}, \eqref{ConvergenceResult1-2}, \eqref{ConvergenceResult2}, \eqref{ConvergenceResult3}, together with \eqref{ItoFormulaExpansion} and \eqref{TaylorExpansionOfII}, \eqref{SubtractionItoBn}, we establish \eqref{ItoFormula}. $\square$ \ The next two lemmas are used in the above proof of It\^{o}'s formula, but they are also of independent interest. \ \textbf{Lemma 1.} (Convergence of $\Dt$--deterministic and stochastic integrals.)
\textit{Given a time scale $\T$ and $t_1, t_2\in \T$, $t_1<t_2$; a probability space $(\Om, \cF, \Prob)$; a Brownian motion $\{W_t\}_{t\in \T}$ on the time scale $\T$; for any progressively measurable random function $f$ that is continuous on $[t_1, t_2]_\T$, viewed as an $L^2([t_1, t_2]_\T)$--progressively measurable random function $f(t, \om)$ on $\T$, and for the family of partitions $\pi^{(n)}: t_1=s_0<s_1<...<s_n=t_2$, $s_0, s_1, ...,s_n\in \T$, $\max\li_{i=1,2,...,n}(\rho(s_i)-s_{i-1})<\dfrac{1}{2^n}$, we have } \begin{equation}\label{ConvergenceDeterministicDeltaIntegral} \Prob\left(\lim\li_{n\ra \infty}\sum\li_{i=1}^n f(s_{i-1}, \om)(s_i-s_{i-1})=\play{\int_{t_1}^{t_2}f(s,\om)\Dt s}\right)=1 \ , \end{equation} \begin{equation}\label{ConvergenceStochasticDeltaIntergal} \Prob\left(\lim\li_{n\ra \infty}\sum\li_{i=1}^n f(s_{i-1}, \om)(W_{s_i}-W_{s_{i-1}})=\play{\int_{t_1}^{t_2}f(s,\om)\Dt W_s}\right)=1 \ . \end{equation} \ \textbf{Proof.} As we have seen in the proof of It\^{o}'s formula, for a given partition $\pi^{(n)}: t_1=s_0<s_1<...<s_n=t_2$, such that $s_i\in \T$ for $i=0,1,...,n$, and $\max\li_{i=1,2,...,n}(\rho(s_i)-s_{i-1})<\dfrac{1}{2^n}$, we can classify all intervals of the form $(s_{i-1}, s_i)$ into two classes $(a)$ and $(b)$: class $(a)$ consists of those open intervals $(s_{i-1}, s_i)$ that contain no open interval $I_k\in \cI$; class $(b)$ consists of those open intervals $(s_{i-1}, s_i)$ that contain at least one open interval $I_k\in \cI$, whose endpoints are left-- or right--scattered. Let us form a family of partitions $\sm^{(n)}: t_1=r_0<r_1<...<r_m=t_2$, so that the partition $\sm^{(n)}$ is the partition $\pi^{(n)}$ together with all points in $\T$ that are of the form $r_j=\rho(s_i)$ for some $s_i$ in the partition $\pi^{(n)}$. Note that under this construction we have $r_0,r_1,...,r_m\in \T$.
In fact, for any interval $(s_{i-1}, s_i)$ in $(a)$, there is an identical interval $(r_{j-1}, r_j)$ in the partition $\sm^{(n)}$ corresponding to it; for any interval $(s_{i-1}, s_i)$ in $(b)$, there are two intervals $(r_{j-2}, r_{j-1})$ and $(r_{j-1}, r_j)$ corresponding to it, so that $r_{j-1}=\rho(s_i)$. And by \eqref{EstimateOfRemainderIntervals} we know that $$\sum\li_{(s_{i-1}, s_i)\in (b), \, (r_{j-2}, r_{j-1}) \text{ corresponding to } (s_{i-1}, s_i)}(r_{j-1}-r_{j-2})<\dfrac{n}{2^n} \ .$$ Note that the number $m$ depends on $n$ and the partition $\pi^{(n)}$: $m=m(n ,\pi^{(n)})$. In particular $m\ra \infty$ as $n \ra \infty$. For simplicity we will suppress this dependence later in our proof. Let us recall the definition of deterministic and stochastic $\Dt$--integrals as defined in Section 2. Let $\tilde{f}$ be the extension of $f$ that we have in \eqref{Extension}: for any $t\in [0,\infty)$, $$\tilde{f}(t,\om)=f(\sup[0,t]_\T, \om) \ .$$ Note that for $t\in \T$ we have $\tilde{f}(t,\om)=f(t,\om)$, while for $t\notin \T$ lying in a gap interval $I_k=(s_{I_k}^-, s_{I_k}^+)$ we have $\tilde{f}(t,\om)=f(s_{I_k}^-, \om)$.
Thus we see that $$\Prob\left(\lim\li_{n\ra \infty}\sum\li_{j=1}^{m} f(r_{j-1}, \om)(r_j-r_{j-1})=\play{\int_{t_1}^{t_2}\tilde{f}(s,\om)ds}\right)=1 \ ,$$ $$\Prob\left(\lim\li_{n\ra \infty}\sum\li_{j=1}^{m} f(r_{j-1}, \om)(W_{r_j}-W_{r_{j-1}})=\play{\int_{t_1}^{t_2}\tilde{f}(s,\om)dW_s}\right)=1 \ .$$ So it suffices to prove that $$\Prob\left(\lim\li_{n\ra \infty}\left[\sum\li_{i=1}^n f(s_{i-1}, \om)(s_i-s_{i-1}) -\sum\li_{j=1}^m f(r_{j-1}, \om)(r_j-r_{j-1})\right]=0\right)=1$$ and $$\Prob\left(\lim\li_{n\ra \infty}\left[ \sum\li_{i=1}^n f(s_{i-1}, \om)(W_{s_i}-W_{s_{i-1}}) -\sum\li_{j=1}^m f(r_{j-1}, \om)(W_{r_j}-W_{r_{j-1}})\right]=0 \right)=1 \ .$$ In fact, for any interval $(s_{i-1}, s_i)$ in class $(a)$, there exists an interval $(r_{j-1}, r_j)$ identical to the interval $(s_{i-1}, s_i)$, so that $$f(s_{i-1}, \om)(s_i-s_{i-1})-f(r_{j-1}, \om)(r_j-r_{j-1})=0$$ and $$f(s_{i-1}, \om)(W_{s_i}-W_{s_{i-1}})-f(r_{j-1}, \om)(W_{r_j}-W_{r_{j-1}})=0 \ .$$ For any open interval $(s_{i-1}, s_i)$ in class $(b)$, there are two corresponding intervals $(r_{j-2}, r_{j-1})$ and $(r_{j-1}, r_j)$ such that $r_{j-2}=s_{i-1}$, $r_{j-1}=\rho(s_i)$ and $r_j=s_i$. In this case $$\begin{array}{ll} &f(s_{i-1}, \om)(s_i-s_{i-1})-f(r_{j-1}, \om)(r_j-r_{j-1})-f(r_{j-2}, \om)(r_{j-1}-r_{j-2}) \\ =&f(s_{i-1}, \om)(s_i-s_{i-1})-f(\rho(s_i), \om)(s_i-\rho(s_i))-f(s_{i-1}, \om)(\rho(s_i)-s_{i-1}) \\ =&(f(s_{i-1}, \om)-f(\rho(s_i), \om))(s_i-\rho(s_i)) \end{array}$$ and $$\begin{array}{ll} &f(s_{i-1}, \om)(W_{s_i}-W_{s_{i-1}})-f(r_{j-1}, \om)(W_{r_j}-W_{r_{j-1}})-f(r_{j-2}, \om)(W_{r_{j-1}}-W_{r_{j-2}}) \\ =&f(s_{i-1}, \om)(W_{s_i}-W_{s_{i-1}})-f(\rho(s_i), \om)(W_{s_i}-W_{\rho(s_i)})-f(s_{i-1}, \om)(W_{\rho(s_i)}-W_{s_{i-1}}) \\ =&(f(s_{i-1}, \om)-f(\rho(s_i), \om))(W_{s_i}-W_{\rho(s_i)}) \ .
\end{array}$$ From the above calculations, the estimate \eqref{EstimateOfRemainderIntervals}, the continuity of $f$ on $[t_1, t_2]_\T$, and the fact that $s_{i-1}, \rho(s_i)\in \T$ with $0\leq \rho(s_i)-s_{i-1}\leq \dfrac{1}{2^n}$, the claim follows. $\square$ \ \textbf{Lemma 2.} (Convergence of quadratic variation of Brownian motion on time scale.) \textit{Given a time scale $\T$ and $t_1, t_2\in \T$, $t_1<t_2$; a probability space $(\Om, \cF, \Prob)$; a Brownian motion $\{W_t\}_{t\in \T}$ on the time scale $\T$. Let $f(t, \om)$ be an $L^2([t_1, t_2]_\T)$--progressively measurable random function on $\T$ such that $\E f^2(t,\om)$ is uniformly bounded on $[t_1, t_2]_\T$. Consider the families of partitions $\pi^{(n)}: t_1=s_0<s_1<...<s_n=t_2$, $s_0, s_1, ...,s_n\in \T$, $\max\li_{i=1,2,...,n}(\rho(s_i)-s_{i-1})<\dfrac{1}{2^n}$. We classify all the intervals $(s_{i-1}, s_i)$, $i=1,2,...,n$ into two classes $(a)$ and $(b)$ as before. Then we have } \begin{equation}\label{ContinuousPartQuadraticVariation} \Prob\left(\lim\li_{n\ra \infty}\left[\sum\li_{(a)} f(s_{i-1}, \om)(W_{s_{i}}-W_{s_{i-1}})^2 -\sum\li_{(a)}f(s_{i-1}, \om)(s_i-s_{i-1})\right]=0\right)=1 \ . \end{equation} \ \textbf{Proof.} We notice that for all intervals $(s_{i-1}, s_i)\in (a)$ we have $\rho(s_{i})=s_{i}$ and thus $s_i-s_{i-1}\leq \dfrac{1}{2^n}$. Let us denote $$V_n=\left[\sum\li_{(a)} f(s_{i-1}, \om)(W_{s_{i}}-W_{s_{i-1}})^2 -\sum\li_{(a)}f(s_{i-1}, \om)(s_i-s_{i-1})\right] \ .$$ Since $f(t,\om)$ is progressively measurable, $f(s_{i-1}, \om)$ is $\cF_{s_{i-1}}$--measurable and hence independent of $W_{s_i}-W_{s_{i-1}}$. Thus $$\begin{array}{ll} \E V_n &= \play{\E \left[\sum\li_{(a)} f(s_{i-1}, \om)(W_{s_{i}}-W_{s_{i-1}})^2 -\sum\li_{(a)}f(s_{i-1}, \om)(s_i-s_{i-1})\right]} \\ &\play{=\sum\li_{(a)} \E f(s_{i-1}, \om)(s_{i}-s_{i-1})-\sum\li_{(a)} \E f(s_{i-1}, \om)(s_i-s_{i-1})} \\ &=0 \ .
\end{array} $$ Furthermore $$\begin{array}{ll} \Var V_n &= \play{\E \left[\sum\li_{(a)} f(s_{i-1}, \om)(W_{s_{i}}-W_{s_{i-1}})^2 -\sum\li_{(a)}f(s_{i-1}, \om)(s_i-s_{i-1})\right]^2} \\ &\play{=\E \sum\li_{(a), (a)} f(s_{i-1}, \om)f(s_{j-1}, \om)[(W_{s_{i}}-W_{s_{i-1}})^2-(s_{i}-s_{i-1})] \cdot[(W_{s_{j}}-W_{s_{j-1}})^2-(s_{j}-s_{j-1})]} \\ &\play{=\sum\li_{(a), (a)} \E f(s_{i-1}, \om)f(s_{j-1}, \om)[(W_{s_{i}}-W_{s_{i-1}})^2-(s_{i}-s_{i-1})] \cdot[(W_{s_{j}}-W_{s_{j-1}})^2-(s_{j}-s_{j-1})]} \ , \end{array} $$ where the double sum runs over all pairs of class--$(a)$ intervals $(s_{i-1}, s_i)$ and $(s_{j-1}, s_j)$. If $i<j$, then $f(s_{i-1}, \om)f(s_{j-1}, \om)[(W_{s_{i}}-W_{s_{i-1}})^2-(s_{i}-s_{i-1})]$ is independent of $[(W_{s_{j}}-W_{s_{j-1}})^2-(s_{j}-s_{j-1})]$, so we have $\E f(s_{i-1}, \om)f(s_{j-1}, \om)[(W_{s_{i}}-W_{s_{i-1}})^2-(s_{i}-s_{i-1})] \cdot[(W_{s_{j}}-W_{s_{j-1}})^2-(s_{j}-s_{j-1})]=0$. Similarly, for $i>j$ we also have $\E f(s_{i-1}, \om)f(s_{j-1}, \om)[(W_{s_{i}}-W_{s_{i-1}})^2-(s_{i}-s_{i-1})] \cdot[(W_{s_{j}}-W_{s_{j-1}})^2-(s_{j}-s_{j-1})]=0$. This implies that $$\begin{array}{ll} \Var V_n &=\play{\sum\li_{(a)} \E f^2(s_{i-1}, \om)[(W_{s_{i}}-W_{s_{i-1}})^2-(s_{i}-s_{i-1})]^2} \\ &=\play{\sum\li_{(a)} \E f^2(s_{i-1}, \om) \E [(W_{s_{i}}-W_{s_{i-1}})^2-(s_{i}-s_{i-1})]^2} \\ &=\play{\sum\li_{(a)} \E f^2(s_{i-1}, \om) \E [(W_{s_{i}}-W_{s_{i-1}})^4-2(W_{s_{i}}-W_{s_{i-1}})^2(s_{i}-s_{i-1})+(s_{i}-s_{i-1})^2]} \\ &=\play{\sum\li_{(a)} \E f^2(s_{i-1}, \om) [3(s_{i}-s_{i-1})^2-2(s_{i}-s_{i-1})^2+(s_{i}-s_{i-1})^2]} \\ &=\play{2\sum\li_{(a)} \E f^2(s_{i-1}, \om) (s_{i}-s_{i-1})^2}\leq \dfrac{1}{2^{n-1}} \left(\max\li_{s\in[t_1,t_2]}\E f^2(s,\om)\right) \sum\li_{(a)} (s_{i}-s_{i-1})\ra 0 \end{array}$$ as $n\ra \infty$. Since $\E V_n=0$ for every $n$ and the bound above shows that $\sum\li_{n}\Var V_n<\infty$, Chebyshev's inequality together with the Borel--Cantelli lemma implies the almost sure convergence claimed in \eqref{ContinuousPartQuadraticVariation}. $\square$ \ The argument above leads us to an It\^{o}'s formula for $f(t, W_t)$.
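As a sanity check of Theorem 2, note that on a purely discrete time scale every interval between consecutive points is one of the gap intervals $I_k$, the $\Dt$--integrals reduce to left--endpoint sums, and for $f(t,x)=x^2$ (so that $f^\Dt=0$, $\pt f/\pt x=2x$, $\pt^2 f/\pt x^2=2$) the right--hand side of \eqref{ItoFormula} telescopes exactly to $W_{t_2}^2-W_{t_1}^2$ along any sampled path. A minimal numerical sketch (the function name \texttt{ito\_rhs\_x\_squared} is ours):

```python
def ito_rhs_x_squared(ts, ws):
    """Right-hand side of the time-scale Ito formula (Theorem 2) for
    f(t, x) = x**2 on a purely discrete time scale ts with path values ws.
    Here f^Delta = 0, df/dx = 2x, d2f/dx2 = 2, every (t_i, t_{i+1}) is a
    gap interval I_k, and the Delta-integrals freeze the integrand at t_i."""
    rhs = 0.0
    for i in range(len(ts) - 1):
        dt, dw = ts[i + 1] - ts[i], ws[i + 1] - ws[i]
        rhs += 2 * ws[i] * dw          # stochastic Delta-integral of df/dx
        rhs += 0.5 * 2 * dt            # (1/2) Delta-integral of d2f/dx2
        # gap correction term for I_k = (t_i, t_{i+1}):
        rhs += (ws[i + 1] ** 2 - ws[i] ** 2) - 2 * ws[i] * dw - 0.5 * 2 * dt
    return rhs

# On any sampled path the formula is an exact telescoping identity:
ts = [0.0, 0.5, 1.0, 2.0, 4.0]
ws = [0.0, 0.3, -0.2, 1.1, 0.7]
assert abs(ito_rhs_x_squared(ts, ws) - (ws[-1] ** 2 - ws[0] ** 2)) < 1e-9
```

The correction terms over the gaps exactly cancel the first-- and second--order integral contributions there, which is the mechanism behind the sum over $I_k$ in \eqref{ItoFormula}.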
Making use of the same methods, one can derive a more general It\^{o}'s formula for the solution $X_t$ to the $\Dt$--stochastic differential equation \eqref{SDeltaE}. We will not repeat the proof, but state the following theorem. \ \textbf{Theorem 3.} (General It\^{o}'s formula) \textit{Let $X_t$ be the solution to the $\Dt$--stochastic differential equation \eqref{SDeltaE}. Let $f: \T\times \R\ra \R$ be any function such that $f^\Dt(t,x)$, $f^{\Dt^2}(t,x)$, $\dfrac{\pt f}{\pt x}(t,x)$, $\dfrac{\pt^2 f}{\pt x^2}(t,x)$, $\dfrac{\pt f^\Dt}{\pt x}(t,x)$ and $\dfrac{\pt^2 f^\Dt}{\pt x^2}(t,x)$ are continuous on $\T\times \R$. For any $t_1\leq t_2$, $t_1, t_2\in [0, \infty)_\T$ we have} \begin{equation}\label{ItoFormulaGeneral}\begin{array}{ll} &f(t_2,X_{t_2})-f(t_1, X_{t_1}) \\ =&\play{\int_{t_1}^{t_2}b(s, X_s)f^\Dt(s, X_s)\Dt s+ \int_{t_1}^{t_2}\sm(s, X_s)\dfrac{\pt f}{\pt x}(s,X_s)\Dt W_s +\dfrac{1}{2}\int_{t_1}^{t_2}\sm^2(s, X_s)\dfrac{\pt^2 f}{\pt x^2}(s, X_s)\Dt s} \\ &+\play{\sum\li_{I_k\in \cI, I_k\subset (t_1,t_2)} \left[f(s_{I_k}^+,X_{s_{I_k}^+})-f(s_{I_k}^+, X_{s_{I_k}^-}) \right.} \\ &\play{\ \ \ \ \ \ \ \left.-\sm(s_{I_k}^-, X_{s_{I_k}^-})\dfrac{\pt f}{\pt x}(s_{I_k}^-,X_{s_{I_k}^-})(W_{s_{I_k}^+}-W_{s_{I_k}^-}) -\dfrac{1}{2}\sm^2(s_{I_k}^-, X_{s_{I_k}^-})\dfrac{\pt^2 f}{\pt x^2}(s_{I_k}^-,X_{s_{I_k}^-})(s_{I_k}^+-s_{I_k}^-)\right] \ .} \end{array}\end{equation} \ \section{The stochastic exponential on time scales.} Our target in this section is to establish a closed--form formula for the \textit{stochastic exponential} in the case of general time scales $\T$. \ \textbf{Definition 3.} \textit{We say an adapted stochastic process $A(t)$ defined on the filtered probability space $(\Om, \cF_t, \Prob)$ is} stochastic regressive \textit{with respect to the Brownian motion $W_t$ on the time scale $\T$ if and only if for any right--scattered point $t\in \T$ we have} $$1+A(t)(W_{\sm(t)}-W_t) \neq 0 \ , \ \text{ a.s.
\ for all } t\in \T \ .$$ \textit{The set of stochastic regressive processes will be denoted by }$\cR_W$. \ The following definition of a stochastic exponential was also introduced in \cite{[SD Exponential Bohner-Sanyal]}. \ \textbf{Definition 4.} (Stochastic Exponential) \textit{Let $t_0\in \T$ and $A\in \cR_W$, then the unique solution of the $\Dt$--stochastic differential equation} \begin{equation}\label{SDeltaEStochasticExponential} \Dt X_t=A(t)X_t\Dt W_t \ , \ X(t_0)=1 \ , \ t\in \T \end{equation} \textit{is called the} stochastic exponential \textit{and is denoted by} $$X_\bullet=\cE_A(\bullet, t_0) \ .$$ \ We note that $\cE_A(t, t_0)$, as a solution to the equation \eqref{SDeltaEStochasticExponential}, can be written as the integral equation \begin{equation}\label{StochasticExponentialIntagralEquation} \cE_A(t, t_0)=1+\int_{t_0}^t A(s)\cE_A(s,t_0)\Dt W_s \ , \ \text{ for all } t\in \T \ . \end{equation} We will make use of the set--up of Section 4 on It\^{o}'s formula. Let $t_0<t$ and $t_0, t\in \T$. Let the sets $C$ and $\cI$ be defined as in Section 4 corresponding to the interval $[t_1, t_2]=[t_0, t]$. Let $I_k\in \cI$ and $I_k=(s_{I_k}^-, s_{I_k}^+)$. We note that $s_{I_k}^-=\rho(s_{I_k}^+)$, $s_{I_k}^+=\sm(s_{I_k}^-)$. Let \begin{equation}\label{D} D(t, t_0)=\sum\li_{I_k\in \cI, I_k\subset (t_0,t)}A(s_{I_k}^-)(W_{s_{I_k}^+}-W_{s_{I_k}^-}) -\dfrac{1}{2}\sum\li_{I_k\in \cI, I_k\subset (t_0,t)}A^2(s_{I_k}^-)(s_{I_k}^+-s_{I_k}^-) \ . \end{equation} We define \begin{equation}\label{U} U(t, t_0)=\prod\limits_{I_k\in \cI, I_k\subset (t_0,t)}\left[1+A(s_{I_k}^-)(W_{s_{I_k}^+}-W_{s_{I_k}^-})\right] \ , \end{equation} \begin{equation}\label{V} V(t, t_0)=\exp\left(\play{\int_{t_0}^t A(s)\Dt W_s-\dfrac{1}{2}\int_{t_0}^t A^2(s)\Dt s - D(t, t_0)}\right) \ .
\end{equation} \ \textbf{Theorem 4.} (Stochastic Exponential on time scales) \textit{The stochastic exponential has the closed--form expression} $$\cE_A(t, t_0)=U(t, t_0)V(t, t_0) \ .$$ \ \textbf{Proof.} Consider the process $$Y_t=\int_{t_0}^t A(s)\Dt W_s-\dfrac{1}{2}\int_{t_0}^t A^2(s)\Dt s - D(t, t_0) \ .$$ Let us introduce an auxiliary function $\al(t)$ such that $$\al(t)=\left\{ \begin{array}{ll} 0 & \ , \ \text{when } t=s_{I_k}^- \text{ or } t=s_{I_k}^+ \ , \\ A(t) & \ , \ \text{otherwise} \ . \end{array}\right.$$ We see now that the process $Y_t$ is a solution to the $\Dt$--stochastic differential equation $$\Dt Y_t=\al(t)\Dt W_t-\dfrac{1}{2}\al^2(t)\Dt t \ , \ Y_{t_0}=0 \ .$$ Notice that $Y_{s_{I_k}^-}=Y_{s_{I_k}^+}$ for any $I_k=(s_{I_k}^-, s_{I_k}^+)\in \cI$. Taking this into account, as well as the fact that $\al(t)=0$ whenever $t=s_{I_k}^- \text{ or } t=s_{I_k}^+$, we can apply the general It\^{o}'s formula \eqref{ItoFormulaGeneral} to the function $V(t, t_0)=\exp(Y_t)$ to get $$\begin{array}{ll} &\exp(Y_t)-1 \\ =&\play{\int_{t_0}^t \al(s)\exp(Y_s)\Dt W_s-\dfrac{1}{2}\int_{t_0}^t \al^2(s)\exp(Y_s)\Dt s+\dfrac{1}{2}\int_{t_0}^t\al^2(s)\exp(Y_s)\Dt s} \\ =&\play{\int_{t_0}^t \al(s)\exp(Y_s)\Dt W_s} \ . \end{array}$$ Thus $$V(t, t_0)=1+\int_{t_0}^t \al(s)V(s, t_0)\Dt W_s \ ,$$ or in other words $$\Dt V(t, t_0)=\al(t)V(t, t_0)\Dt W_t \ .$$ Let us now consider the function $\cE_A(t, t_0)=U(t, t_0)V(t, t_0)$. We claim that \begin{equation}\label{Claim} U(t, t_0)V(t, t_0)-1=\int_{t_0}^t A(s)U(s,t_0)V(s, t_0)\Dt W_s \ .
\end{equation} Notice that $$U(s_{I_k}^+, t_0)=[1+A(s_{I_k}^-)(W_{s_{I_k}^+}-W_{s_{I_k}^-})]U(s_{I_k}^-, t_0)= U(s_{I_k}^-, t_0)+A(s_{I_k}^-)U(s_{I_k}^-, t_0)(W_{s_{I_k}^+}-W_{s_{I_k}^-}) \ ,$$ i.e., $$U(s_{I_k}^+, t_0)-U(s_{I_k}^-, t_0)= A(s_{I_k}^-)U(s_{I_k}^-, t_0)(W_{s_{I_k}^+}-W_{s_{I_k}^-}) \ .$$ Using this fact, the above claimed identity \eqref{Claim} can be written as \begin{equation}\label{ProductDifferenceToProve} \begin{array}{ll} &U(t, t_0)V(t, t_0)-1 \\ =&\play{\int_{t_0}^t \al(s)U(s,t_0)V(s,t_0)\Dt W_s +\sum\li_{I_k\in \cI, I_k\subset (t_0, t)}A(s_{I_k}^-)U(s_{I_k}^-,t_0)V(s_{I_k}^-,t_0)(W_{s_{I_k}^+}-W_{s_{I_k}^-})} \\ =&\play{\int_{t_0}^t \al(s)U(s,t_0)V(s,t_0)\Dt W_s +\sum\li_{I_k\in \cI, I_k\subset (t_0, t)}V(s_{I_k}^-,t_0)(U(s_{I_k}^+, t_0)-U(s_{I_k}^-, t_0))} \ . \end{array} \end{equation} In fact, with respect to the partition $\pi^{(n)}: t_0=s_0<s_1<...<s_{n-1}<s_n=t$ that we have been using, we have $$\begin{array}{ll} &U(t, t_0)V(t, t_0)-1 \\ =&\play{\sum\li_{i=1}^n [U(s_i, t_0)V(s_i, t_0)-U(s_{i-1}, t_0)V(s_{i-1}, t_0)]} \\ =&\play{\sum\li_{i=1}^n \left[(U(s_i, t_0)-U(s_{i-1}, t_0))(V(s_i, t_0)-V(s_{i-1}, t_0)) +U(s_{i-1}, t_0)(V(s_i, t_0)-V(s_{i-1}, t_0))\right.} \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \play{\left.+V(s_{i-1}, t_0)(U(s_i, t_0)-U(s_{i-1}, t_0))\right]} \\ =&\play{\sum\li_{i=1}^n (U(s_i, t_0)-U(s_{i-1}, t_0))(V(s_i, t_0)-V(s_{i-1}, t_0)) +\sum\li_{i=1}^n U(s_{i-1}, t_0)(V(s_i, t_0)-V(s_{i-1}, t_0))} \\ & \ \ \ \ \ \ \ \play{+\sum\li_{i=1}^n V(s_{i-1}, t_0)(U(s_i, t_0)-U(s_{i-1}, t_0))} \\ =&(I)+(II)+(III) \ . \end{array}$$ Here $$\begin{array}{llll} (I)&=& \play{\sum\li_{i=1}^n (U(s_i, t_0)-U(s_{i-1}, t_0))(V(s_i, t_0)-V(s_{i-1}, t_0))} & \ , \\ (II)&=& \play{\sum\li_{i=1}^n U(s_{i-1}, t_0)(V(s_i, t_0)-V(s_{i-1}, t_0))} & \ , \\ (III)&=& \play{\sum\li_{i=1}^n V(s_{i-1}, t_0)(U(s_i, t_0)-U(s_{i-1}, t_0))} & \ . 
\end{array}$$ We can apply the previous arguments and classify the intervals $(s_{i-1}, s_i)$ into classes $(a)$ and $(b)$. Notice that on each interval $(s_{I_k}^-, s_{I_k}^+)$, the function $V(t, t_0)$ remains constant and the function $U(t, t_0)$ has a jump, while on each interval $(s_{i-1}, s_i)$ in class $(a)$ the function $U(t, t_0)$ is constant. This observation, together with arguments similar to those in the previous section (which we leave to the reader), enables us to prove that with probability $1$, as $n \ra \infty$, we have $$\begin{array}{llll} (I) &\ra &0 & \ , \\ (II) &\ra &\play{\int_{t_0}^t \al(s)U(s,t_0)V(s,t_0)\Dt W_s} & \ , \\ (III)&\ra &\play{\sum\li_{I_k\in \cI, I_k\subset (t_0, t)}V(s_{I_k}^-,t_0)(U(s_{I_k}^+, t_0)-U(s_{I_k}^-, t_0))} & \ . \end{array}$$ This proves \eqref{ProductDifferenceToProve} and thus \eqref{Claim}. $\square$ \section{Change of measure and Girsanov's theorem on time scales.} In this section we demonstrate a change--of--measure formula (Girsanov's formula) for Brownian motion on time scales. Our analysis is based on the method of extension that was introduced in Section 3 (originally from \cite{[SDtE Bohner]}). Let us consider two processes, the standard Brownian motion $\{W_t\}_{t\in \T}$ on $(\Om, \cF_t, \Prob)$ on the time scale $\T$, and the process $$B_t=W_t-\int_0^t A(s)\Dt s \ ,$$ on the time scale $t\in \T$. Let us consider an extension of the (possibly random) function $A(s)$ as in \eqref{Extension}, and denote the resulting extension by $\widetilde{A}(s)$. Recall that \eqref{Extension} implies that $$\widetilde{A}(s,\om)=A(\sup[0,s]_\T,\om) \ .$$ Let $\widetilde{W}_t$ be a standard Brownian motion on $[0,\infty)$ extending $W_t$ as in Section 3. If we define $$\widetilde{B}_t=\widetilde{W}_t-\int_0^t \widetilde{A}(s)ds \ ,$$ then the process $\widetilde{B}_t$ agrees with $B_t$ at any time point $t\in \T$.
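The extension $\widetilde{A}(s,\om)=A(\sup[0,s]_\T,\om)$ simply holds the value of $A$ constant across the gaps of $\T$. A minimal sketch (the time scale and the values of $A$ below are hypothetical, chosen only to illustrate the construction):

```python
import bisect

# Sketch of the extension A_tilde(s) = A(sup [0, s]_T): the value of A is held
# constant across the gaps of the time scale T.  The time scale and the values
# of A below are hypothetical, chosen only to illustrate the construction.
T = [0.0, 1.0, 2.0, 4.0, 8.0]                 # a finite piece of a 2-time scale
A = {0.0: 1.0, 1.0: -2.0, 2.0: 0.5, 4.0: 3.0, 8.0: 0.0}

def A_tilde(s):
    # sup [0, s]_T = the largest point of T not exceeding s
    idx = bisect.bisect_right(T, s) - 1
    return A[T[idx]]

print(A_tilde(1.5), A_tilde(7.99))  # -2.0 (held from t=1), 3.0 (held from t=4)
```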
For any $t, t_0\in \T$, $t>t_0$, let \begin{equation}\label{GirsanovDensity} \begin{array}{ll} \cG_A(t, t_0)&=\play{\exp\left(\int_{t_0}^t \widetilde{A}(s)d \widetilde{W}_s-\dfrac{1}{2}\int_{t_0}^t \widetilde{A}^2(s)d s\right)} \\ &=\play{\exp\left(\int_{t_0}^t \widetilde{A}(s)d \widetilde{W}_s-\dfrac{1}{2}\int_{t_0}^t \widetilde{A^2}(s)d s\right)} \\ &=\play{\exp\left(\int_{t_0}^t A(s)\Dt W_s-\dfrac{1}{2}\int_{t_0}^t A^2(s)\Dt s\right) \ .} \end{array} \end{equation} It is easy to see that the function $\cG_A(t, t_0)$ is the standard Girsanov density function for the process $\widetilde{B}_t$ with respect to the standard Brownian motion $\widetilde{W}_t$. Since $\widetilde{B}_t$ and $\widetilde{W}_t$ have the same distributions as $B_t$ and $W_t$ on the time scale $\T$, we conclude with the following two theorems. \ \textbf{Theorem 5.} (Novikov's condition on time scales) \textit{If for every $t\geq 0$ we have } \begin{equation}\label{NovikovCondition} \E \exp\left(\int_0^t A^2(s)\Dt s\right)<\infty \ , \end{equation} \textit{then for every $t\geq 0$ we have } $$\E \cG_A(t, t_0)=1 \ .$$ \ Let \eqref{NovikovCondition} be satisfied. Let $T>0$ and pick $T>t_0$, $t_0, T\in \T$. Consider a new measure $\Prob^B$ on $(\Om, \cF_t)$, defined by its Radon--Nikodym derivative with respect to $\Prob^W$, as $$\dfrac{d\Prob^B}{d\Prob^W}=\cG_A(T, t_0) \ .$$ \ \textbf{Theorem 6.} (Girsanov's change of measure on time scales) \textit{Under the measure $\Prob^B$ the process $B_t$, $t\in [0,T]_\T$ is a standard Brownian motion on $\T$.} \ \section{Application to Brownian motion on a quantum time scale.} In this section we are going to apply our results to a quantum time scale ($q$--time scale, see \cite[Example 1.41]{[Bohner Peterson book]}). Let $q>1$ and $$q^\Z:=\{q^k: k\in \Z\} \text{ and } \overline{q^\Z}:=q^\Z\cup \{0\} \ .$$ The quantum time scale ($q$--time scale) is defined by $\T=\overline{q^\Z}$.
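A small computational sketch of this time scale: away from $0$ every point is isolated, the jump operators reduce to simple scalings, and the delta (Hilger) derivative becomes a $q$-difference quotient. The test function $f(t)=t^2$, for which $f^\Dt(t)=(q+1)t$, is used as a check; the choice $q=2$ is illustrative.

```python
# Sketch of the q-time scale T = {q^k : k in Z} together with 0 (q = 2 is an
# illustrative choice).  For t = q^m != 0 every point is isolated, so the
# delta (Hilger) derivative is a plain difference quotient over [t, qt].
q = 2.0

def sigma(t): return q * t                    # forward jump (sigma(0) = 0 too)
def rho(t):   return t / q                    # backward jump
def mu(t):    return (q - 1.0) * t            # graininess mu(t) = sigma(t) - t

def delta_derivative(f, t):
    # f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t) for t != 0
    return (f(sigma(t)) - f(t)) / mu(t)

f = lambda t: t**2
print(delta_derivative(f, 8.0))   # 24.0, matching f^Delta(t) = (q + 1) t
```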
Given the quantum time scale $\T$, one can then construct a Brownian motion $W_t$ on $\T$ according to Definition 2. We have $$\sm(t)=\inf\{q^n: n\in [m+1,\infty)\}=q^{m+1}=qq^m=qt$$ if $t=q^m\in \T$ and obviously $\sm(0)=0$. So we obtain $$\sm(t)=qt \text{ and } \rho(t)=\dfrac{t}{q} \text{ for all } t\in \T$$ and consequently $$\mu(t)=\sm(t)-t=(q-1)t \ \text{ for all } t\in \T\ .$$ Here $0$ is a right--dense minimum and every other point in $\T$ is isolated. For a function $f: \T\ra \R$ we have $$f^\Dt(t)=\dfrac{f(\sm(t))-f(t)}{\mu(t)}=\dfrac{f(qt)-f(t)}{(q-1)t} \text{ for all } t\in \T\backslash \{0\}$$ and $$f^\Dt(0)=\lim\li_{s\ra 0}\dfrac{f(0)-f(s)}{0-s}=\lim\li_{s\ra 0}\dfrac{f(s)-f(0)}{s}$$ provided the limit exists. The open intervals $I_k$ that we have constructed in Section 4 have the form $I_k=(q^k, q^{k+1})$ where $k\in \Z$. For any two points $t_1<t_2$, $t_1,t_2 \in \T$, if $t_1, t_2\neq 0$, then $t_1=q^{k_1}$ and $t_2=q^{k_2}$ for two integers $k_1<k_2$. In this case we can apply \eqref{ItoFormula} and we get \begin{equation}\label{ItoFormula1BMqScale} \begin{array}{ll} &f(q^{k_2}, W_{q^{k_2}})-f(q^{k_1}, W_{q^{k_1}}) \\ =&\play{\int_{q^{k_1}}^{q^{k_2}}f^\Dt(s, W_s)\Dt s+ \int_{q^{k_1}}^{q^{k_2}}\dfrac{\pt f}{\pt x}(s,W_s)\Dt W_s +\dfrac{1}{2}\int_{q^{k_1}}^{q^{k_2}}\dfrac{\pt^2 f}{\pt x^2}(s, W_s)\Dt s} \\ &\qquad +\play{\sum\li_{k=k_1}^{k_2-1} \left[f(q^{k+1},W_{q^{k+1}})-f(q^{k+1}, W_{q^k}) \right.} \\ &\play{\qquad \qquad \qquad \qquad \qquad \left.-\dfrac{\pt f}{\pt x}(q^k,W_{q^k})(W_{q^{k+1}}-W_{q^k}) -\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(q^k,W_{q^{k}})(q^{k+1}-q^k)\right] \ .} \end{array} \end{equation} Since $\T\backslash\{0\}$ is a discrete time scale, we have $$\int_{q^{k_1}}^{q^{k_2}}\dfrac{\pt f}{\pt x}(s,W_s)\Dt W_s= \sum\li_{k=k_1}^{k_2-1} \dfrac{\pt f}{\pt x}(q^k,W_{q^k})(W_{q^{k+1}}-W_{q^k}) \ ,$$ and $$\dfrac{1}{2}\int_{q^{k_1}}^{q^{k_2}}\dfrac{\pt^2 f}{\pt x^2}(s, W_s)\Dt s =\sum\li_{k=k_1}^{k_2-1}\dfrac{1}{2}\dfrac{\pt^2 f}{\pt 
x^2}(q^k,W_{q^{k}})(q^{k+1}-q^k) \ .$$ Moreover, we have $$\begin{array}{ll} &\play{\int_{q^{k_1}}^{q^{k_2}}f^\Dt(s, W_s)\Dt s} \\ =&\play{\sum\li_{k=k_1}^{k_2-1}\dfrac{f(q^{k+1},W_{q^k})-f(q^k, W_{q^k})}{q^{k+1}-q^k}(q^{k+1}-q^{k})} \\ =&\play{\sum\li_{k=k_1}^{k_2-1}[f(q^{k+1},W_{q^k})-f(q^k, W_{q^k})] \ .} \end{array}$$ Therefore \eqref{ItoFormula1BMqScale} becomes $$ \begin{array}{ll} &f(q^{k_2}, W_{q^{k_2}})-f(q^{k_1}, W_{q^{k_1}}) \\ =&\play{\sum\li_{k=k_1}^{k_2-1} \left\{[f(q^{k+1},W_{q^k})-f(q^k, W_{q^k})]+[f(q^{k+1},W_{q^{k+1}})-f(q^{k+1}, W_{q^k})]\right\}} \\ =&\play{\sum\li_{k=k_1}^{k_2-1} [f(q^{k+1},W_{q^{k+1}})-f(q^k, W_{q^k})] \ ,} \end{array} $$ which is a trivial telescoping identity. This justifies \eqref{ItoFormula} in the case away from $0$. Let us consider now the case when $t_1=0$ and $t_2=q^k>0$ for some $k\in \Z$. In this case we have, according to \eqref{ItoFormula}, that \begin{equation}\label{ItoFormula2BMqScale} \begin{array}{ll} &f(q^{k}, W_{q^{k}})-f(0,0) \\ =&\play{\int_{0}^{q^{k}}f^\Dt(s, W_s)\Dt s+ \int_{0}^{q^{k}}\dfrac{\pt f}{\pt x}(s,W_s)\Dt W_s +\dfrac{1}{2}\int_{0}^{q^{k}}\dfrac{\pt^2 f}{\pt x^2}(s, W_s)\Dt s} \\ &\qquad +\play{\sum\li_{j=-\infty}^{k-1} \left[f(q^{j+1},W_{q^{j+1}})-f(q^{j+1}, W_{q^j}) \right.} \\ &\play{\qquad \qquad \qquad \qquad \qquad \left.-\dfrac{\pt f}{\pt x}(q^j,W_{q^j})(W_{q^{j+1}}-W_{q^j}) -\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(q^j,W_{q^{j}})(q^{j+1}-q^j)\right] \ .} \end{array} \end{equation} One can justify that in this case we have $$\int_{0}^{q^{k}}\dfrac{\pt f}{\pt x}(s,W_s)\Dt W_s= \sum\li_{j=-\infty}^{k-1} \dfrac{\pt f}{\pt x}(q^j,W_{q^j})(W_{q^{j+1}}-W_{q^j}) \ ,$$ and $$\dfrac{1}{2}\int_{0}^{q^{k}}\dfrac{\pt^2 f}{\pt x^2}(s, W_s)\Dt s =\sum\li_{j=-\infty}^{k-1}\dfrac{1}{2}\dfrac{\pt^2 f}{\pt x^2}(q^j,W_{q^{j}})(q^{j+1}-q^j) \ .$$ Moreover, we have $$\begin{array}{ll} &\play{\int_{0}^{q^{k}}f^\Dt(s, W_s)\Dt s} \\ =&\play{\sum\li_{j=-\infty}^{k-1}\dfrac{f(q^{j+1},W_{q^j})-f(q^j, 
W_{q^j})}{q^{j+1}-q^j}(q^{j+1}-q^{j})} \\ =&\play{\sum\li_{j=-\infty}^{k-1}[f(q^{j+1},W_{q^j})-f(q^j, W_{q^j})] \ .} \end{array}$$ Therefore \eqref{ItoFormula2BMqScale} becomes $$ \begin{array}{ll} &f(q^{k}, W_{q^{k}})-f(0, 0) \\ =&\play{\sum\li_{j=-\infty}^{k-1} \left\{[f(q^{j+1},W_{q^j})-f(q^j, W_{q^j})]+[f(q^{j+1},W_{q^{j+1}})-f(q^{j+1}, W_{q^j})]\right\}} \\ =&\play{\sum\li_{j=-\infty}^{k-1} [f(q^{j+1},W_{q^{j+1}})-f(q^j, W_{q^j})] \ ,} \end{array} $$ which is also a telescoping identity. This justifies \eqref{ItoFormula} in the case including $0$. Making use of Theorem 4, it is easy to write down the stochastic exponential for the quantum time scale (with $t_0=0$): $$\cE_A(q^k, 0)=\prod\limits_{j=-\infty}^{k-1}\left[1+A(q^j)(W_{q^{j+1}}-W_{q^j})\right] \ .$$ \ \textbf{Acknowledgements}: The author would like to express his sincere gratitude to Professor Martin Bohner for inviting him to present this work at the Time Scales Seminar, Missouri S\&T on October 5, 2016, and also for pointing out the use of delta (Hilger) derivatives. He would also like to thank Professor David Grow for an inspiring discussion on the topic. Many thanks go to the anonymous referee for invaluable comments that led to essential improvements of the paper. Special thanks are dedicated to Missouri University of Science and Technology (formerly University of Missouri, Rolla) for a startup fund that supports this work. \end{document}
\begin{document} \title{ Quantum coding in non-inertial frames } \author{ N. Metwally$^{(1,2)}$ and A. Sagheer$^{(1)}$\\} \address { $^{1,2}$Math. Dept., Faculty of Science, Aswan University, Aswan, Egypt.\\ $^{2}$Mathematics Department, College of Science, University of Bahrain,\\ P. O. Box 32038 Kingdom of Bahrain.} \ead{[email protected]} \begin{abstract} The capacity of accelerated channels is investigated for different classes of initial states. It is shown that the capacities of the travelling channels depend on the frame in which the accelerated channels are observed and on the initial state shared between the partners. In some frames, the capacities decay as the accelerations of both qubits increase. The decay rate is larger if the partners initially share a maximum entangled state. The possibility of using the accelerated quantum channels to perform a quantum coding protocol is discussed. The amount of decoded information is quantified for different cases; it decays as the partners' accelerations increase, reaching its minimum bound. This minimum bound depends on the initial shared state and is large for a maximum entangled state. \end{abstract} \maketitle \section{Introduction} Nowadays, investigating the dynamics of quantum states in non-inertial frames represents one of the most active topics in the context of quantum information. For example, M. del Rey et al. \cite{Marco} have presented a scheme for simulating a set of accelerated atoms coupled to a single mode field. The dynamics of multipartite entanglement of fermionic systems in non-inertial frames is discussed by Wang and Jing \cite{Wang}. The effect of decoherence on a qubit-qutrit system is studied by Ramzan and Khan \cite{Ramazan}. The influence of the Unruh effect on the payoff function of the players in the quantum Prisoners' Dilemma is investigated in \cite{khan}. Goudarzi and Beyrami \cite{Goud} have discussed the effect of uniform acceleration on a multiplayer quantum game.
The usefulness of classes of travelling entangled quantum channels in non-inertial frames is investigated by Metwally \cite{metwally1}. Performing quantum information tasks in these non-inertial frames represents a highly desirable aim. Therefore some efforts have been devoted to investigating quantum teleportation via accelerated states as quantum channels. For example, Alsing and Milburn \cite{Alsing} gave a description of quantum teleportation between two users, one of them in an inertial frame and the other moving in an accelerated frame. In \cite{Metwally2}, the possibility of using maximum and partial entangled states as initial states to perform quantum teleportation in non-inertial frames is discussed, where it is assumed that the teleported information can be either accelerated or non-accelerated. Quantum coding protocols represent one of the most important ways to send coded information between two users \cite{Ben1,Bos,Bown}. Therefore, it is important to discuss the possibility of using different accelerated states, initially prepared in maximum or partial entangled states, to perform a quantum coding protocol. In this contribution, we investigate the behavior of the capacities of different accelerated states. These states are used as quantum channels to implement the original quantum coding protocol \cite{Ben1}. The paper is organized as follows: In Sec.2, the initial system is described, where we assume that the users share a two-qubit state of X-type \cite{Eberly}; the relation between Minkowski and Rindler spaces is reviewed, and the Unruh effect on the initial system is investigated. The channels' capacities are quantified for different classes of initial states in Sec.3. In Sec.4, the accelerated states are used as quantum channels to perform quantum coding; the amount of decoded information is calculated, and the effects of the accelerations and of the initial state setting are discussed. Finally, Sec.5 is devoted to a discussion of our results.
\section{The Model} The class of $X$-states \cite{Eberly} represents one of the most popular classes of quantum channels, and it has been extensively studied in the context of quantum information. Some properties of this class have been discussed in different directions. For example, the phenomenon of quantum discord of two-qubit X-states is discussed by Q. Chen et al. \cite{Chen,Ali}. The phenomenon of entanglement sudden death of two-qubit X-states in thermal reservoirs is discussed in \cite{Rau}. Therefore we are motivated to investigate the behavior of this state in non-inertial frames \cite{metwally1}. In the computational basis $\ket{00},\ket{01},\ket{10}$ and $\ket{11}$, the $X$-state takes the form, \begin{eqnarray}\label{xstate} \rho_X&=&\mathcal{A}_{11}\ket{00}\bra{00}+\mathcal{A}_{22}\ket{01}\bra{01}+\mathcal{A}_{33}\ket{10}\bra{10}+ \mathcal{A}_{44}\ket{11}\bra{11}+\mathcal{A}_{23}\ket{01}\bra{10} \nonumber\\ &&+\mathcal{A}_{32}\ket{10}\bra{01}+ \mathcal{A}_{14}\ket{00}\bra{11}+\mathcal{A}_{41}\ket{11}\bra{00}, \end{eqnarray} where \begin{eqnarray} \mathcal{A}_{11}&=&\mathcal{A}_{44}=\frac{1}{4}(1+c_z),\quad\mathcal{A}_{22}=\mathcal{A}_{33}=\frac{1}{4}(1-c_z), \nonumber\\ \mathcal{A}_{23}&=&\mathcal{A}_{32}=\frac{1}{4}(c_x+c_y),\quad\mathcal{A}_{14}=\mathcal{A}_{41}=\frac{1}{4}(c_x-c_y). \end{eqnarray} Since we are interested in using this class of states to perform quantum coding in non-inertial frames, it is important to shed some light on the Minkowski and Rindler spaces in the context of the Dirac field.\\ \subsection{ Minkowski and Rindler's spaces} In inertial frames, Minkowski coordinates $(t,z)$ are used to describe the Dirac field, while in the uniformly accelerated case, Rindler coordinates $(\tau, \chi)$ are more adequate.
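As a quick numerical illustration (a sketch, not part of the original analysis), the X-state above can be assembled as a $4\times 4$ matrix; for $c_x=c_y=c_z=-1$ it reduces to a maximally entangled pure state:

```python
import numpy as np

# Numerical sketch of the X-state in the computational basis |00>, |01>, |10>,
# |11>; the coefficients A_ij follow the (c_x, c_y, c_z) parametrisation above.
def x_state(cx, cy, cz):
    A11 = A44 = (1 + cz) / 4
    A22 = A33 = (1 - cz) / 4
    A23 = A32 = (cx + cy) / 4
    A14 = A41 = (cx - cy) / 4
    rho = np.zeros((4, 4))
    rho[0, 0], rho[1, 1], rho[2, 2], rho[3, 3] = A11, A22, A33, A44
    rho[1, 2], rho[2, 1] = A23, A32
    rho[0, 3], rho[3, 0] = A14, A41
    return rho

rho = x_state(-1.0, -1.0, -1.0)     # the maximum entangled case used later
print(np.trace(rho))                # 1.0: a valid (unit-trace) state
print(np.linalg.eigvalsh(rho))      # eigenvalues 0, 0, 0, 1: a pure state
```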
The relations between the Minkowski and Rindler coordinates are given by \cite{Alsing,Edu}, \begin{equation}\label{trans} \tau=r\tanh^{-1}\left(\frac{t}{z}\right), \quad \chi=\sqrt{z^2-t^2}, \end{equation} where $-\infty<\tau<\infty$, $-\infty<\chi<\infty$ and $r$ is the acceleration of the moving particle. The transformations (\ref{trans}) define two regions in Rindler space: the first region $I$ for $|t|<z$ and the second region $II$ for $z<-|t|$. A single mode $k$ of fermions and anti-fermions in Minkowski space is described by the annihilation operators $a_k$ and $b_{-k}$ respectively, where $a_k\ket{0_k}=0$ and $b_{-k}\ket{0_{k}}=0$. In terms of the Rindler operators ($c^{(I)}_k, d^{(II)}_{-k}$), the Minkowski operators can be written as \cite{Walls}, \begin{equation}\label{op} a_k=\cos r\, c^{(I)}_k-\exp(-i\phi)\sin r\, d^{(II)\dagger}_{-k}, \quad b^\dagger_{-k}=\exp(i\phi)\sin r\, c^{(I)}_k+\cos r\, d^{(II)\dagger}_{-k}, \end{equation} where $\tan r=e^{-\pi\omega c/a}$, $0\leq r\leq \pi/4$, $a$ is the acceleration such that $0\leq a\leq\infty$, $\omega$ is the frequency of the travelling qubits, $c$ is the speed of light and $\phi$ is an unimportant phase that can be absorbed into the definition of the operators. It is clear that the transformation (\ref{op}) mixes a particle in region $I$ and an antiparticle in region $II$. This effect is called the Unruh effect \cite{Alsing1,un}. In terms of the Rindler modes, the Minkowski vacuum $\ket{0_k}_M$ and the one-particle state $\ket{1_k}_M$ take the form, \begin{eqnarray}\label{Min} \ket{0_k}_M&=&\cos r\ket{0_k}_I\ket{0_{-k}}_{II}+ \sin r\ket{1_k}_I\ket{1_{-k}}_{II}, \nonumber\\ \ket{1_k}_M&=&a^\dagger_k\ket{0_k}_M =\ket{1_k}_I\ket{0_{-k}}_{II}. \end{eqnarray} \subsection{Unruh effect on X-state} Since the expressions of the vacuum and single particle states are obtained in the Rindler basis (\ref{Min}), we can investigate the dynamics of the suggested state (\ref{xstate}) from the uniformly accelerated point of view.
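The relation $\tan r=e^{-\pi\omega c/a}$ can be sketched numerically (with $\omega=c=1$ as illustrative units): the mixing angle $r$ vanishes for small accelerations and approaches $\pi/4$ in the infinite-acceleration limit.

```python
import math

# Sketch of the Unruh mixing angle from tan r = exp(-pi * omega * c / a);
# omega = c = 1 are illustrative units, not values fixed by the paper.
def unruh_angle(a, omega=1.0, c=1.0):
    return math.atan(math.exp(-math.pi * omega * c / a))

print(unruh_angle(0.1))   # small acceleration: r is essentially 0
print(unruh_angle(1e9))   # large acceleration: r approaches pi/4
```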
In this context, using Eqs.(\ref{Min}) and (\ref{xstate}), one can obtain the form of the $X$-state in the first and the second regions, \begin{eqnarray} \rho_{x{A_{\ell}B_{\ell}}}&=&\mathcal{B}^{\ell}_{11}\ket{00}\bra{00}+\mathcal{B}^{\ell}_{22}\ket{01}\bra{01}+ \mathcal{B}^{\ell}_{33}\ket{10}\bra{10}+ \mathcal{B}^{\ell}_{44}\ket{11}\bra{11}+\mathcal{B}^{\ell}_{23}\ket{01}\bra{10} \nonumber\\ &&+\mathcal{B}^{\ell}_{32}\ket{10}\bra{01}+ \mathcal{B}^{\ell}_{14}\ket{00}\bra{11}+\mathcal{B}^{\ell}_{41}\ket{11}\bra{00}, \end{eqnarray} where $\ell=I, II$ labels the first and second regions respectively. If the states of Alice and Bob in the second region $II$ are traced out, then the accelerated channel in the first region is defined by the coefficients, \begin{eqnarray} \mathcal{B}^{(I)}_{11}&=&\mathcal{A}_{11}\cos^2r_a\cos^2r_b, \quad \mathcal{B}^{(I)}_{14}=\mathcal{A}_{14}\cos r_a\cos r_b, \nonumber\\ \mathcal{B}^{(I)}_{22}&=&\cos^2r_a(\mathcal{A}_{11}\sin^2r_b+\mathcal{A}_{22}),\quad \mathcal{B}^{(I)}_{23}=\mathcal{A}_{23}\cos r_a\cos r_b, \nonumber\\ \mathcal{B}^{(I)}_{32}&=&\mathcal{A}_{32}\cos r_a\cos r_b,\quad \mathcal{B}^{(I)}_{33}=\cos^2r_b(\mathcal{A}_{11}\sin^2r_a+\mathcal{A}_{33}), \nonumber\\ \mathcal{B}^{(I)}_{41}&=&\mathcal{A}_{41}\cos r_a\cos r_b, \quad \mathcal{B}^{(I)}_{44}=\sin^2 r_a(\mathcal{A}_{11}\sin^2 r_b+\mathcal{A}_{22})+\mathcal{A}_{33}\sin^2r_b+\mathcal{A}_{44}. \end{eqnarray} On the other hand, if we trace out the states of Alice and Bob in the first region $I$, the accelerated channel between Alice and Bob in the second region $II$ is defined by the coefficients, \begin{eqnarray} \mathcal{B}^{(II)}_{11}&=&(\mathcal{A}_{22}+\mathcal{A}_{11}\cos^2r_b)\cos^2r_a+\mathcal{A}_{33}\cos^2r_b+\mathcal{A}_{44} ,\quad \mathcal{B}^{(II)}_{14}=\mathcal{A}_{41}\sin r_a\sin r_b, \nonumber\\ \mathcal{B}^{(II)}_{22}&=&(\mathcal{A}_{33}+\mathcal{A}_{11}\cos^2r_a)\sin^2r_b,\quad \mathcal{B}^{(II)}_{23}=\mathcal{A}_{32}\sin r_a\sin r_b, \nonumber\\
\mathcal{B}^{(II)}_{32}&=&\mathcal{A}_{23}\sin r_a\sin r_b,\quad \mathcal{B}^{(II)}_{33}=(\mathcal{A}_{22}+\mathcal{A}_{11}\cos^2r_b)\sin^2r_a, \nonumber\\ \mathcal{B}^{(II)}_{41}&=&\mathcal{A}_{14}\sin r_a\sin r_b, \quad \mathcal{B}^{(II)}_{44}=\mathcal{A}_{11}\sin^2 r_a\sin^2r_b. \end{eqnarray} There are also two other channels that could be investigated, $\rho_{A_IB_{II}}$ and $\rho_{A_{II}B_{I}}$, which represent the channels between Alice and anti-Bob and between anti-Alice and Bob, respectively. In this context, we consider only the channel between Alice in the first region $I$ and Bob in the second region $II$. In this case the coefficients $\mathcal{B}_{ij}$ are given by, \begin{eqnarray}\label{aIbII} \mathcal{B}_{11}&=&(\mathcal{A}_{22}+\mathcal{A}_{11}\cos^2r_b)\cos^2r_a ,\quad \mathcal{B}_{14}=\mathcal{A}_{23}\cos r_a\sin r_b, \nonumber\\ \mathcal{B}_{22}&=&\mathcal{A}_{11}\cos^2r_a\sin^2r_b,\quad \mathcal{B}_{23}=\mathcal{A}_{14}\cos r_a\sin r_b, \nonumber\\ \mathcal{B}_{32}&=&\mathcal{A}_{41}\cos r_a\sin r_b,\quad \mathcal{B}_{33}=(\mathcal{A}_{22}+\mathcal{A}_{11}\cos^2r_b)\sin^2r_a+(\mathcal{A}_{33}\cos^2r_b+\mathcal{A}_{44}), \nonumber\\ \mathcal{B}_{41}&=&\mathcal{A}_{32}\cos r_a\sin r_b, \quad \mathcal{B}_{44}=(\mathcal{A}_{33}+\mathcal{A}_{11}\sin^2r_a)\sin^2r_b. \end{eqnarray} Since the quantum channels are obtained in the different regions, it is possible to investigate the behavior of the channel capacity as well as the possibility of using these states as quantum channels to perform quantum coding. \section{Channel capacity} The channel capacity, which measures the rate of information transfer, represents one of the most important indicators of a channel's efficiency. Therefore, it is necessary to investigate this quantity for the travelling channels in the different regions.
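As a consistency check (a sketch under the region-$I$ coefficients listed above), the diagonal coefficients $\mathcal{B}^{(I)}_{ii}$ must sum to one for any accelerations, since tracing out region $II$ preserves the trace:

```python
import numpy as np

# Consistency sketch: the diagonal region-I coefficients B^{(I)}_{ii} listed
# above must sum to 1 (trace preservation under the partial trace) for any
# accelerations r_a, r_b and any unit-trace diagonal A_11, ..., A_44.
def region_I_diagonal(A11, A22, A33, A44, ra, rb):
    ca, sa = np.cos(ra)**2, np.sin(ra)**2
    cb, sb = np.cos(rb)**2, np.sin(rb)**2
    B11 = A11 * ca * cb
    B22 = ca * (A11 * sb + A22)
    B33 = cb * (A11 * sa + A33)
    B44 = sa * (A11 * sb + A22) + A33 * sb + A44
    return B11, B22, B33, B44

# maximum entangled initial state (c_x = c_y = c_z = -1): A = (0, 1/2, 1/2, 0)
total = sum(region_I_diagonal(0.0, 0.5, 0.5, 0.0, ra=0.3, rb=0.7))
print(total)   # 1.0 (up to rounding)
```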
Mathematically, the channel capacity of the channel $\rho_{A_\ell B_\ell}$, $\ell=I,II$, is given by \cite{sohr}, \begin{eqnarray}\label{Cap} \mathcal{C_P}&=&\log_2D -(\mathcal{B}^{(\ell)}_{11}+\mathcal{B}^{(\ell)}_{33})\log\bigl(\mathcal{B}^{(\ell)}_{11}+\mathcal{B}^{(\ell)}_{33}\bigr)- \bigl(\mathcal{B}^{(\ell)}_{22}+\mathcal{B}^{(\ell)}_{44}\bigr)\log\bigl(\mathcal{B}^{(\ell)}_{22}+\mathcal{B}^{(\ell)}_{44}\bigr) \nonumber\\ &&+\lambda_1\log\lambda_1+\lambda_2\log\lambda_2+\lambda_3\log\lambda_3+\lambda_4\log\lambda_4, \end{eqnarray} i.e., $\mathcal{C_P}=\log_2D+\mathcal{S}(\rho_B)-\mathcal{S}(\rho_{A_\ell B_\ell})$, where $\mathcal{S}(\cdot)$ is the von Neumann entropy and \begin{eqnarray}\label{lambda} \lambda_{1,2}=\frac{1}{2}\Bigl\{\mathcal{B}^{(\ell)}_{11}+\mathcal{B}^{(\ell)}_{44} \pm\sqrt{\bigl(\mathcal{B}^{(\ell)}_{11}-\mathcal{B}^{(\ell)}_{44}\bigr)^2+ 4\mathcal{B}^{(\ell)}_{41}\mathcal{B}^{(\ell)}_{14}}\Bigr\}, \nonumber\\ \lambda_{3,4}=\frac{1}{2}\Bigl\{\mathcal{B}^{(\ell)}_{22}+\mathcal{B}^{(\ell)}_{33} \pm\sqrt{\bigl(\mathcal{B}^{(\ell)}_{22}-\mathcal{B}^{(\ell)}_{33}\bigr)^2+ 4\mathcal{B}^{(\ell)}_{23}\mathcal{B}^{(\ell)}_{32}}\Bigr\}, \end{eqnarray} where $D=2$. The channel capacity of the quantum channel between Alice in the first region and Bob in the second region is also given by (\ref{Cap}), but with the $\mathcal{B}^{(\ell)}_{ij}$ given by (\ref{aIbII}). \begin{figure} \caption{The channel's capacity for a system initially prepared in (a) a maximum entangled state, MES, with $c_x=c_y=c_z=-1$ and (b) a partial entangled state, PES, with $c_x=-0.9, c_y=-0.8$ and $c_z=-0.7$. The dash-dot, dot and solid curves represent the capacities of the channels $\rho_{A_IB_I}$, $\rho_{A_{II}B_{II}}$ and $\rho_{A_IB_{II}}$, respectively.} \end{figure} The dynamics of the channel capacity of the travelling state is displayed in Fig.(1) for systems initially prepared in maximum and partial entangled states.
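A minimal numerical sketch of the capacity, assuming the Holevo-type form $\mathcal{C_P}=\log_2 D+\mathcal{S}(\rho_B)-\mathcal{S}(\rho_{AB})$ with the X-state eigenvalues $\lambda_i$ of Eq.(\ref{lambda}); function and variable names are illustrative, not from the cited reference. For the maximum entangled channel at zero accelerations it returns the expected two bits.

```python
import numpy as np

# Sketch of the dense-coding capacity, assuming the Holevo-type form
# C_P = log2(D) + S(rho_B) - S(rho_AB), with the X-state eigenvalues lambda_i
# (here B41 = B14 and B32 = B23 are taken real and symmetric).
def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                     # 0 log 0 := 0
    return -np.sum(p * np.log2(p))

def capacity(B11, B22, B33, B44, B14, B23):
    pm = np.array([1.0, -1.0])
    lam12 = 0.5 * (B11 + B44 + pm * np.sqrt((B11 - B44)**2 + 4 * B14**2))
    lam34 = 0.5 * (B22 + B33 + pm * np.sqrt((B22 - B33)**2 + 4 * B23**2))
    S_B = entropy([B11 + B33, B22 + B44])            # Bob's reduced state
    S_AB = entropy(np.concatenate([lam12, lam34]))   # joint X-state
    return 1.0 + S_B - S_AB                          # log2(D) = 1 for D = 2

# maximum entangled channel at zero accelerations: 2 bits
print(capacity(B11=0.0, B22=0.5, B33=0.5, B44=0.0, B14=0.0, B23=-0.5))
```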
Fig.(1a) describes the behavior of $\mathcal{C_P}$ for the quantum channels which are generated between Alice and Bob in the first region, $\rho_{A_IB_{I}}$, in the second region, $\rho_{A_{II}B_{II}}$, and between Alice in the first region and Bob in the second region, $\rho_{A_IB_{II}}$. In this investigation, it is assumed that both qubits are accelerated. It is clear that the channel capacity $\mathcal{C_P}$ is maximum for the channel $\rho_{A_IB_{I}}$ at zero accelerations. However, as the accelerations of the travelling qubits increase, the channel capacity decreases smoothly to reach its minimum value at infinite accelerations. In the second region, the degree of entanglement of the state $\rho_{A_{II}B_{II}}$ is not maximum \cite{metwally3}, and therefore $\mathcal{C_P}$ is not maximum at zero accelerations. For larger values of the accelerations the channel's capacity slightly decreases. This shows that this generated entangled state is more robust than that in the first region, $\rho_{A_IB_{I}}$. The generated channel between Alice and anti-Bob, $\rho_{A_IB_{II}}$, has zero capacity at $r_a=r_b=0$, since no entanglement is generated between these qubits. However, as the accelerations increase, the channel's capacity $\mathcal{C_P}$ increases gradually to reach its maximum value at infinite accelerations. In Fig.(1b), the capacities of the travelling quantum channels are investigated for a system initially prepared in a partial entangled state, where we set $c_x=-0.9, c_y=-0.8$ and $c_z=-0.7$. It is clear that the general behavior is similar to that predicted in Fig.(1a) for the maximum entangled state, MES. However, in the first region, the initial capacity of the state $\rho_{A_IB_I}$ is smaller than that described in Fig.(1a) for the MES. As the accelerations of Alice's and Bob's qubits increase, the channel capacity $\mathcal{C_P}$ decreases.
In the second region $II$, the capacity of the quantum channel which is generated between anti-Alice and anti-Bob, $\rho_{A_{II}B_{II}}$, decreases smoothly as $r_{a(b)}$ increases. The capacity of the channel $\rho_{A_IB_{II}}$ increases as the accelerations of both qubits increase, but its maximum value is smaller than its corresponding one for the MES. From Fig.(1), one concludes that the channel capacity depends on the initial degree of entanglement of the travelling channel. In our recent work \cite{metwally1}, we have shown that the generated entangled channel in the first region has a larger degree of entanglement than that in the second region. This explains why the channel capacity in the first region is much larger than that displayed in the second region. On the other hand, the channel between a qubit and the anti-qubit of the other qubit is initially separable. However, this channel turns into an entangled state as the accelerations increase, and consequently its capacity increases. \section{Quantum coding} In this section, we investigate the possibility of using the travelling channels between the different users to perform the quantum coding protocol which was proposed by Bennett and Wiesner \cite{Ben1}. This protocol works as follows: \begin{enumerate} \item Alice encodes the given information in her qubit by using one of the local unitary operators $\mathcal{U}_i=I,\sigma_x,\sigma_y$ or $\sigma_z$ with probability $p_i$. Due to this operation the information is coded in the state \begin{equation} \rho_{cod}=\sum_{i=0}^{3}\Bigl\{p_i\,\mathcal{U}_i\otimes I_2\,{\rho_x}_{A_{\ell}B_{\ell}}\,\mathcal{U}_i^\dagger\otimes I_2\Bigr\}, \end{equation} where $I_2$ is the identity operator acting on Bob's qubit. \item Alice sends her qubit to Bob, who makes joint measurements on the two qubits.
The maximum amount of information which Bob can extract from Alice's message is bounded by \cite{metwally3,metwally4}, \begin{equation} I_{B}=\mathcal{S}(\rho_{cod})-\sum_{i=0}^{3}p_i\mathcal{S}\Bigl(\mathcal{U}_i\otimes I_2\,{\rho_x}_{A_{\ell}B_{\ell}}\,I_2\otimes\mathcal{U}_i^\dagger\Bigr). \end{equation} Explicitly, \begin{equation} I_B=-2(\lambda_+\log\lambda_{+}+\lambda_{-}\log\lambda_{-})-2(\lambda_{1}\log\lambda_1+ \lambda_{2} \log\lambda_2+\lambda_3\log\lambda_3+\lambda_4\log\lambda_4), \end{equation} where \begin{eqnarray} \lambda_{\pm}&=&\frac{1}{2}\Bigl\{1\pm \frac{1}{2}\sqrt{\{(\mathcal{B}_{11}^{(\ell)}+\mathcal{B}_{33}^{(\ell)})- (\mathcal{B}_{22}^{(\ell)}+\mathcal{B}_{44}^{(\ell)})\}^2+ 4(\mathcal{B}_{12}^{(\ell)}+\mathcal{B}_{34}^{(\ell)})(\mathcal{B}_{21}^{(\ell)}+\mathcal{B}_{43}^{(\ell)})}~\Bigr\}, \end{eqnarray} and $\lambda_i$, $i=1,\dots,4$, are given in (\ref{lambda}); it is assumed that Alice uses each unitary operator with equal probability, i.e., $p_i=\frac{1}{4}$ for $\mathcal{U}_i=I,\sigma_x,\sigma_y,\sigma_z$. \begin{figure} \caption{The amount of information decoded by Bob, $I_{Bob}$.} \label{Fig-BobI} \end{figure} In Fig.(2), the amount of information decoded by Bob, $I_{Bob}$, is plotted for different initial channels. Fig.(2a) describes the behavior of $I_{Bob}$ for a qubit system initially prepared in a maximally entangled state. It is clear that the amount of information which is decoded from the channel between Alice and Bob in the first region, $\rho_{A_IB_I}$, decreases smoothly to reach its minimum value as the accelerations tend to infinity. In the second region the information is coded in the state $\rho_{A_{II}B_{II}}$. In this case the amount of decoded information is smaller than that shown in the first region. However, the information decoded from the channel between Alice and Anti-Bob, $\rho_{A_{I}B_{II}}$, increases as the accelerations increase to reach its maximum bound.
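The Holevo-type bound above can be checked numerically in a simple limiting case. The following sketch is not from the paper: it evaluates $I_B=\mathcal{S}(\rho_{cod})-\sum_i p_i\mathcal{S}(\rho_i)$ with $p_i=1/4$ for the unaccelerated Bell state $|\phi^+\rangle$ rather than the accelerated state ${\rho_x}_{A_\ell B_\ell}$, and the function names (`von_neumann_entropy`, `dense_coding_bound`) are ours, introduced only for illustration.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho); zero eigenvalues are skipped."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Pauli matrices and the one-qubit identity
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dense_coding_bound(rho, probs=(0.25, 0.25, 0.25, 0.25)):
    """I_B = S(rho_cod) - sum_i p_i S(U_i rho U_i^dagger)."""
    rho_cod = np.zeros_like(rho)
    branch_entropy = 0.0
    for p, u in zip(probs, (I2, sx, sy, sz)):
        U = np.kron(u, I2)            # Alice encodes; Bob's qubit untouched
        r = U @ rho @ U.conj().T
        branch_entropy += p * von_neumann_entropy(r)
        rho_cod += p * r
    return von_neumann_entropy(rho_cod) - branch_entropy

# Maximally entangled state |phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_mes = np.outer(phi, phi.conj())
print(dense_coding_bound(rho_mes))    # 2 bits for an unaccelerated Bell pair
```

The value 2 recovers the well-known dense-coding capacity of a Bell pair; inserting the accelerated density matrix $\mathcal{B}^{(\ell)}$ in place of `rho_mes` would reproduce the curves of Fig.(2).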
This maximum bound represents the lower bound for the decoded information in the regions $I$ and $II$. The behavior of the decoded information for a system initially prepared in a partially entangled state is depicted in Fig.(2b). The general behavior is similar to that shown in Fig.(2a), but the information decoded by Bob, $I_{Bob}$, is much smaller than that coded in the maximally entangled state. The minimum bound is smaller than that displayed for a system initially prepared in the MES. The amount of information which is decoded from the quantum channel $\rho_{A_{I}B_{II}}$ increases as the accelerations increase, with a rate smaller than its corresponding one in Fig.(2a). \end{enumerate} \section{Conclusion} In this contribution, we investigate the dynamics of accelerated channels in non-inertial frames. It is assumed that the partners initially share maximally or partially entangled channels. The capacity of the accelerated channels depends on the initial state setting and the frames in which the partners are observed. It is shown that the capacity decays if both partners are observed in the same frame, and increases smoothly if the partners are observed in different frames. The capacity decays quickly in the region where the accelerated channel has a larger degree of entanglement. However, in some regions the decay is small because the accelerated channel there is only partially entangled. This shows that the capacity of a channel with a larger degree of entanglement decays faster than that of a channel with a smaller degree of entanglement. Starting from maximally entangled states, the capacity of the accelerated channels is much larger than that depicted for partially entangled channels as initial states. The possibility of employing the accelerated channels in different frames to perform quantum coding is investigated. It is assumed that the source supplies the partners with different classes of initial states. The amount of decoded information is quantified for different accelerated channels.
It is shown that the decoded information decays as the accelerations of both qubits increase. The decay rate depends on the frame in which the channel is accelerated: in the first region, the rate of decay is larger than that depicted in the second region. If the partners start with maximally entangled states, then the information decoded from the accelerated channels is much larger than that decoded from accelerated channels generated initially from partially entangled channels. The information decoded from a channel between a partner and the anti-partner increases gradually to reach its maximum bound in the limit of infinite acceleration. The increasing rate depends on the state initially shared between the partners. {\it In conclusion}, it is possible to use the accelerated quantum channels to send coded information between two partners. The channel capacity and the amount of decoded information depend on the frames in which the partners are observed and on the initial state shared between the partners. \end{document}
\begin{document} \title{Forcing the $\Sigma^1_3$-separation property} \begin{abstract} We generically construct a model in which the $\bf{\Sigma^1_3}$-separation property is true, i.e. every pair of disjoint $\bf{\Sigma^1_3}$-sets can be separated by a $\bf{\Delta^1_3}$-definable set. This answers an old question from the problem list ``Surrealist landscape with figures'' by A. Mathias from 1968. We also construct a model in which the (lightface) $\Sigma^1_3$-separation property is true. \end{abstract} \section{Introduction} The separation property, together with the reduction property and the uniformization property, are three classical notions which were introduced and first studied by Polish and Russian descriptive set theorists in the 1920s and 1930s. \begin{definition} Let $\Gamma$ be a (lightface or boldface) projective pointclass, and let $\check{\Gamma}=\{X \, : \, \omega^{\omega} \backslash X \in \Gamma \}$ denote the dual pointclass of $\Gamma$. \begin{itemize} \item We say that $\Gamma$ has the separation property iff every pair $A_1$ and $A_2$ of disjoint elements of $\Gamma$ has a separating set $C \in \Gamma \cap \check{\Gamma}$, where $C$ separates $A_1$ and $A_2$ if $A_1 \subset C$ and $A_2 \subset \omega^{\omega} \backslash C$. \item $\Gamma$ has the reduction property if for any pair $A_1$ and $A_2$ in $\Gamma$, there are disjoint sets $B_1 \subset A_1$ and $B_2 \subset A_2$, both in $\Gamma$, such that $A_1 \cup A_2= B_1 \cup B_2$. \item $\Gamma$ has the uniformization property if for every $A \subset \omega^{\omega} \times \omega^{\omega}$ in $\Gamma$ there is a uniformizing function $f_A$ whose graph is in $\Gamma$, where we say that $f_A$ is a uniformizing function for $A$ if $\mathrm{dom}\, f_A = pr_1(A)=\{ x \in \omega^{\omega} \, : \, \exists y ((x,y) \in A) \}$ and $f_A \subset A$. \end{itemize} \end{definition} It is rather straightforward to see that the uniformization property for $\Gamma$ implies the reduction property for $\Gamma$.
A classical result due to Novikov shows that the reduction property cannot hold simultaneously at both $\Gamma$ and $\check{\Gamma}$. Passing to complements immediately yields that the reduction property for $\Gamma$ implies that the dual pointclass $\check{\Gamma}$ has the separation property (see e.g. Y. Moschovakis' book \cite{Moschovakis} for much more information on the origin and history of these notions). Consequently, $\bf{\Sigma^1_1}$- and $\bf{\Pi^1_2}$-sets have the separation property, due to M. Kondo's theorem that $\bf{\Pi^1_1}$, and hence also $\bf{\Sigma^1_2}$, has the uniformization property. The fact that the $\bf{\Sigma^1_1}$-separation property is true was proved by N. Lusin already in 1927. This is as much as $\mathsf{ZFC}$ can prove about the separation property. In G\"odel's constructible universe $L$ there is a good $\Sigma^1_2$-definable wellorder of the reals, hence the $\bf{\Sigma^1_n}$-uniformization property holds for $n\ge 3$, so $\bf{\Pi^1_n}$-separation must hold as well. On the other hand, by the celebrated results of Y. Moschovakis, $\bf{\Delta}^1_{2n}$-determinacy implies the $\bf{\Pi}^1_{2n+1}$-uniformization property, so in particular under $\bf{\Delta}^1_2$-determinacy $\bf{\Sigma^1_3}$-separation holds. Note here that, due to H. Woodin, ${\Delta^1_2}$-determinacy together with $\bf{\Pi}^1_1$-determinacy already implies that $M_1^{\#}$ exists and is $\omega_1$-iterable (see \cite{MSW}, Theorem 1.22), which in turn implies the existence of an inner model with a Woodin cardinal. On the other hand, in the presence of ``every real has a sharp'', if the $\bf{\Sigma}^1_3$-separation property holds, then it does so because already $\bf{\Delta}^1_2$-determinacy holds. The above follows from Steel's and Woodin's solution to the fourth Delfino problem.
Steel showed that in the presence of ``every real has a sharp'', the $\bf{\Sigma}^1_3$-separation property implies the existence of an inner model with a Woodin cardinal as well (see \cite{Steel}, Theorem 0.7); more precisely, under the stated assumptions, for any real $y$ there is a proper class model $M$ with $y \in M$ and an ordinal $\delta$ such that $V^M_{\delta+1}$ is countable and $\delta$ is a Woodin cardinal in $M$. By results of Woodin, which were later reproved by I. Neeman using different methods (see \cite{Neeman}, Corollary 2.3), the latter assertion implies that $\bf{\Delta}^1_2$-determinacy must hold in $V$. It is natural to ask whether one can get a model of the $\bf{\Sigma^1_3}$-separation property from just assuming the consistency of $\mathsf{ZFC}$. Indeed, this question was asked long before the connection between determinacy assumptions and large cardinals was uncovered; it appears as Problem 3029 in A. Mathias' list of open problems compiled in 1968 (see \cite{Mathias}, or \cite{Kanovei1}, where the problem is stated again). The problem itself seems to have a nontrivial history of attempted solutions (see \cite{Kanovei2} for an account). Put in a wider context, this paper can be seen as following a tradition of establishing consequences of (local forms of) projective determinacy using the methods of forcing. There is an extensive list of results which deal with forcing statements concerning the Lebesgue measurability and the Baire property of certain levels of the projective hierarchy. For the separation property, L. Harrington, in unpublished notes dating back to 1974, constructed a model in which the separation property fails for both $\bf{\Sigma}^1_3$- and $\bf{\Pi}^1_3$-sets. In the same set of handwritten notes, he outlines how his proof can be altered to work for arbitrary $n \ge 3$. Very recently, using different methods, V. Kanovei and V.
Lyubetsky devised a forcing which, given an arbitrary $n \ge 3$, produces a universe in which the $\bf{\Sigma}^1_n$- and the $\bf{\Pi}^1_n$-separation property fail (see \cite{Kanovei1}). Yet tools for producing models which deal with the separation property, the reduction property or the uniformization property in a positive way were non-existent. The goal of this paper is to show that the $\bf{\Sigma^1_3}$-separation property has no large cardinal strength, which answers Mathias' question. \begin{theorem*} Starting with $L$ as the ground model, one can produce a set-generic extension $L[G]$ in which the $\bf{\Sigma^1_3}$-separation property holds. \end{theorem*} The proof method also allows us to tackle the lightface $\Sigma^1_3$-separation property: \begin{theorem*} Starting with $L$ as the ground model, one can produce a set-generic extension $L[G]$ in which the ${\Sigma^1_3}$-separation property holds. \end{theorem*} As always, the flexibility of the forcing method can be exploited to produce effects which cannot be inferred from projective determinacy assumptions alone. An example would be that the above proofs lift without pain to statements about the $\Sigma^1_1$-separation property in the generalized Baire space $\omega_1^{\omega_1}$. Another example, though speculative, involves lifting the above results to inner models with finitely many Woodin cardinals. We strongly believe that the proofs of the theorems above can serve as a blueprint to obtain models where the $\Sigma^1_{n+3}$-separation property holds, for arbitrary $n \in \omega$, while working over the canonical inner model with $n$ Woodin cardinals $M_n$ instead of $L$, as we do in this article. Note here that for even $n$ this would produce models which display a behaviour of the separation property contradicting the one implied by $\mathsf{PD}$. We finish the introduction with a short summary of the present article.
In section two we briefly introduce the forcings which will be used in order to prove the two main theorems. In the third section we construct a mild generic extension of $L$, denoted by $W$, which is for our needs the right ground model to work with. In the third subsection of section three, we prove an auxiliary result whose purpose is to highlight several important ideas in an easier setting. Our hope is that this way the reader obtains a better understanding of the proofs of the later main theorems. In section four we prove the boldface separation property and in the fifth section we prove the lightface separation property. The latter relies on several arguments from the boldface case and cannot be read separately. In the sixth section we discuss some interesting open questions. \section{Preliminaries} \subsection{Notation} The notation we use will be mostly standard, we hope. We write $\mathbb{P}=(\mathbb{P}_{\alpha} \, : \, \alpha < \gamma)$ for a forcing iteration of length $\gamma$ with initial segments $\mathbb{P}_{\alpha}$. The $\alpha$-th factor of the iteration will be denoted by $\mathbb{P}(\alpha)$. Note here that we drop the dot on $\mathbb{P}(\alpha)$, even though $\mathbb{P}(\alpha)$ is in fact a $\mathbb{P}_{\alpha}$-name of a partial order. If $\alpha' < \alpha < \gamma$, then we write $\mathbb{P}_{\alpha' \alpha}$ to denote the intermediate forcing of $\mathbb{P}$ which happens in the interval $[\alpha',\alpha)$, i.e. $\mathbb{P}_{\alpha' \alpha}$ is such that $\mathbb{P} \cong \mathbb{P}_{\alpha'} \ast \mathbb{P}_{\alpha' \alpha}$. We write $\mathbb{P} \Vdash \varphi$ whenever every condition in $\mathbb{P}$ forces $\varphi$, and make deliberate use of restricting partial orders below conditions; that is, if $p \in \mathbb{P} $ is such that $p \Vdash \varphi$, we let $\mathbb{P}':= \mathbb{P}_{\le p}:=\{ q \in \mathbb{P} \, : \, q \le p\}$ and use $\mathbb{P}'$ instead of $\mathbb{P}$.
This is supposed to reduce the notational load of some definitions and arguments. We also sometimes write $V[\mathbb{P}]\models \varphi$ to indicate that for every $\mathbb{P}$-generic filter $G$ over $V$, $V[G] \models \varphi$. \subsection{The forcings which are used} The forcings which we will use in the construction are all well-known. We nevertheless briefly introduce them and their main properties. \begin{definition} For a stationary $S \subset \omega_1$, the club-shooting forcing with finite conditions for $S$, denoted by $\mathbb{P}_S$, consists of conditions $p$ which are finite partial functions from $\omega_1$ to $S$ for which there exists a normal function $f: \omega_1 \rightarrow \omega_1$ such that $p \subset f$. $\mathbb{P}_S$ is ordered by end-extension. \end{definition} The club-shooting forcing $\mathbb{P}_S$ is the paradigmatic example of an $S$-\emph{proper forcing}, where we say that a forcing $\mathbb{P}$ is $S$-proper if and only if for every condition $p \in \mathbb{P}$, every sufficiently large $\theta$ and every countable $M \prec H(\theta)$ such that $M \cap \omega_1 \in S$ and $p, \mathbb{P} \in M$, there is a $q<p$ which is $(M, \mathbb{P})$-generic. \begin{lemma} The club-shooting forcing $\mathbb{P}_S$ generically adds a club through the stationary set $S \subset \omega_1$, while being $S$-proper and hence $\omega_1$-preserving. Moreover, stationary subsets $T$ of $S$ remain stationary in the generic extension. \end{lemma} We will choose a family of $S_{\beta}$'s so that we can shoot an arbitrary pattern of clubs through its elements, in such a way that this pattern can be read off from the stationarity of the $S_{\beta}$'s in the generic extension. For that it is crucial to recall that $S$-proper posets can be iterated with countable support and always yield an $S$-proper forcing again. This is proved exactly as in the well-known case of plain proper forcings (see \cite{Goldstern}, 3.19, for a proof).
\begin{fact} Let $(\mathbb{P}_{\alpha} \,:\, \alpha< \gamma)$ be a countable support iteration, and assume that at every stage $\alpha$, $\Vdash_{\alpha} ``\mathbb{P}(\alpha)$ is $S$-proper$"$. Then the iteration is an $S$-proper notion of forcing again. \end{fact} Once we decide to shoot a club through a stationary, co-stationary subset of $\omega_1$, this club will belong to all $\omega_1$-preserving outer models. This hands us a robust method of coding arbitrary information into a suitably chosen sequence of sets. Let $(S_{ \alpha} \, : \, \alpha < \omega_1)$ be a sequence of stationary, co-stationary subsets of $\omega_1$ such that $S_{\alpha} \cap S_{\beta} \in \mathrm{NS}_{\omega_1}$ for all $\alpha \ne \beta < \omega_1$, and let $S\subset \omega_1$ be stationary and such that $S \cap S_{\alpha} \in \mathrm{NS}_{\omega_1}$ for every $\alpha < \omega_1$. Note that we can always assume that these objects exist. The following coding method has been used several times already (see \cite{SyVera}). \begin{lemma} Let $r \in 2^{\omega_1}$ be arbitrary, and let $\mathbb{P}$ be a countable support iteration $(\mathbb{P}_{\alpha} \, : \, \alpha < \omega_1)$, inductively defined via \[\mathbb{P}(\alpha) := \mathbb{P}_{\omega_1 \backslash S_{2 \cdot \alpha}} \text{ if } r(\alpha)=1 \] and \[\mathbb{P}({\alpha}) := \mathbb{P}_{\omega_1 \backslash S_{(2 \cdot\alpha) +1}} \text{ if } r(\alpha)=0.\] Then in the resulting generic extension $V[\mathbb{P}]$, we have that $\forall \alpha < \omega_1$: \[ r(\alpha)=1 \text{ if and only if } S_{2 \cdot \alpha} \text{ is nonstationary, }\] and \[ r(\alpha)=0 \text{ if and only if } S_{(2 \cdot \alpha)+1} \text{ is nonstationary.} \] \end{lemma} \begin{proof} Note first that the iteration is $S$-proper, hence $\omega_1$-preserving. Assume that $r(\alpha)=1$. Then by definition of the iteration we must have shot a club through the complement of $S_{2 \cdot \alpha}$, thus $S_{2 \cdot \alpha}$ is nonstationary in $V[{\mathbb{P}}]$.
On the other hand, if $S_{2 \cdot \alpha}$ is nonstationary in $V[{\mathbb{P}}]$, then, as for $\beta \ne 2 \cdot \alpha$ every forcing of the form $\mathbb{P}_{\omega_1 \backslash S_{\beta}}$ is $S_{2 \cdot \alpha}$-proper, we can iterate with countable support and preserve $S_{2 \cdot \alpha}$-properness, and thus the stationarity of $S_{2 \cdot \alpha}$. So if $S_{2 \cdot \alpha}$ is nonstationary in $V[{\mathbb{P}}]$, we must have used $\mathbb{P}_{\omega_1 \backslash S_{2 \cdot \alpha}}$ in the iteration, so $r(\alpha)= 1$. \end{proof} The second forcing we use is the almost disjoint coding forcing due to R. Jensen and R. Solovay. We will identify subsets of $\omega$ with their characteristic functions and will use the word reals both for elements of $2^{\omega}$ and for subsets of $\omega$. Let $D=\{d_{\alpha} \, : \, \alpha < \aleph_1 \}$ be a family of almost disjoint subsets of $\omega$, i.e. a family such that if $d, d' \in D$ are distinct then $d \cap d'$ is finite. Let $X\subset \kappa$ for $\kappa \le 2^{\aleph_0}$ be a set of ordinals. Then there is a ccc forcing, the almost disjoint coding $\mathbb{A}_D(X)$, which adds a new real $x$ which codes $X$ relative to the family $D$ in the following way: $$\alpha \in X \text{ if and only if } x \cap d_{\alpha} \text{ is finite.}$$ \begin{definition} The almost disjoint coding $\mathbb{A}_D(X)$ relative to an almost disjoint family $D$ consists of conditions $(r, R) \in \omega^{<\omega} \times D^{<\omega}$, and $(s,S) < (r,R)$ holds if and only if \begin{enumerate} \item $r \subset s$ and $R \subset S$. \item If $\alpha \in X$ and $d_{\alpha} \in R$ then $r \cap d_{\alpha} = s \cap d_{\alpha}$. \end{enumerate} \end{definition} For the rest of this paper we let $D \in L$ be the definable almost disjoint family of reals one obtains by recursively adding the $<_L$-least real which is almost disjoint from all the previously picked reals.
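The combinatorics behind the equivalence $\alpha \in X$ iff $x \cap d_{\alpha}$ is finite can be illustrated by a finite toy computation (this is of course not the forcing itself, and all names below are ours). We take the standard almost disjoint family given by branches of the binary tree, with each branch represented by the set of integer codes of its initial segments; two distinct branches share only the codes of their common initial segment, so the sets are pairwise almost disjoint. A "real" $x$ that meets $d_{\alpha}$ cofinally exactly when $\alpha \notin X$ then decodes $X$:

```python
# Toy finite simulation of almost disjoint coding.

def segment_code(bits):
    # injectively code a finite 0/1-string as a natural number
    n = 1
    for b in bits:
        n = 2 * n + b
    return n

def branch_set(branch, depth):
    """d_b = codes of all initial segments of the branch, up to depth."""
    return {segment_code(branch[:k]) for k in range(1, depth + 1)}

DEPTH = 200
# three pairwise eventually different branches -> pairwise almost disjoint sets
branches = [[0] * DEPTH, [1] * DEPTH, [i % 2 for i in range(DEPTH)]]
D = [branch_set(b, DEPTH) for b in branches]

X = {1}                      # the set of indices we want to code
# x meets d_alpha in a large (cofinal) set exactly when alpha is NOT in X
x = set().union(*(D[a] for a in range(len(D)) if a not in X))

for a in range(len(D)):
    overlap = len(x & D[a])
    if a in X:
        assert overlap <= 1       # bounded overlap: "finite" intersection
    else:
        assert overlap == DEPTH   # cofinal overlap: "infinite" intersection
```

In the actual forcing the finite part $r$ of a condition grows into such an $x$, and clause (2) of the definition freezes $r \cap d_{\alpha}$ for the sets $d_{\alpha}$ with $\alpha \in X$ that have been promised in $R$.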
Whenever we use almost disjoint coding forcing, we assume that we code relative to this fixed almost disjoint family $D$. The last two forcings we briefly discuss are Jech's forcing for adding a Suslin tree with countable conditions and, given a Suslin tree $T$, the associated forcing which adds a cofinal branch through $T$. Recall that a set-theoretic tree $(T, <)$ is a Suslin tree if it is a normal tree of height $\omega_1$ with no uncountable antichain. As a result, forcing with a Suslin tree $S$, where conditions are just nodes in $S$, and which we always denote by $S$ again, is a ccc forcing of size $\aleph_1$. Jech's forcing to generically add a Suslin tree is defined as follows. \begin{definition} Let $\mathbb{P}_J$ be the forcing whose conditions are countable, normal trees ordered by end-extension, i.e. $T_1 < T_2$ if and only if $\exists \alpha < \text{height}(T_1) \, T_2= \{ t \upharpoonright \alpha \, : \, t \in T_1 \}.$ \end{definition} It is well known that $\mathbb{P}_J$ is $\sigma$-closed and adds a Suslin tree. In fact more is true: the generically added tree $T$ has the additional property that for any Suslin tree $S$ in the ground model, $S \times T$ will be a Suslin tree in $V[G]$. This can be used to obtain a robust coding method (see also \cite{Ho} for more applications). \begin{lemma}\label{oneSuslintreepreservation} Let $V$ be a universe and let $S \in V$ be a Suslin tree. If $\mathbb{P}_J$ is Jech's forcing for adding a Suslin tree, $g \subset \mathbb{P}_J$ is a generic filter and $T=\bigcup g$ is the generic tree, then $$V[g][T] \models S \text{ is Suslin.}$$ \end{lemma} \begin{proof} Let $\dot{T}$ be the $\mathbb{P}_J$-name for the generic Suslin tree. We claim that $\mathbb{P}_J \ast \dot{T}$ has a dense subset which is $\sigma$-closed. As $\sigma$-closed forcings always preserve ground model Suslin trees, this is sufficient.
To see why the claim is true, consider the following set: $$\{ (p, \check{q}) \, : \, p \in \mathbb{P}_J \land \text{height}(p)= \alpha+1 \land \check{q} \text{ is a node of $p$ of level } \alpha \}.$$ It is easy to check that this set is dense and $\sigma$-closed in $\mathbb{P}_J \ast \dot{T}$. \end{proof} A similar observation shows that we can add an $\omega_1$-sequence of such Suslin trees with a countably supported product. \begin{lemma}\label{ManySuslinTrees} Let $S$ be a Suslin tree in $V$ and let $\mathbb{P}$ be a countably supported product of length $\omega_1$ of forcings $\mathbb{P}_J$. Then in the generic extension $V[G]$ there is an $\omega_1$-sequence of Suslin trees $\vec{T}=(T_{\alpha} \, : \, \alpha \in \omega_1)$ such that for any finite $e \subset \omega_1$ the tree $S \times \prod_{i \in e} T_i$ will be a Suslin tree in $V[\vec{T}]$. \end{lemma} These sequences of Suslin trees will be used for coding in our proof and hence get a name. \begin{definition} Let $\vec{T} = (T_{\alpha} \, : \, \alpha < \kappa)$ be a sequence of Suslin trees. We say that the sequence is an independent family of Suslin trees if for every finite set $e= \{e_0, e_1,\dots,e_n\} \subset \kappa$ the product $T_{e_0} \times T_{e_1} \times \cdots \times T_{e_n}$ is a Suslin tree again. \end{definition} The upshot of being an independent sequence is that we can pick our favourite subset of indices and decide to shoot a branch through every tree whose index belongs to the set, while guaranteeing that no other Suslin tree from the sequence is destroyed. The following fact can be easily seen via induction on $\kappa$. \begin{fact} Let $\vec{T} = (T_{\alpha} \, : \, \alpha < \kappa)$ be independent and let $I \subset \kappa$ be arbitrary. If we form the finitely supported product of forcings $\mathbb{P}:=\prod_{\alpha \in I} T_{\alpha}$, then for every $\beta \notin I$, $V[\mathbb{P}] \models ``T_{\beta}$ is a Suslin tree$"$.
\end{fact} Thus independent Suslin trees are suitable to encode information, as soon as we can make the independent sequence definable. \section{A first step towards the proof of the boldface separation property} \subsection{The ground model $W$ of the iteration} We first have to create a suitable ground model $W$ over which the actual iteration will take place. $W$ will be a generic extension of $L$ satisfying $\mathsf{CH}$ and, as stated already earlier, it has the property that it contains two $\omega_1$-sequences $\vec{S}=\vec{S^1} \cup \vec{S^2}$ of mutually independent Suslin trees. The goal is to add the trees generically and, in a second forcing, use an $L$-definable sequence of stationary subsets of $\omega_1$ to code up the trees. The resulting universe will have the feature that any further outer universe which preserves stationary sets can decode the information written into the $L$-stationary sets in a $\Sigma_1(\omega_1)$-definable way, and hence has access to the sequence of independent Suslin trees $\vec{S}$. This property can be used to create two $\Sigma_1(\omega_1)$-predicates which are empty at first and which can be filled with arbitrary reals $x$, using $\aleph_1$-sized forcings with the countable chain condition. These forcings have the crucial feature that they are independent of the ground model they live in, a feature we will exploit heavily later on. We start with G\"odel's constructible universe $L$ as our ground model. Next we fix an appropriate sequence of stationary subsets of $\omega_1$. Recall that $\diamondsuit$ holds in our ground model $L$, i.e. there is a $\Sigma_1$-definable sequence $(a_{\alpha} \, : \, \alpha < \omega_1)$ of countable subsets of $\omega_1$ such that any set $A \subset \omega_1$ is guessed stationarily often by the $a_{\alpha}$'s, i.e. $\{ \alpha < \omega_1 \, : \, a_{\alpha}= A \cap \alpha \}$ is a stationary subset of $\omega_1$.
The $\diamondsuit$-sequence can be used to produce an easily definable sequence of stationary subsets: using a definable bijection between $\omega_1$ and $\omega_1 \cdot \omega_1$, we list the reals in $L$ in an $\omega_1 \cdot \omega_1$-sequence $(r_{\beta} \, : \, \beta < \omega_1 \cdot \omega_1)$ and define for every $\beta < \omega_1 \cdot \omega_1$ a stationary set in the following way: $$R_{\beta} := \{ \alpha < \omega_1 \, : \, a_{\alpha}= r_{\beta} \},$$ and let $\vec{R}= (R_{\beta} \, : \, \beta < \omega_1 \cdot \omega_1)$ denote this sequence. We proceed with adding an $\omega_1$-sequence of Suslin trees with a countably supported product of Jech's forcing $ \mathbb{P}_J$. We let \[\mathbb{R} := \prod_{\beta \in \omega_1} \mathbb{P}_J \] using countable support. This is a $\sigma$-closed, hence proper notion of forcing. We denote the generic filter of $\mathbb{R}$ by $\vec{S}=(S_{\alpha} \, : \, \alpha < \omega_1)$ and note that whenever $I \subset \omega_1$ is a set of indices, then for every $j \notin I$ the Suslin tree $S_j$ will remain a Suslin tree in the universe $L[\vec{S}][g]$, where $g \subset \prod_{i \in I} S_i$ denotes the generic filter for the forcing with the finitely supported product of the trees $S_i$, $i \in I$ (see \cite{Ho} for a proof of this fact). We fix a definable bijection between $[\omega_1]^{\omega}$ and $\omega_1$ and identify the trees in $(S_{\alpha }\, : \, \alpha < \omega_1)$ with their images under this bijection, so the trees will always be subsets of $\omega_1$ from now on. In a second step, we destroy each element of $\vec{S}$ by generically adding branches. That is, we let the second forcing be \[ \mathbb{R}':= \prod_{\alpha < \omega_1} S_{\alpha}, \] using countable support.
Note that by the argument from the proof of Lemma \ref{oneSuslintreepreservation}, $\mathbb{R} \ast \mathbb{R}'$ has a dense subset which is $\sigma$-closed, hence $L[\mathbb{R} \ast \mathbb{R}']$ is a $\sigma$-closed generic extension of $L$. In a third step we code the trees from $\vec{S}$ into the sequence of $L$-stationary sets $\vec{R}$ we produced earlier, using club-shooting forcing. It is important to note that the forcing we are about to define preserves Suslin trees, a fact we will show later. The forcing used in this third step will be denoted by $\mathbb{S}$. Fix $\alpha< \omega_1$ and consider the $\omega_1$-tree $S_{\alpha} \subset \omega_1$. We let $\mathbb{R}_{\alpha}$ be the countable support product which codes the characteristic function of $S_{\alpha}$ into the $\alpha$-th $\omega_1$-block of the $R_{\beta}$'s: $$\mathbb{R}_{\alpha} = \prod_{\gamma \in S_{\alpha}} \mathbb{P}_{\omega_1 \backslash R_{\omega_1 \cdot \alpha + 2 \cdot \gamma}} \times \prod_{\gamma \notin S_{\alpha}} \mathbb{P}_{\omega_1 \backslash R_{\omega_1 \cdot \alpha + 2 \cdot \gamma +1}}. $$ Recall that for a stationary, co-stationary $R \subset \omega_1$, $\mathbb{P}_{R}$ denotes the club-shooting forcing which shoots a club through $R$; thus $\mathbb{R}_{\alpha}$ codes up the tree $S_{\alpha}$ by writing the 0,1-pattern of the characteristic function of $S_{\alpha}$ into the $\alpha$-th $\omega_1$-block of $\vec{R}$. If we let $R$ be some stationary subset of $\omega_1$ which is disjoint from all the $R_{\beta}$'s, whose existence is guaranteed by $\diamondsuit$, then it is clear that for every $\alpha < \omega_1$, $\mathbb{R}_{\alpha}$ is an $R$-proper forcing which additionally is $\omega$-distributive. Then we let $\mathbb{S}$ be the countably supported iteration $$\mathbb{S}:=\bigstar_{\alpha< \omega_1} \mathbb{R}_{\alpha},$$ which is again $R$-proper and $\omega$-distributive.
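The index arithmetic of this block coding is purely mechanical, and a finite toy computation may help fix the pattern. In the sketch below (ours, not the paper's) a finite number `w` stands in for $\omega_1$, a tree is just a set of "levels" $\gamma$, and "killing" an index models making the corresponding $R_{\beta}$ nonstationary; bit $\gamma$ of $S_{\alpha}$ lands at index $w\cdot\alpha+2\gamma$ (bit $=1$) or $w\cdot\alpha+2\gamma+1$ (bit $=0$):

```python
# Toy bookkeeping for the block coding of the trees S_alpha into the R_beta's.
w = 1000                              # finite stand-in for the omega_1-blocks

def encode(trees):
    """Return the set of 'killed' indices coding each tree in its own block."""
    killed = set()
    for alpha, tree in enumerate(trees):
        for gamma in range(w // 2):
            if gamma in tree:
                killed.add(w * alpha + 2 * gamma)      # bit 1: even index
            else:
                killed.add(w * alpha + 2 * gamma + 1)  # bit 0: odd index
    return killed

def decode(killed, n_trees):
    """Read each tree back off from which even indices were killed."""
    return [{g for g in range(w // 2) if w * a + 2 * g in killed}
            for a in range(n_trees)]

trees = [{0, 5, 7}, set(), set(range(10))]
assert decode(encode(trees), len(trees)) == trees
```

Decoding only ever asks, for each pair of indices, which of the two was killed, which is exactly the $\Sigma_1(\omega_1)$-decoding used in the next paragraph.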
This way we can turn the generically added sequence of $\omega_1$-trees $\vec{S}$ into a definable sequence of $\omega_1$-trees. Indeed, if we work in $L[\vec{S}\ast G]$, where $\vec{S} \ast G$ is $\mathbb{R} \ast \mathbb{S}$-generic over $L$, then \begin{align*} \forall \alpha, \gamma < \omega_1 (&\gamma \in S_{\alpha} \Leftrightarrow R_{\omega_1 \cdot \alpha + 2 \cdot \gamma} \text{ is not stationary and} \\ & \gamma \notin S_{\alpha} \Leftrightarrow R_{\omega_1 \cdot \alpha + 2 \cdot \gamma +1} \text{ is not stationary}). \end{align*} Note here that the above formula can be written in a $\Sigma_1(\omega_1)$-way, as it reflects down to $\aleph_1$-sized, transitive models of $\mathsf{ZF}^-$ which contain a club through exactly one element of each pair $(R_{\omega_1 \cdot \alpha + 2 \cdot \gamma}, R_{\omega_1 \cdot \alpha + 2 \cdot \gamma +1})$, $\alpha, \gamma < \omega_1$. Finally we partition $\vec{S}$ into its even and its odd members and let \[ \vec{S^1}:= \{ S_{\alpha} \in \vec{S} \, : \, \alpha \text{ is even } \} \] and \[ \vec{S^2}:= \{ S_{\beta} \in \vec{S} \, : \, \beta \text { is odd} \}. \] Again, both sequences $\vec{S^1}$ and $\vec{S^2}$ are $\Sigma_1(\omega_1)$-definable in $W$ and in all stationary set preserving outer models of $W$. Our goal is to use $\vec{S^1}$ and $\vec{S^2}$ for coding again. For this it is essential that both sequences remain independent after forcing with $\mathbb{S}$, i.e. in the inner model $L[\mathbb{R}][\mathbb{S}]$. The following line of reasoning is similar to \cite{Ho}. Recall that for a forcing $\mathbb{P}$ and $M \prec H(\theta)$, a condition $q \in \mathbb{P}$ is $(M,\mathbb{P})$-generic iff for every maximal antichain $A \subset \mathbb{P}$ with $A \in M$, it is true that $ A \cap M$ is predense below $q$. The key fact is the following (see \cite{Miyamoto2} for the case where $\mathbb{P}$ is proper). \begin{lemma}\label{preservation of Suslin trees} Let $T$ be a Suslin tree, $S \subset \omega_1$ stationary and $\mathbb{P}$ an $S$-proper poset. Let $\theta$ be a sufficiently large cardinal.
Then the following are equivalent: \begin{enumerate} \item $\Vdash_{\mathbb{P}} T$ is Suslin; \item if $M \prec H_{\theta}$ is countable, $\eta = M \cap \omega_1 \in S$, $\mathbb{P}$ and $T$ are in $M$, and further $p \in \mathbb{P} \cap M$, then there is a condition $q<p$ such that for every node $t \in T_{\eta}$, $(q,t)$ is $(M, \mathbb{P} \times T)$-generic. \end{enumerate} \end{lemma} \begin{proof} For the direction from left to right, note first that $\Vdash_{\mathbb{P}} T$ is Suslin implies $\Vdash_{\mathbb{P}} T$ is ccc, and in particular it is true that for any countable elementary submodel $N[\dot{G}_{\mathbb{P}}] \prec H(\theta)^{V[\dot{G}_{\mathbb{P}}]}$, $\Vdash_{\mathbb{P}} \forall t \in T \, (t$ is $(N[\dot{G}_{\mathbb{P}}],T)$-generic$)$. Now if $M \prec H(\theta)$ and $M \cap \omega_1 = \eta \in S$ and $\mathbb{P},T \in M$ and $p \in \mathbb{P} \cap M$, then there is a $q<p$ such that $q$ is $(M,\mathbb{P})$-generic. So $q \Vdash \forall t \in T \, (t$ is $(M[\dot{G}_{\mathbb{P}}], T)$-generic$)$, and this in particular implies that $(q,t)$ is $(M, \mathbb{P} \times T)$-generic for all $t \in T_{\eta}$. For the direction from right to left, assume that $\Vdash \dot{A} \subset T$ is a maximal antichain. Let $B=\{(x,s) \in \mathbb{P} \times T \, : \, x \Vdash_{\mathbb{P}} \check{s} \in \dot{A} \}$; then $B$ is a predense subset of $\mathbb{P} \times T$. Let $\theta$ be a sufficiently large regular cardinal and let $M \prec H(\theta)$ be countable such that $M \cap \omega_1=\eta \in S$ and $\mathbb{P}, B,p,T \in M$. By our assumption there is a $q <_{\mathbb{P}} p$ such that $\forall t \in T_{\eta} \, ((q,t)$ is $(M, \mathbb{P} \times T)$-generic$)$. So $B \cap M$ is predense below $(q,t)$ for every $t \in T_{\eta}$, which yields that $q \Vdash_{\mathbb{P}} \forall t \in T_{\eta} \exists s<_{T} t \, (s \in \dot{A})$, and hence $q \Vdash \dot{A} \subset T \upharpoonright \eta$, so $\Vdash_{\mathbb{P}} T$ is Suslin.
\end{proof} In a similar way, one can show that Theorem 1.3 of \cite{Miyamoto2} holds true if we replace proper by $S$-proper for $S \subset \omega_1$ a stationary subset. \begin{theorem} Let $(\mathbb{P}_{\alpha})_{\alpha < \eta}$ be a countable support iteration of length $\eta$, let $S \subset \omega_1$ be stationary and suppose that for every $\alpha < \eta$, for the $\alpha$-th factor of the iteration $\dot{\mathbb{P}}(\alpha)$ it holds that $\Vdash_{\alpha} ``\dot{\mathbb{P}}(\alpha)$ is $S$-proper and preserves every Suslin tree.$"$ Then $\mathbb{P}_{\eta}$ is $S$-proper and preserves every Suslin tree. \end{theorem} So in order to argue that our forcing $\mathbb{S}$ preserves Suslin trees if used over $L[\mathbb{R}]$, it is sufficient to show that every factor preserves Suslin trees. This is indeed the case. \begin{lemma} Let $S \subset \omega_1$ be stationary and co-stationary. Then the club shooting forcing $\mathbb{P}_S$ preserves Suslin trees. \end{lemma} \begin{proof} Let $T$ be a Suslin tree. Because of Lemma \ref{preservation of Suslin trees}, it is enough to show that for any regular and sufficiently large $\theta$, every $M \prec H_{\theta}$ with $M \cap \omega_1 = \eta \in S$, and every $p \in \mathbb{P}_S \cap M$ there is a $q<p$ such that for every $t \in T_{\eta}$, $(q,t)$ is $(M,(\mathbb{P}_S \times T))$-generic. Note first that as $T$ is Suslin, every node $t \in T_{\eta}$ is an $(M,T)$-generic condition. Further, as forcing with a Suslin tree is $\omega$-distributive, $M[t]$ has the same countable sets as $M$. It is not hard to see that if $M\prec H(\theta)$ is such that $M \cap \omega_1 \in S$, then an $\omega$-length descending sequence of $\mathbb{P}_S$-conditions in $M$ whose domains converge to $M \cap \omega_1$ has a lower bound, namely the union of the conditions together with the point $M \cap \omega_1 \in S$ on top. We construct an $\omega$-length descending sequence of elements of $\mathbb{P}_S$ whose lower bound will be the desired condition.
We list the nodes of $T_{\eta}$ as $(t_i \, : \, i \in \omega)$ and consider the corresponding generic extensions $M[t_i]$. For every $M[t_i]$ we list the dense subsets of $\mathbb{P}_S$ which belong to $M[t_i]$, say $(D^{t_i}_n \, : \, n \in \omega)$, arrange all these dense sets in an $\omega \times \omega$-matrix and enumerate this matrix as an $\omega$-length sequence of dense sets $(D_i \, : \, i \in \omega)$. If $p=p_0 \in \mathbb{P}_S \cap M$ is arbitrary we can find, using the fact that $\forall i \, (\mathbb{P}_S \cap M[t_i] = M \cap \mathbb{P}_S)$, an $\omega$-length, descending sequence of conditions below $p_0$ in $\mathbb{P}_S \cap M$, say $(p_i \, : \, i \in \omega)$, such that $p_{i+1} \in M \cap \mathbb{P}_S$ is in $D_i$. We can also demand that the domains of the conditions $p_i$ converge to $M \cap \omega_1$. Then the $p_i$'s have a lower bound $p_{\omega} \in \mathbb{P}_S$, and $(t, p_{\omega})$ is an $(M, T \times \mathbb{P}_S)$-generic condition for every $t \in T_{\eta}$, as any $t \in T_{\eta}$ is $(M,T)$-generic and every such $t$ forces that $p_{\omega}$ is $(M[t], \mathbb{P}_S)$-generic; moreover $p_{\omega} < p$ as desired. \end{proof} Putting things together we obtain: \begin{theorem} The forcing $\mathbb{S}$ defined above preserves Suslin trees. \end{theorem} Let us set $W:= L[\mathbb{R} \ast (\mathbb{R}' \times \mathbb{S}) ]$, which will serve as our ground model for a second iteration of length $\omega_1$. Note that $W$ satisfies that it is an $\omega$-distributive generic extension of $L$. We end with a straightforward lemma which is used later in coding arguments. \begin{lemma}\label{a.d.coding preserves Suslin trees} Let $T$ be a Suslin tree and let $\mathbb{A}_F(X)$ be the almost disjoint coding which codes a subset $X$ of $\omega_1$ into a real with the help of an almost disjoint family of reals of size $\aleph_1$. Then $$\Vdash_{\mathbb{A}_{F}(X)} T \text{ is Suslin }$$ holds.
\end{lemma} \begin{proof} This is clear as $\mathbb{A}_{F}(X)$ has the Knaster property, thus the product $\mathbb{A}_{F}(X) \times T$ is ccc and $T$ must be Suslin in $V^{\mathbb{A}_{F}(X)}$. \end{proof} \subsection{Coding reals into Suslin trees} We introduced the model $W$ for one specific purpose: the possibility to code up reals into the sequence of definable Suslin trees $\vec{S^1}$ or $\vec{S^2}$ using a method which is not sensitive to its ground model. For the following, we let $W$ be our ground model, though the definitions will work, and will be used, for suitable outer models of $W$ as well. We will encounter this situation as ultimately we will iterate the coding forcings we are about to define. Let $x \in W$ be an arbitrary real, let $m,k \in \omega$, let $(x,m,k)$ denote the real which codes the triple consisting of $x$, $m$ and $k$ in some fixed recursive way, and let $i \in \{ 1,2\}$. Then we shall define the forcing $\hbox{Code}((x,m,k),i)$, which codes the real $(x,m,k)$ into $\aleph_1$-many $\omega$-blocks of $\vec{S^i}$, as a two step iteration: \[ \hbox{Code}((x,m,k),i):= \mathbb{C} (\omega_1)^L \ast \dot{\mathbb{A}} (\dot{Y}_{(x,m,k),i}) \] where the first factor is ordinary $\omega_1$-Cohen forcing, but defined in $L$, and the second factor codes a specific subset of $\omega_1$ denoted by $Y_{(x,m,k),i}$ into a real using almost disjoint coding forcing relative to the canonical, constructible almost disjoint family of reals $D$. We emphasize that in iterations of coding forcings, we still fall back to force with $(\mathbb{C} (\omega_1))^L$ as our first factor, that is, we never use the $\omega_1$-Cohen forcing of the current universe. Thus, iterating the coding forcings is in fact a hybrid of a product (namely the coordinates where we use $(\mathbb{C}(\omega_1))^L$) and a finite support iteration (the coordinates where we use the almost disjoint coding forcing). We shall discuss this later in more detail.
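The ``fixed recursive way'' of coding a triple $(x,m,k)$ into a single real is not spelled out in the text; the following toy sketch (our own illustration with hypothetical names, not the convention actually fixed in the paper) shows one such recursive coding, interleaving the bits of $x$ with the binary expansion of a Cantor pairing value for $(m,k)$:

```python
# Toy illustration (hypothetical convention): code a triple (x, m, k),
# where x is a "real" given as a 0/1-valued function on the naturals,
# into a single real. Even bits carry x, odd bits carry the binary
# expansion of the Cantor pairing value of (m, k).

def cantor_pair(m, k):
    # standard Cantor pairing function on omega x omega
    return (m + k) * (m + k + 1) // 2 + k

def cantor_unpair(p):
    # inverse of cantor_pair
    w = 0
    while (w + 1) * (w + 2) // 2 <= p:
        w += 1
    k = p - w * (w + 1) // 2
    return w - k, k

def triple_code(x, m, k):
    p = cantor_pair(m, k)
    def y(i):
        if i % 2 == 0:
            return x(i // 2)           # even bits: the real x
        return (p >> (i // 2)) & 1     # odd bits: binary expansion of p
    return y

def decode_triple(y, n_bits=64):
    # recover x (as a function) and (m, k) from the interleaved real
    x = lambda n: y(2 * n)
    p = sum(y(2 * j + 1) << j for j in range(n_bits))
    m, k = cantor_unpair(p)
    return x, m, k
```

Both directions are recursive, which is all that is needed of the coding.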
We let $g \subset \omega_1$ be a $\mathbb{C} (\omega_1)^L$-generic filter over $W$, and let $\rho: [\omega_1]^{\omega} \rightarrow \omega_1$ be some canonically definable, constructible bijection between these two sets. We use $\rho$ and $g$ to define the set $h \subset \omega_1$, which eventually shall be the set of indices of $\omega$-blocks of $\vec{S}^i$ where we code up the characteristic function of the real $(x,m,k)$. Let $h:= \{\rho( g \cap \alpha) \,: \, \alpha < \omega_1 \}$ and let $X \subset \omega_1$ be the $<$-least set (in some previously fixed well-order of $H(\omega_2)^{W[g]}$) which codes the following objects: \begin{itemize} \item The $<$-least set of $\omega_1$-branches in $W$ through elements of $\vec{S}$ which code $(x,m,k)$ at $\omega$-blocks which start at values in $h$, that is, we collect $\{ b_{\beta} \subset S_{\beta} \, : \, \beta= \omega \gamma + 2n, \gamma \in h \land n \in \omega \land n \notin (x,m,k) \}$ and $\{ b_{\beta} \subset S_{\beta} \, : \, \beta= \omega \gamma + 2n+1, \gamma \in h \land n \in \omega \land n \in (x,m,k) \}$. \item The $<$-least set of $\omega_1 \cdot \omega \cdot \omega_1$-many club subsets through $\vec{R}$, our $\Sigma_1 (\omega_1)$-definable sequence of $L$-stationary subsets of $\omega_1$ from the last section, which are necessary to compute every tree $S_{\beta} \in \vec{S}$ which shows up in the above item, using the $\Sigma_1 (\omega_1)$-formula from the previous section before Lemma 2.10. \end{itemize} Note that, when working in $L[X]$, if $\gamma \in h$ then we can read off $(x,m,k)$ by looking at the $\omega$-block of $\vec{S^i}$-trees starting at $\gamma$ and determining which trees have an $\omega_1$-branch in $L[X]$: \begin{itemize} \item[$(\ast)$] $n \in (x,m,k)$ if and only if $S^i_{\omega \cdot \gamma +2n+1}$ has an $\omega_1$-branch, and $n \notin (x,m,k)$ if and only if $S^i_{\omega \cdot \gamma +2n}$ has an $\omega_1$-branch.
\end{itemize} Note that $(\ast)$ is actually a formula $(\ast) ((x,m,k) ,\gamma)$ with two parameters $(x,m,k)$ and $\gamma$, but we will suppress this, as the parameters usually are clear from the context. Indeed if $n \notin (x,m,k)$ then we added a branch through $S^i_{\omega \cdot \gamma+ 2n}$. If on the other hand $S^i_{\omega \cdot\gamma +2n}$ is Suslin in $L[X]$ then we must have added an $\omega_1$-branch through $S^i_{\omega \cdot \gamma +2n+1}$, as we always add an $\omega_1$-branch through either $S^i_{\omega \cdot \gamma +2n+1}$ or $S^i_{\omega \cdot \gamma +2n}$, and adding branches through some $S^i_{\alpha}$'s will not affect the Suslinity of any other $S^i_{\beta}$ in $L[X]$, as $\vec{S}$ is independent. We note that we can apply an argument resembling David's trick in this situation. We rewrite the information of $X \subset \omega_1$ as a subset $Y \subset \omega_1$ using the following line of reasoning. It is clear that any transitive, $\aleph_1$-sized model $M$ of $\mathsf{ZF}P$ which contains $X$ will be able to correctly decode out of $X$ all the information.
Consequently, if we code the model $(M,\in)$ which contains $X$ as a set $X_M \subset \omega_1$, then for any uncountable $\beta$ such that $L_{\beta}[X_M] \models \mathsf{ZF}P$ and $X_M \in L_{\beta}[X_M]$: \[L_{\beta}[X_M] \models \text{``The model decoded out of }X_M \text{ satisfies $(\ast)$ for every $\gamma \in h \subset \omega_1$''.} \] In particular there will be an $\aleph_1$-sized ordinal $\beta$ as above, and we can fix a club $C \subset \omega_1$ and a sequence $(M_{\alpha} \, : \, \alpha \in C)$ of countable elementary submodels such that \[\forall \alpha \in C (M_{\alpha} \prec L_{\beta}[X_M] \land M_{\alpha} \cap \omega_1 = \alpha).\] Now let the set $Y\subset \omega_1$ code the pair $(C, X_M)$ such that the odd entries of $Y$ code $X_M$, and if $Y_0:=E(Y)$ denotes the set of even entries of $Y$ and $\{c_{\alpha} \, : \, \alpha < \omega_1\}$ is the increasing enumeration of $C$, then \begin{enumerate} \item $E(Y) \cap \omega$ codes a well-ordering of type $c_0$. \item $E(Y) \cap [\omega, c_0) = \emptyset$. \item For all $\beta$, $E(Y) \cap [c_{\beta}, c_{\beta} + \omega)$ codes a well-ordering of type $c_{\beta+1}$. \item For all $\beta$, $E(Y) \cap [c_{\beta}+\omega, c_{\beta+1})= \emptyset$. \end{enumerate} We obtain \begin{itemize} \item[$({\ast}{\ast})$] For any countable transitive model $M$ of $\mathsf{ZF}P$ such that $\omega_1^M=(\omega_1^L)^M$ and $ Y \cap \omega_1^M \in M$, $M$ can construct its version of the universe $L[Y \cap \omega_1^M]$, and the latter will see that there is an $\aleph_1^M$-sized transitive model $N \in L[Y \cap \omega_1^M]$ which models $(\ast)$ for $(x,m,k)$ and every $\gamma \in h \subset \omega_1^M$. \end{itemize} Thus we have a local version of the property $(\ast)$.
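The even/odd reading-off in $(\ast)$, by which the characteristic function of $(x,m,k)$ is recovered from the branch pattern of one $\omega$-block, can be mimicked in a finite toy model (a sketch of ours, not part of the construction):

```python
# Toy model of (*) for a single omega-block: has_branch[j] records
# whether the j-th tree of the block has acquired an omega_1-branch.
# By the coding convention, n lies in the coded set iff tree 2n+1 has
# a branch, and n lies outside it iff tree 2n has a branch; exactly
# one of the two holds for every n, since exactly one branch per pair
# was added and the sequence of trees is independent.

def encode_block(coded_set, block_len):
    # produce the branch pattern an (idealised) coding forcing would add
    has_branch = [False] * (2 * block_len)
    for n in range(block_len):
        if n in coded_set:
            has_branch[2 * n + 1] = True   # branch through the odd tree
        else:
            has_branch[2 * n] = True       # branch through the even tree
    return has_branch

def decode_block(has_branch):
    # read the coded set back off, as in (*)
    coded = set()
    for n in range(len(has_branch) // 2):
        # independence: exactly one tree of each pair got a branch
        assert has_branch[2 * n] != has_branch[2 * n + 1]
        if has_branch[2 * n + 1]:
            coded.add(n)
    return coded
```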
In the next step, working in $W[g]$ for $g\subset \mathbb{C} (\omega_1)^L$ generic over $W$, we use almost disjoint coding forcing $\mathbb{A}_D(Y)$ relative to the $<_L$-least almost disjoint family of reals $D \in L $ to code the set $Y$ into one real $r$. This forcing is well-known, has the ccc, and its definition only depends on the subset of $\omega_1$ we code; thus the almost disjoint coding forcing $\mathbb{A}_D(Y)$ will be independent of the surrounding universe in which we define it, as long as that universe has the right $\omega_1$ and contains the set $Y$. We finally obtain a real $r$ such that \begin{itemize} \item[$({\ast}{\ast}{\ast})$] For any countable, transitive model $M$ of $\mathsf{ZF}P$ such that $\omega_1^M=(\omega_1^L)^M$ and $ r \in M$, $M$ can construct its version of $L[r]$ which in turn thinks that there is a transitive $\mathsf{ZF}P$-model $N$ of size $\aleph_1^M$ such that $N$ believes $(\ast)$ for $(x,m,k)$ and every $\gamma \in h$. \end{itemize} Note that the above is a $\Pi^1_2(r)$-statement. We say in this situation that the real $(x,m,k)$ \emph{is written into $\vec{S}^i$}, or that $(x,m,k)$ \emph{is coded into} $\vec{S^i}$. If $(x,m,k)$ is coded into $\vec{S}^i$ and $r$ is a real witnessing this, then the set $h$, which is equal to $\{ \gamma < \omega_1 \, : \, \gamma \text{ is a starting point of an $\omega$-block where $(\ast)$ for $(x,m,k)$ holds}\}$, is dubbed (following \cite{FS}) the coding area of $(x,m,k)$ with respect to $r$. We want to iterate these coding forcings. As the first factor of a coding forcing will always be $(\mathbb{C}(\omega_1))^L$, an iteration of the coding forcings is in fact a hybrid of a (countably supported) product (namely the coordinates where we use $(\mathbb{C}(\omega_1))^L$) and an actual finite support iteration (the coordinates where we use almost disjoint coding forcing).
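For concreteness, the standard construction of an almost disjoint family of reals, of which the $<_L$-least family $D$ used above is a constructible instance, assigns to each $x \in 2^{\omega}$ the set of codes of its finite initial segments; distinct reals then share only finitely many segments. A finite sanity check of this construction (our toy sketch, not the coding forcing itself):

```python
# Toy sketch: an almost disjoint family indexed by elements of
# 2^omega. To x we associate d_x, the set of codes of its finite
# initial segments; for distinct x, y the intersection of d_x and d_y
# is finite (the codes of the shared initial segments), while each
# d_x itself is infinite.

def seg_code(bits):
    # code a finite 0/1-string injectively as a natural number
    # (1-prefixed binary reading)
    n = 1
    for b in bits:
        n = 2 * n + b
    return n

def d_x(x, length):
    # finite approximation to d_x: codes of the initial segments of x
    # of lengths 0, 1, ..., length - 1
    return {seg_code([x(i) for i in range(j)]) for j in range(length)}
```

For example, the constant-$0$ real and the real $0,1,0,1,\dots$ agree exactly on their segments of lengths $0$ and $1$, so the two associated sets meet in exactly two codes, no matter how far the approximation is taken.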
\begin{definition} A mixed support iteration $\mathbb{P}=(\mathbb{P}_{\beta}\,:\, {\beta< \alpha})$ is called legal if $\alpha < \omega_1$ and there exists a bookkeeping function $F: \alpha \rightarrow H(\omega_2)^2$ such that $\mathbb{P}$ is defined inductively using $F$ as follows: \begin{itemize} \item If $F(0)=(x,i)$, where $x$ is a real, $i\in \{1,2\}$, then $\mathbb{P}_0= \hbox{Code}(x,i)$. Otherwise $\mathbb{P}_0$ is the trivial forcing. \item If $\beta>0$ and $\mathbb{P}_{\beta}$ is defined, $G_{\beta} \subset \mathbb{P}_{\beta}$ is a generic filter over $W$, $F(\beta)=(\dot{x}, i)$, where $\dot{x}$ is a $\mathbb{P}_{\beta}$-name of a real, $i \in \{1,2\}$ and $\dot{x}^{G_{\beta}}=x$, then, working in $W[G_{\beta}]$, we let $\mathbb{P}(\beta):= \hbox{Code}(x,i)$, that is, we code $x$ into $\vec{S}^i$ using our coding forcing. We shall use full (i.e. countable) support on the $(\mathbb{C}(\omega_1))^L$-coordinates and finite support on the coordinates where we use almost disjoint coding forcing. \end{itemize} \end{definition} Informally speaking, a legal forcing just decides to code the reals which the bookkeeping $F$ provides into either $\vec{S^1}$ or $\vec{S^2}$. Note further that the notion of legal can be defined in exactly the same way over any $W[G]$, where $G$ is a $\mathbb{P}$-generic filter over $W$ for a legal forcing $\mathbb{P}$.
Finally note that instead of creating $\omega$-blocks of Suslin trees using $\mathbb{C} (\omega_1)^L$, where we code the branches every single time we code a real, we could have also defined an altered ground model $W'$ as $W[g]$, where $g \subset \prod \mathbb{C} (\omega_1)$ is generic for the countably supported product of $\aleph_1$-many copies of $\omega_1$-Cohen forcing, and then worked over $W'$ using exclusively almost disjoint coding forcings which first pick one coordinate $g_{\alpha}$, $\alpha< \omega_1$, of $g$ in an injective way, and then code the $\aleph_1$-many branches along $g_{\alpha}$ using almost disjoint coding forcings as described above. The difference between these approaches is only of symbolic nature; we opted for the one we chose because of a slightly neater presentation. We obtain the following first properties of legal forcings: \begin{lemma} \begin{enumerate} \item If $\mathbb{P}=(\mathbb{P}(\beta) \, : \, \beta < \delta) \in W$ is legal then for every $\beta < \delta$, $\Vdash_{\mathbb{P}_{\beta}} |\mathbb{P}(\beta)|= \aleph_1$, thus every factor of $\mathbb{P}$ is forced to have size $\aleph_1$. \item Every legal forcing over $W$ preserves $\aleph_1$ and $\mathsf{CH}$. \item The product of two legal forcings is legal again. \end{enumerate} \end{lemma} \begin{proof} The first assertion follows immediately from the definition. To see the second item we exploit some symmetry. Indeed, every legal $\mathbb{P} = \bigstar_{\beta < \delta} \mathbb{P}(\beta)= \bigstar_{\beta < \delta} \bigl( (\mathbb{C} (\omega_1))^L \ast \dot{\mathbb{A}}_D (\dot{Y}_{\beta} ) \bigr)$ can be rewritten as \[\Bigl(\prod_{\beta < \delta} (\mathbb{C} (\omega_1))^L \Bigr)\ast \bigstar_{\beta < \delta} \dot{\mathbb{A}}_D (\dot{Y}_{\beta} )\] (again with mixed support).
The latter representation is easily seen to be of the form $\mathbb{Q} \ast \bigstar_{\beta < \delta} \dot{\mathbb{A}}_D(\dot{Y}_{\beta} )$, where $\mathbb{Q}$ is $\sigma$-closed and the second part is a finite support iteration of ccc forcings, hence $\aleph_1$ is preserved. That $\mathsf{CH}$ holds is standard. To see that the third item is true, we recall that the definition of $\hbox{Code}(x,i)$ is independent of the surrounding universe as long as it contains the real $x$, thus we see that a two step iteration $\mathbb{P}_1 \ast \mathbb{P}_2$ of two legal $\mathbb{P}_1, \mathbb{P}_2 \in W$ is in fact a product. As the iteration of two legal forcings (in fact the iteration of countably many legal forcings) is legal as well, the proof is done. \end{proof} The second assertion of the last lemma immediately gives us the following: \begin{corollary} Let $\mathbb{P}= (\mathbb{P}(\beta) \, : \, \beta < \delta) \in W$ be a legal forcing over $W$. Then $W[\mathbb{P}] \models \mathsf{CH}$. Further, if $\mathbb{P}= (\mathbb{P}(\alpha) \, : \, \alpha < \omega_1) \in W$ is an $\omega_1$-length iteration such that each initial segment of the iteration is legal over $W$, then $W[\mathbb{P}] \models \mathsf{CH}$. \end{corollary} In an iteration of coding forcings we do not add any unwanted or accidental solutions to our $\Sigma^1_3$-predicate given by $({\ast} {\ast} {\ast})$, as we shall show now. The set of triples of (names of) reals which are enumerated by the bookkeeping function $F \in W$ which comes along with a legal $\mathbb{P} = (\mathbb{P}(\beta) \, : \, \beta < \delta)$ we call the set of reals coded by $\mathbb{P}$.
That is, if \[ \mathbb{P}(\beta)= (\mathbb{C}(\omega_1))^L \ast \dot{\mathbb{A}}_D (\dot{Y}_{(\dot{x}_{\beta}, \dot{m}_{\beta}, \dot{k}_{\beta} ) } ) \] and $G \subset \mathbb{P}$ is a generic filter and if we let for every $\beta < \delta$, $ \dot{x}_{\beta}^G =:x_{\beta}$, $\dot{m}_{\beta}^G =:m_{\beta}$, $\dot{k}_{\beta}^G =:k_{\beta}$, then $\{ (x_{\beta},m_{\beta},k_{\beta} ) \, : \, \beta < \delta \}$ is the set of reals coded by $\mathbb{P}$ and $G$ (though we will suppress the $G$). \begin{lemma} Let $\mathbb{P} \in W$ be legal, $\mathbb{P}=(\mathbb{P}_{\beta} \, : \, \beta < \delta)$, let $G \subset \mathbb{P}$ be generic over $W$ and let $\{ (x_{\beta},m_{\beta},k_{\beta} ) \, : \, \beta < \delta\}$ be the set of (triples of) reals coded by $\mathbb{P}$ and $G$. Let \[A:= \{ (x,m,k) \in W[G] \, : \, \exists r \, ( ({\ast}{\ast}{\ast} )\text{ holds for $r$ and $(x,m,k)$} ) \}. \] Then in $W[G]$, the set of reals which belong to $A$ is exactly $\{ (x_{\beta},m_{\beta},k_{\beta} ) \, : \, \beta < \delta\}$, that is, we do not code any unwanted information accidentally. \end{lemma} \begin{proof} Let $G$ be $\mathbb{P}$-generic over $W$. Let $g= (g_{\beta} \, : \, {\beta} < \delta)$ be the sequence of the $\delta$-many subsets of $\omega_1$ added by the $(\mathbb{C} (\omega_1))^L$-parts of the factors of $\mathbb{P}$. We let $\rho : ([\omega_1]^{\omega})^L \rightarrow \omega_1$ be our fixed, constructible bijection and let $h_{\beta}= \{ \rho (g_{\beta} \cap \alpha) \, : \, \alpha < \omega_1\}$. Note that the family $\{h_{\beta} \,: \, \beta < \delta \}$ forms an almost disjoint family of subsets of $\omega_1$. Thus there is $\alpha < \omega_1$ such that $\alpha> \sup(h_{\beta_1}\cap h_{\beta_2})$ for all $\beta_1 \ne \beta_2 < \delta$ and, additionally, $\alpha$ is an index not used by the iterated coding forcing $\mathbb{P}$, where we say that an index $i$ of $\vec{S}$ is used by $\mathbb{P}$ whenever an $\omega_1$-branch through $S_i$ is coded by a factor of $\mathbb{P}$.
We fix such an $\alpha$ and $S_{\alpha} \in \vec{S}$. We claim that there is no real $r$ in $W[G]$ such that $W[G] \models L[r] \models ``S_{\alpha}$ has an $\omega_1$-branch$"$. We show this by pulling the forcing $S_{\alpha}$ out of $\mathbb{P}$. Indeed, if we consider $W[\mathbb{P}]=L[\mathbb{Q}^0] [\mathbb{Q}^1][\mathbb{Q}^2][\mathbb{P}]$, then, for $S_{\alpha}$ as above, we can rearrange this to $W[\mathbb{P}]= L [\mathbb{Q}^0] [\mathbb{Q}'^1 \times S_{\alpha} ] [ \mathbb{Q}^2] [\mathbb{P}] = W[\mathbb{P}'] [S_{\alpha} ]$, where $\mathbb{Q}'^1$ is $\prod_{\beta \ne \alpha} S_{\beta}$ and $\mathbb{P}'$ is $\mathbb{Q}^0 \ast \mathbb{Q}'^1 \ast \mathbb{Q}^2 \ast \mathbb{P}$. Note now that $S_{\alpha}$ is still a Suslin tree in $W[\mathbb{P}']$, by the fact that $\vec{S}$ is independent and that no factor of $\mathbb{P}'$ besides the trees from $\vec{S}$ used in $\mathbb{P}'$ destroys Suslin trees; hence $S_{\alpha}$ is $\omega$-distributive over $W[\mathbb{P}']$ and $2^{\omega} \cap W[\mathbb{P}] = 2^{\omega} \cap W[\mathbb{P}']$. But this implies that \[W[\mathbb{P}'] \models \lnot \exists r \, L[r] \models `` S_{\alpha} \text{ has an $\omega_1$-branch}" \] as the existence of an $\omega_1$-branch through $S_{\alpha}$ in the inner model $L[r]$ would imply the existence of such a branch in $W[\mathbb{P}']$. Further, as no new reals appear when passing to $W[\mathbb{P}]$, we also get \[W[\mathbb{P}] \models \lnot \exists r \, L[r] \models `` S_{\alpha} \text{ has an $\omega_1$-branch}". \] On the other hand, any unwanted information, i.e. any $(x,m,k) \notin \{(x_{\beta}, m_{\beta},k_{\beta}) \, : \, \beta < \delta \}$ such that $W[G] \models (x,m,k) \in A$, would also witness that \[L[r] \models `` S_{\alpha} \text{ has an $\omega_1$-branch}"\] for unboundedly many $\alpha$'s which are not in any of the $h_{\beta}$'s from above.
Indeed, if $r$ witnesses $({\ast} {\ast} {\ast})$ for $(x,m,k)$, then there must also be an uncountable, transitive $\mathsf{ZF}P$-model $M$ with $r \in M$ whose local version of $L[r]$ believes $(\ast)$ for every $\gamma \in h$, as otherwise we could find a countable $M_0 \prec M$ with $r, x \in M_0$ whose transitive collapse $\bar{M}_0$ is a counterexample to the truth of $({\ast} {\ast} {\ast})$, which is a contradiction. If $M$ is an uncountable, transitive $\mathsf{ZF}P$-model as above, then $L[r]^M \models ``S_{\alpha}$ has an $\omega_1$-branch$"$, and as the trees from $\vec{S}$ are $\Sigma_1(\omega_1)$-definable, and as the existence of an $\omega_1$-branch is again a $\Sigma_1(\omega_1)$-statement, we obtain by upwards absoluteness that $L[r] \models ``S_{\alpha}$ has an $\omega_1$-branch$"$, as claimed. In particular, as $(x,m,k ) \in A$, $r$ will satisfy that \[n \in (x,m,k) \rightarrow L[r] \models ``S_{\omega \gamma+2n+1} \text{ has an $\omega_1$-branch}" \] and \[ n \notin (x,m,k) \rightarrow L[r] \models ``S_{\omega \gamma+2n} \text{ has an $\omega_1$-branch}" \] for $\omega_1$-many $\gamma$'s. But by the argument above, only trees which we used in one of the factors of $\mathbb{P}$ have this property, so there cannot be unwanted codes. \end{proof} \subsection{An auxiliary result} We proceed by proving first the following auxiliary theorem, whose proof introduces some of the key ideas and will serve as a simplified blueprint for the proof of the main results. \begin{theorem} There is a generic extension $L[G]$ of $L$ in which there is a real $R_0$ such that every pair of disjoint (lightface) $\Sigma^1_3$-sets can be separated by a $\Delta^1_3(R_0)$-formula.
\end{theorem} For its proof, we will use the two easily definable $\omega_1$-sequences of Suslin trees on $\omega_1$, $\vec{S}=\vec{S}^1 \cup \vec{S}^2$, and branch shooting forcings to create for every pair $(A_m,A_k)$ of disjoint $\Sigma^1_3$-definable sets of reals a $\Delta^1_3 (R_0)$-definable separating set $D_{m,k} \supset A_m$. Using a bookkeeping function we list all the triples $(x,m,k)$ where $x$ is a real and $m, k\in \omega$, and decide for every such triple whether we code it into $\vec{S^1}$, which is equivalent to putting it into $D_{m,k}$, or code it into $\vec{S^2}$, which eventually should place it in the complement $D_{m,k}^c$. Using coding arguments, the sets $D_{m,k}$ and their complements will be $\Sigma^1_3(R_0)$-definable. The fact that we have to decide at every stage where to put the current real $x$ before the iteration is actually finished seems to be somewhat daring, as the evaluation of the $\Pi^1_3$- and $\Sigma^1_3$-sets varies as we generically enlarge our surrounding universe along the iteration. Additionally one has to deal with possible degenerate cases which stem from a certain amount of self-referentiality in the way we set up things. Indeed it could happen that forcing a triple $(x,m,k)$ into one side, $D_{m,k}$ say, could simultaneously force that $x$ will become a member of $A_k$ in the generic extension, thus preventing $D_{m,k}$ from actually separating $A_m$ and $A_k$. A careful case distinction will show that this problem can be overcome though. \subsection{Definition of the iteration over $W$} For $n \in \omega$ let \[\varphi_n(v_0)= \exists v_1 \psi_n(v_0,v_1)\] be the $n$-th formula in an enumeration of the $\Sigma^1_3$-formulas with one free variable. Let \[A_n:= \{ x \in 2^{\omega} \, : \, \varphi_n(x) \text{ is true} \},\] so $A_n$ is the set of reals whose definition uses the $n$-th $\Sigma^1_3$-formula in our enumeration.
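The bookkeeping function driving the iteration must visit every instruction unboundedly often. A countable toy analogue (our hypothetical implementation on $\omega$, not the actual bookkeeping on $\omega_1$) achieves this by reading a stage as a Cantor pair of a dummy counter and the actual instruction:

```python
# Toy analogue on omega of a bookkeeping function which hits every
# instruction infinitely often: decode the stage n as a Cantor pair
# (s, t), discard the counter s, and decode t as the instruction.

def pair(m, k):
    # Cantor pairing function on omega x omega
    return (m + k) * (m + k + 1) // 2 + k

def unpair(p):
    # inverse of pair
    w = 0
    while (w + 1) * (w + 2) // 2 <= p:
        w += 1
    k = p - w * (w + 1) // 2
    return w - k, k

def F(n):
    s, t = unpair(n)   # s is a dummy counter; t codes the instruction
    return unpair(t)   # here an instruction is just a pair (m, k)
```

Since every instruction $t$ is paired with every counter $s$, each value of $F$ is attained at infinitely many stages, mirroring the requirement below that every quadruple is hit unboundedly often below $\omega_1$.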
We force with an $\omega_1$-length mixed support iteration of legal forcings which all have size $\aleph_1$, and use a definable, surjective bookkeeping function \[ F: \omega_1 \rightarrow \omega_1 \times \omega_1 \times \omega \times \omega\] to determine the iteration. We demand that for every $\alpha < \omega_1$, the first projection of $F(\alpha)$ is strictly smaller than $\alpha$. We also assume that every quadruple $(\beta, \gamma, m,k)$ in $\omega_1 \times \omega_1 \times \omega \times \omega$ is hit unboundedly often by $F$. The purpose of $F$ is to list all triples of the form $(x,m,k)$, where $x$ is a real in some intermediate universe of our iteration and $m,k \in \omega$ correspond to a pair $(\varphi_m,\varphi_k)$ of $\Sigma^1_3$-formulas. The iteration will be defined in such a way that, at every stage $\beta$ of the iteration, whenever some triple $(x,m,k)$ is considered by $F$, we must decide immediately whether to code (a real coding) the triple $(x,m,k)$ somewhere into the $\vec{S^1}$- or the $\vec{S^2}$-sequence. The set of codes written into $\vec{S}^1$ which contain $m,k$ will result in the $\Sigma^1_3$-set $D^1_{m,k}$, which is a superset of $A_m$; the set of codes containing $m,k$ which are written into $\vec{S^2}$ shall result in the $\Sigma^1_3$-superset $D^2_{m,k}$ of $A_k$. The real $R_0$ will be used to indicate, in a uniform way for all $m,k$, the set of $\aleph_1$-many $\omega$-blocks of $\vec{S}$ which represent unreliable data we should not use for our separating sets. The reader should think of $R_0$ as an error term, modulo which the separating sets will work. That is, $R_0 \in 2^{\omega}$ is such that it codes $\aleph_1$-many $\omega$-blocks of $\vec{S}$, and $D^1_{m,k}$ and $D^2_{m,k}$ should be the sets of $(x,m,k)$'s which are coded into $\vec{S}^1$ and $\vec{S}^2$ respectively and whose coding areas are almost disjoint from the set of ordinals coded by $R_0$, in that their intersection is bounded below $\omega_1$.
More precisely let \begin{align*} D^1_{m,k} (R_0):= \{ (x,m,k) \, : &\, x \in 2^{\omega} \land (x,m,k) \text{ is coded into } \vec{S}^1 \\&\text{and its coding area is almost disjoint} \\&\text{ from the indices coded by $R_0$} \} \end{align*} and let $D^2_{m,k}(R_0)$ be defined similarly. Our goal is to have $D^1_{m,k} (R_0) \cap D^2_{m,k}(R_0) = \emptyset$ for every $m,k \in \omega$; once this is achieved, we have found our separating sets for $A_m$ and $A_k$. We proceed with the details of the inductive construction of the forcing iteration. Assume that we are at some stage $\alpha < \omega_1$ of our iteration, let $\mathbb{P}_{\alpha}$ denote the partial order we have defined so far, and let $G_{\alpha}$ denote a generic filter for $\mathbb{P}_{\alpha}$. We inductively assume in addition that we have created a $\mathbb{P}_{\alpha}$-name of a set $\dot{b}_{\alpha}$ which is forced to be a set of countably many $\omega$-blocks of $\vec{S^1}$ and $\vec{S^2}$. Our goal is to define the next forcing $\dot{\mathbb{Q}}_{\alpha}$ which we shall use. As will become clear after finishing the definition of the iteration, we can assume that $\mathbb{P}_{\alpha}$ is a legal notion of forcing. We look at the value $F(\alpha)$ and define the forcing $\dot{\mathbb{Q}}_{\alpha}$ according to $F(\alpha)$ by cases as follows. \subsubsection{Case a} For the first case we assume that $F(\alpha) = ( \beta, \gamma, m,k)$, and that the $\gamma$-th (in some wellorder of $W$) name of a real of $W^{\mathbb{P}_{\beta}}$ is $\dot{x}$. We ask whether there exists a forcing $\mathbb{P}$ such that \[W[G_{\alpha}] \models \mathbb{P} \text{ is legal and } \mathbb{P} \Vdash \exists z (\varphi_m(z) \land\varphi_k(z)) .\] If there is such a legal $\mathbb{P}$, then we use it, i.e. we fix the $<$-least such forcing and let $\mathbb{P}(\alpha):= \mathbb{P}$, let $\mathbb{P}_{\alpha+1}=\mathbb{P}_{\alpha} \ast \mathbb{P}(\alpha)$, and let $G_{\alpha+1}$ be $\mathbb{P}_{\alpha+1}$-generic over $W$.
\subsubsection{Case b} We assume again that $F(\alpha) = ( \beta, \gamma, m,k)$, and that the $\gamma$-th (in some wellorder of $W$) name of a real of $W^{\mathbb{P}_{\beta}}$ is $\dot{x}$. Let $\dot{x}^{G_{\alpha}}=x$. Now we assume that case a fails, i.e. in $W[G_{\alpha}]$, there is no legal $\mathbb{P}$ such that $\mathbb{P} \Vdash \exists z (\varphi_m(z) \land \varphi_k(z) )$. We shall distinguish three sub-cases. \begin{itemize} \item[(i)] First assume that there is a legal forcing $\mathbb{Q}$ such that \begin{align*} W[G_{\alpha}] \models &\, \mathbb{Q} \Vdash \varphi_m(x). \end{align*} In this situation, we will code $(x,m,k)$ into the $\vec{S}^1$-sequence, i.e. we let \[ \mathbb{P}(\alpha):= \hbox{Code} ((x,m,k),1) \] and set $\mathbb{P}_{\alpha+1}:=\mathbb{P}_{\alpha} \ast \mathbb{P}(\alpha)$. The upshot of the arguments above is the following: \begin{flushleft} \textbf{Claim:} Let $G_{\alpha+1}$ be a $\mathbb{P}_{\alpha+1}$-generic filter over $W$ and let $\mathbb{P}$ be an arbitrary legal forcing in $W[G_{\alpha+1}]$. Then \end{flushleft} \[ \cancel{\Vdash}_{\mathbb{P}} \, x \in A_k.\] \begin{proof} Indeed, if not, then pick $\mathbb{P} \in W[G_{\alpha+1}]$ such that there is a $p \in \mathbb{P}$ with $p \Vdash_{\mathbb{P}} x \in A_k$. If we consider $\mathbb{P}$ below the condition $p$, we obtain a legal forcing $\mathbb{P}_{ \le p}$ again, and $\mathbb{Q} \times \mathbb{P}_{\le p} \Vdash x \in A_m \cap A_k$, because $\mathbb{Q}$ introduces a real $r_m$ which witnesses that $\varphi_m(x)$ holds true. In particular $\varphi_m(x)$ is true in all outer models of $W[G_{\alpha+1}] [\mathbb{Q}]$ by upwards absoluteness of $\Sigma^1_3$-statements, which follows from Shoenfield absoluteness. Likewise, $\mathbb{P}_{\le p}$ shows that $W[G_{\alpha+1}] [\mathbb{P}_{ \le p}] \models \varphi_k (x)$, thus $\mathbb{Q} \times \mathbb{P}_{\le p}$ is a legal forcing which forces $x \in A_m \cap A_k$, which is a contradiction.
\end{proof} \item[(ii)] The second subcase is symmetric to the first one. We assume that there is no legal forcing $\mathbb{Q}$ for which $\Vdash_{\mathbb{Q}} \varphi_m(x)$ is true, but there is a legal $\mathbb{Q}$ for which it is true that \begin{align*} W[G_{\alpha}] \models \mathbb{Q} \Vdash \varphi_k(x). \end{align*} Then we code $x$ into the $\vec{S^2}$-sequence with the usual coding. Note that by the symmetric argument from above, no further legal extension will ever satisfy that $x \in A_m$. \item[(iii)] In the final subcase, there is no legal forcing which forces $x \in A_m \cup A_k$, and we are free to choose where to code $x$. In that situation we settle on coding $(x,m,k)$ into $\vec{S^1}$. \end{itemize} This ends the inductive definition of the iteration. \subsubsection{Discussion of case b} We pause here to discuss briefly the crucial case b of the iteration. At first glance it seems promising, in the first subcase of case b, to actually use the legal forcing $\mathbb{Q}$, granted to exist by assumption, in order to obtain $x \in A_m$. After all, we know that case a is not true here, so if we can force $x \in A_m$ with a legal forcing, we can conclude that in all further legal extensions, $x$ will not belong to $A_k$, which seems to fully settle the problem of where to place the particular triple $(x,m,k)$. The strategy just described will fail, however, for reasons having to do with the already mentioned self-referentiality of the set-up. Indeed, one can easily produce $\Sigma^1_3$-predicates $\varphi_m$ and $\varphi_k$ such that case b will apply for all reals $x$, and such that whenever we decide to code $(x,m,k)$ into $\vec{S^1}$, in the resulting generic extension $\varphi_k(x)$ will become true. And vice versa, whenever we decide to code $(x,m,k)$ into $\vec{S^2}$, $\varphi_m(x)$ will hold in the resulting generic extension.
Thus, for these particular $m,k$, we are in case b throughout the iteration, and find a legal $\mathbb{Q}$ for every real $x$ we encounter which forces $x \in A_k$. But the forcings $\mathbb{Q}$, when applied, always add pathological and unwanted situations, namely $\mathbb{Q} \Vdash x \in A_k$, yet $x$ is coded into $\vec{S}^1$. And as we have to place all the reals, we will produce these problems cofinally often throughout the whole iteration, which ruins our attempts to prove the theorem. Our definition of the iteration circumvents these problems by noting that the possibility to actually use the forcing $\mathbb{Q}$ is sufficient to rule out a pathological situation, by the closure of legal forcings under products. Thus the mere existence of such a legal $\mathbb{Q}$ is sufficient to not run into any problems when coding $(x,m,k)$ into $\vec{S}^1$. We emphasize that this line of reasoning takes advantage of the specific coding method we decided to use and justifies the construction of our ground model $W$. \subsection{Discussion of the resulting $W[G]$} We let $G$ be a generic filter for the $\omega_1$-length iteration which we just described using mixed support. First we note that the iteration is proper, hence it preserves $\aleph_1$. Consequently there will be no new reals added at stage $\omega_1$, so $\omega^{\omega} \cap W[G] = \bigcup_{\alpha< \omega_1} \omega^{\omega} \cap W[G_{\alpha}]$; in particular $\mathsf{CH}$ is true in $W[G]$. A second useful observation is that for every pair of stages $\alpha <\beta < \omega_1$, the quotient forcing which we use to pass from $\mathbb{P}_{\alpha}$ to $\mathbb{P}_{\beta}$ is a legal forcing as seen from the intermediate model $W[G_{\alpha}]$. Our goal is now to define a real $R_0$ and, given a pair of disjoint $\Sigma^1_3$-definable sets $A_m,A_k$, a $\Delta^1_3(R_0)$-definable separating set, i.e.
a set such that $A_m \subset D^1_{m,k}$ and $A_k \subset D_{m,k}^2$ and such that $D_{m,k}^2=(D^1_{m,k})^c$. We want our set $D^1_{m,k} (R_0)$ to consist of the codes written into $\vec{S}^1$ beyond $\beta_0$ which themselves contain a code for the pair $(m,k)$, and its counterpart $D_{m,k}^2$ to consist of all the codes on the $\vec{S}^2$-side which contain a code for $(m,k)$ and whose coding areas are almost disjoint from the subset of $\omega_1$ coded by the real $R_0$. What should the real $R_0$ be? It is clear from the definition of the iteration $\mathbb{P}$ that there are stages in the iteration where case a applies. There, we just blindly use legal forcings. In particular, nothing prevents these legal forcings from coding up $(x,m,k)$ into, say, $\vec{S}^1$ while $\varphi_k(x)$ is true, thus adding a problem. Note however that such degenerate situations can only happen once for every pair $(A_m,A_k)$. As we only have countably many such pairs, as our iteration has length $\omega_1$, and as we visit every triple $(x,m,k)$ uncountably often with our bookkeeping function, there will be a stage $\beta_0 < \omega_1$ such that from $\beta_0$ on all the codes we have written into $\vec{S}^1$ and $\vec{S}^2$ are intended ones, i.e.\ the codes really define a separating set $D_{m,k}$ for $A_m$ and $A_k$. Thus, in order to define $R_0$, we first let $\beta_0 < \omega_1$ be the last stage in the iteration $\mathbb{P}$ where case a is applied. Then, working in $W[G_{\beta_0}]$, we let $R_0$ code the collection of all indices of all the trees from $\vec{S}^1$ and $\vec{S}^2$ which were used for coding in $W[G_{\beta_0}]$.
Note here that this collection is characterized by the countable set $\{ r_{\beta} \, : \, \beta < \beta_0\}$, where $r_{\beta}$ is the real which is added with almost disjoint coding at stage $\beta$ and which witnesses that $( {\ast} {\ast} {\ast} )$ holds for $(x_{\beta},m_{\beta},k_{\beta})$ and each $\gamma \in h_{\beta}$ (where $h_{\beta}$ just denotes the coding area given by the real $r_{\beta}$). This countable set of reals can itself be coded by a real, and this real is $R_0$. We define: \begin{align*} x \in D_{m,k}^1(R_0) \Leftrightarrow &\exists r \in 2^{\omega} L[r] \models ``\exists M (\text{$M$ witnesses that }(\ast) \text{ holds for $(x,m,k)$, $\vec{S}^1$}\\ & \text{and every $\gamma$ in its coding area $h \subset \omega_1"$}. \\& \text{ Further } L[r,R_0] \models ``h \text{ is almost disjoint} \\& \qquad \qquad \text{ from the set of indices coded by $R_0"$ )} \end{align*} and \begin{align*} x \in D_{m,k}^2(R_0) \Leftrightarrow &\exists r \in 2^{\omega} L[r] \models ``\exists M (\text{$M$ witnesses that }(\ast) \text{ holds for $(x,m,k)$, $\vec{S}^2$}\\ & \text{and every $\gamma$ in its coding area $h \subset \omega_1"$}. \\& \text{ Further } L[r,R_0] \models ``h \text{ is almost disjoint} \\& \qquad \qquad \text{ from the set of indices coded by $R_0"$ )} \end{align*} We shall show now that these sets work as intended. \begin{lemma} In $W[G]$, for every pair $m \ne k \in {\omega}$, $D^1_{m,k}(R_0)$ and $D_{m,k}^2(R_0)$ union up to the set of all reals. \end{lemma} \begin{proof} Immediate from the definitions. \end{proof} \begin{lemma} In $W[G]$, for every pair $(m,k)$, if the $\Sigma^1_3$-sets $A_m$ and $A_k$ are disjoint, then $D^1_{m,k}(R_0)$ separates $A_m$ from $A_k$, i.e.\ $A_m \subset D^1_{m,k}(R_0)$ and $A_k \cap D^1_{m,k}(R_0)=\emptyset$. Likewise $D^2_{m,k}$ separates $A_k$ from $A_m$. Consequently, for every $m,k$ such that $A_m \cap A_k = \emptyset$, $D^1_{m,k}(R_0) \cap D^2_{m,k}(R_0)=\emptyset$.
\end{lemma} \begin{proof} We will only prove the first assertion; the second one is proved exactly as the first one with the roles of $m,k$ switched. Assume that $A_m$ and $A_k$ are disjoint and let $x \in W[G] \cap 2^{\omega}$ be arbitrary such that $x \in A_m$ and $(x,m,k)$ is coded into $\vec{S}^1$ with its coding area almost disjoint from the set of ordinals coded by $R_0$. There is a least stage $\alpha$ with $\beta_0<\alpha< \omega_1$ such that $F(\alpha)= (\dot{x},m,k)$, where $\dot{x}$ is a name for $x$. According to the definition of the iteration and the assumption that $A_m \cap A_k=\emptyset$, we can rule out case a. Thus case b remains, and hence the first or the third subcase did apply at stage $\alpha$. Suppose, without loss of generality, that we were in the first subcase of case b. Assume for a contradiction that in $W[G]$, $x \in A_k$. Then there would be a stage $\alpha'$ of the iteration, $\alpha'> \alpha> \beta_0$, such that $W[G_{\alpha'}] \models x \in A_k$ and the part of the iteration $\mathbb{P}$ between stage $\alpha$ and $\alpha'$, denoted by $\mathbb{P}_{\alpha \alpha'}$, is a legal forcing which forces $x \in A_k$. But, as the first subcase of b applied at stage $\alpha$, there is a legal forcing $\mathbb{Q}$ such that $\mathbb{Q} \Vdash x \in A_m$; hence, at $\alpha$, there is a legal forcing which forces $x \in A_m \cap A_k$, namely $\mathbb{P}_{\alpha \alpha'} \times \mathbb{Q}$, which is a contradiction. \end{proof} \begin{lemma}\label{Sigma13} In $W[G]$, for every $m,k \in \omega$, $D^1_{m,k}$ and $D^2_{m,k}$ are $\Sigma^1_3(R_0)$-definable. Thus $W[G]$ satisfies that every pair of disjoint $\Sigma^1_3$-sets can be separated by a $\Delta^1_3 (R_0)$-set.
\end{lemma} \begin{proof} We claim that for arbitrary $(m,k) \in \omega \times \omega$, $D^1_{m,k}$ and $D^2_{m,k}$ have the following definitions in $W[G]$: \begin{align*} x \in D_{m,k}^1 \Leftrightarrow & \exists r \forall M (r, R_0 \in M \land \omega_1^M=(\omega_1^L)^M \land M \text{ transitive } \rightarrow \\ &M \models L[r] \models ``\exists N ( N \models \mathsf{ZF}P \land |N| = \aleph_1^M \land N \text{ is transitive } \land \\ &N \text{ believes $(\ast)$ for $(x,m,k)$ and $\vec{S}^1$ and every $\gamma \in h"$} \\&\text{ and $L[r,R_0] \models``$the coding area $h$ of $(x,m,k)$ is almost disjoint} \\& \text{from the set of indices coded by $R_0"$} )). \end{align*} and \begin{align*} x \in D_{m,k}^2 \Leftrightarrow & \exists r \forall M (r, R_0 \in M \land \omega_1^M=(\omega_1^L)^M \land M \text{ transitive } \rightarrow \\ &M \models L[r] \models ``\exists N ( N \models \mathsf{ZF}P \land |N| = \aleph_1^M \land N \text{ is transitive } \land \\ &N \text{ believes $(\ast)$ for $(x,m,k)$ and $\vec{S}^2$ and every $\gamma \in h"$} \\&\text{ and $L[r,R_0] \models``$the coding area $h$ of $(x,m,k)$ is almost disjoint} \\& \text{from the set of indices coded by $R_0"$} )). \end{align*} Counting quantifiers yields that both formulas are of the form $\exists \forall (\Sigma^1_2 \rightarrow \Delta^1_2)$ and hence $\Sigma^1_3$. We will only show the result for $D_{m,k}^1$. To show the direction from left to right, note that if $x \in D_{m,k}^1$, then there was a stage $\alpha>\beta_0$ in our iteration at which we coded $x$ into the $\vec{S}^1$-sequence. In particular we added a real $r_{\alpha}$ for which property $({\ast}{\ast}{\ast} )$ is true; hence $r_{\alpha}$ witnesses that the right hand side is true in $W[G]$. For the other direction assume that the right hand side is true. This in particular means that the assertion is true for transitive models containing $r$ of arbitrary size.
Indeed, if there were a transitive $M$ which contains $r$, is of size $\ge \aleph_1$, and fails the assertion of the right hand side, then there would be a countable $M_0 \prec M$ which contains $r$, and the transitive collapse of $M_0$ would form a counterexample to the assertion of the right hand side, which is a contradiction to our assumption. But if the right hand side is true for models of arbitrary size, by reflection it must be true for $W[G]$ itself, thus $x \in D_{m,k}^1$, and we are done. \end{proof} \section{Boldface Separation} \subsection{Preliminary Considerations} We turn our attention to boldface separation. The goal of this section is to prove the first main theorem. \begin{theorem} There is an $\omega_1$-preserving generic extension of $L$ in which every pair of disjoint $\bf{\Sigma^1_3}$-sets $A_m$ and $A_k$ can be separated by a $\bf{\Delta^1_3}$-set. \end{theorem} Its proof uses the proof of our auxiliary theorem as the base case of an inductive construction. The main idea to keep control is to replace the notion of legal forcing with a dynamic variant which keeps changing along the iteration. To motivate what follows, we first consider a more fine-tuned approach to the definition of the iteration in the proof of the last theorem. Let us assume that $(m,k)$ is the first pair such that case b in the definition of the iteration applies. Recall that in the discussion of case b we showed that, given a pair $(A_m, A_k)$ of $\Sigma^1_3$-sets for which there does not exist a legal forcing $\mathbb{Q}$ such that $\Vdash_{\mathbb{Q}} \exists z (\varphi_m(z) \land \varphi_k(z))$ becomes true, we can always assign to an arbitrary real $x$ a side $\vec{S}^1$ or $\vec{S}^2$ such that in all future legal extensions there will never occur a pathological situation, i.e.\ from that stage on we never run into the problem of having coded the triple $(x,m,k)$ into, say, $\vec{S}^1$, yet $\varphi_k(x)$ becomes true in some future extension of our iteration (or vice versa).
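Schematically (this merely restates the guarantee just described, under the standing assumption that no legal forcing forces $\exists z (\varphi_m(z) \land \varphi_k(z))$): for every real $x$ appearing in a legal extension there is a side $i(x) \in \{1,2\}$ for the triple $(x,m,k)$ such that
\begin{align*}
i(x)=1 &\Rightarrow \text{no further legal extension satisfies } \varphi_k(x), \text{ and}\\
i(x)=2 &\Rightarrow \text{no further legal extension satisfies } \varphi_m(x).
\end{align*}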
Note here that the arguments in the discussion of case b were uniform in all reals $x$ which appear in a legal extension. So it is reasonable to define, for the pair $(m,k)$, a stronger notion of legality, called 1-legal with respect to $(0,m,k)$ (the 0 indicates the base case of an inductive construction we define later), as follows: Let $F: \gamma \rightarrow H(\omega_2)^4$ be a bookkeeping function and let $E:=\{ (0,m,k) \}$. We let $\mathbb{P}$ be a mixed support iteration of length $ \gamma$. Then we say that $(\mathbb{P}_{\beta} \, : \, \beta < \gamma ) $ is 1-legal with respect to $E$ and $F$ if \begin{itemize} \item $\mathbb{P}$ is a legal forcing relative to $F$. \item Whenever $\beta < \gamma$ is a stage such that $F(\beta)=(\dot{x},m,k,i)$, where $\dot{x}$ is a $\mathbb{P}_{\beta}$-name of a real and $i \in \{1,2 \}$, we split into three subcases: \begin{itemize} \item[(i)] First we assume that in $W[G_{\beta}]$, there is a legal forcing $\mathbb{Q}$ such that $\mathbb{Q} \Vdash x (=\dot{x}^{G_{\beta}}) \in A_m$. Then the $\beta$-th forcing of $\mathbb{P}$, $\mathbb{P}(\beta)$, must be $\hbox{Code} ((x,m,k), 1)$. \item[(ii)] Assume that (i) is wrong but the dual situation is true for $A_k$. That is, there is a legal forcing $\mathbb{Q}$ such that $\mathbb{Q} \Vdash x (=\dot{x}^{G_{\beta}}) \in A_k$. Then the $\beta$-th forcing of $\mathbb{P}$, $\mathbb{P}(\beta)$, must be $\hbox{Code} ((x,m,k), 2)$. \item[(iii)] In the third case we assume that neither (i) nor (ii) is true. In that situation we code $x$ into either the $A_m$- or the $A_k$-side, according to what the bookkeeping tells us. \end{itemize} If the iteration $(\mathbb{P}_{\beta} \, : \, \beta < \gamma)$ obeys the above rules, then we say that it is 1-legal with respect to $E$ and bookkeeping function $F$.
\end{itemize} From earlier considerations it is clear that if we drop the notion of legal from now on and replace it with 1-legal relative to $E=\{ (0,m,k)\}$ in our iteration, we can ensure that, at least for the pair $(m,k)$, no new pathological situations will arise anymore. This process can be iterated. Assume that we run into a new pair $(m',k')$ where the modified case b applies, i.e.\ there is no 1-legal forcing which forces $\exists z (\varphi_{m'}(z) \land\varphi_{k'} (z))$. Then we can introduce the new notion of 2-legal with respect to $\{(0,m,k) , (1,m',k')\}$. If chosen the right way, this new notion will hand us a condition that guarantees that no new pathological situations arise for the two pairs $(\varphi_m,\varphi_k)$ and $(\varphi_{m'}, \varphi_{k'})$. So our strategy for producing a model where the $\bf{\Sigma^1_3}$-separation property holds is as follows: we list all possible reals $x$, parameters $y$ and pairs of $\Sigma^1_3$-formulas $(\varphi_m(\cdot,y), \varphi_k(\cdot,y))$, while simultaneously defining stronger and stronger versions of legality, which take care of placing the reals we encounter along the iteration in a non-pathological way. \subsection{$\alpha$-legal forcings} This section shall give a precise recursive definition of the process sketched above. The notions of 0- and 1-legality form the base cases of an inductive definition. Let $\alpha \ge 1$ be an ordinal and assume we have already defined the notion of $\alpha$-legality. Then we can inductively define the notion of $\alpha+1$-legality as follows. Suppose that $\gamma < \omega_1$, $F$ is a bookkeeping function, \[F: \gamma \rightarrow H(\omega_2)^5 \] and \[\mathbb{P}=(\mathbb{P}_{\beta} \, : \, \beta < \gamma)\] is a legal forcing relative to $F$ (in fact relative to some bookkeeping $F'$ determined by $F$ in a unique way; the difference here is not relevant).
Suppose that \[E= \{(\delta, \dot{y}_{\delta}, m_{\delta} ,k_{\delta}) \, : \, \delta \le \alpha\}\] where $m_{\delta},k_{\delta} \in \omega$, every $\dot{y}_{\delta}$ is a $\mathbb{P}$-name of a real, and for every two ordinals $\beta, \beta'< \alpha$, $\mathbb{P} \Vdash (\dot{y}_{\beta}, m_{\beta},k_{\beta}) \ne (\dot{y}_{\beta'},m_{\beta'},k_{\beta'})$. Suppose that for every $\delta \le \alpha$, $(\mathbb{P}_{\beta} \, : \, \beta < \gamma )$ is $\delta$-legal with respect to $E \upharpoonright \delta = \{ (\eta, \dot{y}_{\eta},m_{\eta},k_{\eta}) \in E \, : \, \eta < \delta \}$ and $F$. Finally assume that $\dot{y}_{\alpha+1}$ is a $\mathbb{P}$-name for a real and $m_{\alpha+1},k_{\alpha+1} \in \omega$ are such that $\mathbb{P} \Vdash \forall \delta \le \alpha ((\dot{y}_{\delta},m_{\delta},k_{\delta}) \ne (\dot{y}_{\alpha+1},m_{\alpha+1},k_{\alpha+1}))$. Then we say that $(\mathbb{P}_{\beta} \, : \, \beta < \gamma )$ is $\alpha+1$-legal with respect to $E \cup \{(\alpha+1, \dot{y}_{\alpha+1},m_{\alpha+1},k_{\alpha+1})\}$ and $F$ if it obeys the following rules. \begin{enumerate} \item Whenever $\beta < \gamma $ is such that there is a $\mathbb{P}_{\beta}$-name $\dot{x}$ of a real and an integer $ i\in\{1,2\}$ such that \[F(\beta)= (\dot{x},\dot{y}_{\alpha+1},m_{\alpha+1},k_{\alpha+1},i)\] and $\dot{y}_{\alpha+1}$ is in fact a $\mathbb{P}_{\beta}$-name, and for $G_{\beta}$ a $\mathbb{P}_{\beta}$-generic filter over $W$, $W[G_{\beta}]$ thinks that \begin{align*} \exists \mathbb{Q} (&\mathbb{Q} \text{ is } \alpha \text{-legal with respect to } E \, \land \\& \mathbb{Q} \Vdash x \in A_m({y}_{\alpha+1})), \end{align*} where $x=\dot{x}^{G_{\beta}}$ and $y_{\alpha+1}=\dot{y}_{\alpha+1}^{G_{\beta}}$. Then, continuing to argue in $W[G_{\beta}]$, we let \[\mathbb{P}(\beta)= \hbox{Code}((x,y,m,k),1). \] Note that we confuse here the quadruple $(x,y,m,k)$ with one real $w$ which codes this quadruple.
\item Whenever $\beta < \gamma $ is such that there is a $\mathbb{P}_{\beta}$-name $\dot{x}$ of a real and an integer $ i \in \{1, 2\}$ such that \[F(\beta)= (\dot{x},\dot{y}_{\alpha+1},m_{\alpha+1},k_{\alpha+1}, i)\] and for $G_{\beta}$ which is $\mathbb{P}_{\beta}$-generic over $W$, $W[G_{\beta}]$ thinks that \begin{align*} \forall \mathbb{Q}_1 (&\mathbb{Q}_1 \text{ is } \alpha \text{-legal with respect to } E \\& \rightarrow \, \lnot(\mathbb{Q}_1 \Vdash x \in A_m({y}_{\alpha+1}))) \end{align*} but there is a forcing $\mathbb{Q}_2$ such that $W[G_{\beta}]$ thinks that \begin{align*} \mathbb{Q}_{2} \text{ is } \alpha &\text{-legal with respect to $E$ and } \\ & {\mathbb{Q}_2} \Vdash x \in A_k( {y}_{\alpha+1} ). \end{align*} Then, continuing to argue in $W[G_{\beta}]$, we force with \[\mathbb{P}(\beta):= \hbox{Code}((x,y,m,k),2).\] Note that we confuse here again the quadruple $(x,y,m,k)$ with one real $w$ which codes this quadruple. \item If neither 1 nor 2 is true, then either \[ \mathbb{P}(\beta)=\hbox{Code}((x,y,m,k),1)\] or \[ \mathbb{P}(\beta)=\hbox{Code}((x,y,m,k),2)\] depending on whether $ i$ in $F(\beta)$ was 1 or 2. \item If $F(\beta) = (\dot{x},\dot{y},m,k,i)$ and for our $\mathbb{P}_{\beta}$-generic filter $G$, $W[G] \models \forall \delta \le \alpha+1 ((\delta,\dot{y}^G,m,k) \notin E^G)$, then, working over $W[G_{\beta}]$, let \[ \mathbb{P}(\beta)=\hbox{Code}((x,y,m,k),i).\] \end{enumerate} This ends the definition for the successor step $\alpha \rightarrow \alpha+1$. For limit ordinals $\alpha$, we say that a legal forcing $\mathbb{P}$ is $\alpha$-legal with respect to $E$ and $F$ if for every $\eta < \alpha$, $(\mathbb{P}_{\beta} \,: \, \beta < \gamma)$ is $\eta$-legal with respect to $E \upharpoonright \eta$ and some $F'$. \par We add a couple of remarks concerning the last definition.
\begin{itemize} \item By definition, if $\delta_2 < \delta_1$ and $\mathbb{P}_1$ is $\delta_1$-legal with respect to $E= \{(\beta, \dot{y}_{\beta}, m_{\beta} ,k_{\beta}) \, : \, \beta \le \delta_1\}$ and some $F_1$, then $\mathbb{P}_1$ is also $\delta_2$-legal with respect to $E \upharpoonright \delta_2 = \{(\beta, \dot{y}_{\beta}, m_{\beta} ,k_{\beta}) \, : \, \beta \le \delta_2\}$ and an altered bookkeeping function $F'$. \item The notion of $\alpha$-legal can be defined in a uniform way over any legal extension $W'$ of $W$. \item We will often just say that some iteration $\mathbb{P}$ is $\alpha$-legal, by which we mean that there is a set $E$ and a bookkeeping function $F$ such that $\mathbb{P} $ is $\alpha$-legal with respect to $E$ and $F$. \end{itemize} \begin{lemma}\label{productlegal} Let $\alpha \ge 1$, assume that $W'$ is some $\alpha$-legal generic extension of $W$, and that $\mathbb{P}_1=(\mathbb{P}_1(\beta) \,: \,\beta < \delta_1)$ and $\mathbb{P}_2=(\mathbb{P}_2(\beta) \,: \, \beta < \delta_2) $ are two $\alpha$-legal forcings over $W'$ with respect to a common set $E=\{(\eta,\dot{y}_{\eta},m_{\eta}, k_{\eta}) \,: \, \eta < \alpha\}$ and bookkeeping functions $F_1$ and $F_2$ respectively. Then there is a bookkeeping function $F$ such that $\mathbb{P}_1 \times \mathbb{P}_2$ is $\alpha$-legal over $W'$ with respect to $E$ and $F$. \end{lemma} \begin{proof} We define $F \upharpoonright \delta_1$ to be $F_1$. For the remaining values $\delta_1+\beta$, $\beta < \delta_2$, we let $F(\delta_1+\beta)$ be such that its first four coordinates equal the first four coordinates of $F_2(\beta)$, i.e.\ $F(\delta_1+\beta)=(\dot{x},\dot{y},m,k,i)$ for some $i \in \{1,2\}$ where $F_2(\beta)=(\dot{x},\dot{y},m,k,i')$. We claim now that we can define the remaining coordinate of $F(\delta_1+\beta)$ in such a way that the lemma is true. This is shown by induction on $\beta< \delta_2$. Let $(\mathbb{P}_2)_{\beta}$ be the iteration of $\mathbb{P}_2$ up to stage $\beta < \delta_2$.
Assume that $\mathbb{P}_1 \times (\mathbb{P}_2)_{\beta}$ is in fact an $\alpha$-legal forcing relative to $E$ and $F$. Then we have that $F(\delta_1+\beta) \upharpoonright 4=F_2(\beta) \upharpoonright 4 =(\dot{x},\dot{y},m,k)$, and we claim the following: \begin{claim} If we have to apply case 1, 2 or 3 when considering the forcing $\mathbb{P}_1 \times \mathbb{P}_2$ as an $\alpha$-legal forcing relative to $E$ over the model $W'$, then we must apply the same case when considering $\mathbb{P}_2$ as an $\alpha$-legal forcing over the model $W'$ relative to $E$. \end{claim} Once the claim is shown, the lemma can be proven as follows by induction on $\beta < \delta_2$: we work in the model $W'[\mathbb{P}_1][(\mathbb{P}_2)_{\beta}]$, consider $F(\delta_1+\beta) \upharpoonright 4 = F_2(\beta) \upharpoonright 4$, and ask which of the cases has to be applied. By the claim, it will be the same case as when considering $\mathbb{P}_2$ over $W'$ as an $\alpha$-legal forcing relative to $E$ and $F_2$. In particular the forcing $\mathbb{P}_2(\beta)$ we define at stage $\beta$ will be a choice obeying the rules of $\alpha$-legality, even when working over the model $W'[\mathbb{P}_1][(\mathbb{P}_2)_{\beta}]$. This shows that $\mathbb{P}_1 \times \mathbb{P}_2$ is an $\alpha$-legal forcing relative to $E$ and some $F$ over $W'$. The proof of the claim is via induction on $\alpha$. If $\alpha=1$ and both $\mathbb{P}_1$ and $\mathbb{P}_2$ are 1-legal with respect to $E$, which must be of the form $\{(0,\dot{y},m,k)\}$, then we shall show that there is a bookkeeping function $F$ such that $((\mathbb{P}_2)_{\beta}\, : \, \beta < \delta_2)$ is still 1-legal with respect to $E$, even when considered in the universe $W'[\mathbb{P}_1]$. We assume first that at stage $\delta_1+\beta$ of $\mathbb{P}_1 \times \mathbb{P}_2$ case 1 in the definition of 1-legal applies, when working in the model $W'[\mathbb{P}_1][(\mathbb{P}_2)_{\beta}]$ relative to $E$ and $F$.
Thus \[ F(\delta_1+\beta) \upharpoonright 4= (\dot{x},\dot{y},m,k) \] and $(0,\dot{y},m,k) \in E$ and for any $G_1 \times G_{\beta}$ which is $\mathbb{P}_1 \times (\mathbb{P}_2)_{\beta}$-generic over $W'$, if $\dot{x}^{G_{\beta}}=x$ and $\dot{y}^{G_{\beta}}=y$, the universe $W'[G_1 \times G_{\beta}]$ thinks that \begin{align*} \exists \mathbb{Q} (&\mathbb{Q} \text{ is } 0 \text{-legal with respect to } E \text{ and some } F \, \land \\& \, \mathbb{Q} \Vdash {x} \in A_m({y})). \end{align*} Thus, if we work over $W'[G_{\beta}]$ instead, it will think \begin{align*} \exists (\mathbb{P}_1 \times \mathbb{Q}) (&\mathbb{P}_1 \times \mathbb{Q} \text{ is } 0 \text{-legal } \, \land \\& \, \mathbb{P}_1 \times \mathbb{Q} \Vdash {x} \in A_m({y})). \end{align*} Thus, at stage $\beta$, we are in case 1 as well when considering $\mathbb{P}_2$ as a 1-legal forcing over $W'$ relative to $E$. If, at stage $\beta$, case 2 applies when considering $\mathbb{P}_1 \times \mathbb{P}_2$ as a 1-legal forcing with respect to $E$ over $W'$, then we argue first that case 1 is impossible when considering $\mathbb{P}_2$ as a $1$-legal forcing over $W'$. Indeed, assume for a contradiction that case 1 must be applied; then, by assumption, $\mathbb{P}_2(\beta)$ will force that $x \in A_m(y)$. Yet, by Shoenfield absoluteness, $\mathbb{P}_2(\beta)$ would witness that we are in case 1 at stage $\beta$ when considering $\mathbb{P}_1 \times\mathbb{P}_2$ as 1-legal with respect to $E$ over $W'$, which is a contradiction. Thus we cannot be in case 1, and we shall show that we are indeed in case 2, i.e.\ that there is a 0-legal forcing $\mathbb{Q}$ such that $\mathbb{Q} \Vdash x \in A_k(y)$; but such a $\mathbb{Q}$ exists, namely $\mathbb{P}_2(\beta)$. Finally, if at stage $\beta$ case 3 applies when considering $\mathbb{P}_2$ as a 1-legal forcing with respect to $E$ over $W'[\mathbb{P}_1]$, we claim that we must be in case 3 as well when considering $\mathbb{P}_2$ over just $W'$.
If not, then we would be in case 1 or 2 at $\beta$. Assume without loss of generality that we were in case 1; then, as by assumption $\mathbb{P}_2$ is 1-legal over $W'$, $\mathbb{P}_2(\beta)$ will force $x \in A_m(y)$. But this is a contradiction, so we must be in case 3 as well. This finishes the proof of the claim for $\alpha=1$. We shall argue now that the claim is true for $\alpha+1$-legal forcings provided we know that it is true for $\alpha$-legal forcings. Again we shall show the claim via induction on $\beta$. So assume that $\mathbb{P}_1 \times (\mathbb{P}_2)_{\beta}$ is $\alpha+1$-legal with respect to $E=E \upharpoonright \alpha \cup \{(\alpha, \dot{y},m_{\alpha},k_{\alpha})\}$ and an $F$ whose domain is $\delta_1 +\beta$. We look at \[F(\delta_1+ \beta) \upharpoonright 4=F_2(\beta) \upharpoonright 4= (\dot{x},\dot{y},m_{\alpha},k_{\alpha}).\] We concentrate on the case where $\beta$ is such that case 2 applies when considering $\mathbb{P}_1 \times (\mathbb{P}_2)_{\beta}$ over $W'$; the rest follows similarly. Our goal is to show that case 2 must apply when considering the $\beta$-th stage of the forcing using $F_2$ and $E$ over $W'[(\mathbb{P}_2)_{\beta}]$ as well. Assume first for a contradiction that, when working over $W'[(\mathbb{P}_2)_{\beta}]$, at stage $\beta$, case 1 applies. Then, for any $(\mathbb{P}_2)_{\beta}$-generic filter $ G_{\beta}$ over $W'$, \begin{align*} W'[G_{\beta}] \models \exists \mathbb{Q} (&\mathbb{Q} \text{ is $\alpha$-legal with respect to $E \upharpoonright \alpha$ and some $F'$ and} \\ & \mathbb{Q} \Vdash x \in A_m(y)). \end{align*} Now, as $\mathbb{P}_2$ is $\alpha$-legal, we know that $\mathbb{P}_2(\beta)$ is such that $\mathbb{P}_2(\beta) \Vdash x \in A_m(y)$.
Thus, using the upwards-absoluteness of $\Sigma^1_3$-formulas, at stage $\beta$ of the $\alpha+1$-legal forcing determined by $F$ and $E$, there is an $\alpha$-legal forcing with respect to $E \upharpoonright \alpha$ which forces $x \in A_m(y)$, namely $\mathbb{P}_2(\beta)$. But this is a contradiction, as we assumed that when considering $\mathbb{P}_1 \times (\mathbb{P}_2)_{\beta}$ over $W'$ at stage $\beta$, case 1 does not apply, hence such an $\alpha$-legal forcing should not exist. So we know that case 1 is not true. We shall show now that case 2 must apply at stage $\beta$ when considering $\mathbb{P}_2$ over the universe $W'$. By assumption we know that \begin{align*} W'[\mathbb{P}_1][(\mathbb{P}_2)_{\beta}] \models&\exists \mathbb{Q}_2 (\mathbb{Q}_2 \text{ is $\alpha$-legal with respect to $E \upharpoonright \alpha$ and } \\& \, \, \mathbb{Q}_2 \Vdash x \in A_k(y)). \end{align*} As $\mathbb{P}_1$ is $\alpha+1$-legal with respect to $E$ and $F_1$, it is also $\alpha$-legal with respect to $E \upharpoonright \alpha$ and some altered $F'_1$; thus, as a consequence of the induction hypothesis, we obtain that \[W'[(\mathbb{P}_2)_{\beta}] \models \mathbb{P}_1 \times \mathbb{Q}_2 \text{ is $\alpha$-legal and } \mathbb{P}_1 \times \mathbb{Q}_2 \Vdash x \in A_k(y).\] But then $\mathbb{P}_1 \times \mathbb{Q}_2$ witnesses that we are in case 2 as well at stage $\beta$ of $\mathbb{P}_2$ over $W'$. This ends the proof of the claim and so we have shown the lemma. \end{proof} \subsection{Proof of the first Main Theorem} We are finally in the position to prove that the $\bf{\Sigma^1_3}$-separation property can be forced over $W$. The iteration we are about to define inductively will be a legal iteration whose tails are $\alpha$-legal, where $\alpha$ increases along the iteration. We start by fixing a bookkeeping function \[ F: \omega_1 \rightarrow H(\omega_2)^4 \] which visits every element cofinally often.
The role of $F$ is to list all the quadruples of the form $(\dot{x}, \dot{y},m,k)$, where $\dot{x}, \dot{y}$ are names of reals in the forcing we have already defined, and $m$ and $k$ are natural numbers which represent $\Sigma^1_3$-formulas with two free variables, cofinally often. Assume that we are at stage $\beta < \omega_1$ of our iteration. By induction we will have constructed already the following list of objects. \begin{itemize} \item An ordinal $\alpha_{\beta} \le \beta$ and a set $E_{\alpha_{\beta}}$ which is of the form $\{(\eta ,\dot{y}_{\eta}, m_{\eta},k_{\eta}) \, : \, \eta < \alpha_{\beta} \}$, where $\dot{y}_{\eta}$ is a $\mathbb{P}_{\beta}$-name of a real and $m_\eta, k_{\eta}$ are natural numbers. As a consequence, for every bookkeeping function $F'$, we have a notion of $\eta$-legality relative to $E_{\alpha_{\beta}}$ and $F'$ over $W[G_{\beta}]$. \item We assume by induction that for every $\eta < \alpha_{\beta}$, if $\beta_{\eta}< \beta$ is the $\eta$-th stage in $\mathbb{P}_{\beta}$ where we add a new member to $E_{\alpha_{\beta}}$, then $W[G_{\beta_{\eta}}]$ thinks that $\mathbb{P}_{\beta_{\eta} \beta}$ is $\eta$-legal with respect to $E_{\alpha_{\beta}} \upharpoonright \eta$. \item If $(\eta, \dot{y}_{\eta},m_{\eta},k_{\eta}) \in E_{\alpha_{\beta}}$, then we set again $\beta_{\eta}$ to be the $\eta$-th stage in $\mathbb{P}_{\beta}$ such that a new member of $E_{\alpha_{\beta}}$ is added. In the model $W[G_{\beta_{\eta}}]$, we can form the set of reals $R_{\eta}$ which were added so far by the use of a coding forcing in the iteration up to stage $\beta_{\eta}$, and which witness that $({\ast} {\ast} {\ast})$ holds for some $(x,y,m,k)$. Note that $R_{\eta}$ is a countable set of reals and can therefore be identified with a real itself, which we will do. The real $R_{\eta}$ indicates the set of places we must avoid when expecting correct codes, at least for the codes which contain $\dot{y}_{\eta},m_{\eta}$ and $k_{\eta}$.
\end{itemize} Assume that $F(\beta)= (\dot{x},\dot{y},m,k)$, where $\dot{x}$, $\dot{y}$ are $\mathbb{P}_{\beta}$-names for reals and $m,k\in \omega$ correspond to the $\Sigma^1_3$-formulas $\varphi_m(v_0,v_1)$ and $\varphi_k(v_0,v_1)$. Assume that $G_{\beta}$ is a $\mathbb{P}_{\beta}$-generic filter over $W$. Let $\dot{x}^{G_{\beta}}=x$ and $\dot{y}^{G_{\beta}}=y$. We turn to the forcing $\mathbb{P}(\beta)$ we want to define at stage $\beta$ in our iteration. Again we distinguish several cases. \begin{itemize} \item[(A)] Assume that $W[G_{\beta}]$ thinks that there is an $\alpha_{\beta}$-legal forcing $\mathbb{Q}$ relative to $E_{\alpha_{\beta}}$ and some $F'$ such that \begin{align*} \mathbb{Q} \Vdash \exists z (z\in A_m(y) \cap A_k(y)). \end{align*} Then we pick the $<$-least such forcing, where $<$ is some previously fixed wellorder. We denote this forcing with $\mathbb{Q}_1$ and use \[\mathbb{P}(\beta):= \mathbb{Q}_1.\] We do not change $R_{\beta}$ at such a stage. \item[(B)] Assume that (A) is not true. \begin{itemize} \item[(i)] Assume however that there is an $\alpha_{\beta}$-legal forcing $\mathbb{Q}$ in $W[G_{\beta}]$ with respect to $E_{\alpha_{\beta}}$ and some $F'$ such that \begin{align*} \mathbb{Q} \Vdash x \in A_m(y). \end{align*} Then we set \[\mathbb{P}(\beta):= \hbox{Code} (({x}, {y}, m,k), 1).\] In that situation, we enlarge the $E$-set as follows. We let $(\alpha_{\beta}, \dot{y}, m, k)=: (\alpha_{\beta}, \dot{y}_{\alpha_{\beta}}, m_{\alpha_{\beta}}, k_{\alpha_{\beta}})$ and \[E_{\alpha_{\beta}+1}:= E_{\alpha_{\beta}} \cup \{ (\alpha_{\beta}, \dot{y}, m, k) \} .\] Further, we let $r_{\eta}$ be the real which is added by the coding forcing at stage $\eta$ of the iteration and which witnesses $({\ast} {\ast} {\ast})$ for some quadruple $(x_{\eta},y_{\eta},m_{\eta},k_{\eta})$.
We collect all the countably many such reals added so far in our iteration up to stage $\beta$, put them into one set $R$ and let \[ R_{\alpha_{\beta}+1 }:= R . \] \item[(ii)] Assume that (i) is wrong, but there is an $\alpha_{\beta}$-legal forcing $\mathbb{Q}$ with respect to $E_{\alpha_{\beta}}$ and some $F'$ in $W[G_{\beta}]$ such that \begin{align*} \mathbb{Q} \Vdash x \in A_k(y). \end{align*} Then we set \[\mathbb{P}(\beta):= \hbox{Code} (({x}, {y}, m,k), 2).\] In that situation, we enlarge the $E$-set as follows. We let the new $E$-value $(\alpha_{\beta}, \dot{y}_{\alpha_{\beta}}, m_{\alpha_{\beta}}, k_{\alpha_{\beta}})$ be $ (\alpha_{\beta}, \dot{y}, m, k)$ and \[E_{\alpha_{\beta}+1}:= E_{\alpha_{\beta}} \cup \{ (\alpha_{\beta}, \dot{y}, m, k) \}.\] Further, we let $r_{\eta}$ be the real which is added by $\hbox{Code} (({x}, {y}, m,k), 2)$ at stage $\eta$ of the iteration and which witnesses $({\ast} {\ast} {\ast})$ for some quadruple $(x_{\eta},y_{\eta},m_{\eta},k_{\eta})$. We collect all the countably many such reals added so far up to stage $\beta$, put them into one set $R$ and let \[ R_{\alpha_{\beta}+1 }:= R . \] \item[(iii)] If neither (i) nor (ii) is true, then there is no $\alpha_{\beta}$-legal forcing $\mathbb{Q}$ with respect to $E_{\alpha_{\beta}}$ which forces $x \in A_m(y)$ or $x \in A_k(y)$, and we set \[ \mathbb{P}(\beta):=\hbox{Code} (({x}, {y}, m,k), 1).\] Further, we let $r_{\eta}$ be the real which is added by $\hbox{Code} (({x}, {y}, m,k), 1)$ at stage $\eta$ of the iteration and which witnesses $({\ast} {\ast} {\ast})$ for some quadruple $(x_{\eta},y_{\eta},m_{\eta},k_{\eta})$. We collect all the countably many such reals added so far up to stage $\beta$, put them into one set $R$ and let \[ R_{\alpha_{\beta}+1 }:= R . \] If $F(\beta)$ is not of the assumed form, we force with the trivial forcing.
\end{itemize} \end{itemize} At limit stages $\beta$, we let $\mathbb{P}_{\beta}$ be the inverse limit of the $\mathbb{P}_{\eta}$'s, $\eta < \beta$, and set $E_{\alpha_{\beta}}= \bigcup_{\eta < \beta} E_{\alpha_{\eta}}$. This ends the definition of $\mathbb{P}_{\omega_1}$. \subsection{Discussion of the resulting universe} We let $G_{\omega_1}$ be a $\mathbb{P}_{\omega_1}$-generic filter over $W$. As $\mathbb{P}_{\omega_1}$ is an iteration of proper forcings, $\omega_1$ is preserved in $W[G_{\omega_1}]$. Moreover $\mathsf{CH}$ remains true. A second observation is that for every stage $\beta$ of our iteration and every $\eta > \beta$, the intermediate forcing $\mathbb{P}_{[\beta, \eta)}$, defined as the factor (quotient) forcing of $\mathbb{P}_{\beta}$ and $\mathbb{P}_{\eta}$, is always an $\alpha_{\beta}$-legal forcing relative to $E_{\alpha_{\beta}}$ and some bookkeeping. This is clear, as by the definition of the iteration we force at every stage $\beta$ with an $\alpha_{\beta}$-legal forcing relative to $E_{\alpha_{\beta}}$, and $\alpha_{\beta}$-legality becomes a stronger notion as we increase $\alpha_{\beta}$. We shall now define the separating sets. For a pair of disjoint $\Sigma^1_3(y)$-sets $A_m(y)$ and $A_k(y)$ we consider the least stage $\beta$ such that there is a $\mathbb{P}_{\beta}$-name $\dot{z}$ with $\dot{z}^{G_{\beta}}=z$ and such that $(z,y,m,k)$ is considered by $F$ at stage $\beta$. Let $R_{\beta}$ be the set of all reals which were added by the coding forcings up to stage $\beta$ and which witness $({\ast}{\ast}{\ast})$ for some $(x,y,m,k)$. Then for any real $x \in W[G_{\omega_1}]$: \begin{align*} x \in D^1_{y,m,k} (R_{\beta})\Leftrightarrow \exists r \notin R_{\beta} (&L[r] \models (x,y,m,k) \text{ can be read off from a code} \\& \text{ written on } \omega_1\text{-many $\omega$-blocks of elements of } \\& \vec{S^1} ).
\end{align*} and \begin{align*} x \in D^2_{y,m,k} (R_{\beta})\Leftrightarrow \exists r \notin R_{\beta} (&L[r] \models (x,y,m,k) \text{ can be read off from a code} \\& \text{ written on } \omega_1\text{-many $\omega$-blocks of elements of } \\& \vec{S^2}). \end{align*} It is clear from the definition of the iteration that for any real parameter $y$ and any $m,k \in \omega$, $D^1_{y,m,k} \cup D^2_{y,m,k} = 2^{\omega}$. The next lemma establishes that the sets are indeed separating. \begin{lemma} In $W[G_{\omega_1}]$, let $y$ be a real and let $m,k \in \omega$ be such that $A_m(y) \cap A_k(y)=\emptyset.$ Then there is a real $R$ such that the sets $D^1_{y,m,k}(R)$ and $ D^2_{y,m,k}(R)$ partition the reals. \end{lemma} \begin{proof} Let $\beta$ be the least stage such that there is a real $x$ such that $F(\beta) \upharpoonright 4=(\dot{x}, \dot{y},m,k)$ with $\dot{x}^{G_{\beta}}=x$ and $\dot{y}^{G_{\beta}}=y$. Let $R:= R_{\beta}$, where $R_{\beta}$ is as defined above. Then, as $A_m(y)$ and $A_k(y)$ are disjoint in $W[G_{\omega_1}]$, by the rules of the iteration, case B must apply at $\beta$. Assume now for a contradiction that $D^1_{y,m,k}(R)$ and $ D^2_{y,m,k}(R)$ have non-empty intersection in $W[G_{\omega_1}]$. Let $z \in D^1_{y,m,k}(R) \cap D^2_{y,m,k}(R)$ and let $\beta' > \beta$ be the first stage of the iteration which sees that $z$ is in the intersection. Then, by the rules of the iteration and without loss of generality, we must have used case B(i) at $\beta$ and case B(ii) at stage $\beta'$. But this would imply that at stage $\beta$ there is an $\alpha_{\beta}$-legal forcing with respect to $E_{\alpha_{\beta}}$ which forces $z \in A_m(y) \cap A_k(y)$, namely the intermediate forcing $\mathbb{P}_{\beta \beta'}$. This is a contradiction.
\end{proof} \begin{lemma} In $W[G_{\omega_1}]$, for every pair $m,k$ and every parameter $y \in 2^{\omega}$ such that $A_m(y) \cap A_k(y) = \emptyset$ there is a real $R$ such that \[ A_m(y) \subset D^1_{y,m,k}(R) \land A_k(y) \subset D^2_{y,m,k}(R).\] \end{lemma} \begin{proof} The proof is by contradiction. Assume that there is a real $x$ such that $x \in A_m(y) \cap D^2_{y,m,k}(R)$ for every $R$. We consider the smallest ordinal $\beta < \omega_1$ such that $F(\beta)\upharpoonright 4$ considers a quadruple of the form $(x,y,m,k)$ and let $R:= R_{\beta}$. As $A_m(y)$ and $A_k(y)$ are disjoint, we know that at stage $\beta$ we were in case B. As $x$ is coded into $\vec{S^2}$ after stage $\beta$ and by the last lemma, case B(i) is impossible at $\beta$. Hence, without loss of generality, we may assume that case B(ii) applies at $\beta$. As a consequence, there is a forcing $\mathbb{Q}_2$ which is $\alpha_{\beta}$-legal with respect to $E_{\alpha_{\beta}}$ such that $\mathbb{Q}_2 \Vdash x \in A_k(y)$. Note that in that case we collect all the reals which witness $( {\ast} {\ast} {\ast})$ for some quadruple to form the set $R_{\beta}$. As $x \in A_m(y) \cap D^2_{y,m,k} (R)$, we let $\beta'> \beta$ be the first stage such that $W[G_{\beta'}] \models x \in A_m(y)$. By Lemma \ref{productlegal}, $W[G_{\beta}]$ thinks that $\mathbb{Q}_2 \times \mathbb{P}_{\beta \beta'}$ is $\alpha_{\beta}$-legal with respect to $E_{\alpha_{\beta}}$, yet $\mathbb{Q}_2 \times \mathbb{P}_{\beta \beta'} \Vdash x \in A_m(y) \cap A_k(y)$. This is a contradiction. \end{proof} The next lemma will finish the proof of our theorem: \begin{lemma} In $W[G_{\omega_1}]$, if $y \in 2^{\omega}$ is an arbitrary parameter, $R$ a real and $m,k$ natural numbers, then the sets $D^1_{y,m,k}(R)$ and $D^2_{y,m,k}(R)$ are $\Sigma^1_3(R)$-definable. \end{lemma} \begin{proof} The proof is almost identical to the proof of Lemma \ref{Sigma13}; the only thing added is the real $R$ as a parameter.
\end{proof} \section{Forcing the lightface $\Sigma^1_3$-separation property} The techniques developed in the previous sections can be used to force a model in which the (lightface) $\Sigma^1_3$-separation property is true. In what follows, we heavily use ideas and notation from earlier sections, so the upcoming proof cannot be read independently. \begin{theorem} Starting with $L$ as the ground model, one can produce a set-generic extension $L[G]$ which satisfies $\mathsf{CH}$ and in which the ${\Sigma^1_3}$-separation property holds. \end{theorem} \begin{proof} For the proof to come, we redefine the notion of legal forcings. \begin{definition} A mixed support iteration is called (0-)legal if it is defined as in Definition 4.2, with the only difference that we code (reals that code) quadruples of the form $(x,0,m,k)$ and $(x,1,m,k)$, where $m,k \in \omega$ and $x$ is (the name of) a real, into $\vec{S}$. \end{definition} As before, we take 0-legal forcings as the base set of forcings and define gradually smaller families of forcings, which we call $n$-legal forcings with respect to a set $E$ and a bookkeeping function $F$. Assume that $E$ is a finite list of length $n$ of pairwise distinct pairs of natural numbers $(m_l, k_l)$, $l \le n$, and that $F$ is a bookkeeping function. Assume that for every $l \le n$ we have a notion of $l$-legality with respect to $E \upharpoonright l$, and let $(m_{n+1}, k_{n+1})$ be a new pair of natural numbers, distinct from the previous ones. Then we say that $\mathbb{P}$ is $n+1$-legal with respect to $E \cup \{ (m_{n+1}, k_{n+1} ) \}$ and $F$ if for every $l \le n$, $\mathbb{P} $ is $l$-legal with respect to $E \upharpoonright l$ and $F$, and if it obeys the following rules: \begin{enumerate} \item $\mathbb{P}$ never codes a quadruple of the form $(x,1,m_{n+1},k_{n+1})$ into $\vec{S}$.
\item Whenever $\beta < \gamma $, where $\gamma$ is the length of the iteration $\mathbb{P}$, is such that there is a $\mathbb{P}_{\beta}$-name $\dot{x}$ of a real and an integer $ i\in\{1,2\}$ such that \[F(\beta)= (\dot{x},m_{n+1},k_{n+1},i)\] and, for $G$ which is $\mathbb{P}_{\beta}$-generic over $W$, $W[G]$ thinks that \begin{align*} \exists \mathbb{Q} (&\mathbb{Q} \text{ is } n \text{-legal with respect to } E \, \land \\& \mathbb{Q} \Vdash x \in A_{m_{n+1}}), \end{align*} where $x=\dot{x}^G$, then, continuing to argue in $W[G]$, we let \[\mathbb{P}(\beta)= \hbox{Code}((x,0,m_{n+1},k_{n+1}),1).\] \item Whenever $\beta < \gamma $ is such that there is a $\mathbb{P}_{\beta}$-name $\dot{x}$ of a real and an integer $ i\in\{1,2\}$ such that \[F(\beta)= (\dot{x},m_{n+1},k_{n+1},i)\] and, for $G_{\beta}$ which is $\mathbb{P}_{\beta}$-generic over $W$, $W[G_{\beta}]$ thinks that \begin{align*} \forall \mathbb{Q}_1 (&\mathbb{Q}_1 \text{ is } n \text{-legal with respect to } E \\& \rightarrow \, \lnot(\mathbb{Q}_1 \Vdash x \in A_{m_{n+1}})), \end{align*} but there is a forcing $\mathbb{Q}_2$ such that $W[G_{\beta}]$ thinks that \begin{align*} \mathbb{Q}_{2} \text{ is } n \text{-legal with respect to $E$ and } \\ {\mathbb{Q}_2} \Vdash x \in A_{k_{n+1}}, \end{align*} then, continuing to argue in $W[G_{\beta}]$, we force with \[\mathbb{P}(\beta):= \hbox{Code}((x,0,m_{n+1},k_{n+1}),2).\] \item If neither 2 nor 3 applies, then we let either \[ \mathbb{P}(\beta)=\hbox{Code}((x,0,m_{n+1},k_{n+1}),2)\] or \[ \mathbb{P}(\beta)=\hbox{Code}((x,0,m_{n+1},k_{n+1}),1),\] depending on whether $ i$ in $F(\beta)$ was 2 or 1. Otherwise $\mathbb{P}$ uses the trivial forcing at that stage. \item If $F(\beta) = (\dot{x},m,k,i)$ and for every $\mathbb{P}_{\beta}$-generic filter $G$, $W[G] \models \forall l \le n+1 ((m_l,k_l) \ne (m,k))$, then we let \[ \mathbb{P}(\beta)=\hbox{Code}((x,0,m,k),i).\]
\end{enumerate} This ends the definition for the successor step $n \rightarrow n+1$. With this new notion of $n$-legality, we start the proof of the theorem. The ground model over which we form our iteration is again the universe $W$ defined earlier. Over $W$ we will first perform an $\omega$-length iteration $(\mathbb{P}_n)_{n \in \omega}$ of legal posets, and then a second legal iteration of length $\omega_1$. The codes of the form $(x,0,m,k)$ shall eventually define the separating sets for $\varphi_m$ and $\varphi_k$; codes of the form $(x,1,m,k)$ shall correspond to countable sets of reals (i.e.\ reals themselves) which indicate that certain codes of the form $(x,0,m,k)$, namely those avoiding the coding areas determined by these reals, are correct. We let $\{(\varphi_{m_n}, \varphi_{k_n}) \, : \, n \in \omega \}$ be an enumeration of all pairs of $\Sigma^1_3$-formulas. Assume that $(\varphi_{m_0}, \varphi_{k_0})$ is such that there is no legal forcing $\mathbb{Q}$ such that $W[\mathbb{Q}] \models \exists z (\varphi_{m_0}(z) \land \varphi_{k_0}(z))$. Repeating the arguments from before, we set $E_0:=\{(m_0,k_0) \}$ and define the notion of 1-legal with respect to $E_0$. As will become clear in a moment, every step of the iteration $(\mathbb{P}_n \, : \, {n \in\omega})$ will either use a legal forcing or define a new and gradually stronger notion of legality. We let $l_n \in \omega$ denote the degree of legality we have already defined at stage $n \in \omega$ of our iteration and define $l_0$ to be $0$ (where 0-legal should just be legal) and $l_1$ to be 1 for the base case of our induction; likewise $\mathbb{P}_0$ is set to be the trivial forcing. The limit $\mathbb{P}_{\omega}$ is the inverse limit of the iteration $(\mathbb{P}_n \, : \,n \in \omega)$, which we will define inductively.
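For orientation, we recall the standard formulation of the property we aim to force: the $\Sigma^1_3$-separation property holds if for every pair of disjoint (lightface) $\Sigma^1_3$-sets $A_m$ and $A_k$ there is a separating set $D$ which is $\Delta^1_3$, i.e.
\begin{align*}
A_m \subseteq D, \qquad A_k \cap D = \emptyset, \qquad D \in \Sigma^1_3 \cap \Pi^1_3.
\end{align*}
This is why, at the end of the proof, it suffices to give $\Sigma^1_3$-definitions of both the separating sets and their complements.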
Assume we are at stage $n \ge 1$ of the iteration and we have already defined the following list of objects and notions: \begin{enumerate} \item $\mathbb{P}_{n-1}$ and the generic filter $G_{n-1}$. \item A natural number $l_n \le n$ and a notion of $l_n$-legal relative to $E_{l_n}= \{ (m'_0,k'_0),...,(m'_{l_n-1},k'_{l_n-1}) \} \subset \{ (m_0,k_0),...,(m_{n-1},k_{n-1}) \} $, which is a strengthening of 1-legal relative to $E_0$. \item A finite set of reals $\{R_0<...<R_{n - (l_n -1)} \}$, where each real $R_i$ codes a countable set of reals. The choice of the indices will become clear later. \end{enumerate} Consider now the $(n+1)$-th pair $(\varphi_{m_n}, \varphi_{k_n})$ and split into cases. \begin{enumerate} \item[$(a)$] If there is an $l_n$-legal forcing $\mathbb{Q}$ such that \[W[G_{n-1}] \models \mathbb{Q} \Vdash \exists z (\varphi_{m_n} (z) \land \varphi_{k_n} (z)), \] then we must use the forcing $\mathbb{Q}$. We collect all the reals we have added generically so far which witness $({\ast} {\ast} {\ast})$ for a triple $(x,m,k)$ and call the set $R_{n-l_n}$. In a second step, we use the usual method to code the quadruple $(R_{{n-l_n}}, 1,m'_{l_{n}-1},k'_{l_{n}-1} )$ into $\vec{S}^1$. \item[$(b)$] In the second case there is no $l_{n}$-legal forcing $\mathbb{Q}$ relative to $E_{l_n}$ which forces the sets defined by $\varphi_{m_n}$ and $\varphi_{k_n}$ to have non-empty intersection. In that case we force with the trivial forcing and define the notion of $l_{n+1}$-legal. We first let $(m'_{l_n},k'_{l_n})=(m_n,k_n)$ and $E_{l_n +1}:= E_{l_n} \cup \{(m_n,k_n)\}$, and define $l_{n+1}$-legal relative to $E_{l_n+1}$ just as above. We do not define a new $R_{n-l_n}$. \end{enumerate} We let $\mathbb{P}_{\omega}$ be the inverse limit of the forcings $\mathbb{P}_n$ and consider the universe $W[\mathbb{P}_{\omega}]$. We shall assume from now on that in $\mathbb{P}_{\omega}$, case $(b)$ is applied infinitely many times.
\begin{lemma} For every $n \in \omega$, the tail of the iteration $(\mathbb{P}_m \, :\, m \ge n)$ is at least an $l_n-1$-legal iteration relative to $E_{l_n}$ as seen from $W[G_n]$. \end{lemma} \begin{proof} This follows by inspecting the definition of the iteration. At stage $n$ we either follow case $(a)$, which yields an $l_n -1$-legal forcing, as we use the $l_n$-legal $\mathbb{Q}$ and additionally code $(R_{{n-l_n}}, 1,m'_{l_{n}-1},k'_{l_{n}-1} )$, which results in an $l_n-1$-legal forcing; or we apply case $(b)$, thus define $l_n+1$-legality, and every further factor of the iteration must be $l_n$-legal. As mixed support iterations of $l_{n}-1$-legal forcings yield an $l_n -1$-legal forcing, this ends the proof. \end{proof} The arguments which are about to follow will depend heavily on the actual form of the iteration $\mathbb{P}_{\omega}$, which in turn depends on how the enumeration of the $\Sigma^1_3$-formulas behaves. The theorem will of course be true no matter what $\mathbb{P}_{\omega}$ looks like. To facilitate the arguments and notation, however, we assume without loss of generality from now on that the sequence $(\varphi_{m_n}, \varphi_{k_n})$ is chosen such that in the definition of $\mathbb{P}_{\omega}$ the cases $(a)$ and $(b)$ alternate: whenever $n$ is even, $(\varphi_{m_n}, \varphi_{k_n})$ is such that case $(b)$ has to be applied, and whenever $n$ is odd, case $(a)$ is the one to apply for $(\varphi_{m_n}, \varphi_{k_n})$. By changing the order of the $(\varphi_{m_n}, \varphi_{k_n})$, this can always be achieved.
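Under the alternation assumption, the degrees of legality can be computed explicitly. The following is our summary of the bookkeeping, using the base values $l_0=0$ and $l_1=1$ fixed above and the fact that only the even stages, where case $(b)$ applies, increase the degree:
\begin{align*}
l_{2n}=n \quad \text{ and } \quad l_{2n+1}=n+1 \qquad \text{ for all } n \in \omega.
\end{align*}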
Consequently, at even stages $2n$ of the iteration, we define the new notion of $n+1$-legal while forcing with the trivial forcing; at odd stages $2n+1$, we use an $n+1$-legal forcing to force that the sets defined by $\varphi_{m_{2n+1}}$ and $\varphi_{k_{2n+1}}$ have non-empty intersection, then form the real $R_{(2n+1) -(n+1)} =R_{n}$ and code the quadruple $(R_{n},1,m_{2n},k_{2n})$ into $\vec{S^1}$. A consequence of the last lemma (and our assumption on the form of $\mathbb{P}_{\omega}$) is that for every even natural number $2n$, there is a last stage in $\mathbb{P}_{\omega}$ where we create new codes of the form $(R,1,m_{2n},k_{2n})$. Indeed, by the definition of $(n+1)$-legality, no codes of the form $(R,1,m_{2n},k_{2n})$ are added by $n+1$-legal forcings, and we have that \begin{align*} \{R_{n}\} = \{ R \, : \, (R, 1,m_{2n},k_{2n}) \text{ is coded into } \vec{S^1} \}. \end{align*} To introduce a useful notion, we say that a real $x$ which is coded into $\vec{S^1}$ or $\vec{S^2}$ has coding area almost disjoint from the real $R$ if $R$ codes an $\omega_1$-sized subset of $\omega_1$ and the $\omega$-blocks where $x$ is coded are almost disjoint from the set of ordinals coded by $R$, in that their intersection is countable. \begin{lemma} In $W[\mathbb{P}_{\omega}]$, for every $n \in \omega \setminus \{0\}$, there is only one real $R$, namely $R_{n}$, which has a code of the form $(R_n,1,m_{2n},k_{2n})$ written into $\omega_1$-many $\omega$-blocks of elements of $\vec{S^1}$ almost disjoint from $R_{n-1}$. \end{lemma} \begin{proof} This is a straightforward consequence of the definition of the iteration and of our assumption on the form of $\mathbb{P}_{\omega}$. For $n=1$, note that $R_0$ is the unique real for which $(R,1,m_0,k_0)$ is coded into $\vec{S^1}$.
Then the next forcing $\mathbb{P}(2)$ is trivial, while 2-legal is defined, and the forcing $\mathbb{P}(3)$ first forces the sets defined by $\varphi_{m_3}$ and $\varphi_{k_3}$ to intersect without creating any code of the form $(R,1,m_0,k_0)$ or $(R,1,m_2,k_2)$, then forms $R_1$ and writes $(R_1,1,m_2,k_2)$ into $\vec{S}^1$ with coding area almost disjoint from $R_0$. As all later factors of the iteration are 2-legal, there will be no new codes of the form $(R,1,m_2,k_2)$, hence $R_1$ is the unique real for which $(R_1,1,m_2,k_2)$ is written into $\vec{S^1}$ with coding area almost disjoint from $R_0$. The argument for arbitrary $n$ works exactly the same way with the obvious replacements of indices. \end{proof} \begin{lemma} In $W[\mathbb{P}_{\omega}]$, every real $R_{n}$ is $\Sigma^1_3$-definable. \end{lemma} \begin{proof} This is by induction on $n$. For $n=0$, $R_0$ is the unique real for which $(R,1,m_0,k_0)$ is coded into $\vec{S^1}$. This can be written in a $\Sigma^1_3$ way: \begin{align*} x= R_0 \Leftrightarrow \exists r \forall M &(|M|=\aleph_0 \land M \text{ is transitive } \land r \in M \land \omega_1^M=(\omega_1^L)^M \rightarrow \\&M \text{ sees with the help of the coded information in } r \text{ that }\\& x \text{ is the unique real such that } (x,1,m_0,k_0) \text{ is coded in} \\& \text{ a block of } \vec{S^1}). \end{align*} Now assume that there is a $\Sigma ^1_3$-formula which uniquely defines $R_{n}$. Then, by the last lemma, $R_{n+1}$ is the unique real which has a code of the form $(R,1,m_{2(n+1)},k_{2(n+1)})$ written into $\aleph_1$-many $\omega$-blocks of elements of $\vec{S^1}$ almost disjoint from the set of ordinals coded by $R_n$.
Let $\psi_n$ be the $\Sigma^1_3$-formula which defines $R_{n}$; then \begin{align*} x= R_{n+1} \Leftrightarrow &\psi_n(R_{n}) \text{ and } \\ &\exists r \forall M (|M|=\aleph_0 \land M \text{ is transitive } \land r \in M \land \omega_1^M=(\omega_1^L)^M \rightarrow \\&M \text{ sees with the help of the coded information in } r \text{ that }\\& x \text{ is the unique real such that } (x,1,m_{2(n+1)},k_{2(n+1)}) \\& \text{is coded in} \text{ $\aleph_1$-many blocks of } \vec{S^1} \\& \text{almost disjoint from the ordinals coded by } R_{n}). \end{align*} \end{proof} The reals $R_{n}$ indicate the set of places we need to exclude in order to obtain correct codes of the form $(x,0,m_{2n},k_{2n})$ which are written into $\vec{S}$. Indeed, the iteration $\mathbb{P}_{\omega}$, after the stage where we coded the quadruple $(R_{n},1,m_{2n},k_{2n})$ into $\vec{S^1}$, will be $n+1$-legal, just by the definition of the iteration, which means that the tail of $\mathbb{P}_{\omega}$ will never produce a pathological situation. Finally we can form our desired universe of the $\Sigma^1_3$-separation property. We let $E_{\omega} = \{(m_{2n},k_{2n}) \,: \, n \in \omega\}$ and force over $W[\mathbb{P}_{\omega}]$ as our ground model. We use a countably supported iteration of length $\omega_1$ where we force every quadruple of the form $(x,0,m_{2n},k_{2n})$, $x \in 2^{\omega}$, into either $\vec{S^1}$ or $\vec{S^2}$ with coding area almost disjoint from $ R_{n}$, according to whether case 2, 3 or 4 is true in the definition of $n$-legal. Note that, by assumption, the sets defined by all pairs $(\varphi_{m_n}, \varphi_{k_n})$, $n$ odd, do have a non-empty intersection. Let $W_1$ denote the universe we obtain this way. As a consequence we can define in $W_1$ the desired separating sets as follows. For a pair $(\varphi_{m_{2n}},\varphi_{k_{2n}})$ we let $\psi_{n}$ be the $\Sigma^1_3$-formula which defines $R_{n}$.
Now for any real $x$ we let \begin{align*} x \in D_{m_{2n},k_{2n}} \Leftrightarrow \, &\psi_n(R_{n}) \text{ and } \\& \exists r \forall M (|M|=\aleph_0 \land M \text{ is transitive } \land r \in M \land \omega_1^M=(\omega_1^L)^M \\&\rightarrow M \text{ sees with the help of the coded information in } r \text{ that }\\& (x,0,m_{2n},k_{2n}) \text{ is coded into $\aleph_1$-many blocks of } \vec{S^1} \\& \text{almost disjoint from the ordinals coded by } R_{n}) \end{align*} and \begin{align*} x \notin D_{m_{2n},k_{2n}} \Leftrightarrow \, &\psi_n(R_{n}) \text{ and } \\& \exists r \forall M (|M|=\aleph_0 \land M \text{ is transitive } \land r \in M \land \omega_1^M=(\omega_1^L)^M \\&\rightarrow M \text{ sees with the help of the coded information in } r \\& \text{that } (x,0,m_{2n},k_{2n}) \text{ is coded into $\aleph_1$-many blocks of } \vec{S^2} \\&\text{almost disjoint from the ordinals coded by } R_{n}). \end{align*} Both formulas are $\Sigma^1_3$, hence the $\Sigma^1_3$-separation property holds in $W_1$. \end{proof} \section{Possible further applications and open problems} The method which was used to prove the consistency of $\bf{\Sigma^1_{3}}$-separation can be applied to the generalized Baire space as well, as we will sketch briefly. Let $BS(\omega_1)$ be defined as $\omega_1^{\omega_1}$ equipped with the usual topology, i.e.\ basic open sets are of the form $O_{\sigma} :=\{ f \in \omega_1^{\omega_1} \, : \, f \supset \sigma \}$ for $\sigma \in \omega_1^{<\omega_1}$. The projective hierarchy of $BS(\omega_1)$ is formed just as in the classical setting via projections and complements. The $\bf{\Sigma^1_{1}}$-sets are projections of closed sets, the $\bf{\Pi^1_{1}}$-sets are the complements of the $\bf{\Sigma^1_{1}}$-sets and so on. The corresponding separation problem in $BS(\omega_1)$ is the following: does there exist a set-generic extension of $L$ in which $\bf{\Sigma^1_1}$-sets can be separated by $\bf{\Delta^1_1}$-sets?
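In symbols, and following the standard conventions which we recall here for convenience, the hierarchy on $BS(\omega_1)$ is generated by
\begin{align*}
\bf{\Pi^1_n} &= \{ BS(\omega_1) \setminus A \, : \, A \in \bf{\Sigma^1_n} \},\\
\bf{\Sigma^1_{n+1}} &= \{ p[A] \, : \, A \in \bf{\Pi^1_n} \}, \qquad \bf{\Delta^1_n} = \bf{\Sigma^1_n} \cap \bf{\Pi^1_n},
\end{align*}
where $p[A]$ denotes the projection of $A \subseteq BS(\omega_1) \times BS(\omega_1)$ onto the first coordinate.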
Our above proof can be applied here as well. All we have to do is to lengthen the sequence of stationary sets we use to code. We start with $L$ as our ground model and fix our definable sequence of pairwise almost disjoint, $L$-stationary subsets of $\omega_1$, $(R_{\alpha} \, : \, \alpha < \omega_2)$. We again split $\vec{R}$ into $\vec{R}^1$ and $\vec{R}^2$, add $\omega_2$-many Suslin trees $\vec{S}$ generically and use $\vec{R}$ to code up $\vec{S}$ just as we did in the construction of the universe $W$, but leave out the almost disjoint coding forcings, as we quantify over $H(\omega_2)$ (i.e.\ over subsets of $\omega_1$) in this setting anyway. Next we list the $\bf{\Sigma^1_1}$-formulas $\varphi_n$ and start an $\omega_2$-length iteration where we add branches to members of the definable $(S_{\alpha} \, : \, \alpha < \omega_2)$ whenever our bookkeeping function $F$ hands us a triple $(x,m,k)$, just as in the situation of the usual Baire space. As there, we distinguish the several cases and restrict ourselves to \emph{legal} forcings, where legal is the straightforward adjustment of legal in the $\omega$-case. The separating sets $D_{m,k}$ are defined using $\aleph_1$-sized, transitive models which witness the wanted patterns on $\vec{S}^1$ and $\vec{S}^2.$ The sequence of the fixed $W$-Suslin trees $(S_{\alpha} \, : \, \alpha < \omega_2)$ is $\Sigma_1 (\omega_1)$-definable, thus the codes we write into them are $\Sigma_1(\omega_1)$-definable as well. All the factors have the ccc, thus an iteration of length $\omega_2$ is sufficient to argue, just as above, that in the resulting generic extension $L[G]$ every pair of disjoint $\bf{\Sigma^1_1}$-sets is separated by the according $D_{m,k}$. The just sketched method is not limited to the case $\omega_1$. Indeed, if $\kappa$ is a successor cardinal in $L$, then we can lift the argument to $\kappa$ as well.
The proof will rely on a different kind of preservation result for iterated forcing constructions, as we cannot use Shelah's theory of iterations of $S$-proper forcings anymore. Also, the choice of the definable sequence of $L$-stationary subsets of $\kappa$ has to be altered slightly, as we cannot shoot clubs in a nice way through arbitrary stationary subsets of $\kappa$. How to solve some of the just posed problems is worked out in \cite{SyLL}. What remains an interesting open problem is the following: \begin{question} Can one force the $\Sigma^1_{1}$-separation property for $BS(\kappa)$ where $\kappa$ is inaccessible? What if $\kappa$ is weakly compact? \end{question} Another possible further direction is, as mentioned already in the introduction, to replace the ground model $L$ with inner models with large cardinals. We expect that a modification of the ideas of this article can be lifted to that context. In particular we expect that, given any natural number $n \ge 1$, over the canonical inner model with $n$-many Woodin cardinals, $M_n$, one can force a model in which the $\Sigma^1_{n+3}$-separation property is true. Note here that this would produce universes where the $\Sigma^1_{2n}$-separation property is true for the first time. These considerations rely on large cardinals, however, and it is interesting whether one can get by without them. \begin{question} For $n \ge 4$, can one force the $\Sigma^1_n$-separation property over $L$? \end{question} Last, note that the technique presented in this article seems to work only locally, for one fixed $\Sigma^1_n$-pointclass. It would be very interesting to produce a model where we force a global behaviour of the $\Sigma^1_n$-separation property. \begin{question} For $n,m \in \omega$, is it possible to force the $\Sigma^1_n$- and the $\Sigma^1_m$-separation property simultaneously? If $E \subset \omega$, is it possible to force a universe in which the $\Sigma^1_n$-separation property is true for every $n \in E$?
\end{question} Finally, it is tempting to analyse which consequences of $\Delta^1_2$-determinacy can be forced to hold simultaneously. A first test question in that direction would be: \begin{question} Is it possible to force over $L$ the existence of a universe in which the $\Sigma^1_3$-separation property holds and every $\Sigma^1_3$-set has the Baire property? \end{question} \end{document}
\begin{document} \title{ The EPR Paradox Implies A Minimum Achievable Temperature} \author{ David M. Rogers} \affiliation{ University of South Florida, Tampa} \begin{abstract} We carefully examine the thermodynamic consequences of the repeated partial projection model for coupling a quantum system to an arbitrary series of environments under feedback control. This paper provides observational definitions of heat and work that can be realized in current laboratory setups. In contrast to other definitions, it uses only properties of the environment and the measurement outcomes, avoiding references to the `measurement' of the central system's state in any basis. These definitions are consistent with the usual laws of thermodynamics at all temperatures, while never requiring complete projective measurement of the entire system. It is shown that the back-action of measurement must be counted as work rather than heat to satisfy the second law. Comparisons are made to stochastic Schr\"{o}dinger unravelling and transition-probability based methods, many of which appear as particular limits of the present model. These limits show that our total entropy production is a lower bound on traditional definitions of heat that trace out the measurement device. Examining the master equation approximation to the process at finite measurement rates, we show that most interactions with the environment make the system unable to reach absolute zero. We give an explicit formula for the minimum temperature achievable in repeatedly measured quantum systems. The phenomenon of minimum temperature offers a novel explanation of recent experiments aimed at testing fluctuation theorems in the quantum realm and places a fundamental purity limit on quantum computers. \end{abstract} \pacs{42.50.Pq, 05.70.Ln, 03.65.Yz, 03.67.-a} \maketitle \section{Introduction} A version of the EPR paradox prevents simultaneously doing work on a quantum system and knowing how much work has been done.
A system can do work on its environment only if the two have a nonzero interaction energy. During interaction, the two become entangled, leading to a superposition of different possible values for the work. According to quantum mechanics, measuring the work projects into a state with exactly zero interaction energy. Therefore the system-environment interaction is always either zero or unknown. One hundred years ago, Einstein presented a first-order rate hypothesis concerning the rate of energy exchange between a molecular system and a reservoir of photons.\cite{aeins16} Under this hypothesis, the transition between states with known molecular energy levels by emission and absorption of discrete photons can be shown to bring about thermal equilibrium for all parties: the photons, the molecular energy levels, and the particle velocities. This semiclassical picture provided a clear, consistent, and straightforward account of the time-evolution of coupled quantum systems. Nevertheless, the argument must have appeared unsatisfactory at the time because it provided only a statistical, rather than an exact, mechanical description of the dynamics. Many years later, Einstein, Podolsky, and Rosen published the famous EPR paradox.\cite{epr,epr2} The paradox states that, before any measurement is made, neither position nor velocity exist as real physical quantities for a pair of entangled particles. Either of the two choices can be `made real' only by performing a measurement. The consequence for energy exchange processes follows directly. For a particle entangled with a field, neither a definite (molecular energy level / photon number) pair nor a definite (Stark state / field phase) pair exist before any measurement is made. Recent works on quantum fluctuation theorems confront this difficulty in a variety of ways.
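For orientation, Einstein's first-order rate hypothesis can be summarized in modern textbook notation (our recapitulation, not the notation of the original paper): for two molecular levels $1,2$ with populations $n_1, n_2$ and energy gap $\hbar\omega = E_2 - E_1$ exchanging photons with a field of spectral density $\rho(\omega)$,
\begin{equation*}
\frac{dn_2}{dt} = -A\, n_2 - B\,\rho(\omega)\, n_2 + B'\,\rho(\omega)\, n_1 ,
\end{equation*}
where $A$ governs spontaneous emission and $B, B'$ stimulated emission and absorption. Demanding stationarity under Boltzmann-distributed populations forces $B = B'$ and the Planck form of $\rho(\omega)$, which is the sense in which this hypothesis brings all parties to thermal equilibrium.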
One of the most prominent is the stochastic Schr\"{o}dinger equation that replaces a dissipative quantum master equation with an ensemble of trajectories containing periodic jumps due to measurement.\cite{jhoro12} In that setup, the jump process represents dissipation, so heat is defined as any energy change in the system due to the jumps. Other changes in energy, caused by varying the Hamiltonian in time, are counted as work. Fluctuation theorems for this process are based on the detailed balance condition for jumps due to the reservoir, avoiding most issues with defining a work measurement. The work of Venkatesh\cite{bvenk15} shows that regular, projective measurement of work-like quantities based on the system alone (such as time-derivative of the Hamiltonian expectation) generally leads to ``qualitatively different statistics from the [two energy measurement] definition of work and generally fail to satisfy the fluctuation relations of Crooks and Jarzynski.'' Another major approach is to model the environment's action as a series of generic quantum maps. A physical interpretation as a two-measurement process accomplishing feedback control was given by Funo.\cite{kfuno13} There, an initial partial projection provides classical information that is used to choose a Hamiltonian to evolve the system for a final measurement. That work showed that the transition probabilities in the process obey an integral fluctuation theorem. 
Although the interpretation relied on a final measurement of the system's energy, it provided one of the first examples of the entropic consequences of measurement back-action.\cite{sdeff16} Recent work on the statistics of the transition process for general quantum maps showed that the canonical fluctuation theorems hold if the maps can be decomposed into transitions between stationary states of the dynamics.\cite{gmanz15} This agrees with other works showing the importance of stationary states in computing entropy changes from quantum master equations.\cite{rkosl13} The back-action due to measurement is not present in this case. In contrast, the present work starts from a physically motivated process and shows that work and heat can be defined without recourse to stationary states of the central system. By doing so, it arrives at a clear picture of the back-action and a minimum-temperature argument. It also builds a quantum parallel to the measurement-based definition of work and heat for classical nonequilibrium systems laid out in Ref.~\cite{droge12}. There, the transition probability ratio is shown to be equivalent to a physical separation of random and deterministic forces. Although no fluctuation theorem can be shown in general, in the van Hove limit, the interaction commutes with the stationary state,\cite{bvenk15} and a fluctuation theorem such as the one in Ref.~\cite{gmanz15} applies. Our model uses a combination of system and reservoir with joint Hamiltonian, \begin{equation} \hat H = \hat H_A + \hat H_B + \gamma \hat H_{AB} . \label{e:en} \end{equation} The coupling Hamiltonian should not be able to simply shift an energy level of either system, which requires $\Tri{A}{f(\hat H_A) \hat H_{AB}} = 0$ and $\Tri{B}{f(\hat H_B) \hat H_{AB}} = 0$, for arbitrary scalar functions, $f$. A simple generalization discussed later is to waive the first constraint, but this is not investigated here.
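The coupling constraint can be checked concretely. The following sketch (toy two-qubit matrices of my own choosing, not taken from the paper) builds the joint Hamiltonian of Eq.~\ref{e:en} with explicit tensor products and verifies that the coupling cannot shift a bare level of either subsystem:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
H_A = np.diag([0., 1.0])            # system energy levels (toy values)
H_B = np.diag([0., 0.8])            # reservoir energy levels (toy values)
H_AB = np.kron(sx, sx)              # coupling with no diagonal part
gamma = 0.1
I2 = np.eye(2)

# Joint Hamiltonian of Eq. (e:en), written with explicit tensor products
H = np.kron(H_A, I2) + np.kron(I2, H_B) + gamma * H_AB

def ptrace_A(M):
    """Trace out the first qubit of a 4x4 operator."""
    return np.trace(M.reshape(2, 2, 2, 2), axis1=0, axis2=2)

def ptrace_B(M):
    """Trace out the second qubit of a 4x4 operator."""
    return np.trace(M.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Tr_A[f(H_A) H_AB] = 0 and Tr_B[f(H_B) H_AB] = 0 for arbitrary scalar f:
# since H_A and H_B are diagonal, checking every eigen-projector suffices.
for i in range(2):
    P = np.zeros((2, 2)); P[i, i] = 1.0
    assert np.allclose(ptrace_A(np.kron(P, I2) @ H_AB), 0)
    assert np.allclose(ptrace_B(np.kron(I2, P) @ H_AB), 0)
print("the coupling cannot simply shift an energy level of either system")
```

Any coupling with vanishing diagonal blocks in the joint energy basis, such as the $\sigma_x \otimes \sigma_x$ used here, satisfies both constraints.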
There have been many definitions proposed for heat and work in quantum systems. These fall roughly into three categories: the near-equilibrium limit, experimental work-based definitions, and mathematical definitions based on information theory. The near-equilibrium limit is one of the earliest models, and is based on the weak-coupling limit of a system interacting with a quantum energy reservoir at a set temperature over long time intervals. That model is probably the only general one derivable from first principles where it can be proven that every system will eventually relax to a canonical equilibrium distribution with the same temperature as the reservoir.\cite{hspoh78} The essential step is taking the van Hove limit, where the system-reservoir interaction energy scale, $\gamma$, goes to zero (weak coupling) with constant probability for energy-conserving transitions (which scale as $\gamma^2/(\hbar^2 \lambda)$). In this limit, the only allowed transitions are those that conserve the uncoupled energy, $\hat H_A+\hat H_B$. The dynamics then becomes a process obeying detailed-balance for hopping between energy levels of the system's Hamiltonian, $\hat H_A$. States with energy superpositions can mix, but eventually decay to zero probability as long as the environment can couple to every system energy level. Adding an effective time-dependent Hamiltonian, $\hat H^\text{eff}_A(t)$, onto this picture and assuming very long time-scales provides the following definitions of heat and work,\cite{ralic79} \begin{align} \dot Q &= \Tr{\hat H^\text{eff}_A(t) \dot \rho} \notag \\ \dot W &= \Tr{\pd{\hat H^\text{eff}_A(t)}{t} \rho} ,\label{e:QW} \\ \intertext{where $\dot F = dF/dt$ denotes the time-derivative of $F$ according to the dynamics, and $e^{-\beta \hat H^\text{eff}_A(t)}$ must be the stationary state of the time-evolution used. 
Note that to match the dynamics of a coupled system, $\hat H^\text{eff}_A(t)$ must be a predefined function of $t$ satisfying, (see Eq.~\ref{e:dynab})} \Tr{\hat H^\text{eff}_A(t) \Tri{B}{\rho_{AB}}} &= \Tr{(\hat H_A + \gamma \hat H_{AB}) \rho_{AB}} \label{e:ematch} \end{align} Work and heat defined by Eq.~\ref{e:QW} have been used extensively to study quantum heat engines. \cite{ralic79,egeva92,tkieu04,hquan07,mespo10,swook11,ldios11,hli13,rkosl13} For this definition, it is possible to prove convexity,\cite{hspoh78} and positivity of $\dot S_\text{tot} = \dot S_A - \beta \dot Q$.\cite{ralic79} Statistical fluctuations of heat and work have also been investigated.\cite{hquan08,jhoro12,kfuno13,gmanz15} These first applications have demonstrated some of the novel properties of quantum systems, but encounter conceptual difficulties when applied to dynamics that does not follow the instantaneous eigenstates of $H^\text{eff}_A(t)$.\cite{rkosl13,bvenk15,sdeff16} The paradox described in this work shows why moving away from eigenstates is so difficult. The small-coupling, slow-process limit under which Eq.~\ref{e:QW} applies also amounts to an assumption that the system-environment pair is continually being projected into states with known $\hat H_A + \gamma \hat H_{AB}$. It is not suitable for use in deriving modern fluctuation theorems because its validity relies on this limit. Entropy can also be defined thermodynamically by analyzing physical processes taking an initial state to a final state.
One of the simplest results using the thermodynamic approach is that even quantum processes obey a fluctuation theorem for exchanges of (heat) energy between system and environment when each transition conserves energy and there is no external driving force.\cite{cjarz04} On averaging, this agrees with the common experimental definition of heat production as the free energy change of two reservoirs set up to dissipate energy by a quantum contact that allows monitoring the energy exchange process.\cite{yutsu10,yutsu12,jkosk13,jpeko15} Semiclassical trajectories have also been investigated as a means to show that postulated expressions for quantum work go over to the classical definition in the high-temperature or small-$\hbar$ limit.\cite{cjarz15} Other works in this category consider a process where the system's energy is measured at the start and end of a time-dependent driving process. It is then easy to show that the statistics of the energy change give a quantum version of the Jarzynski equality for the free energy difference.\cite{htasa00,ptalk07a} More general results are difficult because, for coupled systems, quantum transitions that do not conserve energy are possible, giving rise to the paradox motivating this work. There have also been many mathematically-based definitions of entropy production for open quantum systems. The primary goal of a mathematical definition is to quantify the information contained in a quantum state.\cite{vvedr02} It is well-known that preparation of a more ordered system state from a less ordered one requires heat release proportional to the information entropy difference.\cite{elutz15,jparr15} From this perspective, information is more fundamental than measured heats, because it represents a lower bound on any physical process that could accomplish this transformation. A maximum work could be found from such a definition using energy conservation.
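Most information-theoretic bounds of this kind rest on the positivity of the quantum relative entropy (the Klein inequality). A quick numerical sanity check, with randomly generated example states of my own construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_rho(d):
    """Random full-rank density matrix (illustrative helper)."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def logm_h(rho):
    """Matrix logarithm of a positive Hermitian matrix."""
    w, V = np.linalg.eigh(rho)
    w = np.clip(w, 1e-300, None)      # guard against roundoff at zero
    return V @ np.diag(np.log(w)) @ V.conj().T

def rel_entropy(rho, sigma):
    """Klein inequality: S(rho|sigma) = Tr[rho(log rho - log sigma)] >= 0."""
    return np.trace(rho @ (logm_h(rho) - logm_h(sigma))).real

for _ in range(100):
    rho, sigma = rand_rho(4), rand_rho(4)
    assert rel_entropy(rho, sigma) >= -1e-8
    assert abs(rel_entropy(rho, rho)) < 1e-8
print("relative entropy is nonnegative, vanishing at rho = sigma")
```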
However, the disadvantage of a mathematical definition is that it cannot be used to construct a physical transformation process obeying these bounds. Most of the bounds on mathematical entropy production are proven with the help of the Klein inequality stating that relative entropy between two density matrices must be nonnegative.\cite{mrusk90} There are, in addition, many connections with communication and measure theory that provide approximations to the relative entropy.\cite{vvedr02,tsaga13} One particular class of mathematical definitions that has received special attention is the relative entropy, \begin{align} S(\rho | \rho^\text{inst}) &= \Tr{\rho \log \rho - \rho \log \rho^\text{inst}} \notag \\ &= \beta (F(t) - F^\text{(eq)}) \label{e:Srel} \\ \intertext{ between an arbitrary density matrix and an `instantaneous equilibrium' state,} \rho^\text{inst} &= \exp{\left[-\beta \hat H^\text{eff}(t)\right]}/Z^\text{eff}(\beta,t) .\label{e:pos} \end{align} This definition is closely related to the physical process of measuring the system's energy at the start and end of a process. Several notable results have been proven in those works, including work relations and integrated fluctuation theorems\cite{htasa00,mcamp09,mcamp09a,kfuno13,gmanz15} as well as useful upper and lower bounds.\cite{sdeff10,jhoro12} The present work is distinguished from these mathematical definitions because it completely removes the requirement for defining or using an `instantaneous equilibrium' distribution of the central system or directly measuring the central system at all. One of the primary motivations for this work has been to derive a firm theoretical foundation for analyzing time-sequences of measurements in hopes of better understanding the role of the environment in decoherence.
\cite{vbrag80,csun97,ejoos98,wstru00,qturc00a,nherm00,rpolk01,wzure02,mball16,bdanj16} The present paper provides a new way of understanding the gap between the Lindblad operators describing the quantum master equation and the physical processes responsible for decoherence. Rather than unravelling the Lindblad equation, we choose a physical process and show how a Lindblad equation emerges. This path shows the importance of the source of environmental noise in determining the low-temperature steady-state. The result also provides an alternative continuous-time Monte Carlo method for wavefunction evolution\cite{jdali92,hcarm93} without using the dissipation operator associated with the Lindblad master equation. Another outcome has been finding a likely explanation for the anomalous temperature of Utsumi et al.\cite{yutsu10,yutsu12} Those works attempted to test the classical fluctuation theorems for electron transport through a quantum dot, and found that the effective temperature of 1.37 K (derived from the slope of the transport odds ratio, $\log p_\text{fwd} / p_\text{rev}$) was much higher than the electron temperature of 130--300 mK. Trying to lower the temperature further below that point showed minimal changes in the slope, indicating a minimum temperature had been reached. Sections~\ref{s:process} and~\ref{s:therm} present a repeated measurement process, and show that it allows for a physical definition of heat and work that occurs between successive measurements. Measurements are only performed on the interacting reservoir, and (because of entanglement) cause instantaneous projection of the central system according to the standard rules of quantum mechanics. In this way, it is not required to define a temperature for the central system. Because the central system is generally out of equilibrium, the concept of equilibrium is applied only to the environmental interactions.
Section~\ref{s:clausius} proves the Clausius form of the second law for the new definitions, and section~\ref{s:jcm} immediately applies these to the quantum theory of radiation. The limits of slow and fast measurement rates are investigated in sections~\ref{s:weak} and~\ref{s:strong}. The slow rate limit recovers Einstein's picture of first-order rate processes and complies with Eq.~\ref{e:QW} when the system-reservoir coupling, $\gamma$, is infinitesimally small. The fast measurement limit does not exhibit a quantum Zeno paradox,\cite{bvenk15} but effectively injects white noise into the energy of the joint system -- consistent with the energy-time uncertainty principle. At intermediate stages, continuous finite interaction with the reservoir causes an effective increase in the `temperature' of the system's steady-state. Although surprising, the measurement rate is unavoidable in the theory as it is the exact parameter controlling broadening of spectral lines.\cite{epowe93} I end with a proof in section~\ref{s:mint} that effects from the minimum achievable temperature will be seen when the reservoir temperature is less than the system's first excitation energy and the measurement rate is on the order of this excitation energy. \section{ Repeated Measurement Process}\label{s:process} To study the action of continual environmental measurement on part of a quantum system, I propose the following process (Fig.~\ref{f:ref}): \begin{enumerate} \item Let $\ket{\psi}$ represent a general wavefunction of the central system, and $\ket{n}$ represent the state of the measurement device at energy level $\hat H_B \ket{n} = \hbar \omega^B_n \ket{n}$. \item The central system is coupled to the measurement device whose state is chosen at random from a starting distribution, $\rho_B(0)$, (panel d-a) \begin{equation} \ket{\psi} \to \ket{\psi}\otimes \ket{n}. 
\end{equation} The starting distribution must have a well-defined energy, and so $\rho_B(0)$ should be diagonal in the energy basis of system $B$. \item The joint system is evolved forward using the coupled Hamiltonian, $\hat U(t) = e^{-it \hat H/\hbar}$ until the next measurement time, chosen from a Poisson process with rate $\lambda$ (panel b-c). \begin{equation} \ket{\psi, n} \to \hat U(t) \ket{\psi, n} \end{equation} \item The state of the measurement device is `measured' {\em via} projection into one of its uncoupled energy eigenstates, $\ket{m}$ (panel c). \begin{equation} \hat U(t) \ket{\psi, n} \to \frac{\bra{m} \hat U(t) \ket{\psi, n}}{\sqrt{p_m}}, \end{equation} with probability $p_m = |\avg{m|U(t)|\psi, n}|^2$. \end{enumerate} The measurement process itself is described exactly by the `purification' operator of Spohn and Lebowitz,\cite{hspoh78} whose effect on the joint density matrix is given by, \begin{equation} \hat P \rho_{AB} = (\operatorname{Tr}_B \rho_{AB}) \otimes \rho_B(0) .\label{e:pure} \end{equation} Every time this operation is performed, the memory of the environmental system is destroyed, and all system-environment superposition is removed. For studying the thermalization process, it suffices to use a thermal equilibrium distribution for $\rho_B(0)$, \begin{equation} \rho^\text{eq}_B(\beta) = e^{-\beta \hat H_B}/Z_B(\beta) \label{e:eqB}. \end{equation} In many experimental cases, $\rho_B(0)$ represents a specially prepared input to drive the system toward a desired state. The operation of measurement disconnects the two systems, and, more importantly, makes the energy of the reservoir system correspond to a physical observable. A complete accounting for heat in quantum mechanics can be made using only these measurements on ancillary systems, rather than the central, $A$, system.
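A single cycle of steps 1--4 can be sketched numerically. The two-qubit Hamiltonians, coupling, temperature, and measurement rate below are assumed toy values for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

sx = np.array([[0., 1.], [1., 0.]])
H_A, H_B = np.diag([0., 1.0]), np.diag([0., 0.8])   # assumed toy spectra
gamma, beta, lam = 0.3, 1.0, 0.5                    # coupling, 1/k_B T, rate
I2 = np.eye(2)
H_AB = np.kron(sx, sx)
H = np.kron(H_A, I2) + np.kron(I2, H_B) + gamma * H_AB

psi_A = np.array([0.6, 0.8])                        # arbitrary system state

# Step 2: draw the reservoir level from its thermal distribution, Eq. (e:eqB)
pB = np.exp(-beta * np.diag(H_B)); pB /= pB.sum()
n = rng.choice(2, p=pB)
psi = np.kron(psi_A, I2[n])                         # |psi> (x) |n>

# Step 3: evolve for a Poisson waiting time with rate lam
t = rng.exponential(1.0 / lam)
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
phi = U @ psi

# Energy injected to disentangle the pair at measurement time: -gamma <H_AB>
back_action = -gamma * np.real(phi.conj() @ H_AB @ phi)

# Step 4: project the reservoir onto an uncoupled energy eigenstate |m>
phi2 = phi.reshape(2, 2)                            # indices (system, reservoir)
p_m = np.sum(np.abs(phi2) ** 2, axis=0)
m = rng.choice(2, p=p_m)
psi_A_new = phi2[:, m] / np.sqrt(p_m[m])            # post-measurement system state

assert np.isclose(np.linalg.norm(psi_A_new), 1.0)
print("measured m =", m, " back-action energy =", round(back_action, 4))
```

Repeating the cycle with fresh thermal draws for the reservoir implements the purification operator of Eq.~\ref{e:pure} trajectory by trajectory.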
The thermodynamics based on this accounting allows the central system to retain most of its quantum character, while at the same time deriving the traditional, operational relationships between heat and work. Although the analysis below is phrased in terms of density matrices, that view is equivalent to carrying out this process many times with individual wave-functions. Specifically, if $\rho_{A}(0) = \sum_j p_j \pure{\psi_j}$ is composed of any number of pure states,\cite{ejayn57a} the final density matrix at time $t$ is a linear function of $\rho_A$ and hence of each $\pure{\psi_j}$. Carrying out the process on individual wave-functions thus allows an extra degree of choice in how to compose $\rho_A(0)$, the use of which does not alter any of the results. This process is a repeatable version of the measurement and feedback control process studied in Ref.~\cite{kfuno13}, and fits into the general quantum map scheme of Ref.~\cite{gmanz15}. Nevertheless, our analysis finds different results because the thermodynamic interpretation of the environment and measuring device allows the reservoir to perform work in addition to exchanging heat. \begin{figure} \caption{Schematic of the repeated measurement process. (a-c) Exact evolution of the coupled system+reservoir from an uncoupled state quickly leads to an entangled state. (c) Measuring the reservoir energy selects a subsample of the system, removing coherences. (d) Replacing the reservoir state with a thermal sample results in heat and work output. The thermal nature of the environment is responsible for dissipation.} \label{f:ref} \end{figure} \section{ Thermodynamics of Repeated Measurement}\label{s:therm} \begin{figure*} \caption{Work and Heat of the intermittently measured quantum system. On the left, the system (A) and reservoir (B) Hamiltonians are uncoupled.
Coupling does not initially change their energy, since the diagonal elements of $\hat H_{AB}$ vanish.} \label{f:therm} \end{figure*} In order for heat and work to have an unambiguous physical meaning, they must be represented by the outcome of some measurement. Fig.~\ref{f:therm} presents the energies for each operation applied to a system and its reservoir over the course of each measurement interval in Fig.~\ref{f:ref}. Initially (in Step 2), the density matrix begins as a tensor product, uncoupled from the reservoir, which has a known starting distribution, $\rho_B(0)$. However, for a coupled system and measurement device, time evolution leads to entanglement. At the time of the next measurement, the entanglement is projected out, so it is again permissible to refer to the properties of the $A$ and $B$ systems separately. After a measurement, the total energy of the system/reservoir pair will have changed from $\tavg{\hat H_A + \hat H_B + \gamma \hat H_{AB}}$ to $\tavg{\hat H_A + \hat H_B}$. The amount of energy that must be added to `measure' the system/reservoir pair at any point in time is therefore, $-\gamma \tavg{\hat H_{AB}}$. This step is responsible for the measurement `back-action', and the violation of the fluctuation theorem for general quantum dynamics. Strictly speaking, this measurement energy does not correspond to an element of physical reality. Nevertheless, the starting and ending $\hat H_A$, $\hat H_B$ are conserved quantities under the uncoupled time-evolution, and so the energy of the measurement step can be objectively defined in an indirect way. This instantaneous measurement of the reservoir simulates the physical situation where an excitation in the reservoir leaks out into the environment. After this happens, the information it carried is available to the environment, causing traditional collapse of the system/reservoir pair. To complete the cycle, the reservoir degree of freedom must be replaced with a new sample from its input ensemble.
For the micromaser, this replacement is accomplished spatially by passing separate atoms ($B$) through a cavity, one at a time. On average, the system should output a `hot' $\rho_B(t)$, which the environment will need to cool back down to $\rho_B(0)$. Using the methods of ordinary thermodynamics,\cite{ralic79,tkieu04,swook11} we can calculate the minimum heat and maximum work for transformation of $\rho_B(t)$ back to $\rho_B(0)$ via an isothermal, quasistatic process at the set temperature of the reservoir, \begin{align} \beta Q &= -\Tr{\rho_B(0) \log \rho_B(0)} + \Tr{\rho_B(t) \log \rho_B(t)} \notag \\ &= -\Delta S_B \label{e:Q} \\ W_\text{therm} &= \Tr{(\rho_B(0) - \rho_B(t)) \hat H_B} + \Delta S_B/\beta \notag \\ &= - \Delta F_B \label{e:Wtherm} \\ W &= W_\text{therm} + \Delta H_A + \Delta H_B \notag \\ &= \Delta H_A - Q \label{e:W} \end{align} The signs of these quantities are defined as energy added to the system, while $\Delta X \equiv \tavg{\hat X}_\text{final} - \tavg{\hat X}_\text{initial}$ represents the total change in $\hat X$ during evolution from one measurement time to the next. In this work, $T$ always refers to the externally set temperature of the reservoir system. The temperature of the reservoir, used in defining $\beta = 1/k_B T$ above, is entirely related to the conditions under which the reservoir states are prepared. It can be different for each measurement interval. Note that when a thermal equilibrium distribution is used for the reservoir (Eq.~\ref{e:eqB}), the reservoir dissipates energy from the system. Since it always begins in a state of minimum free energy, the reservoir also recovers work from the system: $-W_\text{therm}$ is always strictly positive by Eq.~\ref{e:pos}. This makes sense when the central system is relaxing from an initial excited state. When the central system is at equilibrium, the second law is saved (Sec.~\ref{s:clausius}) by including the work done during the measurement step.
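A minimal numerical sketch of Eqs.~\ref{e:Q} and~\ref{e:Wtherm}, with an assumed three-level reservoir that starts thermal and comes out of the interaction slightly hotter (all numbers are illustrative, not from the paper):

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S = -Tr[rho log rho] (natural log, k_B = 1)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))

beta = 1.0
E = np.array([0.0, 0.7, 1.5])                 # assumed reservoir levels
p0 = np.exp(-beta * E); p0 /= p0.sum()
rho0 = np.diag(p0)                            # rho_B(0), Eq. (e:eqB)
pt = np.exp(-0.6 * E); pt /= pt.sum()         # rho_B(t): same levels, hotter
rhot = np.diag(pt)

dS_B = vn_entropy(rhot) - vn_entropy(rho0)
Q = -dS_B / beta                                             # Eq. (e:Q)
W_therm = np.trace((rho0 - rhot) @ np.diag(E)) + dS_B / beta  # Eq. (e:Wtherm)

# The reservoir starts in its minimum free-energy state, so it always
# extracts work from the system: -W_therm >= 0.
assert -W_therm >= 0
print(f"Q = {Q:.4f}, W_therm = {W_therm:.4f}")
```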
\subsection{ Caution on Using a Time-Dependent Hamiltonian}\label{s:issues} The assumption of a time-dependent Hamiltonian for the system leads to an ambiguity on the scale of the measurement back-action.\cite{kfuno13,bvenk15,sdeff16} This presentation does not follow the traditional route of assuming a time-dependent Hamiltonian for the central system. Such an assumption is awkward to work with in this context because it side-steps the measurement paradox. Instead, it assumes the existence of a joint system wherein the dynamics for sub-system $A$ is given exactly by, $\dot \rho_A(t) = -\frac{i}{\hbar}[\hat H^\text{eff}_A(t), \rho_A(t)]$. The complete physical system plus environment must have a conserved energy function. This matches the dynamics, \begin{equation} \dot \rho_A(t) = -\frac{i}{\hbar}\Tri{B}{\hat H_A + \hat H_B + \gamma \hat H_{AB}, \rho_{AB}(t)} \label{e:dynab} \end{equation} exactly when Eq.~\ref{e:ematch} holds. In classical mechanics, such a function can be formally constructed by adding an ancillary degree of freedom, $y$, that moves linearly with time, $y(t) = t$. The potential energy function, \begin{equation} V(x,y) = V(x) + V^\text{int}(x,y) - \int_0^y \pd{V^\text{int}(x_\text{ref}(t),t)}{t} dt \end{equation} is defined using the known trajectory for $x_\text{ref}(t)$ under the desired Hamiltonian, $H(x,t)$, so that $y$ experiences no net force. Alternatively, $y$ can be considered to be infinitely massive. When translated to quantum mechanics, neither of these last two methods avoids the Heisenberg uncertainty principle.\cite{lland3,cjarz15} An intuitive argument can be based on $\tavg{\Delta p} \tavg{\Delta x} \ge \frac{\hbar}{2}$. In both cases, the work done by the system on the reservoir is, $\pd{V^\text{int}(x,y)}{y} dy$, and contributes directly to the change in momentum of $y$. The $y$-coordinate was constructed to move linearly in time, and hence measures the `time' of interaction.
Using these translations from momentum change and position to work / time provides, $\tavg{\Delta p_y} \tavg{\Delta y} \simeq \tavg{\Delta_t V(x,t)}\tavg{\Delta t}$. Although the definitions of heat and work in Eq.~\ref{e:QW} can be shown to be mathematically consistent with the laws of thermodynamics, they require infinitesimally slow time-evolution under the Markov assumption and constant comparison to a steady-state distribution.\cite{ralic79,rkosl13,gmanz15} The present method is valid under a much less restrictive set of assumptions. In particular, it allows arbitrary time-evolution, and only makes use of the equilibrium properties of the $B$ system, not the central, $A$ system. The present set of definitions is also directly connected to the experimental measurement process. Defining a time-dependent $\hat H$ as is done in other works groups the central system together with some aspects of the reservoir. In the present framework, it is easy to allow $\hat H_B$ and $\hat H_{AB} = \hat H_{AB}^0 + \hat H_A'$ to be different for each measurement interval (encompassing even non-Markovian dynamical schemes\cite{wstru00,ashab05,smani06}). In this case, the analysis above mostly carries through, with the exception that, since $\tavg{f(\hat H_A) \hat H_{AB}} \ne 0$, an extra amount of energy is added during coupling, but not removed during measurement. This extra energy contributes to the work done on the system according to Eq.~\ref{e:QW}. However, the connection to heat found here is very different because, as the next subsection shows, the definition of heat in Eq.~\ref{e:QW} requires that the reservoir be near equilibrium. The comparison presented here is conceptually simpler because energy stored in the system cannot be instantaneously altered by an external source.
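The classical clock-coordinate construction above can be checked with finite differences. The interaction potential and reference trajectory below are toy choices of my own (the construction works for any smooth pair):

```python
import numpy as np

# Assumed forms: V_int(x, t) = x*sin(t), reference trajectory x_ref(t) = cos(t)
def V_int(x, t):
    return x * np.sin(t)

def x_ref(t):
    return np.cos(t)

h = 1e-5

def force_on_y(x, y):
    # -dV/dy for V(x,y) = V(x) + V_int(x,y) - int_0^y dV_int(x_ref(t),t)/dt dt.
    # V(x) drops out; by the fundamental theorem of calculus the counter-term
    # contributes +dV_int(x_ref(y), t)/dt evaluated at t = y.
    dVint_dy = (V_int(x, y + h) - V_int(x, y - h)) / (2 * h)
    dCounter = (V_int(x_ref(y), y + h) - V_int(x_ref(y), y - h)) / (2 * h)
    return -(dVint_dy - dCounter)

y = 1.3
# On the reference trajectory the clock coordinate feels no net force ...
assert abs(force_on_y(x_ref(y), y)) < 1e-6
# ... but a displaced system coordinate does push on it.
assert abs(force_on_y(x_ref(y) + 0.5, y)) > 1e-2
print("y(t) = t is maintained along the reference trajectory")
```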
For a specific example, consider the energy exchange process taking place between a nuclear spin and its environment in an NMR spin-relaxation experiment.\cite{ldios11} In order to represent stored energy, the Hamiltonian of the atom can be defined with respect to some static field, $\hat H_A = \frac{\hbar \omega_0}{2} \sigma_z$. Rather than varying the field strength directly, changing the atomic state from its initial equilibrium can be brought about with an interaction Hamiltonian, such as the JCM studied here. The work can be accumulated over each time interval to give, \begin{equation} \int^t_0 dt'\; W(t') = \frac{\hbar\omega_0}{2} \Tr{\sigma_z (\rho_A(t) - \rho_A(0))} - \int_0^t dt'\; Q(t') . \end{equation} The heat release can be analyzed using either of the methods in the next section (Sec.~\ref{s:QB}). Assuming the minimum heat release leads to $\int_0^t dt' \beta(t')Q(t') = S_A(t)-S_A(0)$, in agreement with the rules of equilibrium thermostatics. Alternately, in the limit where the $B$ system always begins at thermal equilibrium and moves infinitesimally slowly between each measurement interval, Eq.~\ref{e:QW} is recovered, giving $W(t) = 0$. \subsection{ Comparison to Common Approximations for the Heat Evolution}\label{s:QB} The heat generated in the process of Figs.~\ref{f:ref} and~\ref{f:therm} comes directly from the entropy change of the measurement system, $B$. Most analyses ignore the measurement system, making this result difficult to compare with others in the literature. Here I present two simple methods for calculating $\Delta S_B$ from quantities available in other methods. First, assuming the time-dependence of $\rho_A(t)$ is known, a lower bound on the heat emitted can be derived from the state function, $S_A(t) = -\Tr{\rho_A\log\rho_A}$. Because $\Delta S_A + \Delta S_B \ge 0$ over each time interval, the total heat added obeys the inequality, \begin{equation} \Delta Q(t) = -\Delta S_B/\beta \le \Delta S_A/\beta .
\end{equation} Assuming the minimum required heat release leads to a prediction of the quasistatic heat evolution, \begin{equation} \int_0^t \frac{dQ(t')}{dt'}\, dt' \le \int_0^t \frac{1}{\beta(t')} \frac{dS_A(t')}{dt'}\, dt' . \end{equation} This is exactly the result of equilibrium quantum thermodynamics, valid for arbitrary processes, $\rho_A(t)$. Second, if the $B$ system always begins in thermal equilibrium, $\rho_B(0) = \rho_B^{(\beta)}$, and the change in occupation probability for each energy level ($\Delta \operatorname{diag}(\rho_B)$) over a measurement interval is small, then we can directly use the expansion,\cite{hspoh78} \begin{equation} \delta S_B = -\sum_j \delta p_j \log p_j . \label{e:dSB} \end{equation} This is helpful because in Fig.~\ref{f:therm}, the entropy of the $B$ system is always calculated in the energy basis of $B$. Substituting the canonical equilibrium distribution, \begin{equation} \delta Q = - \sum_j \delta p_j E_j = -\delta H_B \label{e:dQB} . \end{equation} Equations~\ref{e:dSB} and~\ref{e:dQB} apply whenever $\rho_B(0)$ is a canonical distribution and the change in $\rho_B$ is small over an interval. In the van Hove limit (Sec.~\ref{s:weak}), energy is conserved between the $A$ and $B$ systems. Because of energy conservation, the heat evolution of Eq.~\ref{e:dQB} is exactly the well-known result of Eq.~\ref{e:QW} in this case. \section{ Thermodynamic Consistency}\label{s:clausius} For the definitions of work and heat given above to be correct, they must meet two requirements. In order to satisfy the first law, the total energy gain at each step must equal the heat plus work from the environment. This is true by construction because the total energy change over each cycle is just $\tavg{\Delta\hat H_A}$. Next, in satisfaction of the second law, the present section will show that there can only be a net heat release over any cyclic process. Since $Q$ has been defined as heat input to the system, this means \begin{equation} \oint Q \le 0.
\end{equation} There is a fundamental open question as to whether the energy change caused by the measurement process should be classified as heat or work. Counting it as heat asserts that it is spread throughout the environment in an unrecoverable way. Conversely, counting it as work asserts that measurement can only be brought about by choosing to apply a stored force over a distance. In the cycle of Fig.~\ref{f:therm}, it is classified as work, because this is the only assignment consistent with thermodynamics. Counting $\tavg{\gamma \hat H_{AB}}$ as heat leads to a systematic violation of the second law, as I now show. Integrating the quantity, \begin{equation} R = \avg{\Delta H_A} + \avg{\Delta H_B} - \Delta S_B/\beta , \end{equation} over an entire cyclic process cancels $\tavg{\Delta H_A}$, leaving \begin{equation} \oint R = \oint \avg{\Delta H_B} - \Delta S_B/\beta .\label{e:Rcontrib} \end{equation} If the $B$ sub-system starts each interval in thermal equilibrium (Eq.~\ref{e:eqB}), this is the free energy difference used in Eq.~\ref{e:Srel}. The Klein inequality then proves the {\em positivity} of each contribution to Eq.~\ref{e:Rcontrib}. Therefore, over a cyclic process, $\oint R \ge 0$. A thermodynamically sound definition is found when counting as part of $Q$ only the entropy change of the reservoir. Heat comes into this model because the environment is responsible for transforming $\rho_B(t)$ back into $\rho_B(0)$. Using a hypothetical quasistatic, isothermal process to achieve this will require adding a heat, $Q = (S_B(0) - S_B(t))/\beta = -\Delta S_B$. I now show that $\oint \Delta S_B \ge 0$ by considering entropy changes for the $A$-$B$ system jointly. At the starting point, the two systems are decorrelated,\cite{tsaga13} \begin{equation} S[\rho_A(0)\otimes\rho_B(0)] = S_A(0) + S_B(0) . \end{equation} The time-evolution of this state is unitary, so $\rho_{AB}(t)$ has the same value for the entropy. 
However, projection always increases the entropy,\cite{ejayn57a,tsaga13} so \begin{align} S[\rho_A(t) \otimes \rho_B(t)] &\ge S[\rho_{AB}(t)] . \intertext{The $A$ and $B$ systems in the final state are also decorrelated, proving the statement,} \Delta S_A + \Delta S_B &\ge 0. \end{align} This is quite general, and applies to any measurement time, starting state, and Hamiltonian, $\hat H_{AB}$. Again, for a cyclic process $A$ must return to its starting point, so $\oint \Delta S_A = 0$, and $\oint Q \le 0$. It should be stressed that the results of this section hold regardless of the lengths of the measurement intervals, $\{t^{(k+1)} - t^{(k)}\}$. The choice of Poisson-distributed measurement times is not justified in every case. This is especially true for the physical micromaser, where the measurement times should instead be Gaussian, based on the cavity transit time for each atom. Instead, choosing measurement times from a Poisson distribution mimics the situation where a measurement is brought about from an ideal, random collision-type process. \section{ Results} \subsection{ Analysis of the Micromaser}\label{s:jcm} Exact numerical results are known for the micromaser in the rotating wave approximation -- a single-qubit system $(B)$ in state $e$ or $g$ coupled to a single mode of an optical cavity ($A$) in a Fock state, $n = 0, 1, \ldots$.\cite{sharo06,hwalt06,sharo13} The Hamiltonian is known as the Jaynes-Cummings Model (JCM), \begin{align} \hat H_A &= \hbar \omega^A (\hat n_A + \frac{1}{2}) \label{e:JCM} \\ \hat H_B &= \frac{\hbar \omega^B}{2} (\pure{e} - \pure{g}) \\ \gamma \hat H_{AB} &= \gamma( a_A^\dagger a_B + a_A a_B^\dagger ) \label{e:Hab} \end{align} The rotating wave approximation neglects a term, \begin{equation} \gamma \hat H_{AB}' = \gamma ( a_A^\dagger a_B^\dagger + a_A a_B ) \label{e:Hab2} \end{equation} in the Hamiltonian causing simultaneous excitation of the qubit and cavity.
It is usually justified when the two frequencies, $\omega^A$ and $\omega^B$, are near resonance.\cite{ejayn58} \footnote{Note that the atom-field interaction should also contain a diamagnetic term that is ignored here but may sometimes be grouped with an effective change in $\omega^A$.\cite{mcris93}} After a time, $t$, the initial state will be in superposition with a state where the photon has been emitted. The ideal 1-photon micromaser can be solved analytically because the total number of excitations is conserved, and unitary evolution only mixes the states $\ket{e,n-1}$ and $\ket{g,n}$. Thus the only allowed transitions are between these two states. Attempting to define the work done by the excited atom on the field requires measuring the energy of the atom. This is physically realized in the micromaser when the atom exits the cavity. This projects the atom into a state of known excitation, $e$ or $g$. The work and ending states (and from those the heat) can all be neatly expressed in terms of $x(t)$, the average number of photons absorbed by the atom given that a projective measurement on the atom is performed at time $t$, \begin{align} x(t) &\equiv \sum_{n=0}^\infty p_n(\sigma_g |b_n(t)|^2 - \sigma_e |b_{n+1}(t)|^2) \\ \sigma_e(t) &= \sigma_e(0) + x(t) \\ \sigma_g(t) &= \sigma_g(0) - x(t) \\ \avg{\Delta H_A(t)} &= -\hbar \omega^A x(t) \\ \avg{\Delta H_B(t)} &= \hbar \omega^B x(t) . \end{align} The expression for the transition probability, $|b_n(t)|^2$, is given in the appendix. The analytical solution gives an exact result for the heat and work when a measurement is done at time $t$. Averaging over the distribution of measurement times then gives the expected heat and work values over an interval. In the limit of many measurements ($T/t \to \infty$), this expectation gives the rate of heat and work per average measurement interval.
Note that for the physical micromaser setup, the interaction time is set by the velocity of the atom and the cavity size -- resulting in a narrow Gaussian distribution rather than the Poisson process studied here. For a Poisson distribution of interaction times, the averages are easily computed to be, \begin{equation} \avg{|b_n(t)|^2} = \frac{1}{2}\left(1 - \frac{\lambda^2 + \Delta_c^2} {\lambda^2 + \Delta_c^2 + 4 n \gamma^2/\hbar^2}\right) . \end{equation} Strong and weak-coupling limits of this equation give identical first-order terms, \begin{equation} \lim_{\lambda \to \infty} \avg{|b_n(t)|^2} = \lim_{\gamma/\hbar \to 0} \avg{|b_n(t)|^2} = \frac{2 n\gamma^2/\hbar^2}{\lambda^2 + \Delta_c^2} . \end{equation} Since measurements happen with rate $\lambda$, the effective rate of atomic absorptions in these limits is, \begin{equation} \lambda\avg{x} = \frac{2\lambda\gamma^2/\hbar^2}{\lambda^2 + \Delta_c^2} \left( \sigma_g \avg{n} - \sigma_e \avg{n+1} \right) . \label{e:dx} \end{equation} This recovers Einstein's simple picture of photon emission and absorption processes occurring with equal rates,\cite{aeins16} \begin{align} dW_\text{abs}/\Delta E_\text{abs} &= \sigma_g B^e_g \avg{n} dt \\ dW_\text{em}/\Delta E_\text{em} &= \sigma_e \left( A^g_e + B^g_e \avg{n} \right) dt . \end{align} All the $A,B$ coefficients are equal to the prefactor of Eq.~\ref{e:dx} here because $x(t)$ counts only a single cavity mode at frequency $\omega^A$. In a blackbody, the $A$ coefficient goes as $\omega^2 d\omega$ because more modes contribute.\cite{epowe93} The denominator, $\lambda^2 + \Delta_c^2$, is exactly the one that appears in the traditional expression for a Lorentzian line shape. Here, however, the measurement rate, $\lambda$, appears rather than the inverse lifetime of the atomic excited state. The line broadens as the measurement rate increases, and the atom is able to absorb/emit photons further from its excitation frequency.
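As a consistency check (a numerical sketch added here, not part of the published analysis; $\hbar = 1$ and the parameter values are illustrative), the Poisson-averaged transition probability above can be compared against a Monte Carlo average of $|b_n(t)|^2$, with $b_n(t)$ taken from the appendix, over exponentially distributed measurement times:

```python
import numpy as np

# Illustrative parameters (hbar = 1); n is the Fock-sector index
gamma, lam, Delta_c, n = 0.05, 0.3, 0.1, 3

Omega_n = 2.0 * gamma * np.sqrt(n)        # Rabi frequency from the appendix
Omega_np = np.hypot(Omega_n, Delta_c)     # Omega_n' = sqrt(Omega_n^2 + Delta_c^2)

def b_n_sq(t):
    """Transition probability |b_n(t)|^2 from the appendix solution."""
    return (Omega_n / Omega_np) ** 2 * np.sin(Omega_np * t / 2.0) ** 2

# Monte Carlo average over the exponential waiting-time distribution
rng = np.random.default_rng(1)
t = rng.exponential(1.0 / lam, size=2_000_000)
mc = b_n_sq(t).mean()

# Closed form quoted in the text
closed = 0.5 * (1.0 - (lam**2 + Delta_c**2)
                / (lam**2 + Delta_c**2 + 4.0 * n * gamma**2))
assert abs(mc - closed) < 2e-3
print(mc, closed)
```

The agreement follows from $\avg{\cos\Omega_n' t} = \lambda^2/(\lambda^2+\Omega_n'^2)$ for exponentially distributed $t$.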
Only the resonant photons will cause equilibration, while others will cause noise. In the van Hove limit, $\gamma,\lambda \to 0$ and the contribution of the resonant photons will dominate. \begin{figure} \caption{Work and heat production during decay of a photon in a cavity ($n_A = 1$) coupled to a 2-level reservoir (Eqns.~\ref{e:JCM}--\ref{e:Hab}).\label{f:decay}} \end{figure} This simple picture should be compared to the full (Rabi) coupling, Eq.~\ref{e:Hab} plus Eq.~\ref{e:Hab2}. The remaining figures show numerical results for the simulation of a resonant cavity ($A$) and qubit ($B$) system starting from a cavity in the singly excited energy state.\cite{jjoha13} Figure~\ref{f:decay}a compares the average work and heat computed using this cycle at the state-point ($\omega^A = \omega^B = 2\pi$, $\gamma/\hbar = 0.05$, $\lambda = 10^{-2}$). The average was taken over 5000 realizations of process~\ref{f:therm}. Rabi oscillations can be seen clearly as the photon exchanges with the reservoir (atom). Initially, this increases the entropy of the incoming atom's energy distribution. When there is a strong probability of emission, however, the integrated heat release, $-\int_0^t Q(t') dt'$, shows that the system actually decreases the entropy of the reservoir. This happens because the reservoir atom is left in a consistent, high-energy, low-entropy state. In this way, the reservoir can extract useful work from the cavity. Panel (b) shows that no laws of thermodynamics are broken, since the system starts in a pure state, but ends in an equilibrium state. The information entropy of the system itself increases appreciably during the first Rabi cycle. Eventually, the equilibration process ends with the initial excitation energy being transformed into both heat and work. Despite the appearance of Fig.~\ref{f:decay}a (which happens for this specific coupling strength), the emitted heat is generally non-zero.
The heat and work defined by Eq.~\ref{e:QW} differ from the results of this section. Because the earlier definition is based only on the system itself, without considering the reservoir, there is no way to use the energy of the interacting atom for useful work. Eq.~\ref{e:QW} therefore finds zero work, and classifies $\Delta H_A$ entirely as heat lost to the environment. Panels (b) and (d) of Fig.~\ref{f:decay} show results from considering the system and reservoir jointly in the weak-coupling limit, as will be discussed in Sec.~\ref{s:weak}. \subsection{ Weak Coupling Limit}\label{s:weak} The classical van Hove limit was investigated in detail by Spohn and Lebowitz,\cite{hspoh78} who showed generally that thermal equilibrium is reached by $\rho_A$ in this limit irrespective of the type of coupling interaction, $\hat H_{AB}$. First, the interaction strength, $\gamma$, must tend to zero so that only the leading-order term in the interaction remains. This makes the dynamics of $\rho_A(t) = \Tri{B}{\rho_{AB}(t)}$ expressible in terms of 2-point time-correlation functions for the reservoir. Next, the long-time limit (here $\lambda \to 0$) is taken by finding the average density of $\rho_A$ upon measurement. This enforces energy conservation because time evolution causes off-diagonal matrix elements to oscillate and average to zero over long enough timescales. Finally, the Gibbs ensemble is found to be stationary by combining energy conservation with the detailed balance condition obeyed by the reservoir, \begin{align} \Tri{B}{e^{-\beta \hat H_B} \hat A(0) \hat B(t)} &= \Tri{B}{e^{-\beta \hat H_B} \hat B(t-i\beta) \hat A(0)}, \\ \intertext{which enforces for the $A$ system,} e^{-\beta E^A_n} B^m_n &= e^{-\beta E^A_m} B^n_m . \end{align} The time-dependence of the operators in this equation is defined by the Heisenberg picture, below.
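The detailed balance (KMS) condition quoted above is an operator identity holding for any Hermitian reservoir Hamiltonian, with $\hat B(t) = e^{i\hat H_B t/\hbar}\,\hat B\,e^{-i\hat H_B t/\hbar}$ continued to complex times. A minimal random-matrix check of the identity (an added sketch, not from the paper; $\hbar = 1$ and the matrix sizes are arbitrary) is:

```python
import numpy as np

rng = np.random.default_rng(3)
dim, beta, t = 3, 0.8, 1.3
G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (G + G.conj().T) / 2              # plays the role of hat H_B
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))

evals, V = np.linalg.eigh(H)
def exp_zH(z):
    """e^{z H} for complex z, via the eigendecomposition of H."""
    return (V * np.exp(z * evals)) @ V.conj().T

def heisenberg(op, tau):
    """op(tau) = e^{i H tau} op e^{-i H tau}; tau may be complex (hbar = 1)."""
    return exp_zH(1j * tau) @ op @ exp_zH(-1j * tau)

w = exp_zH(-beta)                     # unnormalized thermal weight
lhs = np.trace(w @ A @ heisenberg(B, t))
rhs = np.trace(w @ heisenberg(B, t - 1j * beta) @ A)
assert np.allclose(lhs, rhs)          # KMS: <A(0)B(t)> = <B(t - i*beta)A(0)>
print("KMS/detailed-balance identity verified")
```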
Because the present analysis requires expressions for the time-dependence of both $\rho_A$ and $\rho_B$, this section re-derives the weak-coupling limit without taking the partial trace. The time-dependence of $\rho$ can be found from second-order perturbation theory, \begin{align} \theta_{AB}(t) &= \rho_{AB}(0) -\frac{i\gamma}{\hbar}\int_0^t dx\; [\hat H_{AB}(x), \rho_{AB}(0)] \notag \\ &- \frac{\gamma^2}{\hbar^2}\int_0^t ds \int_0^s dx \; [\hat H_{AB}(s), [\hat H_{AB}(x), \rho_{AB}(0)]] + O(\tfrac{\gamma^3}{\hbar^3}) ,\label{e:weak} \end{align} where $\rho_{AB}(0) = \rho_A\otimes \rho_B(0)$. This equation uses the following notation for the density matrix and time-dependence in the interaction representation, \begin{align} \theta_{AB}(t) &= U_0^{-t} \rho_{AB}(t) U_0^t \\ \hat H_{AB}(t) &= U_0^{-t} \hat H_{AB} U_0^{t} \\ \intertext{with time-evolution operator,} U_0^t &= e^{-i(\hat H_A + \hat H_B)t/\hbar}. \end{align} The time-evolution can be written more explicitly by decomposing $\hat H_{AB}$ into transitions between joint system/reservoir states ($m$ to $n$) with energy difference $\omega_n - \omega_m$, \begin{align} \hat H_{AB}(t) &= \sum_{\omega} \hat V_\omega e^{i\omega t} \\ \intertext{where} \hat V_\omega &\equiv \sum_{n,m \;:\; \omega_n - \omega_m = \omega} \pure{n} \hat H_{AB} \pure{m} .
\end{align} It is easy to average each term in Eq.~\ref{e:weak} over Poisson-distributed measurement times to find, \begin{align} \avg{\theta} &= \lambda \int_0^\infty dt\; e^{-\lambda t} \theta(t) \\ &= \rho_{AB}(0) -\frac{i\gamma}{\hbar}[\tilde H_{AB}(\lambda), \rho_{AB}(0)] + \frac{\gamma^2}{\hbar^2} L' [\rho_{AB}(0)], \intertext{where,} \tilde H_{AB}(\lambda) &= \sum_\omega \frac{1}{\lambda - i\omega} \hat V_\omega \\ L' \rho &= \sum_{\omega,\omega'} s_{\omega,\omega'} \left( \hat V_{\omega} \rho \hat V_{\omega'}^\dagger - \frac{1}{2}\{\hat V_{\omega'}^\dagger\hat V_{\omega}, \rho\} \right) \notag \\ &\qquad\qquad\qquad + \frac{ia_{\omega,\omega'}}{2} [\hat V_{\omega'}^\dagger \hat V_{\omega}, \rho] \label{e:wdiss} \\ s_{\omega,\omega'} &= \frac{2\lambda - i(\omega - \omega')}{d_{\omega,\omega'}} \\ a_{\omega,\omega'} &= \frac{\omega + \omega'}{d_{\omega,\omega'}} \\ d_{\omega,\omega'} &= (\lambda - i\omega)(\lambda + i\omega') (\lambda - i(\omega-\omega')) . \end{align} Note that the sums run over both positive and negative transition frequencies, $\omega$, and that these quantities have the symmetries, $\hat V_{\omega}^\dagger = \hat V_{-\omega}$, $s_{\omega,\omega'}^* = s_{\omega',\omega}$, $d_{\omega,\omega'}^* = d_{\omega',\omega}$, and $a_{\omega,\omega'}^* = a_{\omega',\omega}$. The canonical Lindblad form can be obtained by diagonalizing the matrix, $[s_{\omega,\omega'}]$. When $\lambda \to 0$, transitions where energy is conserved between the $A$ and $B$ systems ($\omega = 0$) dominate in the sum, resulting in a net prefactor of $(\gamma/\lambda\hbar)^2$. The transition rate is then $\gamma^2/\hbar^2 \lambda$ -- exactly the combination that is kept constant in the van Hove limit. In this limit, tracing over $B$ in Eq.~\ref{e:wdiss} should recover Eq.~III.19 of Ref.~\citenum{hspoh78}.
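The quoted symmetries of $s_{\omega,\omega'}$, $a_{\omega,\omega'}$, and $d_{\omega,\omega'}$ can be spot-checked numerically (an added sketch; the value of $\lambda$ and the sampled frequencies are arbitrary):

```python
import numpy as np

lam = 0.7   # measurement rate (arbitrary positive value)

def d(w, wp):
    """d_{w,w'} = (lam - i w)(lam + i w')(lam - i(w - w'))."""
    return (lam - 1j*w) * (lam + 1j*wp) * (lam - 1j*(w - wp))

def s(w, wp):
    """s_{w,w'} = (2 lam - i(w - w')) / d_{w,w'}."""
    return (2*lam - 1j*(w - wp)) / d(w, wp)

def a(w, wp):
    """a_{w,w'} = (w + w') / d_{w,w'}."""
    return (w + wp) / d(w, wp)

rng = np.random.default_rng(4)
for _ in range(100):
    w, wp = rng.normal(size=2)
    assert np.isclose(np.conj(s(w, wp)), s(wp, w))   # s*_{w,w'} = s_{w',w}
    assert np.isclose(np.conj(a(w, wp)), a(wp, w))   # a*_{w,w'} = a_{w',w}
    assert np.isclose(np.conj(d(w, wp)), d(wp, w))   # d*_{w,w'} = d_{w',w}
print("coefficient symmetries hold")
```

These symmetries are what make the matrix $[s_{\omega,\omega'}]$ Hermitian, so it can be diagonalized to reach the canonical Lindblad form.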
By applying the interaction part of Eq.~\ref{e:wdiss} to the time evolution with rate $\lambda$, the effective master equation in the weak coupling limit becomes, \begin{align} \pd{\rho_{A}}{t} &= -\frac{i}{\hbar}[\hat H_A, \rho_A(t)] + \frac{\gamma^2 \lambda}{\hbar^2} \Tri{B}{L' [\rho_A(t)\otimes \rho_B(0)]} .\label{e:lind} \end{align} For the JCM, there is just one $\hat V_{\Delta_c} = a_A a_B^\dagger$, which gives the same answer as the exact result, Eq.~\ref{e:dx}. \begin{figure} \caption{Decay of the system simulated in Fig.~\ref{f:decay}.\label{f:cmp}} \end{figure} The relaxation process obtained by continuously applying $L'$ can show qualitative differences from the process in Sec.~\ref{s:process}. Without the trace over the environment, $L'$ just gives the approximation to $\theta(t)$ from second-order perturbation theory. This decays faster than when repeated projection is actually used, because the environment loses its memory after each projection.\cite{ejayn57a} These two time-scales can be seen in Fig.~\ref{f:cmp}. Fig.~\ref{f:cmp} (and Fig.~\ref{f:decay}b,d) compares simulation of $L'$ with the exact process~\ref{f:therm} when repeated projection is used in the same way for both. That is, time evolution under the Lindblad equation (\ref{e:lind}) is carried out in intervals, $t$, drawn from the exponential waiting-time distribution with rate $\lambda$. After each interval, the purification operator (Eq.~\ref{e:pure}) is applied to the density matrix. This way, the only difference from the exact process is that the time-propagator has been approximated by its average. It is evident that the initial $\cos^2$ shape and Rabi oscillation structure have been lost. Instead, the $L'$ propagator shows a fast initial loss followed by simple exponential decay toward the steady-state. Nevertheless, the observed decay rate and eventual steady states match very well between the two methods. The total evolved heat shows a discrepancy because the fast initial loss in the $L'$ propagator quickly mixes $\rho_B$.
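To make concrete what "continuously applying" a dissipator means, here is a minimal numpy integrator for a standard amplitude-damping Lindblad equation (a generic textbook dissipator with decay rate $\kappa$, chosen for brevity; it is not the paper's $L'$, which acts on the joint system--reservoir state):

```python
import numpy as np

kappa = 0.2                                      # illustrative decay rate
L = np.array([[0, 1], [0, 0]], dtype=complex)    # lowering operator |g><e|
H = np.diag([0.0, 1.0]).astype(complex)          # qubit Hamiltonian (hbar = 1)

def drho(rho):
    """Right-hand side of the Lindblad equation d rho/dt."""
    comm = -1j * (H @ rho - rho @ H)
    diss = kappa * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

rho = np.diag([0.0, 1.0]).astype(complex)        # start in the excited state
dt, steps = 1e-3, 5000
for _ in range(steps):                           # classic RK4 integration
    k1 = drho(rho); k2 = drho(rho + 0.5*dt*k1)
    k3 = drho(rho + 0.5*dt*k2); k4 = drho(rho + dt*k3)
    rho = rho + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

t = dt * steps
assert abs(np.trace(rho).real - 1.0) < 1e-8              # trace preserved
assert abs(rho[1, 1].real - np.exp(-kappa * t)) < 1e-4   # exponential decay
```

The monotone exponential decay produced here is exactly the qualitative behavior described above for the $L'$ propagator, in contrast to the Rabi-oscillation structure of the exact process.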
Numerical simulations of the Lindblad equation were carried out using QuTiP.\cite{jjoha13} \subsection{ Fast Coupling Limit}\label{s:strong} For the atom-field system, it was shown that the transition rate approached the same value in both the weak coupling and infinitely fast measurement case. To find the general result for the Poisson measurement process as $\lambda \to \infty$, note that the Taylor series expansion of the time average turns into an expansion in powers of $\lambda^{-1}$, \begin{equation} \lambda \int_0^\infty dt\; e^{-\lambda t} \theta(t) = \sum_{k=0}^\infty \lambda^{-k} \theta^{(k)}(0) . \end{equation} It is elementary to calculate successive derivatives, $\theta^{(k)}$, by plugging into \begin{equation} \pd{\theta(t)}{t} = -\frac{i\gamma}{\hbar} [\hat H_{AB}(t), \theta(t)] . \end{equation} The average measured $\theta$ after a short interaction time on the order of $\lambda^{-1}$ is therefore, \begin{align} \avg{\theta} &= \rho_{AB}(0) - \frac{i\gamma}{\lambda \hbar} [ \hat H_{AB}, \rho_{AB}(0) ] \notag \\ &+ \frac{\gamma}{\lambda^2\hbar^2} \left[[\hat H_A + \hat H_B, \hat H_{AB}], \rho_{AB}(0)\right] \notag \\ &+ \frac{\gamma^2}{\lambda^2\hbar^2} \left(2 \hat H_{AB} \rho_{AB}(0) \hat H_{AB} - \{\hat H_{AB}^2, \rho_{AB}(0)\} \right) \notag \\ & + O\left(\frac{\gamma^3}{\lambda^3\hbar^3}\right) .\label{e:strong} \end{align} We can immediately see that this limit is valid when the measurement rate, $\lambda$, is large compared with $\gamma/\hbar$. The $O(\gamma)$ terms are in the form of a time-propagation over the average measurement interval, $\lambda^{-1}$. They have only off-diagonal elements, and do not contribute to $\tavg{\hat H_A}$ or $\tavg{\hat H_B}$. The third term has the familiar Lindblad form, which immediately proves a number of important consequences. First, all three correction terms are trace-free, so normalization is preserved. Next, the Lindblad term is completely positive, and it introduces dissipation towards a stationary state for $\rho$.
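The structure of Eq.~\ref{e:strong} rests on the identity $2\hat H_{AB}\,\rho\,\hat H_{AB} - \{\hat H_{AB}^2,\rho\} = -[\hat H_{AB},[\hat H_{AB},\rho]]$ and on each correction term being traceless. Both are easy to confirm with random matrices (an added check, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4
# Random Hermitian "interaction" and random density matrix
G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (G + G.conj().T) / 2
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = M @ M.conj().T
rho /= np.trace(rho).real

comm = lambda A, B: A @ B - B @ A
anti = lambda A, B: A @ B + B @ A

lhs = -comm(H, comm(H, rho))
rhs = 2 * H @ rho @ H - anti(H @ H, rho)
assert np.allclose(lhs, rhs)                 # double-commutator identity
assert abs(np.trace(rhs)) < 1e-12            # second-order term is trace-free
assert abs(np.trace(comm(H, rho))) < 1e-12   # so is the first-order term
print("identities verified")
```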
For a system under infinitely fast repeated measurement, the $O(\gamma)$ terms do not contribute to Tr$_B$, and the density matrix evolves according to, \begin{align} \dot \rho_A(t) = &-\frac{i}{\hbar}[\hat H_A, \rho_A(t)] \notag \\ &- \frac{\gamma^2}{\lambda \hbar^2} \Tri{B}{ [\hat H_{AB},[\hat H_{AB}, \rho_{A}\otimes \rho_B(0)]] } . \end{align} A more explicit representation is possible by defining the sub-matrices, \begin{equation} [\hat V^{nm}]_{ij} = [\hat H_{AB}]_{in,jm}. \end{equation} These have the symmetry, $\hat V^{nm} = \hat V^{\dagger\, mn}$, so \begin{align} -\Big[ &[\hat H_{AB},[\hat H_{AB}, \rho_{A}\otimes \rho_B(0)]] \Big]_{m,m} \notag \\ &= \sum_n p^B_n 2 \hat V^{mn} \rho_A \hat V^{\dagger\, mn} - p^B_m \{\hat V^{mn} \hat V^{\dagger\, mn}, \rho_A \} \end{align} For the JCM, this gives, \begin{equation} \lambda\avg{x} = \frac{2\gamma^2}{\hbar^2\lambda}(\sigma_g\avg{n} - \sigma_e\avg{n+1}) . \end{equation} The stationary state of this system will usually not be in the canonical, Boltzmann-Gibbs form. In fact, the prefactor does not depend on the cavity-field energy mismatch, $\Delta_c$, so it gives atomic transitions regardless of the wavelength of the light. This phenomenon is an explicit manifestation of the energy-time uncertainty principle. In the long-time limit of Sec.~\ref{s:weak}, energy-preserving transitions dominated over all possibilities. In the short-time limit of this section, all the transitions contribute equally, and the energy difference caused by a transition could be infinitely large. In-between, energy conservation (and convergence to the canonical distribution) depends directly on the smallness of the measurement rate, $\lambda$. \section{ Minimum Achievable Temperature}\label{s:mint} Results from simulating the time-evolution of the open quantum system using Eq.~\ref{e:wdiss} reveal that even as the reservoir temperature approaches zero, the probability of the first excited state does not vanish. 
In fact, the results very nearly resemble a Gibbs distribution at elevated temperatures. As the reservoir goes to absolute zero, the effective system temperature levels off to a constant, minimum value. This section gives both intuitive and rigorous arguments showing that this is a general phenomenon originating from work added during the measurement process. First, observe that the total Hamiltonian, $\hat H$, is preserved during coupled time-evolution. When allowed by the transitions in $\hat H_{AB}$ (i.e. when $[\hat H, \hat H_{AB}] \ne 0$), a portion of that total energy will oscillate between $\hat H_A + \hat H_B$ and $\hat H_{AB}$. Consider, for example, a dipole-dipole interaction, $\hat H = \hat x_A^2 + \hat p_A^2 + \hat x_B^2 + \hat p_B^2 + \gamma \hat x_A \hat x_B$. At equilibrium, the individual systems have $\tavg{\hat x} = 0$, but the coupled system polarizes so that $\tavg {\hat H_{AB}} < 0$. Intuitively, the joint system can be pictured as relaxing to a thermal equilibrium at an elevated temperature. The initial density matrix at each restart, $\rho_A(\beta') \otimes \rho_B(\beta)$, would then look like an instantaneous fluctuation of \begin{equation} \rho_{AB}(\beta') = e^{-\beta' \hat H} / Z_{AB}(\beta') \label{e:betap} \end{equation} in which $\tavg{\hat H_{AB}} = 0$ is too high and $\tavg{\hat H_B}$ is too low. At steady state, $\tavg{\hat H_A}$ must be the same at the beginning and end of every measurement cycle. This allows the equilibrium argument above to determine $\beta'$ by self-consistency, \begin{equation} \avg{\hat H_B(t) - \hat H_B(\beta)} = -\gamma \avg{\hat H_{AB}(t)} . \end{equation} If equilibrium at $\beta' = 1/k_B T'$ is reached by the average measurement time, then expanding $\tavg{\hat H_B(\beta') - \hat H_B(\beta)}$ yields, \begin{equation} \Delta T \simeq \frac{-\gamma\avg{\hat H_{AB}(t)}}{C_{V,B}} , \end{equation} where $C_{V,B}$ is the heat capacity of the reservoir system.
It is well-known that quantum mechanical degrees of freedom freeze out at temperatures that are fractions of their first excitation energy ($\Delta E_1$). Since the heat capacity goes to zero when $\beta^{-1} < \Delta E_1$, while the interaction energy should remain nonzero, this intuitive argument suggests that the temperature of the system cannot go much below $\Delta E_1/k_B$. To be more quantitative, $\tavg{\hat H_{AB}(t)}$ can be estimated in the weak coupling limit from the second-order perturbation theory of Sec.~\ref{s:weak}. This comparison considers the case $\Delta_c = 0$, since the stationary state where $\Delta_c \ne 0$ is known to be non-canonical. Also, the JCM with rotating wave approximation is too idealistic, since when $\Delta_c = 0$ no off-resonance interactions can occur -- so $\hat H_{AB}$ commutes with $\hat H$ and the minimum temperature argument does not apply. In other words, in the rotating wave approximation, the number of absorption events, $x(t)$, always increases the energy of the atom and decreases the energy of the cavity by the same amount. However, if the physical interaction Hamiltonian, $\hat H_{AB} = (a_A + a_A^\dagger)(a_B + a_B^\dagger)$, is used, then the weak coupling theory should also include transitions between $0,g$ and $1,e$. The average number of simultaneous excitations must be tracked separately, since it increases both the energy of the atom and cavity. Using Eq.~\ref{e:wdiss} with $\omega^A = \omega^B = \omega$, this average is \begin{equation} \avg{d(t)} = \frac{2\gamma^2/\hbar^2}{\lambda^2 + (2 \omega)^2} \left( \sigma_g \avg{n+1} - \sigma_e\avg{n} \right) . \end{equation} In the low-temperature limit, only the probabilities of the four lowest-lying states, labeled $p_{0/1}\sigma_{g/e}$, are relevant.
The general result, whenever $\hat H_{AB}$ allows for both $0,e \leftrightarrow 1,g$ and $0,g \leftrightarrow 1,e$ transitions with equal weight and respective energy differences of zero and $2\hbar\omega$, is, \begin{equation} \pd{\tavg{\hat H_A}}{t} = \frac{2 \tfrac{\omega}{\lambda} \gamma^2 / \hbar}{(\tfrac{\lambda}{2\omega})^2 + 1} \left( (\tfrac{\lambda}{2\omega})^2 (p_0 - p_1) + \sigma_e p_0 - \sigma_g p_1 \right) . \end{equation} This can be solved at steady state, $\partial\tavg{\hat H_A}/\partial t = 0$, to find, \begin{align} \frac{p_1}{p_0} &= \frac{(\tfrac{\lambda}{2\omega})^2 +\sigma_e}{(\tfrac{\lambda}{2\omega})^2 + \sigma_g} . \\ \intertext{In the low-temperature limit,} \lim_{\sigma_g \to 1} \frac{p_1}{p_0} &= \frac{(\tfrac{\lambda}{2\omega})^2}{(\tfrac{\lambda}{2\omega})^2 + 1} .\label{e:minT} \end{align} \begin{figure} \caption{Steady-state inverse temperature vs. reservoir $\beta$. The arrows plot the limiting value of $-\omega^{-1}\ln(p_1/p_0)$ from Eq.~\ref{e:minT}.\label{f:steady}} \end{figure} This argument brings the energy-time uncertainty principle into sharp focus. If the measurement rate is on the order of the transition frequency, $\omega$, then $p_1/p_0$ can be of order 1, making absolute zero unreachable regardless of the coupling strength, $\gamma$, or the reservoir temperature determining $\sigma_e/\sigma_g$. On the other hand, as the relative measurement rate, $\lambda/\omega$, approaches zero, the thermodynamic equilibrium condition, $\sigma_e p_0 = \sigma_g p_1$, dominates. In the limit where measurements are performed very slowly, transitions that do not conserve the energy of the isolated systems are effectively eliminated. Figure~\ref{f:steady} illustrates these conclusions. For high reservoir temperatures and low measurement rates, the system's steady-state probabilities follow the canonical distribution with the same temperature as the reservoir.
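A quick numerical reading of the steady-state ratio above (an added sketch with $\hbar = k_B = 1$; the function name is ours, not from the paper):

```python
import numpy as np

def excitation_ratio(lam, omega, sigma_e):
    """Steady-state p1/p0 from the two lowest levels (expression in the text)."""
    r = (lam / (2.0 * omega)) ** 2
    sigma_g = 1.0 - sigma_e
    return (r + sigma_e) / (r + sigma_g)

omega = 2 * np.pi
# Reservoir at absolute zero (sigma_e = 0): p1/p0 stays finite for finite lambda
for lam in (0.1 * omega, 0.01 * omega):
    ratio = excitation_ratio(lam, omega, sigma_e=0.0)
    beta_eff = -np.log(ratio) / omega   # effective inverse temperature
    print(lam, ratio, beta_eff)

# Slow-measurement limit recovers detailed balance, p1/p0 -> sigma_e/sigma_g
sigma_e = 0.2
assert abs(excitation_ratio(1e-6, omega, sigma_e) - sigma_e / (1 - sigma_e)) < 1e-6
```

Even with a zero-temperature reservoir, the excited-state population vanishes only as $(\lambda/2\omega)^2 \to 0$, which is the minimum-temperature effect plotted in Fig.~\ref{f:steady}.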
When the reservoir temperature is lowered below a limiting value, the system is unable to respond -- effectively reaching a minimum temperature determined by Eq.~\ref{e:minT}. Effects from the minimum temperature can be minimized by lowering the measurement rate. \section{ Conclusions} A measurement process is needed in order to define heat and work in a quantum setting. Continuously measuring the energy of an interacting quantum system leads either to a random telegraph process or else to the quantum Zeno paradox, while waiting forever before measuring the energy leads to the EPR paradox. The resolution by intermittent measurement leads to the conclusion that quantum systems under measurement do not always reach canonical (Boltzmann-Gibbs) steady-states. Instead, the steady-state of a quantum system depends both on its coupling to an external environment and on the rate of measurement. The presence of a measurement rate in the theory indicates the importance of the outside observer -- a familiar concept in quantum information. Most experiments on quantum information have been analyzed in the context of a Lindblad master equation, whose standard interpretation relies on associating a measurement rate with every dissipative term. This work has shown that every dissipative term can be a source/sink for both heat and work. This work has re-derived the master equation in the limit of weak coupling for arbitrary (Poisson-distributed) measurement rates. The result agrees with standard line-shape theory, and shows that measurement rates on the order of the first excitation energy can cause observable deviations from the canonical distribution. The physical consequences of the measurement rate will become increasingly important as quantum experiments push for greater control.\cite{bdanj16} However, they also present a new probe of the measurement rule and energy-time uncertainty principle for quantum mechanics.
For the micromaser, the rate {\em seems} to be the number of atoms sent through the cavity per unit time -- since every atom that leaves the cavity is measured via its interaction with the outside environment. It is not, however, because even there the atoms can be left isolated and held in a superposition state indefinitely, leading to entanglement between successive particles.\cite{sharo13} Most generally, the number of measurements per unit time is determined by the rate at which information can leak into the environment. If information leaks quickly, the amount of energy exchanged can be large and the minimum effective temperature of the system will be raised. If information leaks slowly, the work done by measurement will be nearly zero, and the quantum system will more closely approach the canonical distribution. By the connection to the width of spectroscopic lines, this rate is closely related to the excited-state lifetime. This model presents a novel, experimentally motivated and thermodynamically consistent treatment of heat and work exchange in the quantum setting. By doing so, it also raises new questions about the thermodynamics of measurement. First, the explicit connection to the free energy and entropy of reservoir states provides an additional source of potential work that may be extracted from coupling. Connecting multiple systems together or adding partial projection using this framework will provide more realistic conditions for reaching this maximum efficiency. Second, we have shown special conditions that cause the present definitions to reduce to well-known expressions in the literature. Third, although the initial process was defined in terms of wavefunctions, the average heat and work are defined in terms of the density matrices. The definitions (Eqs.~\ref{e:Q} and~\ref{e:W}) still apply when the density matrix consists of a single state, but the repeated measurement projecting to a single wavefunction has a subtly different interpretation.
The difference (not investigated here) is related to Landauer's principle,\cite{swook11,elutz15} since measuring the exact state from the distribution, $\rho_A \otimes \rho_B$, carries a separate `recording' cost. Stochastic Schr\"{o}dinger equation and power-measurement-based methods assume that all energy exchange with the reservoir is heat. There, work is supplied by the time-dependence of the Hamiltonian. As we have shown here, heat is most closely identified with the von Neumann entropy of the $A$ system. The energy exchange with the reservoir is only indirectly connected to the heat exchange through Eq.~\ref{e:dQB}. The fact that this becomes exact in the van Hove limit explains the role of the steady-state for $A$ and observations by many authors that the work of measurement is the source of non-applicability of fluctuation theorems.\cite{cjarz04,kfuno13,cjarz15,bvenk15,sdeff16} When $\Delta H_A + \Delta H_B = 0$, the measurement back-action disappears, and the fluctuation theorem for $\Delta H_A$ is given by the formalism of Ref.~\citenum{gmanz15}. It should also be possible to derive a forward fluctuation theorem (not restricted to time-reversal) for predicting force/flux relationships along the lines of Ref.~\citenum{droge12}. There have been many other investigations on the thermodynamics of driven, open quantum systems. The restriction to time-independent Hamiltonians in this work differs from most others, which assume a pre-specified, time-dependent $\hat H_A(t)$. To make a comparison, either the cycle should be modified as described in Sec.~\ref{s:issues}, or work at each time-step in such models must be re-defined to count only energy that is stored in a time-independent Hamiltonian for the central system, $H_A$. The process studied here retains a clear connection to the experimental measurement process, and is flexible enough to compute heat and work for continuous feedback control.
In view of the near-identity between Eq.~\ref{e:minT} and Eq.~10 of Ref.~\citenum{yutsu10}, it is very likely that recent experimental deviations from the fluctuation theorem are due to the phenomenon of minimum temperature, as well as to differences between traditional, system-centric, and the present, observational, definitions of heat and work. \begin{acknowledgments} I thank Brian Space, Sebastian Deffner, and Bart\l{}omiej Gardas for helpful discussions. This work was supported by the University of South Florida Research Foundation and NSF MRI CHE-1531590. \end{acknowledgments} \appendix \section{ Explicit Solution for the JCM} The solution to the Jaynes-Cummings model under the rotating wave approximation is well-known.\cite{hwalt06,sharo06,jhoro12} I summarize it in the notation of this work for completeness. For states with $n > 0$ total excitations, the time-evolution operator decomposes into a $2\times 2$ block-diagonal,\cite{ejayn58} \begin{align} \begin{bmatrix} \avg{n-1,e | \psi(t)} \\ \avg{n,g | \psi(t)} \end{bmatrix} &= e^{-i\omega^A t(n - \tfrac{1}{2})} \\ &\begin{bmatrix} a_n(t) & b_n(t) \\ b_n(t) & a_n(t)^* \end{bmatrix} \begin{bmatrix} \avg{n-1,e | \psi(0)} \\ \avg{n,g | \psi(0)} \end{bmatrix} ,\notag \end{align} with the definitions,\cite{sharo06} \begin{align} \Omega_n &= \frac{2\gamma}{\hbar} \sqrt{n} \\ \Delta_c &= \omega_B - \omega_A \\ \Omega_n'^2 &= \Omega_n^2 + \Delta_c^2 \\ a_n(t) &= \cos(\Omega_n' t/2) - \frac{i \Delta_c}{\Omega_n'}\sin(\Omega_n' t/2) \\ b_n(t) &= -\frac{i \Omega_n}{\Omega_n'} \sin(\Omega_n' t/2) . \end{align} Starting at $t=0$ from $\pure{n-1} \otimes \pure{e}$ gives, \begin{equation} \rho_{AB}(t) = \begin{bmatrix} \ket{n-1,e} \\ \ket{n,g} \end{bmatrix}^T \begin{bmatrix} |a_n(t)|^2 & -a_n(t) b_n(t) \\ a^*_n(t) b_n(t) & |b_n(t)|^2 \end{bmatrix} \begin{bmatrix} \bra{n-1,e} \\ \bra{n,g} \end{bmatrix}. 
\end{equation} Starting at $t=0$ from $\pure{n} \otimes \pure{g}$ gives, \begin{equation} \rho_{AB}(t) = \begin{bmatrix} \ket{n-1,e} \\ \ket{n,g} \end{bmatrix}^T \begin{bmatrix} |b_n(t)|^2 & a_n(t) b_n(t) \\ -a^*_n(t) b_n(t) & |a_n(t)|^2 \end{bmatrix} \begin{bmatrix} \bra{n-1,e} \\ \bra{n,g} \end{bmatrix}. \end{equation} Because of the simplicity of this system, measuring the atom also projects the cavity into a Fock state. This simplifies the analysis, since we only need to track the pure probabilities, $p_n$. Assuming the incoming atomic states are chosen to be pure $e$ or $g$ at random (with probabilities $\sigma_e$ or $\sigma_g$, resp.), \begin{align} p_n(t) = p_n(0) &+ |b_{n+1}(t)|^2(\sigma_g p_{n+1} - \sigma_e p_n) \notag \\ & - |b_n(t)|^2 (\sigma_g p_n - \sigma_e p_{n-1}). \label{e:pt} \end{align} Eq.~\ref{e:pt} uses the fact that $b_0 = 0$. This master equation has a non-trivial steady-state at $p_n = p_0 (\frac{\sigma_e}{\sigma_g})^n$. The existence of this steady-state, and the fact that the cavity does not have a canonical distribution, even when the atom does ($\sigma_e/\sigma_g = e^{-\beta \hbar \omega^B}$) were noted by Jaynes.\cite{ejayn58} Experimentally, relaxation to the canonical distribution occurs because of imperfect isolation of the cavity, which allows thermalization interactions with external resonant photons and results in a near-canonical (but not perfect) steady state.\cite{hwalt06} Such interactions could easily be added to the present model, but for clarity this analysis focuses on interaction with the single reservoir system, $B$. \end{document}
\begin{document} \title{General Description of Mixed State Geometric Phase} \author{Mingjun Shi} \email{[email protected]} \affiliation{Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui, 230026, China} \author{Jiangfeng Du} \email{[email protected]} \affiliation{Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui, 230026, China} \affiliation{Hefei National Laboratory for Physical Sciences at Microscale, University of Science and Technology of China, Hefei, Anhui, 230026, China} \affiliation{Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542} \begin{abstract} We consider an arbitrary mixed state undergoing unitary evolution and provide a comprehensive description of the corresponding geometric phase, in which two currently prevailing points of view can be unified. Introducing an ancillary system and considering the purification of the given mixed state, we find that the different results for the mixed state geometric phase correspond to different choices of the representation of the Hilbert space of the ancilla. Moreover, we demonstrate that in order to obtain Uhlmann's geometric phase it is not necessary to resort to unitary evolution of the ancilla. \end{abstract} \pacs{03.65} \maketitle The notion of geometric phase was first demonstrated in Pancharatnam's study of the interference of light in different states of polarization\cite{Pancharatnam}. Afterwards, Berry discovered a non-trivial phase factor in adiabatic, cyclic and parametric variations of a quantal system\cite{Berry}. This discovery has prompted much attention to, and activity in, the search for the geometric structure involved in the evolution of quantum systems. Aharonov and Anandan gave the formalism for the geometric phase in the nonadiabatic case\cite{Aharonov and Anandan}. Samuel and Bhandari made the extension to nonadiabatic and non-cyclic evolution\cite{Samuel and Bhandari}.
The further generalized case of nonadiabatic, non-cyclic and non-unitary evolution has also been worked out in \cite{Mukunda,Pati}. It should be emphasized that the above-mentioned considerations refer to the \textit{pure} quantal state. The definition of geometric phase in the mixed state scenario is still an open question. Uhlmann\cite{Uhlmann} first considered this issue within the mathematical context of purification. In contrast with the abstract formalism of Uhlmann's definition, Sj\"{o}qvist \textit{et al} presented a more concrete formalism of geometric phase for mixed states in the experimental context of quantum interferometry\cite{Sjoqvist}. These two definitions, Uhlmann's and Sj\"{o}qvist's, can yield different geometric phases for mixed states\cite{Slater}. In other words, they agree with each other only in the case of pure states. From a theoretical point of view it is unsatisfactory to have two different definitions and results for the same thing. On the other hand, geometric phase has important significance in quantum computation. Its potential application to fault-tolerant quantum computation has been the subject of recent investigation\cite{Jones,Zanardi}. There is no conflict in treating the pure state geometric phase, but in actual experimental situations the issue of mixed states is unavoidable, and inconsistent explanations of the latter would be troublesome for experimentalists. Recently in \cite{Ericsson}, Ericsson \textit{et al} compared Uhlmann's and Sj\"{o}qvist's geometric phases for mixed states undergoing unitary evolution and concluded that the former depends not only on the geometry of the path of the system alone but also on a constrained bilocal unitary evolution of the purified entangled state, whereas the latter is essentially a property of the system alone. Nevertheless, a consistent definition and interpretation of the mixed state geometric phase is so far lacking.
In this paper, we propose a new definition of parallel transport for mixed states and give a general description of the mixed state geometric phase in unitary evolution, from which Uhlmann's phase and Sj\"{o}qvist's phase can both be deduced. In other words, the two definitions of geometric phase can be embodied in one statement. In our approach we deal with the mixed state not in terms of the spectral decomposition (i.e., orthogonal decomposition), but by means of non-orthogonal decompositions. Of course, for a given mixed state there are infinitely many non-orthogonal decompositions. We will seek the specific one among them and let each pure component undergo parallel transport. The geometric phase associated with each pure component is simply the total phase difference during an interval of unitary evolution. We then regard the whole ensemble as also undergoing parallel transport, and the corresponding geometric phase factor is the sum of the weighted geometric phase factors of the pure components with the related visibilities. To this end, we adopt the usual method of dealing with a mixed state of a quantum system, that is, introducing an ancilla and considering the purification of the mixed state in a larger Hilbert space. As a consequence we find that two different choices of the basis of the ancilla Hilbert space result in Sj\"{o}qvist's and Uhlmann's phases respectively. Our approach does not require the unitary evolution of the ancilla. Consider a mixed state of some quantum system with $n$ energy levels. In the $n$-dimensional Hilbert space $\mathcal{H}^{S}$ the initial state can be expressed as \begin{equation} \rho(0)= {\displaystyle\sum\nolimits_{j=1}^{n}} \lambda_{j}\left\vert e_{j}\right\rangle \left\langle e_{j}\right\vert , \end{equation} where the $\lambda_{j}$'s and $\left\vert e_{j}\right\rangle $'s are respectively the eigenvalues and eigenstates of $\rho(0)$. We choose $\left\{ \left\vert e_{j}\right\rangle \right\} _{j=1,\cdots,n}$ as the basis of $\mathcal{H}^{S}$.
Some unitary operator $U(t)$ determines the evolution of the system, that is, \begin{equation} \rho(t)=U(t)\rho(0)U^{\dag}(t). \end{equation} $U(t)$ can be represented as $U(t)=\exp\left( -iHt\right) $, where $H$ is the Hamiltonian of the system, assumed for simplicity to be independent of time (here $\hbar=1$). Introduce an ancillary system. The Hilbert space describing the ancilla, denoted $\mathcal{H}^{A}$, has the same dimension as $\mathcal{H}^{S}$. The basis of $\mathcal{H}^{A}$ is labeled by $\left\{ \left\vert f_{j}\right\rangle \right\} _{j=1,\cdots,n}$. Then the purification of $\rho(0)$ is \begin{equation} \left\vert \Psi(0)\right\rangle = {\displaystyle\sum\nolimits_{j=1}^{n}} c_{j}\left\vert e_{j}\right\rangle \otimes\left\vert f_{j}\right\rangle , \end{equation} where the $c_{j}$'s can be taken to be real and positive numbers; in fact $c_{j}=\sqrt{\lambda_{j}}$. Under the bilocal unitary transformation $U(t)\otimes V(t)$, the time evolution of $\left\vert \Psi(0)\right\rangle $ is \begin{equation} \left\vert \Psi(t)\right\rangle = {\displaystyle\sum\nolimits_{j=1}^{n}} c_{j}U(t)\left\vert e_{j}\right\rangle \otimes V(t)\left\vert f_{j} \right\rangle .\label{Psi(t)} \end{equation} Obviously, after tracing out the ancilla we always have $\rho(t)=Tr_{A}\left\vert \Psi(t)\right\rangle \left\langle \Psi(t)\right\vert $ for arbitrary $V(t)$. We also express $V(t)$ as $V(t)=\exp\left( -iKt\right) $, where $K$ is the time-independent Hamiltonian of the ancilla. Eq.~(\ref{Psi(t)}) can be rewritten as \begin{equation} \left\vert \Psi(t)\right\rangle = {\displaystyle\sum\nolimits_{j=1}^{n}} U(t)CV^{T}(t)\left\vert e_{j}\right\rangle \otimes\left\vert f_{j} \right\rangle ,\label{Psi(t)-1} \end{equation} where $V^{T}(t)$ is the transpose of $V(t)$ and $C$ is a diagonal matrix, i.e., $C=diag[c_{1},c_{2},\cdots,c_{n}]$.
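As a quick sanity check of the construction so far, the following sketch (with illustrative parameters of our own, not the paper's) builds $|\Psi(0)\rangle$ for a two-level example and verifies that $\mathrm{Tr}_A|\Psi(t)\rangle\langle\Psi(t)|=U(t)\rho(0)U^{\dag}(t)$ for an arbitrary ancilla unitary $V$, and that the amplitude matrix evolves as $C(t)=U(t)CV^{T}(t)$.

```python
import numpy as np

# Two-level illustration (parameters are ours, not the paper's).
n = 2
lam = np.array([0.7, 0.3])                  # eigenvalues of rho(0)
C = np.diag(np.sqrt(lam))
rho0 = np.diag(lam)

# |Psi(0)> = sum_j c_j |e_j> (x) |f_j>; its amplitude matrix is just C.
Psi0 = C.flatten()                          # row-major index (system, ancilla)

def rand_unitary(rng, n):
    """Random unitary from a QR decomposition of a complex Gaussian matrix."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.linalg.qr(M)[0]

rng = np.random.default_rng(1)
U, V = rand_unitary(rng, n), rand_unitary(rng, n)

Psi_t = np.kron(U, V) @ Psi0                # bilocal evolution U (x) V
M_t = Psi_t.reshape(n, n)                   # amplitude matrix C(t)
rho_t = M_t @ M_t.conj().T                  # Tr_A |Psi(t)><Psi(t)|
```

The row-major flattening makes $\mathrm{vec}(UMV^{T})=(U\otimes V)\,\mathrm{vec}(M)$, which is exactly the statement $C(t)=UCV^{T}$ used in the text.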
Define \begin{equation} |\widetilde{\psi}_{j}(t)\rangle=U(t)CV^{T}(t)\left\vert e_{j}\right\rangle ,\label{psi-tilde(t)} \end{equation} where the symbol ``$\sim$'' indicates that the $|\widetilde{\psi}_{j}(t)\rangle$'s are not necessarily orthogonal to one another and are not normalized. We have \begin{equation} \left\vert \Psi(t)\right\rangle = {\displaystyle\sum\nolimits_{j=1}^{n}} |\widetilde{\psi}_{j}(t)\rangle\otimes\left\vert f_{j}\right\rangle . \end{equation} As for the system, $\rho(t)$ is expressed as \begin{equation} \rho(t)= {\displaystyle\sum\nolimits_{j=1}^{n}} |\widetilde{\psi}_{j}(t)\rangle\langle\widetilde{\psi}_{j}(t)|.\label{rho(t)} \end{equation} Generally this is a non-orthogonal decomposition. Denote the normalized form of $|\widetilde{\psi}_{j}(t)\rangle$ by $|\psi_{j}(t)\rangle=\frac{|\widetilde {\psi}_{j}(t)\rangle}{\langle\widetilde{\psi}_{j}(t)|\widetilde{\psi} _{j}(t)\rangle^{1/2}}$. Then $|\psi_{j}(t)\rangle$, as a pure component of $\rho(t)$, appears in the ensemble with probability $p_{j}(t)=\langle \widetilde{\psi}_{j}(t)|\widetilde{\psi}_{j}(t)\rangle$. Now we ask the following question: \textit{Under what condition does each }$|\psi_{j}(t)\rangle $\textit{\ undergo parallel transport, i.e., }$\langle\psi_{j}(t)|\frac {d}{dt}|\psi_{j}(t)\rangle=0$\textit{ for }$j=1,2,\cdots,n$\textit{?} Note that the form of $V(t)$ is unknown, and usually $p_{j}(t)$ is not invariant, so we are first of all faced with the difficulty that the pure components of $\rho(t)$ come forth with time-dependent weights (or probabilities). To avoid this difficulty we shall ``see'' the system state from the ancilla in another viewpoint.
Formally speaking, we choose another basis of $\mathcal{H}^{A}$, say $\left\{ \left\vert g_{j}\right\rangle \right\} $, and suppose the relationship between $\left\{ \left\vert f_{j}\right\rangle \right\} $ and $\left\{ \left\vert g_{j}\right\rangle \right\} $ is a time-independent unitary transformation $Z$, to be determined later, that is, $\left\vert f_{j}\right\rangle =Z\left\vert g_{j}\right\rangle $. Rewrite (\ref{Psi(t)-1}): \begin{align} \left\vert \Psi(t)\right\rangle & = {\displaystyle\sum\nolimits_{j=1}^{n}} U(t)CV^{T}(t)\left\vert e_{j}\right\rangle \otimes Z\left\vert g_{j} \right\rangle \nonumber\\ & = {\displaystyle\sum\nolimits_{j=1}^{n}} U(t)CV^{T}(t)Z^{T}\left\vert e_{j}\right\rangle \otimes\left\vert g_{j}\right\rangle \nonumber\\ & = {\displaystyle\sum\nolimits_{j=1}^{n}} |\widetilde{\varphi}_{j}(t)\rangle\otimes\left\vert g_{j}\right\rangle ,\label{Psi(t)-2} \end{align} where $|\widetilde{\varphi}_{j}(t)\rangle\equiv U(t)CV^{T}(t)Z^{T}\left\vert e_{j}\right\rangle $. Then $\rho(t)= {\displaystyle\sum\nolimits_{j=1}^{n}} |\widetilde{\varphi}_{j}(t)\rangle\langle\widetilde{\varphi}_{j}(t)|.$ Now we hope that every $\langle\widetilde{\varphi}_{j}(t)|\widetilde{\varphi} _{j}(t)\rangle$ is invariant. This is equivalent to saying that all diagonal elements of the matrix $Z^{\ast}V^{\ast}C^{2}V^{T}Z^{T}$ are time-independent. Note that $C$ is time-independent. If $Z^{\ast}$ diagonalizes $V^{\ast}$, i.e., $Z^{\ast}V^{\ast}Z^{T}$ is diagonal (or equivalently $ZKZ^{\dag}$ is diagonal), then our hope is realized. Now suppose that we have the time-independent unitary $Z$ such that \begin{align*} \overline{K} & \equiv Z^{\ast}K^{T}Z^{T}=ZKZ^{\dag}\\ & =diag[\kappa_{1},\kappa_{2},\cdots,\kappa_{n}], \end{align*} where the $\kappa_{j}$'s are real numbers, in fact the eigenvalues of $K$. Correspondingly $V(t)$ is transformed to \begin{align*} \overline{V}(t) & =ZV(t)Z^{\dag}\\ & =diag[e^{-i\kappa_{1}t},e^{-i\kappa_{2}t},\cdots,e^{-i\kappa_{n}t}].
\end{align*} Then we can say that, given an arbitrary non-diagonal $V(t)$ (or Hamiltonian $K$) in the basis $\left\{ \left\vert f_{j}\right\rangle \right\} $, we can choose as the new basis $\left\{ \left\vert g_{j}\right\rangle \right\} $ of $\mathcal{H}^{A}$ the eigenstates of $K$, so that in the view of $\left\{ \left\vert g_{j}\right\rangle \right\} $ the weight of each pure component of the system's mixed ensemble is invariant. This invariant weight is given by \begin{equation} q_{j}=\langle\widetilde{\varphi}_{j}(t)|\widetilde{\varphi}_{j}(t)\rangle =\langle e_{j}|Z^{\ast}C^{2}Z^{T}|e_{j}\rangle.\label{qj} \end{equation} Let $|\varphi_{j}(t)\rangle=q_{j}^{-1/2}|\widetilde{\varphi}_{j}(t)\rangle$. We have $\rho(t)= {\displaystyle\sum\nolimits_{j=1}^{n}} q_{j}|\varphi_{j}(t)\rangle\langle\varphi_{j}(t)|$. This is still a non-orthogonal decomposition of $\rho(t)$. Next, what is the exact form of the Hamiltonian $K$ that guarantees $\langle\varphi_{j}(t)|\frac{d}{dt}|\varphi_{j}(t)\rangle=\langle \widetilde{\varphi}_{j}(t)|\frac{d}{dt}|\widetilde{\varphi}_{j}(t)\rangle=0$ for each $j$? That is, we further hope \begin{widetext} \begin{gather} \left\langle e_{j}\right\vert [Z^{\ast}V^{\ast}(t)CU^{\dag}(t)][\overset {\cdot}{U}(t)CV^{T}(t)Z^{T}+U(t)C\overset{\cdot}{V^{T}}(t)Z^{T}]\left\vert e_{j}\right\rangle =0,\\ j=1,2,\cdots,n,\nonumber \end{gather} \end{widetext} or equivalently \begin{gather} \left\langle e_{j}\right\vert Z^{\ast}V^{\ast}(t)[CHC\nonumber\\ +C^{2}K^{T}]V^{T}(t)Z^{T}\left\vert e_{j}\right\rangle =0,\label{Condition for K} \end{gather} for $j=1,2,\cdots,n$. Recall that $Z^{\ast}$ is time-independent and diagonalizes $K^{T}$ and $V^{\ast}$. We suppose $K^{T}$ is determined by the following equation.
\begin{equation} C^{2}K^{T}+K^{T}C^{2}=-2CHC.\label{Condition for K -1} \end{equation} Then, writing $Z^{\ast}V^{\ast}(t)=\overline{V}^{\ast}(t)Z^{\ast}$ and $V^{T}(t)Z^{T}=Z^{T}\overline{V}^{T}(t)$ and noting that the diagonal factors $e^{\pm i\kappa_{j}t}$ cancel, (\ref{Condition for K}) reduces to \begin{align*} & \left\langle e_{j}\right\vert Z^{\ast}\left( CHC+C^{2}K^{T}\right) Z^{T}\left\vert e_{j}\right\rangle \\ & =\frac{1}{2}\left\langle e_{j}\right\vert Z^{\ast}[\left( CHC+C^{2} K^{T}\right) \\ & +\left( -CHC-K^{T}C^{2}\right) ]Z^{T}\left\vert e_{j}\right\rangle \\ & =\frac{1}{2}\left\langle e_{j}\right\vert Z^{\ast}\left( C^{2}K^{T} -K^{T}C^{2}\right) Z^{T}\left\vert e_{j}\right\rangle \\ & =0. \end{align*} That implies $\langle\varphi_{j}(t)|\frac{d}{dt}|\varphi_{j}(t)\rangle=0$. Thus we have answered the above-mentioned question: first the Hamiltonian of the ancilla has to satisfy condition (\ref{Condition for K -1}), and then the basis of the Hilbert space of the ancilla is chosen so that this Hamiltonian is diagonal. We regard this description as the parallel transport of a mixed state in unitary evolution. The geometric phase of $|\varphi_{j}(t)\rangle$ is \begin{align} \gamma_{j}(t) & =\arg\langle\varphi_{j}(0)|\varphi_{j}(t)\rangle\nonumber\\ & =\arg\left\langle e_{j}\right\vert Z^{\ast}CU(t)CV^{T}(t)Z^{T}|e_{j} \rangle\nonumber\\ & =\arg\left\langle e_{j}\right\vert Z^{\ast}CU(t)CZ^{T}\overline{V} ^{T}(t)|e_{j}\rangle\nonumber\\ & =\arg\left\langle e_{j}\right\vert Z^{\ast}CU(t)CZ^{T}|e_{j}\rangle -\kappa_{j}t.\label{gamma(t)} \end{align} The visibility is given by $\nu_{j}=|\langle\varphi_{j}(0)|\varphi _{j}(t)\rangle|$. Looking back at (\ref{Psi(t)-2}), we can say that $\left\vert \Psi (t)\right\rangle $ undergoes parallel transport, and hence give the geometric phase of $\left\vert \Psi(t)\right\rangle $ as follows.
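Since $C$ is diagonal in the basis $\left\{ \left\vert e_{j}\right\rangle \right\} $, condition (\ref{Condition for K -1}) can in fact be solved explicitly entry by entry; this small observation, which we add here for concreteness, is consistent with the equations above but is not spelled out in the text:

```latex
% Matrix elements of  C^2 K^T + K^T C^2 = -2 C H C  in the basis {|e_j>},
% with C = diag[c_1,...,c_n]:
\begin{equation*}
(c_i^{2}+c_j^{2})\,(K^{T})_{ij} = -2\,c_i c_j H_{ij},
\qquad\text{i.e.}\qquad
(K^{T})_{ij} = -\frac{2\,c_i c_j H_{ij}}{c_i^{2}+c_j^{2}},
\end{equation*}
% which is well defined whenever rho(0) has full rank, and defines a
% Hermitian K because H is Hermitian.
```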
\begin{align} \Gamma(t) & =\arg\langle\Psi(0)\left\vert \Psi(t)\right\rangle \nonumber\\ & =\arg {\displaystyle\sum\nolimits_{j=1}^{n}} \langle\widetilde{\varphi}_{j}(0)|\widetilde{\varphi}_{j}(t)\rangle \nonumber\\ & =\arg {\displaystyle\sum\nolimits_{j=1}^{n}} q_{j}\nu_{j}e^{i\gamma_{j}}\label{Gamma(t)} \end{align} Rewriting (\ref{Gamma(t)}) as $e^{i\Gamma(t)}= {\displaystyle\sum\nolimits_{j=1}^{n}} q_{j}\nu_{j}e^{i\gamma_{j}}$ and considering the mixed ensemble of the system, we see that (\ref{Gamma(t)}) is actually the sum of the weighted geometric phase factors of the (non-orthogonal) pure components with the associated visibilities. We regard (\ref{Gamma(t)}) as the geometric phase of the mixed state. Obviously it is gauge-invariant. We now review (\ref{Gamma(t)}) from another point of view. Note that $\rho(0)=CC^{\dag}$. Since $\left\vert \Psi(t)\right\rangle $ evolves under the bilocal unitary transformation $U(t)\otimes V(t)$, we have $C(t)=U(t)CV^{T} (t)$. It is evident that $\rho(t)=C(t)C^{\dag}(t)$. Then $C(t)$ is a purification of $\rho(t)$ in Uhlmann's sense. Let $W(t)=C(t)Z^{T}$. $W(t)$ is another Uhlmann purification of $\rho(t)$. Furthermore we have \begin{align*} & W^{\dag}(t)\overset{\cdot}{W}(t)\\ & =Z^{\ast}C^{\dag}(t)\overset{\cdot}{C}(t)Z^{T}\\ & =-iZ^{\ast}V^{\ast}(t)[CHC+C^{2}K^{T}]V^{T}(t)Z^{T}\\ & =iZ^{\ast}V^{\ast}(t)[CHC+K^{T}C^{2}]V^{T}(t)Z^{T}\\ & =\overset{\cdot}{W^{\dag}}(t)W(t), \end{align*} where we have used (\ref{Condition for K -1}). Thus $W(t)$ satisfies Uhlmann's parallel transport condition, and we obtain Uhlmann's geometric phase of the mixed state: \begin{align} \gamma^{Uhl}(t) & =\arg Tr\left( W^{\dag}(0)W(t)\right) \nonumber\\ & =\arg Tr[Z^{\ast}CC(t)Z^{T}]\nonumber\\ & =\arg Tr[CC(t)]. \end{align} It is straightforward to check that $\gamma^{Uhl}(t)=\Gamma(t)$. So our statement in fact reproduces Uhlmann's geometric phase.
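The chain of claims above (invariant weights $q_j$, componentwise parallel transport, and $\Gamma=\gamma^{Uhl}$) can be checked numerically. The sketch below is our own illustration, not from the paper: it uses an arbitrary two-level example and the elementwise solution $(K^{T})_{ij}=-2c_ic_jH_{ij}/(c_i^{2}+c_j^{2})$ of (\ref{Condition for K -1}).

```python
import numpy as np

def evo(A, t):
    """exp(-i*A*t) for Hermitian A, via its eigendecomposition."""
    w, P = np.linalg.eigh(A)
    return (P * np.exp(-1j * w * t)) @ P.conj().T

# Illustrative two-level data (ours, not the paper's).
lam = np.array([0.7, 0.3])
c = np.sqrt(lam)
C = np.diag(c)
H = np.array([[1.0, 0.5 - 0.2j],
              [0.5 + 0.2j, -0.4]])

# K^T solves C^2 K^T + K^T C^2 = -2 C H C, elementwise:
KT = -2.0 * np.outer(c, c) * H / (lam[:, None] + lam[None, :])
K = KT.T

# Z diagonalizes K: Z K Z^dag = diag(kappa).
kappa, P = np.linalg.eigh(K)
Z = P.conj().T

def phis(t):
    """Columns are |phi~_j(t)> = U(t) C V^T(t) Z^T |e_j>."""
    return evo(H, t) @ C @ evo(K, t).T @ Z.T

t, dt = 0.8, 1e-5
F0, F = phis(0.0), phis(t)
dF = (phis(t + dt) - phis(t - dt)) / (2.0 * dt)

q = np.sum(np.abs(F)**2, axis=0)             # weights q_j
par = np.einsum('ij,ij->j', F.conj(), dF)    # <phi~_j| d/dt |phi~_j>

Gamma = np.angle(np.trace(F0.conj().T @ F))  # arg <Psi(0)|Psi(t)>
C_t = evo(H, t) @ C @ evo(K, t).T            # C(t) = U C V^T
uhl = np.angle(np.trace(C @ C_t))            # Uhlmann: arg Tr[C C(t)]
```

The weights stay constant, each component transports parallelly (the derivative inner products vanish up to finite-difference error), and the two phases coincide.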
In brief, after choosing the Hamiltonian of the ancilla to satisfy (\ref{Condition for K -1}), and selecting the specific basis of $\mathcal{H}^{A}$ that diagonalizes this Hamiltonian, we indeed answer the question and meanwhile obtain Uhlmann's phase. Let us consider a special case in which $V(t)$ is already diagonal in the original basis $\left\{ \left\vert f_{j}\right\rangle \right\} $. Recall (\ref{psi-tilde(t)}) and (\ref{rho(t)}). It is easy to see that $p_{j} (t)=\langle\widetilde{\psi}_{j}(t)|\widetilde{\psi}_{j}(t)\rangle$ is invariant and in fact equals $\lambda_{j}$. Define the Hamiltonian of the ancilla by $K_{kl}=-\delta_{kl}H_{kl}$, where $K_{kl}$ and $H_{kl}$ are respectively the matrix elements of $K$ and $H$. The evolution of the ancilla is given by $V(t)=diag[e^{iH_{11}t},e^{iH_{22}t},\cdots,e^{iH_{nn}t}]$. We have $\langle\widetilde{\psi}_{j}(t)|\frac{d}{dt}|\widetilde{\psi}_{j}(t)\rangle =0$. Similarly to (\ref{Gamma(t)}), the geometric phase of $\left\vert \Psi(t)\right\rangle $ in this case is \begin{align} \Gamma^{\prime}(t) & =\arg\langle\Psi(0)|\Psi(t)\rangle\nonumber\\ & =\arg {\displaystyle\sum\nolimits_{j=1}^{n}} \langle\widetilde{\psi}_{j}(0)|\widetilde{\psi}_{j}(t)\rangle\nonumber\\ & =\arg {\displaystyle\sum\nolimits_{j=1}^{n}} \lambda_{j}\left\vert \langle e_{j}|U(t)|e_{j}\rangle\right\vert e^{i\gamma_{j}^{\prime}(t)}, \end{align} where $\gamma_{j}^{\prime}(t)=\arg\langle\widetilde{\psi}_{j}(0)|\widetilde {\psi}_{j}(t)\rangle$. This is just the mixed state geometric phase in Sj\"{o}qvist's formalism\cite{Sjoqvist}. Thus for mixed states our proposal encompasses the origins of both Sj\"{o}qvist's phase and Uhlmann's phase, and we almost have a comprehensive interpretation of the mixed state geometric phase. A remaining discomfort in the above discussion is the time evolution of the ancilla, i.e., $V(t)$. In the following we try to remove $V(t)$ and give a more concise formulation.
Actually, in the above discussion, the purpose of applying $V(t)$ to the ancilla is to cancel out the dynamical phase of the system, or more precisely, to cancel out the dynamical phase of each component of the mixed ensemble $\rho(t)$. We can therefore re-explain our procedure from an alternative viewpoint as follows. We first \textit{postulate} that in some representation of $\mathcal{H}^{A}$, say $\left\{ \left\vert f_{j} \right\rangle \right\} $, the ancilla is subjected to the Hamiltonian $K$ satisfying (\ref{Condition for K -1}). Then we choose another representation $\left\{ \left\vert g_{j}\right\rangle \right\} $ of $\mathcal{H}^{A}$ in which $K$ is diagonal. As stated above, the unitary $Z$ transforms $\left\{ \left\vert g_{j}\right\rangle \right\} $ to $\left\{ \left\vert f_{j}\right\rangle \right\} $. Afterwards we can \textit{disregard} $K$ and the corresponding time evolution operator $V(t)=\exp\left( -iKt\right) $. Then $\left\vert \Psi(t)\right\rangle $ is given by \[ \left\vert \Psi(t)\right\rangle = {\displaystyle\sum\nolimits_{j=1}^{n}} U(t)CZ^{T}\left\vert e_{j}\right\rangle \otimes\left\vert g_{j}\right\rangle . \] Let $|\widetilde{\chi}_{j}(0)\rangle=CZ^{T}\left\vert e_{j}\right\rangle $ and $|\widetilde{\chi}_{j}(t)\rangle=U(t)|\widetilde{\chi}_{j}(0)\rangle$. Apparently $\langle\widetilde{\chi} _{j}(t)|\widetilde{\chi}_{j}(t)\rangle$ is invariant and is just $q_{j}$ of (\ref{qj}). Let $|\chi_{j}(t)\rangle=q_{j}^{-1/2}|\widetilde{\chi} _{j}(t)\rangle$. We have \begin{align} \rho(t) & = {\displaystyle\sum\nolimits_{j=1}^{n}} |\widetilde{\chi}_{j}(t)\rangle\langle\widetilde{\chi}_{j}(t)|\nonumber\\ & = {\displaystyle\sum\nolimits_{j=1}^{n}} q_{j}|\chi_{j}(t)\rangle\langle\chi_{j}(t)|.\label{rho in khi} \end{align} The $|\chi_{j}(t)\rangle$'s are again not orthogonal to one another and do not undergo parallel transport. Let us calculate the dynamical phase of $|\chi_{j}(t)\rangle$.
\begin{align*} \phi_{j}^{d}(t) & =\frac{-i}{q_{j}} {\displaystyle\int\nolimits_{0}^{t}} \langle\widetilde{\chi}_{j}(t^{\prime})|\frac{d}{dt^{\prime}}\left\vert \widetilde{\chi}_{j}(t^{\prime})\right\rangle dt^{\prime}\\ & =-\frac{1}{q_{j}}\left\langle e_{j}\right\vert Z^{\ast}CHCZ^{T}\left\vert e_{j}\right\rangle t. \end{align*} Applying (\ref{Condition for K -1}), we have $\phi_{j}^{d}(t)=\left\langle e_{j}\right\vert Z^{\ast}K^{T}Z^{T}\left\vert e_{j}\right\rangle t=\kappa _{j}t$. The total phase is \begin{align*} \phi_{j}^{t}(t) & =\arg\langle\chi_{j}(0)|\chi_{j}(t)\rangle\\ & =\arg\langle e_{j}|Z^{\ast}CU(t)CZ^{T}|e_{j}\rangle. \end{align*} Then the geometric phase associated with each $|\chi_{j}(t)\rangle$ is $\phi_{j}^{g}(t)=\phi_{j}^{t}(t)-\phi_{j}^{d}(t)$. Apparently $\phi_{j}^{g}(t)=\gamma_{j}(t)$ (see (\ref{gamma(t)})). Hence we have demonstrated that in order to obtain Uhlmann's phase we need not resort to the evolution of the ancillary system; all we have to do is find the specific representation of $\mathcal{H}^{A}$ in which the Hermitian matrix $K$ satisfying (\ref{Condition for K -1}) is diagonal. In conclusion, the formalism proposed in this paper provides a unified interpretation of the mixed state geometric phase. The key point is Eq.~(\ref{Condition for K -1}), which implies the representation transformation of the ancilla. As stated before, the unitary evolution of the ancilla is not necessary, and it is the choice of representation of the ancillary Hilbert space that results in a particular geometric phase. For a given mixed state and its purification, different choices of the representation of $\mathcal{H}^{A}$ can be considered as different kinds of measurement. So the mixed state geometric phase behaves somewhat like the result of a quantum measurement: keeping the original basis of $\mathcal{H}^{A}$, we get Sj\"{o}qvist's result; shifting to the other basis of $\mathcal{H}^{A}$, we get Uhlmann's result.
The deeper significance of such a basis transformation is not yet clear. We hope our discussion will be helpful for further research on geometric phase. Financial support from the National Natural Science Foundation of China (Grant No. 10375057 and No. 10425524), the National Fundamental Research Program (2001CB309300) and the ASTAR (Grant No. 012-104-305) is gratefully acknowledged. \end{document}
\begin{document} \title{Reaction-diffusion on metric graphs and conversion probability} \author{ Renato Feres\footnote{Dept. of Mathematics, Washington University, Campus Box 1146, St. Louis, MO 63130}, Matt Wallace\footnotemark[1]} \maketitle \begin{abstract} We consider the following type of reaction-diffusion system, motivated by a specific problem in the area of heterogeneous catalysis: A pulse of reactant gas of species $A$ is injected into a domain $U\subset \mathbb{R}^d$ that we refer to as the {\em reactor}. This domain is permeable to gas diffusion and chemically inert, except for a few relatively small regions, referred to as {\em active sites}. At the active sites, an irreversible first-order reaction $A\rightarrow B$ can occur. Part of the boundary of $U$ is designated as the {\em reactor exit}, out of which a mixture of reactant and product gases can be collected and analyzed for their composition. The rest of the boundary is chemically inert and impermeable to diffusion; we assume that instantaneous normal reflection occurs there. The central problem is then: Given the geometric parameters defining the configuration of the system, such as the shape of $U$, the position of gas injection, the location and shape of the active sites, and the location of the reactor exit, find the (molar) fraction of product gas in the mixture after the reactor $U$ is fully evacuated. Under certain assumptions, this fraction can be identified with the {\em reaction probability}, that is, with the probability that a single diffusing molecule of $A$ reacts before leaving through the exit. More specifically, we are interested in how this reaction probability depends on the rate constant $k$ of the reaction $A\rightarrow B$. After giving a stochastic formulation of the problem, we consider domains having the form of a network of thin tubes in which the active sites are located at some of the junctures.
The main result of the paper is that in the limit, as the thin tubes approach curves in $\mathbb{R}^d$, the reaction probability converges to functions of the point of gas injection that can be described fairly explicitly in their dependence on the (appropriately rescaled) parameter $k$. Thus, we can use the simpler processes on metric graphs as model systems for more complicated behavior. We illustrate this with a number of analytic examples and one numerical example. \end{abstract} \begin{keywords} Reaction-diffusion, metric graphs. \end{keywords} \section{Introduction} Consider the system described schematically in Figure \ref{domain}. The bounded domain $U\subset \mathbb{R}^d$ represents a chemical reactor packed with inert granular particles permeable to gas diffusion. At the point $x\in U$, one injects a small pulse of gas molecules of species $A$. The pulse diffuses in $U$ with a certain diffusion coefficient $\mathcal{D}$ before escaping through the reactor exit, marked in the figure with a dashed line and denoted $\partial U_a$. Now suppose a certain small subset $\mathcal{A}$ of $U$ contains a catalytically active material promoting the irreversible first-order reaction $A\rightarrow B$ with {\em rate constant} $k$. We call $\mathcal{A}$ the {\em active zone}. Then a certain fraction of the molecules in the initial pulse may interact with $\mathcal{A}$ and convert to $B$ before exiting the reactor. This fraction is called the {\em fractional conversion}. Motivated by certain problems in heterogeneous catalysis (see \cite{FYMB09, FCYG09} and references therein), we seek to determine the fractional conversion in terms of the rate constant $k$, the diffusion coefficient $\mathcal{D}$ and the geometric configuration of $\mathcal{A}$ and $U$. \begin{figure} \caption{\small Schematic description of the chemical reactor.
$U$ is the domain of diffusion; the shaded regions represent active sites, whose union is denoted $\mathcal{A}$.} \label{domain} \end{figure} The practical concern is that $k$ may be difficult to determine experimentally, because the reaction $A\rightarrow B$ may involve, at the microscopic level, a complex sequence of steps such as adsorption and surface reaction. On the other hand, measuring the fractional conversion is comparatively simple. Similarly, one can carefully control in a lab the size and shape of the reactor, the size and shape of the active zone, and the nature of the particulate matter supporting the diffusion (hence the diffusion coefficient $\mathcal{D}$). Determining $k$ in terms of these other variables is therefore of considerable interest. We intend to set up a probabilistic model for determining $k$. Let us briefly describe what physical assumptions we impose for the model to work. First, we assume that the pressure within $U$ is low enough, and the injected pulse is narrow enough, that the diffusion follows the {\it Knudsen regime.} In other words, the diffusion is driven at the microscopic level by collisions between diffusing molecules and the particulate matter making up the reactor bed, not by collisions between molecules themselves. Second, we assume the pulse is small enough that any reactions occurring within $U$ during the course of the experiment negligibly alter the composition of the active sites. These assumptions are physically reasonable and can in fact be arranged for in a lab; see \cite{GYZF10} or \cite{FSYW15} for more information. With the set-up just described, write $\alpha(x)$ for the fractional conversion, or $\alpha_k(x)$ when necessary to emphasize dependence on the rate constant $k$.
Because of our assumption about the Knudsenian character of the diffusion, we can identify $\alpha_k(x)$ with the {\em reaction probability;} that is, with the probability that a single molecule, entering the chamber at $x$ and undergoing reflecting Brownian motion in $U$ with diffusion coefficient $\mathcal{D}$, converts to $B$ before hitting $\partial U_a$. This identification is possible because every molecule's path in the reactor is independent of every other's. Therefore a single pulse of $N$ molecules is equivalent to $N$ independent pulses of one molecule. For this reason, we can model the reaction $A\rightarrow B$ by killing a reflecting Brownian motion (RBM) in $U$ with a rate function of the form $r=kq$, where $k$ is the reaction rate and $q$ is the indicator function of $\mathcal{A}$, or perhaps a smooth approximation thereof. By RBM in $U$, we mean a diffusion process $(X_t,\mathbb{P}_x)$ with sample paths in $\overline{U}$ satisfying the stochastic differential equation \begin{equation} X_t - X_0 = \sigma B_t + \int_0^t \nu(X_s) \mathrm{d}\ell_s \label{eq:sk} \end{equation} where $\sigma=\sqrt{2\mathcal{D}}$, $B_t$ is an ordinary $d$-dimensional Brownian motion, $\nu$ is the inward-pointing unit normal vector field on $\partial U$, and $\ell_s$ is a continuous increasing process (called the boundary local time) that increases only on $\{t : X_t\in\partial U\}$. The SDE \eqref{eq:sk} is called the {\em Skorokhod} or {\em semimartingale decomposition} of $(X_t)$. See, for example, \cite{BH91, C93} for more information about RBMs that have a semimartingale decomposition. As for killing a diffusion, see \cite{RW1}. The existence of an RBM $(X_t)$ in $U$ with a decomposition \eqref{eq:sk} requires that $\partial U$ satisfy certain modest smoothness conditions. For example, $C^2$ smoothness is more than sufficient to ensure that \eqref{eq:sk} has a pathwise unique solution; see \cite{BB06, BB08, LS84, S87}.
See \cite{IW} for general notions about stochastic differential equations. Henceforth we shall assume $\partial U$ is at least $C^2$ unless otherwise stated. Furthermore, we assume the first hitting time of $(X_t)$ to $\partial U_a$ is finite almost surely. As for $\mathcal{A}$, we assume for now that it is a compact subset of $U$ with a regular boundary, and impose further conditions when necessary. It will be convenient to introduce the {\em survival function} $\psi_k(x)$, defined as the probability that the reflecting Brownian motion starting at $x$ reaches $\partial U_a$ without being killed (that is, without reacting). We also write $\psi(x)$ when $k$ is understood. Then the reaction probability we seek is $\alpha_k(x)=1-\psi_k(x)$. Our model for the reaction and diffusion within $U$ entails that \begin{equation}\label{eq:psi} \psi_k(x) = \mathbb{E}_x\left[ \exp\left\{ -\int_0^T kq(X_s)\mathrm{d}s \right\} \right] \end{equation} where $T$ is the first hitting time of $(X_t)$ to $\partial U_a$, that is, $T:=\inf\{t>0\,:\,X_t\in\partial U_a\}$. Functionals such as the right-hand side of \eqref{eq:psi} are well known and have the following physical interpretation in terms of our catalysis problem: the probability that a gas molecule injected at $x$ and following the sample path $\omega$ {\em hasn't} reacted decreases exponentially with the amount of time spent in the chemically active region $\mathcal{A}$, the reaction constant $k$ being the exponential rate. Thus the expression $\exp\left\{-k\int_0^{T(\omega)} q(X_s(\omega))\, ds\right\}$ represents the probability, for that sample path, that the molecule does not react by the time it exits the reactor, and then $\psi_k(x)$ is the expected value of this probability. It is also well known that $\psi_k(x)$ can be expressed as the solution to an elliptic boundary value problem in $U$.
Our numerical illustration below is based on this fact: \begin{proposition} Suppose that $u$ is a function which is bounded and continuous in $\overline{U}$, satisfies \[ \mathcal{D}\Delta u -kqu=0\] in $U\setminus \partial \mathcal{A}$, together with the boundary conditions $\frac{\partial u}{\partial n}=0$ on $\partial U_r$ and $u=1$ on $\partial U_a$. Suppose furthermore that $\partial\mathcal{A}$ is a set of zero potential for RBM in $\overline{U}$. Then $u=\psi$ within $U$. \end{proposition} \begin{remark} Let $S=\inf\{t>0\,:\,X_t\in\mathcal{A}\}$ be the first hitting time of $X$ to $\mathcal{A}$. Then $\int_0^T kq(X_s)\mathrm{d}s$ is positive only on the event $\{S<T\}$ that $X_t$ hits $\mathcal{A}$ before leaving through $\partial U_a$. Accordingly, letting $k\rightarrow\infty$ reduces \eqref{eq:psi} to $\psi_\infty(x)=\mathbb{P}_x\left[T<S\right]$, the probability that, starting from $x\in U$, the RBM $X_t$ escapes through $\partial U_a$ without ever hitting $\mathcal{A}$. We write $P_{\mathrm{h}}(x)=1-\psi_\infty(x)$ for the complementary probability that $X_t$ hits $\mathcal{A}$ before leaving. This last probability can also be determined by solving a boundary value problem. Namely, if $v(x)$ solves $\Delta v=0$ in $V:=U\setminus\mathcal{A}$, together with the boundary conditions $v=1$ on $\partial\mathcal{A}$, $v=0$ on $\partial U_a$, and $\mathbf{n}\cdot\nabla v=0$ on $\partial V\setminus [\partial U_a\cup\partial\mathcal{A}]$, then $v(x)=P_{\mathrm{h}}(x)$. \end{remark} Elsewhere we will address the existence of solutions satisfying the conditions in Proposition 1. For now, we simply assume that a solution exists and can be approximated using the Finite Element (FE) method. This is not a serious defect since we only intend to use Proposition 1 to corroborate numerically the predicted form of $\alpha_k(x)$ in Theorem 2 below.
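As an illustration of Proposition 1 and of the Remark, here is a one-dimensional finite-difference sketch of our own (the geometry and constants are illustrative, not from the paper). For large $k$ the computed reaction probability should approach the hitting probability, which to the left of the active zone is the linear function $P_{\mathrm h}(x)=x/0.45$ in this geometry.

```python
import numpy as np

# 1-D illustration: U = (0,1), exit at x = 0, reflecting wall at x = 1,
# active zone A = [0.45, 0.55], D = 1.  All parameters are illustrative.
N = 1000
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
q = ((x >= 0.45) & (x <= 0.55)).astype(float)

def survival(k):
    """Finite-difference solve of u'' = k q u, u(0) = 1, u'(1) = 0."""
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0], b[0] = 1.0, 1.0                  # psi = 1 at the exit
    for i in range(1, N):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 - k * q[i]
    A[N, N], A[N, N - 1] = 1.0, -1.0          # reflecting (Neumann) wall
    return np.linalg.solve(A, b)

psi = survival(50.0)          # survival probability psi_k(x)
alpha = 1.0 - psi             # reaction probability alpha_k(x)
```

The solution is a probability, decreases away from the exit, decreases pointwise as $k$ grows, and saturates at the hitting probability as $k\to\infty$.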
To understand what to expect, then, consider the case where $U$ is a line or a metric graph, and $\mathcal{A}=\{a\}$, a single point. Then simple manipulations involving the strong Markov property (described in detail below) show that $\alpha_k(x)$ factors as $\alpha_k(x)=P_{\mathrm{h}}(x)f(x,k)$ where $P_{\mathrm{h}}(x)$ is the hitting probability for $\mathcal{A}$ (described in the Remark under Proposition 1) and $f(x,k)$ is the reaction probability conditional on hitting $\mathcal{A}$, i.e. on starting from $a$. Clearly $P_{\mathrm{h}}(x)$ does not depend on $k$, so that $f(x,k)$ is the quantity of interest. In this case, $f(x,k)$ is simply equal to $\alpha_k(a)$, and it follows from basic properties of local times (described in detail below) that \begin{equation}\label{typical}f(x,k)=\frac{\lambda k}{1+\lambda k}\end{equation} where $\lambda$ is a constant that does not depend on $k$. For this reason we regard the formula \eqref{typical} as ``typical'' behavior for situations where $\mathcal{A}$ can reasonably be thought of as a single point in the state space. This involves some coarse-graining, because single points are polar for Brownian motion in dimension $d\geq 2$. \begin{figure} \caption{\small Example in dimension $2$ of a reactor domain with a small active site.} \label{maize} \end{figure} To illustrate this behavior, consider the domain in Figure \ref{maize}. Taking the length of the shortest side of the polygonal region as unit, the radius of the disc-shaped active site is $0.1$. Here, $\mathcal{A}$ is not a single point, but is nevertheless small enough relative to $U$ that we can reasonably hope that a factorization of the form $\alpha_k(x)=P_{\mathrm{h}}(x)f(x,k)$ holds.
That is, if we {\em define} $f(x,k)$ by \begin{equation} \label{typical2} f(x,k) := \frac{\alpha_k(x)}{P_{\mathrm{h}}(x)} \end{equation} and then compute the right-hand members of \eqref{typical2} using the boundary value problems determining $\alpha_k(x)$ and $P_{\mathrm{h}}(x)$, then we expect to see the behavior described in \eqref{typical}. To this end, we computed $f(x,k)$ as defined in \eqref{typical2} for a fixed $x$ and $50$ different equally-spaced values of $k\in[0,100]$ using FEniCS, and then ``solved for $\lambda$'' under the working assumption that \eqref{typical} holds. That is, we defined \[ \lambda := \frac{1}{k}\frac{f(x,k)}{1-f(x,k)} =\frac{1}{k}\frac{\alpha_k(x)/P_{\mathrm{h}}(x)}{1-\alpha_k(x)/P_{\mathrm{h}}(x)}\] and checked $\lambda$'s dependence on $k$. As expected, the resulting $\lambda$'s exhibited little dependence on $k$. Denoting by $\lambda_{\text{\tiny max}}$ and $\lambda_{\text{\tiny min}}$ the maximum and minimum values of $\lambda$ over the range of $k$'s we looked at, we found $(\lambda_{\text{\tiny max}}-\lambda_{\text{\tiny min}})/\lambda_{\text{\tiny min}}< 0.005$ with mean value $\lambda=0.02342$. It will be shown that the above expression \eqref{typical} is indeed a typical approximation of $f(x,k)$ when the active region is a small single site. Roughly speaking, $\lambda$ is the expected local time accumulated at $a$, starting from $a$, before departing through the reactor exit. In particular, $\lambda$ depends only on the geometry of the reactor and the diffusion coefficient, and not on $k$. For more general active site configurations, more complicated but related approximation formulas apply. We call the regions to which our main result will apply {\em fat graphs}. These are graph-like domains comprising a number of thin tubes and junctures that converge, in an appropriate sense, to an underlying metric graph $\Gamma=(\mathcal{V},\mathcal{E})$. Here, $\mathcal{E}$ denotes the edges of $\Gamma$, and $\mathcal{V}$ the vertices.
Then each tube in the domain corresponds to an edge $e\in\mathcal{E}$, and each juncture corresponds to a vertex $v\in\mathcal{V}$. We assume that the active region $\mathcal{A}$ is concentrated in the juncture regions. The precise definitions are given in Section \ref{fatgraph}. For now, Figure \ref{fatgraphparameters} conveys a rough idea. \begin{figure} \caption{\small A fat graph in $\mathbb{R}^d$.} \label{fatgraphparameters} \end{figure} There are two scale parameters involved in the approximation: $\epsilon>0$, which gives the thickness of the tubes in the domain, and $h>0$, which gives the radius of the active zone around each active site. As $\epsilon\downarrow 0$, the domain collapses to the skeleton graph, and as $h\downarrow 0$, active sites collapse to vertices. At the same time, the reaction activity, given by the rate constant $k^h=k/h$, must increase accordingly. Thus let $f^{\epsilon, h}(x,k/h)$ denote the quantity introduced above that gives the dependence of the reaction probability on the rate constant. That is, \[ f^{\epsilon, h}(x, k/h) := \frac{1-\psi^{\epsilon,h}(x)}{P_{\mathrm{h}}(x)},\] where $\psi^{\epsilon,h}$ is the survival function on $U^\epsilon$ with rate constant $k/h$ (defined precisely at \eqref{fk1} below). The main result is then the following. \begin{theorem} \label{theorem1} Let $U^\epsilon$ be a bounded fat graph whose skeleton metric graph $\Gamma$ has finitely many vertices and edges. Let $\mathcal{C}$ be the set of vertices corresponding to active sites of $U^\epsilon$ and $\delta$ the relative radius of the active sites. Define $\kappa:=k\delta/\mathcal{D}$, where $\mathcal{D}$ is the diffusion constant. Let $|\mathcal{C}|$ denote the number of active vertices.
Then, under the hypotheses of Proposition \ref{firstlimit} below, $$\lim_{h\rightarrow 0} \lim_{\epsilon\rightarrow 0} f^{\epsilon,h}(x,k/h)= 1-\sum_{v\in \mathcal{C}}p_v(x)\frac{\lambda_v(\kappa)}{\lambda(\kappa)},$$ where $p_v(x)$ is the probability that a diffusing particle starting from $x$ hits $\mathcal{C}$ for the first time at $v$, conditional on hitting $\mathcal{C}$ at all, and $\lambda(\kappa)$, $\lambda_v(\kappa)$ are polynomials in $\kappa$ of degree at most $|\mathcal{C}|$ and $|\mathcal{C}|-1$, respectively. The coefficients of $\lambda(\kappa),\lambda_v(\kappa)$ depend only on geometric properties of $\Gamma$: lengths of edges, degrees of vertices, location of exit and active vertices. When $\mathcal{C}=\{v\}$, this limit reduces to $$\lim_{h\rightarrow 0} \lim_{\epsilon\rightarrow 0} f^{\epsilon,h}(x,k/h)= \frac{\lambda \kappa}{1+\lambda \kappa}.$$ \end{theorem} It would be interesting to find methods that could tell {\em a priori} how the coefficients of $\lambda(\kappa)$ and $\lambda_v(\kappa)$ can be expressed in terms of the geometry and topology of the graph, that is, as functions of lengths, degrees, etc. A starting point would be to explore in a systematic way the properties of these polynomials for families of graphs. Here we are content with showing only a few examples. \begin{figure} \caption{\small A few examples of graphs for which the reaction probability is easily computed. The vertex types are: $\cdot=$ inert, $\odot=$ active, $\emptyset=$ exit. The labels $l_i$ indicate edge lengths and the $x_i$ on the middle graph of the left column are the coordinates of the active nodes. On the third graph of the left column, $n_\odot$ is the number of vertices of type $\odot$.} \label{diagrams} \end{figure} The reaction probabilities for the examples of Figure \ref{diagrams} are easily obtained by solving the linear system of equations indicated in Section \ref{examples}.
In all cases we assume that the point of gas injection is the left-most vertex. The results are as follows. \begin{enumerate} \item Top graph on the left column: $$ \alpha(\kappa)= \frac{2l\kappa}{1+2l\kappa}.$$ Naturally, only the length of the edge between the active and exit vertices matters. On the other hand, if the initial edge is completely eliminated so that the starting point for Brownian motion is the active vertex, then the term $2l\kappa$ should be replaced with $l\kappa$. \item Top graph on the right column: $$\alpha(\kappa)=\frac{\text{deg}(v)l\kappa}{1+\text{deg}(v)l\kappa} $$ where $\text{deg}(v)$ is the degree of the active vertex. As in the first example, only the length of the edge connecting the active vertex to the exit matters, but the number of shorter edges leading to inert vertices influences $\alpha$, regardless of those edges' lengths, so long as they are greater than zero. \item Middle graph of the left column. Let $l_j=x_j-x_{j-1}$. The value of $\alpha(\kappa)$ can be obtained recursively as follows. Set $g_1=1$ and $g_{j+1}= g_j+ l_{j+1}\left(g_1+ \cdots + g_j\right) \kappa$ for $j=1, \dots, m$. (The vertex $\emptyset$ is at position $x_{m+1}$.) Then $$\alpha(\kappa)=\frac{g_{m+1}-1}{g_{m+1}}.$$ It is interesting to note the following property concerning the optimal arrangement of active nodes: when $\kappa$ is small, $$\alpha(\kappa)=\left[(L-x_1)+\cdots+(L-x_m)\right]\kappa+ \text{higher order terms in } \kappa $$ where $L$ is the total length of the graph. So if $\kappa$ is small, one maximizes the reaction probability by clustering all the active nodes near the entrance point. On the other hand, for large values of $\kappa$, $$\alpha(\kappa) =l_2 l_3 \cdots l_{m+1} \kappa^m + \text{lower order terms in } \kappa.$$ One easily obtains that the coefficient of $\kappa^m$ attains a maximum when the active vertices are equally spaced.
\item Middle graph of the right column: $$\alpha(\kappa)=\alpha(\infty)\frac{\lambda\kappa}{1+\lambda\kappa} $$ where $\alpha(\infty)= l_1/(l_0+l_1)$ and $\lambda=2l(l_0+l_1)/(l_0+l_1+l)$. \item Bottom graph of left column: $$\alpha(\kappa)=\frac{2(l_0+n_\odot l)\kappa}{1+2(l_0+n_\odot l)\kappa}.$$ Despite having multiple active vertices, this system behaves, in its dependence on $\kappa$, like one with a single active vertex. \item Bottom graph of right column: $$ \alpha(\kappa)= \frac{3l\left(1+l_1\kappa\right)\kappa}{1+3l\left(1+l_1\kappa\right)\kappa}.$$ As expected for a system with two active vertices, this is the quotient of second degree polynomials in $\kappa$. \end{enumerate} \section{Metric graphs and fat-graph domains}\label{fatgraph} We introduce here notation and terminology concerning {\em metric graphs,} together with an associated class of domains in $\mathbb{R}^d$ we call {\em fat graphs.} Basically, a metric graph is an abstract graph realized as a collection of curvilinear segments, and a fat graph is a neighborhood of such a graph whose boundary satisfies certain smoothness conditions. More precisely, let $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ be an abstract graph with vertex set $\mathcal{V}$ and edge set $\mathcal{E}$. We assume the edges in $\mathcal{E}$ are oriented, and denote by $s, t: \mathcal{E}\rightarrow \mathcal{V}$ the source and target vertex functions. For each edge $e\in \mathcal{E}$ let $\bar{e}$ be the inverse of $e$, which is the same edge given the opposite orientation. Thus $s(\bar{e})=t(e)$ and $t(\bar{e})=s(e)$. We assume that $\mathcal{E}$ is closed under the inverse operation. We also assume that $|\mathcal{E}|,|\mathcal{V}|<\infty$, where $|\cdot|$ indicates cardinality.
Associate to each $v\in \mathcal{V}$ a point in $\mathbb{R}^d$, still denoted $v$ (so that we now think of $\mathcal{V}$ as a subset of the Euclidean space), and to each $e\in \mathcal{E}$ a smooth curve $\gamma_e:[0,l_e]\rightarrow \mathbb{R}^d$, parametrized by arclength, such that $\gamma_e(0)=s(e)$ and $\gamma_e(l_e)=t(e)$. Thus, $l_e$ is the length of the curvilinear segment $E_e:=\gamma_e([0,l_e])$. We assume that $\max_e l_e<\infty$ and write $l_e=|e|=|E_e|$. We also set $\gamma_{\bar{e}}(s):=\gamma_e(|e|-s)$. The union $\Gamma:=\bigcup_{e\in\mathcal{E}} E_e$ will be called a {\em metric graph}. With slight notational abuse we also write $\Gamma^\circ$ for $\bigcup_{e\in\mathcal{E}} E^\circ_e$, where $E_e^\circ=\gamma_e((0,l_e))$. Thus, $\Gamma^\circ=\Gamma\setminus\mathcal{V}$. The word {\em metric} refers to the natural distance defined by minimizing path length between two points. This induced metric makes $\Gamma$ into a separable metric space. Each edge has a natural coordinate $y_e=\gamma_e^{-1}:E_e\rightarrow [0,l_e]$. By means of this coordinate we can identify functions on $E_e$ with functions on $[0,l_e]$. Similarly, we can identify a function $f:\Gamma\rightarrow\mathbb{R}$ with a collection of functions on the various coordinate intervals $[0,l_e]$ by defining $f_e(s)=f(\gamma_e(s))$ for $s\in[0,l_e]$. Thus we have an obvious way of checking that $f$ is continuous: each $f_e$ must be continuous on $(0,l_e)$ in the ordinary sense, and the extensions to $[0,l_e]$ must agree at the vertices, in the sense that $f_{e_1}(l_{e_1})=f_{e_2}(0)$ whenever $t(e_1)=s(e_2)$. The set of continuous functions on $\Gamma$ is denoted $C(\Gamma)$. Similarly, we can define the derivative of $f\in C(\Gamma)$ at a point $x=\gamma_e(s)\in E^\circ_e$ by $(f\circ\gamma_e)'(s)$, when this derivative exists in the ordinary sense.
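The vertex-matching criterion just described is mechanical to verify. The following sketch illustrates it on a hypothetical two-edge path graph (the edge names and sample functions are illustrative only, and for brevity each edge is listed once, without its reversal):

```python
# Toy metric graph  v0 --e1--> v1 --e2--> v2, with f given edgewise by
# functions f_e : [0, l_e] -> R in the arclength coordinate of each edge.
edges = {
    "e1": {"s": "v0", "t": "v1", "l": 1.0},
    "e2": {"s": "v1", "t": "v2", "l": 2.0},
}

def is_continuous(edge_funcs, tol=1e-12):
    """Check f_{e1}(l_{e1}) == f_{e2}(0) whenever t(e1) == s(e2)."""
    for a, ea in edges.items():
        for b, eb in edges.items():
            if ea["t"] == eb["s"]:
                if abs(edge_funcs[a](ea["l"]) - edge_funcs[b](0.0)) > tol:
                    return False
    return True

good = {"e1": lambda s: s * s, "e2": lambda s: 1.0 + s}  # values match at v1
bad = {"e1": lambda s: s * s, "e2": lambda s: s}         # jump at v1
```

Here `good` defines a continuous function on the toy graph (both edge functions take the value $1$ at the shared vertex), while `bad` does not.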
At a vertex $v$, we define the one-sided derivatives \[ (D_ef)(v) := \lim_{s\downarrow 0} \frac{f(\gamma_e(s))-f(v)}{s}\] when the limit exists. Thus $D_ef(v)$ is the directional derivative of $f$ at $v$, pointing into $e$. Clearly this definition only makes sense for $e\in\mathcal{E}$ such that $s(e)=v$. Note that, in general, we do not require that the directional derivatives at $v$ all agree. We now define the fat graph $U^\epsilon$ as a union of certain $\epsilon$-tubes $U_e^\epsilon$ (one for each $e\in\mathcal{E}$) together with $\epsilon$-junctures $J_v^\epsilon$ (one for each $v\in\mathcal{V}$). Some care is required in formulating the definitions; however, Figure \ref{fatgraphparameters} should convey the right idea. The following conditions, in which $\mathbf{T}_e$ denotes the unit tangent vector field of $\gamma_e$, ensure that the construction works properly: \begin{enumerate} \item If $e, e'$ are any two edges such that $s(e)=s(e')=v$, then $\mathbf{T}_e(v)\neq \mathbf{T}_{e'}(v)$. \item \label{straight} There exists an $r_0>0$ such that $\mathbf{T}_e'(s)=0$ for $s\in [0,r_0]$ and all $e\in \mathcal{E}$. \item \label{nonintersect} The curves comprising $\Gamma$ do not intersect each other and have no self-intersections. \item \label{smooth} Each $\gamma_e$ is $C^3$. \end{enumerate} Now fix an $\epsilon>0$. We begin by defining the $\epsilon$-tubes $U^\epsilon_e$ corresponding to the edges $E_e$ in $\Gamma$. For each $e\in\mathcal{E}$, let there be given a {\em relative radius} $r_e>0$; and then, for each $\epsilon>0$ sufficiently small, let $U^\epsilon_e$ be a tubular neighborhood of $E_e^\circ$ with cross-sectional radius $r_e\epsilon$. According to \cite{F84}, $U^\epsilon_e$ has the {\em nearest point property} with respect to $E_e^\circ$, meaning each $x\in U^\epsilon_e$ has a unique nearest point $\pi^\epsilon(x)\in E_e^\circ$, and the induced mapping $\pi^\epsilon:U^\epsilon_e\rightarrow E_e^\circ$ is a $C^2$ submersion.
Furthermore, the distance function \[ x\mapsto d(x,E_e^\circ) := \inf\{\|x - y\|\,:\, y\in E_e^\circ\} \] is $C^3$ near $E_e^\circ$ (because $\gamma_e$ is $C^3$---see \cite{F84}), so that the unit normal vector field, given by \[ \mathbf{n}^\epsilon(x)=\nabla d(x,E_e)/\|\nabla d(x,E_e)\|,\quad x\in \partial U^\epsilon_e,\] is $C^2$ away from the ends of $U^\epsilon_e$. Now, the full $U_e^\epsilon$'s may intersect near the vertices. For this reason, we use assumptions \ref{straight} and \ref{nonintersect} above to introduce constants $c_0>0$ and $\epsilon_0>0$ which depend only on $\Gamma$ and have the following property: if each $U_e^\epsilon$ is shrunk by a length $c_0\epsilon$ at its ends, then $U_{e_1}^\epsilon\cap U_{e_2}^\epsilon=\varnothing$ whenever $e_1,e_2\in \mathcal{E}$ are distinct and $\epsilon\leqslant \epsilon_0$. In summary, we have a family $\{ U_e^\epsilon \, :\, e\in\mathcal E,\,\epsilon<\epsilon_0\}$ such that: (i) for fixed $\epsilon\leqslant\epsilon_0$ the $U_e^\epsilon$'s are pairwise disjoint; (ii) $\bigcup_{e\in\mathcal{E}} U_e^\epsilon$ has the nearest point property with respect to $\Gamma^\circ$, and the natural projection $\pi^\epsilon$ onto $\Gamma^\circ$ is $C^2$; (iii) $\bigcup_{e\in\mathcal{E}}U_e^\epsilon\downarrow \Gamma^\circ$ as $\epsilon\downarrow 0$. Next, we define the juncture regions $J_v^\epsilon$. For each $v\in \mathcal{V}$ let there be given a ``template'' region $J_v$ such that \[ J_v\cup\bigcup_{e:s(e)=v} U_e^{\epsilon_0}\] is simply connected and has a boundary smooth enough that the unit normal vector field is still $C^2$ away from the ends. In other words, $J_v$ lines up smoothly with its incident $\epsilon_0$-tubes. It is clear that such a $J_v$ exists. It is also clear that we can shrink or enlarge $J_v$ so that $J_v\cap U_e^{\epsilon_0}$ is a cylindrical wafer of height $(r_0-c_0)\epsilon_0>0$ and radius $r_e\epsilon_0$, for each $e\in\mathcal{E}$ with $s(e)=v$.
Then $J_v^\epsilon$ is defined in an obvious way by scaling down homothetically: \[ J^\epsilon_v:=\left\{x\in \mathbb{R}^d: v+\epsilon^{-1}(x-v)\in J_v\right\}.\] \begin{figure} \caption{\small The $\epsilon$-juncture $J^\epsilon_v$ at $v$ in part of a fat graph domain $U^\epsilon$ with skeleton $\Gamma$. The $r_e$ are the relative radii of the tube cross sections. Note that the part of the edge curve $\gamma_e$ contained in $J^\epsilon_v$ is straight.} \label{juncture} \end{figure} Finally, with the $\epsilon$-tubes and $\epsilon$-junctures as above, we define the {\em fat graph} with fatness parameter $\epsilon$ and relative radii $r_e$ as \[ U^\epsilon = \left[\bigcup_{v\in \mathcal V} J_v^\epsilon \right]\cup \left[ \bigcup_{e\in\mathcal{E}} U_e^\epsilon \right]. \] From the construction, the unit normal vector field on $\partial U^\epsilon$ is $C^2$, and $U^\epsilon\downarrow \Gamma$ as $\epsilon\downarrow 0$. It is also clear that we can extend the projections $\pi^\epsilon$ to be defined inside the $J_v^\epsilon$'s in such a way that $\pi^\epsilon:U^\epsilon\rightarrow\Gamma$ is continuous, and $$ \sup_{v\in \mathcal{V}} \sup_{x\in J^\epsilon_v} \|\pi^\epsilon(x)-v\|=O(\epsilon).$$ Furthermore, we have \begin{equation} \label{proj} \sup_{x\in U^\epsilon} \|\pi^\epsilon(x) - x\| = O(\epsilon).\end{equation} \section{Limit of diffusions from the fat graph to its graph skeleton}\label{fattothin} In this section we review some background material related to diffusions on fat-graph domains and their limits as the fat graphs shrink down to their underlying metric graphs. The main result quoted here comes from \cite{AK12}, which extends the earlier work \cite{FW93}, with modifications required for our needs.
Let $\Gamma$ be a metric graph in $\mathbb{R}^d$, and $U^\epsilon$ a family of fat graphs with skeleton $\Gamma$, fatness parameter $\epsilon>0$, and relative radii $r_e$ ($e\in\mathcal{E}$) as defined in Section \ref{fatgraph}. Let $a=\left(a^{ij}(x)\right)$ be a $d\times d$ matrix-valued function on $\mathbb{R}^d$ and $b=\left(b^i(x)\right)$ a vector field in $\mathbb{R}^d$. We assume that $a$ is uniformly positive definite, and that $a$ and $b$ are both bounded and Lipschitz continuous in $\mathbb{R}^d$. Define the differential operator $$L=\frac12 \sum_{i,j} a^{ij}(x) D_iD_j + \sum_i b^i(x) D_i,$$ where $D_i$ is partial differentiation in $x_i$. Writing $a(x)=\sigma(x)\sigma^t(x)$, with $\sigma$ assumed positive-definite and Lipschitz continuous, consider the stochastic differential equation \begin{equation}\label{reflecting} X_{t}^\epsilon-X_0^\epsilon= \int_0^{t} \sigma\left(X_s^\epsilon\right)\, dW_s + \int_0^{t} b\left(X^\epsilon_s\right)\, ds +\int_0^{t}\mathbf{n}^\epsilon\left(X_s^\epsilon\right)\, d\ell_s^\epsilon,\end{equation} in which $W$ is a $d$-dimensional Brownian motion, $\mathbf{n}^\epsilon$ is the inward-pointing unit normal vector field on $\partial U^\epsilon$, and $(\ell_t^\epsilon)$ is a continuous increasing process adapted to the filtration of $X^\epsilon$ and increasing only on the set $\{ t\,:\, X_t^\epsilon\in\partial U^\epsilon\}$. Since $\mathbf{n}^\epsilon$ is $C^2$, the geometric conditions in \cite{S87} are clearly met. Therefore \eqref{reflecting} admits a strong solution, in the sense of \cite{IW}. Let us describe the behavior of $(X^\epsilon_t)$ as $\epsilon\downarrow 0$. First, define an operator on $C(\Gamma)$ as follows.
For each $e\in \mathcal{E}$, let $C_b^2(E_e^\circ)$ denote the space of functions on $E_e^\circ$ with two bounded and continuous derivatives; and then let $L_e$ act on $f\in C_b^2(E_e^\circ)$ as $$(L_ef)(x)= \frac12\left\| \sigma^t(x)\mathbf{T}_e(x)\right\|^2 f''_e(s) + \left[\left\langle b(x), \mathbf{T}_e(x) \right\rangle + \left\langle a(x)\mathbf{T}'_e(x), \mathbf{T}_e(x)\right\rangle\right]f'_e(s), $$ where $x=\gamma_e(s)$, $\mathbf{T}'_e(x)=\left(\mathbf{T}_e\circ\gamma_e\right)'(s)$ and $\langle\cdot,\cdot\rangle$ is the ordinary Euclidean inner product. To ``paste together'' the $L_e$'s, we must specify what happens at the vertices. Thus, we define $\mathcal{D}(L_\Gamma)$ to contain those functions $f\in C(\Gamma)$ for which: \begin{enumerate} \item $f_e\in C^2_b(E_e^\circ)$ for each $e\in\mathcal{E}$; \item for each $v\in \mathcal{V}$, and $e\in \mathcal{E}$ such that $v=s(e)$, the one-sided limits $\lim_{s\downarrow 0}(L_ef)(\gamma_e(s))$ exist and have a common value, denoted $(Lf)(v)$; \item for each $v\in \mathcal{V}$, \begin{equation}\label{pev} \sum_{e:s(e)=v} p_v(e) (D_ef)(v)=0\end{equation} where the numbers $p_v(e)$ are defined by \begin{equation}\label{defpev} p_v(e):=\frac{r_e^{d-1}}{\sum_{e':s(e')=v} r_{e'}^{d-1}}.\end{equation} \end{enumerate} Then, for $f\in \mathcal{D}\left(L_\Gamma\right)$ as above, we define $L_{\Gamma}f$ by $$(L_\Gamma f)(x)=\begin{cases} (L_ef)(x) & \text{ if } x\in E_e^\circ\\ \lim_{y\rightarrow x}(L_ef)(y) & \text{ if } x\in \mathcal{V}. \end{cases} $$ According to Theorem 3.1 in \cite{FW93}, there is a diffusion process $(X_t)$ on $\Gamma$ generated by $(L_\Gamma,\mathcal{D}(L_\Gamma))$. On the other hand, Theorem 4.2 in \cite{AK12} says that $(X^{\epsilon}_t)$ converges in distribution to $(X_t)$ as $\epsilon\downarrow 0$ if $X^\epsilon_0$ converges in distribution to a $\Gamma$-valued random variable.
Since pathwise uniqueness holds for \eqref{reflecting} under our assumptions, ``in distribution'' can be replaced by ``almost sure'' in the preceding sentence. In particular, this is true if the starting point $X_0^\epsilon$ is a point $x\in\Gamma$ for all $\epsilon>0$. \begin{theorem}\label{diffusionlimit} Let $U^\epsilon$ be a fat graph with skeleton $\Gamma$. Assume that $X_0^\epsilon=x$ for all $\epsilon>0$ and some $x\in\Gamma$. Then $X^\epsilon$ converges in distribution to $X$ with initial value $X_0=x$. If pathwise uniqueness holds in \eqref{reflecting}, then $X^\epsilon$ converges a.s. to $X$. \end{theorem} \begin{remark} The coefficients $p_v(e)$ appearing above have the following probabilistic significance: if $\tau_\delta$ is the first exit time of $X_t$ from a ball of radius $\delta$ at $v$, then $p_v(e)=\lim_{\delta\downarrow 0}\mathbb{P}_v[X(\tau_\delta)\in E_e]$. \end{remark} \section{Diffusions with killing} Let $U^\epsilon$ be a fat-graph domain with skeleton $\Gamma$. The terminology and assumptions of Sections \ref{fatgraph} and \ref{fattothin} remain in force. We introduce a function $r(x)$ which represents the {\em rate of killing} of a diffusion in $U^\epsilon$ or in $\Gamma$. This function is assumed to depend on a parameter $h>0$ whose role will become clear later; roughly, $r^h$ will ``collapse'' to a vertex condition when $h$ goes to $0$. But for the moment we suppress $h$ in order to simplify the notation. Thus, let $r:\mathbb{R}^d\rightarrow [0,\infty)$ be a non-negative measurable function, representing the chemical reaction rate. We are interested in processes obtained from $X^\epsilon$ on $U^\epsilon$ (resp. $X$ on $\Gamma$) by killing using the rate function $r$. Loosely, killing with rate $r$ means forming new processes $Y^\epsilon$ (resp. $Y$) which behave like $X^\epsilon$ (resp. $X$) until $\int_0^t r(X^\epsilon_s)\,ds$ (resp.
$\int_0^t r(X_s)\,ds$) exceeds an independent exponential random time; after this time, they are sent to a cemetery state $\triangle$. In this situation, the Feynman-Kac formula says that $Y^\epsilon$ and $Y$ have extended generators $$L^rf= Lf-r(x)f \ \ \ \text{ and }\ \ \ L^r_\Gamma f=L_\Gamma f- r(x) f,$$ respectively, where $L$ and $L_\Gamma$ are as defined in Section \ref{fattothin} and the domains are the same. Furthermore, the semigroup of $Y^\epsilon$ acts on functions as \[ P_t^\epsilon f(x) = \mathbb{E}_x\left[\exp\left\{ -\int_0^t r(X^\epsilon_s)\,ds\right\} f(X_t^\epsilon)\right] \] with a similar statement holding for $Y$. In particular, $P_t^\epsilon1(x)=\mathbb{E}_x\left[\exp\left\{-\int_0^t r(X_s^\epsilon)\,ds\right\}\right]$ is the probability that $Y^\epsilon$ is still in $U^\epsilon$, i.e. still ``alive'', at time $t$. With this in mind, we define the {\em survival function} of the process at $x\in U^\epsilon$ as \begin{equation}\label{fk1}\psi^{\epsilon}(x):= \mathbb{E}_x \left[\exp\left\{-\int_0^{T_a^\epsilon} r\left(X^\epsilon_s\right)\, ds\right\} \right]. \end{equation} Here, $T_a^\epsilon$ is a random time which can be thought of as the time that $X^\epsilon$ is absorbed at the reactor exit, as we now explain. \begin{figure} \caption{\small A portion of the absorbing part of the boundary of $U^\epsilon$ and its limit in $\Gamma$. The process $X^\epsilon$ in $U^\epsilon$ is killed when it reaches the dotted line separating the regions indicated in light and dark shading. The limiting process $X$ on $\Gamma$ is killed when it reaches $a$. } \label{end} \end{figure} Let there be given a certain subset $\mathcal{V}_a=\{a_j\}\subset\mathcal{V}$ of degree $1$ vertices in $\Gamma$ to be regarded as the reactor exit. Write (with abuse of notation) $\partial U_a^\epsilon$ for the closure of the union of the corresponding juncture regions in $U^\epsilon$, and $\partial U_r^\epsilon$ for the rest of the boundary of $U^\epsilon$.
We call $\partial U_a^\epsilon$ the {\em absorbing} part of the boundary and $\partial U_r^\epsilon$ the {\em reflecting} part. Then $\partial U_a^\epsilon\downarrow \mathcal{V}_a$ as $\epsilon\downarrow 0$. (See Figure \ref{end}.) Let $T_a^\epsilon$ be the first hitting time of $X^\epsilon$ to $\partial U_a^\epsilon$, and $T_a$ the first hitting time of $X$ to $\mathcal{V}_a$. For now, we {\em assume} that $T_a^\epsilon$ and $T_a$ are finite a.s., and that $T_a^\epsilon \rightarrow T_a$ a.s. Under these circumstances, we have the following: \begin{proposition} \label{firstlimit} Suppose that $T_a^\epsilon$ and $T_a$ are a.s. finite and that $T_a^\epsilon\rightarrow T_a$ a.s. If $x\in \Gamma$ then $$\psi^{\epsilon,h}(x)\rightarrow \mathbb{E}_x\left[ \exp \left\{ -\int_0^{T_a} r^h\left(X_s\right)\, ds\right\}\right] $$ as $\epsilon\downarrow 0$, where $T_a$ is the hitting time to $\mathcal{V}_a$ of the limiting process $X$. \end{proposition} With some additional smoothness on $r^h$, this conclusion can also be recast as a boundary value problem, which will be useful for computations. The following conclusion is standard; see, for example, \cite{B98}. \begin{corollary} Suppose that $u$ is a bounded and continuous function on $\overline{U^\epsilon}$ that solves the equation $Lu-r^h u=0$ in $U^\epsilon$, together with the boundary conditions $$ \frac{\partial u}{\partial n}= 0 \text{ on } \partial U_r^\epsilon \text{ and } u=1 \text{ on } \partial U_a^\epsilon. $$ Then $u=\psi^{\epsilon, h}$, with $\psi^{\epsilon,h}$ defined as in \eqref{fk1}, and $$\psi^{\epsilon, h}(x)\rightarrow \psi^h(x)$$ for every $x\in \Gamma$, where $\psi^h$ is the solution to the corresponding ordinary differential equation on the graph $\Gamma$, namely $L_\Gamma u-r^hu=0$ with boundary condition $u=1$ on $\mathcal{V}_a$.
\end{corollary} \section{Collapsing active zones towards vertices of $\Gamma$} In the previous sections we described a conservative diffusion $X=(X_t)$ on a metric graph $\Gamma$ generated by an operator $L_\Gamma$ acting on a domain characterized by the vertex condition \eqref{pev}. This $X$ was obtained by collapsing the conservative diffusion $X^\epsilon$ on the fat graph $U^\epsilon$ down to $\Gamma$ as $\epsilon\downarrow 0$. We also described a nonconservative process $Y^h=(Y^h_t)$ obtained by killing $X$ using the rate function $r^h$. The killed process $Y^h$ has generator $L_\Gamma-r^h$ acting on the same domain. The function $r^h$ represents the rate of chemical activity on the active zones. Now, we wish to collapse the active zones, i.e. the regions in $\Gamma$ where $r^h$ is positive, to a collection of vertices, as $h\downarrow 0$. We have in mind that all the chemical activity is concentrated on the active vertices as $h\downarrow0$, while the killing rate is increased towards $\infty$ in such a way that the overall effect is the same in the limit. For concreteness, we assume that the operator $L_\Gamma$ acts as $\mathcal{D}\frac{d^2}{d y_e^2}$ on each edge $E_e$, where $\mathcal{D}$ is the diffusion coefficient; also, that the active regions are balls $B(v,h\delta)$ centered at some of the vertices with radius $h\delta$, where $\delta>0$ is a fixed number with the units of distance, and $h>0$ is a dimensionless parameter. Thus, each $B(v,h\delta)$ is a star-shaped neighborhood of $v$ consisting of $v$ and a union of segments of edges of length $h\delta$ for each edge issuing from $v$. The killing rate function $r^h(x)$ will then be assumed to have the form \begin{equation}\label{rateh}r^h(x)=\sum_{v\in \mathcal{V}}\frac{k_v}h \mathbbm{1}_{B(v,h\delta)}(x)\end{equation} where $k_v$ is either $k$ or $0$ depending on whether that vertex is to be considered active or not. 
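The normalization $k_v/h$ in \eqref{rateh} is exactly what keeps the total chemical activity of each active zone fixed as it shrinks: along a line through an active vertex $v$, $\int r^h\,dx = 2k\delta$ for every $h$. A quick numerical check of this invariance (one-dimensional, with illustrative constants):

```python
# Check that the mass of r^h(x) = (k/h) * 1_{B(v, h*delta)}(x) along a line
# through the active vertex v equals 2*k*delta, independently of h.
k, delta, v = 3.0, 0.05, 0.0

def r_h(x, h):
    return k / h if abs(x - v) < h * delta else 0.0

def mass(h, n=200000, lo=-1.0, hi=1.0):
    dx = (hi - lo) / n               # midpoint-rule integration
    return sum(r_h(lo + (i + 0.5) * dx, h) * dx for i in range(n))

masses = [mass(h) for h in (1.0, 0.5, 0.1)]  # each is close to 2*k*delta = 0.3
```

The integration error comes only from the two cells straddling the boundary of $B(v,h\delta)$, so the computed masses agree with $2k\delta$ up to the mesh width times the rate.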
It is known (as explained in Remark 2.5 of \cite{FS00}) that, as $h\downarrow 0$, $$\mathbb{E}_x\left[ \exp \left\{ -\int_0^{T_a} r^h\left(X_s\right)\, ds\right\}\right] \rightarrow \mathbb{E}_x\left[ \exp\left\{-\sum_{v\in \mathcal{V}} \kappa_v \ell_v(T_a)\right\}\right],$$ where $\kappa_v= k_v\delta/\mathcal{D}$ and $\ell_v(T_a)$ is the semimartingale local time (as defined in \cite{FS00}) at $v$ evaluated at the hitting time $T_a$. Write $\mathcal{C}$ for the set of active vertices in $\mathcal{V}$, and let $\ell_\mathcal{C}(t)$ denote the local time accumulated at $\mathcal{C}$ up to time $t$. In other words, $\ell_\mathcal{C}(t)$ is the sum of the $\ell_v(t)$ for which $k_v$ is not zero. Then the limit in the above expression becomes $\mathbb{E}_x\left[\exp\left\{ -\kappa \ell_\mathcal{C}(T_a) \right\}\right]$ where $\kappa=k\delta/\mathcal{D}$. Also, the killed processes $Y^h$ converge to a process $Y$ which is obtained from $X$ as follows: run $X$ until $\kappa\ell_\mathcal{C}(t)$ exceeds an independent rate $1$ exponential; after this time, send $X$ to the cemetery state $\triangle$. In other words, kill $X$ using the local time $\ell_\mathcal{C}(t)$ rather than the integral $\int_0^t r^h(X_s)\,ds$. The new process $Y$ then has a generator which is defined in the same way as $L_\Gamma$ for functions on $\Gamma^\circ$. However, the domain of $Y$'s generator is characterized by a different vertex condition, as explained in the following: \begin{proposition} Let $r^{\epsilon,h}(x)$ be positive bounded functions on $U^\epsilon$ with the property that $$\lim_{\epsilon\downarrow 0} r^{\epsilon,h}(x)=r^h(x)$$ for all $x\in\Gamma$, where $r^h(x)$ is defined at \eqref{rateh}.
Let $\psi^{\epsilon,h}(x)$ and $\psi(x)$ be the survival functions associated with $Y^{\epsilon,h}$ and $Y$, respectively; that is, \begin{align*}\psi^{\epsilon,h}(x)&=\mathbb{E}_x\left[ \exp\left\{ -\int_0^{T_a^\epsilon} r^{\epsilon,h}(X^\epsilon_s)\,ds\right\}\right],\\ \psi(x) &= \mathbb{E}_x\left[ \exp\left\{ -\kappa\ell_\mathcal{C}(T_a)\right\}\right]. \end{align*}If the conditions of Proposition \ref{firstlimit} are met, then $$\psi(x)=\lim_{h\rightarrow 0}\lim_{\epsilon\rightarrow0} \psi^{\epsilon, h}(x)$$ where $\kappa=k\delta/\mathcal{D}$, as defined above. Furthermore, the generator of the limiting process has a domain characterized by the vertex condition \begin{equation}\label{vcond}\sum_{e:s(e)=v} p_v(e) (D_eu)(v)=\kappa_v u(v) \end{equation} where $\kappa_v=\kappa$ if $v\in\mathcal{C}$ and $\kappa_v=0$ if $v\in\mathcal{V}\setminus\mathcal{C}$.\end{proposition} \begin{proof}[Sketch of proof] The limit statement has already been discussed. As for the vertex condition, we will show that a function in the domain of $Y$'s generator must satisfy \eqref{vcond}. Thus, let $(\hat{L},D(\hat{L}))$ be the generator of $Y$. Fix an active vertex $v\in\mathcal{C}$. Let $U_\delta=B(v,\delta)$ and let $\hat{\tau}_\delta$ be the first exit time of $Y_t$ from $U_\delta$. Also, write $\hat{\mathbb{E}}_v$ for expectation with respect to the law of $Y_t$ started at $v$. If $F\in D(\hat{L})$ then Dynkin's formula (see \cite{RW1} Ch. 3) reads: \[ (\hat{L}F)(v)= \lim_{\delta\downarrow 0} \frac{\hat{\mathbb{E}}_v[F(Y(\hat{\tau}_\delta))] - F(v)}{\hat{\mathbb{E}}_v[\hat{\tau}_\delta]}.\] Let $\mathbf{e}$ be an independent rate $1$ exponential random variable (we can always enlarge the probability space to accommodate such a variable) and set $\hat{\zeta}=\inf\{ t>0 : \kappa\ell_\mathcal{C}(t) > \mathbf{e}\}$. Thus, $\hat{\zeta}$ is the time that $Y_t$ jumps from $\Gamma$ to the cemetery state $\triangle$.
Evidently $\hat{\tau}_\delta=\tau_\delta\wedge\hat{\zeta}$, where $\tau_\delta$ is the first exit time of $X_t$ from $U_\delta$. In the denominator, then, we have \[ \hat{\mathbb{E}}_v[\hat{\tau}_\delta] \sim \mathbb{E}_v[\tau_\delta] = \frac{\delta^2}{2\mathcal{D}}\] as $\delta\downarrow 0$. On the other hand, $Y_t$ leaves $U_\delta$ either through a point $(\delta,e)$ (using local coordinates) if $\tau_\delta<\hat{\zeta}$ or by a jump to $\triangle$ if $\hat{\zeta}\leqslant\tau_\delta$. Since $F(\triangle)=0$ in the latter case, we have in the numerator, in view of the Remark under Theorem \ref{diffusionlimit}, \begin{align*} \hat{\mathbb{E}}_v[F(Y(\hat{\tau}_\delta))]-F(v) &= \sum_e F_e(\delta)\,\mathbb{P}_v[X(\tau_\delta)\in E_e,\, \tau_\delta<\hat{\zeta}]-F(v) \\ &= \sum_e F_e(\delta)p_v(e)\mathbb{P}_v[\tau_\delta<\hat{\zeta}]-F(v)+o(\delta)\end{align*} Now $(\tau_\delta<\hat{\zeta})=(\kappa\ell_\mathcal{C}(\tau_\delta)\leqslant\mathbf{e})=(\kappa\ell_v(\tau_\delta)\leqslant \mathbf{e})$ $\mathbb{P}_v$-a.s., because starting from $v$, only the local time at $v$ can contribute to $\ell_\mathcal{C}(t)$ before $\tau_\delta$. Also, $\ell_v(\tau_\delta)$ has an exponential distribution under $\mathbb{P}_v$, as explained below; we can compute its $\mathbb{E}_v$-mean as $\mathbb{E}_v[\ell_v(\tau_\delta)]=\delta$ using the methods in Section \ref{examples}. Therefore $\mathbb{P}_v[\kappa\ell_v(\tau_\delta)\leqslant \mathbf{e}]$ is the probability that a mean $\kappa\delta$ exponential is less than an independent mean $1$ exponential; this is easily found to be $\frac{1}{1+\kappa\delta}$.
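As a quick numerical sanity check on the exponential-race probability just computed (not part of the proof), the following Monte Carlo sketch, with an arbitrary illustrative value standing in for $\kappa\delta$, confirms that a mean-$a$ exponential is smaller than an independent mean-$1$ exponential with probability $1/(1+a)$:

```python
import random

random.seed(1)
a = 0.3            # stands in for kappa*delta; arbitrary illustrative value
n = 200_000

# P[X <= e] where X ~ Exp(mean a) and e ~ Exp(mean 1), independent.
# random.expovariate takes the rate, i.e. 1/mean.
hits = sum(random.expovariate(1 / a) <= random.expovariate(1.0) for _ in range(n))
estimate = hits / n

assert abs(estimate - 1 / (1 + a)) < 0.01   # 1/(1+a) ~ 0.769 for a = 0.3
```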
Therefore the numerator equals \begin{align*} \frac{1}{1+\kappa\delta}\sum_e F_e(\delta)p_v(e) - F(v)+o(\delta) &= \frac {1}{1+\kappa\delta}\sum_e p_v(e)\left[F_e(\delta)-F(v)-\kappa\delta F(v)\right]+o(\delta) \\ &= \frac{1}{1+\kappa\delta}\sum_e p_v(e)\left[(D_eF)(v)-\kappa F(v)\right]\delta+o(\delta)\end{align*} Existence and finiteness of the limit in Dynkin's formula therefore requires that $\sum_e p_v(e)(D_e F)(v)-\kappa F(v)=0$ if $F\in D(\hat{L})$, which is precisely \eqref{vcond}. Similar reasoning away from the vertex shows that $\hat{L}$ has to act as $L_\Gamma$ there. For the rest of the details see \cite{KPS12}. \end{proof} \begin{remark} The upshot of these considerations is that, in the limit as $\epsilon\downarrow 0$ and then $h\downarrow0$, we obtain an expression $\mathbb{E}_x[\exp\{-\kappa\ell_\mathcal{C}(T_a)\}]$ for the survival probability. Therefore the problem of finding explicit formulas for reaction probabilities reduces to the problem of evaluating $\mathbb{E}_x$-moments of $\ell_\mathcal{C}(T_a)$. There is a method for dealing with this issue in considerable generality, called Kac's moment formula, which we explain in the next section. \end{remark} \section{Explicit formulas for reaction probability} We now focus on the dependence on $\kappa$ of the conversion probability $\alpha_\kappa(x)=1-\psi_\kappa(x)$ for the reaction-diffusion process on a metric graph $\Gamma$, where the active sites consist of a set of vertices $\mathcal{C}\subset \mathcal{V}$. In this case, it is possible to obtain reasonably explicit formulas for $\alpha_\kappa(x)$ by simple arguments using the strong Markov property. First we consider the case where $\mathcal{C}$ is a single vertex: \begin{proposition}\label{simpleprop} Suppose $\mathcal{C}=\{c\}$ and $T_a=\inf\left\{t>0\,:\,X_t\in\mathcal{V}_a\right\}$. Set $\lambda:=\mathbb{E}_{c}\left[\ell_c(T_a)\right]$, the expected local time accumulated at $c$ up to the hitting time $T_a$, starting from $c$.
Then \begin{equation}\label{simple} {\alpha_\kappa(x)}={\alpha_\infty(x)}\frac{\lambda \kappa}{1+\lambda\kappa}.\end{equation} \end{proposition} \begin{proof} Write $T_c:=\inf \{t:X_t = c\}$, the first time that $X_t$ hits the active vertex. Since $\ell_c(T_a)=0$ on the event $(T_c \geqslant T_a)$, we have \begin{equation}\label{pxc} \begin{aligned} \alpha_\kappa(x)&=\mathbb{E}_{x}\left[\left(1-\exp\left\{ -\kappa \ell_c(T_a)\right\} \right)\mathbbm{1}_{(T_c<T_a)}\right]\\ &=\alpha_\infty(x)-\mathbb{E}_{x}\left[\exp\left\{ -\kappa \ell_c(T_a)\right\} \mathbbm{1}_{(T_c<T_a)}\right] \end{aligned} \end{equation} where $\alpha_\infty(x)=\mathbb{E}_{x}[\mathbbm{1}_{(T_c<T_a)}]$ is the probability that $X_{t}$ hits $c$ before $\mathcal{V}_a$. Also, since $\ell_c(t)$ does not start increasing until $T_c$, we have that $\ell_c(T_a)=\ell_c(T_a)\circ\theta_{T_c}$ on $(T_c<T_a)$. Using the strong Markov property, and then the fact that $X(T_c)=c$ a.s., we find that the right-most term in the second line of \eqref{pxc} equals: \begin{equation*}\begin{aligned} \mathbb{E}_{x}\left[\exp\left\{-\kappa\ell_c(T_a)\circ\theta_{T_c}\right\}\mathbbm{1}_{(T_c<T_a)}\right] &= \mathbb{E}_x\left[\mathbb{E}_{X(T_c)}[\exp\{-\kappa\ell_c(T_a)\}]\mathbbm{1}_{(T_c<T_a)}\right]\\ &= \mathbb{E}_c[\exp\{-\kappa\ell_c(T_a)\}]\mathbb{E}_x[\mathbbm{1}_{(T_c<T_a)}] \end{aligned}\end{equation*} We arrive at \begin{equation}\label{prefactor}\alpha_\kappa(x)=\alpha_\infty(x)\left(1-\mathbb{E}_c\left[\exp\left\{ -\kappa \ell_c(T_a)\right\} \right]\right).\end{equation} Now, we use the fact that {\em $\ell_c(T_a)$ must have an exponential distribution under $\mathbb{P}_c$. } This follows from a simple argument which can be found on p. 106 of \cite{MR06}. Writing $\lambda:=\mathbb{E}_c\left[\ell_c(T_a)\right]$ for the mean, and then explicitly computing the Laplace transform, we obtain the result $$ \mathbb{E}_c\left[\exp\left\{ -\kappa \ell_c(T_a) \right\} \right]=\frac{1}{1+\lambda\kappa}. 
$$ Inserting this last expression into \eqref{prefactor} establishes \eqref{simple}. \end{proof} When $\mathcal{C}$ consists of more than one point, the survival function takes the form \[ \psi_\kappa(x) = \mathbb{E}_x\left[ \exp\left\{ -\kappa\sum_{c\in\mathcal{C}} \ell_c(T_a)\right\}\right] \] where $\ell_c(T_a)$ is the local time at the vertex $c$ up to the exit time $T_a$. In this case, it is still possible to obtain an explicit formula for $\psi_\kappa(x)$ (hence $\alpha_\kappa(x)$) because Kac's moment formula can be used to evaluate the expectation appearing in $\psi_\kappa(x)$. Specifically, we can use the following corollary of Kac's moment formula, which can be found in Section 6 of \cite{FP99}: \begin{proposition} Write $\mathcal{C}=\{c_j\}_{j=1}^C$ with $C=\#\mathcal{C}$ and define the Green's function by \[ G_{ij} := g(c_i,c_j)=\mathbb{E}_{c_i}\left[ \ell_{c_j}(T_a) \right]. \] Let $\kappa_i=\kappa(c_i)$ be a positive function on $\mathcal{C}$. Then the function \[ \psi_\kappa(c_i) := \mathbb{E}_{c_i}\left[\exp\{-\sum_{j=1}^C \kappa_j \ell_{c_j}(T_a)\}\right] \] is the unique solution to the system of equations \[ \psi(c_i) = 1 - \sum_{j=1}^C G_{ij}\kappa_j \psi(c_j).\] In other words, if $M_\kappa$ is the diagonal matrix with the entries of $\kappa_j$ along the main diagonal, then \begin{equation}\label{conversion1} \psi_\kappa = (I + GM_\kappa)^{-1}\mathbbm{1}_\mathcal{C}.\end{equation} \end{proposition} \begin{remark} The expression \eqref{conversion1} for the survival function on $\mathcal{C}$ can be recast as a formula involving determinants by using Cramer's rule. To wit, let $G^{(j)}$ denote the matrix obtained from $G$ by subtracting row $j$ from every row. (So, $G^{(j)}$ has a row of 0's in the $j$-th row.) Then \begin{equation}\label{conversion2} \psi_\kappa(c_j) = \frac{\det(I+G^{(j)}M_\kappa)}{\det(I+GM_\kappa)}.\end{equation} See \cite{MR06} Ch. 2 for additional details about \eqref{conversion2}. 
\end{remark} Now we can repeat the same argument that was given in Proposition \ref{simpleprop}, replacing the single point $c$ with the set $\mathcal{C}$, so that $T_c$ becomes $T_\mathcal{C}=\inf\{t>0 : X_t\in\mathcal{C}\}$. Then \eqref{pxc} takes the form \[ \alpha_\kappa(x)=\alpha_\infty(x)-\mathbb{E}_{x}\left[\exp\left\{ -\kappa \ell_\mathcal{C}(T_a)\right\}\circ \theta_{T_\mathcal{C}} \mathbbm{1}_{(T_\mathcal{C}<T_a)}\right] \] where we abbreviate $\sum_{j=1}^C\ell_{c_j}(T_a)$ as $\ell_\mathcal{C}(T_a)$. To deal with the expectation in the last display, split the event $(T_\mathcal{C}<T_a)$ as $\bigcup_{j=1}^C (X(T_\mathcal{C})=c_j, T_\mathcal{C}<T_a)$, and then write \[ p_j(x)=\mathbb{P}_x[X(T_\mathcal{C})=c_j\,|\, T_\mathcal{C}<T_a].\] Thus, $p_j(x)$ is the probability of starting from $x$ and first hitting $\mathcal{C}$ at $c_j$, conditional on hitting $\mathcal{C}$ at all. We get \begin{equation*} \begin{aligned} \mathbb{E}_{x}\left[\exp\left\{ -\kappa \ell_\mathcal{C}(T_a)\right\}\circ \theta_{T_\mathcal{C}} \mathbbm{1}_{(T_\mathcal{C}<T_a)}\right] &= \sum_{j=1}^C \mathbb{P}_x[X(T_\mathcal{C})=c_j,\,T_\mathcal{C}<T_a]\mathbb{E}_{c_j} \left[ \exp\{-\kappa\ell_\mathcal{C}(T_a)\}\right] \\ &= \sum_{j=1}^C p_j(x) \mathbb{P}_x[T_{\mathcal{C}}<T_a]\mathbb{E}_{c_j}[\exp\{-\kappa\ell_\mathcal{C}(T_a)\}] \end{aligned}\end{equation*} Since $\mathbb{P}_x[T_\mathcal{C}<T_a]=\alpha_\infty(x)$, we can apply \eqref{conversion2} with all the $\kappa(c_j)$ equal to the same constant $\kappa$ to express this last equation in the simple form \begin{equation}\label{conversion3} \alpha_\kappa(x) = \alpha_\infty(x)\left[ 1 - \sum_{j=1}^C p_j(x) \frac{\det\left(I+\kappa G^{(j)}\right)}{\det(I+\kappa G)}\right].\end{equation} In the numerator, the determinant is a polynomial with degree $\leqslant C-1$, because $\kappa$ appears in only $C-1$ rows of the matrix. On the other hand the denominator is a polynomial with degree $\leqslant C$.
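The agreement between the matrix form \eqref{conversion1} and the determinant form \eqref{conversion2} is easy to verify numerically. The following NumPy sketch uses a made-up Green's matrix (illustrative values only, not computed from any particular graph) for $C=3$ active vertices:

```python
import numpy as np

# Illustrative Green's matrix G_ij = E_{c_i}[ ell_{c_j}(T_a) ] (made-up values)
# and killing rates kappa_j at the three active vertices.
G = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.5]])
kappa = np.array([0.7, 1.3, 0.4])
C = G.shape[0]
M_kappa = np.diag(kappa)

# Survival probabilities on the active set via (conversion1):
# psi = (I + G M_kappa)^{-1} 1.
psi = np.linalg.solve(np.eye(C) + G @ M_kappa, np.ones(C))

# The same via the determinant form (conversion2); G^{(j)} is G with row j
# subtracted from every row, so its j-th row is zero.
for j in range(C):
    Gj = G - G[j]
    psi_det = (np.linalg.det(np.eye(C) + Gj @ M_kappa)
               / np.linalg.det(np.eye(C) + G @ M_kappa))
    assert abs(psi[j] - psi_det) < 1e-9
```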
Furthermore, the coefficients depend only on the $G_{ij}$, which in turn depend only on the geometric properties of $\Gamma$. This explains the claim made in Theorem \ref{theorem1}. \section{Remark concerning the examples} \label{examples} In this section we present two slightly different approaches to computing the reaction probabilities. In both subsections, we let $X=(X_t)$ be a diffusion process on $\Gamma$ as described in section \ref{fatgraph}. For simplicity, we assume that $X$ is a Brownian motion with diffusion coefficient $\mathcal{D}$, so that the operator $L_\Gamma$ acts as $L_\Gamma f(x) = \mathcal{D}\frac{d^2}{dx^2} f_e(x)$ for $x\in E_e^\circ$. This $X$ is a conservative diffusion process (i.e. without killing) which corresponds to the limit of a conservative diffusion process on $U^\epsilon$ as $\epsilon\downarrow 0$. \subsection{Method 1} Previously, we explained how Kac's moment formula can be used to express the reaction probabilities in terms of certain polynomials involving the coefficients $G_{ij} = \mathbb{E}_{c_i}[\ell_{c_j}(T_a)]$. What remains, then, is to compute the $G_{ij}$'s from the graph and diffusion coefficients. To this end we apply the graph stochastic calculus from \cite{FS00}. Then we have, for each $v\in\mathcal{V}$, a process $\ell_v(t)$ which is adapted to the filtration of $X$, continuous, increasing, and increases only on $\{t \, : \, X_t = v\}$. As explained in \cite{FS00}, $\ell_v(t)$ can be recovered from $X$ by the formula \[ \ell_v(t) = \frac{1}{\mathcal{D}} \lim_{\delta\downarrow 0} \frac{1}{\delta} \int_0^t \mathbbm{1}_{B(v,\delta)}(X_s)\,ds \] where $B(v,\delta)$ is a ball around $v$ of radius $\delta$. Also, let $C^2_b(\Gamma)$ be the set of $f\in C(\Gamma)$ with two continuous derivatives in $\Gamma^\circ$. (The derivatives need not extend continuously to the vertices.) Then \cite{FS00} gives the following version of the Ito-Tanaka formula for $\Gamma$: \begin{theorem} Let $F\in C^2_b(\Gamma)$.
Define, for each $v\in\mathcal{V}$, \[ \rho_v(F) = \sum_{e\in\mathcal{E} : s(e) = v} p_v(e)(D_e F)(v).\] Then \[ F(X_t) - F(X_0) = M_t + \int_0^t (L_\Gamma F)(X_s)\,ds + \sum_{v\in\mathcal{V}} \rho_v(F) \ell_v(t) \] where $M_t$ is a continuous local martingale. \end{theorem} Ito's formula is all we need to find the $G_{ij}$. Namely, to find $\mathbb{E}_{c_i}[\ell_{c_j}(T_a)]$, we must find a function $F\in C^2_b(\Gamma)$ for which (i) $L_\Gamma F=0$ on $\Gamma^\circ$, (ii) $F(v)=0$ for $v\in\mathcal{V}_a$, and (iii) $\rho_v(F)=0$ except for $v=c_i,c_j$. In this case, (i) means that $F$ is affine on each edge, so that $F_e(y)=a_e y+b_e$ for certain constants $a_e$ and $b_e$ and $0\leqslant y\leqslant |e|$, where we abbreviate $|E_e|=|e|$ for the length of the segment $E_e$. Choosing an orientation arbitrarily for each $e$, this gives a total of $2E$ constants, with $E=\#\mathcal{E}$, which must be determined compatibly with (i)-(iii). This reduces the problem to a discrete one on the underlying combinatorial graph. This method is best illustrated by working a few examples. In the examples, when it is not necessary to give names to the edges, we write $[v_1,v_2]$ for the edge with $s(e)=v_1$ and $t(e)=v_2$. \begin{example} Consider the graph shown in the upper right of Figure \ref{diagrams}. Write $c$ for the active vertex in the middle, $a$ for the absorbing vertex at the far end, and $v_1,\ldots,v_n$ for the remaining inert vertices. For simplicity, we assume that the $p_c([c,v_j])$'s are all equal to $\frac{1}{n}$, i.e. that the relative radii are all equal. Starting from any $v_j$, the probability of hitting $c$ before $a$ is 1. So the conversion probability will have the form \[\alpha_\kappa(v_j) = \frac{\lambda\kappa}{1+\lambda\kappa}\] where $\lambda=\mathbb{E}_c[\ell_c(T_a)]$. Orient the various edges so that $c$ is the origin. Then we seek a function $F$ which is linear on each segment and continuous on $\Gamma$.
The condition $\rho_{v_j}(F)=0$ forces $F$ to be constant on $[c,v_j]$, and we can choose this constant to be $l$, the length of $[c,a]$. The condition $F(a)=0$ means that $F$ is linear with slope $-1$ on $[c,a]$. In other words, $\rho_c(F)=\frac{-1}{n}$. Taking expectations in Ito's formula, \[ 0 - l = 0+\frac{-1}{n}\cdot \lambda \quad\Rightarrow \quad \lambda = nl.\] From this we obtain the formula given in item 2 in the list of examples after Theorem \ref{theorem1}: $$\alpha_\kappa(v_j)=\frac{nl\kappa}{1+nl\kappa}.$$ \end{example} \begin{example} In a similar spirit, consider the second figure in the left column of Figure \ref{diagrams}. Let $v$ be the inactive vertex at the left, $c_1,\ldots,c_m$ the active vertices, and $a$ the absorbing vertex at the right. Starting from $v$, $X$ hits $\mathcal{C}$ with probability $1$, and with probability $1$ this first hit occurs at $c_1$. Thus $p_1(v)=1$ and $p_j(v)=0$ for the other $c_j$, and formula \eqref{conversion3} simplifies to \[ \alpha_\kappa(v) =1 - \frac{\det(I + \kappa G^{(1)})}{\det(I+\kappa G)}.\] To simplify, we again assume that $p_{c_j}(e)=1/2$ for each of the two edges meeting at $c_j$, and $p_v([v,c_1])=1$. Now, to compute $G_{ij}$, first write $l_j$ for the length of the segment to the right of $c_j$. Then set $L_j=\sum_{i=j}^m l_i$. Thus, $L_k$ is the sum of the lengths of all the segments to the right of $c_k$. Regarding $\Gamma$ as a single straight line, take $F$ to be the function which is constantly $L_j$ on $[v,c_j]$ and then decreases linearly with slope $-1$ on $[c_j,a]$, so that $F(a)=0$ and $\rho_{c_j}(F)=-1/2$. Using this $F$ in Ito's formula shows that if $i\leqslant j$ then $G_{ij}=2L_j$, and if $i > j$ then $G_{ij} = 2L_i$. In particular, the matrix $G^{(1)}$ must be strictly triangular, so that the determinant in the numerator of $\alpha_\kappa(v)$ is $1$.
Therefore we can write \[ \alpha_\kappa(v) = \frac{\det(I+\kappa G)-1}{\det(I+\kappa G)}.\] \end{example} \subsection{Method 2} A different approach is to work instead with the process $Y=(Y_t)$ obtained from $X$ by sending it to $\triangle$ at $\zeta$, the first time that $\kappa \ell_\mathcal{C}(t)$ exceeds an independent rate 1 exponential. This new process $Y$ is a non-conservative diffusion whose generator still acts as $L_\Gamma$, but whose domain now consists of functions $F\in C^2(\Gamma)$ satisfying these vertex conditions: \begin{equation}\label{vertices2} \begin{aligned} \sum_{e:s(e)=c} p_c(e)(D_e F)(c)&=\kappa F(c)\quad &c&\in\mathcal{C} \\ \sum_{e:s(e)=v} p_v(e)(D_eF)(v) &= 0 \quad &v&\in\mathcal{V}\setminus[\mathcal{C}\cup\mathcal{V}_a] \end{aligned}\end{equation} It follows that if $F\in C^2_b(\Gamma)$ satisfies $L_\Gamma F(x)=0$ in $\Gamma^\circ$, $F(a)=1$ for $a\in \mathcal{V}_a$, together with the vertex conditions \eqref{vertices2}, then $F(Y_t)$ is a martingale. By optional stopping, \[ F(v) = \mathbb{E}_v [ F( Y(T_a) ) ] = \mathbb{P}_v[ T_a < \zeta ] \] which means that $F(v)=\psi_\kappa(v)$, i.e. the survival function evaluated at $v$. Again we assume that $p_v(e)=1/\mathrm{deg}(v)$ whenever $v=s(e)$; equivalently, that all $r_e$'s are equal. Since $F$ must be affine on each edge, it is determined by its values on $\mathcal{V}$, and its derivative $DF$ can be regarded as a function on $\mathcal{E}$: \[ DF(e) = \frac{F(t(e))-F(s(e))}{|e|}.\] Therefore we can couch the problem of determining $F$ as a kind of discrete boundary problem on the combinatorial graph $\mathcal{G}$ rather than on the metric graph $\Gamma$. For this purpose, regard the vertices in $\mathcal{V}_a$ as the {\em boundary} of $\mathcal{G}$, and the remaining vertices as the {\em interior} of $\mathcal{G}$.
Then $F$ is determined by the equations \begin{equation} \sum_{e: s(e)=v}|e|^{-1}F(t(e))=\left(\mathrm{deg}(v)\,\kappa_v+\sum_{e: s(e)=v}|e|^{-1}\right)F(v)\label{keyequation} \end{equation} for interior vertices $v$, and by $F(v)=1$ for boundary vertices $v\in \mathcal{V}_a$. Reaction probabilities for the examples given in the introduction are easily obtained by solving the above system of linear equations. \begin{remark} Equation \eqref{keyequation} can be regarded as a kind of discrete Feynman-Kac equation involving the discrete Laplacian on $\mathcal{G}$. See \cite{bk13} for more information. \end{remark} \end{document}
\begin{document} \title{Fast and simple characterization of a photon pair source} \author{F.~Bussi\`eres$^{1,2,\dag}$, J.~A.~Slater$^{2,\dag}$, N.~Godbout$^{1}$, and W.~Tittel$^{2}$} \address{$^{1}$ Laboratoire des fibres optiques, D\'epartement de g\'enie physique, \'Ecole Polytechnique de Montr\'eal, CP~6079, succ. Centre-Ville, Montr\'eal (Qu\'ebec), H3C~3A7 Canada. \\ $^{2}$ Institute for Quantum Information Science, Department of Physics and Astronomy, University of Calgary, Calgary (Alberta), T2N~1N4 Canada.} \email{[email protected]} \begin{abstract} We present an exact model of the detection statistics of a probabilistic source of photon pairs, from which we demonstrate a fast, simple and precise method to measure the source's brightness and photon channel transmissions. We measure such properties for a source based on spontaneous parametric downconversion in a periodically poled LiNbO$_3$ crystal producing pairs at 810 and 1550~nm wavelengths. We further validate the model by comparing the predicted and measured values for the $g^{(2)}(0)$ of a heralded single photon source over a wide range of brightness values. Our model is of particular use for monitoring and tuning the brightness on demand as required for various quantum communication applications. We comment on its applicability to sources involving spectral and/or spatial filtering. \end{abstract} \ocis{(270.0270) Quantum optics; (270.5290) Photon statistics; (190.4223) Nonlinear wave mixing; (270.5565) Quantum communications.} \begin{thebibliography}{99} \bibitem{BB84} C.~H.~Bennett and G.~Brassard, ``Quantum cryptography: public key distribution and coin tossing,'' in {\it Proceedings of the IEEE International Conference on Computers, Systems \& Signal Processing} (Institute of Electrical and Electronics Engineers, Bangalore, India, 1984), pp.~175--179. \bibitem{Ekert91} A.~Ekert, ``Quantum {C}ryptography {B}ased on {B}ell's {T}heorem,'' \prl {\bf 67,} 661--663 (1991). \bibitem{BDCZ98} H.-J.
Briegel, W. D\"{u}r, J. I. Cirac and P. Zoller, ``Quantum {R}epeaters: the {R}ole of {I}mperfect {L}ocal {O}perations in {Q}uantum {C}ommunication,'' \prl {\bf 81}, 5932--5935 (1998). \bibitem{DLCZ01} L.-M. Duan, M. D. Lukin, J. I. Cirac and P. Zoller, ``Long-distance quantum communication with atomic ensembles and linear optics,'' \nat {\bf 414} 413--418 (2001). \bibitem{BW70} D.~C.~Burnham and D.~L.~Weinberg, ``Observation of Simultaneity in Parametric Production of Optical Photon Pairs,'' \prl {\bf 25}, 84--87 (1970). \bibitem{FVSK02} M.~Fiorentino, P.~L.~Voss, J.~E.~Sharping, and P.~Kumar, ``All-fiber photon-pair source for quantum communications,'' IEEE Phot. Technol. Lett. {\bf 14}, 983--985 (2002). \bibitem{K03} A.~Kuzmich, W.~P.~Bowen, A.~D.~Boozer, A.~Boca, C.~W.~Chou, L.-M.~Duan, H.~J.~Kimble, ``Generation of nonclassical photon pairs for scalable quantum communication with atomic ensembles," \nat {\bf 423}, 731--734 (2003). \bibitem{STTV07} J.~Simon, H.~Tanji, J.~K.~Thompson, V.~Vuletic, ``Interfacing Collective Atomic Excitations and Single Photons," \prl {\bf 98} 183601 (2007). \bibitem{TW01} W.~Tittel and G.~Weihs, ``Photonic {E}ntanglement for {F}undamental {T}ests and {Q}uantum {C}ommunication,'' Quantum Inf. and Comp.~{\bf 1}, 3--56 (2001). \bibitem{HM86} C.~K.~Hong and L.~Mandel, ``Experimental realization of a localized one-photon state,'' \prl {\bf 56}, 58--60 (1986). \bibitem{MFL07} X.~Ma, C.-H.~Fred~Fung and H.-K.~Lo, ``Quantum key distribution with entangled photon sources,'' \pra {\bf 76}, 012307 (2007). \bibitem{WSY02} E.~Waks, C.~Santori and Y.~Yamamoto, ``Security aspects of quantum key distribution with sub-Poisson light,'' \pra {\bf 66}, 042315 (2002). \bibitem{Chen08} Q.~W.~Chen, G.~Xavier, M.~Swillo, T.~Zhang, S. Sauge, M.~Tengner, Z.-F.~Han, G.-C.~Guo, and A.~Karlsson, ``Experimental Decoy-State Quantum Key Distribution with a Sub-Poissionian Heralded Single-Photon Source,'' \prl {\bf 100}, 090501 (2008). 
\bibitem{Adachi07} Y.~Adachi, T.~Yamamoto, M.~Koashi, and N.~Imoto, ``Simple and Efficient Quantum Key Distribution with Parametric Down-Conversion,'' \prl {\bf 99}, 180503 (2007). \bibitem{MS07} W.~Mauerer, C.~Silberhorn, ``Quantum key distribution with passive decoy state selection,'' \pra {\bf 75}, 050305(R) (2007). \bibitem{RMTZG03} H.~de~Riedmatten, I.~Marcikic, W.~Tittel, H.~Zbinden, N.~Gisin, ``Quantum interference with photon pairs created in spatially separated sources,'' \pra {\bf 67}, 022301 (2003). \bibitem{TOS04} S.~Takeuchi, R.~Okamoto, and K.~Sasaki, ``High-Yield Single-Photon Source Using Gated Spontaneous Parametric Downconversion,'' \ao {\bf 43}, 5708--5711 (2004). \bibitem{OTS05} R.~Okamoto, S.~Takeuchi, and K.~Sasaki, ``Detailed analysis of a single-photon source using gated spontaneous parametric downconversion,'' \josab {\bf 22}, 2393--2401 (2005). \bibitem{TL07} M.~Tengner and D.~Ljunggren, ``Characterization of an asynchronous source of heralded single photons generated at a wavelength of 1550~nm,'' arXiv:0706.2985v1 [quant-ph] (2007). \bibitem{Mandel95} L.~Mandel and E.~Wolf, \textit{Optical coherence and quantum optics} (Cambridge University Press, 1995). \bibitem{MRTSZG02} I.~Marcikic, H.~de~Riedmatten, W.~Tittel, V.~Scarani, H.~Zbinden, N.~Gisin, ``Time-bin entangled qubits for quantum communication created by femtosecond pulses,'' \pra {\bf 66}, 062308 (2002). \bibitem{RSMATZG04} H.~de~Riedmatten, V.~Scarani, I.~Marcikic, A.~Ac\'in, W.~Tittel, H.~Zbinden, and N.~Gisin, ``Two independent photon pairs versus four-photon entangled states in parametric down conversion,'' \jmo {\bf 51}, 1637--1649 (2004). \bibitem{HBT56} R.~Hanbury Brown and R.~Q.~Twiss, ``A Test of a New Type of Stellar Interferometer on Sirius,'' \nat {\bf 178}, 1046--1048 (1956). \bibitem{thermalg2} For a thermal distribution the $\gt$ is higher by a factor of $2$: $\gt = 2\mu(2-\eh)$.
In the case of spectral and/or spatial correlations with a Poissonian source, as discussed in section~\ref{section:correlations}, $\gt = \mu(2/c - \eh)$. \bibitem{BM95} M.~Zukowski, A.~Zeilinger, H.~Weinfurter, ``Entangling Photons Radiated by Independent Pulsed Sources,'' in {\it Annals of the New York Academy of Sciences,} D.~M.~Greenberger, A.~Zeilinger, ed. (New York, 1995), pp. 91--102. \bibitem{F92} J.~D.~Franson, ``Nonlocal cancellation of dispersion,'' \pra {\bf 45}, 3126--3132 (1992). \bibitem{MLS+08} P.~J.~Mosley, J.~S.~Lundeen, B.~J.~Smith, P.~Wasylczyk, A.~B.~U'Ren, C.~Silberhorn, and I.~A.~Walmsley, ``Heralded Generation of Ultrafast Single Photons in Pure Quantum States,'' \prl {\bf 100}, 133601 (2008). \bibitem{BBR88} C.~H.~Bennett, G.~Brassard, and J.-M.~Robert, ``Privacy amplification by public discussion,'' SIAM J. Comput. \textbf{17}, 210--229 (1988). \bibitem{BBCM95} C.~H.~Bennett, G.~Brassard, C.~Cr\'epeau, and U.~M.~Maurer, ``Generalized privacy amplification,'' IEEE Trans. Inf. Theory \textbf{41}, 1915--1923 (1995). \end{thebibliography} \section{Introduction} Sources of photon pairs are an essential building block in implementations of Quantum Communication protocols. Examples include Quantum Key Distribution (QKD), which enables the unconditionally secure exchange of confidential messages~\cite{BB84,Ekert91}, and Quantum Repeaters, which are needed to break the distance barrier of QKD~\cite{BDCZ98,DLCZ01}. Photon pairs obtained from Spontaneous Parametric Downconversion (SPDC)~\cite{BW70} or Spontaneous Four-Wave-Mixing (SFWM)~\cite{FVSK02} in nonlinear materials, or from atomic ensembles~\cite{K03,STTV07}, can be used either to generate entangled photons by a careful arrangement of two downconversion paths~\cite{TW01}, or to create a single photon source by announcing the presence of one photon through the detection of the other (a Heralded Single Photon Source, or HSPS)~\cite{HM86}. All aforementioned sources are of probabilistic nature, i.e.
the number of emitted pairs per time unit follows a given statistical distribution such as a Poissonian or thermal distribution. In most applications, it is beneficial or even essential to know the mean number of photon pairs $\mu$ emitted per unit of time, a quantity that we shall refer to as the \textit{brightness}. For entanglement-based QKD, Ma~\textit{et al.} have shown that both the key generation rate and the maximum distance over which a secret key can be established can be maximized by properly tuning the brightness~\cite{MFL07}. Another example is the security of QKD based on HSPS, which relies on the ability of the sender to assess the photon statistics in a precise way~\cite{WSY02,Chen08,Adachi07,MS07}. Also, de~Riedmatten~\textit{et al.} have shown that the visibility in Bell-state measurements, which is a key element of proposed quantum repeaters, crucially depends on the brightness~\cite{RMTZG03}. Assessing the brightness of a source of photon pairs is a non-trivial task when one is limited to lossy channels and non-photon-number-resolving detectors. This problem can be solved provided one knows the exact value of the total transmission of all photon channels. However, evaluating the loss associated with coupling a single photon from free space to a single-mode fibre is not simple. One technique requires mode-matching a probe laser to the single photon mode, but this can be imprecise and impractical (see~\cite{TOS04,OTS05,TL07} as examples). The brightness can also be inferred from measurements of the second-order autocorrelation function, $\gt$~\cite{Mandel95}. However, as the time required for $\gt$ measurements depends on three-fold coincidence detection stemming from two simultaneously generated pairs, such measurements are time-consuming to implement (see~\cite{STTV07} and \cite{MRTSZG02} as examples).
Therefore, a method from which the brightness and the losses of the transmission lines can be determined with precision, speed and simplicity is necessary. In this work, we show how one can assess the brightness and the photon channel transmissions of a source of photon pairs by solely measuring single and two-fold coincidence detections stemming from photons belonging to one pair. This makes this method very fast and efficient. In section~\ref{section:model}, we present a model describing the detection statistics of any probabilistic source of photon pairs, and we show how the brightness and the losses of the channels can be assessed. Then, in section~\ref{section:results}, we describe an implementation of the proposed method and use it to predict the value of the autocorrelation function, $\gt$, of a HSPS for a wide range of brightness values. We confirm the model by the direct measurement of the predicted $\gt$ values. Finally, in section~\ref{section:correlations}, we discuss the limits of our model when the generated photons are spectrally correlated. \section{An exact model of the detection statistics} \label{section:model} \subsection{Description of the model} \label{subsection:modeldescrip} To assess the properties of a source of photon pairs, we developed an exact model of the detection statistics of the experimental setup detailed in Fig.~\ref{fig:genericsetup}. \begin{figure} \caption{The sources of photon pairs we consider comprise all probabilistic sources, including those based on nonlinear crystals, optical fibres or atomic ensembles. The distribution of the number of produced photon pairs per measurement time window can be given by any distribution such as Poissonian or thermal and is assumed to be known in advance. The pairs are deterministically separated, potentially by a dichroic beamsplitter in the case of collinear generation with non-degenerate wavelengths, or by non-collinear generation, into two separate channels. 
Each beam is filtered to remove all pump light and then the pairs are coupled into optical fibres. One beam is split again at a 50/50 beamsplitter before the photons are detected by non-photon number resolving single photon detectors $D_H$, $D_A$ and $D_B$.} \label{fig:genericsetup} \end{figure} To model the detection statistics of this experimental setup we construct a column vector $\vector{P}$, as shown in Eq.~(\ref{state-vector-def}), which describes the joint state of the detectors: \begin{equation} \label{state-vector-def} \vector{P} = \left( \begin{smallmatrix} p_{\bar{A}\bar{B}\bar{H}} & p_{A\bar{B}\bar{H}} & p_{\bar{A}B\bar{H}} & p_{\bar{A}\bar{B}H} & p_{AB\bar{H}} & p_{A\bar{B}H} & p_{\bar{A}BH} & p_{ABH} \\ \end{smallmatrix} \right)^{\mathrm{T}}. \end{equation} Each element of $\vector{P}$ describes the probability that a set of detectors clicked or not per measurement time window, which is defined as the elementary observation time (e.g. a short time window centered on one pump pulse; see later) for which detections are considered for statistical analysis. For example, $p_{A\bar{B}\bar{H}}$ is the probability that detector $D_A$ clicked, during the measurement time window, and $D_H$ and $D_B$ did not. The goal is to determine how this vector, initially in state $\vector{P}_0 = \left( \begin{smallmatrix} 1 & 0 & \ldots & 0 \end{smallmatrix} \right)^{\text{T}}$, is affected by single and multiple photon pair emissions as well as detector dark counts during one measurement time window. 
First, we describe the interaction of \textit{one} photon pair with the detectors using the following transition matrix: \begin{equation} \mmat = \left( \begin{smallmatrix} 1 - \eh + (\ea + \eb)(\eh -1) & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \ea(1-\eh) & (1 - \eb)(1 - \eh) & 0 & 0 & 0 & 0 & 0 & 0 \\ \eb(1-\eh) & 0 & (1 - \ea)(1 - \eh) & 0 & 0 & 0 & 0 & 0 \\ \eh( 1 - (\ea + \eb)) & 0 & 0 & 1 - (\ea + \eb) & 0 & 0 & 0 & 0 \\ 0 &\eb(1-\eh) & \ea(1-\eh) & 0 & 1-\eh & 0 & 0 & 0 \\ \ea \eh & \eh(1 - \eb) & 0 & \ea & 0 & 1-\eb & 0 & 0 \\ \eb \eh & 0 & \eh(1 - \ea) & \eb & 0 & 0 & 1-\ea & 0 \\ 0 & \eb \eh & \ea \eh & 0 & \eh & \eb & \ea & 1 \\ \end{smallmatrix} \right). \label{M-matrix-def} \end{equation} Each element of $\mmat$ describes the probability for a pair to cause a transition of the three detectors. Each term is written as a function of $\eh$, $\ea$ and $\eb$, which are the overall transmissions of the three channels, from the photon pair source to $D_H$, $D_A$ and $D_B$ respectively, including all optical losses, fibre coupling losses, detector inefficiencies, and the 50/50 beamsplitter. For example, $\mmat(1,1)$ is the probability for the system to make a transition from $\bar{A}\bar{B}\bar{H}$ to $\bar{A}\bar{B}\bar{H}$ (i.e. to remain in the state where no detectors clicked), which must equal the probability that the heralding photon is lost and its partner is detected at neither $D_A$ nor $D_B$: $(1 - \eh)(1 - \ea - \eb) = 1 - \eh + (\ea + \eb)(\eh -1)$. Similarly, $\mmat(2,1)$ is the probability to make a transition from $\bar{A}\bar{B}\bar{H}$ to $A\bar{B}\bar{H}$ (i.e. no detectors clicked before and, provided one photon pair arrives, only $D_A$ clicks), which equals $\ea(1-\eh)$. All the upper diagonal elements are equal to~0 as photons cannot make detectors ``unclick''. The rest of the matrix is constructed following the same physical reasoning. Furthermore, to conserve the total probability, each column of $\mmat$ sums to~$1$.
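The matrix $\mmat$ is straightforward to build and check numerically. The following NumPy sketch (with arbitrary illustrative transmission values) constructs it with the same state ordering as $\vector{P}$, verifies that every column sums to one, and reads off the single-pair coincidence probability $p_{A\bar{B}H}=\ea\eh$ from $\mmat\vector{P}_0$:

```python
import numpy as np

def M_eta(eh, ea, eb):
    """Transition matrix for one photon pair; states ordered as in P:
    (none), A, B, H, AB, AH, BH, ABH."""
    return np.array([
        [1 - eh + (ea + eb)*(eh - 1), 0, 0, 0, 0, 0, 0, 0],
        [ea*(1 - eh), (1 - eb)*(1 - eh), 0, 0, 0, 0, 0, 0],
        [eb*(1 - eh), 0, (1 - ea)*(1 - eh), 0, 0, 0, 0, 0],
        [eh*(1 - (ea + eb)), 0, 0, 1 - (ea + eb), 0, 0, 0, 0],
        [0, eb*(1 - eh), ea*(1 - eh), 0, 1 - eh, 0, 0, 0],
        [ea*eh, eh*(1 - eb), 0, ea, 0, 1 - eb, 0, 0],
        [eb*eh, 0, eh*(1 - ea), eb, 0, 0, 1 - ea, 0],
        [0, eb*eh, ea*eh, 0, eh, eb, ea, 1],
    ])

eh, ea, eb = 0.25, 0.10, 0.12   # illustrative channel transmissions
M = M_eta(eh, ea, eb)

# Probability is conserved: each column sums to 1.
assert np.allclose(M.sum(axis=0), 1.0)

# One pair acting on the no-click state; entry 5 is the (A, H) state,
# so the heralded coincidence probability is ea*eh.
P0 = np.zeros(8); P0[0] = 1.0
P1 = M @ P0
assert np.isclose(P1[5], ea * eh)
```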
The result of \textit{one} photon pair interacting with the detectors is thus given by $\mmat\vector{P}_0$. Second, the evolution of the system when~$i$ photon pairs are created during the measurement time window is described by $(\mmat)^i\vector{P}_0$, as losses and the beamsplitter choice for individual pairs in multi-pair emission are independent. In addition to the absorption of a photon, thermal excitations can also cause detector clicks. These dark counts can be taken into account by constructing another matrix $M_{dc}$. Thus, the evolution resulting from dark counts and $i$ photon pairs is described by $M_{dc}(\mmat)^i\vector{P}_0$. Denoting the dark count probabilities per measurement time window by $d_H$, $d_A$ and $d_B$, we get \begin{equation} M_{dc} = \left( \begin{smallmatrix} (1-\dca)(1-\dcb)(1-\dch) & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \dca(1-\dcb)(1-\dch) & (1-\dcb)(1-\dch) & 0 & 0 & 0 & 0 & 0 & 0 \\ (1-\dca)\dcb(1-\dch) & 0 & (1-\dca)(1-\dch) & 0 & 0 & 0 & 0 & 0 \\ (1-\dca)(1-\dcb)\dch & 0 & 0 & (1-\dca)(1-\dcb) & 0 & 0 & 0 & 0 \\ \dca \dcb (1-\dch) & \dcb(1-\dch) & \dca(1-\dch) & 0 & 1-\dch & 0 & 0 & 0 \\ \dca(1-\dcb)\dch & (1-\dcb)\dch & 0 & \dca(1-\dcb) & 0 & 1-\dcb & 0 & 0 \\ (1-\dca)\dcb\dch & 0 & (1-\dca)\dch & (1-\dca)\dcb & 0 & 0 & 1-\dca & 0 \\ \dca \dcb \dch & \dcb \dch & \dca \dch & \dca\dcb & \dch & \dcb & \dca & 1 \\ \end{smallmatrix} \right). \label{matrix-dc} \end{equation} Thus, when an unknown number of photon pairs are incident, it is possible to calculate the final vector $\vector{P}$ through \begin{equation} \vector{P} = \sum_{i=0}^{\infty} p_i M_{dc}(M_{\eta})^i \vector{P}_0, \label{eqn:master-matrix} \end{equation} where $p_i$ is the probability to create $i$ photon pairs per measurement time window. Provided that the probability distribution $p_i$ is known, this equation holds for any statistics, such as Poissonian, thermal or any distribution between the two~\cite{RSMATZG04}.
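As an illustration of how Eqs.~(\ref{M-matrix-def})--(\ref{eqn:master-matrix}) combine, the following Python sketch (not part of the original paper; the state ordering, variable names and truncation index are our own choices) builds both matrices and evaluates the truncated sum for a Poissonian $p_i$:

```python
import numpy as np
from math import exp, factorial

# State ordering as in Eq. (state-vector-def): (none, A, B, H, AB, AH, BH, ABH),
# encoded below as click flags (A, B, H).
STATES = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
          (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

def m_eta(eh, ea, eb):
    """Single-pair transition matrix M_eta of Eq. (M-matrix-def)."""
    return np.array([
        [1 - eh + (ea + eb)*(eh - 1), 0, 0, 0, 0, 0, 0, 0],
        [ea*(1 - eh), (1 - eb)*(1 - eh), 0, 0, 0, 0, 0, 0],
        [eb*(1 - eh), 0, (1 - ea)*(1 - eh), 0, 0, 0, 0, 0],
        [eh*(1 - ea - eb), 0, 0, 1 - ea - eb, 0, 0, 0, 0],
        [0, eb*(1 - eh), ea*(1 - eh), 0, 1 - eh, 0, 0, 0],
        [ea*eh, eh*(1 - eb), 0, ea, 0, 1 - eb, 0, 0],
        [eb*eh, 0, eh*(1 - ea), eb, 0, 0, 1 - ea, 0],
        [0, eb*eh, ea*eh, 0, eh, eb, ea, 1.0],
    ])

def m_dc(da, db, dh):
    """Dark-count matrix M_dc of Eq. (matrix-dc): each detector that has not
    clicked yet fires independently with its dark count probability, and
    detectors that have clicked stay clicked."""
    M = np.zeros((8, 8))
    for j, s in enumerate(STATES):        # source state (column)
        for i, t in enumerate(STATES):    # target state (row)
            p = 1.0
            for bs, bt, d in zip(s, t, (da, db, dh)):
                if bt < bs:               # detectors cannot "unclick"
                    p = 0.0
                    break
                if bs == 0:
                    p *= d if bt == 1 else 1 - d
            M[i, j] = p
    return M

def detection_probs(mu, eh, ea, eb, dh, da, db, imax=60):
    """P = sum_i p_i M_dc M_eta^i P0 with Poissonian p_i, truncated at imax."""
    P0 = np.zeros(8)
    P0[0] = 1.0
    Me, Md = m_eta(eh, ea, eb), m_dc(da, db, dh)
    P, Mi = np.zeros(8), np.eye(8)
    for i in range(imax):
        P += exp(-mu) * mu**i / factorial(i) * (Md @ Mi @ P0)
        Mi = Mi @ Me
    return P
```

Both matrices have columns summing to one, so the resulting $\vector{P}$ sums to one up to the truncation error of the Poissonian tail.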
Note that all matrices commute so the order in which they are applied does not matter. The construction of the matrices ensures that all elements of $\vector{P}$ are bounded individually between~0 and~1 and that the elements of $\vector{P}$ sum to~1, i.e. the total probability is conserved. We note that the model is exact and there are no approximations. \subsection{Determining the photon channel transmissions} We now show how one can determine precisely the values of $\mu$, $\eh$, $\ea$ and $\eb$ by measuring single and two-fold coincidence detection probabilities stemming from single pairs only. For these measurements, we require that the pump power (or equivalently the brightness of the photon pair source) is low enough so that multi-pair events are negligible: $p_i \ll p_1$ for $i>1$. Experimentally, this can be verified by looking at how correlated detections at $D_A$ are with detections at $D_H$. To measure this, we first define $p_H$ to be the heralding probability, i.e. the probability for $D_H$ to click independently of the other detectors, $p_H = p_{\bar{A}\bar{B}H} + p_{A\bar{B}H} + p_{\bar{A}BH} + p_{ABH}$, and similarly for $p_{AH}$ and $p_{A}$. We then define a parameter $G = p_{AH}/(p_{H}p_{A})$ quantifying the strength of the correlation between detections at $D_A$ and $D_H$. The model described by Eq.~(\ref{eqn:master-matrix}) predicts that, for Poissonian, thermal and intermediate distributions, the value of $G$ equals one at a very low brightness, when the coincidences are dominated by dark counts and detections are uncorrelated, and equals one again at high heralding probabilities, when the coincidence detections stem mostly from multi-pair emissions and correlations are smeared out. In between, the value of $G$ can go well above~1, which indicates that multi-pair emissions are negligible. As we show here, this allows one to obtain experimentally an upper bound for $\mu$ by proceeding as follows.
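The behaviour of $G$ can be made concrete with a small closed-form check (our own sketch, not taken from the paper): for Poissonian pair statistics with $N\sim\mathrm{Poisson}(\mu)$ pairs and independent per-pair losses, no-click probabilities take the form $\mathbb{E}[x^N]=e^{-\mu(1-x)}$. Since dark counts are neglected here, $G$ grows like $1/\mu$ at small $\mu$ (instead of returning to~1, which requires dark counts) and tends to~1 at large $\mu$:

```python
from math import exp

def g_poisson(mu, eh, ea):
    """G = p_AH / (p_H p_A) for Poissonian pairs and no dark counts.
    Per pair, D_H misses with probability 1-eh and D_A with 1-ea,
    independently; for N ~ Poisson(mu), E[x^N] = exp(-mu*(1-x))."""
    p_h = 1 - exp(-mu*eh)                 # P(D_H clicks)
    p_a = 1 - exp(-mu*ea)                 # P(D_A clicks)
    # inclusion-exclusion: neither D_A nor D_H fires per pair with (1-ea)(1-eh)
    p_ah = 1 - exp(-mu*eh) - exp(-mu*ea) + exp(-mu*(ea + eh - ea*eh))
    return p_ah / (p_h * p_a)
```

With the detector efficiencies used for the solid line of Fig.~\ref{fig:G} ($\eh=60\%$, $\ea=25\%$), this reproduces the monotone decrease of $G$ with increasing brightness.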
First, the dark count probability per measurement time window for each detector is measured. Second, the pump power is lowered and the transmissions are optimized until a value of $G$ significantly higher than~1 is measured. Third, a plot of $G$ versus $\mu$ is produced numerically assuming that the fibre coupling is perfect and that there are no additional optical losses, thereby setting the values of $\eh$ and $\ea$ equal to the specified detection efficiencies of the detectors. Finally, an upper bound for $\mu$ is obtained from this plot by identifying the largest value of $\mu$ that produces a value of $G$ equal to the measured value. The key point is that, for a given $\mu$ and dark count probabilities, $G$ is decreased towards~1 when the transmissions are decreased. Thus, using this method, the true value of $\mu$ must be smaller than the upper bound as the transmissions are overestimated. This, in turn, allows one to obtain a lower bound for the ratio, $r = p_1/p_{i>1}$, of the probability of single pair emissions, $p_1$, over the probability of multi-pair emissions, $p_{i>1} = 1 - p_0 - p_1$. As an illustration, using $\eh = 60\%$ and $\ea = 25\%$, corresponding to the detection efficiencies of our detectors, and using their respective measured dark count probabilities (see section~\ref{section:results}), we produced the solid line shown on Fig.~\ref{fig:G} where we assumed a Poisson distribution, $p_i = \exp(-\mu)\mu^i/i!$. \begin{figure} \caption{Correlation strength $G$ versus brightness $\mu$. The solid line corresponds to $\eh = 60\%$ and $\ea = 25\%$. It reaches a maximum value at very low $\mu$ and then sharply decreases to~1 for $\mu = 0$ (not visible).
The meanings of the dotted and dashed lines are discussed in section~\ref{section:results}.\label{fig:G}} \end{figure} Once the pump power is properly set and the lower bound on $r$ is satisfactory, Eq.~(\ref{eqn:master-matrix}) can be truncated to $i=1$ and one can show that the probability for $D_H$ to click on a photon and not a dark count is given by $p_{H}^{(1)} = (p_{H} - d_H)/(1 - d_H)$. Similarly, we get \mbox{$p_A^{(1)} = (p_A - d_A)/(1-d_A)$} and the equivalent for $p_B^{(1)}$. In the same way, we can get expressions for the coincidence probabilities $p_{AH}$ and $p_{BH}$. Then, using these expressions and an experimental data collection run with a heralding probability that guarantees negligible multi-pair events, one can solve for the four unknowns $\mu$, $\eh$, $\ea$ and $\eb$, since the dark count probabilities can be measured directly. These unknowns can be calculated through equations~(\ref{eqn:eh-calc})--(\ref{eqn:mu-calc}). The equivalent set for $D_B$ is constructed by replacing $\ea$ and $p_A^{(1)}$ by $\eb$ and $p_B^{(1)}$, respectively: \begin{equation} \eh = \dfrac{p_{AH} - p_H^{(1)} d_A (1-d_H) - p_A^{(1)}d_H(1-d_A) - d_A d_H}{p_A^{(1)}(1-d_A)(1-d_H)}, \label{eqn:eh-calc} \end{equation} \begin{equation} \label{eqn:ea-calc} \ea = \dfrac{p_{AH} - p_H^{(1)}d_A(1- d_H) - p_A^{(1)}d_H(1-d_A) - d_Ad_H}{p_H^{(1)}(1-d_A)(1-d_H)}, \end{equation} \begin{equation} \label{eqn:mu-calc} p_1 = \dfrac{p_{H}^{(1)}}{\eh} = \dfrac{p_{A}^{(1)}}{\ea}. \end{equation} Note that these predictions apply to any statistical distribution for which the multi-pair events can be neglected (verified, for example, through the method described above). However, to determine the value of the brightness, one must have prior knowledge of the distribution and how to relate it to the measured value of~$p_1$. In the case of a Poissonian source, we have $p_1 = \mu \exp(-\mu)$ which can be solved numerically for $\mu$.
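For concreteness, Eqs.~(\ref{eqn:eh-calc})--(\ref{eqn:mu-calc}) translate into a few lines of Python (a sketch with our own variable names; the equivalent set for $D_B$ follows by substitution):

```python
def transmissions(pH, pA, pAH, dH, dA):
    """Solve Eqs. (eh-calc)-(mu-calc) for eta_H, eta_A and p_1, given
    measured singles pH, pA, coincidences pAH and dark count probabilities."""
    pH1 = (pH - dH) / (1 - dH)   # prob. D_H clicked on a photon, not a dark count
    pA1 = (pA - dA) / (1 - dA)
    num = pAH - pH1*dA*(1 - dH) - pA1*dH*(1 - dA) - dA*dH
    eh = num / (pA1 * (1 - dA) * (1 - dH))
    ea = num / (pH1 * (1 - dA) * (1 - dH))
    p1 = pH1 / eh                # equals pA1/ea by construction
    return eh, ea, p1
```

A quick consistency check: with vanishing dark counts and single-pair statistics, $p_H = p_1\eh$, $p_A = p_1\ea$ and $p_{AH} = p_1\ea\eh$, and the routine recovers the values it was fed.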
The case of a thermal distribution is similar with \mbox{$p_1 = (\tanh \mu/\cosh \mu)^2$}. Once the transmissions are precisely known, one can use Eq.~(\ref{eqn:master-matrix}) to find the brightness that corresponds to any measured heralding probability. This then allows one to predict the complete detection statistics vector $\vector{P}$. \subsection{Application to a HSPS} The transmissions and the brightness, along with the knowledge of the pair distribution type, can be used to predict the second-order autocorrelation function of the heralded mode of a HSPS, $\gt$, for any desired heralding probability $p_H$, in a Hanbury Brown and Twiss (HBT) experiment~\cite{HBT56}. The distribution of the number of photons in that mode follows the distribution of the number of photon pairs created by the source except for a reduced vacuum component, $p_0$, due to the heralding. A $\gt < 1$, which is achievable with a HSPS, implies a nonclassical source (for a perfect single photon source $\gt = 0$). Conversely, a $\gt \geq 1$ describes a classical source (for Poissonian \mbox{$\gt = 1$} and for thermal $\gt = 2$). To verify our model, we compare its predictions with a real measurement of the $\gt$. In this experiment, which can be seen as measuring a subset of Eq.~(\ref{eqn:master-matrix}), detectors $D_A$ and $D_B$ are activated only when $D_H$ clicks. The $\gt$ is defined as \begin{equation} \gt = \frac{p_{AB | H}}{p_{A | H} \times p_{B | H}}, \end{equation} where $p_{AB|H}$ is the probability that both $D_A$ and $D_B$ click provided that $D_H$ clicked, etc. For a specific heralding probability $p_H$, we can directly measure $\gt$ using the setup of Fig.~\ref{fig:genericsetup} by keeping only the events where $D_H$ clicked. On the other hand, the $\gt$ can also be predicted for the same heralding probability using Eq.~(\ref{eqn:master-matrix}). The experimental results of this verification are presented in the next section.
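Both the definition of $\gt$ and its low-brightness behaviour can be sketched in a few lines (a consistency check of our own, not the paper's derivation; the closed forms assume Poissonian pairs, no dark counts, and the same per-pair independence as the matrix model):

```python
from math import exp

def g2_heralded(P):
    """g2 = p_{AB|H} / (p_{A|H} p_{B|H}) from a statistics vector P
    ordered as in Eq. (state-vector-def): (none, A, B, H, AB, AH, BH, ABH)."""
    p_h = P[3] + P[5] + P[6] + P[7]          # D_H clicked, whatever A and B did
    p_a_h = (P[5] + P[7]) / p_h              # p_{A|H}
    p_b_h = (P[6] + P[7]) / p_h              # p_{B|H}
    p_ab_h = P[7] / p_h                      # p_{AB|H}
    return p_ab_h / (p_a_h * p_b_h)

def g2_poisson(mu, eh, ea, eb):
    """Heralded g2 for Poissonian pairs without dark counts, computed by
    inclusion-exclusion over no-click events, using E[x^N] = exp(-mu(1-x)).
    Per pair: the 1550 nm photon reaches D_A with prob ea or D_B with eb
    (exclusive), the 810 nm photon reaches D_H with eh, independently."""
    E = lambda x: exp(-mu * (1 - x))
    p_h = 1 - E(1 - eh)
    p_ah = 1 - E(1 - ea) - E(1 - eh) + E((1 - ea) * (1 - eh))
    p_bh = 1 - E(1 - eb) - E(1 - eh) + E((1 - eb) * (1 - eh))
    p_abh = (1 - E(1 - ea) - E(1 - eb) - E(1 - eh)
             + E(1 - ea - eb) + E((1 - ea) * (1 - eh))
             + E((1 - eb) * (1 - eh)) - E((1 - ea - eb) * (1 - eh)))
    return p_abh * p_h / (p_ah * p_bh)
```

For uncorrelated (independent) detector clicks `g2_heralded` returns exactly~1, and at low brightness `g2_poisson` reproduces the relation $\gt = \mu(2-\eh)$ derived from the model in the next paragraph.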
One interesting result can be derived from our model. Considering a Poissonian distribution at low brightness and assuming that dark counts are negligible, one can derive from Eq.~(\ref{eqn:master-matrix}) that $\gt = \mu(2-\eh)$~\cite{thermalg2}. This indicates that for a HSPS, the transmission to the heralding detector is a crucial parameter to optimize. It is important to note here that the 50/50 beamsplitter used in the setup of Fig.~\ref{fig:genericsetup} is not required to determine the brightness and the transmissions. Indeed, only the detectors $D_H$ and $D_A$ are necessary to assess the brightness and the transmissions. In this work, the beamsplitter and $D_B$ were added only to provide a way to verify the validity of the predictions through the $\gt$ measurement. Modifying the vector $\vector{P}$ and the matrices to accommodate a setup without the beamsplitter and $D_B$ is straightforward. \section{Experimental results} \label{section:results} The experimental setup is shown in Fig.~\ref{fig:realsetup}. \begin{figure} \caption{Experimental setup.} \label{fig:realsetup} \end{figure} A clocking signal triggers a pulsed laser diode (PicoQuant PicoTA) creating 50~ps pulses at 532~nm filtered to remove excess 1064~nm light. The pulses are focused onto a 1~cm long periodically poled LiNbO$_3$ crystal (PPLN) from Stratophase with a grating period of 7.05~\textmu m heated to 175.7~$^{\circ}$C. Then, collinear spontaneous parametric downconversion to one or several photon pairs can occur, each pair consisting of one 810~nm and one 1550~nm photon. A dichroic mirror DM separates the two wavelengths and, after removing the 532~nm light with long-pass color filters, the photons are coupled into SMF28 optical fibres.
The 810~nm photons are sent towards a free-running Si single photon counting module $D_H$ (SPCM-AQR-14-FC, Perkin-Elmer) and the 1550~nm photons are detected by two gated InGaAs single photon detectors $D_A$ and $D_B$ (id201, IdQuantique) positioned right after a 50/50 fibre beamsplitter. The detection statistics are recorded using a time-to-digital converter (TDC-GPX, ACAM), providing the time elapsed between a start pulse, given by the clock, and each stop pulse, corresponding to detections. The data is analyzed in real-time and the statistics are updated continuously until the end of each run. The coherence length $l_c$ of the downconverted 810~nm photons was measured to be 180~\textmu m using a Michelson interferometer. From these measurements we calculated the bandwidth to be $\Delta \lambda_{\text{810}} \approx 4\,\text{nm}$ and, based on energy conservation of the SPDC process, $\Delta \lambda_{\text{1550}} \approx 15\,\text{nm}$. As the downconverted photons' coherence time, which equals $l_c/c = 0.54$~ps, is much smaller than the pump pulse duration, which is $50$~ps, we can confidently assume that our source of photon pairs follows Poissonian statistics~\cite{RSMATZG04}. The InGaAs detectors, which require gating to limit excess dark counts, were activated from each clocking signal for a 5~ns measurement time window. Detections on the Si detector were considered valid only if they arrived within a 5~ns window centered on the clocking signal, as measured by the TDC. To determine the transmissions, the clocking signal triggering the laser and InGaAs detectors was set to 30~kHz. This low repetition rate ensured that saturation effects in the detection electronics and biases in the detection statistics from the InGaAs detectors' 10~\textmu s dead-time were avoided. We first measured the dark count probabilities to be $d_A = 2.87\times 10^{-4}$, $d_B = 3.84 \times 10^{-4}$ and $d_H = 2.5 \times 10^{-7}$ per 5~ns.
Next, we lowered the pump power using neutral density filters in order to increase the correlation strength between $D_H$ and $D_A$ to a value of $G = 20.6\pm 1.0$, corresponding to a heralding probability of $0.287 \pm 0.001\%$. Intersecting this value with the solid line of Fig.~\ref{fig:G} gives an upper bound of $\mu \le 0.0480 \pm 0.0013$, yielding $r \ge 41.0 \pm 2.2$, which we considered sufficiently high to continue. Next we measured the single and coincidence detection probabilities from which we obtained the following values: $\eh = 0.1212 \pm 0.0031$, $\ea = 0.0145 \pm 0.0005$, $\eb = 0.0162 \pm 0.0005$ and $\mu = 0.02375 \pm 0.00016$, corresponding to $r = 83.5 \pm 0.6$. The $G$ curve corresponding to these values is plotted as the dotted line on Fig.~\ref{fig:G}, and the predicted value of $G$ at $\mu = 0.02375$ is $23.9 \pm 0.5$, which is close to the measured value of~20.6. Using these values together with Eq.~(\ref{eqn:master-matrix}), we produced a plot of the predicted $p_{AB|H}$, $p_{A|H}$ and $p_{B|H}$ for a wide range of the brightness (and consequently, of the heralding probability). These predictions are compared to the measured values on Fig.~\ref{fig:measuredprobabilities}(a) and~\ref{fig:measuredprobabilities}(b). Next we compared the predicted and measured $\gt$ as shown on Fig.~\ref{fig:g2all}(a). On the same figure we plotted the value of the brightness $\mu$ corresponding to each heralding probability. In all cases, the agreement between the predicted and measured values is excellent. We note that for these measurements, the repetition rate was increased to 5~MHz and the InGaAs detectors were activated for 5~ns only when the Si detector clicked synchronously (within a 5~ns window) with the pump, as required for $\gt$ measurements in the HBT setup.
This resulted in an average detection rate of 30~kHz, with randomly distributed time differences, for the InGaAs detectors and was thus sufficient to ensure that saturation effects in the detection electronics were not an issue. However, to eliminate the effect of the dead-time on the detection statistics, we considered only the events where both InGaAs detectors were ready to detect photons, as provided by the ``gate out'' electrical signals of these detectors. \begin{figure} \caption{(a) Predicted (solid lines) and measured (points) conditional detection probabilities $p_{A|H}$ and $p_{B|H}$, and (b) $p_{AB|H}$.\label{fig:measuredprobabilities}} \end{figure} \begin{figure} \caption{(a) Predicted autocorrelation $\gt$ for Poissonian (solid line) and thermal (dotted line) distributions, measured values (points), and the corresponding brightness $\mu$ (dash-dotted line). The measured data agrees very well with the Poissonian distribution. The dashed lines are the one standard deviation uncertainty bounds on the predicted values. (b) As the heralding probability reaches the noise level of $D_H$ (dashed line), the model correctly predicts that the $\gt$ approaches one, as uncorrelated dark counts begin to dominate over photon clicks.} \label{fig:g2all} \end{figure} Our method drastically reduces the time needed to characterize the source as measurements of single and two-fold coincidence detections at a low heralding probability are sufficient to determine the transmissions. These can then be used to predict the brightness and the $\gt$ of a HSPS for any heralding probability. In contrast, a single direct measurement of the $\gt$ at a given heralding probability requires three-fold coincidence detections stemming from multi-pair emissions, which are less likely to happen. In our experiment, at a heralding probability of 0.287\%, two-fold coincidences were approximately~700 times more likely than three-fold coincidences. Consequently, a direct $\gt$ measurement required much more time.
\section{Effect of spectral and spatial correlations} \label{section:correlations} The model proposed in section~\ref{section:model} may not apply directly to a source when spectral and/or spatial correlations exist between the photons of each pair and when the transmissions of the channels are frequency and/or spatially selective. For instance, Bell state measurements generally require filtering of photons that are spectrally correlated~\cite{BM95}. Let us suppose that the spectral filtering of the two spatially separated photons is performed with two separate filters that both need to be aligned with the photons' spectra. Due to the energy correlation, the transmission of one photon through its spectral filter determines the spectrum of the other photon~\cite{F92}. If the filtering of the second photon does not match its now modified spectrum, the coincidence detection probability is reduced. In this case, given that one photon pair was created, we can still write the probability to get a detection at $D_H$ as $\eh$ and at $D_A$ as $\ea$. However, the probability to get a coincidence is lowered to $c\eh\ea$, where $0\le c \le 1$. The upper bound is reached when either the photons' spectra are uncorrelated before the filtering (see~\cite{MLS+08}), or when the selected spectra satisfy the energy conservation conditions perfectly. When this situation applies, the detection matrix can be re-written as follows: \begin{equation*} M_{\eta}' = \left( \begin{smallmatrix} 1 - \eh + (\ea + \eb)(c\eh-1) & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \ea(1-c\eh) & 1 - \eb + \eh(c\eb - 1) & 0 & 0 & 0 & 0 & 0 & 0 \\ \eb(1-c\eh) & 0 & 1 - \ea + \eh( c\ea - 1 ) & 0 & 0 & 0 & 0 & 0 \\ \eh( 1 - c(\ea + \eb) ) & 0 & 0 & 1 - (\ea + \eb) & 0 & 0 & 0 & 0 \\ 0 & \eb(1-c\eh) & \ea(1-c\eh) & 0 & 1-\eh & 0 & 0 & 0 \\ c\ea \eh & \eh(1 - c\eb) & 0 & \ea & 0 & 1-\eb & 0 & 0 \\ c\eb \eh & 0 & \eh(1 - c\ea) & \eb & 0 & 0 & 1-\ea & 0 \\ 0 & c\eb \eh & c\ea \eh & 0 & \eh & \eb & \ea & 1 \\ \end{smallmatrix} \right).
\end{equation*} The presence of spectral correlations affects the predictions of the proposed model but as we show here, the consequences are minimal. First of all, for a given brightness, a value of $c<1$ lowers the measured value of~$G$ towards~$1$. Therefore, when assessing if multi-pair events are negligible, one can assume that $c=1$ and the upper bound on $\mu$, along with the lower bound on $r$, are still valid. As an illustration, reducing~$c$ from~1, corresponding to the solid line of Fig.~\ref{fig:G}, down to~$0.5$, corresponding to the dashed line on the same figure, lowers the curve towards~$G=1$. Also, performing the above analysis to calculate the transmissions, while neglecting dark counts, we can show that the obtained solutions are $\eh' = c\eh$, $\ea' = c\ea$, $\eb' = c\eb$ and $\mu' = \mu/c$, and that the value of $c$ cannot be assessed directly. Therefore, if one is unaware of $c$, the analysis performed underestimates the transmissions by a factor of $c$ and overestimates the brightness by a factor of $1/c$. However, we can show by direct calculation that the predicted probability vector $\vector{P}$, and consequently the predicted $\gt$ for a given heralding rate, are independent of $c$. Therefore, in our results obtained in section~\ref{section:results}, we may have slightly overestimated the values of the brightness but we still predicted the correct value for the measured $\gt$. For QKD with a HSPS, one can assess the security of the protocol against PNS attacks by knowing the value of $\mu$. The analysis we propose gives an overestimated value for $\mu$ when there are reasons to expect that spectral correlations and unmatched filtering are present. This is not detrimental to the security of QKD, as a sender unaware of this will, in the worst case, only overestimate the information available to an eavesdropper and shorten the key more than necessary through privacy amplification~\cite{BBR88,BBCM95}. 
\section{Conclusion} We developed a model exactly describing the detection statistics of a probabilistic source of photon pairs. From this model, we outlined a method by which the transmission of each photon channel, as well as the source's brightness, can be determined by measuring single and two-fold coincidence detection probabilities stemming from photons belonging to one pair. Then, we experimentally confirmed the method by demonstrating that the measured $\gt$ of a HSPS can be correctly predicted for any heralding probability. This allows one to quickly tune the brightness on demand as required to optimize the performance of entanglement-based QKD, to assess the security of HSPS-based QKD or to optimize quantum repeater error rates and distances, all in the context of fluctuating experimental conditions such as photon channel transmissions. Finally, we showed that our model correctly reproduces the detection statistics even if the photons are spectrally and/or spatially correlated, and that this only leads to an overestimation of the brightness of the source. The simplicity of the proposed method makes it very attractive for the field of quantum communication in general. \section*{Acknowledgments} This work is supported by NSERC, iCORE, GDC, CFI, AAET, QuantumWorks, NATEQ, AIF and CIPI. \ \\ \ \\ $^{\dag}$These authors contributed equally to this work. \end{document}
\begin{document} \title{Lempert Theorem for strongly linearly convex domains} \author{\L ukasz Kosi\'nski and Tomasz Warszawski} \subjclass[2010]{32F45} \keywords{Lempert Theorem, strongly linearly convex domains, Lempert extremals} \address{Instytut Matematyki, Wydzia\l\ Matematyki i Informatyki, Uniwersytet Jagiello\'nski, ul. Prof. St. \L ojasiewicza 6, 30-348 Krak\'ow, Poland} \email{[email protected], [email protected]} \begin{abstract} In 1984 L.~Lempert showed that the Lempert function and the Carath\'eodory distance coincide on non-planar bounded strongly linearly convex domains with real analytic boundaries. Following this paper, we present a~slightly modified and more detailed version of the proof. Moreover, the Lempert Theorem is proved for non-planar bounded ${\mathcal C}^2$-smooth strongly linearly convex domains. \end{abstract} \maketitle The aim of this paper is to present a detailed version of the proof of the Lempert Theorem in the case of non-planar bounded strongly linearly convex domains with smooth boundaries. Lempert's original proof appeared only in the proceedings of a conference (see \cite{Lem1}) with very limited access, and at some places it was quite sketchy. We were encouraged by some colleagues to prepare an extended version of the proof in which all doubts could be removed and some of the details of the proofs could be simplified. We hope to have done it below. Certainly, \textbf{the idea of the proof belongs entirely to Lempert}.
The main differences we would like to draw attention to are \begin{itemize} \item results are obtained in the $\mathcal C^2$-smooth case; \item the notion of stationary mappings and $E$-mappings is separated; \item the geometry of domains is investigated only in neighborhoods of boundaries of stationary mappings (viewed as boundaries of analytic discs) --- this allows us to obtain localization properties for stationary mappings; \item boundary properties of strongly convex domains are expressed in terms of the squares of their Minkowski functionals. \end{itemize} Additional motivation for presenting the proof is the fact, shown recently in \cite{Pfl-Zwo}, that the so-called symmetrized bidisc may be exhausted by strongly linearly convex domains. On the other hand it cannot be exhausted by domains biholomorphic to convex ones (\cite{Edi}). Therefore, the equality of the Lempert function and the Carath\'eodory distance for strongly linearly convex domains does not follow directly from \cite{Lem2}. \section{Introduction and results} Let us recall the objects we will deal with. Throughout the paper $\mathbb{D}$ denotes the unit open disc in the complex plane, $\mathbb{T}$ is the unit circle and $p$ --- the Poincar\'e distance on $\mathbb{D}$. Let $D\subset\mathbb{C}^{n}$ be a domain and let $z,w\in D$, $v\in\mathbb{C}^{n}$. The {\it Lempert function}\/ is defined as \begin{equation}\label{lem} \widetilde{k}_{D}(z,w):=\inf\{p(0,\xi):\xi\in[0,1)\textnormal{ and }\exists f\in \mathcal{O}(\mathbb{D},D):f(0)=z,\ f(\xi)=w\}. \end{equation} The {\it Kobayashi-Royden \emph{(}pseudo\emph{)}metric}\/ is defined as \begin{equation}\label{kob-roy} \kappa_{D}(z;v):=\inf\{\lambda^{-1}:\lambda>0\text{ and }\exists f\in\mathcal{O}(\mathbb{D},D):f(0)=z,\ f'(0)=\lambda v\}.
\end{equation} Note that \begin{equation}\label{lem1} \widetilde{k}_{D}(z,w)=\inf\{p(\zeta,\xi):\zeta,\xi\in\mathbb{D}\textnormal{ and }\exists f\in \mathcal{O}(\mathbb{D},D):f(\zeta)=z,\ f(\xi)=w\}, \end{equation} \begin{multline}\label{kob-roy1} \kappa_{D}(z;v)=\inf\{|\lambda|^{-1}/(1-|\zeta|^2):\lambda\in\mathbb{C}_*,\,\zeta\in\mathbb{D}\text{ and }\\ \exists f\in\mathcal{O}(\mathbb{D},D):f(\zeta)=z,\ f'(\zeta)=\lambda v\}. \end{multline} If $z\neq w$ (respectively $v\neq 0$), a mapping $f$ for which the infimum in \eqref{lem1} (resp. in \eqref{kob-roy1}) is attained is called a $\widetilde{k}_D$-\textit{extremal} (or a \textit{Lempert extremal}) for $z,w$ (resp. a $\kappa_D$-\textit{extremal} for $z,v$). A mapping which is a $\widetilde k_D$-extremal or a $\kappa_D$-extremal will be called simply an \textit{extremal} or an \textit{extremal mapping}. We shall say that $f:\mathbb{D}\longrightarrow D$ is a unique $\widetilde{k}_D$-extremal for $z,w$ (resp. a unique $\kappa_D$-extremal for $z,v$) if any other $\widetilde{k}_D$-extremal $g:\mathbb{D}\longrightarrow D$ for $z,w$ (resp. $\kappa_D$-extremal for $z,v$) satisfies $g=f\circ a$ for some M\"obius function $a$. In general, $\widetilde{k}_{D}$ does not satisfy the triangle inequality --- take for example $D_{\alpha}:=\{(z,w)\in\mathbb{C}^{2}:|z|,|w|<1,\ |zw|<\alpha\}$, $\alpha\in(0,1)$.
Therefore, it is natural to consider the so-called \textit{Kobayashi \emph{(}pseudo\emph{)}distance} given by the formula \begin{multline*}k_{D}(w,z):=\sup\{d_{D}(w,z):(d_{D})\text{ is a family of holomorphically invariant} \\\text{pseudodistances less than or equal to }\widetilde{k}_{D}\}.\end{multline*} It follows directly from the definition that $$k_{D}(z,w)=\inf\left\{\sum_{j=1}^{N}\widetilde{k}_{D}(z_{j-1},z_{j}):N\in\mathbb{N},\ z_{1},\ldots,z_{N}\in D,\ z_{0}=z,\ z_{N}=w\right\}.$$ The next objects we are dealing with are the \textit{Carath\'eodory \emph{(}pseudo\emph{)}distance} $$c_{D}(z,w):=\sup\{p(F(z),F(w)):F\in\mathcal{O}(D,\mathbb{D})\}$$ and the \textit{Carath\'eodory-Reiffen \emph{(}pseudo\emph{)}metric} $$\gamma_D(z;v):=\sup\{|F'(z)v|:F\in\mathcal{O}(D,\mathbb{D}),\ F(z)=0\}.$$ A holomorphic mapping $f:\mathbb{D}\longrightarrow D$ is said to be a \emph{complex geodesic} if $c_D(f(\zeta),f(\xi))=p(\zeta,\xi)$ for any $\zeta,\xi\in\mathbb{D}$. Here is some notation. Let $z_1,\ldots,z_n$ be the standard complex coordinates in $\mathbb{C}^n$ and $x_1,\ldots,x_{2n}$ --- the standard real coordinates in $\mathbb{C}^n=\mathbb{R}^n+i\mathbb{R}^n\simeq\mathbb{R}^{2n}$. We use $T_{D}^\mathbb{R}(a)$, $T_{D}^\mathbb{C}(a)$ to denote the real and the complex tangent space to a ${\mathcal C}^1$-smooth domain $D$ at a point $a\in\partial D$, i.e. the sets \begin{align*}T_{D}^\mathbb{R}(a):&=\left\{X\in\mathbb{C}^{n}:\re\sum_{j=1}^n\frac{\partial r}{\partial z_j}(a)X_{j}=0\right\},\\ T_{D}^\mathbb{C}(a):&=\left\{X\in\mathbb{C}^{n}:\sum_{j=1}^n\frac{\partial r}{\partial z_j}(a)X_{j}=0\right\},\end{align*} where $r$ is a defining function of $D$. Let $\nu_D(a)$ be the outward unit normal vector to $\partial D$ at $a$.
Let $\mathcal{C}^{k}(\overline{\DD})$, where $k\in(0,\infty]$, denote the class of continuous functions on $\overline{\DD}$ which are of class ${\mathcal C}^k$ on $\mathbb{D}$ and \begin{itemize} \item if $k\in\mathbb{N}\cup\{\infty\}$ then derivatives up to the order $k$ extend continuously on~$\overline{\DD}$; \item if $k-[k]=:c>0$ then derivatives up to the order $[k]$ are $c$-H\"older continuous on $\mathbb{D}$. \end{itemize} By $\mathcal{C}^\omega$ class we shall denote real analytic functions. Further, saying that $f$ is of class $\mathcal{C}^{k}(\mathbb{T})$, $k\in(0,\infty]\cup\{\omega\}$, we mean that the function $t\longmapsto f(e^{it})$, $t\in\mathbb{R}$, is in $\mathcal{C}^{k}(\mathbb R)$. For a compact set $K\subset\mathbb{C}^n$ let ${\mathcal O}(K)$ denote the set of functions extending holomorphically on a neighborhood of $K$ (we assume that all neighborhoods are open). In that case we shall sometimes say that a given function is of class ${\mathcal O}(K)$. Note that $\mathcal{C}^{\omega}(\mathbb{T})={\mathcal O}(\mathbb{T})$. Let $|\cdot|$ denote the Euclidean norm in $\mathbb{C}^{n}$ and let $\dist(z,S):=\inf\{|z-s|:s\in S\}$ be the distance of the point $z\in\mathbb{C}^n$ to the set $S\subset\mathbb{C}^n$. For such a set $S$ we define $S_*:=S\setminus\{0\}$. Let $\mathbb{B}_n:=\{z\in\mathbb{C}^n:|z|<1\}$ be the unit ball and $B_n(a,r):=\{z\in\mathbb{C}^n:|z-a|<r\}$ --- an open ball with center $a\in\mathbb{C}^n$ and radius $r>0$. Put $$z\bullet w:=\sum_{j=1}^nz_{j}{w}_{j}$$ for $z,w\in\mathbb{C}^{n}$ and let $\langle\cdot,\cdot\rangle$ be the hermitian inner product on $\mathbb{C}^n$. The real inner product on $\mathbb{C}^n$ is denoted by $\langle\cdot,\cdot\rangle_{\mathbb{R}}=\re\langle\cdot,\cdot\rangle$. We use $\nabla$ to denote the gradient $(\partial/\partial x_1,\ldots,\partial/\partial x_{2n})$.
For real-valued functions the gradient is naturally identified with $2(\partial/\partial\overline z_1,\ldots,\partial/\partial\overline z_n)$. Recall that $$\nu_D(a)=\frac{\nabla r(a)}{|\nabla r(a)|}.$$ Let $\mathcal{H}$ be the Hessian matrix $$\left[\frac{\partial^2}{\partial x_j\partial x_k}\right]_{1\leq j,k\leq 2n}.$$ Sometimes, for a ${\mathcal C}^2$-smooth function $u$ and a vector $X\in\mathbb{R}^{2n}$ the Hessian $$\sum_{j,k=1}^{2n}\frac{\partial^2 u}{\partial x_j\partial x_k}(a)X_{j}X_{k}=X^T{\mathcal H} u(a)X$$ will be denoted by ${\mathcal H} u(a;X)$. By $\|\cdot\|$ we denote the operator norm. \begin{df}\label{29} Let $D\subset\mathbb{C}^{n}$ be a domain. We say that $D$ is \emph{linearly convex} (resp. \emph{weakly linearly convex}) if through any point $a\in\mathbb C^n\setminus D$ (resp. $a\in \partial D$) there goes an $(n-1)$-dimensional complex hyperplane disjoint from $D$. A domain $D$ is said to be \emph{strongly linearly convex} if \begin{enumerate} \item $D$ has $\mathcal{C}^{2}$-smooth boundary; \item there exists a defining function $r$ of $D$ such that \begin{equation}\label{48}\sum_{j,k=1}^n\frac{\partial^2 r}{\partial z_j\partial\overline z_k}(a)X_{j}\overline{X}_{k}>\left|\sum_{j,k=1}^n\frac{\partial^2 r}{\partial z_j\partial z_k}(a)X_{j}X_{k}\right|,\ a\in\partial D,\ X\in T_{D}^\mathbb{C}(a)_*.\end{equation} \end{enumerate} More generally, any point $a\in\partial D$ for which there exists a defining function $r$ satisfying \eqref{48}, is called a \emph{point of the strong linear convexity} of $D$. Furthermore, we say that a domain $D$ has \emph{real analytic boundary} if it possesses a real analytic defining function. \end{df} Note that the condition \eqref{48} does not depend on the choice of a defining function of $D$. \begin{rem} Let $D\subset\mathbb{C}^{n}$ be a strongly linearly convex domain.
Then \begin{enumerate} \item any $(n-1)$-dimensional complex tangent hyperplane intersects $\partial{D}$ at precisely one point; in other words, $$\overline D\cap(a+T_{D}^\mathbb{C}(a))=\{a\},\ a\in\partial D;$$ \item for $a\in\partial D$ the equation $\langle w-a, \nu_D(a)\rangle=0$ describes the $(n-1)$-dimensional complex tangent hyperplane $a+T_{D}^\mathbb{C}(a)$; consequently, $$\langle z-a, \nu_D(a)\rangle\neq 0,\ z\in D,\ a\in\partial D.$$ \end{enumerate} \end{rem} The main aim of the paper is to present a detailed proof of the following \begin{tw}[Lempert Theorem]\label{lem-car} Let $D\subset\mathbb{C}^{n}$, $n\geq 2$, be a bounded strongly linearly convex domain. Then $$c_{D}=k_{D}=\widetilde{k}_{D}\text{\,\ and\,\, }\gamma_D=\kappa_D.$$ \end{tw} An important role will be played by strongly convex domains and strongly convex functions. \begin{df} A domain $D\subset\mathbb{C}^{n}$ is called \emph{strongly convex} if \begin{enumerate} \item $D$ has $\mathcal{C}^{2}$-smooth boundary; \item there exists a defining function $r$ of $D$ such that \begin{equation}\label{sc}\sum_{j,k=1}^{2n}\frac{\partial^2 r}{\partial x_j\partial x_k}(a)X_{j}X_{k}>0,\ a\in\partial D,\ X\in T_{D}^\mathbb{R}(a)_*.\end{equation} \end{enumerate} Generally, any point $a\in\partial D$ for which there exists a defining function $r$ satisfying \eqref{sc} is called a \emph{point of strong convexity} of $D$. \end{df} \begin{rem} A strongly convex domain $D\subset\mathbb{C}^{n}$ is convex and strongly linearly convex. Moreover, it is strictly convex, i.e.\ for any two distinct points $a,b\in\overline D$ the interior of the segment $[a,b]=\{ta+(1-t)b:t\in [0,1]\}$ is contained in $D$ (i.e.\ $ta+(1-t)b\in D$ for any $t\in(0,1)$). Observe also that any bounded convex domain with real analytic boundary is strictly convex.
Actually, if a domain $D$ with real analytic boundary were not strictly convex, then we would be able to find two distinct points $a,b\in\partial D$ such that the segment $[a,b]$ lies entirely in $\partial D$. On the other hand, the identity principle would imply that the set $\{t\in\mathbb R:\exists\varepsilon>0:sa+(1-s)b\in\partial D\text{ for }|s-t|<\varepsilon\}$ is open and closed in $\mathbb R$, so it is either empty or equal to $\mathbb R$. It is non-empty, as it contains $(0,1)$, and it cannot equal $\mathbb R$, since $D$ is bounded. This gives a contradiction. \end{rem} \begin{rem} It is well-known that for any convex domain $D\subset\mathbb{C}^{n}$ there is a sequence $\{D_m\}$ of bounded strongly convex domains with real analytic boundaries such that $D_m\subset D_{m+1}$ and $\bigcup_m D_m=D$. In particular, Theorem~\ref{lem-car} holds for convex domains. \end{rem} \begin{df} Let $U\subset\mathbb{C}^n$ be a domain. A function $u:U\longrightarrow\mathbb{R}$ is called \emph{strongly convex} if \begin{enumerate} \item $u$ is $\mathcal{C}^{2}$-smooth; \item $$\sum_{j,k=1}^{2n}\frac{\partial^2 u}{\partial x_j\partial x_k}(a)X_{j}X_{k}>0,\ a\in U,\ X\in(\mathbb{R}^{2n})_*.$$ \end{enumerate} \end{df} \begin{df} The degree of a continuous function $\varphi:\mathbb T\longrightarrow\mathbb T$ (treated as a curve) is called its winding number. Since $\mathbb C_*$ retracts onto $\mathbb T$ and the fundamental group is a homotopy invariant, the \emph{winding number of a continuous function} $\varphi:\mathbb T\longrightarrow\mathbb C_*$ is defined in the same way. We denote it by $\wind\varphi$. In the case of a ${\mathcal C}^1$-smooth function $\varphi:\mathbb{T}\longrightarrow\mathbb{C}_*$, its winding number is just the index of $\varphi$ at 0, i.e.
$$\wind\varphi=\frac{1}{2\pi i}\int_{\varphi(\mathbb{T})}\frac{d\zeta}{\zeta}=\frac{1}{2\pi i}\int_{0}^{2\pi}\frac{\frac{d}{dt}\varphi(e^{it})}{\varphi(e^{it})}dt.$$ \end{df} \begin{rem}\label{49} \begin{enumerate} \item\label{51} If $\varphi\in{\mathcal C}(\mathbb{T},\mathbb{C}_*)$ extends to a function $\widetilde{\varphi}\in{\mathcal O}(\mathbb{D})\cap \mathcal C(\overline{\DD})$, then $\wind\varphi$ is the number of zeroes of $\widetilde{\varphi}$ in $\mathbb{D}$ counted with multiplicities; \item\label{52} $\wind(\varphi\psi)=\wind\varphi+\wind\psi$, $\varphi,\psi\in{\mathcal C}(\mathbb{T},\mathbb{C}_*)$; \item\label{53} $\wind\varphi=0$ if $\varphi\in{\mathcal C}(\mathbb{T})$ and $\re\varphi>0$. \end{enumerate} \end{rem} \begin{df} The boundary of a domain $D\subset\mathbb C^n$ is \emph{real analytic in a neighborhood} $U$ of the set $S\subset\partial D$ if there exists a function $r\in\mathcal C^{\omega}(U,\mathbb{R})$ such that $D\cap U=\{z\in U:r(z)<0\}$ and $\nabla r$ does not vanish in $U$. \end{df} \begin{df}\label{21} Let $D\subset\mathbb{C}^{n}$ be a domain. We call a holomorphic mapping $f:\mathbb{D}\longrightarrow D$ a \emph{stationary mapping} if \begin{enumerate} \item $f$ extends to a holomorphic mapping in a neighborhood of $\overline{\DD}$ $($denoted by the same letter$)$; \item $f(\mathbb{T})\subset\partial D$; \item there exists a real analytic function $\rho:\mathbb{T}\longrightarrow\mathbb{R}_{>0}$ such that the mapping $\mathbb{T}\ni\zeta\longmapsto\zeta \rho(\zeta)\overline{\nu_D(f(\zeta))}\in\mathbb{C}^{n}$ extends to a mapping holomorphic in a neighborhood of $\overline{\DD}$ $($denoted by $\widetilde{f}$$)$.
\end{enumerate} Furthermore, we call a holomorphic mapping $f:\mathbb{D}\longrightarrow D$ a \emph{weak stationary mapping} if \begin{enumerate} \item[(1')] $f$ extends to a ${\mathcal C}^{1/2}$-smooth mapping on $\overline{\DD}$ $($denoted by the same letter$)$; \item[(2')] $f(\mathbb{T})\subset\partial D$; \item[(3')] there exists a ${\mathcal C}^{1/2}$-smooth function $\rho:\mathbb{T}\longrightarrow\mathbb{R}_{>0}$ such that the mapping $\mathbb{T}\ni\zeta\longmapsto\zeta \rho(\zeta)\overline{\nu_D(f(\zeta))}\in\mathbb{C}^{n}$ extends to a mapping $\widetilde{f}\in{\mathcal O}(\mathbb{D})\cap{\mathcal C}^{1/2}(\overline{\DD})$. \end{enumerate} The definition of a $($weak$)$ stationary mapping $f:\mathbb D\longrightarrow D$ extends naturally to the case when $\partial D$ is real analytic in a neighborhood of $f(\mathbb{T})$. \end{df} Directly from the definition of a stationary mapping $f$ it follows that $f$ and $\widetilde f$ extend holomorphically on some neighborhoods of $\overline{\DD}$. By $\mathbb{D}_f$ we shall denote their intersection. \begin{df}\label{21e} Let $D\subset\mathbb{C}^n$, $n\geq 2$, be a bounded strongly linearly convex domain with real analytic boundary. A holomorphic mapping $f:\mathbb{D}\longrightarrow D$ is called a (\emph{weak}) $E$-\emph{mapping} if it is a (weak) stationary mapping and \begin{enumerate} \item[(4)] setting $\varphi_z(\zeta):=\langle z-f(\zeta),\nu_D(f(\zeta))\rangle,\ \zeta\in\mathbb{T}$, we have $\wind\varphi_z=0$ for some $z\in D$. \end{enumerate} \end{df} \begin{rem} The strong linear convexity of $D$ implies that $\varphi_z(\zeta)\neq 0$ for any $z\in D$ and $\zeta\in\mathbb{T}$. Therefore, $\wind\varphi_z$ vanishes for all $z\in D$ if it vanishes for some $z\in D$. Additionally, any stationary mapping of a convex domain is an $E$-mapping (as $\re \varphi_z<0$).
\end{rem} We shall prove that in the class of non-planar bounded strongly linearly convex domains with real analytic boundaries weak stationary mappings are just stationary mappings, so there is no difference between $E$-mappings and weak $E$-mappings. We have the following result describing extremal mappings, which is interesting in its own right. \begin{tw}\label{main} Let $D\subset\mathbb{C}^n$, $n\geq 2$, be a bounded strongly linearly convex domain. Then a holomorphic mapping $f:\mathbb{D}\longrightarrow D$ is an extremal if and only if $f$ is a weak $E$-mapping. For a domain $D$ with real analytic boundary, a holomorphic mapping $f:\mathbb D\longrightarrow D$ is an extremal if and only if $f$ is an $E$-mapping. If $\partial D$ is of class ${\mathcal C}^k$, $k=3,4,\ldots,\infty$, then any weak $E$-mapping $f:\mathbb{D}\longrightarrow D$ and its associated mappings $\widetilde f,\rho$ are $\mathcal C^{k-1-\varepsilon}$-smooth for any $\varepsilon>0$. \end{tw} The idea of the proof of the Lempert Theorem is as follows. In the real analytic case we shall show that $E$-mappings are complex geodesics (because they have left inverses). Then we shall prove that for any two distinct points $z,w\in D$ (resp.\ for a point $z\in D$ and a vector $v\in(\mathbb{C}^n)_*$) there is an $E$-mapping passing through $z,w$ (resp.\ such that $f(0)=z$ and $f'(0)=v$). This will give the equality between the Lempert function and the Carath\'eodory distance. In the general case, we exhaust a ${\mathcal C}^2$-smooth domain by strongly linearly convex domains with real analytic boundaries. To prove Theorem~\ref{main} we shall additionally observe that (weak) $E$-mappings are unique extremals. \begin{center}{\sc Real analytic case}\end{center} In what follows, if not stated otherwise, $D\subset\mathbb{C}^n$, $n\geq 2$, is a \textbf{bounded strongly linearly convex domain with real analytic boundary}.
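To illustrate the notions introduced above, it may be helpful to verify them in the simplest example (a routine check, not used in the sequel): let $D=\mathbb{B}_n$ and $f(\zeta):=(\zeta,0,\ldots,0)$, $\zeta\in\mathbb{D}$. Then $f(\mathbb{T})\subset\partial\mathbb{B}_n$ and $\nu_{\mathbb{B}_n}(z)=z$ for $z\in\partial\mathbb{B}_n$, so taking $\rho\equiv 1$ we get $$\zeta\rho(\zeta)\overline{\nu_{\mathbb{B}_n}(f(\zeta))}=\zeta(\overline\zeta,0,\ldots,0)=(1,0,\ldots,0),\ \zeta\in\mathbb{T},$$ which extends to the constant mapping $\widetilde f\equiv(1,0,\ldots,0)$; thus $f$ is a stationary mapping. Moreover, $\varphi_0(\zeta)=\langle -f(\zeta),\nu_{\mathbb{B}_n}(f(\zeta))\rangle=-|f(\zeta)|^2=-1$ on $\mathbb{T}$, so $\wind\varphi_0=0$ and $f$ is an $E$-mapping with $f'\bullet\widetilde f\equiv 1$.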
\section{Weak stationary mappings of strongly linearly convex domains with real analytic boundaries are stationary mappings}\label{55} Let $M\subset\mathbb{C}^m$ be a totally real $\mathcal{C}^{\omega}$ submanifold of real dimension $m$. Fix a point $z\in M$. There are neighborhoods $U,V\subset\mathbb{C}^m$ of $0$ and $z$, respectively, and a biholomorphic mapping $\Phi:U\longrightarrow V$ such that $\Phi(\mathbb{R}^m\cap U)=M\cap V$ (for the proof see the Appendix). \begin{prop}\label{6} A weak stationary mapping of $D$ is a stationary mapping of $D$ with the same associated mappings. \end{prop} \begin{proof} Let $f:\mathbb{D}\longrightarrow D$ be a weak stationary mapping. Our aim is to prove that $f,\widetilde{f}\in{\mathcal O}(\overline{\DD})$ and $\rho\in\mathcal C^{\omega}(\mathbb{T})$. Choose a point $\zeta_0\in\mathbb{T}$. Since $\widetilde{f}(\zeta_0)\neq 0$, we may assume that $\widetilde{f}_1(\zeta)\neq 0$ on $\overline{\DD}\cap U_0$, where $U_0$ is a neighborhood of $\zeta_0$. This implies $\nu_{D,1}(f(\zeta_0))\neq 0$, so $\nu_{D,1}$ does not vanish on some set $V_0\subset\partial D$, relatively open in $\partial D$ and containing the point $f(\zeta_0)$. Shrinking $U_0$, if necessary, we may assume that $f(\mathbb{T}\cap U_0)\subset V_0$. Define $\psi:V_0\longrightarrow\mathbb{C}^{2n-1}$ by $$\psi(z)=\left(z_1,\ldots,z_n, \overline{\left(\frac{\nu_{D,2}(z)}{\nu_{D,1}(z)}\right)},\ldots,\overline{\left(\frac{\nu_{D,n}(z)}{\nu_{D,1}(z)}\right)}\right).$$ The set $M:=\psi(V_0)$ is the graph of a $\mathcal{C}^{\omega}$ mapping defined on the local $\mathcal{C}^{\omega}$ submanifold $V_0$, so it is a local $\mathcal{C}^{\omega}$ submanifold of $\mathbb{C}^{2n-1}$ of real dimension $2n-1$. Assume for a moment that $M$ is totally real.
Let $$g(\zeta):=\left(f_1(\zeta),\ldots,f_n(\zeta), \frac{\widetilde{f}_2(\zeta)}{\widetilde{f}_1(\zeta)},\ldots,\frac{\widetilde{f}_n(\zeta)}{\widetilde{f}_1(\zeta)}\right),\ \zeta\in\overline{\DD}\cap U_0.$$ If $\zeta\in\mathbb{T}\cap U_0$, then $\widetilde{f}_k(\zeta)\widetilde{f}_1(\zeta)^{-1} = \overline{\nu_{D,k}(f(\zeta))}\ \overline{\nu_{D,1}(f(\zeta))}^{-1}$, so $g(\zeta)=\psi(f(\zeta))$. Therefore, $g(\mathbb{T}\cap U_0)\subset M$. Thanks to the Reflection Principle (see the Appendix), $g$ extends holomorphically past $\mathbb{T}\cap U_0$, so $f$ extends holomorphically on a neighborhood of $\zeta_0$. The mapping $\overline{\nu_D\circ f}$ is real analytic on $\mathbb{T}$, so it extends to a mapping $h$ holomorphic in a neighborhood $W$ of $\mathbb{T}$. For $\zeta\in\mathbb{T}\cap U_0$ we have $$\frac{\zeta h_1(\zeta)}{\widetilde{f}_1(\zeta)}=\frac{1}{\rho(\zeta)}.$$ The function on the left-hand side is holomorphic in $\mathbb{D}\cap U_0\cap W$ and continuous in $\overline{\DD}\cap U_0\cap W$. Since it has real values on $\mathbb{T}\cap U_0$, the Reflection Principle implies that it is holomorphic in a neighborhood of $\mathbb{T}\cap U_0$. Hence $\rho$ and $\widetilde{f}$ are holomorphic in a neighborhood of $\zeta_0$. Since $\zeta_0$ is arbitrary, we get the assertion. It remains to prove that $M$ is totally real. Let $r$ be a defining function of $D$.
Recall that for any point $z\in V_0$ $$\frac{\overline{\nu_{D,k}(z)}}{\overline{\nu_{D,1}(z)}}=\frac{\partial r}{\partial z_k}(z)\left(\frac{\partial r}{\partial z_1}(z)\right)^{-1},\ k=1,\ldots,n.$$ Consider the mapping $S=(S_1,\ldots,S_n):V_0\times\mathbb{C}^{n-1}\longrightarrow\mathbb{R}\times\mathbb{C}^{n-1}$ given by $$S(z,w):=\left(r(z),\frac{\partial r}{\partial z_2}(z)-w_{1}\frac{\partial r}{\partial z_1}(z),\ldots,\frac{\partial r}{\partial z_n}(z)-w_{n-1}\frac{\partial r}{\partial z_1}(z)\right).$$ Clearly, $M=S^{-1}(\{0\})$. Hence \begin{equation}\label{tan} T_{M}^{\mathbb{R}}(z,w)\subset\ker\nabla S(z,w),\ (z,w)\in M,\end{equation} where $\nabla S:=(\nabla S_1,\ldots,\nabla S_n)$. Fix a point $(z,w)\in M$. Our goal is to prove that $T_{M}^{\mathbb{C}}(z,w)=\lbrace 0\rbrace$. Take an arbitrary vector $(X,Y)=(X_1,\ldots,X_n,Y_1,\ldots,Y_{n-1})\in T_{M}^{\mathbb{C}}(z,w)$. Then we infer from \eqref{tan} that $$\sum_{k=1}^n\frac{\partial r}{\partial z_k}(z)X_k=0,$$ i.e.\ $X\in T_{D}^{\mathbb{C}}(z)$. Denoting $v:=(z,w)$, $V:=(X,Y)$ and making use of \eqref{tan} again, we find that $$0=\nabla S_k(v)(V)=\sum_{j=1}^{2n-1}\frac{\partial S_k}{\partial v_j}(v)V_j+\sum_{j=1}^{2n-1}\frac{\partial S_k}{\partial\overline v_j}(v)\overline V_j$$ for $k=2,\ldots,n$. But $V\in T_{M}^{\mathbb{C}}(v)$, so $iV\in T_{M}^{\mathbb{C}}(v)$.
Thus $$0=\nabla S_k(v)(iV)=i\sum_{j=1}^{2n-1}\frac{\partial S_k}{\partial v_j}(v)V_j-i\sum_{j=1}^{2n-1}\frac{\partial S_k}{\partial\overline v_j}(v)\overline V_j.$$ In particular, \begin{multline*}0=\sum_{j=1}^{2n-1}\frac{\partial S_k}{\partial\overline v_j}(v)\overline V_j=\sum_{j=1}^{n}\frac{\partial S_k}{\partial\overline z_j}(z,w)\overline X_j+\sum_{j=1}^{n-1}\frac{\partial S_k}{\partial\overline w_j}(z,w)\overline Y_j=\\=\sum_{j=1}^n\frac{\partial^2r}{\partial z_k\partial\overline{z}_j}(z)\overline X_j-w_{k-1}\sum_{j=1}^n\frac{\partial^2r}{\partial z_1\partial\overline{z}_j}(z)\overline X_j. \end{multline*} The equality $M=S^{-1}(\{0\})$ gives $$w_{k-1}=\frac{\partial r}{\partial z_k}(z)\left(\frac{\partial r}{\partial z_1}(z)\right)^{-1},$$ so $$\frac{\partial r}{\partial z_1}(z)\sum_{j=1}^n\frac{\partial^2r}{\partial z_k\partial\overline{z}_j}(z)\overline X_j=\frac{\partial r}{\partial z_k}(z)\sum_{j=1}^n\frac{\partial^2r}{\partial z_1\partial\overline{z}_j}(z)\overline X_j,\ k=2,\ldots,n.$$ Note that the last equality holds also for $k=1$. Therefore, \begin{multline*} \frac{\partial r}{\partial z_1}(z)\sum_{j,k=1}^n\frac{\partial^2r}{\partial z_k\partial\overline{z}_j}(z)\overline X_jX_k=\sum_{k=1}^n\frac{\partial r}{\partial z_k}(z)\sum_{j=1}^n\frac{\partial^2r}{\partial z_1\partial\overline{z}_j}(z)\overline X_jX_k =\\=\left(\sum_{k=1}^n\frac{\partial r}{\partial z_k}(z)X_k\right)\left(\sum_{j=1}^n\frac{\partial^2r}{\partial z_1\partial\overline{z}_j}(z)\overline X_j\right)=0. \end{multline*} By the strong linear convexity of $D$ we have $X=0$.
This implies $Y=0$, since $$0=\nabla S_k(z,w)(0,Y)=\sum_{j=1}^{n-1}\frac{\partial S_k}{\partial w_j}(v)Y_j+\sum_{j=1}^{n-1}\frac{\partial S_k}{\partial\overline w_j}(v)\overline Y_j=-\frac{\partial r}{\partial z_1}(z)Y_{k-1}$$ for $k=2,\ldots,n$. \end{proof} \section{(Weak) $E$-mappings vs.\ extremal mappings and complex geodesics} In this section we will prove important properties of (weak) $E$-mappings. In particular, we will show that they are complex geodesics and unique extremals. \subsection{Weak $E$-mappings are complex geodesics and unique extremals} The results of this subsection are related to weak $E$-mappings of bounded strongly linearly convex domains $D\subset\mathbb{C}^n$, $n\geq 2$. Let $$G(z,\zeta):=(z-f(\zeta))\bullet\widetilde{f}(\zeta),\ z\in\mathbb{C}^n,\ \zeta\in\mathbb{D}_f.$$ \begin{propp}\label{1} Let $D\subset\mathbb{C}^n$, $n\geq 2$, be a bounded strongly linearly convex domain and let $f:\mathbb{D}\longrightarrow D$ be a weak $E$-mapping. Then there exist an open set $W\supset\overline D\setminus f(\mathbb{T})$ and a holomorphic mapping $F:W\longrightarrow\mathbb{D}$ such that for any $z\in W$ the number $F(z)$ is the unique solution of the equation $G(z,\zeta)=0,\ \zeta\in\mathbb{D}$. In particular, $F\circ f=\id_{\mathbb{D}}$. \end{propp} In the sequel we will strengthen the above proposition for domains with real analytic boundaries (see Proposition~\ref{34}). \begin{proof}[Proof of Proposition~\ref{1}] Set $A:=\overline{D}\setminus f(\mathbb{T})$. Since $D$ is strongly linearly convex, $\varphi_z$ does not vanish on $\mathbb{T}$ for any $z\in A$, so by a continuity argument the condition (4) of Definition~\ref{21e} holds for every $z$ in some open set $W\supset A$. For a fixed $z\in W$ we have $$G(z,\zeta)=\zeta\rho(\zeta)\varphi_z(\zeta),\ \zeta\in\mathbb{T},$$ so $\wind G(z,\cdotp)=1$.
Since $G(z,\cdotp)\in{\mathcal O}(\mathbb{D})$, it has exactly one (simple) root $F(z)$ in $\mathbb{D}$. Hence $G(z,F(z))=0$ and $\frac{\partial G}{\partial\zeta}(z,F(z))\neq 0$. By the Implicit Function Theorem, $F$ is holomorphic in $W$. The equality $F(f(\zeta))=\zeta$ for $\zeta\in\mathbb{D}$ is clear. \end{proof} From the proposition above we immediately get the following \begin{corr}\label{5} A weak $E$-mapping $f:\mathbb{D}\longrightarrow D$ of a bounded strongly linearly convex domain $D\subset\mathbb{C}^n$, $n\geq 2$, is a complex geodesic. In particular, $$c_{D}(f(\zeta),f(\xi))=\widetilde k_D(f(\zeta),f(\xi))\text{\,\ and\,\, }\gamma_D(f(\zeta);f'(\zeta))=\kappa_D(f(\zeta);f'(\zeta))$$ for any $\zeta,\xi\in\mathbb{D}$. \end{corr} Using left inverses of weak $E$-mappings, we may prove the uniqueness of extremals. \begin{propp}\label{2} Let $D\subset\mathbb{C}^n$, $n\geq 2$, be a bounded strongly linearly convex domain and let $f:\mathbb{D}\longrightarrow D$ be a weak $E$-mapping. Then for any $\xi\in(0,1)$ the mapping $f$ is the unique $\widetilde{k}_D$-extremal for $z=f(0)$, $w=f(\xi)$ \emph{(}resp.\ the unique $\kappa_D$-extremal for $z=f(0)$, $v=f'(0)$\emph{)}. \end{propp} \begin{proof} Suppose that $g$ is a $\widetilde{k}_D$-extremal for $z,w$ (resp.\ a $\kappa_D$-extremal for $z,v$) such that $g(0)=z$, $g(\xi)=w$ (resp.\ $g(0)=z$, $g'(0)=v$). Our aim is to show that $f=g$. Proposition~\ref{1} provides us with the mapping $F$, which is a left inverse of $f$. By the Schwarz Lemma, $F$ is a left inverse of $g$ as well, that is, $F\circ g=\text{id}_{\mathbb{D}}$. We claim that $\lim_{\mathbb{D}\ni\zeta\to\zeta_0}g(\zeta)=f(\zeta_0)$ for any $\zeta_0\in\mathbb{T}$ (in particular, we shall show that the limit exists). Assume the contrary.
Then there are $\zeta_0\in\mathbb{T}$ and a sequence $\{\zeta_m\}\subset\mathbb{D}$ convergent to $\zeta_0$ such that the limit $Z:=\lim_{m\to\infty}g(\zeta_m)\in\overline{D}$ exists and is not equal to $f(\zeta_0)$. We have $G(z,F(z))=0$, so putting $z=g(\zeta_m)$ we infer that $$0=(g(\zeta_m)-f(F(g(\zeta_m))))\bullet \widetilde{f}(F(g(\zeta_m)))=(g(\zeta_m)-f(\zeta_m))\bullet\widetilde{f}(\zeta_m).$$ Letting $m$ tend to infinity we get $$0=(Z-f(\zeta_0))\bullet \widetilde{f}(\zeta_0)=\zeta_0\rho(\zeta_0)\langle Z-f(\zeta_0),\nu_D(f(\zeta_0))\rangle.$$ This means that $Z-f(\zeta_0)\in T^{\mathbb{C}}_D(f(\zeta_0))$. Since $D$ is strongly linearly convex, we deduce that $Z=f(\zeta_0)$, which is a contradiction. Hence $g$ extends continuously to $\overline{\DD}$ and, by the maximum principle, $g=f$. \end{proof} \begin{propp}\label{3} Let $D\subset\mathbb{C}^n$, $n\geq 2$, be a bounded strongly linearly convex domain, let $f:\mathbb{D}\longrightarrow D$ be a weak $E$-mapping and let $a$ be an automorphism of $\mathbb{D}$. Then $f\circ a$ is a weak $E$-mapping of $D$. \end{propp} \begin{proof} Set $g:=f\circ a$. Clearly, the conditions (1') and (2') of Definition~\ref{21} are satisfied by $g$. To prove that $g$ satisfies the condition (4) of Definition~\ref{21e}, fix a point $z\in D$. Let $\varphi_{z,f}$, $\varphi_{z,g}$ be the functions appearing in the condition (4) for $f$ and $g$, respectively. Then $\varphi_{z,g}=\varphi_{z,f}\circ a$. Since $a$ maps $\mathbb{T}$ onto $\mathbb{T}$ diffeomorphically, we have $\wind\varphi_{z,g}=\pm\wind\varphi_{z,f}=0$. It remains to show that the condition (3') of Definition~\ref{21} is also satisfied by $g$. Note that the function $\widetilde a(\zeta):=\zeta/a(\zeta)$ has a holomorphic branch of the logarithm in a neighborhood of $\mathbb{T}$.
This follows from the fact that $\wind \widetilde a=0$; however, the existence of the holomorphic branch may be shown in an elementary way. Actually, it suffices to prove that $\widetilde a(\mathbb{T})\neq\mathbb{T}$. Expand $a$ as $$a(\zeta)=e^{it}\frac{\zeta-b}{1-\overline b\zeta}$$ with some $t\in\mathbb{R}$, $b\in\mathbb{D}$, and observe that $\widetilde a$ does not attain the value $-e^{-it}$. Indeed, if $\zeta/a(\zeta)=-e^{-it}$ for some $\zeta\in\mathbb{T}$, then $$\frac{1-\overline b\zeta}{1-b\overline\zeta}=-1,$$ so $2=2\re(b\overline\zeta)\leq 2|b|$, which is impossible. Consequently, there exists a function $v$ holomorphic in a neighborhood of $\mathbb{T}$ such that $$\frac{\zeta}{a(\zeta)}=e^{i v(\zeta)}.$$ Note that $v(\mathbb{T})\subset\mathbb{R}$. Expanding $v$ in a Laurent series $$v(\zeta)=\sum_{k=-\infty}^{\infty}a_k\zeta^k,\ \zeta\text{ near }\mathbb{T},$$ we infer that $a_{-k}=\overline a_k$, $k\in\mathbb{Z}$. Therefore, $$v(\zeta)=a_0+\sum_{k=1}^\infty 2\re(a_k\zeta^k)=\re\left(a_0+2\sum_{k=1}^\infty a_k\zeta^k\right),\ \zeta\in\mathbb{T}.$$ Hence there is a function $h$ holomorphic in a neighborhood of $\overline{\DD}$ such that $v=\im h$. Put $u:=h-iv$. Then $u\in{\mathcal O}(\mathbb{T})$ and $u(\mathbb{T})\subset\mathbb{R}$. Take $\rho$ as in the condition (3') of Definition~\ref{21} for $f$ and define $$r(\zeta):=\rho(a(\zeta))e^{u(\zeta)},\ \zeta\in\mathbb{T}.$$ Let us compute \begin{eqnarray*}\zeta r(\zeta)\overline{\nu_D(g(\zeta))}&=&\zeta e^{u(\zeta)}\rho(a(\zeta))\overline{\nu_D(f(a(\zeta)))}=a(\zeta)e^{h(\zeta)}\rho(a(\zeta))\overline{\nu_D(f(a(\zeta)))}\\&=&e^{h(\zeta)}\widetilde{f}(a(\zeta)),\quad\zeta\in\mathbb{T}, \end{eqnarray*} where we used the equality $\zeta e^{u(\zeta)}=\zeta e^{h(\zeta)}e^{-iv(\zeta)}=a(\zeta)e^{h(\zeta)}$ on $\mathbb{T}$. Thus $\zeta\longmapsto\zeta r(\zeta)\overline{\nu_D(g(\zeta))}$ extends holomorphically to a mapping of class ${\mathcal O}(\mathbb{D})\cap{\mathcal C}^{1/2}(\overline{\DD})$.
\end{proof} \begin{corr}\label{28} A weak $E$-mapping $f:\mathbb{D}\longrightarrow D$ of a bounded strongly linearly convex domain $D\subset\mathbb{C}^n$, $n\geq 2$, is the unique $\widetilde{k}_D$-extremal for $f(\zeta),f(\xi)$ \emph{(}resp.\ the unique $\kappa_D$-extremal for $f(\zeta),f'(\zeta)$\emph{)}, where $\zeta,\xi\in\mathbb{D}$, $\zeta\neq\xi$. \end{corr} \subsection{Generalization of Proposition~\ref{1}} The results obtained in this subsection will play an important role in the sequel. We start with \begin{propp}\label{4} Let $f:\mathbb{D}\longrightarrow D$ be an $E$-mapping. Then the function $f'\bullet\widetilde{f}$ is a positive constant. \end{propp} \begin{proof} Consider the curve $$\mathbb{R}\ni t\longmapsto f(e^{it})\in\partial D.$$ Any of its tangent vectors $ie^{it}f'(e^{it})$ belongs to $T_{D}^\mathbb{R}(f(e^{it}))$, i.e. $$\re\langle ie^{it}f'(e^{it}),\nu_D(f(e^{it}))\rangle=0.$$ Thus for $\zeta\in\mathbb{T}$ $$0=\rho(\zeta)\re\langle i\zeta f'(\zeta),\nu_D(f(\zeta))\rangle=-\im\big(f'(\zeta)\bullet\widetilde{f}(\zeta)\big),$$ so the holomorphic function $f'\bullet\widetilde{f}$ is a real constant $C$. Considering the curve $$[0,1+\varepsilon)\ni t\longmapsto f(t)\in\overline D$$ for small $\varepsilon>0$ and noting that $f([0,1))\subset D$, $f(1)\in\partial D$, we see that the derivative of $r\circ f$ at the point $t=1$ is non-negative, where $r$ is a defining function of $D$. Hence $$0\leq\re\langle f'(1),\nu_D(f(1))\rangle =\frac{1}{\rho(1)} \re( f'(1)\bullet\widetilde{f}(1))= \frac{C}{\rho(1)},$$ i.e.\ $C\geq 0$. For $\zeta\in\mathbb{T}$ $$\frac{f(\zeta)-f(0)}{\zeta}\bullet\widetilde{f}(\zeta)=\rho(\zeta)\langle f(\zeta)-f(0),\nu_D(f(\zeta))\rangle.$$ This function has winding number equal to $0$. Therefore, the function $$g(\zeta):=\frac{f(\zeta)-f(0)}{\zeta}\bullet\widetilde{f}(\zeta),$$ which is holomorphic in a neighborhood of $\overline{\DD}$, does not vanish in $\mathbb{D}$.
In particular, $C=g(0)\neq 0$. \end{proof} The function $\rho$ is determined up to a constant factor. \textbf{We choose $\rho$ so that $f'\bullet\widetilde{f}\equiv 1$}, i.e. \begin{equation}\label{rho}\rho(\zeta)^{-1}=\langle\zeta f'(\zeta),\nu_D(f(\zeta))\rangle,\ \zeta\in\mathbb{T}.\end{equation} In this way $\widetilde{f}$ and $\rho$ are uniquely determined by $f$. \begin{propp} An $E$-mapping $f:\mathbb{D}\longrightarrow D$ is injective on $\overline{\DD}$. \end{propp} \begin{proof} The function $f$ has a left inverse on $\mathbb{D}$, so it suffices to check injectivity on $\mathbb{T}$. Suppose that $f(\zeta_1)=f(\zeta_2)$ for some $\zeta_1,\zeta_2\in\mathbb{T}$, $\zeta_1\neq\zeta_2$, and consider the curves $$\gamma_j:[0,1]\ni t\longmapsto f(t\zeta_j)\in\overline D,\ j=1,2.$$ Since $$\re\langle\gamma_j'(1),\nu_D(f(\zeta_j))\rangle=\re\langle\zeta_jf'(\zeta_j),\nu_D(f(\zeta_j))\rangle =\rho(\zeta_j)^{-1}\neq 0,$$ the curves $\gamma_j$ hit $\partial D$ transversally at their common point $f(\zeta_1)$. We claim that there exists $C>0$ such that for $t\in(0,1)$ close to $1$ there is $s_t\in(0,1)$ satisfying $\widetilde k_D(f(t\zeta_1),f(s_t\zeta_2))<C$. This will finish the proof, since $$\widetilde k_D(f(t\zeta_1),f(s_t\zeta_2))=p(t\zeta_1,s_t\zeta_2)\to\infty,\ t\to 1.$$ We may assume that $f(\zeta_1)=0$ and $\nu_D(0)=(1,0,\ldots,0)=:e_1$. There exists a ball $B\subset D$ tangent to $\partial D$ at $0$. Using a homothety, if necessary, one can assume that $B=\mathbb{B}_n-e_1$. By the transversality of $\gamma_1,\gamma_2$ to $\partial D$ there exists a cone $$A:=\{z\in\mathbb{C}^n:-\re z_1>k|z|\},\quad k>0,$$ such that $\gamma_1(t),\gamma_2(t)\in A\cap B$ if $t\in(0,1)$ is close to $1$. For $z\in A$ let $k_z>k$ be the positive number satisfying the equality $$|z|=\frac{-\re z_1}{k_z}.$$ Note that for any $a\in\gamma_1((0,1))$ sufficiently close to $0$ one may find $b\in\gamma_2((0,1))\cap A\cap B$ such that $\re b_1=\re a_1$.
To get a contradiction it suffices to show that $\widetilde k_D(a,b)$ is bounded from above by a constant independent of $a$ and $b$. We have the following estimate: \begin{multline*}\widetilde k_D(a,b)\leq\widetilde k_{\mathbb{B}_n-e_1}(a,b)=\widetilde k_{\mathbb{B}_n}(a+e_1,b+e_1)=\\=\tanh^{-1}\sqrt{1-\frac{(1-|a+e_1|^2)(1-|b+e_1|^2)}{|1-\langle a+e_1,b+e_1 \rangle|^2}}.\end{multline*} The last expression is bounded from above if and only if $$\frac{(1-|a+e_1|^2)(1-|b+e_1|^2)}{|1-\langle a+e_1,b+e_1\rangle|^2}$$ is bounded from below by some positive constant. We estimate $$\frac{(1-|a+e_1|^2)(1-|b+e_1|^2)}{|1-\langle a+e_1,b+e_1\rangle|^2}=\frac{(2\re a_1+|a|^2)(2\re b_1+|b|^2)}{|\langle a, b\rangle+a_1+\overline b_1|^2}=$$$$=\frac{\left(2\re a_1+\frac{(\re a_1)^2}{k^2_a}\right)\left(2\re a_1+\frac{(\re a_1)^2}{k^2_b}\right)}{|\langle a, b\rangle+2\re a_1+i\im a_1-i\im b_1|^2}\geq\frac{(\re a_1)^2\left(2+\frac{\re a_1}{k^2_a}\right)\left(2+\frac{\re a_1}{k^2_b}\right)}{2|\langle a, b\rangle+i\im a_1-i\im b_1|^2+2|2\re a_1|^2}$$$$\geq\frac{(\re a_1)^2\left(2+\frac{\re a_1}{k^2_a}\right)\left(2+\frac{\re a_1}{k^2_b}\right)}{2(|a||b|+|a|+|b|)^2+8(\re a_1)^2}=\frac{(\re a_1)^2\left(2+\frac{\re a_1}{k^2_a}\right)\left(2+\frac{\re a_1}{k^2_b}\right)}{2\left(\frac{(-\re a_1)^2}{k^2_ak^2_b}-\frac{\re a_1}{k_a}-\frac{\re a_1}{k_b}\right)^2+8(\re a_1)^2}$$$$=\frac{\left(2+\frac{\re a_1}{k^2_a}\right)\left(2+\frac{\re a_1}{k^2_b}\right)}{2\left(\frac{-\re a_1}{k^2_ak^2_b}+\frac{1}{k_a}+\frac{1}{k_b}\right)^2+8}>\frac{1}{2(1+2/k)^2+8}.$$ This finishes the proof. \end{proof} Assume that we are in the setting of Proposition~\ref{1} and that $D$ has real analytic boundary. Our aim is to replace $W$ by a neighborhood of $\overline D$. \begin{remm}\label{przed34} For $\zeta_0\in\mathbb{D}_f$ we have $G(f(\zeta_0),\zeta_0)=0$ and $\frac{\partial G}{\partial\zeta}(f(\zeta_0),\zeta_0)=-1$.
By the Implicit Function Theorem there exist neighborhoods $U_{\zeta_0},V_{\zeta_0}$ of $f(\zeta_0)$ and $\zeta_0$, respectively, and a holomorphic function $F_{\zeta_0}:U_{\zeta_0}\longrightarrow V_{\zeta_0}$ such that for any $z\in U_{\zeta_0}$ the point $F_{\zeta_0}(z)$ is the unique solution of the equation $G(z,\zeta)=0$, $\zeta\in V_{\zeta_0}$. In particular, if $\zeta_0\in\mathbb{D}$, then $F_{\zeta_0}=F$ near $f(\zeta_0)$. \end{remm} \begin{propp}\label{34} Let $f:\mathbb{D}\longrightarrow D$ be an $E$-mapping. Then there exist arbitrarily small neighborhoods $U$, $V$ of $\overline D$, $\overline{\DD}$, respectively, such that for any $z\in U$ the equation $G(z,\zeta)=0$, $\zeta\in V$, has exactly one solution. \end{propp} \begin{proof} In view of Proposition~\ref{1} and Remark~\ref{przed34} it suffices to prove that there exist neighborhoods $U$, $V$ of $\overline D$, $\overline{\DD}$, respectively, such that for any $z\in U$ the equation $G(z,\cdotp)=0$ has at most one solution $\zeta\in V$. Assume the contrary. Then for any neighborhoods $U$ of $\overline D$ and $V$ of $\overline{\DD}$ there are $z\in U$, $\zeta_1,\zeta_2\in V$, $\zeta_1\neq\zeta_2$, such that $G(z,\zeta_1)=G(z,\zeta_2)=0$. For $m\in\mathbb{N}$ put $$U_m:=\{z\in\mathbb{C}^n:\dist(z,D)<1/m\},$$ $$V_m:=\{\zeta\in\mathbb{C}:\dist(\zeta,\mathbb{D})<1/m\}.$$ There exist $z_m\in U_m$, $\zeta_{m,1},\zeta_{m,2}\in V_m$, $\zeta_{m,1}\neq\zeta_{m,2}$, such that $G(z_m,\zeta_{m,1})=G(z_m,\zeta_{m,2})=0$. Passing to a subsequence, we may assume that $z_m\to z_0\in\overline D$. Analogously, we may assume that $\zeta_{m,1}\to\zeta_1\in \overline{\DD}$ and $\zeta_{m,2}\to\zeta_2\in\overline{\DD}$. Clearly, $G(z_0,\zeta_1)=G(z_0,\zeta_2)=0$. Let us consider a few cases. 1) If $\zeta_1,\zeta_2\in\mathbb{T}$, then $G(z_0,\zeta_j)=0$ is equivalent to $$\langle z_0-f(\zeta_j), \nu_D(f(\zeta_j))\rangle=0,\ j=1,2,$$ and consequently $z_0-f(\zeta_j)\in T^{\mathbb{C}}_D(f(\zeta_j))$.
By the strong linear convexity of $D$ we get $z_0=f(\zeta_j)$. But $f$ is injective on $\overline{\DD}$, so $\zeta_1=\zeta_2=:\zeta_0$. It follows from Remark~\ref{przed34} that in a sufficiently small neighborhood of $(z_0,\zeta_0)$ all solutions of the equation $G(z,\zeta)=0$ are of the form $(z,F_{\zeta_0}(z))$. The points $(z_m,\zeta_{m,1})$ and $(z_m,\zeta_{m,2})$ belong to this neighborhood for large $m$, which gives a contradiction. 2) If $\zeta_1\in\mathbb{T}$ and $\zeta_2\in\mathbb{D}$, then analogously as above we deduce that $z_0=f(\zeta_1)$. Take an arbitrary sequence $\{\eta_m\}\subset\mathbb{D}$ convergent to $\zeta_1$. Then $f(\eta_m)\in D$ and $f(\eta_m)\to z_0$, so the sequence $G(f(\eta_m),\cdotp)$ converges to $G(z_0,\cdotp)$ uniformly on $\mathbb{D}$. Since $G(z_0,\cdotp)\not\equiv 0$, $G(z_0,\zeta_2)=0$ and $\zeta_2\in\mathbb{D}$, we deduce from the Hurwitz Theorem that for large $m$ the functions $G(f(\eta_m),\cdotp)$ have roots $\theta_m\in\mathbb{D}$ such that $\theta_m\to\zeta_2$. Hence $G(f(\eta_m),\theta_m)=0$ and from the uniqueness of solutions in $D\times\mathbb{D}$ (Proposition~\ref{1}) we have $$\theta_m=F(f(\eta_m))=\eta_m.$$ This is a contradiction, because the left side tends to $\zeta_2$ and the right one to $\zeta_1$ as $m\to\infty$. 3) We are left with the case $\zeta_1,\zeta_2\in\mathbb{D}$. If $z_0\in\overline{D}\setminus f(\mathbb{T})$, then $z_0\in W$. In $W\times\mathbb{D}$ all solutions of the equation $G=0$ are of the form $(z,F(z))$, $z\in W$. But for large $m$ the points $(z_m,\zeta_{m,1})$, $(z_m,\zeta_{m,2})$ belong to $W\times\mathbb{D}$, which contradicts the uniqueness. If $z_0\in f(\mathbb{T})$, then $z_0=f(\zeta_0)$ for some $\zeta_0\in\mathbb{T}$. Clearly, $G(f(\zeta_0),\zeta_0)=0$, whence $G(z_0,\zeta_0)=G(z_0,\zeta_1)=0$ with $\zeta_0\in\mathbb{T}$, $\zeta_1\in \mathbb{D}$. This is just case 2), which has already been considered.
\end{proof} \begin{corr} There are neighborhoods $U$, $V$ of $\overline D$ and $\overline{\DD}$ respectively, with $V\Subset\mathbb{D}_f$, such that the function $F$ extends holomorphically to $U$. Moreover, all solutions of the equation $G|_{U\times V}=0$ are of the form $(z,F(z))$, $z\in U$. In particular, $F\circ f=\id_{V}$. \end{corr} \section{H\"older estimates}\label{22} \begin{df}\label{30} For a given $c>0$ let the family $\mathcal{D}(c)$ consist of all pairs $(D,z)$, where $D\subset\mathbb{C}^n$, $n\geq 2$, is a bounded pseudoconvex domain with $\mathcal C^2$-smooth boundary and $z\in D$, satisfying \begin{enumerate} \item $\dist(z,\partial D)\geq 1/c$; \item the diameter of $D$ is not greater than $c$ and $D$ satisfies the interior ball condition with a radius $1/c$; \item for any $x,y\in D$ there exist $m\leq 8 c^2$ and open balls $B_0,\ldots,B_m\subset D$ of radius $1/(2c)$ such that $x\in B_0$, $y\in B_m$ and the distance between the centers of the balls $B_j$, $B_{j+1}$ is not greater than $1/(4c)$ for $j=0,\ldots,m-1$; \item for any open ball $B\subset\mathbb{C}^n$ of radius not greater than $1/c$, having non-empty intersection with $\partial D$, there exists a mapping $\Phi\in{\mathcal O}(\overline{D},\mathbb{C}^n)$ such that \begin{enumerate} \item for any $w\in\Phi(B\cap\partial D)$ there is a ball of radius $c$ containing $\Phi(D)$ and tangent to $\partial\Phi(D)$ at $w$ (let us call this the ``exterior ball condition'' with a radius $c$); \item $\Phi$ is biholomorphic in a neighborhood of $\overline B$ and $\Phi^{-1}(\Phi(B))=B$; \item the entries of all matrices $\Phi'$ on $B\cap\overline D$ and $(\Phi^{-1})'$ on $\Phi(B\cap\overline{D})$ are bounded in modulus by $c$; \item $\dist(\Phi(z),\partial\Phi(D))\geq 1/c$; \end{enumerate} \item the normal vector $\nu_D$ is Lipschitz with a constant $2c$, that is $$|\nu_D(a)-\nu_D(b)|\leq 2c|a-b|,\ a,b\in \partial D;$$ \item the $\varepsilon$-hull of $D$, i.e.
a domain $D_{\varepsilon}:=\{w\in\mathbb C^n:\dist (w,D)<\varepsilon\}$, is strongly pseudoconvex for any $\varepsilon\in (0,1/c).$ \end{enumerate} \end{df} Recall that the {\it interior ball condition} with a radius $r>0$ means that for any point $a\in\partial D$ there is $a'\in D$ and a ball $B_n(a',r)\subset D$ tangent to $\partial D$ at $a$. Equivalently, $$D=\bigcup_{a'\in D'}B_n(a',r)$$ for some set $D'\subset D$. It may be shown that (2) and (5) can be expressed in terms of the boundedness of the normal curvature, the boundedness of the domain and condition (3). This, however, lies beyond the scope of this paper and requires some very technical arguments, so we omit the proof of this fact. The reason why we decided to use (2) in such a form is its connection with condition (3) (this allows us to simplify the proof in some places). \begin{rem}\label{con} Note that any convex domain satisfying conditions (1)--(4) of Definition~\ref{30} satisfies conditions (5) and (6) as well. Indeed, it follows from (2) that for any $a\in\partial D$ there exists a ball $B_n(a',1/c)\subset D$ tangent to $\partial D$ at $a$. Then $$\nu_D(a)=\frac{a'-a}{|a'-a|}=c(a'-a).$$ Hence $$|\nu_D(a)-\nu_D(b)|=c|a'-a-b'+b|=c|a'-b'-(a-b)|\leq c|a'-b'|+c|a-b|.$$ Since $D$ is convex, we have $|a'-b'|\leq|a-b|$, which gives (5). Condition (6) is also clear --- for any $\varepsilon>0$ the $\varepsilon$-hull of a strongly convex domain is strongly convex. \end{rem} \begin{rem} For a convex domain $D$ condition (3) of Definition~\ref{30} follows from condition (2). Indeed, for two points $x,y\in D$ take two balls of radius $1/(2c)$ containing them and contained in $D$. Then divide the interval between the centers of the balls into $[4c^2]+1$ equal parts and take balls of radius $1/(2c)$ with centers at the points of the partition.
Note also that if $D$ is strongly convex and satisfies the interior ball condition with a radius $1/c$ and the exterior ball condition with a radius $c$, then one can take $\Phi:=\id_{\mathbb{C}^n}$. \end{rem} \begin{rem}\label{D(c),4} For a strongly pseudoconvex domain $D$, a constant $c'>0$ and any $z\in D$ such that $\dist(z,\partial D)>1/c'$ there exists $c=c(c')>0$ satisfying $(D,z)\in\mathcal{D}(c)$. Indeed, the conditions (1)--(3) and (5)--(6) are clear. Only (4) is non-trivial. The construction of the mapping $\Phi$ amounts to the construction of Forn\ae ss peak functions. Indeed, apply Proposition 1 from \cite{For} directly to any boundary point of $\partial D$ (obviously $D$ has a Stein neighborhood basis). This gives a covering of $\partial D$ by a finite number of balls $B_j$, maps $\Phi_j\in{\mathcal O}(\overline{D},\mathbb{C}^n)$ and strongly convex $C^\infty$-smooth domains $C_j$, $j=1,\ldots, N$, such that \begin{itemize}\item $\Phi_j(D)\subset C_j$; \item $\Phi_j(\overline D)\subset\overline C_j$; \item $\Phi_j(B_j\setminus\overline D)\subset\mathbb C^n\setminus\overline C_j$; \item $\Phi_j^{-1}(\Phi_j(B_j))=B_j$; \item $\Phi_j|_{B_j}: B_j\longrightarrow \Phi_j(B_j)$ is biholomorphic. \end{itemize} Therefore, one may choose $c>0$ such that every $C_j$ satisfies the exterior ball condition with $c$, i.e. for any $x\in \partial C_j$ there is a ball of radius $c$ containing $C_j$ and tangent to $\partial C_j$ at $x$, every ball of radius $1/c$ having non-empty intersection with $\partial D$ is contained in some $B_j$ (here one may use a standard argument invoking the Lebesgue number), and the conditions (c), (d) are also satisfied (with $\Phi:=\Phi_j$). \end{rem} In this section we use the words `uniform', `uniformly' for pairs $(D,z)\in \mathcal D(c)$. This means that the estimates depend only on $c$ and are independent of the particular $D$ and $z$ with $(D,z)\in\mathcal{D}(c)$, and of the $E$-mappings of $D$ mapping $0$ to $z$.
Moreover, in what follows we assume that $D$ is a strongly linearly convex domain with real analytic boundary. \begin{prop}\label{7} Let $f:(\mathbb{D},0)\longrightarrow(D,z)$ be an $E$-mapping. Then $$\dist(f(\zeta),\partial D)\leq C(1-|\zeta|),\ \zeta\in\overline{\DD},$$ with $C>0$ uniform if $(D,z)\in\mathcal{D}(c)$. \end{prop} \begin{proof} There exists a uniform $C_1$ such that $$\text{if }\dist(w,\partial D)\geq 1/c\text{ then }k_D(w,z)<C_1.$$ Indeed, let $\dist(w,\partial D)\geq 1/c$ and let balls $B_0,\ldots,B_m$ with centers $b_0,\ldots,b_m$ be chosen for the points $w$, $z$ as in condition (3) of Definition~\ref{30}. Then \begin{multline*}k_D(w,z)\leq k_D(w,b_0)+\sum_{j=0}^{m-1}k_D(b_j,b_{j+1})+k_D(b_m,z)\leq\\\leq k_{B_n(w,1/c)}(w,b_0)+\sum_{j=0}^{m-1}k_{B_j}(b_j,b_{j+1})+k_{B_n(z,1/c)}(b_m,z)=\\=p\left(0,\frac{|w-b_0|}{1/c}\right)+\sum_{j=0}^{m-1}p\left(0,\frac{|b_j-b_{j+1}|}{1/(2c)}\right)+ p\left(0,\frac{|b_m-z|}{1/c}\right)\leq\\\leq(m+2)p\left(0,\frac{1}{2}\right)\leq(8c^2+2)p\left(0,\frac{1}{2}\right)=:C_1. \end{multline*} If $\zeta\in\mathbb{D}$ is such that $\dist(f(\zeta),\partial D)\geq 1/c$ then $$k_D(f(0),f(\zeta))\leq C_2-\frac{1}{2}\log\dist(f(\zeta),\partial D)$$ with a uniform $C_2:=C_1+\frac{1}{2}\log c$. In the other case, i.e. when $\dist(f(\zeta),\partial D)<1/c$, denote by $\eta$ the point of $\partial D$ nearest to $f(\zeta)$. Let $w\in D$ be the center of a ball $B$ of radius $1/c$ tangent to $\partial D$ at $\eta$. By condition (2) of Definition~\ref{30} we have $B\subset D$. Hence \begin{multline*}k_D(f(0),f(\zeta))\leq k_D(f(0),w)+k_D(w,f(\zeta))\leq\\\leq C_1+k_B(w,f(\zeta))\leq C_1+\frac{1}{2}\log 2-\frac{1}{2}\log\left(1-\frac{|f(\zeta)-w|}{1/c}\right)=\\=C_1+\frac{1}{2}\log 2-\frac{1}{2}\log(c\dist(f(\zeta),\partial B))=C_3-\frac{1}{2}\log\dist(f(\zeta),\partial D) \end{multline*} with a uniform $C_3:=C_1+\frac{1}{2}\log\frac{2}{c}$.
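For the reader's convenience, let us record the standard formulas behind the two chains of estimates above (we state them without proof, only to keep the computation self-contained): $$p(0,t)=\frac{1}{2}\log\frac{1+t}{1-t}\leq\frac{1}{2}\log 2-\frac{1}{2}\log(1-t),\quad t\in[0,1),$$ and, for a Euclidean ball $B=B_n(b,R)\subset\mathbb{C}^n$, $$k_B(b,x)=p\left(0,\frac{|x-b|}{R}\right),\quad x\in B,$$ the classical formula for the Kobayashi distance from the center of a ball. In particular, for $B=B_n(w,1/c)$ we get $k_B(w,x)\leq\frac{1}{2}\log 2-\frac{1}{2}\log\left(1-c|x-w|\right)$, which is exactly the bound used in the second chain of inequalities.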
We have obtained the same type of estimate in both cases. On the other hand, by Corollary~\ref{5}, $$k_D(f(0),f(\zeta))=p(0,\zeta)\geq-\frac{1}{2}\log(1-|\zeta|),$$ which finishes the proof. \end{proof} Recall that we have assumed that $\rho$ is of the form~\eqref{rho}. \begin{prop}\label{9} Let $f:(\mathbb{D},0)\longrightarrow(D,z)$ be an $E$-mapping. Then $$C_1<\rho(\zeta)^{-1}<C_2,\ \zeta\in\mathbb{T},$$ where $C_1,C_2$ are uniform if $(D,z)\in\mathcal{D}(c)$. \end{prop} \begin{proof} For the upper estimate fix $\zeta_0\in\mathbb{T}$. Set $B:=B_n(f(\zeta_0),1/c)$ and let $\Phi\in{\mathcal O}(\overline{D},\mathbb{C}^n)$ be as in condition (4) of Definition~\ref{30} for $B$. One can assume that $f(\zeta_0)=\Phi(f(\zeta_0))=0$ and $\nu_D(0)=\nu_{\Phi(D)}(0)=(1,0,\ldots,0)$. Then $\Phi(D)$ is contained in the half-space $\{w\in\mathbb{C}^n:\re w_1<0\}$. Putting $h:=\Phi\circ f$ we have $$h_1(\mathbb{D})\subset\{w_1\in\mathbb{C}:\re w_1<0\}.$$ By virtue of the Schwarz Lemma on the half-plane, \begin{equation}\label{schh1}|h_1'(t\zeta_0)|\leq\frac{-2\re h_1(t\zeta_0)}{1-|t\zeta_0|^2}.\end{equation} Let $\delta$ be the signed boundary distance of $\Phi(D)$, i.e. $$\delta(x):=\begin{cases}-\dist(x,\partial\Phi(D)),\ x\in\Phi(D)\\\ \ \ \dist(x,\partial\Phi(D)),\ x\notin\Phi(D).\end{cases}$$ It is a defining function of $\Phi(D)$ in a neighborhood of $0$ (recall that $\Phi^{-1}(\Phi(B))=B$). Observe that $$\delta(x)=\delta(0)+\re\langle\nabla\delta(0), x\rangle+O(|x|^2)=\re x_1+O(|x|^2).$$ If $x\in\Phi(D)$ tends transversally to $0$, then the angle between the vector $x$ and the hyperplane $\{w\in\mathbb{C}^n:\re w_1=0\}$ is separated from $0$, i.e. its sine satisfies $(-\re x_1)/|x|>\varepsilon$ for some $\varepsilon>0$ independent of $x$. Thus $$\frac{\delta(x)}{\re x_1}=1+O(|x|)\text{ as }x\to 0\text{ transversally. }$$ Consequently \begin{equation}\label{50}-\re x_1\leq 2\dist(x,\partial\Phi(D))\text{ as }x\to 0\text{ transversally. 
}\end{equation} We know that $t\longmapsto f(t\zeta_0)$ hits $\partial D$ transversally. Therefore, $t\longmapsto h(t\zeta_0)$ hits $\partial \Phi(D)$ transversally as well. Indeed, we have \begin{multline}\label{hf}\left\langle\left.\frac{d}{dt}h(t\zeta_0)\right|_{t=1},\nu_{\Phi(D)}(h(\zeta_0))\right\rangle=\left\langle \Phi'(0)f'(\zeta_0)\zeta_0,\frac{(\Phi^{-1})'(0)^*\nabla r(0)}{|(\Phi^{-1})'(0)^*\nabla r(0)|}\right\rangle=\\=\frac{\langle\zeta_0 f'(\zeta_0),\nabla r(0)\rangle}{|(\Phi'(0)^{-1})^*\overline{\nabla r(0)}|}=\frac{\langle\zeta_0 f'(\zeta_0),\nu_D(f(\zeta_0))|\nabla r(0)|\rangle}{|(\Phi'(0)^{-1})^*\overline{\nabla r(0)}|}, \end{multline} where $r$ is a defining function of $D$. In particular, \begin{multline*} \re \left\langle\left.\frac{d}{dt}h(t\zeta_0)\right|_{t=1},\nu_{\Phi(D)}(h(\zeta_0))\right\rangle=\re \frac{\langle\zeta_0 f'(\zeta_0),\nu_D(f(\zeta_0))|\nabla r(0)|\rangle}{|(\Phi'(0)^{-1})^*\overline{\nabla r(0)}|}=\\=\frac{\rho(\zeta_0)^{-1}|\nabla r(0)|}{|(\Phi'(0)^{-1})^*\overline{\nabla r(0)}|}\neq 0.\end{multline*} This proves that $t\longmapsto h(t\zeta_0)$ hits $\partial\Phi(D)$ transversally. Consequently, we may put $x=h(t\zeta_0)$ into \eqref{50} to get \begin{equation}\label{hf1}\frac{-2\re h_1(t\zeta_0)}{1-|t\zeta_0|^2}\leq\frac{4\dist(h(t\zeta_0),\partial\Phi(D))} {1-|t\zeta_0|^2},\ t\to 1.\end{equation} But $\Phi$ is a biholomorphism near $0$, so \begin{equation}\label{nfr}\frac{4\dist(h(t\zeta_0),\partial\Phi(D))}{1-|t\zeta_0|^2}\leq C_3\frac{\dist(f(t\zeta_0),\partial D)}{1-|t\zeta_0|},\ t\to 1,\end{equation} where $C_3$ is a uniform constant depending only on $c$ (thanks to condition (4)(c) of Definition~\ref{30}). By Proposition~\ref{7}, the right side of~\eqref{nfr} does not exceed some uniform constant.
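For completeness, let us indicate how the half-plane version of the Schwarz Lemma quoted in \eqref{schh1} follows from the Schwarz--Pick lemma (a standard argument, recalled here only to keep the exposition self-contained). Fix $\lambda\in\mathbb{D}$ and let $$\phi(w):=\frac{w-h_1(\lambda)}{w+\overline{h_1(\lambda)}}$$ be a conformal mapping of the half-plane $\{w\in\mathbb{C}:\re w<0\}$ onto $\mathbb{D}$ with $\phi(h_1(\lambda))=0$. The Schwarz--Pick lemma applied to $\phi\circ h_1:\mathbb{D}\longrightarrow\mathbb{D}$ gives $$|\phi'(h_1(\lambda))||h_1'(\lambda)|=|(\phi\circ h_1)'(\lambda)|\leq\frac{1-|\phi(h_1(\lambda))|^2}{1-|\lambda|^2}=\frac{1}{1-|\lambda|^2},$$ and since $|\phi'(h_1(\lambda))|=\frac{1}{-2\re h_1(\lambda)}$, we obtain $|h_1'(\lambda)|\leq\frac{-2\re h_1(\lambda)}{1-|\lambda|^2}$, i.e. \eqref{schh1} with $\lambda=t\zeta_0$.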
It follows from \eqref{hf} that \begin{multline*}\rho(\zeta_0)^{-1}=|\langle f'(\zeta_0)\zeta_0,\nu_D(f(\zeta_0))\rangle|\leq C_4|\langle h'(\zeta_0), \nu_{\Phi(D)}(h(\zeta_0))\rangle|=\\=C_4|h_1'(\zeta_0)|=\lim_{t\to 1}C_4|h_1'(t\zeta_0)|\end{multline*} with a uniform $C_4$ (here we use condition (4)(c) of Definition~\ref{30} again). Combining \eqref{schh1}, \eqref{hf1} and \eqref{nfr} we get the upper estimate for $\rho(\zeta_0)^{-1}.$ Now we prove the lower estimate. Let $r$ be the signed boundary distance to $\partial D$. For $\varepsilon=1/c$ the function $$\varrho(w):=-\log(\varepsilon-r(w))+\log\varepsilon,\ w\in D_\varepsilon,$$ where $D_\varepsilon$ is the $\varepsilon$-hull of $D$, is plurisubharmonic and defining for $D$. Indeed, we have $$-\log(\varepsilon-r(w))=-\log\dist(w,\partial D_\varepsilon),\ w\in D_\varepsilon,$$ and $D_\varepsilon$ is pseudoconvex. Therefore, the function $$v:=\varrho\circ f:\overline{\mathbb{D}}\longrightarrow(-\infty,0]$$ is subharmonic on $\mathbb{D}$. Moreover, since $f$ maps $\mathbb{T}$ into $\partial D$, we infer that $v=0$ on $\mathbb{T}$.
Moreover, since $|f(\lambda)-z|<c$ for $\lambda\in\mathbb{D}$, we have $$|f(\lambda)-z|<\frac{1}{2c}\text{ if }|\lambda|\leq\frac{1}{2c^2}.$$ Therefore, for a fixed $\zeta_0\in\mathbb{T}$ $$M_{\zeta_0}(x):=\max_{t\in[0,2\pi]}v(\zeta_0 e^{x+it})\leq-\log\left(1+\frac{1}{2c\varepsilon}\right)=:-C_5\text{ if }x\leq-\log(2c^2).$$ Since $M_{\zeta_0}$ is convex for $x\leq 0$ and $M_{\zeta_0}(0)=0$, we get $$v(\zeta_0 e^x)\leq M_{\zeta_0}(x)\leq\frac{C_5x}{\log(2c^2)}\text{\ \ \ for \ }-\log(2c^2)\leq x\leq 0.$$ Hence (remember that $v(\zeta_0)=0$) \begin{multline}\label{wk}\frac{C_5}{\log(2c^2)}\leq\left.\frac{d}{dx}v(\zeta_0 e^x)\right|_{x=0}=\sum_{j=1}^n\frac{\partial\varrho}{\partial z_j}(f(\zeta_0))f_j'(\zeta_0)\zeta_0=\\=\langle\zeta_0 f'(\zeta_0),\nabla\varrho(f(\zeta_0))\rangle=\rho(\zeta_0)^{-1}|\nabla\varrho(f(\zeta_0))|.\end{multline} Moreover, \begin{multline*}|\nabla\varrho(f(\zeta_0))|= \left\langle\nabla\varrho(f(\zeta_0)),\frac{\nabla\varrho(f(\zeta_0))}{|\nabla\varrho(f(\zeta_0))|}\right\rangle_\mathbb{R} =\langle\nabla\varrho(f(\zeta_0)),\nu_D(f(\zeta_0))\rangle_\mathbb{R}=\\=\frac{\partial\varrho}{\partial\nu_D}(f(\zeta_0))=\lim_{t\to 0}\frac{\varrho(f(\zeta_0)+t\nu_D(f(\zeta_0)))-\varrho(f(\zeta_0))}{t}=\frac{1}{\varepsilon}=c, \end{multline*} as $r(a+t\nu(a))=t$ if $a\in \partial D$ and $t\in\mathbb R$ is small enough. This, together with \eqref{wk}, finishes the proof of the lower estimate. \end{proof} \begin{prop}\label{8} Let $f:(\mathbb{D},0)\longrightarrow (D,z)$ be an $E$-mapping. Then $$|f(\zeta_1)-f(\zeta_2)|\leq C\sqrt{|\zeta_1-\zeta_2|},\ \zeta_1,\zeta_2\in\overline{\DD},$$ where $C$ is uniform if $(D,z)\in\mathcal{D}(c)$. \end{prop} \begin{proof} Let $\zeta_0\in\mathbb{D}$ be such that $1-|\zeta_0|<1/(cC)$, where $C$ is as in Proposition~\ref{7}. Then $B:=B_n(f(\zeta_0),1/c)$ intersects $\partial D$. Take $\Phi$ for the ball $B$ from condition (4) of Definition~\ref{30}.
Let $w$ denote the point of $\partial\Phi(D)$ nearest to $\Phi(f(\zeta_0))$. From conditions (4)(b)--(c) of Definition~\ref{30} we find that there is a uniform constant $r<1$ such that the point $w$ belongs to $\Phi(B\cap\partial D)$ provided that $|\zeta_0|\geq r$. From condition (4)(a) of Definition~\ref{30} we get that there is $w_0$ such that $\Phi(D)\subset B_n(w_0,c)$ and the ball $B_n(w_0,c)$ is tangent to $\Phi(D)$ at $w$. Let $$h(\zeta):=(\Phi\circ f)\left(\frac{\zeta_0-\zeta}{1-\overline{\zeta_0}\zeta}\right),\ \zeta\in\mathbb{D}.$$ Then $h$ is holomorphic, $h(\mathbb{D})\subset B_n(w_0,c)$ and $h(0)=\Phi(f(\zeta_0))$. Using Lemma~\ref{schw} we get \begin{multline*}|h'(0)|\leq\sqrt{c^2-|h(0)-w_0|^2}\leq\sqrt{2c(c-|\Phi(f(\zeta_0))-w_0|)}=\\ =\sqrt{2c(|w_0-w|-|\Phi(f(\zeta_0))-w_0|)}\leq\sqrt{2c}\sqrt{|\Phi(f(\zeta_0))-w|}=\\ =\sqrt{2c}\sqrt{\dist(\Phi(f(\zeta_0)),\partial\Phi(D))}.\end{multline*} Since $$h'(0)=\Phi'(f(\zeta_0))f'(\zeta_0)\left.\frac{d}{d\zeta}\frac{\zeta_0-\zeta}{1-\overline{\zeta_0}\zeta}\right|_{\zeta=0},$$ by condition (4)(c) of Definition~\ref{30} we get $$|h'(0)|\geq C_1|f'(\zeta_0)|(1-|\zeta_0|^2)$$ with a uniform $C_1$, so $$|f'(\zeta_0)|\leq\frac{|h'(0)|}{C_1(1-|\zeta_0|^2)}\leq\frac{\sqrt{2c}}{C_1}\frac{\sqrt{\dist(\Phi(f(\zeta_0)),\partial\Phi(D))}}{1-|\zeta_0|^2}\leq C_2\frac{\sqrt{\dist(f(\zeta_0),\partial D)}}{1-|\zeta_0|^2},$$ where $C_2$ is uniform. Combining this with Proposition~\ref{7} we obtain \begin{equation}\label{46}|f'(\zeta_0)|\leq C_3\frac{\sqrt{1-|\zeta_0|}}{1-|\zeta_0|^2}\leq\frac{C_3}{\sqrt{1-|\zeta_0|}},\end{equation} where the constant $C_3$ is uniform. We have shown that \eqref{46} holds for $r\leq |\zeta_0|<1$ with a uniform $r<1$. For $|\zeta_0|<r$ we estimate in the following way: $$|f'(\zeta_0)|\leq\max_{|\zeta|=r}|f'(\zeta)|\leq\frac{C_3}{\sqrt{1-r}}\leq\frac{C_4}{\sqrt{1-|\zeta_0|}}$$ with a uniform $C_4:=C_3/\sqrt{1-r}$.
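The passage from a derivative estimate of the form \eqref{46} to a H\"older estimate is an instance of the classical Hardy--Littlewood theorem, whose statement we recall for the reader's convenience (this is the form of the result we need): if $g\in\mathcal{O}(\mathbb{D})$ satisfies $$|g'(\zeta)|\leq\frac{M}{(1-|\zeta|)^{1-\alpha}},\quad\zeta\in\mathbb{D},$$ for some $\alpha\in(0,1]$ and $M>0$, then $g$ extends continuously to $\overline{\DD}$ and $$|g(\zeta_1)-g(\zeta_2)|\leq C(\alpha)M|\zeta_1-\zeta_2|^{\alpha},\quad\zeta_1,\zeta_2\in\overline{\DD},$$ with a constant $C(\alpha)$ depending only on $\alpha$.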
Using Theorems \ref{lit1} and \ref{lit2} with $\alpha=1/2$ we finish the proof. \end{proof} \begin{prop}\label{10a} Let $f:(\mathbb{D} ,0)\longrightarrow (D,z)$ be an $E$-mapping. Then $$|\rho(\zeta_1)-\rho(\zeta_2)|\leq C\sqrt{|\zeta_1-\zeta_2|},\ \zeta_1,\zeta_2\in\mathbb{T},$$ where $C$ is uniform if $(D,z)\in\mathcal{D}(c)$. \end{prop} \begin{proof} It suffices to prove that there exist uniform $C,C_1>0$ such that $$|\rho(\zeta_1)-\rho(\zeta_2)|\leq C\sqrt{|\zeta_1-\zeta_2|},\ \zeta_1,\zeta_2\in\mathbb{T},\ |\zeta_1-\zeta_2|<C_1.$$ Fix $\zeta_1\in\mathbb{T}$. Without loss of generality we may assume that $\nu_{D,1}(f(\zeta_1))=1$. Let $0<C_1\leq 1/4$ be uniform and such that $$|\nu_{D,1}(f(\zeta))-1|<1/2,\ \zeta\in\mathbb{T}\cap B_n(\zeta_1,3C_1).$$ This is possible, since by Proposition \ref{8} $$|{\nu_D(f(\zeta))}-{\nu_D(f(\zeta'))}|\leq 2c|f(\zeta)-f(\zeta')|\leq C'\sqrt{|\zeta-\zeta'|},\ \zeta,\zeta'\in\mathbb{T},$$ with a uniform $C'>0$. There exists a function $\psi\in{\mathcal C}^1(\mathbb{T},[0,1])$ such that $\psi=1$ on $\mathbb{T}\cap B_n(\zeta_1,2C_1)$ and $\psi=0$ on $\mathbb{T}\setminus B_n(\zeta_1,3C_1)$. Then the function $\varphi:\mathbb{T}\longrightarrow\mathbb{C}$ defined by $$\varphi:=(\overline{\nu_{D,1}\circ f}-1)\psi+1$$ satisfies \begin{enumerate} \item $\varphi(\zeta)=\overline{\nu_{D,1}(f(\zeta))}$, $\zeta\in\mathbb{T}\cap B_n(\zeta_1,2C_1)$; \item $|\varphi(\zeta)-1|<1/2$, $\zeta\in\mathbb{T}$; \item $\varphi$ is uniformly $1/2$-H\"older continuous on $\mathbb{T}$, i.e. it is $1/2$-H\"older continuous with a uniform constant (remember that $\psi$ was chosen uniformly). \end{enumerate} First observe that $\log\varphi$ is well-defined: by (2) the values of $\varphi$ lie in the disc $\{w\in\mathbb{C}:|w-1|<1/2\}$, which avoids $0$, so the principal branch of the logarithm may be used. Using the properties listed above we deduce that $\log\varphi$ and $\im\log\varphi$ are uniformly $1/2$-H\"older continuous on $\mathbb{T}$ as well.
The function $\im\log\varphi$ can be extended continuously to a function $v:\overline{\DD}\longrightarrow\mathbb{R}$, harmonic in $\mathbb{D}$. There is a function $h\in\mathcal O(\mathbb{D})$ such that $v=\im h$ in $\mathbb{D}$. Taking $h-\re h(0)$ instead of $h$, one can assume that $\re h(0)=0$. By Theorem \ref{priv} applied to $ih$, we get that the function $h$ extends continuously to $\overline{\DD}$ and is uniformly $1/2$-H\"older continuous in $\overline{\DD}$. Hence the function $u:=\re h:\overline{\DD}\longrightarrow\mathbb{R}$ is uniformly $1/2$-H\"older continuous in $\overline{\DD}$ with a uniform constant $C_2$. Furthermore, $u$ is uniformly bounded in $\overline{\DD}$, since $$|u(\zeta)|=|u(\zeta)-u(0)|\leq C_2\sqrt{|\zeta|},\ \zeta\in\overline{\DD}.$$ Let $g(\zeta):=\widetilde{f}_1(\zeta)e^{-h(\zeta)}$ and $G(\zeta):=g(\zeta)/\zeta$. Then $g\in\mathcal O(\mathbb{D})\cap\mathcal C(\overline{\mathbb{D}})$ and $G\in\mathcal O(\mathbb{D}_*)\cap\mathcal C((\overline{\mathbb{D}})_*)$. Note that for $\zeta\in\mathbb{T}$ $$|g(\zeta)|=|\zeta \rho(\zeta)\overline{\nu_{D,1}(f(\zeta))}e^{-h(\zeta)}|\leq\rho(\zeta)e^{-u(\zeta)},$$ which, combined with Proposition \ref{9}, the uniform boundedness of $u$ and the maximum principle, gives the uniform boundedness of $g$ in $\overline{\DD}$. In particular, the function $G$ is uniformly bounded in $\overline{\mathbb{D}}\cap B_n(\zeta_1,2C_1)$ (observe that $|\zeta|\geq 1-2C_1\geq 1/2$ there).
Moreover, for $\zeta\in\mathbb{T}\cap B_n(\zeta_1,2C_1)$ \begin{eqnarray*} G(\zeta)&=&\rho(\zeta)\overline{\nu_{D,1}(f(\zeta))}e^{-u(\zeta)-i\im\log \varphi(\zeta)}=\\&=&\rho(\zeta)\overline{\nu_{D,1}(f(\zeta))}e^{-u(\zeta)+\re\log\varphi(\zeta)}e^{-\log\varphi(\zeta)} =\rho(\zeta)e^{-u(\zeta)+\re\log\varphi(\zeta)}\in\mathbb{R}.\end{eqnarray*} By the Reflection Principle one can extend $G$ holomorphically past $\mathbb{T}\cap B_n(\zeta_1,2C_1)$ to a function (denoted by the same letter) uniformly bounded in $B_n(\zeta_1,2C_2)$, where the constant $C_2$ is uniform. Hence, by the Cauchy formula, $G$ is uniformly Lipschitz continuous in $B_n(\zeta_1,C_2)$, and consequently uniformly $1/2$-H\"older continuous in $B_n(\zeta_1,C_2)$. Finally, the functions $G$, $h$, $\nu_{D,1}\circ f$ are uniformly $1/2$-H\"older continuous on $\mathbb{T}\cap B_n(\zeta_1,C_2)$ and $|\nu_{D,1}\circ f|>1/2$ on $\mathbb{T}\cap B_n(\zeta_1,C_2)$, so the function $\rho=Ge^h/\overline{\nu_{D,1}\circ f}$ is uniformly $1/2$-H\"older continuous on $\mathbb{T}\cap B_n(\zeta_1,C_2)$. \end{proof} \begin{prop}\label{10b} Let $f:(\mathbb{D},0)\longrightarrow (D,z)$ be an $E$-mapping. Then $$|\widetilde{f}(\zeta_1)-\widetilde{f}(\zeta_2)|\leq C\sqrt{|\zeta_1-\zeta_2|},\ \zeta_1,\zeta_2\in\overline{\mathbb{D}},$$ where $C$ is uniform if $(D,z)\in\mathcal{D}(c)$. \end{prop} \begin{proof} By Propositions \ref{8} and \ref{10a} we have the desired inequality for $\zeta_1,\zeta_2\in\mathbb{T}$. Theorem \ref{lit2} finishes the proof. \end{proof} \section{Openness of the set of $E$-mappings}\label{27} We shall show that, after a small perturbation of a domain $D$ equipped with an $E$-mapping, the perturbed domain also admits an $E$-mapping, close to the given one. \subsection{Preliminary results} \begin{propp}\label{11} Let $f:\mathbb{D}\longrightarrow D$ be an $E$-mapping.
Then there exist domains $G,\widetilde D,\widetilde G\subset\mathbb{C}^n$ and a biholomorphism $\Phi:\widetilde D\longrightarrow\widetilde G$ such that \begin{enumerate} \item $\widetilde D,\widetilde G$ are neighborhoods of $\overline D,\overline G$ respectively; \item $\Phi(D)=G$; \item $g(\zeta):=\Phi(f(\zeta))=(\zeta,0,\ldots,0),\ \zeta\in\overline{\DD}$; \item $\nu_G(g(\zeta))=(\zeta,0,\ldots,0),\ \zeta\in\mathbb{T}$; \item for any $\zeta\in\mathbb{T}$, the point $g(\zeta)$ is a point of the strong linear convexity of $G$. \end{enumerate} \end{propp} \begin{proof} Let $U,V$ be the sets from Proposition \ref{34}. We claim that after a linear change of coordinates one can assume that $\widetilde{f}_1,\widetilde{f}_2$ have no common zeroes in $V$. Since $ f'\bullet\widetilde{f}=1$, at least one of the functions $\widetilde f_1,\ldots,\widetilde f_n$, say $\widetilde f_1$, is not identically equal to $0$. Let $\lambda_1,\ldots,\lambda_m$ be all zeroes of $\widetilde f_1$ in $V$. We may find $\alpha\in\mathbb{C}^n$ such that $$(\alpha_1\widetilde f_1+\ldots+\alpha_n\widetilde f_n)(\lambda_j)\neq 0,\ j=1,\ldots,m.$$ Otherwise, for any $\alpha\in\mathbb{C}^n$ there would exist $j\in\{1,\ldots,m\}$ such that $\alpha\bullet\widetilde f(\lambda_j)=0$, hence $$\mathbb{C}^n=\bigcup_{j=1}^m\{\alpha\in\mathbb{C}^n:\ \alpha\bullet\widetilde f(\lambda_j)=0\}.$$ The sets $\{\alpha\in\mathbb{C}^n:\alpha \bullet \widetilde f(\lambda_j)=0\}$, $j=1,\ldots,m$, are $(n-1)$-dimensional complex hyperplanes, so their finite union cannot be the whole space $\mathbb{C}^n$. Of course, at least one of the numbers $\alpha_2,\ldots,\alpha_n$, say $\alpha_2$, is non-zero.
Let $$A:=\left[\begin{matrix} 1 & 0 & 0 & \cdots & 0\\ \alpha_1 & \alpha_2 & \alpha_3 &\cdots & \alpha_n\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots &\ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{matrix}\right],\quad B:=(A^T)^{-1}.$$ We claim that $B$ is a change of coordinates we are looking for. If $r$ is a defining function of $D$ then $r\circ B^{-1}$ is a defining function of $B(D)$, so $B(D)$ is a bounded strongly linearly convex domain with real analytic boundary. Let us check that $Bf$ is an $E$-mapping of $B(D)$ with associated mappings \begin{equation}\label{56}A\widetilde f\in{\mathcal O}(\overline{\DD})\text{\ \ and\ \ }\rho\frac{|A\overline{\nabla r\circ f}|}{|\nabla r\circ f|}\in\mathcal{C}^{\omega}(\mathbb{T}).\end{equation} The conditions (1) and (2) of Definition~\ref{21} are clear. For $\zeta\in\mathbb{T}$ we have \begin{equation}\label{57}\overline{\nu_{B(D)}(Bf(\zeta))}=\frac{\overline{\nabla(r\circ B^{-1})(Bf(\zeta))}}{|\nabla(r\circ B^{-1})(Bf(\zeta))|}=\frac{(B^{-1})^T\overline{\nabla r(f(\zeta))}}{|(B^{-1})^T\overline{\nabla r(f(\zeta))}|}=\frac{A\overline{\nabla r(f(\zeta))}}{|A\overline{\nabla r(f(\zeta))}|},\end{equation} so \begin{equation}\label{58}\zeta\rho(\zeta)\frac{|A\overline{\nabla r(f(\zeta))}|}{|\nabla r(f(\zeta))|}\overline{\nu_{B(D)}(Bf(\zeta))}=\zeta\rho(\zeta)A\overline{\nu_D(f(\zeta))}=A\widetilde f(\zeta).\end{equation} Moreover, for $\zeta\in\mathbb{T}$, $z\in D$ \begin{multline*}\langle Bz-Bf(\zeta), \nu_{B(D)}(Bf(\zeta))\rangle=\overline{\nu_{B(D)}(Bf(\zeta))}^T(Bz-Bf(\zeta))=\\=\frac{\overline{\nabla r(f(\zeta))}^TB^{-1}B(z-f(\zeta))}{|(B^{-1})^T\overline{\nabla r(f(\zeta))}|}=\frac{|\nabla r(f(\zeta))|}{|(B^{-1})^T\overline{\nabla r(f(\zeta))}|}\overline{\nu_D(f(\zeta))}^T(z-f(\zeta))=\\=\frac{|\nabla r(f(\zeta))|}{|(B^{-1})^T\overline{\nabla r(f(\zeta))}|}\langle z-f(\zeta),
\nu_D(f(\zeta))\rangle. \end{multline*} Therefore, $B$ is the desired linear change of coordinates, as claimed. If necessary, we shrink the sets $U,V$ associated with $f$ to sets associated with $Bf$. There exist holomorphic mappings $h_1,h_2:V\longrightarrow\mathbb{C}$ such that $$h_1\widetilde{f}_1+h_2\widetilde{f}_2\equiv 1\text{ in }V.$$ In general this is a well-known fact for functions on pseudoconvex domains; in our case, however, it may be shown quite elementarily. Indeed, if $\widetilde{f}_1\equiv 0$ or $\widetilde{f}_2\equiv 0$ then it is obvious. In the opposite case, let $\widetilde{f}_j=F_jP_j$, $j=1,2$, where $F_j$ are holomorphic and non-vanishing in $V$ and $P_j$ are polynomials with all (finitely many) zeroes in $V$. Then $P_j$ are relatively prime (by the choice of coordinates, $\widetilde f_1$ and $\widetilde f_2$ have no common zeroes in $V$), so there are polynomials $Q_j$, $j=1,2$, such that $$Q_1P_1+Q_2P_2\equiv 1.$$ Hence $$\frac{Q_1}{F_1}\widetilde{f}_1+\frac{Q_2}{F_2}\widetilde{f}_2\equiv 1\ \text{ in }V.$$ Consider the mapping $\Psi:V\times\mathbb{C}^{n-1}\longrightarrow\mathbb{C}^n$ given by \begin{equation}\label{et2} \Psi_1(Z):=f_1(Z_1)-Z_2\widetilde{f}_2(Z_1)-h_1(Z_1) \sum_{j=3}^{n}Z_j\widetilde{f}_j(Z_1), \end{equation} \begin{equation}\label{et3} \Psi_2(Z):=f_2(Z_1)+Z_2\widetilde{f}_1(Z_1)-h_2(Z_1) \sum_{j=3}^{n}Z_j\widetilde{f}_j(Z_1), \end{equation} \begin{equation}\label{et4} \Psi_j(Z):=f_j(Z_1)+Z_j,\ j=3,\ldots,n. \end{equation} We claim that $\Psi$ is biholomorphic in $\Psi^{-1}(U)$. First of all observe that $\Psi^{-1}(\{z\})\neq\emptyset$ for any $z\in U$. Indeed, by Proposition \ref{34} there exists (exactly one) $Z_1\in V$ such that $$(z-f(Z_1))\bullet\widetilde{f}(Z_1)=0.$$ The numbers $Z_j\in\mathbb{C}$, $j=3,\ldots,n$, are determined uniquely by the equations $$Z_j=z_j-f_j(Z_1).$$ At least one of the numbers $\widetilde f_1(Z_1),\widetilde f_2(Z_1)$, say $\widetilde f_1(Z_1)$, is non-zero.
Let $$Z_2:=\frac{z_2-f_2(Z_1)+h_2(Z_1)\sum_{j=3}^{n}Z_j\widetilde{f}_j(Z_1)}{\widetilde f_1(Z_1)}.$$ Then we easily check that the equality $$z_1=f_1(Z_1)-Z_2\widetilde{f}_2(Z_1)-h_1(Z_1) \sum_{j=3}^{n}Z_j\widetilde{f}_j(Z_1)$$ is equivalent to $(z-f(Z_1))\bullet\widetilde{f}(Z_1)=0$, which is true. To finish the proof of the biholomorphicity of $\Psi$ in $\Psi^{-1}(U)$ it suffices to check that $\Psi$ is injective in $\Psi^{-1}(U)$. Let us take $Z,W$ such that $\Psi(Z)=\Psi(W)=z\in U$. By a direct computation, both $\zeta=Z_1\in V$ and $\zeta=W_1\in V$ solve the equation $$(z-f(\zeta))\bullet\widetilde{f}(\zeta)=0.$$ From Proposition \ref{34} we infer that it has exactly one solution. Hence $Z_1=W_1$. By \eqref{et4} we have $Z_j=W_j$ for $j=3,\ldots,n$. Finally, $Z_2=W_2$ follows from one of the equations \eqref{et2}, \eqref{et3}. Let $G:=\Psi^{-1}(D)$, $\widetilde D:=U$, $\widetilde G:=\Psi^{-1}(U)$, $\Phi:=\Psi^{-1}$. Now we prove that $\Phi$ has the desired properties. We have $$\Psi_j(\zeta,0,\ldots,0)=f_j(\zeta),\ j=1,\ldots,n,$$ so $\Phi(f(\zeta))=(\zeta,0,\ldots,0)$, $\zeta\in\overline{\DD}$. Put $g(\zeta):=\Phi(f(\zeta))$, $\zeta\in\overline{\DD}$.
Note that the entries of the matrix $\Psi'(g(\zeta))$ are $$\frac{\partial\Psi_1}{\partial Z_1}(g(\zeta))=f_1'(\zeta),\ \frac{\partial\Psi_1}{\partial Z_2}(g(\zeta))=-\widetilde{f}_2(\zeta),\ \frac{\partial\Psi_1}{\partial Z_j}(g(\zeta))=-h_1(\zeta)\widetilde{f}_j(\zeta),\ j\geq 3,$$$$\frac{\partial\Psi_2}{\partial Z_1}(g(\zeta))=f_2'(\zeta),\ \frac{\partial\Psi_2}{\partial Z_2}(g(\zeta))=\widetilde{f}_1(\zeta),\ \frac{\partial\Psi_2}{\partial Z_j}(g(\zeta))=-h_2(\zeta)\widetilde{f}_j(\zeta),\ j\geq 3,$$$$\frac{\partial\Psi_k}{\partial Z_1}(g(\zeta))=f_k'(\zeta),\ \frac{\partial\Psi_k}{\partial Z_2}(g(\zeta))=0,\ \frac{\partial\Psi_k}{\partial Z_j}(g(\zeta))=\delta^{k}_{j},\ j,k\geq 3.$$ Thus $\Psi '(g(\zeta))^T\widetilde f(\zeta)=(1,0,\ldots,0)$, $\zeta\in\overline{\DD}$ (since $f'\bullet\widetilde f=1$). Let us take a defining function $r$ of $D$. Then $r\circ\Psi$ is a defining function of $G$. Therefore, \begin{multline*}\nu_G(g(\zeta))=\frac{\nabla(r\circ\Psi)(g(\zeta))}{|\nabla(r\circ\Psi)(g(\zeta))|}= \frac{\overline{\Psi'(g(\zeta))}^T\nabla r(f(\zeta))}{|\overline{\Psi'(g(\zeta))}^T\nabla r(f(\zeta))|}=\\=\frac{\overline{\Psi'(g(\zeta))}^T\overline{\frac{\widetilde f(\zeta)}{\zeta\rho(\zeta)}}|\nabla r(f(\zeta))|}{\left|\overline{\Psi'(g(\zeta))}^T\overline{\frac{\widetilde f(\zeta)}{\zeta\rho(\zeta)}}|\nabla r(f(\zeta))|\right|}=g(\zeta),\ \zeta\in\mathbb{T}.\end{multline*} It remains to prove the fifth condition.
By Definition \ref{29}(2) we have to show that \begin{equation}\label{sgf}\sum_{j,k=1}^n\frac{\partial^2(r\circ\Psi)}{\partial z_j\partial\overline{z}_k}(g(\zeta))X_{j}\overline{X}_{k}>\left|\sum_{j,k=1}^n\frac{\partial^2(r\circ\Psi)}{\partial z_j\partial z_k}(g(\zeta))X_{j}X_{k}\right|\end{equation} for $\zeta\in\mathbb{T}$ and $X\in(\mathbb{C}^{n})_*$ with $$\sum_{j=1}^n\frac{\partial(r\circ\Psi)}{\partial z_j}(g(\zeta))X_{j}=0,$$ i.e. $X_1=0$. We have $$\sum_{j,k=1}^n\frac{\partial^2(r\circ\Psi)}{\partial z_j\partial\overline{z}_k}(g(\zeta))X_{j}\overline{X}_{k}=\sum_{j,k,s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial\overline{z}_t}(f(\zeta))\frac{\partial\Psi_s}{\partial z_j}(g(\zeta))\overline{\frac{\partial\Psi_t}{\partial z_k}(g(\zeta))}X_{j}\overline{X}_{k}=$$$$=\sum_{s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial\overline{z}_t}(f(\zeta))Y_{s}\overline{Y}_{t},$$ where $$Y:=\Psi'(g(\zeta))X.$$ Note that $Y\neq 0$.
Additionally, $$\sum_{s=1}^n\frac{\partial r}{\partial z_s}(f(\zeta))Y_{s}=\sum_{j,s=1}^n\frac{\partial r}{\partial z_s}(f(\zeta))\frac{\partial\Psi_s}{\partial z_j}(g(\zeta))X_j=\sum_{j=1}^n\frac{\partial(r\circ\Psi)}{\partial z_j}(g(\zeta))X_{j}=0.$$ Therefore, by the strong linear convexity of $D$ at $f(\zeta)$, $$\sum_{s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial\overline{z}_t}(f(\zeta))Y_{s}\overline{Y}_{t}>\left|\sum_{s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial z_t}(f(\zeta))Y_{s}Y_{t}\right|.$$ To finish the proof observe that $$\left|\sum_{j,k=1}^n\frac{\partial^2(r\circ\Psi)}{\partial z_j\partial z_k}(g(\zeta))X_{j}X_{k}\right|=\left|\sum_{j,k,s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial z_t}(f(\zeta))\frac{\partial\Psi_s}{\partial z_j}(g(\zeta))\frac{\partial\Psi_t}{\partial z_k}(g(\zeta))X_{j}X_{k}+\right.$$$$\left.+\sum_{j,k,s=1}^n\frac{\partial r}{\partial z_s}(f(\zeta))\frac{\partial^2\Psi_s}{\partial z_j\partial z_k}(g(\zeta))X_{j}X_{k}\right|=$$$$=\left|\sum_{s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial z_t}(f(\zeta))Y_{s}Y_{t}+\sum_{j,k=2}^n\sum_{s=1}^n\frac{\partial r}{\partial z_s}(f(\zeta))\frac{\partial^2\Psi_s}{\partial z_j\partial z_k}(g(\zeta))X_{j}X_{k}\right|$$ and $$\frac{\partial^2\Psi_s}{\partial z_j\partial z_k}(g(\zeta))=0,\ j,k\geq 2,\ s\geq 1,$$ which gives \eqref{sgf}. \end{proof} \begin{remm}\label{rem:theta} Let $D$ be a bounded domain in $\mathbb C^n$ and let $f:\mathbb{D}\longrightarrow D$ be a (weak) stationary mapping such that $\partial D$ is real analytic in a neighborhood of $f(\mathbb{T})$.
Assume moreover that there exist a neighborhood $U$ of $f(\overline{\DD})$ and a mapping $\Theta:U\longrightarrow\mathbb{C}^n$, biholomorphic onto its image, such that the set $D\cap U$ is connected. Then $\Theta\circ f$ is a (weak) stationary mapping of $G:=\Theta(D\cap U)$. In particular, if $U_1$, $U_2$ are neighborhoods of the closures of domains $D_1$, $D_2$ with real analytic boundaries and $\Theta:U_1\longrightarrow U_2$ is a biholomorphism such that $\Theta(D_1)=D_2$, then $\Theta$ maps (weak) stationary mappings of $D_1$ onto (weak) stationary mappings of $D_2$. \end{remm} \begin{proof} It is clear that the first two conditions of the definition of (weak) stationary mappings are preserved by $\Theta$. To show the third one we proceed similarly to the computations \eqref{56}, \eqref{57}, \eqref{58}. Let $f:\mathbb{D}\longrightarrow D$ be a (weak) stationary mapping. The candidates for the mappings in condition (3) (resp. (3')) of Definition~\ref{21} for $\Theta\circ f$ in the domain $G$ are $$((\Theta'\circ f)^{-1})^T\widetilde f\text{\ \ and\ \ }\rho\frac{|((\Theta'\circ f)^{-1})^T\overline{\nabla r\circ f}|}{|\nabla r\circ f|}.$$ Indeed, for $\zeta\in\mathbb{T}$ \begin{multline*}\overline{\nu_{G}(\Theta(f(\zeta)))}= \frac{\overline{\nabla(r\circ\Theta^{-1})(\Theta(f(\zeta)))}}{|\nabla(r\circ\Theta^{-1})(\Theta(f(\zeta)))|}=\frac{[(\Theta^{-1})'(\Theta(f(\zeta)))]^T\overline{\nabla r(f(\zeta))}}{|[(\Theta^{-1})'(\Theta(f(\zeta)))]^T\overline{\nabla r(f(\zeta))}|}=\\ =\frac{(\Theta'(f(\zeta))^{-1})^T\overline{\nabla r(f(\zeta))}}{|(\Theta'(f(\zeta))^{-1})^T\overline{\nabla r(f(\zeta))}|}, \end{multline*} hence \begin{multline*}\zeta\rho(\zeta)\frac{|(\Theta'(f(\zeta))^{-1})^T\overline{\nabla r(f(\zeta))}|}{|\nabla r(f(\zeta))|}\overline{\nu_{G}(\Theta(f(\zeta)))}=\\ =\zeta\rho(\zeta)(\Theta'(f(\zeta))^{-1})^T\overline{\nu_{D}(f(\zeta))}= (\Theta'(f(\zeta))^{-1})^T\widetilde f(\zeta).
\end{multline*} \end{proof} \subsection{Situation (\dag)}\label{dag} Consider the following situation, denoted by (\dag) (with data $D_0$ and $U_0$): \begin{itemize} \item $D_0$ is a bounded domain in $\mathbb{C}^n$, $n\geq 2$; \item $f_0:\overline{\DD}\ni\zeta\longmapsto(\zeta,0,\ldots,0)\in\overline D_0$; \item $f_0(\mathbb{D})\subset D_0$; \item $f_0(\mathbb{T})\subset\partial D_0$; \item $\nu_{D_0}(f_0(\zeta))=(\zeta,0,\ldots,0)$, $\zeta\in\mathbb{T}$; \item for any $\zeta\in\mathbb{T}$ the point $f_0(\zeta)$ is a point of strong linear convexity of $D_0$; \item $\partial D_0$ is real analytic in a neighborhood $U_0$ of $f_0(\mathbb{T})$ with a defining function $r_0$; \item $|\nabla r_0|=1$ on $f_0(\mathbb{T})$ (in particular, $r_{0z}(f_0(\zeta))=(\overline\zeta/2,0,\ldots,0)$, $\zeta\in\mathbb{T}$). \end{itemize} Since $r_0$ is real analytic on $U_0\subset\mathbb{R}^{2n}$, it extends in a natural way to a holomorphic function in a neighborhood $U_0^\mathbb{C}\subset\mathbb{C}^{2n}$ of $U_0$. Without loss of generality we may assume that $r_0$ is bounded on $U_0^\mathbb{C}$. Set $$X_0=X_0(U_0,U_0^{\mathbb C}):=\{r\in\mathcal{O}(U_0^\mathbb{C}):\text{$r(U_0)\subset\mathbb{R}$ and $r$ is bounded}\},$$ which, equipped with the sup-norm, is a (real) Banach space. \begin{remm} Lempert considered the case when $U_0$ is a neighborhood of the boundary of a bounded domain $D_0$ with real analytic boundary. We shall need more general results to prove the `localization property'. \end{remm} \subsection{General lemmas}\label{General lemmas} We keep the notation from Subsection \ref{dag} and assume Situation (\dag). Let us introduce some additional objects we shall be dealing with and prove more general lemmas (their generality will be useful in the next section).
Consider the Sobolev space $W^{2,2}(\mathbb{T})=W^{2,2}(\mathbb{T},\mathbb{C}^m)$ of functions $f:\mathbb{T}\longrightarrow\mathbb{C}^m$ whose first two derivatives (in the sense of distributions) are in $L^2(\mathbb{T})$. The $W^{2,2}$-norm is denoted by $\|\cdot\|_W$. For the basic properties of $W^{2,2}(\mathbb{T})$ see the Appendix. Put $$B:=\{f\in W^{2,2}(\mathbb{T},\mathbb{C}^n):f\text{ extends holomorphically on $\mathbb{D}$ and $f(0)=0$}\},$$$$B_0:=\{f\in B:f(\mathbb{T})\subset U_0\},\quad B^*:=\{\overline{f}:f\in B\},$$$$Q:=\{q\in W^{2,2}(\mathbb{T},\mathbb{C}):q(\mathbb{T})\subset\mathbb{R}\},\quad Q_0:=\{q\in Q:q(1)=0\}.$$ It is clear that $B$, $B^*$, $Q$ and $Q_0$ equipped with the norm $\|\cdot\|_W$ are (real) Banach spaces. Note that $B_0$ is an open neighborhood of $f_0$. In what follows, we identify $f\in B$ with its unique holomorphic extension on $\mathbb{D}$. Let us define the projection $$\pi:W^{2,2}(\mathbb{T},\mathbb{C}^n)\ni f=\sum_{k=-\infty}^{\infty}a_k\zeta^{k}\longmapsto\sum_{k=-\infty}^{-1}a_k\zeta^{k}\in{B^*}.$$ Note that $f\in W^{2,2}(\mathbb{T},\mathbb{C}^n)$ extends holomorphically on $\mathbb{D}$ if and only if $\pi(f)=0$ (and the extension is $\mathcal C^{1/2}$ on $\mathbb{T}$). Actually, it suffices to observe that $g(\zeta):=\sum_{k=-\infty}^{-1}a_k\zeta^{k}$, $\zeta\in\mathbb{T}$, extends holomorphically on $\mathbb{D}$ if and only if $a_k=0$ for $k<0$. This follows immediately from the fact that the mapping $\mathbb{T}\ni\zeta\longmapsto g(\overline\zeta)\in\mathbb{C}^n$ extends holomorphically on $\mathbb{D}$. Consider the mapping $\Xi:X_0\times\mathbb{C}^n\times B_0\times Q_0\times\mathbb{R}\longrightarrow Q\times{B^*}\times\mathbb{C}^n$ defined by $$\Xi(r,v,f,q,\lambda):=(r\circ f,\pi(\zeta(1+q)(r_z\circ f)),f'(0)-\lambda v),$$ where $\zeta$ is treated as the identity function on $\mathbb{T}$.
We have the following \begin{lemm}\label{cruciallemma} There exist a neighborhood $V_0$ of $(r_0,f_0'(0))$ in $X_0\times\mathbb{C}^n$ and a real analytic mapping $\Upsilon:V_0\longrightarrow B_0\times Q_0\times\mathbb{R}$ such that for any $(r,v)\in V_0$ we have $\Xi(r,v,\Upsilon(r,v))=0$. \end{lemm} Let $\widetilde\Xi:X_0\times\mathbb{C}^n\times B_0\times Q_0\times(0,1)\longrightarrow Q\times{B^*}\times\mathbb{C}^n$ be defined as $$\widetilde\Xi(r,w,f,q,\xi):=(r\circ f,\pi(\zeta(1+q)(r_z\circ f)),f(\xi)-w).$$ Analogously we have \begin{lemm}\label{cruciallemma1} Let $\xi_0\in(0,1)$. Then there exist a neighborhood $W_0$ of $(r_0,f_0(\xi_0))$ in $X_0\times D_0$ and a real analytic mapping $\widetilde\Upsilon:W_0\longrightarrow B_0\times Q_0\times(0,1)$ such that for any $(r,w)\in W_0$ we have $\widetilde\Xi(r,w,\widetilde\Upsilon(r,w))=0$. \end{lemm} \begin{proof}[Proof of Lemmas \ref{cruciallemma} and \ref{cruciallemma1}] We will prove the first lemma and then show that the proof of the second one reduces to it. We claim that $\Xi$ is real analytic. The only problem is to show that the mapping $$T: X_0\times B_0\ni(r,f)\longmapsto r\circ f\in Q$$ is real analytic (the real analyticity of the mapping $X_0\times B_0\ni(r,f)\longmapsto r_z\circ f\in W^{2,2}(\mathbb{T},\mathbb{C}^n)$ follows from this claim). Fix $r\in X_0$, $f\in B_0$ and take $\varepsilon>0$ so that the $2n$-dimensional polydisc $P_{2n}(f(\zeta),\varepsilon)$ is contained in $U_0^\mathbb{C}$ for any $\zeta\in\mathbb{T}$. Then any function $\widetilde r\in X_0$ is holomorphic in $U_0^\mathbb{C}$, so it may be expanded as a power series convergent in $P_{2n}(f(\zeta),\varepsilon)$. Without loss of generality we may assume that the $n$-dimensional polydiscs $P_{n}(f(\zeta),\varepsilon)$, $\zeta\in\mathbb{T}$, satisfy $P_{n}(f(\zeta),\varepsilon)\subset U_0$.
This gives an expansion of the function $\widetilde r$ at any point $f(\zeta)$, $\zeta\in\mathbb{T}$, into a series $$\sum_{\alpha\in\mathbb{N}_0^{2n}}\frac{1}{\alpha!}\frac{\partial^{|\alpha|}\widetilde r}{\partial x^\alpha}(f(\zeta))x^\alpha$$ convergent to $\widetilde r(f(\zeta)+x)$, provided that $x=(x_1,\ldots,x_{2n})\in P_{2n}(0,\varepsilon)$ (where $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$ and $|\alpha|:=\alpha_1+\ldots+\alpha_{2n}$). Hence \begin{equation}\label{69}T(r+\varrho,f+h)=\sum_{\alpha\in\mathbb{N}_0^{2n}}\frac{1}{\alpha!}\left(\frac{\partial^{|\alpha|}r}{\partial x^\alpha}\circ f\right)h^\alpha+\sum_{\alpha\in\mathbb{N}_0^{2n}}\frac{1}{\alpha!}\left(\frac{\partial^{|\alpha|}\varrho}{\partial x^\alpha}\circ f\right)h^\alpha\end{equation} pointwise for $\varrho\in X_0$ and $h\in W^{2,2}(\mathbb{T},\mathbb{C}^n)$ with $\|h\|_{\sup}<\varepsilon$. Put $P:=\bigcup_{\zeta\in \mathbb{T}} P_{2n}(f(\zeta),\varepsilon)$ and for $\widetilde r\in X_0$ put $\|\widetilde r\|_P:=\sup_P|\widetilde r|$. Let $\widetilde r$ be equal to $r$ or to $\varrho$, where $\varrho$ lies in a neighborhood of $0$ in $X_0$. The Cauchy inequalities give \begin{equation}\label{series}\left|\frac{\partial^{|\alpha|}\widetilde r}{\partial x^\alpha}(f(\zeta))\right|\leq\frac{\alpha!\|\widetilde r\|_{P}}{\varepsilon^{|\alpha|}},\quad\zeta\in\mathbb{T}.\end{equation} Therefore, $$\left\|\frac{\partial^{|\alpha|}\widetilde r}{\partial x^\alpha}\circ f\right\|_W\leq C_1\frac{\alpha!\|\widetilde r\|_{P}}{\varepsilon^{|\alpha|}}$$ for some $C_1>0$. There is $C_2>0$ such that $$\|gh^\alpha\|_W\leq C_2^{|\alpha|+1}\|g\|_W\|h_1\|^{\alpha_1}_W\cdot\ldots\cdot\|h_{2n}\|^{\alpha_{2n}}_W$$ for $g\in W^{2,2}(\mathbb{T},\mathbb{C})$, $h\in W^{2,2}(\mathbb{T},\mathbb{C}^n)$, $\alpha\in\mathbb{N}_0^{2n}$ (see the Appendix for a proof of this fact).
Using the above inequalities we infer that $$\sum_{\alpha\in\mathbb{N}_0^{2n}}\left\|\frac{1}{\alpha!}\left(\frac{\partial^{|\alpha|}\widetilde r}{\partial x^\alpha}\circ f\right)h^\alpha\right\|_W$$ is convergent if $h$ is small enough in the norm $\|\cdot\|_W$. Therefore, the series~\eqref{69} is absolutely convergent in the norm $\|\cdot\|_W$, whence $T$ is real analytic. To show the existence of $V_0$ and $\Upsilon$ we will make use of the Implicit Function Theorem. More precisely, we shall show that the partial derivative $$\Xi_{(f,q,\lambda)}(r_0,f_0'(0),f_0,0,1):B\times Q_0\times\mathbb{R}\longrightarrow Q\times{B^*}\times\mathbb{C}^n$$ is an isomorphism. Observe that for any $(\widetilde{f},\widetilde{q},\widetilde{\lambda})\in B\times Q_0\times\mathbb{R}$ the following equality holds \begin{multline*}\Xi_{(f,q,\lambda)}(r_0,f_0'(0),f_0,0,1)(\widetilde{f},\widetilde{q},\widetilde{\lambda})=\left.\frac{d}{dt} \Xi(r_0,f_0'(0),f_0+t\widetilde{f},t\widetilde{q},1+t\widetilde{\lambda})\right|_{t=0}=\\ =((r_{0z}\circ f_0)\widetilde{f}+(r_{0\overline{z}}\circ f_0)\overline{\widetilde{f}},\pi(\zeta\widetilde{q}r_{0z}\circ f_0+\zeta(r_{0zz} \circ f_0)\widetilde{f}+\zeta(r_{0z\overline{z}}\circ f_0)\overline{\widetilde{f}}),\widetilde{f}'(0)-\widetilde{\lambda}f_0'(0)), \end{multline*} where we treat $r_{0z},r_{0\overline{z}}$ as row vectors, $\widetilde{f},\overline{\widetilde{f}}$ as column vectors and $r_{0zz}=\left[\frac{\partial^2r_0}{\partial z_j\partial z_k}\right]_{j,k=1}^n$, $r_{0z\overline{z}}=\left[\frac{\partial^2r_0}{\partial z_j\partial\overline z_k}\right]_{j,k=1}^n$ as $n\times n$ matrices. By the Bounded Inverse Theorem it suffices to show that $\Xi_{(f,q,\lambda)}(r_0,f_0'(0),f_0,0,1)$ is bijective, i.e.
for $(\eta,\varphi,v)\in Q\times B^*\times\mathbb{C}^n$ there exists exactly one $(\widetilde{f},\widetilde{q},\widetilde{\lambda})\in B\times Q_0\times\mathbb{R}$ satisfying \begin{equation} (r_{0z}\circ f_0)\widetilde{f}+(r_{0\overline{z}}\circ f_0)\overline{\widetilde{f}}=\eta, \label{al1} \end{equation} \begin{equation} \pi(\zeta\widetilde{q}r_{0z}\circ f_0+\zeta (r_{0zz}\circ f_0)\widetilde{f}+\zeta(r_{0z\overline{z}}\circ f_0)\overline{\widetilde{f}})=\varphi, \label{al2} \end{equation} \begin{equation} \widetilde{f}'(0)-\widetilde{\lambda} f_0'(0)=v. \label{al3} \end{equation} First we show that $\widetilde\lambda$ and $\widetilde f_1$ are uniquely determined. Observe that, in view of the assumptions, (\ref{al1}) is just $$\frac{1}{2}\overline{\zeta}\widetilde{f}_1+\frac{1}{2}\zeta\overline{\widetilde{f}_1}=\eta$$ or, equivalently, \begin{equation} \re(\widetilde{f}_1/\zeta)=\eta\text{ (on }\mathbb{T}). \label{al4} \end{equation} Note that the equation (\ref{al4}) determines $\widetilde{f}_1/\zeta\in W^{2,2}(\mathbb{T},\mathbb{C})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ uniquely up to an imaginary additive constant (the difference of two solutions is holomorphic on $\mathbb{D}$ with vanishing real part on $\mathbb{T}$, hence an imaginary constant), which may be computed using (\ref{al3}). Actually, $\eta=\re G$ on $\mathbb{T}$ for some function $G\in W^{2,2}(\mathbb{T},\mathbb{C})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$. To see this, let us expand $\eta(\zeta)=\sum_{k=-\infty}^{\infty}a_k\zeta^{k}$, $\zeta\in\mathbb{T}$. From the equality $\eta(\zeta)=\overline{\eta(\zeta)}$, $\zeta\in\mathbb{T}$, we get \begin{equation}\label{65}\sum_{k=-\infty}^{\infty}a_k\zeta^{k}=\sum_{k=-\infty}^{\infty}\overline a_k\zeta^{-k}=\sum_{k=-\infty}^{\infty}\overline a_{-k}\zeta^{k},\ \zeta\in\mathbb{T},\end{equation} so $a_{-k}=\overline a_k$, $k\in\mathbb{Z}$.
Hence $$\eta(\zeta)=a_0+\sum_{k=1}^\infty 2\re(a_k\zeta^k)=\re\left(a_0+2\sum_{k=1}^\infty a_k\zeta^k\right),\ \zeta\in\mathbb{T}.$$ Set $$G(\zeta):=a_0+2\sum_{k=1}^\infty a_k\zeta^k,\ \zeta\in\mathbb{D}.$$ This series is convergent for $\zeta\in\mathbb{D}$, so $G\in{\mathcal O}(\mathbb{D})$. Further, the function $G$ extends continuously on $\overline{\DD}$ (to the function denoted by the same letter) and the extension lies in $W^{2,2}(\mathbb{T},\mathbb{C})$. Clearly, $\eta=\re G$ on $\mathbb{T}$. We are looking for $C\in\mathbb{R}$ such that the functions $\widetilde{f}_1:=\zeta(G+iC)$ and $\theta:=\im(\widetilde{f}_1/\zeta)$ satisfy $$\eta(0)+i\theta(0)=\widetilde{f}_1'(0)$$ and $$\eta(0)+i\theta(0)-\widetilde{\lambda}\re{f_{01}'(0)}-i\widetilde{\lambda}\im{{f_{01}'(0)}}=\re{v_1}+i\im{v_1}.$$ But $$\eta(0)-\widetilde{\lambda}\re{f_{01}'(0)}=\re{v_1},$$ which yields $\widetilde{\lambda}$, then $\theta(0)$, and consequently the number $C$. Having $\widetilde{\lambda}$ and once again using (\ref{al3}), we find uniquely determined $\widetilde{f}_2'(0),\ldots,\widetilde{f}_n'(0)$. Therefore, the equations \eqref{al1} and \eqref{al3} are satisfied by uniquely determined $\widetilde f_1$, $\widetilde\lambda$ and $\widetilde{f}_2'(0),\ldots,\widetilde{f}_n'(0)$. Consider (\ref{al2}), which is a system of $n$ equations with unknowns $\widetilde{q},\widetilde{f}_2,\ldots,\widetilde{f}_n$.
Observe that $\widetilde{q}$ appears only in the first of the equations and the remaining $n-1$ equations mean exactly that the mapping \begin{equation} \zeta(r_{0\widehat{z}\widehat{z}}\circ f_0) \widehat{\widetilde{f}}+\zeta(r_{0\widehat{z}\widehat{\overline{z}}}\circ f_0)\widehat{\overline{\widetilde{f}}}-\psi \label{al5} \end{equation} extends holomorphically on $\mathbb{D}$, where $\widehat{a}:=(a_{2},\ldots,a_{n})$ and $\psi\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})$ may be obtained from $\varphi$ and $\widetilde{f}_1$. Indeed, to see this, write (\ref{al2}) in the form $$\pi(F_{1}+\zeta F_{2}+\zeta F_{3})=(\varphi_1,\ldots,\varphi_n),$$ where $$F_1:=(\widetilde q,0,\ldots,0),$$$$F_2:=(A_{j})_{j=1}^n,\ A_{j}:=\sum\limits_{k=1}^n(r_{0z_jz_k}\circ f_0)\widetilde{f}_k,$$$$F_3:=(B_{j})_{j=1}^n,\ B_{j}:=\sum\limits_{k=1}^n(r_{0z_j\overline z_k}\circ f_0)\overline{\widetilde{f}_k}.$$ It follows that $$\widetilde{q}+\zeta A_1+\zeta B_1-\varphi_1$$ and $$\zeta A_j+\zeta B_j-\varphi_j,\ j=2,\ldots,n,$$ extend holomorphically on $\mathbb{D}$ and $$\psi:=\left(\varphi_j-\zeta(r_{0z_jz_1}\circ f_0)\widetilde{f}_1-\zeta(r_{0z_j\overline z_1}\circ f_0)\overline{\widetilde{f}_1}\right)_{j=2}^n.$$ Put $$g(\zeta):=\widehat{\widetilde{f}}(\zeta)/\zeta,\quad\alpha(\zeta):=\zeta^2r_{0\widehat{z}\widehat{z}}(f_0(\zeta)), \quad\beta(\zeta):=r_{0\widehat{z}\widehat{\overline{z}}}(f_0(\zeta)).$$ Observe that $\alpha(\zeta)$, $\beta(\zeta)$ are $(n-1)\times(n-1)$ matrices depending real analytically on $\zeta$ and $g(\zeta)$ is a column vector in $\mathbb{C}^{n-1}$.
This allows us to reduce \eqref{al5} to the following problem: we have to find a unique $g\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ such that \begin{equation} \alpha g+\beta\overline{g}-\psi\text{ extends holomorphically on $\mathbb{D}$ and } g(0)={\widehat{\widetilde{f}}}{}'(0). \label{al6} \end{equation} The fact that every $f_0(\zeta)$ is a point of strong linear convexity of the domain $D_0$ may be written as \begin{equation} |X^T\alpha(\zeta)X|<X^{T}\beta(\zeta)\overline{X},\ \zeta\in\mathbb{T},\ X\in(\mathbb{C}^{n-1})_*. \label{al7} \end{equation} Note that $\beta(\zeta)$ is self-adjoint and strictly positive, hence using Proposition \ref{12} we get a mapping $H\in{\mathcal O}(\overline{\DD},\mathbb{C}^{(n-1)\times(n-1)})$ such that $\det H\neq 0$ on $\overline{\DD}$ and $HH^*=\beta$ on $\mathbb{T}$. Using this notation, (\ref{al6}) is equivalent to \begin{equation} H^{-1}\alpha g+H^*\overline{g}-H^{-1}\psi\text{ extends holomorphically on $\mathbb{D}$} \label{al8} \end{equation} or, if we denote $h:=H^Tg$, $\gamma:=H^{-1}\alpha (H^T)^{-1}$, to \begin{equation} \gamma h+\overline{h}-H^{-1}\psi\text{ extends holomorphically on $\mathbb{D}.$} \label{al9} \end{equation} For any $\zeta\in\mathbb{T}$ the operator norm of the symmetric matrix $\gamma(\zeta)$ is uniformly less than $1$. In fact, from (\ref{al7}) for any $X\in\mathbb{C}^{n-1}$ with $|X|=1$ \begin{multline*}|X^{T}\gamma(\zeta)X|=|X^{T}H(\zeta)^{-1}\alpha(\zeta)(H(\zeta)^T)^{-1}X|<X^TH(\zeta)^{-1}\beta(\zeta) \overline{(H(\zeta)^T)^{-1}X}=\\=X^TH(\zeta)^{-1}H(\zeta)H(\zeta)^*\overline{(H(\zeta)^T)^{-1}} \overline{X}=|X|^2=1,\end{multline*} so, by a compactness argument, $|X^{T}\gamma(\zeta)X|\leq 1-\widetilde\varepsilon$ for some $\widetilde\varepsilon>0$ independent of $\zeta$ and $X$. Thus $\|\gamma(\zeta)\|\leq 1-\widetilde\varepsilon$ by Proposition \ref{59}.
We have to prove that there is a unique solution $h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ of (\ref{al9}) such that $h(0)=a$ with a given $a\in\mathbb{C}^{n-1}$. Define the operator $$P:W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\ni\sum_{k=-\infty}^{\infty}a_k\zeta^{k}\longmapsto\overline{\sum_{k=-\infty}^{-1}a_k\zeta^{k}}\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1}),$$ where $a_k\in\mathbb{C}^{n-1}$, $k\in\mathbb{Z}$. We will show that a mapping $h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ satisfies (\ref{al9}) and $h(0)=a$ if and only if it is a fixed point of the mapping $$K:W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\ni h\longmapsto P(H^{-1}\psi-\gamma h)+a\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1}).$$ Indeed, take $h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ and suppose that $h(0)=a$ and $\gamma h+\overline{h}-H^{-1}\psi$ extends holomorphically on $\mathbb{D}$. Then $$h=a+\sum_{k=1}^{\infty}a_k\zeta^{k},\quad\overline{h}=\overline{a}+\sum_{k=1}^{\infty}\overline a_k\zeta^{-k}=\sum_{k=-\infty}^{-1}\overline a_{-k}\zeta^{k}+\overline{a},$$ $$P(h)=0,\quad P(\overline{h})=\sum_{k=1}^{\infty}a_k\zeta^{k}=h-a$$ and $$P(\gamma h+\overline{h}-H^{-1}\psi)=0,$$ which implies $$P(H^{-1}\psi-\gamma h)=h-a$$ and finally $K(h)=h$. Conversely, suppose that $K(h)=h$. Then $$P(H^{-1}\psi-\gamma h)=h-a=\sum_{k=1}^{\infty}a_k\zeta^{k}+a_1-a,\quad P(h)=0$$ and $$P(\overline{h})=\sum_{k=1}^{\infty}a_k\zeta^{k}=h-a_1,$$ from which it follows that $$P(\gamma h+\overline{h}-H^{-1}\psi)=P(\overline{h})-P(H^{-1}\psi-\gamma h)=a-a_1$$ and $$P(\gamma h+\overline{h}-H^{-1}\psi)=0\text{ iff }a=a_1.$$ Observe that $h(0)=K(h)(0)=P(H^{-1}\psi-\gamma h)(0)+a=a$. We shall make use of the Banach Fixed Point Theorem.
To do this, consider $W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})$ equipped with the following norm $$\|h\|_{\varepsilon}:=\|h\|_L+\varepsilon\|h'\|_L+ \varepsilon^2\|h''\|_L,$$ where $\varepsilon>0$ and $\|\cdot\|_L$ is the $L^2$-norm (it is a Banach space). We will prove that $K$ is a contraction with respect to the norm $\|\cdot\|_{\varepsilon}$ for sufficiently small $\varepsilon$. Indeed, there is $\widetilde\varepsilon>0$ such that for any $h_1,h_2\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})$ \begin{equation} \|K(h_1)-K(h_2)\|_L=\|P(\gamma(h_2-h_1))\|_L\leq\|\gamma(h_2-h_1)\|_L\leq (1-\widetilde\varepsilon)\|h_2-h_1\|_L. \label{al10} \end{equation} Moreover, \begin{multline} \|K(h_1)'-K(h_2)'\|_L= \|P(\gamma h_2)'-P(\gamma h_1)'\|_L\leq\\ \leq\|(\gamma h_2)'-(\gamma h_1)'\|_L= \|\gamma '(h_2-h_1)+\gamma(h_2'-h_1')\|_L. \label{al11} \end{multline} Furthermore, \begin{equation} \|K(h_1)''-K(h_2)''\|_L\leq\|\gamma ''(h_2-h_1)\|_L+2\|\gamma '(h_2'-h_1')\|_L+\|\gamma (h_1''-h_2'')\|_L.\label{al12} \end{equation} Using the finiteness of $\|\gamma '\|$, $\|\gamma ''\|$ and putting (\ref{al10}), (\ref{al11}), (\ref{al12}) together we see that there exists $\varepsilon>0$ such that $K$ is a contraction w.r.t. the norm $\|\cdot\|_{\varepsilon}$. We have found $\widetilde{f}$ and $\widetilde{\lambda}$ satisfying (\ref{al1}), (\ref{al3}) and the last $n-1$ equations from (\ref{al2}). It remains to show that there exists a unique $\widetilde{q}\in Q_0$ such that $\widetilde{q}+\zeta A_1+\zeta B_1-\varphi_1$ extends holomorphically on $\mathbb{D}$.
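For completeness, one way of putting \eqref{al10}, \eqref{al11} and \eqref{al12} together is the following sketch, in which $M_1$ and $M_2$ are ad hoc names (not used elsewhere) for finite upper bounds of $\|\gamma'\|$ and $\|\gamma''\|$ on $\mathbb{T}$: writing $h:=h_1-h_2$, we estimate
\begin{align*}
\|K(h_1)-K(h_2)\|_{\varepsilon}&\leq(1-\widetilde\varepsilon)\|h\|_L
+\varepsilon\bigl(M_1\|h\|_L+(1-\widetilde\varepsilon)\|h'\|_L\bigr)\\
&\quad+\varepsilon^2\bigl(M_2\|h\|_L+2M_1\|h'\|_L+(1-\widetilde\varepsilon)\|h''\|_L\bigr)\\
&\leq\bigl(1-\widetilde\varepsilon+3\varepsilon M_1+\varepsilon^2M_2\bigr)\|h\|_{\varepsilon},
\end{align*}
where in the last step we used $\varepsilon^2\|h'\|_L\leq\varepsilon\|h\|_\varepsilon$ and $\|h\|_L\leq\|h\|_\varepsilon$; thus $K$ is a contraction as soon as $\varepsilon(3M_1+\varepsilon M_2)<\widetilde\varepsilon$.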
Comparing the coefficients as in \eqref{65}, we see that if $$\pi(\zeta A_1+\zeta B_1-\varphi_1)=\sum_{k=-\infty}^{-1}a_k\zeta^{k},$$ then $\widetilde{q}$ has to be taken as $$-\sum_{k=-\infty}^{-1}a_k\zeta^{k}-\sum_{k=0}^{\infty}b_k\zeta^{k}$$ with $b_k:=\overline a_{-k}$ for $k\geq 1$ and $b_0\in\mathbb{R}$ uniquely determined by $\widetilde{q}(1)=0$.\\ Let us show that the proof of the second lemma follows from the proof of the first one. Since $\widetilde\Xi$ is real analytic, it suffices to prove that the derivative $$\widetilde\Xi_{(f,q,\xi)}(r_0,f_0(\xi_0),f_0,0,\xi_0):B\times Q_0\times\mathbb{R}\longrightarrow Q\times{B^*}\times\mathbb{C}^n$$ is invertible. For $(\widetilde{f},\widetilde{q},\widetilde{\xi})\in B\times Q_0\times\mathbb{R}$ we get \begin{multline*} \widetilde\Xi_{(f,q,\xi)}(r_0,f_0(\xi_0),f_0,0,\xi_0)(\widetilde{f},\widetilde{q},\widetilde{\xi})=\left.\frac{d}{dt} \widetilde\Xi(r_0,f_0(\xi_0),f_0+t\widetilde{f},t\widetilde{q},\xi_0+t\widetilde{\xi})\right|_{t=0}=\\ =((r_{0z}\circ f_0)\widetilde{f}+(r_{0\overline{z}}\circ f_0)\overline{\widetilde{f}}, \pi(\zeta\widetilde{q}r_{0z}\circ f_0+\zeta(r_{0zz}\circ f_0)\widetilde{f}+\zeta(r_{0z\overline{z}}\circ f_0)\overline{\widetilde{f}}),\widetilde{f}(\xi_0)+\widetilde\xi f_0'(\xi_0)).
\end{multline*} We have to show that for $(\eta,\varphi,w)\in Q\times B^*\times\mathbb{C}^n$ there exists exactly one $(\widetilde{f},\widetilde{q},\widetilde{\xi})\in B\times Q_0\times\mathbb{R}$ satisfying \begin{equation} (r_{0z}\circ f_0)\widetilde{f}+(r_{0\overline{z}}\circ f_0)\overline{\widetilde{f}}=\eta, \label{1al1} \end{equation} \begin{equation} \pi(\zeta\widetilde{q}r_{0z}\circ f_0+\zeta (r_{0zz}\circ f_0)\widetilde{f}+\zeta(r_{0z\overline{z}}\circ f_0)\overline{\widetilde{f}})=\varphi, \label{1al2} \end{equation} \begin{equation} \widetilde f(\xi_0)+\widetilde\xi f_0'(\xi_0)=w. \label{1al3} \end{equation} The equation (\ref{1al1}) turns out to be \begin{equation} \re(\widetilde{f}_1/\zeta)=\eta\text{ (on }\mathbb{T}). \label{1al4} \end{equation} The equation above uniquely determines $\widetilde{f}_1/\zeta\in W^{2,2}(\mathbb{T},\mathbb{C})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ up to an imaginary additive constant, which may be computed using (\ref{1al3}). Indeed, there exists $G\in W^{2,2}(\mathbb{T},\mathbb{C})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ such that $\eta=\re G$ on $\mathbb{T}$. We are looking for $C\in\mathbb{R}$ such that the functions $\widetilde{f}_1:=\zeta(G+iC)$ and $\theta:=\im(\widetilde{f}_1/\zeta)$ satisfy $$\xi_0\eta(\xi_0)+i\xi_0\theta(\xi_0)=\widetilde{f}_1(\xi_0)$$ and $$\xi_0(\eta(\xi_0)+i\theta(\xi_0))+\widetilde{\xi}\re{f_{01}'(\xi_0)}+i\widetilde{\xi}\im{{f_{01}'(\xi_0)}}= \re{w_1}+i\im{w_1}.$$ But $$\xi_0\eta(\xi_0)+\widetilde{\xi}\re{f_{01}'(\xi_0)}=\re{w_1},$$ which yields $\widetilde{\xi}$, then $\theta(\xi_0)$, and consequently the number $C$. Having $\widetilde{\xi}$ and once again using (\ref{1al3}), we find uniquely determined $\widetilde{f}_2(\xi_0),\ldots,\widetilde{f}_n(\xi_0)$.
Therefore, the equations \eqref{1al1} and \eqref{1al3} are satisfied by uniquely determined $\widetilde f_1$, $\widetilde\xi$ and $\widetilde{f}_2(\xi_0),\ldots,\widetilde{f}_n(\xi_0)$. In the remaining part of the proof we change the second condition of \eqref{al6} to $$g(\xi_0)={\widehat{\widetilde{f}}}(\xi_0)/\xi_0$$ and we have to prove that there is a unique solution $h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ of (\ref{al9}) such that $h(\xi_0)=a$ with a given $a\in\mathbb{C}^{n-1}$. Let $\tau$ be an automorphism of $\mathbb{D}$ (so it extends holomorphically near $\overline{\DD}$) which maps $0$ to $\xi_0$, i.e. $$\tau(\xi):=\frac{\xi_0-\xi}{1-\overline\xi_0\xi},\ \xi\in\mathbb{D}.$$ Let the maps $P,K$ be as before. Then $h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ satisfies (\ref{al9}) and $h(\xi_0)=a$ if and only if $h\circ\tau\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ satisfies (\ref{al9}) and $(h\circ\tau)(0)=a$. We already know that there is exactly one $\widetilde h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ satisfying (\ref{al9}) and $\widetilde h(0)=a$. Setting $h:=\widetilde h\circ\tau^{-1}$, we get the claim. \end{proof} \subsection{Topology in the class of domains with real analytic boundaries}\label{topol} We introduce a notion of one domain being close to another. Let $D_0\subset\mathbb{C}^n$ be a bounded domain with real analytic boundary.
Then there exist a neighborhood $U_0$ of $\partial D_0$ and a real analytic defining function $r_0:U_0\longrightarrow\mathbb{R}$ such that $\nabla r_0$ does not vanish in $U_0$ and $$D_0\cap U_0=\{z\in U_0:r_0(z)<0\}.$$ \begin{dff} We say that domains $D$ \textit{tend to} $D_0$ $($or are \textit{close to} $D_0)$ if one can choose their defining functions $r\in X_0$ so that $r$ tend to $r_0$ in $X_0$. \end{dff} \begin{remm} If $r\in X_0$ is close to $r_0$ with respect to the topology of $X_0$, then $\{z\in U_0:r(z)=0\}$ is a compact real analytic hypersurface which bounds a bounded domain; we denote this domain by $D^{r}$. Moreover, if $D^{r_0}$ is strongly linearly convex, then the domain $D^r$ is also strongly linearly convex provided that $r$ is close to $r_0$. \end{remm} \subsection{Statement of the main result of this section} \begin{remm}\label{f} Assume that $D^r$ is a strongly linearly convex domain bounded by a real analytic hypersurface $\{z\in U_0:r(z)=0\}$. Let $\xi\in(0,1)$ and $w\in(\mathbb{C}^n)_*$. Then a function $f\in B_0$ satisfies the conditions $$f\text{ is a weak stationary mapping of }D^r,\ f(0)=0,\ f(\xi)=w$$ if and only if there exists $q\in Q_0$ such that $q>-1$ and $\widetilde\Xi(r,w,f,q,\xi)=0$. Actually, from $\widetilde\Xi(r,w,f,q,\xi)=0$ we deduce immediately that $r\circ f=0$ on $\mathbb{T}$, $f(\xi)=w$ and $\pi(\zeta(1+q)(r_z\circ f))=0$. From the first equality we get $f(\mathbb{T})\subset \partial D^{r}$. From the last one we deduce that condition (3') of Definition~\ref{21} is satisfied (with $\rho:=(1+q)|r_z\circ f|$). Since $D^{r}$ is strongly linearly convex, $\overline{D^r}$ is polynomially convex (use the fact that projections of $\mathbb{C}$-convex domains are $\mathbb{C}$-convex, as well as the fact that $D^r$ is smooth).
In particular, $$f(\overline{\DD})=f(\widehat{\mathbb{T}})\subset\widehat{f(\mathbb{T})}\subset\widehat{\overline{D^r}}=\overline{D^r},$$ where $\widehat S:=\{z\in\mathbb{C}^m:|P(z)|\leq\sup_S|P|\text{ for any polynomial }P\in\mathbb{C}[z_1,\ldots,z_m]\}$ is the polynomial hull of a set $S\subset\mathbb{C}^m$. Note that this implies $f(\mathbb{D})\subset D^r$ --- this follows from the fact that $\partial D^r$ does not contain non-constant analytic discs (as $D^r$ is strongly pseudoconvex). The opposite implication is clear. In a similar way we show that for any $v\in(\mathbb{C}^n)_*$ and $\lambda>0$, a function $f\in B_0$ satisfies the conditions $$f\text{ is a weak stationary mapping of }D^r,\ f(0)=0,\ f'(0)=\lambda v$$ if and only if there exists $q\in Q_0$ such that $q>-1$ and $\Xi(r,v,f,q,\lambda)=0$. \end{remm} \begin{propp}\label{13} Let $D_0\subset\mathbb{C}^n$, $n\geq 2$, be a strongly linearly convex domain with real analytic boundary and let $f_0:\mathbb{D}\longrightarrow D_0$ be an $E$-mapping. $(1)$ Let $\xi_0\in(0,1)$.
Then there exist a neighborhood $W_0$ of $(r_0,f_0(\xi_0))$ in $X_0\times D_0$ and real analytic mappings $$\Lambda:W_0\longrightarrow\mathcal{C}^{1/2}(\overline{\mathbb{D}}),\ \Omega:W_0\longrightarrow(0,1)$$ such that $$\Lambda(r_0,f_0(\xi_0))=f_0,\ \Omega(r_0,f_0(\xi_0))=\xi_0$$ and for any $(r,w)\in W_0$ the mapping $f:=\Lambda(r,w)$ is an $E$-mapping of $D^{r}$ satisfying $$f(0)=f_0(0)\text{ and }f(\Omega(r,w))=w.$$ $(2)$ There exist a neighborhood $V_0$ of $(r_0,f_0'(0))$ in $X_0\times\mathbb{C}^n$ and a real analytic mapping $$\Gamma:V_0\longrightarrow\mathcal{C}^{1/2}(\overline{\mathbb{D}})$$ such that $$\Gamma(r_0,f_0'(0))=f_0$$ and for any $(r,v)\in V_0$ the mapping $f:=\Gamma(r,v)$ is an $E$-mapping of $D^{r}$ satisfying $$f(0)=f_0(0)\text{ and }f'(0)=\lambda v\text{ for some }\lambda>0.$$ \end{propp} \begin{proof} Observe that Proposition \ref{11} provides us with a mapping $g_0=\Phi\circ f_0$ and a domain $G_0:=\Phi(D_0)$ giving data for situation (\dag) (here $\partial D_0$ is contained in $U_0$). Clearly, $\rho_0:=r_0\circ\Phi^{-1}$ is a defining function of $G_0$. Using Lemmas \ref{cruciallemma}, \ref{cruciallemma1} we get neighborhoods $V_0$, $W_0$ of $(\rho_0, g_0'(0))$, $(\rho_0,g_0(\xi_0))$, respectively, and real analytic mappings $\Upsilon$, $\widetilde\Upsilon$ such that $\Xi(\rho,v,\Upsilon(\rho,v))=0$ on $V_0$ and $\widetilde\Xi(\rho,w,\widetilde\Upsilon(\rho,w))=0$ on $W_0$. Define $$\widehat\Lambda:=\pi_B\circ\widetilde\Upsilon,\quad\Omega:=\pi_\mathbb{R}\circ\widetilde\Upsilon,\quad\widehat\Gamma:=\pi_B\circ\Upsilon,$$ where $$\pi_B:B\times Q_0\times\mathbb{R}\longrightarrow B,\quad\pi_\mathbb{R}:B\times Q_0\times\mathbb{R}\longrightarrow\mathbb{R}$$ are the projections. If $\rho$ is sufficiently close to $\rho_0$, then the hypersurface $\{\rho=0\}$ bounds a strongly linearly convex domain. Moreover, $\widehat\Lambda(\rho,w)$ and $\widehat\Gamma(\rho,v)$ are then weak stationary mappings in $G^{\rho}$ (see Remark~\ref{f}).
Composing $\widehat\Lambda(\rho,w)$ and $\widehat\Gamma(\rho,v)$ with $\Phi^{-1}$ and making use of Remark \ref{rem:theta} we get weak stationary mappings in $D^r$, where $r:=\rho\circ\Phi$. To show that they are $E$-mappings we proceed as follows. If $D^r$ is sufficiently close to $D_0$ (this depends on the distance between $\rho$ and $\rho_0$), the domain $D^r$ is strongly linearly convex, so by the results of Section \ref{55} $$\Lambda(r,w):=\Phi^{-1}\circ\widehat\Lambda(\rho,w)\text{\ and\ }\Gamma(r,v):=\Phi^{-1}\circ\widehat\Gamma(\rho,v)$$ are stationary mappings. Moreover, they are close to $f_0$ provided that $r$ is sufficiently close to $r_0$. Therefore, their winding numbers are equal. Thus $f$ satisfies condition (4) of Definition~\ref{21e}, i.e. $f$ is an $E$-mapping. \end{proof} \section{Localization property} \begin{prop}\label{localization} Let $D\subset\mathbb C^n$, $n\geq 2$, be a domain. Assume that $a\in\partial D$ is such that $\partial D$ is real analytic and strongly convex in a neighborhood of $a$. Then for any sufficiently small neighborhood $V_0$ of $a$ there is a weak stationary mapping $f$ of $D\cap V_0$ such that $f(\mathbb T)\subset\partial D$. In particular, $f$ is a weak stationary mapping of $D$. \end{prop} \begin{proof} Let $r$ be a real analytic defining function in a neighborhood of $a$. The problem we are dealing with has a local character, so replacing $r$ with $r\circ\Psi$, where $\Psi$ is a local biholomorphism near $a$, we may assume that $a=(0,\ldots,0,1)$ and a defining function of $D$ near $a$ is $r(z)=-1+|z|^2+h(z-a)$, where $h$ is real analytic in a neighborhood of $0$ and $h(z)=O(|z|^3)$ as $z\to 0$ (cf. \cite{Rud}, p. 321). Following \cite{Lem2}, let us consider the mappings $$A_t(z):=\left((1-t^2)^{1/2}\frac{z'}{1+tz_n},\frac{z_n+t}{1+tz_n}\right),\quad z=(z',z_n)\in\mathbb{C}^{n-1}\times\mathbb{D},\,\,t\in(0,1),$$ which restricted to $\mathbb{B}_n$ are automorphisms of $\mathbb{B}_n$.
Let $$r_t(z):=\begin{cases}\frac{|1+tz_n|^2}{1-t^2}r(A_t(z)),&t\in(0,1),\\-1+|z|^2,&t=1.\end{cases}$$ It is clear that $f_{(1)}(\zeta):=(\zeta,0,\ldots,0)$, $\zeta\in\mathbb{D}$, is a stationary mapping of $\mathbb B_n$. We want to have the situation (\dag), which will allow us to use Lemma \ref{cruciallemma} (or Lemma \ref{cruciallemma1}). Note that $r_t$ does not converge to $r_1$ as $t\to 1$. However, $r_t\to r_1$ in $X_0(U_0,U_0^{\mathbb C})$, where $U_0$ is a neighborhood of $f_{(1)}(\mathbb{T})$ contained in $\{z\in\mathbb C^n:\re z_n>-1/2\}$ and $U_0^{\mathbb C}$ is sufficiently small (remember that $h(z)=O(|z|^3)$). Therefore, making use of Lemma \ref{cruciallemma}, for $t$ sufficiently close to $1$ we obtain stationary mappings $f_{(t)}$ in $D_t:=\{z\in \mathbb C^n: r_t(z)<0,\ \re z_n>-1/2\}$ such that $f_{(t)}\to f_{(1)}$ in the $W^{2,2}$-norm (so also in the sup-norm). Actually, it follows from Lemma~\ref{cruciallemma} that one may take $f_{(t)}:=\pi_B\circ\Upsilon(r_t,f_{(1)}'(0))$ (keeping the notation from this lemma). The argument used in Remark~\ref{f} gives that $f_{(t)}$ satisfies conditions (1'), (2') and (3') of Definition~\ref{21}. Since the non-constant function $r\circ A_t\circ f_{(t)}$ is subharmonic on $\mathbb{D}$, continuous on $\overline{\DD}$ and $r\circ A_t\circ f_{(t)}=0$ on $\mathbb{T}$, we see from the maximum principle that $f_{(t)}$ maps $\mathbb{D}$ into $D_t$. Therefore, $f_{(t)}$ are weak stationary mappings for $t$ close to $1$. In particular, $$f_{(t)}(\mathbb{D})\subset 2\mathbb B_n \cap \{z\in\mathbb C^n:\re z_n>-1/2\}$$ provided that $t$ is close to $1$. The mappings $A_t$ have the following important property: $$A_t(2\mathbb B_n\cap\{z\in\mathbb C^n:\re z_n>-1/2\})\to\{a\}$$ as $t\to 1$ in the sense of the Hausdorff distance. Therefore, we find from Remark \ref{rem:theta} that $g_{(t)}:=A_t\circ f_{(t)}$ is a stationary mapping of $D$.
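This convergence of $A_t$ can be verified by an elementary estimate (a sketch of the verification; the constants come from the obvious bounds $|z'|\leq 2$, $|z_n-1|\leq 3$ and $|1+tz_n|\geq\re(1+tz_n)>1-t/2\geq 1/2$ on the region in question): \begin{eqnarray*}\left|(1-t^2)^{1/2}\frac{z'}{1+tz_n}\right|&\leq&4(1-t^2)^{1/2}\longrightarrow 0,\\ \left|\frac{z_n+t}{1+tz_n}-1\right|&=&\frac{(1-t)|z_n-1|}{|1+tz_n|}\leq 6(1-t)\longrightarrow 0\end{eqnarray*} as $t\to 1$, so $A_t(z)\to(0',1)=a$ uniformly on $2\overline{\mathbb{B}}_n\cap\{z\in\mathbb C^n:\re z_n>-1/2\}$.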
Since $g_{(t)}$ maps $\mathbb{D}$ into an arbitrarily small neighborhood of $a$ provided that $t$ is sufficiently close to $1$, we immediately get the assertion. \end{proof} \section{Proofs of Theorems \ref{lem-car} and \ref{main}} We start this section with the following \begin{lem}\label{lemat} For any different $z,w\in D$ $($resp. for any $z\in D$, $v\in(\mathbb{C}^n)_*${$)$} there exists an $E$-mapping $f:\mathbb{D}\longrightarrow D$ such that $f(0)=z$, $f(\xi)=w$ for some $\xi\in(0,1)$ $($resp. $f(0)=z$, $f'(0)=\lambda v$ for some $\lambda>0${$)$}. \end{lem} \begin{proof} Fix different $z,w\in D$ (resp. $z\in D$, $v\in(\mathbb{C}^{n})_*$). First, consider the case when $D$ is bounded strongly convex with real analytic boundary. Without loss of generality one may assume that $0\in D\Subset\mathbb{B}_n$. We need some properties of Minkowski functionals. Let $\mu_G$ be the Minkowski functional of a domain $G\subset\mathbb{C}^n$ containing the origin, i.e. $$\mu_G(x):=\inf\left\{s>0:\frac{x}{s}\in G\right\},\ x\in\mathbb{C}^n.$$ Assume that $G$ is bounded strongly convex with real analytic boundary. We shall show that \begin{itemize} \item $\mu_G-1$ is a defining function of $G$, real analytic outside $0$; \item $\mu^2_G-1$ is a defining function of $G$, real analytic and strongly convex outside $0$. \end{itemize} Clearly, $G=\{x\in\mathbb{R}^{2n}:\mu_G(x)<1\}$.
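Both assertions can be illustrated in the model case $G=\mathbb{B}_n$ (a sketch, not needed in the sequel): there $$\mu_{\mathbb{B}_n}(x)=|x|,\qquad\mu_{\mathbb{B}_n}^2(x)-1=|x|^2-1,\qquad\mathcal{H}\mu_{\mathbb{B}_n}^2(x;X)=2|X|^2,$$ so $\mu_{\mathbb{B}_n}$ is real analytic outside $0$ (but not at $0$), while $\mu_{\mathbb{B}_n}^2-1$ is real analytic everywhere and strongly convex.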
Setting $$q(x,s):=r\left(\frac{x}{s}\right),\ (x,s)\in U_0\times U_1,$$ where $r$ is a real analytic defining function of $G$ (defined near $\partial G$) and $U_0\subset\mathbb{R}^{2n}$, $U_1\subset\mathbb{R}$ are neighborhoods of $\partial G$ and $1$ respectively, we have $$\frac{\partial q}{\partial s}(x,s)=-\frac{1}{s^2}\left\langle\nabla r\left(\frac{x}{s}\right),x\right\rangle_{\mathbb{R}}\neq 0$$ for $(x,s)$ such that $x\in\partial G$ and $s=\mu_G(x)=1$ (since $0\in G$, the vector $-x$ attached at the point $x$ points into $G$, so it is not orthogonal to the normal vector at $x$). By the Implicit Function Theorem applied to the equation $q=0$, the function $\mu_G$ is real analytic in a neighborhood $V_0$ of $\partial G$. To see that $\mu_G$ is real analytic outside $0$, fix $x_0\in(\mathbb{R}^{2n})_*$. Then the set $$W_0:=\left\{x\in\mathbb{R}^{2n}:\frac{x}{\mu_G(x_0)}\in V_0\right\}$$ is open and contains $x_0$. Since $$\mu_G(x)=\mu_G(x_0)\mu_G\left(\frac{x}{\mu_G(x_0)}\right),\ x\in W_0,$$ the function $\mu_G$ is real analytic in $W_0$. Therefore, we can take $d/ds$ on both sides of $\mu_G(sx)=s\mu_G(x)$, $x\neq 0$, $s>0$, to obtain $$\langle\nabla\mu_G(x),x\rangle_{\mathbb{R}}=\mu_G(x),\ x\neq 0,$$ so $\nabla\mu_G\neq 0$ in $(\mathbb{R}^{2n})_*$. Furthermore, $\nabla\mu^2_G=2\mu_G\nabla\mu_G$, so $\mu^2_G-1$ is also a defining function of $G$. To show that $u:=\mu^2_G$ is strongly convex outside $0$ let us prove that $$X^T\mathcal{H}_aX>0,\quad a\in\partial G,\ X\in(\mathbb{R}^{2n})_*,$$ where $\mathcal{H}_x:=\mathcal{H}u(x)$ for $x\in(\mathbb{R}^{2n})_*$.
Taking $\partial/\partial x_j$ on both sides of $$u(sx)=s^2u(x),\ x,s\neq 0,$$ we get \begin{equation}\label{62}\frac{\partial u}{\partial x_j}(sx)=s\frac{\partial u}{\partial x_j}(x)\end{equation} and further, taking $d/ds$, $$\sum_{k=1}^{2n}\frac{\partial^2 u}{\partial x_j\partial x_k}(sx)x_k=\frac{\partial u}{\partial x_j}(x).$$ In particular, $$x^T\mathcal{H}_xy=\sum_{j,k=1}^{2n}\frac{\partial^2 u}{\partial x_k\partial x_j}(x)x_ky_j=\langle\nabla u(x),y\rangle_{\mathbb{R}},\ x\in(\mathbb{R}^{2n})_*,\ y\in\mathbb{R}^{2n}.$$ Let $a\in\partial G$. Since $\langle\nabla\mu_G(a),a\rangle_{\mathbb{R}}=\mu_G(a)=1$, we have $a\notin T^\mathbb{R}_G(a)$. Any $X\in(\mathbb{R}^{2n})_*$ can be represented as $\alpha a+\beta Y$, where $Y\in T^\mathbb{R}_G(a)$, $\alpha,\beta\in\mathbb{R}$, $(\alpha,\beta)\neq(0,0)$. Then \begin{eqnarray*}X^T\mathcal{H}_aX&=&\alpha^2a^T\mathcal{H}_aa+2\alpha\beta a^T\mathcal{H}_aY+\beta^2Y^T\mathcal{H}_aY=\\&=&\alpha^2\langle\nabla u(a),a\rangle_{\mathbb{R}} +2\alpha\beta\langle\nabla u(a),Y\rangle_{\mathbb{R}} +\beta^2Y^T\mathcal{H}_aY= \\&=&2\alpha^2\mu_G(a)\langle\nabla\mu_G(a),a\rangle_{\mathbb{R}} +\beta^2Y^T\mathcal{H}_aY= 2\alpha^2+\beta^2Y^T\mathcal{H}_aY.\end{eqnarray*} Since $G$ is strongly convex, the Hessian of any defining function is strictly positive on the tangent space, i.e. $Y^T\mathcal{H}_aY>0$ if $Y\in(T^\mathbb{R}_G(a))_*$. Hence $X^T\mathcal{H}_aX\geq 0$. Moreover, $X^T\mathcal{H}_aX=0$ is impossible: then $\alpha=0$, consequently $\beta\neq 0$ and $Y^T\mathcal{H}_aY=0$, whereas $Y=X/\beta\neq 0$ --- a contradiction.
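As a consistency check in the model case $G=\mathbb{B}_n$, $u(x)=|x|^2$ (a sketch, not used below): here $\mathcal{H}_a=2\,\mathrm{Id}$ and $\nabla u(x)=2x$, so the identity $x^T\mathcal{H}_xy=\langle\nabla u(x),y\rangle_{\mathbb{R}}$ reads $$x^T(2\,\mathrm{Id})y=2\langle x,y\rangle_{\mathbb{R}}=\langle 2x,y\rangle_{\mathbb{R}},$$ and, since $a\perp Y$, the computation above gives $X^T\mathcal{H}_aX=2\alpha^2+2\beta^2|Y|^2=2|X|^2>0$, as expected.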
Taking $\partial/\partial x_k$ on both sides of \eqref{62} we obtain $$\frac{\partial^2 u}{\partial x_j\partial x_k}(sx)=\frac{\partial^2 u}{\partial x_j\partial x_k}(x),\ x,s\neq 0,$$ and for $a,X\in(\mathbb{R}^{2n})_*$ $$X^T\mathcal{H}_aX=X^T\mathcal{H}_{a/\mu_G(a)}X>0.$$ Let us consider the sets $$D_t:=\{x\in\mathbb{C}^n:t\mu^2_D(x)+(1-t)\mu^2_{\mathbb{B}_n}(x)<1\},\ t\in[0,1].$$ The functions $t\mu^2_D+(1-t)\mu^2_{\mathbb{B}_n}$ are real analytic and strongly convex in $(\mathbb{C}^n)_*$, so $D_t$ are strongly convex domains with real analytic boundaries satisfying $$D=D_1\Subset D_{t_2}\Subset D_{t_1}\Subset D_0=\mathbb{B}_n\text{\ if \ }0<t_1<t_2<1.$$ It is clear that $\mu_{D_t}=\sqrt{t\mu^2_D+(1-t)\mu^2_{\mathbb{B}_n}}$. Further, if $t_{1}$ is close to $t_{2}$ then $D_{t_{1}}$ is close to $D_{t_{2}}$ w.r.t. the topology introduced in Section \ref{27}. We want to show that all $D_t$ belong to some family $\mathcal D(c)$. Only the interior and exterior ball conditions need to be verified. There exists $\delta>0$ such that $\delta\mathbb{B}_n\Subset D$. Further, $\nabla\mu_{D_t}^2\neq 0$ in $(\mathbb{R}^{2n})_*$. Set $$M:=\sup\left\{\frac{\mathcal{H}\mu_{D_t}^2(x;X)}{|\nabla\mu_{D_t}^2(y)|}: t\in[0,1],\ x,y\in 2\overline{\mathbb{B}}_n\setminus\delta\mathbb{B}_n,\ X\in\mathbb{R}^{2n},\ |X|=1\right\}.$$ It is a finite positive number: the function under the supremum is continuous and, by the strong convexity of $\mu_{D_t}^2$ in $(\mathbb{R}^{2n})_*$, positive, and the supremum is taken over a compact set. Let $$r:=\min\left\{\frac{1}{2M},\frac{\dist(\partial D,\delta\mathbb{B}_n)}{2}\right\}.$$ For fixed $t\in[0,1]$ and $a\in\partial D_t$ put $a':=a-r\nu_{D_t}(a)$. In particular, $\overline{B_n(a',r)}\subset 2\overline{\mathbb{B}}_n\setminus\delta\mathbb{B}_n$.
Let us define $$h(x):=\mu^2_{D_t}(x)-\frac{|\nabla\mu^2_{D_t}(a)|}{2|a-a'|}(|x-a'|^2-r^2),\ x\in 2\overline{\mathbb{B}}_n\setminus\delta\mathbb{B}_n.$$ We have $h(a)=1$ and $$\nabla h(x)=\nabla\mu^2_{D_t}(x)-\frac{|\nabla\mu^2_{D_t}(a)|}{|a-a'|}(x-a').$$ For $x=a$, dividing the right side by $|\nabla\mu^2_{D_t}(a)|$, we get a difference of the same normal vectors $\nu_{D_t}(a)$, so $\nabla h(a)=0$. Moreover, for $|X|=1$ $$\mathcal{H}h(x;X)=\mathcal{H}\mu^2_{D_t}(x;X)-\frac{|\nabla\mu^2_{D_t}(a)|}{r}\leq M|\nabla\mu^2_{D_t}(a)|-2M|\nabla\mu^2_{D_t}(a)|<0.$$ It follows that $h\leq 1$ in any convex set $S$ such that $a\in S\subset 2\overline{\mathbb{B}}_n\setminus\delta\mathbb{B}_n$. Indeed, assume the contrary. Then there is $y\in S$ such that $h(y)>1$. Let us join $a$ and $y$ with an interval and consider $$g:[0,1]\ni t\longmapsto h(ta+(1-t)y)\in\mathbb{R}.$$ Since $a$ is a strong local maximum of $h$, the function $g$ has a local minimum at some point $t_0\in(0,1)$. Hence $$0\leq g''(t_0)=\mathcal{H}h(t_0a+(1-t_0)y;a-y),$$ which is impossible. Setting $S:=\overline{B_n(a',r)}$, we get $$\mu^2_{D_t}(x)\leq 1+\frac{|\nabla\mu^2_{D_t}(a)|}{2|a-a'|}(|x-a'|^2-r^2)<1$$ for $x\in B_n(a',r)$, i.e. $x\in D_t$. The proof of the exterior ball condition is similar. Set $$m:=\inf\left\{\frac{\mathcal{H}\mu_{D_t}^2(x;X)}{|\nabla\mu_{D_t}^2(y)|}: t\in[0,1],\ x,y\in(\overline{\mathbb{B}}_n)_*,\ X\in\mathbb{R}^{2n},\ |X|=1\right\}.$$ Note that $m>0$. Actually, the homogeneity of $\mu_{D_t}$ implies $\mathcal{H}\mu_{D_t}^2(sx;X)=\mathcal{H}\mu_{D_t}^2(x;X)$ and $\nabla\mu_{D_t}^2(sx)=s\nabla\mu_{D_t}^2(x)$ for $x\neq 0$, $X\in \mathbb{R}^{2n}$, $s>0$. Therefore, there are positive constants $C_1,C_2$ such that $C_1\leq\mathcal{H}\mu_{D_t}^2(x;X)$ for $x\neq 0$, $X\in \mathbb{R}^{2n}$, $|X|=1$ and $|\nabla\mu_{D_t}^2(y)|\leq C_2$ for $y\in\overline{\mathbb{B}}_n$. In particular, $m\geq C_1/C_2$. Let $R:=2/m$. For fixed $t\in[0,1]$ and $a\in\partial D_t$ put $a'':=a-R\nu_{D_t}(a)$.
Let us define $$\widetilde h(x):=\mu^2_{D_t}(x)-\frac{|\nabla\mu^2_{D_t}(a)|}{2|a-a''|}(|x-a''|^2-R^2),\ x\in\overline{\mathbb{B}}_n.$$ We have $\widetilde h(a)=1$ and $$\nabla\widetilde h(x)=\nabla\mu^2_{D_t}(x)-\frac{|\nabla\mu^2_{D_t}(a)|}{|a-a''|}(x-a''),$$ so $\nabla\widetilde h(a)=0$. Moreover, for $x\in(\overline{\mathbb{B}}_n)_*$ and $|X|=1$ $$\mathcal{H}\widetilde h(x;X)=\mathcal{H}\mu^2_{D_t}(x;X)-\frac{|\nabla\mu^2_{D_t}(a)|}{R}\geq m|\nabla\mu^2_{D_t}(a)|-\frac{m}{2}|\nabla\mu^2_{D_t}(a)|>0.$$ Therefore, $a$ is a strong local minimum of $\widetilde h$. Now using the properties listed above we may deduce that $\widetilde h\geq 1$ in $\overline{\mathbb{B}}_n$. We proceed similarly as before: seeking a contradiction, suppose that there is $y\in\overline{\mathbb{B}}_n$ such that $\widetilde h(y)<1$. Moving $y$ a little (if necessary) we may assume that $0$ does not lie on the interval joining $a$ and $y$. Then the function $\widetilde g(t):=\widetilde h(ta+(1-t)y)$ attains a local maximum at some point $t_0\in(0,1)$. The second derivative of $\widetilde g$ at $t_0$ is non-positive, which contradicts the positivity of the Hessian of $\widetilde h$. Hence we get $$\frac{|\nabla\mu^2_{D_t}(a)|}{2|a-a''|}(|x-a''|^2-R^2)\leq\mu^2_{D_t}(x)-1<0$$ for $x\in D_t$, so $D_t \subset B_n(a'',R)$. Let $T$ be the set of all $t\in[0,1]$ such that there is an $E$-mapping $f_{t}:\mathbb{D}\longrightarrow D_{t}$ with $f_{t}(0)=z$, $f_{t}(\xi_{t})=w$ for some $\xi_{t}\in(0,1)$ (resp. $f_{t}(0)=z$, $f_{t}'(0)=\lambda_{t}v$ for some $\lambda_{t}>0$). We claim that $T=[0,1]$. To prove it we will use an open-closed argument. Clearly, $T\neq\emptyset$, as $0\in T$. Moreover, $T$ is open in $[0,1]$. Indeed, let $t_{0}\in T$.
It follows from Proposition \ref{13} that there is a neighborhood $T_{0}$ of $t_{0}$ such that there are $E$-mappings $f_{t}:\mathbb{D}\longrightarrow D_{t}$ and $\xi_{t}\in(0,1)$ such that $f_{t}(0)=z$, $f_{t}(\xi_{t})=w$ for all $t\in T_{0}$ (resp. $\lambda_{t}>0$ such that $f_{t}(0)=z$, $f_{t}'(0)=\lambda_{t} v$ for all $t\in T_{0}$). To prove that $T$ is closed, choose a sequence $\{t_{m}\}\subset T$ convergent to some $t\in[0,1]$. We want to show that $t\in T$. Since $f_{t_m}$ are $E$-mappings, they are complex geodesics. Therefore, making use of the inclusions $D\subset D_{t_m}\subset\mathbb B_n$ we find that there is a compact set $K\subset(0,1)$ (resp. a compact set $\widetilde K\subset(0,\infty)$) such that $\{\xi_{t_m}\}\subset K$ (resp. $\{\lambda_{t_m}\}\subset\widetilde K$). By Propositions \ref{8} and \ref{10b} the functions $f_{t_{m}}$ and $\widetilde f_{t_{m}}$ are equicontinuous in $\mathcal{C}^{1/2}(\overline{\mathbb{D}})$ and by Propositions \ref{9} and \ref{10a} the functions $\rho_{t_{m}}$ are uniformly bounded from both sides by positive numbers and equicontinuous in $\mathcal{C}^{1/2}(\mathbb{T})$. From the Arzel\`a-Ascoli Theorem there are a subsequence $\{s_{m}\}\subset\{t_{m}\}$ and mappings $f,\widetilde f\in{\mathcal O}(\mathbb{D})\cap\mathcal C^{1/2}(\overline{\mathbb D})$, $\rho\in{\mathcal C}^{1/2}(\mathbb{T})$ such that $f_{s_{m}}\to f$, $\widetilde{f}_{s_{m}}\to\widetilde f$ uniformly on $\overline{\mathbb{D}}$, $\rho_{s_{m}}\to\rho$ uniformly on $\mathbb{T}$ and $\xi_{s_m}\to\xi\in (0,1)$ (resp. $\lambda_{s_m}\to\lambda>0$). Clearly, $f(\overline{\DD})\subset\overline{D}_{t}$, $f(\mathbb{T})\subset\partial D_{t}$ and $\rho>0$. By the strong pseudoconvexity of $D_t$ we get $f(\mathbb{D})\subset D_t$. The conditions (3') and (4) of Definitions~\ref{21} and \ref{21e} follow from the uniform convergence of suitable functions.
Therefore, $f$ is a weak $E$-mapping of $D_{t}$, consequently an $E$-mapping of $D_t$, satisfying $f(0)=z$, $f(\xi)=w$ (resp. $f(0)=z$, $f'(0)=\lambda v$). Let us go back to the general situation, that is, when the domain $D$ is bounded, strongly linearly convex, with real analytic boundary. Take a point $\eta\in\partial{D}$ such that $\max_{\zeta\in\partial{D}}|z-\zeta|=|z-\eta|$. Then $\eta$ is a point of strong convexity of $D$. Indeed, by the Implicit Function Theorem one can assume that in a neighborhood of $\eta$ the defining functions of $D$ and $B:=B_n(z,|z-\eta|)$ are of the form $r(x):=\widetilde r(\widetilde x)-x_{2n}$ and $q(x):=\widetilde q(\widetilde x)-x_{2n}$ respectively, where $x=(\widetilde x,x_{2n})\in\mathbb{R}^{2n}$ is sufficiently close to $\eta$. From the inclusion $D\subset B$ it follows that $r-q\geq 0$ near $\eta$ and $(r-q)(\eta)=0$. Thus the Hessian $\mathcal{H}(r-q)(\eta)$ is weakly positive in $\mathbb{C}^n$. Since $\mathcal{H}q(\eta)$ is strictly positive on $T_B^\mathbb{R}(\eta)_*=T_D^\mathbb{R}(\eta)_*$, we find that $\mathcal{H}r(\eta)$ is strictly positive on $T_D^\mathbb{R}(\eta)_*$, as well. By a continuity argument, there is a convex neighborhood $V_0$ of $\eta$ such that all points from $\partial D\cap V_0$ are points of strong convexity of $D$. It follows from Proposition \ref{localization} (after shrinking $V_0$ if necessary) that there is a weak stationary mapping $g:\mathbb{D}\longrightarrow D\cap V_0$ such that $g(\mathbb{T})\subset\partial D$. In particular, $g$ is a weak stationary mapping of $D$. Since $D\cap V_0$ is convex, the condition with the winding number is satisfied on $D\cap V_0$ (and then on the whole $D$). Consequently, $g$ is an $E$-mapping of $D$. If $z=g(0)$, $w=g(\xi)$ for some $\xi\in\mathbb{D}$ (resp. $z=g(0)$, $v=g'(0)$) then there is nothing to prove.
In the other case let us take curves $\alpha:[0,1]\longrightarrow D$, $\beta:[0,1]\longrightarrow D$ joining $g(0)$ and $z$, $g(\xi)$ and $w$ (resp. $g(0)$ and $z$, $g'(0)$ and $v$). We may assume that the images of $\alpha$ and $\beta$ are disjoint. Let $T$ be the set of all $t\in[0,1]$ such that there is an $E$-mapping $g_{t}:\mathbb{D}\longrightarrow D$ such that $g_{t}(0)=\alpha(t)$, $g_{t}(\xi_{t})=\beta(t)$ for some $\xi_{t}\in(0,1)$ (resp. $g_{t}(0)=\alpha(t)$, $g_{t}'(0)=\lambda_{t}\beta(t)$ for some $\lambda_{t}>0$). Again $T\neq\emptyset$, since $0\in T$. Using the results of Section \ref{22} similarly as before (but for one domain), we see that $T$ is closed. Since $\widetilde k_D$ is symmetric, it follows from Proposition \ref{13}(1) that the set $T$ is open in $[0,1]$ (first we move along $\alpha$, then by the symmetry we move along $\beta$). Therefore, $g_1$ is the $E$-mapping for $z,w$. In the case of $\kappa_{D}$ we first change the point and then the direction. To be more precise, consider the set $S$ of all $s\in[0,1]$ such that there is an $E$-mapping $h_{s}:\mathbb{D}\longrightarrow D$ such that $h_{s}(0)=\alpha(s)$. Then $0\in S$, by Proposition \ref{13}(1) the set $S$ is open in $[0,1]$ and, by the results of Section~\ref{22} again, it is closed. Hence $S=[0,1]$. Now we may join $h'_{1}(0)$ and $v$ with a curve $\gamma:[0,1]\longrightarrow \mathbb C^n$. Let us define $R$ as the set of all $r\in[0,1]$ such that there is an $E$-mapping $\widetilde h_{r}:\mathbb{D}\longrightarrow D$ such that $\widetilde h_{r}(0)=h_1(0)$, $\widetilde h'_{r}(0)=\sigma_{r}\gamma(1-r)$ for some $\sigma_r>0$. Then $1\in R$, by Proposition \ref{13}(2) the set $R$ is open in $[0,1]$ and, by Section \ref{22}, it is closed. Hence $R=[0,1]$, so $\widetilde h_{0}$ is the $E$-mapping for $z,v$. \end{proof} Now we are in a position to prove the main results of Lempert's paper.
\begin{proof}[Proof of Theorem \ref{lem-car} $($real analytic case$)$] It follows from Lemma \ref{lemat} that for any different points $z,w\in D$ (resp. $z\in D$, $v\in(\mathbb{C}^n)_*$) one may find an $E$-mapping passing through them (resp. an $E$-mapping $f$ with $f(0)=z$, $f'(0)=\lambda v$ for some $\lambda>0$). On the other hand, it follows from Proposition \ref{1} that $E$-mappings have left inverses, so they are complex geodesics. \end{proof} \begin{proof}[Proof of Theorem \ref{main} $($real analytic case$)$] This is a direct consequence of Lemma \ref{lemat} and Corollary \ref{28}. \end{proof} \begin{center}{\sc ${\mathcal C}^2$-smooth case}\end{center} \begin{lem}\label{un} Let $D\subset\mathbb C^n$, $n\geq 2$, be a bounded strongly pseudoconvex domain with $\mathcal C^2$-smooth boundary. Take $z\in D$ and let $r$ be a defining function of $D$ such that \begin{itemize}\item $r\in \mathcal C^2(\mathbb C^n);$ \item $D=\{x\in \mathbb C^n:r(x)<0\}$; \item $\mathbb C^n\setminus D=\{x\in \mathbb C^n:r(x)>0\}$; \item $|\nabla r|=1$ on $\partial D;$ \item $\sum_{j,k=1}^n\frac{\partial^2 r}{\partial z_j\partial\overline z_k}(a)X_{j}\overline{X}_{k}\geq C|X|^2$ for any $a\in \partial D$ and $X\in \mathbb C^n$ with some constant $C>0$. \end{itemize} Suppose that there is a sequence $\{r_m\}$ of $\mathcal C^2$-smooth real-valued functions such that $D^{\alpha}r_m$ converges to $D^{\alpha}r$ locally uniformly for any $\alpha\in \mathbb N_0^{2n}$ such that $|\alpha|:=\alpha_1+\ldots+\alpha_{2n}\leq 2$. Let $D_m$ be the connected component of the set $\{x\in\mathbb C^n:r_m(x)<0\}$ containing the point $z$. Then there is $c>0$ such that $(D_m,z)$ and $(D,z)$ belong to $\mathcal D(c)$ for $m\gg 1$. \end{lem} \begin{proof} Without loss of generality assume that $D\Subset\mathbb B_n$. Note that the conditions (1), (5), (6) of Definition \ref{30} are clearly satisfied.
To find $c$ satisfying $(2)$, we take $s>0$ such that $\mathcal H r (x;X)< s |X|^2$ for $x\in\overline{\mathbb{B}}_n$ and $X\in(\mathbb R^{2n})_*$. Then ${\mathcal H} r_m (x;X)<2s|X|^2$ for $x\in\overline{\mathbb{B}}_n$, $X\in(\mathbb R^{2n})_*$ and $m\gg 1$. Let $U_0\subset\mathbb B_n$ be an open neighborhood of $\partial D$ such that $|\nabla r|$ is between $3/4$ and $5/4$ on $U_0$. Note that $\partial D_m\subset U_0$ and $|\nabla r_m|\in (1/2, 3/2)$ on $U_0$ for $m\gg 1$. Fix $m$ and $a\in \partial D_m$ and put $b:=a-R\nu_{D_m}(a)$, where a small number $R>0$ will be specified later. There is $t>0$ such that $\nabla r_m(a)=2t(a-b)$. Note that $t$ may be arbitrarily large provided that $R$ is small enough. We take $t:=2s$ and $R:=|\nabla r_m(a)|/(2t)$. Then we have $\mathcal H r_m(x;X)<2t |X|^2$ for $x\in\overline{\mathbb{B}}_n$, $X\in(\mathbb R^{2n})_*$ and $m\gg 1$. Then the function $$h(x):=r_m(x)-t(|x-b|^2-R^2),\ x\in \mathbb C^n,$$ attains at $a$ its global maximum on $\overline{\mathbb{B}}_n$ ($a$ is a strong local maximum and the Hessian of $h$ is negative on the convex set $\overline{\mathbb{B}}_n$, cf. the proof of Lemma \ref{lemat}). Thus $h\leq 0$ on $\mathbb B_n$. From this we immediately get (2). Note that it follows from (2) that $D_m=\{x\in\mathbb C^n:r_m(x)<0\}$ for $m$ big enough (i.e. $\{x\in \mathbb C^n:\ r_m(x)<0\}$ is connected). Moreover, the condition (2) implies the condition (3) as follows. We infer from Remark~\ref{D(c),4} that there is $c'>0$ such that $D$ satisfies (3) with $c'$. Let $m_0$ be such that the Hausdorff distance between $\partial D$ and $\partial D_m$ is smaller than $1/c'$ for $m\geq m_0$. There is $c''$ such that $D_{m_0}$ satisfies (3) with $c''$. Without loss of generality we may assume that $c''<c'$. Take any $x,y\in D_m$. Since $D_m$ satisfies the interior ball condition with radius $c$, we infer that there are balls of radius $1/c$ contained in $D_m$ and containing $x$ and $y$ respectively.
The centers of these balls lie in $D_{m_0}$. Using the fact that $(D_{m_0},z)$ lies in $\mathcal D(c'')$, we may join the chosen centers with balls of radius $1/(2c'')$ as in the condition (3), so we have found a chain consisting of balls of radii $c'$ and $c''$ joining $x$ and $y$. Thus we may join $x$ and $y$ with balls contained entirely in the constructed chain whose radii depend only on $c'$ and $c''$. Now we prove $(4)$. We shall show that there is $c>c'$ such that every $D_m$ satisfies (4) with $c$ for $m$ big enough. To do it, let us cover $\partial D$ with a finite number of balls $B_j$, $j=1,\ldots,N$, from condition (4) and let $B'_j$ be a ball relatively contained in $B_j$ such that $\{B'_j\}$ covers $\partial D$, as well. Let $\Phi_j$ be the mappings corresponding to $B_j$. Let $\varepsilon$ be such that any ball of radius $\varepsilon$ intersecting $\partial D$ is relatively contained in $B_j'$ for some $j$. Observe that any ball $B$ of radius $\varepsilon/2$ intersecting $\partial D_m$ is contained in a ball of radius $\varepsilon$ intersecting $\partial D$; hence it is contained in $B_j'$ for some $j$. Then the pair $B$, $\Phi_j$ satisfies the conditions (4) (b), (c) and (d). Therefore, it suffices to check that there is $c>2/\varepsilon$ such that each pair $B_j'$, $\Phi_j$ satisfies the condition (4) for $D_m$ with $c$ ($m\gg 1$). This is possible since $\Phi_j(D_m)\subset\Phi_j(D)$, $D^\alpha\Phi_j(\partial D_m\cap B_j)$ converges to $D^\alpha\Phi_j(\partial D\cap B_j)$ for $|\alpha|\leq 2$ and for any $w\in\Phi_j(\partial D\cap B_j)$ there is a ball of radius $2/\varepsilon$ containing $\Phi_j(D)$ and tangent to $\partial\Phi_j(D)$ at $w$. To be precise, we proceed as follows. Let $a,b\in\mathbb{C}^n$ and let $x\in\partial B_n(a,\widetilde c)$, where $\widetilde c>c'$. Then the ball $B_n(2a-x,2\widetilde c)$ contains $B_n(a,\widetilde c)$ and is tangent to $B_n(a,\widetilde c)$ at $x$.
There is a number $\eta=\eta(\delta,\widetilde c)>0$, independent of $a,b,x$, such that the diameter of the set $B_n(b,\widetilde c)\setminus B_n(2a-x,2\widetilde c)$ is smaller than a given $\delta>0$ whenever $|a-b|<\eta$ (this is a simple consequence of the triangle inequality). Let $\widetilde s>0$ be such that $\mathcal H(r\circ\Phi_j^{-1})(x;X)\geq 2\widetilde s|X|^2$ for $x\in U_j$, $j=1,\ldots,N$, where $U_j$ is an open neighborhood of $\Phi_j(\partial D\cap B_j)$. Then, for $m$ big enough, $\mathcal H(r_m\circ \Phi_j^{-1})(x;X)\geq\widetilde s|X|^2$ for $x\in U_j$ and $\Phi_j(\partial D_m\cap B_j')\subset U_j$, $j=1,\ldots,N$. Repeating for the function $$x\longmapsto(r_m\circ\Phi_j^{-1})(x)-\widetilde t(|x-\widetilde b|^2-\widetilde R^2)$$ the argument used for the interior ball condition with suitably chosen $\widetilde t$ and uniform $\widetilde R>c$, we find that there is a uniform $\widetilde\varepsilon>0$ such that for any $j,m$ and $w\in\Phi_j(\partial D_m\cap B_j')$ there is a ball $B$ of radius $\widetilde R$, tangent to $\Phi_j(\partial D_m\cap B_j')$ at $w$, such that $\Phi_j(\partial D_m\cap B_j')\cap B_n(w,\widetilde\varepsilon)\subset B$. Let $a_{j,m}(w)$ denote its center. On the other hand, for any $w\in \Phi_j(\partial D_m\cap B_j')$ there is $t>0$ such that $w'=w+t\nu (w)\in \Phi_j(\partial D\cap B_j)$, where $\nu(w)$ is a normal vector to $\Phi_j(\partial D_m\cap B_j')$ at $w$. Let $a_j(w')$ be the center of a ball of radius $\widetilde R$ tangent to $\Phi_j(\partial D\cap B_j)$ at $w'$. It follows that $|a_{j,m}(w)-a_j(w')|<\eta(\widetilde\varepsilon/2,\widetilde R)$ provided that $m$ is big enough. Combining the facts presented above, we finish the proof of the exterior ball condition (with a radius depending only on $\widetilde\varepsilon$ and $\widetilde R$).
\end{proof} \begin{proof}[Proof of Theorems \ref{lem-car} and \ref{main} \emph{(}$\mathcal C^2$-smooth case$)$] Without loss of generality assume that $0\in D\Subset\mathbb{B}_n$. It follows from the Weierstrass Theorem that there is a sequence $\{P_k\}$ of real polynomials on $\mathbb{C}^n\simeq\mathbb R^{2n}$ such that $$D^{\alpha}P_{k}\to D^{\alpha}r \text{ uniformly on }\overline{\mathbb{B}}_n,$$ where $\alpha=(\alpha_1,\ldots, \alpha_{2n})\in \mathbb N_0^{2n}$ is such that $|\alpha|=\alpha_1+\ldots +\alpha_{2n}\leq 2$. Consider the open sets $$\widetilde D_{k,\varepsilon}:=\{x\in \mathbb C^n:P_{k}(x)+\varepsilon<0\}.$$ Let $\{\varepsilon_{m}\}$ be a sequence of positive numbers converging to $0$ such that $3\varepsilon_{m+1}<\varepsilon_m$. For any $m\in \mathbb N$ there is $k_{m}\in\mathbb{N}$ such that $\sup_{\overline{\mathbb{B}}_n}|P_{k_{m}}-r|<\varepsilon_{m}$. Putting $r_{m}:=P_{k_{m}}+2\varepsilon_{m}$, we get $r+\varepsilon_{m}<r_{m}<r+3\varepsilon_{m}$. In particular, $r_{m+1}<r_m$. Let $D_m$ be the connected component of $\widetilde D_{k_m,2\varepsilon_m}$ containing $0$. It is a bounded strongly linearly convex domain with real analytic boundary and $r_m$ is its defining function provided that $m$ is big enough. Moreover, $D_{m}\subset D_{m+1}$ and $\bigcup_m D_{m}=D$. Using properties of holomorphically invariant functions and metrics we get Theorem~\ref{lem-car}. We are left with showing the claim that for any different $z,w\in D$ (resp. $z\in D$, $v\in(\mathbb C^n)_*$) there is a weak $E$-mapping for $z,w$ (resp. for $z,v$). Fix $z\in D$ and $w\in D$ (resp. $v\in(\mathbb C^n)_*$). Then $z,w\in D_m$ (resp. $z\in D_m$) for $m\gg 1$. Therefore, for any $m\gg 1$ one may find an $E$-mapping $f_m$ of $D_m$ for $z,w$ (resp. for $z,v$). Since $(D_m,z)\in \mathcal D(c)$ for some uniform $c>0$ and $m\gg 1$ (Lemma~\ref{un}), we find that $f_m$, $\widetilde f_m$ and $\rho_m$ satisfy the uniform estimates from Section~\ref{22}.
Thus, passing to a subsequence we may assume that $\{f_m\}$ converges uniformly on $\overline{\DD}$ to a mapping $f\in{\mathcal O}(\mathbb{D})\cap{\mathcal C}^{1/2}(\overline{\DD})$ passing through $z,w$ (resp. such that $f(0)=z$, $f'(0)=\lambda v$, $\lambda>0$), $\{\widetilde f_m\}$ converges uniformly on $\overline{\DD}$ to a mapping $\widetilde f\in{\mathcal O}(\mathbb{D})\cap\mathcal C^{1/2}(\overline{\mathbb D})$ and $\{\rho_m\}$ converges uniformly on $\mathbb{T}$ to a positive function $\rho\in{\mathcal C}^{1/2}(\mathbb{T})$ (in particular, $f'\bullet\widetilde f=1$ on $\mathbb{D}$, so $\widetilde f$ has no zeroes in $\overline{\DD}$). We already know that this implies that $f$ is a weak $E$-mapping of $D$. To get ${\mathcal C}^{k-1-\varepsilon}$-smoothness of the extremal $f$ and its associated mappings for $k\geq 3$, it suffices to repeat the proof of Proposition~5 of \cite{Lem2}. This is just the Webster Lemma (we have proved it in the real analytic case --- see Proposition~\ref{6}). Namely, let $$\psi:\partial D\ni z\longmapsto(z,T_{D}^\mathbb{C}(z))\in \mathbb C^n\times(\mathbb P^{n-1})_*,$$ where $\mathbb P^{n-1}$ is the $(n-1)$-dimensional complex projective space. Let $\pi:(\mathbb{C}^n)_*\longrightarrow\mathbb P^{n-1}$ be the canonical projection. By \cite{Web}, $\psi(\partial D)$ is a totally real manifold of class $\mathcal C^{k-1}$. Observe that the mapping $(f,\pi\circ \widetilde f):\overline{\DD}\longrightarrow\mathbb{C}^n\times\mathbb P^{n-1}$ is $1/2$-H\"older continuous, holomorphic on $\mathbb D$, and maps $\mathbb T$ into $\psi(\partial D)$. Therefore, it is $\mathcal C^{k-1-\varepsilon}$-smooth for any $\varepsilon>0$, whence $f$ is $\mathcal C^{k-1-\varepsilon}$-smooth. Since $\nu_D\circ f$ is of class $\mathcal C^{k-1-\varepsilon}$, it suffices to proceed as in the proof of Proposition~\ref{6}.
\end{proof} \section{Appendix}\label{Appendix} \subsection{Totally real submanifolds} Let $M\subset\mathbb{C}^m$ be a totally real local $\mathcal{C}^{\omega}$ submanifold of real dimension $m$. Fix a point $z\in M$. There are neighborhoods $U_0\subset\mathbb{R}^m$, $V_0\subset\mathbb{C}^m$ of $0$ and $z$ respectively, and a $\mathcal{C}^{\omega}$ diffeomorphism $\widetilde{\Phi}:U_0\longrightarrow M\cap V_0$ such that $\widetilde{\Phi}(0)=z$. The mapping $\widetilde{\Phi}$ can be extended in a natural way to a mapping $\Phi$ holomorphic in a neighborhood of $0$ in $\mathbb{C}^m$. Note that this extension is biholomorphic in a neighborhood of $0$. Actually, we have $$\frac{\partial\Phi_j}{\partial z_k}(0)=\frac{\partial\Phi_j}{\partial x_k}(0)=\frac{\partial\widetilde{\Phi}_j}{\partial x_k}(0),\ j,k=1,\ldots,m,$$ where $x_k=\re z_k$. Suppose that the complex derivative $\Phi'(0)$ is not an isomorphism. Then there is $X\in(\mathbb{C}^m)_*$ such that $\Phi'(0)X=0$, so \begin{multline*}0=\sum_{k=1}^m\frac{\partial\Phi}{\partial z_k}(0)X_k=\sum_{k=1}^m\frac{\partial\widetilde\Phi}{\partial x_k}(0)(\re X_k+i\im X_k)=\\=\underbrace{\sum_{k=1}^m\frac{\partial\widetilde\Phi}{\partial x_k}(0)\re X_k}_{=:A}+i\underbrace{\sum_{k=1}^m\frac{\partial\widetilde\Phi}{\partial x_k}(0)\im X_k}_{=:B}.\end{multline*} The vectors $$\frac{\partial\widetilde\Phi}{\partial x_k}(0),\ k=1,\ldots,m,$$ form a basis of $T^{\mathbb{R}}_M(z)$, so $A,B\in T^{\mathbb{R}}_M(z)$; on the other hand, $A=-iB$ and $B=iA$, so $A,B\in iT^{\mathbb{R}}_M(z)$. Since $M$ is totally real, i.e. $T^{\mathbb{R}}_M(z)\cap iT^{\mathbb{R}}_M(z)=\{0\}$, we have $A=B=0$. Since these vectors form a basis, we get $\re X_k=\im X_k=0$, $k=1,\ldots,m$ --- a contradiction.
Therefore, $\Phi$ restricted to a neighborhood of $0$ is a biholomorphism of two open subsets of $\mathbb{C}^m$, which maps a neighborhood of $0$ in $\mathbb{R}^m$ onto a neighborhood of $z$ in $M$. \begin{lemm}[Reflection Principle]\label{reflection} Let $M\subset\mathbb{C}^m$ be a totally real local $\mathcal{C}^{\omega}$ submanifold of real dimension $m$. Let $V_0\subset\mathbb{C}$ be a neighborhood of $\zeta_0\in\mathbb{T}$ and let $g:\overline{\mathbb{D}}\cap V_0\longrightarrow\mathbb{C}^m$ be a continuous mapping. Suppose that $g\in{\mathcal O}(\mathbb{D}\cap V_0)$ and $g(\mathbb{T}\cap V_0)\subset M$. Then $g$ can be extended holomorphically past $\mathbb{T}\cap V_0$. \end{lemm} \begin{proof} By virtue of the identity principle it suffices to extend $g$ locally past an arbitrary point $\zeta_0\in\mathbb{T}\cap V_0$. For the point $g(\zeta_0)\in M$ take $\Phi$ as above. Let $V_1\subset V_0$ be a neighborhood of $\zeta_0$ such that $g(\overline{\DD}\cap V_1)$ is contained in the image of $\Phi$. The mapping $\Phi^{-1}\circ g$ is holomorphic in $\mathbb{D}\cap V_1$ and has real values on $\mathbb{T}\cap V_1$. By the ordinary Reflection Principle we can extend this mapping holomorphically past $\mathbb{T}\cap V_1$. Denote this extension by $h$. Then $\Phi\circ h$ is an extension of $g$ in a neighborhood of $\zeta_0$. \end{proof} \subsection{Schwarz Lemma for the unit ball} \begin{lemm}[Schwarz Lemma]\label{schw} Let $f\in{\mathcal O}(\mathbb{D},B_n(a,R))$ and $r:=|f(0)-a|$. Then $$|f'(0)|\leq \sqrt{R^2-r^2}.$$ \end{lemm} \subsection{Some estimates of holomorphic functions of ${\mathcal C}^{\alpha}$-class} Let us recall some theorems about functions holomorphic in $\mathbb{D}$ and continuous in $\overline{\DD}$. Concrete values of the constants $M,K$ can be computed by inspecting the proofs; what matters is only that they do not depend on the functions involved. \begin{tww}[Hardy, Littlewood, \cite{Gol}, Theorem 3, p. 
411]\label{lit1} Let $f\in{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$. Then for $\alpha\in(0,1]$ the following conditions are equivalent \begin{eqnarray}\label{47}\exists M>0:\ |f(e^{i\theta})-f(e^{i\theta'})|\leq M|\theta-\theta'|^{\alpha},\ \theta,\theta'\in\mathbb{R};\\ \label{45}\exists K>0:\ |f'(\zeta)|\leq K(1-|\zeta|)^{\alpha-1},\ \zeta\in\mathbb{D}. \end{eqnarray} Moreover, if $M$ satisfying \eqref{47} is given, then $K$ can be chosen as $$2^{\frac{1-3\alpha}{2}}\pi^\alpha M\int_0^\infty\frac{t^\alpha}{1+t^2}dt,$$ and if $K$ satisfying \eqref{45} is given, then $M$ can be chosen as $(2/\alpha+1)K$. \end{tww} \begin{tww}[Hardy, Littlewood, \cite{Gol}, Theorem 4, p. 413]\label{lit2} Let $f\in{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ be such that $$|f(e^{i\theta})-f(e^{i\theta'})|\leq M|\theta-\theta'|^{\alpha},\ \theta,\theta'\in\mathbb{R},$$ for some $\alpha\in(0,1]$ and $M>0$. Then $$|f(\zeta)-f(\zeta')|\leq K|\zeta-\zeta'|^{\alpha},\ \zeta,\zeta'\in\overline{\DD},$$ where $$K:=\max\left\{2^{1-2\alpha}\pi^\alpha M,2^{\frac{3-5\alpha}{2}}\pi^\alpha\alpha^{-1} M\int_0^\infty\frac{t^\alpha}{1+t^2}dt\right\}.$$ \end{tww} \begin{tww}[Privalov, \cite{Gol}, Theorem 5, p. 414]\label{priv} Let $f\in{\mathcal O}(\mathbb{D})$ be such that $\re f$ extends continuously on $\overline{\DD}$ and $$|\re f(e^{i\theta})-\re f(e^{i\theta'})|\leq M|\theta-\theta'|^\alpha,\ \theta,\theta'\in\mathbb{R},$$ for some $\alpha\in(0,1)$ and $M>0$. 
Then $f$ extends continuously on $\overline{\DD}$ and $$|f(\zeta)-f(\zeta')|\leq K|\zeta-\zeta'|^\alpha,\ \zeta,\zeta'\in\overline{\DD},$$ where $$K:=\max\left\{2^{1-2\alpha}\pi^\alpha,2^{\frac{3-5\alpha}{2}}\pi^\alpha\alpha^{-1}\int_0^\infty\frac{t^\alpha}{1+t^2}dt\right\}\left(\frac{2}{\alpha}+1\right)2^{\frac{3-3\alpha}{2}}\pi^{\alpha}M\int_0^\infty\frac{t^\alpha}{1+t^2}dt.$$ \end{tww} \subsection{Sobolev space} The Sobolev space $W^{2,2}(\mathbb{T})=W^{2,2}(\mathbb{T},\mathbb{C}^m)$ is the space of functions $f:\mathbb{T}\longrightarrow\mathbb{C}^m$ whose first two derivatives (in the sense of distributions) belong to $L^2(\mathbb{T})$ (here we use the standard identification of functions on the unit circle with functions on the interval $[0,2\pi]$). Any such $f$ is $\mathcal C^1$-smooth. It is a complex Hilbert space with the scalar product $$\langle f,g\rangle_W:=\langle f,g\rangle_{L}+\langle f',g'\rangle_{L}+\langle f'',g''\rangle_{L},$$ where $$\langle\widetilde f,\widetilde g\rangle_{L}:=\frac{1}{2\pi}\int_0^{2\pi}\langle\widetilde f(e^{it}),\widetilde g(e^{it})\rangle dt.$$ Let $\|\cdot\|_L$, $\|\cdot\|_W$ denote the norms induced by $\langle\cdot,\cdot\rangle_L$ and $\langle\cdot,\cdot\rangle_W$. The following characterization follows directly from Parseval's identity: $$W^{2,2}(\mathbb{T})=\left\{f\in L^2(\mathbb{T}):\sum_{k=-\infty}^{\infty}(1+k^2+k^4)|a_k|^2<\infty\right\},$$ where $a_k\in\mathbb{C}^m$ are the $m$-dimensional Fourier coefficients of $f$, i.e. $$f(\zeta)=\sum_{k=-\infty}^{\infty}a_k\zeta^k,\ \zeta\in\mathbb{T}.$$ More precisely, Parseval's identity gives $$\|f\|_W=\sqrt{\sum_{k=-\infty}^{\infty}(1+k^2+k^4)|a_k|^2},\ f\in W^{2,2}(\mathbb{T}).$$ Note that $W^{2,2}(\mathbb{T})\subset\mathcal{C}^{1/2}(\mathbb{T})\subset\mathcal{C}(\mathbb{T})$ and both inclusions are continuous (in particular, both inclusions are real analytic). 
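For the reader's convenience, the routine computation behind the last identity can be spelled out: applying Parseval's identity separately to $f$, $f'$ and $f''$ (differentiation multiplies the $k$-th Fourier coefficient by $ik$) gives

```latex
\|f\|_L^2=\sum_{k=-\infty}^{\infty}|a_k|^2,\qquad
\|f'\|_L^2=\sum_{k=-\infty}^{\infty}k^2|a_k|^2,\qquad
\|f''\|_L^2=\sum_{k=-\infty}^{\infty}k^4|a_k|^2,
```

and summing the three identities yields $\|f\|_W^2=\|f\|_L^2+\|f'\|_L^2+\|f''\|_L^2=\sum_{k=-\infty}^{\infty}(1+k^2+k^4)|a_k|^2$.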
Note also that \begin{equation}\label{67}\|f\|_{\sup}\leq\sum_{k=-\infty}^{\infty}|a_k|\leq\sqrt{\sum_{k=-\infty}^{\infty}\frac{1}{1+k^2}\sum_{k=-\infty}^{\infty}(1+k^2)|a_k|^2}\leq\frac{\pi}{\sqrt 3}\|f\|_W.\end{equation}\\ Now we want to show that there exists $C>0$ such that $$\|h^\alpha\|_W\leq C^{|\alpha|}\|h_1\|^{\alpha_1}_W\cdotp\ldots\cdotp\|h_{2n}\|^{\alpha_{2n}}_W,\quad h\in W^{2,2}(\mathbb{T},\mathbb{C}^n),\,\alpha\in\mathbb{N}_0^{2n}.$$ By induction it suffices to prove that there is $\widetilde C>0$ satisfying $$\|h_1h_2\|_W\leq\widetilde C\|h_1\|_W\|h_2\|_W,\quad h_1,h_2\in W^{2,2}(\mathbb{T},\mathbb{C}).$$ Using \eqref{67}, we estimate $$\|h_1h_2\|^2_W=\|h_1h_2\|^2_L+\|h_1'h_2+h_1h_2'\|^2_L+\|h_1''h_2+2h_1'h_2'+h_1h_2''\|^2_L\leq$$$$\leq C_1\|h_1h_2\|_{\sup}^2+(\|h_1'h_2\|_L+\|h_1h_2'\|_L)^2+(\|h_1''h_2\|_L+\|2h_1'h_2'\|_L+\|h_1h_2''\|_L)^2\leq$$\begin{multline*}\leq C_1\|h_1\|_{\sup}^2\|h_2\|_{\sup}^2+(C_2\|h_1'\|_L\|h_2\|_{\sup}+C_2\|h_1\|_{\sup}\|h_2'\|_L)^2+\\+(C_2\|h_1''\|_L\|h_2\|_{\sup}+C_2\|2h_1'h_2'\|_{\sup}+C_2\|h_1\|_{\sup}\|h_2''\|_L)^2\leq\end{multline*}\begin{multline*}\leq C_3\|h_1\|_W^2\|h_2\|_W^2+(C_4\|h_1\|_W\|h_2\|_W+C_4\|h_1\|_W\|h_2\|_W)^2+\\+(C_4\|h_1\|_W\|h_2\|_W+2C_2\|h_1'\|_{\sup}\|h_2'\|_{\sup}+C_4\|h_1\|_W\|h_2\|_W)^2\leq\end{multline*}$$\leq C_5\|h_1\|_W^2\|h_2\|_W^2+(2C_4\|h_1\|_W\|h_2\|_W+2C_2\|h_1'\|_{\sup}\|h_2'\|_{\sup})^2$$ with some constants $C_1,\ldots,C_5$. Expanding $h_j(\zeta)=\sum_{k=-\infty}^{\infty}a^{(j)}_k\zeta^{k}$, $\zeta\in\mathbb{T}$, $j=1,2$, we obtain $$\|h_j'\|_{\sup}\leq\sum_{k=-\infty}^{\infty}|k||a^{(j)}_k|\leq\sqrt{\sum_{k\in\mathbb{Z}_*}\frac{1}{k^2}\sum_{k\in\mathbb{Z}_*}k^4|a^{(j)}_k|^2}\leq\frac{\pi}{\sqrt 3}\|h_j\|_W$$ and finally $\|h_1h_2\|^2_W\leq C_6\|h_1\|_W^2\|h_2\|_W^2$ for some constant $C_6$. 
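As an illustration only (not part of the paper), the embedding estimate \eqref{67} can be checked numerically on a trigonometric polynomial; the function names and the sample coefficients below are ad hoc choices for the sketch:

```python
import math
import cmath

# Illustrative check of ||f||_sup <= (pi/sqrt(3)) ||f||_W for a
# trigonometric polynomial f(zeta) = sum_k a_k zeta^k on the circle.

def sobolev_norm(coeffs):
    # ||f||_W via Parseval: sqrt(sum (1 + k^2 + k^4) |a_k|^2).
    return math.sqrt(sum((1 + k**2 + k**4) * abs(a)**2
                         for k, a in coeffs.items()))

def sup_norm(coeffs, samples=2048):
    # Approximate ||f||_sup by sampling f on the unit circle.
    best = 0.0
    for j in range(samples):
        z = cmath.exp(2j * math.pi * j / samples)
        val = sum(a * z**k for k, a in coeffs.items())
        best = max(best, abs(val))
    return best

# A sample polynomial with coefficients a_{-2}, a_0, a_1, a_3.
coeffs = {-2: 0.3 + 0.1j, 0: 1.0, 1: -0.5j, 3: 0.25}
assert sup_norm(coeffs) <= (math.pi / math.sqrt(3)) * sobolev_norm(coeffs)
```

The same sampling approach can be reused to probe the multiplicativity constant $\widetilde C$ established above, though of course a finite check proves nothing by itself.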
\subsection{Matrices} \begin{propp}[Lempert, \cite{Lem2}, Th\'eor\`eme $B$]\label{12} Let $A:\mathbb{T}\longrightarrow\mathbb{C}^{n\times n}$ be a matrix-valued real analytic mapping such that $A(\zeta)$ is self-adjoint and strictly positive for any $\zeta\in\mathbb{T}$. Then there exists $H\in{\mathcal O}(\overline{\DD},\mathbb{C}^{(n-1)\times(n-1)})$ such that $\det H\neq 0$ on $\overline{\DD}$ and $HH^*=A$ on $\mathbb{T}$. \end{propp} In \cite{Lem2}, the mapping $H$ was claimed to be real analytic in a neighborhood of $\overline{\DD}$ and holomorphic in $\mathbb{D}$, but this is equivalent to $H\in{\mathcal O}(\overline{\DD})$. Indeed, since $\overline\partial H$ is real analytic near $\overline{\DD}$ and $\overline\partial H=0$ in $\mathbb{D}$, the identity principle for real analytic functions implies $\overline\partial H=0$ in a neighborhood of $\overline{\DD}$. \begin{propp}[\cite{Tad}, Lemma $2.1$]\label{59} Let $A$ be a complex symmetric $n\times n$ matrix. Then $$\|A\|=\sup\{|z^TAz|:z\in\mathbb{C}^n,\,|z|=1\}.$$ \end{propp} \textsc{Acknowledgements.} We would like to thank Sylwester Zaj\k ac for helpful discussions. We are also grateful to our friends for their participation in preparing parts of this work. \end{document}
\begin{document} \title{Limits of (certain) CAT(0) groups,\\I: Compactification} \author[Daniel Groves]{Daniel Groves} \address{Department of Mathematics, California Institute of Technology\\Pasadena, CA, 91125, USA} \email{[email protected]} \primaryclass{20F65} \secondaryclass{20F67, 20E08, 57M07} \keywords{CAT$(0)$ spaces, isolated flats, Limit groups, {\bf R}-trees} \asciikeywords{CAT(0) spaces, isolated flats, Limit groups, R-trees} \begin{abstract} The purpose of this paper is to investigate torsion-free groups which act properly and cocompactly on CAT$(0)$ metric spaces which have isolated flats, as defined by Hruska \cite{Hruska}. Our approach is to seek results analogous to those of Sela, Kharlampovich and Miasnikov for free groups and to those of Sela (and Rips and Sela) for torsion-free hyperbolic groups. This paper is the first in a series. In this paper we extract an $\mathbb R$-tree from an asymptotic cone of certain CAT$(0)$ spaces. This is analogous to a construction of Paulin, and allows a great deal of algebraic information to be inferred, most of which is left to future work. \end{abstract} \asciiabstract{ The purpose of this paper is to investigate torsion-free groups which act properly and cocompactly on CAT(0) metric spaces which have isolated flats, as defined by Hruska. Our approach is to seek results analogous to those of Sela, Kharlampovich and Miasnikov for free groups and to those of Sela (and Rips and Sela) for torsion-free hyperbolic groups. This paper is the first in a series. In this paper we extract an R-tree from an asymptotic cone of certain CAT(0) spaces. 
This is analogous to a construction of Paulin, and allows a great deal of algebraic information to be inferred, most of which is left to future work.} \maketitle \section{Introduction} Using the theory of isometric actions on $\mathbb R$-trees as a starting point, Sela has solved the isomorphism problem for hyperbolic groups (at least for torsion-free hyperbolic groups which do not admit a small essential action on an $\mathbb R$-tree \cite{SelaIso}, though he has a proof in the general torsion-free case), has proved that torsion-free hyperbolic groups are Hopfian \cite{SelaHopf}, and recently has classified those groups with the same elementary theory as a given torsion-free hyperbolic group \cite{Sela1, SelaVI, SelaHyp}. Kharlampovich and Miasnikov have a similar, but more combinatorial, approach to this last problem for free groups; see \cite{KM} and references contained therein.\footnote{Neither Sela's nor Kharlampovich and Miasnikov's work on the elementary theory of groups has entirely appeared in refereed journals.} \eject It seems that Sela's methods will not work for non-positively curved groups in general (whatever the phrase `non-positively curved group' means). For example, Wise \cite{Wise} constructed a group which acts properly and cocompactly on a CAT$(0)$ metric space, but is non-Hopfian. The class of groups acting properly and cocompactly on CAT$(0)$ spaces with the isolated flats condition is in many ways an intermediary between hyperbolic groups (which are the `negatively curved groups' in the context of discrete groups) and CAT$(0)$ groups. Sela \cite[Question I.8]{SelaProblems} asked whether such a group is Hopfian, and whether one can construct {\em Makanin-Razborov diagrams} for these groups. In the second paper in this series, \cite{CWIFShort}, we will provide a positive answer to these questions (under certain extra hypotheses, described below). The purpose of this paper is to develop tools for addressing these questions. 
The initial ingredient in many of Sela's arguments is a result of Paulin (\cite{Paulin3, Paulin}; see also \cite{Bestvina} and \cite{BS}; and see \cite{MS84} for work preceding Paulin's) which extracts an isometric action on an $\mathbb R$-tree from (certain) sequences of actions on $\delta$-hyperbolic spaces. Given two finitely generated groups $G$ and $\Gamma$, and a sequence of non-conjugate homomorphisms $\{ h_i \co G \to \Gamma \}$, it is straightforward to construct an action of $G$ on a certain asymptotic cone of $\Gamma$ with no global fixed point. If $\Gamma$ acts properly and cocompactly by isometries on a metric space $X$, then a $G$-action can be constructed on an asymptotic cone of $X$ (which is bi-Lipschitz homeomorphic, but not necessarily isometric, to the analogous asymptotic cone of $G$). For $\delta$-hyperbolic groups, this is in essence the above-mentioned result of Paulin. In the case of groups acting on CAT$(0)$ spaces, it is carried out by Kapovich and Leeb in \cite{KL}, but the general case is hardly more complicated. Of course, for a general finitely generated group $\Gamma$, the existence of an action of $G$ on an asymptotic cone of $\Gamma$ with no global fixed point provides little information about $G$ or $\Gamma$. In this paper we place certain restrictions on $\Gamma$ so that we can find a $G$-action on an $\mathbb R$-tree which provides much the same information as Paulin's result. We study the case where $\Gamma$ acts properly and cocompactly on a CAT$(0)$ metric space with isolated flats. We study the asymptotic cone of such a space. Under a further hypothesis (that the stabilisers of maximal flats are free abelian), we construct an $\mathbb R$-tree, which allows many of Sela's arguments to be carried out in this context (though we leave most such applications to subsequent work). 
The first application of this construction is the following:\eject {\bf Theorem \ref{SplittingTheorem}}\qua {\sl Suppose that $\Gamma$ is a torsion-free group acting properly and cocompactly on a CAT$(0)$ space $X$ which has isolated flats, so that flat stabilisers in $\Gamma$ are abelian. Suppose further that $\text{Out}(\Gamma)$ is infinite. Then $\Gamma$ admits a nontrivial splitting over a finitely generated free abelian group.} This partially answers a question of Swarup (see \cite[Q 2.1]{BestvinaQuestions}). However, Theorem \ref{SplittingTheorem} is only the first application. Our hope is that much of Sela's program for free groups and torsion-free hyperbolic groups can be carried out for groups $\Gamma$ as in the statement of Theorem \ref{SplittingTheorem}. In future work, we will consider the automorphism groups of such groups (in analogy with \cite{RipsSelaGAFA, SelaGAFA}), the Hopf property (in analogy with \cite{SelaHopf}) and Makanin-Razborov diagrams for these groups (in analogy with \cite{Sela1, SelaHyp}). The last of these involves finding a description of $\text{Hom}(G,\Gamma)$, where $G$ is an arbitrary finitely-generated group. A key argument in Sela's solution to all of these problems for torsion-free hyperbolic groups is the {\em shortening argument}, which we present for these CAT$(0)$ groups with isolated flats in \cite{CWIFShort}. The outline of this paper is as follows. In Section \ref{Prelim} we recall some basic definitions and results and prove some preliminary results about CAT$(0)$ spaces with isolated flats and groups acting properly and cocompactly on such spaces. In Section \ref{Limits}, we consider a torsion-free group $\Gamma$ which acts properly and cocompactly on a CAT$(0)$ metric space $X$ with isolated flats. 
Given a finitely generated group $G$ and a sequence of homomorphisms $\{ h_n \co G \to \Gamma \}$ no two of which differ only by an inner automorphism of $\Gamma$, it is straightforward to construct an action of $G$ on the asymptotic cone of $X$. A key feature of this action is that it has no global fixed point. This construction amounts to a compactification of a certain space of $G$-actions on $X$ (those actions which factor through a fixed homomorphism $q \co \Gamma \to \text{Isom}(X)$). In Section \ref{TreeSection}, we restrict to a torsion-free group $\Gamma$ which acts properly and cocompactly on a CAT$(0)$ space with isolated flats and has abelian flat stabilisers. Under this additional hypothesis, we are able to extract an isometric action of $G$ on an $\mathbb R$-tree $T$ with no global fixed point. The action of $G$ on $T$ largely encodes the same information from the homomorphisms $\{ h_n \}$ as Paulin's construction does in the case where $\Gamma$ is $\delta$-hyperbolic. See, in particular, Theorem \ref{LinfProps}, the main technical result of this paper. Finally, in Section \ref{conclusion} we discuss a few simple relations between our limiting objects, $\Gamma$-limit groups, and other definitions of $\Gamma$-limit groups, and prove Theorem \ref{SplittingTheorem}. I would like to thank Jason Manning for several conversations which illustrated my na\"ivet\'e, and in particular for pointing out an incorrect argument in a previous construction of the limiting tree $T$ in \S\ref{TreeSection}. \section{CAT$(0)$ metric spaces with isolated flats and isometric actions upon them} \label{Prelim} For the definition of $\mathbb R$-trees and the basic properties of their isometries, we refer the reader to \cite{Shalen1}, \cite{Shalen2}, \cite{Chiswell} and \cite{Bestvina2}. For this paper, we do not need much of this theory. 
For the definition and a multitude of results about CAT$(0)$ metric spaces, and isometric actions upon them, we refer the reader to \cite{BH}. We recall only a few basic properties and record our notation. Suppose that $X$ is a geodesic metric space. If $p,q,r \in X$, then $[p,q]$ denotes a geodesic between $p$ and $q$, and $\Delta (p,q,r)$ denotes the triangle consisting of the geodesics $[p,q],[q,r],[r,p]$. Geodesics (and hence geodesic triangles) need not be unique in geodesic metric spaces, but they are in CAT$(0)$ spaces. If $p,q,r \in X$ then $[p,q,r]$ denotes the path $[p,q] \cup [q,r]$. Expressions such as $[p,q,r,s]$ are defined similarly. If $\Gamma$ is a group acting properly and cocompactly by homeomorphisms on a connected simply-connected topological space then $\Gamma$ is finitely presented (see \cite[Theorem I.8.10, pp.135-137]{BH}). Obviously, if $\Gamma$ is torsion-free, then the action is free. Suppose now that $X$ is a CAT$(0)$ metric space and that $\Gamma$ acts properly and cocompactly by isometries on $X$. Then (see \cite[II.6.10.(2), p.233]{BH}) each element of $\Gamma$ acts either elliptically (fixing a point) or hyperbolically (there is an invariant axis upon which the element acts by translation). If also $\Gamma$ is torsion-free then all isometries are hyperbolic. Recall the following two results. \begin{lemma} \label{Convex} {\rm\cite[Proposition II.2.2, p. 176]{BH}}\qua Let $X$ be a CAT$(0)$ space. Given any pair of geodesics $c\co [0,1] \to X$ and $c' \co [0,1] \to X$ parametrised proportional to arc length, the following inequality holds for all $t \in [0,1]$: \[ d_X(c(t),c'(t)) \le (1-t)d_X(c(0),c'(0)) + t\,d_X(c(1),c'(1)) . \] \end{lemma} \begin{proposition} {\rm\cite[Proposition II.2.4, pp. 176--177]{BH}}\qua Let $X$ be a CAT$(0)$ space, and let $C$ be a convex subset which is complete in the induced metric. 
Then, \begin{enumerate} \item for every $x \in X$, there exists a unique point $\pi_C(x) \in C$ such that $d(x,\pi_C(x)) = d(x,C) := \inf_{y \in C}d(x,y)$; \item if $x'$ belongs to the geodesic segment $[x,\pi_C(x)]$, then $\pi_C(x') = \pi_C(x)$; \item the map $x \to \pi_C(x)$ is a retraction of $X$ onto $C$ which does not increase distances. \end{enumerate} \end{proposition} \subsection{CAT$(0)$ spaces with isolated flats and groups acting on them} \begin{definition} A {\em flat} in a CAT$(0)$ space $X$ is an isometric embedding of Euclidean space $\mathbb E^k$ into $X$ for some $k \ge 2$. \end{definition} Note that we do not consider a geodesic line to be a flat. \begin{definition} \cite[2.1.2]{Hruska} \label{CWIFDef} A CAT$(0)$ metric space $X$ has {\em isolated flats} if it contains a family $\mathcal F_X$ of flats with the following properties: \begin{enumerate} \item (Maximal)\qua There exists $B \ge 0$ such that every flat in $X$ is contained in a $B$-neighbourhood of some flat in $\mathcal F_X$; \item (Isolated)\qua There is a function $\phi \co \mathbb R_+ \to \mathbb R_+$ such that for every pair of distinct flats $E_1, E_2 \in \mathcal F_X$ and for every $k \ge 0$, the intersection of the $k$-neighbourhoods of $E_1$ and $E_2$ has diameter less than $\phi(k)$. \end{enumerate} \end{definition} This definition is due to C. Hruska \cite{Hruska}, but such an idea is implicit in Chapter 11 of \cite{E+}, and in the work of Wise \cite{Wise2} and of Kapovich and Leeb \cite{KL}. \begin{convention} \label{PhiConvention} To simplify constants in the sequel, we assume that $\phi(k) \ge k$ for all $k \ge 0$ and that $\phi$ is a nondecreasing function. We can certainly make these assumptions, and usually do so without comment. \end{convention} For the basic properties of CAT$(0)$ metric spaces with isolated flats, for examples of such spaces, and for some properties of isometric actions upon them, we refer the reader to \cite{Hruska}. 
Hruska also introduced the {\em relatively thin triangles} property: \begin{definition} \cite[3.1.1]{Hruska}\label{RelThinDef}\qua A geodesic triangle in a metric space $X$ is {\em $\delta$-thin relative to the flat $E$} if each side of the triangle lies in the $\delta$-neighbourhood of the union of $E$ and the other two sides of the triangle (see \figref{RelThinPic}). A metric space $X$ has the {\em relatively thin triangle property} if there is a constant $\delta$ so that each triangle in $X$ is either $\delta$-thin in the usual sense or $\delta$-thin relative to some flat in $\mathcal F_X$. \end{definition} \begin{figure} \caption{A triangle which is thin relative to the flat $E$} \label{RelThinPic} \end{figure} Using work of Dru\c{t}u and Sapir \cite{DS} on asymptotic cones of relatively hyperbolic groups, Hruska and Kleiner \cite{HruskaKleiner} have proved that if $X$ is a CAT$(0)$ space with isolated flats which admits a cocompact isometric group action then $X$ satisfies the relatively thin triangles condition. In this paper, the symbol `$\delta$' will always refer to the constant from Definition \ref{RelThinDef}. \begin{terminology} When we refer to a {\em CAT$(0)$ group with isolated flats} we mean a group which admits a proper, cocompact and isometric action on a CAT$(0)$ space with isolated flats. \end{terminology} We now consider some of the basic properties of CAT$(0)$ spaces with isolated flats, and groups acting properly, cocompactly and isometrically upon them, which are necessary in the sequel. \begin{proposition} {\rm\cite[2.1.4]{Hruska}}\label{FInv}\qua Suppose $X$ is a CAT$(0)$ space with isolated flats. The family $\mathcal F_X$ of flats in Definition \ref{CWIFDef} may be assumed to be invariant under all isometries of $X$. \end{proposition} \begin{lemma} {\rm\cite[2.1.9]{Hruska}}\label{Periodic}\qua Suppose that the CAT$(0)$ space $X$ has isolated flats and admits a proper and cocompact action by some group of isometries. 
Then any maximal flat in $X$ is periodic. \end{lemma} \begin{lemma} \label{UniqueFlat} Suppose that $X$ is a CAT$(0)$ space with isolated flats, and that $\Delta = \Delta(a,b,c)$ is a geodesic triangle in $X$. If $\Delta$ is not $(\delta + \frac{\phi(\delta)}{2})$-thin then $\Delta$ is $\delta$-thin relative to a {\em unique} flat $E \in \mathcal F_X$. \end{lemma} \begin{proof} Let $l_{a,b}$ be that part of the geodesic $[a,b]$ which lies outside of the $\delta$-neighbourhood of $[a,c] \cup [b,c]$, and define $l_{a,c}$ and $l_{b,c}$ similarly. Suppose that $\Delta$ is $\delta$-thin relative to $E, E' \in \mathcal F_X$, where $E \neq E'$. Then $l_{a,b}, l_{a,c}, l_{b,c}$ all lie in the $\delta$-neighbourhood both of $E$ and of $E'$. The intersection of these $\delta$-neighbourhoods has diameter at most $\phi(\delta)$. Therefore, the length of $l_{a,b}$ is at most $\phi(\delta)$ (since it is a geodesic). Thus, from any point on $l_{a,b}$, the distance to $[a,c] \cup [b,c]$ is at most $\delta + \frac{\phi(\delta)}{2}$. A symmetric argument for $l_{a,c}$ and $l_{b,c}$ finishes the proof. \end{proof} \subsection{Bieberbach groups and toral actions on CAT$(0)$ spaces with isolated flats} Given a proper and cocompact isometric action of a group $\Gamma$ on a CAT$(0)$ space $X$ with isolated flats, we are compelled to study the subgroups $\text{Stab}(E)$, where $E$ is a maximal flat in $X$.\footnote{By $\text{Stab}(E)$ we mean $\{ g \in \Gamma\ |\ g.E = E \}$. The point-wise stabiliser is $\text{Fix}(E) = \{ g \in \Gamma\ |\ g.x = x, \ \forall x \in E \}$.} By Lemma \ref{Periodic} we have a proper and cocompact action of the group $\text{Stab}_\Gamma(E)$ on $E \cong \mathbb E^n$. Recall the following celebrated result of Bieberbach \cite{Bieb1, Bieb2}. 
\footnote{An {\em $n$-dimensional crystallographic group} is a cocompact discrete group of isometries of $\mathbb E^n$.} \begin{theorem} [Bieberbach; see for example \cite{Thurston}, 4.2.2, p.222] $\phantom{99}$ \begin{enumerate} \item[\rm(a)] A group $\Gamma$ is isomorphic to a discrete group of isometries of $\mathbb E^n$, for some $n$, if and only if $\Gamma$ contains a subgroup of finite index that is free abelian of finite rank; \item[\rm(b)] An $n$-dimensional crystallographic group $\Gamma$ contains a normal subgroup of finite index that is free abelian of rank $n$ and equals its own centraliser. This subgroup is characterised as the unique maximal abelian subgroup of finite index in $\Gamma$, or as the translation subgroup of $\Gamma$. \end{enumerate} \end{theorem} The structure of the subgroups $\text{Stab}_\Gamma(E)$ will be important to us in the sequel. In particular, when there are such groups which are not free abelian the construction in Section \ref{TreeSection} does not work. Motivated by this consideration, we make the following \begin{definition} Suppose that $X$ is a CAT$(0)$ space with isolated flats and that a group $\Gamma$ acts properly and cocompactly by isometries on $X$. We say that the action of $\Gamma$ on $X$ is {\em toral} if for each maximal flat $E \subseteq X$, the subgroup $\text{Stab}(E) \le \Gamma$ is free abelian. We say that $\Gamma$ is a {\em toral} CAT$(0)$ group with isolated flats if there is a proper, cocompact and toral action of $\Gamma$ on a CAT$(0)$ space $X$ with isolated flats. \end{definition} \begin{remark} We observe in Lemma \ref{AllToral} below that if a torsion-free group $\Gamma$ admits a proper, cocompact and toral action on some CAT$(0)$ space $X$ with isolated flats then {\em any} proper and cocompact action of $\Gamma$ on a CAT$(0)$ space with isolated flats is toral. Thus the property of being toral belongs to the group rather than the given action on a CAT$(0)$ space with isolated flats. 
Also, Hruska and Kleiner have proved \cite{HruskaKleiner} that any CAT$(0)$ space $X$ on which a CAT$(0)$ group with isolated flats acts properly and cocompactly by isometries has isolated flats. \end{remark} \subsection{Basic algebraic properties of CAT$(0)$ groups with isolated flats} In this paragraph we consider a few basic algebraic properties of torsion-free CAT$(0)$ groups with isolated flats. \begin{definition} A subgroup $K$ of a group $G$ is said to be {\em malnormal} if for all $g \in G \smallsetminus K$ we have $gKg^{-1} \cap K = \{ 1 \}$. A group $G$ is said to be {\em CSA} if any maximal abelian subgroup of $G$ is malnormal. \end{definition} The following lemma is straightforward and certainly well known, but we record and prove it for later use. \begin{lemma} \label{SolAb} Suppose that $G$ is a CSA group. Then every soluble subgroup of $G$ is abelian. Also, every virtually abelian subgroup of $G$ is abelian. \end{lemma} \begin{proof} Suppose that $S$ is a nontrivial soluble subgroup of $G$. Let $S^{(i)}$ be the smallest nontrivial term of the derived series of $S$. Then $S^{(i)}$ is a normal abelian subgroup of $S$. However, it is an abelian subgroup of $G$, so is contained in a maximal abelian subgroup $A$. If $g \in S$, then $g$ normalises $S^{(i)}$, so $g \in A$, since $A$ is malnormal. Therefore, $S$ is contained in $A$ and $S$ is abelian. Any virtually abelian subgroup $H$ has a finite index normal abelian subgroup $A$. By the above argument, the normaliser of $A$ is abelian and contains $H$, so $H$ is abelian. \end{proof} \begin{proposition} \label{malnormal} Suppose that $\Gamma$ is a torsion-free group which admits a proper and cocompact action on a CAT$(0)$ space $X$ with isolated flats. Then the stabiliser in $\Gamma$ of any maximal flat in $X$ is malnormal. \end{proposition} \begin{proof} Let $\mathcal F_X$ be the collection of flats from Definition \ref{CWIFDef}, and let $E$ be a maximal flat in $X$. Consider $M = \text{Stab}(E)$. 
Without loss of generality, we may assume that $E \in \mathcal F_X$. Suppose that $g \in \Gamma$ is such that $gMg^{-1} \cap M \neq \{ 1 \}$. We prove that $g \in M$. There exist $a_1 , a_2 \in M \smallsetminus \{ 1 \}$ so that $ga_1g^{-1} = a_2$. Now, $ga_1g^{-1} = a_2$ leaves both $E$ and $gE$ invariant. Therefore, there is an axis for $ga_1g^{-1}$ in each of $E$ and $gE$, and there is a Euclidean strip, isometric to $[0,k] \times \mathbb R$ for some $k$, joining these axes. However, $E$ and $gE$ are both in $\mathcal F_X$ by Proposition \ref{FInv}, and we have seen that the $k$-neighbourhoods of $E$ and $gE$ intersect in an unbounded set, so we must have that $E = gE$, which is to say that $g \in M$. \end{proof} \begin{corollary} \label{CSA} Suppose that $\Gamma$ is a torsion-free toral CAT$(0)$ group with isolated flats. Then $\Gamma$ is CSA. \end{corollary} \begin{proof} Let $A$ be a maximal abelian subgroup of $\Gamma$ and let $X$ be a CAT$(0)$ space with isolated flats with a proper, cocompact and toral action of $\Gamma$. Suppose first that $A$ is noncyclic. Then $A$ stabilises some flat $E \in \mathcal F_X$, and hence some maximal flat (by the Isolated Flats condition). Since $A$ is maximal abelian, and the action of $\Gamma$ on $X$ is toral, $A = \text{Stab}(E)$. In this case the result follows from Proposition \ref{malnormal}. Suppose now that $A$ is a cyclic maximal abelian subgroup, and that for some $g \in \Gamma \smallsetminus A$ we have $gAg^{-1} \cap A \neq \{ 1 \}$. Let $A = \langle a \rangle$. Then $ga^pg^{-1} = a^q$ for some $p, q$. Since $A$ is maximal abelian, we do not have $p,q = 1$. However, $\Gamma$ is a CAT$(0)$ group, so $|p| = |q|$ (see \cite[Theorem III.$\Gamma$.1.1(iii)]{BH}). Thus $g^2$ commutes with $a^p$. Therefore, $\langle a^p \rangle$ is central in $G = \langle g^2, a^p \rangle$. By \cite[II.6.12]{BH}, there is a finite index subgroup $H$ of $G$ so that $H = \langle a^p \rangle \times H_1$, for some group $H_1$. 
If $H_1$ is infinite, then $\langle a^p \rangle$ is contained in a subgroup isomorphic to $\mathbb Z^2$. This $\mathbb Z^2$ stabilises a flat, and hence a maximal flat, so $\langle a^p \rangle$ is contained in $\text{Stab}(E)$ for some $E \in \mathcal F_X$. However, $a$ normalises $\langle a^p \rangle$, so by Proposition \ref{malnormal} $a \in \text{Stab}(E)$. This subgroup is abelian since the action of $\Gamma$ on $X$ is toral, which contradicts $A$ being maximal abelian. Therefore, $H_1$ is finite, and since $\Gamma$ is torsion-free, $H_1$ is trivial. Therefore, $G$ is virtually cyclic, and being torsion-free, is itself infinite cyclic. Hence $g^2$ commutes with $a$ and so $\langle g^2 \rangle$ is central in $G_1 = \langle g, a \rangle$. Exactly the same argument as above applied to $G_1$ and $\langle g^2 \rangle$ implies that $G_1$ is cyclic. Since $A$ is maximal abelian, $g \in A$, a contradiction to the choice of $g$. Thus $A$ is malnormal, as required. \end{proof} \begin{lemma} \label{AllToral} Suppose that $\Gamma$ is a torsion-free group which admits a proper, cocompact and toral action on a CAT$(0)$ space $X$ with isolated flats. Then any proper and cocompact action of $\Gamma$ on a CAT$(0)$ space with isolated flats is toral. If $\Gamma$ is a torsion-free CAT$(0)$ group with isolated flats then $\Gamma$ is toral if and only if $\Gamma$ is CSA. \end{lemma} \begin{proof} Let $\Gamma$ act properly and cocompactly on a CAT$(0)$ space $Y$ with isolated flats, and let $M$ be the stabiliser of a maximal flat $E \in \mathcal F_Y$. Since $\Gamma$ admits a proper, cocompact and toral action on a CAT$(0)$ space $X$ with isolated flats, by Corollary \ref{CSA} any maximal abelian subgroup of $\Gamma$ is malnormal, and so the normaliser of any abelian group is abelian. Since $M$ is a Bieberbach group, it has a normal abelian subgroup $A$ of finite index. However, by the above, the normaliser of $A$ is abelian and it certainly contains $M$, so $M$ is abelian. 
Therefore the action of $\Gamma$ on $Y$ is toral. This proves the first claim of the lemma. The second claim follows from the proof of the first and Corollary \ref{CSA}. \end{proof} \begin{definition} A group $G$ is said to be {\em commutative transitive} if for all $u_1,u_2,u_3 \in G \smallsetminus \{ 1 \}$, whenever $[u_1,u_2] = 1$ and $[u_2,u_3] = 1$ we necessarily have $[u_1,u_3] =1$. \end{definition} CSA groups are certainly commutative transitive, so we have \begin{corollary} \label{CommTrans} Suppose $\Gamma$ is a torsion-free toral CAT$(0)$ group with isolated flats. Then $\Gamma$ is commutative transitive. Hence every abelian subgroup in $\Gamma$ is contained in a unique maximal abelian subgroup. \end{corollary} \subsection{Projecting to flats} \label{ProjectSection} Fix $X$, a CAT$(0)$ space with isolated flats. Let $\delta$ be the constant from Definition \ref{RelThinDef} and $\phi$ the function from Definition \ref{CWIFDef}. We study the closest-point projection from $X$ onto a flat $E \subset X$. \begin{lemma} \label{Triangle} Suppose that $E \in \mathcal F_X$. Suppose that $x,y \in E$ and $z \in X$. There exist $u \in [x,z]$ and $v \in [y,z]$ so that $u$ and $v$ lie in the $2\delta$-neighbourhood of $E$, and \[ d_X(u,v) \le \phi(\delta) . \] \end{lemma} \begin{proof} If $z$ lies in the $2\delta$-neighbourhood of $E$ then the result is immediate, so we assume that this is not the case. The key (though trivial) observation is that $[x,y]$ lies entirely within $E$. Let $u_1$ be the point on $[x,z]$ which is furthest from $x$ in the $\delta$-neighbourhood of $E$. The convexity of $E$ and the convexity of the metric on $X$ ensure that $u_1$ is unique. We consider the triangle $\Delta = \Delta(x,y,z)$.
If $\Delta$ is $\delta$-thin, then there is clearly a point $v_1$ on $[y,z]$ within $\delta$ of $u_1$, and we take $u = u_1, v = v_1$ (if $u_1$ were not $\delta$-close to $[y,z]$, then neither would be nearby points on $[x,z]$; but there are points arbitrarily close to $u_1$ on $[x,z]$ which are not $\delta$-close to $E$, hence not $\delta$-close to $[x,y] \subset E$, and this would contradict $\Delta$ being $\delta$-thin). Thus suppose that $\Delta$ is not $\delta$-thin, so that $\Delta$ is $\delta$-thin relative to a flat $E'$. If $E' = E$, then we have a point $v_1 \in [y,z]$ which is within $\delta$ of $u_1$, by the same reasoning as above. Again, we take $u = u_1, v = v_1$. Suppose then that $E' \ne E$. Either $u_1$ is $\delta$-close to $[y,z]$, in which case we proceed as above, or $u_1$ is $\delta$-close to $E'$. In this case define $v_2$ to be the point on $[y,z]$ which is furthest from $y$ but in the $\delta$-neighbourhood of $E$. Again, $v_2$ is either $\delta$-close to $[x,z]$ or $\delta$-close to $E'$. In the first situation, we proceed as above, with $v = v_2$ and $u$ a point on $[x,z]$ which is within $\delta$ of $v_2$. In the second situation, both $u_1$ and $v_2$ are within $\delta$ of $E$ and of $E'$, and the intersection of the $\delta$-neighbourhoods of $E$ and $E'$ has diameter less than $\phi(\delta)$. Thus in this case, $d_X(u_1,v_2) < \phi(\delta)$ and we may take $u = u_1, v = v_2$. We have proved that there exist $u$ and $v$ in the $2\delta$-neighbourhood of $E$ so that $d_X(u,v) \le \max \{ \delta , \phi(\delta) \}$. However, $\delta \le \phi(\delta)$ by Convention \ref{PhiConvention}, so the proof is complete. \end{proof} \begin{proposition} \label{Projection} Suppose that $E \in \mathcal F_X$ is a flat and that $x,y \in X$ are such that $[x,y]$ does not intersect the $4\delta$-neighbourhood of $E$. Let $\pi \co X \to E$ be the closest-point projection map. Then $d_X(\pi(x),\pi(y)) \le 2\phi(3\delta)$.
\end{proposition} \begin{proof} By Lemma \ref{Triangle}, there are points $w_1 \in [\pi(x),y]$ and $w_2 \in [\pi(y),y]$, both in the $2\delta$-neighbourhood of $E$, so that $d_X(w_1,w_2) \le \phi(\delta)$. Now consider the triangle $\Delta' = \Delta(\pi(x),x,y)$. By a similar argument to the proof of Lemma \ref{Triangle}, we find points $u_1 \in [\pi(x),x]$ and $u_2 \in [\pi(x),y]$ which lie outside the $2\delta$-neighbourhood of $E$ such that $d_X(u_1,u_2) \le \max \{ \delta, \phi(3\delta) \} \le \phi(3\delta)$. Indeed, let $v_1$ be the point on $[\pi(x),x]$ furthest from $\pi(x)$ which lies in the $3\delta$-neighbourhood of $E$, and let $v_2$ be the point on $[\pi(x),y]$ furthest from $\pi(x)$ which lies in the $3\delta$-neighbourhood of $E$. If $\Delta'$ is $\delta$-thin then there is a point $u_2$ on $[\pi(x),y]$ within $\delta$ of $v_1$. We may then take $u_1 = v_1$. Similarly, if $\Delta'$ is $\delta$-thin relative to $E$ then once again there must be such a point $u_2$. Therefore, suppose that $\Delta'$ is $\delta$-thin relative to $E' \ne E$. Then $v_1$ does not lie within $\delta$ of $[x,y]$ since $[x,y]$ does not intersect the $4\delta$-neighbourhood of $E$. Therefore, either $v_1$ lies within $\delta$ of $E'$ or within $\delta$ of $[\pi(x),y]$. The second case is unproblematic as usual. Also, $v_2$ either lies within $\delta$ of $E'$ or within $\delta$ of $[\pi(x),x]$, and in this second case we proceed as usual. So suppose that $v_1$ and $v_2$ both lie within $\delta$ of $E'$. Then they both lie within the $3\delta$-neighbourhoods of $E$ and $E'$ and so $d_X(v_1,v_2) \le \phi(3\delta)$. Now, $u_2$ is closer to $y$ along $[\pi(x),y]$ than $w_1$, since $u_2$ lies outside the $2\delta$-neighbourhood of $E$, and $w_1$ lies within. Hence, the convexity of the metric in $X$ ensures that there is a point $u_3 \in [\pi(y),y]$ so that $d_X(u_2,u_3) \le d_X(w_1,w_2)$. Now, $u_1 \in [\pi(x),x]$ so $\pi(u_1) = \pi(x)$, and similarly $\pi(u_3) = \pi(y)$.
Therefore, since the closest-point projection $\pi$ does not increase distances, \begin{eqnarray*} d_X(\pi(x),\pi(y)) & = & d_X(\pi(u_1),\pi(u_3))\\ & \le & d_X(u_1,u_3)\\ & \le & d_X(u_1,u_2) + d_X(u_2,u_3)\\ & \le & \phi(3\delta) + d_X(w_1,w_2) \\ & \le & \phi(3\delta) + \phi(\delta) \\ & \le & 2\phi(3\delta), \end{eqnarray*} as required. \end{proof} \section{Asymptotic cones of CAT$(0)$ spaces with isolated flats} \label{Limits} In this section, we construct a limiting action from a sequence of homomorphisms from a fixed finitely generated group $G$ to $\Gamma$, a CAT$(0)$ group with isolated flats. The action is of $G$ on the asymptotic cone of $X$, where $X$ is the CAT$(0)$ space with isolated flats upon which $\Gamma$ acts properly and cocompactly. Asymptotic cones of CAT$(0)$ spaces have been studied in \cite{KL} and we use or adapt many of their results. We note that one of the results of this section is that the asymptotic cone of $X$ is a {\em tree-graded metric space}, in the terminology of \cite{DS}. This follows from \cite{HruskaKleiner} and \cite{DS}. This paper was written before \cite{HruskaKleiner} or \cite{DS} appeared publicly, and we need more results from this section than follow directly from either \cite{DS} or \cite{HruskaKleiner}. Thus, we prefer to leave this section unchanged, rather than referring to \cite{DS} or \cite{HruskaKleiner} for some of the results herein. \begin{remark} The construction below could be carried out in a similar way to those found in \cite{Paulin3, Paulin} (see also \cite{Bestvina} and \cite{BS}) using the equivariant Gromov topology on `approximate convex hulls' of finite orbits of a basepoint $x$ under the various actions of $G$ on $X$. For the sake of brevity, we use asymptotic cones. Having done so, we use Lemma \ref{GromovTop} below to pass back to the context of the equivariant Gromov topology.
\end{remark} \subsection{Constructing the asymptotic cone} Suppose that $X$ is a CAT$(0)$ space with isolated flats, and $\Gamma \to \text{Isom}(X)$ is a proper, cocompact and isometric action of $\Gamma$ on $X$. Let $G$ be a finitely generated group, and suppose that $\{ h_n \co G \to \Gamma \}$ is a sequence of nontrivial homomorphisms. A homomorphism $h \co G \to \Gamma$ gives rise to a proper isometric action of $G$ on $X$: \[ \lambda_{h} \co G \times X \to X , \] given by $\lambda_{h} = \iota \circ h$, where $\iota\co \Gamma \to \text{Isom}(X)$ is the fixed homomorphism given by the action of $\Gamma$ on $X$. Because the action of $\Gamma$ on $X$ is proper and cocompact, we have the following: \begin{lemma} For any $y \in X$, $j \ge 1$ and $g \in G$, the function $\iota_{g,j,y} \co \Gamma \to \mathbb R$ defined by \[ \iota_{g,j,y}(\gamma) = d_X \left( \gamma . y, \lambda_j ( g , \gamma . y ) \right) , \] achieves its infimum for some $\gamma' \in \Gamma$. \end{lemma} Let $\mathcal A$ be a finite generating set for $G$ and let $x \in X$ be arbitrary. For a homomorphism $h \co G \to \Gamma$ define $\mu_h$ and $\gamma_h \in \Gamma$ so that \begin{eqnarray*} \mu_h & = & \max_{g \in \mathcal A} d_X \left( \gamma_h . x , \lambda_h (g, \gamma_h . x) \right) \\ & = & \min_{\gamma \in \Gamma} \max_{g \in \mathcal A} d_X \left( \gamma . x , \lambda_h (g, \gamma . x) \right) . \end{eqnarray*} For the chosen sequence of homomorphisms $h_n \co G \to \Gamma$, we write $\lambda_n$ instead of $\lambda_{h_n}$, $\mu_i$ instead of $\mu_{h_i}$ and $\gamma_i$ instead of $\gamma_{h_i}$. Now define the pointed metric spaces $(X_n,x_n)$ to be the set $X$ with basepoint $x_n = x$, with the metric $d_{X_n} = \frac{1}{\mu_n}d_X$. Since there is a natural identification between $\text{Isom}(X)$ and $\text{Isom}(X_i)$, we consider $\lambda_i$ to give an action of $G$ on $X_i$, as well as on $X$. The next lemma follows from the fact that $\Gamma .
x$ is discrete, and that $G$ is finitely generated. \begin{lemma} \label{mujGrows} Suppose that for all $j \ne i$ there is no element $\gamma \in \Gamma$ so that $h_i = \tau_{\gamma} \circ h_j$ where $\tau_\gamma$ is the inner automorphism of $\Gamma$ induced by $\gamma$. Then the sequence $\{ \mu_j \}$ does not contain a bounded subsequence. \end{lemma} We use the homomorphism $h_n$ and the translation minimising element $\gamma_n$ to define an isometric action $\hat{\lambda}_n \co G \times X_n \to X_n$ by defining \[ \hat{\lambda}_n(g,y) = \left( \gamma_n^{-1} h_n(g) \gamma_n \right) . y . \] \begin{convention} For the remainder of the paper, we assume that the homomorphisms $h_n$ were chosen so that $\gamma_n = 1$ for all $n$. Therefore, $\hat{\lambda}_n (g,x) = \lambda_n(g,x) = h_n(g) . x$ for all $n \ge 1$, $g \in G$ and $x \in X$. \end{convention} Using the spaces $(X_n,x_n)$ and the actions $\lambda_n$ of $G$ on $X_n$, we construct an action of $G$ on the asymptotic cone of $X$, with respect to the basepoints $x_n = x$, scalars $\mu_n$ and an arbitrary non-principal ultrafilter $\omega$. We briefly recall the definition of asymptotic cones. For more details, see \cite{VW} and \cite{DS}, or \cite{KL} in the context of CAT$(0)$ spaces. \begin{definition} A {\em non-principal ultra-filter}, $\omega$, is a $\{ 0, 1 \}$-valued finitely additive measure on $\mathbb N$ defined on all subsets of $\mathbb N$ so that any finite set has measure $0$. \end{definition} The existence of non-principal ultrafilters is guaranteed by Zorn's Lemma. Fix once and for all a non-principal ultrafilter $\omega$.\footnote{The choice of ultrafilter will affect the resulting construction, but will not affect our results. Thus we are unconcerned which ultrafilter is chosen.} Given any bounded sequence $\{ a_n \} \subset \mathbb R$ there is a unique number $a \in \mathbb R$ so that, for all $\epsilon > 0$, $\omega \left( \{ n\ |\ |a - a_n| < \epsilon \} \right) = 1$.
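For example, if $a_n = (-1)^n$ then, since $\omega$ is finitely additive and $\omega(\mathbb N) = 1$, exactly one of the set of even indices and the set of odd indices has $\omega$-measure $1$, and accordingly \[ a = \begin{cases} \phantom{-}1 & \text{if the even indices have $\omega$-measure } 1 , \\ -1 & \text{if the odd indices have $\omega$-measure } 1 . \end{cases} \] In particular $a$ may depend on the choice of $\omega$, although for a convergent sequence it is always the ordinary limit.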
We denote $a$ by $\omega$-$\lim \{ a_n \}$. This notion of limit exhibits most of the properties of the usual limit (see \cite{VW}). The {\em asymptotic cone of $X$ with respect to $\{ x_n \}$, $\{ \mu_n \}$ and $\omega$}, denoted $X_\omega$, is defined as follows. First, define the set $\tilde{X}_\omega$ to consist of all sequences $\{ y_n \ | \ y_n \in X_n\}$ for which $\{ d_{X_n}(x_n,y_n) \}$ is a bounded sequence. Define a pseudo-metric $\tilde{d}$ on $\tilde{X}_\omega$ by \[ \tilde{d} (\{y_n\}, \{ z_n \}) = \mbox{$\omega$-$\lim$ } d_{X_n}(y_n,z_n). \] The asymptotic cone $X_\omega$ is defined to be the metric space induced by the pseudo-metric $\tilde{d}$ on $\tilde{X}_\omega$: \[ X_\omega := \tilde{X}_\omega/ \sim , \] where the equivalence relation `$\sim$' on $\tilde{X}_\omega$ is defined by: $x \sim y$ if and only if $\tilde{d}(x,y) = 0$. The pseudo-metric $\tilde{d}$ on $\tilde{X}_\omega$ naturally descends to a metric $d_\omega$ on $X_\omega$. \begin{lemma} [see \cite{VW} and \cite{KL}, Proposition 3.6] $(X_\omega, d_\omega)$ is a complete, geodesic CAT$(0)$ space. \end{lemma} We now define an isometric action of $G$ on $X_\omega$. Let $g \in G$ and $\{ y_n \} \in \tilde{X}_\omega$. Then define $g.\{ y_n \}$ to be $\{ {\lambda}_n(g, y_n) \} \in \tilde{X}_\omega$. This descends to an isometric action of $G$ on $X_\omega$. \begin{remark} The action of $G$ defined on the asymptotic cone $X_\omega$ is slightly different to the one described in \cite[\S 3.4]{KL}, but the salient features remain the same.\footnote{The difference comes in the choice of scalars $\mu_n$ and the choice of basepoints $x_n$.} \end{remark} We assume a familiarity with asymptotic cones, but we essentially use only two properties. The first is that finite sets in $\mathbb N$ have $\omega$-measure $0$. The second property is the following: \begin{lemma} Suppose that $X_\omega$ is constructed using the sequence $\{ h_n \co G \to \Gamma \}$ as above.
Suppose that $Q \subset G$ is finite and $S \subset X_\omega$ is finite. For each $s \in S$, let $\{ s_n \}$ be a sequence of elements from $X_n$ such that $\{ s_n \} \in \tilde{X}_\omega$ is a representative of the equivalence class $s$. Fix $\epsilon > 0$ and define $I_{\epsilon, Q, S}$ to be the set of $i \in \mathbb N$ so that for all $q_1, q_2 \in Q \cup \{ 1 \}$ and all $s, s' \in S$ we have \[ |d_{X_i}(\hat{\lambda}_i(q_1 , s_i),\hat{\lambda}_i(q_2 , s'_i)) - d_{X_\omega}(q_1 . s, q_2 . s')| < \epsilon. \] Then $\omega (I_{\epsilon,Q,S}) = 1$. \end{lemma} Given finite subsets $Q$ of $G$ and $S$ of $X_\omega$ and $\epsilon > 0$ as above, if $i \in I_{\epsilon,Q,S}$ then the pair $(X_i,\lambda_i)$ is called an {\em $\epsilon$-approximation} for $Q$ and $S$. \subsection{Properties of $X_\omega$} \begin{lemma} \label{NoXinfFixedPt} The action of $G$ on $X_{\omega}$ by isometries does not have a global fixed point. \end{lemma} \begin{proof} Let $K \subseteq X$ be a compact set so that the basepoint $x$ is in $K$ and $\Gamma . K = X$. Let $D = \text{Diam}(K)$. Suppose that $y \in X_\omega$ is fixed by all elements of $G$. Choose a large $i$ so that (i) $\mu_i > 4D$; and (ii) $(X_i,\lambda_i)$ is a $\frac{1}{2}$-approximation for $\mathcal A$ and $\{ y \}$. (Recall that $\mathcal A$ is the fixed finite generating set for $G$.) Thus, if $\{ y_n \}$ represents $y$ then for all $g \in \mathcal A$ \[ d_{X_i}(y_i,{\lambda}_i(g, y_i)) < \frac{1}{2}. \] This implies that, for all $g \in \mathcal A$, \[ d_X(y_i, h_i(g) . y_i) < \frac{\mu_i}{2}. \] Now, there exists $\gamma \in \Gamma$ so that $d_X(x,\gamma . y_i) \le D$. Let $g \in \mathcal A$ be an element which realises the maximum $\max_{g' \in \mathcal A} d_X ( x, (\gamma h_i(g') \gamma^{-1}) . x )$. Since \[ \mu_i = \min_{\overline{\gamma} \in \Gamma} \max_{g' \in \mathcal A} d_X ( x, (\overline{\gamma} h_i(g') \overline{\gamma}^{-1}) . x ) , \] we have \begin{eqnarray*} \mu_i & \le & d_X(x,(\gamma h_i(g) \gamma^{-1}) . x) \\ & \le & d_X(x,\gamma . y_i) + d_X( \gamma . y_i, (\gamma h_i(g) \gamma^{-1}) \gamma .
y_i) \\ && + d_X((\gamma h_i(g) \gamma^{-1}) \gamma . y_i, (\gamma h_i(g) \gamma^{-1}) . x) \\ & = & 2d_X(x,\gamma . y_i) + d_X( y_i, h_i(g) . y_i)\\ & < & 2D + \frac{\mu_i}{2}. \end{eqnarray*} Since $\mu_i > 4D$ this is a contradiction. Therefore there is no global fixed point for the action of $G$ on $X_\omega$. \end{proof} We now prove some results about $X_\omega$ which are very similar to those obtained in \cite{KL} in the context of asymptotic cones of certain $3$-manifolds. \begin{definition} (See \cite{KL}, $\S$2-2)\qua Let $X$ be a CAT$(0)$ space and $x,y,z \in X$. Define $x',y',z'$ by $[x,x'] = [x,y] \cap [x,z]$, $[y,y'] = [y,z] \cap [y,x]$ and $[z,z'] = [z,x] \cap [z,y]$. The triangle $\Delta (x',y',z')$ is called the {\em open triangle} spanned by $x,y,z$. The triangle $\Delta (x,y,z)$ is called {\em open} if $x=x'$, $y=y'$ and $z= z'$. An open triangle $\Delta (x',y',z')$ is {\em non-degenerate} if the three points $x', y', z'$ are distinct. \end{definition} Let $\mathcal F_X$ be the set of flats in $X$ from Definition \ref{CWIFDef}. Let $\mathcal F_n$ be the set $\mathcal F_X$ considered as subsets of $X_n$. Denote by $\mathcal F_\omega$ the set of all flats in $X_\omega$ which arise as limits of flats $\{ E_i \}_{i \in \mathbb N}$ where $E_{i} \in \mathcal F_{i}$. \begin{proposition} [See \cite{KL}, Proposition 4.3] \label{XinfProps} The space $X_\omega$ satisfies the following two properties: \begin{enumerate} \item[\rm(F1)] Every non-degenerate open triangle in $X_\omega$ is contained in a flat $E \in \mathcal F_\omega$; and \item[\rm(F2)] Any two flats in $\mathcal F_\omega$ intersect in at most a point. \end{enumerate} \end{proposition} \begin{proof} Let $\Delta = \Delta(x,y,z)$ be an open triangle in $X_\omega$. Then $\Delta$ can be obtained as a limit of triangles $\Delta_{i}$, where $\Delta_{i} = \Delta (x_{i},y_{i},z_{i})$ is a triangle in $X_{i}$.
The triangle $\Delta_{i}$ may be identified with a triangle $\Delta'_{i}$ in $X$ (since $X$ and $X_n$ are the same set with different metrics). For $\omega$-almost all $i$, the triangle $\Delta'_{i}$ is not $\delta$-thin, for otherwise the limit would not be a non-degenerate open triangle. Therefore, $\Delta'_{i}$ is $\delta$-thin relative to some flat $E_{i} \in \mathcal F_X$. Consider a point $w \in [x,y] \smallsetminus \{ x,y \}$. The point $w \in X_\omega$ corresponds to a sequence of points $\{ w_{i} \}$. Now $d_{\omega}(w,[y,z]) > 0$ and $d_{\omega}(w,[x,z]) > 0$, so for $\omega$-almost all $i$ the point $w_{i}$ is not contained in the $\delta$-neighbourhood of $[y_{i},z_{i}]$ or the $\delta$-neighbourhood of $[x_{i},z_{i}]$. Therefore, for $\omega$-almost all $i$ the point $w_{i}$ is contained in the $\delta$-neighbourhood of $E_{i}$. Let $u_{i}$ be a point in $E_{i}$ within $\delta$ of $w_{i}$. It is clear that the sequences $\{ w_{i} \}$ and $\{ u_{i} \}$ have the same limit, namely $w$ (although $u_i$ is only defined for $\omega$-almost all $i$). Therefore, $w$ is contained in the limit of the flats $\{ E_{i} \}$. This proves Property (F1). Now suppose that the flats $E, E' \in \mathcal F_\omega$ intersect in more than one point. Let $x, y \in E \cap E'$ be distinct. Since $E \in \mathcal F_\omega$, there is a sequence of flats $\{ E_{i} \}$ which approximate $E$. Let $u, w \in [x,y] \smallsetminus \{ x,y \}$ be arbitrary ($u \neq w$) and let $\{ x_i \}, \{ y_i \}, \{ u_i \}, \{ w_i \} \subseteq E_{i}$ be sequences of points representing $x$, $y$, $u$ and $w$, respectively. Let $z \in E'$ be arbitrary so that $\Delta (x,y,z)$ is a non-degenerate triangle, and let $\{ z_i \in X_{i} \}$ be a sequence of points representing $z$. Since the triangle $\Delta (x,y,z)$ is an open triangle, there is a sequence of flats $\{ E'_{i} \}$ whose limit contains $\Delta (x,y,z)$.
For $\omega$-almost all $i$, neither $u_i$ nor $w_i$ is contained in the $\delta$-neighbourhood of $[x_i,z_i] \cup [y_i,z_i]$, and so both are contained in the $\delta$-neighbourhood of $E'_{i}$. Therefore the points $u_i$ and $w_i$ are each contained in the $\delta$-neighbourhoods of both $E_{i}$ and $E'_{i}$. However $u$ and $w$ are distinct, so for $\omega$-almost all $i$ the points $u_i$ and $w_i$ are at least $\phi(\delta)$ apart in $d_X$, which implies that $E_{i} = E'_{i}$ for $\omega$-almost all $i$. Therefore, the triangle $\Delta(x,y,z)$ is contained in $E$. Since $z$ was arbitrary, $E' \subseteq E$, and a symmetric argument shows that the two flats are equal. \end{proof} Using only the properties (F1) and (F2) from the conclusion of Proposition \ref{XinfProps} above, Kapovich and Leeb proved the following two results. \begin{lemma} {\rm\cite[Lemma 4.4]{KL}}\label{ConstantProject}\qua Let $E \in \mathcal F_\omega$ be a flat in $X_\omega$ and let $\pi_E \co X_\omega \to E$ be the closest-point projection map. Let $\gamma \co [0,1] \to X_\omega \smallsetminus E$ be a curve in the complement of $E$. Then $\pi_E \circ \gamma \co [0,1] \to E$ is constant. \end{lemma} \begin{lemma} {\rm\cite[Lemma 4.5]{KL}}\label{NoLoops}\qua Every embedded loop in $X_\omega$ is contained in a flat $E \in \mathcal F_\omega$. \end{lemma} \subsection{The equivariant Gromov topology} \label{GromovSubsection} From the sequence of homomorphisms $h_n \co G \to \Gamma$, we have constructed a space $X_{\omega}$, a basepoint $x_\omega$ and an isometric action of $G$ on $X_\omega$ with no global fixed point. Let $X_\infty$ be the convex hull of the set $G . x_\omega$, and let $\mathcal C_\infty$ be the union of the geodesics $[x_\omega, g . x_\omega]$, along with the flats $E \in \mathcal F_\omega$ which contain some non-degenerate open triangle contained in a triangle $\Delta (g_1 . x_\omega, g_2 . x_\omega, g_3 . x_\omega)$, for $g_1,g_2,g_3 \in G$. Certainly $X_\infty \subseteq \mathcal C_\infty$.
The set $\mathcal C_\infty$, and hence also $X_\infty$, is separable. Note that $\mathcal C_\infty$ is a CAT$(0)$ space and that Proposition \ref{XinfProps} and Lemmas \ref{ConstantProject} and \ref{NoLoops} hold for $\mathcal C_\infty$ also. The action of $G$ on $X_\omega$ leaves $\mathcal C_\infty$ invariant, so there is an isometric action of $G$ on $\mathcal C_\infty$. Since $\mathcal C_\infty \subseteq X_\omega$, Lemma \ref{NoXinfFixedPt} implies: \begin{lemma} \label{NoFixCinf} There is no global fixed point for the action of $G$ on $\mathcal C_\infty$. \end{lemma} We have chosen to consider the space $\mathcal C_\infty$ rather than $X_\infty$ so that if some flat in $X_\omega$ intersects our subspace in a set containing a non-degenerate open triangle then the entire flat containing this triangle is contained in the subspace. Suppose that $\{ (Y_n,\lambda_n) \}_{n=1}^\infty$ and $(Y,\lambda)$ are pairs consisting of metric spaces, together with actions $\lambda_n \co G \to \text{Isom}(Y_n)$, $\lambda \co G \to \text{Isom}(Y)$. Recall (cf. \cite[\S 3.4, p. 16]{BF3}) that $(Y_n, \lambda_n ) \to (Y,\lambda)$ in the {\em $G$-equivariant Gromov topology} if and only if: for any finite subset $K$ of $Y$, any $\epsilon > 0$ and any finite subset $P$ of $G$, for sufficiently large $n$, there are subsets $K_n$ of $Y_n$ and bijections $\rho_n \co K_n \to K$ such that for all $s_n, t_n \in K_n$ and all $g_1, g_2 \in P$ we have \[ \left| d_{Y}(\lambda(g_1) . \rho_n (s_n) , \lambda(g_2) . \rho_n (t_n)) - d_{Y_n} ( \lambda_n(g_1) . s_n , \lambda_n(g_2) . t_n ) \right| < \epsilon . \] To a homomorphism $h \co G \to \Gamma$, we naturally associate a pair $(X_h,\lambda_h)$ as follows: let $X_h$ be the convex hull in $X$ of $G.x$ (where $x$ is the basepoint of $X$), endowed with the metric $\frac{1}{\mu_h}d_X$; and let $\lambda_h = \iota \circ h$, where $\iota\co \Gamma \to \text{Isom}(X)$ is the fixed homomorphism.
\begin{lemma} \label{GromovTop} Let $\Gamma$, $X$, $G$ and $\{ h_n \co G \to \Gamma \}$ be as described above. Let $X_\omega$ be the asymptotic cone of $X$, and $\mathcal C_\infty$ be as described above. Let $\lambda_\infty \co G \to \text{Isom}(\mathcal C_\infty)$ denote the action of $G$ on $\mathcal C_\infty$ and $(\mathcal C_\infty, \lambda_\infty)$ the associated pair. There exists a subsequence $\{ f_i \} \subseteq \{ h_i \}$ so that the elements $(X_{f_i},\lambda_{f_i})$ converge to $(\mathcal C_\infty, \lambda_\infty)$ in the $G$-equivariant Gromov topology. \end{lemma} \begin{proof} Since $\mathcal C_\infty$ is separable, there is a countable dense subset of $\mathcal C_\infty$, $S$ say. Let $S_1 \subset S_2 \subset \ldots$ be a collection of finite sets whose union is $S$. Let $\{ 1 \} =Q_1 \subset Q_2 \subset \ldots \subset G$ be an exhaustion of $G$ by finite subsets. Define $J_k$ to be the collection of $i \in \mathbb N$ so that $(X_{h_i}, \lambda_{i})$ is a $\frac{1}{k}$-approximation for $Q_k$ and $S_k$. By the definition of the asymptotic cone, $\omega (J_k) = 1$, and in particular each $J_k$ is infinite. Let $n_1$ be the least element of $J_1$, and let $f_1 = h_{n_1}$. Inductively, define $n_k$ to be the least element of $J_k$ which is not contained in $\{ n_1, \ldots , n_{k-1} \}$, and define $f_k = h_{n_k}$. It is straightforward to see that the sequence $\{ f_i \}$ satisfies the conclusion of the lemma. \end{proof} The above result can be interpreted as a compactification of a certain space of metric spaces equipped with $G$-actions. This is the `compactification' referred to in the title of this paper. \begin{convention} \label{ConvergentSubseq} For the remainder of the paper, we will assume that we started with the sequence $\{ f_i \co G \to \Gamma \}$ found in Lemma \ref{GromovTop} above. This will allow us to speak of `all but finitely many $n$' instead of `$\omega$-almost all $n$'.
\end{convention} To make the use of Convention \ref{ConvergentSubseq} more transparent, when using this convention we speak of the homomorphisms $f_i$, rather than $h_i$. However, we still write $\lambda_i$ for the action of $G$ on $X$ induced by $f_i$, and we write $\mu_i$ for $\mu_{f_i}$ and $X_i$ for $X$ endowed with the metric $d_{X_i} := \frac{1}{\mu_i} d_X = \frac{1}{\mu_{f_i}} d_X$. Let $\mathcal F_\infty$ be the set of flats in $\mathcal C_\infty$. \begin{corollary} \label{Flats} Under Convention \ref{ConvergentSubseq}, for each $E \in \mathcal F_\infty$ there is a sequence $\{ E_i \subset X_i \}$ so that $E_i \to E$ in the $G$-equivariant Gromov topology. \end{corollary} \begin{proof} This follows from the proofs of Proposition \ref{XinfProps} and Lemma \ref{GromovTop}. \end{proof} \subsection{The action of $G$ on $X_{\omega}$} \label{Action} \begin{lemma} \label{FlatInv} Let $E \in \mathcal F_\infty$ be a flat which is a limit of the flats $\{ E_i \}$. If $g \in G$ and $g . E = E$ then for all but finitely many $j$ we have $f_j(g) . E_j = E_j$. \end{lemma} \begin{proof} Choose a non-degenerate triangle $\Delta(a,b,c)$ in $E$. Let $\{ E_{i} \}$ be a sequence of flats from $X_{i}$ approximating $E$. Let $\{ a_i \}, \{ b_i \}$ and $\{ c_i \}$ be sequences of points representing $a,b$ and $c$, respectively. By the definition of the action of $G$ on $X_\omega$, the point $g.a$ is represented by the sequence $\{ {\lambda}_{i}(g,a_i) \}$. The triangle $\Delta(g.a,g.b,g.c)$ is also a non-degenerate triangle in $E$ and at least one of the triangles $\Delta(a,b,g.a)$, $\Delta (a,c,g.a)$, $\Delta (b,c,g.a)$ is non-degenerate in $E$. The argument from the proof of Proposition \ref{XinfProps} (along with Corollary \ref{Flats}) applied to this non-degenerate triangle shows that for all but finitely many $i$ the point ${\lambda}_{i}(g,a_i)$ is $\delta$-close to the flat $E_{i}$.
Similarly, for all but finitely many $i$ the point ${\lambda}_i(g,b_i)$ is $\delta$-close to $E_{i}$. Since $E_i \in \mathcal F_X$, so is the flat $f_{i}(g) . E_{i}$, by Proposition \ref{FInv}. However, \[ d_X({\lambda}_{i}(g,a_i), {\lambda}_i (g,b_i)) = d_X(a_i,b_i) , \] which is greater than $\phi(\delta)$ for all but finitely many $i$. Therefore, by the definition of the function $\phi$, for all but finitely many $i$ the flats $E_i$ and $f_i(g) . E_{i}$ are the same, as required. \end{proof} \begin{lemma} \label{NoElliptic} Suppose that $\Gamma$ is a group acting properly and cocompactly on a CAT$(0)$ space $X$ with isolated flats, and suppose that this action is toral. Let $G$ and $X_\omega$ be as above. Suppose that $g \in G$ leaves a flat $E$ in $\mathcal C_\infty$ invariant as a set. Then $g$ acts by translation on $E$. \end{lemma} \begin{proof} Suppose that $g$ acts nontrivially on $E$, but not as a translation. Then there are $y, z \in E$ which are moved different distances by $g$ (suppose that $y$ is moved further than $z$ by $g$). Let $\{ E_{i} \}$ be a sequence of flats in $X_{i}$ which converge to $E$. Since $g$ maps $E$ to itself, for all but finitely many $i$, we have $f_i(g) . E_i = E_i$, by Lemma \ref{FlatInv}. Let $\{ z_i \}, \{ y_i \} \subseteq E_{i}$ be sequences of points representing $z$ and $y$, respectively. Suppose that $d_{X_\omega}(y,g.y) - d_{X_\omega}(z,g.z) = \epsilon > 0$. Choose large $i$ so that the points $z_i$, $y_i$, ${\lambda}_{i}(g , z_i)$ and ${\lambda}_{i}(g , y_i)$ satisfy \begin{eqnarray*} |d_{X_i}({\lambda}_{i}(g , z_i), z_i) - d_{X_\omega}(g.z,z)| & < & \frac{\epsilon}{3}; \mbox{ and }\\ |d_{X_i}({\lambda}_{i}(g , y_i), y_i) - d_{X_\omega}(g.y,y)| & < & \frac{\epsilon}{3}. \end{eqnarray*} Since $\Gamma$ is toral, the action of $g$ on $X_i$ via ${\lambda}_{i}$ is by (possibly trivial) translations. Therefore, \[ d_{X_i}({\lambda}_{i}(g, z_i), z_i) = d_{X_i}({\lambda}_{i}(g, y_i), y_i) .
\] However, $d_{X_\omega}(g.y,y) - d_{X_\omega}(g.z,z) = \epsilon > 0$ and we have a contradiction. \end{proof} \begin{remark} As we shall see in Example \ref{NontoralEx} below, Lemma \ref{NoElliptic} does not hold when $\Gamma$ is a non-toral CAT$(0)$ group with isolated flats. \end{remark} \subsection{Algebraic $\Gamma$-limit groups} \begin{definition} (cf.\ \cite{Sela1}, Definition 1.2)\label{Kinf} \qua Define the normal subgroup $K_\infty$ of $G$ to be the kernel of the action of $G$ on $\mathcal C_\infty$: \[ K_\infty = \{ g \in G\ |\ \forall y \in \mathcal C_\infty , \ g(y) = y \} . \] The {\em strict $\Gamma$-limit group} is $L_\infty = G/K_\infty$. Let $\eta \co G \to L_\infty$ be the natural quotient map. A {\em $\Gamma$-limit group} is a group which is either a strict $\Gamma$-limit group or a finitely generated subgroup of $\Gamma$. \end{definition} Recall the following (see \cite[Definition 1.5]{BF3}). \begin{definition} Let $G$ and $\Xi$ be finitely generated groups. A sequence $\{ f_i\} \subseteq \text{Hom}(G, \Xi)$ is {\em stable} if, for all $g \in G$, the sequence $\{ f_i(g) \}$ is eventually always $1$ or eventually never $1$. For any sequence $\{ f_i \co G \to \Xi \}$ of homomorphisms, the {\em stable kernel} of $\{ f_i \}$, denoted $\underrightarrow{\text{Ker}}\, (f_i)$, is \[ \{ g\in G\ |\ f_i(g) = 1\ \mbox{ for all but finitely many $i$} \}. \] \end{definition} \begin{definition} An {\em algebraic $\Gamma$-limit group} is the quotient $G/\underrightarrow{\text{Ker}}\, (h_i)$, where $\{ h_i \co G \to \Gamma \}$ is a stable sequence of homomorphisms. \end{definition} In the case that $\Gamma$ is a free group (acting on its Cayley graph), Bestvina and Feighn \cite{BF3} define {\em limit groups} to be those groups of the form $G/\underrightarrow{\text{Ker}}\, f_i$, where $\{ f_i \}$ is a stable sequence in $\text{Hom}(G,\Gamma)$.
When $\Gamma$ is a free group, this leads to the same class of groups as the geometric definition analogous to Definition \ref{Kinf} above (see \cite{Sela1}; this is also true when $\Gamma$ is a torsion-free hyperbolic group, see \cite{SelaHyp}). When $\Gamma$ is a torsion-free CAT$(0)$ group with isolated flats we may have torsion in $G/K_\infty$, but $G/\underrightarrow{\text{Ker}}\, (f_i)$ is always torsion-free. However, for any stable sequence $\{ f_i \}$, we always have $\underrightarrow{\text{Ker}}\, f_i \subseteq K_\infty$. Torsion in $G/ K_\infty$ can only occur when $\mathcal C_\infty$ is a single flat, in which case $f_i(G)$ is virtually abelian for almost all $i$. In Section \ref{TreeSection} below, when $\Gamma$ is a torsion-free toral CAT$(0)$ group with isolated flats we use the action of $G$ on $\mathcal C_\infty$ to construct an action of $G$ on an $\mathbb R$-tree $T$. In this case, the classes of $\Gamma$-limit groups and algebraic $\Gamma$-limit groups coincide. It will be this fact that allows us in \cite{CWIFShort} to prove many results about torsion-free toral CAT$(0)$ groups with isolated flats in analogy to Sela's results about free groups and torsion-free hyperbolic groups. The following are elementary. \begin{lemma} \label{SKinKinf} Suppose that $\{ f_n \co G \to \Gamma \}$ gives rise to the action of $G$ on $\mathcal C_\infty$ as in the previous section. Then $\underrightarrow{\text{Ker}}\, ( f_n ) \subseteq K_\infty$. \end{lemma} \begin{lemma} \label{Limitfg} Let $L$ be a $\Gamma$-limit group. Then $L$ is finitely generated. \end{lemma} \subsection{A non-toral example} \label{NonToral} In this subsection we consider an example of the above construction in the case that $\Gamma$ is a torsion-free {\em non}-toral CAT$(0)$ group with isolated flats.
For any torsion-free group acting properly and cocompactly on a CAT$(0)$ space $X$ with isolated flats, and for any maximal flat $E \in \mathcal F_X$, the stabiliser $H:= \text{Stab}(E)$ acts properly and cocompactly on $E \cong \mathbb R^n$, for some $n$; that is, $H$ is a torsion-free crystallographic group. Hence, by Bieberbach's theorem, $H$ has a free abelian subgroup of finite index. We have an exact sequence \[ 1 \to \mathbb Z^n \to H \to A \to 1, \] where $A$ is a finite subgroup of $O(n)$. Each element $g \in H$ acts on $\mathbb R^n$ as \[ g(v) = r_g(v) + t_g, \] where $r_g \in A \subset O(n)$ and $t_g \in \mathbb R^n$. The homomorphism $H \to A$ is given by $g \mapsto r_g$. \begin{example} \label{NontoralEx} Let $G_1$ be a non-abelian torsion-free crystallographic group as above, with the exact sequence \[ 1 \to \mathbb Z^n \to G_1 \to A \to 1, \] where $A$ is a nontrivial finite group, and let $\Gamma = G_1 \times \mathbb Z$. Let $w$ be the generator of the $\mathbb Z$ factor of $\Gamma$. Clearly the group $\Gamma$ acts properly and cocompactly by isometries on a CAT$(0)$ space with isolated flats. Let $\Gamma$ be generated by $\{ g_1, \ldots , g_k , w \}$, where $\{ g_1, \ldots , g_k \}$ is a generating set for $G_1$, and let $F_{k+1}$ be the free group of rank $k+1$ with basis $\{ x_1 , \ldots , x_{k+1} \}$. For $j \ge 1$, define the homomorphism $\phi_j \co F_{k+1} \to \Gamma$ by $\phi_j(x_i) = g_i$ for $1 \le i \le k$, and $\phi_j( x_{k+1}) = w^j$. All of the kernels of the $\phi_j$ are identical, so the algebraic $\Gamma$-limit group is $F_{k+1} / \underrightarrow{\text{Ker}}\, (\phi_j) \cong \Gamma$. In this case $\mathcal C_\infty = X_\infty = X_\omega = \mathbb R^{n+1}$. In the geometric $\Gamma$-limit group, the $\mathbb Z^n$ in $G_1$ acts trivially, but the elements not in $\mathbb Z^n$ act like the corresponding element of $A$. The element $w$ acts nontrivially by translation, and the $\Gamma$-limit group is isomorphic to $A \times \mathbb Z$, which is not torsion-free.
\end{example} \section{The $\mathbb R$-tree $T$} \label{TreeSection} For the remainder of the paper, we suppose that $\Gamma$ is a torsion-free {\em toral} CAT$(0)$ group with isolated flats. In this section we extract an $\mathbb R$-tree $T$ from the space $X_\omega$ and an isometric action of $G$ on $T$. The idea is to remove the flats in $X_\omega$ in order to obtain an $\mathbb R$-tree. We replace the flats with lines. \subsection{Constructing the $\mathbb R$-tree} \label{ConstructT} Suppose that $\Gamma$ is a torsion-free toral CAT$(0)$ group with isolated flats acting on the space $X$, and that the sequence of homomorphisms $h_n \co G \to \Gamma$ gives rise to the limiting space $X_\omega$, as in the previous section. Let $\mathcal C_\infty$ be the collection of geodesics and flats as described in Subsection \ref{Action}, and let $\mathcal F_\infty$ be the set of flats in $\mathcal C_\infty$. Suppose that $E \in \mathcal F_\infty$. By Proposition \ref{XinfProps}, for any $g \in G$, exactly one of the following holds: (i) $g.E = E$; (ii) $| g.E \cap E| = 1$; or (iii) $g.E \cap E = \emptyset$. By Lemmas \ref{FlatInv} and \ref{NoElliptic}, the action of $\text{Stab}(E)$ on $E$ is that of a finitely generated free abelian group acting by translations on $E$. Let $\mathcal D_E$ be the set of directions of the translations of $E$ by elements of $\text{Stab}(E)$. For each element $g \in G \smallsetminus \text{Stab}(E)$, let $l_g(E)$ be the (unique) point where any geodesic from a point in $E$ to a point in $g.E$ leaves $E$, and let $\mathcal L_E$ be the set of all of the points $l_g(E) \in E$. Note that if $g.E \cap E$ is nonempty (and $g \not\in \text{Stab}(E)$) then $g.E \cap E = \{ l_g(E) \}$. Since $G$ is finitely generated, and hence countable, both sets $\mathcal D_E$ and $\mathcal L_E$ are countable. Given a (straight) line $p \subset E$, let $\chi_E^p$ be the projection from $E$ to $p$.
Since $\mathcal L_E$ is countable, there are only countably many points in $\chi_E^p(\mathcal L_E)$. Therefore, there is a line $p_E \subseteq E$ such that \begin{enumerate} \item the direction of $p_E$ is not orthogonal to any direction in $\mathcal D_E$; \item if $x$ and $y$ are distinct points in $\mathcal L_E$, then $\chi_E^{p_E}(x) \neq \chi_E^{p_E}(y)$. \end{enumerate} Project $E$ onto $p_E$ using $\chi_E^{p_E}$. The action of $\text{Stab}(E)$ on $p_E$ is defined in the obvious way (using projection) -- this is an action since the action of $\text{Stab}(E)$ on $E$ is by translations. Connect $\mathcal C_\infty \smallsetminus E$ to $p_E$ in the obvious way -- this uses the following observation, which follows immediately from Lemma \ref{ConstantProject}. \begin{observation} Suppose $S$ is a component of $\mathcal C_\infty \smallsetminus E$. Then there is a point $x_S \in E$ so that $S$ is a component of $\mathcal C_\infty \smallsetminus \{ x_S \}$. \end{observation} Glue such a component $S$ to $p_E$ at the point $\chi_E^{p_E} ( x_S)$. Perform this projecting and gluing construction in an equivariant way for all flats $E \subseteq \mathcal C_\infty$ -- so that for all $E \subseteq \mathcal C_\infty$ and all $g \in G$ the lines $p_{g . E}$ and $g . p_E$ have the same direction (this is possible since the action of $\text{Stab}(E)$ on $E$ is by translations, and so does not change directions). Having done this for all flats $E \subseteq \mathcal C_\infty$, we arrive at a space $T$, which is endowed with the (obvious) path metric. The action of $G$ on $T$ is defined in the obvious way from the action of $G$ on $X_\omega$. This action is clearly by isometries. The space $T$ has a distinguished set of geodesic lines, namely those of the form $\chi_E^{p_E}(E)$, for $E \in \mathcal F_\infty$. Denote the set of such geodesic lines by $\mathbb P$.
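To make the choice of $p_E$ concrete, consider the following elementary illustration (not taken from the construction above). Suppose $E \cong \mathbb R^2$, every direction in $\mathcal D_E$ is parallel to $e_1$, and $\mathcal L_E$ is countable. Any line $p_\lambda$ spanned by $(1,\lambda)$ satisfies condition (1), since $(1,\lambda) \cdot e_1 = 1 \neq 0$. Moreover, two distinct points $x, y \in \mathcal L_E$ satisfy $\chi_E^{p_\lambda}(x) = \chi_E^{p_\lambda}(y)$ if and only if $x - y$ is orthogonal to $(1,\lambda)$, that is, if and only if \[ (x_1 - y_1) + \lambda (x_2 - y_2) = 0, \] which holds for at most one value of $\lambda$. Since $\mathcal L_E$ is countable, condition (2) therefore fails for at most countably many $\lambda$, and any of the remaining (uncountably many) values of $\lambda$ yields a suitable line $p_E = p_\lambda$.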
\begin{lemma} \label{NoFixedPtOnT} $T$ is an $\mathbb R$-tree and there is an action of $G$ on $T$ by isometries without global fixed points. \end{lemma} \begin{proof} That $T$ is an $\mathbb R$-tree is obvious, since there are no embedded loops. We have already noted that there is an isometric action of $G$ on $T$. Finally, suppose that there is a fixed point $y$ for the action of $G$ on $T$. If $y$ is not contained in any geodesic line in $\mathbb P$, then $y$ would correspond to a fixed point for the action of $G$ on $X_\omega$, and there are no such fixed points, by Lemma \ref{NoFixCinf}. Thus $y$ is contained in some geodesic line $p_E \in \mathbb P$, corresponding to the flat $E \in \mathcal F_\infty$. Let $g \in G$. If $g$ does not leave $p_E$ invariant then it takes $p_E$ to some line $p_{E'}$, and $g$ takes $E$ to $E'$ in $X_\omega$, fixing the point of intersection. Suppose that $g_1$ and $g_2$ are elements of $G$ which fix $y$ but do not leave $p_E$ invariant. Then let $\alpha \in X_\omega$ be the point of intersection of $E$ and $g_1 . E$ and let $\beta$ be the point of intersection of $E$ and $g_2 . E$. Then $\alpha$ and $\beta$ are both in $\mathcal L_E$ and $\chi_E^{p_E}(\alpha) = \chi_E^{p_E}(\beta)$, so by the choice of $p_E$ we must have $\alpha = \beta$. Therefore, there is a point $\alpha \in E$ so that all elements $g \in G$ which do not leave $p_E \subseteq T$ invariant fix $\alpha \in X_\omega$. If $g$ does leave $p_E$ invariant then it fixes $E$ as a set, and so acts by translations on $E$, and hence by translations on $p_E$. Since $g$ fixes $y \in p_E$, this translation of $p_E$ is trivial, so $g$ fixes $p_E$ pointwise, and the direction of translation of $g$ on $E$ is orthogonal to the direction of $p_E$. By the choice of the direction of $p_E$ above, this means that $g$ acts trivially on $E$, and in particular fixes the point $\alpha$ found above. Thus $\alpha$ is a global fixed point for the action of $G$ on $\mathcal C_\infty$, contradicting Lemma \ref{NoFixCinf}.
\end{proof} \begin{remark} Since $K_\infty \le G$ acts trivially on $\mathcal C_\infty$, it also acts trivially on $T$ and the action of $G$ on $T$ induces an isometric action of $L_\infty$ on $T$. \end{remark} \subsection{The actions of $G$ and $L_\infty$ on $T$} The following theorem is the main technical result of this paper, and the remainder of this section is devoted to its proof. Let $G$ be a finitely generated group, $\Gamma$ a torsion-free toral CAT$(0)$ group with isolated flats and $\{ h_i \co G \to \Gamma \}$ a sequence of homomorphisms, no two of which differ only by conjugation in $\Gamma$. Let $X_\omega$, $\mathcal C_\infty$ and $T$ be as in Section \ref{Limits} and Subsection \ref{ConstructT} above. Let $\{ f_i \co G \to \Gamma \}$ be the subsequence of $\{ h _i \}$ as in the conclusion of Lemma \ref{GromovTop}. Let $K_\infty$ be the kernel of the action of $G$ on $\mathcal C_\infty$ and let $L_\infty = G / K_\infty$ be the associated strict $\Gamma$-limit group. \begin{theorem} [Compare \cite{Sela1}, Lemma 1.3] \label{LinfProps} In the above situation, the following properties hold. \begin{enumerate} \item Suppose that $[A,B]$ is a non-degenerate segment in $T$. Then $\text{Fix}_{L_\infty}[A,B]$ is an abelian subgroup of $L_\infty$; \label{SegStabAb} \item If $T$ is isometric to a real line then for all sufficiently large $n$ the group $f_n(G)$ is free abelian. Furthermore in this case $L_\infty$ is free abelian. \label{Tline} \item If $g \in G$ fixes a tripod in $T$ pointwise then $g \in \underrightarrow{\text{Ker}}\, (f_i)$; \label{tripod} \item Let $[y_1,y_2] \subset [y_3,y_4]$ be a pair of non-degenerate segments of $T$ and assume that the stabiliser $\text{Fix}([y_3,y_4])$ of $[y_3,y_4]$ in $L_\infty$ is non-trivial. Then \[ \text{Fix}([y_1,y_2]) = \text{Fix}([y_3,y_4]) . \] In particular, the action of $L_\infty$ on the $\mathbb R$-tree $T$ is stable. \label{Stable} \item Let $g \in G$ be an element which does not belong to $K_\infty$. 
Then for all but finitely many $n$ we have $g \not\in \text{ker}(f_n)$; \label{SeqStable} \item $L_\infty$ is torsion-free; \label{tf} \item If $T$ is not isometric to a real line then $\{ f_i \}$ is a stable sequence of homomorphisms. \label{fiStable} \end{enumerate} \end{theorem} We prove Theorem \ref{LinfProps} in a number of steps. First, we prove \ref{LinfProps}(\ref{SegStabAb}). Suppose that $[A,B] \subseteq T$ is a non-degenerate segment with a nontrivial stabiliser. If there is a line $p_E \in \mathbb P$ such that $[A,B] \cap p_E$ contains more than one point, then any elements $g_1,g_2 \in \text{Fix}([A,B])$ fix $p_E$ and hence fix $E \in \mathcal F_\infty$. Therefore, by Lemma \ref{FlatInv} for all but finitely many $i$ the elements $g_1$ and $g_2$ fix the flat $E_i \in X_i$, where $\{ E_i \} \to E$. The stabiliser of $E_i$ is free abelian, so $[ g_1, g_2] \in \text{ker}(f_i)$. Thus $[g_1,g_2] \in \underrightarrow{\text{Ker}}\, (f_i)$. By Lemma \ref{SKinKinf}, $\underrightarrow{\text{Ker}}\, (f_i) \subseteq K_\infty$, so $[g_1, g_2] \in K_\infty$. Hence, in this case the stabiliser in $L_\infty$ of $[A,B]$ is abelian. Suppose therefore that there is no $p_E \in \mathbb P$ which intersects $[A,B]$ in more than a single point. In particular, $A$ and $B$ are not both contained in $p_E$ for any $p_E \in \mathbb P$. In fact, we need something stronger than this. First we prove the following: \begin{lemma} \label{NearFlat} Suppose that $\alpha, \beta \in X_n$ and $g \in G$ are such that there is a segment of length at least $6\phi(4\delta) + 4\max \{ d_{X}(g.\alpha, \alpha), d_{X}(g. \beta, \beta) \}$ in $[\alpha, \beta]$ which is within $\delta$ of a flat $E \in \mathcal F_X$. Then $g \in \text{Fix}(E)$. \end{lemma} \begin{proof} In this lemma, all distances are measured with the metric $d_X$. Let $L = \max \{ d_{X}(g.\alpha, \alpha), d_{X}(g. \beta, \beta) \}$. 
Let $[\alpha_1, \beta_1]$ be the segment in $[\alpha,\beta]$ of length at least $6\phi(4\delta) + 4L$ which is in the $\delta$-neighbourhood of $E$. Consider first the triangle $\Delta_1 = \Delta(\alpha, \beta, g . \beta)$. If $\Delta_1$ is $\delta$-thin, then there is a segment $[\alpha_2,\beta_2] \subseteq [\alpha, g . \beta]$ which has length at least $6\phi(4\delta) + 3L - 2\delta$ and is within $\delta$ of $[\alpha_1,\beta_1]$. Hence $[\alpha_2,\beta_2]$ is in the $2\delta$-neighbourhood of $E$. Also, when $[\alpha, \beta]$ and $[\alpha, g . \beta]$ are both parametrised by arc length, there is an interval of time of length at least $6\phi(4\delta) + 3L - 2\delta$ when $[\alpha_1,\beta_1]$ and $[\alpha_2,\beta_2]$ both occur. Suppose then that $\Delta_1$ is $\delta$-thin relative to a flat $E' \neq E$. Then a subsegment $[\alpha_1', \beta_1'] \subseteq [\alpha_1,\beta_1]$ of length at least $6\phi(4\delta) +3L - \phi(\delta)$ does not intersect the $\delta$-neighbourhood of $E'$, and so must be contained in the $\delta$-neighbourhood of $[\alpha, g. \beta] \cup [\beta, g . \beta]$. However, $d_X(\alpha,\beta) \ge 6\phi(4\delta) + 4L$, so there is an interval $[\alpha_2, \beta_2]$ in $[\alpha, g . \beta]$ of length at least $6\phi(4\delta) + 3L - \phi(\delta)$ which is within $\delta$ of $[\alpha_1, \beta_1]$ and hence in the $2\delta$-neighbourhood of $E$. Once again, when $[\alpha, \beta]$ and $[\alpha, g . \beta]$ are parametrised by arc length there is an interval of time of length at least $6\phi(4\delta) + 3L - \phi(\delta)$ when both $[\alpha_1,\beta_1]$ and $[\alpha_2, \beta_2]$ occur. Finally suppose that $\Delta_1$ is $\delta$-thin relative to $E$. Then there is certainly a segment $[\alpha_2,\beta_2] \subseteq [\alpha, g. \beta]$ of length at least $6\phi(4\delta) + 3L - 2\delta$ in the $\delta$-neighbourhood of $E$. Since \begin{eqnarray*} d_X(\alpha, g . \beta) & \le & d_X(\alpha , \beta) + d_X(\beta, g .
\beta)\\ & \le & d_X(\alpha, \beta) + L, \end{eqnarray*} then just as above there are subsegments $[\alpha_1',\beta_1']$ and $[\alpha_2' , \beta_2']$ (contained in $[\alpha_1,\beta_1]$ and $[\alpha_2,\beta_2]$, respectively) which occur at the same time for an interval of length at least $6\phi(4\delta) + 2L - 2\delta$. In any of these cases, denote by $[\alpha_1, \beta_1]$ and $[\alpha_2, \beta_2]$ the intervals in $[\alpha , \beta]$ and $[\alpha , g . \beta]$ of length at least $6\phi(4\delta) + 2L - 2\phi(\delta)$ which occur at the same time when $[\alpha, \beta]$ and $[\alpha, g . \beta]$ are parametrised by arc length and such that $[\alpha_1,\beta_1]$ and $[\alpha_2,\beta_2]$ are both contained in the $2\delta$-neighbourhood of $E$. We now consider $\Delta_2 = \Delta(g . \alpha, \beta, g . \beta)$. Using the same arguments as those which found $[\alpha_2,\beta_2]$ above, it is not difficult to find an interval $[\alpha_3,\beta_3] \subseteq [g.\alpha, g . \beta]$ which is of length at least $6\phi(4\delta) - 4\phi(\delta)$ and occurs at the same time as $[\alpha_2,\beta_2]$ when $[\alpha, g . \beta]$ and $[g . \alpha, g . \beta]$ are parametrised by arc length. The only wrinkle in this argument occurs when $\Delta_2$ is $\delta$-thin relative to $E$ and we may have to change $[\alpha_2,\beta_2]$ as in the third case above. However, in this case we can find an appropriate $[\alpha_1,\beta_1]$. Now, $[\alpha_3,\beta_3]$ is contained in the $4\delta$-neighbourhood of $E$. Also, the time at which it occurs overlaps the time at which $[\alpha_1,\beta_1]$ occurs by at least $6\phi(4\delta) - 4\phi(\delta)$. Note that $6\phi(4\delta) - 4\phi(\delta) > \phi(4\delta) + 1$. Since $[\alpha_1,\beta_1] \subseteq [\alpha, \beta]$ is contained in the $\delta$-neighbourhood of $E$, and since $[\alpha_3, \beta_3] \subseteq [g . \alpha, g .
\beta]$ occurs at the same time as $[\alpha_1, \beta_1]$, the interval $[\alpha_3, \beta_3]$ is contained in the $\delta$-neighbourhood of $g . E$. Therefore, the $4\delta$-neighbourhoods of $E$ and $g . E$ intersect in a geodesic segment of length at least $\phi(4\delta) + 1$, which implies that $E = g . E$, so $g \in \text{Fix}(E)$, as required. \end{proof} Fix a finite subset $Q \subseteq \text{Fix}_G([A,B])$. Let the sequences $\{ A_k \}$ and $\{ B_k \}$ converge to $A$ and $B$, respectively. Suppose that, for some $\epsilon > 0$, for all but finitely many $i$ there is a segment $[\alpha_{i},\beta_{i}] \subseteq [A_{i},B_{i}]$ for which (i) $d_{X_i}(\alpha_{i},\beta_{i}) \ge \epsilon$; and (ii) there is a flat $E_{i}$ so that $[\alpha_{i},\beta_{i}]$ is in the $\delta$-neighbourhood of $E_{i}$. In this case, for all but finitely many $i$, the conditions of Lemma \ref{NearFlat} are satisfied for each $g \in Q$ and the segment $[\alpha_{i},\beta_{i}]$. Therefore, $f_{i}(Q) \subseteq \text{Stab}_{\Gamma}(E_{i})$, so $\langle f_{i}(Q) \rangle$ is abelian for all but finitely many $i$. In this case $\left\langle q K_\infty \ |\ q \in Q \right\rangle$ is certainly abelian. Thus we can suppose that for all $\epsilon > 0$ there is no such segment $[ \alpha_i, \beta_i ]$. Let $Q = \{ g_1 , \ldots , g_s \}$. Let $[\rho, \sigma] \subseteq [A, B]$ be the middle third, and let the sequences $\{ \rho_k \}$, $\{ \sigma_k \}$, $\{ {\lambda}_k(g_i, \rho_k) \}$ and $\{ {\lambda_k}(g_i, \sigma_k) \}$ converge to $\rho$, $\sigma$, $g_i . \rho$, and $g_i . \sigma$, respectively. Consider the triangles $\Delta (\rho_k, \sigma_k, \lambda(g_i , \sigma_k))$ and $\Delta( \lambda(g_i , \rho_k), \sigma_k, \lambda(g_i , \sigma_k))$.
By the argument in Lemma \ref{NearFlat}, the argument in the above paragraph and the argument in \cite[Proposition 2.4]{Paulin}, for all but finitely many $k$ there is a segment $[\tau_k, \upsilon_k] \subseteq [\rho_k, \sigma_k]$ whose length (measured in $d_{X_k}$) is at least $\frac{1}{3}d_{X_k}(\rho_k,\sigma_k)$ and such that for each $g_i \in Q$, the image $g_i . [\tau_k,\upsilon_k]$ lies in the $4\delta$-neighbourhood of $[\rho_k,\sigma_k]$. Note that $d_{X_k}(\tau_k,\upsilon_k) \ge \frac{1}{9}d_{X_k}(A_k,B_k)$. Now, since translations on a line commute, each element of the form $[g,g']$, $g,g' \in Q$ moves the midpoint of $[\tau_k,\upsilon_k]$ a distance of at most $16\delta$ (see \cite[Proposition 2.4]{Paulin}). There is an absolute bound on the number of points of any $\Gamma$-orbit in a ball of radius $16\delta$ around any point in $X$. Let this bound be $D_1$. Since the action of $\Gamma$ is free, we have $|\{ [f_{n}(g),f_{n}(g')] \ |\ g,g' \in Q \}| \le D_1$ irrespective of the size of $Q$. Let $x_1, x_2 \in \text{Fix}_G([A,B])$. Either for all but finitely many $n$ there is a flat $E_n$ so that $x_1, x_2 \in \text{Fix}(E_n)$ or the above argument bounding the size of the set of commutators holds. In the first case, $\langle f_{n}(x_1), f_{n}(x_2) \rangle$ is abelian for all but finitely many $n$. Suppose then that the second case holds. By the above argument, with $Q = \{ x_2, x_1, x_1^2, \ldots , x_1^{D_1+1} \}$, for all but finitely many $n$ we can find $1 \le s_1 < s_2 \le D_1 + 1$ for which $f_{n}([x_1^{s_1},x_2]) = f_{n}([x_1^{s_2},x_2])$. Therefore $f_{n}(x_1^{s_1}x_2x_1^{-s_1}) = f_{n}(x_1^{s_2}x_2x_1^{-s_2})$, which implies that $f_{n}([x_1^{s_2-s_1},x_2]) = 1$. Hence $f_{n}(x_1)^{s_2-s_1}$ commutes with $f_{n}(x_2)$.
Also, $\langle f_{n}(x_1)^{s_2-s_1} \rangle \subseteq \langle f_{n}(x_1)^{s_2-s_1} , f_{n}(x_2) \rangle^{f_{n}(x_1)}$, so by Proposition \ref{malnormal}, $f_{n}(x_1)$ is contained in the same maximal abelian subgroup as $\langle f_{n}(x_1)^{s_2 - s_1}, f_{n}(x_2) \rangle$, which is to say that $f_{n}(x_1)$ commutes with $f_{n}(x_2)$. Therefore, in any case, if $x_1, x_2 \in \text{Fix}_G([A,B])$ then for all but finitely many $n$ the element $f_{n}(x_1)$ commutes with $f_{n}(x_2)$. Therefore $x_1 K_\infty$ commutes with $x_2 K_\infty$. Since $x_1$ and $x_2$ were arbitrary, we have proved that the group $\text{Fix}_{L_\infty}([A,B])$ is abelian. This finishes the proof of \ref{LinfProps}(\ref{SegStabAb}). We now prove \ref{LinfProps}(\ref{Tline}). Suppose that $T$ is isometric to a real line, so $L_\infty$ is a subgroup of $\text{Isom}(\mathbb R)$. Suppose first that $\mathcal C_\infty$ is not a single flat. Let $k_1,k_2 \in G$ be arbitrary. Let $H = \langle k_1,k_2 \rangle$ and let $\overline{H}$ be the image of $\langle k_1,k_2 \rangle$ in $L_\infty$. Then $\overline{H}$ is a $2$-generator subgroup of $\text{Isom}(\mathbb R)$ and so is one of the following: (i) cyclic; (ii) infinite dihedral; or (iii) free abelian of rank $2$. We first prove that $\overline{H}$ cannot be infinite dihedral, so that it is abelian. Suppose that $k \in G$ reverses the orientation of $T$ (which we are assuming is isometric to $\mathbb R$). Let $a,b \in T$ be distinct points so that $k(a) = b$ and $k(b) = a$. Approximate the segment $[a,b]$ by a segment $[a_i,b_i] \subset X_i$ for large $i$, and let $[c_i,d_i]$ be the middle third of $[a_i,b_i]$. Since $\mathcal C_\infty$ is not a single flat, the segment $k.[c_i,d_i]$ lies within $2\delta$ of $[a_i,b_i]$, with orientation reversed (distances are being measured in $d_X$). It is not hard to see that there must be a point $e_i \in [c_i,d_i]$ which is moved at most $2\delta$ by $k$. Therefore $d_X(k^2 . e_i, e_i) \le 6\delta$.
In turn, this implies that $k^2$ moves each point on $[c_i,d_i]$ a distance of at most $10\delta$. Repeating this argument (with a larger $i$) with the elements $k, k^3, \ldots , k^{2D_1+1}$ we find $1 \le i_1 < i_2 \le D_1$ so that $k^{2(2i_1+1)} . e_i = k^{2(2i_2+1)} . e_i$, which implies that $k^{2(2i_1+1)} k^{-2(2i_2+1)} \in \text{ker}(f_i)$ for large enough $i$. This in turn implies, since $\Gamma$ is torsion-free, that $k \in \text{ker}(f_i)$ for all but finitely many $i$. Therefore $k$ acts trivially on $T$, and so cannot reverse the orientation. This proves that $\overline{H}$ is abelian, and being an orientation-preserving subgroup of $\text{Isom}(\mathbb R)$, it is free abelian. Suppose then that $k_1$ and $k_2$ act as translations on $T$, with translation lengths $\tau_1, \tau_2$, say. Choose $\kappa > 100D_1 ( \max \{ \tau_1, \tau_2 \} + 1)$, and choose $a,b \in T$ distance $\kappa$ from each other. For large enough $i$, the approximation $[a_i,b_i] \subset X_i$ of $[a,b]$ is such that for all $j \in \{ 1, \ldots , D_1 \}$, the elements $k_1^j$ and $k_2$ move the middle third $[c_i,d_i]$ of $[a_i,b_i]$ entirely within the $4\delta$-neighbourhood of $[a_i,b_i]$. As in the proof of \ref{LinfProps}(\ref{SegStabAb}) above, the commutator $[k_1^j,k_2]$ moves the midpoint of $[c_i,d_i]$ a distance of at most $16\delta$. Therefore, there is $1 \le j_1 < j_2 \le D_1$ so that $f_i([k_1^{j_1},k_2]) = f_i([k_1^{j_2},k_2])$, which as above implies that $f_i([k_1,k_2]) = 1$ for large enough $i$. In this argument $k_1$ and $k_2$ were arbitrary, so letting $k_1$ and $k_2$ run over all pairs in a finite generating set $\{ g_1, \ldots ,g_k \}$ for $G$ we see that for all $i, j \in \{ 1 , \ldots , k \}$ the elements $f_{n}(g_i)$ and $f_{n}(g_j)$ commute for all but finitely many $n$. Thus, $f_{n}(G)$ is abelian for all but finitely many $n$. We have also proved that $L_\infty$ is an orientation-preserving subgroup of $\text{Isom}(\mathbb R)$, and hence free abelian, as required.
Now suppose that $\mathcal C_\infty$ is a single flat. By Lemma \ref{FlatInv}, for any $g \in G$, for all but finitely many $i$ the element $g$ fixes a flat $E_i \subseteq X_i$. Take a finite generating set for $G$ and note that for all but finitely many $i$ each of the elements in this set fixes the flat $E_i$. Therefore, for all but finitely many $i$, $f_{i}(G) \subseteq \text{Fix}(E_i)$, which is free abelian. This also proves that $L_\infty$ is abelian, and again the only abelian subgroups of $\text{Isom}(\mathbb R)$ which are not free abelian have a global fixed point. This proves \ref{LinfProps}(\ref{Tline}). We now prove \ref{LinfProps}(\ref{tripod}). Let $T(A,B,C)$ be a tripod in $T$ and let $N$ be the valence three vertex in $T(A,B,C)$. Suppose that $g \in G \smallsetminus \{ 1 \}$ stabilises $A, B$ and $C$ and therefore also $N$. We prove that $g \in \text{ker}(f_n)$ for all but finitely many $n$. Let $K_0$ be the maximum number of elements of any orbit $\Gamma . y$ in any ball of radius $170\phi(16\phi(\phi(3\delta)))$ in $X$ (with metric $d_X$). Such a $K_0$ exists because the action of $\Gamma$ on $X$ is proper and cocompact. Suppose that some point of $T(A,B,C)$ is contained in a line $p_E \in \mathbb P$. Certainly not all of $T(A,B,C)$ is contained in $p_E$, so let $y \in T(A,B,C) \smallsetminus p_E$. Let $\alpha \in \mathcal C_\infty$ be a point corresponding to $y \in T$ and let $\beta = \pi_E(\alpha)$. Let $\{ E_i \}$ be a sequence of flats converging to $E$ (such a sequence exists by Corollary \ref{Flats}). By Lemma \ref{FlatInv}, $f_i(g) \in \text{Fix}(E_i)$ for all but finitely many $i$.
By fixing a sufficiently small $\epsilon$ and finding an $X_i$ which is an $\epsilon$-approximation for $Q = \{ 1, g , \ldots , g^{K_0+1} \}$ and $\{ \alpha, \beta \}$ we can ensure that for all $q \in Q$ the geodesic $[\alpha_i, {\lambda}_{i}(q, \alpha_i)]$ does not intersect the $4\delta$-neighbourhood of $E_i$, where $E$ is the limit of the flats $\{ E_i \}$. Hence by Proposition \ref{Projection}, $d_X(\pi_{E_i}(\alpha_i), \pi_{E_i}({\lambda}_{i}(q , \alpha_i))) \le 2\phi(3\delta)$ for all $q \in Q$. However, $|Q| > K_0$ and there are no more than $K_0$ elements of $\Gamma . y$ in a ball of radius $2\phi(3\delta)$ in $X$, by the choice of $K_0$. Note also that if $f_i(g)$ leaves $E_i$ invariant then $\pi_{E_i}({\lambda}_{i}(q, \alpha_i)) = {\lambda}_i(q, \pi_{E_i}(\alpha_i))$. Therefore, there exist $1 \le s_1 < s_2 \le K_0 + 1$ so that ${\lambda}_i (g^{s_1}, \pi_{E_i}(\alpha_i)) = {\lambda}_i(g^{s_2}, \pi_{E_i}(\alpha_i))$, which implies that $f_i(g^{s_2-s_1})$ fixes $\pi_{E_i}(\alpha_i)$. Therefore, $f_i(g^{s_2-s_1}) = 1$, and since $\Gamma$ is torsion-free $g \in \text{ker}(f_{i})$ for all but finitely many $i$, as required. Therefore, we assume for the moment that no point in the tripod $T(A,B,C)$ is contained in any line $p_E \in \mathbb P$. In this case, $A,B$ and $C$ correspond to points $\overline{A}, \overline{B}$ and $\overline{C}$ in $\mathcal C_\infty$ for which $T(\overline{A},\overline{B},\overline{C})$ is a tripod in $\mathcal C_\infty$. Let $\overline{N}$ be the valence three vertex in the tripod $T(\overline{A},\overline{B},\overline{C})$. Let $\overline{A}', \overline{B}', \overline{C}'$ be the midpoints of $[\overline{A},\overline{N}], [\overline{B},\overline{N}]$ and $[\overline{C},\overline{N}]$ respectively and let $S = \{ \overline{A}, \overline{B}, \overline{C}, \overline{N}, \overline{A}', \overline{B}', \overline{C}' \}$. We also define the set $Q = \{ 1, g , g^2, \ldots , g^{K_0 +1} \}$.
For varying $\epsilon$, we will consider those $X_i$ which are $\epsilon$-approximations for $Q$ and $S$. Consider the triangle $\Delta = \Delta(\overline{A}_k,\overline{B}_k,\overline{C}_k)$ in $X_k$, an $\epsilon$-approximation for $Q$ and $S$. Suppose that $\Delta$ is $\delta$-thin relative to a flat $E$. If $\epsilon$ is small enough, then necessarily $\overline{A}_k$ is at least distance $\frac{1}{3}d_{X_k}(\overline{A}_k, \overline{A}_k')$ from $E$. See \figref{ApproxPic}. \begin{figure} \caption{Ensuring $\overline{A}_k$ is far from $E$} \label{ApproxPic} \end{figure} Note that since $\overline{A}_k$, $\overline{B}_k$ and $\overline{C}_k$ are not moved far by $q \in Q$, compared to the distances $d_{X_k}(\overline{A}_k,\overline{A}_k')$, etc., the same property is true for triangles such as $\Delta(\overline{A}_k, \overline{B}_k, q . \overline{C}_k)$. Fix $X_i$, an $\epsilon$-approximation for $Q$ and $S$ so that $\epsilon$ is `small enough' in the sense of the previous two paragraphs, and also $\epsilon < \frac{1}{100}$. Consider the triangle $\Delta = \Delta(\overline{A}_i, \overline{B}_i, \overline{C}_i)$ in $X_i$. Define the constant $\delta' = 17\phi(16\phi(\phi(3\delta)))$. Suppose that $\Delta$ is not $\delta'$-thin, so it is $\delta$-thin relative to a unique flat $E \subset X_i$. In this case $d_X(\pi_E(\overline{A}_i), \pi_E(\overline{B}_i)) \ge \delta' - 2\delta$. Now, since $\epsilon < \frac{1}{100}$ and $d_X(\overline{A}_i, \pi_E(\overline{A}_i)) \ge \frac{1}{3}d_{X_i}(\overline{A}_i,\overline{A}_i')$, for large enough $i$ the geodesic $[\overline{A}_i, {\lambda}_i(q , \overline{A}_i)]$ avoids the $4\delta$-neighbourhood of $E$ for all $q \in Q$. Therefore, for all $q \in Q$, $d_X(\pi_E(\overline{A}_i), \pi_E({\lambda}_i(q , \overline{A}_i))) \le 2 \phi(3\delta)$. Fix $q \in Q$. We now prove that $f_i(q)$ leaves $E$ invariant, and then as above we argue that $g \in \text{ker}(f_i)$.
To prove that $f_i(q)$ leaves $E$ invariant, we consider the triangle \[ \Delta' = \Delta({\lambda}_i(q , \overline{A}_i), {\lambda}_i(q , \overline{B}_i) , {\lambda}_i(q , \overline{C}_i)), \] and prove that it is $16\phi(\phi(3\delta))$-thin relative to $E$. Since it is also $\delta$-thin relative to $f_i(q) . E$ and since it is not $\delta'$-thin, we must have that $E = f_i(q) . E$, by an argument similar to that which proved Lemma \ref{UniqueFlat}. Let $\alpha_1 = {\lambda}_i(q,\overline{A}_i), \alpha_2 = \pi_E(\alpha_1), \beta_1 = {\lambda}_i(q, \overline{B}_i)$ and $\beta_2 = \pi_E(\beta_1)$. Now, \begin{eqnarray*} d_{X}(\alpha_2, \pi_E(\overline{A}_i)),\ d_{X}(\beta_2, \pi_E(\overline{B}_i)) & \le & 2\phi(3\delta), \mbox{ and }\\ d_{X}(\pi_E(\overline{A}_i), \pi_E(\overline{B}_i)) & \ge & \delta' - 2\delta, \end{eqnarray*} so we have \[ d_{X}(\alpha_2, \beta_2) \ge 17\phi(16\phi(\phi(3\delta))) - 4\phi(3\delta) - 2\delta . \] Now, by Lemma \ref{Triangle} there exist $u,v$ in the $2\delta$-neighbourhood of $E$ so that $u \in [\alpha_1, \alpha_2]$ and $v \in [\alpha_1, \beta_2]$ and $d_{X}(u,v) \le \phi(\delta)$. Now, since $\alpha_2 = \pi_E(\alpha_1)$, $d_{X}(u,\alpha_2) \le 2\delta$, and so $d_{X}(v,\alpha_2) \le \phi(\delta) + 2\delta$. Therefore, $[\alpha_2,\beta_2]$ and $[v,\beta_2]$ $3\phi(\delta)$-fellow travel and $\Delta(\alpha_1,\alpha_2,\beta_2)$ is $3\phi(\delta)$-thin. Also $d_{X}(v,\beta_2) \ge \delta' - 7\phi(3\delta) - 2\delta$. Now, consider the triangle $\Delta_1 = \Delta(\alpha_1,\beta_1,\beta_2)$. Suppose it is $\delta$-thin relative to a flat $E' \neq E$. Let $w_1$ be the point on $[\beta_2,\beta_1]$ which lies in the $4\phi(\delta)$-neighbourhood of $E$ furthest from $\beta_2$. Then $w_1$ is not in the $\delta$-neighbourhood of $[v,\beta_2]$, and so is not in the $\delta$-neighbourhood of $[\alpha_1,\beta_2]$. If $w_1$ is in the $\delta$-neighbourhood of $[\beta_1,\alpha_1]$ then $\Delta_1$ is $5\phi(\delta)$-thin.
Thus suppose that $w_1$ is in the $\delta$-neighbourhood of $E'$. Now, because (i) $\beta_2 = \pi_E(\beta_1)$; (ii) $d_X(v,\beta_2) \ge \delta' - 7\phi(3\delta) - 2\delta$; and (iii) $[v,\beta_2]$ is contained in the $3\phi(\delta)$-neighbourhood of $E$, either $\Delta_1$ is $(\phi(3\phi(\delta)) + 3\phi(\delta) + \delta)$-thin or there is a segment in $[v,\beta_2]$ of length at least $\phi(\phi(3\delta))$ which lies in the $\delta$-neighbourhood of $E'$. However, this segment also lies in the $\phi(3\delta)$-neighbourhood of $E$, which is a contradiction. Therefore, in any case either $\Delta_1$ is $5\phi(\phi(3\delta))$-thin or $\Delta_1$ is $\delta$-thin relative to $E$. Suppose that $\Delta_1$ is $\delta$-thin relative to $E$. The geodesic $[\beta_2,\beta_1]$ intersects the $\delta$-neighbourhood of $E$ in a segment of length at most $\delta$ and so in this case $\Delta_1$ is $2\delta$-thin. We have proved that $\Delta_1 = \Delta(\alpha_1,\beta_1,\beta_2)$ is $5\phi(\phi(3\delta))$-thin, and also that $\Delta(\alpha_1,\alpha_2,\beta_2)$ is $3\phi(\delta)$-thin. Therefore, the geodesic $[\alpha_1,\beta_1]$ must $8\phi(\phi(3\delta))$-fellow travel the path $[\alpha_1,\alpha_2,\beta_2,\beta_1]$. Similar arguments applied to the geodesic segments $[{\lambda}_i(q, \overline{A}_i), {\lambda}_i(q , \overline{C}_i)]$ and $[{\lambda}_i(q, \overline{B}_i), {\lambda}_i(q , \overline{C}_i)]$ show that the triangle $\Delta' = \Delta({\lambda}_i(q , \overline{A}_i), {\lambda}_i(q , \overline{B}_i) , {\lambda}_i(q , \overline{C}_i))$ is $16\phi(\phi(3\delta))$-thin relative to $E$. Since $\Delta'$ is not $\delta'$-thin, the argument from Lemma \ref{UniqueFlat} implies that $\Delta'$ is $16\phi(\phi(3\delta))$-thin relative to a unique flat. Since $\Delta'$ is certainly $16\phi(\phi(3\delta))$-thin relative to $f_i(q) . E$ we must have that $f_i(q) . E = E$, as required.
Now we know, for all $q \in Q$, that $d_X({\lambda_i}(q, \pi_E(\overline{A}_i)), \pi_E(\overline{A}_i)) \le 2\phi(3\delta)$, which implies as above that $g \in \text{ker}(f_i)$. Therefore, we may assume that $\Delta$ is $\delta'$-thin. Similar arguments to those above allow us to infer that for all $r_1, r_2, r_3 \in \{ 1 , q \}$, the triangle \[ \Delta(f_i(r_1) . \overline{A}_i, f_i(r_2) . \overline{B}_i, f_i(r_3) . \overline{C}_i) , \] is $\delta'$-thin. Now, by the Claim in the proof of \cite[Lemma 4.1]{RipsSelaGAFA}, the point $\overline{N}_i$ is moved by $f_i(q)$ by at most $170\phi(16\phi(\phi(3\delta)))$. Again in this case, we find $1 \le s_1 < s_2 \le K_0 +1$ so that $f_i(g^{s_2-s_1})$ fixes $\overline{N}_i$, which implies that $g \in \text{ker}(f_i)$. We have proved that if $g \in G$ stabilises a tripod in $T$ then for all but finitely many $i$, $g \in \text{ker}(h_i)$. This proves \ref{LinfProps}(\ref{tripod}). The proof of \ref{LinfProps}(\ref{Stable}) is identical to that of \cite[Proposition 4.2]{RipsSelaGAFA}, except that segment stabilisers are abelian, rather than cyclic. However, all that is used in this proof is that segment stabilisers are abelian. We now prove \ref{LinfProps}(\ref{SeqStable}). Suppose that $g \not\in K_\infty$. Then certainly $g \not\in \text{ker}(f_{k})$ for all but finitely many $k$, by the choice of the sequence $\{ f_i \}$ in Lemma \ref{GromovTop}. We now prove \ref{LinfProps}(\ref{tf}). If $T$ is isometric to a real line then by \ref{LinfProps}(\ref{Tline}) $L_\infty$ is finitely generated free abelian, and is certainly torsion-free. Therefore suppose that $T$ is not isometric to a real line, that $g \in G$ and that $g^p \in K_\infty$. Since $T$ is not isometric to a real line, $g^p$ stabilises a tripod, so $g^p \in \text{ker}(f_{k})$ for all but finitely many $k$. However $\Gamma$ is torsion-free, so $g \in \text{ker}(f_{k})$ for all but finitely many $k$ and, by \ref{LinfProps}(\ref{SeqStable}), $g \in K_\infty$, as required. 
This proves \ref{LinfProps}(\ref{tf}). Finally, we prove \ref{LinfProps}(\ref{fiStable}). To see that $\{ f_i \}$ is a stable sequence of homomorphisms when $T$ is not isometric to a real line, suppose that $g \in G$. If $g \not\in K_\infty$ then by \ref{LinfProps}.(\ref{SeqStable}) we have $g \not\in \text{ker}(f_i)$ for all but finitely many $i$. If $g \in K_\infty$ then $g$ stabilises a tripod in $T$ and so by \ref{LinfProps}.(\ref{tripod}) $g \in \text{ker}(f_i)$ for all but finitely many $i$. This proves that $\{ f_i \}$ is a stable sequence of homomorphisms. This finally completes the proof of Theorem \ref{LinfProps}. $\square$ \section{$\Gamma$-limit groups and concluding musings} \label{conclusion} \subsection{Various kinds of $\Gamma$-limit groups} \begin{theorem} Suppose that $\Gamma$ is a torsion-free toral CAT$(0)$ group with isolated flats. The class of $\Gamma$-limit groups coincides with the class of algebraic $\Gamma$-limit groups. \end{theorem} \begin{proof} The class of abelian $\Gamma$-limit groups is exactly the class of finitely generated free abelian groups. It is easy to see that these are also the abelian algebraic $\Gamma$-limit groups. Clearly a finitely generated subgroup of $\Gamma$ is an algebraic $\Gamma$-limit group. Suppose then that $\{ h_i \co G \to \Gamma\}$ is a sequence of homomorphisms and $\{ f_i \}$ is the subsequence obtained from Lemma \ref{GromovTop}. If the limiting tree $T$ is isometric to a real line then the associated $\Gamma$-limit group is abelian. We have already covered this case, so we may assume $T$ is not isometric to a real line. Therefore, by \ref{LinfProps}.(\ref{tripod}), \ref{LinfProps}.(\ref{SeqStable}) and \ref{LinfProps}.(\ref{fiStable}), $\{ f_i \}$ is a stable sequence and $K_\infty = \underrightarrow{\text{Ker}}\, (f_i)$. Hence the limit group $L_\infty$ is an algebraic $\Gamma$-limit group. Conversely, suppose that $\{ h_i \co G \to \Gamma \}$ is a stable sequence of homomorphisms. 
If the associated sequence of stretching factors $\{ \mu_j \}$ contains a bounded subsequence, then there is a subsequence $\{ h_{i'} \}$ of $\{ h_i \}$ so that $h_{i_1'}(G) \cong h_{i_2'}(G)$ for all indices $i_1' , i_2'$ of the subsequence. In this case, since $\{ h_i \}$ is a stable sequence, the associated algebraic $\Gamma$-limit group is isomorphic to a finitely generated subgroup of $\Gamma$. Thus suppose that $\{ \mu_j \}$ has no bounded subsequence. In this case we can construct limiting spaces $X_\omega$ and $\mathcal C_\infty$ and the associated $\mathbb R$-tree $T$. If $T$ is isometric to a real line, we are done. Otherwise, since passing to a subsequence of a stable sequence does not change the stable kernel, we see again that $K_\infty = \underrightarrow{\text{Ker}}\, (h_i)$, so that the algebraic $\Gamma$-limit group is a $\Gamma$-limit group. \end{proof} We now recall a topology on the set of finitely generated groups from \cite{CG} (see also \cite{Grig, Champ}). \begin{definition} A {\em marked group} $(G,\mathcal A)$ consists of a finitely generated group $G$ with an ordered generating set $\mathcal A = (a_1, \ldots , a_n )$. Two marked groups $(G,\mathcal A)$ and $(G',\mathcal A')$ are {\em isomorphic} if the bijection taking $a_i$ to $a_i'$ for each $i$ induces an isomorphism between $G$ and $G'$. For a fixed $n$, the set $\Gn$ consists of those marked groups $(G,\mathcal A)$ where $|\mathcal A| = n$. \end{definition} We now introduce a metric on $\Gn$. First, we introduce the following abuse of notation. \begin{convention} \label{Abuse} Marked groups are always considered up to isomorphism of marked groups. Thus for any marked groups $(G,\mathcal A)$ and $(G',\mathcal A')$ in $\Gn$, we identify an $\mathcal A$-word with the corresponding $\mathcal A'$-word under the canonical bijection induced by $a_i \to a_i'$, $i = 1 ,\ldots , n$. 
\end{convention} \begin{definition} A {\em relation} in a marked group $(G,\mathcal A)$ is an $\mathcal A$-word representing the identity in $G$. Two marked groups $(G,\mathcal A)$ and $(G',\mathcal A')$ in $\Gn$ are at distance $e^{-d}$ from each other if they have exactly the same relations of length at most $d$, but there is a relation of length $d+1$ which holds in one marked group but not the other. \end{definition} For example, the marked groups $(\mathbb{Z}/n\mathbb{Z},\{1\})$ and $(\mathbb{Z},\{1\})$ have exactly the same relations of length at most $n-1$, so they are at distance $e^{-(n-1)}$; in particular $(\mathbb{Z}/n\mathbb{Z},\{1\})$ converges to $(\mathbb{Z},\{1\})$ as $n \to \infty$. The following result is implicit in \cite{CG}. \begin{proposition} Let $G$ be a finitely generated group and $\Xi$ a finitely presented group. Suppose that $\{ h_i \co G \to \Xi \}$ is a stable sequence, and $\{ g_1, \ldots , g_k \}$ is a generating set for $G$. Then the marked group \[ \left(G/\underrightarrow{\text{Ker}}\, \{ h_i \}, \left\{ g_1 \underrightarrow{\text{Ker}}\, \{ h_i \}, \ldots , g_k \underrightarrow{\text{Ker}}\, \{ h_i \} \right\} \right) , \] is a limit of marked groups $(G_i,\mathcal A_i)$ where each $G_i$ is a finitely generated subgroup of $\Xi$. Conversely, if the marked group $(G,\mathcal A)$ is a limit of finitely generated subgroups of $\Xi$, then $(G,\mathcal A)$ is an algebraic $\Xi$-limit group. \end{proposition} \begin{proof} Suppose that $\{ h_i \co G \to \Xi \}$ is a stable sequence, and $\{ g_1, \ldots , g_k \}$ is a generating set for $G$. Consider the marked group \[ \left(G/\underrightarrow{\text{Ker}}\, \{ h_i \}, \left\{ g_1 \underrightarrow{\text{Ker}}\, \{ h_i \}, \ldots , g_k \underrightarrow{\text{Ker}}\, \{ h_i \} \right\} \right) . \] For each $n$, let $H_n = \langle h_n (g_1), \ldots , h_n(g_k) \rangle \leq \Xi$. We consider the marked groups $\left( H_n , \left\{ h_n(g_1), \ldots , h_n(g_k) \right\} \right)$, and prove that they converge to $G/ \underrightarrow{\text{Ker}}\, \{ h_i \}$ with the above marking. Let $j \ge 1$ be arbitrary and let $W_j$ be the set of all words of length at most $j$ in the alphabet $\{ x_1^{\pm 1}, \ldots , x_k^{\pm 1} \}$. 
Following Convention \ref{Abuse}, we interpret $W_j$ as words in the various generating sets without changing notation. The set $W_j$ admits a decomposition into $T_j \cup N_j$, where $T_j$ are the words of length at most $j$ which are in $\underrightarrow{\text{Ker}}\, \{ h_i \}$ and $N_j$ are the remaining words of length at most $j$. Since $\{ h_i \}$ is a stable sequence, for each element $w \in T_j$, the element $h_n(w)$ is trivial for all but finitely many $n$, and for each $w \in N_j$ the element $h_n(w)$ is nontrivial for all but finitely many $n$. Thus, for all but finitely many $n$, the relations in $H_n$ of length at most $j$ are exactly the same as the relations of length at most $j$ in $G / \underrightarrow{\text{Ker}}\, \{ h_i \}$. Thus for all but finitely many $n$, the group $H_n$ with the given marking is at distance at most $e^{-j}$ from $G / \underrightarrow{\text{Ker}}\, \{ h_i \}$ with the given marking. This implies that the sequence $\{ H_n \}$ (with markings) converges to $G / \underrightarrow{\text{Ker}}\, \{ h_i \}$ (with marking). For the converse, suppose that $(G,\mathcal A)$ is a limit of a (convergent) sequence of marked finitely generated subgroups of $\Xi$. Denote these subgroups by $(H_i,\mathcal A_i)$. Note that $|\mathcal A_i|$ is fixed. Let $\mathcal A = \{ a_1 , \ldots , a_k \}$, let $\mathcal A_i = \{ b_{i,1}, \ldots , b_{i,k} \}$ and let $F$ be the free group on the set $\mathcal A$. Define homomorphisms $h_i \co F \to \Xi$ by $h_i(a_j) = b_{i,j}$. It is not difficult to see that $G \cong F / \underrightarrow{\text{Ker}}\, \{ h_i \}$. \end{proof} For a group $H$, let $T_\forall(H)$ be the {\em universal theory} of $H$ -- the set of all universal sentences which are true in $H$ (see \cite{CG} or \cite{Paulin2} for the definition; we are interested only in its consequences). The results in \cite{CG} now imply \begin{corollary} Let $\Xi$ be a finitely presented group and suppose that $L$ is an algebraic $\Xi$-limit group. 
Then $T_\forall(\Xi) \subseteq T_\forall (L)$. \end{corollary} \begin{corollary} Suppose that $\Gamma$ is a torsion-free toral CAT$(0)$ group with isolated flats and that $L$ is a $\Gamma$-limit group. Then \begin{enumerate} \item any finitely generated subgroup of $L$ is a $\Gamma$-limit group; \item $L$ is torsion-free; \item $L$ is commutative transitive; and \item $L$ is CSA. \end{enumerate} \end{corollary} Lemma \ref{SolAb} now implies the following: \begin{corollary} \label{AbSubgps} Let $\Gamma$ be a torsion-free toral CAT$(0)$ group with isolated flats and let $L$ be a $\Gamma$-limit group. Every solvable subgroup of $L$ is abelian. \end{corollary} \subsection{The Main Theorem and conclusions} Finally, we have: \begin{theorem} \label{SplittingTheorem} Suppose that $\Gamma$ is a torsion-free toral CAT$(0)$ group with isolated flats such that $\text{Out}(\Gamma)$ is infinite. Then $\Gamma$ admits a nontrivial splitting over a finitely generated free abelian group. \end{theorem} \begin{proof} Suppose that $\{ \phi_i \}$ is an infinite set of automorphisms of $\Gamma$ which belong to distinct conjugacy classes in $\text{Out}(\Gamma)$. Then the construction from Sections \ref{Limits} and \ref{TreeSection} allows us to find an isometric action of $\Gamma$ on an $\mathbb R$-tree $T$ without global fixed points. Pass to the subsequence $\{ f_i \}$ of $\{ \phi_i \}$ as in Lemma \ref{GromovTop}. Suppose first that $T$ is isometric to a real line. Then by Theorem \ref{LinfProps}.(\ref{Tline}) the group $f_i(\Gamma)$ is free abelian for all but finitely many $i$. But $f_i(\Gamma) = \Gamma$, so $\Gamma$ must be free abelian in this case. The theorem certainly holds for finitely generated free abelian groups. Therefore, we may suppose that $T$ is not isometric to a real line. In this case, since $K_\infty = \SK$ is trivial, the $\Gamma$-limit group $L_\infty$ is $\Gamma$ itself. 
Then by Theorem 9.5 of \cite{BF}, the group $\Gamma$ splits over a group of the form $E$-by-cyclic, where $E$ fixes a non-degenerate segment of $T$. The stabilisers in $\Gamma$ of non-degenerate segments are free abelian, by Theorem \ref{LinfProps}.(\ref{Stable}). Hence the group of the form $E$-by-cyclic is soluble, and hence free abelian by Lemma \ref{SolAb}. Note that free abelian subgroups of $\Gamma$ are finitely generated. This finishes the proof of the theorem. \end{proof} Suppose that $\Gamma$ is a CAT$(0)$ group and that $\text{Out}(\Gamma)$ is infinite. Swarup asked (see \cite{BestvinaQuestions}, Q2.1) whether $\Gamma$ necessarily admits a Dehn twist of infinite order. The above result shows that this is the case for the class of torsion-free toral CAT$(0)$ groups with isolated flats. Swarup also asked whether there is an analog of the theorem of Rips and Sela that $\text{Out}(\Gamma)$ is virtually generated by Dehn twists. In the subsequent work \cite{CWIFShort}, we will prove that if $\Gamma$ is a torsion-free toral CAT$(0)$ group with isolated flats then $\text{Out}(\Gamma)$ is virtually generated by generalised Dehn twists (which take into account the existence of noncyclic abelian groups). It seems that the techniques developed here will be of little help in answering Swarup's question in the case of a general CAT$(0)$ group. It is also clear from the above construction that if $L$ is a non-abelian freely indecomposable strict $\Gamma$-limit group then $L$ splits over an abelian group. However, there is no reason to conclude that the edge group in this splitting is finitely generated. 
It is straightforward to construct the canonical abelian JSJ decomposition of a strict $\Gamma$-limit group $L_\infty$, using acylindrical accessibility \cite{SelaAcyl}.\footnote{Note that strict $\Gamma$-limit groups need not be finitely presented, so the results of \cite{RipsSela} do not apply.} However, for example, we do not yet know that the edge groups in the abelian JSJ decomposition of $L_\infty$ are finitely generated. To prove that this is the case if $L_\infty$ is freely indecomposable and nonabelian involves the shortening argument of Sela, which we present for torsion-free toral CAT$(0)$ groups with isolated flats in future work. The shortening argument allows us to prove that torsion-free toral CAT$(0)$ groups with isolated flats are Hopfian, and to construct Makanin-Razborov diagrams for these groups, thus partially answering a question of Sela (see \cite[I.8.(i), (ii), (iii)]{SelaProblems}). This work will be undertaken in \cite{CWIFShort}. \end{document}
\begin{document} \title{Cavity cooling of an atomic array} \author{O.S. Mishina} \address{Theoretische Physik, Universit\"{a}t des Saarlandes, D-66041 Saarbr\"{u}cken, Germany} \ead{[email protected]} \begin{abstract} While cavity cooling of a single trapped emitter was demonstrated, cooling of many particles in an array of harmonic traps needs investigation and poses a question of scalability. This work investigates the cooling of a one dimensional atomic array to the ground state of motion via the interaction with the single mode field of a high-finesse cavity. The key factor ensuring the cooling is found to be the mechanical inhomogeneity of the traps. Furthermore it is shown that the pumped cavity mode not only mediates the cooling but also provides the necessary inhomogeneity if its periodicity differs from the one of the array. This configuration results in the ground state cooling of several tens of atoms within a few milliseconds, a timescale compatible with current experimental conditions. Moreover, the cooling rate scaling with the atom number reveals a drastic change of the dynamics with the size of the array: atoms are either cooled independently, or via collective modes. In the latter case the cavity-mediated atom-atom interaction destructively slows down the cooling and increases the mean occupation number quadratically with the atom number. Finally, an order of magnitude speed up of the cooling is predicted as an outcome of the optimization scheme based on the adjustment of the array versus the cavity mode periodicity. \end{abstract} \pacs{37.30.+i, 37.10.Jk, 37.10.De} \maketitle \section{Introduction} The possibility of trapping chains of atoms \cite{Gupta2007,Schleier-Smith2011,Brandt2010} and Wigner crystals \cite{Herskind2009} in an optical cavity provides a new platform to study quantum optomechanics \cite{Stamper-Kurn2012,Ritsch2013}. 
The advantage of this system compared to other optomechanical platforms (i.e., micro- and nanometer scale mechanical oscillators) is the access to internal atomic degrees of freedom that can be used to tune the coupling and to manipulate the mechanical modes. The cavity not only provides tailored photonic modes to interact with the atomic mechanical modes, it also alters the radiative properties of atoms, giving rise to cavity mediated atom-atom interactions and collective effects. The combination of these ingredients results in a high degree of control of the optomechanical interface which has allowed, for example, the experimental observation of cavity nonlinear dynamics at a single photon level \cite{Gupta2007} and ponderomotive squeezing of light \cite{Brooks2012}. Such a platform, in which the optomechanical system includes multiple mechanical oscillators in the quantum regime globally coupled to the cavity field, shall eventually allow multipartite entanglement of distant atom motion \cite{Peng2002,Li2006}, hybrid light-motion entanglement \cite{Peng2002,Cormick2013} and also engineering of spin-phonon coupling mediated by light when considering the atomic internal degrees of freedom. An important problem on the way to reach the quantum optomechanical regime is the cooling of the atomic mechanical modes to the ground state. Several techniques can be envisaged to prepare an atomic chain in the ground state of motion. One way is to prepare the atoms in the ground state of an optical lattice prior to coupling them to the cavity field. Sideband-resolved laser cooling can be used in this case \cite{Hamann1998}, but the implementation of Raman sideband cooling is restricted to atomic species with a suitable cycling transition. Another route, very powerful and experimentally convenient, is to use the cavity mode itself for cooling the atomic chain. 
It eliminates the need for additional preparation steps and allows reusing the same atoms multiple times. Moreover, it is not restricted to specific atomic species and can be potentially extended to the cooling of any polarizable object such as, for example, molecules \cite{Lev2008}. While the problem of cooling a single trapped particle in a cavity was explored theoretically \cite{Cirac1995a,Vuletic2001,Zippilli2005} and experimentally \cite{Leibrandt2009}, the simultaneous cooling of many particles forming an array poses the question of scalability. Cooling of an atomic array using a cavity mode was experimentally demonstrated in Ref. \cite{Schleier-Smith2011} where a single mode of the collective atomic motion was cooled close to the ground state. The cooling rate of this unique collective mode was found to be proportional to the number of atoms in the array. A similar scaling was reported in a theoretical work for the case when a homogeneous cloud is first organized by the cavity potential and then collectively cooled \cite{Elsasser2003}. A number of questions remain open on the protocol to cool down an array of atoms to the ground state inside a cavity. What is the role of the collective modes in the cooling dynamics of individual atoms? How do the cooling rates of individual atoms scale with the number of atoms in the array? What is the role of the lattice periodicity \emph{vs} the cavity mode period? What is the most efficient cooling scheme? This work provides the answers to these questions. It shows that (i) cooling of a single collective mode is faster than the cooling of individual atoms, which is destructively suppressed due to collective effects, (ii) the cooling time for individual atoms increases non-linearly with the atom number, and (iii) the periodicity of the array plays a key role in the dynamics which can be used to optimize the cooling performance. 
Additionally it considers the limitations imposed by spontaneous emission outside of the cavity mode and shows the experimental feasibility of the cavity cooling of tens of atoms in the array. In order to address these questions, a theoretical model is developed describing the general configuration in which the cavity potential and the atomic array have different periodicity as, for example, implemented in Ref. \cite{Schleier-Smith2011}. The key factor ensuring the ground state cooling of all atoms via global coupling to the single cavity mode is found to be the mechanical inhomogeneity of the traps. The cavity mode itself is demonstrated to provide the necessary inhomogeneity due to the effect of the cavity potential on the individual traps. This controllability makes the configuration of an atomic array coupled to the cavity with different periodicity an attractive platform for further investigation of a multimode quantum optomechanical interface. Additionally, the proposed cavity cooling scheme can be extended to the case of an array of micro- or nanometer scale mechanical oscillators, where strong optomechanical coupling was recently predicted \cite{Xuereb2012}. The paper is organized as follows. Section \ref{sec_model} summarizes the theoretical model and describes the physical mechanisms governing the cooling dynamics. In section \ref{sec_anal_res} we present the analytical results for the scaling of the cooling rates with the atom number. Section \ref{sec_num_res} compares numerical and analytical results for the cooling rates and the steady state mean phonon number per atom. The transition between two distinct regimes, when atoms interact independently or collectively with the cavity field, is reported. Also the destructive suppression of the cooling due to collective effects is demonstrated. 
In section \ref{sec_optimization} the role of the lattice periodicity \emph{vs} the cavity mode period is discussed and a possible way to speed up the cooling is suggested. Finally, the effect of the spontaneous emission on the scaling of the steady state phonon number is analysed in section \ref{sec_spont_em} together with the experimental feasibility of the proposed cooling scheme. The conclusions are drawn in section \ref{sec_conclusion}. \section{Summary of the model} \label{sec_model} The system under investigation consists of two elements: (i) a one dimensional array of $N$ independently trapped atoms coupled to (ii) a quantum light field with wave number $k_c$ confined inside an optical cavity pumped by a monomode laser as presented in figure \ref{fig_schema}. The chain of two-level atoms is formed along the axis of the cavity where the atoms are confined in a deep optical lattice potential generated by an additional external classical field \cite{Gupta2007,Schleier-Smith2011}. Hopping and tunnelling of atoms between different sites will be neglected. The trap array holding the neutral atoms may be experimentally implemented in various ways. In the works \cite{Gupta2007,Schleier-Smith2011} an extra pump field, resonant with a second resonance frequency of the cavity, was used to create a deep optical lattice. Alternatively, an optical lattice along the cavity can be created by two laser beams crossing each other at an angle inside the cavity or with the use of a spatial light modulator. Although the focus of this work is on the cooling of neutral atoms, it is worth noticing that the generalization of the model to the case of ions or other polarizable particles can be straightforwardly done. \begin{figure} \caption{The schematic representation of the system. 
$N$ individually trapped atoms are placed inside a cavity with resonant frequency $\omega_{c}$.} \label{fig_schema} \end{figure} The main mechanism behind the cavity cooling is the scattering process taking place when an atom absorbs a photon with pump frequency $\omega_p$ and then emits a photon back into the cavity with frequency $\omega_c$. If the pump frequency is lower than the cavity resonance frequency ($\Delta_c=\omega_p-\omega_c < 0$) and the difference is equal to the atomic trap frequency $\nu$, the atom will lose one vibration quantum, and the photon, eventually leaving the cavity, will carry this energy away. Such a cooling mechanism essentially relies on the interaction of atoms with the cavity field and assumes that the spontaneous emission into free space is negligibly small. This requires the cavity-to-free space scattering ratio to be much larger than one, which is reached when the single atom cooperativity (Purcell number) $c_r=\frac{g^2}{\kappa\gamma}$ is larger than one, regardless of the pump field detuning from the atomic transition $\Delta_a=\omega_p-\omega_{eg}$ \cite{Vuletic2001}. This is achieved when the light-atom coupling strength $g$ is larger than the geometric average of the atomic natural linewidth $\gamma$ and the cavity decay rate $\kappa$. We will focus on the regime in which the cavity field is far off-resonance from the atomic transition $|g\rangle \leftrightarrow |e\rangle$, such that the probability of an atomic excitation is negligibly small. Under the conditions $|\Delta_a|\gg\gamma,\kappa,g\sqrt{N_{ph}}$, where $N_{ph}$ is the mean photon number in the cavity, the atomic internal degree of freedom can be adiabatically eliminated. 
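As a rough numerical illustration of the cooperativity condition (the values below are hypothetical, chosen only to represent a strong-coupling setting, and are not taken from any particular experiment):

```latex
% Hypothetical values: g/2\pi = 1~\mathrm{MHz}, \kappa/2\pi = 0.1~\mathrm{MHz},
% \gamma/2\pi = 3~\mathrm{MHz}; the factors of 2\pi cancel in the ratio.
c_r \;=\; \frac{g^{2}}{\kappa\gamma}
    \;=\; \frac{(1~\mathrm{MHz})^{2}}{(0.1~\mathrm{MHz})\,(3~\mathrm{MHz})}
    \;\approx\; 3.3 \;>\; 1 ,
```

so for such parameters scattering into the cavity dominates scattering into free space, independently of the detuning $\Delta_a$.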
In this case the coherent part of the optomechanical interaction between the cavity and atomic motion is described by the effective Hamiltonian \cite{Domokos2003,Larson2008}: \begin{eqnarray} \label{Hammiltonian_spin_phot_phon} H=&-&\hbar \left(\Delta_c-U_0 \sum_{i=1}^{N}\cos^2(k_c x^{(0)}_i+k_c\hat{x}_i) \right)\hat{A}^\dag \hat{A} +\sum_{i=1}^{N}\left(\frac{m \nu^2\hat{x}^2_i}{2}+\frac{\hat{p}^2_i}{2m}\right) \nonumber \\ &+&i\hbar\left(\eta_p\hat{A}^\dag-\eta_p^*\hat{A}\right). \end{eqnarray} Here $\hat{A}^\dag$ and $\hat{A}$ stand for the creation and annihilation operators of the cavity field in the rotating frame at the pump frequency $\omega_p$, and $\eta_p$ is the cavity pumping strength. The motion of atoms with mass $m$ inside the traps with identical frequencies $\nu$ is described by the displacement operator $\hat{x}_i$ of the $i$-th atom from its trap center $x_i^{(0)}$. The single atom off-resonant coupling strength at the anti-node is $U_0=g^2/\Delta_a$. The first term in the Hamiltonian contains the optomechanical interaction between the cavity field and the atomic motion: $U_0 \sum_{i=1}^{N}\cos^2(k_c x^{(0)}_i+k_c\hat{x}_i)$ is the shift of the cavity frequency caused by the presence of the atoms and, conversely, the mechanical potential exerted on the atoms by a single cavity photon. From here on only the Lamb-Dicke regime will be considered, in which the atoms are localized on a length scale $\Delta x=\sqrt{\hbar/(2\nu m)}$ much smaller than the cavity wavelength $\lambda=2\pi/k_c$ ($\eta=k_c \Delta x$ is much smaller than one). Thus only the contributions up to the second order in the Lamb-Dicke parameter will be considered and the approximation ${\cos^2(k_c x^{(0)}_i+k_c\hat{x}_i)=\cos^2(k_c x^{(0)}_i)-\sin(2k_c x^{(0)}_i) k_c \hat{x}_i-\cos(2k_c x^{(0)}_i) (k_c \hat{x}_i)^2}$ will be used. 
The incoherent dynamics due to the cavity decay and the spontaneous emission (up to the second order in $1/\Delta_a$) is captured by the following Heisenberg-Langevin equations: \begin{eqnarray} \label{H-L_equations} \label{H-L_equations_sp_em} \dot{\hat{A}}&=&\frac{i}{\hbar}\left[H,\hat{A}\right] -(\kappa+\sum_{i=1}^ND_{ai}/2)\hat{A} +\sqrt{2\kappa}\hat{S}_{a} +i\sum_{i=1}^N\sqrt{D_{ai}}\hat{f}_{ai}, \nonumber \\ \dot{\hat{p}}_i&=&\frac{i}{\hbar}\left[H,\hat{p}_i\right] -2\Delta p\sqrt{D_{bi}}\hat{f}_{bi},\,\,\,\,\,\, \dot{\hat{x}}_i=\frac{i}{\hbar}\left[H,\hat{x}_i\right]. \end{eqnarray} where $\Delta p=\sqrt{\hbar\nu m/2}$. The above equations are derived in the appendix by taking into account the coupling of the atom-cavity system to the external electromagnetic environment and using the Markovian approximation to eliminate the external field modes from the equation \cite{Bienert2012} prior to the elimination of the atomic internal degree of freedom \cite{Vitali2008}. The noise operator $\hat{S}_a$ of the vacuum field entering the cavity through the mirror has zero mean value and its correlation functions are: \begin{eqnarray} \langle\hat{S}_{a}(t)\hat{S}^\dag_{a}(t')\rangle=\delta(t-t'), \\ \nonumber \langle\hat{S}^\dag_{a}(t)\hat{S}_{a}(t')\rangle=\langle\hat{S}_{a}(t)\hat{S}_{a}(t')\rangle=\langle\hat{S}^\dag_{a}(t)\hat{S}^\dag_{a}(t')\rangle=0. \end{eqnarray} The scattering of the cavity photons by the atoms into the outer modes gives rise to the Langevin forces $\hat{f}_{ai}$ and $\hat{f}_{bi}$, which correspond to the loss of cavity photons with rates $D_{ai}$ and to the diffusion of the atomic motion with rates $D_{bi}$, respectively. 
They have the following non-zero correlation functions: \begin{eqnarray} \label{eq_cor_fun_sp_em} D_{ai}=\gamma \frac{g^2}{\Delta_a ^2}\cos ^2(k_cx^{(0)}_i),\,\,\,\,\, D_{bi}=\gamma \frac{g^2}{\Delta_a ^2}\eta ^2 \alpha ^2K_i , \nonumber \\ \langle \hat{f}_{ai}(t) \hat{f}^\dag_{ai}(t') \rangle = \delta(t-t'), \nonumber \\ \langle \hat{f}_{bi}(t)\hat{f}_{bi}(t') \rangle = \delta(t-t'),\,\,\,\,\,\, \hat{f}^\dag_{bi}(t)= \hat{f}_{bi}(t), \nonumber \\ \langle \sqrt{K_i}\hat{f}_{bi}(t)\hat{f}_{ai}^\dag(t') \rangle = \langle \hat{f}_{ai}(t) \sqrt{K_i}\hat{f}_{bi}(t')\rangle=\sin(k_cx^{(0)}_i)\delta(t-t'). \end{eqnarray} Here $\alpha^2$ represents the mean cavity photon number in the zero order with respect to the Lamb-Dicke parameter $\eta$. The order-of-unity coefficient $K_i=\sin^2(k_cx^{(0)}_i)+C_{xi}\cos^2(k_cx^{(0)}_i)$ depends on the atomic position along the cavity axis and on $C_{xi}= \int_{-1}^1 d\cos(\theta) \cos^2(\theta)\mathcal{N}_i(\cos(\theta))$, which gives the angular dispersion of the atom momentum and accounts for the dipole emission pattern $\mathcal{N}_i(\cos(\theta))$ \cite{SteckDataWeb}. Equations (\ref{H-L_equations_sp_em}) and correlation functions (\ref{eq_cor_fun_sp_em}) are derived under the assumption that the inter-atomic distance $d$ is much larger than the cavity wavelength ($k_cd\gg1$), which allows one to consider atoms as independent scatterers. In the case of a single atom, equations (\ref{H-L_equations_sp_em}) correspond to the result reported in \cite{Vitali2008}, where the rates to raise and lower the vibration quanta also compensate each other up to the second order in $1/\Delta_a$ and only the diffusion effect remains. The difference from the result presented here is due to the different pumping geometry: there the atom is pumped from the side, whereas here the cavity is pumped through the mirror. 
The next assumption needed to solve equations (\ref{H-L_equations_sp_em}) is a large intracavity photon number with only small fluctuations around its steady state mean value: ${\langle\hat{A}^\dag\hat{A}\rangle\gg\langle\hat{a}^\dag\hat{a}\rangle}$, with ${\hat{a}=\hat{A}-\langle\hat{A}\rangle}$. The steady state mean values for the cavity field $\langle\hat{A}\rangle$, atom displacement $\langle\hat{x}_{i}\rangle$ and momentum $\langle\hat{p}_{i}\rangle$ are the solutions of the nonlinear algebraic equations constructed by taking the mean values on the left- and right-hand sides in equations (\ref{H-L_equations_sp_em}) and setting the derivatives to zero (assuming that the fluctuations are small): \begin{eqnarray} \label{equations_mean_steady_state} \langle\hat{A}\rangle=\frac{\eta_p}{\kappa_\mathrm{eff}-i\Delta_c'-iU_0 \sum_{i=1}^{N}\left(s_ik_c\langle\hat{x}_{i}\rangle+ c_ik^2_c\langle\hat{x}_{i}^2\rangle\right)}, \nonumber \\ k_c\langle\hat{x}_{i}\rangle= \frac{2U_0 \eta^2 |\langle\hat{A}\rangle|^2s_i} {\nu-4U_0 \eta^2 |\langle\hat{A}\rangle |^2c_i},\,\,\,\,\, \langle\hat{p}_{i}\rangle=0. \end{eqnarray} Here $\Delta'_c=\Delta_c-U_0\sum_{i=1}^N\cos^2(k_cx_i^{(0)})$, $\kappa_\mathrm{eff}=\kappa+\sum_{i=1}^ND_{ai}/2$, and $c_i=\cos(2k_c x^{(0)}_i)$ and $s_i=\sin(2k_c x^{(0)}_i)$. Without any loss of generality we assume $\langle\hat{A}\rangle$ to be real, which can be adjusted by choosing the phase of $\eta_p$. In the Lamb-Dicke regime the cavity mean field can be seen as a power series in the Lamb-Dicke parameter $\langle\hat{A}\rangle=\alpha+O(\eta)$ with the zero order term \begin{eqnarray} \label{mean_cavity_field} \alpha=\frac{\eta_p}{\kappa_\mathrm{eff}-i\Delta_c'}. 
\end{eqnarray} The evolution of small fluctuations around the steady-state mean values is well described by a linear system of equations. Substituting $\hat{A}=\langle\hat{A}\rangle+\hat{a}$, $\hat{x}_i=\langle\hat{x}_{i}\rangle+\tilde{\hat{x}}_i $ and $\hat{p}_i=\langle\hat{p}_{i}\rangle+\tilde{\hat{p}}_i $ into equations (\ref{H-L_equations_sp_em}) and neglecting the nonlinear terms together with other terms of the same order of magnitude leads to the following equations: \begin{eqnarray} \label{H-L_equations_linearised} \dot{\hat{a}}=\left(-\kappa_\mathrm{eff}+i\Delta_c'\right)\hat{a}+i\frac{U_0 \eta \alpha}{\Delta x}\sum_{i=1}^{N}s_i\tilde{\hat{x}}_i +\sqrt{2\kappa}\hat{S}_{a} +i\sum_{i=1}^N\sqrt{D_{ai}}\hat{f}_{ai}, \nonumber \\ \dot{\tilde{\hat{p}}}_i=-m\nu_i^2\tilde{\hat{x}}_i+2\Delta p \,U_0 \eta \alpha\, s_i(\hat{a}^\dag+\hat{a}) -2\Delta p\sqrt{D_{bi}}\hat{f}_{bi},\,\,\,\,\, \dot{\tilde{\hat{x}}}_i=\frac{\tilde{\hat{p}}_i}{m}, \end{eqnarray} where $\nu^2_i=\nu(\nu-4 U_0 (\eta \alpha)^2c_i)$ is the modified trap frequency. To be consistent with the Lamb-Dicke approximation and to ensure that $|k_c\langle\hat{x}_{i}\rangle|\ll1$ in equation (\ref{equations_mean_steady_state}), the following inequality should be fulfilled: \begin{equation} \label{ineq_keep_LDL} 6 U_0 (\eta \alpha)^2\ll\nu. \end{equation} Finally, the linear equations (\ref{H-L_equations_linearised}) allow one to reconstruct the effective Hamiltonian governing the dynamics of the cavity field fluctuations and the atomic motion in the traps: \begin{eqnarray} \label{Hammiltonian_effective} H_\mathrm{eff}=-\hbar \Delta'_c \hat{a}^\dag \hat{a} +\sum_{i=1}^{N}\left(\frac{m \nu_i^2 \hat{\tilde{x}}^2_i}{2}+\frac{\tilde{\hat{p}}^2_i}{2m}\right) -\hbar \frac{U_0\eta \alpha}{\Delta x} (\hat{a}^\dag+\hat{a}) \sum_{i=1}^{N}s_i\hat{\tilde{x}}_i.
\end{eqnarray} This expression shows the two main effects captured by our model: the optomechanical coupling responsible for the cooling mechanism (last term) and the modification of the trap frequencies as a mean-field effect of the cavity potential. The trap inhomogeneity is an essential ingredient for cooling the atoms to the ground state of motion. As only one collective mode of motion $\hat{X}\sim \sum_{i=1}^{N}s_i\tilde{\hat{x}}_i$ couples to the cavity, the cooling mechanism takes place exclusively by removing the excitations from this mode \cite{Schleier-Smith2011}. If the frequencies of all the traps are identical, for example when the inter-atomic distance in the array is a multiple of the cavity wavelength, this collective mode is also an eigenmode of the free atomic system. It is then decoupled from the remaining $N-1$ longitudinal modes of collective motion \cite{Genes2008}, and these modes stay excited. As the steady-state energy of each atom is determined by the weights of all the collective modes, individual atoms will only be partially cooled. Alternatively, if the trap frequencies are different, then the collective mode $\hat{X}$ is no longer an eigenmode of the free atomic subsystem and it couples to the other $N-1$ longitudinal modes of collective atomic motion. This allows a sympathetic cooling of all the collective modes, and thus of all the atoms. The same principle is the basis for the sideband cooling of a trapped ion in three dimensions with a single laser beam \cite{Eschner2003}. In that case the requirements for cooling in all three dimensions are different oscillation frequencies along each axis and a non-zero projection of the light wave vector onto each axis. One-dimensional cooling of many particles is thus analogous to the cooling of a single particle in multiple directions. Similarly, in the case of an atomic array the conditions are different trap frequencies and non-zero coupling of light to each atom.
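As an illustration, the coupled steady-state equations (\ref{equations_mean_steady_state}) can be solved by a simple fixed-point iteration. The sketch below does this for a few atoms; all parameter values are illustrative (rates in units of $\kappa$), the trap centres are hypothetical, and the second-order $\langle\hat{x}_i^2\rangle$ term is dropped as higher order in $\eta$, so this is a sketch of one possible numerical approach rather than the method used in the text.

```python
import numpy as np

kappa_eff = 1.0                       # all rates in units of kappa
Delta_c = -10.0                       # effective detuning Delta_c' (illustrative)
U0, eta, eta_p, nu = 0.05, 0.04, 50.0, 10.0
x0 = np.array([0.3, 0.7, 1.1])        # hypothetical trap centres k_c x_i^(0)
s, c = np.sin(2 * x0), np.cos(2 * x0)

kx = np.zeros_like(x0)                # k_c <x_i>, start from unshifted traps
for _ in range(200):                  # fixed-point iteration of the coupled equations
    A = eta_p / (kappa_eff - 1j * Delta_c - 1j * U0 * np.sum(s * kx))
    kx = 2 * U0 * eta * abs(A) ** 2 * s / (nu - 4 * U0 * eta * abs(A) ** 2 * c)

alpha = eta_p / (kappa_eff - 1j * Delta_c)   # zero-order field, eq. (mean_cavity_field)
print(abs(A), kx)
```

For weak coupling the converged field stays close to the zero-order value $\alpha$, and the displacements remain small, consistent with the Lamb-Dicke requirement $|k_c\langle\hat{x}_i\rangle|\ll1$.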
\section{Analytical results: cooling rates} \label{sec_anal_res} This section is devoted to the analysis of the atom-cavity evolution neglecting the effect of spontaneous emission ($\hat{f}_{ai}=\hat{f}_{bi}=D_{ai}=D_{bi}=0$) and taking into account only the cavity decay. Importantly, this simplification will not significantly affect the cooling rates in the far off-resonance regime, provided that the cavity decay is much faster than the spontaneous emission rate: \begin{equation} \label{ineq_cavity_decay_vs_sp_em} \kappa\gg \frac{\gamma g^2}{2\Delta_a^2}\sum_{i=1}^N\cos^2(k_cx^{(0)}_i), \end{equation} and $\kappa_{\mathrm{eff}}\approx \kappa$. In this regime the spontaneous emission mainly causes diffusion, a process in which the rates of adding and subtracting a motional quantum are identical, so their contributions to the final cooling rate cancel each other. In contrast, the steady-state phonon number of the atoms will increase due to the diffusion process, and section \ref{sec_spont_em} is devoted to this issue. Direct cooling of the collective mode $X$ and its exchange with the remaining collective modes may occur on different time scales, and the slowest of them corresponds to the cooling time scale of individual atoms. This section presents the analytical limits for the cooling rates of the different modes of motion and the scaling of the cooling dynamics with the atom number $N$. The cavity potential provides the trap inhomogeneity with a narrow distribution of the trap frequencies around $\nu$ in the Lamb-Dicke regime: ${\nu^2_i=\nu\left[\nu-4 U_0 (\eta \alpha)^2\cos(2k_cx^{(0)}_i)\right]}$. Distributing the atoms such that ${2k_cx^{(0)}_i=i\left(\frac{\pi}{N+1}+2n\pi\right)}$, $i=1,...N$, where $n$ is any integer, corresponds to the following ratio between the inter-atomic distance $d$ and the cavity wavelength $\lambda$: \begin{eqnarray} \label{equation_periodicity} \frac{d}{\lambda}=\frac{n}{2}+\frac{1}{4(N+1)}.
\end{eqnarray} This configuration simplifies the calculation and allows one to find analytical solutions for the cooling rates. Moreover, the results capture the general properties of the cooling, regardless of the atomic configuration. It is convenient to introduce the collective modes in the following way: first, the mode $X$ coupled to the cavity, and then the remaining modes $X_i$, $i=1,...N-1$, uncoupled from each other and coupled only to the first one. For the selected ratio $\frac{d}{\lambda}$ this can be done using the following transformation from the basis of individual atomic displacements: \begin{eqnarray} \label{equations_collective_mode transformation} X=\sqrt{\frac{2}{N+1}}\sum_{i=1}^N\sin(\frac{\pi \cdot i}{N+1})\tilde{\hat{x}}_i, \\ \nonumber X_i=\sqrt{\frac{2}{N}}\sum_{k=1}^{N-1}\sin(\frac{\pi \cdot i\cdot k}{N})\sqrt{\frac{2}{N+1}}\sum_{j=1}^N\sin(\frac{\pi \cdot j \cdot (k+1)}{N+1})\tilde{\hat{x}}_j. \end{eqnarray} Identical transformations relate the momenta of the collective modes $P$ and $P_i$ to the momenta of the individual atoms $\tilde{\hat{p}}_i$. Substituting this transformation into the effective Hamiltonian (\ref{Hammiltonian_effective}) and introducing the creation and annihilation operators $X=\sqrt{\frac{\hbar}{2m\nu}}(\hat{B}^\dag+\hat{B})$, $P=i\sqrt{\frac{\hbar m\nu}{2}}(\hat{B}^\dag-\hat{B})$, and $X_j=\sqrt{\frac{\hbar}{2m\omega_j}}(\hat{B}_j^\dag+\hat{B}_j)$, $P_j=i\sqrt{\frac{\hbar m\omega_j}{2}}(\hat{B}_j^\dag-\hat{B}_j)$ we get the Hamiltonian in the desired form: \begin{eqnarray} H_\mathrm{eff}=&-&\hbar\Delta'_c\hat{a}^\dag \hat{a} +\hbar\nu \hat{B}^\dag \hat{B} +\sum_{j=1}^{N-1}\hbar\omega_j \,\hat{B}_j^\dag \hat{B}_j \nonumber \\ &-&\hbar\frac{\epsilon}{2}\left(\hat{a}+\hat{a}^\dag \right)(\hat{B}+\hat{B}^\dag) -\hbar\sum_{j=1}^{N-1}\frac{\beta_j}{2}(\hat{B}+\hat{B}^\dag)(\hat{B}_j+\hat{B}_j^\dag).
\end{eqnarray} The coupling strengths $\epsilon$, between the cavity and the collective mode $X$, and $\beta_i$, between the collective modes $X$ and $X_i$, and the collective mode frequencies $\omega_i$ are: \begin{eqnarray} \label{eq_collective_couplings} \epsilon=U_0\alpha\eta\sqrt{2(N+1)}, \nonumber \\ \beta_j=2U_0(\alpha\eta)^2 \sqrt{\frac{2}{N}}\sqrt{\frac{\nu}{\omega_j}}\sin\left(\pi\frac{j}{N}\right), \\ \nonumber \omega^2_j=\nu\left[\nu-4U_0(\eta\alpha)^2\cos\left(\pi\frac{j}{N}\right)\right]. \end{eqnarray} Thus the cavity coupling to the $X$ mode increases with $N$, while the coupling between the $X_i$ and $X$ modes decreases with $N$. This opposite dependence leads to the emergence of separate time scales when the atom number is sufficiently large. In this case the dynamics consists of a fast subtraction of excitations from the $X$ mode via the exchange with the cavity followed by the cavity decay, and a slow exchange between the modes $X_i$ and $X$. To find separately the asymptotic expressions of the collective mode decay rates for the fast and slow processes for $N\gg1$, at first only the interaction between the cavity and the $X$ mode is considered ($\beta_i=0$). This single-mode case was considered in \cite{Schleier-Smith2011} and is analogous to the cavity cooling of a single trapped particle \cite{Vuletic2001,Zippilli2005} as well as of an eigenmode of a mechanical cantilever \cite{Marquardt2007}. It can be described by the rate equation for the mean phonon number $N_X=\langle B^\dag B\rangle$ in the form $\dot{N}_X=-\gamma_X\left(N_X-N_X(t\rightarrow\infty)\right)$ when the cavity mode is adiabatically eliminated \cite{Stenholm1986}. In this work the rate equation is derived by evaluating $\langle\dot{\hat{B}^\dag\hat{B}}\,\,\rangle=\langle\dot{\hat{B}}^\dag\hat{B}\rangle+\langle\hat{B}^\dag\dot{\hat{B}}\rangle$.
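The mode transformation above is a composition of two discrete sine transforms, and one can check numerically that the combined map is orthogonal, so it preserves the canonical structure of the displacement operators. A minimal sketch for a small, illustrative atom number:

```python
import numpy as np

N = 6  # illustrative atom number
# First transform: individual displacements -> sine modes m = 1..N; mode m = 1 is X
U = np.sqrt(2 / (N + 1)) * np.sin(
    np.pi * np.outer(np.arange(1, N + 1), np.arange(1, N + 1)) / (N + 1))
# Second transform: recombine modes m = 2..N into X_i, i = 1..N-1 (a DST of size N-1)
V = np.sqrt(2 / N) * np.sin(
    np.pi * np.outer(np.arange(1, N), np.arange(1, N)) / N)

T = np.vstack([U[0], V @ U[1:]])   # row 0 -> X, rows 1..N-1 -> X_i
err = np.max(np.abs(T @ T.T - np.eye(N)))   # deviation from orthogonality
print(err)
```

Both sine transforms are orthogonal with the stated normalizations, so `err` is at machine precision, confirming that $(X, X_1, \dots, X_{N-1})$ is a legitimate set of collective coordinates.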
The cooling rate and the steady-state phonon number are found to be: \begin{eqnarray} \label{eq_colective_cooling_rate} \gamma_X=\frac{\epsilon^2}{2\kappa}\left[S_-(\nu) -S_+(\nu)\right]=\kappa\, c_d^2\,(\eta\alpha)^2(N+1)\left[S_-(\nu) -S_+(\nu)\right], \\ \label{eq_colective_cooling_occupation_number} N_X(t\rightarrow\infty)=\frac{S_+(\nu)}{S_-(\nu)-S_+(\nu)}. \end{eqnarray} Here ${c_d=\frac{U_0}{\kappa}=\frac{g^2}{\Delta_a\kappa}}$ is the off-resonance single-atom cooperativity, the key parameter characterizing the cavity-atom interaction: it represents both the cavity frequency shift due to the interaction with one atom and the atomic resonance shift due to the interaction with the cavity, in units of the cavity linewidth. The spectral parameters ${S_\pm(\nu)=(1+(\Delta_c'\mp\nu)^2/\kappa^2)^{-1}}$ stand for the subtraction ($S_-$)/addition ($S_+$) of an energy quantum from/to the collective mode of motion and refer to the cooling and heating processes, respectively. This description is applicable in the weak interaction regime $\epsilon\ll\kappa$, which imposes an upper limit on the atom number in the array, $N\ll(c_d\eta\alpha)^{-2}$. Efficient cooling of the collective mode $X$ occurs at the cooling sideband $\Delta_c'=-\nu$ and in the resolved-sideband regime $\kappa\ll\nu$. In this case $S_-(\nu)=1$ and $S_+(\nu)\approx\frac{\kappa^2}{4\nu^2}$, and the contribution of the heating processes is negligible. The cooling rate is then $\gamma_X\approx\epsilon^2/(2\kappa)=\kappa(c_d\eta\alpha)^2(N+1)$ while the mean phonon number $N_X(t\rightarrow\infty)\approx S_+(\nu)$ is close to zero. Assuming this regime, we now consider the evolution of the remaining modes. If the exchange between the modes $X$ and $X_i$ occurs on a time scale much slower than $\gamma_X^{-1}$, mode $X$ serves as a decay channel for the remaining modes.
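The sideband formulas above can be verified directly. The following sketch evaluates $\gamma_X$ and $N_X$ from equations (\ref{eq_colective_cooling_rate}) and (\ref{eq_colective_cooling_occupation_number}) at the cooling sideband $\Delta_c'=-\nu$ for illustrative parameter values (rates in units of $\kappa$, with $U_0=c_d\kappa$ as defined in the text):

```python
import numpy as np

kappa, nu = 1.0, 10.0                 # resolved sideband: kappa << nu
c_d, eta_alpha, N = 0.05, 1.0, 20     # illustrative values

S = lambda d: 1.0 / (1.0 + d**2 / kappa**2)   # Lorentzian spectral weight vs detuning
Delta_c = -nu                                  # cooling sideband
S_minus, S_plus = S(Delta_c + nu), S(Delta_c - nu)

eps = np.sqrt(2 * (N + 1)) * c_d * kappa * eta_alpha  # epsilon = U0*alpha*eta*sqrt(2(N+1))
gamma_X = eps**2 / (2 * kappa) * (S_minus - S_plus)
N_X = S_plus / (S_minus - S_plus)
print(gamma_X, N_X)
```

At the sideband $S_-(\nu)=1$, and the residual occupation comes out as exactly $\kappa^2/(4\nu^2)$, while $\gamma_X$ matches the quoted approximation $\kappa(c_d\eta\alpha)^2(N+1)$ to within a fraction of a percent.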
We will look for the cooling rate of each $X_i$ mode independently, assuming that the effect of the presence of the modes $X_j$ ($j\neq i$) can be neglected for sufficiently large $N$. We note that a condition similar to the one providing the resolved-sideband regime is automatically fulfilled: the decay rate of mode $X$ is much smaller than the frequency of the $i$-th mode, $\gamma_X\ll\omega_i$. Also a condition similar to the cooling-sideband condition is fulfilled for each mode, $|\nu-\omega_i|\ll\gamma_X$, for a sufficiently large atom number $(N+1)\gg2/c_d$. The independent cooling rate for the $X_i$ mode in this resolved-sideband regime is well approximated by the following expression: \begin{eqnarray} \label{eq_colling_rates_analitical} \gamma_{X_i} = \frac{\beta_i^2}{\gamma_X}= \kappa \frac{8(\alpha\eta)^2}{(N+1)N}\sin^2\left(\frac{\pi\cdot i}{N}\right)\frac{\nu}{\omega_i}. \end{eqnarray} In the derivation of the rate equation for the mean phonon number in mode $X_i$, the counter-rotating terms $\hat{B}^\dag\hat{B}_i^\dag$ and $\hat{B}\hat{B}_i$ in the Hamiltonian were neglected (rotating wave approximation). Then, in the frame rotating with frequency $\nu$, the cavity mode $\hat{a}$ and the collective mode $\hat{B}$ were subsequently eliminated to get the Heisenberg-Langevin equation for the $\hat{B}_i$ mode alone. The rate equation for the mean phonon number in each mode was again derived by calculating the mean value $\langle\dot{\hat{B}^\dag_i\hat{B}_i}\,\,\rangle=\langle\dot{\hat{B}}^\dag_i\hat{B}_i\rangle+\langle\hat{B}^\dag_i\dot{\hat{B}}_i\rangle$. The rotating wave approximation allows one to reconstruct only the decay term, but not the steady-state mean phonon number, in the rate equation because the heating sideband is neglected. Apart from this drawback it allows one to find the cooling rate with good accuracy in the resolved-sideband regime.
Expression (\ref{eq_colling_rates_analitical}) shows a non-linear decrease of the independent cooling rates with increasing atom number. The smallest rate, which determines the cooling rate of individual atoms, is $\gamma_{X_1}\sim N^{-4}$ when the atom number is much larger than one. Here we recall that while changing the atom number we keep the modified cavity detuning fixed at the cooling sideband $\Delta_c'=-\nu$, which means that the pump frequency is adjusted for each atom number such that $\Delta_c=-\nu+U_0\sum_{i=1}^N\cos^2(k_cx^{(0)}_i)=-\nu+U_0(N-1)/2$. Also, the choice of the array periodicity (\ref{equation_periodicity}) makes the ratio $d/\lambda$ dependent on $N$. One should note that the cooling rates (\ref{eq_colling_rates_analitical}) in this collective cooling regime do not really depend on the interaction strength $U_0$, only weakly through $\omega_i$. This is at first surprising, but reasonable, since the cooling is a trade-off between two processes: the exchange among different collective modes and the decay of the collective mode coupled to the cavity. These two processes are governed by interactions of the same origin, with strength $U_0$. When the collective mode decay rate increases, the cooling slows down because fewer exchange events occur on the decay time scale; this is compensated by the simultaneous growth of the exchange rate. Thus the single-atom interaction strength cancels out in the resulting cooling rate. This fact will crucially change the influence of spontaneous emission on the cooling process in comparison with the single-atom case, as we discuss in section \ref{sec_spont_em}.
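The quoted $N^{-4}$ scaling follows from $\sin^2(\pi/N)\approx\pi^2/N^2$ combined with the $1/[(N+1)N]$ prefactor in expression (\ref{eq_colling_rates_analitical}). A quick numerical check with illustrative parameters (rates in units of $\kappa$):

```python
import numpy as np

kappa, nu, U0, eta_alpha = 1.0, 10.0, 0.05, 1.0  # illustrative values

def gamma_X1(N):
    # slowest collective rate, i = 1 in eq. (eq_colling_rates_analitical)
    omega1 = np.sqrt(nu * (nu - 4 * U0 * eta_alpha**2 * np.cos(np.pi / N)))
    return kappa * 8 * eta_alpha**2 / ((N + 1) * N) * np.sin(np.pi / N)**2 * nu / omega1

r = gamma_X1(100) / gamma_X1(200)
print(r)
```

Doubling the atom number reduces the slowest rate by a factor close to $2^4=16$, confirming the $N^{-4}$ behaviour.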
\section{Numerical results: cooling rates and mean phonon numbers} \label{sec_num_res} The exact evolution of $N$ atoms coupled to the cavity mode described by Hamiltonian (\ref{Hammiltonian_effective}) cannot be found analytically, so the cooling rates and mean phonon numbers are calculated numerically, still neglecting the effect of spontaneous emission ($\hat{f}_{ai}=\hat{f}_{bi}=D_{ai}=D_{bi}=0$). Performing the transformations $\tilde{\hat{x}}_i=\sqrt{\frac{\hbar}{2m\nu_i}}(\hat{b}^\dag_i+\hat{b}_i)$, $\tilde{\hat{p}}_i=i\sqrt{\frac{\hbar m\nu_i}{2}}(\hat{b}^\dag_i-\hat{b}_i)$, where $\hat{b}^\dag$ and $\hat{b}$ are the creation and annihilation operators of a vibrational excitation of individual atoms, the system of equations (\ref{H-L_equations_linearised}) can be rewritten in the matrix form \begin{equation} \dot{Y}=MY+S. \end{equation} Here we introduced the vectors of the system fluctuations and the noise operators: \begin{eqnarray} Y=(\hat{a},\hat{b}^{}_1,\hat{b}^{}_2,...\hat{b}^{}_{N},\hat{a}^\dag,\hat{b}_1^\dag,\hat{b}_2^\dag,...\hat{b}_{N}^\dag)^T, \\ \nonumber S=(\sqrt{2\kappa}\hat{S}_{a},0,0,...0,\sqrt{2\kappa}\hat{S}^\dag_{a},0,0,...0)^T. \end{eqnarray} The dynamical matrix $M$ is non-Hermitian and its non-zero elements are $M_{aa}=M^*_{a^\dag a^\dag}=-\kappa+i\Delta_c'$, $M_{b_ib_i}=M^*_{b^\dag_ib^\dag_i}=-i\nu_i$ and $M_{a b_i}=M_{a b_i^\dag}=M^*_{a^\dag b_i}=M^*_{a^\dag b_i^\dag}=M_{b_i a}=M_{b_i a^\dag}=M^*_{b^\dag_i a}=M^*_{b^\dag_i a^\dag}=iU_0\eta\alpha s_i$. The transformation diagonalizing this matrix results in new operators combining light and atomic variables. The decay rates of the population of these polaritonic modes, $\Gamma_i$, are given by the real parts of the eigenvalues $\mu_i$ of the matrix $M$. Since the steady-state energy of each atom is determined by the weighted energies of all the polaritonic modes, the smallest of the $\Gamma_i$ sets the decay rate of individual atoms.
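A minimal sketch of this construction for a small array follows; the parameter values are illustrative (rates in units of $\kappa$) and the atomic positions follow the periodicity (\ref{equation_periodicity}). Since the only diagonal entries with a non-zero real part are the two cavity entries, the trace fixes $\sum_i\Gamma_i=4\kappa$, which serves as a consistency check on the assembled matrix.

```python
import numpy as np

N = 3                                   # illustrative small array
kappa, Delta_c = 1.0, -10.0             # rates in units of kappa; Delta_c' = -nu
nu, U0, eta, alpha = 10.0, 0.05, 0.04, 30.0
x2 = np.pi * np.arange(1, N + 1) / (N + 1)   # 2 k_c x_i^(0), eq. (equation_periodicity)
s = np.sin(x2)
nu_i = np.sqrt(nu * (nu - 4 * U0 * (eta * alpha) ** 2 * np.cos(x2)))
g = 1j * U0 * eta * alpha * s           # couplings i U0 eta alpha s_i

M = np.zeros((2 * (N + 1), 2 * (N + 1)), dtype=complex)
M[0, 0] = -kappa + 1j * Delta_c                     # cavity mode a
M[N + 1, N + 1] = -kappa - 1j * Delta_c             # and its dagger
for i in range(1, N + 1):
    M[i, i] = -1j * nu_i[i - 1]                     # phonon mode b_i
    M[N + 1 + i, N + 1 + i] = 1j * nu_i[i - 1]      # and its dagger
    M[0, i] = M[0, N + 1 + i] = M[i, 0] = M[i, N + 1] = g[i - 1]
    M[N + 1, i] = M[N + 1, N + 1 + i] = M[N + 1 + i, 0] = M[N + 1 + i, N + 1] = np.conj(g[i - 1])

Gamma = -2 * np.real(np.linalg.eigvals(M))          # polaritonic decay rates
print(np.sort(Gamma))
```

In this weak-coupling example the largest rate stays close to $2\kappa$ (the cavity-like polariton), and the remaining small rates are the mechanical decay rates discussed in the text.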
\begin{figure} \caption{Scaling with the atom number. For each $N$ there are $N+1$ points representing polaritonic decay rates $\Gamma_i=-2Re[\mu_i]$ (a) and $N$ points representing the phonon numbers per atom (b). Dashed and solid lines correspond to the analytical results: collective cooling rates $\gamma_X$ (\ref{eq_colective_cooling_rate}) and $\gamma_{X_i}$ (\ref{eq_colling_rates_analitical}).} \label{fig_scaling_all_rates} \end{figure} The decay rates of the polaritonic modes $\Gamma_i=-2Re[\mu_i]$, $i=1,...N+1$, are plotted in figure \ref{fig_scaling_all_rates}.a for different atom numbers in the array. The pump frequency was adjusted to keep the cooling sideband condition $\Delta_c'=-\nu$, and the atomic periodicity \emph{vs} the cavity wavelength $d/\lambda$ was also modified according to (\ref{equation_periodicity}). Other parameters are selected such that $\epsilon\ll\kappa$ and the cavity decay happens much faster than the phonon decay. In this case we clearly see the dominating decay rate $\Gamma_1\approx2\kappa$ corresponding to the polaritonic mode consisting mainly of the cavity mode. Consequently the remaining polaritonic modes consist mostly of atomic modes. For sufficiently large $N$ the decay rates are well approximated by the analytical expressions for the collective mode decay rates $\gamma_X$ (blue, dashed-dotted line) and $\gamma_{X_i}$ (red, dashed line for $\gamma_{X_1}$ in figure \ref{fig_scaling_all_rates}.a), thus these modes are close to the collective modes introduced in the previous section. When the atom number is small, the analytical results (\ref{eq_colective_cooling_rate},\ref{eq_colling_rates_analitical}) are no longer valid because the collective mode $X$ cannot be treated independently from the remaining modes $X_i$.
It turns out that for $N\ll 2/c_d$, the polaritonic decay rates $\Gamma_i$ for $i=2,...N+1$ are well approximated by the independent decay rates of each atom (green, solid line for $\gamma_{x_1}$), found by putting $s_j=0$ for $j\neq i$: \begin{eqnarray} \label{eq_independent_cooling_rate} \gamma_{x_i}=\frac{\epsilon_i^2}{2\kappa}\left[S_-(\nu_i) -S_+(\nu_i)\right] =2\kappa\,c_d^2(\alpha\eta_i)^2\left[S_-(\nu_i) -S_+(\nu_i)\right]. \end{eqnarray} Here we introduce an effective coupling strength $\epsilon_i=2U_0\eta_i\alpha s_i$ between the cavity mode and the motion of the $i$-th atom and a Lamb-Dicke parameter for each atom, $\eta_i=k_c\sqrt{\hbar/(2m\nu_i)}$. From this we conclude that the atoms do not feel each other's presence and are cooled down independently. This is because the difference between the trap frequencies is larger than the mechanical damping rate $\gamma_{x_i}$ of each atom, so there is no interference between the cooling of different atoms. On the contrary, for a large atom number the frequencies $\nu_i\approx\nu-2 U_0 (\eta \alpha)^2\cos(i\pi/(N+1))$ are close to each other, and when their difference becomes smaller than $\gamma_{x_i}$ the light-mediated interaction between the traps slows down the cooling. A similar interference effect was previously found for two mechanical modes of a micromirror in an optical cavity \cite{Genes2008}. In figure \ref{fig_scaling_all_rates}.a the transition point between the two regimes in the atomic array, where one collective decay rate splits from the others, is clearly seen. Its position depends on the array geometry and is captured by $c_dN=\mathrm{const}$, where $\mathrm{const}=2$ in the present configuration. The steady-state mean occupation number of each atom, presented in figure \ref{fig_scaling_all_rates}.b, is practically independent of the total number of atoms if the spontaneous emission is neglected.
It is approximately the same for all atoms and is close to the lowest value achievable for single-atom resolved-sideband cooling, $\kappa^2/(4\nu^2)$ (0.0025 for the selected parameters), when the diffusion due to spontaneous emission is negligible \cite{Vuletic2001}. This is because the shifts of the trap frequencies are much smaller than the cavity bandwidth and the cooling sideband conditions are still fulfilled for all the atoms. Comparison of the numerical and analytical results allows us to associate the collective modes of the atomic motion presented in the previous section with the normal polaritonic modes of the full system. It also reveals the transition between the two regimes in which atoms are cooled independently or collectively. Comparing the smallest collective decay rate $\gamma_{X_1}$ with the smallest independent decay rate $\gamma_{x_1}$ for $N\gg1$, we see a suppression by a factor $\gamma_{x_1}/\gamma_{X_1}=(c_dN/2)^2$. Thus, while the collective effects are favourable for the cooling of one mode, shortening its cooling time linearly with $N$ \cite{Schleier-Smith2011}, they destructively suppress the cooling of individual atoms and prolong their cooling time quadratically with $N$. \section{Optimal array periodicity \emph{vs} the cavity wavelength} \label{sec_optimization} So far we analysed the configuration in which the ratio between the lattice constant and the cavity wavelength was set by expression (\ref{equation_periodicity}), which corresponds to the spread of the trap frequencies over the whole available interval $\cos(i\pi/(N+1))\in(-1,1)$. This provides the largest frequency difference between the traps and presumably the fastest exchange between the collective modes. However, in this case atoms at the edges of the chain are weakly coupled to the cavity due to the factor $\sin(i\pi/(N+1))$, which slows down the cooling.
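The suppression factor $(c_dN/2)^2$ quoted above can be reproduced directly from the two rate formulas in the resolved-sideband limit ($S_-\approx1$, $S_+$ neglected, $\nu/\omega_1\approx1$); the parameter values below are illustrative:

```python
import numpy as np

kappa, c_d, eta_alpha = 1.0, 0.05, 1.0   # rates in units of kappa; illustrative
N = 200
s1 = np.sin(np.pi / (N + 1))             # coupling of the edge atom, s_1
# independent rate, eq. (eq_independent_cooling_rate), with eta_i ~ eta
gamma_x1 = 2 * kappa * c_d**2 * eta_alpha**2 * s1**2
# slowest collective rate, eq. (eq_colling_rates_analitical), i = 1
gamma_X1 = kappa * 8 * eta_alpha**2 / ((N + 1) * N) * np.sin(np.pi / N)**2
ratio = gamma_x1 / gamma_X1
print(ratio, (c_d * N / 2)**2)
```

For $N\gg1$ the ratio approaches $(c_dN/2)^2$, the quadratic slow-down of individual-atom cooling discussed in the text.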
This section shows the existence of an optimal configuration of atoms in the cavity which maximizes the cooling rate due to a trade-off between the frequency separations and the coupling to the cavity. \begin{figure} \caption{Optimization of the cooling procedure. (a): coupling strengths $s_i=\sin(2k_cx_i^{(0)})$ and frequency shifts $c_i=\cos(2k_cx_i^{(0)})$ for nine atoms and optimization parameter $l=0,2,5$ with $L=10$. (b): minimal cooling rate $\mathrm{Min\{\Gamma_i\}}$ as a function of $l$ for $N=20,40,60$.} \label{fig_optimization} \end{figure} Considering the frequency spread to be symmetric around $\nu$, the periodicity ratio $d/\lambda$ and the array location along the cavity axis will be varied to decrease the interval over which the trap frequencies are spread. This automatically increases the minimal coupling to the cavity. Such a change can be parametrized as follows: \begin{eqnarray} 2k_cx^{(0)}_i=\frac{l}{L}\cdot\frac{\pi}{2}+ i\left(\frac{L-l}{L}\cdot\frac{\pi}{N+1}+2n\pi\right),\,\,\,\,\, l=0,...L-1; \\ \nonumber \frac{d}{\lambda}=\frac{n}{2}+\frac{1}{4(N+1)}\cdot\frac{L-l}{L}. \end{eqnarray} Here $L$ is the number of steps in the search for the optimal configuration. By changing the value of the optimization parameter $l$ from $0$ to $L-1$ we go from the largest to the smallest frequency spread. As an example, figure \ref{fig_optimization}.a shows the distribution of the frequencies $c_i=\cos(2k_c x_i^{(0)})$ and couplings $s_i=\sin(2k_c x_i^{(0)})$ for nine atoms and the optimization parameter $l=0,2,5$ with $L=10$. Figure \ref{fig_optimization}.b shows the change of the minimal cooling rate $\mathrm{Min\{\Gamma_i\}}$ with $l$ for three different atom numbers $N=20,40,60$. We find one order of magnitude improvement of the cooling rate as a result of the suggested optimization scheme. It is important to mention the role of the array location along the cavity axis. Displacement of the array away from the optimal location is equivalent to a rotation of the selected segment shown in figure \ref{fig_optimization}.a around the origin.
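The parametrization above can be checked numerically: as $l$ grows, the frequency spread shrinks and the weakest cavity coupling $\min_i|s_i|$ grows. A sketch for the nine-atom example:

```python
import numpy as np

N, L, n = 9, 10, 0   # nine atoms, L optimization steps, n = 0 for compactness

def couplings(l):
    # positions 2 k_c x_i^(0) from the optimization parametrization
    i = np.arange(1, N + 1)
    two_kx = l / L * np.pi / 2 + i * ((L - l) / L * np.pi / (N + 1) + 2 * n * np.pi)
    return np.sin(two_kx), np.cos(two_kx)

min_s = [np.min(np.abs(couplings(l)[0])) for l in (0, 2, 5)]
print(min_s)
```

The weakest coupling grows monotonically through $l=0,2,5$, starting from $\sin(\pi/10)\approx0.31$ for the unoptimized array, which is the mechanism behind the order-of-magnitude gain in the minimal cooling rate.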
This would lead to a reduction of the cooling rate for some atoms due to the decrease of their coupling to the cavity. Additionally, if some $c_i$ become identical, some collective modes of motion will decouple and consequently the steady-state phonon number per atom will increase. It is experimentally convenient that the trap frequency inhomogeneity is provided by the cavity potential itself, because no extra arrangements are needed to lift the trap degeneracy. Additionally, the key role of the array \emph{vs} cavity field periodicity may be used to speed up the cooling by a factor ${\sim N^2}$ by only displacing and stretching the array along the cavity. Alternatively, additional external potentials can be considered to introduce an arbitrary trap inhomogeneity; however, this is beyond the scope of this paper. \section{Effect of spontaneous emission} \label{sec_spont_em} Up to now only the exchange between the atoms and the cavity mode was considered, and the spontaneous emission of the cavity photons by the atoms into free space was neglected. Spontaneous emission by a single atom causes diffusion \cite{Vuletic2001,Bienert2012,Zippilli2005a} and, thus, heating. This leads to a higher steady-state phonon number than predicted by the model neglecting the spontaneous emission. It will now be taken into account by considering the additional Langevin sources and decay terms in equations (\ref{H-L_equations_linearised}) which were omitted in the previous sections. In general, the many-atom case is different from the single-atom configuration. Nevertheless, we can already anticipate that in the individual cooling regime, when the atom number is sufficiently small, the many- and single-atom cases will be similar, and here this will be proven analytically. More importantly, in this section we will also treat in detail the effect of the spontaneous emission in the collective cooling regime.
We will see that the destructive suppression of the cooling rates discussed in the previous sections leads to yet another problem when accounting for the spontaneous emission: as the cooling slows down, the diffusion due to the spontaneous scattering into free space accumulates over a longer time. This increases the steady-state phonon number in the traps, setting an additional limitation on the proposed cooling scheme. This section presents both numerical and analytical studies of the effect, including results derived for the first time for the considered configuration: many atoms in a pumped cavity. The results will be used to find guidelines on how to set the parameters to avoid undesirable heating and to make the proposed cooling scheme experimentally feasible. In the regime of independent cooling, i.e. when the atom number is sufficiently small, we shall compare the exact solution with the analytical result for a single atom. The rate equation for the mean occupation number in the $i$-th trap is derived by setting $\epsilon_j$ ($j\neq i$) to zero and adiabatically eliminating the cavity mode, assuming that $\kappa_\mathrm{eff}\gg\epsilon_i$. The cooling rate remains the same (\ref{eq_independent_cooling_rate}) and the steady-state phonon number is found to be: \begin{equation} \label{eq_phonon_number_sp_em} n_{i}(t\rightarrow\infty)= \frac{S_+(\nu_i)}{S_-(\nu_i)-S_+(\nu_i)}\left(1 +\frac{1}{2c_r}\frac{K_i}{s^2_i}\frac{1}{S_+(\nu_i)}\right). \end{equation} In the expression for $S_\pm(\nu_i)$ the cavity decay rate $\kappa$ should be replaced by the modified rate $\kappa_\mathrm{eff}$, although under condition (\ref{ineq_cavity_decay_vs_sp_em}) the dominating effects of the spontaneous emission are captured with $\kappa_\mathrm{eff}\approx\kappa$, as will be assumed in the following. This expression clearly demonstrates the necessity of a large cooperativity $c_r$ to reach ground-state cooling.
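At the cooling sideband, expression (\ref{eq_phonon_number_sp_em}) reduces exactly to the resolved-sideband form quoted later, ${n_{i}\approx \kappa^2/(4\nu_i^2) +\frac{1}{2c_r}\frac{K_i}{s_i^2}(1+\kappa^2/(4\nu_i^2))}$, since $S_-(\nu_i)=1$ there. A quick check with illustrative values of the cooperativity, $K_i$ and $s_i^2$ (rates in units of $\kappa$):

```python
import numpy as np

kappa, nu = 1.0, 10.0
c_r, K, s2 = 10.0, 1.0, 0.5          # illustrative cooperativity, K_i and s_i^2

S = lambda d: 1.0 / (1.0 + d**2 / kappa**2)
S_minus, S_plus = S(0.0), S(-2.0 * nu)   # Delta_c' = -nu: S_-(nu)=1, S_+(nu)=S(-2nu)

n_exact = S_plus / (S_minus - S_plus) * (1 + K / (2 * c_r * s2 * S_plus))
n_approx = kappa**2 / (4 * nu**2) + K / (2 * c_r * s2) * (1 + kappa**2 / (4 * nu**2))
print(n_exact, n_approx)
```

The two expressions agree identically at the sideband, and the second (cooperativity) term dominates the residual occupation unless $c_r\gg K_i/s_i^2$.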
It is in agreement with the results reported in \cite{Vuletic2001,Zippilli2005a}, with the only difference being a numerical factor of order unity accounting for the different pumping configuration. This result also coincides (up to second order in $1/\Delta_a$) with the result reported in \cite{Bienert2012}, where the cavity pump configuration was also considered. It is interesting to note that the initial assumption on the intra-cavity mean photon number made in this work ($|\alpha|^2\gg\langle\hat{a}^\dag\hat{a}\rangle$) is essentially different from that of \cite{Bienert2012} ($|\alpha|^2\ll1$). The exact agreement between the results underlines that the limit of a small intra-cavity photon number and the limit of small fluctuations around a large intra-cavity photon number are two related approximations in the far off-resonance regime. For the case of $N$ atoms inside a cavity the problem is now solved numerically and the results are compared with the single-atom case in figure \ref{fig_occupation number_decay_rates_sp_em}. The scaling of the steady-state phonon number with $N$ is presented for two different atomic configurations: the optimized configuration (figure \ref{fig_occupation number_decay_rates_sp_em}.a) and the one considered in figure \ref{fig_scaling_all_rates} (figure \ref{fig_occupation number_decay_rates_sp_em}.b). In agreement with the cooling rate scaling presented in figure \ref{fig_scaling_all_rates}, the steady-state phonon number scaling confirms that up to a certain atom number the atoms cool down independently according to (\ref{eq_phonon_number_sp_em}). Above this atom number the cooling slows down, which causes the increase of the phonon number as more spontaneous emission events occur during a longer cooling time.
Thus the transition from the individual to the collective cooling, accompanied by the suppression of the cooling rate and the increase of the mean phonon number quadratically with the atom number, is present in both configurations. For the selected parameters $c_d=0.05$ and $c_r=10$, up to $20$ atoms can be cooled close to the ground state with a phonon number of less than $0.1$. \begin{figure} \caption{Steady state occupation numbers per atom vs the atom number N. (a): optimal configuration with $l=5$, $L=10$, (b): configuration corresponding to figure \ref{fig_scaling_all_rates}.} \label{fig_occupation number_decay_rates_sp_em} \end{figure} In the resolved-sideband limit $\kappa\ll\nu$, expression (\ref{eq_phonon_number_sp_em}) simplifies to ${n_{i}\approx \frac{\kappa^2}{4\nu_i^2} +\frac{1}{2c_r}\frac{K_i}{s_i^2}\left(1+\frac{\kappa^2}{4\nu_i^2}\right)}$, and it is possible to estimate the steady-state phonon number in the regime of collective cooling by taking into account the ratio between the individual (\ref{eq_independent_cooling_rate}) and collective (\ref{eq_colling_rates_analitical}) cooling rates, $\gamma_{x_1}/\gamma_{X_1}=(c_dN/2)^2$: \begin{equation} \label{eq_phonon_number_col_sp_em} n_{i}(N\gg 2/c_d)= \frac{\kappa^2}{4\nu_i^2} +\frac{(c_dN/2)^2}{2c_r}\frac{K_i}{s_i^2}\left(1+\frac{\kappa^2}{4\nu_i^2}\right). \end{equation} As can be seen in figure \ref{fig_occupation number_decay_rates_sp_em}.b, this expression reproduces the exact result for atom numbers $N\gg 2/c_d$. To suppress the spontaneous emission effect (the second term), the single-atom cooperativity should obey the inequality: \begin{equation} \label{ineq_cooperativity_sp_em} c_r\gg c_d^2 N^2/(8 s_i^2). \end{equation} This is fundamentally different from the condition in the case of a single atom, $c_r \gg1$, where $c_d$ does not enter and consequently the detuning does not play a role.
This is because the cooling rate (\ref{eq_colling_rates_analitical}) no longer depends on $c_d$, and thus on the detuning, while the spontaneous emission rate does. As we see from (\ref{ineq_cooperativity_sp_em}), in the case of collective cooling the cooperativity $c_r$ is required to be larger than in the single-atom case, i.e. the positive effect of the cavity is corrupted by the destructive interference in the cooling dynamics. But at the same time the detuning becomes a knob to reduce the diffusion caused by the spontaneous emission. Inequality (\ref{ineq_cooperativity_sp_em}) is equivalent to ${\kappa\gg\gamma \frac{g^2}{\Delta_a^2}N^2/(8 s_i^2)}$. The optimization decreases the phonon number for the hottest atom ($s_i\approx \pi/N$) and improves the scaling by a factor ${\sim N^2}$. In this case the condition sufficient to suppress the effect of spontaneous emission is found to be: \begin{equation} \label{ineq_supression_sp_em} {\kappa\gg\gamma \frac{g^2}{\Delta_a^2}N^2}. \end{equation} This inequality should be compared to condition (\ref{ineq_cavity_decay_vs_sp_em}), ${\kappa\gg\gamma \frac{g^2}{\Delta_a^2}N}$, ensuring that the spontaneous emission rate is much lower than the cavity decay rate, which was assumed throughout the derivations. Condition (\ref{ineq_cavity_decay_vs_sp_em}) was also considered sufficient for neglecting the spontaneous emission effect in a configuration different from the one presented here, i.e. a homogeneous cold atomic cloud instead of the array \cite{Gangl1999,Horak2000}. As we can see, an additional factor of $N$ makes condition (\ref{ineq_supression_sp_em}) stricter than (\ref{ineq_cavity_decay_vs_sp_em}). This is a special feature of the collective cooling regime, in which the destructive interference suppresses the cooling effect. Let us now estimate the experimental accessibility of the proposed cooling scheme for a chain of $^{87}$Rb atoms using the limitation (\ref{ineq_supression_sp_em}) as a guideline.
Given the recoil frequency $\omega_R=2\pi\cdot3.9$ kHz and demanding a Lamb-Dicke parameter of $\eta=0.04$, the trap frequencies shall be set to $\nu=2\pi\cdot 2.4$ MHz. The resolved side-band condition requires the cavity bandwidth to be at most $\kappa=2\pi\cdot240$ kHz. From figure \ref{fig_optimization}.a the cooling rate for the array of 20 atoms is $10^{-4}\kappa$, which gives a cooling time of about $6.6$ ms. This is a realistic time, comparable with the stability of the optical trap which will form the array, and it is close to the single-atom cooling time experimentally achieved via Raman side-band cooling \cite{Reiserer2013}. This rate can be achieved with the single atom-cavity coupling strength $g=2\pi\cdot 3.8$ MHz, leading to the cooperativity $c_r=10$, and the detuning from the atomic resonance $\Delta_a=2\pi\cdot 1.2$ GHz. The diffusion due to the spontaneous emission sets the limit on the number of atoms which can be cooled to the ground state. The upper bound of this limit can be estimated from condition (\ref{ineq_supression_sp_em}), ${N^2\ll\kappa\left(\gamma\frac{g^2}{\Delta_a^2}\right)^{-1}}$, and it is about ten for the selected parameters. To push this limit while keeping the cooling rate constant, one could go further away from the atomic transition and simultaneously increase the coupling strength as $g\sim \sqrt{\Delta_a}$. The cavity cooling protocol for an atomic array proposed in this work is shown to be limited by the presence of spontaneous emission. The Heisenberg-Langevin equations, derived for the first time for the considered configuration, were used to quantify these limitations and to show that the proposed scheme is experimentally feasible. Moreover, the predicted cooling time for an array of tens of atoms at reachable experimental parameters is comparable with the best achieved to date for a single atom \cite{Reiserer2013}.
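These estimates can be reproduced in a few lines. The following is a minimal sketch; the Rb D2 linewidth $\gamma=2\pi\cdot 6.07$ MHz and the geometric factor $K_i/s_i^2\sim 1$ are our assumptions, not values stated in the text:

```python
import math

omega_R = 3.9e3        # recoil frequency / 2*pi, in Hz
eta = 0.04             # Lamb-Dicke parameter
nu = omega_R / eta**2  # trap frequency / 2*pi, from eta = sqrt(omega_R / nu)
kappa = nu / 10        # cavity bandwidth / 2*pi (resolved side-band regime)

# Cooling time for 20 atoms, with the rate 1e-4 * kappa read off the figure
t_cool = 1 / (1e-4 * 2 * math.pi * kappa)   # seconds

g = 3.8e6              # atom-cavity coupling / 2*pi, in Hz
gamma = 6.07e6         # Rb D2 linewidth / 2*pi, in Hz (assumed standard value)
c_r = g**2 / (kappa * gamma)                # cooperativity

# Collective-regime steady-state phonon estimate, assuming K_i/s_i^2 ~ 1
c_d, N = 0.05, 20
sideband = (kappa / (2 * nu))**2
n = sideband + (c_d * N / 2)**2 / (2 * c_r) * (1 + sideband)

print(f"nu = 2pi x {nu/1e6:.2f} MHz, kappa = 2pi x {kappa/1e3:.0f} kHz")
print(f"cooling time = {t_cool*1e3:.1f} ms, c_r = {c_r:.1f}, n = {n:.3f}")
```

With these inputs the sketch recovers a trap frequency of roughly $2\pi\cdot 2.4$ MHz, a cooling time of about $6.5$ ms, a cooperativity near $10$, and a phonon number well below $0.1$, consistent with the estimates above.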
\section{Conclusion} \label{sec_conclusion} Cooling of an atomic array via coupling to a single mode cavity is accessible when an inhomogeneity of the atomic trap frequencies is present. This work shows that an intra-cavity field with a sufficiently large photon number is able to provide this inhomogeneity while simultaneously mediating the cooling of the atoms to the ground state of the individual wells. The cooling dynamics changes drastically with the size of the array, from (i) the regime in which the atoms are cooled independently of each other to (ii) the regime in which the cooling happens via collective modes, which increases the cooling time and the steady state mean phonon number by a factor $\sim(c_dN)^2$. The main reason for the suppression of the cooling at large atom numbers is the destructive interference occurring because the separations between the trap frequencies become comparable with the mechanical damping rate (an analog of the linewidth). This destructive suppression of the cooling is a signature of an enhancement of the cavity mediated atom-atom interaction. Consequently the detrimental spontaneous emission effect increases with the atom number, and a larger single atom cooperativity $c_r\gg (c_dN)^2$ is necessary to suppress it. Due to the periodic nature of the inhomogeneity induced by the cavity field, the periodicity of the array \emph{vs} the cavity mode plays a crucial role in the cooling dynamics. It allows an optimization of the cooling by adjusting the lattice constant and the array position along the cavity axis, offering one order of magnitude gain in the cooling speed. Cooling of a few tens of atoms to the ground state of motion within a few milliseconds is experimentally feasible with the suggested scheme.
This demonstrates the controllability of the array motion with a single mode cavity and sets the basis for further exploration of the quantum optomechanical interface and, possibly, the generation of novel non-classical states of collective atomic motion. Moreover, our cooling scheme can also be extended to the case of an array of micro- or nanometer scale mechanical oscillators, which makes it a useful tool for different systems. \ack I would like to thank Giovanna Morigi for the wise guidance along the project and comments on the manuscript, Marc Bienert for the discussions, critical reading and comments on the manuscript, Cecilia Cormick, Endre Kajari, and Monika Schleier-Smith for the fruitful discussions of the work in progress, Thomas Fogarty for reading the manuscript and for the linguistic advice, and Pavel Bushev and Lars Madsen for the useful comments on the manuscript. The research leading to these results has received funding from the European Union Programme (FP7/2007-2013) under the FP7-ICT collaborative project AQUTE (grant number: 247687) and the individual Marie Curie IEF project AAPLQIC (grant number: 330004). \begin{appendix} \section{Heisenberg-Langevin equations: atom motion and cavity light} The main ideas and the key steps of the derivation of equations (\ref{H-L_equations_sp_em}) and (\ref{eq_cor_fun_sp_em}) are presented in this appendix.
The starting point is the full Hamiltonian of the system, which includes the cavity field, the atoms with their spin and mechanical degrees of freedom, and the reservoir containing the field modes outside of the cavity which interact directly with the atoms: \begin{eqnarray} \mathrm{H}_\mathrm{tot}=\mathrm{H}_\mathrm{sys} +\sum_{\vec{k},\epsilon}\hbar\omega_{k}\hat{a}^\dag_{\vec{k},\epsilon}\hat{a}_{\vec{k},\epsilon} - \sum_{ { i=1} \atop {\vec{k},\epsilon} } ^{N} \hbar g_{\vec{k},\epsilon} \left( \sigma^{(i)}_{eg}\hat{a}_{\vec{k},\epsilon} e^{i\vec{k}\vec{\hat{r}}_i} +\sigma^{(i)}_{ge}\hat{a}^\dag_{\vec{k},\epsilon}e^{-i\vec{k}\vec{\hat{r}}_i} \right). \end{eqnarray} The creation $\hat{a}^\dag_{\vec{k},\epsilon}$ and annihilation $\hat{a}_{\vec{k},\epsilon}$ operators of the reservoir modes are labeled by the wave vector $\vec{k}$ and the polarization $\epsilon$ indexes and the summation goes over all the free space modes excluding those entering through the cavity mirrors. The last term in the Hamiltonian represents the interaction between the atoms and the reservoir field modes in the rotating wave approximation with the interaction constant $g_{\vec{k},\epsilon}=\sqrt{\frac{\omega_k}{2 \pi \hbar V \epsilon_0}}\,\,\,\vec{\varepsilon}_\epsilon\cdot\vec{d}_{eg}$. Here $\vec{d}_{eg}$ is an atomic dipole moment, $V$ is the quantization volume and $\epsilon_0$ the vacuum permittivity. The spin of the $i$-th atom is represented by the operators $\sigma^{(i)}_{ge}=|g\rangle_i\langle e|$, $\sigma^{(i)}_{eg}=|e\rangle_i\langle g|$ and $\sigma^{(i)}_{z}=|g\rangle_i\langle g|-|e\rangle_i\langle e|$. 
The atom-cavity Hamiltonian $\mathrm{H_{sys}}$ contains the non-interacting part $H_\mathrm{0}$, the interaction part $H_\mathrm{int}$ and the cavity pumping $H_\mathrm{p}$: \begin{eqnarray} \label{eq_system_Hamiltonian} \mathrm{H}_\mathrm{sys}=H_\mathrm{0}+H_\mathrm{int}+H_\mathrm{p}, \nonumber \\ H_\mathrm{0}=-\hbar\frac{\omega_{eg}}{2} \sum_{i=1}^{N}\sigma^{(i)}_z-\hbar \omega_c \hat{A}^\dag \hat{A} +\sum_{i=1}^N \left( \frac{m \nu^2}{2}\hat{x}^2_i+\frac{1}{2m}\hat{p}^2_i\right), \nonumber \\ H_\mathrm{int}=-\hbar g\sum_{i=1}^{N} \cos(k_c x^{(0)}_i+k_c\hat{x}_i)\left(\sigma^{(i)}_{eg} \hat{A}+\hat{A}^\dag \sigma^{(i)}_{ge}\right) \nonumber, \\ H_\mathrm{p}=i\hbar\left(\eta_p\hat{A}^\dag e^{-i\omega_p t}-\eta_p^*\hat{A}e^{i\omega_p t}\right). \end{eqnarray} The Heisenberg-Langevin equations for the atom-cavity system and the reservoir are: \begin{eqnarray} \label{eq_HL_system+reservoir} \dot{\hat{a}}_{\vec{k},\epsilon}=-i \omega_k \hat{a}_{\vec{k},\epsilon} - i g^*_{\vec{k},\epsilon}\sum_{i=1}^N\sigma^{(i)}_{ge} e^{-i\vec{k}\cdot\vec{\hat{r}}_i}, \\ \nonumber \dot{\hat{A}}=\frac{i}{\hbar}\left[\mathrm{H_{sys}},\hat{A}\right] -\kappa \hat{A} +\sqrt{2\kappa}\hat{A}_{in}, \\ \nonumber \dot{\sigma}^{(i)}_{ge}=\frac{i}{\hbar}\left[\mathrm{H_{sys}},\sigma^{(i)}_{ge}\right] +i \sigma^{(i)}_{z}\sum_{\vec{k},\epsilon} g_{\vec{k},\epsilon}\hat{a}_{\vec{k},\epsilon} e^{i\vec{k}\cdot\vec{\hat{r}}_i}, \\ \nonumber \dot{\sigma}^{(i)}_z=\frac{i}{\hbar}\left[\mathrm{H_{sys}},\sigma^{(i)}_{z}\right] +2i\sum_{\vec{k},\epsilon} \left( g^*_{\vec{k},\epsilon}\sigma^{(i)}_{ge}\hat{a}^\dag_{\vec{k},\epsilon} e^{-i\vec{k}\cdot\vec{\hat{r}}_i} -g_{\vec{k},\epsilon}\sigma^{(i)}_{eg}\hat{a}_{\vec{k},\epsilon} e^{i\vec{k}\cdot\vec{\hat{r}}_i} \right), \\ \nonumber \dot{\hat{p}}_i=\frac{i}{\hbar}\left[\mathrm{H_{sys}},\hat{p}_i\right] +i\sum_{\vec{k},\epsilon} \hbar k_x \left( g^*_{\vec{k},\epsilon}\sigma^{(i)}_{ge} \hat{a}^\dag_{\vec{k},\epsilon} e^{-i\vec{k}\cdot\vec{\hat{r}}_i}
-g_{\vec{k},\epsilon}\sigma^{(i)}_{eg} \hat{a}_{\vec{k},\epsilon} e^{i\vec{k}\cdot\vec{\hat{r}}_i} \right), \\ \nonumber \dot{\hat{x}}_i=\hat{p}_i/m. \end{eqnarray} The first step on the way to equations which contain only the atomic quantum motion and the cavity field is to eliminate the reservoir. This is done by formally solving the first equation of system (\ref{eq_HL_system+reservoir}): \begin{eqnarray} \hat{a}_{\vec{k},\epsilon}(t)=\hat{a}_{\vec{k},\epsilon}(0)e^{-i\omega_k t} -ig^*_{\vec{k},\epsilon} \int_0^t e^{-i\omega_k(t-\tau)}\sum_{i=1}^N \sigma^{(i)}_{ge}(\tau) e^{-i\vec{k}\cdot\vec{\hat{r}}_i}d\tau, \end{eqnarray} and plugging this solution into the remaining equations of system (\ref{eq_HL_system+reservoir}). Assuming a Markovian memoryless reservoir \cite{Cohen-Tannoudji1992,Gardinner2004}, the system of equations can be brought to the following form: \begin{eqnarray} \label{eq_HL_spin+motion+losses} \dot{\sigma}^{(i)}_{ge}=\frac{i}{\hbar}\left[\mathrm{H_{sys}},\sigma^{(i)}_{ge}\right] -\frac{\gamma}{2}\sigma^{(i)}_{ge} +i \sigma^{(i)}_{z}\hat{F}_i(t), \\ \nonumber \dot{\sigma}^{(i)}_z=\frac{i}{\hbar}\left[\mathrm{H_{sys}},\sigma^{(i)}_{z}\right] -\gamma \left(\sigma^{(i)}_z+\mathrm{I}\right) +2i\left( \hat{F}_i^\dag(t) \sigma^{(i)}_{ge} -\sigma^{(i)}_{eg}\hat{F}_i(t) \right), \\ \nonumber \dot{\hat{p}}_i=\frac{i}{\hbar}\left[\mathrm{H_{sys}},\hat{p}_i\right] +i\left( \hat{F}_{pi}^\dag(t) \sigma^{(i)}_{ge} -\sigma^{(i)}_{eg}\hat{F}_{pi}(t) \right), \end{eqnarray} where $\mathrm{I}$ is the identity operator. Here the Langevin sources contain the operators ${\hat{F}_i(t)=\sum_{\vec{k},\epsilon}g_{\vec{k},\epsilon} e^{i\left(\vec{k}\cdot\vec{\hat{r}}_i(t)-\omega_k t\right)} \hat{a}_{\vec{k},\epsilon}(0)}$ and ${\hat{F}_{pi}(t)=\sum_{\vec{k},\epsilon}\hbar k_xg_{\vec{k},\epsilon} e^{i\left(\vec{k}\cdot\vec{\hat{r}}_i(t)-\omega_k t\right)} \hat{a}_{\vec{k},\epsilon}(0)}$, accounting for the noise entering the atom-cavity system from the reservoir.
Under the assumption that all the modes of the reservoir are in the vacuum state, the only non-zero correlation function of the operators $\hat{F}_i$ and $\hat{F}_i^\dag$ is ${\langle\hat{F}_i(t)\hat{F}_i^\dag(t')\rangle=g(t-t')e^{-i\omega_{eg}(t-t')}}$. The function ${g(\tau)=\sum_{\vec{k},\epsilon}|g_{\vec{k},\epsilon}|^2e^{-i(\omega_k-\omega_{eg})\tau}}$ is not exactly a delta function, but if the reservoir bandwidth is much larger than the inverse of the smallest time step considered in the problem it approaches one, with ${\int_{-\infty}^{+\infty} g(\tau)d\tau=2\pi\sum_{\vec{k},\epsilon}|g_{\vec{k},\epsilon}|^2\delta(\omega_k-\omega_{eg})=\gamma}$ \cite{Cohen-Tannoudji1992}. Besides the spontaneous decay rate $\gamma$, a negligibly small energy shift of $\omega_{eg}$ also appears due to the spontaneous emission; it is henceforth reabsorbed into the definition of the frequency. The second step is the adiabatic elimination of the atomic excited state in the limit of large detuning $\Delta_a\gg\gamma,g\sqrt{N_{ph}},\kappa,\nu$. This is done by formally solving the first two equations of system (\ref{eq_HL_spin+motion+losses}) and expanding the solution up to the second order in $1/\Delta_a$: \begin{eqnarray} \label{eq_optical_coherence_2order} \sigma^{(i)}_{ge}= &-&\frac{g f(\hat{x}_i)}{\Delta_a}\left[ \left(1+\frac{\Delta_c}{\Delta_a}+i\frac{\frac{\gamma}{2}-\kappa}{\Delta_a}\right)\hat{A}^\dag +\frac{i}{\Delta_a}\left(\eta_p^*e^{i\omega_p t} + \sqrt{2\kappa}\hat{A}^\dag_{in}\right) \right] \\ \nonumber &-&i\frac{g\sqrt{\omega_R\nu}}{\sqrt{2}\Delta^2_a}f'(\hat{x}_i)\frac{\hat{p}_i}{\Delta p}\hat{A}^\dag +i\frac{\hat{F}^\dag(t)}{i\Delta_a+\gamma/2} +O(\frac{1}{\Delta_a^3}).
\end{eqnarray} The geometric functions depending on the positions of the atoms along the cavity are $f(\hat{x}_i)=\cos(k_cx_i^{(0)})-\sin(k_cx_i^{(0)})k_c\hat{x}_i-\frac{1}{2}\cos(k_cx_i^{(0)})(k_c\hat{x}_i)^2$ and $f'(\hat{x}_i)=-\sin(k_cx_i^{(0)})-\cos(k_cx_i^{(0)})k_c\hat{x}_i$, expanded up to second order in the Lamb-Dicke parameter. The final ingredient needed to arrive from equations (\ref{eq_HL_spin+motion+losses}) at equations (\ref{H-L_equations_sp_em}) and (\ref{eq_cor_fun_sp_em}) is the relations between the operators $\hat{F}_i(t)$ and $\hat{F}_{p_i}(t)$ and the normalized Langevin sources $ \hat{f}_{ai}(t)$ and $ \hat{f}_{bi}(t)$: \begin{eqnarray} \hat{f}_{ai}(t)=\frac{1}{\gamma}\hat{F}_i e^{i\omega_p t}, \\ \nonumber \hat{f}_{bi}(t)=\frac{\cos(k_c x_i^{(0)})}{\sqrt{\gamma}} \left(\hat{F}^\dag_i e^{-i\omega_p t}+\hat{F}_i e^{i\omega_p t}\right) +\frac{i\sin(k_c x_i^{(0)})}{\sqrt{\gamma}\hbar k_c} \left(\hat{F}^\dag_{pi} e^{-i\omega_p t}-\hat{F}_{pi} e^{i\omega_p t}\right). \end{eqnarray} Equations (\ref{H-L_equations_sp_em}) and (\ref{eq_cor_fun_sp_em}) are then derived from equations (\ref{eq_HL_spin+motion+losses}) using these relations and expression (\ref{eq_optical_coherence_2order}), keeping only the terms up to the second order in $1/\Delta_a$. \end{appendix} \end{document}
\begin{document} \title{Solving fuzzy two-point boundary value problem using fuzzy Laplace transform} \author{Latif Ahmad\footnote{1. Shaheed Benazir Bhutto University, Sheringal, Upper Dir, Khyber Pakhtunkhwa, Pakistan. 2. Institute of Pure \& Applied Mathematics, University of Peshawar, 25120, Khyber Pakhtunkhwa, Pakistan. E-mail: [email protected]}, \mbox{} Muhammad Farooq\footnote{ Institute of Pure \& Applied Mathematics, University of Peshawar, 25120, Khyber Pakhtunkhwa, Pakistan. E-mail: [email protected]}, \mbox{} Saif Ullah\footnote{Institute of Pure \& Applied Mathematics, University of Peshawar, 25120, Khyber Pakhtunkhwa, Pakistan. E-mail: [email protected]}, \mbox{} Saleem Abdullah\footnote{Department of Mathematics, Quaid-i-Azam University, Islamabad, Pakistan. E-mail: [email protected]}} \maketitle \begin{abstract} A natural way to model dynamic systems under uncertainty is to use fuzzy boundary value problems (FBVPs) and related uncertain systems. In this paper we use the fuzzy Laplace transform to find the solution of two-point boundary value problems under generalized Hukuhara differentiability. We illustrate the method on the well known two-point boundary value problem given by the Schr\"{o}dinger equation, and on a homogeneous boundary value problem. Consequently, we investigate the solutions of FBVPs as a new application of the fuzzy Laplace transform. \end{abstract} {\bf Keywords}: Fuzzy derivative, fuzzy boundary value problems, fuzzy Laplace transform, fuzzy generalized Hukuhara differentiability. \section{Introduction} ``The theory of fuzzy differential equations (FDEs) has attracted much attention in recent years because this theory represents a natural way to model dynamical systems under uncertainty'', Jamshidi and Avazpour \cite{1a}. The concept of a fuzzy set was introduced by Zadeh in $1965$ \cite{1}. The derivative of a fuzzy-valued function was introduced by Chang and Zadeh in $1972$ \cite{2}.
The integration of fuzzy-valued functions is presented in \cite{8}. Kaleva and Seikkala presented fuzzy differential equations (FDEs) in \cite{3,4}. Many authors discussed applications of FDEs in \cite{5,6,7}. The two-point boundary value problem is investigated in \cite{9}. In the case of the Hukuhara derivative, finding the Green's function helps to solve boundary value problems for first order linear fuzzy differential equations with impulses \cite{10}. Wintner-type and superlinear-type results for fuzzy initial value problems (FIVPs) and fuzzy boundary value problems (FBVPs) are presented in \cite{11}. The solution of an FBVP must be a fuzzy-valued function under the Hukuhara derivative \cite{12,13,14,15,16}. Also, a two-point boundary value problem (BVP) is equivalent to a fuzzy integral equation \cite{17}. Recently, in \cite{18,19,20} the fuzzy Laplace transform was applied to find the analytical solution of FIVPs. According to \cite{21} the fuzzy solution is different from the crisp solution as presented in \cite{12,13,14,15,22,23}. In \cite{21} the Schr\"{o}dinger equation with fuzzy boundary conditions was solved. Further, in \cite{18} it was discussed under what conditions the fuzzy Laplace transform (FLT) can be applied to FIVPs. For two-point BVPs some analytical methods are illustrated in \cite{21,24,25}, while some numerical methods are presented in \cite{1a,27}. But every method has its own advantages and disadvantages for the solution of such types of fuzzy differential equations (FDEs). In this paper we apply the FLT to the two-point BVP of \cite{21}. Moreover, we investigate the solution of the second order Schr\"{o}dinger equation and other homogeneous boundary value problems \cite{21}. After applying the FLT to the BVP we replace each missing term by a constant and then apply the boundary conditions, which determine these constants.
The crisp solution of a fuzzy boundary value problem (FBVP) always lies between the upper and lower solutions. If the lower solution is not monotonically increasing and the upper solution is not monotonically decreasing, then the solution of the FDE is not a valid level set. \noindent This paper is organized as follows:\\ In section $2$, we recall some basic definitions and theorems. The FLT is defined in section $3$, where the FBVP is also briefly reviewed. In section $4$, the construction of solutions of FBVPs via the FLT is explained. To illustrate the method, several examples are given in section $5$. The conclusion is given in section $6$. \section{Basic concepts} In this section we recall some basic definitions and theorems needed throughout the paper, such as fuzzy numbers, fuzzy-valued functions and the derivative of fuzzy-valued functions. \begin{definition} A fuzzy number is defined in \cite{1} as a mapping $u:R\rightarrow[0,1]$ which satisfies the following four properties: \begin{enumerate} \item $u$ is upper semi-continuous. \item $u$ is fuzzy convex, that is, $u(\lambda x+(1-\lambda)y) \geq \min{\{u(x), u(y)\}}$ for all $x, y\in R$ and $\lambda\in [0,1]$. \item $u$ is normal, that is, $\exists$ $x_0\in R$ such that $u(x_0)=1$. \item The closure $A=\overline{\{x \in \mathbb{R}: u(x)>0\}}$ of the support of $u$ is compact. \end{enumerate} \end{definition} \begin{definition} A fuzzy number in parametric form, as given in \cite{2,3,4}, is an ordered pair of the form $u=(\underline{u}(r), \overline{u}(r))$, $0\leq r\leq1$, satisfying the following conditions: \begin{enumerate} \item $\underline{u}(r)$ is a bounded left continuous increasing function on the interval $[0,1]$. \item $\overline{u}(r)$ is a bounded left continuous decreasing function on the interval $[0,1]$. \item $\underline{u}{(r)\leq\overline{u}(r)}$. \end{enumerate} If $\underline{u}(r)=\overline{u}(r)=r$, then $u$ is called a crisp number.
\end{definition} Next we recall the triangular fuzzy number from \cite{1,18,19}, which has the form $u=(l, c, r)$, where $l,c,r\in R$ and $l\leq c\leq r$; then $\underline{u}(\alpha)=l+(c-l)\alpha$ and $\overline{u}(\alpha)=r-(r-c)\alpha$ are the end points of the $\alpha$-level set. Each $y\in R$ can be regarded as a fuzzy number via \begin{eqnarray*}\widetilde{y}(t)=\begin{cases}1, \;\;\; if \;\; y=t,\\ 0, \;\;\; if \;\; y\neq t.\end{cases}\end{eqnarray*} For arbitrary fuzzy numbers $u=(\underline{u}(\alpha), \overline{u}(\alpha))$ and $v=(\underline{v}(\alpha), \overline{v}(\alpha))$ and an arbitrary crisp number $j$, addition and scalar multiplication are defined as: \begin{enumerate} \item $(\underline{u+v})(\alpha)=(\underline{u}(\alpha)+\underline{v}(\alpha))$. \item $(\overline{u+v})(\alpha)=(\overline{u}(\alpha)+\overline{v}(\alpha))$. \item $(j\underline{u})(\alpha)=j\underline{u}(\alpha)$, $(j\overline{u})(\alpha)=j\overline{u}(\alpha)$, \mbox{ } $j\geq0$. \item $(j\underline{u})(\alpha)=j\overline{u}(\alpha)$, $(j\overline{u})(\alpha)=j\underline{u}(\alpha)$, $j<0$. \end{enumerate} \begin{definition} (See Salahshour \& Allahviranloo, and Allahviranloo \& Barkhordari \cite{18,19}.) Suppose that $x, y \in E$ and $\exists$ $z\in E$ such that $x=y+z$. Then $z$ is called the H-difference of $x$ and $y$ and is denoted by $x\ominus y$.\end{definition} \begin{remarks}(See Salahshour \& Allahviranloo \cite{18}.) Let $X$ be a cartesian product of the universes $X_1, X_2, \cdots, X_n$, that is $X=X_1 \times X_2 \times \cdots \times X_n$, and let $A_{1},\cdots,A_{n}$ be $n$ fuzzy numbers in $X_1, \cdots, X_n$ respectively.
Then $f$ is a mapping from $X$ to a universe $Y$, and $y=f(x_{1},x_{2},\cdots,x_{n})$; the Zadeh extension principle allows us to define a fuzzy set $B$ in $Y$ as \begin{equation*}B=\{(y, u_B(y))|y=f(x_1,\cdots,x_{n}),(x_{1},\cdots,x_{n})\in X\},\end{equation*} \noindent where \begin{eqnarray*}u_B(y)=\begin{cases} \sup_{{(x_1,\cdots,x_n)} \in f^{-1}(y)} \min\{u_{A_1}(x_1),\cdots, u_{A_n}(x_n)\}, \;\;\; if \;\;\; f^{-1}(y)\neq \emptyset,\\ 0, \;\;\;\; otherwise, \end{cases}\end{eqnarray*} \noindent where $f^{-1}$ is the inverse of $f$. In the case $n=1$ the extension principle reduces to $B=\{(y, u_B(y))|y=f(x), \mbox{ } x \in X\},$ \noindent where \begin{eqnarray*}u_B(y)=\begin{cases}\sup_{x\in f^{-1}(y)} \{u_A(x)\}, \mbox{ if } f^{-1}(y)\neq \emptyset,\\0, \;\;\;\; otherwise. \end{cases} \end{eqnarray*} By the Zadeh extension principle, addition on $E$ is defined by $(u\oplus v)(x)=\sup_{y\in R} \min(u(y), v(x-y))$, $x \in R$, and scalar multiplication of a fuzzy number is defined by \begin{eqnarray*}(k\odot u)(x)=\begin{cases}u(\frac{x}{k}), \;\;\; k > 0,\\ \widetilde{0}, \;\;\; \mbox{ otherwise }, \end{cases} \end{eqnarray*} \noindent where $\widetilde{0}\in E$. \end{remarks} \noindent The Hausdorff distance between fuzzy numbers \cite{6,12,18,19} is defined by \[d:E\times E\longrightarrow R^{+}\cup \{{0}\},\] \[d(u,v)=\sup_{r\in[0,1]}\max\{|\underline{u}(r)-\underline{v}(r)|, |\overline{u}(r)-\overline{v}(r)|\},\] \noindent where $u=(\underline{u}(r), \overline{u}(r))$ and $v=(\underline{v}(r), \overline{v}(r))\subset R$. \\\\ If $d$ is a metric on $E$, then it satisfies the following properties, introduced by Puri and Ralescu \cite{28}: \begin{enumerate} \item $d(u+w,v+w)=d(u,v)$, $\forall$ $u, v, w \in E$. \item $d(k \odot u, k \odot v)=|k|d(u, v)$, $\forall$ $k \in R$ \mbox{ and } $u, v \in E$. \item $d(u \oplus v, w \oplus e)\leq d(u,w)+d(v,e)$, $\forall$ $u, v, w, e \in E$.
\end{enumerate} \begin{definition}(See Song and Wu \cite{29}.) If $f:R\times E \longrightarrow E$, then $f$ is continuous at a point $(t_0,x_0) \in R \times E$ provided that for any fixed $r \in [0,1]$ and any $\epsilon > 0$ there exists $\delta(\epsilon,r)$ such that $d([f(t,x)]^{r}, [f(t_{0},x_{0})]^{r}) < \epsilon$ whenever $|t-t_{0}|<\delta (\epsilon, r)$ and $d([x]^{r}, [x_{0}]^{r})<\delta(\epsilon,r)$, for all $t \in R$, $x \in E$. \end{definition} \begin{theorem} (See Wu \cite{30}.) Let $f$ be a fuzzy-valued function on $[a,\infty)$ given in parametric form as $(\underline{f}(x,r), \overline{f}(x,r))$ for any constant $r\in[0,1]$. Assume that $\underline{f}(x,r)$ and $\overline{f}(x,r)$ are Riemann-integrable on $[a,b]$ for every $b\geq a$, and that there are two positive functions $\underline{M}(r)$ and $\overline{M}(r)$ such that $\int_a^b|\underline{f}(x,r)| dx \leq \underline{M}(r)$ and $\int_a^b |\overline{f}(x,r)| dx \leq \overline{M}(r)$ for every $b\geq a$. Then $f(x)$ is improperly integrable on $[{a}, \infty)$, and the improper integral is always a fuzzy number. In short, \[ \int_a^\infty f(x) dx = \left( \int_a^\infty \underline{f}(x,r) dx, \int_a^\infty \overline{f}(x,r) dx\right).\] \end{theorem} It is well known that Hukuhara differentiability for fuzzy functions was introduced by Puri \& Ralescu in $1983$. \begin{definition}(See Chalco-Cano and Rom\'{a}n-Flores \cite{31}.) Let $f:(a,b)\rightarrow E$, where $x_{0}\in (a,b)$. Then we say that $f$ is strongly generalized differentiable at $x_0$ (Bede and Gal differentiability).
This means that there exists an element $f'(x_0)\in E$ such that \begin{enumerate} \item $\forall h>0$ sufficiently small, $\exists$ $f(x_0+h)\ominus f(x_0)$, $f(x_0)\ominus f(x_0-h)$, and the following limits hold (in the metric $d$)\\ $\lim_{h\rightarrow 0}\frac{f(x_0+h)\ominus f(x_0)}{h}=\lim_{h\rightarrow 0}\frac{f(x_0)\ominus f(x_0-h)}{h}=f'(x_0)$, \noindent or \item $\forall h>0$ sufficiently small, $\exists$ $f(x_0)\ominus f(x_0+h)$, $f(x_0-h)\ominus f(x_0)$, and the following limits hold (in the metric $d$) \\$\lim_{h\rightarrow 0}\frac{f(x_0)\ominus f(x_0+h)}{-h}=\lim_{h\rightarrow 0}\frac{f(x_0-h)\ominus f(x_0)}{-h}=f'(x_0)$, \noindent or \item $\forall h>0$ sufficiently small, $\exists$ $f(x_0+h)\ominus f(x_0)$, $f(x_0-h)\ominus f(x_0)$, and the following limits hold (in the metric $d$)\\ $\lim_{h\rightarrow 0}\frac{f(x_0+h)\ominus f(x_0)}{h}=\lim_{h\rightarrow 0}\frac{f(x_0-h)\ominus f(x_0)}{-h}=f'(x_0)$, \noindent or \item $\forall h>0$ sufficiently small, $\exists$ $f(x_0)\ominus f(x_0+h)$, $f(x_0)\ominus f(x_0-h)$, and the following limits hold (in the metric $d$)\\ $\lim_{h\rightarrow 0}\frac{f(x_0)\ominus f(x_0+h)}{-h}=\lim_{h\rightarrow 0}\frac{f(x_0)\ominus f(x_0-h)}{h}=f'(x_0)$. \end{enumerate} The denominators $h$ and $-h$ denote multiplication by $\frac{1}{h}$ and $\frac{-1}{h}$ respectively.\end{definition} \begin{theorem}(See Chalco-Cano and Rom\'{a}n-Flores \cite{31}.) Let $f:R\rightarrow E$ be a function denoted by $f(t)=(\underline{f}(t,r),\overline{f}(t,r))$ for each $r\in[0,1]$. Then \begin{enumerate} \item If $f$ is $(i)$-differentiable, then $\underline{f}(t,r)$ and $\overline{f}(t,r)$ are differentiable functions and $f'(t)=(\underline{f}'(t,r), \overline{f}'(t,r))$. \item If $f$ is $(ii)$-differentiable, then $\underline{f}(t,r)$ and $\overline{f}(t,r)$ are differentiable functions and $f'(t)=(\overline{f}'(t,r), \underline{f}'(t,r))$. \end{enumerate} \end{theorem} \begin{lemma}(See Bede and Gal \cite{32,33}.) Let $x_0\in R$.
Then the FDE $y'=f(x,y)$, $y(x_0)=y_0\in R$, where $f:R\times E\rightarrow E$ is supposed to be continuous, is equivalent to one of the following integral equations: \[y(x)=y_0+\int_{x_0}^x f(t, y(t))dt \;\;\; \forall \;\;\; x\in [x_0, x_1],\] \noindent or \[y(x)=y_0\ominus(-1)\odot\int_{x_0}^x f(t,y(t))dt\;\;\; \forall \;\;\; x\in [x_0, x_1],\] \noindent on some interval $(x_0, x_1)\subset R$, depending on the strongly generalized differentiability considered. Integral equivalency means that if one expression satisfies the given equation, then so does the other. \end{lemma} \begin{remarks}(See Bede and Gal \cite{32,33}.) In the case of strongly generalized differentiability for the FDE $y'=f(x,y)$ we use two different integral equations, while in the case of differentiability in the sense of the H-derivative we use only one integral equation. The second integral equation, as in Lemma $2.10$, has the form $y^{1}(t)=y^{1}_0\ominus(-1)\int_{x_0}^x f(t,y(t))dt$. The following theorem concerns the existence of a solution of an FIVP under generalized differentiability. \end{remarks} \begin{theorem} Let us suppose that the following conditions are satisfied: \begin{enumerate} \item Let $R_0=[x_0, x_0+s]\times B(y_0, q), s,q>0, y\in E$, where $B(y_0,q)=\{y\in E: D(y,y_0)\leq q\}$ denotes a closed ball in $E$, and let $f:R_0\rightarrow E$ be a continuous function such that $D(0, f(x,y))\leq M$, $\forall (x,y) \in R_0$ and $0\in E$.
\item Let $g:[x_0, x_0+s]\times [0,q]\rightarrow R$ be such that $g(x, 0)\equiv 0$ and $0\leq g(x,u)\leq M$, $\forall x \in [x_0, x_0+s], 0\leq u\leq q$, such that $g(x,u)$ is increasing in $u$, and $g$ is such that the FIVP $u'(x)=g(x, u(x))$ has only the solution $u(x)\equiv 0$ on $[x_0, x_0+s].$ \item We have $D[f(x,y),f(x,z)]\leq g(x, D(y,z))$, $\forall (x,y), (x, z)\in R_0$ with $D(y,z)\leq q.$ \item $\exists d>0$ such that for $x\in [x_0, x_0+d]$ the sequence $y^1_n:[x_0, x_0+d]\rightarrow E$ given by $y^1_0(x)=y_0$, $y^1_{n+1}(x)=y_0\ominus(-1)\int_{x_0}^{x} f(t, y^{1}_n)dt$ is defined for any $n\in N$. \end{enumerate} Then the FIVP $y'=f(x,y)$, $y(x_0)=y_0$ has two solutions, one (1)-differentiable and the other (2)-differentiable, $y, y^{1}:[x_0, x_0+r]\rightarrow B(y_0, q)$, where $r=\min\{s,\frac{q}{M},\frac{q}{M_1},d\}$, and the successive iterations $y_0(x)=y_0$, $y_{n+1}(x)=y_{0}+\int_{x_0}^{x}f(t,y_{n}(t))dt$ and $y^{1}_{0}(x)=y_0$, $y^{1}_{n+1}(x)=y_0\ominus (-1)\int_{x_0}^{x}f(t, y^{1}_{n}(t))dt$ converge to these two solutions respectively. Now, according to Theorem $2.11$, we restrict our attention to functions which are (1)- or (2)-differentiable on their domain except at a finite number of points, as discussed in \cite{33}. \end{theorem} \section{Fuzzy Laplace Transform} Suppose that $f$ is a fuzzy-valued function and $p$ is a real parameter; then, following \cite{18,19}, the FLT of the function $f$ is defined as follows: \begin{definition}\label{eq1} The FLT of a fuzzy-valued function is \cite{18,19} \begin{equation}\label{eq2}\widehat{F}(p)=L[f(t)]=\int_{0}^{\infty}e^{-pt}f(t)dt,\end{equation} \begin{equation}\label{eq3}\widehat{F}(p)=L[f(t)]=\lim_{\tau\rightarrow\infty}\int_{0}^{\tau}e^{-pt}f(t)dt,\end{equation} \begin{equation}\widehat{F}(p)=[\lim_{\tau\rightarrow\infty}\int_{0}^{\tau}e^{-pt}\underline{f}(t)dt,\lim_{\tau\rightarrow\infty}\int_{0}^{\tau}e^{-pt}\overline{f}(t)dt],\end{equation} \noindent whenever the limits exist.
\end{definition} \begin{definition}\textbf{Classical Fuzzy Laplace Transform:} Now consider the fuzzy-valued function whose lower and upper FLTs are represented by \begin{equation}\label{eq4}\widehat{F}(p;r)=L[f(t;r)]=[l(\underline{f}(t;r)),l(\overline{f}(t;r))],\end{equation} \noindent where \begin{equation}\label{eq5}l[\underline{f}(t;r)]=\int_{0}^{\infty}e^{-pt}\underline{f}(t;r)dt=\lim_{\tau\rightarrow\infty} \int_{0}^{\tau}e^{-pt}\underline{f}(t;r)dt,\end{equation} \begin{equation}\label{eq6}l[\overline{f}(t;r)]=\int_{0}^{\infty}e^{-pt}\overline{f}(t;r)dt=\lim_{\tau\rightarrow\infty}\int_{0}^{\tau}e^{-pt}\overline{f}(t;r)dt. \end{equation} \end{definition} \subsection{Fuzzy Boundary Value Problem} The concepts of fuzzy numbers and fuzzy sets were first introduced by Zadeh \cite{1}. Detailed information on fuzzy numbers and fuzzy arithmetic can be found in \cite{12,13,14}. In this section we review the fuzzy boundary value problem (FBVP) with a crisp linear differential equation but fuzzy boundary values. For example, we consider the second order fuzzy boundary value problem \cite{9,10,21,1a} \begin{equation}\begin{split}\label{eq76} \psi^{''}(t)+c_1(t)\psi^{'}(t)+c_2(t)\psi(t)=f(t),\\ \psi(0)=\tilde{A},\\ \psi(l)=\tilde{B}. \end{split}\end{equation} \section{Constructing Solutions Via FBVP} In this section we consider the following second order FBVP in general form under generalized H-differentiability, as proposed in \cite{21}.
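Before setting up the transformed equations, the level-set machinery of Section 2 can be made concrete in a few lines of code. The following is a minimal sketch of $\alpha$-level arithmetic for triangular fuzzy numbers and of the distance $d$; the function names are ours and purely illustrative:

```python
def alpha_cut(u, alpha):
    """Level set of a triangular fuzzy number u = (l, c, r):
    endpoints l + (c - l)*alpha and r - (r - c)*alpha."""
    l, c, r = u
    return (l + (c - l) * alpha, r - (r - c) * alpha)

def add(cut_u, cut_v):
    # endpoints add component-wise
    return (cut_u[0] + cut_v[0], cut_u[1] + cut_v[1])

def scale(j, cut_u):
    # j >= 0 keeps the endpoint order; j < 0 swaps the endpoints
    return (j * cut_u[0], j * cut_u[1]) if j >= 0 else (j * cut_u[1], j * cut_u[0])

def distance(u, v, grid=101):
    """Approximate d(u, v) = sup_r max(|u_lo - v_lo|, |u_hi - v_hi|),
    with the sup taken over a finite grid of levels (an approximation)."""
    best = 0.0
    for k in range(grid):
        r = k / (grid - 1)
        (ul, uu), (vl, vu) = alpha_cut(u, r), alpha_cut(v, r)
        best = max(best, abs(ul - vl), abs(uu - vu))
    return best

u, v = (1, 2, 4), (0, 1, 3)
print(alpha_cut(u, 1.0))                          # core of u: (2.0, 2.0)
print(add(alpha_cut(u, 0.5), alpha_cut(v, 0.5)))  # (2.0, 5.0)
print(scale(-2, alpha_cut(u, 0.0)))               # (-8.0, -2.0)
print(distance(u, v))                             # about 1.0 (v is u shifted down)
```

The same level-wise pattern is exactly what the FLT exploits: the transform acts separately on the lower and upper endpoint functions $\underline{f}(t;r)$ and $\overline{f}(t;r)$.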
We define \begin{equation}\label{eq7711a}y''(t)=f(t, y(t),y'(t)), \end{equation} \noindent subject to the two-point boundary conditions \\ \[y(0)=(\underline{y}(0;r), \overline{y}(0;r)),\] \[y(l)=(\underline{{y}}(l;r), \overline{y}(l;r)).\] \noindent Taking the FLT of (\ref{eq7711a}) gives \begin{equation}\label{eq78}L[y''(t)]=L[f(t, y(t),y'(t))], \end{equation} \noindent which can be written as \[p^2L[y(t)]\ominus py(0)\ominus y'(0)=L[f(t, y(t),y'(t))].\] The classical form of the FLT is given below: \begin{equation}\begin{split}\label{eq79} p^{2}l[\underline{y}(t;r)]-p\underline{y}(0;r)-\underline{y}'(0;r)=l[\underline{f}(t, y(0;r),y'(0;r))], \end{split}\end{equation} \begin{equation}\begin{split}\label{eq80} p^{2}l[\overline{y}(t;r)]-p\overline{y}(0;r)-\overline{y}'(0;r)=l[\overline{f}(t, y(0;r),y'(0;r))]. \end{split}\end{equation} Here we replace the unknown value $\underline{y}'(0;r)$ by a constant $F_1$ in the lower equation and $\overline{y}'(0;r)$ by a constant $F_2$ in the upper equation; these constants are then determined by applying the given boundary conditions. In order to solve equations (\ref{eq79}) and (\ref{eq80}) we assume that $A(p;r)$ and $B(p;r)$ are the solutions of (\ref{eq79}) and (\ref{eq80}) respectively. Then the above system becomes \begin{equation}\label{eq81} l[\underline{y}(t;r)]=A(p;r), \end{equation} \begin{equation}\label{eq30} l[\overline{y}(t;r)]=B(p;r). \end{equation} \noindent Using the inverse Laplace transform, we get the lower and upper solutions of the given problem as: \begin{equation}\label{eq31} \underline{y}(t;r)=l^{-1}[A(p;r)], \end{equation} \begin{equation}\label{eq32} \overline{y}(t;r)=l^{-1}[B(p;r)]. \end{equation} \section{Examples} In this section we first consider the Schr\"{o}dinger equation \cite{21} with fuzzy boundary conditions under Hukuhara differentiability.
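As a consistency check of this recipe, one can verify symbolically that the function produced by inverting (\ref{eq79}) for the homogeneous case $ay''=by$ with $\underline{u}(0;r)=1+r$ and free constant $F_1=\underline{u}'(0;r)$ (the case worked out in the following example) indeed satisfies the equation and the data at zero. A sketch assuming SymPy is available:

```python
import sympy as sp

x, r, a, b, F1 = sp.symbols('x r a b F1', positive=True)
w = sp.sqrt(b / a)

# Candidate lower solution obtained from (a p^2 - b) L[u] = a (p u(0) + u'(0))
# with u(0) = 1 + r and the unknown u'(0) replaced by the constant F1
u = (1 + r) / 2 * (sp.exp(w * x) + sp.exp(-w * x)) \
    + F1 / (2 * w) * (sp.exp(w * x) - sp.exp(-w * x))

assert sp.simplify(a * sp.diff(u, x, 2) - b * u) == 0       # a u'' = b u
assert sp.simplify(u.subs(x, 0) - (1 + r)) == 0             # u(0) = 1 + r
assert sp.simplify(sp.diff(u, x).subs(x, 0) - F1) == 0      # u'(0) = F1
print("lower solution verified")
```

The remaining boundary condition at $x=l$ then fixes $F_1$, which is how the example below eliminates the free constant.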
\begin{example} The Schr\"{o}dinger FBVP \cite{21} is as follows: \begin{equation}\label{eq1n} (\frac{h^2}{2m})u^{''}(x)+V(x)u(x)=Eu(x), \end{equation} \noindent where $V(x)$ is the potential, defined as \begin{eqnarray*}V(x)=\begin{cases}0, & \text{if } x < 0,\\ l, & \text{if } x>0,\end{cases}\end{eqnarray*} \noindent subject to the following boundary conditions\\ \[u(0)=(1+r, 3-r),\] \[u(l)=(4+r, 6-r).\]\\ \noindent Now let $a=\frac{h^2}{2m}$, $b=E$. Then, (\ref{eq1n}) becomes \begin{equation}\label{eq2n} au^{''}(x)+V(x)u(x)=bu(x). \end{equation} For (\ref{eq1n}) with $x<0$ we discuss (1,1)- and (2,2)-differentiability, while for $x>0$ we discuss (1,2)- and (2,1)-differentiability. \subsection{Case-I: (1,1)- and (2,2)-differentiability} For $x<0$, (\ref{eq2n}) becomes \begin{equation*} au^{''}=bu, \end{equation*} \begin{equation}\label{eq4n} au^{''}-bu=0. \end{equation} Applying the FLT to both sides of equation (\ref{eq4n}), we get \begin{equation*}\label{eq65} aL[u^{''}(x)]-bL[u(x)]=0, \end{equation*} \noindent where \begin{equation*}\label{eq73} L[u''(x)]=p^{2}L[u(x)]\ominus pu(0)\ominus u'(0). \end{equation*} The classical FLT form of the above equation is \begin{equation*}\label{eq74} l[\underline u''(x,r)]=p^{2}l[\underline{u}(x,r)]-p\underline u(0,r)-\underline u'(0,r), \end{equation*} \begin{equation*}\label{eq75} l[\overline u''(x,r)]=p^{2}l[\overline {u}(x,r)]-p\overline u(0,r)-\overline u'(0,r). \end{equation*} \noindent Solving the above classical equations for the lower and upper solutions, we have \begin{equation*}\begin{split}\label{eq76} a\{p^{2}l[\underline{u}(x,r)]-p\underline u(0,r)-\underline u'(0,r)\}-bl[\underline {u}(x,r)]=0, \end{split}\end{equation*} or \begin{equation*}\begin{split}\label{eq76} (ap^{2}-b)l[\underline{u}(x,r)]=a\{p\underline u(0,r)+\underline u'(0,r)\}. 
\end{split}\end{equation*} \noindent Applying the boundary conditions, we have \begin{equation*}\begin{split}\label{eq77} (ap^{2}-b)l[\underline{u}(x,r)]=a\{p(1+r)+F_1\}, \end{split}\end{equation*} where \[F_1=\underline u'(0,r).\] \noindent Simplifying and applying the inverse Laplace transform, we get \begin{equation*}\begin{split}\label{eq77} \underline{u}(x,r)=(\frac{1+r}{2})l^{-1}\{\frac{p}{p^2-\frac{b}{a}}\}+F_1l^{-1}\{\frac{1}{p^2-\frac{b}{a}}\}. \end{split}\end{equation*} Using partial fractions, \begin{equation}\begin{split}\label{eq7} \underline{u}(x,r)=(\frac{1+r}{2})\{e^{\sqrt{\frac{b}{a}}x}+e^{-\sqrt{\frac{b}{a}}x}\}+\frac{F_1}{2\sqrt{\frac{b}{a}}}\{e^{\sqrt{\frac{b}{a}}x}-e^{-\sqrt{\frac{b}{a}}x}\}. \end{split}\end{equation} \noindent Applying the boundary conditions to (\ref{eq7}), we get \begin{equation*}\begin{split}\label{eq77} F_1=\frac{4+r-\frac{1+r}{2}\{e^{\sqrt{\frac{b}{a}}l}+e^{-\sqrt{\frac{b}{a}}l}\}}{\frac{1}{2}\sqrt{\frac{a}{b}}\{e^{\sqrt{\frac{b}{a}}l}-e^{-\sqrt{\frac{b}{a}}l}\}}. \end{split}\end{equation*} \noindent Substituting the value of $F_1$ into (\ref{eq7}), we get \begin{equation*}\begin{split}\label{eq771} \underline{u}(x,r)=(\frac{1+r}{2})\{e^{\sqrt{\frac{b}{a}}x}+e^{-\sqrt{\frac{b}{a}}x}\}+\frac{4+r-\frac{1+r}{2}\{e^{\sqrt{\frac{b}{a}}l}+e^{-\sqrt{\frac{b}{a}}l}\}}{\{e^{\sqrt{\frac{b}{a}}l}-e^{-\sqrt{\frac{b}{a}}l}\}}\{e^{\sqrt{\frac{b}{a}}x}-e^{-\sqrt{\frac{b}{a}}x}\}. \end{split}\end{equation*} Now solving the classical FLT form for $\overline{u}(x,r)$, we have \begin{equation*}\begin{split}\label{eq76} a\{p^{2}l[\overline {u}(x,r)]-p\overline u(0,r)-\overline u'(0,r)\}-bl[\overline {u}(x,r)]=0, \end{split}\end{equation*} \begin{equation*}\begin{split}\label{eq76} (ap^{2}-b)l[\overline {u}(x,r)]=a\{p\overline u(0,r)+\overline u'(0,r)\}. 
\end{split}\end{equation*} \noindent Using the boundary conditions, we have \begin{equation*}\begin{split}\label{eq77} (ap^{2}-b)l[\overline {u}(x,r)]=a\{p(3-r)+F_2\}, \end{split}\end{equation*} \noindent where \[F_2=\overline u'(0,r).\] \noindent Simplifying and applying the inverse Laplace transform, we get \begin{equation*}\begin{split}\label{eq77} \overline {u}(x,r)=(\frac{3-r}{2})l^{-1}\{\frac{p}{p^2-\frac{b}{a}}\}+F_2l^{-1}\{\frac{1}{p^2-\frac{b}{a}}\}. \end{split}\end{equation*} Using partial fractions, \begin{equation}\begin{split}\label{eq774} \overline{u}(x,r)=(\frac{3-r}{2})\{e^{\sqrt{\frac{b}{a}}x}+e^{-\sqrt{\frac{b}{a}}x}\}+\frac{F_2}{2\sqrt{\frac{b}{a}}}\{e^{\sqrt{\frac{b}{a}}x}-e^{-\sqrt{\frac{b}{a}}x}\}. \end{split}\end{equation} \noindent Applying the boundary conditions to (\ref{eq774}), we get \begin{equation*}\begin{split}\label{eq77} F_2=\frac{6-r-\frac{3-r}{2}\{e^{\sqrt{\frac{b}{a}}l}+e^{-\sqrt{\frac{b}{a}}l}\}}{\frac{1}{2}\sqrt{\frac{a}{b}}\{e^{\sqrt{\frac{b}{a}}l}-e^{-\sqrt{\frac{b}{a}}l}\}}. \end{split}\end{equation*} \noindent Substituting the value of $F_2$ into (\ref{eq774}), we get \begin{equation*}\begin{split}\label{eq771} \overline{u}(x,r)=(\frac{3-r}{2})\{e^{\sqrt{\frac{b}{a}}x}+e^{-\sqrt{\frac{b}{a}}x}\}+\frac{6-r-\frac{3-r}{2}\{e^{\sqrt{\frac{b}{a}}l}+e^{-\sqrt{\frac{b}{a}}l}\}}{\{e^{\sqrt{\frac{b}{a}}l}-e^{-\sqrt{\frac{b}{a}}l}\}}\{e^{\sqrt{\frac{b}{a}}x}-e^{-\sqrt{\frac{b}{a}}x}\}. \end{split}\end{equation*} \subsection{Case-II: (1,2)- and (2,1)-differentiability} For $x>0$, (\ref{eq2n}) becomes \begin{equation}\label{eq4nn} au^{''}+(l-b)u=0. \end{equation} Applying the FLT and the inverse Laplace transform and then simplifying, we get the following lower solution. 
\begin{equation*}\begin{split}\label{eq771} \underline{u}(x,r)=(\frac{1+r}{2})\bigg[\cos\frac{x\sqrt{b-l}}{\sqrt{a}}+\cosh\frac{x\sqrt{b-l}}{\sqrt{a}}\bigg]+\frac{H_1\sqrt{a}}{2\sqrt{b-l}}\bigg[\sin\frac{x\sqrt{b-l}}{\sqrt{a}}+\sinh\frac{x\sqrt{b-l}}{\sqrt{a}}\bigg]\\ -\frac{(3-r)}{2}\bigg[\cos\frac{x\sqrt{b-l}}{\sqrt{a}}-\cosh\frac{x\sqrt{b-l}}{\sqrt{a}}\bigg]-\frac{H_2\sqrt{a}}{2\sqrt{b-l}}\bigg[\sin\frac{x\sqrt{b-l}}{\sqrt{a}}-\sinh\frac{x\sqrt{b-l}}{\sqrt{a}}\bigg], \end{split}\end{equation*} \noindent or \begin{equation*}\begin{split}\label{eq771} \underline{u}(x,r)=\frac{1+r}{2}(c_1)+\frac{H_1\sqrt{a}}{2\sqrt{b-l}}(c_2) -\frac{3-r}{2}(c_3)-\frac{H_2\sqrt{a}}{2\sqrt{b-l}}(c_4). \end{split}\end{equation*} \noindent The upper solution is as follows: \begin{equation*}\begin{split}\label{eq771} \overline{u}(x,r)=(\frac{3-r}{2})\bigg[\cos\frac{x\sqrt{b-l}}{\sqrt{a}}+\cosh\frac{x\sqrt{b-l}}{\sqrt{a}}\bigg]+\frac{H_2\sqrt{a}}{2\sqrt{b-l}}\bigg[\sin\frac{x\sqrt{b-l}}{\sqrt{a}}+\sinh\frac{x\sqrt{b-l}}{\sqrt{a}}\bigg]\\ -\frac{(1+r)}{2}\bigg[\cos\frac{x\sqrt{b-l}}{\sqrt{a}}-\cosh\frac{x\sqrt{b-l}}{\sqrt{a}}\bigg]-\frac{H_1\sqrt{a}}{2\sqrt{b-l}}\bigg[\sin\frac{x\sqrt{b-l}}{\sqrt{a}}-\sinh\frac{x\sqrt{b-l}}{\sqrt{a}}\bigg], \end{split}\end{equation*} \noindent or \begin{equation*}\begin{split}\label{eq771} \overline{u}(x,r)=\frac{3-r}{2}(c_1)+\frac{H_2\sqrt{a}}{2\sqrt{b-l}}(c_2) -\frac{1+r}{2}(c_3)-\frac{H_1\sqrt{a}}{2\sqrt{b-l}}(c_4), \end{split}\end{equation*} \noindent where \[c_1=\cos\frac{x\sqrt{b-l}}{\sqrt{a}}+\cosh\frac{x\sqrt{b-l}}{\sqrt{a}},\] \[c_2=\sin\frac{x\sqrt{b-l}}{\sqrt{a}}+\sinh\frac{x\sqrt{b-l}}{\sqrt{a}},\] \[c_3=\cos\frac{x\sqrt{b-l}}{\sqrt{a}}-\cosh\frac{x\sqrt{b-l}}{\sqrt{a}},\] \[c_4=\sin\frac{x\sqrt{b-l}}{\sqrt{a}}-\sinh\frac{x\sqrt{b-l}}{\sqrt{a}}.\] The constants $H_1$ and $H_2$, determined from the boundary conditions, are \[H_1=\frac{2c_2}{c^2_2-c^2_4}\bigg[4+r-\frac{r+1}{2}c_1+\frac{3-r}{2}c_3\bigg]+\frac{2c_4}{c^2_2-c^2_4}\bigg[6-r-\frac{3-r}{2}c_1+\frac{1+r}{2}c_3\bigg],\] and 
\[H_2=\frac{2c_4}{c^2_2-c^2_4}\bigg[4+r-\frac{r+1}{2}c_1+\frac{3-r}{2}c_3\bigg]+\frac{2c_2}{c^2_2-c^2_4}\bigg[6-r-\frac{3-r}{2}c_1+\frac{1+r}{2}c_3\bigg].\] \end{example} \begin{example} Consider the following homogeneous fuzzy boundary value problem \begin{equation}\label{s1} x^{''}(t)-3x^{'}(t)+2x(t)=0,\end{equation} \noindent subject to the following boundary conditions\\ \[x(0)=(0.5r-0.5, 1-r),\] \[x(1)=(r-1, 1-r).\]\\ Applying the fuzzy Laplace transform to both sides of equation (\ref{s1}), we get \begin{equation}\label{eq65} L[x^{''}(t)]=3L[x'(t)]-2L[x(t)]. \end{equation} \noindent We know that \begin{equation*}\label{s2} L[x''(t)]=p^{2}L[x(t)]\ominus px(0)\ominus x'(0). \end{equation*} The classical FLT form of the above equation is \begin{equation*}\label{eq74} l[\underline x''(t,r)]=p^{2}l[\underline{x}(t,r)]-p\underline{x}(0,r)-\underline{x}'(0,r), \end{equation*} \begin{equation*}\label{eq75} l[\overline x''(t,r)]=p^{2}l[\overline {x}(t,r)]-p\overline{x}(0,r)-\overline x'(0,r). \end{equation*} Substituting into (\ref{eq65}), we have \begin{equation}\label{eq76a} p^{2}l[\underline{x}(t,r)]-p\underline x(0,r)-\underline x'(0,r)-3pl[\underline {x}(t,r)]+3\underline{x}(0,r)+2l[\underline{x}(t,r)]=0,\end{equation} \begin{equation}\label{eq76b}p^{2}l[\overline{x}(t,r)]-p\overline{x}(0,r)-\overline x'(0,r)-3pl[\overline{x}(t,r)]+3\overline{x}(0,r)+2l[\overline{x}(t,r)]=0. \end{equation} Solving (\ref{eq76a}) for $l[\underline{x}(t,r)]$, we get \begin{equation*}\begin{split}\label{eq76} (p^{2}-3p+2)l[\underline{x}(t,r)]=p\underline x(0,r)+\underline x'(0,r)-3\underline {x}(0,r). \end{split}\end{equation*} \noindent Applying the boundary conditions and writing $A=\underline{x}'(0,r)$, we have \begin{equation*}\label{eq76} l[\underline{x}(t,r)]=\frac{(0.5r-0.5)p}{p^2-3p+2}-\frac{3(0.5r-0.5)}{p^2-3p+2}+\frac{A}{p^2-3p+2}. 
\end{equation*} Using partial fractions and then applying the inverse Laplace transform, we get \begin{equation*} \underline{x}(t,r)=(0.5r-0.5)[-e^t+2e^{2t}]-3(0.5r-0.5)[-e^t+e^{2t}]+A[-e^t+e^{2t}].\end{equation*} Using the boundary values, we get \begin{equation*}\underline{x}(1,r)=r-1=(0.5r-0.5)[-e+2e^2]-3(0.5r-0.5)[-e+e^2]+A[-e+e^2],\end{equation*} \begin{equation*}A=\frac{r-1+(0.5r-0.5)[-2e+e^2]}{e^2-e}.\end{equation*} \noindent Finally, substituting the value of $A$, we have \begin{equation*} \underline{x}(t,r)=(0.5r-0.5)(-e^t+2e^{2t})-3(0.5r-0.5)(-e^t+e^{2t})+\frac{r-1+(0.5r-0.5)(-2e+e^2)}{e^2-e}(-e^t+e^{2t}). \end{equation*} Now solving (\ref{eq76b}) for $l[\overline{x}(t,r)]$, we have \begin{equation*}\label{eq76} (p^{2}-3p+2)l[\overline{x}(t,r)]=p\overline x(0,r)+\overline x'(0,r)-3\overline {x}(0,r). \end{equation*} \noindent Applying the boundary conditions and writing $B=\overline{x}'(0,r)$, we get \begin{equation*}\label{eq76} l[\overline{x}(t,r)]=\frac{(1-r)p}{p^2-3p+2}-\frac{3(1-r)}{p^2-3p+2}+\frac{B}{p^2-3p+2}. \end{equation*} \noindent Using partial fractions and then applying the inverse Laplace transform, \begin{equation}\label{eq76c} \overline{x}(t,r)=(1-r)[-e^t+2e^{2t}]-3(1-r)[-e^t+e^{2t}]+B[-e^t+e^{2t}].\end{equation} Using the boundary values, \begin{equation*} \overline{x}(1,r)=1-r=(1-r)[-e+2e^2]-3(1-r)[-e+e^2]+B[-e+e^2],\end{equation*} \begin{equation*}B=\frac{1-r+(1-r)[-2e+e^2]}{e^2-e}.\end{equation*} \noindent Substituting the value of $B$ into (\ref{eq76c}), we get \begin{equation*} \overline {x}(t,r)=(1-r)[-e^t+2e^{2t}]-3(1-r)[-e^t+e^{2t}]+\frac{1-r+(1-r)[-2e+e^2]}{e^2-e}[-e^t+e^{2t}]. \end{equation*} \end{example} \section{Conclusion} In this paper, we applied the fuzzy Laplace transform to solve FBVPs under generalized H-differentiability, in particular the Schr\"{o}dinger FBVP. We also used the FLT to solve a homogeneous FBVP, giving another application of the FLT. Thus the FLT can be used to solve FBVPs analytically. The method can be extended to $n$th-order FBVPs; this work is in progress. \end{document}
\begin{document} \begin{abstract} Given a typical interval exchange transformation, we may naturally associate to it an infinite sequence of matrices through Rauzy induction. These matrices encode visitations of the induced interval exchange transformations within the original. In 2010, W.\ A.\ Veech showed that these matrices suffice to recover the original interval exchange transformation, unique up to topological conjugacy, answering a question of A.\ Bufetov. In this work, we show that the interval exchange transformation may be recovered and is unique modulo conjugacy when we instead only know consecutive products of these matrices. This answers another question of A.\ Bufetov. We also extend this result to any inductive scheme that produces square visitation matrices. \end{abstract} \maketitle \section{Introduction} Interval exchange transformations (IET's) are invertible piece-wise translations on an interval $I$. They are typically defined by a permutation $\pi$ on $\{1,\dots,n\}$ and a choice of partitioning of $I$ into sub-intervals $I_1,\dots,I_n$ with respective lengths $\lambda_1,\dots,\lambda_n$. The sub-intervals are reordered by $T$ according to $\pi$. Rauzy induction, as defined in \cite{cRau1979}, is a map that sends an IET $T$ on $I$ to its first return $T'$ on $I' \subset I$ for suitably chosen $I'$. For almost every\footnote{For every appropriate $\pi$ and Lebesgue almost every $\lambda = (\lambda_1,\dots,\lambda_n)\in \mathbb{R}_+^n$.} IET $T$, Rauzy induction may be applied infinitely often. This yields a sequence $T^{(k)}$, $k\geq 0$, of IET's so that each transition $T^{(k-1)}\mapsto T^{(k)}$ is the result of a Rauzy induction. For each step we may define a \emph{visitation matrix} $A_k$ so that $(A_k)_{ij}$ counts the number of disjoint images of the intervals $I_j^{(k)}$ in $I_i^{(k-1)}$ before return to $I^{(k)}$. 
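For concreteness, here is a minimal numeric sketch of an IET determined by $(\pi,\lambda)$ (our own illustration, not part of the paper's formal development; the helper name `iet` is ours):

```python
import bisect

def iet(pi, lam):
    """Build the IET for permutation pi (pi[j-1] = position of interval j)
    and lengths lam, returning the map T as a Python function."""
    n = len(lam)
    beta = [0.0]
    for x in lam:
        beta.append(beta[-1] + x)             # interval j is [beta[j-1], beta[j])
    omega = []
    for j in range(n):
        # translation of interval j: total length placed before it
        # in the new order, minus its original left endpoint
        before = sum(lam[i] for i in range(n) if pi[i] < pi[j])
        omega.append(before - beta[j])
    def T(x):
        j = bisect.bisect_right(beta, x) - 1  # which sub-interval contains x
        return x + omega[j]
    return T

# two intervals of lengths 0.3 and 0.7, swapped: rotation by 0.7 on [0, 1)
T = iet([2, 1], [0.3, 0.7])
print(T(0.1), T(0.5))  # 0.1 maps near 0.8, 0.5 maps near 0.2
```

Inducing such a map on a sub-interval and counting visits is what produces the visitation matrices $A_k$ discussed above.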
It is part of the general theory of IET's that the initial $\pi$ and the sequence of $A_k$'s define $T$ uniquely up to topological conjugacy. In preparation for \cite{cBuf2013}, A. Bufetov posed the following. \begin{ques}[A. Bufetov] Given only the sequence of $A_k$'s, can the initial permutation $\pi$ be determined and is it unique? \end{ques} In response, W. A. Veech gave an affirmative answer in \cite[Theorem 1.2]{cVe2010}. This allowed A. Bufetov to ensure the injectivity of a map that intertwines the Kontsevich-Zorich cocycle with a renormalization cocycle (see the remark ending Section 4.3.1 in \cite{cBuf2013}). However, if another induction scheme was used to get visitation matrices, we may not know each individual $A_k$. For instance, we may follow A. Zorich's acceleration of Rauzy induction (see \cite{cZor1996}) or choose to induce on the first interval $I_1$. In either of these cases, our visitation matrix $B$ will actually be a product $A_1\cdots A_N$ of the $A_k$'s realized by Rauzy induction. Motivated by this, we say that a sequence $B_\ell$, $\ell\in \mathbb{N}$, is a product of the $A_k$'s if there exists an increasing sequence of integers $k_\ell$, $\ell\geq 0$, so that $k_0 = 0$ and $B_\ell = A_{k_{\ell-1}+1}A_{k_{\ell-1}+2}\cdots A_{k_{\ell}}$ for each $\ell\geq 1$. We are now able to pose Bufetov's second, more general, question. \begin{ques}[A. Bufetov] Given instead a sequence $B_\ell$, $\ell\in \mathbb{N}$, of products of the $A_k$'s, can the initial permutation $\pi$ still be determined and is it unique? \end{ques} This work is dedicated to answering this second question and its generalizations. We answer in the affirmative with our main results. Extended Rauzy induction is more general than regular Rauzy induction and is discussed in Section \ref{SSecLRI} before Lemma \ref{LemIDOCext}. 
\begin{main} If $B_1,B_2,B_3,\dots$ are consecutive matrix products defined by an infinite sequence of steps of (extended) Rauzy induction, then the initial permutation $\pi$ is unique. \end{main} Recently, J. Jenkins proved this result in \cite{cJenkinsThesis} for the $3\times 3$ matrix case. He then explored the $4\times 4$ case numerically. In the most general case, we call the inductions on $I\supsetneq I' \supsetneq I'' \supsetneq \dots$ an \emph{admissible induction sequence} if the $n\times n$ visitation matrices $A_k$ from $T^{(k-1)}$ to $T^{(k)}$ are well-defined. We are then able to answer Bufetov's question in a much broader setting. \begin{main} If visitation matrices $B_1,B_2,B_3,\dots$ are defined by an admissible induction sequence, then the initial permutation $\pi$ is unique. \end{main} \subsection*{Outline of Paper} In Section \ref{SecDef} we establish our notation and provide known results concerning IET's and related objects as well as general linear algebra. In particular, the anti-symmetric matrix $L_\pi$ is defined given $\pi$, and this matrix plays a central role here. In Section \ref{SecPF} the Perron-Frobenius\ eigenvalue and eigenvector are discussed. The main argument of that section is Corollary \ref{CorPosNeverNull}, which says that the Perron-Frobenius\ eigenvector cannot be in the nullspace of any non-zero matrix of the form $L_\pi-c L_{\pi'}$ for permutations $\pi,\pi'$ and scalar $c$. Section \ref{SecMain} begins with a reduction of Main Theorem 1 to a special case, stated as the Main Lemma. The section ends with a proof of the Main Lemma. Section \ref{SecMain2} reduces Main Theorem 2 to Main Theorem 1 by Lemma \ref{LemAdmissIsRauzy}. This lemma states that any admissible induction sequence must arise from extended Rauzy induction. Appendix \ref{SecA} provides further results concerning admissibility and induced maps which lead to the proof of Lemma \ref{LemAdmissIsRauzy} in Appendix \ref{SecB}. 
\section{Definitions}\label{SecDef} An \emph{interval} or \emph{sub-interval} is of the form $[a,b)$ for $a<b$, i.e. a non-empty subset of $\mathbb{R}$ that is closed on the left and open on the right. If $I = [a,b)$ is an interval, $|I| = b-a$ denotes its \emph{length}. For a set $C$, $\#C$ denotes its cardinality. A \emph{translation} $\phi:I \to J$ for intervals $I$ and $J$ is any function that may be expressed as $\phi(x) = x + c$ for constant $c$. If $\psi:C\to D$ is a function and $E\subseteq C$, we use the notation $\psi E$ to mean the \emph{image of $E$ by $\psi$}, or $ \psi E = \{\psi(c): c\in E\}\subseteq D.$ For $\lambda\in \mathbb{R}_+^n$, or a vector in $\mathbb{R}^n$ with all positive entries, $ |\lambda| = \lambda_1 + \dots + \lambda_n$ denotes the $1$-norm of $\lambda$. \subsection{Permutations and a Matrix} The notation in this section follows either standard definitions from algebra or the standard literature on interval exchange transformations. Let $\mathfrak{S}_n$ be the set of all permutations on $\{1,\dots,n\}$, i.e. bijections on $\{1,\dots,n\}$. \begin{defn}\label{DefIrr} The set of \emph{irreducible permutations} on $\{1,\dots,n\}$, denoted $\mathfrak{S}^0_n$, is the set of $\pi\in \mathfrak{S}_n$ so that $\pi\{1,\dots,k\} = \{1,\dots,k\}$ iff $k = n$. \end{defn} \begin{defn}\label{DefLPi} For $\pi\in \mathfrak{S}_n$, the anti-symmetric $n\times n$ matrix $L_\pi$ is given by $$ (L_{\pi})_{ij} = \RHScase{1, & i < j \mbox{ and } \pi(i) > \pi (j),\\ -1, & i > j \mbox{ and } \pi(i) < \pi(j),\\ 0, & \text{otherwise,}}$$ $1\leq i,j\leq n$. \end{defn} The proof of the Main Theorem requires that no distinct $\pi,\pi' \in \mathfrak{S}^0_n$ satisfy $L_\pi = L_{\pi'}$. This is given by the next result. \begin{lem}\label{LemLPi} The map from $\mathfrak{S}_n$ to the set of $n\times n$ matrices given by $$ \pi \mapsto L_\pi$$ is injective. 
\end{lem} \begin{proof} The result follows immediately from the relationship $$ \pi(i) - i = \sum_{\pi(j) \leq \pi(i)} 1 - \sum_{k\leq i}1 = \sum_{j=1}^n \chi_{\pi(i)\geq \pi(j)}(j) - \chi_{i\geq j}(j)= \sum_{j=1}^n (L_\pi)_{ij},$$ for all $i\in\{1,\dots,n\}$, where $\chi$ is the indicator function. \end{proof} The final two definitions simply fix notation of established concepts from linear algebra and will be used without remark in what follows. \begin{defn} If $L$ is an $n\times n$ matrix, $\mathcal{N}_L$ will denote its \emph{nullspace}, i.e. $$ \mathcal{N}_L = \{v\in \mathbb{C}^n: Lv =0\}.$$ For $\pi\in \mathfrak{S}^0_n$, $\mathcal{N}_\pi = \mathcal{N}_{L_\pi}$. \end{defn} \begin{defn} If $L$ is an $n\times n$ anti-symmetric matrix, $(\cdot,\cdot)_L$ is the \emph{bilinear form associated to $L$} given by $$ (u,v)_L = u^* L v,$$ where the last value is treated as a scalar. For $\pi \in \mathfrak{S}^0_n$, $(\cdot,\cdot)_\pi = (\cdot,\cdot)_{L_\pi}$. \end{defn} \subsection{Interval Exchange Transformations} An interval exchange transformation $T$ is an invertible transformation on an interval that divides the interval into sub-intervals of lengths $\lambda_1,\dots,\lambda_n$ and reorders them according to $\pi$. We will assume $n \geq 2$, as $T$ is the identity if $n=1$. More precisely, for fixed $\pi\in \mathfrak{S}^0_n$ and $\lambda \in \mathbb{R}_+^n$, let $\beta_j = \sum_{i \leq j} \lambda_i$ for $0\leq j \leq n$ and $I = [0,\beta_n)$, where $\beta_0 =0$ and $\beta_n = |\lambda|$. For each interval $I_j= [\beta_{j-1},\beta_j)$, then $T$ restricted to $I_j$ is just translation by a value $\omega_j$. If the $j^{th}$ interval \emph{is in position}\footnote{Many texts on interval exchange transformations let $\pi(j)$ describe the interval in position $j$ after the application of $T$.} $\pi(j)$ after the application of $T$, then $ \omega_j = \sum_{\pi(i) < \pi(j)} \lambda_i- \sum_{k< j} \lambda_k.$ We see that $\omega = L_\pi \lambda$. 
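The matrix $L_\pi$ and the row-sum identity in the proof of Lemma \ref{LemLPi} are easy to verify directly. The following sketch is our own illustration (the helper name `L_matrix` is ours):

```python
def L_matrix(pi):
    """The anti-symmetric matrix L_pi, with pi given 1-based as a list
    (pi[i-1] = pi(i))."""
    n = len(pi)
    L = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i < j and pi[i] > pi[j]:
                L[i][j] = 1
            elif i > j and pi[i] < pi[j]:
                L[i][j] = -1
    return L

pi = [3, 1, 2]                      # an irreducible permutation of {1, 2, 3}
L = L_matrix(pi)
# the row sums recover pi(i) - i, which is exactly why pi -> L_pi is injective
print([sum(row) for row in L])      # [2, -1, -1]
```

The same routine also produces the translation vector $\omega = L_\pi\lambda$ by a matrix-vector product.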
\begin{defn} The \emph{interval exchange transformation (IET)} defined by $(\pi,\lambda)$, $T=\mathcal{I}_{\pi,\lambda}$, is a map $I \to I$ defined piece-wise by $$ T(x) = x + \omega_j,\text{ for }x\in I_j,$$ $1\leq j \leq n$. \end{defn} We restrict our attention to $\pi\in \mathfrak{S}^0_n$ when defining an IET. Indeed, if $\pi \in \mathfrak{S}_n \setminus \mathfrak{S}^0_n$ then there exists $k<n$ so that $T[0,\beta_k) = [0,\beta_k)$. In this case we may reduce to studying $T$ restricted to $[0,\beta_k)$ and $[\beta_k,\beta_n)$ separately. \begin{defn}\label{DefIDOC} IET $T = \mathcal{I}_{\pi,\lambda}$ satisfies the \emph{infinite distinct orbit condition or i.d.o.c.}\footnote{This is also known as the \emph{Keane Condition}.} iff each orbit $\mathcal{O}_T(\beta_j) = \{T^k \beta_j: k\in \mathbb{Z}\}$, $1\leq j <n$, is infinite and the orbits are pairwise distinct. \end{defn} If $\lambda$ is rationally independent, meaning the only integer solution $(c_1,\dots,c_n)$ of $c_1\lambda_1 + \dots + c_n \lambda_n = 0$ is $c_1 = \dots = c_n =0$, then $T = \mathcal{I}_{\pi,\lambda}$ is an i.d.o.c. IET. Therefore, $T$ is i.d.o.c. for fixed $\pi$ and Lebesgue almost every $\lambda\in \mathbb{R}_+^n$. \begin{lem}[Keane \cite{cKea1975}]\label{LemMinimal} If $T$ is an i.d.o.c. IET on $I$, then for any sub-interval $J\subseteq I$, $\bigcup_{k=0}^\infty T^k J = I$, and for any $x\in I$ the orbit $\mathcal{O}_T(x)$ is infinite and dense. \end{lem} \subsection{Admissible Inductions} Consider any interval $I'\subseteq I$ for IET $T = \mathcal{I}_{\pi,\lambda}$, $\pi\in \mathfrak{S}^0_n$ and $\lambda\in \mathbb{R}_+^n$. Informally, we will call $I'$ \emph{admissible} if the induced map $T'$ is an IET on $n$ intervals and an $n\times n$ visitation matrix $A$ is well defined. Let $r(x) = \min\{k\in \mathbb{N}: T^kx\in I'\}$ be the \emph{return time} of $x\in I'$. 
Then the \emph{induced transformation} for $T$ on $I'$ is denoted by $T|_{I'}$ and is given by $$ T|_{I'}(x) = T^{r(x)}(x)$$ for each $x\in I'$. \begin{defn}\label{DefAdmiss} Let $T= \mathcal{I}_{\pi,\lambda}$ be an i.d.o.c. $n$-IET on $I$. The sub-interval $I'\subseteq I$ is \emph{admissible} for $T$ if there exists a partition of $I'$ into $n$ consecutive sub-intervals $I_1',I_2',\dots,I_n'$ so that for each $1\leq i \leq n$: \begin{enumerate} \item $r(x) = r(y)$ for each $x,y\in I'_i$, letting $r_i$ be this common value, \item for each $0\leq k < r_i$, $T$ restricted to $T^kI'_i$ is a translation, \item for each $0 \leq k < r_i$, $T^k I'_i \subseteq I_j$ for some $1\leq j \leq n$. \end{enumerate} The $n\times n$ \emph{visitation matrix} $A$ is given by $A_{ij} = \#\{0\leq k < r_j: T^k I_j' \subseteq I_i\}$. \end{defn} This definition of admissible is equivalent to the one given in \cite[Section 3.3]{cDolPer2013} for i.d.o.c. $T$. It follows that if $I'$ is admissible for $T$, then $T' = T|_{I'}$ is an $n$-IET, and $$ \lambda = A \lambda'$$ where $T' = \mathcal{I}_{\pi',\lambda'}$ and $\lambda'_j = |I'_j|$ for $1\leq j \leq n$. \begin{rem}\label{RemInduce} Consider i.d.o.c. $T$ on $I$ with $I''\subseteq I' \subseteq I$. It is a consequence that any two of the following statements imply the third: \begin{enumerate} \item $I'$ is admissible for $T$ on $I$, \item $I''$ is admissible for $T$ on $I$, \item $I''$ is admissible for $T'$ on $I'$, where $T'$ is the induction of $T$ on $I'$. \end{enumerate} Also, if $A_1$ is the visitation matrix of the induction from $I$ to $I'$ and $A_2$ is the visitation matrix of the induction from $I'$ to $I''$, then the product $A_1A_2$ is the visitation matrix of the induction from $I$ to $I''$. \end{rem} Please refer to Appendix \ref{SecA} for a more thorough discussion of admissible inductions. For example, Lemma \ref{LemA1} proves the remark assuming the first statement and one other hold. 
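The visitation matrix and the relation $\lambda = A\lambda'$ can be checked numerically on a small example. The sketch below is our own illustration (the rotation, the induction interval, and the stated return-time partition are our choices, not taken from the paper): the golden rotation by $a = (\sqrt{5}-1)/2$ is a $2$-IET, and we induce it on $I' = [0,a)$.

```python
import bisect
import math

a = (math.sqrt(5) - 1) / 2          # rotation by a on [0, 1) as a 2-IET
beta = [0.0, 1 - a, 1.0]            # I_1 = [0, 1-a), I_2 = [1-a, 1)

def T(x):
    return x + a if x < 1 - a else x + a - 1

# Induce on I' = [0, a).  One checks that the return-time partition is
# I'_1 = [0, 1-a) with r_1 = 2 and I'_2 = [1-a, a) with r_2 = 1.
pieces = [(0.0, 1 - a, 2), (1 - a, a, 1)]
A = [[0, 0], [0, 0]]
for j, (lo, hi, r) in enumerate(pieces):
    x = (lo + hi) / 2               # a midpoint tracks the whole piece
    for k in range(r):
        i = bisect.bisect_right(beta, x) - 1
        A[i][j] += 1                # T^k I'_j lies inside I_i
        x = T(x)
print(A)                            # [[1, 0], [1, 1]]

# the visitation relation lambda = A lambda'
lam, lamp = [1 - a, a], [1 - a, 2 * a - 1]
print([abs(A[i][0] * lamp[0] + A[i][1] * lamp[1] - lam[i]) < 1e-12 for i in range(2)])
```

Composing two such inductions multiplies the resulting matrices, as in Remark \ref{RemInduce}.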
\subsection{Rauzy Induction} Rauzy induction was defined in \cite{cRau1979}, and it may be seen as an admissible induction over an appropriately chosen sub-interval $I'$. We recall the definition here and discuss some results relevant for our work. For $\pi \in \mathfrak{S}^0_n$, let $m = \pi^{-1}(n)$ denote the interval placed last by $\pi$. Assume that $\lambda_n \neq \lambda_m$ and let $I' = [0, \beta_n - \min\{\lambda_m,\lambda_n\})$. The induced transformation $T'= T|_{I'}$ is also an IET and we give the description below for $T'= \mathcal{I}_{\pi',\lambda'}$. If $\lambda_n > \lambda_m$, then $$ \pi'(i) = \RHScase{\pi(i), & \pi(i) \leq \pi(n),\\ \pi(n) +1, & \pi(i)=n,\\ \pi(i)+1, & \pi(n) < \pi(i) < n, } \text{ and } \lambda'_i = \RHScase{ \lambda_n - \lambda_m, & i = n,\\ \lambda_i, & i < n, }$$ for $1\leq i \leq n$. If $\lambda_n<\lambda_m$, then instead $$ \pi'(i) = \RHScase{\pi(i), & i \leq m,\\ \pi(n), & i = m+1,\\ \pi(i-1), & i > m+1, } \text{ and } \lambda'_i = \RHScase{ \lambda_i, & i <m,\\ \lambda_m - \lambda_n, & i = m,\\ \lambda_n, & i = m+1,\\ \lambda_{i-1}, & i >m+1, }$$ for $1\leq i \leq n$. \begin{defn} Consider $T$ defined by $\pi$ and $\lambda$ with $m$ as above. If $\lambda_n>\lambda_m$, the change from $T$ to $T'$ is a move of \emph{Rauzy induction of type $0$}. If $\lambda_n<\lambda_m$, the change from $T$ to $T'$ is a move of \emph{Rauzy induction of type $1$}. If $\lambda_n = \lambda_m$, Rauzy induction is not well defined. \end{defn} For fixed $\pi\in \mathfrak{S}^0_n$, the condition $\lambda_n = \lambda_m$ defines a set of zero Lebesgue measure in $\mathbb{R}_+^n$. Given $\pi$ and the type of Rauzy induction $\varepsilon$, we may define the visitation matrix $A = A_{(\pi,\varepsilon)}$ as given in Definition \ref{DefAdmiss}. 
If $\varepsilon = 0$, then $$ A_{ij} = \RHScase{1, & i = j,\\ 1, & i=n,~j = m,\\ 0, & \text{otherwise,}}$$ and if $\varepsilon = 1$, then $$ A_{ij} = \RHScase{1, & i=j\leq m,\\ 1, & j=i+1 >m,\\ 1, & i =n,~j = m+1,\\ 0, & \text{otherwise.}}$$ Suppose we may act by $N$ consecutive steps of Rauzy induction on $T = \mathcal{I}_{\pi,\lambda}$, and let $T,T',T'',\dots, T^{(N)}$ be the resulting IET's at each step where $T^{(k)} = \mathcal{I}_{\pi^{(k)},\lambda^{(k)}}$. Let $\varepsilon_k$ be the type of induction from $T^{(k-1)}$ to $T^{(k)}$. If $$ B = A_{(\pi,\varepsilon_1)}A_{(\pi',\varepsilon_2)}\cdots A_{(\pi^{(N-1)},\varepsilon_N)}$$ then $ \lambda = B \lambda^{(N)}.$ We may verify that if $A= A_{(\pi,\varepsilon)}$ and $\pi'$ is the result of the type $\varepsilon$ induction on $\pi$, then $ A^* L_\pi A = L_{\pi'}$. For a proof of this with different notation, see \cite[Lemma 10.2]{cVia2006}. It follows that if $B$ is defined by $N$ consecutive steps of induction with initial permutation $\pi$ and ending at $\pi^{(N)}$, then \begin{equation}\label{EqBLB} B^* L_{\pi} B = L_{\pi^{(N)}}. \end{equation} We finish this section by answering a question: for which IET's can Rauzy induction be applied infinitely many times? See Section 4 of \cite{cYoc2006} for a treatment of this result. \begin{lem}\label{LemIDOC} If $\pi\in \mathfrak{S}^0_n$, $\lambda\in \mathbb{R}_+^n$ and $T = \mathcal{I}_{\pi,\lambda}$, then the following are equivalent: \begin{enumerate} \item\label{LemIDOCa} $T$ is i.d.o.c., and \item\label{LemIDOCb} $T$ admits infinitely many steps of Rauzy induction. \end{enumerate} \end{lem} The following is shown in Sections 1.2.3--1.2.4 of \cite{cMarMouYoc2005}. 
\begin{lem}\label{LemIDOC2} If $T = \mathcal{I}_{\pi,\lambda}$ admits infinitely many steps of Rauzy induction and $A_k = A_{(\pi^{(k-1)},\varepsilon_k)}$ are the corresponding matrices, then for each $j\in \mathbb{N}$ there exists $k_0 = k_0(j)\in \mathbb{N}$ so that for all $k>k_0$, $$ A_{[j,j+k]} = A_j A_{j+1} \cdots A_{j+k}$$ is a matrix with all positive entries. \end{lem} \subsection{Left Rauzy Induction}\label{SSecLRI} Left Rauzy induction was defined in \cite{cVe1990} and describes inducing on $T = \mathcal{I}_{\pi,\lambda}$ by $I' = [\min\{\lambda_1,\lambda_{m'}\},|\lambda|)$, where $m' = \pi^{-1}(1)$ for $\pi\in \mathfrak{S}^0_n$ and $\lambda\in \mathbb{R}_+^n$. In other words, instead of removing a sub-interval from the right as in (right) Rauzy induction, we remove one from the left. We will give the explicit definitions and then show how this type of induction relates to (right) Rauzy induction. \begin{defn} \emph{Left Rauzy induction} is the result of taking the first return of $T = \mathcal{I}_{\pi,\lambda}$ on $I'$ as defined above. The induction is \emph{type $\tilde{0}$} iff $\lambda_1>\lambda_{m'}$ and \emph{type $\tilde{1}$} iff $\lambda_1< \lambda_{m'}$. The induction is not well defined if $\lambda_1 = \lambda_{m'}$. \end{defn} Let $T' = \mathcal{I}_{\pi',\lambda'}$ be the resulting IET by this induction (up to a translation so that $I'$ begins at $0$). If the induction is type $\tilde{0}$, then $$ \pi'(i) = \RHScase{ \pi(1)-1, & i = m',\\ \pi(i)-1, & 1<\pi(i)<\pi(1),\\ \pi(i), & \pi(i) \geq \pi(1), } \text{ and } \lambda'_i = \RHScase{ \lambda_1 - \lambda_{m'}, & i= 1,\\ \lambda_i, & i>1. 
} $$ Likewise, if the induction is type $\tilde{1}$, then $$ \pi'(i) = \RHScase{\pi(i+1), & i < m'-1,\\ \pi(1), & i = m'-1,\\ \pi(i), & i \geq m',\\ } \text{ and } \lambda_i' = \RHScase{ \lambda_{i+1}, & i< m'-1,\\ \lambda_{1}, & i= m'-1,\\ \lambda_{m'} - \lambda_1, & i= m',\\ \lambda_i, & i > m'.} $$ Let $\tau_n$ be given by $$ \tau_n(i) = n - i\text{ for } 0\leq i \leq n. $$ For $\pi\in \mathfrak{S}^0_n$, let $\pi_\tau$ be given by $$ \pi_\tau = \tau_{n+1} \circ \pi \circ \tau_{n+1},$$ noting that $\pi_\tau\in \mathfrak{S}^0_n$ as well. If $\tilde{\varepsilon} \pi$ is the result of type $\tilde{\varepsilon}$ induction on $\pi$ and $\varepsilon\pi_\tau$ is the result of type $\varepsilon$ induction on $\pi_\tau$, then $$ \tilde{\varepsilon}\pi = (\varepsilon \pi_\tau)_\tau. $$ For $\lambda\in\mathbb{R}_+^n$, let $\lambda_\tau$ be given by $$ (\lambda_\tau)_i = \lambda_{\tau_{n+1}(i)}. $$ If $\tilde{\varepsilon}\lambda$ and $\varepsilon\lambda_\tau$ are defined analogously to $\tilde{\varepsilon} \pi$ and $\varepsilon\pi_\tau$, then $$ \tilde{\varepsilon}\lambda = (\varepsilon\lambda_\tau)_\tau.$$ If we define the $n\times n$ permutation matrix $P_n$ by $$ (P_n)_{ij} = \RHScase{1, & i = \tau_{n+1}(j),\\ 0, & \text{otherwise,}} $$ then we see that $A_{(\pi,\tilde{\varepsilon})} = P_n A_{(\pi_\tau,\varepsilon)} P_n$, where $A_{(\pi,\tilde{\varepsilon})}$ is the visitation matrix that satisfies $\lambda = A_{(\pi,\tilde{\varepsilon})} \cdot \tilde{\varepsilon}\lambda$. Furthermore, $$ L_{\pi} = P_n L_{\pi_\tau} P_n,$$ and so $A^*_{(\pi,\tilde{\varepsilon})} L_\pi A_{(\pi,\tilde{\varepsilon})} = L_{\tilde{\varepsilon}\pi}$ as a direct consequence. Therefore, if $A_1,\dots, A_N$ are visitation matrices given by consecutive steps of \emph{extended Rauzy induction}, i.e. left and/or right Rauzy induction, and $B = A_1\cdots A_N$ is the product, then $$ B^* L_\pi B = L_{\pi^{(N)}} $$ where $\pi$ is the initial permutation and $\pi^{(N)}$ is the resulting permutation after the $N$ steps. 
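The conjugation identity $A^* L_\pi A = L_{\pi'}$ can be checked directly on a single step. The following sketch is our own worked example (the permutations are our choice): the type-$0$ move on $\pi = (3,2,1)$, where $m = \pi^{-1}(3) = 1$, yields $\pi' = (2,3,1)$ and the visitation matrix $A = I + E_{3,1}$.

```python
def L_matrix(pi):
    """The anti-symmetric matrix L_pi, with pi given 1-based as a list."""
    n = len(pi)
    return [[1 if i < j and pi[i] > pi[j]
             else -1 if i > j and pi[i] < pi[j]
             else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

pi = [3, 2, 1]                           # m = pi^{-1}(n) = 1
pi_new = [2, 3, 1]                       # result of the type-0 move on pi
A = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]    # visitation matrix A_{(pi,0)}
lhs = matmul(transpose(A), matmul(L_matrix(pi), A))
print(lhs == L_matrix(pi_new))           # True: A* L_pi A = L_{pi'}
```

Chaining such checks over several steps verifies the product identity $B^* L_\pi B = L_{\pi^{(N)}}$ numerically.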
The following is a modification of Lemma \ref{LemIDOC2} and has a similar proof. However, the notation from \cite{cMarMouYoc2005} is significantly different and will not be included here. \begin{lem}\label{LemIDOCext} If $T = \mathcal{I}_{\pi,\lambda}$ admits infinitely many steps of extended Rauzy induction and $A_k$ are the corresponding matrices, then for each $j\in \mathbb{N}$ there exists $k_0 = k_0(j)\in \mathbb{N}$ so that for all $k>k_0$, $$ A_{[j,j+k]} = A_j A_{j+1} \cdots A_{j+k}$$ is a matrix with all positive entries. \end{lem} \subsection{Veech's result for $\mathcal{N}_\pi$}\label{SSecVeech} The main result in this section is shown in \cite[Lemma 5.7]{cVe1984i}. Please refer to that work as well as \cite{cVe1982} for the original definitions and proofs. Consider each $\pi\in \mathfrak{S}^0_n$ to be extended so that $\pi(0) = 0$ and $\pi(n+1) = n+1$. Note that $\pi_\tau$ respects this extension as well. Let $\sigma_\pi$ be a function on $\{0,\dots,n\}$ given by $$ \sigma_\pi(i) = \pi^{-1}(\pi(i) + 1) -1, $$ as in \cite{cVe1982}. Let $\Sigma(\pi)$ be the partition of $\{0,1,\dots,n\}$ given by orbits of $\sigma_\pi$. For each $S\in \Sigma(\pi)$, let $b_S\in \mathbb{Z}^n$ be given by $$ (b_S)_i = \chi_S(i-1) - \chi_S(i). $$ It was shown in \cite[Lemma 5.3]{cVe1984i} that $$ \#\Sigma(\pi) = \dim\mathcal{N}_\pi + 1, $$ and \cite[Proposition 5.2]{cVe1984i} states that $$ \mathrm{span}\{b_S:S\in \Sigma(\pi)\} = \mathcal{N}_\pi. $$ \begin{lem}[Veech \cite{cVe1984i}]\label{LemVeechPaper} For $\pi\in \mathfrak{S}^0_n$ and $\varepsilon\in \{0,1\}$, there exists a bijection $\varepsilon:\Sigma(\pi) \to \Sigma(\varepsilon\pi)$ so that $$ A_{(\pi,\varepsilon)} b_S = b_{\varepsilon S}$$ for each $S\in \Sigma(\pi)$. \end{lem} Recall $\tau_n$, $\tau_{n+1}$ and $P=P_n$ from the previous section. By direct computation, we see that $$ \sigma_{\pi_\tau} = \tau_n \circ \sigma_{\pi}^{-1} \circ \tau_n, $$ and so $\Sigma(\pi_\tau) = \tau_n \Sigma(\pi)$. 
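These objects are easy to compute directly. The sketch below is our own illustration (the helper name `sigma_orbits` is ours): for $\pi = (3,2,1)$, the orbits of $\sigma_\pi$ are $\{0,2\}$ and $\{1,3\}$, so $\#\Sigma(\pi)=2$ matches $\dim\mathcal{N}_\pi + 1$, and each vector $b_S$ indeed lies in $\mathcal{N}_\pi$.

```python
def sigma_orbits(pi):
    """Orbits Sigma(pi) of sigma_pi on {0,...,n}, with pi given 1-based
    as a list and extended so that pi(0) = 0 and pi(n+1) = n+1."""
    n = len(pi)
    ext = {0: 0, n + 1: n + 1}
    ext.update({i + 1: pi[i] for i in range(n)})
    inv = {v: k for k, v in ext.items()}
    sigma = {i: inv[ext[i] + 1] - 1 for i in range(n + 1)}
    seen, orbits = set(), []
    for start in range(n + 1):
        if start not in seen:
            orbit, i = set(), start
            while i not in orbit:
                orbit.add(i)
                seen.add(i)
                i = sigma[i]
            orbits.append(orbit)
    return orbits

pi = [3, 2, 1]
L = [[0, 1, 1], [-1, 0, 1], [-1, -1, 0]]     # L_pi for pi = (3,2,1)
for S in sigma_orbits(pi):
    b = [(1 if i - 1 in S else 0) - (1 if i in S else 0) for i in range(1, 4)]
    print(b, [sum(L[i][j] * b[j] for j in range(3)) for i in range(3)])  # L_pi b_S = 0
```

Here the two orbits yield $b_S = \pm(1,-1,1)$, spanning the one-dimensional nullspace of $L_\pi$, as in \cite[Proposition 5.2]{cVe1984i}.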
\begin{cor}\label{CorVeechPaper} For $\pi\in \mathfrak{S}^0_n$ and $\varepsilon\in \{0,1\}$, there exists a bijection $\tilde{\varepsilon}:\Sigma(\pi) \to \Sigma(\varepsilon\pi)$ so that $$ A_{(\pi,\tilde{\varepsilon})} b_S = b_{\tilde{\varepsilon} S}$$ for each $S\in \Sigma(\pi)$. \end{cor} \begin{proof} For each $S\in \Sigma(\pi)$ and $1\leq i \leq n$, $$ \begin{array}{rcl} (Pb_S)_i & = & (b_S)_{n+1-i}\\ & = & \chi_S(n-i) - \chi_S(n+1-i)\\ & = & \chi_{\tau_n S}(\tau_n(n-i)) - \chi_{\tau_n S}(\tau_n(n+1-i))\\ & = & \chi_{\tau_n S}(i) - \chi_{\tau_n S}(i-1)\\ & = & - (b_{\tau_n S})_i. \end{array}$$ And so $$ A_{(\pi,\tilde{\varepsilon})} b_S = P A_{(\pi_\tau,\varepsilon)} P b_S = - P A_{(\pi_\tau,\varepsilon)} b_{\tau_nS}.$$ By Lemma \ref{LemVeechPaper} we continue, $$ - P A_{(\pi_\tau,\varepsilon)} b_{\tau_nS} = -P b_{\varepsilon (\tau_n S)} = b_{\tau_n(\varepsilon(\tau_nS))}.$$ Therefore the desired bijection is $\tilde{\varepsilon} = \tau_n \circ \varepsilon \circ \tau_n$. \end{proof} \subsection{Invariant Spaces} For $n\times n$ matrix $B$, a subspace $V\subseteq \mathbb{C}^n$ is \emph{$B$-invariant} if $BV \subseteq V$. If $B$ is invertible then $V$ is $B$-invariant iff $BV = V$. An \emph{eigenbasis} of $B$ for $V$ is a basis $\{u_1,\dots,u_m\}$ of $V$ so that each $u_j$ is an eigenvector for $B$. Recall that an \emph{eigenvector} for $B$ is a non-zero vector $u$ with a corresponding \emph{eigenvalue} $\alpha$ such that $u\in \mathcal{N}_{B_\alpha^p}$ for some $p\in\mathbb{N}$ where $B_\alpha = B - \alpha I$ for identity matrix $I$. The lemma and corollary in this section allow us to find an eigenbasis for $\mathbb{C}^n$ that includes bases of invariant subspaces. The definition that follows then correctly associates to a $B$-invariant subspace eigenvalues. \begin{lem} Let $B$ be an $n\times n$ matrix and $V,W\subseteq \mathbb{C}^n$ be subspaces such that $W\subseteq V$ and $V,W$ are each $B$-invariant. 
There exists an \emph{eigenbasis} $\{u_1,\dots,u_m\}$ of $B$ for $V$ so that $\{u_1,\dots,u_{m'}\}$ is a basis for $W$, where $m' = \dim W$. \end{lem} \begin{cor}\label{CorSplit} If $V,W\subseteq \mathbb{C}^n$ are $B$-invariant subspaces, $m = \dim V$ and $m' = \dim W$, then there exists an eigenbasis $\{u_1,\dots,u_n\}$ of $B$ for $\mathbb{C}^n$ such that \begin{itemize} \item $\{u_{n-m-m'+\ell+1},\dots, u_{n-m'+\ell}\}$ is a basis for $V$, \item $\{u_{n-m'+1},\dots, u_{n-m'+\ell}\}$ is a basis for $V\cap W$, \item $\{u_{n-m'+1},\dots, u_n\}$ is a basis for $W$, \end{itemize} where $\ell = \dim(V\cap W)$. \end{cor} \begin{defn} If $V\subseteq \mathbb{C}^n$ is $B$-invariant, and $\alpha_1,\dots,\alpha_m$ are the respective eigenvalues for the eigenbasis in the previous lemma or corollary, then they are the \emph{eigenvalues of $B$ over $V$}. \end{defn} \section{The Perron-Frobenius\ Eigenvalue}\label{SecPF} We begin with a specific case of a fundamental result. See \cite[Theorem 0.16]{cWalters2000} for a more general version of this theorem. \begin{nnthm}[Perron-Frobenius\ Theorem] If $B$ is a positive matrix, then there exists a positive eigenvalue $\alpha$ for $B$ so that for all other eigenvalues $\alpha'$ of $B$, $\alpha>|\alpha'|$. Furthermore, there exists a positive eigenvector $u$ for $B$ with eigenvalue $\alpha$ and any eigenvector $u'$ for $B$ with eigenvalue $\alpha$ is a scalar multiple of $u$. \end{nnthm} We call $\alpha$ the \emph{Perron-Frobenius\ eigenvalue} and such a positive vector $u$ a \emph{Perron-Frobenius\ eigenvector}. If $B$ is a positive integer matrix, then $\alpha>1$. Corollary \ref{CorNullspace} tells us that $u$ is not in $\mathcal{N}_\pi$ for any $\pi\in \mathfrak{S}^0_n$. Then Corollary \ref{CorPosNeverNull} forbids $u$ from being in $\mathcal{N}_{L}$ for any non-zero matrix $L = L_\pi - c L_{\pi'}$.
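These facts are easy to spot-check numerically. The sketch below is an illustration in Python (the positive matrix is an arbitrary example, and a standard $\pm1$ inversion convention for $L_\pi$ is assumed); it extracts the Perron-Frobenius eigenvalue and eigenvector of a positive matrix and checks that the positive eigenvector is not annihilated by $L = L_\pi - cL_{\pi'}$.

```python
import numpy as np

def L_pi(p):
    # assumed antisymmetric convention: +1 above / -1 below the diagonal on inversions
    n = len(p)
    return np.array([[1 if i < j and p[i] > p[j] else
                      -1 if i > j and p[i] < p[j] else 0
                      for j in range(n)] for i in range(n)])

B = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 1.0]])          # arbitrary positive matrix

vals, vecs = np.linalg.eig(B)
k = int(np.argmax(np.abs(vals)))
alpha, u = vals[k].real, vecs[:, k].real
u = u if u.sum() > 0 else -u             # fix the overall sign of the eigenvector

assert alpha > 1 and np.all(u > 0)       # Perron-Frobenius: dominant and positive

# a positive vector is never in the nullspace of L = L_pi - c*L_pi'
for c in [-1.0, 0.5, 1.0, 2.0]:
    L = L_pi([3, 2, 1]) - c * L_pi([2, 3, 1])
    assert np.linalg.norm(L @ u) > 1e-9
```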
Finally, Corollary \ref{CorPFUnique} tells us that for a fixed eigenbasis $\{u_1,\dots,u_n\}$ of $B$ with $u_1 = u$ there exists a unique $u_j$ so that $(u_1,u_j)_\pi\neq 0$. \begin{defn} An \emph{extended Rauzy cycle} at $\pi$ is a finite sequence of consecutive steps of extended Rauzy induction that begins and ends at $\pi$. \end{defn} \begin{lem}\label{LemVeech} If $B$ is described by an extended Rauzy cycle at $\pi\in \mathfrak{S}^0_n$, then there exists a basis $\{b_1,\dots,b_m\}$ of $\mathcal{N}_\pi$ and $p\in \mathbb{N}$ so that $$ B^p b_i = b_i,$$ for each $i\in \{1,\dots,m\}$. \end{lem} \begin{proof} As in Section \ref{SSecVeech}, let $b_S$ be defined for each $S\in \Sigma(\pi)$. By applying Lemma \ref{LemVeechPaper} and Corollary \ref{CorVeechPaper} to the product $B$, we have a bijection $d$ on $\Sigma(\pi)$ so that $ B b_S = b_{dS}$ for each $S\in \Sigma(\pi)$. Let $p$ be any power such that $d^p$ is the identity on $\Sigma(\pi)$, and choose the $b_1,\dots,b_m$ as a subset of the $b_S$'s that form a basis of $\mathcal{N}_\pi$. Then for any $i\in \{1,\dots,m\}$, $$ B^p b_i = B^p b_S = b_{d^pS} = b_S = b_i, $$ where $b_i = b_S$. \end{proof} \begin{cor}\label{CorNullspace} Let $B$ be given by an extended Rauzy cycle at $\pi$. If $\beta_1,\dots,\beta_m$ are the eigenvalues of $B$ over $\mathcal{N}_\pi$, $\pi\in \mathfrak{S}^0_n$, then each $\beta_j$ is a root of unity. Furthermore, if $B$ is a positive integer matrix with Perron-Frobenius\ eigenvalue $\alpha>1$, then $\{\beta_1,\dots,\beta_m\}\cap\{\alpha,1/\alpha\} = \emptyset$. \end{cor} \begin{lem}\label{LemPositiveNeverNull} If $L = L_{\pi} - c L_{\pi'}$ for distinct $\pi,\pi'\in \mathfrak{S}^0_n$ and real $c$, then there exists $i$ such that row $i$ of $L$ is non-zero and either non-positive or non-negative. \end{lem} \begin{proof} We consider the value of $c$. If $c\leq 0$, then the first row of $L$ is at least the first row of $L_{\pi}$. The claim then holds for $i=1$.
If $c>1$ and $L' = L_{\pi'} - \frac{1}{c} L_{\pi}$ then $L = -c L'$. If we find a row of $L'$ that is non-zero and single-signed, then the same row satisfies the claim for $L$ (but with the opposite sign). We therefore consider two remaining cases: $0<c<1$ and $c=1$. If $0<c<1$, let $i$ satisfy $\pi(i) = n$. Because $\pi\in \mathfrak{S}^0_n$, $i<n$. Then if $i>j$, $$ L_{ij} = (L_\pi)_{ij} - c (L_{\pi'})_{ij} = - c (L_{\pi'})_{ij} \geq 0.$$ If $i<j$, $$ L_{ij} = (L_\pi)_{ij} - c (L_{\pi'})_{ij} = 1 - c (L_{\pi'})_{ij} >0.$$ Therefore row $i$ is non-zero and is non-negative. If $c=1$, then let $i$ be such that $\pi(i) \neq \pi'(i)$ and that maximizes $\pi(i)$, and let $k = \pi(i)$. Note that $\pi'(i)<k$: if instead $\pi'(i)>k$, then the index $i''$ with $\pi(i'') = \pi'(i)$ would satisfy $\pi(i'')>k$, hence $\pi(i'') = \pi'(i'')$ by the maximality of $k$, and so $i''=i$, a contradiction. If $\pi(j) > k$, then $\pi(j) = \pi'(j)$ and so $$ (L_\pi)_{ij} = (L_{\pi'})_{ij} \Rightarrow L_{ij} = 0.$$ If $\pi(j) < k$ and $i>j$, then $$ L_{ij} = (L_\pi)_{ij} - (L_{\pi'})_{ij} = -(L_{\pi'})_{ij} \geq 0.$$ If $\pi(j) < k$ and $i<j$, then $$ L_{ij} = (L_\pi)_{ij} - (L_{\pi'})_{ij} = 1 - (L_{\pi'})_{ij} \geq 0.$$ Row $i$ of $L$ is non-negative, and we must verify that it is non-zero. Let $i'\neq i$ satisfy $\pi'(i') = k$. By definition, $\pi(i')<\pi(i)$ and $\pi'(i') > \pi'(i)$. Either \begin{itemize} \item $i<i'$ and so $(L_\pi)_{ii'} = 1$ and $(L_{\pi'})_{ii'} =0$, or \item $i>i'$ and so $(L_\pi)_{ii'} = 0$ and $(L_{\pi'})_{ii'} = -1$. \end{itemize} Therefore, $L_{ii'} =1$ and row $i$ of $L$ is non-zero. \end{proof} \begin{cor}\label{CorPosNeverNull} If $v$ is a positive vector, then it does not belong to the nullspace of $$ L = L_\pi - c L_{\pi'}$$ for distinct $\pi,\pi'\in \mathfrak{S}^0_n$ and complex $c$. \end{cor} \begin{proof} Suppose $c$ is real. By Lemma \ref{LemPositiveNeverNull}, there exists a row $i$ that is non-zero and non-negative (resp. non-positive). Therefore $(L v)_i >0$ (resp. $(Lv)_i<0$) and so $Lv \neq 0$. If $c$ is non-real, then by Lemma \ref{LemPositiveNeverNull} the real vector $L_{\pi'} v$ is non-zero.
Therefore $$ L v = (L_\pi - c L_{\pi'}) v$$ must have a non-zero imaginary component and cannot be zero. \end{proof} \begin{lem}\label{LemBF} Let $B^*LB = L$ for anti-symmetric matrix $L$ and matrix $B$. If $u,u'$ are eigenvectors of $B$ with corresponding eigenvalues $\alpha,\alpha'$, and $(u,u')_L \neq 0$, then $\overline{\alpha} \alpha' = 1$. \end{lem} \begin{proof} Recall that the \emph{order} of an eigenvector $u$ with eigenvalue $\alpha$ is the minimum $p\in \mathbb{N}$ so that $u\in \mathcal{N}_{B_\alpha^p}$, $B_\alpha = B - \alpha I$. We proceed by induction, first on the order of $u$ and then on the sum of the orders of $u$ and $u'$. If $u,u'$ are both \emph{true eigenvectors}, i.e. order $1$, then $$ (u,u')_L = (Bu,Bu')_L = \overline{\alpha}\alpha' (u,u')_L,$$ and $\overline{\alpha} \alpha' =1$ as $(u,u')_L \neq 0$ by assumption. If $u$ is higher order and $u'$ is order $1$, let $w$ be defined by $Bu = \alpha u +w$ and note that $w$ is one order lower than $u$ for the same eigenvalue $\alpha$. Then $$ (u,u')_L = (Bu,Bu')_L = \overline{\alpha}\alpha' (u,u')_L + \alpha' (w,u')_L.$$ If $(w,u')_L\neq 0$, then the claim follows by induction. If $(w,u')_L = 0$, then the claim follows just as in the base case. If $u,u'$ are both of higher order, let $w$ be defined as before and let $w'$ be defined by $Bu' = \alpha' u' + w'$. Then $(u,u')_L = (Bu,Bu')_L$ which is the sum $$ \overline{\alpha}\alpha' (u,u')_L + \alpha' (w,u')_L + \overline{\alpha} (u,w')_L + (w,w')_L.$$ If any term but the first is non-zero, then the claim follows by induction. Note that $(v,v')_L = -\overline{(v',v)_L}$, so the vanishing of the pairing is symmetric in its arguments. If only the first term is non-zero, then the claim is verified as before. \end{proof} \begin{cor}\label{CorPFUnique} Let $B^* L B = L$ for anti-symmetric matrix $L$ and positive matrix $B$. Let $u_1,\dots,u_n$ be an eigenbasis for $B$ with respective eigenvalues $\alpha_1,\dots,\alpha_n$ such that $u_{n-m+1},\dots,u_n$ forms a basis of $\mathcal{N}_L$ of dimension $m < n$.
If $\alpha_1>0$ is the Perron-Frobenius\ eigenvalue with positive eigenvector $u_1$ (in particular $u_1\notin \mathcal{N}_L$), then there is a unique $j \leq n-m$ so that $(u_1,u_j)_L \neq 0$. This is also the unique $j \leq n-m$ so that $\alpha_j = 1/\alpha_1$. \end{cor} \begin{proof} The final claim follows from the first by Lemma \ref{LemBF}. Because $u_1 \notin \mathcal{N}_L$, there must exist $u_j$ so that $(u_1,u_j)_L = c \neq 0$. By Lemma \ref{LemBF}, $\alpha_j = 1/\alpha_1$. Because $u_j\notin \mathcal{N}_L$, $j\leq n-m$. Suppose by contradiction there exists $j' \neq j$ so that $(u_1,u_{j'})_L = c'\neq 0$. Again $j'\leq n-m$ and by Lemma \ref{LemBF}, $\alpha_{j'} = \alpha_j$. If there exists $i \neq 1$ so that either $(u_i,u_{j'})_L \neq 0$ or $(u_i,u_j)_L \neq 0$, then $\alpha_i = \alpha_1$, a contradiction to the simplicity of the Perron-Frobenius\ eigenvalue. Therefore $(u_k,u_j)_L$ is non-zero iff $k=1$ and the same statement holds for $(u_k,u_{j'})_L$. Let $u = c' u_j - c u_{j'}$ and note that $$ (u_1,u)_L = c'(u_1,u_j)_L - c (u_1,u_{j'})_L = c'c - c c' = 0.$$ Because $(u_k,u)_L=0$ for $k\neq 1$ as well, $u\in \mathcal{N}_L$. This implies a linear dependence between $u_j$, $u_{j'}$ and $u_{n-m+1},\dots,u_n$, a contradiction. \end{proof} \section{Proof of Main Theorem 1}\label{SecMain} \begin{mainlem} If $\tilde{B}$ is a positive matrix defined by an extended Rauzy cycle, then the initial permutation $\pi\in \mathfrak{S}^0_n$ is unique. \end{mainlem} \begin{proof}[Proof of Main Theorem 1] Let $B_1,B_2,\dots$ be the matrix products defined by an infinite sequence of extended Rauzy induction steps, and assume by contradiction that there exist distinct $\pi,\pi'\in \mathfrak{S}^0_n$ such that infinite sequences of induction steps beginning at $\pi$ and at $\pi'$ each exist and define the $B_k$'s.
There exist $\pi_k$'s and $\pi'_k$'s, $k\in \mathbb{N}_0$, so that $\pi_0 = \pi$, $\pi'_0 = \pi'$ and for each $k\in \mathbb{N}$, $$ B_k^*L_{\pi_{k-1}}B_k = L_{\pi_k} \text{ and }B_k^*L_{\pi'_{k-1}}B_k = L_{\pi'_k}.$$ Because the $B_k$'s are invertible, Lemma \ref{LemLPi} and induction on $k$ give $L_{\pi_k} \neq L_{\pi'_k}$, and so $\pi_k \neq \pi'_k$ for all $k\in \mathbb{N}_0$; moreover, the $\pi_k$'s and $\pi'_k$'s are uniquely determined by $\pi$, $\pi'$ and the $B_k$'s. There exist distinct $\tilde{\pi},\tilde{\pi}'\in \mathfrak{S}^0_n$ so that $\pi_k = \tilde{\pi}$ and $\pi'_k = \tilde{\pi}'$ simultaneously for infinitely many $k$. By Lemma \ref{LemIDOCext} we may choose such $k_0,k_1$ so that $k_0<k_1$ and $\tilde{B} = B_{k_0+1} B_{k_0+2} \cdots B_{k_1}$ is positive. Therefore $\tilde{B}$ is a positive matrix defined by an extended Rauzy cycle at $\tilde{\pi}$ and also an extended Rauzy cycle at $\tilde{\pi}'$. By the Main Lemma $\tilde{\pi} = \tilde{\pi}'$, a contradiction. \end{proof} \begin{proof}[Proof of Main Lemma] Suppose $\tilde{B}$ is a positive integer matrix that is described by two extended Rauzy cycles: one each at distinct $\tilde{\pi},\tilde{\pi}'\in \mathfrak{S}^0_n$. Then by equation \eqref{EqBLB}, $$ \tilde{B}^* L_{\tilde{\pi}} \tilde{B} = L_{\tilde{\pi}} \text{ and } \tilde{B}^* L_{\tilde{\pi}'} \tilde{B} = L_{\tilde{\pi}'}.$$ Let $m = \dim(\mathcal{N}_{\tilde{\pi}})$, $m' = \dim(\mathcal{N}_{\tilde{\pi}'})$ and $\ell = \dim(\mathcal{N}_{\tilde{\pi}} \cap \mathcal{N}_{\tilde{\pi}'})$.
Using Corollary \ref{CorSplit}, let $\{u_1,\dots,u_n\}$ be an eigenbasis for $\tilde{B}$ with respective eigenvalues $\alpha_1,\dots,\alpha_n$ such that \begin{itemize} \item $\alpha_1>1$ is the Perron-Frobenius\ eigenvalue and $u_1$ is positive, \item $\{u_{n-m-m'+\ell+1},\dots, u_{n-m'+\ell}\}$ is a basis for $\mathcal{N}_{\tilde{\pi}}$, \item $\{u_{n-m'+1},\dots, u_{n-m'+\ell}\}$ is a basis for $\mathcal{N}_{\tilde{\pi}} \cap \mathcal{N}_{\tilde{\pi}'}$, and \item $\{u_{n-m'+1},\dots, u_n\}$ is a basis for $\mathcal{N}_{\tilde{\pi}'}$. \end{itemize} Because $u_1$ is not in $\mathcal{N}_{\tilde{\pi}}$ there must exist a unique $j\leq n + \ell - m - m'$ so that $(u_1,u_j)_{\tilde{\pi}}\neq 0$ by Corollary \ref{CorPFUnique}. By Corollary \ref{CorNullspace} this is also the unique $1\leq j\leq n$ so that $\alpha_j = 1/\alpha_1$. It follows that $(u_1,u_i)_{\tilde{\pi}'} \neq 0$ iff $i = j$ as well, and $j \leq n + \ell -m- m'$. Let $c_1 = (u_1,u_j)_{\tilde{\pi}}$, $c_2 = (u_1,u_j)_{\tilde{\pi}'}$ and $L = L_{\tilde{\pi}} - \frac{c_1}{c_2} L_{\tilde{\pi}'}$. For $i\neq j$, $$(u_1,u_i)_L = (u_1,u_i)_{\tilde{\pi}} - \frac{c_1}{c_2} (u_1,u_i)_{\tilde{\pi}'} = 0 - \frac{c_1}{c_2} 0 = 0$$ and $$ (u_1,u_j)_L = (u_1,u_j)_{\tilde{\pi}} - \frac{c_1}{c_2} (u_1,u_j)_{\tilde{\pi}'} = c_1 - \frac{c_1}{c_2} c_2 = 0.$$ This implies that $u_1\in\mathcal{N}_L$, a contradiction to Corollary \ref{CorPosNeverNull}. \end{proof} \section{Proof of Main Theorem 2}\label{SecMain2} The following result is proven in \cite[Theorem 4.3]{cDolPer2013} under different notation. A proof is provided in Appendix \ref{SecB}. \begin{lem}\label{LemAdmissIsRauzy} Let $T:I\to I$ be an i.d.o.c. $n$-IET. Then $J\subsetneq I$ is admissible iff the induction realized by $J$ is given by consecutive steps of extended Rauzy induction.
\end{lem} \begin{defn} An \emph{admissible induction sequence} for $T=\mathcal{I}_{\pi,\lambda}:I \to I$ is a sequence $I',I'',\dots,I^{(k)},\dots$ so that the induction from $I^{(k-1)}$ to $I^{(k)}$ is admissible for each $k$. \end{defn} By Lemma \ref{LemA3}, it follows that $|I^{(k)}|\to 0$ as $k\to\infty$ for any admissible induction sequence. We say that $B_1,B_2,\dots$ are defined by an admissible induction sequence $I',I'',\dots$ if, for each $k$, $B_k$ is the visitation matrix for the induction of $T^{(k-1)}$ on $I^{(k)}$, where $T^{(k)}$ is the induction of $T$ on $I^{(k)}$, or equivalently $T^{(k)}$ is the induction of $T^{(k-1)}$ on $I^{(k)}$. \begin{proof}[Proof of Main Theorem 2] A sequence of matrices $B_1,B_2,\dots$ defined by an admissible induction sequence must be given by products of $A_k$'s given by extended Rauzy induction by Lemma \ref{LemAdmissIsRauzy}. The result then follows from Main Theorem 1. \end{proof} \appendix \section{Admissible Inductions}\label{SecA} We will give a construction of the map induced by $T:I\to I$ on a sub-interval $I'\subsetneq I$. Let $T = \mathcal{I}_{\pi,\lambda}$, $\pi\in \mathfrak{S}^0_n$ and $\lambda\in \mathbb{R}_+^n$, be an i.d.o.c. $n$-IET with sub-intervals $I_i = [\beta_{i-1},\beta_i)$, $1\leq i\leq n$, where we recall $ \beta_i = \sum_{j\leq i} \lambda_j$ for $0\leq i \leq n$. Also, recall the return time $r(x)$ of $x\in I'$ to $I' = [a',b')$ by $T$. Given $x\in I'$, we describe how to construct the sub-interval $I'_x\subseteq I'$ so that \begin{enumerate} \item $x\in I'_x$, \item for each $z,z'\in I'_x$, $r(z) = r(z')$, \item for each $0\leq k < r(x)$, $T$ restricted to $T^k I'_x$ is a translation, \item for each $0\leq k < r(x)$, there exists $j=j(k)$ so that $T^k I'_x\subseteq I_j$, and \item $I'_x$ is the maximal sub-interval with these properties. \end{enumerate} Compare properties 2--4 with those of intervals $I'_i$ for admissibility in Definition \ref{DefAdmiss}.
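To make the partition $\mathcal{P}_{I'}$ concrete, the following Python sketch (our own encoding, not the paper's: the golden rotation written as a $2$-IET, induced on $I'=[0,\omega)$) computes the return time $r(x)$ on a fine grid and counts the maximal runs on which it is constant; for this example these runs are exactly the pieces of $\mathcal{P}_{I'}$, and $m = n = 2$, within the bounds of Lemma \ref{LemA2}.

```python
import numpy as np

om = (np.sqrt(5) - 1) / 2      # golden rotation number

def T(x):
    # rotation by om written as the 2-IET I_{(2 1), (1-om, om)}
    return x + om if x < 1 - om else x - (1 - om)

def r(x, a, b):
    # first-return time of x in I' = [a, b) under T
    y, k = T(x), 1
    while not (a <= y < b):
        y, k = T(y), k + 1
    return k

xs = np.linspace(0.0, om, 1000, endpoint=False)
times = [r(x, 0.0, om) for x in xs]
# the return time is constant on each piece of P_{I'}; count the runs
pieces = 1 + sum(times[i] != times[i + 1] for i in range(len(times) - 1))
assert sorted(set(times)) == [1, 2]
assert pieces == 2            # m = n = 2: the induced partition has n pieces
```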
Observe that for any sub-interval $[c,d)$ such that $[c,d)\subseteq I_j$ for some $j$, $T$ restricted to $[c,d)$ is a translation. Also, $T[c,d) \subseteq I_{j'}$ for some $j'$ iff $(c,d) \cap T^{-1}\mathcal{D} = \emptyset$ where $$ \mathcal{D} = \{\beta_1,\dots,\beta_{n-1}\}.$$ Let $a(x)$ be the minimum $y\in I' \cap [0,x]$ so that \begin{itemize} \item $r(z) = r(x)$ for all $z\in [y,x]$, and \item for each $0\leq k <r(x)$, $T^k[y,x] \subset I_j$ for some $j = j(k)$. \end{itemize} It is an exercise to see that $$x - a(x) = \min\{T^k x - z: 0\leq k <r(x),~ z\in \mathcal{D}'\cap [0,T^kx]\},$$ where $\mathcal{D}' = \{a',b',0\}\cup\{\beta_1,\dots,\beta_{n-1}\}$. Therefore, either $a(x) = a'$ or $a(x) = T^{-r_-(z)} z$ for some $z\in \mathcal{D}'$, where $r_-(z) = \min\{k \in \mathbb{N}_0: T^{-k}z \in (a',b')\}$. Likewise, let $b(x)$ be defined by $$b(x)-x = \min\{z- T^k x: 0\leq k <r(x),~ z\in (\{|\lambda|\}\cup \mathcal{D}')\cap [T^kx, |\lambda|]\},$$ and note that $b(x) \in (I'\cup\{b'\})\cap [x,|\lambda|]$ is the maximum value $y$ so that \begin{itemize} \item $r(z) = r(x)$ for all $z\in [x,y)$, and \item for each $0\leq k <r(x)$, $T^k[x,y) \subset I_j$ for some $j = j(k)$. \end{itemize} Either $b(x) = b'$ or $b(x) = a(x')$ for some $x'>x$. Let $I'_x = [a(x),b(x))$ for each $x\in I'$. Because $I'_y = I'_x$ for each $y\in I'_x$, $$\mathcal{P}_{I'} = \{I'_x: x\in I'\}$$ is a partition of $I'$. If $\mathcal{P}_{I'} = \{I'\}$, then $T$ is periodic and cannot be an i.d.o.c. IET. Therefore $\#\mathcal{P}_{I'} \geq 2$. \begin{lem}\label{LemA2} For i.d.o.c. $n$-IET $T:I\to I$ and sub-interval $I'\subseteq I$, if $m = \#\mathcal{P}_{I'}$ then $n \leq m \leq n+2$. \end{lem} \begin{proof} Let $\mathcal{D}_{I'} = \{a'\}\cup\{T^{-r_-(z)}z: z\in \mathcal{D}'\}$ and $\gamma_j = T^{-r_-(\beta_j)}\beta_j$ for $0\leq j <n$. As discussed in the previous paragraphs, $a(x) \in \mathcal{D}_{I'}$ for each $x$. Furthermore, $a(z) = z$ for each $z\in \mathcal{D}_{I'}$.
Therefore $\#\mathcal{P}_{I'} = \#\mathcal{D}_{I'}$. Because $T$ is i.d.o.c., $T^\ell \beta_j = \beta_{j'}$ iff $j=\pi^{-1}(1)$, $j'=0$ and $\ell=1$. It follows that $\gamma_0 = \gamma_{\pi^{-1}(1)}$ and $\gamma_j \neq \gamma_{j'}$ for distinct $j,j'>0$. Therefore $\#(\mathcal{D}_{I'}\setminus\{a'\}) \geq n-1$, or $\#\mathcal{P}_{I'} \geq n$, and $\#(\mathcal{D}_{I'}\setminus\{a'\}) \leq n+1$ or $\#\mathcal{P}_{I'} \leq n+2$. \end{proof} We order and name the sub-intervals in $\mathcal{P}_{I'}$ as $I'_1,\dots,I'_m$. We call this the \emph{natural decomposition} of $I'$ by $T' = T|_{I'}$, and $I'$ is admissible iff $m = n$. The next statement proves that $ T|_{I''} = (T|_{I'})|_{I''}$, under appropriate choices of $T$, $I'$ and $I''$, and the natural decompositions agree with this identity. \begin{lem}\label{LemA1} Let $T = \mathcal{I}_{\pi,\lambda}$ be an $n$-IET defined on $I$ with sub-intervals $I_1,\dots,I_n$ and let $\emptyset \subsetneq I'' \subsetneq I' \subsetneq I$ be sub-intervals. If $T' = T|_{I'}$ with natural decomposition $I_1',\dots,I_m'$ of $I'$, $T'' = T|_{I''}$ with natural decomposition $I_1'',\dots, I_{m'}''$ of $I''$ and $S = T'|_{I''}$ with the natural decomposition $J_1',\dots,J_{m''}'$ of $I''$, then $m'' = m'$ and $J'_i = I''_i$ for all $1\leq i \leq m'$. \end{lem} For $x\in I'$, let $q(x) = (j_0,\dots,j_{r(x)-1})$ be the ordered $r(x)$-tuple given by $T^k x \in I_{j_k}$ for $0\leq k < r(x)$. Note that $I'_1,\dots,I'_m$ is the natural decomposition of $I'$ by $T'$ iff the following statements hold: \begin{enumerate} \item $q(x) = q(y)$ for all $x,y\in I'_i$, $1\leq i \leq m$, and \item $q(x) \neq q(y)$ if $x\in I'_i$ and $y\in I'_{i'}$ for $i\neq i'$. \end{enumerate} The definition of $q(x)$ coincides with a \emph{return word} when considering an IET by its natural symbolic coding. See \cite{cKea1975} for an introduction to this coding. Because these tuples are constant on each $I'_i$, let $q_i = q(x)$ for any $x\in I'_i$. 
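The return words $q(x)$ are easy to compute in small examples. The following Python sketch (our own encoding of an IET, not the paper's notation: the golden rotation written as a $2$-IET, induced on $I' = [0,\omega)$) recovers the two return words of the natural decomposition, $(1,2)$ on $[0,1-\omega)$ and $(2)$ on $[1-\omega,\omega)$.

```python
import numpy as np

def make_iet(pi, lam):
    # T = I_{pi,lam}: interval I_i = [beta_{i-1}, beta_i) is translated to slot pi(i)
    n = len(pi)
    beta = np.concatenate(([0.0], np.cumsum(lam)))
    lam_img = [lam[pi.index(k + 1)] for k in range(n)]   # lengths in image order
    beta_img = np.concatenate(([0.0], np.cumsum(lam_img)))
    which = lambda x: int(np.searchsorted(beta, x, side='right'))  # 1-based i with x in I_i
    T = lambda x: x - beta[which(x) - 1] + beta_img[pi[which(x) - 1] - 1]
    return T, which

def return_word(T, which, x, a, b):
    # q(x) = (j_0, ..., j_{r(x)-1}) with T^k x in I_{j_k} before first return to [a, b)
    w = [which(x)]
    y = T(x)
    while not (a <= y < b):
        w.append(which(y))
        y = T(y)
    return tuple(w)

om = (np.sqrt(5) - 1) / 2                      # rotation number
T, which = make_iet([2, 1], [1 - om, om])      # rotation by om as a 2-IET

assert return_word(T, which, 0.1, 0.0, om) == (1, 2)   # x in [0, 1-om)
assert return_word(T, which, 0.5, 0.0, om) == (2,)     # x in [1-om, om)
```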
\begin{proof}[Proof of Lemma \ref{LemA1}] Let $q_i$'s be the return words of $I'$ in $I$ of $T$, $q'_i$'s be the return words of $I''$ in $I$ of $T'$ and $q''_i$'s the return words of $I''$ in $I'$ of $S$. Let $q_iq_{i'}$ denote the concatenation of the tuples $q_i$ and $q_{i'}$, meaning $$ q_iq_{i'} = (j_0,\dots,j_{r-1},j'_0,\dots,j'_{r'-1})$$ where $q_i = (j_0,\dots,j_{r-1})$ and $q_{i'} = (j'_0,\dots,j'_{r'-1})$. Let $\tilde{q}_i$, $1\leq i \leq m''$, be given by $$ \tilde{q}_i = q_{j_0} q_{j_1} \cdots q_{j_{r-1}}$$ where $q''_i = (j_0,\dots,j_{r-1})$. These are the return words for the $J'_i$'s in $I$ by $T$. The lemma follows from two claims: each $\tilde{q}_i$ equals $q'_j$ for some $j= j(i)$ and $\tilde{q}_i \neq \tilde{q}_{i'}$ for $i\neq i'$. The first claim follows as for any $x\in J'_i$ and $k \in \mathbb{N}_0$, if $T^kx \in I'_{j'}$ then $T^{k+k'}x\in I_{j''_{k'}}$ for each $0\leq k'< r_{j'}$, where $q_{j'} = (j''_0,\dots,j''_{r_{j'}-1})$. The second claim holds because $q''_i \neq q''_{i'}$ and $q_j \neq q_{j'}$ for $i\neq i'$ and $j\neq j'$. \end{proof} \begin{lem}\label{LemA3} Let $T = \mathcal{I}_{\pi,\lambda}$, $\pi\in \mathfrak{S}^0_n$ and $\lambda\in\mathbb{R}_+^n$, be an i.d.o.c. $n$-IET. If $I',I'',I''',\dots,I^{(k)},\dots$, $k\in \mathbb{N}$, are admissible sub-intervals so that $\emptyset \subsetneq I^{(k)} \subsetneq I^{(k-1)}$ for each $k\in \mathbb{N}$, then either $J =\bigcap_{k=1}^\infty I^{(k)}$ is empty or consists of one point. \end{lem} \begin{proof} Suppose by contradiction that $J$ contains an open interval; then, after possibly removing its right endpoint, $J$ is a sub-interval of $I$. Let $J_1,\dots,J_m$ be the natural decomposition of $T|_{J}$. By Lemma \ref{LemA2}, $m \leq n+2$. Because $m$ is finite, $\varepsilon= \min\{|J_i|:1\leq i \leq m\}>0$ is well-defined. Fix $k_0\in \mathbb{N}$ so that $|I^{(k_0)}| < |J| + \varepsilon/2$ and let $T' = T|_{I^{(k_0)}}$.
By Lemma \ref{LemA1}, $J_1,\dots,J_m$ is the natural decomposition of $T'$ restricted to $J$. By Lemma \ref{LemMinimal}, because $J\subsetneq I^{(k_0)}$ there must exist $J_i$ with return time in $I^{(k_0)}$ greater than $1$. In this case $T'$ acts on $J_i$ by translation and $T'J_i \subseteq I^{(k_0)} \setminus J$. However, this implies that $|J_i|<\varepsilon/2$, a contradiction. \end{proof} \section{Admissibility and Extended Rauzy Induction}\label{SecB} \begin{proof}[Proof of Lemma \ref{LemAdmissIsRauzy}] Suppose that $J = [a,b)$ is not given by extended Rauzy induction, and let $S = T|_{J}$ be the induced map with natural decomposition $J_1,\dots,J_m$. We will construct a sub-interval $I'$ of $I$ with induced $T'= T|_{I'}$ so that $I' = [a',b')$ is given by steps of extended Rauzy induction and if $T'=\mathcal{I}_{\pi',\lambda'}$ then \begin{equation}\label{Eqaa'bb'} a' \leq a < a' + \min\{\lambda'_1,\lambda'_{i_0}\} \text{ and } b' - \min\{\lambda'_{i_1},\lambda'_n\} < b \leq b', \end{equation} where $i_0,i_1$ satisfy $\pi'(i_0) = 1$ and $\pi'(i_1) = n$. Because $J$ is not realized by extended Rauzy induction, at most one equality may occur. We then will show that $J$ is not admissible for $I'$, which shows our claim by Lemma \ref{LemA1}. Initially, let $I' = I$ and so $J \subseteq I'$ trivially. Given our definitions, suppose the inequalities for $a$ and $a'$ do not hold for $I'$. We then perform left Rauzy induction on $T|_{I'}$ and replace $I'$ by this new sub-interval, noting that $a'$ will increase. If the inequalities between $b$ and $b'$ do not hold, then we act by right Rauzy induction and replace $I'$ with this new sub-interval. Note that $J\subseteq I'$ still holds after each step. This process must terminate at our desired $I'$, otherwise we have constructed an infinite properly nested sequence of admissible intervals that each contain $J$, a contradiction to Lemma \ref{LemA3}.
Given $I'$ and $J$ so that inequalities \eqref{Eqaa'bb'} hold, $J_1,\dots,J_m$ is the natural decomposition from $T'$ on $J$ and the natural decomposition from $T$ on $J$ as well by Lemma \ref{LemA1}. If $a = a'$, then $m = n+1$ and $$ J_i = \RHScase{ I'_i, & i < i_1,\\ I'_{i_1} \setminus (T')^{-1}I_n', & i = i_1,\\ (T')^{-1}I_n', & i = i_1 + 1,\\ I'_{i-1}, & i_1 + 1 < i \leq m,}$$ by direct computation. Similarly, $m = n+1$ if $b=b'$. If both $a>a'$ and $b<b'$, then analogous computations will hold unless $i_0 = n$ and $i_1 = 1$, that is, unless $\pi'$ is \emph{standard}. In this case, each $I'_i$ is a sub-interval in the natural decomposition of $J$ for $1<i<n$ with return time $1$. Let $J'_1 = I'_1\cap J$ and $J'_n = I'_n \cap J$. Both $J'_1 \cap (T')^{-2} J'_n$ and $J_n'\cap (T')^{-2}J'_1$ are sub-intervals in the decomposition of $J$ with return time $2$. Because $T'$ is i.d.o.c., $(T')^2 J'_1 \neq J'_n$ and $(T')^2 J'_n \neq J'_1$. Therefore, there must be at least one more sub-interval of $J$ in $(J'_1 \setminus (T')^{-2} J'_n)\cup (J_n'\setminus (T')^{-2}J'_1)$ and so $m>n$. \end{proof} \end{document}
\begin{document} \title{Monopoles on $\bb R^5$ and Generalized Nahm's equations} \author{Rodrigo Pires dos Santos} \address{Instituto de Matem\'atica, Universidade Estadual de Campinas} \curraddr{R. Sergio Buarque de Holanda, 651 - Cidade Universitaria, Campinas - SP, Brazil, 13083-859} \email{[email protected]} \date{September 26, 2016} \keywords{Gauge theory, differential geometry, Nahm's equations} \begin{abstract} Our approach to defining monopoles is twistorial, and we start by developing the twistor theory of $\bb R^5$, which is an analogue of the twistor theory for $\bb R^3$ developed by Hitchin. Using this, we describe a Hitchin-Ward transform for $\bb R^5$ that produces monopoles. In order to construct monopoles we make use of spectral curves. Then, using those spectral curves, we find a new system of equations, analogous to Nahm's equations. \end{abstract} \maketitle \tableofcontents \section*{Introduction} Let $\nabla$ be an $SU(2)$-connection on a complex vector bundle $E$ over $\bb R^3$ and $\phi$ a skew-symmetric section of $End(E)$. A monopole on the Euclidean $\bb R^3$ consists of a pair $(\nabla,\phi)$ minimising, with finite energy, the Yang-Mills-Higgs energy functional \begin{align*} \mcal V=\int_{\bb{R}^3}|F_\nabla|^2+|\nabla\phi|^2. \end{align*} One can show \cite{AH} that the pair $(\nabla,\phi)$ is a monopole if and only if it satisfies the Bogomolny equation, \begin{align*} F_\nabla=*\nabla\phi, \end{align*} and $\nabla$ and $\phi$ are subject to the boundary conditions: \begin{align*} |\phi|&=1-\frac{m}{2r}+O(r^{-2})\\ \der{|\phi|}{\Omega}&=O(r^{-2})\\ |\nabla\phi|&=O(r^{-2}),\,\,\text{as}\,\,r\to\infty, \end{align*} where $\der{|\phi|}{\Omega}$ is the angular derivative of $|\phi|$. The first boundary condition says that we can restrict $\phi$ to $S_\infty$, the sphere at infinity, and obtain a map $\frac{\phi}{|\phi|}:S_\infty\to S^2\subset \mathfrak{su}(2)$.
By integration on the sphere at infinity, one can show that the degree of this map is the integer $m$, called the topological charge of the monopole. Using the geometry of oriented lines in $\bb R^3$, Hitchin proved in \cite{MG} that solutions to the Bogomolny equation correspond to holomorphic bundles on $\bb T$, the total space of $T\bb CP^1$; this type of result is known in the literature as the Hitchin-Ward correspondence. Furthermore, he gave a twistor description of the boundary conditions. Namely, he proved that if a bundle $\tilde E$ on $\bb T$ corresponds to a monopole, then $\tilde E$ is given by an extension of line bundles on $\bb T$. He was also able to determine the bundle $\tilde E$ from an algebraic curve on the twistor space. Recently, Bielawski \cite{SU2} defined \emph{generalised hypercomplex manifolds} (GHC manifolds for short), which are manifolds whose tangent space at every point decomposes as copies of irreducible representations of $SU(2)$. An important feature of GHC manifolds is that they possess a twistor interpretation similar to Hitchin's twistor description of $\bb R^3$. Thus, we can describe \emph{Bogomolny pairs}, a generalization of the Bogomolny equation (this also generalizes the Bogomolny Hierarchy discussed in \cite{MS}). More specifically, a \emph{Bogomolny pair} on a GHC manifold $M$ is a pair $(\nabla,\Phi)$, where $\nabla$ is a connection on a complex vector bundle $E$ and $\Phi$ a tuple of endomorphisms of $E$, such that $\nabla\oplus\Phi$ is flat over certain subspaces of $M$ called $\alpha$-surfaces. There is a Hitchin-Ward correspondence for $M$ giving a correspondence between Bogomolny pairs and holomorphic bundles on the twistor space of $M$ that are trivial on real sections. This paper presents an approach to the construction of monopoles on $\bb R^5$ and is organized as follows: In the first section \ref{sec:GHC} we review the results of \cite{SU2} on GHC manifolds.
Then, we define a GHC structure on $\bb R^5$ by describing it as the space of real sections of the line bundle $\mcal O(4)$ over $\bb CP^1$. The second section \ref{section2} is devoted to the description of the Hitchin-Ward correspondence for $\bb R^5$. We use the proof of the correspondence to find a distinguished line bundle $L$ on the twistor space that corresponds to a trivial Bogomolny pair on $\bb R^5$; this bundle will play an important role in the construction of monopoles. In section \ref{section3} we initiate the construction of monopoles. We begin with a discussion on how a spectral curve can be used to build a pair $(\nabla,\Phi)$ on $\bb R^5$ and prove that spectral curves give rise to solutions to the generalized Nahm's equations. In section \ref{section4} we deduce the boundary conditions for the generalized Nahm's equations. Namely, we prove an equivalence between the following: \begin{enumerate}[(1)] \item A compact algebraic curve $S$ in the total space of $\mcal O(4)$ such that: \begin{itemize} \item $S$ lies in the linear system $|\mathcal{O}(4k)|$, \item $S$ has no multiple components, \item the line bundle $L$ has order $2$ on $S$ and \item $H^0(S,L^z(2k-3))=0$ for $z\in (0,2)$. \end{itemize} \item A solution to the system of equations: \begin{align*} \begin{array}{ll} \dot{A}_0&=\dfrac{1}{2}[A_0,A_2]\\ \dot{A}_1&=[A_0,A_3]+\dfrac{1}{2}[A_1,A_2]\\ \dot{A}_2&=[A_1,A_3]+[A_0,A_4]\\ \dot{A}_3&=[A_1,A_4]+\dfrac{1}{2}[A_2,A_3]\\ \dot{A}_4&=\dfrac{1}{2}[A_2,A_4], \end{array} \end{align*} where $A_j(z)$ is a $k\times k$ matrix for $z\in (0,2)$ and such that: \begin{itemize} \item $A_1$ and $A_3$ are analytic on the whole interval $[0,2]$; \item $A_0$, $A_4$ and $A_2$ have simple poles at $0$ and $2$, but are otherwise analytic; \item The residues of $A_0$, $A_4$ and $A_2$ at $z=0$ and $z=2$ define an irreducible $k$-dimensional representation of $\mathfrak{sl}(2,\bb C)$.
\end{itemize} \end{enumerate} The proof follows the idea of the construction of monopoles on $\bb R^3$ done in \cite{CM} and \cite{HM}. However, it is important to highlight the main differences. First, since we do not define monopoles from an energy functional, we do not have a topological definition of charge as in the $\bb R^3$ case. Thus, we use the degree of the spectral curve as a parameter of solutions. Another difference is the proof of proposition \ref{prop:bextension}, which is essential to the proof of the main result. This proposition is the analogue of proposition 5.13 in \cite{CM}. Hitchin's proof relies on an $SL(2,\bb C)$ invariance of the construction and our proof consists of making a more explicit calculation since we do not have a group invariance. \section{Generalised hypercomplex manifolds}\label{sec:GHC} The main purpose of this section is to introduce the concepts of generalized hypercomplex manifolds. The main reference is \cite{SU2}. \begin{defi} Let $M$ be a smooth manifold. A \textit{generalised almost hypercomplex structure} on $M$ is a smooth fibrewise action of $SU(2)$ on the tangent bundle such that each $T_xM$ decomposes as $V\otimes\bb R^n$, where $V$ is an irreducible representation of $SU(2)$. The complexified representation $V^{\bb C}$ is one or two copies of the $k\textsuperscript{th}$-symmetric power of the defining representation of $SL(2,\bb C)$. We shall then call $M$ an \textit{almost k-hypercomplex manifold}. \end{defi} We can produce examples of those structures by looking into the space of sections of holomorphic bundles over $\bb CP^1$. This happens because irreducible representations of $SL(2,\bb C)$ can be realised as sections of line bundles over $\bb CP^1$.
A $\sigma$-bundle (or \textit{real} bundle) over $\bb CP^1$ is a holomorphic bundle $E$ equipped with an anti-holomorphic involution $\sigma$ covering the antipodal map on $\bb CP^1$; a \textit{real section} of a $\sigma$-bundle is a section invariant under the involution $\sigma$. The map $\sigma$ will be called a \textit{real structure}. Following these definitions, we can describe the irreducible representations of $SU(2)$ as spaces of real sections of $\sigma$-bundles over $\bb CP^1$. Consequently, the tangent space of a generalised almost hypercomplex manifold is the space of real sections of a $\sigma$-bundle. In fact: \begin{prop}[ \cite{SU2} proposition 2.2] Let $Z$ be a complex manifold fibering over $\bb CP^1$ equipped with an anti-holomorphic involution $\tau$ which covers the antipodal map on $\bb CP^1$. Suppose there exists a holomorphic real section of $Z\to\bb CP^1$ whose normal bundle is isomorphic to $\mcal O(k)\otimes\bb C^n$ ($k>0$), then the space of such real sections is an almost $k$-hypercomplex manifold of dimension $n(k+1)$. \end{prop} This proposition motivates the following definition: \begin{defi} An almost $k$-hypercomplex structure on a manifold $M$ is \textit{integrable} if $M$, together with the $SU(2)$ action on its tangent bundle, can be described (locally) as the space of real sections of a complex manifold $Z$ fibering over $\bb CP^1$. We shall say that $M$ is a \textit{generalised hypercomplex manifold} or GHC manifold for short. The space $Z$ is called the \textit{twistor space} of $M$. \end{defi} \begin{eg}\label{eg:R^k+1} Let $H$ be the $(k+1)$-dimensional irreducible representation of $SL(2,\mathbb{C})$, with $k$ even; then $SL(2,\mathbb{C})$ also acts irreducibly on the dual $H^*$. Let $B$ be a Borel subgroup of $SL(2,\mathbb{C})$, then $SL(2,\mathbb{C})/B\cong\mathbb{C}P^1$. For each $q\in\mathbb{C}P^1$, let $B_q$ be its corresponding Borel subgroup and $l_q$ be the line of highest weight vectors for $B_q$.
This gives an injective map $\mathbb{C}P^1\to\mathbb{P}(H^*)$; let $\tilde{L}_k$ be the bundle on $\mathbb{C}P^1$ given by the pullback of the tautological bundle on $\mathbb{P}(H^*)$. For $L_k=(\tilde{L}_k)^*$ we have: \begin{thm}\label{thm:Borel-Weil} (Borel-Weil theorem) In the notation of the example above, \begin{align*} H^0(G/B,L_k)\cong H. \end{align*} \end{thm} Since $k$ is even, we can endow $H$ with a real structure and then $H^\bb R$ is a GHC-manifold with twistor space $L_k$. We shall later describe explicitly the $\alpha$-surfaces when $k=4$. \end{eg} Let $M$ be a GHC manifold and consider the action of $SL(2,\bb C)$ on the complexified cotangent bundle $T^*M^{\bb C}$. For each point $q\in\bb CP^1$ let $B_q$ be the corresponding Borel subgroup of $SL(2,\bb C)$. Define the following: \begin{enumerate}[i)] \item $\mcal U_q$ is the subbundle of $T^*M^{\bb C}$ corresponding to the highest weight with respect to $B_q$, \item $\mcal K_q$ is the subbundle of $TM^{\bb C}$ annihilated by $\mcal U_q$ and \item $\mcal F_q=\mcal K_q\cap\overline{\mcal K_q}\cap TM$ is a distribution on $M$. \end{enumerate} We then have: \begin{thm}\label{thm:integrability} (\cite{SU2} theorem 2.5) An almost $k$-hypercomplex structure on a manifold $M$ is integrable if and only if the subbundle $\mcal K_q$ is involutive for all $q\in\bb CP^1$, that is, $[\mcal K_q,\mcal K_q]\subset\mcal K_q$. \end{thm} We shall not prove the theorem above; however, we shall see how it can be used to construct the twistor space of a GHC-manifold. Define the \textit{twistor distribution} $\mcal Z$ of $M$ to be the distribution on $M\times \bb CP^1$ given by $\mcal Z_{(m,q)}=((\mcal F_q)_m,0)$. The theorem above says that this distribution is involutive and thus it defines a foliation of $M\times\bb CP^1$. Moreover, the leaf space $Z=(M\times\bb CP^1)/\mcal Z$ is the twistor space of the GHC-manifold $M$.
If the foliation is simple, then $Z$ is a complex manifold and the projection $\eta:M\times\bb CP^1\to Z$ is a surjective submersion; in this case $M$ is called a \textit{regular} GHC-manifold. The leaves of the foliation $\mcal Z$ will be called $\alpha$-surfaces. Let $M$ be a GHC-manifold, given as the space of real sections of a fibration $Z\to \bb CP^1$. Then $M$ has a natural complexification $M^{\bb C}$, namely the space of all sections of the fibration. Notice that the holomorphic tangent bundle $TM^{(1,0)}$ of $M^{\bb C}$ is then endowed with a holomorphic action of $SL(2,\bb C)$ such that $TM^{(1,0)}=S^k\bb C^2\otimes\bb C^n$. We now begin the description of the twistor theory of a GHC manifold, starting with some distinguished bundles on a GHC-manifold $M$. But first we need some results regarding bundles on $\bb CP^1$ and representations of $SL(2,\bb C)$. In the remainder of this section we denote by $G=SL(2,\bb C)$ and by $B$ the Borel subgroup of upper triangular matrices. Let $L=\mcal O(k)$ be the degree $k$ line bundle on $\bb CP^1$, for $k>0$. Then the space of sections $H$ is an irreducible representation of $G$ by theorem \ref{thm:Borel-Weil}. Notice that the homogeneous bundle $\underline{H}=G\times_B H$ is trivial and that we have an equivariant map: $$ \underline{H}\to L,$$ which is given by evaluation. Namely, if $h\in H$ and $q\in G/B\cong\bb CP^1$ the map above sends $(h,q)$ to $h(q)$. Now, define a bundle $K$ on $\bb CP^1$ given by the exact sequence of homogeneous bundles: \begin{align}\label{eq:sequence1} 0\to K\to \underline{H}\to L\to0. \end{align} The cohomology exact sequence of the dual to the sequence \eqref{eq:sequence1} gives an exact sequence of $G$ representations: \begin{align}\label{eq:sequence2} 0\to H^*\xrightarrow{i} \hat{H}\xrightarrow{j} H'\to0, \end{align} where $H'=H^1(L^*)\cong S^{k-2}\bb C^2$, $\hat{H}=H^0(K^*)$ and notice that $H^*\cong H^0(\underline{H^*})$, since $\underline{H}$ is a trivial bundle.
Moreover, the sequence \eqref{eq:sequence2} is split. Now for each $q\in\bb CP^1$ we notice that the line of highest weight vectors in $H$ with respect to $B_q$, denoted by $S_q$, is contained in the fibre $K_q$. Therefore, we can define a subbundle $S$ of $K$ whose fibre at $q$ is $S_q$. We then consider the short exact sequence: \begin{align}\label{sequence3} 0\to (K/S)^*\rightarrow K^*\rightarrow S^*\to0. \end{align} Its long exact sequence in cohomology starts as: \begin{align}\label{sequence4} 0\to H^0((K/S)^*)\rightarrow \hat{H}\rightarrow H^0(S^*). \end{align} Now the Borel-Weil theorem says that $H^*$ and $H^0(S^*)$ are isomorphic representations of $G$. Thus, we obtain a map $p:\hat{H}\to H^*$. It is proved in \cite{SU2} lemma 3.3 that $p$ is a left inverse of the map $i$ in \eqref{eq:sequence2}. We can now state and prove: \begin{prop}\label{prop:emap} We have an isomorphism of homogeneous bundles $$ (K/S)^*\cong G\times_BH'.$$ In particular, $H^0((K/S)^*)\cong H^1(\bb CP^1,L^*)$ and $K/S$ is trivial. \end{prop} \begin{proof} $H$ is isomorphic to $S^k\bb C^2$ as a representation of $G$ and we shall write the vectors of $H$ as $(v_0,v_1,\cdots,v_k)$, where the coordinates are relative to the weight decomposition with respect to $B$ and where $v_0$ corresponds to the minimal weight and $v_k$ to the maximal weight. The fibre $K_{[1]}$ of $K$ at the point $[1]\in G/B\cong\bb CP^1$ is given by vectors of the form $(0,v_1,\cdots,v_k)$ and the fibre $S_{[1]}$, by $(0,\cdots,0,v_k)$. The map $K_{[1]}/S_{[1]}\to S^{k-2}\bb C^2$ induced by $$ (0,v_1,\cdots,v_k)\mapsto (v_1,\cdots,v_{k-1})$$ is an isomorphism of $B$-modules. Since the bundles are homogeneous we have an isomorphism of bundles. \end{proof} We can now return our attention to differential geometry. Let $M$ be a regular GHC-manifold and $Z$ its twistor space. Therefore, in the complexified case, we have the double fibration: \begin{align}\label{dublefibration} Z\xleftarrow{\eta}Y=M^{\bb C}\times \bb CP^1\xrightarrow{p}M^{\bb C}.
\end{align} \begin{defi} The sheaf of $\eta$-\textit{vertical holomorphic $l$-forms} $\Omega^l_\eta$ is defined by $$ \Omega^l_\eta=\Lambda^l(\Omega^1(Y)/\eta^*(\Omega^1(Z))).$$ \end{defi} \begin{prop} We have an isomorphism of sheaves $p_*(\Omega^1_\eta)\cong E^*\otimes\hat{H}$, where $\hat{H}$ is defined in the sequence \eqref{eq:sequence2}. \end{prop} \begin{proof} Let $x\in M^\bb C$ and let $\bb CP^1_x$ be the fibre of $p$ over $x$. The $\eta$-normal bundle of $\bb CP^1_x$ in $Y$, that is, the normal bundle of $\bb CP^1_x$ along the fibres of $\eta$, is the bundle whose fibre at $(x,q)\in \bb CP^1_x$ is $\mcal K_q$. From the definition of push-forward we have: $$p_*(\Omega^1_\eta)=H^0(\bb CP^1_x,\mcal K^*). $$ Now we have the decompositions $TM^\bb C=E_M\otimes H$ and $\mcal K=\bb C^n\otimes K$, where $K$ is defined in \eqref{eq:sequence1}. Since $H^0(\bb CP^1,K^*)=\hat H$, we have proved the proposition. \end{proof} We now state a result that will be necessary later. \begin{prop}\label{2forms} We have the splittings: \begin{itemize} \item $p_*(\Omega^1_\eta)\cong\Omega^1(M^{\bb C})\oplus(E^*\otimes H')$. \item $p_*(\Omega^2_\eta)\cong(S^2E^*\otimes H_-)\oplus(\Lambda^2E^*\otimes H_+),$ where $$H_-=H^0(\bb CP^1,\Lambda^2K^*)\,\, \text{and}\,\, H_+=H^0(\bb CP^1,S^2K^*).$$ \end{itemize} \end{prop} \subsection{$\bb R^5$ as a GHC-manifold and its twistor theory}\label{subsec:r5asghc} Example \ref{eg:R^k+1} defines $\bb R^5$ as a $4$-GHC manifold. In this section we shall describe explicitly the twistor distribution for $\bb R^5$. First we shall fix some notation that will be used throughout this paper. Let $\bb CP^1=\bb C\cup \{ \infty \}$ and put coordinates $\xi$ on $U=\bb C\subset \bb CP^1$ and $\xi'$ on $U'=(\bb C\setminus \{0\})\cup\{\infty\}$ such that $\xi'=\dfrac{1}{\xi}$ on $U\cap U'$. We can now fix holomorphic coordinates on $\mathcal{O}(k)$.
Let $\pi:\mathcal{O}(k)\to\mathbb{C}P^1$ be the projection and define the open sets $U_0=\pi^{-1}(U)$ and $U_1=\pi^{-1}(U')$. Put coordinates $(\eta,\xi)$ on $U_0$ and $(\eta',\xi')$ on $U_1$ such that $\eta'=\eta/\xi^k$. Furthermore, from now on, whenever we refer to the total space of the bundle $\mathcal{O}(k)$, we shall name it $\bb T$. Under these coordinates we can express a holomorphic section $p$ of $\mcal O(k)$ as a polynomial of degree $k$ in $\xi$, namely $p(\xi)=a_0+a_1\xi+\cdots+a_k\xi^k$. We can define an anti-holomorphic involution in the total space of $\mathcal{O}(k)$, in local coordinates, by $\tau(\eta,\xi)=(\overline{\eta}/\overline{\xi}^k,-1/\overline{\xi})$. Observe that $\tau$ covers the antipodal map in $\mathbb{C}P^1$ and therefore swaps the open sets $U_0$ and $U_1$. This map induces an involution in $H^0(\mathbb{C}P^1,\mathcal{O}(k))$, which will still be denoted by $\tau$, in the following way: If $p(\xi)=a_0+a_1\xi+\cdots+a_k\xi^k$ is a holomorphic section of $\mathcal{O}(k)$, then $\tau(p)=b_0+\cdots+b_k\xi^k$, where $b_j=(-1)^j\overline{a}_{k-j}$. For a point $(\eta,\lambda)\in\mcal O(4)$ we can define the $\alpha$-\textit{surface} $\Pi_{(\eta,\lambda)}=\{p(\xi)\in\bb C^5|\,p(\lambda)=\eta\}$. We can now concentrate on $\bb R^5$. A point $(x_0,x_1,x_2,x_3,x_4)\in\mathbb{R}^5$ corresponds to the section $p(\xi)=(x_0+ix_4)+(x_1+ix_3)\xi+x_2\xi^2-(x_1-ix_3)\xi^3+(x_0-ix_4)\xi^4\in H^0(\mathbb{C}P^1,\mathcal{O}(4))$. Conversely, given a point $z\in \bb T$, we define the \textit{real $\alpha$-surface} corresponding to $z$, denoted by $P_z$, to be the subspace in $\mathbb{R}^5$ consisting of real sections through $z$. Namely, we can consider $z\in U_0$ so that we can write $z=(\eta_0,\xi_0)$ in local coordinates, and then we have $P_z=\{p\in H^0(\mathbb{C}P^1,\mathcal{O}(4))|\,\,p(\xi_0)=\eta_0\}$.
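The correspondence between points of $\bb R^5$ and $\tau$-invariant sections can be checked numerically. The following Python sketch is our own illustration (the function names are ours, not from the text): it encodes the coefficient rule $b_j=(-1)^j\overline{a}_{k-j}$ and verifies that the section attached to a real point is fixed by $\tau$.

```python
# Sketch (our illustration): the section attached to a real point
# (x0, ..., x4) is invariant under the induced involution tau, whose
# action on coefficients is b_j = (-1)^j * conj(a_{k-j}).

def section_coeffs(x0, x1, x2, x3, x4):
    # coefficients of p(xi) = (x0+ix4) + (x1+ix3) xi + x2 xi^2
    #                         - (x1-ix3) xi^3 + (x0-ix4) xi^4
    return [x0 + 1j*x4, x1 + 1j*x3, x2, -(x1 - 1j*x3), x0 - 1j*x4]

def tau(coeffs):
    k = len(coeffs) - 1          # degree of the section (k = 4 here)
    return [(-1)**j * coeffs[k - j].conjugate() for j in range(k + 1)]

a = section_coeffs(1.0, 2.0, 3.0, -1.0, 0.5)
assert tau(a) == a               # p is a real section
```

Since $k=4$ is even, applying $\tau$ twice returns the original coefficients, so $\tau$ is indeed an involution on sections.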
We define $\bb C^5$ as the fourth symmetric power of the defining representation of $SL(2,\bb C)$, therefore it can be described as the space of polynomials of degree $4$ in $\xi$, and the explicit action of $SL(2,\bb C)$ on $\bb C^5$ is given by: \begin{align}\label{eq:action1} g\cdot p(\xi)=(c\xi+d)^4 \cdot p\left( \frac{a\xi+b}{c\xi+d}\right), \end{align} where $g=\left(\begin{smallmatrix} a & b\\ c & d \end{smallmatrix} \right)\in SL(2,\bb C)$ and $p(\xi)\in\bb C^5$. We can understand this action as being induced by the action of $SL(2,\bb C)$ in the total space of $\mcal O(4)$ defined by \begin{align}\label{eq:action2} g\cdot (\eta,\xi)=\left( \frac{\eta}{(c\xi+d)^4},\frac{a\xi+b}{c\xi+d}\right). \end{align} For the following proposition, we write an element $g\in SU(2)\subset SL(2,\bb C)$ as $ g=\left(\begin{smallmatrix} \alpha &-\overline{\beta}\\ \beta &\overline{\alpha} \end{smallmatrix} \right)$. \begin{prop} This action is compatible with the real structure $\tau$ in $\mathcal{O}(4)$, that is, $\tau g= g \tau$ for all $g\in SU(2)$. \end{prop} \begin{proof} The proof follows by direct computation using the action \eqref{eq:action2} and the definition of $\tau$. We have: \begin{align*} g\cdot\tau (\eta,\xi)=\left(\frac{\overline{\eta}}{(\overline{\alpha}\overline{\xi}-\beta)^4}, -\frac{\overline{\beta}\overline{\xi}+\alpha}{(\overline{\alpha}\overline{\xi}-\beta)}\right) =\tau (g\cdot(\eta,\xi)). \end{align*} \end{proof} For a point $\lambda\in U\subset\mathbb{C}P^1$ define $g_\lambda\in SU(2)$ by $g_\lambda=\frac{1}{\sqrt{1+\overline{\lambda}\lambda}}\begin{pmatrix} 1&\lambda \\ -\overline{\lambda} & 1 \end{pmatrix}$ and notice that $g_\lambda$ is the unique element of $SU(2)$, up to multiplication by an element of $U(1)$, such that $g^{-1}_\lambda\cdot (0,0)=(0,\lambda)$. Now we shall explicitly describe the bundle $K$, which is defined in \eqref{eq:sequence1}.
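The compatibility $\tau g=g\tau$ proved above can also be confirmed numerically. The sketch below is our own illustration (names and sample values are ours): it implements the action \eqref{eq:action2} and the real structure $\tau$ in the local coordinates $(\eta,\xi)$ and checks the identity at a sample point for a sample element of $SU(2)$.

```python
# Sketch (our illustration): verify tau(g.(eta,xi)) = g.tau(eta,xi)
# for an element g of SU(2) acting on the total space of O(4).

def act(g, p):
    (a, b), (c, d) = g                 # g = ((a,b),(c,d)) in SL(2,C)
    eta, xi = p
    return (eta / (c*xi + d)**4, (a*xi + b) / (c*xi + d))

def tau(p):
    eta, xi = p
    return (eta.conjugate() / xi.conjugate()**4, -1 / xi.conjugate())

alpha, beta = complex(0.6, 0.3), complex(0.2, -0.4)
n = (abs(alpha)**2 + abs(beta)**2) ** 0.5
alpha, beta = alpha / n, beta / n      # normalise so that g lies in SU(2)
g = ((alpha, -beta.conjugate()), (beta, alpha.conjugate()))

p = (complex(1.3, -0.7), complex(0.8, 0.25))
lhs, rhs = tau(act(g, p)), act(g, tau(p))
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```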
First, we identify the tangent space $T_x\bb C^5$ at $x\in \bb C^5$ with $S^4\bb C^2$ and denote a vector in $T_x\bb C^5$ as a polynomial of degree $4$ in $\xi$. We then have: \begin{prop}\label{twistorfibration1} Let $\lambda\in U\subset\bb CP^1$, then the fibre $K_\lambda=Span_\bb C\{V_1^\lambda,V_2^\lambda,V_3^\lambda,V_4^\lambda\}$, where \begin{itemize} \item $V_1^\lambda=g_\lambda^{-1}\cdot\xi=\dfrac{1}{(1+\lambda\overline{\lambda})^2}(\overline{\lambda}\xi+1)^3(\xi-\lambda)$; \item $V_2^\lambda=g_\lambda^{-1}\cdot\xi^2=\dfrac{1}{(1+\lambda\overline{\lambda})^2}(\overline{\lambda}\xi+1)^2(\xi-\lambda)^2$; \item $V_3^\lambda=g_\lambda^{-1}\cdot\xi^3=\dfrac{1}{(1+\lambda\overline{\lambda})^2}(\overline{\lambda}\xi+1)(\xi-\lambda)^3$; \item $V_4^\lambda=g_\lambda^{-1}\cdot\xi^4=\dfrac{1}{(1+\lambda\overline{\lambda})^2}(\xi-\lambda)^4$. \end{itemize} \end{prop} \begin{proof} First recall that the fibre $K_\lambda$ is given by the holomorphic sections $p$ of $\mcal O(4)$ such that $p(\lambda)=0$. Then, notice that $K_0=Span_\bb C\{\xi,\xi^2,\xi^3,\xi^4\}$. Since $g_\lambda^{-1}$ acts on $H^0(\bb CP^1,\mcal O(4))$ as a linear isomorphism mapping $K_0$ onto $K_\lambda$, the sections $V^\lambda_k$ form a basis of $K_\lambda$. This proves the proposition. \end{proof} \begin{rmk} It is important to highlight the use of the group action in the proof above. It will be important when we discuss aspects of the twistor theory of $\bb R^5$ that are invariant under the group action. \end{rmk} We have that in our case $\mcal K_q= K_q$, for all $q\in\bb CP^1$.
Therefore, applying the reality condition we have: \begin{prop}\label{twistorfibration2} The twistor distribution $\mathscr F$ on $\bb R^5\times\bb CP^1$ is given by $\mathscr{F}_{(x,\lambda)}=Span_\mathbb{R}\{(v_1^\lambda,0),(v_2^\lambda,0),(v_3^\lambda,0)\}$, where \begin{align} \label{eq:generators} v_1^\lambda&=g_\lambda^{-1}\cdot(\xi-\xi^3)=\frac{(1+\overline{\lambda}\xi)^3(\xi-\lambda)-(1+\overline{\lambda}\xi)(\xi-\lambda)^3}{(1+\lambda\overline{\lambda})^2},\\ v_2^\lambda&=g_\lambda^{-1}\cdot\xi^2=\frac{(1+\overline{\lambda}\xi)^2(\xi-\lambda)^2}{(1+\lambda\overline{\lambda})^2}\,\,\text{and}\\ v_3^\lambda&=g_\lambda^{-1}\cdot i(\xi+\xi^3)=i\left(\frac{(1+\overline{\lambda}\xi)^3(\xi-\lambda)+(1+\overline{\lambda}\xi)(\xi-\lambda)^3}{(1+\lambda\overline{\lambda})^2}\right). \end{align} Moreover, fixing the ordered frame $\{(v_1^\lambda,0),(v_2^\lambda,0),(v_3^\lambda,0)\}$ for the twistor distribution gives an orientation for the vector space $\mathscr F_{(x,\lambda)}$. \end{prop} \begin{rmk} We are describing $\bb R^5$ as the real form of the fourth symmetric power of the defining representation of $SU(2)$. Let $B_q$ be the Borel subgroup of $SU(2)$ corresponding to $q\in\bb CP^1$. If we consider the weight decomposition of $\bb R^5$ with respect to $B_q$, we must have that $v_1$, $v_2$ and $v_3$ are the weight-vectors corresponding to the weights $-2$, $0$ and $+2$ respectively. Therefore, the orientation mentioned in the proposition above is natural with respect to the $SU(2)$ action. \end{rmk} \subsection{Invariant metric on $\bb R^5$, $\alpha$-surfaces and further properties} Since we shall need to identify $T\bb R^5$ with $T^*\bb R^5$, we introduce an $SU(2)$-invariant metric: \begin{prop}\label{invariantmetric} (\cite{Mukai} page 27 proposition 1.25) Let $p(\xi)=a_0+a_1\xi+a_2\xi^2+a_3\xi^3+a_4\xi^4$ be a point of $T_x\bb C^5$. Define the quadratic form on $T_x\bb C^5$ by $N(p)=a_2^2-3a_1a_3+12a_0a_4$.
Then, $N$ is $SL(2,\bb C)$-invariant, that is, $N(g\cdot p)=N(p)$, for all $p\in T_x\bb C^5$ and $g\in SL(2,\bb C)$. \end{prop} We can apply the reality condition and restrict this form to the tangent space $T_x\bb R^5$ for $x\in\bb R^5$. For a tangent vector $p(\xi)=(x_0+ix_4)+(x_1+ix_3)\xi+x_2\xi^2-(x_1-ix_3)\xi^3+(x_0-ix_4)\xi^4\in T_x\bb R^5$, we have $$N(p)=x_2^2+3(x_1^2+x_3^2)+12(x_0^2+x_4^2). $$ Thus, $N(p)$ is positive definite and defines an $SU(2)$-invariant metric $g$ on $\bb R^5$ by the polarisation formula. Moreover, we must have that $\{v_1^\lambda,v_2^\lambda,v_3^\lambda\}$, defined in proposition \ref{twistorfibration2}, is an orthogonal frame for the twistor distribution $\mathscr F$. We now turn to the description of the leaves of the twistor foliation, the so-called $\alpha$-surfaces. For $z\in\bb T$, we define $\Pi_z$ to be the space of sections of $\mcal O(4)$ passing through $z$; in local coordinates, $\Pi_{(\eta,\xi)}=\{p\in H^0(\bb CP^1,\mcal O(4))| \,p(\xi)=\eta\}$. Applying the reality structure, we define $P_z=\Pi_z\cap\tau(\Pi_z)\cap\bb R^5$. The following proposition follows from propositions \ref{twistorfibration1} and \ref{twistorfibration2}. \begin{prop}\label{prop:twistorplanes} Let $(\eta,\lambda)\in U_0$, then \begin{multline*} \Pi_{(\eta,\lambda)}=\left\{ \frac{1}{(1+\lambda\overline{\lambda})^4}\left[\eta(1+\overline{\lambda}\xi)^4+ a_1(1+\overline{\lambda}\xi)^3(\xi-\lambda)+a_2(1+\overline{\lambda}\xi)^2(\xi-\lambda)^2+\right. \right. \\ \left.\left. +a_3(1+\overline{\lambda}\xi)(\xi-\lambda)^3+a_4(\xi-\lambda)^4 \right] | a_1,\,a_2,\,a_3,\,a_4\in\bb C\right\}. \end{multline*} Applying the reality condition: \begin{multline*} P_{(\eta,\lambda)}=\left\{ \frac{1}{(1+\lambda\overline{\lambda})^4}\left( \eta(1+\overline{\lambda}\xi)^4+\overline{\eta}(\xi-\lambda)^4+\right.\right. \\ \left.\left.x_1[(1+\overline{\lambda}\xi)^3(\xi-\lambda)-(1+\overline{\lambda}\xi)(\xi-\lambda)^3]+\right.\right. \\ \left.\left.
x_2(1+\overline{\lambda}\xi)^2(\xi-\lambda)^2-x_3[(1+\overline{\lambda}\xi)^3(\xi-\lambda)+(1+\overline{\lambda}\xi)(\xi-\lambda)^3]\right)|x_1,\,x_2,\,x_3\in\bb R \right\}. \end{multline*} \end{prop} Now we shall concentrate on the tangent space to the $\alpha$-surfaces. We shall use the isomorphism $T\mathbb{R}^5\cong T^*\mathbb{R}^5$ given by the above inner product and define what we shall call ``natural forms'' on $\Omega^{0,1}(\mathcal{O}(4))$. The tangent space of the $\alpha$-surface $P_{(\eta,\lambda)}$, $\lambda\in\mathbb{C}P^1$, is generated by the vectors $v_1^\lambda,v_2^\lambda,v_3^\lambda$, where \begin{align}\label{eq:basistwistor} v_1^\lambda&=\frac{1}{(1+\lambda\overline{\lambda})^2}[(1+\overline{\lambda}\xi)^3(\xi-\lambda)-(1+\overline{\lambda}\xi)(\xi-\lambda)^3],\\ v_2^\lambda&=\frac{1}{(1+\lambda\overline{\lambda})^2}[(1+\overline{\lambda}\xi)^2(\xi-\lambda)^2],\\ v_3^\lambda&=\frac{i}{(1+\lambda\overline{\lambda})^2}[(1+\overline{\lambda}\xi)^3(\xi-\lambda)+(1+\overline{\lambda}\xi)(\xi-\lambda)^3], \end{align} where $\lambda$ is the holomorphic coordinate of a point in $U=\mathbb{C}P^1\setminus \{\infty\}$. Using the metric, we can find the dual basis to the one above. Namely, we define $\omega^\lambda_j\coloneqq g(v_j^\lambda,\cdot)\in\Omega^1P_{(\eta,\lambda)}$. Using holomorphic coordinates $(a_0, a_1, a_2, a_3, a_4)$ for $\bb C^5$ we can write a frame for $(1,0)$-forms as $\{da_0, da_1, da_2, da_3, da_4\}$. Expanding the formulas for $v_j^\lambda$ above we get: \begin{align*} \left.
\begin{array}{ll} \omega_1^\lambda&=\frac{1}{3(1+\lambda\overline{\lambda})^2}[6\overline{f_0}da_0+\frac{3}{2}\overline{f_1}da_1+\overline{f_2}da_2+\frac{3}{2}\overline{f_3}da_3+6\overline{f_4}da_4],\\ \omega_2^\lambda&=\frac{1}{(1+\lambda\overline{\lambda})^2}[6\overline{\lambda}^2da_0-3\overline{\lambda}(1-\lambda\overline{\lambda})da_1+\\&(1-4\lambda\overline{\lambda}+(\lambda\overline{\lambda})^2)da_2+3\lambda(1-\lambda\overline{\lambda})da_3+6\lambda^2da_4],\\ \omega_3^\lambda&=\frac{i}{3(1+\lambda\overline{\lambda})^2}[6\overline{g_0}da_0+\frac{3}{2}\overline{g_1}da_1+\overline{g_2}da_2+\frac{3}{2}\overline{g_3}da_3+6\overline{g_4}da_4], \end{array} \right. \end{align*} where: \begin{align*} f_0&=(\lambda^3-\lambda)=\overline{f_4},\\ f_1&=(-3\overline{\lambda}\lambda+1-3\lambda^2+\overline{\lambda}\lambda^3)=-\overline{f_3},\\ f_2&=(-3\overline{\lambda}^2\lambda+3\overline{\lambda}-3\overline{\lambda}\lambda^2+3\lambda)=\overline{f_2},\\ g_0&=-(\lambda^3+\lambda)=-\overline{g_4},\\ g_1&=(-3\overline{\lambda}\lambda+1+3\lambda^2-\overline{\lambda}\lambda^3)=\overline{g_3},\\ g_2&=(-3\overline{\lambda}^2\lambda+3\overline{\lambda}+3\overline{\lambda}\lambda^2-3\lambda)=-\overline{g_2}. \end{align*} Observe that $(0,\omega_k^\lambda)$ defines a 1-form on $\mathbb{C}P^1\times\mathbb{R}^5$. However, it will be denoted by the same symbol, $\omega_k^\lambda$. Now we consider a section $s$ of $\eta:\mathbb{C}P^1\times\mathbb{R}^5\to\mathcal{O}(4)$, $\eta(q,m)=m(q)$, and shall find the pull-back $\theta_k\coloneqq s^*\omega_k$; notice that $\theta_k$ is independent of the section $s$. In the next section, we shall use $\theta_k^{0,1}$ to describe distinguished bundles on the total space of $\mcal O(4)$ that correspond to trivial $U(1)$ monopole data. Thus, this method allows us to define line bundles over $\mathcal{O}(4)$ with vanishing first Chern class.
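Before computing with an explicit section, we record a quick symbolic sanity check (ours, using SymPy; not part of the original text) of two facts used above: the $SL(2,\bb C)$-invariance of $N$ from proposition \ref{invariantmetric}, and the orthogonality of the frame $\{v_1^\lambda,v_2^\lambda,v_3^\lambda\}$ at $\lambda=0$.

```python
from sympy import I, Poly, Rational, expand, symbols

xi = symbols('xi')

def N(c):
    # the invariant quadratic form N = a2^2 - 3 a1 a3 + 12 a0 a4
    a0, a1, a2, a3, a4 = c
    return a2**2 - 3*a1*a3 + 12*a0*a4

def act(g, c):
    # the action (c xi + d)^4 p((a xi + b)/(c xi + d)) on coefficient lists
    a, b, cc, d = g
    q = expand(sum(ci * (a*xi + b)**i * (cc*xi + d)**(4 - i)
                   for i, ci in enumerate(c)))
    return [Poly(q, xi).coeff_monomial(xi**i) for i in range(5)]

def B(p, q):
    # bilinear form obtained by polarising N
    return expand(Rational(1, 2)*(N([pi + qi for pi, qi in zip(p, q)])
                                  - N(p) - N(q)))

g = (2, 3, 1, 2)                     # ad - bc = 1, so g lies in SL(2)
c = [1, -2, 5, 0, 3]
assert expand(N(act(g, c)) - N(c)) == 0   # invariance of N

v1 = [0, 1, 0, -1, 0]                # v_1^0 = xi - xi^3
v2 = [0, 0, 1, 0, 0]                 # v_2^0 = xi^2
v3 = [0, I, 0, I, 0]                 # v_3^0 = i(xi + xi^3)
assert B(v1, v2) == 0 and B(v1, v3) == 0 and B(v2, v3) == 0
```

Checking orthogonality at the single point $\lambda=0$ suffices, since the frame at general $\lambda$ is the image of the frame at $0$ under $g_\lambda^{-1}$, which preserves $N$.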
We can choose an explicit section $s$ of $\eta$: \begin{align}\label{naturalsection} \begin{array}{ll} s:\mathcal{O}(4)&\to\mathbb{C}P^1\times\mathbb{R}^5,\\ (\mu,\lambda)&\mapsto\left(\lambda,\frac{1}{(1+\lambda\overline{\lambda})^2}(xv_0^\lambda+yv_4^\lambda)\right), \end{array} \end{align} where $\mu=x+iy$ and $$v_0^\lambda=\dfrac{1}{(1+\lambda\overline{\lambda})^2}[(1+\overline{\lambda}\xi)^4+(\xi-\lambda)^4] $$ and $$v_4^\lambda=\dfrac{i}{(1+\lambda\overline{\lambda})^2}[(1+\overline{\lambda}\xi)^4-(\xi-\lambda)^4] .$$ The vector fields $v_0^\lambda$ and $v_4^\lambda$ on $\bb R^5$ correspond respectively to the maximal and minimal weights of $\bb R^5$, as an $SU(2)$ representation, with respect to the Borel subgroup $B_\lambda$. We can now state the following: \begin{prop}\label{naturalforms} The $(0,1)$ parts of the \textit{natural forms} are given by: \begin{align*} \theta_1^{0,1}&=\frac{3\mu}{(1+\lambda\overline{\lambda})^3}d\overline{\lambda},\\ \theta_2^{0,1}&=0,\\ \theta_3^{0,1}&=\frac{3i\mu}{(1+\lambda\overline{\lambda})^3}d\overline{\lambda}. \end{align*} \end{prop} \begin{rmk} Before we proceed with the proof of this result, we point out that the differential forms above will be used in the description of distinguished line bundles on the total space $\bb T$. \end{rmk} \begin{proof} First write $\omega_j^\lambda=\sum_{k=0}^4h^j_k(\lambda)da_k$, where the $h^j_k$s are given by the equations defining the $\omega_j$s. The pullback by $s$ is given by: $$s^*\omega_j =\sum_{k=0}^4h^j_k(s(\mu,\lambda))\,d(a_k(s(\mu,\lambda))),$$ where $a_k(s(\mu,\lambda))$ is the coordinate function, and notice that $h^j_k(s(\mu,\lambda))=h^j_k(\lambda)$.
Expanding $v_0^\lambda$ and $v_4^\lambda$ above we get $$v_0^\lambda=\dfrac{1}{(1+\lambda\overline{\lambda})^2}\left[ (1+\lambda^4)+4(\overline{\lambda}-\lambda^3)\xi+6(\lambda^2+\overline{\lambda}^2)\xi^2-4(\lambda-\overline{\lambda}^3)\xi^3+(1+\overline{\lambda}^4)\xi^4\right]$$ and $$v_4^\lambda=\dfrac{i}{(1+\lambda\overline{\lambda})^2}\left[ (1-\lambda^4)+4(\overline{\lambda}+\lambda^3)\xi+6(\overline{\lambda}^2-\lambda^2)\xi^2+4(\lambda+\overline{\lambda}^3)\xi^3+(\overline{\lambda}^4-1)\xi^4\right].$$ From the definition of $s$ we have: \begin{itemize} \item $a_0(s(\mu,\lambda))=\dfrac{1}{(1+\lambda\overline{\lambda})^4}[x(1+\lambda^4)+iy(1-\lambda^4)]=\dfrac{1}{(1+\lambda\overline{\lambda})^4}[\mu+\overline{\mu}\lambda^4],$ \item $a_1(s(\mu,\lambda))=\dfrac{4}{(1+\lambda\overline{\lambda})^4}[x(\overline{\lambda}-\lambda^3)+iy(\overline{\lambda}+\lambda^3)]=\dfrac{4}{(1+\lambda\overline{\lambda})^4}[\mu\overline{\lambda}-\overline{\mu}\lambda^3],$ \item $a_2(s(\mu,\lambda))=\dfrac{6}{(1+\lambda\overline{\lambda})^4}[x(\lambda^2+\overline{\lambda}^2)+iy(\overline{\lambda}^2-\lambda^2)]=\dfrac{6}{(1+\lambda\overline{\lambda})^4}[\overline{\mu}\lambda^2+\mu\overline{\lambda}^2],$ \item $a_3(s(\mu,\lambda))=\dfrac{4}{(1+\lambda\overline{\lambda})^4}[-x(\lambda-\overline{\lambda}^3)+iy(\lambda+\overline{\lambda}^3)]=\dfrac{4}{(1+\lambda\overline{\lambda})^4}[-\overline{\mu}\lambda+\mu\overline{\lambda}^3]\,\,\text{and}$ \item $a_4(s(\mu,\lambda))=\dfrac{1}{(1+\lambda\overline{\lambda})^4}[x(1+\overline{\lambda}^4)+iy(\overline{\lambda}^4-1)]=\dfrac{1}{(1+\lambda\overline{\lambda})^4}[\overline{\mu}+\mu\overline{\lambda}^4].$ \end{itemize} Since we are interested only in the $(0,1)$ part of the $s^*\omega_j$s, we shall compute: $$ da_j(s(\mu,\lambda))^{0,1}=\der{a_j(s(\mu,\lambda))}{\overline{\lambda}}d\overline{\lambda}+\der{a_j(s(\mu,\lambda))}{\overline{\mu}}d\overline{\mu}.$$ Computing the derivatives: \begin{center} \begin{align*} \left\{ \begin{array}{ll} \der{a_0(s(\mu,\lambda))}{\overline{\lambda}}=\dfrac{-4(\mu\lambda+\overline{\mu}\lambda^5)}{(1+\lambda\overline{\lambda})^5},\\ \der{a_0(s(\mu,\lambda))}{\overline{\mu}}=\dfrac{\lambda^4}{(1+\lambda\overline{\lambda})^4}. \end{array} \right. \end{align*} \begin{align*} \left\{ \begin{array}{ll} \der{a_1(s(\mu,\lambda))}{\overline{\lambda}}=\dfrac{4[\mu(1+\lambda\overline{\lambda})-4\lambda(\mu\overline{\lambda}-\overline{\mu}\lambda^3)]}{(1+\lambda\overline{\lambda})^5},\\ \der{a_1(s(\mu,\lambda))}{\overline{\mu}}=\dfrac{-4\lambda^3}{(1+\lambda\overline{\lambda})^4}. \end{array} \right. \end{align*} \begin{align*} \left\{ \begin{array}{ll} \der{a_2(s(\mu,\lambda))}{\overline{\lambda}}=\dfrac{12[\mu\overline{\lambda}(1+\lambda\overline{\lambda})-2\lambda(\overline{\mu}\lambda^2+\mu\overline{\lambda}^2)]}{(1+\lambda\overline{\lambda})^5},\\ \der{a_2(s(\mu,\lambda))}{\overline{\mu}}=\dfrac{6\lambda^2}{(1+\lambda\overline{\lambda})^4}. \end{array} \right. \end{align*} \begin{align*} \left\{ \begin{array}{ll} \der{a_3(s(\mu,\lambda))}{\overline{\lambda}}=\dfrac{4[3\mu\overline{\lambda}^2(1+\lambda\overline{\lambda})-4\lambda(-\overline{\mu}\lambda+\mu\overline{\lambda}^3)]}{(1+\lambda\overline{\lambda})^5},\\ \der{a_3(s(\mu,\lambda))}{\overline{\mu}}=\dfrac{-4\lambda}{(1+\lambda\overline{\lambda})^4}. \end{array} \right. \end{align*} \begin{align*} \left\{ \begin{array}{ll} \der{a_4(s(\mu,\lambda))}{\overline{\lambda}}=\dfrac{4[\mu\overline{\lambda}^3(1+\lambda\overline{\lambda})-\lambda(\overline{\mu}+\mu\overline{\lambda}^4)]}{(1+\lambda\overline{\lambda})^5},\\ \der{a_4(s(\mu,\lambda))}{\overline{\mu}}=\dfrac{1}{(1+\lambda\overline{\lambda})^4}. \end{array} \right. \end{align*} \end{center} Substituting these into the equation for the pullback, we obtain the expressions stated in the proposition. \end{proof} We now finish this section with a result concerning the behaviour of the $\alpha$-surfaces with respect to the real structure on $\bb T$.
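The claim $\theta_2^{0,1}=0$ can be confirmed independently by symbolic computation. The SymPy sketch below is our own check (variable names are ours; $\lambda$, $\bar\lambda$, $\mu$, $\bar\mu$ are treated as independent Wirtinger variables): it assembles the $(0,1)$ part of $s^*\omega_2$ from the coordinates of the section $s$ and the coefficients of $\omega_2^\lambda$, and verifies that it vanishes identically.

```python
from sympy import diff, simplify, symbols

l, lb, m, mb = symbols('l lb m mb')      # lambda, conj(lambda), mu, conj(mu)
r = 1 + l*lb

# coordinates a_k(s(mu, lambda)) of the section s, as in the proof above
a = [(m + mb*l**4)/r**4,
     4*(m*lb - mb*l**3)/r**4,
     6*(mb*l**2 + m*lb**2)/r**4,
     4*(-mb*l + m*lb**3)/r**4,
     (mb + m*lb**4)/r**4]

# coefficients of omega_2^lambda in the coframe {da_0, ..., da_4}
h2 = [c/r**2 for c in
      [6*lb**2, -3*lb*(1 - l*lb), 1 - 4*l*lb + (l*lb)**2,
       3*l*(1 - l*lb), 6*l**2]]

# (0,1) part of s^* omega_2: coefficients of d(conj(lambda)), d(conj(mu))
dlb = simplify(sum(h*diff(ak, lb) for h, ak in zip(h2, a)))
dmb = simplify(sum(h*diff(ak, mb) for h, ak in zip(h2, a)))
assert dlb == 0 and dmb == 0             # theta_2^{0,1} = 0, as claimed
```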
More specifically, for $z\in\bb T$, we want to compare $P_z$ with $P_{\tau(z)}$, where $\tau$ is the real structure in $\bb T$. With this in mind, we state the following results, whose proofs follow by straightforward calculations and will be omitted. \begin{lem}\label{lem:orientation} Let $\lambda\in U\cap U'=\bb CP^1\setminus\{\infty,0\}$ and write $\dfrac{\overline{\lambda}}{\lambda}=x+iy$. The change of basis matrix from the basis $\{v_0^\lambda,v_1^\lambda,v_2^\lambda,v_3^\lambda,v_4^\lambda\}$ to $\{v_0^{(-1/\overline{\lambda})},v_1^{(-1/\overline{\lambda})},v_2^{(-1/\overline{\lambda})},v_3^{(-1/\overline{\lambda})},\\v_4^{(-1/\overline{\lambda})}\}$ is given by \begin{align} \left( \begin{array}{ccccc} x^2-y^2 & 0 & 0 & 0 & -2xy\\ 0 & x & 0 &-y & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & -y & 0 & -x & 0\\ -2xy & 0 & 0 & 0 & -(x^2-y^2) \end{array} \right). \end{align} \end{lem} \begin{cor}\label{cor:orientation} With the notation of the lemma above, the change of basis from $\{v_1^\lambda,v_2^\lambda,v_3^\lambda\}$ to $\{v_1^{(-1/\overline{\lambda})},v_2^{(-1/\overline{\lambda})},v_3^{(-1/\overline{\lambda})}\}$ is given by: \begin{align} \left( \begin{array}{ccc} x & 0 &-y \\ 0 & 1 & 0 \\ -y & 0 & -x \\ \end{array} \right). \end{align} In particular, the $\alpha$-surfaces corresponding to $z\in\bb T$ and $\tau(z)$ are the same $3$-dimensional affine subspace of $\bb R^5$ with the reverse orientation. \end{cor} We conclude this section by stating the twistor correspondence between $\bb R^5$ and $\bb T$: \textit{Every point $(x_0,x_1,x_2,x_3,x_4)\in\mathbb{R}^5$ corresponds to the real section $p(\xi)=(x_0+ix_4)+(x_1+ix_3)\xi+x_2\xi^2-(x_1-ix_3)\xi^3+(x_0-ix_4)\xi^4\in H^0(\mathbb{C}P^1,\mathcal{O}(4))$.
Conversely, every point $z\in \bb T$ corresponds to an oriented $3$-dimensional affine subspace of $\bb R^5$ given explicitly in local coordinates by proposition \ref{prop:twistorplanes} and whose orientation is given by the orientation of the basis $\{v_1^\lambda,v_2^\lambda,v_3^\lambda\}$.} \section{Bogomolny pairs on GHC-manifolds}\label{section2} Let $M$ be a regular GHC-manifold whose twistor space is $Z$ and consider the double fibration for the complexified GHC-manifold: \begin{align*} Z\xleftarrow{\eta}Y=\bb CP^1\times M^\bb C\xrightarrow{p}M^\bb C. \end{align*} Also, let $\Omega^*_\eta$ be the sheaf on $Y$ of $\eta$-vertical holomorphic forms and define the relative differential operator $d_\eta$ to be the composition map: \begin{align*} \Omega^0(Y)\xrightarrow{d}\Omega^1(Y)\xrightarrow{proj.}\Omega^1_\eta; \end{align*} observe that $d_\eta$ annihilates $\eta^*\Omega^0(Z)$. We shall now state and prove the following lemma: \begin{lem}\label{lem:flatrelative} Let $F$ be a holomorphic bundle on $Z$. Then $d_\eta$ extends to a \textit{flat relative connection} on $\eta^*F$, that is, an operator $$ \nabla_\eta:\eta^*F\to\Omega^1_\eta\otimes \eta^*F,$$ satisfying the Leibniz rule $$ \nabla_\eta(fs)=f\nabla_\eta (s)+d_\eta f\otimes s.$$ Conversely, if $\eta$ has simply connected fibres, then the holomorphic bundles on $Y$ arising from pull-back of a bundle on $Z$ are those which admit a flat relative connection. \end{lem} \begin{proof} Suppose $F$ has rank $k+1$; let $U$ and $U'$ be open sets in $Y$ and let $\{e_0,\cdots,e_k\}$ and $\{e'_0,\cdots,e'_k\}$ be local frames for $\eta^*F$ on $U$ and $U'$ respectively. Let $g_{ij}$ be the transition function of $\eta^*F$ from $U'$ to $U$, that is, $e_i=\sum^k_{j=0}g_{ij}e'_j$, and note that $d_\eta(g_{ij})=0$, since the $g_{ij}$ are pulled back from $Z$.
Let $s=\sum^k_{i=0}f_i\otimes e_i$ be a local section of $\eta^*F$ on $U$ and define $\nabla_\eta$ on this open set by: $$ \nabla_\eta(s)=\sum^k_{i=0}d_\eta(f_i)\otimes e_i.$$ Defining it similarly for other trivialisations, we now check that it is well defined. In fact, on $U\cap U'$ we can write $s=\sum^k_{i,j=0}f_ig_{ij}\otimes e'_j$, therefore we have: $$ \nabla_\eta(s)=\sum^k_{i,j=0}d_\eta(f_i)g_{ij}\otimes e'_j=\sum^k_{i=0}d_\eta(f_i)\otimes e_i,$$ since $d_\eta(g_{ij})=0$. Then, $\nabla_\eta$ is well-defined and clearly satisfies the Leibniz rule. Conversely, let $E$ be a bundle on $Y$ endowed with a flat relative connection $\nabla_\eta$. Since $\eta$ has simply connected fibres, we can trivialise $E$ with relatively parallel sections, that is, we can find local frames $\{e_0,\cdots,e_k\}$ for $E$ such that $\nabla_\eta(e_j)=0$. Now it is easy to see that the transition functions $g$ for this trivialisation must satisfy $d_\eta(g)=0$; this means that $g$ is constant along the fibres of $\eta$. Thus, each transition function factors as $g=h\circ\eta$, where $h$ is a transition function for a holomorphic bundle $F$ on $Z$. \end{proof} Suppose now that a holomorphic bundle $F$ on $Z$ is trivial on each section of $Z$; then the pull-back $\eta^*F$ is trivial on each fibre of $p$. Therefore, $ \hat F=p_*\eta^*F$ is a vector bundle on $M^\bb C$ with the same rank as $F$. Moreover, from the lemma above, the flat relative connection $\nabla_\eta$ can be pushed down via $p$ to an operator $$ D:\hat F\to p_*\Omega^1_\eta\otimes\hat F,$$ satisfying the Leibniz rule $$ D(fs)=fD(s)+p_*d_\eta(f)\otimes s.$$ We now use a fact from the last section, namely that there exists a canonical isomorphism $$ (p_*\Omega^1_\eta)_x\cong H^0(\bb CP^1_x,\mcal K^*),$$ where $\mcal K_q$ is the subspace of $T_xM^{\bb C}$ given by the kernel of the highest weight $1$-forms for each $q\in\bb CP^1$.
This isomorphism allows us to define a canonical map \begin{equation}\label{eq:mape} e_q: p_*\Omega^1_\eta\to\mcal K^*_q, \end{equation} given by evaluation at $q\in\bb CP^1$. Restrict $\hat F$ and the operator $D$ to the submanifold $p(\eta^{-1}(z))$ of $M^\bb C$, where $z\in Z_q$ is a point in the fibre of $Z$ at $q\in\bb CP^1$. Since $\nabla_\eta$ is relatively flat, $D$ restricts to a flat connection on this submanifold. Conversely, notice that if we have a bundle $\hat F$ with an operator $D$ that is flat on $p(\eta^{-1}(z))$ for all $z\in Z$, then we obtain a bundle $p^*(\hat F)$ endowed with a flat relative connection $\nabla_\eta=p^*(D)$. Now consider the splitting $$ p_*\Omega^1_\eta=\Omega^1(M^\bb C)\oplus (E^*\otimes H').$$ Moreover, it is proved in \cite{SU2} that we have $p_*d_\eta=d\oplus0$ under the splitting above. This means that $D$ can be written as $D=\nabla\oplus\Phi$, where $\nabla$ is an actual connection and $\Phi$ is a section of $End(\hat F)\otimes(E^*\otimes H')$, called the \textit{Higgs field}. Moreover, on each $\alpha$-surface $\Pi_z=p(\eta^{-1}(z))$, the operator $\nabla\oplus\Phi$ defines a connection by the composition: \begin{align*} \hat F\xrightarrow{\nabla\oplus\Phi}(\hat F\otimes E^*\otimes H^*)\oplus(\hat F\otimes E^*\otimes H')=\hat F\otimes E^*\otimes\hat H\xrightarrow{e_q}\hat F\otimes\Omega^1(\Pi_z). \end{align*} This motivates the following definition: \begin{defi} Let $M$ be a regular GHC-manifold and $\hat F$ a vector bundle on $M^\bb C$. A \textit{Bogomolny pair} on $\hat F$ is a pair $(\nabla,\Phi)$, where $\nabla$ is a connection on $\hat F$ and $\Phi$ is a section of $End(\hat F)\otimes(E^*\otimes H')$, such that the connection $\nabla\oplus\Phi$, as defined by the composition above, is flat on each $\alpha$-surface. Applying the reality condition gives Bogomolny pairs on $M$. \end{defi} We have then the Hitchin-Ward correspondence for GHC-manifolds: \begin{thm}\label{thm:GHChitchinward} \cite{SU2} Let $M$ be a regular GHC-manifold.
There is a one-to-one correspondence between Bogomolny pairs $(\nabla,\Phi)$ for a bundle $\hat F$ on $M^\bb C$ and holomorphic bundles $F$ on $Z$ that are trivial on sections. The correspondence remains true in the presence of a real structure. \end{thm} \begin{rmk} The theorem above gives Bogomolny pairs for the group $SL(n,\bb C)$, where $n$ is the rank of $F$. By considering bundles $F$ on $Z$ whose structure group reduces, we have the above correspondence between those bundles and Bogomolny pairs $(\nabla,\Phi)$ for a gauge group $G\subset SL(n,\bb C)$. The objective of this section is to describe Bogomolny pairs for the group $SU(2)$ when $M=\bb R^5$. \end{rmk} \begin{rmk} In \cite{MG}, Hitchin proves a correspondence between solutions to the Bogomolny equation in $\bb R^3$ and holomorphic bundles on the total space of the holomorphic tangent bundle $\bb T_2$ to $\bb CP^1$ that are trivial on real sections. Therefore, $(\nabla,\Phi)$ is a Bogomolny pair in $\bb R^3$ if and only if it satisfies the Bogomolny equation $F_\nabla=*D_\nabla\Phi$. \end{rmk} \subsection{The map $e_q$ and the Higgs field} We shall now turn our attention to the case where $M=\bb R^5$. We saw above that the map $e_q$, given by equation \eqref{eq:mape}, plays a very important role in the description of Bogomolny pairs. In this section we shall describe it in the case $M=\bb R^5$. We know that $p_*(\Omega^1_\eta)=H^*\oplus H'$, therefore, $e_q$ is an equivariant map \begin{equation*} e_q:(\bb C^5)^*\oplus \bb C^3\to K^*_q. \end{equation*} In this section we shall describe the real version of this map: \begin{equation} e_q:(\bb R^5)^*\oplus H'_\bb R\to (K^*_\bb R)_q. \end{equation} According to \cite{SU2}, under the splitting above, the map $e_q$ acts on $\bb R^5$ as a projection, while on $H'_\bb R$ it is described by the sequence \eqref{eq:sequence2}. We shall now move towards the description of $$e_q: H'_\bb R\to K^*_q$$ and its real version.
First we shall decompose $\bb C^3$ into weights with respect to $\lambda\in\bb CP^1$. Similarly to the $\bb C^5$ case, realising $\bb C^3$ as the space of polynomials of degree at most $2$ in the variable $\xi$ allows us to write the action of $SL(2,\bb C)$ on $\bb C^3$ by: \begin{align*} g\cdot p(\xi)=(c\xi+d)^2 \cdot p\left( \frac{a\xi+b}{c\xi+d}\right), \end{align*} where $g=\begin{pmatrix} a & b\\ c & d \end{pmatrix}\in SL(2,\bb C)$ and $p(\xi)\in\bb C^3$. Moreover, for a point $\lambda\in U\subset\mathbb{C}P^1$ define $g_\lambda\in SU(2)$ by $g_\lambda=\frac{1}{\sqrt{1+\overline{\lambda}\lambda}}\begin{pmatrix} 1&\lambda \\ -\overline{\lambda} & 1 \end{pmatrix}$. Thus, the weight decomposition of $\bb C^3$ with respect to $\lambda$ is: \begin{align}\label{eq:decompositionc3} \left\{ \begin{array}{ll} \alpha_1^\lambda=g_\lambda^{-1}\cdot 1&=\dfrac{1}{(1+\lambda\overline{\lambda})}(\overline{\lambda}\xi+1)^2\\&=\dfrac{1}{(1+\overline{\lambda}\lambda)}(1+2\overline{\lambda}\xi+\overline{\lambda}^2\xi^2),\\ \alpha_2^\lambda=g_\lambda^{-1}\cdot\xi&=\dfrac{1}{(1+\lambda\overline{\lambda})}(\overline{\lambda}\xi+1)(\xi-\lambda)\\&=\dfrac{1}{(1+\overline{\lambda}\lambda)}(-\lambda+(1-\lambda\overline{\lambda})\xi+\overline{\lambda}\xi^2),\\ \alpha_3^\lambda=g_\lambda^{-1}\cdot\xi^2&=\dfrac{1}{(1+\lambda\overline{\lambda})}(\xi-\lambda)^2\\&=\dfrac{1}{(1+\overline{\lambda}\lambda)}(\lambda^2-2\lambda\xi+\xi^2). \end{array} \right. \end{align} Write $(H')^*=\bb C^3$, $G=SL(2,\bb C)$ and let $B$ be the Borel subgroup of upper triangular matrices. Then, the $\alpha_j^\lambda$ trivialise the homogeneous bundle $(G\times_B H')^*$ over $\bb CP^1$, where $\lambda$ is a local holomorphic coordinate for $q\in\bb CP^1$. Moreover, from proposition \ref{prop:emap} we know that there is an isomorphism $(K/S)_\lambda^*\to (H')^*$.
This isomorphism allows us to describe a global frame for $(K/S)_\lambda^*$: \begin{align}\label{eq:decompositionks} \left\{ \begin{array}{ll} F_1^\lambda=\dfrac{1}{(1+\overline{\lambda}\lambda)}(W_1^\lambda+2\overline{\lambda}W^\lambda_2+\overline{\lambda}^2W^\lambda_3),\\ F_2^\lambda=\dfrac{1}{(1+\overline{\lambda}\lambda)}(-\lambda W_1^\lambda+(1-\lambda\overline{\lambda})W^\lambda_2+\overline{\lambda}W^\lambda_3),\\ F_3^\lambda=\dfrac{1}{(1+\overline{\lambda}\lambda)}(\lambda^2W_1^\lambda-2\lambda W^\lambda_2+W^\lambda_3), \end{array} \right. \end{align} where $W_1^\lambda=\omega^\lambda_1+i\omega^\lambda_3$, $W^\lambda_2=\omega^\lambda_2$ and $W^\lambda_3=\omega^\lambda_1-i\omega^\lambda_3$, the $\omega^\lambda_j$ being defined in the last section. \begin{rmk} Before proceeding, it is important to notice that the induced map $e_q:End(E)\otimes (H')^*\to End(E)\otimes(K/S)^*$ is given by $$e_q(\phi_1,\phi_2,\phi_3)=\sum_{j=1}^3\phi_jF^q_j.$$ \end{rmk} We can now apply the reality condition to describe explicitly a global frame for $(K/S)^*_{\bb R}$: \begin{align}\label{eq:decompositionr3} \left\{ \begin{array}{ll} h_1^\lambda&=F_1^\lambda-F_3^\lambda=\dfrac{1}{(1+\lambda\overline{\lambda})}[(1-\lambda^2)W_1^\lambda+2(\lambda+\overline{\lambda})W_2^\lambda-(1-\overline{\lambda}^2)W_3^\lambda],\\ h_2^\lambda&=F_2^\lambda=\dfrac{1}{(1+\lambda\overline{\lambda})}[-\lambda W_1^\lambda+(1-\lambda\overline{\lambda})W_2^\lambda+\overline{\lambda}W_3^\lambda],\\ h_3^\lambda&=i(F_1^\lambda+F_3^\lambda)=\dfrac{i}{(1+\lambda\overline{\lambda})}[(1+\lambda^2)W_1^\lambda+2(\overline{\lambda}-\lambda)W_2^\lambda+(1+\overline{\lambda}^2)W_3^\lambda]. \end{array} \right. \end{align} The proposition below describes the map $e_q$; it follows from the discussion above and proposition \ref{prop:emap}: \begin{prop}\label{prop:emapex} Let $E$ be a vector bundle over $\bb R^5$, $\nabla$ a connection on $E$ and $\Phi=(\phi_1,\phi_2,\phi_3)$ a section of $End(E)\otimes \bb C^3$.
On the $\alpha$-surface $P_{(\lambda,\mu)}$ we have: $$ e_q(\nabla\oplus\Phi)|_{P_{(\lambda,\mu)}}=\nabla|_{P_{(\lambda,\mu)}}+\sum_{j=1}^3\phi_jh^\lambda_j. $$ \end{prop} We conclude this section by mentioning how these results give a natural orientation for the $\alpha$-surfaces. A straightforward calculation proves the following lemma: \begin{lem}\label{lemma:naturalorientation} Let $\lambda\in\bb CP^1$ and let $-1/\overline{\lambda}$ be its antipodal point; then $h_j^{-1/\overline{\lambda}}=-h_j^\lambda$, for $j=1,2,3$. \end{lem} The following corollary says that a choice of frame for the homogeneous bundle $(K/S)^*$ naturally defines an orientation on the $\alpha$-surfaces: \begin{cor}\label{cor:naturalorientation} Let $P_{(\lambda,\mu)}$ be an $\alpha$-surface. Define its orientation by the $3$-form $$\Xi_{(\lambda,\mu)}=h_1^\lambda\wedge h_2^\lambda\wedge h_3^\lambda.$$ Then, $P_{\tau(\lambda,\mu)}$ and $P_{(\lambda,\mu)}$ are the same submanifold of $\bb R^5$ with reverse orientation. \end{cor} \begin{rmk} Notice that, from \eqref{eq:decompositionr3}, this orientation coincides with the one given by the order of the triple $v_1^\lambda, v_2^\lambda, v_3^\lambda$. \end{rmk} \subsection{SU(2)-Bogomolny pairs on $\bb R^5$} We begin this section by defining the \textit{fundamental forms}: \begin{defi}\label{fundamentalforms} Consider $h^\lambda_j$, for $j=1,2,3$, as $1$-forms on $\bb CP^1\times\bb R^5$. Let $s$ be the section of $\eta:\bb CP^1\times\bb R^5\to\bb T$ defined in \eqref{naturalsection}. Define the \textit{fundamental forms} on $\bb T$ by $\Psi _j=s^*h^\lambda_j$.
\end{defi} In our local coordinates we have the following lemma: \begin{lem}\label{lem:fundamentalforms} In local coordinates for the open set $U_0\subset\bb T$, the fundamental forms are given by: \begin{align*} \left\{ \begin{array}{ll} \Psi_1=&6\mu\dfrac{(1-\overline{\lambda}^2)}{(1-\overline{\lambda}\lambda)^4}d\overline{\lambda},\\ \Psi_2=&-6\mu\dfrac{\overline{\lambda}}{(1-\overline{\lambda}\lambda)^4}d\overline{\lambda},\\ \Psi_3=&-6i\mu\dfrac{(1+\overline{\lambda}^2)}{(1-\overline{\lambda}\lambda)^4}d\overline{\lambda}. \end{array} \right. \end{align*} \end{lem} \begin{proof} The result follows from proposition \ref{naturalforms} and from substituting the $h_j^\lambda$ given in \eqref{eq:decompositionr3}. Moreover, notice that $W_1^\lambda=\omega^\lambda_1+i\omega^\lambda_3$ and $W_3^\lambda=-\omega^\lambda_1+i\omega^\lambda_3$, therefore $s^*W_1^\lambda=0$ and $s^*W_3^\lambda=-6\dfrac{\mu}{(1-\overline{\lambda}\lambda)^4}d\overline{\lambda}$. \end{proof} \begin{rmk} \begin{enumerate} \item The fundamental forms will play an important role in the explicit description of the holomorphic structure of the bundle corresponding to a Bogomolny pair on $\bb R^5$. \item It is important to notice that each $\Psi_j$ defines a cohomology class in $H^1(\bb T,\mcal O)$ and hence, by exponentiation, an element of the Picard group $Pic_0(\bb T)$. The line bundles corresponding to these classes shall be explicitly described in the next section. \end{enumerate} \end{rmk} \begin{defi}\label{defi:su2pair} Let $E$ be an $SU(2)$ vector bundle on $\bb R^5$, that is, $E$ has complex rank $2$ and is equipped with a symplectic form and a quaternionic structure. We say that the pair $(\nabla,\Phi)$ on $E$ is an $SU(2)$ Bogomolny pair if: \begin{enumerate} \item $\nabla$ and $\Phi=(\phi_1,\phi_2,\phi_3)$ preserve the symplectic form; \item For every $\alpha$-surface $P_z$, the connection $\nabla\oplus\Phi$, given in proposition \ref{prop:emapex}, is flat.
\end{enumerate} \end{defi} We know that $P_z$ is a leaf of the integrable distribution spanned by $\{v^q_1,v^q_2,v^q_3\}$. From the previous section we can choose coordinates $\{\chi^z_1,\chi^z_2,\chi^z_3\}$ such that $d\chi^z_k=h^q_k$. If $A$ is the connection $1$-form for $\nabla$ on $P_z$, then we can write: $$ e_q(\nabla\oplus(-i\Phi))|_{P_z} = d+\sum^3_{k=1}(A_k-i\phi_k)d\chi^z_k.$$\footnote{The $-i$ here will become clear in the proof of theorem \ref{thm:hitchinward}. } The zero curvature condition for this connection gives: \begin{align}\label{zerocurvature} F_{kj}+i\nabla_k\phi_j-i\nabla_j\phi_k-[\phi_j,\phi_k]=0, \end{align} where $F$ is the curvature $2$-form for $\nabla$. Before proceeding to the main result of this section, we shall state the following lemma, which compares the connections $e_q(\nabla\oplus\Phi)|_{P_z}$ and $e_{\tau(q)}(\nabla\oplus\Phi)|_{P_{\tau(z)}}$: \begin{lem} \label{lem:antipodal} $e_{\tau(q)}(\nabla\oplus\Phi)|_{P_{\tau(z)}}=\nabla-\phi_1h^q_1-\phi_2h^q_2-\phi_3h^q_3$. \end{lem} \begin{proof} The proof is a straightforward calculation using lemma \ref{lemma:naturalorientation}. \end{proof} \begin{thm}\label{thm:hitchinward} Let $E$ be an $SU(2)$ bundle on $\bb R^5$. There is a one-to-one correspondence between $SU(2)$ Bogomolny pairs $(\nabla,\Phi)$ and holomorphic bundles $\tilde E$ on $\bb T$ satisfying: \begin{enumerate}[(i)] \item $\tilde E$ is trivial on real sections, \item $\tilde E$ has a symplectic structure, \item $\tilde E$ is equipped with a quaternionic structure $\sigma$ covering $\tau$, that is, $\sigma$ is an anti-holomorphic linear map $$ \sigma:\tilde E_z\to\tilde E_{\tau(z)},$$ such that $\sigma^2=-id_{\tilde E_z}$. \end{enumerate} \end{thm} \begin{proof} We shall verify the conditions that reduce the gauge group to $SU(2)$ and describe the holomorphic structure for the bundle $\tilde E$ explicitly.
Let $(\nabla,\Phi)$ be an $SU(2)$ Bogomolny pair on $E$ and consider the double fibration: \begin{align*} \bb T\xleftarrow{\eta}Y=\bb CP^1\times \bb R^5\xrightarrow{p}\bb R^5. \end{align*} Let $s$ be the section of $\eta$ as defined in \eqref{naturalsection}. We already know from theorem \ref{thm:GHChitchinward} that $\tilde E=s^*(p^*E)$ is holomorphic and trivial on real sections of $\bb T$; however, we shall describe this holomorphic structure explicitly. Define the operator $\overline{\partial}:\Omega^0(\bb T,\tilde E)\to\Omega^{0,1}(\bb T,\tilde E)$ by: $$ \overline{\partial} t=\left((s^*\nabla)t-i\left[\sum^3_{k=1}(s^*\phi_k)t\otimes\Psi_k\right]\right)^{0,1}, $$where $t$ is a section of $\tilde E$. We claim that $\overline{\partial}$ is a holomorphic structure on $\tilde E$; we have to prove that $\overline{\partial}^2=0$. To simplify our notation, write $$ \hat \nabla=s^*\nabla-i\Omega,$$ where $\Omega=\sum^3_{k=1}s^*\phi_k\otimes\Psi_k$. Observe that $\Omega$ is a section of $\Omega^1\otimes End(\tilde E)$ and this makes $\hat \nabla$ a connection on $\tilde E$. Then, $\overline{\partial}^2=F_{\hat \nabla}^{0,2}$, where $F_{\hat \nabla}$ is the curvature of $\hat \nabla$. We have: \begin{align*} F_{\hat \nabla}&=s^*F_\nabla-i(s^*\nabla(\Omega))-\Omega\wedge\Omega\\ &=s^*F_\nabla+i\left[\sum^3_{k=1}\Psi_k\wedge s^*(\nabla\phi_k)\right]-i\left[\sum^3_{k=1}(s^*\phi_k)\otimes d\Psi_k\right]-\sum_{j< k}[s^*\phi_j,s^*\phi_k]\Psi_j\wedge\Psi_k. \end{align*} Now, $F_{\hat \nabla}^{0,2}$ vanishes by the zero curvature condition \eqref{zerocurvature} on every $\alpha$-surface and $(d\Psi_k)^{0,2}=\overline{\partial}\Psi_k=0$. This proves that $\overline{\partial}$ is a holomorphic structure on $\tilde E$. Let $\omega$ be a symplectic structure on $E$. Since $\nabla$ and $-i\Phi$ preserve $\omega$, from the definition of $\overline{\partial}$ we must have that $s^*(p^*\omega)$ is also preserved by $\overline{\partial}$.
Therefore, $\tilde E$ is endowed with a symplectic structure compatible with $\overline{\partial}$. To describe the quaternionic structure, we shall use an alternative description for the fibres of $\tilde E$. Let $z\in\bb T$ and define: $$\tilde E_z=\{t\in\Gamma(P_z,E)\,|\,\,e_q(\nabla\oplus\Phi)t=0\}.$$ Now $E$ has a quaternionic structure $\sigma$; let $t\in\tilde E_z$, so that $t$ satisfies \begin{align*} \left(\nabla -i\sum^3_{k=1}\phi_kh^q_k\right)t=0. \end{align*} Applying $\sigma$: \begin{align*} \left(\nabla +i\sum^3_{k=1}\phi_kh^q_k\right)\sigma(t)=0. \end{align*} Using lemma \ref{lem:antipodal}: \begin{align*} \left(\nabla -i\sum^3_{k=1}\phi_kh^{\tau(q)}_k\right)\sigma(t)=0. \end{align*} Thus, $t\in \tilde E_z$ implies $\sigma(t)\in \tilde E_{\tau(z)}$. Therefore, $\sigma: \tilde E_z\to \tilde E_{\tau(z)}$ is anti-holomorphic and satisfies $\sigma^2=-id_{\tilde E_z}$. For the converse, we just need to observe that both the symplectic structure $\eta^*\omega$ and the quaternionic structure $\eta^*\sigma$ on the bundle $\eta^*(E)$ are compatible with the flat relative connection $\nabla_\eta$ on $\eta^*(\tilde E)$. Furthermore, both structures remain compatible with $D$ on $E=p_*(\eta^*\tilde E)$ when they are pushed down to $\bb R^5$ via $p$, and therefore $\nabla$ and $\Phi$ are both compatible with the quaternionic and symplectic structures on $E$. \end{proof} The theorem above is phrased for the group $SU(2)$; however, minor modifications in the real structure lead to Bogomolny pairs for other groups. \subsection{The bundles $L_{(a,b,c)}$} To illustrate the construction above, we shall find the explicit transition functions for the bundles on $\bb T$ that correspond to trivial $U(1)$ Bogomolny pairs given by the following data: $E=\bb R^5\times \bb C$, $\nabla=d$ and $\Phi=(-ia,-ib,-ic)$, where $a,b,c$ are real numbers, not all vanishing. Let $\tilde L$ be the trivial complex line bundle on $\bb T$.
From theorem \ref{thm:hitchinward} we can endow $\tilde L$ with a holomorphic structure $\overline{\partial}$ given by: $$\overline{\partial}(s)=\der{s}{\overline{\lambda}}+\Omega(s), $$ where $\Omega=\sum_{j=1}^3-i\phi_j\Psi_j$. Let $l$ be a smooth trivialisation for $\tilde L$, i.e. $l$ is a non-vanishing complex function on $\bb T$. A local section $s=fl$ is holomorphic if and only if $\overline{\partial}(fl)=0$. But this means that: $$\der{f}{\overline{\mu}}=0 $$ and $$ \der{f}{\overline{\lambda}}=f\beta,$$ where $\Omega=\beta d\overline{\lambda}$. Suppose that $f=g\cdot exp(u)$, with $g$ holomorphic; then $$\der{f}{\overline{\lambda}}=f\der{u}{\overline{\lambda}}.$$ Thus, if we want to trivialise $\tilde L$ in a given open set, we have to find a function $u$, regular on this open set, such that $\der{u}{\overline{\lambda}}=\beta$. In this case, $exp(u)l$ will give the desired holomorphic trivialisation. We shall investigate three separate cases: \begin{itemize} \item $\phi_1=\dfrac{i}{2}$, $ \phi_2=0$ and $\phi_3=0$. The bundle corresponding to this data will be denoted by $L_{(\frac{1}{2},0,0)}$. In this case, we must have $\Omega=\frac{1}{2}\Psi_1=3\mu\dfrac{(1-\overline{\lambda}^2)}{(1-\overline{\lambda}\lambda)^4}d\overline{\lambda}$. Then $$\beta_1=3\mu\dfrac{(1-\overline{\lambda}^2)}{(1-\overline{\lambda}\lambda)^4}.$$ Define $$\tilde u_1=-\dfrac{\mu}{(1-\overline{\lambda}\lambda)^3}\left(\dfrac{1}{\lambda}+\overline{\lambda}^3\right)$$ and observe that $\tilde u_1$ is singular at $\infty$ and at $0$. Now, define $\tilde g_1=\dfrac{\mu}{\lambda}$ and $$u_1=\tilde u_1 +\tilde g_1=\dfrac{\mu}{(1-\overline{\lambda}\lambda)^3}\left(3\overline{\lambda}+\lambda\overline{\lambda}^2+\lambda^2\overline{\lambda}^3\right).$$ Then, since $u_1$ is regular at $0$ and singular at $\infty$, $f_0=exp(u_1)$ defines a trivialisation of $L_{(\frac{1}{2},0,0)}$ in the open set $U_0$. Now define $\tilde{\tilde g}_1=\dfrac{\mu}{\lambda^3}$.
Write $g_1=-\tilde g_1+\tilde{\tilde g}_1$. We have: $$u_1+g_1=\dfrac{\mu}{(1-\overline{\lambda}\lambda)^3} \left(\frac{1}{\lambda^3}+\dfrac{\overline{\lambda}}{\lambda^2}+\dfrac{\overline{\lambda}^2}{\lambda}\right).$$ Since $u_1+g_1$ is regular at $\infty$ and singular at $0$, $f_1=exp(u_1+g_1)$ is a trivialisation of $L_{(\frac{1}{2},0,0)}$ over $U_1$. On the intersection $U_0\cap U_1$ we have $f_1e^{g_1}=f_0$. Then the transition function for $L_{(\frac{1}{2},0,0)}$ is given by \begin{align}\label{eq:transition1} g^1_{01}=exp\left(-\mu\left(\dfrac{1}{\lambda}-\dfrac{1}{\lambda^3}\right)\right). \end{align} \item$\phi_1=0$, $ \phi_2=i$ and $\phi_3=0$. We shall denote the bundle corresponding to this data by $L_{(0,1,0)}$ and in this case we have $$ \Omega=\Psi_2=-6\mu\dfrac{\overline{\lambda}}{(1-\overline{\lambda}\lambda)^4}d\overline{\lambda}.$$ Define $$u_2= \dfrac{\mu}{(1-\overline{\lambda}\lambda)^3}\left(\dfrac{3\overline{\lambda}}{\lambda}+\dfrac{1}{\lambda^2}\right).$$ We have that $u_2$ is singular at $0$ but regular at $\infty$; therefore $f_1=exp(u_2)$ trivialises $L_{(0,1,0)}$ on $U_1$. Now, for $g_2=-\dfrac{\mu}{\lambda^2}$ we have: $$u_2+g_2=-\dfrac{\mu}{(1-\overline{\lambda}\lambda)^3}\left(3\overline{\lambda}+\lambda\overline{\lambda}\right) ,$$ which is regular at $0$, but singular at $\infty$. Thus, $f_0=exp(u_2+g_2)$ trivialises $L_{(0,1,0)}$ on $U_0$. On the intersection we then have $f_0=e^{\mu/\lambda^2}f_1$. Therefore, the transition function of $L_{(0,1,0)}$ is: \begin{align}\label{eq:transition2} g^2_{01}=exp\left(\dfrac{\mu}{\lambda^2}\right). \end{align} \item$\phi_1=0$, $ \phi_2=0$ and $\phi_3=\dfrac{i}{2}$. This case is similar to the first one and we shall write the transition function for this bundle without proof: \begin{align} g^3_{01}=exp\left(-i\mu\left(\dfrac{1}{\lambda}+\dfrac{1}{\lambda^3}\right)\right).
\end{align} \end{itemize} Now we state: \begin{prop}\label{prop:transition} The bundle $L_{(\frac{a}{2},b,\frac{c}{2})}$ has transition function \begin{align} g^{(a,b,c)}_{01}=exp\left(-a\mu\left(\dfrac{1}{\lambda}-\dfrac{1}{\lambda^3}\right)+b\dfrac{\mu}{\lambda^2}-ic\mu\left(\dfrac{1}{\lambda}+\dfrac{1}{\lambda^3}\right)\right). \end{align} \end{prop} Since the real structure in our local coordinates is given by $$\tau(\lambda,\mu)=(-1/\overline{\lambda},-\overline{\mu}/\overline{\lambda}^4), $$ noting that $\tau$ interchanges $U_0$ and $U_1$ gives us $$\tau(g^{(a,b,c)}_{01})=\left(\overline{g^{(a,b,c)}_{01}}\right)^{-1}.$$ Therefore we have an anti-holomorphic isomorphism $$ \sigma:L_{(\frac{a}{2},b,\frac{c}{2})}\cong\left(L_{(\frac{a}{2},b,\frac{c}{2})}\right)^*.$$ \subsection{Relations with self-duality on $\bb R^8$} We bring this section to an end by relating the concepts of Bogomolny pairs and self-duality. We start with a $1$-hypercomplex manifold $M$ and a complex vector bundle $E$ on $M$. Since in this setting there is no Higgs field in a Bogomolny pair, we say that a connection $\nabla$ is \emph{self-dual}, or \emph{hyperholomorphic} \cite{MV}, if $\nabla$ restricted to the $\alpha$-surfaces is flat. Remember that we have $TM^\bb C=E_M\otimes H$, which gives a decomposition $$\Lambda^2T^*M^\bb C=(S^2E^*_M\otimes\Lambda^2 H)\oplus(\Lambda^2E^*_M\otimes S^2H).$$ We now state some results from \cite{SU2} and \cite{MV}: \begin{prop} The following are equivalent: \begin{enumerate}[(i)] \item $\nabla$ is self-dual, \item $F_\nabla$ lies in the component $(S^2E^*_M\otimes\Lambda^2 H)$ in the decomposition above, \item $F_\nabla$ is $SU(2)$ invariant.
\end{enumerate} \end{prop} A connection $\nabla$ is called \emph{Yang-Mills} if $\nabla$ is a minimum of the \emph{Yang-Mills} functional: \begin{align}\label{YM} \mcal Y(\nabla)=\int_M|F_\nabla|^2\text{vol}_g, \end{align} where $|F_\nabla|^2=\text{Tr}(F_\nabla\wedge *F_\nabla)$ and $\text{vol}_g$ is the volume form on $M$ with respect to $g$. \begin{rmk} It is proved in \cite{MV} that if a connection $\nabla$ satisfies conditions (i), (ii) or (iii) of the proposition above, then it is Yang-Mills. \end{rmk} In order for us to explain the relations between self-dual connections and Bogomolny pairs, we shall first recover the following result from \cite{SU2}: \begin{thm} If $M$ is a $k$-hypercomplex manifold, then there exists a hypercomplex manifold $\tilde M$ with a projection $p:\tilde M\to M$ such that the pair $(\nabla,\Phi)$ on a bundle $F$ on $M$ is a monopole if and only if $p^*(\nabla\oplus\Phi)$ on the bundle $p^*F$ on $\tilde M$ is self-dual. \end{thm} The results above say that Bogomolny pairs in $\bb R^5$ are obtained from self-duality in $\bb R^8$. However, it is important to remark that a self-dual connection $\nabla$ in $\bb R^8$ does not have finite energy \cite{TAU}, that is, $\mcal Y(\nabla)$ is not finite. This leads us to believe that the Yang-Mills-Higgs functional on $\bb R^5$, obtained from \eqref{YM} by dimensional reduction, also does not admit finite energy minima. \section{Algebraic curves and monopoles on $\bb R^5$}\label{section3} In this section we describe the method of constructing Bogomolny pairs from algebraic curves on $\bb T$. Let $S\subset\bb T$ be a compact algebraic curve in the linear system $|\mathcal{O}(4k)|$, that is, on the open set $U$, $S$ is defined by the equation \begin{align}\label{eq:spectralcurve} P(\xi,\eta)=\eta^k+a_1(\xi)\eta^{k-1}+\cdots+a_{k-1}(\xi)\eta+a_k(\xi)=0, \end{align} where $a_j(\xi)$ is a polynomial of degree $4j$ in $\xi$.
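\begin{rmk} Although we shall not need the precise computation here, the genus $\mathbf g$ of a smooth curve $S$ in the linear system $|\mathcal{O}(4k)|$ can be recovered by adjunction. This is a sketch of our own, under the standard identifications $K_{\bb T}\cong\pi^*\mcal O(-6)$, where $\pi:\bb T\to\bb CP^1$ is the projection, and $\deg([S]|_S)=4k\cdot k$, since $S$ is a $k$-fold cover of $\bb CP^1$: \begin{align*} 2\mathbf g-2=\deg\left((K_{\bb T}\otimes[S])|_S\right)=-6k+4k^2, \end{align*} so that $\mathbf g=(k-1)(2k-1)$, the value that appears in the flow construction below. \end{rmk}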
Next we shall discuss how these curves relate to holomorphic bundles on $\bb T$. In this section, we shall write $L=L_{(a,b,c)}$ for any non-zero $(a,b,c)\in\bb R^3$, where $L_{(a,b,c)}$ is defined in proposition \ref{prop:transition}. Then, there exists a short exact sequence of sheaves: \begin{align}\label{seq:spectralseq} 0\to\mcal O(L^2(-4k))\to\mcal O(L^2)\to\mcal O_S(L^2)\to0. \end{align} This gives a long exact sequence on cohomology: \begin{align}\label{seq:longspectral} 0&\to H^0(\bb T,L^2(-4k))\to H^0(\bb T,L^2)\to H^0(S,L^2)\\ &\to H^1(\bb T,L^2(-4k))\to H^1(\bb T,L^2)\to H^1(S,L^2)\cdots \end{align} Assume further that $S$ is such that $L^2|_S$ is trivial. This implies that $H^1(S,L^2)=0$ and \eqref{seq:longspectral} becomes: \begin{align}\label{seq:longspectral2} 0\to H^0(S,L^2)\xrightarrow{\delta} H^1(\bb T,L^2(-4k))\xrightarrow{\otimes\psi} H^1(\bb T,L^2)\to 0, \end{align} where $\psi\in H^0(\bb T,\mcal O(4k))$ is the section defining $S$. Choose a trivialisation $s$ of $L^2$ over $S$, that is, a nowhere-vanishing element $s\in H^0(S,L^2)$. Define the bundle $\tilde E$ over $\bb T$ by the cohomology class $\delta (s)$. This means $\tilde E$ is given as an extension: \begin{align*} 0\to L(-2k)\xrightarrow{\alpha}\tilde E\xrightarrow{\beta} L^*(2k)\to0. \end{align*} We then have the following: \begin{prop}\label{prop:keepinmind} $\tilde E$ satisfies the following conditions: \begin{enumerate}[(i)] \item $\tilde E$ has a symplectic structure, \item If $S$ is real and $L(-2k)$ has a quaternionic structure on $S$, then $\tilde E$ is equipped with a quaternionic structure $\sigma$ covering $\tau$, that is, $\sigma$ is an anti-holomorphic linear map $$ \sigma:\tilde E_z\to\tilde E_{\tau(z)},$$ such that $\sigma^2=-id_{\tilde E_z}$. \end{enumerate} \end{prop} \begin{proof} Property (i) is straightforward from the definition of $\tilde E$. For (ii), let $\sigma:L\to L^*$ be the anti-holomorphic isomorphism.
We can define a bundle $\sigma(\tilde E)$ on $\bb T$ via the extension: \begin{align*} 0\to L^*(-2k)\xrightarrow{\alpha'}\sigma(\tilde E)\xrightarrow{\beta'} L(2k)\to0. \end{align*} Now, we can extend the antiholomorphic isomorphism $L\cong L^*$ to an antiholomorphic isomorphism $\tilde E\cong\sigma(\tilde E)$. \end{proof} We shall need the following facts about these spectral curves \cite{HAH}: \begin{prop} The cohomology group $H^1(\mathbb{T},\mathcal{O}_{\mathbb{T}})$ is generated by $\eta^i /\xi^j$, $0<i\leq k-1$, $0<j<ki$. \end{prop} Noticing that $exp:H^1(S,\mathcal{O}_{S})\to Pic_0(S)$ is surjective, the bundles of vanishing degree over $S$ are generated by $exp(\eta^i /\xi^j)$. \begin{prop} The natural map $H^1(\mathbb{T},\mathcal{O}_{\mathbb{T}})\to H^1(S,\mathcal{O}_S)$ is surjective, which means that $H^1(S,\mathcal{O}_S)$ is generated by $\eta^i /\xi^j$, $0<i\leq k-1$, $0<j<ki$. \end{prop} Notice that if $S$ is smooth the proposition above gives degree zero line bundles on $S$. In this section we shall assume the curves are smooth; the adjustments for the non-smooth case essentially follow from what is done in \cite{HAH}. A bit of notation: let $\pi:\mathbb{T}\to \mathbb{C}P^1$ be the projection; then $\mathcal{O}_{\mathbb{T}}(l)$ denotes the pull-back of $\mathcal{O}(l)$ by $\pi$. Also, if $F$ is a sheaf on $\mathbb{T}$ we denote by $F(l)$ the sheaf $F\otimes\mathcal{O}_\mathbb{T}(l)$. \begin{defi} The theta divisor $\Theta$ in $S$ is the set of line bundles of degree $g-1$ that have a non-zero global section. The affine Jacobian $J^{g-1}$ is the set of line bundles of degree $g-1$ on $S$. \end{defi} \begin{thm} [Beauville \cite{Beau}] There is a one-to-one correspondence between $J^{g-1}\setminus\Theta$ and $Gl(k,\mathbb{C})$-conjugacy classes of $gl(k,\mathbb{C})$-valued polynomials $A(\xi)=\sum_{j=0}^{4}A_j\xi^j$ such that $A(\xi)$ is regular for every $\xi$ and the characteristic polynomial of $A(\xi)$ is \eqref{eq:spectralcurve}.
\end{thm} We shall now give the idea of this construction, with the details that will be needed when we treat the boundary conditions. In order to do this, we need the following lemma: \begin{lem}\label{lem:techiso} Let $E$ be an invertible sheaf on $\mathbb{T}$ whose degree is $g-1$ and such that $H^0(S,E)=0$; then $H^0(S,E(1))\cong\mathbb{C}^k$. \end{lem} \begin{proof} Let $\xi_0\in\mathbb{C}P^1$ and denote by $D_{\xi_0}$ the divisor corresponding to the meromorphic function $(\xi-\xi_0)$ on $S$; this means that as a set $D_{\xi_0}$ consists of the points of $S$ in the fibre $\pi^{-1}(\xi_0)$, which is a set of $k$ points, counted with multiplicities. Now consider the exact sequence of sheaves (see \cite{GH}, page 139): \begin{align*} 0\to\mathcal{O}_S(E)\to\mathcal{O}_S(E(1))\to\mathcal{O}_{D_{\xi_0}}(E(1))\to0. \end{align*} From Riemann-Roch, the hypothesis $H^0(S,E)=0$ implies that $H^1(S,E)=0$. Taking the exact sequence on cohomology and noticing that $H^0(D_{\xi_0},E(1))=\mathbb{C}^k$, since $D_{\xi_0}$ is a set of $k$ points counted with multiplicity, gives the required isomorphism. \end{proof} For $\xi\in U$, define a map $Z:H^0(D_{\xi},E(1))\to H^0(D_{\xi},E(1))$ given by multiplication by $\eta$. We define the linear map $$A(\xi):\cohm{0}{S}{E(1)}\to\cohm{0}{S}{E(1)}$$ by the commutative diagram: \begin{align*} \begin{CD} \cohm{0}{S}{E(1)} @>>> H^0(D_{\xi},E(1))\\ @VVA(\xi)V @VVZV\\ \cohm{0}{S}{E(1)} @>>> H^0(D_{\xi},E(1)) \end{CD} \end{align*} where the horizontal maps are the restriction isomorphisms from the proof of lemma \ref{lem:techiso}. Conversely, let $A(\xi)=\sum_{j=0}^{4}A_j\xi^j$ be a regular matricial polynomial and define a sheaf $E(1)$ over $\bb T$ via the exact sequence: \begin{align}\label{eq:matricestolinebundles} 0\to\mcal O_\bb T(-4)^{\oplus k}\xrightarrow{\eta-A(\xi)}\mcal O_\bb T^{\oplus k}\to E(1)\to0. \end{align} $E(1)$ is supported on $S$ and, since $\eta-A(\xi)$ has one-dimensional kernel at each point of $S$, $E(1)$ is a line bundle of degree $g-1+k$, so that $E$ has degree $g-1$, where $g$ is the genus of $S$.
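\begin{rmk} As a quick illustration of lemma \ref{lem:techiso} (a check of our own): for $k=1$ the curve $S$ is a section of $\bb T$, so $S\cong\bb CP^1$ and $g=0$. An invertible sheaf of degree $g-1=-1$ is $E\cong\mcal O(-1)$, which indeed satisfies $H^0(S,E)=0$, while $E(1)\cong\mcal O$ and $H^0(S,E(1))\cong\bb C=\bb C^k$. \end{rmk}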
\subsection{Monopoles from spectral curves} From now on in this paper, we shall consider the bundle $L$ to be the line bundle on $\bb T$ given by the transition function $g=exp(\eta/\xi^2)$. Now we define spectral curves: \begin{defi}\label{spectral} A spectral curve is a compact algebraic curve $S$ in $\mathbb{T}$ satisfying: \begin{enumerate}[(i)] \item $S$ is a compact algebraic curve in the linear system $\mathcal{O}(4k)$, therefore it is given by an equation of the type \begin{align*} P(\xi,\eta)=\eta^k+a_1(\xi)\eta^{k-1}+\cdots+a_{k-1}(\xi)\eta+a_k(\xi)=0, \end{align*} where $a_j(\xi)$ is a polynomial of degree $4j$ in $\xi$. \item $S$ has no multiple components. \item The line bundle $L$ has order $2$ on $S$. \item $H^0(S,L^z(2k-3))=0$ for $z\in (0,2)$. \end{enumerate} \end{defi} Using the results from the last section, we are able to make the following definition: \begin{defi} A Bogomolny pair $(\nabla,\Phi)$ on $\bb R^5$ is called a \emph{monopole} if the corresponding holomorphic bundle $\tilde E$ on $\bb T$ is defined via a spectral curve $S$ satisfying the conditions of definition \ref{spectral}. We say the \emph{algebraic charge} of the monopole is $k$ if the curve $S$ corresponding to the monopole lies in the linear system $|\mathcal{O}(4k)|$. \end{defi} We shall refer to the algebraic charge simply as the charge; however, we must bear in mind that we do not yet have a topological definition of charge for monopoles in $\bb R^5$. \begin{rmk} Our main motivation for this definition is that the conditions in definition \ref{spectral} are similar to the conditions on the spectral curves for monopoles in $\bb R^3$ \cite{CM}. Also, condition (iv) above allows us to define a flow of endomorphisms for a bundle over the interval $(0,2)$ whose fibre at $z\in(0,2)$ is $\cohm{0}{S}{L^z(2k-2)}$. \end{rmk} \begin{eg}\label{eg:k=1} A spectral curve $S$ for $k=1$ is given by the equation $\eta+a_1(\xi)=0$, where $a_1$ is a polynomial of degree $4$.
Imposing the condition that $S$ is real gives $\overline a_1(\xi)=\overline\xi^4a_1(-\frac{1}{\overline\xi})$, and this condition says precisely that $S$ is a real section $P$ of $\bb T$ over $\bb CP^1$. Moreover, conditions $(ii)$ and $(iii)$ are clearly satisfied; remember that $L$ is trivial on real sections of $\bb T$ since it corresponds to a Bogomolny pair on $\bb R^5$. For condition $(iv)$, notice that on $P$ we have $L^z(2k-3)=L^z(-1)\cong\mcal O(-1)$, which has no non-zero holomorphic sections. We conclude that the spectral curves for charge $1$ monopoles correspond to real sections of $\bb T$. \end{eg} \subsection{From linear flows to Nahm's equations} In the last section we saw that, in order to construct the monopole data, we need an antiholomorphic isomorphism $L=L^*$ on $S$, and this implies that $L^2$ is trivial on $S$. Together with condition (iii) of Definition \ref{spectral}, this implies that the element $g\in H^1(S,\mathcal{O})$ is a lattice point in $H^1(S,\mathbb{Z})$. Thus, the straight line between $0$ and $g$ defines a morphism, which we shall refer to as a flow: \begin{align*} h:S^1&\to H^1(S,\mathcal{O})/H^1(S,\mathbb{Z})\cong Pic^0(S)\\ \exp(i\pi z)&\mapsto \exp(izg), \,\, z\in[0,2]. \end{align*} Here $Pic^0(S)$ is the group of degree $0$ line bundles on $S$ and $J^{\mathbf{g}-1}(S)$ is the Jacobian of line bundles of degree $\mathbf g-1$, where $\mathbf g$ is the genus of $S$. We can identify $Pic^0(S)$ with $J^{\mathbf g-1}(S)$ via $F\mapsto F(2k-3)$, since $\deg(F(2k-3))=k(2k-3)=2k^2-3k=(k-1)(2k-1)-1=\mathbf g-1$. Now $h$ can be considered a flow in the Jacobian, and condition (iv) in Definition \ref{spectral} says that, for $z\in(0,2)$, $h(z)$ is not in the theta divisor, that is, the set of line bundles in $J^{\mathbf g-1}(S)$ with a non-vanishing holomorphic section. These properties will allow us to derive Nahm's equations with the appropriate boundary conditions; the boundary conditions themselves will be treated in the next section. \begin{lem} For $z\in (0,2)$ we have $\dim H^0(S,L^z(2k-2))=k$.
\end{lem} \begin{proof} This lemma follows from Lemma \ref{lem:techiso} by noticing that the degree of $L^z(2k-3)$ as a line bundle on $S$ is $\mathbf g-1$, where $\mathbf g=(k-1)(2k-1)$ is the genus of $S$, and that $H^0(S,L^z(2k-3))=0$ by condition (iv) of Definition \ref{spectral}. \end{proof} We can now define a bundle $V$ on $\mathbb{C}$ in the following way: Let $W$ be the bundle over $\mathbb{C}\times S$ whose fibre at $(z,p)$ is $L^z(2k-2)_p$ and let $P_1:\mathbb{C}\times S\to\mathbb{C}$ be the projection onto the first coordinate; define $V=(P_1)_*W$.\footnote{$V$ is a locally free sheaf since the direct image sheaf $(P_1)_*W$ over $\mathbb{C}$ is torsion-free.} From the lemma above, we know that $V$ has rank $k$ and, moreover, the fibre at $z\in(0,2)$ is $V_z=H^0(S,L^z(2k-2))$. We now state the following lemma, whose proof is similar to the proof of proposition (4.5) in \cite{CM}. \begin{lem}\label{lem:tecreal} If $l < 4k$, then any section $s\in\cohm{0}{S}{\mcal O(l)}$ can be written uniquely as: \begin{align*} s=\sum_{j=0}^{[l/4]}\eta^j\pi^*(c_j), \end{align*} where $c_j\in\cohm{0}{\bb CP^1}{\mcal O(l-4j)}$. \end{lem} Observe that this lemma implies that at $z=0$ the bundle $L^z(2k-2)$ has more sections than for $z\in(0,2)$. This means that the fibre $V_0$ is not simply $\cohm{0}{S}{\mcal O(2k-2)}$, and we shall treat this case later. In this section we consider the behaviour of the bundle $V$ on the interval $(0,2)$; the endpoints will be studied in the next section. From Beauville's theorem we have that, for $z\in(0,2)$, each line bundle $L^z(2k-3)$ corresponds to a conjugacy class of a regular matricial polynomial $A(\xi,z)=\sum_{j=0}^{4}A_j(z)\xi^j$. Moreover, $A(\xi,z)$ can be seen, from its construction, as a linear map $A(\xi,z):H^0(S,L^z(2k-2))\to H^0(S,L^z(2k-2))$, that is, $A(\xi,z):V_z\to V_z$. However, we want to define actual matrices, whereas so far we only have an equivalence class of matrices; in other words, we have endomorphisms of $V_z$.
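For completeness, the statement that $V$ has rank $k$ can also be checked directly by Riemann-Roch: the projection $\pi:S\to\bb CP^1$ has degree $k$, so $\deg L^z(2k-2)=k(2k-2)$ on $S$, and with $\mathbf g=(k-1)(2k-1)$ we get
\begin{align*}
\chi(L^z(2k-2))=k(2k-2)-\mathbf g+1=(2k^2-2k)-(2k^2-3k+1)+1=k,
\end{align*}
which, together with the vanishing of $H^1(S,L^z(2k-2))$ (as in the proof of Lemma \ref{lem:techiso}), recovers $\dim V_z=k$ for $z\in(0,2)$.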
The objective now is to use the endomorphisms $A_j(z)$ to define a connection on $V$ over the interval $(0,2)$. We shall then trivialise $V$ by covariant constant sections with respect to this connection. Let $s(z)$ be a local holomorphic section of $V$; we can write it as a pair of holomorphic functions $f_0:(S\cap U_0)\times\bb C^*\to\mathbb{C}^k$ and $f_1:(S\cap U_1)\times\bb C^*\to\mathbb{C}^k$ satisfying $f_0=\exp(z\eta/\xi^2)\xi^{2k-2}f_1$ on $U_0\cap U_1$. We now follow the construction in \cite{CM}, page 169. Differentiating with respect to $z$: \begin{align*} \frac{\partial f_0}{\partial z}=\frac{\eta}{\xi^2}e^{z\eta/\xi^2}\xi^{2k-2}f_1+e^{z\eta/\xi^2}\xi^{2k-2}\frac{\partial f_1}{\partial z}. \end{align*} From the definition of $A$, we have: \begin{align*} (\eta-A_0-A_1\xi-A_2\xi^2-A_3\xi^3-A_4\xi^4)s=0, \end{align*} or \begin{align*} \frac{\eta}{\xi^2}s=(A_0\xi^{-2}+A_1\xi^{-1}+\frac{1}{2}A_2)s+(\frac{1}{2}A_2+A_3\xi+A_4\xi^2)s. \end{align*} This implies that on $U_0\cap U_1$ we have \begin{align*} &\frac{\partial f_0}{\partial z}-(\frac{1}{2}A_2+A_3\xi+A_4\xi^2)f_0\\ =&\frac{\partial f_0}{\partial z}-\frac{\eta}{\xi^2}f_0+(A_0\xi^{-2}+A_1\xi^{-1}+\frac{1}{2}A_2)f_0\\ =&e^{z\eta/\xi^2}\xi^{2k-2}\frac{\partial f_1}{\partial z}+e^{z\eta/\xi^2}\xi^{2k-2}(A_0\xi^{-2}+A_1\xi^{-1}+\frac{1}{2}A_2)f_1\\ =&e^{z\eta/\xi^2}\xi^{2k-2}\left[\frac{\partial f_1}{\partial z}+(A_0\xi^{-2}+A_1\xi^{-1}+\frac{1}{2}A_2)f_1 \right]. \end{align*} The lines above tell us that we can define a connection on $V$, over $(0,2)$, whose covariant derivative on $U_0$ is given by: \begin{align*} \nabla_zs=\frac{\partial f_0}{\partial z}-(\frac{1}{2}A_2+A_3\xi+A_4\xi^2)f_0. \end{align*} We shall use this to define a frame $(s_1,\cdots,s_k)$ of covariant constant sections of $V$. Let $A_+=\frac{1}{2}A_2+A_3\xi+A_4\xi^2$; then we can write \begin{align*} \frac{\partial s}{\partial z}-A_+s=0.
\end{align*} Taking the derivative of \begin{align*} (\eta-A)s=0 \end{align*} with respect to $z$, we have \begin{align*} (\eta-A)\frac{\partial s}{\partial z}-\frac{\partial A}{\partial z}s=0. \end{align*} Thus, \begin{align*} -(\eta-A)A_+s-\frac{\partial A}{\partial z}s=-\eta A_+s+AA_+s-\frac{\partial A}{\partial z}s=0, \end{align*} hence \begin{align*} \left([A,A_+]-\frac{\partial A}{\partial z} \right)s=0. \end{align*} Observe that this equation is independent of $\eta$. Now let $F$ be a fibre of $\mathbb{T}$ such that $F\cap S=\{x_1,\cdots,x_k\}$ with the $x_j$ all distinct. We then have an exact sequence \begin{align*} 0\to L^z(2k-3)\to L^z(2k-2)\to\mathcal{O}_{F\cap S}\to0. \end{align*} Using the fact that $H^0(S,L^z(2k-3))=H^1(S,L^z(2k-3))=0$, the exact cohomology sequence shows that the restriction map $H^0(S,L^z(2k-2))\to H^0(F\cap S,\mathcal{O})$ is an isomorphism. Thus, we can find a frame $s_1,\cdots,s_k$ for $H^0(S,L^z(2k-2))$ such that $s_i(x_j)=\delta_{ij}$. We then have that $B=[A,A_+]-\frac{\partial A}{\partial z}$ satisfies $\sum_jB_{ij}s_j(x_l)=0$ for all $i,l$. But this says that $B_{ij}=0$. Since the condition on $F$ is generic, we must have $B_{ij}=0$ for all fibres. Thus, we must have \begin{align*} \frac{\partial A}{\partial z}=[A,A_+]. \end{align*} We therefore have the generalized Nahm's equations: \begin{align}\label{eq:cNahm's} \begin{array}{ll} \dot{A}_0&=\dfrac{1}{2}[A_0,A_2]\\ \dot{A}_1&=[A_0,A_3]+\dfrac{1}{2}[A_1,A_2]\\ \dot{A}_2&=[A_1,A_3]+[A_0,A_4]\\ \dot{A}_3&=[A_1,A_4]+\dfrac{1}{2}[A_2,A_3]\\ \dot{A}_4&=\dfrac{1}{2}[A_2,A_4], \end{array} \end{align} for $z\in(0,2)$, where $\dot{A}_j=\dfrac{\partial A_j}{\partial z}$. \begin{rmk}\label{rmk:momentmap} We now make an important remark on the equations \eqref{eq:cNahm's}. Let $k\geq 2$; we defined the endomorphisms as linear maps $A(\xi,z):H^0(S,L^z(2k-2))\to H^0(S,L^z(2k-2))$.
Now, in order for $\tilde E$ in Proposition \ref{prop:keepinmind} to inherit a quaternionic structure we need $L(-2k)$ to be quaternionic on $S$, and therefore $L(2k-2)=L(-2k)\otimes\mcal O(4k-2)$ is quaternionic. This means that there is no reality condition to be imposed on $A(\xi,z)$ that would give us real spectral curves; i.e., this method does not construct monopoles for the group $SU(2)$. However, we can still impose a reality condition. Namely, let $A_0=T_1+iT_2$, $A_1=T_3+iT_4$, $A_2=2iT_5$, $A_3=T_3-iT_4$ and $A_4=-T_1+iT_2$; then we have: \begin{align}\label{eq:Nahm's} \begin{array}{ll} \dot{T}_1&=[T_5,T_2]\\ \dot{T}_2&=[T_1,T_5]\\ \dot{T}_3&=[T_1,T_3]+[T_2,T_4]+[T_5,T_4]\\ \dot{T}_4&=-[T_1,T_4]+[T_2,T_3]-[T_5,T_3]\\ \dot{T}_5&=[T_1,T_2]+[T_4,T_3]. \end{array} \end{align} The interesting fact here is that the equations \eqref{eq:Nahm's} can be interpreted as a $2$-symplectic moment map \cite{SU2}, and it would be interesting to study the moduli space of solutions to \eqref{eq:Nahm's}. \end{rmk} Before proceeding to the next section, we give an alternative description of the endomorphisms $A_j(z)$ that will be useful later. Let $S$ be a spectral curve and consider the map: \begin{align}\label{eq:mapm} m:\cohm{0}{S}{\mcal O(4)}\otimes\cohm{0}{S}{L^z(2k-2)}\to\cohm{0}{S}{L^z(2k+2)}, \end{align} and denote by $K_z$ its kernel at $z$. We have the following proposition, whose proof is similar to the proof of proposition 4.8 in \cite{CM}: \begin{prop}\label{prop:kernelm} The map $h:K_z\to V_z$ given by \begin{align*} h(\eta\otimes t_0+1\otimes s_0+\xi\otimes s_1+\xi^2\otimes s_2+\xi^3\otimes s_3+\xi^4\otimes s_4)= t_0 \end{align*} is an isomorphism for every $z\in(0,2)$.
\end{prop} An immediate consequence of this proposition is that there exist endomorphisms $A_j(z)\in \text{End}(V_z)$ such that $$ (\eta-A_0-A_1\xi-A_2\xi^2-A_3\xi^3-A_4\xi^4)s=0.$$ The uniqueness part of Beauville's theorem tells us that these endomorphisms are the same ones obtained via Beauville's theorem for the bundle $L^z(2k-3)$. \section{Boundary conditions for Nahm's equations}\label{section4} In this section we find necessary and sufficient conditions on the matrices $A_j$ in the equations \eqref{eq:cNahm's} for them to correspond to spectral curves satisfying conditions (i)-(iv) in Definition \ref{spectral}. \begin{defi} Let $p(\xi,\eta)$ be the polynomial defining the spectral curve $S$, that is, $S=\{(\xi,\eta)\,|\,\,p(\xi,\eta)=0\}$. We shall use the following notation in this section: \begin{enumerate}[a)] \item Define $M=\bb C\times S$ and let $P:M\to \bb C$ be the projection onto the first coordinate. \item $\tilde{M}=\{(z,\xi,\eta)\in\bb C \times\bb T\,|\,\,\tilde p(z,\xi,\eta)=0\}$, where $\tilde p(z,\xi,\eta)=z^kp \left( \xi,\dfrac{\eta}{z} \right)$, and $\tilde P: \tilde M\to \bb C$ is the projection onto the first coordinate. \item For fixed $z\in\bb C$ we define the curve $zS$ ($S$ shrunk by a factor $z$) to be the curve defined by $\tilde p(z,\xi,\eta)$. \item $\tilde V=\tilde P_*(\mcal L(2k-2)|_{\tilde M})$, where $\mcal L$ is the bundle defined in e) below. \item Define $\mcal L$ over $\bb C\times\bb T$ to be the bundle such that $\mcal L|_{\{z\}\times \bb T}=L^z$. \item Similarly, we set $X=P_*(\mcal L(2k+2))$ and $\tilde X=\tilde P_*(\mcal L(2k+2))$. \item Bundles on $\bb T$, their lifts to $\bb C\times \bb T$ and their restrictions to $M$ and $\tilde M$ will be denoted by the same letter.
\end{enumerate} \end{defi} \begin{rmk} \begin{enumerate}[i)] \item If we denote the zero section of $\mcal O(4)$ by $F$, we notice that $\tilde P^{-1} (0)= F^{(k-1)}$, the $(k-1)\textsuperscript{th}$ formal neighbourhood of $F$ in the total space of $\mcal O(4)$. \item $V=P_*(\mcal L(2k-2)|_M)$ is the bundle defined in the previous section. \end{enumerate} \end{rmk} \subsection{The fibre of $V$ at $0$} \begin{lem}\label{lem:ztozn} Define a map $\rho:L(k)|_{\tilde M}\to\mcal L(k)|_M$ in the following way: Let $s$ be a section of $L(k)$ on $\tilde M$ represented on the trivialisation $U_i$ by $\tilde f_i(z,\eta,\xi)$. Define $\rho(s)$ to be the section of $\mcal L(k)$ on $M$ represented by $f_i(z,\eta,\xi)=\tilde f_i(z,z\eta,\xi)$. Then $\rho$ is a well defined map of bundles and it is an isomorphism for $z\neq 0$. \end{lem} \begin{proof} We just need to verify that $f_0=\xi^k\exp(z\eta/\xi^2)f_1$, but this is true since $\tilde f_0=\xi^k\exp(\eta/\xi^2)\tilde f_1$. It is immediate that $\rho$ is an isomorphism for $z\neq0$. \end{proof} \begin{cor}\label{cor:ztozn} Taking direct images in the lemma above, there is a map of sheaves over $\bb C$ \[ \rho:\tilde V\to V \] which is an isomorphism for $z\neq0$. \end{cor} Consider now the evaluation map: $$\tilde {ev}_z:\tilde V_z\to \cohm{0}{\tilde P^{-1}(z)}{L(2k-2)} .$$ It is an isomorphism for $z\neq 0$. For the next result, we shall use the following notation: $\Gamma_m\subset\cohm{0}{\bb CP^1}{\mcal O(2m)}$ consists of the sections $s$ of the form $s=\sum_{j=0}^ma_j\xi^{2j}$, and we denote by $L\otimes\Gamma_m\subset L(2m)$ the set of sections of the form $\sum_{jk}\alpha_j\otimes s_k$, with $\alpha_j$ a section of $L$ and $s_k\in\Gamma_m$. The first result in this section is: \begin{prop}\label{prop:fibreat0} Let $V_0\subset H^0(S,\mathcal{O}(2k-2))$ be the fibre of $V$ at $z=0$ and let $\Gamma_{k-1}\subset H^0(\mathbb{C}P^1,\mathcal{O}(2k-2))$ be the set of sections of the form $p(\xi)=c_{2k-2}\xi^{2k-2}+c_{2k-4}\xi^{2k-4}+\cdots+c_2\xi^2+c_0$.
Then $V_0\cong\Gamma_{k-1}$. \end{prop} An extension of a section of $\mcal O(2k-2)$ to a section of $\mcal L(2k-2)$ on the $m\textsuperscript{th}$ formal neighbourhood of $\{0\}\times S$ consists of the following data: \begin{align*} s&=s_0+zs_1+\cdots+z^ms_m,\,\,\,\,\,s_i\in\cohm{0}{U_0}{\mcal O},\\ s'&=s'_0+zs'_1+\cdots+z^ms'_m,\,\,\,\,\,s'_i\in\cohm{0}{U_1}{\mcal O}, \end{align*} such that $s=\xi^{2k-2}e^{z\eta/\xi^2}s'\,\,\text{mod}\,z^{m+1}$ on $U_0\cap U_1$. From Lemma \ref{lem:ztozn} we can change $z$ to $z\eta$ near $z=0$. This means the extension above can be written as: \begin{align*} p&=p_0+zp_1+\cdots+z^mp_m,\,\,\,\,\,p_i\in\cohm{0}{U_0}{\mcal O(2k-2-4i)},\\ p'&=p'_0+zp'_1+\cdots+z^mp'_m,\,\,\,\,\,p'_i\in\cohm{0}{U_1}{\mcal O(2k-2-4i)}, \end{align*} such that $p=e^{\eta/\xi^2}p'\,\,\text{mod}\,\eta^{m+1}$. We can now state and prove the following: \begin{lem}\label{lem:extension} Every section in $L\otimes\Gamma_m$ on $F\subset\mathbb{T}$ can be extended uniquely to the $m\textsuperscript{th}$ formal neighbourhood, but no section can be extended to the $(m+1)\textsuperscript{th}$ neighbourhood. \end{lem} \begin{proof} A section of $L(2m)$ on the $m$\textsuperscript{th} neighbourhood consists of local sections $p_i\in H^0(U_0,\mathcal{O}(2m-4i))$ and $p'_i\in H^0(U_1,\mathcal{O}(2m-4i))$ such that \begin{align*} p_0+\eta p_1+\cdots+\eta^mp_m=e^{\eta/\xi^2}(p'_0+\eta p'_1+\cdots+\eta^mp'_m)\,\text{mod}\,\eta^{m+1}.
\end{align*} We are therefore looking for functions $p_i$ on $U_0$ and $p'_i$ on $U_1$ such that on the intersection $U_0\cap U_1$ we have: \begin{align}\label{eq:matrix1} \left(\begin{array}{ccccc} \xi^{2m} & 0 & 0 & \ldots & 0 \\ \xi^{2m-2} & \xi^{2m-4} & 0 & \ldots & 0 \\ \frac{1}{2}\xi^{2m-4} & \xi^{2m-6} & \xi^{2m-8} & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \frac{1}{m!} & \frac{1}{(m-1)!}\xi^{-2} & \cdots & \cdots & \xi^{-2m} \end{array} \right) \left(\begin{array}{c} p'_0 \\ p'_1 \\ \vdots \\ \vdots \\ p'_m \end{array} \right)= \left(\begin{array}{c} p_0 \\ p_1 \\ \vdots \\ \vdots \\ p_m \end{array} \right). \end{align} Now, for $l$ even with $0\leq l\leq 2m$, \begin{align}\label{eq:solution2} \left(\begin{array}{c} p'_0 \\ p'_1 \\ \vdots \\ \\ \vdots \\ p'_m \end{array} \right)= \left(\begin{array}{c} c_0 \xi^{-2m+l}\\ c_1 \xi^{-2m+l+2}\\ \vdots \\ c_{(\frac{2m-l}{2})} \\ \vdots\\ 0 \end{array} \right) \end{align} solves \eqref{eq:matrix1} if \begin{align}\label{eq:solution1} \sum_{i=0}^{(\frac{2m-l}{2})}\frac{c_i}{(n-i)!}=0, \end{align} where $\left(\dfrac{l}{2}+1\right)\leq n\leq m$. By \cite{CM}, page 173, there exists a unique solution of \eqref{eq:solution1}, and for this solution both $c_0$ and $c_{(\frac{2m-l}{2})}$ are non-vanishing. This implies that \eqref{eq:solution2} trivialises a rank-$(m+1)$ bundle $E_m\to\bb C P^1$ whose transition function is given by the matrix in \eqref{eq:matrix1}. From the exact sequence \begin{align} 0\to E_{m-1}(-2)\to E_m\xrightarrow{p_0} \mcal O(2m)\to0, \end{align} we have the following long exact sequence in cohomology: \begin{align} 0&\to\cohm{0}{\bb CP^1}{E_{m-1}(-2)}\to\cohm{0}{\bb CP^1}{E_m}\xrightarrow{p_0}\cohm{0}{\bb CP^1}{\mcal O(2m)}\to\\ &\to\cohm{1}{\bb CP^1}{E_{m-1}(-2)}\to\dots \end{align} We can deduce from general sheaf cohomology theory that $\cohm{0}{\bb CP^1}{E_{m-1}(-2)}=0$.
Therefore $\cohm{0}{\bb CP^1}{E_m}\xrightarrow{p_0}\cohm{0}{\bb CP^1}{\mcal O(2m)}$ is injective. It remains to find the image of the map $p_0$ in cohomology. Since $l$ is even, we can write $l=2j$, for $0\leq j\leq m$. Define $v_j$ by the equation \eqref{eq:solution2} and notice that $\{v_0,\cdots,v_m\}$ is a global frame for $E_m$. Thus, for $\alpha\in\cohm{0}{\bb CP^1}{E_m}$, we can write $\alpha=\sum_{j=0}^m\alpha_jv_j$ and we have: \begin{equation}\label{eq:image} p_0(\alpha)=\sum_{j=0}^m\alpha_j\xi^{2j}\in\cohm{0}{\bb CP^1}{\mcal O(2m)}. \end{equation} In our notation, this means that the image of $p_0$ is $\Gamma_m$. This implies that sections of the form \eqref{eq:image} can be extended uniquely to $E_m$ and hence to the $m\textsuperscript{th}$ formal neighbourhood. An extension of a section given by \eqref{eq:image} to the $(m+1)\textsuperscript{th}$ neighbourhood is given by a pull-back to $E_{m+1}(-2)$ in the exact sequence: \begin{align} 0\to E_{m}(-4)\to E_{m+1}(-2)\xrightarrow{p_0} \mcal O(2m)\to0. \end{align} However, in this case $\cohm{0}{\bb CP^1}{E_{m+1}(-2)}=0$ and no extension exists. As a matter of notation, if $s$ is a section in $\Gamma_m$, its formal extension in $L^{(m)}(2m)$ will be denoted by $\overline{s}$. \end{proof} Before we proceed we state the following lemma, whose proof is similar to the proof of lemma (5.2) in \cite{CM}: \begin{lem}\label{lem:h1unique} Every element $c\in\cohm{1}{S}{\mcal O(2k-2)}$ can be written uniquely in the form: \begin{align*} c=\sum_{i=[(k+1)/2]}^{2k-2}\eta^i\pi^*c_i, \end{align*} where $c_i\in\cohm{1}{\bb CP^1}{\mcal O(2k-2-4i)}$. \end{lem} \begin{proof}[Proof of Proposition \ref{prop:fibreat0}] Let us start with the exact sequence: \begin{align*} 0\to\mcal O(-2m-4)\to L^{(m+1)}(2m)\to L^{(m)}(2m)\to0.
\end{align*} From its exact sequence in cohomology we obtain a map $$\delta:\cohm{0}{\bb CP^1}{L^{(m)}(2m)}\to\cohm{1}{\bb CP^1}{\mcal O(-2m-4)}.$$ Since $\cohm{0}{\bb CP^1}{L^{(m+1)}(2m)}=0$ and $\cohm{0}{\bb CP^1}{L^{(m)}(2m)}=\Gamma_m$ by Lemma \ref{lem:extension}, we can define an injective map $$h:\Gamma_m\to\cohm{1}{\bb CP^1}{\mcal O(-2m-4)}$$ given by $hs=\delta\overline s$. Let $s\in\Gamma_{k-1}$, take the extension of $\pi^*s\in\cohm{0}{S}{\mcal O(2k-2)}$ to the order $k-1$, as in Lemma \ref{lem:extension}, and consider it as a section of $L^z(2k-2)$ over $\bb C\times S$. The obstruction to extending to the order $k$ is the element $$ c=\eta^k\pi^*hs\in\cohm{1}{S}{\mcal O(2k-2)}.$$ Now, since $S$ satisfies $\eta^k+a_1\eta^{k-1} +\cdots+a_k=0,$ we must have $$ c=-\sum_{i=1}^k a_i\eta^{k-i}\pi^*hs.$$ Then we can write the above as: $$ c=-\sum_j \eta^{k-j}\pi^*h_j,$$ where $h_j\in\cohm{1}{\bb CP^1}{\mcal O(4j-2k-2)}$ and each $h_j$ must be in the image of $h$. Therefore, for each $j$ we can find a unique section $s_j\in\Gamma_{k-1-2j}$ such that $ \eta^{k-j}\pi^*h_j$ is the obstruction to extending $\pi^*s_j\in\cohm{0}{S}{\mcal O(2k-2-4j)}$ to the order $(k-2j)$ as a section of $L^z(2k-2-4j)$. This is the obstruction to extending $z^{2j}\eta^j\pi^*s_j$ from the order $(k-1)$ to the order $k$. Therefore, if $\overline{s}$ denotes a formal extension, we have that \begin{align*} s^1=\overline{s}-z^2\eta\overline{s}_1-z^4\eta^2\overline{s}_2-\cdots-z^{2l}\eta^l\overline{s}_l \end{align*} extends to the order $k$ in $z$. Now we can consider an extension of $s^1$ whose obstruction is $c'\in\cohm{1}{S}{\mcal O(2k-2)}$. Proceeding as above, we add modifications of order $z^3$. Then every coefficient of $z^n$ requires a finite number of modifications and we obtain a power series in $z$. Now we can use a result in \cite{Har} (Proposition II.9.6) to prove that a convergent extension exists. We have then proved that $\pi^*(\Gamma_{k-1})\subset V_0$.
Since both vector spaces have dimension $k$, we have proved the proposition. \end{proof} \begin{rmk} An important remark here is that, since $\Gamma_m$ is not a natural irreducible representation of $SL(2,\bb C)$, the maps in cohomology in the proof above are interpreted only as maps of abelian groups and not as maps between irreducible representations. Therefore, the fibre of $V$ at $z=0$ does not have a natural $SL(2,\bb C)$ representation structure. This is an important difference between our case and the $\bb R^3$ case. \end{rmk} \subsection{The behaviour of the matrix $A$ at $0$} Having established the fibre of $V$ at $0$, we can move toward the description of the behaviour of the matrix $A(\xi,z)$ at $0$. Namely, we shall prove that $A(\xi,z)$ has simple poles at $0$ and $2$ and describe the respective residues. As before, we shall work with $\tilde M$ instead of $M$. Remember that in Corollary \ref{cor:ztozn} we defined a map $\rho:\tilde V \to V$, which is an isomorphism away from $z=0$. Also, remember that $X=P_*(\mcal L(2k+2))$ and $\tilde X=\tilde P_*(\mcal L(2k+2))$. Then, we can state the following lemma: \begin{lem} The diagram: \begin{align*} \begin{CD} \tilde V @>\tilde F>> \tilde X\\ @VV\rho V @VV\rho V\\ V @>F>> X \end{CD} \end{align*} is commutative if either $F=z\eta$ and $\tilde F=\eta$, or $F=\tilde F= A(\xi,z)$. \end{lem} The proof of this lemma follows directly from Lemma \ref{lem:ztozn} and Corollary \ref{cor:ztozn}. We now have the following: \begin{cor}\label{cor:AtoB} Define $B(\xi,z)=z A(\xi,z)$; then $(\eta-A(\xi,z))V=0$ if and only if $(\eta-B(\xi,z))\tilde V=0$. \end{cor} We shall now study the behaviour of $B$ at $z=0$ and use this corollary to deduce the corresponding behaviour of $A$. To start, we shall use Beauville's construction of $B$.
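Before doing so, we note the elementary point that once $B=zA$ is known to be analytic at $z=0$ with a non-trivial limit, $A$ automatically has at most a simple pole there, with residue the value of $B$ at $0$:
\begin{align*}
A(\xi,z)=\frac{B(\xi,z)}{z}=\frac{B(\xi,0)}{z}+\frac{\partial B}{\partial z}(\xi,0)+O(z).
\end{align*}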
We start with the commutative diagram: \begin{align*} \begin{CD} \tilde V_z @>restr_{z\text{,}q} >> \cohm{0}{zS\cap T_q}{L(2k-2)}\cong \bb C^{k}\\ @VVB(\xi\text{,}z)V @VV\times\eta V\\ \tilde V_z @>restr_{z\text{,}q} >> \cohm{0}{zS\cap T_q}{L(2k-2)}\cong \bb C^{k} \end{CD} \end{align*} where $q\in\bb CP^1$, $T_q$ is the fibre of $\bb T$ over $q$ and $$restr_{z,q}: \cohm{0}{zS}{L(2k-2)}\to \cohm{0}{zS\cap T_q}{L(2k-2)}$$ is the natural restriction map. Moreover, as in the construction of $A$, the cohomologies in the diagram above can be interpreted as polynomials in $\eta$ of degree at most $k-1$. Observe that $restr_{z,q}$ is an isomorphism for all $z\neq0$ and its limit $restr_{0,q}$ is also an isomorphism. Now, let $\tilde e_0,\cdots,\tilde e_{k-1}$ be a local frame for $\tilde V$, in a neighbourhood of $0$, such that $restr_{0,q}(\tilde e_j)=\eta^j$. Then $B$ is well-defined and continuous at $z=0$ and, if $\xi_0$ corresponds to the point $q\in\bb CP^1$, \begin{align*} B(0,\xi_0)(\tilde e_j)=\tilde e_{j+1}, \end{align*} with the convention $\tilde e_{k}=0$. Since $B=zA$, we must have that $A$ has simple poles at $z=0$, and the next objective will be the description of the residues of $A$ at $0$. We shall use the alternative description of $A$ given in Proposition \ref{prop:kernelm} to find the residues of $A$. This means we shall investigate the behaviour of the kernel $K_z$ of the product map $$ m:\cohm{0}{S}{\mcal O(4)}\otimes\cohm{0}{S}{L^z(2k-2)}\to\cohm{0}{S}{L^z(2k+2)}$$ as $z\to 0$. We start by noticing that, under the embedding $\bb T\subset \bb CP^5$, finding $K_0$ is equivalent to finding which sections of $\cohm{0}{S}{\Omega^1_{\bb CP^5}(2k+2)}$ extend to $\cohm{0}{S}{L^z\Omega^1_{\bb CP^5}(2k+2)}$. Since $\dim K_z=k$ for $z\in(0,2)$, we should have a $k$-dimensional subspace $K_0$ that extends. Next, we shall describe $K_0$.
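The equality $\dim K_z=k$ is also consistent with a naive dimension count; assuming that $m$ is surjective and that $H^1(S,L^z(2k+2))=0$, Lemma \ref{lem:tecreal} gives $h^0(S,\mcal O(4))=6$, while Riemann-Roch gives
\begin{align*}
h^0(S,L^z(2k+2))=k(2k+2)-\mathbf g+1=(2k^2+2k)-(2k^2-3k+1)+1=5k,
\end{align*}
so the kernel of $m$ has dimension $6k-5k=k$, in agreement with Proposition \ref{prop:kernelm}.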
Let $\{1,\xi^2,\cdots,\xi^{2k-2}\}$ be a basis for $\Gamma_{k-1}$ and define the linear operators $X_0$, $X_2$ and $X_4$ on $\Gamma_{k-1}$ by the matrices: \begin{align}\label{eq:X_0} X_0=\left(\begin{array}{ccccc} 0 & 0 & 0 & \ldots & 0 \\ -(k-1) & 0 & 0 & \ldots & 0 \\ 0 & -(k-2) & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \cdots & \cdots & -1 & 0 \end{array} \right), \end{align} \begin{align}\label{eq:X_2} X_2=\left(\begin{array}{ccccc} (k-1) & 0 & 0 & \ldots & 0 \\ 0 & (k-3) & 0 & \ldots & 0 \\ 0 & 0 & (k-5) & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \cdots & \cdots & 0 & -(k-1) \end{array} \right)\,\,\text{and} \end{align} \begin{align}\label{eq:X_4} X_4=\left(\begin{array}{ccccc} 0 & 1& 0 & \ldots & 0 \\ 0 & 0 & 2 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \cdots & \cdots & 0 & (k-1)\\ 0 & \cdots & \cdots & 0 & 0 \end{array} \right) \end{align} We can now state the following result: \begin{prop}\label{prop:bextension} Every element $s\in K_0$ can be written uniquely in the form $$ s=\pi^*(1\otimes X_0\hat s+\xi^2\otimes X_2\hat s+\xi^4\otimes X_4\hat s),$$ where $\hat s\in\Gamma_{k-1}$. \end{prop} \begin{proof} The idea of the proof of this proposition is to work on the $(k-1)\textsuperscript{th}$ order neighbourhood first. We shall find a basis for the fibre of the bundle $V$ at $0$ in the formal neighbourhood in the language of Lemma \ref{lem:extension}; that is, we have to solve \eqref{eq:matrix1}. In what follows, we shall find $$P'^j=p'^j_0+p'^j_1(z\eta)+\cdots+p'^j_{(k-1)}(z\eta)^{(k-1)}$$ and $$P^j=p^j_0+p^j_1(z\eta)+\cdots+p^j_{(k-1)}(z\eta)^{(k-1)}$$ satisfying \eqref{eq:matrix1}, for $0\leq j\leq(k-1)$, and we shall use $m=k-1$ for simplicity.
Fix $j$ and define on the open set $U_1$: \begin{align}\label{eq:plinha} p'^j_l= \left\{ \begin{array}{l} (-1)^l\dfrac{(m-l)!}{(m-l-j)!}\dfrac{(m-j)!}{m!}\dfrac{1}{l!}\xi^{-2(m-j-l)}\,\,\text{for}\,\,0\leq l\leq(m-j),\\ 0\,\,\text{otherwise}. \end{array} \right. \end{align} And on $U_0$: \begin{align}\label{eq:psemlinha} p^j_l= \left\{ \begin{array}{l} \dfrac{(m-l)!}{(j-l)!}\dfrac{j!}{m!}\dfrac{1}{l!}\xi^{2(j-l)}\,\,\text{for}\,\,0\leq l\leq j,\\ 0\,\,\text{otherwise}. \end{array} \right. \end{align} We now need to check that this data satisfies \eqref{eq:matrix1}. Let $$\beta_b=\left(\dfrac{1}{b!}\xi^{(2m-2b)},\dfrac{1}{(b-1)!}\xi^{(2m-2b-2)},\cdots,\xi^{(2m-4b)},0,\cdots,0 \right)$$ be the $b\textsuperscript{th}$ line of the matrix \eqref{eq:matrix1}. We need to prove that $$\beta_b\cdot P'^j =p^j_b.$$ \begin{align*} \beta_b\cdot P'^j&=\sum^{\text{min}\{b,m-j\}}_{l=0}(-1)^l\dfrac{(m-l)!(m-j)!}{(b-l)!(m-j-l)!m!l!}\xi^{(2j-2b)}\\ &=\dfrac{(m-b)!}{m!}\left[\sum^{\text{min}\{b,m-j\}}_{l=0}(-1)^l{m-l \choose b-l}{m-j \choose l}\right]\xi^{(2j-2b)}\\ &=\dfrac{(m-b)!}{m!}{j \choose b}\xi^{(2j-2b)}\\ &=\dfrac{(m-b)!j!}{(j-b)!b!m!}\xi^{(2j-2b)}=p^j_b, \end{align*} where we used the identity: \[ \sum^{\text{min}\{b,m-j\}}_{l=0}(-1)^l{m-j \choose l}{m-l \choose b-l}={j\choose b}. \] Then, we have that the $P^j$ give a basis for $V_0$ on the $(k-1)\textsuperscript{th}$ neighbourhood. Now we shall describe the kernel of the multiplication map \begin{align*} m:\cohm{0}{F^{(k-1)}}{\mcal O(4)}\otimes\cohm{0}{F^{(k-1)}}{L(2k-2)}\to\cohm{0}{F^{(k-1)}}{L(2k+2)}. \end{align*} First, notice that we have $\cohm{0}{F^{(k-1)}}{\mcal O(4)}\cong\cohm{0}{\bb T}{\mcal O(4)}=Span_\bb C\{1,\xi,\xi^2,\xi^3,\xi^4,\eta\}$. Now, a direct computation shows that the kernel of $m$ is generated by the elements of the form: \[ \omega^z_j=[z\eta\otimes P^j]-(m-j)[1\otimes P^{(j+1)}]+(m-2j)[\xi^2\otimes P^j]+j[\xi^4\otimes P^{(j-1)}], \] for $0\leq j\leq(k-1)$.
In other words, this says that we can find sections $t_0,s_0,s_2,s_4\in\Gamma_{k-1}$ such that $$z\eta\overline t_0 +\overline s_0+\overline s_2\xi^2+\overline s_4\xi^4=0\,\, \text{mod}\,z^k,$$ where $\overline t_0$ and $\overline s_j$ represent the canonical extensions of $t_0$ and $s_j$ respectively. Moreover, we have proved above that we can actually take $t_0=s$ and $s_j=X_j(s)$, for $j=0,2,4$, for any $s\in\Gamma_{k-1}$. Now, the canonical extension is of order $(k-1)$ and we proceed as in the proof of Proposition \ref{prop:fibreat0} to extend to higher orders and produce a formal extension. We can again use a result in \cite{Har} (Proposition II.9.6) to prove that the obstructions to extending to higher orders are removable and therefore we can produce an actual extension. \end{proof} \begin{rmk} It is important to highlight how we found the solutions \eqref{eq:plinha} and \eqref{eq:psemlinha} to \eqref{eq:matrix1}. We solved \eqref{eq:matrix1} explicitly, from $k=2$ up to $k=6$, using the constraints \eqref{eq:solution1}, and then we obtained a pattern for the solution for general $k$. In the proof written here, we just used this general form of the solution and proved that it actually solves \eqref{eq:matrix1}. \end{rmk} We can now use this to prove our main result: \begin{thm}\label{thm:main} Let $S$ be a spectral curve in $\bb T$ with charge $k$ satisfying the conditions in Definition \ref{spectral}. Then the corresponding matrices $A_i$, which satisfy Nahm's equations \eqref{eq:cNahm's}, also satisfy the following boundary conditions: \begin{enumerate} \item $A_1$ and $A_3$ are analytic on the whole interval $[0,2]$; \item $A_0$, $A_2$ and $A_4$ have simple poles at $0$ and $2$, but are otherwise analytic; \item The residues of $A_0$, $A_2$ and $A_4$ at $z=0$ and $z=2$ define an irreducible $k$-dimensional representation of $\mathfrak{sl}(2,\bb C)$.
\end{enumerate} \end{thm} \begin{proof} Remember that, by Corollary \ref{cor:AtoB}, the endomorphisms $B_j$, defined by $B_j=zA_j$, are analytic on the whole interval $[0,2]$. Moreover, the proposition above tells us that $B_1$ and $B_3$ vanish at $z=0$ and: $$\lim_{z\to0}B_j(s)= X_j(s),$$ for $j=0,2,4$. This means that the endomorphisms $A_1$ and $A_3$ are analytic on the whole interval $[0,2]$ and the endomorphisms $A_0,A_2$ and $A_4$ have simple poles at $0$ whose residues are given by $X_0,X_2$ and $X_4$ respectively. We shall now extend this to the matrices that appear in Nahm's equations. The covariant derivative in $V$ is defined by: \begin{align*} \nabla_zs=\frac{\partial f_0}{\partial z}-\left(\dfrac{1}{2}A_2s+\xi A_3s+\xi^2A_4s\right). \end{align*} From the above and the definition of the $X_j$ (equations \eqref{eq:X_0}, \eqref{eq:X_2} and \eqref{eq:X_4}) we have: \begin{align*} \dfrac{1}{2}A_2+\xi A_3+\xi^2A_4=\dfrac{(k-1)}{2z}\times\bb I+D, \end{align*} where $D$ is analytic on the whole interval $[0,2]$ and $\bb I$ is the $k\times k$ identity matrix. Since the residue of the connection is a scalar, we can use the same argument as in \cite{CM}, page 179, to conclude that the matrices $A_j$ have the same residues as the corresponding endomorphisms. Thus, the residues of $A_0$, $A_2$ and $A_4$ define an irreducible representation of $\mathfrak{sl}(2,\bb C)$. Moreover, the condition that $L^2$ is trivial on $S$ says that the behaviour of the residues at $z=2$ is the same as at $z=0$. \end{proof} \begin{rmk} If we set $A_0=T_1+iT_2$, $A_1=T_3+iT_4$, $A_2=2iT_5$, $A_3=T_3-iT_4$ and $A_4=-T_1+iT_2$, then $T_3$ and $T_4$ are analytic on the whole interval $[0,2]$ and the residues of $T_1$, $T_2$ and $T_5$ at $0$ define an irreducible representation of $\mathfrak{sl}(2,\bb R)$.
\end{rmk} \subsection{From Nahm's equations to spectral curves} The goal of this subsection is to prove the following proposition: \begin{prop}\label{prop:converse} Let $A_j:(0,2)\to \mathfrak{gl}(k)$, $j=0,1,2,3,4$, satisfy Nahm's equations \eqref{eq:cNahm's} and be such that: \begin{enumerate} \item $A_1$ and $A_3$ are analytic on the whole interval $[0,2]$. \item $A_0$, $A_2$ and $A_4$ are analytic on $(0,2)$ and have simple poles at $z=0$ and $z=2$ with residues $a_0$, $a_2$ and $a_4$ defining the irreducible $k$-dimensional representation of $\mathfrak{sl}(2,\bb C)$. \end{enumerate} Then the curve $S$ defined by $det(\eta-A)=0$, where $A=A_0+\xi A_1+\xi^2 A_2+\xi^3 A_3+\xi^4 A_4$, satisfies: \begin{enumerate}[i)] \item $S$ is compact, \item $L^2$ is trivial on $S$, \item $\cohm{0}{S}{L^z(2k-3)}=0$ for $z\in(0,2)$. \end{enumerate} \end{prop} \begin{proof} For part i), notice that $det(\eta-A)$ can be written, in local coordinates, as the polynomial in equation \eqref{eq:spectralcurve}. Thus, $S$ is compact. We now start to invert the procedure we used to construct $A(\xi,t)$. Namely, using Beauville's theorem, we obtain a flow of line bundles $K_t$ on $S$. More explicitly, given the matrix $A(\xi,t)$ we have $$K_t=coker(\eta-A(\xi,t)),$$ where $$(\eta-A(\xi,t)):\mcal O(-4)^{\oplus k}\to\mcal O^{\oplus k}.$$ However, it is easier to consider the dual approach. This means we are going to find the dual flow: $$K_t^*=ker(\eta-A(\xi,t))^t,$$ where $$(\eta-A(\xi,t))^t:\mcal O^{\oplus k}\to\mcal O(4)^{\oplus k}.$$ First we shall prove that $K^*_t=K^*_{t_0}\otimes L^{t-t_0}$. We start with a section $s$ of $K_{t_0}^*$; it can be represented by $u$ on the open set $\{\xi\neq\infty\}$ and by $v$ on $\{\xi\neq 0\}$. Moreover, let $g(t_0)$ be the transition function of $K^*_{t_0}$, so that $u=g(t_0)v$.
Observe that on $\{\xi\neq\infty\}$ we must have: $$ (\eta-A(\xi,t_0))^tu=0$$ and on $\{\xi\neq 0\}$: $$ (1/\xi^4)(\eta-A(\xi,t_0))^tv=0.$$ Let $A_+=\frac{1}{2}A_2+A_3\xi+A_4\xi^2$; we shall now vary $t$. To begin with, we impose that $u$ satisfies: $$ \der{u}{t}=A_+^tu.$$ We can use Nahm's equations to prove that $$\der{}{t}(\eta-A)^tu=A_+(\eta-A)^tu .$$ Now, the initial condition for this differential equation is given by $(\eta-A)^tu=0$. Thus, we have $(\eta-A)^tu=0$ for all $t$. On the other open set we can impose $$\der{v}{t}=-\left(A/\xi^2-A_+\right)^tv $$ and prove that $$ (1/\xi^4)(\eta-A)^tv=0$$ for all $t$. Now we have: \begin{align*} A_+^tu=\der{u}{t}=\der{gv}{t}=\der{g}{t}v+g\der{v}{t}=\der{g}{t}v-g\left(A/\xi^2-A_+\right)^tv. \end{align*} This implies that $$ \frac{\eta}{\xi^2}u=\der{g}{t}g^{-1}u.$$ The solution of this equation can be written in terms of $g(t_0)$ as $g(\eta,\xi,t)=e^{t\eta/\xi^2}\cdot g(t_0)$. Therefore, we have proved that $K^*_t=K^*_{t_0}\otimes L^{t-t_0}$. We now move towards the description of $K_0$, and we shall use the boundary behaviour of the matrices $A_i$ to prove that $K_0\cong\mcal O(2k-2)$. Near $t=0$ we can write, for $t>0$, $A(\xi,t)=\dfrac{\alpha(\xi,t)}{t}$, with $\alpha(\xi,t)$ analytic near $t=0$. Also, denote $a(\xi,t)=\alpha(\xi,t)^t$. Write $\alpha(\xi,0)=a(\xi)=a_0+a_1\xi+a_2\xi^2+a_3\xi^3+a_4\xi^4$. By our hypothesis, the $a_j$ satisfy the conditions in Theorem \ref{thm:main}. This means that $a_1=a_3=0$ and $a_0,a_2$ and $a_4$ define an irreducible representation of $\mathfrak{sl}(2,\bb C)$. Let $\Gamma_{k-1}$ be the subspace of $\bb C^{2k-1}$ consisting of polynomials of the form $p(\xi)=\sum_{i=0}^{k-1}c_i\xi^{2i}$. The matrices $a_j$ act on $\Gamma_{k-1}$ by multiplication. Moreover, we can choose a basis $e_i$ for $\Gamma_{k-1}$ such that $ker [a(\xi)]$ is spanned by $(\xi^{2k-2},\cdots,\xi^{2j},\cdots,1)$. Notice that in this basis, $a_0(e_i)=e_{i+1}$. 
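Before proceeding, we record a quick verification (our own remark, with $g_0$ denoting a $t$-independent factor) of the transition-function formula obtained earlier in the proof. Since $\eta/\xi^2$ does not depend on $t$ and transition functions of a line bundle are scalar, any $g(\eta,\xi,t)=e^{t\eta/\xi^2}g_0(\eta,\xi)$ satisfies
\begin{align*}
\der{g}{t}=\frac{\eta}{\xi^2}\,e^{t\eta/\xi^2}g_0=\frac{\eta}{\xi^2}\,g,
\qquad\text{so}\qquad
\der{g}{t}g^{-1}=\frac{\eta}{\xi^2},
\end{align*}
which is exactly the flow equation derived above; the factor $e^{t\eta/\xi^2}$ is the transition function defining the line bundle $L^t$, and this is what identifies the flow as tensoring by $L^{t-t_0}$.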
We shall next compute a section of $ker(\eta-A)^t$. First observe that $$(\eta-A)^t(\eta-A)^t_{adj}=det(\eta-A)\times\bb I,$$ where $adj$ denotes the formal adjoint and $\bb I$ is the identity matrix. This means that on $S$, $Im(\eta-A)^t_{adj}\subset K^*_t$. However, since $(\eta-A)$ is regular, $(\eta-A)^t_{adj}$ has rank one and the inclusion becomes an equality. Now, we shall compute a section of $Im(\eta-A)^t_{adj}$. Observe that at $\xi=0$ the image of $(\eta-A)^t_{adj}$ has a finite limit, because of the choice of basis above. In the general case, a section in $Im(\eta-A)^t_{adj}\subset K^*_t$ will consist of a polynomial of degree $2k-2$ in $\xi$, and therefore $K_0\cong\mcal O(2k-2)$. This means that $K_t=L^t(2k-2)$. Notice that, since the behaviour of the matrices $A_j$ at $t=2$ is the same as at $t=0$, we also have $K_2\cong\mcal O(2k-2)$. This implies that $L^2$ is trivial. Lastly, from Beauville's theorem we must have $K_t(-1)\in J^{g-1}_S$, that is to say, $\cohm{0}{S}{L^t(2k-3)}=0$ for all $t\in(0,2)$. \end{proof} \nocite{Spec,HAH,HM} \end{document}
\begin{document} \begin{frontmatter} \title{The Computational Complexity of Disconnected Cut and $\mathbf{2K_2}$-Partition\thanksref{CP}} \author{Barnaby Martin\thanksref{epsrc1} and Dani\"el Paulusma \thanksref{epsrc2}} \address{ School of Engineering and Computing Sciences, Durham University,\\ Science Labs, South Road, Durham DH1 3LE, U.K.} \thanks[CP]{An extended abstract of this paper appeared in the proceedings of CP2011~\cite{MP11}.} \thanks[epsrc1]{The first author was supported by EPSRC grant EP/G020604/1.} \thanks[epsrc2]{The second author was supported by EPSRC grant EP/G043434/1.} \maketitle \begin{abstract} For a connected graph $G=(V,E)$, a subset $U\subseteq V$ is called a disconnected cut if $U$ disconnects the graph and the subgraph induced by $U$ is disconnected as well. We show that the problem of testing whether a graph has a disconnected cut is {\sf NP}-complete. This problem is polynomially equivalent to the following problems: testing if a graph has a $2K_2$-partition, testing if a graph allows a vertex-surjective homomorphism to the reflexive 4-cycle and testing if a graph has a spanning subgraph that consists of at most two bicliques. Hence, as an immediate consequence, these three decision problems are {\sf NP}-complete as well. This settles an open problem frequently posed in each of the four settings. \end{abstract} \begin{keyword} Graph Theory \sep Disconnected Cut \sep $2K_2$-Partition \sep Biclique Cover \end{keyword} \end{frontmatter} \section{Introduction} We solve an open problem that showed up as a missing case (often {\it the} missing case) in a number of different research areas arising from connectivity theory, graph covers, graph homomorphisms and graph modification. It is the only open question in the papers by Dantas et al.~\cite{DFGK05} and Fleischner et al.~\cite{FMPS09}, a principal open question of Ito et al.~\cite{IKPT09,IKPT}, and the central question discussed by Cook et al.~\cite{CDEFFK10} and Dantas et al.~\cite{DMS10}. 
Indeed, the problem is considered important enough to generate its own complexity class~\cite{F,TDF10}, and it is known to be tractable for many graph classes~\cite{CDEFFK10,DMS10, FMPS09,IKPT09}. Before we explain how these areas are related, we briefly describe them first. Throughout the paper, we consider undirected finite graphs that have no multiple edges. Unless explicitly stated otherwise, they do not have self-loops either. We denote the vertex set and edge set of a graph $G$ by $V_G$ and $E_G$, respectively. If no confusion is possible, we may omit the subscripts. We let $n=|V_G|$ denote the number of vertices of $G$. The {\it complement} of a graph $G=(V,E)$ is the graph $\overline{G}=(V,\{uv\notin E\; |\; u\neq v\})$. For a subset $U \subseteq V_G$, we let $G[U]$ denote the subgraph of $G$ {\it induced by} $U$, which is the graph $(U,\{uv\; |\; u,v\in U\; \mbox{and}\; uv\in E_G\})$. \subsection{Vertex Cut Sets} A maximal connected subgraph of $G$ is called a {\em component} of $G$. A {\it vertex cut (set)} or {\it separator} of a graph $G=(V,E)$ is a subset $U\subset V$ such that $G[V\backslash U]$ contains at least two components. Vertex cuts play an important role in graph connectivity, and various kinds of vertex cuts have been studied in the literature. For instance, a cut $U$ of a graph $G=(V,E)$ is called a {\it $k$-clique cut} if $G[U]$ has a spanning subgraph consisting of $k$ complete graphs; a {\it strict $k$-clique cut} if $G[U]$ consists of $k$ components that are complete graphs; a {\it stable cut} if $U$ is an independent set; and a {\it matching cut} if $E_{G[U]}$ is a matching. The problem that asks whether a graph has a $k$-clique cut is solvable in polynomial time for $k=1$ and $k=2$, as shown by Whitesides~\cite{Wh81} and Cameron et al.~\cite{CEHS07}, respectively. The latter authors also showed that deciding if a graph has a strict 2-clique cut can be solved in polynomial time. 
On the other hand, the problems that ask whether a graph has a stable cut or a matching cut, respectively, are {\sf NP}-complete, as shown by Chv\'atal~\cite{Ch84} and Brandst\"adt et al.~\cite{BDBS00}, respectively. For a fixed constant $k \ge 1$, a cut $U$ of a connected graph $G$ is called a {\em $k$-cut} of $G$ if $G[U]$ contains exactly $k$ components. Testing if a graph has a $k$-cut is solvable in polynomial time for $k=1$, whereas it is {\sf NP}-complete for every fixed $k\geq 2$~\cite{IKPT09}. For $k\geq 1$ and $\ell\geq 2$, a $k$-cut $U$ is called a $(k,\ell)$-{\it cut} of a graph $G$ if $G[V\backslash U]$ consists of exactly $\ell$ components. Testing if a graph has a $(k,\ell)$-cut is polynomial-time solvable when $k=1$, $\ell \ge 2$, and {\sf NP}-complete otherwise~\cite{IKPT09}. A cut $U$ of a graph $G$ is called \emph{disconnected} if $G[U]$ contains at least two components. We observe that $U$ is a disconnected cut if and only if $V\backslash U$ is a disconnected cut if and only if $U$ is a $(k,\ell)$-cut for some $k\geq 2$ and $\ell\geq 2$. The following question was posed in several papers~\cite{FMPS09,IKPT09,IKPT} as an open problem. \noindent {\it Q1. How hard is it to test if a graph has a disconnected cut?} \noindent The problem of testing if a graph has a disconnected cut is called the {\sc Disconnected Cut} problem. It is known that every graph of diameter 1 has no disconnected cut, and every graph of diameter at least 3 has a disconnected cut~\cite{FMPS09}. Hence, in order to determine the computational complexity of {\sc Disconnected Cut}, we may restrict ourselves to graphs of diameter 2. A disconnected cut $U$ of a connected graph $G=(V,E)$ is {\it minimal} if $G[(V\backslash U) \cup \{u\}]$ is connected for every $u \in U$. Recently, the corresponding decision problem called {\sc Minimal Disconnected Cut} was shown to be {\sf NP}-complete~\cite{IKPT}. 
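The definitions above can be made concrete with a small brute-force check (an illustrative sketch of ours, not part of the paper; function names are hypothetical). It enumerates candidate cuts $U$ and tests whether both $G[U]$ and $G[V\backslash U]$ are disconnected:

```python
from itertools import combinations

def components(vertices, adj):
    """Connected components of the subgraph induced by `vertices`."""
    left = set(vertices)
    comps = []
    while left:
        stack = [left.pop()]
        comp = set(stack)
        while stack:
            v = stack.pop()
            for w in adj[v] & left:
                left.discard(w)
                comp.add(w)
                stack.append(w)
        comps.append(comp)
    return comps

def has_disconnected_cut(adj):
    """Is there a set U such that both G[U] and G[V \\ U] are disconnected?"""
    V = set(adj)
    for k in range(2, len(V) - 1):  # both sides need at least 2 vertices
        for U in combinations(V, k):
            if (len(components(U, adj)) >= 2
                    and len(components(V - set(U), adj)) >= 2):
                return True
    return False
```

For the 4-cycle (diameter 2) the cut $U=\{0,2\}$ works, while the complete graph $K_4$ (diameter 1) has no disconnected cut, in line with the facts quoted above.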
\subsection{$H$-partitions} A \emph{model graph} $H$ with $V_H=\{h_0,\ldots,h_{k-1}\}$ has two types of edges: solid and dotted edges, and an {\it $H$-partition} of a graph $G$ is a partition of $V_G$ into $k$ (nonempty) sets $V_0,\dots,V_{k-1}$ such that for all $0\leq i<j\leq k-1$ and all vertices $u\in V_i$, $v\in V_j$ the following two conditions hold. Firstly, if $h_ih_j$ is a solid edge of $H$, then $uv\in E_G$. Secondly, if $h_ih_j$ is a dotted edge of $H$, then $uv\notin E_G$. There are no such restrictions when $h_i$ and $h_j$ are not adjacent. Let $2K_2$ be the model graph with vertices $h_0,\dots,h_3$, solid edges $h_0h_2, h_1h_3$ and no dotted edges, and $2S_2$ be the model graph with vertices $h_0,\dots,h_3$, dotted edges $h_0h_2, h_1h_3$ and no solid edges. We observe that a graph $G$ has a $2K_2$-partition if and only if its complement $\overline{G}$ has a $2S_2$-partition. The following question was mentioned in several papers~\cite{CDEFFK10,DFGK05,DMS10,F,TDF10} as an open problem. \noindent {\it Q2. How hard is it to test if a graph has a $2K_2$-partition?} \noindent One of the reasons for posing this question is that the (equivalent) cases $H=2K_2$ and $H=2S_2$ are the only two cases of model graphs on at most four vertices for which the computational complexity of the corresponding decision problem, called $H$-{\sc Partition}, is still open. In fact, it is known that $H$-{\sc Partition} is polynomial-time solvable for all other 4-vertex model graphs~$H$. In particular, the model graph $H$ with vertices $h_0,\ldots,h_3$, solid edge $h_0h_2$ and dotted edge $h_1h_3$ is well known. In that case the $H$-{\sc Partition} problem is called the {\sc Skew Partition} problem. Note that this problem is equivalent to asking whether the vertex set of a given graph can be partitioned into two sets $V_1$ and $V_2$ such that $V_1$ induces a disconnected graph in~$G$ and $V_2$ induces a disconnected graph in~$\overline{G}$. 
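As an illustration of the $2K_2$-partition defined above (our own sketch; identifiers are hypothetical), the condition can be tested naively by enumerating assignments of vertices to the four blocks and checking the two solid-edge constraints of the model graph $2K_2$:

```python
from itertools import product

def has_2K2_partition(vertices, edges):
    """Blocks V0..V3, all nonempty; every pair across (V0,V2) and across
    (V1,V3) must be an edge (the solid edges h0h2 and h1h3 of 2K_2)."""
    E = {frozenset(e) for e in edges}
    vs = list(vertices)
    for assign in product(range(4), repeat=len(vs)):
        if set(assign) != {0, 1, 2, 3}:
            continue  # some block is empty
        if all(frozenset((vs[i], vs[j])) in E
               for i in range(len(vs)) for j in range(i + 1, len(vs))
               if {assign[i], assign[j]} in ({0, 2}, {1, 3})):
            return True
    return False
```

The graph $2K_2$ itself (two disjoint edges) trivially has such a partition, while a star $K_{1,3}$ does not: the two required cross-pairs would have to be disjoint edges, but all edges of a star share the centre.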
Even the list version of this problem, where each vertex has been assigned a list of blocks in which it must be placed, is polynomial-time solvable, as shown by de Figueiredo, Klein and Reed~\cite{FKR00} (later, Kennedy and Reed~\cite{KR08} presented a faster polynomial-time algorithm for the non-list version). All other cases of $H$-{\sc Partition} for 4-vertex model graphs~$H\notin \{2K_2,2S_2\}$ have been settled by Dantas et al.~\cite{DFGK05}. In the literature, $2K_2$-partitions have been well studied, see e.g. three recent papers of Cook et al.~\cite{CDEFFK10}, Dantas, Maffray and Silva~\cite{DMS10} and Teixeira, Dantas and de Figueiredo~\cite{TDF10}. The first two papers~\cite{CDEFFK10,DMS10} study the $2K_2$-{\sc Partition} problem for several graph classes, and the third paper~\cite{TDF10} defines a new class of problems called $2K_2$-hard. In addition, the first paper also proves that $2K_2$-{\sc Partition} can be solved in $O((2^d-1)n^2)$ time for $n$-vertex graphs of minimum vertex degree~$d$. By a result on retractions of Hell and Feder~\cite{FH98}, which we explain later, the list versions of $2S_2$-{\sc Partition} and $2K_2$-{\sc Partition} are {\sf NP}-complete. A variant on $H$-partitions that allows empty blocks $V_i$ in an $H$-partition is studied by Feder et al.~\cite{FHKM03}, whereas Cameron et al.~\cite{CEHS07} consider the list version of this variant. \subsection{Graph Covers} Let $G$ be a graph and $\mathcal{S}$ be a set of (not necessarily vertex-induced) subgraphs of $G$ that has size $|{\cal S}|$. The set $\mathcal{S}$ is a \emph{cover} of $G$ if every edge of $G$ is contained in at least one of the subgraphs in $\mathcal{S}$. The set $\mathcal{S}$ is a \emph{vertex-cover} of $G$ if every vertex of $G$ is contained in at least one of the subgraphs in $\mathcal{S}$. 
If all subgraphs in $\mathcal{S}$ are \emph{bicliques}, that is, complete connected bipartite graphs, then we speak of a \emph{biclique cover} or a \emph{biclique vertex-cover}, respectively. Testing whether a graph has a biclique cover of size at most $k$ is polynomial-time solvable for any fixed $k$; it is even fixed-parameter tractable in $k$ as shown by Fleischner et al.~\cite{FMPS09}. The same authors~\cite{FMPS09} show that testing whether a graph has a biclique vertex-cover of size at most $k$ is polynomial-time solvable for $k=1$ and {\sf NP}-complete for $k\geq 3$. For $k=2$, they show that this problem can be solved in polynomial time for bipartite input graphs, and they pose the following open problem. \noindent {\it Q3. How hard is it to test if a graph has a biclique vertex-cover of size $2$?} \noindent The problem of testing if a graph has a biclique vertex-cover of size 2 is called the 2-{\sc Biclique Vertex-Cover} problem. In order to answer question Q3 we may without loss of generality restrict to biclique vertex-covers in which every vertex is in exactly one of the subgraphs in $\mathcal{S}$ (cf.~\cite{FMPS09}). \subsection{Graph Homomorphisms} A {\it homomorphism} from a graph $G$ to a graph $H$ is a mapping $f: V_G \to V_H$ that maps adjacent vertices of $G$ to adjacent vertices of $H$, i.e., $f(u)f(v)\in E_H$ whenever $uv\in E_G$. The problem {\sc $H$-Homomorphism} tests whether a given graph $G$ (also called the {\it guest graph}) allows a homomorphism to a graph $H$ called the {\it target} which is fixed, i.e., not part of the input. This problem is also known as $H$-{\sc Coloring}. Hell and Ne\v{s}et\v{r}il~\cite{HN90} showed that $H$-{\sc Homomorphism} is solvable in polynomial time if $H$ is bipartite, and {\sf NP}-complete otherwise. Here, $H$ is assumed not to have a self-loop $xx$, as otherwise we can map every vertex of $G$ to $x$. 
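For small instances, the homomorphism condition just defined can be checked by exhaustive search (a sketch of ours; the dichotomy is of course a statement about asymptotic complexity). A self-loop in $H$ is modelled by letting a vertex be its own neighbour:

```python
from itertools import product

def has_homomorphism(g_vertices, g_edges, h_adj):
    """Is there f with f(u)f(v) an edge of H whenever uv is an edge of G?
    h_adj maps each H-vertex to its set of neighbours."""
    gv = list(g_vertices)
    for f in product(list(h_adj), repeat=len(gv)):
        m = dict(zip(gv, f))
        if all(m[v] in h_adj[m[u]] for u, v in g_edges):
            return True
    return False
```

For the irreflexive target $K_2$ this is exactly 2-colourability, matching the bipartite side of the Hell--Ne\v{s}et\v{r}il dichotomy: the 4-cycle maps to $K_2$, the triangle does not.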
A homomorphism $f$ from a graph $G$ to a graph $H$ is {\it surjective} if for each $x\in V_H$ there exists at least one vertex $u\in V_G$ with $f(u)=x$. This leads to the problem of deciding if a given graph allows a surjective homomorphism to a fixed target graph $H$, which is called the {\sc Surjective $H$-Homomorphism} or {\sc Surjective $H$-Coloring} problem. For this variant, the presence of a vertex with a self-loop in the target graph $H$ does not make the problem trivial. Such vertices are called {\it reflexive}, whereas vertices with no self-loop are said to be {\it irreflexive}. A graph that contains one or more reflexive vertices is called {\it partially reflexive}. In particular, a graph is {\it reflexive} if all its vertices are reflexive, and a graph is {\it irreflexive} if all its vertices are irreflexive. Golovach, Paulusma and Song~\cite{GPS11} showed that for any fixed tree $H$, the {\sc Surjective $H$-Homomorphism} problem is polynomial-time solvable if the (possibly empty) set of reflexive vertices in $H$ induces a connected subgraph of $H$, and {\sf NP}-complete otherwise. They mention that the smallest open case is the one in which $H$ is the reflexive 4-cycle, denoted $\reflex{C}_4$. \noindent {\it Q4. How hard is it to test if a graph has a surjective homomorphism to $\reflex{C}_4$?} \noindent The following two notions are closely related to surjective homomorphisms. A homomorphism $f$ from a graph $G$ to an induced subgraph $H$ of $G$ is a {\it retraction} from $G$ to $H$ if $f(h)=h$ for all $h\in V_H$. In that case we say that $G$ {\it retracts to} $H$. Note that this implies that $G$ allows a vertex-surjective homomorphism to $H$, whereas the reverse implication does not necessarily hold. For a fixed graph $H$, the $H$-{\sc Retraction} problem has as input a graph $G$ that contains $H$ as an induced subgraph and is to test if $G$ retracts to $H$. Hell and Feder~\cite{FH98} showed that $\reflex{C}_4$-{\sc Retraction} is {\sf NP}-complete. 
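Question Q4 itself admits the following brute-force formulation (an illustrative sketch of ours; identifiers are hypothetical). The target is the reflexive 4-cycle, so each of its vertices is adjacent to itself:

```python
from itertools import product

# Reflexive 4-cycle on h0..h3: each vertex is adjacent to itself
# and to its two neighbours on the cycle.
C4_REFLEX = {0: {0, 1, 3}, 1: {0, 1, 2}, 2: {1, 2, 3}, 3: {0, 2, 3}}

def surjective_hom_to_reflexive_c4(vertices, edges):
    vs = list(vertices)
    for f in product(range(4), repeat=len(vs)):
        if set(f) != {0, 1, 2, 3}:
            continue  # not vertex-surjective
        m = dict(zip(vs, f))
        if all(m[v] in C4_REFLEX[m[u]] for u, v in edges):
            return True
    return False
```

A path on four vertices maps surjectively by sending its $i$-th vertex to $h_i$, whereas $K_4$ does not: two of its (adjacent) vertices would have to land on the non-adjacent pair $h_0,h_2$.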
We emphasize that a surjective homomorphism is vertex-surjective. A stronger notion is to require a homomorphism from a graph $G$ to a graph $H$ to be {\it edge-surjective}, which means that for any edge $xy\in E_H$ with $x\neq y$ there exists an edge $uv\in E_G$ with $f(u)=x$ and $f(v)=y$. Note that the edge-surjectivity condition only holds for edges $xy\in E_H$; there is no such condition on the self-loops $xx\in E_H$. An edge-surjective homomorphism is also called a {\it compaction}. If $f$ is a compaction from $G$ to $H$, we say that $G$ {\it compacts} to $H$. The $H$-{\sc Compaction} problem asks if a graph $G$ compacts to a fixed graph~$H$. Vikas~\cite{Vi02,Vi04,Vi05} determined the computational complexity of this problem for several classes of fixed target graphs. In particular, he showed that $\reflex{C}_4$-{\sc Compaction} is {\sf NP}-complete~\cite{Vi02}. More recently, Vikas~\cite{Vi11,Vi13} considered $H$-{\sc Compaction} for guest graphs belonging to some restricted graph class. \subsection{Graph Contractibility}\label{s-contract} A graph modification problem has as input a graph $G$ and an integer $k$. The question is whether $G$ can be modified to belong to some specified graph class that satisfies further properties by using at most $k$ operations of a certain specified type such as deleting a vertex or deleting an edge. Another natural operation is the {\it contraction} of an edge, which removes both end-vertices of the edge and replaces them by a new vertex adjacent to precisely those vertices that were adjacent to at least one of the two end-vertices. If a graph $H$ can be obtained from $G$ by a sequence of edge contractions, then $G$ is said to be {\it contractible to} $H$. The problem {\sc $\Pi$-Contractibility} has as input a graph $G$ together with an integer $k$ and is to test whether $G$ is contractible to a graph in $\Pi$ by using at most $k$ edge contractions. 
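The contraction operation just described is easy to state programmatically (our own sketch for simple graphs; the merged vertex keeps the name of one endpoint, and the loop and multiple edges created by the merge are discarded):

```python
def contract(vertices, edges, u, v):
    """Contract the edge uv: merge v into u."""
    new_edges = set()
    for a, b in edges:
        a = u if a == v else a
        b = u if b == v else b
        if a != b:  # drop the loop arising from the contracted edge
            new_edges.add(frozenset((a, b)))
    return set(vertices) - {v}, new_edges
```

For example, contracting one edge of the 4-cycle yields a triangle.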
Asano and Hirata~\cite{AH83} show that $\Pi$-{\sc Contractibility} is {\sf NP}-complete if $\Pi$ satisfies certain conditions. As a consequence, this problem is {\sf NP}-complete for many graph classes $\Pi$ such as the classes of planar graphs, outerplanar graphs, series-parallel graphs, forests and chordal graphs. By a result of Heggernes et al.~\cite{HHLLP11}, $\Pi$-{\sc Contractibility} is {\sf NP}-complete for trees even if the input graph is bipartite. If $\Pi$ is the class of paths or cycles, then $\Pi$-{\sc Contractibility} is polynomially equivalent to the problems of determining the length of a longest path and a longest cycle, respectively, to which a given graph can be contracted. The first problem has been shown to be {\sf NP}-complete by van 't Hof, Paulusma and Woeginger~\cite{HPW09} even for graphs with no induced path on 6 vertices. The second problem has been shown to be {\sf NP}-complete by Hammack~\cite{Ha02}. Heggernes et al.~\cite{HHLP11} observed that $\Pi$-{\sc Contractibility} is {\sf NP}-complete when $\Pi$ is the class of bipartite graphs, whereas Golovach et al.~\cite{GKPT11} showed that $\Pi$-{\sc Contractibility} is {\sf NP}-complete when $\Pi$ is the class of graphs of a certain minimum degree $d$; they show that $d=14$ suffices. A graph $G$ contains a graph $H$ as a {\it minor} if $G$ can be modified to $H$ by a sequence of vertex deletions, edge deletions, and edge contractions. Eppstein~\cite{Ep09} showed that it is {\sf NP}-complete to decide if a given graph $G$ has a complete graph $K_h$ as a minor for some given integer $h$. This problem is equivalent to deciding if a graph $G$ is contractible to $K_h$. Hence, $\Pi$-{\sc Contractibility} is {\sf NP}-complete if $\Pi$ is the class of complete graphs. The biclique with partition classes of size $k$ and $\ell$, respectively, is denoted $K_{k,\ell}$. A {\it star} is a biclique $K_{1,\ell}$ for some integer $\ell\geq 2$. 
If $\Pi$ is the class of stars, then $\Pi$-{\sc Contractibility} is {\sf NP}-complete. This can be seen as follows. Let $K_1\Join G$ denote the graph obtained from a graph $G$ after adding a new vertex and making it adjacent to all vertices of $G$. Then $G$ has an independent set of size $h$ for some given integer $h\geq 2$ if and only if $K_1\Join G$ is contractible to a star $K_{1,h}$. Since the first problem is {\sf NP}-complete~\cite{GJ79,Ka72}, the result follows. A remaining elementary graph class is the class of bicliques $K_{k,\ell}$ with $k\geq 2$ and $\ell\geq 2$; we call such bicliques {\it proper}. In order to determine the complexity for this graph class, we first consider the following question. \noindent {\it Q5. How hard is it to test if a graph is contractible to a proper biclique?} The problem of testing whether a graph can be contracted to a proper biclique is called the {\sc Biclique Contractibility} problem. By setting $k=n$, we see that this problem is a special instance of the corresponding $\Pi$-{\sc Contractibility} problem. If one of the two integers $k\geq 2$ or $\ell\geq 2$ is fixed, then testing if $G$ is contractible to $K_{k,\ell}$ is known to be {\sf NP}-complete~\cite{IKPT09}. \subsection{The Relationships Between Questions Q1--Q5} Before we explain how questions Q1--Q5 are related, we introduce some new terminology. The {\em distance} $d_G(u,v)$ between two vertices $u$ and $v$ in a graph $G$ is the number of edges in a shortest path between them; if there is no path between $u$ and $v$ then $d_G(u,v)=\infty$. The {\em diameter} ${\rm diam}(G)$ is defined as $\max\{d_G(u,v)\; |\; u,v\in V\}$. A biclique is called {\it nontrivial} if $k\geq 1$ and $\ell\geq 1$. \begin{prop}[\cite{IKPT09}]\label{p-known} Let $G$ be a connected graph. Then statements $(1)$--$(5)$ are equivalent: \begin{itemize} \item [$(1)$] $G$ has a disconnected cut. \item [$(2)$] $G$ has a {\it $2S_2$-partition}. 
\item [$(3)$] $G$ allows a vertex-surjective homomorphism to $\reflex{C}_4$. \item [$(4)$] $\overline{G}$ has a spanning subgraph that consists of exactly two nontrivial bicliques. \item [$(5)$] $\overline{G}$ has a {\it $2K_2$-partition}. \end{itemize} If ${\rm diam}(G)=2$, then $(1)$--$(5)$ are also equivalent to the following statements: \begin{itemize} \item [$(6)$] $G$ allows a compaction to $\reflex{C}_4$. \item [$(7)$] $G$ is contractible to some biclique $K_{k,\ell}$ for some $k,\ell\geq 2$. \end{itemize} \end{prop} Due to Proposition~\ref{p-known}, questions Q1--Q4 are equivalent. Recall that we may restrict ourselves to graphs of diameter~2, as every graph of diameter 1 has no disconnected cut, and every graph of diameter at least 3 has a disconnected cut~\cite{FMPS09}. Under this restriction, Proposition~\ref{p-known} tells us that Q1--Q4 are also equivalent to~Q5 and to the question of determining the computational complexity of $\reflex{C}_4$-{\sc Compaction}. Recall that Vikas~\cite{Vi02} showed that the latter problem is {\sf NP}-complete. However, the gadget in his {\sf NP}-completeness reduction has diameter~3 as observed by Ito et al.~\cite{IKPT}. \noindent {\bf Our Result.} A pair of vertices in a graph is a {\it dominating (non-)edge} if the two vertices of the pair are (non-)adjacent, and any other vertex in the graph is adjacent to at least one of them. We solve question Q4 by showing that the problem {\sc Surjective ${\cal C}_4$-Homomorphism} is indeed {\sf NP}-complete for graphs of diameter 2 even if they have a dominating non-edge. In contrast, Fleischner et al.~\cite{FMPS09} showed that this problem is polynomial-time solvable on input graphs with a dominating edge. 
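Dominating non-edges, as just defined, are straightforward to enumerate (a sketch; the function name is ours):

```python
from itertools import combinations

def dominating_non_edges(vertices, edges):
    """Non-adjacent pairs u,v such that every other vertex is adjacent
    to u or to v."""
    adj = {x: set() for x in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return [(u, v) for u, v in combinations(vertices, 2)
            if v not in adj[u]
            and all(w in adj[u] or w in adj[v]
                    for w in vertices if w not in (u, v))]
```

In the 4-cycle both diagonal pairs are dominating non-edges; a complete graph has none, since it has no non-edges at all.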
As a consequence of our result and Proposition~\ref{p-known}, we find that the problems {\sc Disconnected Cut}, $2K_2$-{\sc Partition}, $2S_2$-{\sc Partition} and 2-{\sc Biclique Vertex-Cover}, as well as the problems $\reflex{C}_4$-{\sc Compaction} and {\sc Biclique Contractibility}, are {\sf NP}-complete for graphs of diameter 2, even if they have a dominating non-edge. Hence, we have not only solved question~Q4 but also questions~Q1,~Q2,~Q3 and~Q5. Our approach to prove {\sf NP}-completeness is as follows. As mentioned before, we can restrict ourselves to graphs of diameter 2. We therefore try to reduce the diameter in the gadget of the {\sf NP}-completeness proof of Vikas~\cite{Vi02} for $\reflex{C}_4$-{\sc Compaction} from 3 to 2. This leads to {\sf NP}-completeness of {\sc Surjective $\reflex{C}_4$-Homomorphism}, because these two problems coincide for graphs of diameter 2 due to Proposition~\ref{p-known}. The proof that $\reflex{C}_4$-{\sc Compaction} is {\sf NP}-complete~\cite{Vi02} has its roots in the proof that $\reflex{C}_4$-{\sc Retraction} is {\sf NP}-complete~\cite{FH98}. So far, it was only known that $\reflex{C}_4$-{\sc Retraction} stays {\sf NP}-complete for graphs of diameter 3~\cite{IKPT}. We start our proof by showing that $\reflex{C}_4$-{\sc Retraction} is {\sf NP}-complete even for graphs of diameter 2. The key idea is to base the reduction on an {\sf NP}-complete homomorphism (constraint satisfaction) problem that we obtain only after a fine analysis under the algebraic conditions of Bulatov~\cite{Conservative} and Bulatov, Krokhin and Jeavons \cite{JBK}, which we perform in Section~\ref{s-algebra}. This approach is novel in the sense that usually graph theory provides a test-bed for constraint satisfaction problems whereas here we see a case where the flow of techniques is the other way around. We present our {\sf NP}-completeness proof for $\reflex{C}_4$-{\sc Retraction} on graphs of diameter 2 in Section~\ref{s-retract}. 
This leads to a special input graph of the $\reflex{C}_4$-{\sc Retraction} problem, which enables us to modify the gadget of the proof of Vikas~\cite{Vi02} for $\reflex{C}_4$-{\sc Compaction} in order to get its diameter down to 2, as desired. We explain this part in Section~\ref{s-surjective}. We also point out that Vikas~\cite{Vi11,Vi13} has announced an {\sf NP}-completeness proof of {\sc Surjective $\reflex{C}_4$-Homomorphism} as well, but so far has not made his proof publicly available. \section{Constraint Satisfaction}\label{s-algebra} The notion of a graph homomorphism can be generalized as follows. A {\it structure} is a tuple ${\cal A}=(A; R_1,\ldots, R_k)$, where $A$ is a set called the {\it domain} of ${\cal A}$ and $R_i$ is an $n_i$-ary {\it relation} on $A$ for $i=1,\ldots, k$, i.e., a set of $n_i$-tuples of elements from $A$. Note that a graph $G=(V,E)$ can be seen as a structure $G=(V;\{(u,v),(v,u)\; |\; uv\in E\})$. Throughout the paper we only consider {\it finite} structures, i.e., with a finite domain. Let ${\cal A}=(A; R_1,\ldots, R_k)$ and ${\cal B}=(B; S_1,\ldots,S_k)$ be two structures, where each $R_i$ and $S_i$ are relations of the same arity $n_i$. Then a {\it homomorphism} from ${\cal A}$ to ${\cal B}$ is a mapping $f: A\rightarrow B$ such that $(a_1 , \ldots, a_{n_i}) \in R_i$ implies $(f(a_1 ), \ldots, f (a_{n_i})) \in S_i$ for every $i$ and every $n_i$-tuple $(a_1, \ldots, a_{n_i}) \in A^{n_i}$. The decision problem that is to test if a given structure ${\cal A}$ allows a homomorphism to a fixed structure ${\cal B}$ is called the ${\cal B}$-{\sc Homomorphism} problem, also known as the ${\cal B}$-{\sc Constraint Satisfaction} problem. Let ${\cal A}=(A; R_1,\ldots, R_k)$ be a structure and $\ell$ be an integer. 
The {\it power structure} $\mathcal{A}^\ell$ has domain $A^\ell$ and for $1\leq i\leq k$, has relations $$R^\ell_i:=\{((a^1_1,\ldots,a^1_\ell),\ldots,(a^{n_i}_1,\ldots,a^{n_i}_\ell))\; |\; (a^1_1,\ldots,a^{n_i}_1),\ldots, (a^1_\ell,\ldots,a^{n_i}_\ell) \in R_i\}.$$ We note that $R^1_i=R_i$ for $1\leq i\leq k$. An ($\ell$-ary) \emph{polymorphism} of $\mathcal{A}$ is a homomorphism from $\mathcal{A}^\ell$ to $\mathcal{A}$ for some integer $\ell$. A $1$-ary polymorphism is also called an {\it endomorphism}. The set of polymorphisms of $\mathcal{A}$ is denoted Pol$(\mathcal{A})$. A binary function $f$ on a domain $A$ is a {\it semilattice} function if $f(h,f(i,j))$ $= $ $f(f(h,i),j)$, $f(i,j)=f(j,i)$, and $f(i,i)=i$ for all $i,j\in A$. A ternary function $f$ is a {\it Mal'tsev} function if $f(i,j,j)=f(j,j,i)=i$ for all $i,j\in A$. A ternary function $f$ is a \emph{majority} function if $f(h,h,i)=f(h,i,h)=f(i,h,h)=h$ for all $h,i\in A$. On the Boolean domain $\{0,1\}$, we may consider propositional functions. The only two semilattice functions on the Boolean domain are the binary function $\wedge$, which maps $(h,i)$ to $(h \wedge i)$, which is $1$ if $h=i=1$ and $0$ otherwise, and the binary function $\vee$ which maps $(h,i)$ to $(h \vee i)$, which is $0$ if $h=i=0$ and $1$ otherwise. We may consider each of these two functions on any two-element domain (where we view one element as $0$ and the other as $1$). For a function $f$ on $B$, and a subset $A \subseteq B$, we let $f_{|A}$ be the restriction of $f$ to $A$. A structure is a \emph{core} if all of its endomorphisms are {\it automorphisms}, i.e., are invertible. We will make use of the following theorem from Bulatov, Krokhin and Jeavons~\cite{JBK} (it appears in this form in Bulatov~\cite{Conservative}). \begin{thm}[\cite{Conservative,JBK}] \label{thm:BJK} Let $\mathcal{B}=(B;S_1,\ldots,S_k)$ be a core and $A \subseteq B$ be a subset of size $|A|= 2$ that as a unary relation is in $\mathcal{B}$. 
If for each $f \in \Pol(\mathcal{B})$, $f_{|A}$ is not majority, semilattice or Mal'tsev, then ${\cal B}$-{\sc Homomorphism} is {\sf NP}-complete. \end{thm} Let $\mathcal{D}$ be the structure on domain $D=\{0,1,3\}$ with four binary relations \[\begin{array}{lcl} S_1 &:= &\{(0,3),(1,1),(3,1),(3,3)\}\\[3pt] S_2 &:= &\{(1,0),(1,1),(3,1),(3,3)\}\\[3pt] S_3 &:= &\{(1,3),(3,1),(3,3)\}\\[3pt] S_4 &:=&\{(1,1),(1,3),(3,1)\}. \end{array}\] We use $\{0,1,3\}$ (instead of, say, $\{0,1,2\}$) to tie in exactly with the Vikas~\cite{Vi02} labelling of $\reflex{C}_4$. \begin{prop}\label{p-d4} The ${\mathcal D}$-{\sc Homomorphism} problem is {\sf NP}-complete. \end{prop} \begin{pf} We use Theorem~\ref{thm:BJK}. We first show that ${\cal D}$ is a core. Let $g$ be an endomorphism of ${\cal D}$. We must show that $g$ is an automorphism. If $g(0)=3$ then $g(1)=3$ by preservation of $S_2$, i.e., as otherwise $(1,0)\in S_2$ does not imply $(g(1),g(0))\in S_2$. However, $(1,1)\in S_4$ but $(g(1),g(1))=(3,3)\notin S_4$. Hence $g(0)\neq 3$. If $g(0)=1$ then $g(3)=1$ by preservation of $S_1$. However, $(3,3)\in S_3$ but $(g(3),g(3))=(1,1)\notin S_3$. Hence $g(0)\neq 1$. This means that $g(0)=0$. Consequently, $g(1)=1$ by preservation of $S_2$, and $g(3)=3$ by preservation of $S_1$. Hence, $g$ is the identity mapping, which is an automorphism, as desired. Let $A=\{1,3\}$, which is in $\mathcal{D}$ in the form of $S_1(p,p)$ (or $S_2(p,p)$). Suppose that $f\in \Pol(\mathcal{D})$. In order to prove Proposition~\ref{p-d4}, we must show that $f_{|A}$ is neither majority nor semilattice nor Mal'tsev. Suppose that $f_{|A}$ is semilattice. Then $f_{|A}=\wedge$ or $f_{|A}=\vee$. If $f_{|A}=\wedge$, then either $f(1,1)=1$, $f(1,3)=3$, $f(3,1)=3$, $f(3,3)=3$, or $f(1,1)=1$, $f(1,3)=1$, $f(3,1)=1$, $f(3,3)=3$ depending on how the elements $1,3$ correspond to the two elements of the Boolean domain. The same holds if $f_{|A}=\vee$. Suppose that $f(1,1)=1$, $f(1,3)=3$, $f(3,1)=3$, $f(3,3)=3$. 
By preservation of $S_4$ we find that $f(1,3)=1$ due to $f(3,1)=3$. This is not possible, as $f(1,3)=3$. Suppose that $f(1,1)=1$, $f(1,3)=1$, $f(3,1)=1$, $f(3,3)=3$. By preservation of $S_3$ we find that $f(1,3)=3$ due to $f(3,1)=1$. This is not possible either. Suppose that $f_{|A}$ is Mal'tsev. By preservation of $S_4$, we find that $f(1,1,3)=1$ due to $f(3,1,1)=3$. However, because $f(1,1,3)=3$, this is not possible. Suppose that $f_{|A}$ is majority. By preservation of $S_1$, we deduce that $f(0,3,1)\in \{0,3\}$ due to $f(3,3,1)=3$, and that $f(0,3,1)\in \{1,3\}$ due to $f(3,1,1)=1$. Thus, $f(0,3,1)=3$. By preservation of $S_2$, however, we deduce that $f(0,3,1)\in \{0,1\}$ due to $f(1,3,1)=1$. This is a contradiction. Hence, we have completed the proof of Proposition~\ref{p-d4}.\qed \end{pf} \section{Retractions}\label{s-retract} In the remainder of this paper, the graph $H$ denotes the reflexive 4-vertex cycle $\reflex{C}_4$. We let $h_0,\ldots,h_3$ be the vertices and $h_0h_1$, $h_1h_2$, $h_2h_3$, and $h_3h_0$ be the edges of $H$. We prove that $H$-{\sc Retraction} is {\sf NP}-complete for graphs of diameter 2 by a reduction from ${\cal D}$-{\sc Homomorphism}. Let ${\cal A}=(A;R_1,\ldots,R_4)$ be an instance of ${\cal D}$-{\sc Homomorphism}, where we may assume that each $R_i$ is a binary relation. From ${\cal A}$ we construct a graph $G$ as follows. We let the elements in ${\cal A}$ correspond to vertices of $G$. If $(p,q)\in R_i$ for some $1\leq i\leq 4$, then we say that vertex $p$ in $G$ is of {\it type} $\ell$ and vertex $q$ in $G$ is of {\it type} $r$. Note that a vertex can be of type $\ell$ and $r$ simultaneously, because it can be the first element in a pair in $R_1\cup \cdots \cup R_4$ and the second element of another such pair. For each $(p,q)\in R_i$ and $1\leq i\leq 4$ we introduce four new vertices $a_p,b_p,c_q,d_q$ with edges $a_pp$, $a_pb_p$, $b_pp$, $c_qq$, $c_qd_q$ and $d_qq$. 
We say that a vertex $a_p,b_p,c_q,d_q$ is of {\it type} $a,b,c,d$, respectively; note that these vertices all have a unique type. We now let the graph $H$ be an induced subgraph of $G$ (with distinct vertices $h_0,\ldots,h_3$). Then formally $G$ must have self-loops $h_0h_0,\ldots, h_3h_3$, because $H$ has such self-loops. Outside of $H$ in $G$, it does not matter whether we consider~$G$ to have self-loops or not. In any case we do not draw any loops in our figures in order to keep these uncluttered. In $G$ we join every $a$-type vertex to $h_0$ and $h_3$, every $b$-type vertex to $h_1$ and $h_2$, every $c$-type vertex to $h_2$ and $h_3$, and every $d$-type vertex to $h_0$ and $h_1$. We also add an edge between $h_0$ and every vertex of $A$. We continue the construction of $G$ by describing how we distinguish between two pairs belonging to different relations. If $(p,q)\in R_1$, then we add the edges $c_qp$ and $qh_2$; see Figure~\ref{f-I}. If $(p,q)\in R_2$, then we add the edges $h_2p$ and $b_pq$; see Figure~\ref{f-II}. If $(p,q)\in R_3$, then we add the edges $h_2p$, $h_2q$ and $a_pc_q$; see Figure~\ref{f-III}. If $(p,q)\in R_4$, then we add the edges $h_2p$, $h_2q$ and $b_pd_q$; see Figure~\ref{f-IV}. \begin{figure} \caption{The part of a ${\cal D}$-graph that corresponds to a pair $(p,q)\in R_1$.}\label{f-I} \caption{The part of a ${\cal D}$-graph that corresponds to a pair $(p,q)\in R_2$.}\label{f-II} \end{figure} \begin{figure} \caption{The part of a ${\cal D}$-graph that corresponds to a pair $(p,q)\in R_3$.}\label{f-III} \caption{The part of a ${\cal D}$-graph that corresponds to a pair $(p,q)\in R_4$.}\label{f-IV} \end{figure} We finish the construction of $G$ by adding an edge between any two vertices of type $a$, between any two vertices of type $b$, between any two vertices of type $c$, and between any two vertices of type $d$. Note that this leads to four mutually vertex-disjoint cliques in $G$; here a {\it clique} means the vertex set of a complete subgraph. We call $G$ a {\it ${\cal D}$-graph} and prove the following result. \begin{lem}\label{l-diam1} Every ${\cal D}$-graph has diameter 2 and a dominating non-edge. 
\end{lem} \begin{pf} Let $G$ be a ${\cal D}$-graph. We first show that $G$ has a dominating non-edge. Note that $h_0$ is adjacent to all vertices except to $h_2$ and the vertices of type $b$ and $c$. However, all vertices of type $b$ and $c$ are adjacent to $h_2$. Because $h_0$ and $h_2$ are not adjacent, this means that $h_0$ and $h_2$ form a dominating non-edge in $G$. \begin{table} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} & $h_0$ & $h_1$ & $h_2$ & $h_3$ & $\ell$ & $r$ & $a$ & $b$ &$c$ &$d$ \\ \hline $h_0$ & 0 & 1 & $2^{h_1}$ & 1 & 1 & 1 & 1 & $2^{h_1}$ &$2^{h_3}$ &1\\ \hline $h_1$ & . & 0 & 1 & $2^{h_0}$ & $2^{h_0}$ & $2^{h_0}$ & $2^{h_0}$ & 1 &$2^{h_2}$ &1\\ \hline $h_2$ & . & . & 0 & 1 & $2^{b}$ & $2^{c}$ & $2^{b}$ & 1 &1 &$2^{c}$\\ \hline $h_3$ & . & . & . & 0 & $2^{h_0}$ & $2^{h_0}$ & 1 & $2^{a}$ &1 &$2^{c}$\\ \hline $\ell$& . & . & . & . & $2^{h_0}$ & $2^{h_0}$ & $2^{h_0}$ & $2^{b}$ &$2^{h_2}_{c}$ &$2^{h_0}$\\ \hline $r$ & . & . & . & . & . & $2^{h_0}$ & $2^{h_0}$ & $2^{h_2}_{b}$ &$2^{c}$ &$2^{h_0}$\\ \hline $a$ & . & . & . & . & . & . & 1 & $2^{a}$ &$2^{h_3}$ &$2^{h_0}$\\ \hline $b$ & . & . & . & . & . & . & . & 1 &$2^{h_2}$ &$2^{h_1}$\\ \hline $c$ & . & . & . & . & . & . & . & . &1 &$2^{c}$\\ \hline $d$ & . & . & . & . & . & . & . & . &. &1 \end{tabular}\\[8pt] \caption{Determining the diameter of a ${\cal D}$-graph $G$.}\label{t-diam2} \end{center} \end{table} We show that $G$ has diameter 2 in Table~\ref{t-diam2}. In this table, $\ell$, $r$, $a$, $b$, $c$, $d$ denote vertices of corresponding type, and superscripts denote the vertex or its type that connects the two associated vertices in the case they are not adjacent already. The distances in the table must be interpreted as upper bounds. For example, the distance between a vertex $p$ of type $\ell$ and a vertex $a_{p'}$ of type $a$ is either 1 if $p=p'$ or 2 if $p\neq p'$. In the latter case, they are connected by $h_0$ (and perhaps by some other vertices as well). 
The table denotes this as $2^{h_0}$. In two cases the connecting vertex depends on the relation $R_i$. There, a subscript denotes the alternative connecting vertex used when the superscript vertex is not valid. For instance, when a vertex $p$ of type $\ell$ is not adjacent to $h_2$, then $p$ must be the first element in a pair $(p,q)\in R_1$, and then $p$ is adjacent to $c_q$; hence there is always an intermediate vertex (either $h_2$ or $c_q$) connecting $p$ to an arbitrary vertex of type $c$, not necessarily equal to $c_q$. In the table this is expressed as $2^{h_2}_c$. \qed \end{pf} Recall that Feder and Hell~\cite{FH98} showed that $H$-{\sc Retraction} is {\sf NP}-complete. Ito et al.~\cite{IKPT} observed that $H$-{\sc Retraction} stays {\sf NP}-complete on graphs of diameter 3. For our purposes, we need the following theorem. Note that Lemma~\ref{l-diam1} and Theorem~\ref{t-ret} together imply that $H$-{\sc Retraction} is {\sf NP}-complete for graphs of diameter 2 that have a dominating non-edge. \begin{thm}\label{t-ret} The $H$-{\sc Retraction} problem is {\sf NP}-complete even for ${\cal D}$-graphs. \end{thm} \begin{pf} We recall that $H$-{\sc Retraction} is in {\sf NP}, because we can guess a partition of the vertex set of the input graph $G$ into four (non-empty) sets and verify in polynomial time whether this partition corresponds to a retraction from $G$ to $H$. To show {\sf NP}-hardness, we reduce from the ${\cal D}$-{\sc Homomorphism} problem. From an instance ${\cal A}=(A; R_1,\ldots, R_4)$ of ${\cal D}$-{\sc Homomorphism} we construct a ${\cal D}$-graph $G$. We claim that ${\cal A}$ allows a homomorphism to ${\cal D}$ if and only if $G$ retracts to $H$. First suppose that ${\cal A}$ allows a homomorphism $f$ to ${\cal D}$. We construct a mapping~$g$ from $V_G$ to $V_H$ as follows. For each $a\in A$ we let $g(a)=h_i$ if $f(a)=i$, and for $i=0,\ldots, 3$ we let $g(h_i)=h_i$. 
Because $f$ is a homomorphism from ${\cal A}$ to ${\cal D}$, this leads to Tables~\ref{t-s1}--\ref{t-s4}, which explain where $a_p$, $b_p$, $c_q$ and $d_q$ map under $g$, according to where $p$ and $q$ map. From these, we conclude that $g$ is a retraction from $G$ to $H$. In particular, we note that the edges $c_qp,b_pq,a_pc_q$, and $b_pd_q$ each map to an edge or self-loop in $H$ when $(p,q)$ belongs to $R_1,\ldots,R_4$, respectively. To prove the reverse implication, suppose that $G$ allows a retraction $g$ to $H$. We construct a mapping $f:A\to \{0,1,2,3\}$ by defining, for each $a\in A$, $f(a)=i$ if $g(a)=h_i$. We claim that $f$ is a homomorphism from ${\cal A}$ to ${\cal D}$. In order to see this, we first note that $g$ maps all $a$-type vertices to $\{h_0,h_3\}$, all $b$-type vertices to $\{h_1,h_2\}$, all $c$-type vertices to $\{h_2,h_3\}$ and all $d$-type vertices to $\{h_0,h_1\}$. We now show that $(p,q)\in R_i$ implies that $(f(p),f(q))\in S_i$ for $i=1,\ldots, 4$. \begin{table} \begin{minipage}[b]{0.5\linewidth} \begin{center} \begin{tabular}{c|c|c|c|c|c} $p$ & $q$ & $a_p$ & $b_p$ & $c_q$ & $d_q$\\ \hline $h_0$ & $h_3$ & $h_0$ &$h_1$ &$h_3$ &$h_0$\\ \hline $h_1$ & $h_1$ & $h_0$ &$h_1$ &$h_2$ &$h_1$\\ \hline $h_3$ & $h_1$ & $h_3$ &$h_2$ &$h_2$ &$h_1$\\ \hline $h_3$ & $h_3$ & $h_3$ &$h_2$ &$h_3$ &$h_0$ \end{tabular}\\[8pt] \caption{$g$-values when $(p,q)\in R_1$.}\label{t-s1} \end{center} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \begin{center} \begin{tabular}{c|c|c|c|c|c} $p$ & $q$ & $a_p$ & $b_p$ & $c_q$ & $d_q$\\ \hline $h_1$ & $h_0$ & $h_0$ &$h_1$ &$h_3$ &$h_0$\\ \hline $h_1$ & $h_1$ & $h_0$ &$h_1$ &$h_2$ &$h_1$\\ \hline $h_3$ & $h_1$ & $h_3$ &$h_2$ &$h_2$ &$h_1$\\ \hline $h_3$ & $h_3$ & $h_3$ &$h_2$ &$h_3$ &$h_0$ \end{tabular}\\[8pt] \caption{$g$-values when $(p,q)\in R_2$.}\label{t-s2} \end{center} \end{minipage} \end{table} \begin{table} \begin{minipage}[b]{0.5\linewidth} \begin{center} \begin{tabular}{c|c|c|c|c|c} $p$ & $q$ & $a_p$ & $b_p$ 
& $c_q$ & $d_q$\\ \hline $h_1$ & $h_3$ & $h_0$ &$h_1$ &$h_3$ &$h_0$\\ \hline $h_3$ & $h_1$ & $h_3$ &$h_2$ &$h_2$ &$h_1$\\ \hline $h_3$ & $h_3$ & $h_3$ &$h_2$ &$h_3$ &$h_0$ \end{tabular}\\[8pt] \caption{$g$-values when $(p,q)\in R_3$.}\label{t-s3} \end{center} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \begin{center} \begin{tabular}{c|c|c|c|c|c} $p$ & $q$ & $a_p$ & $b_p$ & $c_q$ & $d_q$\\ \hline $h_1$ &$h_1$ &$h_0$ &$h_1$ &$h_2$ &$h_1$ \\ \hline $h_1$ & $h_3$ & $h_0$ &$h_1$ &$h_3$ &$h_0$\\ \hline $h_3$ & $h_1$ & $h_3$ &$h_2$ &$h_2$ &$h_1$ \end{tabular}\\[8pt] \caption{$g$-values when $(p,q)\in R_4$.}\label{t-s4} \end{center} \end{minipage} \end{table} Suppose that $(p,q)\in R_1$. Because $p$ is adjacent to $h_0$, we find that $g(p)\in \{h_0,h_1,h_3\}$. Because $q$ is adjacent to $h_0$ and $h_2$, we find that $g(q)\in \{h_1,h_3\}$. If $g(p)=h_0$, then $g$ maps $c_q$ to $h_3$, and consequently $g(q)=h_3$. If $g(p)=h_1$, then $g$ maps $c_q$ to $h_2$, and consequently, $d_q$ to $h_1$, implying that $g(q)=h_1$. Hence, we find that $(f(p),f(q))\in \{(0,3),(1,1),(3,1),(3,3)\}=S_1$, as desired. Suppose that $(p,q)\in R_2$. Because $p$ is adjacent to $h_0$ and $h_2$, we find that $g(p)\in \{h_1,h_3\}$. Because $q$ is adjacent to $h_0$, we find that $g(q)\in \{h_0,h_1,h_3\}$. If $g(q)=h_0$, then $g$ maps $b_p$ to $h_1$, and consequently, $g(p)=h_1$. If $g(q)=h_3$, then $g$ maps $b_p$ to $h_2$, and consequently, $a_p$ to $h_3$, implying that $g(p)=h_3$. Hence, we find that $(f(p),f(q))\in \{(1,0),(1,1),(3,1),(3,3)\}=S_2$, as desired. Suppose that $(p,q)\in R_3$. Because both $p$ and $q$ are adjacent to both $h_0$ and $h_2$, we find that $g(p)\in \{h_1,h_3\}$ and $g(q)\in \{h_1,h_3\}$. If $g(p)=h_1$, then $g$ maps $a_p$ to $h_0$, and consequently, $c_q$ to $h_3$, implying that $g(q)=h_3$. Hence, we find that $(f(p),f(q))\in \{(1,3),(3,1),(3,3)\}=S_3$, as desired. Suppose that $(p,q)\in R_4$. 
Because both $p$ and $q$ are adjacent to both $h_0$ and $h_2$, we find that $g(p)\in \{h_1,h_3\}$ and $g(q)\in \{h_1,h_3\}$. If $g(q)=h_3$, then $g$ maps $d_q$ to $h_0$, and consequently, $b_p$ to $h_1$, implying that $g(p)=h_1$. Hence, we find that $(f(p),f(q))\in \{(1,1),(1,3),(3,1)\}=S_4$, as desired. This completes the proof of Theorem~\ref{t-ret}.\qed \end{pf} \section{Surjective Homomorphisms}\label{s-surjective} Vikas~\cite{Vi02} constructed the following graph from a graph $G=(V_G,E_G)$ that contains $H$ as an induced subgraph. For each vertex $v\in V_G\backslash V_H$ we add three new vertices $u_v,w_v,y_v$ with edges $h_0u_v, h_0y_v, h_1u_v$, $h_2w_v, h_2y_v, h_3w_v, u_vv, u_vw_v, u_vy_v$, $vw_v, w_vy_v$. We say that the vertices $u_v$, $w_v$ and $y_v$ have {\it type} $u$, $w$, and $y$, respectively. We also add all edges between any two vertices $u_v,u_{v'}$ and between any two vertices $w_v,w_{v'}$ with $v\neq v'$. For each edge $vv'$ in $E_G\backslash E_H$ we choose an arbitrary orientation, say from $v$ to $v'$, and then add a new vertex $x_{vv'}$ with edges $vx_{vv'}, v'x_{vv'}, u_vx_{vv'},w_{v'}x_{vv'}$. We say that this new vertex has {\it type} $x$. The new graph $G'$ obtained from $G$ is called an $H$-{\it compactor} of $G$. See Figure~\ref{f-compactor} for an example. This figure does not depict any self-loops, although formally $G'$ must have at least four self-loops, because $G'$ contains $G$, and consequently, $H$ as an induced subgraph. However, for the same reason as for the $H$-{\sc Retraction} problem, this is irrelevant for the {\sc Surjective $H$-Homomorphism} problem, and we may assume that $G$ and $G'$ are irreflexive. \begin{figure} \caption{The part of $G'$ that corresponds to an edge $vv'\in E_G\setminus E_H$, as displayed in~\cite{Vi02}.}\label{f-compactor} \end{figure} Vikas~\cite{Vi02} showed that a graph $G$ retracts to $H$ if and only if an (arbitrary) $H$-compactor $G'$ of $G$ retracts to $H$ if and only if $G'$ compacts to $H$. 
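For intuition, the retraction problem that underlies these equivalences can be tested by brute force on small instances. The sketch below is our own illustration and not part of the construction; the graph encoding and the helper name \texttt{retracts\_to\_H} are ours. It enumerates all homomorphisms to $H=\reflex{C}_4$ that fix the copy of $H$ pointwise.

```python
import itertools

# Reflexive 4-cycle H on vertices 0..3 (standing for h_0..h_3):
# edges as ordered pairs, closed under reversal, including the four self-loops.
H = {(i, i) for i in range(4)}
H |= {(i, (i + 1) % 4) for i in range(4)}
H |= {((i + 1) % 4, i) for i in range(4)}

def retracts_to_H(vertices, edges):
    """Brute force: is there a homomorphism of (vertices, edges) to H
    that fixes the vertices 0..3 of the induced copy of H?"""
    others = [v for v in vertices if v not in (0, 1, 2, 3)]
    for choice in itertools.product(range(4), repeat=len(others)):
        g = {i: i for i in range(4)}
        g.update(zip(others, choice))
        if all((g[u], g[w]) in H for (u, w) in edges):
            return True
    return False

# A pendant vertex 4 adjacent to h_0 and h_1 can be folded onto h_0 ...
G1 = H | {(4, 0), (0, 4), (4, 1), (1, 4)}
# ... but a vertex 5 adjacent to all of h_0..h_3 cannot map anywhere,
# since no vertex of the reflexive 4-cycle is adjacent to all four.
G2 = H | {(5, i) for i in range(4)} | {(i, 5) for i in range(4)}
```

Here `retracts_to_H(range(5), G1)` holds while `retracts_to_H([0, 1, 2, 3, 5], G2)` fails, matching the adjacency constraints used repeatedly in the proofs above.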
Recall that an $H$-compactor is of diameter 3 as observed by Ito et al.~\cite{IKPT}. Our aim is to reduce the diameter in such a graph to 2. This forces us to make a number of modifications. Firstly, we must remove a number of vertices of type $x$. Secondly, we can no longer choose the orientations regarding the remaining vertices of type $x$ arbitrarily. Thirdly, we must connect the remaining $x$-type vertices to $H$ via edges. We explain these modifications in detail below. Let $G$ be a ${\cal D}$-graph. For all vertices in $G$ we create vertices of type $u$, $w$ and $y$ with incident edges as in the definition of a compactor. We then perform the following three steps. \noindent {\bf 1. Not creating all the vertices of type ${\mathbf x}$.}\\ We do not create $x$-type vertices for the following edges in $G$: edges between two $a$-type vertices, edges between two $b$-type vertices, edges between two $c$-type vertices, and edges between two $d$-type vertices. We create $x$-type vertices for all the other edges in $E_G\setminus E_H$ as explained in Step 2. \noindent {\bf 2. Choosing the ``right'' orientation of the other edges of $\mathbf{G-H}$.}\\ For $(p,q)\in R_i$ and $1\leq i\leq 4$, we choose $x$-type vertices $x_{a_pp}$, $x_{pb_p}$, $x_{a_pb_p}$, $x_{qc_q}$, $x_{qd_q}$, and $x_{d_qc_q}$. In addition we create the following $x$-type vertices. For $(p,q)\in R_1$ we choose $x_{pc_q}$. For $(p,q)\in R_2$ we choose $x_{qb_p}$. For $(p,q)\in R_3$ we choose $x_{a_pc_q}$. For $(p,q)\in R_4$ we choose $x_{d_qb_p}$. Note that in this way we have indeed created $x$-type vertices for all the other edges of $E_G\setminus E_H$. \noindent {\bf 3. Connecting the created ${\mathbf x}$-type vertices to ${\mathbf H}$.}\\ We add an edge between $h_0$ and every vertex of type $x$ that we created in Step~2. We also add an edge between $h_2$ and every such vertex. \noindent We call the resulting graph a {\it semi-compactor} of $G$ and prove two essential lemmas. 
\begin{lem}\label{l-diam2} Let $G$ be a ${\cal D}$-graph. Every semi-compactor of $G$ has diameter 2 and a dominating non-edge. \end{lem} \begin{pf} Let $G''$ be a semi-compactor of a ${\cal D}$-graph $G$. We first show that $G''$ has a dominating non-edge. We note that $h_0$ is adjacent to all vertices except to $h_2$ and the vertices of type $b,c$, and $w$. However, all vertices of type $b,c$, and $w$ are adjacent to $h_2$. Because $h_0$ and $h_2$ are not adjacent, this means that $h_0$ and $h_2$ form a dominating non-edge in $G''$. We show that $G''$ has diameter 2 in Table~\ref{t-diam3}. In this table, $v$ denotes a vertex of $V_G$, and $u$, $w$, $y$, $x$ denote vertices of the corresponding type. For reasons of clarity we explain the first row of Table~\ref{t-diam3} below; superscripts for the other rows are used in the same way as in Table~\ref{t-diam2}. \begin{table} \begin{center} \begin{tabular}{c|c|c|c|c|c} & $v$ & $u$ & $w$ & $y$ & $x$ \\ \hline $v$ & 2 & $2$ & $2$ & $2$ & $2$\\ \hline $u$ & . & 1 & $2^w$ & $2^{h_0}$ & $2^{h_0}$\\ \hline $w$ & . & . & 1 & $2^{h_2}$ & $2^{h_2}$\\ \hline $y$ & . & . & . & $2^{h_0}$ & $2^{h_0}$\\ \hline $x$& . & . & . & . & $2^{h_0}$ \end{tabular}\\[8pt] \caption{Determining the diameter of a semi-compactor $G''$.}\label{t-diam3} \end{center} \end{table} For the first position of row 1 we use Table~\ref{t-diam2} to determine an upper bound for the distance between two vertices in $G''$; hence, there is no superscript for this position. For the second position of row 1 we use the fact that every vertex $v\in V_G\setminus V_H$ is adjacent to $u_v$ and that $u_v$ is adjacent to every other vertex of type $u$. Furthermore, if $v=h_0$ or $v=h_1$ then $v$ is adjacent to every vertex of type $u$, and if $v=h_2$ or $v=h_3$ then $v$ is of distance two from every vertex of type $u$ by using $h_0$ or $h_1$ as an intermediate vertex, respectively. The third position of row 1 can be explained by similar arguments. 
The fourth and fifth positions follow from the already deduced property of $G''$ that $h_0$ and $h_2$ form a dominating non-edge combined with the property that every vertex of type $x$ and $y$ is adjacent to both $h_0$ and $h_2$. This completes the proof of Lemma~\ref{l-diam2}.\qed \end{pf} \begin{lem}\label{l-equi} Let $G''$ be a semi-compactor of a ${\cal D}$-graph $G$. Then the following statements are equivalent: \begin{itemize} \item [(i)] $G$ retracts to $H$; \item [(ii)] $G''$ retracts to $H$; \item [(iii)] $G''$ compacts to $H$; \item [(iv)] $G''$ has a vertex-surjective homomorphism to $H$. \end{itemize} \end{lem} \begin{pf} We show the following implications: $(i)\Rightarrow (ii)$, $(ii)\Rightarrow (i)$, $(ii)\Rightarrow (iii)$, $(iii)\Rightarrow (ii)$, $(iii)\Rightarrow (iv)$, and $(iv)\Rightarrow (iii)$. \noindent ``$(i)\Rightarrow (ii)$'' Let $f$ be a retraction from $G$ to $H$. We show how to extend $f$ to a retraction from $G''$ to $H$. We observe that every vertex of type $u$ can only be mapped to $h_0$ or $h_1$, because such a vertex is adjacent to $h_0$ and $h_1$. We also observe that every vertex of type $w$ can only be mapped to $h_2$ or $h_3$, because such a vertex is adjacent to $h_2$ and $h_3$. This implies the following. Let $v\in V_G\setminus V_H$. If $f(v)=h_0$ or $f(v)=h_1$, then $w_v$ must be mapped to $h_3$ or $h_2$, respectively. Consequently, $u_v$ must be mapped to $h_0$ or $h_1$, respectively, due to the edge $u_vw_v$. If $f(v)=h_2$ or $f(v)=h_3$, then $u_v$ must be mapped to $h_1$ or $h_0$, respectively. Consequently, $w_v$ must be mapped to $h_2$ or $h_3$, respectively, due to the edge $u_vw_v$. Hence, $f(v)$ fixes the mapping of the vertices $u_v$ and $w_v$. Moreover, we showed that either $u_v$ is mapped to $h_1$ or $w_v$ is mapped to $h_3$. Note that both vertices are adjacent to $y_v$. 
Then, because $y_v$ can only be mapped to $h_1$ or $h_3$ due to the edges $h_0y_v$ and $h_2y_v$, the mapping of $y_v$ is fixed as well; if $u_v$ is mapped to $h_1$ then $y_v$ is mapped to $h_1$, and if $w_v$ is mapped to $h_3$ then $y_v$ is mapped to $h_3$. What is left to do is to verify whether we can map the vertices of type $x$. For this purpose we refer to Table~\ref{t-fixed}, where $v,v'$ denote two adjacent vertices of $V_G\setminus V_H$. Every possible combination of $f(v)$ and $f(v')$ corresponds to a row in this table. As we have just shown, this fixes the image of the vertices $u_v$, $u_{v'}$, $w_v$, $w_{v'}$, $y_{v'}$ and $y_v$. For $x_{vv'}$ we use its adjacencies to $v$, $v'$, $u_v$ and $w_{v'}$ to determine potential images. For some cases, this number of potential images is not one but two. This is shown in the last column of Table~\ref{t-fixed}; here we did not take into account that every $x_{vv'}$ is adjacent to $h_0$ and $h_2$ in our construction. Because of these adjacencies, every $x_{vv'}$ can only be mapped to $h_1$ or $h_3$. In the majority of the 12 rows in Table~\ref{t-fixed} we have this choice; the exceptions are row 4 and row 9. In rows 4 and 9, we find that $x_{vv'}$ can only be mapped to one image, which is $h_0$ or $h_2$, respectively. We will show that neither row can occur. 
\begin{table} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c|c} $v$ & $v'$ & $u_v$ & $u_{v'}$ & $w_v$ & $w_{v'}$ & $y_v$ & $y_{v'}$ &$x_{vv'}$\\ \hline $h_0$ &$h_0$ &$h_0$ &$h_0$ &$h_3$ &$h_3$ &$h_3$ &$h_3$ &$h_0/h_3$\\ \hline $h_0$ &$h_1$ &$h_0$ &$h_1$ &$h_3$ &$h_2$ &$h_3$ &$h_1$ &$h_1$\\ \hline $h_0$ &$h_3$ &$h_0$ &$h_0$ &$h_3$ &$h_3$ &$h_3$ &$h_3$ &$h_0/h_3$\\ \hline $h_1$ &$h_0$ &$h_1$ &$h_0$ &$h_2$ &$h_3$ &$h_1$ &$h_3$ &$h_0$\\ \hline $h_1$ &$h_1$ &$h_1$ &$h_1$ &$h_2$ &$h_2$ &$h_1$ &$h_1$ &$h_1/h_2$\\ \hline $h_1$ &$h_2$ &$h_1$ &$h_1$ &$h_2$ &$h_2$ &$h_1$ &$h_1$ &$h_1/h_2$\\ \hline $h_2$ &$h_1$ &$h_1$ &$h_1$ &$h_2$ &$h_2$ &$h_1$ &$h_1$ &$h_1/h_2$\\ \hline $h_2$ &$h_2$ &$h_1$ &$h_1$ &$h_2$ &$h_2$ &$h_1$ &$h_1$ &$h_1/h_2$\\ \hline $h_2$ &$h_3$ &$h_1$ &$h_0$ &$h_2$ &$h_3$ &$h_1$ &$h_3$ &$h_2$\\ \hline $h_3$ &$h_0$ &$h_0$ &$h_0$ &$h_3$ &$h_3$ &$h_3$ &$h_3$ &$h_0/h_3$\\ \hline $h_3$ &$h_2$ &$h_0$ &$h_1$ &$h_3$ &$h_2$ &$h_3$ &$h_1$ &$h_3$\\ \hline $h_3$ &$h_3$ &$h_0$ &$h_0$ &$h_3$ &$h_3$ &$h_3$ &$h_3$ &$h_0/h_3$ \end{tabular}\\[8pt] \caption{Determining a retraction from $G''$ to $H$.}\label{t-fixed} \end{center} \end{table} By Steps 1-2 of the definition of a semi-compactor, we have that $(v,v')$ belongs to $$\{(a_p,p), (p,b_p), (a_p,b_p), (q,c_q), (q,d_q), (d_q,c_q), (p,c_q), (q,b_p),(a_p,c_q),(d_q,b_p)\}.$$ We first show that row 4 cannot occur. In order to obtain a contradiction, suppose that row 4 does occur, i.e., that $f(v)=h_1$ and $f(v')=h_0$ for some $v,v'\in V_G\setminus V_H$. Due to their adjacencies with vertices of $H$, every vertex of type $a$ is mapped to $h_0$ or $h_3$, every vertex of type $b$ to $h_1$ or $h_2$, every vertex of type $c$ to $h_2$ or $h_3$ and every vertex of type $d$ to $h_0$ or $h_1$. This means that $v$ can only be $p,q,b_p$, or $d_q$, whereas $v'$ can only be $p$, $q$, $a_p$ or $d_q$. If $v=p$ then $v'\in \{b_p,c_q\}$. If $v=q$ then $v'\in \{c_q,d_q,b_p\}$. If $v=b_p$ then $v'$ cannot be chosen. 
If $v=d_q$ then $v'\in \{c_q,b_p\}$. Hence, we find that $v=q$ and $v'=d_q$. However, then $f$ is not a retraction from $G$ to $H$, because $c_q$ is adjacent to $d_q,q,h_2,h_3$, and $f$ maps these vertices to $h_0,h_1,h_2,h_3$, respectively. Hence, row 4 does not occur. We now show that row 9 cannot occur. In order to obtain a contradiction, suppose that row 9 does occur, i.e., that $f(v)=h_2$ and $f(v')=h_3$. As in the previous case, we deduce that every vertex of type $a$ is mapped to $h_0$ or $h_3$, every vertex of type $b$ to $h_1$ or $h_2$, every vertex of type $c$ to $h_2$ or $h_3$ and every vertex of type $d$ to $h_0$ or $h_1$. Moreover, every vertex of type $\ell$ or $r$ cannot be mapped to $h_2$, because it is adjacent to $h_0$. Then $v$ can only be $b_p$ or $c_q$, and $v'$ can only be $p$, $q$, $a_p$ or $c_q$. However, if $v=b_p$ or $v=c_q$ then $v'$ cannot be chosen. Hence, row 9 cannot occur, and we conclude that $f$ can be extended to a retraction from $G''$ to $H$, as desired. \noindent ``$(ii)\Rightarrow (i)$'' Let $f$ be a retraction from $G''$ to $H$. Then the restriction of $f$ to $V_{G}$ is a retraction from $G$ to $H$. Hence, this implication is valid. \noindent ``$(ii)\Rightarrow (iii)$'' This implication is valid, because every retraction from $G''$ to $H$ is an edge-surjective homomorphism, so \emph{a fortiori} a compaction from $G''$ to $H$. \noindent ``$(iii)\Rightarrow (ii)$'' Let $f$ be a compaction from $G''$ to $H$. We will show that $f$ is without loss of generality a retraction from $G''$ to $H$. Our proof goes along the same lines as the proof of Lemma 2.1.2 in Vikas~\cite{Vi02}, i.e., we use the same arguments but in addition we must examine a few more cases due to our modifications in steps 1--3; we therefore include all the proof details below. We let $U$ consist of $h_0,h_1$ and all vertices of type $u$. Similarly, we let $W$ consist of $h_2,h_3$ and all vertices of type $w$. 
Because $U$ forms a clique in $G''$, we find that $f(U)$ is a clique in $H$. This means that $1\leq |f(U)|\leq 2$. By the same arguments, we find that $1\leq |f(W)|\leq 2$. We first prove that $|f(U)|=|f(W)|=2$. In order to derive a contradiction, suppose that $|f(U)|\neq 2$. Then $f(U)$ has only one vertex. By symmetry, we may assume that $f$ maps every vertex of $U$ to $h_0$; otherwise we can redefine $f$. Because every vertex of $G''$ is adjacent to a vertex in $U$, we find that $G''$ contains no vertex that is mapped to $h_2$ by $f$. This is not possible, because $f$ is a compaction from $G''$ to $H$. Hence $|f(U)|=2$, and by the same arguments, $|f(W)|=2$. Because $U$ is a clique, we find that $f(U)\neq \{h_0,h_2\}$ and $f(U)\neq \{h_1,h_3\}$. Hence, by symmetry, we assume that $f(U)=\{h_0,h_1\}$. We now prove that $f(W)=\{h_2,h_3\}$. In order to obtain a contradiction, suppose that $f(W)\neq \{h_2,h_3\}$. Because $f$ is a compaction from $G''$ to $H$, there exists an edge $st$ in $G''$ with $f(s)=h_2$ and $f(t)=h_3$. Because every vertex of $U$ is mapped to $h_0$ or $h_1$, we find that $s\notin U$ and $t\notin U$. Because we assume that $f(W)\neq \{h_2,h_3\}$, we find that $st$ is not one of $w_vw_{v'},w_vh_2,w_vh_3,h_2h_3$. Hence, $st$ is one of the following edges $$vw_v, w_vy_v, vx_{vv'}, y_vh_2, vh_2, vh_3, vv', v'x_{vv'},w_{v'}x_{vv'},x_{vv'}h_2,$$ where $v,v'\in V_G\setminus V_H$. We must consider each of these possibilities. If $st\in \{vw_v,w_vy_v,vx_{vv'}\}$ then $f(u_v)\in \{h_2,h_3\}$, because $u_v$ is adjacent to $v,w_v,y_v,x_{vv'}$. However, this is not possible because $f(u_v)\in \{h_0,h_1\}$. If $st=y_vh_2$, then $f(w_v)=h_2$ or $f(w_v)=h_3$, because $w_v$ is adjacent to both $y_v$ and $h_2$, and $\{f(y_v),f(h_2)\}=\{h_2,h_3\}$. This means that either $f(w_v)=f(y_v)$ or $f(w_v)=f(h_2)$. If $f(w_v)=f(y_v)$, then $\{f(w_v),f(h_2)\}=\{h_2,h_3\}$. Consequently, $f(W)=\{h_2,h_3\}$, which we assumed is not the case. Hence, $f(w_v)\neq f(y_v)$. 
Then $f$ maps the edge $w_vy_v$ to $h_2h_3$, and we return to the previous case. We can repeat the same arguments if $st=vh_2$ or $st=vh_3$. Hence, we find that $st$ cannot be equal to those edges either. If $st=vv'$, then by symmetry we may assume without loss of generality that $f(v)=h_2$ and $f(v')=h_3$. Consequently, $f(u_v)=h_1$, because $u_v\in U$ is adjacent to $v$, and can only be mapped to $h_0$ or $h_1$. By the same reasoning, $f(u_{v'})=h_0$. Because $w_v$ is adjacent to $v$ with $f(v)=h_2$ and to $u_v$ with $f(u_v)=h_1$, we find that $f(w_v)\in \{h_1,h_2\}$. Because $w_{v'}$ is adjacent to $v'$ with $f(v')=h_3$ and to $u_{v'}$ with $f(u_{v'})=h_0$, we find that $f(w_{v'})\in \{h_0,h_3\}$. Recall that $f(W)\neq \{h_2,h_3\}$. Then, because $w_v$ and $w_{v'}$ are adjacent, we find that $f(w_v)=h_1$ and $f(w_{v'})=h_0$. Suppose that $x_{vv'}$ exists. Then $x_{vv'}$ is adjacent to vertices $v$ with $f(v)=h_2$, to $v'$ with $f(v')=h_3$, to $u_v$ with $f(u_v)=h_1$ and to $w_{v'}$ with $f(w_{v'})=h_0$. This is not possible. Hence $x_{vv'}$ cannot exist. This means that $v,v'$ are both of type $a$, both of type $b$, both of type $c$ or both of type $d$. If $v,v'$ are both of type $a$ or both of type $d$, then $f(h_0)\in \{h_2,h_3\}$, which is not possible because $h_0\in U$ and $f(U)=\{h_0,h_1\}$. If $v,v'$ are both of type $b$, we apply the same reasoning with respect to $h_1$. Suppose that $v,v'$ are both of type $c$. Then both $v$ and $v'$ are adjacent to $h_2$. This means that $f(h_2)\in \{h_2,h_3\}$. Then either $\{f(v),f(h_2)\}=\{h_2,h_3\}$ or $\{f(v'),f(h_2)\}=\{h_2,h_3\}$. Hence, by considering either the edge $vh_2$ or $v'h_2$ we return to a previous case. We conclude that $st\neq vv'$. If $st=v'x_{vv'}$ then $f(v)\in \{h_2,h_3\}$, because $v$ is adjacent to $v'$ and $x_{vv'}$. Then one of $vv'$ or $vx_{vv'}$ maps to $h_2h_3$, and we can return to a previous case. Hence, we find that $st\neq v'x_{vv'}$. 
If $st=w_{v'}x_{vv'}$ then $f(v')\in \{h_2,h_3\}$, because $v'$ is adjacent to $w_{v'}$ and $x_{vv'}$. Then one of $v'w_{v'}$ or $v'x_{vv'}$ maps to $h_2h_3$, and we can return to a previous case. Hence, we find that $st\neq w_{v'}x_{vv'}$. If $st=x_{vv'}h_2$ then $f(w_{v'})\in \{h_2,h_3\}$, because $w_{v'}$ is adjacent to $x_{vv'}$ and $h_2$. Because $f(W)\neq \{h_2,h_3\}$, we find that $f(w_{v'})=f(h_2)$. Then $w_{v'}x_{vv'}$ is mapped to $h_2h_3$, and we return to a previous case. Hence, we find that $st\neq x_{vv'}h_2$. We conclude that $G''$ has no edge $st$ with $f(s)=h_2$ and $f(t)=h_3$. This is a contradiction; recall that $f$ is a compaction. Hence, $f(W)=\{h_2,h_3\}$. The next step is to prove that $f(h_0)\neq f(h_1)$. In order to obtain a contradiction, suppose that $f(h_0)=f(h_1)$. By symmetry we may assume without loss of generality that $f(h_0)=f(h_1)=h_0$. Because $f(U)=\{h_0,h_1\}$, there exists a vertex $u_v$ with $f(u_v)=h_1$. Because $w_v$ with $f(w_v)\in \{h_2,h_3\}$ is adjacent to $u_v$, we find that $f(w_v)=h_2$. Because $h_2$ with $f(h_2)\in \{h_2,h_3\}$ is adjacent to $h_1$ with $f(h_1)=h_0$, we find that $f(h_2)=h_3$. However, then $y_v$ is adjacent to $h_0$ with $f(h_0)=h_0$, to $u_v$ with $f(u_v)=h_1$, to $w_v$ with $f(w_v)=h_2$, and to $h_2$ with $f(h_2)=h_3$. This is not possible. Hence, we find that $f(h_0)\neq f(h_1)$. By symmetry, we may assume without loss of generality that $f(h_0)=h_0$ and $f(h_1)=h_1$. We are left to show that $f(h_2)=h_2$ and $f(h_3)=h_3$. This can be seen as follows. Because $h_2$ is adjacent to $h_1$ with $f(h_1)=h_1$, and $f(h_2)\in \{h_2,h_3\}$ we find that $f(h_2)=h_2$. Because $h_3$ is adjacent to $h_0$ with $f(h_0)=h_0$, and $f(h_3)\in \{h_2,h_3\}$ we find that $f(h_3)=h_3$. Hence, we have found that $f$ is a retraction from $G''$ to $H$, as desired. 
\noindent ``$(iii)\Rightarrow (iv)$'' and ``$(iv)\Rightarrow (iii)$'' immediately follow from the equivalence between statements 3 and 6 in Proposition~\ref{p-known}, after recalling that $G''$ has diameter 2 due to Lemma~\ref{l-diam2}. \qed \end{pf} We are now ready to state the main result of our paper. Its proof follows from Lemmas~\ref{l-diam2} and~\ref{l-equi}, in light of Theorem~\ref{t-ret}; note that all constructions may be carried out in polynomial time. \begin{thm}\label{t-main} The {\sc Surjective $\reflex{C}_4$-Homomorphism} problem is {\sf NP}-complete for graphs of diameter 2 even if they have a dominating non-edge. \end{thm} \noindent {\bf Acknowledgments.} The authors thank Andrei Krokhin for useful comments on Section~\ref{s-algebra} and an anonymous reviewer for helpful comments on the presentation of our paper. \end{document}
\begin{document} \title[Upper bounds on the size of transitive subtournaments in digraphs] {Upper bounds on the size of transitive subtournaments in digraphs} \author[Koji Momihara and Sho Suda]{Koji Momihara$^*$ and Sho Suda$^{\dagger}$} \thanks{$^\ast$ Koji Momihara is supported by JSPS KAKENHI Grant Number (C)24540013.} \thanks{$^{\dagger}$ Sho Suda is supported by JSPS KAKENHI Grant Number 15K21075.} \address{$^\ast$ Faculty of Education, Kumamoto University, 2-40-1 Kurokami, Kumamoto 860-8555, Japan} \email{[email protected]} \address{$^{\dagger}$ Department of Mathematics Education, Aichi University of Education, 1 Hirosawa, Igaya-cho, Kariya, Aichi 448-8542, Japan} \email{[email protected]} \keywords{Transitive subtournament; Regular digraph; Doubly regular tournament; Paley tournament; Hoffman's bound; Block-intersection polynomial} \begin{abstract} In this paper, we consider upper bounds on the size of transitive subtournaments in a digraph. In particular, we give an analogy of Hoffman's bound for the size of cocliques in a regular graph. Furthermore, we partially improve the Hoffman type bound for doubly regular tournaments by using the technique of Greaves and Soicher for strongly regular graphs~\cite{GS}, which gives a new application of block-intersection polynomials. \end{abstract} \maketitle \section{Introduction} The problem of finding a sharp bound on the size of cliques or cocliques in a graph has been well-studied, and some nontrivial bounds are known based on linear algebraic techniques, cf.~\cite{Ha1,Ha2}. In particular, it is well-known that if a $k$-regular graph $G$ with $v$ vertices has a coclique of size $s$, then \begin{equation}\label{eq:hoff:ori} s\le -v\lambda_{{\mathrm{min}}}/(k-\lambda_{{\mathrm{min}}}), \end{equation} where $\lambda_{{\mathrm{min}}}$ is the minimum eigenvalue of the adjacency matrix of $G$. This bound is an unpublished result of A. J. Hoffman, and so it is known as {\it Hoffman's bound}. 
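As a toy numerical illustration of \eqref{eq:hoff:ori} (our own, not from the literature), consider the $5$-cycle: it is $2$-regular, its minimum adjacency eigenvalue is $2\cos(4\pi/5)=-(1+\sqrt 5)/2$, the bound evaluates to $\sqrt 5\approx 2.24$, and the largest coclique indeed has size $2$.

```python
import itertools
import math

# The 5-cycle C_5: a 2-regular graph on v = 5 vertices.
v, k = 5, 2
edges = {frozenset((i, (i + 1) % v)) for i in range(v)}

# The adjacency eigenvalues of the n-cycle are 2*cos(2*pi*j/n); for n = 5
# the minimum is attained at j = 2, giving 2*cos(4*pi/5) = -(1 + sqrt(5))/2.
lam_min = 2 * math.cos(4 * math.pi / 5)
hoffman = -v * lam_min / (k - lam_min)   # Hoffman's bound, here sqrt(5)

def is_coclique(S):
    """No two vertices of S are adjacent."""
    return all(frozenset(p) not in edges for p in itertools.combinations(S, 2))

# Maximum coclique size by exhaustive search.
s = max(len(S) for r in range(v + 1)
        for S in itertools.combinations(range(v), r) if is_coclique(S))
```

Here `s == 2` and `hoffman` equals $\sqrt5\approx 2.236$, so the bound holds but is not attained for $C_5$.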
Furthermore, the case where equality holds in this bound was studied in relation to some combinatorial structures, such as strongly regular graphs and association schemes~\cite{Ha2}. The Hoffman bound for strongly regular graphs is sometimes referred to as the Delsarte bound. The Delsarte bound is difficult to improve in general. In fact, until recently, there were only a few classes of strongly regular graphs for which the bound was improvable. For example, Bachoc, Matolcsi and Ruzsa~\cite{BMR} improved the Delsarte bound for Paley graphs of nonsquare order using the properties of quadratic residues of finite fields. Very recently, it was announced that Greaves and Soicher~\cite{GS} improved the Delsarte bound for a large class of strongly regular graphs using block-intersection polynomials. In this paper, we are interested in the size of transitive subtournaments in a digraph. The problem of finding a sharp bound on the size of transitive subtournaments in a tournament was initially considered by Erd\H{o}s and Moser~\cite{EM} in 1964. It is clear that any tournament with $v$ vertices contains a transitive subtournament with $1+\lfloor \log_2(v)\rfloor$ vertices. This bound is tight for the case where $v=7$. In fact, the Paley tournament on seven vertices contains no transitive subtournament with four vertices. They also proved that there are tournaments with $v$ vertices without a transitive subtournament of size $2(1+\lfloor \log_2(v)\rfloor)$. On the other hand, Reid and Parker~\cite{RP} showed that the lower bound above is not tight for large $v$. There have been some studies of upper and lower bounds on the size of transitive subtournaments in a digraph to improve the Erd\H{o}s--Moser bound, cf. \cite{Moo,Sa1,Sa2,T}. In particular, in \cite{T}, it was shown by a simple counting argument that the maximum size $s$ of transitive subtournaments in a doubly regular tournament with $v$ vertices satisfies \begin{equation}\label{eq:bound} s\le \frac{-3+\sqrt{13+12v}}{2}. 
\end{equation} In this paper, we consider analogues of the Hoffman bound and the Greaves--Soicher bound for digraphs. In particular, as an analogue of the Hoffman bound for regular graphs, we show that if a regular digraph $G$ with $v$ vertices has a transitive subtournament of size $s$, then \[ s\le \frac{-3\theta_{{\mathrm{max}}}^2+\sqrt{9\theta_{{\mathrm{max}}}^4+4v^2+12\theta_{{\mathrm{max}}}^2v^2}}{2v}, \] where $\theta_{{\mathrm{max}}}$ is the maximum eigenvalue of the Seidel matrix of $G$. As an immediate corollary, we have $ s \le (-3+\sqrt{13+12v})/2$ for a doubly regular tournament, which coincides with the bound~\eqref{eq:bound}. Furthermore, this bound can be partially improved by using the Greaves--Soicher technique for strongly regular graphs~\cite{GS}, which gives a new application of block-intersection polynomials. This paper is organized as follows. In Section~\ref{sec:2}, we introduce basic facts on spectra of digraphs. In Section~\ref{sec:3}, we obtain an upper bound on the size of transitive subtournaments in a digraph using the ``interlacing property'' of eigenvalues, without any restriction on the digraph. Section~\ref{sec:hof} is the main part of this paper, where we consider an analogue of the Hoffman bound for digraphs. In Section~\ref{sec:5}, we introduce adjacency polynomials for doubly regular tournaments, and partially improve the Hoffman-type bound~\eqref{eq:bound} for doubly regular tournaments. Finally, we give some open problems related to this study in Section~\ref{sec:6}. \section{Preliminaries}\label{sec:2} Let $G=(V,E)$ be a digraph with $v$ vertices. The {\it adjacency matrix} $A$ of $G$ is the $v\times v$ matrix whose rows and columns are labeled by the vertices of $G$ and whose entries are defined by \begin{align*} A_{x,y}=\begin{cases} 1,\quad &\textup{if $(x,y)\in E$},\\ 0,\quad & \textup{otherwise}.
\end{cases} \end{align*} It is clear that $A+A^\top $ is symmetric and $\sqrt{-1}(A-A^\top )$ is Hermitian, where $A^\top$ denotes the transpose of $A$. The matrix $S_{G}:=\sqrt{-1}(A-A^\top )$ is called the {\it Seidel matrix} of $G$. Let ${\bf 1}$ be the all-one vector of length $v$. If $|\{x\in V:(x,y)\in E\}|=|\{x\in V:(y,x)\in E\}|=k$ for any $y\in V$, then the digraph is called {\it regular}. This condition is equivalent to requiring that $A\cdot {\bf 1}=A^\top \cdot {\bf 1}$ and that all entries of the vector $A\cdot {\bf 1}$ equal $k$. A digraph is called a {\it tournament} if $A+A^\top =J-I$, where $J$ is the all-one square matrix of order $v$ and $I$ is the identity matrix of order $v$. A tournament $G$ is called {\it doubly regular} if it is $(v-1)/2$-regular and the number of vertices dominated simultaneously by a pair of distinct vertices is a constant, say $\lambda$, not depending on the choice of the pair. It is clear that $\lambda=(v-3)/4$, which implies that $v\equiv 3\,({\mathrm{mod\,\,}}{4})$. The adjacency matrix $A$ satisfies \[ AA^\top =\frac{v+1}{4}I+\frac{v-3}{4}J. \] Noting that $A+A^\top =J-I$, we see that $A$ has the eigenvalues $\frac{v-1}{2},\frac{-1+\sqrt{-v}}{2},\frac{-1-\sqrt{-v}}{2}$. In other words, $S_G$ has $0,\sqrt{v},-\sqrt{v}$ as its eigenvalues \cite[Theorem~2.5]{NS}. Since $S_G$ is Hermitian, $S_G$ has only real eigenvalues. Let $E_\theta$ be the orthogonal projection matrix onto the eigenspace of an eigenvalue $\theta$. Then, $S_G$ has the spectral decomposition \begin{equation}\label{eq:spe} S_G=\sum_{\theta\in \mbox{ev}(G)}\theta E_\theta, \end{equation} where $\mbox{ev}(G)$ is the set of distinct eigenvalues of $S_G$. We denote by $m_\theta$ the multiplicity corresponding to $\theta\in\mbox{ev}(G)$. The {\it main angle} $\beta_\theta$ is defined by $\beta_\theta=(1/\sqrt{v})\|E_\theta\mathbf{1}\|$. A {\it main eigenvalue} of $S_G$ is an eigenvalue $\theta$ with $\beta_\theta\neq0$.
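These spectral facts are easy to check by computer. As a small illustrative example (not from the original text), the following NumPy snippet builds the Paley tournament on $7$ vertices, a doubly regular tournament, and confirms that its Seidel matrix has spectrum $\{0,\pm\sqrt{7}\}$.

```python
import numpy as np

# Paley tournament on 7 vertices: (x, y) is an edge iff x - y is a
# nonzero square mod 7; the nonzero squares mod 7 are {1, 2, 4}.
v, squares = 7, {1, 2, 4}
A = np.array([[1 if (x - y) % v in squares else 0 for y in range(v)]
              for x in range(v)])

# It is a tournament: A + A^T = J - I.
assert (A + A.T == np.ones((v, v)) - np.eye(v)).all()

S = 1j * (A - A.T)                    # Seidel matrix (Hermitian)
eig = np.sort(np.linalg.eigvalsh(S))  # real eigenvalues, ascending

# Spectrum: -sqrt(7) (x3), 0, sqrt(7) (x3), as for any doubly regular
# tournament on v vertices.
print(np.round(eig, 3))
```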
Since $S_G$ is Hermitian and $S_G^\top =-S_G$, the projections $E_\theta$ satisfy the following basic properties. \begin{enumerate} \item $E_\theta^2=E_{\theta}$ and $E_\theta E_{\tau}=O$ for $\theta \not=\tau$, \item $S_G E_\theta=\theta E_\theta$, \item $\sum_{\theta \in\mbox{ev}(G)}E_\theta=I$, \item if $\theta\in\mbox{ev}(G)$, then $-\theta\in\mbox{ev}(G)$, \end{enumerate} where $O$ denotes the zero matrix of order $v$. Define the set $M(G)$ of main eigenvalues to be $M(G)=\{\theta\in \mathrm{ev}(G): \beta_\theta\neq 0\}$, and the matrix $F_G$ to be $F_G=\sum_{\theta\in M(G)}E_\theta$. Note that $M(G)\neq\emptyset$ since $\sum_{\theta\in\mathrm{ev}(G)}\beta_{\theta}^2=1$. In the rest of this paper, we are interested in the size of transitive subtournaments in a digraph. A transitive tournament is a tournament satisfying the following: if $(x,y)\in E$ and $(y,z)\in E$, then $(x,z)\in E$. After reordering the vertices appropriately, we may assume that a transitive tournament has the adjacency matrix \begin{align}\label{eq:SeidelTra} A=\begin{pmatrix} 0 & 0&0&\cdots&0 \\ 1 & 0&0&\cdots&0\\ 1&1&0&\cdots&0\\ \vdots & \vdots&\vdots &\ddots&\vdots\\ 1&1&1&\cdots&0 \end{pmatrix}. \end{align} \section{Interlacing}\label{sec:3} In this section, we will use the following well-known fact on interlacing of eigenvalues. \begin{proposition}{\em (\cite[Theorem 4.3.17]{HJ})} Let $A$ be a Hermitian matrix of order $v$ with eigenvalues \[ \lambda_1\ge \lambda_2\ge \cdots\ge \lambda_v. \] Let $B$ be a principal submatrix of $A$ of order $m$ with eigenvalues \[ \mu_1\ge \mu_2\ge \cdots\ge \mu_m. \] Then, the eigenvalues of $B$ interlace those of $A$, i.e., \[ \lambda_i\ge \mu_i\ge \lambda_{v-m+i}, \, \, \, \, i=1,2,\ldots,m. \] \end{proposition} Using interlacing of eigenvalues, we obtain the following theorem. \begin{theorem}\label{thm;inte} Let $G$ be a digraph with $v$ vertices and $\Gamma$ be a transitive subtournament of size $s$ in $G$.
Let $\theta_i$, $i=1,2,\ldots,v$, be the eigenvalues of $S_G$ with ordering $\theta_1\geq\cdots\geq\theta_v$. Then, \begin{equation}\label{eq:inter} s\leq \frac{(2i-1)\pi}{2\mathrm{arccot}(\theta_{i})} \end{equation} for $i=1,\ldots,\lfloor s/2 \rfloor$. \end{theorem} {\vspace{-0.0cm}\bf Proof: \,} It is easily shown that the eigenvalues of $S_\Gamma$ are $\cot(\frac{(2i-1)\pi }{2s})$, $i=1,\ldots,s$. By interlacing for $S_G$ and $S_\Gamma$, we have $\cot(\frac{(2i-1)\pi}{2s})\leq \theta_{i}$, which is equivalent to the inequality \eqref{eq:inter}. { $\square$} \vspace{0.3cm} By applying Theorem~\ref{thm;inte} to doubly regular tournaments, we have the following corollary. \begin{corollary}\label{cor;inte} Let $G$ be a doubly regular tournament with $v$ vertices and $\Gamma$ be a transitive subtournament of size $s$ in $G$. Then, \[ s\leq \frac{\pi}{2\mathrm{arccot}(\sqrt{v})}. \] \end{corollary} {\vspace{-0.0cm}\bf Proof: \,} This corollary follows from the fact that $\theta_i=\sqrt{v}$ for $i=1,\ldots,(v-1)/2$. { $\square$} \vspace{0.3cm} We list the values of $\frac{\pi}{2\mathrm{arccot}(\sqrt{v})}$ for small $v\equiv 3\,({\mathrm{mod\,\,}}{4})$ in Table~\ref{tableEx3}. As far as we have checked in small cases, this bound is not better than the bound obtained in Section~\ref{sec:hof}. On the other hand, the advantage of Theorem~\ref{thm;inte} is that it applies to general digraphs without computing the $\beta_\theta$'s. \begin{table}[h] \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline $v$ & 7&11&15&19&23&27&31&35\\ \hline $\frac{\pi}{2\mathrm{arccot}(\sqrt{v})}$& 4.346&5.363&6.216&6.965&7.641&8.261&8.839&9.380 \\ \hline \end{tabular} \end{center} \caption{The values of $\frac{\pi}{2\mathrm{arccot}(\sqrt{v})}$ for small $v$.}\label{tableEx3} \end{table} \section{An analogue of Hoffman's bound}\label{sec:hof} In this section we consider an analogue of Hoffman's bound for digraphs.
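Both ingredients of this bound are easy to verify numerically. The following snippet (an illustrative check using NumPy, not part of the original text) confirms the claimed spectrum of the Seidel matrix of the transitive tournament for $s=5$ and recomputes the first entries of Table~\ref{tableEx3} (which agree up to rounding of the last digit).

```python
import numpy as np

# Seidel matrix of the transitive tournament on s vertices
# (adjacency matrix strictly lower triangular).
s = 5
A = np.tril(np.ones((s, s)), -1)
eig = np.sort(np.linalg.eigvalsh(1j * (A - A.T)))[::-1]  # descending

# Claimed spectrum: cot((2i-1)*pi/(2s)), i = 1, ..., s.
claimed = [1 / np.tan((2 * i - 1) * np.pi / (2 * s)) for i in range(1, s + 1)]
assert np.allclose(eig, claimed)

# Values pi/(2 arccot(sqrt(v))) of the corollary; note arccot(x) =
# arctan(1/x) for x > 0.
vals = {v: np.pi / (2 * np.arctan(1 / np.sqrt(v))) for v in (7, 11, 15, 19)}
for v, x in vals.items():
    print(v, round(x, 3))
```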
Recall that $M(G):=\{\theta\in \mathrm{ev}(G): \beta_\theta\neq 0\}$ and $F_G:=\sum_{\theta\in M(G)}E_\theta$. \begin{lemma}\label{lem:chi1} Let $G$ be a digraph with $v$ vertices and $\chi$ be a $(0,1)$-vector with exactly $s$ entries equal to $1$. Then, $\chi^\top F_G\chi\geq\frac{1}{v}s^2$. \end{lemma} {\vspace{-0.0cm}\bf Proof: \,} By the definition of main angles, the space $F_G\mathbb{R}^v$ contains $\text{span}\{{\bf 1}\}$. Letting $E$ be the orthogonal projection onto $F_G\mathbb{R}^v\cap \text{span}\{{\bf 1}\}^\perp$, we have \begin{align*} F_G=\frac{1}{v}J+E, \end{align*} and the result follows since $\chi^\top J\chi=s^2$ and $E$ is positive semidefinite. { $\square$} \begin{lemma}\label{lem:chi2} Let $G$ be a digraph with $v$ vertices, $\Gamma$ be a transitive subtournament of size $s$ in $G$, and $\chi$ be the characteristic vector of $V(\Gamma)$. Then it holds that \[ s(s^2-1)/3\leq \chi^\top S_GS_G^* \chi, \] with equality if and only if for any vertex $x\not\in V(\Gamma)$, the number of vertices in $V(\Gamma)$ dominating $x$ equals the number of vertices in $V(\Gamma)$ dominated by $x$. \end{lemma} {\vspace{-0.0cm}\bf Proof: \,} After reordering the vertices appropriately, we may set \begin{align*} S_G=\begin{pmatrix} S_\Gamma & S_{12} \\ S_{21} & S_{22} \end{pmatrix} . \end{align*} Then it is easy to see that \begin{align*} 0\le {\bf 1}^\top S_{12} S_{12}^\ast{\bf 1}= \chi^\top S_G S_G^\ast \chi-{\bf 1}^\top S_\Gamma S_\Gamma^\ast {\bf 1}=\chi^\top S_G S_G^\ast \chi-s(s^2-1)/3. \end{align*} Equality holds if and only if $S_{12}^*{\bf 1}={\bf 0}$, which is equivalent to the desired condition. { $\square$} \vspace{0.3cm} The following is our main theorem in this section. \begin{theorem}\label{thm:hoff1} Let $G$ be a digraph with $v$ vertices and $\Gamma$ be a transitive subtournament of size $s$ in $G$. Let $\alpha={\mathrm{max}}\{\theta:\theta\in M(G)\}$ and $\gamma={\mathrm{max}} \{\theta:\theta\in\mathrm{ev}(G)\setminus M(G)\}$.
If $\alpha\leq \gamma$, then \begin{align}\label{ineq:1} s\le \frac{3\alpha^2-3 \gamma^2 + \sqrt{4v^2(1+3\gamma^2)+9(\gamma^2-\alpha^2)^2}}{2 v}. \end{align} \end{theorem} {\vspace{-0.0cm}\bf Proof: \,} Let $\chi$ be the characteristic vector of $V(\Gamma)$. Then \begin{align} \chi^\top S_G S_G^\ast \chi=&\, \chi^\top \big( \sum_{\theta\in\mbox{ev}(G)}\theta E_\theta\big) \big( \sum_{\theta\in\mbox{ev}(G)}\theta E_\theta^\ast\big)\chi\nonumber\displaybreak[0]\\ =&\, \chi^\top \big( \sum_{\theta\in\mbox{ev}(G)}\theta^2 E_\theta\big)\chi\nonumber\displaybreak[0]\\ \leq&\, \alpha^2\chi^\top \big( \sum_{\theta\in M(G)}E_\theta\big)\chi+\gamma^2\chi^\top \big( \sum_{\theta\in\mbox{ev}(G)\setminus M(G)} E_\theta\big)\chi\nonumber\displaybreak[0]\\ = &\, \alpha^2 \chi^\top F_G\chi+\gamma^2\chi^\top \big( I-F_G \big)\chi \nonumber\displaybreak[0]\\ =&\, (\alpha^2-\gamma^2)\chi^\top F_G\chi+\gamma^2\chi^\top \chi \nonumber\displaybreak[0] \\ =&\, (\alpha^2-\gamma^2)\chi^\top F_G\chi+\gamma^2 s. \label{eq:hoff2i} \end{align} By the inequality~\eqref{eq:hoff2i} and Lemma~\ref{lem:chi2}, we have \begin{align}\label{eq:hoff11} 0\leq (\alpha^2-\gamma^2)\chi^\top F_G\chi+\gamma^2 s-s(s^2-1)/3. \end{align} Since $\alpha\leq \gamma$, the desired inequality follows from Lemma~\ref{lem:chi1} and the inequality~\eqref{eq:hoff11}. { $\square$} \vspace{0.3cm} We now consider the case where $G$ is a regular digraph. Then $M(G)=\{0\}$, and $\gamma={\mathrm{max}}\{\theta:\theta\in \mathrm{ev}(G)\setminus M(G)\}$ is equal to the maximum eigenvalue of $S_G$. Thus we have the following corollary. \begin{corollary}\label{cor:regu} Let $G$ be a regular digraph with $v$ vertices and $\Gamma$ be a transitive subtournament of size $s$ in $G$.
Then, \begin{equation}\label{eq:regu} s\le \frac{-3\theta_{{\mathrm{max}}}^2+\sqrt{9\theta_{{\mathrm{max}}}^4+4v^2+12\theta_{{\mathrm{max}}}^2v^2}}{2v}, \end{equation} where $\theta_{{\mathrm{max}}}$ is the maximum eigenvalue of the Seidel matrix of $G$. \end{corollary} Next, we move on to the case where $G$ is a regular tournament. Let $\Gamma$ be a transitive subtournament of size $s$ in $G$ with characteristic vector $\chi$. Since $A=\frac{-\sqrt{-1}S_G+J-I}{2}$ and $S_G+S_G^\top =O$, we have \[ (A+{A}^\top )\chi=(J-I)\chi=s {\bf 1}-\chi. \] Hence, for any $x\not\in V(\Gamma)$, \[ |\{y\in V(\Gamma): (x,y)\in E\}|+|\{y\in V(\Gamma): (y,x)\in E\}|=s. \] On the other hand, if equality holds in the inequality~\eqref{eq:regu}, then equality must hold in Lemma~\ref{lem:chi2}, so that \[ |\{y\in V(\Gamma): (x,y)\in E\}|=|\{y\in V(\Gamma): (y,x)\in E\}|. \] Therefore, $s$ must be even. Summing up, we have the following corollary. \begin{corollary}\label{cor:par} Let $G$ be a regular tournament with $v$ vertices and $\Gamma$ be a transitive subtournament of size $s$ in $G$. If equality holds in the inequality~\eqref{eq:regu}, then $s$ is even. \end{corollary} Also, by applying Corollary~\ref{cor:regu} to doubly regular tournaments, we have the following. \begin{corollary}\label{cor:doub} Let $G$ be a doubly regular tournament with $v$ vertices and $\Gamma$ be a transitive subtournament of size $s$ in $G$. Then, it holds that \begin{align}\label{ineq:drt} s\le \frac{-3 + \sqrt{13 + 12 v}}{2}. \end{align} \end{corollary} {\vspace{-0.0cm}\bf Proof: \,} The corollary follows from the fact that $\mbox{ev}(G)=\{0,\pm \sqrt{v}\}$ \cite[Theorem~2.5]{NS}. { $\square$} \vspace{0.3cm} In Table~\ref{tableEx2}, we list the maximum size of transitive subtournaments in known doubly regular tournaments with $v$ vertices for small $v$. In the row ``maximum'', we report the largest size of a transitive subtournament that we found among the known doubly regular tournaments given in \cite{Ma}. Here, the upper bound $s\le 6$ for $v=23$ is obtained by Corollary~\ref{cor:par}.
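The entries of the ``upper bound'' row are easy to recompute, and the claim that the Paley tournament on seven vertices has no transitive subtournament on four vertices can be confirmed by brute force. The following script (an illustrative check, not part of the original computations) does both, using the standard fact that a tournament is transitive exactly when its out-degrees are pairwise distinct.

```python
from itertools import combinations
from math import sqrt

# Hoffman-type bound (-3 + sqrt(13 + 12v))/2, rounded down, for the
# values of v in Table 2.  (For v = 23 this gives 7; the table entry 6
# uses the additional parity argument of Corollary cor:par.)
print([int((-3 + sqrt(13 + 12 * v)) / 2) for v in (7, 11, 15, 19, 23, 27, 31, 35)])

# Paley tournament on 7 vertices: x dominates y iff x - y is a nonzero
# square mod 7.  A tournament on a vertex set W is transitive iff the
# out-degrees within W are pairwise distinct.
squares = {1, 2, 4}

def transitive(W):
    deg = [sum((x - y) % 7 in squares for y in W) for x in W]
    return len(set(deg)) == len(W)

# Transitive subtournaments of size 3 exist, but none of size 4.
assert any(transitive(W) for W in combinations(range(7), 3))
assert not any(transitive(W) for W in combinations(range(7), 4))
```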
Also, we list the maximum size of transitive subtournaments in the Paley tournament. Here, the Paley tournament is defined as follows: let $q$ be a prime power congruent to $3$ modulo $4$, and let $C_0$ be the set of nonzero squares of the finite field ${\mathbb F}_q$. The {\it Paley tournament of order $q$} is the tournament with the elements of ${\mathbb F}_q$ as vertices in which $(x,y)\in E$ if and only if $x-y\in C_0$. See \cite{Sa1} for further computational results on the maximum size of transitive subtournaments in Paley tournaments. \begin{table}[h] \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline $v$& 7&11&15&19&23&27&31&35 \\ \hline $\#$& 1&1&2&2&37&722&$\ge$ 5&$\ge$ 486 \\ \hline \hline upper bound & 3&4&5&6&6&7&8&8\\ \hline maximum & 3&4&5&5&6&7&$\ge$ 7&$\ge$ 7\\ \hline Paley & 3&4&-&5&5&5&7&-\\ \hline \end{tabular} \end{center} \caption{The size of transitive subtournaments in doubly regular tournaments}\label{tableEx2} \end{table} Let $G$ be a doubly regular tournament with $v$ vertices and $x$ be a vertex of $G$. Let $G'$ be the subtournament of $G$ induced on the vertex set $V(G)\setminus \{x\}$. Then the tournament $G'$ has $v-1$ vertices and is biregular. Furthermore, by \cite[Theorem~1.1]{NS}, the spectrum of the tournament $G'$ is \begin{align*} \mbox{ev}(G')=\{\pm\sqrt{v},\pm1\},\quad \beta_{\pm\sqrt{v}}=0,\quad \beta_{\pm1}=1/\sqrt{2}. \end{align*} By applying Theorem~\ref{thm:hoff1} to $G'$ (which has $v-1$ vertices) with $\alpha=1$ and $\gamma=\sqrt{v}$, we have \[ s\le \frac{-3+\sqrt{25+12(v-1)}}{2}=\frac{-3+\sqrt{13+12v}}{2}, \] which coincides with the bound~\eqref{ineq:drt}. \section{Bounds from block intersection polynomials}\label{sec:5} The concept of {\it block intersection polynomials} was first introduced by Cameron and Soicher in \cite{CS}. For a non-negative integer $k$, define the polynomial \[ P(x,k):=x(x-1)\cdots (x-k+1). \] Thus, for a non-negative integer $n$, ${n\choose k}=P(n,k)/k!$.
For real number sequences $M=[m_0,\ldots,m_s]$ and $\Lambda=[\lambda_0,\ldots,\lambda_t]$ with $t\le s$, define the {\it block intersection polynomial} \begin{equation}\label{def:bip} B(x,M,\Lambda)=\sum_{j=0}^t{t \choose j}P(-x,t-j)\Big( P(s,j)\lambda_j-\sum_{i=j}^sP(i,j)m_i \Big). \end{equation} Cameron--Soicher~\cite{CS} and Soicher~\cite{S1} proved the following result. \begin{theorem}\label{thm:bip} Let $s$ and $t$ be non-negative integers with $s\ge t$, and let $n_0,\ldots,n_s$, $m_0,\ldots,m_s$, and $\lambda_0,\ldots,\lambda_t$ be real numbers such that \[ \sum_{i=0}^{s}{i\choose j}n_i={s\choose j}\lambda_j, \, \, \, j=0,1,\ldots,t, \] and let $B(x)$ be the block intersection polynomial defined in \eqref{def:bip}. If $m_i\le n_i$ for all $i$, then $B(b)\ge 0$ for every integer $b$. \end{theorem} See \cite{S1} for further properties of block intersection polynomials. Cameron and Soicher~\cite{CS} discussed the multiplicity of a block in a $t$-design using block intersection polynomials. Soicher~\cite{S1,S2} defined adjacency polynomials for edge-regular graphs as a special form of block intersection polynomials and discussed the existence of cliques in edge-regular graphs. Very recently, Greaves and Soicher~\cite{GS} announced an improvement of the Hoffman bound for strongly regular graphs using adjacency polynomials. In this section, we define adjacency polynomials for digraphs and partially improve the Hoffman-type bound of Corollary~\ref{cor:doub} for doubly regular tournaments. \subsection{Adjacency polynomials for doubly regular tournaments} Let $G$ be a doubly regular tournament with $v$ vertices. Note that $v=4m-1$ for some $m\in {\mathbb N}$. Then the number of vertices dominated by each vertex is $k=2m-1$, and the number of vertices dominated simultaneously by two distinct vertices is $\lambda=m-1$. Let $\Gamma$ be a transitive subtournament of $G$ with $s$ vertices.
For $D\subseteq V(G)$, let \[ \lambda_D:=|\{q\in V(G)\setminus V(\Gamma):D\subseteq N(q)\}|, \] where $N(q)$ is the set of vertices dominating $q$, and for $0\le j\le s$, set \[ \lambda_j:=\frac{\sum_{D\in {V(\Gamma)\choose j}}\lambda_D}{{s \choose j}}. \] Then, it is clear that $\lambda_0=|V(G)\setminus V(\Gamma)|=v-s$ and \[ \lambda_1=\frac{sk}{s}-\frac{1}{s}\sum_{a\in V(\Gamma)}\mbox{outdeg}_{\Gamma}(a)=k-\frac{s-1}{2}, \] where $\mbox{outdeg}_{\Gamma}(a)$ is the outdegree of $a\in V(\Gamma)$ in $\Gamma$. Furthermore, we have \[ \lambda_2=\lambda-\frac{2}{s(s-1)}\sum_{\{a,b\}\subseteq V(\Gamma)}\mbox{dom}_{\Gamma}(a,b), \] where the sum is over unordered pairs of distinct vertices and $\mbox{dom}_{\Gamma}(a,b)$ is the number of vertices of $\Gamma$ dominated by both $a$ and $b$. Since $\Gamma$ is transitive, we have $\sum_{\{a,b\}\subseteq V(\Gamma)}\mbox{dom}_{\Gamma}(a,b)={s\choose 3}$. Hence, $\lambda_2=\lambda-\frac{s-2}{3}$. \begin{lemma} Let $n_i=|\{q\in V(G)\setminus V(\Gamma):|N(q)\cap V(\Gamma)|=i\}|$, $0\le i\le s$. Then, for $j=0,1,2$, \[ \sum_{i=0}^s{i\choose j}n_i={s\choose j}\lambda_j. \] \end{lemma} {\vspace{-0.0cm}\bf Proof: \,} For $j=0,1,\ldots,s$, we count in two ways the number $N_j$ of ordered pairs $(q,D)$, where $q\in V(G)\setminus V(\Gamma)$ and $D$ is a $j$-subset of $N(q)\cap V(\Gamma)$. Each $j$-subset $D$ of $V(\Gamma)$ contributes exactly $\lambda_D$ pairs of the form $(-,D)$ to $N_j$. Hence, by the definition of $\lambda_j$, \[ N_j=\sum_{D\in {V(\Gamma) \choose j}}\lambda_D={s \choose j}\lambda_j. \] On the other hand, each $q\in V(G)\setminus V(\Gamma)$ contributes exactly ${|N(q)\cap V(\Gamma)|\choose j}$ pairs of the form $(q,-)$ to $N_j$.
Hence, by the definition of $n_i$, \[ N_j=\sum_{q\in V(G)\setminus V(\Gamma)}{|N(q)\cap V(\Gamma)|\choose j}=\sum_{i=0}^s{i \choose j}n_i. \] This completes the proof. { $\square$} \vspace{0.3cm} By the lemma above, the integers $n_0,\ldots,n_s$, $\lambda_0,\lambda_1,\lambda_2$ satisfy the assumption of Theorem~\ref{thm:bip}. Define \begin{align*} B(x)=&\,B(x,[0^{s+1}],[v-s,k-\frac{s-1}{2},\lambda-\frac{s-2}{3}])\\ =&\,\sum_{j=0}^2{2\choose j}P(-x,2-j)P(s,j)\lambda_j\\ =&\,x(x+1)(v-s)-2xs(k-(s-1)/2)+s(s-1)(\lambda-(s-2)/3). \end{align*} We now define the {\it adjacency polynomial} for doubly regular tournaments with $v=4m-1$ vertices: \[ C(x,y)=x(x+1)(4m-1-y)-2xy(2m-(y+1)/2)+y(y-1)(m-(y+1)/3). \] Then, we have the following result by Theorem~\ref{thm:bip}: \begin{proposition}\label{thm:bip2} Let $G$ be a doubly regular tournament with $v=4m-1$ vertices. If $G$ contains a transitive subtournament of size $s$, then $C(b,s)\ge 0$ for all integers $b$. \end{proposition} By this proposition, if $C(b,s)<0$ for some integer $b$, then $G$ cannot contain a transitive subtournament of size $s$. For example, put $y=3$ and $m=2$; then $C(x,3)=4(x-1)^2\ge 0$. On the other hand, we have $C(x,4)=3x^2-9x+4$ and $C(1,4)<0$. Hence, a doubly regular tournament with seven vertices cannot contain a transitive subtournament of size $4$. \subsection{Improved bound for doubly regular tournaments} Solving the equation $C(x,y)=0$ for $x$, we have \[ x=\frac{3 (-1 + 4 m - y) (-1 + y) \pm \sqrt{-3 (-1 + 4 m - y) (-1 + y) (-3 + 12 m - 4 y - y^2)}}{6 (-1 + 4 m - y)}. \] Since we may assume that $1<y<4m-1$, $C(x,y)$ is nonnegative for all $x$ if $3-12m+4y+y^2\le 0$, i.e., if $-2-\sqrt{1+12m}\le y\le -2+\sqrt{1+12m}$. Put \[ z_{m,y}:=\frac{\sqrt{-3 (-1 + 4 m - y) (-1 + y) (-3 + 12 m - 4 y - y^2)}}{6 (-1 + 4 m - y)}.
\] To find a bound on the size of transitive subtournaments, we need to find an integer $x$ in the interval $((-1 + y)/2 - z_{m,y},(-1+y)/2 + z_{m,y})$ for some integer $y> -2+\sqrt{1+12m}$. \begin{theorem} \label{thm:bip3} Let $G$ be a doubly regular tournament with $v=4m-1$ vertices and let $\Gamma$ be a transitive subtournament of size $s$ in $G$. Let $\epsilon:=\sqrt{1+12m}-\lfloor \sqrt{1+12m}\rfloor$. Then, the following hold. \begin{itemize} \item[(1)] $s\le -1+\lfloor \sqrt{1+12m}\rfloor$. \item[(2)] $s\le -2+\sqrt{1+12m}$ if $\epsilon=0$. \item[(3)] $s\le -2+\lfloor\sqrt{1+12m}\rfloor$ if $\epsilon\not=0$ and $-1+\lfloor \sqrt{1+12m}\rfloor$ is odd. \item[(4)] $s\le -2+\lfloor\sqrt{1+12m}\rfloor$ if $\epsilon\not=0$ and $\lfloor \sqrt{1+12m}\rfloor > (-1+\sqrt{1+48m})/2$. \end{itemize} \end{theorem} {\vspace{-0.0cm}\bf Proof: \,} (1) If $\lfloor\sqrt{1+12m}\rfloor$ is even, put $(x,y)=((\lfloor\sqrt{1+12m}\rfloor)/2,\lfloor\sqrt{1+12m}\rfloor)$. Then, \[ C(x,y)=-\frac{1}{12} (-\epsilon + \sqrt{1 + 12 m}) (3 - 3 \epsilon + \epsilon^2 + 3 \sqrt{1 + 12 m} - 2 \epsilon \sqrt{1 + 12 m})<0. \] If $\lfloor\sqrt{1+12m}\rfloor$ is odd, put $(x,y)=((\lfloor\sqrt{1+12m}\rfloor-1)/2,\lfloor\sqrt{1+12m}\rfloor)$. Then, \[ C(x,y)=\frac{1}{12} (-2 + \epsilon) (-1 - \epsilon + \sqrt{1 + 12 m}) (2 - \epsilon + 2 \sqrt{1 + 12 m})<0. \] By Proposition~\ref{thm:bip2}, the assertion follows. (2) Since $C((-1+\sqrt{1+12m})/2,-1+\sqrt{1+12m})=-m<0$, the assertion follows. (3) Put $y=-1+\lfloor\sqrt{1+12m}\rfloor$. Since $y>-2+\sqrt{1+12m}$ and $(-1+y)/2$ is an integer, we have $C((-1+y)/2,y)<0$. (4) If $2z_{m,y}> 1$, then we can find an integer $x$ in the interval $((-1 + y)/2 - z_{m,y},(-1+y)/2 + z_{m,y})$. The condition $2z_{m,y}> 1$ is equivalent to $2-12m+3y+y^2> 0$, i.e., $y> (-3+\sqrt{1+48m})/2$.
By substituting $y=-1+\lfloor\sqrt{1+12m}\rfloor$ into this inequality, we obtain $\lfloor \sqrt{1+12m}\rfloor > (-1+\sqrt{1+48m})/2$. The proof is now complete. { $\square$} \vspace{0.3cm} For example, for $v=27$ we have $\sqrt{1+12m}=9.21954\ldots$. In this case, the condition of Theorem~\ref{thm:bip3}~(4) is satisfied, and we have $s\le 7$. \begin{remark} Theorem~\ref{thm:bip3}~(3) is particularly important because it can improve the bound~\eqref{ineq:drt}. For example, in the case where $v=71$, Theorem~\ref{thm:bip3}~(3) gives $s\le 12$, while the bound~\eqref{ineq:drt} gives $s\le 13$. \end{remark} \section{Concluding remarks}\label{sec:6} In this paper, we gave upper bounds on the size of transitive subtournaments in a digraph. In particular, we obtained an analogue of Hoffman's bound for digraphs. Furthermore, we partially improved the Hoffman-type bound for doubly regular tournaments by applying block intersection polynomials, and thus found a new application of block intersection polynomials. Some interesting problems which are worth looking into as future work are listed below. \begin{itemize} \item We could not find any example of a digraph attaining equality in the upper bound~\eqref{ineq:1} of Theorem~\ref{thm:hoff1}. Find examples of such digraphs, or prove the nonexistence of such digraphs. \item Bachoc et al.~\cite{BMR} improved the Hoffman bound for the Paley graphs by using the properties of quadratic residues of finite fields. Find an analogue of Bachoc et al.'s result for the Paley tournaments. \item Hoffman~\cite{H} gave a bound on the chromatic number of a graph. In particular, he proved that for any graph $G$, $\chi(G)\ge 1-\lambda_{\mathrm{max}}/\lambda_{{\mathrm{min}}}$, where $\chi(G)$ is the chromatic number of $G$, and $\lambda_{{\mathrm{max}}}$ and $\lambda_{\mathrm{min}}$ are the maximum and minimum eigenvalues of the adjacency matrix of $G$, respectively. Find an analogue of this bound for digraphs.
Here, the chromatic number of a digraph is defined to be the minimum number of disjoint transitive subtournaments covering all vertices. \end{itemize} \end{document}
\begin{document} \title{On uniqueness of end sums and 1-handles at infinity} \author[J.~Calcut]{Jack S. Calcut} \address{Department of Mathematics\\ Oberlin College\\ Oberlin, OH 44074} \email{[email protected]} \urladdr{\href{http://www.oberlin.edu/faculty/jcalcut/}{\curl{http://www.oberlin.edu/faculty/jcalcut/}}} \author[R.~Gompf]{Robert E. Gompf} \address{The University of Texas at Austin\\ Mathematics Department RLM 8.100\\ Attn: Robert Gompf\\ 2515 Speedway Stop C1200\\ Austin, Texas 78712-1202} \email{[email protected]} \urladdr{\href{https://www.ma.utexas.edu/users/gompf/}{\curl{https://www.ma.utexas.edu/users/gompf/}}} \begin{abstract} For oriented manifolds of dimension at least 4 that are simply connected at infinity, it is known that end summing is a uniquely defined operation. Calcut and Haggerty showed that more complicated fundamental group behavior at infinity can lead to nonuniqueness. The present paper examines how and when uniqueness fails. Examples are given, in the categories \textsc{top}, \textsc{pl} and \textsc{diff}, of nonuniqueness that cannot be detected in a weaker category (including the homotopy category). In contrast, uniqueness is proved for Mittag-Leffler ends, and generalized to allow slides and cancellation of (possibly infinite) collections of 0- and 1-handles at infinity. Various applications are presented, including an analysis of how the monoid of smooth manifolds homeomorphic to ${\mathbb R}^4$ acts on the smoothings of any noncompact 4-manifold. \end{abstract} \maketitle \section{Introduction} Since the early days of topology, it has been useful to combine spaces by simple gluing operations. The connected sum operation for closed manifolds has roots in nineteenth century surface theory, and its cousin, the boundary sum of compact manifolds with boundary, is also classical. These two operations are well understood. 
In the oriented setting, for example, the connected sum of two connected manifolds is unique, as is the boundary sum of two manifolds with connected boundary. The boundary sum has an analogue for open manifolds, the \emph{end sum}, which has been used in various dimensions since the 1980s, but is less well known and understood. In contrast with boundary sums, end sums of one-ended oriented manifolds need not be uniquely determined, even up to proper homotopy \cite{CH}. The present paper explores uniqueness and its failure in more detail. To illustrate the subtlety of the issue, we present examples in various categories (homotopy, \textsc{top}, \textsc{pl}, and \textsc{diff}) where uniqueness fails, but the failure cannot be detected in weaker categories. In counterpoint, we find general hypotheses under which the operation {\em is} unique in all categories and apply this result to exotic smoothings of open 4-manifolds. Our results naturally belong in the broader context of {\em attaching handles at infinity}. We obtain general uniqueness results for attaching collections of 0- and 1-handles at infinity, generalizing handle sliding and cancellation. We conclude that end sums, and more generally, collections of handles at infinity with index at most one, can be controlled in broad circumstances, although deep questions remain. End sums are the natural analogue of boundary sums. To construct the latter, we choose codimension zero embeddings of a disk into the boundaries of the two summands, then use these to attach a 1-handle. For an end sum of open manifolds, we attach a 1-handle at infinity, guided by a properly embedded ray in each summand. Informally, we can think of the 1-handle at infinity as a piece of tape joining the two manifolds; see Definition~\ref{onehandles} for details. Boundary summing two compact manifolds then has the effect of end summing their interiors.
While this notion of end summing seems obvious, the authors have been unable to find explicit appearances of it before the second author's 1983 paper \cite{threeR4} and sequel \cite{infR4} on exotic smoothings of ${\mathbb R}^4$. However, the germ of the idea may be perceived in Mazur's 1959 paper \cite{M59} and Stallings' 1965 paper \cite{Stall65}. End summing was used in \cite{infR4} to construct infinitely many exotic smoothings of ${\mathbb R}^4$. The appendix of that paper showed that the operation is well-defined in that context, so is independent of choice of rays and their order (even for infinite sums). Since then, the second author and others have continued to use end summing with an exotic ${\mathbb R}^4$ for constructing many exotic smoothings on various open 4-manifolds, e.g., Taylor (1997) \cite[Theorem~6.4]{T}, Gompf (2017) \cite[Section~7]{MinGen}. The operation has also been subsequently used in other dimensions, for example by Ancel (unpublished) in the 1980s to study high-dimensional Davis manifolds, and by Tinsley and Wright (1997) \cite{TW97} and Myers (1999) \cite{My} to study 3-manifolds. In 2012, the first author, with King and Siebenmann, gave a somewhat general treatment \cite{CKS} of end sum (called {\em CSI, connected sum at infinity}, therein) in all dimensions and categories (\textsc{top}, \textsc{pl}, and \textsc{diff}). One corollary gave a classification of multiple hyperplanes in ${\mathbb R}^n$ for all $n\neq3$, which was recently used by Belegradek \cite{B14} to study certain interesting open aspherical manifolds. Most recently, Sparks (2018) \cite{Sp} used infinite end sums to construct uncountably many contractible topological 4-manifolds obtained by gluing two copies of ${\mathbb R}^4$ along a subset homeomorphic to ${\mathbb R}^4$. While \cite{infR4} showed that end sums are uniquely determined for oriented manifolds homeomorphic to ${\mathbb R}^4$, uniqueness fails in general for multiple reasons.
The most obvious layer of difficulty already occurs for the simpler operation of boundary summing. In that case, when a summand has disconnected boundary, we must specify which boundary component to use. For example, nondiffeomorphic boundary components can lead to boundary sums with nondiffeomorphic boundaries. We must also be careful to specify orientations --- a pair of disk bundles over $S^2$ with nonzero Euler numbers can be boundary summed in two different ways, distinguished by their signatures (0 or $\pm 2$). In general, we should specify an orientation on each orientable boundary component receiving a 1-handle. Similarly, for end sums and 1-handles at infinity, we must specify which ends of the summands we are using and an orientation on each such end (if orientable). Unlike boundary sums, however, end sums have a more subtle layer of nonuniqueness. One difficulty is specific to dimension 3: the rays in use can be knotted. Myers \cite{My} showed that uncountably many homeomorphism types of contractible manifolds can be obtained by end summing two copies of ${\mathbb R}^3$ along knotted rays. For this reason, the present paper focuses on dimensions above 3. However, another difficulty persists in high dimensions: rays determining a given end need not be properly homotopic. The first author and Haggerty \cite{CH} constructed examples of pairs of one-ended oriented $n$-manifolds ($n\ge4$) that can be summed in different ways, yielding manifolds that are not even properly homotopy equivalent. We explore this phenomenon more deeply in Section~\ref{Nonunique}. After sketching the key example of \cite{CH} in Example~\ref{CH}, we exhibit more subtle examples of nonuniqueness of end summing (and related constructions) on fixed oriented ends.
Examples~\ref{htpy} include topological 5-manifolds with properly homotopy equivalent but nonhomeomorphic end sums on the same pair of ends, and \textsc{pl} $n$-manifolds (for various $n\ge9$) whose end sums are properly homotopy equivalent but not \textsc{pl}-homeomorphic. Unlike other examples in this section, those in Examples~\ref{htpy} have extra ends or boundary components; the one-ended case seems more elusive. Examples~\ref{PL} provide end sums of smooth manifolds ($n\ge8$) that are \textsc{pl}-homeomorphic but not diffeomorphic. The analogous construction in dimension 4 gives smooth manifolds whose end sums are naturally identified in the topological category, but whose smoothings are not stably isotopic. Distinguishing their diffeomorphism types seems difficult. These failures of uniqueness arise from complicated fundamental group behavior at the relevant ends, contrasting with uniqueness associated with the simply connected end of ${\mathbb R}^4$. Section~\ref{Unique} examines more generally when ends are simple enough to guarantee uniqueness of end sums and 1-handle attaching. In dimensions 4 and up, it suffices for the end to satisfy the {\em Mittag-Leffler} condition (also called {\em semistability}), whose definition we recall in Section~\ref{Unique}. Ends that are simply connected or topologically collared are Mittag-Leffler; in fact, the condition can only fail when the end requires infinitely many $(n-1)$-handles in any topological handle decomposition (Proposition~\ref{Morse}). For example, Stein manifolds of complex dimension at least 2 have (unique) Mittag-Leffler ends. (See Corollaries~\ref{stein} and \ref{stein1h}, and Theorem~\ref{JB} for an application to 4-manifold smoothing theory.) The Mittag-Leffler condition is necessary and sufficient to guarantee that any two rays approaching the end are properly homotopic.
This fact traces back at least to Geoghegan in the 1980s, and appears to have been folklore since the preceding decade. (See also Edwards and Hastings \cite{EH76}, Mihalik \cite[Thm.~2.1]{Mi83}, and \cite{Ge08}). The first author and King worked out an algebraic classification of proper rays up to proper homotopy on an arbitrary end in 2002. This material was later excised from the 2012 published version of \cite{CKS} due to length considerations and since a similar proof had appeared in Geoghegan's text \cite{Ge08} in the meantime. The present paper gives a much simplified version of the proof, dealing only with the Mittag-Leffler case, in order to highlight the topology underlying the algebraic argument (Lemma~\ref{ML}). This lemma leads to a general statement (Theorem~\ref{main}) about attaching countable collections of 1-handles to an open manifold. The following theorem is a special case. \begin{thm}\label{introMain} Let $X$ be a (possibly disconnected) $n$-manifold, $n\ge4$. Then the result of attaching a (possibly infinite) collection of 1-handles at infinity to some oriented Mittag-Leffler ends of $X$ depends only on the pairs of ends to which each 1-handle is attached, and whether their orientations agree. \end{thm} \noindent Note that uniqueness of end sums along Mittag-Leffler ends (preserving orientations) is a special case. Theorem~\ref{main} also deals with ends that are nonorientable or not Mittag-Leffler. Theorem~\ref{main} has consequences for open 4-manifold smoothing theory, which we explore in Section~\ref{Smooth}. The theorem easily implies the result from \cite{infR4} that the oriented diffeomorphism types of 4-manifolds homeomorphic to ${\mathbb R}^4$ form a monoid $\mathcal R$ under end sum, allowing infinite sums that are independent of order and grouping.
This monoid acts on the set $\mathcal S(X)$ of smoothings (up to isotopy) of any given oriented 4-manifold $X$ with a Mittag-Leffler end, and more generally a product of copies of $\mathcal R$ acts on $\mathcal S(X)$ through any countable collection of Mittag-Leffler ends (see Corollary~\ref{R4}). One can also deal with arbitrary ends by keeping track of a family of proper homotopy classes of rays. Similarly, one can act on $\mathcal S(X)$ by summing with exotic smoothings of $S^3\times {\mathbb R}$ along properly embedded lines (Corollary~\ref{SxR}), or modify smoothings along properly embedded star-shaped graphs. While summing with a fixed exotic ${\mathbb R}^4$ is unique for an oriented (or nonorientable) Mittag-Leffler end, Section~\ref{Nonunique} suggests that there should be examples of nonuniqueness when the end of $X$ is not Mittag-Leffler. However, such examples seem elusive, prompting the following natural question. \begin{ques}\label{R4sum} Let $X$ be a smooth, one-ended, oriented 4-manifold. Can summing $X$ with a fixed exotic ${\mathbb R}^4$, preserving orientation, yield different diffeomorphism types depending on the choice of ray in $X$? \end{ques} \noindent We show (Proposition~\ref{subtle_prop}) that such examples would be quite difficult to detect. Having studied the uniqueness problem for adding $1$-handles at infinity, we progress in Section~\ref{slides} to uniqueness of adding collections of $0$- and $1$-handles at infinity (Theorem~\ref{maincancel}). It turns out that, when adding countably many handles of index $0$ and $1$, the noncompact case is simpler than for compact handle addition.
As an application of Theorem~\ref{maincancel}, we present (Theorem~\ref{hut}) a very natural and partly novel proof of the hyperplane unknotting theorem of Cantrell~\cite{C63} and Stallings~\cite{Stall65}: each proper embedding of ${\mathbb R}^{n-1}$ in ${\mathbb R}^n$, $n\geq 4$, is unknotted (in each category \textsc{diff}, \textsc{pl}, and \textsc{top}). An immediate corollary is the \textsc{top} Schoenflies theorem: the closures of the two complementary regions of a (locally flat) embedding of $S^{n-1}$ in $S^n$, $n\geq4$, are topological disks. Mazur's infinite swindle still lies at the heart of our proof of the hyperplane unknotting theorem. The novelty in our proof consists of the supporting framework of $0$- and $1$-handle additions, slides, and cancellations at infinity. Throughout the text, we take manifolds to be Hausdorff with countable basis, so with only countably many components. We allow boundary, and note that the theory is vacuous unless there is a noncompact component. {\em Open} manifolds are those with no boundary and no compact components. We work in a category \textsc{cat} that can be \textsc{diff}, \textsc{pl}, or \textsc{top}. For example, \textsc{diff} homeomorphisms are the same as diffeomorphisms. Embeddings (particularly with codimension zero) are not assumed to be proper. (Proper means the preimage of every compact set is compact.) In \textsc{pl} and \textsc{top}, embeddings are assumed to be locally flat (as is automatically true in \textsc{diff}). It follows that in each category, codimension-one two-sided embeddings in $\mathop{\rm Int}\nolimits X$ are bicollared (Brown \cite{Brown} in \textsc{top}; see Connelly \cite{Co71} for a simpler proof in both \textsc{top} and \textsc{pl}).
Furthermore, a \textsc{cat} proper embedding $\gamma\colon\thinspace Y{\hookrightarrow} X^n$ of a \textsc{cat} 1-manifold $Y$ with $b_1(Y)=0$ and $\gamma^{-1}(\partial X)=\emptyset$ extends to a \textsc{cat} proper embedding $\overline{\nu}\colon\thinspace Y\times D^{n-1}{\hookrightarrow} X^n$ whose boundary (after rounding corners in \textsc{diff}) is bicollared. (This is easy in \textsc{diff} and \textsc{pl}, and follows in \textsc{top} by a classical argument: Cover suitably by charts exhibiting $Y$ as locally flat, then stretch one chart consecutively through the others.) If we radially identify ${\mathbb R}^{n-1}$ with $\mathop{\rm Int}\nolimits D^{n-1}$, $\overline{\nu}$ determines an embedding $\nu\colon\thinspace Y\times{\mathbb R}^{n-1}{\hookrightarrow} X$. We call $\nu$ and $\overline{\nu}$ {\em tubular neighborhood maps}, and their images open (resp.\ closed) {\em tubular neighborhoods} of $Y$. Thus, an open tubular neighborhood extends to a closed tubular neighborhood by definition. \section{1-handles at infinity}\label{handles} We begin with our procedure for attaching 1-handles at infinity. \begin{de}\label{onehandles} A {\em multiray} in a \textsc{cat} $n$-manifold $X$ is a \textsc{cat} proper embedding $\gamma\colon\thinspace S\times [0,\infty){\hookrightarrow} X$, with $\gamma^{-1}(\partial X)=\emptyset$, for some discrete (so necessarily countable) set $S$ called the {\em index set} of $\gamma$. If the domain has a single component, $\gamma$ will be called a {\em ray}.
Given two multirays $\gamma^-,\gamma^+\colon\thinspace S\times [0,\infty){\hookrightarrow} X$ with disjoint images, choose tubular neighborhood maps $\nu^\pm\colon\thinspace S\times [0,\infty)\times{\mathbb R}^{n-1}{\hookrightarrow} X$ with disjoint images, and let $Z$ be the \textsc{cat} manifold obtained by gluing $S\times[0,1]\times{\mathbb R}^{n-1}$ to $X$ using identifications $\nu^\pm\circ(\mathop{\rm id}\nolimits_S\times\varphi^\pm\times\rho^\pm)$, where $\varphi^-\colon\thinspace[0,\frac12)\to[0,\infty)$ and $\varphi^+\colon\thinspace(\frac12,1]\to[0,\infty)$ and $\rho^\pm\colon\thinspace{\mathbb R}^{n-1}\to{\mathbb R}^{n-1}$ are diffeomorphisms, with $\rho^\pm$ chosen so that $\varphi^\pm\times\rho^\pm$ preserves orientation. Then $Z$ is obtained by {\em attaching 1-handles at infinity} to $X$ along $\gamma^-$ and $\gamma^+$ (see Figure~\ref{fig:onehandle}). \end{de} \begin{figure} \caption{Data for attaching $h$, a $1$-handle at infinity, to the $n$-manifold $X$ (left) and resulting $n$-manifold $Z$ (right).} \label{fig:onehandle} \end{figure} \noindent The case of handle attaching where $S$ is a single point and $X$ has two components that are connected by the 1-handle at infinity is called the {\em end sum} or {\em connected sum at infinity} in the literature. In general, we will see that $Z$ depends in a subtle way on the choice of images of $\gamma^\pm$ (Section~\ref{Nonunique}), but not on the parametrizations of their rays. It depends on the orientations locally induced by $\nu^\pm$, but is otherwise independent of the choices of maps $\nu^\pm$, $\varphi^\pm$ and $\rho^\pm$. (Independence follows from the stronger Theorem~\ref{main} when $n\ge4$, and by a similar method in lower dimensions.)
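As a simple illustration (a standard model case, included here only for orientation), take $X$ to be the disjoint union of two copies of ${\mathbb R}^n$ and $\gamma^\pm$ linear rays, one in each copy. Viewing each copy as the interior of an $n$-ball, and the 1-handle at infinity as the interior effect of a compact boundary 1-handle, the end sum satisfies
\[
{\mathbb R}^n\mathbin{\natural}\,{\mathbb R}^n\;\cong\;{\mathbb R}^n,
\]
since a boundary sum of two $n$-balls is again an $n$-ball. (Here $\natural$ temporarily denotes the end sum along these rays.) It is this triviality of the standard model that powers Mazur-style swindles.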
By reparametrizing the maps $\varphi^\pm$, we can change their domains to smaller neighborhoods of the endpoints of $[0,1]$ without changing $Z$, making it more obvious that attaching compact 1-handles to the boundary of a compact manifold has the effect of attaching handles at infinity to the interior. Yet another description of handle attaching at infinity is to remove the interiors of the closed tubular neighborhoods from $X$ and glue together the resulting ${\mathbb R}^{n-1}$ boundary components. Some articles (e.g.\ \cite{CKS}, \cite{Sp}) use this perspective for defining end sums. It can be useful to start more generally, with any countable collection of disjoint rays, allowing clustering (for example, to preserve an infinite group action \cite{groupR4}). However, this gains no actual generality, since we can transform such a collection to a multiray by suitably truncating the domains of the rays to achieve properness of the combined embedding. \begin{remark} Handles at infinity of higher index are also useful \cite{MfdQuot}, although additional subtleties arise. For example, a Casson handle $CH$ can be attached to an unknot in the boundary of a 4-ball $B$ so that the interior of the resulting smooth 4-manifold is not diffeomorphic to the interior of any compact manifold. However, $\mathop{\rm Int}\nolimits CH$ is diffeomorphic to ${\mathbb R}^4$, so we can interchange the roles of $\mathop{\rm Int}\nolimits CH$ and $\mathop{\rm Int}\nolimits B$, exhibiting the manifold as ${\mathbb R}^4$ with a 2-handle attached at infinity. The latter is attached along a properly embedded $S^1\times[0,\infty)$ in ${\mathbb R}^4$ that is topologically unknotted but smoothly knotted, and cannot be smoothly compactified to an annulus in the closed 4-ball. This proper annulus seems analogous to a knotted ray in a 3-manifold, but is more subtle since it is unknotted in \textsc{top}.
\end{remark} Variations on the above 1-handle construction are used in \cite{MinGen}. Let $X$ be a topological 4-manifold with a fixed smooth structure, and let $R$ be an exotic ${\mathbb R}^4$ (a smooth, oriented manifold homeomorphic but not diffeomorphic to ${\mathbb R}^4$). Choose a smooth ray in $X$, and homeomorphically identify a smooth, closed tubular neighborhood $N$ of it with the complement of a tubular neighborhood of a ray in $R$. Transporting the smooth structure from $R$ to $N$, where it fits together with the original one on $X-\mathop{\rm Int}\nolimits N$, we obtain a new smooth structure on $X$ diffeomorphic to an end sum of $X$ and $R$. The advantage of this description is that it fixes the underlying topological manifold, allowing us to assert, for example, that the two smooth structures are stably isotopic. Another variation from \cite{MinGen} is to sum a smooth structure with an exotic ${\mathbb R}\times S^3$ along a smooth, properly embedded line in each manifold, with one line topologically isotopic to ${\mathbb R}\times\{p\}\subset{\mathbb R}\times S^3$. (We order the factors this way instead of the more commonly used $S^3\times{\mathbb R}$ so that the obvious identification with ${\mathbb R}^4-\{0\}$ preserves orientation.) One can similarly change a smooth structure on a high-dimensional \textsc{pl} manifold by summing along a line with ${\mathbb R}\times\Sigma$ for some exotic sphere $\Sigma$. We exhibit these operations in Section~\ref{Smooth} as well-defined monoid actions on the set of isotopy classes of smoothings of a fixed topological manifold. One can also consider \textsc{cat} sums along lines in general. We discuss nonuniqueness of this latter operation in Section~\ref{Nonunique} as a prelude to discussing subtle end sums. There are several obvious sources of nonuniqueness for attaching 1-handles at infinity.
For attaching 1-handles in the compact setting, the result can depend both on orientations and on choices of boundary components. We will consider orientations in Section~\ref{Unique}, but now recall the noncompact analogue of the set of boundary components, the space of ends of a manifold (e.g., \cite{HR}). This only depends on the underlying \textsc{top} structure of a \textsc{cat} manifold $X$ (and generalizes to other spaces). A {\em neighborhood of infinity} in $X$ is the complement of a compact set, and a {\em neighborhood system of infinity} is a nested sequence $\{U_i|i\in{\mathbb Z}^+\}$ of neighborhoods of infinity with empty intersection, and with the closure of $U_{i+1}$ contained in $U_i$ for all $i\in{\mathbb Z}^+$. \begin{de} For a fixed neighborhood system $\{U_i\}$ of infinity, the {\em space of ends} of $X$ is given by $\mathcal E=\mathcal E (X)=\mathop{\lim_\leftarrow}\nolimits \pi_0(U_i)$. \end{de} \noindent That is, an end $\epsilon\in\mathcal E (X)$ is given by a sequence $V_1\supset V_2\supset V_3\supset\cdots$, where each $V_i$ is a component of $U_i$. For two different neighborhood systems of infinity for $X$, the resulting spaces $\mathcal E (X)$ can be canonically identified: The set is preserved when we pass to a subsequence, but any two neighborhood systems of infinity have interleaved subsequences. A {\em neighborhood} of the end $\epsilon$ is an open subset of $X$ containing one of the subsets $V_i$. This notion allows us to topologize the set $X\cup\mathcal E(X)$ so that $X$ is homeomorphically embedded as a dense open subset and $\mathcal E(X)$ is totally disconnected \cite{Fr}. (The new basis elements are the components of each $U_i$, augmented by the ends of which they are neighborhoods.) The resulting space is Hausdorff with a countable basis. If $X$ has only finitely many components, this space is compact, and called the {\em Freudenthal} or {\em end compactification} of $X$.
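A few standard computations (recorded here purely for illustration) may help fix the definition. With $M$ a closed, connected manifold and $C\subset S^n$ a Cantor set,
\[
\mathcal E({\mathbb R})\cong S^0,\qquad \mathcal E({\mathbb R}^n)\cong\{\ast\}\ (n\ge2),\qquad \mathcal E({\mathbb R}\times M)\cong S^0,\qquad \mathcal E(S^n-C)\cong C.
\]
In the third case, the components $(-\infty,-i)\times M$ and $(i,\infty)\times M$ of the neighborhoods of infinity $U_i=({\mathbb R}-[-i,i])\times M$ give defining sequences for the two ends, and in the last case the end compactification recovers $S^n$.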
In this case, $\mathcal E(X)$ is homeomorphic to a closed subset of a Cantor set. Ends can also be described using rays, most naturally if we allow the rays to be singular. We call a continuous, proper map $\gamma\colon\thinspace S\times[0,\infty)\to X$ ($S$ discrete and countable) a {\em singular multiray}, or a {\em singular ray} if $S$ is a single point. Every singular ray $\gamma$ in a manifold $X$ determines an end $\epsilon_\gamma\in\mathcal E(X)$. This is because $\gamma$ is proper, so every neighborhood $U$ of infinity in $X$ contains $\gamma([k,\infty))$ for sufficiently large $k$, and this image lies in a single component of $U$. In fact, an alternate definition of $\mathcal E(X)$ is as the set of equivalence classes of singular rays, where two such are considered equivalent if their restrictions to ${\mathbb Z}^+$ are properly homotopic. A singular multiray $\gamma\colon\thinspace S\times [0,\infty){\hookrightarrow} X$ then determines a function $\epsilon_\gamma\colon\thinspace S\to\mathcal E(X)$ that is preserved under proper homotopy of $\gamma$. Attaching 1-handles at infinity depends on these functions for $\gamma^-$ and $\gamma^+$, just as attaching compact 1-handles depends on choices of boundary components, with examples of the former easily obtained from the latter by removing boundary. We will find more subtle dependence on the defining multirays in the next section, but a weak condition preventing these subtleties in Section~\ref{Unique}. \section{Nonuniqueness}\label{Nonunique} We now investigate examples of nonuniqueness in the simplest setting. In each case, we begin with an open manifold $X$ with finitely many ends, and attach a single 1-handle at infinity, at a specified pair of ends. We assume the 1-handle respects a preassigned orientation on $X$.
For attaching 1-handles in the compact setting, this would be enough information to uniquely specify the result, but we demonstrate that uniqueness can still fail for a 1-handle at infinity. It was shown in \cite{CH} that even the proper homotopy type need not be uniquely determined; Example~\ref{CH} below sketches the simplest construction from that paper. Our subsequent examples are more subtle, having the same proper homotopy (or even $\textsc{cat}'$ homeomorphism) type but distinguished by their \textsc{cat} homeomorphism types. All of our examples necessarily have complicated fundamental group behavior at infinity, since Section~\ref{Unique} proves uniqueness when the fundamental group is suitably controlled. We obtain the required complexity by the following construction, which generalizes examples of \cite{CH}: \begin{de} For an oriented \textsc{cat} manifold $X$, let $\gamma^-,\gamma^+\colon\thinspace S\times [0,\infty){\hookrightarrow} X$ be multirays with disjoint images. {\em Ladder surgery} on $X$ along $\gamma^-$ and $\gamma^+$ is orientation-preserving surgery on the infinite family of 0-spheres given by $\{\gamma^-(s,n),\gamma^+(s,n)\}$ for each $s\in S$ and $n\in{\mathbb Z}^+$. That is, we find disjoint \textsc{cat} balls centered at the points $\gamma^\pm(s,n)$, remove the interiors of the balls, and glue each resulting pair of boundary spheres together by a reflection (so that the orientation of $X$ extends). \end{de} \noindent It is not hard to verify that the resulting oriented \textsc{cat} homeomorphism type only depends on the end functions $\epsilon_{\gamma^\pm}$ of the multirays; see Corollary~\ref{ladder} for details and a generalization to unoriented manifolds. If $X$ has two components $X_1$ and $X_2$, each with $k$ ends, any bijection from $\mathcal E(X_1)$ to $\mathcal E(X_2)$ determines a connected manifold with $k$ ends obtained by ladder surgery with $S=\mathcal E(X_1)$.
Such a manifold will be called a {\em ladder sum} of $X_1$ and $X_2$. For closed, connected, oriented $(n-1)$-manifolds $M$ and $N$, we let ${\mathbb L}(M,N)$ denote the ladder sum of the two-ended $n$-manifolds ${\mathbb R}\times M$ and ${\mathbb R}\times N$, for the bijection preserving the ends of ${\mathbb R}$. (This is a slight departure from \cite{CH}, which used the one-ended manifold $[0,\infty)$ in place of ${\mathbb R}$.) Note that any ladder surgery transforms its multirays $\gamma^\pm$ into infinite unions of circles, and surgery on all these circles (with any framings) results in the manifold obtained from $X$ by adding 1-handles at infinity along $\gamma^\pm$. (This is easily seen by interpreting the surgeries as attaching 1- and 2-handles to $I\times X$.) The examples in~\cite{CH} are naturally presented in terms of ladder sums and attaching 1-handles at infinity. They represent the simplest type of example where a single 1-handle may be attached at infinity in essentially distinct ways, namely an orientation-preserving end sum of one-ended manifolds. \begin{example}\label{CH} {\bf Homotopy inequivalent end sums (one-ended) \cite{CH}.} For a fixed prime $p>1$, let $E$ denote the ${\mathbb R}^2$-bundle over $S^2$ with Euler number $-p$ (so $E$ has a neighborhood of infinity diffeomorphic to ${\mathbb R}\times L(p,1)$). Let $Y$ be the ladder sum of $E$ and ${\mathbb R}^4$. We will attach a single 1-handle at infinity to the disjoint union $X=Y \sqcup E$ in two ways to produce distinct, one-ended, boundaryless manifolds $Z_0$ and $Z_1$. Let $\gamma_0$ and $\gamma_1$ be rays in $Y$, with $\gamma_0$ lying in the $E$ summand and $\gamma_1$ lying in the ${\mathbb R}^4$ summand. Let $\gamma$ be any ray in $E$, and let $Z_i$ be obtained from $X$ by attaching a 1-handle at infinity along $\gamma_i$ and $\gamma$. 
The manifolds $Z_0$ and $Z_1$ are not properly homotopy equivalent (in fact, their ends are not properly homotopy equivalent) since they have nonisomorphic cohomology algebras at infinity~\cite{CH}. The basic idea is that both manifolds $Z_i$ have obvious splittings as ladder sums. For $Z_0$, one summand is ${\mathbb R}^4$, so all cup products from $H^1(Z_0;{\mathbb Z}/p)\otimes H^2(Z_0;{\mathbb Z}/p)$ are supported in the other summand in a 1-dimensional subspace of $H^3(Z_0;{\mathbb Z}/p)$. However, $Z_1$ has cup products on both sides, spanning a 2-dimensional subspace. \end{example} Our remaining examples are pairs with the same homotopy type, distinguished by more subtle means. \begin{examples}[{\bf a}]\label{htpy} {\bf Homotopy equivalent but nonhomeomorphic sums.} It should not be surprising that the sum of two manifolds along a properly embedded line in each depends on more than just the ends and orientations involved. However, as a warm-up for end sums, we give an explicit example in \textsc{top} where moving one line changes the resulting homeomorphism type but not its proper homotopy type. Let $P$ and $Q$, respectively, denote ${\mathbb C} P^2$ and Freedman's fake ${\mathbb C} P^2$ (e.g.\ \cite{FQ}). Then there is a homotopy equivalence between $P$ and $Q$, restricting to a pairwise homotopy equivalence between the complements of a ball interior in each. But $P$ and $Q$ cannot be homeomorphic since $Q$ is unsmoothable. The ladder sum ${\mathbb L}(P,Q)$ is an unsmoothable topological 5-manifold with two ends. The lines ${\mathbb R}\times\{p\}\subset {\mathbb R}\times P$ and ${\mathbb R}\times\{q\}\subset{\mathbb R}\times Q$ can be chosen to lie in ${\mathbb L}(P,Q)$, with each spanning the two ends of ${\mathbb L}(P,Q)$, but they are dual to two different elements of $H^4({\mathbb L}(P,Q);{\mathbb Z}/2)$ (cf.\ \cite{CH}), with ${\mathbb R}\times\{q\}$ dual to the Kirby-Siebenmann smoothing obstruction of ${\mathbb L}(P,Q)$. 
Clearly, there is a proper homotopy equivalence of ${\mathbb L}(P,Q)$ interchanging the two lines. Thus, the two resulting ways to sum ${\mathbb L}(P,Q)$ along a line with ${\mathbb R}\times\overline Q$ (where the orientation on $Q$ is reversed for later convenience) give properly homotopy equivalent manifolds, namely ${\mathbb L}(\overline Q\#P,Q)$ and ${\mathbb L}(P,Q\#\overline Q)={\mathbb L}(P,P\#\overline P)$. (The last equality follows from Freedman's classification of simply connected topological 4-manifolds \cite{FQ}.) These two manifolds cannot be homeomorphic, since the latter is a smooth manifold whereas the former is unsmoothable, with Kirby-Siebenmann obstruction dual to a pair of lines running along opposite sides of the ladder. (A discussion of the cohomology of such manifolds can be found in \cite{CH}, but more simply, there are subsets $(a,b)\times Q$ on which the Kirby-Siebenmann obstruction must evaluate nontrivially.) \noindent {\bf (b) Homotopy equivalent but nonhomeomorphic end sums.} We adapt the previous example to end sums. Instead of summing along a line, we end sum ${\mathbb L}(P,Q)$ with ${\mathbb R}\times\overline Q$ along their positive ends in two different ways (using rays obtained from the positive ends of the previous lines). We obtain a pair of properly homotopy equivalent, unsmoothable, three-ended manifolds. In one case, the modified end has a neighborhood that is smoothable, and in the other case, all three ends fail to have smoothable neighborhoods since the Kirby-Siebenmann obstruction cannot be avoided. Thus, we have a pair of nonhomeomorphic, but properly homotopy equivalent, manifolds, both obtained by an orientation-preserving end sum on the same pair of ends. There are several other variations of the construction. We can replace the ${\mathbb R}$ factor by $[0,\infty)$ so that the ladder sum is one-ended, to get an example of nonuniqueness of summing one-ended topological manifolds with compact boundary.
Unfortunately, we cannot cap off the boundaries to obtain one-ended open manifolds, since the Kirby-Siebenmann obstruction is a cobordism invariant of topological 4-manifolds. However, we can modify the original ladder sum so that we do ladder surgery on the positive end, but end~sum on the negative end (which then has a neighborhood homeomorphic to ${\mathbb R}\times(\overline P\#\overline Q)$). Now we have a connected, two-ended open manifold whose ends can be joined by an orientation-preserving 1-handle at infinity in two different ways, yielding properly homotopy equivalent but nonhomeomorphic one-ended manifolds, only one of which has a smoothable neighborhood of infinity. \noindent {\bf (c) Homotopy equivalent but not PL homeomorphic end sums.} In higher dimensions, the Kirby-Siebenmann obstruction of a neighborhood $V$ of an end cannot be killed by adding 1-handles at infinity (since $H^4(V;{\mathbb Z}/2)$ is not disturbed), but we can do the analogous construction using higher smoothing obstructions. This time, we obtain \textsc{pl} $n$-manifolds (for various $n\ge9$) that are properly homotopy equivalent but not \textsc{pl} homeomorphic. Let $P$ and $Q$ be homotopy equivalent \textsc{pl} $(n-1)$-manifolds with $P$ and $Q-\{q_0\}$ smooth but $Q$ unsmoothable. (For an explicit 24-dimensional pair, see Anderson \cite[Proposition~5.1]{A}.) The previous discussion applies almost verbatim with \textsc{pl} in place of \textsc{top}, with the smoothing obstruction in $H^{n-1}(X;\Theta_{n-2})$ for \textsc{pl} manifolds $X$ in place of the Kirby-Siebenmann obstruction. The one change is that smoothability of $Q\#\overline Q$ follows since it is the double of the smooth manifold obtained from $Q$ by removing the interior of a \textsc{pl} ball centered at $q_0$. (This time the orientation reversal is necessary since the smoothing obstruction need not have order 2.) 
\end{examples} \begin{examples}[{\bf a}]\label{PL} {\bf PL homeomorphic but nondiffeomorphic end sums (one-ended).} A similar construction shows that end summing along a fixed pair of ends can produce \textsc{pl} homeomorphic but nondiffeomorphic manifolds. Let $\Sigma$ be an exotic $(n-1)$-sphere with $n>5$. Then $\Sigma$ is \textsc{pl} homeomorphic to $S^{n-1}$, so the ladder sum ${\mathbb L}(\Sigma,S^{n-1})$ is a two-ended smooth manifold with a \textsc{pl} self-homeomorphism that is not isotopic to a diffeomorphism. Since $\Sigma\#\overline\Sigma=S^{n-1}$, summing ${\mathbb L}(\Sigma,S^{n-1})$ along a line with ${\mathbb R}\times\overline\Sigma$ gives the two manifolds ${\mathbb L}(S^{n-1},S^{n-1})$ and ${\mathbb L}(\Sigma,\overline\Sigma)$. The first of these bounds an infinite handlebody made with 0- and 1-handles, as does its universal cover. Since a contractible 1-handlebody is a ball with some boundary points removed, it follows that the universal cover of ${\mathbb L}(S^{n-1},S^{n-1})$ embeds in $S^n$. However, ${\mathbb L}(\Sigma,\overline\Sigma)$ contains copies of $\Sigma$ arbitrarily close to its ends. Since any homotopy $(n-1)$-sphere ($n>5$) that embeds in $S^n$ cuts out a ball, and hence is standard, it follows that no neighborhood of either end of ${\mathbb L}(\Sigma,\overline\Sigma)$ has a cover embedding in $S^n$. Thus, the two manifolds have nondiffeomorphic ends, although they are \textsc{pl} homeomorphic. As before, we can modify this example to get a pair of end sums of two-ended manifolds, or a pair obtained from a two-ended connected manifold by joining its ends with a 1-handle in two different ways. This time, however, we can also interpret the example as end summing two one-ended open manifolds, by first obtaining one-ended manifolds with compact boundary, then capping off the boundary. (Note that $\Sigma$ bounds a compact manifold. Unlike codimension-0 smoothing existence obstructions, the uniqueness obstructions are not cobordism invariants.)
The resulting pair of one-ended \textsc{diff} manifolds are now easily seen to be \textsc{pl} homeomorphic (by Corollary~\ref{spherecollar}, for example) but nondiffeomorphic. \noindent {\bf (b) Nonisotopic DIFF=PL structures on a fixed TOP 4-manifold (one-ended).} The previous construction has an analogue in dimension 4, where the categories \textsc{diff} and \textsc{pl} coincide. Replace ${\mathbb R}\times\Sigma$ by $W$, Freedman's exotic ${\mathbb R}\times S^3$. This is distinguished from the standard ${\mathbb R}\times S^3$ by the classical \textsc{pl} uniqueness obstruction in $H^3({\mathbb R}\times S^3;{\mathbb Z}/2)\cong{\mathbb Z}/2$, dual to ${\mathbb R}\times\{p\}$. The ladder sum $L$ of $W$ with ${\mathbb R}\times S^3$ can be summed along a line with $W$ in two obvious ways. These can be interpreted as smoothings on the underlying topological manifold ${\mathbb L}(S^3,S^3)$, and can be transformed to an example of end summing one-ended \textsc{diff} manifolds as before: To transform $W$ into a one-ended \textsc{diff} manifold, cut it in half along a Poincar\'e homology sphere $\Sigma$, then cap it with an $E_8$-plumbing. The result $E$ is a smoothing of a punctured Freedman $E_8$-manifold. (Alternatively, we can take $E$ homeomorphic to a punctured fake ${\mathbb C} P^2$.) We ladder sum with ${\mathbb R}^4$. The two results of end summing with another copy of $E$ are identified in \textsc{top} with a ladder sum of two copies of $E$ (cf.\ Corollary~\ref{spherecollar}). The smoothings are nonisotopic (even stably, i.e., after Cartesian product with ${\mathbb R}^k$), since the uniqueness obstruction by which they differ near infinity is dual to a pair of lines on opposite sides of the ladder. However, the authors have not been able to distinguish their diffeomorphism types.
The problem with the previous argument is that the sum of two copies of $W$ along a line is not diffeomorphic to ${\mathbb R}\times S^3$ (although the classical invariant vanishes). While $W$ contains a copy of $\Sigma$ separating its ends, so cannot embed in $S^4$, the sum of two copies of $W$ contains $\Sigma\#\Sigma$, which also does not embed in $S^4$. The effect of summing with reversed orientation or switched ends, or replacing $\Sigma$ by a different homology sphere, is less clear. This leads to the following question, which is discussed further in Section~\ref{Smooth} (Question~\ref{inverses2}). \begin{ques}\label{inverses} Are there two exotic smoothings on ${\mathbb R}\times S^3$ whose sum along a line is the standard ${\mathbb R}\times S^3$? \end{ques} \noindent If such smoothings exist, one of which has the additional property that every neighborhood of one end has a slice $(a,b)\times S^3$ (as seen in \textsc{top}) that cannot smoothly embed in $S^4$, then the method of (a) gives two one-ended open 4-manifolds that can be end summed in two homeomorphic but not diffeomorphic (or \textsc{pl} homeomorphic) ways. \end{examples} \section{Uniqueness for Mittag-Leffler ends}\label{Unique} Having examined the failure of uniqueness in the last section, we now look for hypotheses that guarantee that 1-handle attaching at infinity {\em is} unique. There are several separate issues to deal with. In the compact setting, attaching a 1-handle to given boundary components can yield two different results if both boundary components are orientable, so uniqueness requires specified orientations in that case. The same issue arises for 1-handles at infinity. Beyond that, we must consider the dependence on the involved multirays. Since rays in ${\mathbb R}^3$ can be knotted, uncountably many homeomorphism types of contractible manifolds arise as end sums of two copies of ${\mathbb R}^3$ \cite{My}. (See also \cite{CH}.)
Thus, we assume more than 3 dimensions and conclude, not surprisingly, that the multirays affect the result only through their proper homotopy classes, and that the choices of (suitably oriented) tubular neighborhood maps cause no additional difficulties. We have already seen that different rays determining the same end can yield different results for end summing with another fixed manifold and ray, but we give a weak group-theoretic condition on an end that entirely eliminates dependence on the choice of rays limiting to it. We begin with terminology for orientations. We will call an end $\epsilon$ of an $n$-manifold $X$ {\em orientable} if it has an orientable neighborhood in $X$. An orientation on one connected, orientable neighborhood of $\epsilon$ determines an orientation on every other such neighborhood, through the component of their intersection that is a neighborhood of $\epsilon$. Such a compatible choice of orientations will be called an {\em orientation of $\epsilon$}, so every orientable end has two orientations. We let $\mathcal E^o\subset\mathcal E(X)$ denote the open subset of orientable ends of $X$. (This need not be closed, as seen by deleting a sequence of points of $X$ converging to a nonorientable end.) If $\gamma$ is a singular multiray in a \textsc{diff} manifold $X$, the tangent bundle of $X$ pulls back to a trivial bundle $\gamma^*TX$ over $S\times[0,\infty)$. A fiber orientation on this bundle will be called a {\em local orientation of $X$ along $\gamma$}, and if such an orientation is specified, $\gamma$ will be called {\em locally orienting}. We apply the same terminology in \textsc{pl} and \textsc{top}, using the appropriate analogue of the tangent bundle, or equivalently but more simply, using local homology groups $H_n(X,X-\{\gamma(s,t)\})\cong{\mathbb Z}$.
If $\gamma$ is a (nonsingular) \textsc{cat} multiray, a \textsc{cat} tubular neighborhood map $\nu$ induces a local orientation of $X$ along $\gamma$; if this agrees with a preassigned local orientation along $\gamma$, $\nu$ will be called {\em orientation preserving}. A homotopy between two singular multirays determines a correspondence between their local orientations (e.g., by pulling back the tangent bundle to the domain of the homotopy). If a singular ray $\gamma$ determines an orientable end $\epsilon_\gamma\in\mathcal E^o$, then a local orientation along $\gamma$ induces an orientation on the end, since $\gamma([k,\infty))$ lies in a connected, orientable neighborhood of $\epsilon_\gamma$ when $k$ is sufficiently large. We now turn to the group theory of ends. See Geoghegan \cite{Ge08} for a more detailed treatment. An {\em inverse sequence of groups} is a sequence $G_1\leftarrow G_2\leftarrow G_3\leftarrow\cdots$ of groups and homomorphisms. We suppress the homomorphisms from the notation, since they will be induced by obvious inclusions in our applications. A {\em subsequence} of an inverse sequence is another inverse sequence obtained by passing to a subsequence of the groups and using the obvious composites of homomorphisms. Passing to a subsequence and its inverse procedure, along with isomorphisms commuting with the maps, generate the standard notion of equivalence of inverse sequences. \begin{de}\label{ml} An inverse sequence $G_1\leftarrow G_2\leftarrow G_3\leftarrow\cdots$ of groups is called {\em Mittag-Leffler} (or {\em semistable}) if for each $i\in{\mathbb Z}^+$ there is a $j\ge i$ such that all $G_k$ with $k\ge j$ have the same image in $G_i$. \end{de} \noindent Clearly, a subsequence is Mittag-Leffler if and only if the original sequence is, so the notion is preserved by equivalences. After passing to a subsequence, we may assume $j=i+1$ in the definition.
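Two instructive special cases of Definition~\ref{ml} may help fix the notion. Any inverse sequence whose bonding maps are all surjective is Mittag-Leffler, since we may take $j=i$. In contrast, the sequence
\[
{\mathbb Z}\xleftarrow{\;\times 2\;}{\mathbb Z}\xleftarrow{\;\times 2\;}{\mathbb Z}\xleftarrow{\;\times 2\;}\cdots
\]
is not Mittag-Leffler: the image of $G_k$ in $G_i$ is $2^{k-i}{\mathbb Z}$, which strictly decreases as $k$ grows, so no choice of $j$ stabilizes the images in $G_i$.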
For a manifold $X$ with a singular ray $\gamma$ and a neighborhood system $\{U_i\}$ of infinity, we reparametrize $\gamma$ so that $\gamma([i,\infty))$ lies in $U_i$ for each $i\in{\mathbb Z}^+$. \begin{de} The {\em fundamental progroup} of $X$ based at $\gamma$ is the inverse sequence of groups $\pi_1(U_i,\gamma(i))$, where the homomorphism $\pi_1(U_{i+1},\gamma(i+1))\to \pi_1(U_i,\gamma(i))$ is the inclusion-induced map to $\pi_1(U_i,\gamma(i+1))$ followed by the isomorphism moving the base point to $\gamma(i)$ along the path $\gamma|[i,i+1]$. \end{de} \noindent This only depends on the \textsc{top} structure of $X$. Passing to a subsequence of $\{U_i\}$ replaces the fundamental progroup by a subsequence of it. Since any two neighborhood systems of infinity have interleaved subsequences, the fundamental progroup is independent, up to equivalence, of the choice of neighborhood system. It is routine to check that it is similarly preserved by any proper homotopy of $\gamma$, so it only depends on $X$ and the proper homotopy class of $\gamma$. Furthermore, the inverse sequence is unchanged if we replace each $U_i$ by its connected component containing $\gamma([i,\infty))$, so it is equivalent to use a neighborhood system of the end $\epsilon_\gamma$. Beware, however, that even if there is only one end, the choice of proper homotopy class of $\gamma$ can affect the fundamental progroup, and even whether its inverse limit vanishes. (See \cite[Example 16.2.4]{Ge08}. The homomorphisms in the example are injective, but changing $\gamma$ conjugates the resulting nested subgroups, changing their intersection.) We call the pair $(X,\gamma)$ {\em Mittag-Leffler} if its fundamental progroup is Mittag-Leffler. 
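As a simple illustration, take $X={\mathbb R}^n$ with $n\ge3$, let $\gamma$ be a radial ray, and let $U_i$ be the complement of the closed ball of radius $i$. Each $U_i$ is homotopy equivalent to $S^{n-1}$ and hence simply connected, so the fundamental progroup is the constant sequence
\[
1\longleftarrow 1\longleftarrow 1\longleftarrow\cdots,
\]
and the pair $({\mathbb R}^n,\gamma)$ is Mittag-Leffler. (For $n=2$ each group is instead ${\mathbb Z}$ with identity bonding maps, which is again Mittag-Leffler.)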
We will see in Lemma~\ref{ML}(a) below that this condition implies $\gamma$ is determined up to proper homotopy by its induced end $\epsilon_\gamma$, so the fundamental progroup of $\epsilon_\gamma$ is independent of $\gamma$ in this case, and it makes sense to call $\epsilon_\gamma$ a {\em Mittag-Leffler end}. Note that this condition rules out ends made by ladder surgery, and hence the examples of Section~\ref{Nonunique}. We will denote the set of Mittag-Leffler ends of $X$ by $\mathcal E^{\rm ML}\subset\mathcal E(X)$, and its complement by $\mathcal E^{\rm bad}$. Many important types of ends are Mittag-Leffler. {\em Simply connected} ends are (essentially by definition) the special case for which the given images all vanish. {\em Topologically collared} ends, with a neighborhood homeomorphic to ${\mathbb R}\times M$ for some compact $(n-1)$-manifold $M$, are {\em stable}, the special case for which the fundamental progroup is equivalent to an inverse sequence with all maps isomorphisms. Other important ends are neither simply connected nor collared, but still Mittag-Leffler if the maps are nontrivial surjections (Example~\ref{surj}). Any end admits a neighborhood system for which the maps are not even surjective, obtained from an arbitrary system by adding 1-handles to each $U_i$ inside $U_{i-1}$; such ends may still be Mittag-Leffler. In the smooth category, we can analyze ends using a Morse function $\varphi$ that is exhausting (i.e., proper and bounded below). For such a function, the preimages $\varphi^{-1}(i,\infty)$ for $i\in{\mathbb Z}^+$ form a neighborhood system of infinity. \begin{prop}\label{Morse} Let $X$ be a \textsc{diff} open $n$-manifold. If an end $\epsilon$ of $X$ is not Mittag-Leffler, then for every exhausting Morse function $\varphi$ on $X$ and every $t\in{\mathbb R}$, there are infinitely many critical points of index $n-1$ in the component of $\varphi^{-1}(t,\infty)$ containing $\epsilon$.
In particular, if $X$ admits an exhausting Morse function with only finitely many index-$(n-1)$ critical points, then all of its ends are Mittag-Leffler. \end{prop} \begin{proof} After perturbing $\varphi$ and composing it with an orientation-preserving diffeomorphism of ${\mathbb R}$, we can assume each $\varphi^{-1}[i,i+1]$ is an elementary cobordism. Since $\epsilon$ is not Mittag-Leffler, its corresponding fundamental progroup must have infinitely many homomorphisms that are not surjective. Thus, there are infinitely many values of $i$ for which $\varphi^{-1}[i,\infty)$ is made from $\varphi^{-1}[i+1,\infty)$ by attaching a 1-handle with at least one foot in the component of the latter containing $\epsilon$. This handle corresponds to an index-1 critical point of $-\varphi$, or an index-$(n-1)$ critical point of $\varphi$. \end{proof} The Mittag-Leffler condition on an end of a \textsc{cat} manifold is determined by its underlying \textsc{top} structure (in fact, by its proper homotopy type), so we are free to change the smooth structure on a manifold before looking for a suitable Morse function. This is especially useful in dimension 4. For example, an exhausting Morse function on an exotic ${\mathbb R}^4$ with nonzero Taylor invariant must have infinitely many index-3 critical points \cite{T}, but after passing to the standard structure, there is such a function with a unique critical point. (Furthermore, an exotic ${\mathbb R}^4$ is topologically collared and simply connected at infinity.) Proposition~\ref{Morse} is most generally stated in \textsc{top}, using topological Morse functions. (These are well-behaved \cite{KS} and can be constructed from handle decompositions, which exist on all open \textsc{top} manifolds, e.g.\ \cite{FQ}.)
Since every Stein manifold of complex dimension $m$ (real dimension $2m$) has an exhausting Morse function with indices at most $m$, we conclude: \begin{cor}\label{stein} For every Stein manifold of complex dimension at least 2, the unique end of each component is Mittag-Leffler. \qed \end{cor} \begin{example}\label{surj} For infinite-type Stein surfaces ($m=2$), the ends must be Mittag-Leffler, but they are typically neither simply connected nor stable (hence, not topologically collared). This is more generally typical for open 4-manifolds whose exhausting Morse functions require infinitely many critical points, but none of index above 2. As a simple example, let $X$ be an infinite end sum of ${\mathbb R}^2$-bundles over $S^2$. (Its diffeomorphism type is independent of the choice of rays, by Theorems~\ref{main} and~\ref{maincancel}, but it is convenient to think of the bundles as indexed by ${\mathbb Z}^+$ and summed consecutively.) If each Euler number is less than $-1$, then $X$ will be Stein. We get a neighborhood system of infinity with each $U_i$ obtained from a collar of the end of the first $i$-fold sum by attaching the remaining (simply connected) summands. Then each group $G_i$ is a free product of $i$ cyclic groups, and each homomorphism is surjective, projecting out one factor. The inverse limit is not finitely generated, so the end is not stable. (Every neighborhood system of the end has a subsequence that can be interleaved by some of our neighborhoods $U_i$.) \end{example} We can now state our main theorem on uniqueness of attaching 1-handles. Its primary conclusion is that when we attach 1-handles at infinity, any locally orienting defining ray that determines a Mittag-Leffler end will affect the outcome only through the end and local orientation it determines. If the end is also nonorientable, then even the local orientation has no influence (as for a compact 1-handle attached to a nonorientable boundary component).
To state this in full generality, we also allow rays determining ends that are not Mittag-Leffler, which are required to remain in a fixed proper homotopy class. That is, we allow an arbitrary multiray $\gamma$, but require its restriction to the subset $\epsilon^{-1}_{\gamma}(\mathcal E^{\rm bad})$ of the index set $S$ (corresponding to rays determining ends that are not Mittag-Leffler) to lie in a fixed proper homotopy class. For each 1-handle with at least one defining ray determining a nonorientable Mittag-Leffler end, no further constraint is necessary, but otherwise we keep track of orientations. We do this through orientations of the end if they exist. In the remaining case, the end is not Mittag-Leffler, and we compare the local orientations of the rays through a proper homotopy. More precisely, we have: \begin{thm}\label{main} For a \textsc{cat} $n$-manifold $X$ with $n\ge4$, discrete $S$ and $i=0,1$, let $\gamma^-_i,\gamma^+_i\colon\thinspace S\times [0,\infty){\hookrightarrow} X$ be locally orienting \textsc{cat} multirays whose images (for each fixed $i$) are disjoint, and whose end functions $\epsilon_{\gamma^\pm_i}\colon\thinspace S\to \mathcal E(X)$ are independent of $i$. Suppose that \begin{itemize} \item[(a)] after $\gamma^-_0$ and $\gamma^-_1$ are restricted to the index subset $\epsilon^{-1}_{\gamma^-_0}(\mathcal E^{\rm bad})$, there is a proper homotopy between them. \item[(b)] for each $s\in \epsilon^{-1}_{\gamma^-_0}(\mathcal E^{\rm bad}\cup\mathcal E^o)\cap \epsilon^{-1}_{\gamma^+_0}(\mathcal E^{\rm bad}\cup\mathcal E^o)$, the local orientations of the corresponding rays in $\gamma^-_0$ and $\gamma^-_1$ induce the same orientation of the end if there is one, and otherwise correspond under the proper homotopy of (a). \item[(c)] the two analogous conditions apply to $\gamma^+_i$. \end{itemize} Let $Z_i$ be the result of attaching 1-handles to $X$ along $\gamma^\pm_i$ (for any choice of orientation-preserving tubular neighborhood maps $\nu^\pm_i$).
Then there is a \textsc{cat} homeomorphism from $Z_0$ to $Z_1$ sending the submanifold $X$ onto itself by a \textsc{cat} homeomorphism \textsc{cat} ambiently isotopic in $X$ to the identity map. \end{thm} It follows that 1-handle attaching is not affected by reparametrization of the rays (a proper homotopy), or by changing the auxiliary diffeomorphisms $\varphi^\pm$ and $\rho^\pm$ occurring in Definition~\ref{onehandles} (which only results in changing the parametrization and tubular neighborhood maps, respectively). \begin{cor}\label{maincor} For an oriented \textsc{cat} $n$-manifold $X$ with $n\ge4$, every countable multiset of (unordered) pairs of Mittag-Leffler ends canonically determines a \textsc{cat} manifold obtained from $X$ by attaching 1-handles at infinity to those pairs of ends, respecting the orientation. \qed \end{cor} Since the end of ${\mathbb R}^n$ is Mittag-Leffler, we immediately obtain cancellation of 0/1-handle pairs at infinity: \begin{cor}\label{cancel} For $n\ge4$, every end sum of a \textsc{cat} $n$-manifold $X$ with ${\mathbb R}^n$ (or countably many copies of ${\mathbb R}^n$) is \textsc{cat} homeomorphic to $X$. \qed \end{cor} \noindent See Section~\ref{slides} for further discussion of 0-handles at infinity. This corollary shows that end summing with an exotic ${\mathbb R}^4$ doesn't change the homeomorphism type of a smooth 4-manifold (although it typically changes its diffeomorphism type); cf.\ Section~\ref{Smooth}. It also shows: \begin{cor}\label{spherecollar} Suppose $X_0$ and $X_1$ are connected, oriented \textsc{cat} $n$-manifolds with $n\ge4$, and that $X_0$ has an end $\epsilon$ that is \textsc{cat} collared by $S^{n-1}$. Then all manifolds obtained as the oriented end sum of $X_0$ with $X_1$ at the end $\epsilon$ are \textsc{cat} homeomorphic. \end{cor} \begin{proof} Write $X_0$ as a connected sum $X\#{\mathbb R}^n$. Then any such end sum is $X\# X_1$.
\end{proof} The following corollary shows that 1-handles at infinity respect Stein structures. This will be applied to 4-manifold smoothing theory in Theorem~\ref{JB}. \begin{cor}\label{stein1h} Every manifold $Z$ obtained from a Stein manifold $X$ by attaching 1-handles at infinity, respecting the complex orientation, admits a Stein structure. The resulting almost-complex structure on $Z$ can be assumed to restrict to the given one on $X$, up to homotopy. \end{cor} \begin{proof} Since every open, oriented surface has a Stein structure and a contractible space of almost complex structures, we assume $X$ has real dimension $2m\ge4$. Since $X$ is Stein, it has an exhausting Morse function with indices at most $m$. It can then be described as the interior of a smooth (self-indexed) handlebody whose handles have index at most $m$. This is well-known when there are only finitely many critical points. A proof of the infinite case is given in the appendix of \cite{yfest}, which also shows that when $m=2$ one can preserve the extra framing condition that arises for 2-handles, encoding the given almost-complex structure. By Corollaries~\ref{stein} and \ref{maincor}, we can realize the 1-handles at infinity by attaching compact handles to the handlebody before passing to the interior (and after adding infinitely many canceling 0-1 pairs if necessary to accommodate infinitely many new 1-handles, avoiding compactness issues). Now we can convert the handlebody interior back into a Stein manifold by Eliashberg's Theorem; see \cite{CE}. The almost-complex structures then correspond by construction. \end{proof} The proof of Theorem~\ref{main} follows from two lemmas. The first guarantees that (a) Mittag-Leffler ends are well-defined and (b) singular multirays with a given Mittag-Leffler end function are unique up to proper homotopy.
\begin{lem}\label{ML}
\begin{itemize}
\item[(a)] If $(X,\gamma)$ is a Mittag-Leffler pair, then every singular ray determining the same end as $\gamma$ is properly homotopic to $\gamma$. In particular, the Mittag-Leffler condition for ends is independent of choice of singular ray, so the subset $\mathcal E^{\rm ML}\subset\mathcal E$ is well-defined.
\item[(b)] Let $\gamma_0,\gamma_1\colon\thinspace S\times [0,\infty){\hookrightarrow} X$ be locally orienting singular multirays with the same end function. Suppose that this function $\epsilon_{\gamma_0}=\epsilon_{\gamma_1}$ has image in $\mathcal E^{\rm ML}$, and that for each $s$ with $\epsilon_{\gamma_0}(s)\in\mathcal E^o$, the corresponding locally orienting singular rays of $\gamma_0$ and $\gamma_1$ induce the same orientation (depending on $s$) of the end $\epsilon_{\gamma_0}(s)$. Then there is a proper homotopy from $\gamma_0$ to $\gamma_1$, respecting the given local orientations.
\end{itemize}
\end{lem} The first sentence of (a) and its converse are essentially Proposition~16.1.2 of \cite{Ge08}, which is presented as an immediate consequence of two earlier statements: Proposition~16.1.1 asserts that the set of proper homotopy classes of singular rays approaching an arbitrary end corresponds bijectively to the derived limit $\lim^1_\leftarrow\pi_1(U_i,\gamma(i))$ of a neighborhood system $U_i$ of infinity; Theorem~11.3.2 asserts that an inverse sequence of countable groups $G_i$ is Mittag-Leffler if and only if $\lim^1_\leftarrow G_i$ has only one element. We follow those proofs but considerably simplify the argument, eliminating use of derived limits, by focusing on the Mittag-Leffler case.
This reveals the underlying geometric intuition: If an end $\epsilon$ is topologically collared by a neighborhood identified with ${\mathbb R}\times M$, and $\gamma=(\gamma_{\mathbb R},\gamma_M)\colon\thinspace[0,\infty)\to{\mathbb R}\times M$ is a singular ray, we can assume after a standard proper homotopy of the first component that $\gamma_{\mathbb R}\colon\thinspace[0,\infty)\to{\mathbb R}$ is inclusion. Then the proper homotopy $\gamma_s(t)=(t,\gamma_M((1-s)t))=\frac{1}{1-s}\gamma((1-s)t)$ (where the last multiplication acts only on the first factor) stretches the image of $\gamma$, pushing any winding in $M$ out toward infinity, so that when $s\to1$ the ray becomes a standard radial ray. If, instead, $\epsilon$ only has a neighborhood system with $\pi_1$-surjective inclusions, we can compare two singular rays using an initial proper homotopy after which they agree on ${\mathbb Z}^+\subset[0,\infty)$, and so only differ by a proper sequence of loops. Then $\pi_1$-surjectivity again allows us to push the differences out to infinity: inductively collapse loops by transferring their homotopy classes to more distant neighborhoods of infinity, so that the resulting homotopy sends one ray to the other. In the general Mittag-Leffler case, we still have enough surjectivity to push each loop to infinity after pulling it back a single level in the neighborhood system (with properness preserved because we only pull back one level). The following proof efficiently encodes this procedure with algebra. \begin{proof} First we prove (a), showing that an arbitrary singular ray $\gamma'$ determining the same Mittag-Leffler end as $\gamma$ is properly homotopic to it. We also keep track of preassigned local orientations along the two singular rays. If $\epsilon_\gamma$ is orientable, we assume these local orientations induce the same orientation on $\epsilon_\gamma$ (as in (b)). 
Let $\{U_i\}$ be a neighborhood system of infinity, arranged (by passing to a subsequence if necessary) so that each $j$ is $i+1$ in the definition of the Mittag-Leffler condition, and that the component of $U_1$ containing $\epsilon_\gamma$ is orientable if $\epsilon_\gamma$ is. Then reparametrize $\gamma$ so that each $\gamma([i,\infty))$ lies in $U_i$. Reparametrize $\gamma'$ similarly, then arrange it to agree with $\gamma$ on ${\mathbb Z}^+$ by inductively moving $\gamma'$ near each $i\in{\mathbb Z}^+$ separately, with compact support inside $U_i$. The limiting homotopy is then well-defined and proper. If $\epsilon_\gamma$ is nonorientable, then so is the relevant component of each $U_i$, so we can assume (changing the homotopy via orientation-reversing loops as necessary) that the local orientations along the two singular rays agree at each $i$. (This is automatic when $\epsilon_\gamma$ is orientable.) The two singular rays now differ by a sequence of orientation-preserving loops, representing classes $x_i\in\pi_1(U_i,\gamma(i))$ for each $i\ge 1$. Inductively choose orientation-preserving classes $y_i\in\pi_1(U_i,\gamma(i))$ for all $i\ge2$ starting from an arbitrary $y_2$, and for $i\ge1$ choosing $y_{i+2}\in\pi_1(U_{i+2},\gamma(i+2))$ to have the same image in $\pi_1(U_i,\gamma(i))$ as $x_{i+1}^{-1}y_{i+1}\in\pi_1(U_{i+1},\gamma(i+1))$. (This is where the Mittag-Leffler condition is necessary.) For each $i\ge 1$, let $z_i=x_iy_{i+1}\in\pi_1(U_i,\gamma(i))$ (where we suppress the inclusion map). In that same group, we then have $z_iz_{i+1}^{-1}=x_iy_{i+1}y_{i+2}^{-1}x_{i+1}^{-1}=x_i$. After another proper homotopy, we can assume the two singular rays and their induced local orientations on $X$ agree along $\frac12{\mathbb Z}^+$ and give the sequence $z_1,z_2^{-1},z_2,z_3^{-1},\dots$ in $U_1,U_1,U_2,U_2,\dots$. 
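In detail, the identity $z_iz_{i+1}^{-1}=x_i$ used above follows by cancellation in $\pi_1(U_i,\gamma(i))$: since $y_{i+2}$ and $x_{i+1}^{-1}y_{i+1}$ have the same image there,
\[
z_iz_{i+1}^{-1}=x_iy_{i+1}y_{i+2}^{-1}x_{i+1}^{-1}
=x_iy_{i+1}\bigl(x_{i+1}^{-1}y_{i+1}\bigr)^{-1}x_{i+1}^{-1}
=x_iy_{i+1}y_{i+1}^{-1}x_{i+1}x_{i+1}^{-1}=x_i.
\]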
Now a proper homotopy fixing ${\mathbb Z}^++\frac12$ cancels all loops between these points and eliminates $z_1$ (moving $\gamma'(0)$) so that the two singular rays coincide. This completes the proof of (a), and also (since $\mathcal E^{\rm ML}$ is now well-defined) the case of (b) with $S$ a single point. For the general case of (b), we wish to apply the previous case to each pair of singular rays separately. The only issue is properness of the resulting homotopy of singular multirays. Let $\{W_j\}$ be a neighborhood system of infinity with $W_1=X$. For each $s\in S$, find the largest $j$ such that $W_j$ contains both rays indexed by $s$, and apply the previous case inside that $W_j$. Since the singular multirays are proper, each $W_j$ contains all but finitely many pairs of singular rays, guaranteeing that the combined homotopy is proper. \end{proof} \begin{remark} To see the correspondence of this proof with the geometric description, first consider the case with all inclusion maps $\pi_1$-surjective. Then the argument simplifies: We can just define $z_1=1$, and inductively choose $z_{i+1}$ to be any pullback of $x_i^{-1}z_i$. Then $z_i$ is a pullback of $(x_1\cdots x_{i-1})^{-1}$ to $U_i$, exhibiting the loops being transferred toward infinity. \end{remark} To upgrade a proper homotopy of multirays to an ambient isotopy, we need the following lemma. \begin{lem}\label{ambient} Suppose that $X$ is a \textsc{cat} $n$-manifold with $n\ge 4$ and $Y$ is a \textsc{cat} 1-manifold with $b_1(Y)=0$. Let $\Gamma\colon\thinspace I\times Y{\hookrightarrow} \mathop{\rm Int}\nolimits X$ be a topological proper homotopy, between \textsc{cat} embeddings $\gamma_i$ ($i=0,1$) that extend to \textsc{cat} tubular neighborhood maps $\nu_i\colon\thinspace Y\times{\mathbb R}^{n-1}{\hookrightarrow} X$ whose local orientations correspond under $\Gamma$.
Then there is a \textsc{cat} ambient isotopy $\Phi\colon\thinspace I\times X\to X$, supported in a preassigned neighborhood of $\mathop{\rm Im}\nolimits\Gamma$, such that $\Phi_0=\mathop{\rm id}\nolimits_X$ and $\Phi_1\circ\nu_0$ agrees with $\nu_1$ on a neighborhood of $Y\times\{0\}$ in $Y\times{\mathbb R}^{n-1}$. \end{lem} \noindent This lemma is well-known when \textsc{cat}=\textsc{diff} or \textsc{pl}, but a careful proof seems justified by the subtlety of noncompactness: The corresponding statement in ${\mathbb R}^3$ is false even with $\Gamma$ a proper (nonambient) isotopy of $Y={\mathbb R}$. (Such an isotopy $\Gamma$ can slide a knot out to infinity, changing the fundamental group of the complement, and this can even be done while fixing the integer points of ${\mathbb R}$.) The case \textsc{cat}=\textsc{top} is also known to specialists. We did not find a theorem in the literature from which it follows immediately. Instead, we derive it from much stronger results of Dancis \cite{D76} with antecedents dating back to pioneering work of Homma \cite{H62}. \begin{proof} First we solve the case \textsc{cat}=\textsc{diff}. By transversality, we may assume (after an ambient isotopy that we absorb into $\Phi$) that $\gamma_0$ and $\gamma_1$ have disjoint images. Then we properly homotope $\Gamma$ rel $\partial I\times Y$ to be smooth and generic, so it is an embedding if $n\ge 5$ and an immersion with isolated double points if $n=4$. After decomposing $Y$ as a cell complex with 0-skeleton $Y_0$, we can assume $\Gamma$ restricts to a smooth embedding on some neighborhood of $I\times Y_0$. Then there is a tubular neighborhood $J$ of $Y_0$ in $Y$ such that $\Gamma|(I\times J)$ extends to an ambient isotopy. (Apply the Isotopy Extension Theorem separately in disjoint compact neighborhoods of the components of $\Gamma(I\times Y_0)$.)
After using this ambient isotopy to define $\Phi$ for parameter $t\le\frac12$, it suffices to assume $\Gamma$ fixes $J$, and view $\Gamma$ as a countable collection of path homotopies of the 1-cells of $Y$. We need the resulting immersed 2-disks to be disjoint. This is automatic when $n\ge 5$, but is the step that fails for knotted lines in ${\mathbb R}^3$. For $n=4$, we push the disks off of each other by finger moves. This operation preserves properness of $\Gamma$ since each compact subset of $X$ initially intersects only finitely many disks, which have only finitely many intersections with other disks (and we do not allow finger moves over other fingers). Now we can extend to an ambient isotopy, working in disjoint compact neighborhoods of the disks. We arrange $\nu_0$ to correspond with $\nu_1$ by uniqueness of tubular neighborhoods and contractibility of the components of $Y$. We reduce the \textsc{pl} and \textsc{top} cases to \textsc{diff}. As before, we can assume the images of $\gamma_0$ and $\gamma_1$ are disjoint. (We did not find a clean \textsc{top} statement of this. However, we can easily arrange $\gamma_0(Y_0)$ to be disjoint from $\gamma_1(Y)$, then apply \cite[General Position Lemma~3]{D76}. While this lemma assumes the moved manifold is compact and without boundary, we can apply it to the remaining 1-cells of $\gamma_0(Y)$ by arbitrarily extending them to circles.) A tubular neighborhood $N$ of $\gamma_0(Y)\sqcup\gamma_1(Y)$ now inherits a smoothing $\Sigma$ from the maps $\nu_i$. If $n=4$, $\Sigma$ extends over the entire manifold $X$ except for one point in each compact component \cite{FQ}. Homotoping $\Gamma$ off of these points, we reduce to the case \textsc{cat}=\textsc{diff}. If $n\ge 5$, we again homotope $\Gamma$ rel $\partial I\times Y$ to an embedding. 
(Again we found no clean \textsc{top} statement, but it follows by smoothing $\Gamma$ on $\Gamma^{-1}(N)$, homotoping so that $\Gamma^{-1}(N)$ is a collar of $\partial I\times Y$, and applying \cite[Corollary~6.1]{D76} in $X-N$.) Since $(I,\partial I)\times Y$ has no cohomology above dimension 1, there is no obstruction to extending $\Sigma$ over a neighborhood of the image of $\Gamma$, again reducing to \textsc{cat}=\textsc{diff}. \end{proof} \begin{proof}[Proof of Theorem~\ref{main}] For each $i=0,1$, the two multirays $\gamma^-_i$ and $\gamma^+_i$ can be thought of as a single multiray $\gamma_i$ with index set $S^*=S\times\{-1,1\}$. For each index $(s,\sigma)\in\epsilon_{\gamma_0}^{-1}(\mathcal E^o)\subset S^*$, we arrange for the corresponding locally orienting rays in $\gamma_0$ and $\gamma_1$ to induce the same orientation of the end: If this is not already true, then Hypothesis (b) of the theorem implies that the opposite end $\epsilon_{\gamma_0}(s,-\sigma)$ is Mittag-Leffler but nonorientable. In this case, reverse the local orientations along both rays in $\gamma_1$ parametrized by $s$. This corrects the orientations without changing $Z_1$, since the change extends as a reflection of the 1-handle $\{s\}\times[0,1]\times{\mathbb R}^{n-1}$. Now split $\gamma_i$ into two multirays $\gamma_i^{\rm ML}$ and $\gamma_i^{\rm bad}$, according to whether the rays determine Mittag-Leffler ends. By Hypothesis (a), we have a proper homotopy from $\gamma_0^{\rm bad}$ to $\gamma_1^{\rm bad}$, which respects the local orientations by Hypothesis (b) after further possible flips as above when the opposite end is Mittag-Leffler but nonorientable. Lemma~\ref{ML}(b) then gives a proper homotopy from $\gamma_0^{\rm ML}$ to $\gamma_1^{\rm ML}$ respecting local orientations. Reassembling the multirays, we obtain a proper homotopy from $\gamma_0$ to $\gamma_1$ that respects local orientations.
Now we apply Lemma~\ref{ambient} with $Y=S^*\times[0,\infty)$, and $\nu_i$ the given tubular neighborhood map for $\gamma_i$ (after the above flips). We obtain a \textsc{cat} ambient isotopy $\Phi$ of $\mathop{\rm id}\nolimits_X$ such that $\Phi_1\circ\nu_0$ agrees with $\nu_1$ on a neighborhood $N$ of $S^*\times[0,\infty)\times\{0\}$ in $S^*\times[0,\infty)\times{\mathbb R}^{n-1}$. Note that the quotient space $Z_i$ does not change if we cut back the 1-handles $S\times[0,1]\times{\mathbb R}^{n-1}$ to any neighborhood $N'$ of $S\times\{\frac12\}\times{\mathbb R}^{n-1}$ and use the restricted gluing map. Recall that the gluing map factors through an ${\mathbb R}^{n-1}$-bundle map $\mathop{\rm id}\nolimits_S\times\varphi^\pm\times\rho^\pm$ to $S^*\times[0,\infty)\times{\mathbb R}^{n-1}$. We can assume that the resulting image of $N'$ lies in some disk bundle (with radii increasing along the rays) inside $S^*\times[0,\infty)\times{\mathbb R}^{n-1}$. A smooth ambient isotopy supported inside a larger disk bundle moves this image into $N$. Conjugating with $\nu_i$ gives a \textsc{cat} ambient isotopy $\Psi_{(i)}$ on $X$. Then $\Phi'=\Psi_{(1)}^{-1}\circ\Phi\circ\Psi_{(0)}$ is a \textsc{cat} ambient isotopy for which $\Phi'_1\circ\nu_0$ agrees with $\nu_1$ on $N'$. The \textsc{cat} homeomorphism $\Phi'_1$ extends to one sending $Z_0$ to $Z_1$ with the required properties. \end{proof} We can now address uniqueness of ladder surgeries. Note that their definition immediately extends to unoriented manifolds, provided that we use locally orienting multirays. \begin{cor}\label{ladder} For a \textsc{cat} manifold $X$, discrete $S$ and $i=0,1$, let $\gamma^\pm_i\colon\thinspace S\times [0,\infty){\hookrightarrow} X$ be locally orienting \textsc{cat} multirays with disjoint images (for each fixed $i$) such that the end functions $\epsilon_{\gamma^\pm_i}\colon\thinspace S\to \mathcal E(X)$ are independent of $i$.
Suppose that for each $s\in \epsilon^{-1}_{\gamma^-_0}(\mathcal E^o)\cap \epsilon^{-1}_{\gamma^+_0}(\mathcal E^o)$, the local orientations of the corresponding rays in $\gamma^\pm_i$ induce the same orientation of the end for $i=0,1$. Then the manifolds $Z_i$ obtained by ladder surgery on $X$ along $\gamma^\pm_i$ are \textsc{cat} homeomorphic. \end{cor} \begin{proof} As in the previous proof, we assume that each ray of $\gamma^\pm_0$ determining an orientable end induces the same orientation of that end as the corresponding ray of $\gamma^\pm_1$, after reversing orientations on some mated pairs of rays (with the mate determining a nonorientable end). Since the end functions are independent of $i$, there is a proper homotopy of $\gamma^\pm_0$ for each choice of sign, after which $\gamma^\pm_i(s,n)$ is independent of $i$ for each $s\in S$ and $n\in{\mathbb Z}^+$ (as in the proof of Lemma~\ref{ML}). We can assume the local orientations agree at each of these points, after possibly changing the homotopy on each ray determining a nonorientable end. The proper homotopy of $\gamma^\pm_0|S\times{\mathbb Z}^+$ extends to an ambient isotopy as in the proof of Lemma~\ref{ambient}, without dimensional restriction (since we only deal with the 0-skeleton $Y_0$). \end{proof} \section{Smoothings of open 4-manifolds}\label{Smooth} Recall from Section~\ref{handles} that end summing with an exotic ${\mathbb R}^4$ can be defined as an operation on the smooth structures of a fixed topological 4-manifold, and that one can similarly change smoothings of $n$-manifolds by summing with an exotic ${\mathbb R}\times S^{n-1}$ along a properly embedded line. (The latter is most interesting when $n=4$, but the comparison with higher dimensions is illuminating.) We now address uniqueness of both operations, expressing them as monoid actions on the set of isotopy classes of smoothings of a topological manifold.
We define an {\em action} of a monoid $\mathcal M$ on a set $\mathcal S$ by analogy with group actions: Each element of $\mathcal M$ is assigned a function $\mathcal S\to\mathcal S$, with the identity of $\mathcal M$ assigned $\mathop{\mathrm{id}}\nolimits_{\mathcal S}$, and with monoid addition corresponding to composition of functions in the usual way. We first consider end summing with an exotic ${\mathbb R}^4$. In \cite{infR4}, it was shown that the set $\mathcal R$ of oriented diffeomorphism types of smooth manifolds homeomorphic to ${\mathbb R}^4$ admits the structure of a commutative monoid under end sum, with identity given by the standard ${\mathbb R}^4$, and such that countable sums are well-defined and independent of order and grouping. (Infinite sums were defined as simultaneously end summing onto the standard ${\mathbb R}^4$ along a multiray in the latter. Thus, the statement follows from Theorem~\ref{main} with the two multirays $\gamma^+_i$ in ${\mathbb R}^4$ differing by a permutation of $S$, and with Corollary~\ref{cancel} addressing grouping; cf.\ also Section~\ref{slides}.) For any set $S$, the Cartesian product $\mathcal R^S$ inherits a monoid structure with the same properties, as does the submonoid $\mathcal R^S_c$ of $S$-tuples that are the identity except in countably many coordinates. Note that every action by such a monoid inherits a notion of infinite iteration, since we can sum infinitely many monoid elements together before applying them. In the case at hand, we obtain the following corollary of the lemmas of the previous section. We again split a multiray $\gamma\colon\thinspace S\times[0,\infty)\to X$ into two multirays $\gamma_{\mathrm{ML}}\colon\thinspace S_{\mathrm{ML}}\times[0,\infty)\to X$ and $\gamma_{\mathrm{bad}}\colon\thinspace S_{\mathrm{bad}}\times[0,\infty)\to X$, according to which rays determine Mittag-Leffler ends.
\begin{cor}\label{R4} Let $X$ be a \textsc{top} 4-manifold with a locally orienting \textsc{top} multiray $\gamma\colon\thinspace S\times[0,\infty)\to X$. Then $\gamma$ determines an action of $\mathcal R^S$ on the set $\mathcal S(X)$ of isotopy classes of smoothings of $X$. The action only depends on the proper homotopy class of the locally orienting multiray $\gamma_{\mathrm{bad}}$, the function $\epsilon_{\gamma_{\mathrm{ML}}}$, and the subset of $S_{\mathrm{ML}}$ inducing a preassigned orientation on the orientable ends. In particular, if $X$ is oriented (or orientations are specified on all orientable Mittag-Leffler ends) then the monoid $\mathcal R^{\mathcal E_{ML}(X)}_c$ acts canonically on $\mathcal S(X)$. \end{cor} \noindent Note that orientation reversal induces an involution on the monoid $\mathcal R$, and changing the local orientations of $\gamma$ changes the action by composing with this involution on the affected factors of $\mathcal R^S$. \begin{proof} To define the action, fix a smoothing on $X$ and an indexed set $\{R_s|\thinspace s\in S\}$ of elements of $\mathcal R$. According to Quinn (\cite{Q}, cf.\ also \cite{FQ}), $\gamma$ can be made smooth by a \textsc{top} ambient isotopy. For each $s\in S$, choose a smooth ray $\gamma'$ in $R_s$, and use it to sum $R_s$ with $X$ along the corresponding ray in $X$. We do this by homeomorphically identifying the complement of a tubular neighborhood of $\gamma'$ (with smooth ${\mathbb R}^3$ boundary) with a corresponding closed tubular neighborhood of the ray in $X$ (preserving orientations), then transporting the smoothing of $R_s$ to $X$. We assume the identification is smooth near each boundary ${\mathbb R}^3$, and then the smoothing fits together with the given one on the rest of $X$. This process can be performed simultaneously for all $s\in S$, provided that we work within a closed tubular neighborhood of $\gamma$.
Each ray $\gamma'$ is unique up to smooth ambient isotopy (Lemma~\ref{ambient}), and the required identifications of neighborhoods (homeomorphic to the half-space $[0,\infty)\times{\mathbb R}^3$) are unique up to topological ambient isotopy that is smooth on the boundary (by the Alexander trick), so the resulting isotopy class of smoothings on $X$ is independent of choices made in the $R_s$ summands. Similarly, the resulting smoothing is changed by an isotopy if the original smoothing of $X$ is isotoped or $\gamma$ is changed by a proper homotopy (Lemma~\ref{ambient} again). In particular, the initial choice of smoothing of $\gamma$ does not matter. Since the proper homotopy class of the locally orienting multiray $\gamma_{\mathrm{ML}}$ is determined by $\epsilon_{\gamma_{\mathrm{ML}}}$ and the orientation data (Lemma~\ref{ML}(b)), we have a well-defined function $\mathcal S(X)\to\mathcal S(X)$ determined by an element of $\mathcal R^S$ and the data given in the corollary. The rest of the corollary is easily checked. To verify that we have a monoid action, consecutively apply two elements $\{R_s\}$ and $\{R'_s\}$ of $\mathcal R^S$. This uses the multiray $\gamma$ twice. After summing with each $R_s$, however, $\gamma$ lies in the new summands, so we are equivalently end summing $X$ with the sum of the two elements of $\mathcal R^S$ as required. If we enlarge the index set $S$ of $\{R_s\}$ while requiring all of the new summands $R_s$ to be ${\mathbb R}^4$, the induced element of $\mathcal S(X)$ will be unchanged, so it is easy to deduce the last sentence of the corollary even when $\mathcal E_{ML}$ is uncountable. \end{proof} In contrast with more general end sums, the action of $\mathcal R^S$ on $\mathcal S(X)$ is not known to vary with the choice of proper homotopy class of $\gamma$ (for a fixed end function).
\begin{ques} Suppose that two locally orienting multirays in $X$ have the same end function, and that for each $s\in S$, the two corresponding rays induce the same orientation on the corresponding end, if it admits one. Can the two actions of $\mathcal R^S$ on $\mathcal S(X)$ be different? \end{ques} \noindent We can also ask about diffeomorphism types rather than isotopy classes; cf.\ Question~\ref{R4sum}. Clearly, any example of nonuniqueness must involve an end that fails to be Mittag-Leffler, such as one arising by ladder surgery. While such examples seem likely to exist, there are also reasons for caution, as we now discuss. First, not every exotic ${\mathbb R}^4$ can give such examples. Freedman and Taylor \cite{FT} constructed a ``universal" ${\mathbb R}^4$, $R_U\in\mathcal R$, which is characterized as being the unique fixed point of the $\mathcal R$-action on itself. They essentially showed that for any smoothing $\Sigma$ of a 4-manifold $X$, the result of end summing with copies of $R_U$ depends only on the subset of $\mathcal E(X)$ at which the sums are performed, regardless of whether those ends are Mittag-Leffler. Then $\mathcal R$ subsequently acts trivially on each of those ends. They also showed that the result of summing with $R_U$ on a dense subset of ends creates a smoothing depending only on the stable isotopy class of $\Sigma$ (classified by $H^3(X,\partial X;{\mathbb Z}/2)$). For such a smoothing, $\mathcal R^S$ acts trivially for any choice of multiray. The main point is that the universal property is obtained through a countable collection of disjoint compact subsets of $R_U$ that allow h-cobordisms to be smoothly trivialized. If $X$ is summed with $R_U$ on one side of a ladder sum (for example), those compact subsets are also accessible on the other side by reaching through the rungs of the ladder.
A second issue is that examples of nonuniqueness would be subtle and hard to distinguish: \begin{prop}\label{subtle_prop} Let $X$ be a \textsc{top} 4-manifold with smoothing $\Sigma$. Let $\gamma_0,\gamma_1\colon\thinspace S\times[0,\infty)\to X$ be multirays as in the above question, inducing smoothings $\Sigma_0$ and $\Sigma_1$, respectively, via a fixed element of $\mathcal R^S$. Then for every compact \textsc{diff} 4-manifold $K$, every $\Sigma_0$-smooth embedding $\iota\colon\thinspace K\to X$ is \textsc{top} ambiently isotopic to a $\Sigma_1$-smooth embedding. After isotopy of $\Sigma_1$, every neighborhood of infinity in $X$ contains another such neighborhood $U$ such that whenever $\iota(K)\subset U$ and $K$ is a 2-handlebody, the resulting isotopy can be assumed to keep $\iota(K)$ inside $U$. \end{prop} \noindent This shows that many of the standard 4-dimensional techniques for distinguishing smooth structures will fail in the above situation. One of the oldest techniques for distinguishing two smoothings on ${\mathbb R}^4$ is to find a compact \textsc{diff} manifold that smoothly embeds in one but not the other \cite{infR4}. A newer incarnation of this idea is the Taylor invariant \cite{T}, distinguishing \textsc{diff} 4-manifolds via an exotic ${\mathbb R}^4$ embedded in one with compact closure. Clearly, such techniques must fail in the current situation. Most recently, the genus function has turned out to be useful \cite{MinGen}, distinguishing by the minimal genera of smoothly embedded surfaces representing various homology classes. However, any such surface for $\Sigma_0$ will be homologous to one of the same genus for $\Sigma_1$ and vice versa. Minimal genera at infinity \cite{MinGen} will also fail: If we choose a system of neighborhoods $U$ of infinity as in the proposition, any corresponding sequence of $\Sigma_0$-smooth surfaces in these will be homologous to a corresponding sequence for $\Sigma_1$ with the same genera. 
A possibility remains of distinguishing $\Sigma_0$ and $\Sigma_1$ by sequences of smoothly embedded 3-manifolds approaching infinity (such as by the engulfing index of \cite{BG}, cf.\ also Remark~4.3(b) of \cite{MinGen}), but there does not currently seem to be any good way to analyze such sequences. Note that the situation is not improved by passing to a cover, since the corresponding lifted smoothings will behave similarly. (The multirays $\gamma_i$ will lift to multirays, and for each $s\in S$ the lifts of the corresponding rays of $\gamma_0$ and $\gamma_1$ will be multirays with end functions whose images have the same closure in $\mathcal E(\widetilde X)$, cf.\ last paragraph of proof of Theorem~8.1 in \cite{MinGenV2}. The proof below still applies to this situation.) \begin{proof} For the first conclusion, let $\overline{\nu}_i\colon\thinspace S\times[0,\infty)\times D^3\to X$ be the closed tubular neighborhood maps of the multirays $\gamma_i$ used for the end sums. By properness, both subsets $\overline{\nu}_i^{-1}\iota(K)$ are contained in a single subset of the form $T=S_0\times[0,N]\times D^3$ for some finite $S_0\subset S$ and $N\in{\mathbb Z}^+$. We need a $\Sigma$-smooth ambient isotopy $\Phi_t$ of $\mathop{\mathrm{id}}\nolimits_X$ such that $\Phi_1\circ\overline{\nu}_0=\overline{\nu}_1$ on $T$, allowing no new intersections with $\iota(K)$, i.e., with $\overline{\nu}_1^{-1}\Phi_1\iota(K)$ still lying in $T$. This is easily arranged, since for each $s\in S_0$ the corresponding rays of $\gamma_0$ and $\gamma_1$ determine the same end and induce the same orientation on it if possible. This allows us to move $\gamma_0(s,N)$ to $\gamma_1(s,N)$ so that the local orientations agree, and then complete the isotopy following the initial segments of the rays. (The end hypothesis is needed when $X-\iota(K)$ is disconnected, for example.) After we perform the end sums, our isotopy will only be topological.
However, $\Phi_1\circ\iota$ will be $\Sigma_1$-smooth as required, since the new smoothings correspond under $\Phi_1$ on the images of $T$ and the smoothing $\Sigma$ is preserved elsewhere on $\iota(K)$. For the second statement, assume (isotoping $\Sigma_1$) that the images of $\overline{\nu}_i$, for $i=0,1$, are disjoint. Given a neighborhood of infinity, pass to a smaller neighborhood $U$ such that the two subsets $\overline{\nu}_i^{-1}(U)$ are equal, with complement of the form $S_1\times[0,N']\times D^3$ for some finite $S_1$ and $N'\in{\mathbb Z}^+$. For any $K$ and $\iota$ with $\iota(K)\subset U$, we can repeat the previous argument. There is only one difficulty: If $K=M^3\times I$, for example, some sheets of $M$ may be caught between $\partial U$ and the moving image of $\gamma_0$ during the final isotopy, and be pushed out of $U$. However, if $K$ is a handlebody with all indices 2 or less, we can remove the image of $K$ from the path of $\gamma_0$ (which will be following arcs of $\gamma_1$) by transversality. The statement now follows as before. \end{proof} Elements of $\mathcal R$ can be either {\em large} or {\em small}, depending on whether they contain a compact submanifold that cannot smoothly embed in the standard ${\mathbb R}^4$ (e.g., \cite[Section~9.4]{GS}). Action on $\mathcal S(X)$ by small elements does not change the invariants discussed above (except for 3-manifolds at infinity), but still can yield uncountably many diffeomorphism types \cite[Theorem~7.1]{MinGen}. However, large elements typically do change invariants. In particular, the minimal genus of a homology class can drop under end sum with, for example, the universal ${\mathbb R}^4$ \cite[Theorem~8.1]{MinGen}. For Stein surfaces, the adjunction inequality gives a lower bound on minimal genera, which is frequently violated after such sums. 
Thus, the following application of Corollary~\ref{stein1h} seems surprising: \begin{thm}[Bennett]\label{JB} \cite[Corollary~4.1.3]{BenDis} There is a family $\{R_t|\thinspace t\in{\mathbb R}\}$ of distinct large elements of $\mathcal R$ (with nonzero Taylor invariant) such that if $Z$ is obtained from a Stein surface $X$ by any orientation-preserving end sums with elements $R_t$ then the adjunction inequality of $X$ applies in $Z$. \end{thm} \noindent Nevertheless, we expect such sums to destroy the Stein structure, since every handle decomposition of each $R_t$ requires infinitely many 3-handles. The idea of the proof is that \cite{BenDis} or \cite{Ben} constructs such manifolds $R_t$ embedded in Stein surfaces, in such a way that the sums can be performed pairwise. By Corollary~\ref{stein1h}, we obtain $Z$ embedded in a Stein surface so that the adjunction inequality is preserved. Next we consider sums along properly embedded lines. For a fixed $n\ge 4$, let $\mathcal Q$ denote the set of oriented diffeomorphism types of manifolds homeomorphic to ${\mathbb R}\times S^{n-1}$, with a given ordering of their two ends. Each such manifold admits a \textsc{diff} proper embedding of a line, preserving the order of the ends, and this is unique up to \textsc{diff} ambient isotopy by Lemma~\ref{ambient}. Thus, $\mathcal Q$ has a well-defined commutative monoid structure induced by summing along lines, preserving orientations on the lines and $n$-manifolds. (This time, properness prevents infinite sums.) The identity is ${\mathbb R}\times S^{n-1}$ with its standard smoothing. For $n=5,6,7$, $\mathcal Q$ is trivial, and for $n>5$, $\mathcal Q$ is canonically isomorphic to the finite group $\Theta_{n-1}$ of homotopy $(n-1)$-spheres \cite{KM} (by taking their product with ${\mathbb R}$).
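To make the high-dimensional picture concrete, the identification above gives, for example (using the Kervaire--Milnor computation of $\Theta_7$ from \cite{KM}),
\[
\mathcal Q\;\cong\;\Theta_7\;\cong\;{\mathbb Z}/28 \qquad\text{for } n=8,
\]
so in dimension 8 there are exactly 28 elements of $\mathcal Q$, in stark contrast with the uncountability in dimension 4 discussed next.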
However when $n=4$, $\mathcal Q$ has much more structure: High-dimensional theory predicts that $\mathcal Q$ should be ${\mathbb Z}/2$, but in fact it is an uncountable monoid with an epimorphism to ${\mathbb Z}/2$ (analogous to the Rohlin invariant of homology 3-spheres). Uncountability is already suggested by Corollary~\ref{R4}, but the structure of $\mathcal Q$ is richer than can be obtained just by acting by $\mathcal R$ at the two ends, as can be seen as follows. For $V,V'\in\mathcal Q$, call $V$ a {\em slice} of $V'$ if it embeds in $V'$ separating the ends. (For this discussion, orientations and order of the ends do not matter.) Every known ``large" exotic ${\mathbb R}^4$ has a neighborhood of infinity in $\mathcal Q$ with the property that disjoint slices are never diffeomorphic \cite{infR4}. This neighborhood clearly has infinitely many disjoint slices, which comprise an infinite family in $\mathcal Q$ such that no two share a common slice. Thus, no two are obtained from a common element of $\mathcal Q$ by the action of $\mathcal R\times\mathcal R$. A similar family representing the other class in ${\mathbb Z}/2$ is obtained from the end of a smoothing of Freedman's punctured $E_8$-manifold. To get an action on $\mathcal S(X)$ for $n\ge4$, let $\gamma\colon\thinspace S\times{\mathbb R}\to X$ ($S$ discrete) be a proper, locally orienting \textsc{top} embedding. Then $\mathcal Q^S$ has a well-defined action on $\mathcal S(X)$ (although without infinite iteration) by the same method as before, and this only depends on the proper homotopy class of $\gamma$. (We assume after proper homotopy that $\gamma^{-1}(\partial X)=\emptyset$. To see that a self-homeomorphism rel boundary of ${\mathbb R}\times D^{n-1}$ is isotopic to the identity, first use the topological Schoenflies Theorem to reduce to the case where $\{0\}\times D^{n-1}$ is fixed.) Note that while $\mathcal Q$ admits only finite sums, the set $S$ may be countably infinite.
Examples~\ref{PL} showed that the action of $\mathcal Q$ on $\mathcal S(X)$ for a two-ended 4-manifold $X$ can depend on the choice of line spanning the ends, and in high dimensions, even the resulting diffeomorphism type can depend on the line. We next find fundamental group conditions eliminating such dependence. To obtain such conditions, note that the fundamental progroup of $X$ based at a ray $\gamma$ has an inverse limit with well-defined image in $\pi_1(X,\gamma(0))$. In the Mittag-Leffler case, its image equals the image of $\pi_1(U_2,\gamma(2))$ for a suitably defined neighborhood system of infinity (i.e.\ with $j=i+1$ in Definition~\ref{ml}). If $\gamma$ is instead a line, it splits as a pair $\gamma_\pm$ of rays, obtained by restricting its parameter $\pm t$ to $[0,\infty)$, determining ends $\epsilon_\pm$ and images $G_\pm\subset\pi_1(X,\gamma(0))$ of the corresponding inverse limits. We will call the pair $(\epsilon_-,\epsilon_+)$ a {\em Mittag-Leffler couple} if both ends are Mittag-Leffler and the double coset space $G_-\backslash\pi_1(X,\gamma(0))/G_+$ is trivial (equivalently, every element of $\pi_1(X,\gamma(0))$ can be written as a product $w_-w_+$ with $w_\pm\in G_\pm$). The proof below shows that $\gamma$ is then uniquely determined up to proper homotopy by the pair of ends, so the condition is independent of choice of $\gamma$ (as well as the direction of $\gamma$). A proper embedding $\gamma\colon\thinspace S\times{\mathbb R}\to X$ now splits into $\gamma_{\mathrm{ML}}$ and $\gamma_{\mathrm{bad}}$ according to which lines connect Mittag-Leffler couples, and the restriction $\epsilon_{\gamma_{\mathrm{ML}}}$ of the end function $\epsilon_{\gamma}\colon\thinspace S\times\{\pm1\}\to \mathcal E$ picks out the corresponding pairs of Mittag-Leffler ends. For simplicity, we now assume $X$ is oriented. \begin{cor}\label{SxR} Let $X$ be an oriented topological $n$-manifold ($n\ge4$) with a proper embedding $\gamma\colon\thinspace S\times{\mathbb R}\to X$.
Then $\gamma$ determines an action of $\mathcal Q^S$ on $\mathcal S(X)$, depending only on the proper homotopy classes of $\gamma_{\mathrm{bad}}$ and $\gamma_{\mathrm{ML}}$. If the latter consists of finitely many lines, it only affects the action through its end function $\epsilon_{\gamma_{\mathrm{ML}}}$. \end{cor} \noindent If $X$ is simply connected and $\mathcal E_{ML}$ is finite, we obtain a canonical action of $\mathcal Q^{\mathcal E_{ML}\times\mathcal E_{ML}}$ on $\mathcal S(X)$. \begin{proof} For a proper embedding $\gamma$ of ${\mathbb R}$ determining a Mittag-Leffler couple $\epsilon_\pm$ as above, we show that any other embedding $\gamma'$ determining the same ordered pair of ends is properly homotopic to $\gamma$. This verifies that Mittag-Leffler couples are well-defined, and proves the corollary. (The finiteness hypothesis guarantees properness of the homotopy that we make using the proper homotopies of the individual lines.) Let $\{U_i\}$ be a neighborhood system of infinity as in the proof of Lemma~\ref{ML}, and reparametrize the four rays $\gamma_\pm$ and $\gamma'_\pm$ accordingly (fixing 0). As before, we can properly homotope $\gamma'$ to agree with $\gamma$ on ${\mathbb Z}\subset{\mathbb R}$, so that $\gamma$ and $\gamma'$ are related by a doubly infinite sequence of loops. The loop captured between $\pm2$ (starting at $\gamma(0)$, then following $\gamma_-$, $\gamma'$ and (backwards) $\gamma_+$) represents a class in $\pi_1(X,\gamma(0))$ that by hypothesis can be written in the form $w_-w_+$ with $w_\pm\in G_\pm$. After a homotopy of $\gamma'$ supported in $[-2,2]$, we can assume that $\gamma'=\gamma$ on $[-1,1]$, and the innermost loops are given by $w_\pm$ pulled back to $\pi_1(U_1,\gamma(\pm1))$. Working with each sign separately, we now complete the proof of Lemma~\ref{ML}(a), denoting the pullback of $w_\pm$ by $x_1$ as before.
By the definition of $G_\pm$, $x_1$ can be assumed to pull back further to $\pi_1(U_2,\gamma_\pm(2))$; let $y_2$ be the inverse of such a pullback. Completing the construction, we see that $z_1=1$, so that $\gamma'$ is then properly homotoped to $\gamma$ rel $[-1,1]$. \end{proof} Corollary~\ref{SxR} is most interesting when $n=4$, since classical smoothing theory reduces the higher dimensional case to discussing the Poincar\'e duals of the relevant lines in $H^{n-1}(X,\partial X;\Theta_{n-1})$. When $n=4$, this same discussion applies to the classification of smoothings up to stable isotopy (isotopy after product with ${\mathbb R}$) by the obstruction group $H^3(X,\partial X;{\mathbb Z}/2)$, but one typically encounters uncountably many isotopy classes (and diffeomorphism types) within each stable isotopy class. Note that the above method can be used to study sums of more general \textsc{cat} manifolds along collections of lines. In dimension 4, one can also consider actions on $\mathcal S(X)$ of the monoid $\mathcal Q_k$ of oriented smooth manifolds homeomorphic to a $k$-punctured 4-sphere $\Sigma_k$ with an order on the ends, generalizing the cases $\mathcal Q_1=\mathcal R$ and $\mathcal Q_2=\mathcal Q$ considered above. (The monoid operation is summing along $k$-fold unions of rays with a common endpoint; see the end of \cite{mod} for a brief discussion.) However, little is known about this monoid beyond what can be deduced from Corollaries~\ref{R4} and \ref{SxR} and the structure of $\mathcal R$ and $\mathcal Q$. It follows formally from having infinite sums that $\mathcal R$ has no nontrivial invertible elements, and no nontrivial homomorphism to a group \cite{infR4}; cf.\ also Theorem~\ref{hut}. However, the other monoids do not allow infinite sums.
This leads to the following reformulation of Question~\ref{inverses}: \begin{ques}\label{inverses2} Does $\mathcal Q$ (or more generally any $\mathcal Q_k$, $k\ge2$) have any nontrivial invertible elements? Is $H^3(\Sigma_k;{\mathbb Z}/2)$ the largest possible image of $\mathcal Q_k$ under a homomorphism to a group? \end{ques} \section{1-handle slides and 0/1-handle cancellation at infinity}\label{slides} Our uniqueness result for adding $1$-handles at infinity (Theorem~\ref{main}) easily extends to adding both $0$- and $1$-handles at infinity, while allowing infinite slides and cancellation (Theorem~\ref{maincancel}). With {\em compact} handles of index $0$ and $1$, one may easily construct countable handlebodies that are contractible, but are distinguished by their numbers of ends. In this regard, adding $0$- and $1$-handles at infinity turns out to be simpler. For instance, in each dimension at least four, every (at most) countable, connected, and oriented union of $0$- and $1$-handles at infinity is determined by its first Betti number. As an application of Theorem~\ref{maincancel}, we give a very natural and partly novel proof of the hyperplane unknotting theorem. The novelty here is that $0$- and $1$-handles at infinity provide the basic framework in which we employ Mazur's infinite swindle.\\ For simplicity, we assume throughout this section that all manifolds are oriented and all handle additions respect orientations.\\ Let $X$ be a possibly disconnected \textsc{cat} $n$-manifold where $n\geq4$. Add to $X$ a collection of $0$-handles at infinity $W= \bigsqcup_{i\in J} w_i$ where each $w_i$ is \textsc{cat} homeomorphic to ${\mathbb R}^n$. The index set $J$ and all others below are discrete and countable. Attach to $X\sqcup W$ a collection of $1$-handles at infinity $H=\bigsqcup_{i\in S} h_i$ where each $h_i$ is \textsc{cat} homeomorphic to $[0,1]\times{\mathbb R}^{n-1}$ (see Figure~\ref{fig:add_handles}).
By Definition~\ref{onehandles} and Theorem~\ref{main}, $H$ is determined by multiray data \begin{figure} \caption{Manifold $Z$ obtained from the manifold $X$ by adding $0$- and $1$-handles at infinity, the latter denoted by arcs.} \label{fig:add_handles} \end{figure} $\gamma^-,\gamma^+\colon\thinspace S\times [0,\infty){\hookrightarrow} X\sqcup W$ with disjoint images.\\ To this data, we associate a graph $G$ defined as follows (see Figure~\ref{fig:graph}). \begin{figure} \caption{Graph $G$ associated to the construction in Figure~\ref{fig:add_handles}.} \label{fig:graph} \end{figure} Let $\left\{v_i \mid i \in I \right\}$ be the set of proper homotopy classes of rays in the multiray data for $H$ that lie in $X$. Each $v_i$ has at least one representative of the form $\gamma^-(j_i)$ or $\gamma^+(j_i)$ for some $j_i \in S$. The vertex set $V$ of $G$ is: \[ V:=\left\{v_i \mid i \in I \right\} \sqcup \left\{w_i \mid i\in J \right\}. \] The collection $E$ of edges of $G$ is bijective with the $1$-handles at infinity $H$ and thus is indexed by $S$. The edge $e_i$, $i\in S$, corresponding to $h_i$ is formally defined to be the multiset of the two vertices in $V$ determined by the multiray data of $h_i$. In particular, $E$ itself is a multiset, and the graph $G$ is countable, but is not necessarily locally finite, connected, or simple. Indeed, $G$ may have multiple edges and loops. Let $C=\bigsqcup_{i\in I(C)} C_i$ be the connected components of $G$ such that each component $C_i$ contains a vertex $v_{j(i)}$ in $X$. Let $D=\bigsqcup_{i\in I(D)} D_i$ be the remaining components of $G$ where each component $D_i$ contains no vertex $v_j$ in $X$. Notice that $C$ induces a partition $\mathcal{P}=\left\{P_j \mid j\in I(C)\right\}$ of $\left\{v_i \mid i \in I \right\}$ where $P_j$ is the subset of vertices in $\left\{v_i \mid i \in I \right\}$ that lie in $C_j$.
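For later reference, the first Betti number of a connected countable graph can be read off from any spanning tree: if $T_j$ is a spanning tree of $C_j$ (or of $D_i$), then
\[
b_1(C_j)\;=\;\bigl|E(C_j)\setminus E(T_j)\bigr|,
\]
the number of edges omitted by the spanning tree; this finite or countably infinite count of independent cycles is the quantity appearing in the theorem and proof below.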
Below, Betti numbers $b_k$ are finite or countably infinite. \begin{thm}\label{maincancel} For a \textsc{cat} $n$-manifold $X$ with $n\ge4$, the \textsc{cat} oriented homeomorphism type of the manifold $Z$ obtained by adding 0- and 1-handles at infinity to $X$ as above is determined by: \begin{itemize} \item[(a)] The set of pairs $\left(P_j,b_1\left(C_j\right)\right)$ where $P_j \in \mathcal{P}$. \item[(b)] The multiset with elements $b_1(D_i)$ where $i\in I(D)$. \end{itemize} \end{thm} Thus, we only need to keep track of which proper homotopy classes of rays in $X$ are used by at least one 1-handle (encoded as the vertices in each $P_j$), together with the most basic combinatorial data of the new handles. When the relevant ends are Mittag-Leffler, we can replace the ray data by the set of corresponding ends. The theorem implies that all 0-handles at infinity can be canceled except for one in each component of $Z$ disjoint from $X$, and that we can slide 1-handles over each other whenever their attaching rays are properly homotopic (e.g., whenever they determine the same Mittag-Leffler end). Furthermore, any reasonable notion of infinitely iterated handle sliding is allowed. \begin{proof} First, consider a component $D_i$ of $G$. Let $M$ denote the component of $Z$ corresponding to $D_i$. By Corollary~\ref{maincor}, we can and do assume that the rays used to attach $1$-handles at infinity in $M$ are radial (while still remaining proper and disjoint). Then when $D_i$ is a tree, we can easily describe $M$ as a nested union of smooth $n$-disks, so it is a copy of ${\mathbb R}^n$. In general, a spanning tree $T$ of $D_i$ determines a copy of ${\mathbb R}^n$ in $M$ (namely, one ignores a subset of the $1$-handles at infinity). Thus, $M$ is ${\mathbb R}^n$ with $b_1(D_i)$ $1$-handles at infinity attached. By Corollary~\ref{maincor}, such a manifold is determined by $b_1(D_i)$.\\ Second, consider a component $C_j$ of $G$.
Let $N$ denote the component of $Z$ corresponding to $C_j$. Let $N'$ be the $n$-manifold obtained from $N$ as follows. For each vertex $v_k$ in $C_j$, introduce a $0/1$-handle pair at infinity where the new $1$-handle at infinity attaches to a ray in the class $v_k$ and to a ray in the new $0$-handle at infinity. Also, the $1$-handles at infinity in $N$ attached to rays in the class of $v_k$ attach in $N'$ to rays in the new $0$-handle at infinity. Theorem~\ref{main} implies that $N$ and $N'$ are \textsc{cat} oriented homeomorphic. The graph $C'_j$ corresponding to $N'$ is obtained from $C_j$ by adding a leaf to each $v_k$. Let $T$ be a spanning tree of the connected graph obtained by removing the new leaves from $C'_j$. Then, $T$ determines a copy of ${\mathbb R}^n$ in $N'$. This exhibits $N'$ as: the components of $X$ containing the vertices in $P_j$, a single $0$-handle at infinity $w_0$, $b_1(C_j)$ oriented $1$-handles at infinity attached to $w_0$, and an oriented $1$-handle at infinity from each $v_k \in P_j$ to $w_0$. \end{proof} As an application of $1$-handle slides and $0/1$-handle cancellation at infinity, we prove the hyperplane unknotting theorem of Cantrell~\cite{C63} and Stallings~\cite{Stall65}. Recall that we assume \textsc{cat} embeddings are locally flat. \begin{thm}\label{hut} Let $f\colon\thinspace {\mathbb R}^{n-1}\to{\mathbb R}^n$ be a proper \textsc{cat} embedding where $n\geq4$, and let $H=f\left({\mathbb R}^{n-1}\right)$. Then, there is a \textsc{cat} homeomorphism of ${\mathbb R}^n$ that carries $H$ to a linear hyperplane. \end{thm} A \textsc{cat} ray in ${\mathbb R}^k$ is \emph{unknotted} provided there is a \textsc{cat} homeomorphism of ${\mathbb R}^k$ that carries the ray to a linear ray. Recall that each \textsc{cat} ray in ${\mathbb R}^k$, $k\geq4$, is unknotted.
For \textsc{cat}=\textsc{pl} and \textsc{cat}=\textsc{diff}, this fact follows from general position, but for \textsc{cat}=\textsc{top} it is nontrivial and requires Homma's method (see Lemma~\ref{ambient} above and~\cite[\S~7]{CKS}). Thus, the following holds under the hypotheses of Theorem~\ref{hut} by taking $r$ to be the image under $f$ of a linear ray in ${\mathbb R}^{n-1}$: \emph{There is a \textsc{cat} ray $r\subset H$ that is unknotted in both $H$ and ${\mathbb R}^n$, where the former means $f^{-1}(r)$ is unknotted in ${\mathbb R}^{n-1}$.} The hyperplane $H$ separates ${\mathbb R}^n$ into two connected components by Alexander duality. Let $A'$ and $B'$ denote the closures in ${\mathbb R}^n$ of these two components as in Figure~\ref{fig:A_and_B}. \begin{figure} \caption{Closures $A'$ and $B'$ of the complement of $H$ in ${\mathbb R}^n$.} \label{fig:A_and_B} \end{figure} So, $\partial A'=H=\partial B'$, and $H$ has a bicollar neighborhood in ${\mathbb R}^n$. Using the bicollar, define: \begin{align*} A:=&A'\cup(\textnormal{open collar on $H$ in $B'$})\\ B:=&B'\cup(\textnormal{open collar on $H$ in $A'$}) \end{align*} as in Figure~\ref{fig:A_and_B}. Figure~\ref{fig:A_and_B} also depicts \textsc{cat} rays $a\subset A$ and $b\subset B$ that are radial with respect to the collarings. Evidently, $a$ and $b$ are \textsc{cat} ambient isotopic to $r$ in $A$ and $B$ respectively. (These simple isotopies have support in a neighborhood of the open collars.)\\ \begin{lem}\label{ABcuhs} It suffices to show that $A'$ and $B'$ are \textsc{cat} homeomorphic to closed upper half-space ${\mathbb R}^n_+$. \end{lem} \begin{proof} We are given \textsc{cat} homeomorphisms $g\colon\thinspace A'\to{\mathbb R}^n_+$ and $h\colon\thinspace B'\to{\mathbb R}^n_+$. Replace $h$ by its composition with a reflection so that $h$ maps $B'\to{\mathbb R}^n_-$. Note that $g$ and $h$ need not agree pointwise on $H$.
Identify ${\mathbb R}^{n-1}\times\left\{0\right\}$ with ${\mathbb R}^{n-1}$. We have a \textsc{cat} homeomorphism $j\colon\thinspace {\mathbb R}^{n-1}\to{\mathbb R}^{n-1}$ given by the restriction of $g\circ h^{-1}$ to ${\mathbb R}^{n-1}$. Define the \textsc{cat} homeomorphism $k\colon\thinspace B'\to{\mathbb R}^n_-$ by $k=(j\times\textnormal{id})\circ h$ (that is, compose $h$ with $j$ at each height). Now, $g$ and $k$ agree pointwise on $H$. For \textsc{cat}=\textsc{top} and \textsc{cat}=\textsc{pl}, the proof of the lemma is complete. For \textsc{cat}=\textsc{diff}, one smooths along collars as in Hirsch~\cite[Theorem~1.9, p.~182]{H94}. \end{proof} We will use the symbols in Figure~\ref{fig:hut_notation} to denote the indicated manifold/ray pairs. Here, $c$ is a radial ray in ${\mathbb R}^n$. \begin{figure} \caption{Notation for relevant manifold/ray pairs.} \label{fig:hut_notation} \end{figure} All rays in this proof, such as $a$ and $b$, will be parallel (\textsc{cat} ambient isotopic) to $r$ or $c$. An added $1$-handle at infinity will be denoted by an arc connecting such symbols as in Figure~\ref{fig:hut_cancel}.\\ \begin{lem}\label{ABcancel} All three of the manifold/ray pairs in Figure~\ref{fig:hut_cancel} are \textsc{cat} homeomorphic to one another. \begin{figure} \caption{Isomorphic manifold/ray pairs.} \label{fig:hut_cancel} \end{figure} \end{lem} \begin{proof} First, we claim that adding a $1$-handle at infinity to $(A,a)\sqcup(B,b)$ yields ${\mathbb R}^n$. Recalling the collars in Figure~\ref{fig:A_and_B}, the claim would be evident if we could choose the tubular neighborhood maps for the $1$-handle at infinity to be the full collars in the ${\mathbb R}^{n-1}$ directions. However, an open tubular neighborhood must, by our definition, extend to a closed tubular neighborhood. So, instead we use smaller tubular neighborhoods inside the collars as follows.
Identify the collar on $H$ in $A$ with ${\mathbb R}^{n-1}\times [0,1)$ so that $H$ corresponds to ${\mathbb R}^{n-1}\times\left\{0\right\}$ and the ray $a$ corresponds to $\left\{0\right\}\times [1/2,1)$. For each $t\in[1/2,1)$, there is an open horizontal $(n-1)$-disk in ${\mathbb R}^{n-1}\times[0,1)$ at height $t$, of radius $1/(1-t)$, and with center on $a$. The union of these disks is our desired open tubular neighborhood of $a$. Similarly, we obtain an open tubular neighborhood of $b$ using the compatible collar in $B$. The claim follows by attaching the $1$-handle at infinity using these tubular neighborhood maps and reparameterizing collars. Next, let $a'$ and $b'$ be the indicated rays in Figure~\ref{fig:hut_cancel} parallel to $a$ and $b$ respectively. The lemma follows by shrinking the above tubular neighborhood maps in the ${\mathbb R}^{n-1}$ directions to be disjoint from $a'$ and $b'$ respectively. \end{proof} \begin{lem}\label{pairs_std} It suffices to prove that $(A,a)$ and $(B,b)$ are \textsc{cat} homeomorphic as pairs to $({\mathbb R}^n,c)$. \end{lem} \begin{proof} First, consider the cases \textsc{cat}=\textsc{diff} and \textsc{cat}=\textsc{pl}. The collar on $H$ in $A$ is a \textsc{cat} \emph{closed} regular neighborhood of $a$ in $A$ with boundary $H$. Using the hypothesis $(A,a)\cong({\mathbb R}^n,c)$, apply uniqueness of such neighborhoods in $({\mathbb R}^n,c)$ to see that $A'$ is \textsc{cat} homeomorphic to ${\mathbb R}^n_+$. Similarly, $B'$ is \textsc{cat} homeomorphic to ${\mathbb R}^n_+$. Now, apply Lemma~\ref{ABcuhs}.\\ For \textsc{cat}=\textsc{top}, we are given a homeomorphism $g\colon\thinspace (A,a)\to({\mathbb R}^n,c)$. Let $V\cong{\mathbb R}^n_+$ be the collar added to $A'$ along $H$ to obtain $A$ as in Figure~\ref{fig:A_and_B}. Let $U\cong{\mathbb R}^n_+$ be a collar on $H$ in $A$ on the opposite side of $H$ as in Figure~\ref{fig:mcn}.
\begin{figure} \caption{Homeomorphic manifold/ray pairs $(A,a)$ and $({\mathbb R}^n,c)$.} \label{fig:mcn} \end{figure} Recall that ${\mathbb R}^n$ itself is an open mapping cylinder neighborhood of $c$ in ${\mathbb R}^n$ (see~\cite{KR} and~\cite[pp.~1816,1831]{CKS}). Similarly, $U\cup V$ is an open mapping cylinder neighborhood of $a$ in $U\cup V$. So, $g(U\cup V)$ is another open mapping cylinder neighborhood of $c$ in ${\mathbb R}^n$. Uniqueness of such neighborhoods (see~\cite{KR} and~\cite{CKS}) implies there exists a homeomorphism $h\colon\thinspace g(U\cup V)\to{\mathbb R}^n$ that fixes $g(V)$ pointwise. Therefore: \[ g(U)\cong {\mathbb R}^n-\textnormal{Int}\,g(V)=g(A'). \] Hence, $A'\cong U \cong {\mathbb R}^n_+$. Similarly, $B'$ is homeomorphic to ${\mathbb R}^n_+$. Again, Lemma~\ref{ABcuhs} completes the proof. \end{proof} \begin{figure} \caption{Mazur's infinite swindle as $1$-handle slides and $0/1$-handle cancellations at infinity.} \label{fig:hut_pf} \end{figure} Finally, we come to the heart of the proof of the hyperplane unknotting theorem. Mazur's infinite swindle~\cite{M59} is realized as $1$-handle slides and $0/1$-handle cancellations at infinity. Figure~\ref{fig:hut_pf} proves that $(A,a)$ is \textsc{cat} homeomorphic to $({\mathbb R}^n,c)$. In Figure~\ref{fig:hut_pf}, the horizontal region is a copy of ${\mathbb R}^n$. The first, third, and fifth isomorphisms in Figure~\ref{fig:hut_pf} hold by Theorem~\ref{maincancel}. The second and fourth isomorphisms hold by Lemma~\ref{ABcancel}. With $(A,a)\cong({\mathbb R}^n,c)$, Figure~\ref{fig:hut_cancel} implies that $(B,b)\cong({\mathbb R}^n,c)$. By Lemma~\ref{pairs_std}, our proof of the hyperplane unknotting theorem is complete.\\ \end{document}
\begin{document} \title[Locally $2$-transitive graphs]{On the Stabilisers of Locally $2$-Transitive Graphs} \thanks{{\it 2010 Mathematics subject classification}: 05E18, 20B25} \thanks{This work forms a part of an ARC grant project.} \author[Song]{Shu Jiao Song} \address{School of Mathematics and Statistics\\ The University of Western Australia\\ Crawley, WA 6009, Australia} \email{[email protected]} \date\today \maketitle \begin{abstract} For a connected locally $(G,s)$-arc-transitive graph ${\it\Gamma}$ with $s\geqslant 2$ and an edge $\{v,w\}$, determining the amalgam $(G_v,G_w,G_{vw})$ is a fundamental problem in the area of symmetric graph theory, but it is very difficult. In this paper, we give a classification of $(G_v,G_w,G_{vw})$ in the case where the vertex stabilisers $G_v$ and $G_w$ are faithful on their neighbourhoods, which shows that, except for the case $G_v\cong G_w$, there are exactly $16$ such triples. \end{abstract} \section{Introduction}\label{Intro} Let ${\it\Gamma}=(V,E)$ be a connected undirected simple graph.
An {\it $s$-arc} of ${\it\Gamma}$ is an $(s+1)$-tuple $(v_0,v_1,\dots,v_s)$ of vertices such that $\{v_{i-1},v_i\}\in E$ for $1\leqslant i\leqslant s$ and $v_{i-1}\not=v_{i+1}$ for $1\leqslant i\leqslant s-1$. For a group $G\leqslant{\sf Aut}{\it\Gamma}$, the graph ${\it\Gamma}$ is called {\it locally $(G,s)$-arc-transitive} if, for any vertex $v\in V$, the vertex stabiliser $G_v$ acts transitively on the set of $t$-arcs starting at $v$, for any $t$ with $1\leqslant t\leqslant s$. A locally $(G,s)$-arc-transitive graph is called {\it $(G,s)$-arc-transitive} if $G$ is also transitive on the vertex set. Let ${\it\Gamma}(v)$ be the set of vertices adjacent to $v$, and $G_v^{{\it\Gamma}(v)}$ the permutation group on ${\it\Gamma}(v)$ induced by $G_v$. For a connected locally $(G,s)$-arc-transitive graph ${\it\Gamma}$ and an edge $\{v,w\}$, the triple $(G_v,G_w,G_{vw})$ is called the {\it amalgam} of $G$, and also of ${\it\Gamma}$. A fundamental problem for studying locally $s$-arc-transitive graphs is to determine their amalgams.
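Stabiliser data of this kind can be checked mechanically for small graphs. As an illustrative sketch (not taken from the paper; the vertex labelling and generators are our own), the following SymPy computation finds the amalgam of the $3$-cube $Q_3$, a $2$-arc-transitive graph of valency $3$:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Aut(Q_3) for the 3-cube: vertices are the 3-bit integers 0..7; generators
# are the XOR-with-001 translation and two permutations of the coordinates.
flip = Permutation([[0, 1], [2, 3], [4, 5], [6, 7]], size=8)  # x -> x ^ 1
swap = Permutation([[1, 2], [5, 6]], size=8)                  # swap bits 0, 1
rot = Permutation([[1, 2, 4], [3, 6, 5]], size=8)             # cycle the bits
G = PermutationGroup(flip, swap, rot)
assert G.order() == 48        # |Aut(Q_3)| = 2^3 * 3!

# Amalgam data for the edge {0, 1}:
Gv = G.stabilizer(0)          # vertex stabiliser, isomorphic to S_3
Gvw = Gv.stabilizer(1)        # arc stabiliser
assert (Gv.order(), Gvw.order()) == (6, 2)
```

Here $G_v\cong{\rm S}_3$ acts $2$-transitively on the three neighbours of a vertex, as local $2$-arc-transitivity requires, and the amalgam is $({\rm S}_3,{\rm S}_3,{\bf Z}_2)$.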
van Bon \cite{vanBon-2} proved that if $G_{vw}^{[1]}=1$ then ${\it\Gamma}$ is not locally $(G,s)$-arc-transitive. Potocnik \cite{Potocnik} determined the amalgams for valency $\{3,4\}$ in the case where $G_{vw}^{[1]}=1$. This paper is one of a series of papers which aim to classify the amalgams $(G_v,G_w,G_{vw})$ with trivial edge kernel. Assume that ${\it\Gamma}$ is locally $(G,s)$-arc-transitive with $s\geqslant2$. For a vertex $v$ of ${\it\Gamma}$, denote by $G_v^{[1]}$ the kernel of $G_v$ acting on ${\it\Gamma}(v)$. Then $G_v^{{\it\Gamma}(v)}\cong G_v/G_v^{[1]}$, and $G_v^{{\it\Gamma}(v)}$ is a $2$-transitive permutation group. As usual, denote by $X^{(\infty)}$ the smallest normal subgroup of $X$ such that $X/X^{(\infty)}$ is soluble. Let ${\sf rad}(H)$ be the soluble radical of a group $H$, that is, the largest soluble normal subgroup of $H$. Here we generalise a classical result for primitive permutation groups, see \cite[Section 18]{Wielandt}, which shows that some information on $(G_v,G_w,G_{vw})$ can be obtained from the permutation groups $G_v^{{\it\Gamma}(v)}$ and $G_w^{{\it\Gamma}(w)}$. \begin{theorem}\label{key-lem} Let ${\it\Gamma}$ be a connected $G$-edge-transitive graph, where $G\leqslant{\sf Aut}{\it\Gamma}$.
Then, for an edge $\{v,w\}$ of ${\it\Gamma}$, the following statements hold. \begin{itemize} \item[(1)] Each composition factor of $G_v$ is a composition factor of $G_v^{{\it\Gamma}(v)}$, $G_{vw}^{{\it\Gamma}(w)}$ or $G_{vw}^{{\it\Gamma}(v)}$. \item[(2)] Each composition factor of $G_{vw}$ is a composition factor of $G_{vw}^{{\it\Gamma}(w)}$ or $G_{vw}^{{\it\Gamma}(v)}$. \end{itemize} \end{theorem} \begin{theorem}\label{faith-stab} Let ${\it\Gamma}$ be a connected locally $(G,2)$-arc-transitive graph, and let $\{v,w\}$ be an edge of ${\it\Gamma}$. Assume that $G_v^{[1]}=G_w^{[1]}=1$. Then either $G_v\cong G_w$ and ${\it\Gamma}$ is regular, or $(G_v,G_w,G_{vw})$ is one of the sixteen triples listed in Table~$\ref{Intro}$.
\end{theorem} \[\begin{array}{|cccccccc|}\hline &G_v&G_w&G_{vw}&|{\it\Gamma}(v)|&|{\it\Gamma}(w)|&G_{w_{-1}vw}&G_{vwv_1} \\ \hline 1 & 3^2{:}{\rm SL}_2(3) & 5^2{:}{\rm SL}_2(3) & {\rm SL}_2(3) & 3^2 & 5^2 & {\bf Z}_3 & 1\\ 2 & 3^2{:}{\rm GL}_2(3) & 5^2{:}{\rm GL}_2(3) & {\rm GL}_2(3) & 3^2 & 5^2 & {\rm S}_3& {\bf Z}_2\\ 3 & 3^2{:}{\rm GL}_2(3) & 7^2{:}{\rm GL}_2(3) & {\rm GL}_2(3) & 3^2 & 7^2 & {\rm S}_3 & 1\\ 4 & 5^2{:}{\rm GL}_2(3) & 7^2{:}{\rm GL}_2(3) & {\rm GL}_2(3) & 5^2 & 7^2 & {\bf Z}_2 & 1 \\ \hline 5&{\rm A}_6&{\rm PSL}_2(11)&{\rm A}_5&6&11&{\rm A}_4&{\rm D}_6 \\ 6&{\rm PSL}_2(11) &2^4{:}{\rm A}_5&{\rm A}_5&11&16&{\rm D}_6&{\bf Z}_2^2 \\ 7&{\rm A}_6&2^4.{\rm A}_5&{\rm A}_5&6&16&{\rm A}_4&{\bf Z}_2^2 \\ 8&{\rm S}_6&2^4.{\rm S}_5&{\rm S}_5&6&16&{\rm S}_4&{\rm D}_8 \\ 9&{\rm A}_7&2^4.{\rm A}_6&{\rm A}_6&7&16&{\rm A}_5&{\rm S}_4 \\ 10&{\rm S}_7&2^4.{\rm S}_6&{\rm S}_6&7&16&{\rm S}_5&2\times{\rm S}_4 \\ 11&{\rm A}_8&2^4.{\rm A}_7&{\rm A}_7&8&16&{\rm A}_6&{\rm PSL}_2(7) \\ 12&{\rm A}_9&2^4.{\rm A}_8&{\rm A}_8&9&16&{\rm A}_7&2^3{:}{\rm GL}_3(2) \\ 13& {\rm S}_9 & {\rm Sp}_6(2) &{\rm S}_8 & 9 &36 & {\rm S}_7 & ({\rm S}_4\times{\rm S}_4).2\\ \hline 14&{\rm A}_7 & 2^3{:}{\rm GL}_3(2) & {\rm PSL}_2(7) & 15 & 8 & {\rm A}_4 & {\rm S}_4\\ 15&5^2{:}{\rm SL}_2(5)&{11}^2{:}{\rm SL}_2(5)&{\rm SL}_2(5)&5^2&11^2&{\bf Z}_5&1\\ 16&{13}^2{:}{\rm SL}_2(13)&3^6{:}{\rm SL}_2(13)&{\rm SL}_2(13)&13^2&3^6&{\bf Z}_{13}&{\bf Z}_3\\ \hline \end{array}\] \centerline{\bf Table~\ref{Intro}} \vskip0.1in \begin{corollary}\label{pair-2-trans} Let $P$ and $Q$ be two non-isomorphic $2$-transitive permutation groups whose point stabilisers are isomorphic to the same group $H$. Then $(P,Q,H)$ is one of the triples $(G_v,G_w,G_{vw})$ in Table~$\ref{Intro}$. \end{corollary} For locally $3$-arc-transitive graphs, there are only seven such amalgams. \begin{corollary}\label{faith-stab-3-trans} Let ${\it\Gamma}$ be a connected locally $(G,3)$-arc-transitive graph, and let $\{v,w\}$ be an edge of ${\it\Gamma}$. Assume that $G_v^{[1]}=G_w^{[1]}=1$.
Then $(G_v,G_w,G_{vw})$ is one of the following triples: $({\rm A}_7,{\rm A}_7,{\rm A}_6)$, $({\rm S}_7,{\rm S}_7,{\rm S}_6)$, $({\rm A}_7,2^4{:}{\rm A}_6,{\rm A}_6)$, $({\rm S}_7,2^4{:}{\rm S}_6,{\rm S}_6)$, $({\rm A}_8,2^4{:}{\rm A}_7,{\rm A}_7)$, $({\rm A}_9,2^4{:}{\rm A}_8,{\rm A}_8)$, or $({\rm S}_9,{\rm Sp}_6(2),{\rm S}_8)$. \end{corollary} \section{Examples} Let $H$ be a $2$-transitive permutation group of degree $n$, and let $p$ be a prime which is coprime to the order $|H|$. Let $V={\mathbb F}_p^n$ be the permutation module of $H$ over ${\mathbb F}_p$. Then $H\leqslant{\rm GL}_n(p)$. Let $\sigma$ be the unique involution in ${\bf Z}({\rm GL}_n(p))$. Then $\sigma$ reverses every vector in $V$, and $\langle \sigma,H\rangle=\langle \sigma\rangle\times H$ since $\sigma$ centralises $H$. Define the group \[X={\bf Z}_p^n{:}(\langle \sigma\rangle\times H).\] Let $a_1,a_2,\dots,a_n$ be a basis for $V={\bf Z}_p^n$. Then $a_i^\sigma=a_i^{-1}$, and $H$ is $2$-transitive on the basis. Let \[g=a_1\sigma.\] Then $g$ is an involution, since $g^2=a_1a_1^\sigma=a_1a_1^{-1}=1$.
Let \[S=\{a_1\sigma,a_2\sigma,\dots,a_n\sigma\},\] and let $R=\langle S\rangle$. Then $(a_i\sigma)(a_j\sigma)=a_ia_j^{-1}\in R$, and $H$ acts $2$-transitively on $S$. Now $H$ induces a subgroup of ${\sf Aut}(R)$. Let ${\it\Gamma}={\sf Cay}(R,S)$, and \[G=R{:}H.\] Thus ${\it\Gamma}$ is a $(G,2)$-arc-transitive Cayley graph of $R$ with vertex stabiliser $H$. \begin{proposition} Any $2$-transitive permutation group is the vertex stabiliser of a $2$-arc-transitive graph. \end{proposition} \begin{example} {\rm Let $(G_v,G_w,G_{vw})$ be one of the triples listed in Table~\ref{Intro}, except for rows~5, 6, 10, 12 and 13. Then there exists a group $G$ such that $G=G_vG_w$, and thus there is a complete bipartite graph ${\it\Gamma}={\sf K}_{m,n}$ which is locally $(G,2)$-arc-transitive. These examples are given in the next table.
} \end{example} \[\begin{array}{|ccccc|}\hline G & G_v&G_w&G_{vw}& {\it\Gamma} \\ \hline (3^2\times 5^2){:}{\rm SL}_2(3)& 3^2{:}{\rm SL}_2(3) & 5^2{:}{\rm SL}_2(3) & {\rm SL}_2(3) & {\sf K}_{3^2,5^2}\\ (3^2\times 5^2){:}{\rm GL}_2(3)& 3^2{:}{\rm GL}_2(3) & 5^2{:}{\rm GL}_2(3) & {\rm GL}_2(3) & {\sf K}_{3^2,5^2}\\ (3^2\times 7^2){:}{\rm GL}_2(3) & 3^2{:}{\rm GL}_2(3) & 7^2{:}{\rm GL}_2(3) & {\rm GL}_2(3) & {\sf K}_{3^2,7^2}\\ (5^2\times 7^2){:}{\rm GL}_2(3) & 5^2{:}{\rm GL}_2(3) & 7^2{:}{\rm GL}_2(3) & {\rm GL}_2(3) & {\sf K}_{5^2,7^2} \\ 2^4{:}{\rm A}_6&{\rm A}_6&2^4.{\rm A}_5&{\rm A}_5&{\sf K}_{6,2^4} \\ 2^4{:}{\rm S}_6&{\rm S}_6&2^4.{\rm S}_5&{\rm S}_5&{\sf K}_{6,2^4} \\ 2^4{:}{\rm A}_7&{\rm A}_7&2^4.{\rm A}_6&{\rm A}_6&{\sf K}_{7,2^4} \\ 2^4{:}{\rm A}_8&{\rm A}_8&2^4.{\rm A}_7&{\rm A}_7&{\sf K}_{8,2^4} \\ {\rm A}_8 &{\rm A}_7 & 2^3{:}{\rm GL}_3(2) & {\rm GL}_3(2) & {\sf K}_{7,15}\\ (5^2\times11^2){:}{\rm SL}_2(5)&5^2{:}{\rm SL}_2(5)&{11}^2{:}{\rm SL}_2(5)&{\rm SL}_2(5)&{\sf K}_{5^2,11^2}\\ (13^2\times 3^6){:}{\rm SL}_2(13)&{13}^2{:}{\rm SL}_2(13)&3^6{:}{\rm SL}_2(13)&{\rm SL}_2(13)&{\sf K}_{13^2,3^6}\\ \hline \end{array}\] \vskip0.1in \noindent {\bf Problem.} Construct explicit examples of graphs with the amalgams in row~5, 6, 10, 12, or 13 in Table~\ref{Intro}. \vskip0.1in \begin{example} {\rm It is well known that the triple $({\rm A}_7,{\rm A}_7,{\rm A}_6)$ is the amalgam of both the complete graph ${\sf K}_8$ and the Hoffman--Singleton graph. The former is $2$-arc-transitive but not $3$-arc-transitive, while the latter is $3$-arc-transitive. The same phenomenon appears for $({\rm S}_7,{\rm S}_7,{\rm S}_6)$. } \end{example} \section{Composition factors of the stabilisers}\label{B-pty} Let ${\it\Gamma}=(V,E)$ be a connected $G$-edge-transitive bipartite graph with biparts $U$ and $W$. For vertices $v_0,v_1,\dots,v_\ell$ of ${\it\Gamma}$, let \[G_{v_0v_1\dots v_\ell}^{[1]}=G_{v_0}^{[1]}\cap G_{v_1}^{[1]}\cap\dots\cap G_{v_\ell}^{[1]}.\] Then, for an edge $\{v,w\}$, the group $G_{vw}^{[1]}$ is the pointwise stabiliser of the vertex subset ${\it\Gamma}(v)\cup{\it\Gamma}(w)$. \begin{lemma}\label{chain-1} Let $v_0,v_1,\dots,v_\ell$ be an $\ell$-arc, where $\ell$ is a positive integer.
Then we have a chain of normal subgroups: \[G_{v_0v_1\dots v_\ell}^{[1]}\lhd G_{v_0v_1\dots v_{\ell-1}}^{[1]} \lhd\dots\lhd G_{v_0v_1}^{[1]}\lhd G_{v_0}^{[1]}\lhd G_{v_0}.\] \end{lemma} \par\noindent{\it Proof.\ \ } For each integer $i$ with $0\leqslant i\leqslant\ell$, the group $G_{v_0v_1\dots v_i}^{[1]}$ is the kernel of $G_{v_0v_1\dots v_i}$ acting on ${\it\Gamma}(v_0)\cup{\it\Gamma}(v_1)\cup\dots\cup{\it\Gamma}(v_i)$, and hence \[G_{v_0v_1\dots v_i}^{[1]}\lhd G_{v_0v_1\dots v_i}.\] For $i\geqslant 1$, we have the inclusion relation $G_{v_0v_1\dots v_i}^{[1]}\leqslant G_{v_0v_1\dots v_{i-1}}^{[1]}\leqslant G_{v_0v_1\dots v_i}$, and we thus conclude that $G_{v_0v_1\dots v_i}^{[1]}\lhd G_{v_0v_1\dots v_{i-1}}^{[1]}$.
This holds for all possible values of $i$ with $1\leqslant i\leqslant\ell$, giving the chain of normal subgroups as claimed. \qed \begin{lemma}\label{chain-2} Let $\{v,w\}$ be an edge, and let $v_0,v_1,\dots,v_\ell$ be an $\ell$-arc, where $\ell$ is a positive integer. Then for $0\leqslant i\leqslant \ell-1$, the factor group $G_{v_0v_1\dots v_i}^{[1]}/G_{v_0v_1\dots v_{i+1}}^{[1]}$ is isomorphic to a subnormal subgroup of $G_{vw}^{{\it\Gamma}(v)}$ or $G_{vw}^{{\it\Gamma}(w)}$. \end{lemma} \par\noindent{\it Proof.\ \ } Observing that $G_{v_0v_1\dots v_i}^{[1]}\lhd\lhd G_{v_i}^{[1]}\lhd G_{v_iv_{i+1}}$, we obtain $$G_{v_0v_1\dots v_i}^{[1]}/G_{v_0v_1\dots v_{i+1}}^{[1]} \cong (G_{v_0v_1\dots v_i}^{[1]})^{{\it\Gamma}(v_{i+1})} \lhd\lhd G_{v_iv_{i+1}}^{{\it\Gamma}(v_{i+1})}.$$ Since $G$ is edge-transitive on ${\it\Gamma}$, we have $G_{v_iv_{i+1}}\cong G_{vw}$, and so \[\mbox{$G_{v_iv_{i+1}}^{{\it\Gamma}(v_{i+1})}\cong G_{vw}^{{\it\Gamma}(w)}$ or $G_{vw}^{{\it\Gamma}(v)}$.}\] Therefore, $G_{v_0v_1\dots v_i}^{[1]}/G_{v_0v_1\dots v_{i+1}}^{[1]}$ is isomorphic to a subnormal subgroup of $G_{vw}^{{\it\Gamma}(v)}$ or $G_{vw}^{{\it\Gamma}(w)}$. \qed \vskip0.1in {\bf Proof of Theorem~\ref{key-lem}:} First of all, we claim that each composition factor of $G_{v}^{[1]}$ is isomorphic to a composition factor of $G_{vw}^{{\it\Gamma}(v)}$ or $G_{vw}^{{\it\Gamma}(w)}$. Let $T$ be a composition factor of $G_v^{[1]}$. Since $G\leqslant{\sf Aut}{\it\Gamma}$ and ${\it\Gamma}$ is finite and connected, there exists a path $v_0=v,v_1,\dots,v_\ell$ such that $G_{v_0v_1\dots v_\ell}^{[1]}=1$. By Lemma~\ref{chain-1}, we have a chain of normal subgroups: \[1=G_{v_0v_1\dots v_\ell}^{[1]}\lhd G_{v_0v_1\dots v_{\ell-1}}^{[1]} \lhd\dots\lhd G_{v_0v_1}^{[1]}\lhd G_{v_0}^{[1]}=G_v^{[1]}.\] Let $i$ be the largest integer with $0\leqslant i\leqslant\ell$ such that $T$ is a composition factor of $G_{v_0v_1\dots v_i}^{[1]}$.
Then $T$ is not a composition factor of $G_{v_0v_1\dots v_{i+1}}^{[1]}$. Hence $T$ is a composition factor of the quotient group $G_{v_0v_1\dots v_i}^{[1]}/G_{v_0v_1\dots v_{i+1}}^{[1]}$. By Lemma~\ref{chain-2}, $G_{v_0v_1\dots v_i}^{[1]}/G_{v_0v_1\dots v_{i+1}}^{[1]}$ is isomorphic to a subnormal subgroup of $G_{vw}^{{\it\Gamma}(w)}$ or $G_{vw}^{{\it\Gamma}(v)}$, and so $T$ is a composition factor of $G_{vw}^{{\it\Gamma}(w)}$ or $G_{vw}^{{\it\Gamma}(v)}$, as claimed. Now let $T_1$ be a composition factor of $G_v$. If $T_1$ is a composition factor of $G_v/G_v^{[1]}\cong G_v^{{\it\Gamma}(v)}$, then we are done. We thus assume that $T_1$ is not a composition factor of $G_v^{{\it\Gamma}(v)}\cong G_v/G_v^{[1]}$. Then $T_1$ is a composition factor of $G_v^{[1]}$, and so $T_1$ is a composition factor of $G_{vw}^{{\it\Gamma}(w)}$ or $G_{vw}^{{\it\Gamma}(v)}$. This proves part~(1). Similarly, let $T_2$ be a composition factor of $G_{vw}$. If $T_2$ is a composition factor of $G_{vw}/G_v^{[1]}\cong G_{vw}^{{\it\Gamma}(v)}$, then we are done. Otherwise $T_2$ is a composition factor of $G_v^{[1]}$, and so $T_2$ is a composition factor of $G_{vw}^{{\it\Gamma}(w)}$ or $G_{vw}^{{\it\Gamma}(v)}$. This proves part~(2) of Theorem~\ref{key-lem}. \qed
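The phenomenon behind Theorem~\ref{key-lem}(1) can be seen in a toy example with a nontrivial kernel $G_v^{[1]}$. In the star ${\sf K}_{1,3}$ (our own choice of example, checked with SymPy), a leaf stabiliser acts trivially on its single neighbour, so its composition factor ${\bf Z}_2$ does not appear in $G_v^{{\it\Gamma}(v)}$ and must instead appear in $G_{vw}^{{\it\Gamma}(w)}$:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Star K_{1,3}: leaves 0, 1, 2 and centre 3.  Its automorphism group is
# Sym({0,1,2}), permuting the leaves and fixing the centre.
G = PermutationGroup(Permutation([[0, 1]], size=4),
                     Permutation([[0, 1, 2]], size=4))
assert G.order() == 6

# Take the edge {v, w} with v = leaf 0 and w = centre 3.
Gv = G.stabilizer(0)       # leaf stabiliser Sym({1,2}), order 2
Gvw = Gv.stabilizer(3)     # equals Gv, since every element fixes the centre

# Gamma(v) = {3}, so G_v acts trivially there: G_v^[1] = G_v, with
# composition factor Z_2.  That factor reappears in the action of G_{vw}
# on Gamma(w) = {0,1,2}, where the transposition (1 2) acts faithfully.
assert Gv.order() == 2 and Gvw.order() == 2
```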
\begin{corollary}\label{key-lem-2} Let ${\it\Gamma}$ be a connected graph, and let $G\leqslant{\sf Aut}{\it\Gamma}$ be transitive on the edge set. Let $\{v,w\}$ be an edge of ${\it\Gamma}$. Then the following statements hold: \begin{itemize} \item[(i)] The vertex stabiliser $G_v$ is soluble if and only if $G_v^{{\it\Gamma}(v)}$ and $G_{vw}^{{\it\Gamma}(w)}$ are both soluble. \item[(ii)] $G_{vw}$ is soluble if and only if $G_{vw}^{{\it\Gamma}(v)}$ and $G_{vw}^{{\it\Gamma}(w)}$ are both soluble. \item[(iii)] $G_v$ and $G_w$ are both soluble if and only if $G_v^{{\it\Gamma}(v)}$ and $G_w^{{\it\Gamma}(w)}$ are both soluble. \item[(iv)] $G_v^{[1]}$ and $G_w^{[1]}$ are both soluble if and only if $(G_v^{[1]})^{{\it\Gamma}(w)}$ and $(G_w^{[1]})^{{\it\Gamma}(v)}$ are both soluble. \end{itemize} \end{corollary} \par\noindent{\it Proof.\ \ } Parts~(i) and~(ii) follow from Theorem~\ref{key-lem}\,(1) and~(2), respectively. For part~(iii), it is obvious that if $G_v$ and $G_w$ are both soluble, then $G_v^{{\it\Gamma}(v)}$ and $G_w^{{\it\Gamma}(w)}$ are both soluble.
So it is sufficient to show that if both $G_v^{{\it\Gamma}(v)}$ and $G_w^{{\it\Gamma}(w)}$ are soluble, then both $G_v$ and $G_w$ are soluble. Suppose that $G_v$ is insoluble. Let $T$ be an insoluble composition factor of $G_v$. By Theorem~\ref{key-lem}~(1), $T$ is a composition factor of $G_v^{{\it\Gamma}(v)}$ or $G_{vw}^{{\it\Gamma}(w)}$. Similarly, if $G_w$ is insoluble, then either $G_w^{{\it\Gamma}(w)}$ or $G_{vw}^{{\it\Gamma}(v)}$ is insoluble. It follows that if both $G_v^{{\it\Gamma}(v)}$ and $G_w^{{\it\Gamma}(w)}$ are soluble, then both $G_v$ and $G_w$ are soluble. This proves part~(iii).

Similarly, to prove part~(iv), we only need to prove that if both $(G_v^{[1]})^{{\it\Gamma}(w)}$ and $(G_w^{[1]})^{{\it\Gamma}(v)}$ are soluble then so are $G_v^{[1]}$ and $G_w^{[1]}$. Suppose that $G_v^{[1]}$ is insoluble. Arguing as in the second paragraph with $G_v^{[1]}$ in place of $G_v$ shows that either $(G_v^{[1]})^{{\it\Gamma}(w)}$ or $(G_{vw}^{[1]})^{{\it\Gamma}(v)}$ is insoluble. Similarly, if $G_w^{[1]}$ is insoluble, then either $(G_w^{[1]})^{{\it\Gamma}(v)}$ or $(G_{vw}^{[1]})^{{\it\Gamma}(w)}$ is insoluble.
It follows that if both $(G_v^{[1]})^{{\it\Gamma}(w)}$ and $(G_w^{[1]})^{{\it\Gamma}(v)}$ are soluble, then so are $G_v^{[1]}$ and $G_w^{[1]}$.
\qed

The next lemma refines Lemma~\ref{chain-2}.

\begin{lemma}\label{chain-3}
Let $\{u,w\}$ be an edge with $u\in U$ and $w\in W$, and let
\[u_0,w_0,\dots,w_{i-1},u_i,w_i,u_{i+1},\dots\]
be a path such that $u_0=u$ and $w_0=w$. Then for $i\geqslant 0$, the following are true:
\begin{itemize}
\item[(i)] $G_{u_0\dots u_i}^{[1]}/G_{u_0\dots u_iw_i}^{[1]}$ is isomorphic to a subnormal subgroup of $G_{uw}^{{\it\Gamma}(w)}$.
\item[(ii)] $G_{u_0\dots w_i}^{[1]}/G_{u_0\dots w_iu_{i+1}}^{[1]}$ is isomorphic to a subnormal subgroup of $G_{uw}^{{\it\Gamma}(u)}$.
\end{itemize}
\end{lemma}
\par\noindent{\it Proof.\ \ }
We notice that
\[\begin{array}{l}
G_{u_0\dots u_i}^{[1]}/G_{u_0\dots u_iw_i}^{[1]}\cong (G_{u_0\dots u_i}^{[1]})^{{\it\Gamma}(w_i)}\lhd\lhd\, G_{u_iw_i}^{{\it\Gamma}(w_i)}\cong G_{uw}^{{\it\Gamma}(w)},\\
G_{u_0\dots w_i}^{[1]}/G_{u_0\dots w_iu_{i+1}}^{[1]}\cong (G_{u_0\dots w_i}^{[1]})^{{\it\Gamma}(u_{i+1})}\lhd\lhd\, G_{w_iu_{i+1}}^{{\it\Gamma}(u_{i+1})}\cong G_{uw}^{{\it\Gamma}(u)}.
\end{array}\]
Then the proof follows.
\qed

\section{2-Transitive permutation groups}\label{2-trans}

The proof of Theorem~\ref{faith-stab} depends on the structural properties of the stabilisers of 2-transitive permutation groups. We state the point stabilisers in the following two theorems; see \cite[Tables~7.3 and 7.4]{Cameron-book} and \cite{LiSS}. The {\it socle} of a group $X$ is the subgroup of $X$ generated by all minimal normal subgroups of $X$, denoted by ${\sf soc}(X)$.

\begin{theorem}\label{AS-2-trans}
Let $X$ be an almost simple $2$-transitive permutation group on ${\it\Omega}$ of degree $k$, and let $T={\sf soc}(X)$ and $\o,\o'\in{\it\Omega}$. Then $X/T\cong X_\o/T_\o\cong X_{\o\o'}/T_{\o\o'}\leqslant {\mathcal O}\leqslant \Out(T)$, and further, they satisfy the following table:
\end{theorem}
{\small
\[\begin{array}{|c|c|c|c|c|}\hline
\mbox{row} & T & T_\o & T_{\o\o'} & k\\ \hline
1& {\rm A}_n,\ n\geqslant 5 & {\rm A}_{n-1} & {\rm A}_{n-2} & n \\ \hline
2& {\rm PSL}_n(q) & q^{n-1}{:}({\rm SL}_{n-1}(q).{q-1\over(n,q-1)}) & [q^{2(n-2)}]{:}({\rm GL}_{n-2}(q).{q-1\over(n,q-1)})& {q^n-1\over q-1} \\ \hline
3& {\rm Sz}(q),\ q>2 & [q^2]{:}{\bf Z}_{q-1} & {\bf Z}_{q-1} & q^2+1 \\
4& {\rm Ree}(q),\ q>3 & [q^3]{:}{\bf Z}_{q-1} & {\bf Z}_{q-1} & q^3+1 \\
5& \PSU_3(q),\ q\geqslant 3 & [q^3]{:}{\bf Z}_{(q^2-1)/(3,q+1)} & {\bf Z}_{(q^2-1)/(3,q+1)} & q^3+1 \\ \hline
6& {\rm Sp}_{2n}(2), & {\rm P\Omega}_{2n}^-(2).2 & 2^{2n-2}{:}{\rm P\Omega}_{2n-2}^-(2).2 & 2^{2n-1}-2^{n-1} \\
7& n\geqslant3 & {\rm P\Omega}_{2n}^+(2).2 & 2^{2n-2}{:}{\rm P\Omega}_{2n-2}^+(2).2 & 2^{2n-1}+2^{n-1} \\ \hline
8& {\rm PSL}_2(11) & {\rm A}_5 & {\rm S}_3 & 11 \\
9& \M_{11} & \M_{10} & 3^2{:}\Q_8 & 11 \\
10& \M_{11} & {\rm PSL}_2(11) & {\rm A}_5 & 12 \\
11& \M_{12} & \M_{11} & \M_{10} & 12 \\
12& {\rm A}_7 & {\rm PSL}_2(7) & {\rm A}_4 & 15 \\
13& \M_{22} & {\rm PSL}_3(4) & 2^4{:}{\rm A}_5 & 22 \\
14& \M_{23} & \M_{22} & {\rm PSL}_3(4) & 23 \\
15& \M_{24} & \M_{23} & \M_{22} & 24 \\
16& {\rm PSL}_2(8) & {\rm D}_{18} & {\bf Z}_2 & 28 \\
17& {\rm HS} & \PSU_3(5).2 & {\rm A}_6.2^2 & 176 \\
18& {\rm Co}_3 & {\rm McL}{:}2 & \PSU_4(3){:}2 & 276 \\ \hline
\end{array}\]
\nobreak
{\small \centerline{{\bf Table~\ref{2-trans}.1}: Almost simple 2-transitive permutation groups and their stabilisers}}
}
\vskip0.08in
A complete list of the finite affine 2-transitive permutation groups was obtained by Hering; see for example \cite{Liebeck}. Here we list their point stabilisers.

\begin{theorem}\label{HA-2-trans}
Let $X={\bf Z}_p^e{:}X_\o$ be an affine $2$-transitive permutation group of degree $p^e$ on a set ${\it\Omega}$, where $\o\in{\it\Omega}$.
Then for a point $\o'\in{\it\Omega}\setminus\{\o\}$, either
\begin{enumerate}
\item[(1)] $X_\o\leqslant {\Gamma}{\rm L}_1(p^e)\cong{\bf Z}_{p^e-1}{:}{\bf Z}_e$, and $X_{\o\o'}\lesssim{\bf Z}_e$, or
\item[(2)] $X_\o$ and $X_{\o\o'}$ are as in Table~$\ref{2-trans}.2$.
\end{enumerate}
\end{theorem}
{\small
\[\begin{array}{|c|c|c|c|c|}\hline
\mbox{\rm row} & X_\o & X_{\o\o'} & p^e & \\ \hline
1 & {\rm SL}_n(q).{\mathcal O} & q^{n-1}{:}{\rm GL}_{n-1}(q).{\mathcal O} & q^n & {\mathcal O}\leqslant{\bf Z}_{q-1}{:}{\bf Z}_e\\
2 & {\rm Sp}_{2n}(q).{\mathcal O} & q.q^{2n-2}{:}({\rm GL}_1(q)\times {\rm Sp}_{2n-2}(q)).{\mathcal O} &q^{2n} & {\mathcal O}\leqslant {\bf Z}_e\\
3 & \G_2(q).{\mathcal O} & [q^5]{:}{\rm GL}_2(q).{\mathcal O} & q^6 & {\mathcal O}\leqslant {\bf Z}_{q-1}{:}{\bf Z}_{(3,q)e}\\ \hline
4& \Q_8{:}3, \ \Q_8.6,\ (\Q_8{:}3).4 & 1,\ {\bf Z}_2,\ [4]& 5^2& \\
5 &\Q_8{:}{\rm S}_3,\ 3\times(\Q_8.2),\ 3\times(\Q_8{:}{\rm S}_3) & 1,\ 1, \ {\bf Z}_3 & 7^2&\\
6&5\times(\Q_8{:}3), \ 5\times(\Q_8{:}{\rm S}_3) & 1, \ {\bf Z}_2 & 11^2&\\
7&{11}\times(\Q_8{:}{\rm S}_3) & 1 & 23^2&\\ \hline
8 &2^{1+4}{:}5,\ 2^{1+4}{:}{\rm D}_{10},\ 2^{1+4}{:}(5{:}4),& 2,\ [4],\ [8],& 3^4 &\\
&2^{1+4}{:}{\rm A}_5,\ 2^{1+4}{:}{\rm S}_5 &2.{\rm A}_4,\ 2.{\rm S}_4&& \\ \hline
9&{\rm SL}_2(13) & {\bf Z}_3 & 3^6 &\\
10&{\rm A}_6 & {\rm S}_4 & 2^4 &\\
11&{\rm A}_7 & {\rm PSL}_2(7) & 2^4 & \\
12&\PSU_3(3) & 4.{\rm S}_4,\ 4^2.{\rm S}_3 & 2^6 &\\ \hline
13&{\rm SL}_2(5),\ 5\times{\rm SL}_2(5) & 1,\ {\bf Z}_5 & 11^2 &\\
14& 9\times{\rm SL}_2(5) & {\bf Z}_3 &19^2&\\
15&7\times{\rm SL}_2(5),\ 7\times(4\circ{\rm SL}_2(5)) & 1,\ {\bf Z}_2 &29^2 &\\
16&{29}\times{\rm SL}_2(5) & 1 & 59^2 & \\ \hline
\end{array}\]
}
{\small \centerline{{\bf Table~\ref{2-trans}.2}: Affine 2-transitive groups and their stabilisers} }

\begin{lemma}\label{2-pt-stab}
Let $X$ be a $2$-transitive group on ${\it\Omega}$ of degree $k$, and take $\o,\o'\in{\it\Omega}$. Then the stabiliser $X_{\o\o'}$ has a transitive representation of degree $k-1$ if and only if $X={\rm A}_7$ or ${\rm S}_7$, and $X_{\o\o'}={\rm A}_5$ or ${\rm S}_5$, respectively.
\end{lemma}
\par\noindent{\it Proof.\ \ }
Suppose $X_{\o\o'}$ has a transitive permutation representation of degree $k-1$.
Then $k-1$ divides $|X_{\o\o'}|$. It is clear that $k-1$ does not divide $|X_{\o\o'}|$ for the candidates in rows 3 to 5 and 8 to 18 of Table~\ref{AS-2-trans} and rows 4 to 16 of Table~\ref{HA-2-trans}. Now we inspect the other candidates given in Tables~\ref{AS-2-trans} and \ref{HA-2-trans}.

Suppose ${\sf soc}(X)={\rm A}_k$ as in row 1 of Table~\ref{AS-2-trans}. Then $X_{\o\o'}={\rm A}_{k-2}$ or ${\rm S}_{k-2}$. If ${\rm A}_{k-2}$ or ${\rm S}_{k-2}$ has a transitive permutation representation of degree $k-1$, then $k=7$.

Suppose that ${\sf soc}(X)={\rm PSL}_n(q)$, with $k={q^n-1\over q-1}$, as in row 2 of Table~\ref{AS-2-trans}. Suppose further that $X_{\o\o'}$ has a transitive permutation representation of degree $k-1=\frac{q(q^{n-1}-1)}{q-1}$. If $q^{n-1}-1$ has a primitive prime divisor $r$, say, then $r$ does not divide the order $|X_{\o\o'}|=|[q^{2(n-2)}]{:}({\rm GL}_{n-2}(q).{\bf Z}_{q-1\over(n,q-1)}).{\mathcal O}|$, which is not possible. Thus $q^{n-1}-1$ has no primitive prime divisor, and $n-1=2$ or $(q,n-1)=(2,6)$ by Zsigmondy's theorem. For the former, $k-1=q(q+1)$ and $|X_{\o\o'}|=q^2(q-1)^2/(3,q-1)$, which is not divisible by $q+1$. For the latter, $k-1=126$ and $X_{\o\o'}=[2^{10}]{:}{\rm GL}_5(2)$; but $X_{\o\o'}$ has no transitive permutation representation of degree 126, contradicting our assumption.

Similarly, let $X=q^n{:}{\rm SL}_n(q).{\mathcal O}$ be as in the first row of Table~\ref{HA-2-trans}. If $k-1=q^n-1$ divides the order of $X_{\o\o'}=q^{n-1}{:}{\rm GL}_{n-1}(q).{\mathcal O}$, then $q^n-1$ has no primitive prime divisor. Thus, by Zsigmondy's theorem, $n=2$ or $(q,n)=(2,6)$. For the former, $k-1=q^2-1$ and $|X_{\o\o'}|=q(q-1)$, which is not divisible by $k-1$.
For the latter, $k-1=63$ and $X_{\o\o'}=[2^5]{:}{\rm GL}_4(2)$, which has no transitive permutation representation of degree 63. For the candidates in rows 6 and 7 of Table~\ref{AS-2-trans} and rows 2 and 3 of Table~\ref{HA-2-trans}, we have the following table:
{\small
\[\begin{array}{|c|c|c|}\hline
X_\o & X_{\o\o'} & k-1 \\ \hline
{\rm P\Omega}_{2n}^-(2).2.{\mathcal O} & 2^{n^2-n+1}(2^{n-1}+1)\Pi_{i=1}^{n-2}(2^{2i}-1).{\mathcal O} & 2^{2n-1}-2^{n-1}-1 \\
{\rm P\Omega}_{2n}^+(2).2.{\mathcal O} & 2^{n^2-n+1}(2^{n-1}-1)\Pi_{i=1}^{n-2}(2^{2i}-1).{\mathcal O} & 2^{2n-1}+2^{n-1}-1 \\ \hline
{\rm Sp}_{2n}(q).{\mathcal O} & q^{n^2}(q-1)\Pi_{i=1}^{n-1}(q^{2i}-1).{\mathcal O} &q^{2n}-1 \\
\G_2(q).{\mathcal O} & q^6(q-1)^2(q+1).{\mathcal O} & q^6 -1\\ \hline
\end{array}\]
}
It is easy to see that $k-1$ does not divide the order $|X_{\o\o'}|$. So $X_{\o\o'}$ has no transitive permutation representation of degree $k-1$.\qed

\section{Proof of Theorem~\ref{faith-stab}}\label{main-thm}

Let ${\it\Gamma}$ be a connected locally $(G,2)$-arc-transitive graph, and let $\{v,w\}$ be an edge of ${\it\Gamma}$. Assume that $G_v^{[1]}=G_w^{[1]}=1$. In this section, we give the proof of Theorem~\ref{faith-stab}. We first notice that, since $G_v\cong G_v^{{\it\Gamma}(v)}$ and $G_w\cong G_w^{{\it\Gamma}(w)}$, the stabilisers $G_v,G_w$ are 2-transitive permutation groups which share the common point stabiliser $G_{vw}$. To prove this theorem, we split it into several cases, treated in the following few lemmas.

\begin{lemma}\label{isomorphic socle}
Suppose $G_v$ and $G_w$ have isomorphic socles.
Then $G_v\cong G_w$ and ${\it\Gamma}$ is regular.
\end{lemma}
\par\noindent{\it Proof.\ \ }
Since $G_v$ and $G_w$ are 2-transitive groups, either they are both affine groups or they are both almost simple groups. Suppose ${\sf soc}(G_v)\cong{\sf soc}(G_w)\cong{\bf Z}_p^d$. Then $G_v\cong {\bf Z}_p^d{:}G_{vw}\cong G_w$, and ${\it\Gamma}$ is regular of valency $p^d$. Suppose that $G_v$ and $G_w$ are almost simple and have isomorphic socles. Since they share the point stabiliser $G_{vw}$, we have $G_v\cong G_w$ by checking the candidates in Table~\ref{2-trans}.1.\qed

From now on, we assume that $G_v$ and $G_w$ have non-isomorphic socles. We need to search for different 2-transitive groups which have the same point stabiliser, by inspecting the 2-transitive permutation groups listed in Tables~\ref{2-trans}.1 and \ref{2-trans}.2.

\begin{lemma}\label{both soluble}
Suppose both $G_v$ and $G_w$ are soluble. Then
\[\begin{array}{l|llll|}
G_v & 3^2{:}{\rm SL}_2(3) & 3^2{:}{\rm GL}_2(3) & 3^2{:}{\rm GL}_2(3)& 5^2{:}{\rm GL}_2(3) \\ \hline
G_w & 5^2{:}{\rm SL}_2(3)& 5^2{:}{\rm GL}_2(3) & 7^2{:}{\rm GL}_2(3) & 7^2{:}{\rm GL}_2(3)\\
\end{array}\]
\centerline{Table \ref{main-thm}.1: Soluble stabilisers}
\end{lemma}
\par\noindent{\it Proof.\ \ }
Suppose that $G_v$ and $G_w$ are soluble.
Then, by inspecting the 2-transitive permutation groups listed in Tables~\ref{2-trans}.1 and \ref{2-trans}.2, we have
\[\begin{array}{ll}\hline
G_{vw} & |{\it\Gamma}(v)| \\ \hline
\leqslant {\bf Z}_{p^d-1}{:}{\bf Z}_d & p^d \\
{\rm GL}_2(2) & 2^2 \\
{\rm SL}_2(3),\ {\rm GL}_2(3) & 3^2 \\
\Q_8{:}3, \ \Q_8.6,\ (\Q_8{:}3).4 & 5^2 \\
\Q_8{:}{\rm S}_3,\ 3\times(\Q_8.2),\ 3\times(\Q_8{:}{\rm S}_3) & 7^2 \\
5\times(\Q_8{:}3), \ 5\times(\Q_8{:}{\rm S}_3) & 11^2 \\
{11}\times(\Q_8{:}{\rm S}_3) & 23^2 \\
2^{1+4}{:}5,\ 2^{1+4}{:}{\rm D}_{10},\ 2^{1+4}{:}(5{:}4) & 3^4 \\ \hline
\end{array}\]
\nobreak
\centerline{Table \ref{main-thm}.2}
\vskip0.08in
We note that the candidate in the second line, i.e., $G_{vw}={\rm GL}_2(2)$, is the same as the one in the first line with $p^d=2^2$ and $G_{vw}={\bf Z}_{p^d-1}{:}{\bf Z}_2\cong{\rm S}_3$. Assume first that $G_{vw}\leqslant{\bf Z}_{p^d-1}{:}{\bf Z}_d$ with $|{\it\Gamma}(v)|=p^d$. Then $G_{vw}$ is a split metacyclic group, and so $G_{vw}$ cannot appear as one of rows~3-8 of Table \ref{main-thm}.2 in this case. We thus have that $G_{vw}$ appears only in one of rows~3-8 of Table \ref{main-thm}.2.
It follows that $G_v,G_w$ are as in Table \ref{main-thm}.1.\qed

\begin{lemma}\label{one insoluble}
Suppose $G_v$ is insoluble. Then $G_{vw}$ is insoluble.
\end{lemma}
\par\noindent{\it Proof.\ \ }
Suppose $G_{vw}$ is soluble. Then one of the candidates in the following table appears:
\[\begin{array}{lllll}\hline
G_v & G_{vw} & |{\it\Gamma}(v)| & {\mathcal O} & \\ \hline
{\rm A}_5,\ {\rm S}_5 & {\rm A}_4,\ {\rm S}_4 & 5 & & \\
{\rm PSL}_3(3).{\mathcal O} & 3^2{:}{\rm SL}_2(3).{\mathcal O}& 26 &&\\
{\rm PSL}_2(q).{\mathcal O} & [q]{:}{\bf Z}_{(q-1)/(2,q-1)}.{\mathcal O} & q+1 & {\mathcal O}\leqslant (2,q-1).f & q=p^f\geqslant4\\
{\rm Sz}(q).{\mathcal O} & [q^2]{:}{\bf Z}_{q-1}.{\mathcal O} & q^2+1 & {\mathcal O}\leqslant f & q=2^f>2,\ f \mbox{ odd}\\
{\rm Ree}(q).{\mathcal O} & [q^3]{:}{\bf Z}_{q-1}.{\mathcal O} & q^3+1 & {\mathcal O}\leqslant f & q=3^f>3,\ f \mbox{ odd} \\
\PSU_3(q).{\mathcal O} & [q^3]{:}{\bf Z}_{(q^2-1)/(3,q+1)} & q^3+1 & {\mathcal O}\leqslant (3,q+1).f & q\geqslant 3\\
{\rm P\Gamma L}_2(8) & 9{:}6 & 28 & &\\ \hline
\end{array}\]
\centerline{Table \ref{main-thm}.3: Soluble edge stabilisers}
\vskip0.08in
We note that, in Table~\ref{main-thm}.3, the candidate in the first line is the same as the one in the line for ${\rm PSL}_2(q)$ with $q=4$, due to ${\rm A}_5\cong{\rm PSL}_2(4)$. The rest are mutually non-isomorphic. Thus $G_w$ should be soluble, appearing in Table~\ref{main-thm}.1, and we conclude that in this case $G_v$ and $G_w$ do not have isomorphic point stabilisers, a contradiction. So $G_{vw}$ is insoluble.\qed

\begin{lemma}\label{$G_{vw}$ almost simple}
Suppose $G_{vw}$ is almost simple. Then $(G_v,G_w,G_{vw})$ is as listed in rows 5 to 14 of Table~\ref{Intro}.
\end{lemma}
\par\noindent{\it Proof.\ \ }
First, we observe that $G_{vw}$ does not have socle isomorphic to one of the following groups:
\begin{quote}
${\rm P\Omega}_{2n}^-(2)$ with $n\geqslant3$, ${\rm P\Omega}_{2n}^+(2)$ with $n\geqslant4$, $\M_{11}$, $\M_{22}$, $\M_{23}$, $\PSU_3(5)$, ${\rm McL}$, ${\rm Sp}_{2n}(q)$ with $(n,q)\not=(2,2)$, $\G_2(q)$, $\PSU_3(3)$,
\end{quote}
since each of them appears only once as the socle of a point stabiliser of a 2-transitive permutation group. Similarly, $G_{vw}$ is not $\M_{10}$. This shows that ${\sf soc}(G_{vw})$ is an alternating group or a linear group.

{\bf Case 1.} Suppose first that ${\sf soc}(G_{vw})={\rm A}_m$, where $m\geqslant5$. Notice the following isomorphisms between ${\rm A}_m$ and other simple groups:
\[\begin{array}{l}
{\rm A}_5\cong{\rm PSL}_2(5)\cong{\rm PSL}_2(4) \cong{\rm P\Omega}_4^-(2), \\
{\rm A}_6\cong{\rm PSL}_2(9)\cong{\rm Sp}_4(2)', \\
{\rm A}_8\cong{\rm PSL}_4(2)\cong{\rm P\Omega}_6^+(2).
\end{array}\]
Now $G_w$ is one of such groups such that ${\sf soc}(G_w)\not\cong{\sf soc}(G_v)$. It follows that $m\leqslant8$.
Noticing that ${\rm PSL}_2(4)\cong{\rm SL}_2(4)$, ${\rm PSL}_4(2)\cong{\rm SL}_4(2)$ and ${\rm Sp}_4(2)\cong {\rm S}_6$, we have that $G_v$ and $G_w$ are among the following groups:
\[\begin{array}{c|c|c|c|c|c|c}
G_v\mbox{: almost simple} & {\rm A}_6,{\rm S}_6 &{\rm A}_7,{\rm S}_7& {\rm A}_8,{\rm S}_8 &{\rm A}_9,{\rm S}_9& {\rm Sp}_6(2)&{\rm PSL}_2(11)\\ \hline
G_{vw} & {\rm A}_5,{\rm S}_5&{\rm A}_6,{\rm S}_6&{\rm A}_7,{\rm S}_7&{\rm A}_8,{\rm S}_8&{\rm P\Omega}_6^+(2).2&{\rm A}_5\\
\end{array}\]
\vskip0.1in
\[\begin{array}{c|c|c|c|c|c}
G_v\mbox{: affine}&4^2{:}{\rm SL}_2(4),\ 4^2{:}{\rm SL}_2(4).2 &2^4{:}{\rm SL}_4(2)&2^4{:}{\rm Sp}_4(2)&2^4{:}{\rm A}_6&2^4{:}{\rm A}_7\\ \hline
G_{vw} &{\rm SL}_2(4),\ {\rm SL}_2(4).2&{\rm SL}_4(2)&{\rm Sp}_4(2)&{\rm A}_6&{\rm A}_7\\
\end{array}\]
\centerline{Table \ref{main-thm}.4: Alternating edge stabilisers}

Suppose $m=5$. Then $G_v,G_w\rhd{\rm A}_6$, ${\rm PSL}_2(11)$, or $2^4{:}{\rm SL}_2(4)$.
It then follows that $\{G_v,G_w\}$ is one of the following four pairs, noticing that ${\rm SL}_2(4)\cong{\rm A}_5$ and ${\rm SL}_2(4).2\cong{\rm \Sigma L}_2(4)\cong{\rm S}_5$:
\[\mbox{$\{{\rm A}_6,{\rm PSL}_2(11)\}$, $\{{\rm PSL}_2(11),2^4{:}{\rm A}_5\}$, $\{{\rm A}_6,2^4{:}{\rm A}_5\}$, or $\{{\rm S}_6,2^4{:}{\rm S}_5\}$,}\]
as listed in rows~5-8 of Table~\ref{Intro}.

If $m=6$, then each of $G_v$ and $G_w$ is conjugate to ${\rm A}_7$, ${\rm S}_7$, $2^4{:}{\rm Sp}_4(2)$, or $2^4{:}{\rm A}_6$, and hence $\{G_v,G_w\}=\{{\rm A}_7,2^4{:}{\rm A}_6\}$ or $\{{\rm S}_7,2^4{:}{\rm S}_6\}$, as in rows~9-10 of Table~\ref{Intro}.

If $m=7$, then $G_v\rhd{\rm A}_8$ or $2^4{:}{\rm A}_7$, and so is $G_w$. This leads to $\{G_v,G_w\}=\{{\rm A}_8,2^4{:}{\rm A}_7\}$, as in row~11 of Table~\ref{Intro}.

If $m=8$, then each of $G_v$ and $G_w$ is ${\rm A}_9$, ${\rm Sp}_6(2)$, or $2^4{:}{\rm GL}_4(2)$, leading to $\{G_v,G_w\}=\{{\rm A}_9,2^4{:}{\rm A}_8\}$ or $\{{\rm S}_9,{\rm Sp}_6(2)\}$, as given in rows~12-13 of Table~\ref{Intro}.
\vskip0.1in
{\bf Case 2.} Assume that ${\sf soc}(G_{vw})$ is a linear group which is not isomorphic to ${\rm A}_m$ with $m\geqslant5$. We notice that ${\rm PSL}_2(7)\cong{\rm SL}_3(2)\cong {\rm GL}_3(2)$.
Then ${\sf soc}(G_{vw})={\rm PSL}_2(7)$, and $G_v,G_w\rhd{\rm A}_7$ or $2^3{:}{\rm GL}_3(2)$. Therefore, we conclude that $\{G_v,G_w\}=\{{\rm A}_7,2^3{:}{\rm GL}_3(2)\}$, which is listed in row~14 of Table~\ref{Intro}.\qed

\begin{lemma}\label{$G_{vw}$ non-almost simple}
Assume $G_{vw}$ is insoluble but not almost simple. Then $(G_v,G_w,G_{vw})$ is as listed in rows 15 and 16 of Table~\ref{Intro}.
\end{lemma}
\par\noindent{\it Proof.\ \ }
Assume $G_{vw}$ is insoluble but not almost simple. All the possibilities for $G_{vw}$ are listed in the following table.
\[\begin{array}{ccc}\hline
G_{vw} & {\sf soc}(G_v) & \\ \hline
q^n{:}{\rm SL}_n(q).{\mathcal O} & {\rm PSL}_{n+1}(q) &\\
{\rm SL}_n(q).{\mathcal O} & q^n &\\
{\rm Sp}_{2n}(q).{\mathcal O} & q^{2n} & \\
2^{1+4}{:}{\rm A}_5,\ 2^{1+4}{:}{\rm S}_5 & 3^4 \\
{\rm SL}_2(13) & 3^6 \\
{\rm SL}_2(5),\ 5\times{\rm SL}_2(5) & 11^2 \\
9\times{\rm SL}_2(5) &19^2 \\
7\times{\rm SL}_2(5),\ 7\times(4\circ{\rm SL}_2(5)) &29^2 \\
{29}\times{\rm SL}_2(5) & 59^2 \\ \hline
\end{array}\]
The candidate in the first line or the third line is not isomorphic to any other one in the table.
This leads to the following possibilities:
\[\{G_v,G_w\}=\{5^2{:}{\rm SL}_2(5),11^2{:}{\rm SL}_2(5)\},\ \mbox{or}\ \{13^2{:}{\rm SL}_2(13), 3^6{:}{\rm SL}_2(13)\}.\]
These are listed in rows~15-16 of Table~\ref{Intro}.\qed

{\bf Proof of Theorem~\ref{Intro}:} By assumption, $G_{v}^{[1]}=G_w^{[1]}=1$, so $G_v\cong G_v^{{\it\Gamma}(v)}$, $G_w\cong G_w^{{\it\Gamma}(w)}$, and the stabilisers $G_v,G_w$ are 2-transitive permutation groups which share the common point stabiliser $G_{vw}$. Suppose first that $G_v$ and $G_w$ have isomorphic socles. Then, by Lemma~\ref{isomorphic socle}, $G_v\cong G_w$ and ${\it\Gamma}$ is regular. Now suppose $G_v$ and $G_w$ have non-isomorphic socles. Suppose further that both of them are soluble. Then, by Lemma~\ref{both soluble}, $(G_v,G_w,G_{vw})$ is as listed in the first four rows of Table~\ref{Intro}. Suppose $G_v$ is insoluble. Then, by Lemma~\ref{one insoluble}, $G_{vw}$ is insoluble. Thus, by Lemmas~\ref{$G_{vw}$ almost simple} and \ref{$G_{vw}$ non-almost simple}, $(G_v,G_w,G_{vw})$ is as listed in rows 5 to 16 of Table~\ref{Intro}. Thus the theorem holds.\qed

{\bf Proof of Corollary~\ref{faith-stab-3-trans}:} We first notice that ${\it\Gamma}$ is locally $(G,3)$-arc-transitive if and only if $G_{w_{-1}vw}$ and $G_{vwv_1}$ are transitive on $[G_{w_{-1}vw}:G_{w_{-1}vwv_1}]$ and $[G_{vwv_1}:G_{w_{-1}vwv_1}]$, respectively. This is equivalent to $|G_{w_{-1}vw}:G_{w_{-1}vwv_1}|=|{\it\Gamma}(w)|-1$ and $|G_{vwv_1}:G_{w_{-1}vwv_1}|=|{\it\Gamma}(v)|-1$. Suppose first that ${\it\Gamma}$ is a regular graph.
Then the 2-arc stabiliser $G_{vwv_1}$ is transitive on ${\it\Gamma}(v)\setminus\{w\}$. Observe that $G_{vwv_1}=(G_w)_{vv_1}$ is the stabiliser of the two points $v,v_1\in{\it\Gamma}(w)$ in the 2-transitive permutation group $G_w$. By Lemma~\ref{2-pt-stab}, we conclude that $(G_v,G_w,G_{vw})=({\rm A}_7,{\rm A}_7,{\rm A}_6)$ or $({\rm S}_7,{\rm S}_7,{\rm S}_6)$.

Now assume that ${\it\Gamma}$ is not a regular graph. We need to analyse the triples of stabilisers $(G_v,G_w,G_{vw})$ listed in Table~\ref{Intro}. We notice that, for each of the candidates in rows~1-8 and 14-16 of Table~\ref{Intro}, the order $|G_{vwv_1}|$ is not divisible by $|{\it\Gamma}(v)|-1$. Therefore, we conclude that $(G_v,G_w,G_{vw})$ can only be one of $({\rm A}_7,2^4{:}{\rm A}_6,{\rm A}_6)$, $({\rm S}_7,2^4{:}{\rm S}_6,{\rm S}_6)$, $({\rm A}_8,2^4{:}{\rm A}_7,{\rm A}_7)$, $({\rm A}_9,2^4{:}{\rm A}_8,{\rm A}_8)$, or $({\rm S}_9,{\rm Sp}_6(2),{\rm S}_8)$.

For the first candidate $(G_v,G_w,G_{vw})=({\rm A}_7,2^4{:}{\rm A}_6,{\rm A}_6)$, the intersection $G_{w_{-1}vwv_1}$ of $G_{w_{-1}vw}={\rm A}_5$ and $G_{vwv_1}={\rm S}_4$ is a subgroup of ${\bf Z}_2^2$.
Since $|G_{w_{-1}vw}:G_{w_{-1}vwv_1}|\leqslant|{\it\Gamma}(w)|-1=15$ and $|G_{vwv_1}:G_{w_{-1}vwv_1}|=|{\it\Gamma}(v)|-1=6$, we thus have
\[G_{w_{-1}vwv_1}=G_{w_{-1}vw}\cap G_{vwv_1}={\rm A}_5\cap{\rm S}_4={\bf Z}_2^2.\]
Therefore, $|G_{w_{-1}vw}:G_{w_{-1}vwv_1}|=15=|{\it\Gamma}(w)|-1$ and $|G_{vwv_1}:G_{w_{-1}vwv_1}|=6=|{\it\Gamma}(v)|-1$, and ${\it\Gamma}$ is locally $(G,3)$-arc-transitive. Similarly, $(G_v,G_w,G_{vw})=({\rm S}_7,2^4{:}{\rm S}_6,{\rm S}_6)$ is an amalgam for a locally $(G,3)$-arc-transitive graph.

For the case $(G_v,G_w,G_{vw})=({\rm A}_8,2^4{:}{\rm A}_7,{\rm A}_7)$, we have
\[G_{w_{-1}vw}={\rm A}_6,\ G_{vwv_1}={\rm PSL}_2(7).\]
Since $|{\it\Gamma}(v)|-1=8-1=7$, the stabiliser $G_{w_{-1}vwv_1}$ is of index at most 7 in $G_{vwv_1}={\rm PSL}_2(7)$. It follows that $|G_{vwv_1}:G_{w_{-1}vwv_1}|=7$ and $G_{w_{-1}vwv_1}={\rm S}_4$. Then the index $|G_{w_{-1}vw}:G_{w_{-1}vwv_1}|=|{\rm A}_6:{\rm S}_4|=15=|{\it\Gamma}(w)|-1$. We conclude that $G_{w_{-1}vw}$ is transitive on ${\it\Gamma}(w)\setminus\{v\}$, and $G_{vwv_1}$ is transitive on ${\it\Gamma}(v)\setminus\{w\}$. Therefore, ${\it\Gamma}$ is locally $(G,3)$-arc-transitive.
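As a routine sanity check (added here for the reader's convenience, and not part of the original argument), the indices used in the two preceding cases follow directly from the group orders $|{\rm S}_4|=24$, $|{\rm A}_5|=60$, $|{\rm A}_6|=360$ and $|{\rm PSL}_2(7)|=168$:
\[
|{\rm S}_4:{\bf Z}_2^2|=\frac{24}{4}=6,\qquad
|{\rm A}_5:{\bf Z}_2^2|=\frac{60}{4}=15,\qquad
|{\rm A}_6:{\rm S}_4|=\frac{360}{24}=15,\qquad
|{\rm PSL}_2(7):{\rm S}_4|=\frac{168}{24}=7.
\]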
For the triple $(G_v,G_w,G_{vw})=({\rm A}_9,2^4{:}{\rm A}_8,{\rm A}_8)$, the index of $G_{w_{-1}vwv_1}$ in $G_{w_{-1}vw}={\rm A}_7$ is at most $|{\it\Gamma}(w)|-1=15$ and in $G_{vwv_1}=2^3{:}{\rm GL}_3(2)$ is at most $|{\it\Gamma}(v)|-1=8$. It follows that $G_{w_{-1}vwv_1}={\rm GL}_3(2)$, and so $|G_{w_{-1}vw}:G_{w_{-1}vwv_1}|=|{\it\Gamma}(w)|-1$ and $|G_{vwv_1}:G_{w_{-1}vwv_1}|=|{\it\Gamma}(v)|-1$. Therefore, $({\rm A}_9,2^4{:}{\rm A}_8,{\rm A}_8)$ is a locally 3-arc-transitive amalgam. For the case $(G_v,G_w,G_{vw})=({\rm S}_9,{\rm Sp}_6(2),{\rm S}_8)$, we have $G_{w_{-1}vw}={\rm S}_7$ and $G_{vwv_1}=({\rm S}_4\times{\rm S}_4).2$. The intersection $G_{w_{-1}vwv_1}$ of $G_{w_{-1}vw}$ and $G_{vwv_1}$ is ${\rm S}_4\times {\rm S}_3$, of index $8$ in $G_{vwv_1}$ and index $35$ in $G_{w_{-1}vw}$. Moreover, $|{\it\Gamma}(v)|-1=9-1=8$ and $|{\it\Gamma}(w)|-1=35$. We conclude that $G_{w_{-1}vw}$ is transitive on ${\it\Gamma}(w)\setminus\{v\}$, and $G_{vwv_1}$ is transitive on ${\it\Gamma}(v)\setminus\{w\}$. Therefore, ${\it\Gamma}$ is locally $(G,3)$-arc-transitive. \qed \end{document}
\begin{document} \title[Solutions to Schr\"odinger-Poisson system]{Groundstates and infinitely many high energy solutions to a class of nonlinear Schr\"odinger-Poisson systems} \author{Tomas Dutko} \address{Department of Mathematics, Computational Foundry, Swansea University, Fabian Way, Swansea, U.K. SA1~8EN} \email{[email protected]} \author{Carlo Mercuri} \address{Department of Mathematics, Computational Foundry, Swansea University, Fabian Way, Swansea, U.K. SA1~8EN} \email{[email protected]} \author{Teresa Megan Tyler} \address{Department of Mathematics, Computational Foundry, Swansea University, Fabian Way, Swansea, U.K. SA1~8EN} \email{[email protected]} \begin{abstract} We study a nonlinear Schr\"{o}dinger-Poisson system which reduces to the nonlinear and nonlocal PDE \begin{equation*} -\Delta u + u + \lambda^2 \left(\frac{1}{\omega|x|^{N-2}}\star \rho u^2\right) \rho(x) u = |u|^{q-1} u \quad x \in {\mathbb R}^N, \end{equation*} where $\omega = (N-2)\abs{\mathbb{S}^{N-1}},$ $\lambda>0,$ $q\in(1,2^{\ast} -1),$ $\rho:{\mathbb R}^N \to {\mathbb R}$ is nonnegative, locally bounded, and possibly non-radial, $N=3,4,5$ and $2^*=2N/(N-2)$ is the critical Sobolev exponent. In our setting $\rho$ is allowed, as particular scenarios, to either 1) vanish on a region and be finite at infinity, or 2) be large at infinity. We find least energy solutions in both cases, studying the vanishing case by means of a priori integral bounds on the Palais-Smale sequences and highlighting the role of certain positive universal constants for these bounds to hold. Within the Ljusternik-Schnirelman theory we show the existence of infinitely many distinct pairs of high energy solutions, having a min-max characterisation given by means of the Krasnoselskii genus. Our results cover a range of cases where major loss of compactness phenomena may occur, due to the possible unboundedness of the Palais-Smale sequences, and to the action of the group of translations.
\\ MSC: 35Q55, 35J20, 35B65, 35J60. \\ Keywords: Nonlinear Schr\"{o}dinger-Poisson System, Weighted Sobolev Spaces, Palais-Smale Sequences, Compactness, Multiple Solutions, Nonexistence. \end{abstract} \maketitle \section{Introduction} \noindent This paper is devoted to the nonlinear and nonlocal equation \begin{equation}\label{SP one equation} \tag{$\mathcal E$}-\Delta u + u + \lambda^2 \left(\frac{1}{\omega|x|^{N-2}}\star \rho u^2\right) \rho(x) u = |u|^{q-1} u \quad x \in {\mathbb R}^N, \end{equation} where $\omega = (N-2)\abs{\mathbb{S}^{N-1}},$ $\lambda>0,$ $q\in(1,2^{\ast} -1),$ $\rho:{\mathbb R}^N \to {\mathbb R}$ is nonnegative, locally bounded, and possibly non-radial, $N=3,4,5$ and $2^*=2N/(N-2)$ is the critical Sobolev exponent. We are mainly concerned with the existence and multiplicity of solutions, together with their variational characterisation. This leads us to address issues related to a suitable functional setting and its relevant properties, such as those related to separability and compactness.
In particular, the variational formulation of \eqref{SP one equation} requires in general a functional setting different from the standard Sobolev space $H^1({\mathbb R}^N).$ This is the case if the right hand side of the classical Hardy-Littlewood-Sobolev inequality \begin{equation}\label{HLS intro} \tag{HLS}\quad \int_{{\mathbb R}^N}\int_{{\mathbb R}^N}\frac{u^2(x)\rho (x) u^2(y) \rho (y)}{|x-y|^{N-2}}\,\mathrm{d}{x} \,\mathrm{d}{y}\, \lesssim ||\rho u^2||_{L^{\frac{2N}{N+2}}({\mathbb R}^N)}^2, \end{equation} \noindent is not finite for some $u\in H^1({\mathbb R}^N).$ In what follows, we consider separately two assumptions on $\rho$: \begin{enumerate}[label=$\mathbf{(\rho_{\arabic*})}$] \item \label{vanishing_rho} $\rho^{-1} (0)$ has non-empty interior and there exists $\overline{M} > 0$ such that \begin{equation*} \abs{\{x \in {\mathbb R}^N : \rho(x) \leq \overline{M}\}} < \infty; \end{equation*} \item \label{coercive_rho} for every $M > 0$, \begin{equation*} \abs{\{x \in {\mathbb R}^N : \rho(x) \leq M\}} < \infty. \end{equation*} \end{enumerate} These are reminiscent of analogous assumptions considered in the `local' context of the nonlinear Schr\"odinger equation by Bartsch and Wang in \cite{Bartsch and Wang}. In particular, we will refer to \ref{vanishing_rho} as the {\it vanishing case} and to \ref{coercive_rho} as the {\it coercive case}, as the latter assumption is verified if $\rho$ is locally bounded and $\rho(x)\rightarrow \infty$ as $|x|\rightarrow \infty,$ yielding compactness properties in the functional setting which are stronger than in the other case. It is clear that \ref{vanishing_rho} is compatible with $\rho$ exploding, as well as with $\rho$ having a finite limit at infinity. The latter is a situation in which loss of compactness phenomena occur, due in part, in the present subcritical regime, to the action of the group of translations in ${\mathbb R}^N$.
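As a concrete illustration (ours, not drawn from the paper), both regimes are easily realised; note that the second example also fulfils \ref{vanishing_rho}, so the two assumptions are not mutually exclusive:

```latex
% vanishing case (rho_1): rho is zero on the unit ball and finite at infinity
\rho(x) = \rho_\infty \,\chi_{\{|x|\ge 1\}}(x), \quad \rho_\infty > \overline{M}
\quad\Longrightarrow\quad \{\rho \le \overline{M}\} = B_1, \quad |B_1| < \infty;
% coercive case (rho_2): every sublevel set is a ball of finite measure
\rho(x) = (|x|-1)_+ \quad\Longrightarrow\quad \{\rho \le M\} = \overline{B}_{1+M}.
```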
In this vanishing case we prove uniform a priori bounds on suitable sequences of approximated critical points, which allow us to construct nontrivial weak limits having a definite variational nature. \newline To state and prove our results we define $E({\mathbb R}^N) \subseteq H^1({\mathbb R}^N)$ as \[E({\mathbb R}^N) \coloneqq \left\{u\in W^{1,1}_{\textrm{loc}}({\mathbb R}^N) \,: \, \|u\|_{E({\mathbb R}^N)} < +\infty\right\},\] \noindent with norm \[\|u\|_{E({\mathbb R}^N)} \coloneqq \left(\int_{{\mathbb R}^N}(|\nabla u|^2 + u^2)\,\mathrm{d} x + \lambda\left(\int_{{\mathbb R}^N}\int_{{\mathbb R}^N}\frac{u^2(x)\rho (x) u^2(y) \rho (y)}{|x-y|^{N-2}}\,\mathrm{d} x\,\mathrm{d} y\right)^{1/2}\right)^{1/2}.\] Variants of the space $E({\mathbb R}^N)$ have been studied since the work of P.L. Lions \cite{Lions Hartree}; see e.g. \cite{RuizARMA}, \cite{Bellazzini}, \cite{Bonheure and Mercuri}, \cite{Mercuri Moroz Van Schaftingen}. Solutions to \eqref{SP one equation} are the critical points of the $C^1(E({\mathbb R}^N);{\mathbb R})$ energy functional \begin{equation}\label{definition I} I_{\lambda}(u) = \frac{1}{2}\int_{{\mathbb R}^N}(|\nabla u|^2 + u^2)+\frac{\lambda^2}{4}\int_{{\mathbb R}^N}\int_{{\mathbb R}^N}\frac{u^2(x)\rho (x) u^2(y) \rho (y)}{\omega|x-y|^{N-2}}\,\mathrm{d} x\,\mathrm{d} y -\frac{1}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}. \end{equation} One could regard \eqref{SP one equation} as formally equivalent to a nonlinear Schr\"{o}dinger-Poisson system \begin{equation}\label{main SP system multiplicity} \left\{ \begin{array}{lll} -\Delta u + u + \lambda^2 \rho (x) \phi u = |u|^{q-1} u, \qquad &x\in {\mathbb R}^N, \\ \,\,\, -\Delta \phi=\rho(x) u^2,\ & x\in {\mathbb R}^N. \end{array} \right.
\end{equation}\\ \noindent In fact, it is well-known from classical potential theory that if $u^2\rho \in L^1_{\textrm{loc} }({\mathbb R}^N)$ is such that \begin{equation}\label{double integral finite} \int_{{\mathbb R}^N}\int_{{\mathbb R}^N}\frac{u^2(x)\rho (x) u^2(y) \rho (y)}{|x-y|^{N-2}}\,\mathrm{d} x\,\mathrm{d} y < +\infty, \end{equation} \noindent then \begin{equation}\label{definition of phi u} \phi_u (x) = \int_{{\mathbb R}^N}\frac{\rho (y)u^2(y)}{\omega |x-y|^{N-2}}\,\mathrm{d} y \end{equation} \noindent is the unique weak solution in $D^{1,2}({\mathbb R}^N)$ of the Poisson equation \begin{equation}\label{poissoneqn} -\Delta \phi = \rho (x) u^2 \end{equation} \noindent and it holds that \begin{equation}\label{double integral} \int_{{\mathbb R}^N} |\nabla \phi_u|^2 = \int_{{\mathbb R}^N} \rho \phi_u u^2 \,\mathrm{d} x =\int_{{\mathbb R}^N}\int_{{\mathbb R}^N}\frac{u^2(x)\rho (x) u^2(y) \rho (y)}{\omega|x-y|^{N-2}}\,\mathrm{d}{x} \,\mathrm{d}{y} . \end{equation} Here we set \[D^{1,2}({\mathbb R}^N) = \{ u\in L^{2^{\ast}}({\mathbb R}^N) : \nabla u \in L^2({\mathbb R}^N)\},\] equipped with norm \[ \|u\|_{D^{1,2}({\mathbb R}^N)} = \|\nabla u\|_{L^2({\mathbb R}^N)}.\] \noindent By elliptic regularity, the local boundedness of $\rho$ implies that any pair $(u,\phi)\in E({\mathbb R}^N)\times D^{1,2}({\mathbb R}^N)$ solution to (\ref{main SP system multiplicity}) is such that $u$ and $\phi$ are both of class $C^{1,\alpha}_{\textrm{loc}}({\mathbb R}^N).$ In particular, if $u\geq 0$ is nontrivial, it holds that $u,\phi>0.$ Note that $\inf I_\lambda=-\infty$; however, it is an easy exercise to see that $I_\lambda$ is bounded below on the set of its nontrivial critical points by a positive constant.
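The representation \eqref{definition of phi u} can be sanity-checked numerically. The sketch below (our illustration, with $N=3$, $\rho\equiv 1$ and a sample Gaussian profile $u$, which is not a solution of $(\mathcal E)$) computes $\phi_u$ by radial quadrature and verifies that $-\Delta\phi_u=\rho u^2$ up to discretisation error:

```python
import numpy as np

# For radial data f(r) in R^3 the Newtonian potential reduces to
#   phi(r) = (1/r) * int_0^r s^2 f(s) ds  +  int_r^inf s f(s) ds,
# and we verify -Delta phi = f with the radial Laplacian (1/r^2)(r^2 phi')'.
r = np.linspace(1e-3, 20.0, 40001)
h = r[1] - r[0]
u = np.exp(-r**2 / 2.0)      # sample profile (NOT a solution of (E))
f = u**2                     # rho * u^2 with rho == 1

def cumtrapz0(g):
    """Cumulative trapezoidal integral of g on the grid r, starting at 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * h * (g[1:] + g[:-1]))))

inner = cumtrapz0(r**2 * f)          # int_0^r s^2 f(s) ds
outer = cumtrapz0(r * f)
outer = outer[-1] - outer            # int_r^20 s f(s) ds (tail is negligible)
phi = inner / r + outer

lap = np.gradient(r**2 * np.gradient(phi, h), h) / r**2
err = np.max(np.abs(lap + f)[200:-200])   # compare away from grid endpoints
print(f"max |Delta phi + rho u^2| = {err:.2e}")
```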
It therefore makes sense to define a solution $u\in E({\mathbb R}^N)$ to \eqref{SP one equation} as a {\it groundstate} if it is nontrivial, and if it holds that $I_\lambda(u)\leq I_\lambda(v)$ for every nontrivial critical point $v\in E({\mathbb R}^N)$ of $I_\lambda$. \newline Since the classical work of Ambrosetti-Rabinowitz \cite{Ambrosetti and Rabinowitz}, considerable advances have been made in the understanding of several classes of nonlinear elliptic PDEs in the absence of either the so-called Palais-Smale or the Ambrosetti-Rabinowitz conditions, yet achieving, in the spirit of \cite{Ambrosetti and Rabinowitz}, existence and multiplicity results; see e.g. \cite{Ambrosetti and Malchiodi2, Ambrosetti and Malchiodi, Struwe Book, Willem book}. In addition to those of Strauss \cite{Strauss} and Berestycki-Lions \cite{Berestycki and Lions}, which have been a breakthrough in the study of autonomous scalar field equations on the whole of ${\mathbb R}^N,$ a great deal of work, certainly inspired by that of Floer and Weinstein \cite{Floer and Weinstein}, has been devoted to the study of nonlinear Schr\"odinger equations with nonradial potentials and involving various classes of nonlinearities: \begin{equation}\label{nonlinear s} -\Delta u + V(x) u = f(x,u), \quad x\in {\mathbb R}^N. \end{equation} The classical works of Rabinowitz \cite{Rabinowitz} and Benci-Cerami \cite{Benci Cerami} have provided a penetrating analysis on equations like \eqref{nonlinear s}, and inspired the work on various remarkable variants of it, under different hypotheses on $V$ and $f$ which may allow loss of compactness phenomena to occur. Many authors have contributed to the understanding of these phenomena in a min-max setting, in analogy to what had been discovered and highlighted in the context of minimisation problems by P.L. Lions in \cite{Lions} and related papers.
An interesting case has been considered by Bartsch and Wang \cite{Bartsch and Wang}, who proved existence and multiplicity of solutions to \eqref{nonlinear s} for $V(x) = 1 + \lambda^2 \rho(x),$ with $\rho$ satisfying either \ref{vanishing_rho} or \ref{coercive_rho}. Years later, Jeanjean and Tanaka \cite{Jeanjean and Tanaka} and related papers looked into cases where $f(x,u)$ may violate the Ambrosetti-Rabinowitz condition. Remarkably, they have been able to overcome the possible unboundedness of the Palais-Smale sequences, with an approach which is reminiscent of the `monotonicity trick' introduced for a different problem by Struwe \cite{Struwe Paper}. \newline\newline Even though our equation \eqref{SP one equation} is formally a nonlocal variant of the above nonlinear Schr\"odinger equation, there are some specific variational features that we wish to highlight, which are not shared with \eqref{nonlinear s}. Firstly, although our nonlinearity $f(x,u)=|u|^{q-1}u$ does satisfy the Ambrosetti-Rabinowitz condition, to the best of our knowledge it is still not known whether the boundedness of the Palais-Smale sequences holds for $q\in(2,3).$ We stress that for this reason and in this range of exponents, it is not known whether the Palais-Smale condition holds, even with $\rho\equiv 1$ and working with the subspace of radial functions in $H^1({\mathbb R}^N)$. In the range $q\in(2,3],$ the relation between the mountain-pass level and the infimum over the Nehari manifold for the functional $I_\lambda$ associated to \eqref{SP one equation} seems non-straightforward; we recall that these levels coincide when dealing with \eqref{nonlinear s} for a fairly broad class of nonlinearities $f,$ see e.g. \cite[p. 73]{Willem book}.
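For the reader's convenience (this one-line verification is ours), the Ambrosetti-Rabinowitz condition for the pure power $f(u)=|u|^{q-1}u$, $q>1$, holds with $\theta=q+1>2$: writing $F(u)=\int_0^u f(s)\,\mathrm{d} s = |u|^{q+1}/(q+1)$,

```latex
0 \;<\; \theta F(u) \;=\; (q+1)\,\frac{|u|^{q+1}}{q+1} \;=\; |u|^{q+1}
\;=\; u\,f(u) \qquad (u \neq 0),
```

so the condition $0<\theta F(u)\le u\,f(u)$ is satisfied with equality.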
In the case of pure power nonlinearities and $q\in(1,2],$ and unlike for the action functional associated with \eqref{nonlinear s}, the variational properties of $I_\lambda$ are particularly sensitive to $\lambda,$ yielding existence, multiplicity (of a local minimiser and at the same time of a mountain-pass solution) and nonexistence results, see e.g. \cite{RuizJFA, RuizARMA} and \cite{Mercuri}. Finally, a natural functional setting associated to \eqref{SP one equation} may not necessarily be a Hilbert space. In fact, note that assumption \ref{vanishing_rho} is compatible with a situation where $\rho(x) \rightarrow \rho_{\infty}> 0$ as $|x| \rightarrow \infty$, in which the space $E({\mathbb R}^N) \simeq H^1({\mathbb R}^N),$ as well as with the case $\rho(x)\rightarrow \infty$ as $|x|\rightarrow \infty,$ in which $E({\mathbb R}^N)\subset H^1({\mathbb R}^N);$ we tackle the case of vanishing $\rho$ with a unified approach for these particular sub-cases. \newline Variants of (\ref{SP one equation}) appear in the study of the quantum many-body problem, see \cite{Bao Mauser Stimming}, \cite{Catto}, \cite{Lions Hartree Fock}. The convolution term represents a repulsive interaction between particles, whereas the local nonlinearity $|u|^{q-1} u$ is a generalisation of the $u^{5/3}$ term introduced by Slater \cite{Slater} as a local approximation of the exchange term in Hartree-Fock type models, see e.g. \cite{Bokanowski Lopez Soler}, \cite{Mauser}. In the last few decades, nonlocal equations like (\ref{SP one equation}) have received increasing attention in relation to questions of existence, non-existence, variational setting and singular limits in the presence of a parameter. We draw the reader's attention to \cite{Ambrosetti}, \cite{Benci and Fortunato}, \cite{Catto} and references therein, for a broader mathematical picture on questions related to Schr\"odinger-Poisson type systems.
Relevant contributions to the existence of positive solutions, mostly for $q>3=N,$ such as \cite{Cerami and Vaira, Cerami and Molle}, are based on the classification of positive solutions given by Kwong \cite{Kwong} to \begin{equation*}\label{Kwong} -\Delta u + u = u^{q}, \qquad x\in {\mathbb R}^3, \end{equation*} regarded as a `limiting' PDE when $\rho(x)\rightarrow 0$ as $|x|\rightarrow \infty.$ Recently, in \cite{Sun Wu Feng, Mercuri and Tyler}, in the case $\rho(x)\rightarrow 1$ as $|x|\rightarrow \infty,$ the relation between \eqref{main SP system multiplicity} and \begin{equation}\label{A-Ruiz PDE} \begin{cases} -\Delta u + u + \lambda^2 \phi u = |u|^{q-1} u, \quad &\mathbb{R}^3\\ -\Delta \phi = u^2 &\mathbb{R}^3 \end{cases} \end{equation} \noindent as a limiting problem has been studied, though a full understanding of the set of positive solutions to \eqref{A-Ruiz PDE} has not yet been achieved. \\ Considerably fewer results have been obtained in relation to the multiplicity of solutions. It is worth mentioning \cite{Ambrosetti and Ruiz}, whose (radial) approach is suitable in the presence of constant potentials. More precisely, Ambrosetti-Ruiz \cite{Ambrosetti and Ruiz} have studied the problem \eqref{A-Ruiz PDE} with $\lambda > 0$ and $1 < q < 5$.
When $q \in (1,2]\cup(3,5)$ their approach relies on the symmetric version of the Mountain-Pass Theorem \cite{Ambrosetti and Rabinowitz}, whereas for $q \in (2,3]$ and in the spirit of \cite{Jeanjean and Tanaka, Struwe Paper}, they develop a min-max approach to the multiplicity which in fact improves upon \cite{Ambrosetti and Rabinowitz} and is based on the existence of bounded Palais-Smale sequences at specific levels associated with the perturbed functional $$ I_{\mu,\lambda}(u) = \frac{1}{2}\int_{{\mathbb R}^3}(|\nabla u|^2 + u^2)+ \frac{\lambda^2}{4}\int_{{\mathbb R}^3}\int_{{\mathbb R}^3}\frac{u^2(x) u^2(y)}{\omega|x-y|}\,\mathrm{d} x\,\mathrm{d} y - \frac{\mu}{q+1} \int_{{\mathbb R}^3} \abs{u}^{q+1} \text{d}x, $$ for a dense set of values $\mu \in \left[\tfrac{1}{2}, 1 \right).$ \subsection{Main Results} In the vanishing case \ref{vanishing_rho} our main result is the following. \begin{theorem}[{\bf Groundstates for $q\geq 3$ under \ref{vanishing_rho}}]\label{groundstate sol} Let $N=3$, $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^N)$ be nonnegative, satisfying \ref{vanishing_rho}, and $q \in [3, 2^{\ast}-1).$ There exists a positive constant $\lambda_*=\lambda_*(q,\overline{M})$ such that for every $\lambda\geq\lambda_*,$ \eqref{SP one equation} admits a positive groundstate solution $u\in E({\mathbb R}^3).$ For $q>3$, $u$ is a mountain-pass solution.
\end{theorem} We point out that by construction $\lambda_*= \max\{\lambda_0,\lambda_1\},$ where $\lambda_0$ and $\lambda_1$ are universal constants defined in Proposition \ref{positivityofweakuthm} and in Proposition \ref{lambdaone}, which ensure that, for every $\lambda\geq\lambda_*,$ certain Palais-Smale sequences possess weak limits with a precise variational characterisation.\newline This result extends that of Bartsch-Wang \cite{Bartsch and Wang} to a nonlocal equation, as we are able to show, in the spirit of their work, that for $\lambda$ large, there are no Palais-Smale sequences at the mountain-pass level which are weakly convergent to zero, in a context where the embedding of $E({\mathbb R}^3)$ into $L^{q+1}({\mathbb R}^3)$ is in general non-compact. This is the case if, for instance, $\rho(x)\rightarrow \rho_\infty>\overline{M}$ as $|x|\rightarrow\infty.$ In this case $E({\mathbb R}^3)\simeq H^1({\mathbb R}^3),$ with equivalent norms by \eqref{HLS intro}, and the non-compactness of the embedding is a well-known fact. Under \ref{vanishing_rho}, a condition `at infinity' for certain Palais-Smale sequences to be relatively compact is given in Proposition \ref{PSconditionVan} and Proposition \ref{CPSconditionVan}.\newline It is worth observing that the arguments of Proposition \ref{lambdaone} can be adapted to the original result of Bartsch and Wang \cite[Section 5]{Bartsch and Wang} on the nonlinear Schr\"odinger equation to prove in their setting, for the whole range of exponents and for $\lambda$ large enough, the existence of a mountain-pass solution and hence, using the Nehari characterisation of the mountain-pass level \cite[p. 73]{Willem book}, the existence of a groundstate solution. We prove Proposition \ref{lambdaone} highlighting how the `interaction' between $\lambda$ and $\overline{M}$ appearing in \ref{vanishing_rho} yields the desired estimates.
To this aim we carry out a Brezis-Lieb type splitting argument in the spirit of \cite{Brezis and Lieb}, combining it with a simple weighted $L^3$ estimate given in Lemma \ref{weightedL3boundlemma}, together with the relation between the mountain-pass level and the infimum on the Nehari manifold, which may be sensitive to whether $q=3$ or $q>3.$ \newline To prove Theorem \ref{groundstate sol} we follow a Nehari constraint approach, paying attention to the more delicate case $q=3.$ For this exponent, it is not clear whether the mountain-pass level is critical. From a variational perspective, this is another point that makes our work different from \cite{Bartsch and Wang}; see also Theorem \ref{least energy pure homog rho} below. We stress here that \ref{vanishing_rho} may not be enough for the right continuity of the mountain-pass levels $c_\lambda(q)$ to hold as $q\rightarrow 3^+.$\newline In the coercive case \ref{coercive_rho} we show that $E({\mathbb R}^N)$ is compactly embedded in $L^p({\mathbb R}^N)$ for any $2<p<2^*$ and $\lambda>0.$ This is used to prove the following. \begin{theorem}[\textbf{Groundstates for $q\geq 3$ under \ref{coercive_rho}}]\label{existenceandleastenergy} Let $N = 3$, $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^3)$ be nonnegative, satisfying \ref{coercive_rho}, and $q \in [3,2^{\ast} -1)$. Then, for any fixed $\lambda > 0$, \eqref{SP one equation} has both a positive mountain-pass solution and a positive groundstate solution in $E({\mathbb R}^3),$ whose energy levels coincide for $q>3.$ \end{theorem} Note that in this case the compact embedding result provided by Lemma \ref{compactembeddingofeintolpplus} allows us to have a `variationally' stronger result for $q=3,$ to be compared with Theorem \ref{groundstate sol}. Namely, we can show that the mountain-pass level is critical, using that the Palais-Smale condition is satisfied under \ref{coercive_rho} and for $3\leq q<5$. 
A positive mountain-pass solution, which may not be a groundstate for $q=3,$ is constructed as a strong limit of a Palais-Smale sequence living nearby the positive cone in $E({\mathbb R}^3)$. \newline When dealing with the range $q\in(2,3),$ we overcome the possible unboundedness of the Palais-Smale sequences, combining tools developed in this paper with the approach of Jeanjean and Tanaka \cite{Jeanjean and Tanaka}. Roughly speaking, the proof is based on constructing a sequence $(u_n)_{n\in \mathbb N}$ of critical points to suitable approximated functionals $$ I_{n}(u) = \frac{1}{2}\int_{{\mathbb R}^N}(|\nabla u|^2 + u^2)+\frac{\lambda^2}{4} \int_{{\mathbb R}^N}\rho(x) \phi_u u^2 -\frac{\mu_n}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}, $$ which accumulates around the desired solution when letting $\mu_n\rightarrow 1^-$, as a result of a Pohozaev-type condition stated in Lemma \ref{pohozaevlemma} (which guarantees its boundedness) and of the compactness property provided by Lemma \ref{compactembeddingofeintolpplus}. More precisely, we have the following. \begin{theorem}[\textbf{Groundstates for $q<3$ under \ref{coercive_rho}}]\label{mountainpasssoltheoremlowp} Let $N=3,4,5$, $q \in (2, 3)$ if $N=3$ and $q\in(2,2^{\ast}-1)$ if $N=4,5$. Let $\lambda>0,$ and assume $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^N)\cap W^{1,1}_{loc}({\mathbb R}^N)$ is nonnegative and satisfies \ref{coercive_rho}. Moreover, suppose that $k\rho (x) \leq (x,\nabla \rho)$ for some $k>\frac{-2(q-2)}{(q-1)}$. Then, \eqref{SP one equation} has a mountain-pass solution $u\in E({\mathbb R}^N).$ Moreover, there exists a groundstate solution.
\end{theorem} \begin{remark} The same proof, when working instead with the functional \begin{equation*} I_{+} (u) = \frac{1}{2} \int_{{\mathbb R}^N} \left( | \nabla u |^2 + u^2 \right) + \frac{\lambda^2}{4} \int_{{\mathbb R}^N}\rho(x) \phi_u u^2 - \frac{1}{q+1} \int_{{\mathbb R}^N} u_+^{q+1}, \end{equation*} allows one to show that mountain-pass and groundstate critical points exist for this functional, and are positive by construction. \end{remark} Under \ref{coercive_rho} and for $q\leq3$, the relation between mountain-pass solutions and groundstates found in Theorem \ref{existenceandleastenergy} for $q=3$ and Theorem \ref{mountainpasssoltheoremlowp} for $q<3$ is not obvious; in particular, it is not clear whether they actually coincide. We are able to get more insight into the variational nature of these solutions in the case where $\rho$ is homogeneous of a suitable order $\bar{k}>0$, as shown in the following theorem. It is worth pointing out that this homogeneity condition is not compatible with $\rho$ vanishing on a region. \begin{theorem}[\textbf{Homogeneous case for $q\le 3:$ mountain-pass solutions vs. groundstates}]\label{least energy pure homog rho} Let $N=3,4,5$, $q \in (2, 3]$ if $N=3$ and $q\in(2,2^{\ast}-1)$ if $N=4,5$. Suppose $\lambda > 0$ and $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^N)\cap W^{1,1}_{loc}({\mathbb R}^N)$ is nonnegative, satisfies \ref{coercive_rho}, and is homogeneous of degree $\bar{k}$, namely $ \rho(tx)=t^{\bar{k}}\rho(x)$ for all $t>0$, for some $$\bar{k}>\left(\max\left\{\frac{N}{4},\frac{1}{q-1}\right\}\cdot(3-q)-1\right)_+.$$ Then, the mountain-pass solutions that we find in Theorem \ref{existenceandleastenergy} $(q=3)$ and Theorem \ref{mountainpasssoltheoremlowp} $(q<3)$ are groundstates.
\end{theorem} We prove the above theorem analysing some relevant scaling properties of $I_{\lambda}$ in Proposition \ref{variational characterisation MP level low q}, which allows us to characterise the mountain-pass level in terms of the infimum over a certain manifold, defined as a suitable combination of the Nehari and Pohozaev identities. We believe that this manifold is a natural constraint. We point the reader to Remark \ref{k bound}, in which we give an explanation of the lower bound assumption on $\bar{k}.$ In the spirit of Ambrosetti-Rabinowitz \cite{Ambrosetti and Rabinowitz} and under \ref{coercive_rho} we show that \eqref{main SP system multiplicity} possesses infinitely many high energy solutions. In our context it seems appropriate to distinguish the cases $q \in (3,5)$ and $q \in (2,3]$ when working within the Ljusternik-Schnirelman theory. Since for $q \in (3,5)$ Lemma \ref{compactembeddingofeintolpplus} implies that the Palais-Smale condition is satisfied, we can use the $\mathbb Z_2$-equivariant Mountain-Pass theorem, adapting to $E({\mathbb R}^N)$ arguments similar to those developed for a different functional setting by Szulkin; see \cite{Szulkin}. 
To this aim, in Lemma \ref{separability} we prove that for $N\geq 3$, $E({\mathbb R}^N)$ is a separable Banach space, by constructing a suitable linear isometry of $E({\mathbb R}^N)$ onto the Cartesian product of $H^1({\mathbb R}^N)$ with some of the mixed norm Lebesgue spaces studied by Benedek and Panzone \cite{Benedek Panzone}, namely $L^4({\mathbb R}^N;L^2({\mathbb R}^N)).$ As a consequence of this identification, we can show that $E({\mathbb R}^N)$ admits a Markushevic basis, that is, a set of elements $\{(e_m,e_m^*)\}_{m\in{\mathbb N}}\subset E({\mathbb R}^N) \times E^*({\mathbb R}^N)$ such that the duality product $\langle e_n,e_m^*\rangle=\delta_{nm}$ for all $n,m\in{\mathbb N}$, the $e_m$'s are linearly dense in $E({\mathbb R}^N)$, and the weak$^*$-closure of span$\{e_m^*\}_{m \in{\mathbb N}}$ is $E^*({\mathbb R}^N)$. We use this, combined with Lemma \ref{compactembeddingofeintolpplus}, to obtain lower bounds on the energy which allow us to show the divergence of a sequence of min-max critical levels defined by means of the classical notion of Krasnoselskii genus; see Lemma \ref{divergence of critical levels} below. This yields the following. \begin{theorem}[\textbf{Infinitely many high energy solutions for $q>3$}]\label{first multiplicity theorem} Let $N = 3,$ $q\in(3,2^{\ast} -1)$ and $\lambda > 0.$ Suppose $\rho\in L^{\infty}_{loc}({\mathbb R}^3)$ is nonnegative and satisfies \ref{coercive_rho}. Then, there exist infinitely many distinct pairs of critical points $\pm u_m\in E({\mathbb R}^N)$, $m\in {\mathbb N}$, for $I_{\lambda}$ such that $I_{\lambda}(u_m)\to+\infty$ as $m\to+\infty$. \end{theorem} When $q<3$, the above construction is not directly applicable because of the possible unboundedness of the Palais-Smale sequences. Here we use a deformation lemma due to Ambrosetti and Ruiz \cite{Ambrosetti and Ruiz}, in the flavour of the work of Jeanjean and Tanaka, which is suitable for Ljusternik-Schnirelman type results.
Assuming that $\rho(x)$ is homogeneous of some order $\bar{k}>0$ allows us to define, as in \cite{Ambrosetti and Ruiz}, certain classes of admissible subsets of $E({\mathbb R}^N)$ and hence of min-max levels; see Lemma \ref{derivatives of f} and Lemma \ref{use of curves} below. This, together with the aforementioned Pohozaev type inequality (which in the present homogeneous case becomes an identity by Euler's classical theorem) and Lemma \ref{compactembeddingofeintolpplus}, allows us to show that these min-max levels are critical, and that they are arbitrarily large, by Lemma \ref{divergence of critical levels} again. We therefore have the following. \begin{theorem}[\textbf{Infinitely many high energy solutions for $q\leq 3$}]\label{partial multiplicity result low q} Let $N=3,4,5$. Assume $q\in(2,3]$ if $N=3$ and $q\in(2,2^{\ast}-1)$ if $N=4,5$. Suppose $\lambda > 0$ and $\rho\in L^{\infty}_{loc}({\mathbb R}^N)\cap W^{1,1}_{loc}({\mathbb R}^N)$ is nonnegative, satisfies \ref{coercive_rho}, and is homogeneous of degree $\bar{k}$, namely, $ \rho(tx)=t^{\bar{k}}\rho(x)$ for all $t>0$, for some $$\bar{k}>\left(\max\left\{\frac{N}{4},\frac{1}{q-1}\right\}\cdot(3-q)-1\right)_+.$$ Then, there exist infinitely many distinct pairs of critical points, $\pm u_m\in E({\mathbb R}^N)$, $m\in {\mathbb N}$, for $I_{\lambda}$ such that $I_{\lambda}(u_m)\to+\infty$ as $m\to+\infty$. \end{theorem} \begin{remark} For $q=3$, the homogeneity assumption on $\rho$ is not used to prove the boundedness of the Palais-Smale sequences, which holds for this exponent, but rather it is used in the construction of the min-max levels. \end{remark} \begin{remark} For $N=3,4$, we can cover all $\bar{k}>0$. For $N=5$, the threshold for $\bar{k}$ is sensitive to the range of $q$. Namely, if $q\in(\frac{11}{5},2^{\ast}-1)$, we can cover all $\bar{k}>0$; however, if $q\in(2,\frac{11}{5})$, this is not the case. \end{remark} \subsection{Outline} The paper is organised as follows.
In Section \ref{preliminaries section} we deal with general facts about the functional setting; we prove the separability of $E({\mathbb R}^N)$ and other properties that will be used throughout, comprising positivity and regularity. We prove a Pohozaev type necessary condition that will be extensively used in the existence proofs, and that in this section is applied to a nonexistence result for $q = 2^{\ast}-1$. Here we also discuss the min-max setting and related properties, which hold for a generic locally bounded $\rho.$ In Section \ref{vanishing rho section} we work under the vanishing assumption \ref{vanishing_rho}. Here we develop a set of uniform integral estimates which hold for all values of $\lambda$ above certain lower thresholds. We conclude the section with the proof of the existence of groundstates, Theorem \ref{groundstate sol}, and provide, with Proposition \ref{PSconditionVan} and Proposition \ref{CPSconditionVan}, some new compactness results on sequences of approximated critical points of $I_\lambda.$ Section \ref{coercive rho section} is devoted to the coercive case \ref{coercive_rho}. For any fixed $\lambda>0$ we prove the compactness Lemma \ref{compactembeddingofeintolpplus}, and the existence results Theorem \ref{existenceandleastenergy}, Theorem \ref{mountainpasssoltheoremlowp}, and Theorem \ref{least energy pure homog rho}. Section \ref{5.1} is entirely devoted to the multiplicity of high energy solutions. In particular, in Section \ref{prelims for multiplicity section} we prove Lemma \ref{divergence of critical levels}, which is key to showing the existence of a divergent sequence of infinitely many distinct critical levels of high energy. In Section \ref{proof of multiplicity theorem section} we recall the notion of the Krasnoselskii genus and its properties, and deal with the proof of Theorem \ref{first multiplicity theorem}.
Finally, in Section \ref{proof of multiplicity theorem low q section} we prove Theorem \ref{partial multiplicity result low q}. \section{Preliminaries}\label{preliminaries section} \noindent We introduce the functional setting for our problem and provide a few preliminary lemmas that hold for all nonnegative $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^N)$ and will be used under all assumptions on $\rho$. \subsection{Functional setting} For what follows, we will need some properties of the functional setting which are contained in the next lemma. \begin{lemma}[{\bf Properties of $E({\mathbb R}^N)$}]\label{separability} Assume $N\geq 3$ and $\rho\geq 0$ is a measurable function. The space $E({\mathbb R}^N)$ is a separable Banach space that admits a Markushevich basis, that is, a fundamental and total biorthogonal system $\{(e_m,e_m^*)\}_{m\in{\mathbb N}}\subset E({\mathbb R}^N) \times E^*({\mathbb R}^N)$. Namely, $\langle e_n,e_m^*\rangle=\delta_{nm}$ for all $n,m\in{\mathbb N}$, the $e_m$'s are linearly dense in $E({\mathbb R}^N)$, and the weak$^*$-closure of ${\rm span}\{e_m^*\}_{m \in{\mathbb N}}$ is $E^*({\mathbb R}^N)$. \end{lemma} \begin{proof} Following \cite{Mercuri Moroz Van Schaftingen}, we note that we can equip $E({\mathbb R}^N)$ with the equivalent norm \begin{equation}\label{equivnorm} \|u\|_{1}=\left(\|u\|^2_{H^1({\mathbb R}^N)}+\lambda\left(\int_{{\mathbb R}^N}\left| I_1 \star (\sqrt{\rho}|u|)^2\right|^2\right)^{1/2}\right)^{1/2}. 
\end{equation} Here, we have set $\alpha=1$ in $I_{\alpha}:{\mathbb R}^N\to{\mathbb R}$, the Riesz potential of order $\alpha\in(0,N)$, defined for $x\in{\mathbb R}^N\setminus\{0\}$ as \[I_{\alpha}(x)=\frac{A_{\alpha}}{|x|^{N-\alpha}},\quad A_{\alpha}=\frac{\Gamma(\frac{N-\alpha}{2})} {\Gamma(\frac{\alpha}{2})\pi^{{N}/{2}}2^{\alpha}},\] and the choice of normalisation constant $A_{\alpha}$ ensures that the kernel $I_{\alpha}$ enjoys the semigroup property \begin{equation*} I_{\alpha+\beta}=I_{\alpha}\star I_{\beta} \text{ for each } \alpha,\beta\in(0,N) \text{ such that } \alpha+\beta<N. \end{equation*} We first notice that the operator $T:E({\mathbb R}^N)\to H^1({\mathbb R}^N)\times L^4({\mathbb R}^N;L^2({\mathbb R}^N))$ defined by \[(Tu)(x_0,x_1,x_2)=[u(x_0),(\lambda I_1(x_2-x_1)\rho(x_1))^{\frac{1}{2}}u(x_1)],\] is a linear isometry from $E({\mathbb R}^N)$ into the product space $H^1({\mathbb R}^N)\times L^4({\mathbb R}^N;L^2({\mathbb R}^N)),$ endowed with the norm $$\|[u,v]\|_{\times}=\left(\|u\|^2_{H^1({\mathbb R}^N)}+\|v\|_{L^4({\mathbb R}^N;L^2({\mathbb R}^N))}^{2}\right)^{1/2}.$$ Here $L^4({\mathbb R}^N;L^2({\mathbb R}^N))$ is the mixed norm Lebesgue space of functions $v:{\mathbb R}^N\times{\mathbb R}^N\to{\mathbb R}$ such that \[||v||_{L^4({\mathbb R}^N;L^2({\mathbb R}^N))}= \left(\int_{{\mathbb R}^N}\left(\int_{{\mathbb R}^N}|v(x_1,x_2)|^2\,\mathrm{d} x_1\right)^2\,\mathrm{d} x_2\right)^{{1}/{4}}<+\infty,\] see \cite{Benedek Panzone}. Since $L^4({\mathbb R}^N;L^2({\mathbb R}^N))$ is a separable Banach space (see e.g.\ \cite[p.~107]{Pick Kufner John Fucik} and \cite{Benedek Panzone}), it follows that the linear subspace $T(E({\mathbb R}^N))\subseteq H^1({\mathbb R}^N)\times L^4({\mathbb R}^N;L^2({\mathbb R}^N))$, and hence $E({\mathbb R}^N)$ itself, is separable. Since every separable Banach space admits a Markushevich basis (see e.g.\ \cite{Hajek Santalucia Vanderwerff Zizler}), the proof is complete. 
\end{proof} Reasoning as in \cite{RuizARMA} and \cite{Mercuri Moroz Van Schaftingen}, it is easy to see that $C^\infty_c({\mathbb R}^N)$ is dense in $E({\mathbb R}^N)$ and that the unit ball in $E({\mathbb R}^N)$ is weakly compact; in fact this space is uniformly convex and hence reflexive. The following variant of the classical Brezis-Lieb lemma will be useful to study the convergence of bounded sequences in $E({\mathbb R}^N)$; see e.g. \cite{Bellazzini Frank Visciglia}, \cite{Mercuri Moroz Van Schaftingen}. \begin{lemma}[{\bf Nonlocal Brezis-Lieb lemma}] \label{nonlocalBL} Assume $N \geq 3$ and $\rho(x) \in L^{\infty}_{\textrm{loc}} ({\mathbb R}^N)$ is nonnegative. Let $(u_n)_{n\in \mathbb N}\subset E({\mathbb R}^N)$ be a bounded sequence such that $u_n \rightarrow u$ almost everywhere in ${\mathbb R}^N$. Then it holds that $$\lim_{n\rightarrow \infty} \Big[\|\nabla\phi_{u_n}\|^2_{L^2({\mathbb R}^N)}-\|\nabla \phi_{(u_n-u)}\|^2_{L^2({\mathbb R}^N)}\Big]=\| \nabla \phi_{u} \|^2_{L^2({\mathbb R}^N)}.$$ \end{lemma} \noindent The next simple estimate is based on an observation of P.-L. Lions, given in \cite{Lions Hartree Fock} for $\rho \equiv 1$; see also \cite{RuizARMA}, \cite{Bellazzini}, and \cite{Mercuri Moroz Van Schaftingen}. \begin{lemma}[{\bf Coulomb-Sobolev inequality}]\label{weightedL3boundlemma} Assume $N \geq 3$ and $\rho(x) \in L^{\infty}_{\textrm{loc}} ({\mathbb R}^N)$ is nonnegative. Then the following inequality holds for all $u \in E({\mathbb R}^N)$, \begin{equation} \int_{\R^N} \rho (x) \abs{u}^3 \leq \left( \int_{\R^N} \abs{\nabla u}^2 \right)^{\half} \left( \int_{\R^N} \abs{\nabla \phi_{u}}^2 \right)^{\half}. \end{equation} \end{lemma} \begin{proof} Testing the Poisson equation \eqref{poissoneqn} with $|u|,$ the statement follows immediately by the Cauchy-Schwarz inequality. 
\end{proof} \subsection{Regularity and positivity} Using standard elliptic regularity theory and the maximum principle, we now provide a result giving the regularity and positivity of the solutions to the Schr\"{o}dinger-Poisson system.\\ \begin{proposition}[{\bf Regularity and positivity}]\label{reg} Let $N\in[3,6]$, $q\in[1,2^{\ast} -1]$, let $\rho\in L^\infty_{\textrm{loc}}({\mathbb R}^N)$ be nonnegative with $\rho(x) \not\equiv 0$, and let $(u,\phi_u)\in E({\mathbb R}^N)\times D^{1,2}({\mathbb R}^N)$ be a nontrivial weak solution to \begin{equation}\label{poh system} \left\{ \begin{array}{lll} - \Delta u+b u +c \rho (x) \phi u = d|u|^{q-1} u, \qquad &x\in {\mathbb R}^N, \\ \,\,\, -\Delta \phi=\rho(x) u^2,\qquad &x\in {\mathbb R}^N, \end{array} \right. \end{equation} \noindent with $b, c, d \in {\mathbb R}_+$. Then, $u$, $\phi_u\in W^{2,s}_{\textrm{loc}}({\mathbb R}^N)$ for every $s\geq1$, and so $u$, $\phi_u \in C^{1,\alpha}_{\textrm{loc}}({\mathbb R}^N);$ moreover $\phi_u>0.$ If, in addition, $u\geq 0$, then $u>0$ everywhere. \end{proposition} \begin{proof} Under the hypotheses of the proposition, both $u$ and $\phi_u$ have weak second derivatives in $L^s_{\textrm{loc} }({\mathbb R}^N)$ for all $s<+\infty$. To show this, note that from the first equation in \eqref{poh system}, we have that $-\Delta u = g(x,u)$, where \begin{align*} |g(x,u)|&=|-b u -c \rho (x) \phi_u u + d|u|^{q-1} u| \\ &\leq C(1+|\rho \phi_u |+|u|^{q-1})(1+|u|)\\ &=: h(x)(1+|u|). \end{align*} Using our assumptions on $\rho$, $\phi_u$, $u$, and that $q \leq 2^{\ast} -1$, we can show that $h\in L^{N/2}_{\textrm{loc} }({\mathbb R}^N)$, which implies that $u\in L^s_{\textrm{loc} }({\mathbb R}^N)$ for all $s<+\infty$ (see e.g. \cite[p.270]{Struwe Book}). 
Note that here the restriction on the dimension implies that $\phi_u\in L^{N/2}_{\textrm{loc}}({\mathbb R}^N).$ Since $u^2\rho \in L^s_{\textrm{loc} }({\mathbb R}^N)$ for all $s<+\infty$, then by the second equation in \eqref{poh system} and the Calder\'{o}n-Zygmund estimates, we have that $\phi_u\in W^{2,s}_{\textrm{loc} }({\mathbb R}^N)$ (see e.g.\ \cite{Gilbarg and Trudinger}). This then enables us to show that $g\in L^s_{\textrm{loc} }({\mathbb R}^N)$ for all $s<+\infty$, which implies, by the Calder\'{o}n-Zygmund estimates, that $u\in W^{2,s}_{\textrm{loc} }({\mathbb R}^N)$ (see e.g.\ \cite{Gilbarg and Trudinger}). The $C^{1,\alpha}_{\textrm{loc}}({\mathbb R}^N)$ regularity of both $u$ and $\phi_u$ is a consequence of Morrey's embedding theorem. Finally, the strict positivity is a consequence of the strong maximum principle with $L^\infty_{\textrm{loc}}({\mathbb R}^N)$ coefficients \cite{Montenegro}, and this concludes the proof. \end{proof} \subsection{Nonexistence} The following lemma, proved in the Appendix, will be extensively used. \begin{lemma}[{\bf Pohozaev-type condition}]\label{pohozaevlemma} Assume $N\in[3,6],$ $q\in[1,2^{\ast} -1]$, $\rho \in L^\infty_{\textrm{loc} }({\mathbb R}^N) \cap W^{1,1}_{\textrm{loc} }({\mathbb R}^N)$ is nonnegative, and $k\rho(x)\leq (x,\nabla \rho)$ for some $k\in{\mathbb R}$. Let $(u, \phi_u) \in E({\mathbb R}^N) \times D^{1,2}({\mathbb R}^N)$ be a weak solution to \eqref{poh system}. Then, it holds that \begin{equation}\label{pohozaev} \begin{split} \frac{N-2}{2}\int_{{\mathbb R}^N}&|\nabla u|^2 \,\mathrm{d} x +\frac{Nb}{2}\int_{{\mathbb R}^N}u^2\,\mathrm{d} x \\ &+\frac{(N+2+2k)c}{4}\int_{{\mathbb R}^N}\rho\phi_u u^2 \,\mathrm{d} x-\frac{Nd}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}\,\mathrm{d} x \leq 0. \end{split} \end{equation} In particular the above is an identity, provided $k\rho(x)= (x,\nabla \rho)$ (by Euler's theorem, this is the case if $\rho$ is homogeneous of order $k;$ see e.g. \cite[p.~296]{Gelfand and Shilov}). 
\end{lemma} Although we will use the above necessary condition mainly for existence purposes, it also allows us to find a family of nonexistence results in a certain range of the parameters $N,q,\lambda,k$. \begin{proposition}[{\bf Nonexistence: the critical case $q=2^*-1$}]\label{nonexistence for critical q} Assume $N\in[3,6],$ $q=2^{\ast} -1$, $\rho \in L^\infty_{\textrm{loc} }({\mathbb R}^N) \cap W^{1,1}_{\textrm{loc} }({\mathbb R}^N)$ is nonnegative, $k\rho(x)\leq (x,\nabla \rho)$ for some $k\geq\frac{N-6}{2},$ and $\lambda> 0$. Let $(u, \phi_u) \in E({\mathbb R}^N)\times D^{1,2}({\mathbb R}^N)$ be a weak solution to \eqref{main SP system multiplicity}. Then, $(u, \phi_u)=(0,0).$ \end{proposition} \begin{proof} Combining the Nehari identity $I_\lambda'(u)(u)=0$ with Lemma \ref{pohozaevlemma} yields \begin{equation*} \Big(\frac{N-2}{2}-\frac{N}{q+1}\Big)\int_{{\mathbb R}^N}|\nabla u|^2 \,\mathrm{d} x +\Big(\frac{N}{2}-\frac{N-2}{2}\Big)\int_{{\mathbb R}^N}u^2\,\mathrm{d} x +\Big(\frac{2k+6-N}{4}\Big)\lambda^2\int_{{\mathbb R}^N}\rho\phi_u u^2 \,\mathrm{d} x \leq 0. \end{equation*} Hence, $$\int_{{\mathbb R}^N}u^2\,\mathrm{d} x \leq 0,$$ and this concludes the proof. \end{proof} \begin{remark} Similar nonexistence results have been obtained in the case of constant potentials and for $N=3$ in \cite{D'Aprile and Mugnai}. We point out that in the above proposition $\lambda>0$ is arbitrary and the condition on $\rho$ is compatible with \ref{vanishing_rho}, as well as with \ref{coercive_rho}. It is interesting to note that for $N=6$ we have $q=2^*-1=2,$ namely nonexistence occurs in a `low-$q$' regime, under both conditions \ref{vanishing_rho} and \ref{coercive_rho}. The proof shows also that for supercritical exponents $q+1>2^*$ and higher dimensions, under the further regularity assumptions required for Lemma \ref{pohozaevlemma} to hold, nonexistence also occurs. 
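For the reader's convenience, we sketch this last assertion: subtracting $\frac{N}{q+1}$ times the Nehari identity from \eqref{pohozaev} gives \begin{equation*} \Big(\frac{N-2}{2}-\frac{N}{q+1}\Big)\int_{{\mathbb R}^N}|\nabla u|^2 \,\mathrm{d} x +\Big(\frac{N}{2}-\frac{N}{q+1}\Big)\int_{{\mathbb R}^N}u^2\,\mathrm{d} x +\Big(\frac{N+2+2k}{4}-\frac{N}{q+1}\Big)\lambda^2\int_{{\mathbb R}^N}\rho\phi_u u^2 \,\mathrm{d} x \leq 0, \end{equation*} and for $q+1\geq 2^{\ast}$ we have $\frac{N}{q+1}\leq\frac{N-2}{2}$, so that, for $k\geq\frac{N-6}{2}$, all three coefficients are nonnegative and the coefficient of $\int_{{\mathbb R}^N}u^2\,\mathrm{d} x$ is at least $1$, forcing $u\equiv 0$. 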
\end{remark} \begin{proposition}[{\bf Nonexistence: the case $q\in(1,2]$}]\label{nonexistence for lowest q} Assume $N\geq3,$ $q\in(1,2]$, $\rho \in L^\infty_{\textrm{loc} }({\mathbb R}^N)$ with $\rho(x)\geq 1$ almost everywhere, and $\lambda\geq\frac{1}{2}$. Let $u\in E({\mathbb R}^N) \cap L^{q+1}({\mathbb R}^N)$ satisfy \begin{equation}\label{nonex-eq} - \Delta u+ u + \lambda^2 \left(\frac{1}{\omega|x|^{N-2}}\star \rho u^2\right) \rho(x) u = |u|^{q-1} u, \qquad \textrm{in}\,\,\mathcal D'( \mathbb R^N). \end{equation} Then, $u\equiv 0.$ \end{proposition} We note that this proposition is stated so as to cover also the dimensions $N> 2\left(\frac{q+1}{q-1}\right),$ namely the supercritical cases $3\geq q+1>2^*$ where $E({\mathbb R}^N)$ does not embed in $L^{q+1}({\mathbb R}^N).$ \begin{proof} By density we can test \eqref{nonex-eq} with $u$, and so we obtain \begin{equation}\label{nonexistenceforlowpequation} \int_{\R^N} \abs{\nabla u}^2 + u^2 + \lambda^2 \rho (x) \phi_u u^2 - \abs{u}^{q+1} = 0. \end{equation} Following \cite[Theorem 4.1]{RuizJFA}, by Lemma \ref{weightedL3boundlemma} and Young's inequality we have \begin{equation}\label{poissonon} \int_{\R^N} \rho(x) \abs{u}^3 \leq \int_{\R^N} \abs{\nabla u}^2 + \frac{1}{4} \int_{{\mathbb R}^N}\rho (x) \phi_u u^2. \end{equation} Combining \eqref{nonexistenceforlowpequation} and \eqref{poissonon}, we have for all $\lambda \geq \half$ \begin{equation*} 0\geq \int_{\R^N} u^2 + \rho(x) \abs{u}^3 - \abs{u}^{q+1}\geq\int_{{\mathbb R}^N}f(u), \end{equation*} where $f(u) = u^2 + \abs{u}^3 - \abs{u}^{q+1}$ is positive except at zero. Hence $u \equiv 0,$ and this concludes the proof. \end{proof} \subsection{Min-max setting} The present section is devoted to the min-max properties of $I_{\lambda},$ which will be used in our existence results. \begin{lemma}[\bf{Mountain-Pass Geometry for $I_{\lambda}$}]\label{mpgfori} Assume $N = 3,4,5$, $\rho(x) \in L^\infty_{\textrm{loc}}({\mathbb R}^N)$ is nonnegative and $q \in (2, 2^* -1]$. 
Then, it holds that \begin{enumerate}[label=(\roman*)] \item $I_{\lambda} (0) = 0$ and there exist constants $r,a > 0$ such that $I_{\lambda}(u) \geq a$ if $\norme{u}{{\mathbb R}^N} = r;$ \item there exists $v \in E({\mathbb R}^N)$ with $\norme{v}{{\mathbb R}^N} > r$ such that $I_{\lambda}(v) \leq 0$. \end{enumerate} \end{lemma} \begin{proof} Statement $(i)$ follows by reasoning as in Lemma \ref{nonzeroweaklimitlemma}. To show $(ii)$, pick a nonzero $u\in C^{1}({\mathbb R}^N)$ supported in the unit ball, $B_1$. Setting $v_t(x) \coloneqq t^2u(tx)$ we find that \begin{equation}\label{rescaledI} \begin{split} I_{\lambda} (v_t) =\frac{t^{6-N}}{2} \int_{{\mathbb R}^N} &|\nabla u|^2 + \frac{t^{4-N}}{2} \int_{{\mathbb R}^N} u^2 + \frac{ t^{6-N}}{4} \lambda^2\int_{{\mathbb R}^N}\int_{{\mathbb R}^N} \frac{ u^2(y)\rho(\frac{y}{t})u^2(x)\rho(\frac{x}{t})}{\omega |x-y|^{N-2}}\,\mathrm{d} y\,\,\mathrm{d} x \\ &- \frac{ t^{(2q+2-N)}}{q+1} \int_{{\mathbb R}^N}|u|^{q+1}. \end{split} \end{equation} Since for every $t\geq 1$ and for almost every $x\in B_1$ we have $\rho(x/t)\leq ||\rho||_{L^{\infty}(B_1)},$ the fact that $2q+2>6$ in \eqref{rescaledI} yields $I_{\lambda} (v_{t}) \to -\infty$ as $t\to +\infty,$ and this is enough to conclude the proof. \end{proof} To prove our results for $q<3$, we will need to work with a perturbed functional, $I_{\mu,\lambda}:E({\mathbb R}^N)\to{\mathbb R}$, defined by \begin{equation}\label{definition I mu lambda} I_{\mu,\lambda}(u) = \frac{1}{2}\int_{{\mathbb R}^N}(|\nabla u|^2 + u^2)+\frac{\lambda^2}{4}\int_{{\mathbb R}^N} \rho\phi_u u^2 -\frac{\mu}{q+1}\int_{{\mathbb R}^N}|u|^{q+1},\quad \mu\in\left[\frac{1}{2}, 1\right]. \end{equation} As in Lemma \ref{mpgfori}, $I_{\mu,\lambda}$ has the mountain-pass geometry in $E({\mathbb R}^N)$ for all $\mu\in\left[\frac{1}{2},1\right]$. 
This, together with the monotonicity of $I_{\mu,\lambda}$ with respect to $\mu$, implies that we can define the min-max level associated with $I_{\mu,\lambda}$ as \begin{equation}\label{mp level low q} c_{\mu,\lambda}= \inf_{\gamma\in\Gamma_{\lambda}}\max_{t\in[0,1]} I_{\mu,\lambda} (\gamma(t)),\quad \mu\in\left[\frac{1}{2},1\right], \end{equation} where \begin{equation}\label{gamma lambda} {\Gamma}_{\lambda} = \{ \gamma \in C([0,1], E({\mathbb R}^N)) : \gamma (0) =0, \, I_{\frac{1}{2},\lambda}(\gamma(1))< 0\}. \end{equation} Since the mapping $[1/2,1] \ni \mu \mapsto c_{\mu,\lambda}$ is non-increasing and left-continuous in $\mu$ (see \cite[Lemma $2.2$]{Ambrosetti and Ruiz}) and the non-perturbed functional $I_{\lambda}$ has the mountain-pass geometry by Lemma \ref{mpgfori}, we are now in a position to define the min-max level associated with $I_{\lambda}$ for all $q\in(2,2^{\ast}-1)$. \begin{definition}[\bf{Definition of mountain-pass level for $I_{\lambda}$}] We set \begin{equation}\label{mountainpasslevelforI} {c}_{\lambda}= \left\{ \begin{array}{lll} c_{1,\lambda},\ &q\in(2,3), \\ \inf\limits_{\gamma \in \bar{\Gamma}_{\lambda}} \max\limits_{t \in [0,1]} \ I_{\lambda}(\gamma (t)),\ & q\in[3,2^{\ast}-1), \end{array} \right. \end{equation} where $c_{1,\lambda}$ is given by \eqref{mp level low q} and $\bar{\Gamma}_{\lambda}$ is the family of paths defined as \begin{equation}\label{bar gamma lambda} \bar{\Gamma}_{\lambda} = \left\{ \gamma \in C([0,1]; E({\mathbb R}^N)) : \gamma(0) = 0, I_{\lambda}(\gamma(1)) < 0 \right\}. \end{equation} \end{definition} \color{black} The remainder of this subsection is devoted to further characterisations of the min-max level $c_{\lambda}$ for $q\leq3$. We first require the following technical lemma. \begin{lemma}\label{derivatives of f} Suppose $N\geq 3$, $q>2$ and $\nu>\max\left\{\frac{N}{2},\frac{2}{q-1}\right\}$. Let $\bar{k}\in\left(\frac{\nu(3-q)-2}{2},\frac{4\nu-N-2}{2} \right)$. 
Define $f:{\mathbb R}^+_0\to {\mathbb R}$ as \[f(t)=a{t^{2\nu+2-N}}+bt^{2\nu-N}+c t^{4\nu-N-2-2\bar{k}}-d t^{\nu(q+1)-N}, \quad t\geq0,\] where $a, b, c, d\in{\mathbb R}$ are such that $a, b, d>0$, $c\geq0$. Then, $f$ has a unique critical point corresponding to its maximum. \end{lemma} \begin{remark}\label{k bound} We point out that our range of parameters ensures that $f(t)\to -\infty$ as $t\to +\infty$ and that it holds that $$\left(\frac{\nu(3-q)-2}{2},\frac{4\nu-N-2}{2} \right)\bigcap \left(\frac{(\nu+1)(3-q)-2}{2},\frac{4(\nu+1)-N-2}{2} \right)\neq \emptyset.$$ In Theorem \ref{least energy pure homog rho} and Theorem \ref{partial multiplicity result low q}, we use Lemma \ref{derivatives of f}, assuming $$ \bar{k}> \max\left\{\frac{N}{4},\frac{1}{q-1}\right\}(3-q)-1 $$ for $\bar{k}$ to belong to one of these intervals. \end{remark} \begin{proof}[Proof of Lemma \ref{derivatives of f}] Note that by our assumptions, normalising so that $d=1$ (which does not affect the critical points of $f$), we can write $$f(t)=\sum_{i=1}^k a_it^{p_i}-t^p,$$ where $a_i\geq0$, $0\leq p_i<p$, and $a_ip_i\neq0$ for some $i$. Setting $s=t^p$ and $g(s)=f(s^{1/p})$, we find $$g(s)=\sum_{i=1}^k a_is^{\frac{p_i}{p}}-s.$$ It follows that $g$ is strictly concave and has a unique critical point, which is a maximum; since $t\mapsto t^p$ is a strictly increasing bijection of $(0,+\infty)$, the same holds for $f$. Since our assumptions ensure that $f(t)\to -\infty$ as $t\to +\infty,$ we can conclude. \end{proof} \noindent To state our next result, for any $\nu\in{\mathbb R}$, we set \begin{equation}\label{definition of M lambda} \bar{\mathcal{M}}_{\lambda,\nu}= \left\{u\in E({\mathbb R}^N)\setminus \{0\}:J_{\lambda,\nu}(u)=0\right\}, \end{equation} where $J_{\lambda,\nu}:E({\mathbb R}^N)\to{\mathbb R}$ is defined as \begin{equation}\label{definition of J lambda} \begin{split} J_{\lambda,\nu}(u)=\frac{2\nu+2-N}{2}\int_{{\mathbb R}^N}&|\nabla u|^2+\frac{2\nu-N}{2}\int_{{\mathbb R}^N} u^2\\ &+\frac{4\nu-N-2-2\bar{k}}{4}\cdot \lambda^2\int_{{\mathbb R}^N}\rho\phi_u u^2- \frac{\nu(q+1)-N}{q+1}\int_{{\mathbb R}^N}| u|^{q+1}. 
\end{split} \end{equation} Notice that, if $\rho$ is homogeneous of order $\bar{k},$ $J_{\lambda,\nu}(u)$ is the derivative of the polynomial $f(t)=I_{\lambda}(t^{\nu}u(t\cdot))$ at $t=1$. \begin{proposition}[\bf{Mountain-pass characterisation of groundstates}]\label{variational characterisation MP level low q} Let $N=3,4,5$, $q \in (2, 3]$ if $N=3$ and $q\in(2,2^{\ast}-1)$ if $N=4,5$. Suppose $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^N)\cap W^{1,1}_{loc}({\mathbb R}^N)$ is nonnegative and is homogeneous of degree $\bar{k}$, namely $ \rho(tx)=t^{\bar{k}}\rho(x)$ for all $t>0$, for some \[\bar{k}>\max\left\{\frac{N}{4},\frac{1}{q-1}\right\}(3-q)-1.\] Then, there exists $\nu>\max\{\frac{N}{2},\frac{2}{q-1}\}$ such that $${c}_{\lambda}=\inf_{u\in \bar{\mathcal{M}}_{\lambda,\nu}}I_{\lambda}(u)=\inf_{u\in E({\mathbb R}^N)\setminus\{0\}}\max_{t\geq0} I_{\lambda}(t^{\nu} u(t\cdot)),$$ where ${c}_{\lambda}$ and $\bar{\mathcal{M}}_{\lambda,\nu}$ are defined in \eqref{mountainpasslevelforI} and \eqref{definition of M lambda}, respectively. \end{proposition} \begin{proof} We first note that under the assumptions on the parameters, it holds that \[\frac{4\nu-N-2}{2}>\frac{(\nu+1)(3-q)-2}{2}.\] It follows from this and the lower bound assumption on $\bar{k}$ that we can always find at least one interval \[\left(\frac{\nu(3-q)-2}{2},\frac{4\nu-N-2}{2}\right), \quad \text{with } \nu>\max\left\{\frac{N}{2},\frac{2}{q-1}\right\},\] that contains $\bar{k}$. We fix $\nu$ corresponding to such an interval. 
We break the remainder of the proof into a series of claims.\\ \noindent \textbf{Claim 1.} $\inf_{u\in E({\mathbb R}^N)\setminus\{0\}}\max_{t\geq0} I_{\lambda}(t^{\nu} u(t\cdot))\leq \inf_{u\in \bar{\mathcal{M}}_{\lambda,\nu}}I_{\lambda}(u)$\\ \noindent To see this, let $u\in E({\mathbb R}^N)\setminus \{0\}$ be fixed and consider the function \begin{equation}\label{definition of g least energy} \begin{split} g(t)&= I_{\lambda}(t^{\nu} u(t\cdot))\\ &=a{t^{2\nu+2-N}}+bt^{2\nu-N}+c t^{4\nu-N-2-2\bar{k}}-d t^{\nu(q+1)-N}, \quad t\geq0, \end{split} \end{equation} where \[a=\frac{1}{2} \int_{{\mathbb R}^N}|\nabla u|^2, \,\, b=\frac{1}{2} \int_{{\mathbb R}^N} u^2,\,\, c=\frac{\lambda^2}{4} \int_{{\mathbb R}^N}\rho\phi_u u^2, \,\, d=\frac{1}{q+1} \int_{{\mathbb R}^N}|u|^{q+1}.\] By Lemma \ref{derivatives of f}, it holds that $g$ has a unique critical point, $t=\tau_u$, corresponding to its maximum. Moreover, we can see that \begin{align*} g'(t)&=\frac{\,\mathrm{d} I_{\lambda}(t^{\nu} u(t\cdot))}{\,\mathrm{d} t}\\ &=\frac{2\nu+2-N}{2}\cdot t^{2\nu+1-N}\int_{{\mathbb R}^N}|\nabla u|^2+\frac{2\nu-N}{2}\cdot t^{2\nu-N-1} \int_{{\mathbb R}^N} u^2\\ &\qquad +\frac{4\nu-N-2-2\bar{k}}{4}\cdot t^{4\nu-N-3-2\bar{k}}\cdot \lambda^2\int_{{\mathbb R}^N}\rho\phi_u u^2- \frac{\nu(q+1)-N}{q+1}\cdot t^{\nu(q+1)-N-1}\int_{{\mathbb R}^N}| u|^{q+1}, \end{align*} and so \[g'(t)=0 \iff t^{\nu} u(t\cdot)\in \bar{\mathcal{M}}_{\lambda,\nu}.\] Taken together, we have shown that for any $u\in E({\mathbb R}^N)\setminus\{0\}$, there exists a unique $t=\tau_u$ such that $\tau_u^{\nu} u({\tau_u}\cdot)\in \bar{\mathcal{M}}_{\lambda,\nu}$ and the maximum of $I_{\lambda}(t^{\nu} u(t\cdot))$ for $t\geq0$ is achieved at $\tau_u$. 
Thus, it holds that \begin{equation*} \begin{split} \inf_{u\in E({\mathbb R}^N)\setminus\{0\}}\max_{t\geq0} I_{\lambda}(t^{\nu} u(t\cdot))&\leq \max_{t\geq0} I_{\lambda}(t^{\nu} u(t\cdot)) =I_{\lambda}(\tau_u^{\nu} u({\tau_u}\cdot)), \quad \forall u\in E({\mathbb R}^N)\setminus\{0\}, \end{split} \end{equation*} from which we can deduce that the claim holds.\\ \noindent \textbf{Claim 2.} ${c}_{\lambda} \leq \inf_{u\in E({\mathbb R}^N)\setminus\{0\}}\max_{t\geq0} I_{\lambda}(t^{\nu}u(t\cdot)).$\\ \noindent By the assumptions on our parameters, we can deduce that $\nu(q+1)-N>2\nu+2-N$ and $\nu(q+1)-N>4\nu-N-2-2\bar{k}$. It follows that $I_{\lambda}(t^{\nu} u(t\cdot))< 0$ for every $u\in E({\mathbb R}^N)\setminus \{0\}$ and $t$ large. Similarly, $I_{\frac{1}{2},\lambda}(t^{\nu} u(t\cdot))< 0$ for every $u\in E({\mathbb R}^N)\setminus \{0\}$ and $t$ large. Therefore, we obtain \[{c}_{\lambda} \leq \max_{t\geq0} I_{\lambda}(t^{\nu} u(t\cdot)),\quad \forall u\in E({\mathbb R}^N)\setminus\{0\},\] and the claim follows.\\ \noindent \textbf{Claim 3.} $\inf_{u\in \bar{\mathcal{M}}_{\lambda,\nu}}I_{\lambda}(u)\leq {c}_{\lambda}.$\\ \noindent We define \[A_{\lambda,\nu}=\left\{u\in E({\mathbb R}^N)\setminus \{0\}: J_{\lambda,\nu}(u) > 0\right\}\cup \{0\},\] and first note that $A_{\lambda,\nu}$ contains a small ball around the origin. Indeed, arguing as in the proof of Lemma \ref{nonzeroweaklimitlemma}, we can show that for every $u\in E({\mathbb R}^N)\setminus \{0\}$ and any $\beta>0$, we have \begin{equation*}\label{localmin least energy low q} \begin{split} J_{\lambda,\nu} (u) \geq \frac{2\nu-N}{2}&||u||_{H^1({\mathbb R}^N)}^2- \left(\frac{4\nu-N-2-2\bar{k}}{\omega}\right)\left(\frac{\beta-1}{4}\right)||u||^4_{H^1({\mathbb R}^N)}\\ &+\left(\frac{4\nu-N-2-2\bar{k}}{\omega}\right)\left(\frac{\beta-1}{4\beta}\right) ||u||_{E({\mathbb R}^N)}^4 -\frac{S_{q+1}^{-(q+1)}(\nu(q+1)-N)}{q+1}||u||_{H^1({\mathbb R}^N)}^{q+1}. 
\end{split} \end{equation*} \noindent We now pick $\delta=\left(\frac{(2\nu-N)(q+1)S_{q+1}^{q+1}}{4(\nu(q+1)-N)}\right)^{1/(q-1)}$ and note that since $\nu>\frac{N}{2}$, it follows that $\delta>0$. We assume $||u||_{E({\mathbb R}^N)}<\delta$ and, choosing $\beta>1$ sufficiently close to $1$, we obtain \begin{align*} J_{\lambda,\nu} (u) &\geq \left[ \frac{2\nu - N}{4}-\left(\frac{4\nu-N-2-2\bar{k}}{\omega}\right)\left( \frac{\beta-1}{4} \right) \delta^2\right] ||u||_{H^1({\mathbb R}^N)}^2 +\left(\frac{4\nu-N-2-2\bar{k}}{\omega}\right)\left(\frac{\beta-1}{4\beta}\right) ||u||_{E({\mathbb R}^N)}^4\\ &\geq \left(\frac{4\nu-N-2-2\bar{k}}{\omega}\right)\left(\frac{\beta-1}{4\beta}\right) ||u||_{E({\mathbb R}^N)}^4, \end{align*} which is strictly positive by our choice of $\nu$. This is enough to prove that $A_{\lambda,\nu}$ contains a small ball around the origin. Now, notice that if ${u}\in A_{\lambda,\nu}$, then $g'(1)>0$, where $g$ is defined in \eqref{definition of g least energy}. Since $g(0)=0$ and we showed in Claim $1$ that $\tau_u$ is the unique critical point of $g$, corresponding to its maximum, it follows that $1<\tau_u$. Using the facts that $I_{\lambda}(0)=0$ and $g'(t)=\frac{\,\mathrm{d} I_{\lambda}(t^{\nu} u(t\cdot))}{\,\mathrm{d} t}\geq0$ for all $t\in[0, \tau_u]$, we obtain that $I_{\lambda}(t^{\nu} u(t\cdot))\geq0$ for all $t\in[0, \tau_u]$ and, in particular, at $t=1$. Thus, we have shown that $I_{\lambda}(u)\geq 0$, and hence also $I_{\frac{1}{2},\lambda}(u)\geq 0$, for every $u\in A_{\lambda,\nu}$. Therefore, every $\gamma \in \Gamma_{\lambda}$ and every $\gamma \in \bar{\Gamma}_{\lambda}$, where $\Gamma_{\lambda}$ and $\bar{\Gamma}_{\lambda}$ are given by \eqref{gamma lambda} and \eqref{bar gamma lambda} respectively, has to cross $\bar{\mathcal{M}}_{\lambda,\nu}$, and so the claim holds. \\ \noindent \textbf{Conclusion.} Putting the claims together, it is clear that the statement holds. 
\end{proof} \subsection{Palais-Smale sequences} We recall that a sequence $(u_{n})_{n\in{\mathbb N}} \subset E({\mathbb R}^N)$ is said to be a Palais-Smale sequence for $I_{\lambda}$ at some level $c\in{\mathbb R}$ if \begin{equation*} I_{\lambda}(u_{n}) \rightarrow c, \quad I_{\lambda}'(u_{n}) \rightarrow 0, \quad \text{as} \ n \rightarrow \infty. \end{equation*} If any such sequence is relatively compact in the $E({\mathbb R}^N)$ topology, then we say that the functional $I_{\lambda}$ satisfies the Palais-Smale condition at level $c$. \begin{lemma}[\bf{Boundedness of Palais-Smale sequences}]\label{suff conds bounded PS large q} Assume $N = 3,4$, $\rho\in L^\infty_{\textrm{loc}}({\mathbb R}^N)$ is nonnegative, $q\in[3,2^{\ast}-1]$, and $(u_n)_{n\in{\mathbb N}} \subset E({\mathbb R}^N)$ is a Palais-Smale sequence for $I_{\lambda}$ at any level $c>0$. Then, for any fixed $\lambda > 0$, $(u_n)_{n\in{\mathbb N}}$ is bounded in $E({\mathbb R}^N)$. \end{lemma} \noindent We stress that our assumption on $N$ yields $3\leq 2^*-1.$ \begin{proof} For convenience, set $$a_n= ||u_n||_{H^1({\mathbb R}^N)},\qquad b_n=\lambda\left( \int_{\R^N}\phi_{u_n}u_n^2\rho(x)\right)^{\frac{1}{2}},\qquad c_q=\min{\left\{\left(\frac{q-1}{2}\right),\left(\frac{q-3}{4}\right)\right\}}$$ and note that, as $n\to+\infty,$ \begin{equation}\label{whole equation bdd PS} C_1+o(1)||u_n||_{E({\mathbb R}^N)}\geq (q+1)I_{\lambda}(u_n)-I_{\lambda}'(u_n)(u_n)=\left(\frac{q-1}{2}\right)a^2_n+ \left(\frac{q-3}{4}\right)b^2_n \end{equation} for some $C_1>0$. Assuming $||u_n||_{E({\mathbb R}^N)}\to+\infty$, we derive a contradiction in each of the cases: \begin{enumerate}[label=(\roman*)] \item $a_n$, $b_n\to +\infty$, \item $a_n$ bounded and $b_n \to+\infty$, \item $a_n\to+\infty$ and $b_n$ bounded. \end{enumerate} First consider $q>3$. 
If $b_n\to+\infty$, for large $n$ we have $b_n^2\geq b_n$ and by \eqref{whole equation bdd PS} we get $$C_1+o(1)||u_n||_{E({\mathbb R}^N)}\geq c_q||u_n||_{E({\mathbb R}^N)}^2,\quad n\to+\infty, $$ a contradiction in cases $(\textrm{i})$ and $(\textrm{ii})$. If $a_n\to+\infty$ and $b_n$ is bounded, then $||u_n||_{E({\mathbb R}^N)}\sim a_n,$ hence $$ C_1+o(1)a_n\geq c_q a_n^2,\quad n\to+\infty, $$ a contradiction in case $(\textrm{iii})$. This completes the proof for $q>3$.\\ Consider now $q=3$. By the Sobolev inequality we have $$ C_2 \geq I_{\lambda}(u_n)\geq \frac{1}{2} a_n^2+\frac{1}{4} b_n^2 -C_3 a_n^{4}, $$ for some $C_2$, $C_3>0,$ which yields a contradiction in case $(\textrm{ii})$. On the other hand, if $a_n\to+\infty$, from the same estimate we have \begin{equation}\label{lessim ineq} b_n\lesssim a_n^2,\quad n\to+\infty. \end{equation} Note that \eqref{whole equation bdd PS} yields \begin{equation}\label{chain of inequality bdd PS 2} C_1+o(1)||u_n||_{E({\mathbb R}^N)}\geq a_n^2,\quad n\to+\infty. \end{equation} Dividing by $||u_n||_{E({\mathbb R}^N)}=\left(a_n^2+b_n\right)^{\frac{1}{2}}$, we get $\frac{a_n^4}{a_n^2+b_n}= o(1),\,n\to+\infty,$ hence \[b_n\gtrsim a_n^4,\quad n\to+\infty,\] a contradiction in case $(\textrm{iii})$. This and \eqref{lessim ineq} give \[a_n^4\lesssim a_n^2,\quad n\to+\infty,\] a contradiction in case $(\textrm{i})$. This completes the proof. \end{proof} \begin{lemma}[\bf{Lower bound uniform in $\lambda$ for PS sequences at level $c_\lambda$}]\label{nonzeroweaklimitlemma} Assume $N= 3,4,5$, $\lambda > 0$, $q \in (2,2^{\ast} -1]$, and $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^N)$ is nonnegative. There exists a universal constant $\alpha=\alpha(q) > 0$ independent of $\lambda$ such that for any Palais-Smale sequence $(u_{n})_{n \in {\mathbb N}}\subset E({\mathbb R}^N)$ for $I_{\lambda}$ at level $c_{\lambda},$ it holds that \begin{equation*} \liminf_{n \rightarrow \infty} \normLqplusp{u_{n}}{{\mathbb R}^N} \geq \alpha. 
\end{equation*} \end{lemma} \begin{proof} For every $u\in E({\mathbb R}^N),$ denoting by $S_{q+1}$ the best constant such that $S_{q+1}\|u\|_{L^{q+1}(\mathbb R^N)}\leq \| u\|_{H^1(\mathbb R^N)},$ we have \begin{align*} I_{\lambda} (u) \geq \frac{1}{2}||u||_{H^1({\mathbb R}^N)}^2+\frac{\lambda^2}{4}\int_{{\mathbb R}^N}\rho \phi_u u^2 -\frac{S_{q+1}^{-(q+1)}}{q+1}||u||_{H^1({\mathbb R}^N)}^{q+1}. \end{align*} \noindent Since $\omega\lambda^2\int_{{\mathbb R}^N} \rho \phi_u u^2 = \left(||u||^2_{E({\mathbb R}^N)} - ||u||_{H^1({\mathbb R}^N)}^2\right)^2,$ estimating the term $||u||^2_{E({\mathbb R}^N)} ||u||_{H^1({\mathbb R}^N)}^2$ with Young's inequality, we have for any $\beta>0$ \begin{align}\label{localmin} I_{\lambda} (u) \geq \frac{1}{2}||u||_{H^1({\mathbb R}^N)}^2- \frac{1}{\omega}\left(\frac{\beta-1}{4}\right)||u||^4_{H^1({\mathbb R}^N)}+\frac{1}{\omega}\left(\frac{\beta-1}{4\beta}\right) ||u||_{E({\mathbb R}^N)}^4 -\frac{S_{q+1}^{-(q+1)}}{q+1}||u||_{H^1({\mathbb R}^N)}^{q+1}.\nonumber \end{align} \noindent We now pick $\delta=\left(\frac{(q+1)S_{q+1}^{q+1}}{4}\right)^{1/(q-1)}$ and assume $||u||_{E({\mathbb R}^N)}<\delta$, which also implies that $||u||_{H^1({\mathbb R}^N)}<\delta$. Then, choosing $\beta>1$ sufficiently near $1$ we obtain \begin{align*} I_{\lambda} (u) &\geq \left[ \frac{1}{4}-\frac{1}{\omega}\left( \frac{\beta-1}{4} \right) \delta^2\right] ||u||_{H^1({\mathbb R}^N)}^2 +\frac{1}{\omega}\left(\frac{\beta-1}{4\beta}\right) ||u||_{E({\mathbb R}^N)}^4\\ &\geq \frac{1}{\omega}\left(\frac{\beta-1}{4\beta}\right) ||u||_{E({\mathbb R}^N)}^4. 
\end{align*} We note here that both $\delta$ and $\beta$ depend on $q$ but not on $\lambda.$ Thus, we have shown that if $||u||_{E({\mathbb R}^N)}=\delta/2$, then $I_{\lambda}(u)\geq\underline{c}$, for some $\underline{c}>0$ independent of $\lambda.$ So, since every path connecting the origin to where the functional $I_{\lambda}$ is negative crosses the sphere of radius $\delta/2$, it follows that $$c_{\lambda} \geq \underline{c} \text{ for every } \lambda \geq 0.$$ For convenience, set $$a_n= ||u_n||_{H^1({\mathbb R}^N)},\qquad b^2_n=\lambda\left( \int_{\R^N}\phi_{u_n}u_n^2\rho(x)\right)^{\frac{1}{2}},$$ where $(u_n)_{n\in \mathbb N}$ is an arbitrary Palais-Smale sequence at the level $c_\lambda.$ It holds that \begin{align*} c_{\lambda} +o(1)-\|I_\lambda'(u_n)\|_{E'({\mathbb R}^N)}\|u_n\|_{E({\mathbb R}^N)} &\leq I_{\lambda} (u_{n}) - I^{\prime}_{\lambda} (u_{n}) u_{n} \\ &= \left(\frac{1}{2}-1\right)a_n^2+\left(\frac{1}{4}-1\right)b_n^4+\left(1-\frac{1}{q+1}\right)\|u_n\|_{q+1}^{q+1}. \end{align*} By concavity note that $\|u_n\|_{E({\mathbb R}^N)}\leq a_n+b_n,$ hence the above yields $$\underline{c} +o(1)\underbrace{-\|I_\lambda'(u_n)\|_{E'({\mathbb R}^N)}(a_n+b_n)+\frac{1}{2}\left(a_n^2+b_n^4\right)}_{c_n}\leq\|u_n\|_{q+1}^{q+1},$$ and it is easy to see that $\liminf c_n\geq0.$ The conclusion follows then with $\alpha:=\underline{c}.$ \color{black} \end{proof} \section{The case of $\rho$ vanishing on a region}\label{vanishing rho section} \noindent Throughout this section we will make the assumption that \begin{enumerate}[label=$\mathbf{(\rho_{\arabic*})}$] \item \label{vanishing_rho} $\rho^{-1} (0)$ has non-empty interior and there exists $\overline{M} > 0$ such that \begin{equation*} \abs{\left\{x \in {\mathbb R}^N : \rho(x) \leq \overline{M}\right\}} < \infty. 
\end{equation*} \end{enumerate} \noindent In what follows it is convenient to set \begin{equation*} A(R) = \{ x \in {\mathbb R}^N : \abs{x} > R, \ \rho (x) \geq \overline{M} \}, \end{equation*} \begin{equation*} B(R) = \{ x \in {\mathbb R}^N : \abs{x} > R, \ \rho (x) < \overline{M} \}, \end{equation*} for any $R>0$. \begin{lemma}[\bf{Key vanishing property}]\label{measureofBgoingtozerolemma} Suppose $\rho$ is a measurable function and that for some $\overline{M} \in {\mathbb R}$ the set $$ \overline{B}:=\{x \in {\mathbb R}^N : \rho(x) < \overline{M}\} $$ has finite measure. Then $$\lim\limits_{R\rightarrow\infty}|B(R)|=0.$$ \end{lemma} \begin{proof} Since $B(R)\subseteq \overline{B}$, we have $$|B(R)|=\int_{\overline{B}}\chi_{B(R)}(x)\,\mathrm{d} x,$$ and $\chi_{B(R)}(x)\rightarrow 0$ as $R\rightarrow\infty$ for every fixed $x$, while $0\leq\chi_{B(R)}\leq\chi_{\overline{B}}\in L^1({\mathbb R}^N)$. The conclusion then follows by the dominated convergence theorem. \end{proof} \begin{lemma}[\bf{Uniform bounds in $\lambda$ for PS sequences at level $c_\lambda$}]\label{boundedPSforvanishingrho} Assume $N = 3,4$, $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^N)$ is nonnegative, satisfying \ref{vanishing_rho}, $q\in[3,2^{\ast}-1]$, $\lambda > 0.$ There exists a universal constant $\overline{C}=\overline{C}(q,N)>0$, independent of $\lambda$, such that for any Palais-Smale sequence $(u_n)_{n\in{\mathbb N}} \subset E({\mathbb R}^N)$ for $I_{\lambda}$ at level $c_{\lambda}$ it holds that $\|u_n\|_{E({\mathbb R}^N)}<\overline{C}.$ \end{lemma} \begin{proof} Let $v \in C^{\infty}_c ({\mathbb R}^N)\setminus \{0\}$ have support in $\rho^{-1} (0)$. Pick $t_v>0$ such that $I_0(t_v v)<0$ and set $v_t=tt_v v.$ Then, by definition of $c_{\lambda}$, \begin{equation}\label{uniform upper bound mp level} c_{\lambda} \leq \max_{t \in[0,1]} I_{\lambda} (v_t)=\max_{t\geq 0}I_0(tv)=:\overline{c}\,\,\footnote{In fact this bound holds in dimensions $N= 3,4,5$ and every $q\in(2,2^*-1].$}.
\end{equation} Note that since $(u_n)$ is bounded by Lemma \ref{suff conds bounded PS large q}, it holds that \begin{align*} c_{\lambda} &= \lim_{n \rightarrow \infty} ( I_{\lambda} (u_{n}) - \frac{1}{q+1} I^{\prime}_{\lambda} (u_{n}) \cdot u_{n} )\\ &= \lim_{n \rightarrow \infty} \Big( \Big( \frac{1}{2} - \frac{1}{q+1} \Big) \|u_{n}\|^2_{H^1(\mathbb R^N)}+\lambda^2 \Big( \frac{1}{4} - \frac{1}{q+1} \Big) \int_{{\mathbb R}^N}\phi_{u_{n}} \rho(x) u_{n}^2 \Big). \end{align*} The conclusion follows immediately in the case $q>3$. For $q=3$ the above yields a uniform bound independent of $\lambda$ for the $H^1({\mathbb R}^N)$ norm and hence for the $L^{q+1}({\mathbb R}^N)$ norm as well by Sobolev's inequality. Since $$\lambda^2\limsup_{n \rightarrow\infty} \int_{{\mathbb R}^N} \phi_{u_{n}} u_{n}^2 \rho(x)\leq 4 \left(c_\lambda+\limsup_{n \rightarrow\infty}\left( \|u_n\|^2_{H^1({\mathbb R}^N)}+\|u_n\|^{q+1}_{L^{q+1}({\mathbb R}^N)}\right)\right),$$ this concludes the proof. \end{proof} \begin{lemma}[\bf{Control on the tails of uniformly bounded sequences}]\label{intoutsideballlemma} Assume $N = 3,4,5,$ $\rho \in L^{\infty}_{\textrm{loc}}({\mathbb R}^N)$ is nonnegative, satisfying \ref{vanishing_rho}, and $(u_{n})_{n \in {\mathbb N}}\subset E({\mathbb R}^N)$ is bounded uniformly with respect to $\lambda$. Then, for every $\beta > 0$ there exist $\lambda_{\beta} > 0$ and $R_{\beta} > 0$ such that for $\lambda > \lambda_{\beta}$ and $R > R_{\beta}$, \begin{equation*} ||{u_{n}}||_{L^{3}({\mathbb R}^N \setminus B_R)}^{3} < \beta. \end{equation*} \end{lemma} \begin{proof} By Lemma \ref{weightedL3boundlemma} we have \begin{equation} \lambda \int_{{\mathbb R}^N} \rho(x) \abs{u_{n}}^3 \leq C \|u_n\|_{E({\mathbb R}^N)}^3 \leq C', \end{equation} for some positive constant $C'$ independent of $\lambda.$ Hence $$ \int_{A(R)} \abs{u_{n}}^3 \leq \frac{C'}{\lambda \overline{M}}.
$$ Also observe that by H\"older's inequality and Lemma \ref{measureofBgoingtozerolemma} we have \begin{align*} \int_{B(R)} \abs{u_{n}}^3 &\leq \Big( \int_{{\mathbb R}^N} \abs{u_{n}}^{2^*} \Big)^{\frac{3}{2^*}} \Big( \int_{B(R)} 1 \Big)^{\frac{2^*-3}{2^*}}\\ &\leq C'' \norme{u_{n}}{{\mathbb R}^N}^3 \cdot \abs{B(R)}^{\frac{2^*-3}{2^*}}\\ &\leq C''' \abs{B(R)}^{\frac{2^*-3}{2^*}}\rightarrow 0 \end{align*} as $R\rightarrow \infty$, for some uniform constant $C'''>0.$ Note that our assumption on $N$ yields $3<2^*.$ This is enough to conclude the proof. \end{proof} \begin{proposition}[{\bf Nonzero weak limits of PS sequences at level $c_\lambda$ for $\lambda$ large}]\label{positivityofweakuthm} Let $N=3$, $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^N)$ be nonnegative, satisfying \ref{vanishing_rho}, and $q \in [3, 5).$ There exist universal positive constants $\lambda_0=\lambda_0(q,\overline{M})$ and $\alpha_0=\alpha_0(q),$ such that if for some $\lambda\geq\lambda_0,$ $u\in E({\mathbb R}^3)$ is the weak limit of a Palais-Smale sequence for $I_{\lambda}$ at level $c_{\lambda},$ then it holds that \begin{equation*} \int_{ {\mathbb R}^3}|u|^3\,\mathrm{d} x > \alpha_0 . \end{equation*} \end{proposition} \begin{proof} Let $(u_n)_{n\in\mathbb N}\subset E({\mathbb R}^3)$ be an arbitrary Palais-Smale sequence at level $c_\lambda.$ Note that we can pick $\alpha(q) > 0$ independent of $\lambda$ and of the sequence such that \begin{equation*} \liminf_{n \rightarrow \infty}\|u_n\|^3_{L^3({\mathbb R}^3)} \geq \alpha(q). \end{equation*} Indeed by interpolation $$\int_{{\mathbb R}^3}|u_n|^{q+1}\leq\Big(\int_{{\mathbb R}^3}|u_n|^3\Big)^{\frac{5-q}{3}}\Big(\int_{{\mathbb R}^3}|u_n|^6\Big)^{\frac{q-2}{3}}$$ and the claim follows by the Sobolev inequality, the uniform bound given by Lemma \ref{boundedPSforvanishingrho}, and Lemma \ref{nonzeroweaklimitlemma}.
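To make this choice of $\alpha(q)$ explicit, the following sketch may help; here $\underline{\alpha}>0$ denotes (our notation) the lower bound on $\liminf_{n}\|u_n\|_{L^{q+1}({\mathbb R}^3)}^{q+1}$ provided by Lemma \ref{nonzeroweaklimitlemma}, and $\overline{C}$ is the bound of Lemma \ref{boundedPSforvanishingrho}. Since the Sobolev inequality gives $\int_{{\mathbb R}^3}|u_n|^6\leq S^{-3}\overline{C}^6$, the interpolation inequality above yields \begin{equation*} \underline{\alpha}\leq\Big(S^{-3}\overline{C}^6\Big)^{\frac{q-2}{3}}\liminf_{n \rightarrow \infty}\Big(\int_{{\mathbb R}^3}|u_n|^3\Big)^{\frac{5-q}{3}}, \qquad\text{so one may take}\quad \alpha(q)=\Big(\underline{\alpha}\,\big(S^{-3}\overline{C}^6\big)^{-\frac{q-2}{3}}\Big)^{\frac{3}{5-q}}. \end{equation*}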
In particular, recall that by Lemma \ref{boundedPSforvanishingrho}, there exists a universal constant $\overline{C}=\overline{C}(q,N)>0$ independent of $\lambda$ and of the sequence, such that $\|u_n\|_{E({\mathbb R}^N)}<\overline{C}.$ By Lemma \ref{intoutsideballlemma}, it follows that we can pick $\lambda_0(q,\overline{M})$ and $R_\alpha>0$ such that for every $\lambda\geq \lambda_0$ and every $R>R_\alpha$ we have $$\limsup_{n\rightarrow\infty} \norm{u_{n}}^{3}_{L^{3} ({\mathbb R}^3 \setminus B_R)} < \frac{\alpha}{2}.$$ By the classical Rellich theorem, passing if necessary to a subsequence, we can assume that $u_n\rightarrow u$ in $L^{3}_{\textrm{loc}}(\mathbb R^3).$ Therefore, for every $R>R_\alpha$, we have \begin{equation*} \norm{u}^{3}_{L^{3} (B_R)} =\lim_{n\rightarrow \infty}\norm{u_n}^{3}_{L^{3} (B_R)}\geq \liminf\limits_{n\rightarrow \infty}\norm{u_{n}}^{3}_{L^{3} (\mathbb R^3)} -\limsup\limits_{n\rightarrow \infty}\norm{u_{n}}^{3}_{L^{3} ({\mathbb R}^3 \setminus B_R)} > \frac{\alpha}{2}. \end{equation*} The conclusion follows with $\alpha_0=\alpha/2.$ \end{proof} \begin{proposition}[\bf{Energy estimates for $\lambda$ large}]\label{lambdaone} Let $N=3$, $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^3)$ be nonnegative, satisfying \ref{vanishing_rho}, and $q \in [3, 5).$ Let $\lambda_0$ be defined as in Proposition \ref{positivityofweakuthm}.
There exists a universal constant $\lambda_1=\lambda_1(q,\overline{M})>0$ such that, if $\lambda\geq\max\left(\lambda_0,\lambda_1\right)$ and $u$ is the nontrivial weak limit in $E({\mathbb R}^3)$ of some Palais-Smale sequence $(u_n)_{n\in\mathbb N}\subset E(\mathbb R^3)$ for $I_\lambda$ at level $c_\lambda,$ then it holds that \begin{itemize} \item $I_\lambda(u)= c_\lambda, \quad \textrm{for}\,\,q\in(3,5),$ \item [] \item $\inf_{v\in\mathcal N_\lambda}I_\lambda(v)\leq I_{\lambda}(u)\leq c_\lambda,\quad \textrm{for}\,\,q=3.$ \item [] \end{itemize} In particular, for all $\lambda\geq\max\left(\lambda_0,\lambda_1\right),$ the mountain-pass level $c_\lambda$ is critical for $q\in(3,5),$ as well as the level $I_\lambda(u)$ for $q=3.$ \end{proposition} \begin{proof} By Proposition \ref{positivityofweakuthm}, for every $q\in[3,2^{\ast}-1)$ and $\lambda\geq\lambda_0$, any Palais-Smale sequence $(u_n)_{n\in\mathbb N}\subset E(\mathbb R^3)$ for $I_\lambda$ at level $c_\lambda$ admits a subsequence, still denoted by $(u_n)$, such that $u_n\rightharpoonup u\in E({\mathbb R}^3)\setminus\{0\}$ weakly in $E({\mathbb R}^3)$ and almost everywhere. By a standard argument $u$ is a critical point of $I_\lambda.$ For the sake of clarity we break the proof into two steps.\\ \textbf{Step 1:} We first show that there exists a universal constant $C=C(q)>0$ such that for every $\lambda\geq \lambda_0,$ $R>0$ and $n\in\mathbb N,$ it holds that \begin{equation} \begin{aligned}\label{boundforerrorterm} I_{\lambda}(u_{n} - u) &\geq \left( \frac{1}{4} - S_{\lambda} S^{-1} \left( \int_{A(R)} \abs{u_{n} - u}^6 \right)^{\frac{2}{3}} \right) \int_{{\mathbb R}^3} \abs{\nabla(u_{n} - u)}^2\\ & \qquad \qquad - C \abs{B(R)}^{\frac{5-q}{6}} - \overqplus \int_{\abs{x} < R} \abs{u_{n} - u}^{q+1}, \end{aligned} \end{equation} where \begin{equation*} S_{\lambda} := (q-2)[3(q+1)]^{\frac{-3}{q-2}}\left( \frac{2(5-q)}{\lambda \overline{M}} \right)^{\frac{5-q}{q-2}}, \end{equation*} $S=3(\pi/2)^{4/3}$ is the
Sobolev constant, and $\overline{M}$ is defined as in \ref{vanishing_rho}. Reasoning as in Lemma \ref{weightedL3boundlemma} and by Lemma \ref{intoutsideballlemma} we obtain, \begin{align}\label{initialboundforerrorterm} \nonumber I_{\lambda}(u_{n} - u) &\geq \quarter \int_{{\mathbb R}^3} \abs{\nabla (u_{n} - u)}^2 + \quarter \int_{{\mathbb R}^3} \abs{\nabla (u_{n} - u)}^2\\ \nonumber&\qquad \qquad + \frac{\lambda^2}{4} \int_{{\mathbb R}^3} \phi_{(u_{n} - u)} (u_{n} - u)^2 \rho (x) - \overqplus \int_{{\mathbb R}^3} \abs{u_{n} - u}^{q+1}\\ \nonumber&\geq \quarter \int_{{\mathbb R}^3} \abs{\nabla (u_{n} - u)}^2 + \frac{\lambda}{2} \int_{{\mathbb R}^3} \rho (x) \abs{u_{n} - u}^3 - \overqplus \int_{{\mathbb R}^3} \abs{u_{n} - u}^{q+1}\\ &\geq \quarter \int_{{\mathbb R}^3} \abs{\nabla (u_{n} - u)}^2 + \frac{\lambda \overline{M}}{2} \int_{A(R)} \abs{u_{n} - u}^3 - \overqplus \int_{{\mathbb R}^3} \abs{u_{n} - u}^{q+1}. \end{align} Note that \begin{equation*} \int_{{\mathbb R}^3} \abs{u_{n} - u}^{q+1} = \int_{\abs{x} < R} ... + \int_{A(R)}... + \int_{B(R)} ... \end{equation*} Using that $(u_n)_{n\in\mathbb N}$ is uniformly bounded in $E({\mathbb R}^3)$ and arguing as in Lemma \ref{intoutsideballlemma} and by Sobolev inequality, we have \begin{equation}\label{errortermBbound} \int_{B(R)} \abs{u_{n} - u}^{q+1} \leq C_1 \norm{u_{n} - u}_{L^6 ({\mathbb R}^3)}^{q+1} \abs{B(R)}^{\frac{5 - q}{6}} \leq C_2 \abs{B(R)}^{\frac{5-q}{6}}. \end{equation} By the interpolation and Young's inequalities we obtain for all $\delta>0$ that \begin{align*} \overqplus \int_{A(R)} \abs{u_{n} - u}^{q+1} &\leq \overqplus \left( \int_{A(R)} \abs{u_{n} - u}^3 \right)^{\frac{5-q}{3}} \left( \int_{A(R)} \abs{u_{n} - u}^6 \right)^{\frac{q-2}{3}}\\ &\leq \left( \frac{5-q}{3} \right) \left( \frac{\delta}{q+1} \right)^{\frac{3}{5-q}} \int_{A(R)} \abs{u_{n} - u}^3 + \left( \frac{q-2}{3} \right) \delta^{\frac{-3}{q-2}} \int_{A(R)} \abs{u_{n} - u}^6. 
\end{align*} In particular, we can set \begin{equation*} \delta = \left( \frac{\lambda \overline{M}}{2} \cdot \frac{3}{5-q} \right)^{\frac{5-q}{3}} (q+1). \end{equation*} Hence \begin{align}\label{errortermAbound} \nonumber \overqplus \int_{A(R)} \abs{u_{n} - u}^{q+1} &\leq \frac{\lambda \overline{M}}{2} \int_{A(R)} \abs{u_{n} - u}^3 + S_{\lambda} \int_{A(R)} \abs{u_{n} - u}^6\\ &\leq \frac{\lambda \overline{M}}{2} \int_{A(R)} \abs{u_{n} - u}^3 + S_{\lambda} S^{-1} \left( \int_{A(R)} \abs{u_{n} - u}^6 \right)^{\frac{2}{3}} \int_{{\mathbb R}^3} \abs{\nabla (u_{n} - u)}^2, \end{align} where we have used Sobolev's inequality written as \begin{equation*} S \left( \int_{A(R)} \abs{u_{n} - u}^6 \right)^{\frac{1}{3}} \leq \int_{{\mathbb R}^3} \abs{\nabla(u_{n} - u)}^2. \end{equation*} Putting together \eqref{initialboundforerrorterm}, \eqref{errortermBbound} and \eqref{errortermAbound}, the claim \eqref{boundforerrorterm} follows.\\ \textbf{Step 2: Conclusion.} By the classical Brezis-Lieb lemma and Lemma \ref{nonlocalBL} we have \begin{equation}\label{BL decompositions} c_\lambda=\lim_{n\rightarrow\infty}I_\lambda(u_n)=I_\lambda(u)+\lim_{n\rightarrow\infty}I_\lambda(u_n-u). 
\end{equation} Note that there exists a positive constant $\lambda_1=\lambda_1(q,\overline{M})$ such that for every $\lambda\geq \lambda_1$ it holds that \begin{equation}\label{positivity of error term} \quarter - S_{\lambda} S^{-3} \overline{C}^4 \geq 0, \end{equation} where $\overline{C}$ is defined via Lemma \ref{boundedPSforvanishingrho} by the property $\|u_n\|_{E({\mathbb R}^3)}<\overline{C} .$ Note that, again by the Brezis-Lieb lemma, we have \begin{equation*} \int_{A(R)} \abs{u_{n} - u}^6 = \int_{A(R)} \abs{u_{n}}^6 - \int_{A(R)} \abs{u}^6 + o_n (R), \end{equation*} with $\lim_{n\rightarrow \infty} o_n (R)=0$ for any fixed $R>0$; since by Sobolev's inequality it holds that \begin{equation*} \int_{A(R)} \abs{u_{n}}^6 \leq S^{-3} \left( \int_{{\mathbb R}^3} \abs{\nabla u_{n}}^2 \right)^3 \leq S^{-3} \overline{C}^6, \end{equation*} we obtain the estimate \begin{equation}\label{BL6} \limsup_{R \rightarrow \infty} \limsup_{n \rightarrow \infty} \int_{A(R)} \abs{u_{n} - u}^6 \leq S^{-3} \overline{C}^6. \end{equation} We conclude, by \eqref{boundforerrorterm}, \eqref{positivity of error term}, \eqref{BL6} and the classical Rellich theorem that \begin{align*} \lim\limits_{n \rightarrow \infty} I_{\lambda} (u_{n} - u) &\geq \liminf\limits_{R \rightarrow \infty} \liminf\limits_{n \rightarrow \infty} \left( \frac{1}{4} - S_{\lambda} S^{-1} \left( \int_{A(R)} \abs{u_{n} - u}^6 \right)^{\frac{2}{3}} \right) \int_{{\mathbb R}^3} \abs{\nabla(u_{n} - u)}^2\\ &\geq \left[ \quarter - S_{\lambda} S^{-3} \overline{C}^4 \right] \liminf\limits_{n \rightarrow \infty} \int_{{\mathbb R}^3} \abs{\nabla (u_{n} - u)}^2 \geq 0, \end{align*} and hence by \eqref{BL decompositions} that $I_{\lambda} (u) \leq c_{\lambda}$. On the other hand, since $u \in \mathcal{N}_{\lambda}$, it holds that \begin{equation*} \inf\limits_{v \in \mathcal{N}_{\lambda}} I_{\lambda} (v) \leq I_{\lambda} (u) \leq c_{\lambda}, \end{equation*} and this completes the proof for $q=3$. 
For $q\in(3,2^{\ast}-1)$, since $$c_{\lambda} = \inf\limits_{v \in \mathcal{N}_{\lambda}} I_{\lambda} (v),$$ it follows that $I_{\lambda} (u) = c_{\lambda},$ and this concludes the proof. \end{proof} \begin{remark}[{\bf On the Palais-Smale condition}]\label{PSremark} When $q>3,$ the fact that $\lim I_\lambda(u_n-u)=0$ for $\lambda$ large suggests that the Palais-Smale condition at the mountain-pass level $c_\lambda$ can be recovered in some cases. To illustrate this, note that the assumption \ref{vanishing_rho} is compatible with having, say, $\rho(x)\rightarrow 2\overline{M}$ as $|x|\rightarrow\infty,$ namely a situation where lack of compactness phenomena may occur for the system \eqref{main SP system multiplicity} as a consequence of the translation invariance of \eqref{A-Ruiz PDE}, which plays the role of a `problem at infinity', see e.g. \cite{Mercuri and Tyler}. We stress here that $\rho$ may approach its limit from below as well as from above. To see that in this case the Palais-Smale condition is satisfied for $\lambda$ large, denote by $I_{\lambda}^{\rho\equiv2\overline{M}}$ the functional associated to \eqref{SP one equation} with $\rho\equiv2\overline{M},$ and observe that in this situation $E({\mathbb R}^3)\simeq H^1({\mathbb R}^3),$ with equivalent norms by \eqref{HLS intro}.
Reasoning as in \cite[Proposition 1.6]{Mercuri and Tyler}, there exist $l\in {\mathbb N}\cup\{0\}$, functions $(v_1,\ldots, v_l)\subset H^1({\mathbb R}^3)$, and sequences of points $(y_n^j)_{n\in{\mathbb N}}\subset {\mathbb R}^3$, $1\leq j \leq l$, such that, passing if necessary to a subsequence, \begin{itemize} \item $v_j$ are possibly nontrivial critical points of $I_{\lambda}^{\rho\equiv2\overline{M}}$ for $1\leq j\leq l$, \item [] \item $| y_n^j | \to +\infty$, $|y_n^j - y_n^{j'}| \to +\infty$ as $n\to+\infty$ if $j\neq j'$, \item [] \item $||u_n-u- \sum_{j=1}^{l} v_j(\cdot -y_n^j)||_{H^1({\mathbb R}^3)}\to 0$ as $n\to+\infty$, \item [] \item $c_\lambda = I_{\lambda}(u)+\sum_{j=1}^{l}I_{\lambda}^{\rho\equiv2\overline{M}}(v_j)$.\item [] \end{itemize} It is standard to see that $I_{\lambda}^{\rho\equiv2\overline{M}}$ is uniformly bounded below on the set of its nontrivial critical points by a positive constant, independent of $\lambda$. It then follows that for all $\lambda \geq\max\left(\lambda_0,\lambda_1\right),$ Proposition \ref{lambdaone} and the above yield $c_\lambda=I_\lambda(u)$ and at the same time $l=0;$ as a consequence the Palais-Smale condition is satisfied at the level $c_\lambda.$ These considerations yield the following \begin{proposition}[{\bf Palais-Smale condition under \ref{vanishing_rho}}]\label{PSconditionVan} Let $N=3$, $q>3$, and $\rho\geq 0$ be locally bounded such that \ref{vanishing_rho} is satisfied and such that $\rho(x)\rightarrow \rho_\infty>\overline{M}$ as $|x|\rightarrow \infty$. Let $\lambda_0$ and $\lambda_1$ be as in Proposition \ref{lambdaone}.
Then, for all $\lambda\geq\max\left(\lambda_0,\lambda_1\right),$ $I_\lambda$ satisfies the Palais-Smale condition at the mountain-pass level $c_\lambda.$ \end{proposition} It is not obvious how to prove the above proposition in the case $q=3;$ nevertheless the same considerations on strong convergence apply instead to approximated critical points of $I_\lambda$ constrained on the Nehari manifold, see the proof of Theorem \ref{groundstate sol} and Proposition \ref{CPSconditionVan} below. \end{remark} \subsection{Proof of Theorem \ref{groundstate sol}} Now that we have the necessary preliminaries, we present the proof of Theorem \ref{groundstate sol}. \begin{proof}[Proof of Theorem \ref{groundstate sol}] We recall that \[\mathcal{N}_{\lambda}\coloneqq\left\{u\in E({\mathbb R}^3)\setminus\{0\}:G_{\lambda}(u)=0\right\},\] where \[G_{\lambda}(u)=I_{\lambda}'(u)(u)=||u||_{H^1({\mathbb R}^3)}^2+\lambda^2\int_{{\mathbb R}^3}\rho\phi_u u^2-||u||_{L^{q+1}({\mathbb R}^3)}^{q+1}.\] We note that for all $q\in[3,2^{\ast}-1)$, it is standard to see that $\mathcal{N}_{\lambda}$ is nonempty. Moreover, we claim that the conditions \begin{enumerate}[label=(\roman*)] \item $\exists r>0:B_r\cap \mathcal{N}_{\lambda}=\emptyset$, \item $G_{\lambda}'(u)(u)\neq 0,\quad \forall u\in\mathcal{N}_{\lambda}$, \end{enumerate} are satisfied, and so, by standard arguments, it follows that the Nehari manifold $\mathcal{N}_{\lambda}$ is a natural constraint (see e.g.\ \cite{Ambrosetti and Malchiodi}). Indeed, for $(i)$, we notice that if $u\in\mathcal{N}_{\lambda}$, then \[0=||u||_{H^1({\mathbb R}^3)}^2+\lambda^2\int_{{\mathbb R}^3}\rho\phi_u u^2-||u||_{L^{q+1}({\mathbb R}^3)}^{q+1}\geq ||u||_{H^1({\mathbb R}^3)}^2-S_{q+1}^{-(q+1)}||u||_{H^1({\mathbb R}^3)}^{q+1}, \] from which it follows that \begin{equation}\label{lower bound nehari} ||u||_{E({\mathbb R}^3)}\geq ||u||_{H^1({\mathbb R}^3)} \geq S_{q+1}^{(q+1)/(q-1)},\quad\forall u\in\mathcal{N}_{\lambda}.
\end{equation} Setting $r=S_{q+1}^{(q+1)/(q-1)}-\delta$ for some small $\delta>0$ yields $(i)$. For $(ii)$, we notice that if $u\in\mathcal{N}_{\lambda}$, then by the definition of the Nehari manifold, the assumption $q\geq3$ and \eqref{lower bound nehari}, it holds that \begin{equation}\label{G' less that zero} \begin{split} G_{\lambda}'(u)(u)&=2||u||^2_{H^1({\mathbb R}^3)}+4\lambda^2\int_{{\mathbb R}^3}\rho\phi_u u^2-(q+1)||u||_{L^{q+1}({\mathbb R}^3)}^{q+1}\\ &=(1-q) ||u||^2_{H^1({\mathbb R}^3)}+(3-q)\lambda^2\int_{{\mathbb R}^3}\rho\phi_u u^2\\ &\leq (1-q) S_{q+1}^{2(q+1)/(q-1)}\\ &<0. \end{split} \end{equation} Thus, the claim holds and so the Nehari manifold is a natural constraint. Now, if $q\in(3,2^{\ast}-1)$, setting $\lambda_*= \max\{\lambda_0,\lambda_1\}$, the conclusion follows immediately from Proposition \ref{lambdaone} and the following characterisation of the mountain-pass level, $$c_{\lambda} = \inf\limits_{v \in \mathcal{N}_{\lambda}} I_{\lambda} (v).$$ On the other hand, assume $q=3$ and $\lambda\geq\max\{\lambda_0,\lambda_1\}$. We note that $$c_{\lambda}^*\coloneqq\inf\limits_{v \in \mathcal{N}_{\lambda}} I_{\lambda} (v)$$ is well-defined since $\mathcal{N}_{\lambda}$ is nonempty, and so, we take $(\tilde{w}_n)_{n\in{\mathbb N}}\subset \mathcal{N}_{\lambda}$ to be a minimising sequence for $I_{\lambda}$ on $\mathcal{N}_{\lambda}$, namely, $I_{\lambda}(\tilde{w}_n)\to c_{\lambda}^*$. By the Ekeland variational principle (see e.g.\ \cite{Costa}), there exist another minimising sequence $({w}_n)_{n\in{\mathbb N}}\subset \mathcal{N}_{\lambda}$ and a sequence $(\xi_n)_{n\in{\mathbb N}}\subset{\mathbb R}$ such that \begin{equation}\label{Ekeland 1} I_{\lambda}({w}_n)\to c_{\lambda}^*, \end{equation} \begin{equation}\label{Ekeland 2} I_{\lambda}'({w}_n)({w}_n)=0, \end{equation} and \begin{equation}\label{Ekeland 3} I_{\lambda}'({w}_n)-\xi_nG_{\lambda}'({w}_n)\to 0,\qquad \textrm{in}\,\,(E({\mathbb R}^3))'.
\end{equation} Now, by Proposition \ref{lambdaone}, \eqref{uniform upper bound mp level}, \eqref{Ekeland 1} and \eqref{Ekeland 2}, it holds that \[\lim_{n\to+\infty}\left(I_{\lambda}({w}_n)-\frac{1}{q+1}I_{\lambda}'({w}_n)({w}_n)\right)=c_{\lambda}^*\leq c_{\lambda}\leq \bar{c},\] for some $\bar{c}$ independent of $\lambda$. We can therefore argue as in Lemma \ref{boundedPSforvanishingrho} to show that \begin{equation}\label{unif bound tilde w} ||{w}_n||_{E({\mathbb R}^3)}< \bar{C}, \end{equation} where $\bar{C}>0$ is the same constant independent of $\lambda$ given by Lemma \ref{boundedPSforvanishingrho}. Moreover, since $({w}_n)_{n\in{\mathbb N}}\subset \mathcal{N}_{\lambda}$, it follows using \eqref{lower bound nehari} that \begin{align*} ||{w}_n||_{L^{4}({\mathbb R}^3)}^{4}=||{w}_n||_{H^1({\mathbb R}^3)}^2+\lambda^2\int_{{\mathbb R}^3}\rho \phi_{{w}_n}{w}_n^2\geq ||{w}_n||_{H^1({\mathbb R}^3)}^2\geq S_{4}^{4}>0. \end{align*} Thus, by interpolation it holds that $$S_4^4\leq \int_{{\mathbb R}^3}|{w}_n|^{4}\leq\left(\int_{{\mathbb R}^3}|{w}_n|^3\right)^{\frac{2}{3}}\left(\int_{{\mathbb R}^3}|{w}_n|^6\right)^{\frac{1}{3}},$$ and so, by the Sobolev inequality and \eqref{unif bound tilde w}, it follows that we can pick $\alpha > 0$ independent of $\lambda$ such that \begin{equation*} \liminf_{n \rightarrow \infty}\|{w}_n\|^3_{L^3({\mathbb R}^3)} \geq \alpha. \end{equation*} Moreover, by Lemma \ref{intoutsideballlemma}, we can set $\lambda_*= \max\{\lambda_0,\lambda_1\}$ and $R_\alpha>0$ such that for every $\lambda\geq \lambda_*$ and every $R>R_\alpha$ we have $$\limsup_{n\rightarrow\infty} \norm{{w}_n}^{3}_{L^{3} ({\mathbb R}^3 \setminus B_R)} < \frac{\alpha}{2}.$$ Now, since $({w}_n)_{n\in{\mathbb N}}$ is bounded, passing if necessary to a subsequence, we can assume that ${w}_n\rightharpoonup w$ in $E({\mathbb R}^3)$ and ${w}_n\rightarrow w$ in $L^{3}_{\textrm{loc}}(\mathbb R^3)$.
It follows that for every $\lambda\geq \lambda_*$ and $R>R_\alpha$, \begin{equation*} \norm{w}^{3}_{L^{3} (B_R)} \geq \liminf\limits_{n\rightarrow \infty}\norm{{w}_n}^{3}_{L^{3} (\mathbb R^3)} -\limsup\limits_{n\rightarrow \infty}\norm{{w}_n}^{3}_{L^{3} ({\mathbb R}^3 \setminus B_R)} > \frac{\alpha}{2}, \end{equation*} and so $w\not\equiv0$. We now notice that by \eqref{Ekeland 2}, \eqref{Ekeland 3}, and \eqref{unif bound tilde w}, it holds, up to a constant independent of $\lambda$, that \begin{equation*} \begin{split} o(1)&= ||I_{\lambda}'({w}_n)-\xi_nG_{\lambda}'({w}_n)||_{(E({\mathbb R}^3))'}\\ &\gtrsim |I_{\lambda}'({w}_n)({w}_n)-\xi_nG_{\lambda}'({w}_n)({w}_n)|\\ &=|\xi_nG_{\lambda}'({w}_n)({w}_n)|, \end{split} \end{equation*} where $\xi_n$ is as in \eqref{Ekeland 3}. Since $({w}_n)\subset\mathcal{N}_{\lambda}$, by \eqref{G' less that zero}, we have that $G_{\lambda}'({w}_n)({w}_n)\leq-2S^4_4<0$, and so the above yields $\xi_n\to0$. Moreover, using \eqref{unif bound tilde w} and the inequality \[|D(f,g)|^2\leq D(f,f)D(g,g),\] where \[D(f,g)\coloneqq \int_{{\mathbb R}^3}\int_{{\mathbb R}^3}\frac{f(x)g(y)}{|x-y|}\,\mathrm{d} x \,\mathrm{d} y,\] for measurable nonnegative functions $f,g$ (see \cite[p.250]{Lieb and Loss}), it follows that $G_{\lambda}'({w}_n)$ is bounded. Taken together, we have that $\xi_nG_{\lambda}'({w}_n)\to0$, and using this and \eqref{Ekeland 3}, we obtain $I_{\lambda}'({w}_n)\to0$. Hence, $({w}_n)_{n\in{\mathbb N}}$ is a Palais-Smale sequence for $I_{\lambda}$ at level $c_{\lambda}^*$, and so, since we have also shown that ${w}_n\rightharpoonup w\not\equiv 0$ in $E({\mathbb R}^3)$, a standard argument yields that $w$ is a nontrivial critical point of $I_{\lambda}$. Namely, $w\in\mathcal{N}_{\lambda}$, and thus \begin{equation}\label{groundstate ineq 1} c_{\lambda}^*\leq I_{\lambda}(w).
\end{equation} On the other hand, arguing as in Proposition \ref{lambdaone}, replacing $u_n$, $u$, and $c_{\lambda}$ with ${w}_n$, $w$, and $c_{\lambda}^*$, respectively, for every $\lambda \geq \lambda_*$, we obtain \begin{equation}\label{groundstate ineq 2} I_{\lambda}(w)\leq c_{\lambda}^*. \end{equation} For the reader's convenience we recall that $\lambda_1$ is chosen in Proposition \ref{lambdaone} so that for every $\lambda\geq\lambda_1$, it holds that $\frac{1}{4}-S_{\lambda}S^{-3}\bar{C}^4\geq0$, where $\bar{C}$ is defined via Lemma \ref{boundedPSforvanishingrho} by the property $||u_n||_{E({\mathbb R}^3)}<\bar{C}.$ Going through the same argument with $({w}_n)_{n\in{\mathbb N}}$, since $({w}_n)_{n\in{\mathbb N}}$ is bounded by precisely the same uniform constant, namely $||{w}_n||_{E({\mathbb R}^3)}<\bar{C}$, we conclude that \eqref{groundstate ineq 2} holds for every $\lambda\geq\lambda_*$, as $\lambda_*\geq\lambda_1$ by construction. Putting \eqref{groundstate ineq 1} and \eqref{groundstate ineq 2} together yields $$I_{\lambda}(w)=\inf\limits_{v \in \mathcal{N}_{\lambda}} I_{\lambda} (v).$$ Since $I_{\lambda}(w)=I_{\lambda}(|w|)$ and $w\in \mathcal N_{\lambda}$ if and only if $|w|\in \mathcal N_{\lambda}$, we can assume $w\geq0$, and it follows that $w>0$ by Proposition \ref{reg}. This completes the proof. \end{proof} \noindent As a byproduct of the above proof, we have the following \begin{proposition}[{\bf Constrained Palais-Smale condition under \ref{vanishing_rho}}]\label{CPSconditionVan} Let $N=3$, $q=3$, and $\rho\geq 0$ be locally bounded such that \ref{vanishing_rho} is satisfied and such that $\rho(x)\rightarrow \rho_\infty>\overline{M}$ as $|x|\rightarrow \infty$. Let $\lambda_0$ and $\lambda_1$ be as in Proposition \ref{lambdaone}.
Then, for all $\lambda\geq\max\left(\lambda_0,\lambda_1\right),$ the restriction $I_\lambda|_{\mathcal{N}_{\lambda}}$ satisfies the Palais-Smale condition at the level $$c_{\lambda}^*=\inf\limits_{v \in \mathcal{N}_{\lambda}} I_{\lambda} (v).$$ That is, every sequence $(u_n)_{n\in\mathbb N}\subset E({\mathbb R}^3)\simeq H^{1}({\mathbb R}^3)$ such that $$I_\lambda(u_n)\rightarrow c_{\lambda}^*,\qquad \nabla I_\lambda(u_n)|_{\mathcal{N}_{\lambda}}\rightarrow 0\,\,\textrm{in}\,\,H^{-1}({\mathbb R}^3)$$ is relatively compact. \end{proposition} \begin{proof} The proof follows by reasoning exactly as in Remark \ref{PSremark}; we omit the details. \end{proof} \section{The case of coercive $\rho$}\label{coercive rho section} \noindent In the present section $\lambda > 0$ is an arbitrary fixed value, and on $\rho$ we make the assumption that \begin{enumerate}[label=$\mathbf{(\rho_{\arabic*})}$] \setcounter{enumi}{1} \item \label{coercive_rho} For every $M > 0$, \begin{equation*} \abs{\{x \in {\mathbb R}^N : \rho(x) \leq M\}} < \infty. \end{equation*} \end{enumerate} \begin{lemma}[\bf{Compactness property}]\label{compactembeddingofeintolpplus} Let $N = 3,4,5$, $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^N)$ be nonnegative, satisfying \ref{coercive_rho}, and $q \in (1, 2^{\ast} -1)$. Then, $E({\mathbb R}^N)$ is compactly embedded into $L^{q+1} ({\mathbb R}^N)$. \end{lemma} \begin{proof} By Lemma \ref{weightedL3boundlemma}, multiplying by $\lambda$ we obtain \begin{equation}\label{weightedL3} \lambda \int_{\R^N} \rho(x) \abs{u}^3 \leq \Big( \frac{1}{\omega} \Big)^{\half} \norme{u}{{\mathbb R}^N}^3. \end{equation} Set \begin{equation*} A (R) = \{ x \in {\mathbb R}^N : \abs{x} > R, \ \rho (x) \geq M\}, \end{equation*} \begin{equation*} B (R) = \{ x \in {\mathbb R}^N : \abs{x} > R, \ \rho (x) < M\}. \end{equation*} Without loss of generality, assume that $(u_n)_{n\in{\mathbb N}}\subset E({\mathbb R}^N)$ is such that $u_{n} \rightharpoonup 0$.
For convenience, write \[ \int_{{\mathbb R}^N \setminus B_R} \abs{u_{n}}^3 = \int_{A (R)} \abs{u_{n}}^3 + \int_{B (R)} \abs{u_{n}}^3 \] where $B_R$ is a ball of radius $R$ centred at the origin. Fix $\delta > 0$ and pick $M$, $r$, $C$, such that $M > \frac{2}{\lambda \delta}\left(\frac{1}{\omega}\right)^{\frac{1}{2}}\sup_n \norme{u_{n}}{{\mathbb R}^N}^3$, $r=\frac{2^{\ast}}{3}>1$ and \begin{equation*} C \geq \sup\limits_{u \in E({\mathbb R}^N) \setminus \{0\}} \frac{\norm{u}_{L^{2^*}({\mathbb R}^N)}^3}{\norme{u}{{\mathbb R}^N}^3}. \end{equation*} Let $\frac{1}{r} + \frac{1}{r^{\prime}} = 1$. By Lemma \ref{measureofBgoingtozerolemma}, for every $M > 0$ and every $R>0$ large enough, it holds that \begin{equation} |B (R)| \leq \Big[ \frac{\delta}{2C \sup_n \norme{u_{n}}{{\mathbb R}^N}^3} \Big]^{r^{\prime}}. \end{equation} Since $N=3,4,5$, we have $r = \frac{2^{\ast}}{3} > 1$, so by the H\"{o}lder inequality it holds that \begin{align*} \int_{B (R)} \abs{u_{n}}^3 &\leq \Big( \int_{B (R)} \abs{u_{n}}^{2^*} \Big)^{\frac{1}{r}} \Big( \int_{B (R)} 1 \Big)^{\frac{1}{r^{\prime}}}\\ &\leq \norm{u_{n}}_{L^{2^*} ({\mathbb R}^N)}^3 \cdot | B (R) |^{\frac{1}{r^{\prime}}}\\ &\leq C \norme{u_{n}}{{\mathbb R}^N}^3 \cdot | B (R) |^{\frac{1}{r^{\prime}}} \leq \frac{\delta}{2}. \end{align*} Moreover, by our choice of $M$ and \eqref{weightedL3}, we see that \begin{equation*} \int_{A (R)} \abs{u_{n}}^3 \leq \frac{1}{\lambda M}\left(\frac{1}{\omega}\right)^{\frac{1}{2}}||u_n||_{E({\mathbb R}^N)}^3 \leq \frac{\delta}{2}. \end{equation*} By the classical Rellich theorem, and since $\delta$ was arbitrary, this is enough to prove our lemma for $q = 2$. By interpolation the case $q \neq 2$ follows immediately, and this concludes the proof. \end{proof} Using the above lemma, for $q\geq 3$ it is easy to see that the Palais-Smale condition holds for $I_{\lambda}$ at any level.
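Schematically, the standard argument runs as follows (this is only a sketch; the nonlocal term is treated via Lemma \ref{nonlocalBL} as in \cite{Bonheure and Mercuri}). A Palais-Smale sequence $(u_n)_{n\in{\mathbb N}}$ is bounded for $q\geq 3$, so up to a subsequence $u_n\rightharpoonup u$ in $E({\mathbb R}^3)$ and, by Lemma \ref{compactembeddingofeintolpplus}, $u_n\rightarrow u$ strongly in $L^{q+1}({\mathbb R}^3)$. Testing $I_\lambda'(u_n)-I_\lambda'(u)$ on $u_n-u$ then gives \begin{equation*} o(1)=\|u_n-u\|_{H^1({\mathbb R}^3)}^2+\lambda^2\int_{{\mathbb R}^3}\rho\left(\phi_{u_n}u_n-\phi_u u\right)(u_n-u)-\int_{{\mathbb R}^3}\left(|u_n|^{q-1}u_n-|u|^{q-1}u\right)(u_n-u), \end{equation*} where the last integral is $o(1)$ by H\"older's inequality and the strong $L^{q+1}$ convergence, while the nonlocal term vanishes in the limit by the aforementioned results; this yields $u_n\rightarrow u$ in $E({\mathbb R}^3)$.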
\begin{lemma}[\bf{Palais-Smale condition}]\label{pscconditionfori} Let $N = 3$, $\rho \in L^\infty_{\textrm{loc}}({\mathbb R}^N)$ be nonnegative, satisfying \ref{coercive_rho}, and $q \in [3, 2^{\ast} -1)$. Then, $I_{\lambda}$ satisfies the Palais-Smale condition at every level $c\in {\mathbb R}$. \end{lemma} \begin{proof} Since by Lemma \ref{compactembeddingofeintolpplus} the embedding of $E({\mathbb R}^3)$ into $L^{q+1}({\mathbb R}^3)$ is compact, using Lemma \ref{nonlocalBL}, the conclusion follows arguing as in \cite[p. 1077]{Bonheure and Mercuri}. \end{proof} \subsection{Proof of Theorem \ref{existenceandleastenergy} and Theorem \ref{mountainpasssoltheoremlowp}}\label{proofs of existence theorems section} \begin{proof}[Proof of Theorem \ref{existenceandleastenergy}] Using Lemma \ref{mpgfori} and Lemma \ref{pscconditionfori}, the Mountain-Pass Theorem yields the existence of a mountain-pass type solution for all $q\in[3,2^{\ast}-1)$. Namely, there exists $u \in E({\mathbb R}^N)$ such that $I_{\lambda}(u) = c_{\lambda}$ and $I_{\lambda}'(u) = 0$, where $c_{\lambda}$ is given in \eqref{mountainpasslevelforI}. For $q > 3$, the mountain-pass level $c_{\lambda}$ has the characterisation \begin{equation*} c_{\lambda} = \inf_{u \in \mathcal{N}_{\lambda}} I_{\lambda}(u), \quad \mathcal{N}_{\lambda} := \{ u \in E({\mathbb R}^N) \setminus \{0\} \ | \ I_{\lambda}' (u)(u) = 0 \}, \end{equation*} and it follows that $u$ is a groundstate solution of $I_{\lambda}$. Since $I_{\lambda}(u)=I_{\lambda}(|u|)$, we can assume $u\geq0$, and so $u>0$ by the strong maximum principle, Proposition \ref{reg}.
For $q=3$, we can show the existence of a positive mountain-pass solution applying the general min-max principle \cite[p.41]{Willem book}, and observing that, in our context, we can restrict to admissible curves $\gamma$ which map into the positive cone $P\coloneqq \{u\in E({\mathbb R}^3) : u\geq0\}.$ In fact, arguing as in \cite[p.481]{Mercuri and Willem}, since $I_{\lambda}$ satisfies the mountain-pass geometry by Lemma \ref{mpgfori}, it is possible to select a Palais-Smale sequence $(u_n)_{n\in{\mathbb N}}$ at the level $c_{\lambda}$ such that $$\textrm{dist}(u_n,P)\rightarrow 0,$$ from which it follows that $(u_n)_-\rightarrow 0$ in $L^{6}({\mathbb R}^3),$ see also \cite[Lemma 2.2]{Bonheure Di Cosmo Mercuri}. Then, by construction and up to a subsequence, there exists a weak limit $u\geq 0,$ and hence, by Lemma \ref{pscconditionfori}, a nontrivial nonnegative solution, the positivity of which holds by Proposition \ref{reg}. \newline The existence of a positive groundstate can be shown with a mild modification to the proof of Theorem \ref{groundstate sol}, using here that all the relevant convergence statements hold for any fixed $\lambda>0$ as a consequence of assumption \ref{coercive_rho} and Lemma \ref{compactembeddingofeintolpplus}. This is enough to conclude. \end{proof} \begin{proof}[Proof of Theorem \ref{mountainpasssoltheoremlowp}] We can argue as in \cite[Theorem $1.3$]{Mercuri and Tyler}, based on \cite{Jeanjean and Tanaka} and on the compactness of the embedding of $E({\mathbb R}^N)$ into $L^{q+1}({\mathbb R}^N).$ The latter is provided in our context by Lemma \ref{compactembeddingofeintolpplus}. By these, there exist an increasing sequence $\mu_n \rightarrow 1$ and $(u_{n})_{n\in{\mathbb N}} \subset E({\mathbb R}^N)$ such that $I_{\mu_n, \lambda} (u_{n}) = c_{\mu_n, \lambda}$ and $I_{\mu_n, \lambda}' (u_{n}) = 0,$ where $I_{\mu_n, \lambda}$ and $c_{\mu_n, \lambda}$ are defined as in \eqref{definition I mu lambda} and \eqref{mp level low q}.
By Lemma \ref{pohozaevlemma}, we see that \begin{equation}\label{poh soe ineq} \frac{N-2}{2}\int_{\R^N} (\abs{\nabla u_{n}}^2 + u_{n}^2) + \left( \frac{N+2+2k}{4} \right) \int_{\R^N} \rho(x) \phi_{u_{n}} u_{n}^2 - \frac{N\mu_n}{q+1} \int_{\R^N} \abs{u_{n}}^{q+1} \leq 0. \end{equation} Setting $\alpha_n = \int_{\R^N} (\abs{\nabla u_{n}}^2 + u_{n}^2)$, $\gamma_n = \lambda^2 \int_{\R^N} \rho(x) \phi_{u_{n}} u_{n}^2$, $\delta_n = \mu_n \int_{\R^N} \abs{u_{n}}^{q+1},$ we can put together the equalities $I_{\mu_n, \lambda} (u_{n}) = c_{\mu_n, \lambda}$ and $I_{\mu_n, \lambda}' (u_{n})(u_{n}) = 0$ with \eqref{poh soe ineq} obtaining the system \begin{equation}\label{SOE existence low q} \begin{cases} \begin{array}{c c c c c c c} \alpha_n & + & \gamma_n & - & \delta_n & = & 0,\\ \half \alpha_n & + & \frac{1}{4} \gamma_n & - & \overqplus \delta_n & = & c_{\mu_n, \lambda},\\ \frac{N-2}{2} \alpha_n & + & \left( \frac{N+2+2k}{4} \right) \gamma_n & - & \frac{N}{q+1} \delta_n & \leq & 0, \end{array} \end{cases} \end{equation} \noindent which yields \begin{equation*} \delta_n \leq \frac{c_{\mu_n, \lambda}(6-N + 2k) (q+1)}{2(q-2) + k(q-1)}, \quad \gamma_n \leq \frac{2c_{\mu_n, \lambda}\big( 2(q+1)-N (q-1)\big)}{2(q-2) + k(q-1)}, \quad \text{and} \ \alpha_n = \delta_n - \gamma_n. \end{equation*} We note that $k > \frac{-2(q-2)}{(q-1)} > \frac{N-6}{2}$ since $q<2^{\ast}-1$; since $\alpha_n$, $\gamma_n$ and $\delta_n$ are all nonnegative, it follows that they are all bounded. Hence the sequence $(u_{n})_{n \in {\mathbb N}}$ is bounded and there exists $u \in E({\mathbb R}^N)$ such that, up to a subsequence, $u_{n} \rightharpoonup u$ in $E({\mathbb R}^N)$.
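For the reader's convenience, we sketch the elimination behind the bounds above. Writing $c_n\coloneqq c_{\mu_n,\lambda}$, the first two lines of \eqref{SOE existence low q} give
\begin{equation*}
\alpha_n = \delta_n - \gamma_n \quad \text{and} \quad \gamma_n = \frac{2(q-1)}{q+1}\,\delta_n - 4c_n,
\end{equation*}
and substituting both into the third line of \eqref{SOE existence low q} gives, after simplification,
\begin{equation*}
\frac{2(q-2)+k(q-1)}{q+1}\,\delta_n \leq (6-N+2k)\,c_n,
\end{equation*}
which is the stated bound on $\delta_n$; the bound on $\gamma_n$ then follows by substituting the bound on $\delta_n$ into the expression for $\gamma_n$ above.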
Using Lemma \ref{compactembeddingofeintolpplus} and arguing as in \cite[Theorem 1]{Bonheure and Mercuri} we obtain that $\norme{u_{n}}{{\mathbb R}^N}^2 \rightarrow \norme{u}{{\mathbb R}^N}^2$ and \begin{equation}\label{convergenceofmufunctional} c_{\mu_n, \lambda} = I_{\mu_n, \lambda} (u_{n}) \rightarrow I_{\lambda} (u). \end{equation} It follows that $u_{n} \rightarrow u$ in $E({\mathbb R}^N)$, which combined with the left-continuity property of the levels \cite[Lemma $2.2$]{Ambrosetti and Ruiz}, namely $c_{\mu_n, \lambda} \rightarrow c_{1, \lambda}={c}_{\lambda}$ as $\mu_n \nearrow 1,$ yields $I_{\lambda} (u) = {c}_{\lambda}.$ Since $u$ is a critical point by the weak convergence, it follows that $u$ is a mountain-pass solution. Finally, the existence of a groundstate solution is based on minimising over the set of nontrivial critical points of $I_{\lambda},$ and carrying out an identical argument to the above to show the strong convergence of such a minimising sequence, again using Lemma \ref{compactembeddingofeintolpplus}. This concludes the proof. \end{proof} \subsection{Proof of Theorem \ref{least energy pure homog rho}} Under an additional hypothesis on $\rho$, we now prove that the energy level of the groundstate solutions coincides with the mountain-pass level. \begin{proof}[Proof of Theorem \ref{least energy pure homog rho}] By Proposition \ref{variational characterisation MP level low q}, it holds that $${c}_{\lambda}=\inf_{u\in \bar{\mathcal{M}}_{\lambda,\nu}}I_{\lambda}(u),$$ where $\bar{\mathcal{M}}_{\lambda,\nu}$ is defined in \eqref{definition of M lambda}.
Since $J_{\lambda,\nu}(u)=0$ is equivalent to the Pohozaev equation given by Lemma \ref{pohozaevlemma} minus the equation $\nu I_{\lambda}'(u)(u)=0$, it is clear that $\bar{\mathcal{M}}_{\lambda,\nu}$ contains all of the critical points of $I_{\lambda}$, and thus the mountain-pass solutions that we find in Theorem \ref{existenceandleastenergy} ($q=3$) and Theorem \ref{mountainpasssoltheoremlowp} ($q<3$) are groundstates. This completes the proof. \end{proof} \color{black} \section{Multiplicity results: coercive $\rho$}\label{5.1} In the current section, we discuss the existence of high energy solutions in the case where $\rho$ satisfies \ref{coercive_rho}. Throughout what follows, we denote the unit ball in $E({\mathbb R}^N)$ by $B_1$. Moreover, since $\lambda$ does not play any role and can be fixed arbitrarily under assumption \ref{coercive_rho}, we set $\lambda\equiv1$ for the sake of simplicity and define \begin{equation*} I(u) \coloneqq \frac{1}{2}\int_{{\mathbb R}^N}(|\nabla u|^2 + u^2)+\frac{1}{4}\int_{{\mathbb R}^N}\int_{{\mathbb R}^N}\frac{u^2(x)\rho (x) u^2(y) \rho (y)}{|x-y|^{N-2}}\,\mathrm{d} x\,\mathrm{d} y -\frac{1}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}. \end{equation*} \subsection{Preliminaries}\label{prelims for multiplicity section} We will now discuss some preliminaries that will be used in proving both Theorem \ref{first multiplicity theorem} and Theorem \ref{partial multiplicity result low q}. Following \cite{Ambrosetti and Rabinowitz}, we set \begin{equation}\label{A hat} \hat{A}_0 =\{u\in E({\mathbb R}^N) : I(u)\geq0\}, \end{equation} and \begin{align}\label{gamma star} \Gamma^* =\{&h\in C(E({\mathbb R}^N),E({\mathbb R}^N)): h(0)=0, h \text{ is an odd homeomorphism of } E({\mathbb R}^N) \\ &\text{ onto } E({\mathbb R}^N), h(B_1)\subset \hat{A}_0\}.\nonumber \end{align} In the next lemma, we establish a result that allows us to obtain high energy solutions in our Banach space setting.
Before stating the lemma, we note that, since the biorthogonal system given by Lemma \ref{separability} is fundamental, for any $m\in{\mathbb N}$ it holds that \[E({\mathbb R}^N)=\text{span}\{e_1,\dots, e_m\}\oplus \overline{\text{span}}\{e_{m+1}, \dots\}.\] Thus, throughout what follows we set \[E_m= \text{span}\{e_1,\dots, e_m\},\] \[E_m^{\perp}=\overline{\text{span}}\{e_{m+1}, \dots\},\] and note that, for any $m\in{\mathbb N}$, $E_m$ and $E_m^{\perp}$ define algebraically and topologically complementary subspaces of $E({\mathbb R}^N)$. \begin{lemma}[\bf{Divergence of min-max levels $d_m$}]\label{divergence of critical levels} Let $N \geq 3$ and $q>1$. Suppose $\rho\in L^{\infty}_{\textrm{loc}}({\mathbb R}^N)$ is nonnegative, satisfying \ref{coercive_rho}. Define \begin{equation}\label{def of dm} d_m\coloneqq \sup_{h\in \Gamma^*}\inf_{u\in\partial B_1 \cap E_{m-1}^{\perp}} I(h(u)), \end{equation} where $\Gamma^*$ is given by \eqref{gamma star}. Then, $d_m\to+\infty$ as $m\to+\infty$. \end{lemma} \begin{proof} First we set \[T= \left\{u\in E({\mathbb R}^N)\setminus \{0\}:||u||_{H^1({\mathbb R}^N)}^2=||u||_{L^{q+1}({\mathbb R}^N)}^{q+1}\right\}\] and \[\tilde{d}_m= \inf_{u\in T\cap E_m^{\perp}}||u||_{E({\mathbb R}^N)},\] and claim that $\tilde{d}_m\to+\infty$ as $m\to+\infty$. To see this, assume to the contrary that there exist $u_m\in T \cap E_m^{\perp}$ and some $d>0$ such that $||u_m||_{E({\mathbb R}^N)}\leq d$ for all $m\in{\mathbb N}$. Since $\langle e_n^*,u_m\rangle=0$ for all $m\geq n$ and the $e_n^*$'s are total by Lemma \ref{separability}, it follows that $u_m\rightharpoonup 0$ in $E({\mathbb R}^N)$ (see e.g.\ \cite{Szulkin}). Since $E(\mathbb R^N)$ is compactly embedded into $L^{q+1}(\mathbb R^N)$ by Lemma \ref{compactembeddingofeintolpplus}, it follows that $u_m\to0$ in $L^{q+1}(\mathbb R^N)$.
However, since $u_m\in T$, it follows from the Sobolev inequality that \[||u_m||_{H^1(\mathbb R^N)}^{q+1}\geq S_{q+1}^{q+1}||u_m||_{L^{q+1}(\mathbb R^N)}^{q+1}=S_{q+1}^{q+1}||u_m||_{H^{1}(\mathbb R^N)}^{2},\] from which we deduce \[||u_m||_{L^{q+1}(\mathbb R^N)}^{q+1}\geq S_{q+1}^{2(q+1)/(q-1)}>0.\] This shows that $u_m$ is bounded away from $0$ in $L^{q+1}(\mathbb R^N)$, a contradiction, and so we have proved that \begin{equation}\label{divergence tilde d} \tilde{d}_m\to+\infty \text{ as } m\to+\infty. \end{equation} Now notice that since $E_m$ and $E_m^{\perp}$ are complementary subspaces, it holds that there exists a $\bar{C}\geq1$ such that each $u\in B_1$ can be uniquely written as \begin{equation}\label{decomposition of E functions} u=v+w, \text{ with } v\in E_m, w\in E_m^{\perp}, \end{equation} \begin{equation}\label{v less than C} ||v||_{E(\mathbb R^N)}\leq \bar{C}||u||_{E(\mathbb R^N)}\leq\bar{C}, \end{equation} \begin{equation}\label{w less than C} ||w||_{E(\mathbb R^N)}\leq \bar{C}||u||_{E(\mathbb R^N)}\leq\bar{C}, \end{equation} as a consequence of the open mapping theorem, see \cite[p.37]{Brezis Book}. Define $h_m:E_m^{\perp}\to E_m^{\perp}$ by \[h_m(u)=(\bar{C}K)^{-1}\tilde{d}_m u,\] where \[K>\max\left\{1,\left(\frac{4}{q+1}\right)^{\frac{1}{q-1}}\right\},\] and note that $h_m$ is an odd homeomorphism of $E_m^{\perp}$ onto $E_m^{\perp}$. Now, for any $u\in E({\mathbb R}^N) \setminus \{0\}$, there exists a unique $\beta(u)>0$ such that $\beta(u)u\in T$, namely \begin{equation}\label{def of beta u} \beta(u) = \left(\frac{||u||_{H^1(\mathbb R^N)}^2}{||u||_{L^{q+1}(\mathbb R^N)}^{q+1}}\right)^{\frac{1}{q-1}}. 
\end{equation} If we define \[I_0(u)= \frac{1}{2} ||u||_{H^1(\mathbb R^N)}^2-\frac{1}{q+1}||u||_{L^{q+1}(\mathbb R^N)}^{q+1},\] then for each $u\in E({\mathbb R}^N) \setminus \{0\}$, it holds that \[I_0(tu)=\frac{t^2}{2} ||u||_{H^1(\mathbb R^N)}^2-\frac{t^{q+1}}{q+1}||u||_{L^{q+1}(\mathbb R^N)}^{q+1}\] is a monotone increasing function for $t\in[0,\beta(u)]$ with a maximum at $t=\beta(u)$. Note that for each $u\in (E_m^{\perp}\cap B_{\bar{C}})\setminus \{0\}$, by the definition of $\tilde{d}_m$ and $\beta(u)$, we have \begin{align}\label{d m over beta u} \bar{C}^{-1}\tilde{d}_m\leq \bar{C}^{-1} ||\beta(u)u||_{E(\mathbb R^N)}\leq \beta(u), \end{align} and so since $K\geq1$, it holds that \[(\bar{C}K)^{-1}\tilde{d}_m\leq \bar{C}^{-1}\tilde{d}_m\leq \beta(u),\quad \text{ for all } u\in (E_m^{\perp}\cap B_{\bar{C}})\setminus \{0\}.\] Putting everything together, it follows that \[I_0(h_m(u))=I_0((\bar{C}K)^{-1}\tilde{d}_mu)>0\quad \text{ for all } u\in (E_m^{\perp}\cap B_{\bar{C}})\setminus \{0\}.\] Moreover, \[h_m(0)=0.\] Therefore, \begin{equation}\label{image of Em orthog intersect ball} h_m(E_m^{\perp}\cap B_{\bar{C}})\subset \left\{u\in E({\mathbb R}^N): I_0(u)\geq 0\right\}. \end{equation} Now, for each $m\in{\mathbb N}$ and each $\delta>0$, define $\tilde{h}_m:E_m\times E_m^{\perp}\to E_m\times E_m^{\perp}$ by \[\tilde{h}_m([v,w])=[\delta v,\, (\bar{C}K)^{-1}\tilde{d}_mw ].\] Notice that $\tilde{h}_m$ is an odd homeomorphism of $E_m\times E_m^{\perp}$ onto $E_m\times E_m^{\perp}$. Moreover, by \eqref{decomposition of E functions}, the function $g_m:E_m\times E_m^{\perp}\to E({\mathbb R}^N)$ defined by \[g_m([v,w])=v+w,\] is an odd homeomorphism. Hence, defining $H_m:E({\mathbb R}^N)\to E({\mathbb R}^N)$ as \[H_m= g_m\circ \tilde{h}_m \circ g^{-1}_m,\] we see that $H_m$ is an odd homeomorphism of $E({\mathbb R}^N)$ onto $E({\mathbb R}^N)$.
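The monotonicity of $t\mapsto I_0(tu)$ used above follows from a one-line computation: by the definition \eqref{def of beta u} of $\beta(u)$,
\begin{equation*}
\frac{d}{dt}\,I_0(tu) = t\,||u||_{H^1(\mathbb R^N)}^2 - t^{q}\,||u||_{L^{q+1}(\mathbb R^N)}^{q+1} = t\,||u||_{H^1(\mathbb R^N)}^2\left(1-\left(\frac{t}{\beta(u)}\right)^{q-1}\right),
\end{equation*}
which is positive for $0<t<\beta(u)$ and vanishes at $t=\beta(u)$.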
By \eqref{decomposition of E functions}-\eqref{w less than C}, it holds that \[B_1\subseteq g_m(\left\{E_m\cap B_{\bar{C}}\right\}\times\{E_m^{\perp}\cap B_{\bar{C}}\}),\] and so \begin{align}\label{image of ball contained in Z} H_m(B_1)&\subseteq H_m(g_m(\{E_m\cap B_{\bar{C}}\}\times\{E_m^{\perp}\cap B_{\bar{C}}\}))\\ &=g_m(\tilde{h}_m(\{E_m\cap B_{\bar{C}}\}\times\{E_m^{\perp}\cap B_{\bar{C}}\}))\nonumber\\ &=g_m (\{\delta(E_m\cap B_{\bar{C}})\}\times\{\bar{C}^{-1}K^{-1}\tilde{d}_m (E_m^{\perp}\cap B_{\bar{C}})\})\nonumber\\ &=\left\{u\in E({\mathbb R}^N): u=v+w, v\in \delta(E_m\cap B_{\bar{C}}), w\in \bar{C}^{-1}K^{-1}\tilde{d}_m (E_m^{\perp}\cap B_{\bar{C}}) \right\}\nonumber\\ &=:Z_{m,\delta}.\nonumber \end{align} Now, fix $m\in{\mathbb N}$. We claim that $$Z_{m,\delta}\subset \left\{u\in E({\mathbb R}^N): I_0(u) > 0\right\} \cup \{0\}$$ for some $\delta=\delta(m)>0$. To see this, assume, by contradiction, that there exists $\delta_j\to 0$ and $u_j\notin \left\{u\in E({\mathbb R}^N): I_0(u) > 0\right\} \cup \{0\}$ such that $u_j\in Z_{m,\delta_j}$. Then, by definition of $Z_{m,\delta_j}$, it holds that \[||u_j||_{E({\mathbb R}^N)}\leq||v_j||_{E({\mathbb R}^N)}+||w_j||_{E({\mathbb R}^N)}\leq \delta_j\bar{C}+K^{-1}\tilde{d}_m,\] which implies $u_j$ is bounded. Thus, up to a subsequence $u_j\rightharpoonup \bar{u}$ in $E({\mathbb R}^N)$ and so it follows that $u_j\rightharpoonup \bar{u}$ in $H^1({\mathbb R}^N)$. Moreover, since $E(\mathbb R^N)$ is compactly embedded into $L^{q+1}(\mathbb R^N)$ by Lemma \ref{compactembeddingofeintolpplus}, it follows that $u_j\to\bar{u}$ in $L^{q+1}(\mathbb R^N),$ with $||\bar{u}||_{L^{q+1}({\mathbb R}^N)}^{q+1}>0$ by previous arguments. 
Thus, by the weak lower semicontinuity of the $H^1({\mathbb R}^N)$ norm and the strong convergence in $L^{q+1}(\mathbb R^N)$, we deduce that \[\frac{1}{2}||\bar{u}||_{H^1({\mathbb R}^N)}^2 \leq \frac{1}{q+1}||\bar{u}||_{L^{q+1}({\mathbb R}^N)}^{q+1},\] which implies $\bar{u}\notin \left\{u\in E({\mathbb R}^N): I_0(u) > 0\right\} \cup \{0\}$. On the other hand, since $\delta_j\to0$, then $v_j\to 0$. It follows from this and \eqref{image of Em orthog intersect ball} that $\bar{u}\in \bar{C}^{-1}K^{-1}\tilde{d}_m (E_m^{\perp}\cap B_{\bar{C}})\subset \left\{u\in E({\mathbb R}^N): I_0(u) > 0\right\} \cup \{0\}$. Hence, we have reached a contradiction and so the claim holds. Thus, using this and \eqref{image of ball contained in Z}, for each $m\in{\mathbb N}$, we pick $\delta=\delta(m)>0$ so that \begin{align*} H_m(B_1)\subset \left\{u\in E({\mathbb R}^N): I_0(u) > 0\right\} \cup \{0\} \subset \left\{u\in E({\mathbb R}^N): I(u)\geq 0\right\}=\hat{A}_0, \end{align*} namely $H_m\in\Gamma^*$, where $\hat{A}_0$ and $\Gamma^*$ are given by \eqref{A hat} and \eqref{gamma star}, respectively. We can therefore see that \begin{equation}\label{lower bound d m} d_{m+1}= \sup_{h\in \Gamma^*}\inf_{u\in\partial B_1 \cap E_{m}^{\perp}} I(h(u))\geq \inf_{u\in\partial B_1 \cap E_{m}^{\perp}} I(H_{m}(u)). \end{equation} Now take $u\in \partial B_1 \cap E_{m}^{\perp}$. 
Then, using \eqref{def of beta u}, \eqref{d m over beta u} and the fact that $\int_{{\mathbb R}^N}\rho \phi_u u^2 =\omega^{-1}(1-||u||_{H^1({\mathbb R}^N)}^2)^2$, it holds that \begin{align*} I(H_m(u))&=\frac{1}{2}(\bar{C}^{-1}K^{-1}\tilde{d}_m)^2||u||_{H^1({\mathbb R}^N)}^2+\frac{1}{4}(\bar{C}^{-1}K^{-1}\tilde{d}_m)^4\int_{{\mathbb R}^N}\rho\phi_u u^2\\ &\qquad\qquad-\frac{1}{q+1}(\bar{C}K)^{-q-1}\tilde{d}_m^{q+1}||u||_{L^{q+1}({\mathbb R}^N)}^{q+1}\\ &=\frac{1}{2}(\bar{C}^{-1}K^{-1}\tilde{d}_m)^2||u||_{H^1({\mathbb R}^N)}^2+\frac{1}{4}(\bar{C}^{-1}K^{-1}\tilde{d}_m)^4\int_{{\mathbb R}^N}\rho\phi_u u^2\\ &\qquad\qquad-\frac{(\bar{C}K)^{-q-1}\tilde{d}_m^{2}}{q+1}\left(\frac{\tilde{d}_m}{\beta(u)}\right)^{q-1}||u||_{H^1({\mathbb R}^N)}^{2}\\ &\geq\frac{1}{2}(\bar{C}^{-1}K^{-1}\tilde{d}_m)^2\left(1-\frac{2K^{1-q}}{q+1}\right)||u||_{H^1({\mathbb R}^N)}^2\\ &\qquad\qquad+\frac{1}{4\omega}(\bar{C}^{-1}K^{-1}\tilde{d}_m)^4\left(1-||u||_{H^1({\mathbb R}^N)}^{2}\right)^2\\ &\geq \min \left\{K_1\tilde{d}_m^2,K_2\tilde{d}_m^4 \right\} \left(||u||_{H^1({\mathbb R}^N)}^{4}-||u||_{H^1({\mathbb R}^N)}^{2}+1\right)\\ &\geq \frac{3}{4}\min \left\{K_1\tilde{d}_m^2,K_2\tilde{d}_m^4 \right\}, \end{align*} where $K_1\coloneqq \frac{1}{2\bar{C}^{2}K^{2}}\left(1-\frac{2K^{1-q}}{q+1}\right)\geq \frac{1}{4\bar{C}^{2}K^{2}}$ by our choice of $K$, and $K_2\coloneqq \frac{1}{4\omega\bar{C}^4K^4}$. Finally, using this, \eqref{lower bound d m}, and \eqref{divergence tilde d}, we obtain \begin{align*} d_{m+1}&\geq \inf_{u\in\partial B_1 \cap E_{m}^{\perp}} I(H_{m}(u))\\ &\geq \frac{3}{4}\min \left\{K_1\tilde{d}_m^2,K_2\tilde{d}_m^4 \right\}\to+\infty,\quad \text{ as } m\to+\infty. \end{align*} This completes the proof. \end{proof} \subsection{Proof of Theorem \ref{first multiplicity theorem}}\label{proof of multiplicity theorem section} In order to prove Theorem \ref{first multiplicity theorem}, we will need some background material including the notion of the Krasnoselskii-genus and its properties. Throughout what follows we let $G$ be a compact topological group.
Following \cite{Costa}, we begin with a number of definitions that we will need before introducing the notion of the Krasnoselskii-genus. \begin{definition}[\bf{Isometric representation}] The set $\{T(g) : g\in G\}$ is an \emph{isometric representation} of $G$ on $E$ if $T(g):E\to E$ is an isometry for each $g\in G$ and the following hold: \begin{enumerate}[label=(\roman*)] \item $T(g_1+g_2)=T(g_1)\circ T(g_2)$ for all $g_1,g_2\in G$ \item $T(0) =I$, where $I:E\to E$ is the identity map on $E$ \item $(g,u) \mapsto T(g)(u)$ is continuous. \end{enumerate} \end{definition} \begin{definition}[\bf{Invariant subset}] A subset $A\subset E$ is \emph{invariant} if $T(g)A=A$ for all $g\in G$. \end{definition} \begin{definition}[\bf{Equivariant mapping}] A mapping $R$ between two invariant subsets $A_1$ and $A_2$, namely $R:A_1\to A_2$, is said to be \emph{equivariant} if $R\circ T(g)=T(g)\circ R$ for all $g\in G$. \end{definition} \begin{definition}[\bf{The class $\mathcal{A}$}] We denote the class of all closed and invariant subsets of $E$ by $\mathcal{A}$. Namely, \[\mathcal{A} \coloneqq \{A\subset E : A \text{ closed},\, T(g)A=A \,\, \forall g\in G\}.\] \end{definition} \begin{definition}[\bf{${G}$-index with respect to $\mathcal{A}$}] A \emph{${G}$-index} on $E$ with respect to $\mathcal{A}$ is a mapping $\text{ind}:\mathcal{A}\to{\mathbb N}\cup\{+\infty\}$ such that the following hold: \begin{enumerate}[label=(\roman*)] \item $\text{ind}(A)=0$ if and only if $A=\emptyset$. \item If $R:A_1\to A_2$ is continuous and equivariant, then $\text{ind}(A_1)\leq \text{ind}(A_2)$. \item $\text{ind}(A_1\cup A_2)\leq \text{ind}(A_1)+ \text{ind}(A_2)$. \item If $A\in \mathcal{A}$ is compact, then there exists a neighbourhood $N$ of $A$ such that $N\in\mathcal{A}$ and $\text{ind}(N)=\text{ind}(A)$. \end{enumerate} \end{definition} With these definitions in place, we are ready to introduce the concept of the Krasnoselskii-genus.
\begin{lemma}[\bf{The Krasnoselskii-genus}] Let $G={\mathbb Z}_2=\{0,1\}$ and define $T(0)=I$, $T(1)=-I$, where $I:E\to E$ is the identity map on $E$. Given any closed subset $A\in\mathcal{A}$ that is symmetric with respect to the origin, define $\gamma(A)=k\in{\mathbb N}$ if $k$ is the smallest integer such that there exists some odd mapping $\varphi \in C(A,{\mathbb R}^k \setminus\{0\})$. Moreover, define $\gamma(A)=+\infty$ if no such mapping exists and $\gamma(\emptyset)=0$. Then, the mapping $\gamma:\mathcal{A}\to {\mathbb N}\cup \{+\infty\} $ is a ${\mathbb Z}_2$-index on $E$, called the Krasnoselskii-genus. \end{lemma} \begin{proof} See the proof of Proposition $2.1$ in \cite{Costa}. \end{proof} The next lemma gives a property of the Krasnoselskii-genus relevant for us to obtain our multiplicity result. \begin{lemma}[\bf{Multiplicity from the Krasnoselskii-genus}]\label{infinitely many sols lemma} Assume $A\in \mathcal{A}$ is such that $0\notin A$ and $\gamma(A)\geq2$. Then, $A$ has infinitely many points. \end{lemma} \begin{proof} See the proof of Proposition $2.2$ in \cite{Costa}. \end{proof} For the proof of Theorem \ref{first multiplicity theorem}, we recall a classical result of Ambrosetti and Rabinowitz, \cite{Ambrosetti and Rabinowitz}.
\begin{theorem}[\cite{Ambrosetti and Rabinowitz}; \,\bf{Min-max setting high $q$}]\label{Ambro Rab Mult} Let $I\in C^1(E({\mathbb R}^N),{\mathbb R})$ satisfy the following: \begin{enumerate}[label=(\roman*)] \item $I(0)=0$ and there exist constants $R,a>0$ such that $I(u)\geq a$ if $||u||_{E({\mathbb R}^N)}=R$ \item If $(u_n)_{n\in{\mathbb N}}\subset E({\mathbb R}^N)$ is such that $0<I(u_n)$, $I(u_n)$ is bounded above, and $I'(u_n)\to 0$, then $(u_n)_{n\in{\mathbb N}}$ possesses a convergent subsequence \item $I(u)=I(-u)$ for all $u\in E({\mathbb R}^N)$ \item For a nested sequence $E_1\subset E_2\subset \cdots$ of finite dimensional subspaces of $E({\mathbb R}^N)$ of increasing dimension, it holds that $E_i \cap \hat{A}_0$ is bounded for each $i=1,2,\ldots$, where $\hat{A}_0$ is given by \eqref{A hat} \end{enumerate} Define \[b_m=\inf_{K\in\Gamma_m}\max_{u\in K} I(u),\] with \begin{align*} \Gamma_m=&\{K\subset E({\mathbb R}^N) : K \text{ is compact and symmetric with respect to the origin and for}\\ &\text{ all } h\in\Gamma^*, \text{ it holds that } \gamma(K\cap h(\partial B_1))\geq m\}, \end{align*} where $\Gamma^*$ is given by \eqref{gamma star}. Then, for each $m\in{\mathbb N}$, it holds that $0<a \leq b_m \leq b_{m+1}$ and $b_m$ is a critical value of $I$. Moreover, if $b_{m+1}=\cdots =b_{m+r}=b$, then $\gamma(K_b)\geq r$, where \[K_b \coloneqq\{u\in E({\mathbb R}^N): I(u)=b, \, I'(u)=0\},\] is the set of critical points at any level $b>0$. \end{theorem} \begin{proof} See \cite[Theorem $2.8$]{Ambrosetti and Rabinowitz}. \end{proof} We are now in position to prove Theorem \ref{first multiplicity theorem}. \begin{proof} [Proof of Theorem \ref{first multiplicity theorem}] We aim to apply Theorem \ref{Ambro Rab Mult} and therefore must verify that $I$ satisfies assumptions $(i)$-$(iv)$ of this theorem. By Lemma \ref{mpgfori}, $I$ satisfies the Mountain-Pass Geometry and thus $(i)$ holds. By Lemma \ref{pscconditionfori}, $(ii)$ holds.
Clearly, $(iii)$ holds due to the structure of the functional $I$. We now must show that $(iv)$ holds. We first notice by straightforward calculations that for any $u\in \partial B_1$ and any $t>0$, it holds that \begin{align*} I(tu)&=\frac{t^2}{2}||u||^2_{H^1({\mathbb R}^N)}+\frac{t^4}{4}\int_{{\mathbb R}^N}\rho\phi_u u^2-\frac{t^{q+1}}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}\\ &=\frac{t^2}{2}\left(||u||^2_{H^1({\mathbb R}^N)}+\frac{t^2}{2}\int_{{\mathbb R}^N}\rho\phi_u u^2-\frac{2t^{q-1}}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}\right). \end{align*} We now set \[\alpha\coloneqq||u||^2_{H^1({\mathbb R}^N)}>0, \quad \beta \coloneqq \frac{1}{2} \int_{{\mathbb R}^N}\rho\phi_u u^2\geq0, \quad \gamma \coloneqq \frac{2}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}>0,\] and look for positive solutions of \[\frac{t^2}{2}(\alpha+\beta t^2-\gamma t^{q-1})=0.\] Since $q>3$, it holds that $\alpha+\beta t^2-\gamma t^{q-1}=0$ has a unique solution $t=t_u>0$. That is, we have shown that for each $u\in \partial B_1$, there exists a unique $t=t_u>0$ such that $I$ satisfies \begin{align*} &I (t_uu)=0 \\ &I (tu)>0, \,\, \forall t<t_u \\ &I (tu)<0, \,\, \forall t>t_u. \end{align*} Now, for any $m\in{\mathbb N}$, we choose $E_m$ an $m$-dimensional subspace of $E({\mathbb R}^N)$ in such a way that $E_m\subset E_{m'}$ for $m<m'$. Moreover, for any $m\in {\mathbb N}$, we set \[W_m\coloneqq \{w\in E({\mathbb R}^N) : w=tu,\,\, t\geq 0,\,\, u\in \partial B_1 \cap E_m\}.\] Then, the function $h:E_m \to W_m$ given by \[h(z)= t \frac{z}{||z||}, \quad \text{with } t=||z||\] defines a homeomorphism from $E_m$ onto $W_m$, and so $W_1\subset W_2\subset \cdots$ is a nested sequence of finite dimensional subspaces of $E({\mathbb R}^N)$ of increasing dimension. We also notice that \[T_m\coloneqq \sup_{u\in \partial B_1 \cap E_m} t_u <+\infty\] since $\partial B_1 \cap E_m$ is compact.
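The uniqueness of $t_u$ claimed above is elementary: setting $g(t)\coloneqq\alpha+\beta t^2-\gamma t^{q-1}$ for $t\geq0$, we have
\begin{equation*}
g(0)=\alpha>0 \qquad \text{and} \qquad g'(t)=t\left(2\beta-(q-1)\gamma\, t^{q-3}\right),
\end{equation*}
so that, since $q>3$, $\gamma>0$ and $\beta\geq0$, the derivative $g'$ changes sign at most once on $(0,+\infty)$, from positive to negative; as $g(t)\to-\infty$ when $t\to+\infty$, the function $g$ has exactly one positive zero, namely $t_u$.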
So, for all $t>T_m$ and $u\in\partial B_1 \cap E_m$, it holds that $I (tu)<0$, and thus $W_m \cap \hat{A}_0$ is bounded, where $\hat{A}_0$ is given by \eqref{A hat}. Since this holds for arbitrary $m\in{\mathbb N}$, we have shown that $(iv)$ holds. Hence, we have shown that Theorem \ref{Ambro Rab Mult} applies to the functional $I$. If $b_m$ are distinct for $m=1,\dots, j$ with $j\in {\mathbb N}$, we obtain $j$ distinct pairs of critical points corresponding to critical levels $0<b_1<b_2<\cdots<b_j$. If $b_{m+1}=\cdots =b_{m+r}=b$, then $\gamma(K_b)\geq r\geq 2$. Moreover, $0\notin K_b$ since $b>0=I(0)$. Further, $K_b$ is invariant since $I$ is an invariant functional and $K_b$ is closed since $I$ satisfies the Palais-Smale condition, and so $K_b\in \mathcal{A}$. Therefore, by Lemma \ref{infinitely many sols lemma}, $K_b$ possesses infinitely many points. Finally, we note that by \cite[Theorem $2.13$]{Ambrosetti and Rabinowitz}, for each $m\in{\mathbb N}$, it holds that \[d_m\leq b_m,\] where $d_m$ is defined in \eqref{def of dm}. It therefore follows from Lemma \ref{divergence of critical levels} that \[b_m\to+\infty, \text{ as } m\to+\infty.\] This concludes the proof. \end{proof} \subsection{Proof of Theorem \ref{partial multiplicity result low q}}\label{proof of multiplicity theorem low q section} Before proving Theorem \ref{partial multiplicity result low q}, we must establish some preliminary results that we will need to use. The first lemma that we recall will give us an abstract definition of the min-max levels and some properties. \begin{lemma}[\cite{Ambrosetti and Ruiz};\, \bf{Abstract min-max setting for low $q$}]\label{Ambro Ruiz critical levels} Consider a Banach space $E$, and a functional $\Phi_{\mu}:E\to {\mathbb R}$ of the form $\Phi_{\mu}(u)=\alpha(u)-\mu \beta(u)$, with $\mu>0$. 
Suppose that $\alpha$, $\beta \in C^1$ are even functions, $\lim_{||u||\to+\infty} \alpha(u)=+\infty$, $\beta(u)\geq 0$, and $\beta$, $\beta'$ map bounded sets onto bounded sets. Suppose further that there exists $K\subset E$ and a class $\mathcal{F}$ of compact sets in $E$ such that: \\ \noindent ($\mathcal{F}.1$) $K\subset A$ for all $A\in \mathcal{F}$ and $\sup_{u\in K} \Phi_{\mu}(u)<c_{\mu}$, where $c_{\mu}$ is defined as: \begin{equation}\label{def of c mu} c_{\mu}\coloneqq \inf_{A\in\mathcal{F}}\max_{u\in A}\Phi_{\mu}(u). \end{equation} ($\mathcal{F}.2$) If $\eta\in C([0,1] \times E, E)$ is an odd homotopy such that \begin{itemize} \item $\eta(0, \cdot)=I$, where $I:E\to E$ is the identity map on $E$ \item$\eta(t,\cdot)$ is a homeomorphism \item $\eta(t,x)=x$ for all $x\in K$, \end{itemize} then $\eta(1,A)\in\mathcal{F}$ for all $A\in\mathcal{F}$.\\ \noindent Then, it holds that the mapping $\mu \mapsto c_{\mu}$ is non-increasing and left-continuous, and therefore is almost everywhere differentiable. \end{lemma} \begin{proof} See \cite[Lemma $2.2$]{Ambrosetti and Ruiz}. \end{proof} Under the hypotheses of the previous lemma, we can now define the set of values of $\mu\in\left[\frac{1}{2},1\right]$ such that $c_{\mu}$, given by \eqref{def of c mu}, is differentiable. Namely, we define \begin{equation*}\label{set of mu} \begin{split} \mathcal{J}\coloneqq \bigg\{\mu \in \left[\frac{1}{2},1\right] \, :\, \text{the mapping } \mu \mapsto c_{\mu} \text{ is differentiable}\bigg\}. \end{split} \end{equation*} \begin{corollary}[\bf{On density of perturbation values $\mu$}]\label{density corollary} The set $\mathcal{J}$ is dense in $\left[\frac{1}{2},1\right]$. \end{corollary} \begin{proof} Fix $x\in\left[\frac{1}{2},1\right]$ and $\delta>0,$ and denote by $\abs{\cdot}$ the Lebesgue measure. 
Since $\left[\frac{1}{2},1\right] \setminus \mathcal{J}$ has zero Lebesgue measure by Lemma \ref{Ambro Ruiz critical levels}, we have \begin{align*} \left|\mathcal{J} \cap (x-\delta,x+\delta)\right| &=\left|\left[\frac{1}{2},1\right] \cap (x-\delta,x+\delta)\right|>0. \end{align*} It follows that $\mathcal{J} \cap (x-\delta,x+\delta)$ is nonempty and so we can choose $y\in\mathcal{J} \cap(x-\delta,x+\delta)$. Since $x$ and $\delta$ are arbitrary, this completes the proof. \end{proof} With the definition of $\mathcal{J}$ in place, we can also recall another vital result from \cite{Ambrosetti and Ruiz}, which will be used to obtain the boundedness of our Palais-Smale sequences. \begin{lemma}[\cite{Ambrosetti and Ruiz};\, \bf{Boundedness of Palais-Smale sequences at level $c_{\mu}$}]\label{Ambro Ruiz bounded PS} For any $\mu\in \mathcal{J}$, there exists a bounded Palais-Smale sequence for $\Phi_{\mu}$ at the level $c_{\mu}$ defined by \eqref{def of c mu}. That is, there exists a bounded sequence $(u_n)_{n\in{\mathbb N}}\subset E({\mathbb R}^N)$ such that $\Phi_{\mu}(u_n)\to c_{\mu}$ and $\Phi_{\mu}'(u_n)\to 0$. \end{lemma} \begin{proof} See \cite[Proposition $2.3$]{Ambrosetti and Ruiz}. \end{proof} Moving toward a less abstract setting, for any $\mu\in\left[\frac{1}{2},1\right]$, we define the perturbed functional $I_{\mu}:E({\mathbb R}^N)\to{\mathbb R}$ as \begin{equation}\label{def of I mu} I_{\mu}(u) \coloneqq \frac{1}{2}\int_{{\mathbb R}^N}(|\nabla u|^2 + u^2)+\frac{1}{4}\int_{{\mathbb R}^N}\int_{{\mathbb R}^N}\frac{u^2(x)\rho (x) u^2(y) \rho (y)}{|x-y|^{N-2}}\,\mathrm{d} x\,\mathrm{d} y -\frac{\mu}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}. \end{equation} The next result that we will need in order to prove Theorem \ref{partial multiplicity result low q} follows from Lemma \ref{derivatives of f}. \begin{lemma}[\bf{On the sign of the energy level of $I_{\mu}$ along certain curves}]\label{use of curves} Assume $N= 3,4,5$ and $q\in(2,2^{\ast}-1]$.
Suppose further that $\rho$ is homogeneous of degree $\bar{k}$, namely, $ \rho(tx)=t^{\bar{k}}\rho(x)$ for all $t>0$, for some $$\bar{k}>\max\left\{\frac{N}{4}, \frac{1}{q-1}\right\}\cdot(3-q)-1.$$ Then, there exists $\nu>\max\left\{\frac{N}{2}, \frac{2}{q-1}\right\}$ such that for each fixed $\mu\in\left[\frac{1}{2},1\right]$ and each $u\in E({\mathbb R}^N)\setminus\{0\}$, there exists a unique $t=t_u>0$ with the property that \begin{align*} &I_{\mu} (t_u^{\nu} u({t_u}\cdot))=0, \nonumber \\ &I_{\mu} (t^{\nu} u(t\cdot))>0, \,\, \forall t<t_u, \nonumber \\ &I_{\mu} (t^{\nu} u(t\cdot))<0, \,\, \forall t>t_u, \end{align*} where $I_{\mu}$ is defined in \eqref{def of I mu}. \end{lemma} \begin{proof} We first note that under the assumptions on the parameters, we can show that \[\frac{4\nu-N-2}{2}>\frac{(\nu+1)(3-q)-2}{2}.\] It follows from this and the lower bound assumption on $\bar{k}$ that we can always find at least one interval \[\left(\frac{\nu(3-q)-2}{2},\frac{4\nu-N-2}{2}\right), \quad \text{with } \nu>\max\left\{\frac{N}{2},\frac{2}{q-1}\right\},\] that contains $\bar{k}$. We pick $\nu$ corresponding to such an interval and fix $\mu\in\left[\frac{1}{2},1\right]$. Then, for any $u\in E({\mathbb R}^N)\setminus\{0\}$ and for any $t>0$, using the assumption that $\rho$ is homogeneous of degree $\bar{k}$, we find that \begin{align*} I_{\mu} (t^{\nu} u(t\cdot))&=\frac{t^{2\nu+2-N}}{2}\int_{{\mathbb R}^N}|\nabla u|^2 + \frac{t^{2\nu-N}}{2}\int_{{\mathbb R}^N}u^2+\frac{t^{4\nu-N-2}}{4}\int_{{\mathbb R}^N} \int_{{\mathbb R}^N} \frac{u^2(y)\rho(\frac{y}{t})u^2(x)\rho(\frac{x}{t})}{\omega |x-y|^{N-2}}\\ &\qquad-\frac{\mu t^{\nu(q+1)-N}}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}\\ &=\frac{t^{2\nu+2-N}}{2}\int_{{\mathbb R}^N}|\nabla u|^2 + \frac{t^{2\nu-N}}{2}\int_{{\mathbb R}^N}u^2+\frac{t^{4\nu-N-2-2\bar{k}}}{4}\int_{{\mathbb R}^N} \rho\phi_u u^2 -\frac{\mu t^{\nu(q+1)-N}}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}. 
\end{align*} We therefore set \[a=\frac{1}{2} \int_{{\mathbb R}^N}|\nabla u|^2, \,\, b=\frac{1}{2} \int_{{\mathbb R}^N} u^2,\,\, c=\frac{1}{4} \int_{{\mathbb R}^N}\rho\phi_u u^2, \,\, d=\frac{\mu}{q+1} \int_{{\mathbb R}^N}|u|^{q+1},\] and consider the function \begin{equation*} f(t)= a{t^{2\nu+2-N}}+bt^{2\nu-N}+c t^{4\nu-N-2-2\bar{k}}-d t^{\nu(q+1)-N}, \quad t\geq0. \end{equation*} Since $u\in E({\mathbb R}^N)\setminus\{0\}$, we can deduce that $a,b,d>0$ and $c\geq0$, and so, by Lemma \ref{derivatives of f}, it holds that $f$ has a unique critical point corresponding to its maximum. Thus, since $I_{\mu} (t^{\nu} u(t\cdot))=f(t)$ and, by assumption, $\nu(q+1)-N>2\nu+2-N$ and $\nu(q+1)-N>4\nu-N-2-2\bar{k}$, it follows that there exists a unique $t=t_u>0$ such that the conclusion holds. \end{proof} With the previous results established, we are finally in a position to prove Theorem \ref{partial multiplicity result low q}. \begin{proof} [Proof of Theorem \ref{partial multiplicity result low q}] We first note that by Lemma \ref{use of curves}, we can choose $\nu>\max\left\{\frac{N}{2},\frac{2}{q-1}\right\}$, so that for each $u\in\partial B_1$, there exists a unique $t=t_u>0$ such that $I_{\mu}$ with $\mu=\frac{1}{2}$, defined by \eqref{def of I mu}, satisfies \begin{align}\label{sign of mathcal I} &I_{\frac{1}{2}} (t_u^{\nu} u({t_u}\cdot))=0, \nonumber \\ &I_{\frac{1}{2}} (t^{\nu} u(t\cdot))>0, \,\, \forall t<t_u, \nonumber \\ &I_{\frac{1}{2}} (t^{\nu} u(t\cdot))<0, \,\, \forall t>t_u. \end{align} Now, for any $m\in{\mathbb N}$, we choose $E_m$ an $m$-dimensional subspace of $E({\mathbb R}^N)$ in such a way that $E_m\subset E_{m'}$ for $m<m'$.
Moreover, for any $m\in {\mathbb N}$, we set \[W_m\coloneqq \{w\in E({\mathbb R}^N) : w= t^{\nu}u(t\cdot),\,\, t\geq 0,\,\, u\in \partial B_1 \cap E_m\}.\] Then, the function $h:E_m \to W_m$ given by \[h(e)= t^{\nu}u(t\cdot), \quad \text{with } t=||e||_{E({\mathbb R}^N)}, \,u=\frac{e}{||e||_{E({\mathbb R}^N)}},\] defines an odd homeomorphism from $E_m$ onto $W_m$. We notice that it holds that \begin{equation}\label{def of Tm} T_m\coloneqq \sup_{u\in \partial B_1 \cap E_m} t_u <+\infty, \end{equation} since $\partial B_1 \cap E_m$ is compact. So, the set \[A_m= \{w\in E({\mathbb R}^N) : w=t^{\nu} u(t\cdot),\,\, t\in[0,T_m],\,\, u\in \partial B_1 \cap E_m\}\] is compact. We now define \[H\coloneqq\{g:E({\mathbb R}^N)\to E({\mathbb R}^N) : g \text{ is an odd homeomorphism and } g(w)=w \, \text{ for all } w\in\partial A_m\},\] and \[G_m\coloneqq \{g(A_m): g\in H\}.\] We aim to verify ($\mathcal{F}.1$) and ($\mathcal{F}.2$) of Lemma \ref{Ambro Ruiz critical levels}. We take $G_m$ as the class $\mathcal{F}$ and $K=\partial A_m$ and define the min-max levels \[c_{m,\mu}\coloneqq \inf_{A\in G_m}\max_{u\in A} I_{\mu}(u).\] Then, since $T_m\geq t_u$ for all $u\in \partial B_1 \cap E_m$ by definition, it follows from \eqref{sign of mathcal I} that \[I_{\mu}(w)\leq I_{\frac{1}{2}} (w)\leq 0, \quad \forall w\in \partial A_m, \,\, \forall \mu\in\left[\frac{1}{2},1\right].\] Moreover, since $G_m\subset G_{m+1}$ for all $m\in{\mathbb N}$, it holds that $c_{m,\mu}\geq c_{m-1,\mu}\geq\cdots\geq c_{1,\mu}>0$. Taken together, we have shown that \begin{equation}\label{sup on partial Am less than 0} \sup_{w\in\partial A_m}I_{\mu}(w)\leq 0 <c_{m,\mu}, \end{equation} and thus ($\mathcal{F}.1$) is verified. Moreover, for any $\eta$ given by ($\mathcal{F}.2$) and any $g\in H$, it holds that $\tilde{g}=\eta(1,g)$ belongs to $H$, and so ($\mathcal{F}.2$) is satisfied. Since ($\mathcal{F}.1$) and ($\mathcal{F}.2$) are satisfied, Lemma \ref{Ambro Ruiz critical levels} applies. 
Thus, for any $m\in{\mathbb N}$, we denote by $\mathcal{J}_m$ the set of values $\mu\in \left[\frac{1}{2},1\right]$ such that the function $\mu\mapsto c_{m,\mu}$ is differentiable. We then let \[\mathcal{M}\coloneqq \bigcap\limits_{m\in{\mathbb N}} \mathcal{J}_m.\] Since \[\left[\frac{1}{2},1\right]\setminus \mathcal{M} = \bigcup\limits_{m\in{\mathbb N}} \left(\left[\frac{1}{2},1\right]\setminus \mathcal{J}_m\right)\] and $[\frac{1}{2},1]\setminus\mathcal{J}_m$ has zero Lebesgue measure for each $m$ by Lemma \ref{Ambro Ruiz critical levels}, it follows that $\left[\frac{1}{2},1\right]\setminus \mathcal{M}$ has zero Lebesgue measure. Arguing as in the proof of Corollary \ref{density corollary}, we obtain that $\mathcal{M}$ is dense in $\left[\frac{1}{2},1\right]$. We can now apply Proposition \ref{Ambro Ruiz bounded PS} with $\Phi_{\mu}=I_{\mu}$. Namely, for each fixed $m\in{\mathbb N}$ and $\mu\in\mathcal{M}$ we obtain that there exists a bounded sequence $(u_n)_{n\in{\mathbb N}}\subset E({\mathbb R}^N)$ such that $I_{\mu}(u_n)\to c_{m,\mu}$ and $I'_{\mu}(u_n)\to 0$. The embedding of $E({\mathbb R}^N)$ into $L^{q+1}({\mathbb R}^N)$ is compact by Lemma \ref{compactembeddingofeintolpplus} so, arguing as in the proof of Theorem \ref{mountainpasssoltheoremlowp}, we can show that the values $c_{m, \mu}$ are critical levels of $I_{\mu}$ for each $m\in{\mathbb N}$ and $\mu\in\mathcal{M}$. We then fix $m$ and take $(\mu_n)_{n\in{\mathbb N}}$, an increasing sequence in $\mathcal{M}$ such that $\mu_n\to 1$, and $(u_n)_{n\in{\mathbb N}}\subset E({\mathbb R}^N)$ such that $I'_{\mu_n}(u_n)= 0$ and $I_{\mu_n}(u_n)=c_{m, \mu_n}$. We note that, since $\rho$ is homogeneous of degree $\bar{k}$ by assumption, it follows from \cite[p. 296]{Gelfand and Shilov} that $\bar{k} \rho(x)=(x, \nabla \rho)$.
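For the reader's convenience, this Euler-type identity can also be checked directly; the only assumption below is the homogeneity relation $\rho(tx)=t^{\bar{k}}\rho(x)$ for $t>0$.

```latex
% Differentiate both sides of rho(tx) = t^{kbar} rho(x) in t and evaluate at t = 1:
\frac{\mathrm{d}}{\mathrm{d}t}\,\rho(tx)\Big|_{t=1}=(x,\nabla\rho(x)),
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\,t^{\bar{k}}\rho(x)\Big|_{t=1}=\bar{k}\,\rho(x),
```

so that $(x,\nabla\rho)=\bar{k}\,\rho(x)$ wherever $\rho$ is differentiable.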
So, setting $\alpha_n = \int_{{\mathbb R}^N} (|\nabla u_{n}|^2 + u_{n}^2)$, $\gamma_n = \int_{{\mathbb R}^N} \rho(x) \phi_{u_{n}} u_{n}^2$, $\delta_n = \mu_n \int_{{\mathbb R}^N} |u_{n}|^{q+1}$ and using the Pohozaev-type condition deduced in Lemma \ref{pohozaevlemma}, we obtain the system \begin{equation} \begin{cases} \begin{array}{c c c c c c c} \alpha_n & + & \gamma_n & - & \delta_n & = & 0,\\ \frac{1}{2} \alpha_n & + & \frac{1}{4} \gamma_n & - & \frac{1}{q+1} \delta_n & = & c_{m,\mu_n},\\ \frac{N-2}{2} \alpha_n & + & \left( \frac{N+2+2\bar{k}}{4} \right) \gamma_n & - & \frac{N}{q+1} \delta_n & \leq & 0.\\ \end{array} \end{cases} \end{equation} Since the assumptions on $\bar{k}$ guarantee that $\bar{k} > \frac{-2(q-2)}{(q-1)}>\frac{N-6}{2}$ for $q\in(2,3]$ if $N=3$ and for $q\in(2,2^{\ast}-1)$ if $N=4,5$, it follows that we can solve this system and show that $\alpha_n,\gamma_n,\delta_n$ are all bounded as in the proof of Theorem \ref{mountainpasssoltheoremlowp}. Moreover, continuing to argue as in the proof of this theorem and using the compact embedding of $E({\mathbb R}^N)$ into $L^{q+1}({\mathbb R}^N)$, we can then prove that for each fixed $m$ there exists $u\in E({\mathbb R}^N)$ such that, up to a subsequence, $u_n\to u$ in $E({\mathbb R}^N)$, $I(u)=I_1(u)=c_{m,1}$, and $I'(u)=I_1'(u)=0$. It therefore remains to show that $I(u)=c_{m,1} \to+\infty$ as $m\to+\infty$. In order to do so, we define \[\tilde{\Gamma}_m\coloneqq \left\{g\in C(E_m\cap B_1, E({\mathbb R}^N)): g \text{ is odd, one-to-one, } I(g(y))\leq 0 \text{ for all } y\in \partial(E_m\cap B_1)\right\},\] \[\tilde{G}_m\coloneqq \left\{A\subset E({\mathbb R}^N): A=g(E_m\cap B_1), g\in\tilde{\Gamma}_m\right\},\] \[\tilde{b}_m\coloneqq \inf_{A\in\tilde{G}_m}\max_{u\in A} I(u).\] We then note that by \cite[Corollary $2.16$]{Ambrosetti and Rabinowitz}, it holds that \[d_m\leq \tilde{b}_m,\] where $d_m$ is given by \eqref{def of dm}.
It therefore follows from Lemma \ref{divergence of critical levels} that \begin{equation}\label{tilde bm diverge} \tilde{b}_m\to+\infty, \text{ as } m\to+\infty. \end{equation} We will now show $G_m\subseteq \tilde{G}_m$. We take $A\in G_m$. Then, by definition, there exists $g\in H$ such that $A=g(A_m)$. We define an odd homeomorphism $\varphi:E_m\cap B_1\to A_m$ by \[\varphi(e) = t^{\nu}u(t\cdot), \quad \text{with } t=T_m||e||_{E({\mathbb R}^N)}, \,u=\frac{e}{||e||_{E({\mathbb R}^N)}},\] where $T_m$ is defined in \eqref{def of Tm}, and set $\tilde{g}=g\circ\varphi$. Since we can write $A=\tilde{g}(E_m\cap B_1)$, by the definition of $\tilde{G}_m$ it suffices to show that $\tilde{g}\in\tilde{\Gamma}_m$. Clearly, $\tilde{g}\in C(E_m\cap B_1, E({\mathbb R}^N))$ is odd and one-to-one. Moreover, for every $y\in\partial(E_m\cap B_1)$, setting $w=\varphi(y)\in\partial A_m$, we have $I(\tilde{g}(y))=I(g(w))$. Since $g\in H$ and $w\in\partial A_m$, by definition $g(w)=w$. Putting everything together, we have \[I(\tilde{g}(y))=I(g(w))=I(w)\leq \sup_{w\in\partial A_m}I(w)\leq 0,\] where the final inequality follows from \eqref{sup on partial Am less than 0}. Hence, we have shown $\tilde{g}\in\tilde{\Gamma}_m$ and so $G_m\subseteq \tilde{G}_m$. Therefore, for each $m\in{\mathbb N}$, it follows that \[\tilde{b}_m= \inf_{A\in\tilde{G}_m}\max_{u\in A} I(u)\leq \inf_{A\in G_m}\max_{u\in A} I(u)=c_{m,1},\] and so, by \eqref{tilde bm diverge}, we conclude that \[c_{m,1}\to+\infty, \text{ as } m\to+\infty,\] as required. \end{proof} \section*{Appendix A: Proof of the Pohozaev-type condition} \begin{proof}[Proof of Lemma \ref{pohozaevlemma}] With the regularity remarks of Proposition \ref{reg} in place, we now multiply the first equation in \eqref{poh system} by $(x, \nabla u)$ and integrate on $B_R(0)$ for some $R>0$. We will compute each integral separately.
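Before carrying out the computation, we record the pointwise identities behind the first integral (a sketch of the standard Rellich--Pohozaev argument, stated for $u$ smooth; the general case follows from the regularity recalled above):

```latex
% Pointwise identities used to integrate -\Delta u (x,\nabla u) over B_R:
\operatorname{div}\big((x,\nabla u)\nabla u\big)
  =(x,\nabla u)\Delta u+|\nabla u|^2+\tfrac{1}{2}\,\big(x,\nabla|\nabla u|^2\big),
\qquad
\big(x,\nabla|\nabla u|^2\big)=\operatorname{div}\big(x\,|\nabla u|^2\big)-N|\nabla u|^2.
```

Integrating over $B_R$ and applying the divergence theorem, with outer normal $\nu=x/R$ (so that $x\cdot\nu=R$ and $\partial_{\nu}u=(x,\nabla u)/R$ on $\partial B_R$), yields the first identity below.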
We first note that \begin{equation}\label{POH1} \begin{split} \int_{B_R}-\Delta u (x, \nabla u) \,\mathrm{d} x = \frac{2-N}{2}&\int_{B_R}|\nabla u|^2 \,\mathrm{d} x \\ &-\frac{1}{R}\int_{\partial B_R} |(x,\nabla u)|^2 \,\mathrm{d} \sigma +\frac{R}{2}\int_{\partial B_R}|\nabla u|^2 \,\mathrm{d} \sigma. \end{split} \end{equation} \noindent Fixing $i =1,\dots, N$, integrating by parts and using the divergence theorem, we then see that \begin{align} \int_{B_R} bu(x_i \partial_i u)\,\mathrm{d} x& =b\left[ -\frac{1}{2} \int_{B_R} u^2\,\mathrm{d} x +\frac{1}{2}\int_{B_R}\partial_i(u^2x_i)\,\mathrm{d} x \right] \nonumber \\ &=b\left[ -\frac{1}{2} \int_{B_R} u^2\,\mathrm{d} x + \frac{1}{2}\int_{\partial B_R} u^2 \frac{x_i^2}{|x|}\,\mathrm{d} \sigma \right]. \nonumber \end{align} \noindent So, summing over $i$, we get \begin{equation}\label{POH2} \int_{B_R} bu(x, \nabla u) \,\mathrm{d} x= b\left[-\frac{N}{2}\int_{B_R}u^2\,\mathrm{d} x +\frac{R}{2}\int_{\partial B_R}u^2 \,\mathrm{d} \sigma \right]. \end{equation} \noindent Again, fixing $i =1,\dots, N$, integrating by parts and using the divergence theorem, we find that \begin{align} \int_{B_R}c\rho \phi_u u x_i (\partial_i u) \,\mathrm{d} x &= c\bigg[ -\frac{1}{2}\int_{B_R} \rho\phi_u u^2 \,\mathrm{d} x -\frac{1}{2}\int_{B_R} \phi_u u^2x_i (\partial_i\rho) \,\mathrm{d} x \nonumber \\ & \qquad\qquad -\frac{1}{2}\int_{B_R} \rho u^2 x_i (\partial_i \phi_u) \,\mathrm{d} x+\frac{1}{2}\int_{B_R}\partial_i (\rho\phi_u u^2 x_i)\,\mathrm{d} x \bigg] \nonumber\\ &=c\bigg[ -\frac{1}{2}\int_{B_R}\rho\phi_u u^2 \,\mathrm{d} x -\frac{1}{2}\int_{B_R} \phi_u u^2x_i (\partial_i\rho) \,\mathrm{d} x \nonumber \\ & \qquad\qquad -\frac{1}{2}\int_{B_R} \rho u^2 x_i (\partial_i \phi_u) \,\mathrm{d} x+\frac{1}{2}\int_{\partial B_R}\rho \phi_u u^2 \frac{x_i^2}{|x|}\,\mathrm{d} \sigma \bigg].
\nonumber \end{align} \noindent Thus, summing over $i$, we get \begin{align}\label{POH3} \int_{B_R}c\rho\phi_u u (x, \nabla u) \,\mathrm{d} x &= c\bigg[ -\frac{N}{2}\int_{B_R}\rho\phi_u u^2 \,\mathrm{d} x -\frac{1}{2}\int_{B_R} \phi_u u^2 (x, \nabla \rho) \,\mathrm{d} x \nonumber\\ & \qquad\qquad -\frac{1}{2}\int_{B_R} \rho u^2 (x, \nabla \phi_u) \,\mathrm{d} x+\frac{R}{2}\int_{\partial B_R} \rho\phi_u u^2 \,\mathrm{d} \sigma \bigg]. \end{align} \noindent Finally, once more fixing $i=1,\dots, N$, integrating by parts and using the divergence theorem, we find that, \begin{align*} \int_{B_R}d |u|^{q-1}u (x_i \partial_i u) \,\mathrm{d} x = d \left[ \frac{-1}{q+1} \int_{B_R} |u|^{q+1} \,\mathrm{d} x+ \frac{1}{q+1} \int_{\partial B_R} |u|^{q+1} \frac{x_i^2}{|x|} \,\mathrm{d} \sigma \right], \end{align*} \noindent and so, summing over $i$, we see that \begin{equation}\label{POH4} \begin{split} \int_{B_R}d|u|^{q-1}u (x, \nabla u) \,\mathrm{d} x=d\bigg[\frac{-N}{q+1}&\int_{B_R}|u|^{q+1}\,\mathrm{d} x\\ &+\frac{R}{q+1}\int_{\partial B_R}|u|^{q+1}\,\mathrm{d} \sigma\bigg]. \end{split} \end{equation} \noindent Putting \eqref{POH1}, \eqref{POH2}, \eqref{POH3} and \eqref{POH4} together, we see that \begin{equation}\label{prePOH} \begin{split} \frac{2-N}{2}&\int_{B_R}|\nabla u|^2 \,\mathrm{d} x -\frac{1}{R}\int_{\partial B_R} |(x,\nabla u)|^2 \,\mathrm{d} \sigma +\frac{R}{2}\int_{\partial B_R}|\nabla u|^2 \,\mathrm{d} \sigma \\ &+b\bigg[-\frac{N}{2}\int_{B_R}u^2\,\mathrm{d} x +\frac{R}{2}\int_{\partial B_R}u^2 \,\mathrm{d} \sigma \bigg] \\ &\qquad+ c\bigg[ -\frac{N}{2}\int_{B_R}\rho\phi_u u^2 \,\mathrm{d} x-\frac{1}{2}\int_{B_R} \phi_u u^2 (x, \nabla \rho) \,\mathrm{d} x\\ &\qquad\qquad-\frac{1}{2}\int_{B_R} \rho u^2 (x, \nabla \phi_u) \,\mathrm{d} x +\frac{R}{2}\int_{\partial B_R} \rho \phi_u u^2 \,\mathrm{d} \sigma \bigg]\\ &\qquad\qquad\qquad-d\left[\frac{-N}{q+1}\int_{B_R}|u|^{q+1}\,\mathrm{d} x+\frac{R}{q+1}\int_{\partial B_R}|u|^{q+1}\,\mathrm{d} \sigma\right] =0. 
\end{split} \end{equation} \noindent We now multiply the second equation in \eqref{poh system} by $(x, \nabla \phi_u)$ and integrate on $B_R(0)$ for some $R>0$. By a simple calculation we see that \begin{equation*} \begin{split} \int_{B_R} \rho u^2 (x, \nabla \phi_u)\,\mathrm{d} x &= \int_{B_R} -\Delta \phi_u (x, \nabla \phi_u) \,\mathrm{d} x \\ &= \frac{2-N}{2}\int_{B_R}|\nabla \phi_u|^2 \,\mathrm{d} x -\frac{1}{R}\int_{\partial B_R} |(x,\nabla \phi_u)|^2 \,\mathrm{d} \sigma \\ &\qquad+\frac{R}{2}\int_{\partial B_R}|\nabla \phi_u|^2 \,\mathrm{d} \sigma. \end{split} \end{equation*} \noindent Substituting this into \eqref{prePOH} and rearranging, we get \begin{equation}\label{POHwithboundary} \begin{split} & \frac{N-2}{2}\int_{B_R}|\nabla u|^2 \,\mathrm{d} x +\frac{Nb}{2}\int_{B_R}u^2\,\mathrm{d} x +\frac{(N+k)c}{2}\int_{B_R}\rho\phi_u u^2 \,\mathrm{d} x \\ &\qquad\qquad +\frac{c(2-N)}{4}\int_{B_R}|\nabla \phi_u|^2\,\mathrm{d} x -\frac{Nd}{q+1}\int_{B_R}|u|^{q+1}\,\mathrm{d} x \\ & \qquad\leq\frac{N-2}{2}\int_{B_R}|\nabla u|^2 \,\mathrm{d} x +\frac{Nb}{2}\int_{B_R}u^2\,\mathrm{d} x +\frac{Nc}{2}\int_{B_R}\rho\phi_u u^2 \,\mathrm{d} x \\ &\qquad\qquad +\frac{c}{2}\int_{B_R} \phi_u u^2 (x, \nabla \rho) \,\mathrm{d} x +\frac{c(2-N)}{4}\int_{B_R}|\nabla \phi_u|^2\,\mathrm{d} x-\frac{Nd}{q+1}\int_{B_R}|u|^{q+1}\,\mathrm{d} x \\ &\qquad=-\frac{1}{R}\int_{\partial B_R} |(x,\nabla u)|^2 \,\mathrm{d} \sigma +\frac{R}{2}\int_{\partial B_R}|\nabla u|^2 \,\mathrm{d} \sigma +\frac{bR}{2}\int_{\partial B_R}u^2 \,\mathrm{d}\sigma \\ &\qquad\qquad +\frac{cR}{2}\int_{\partial B_R} \rho\phi_u u^2 \,\mathrm{d} \sigma +\frac{c}{2R}\int_{\partial B_R} |(x,\nabla \phi_u)|^2 \,\mathrm{d} \sigma \\ &\qquad\qquad\qquad-\frac{cR}{4}\int_{\partial B_R}|\nabla \phi_u|^2 \,\mathrm{d} \sigma-\frac{dR}{q+1}\int_{\partial B_R}|u|^{q+1}\,\mathrm{d} \sigma, \end{split} \end{equation} \noindent where we have used the assumption $k\rho(x)\leq (x,\nabla \rho)$ for some $k\in{\mathbb R}$ to
obtain the first inequality. We now call the right hand side of \eqref{POHwithboundary} $I_R$, namely \begin{equation*} \begin{split} I_R & \coloneqq-\frac{1}{R}\int_{\partial B_R} |(x,\nabla u)|^2 \,\mathrm{d} \sigma +\frac{R}{2}\int_{\partial B_R}|\nabla u|^2 \,\mathrm{d} \sigma +\frac{bR}{2}\int_{\partial B_R}u^2 \,\mathrm{d}\sigma \\ &\qquad\qquad +\frac{cR}{2}\int_{\partial B_R} \rho\phi_u u^2 \,\mathrm{d} \sigma +\frac{c}{2R}\int_{\partial B_R} |(x,\nabla \phi_u)|^2 \,\mathrm{d} \sigma \\ &\qquad\qquad\qquad-\frac{cR}{4}\int_{\partial B_R}|\nabla \phi_u|^2 \,\mathrm{d} \sigma-\frac{dR}{q+1}\int_{\partial B_R}|u|^{q+1}\,\mathrm{d} \sigma. \end{split} \end{equation*} We note that $|(x,\nabla u)|\leq R|\nabla u |$ and $|(x,\nabla \phi_u)|\leq R|\nabla \phi_u |$ on $\partial B_R$, so it holds that \begin{align*} |I_R| &\leq \frac{3R}{2}\int_{\partial B_R}|\nabla u|^2 \,\mathrm{d} \sigma +\frac{bR}{2}\int_{\partial B_R}u^2 \,\mathrm{d}\sigma \\ &\qquad + \frac{cR}{2}\int_{\partial B_R}\rho \phi_u u^2 \,\mathrm{d} \sigma+\frac{3cR}{4}\int_{\partial B_R}|\nabla \phi_u|^2 \,\mathrm{d} \sigma+\frac{dR}{q+1}\int_{\partial B_R}|u|^{q+1}\,\mathrm{d}\sigma. 
\end{align*} \noindent Now, $|\nabla u|^2$, $u^2 \in L^1({\mathbb R}^N)$, as $u\in E({\mathbb R}^N)\subseteq H^1({\mathbb R}^N)$; $\rho\phi_u u^2$, $|\nabla \phi_u|^2 \in L^1({\mathbb R}^N)$, because $\int_{{\mathbb R}^N}\rho\phi_u u^2\,\mathrm{d} x=\int_{{\mathbb R}^N}|\nabla \phi_u|^2\,\mathrm{d} x$ and $\phi_u \in D^{1,2}({\mathbb R}^N)$; and $|u|^{q+1} \in L^1({\mathbb R}^N)$, because $E({\mathbb R}^N) \hookrightarrow L^s({\mathbb R}^N)$ for all $s \in [2,2^{\ast}]$. In particular, the sum $g$ of these (nonnegative) integrands belongs to $L^1({\mathbb R}^N)$, so that $\int_0^{+\infty}\big(\int_{\partial B_R} g\,\mathrm{d}\sigma\big)\,\mathrm{d} R<+\infty$ and hence $\liminf_{R\to+\infty} R\int_{\partial B_R} g\,\mathrm{d}\sigma=0$; it follows that $I_{R_n} \to 0$ as $n\to +\infty$ for a suitable sequence $R_n \to +\infty$. Moreover, taking $R=R_n$ in \eqref{POHwithboundary} and letting $n\to+\infty$, it follows that \begin{align*}\label{POH limit} \frac{N-2}{2}\int_{{\mathbb R}^N}|\nabla u|^2 \,\mathrm{d} x +\frac{Nb}{2}&\int_{{\mathbb R}^N}u^2\,\mathrm{d} x +\frac{(N+k)c}{2}\int_{{\mathbb R}^N}\rho\phi_u u^2 \,\mathrm{d} x \\ &+\frac{c(2-N)}{4}\int_{{\mathbb R}^N}|\nabla \phi_u|^2\,\mathrm{d} x -\frac{Nd}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}\,\mathrm{d} x \leq 0, \end{align*} and so, we obtain \[\frac{N-2}{2}\int_{{\mathbb R}^N}|\nabla u|^2 \,\mathrm{d} x +\frac{Nb}{2}\int_{{\mathbb R}^N}u^2\,\mathrm{d} x +\frac{(N+2+2k)c}{4}\int_{{\mathbb R}^N}\rho\phi_u u^2 \,\mathrm{d} x-\frac{Nd}{q+1}\int_{{\mathbb R}^N}|u|^{q+1}\,\mathrm{d} x \leq 0, \] using the fact that $\int_{{\mathbb R}^N}|\nabla \phi_u|^2\,\mathrm{d} x=\int_{{\mathbb R}^N}\rho\phi_u u^2\,\mathrm{d} x $. This completes the proof. \end{proof} \begin{comment} \section*{Appendix B: Proof of Lemma \ref{derivatives of f}} \begin{proof}[Proof of Lemma \ref{derivatives of f}] We first argue assuming that $a,b,c,d>0$. We begin by computing some derivatives of $f$.
Namely, \begin{align*} f'(t)&=(2\nu+2-N)at^{(2\nu+2-N)-1}+(2\nu-N)bt^{(2\nu-N)-1}\\ &\qquad+(4\nu-N-2-2\bar{k})c t^{(4\nu-N-2-2\bar{k})-1}\\ &\qquad\qquad-[\nu(q+1)-N]dt^{(\nu(q+1)-N)-1},\\ \\ f''(t)&=(2\nu+2-N)[(2\nu+2-N)-1]at^{(2\nu+2-N)-2}+(2\nu-N)[(2\nu-N)-1]bt^{(2\nu-N)-2}\\ &\qquad+(4\nu-N-2-2\bar{k})[(4\nu-N-2-2\bar{k})-1]c t^{(4\nu-N-2-2\bar{k})-2}\\ &\qquad\qquad-[\nu(q+1)-N]\cdot[(\nu(q+1)-N)-1]dt^{(\nu(q+1)-N)-2},\\ \\ &\vdots \\ \\ f^{(2\nu+2-N)}(t)&=(2\nu+2-N)[(2\nu+2-N)-1]\cdots (1)a\\ &\qquad+(4\nu-N-2-2\bar{k})[(4\nu-N-2-2\bar{k})-1]\cdots\\ &\qquad\qquad\cdots[(4\nu-N-2-2\bar{k})-(2\nu+1-N)]c t^{(4\nu-N-2-2\bar{k})-(2\nu+2-N)}\\ &\qquad\qquad\qquad-[\nu(q+1)-N]\cdot[(\nu(q+1)-N)-1]\cdots\\ &\qquad\qquad\qquad\qquad\cdots[(\nu(q+1)-N)-(2\nu+1-N)]dt^{(\nu(q+1)-N)-(2\nu+2-N)}\\ &=:\tilde{a}+\tilde{c}t^{2\nu-4-2\bar{k}}-\tilde{d}t^{\nu(q-1)-2}. \end{align*} As a result of our assumptions, we can make some important observations that will be used on numerous occasions in what is to follow:\\ \begin{enumerate}[label=(\roman*)] \item $2\nu+2-N>0$ since $\nu\geq \frac{N}{2}$, \item $(4\nu-N-2-2\bar{k})-(2\nu+1-N)=2\nu-3-2\bar{k}>0$ since $\bar{k}<\frac{2\nu-3}{2}$, \item $(\nu(q+1)-N)-(2\nu+2-N)=\nu(q-1)-2>0$ since $\nu\geq 2$ and $q>2$, \item $2\nu-4-2\bar{k}<\nu(q-1)-2$ since $\bar{k}>\frac{\nu(3-q)-2}{2}$.\\ \end{enumerate} \noindent {\textbf{Step 1. Properties of $f^{(2\nu+2-N)}$}}\\ \noindent We first note that it follows from $(i)$-$(iii)$ that \[\tilde{a}, \tilde{c}, \tilde{d}>0.\] By this and $(iv)$, we can see that \begin{equation}\label{high f deriv at infty} f^{(2\nu+2-N)}(t)\to-\infty \text{ as } t\to+\infty. \end{equation} At this stage, we must break the proof into three cases depending on the sign of $2\nu-4-2\bar{k}$. \\ \underline{\textit{Case 1.
$2\nu-4-2\bar{k}<0$}:} We first notice that since $\tilde{a}, \tilde{c}>0$, it follows from $(iii)$ and the assumption $2\nu-4-2\bar{k}<0$ that \begin{equation}\label{high f deriv at zero case 1} f^{(2\nu+2-N)}(t)\to+\infty \text{ as } t\to 0. \end{equation} We now compute one further derivative, namely \begin{align*} f^{(2\nu+3-N)}(t)=(2\nu-4-2\bar{k})\tilde{c}t^{2\nu-5-2\bar{k}}-[\nu(q-1)-2]\tilde{d}t^{\nu(q-1)-3}. \end{align*} We can see that $f^{(2\nu+3-N)}(t)=0$ if and only if \[t^{\nu(q-3)+2+2\bar{k}}=\frac{(2\nu-4-2\bar{k})\tilde{c}}{[\nu(q-1)-2]\tilde{d}}.\] Using $(iii)$, the assumption $2\nu-4-2\bar{k}<0$, and the fact that $\tilde{c}$, $\tilde{d}>0$, we can see that \[\frac{(2\nu-4-2\bar{k})\tilde{c}}{[\nu(q-1)-2]\tilde{d}}<0,\] and so, since $t\geq0$, it follows that there is no such $t$. Namely, $f^{(2\nu+2-N)}$ has no positive critical points. By this, \eqref{high f deriv at infty} and \eqref{high f deriv at zero case 1}, we have shown that $f^{(2\nu+2-N)}$ is strictly decreasing and there exists $t_{2\nu+2-N}>0$ such that $f^{(2\nu+2-N)}(t_{2\nu+2-N})=0$ and $f^{(2\nu+2-N)}(t)(t_{2\nu+2-N}-t)>0$ for $t\neq t_{2\nu+2-N}$.\\ \underline{\textit{Case 2. $2\nu-4-2\bar{k}=0$}:} We first note that by $(iii)$ and the assumption $2\nu-4-2\bar{k}=0$, it follows that \begin{equation}\label{high f deriv at zero case 2} f^{(2\nu+2-N)}(0)=\tilde{a}+\tilde{c} >0. \end{equation} Moreover, computing one further derivative in this case, we obtain \[f^{(2\nu+3-N)}(t)=-[\nu(q-1)-2]\tilde{d}t^{\nu(q-1)-3}.\] If $\nu(q-1)-3\leq0$, it follows that $f^{(2\nu+3-N)}(t)<0$. On the other hand, if $\nu(q-1)-3>0$, it follows that $f^{(2\nu+3-N)}(t)=0$ if and only if $t=0$.
Putting this together with \eqref{high f deriv at infty} and \eqref{high f deriv at zero case 2}, we have once again shown that $f^{(2\nu+2-N)}$ is strictly decreasing and there exists $t_{2\nu+2-N}>0$ such that $f^{(2\nu+2-N)}(t_{2\nu+2-N})=0$ and $f^{(2\nu+2-N)}(t)(t_{2\nu+2-N}-t)>0$ for $t\neq t_{2\nu+2-N}$.\\ \underline{\textit{Case 3. $2\nu-4-2\bar{k}>0$}:} We first notice that since $\tilde{a}$, $\tilde{c}$, $\tilde{d}>0$, by $(iv)$ and the assumption $2\nu-4-2\bar{k}>0$, it follows that \begin{equation}\label{high f deriv at zero case 3} f^{(2\nu+2-N)}(0)=\tilde{a} >0. \end{equation} Once again computing one further derivative, namely \begin{align*} f^{(2\nu+3-N)}(t)=(2\nu-4-2\bar{k})\tilde{c}t^{2\nu-5-2\bar{k}}-[\nu(q-1)-2]\tilde{d}t^{\nu(q-1)-3}, \end{align*} we notice that $f^{(2\nu+3-N)}(t)=0$ if and only if \[t^{\nu(q-3)+2+2\bar{k}}=\frac{(2\nu-4-2\bar{k})\tilde{c}}{[\nu(q-1)-2]\tilde{d}}.\] Using $(iii)$, the assumption $2\nu-4-2\bar{k}>0$, and the fact that $\tilde{c}$, $\tilde{d}>0$, we can see that \[\frac{(2\nu-4-2\bar{k})\tilde{c}}{[\nu(q-1)-2]\tilde{d}}>0,\] and so, since $t\geq0$, it follows that there is exactly one such $t$, call it $t_{2\nu+3-N}>0$. This and \eqref{high f deriv at infty} imply that $f^{(2\nu+2-N)}$ decreases for $t>t_{2\nu+3-N}$. So, from this, \eqref{high f deriv at infty} and \eqref{high f deriv at zero case 3}, we can see that there exists $t_{2\nu+2-N}>t_{2\nu+3-N}$ such that $f^{(2\nu+2-N)}(t_{2\nu+2-N})=0$ and $f^{(2\nu+2-N)}(t)(t_{2\nu+2-N}-t)>0$ for $t\neq t_{2\nu+2-N}$.\\ Therefore, taken together, in each case, we have shown the existence of $t_{2\nu+2-N}>0$ such that \[f^{(2\nu+2-N)}(t_{2\nu+2-N})=0,\] and \begin{equation}\label{high f deriv increase decrease} f^{(2\nu+2-N)}(t)(t_{2\nu+2-N}-t)>0, \text{ for } t\neq t_{2\nu+2-N}. \end{equation}\\ \noindent {\textbf{Step 2.
Properties of $f^{(2\nu+1-N)}$}}\\ For ease, we rewrite $f^{(2\nu+1-N)}$ as \begin{align*} f^{(2\nu+1-N)}(t)&=\tilde{a}t+\frac{\tilde{c}}{[(4\nu-N-2-2\bar{k})-(2\nu+1-N)]}t^{(4\nu-N-2-2\bar{k})-(2\nu+1-N)}\\ &\qquad\qquad-\frac{\tilde{d}}{[(\nu(q+1)-N)-(2\nu+1-N)]}t^{(\nu(q+1)-N)-(2\nu+1-N)} \\ &=:\tilde{a}t+\tilde{\tilde{c}}t^{2\nu-3-2\bar{k}}-\tilde{\tilde{d}}t^{\nu(q-1)-1}. \end{align*} We can deduce that \begin{align}\label{implications of initial obs} (i)-(iii)& \implies \tilde{a}, \tilde{\tilde{c}}, \tilde{\tilde{d}}>0, \nonumber\\ (ii) &\implies 2\nu-3-2\bar{k}>0,\nonumber\\ (iii)&\implies \nu(q-1)-1>1,\\ (iv) &\implies \nu(q-1)-1>2\nu-3-2\bar{k}.\nonumber \end{align} Thus, it holds that \[f^{(2\nu+1-N)}(0)=0,\] and since $f^{(2\nu+1-N)}$ is increasing for $t<t_{2\nu+2-N}$ by \eqref{high f deriv increase decrease}, it follows that $f^{(2\nu+1-N)}$ takes positive values at least for $t\in(0,t_{2\nu+2-N})$. Using \eqref{implications of initial obs}, we also notice that \[f^{(2\nu+1-N)}(t)\to-\infty \text{ as } t\to +\infty.\] Moreover, $f^{(2\nu+1-N)}$ is decreasing for $t>t_{2\nu+2-N}$ by \eqref{high f deriv increase decrease}. Thus, putting everything together, we have shown that there exists $t_{2\nu+1-N}>t_{2\nu+2-N}$ such that \[f^{(2\nu+1-N)}(t_{2\nu+1-N})=0,\] and \begin{equation*} f^{(2\nu+1-N)}(t)(t_{2\nu+1-N}-t)>0, \text{ for } t\neq t_{2\nu+1-N}. \end{equation*}\\ \noindent {\textbf{Step 3.
Properties of $f^{(\kappa)}$, $\kappa=1,\dots, (2\nu-N)$}}\\ At this point, we notice that each of the $\kappa=1,\dots, (2\nu-N)$ derivatives of $f$ can be written as \[f^{(\kappa)}(t)= A_{\kappa} t^{\alpha_{\kappa}} + B_{\kappa} t^{\beta_{\kappa}} + C_{\kappa} t^{\gamma_{\kappa}} - D_{\kappa} t^{\delta_{\kappa}} ,\quad A_{\kappa}, B_{\kappa}, C_{\kappa}, D_{\kappa}, \alpha_{\kappa}, \beta_{\kappa}, \gamma_{\kappa}, \delta_{\kappa}\in{\mathbb R}.\] For each fixed $\kappa=1,\dots, (2\nu-N)$, we can deduce that \begin{align*} (i)-(iii)&\implies A_{\kappa}, B_{\kappa}, C_{\kappa}, D_{\kappa}, \alpha_{\kappa}, \gamma_{\kappa}, \delta_{\kappa}>0 \text{ and } \beta_{\kappa}\geq 0,\\ (iii)&\implies \alpha_{\kappa}<\delta_{\kappa} \text{ and } \beta_{\kappa}<\delta_{\kappa}, \\ (iv) &\implies \gamma_{\kappa}<\delta_{\kappa}. \end{align*}\\ \noindent {\textbf{Conclusion.}} Using the properties of the $\kappa=1,\dots, (2\nu-N)$ derivatives of $f$ that we deduced in Step $3$, we can iterate the arguments of Step $2$ until we obtain that there exists $t_1>t_2>\dots>t_{2\nu+1-N}>t_{2\nu+2-N}>0$ such that $f'(t_1)=0$, namely $t_1$ is a critical point of $f$, and $f'(t)(t_1-t)>0$ for $t\neq t_1$, namely $t_1$ is unique. Finally, $t_1$ corresponds to a maximum of $f$ since $f(0)\geq 0$ and $f(t)\to-\infty$ as $t\to +\infty$. This concludes the proof for $a,b,c,d>0$. If $c=0$, we notice that Step $1$ can be argued exactly as in Case $2$ above. The rest of the proof follows similarly. \end{proof} \end{comment} \section*{Data availability statement} On behalf of all authors, the corresponding author states that there are no data associated with this manuscript. \section*{Conflicts of interest/Competing interests} On behalf of all authors, the corresponding author states that there is no conflict of interest. \renewcommand\refname{References} \end{document}
\begin{document} \title{Tur\'an $H$-densities for 3-graphs} \begin{abstract} Given an $r$-graph $H$ on $h$ vertices, and a family $\mathcal{F}$ of forbidden subgraphs, we define $\ex_{H}(n, \mathcal{F})$ to be the maximum number of induced copies of $H$ in an $\mathcal{F}$-free $r$-graph on $n$ vertices. Then the \emph{Tur\'an $H$-density} of $\mathcal{F}$ is the limit \[\pi_{H}(\mathcal{F})= \lim_{n\rightarrow \infty}\ex_{H}(n, \mathcal{F})/\binom{n}{h}. \] This generalises the notions of \emph{Tur\'an density} (when $H$ is an $r$-edge), and \emph{inducibility} (when $\mathcal{F}$ is empty). Although problems of this kind have received some attention, very few results are known. We use Razborov's semi-definite method to investigate Tur\'an $H$-densities for $3$-graphs. In particular, we show that \[\pi_{K_4^-}(K_4) = 16/27,\] with Tur\'an's construction being optimal. We prove a result in a similar flavour for $K_5$ and make a general conjecture on the value of $\pi_{K_t^-}(K_t)$. We also establish that \[\pi_{4.2}(\emptyset)=3/4,\] where $4.2$ denotes the $3$-graph on $4$ vertices with exactly $2$ edges. The lower bound in this case comes from a random geometric construction strikingly different from previous known extremal examples in $3$-graph theory. We give a number of other results and conjectures for $3$-graphs, and in addition consider the inducibility of certain directed graphs. Let $\vec{S}_k$ be the \emph{out-star} on $k$ vertices; i.e{.} the star on $k$ vertices with all $k-1$ edges oriented away from the centre. We show that \[\pi_{\vec{S}_3}(\emptyset)=2\sqrt3-3,\] with an iterated blow-up construction being extremal. This is related to a conjecture of Mubayi and R\"odl on the Tur\'an density of the 3-graph $C_5$. We also determine $\pi_{\vec{S}_k}(\emptyset)$ when $k=4$, and conjecture its value for general $k$. 
\end{abstract} \section{Introduction} \subsection{Basic notation and definitions} Given $n \in \mathbb{N}$, write $[n]$ for the integer interval $\{1,2, \dots, n\}$. Let $r \in \mathbb{N}$. An \emph{$r$-graph} or \emph{$r$-uniform hypergraph} $G$ is a pair $G=(V,E)$, where $V=V(G)$ is a set of \emph{vertices} and $E=E(G) \subseteq V^{(r)}=\{A \subseteq V: \ \vert A \vert = r\}$ is a set of \emph{$r$-edges}. We shall often write $x_1x_2\cdots x_r$ as a short-hand for the $r$-edge $\{x_1,x_2, \dots, x_r\}$. Given a family of $r$-graphs $\mathcal{F}$, we say that $G$ is $\mathcal{F}$-free if it contains no member of $\mathcal{F}$ as a subgraph. A classical aim of extremal hypergraph theory is to determine the maximum number of $r$-edges that an $\mathcal{F}$-free $r$-graph on $n$ vertices may contain. We call the corresponding function of $n$ the \emph{Tur\'an number} of $\mathcal{F}$, and denote it by \[\ex(n, \mathcal{F}) = \max\left\{ \size{E(G)} \ : \text{ $G$ is $\mathcal{F}$-free}, \ \size{V(G)}=n\right\}.\] In this paper we shall be concerned with the following generalisation of the Tur\'an number. Given an $r$-graph $H$ on $h$ vertices, and an $r$-graph $G$ on $n \geq h$ vertices, let $e_H(G)$ denote the number of $h$-sets from $V(G)$ that induce a copy of $H$ in $G$. (So for example if $H$ is an $r$-edge, then $e_H(G)$ counts the number of edges in $G$.) Then, given a family of forbidden $r$-graphs $\mathcal{F}$, we define the \emph{Tur\'an $H$-number} of $\mathcal{F}$, denoted $\ex_{H}(n, \mathcal{F})$, to be the maximum number of induced copies of $H$ that an $\mathcal{F}$-free $r$-graph on $n$ vertices may contain: \[\ex_{H}(n, \mathcal{F}) = \max\left\{e_H(G): \text{ $G$ is $\mathcal{F}$-free}, \ \size{V(G)}=n\right\}.\] In general, the Tur\'an $H$-number is, like the usual Tur\'an number, hard to determine, and we are interested instead in the asymptotic proportion of $h$-vertex subsets that induce a copy of $H$. The following is well-known.
\begin{proposition}\label{averaging} Let $\mathcal{F}$ be a family of $r$-graphs and let $H$ be an $r$-graph on $h$ vertices. Then the limit \[\pi_{H}(\mathcal{F})= \lim_{n \rightarrow \infty} \ex_{H}(n, \mathcal{F})/\binom{n}{h} \] exists. \end{proposition} \begin{proof} Let $G$ be an $\mathcal{F}$-free $r$-graph on $n+1\geq h$ vertices. Each induced copy of $H$ in $G$ survives in exactly $n+1-h$ of the $n+1$ induced subgraphs $G-v$, each of which is again $\mathcal{F}$-free, so \[(n+1-h)\,e_H(G)=\sum_{v\in V(G)}e_H(G-v)\leq (n+1)\ex_{H}(n, \mathcal{F}).\] Since $\binom{n+1}{h}\tfrac{n+1-h}{n+1}=\binom{n}{h}$, averaging over $n$-vertex subsets in this way yields \[\ex_{H}(n+1, \mathcal{F})/\binom{n+1}{h}\leq \ex_{H}(n, \mathcal{F})/\binom{n}{h}.\] Thus the sequence $\ex_H(n, \mathcal{F})/\binom{n}{h}$ is nonincreasing, and because it is bounded below (e.g{.} by $0$), it is convergent. \end{proof} We call $\pi_{H}(\mathcal{F})$ the \emph{Tur\'an $H$-density} of $\mathcal{F}$. In the case where $H$ is the $r$-graph on $r$ vertices with a single edge, we recover the classical \emph{Tur\'an density}, $\pi(\mathcal{F})$. It is easy to see that Proposition~\ref{averaging} and the definitions of $\ex_H(n, \mathcal{F})$ and $\pi_{H}(\mathcal{F})$ when $H$ and $\mathcal{F}$ consist of $r$-graphs could just as well have been made in the setting of directed $r$-graphs. We let our definitions carry over \emph{mutatis mutandis}. In this paper, we shall mainly investigate 3-graphs, although we shall make a digression into directed $2$-graphs in Section~3. \subsection{Previous work on inducibility} When $\mathcal{F}=\emptyset$, $\pi_{H}(\emptyset)$ is known as the \emph{inducibility} of $H$. The inducibility of 2-graphs was first investigated by Pippenger and Golumbic~\cite{PG75} and later by Exoo~\cite{E86}. Motivated by certain questions in Ramsey Theory, Exoo proved some general bounds on $\pi_{H}(\emptyset)$ as well as giving some constructions for small $H$ with $\size{V(H)} \leq 4$. Bollob\'as, Nara and Tachibana~\cite{BNT86} then proved that $\pi_{K_{t,t}}(\emptyset)=(2t)!/2^t{(t!)}^2$, where $K_{t,t}$ is the balanced complete bipartite graph on $2t$ vertices, $K_{t,t}= ([2t], \{\{ij\}:\ i \leq t < j\})$.
What is more, they determined $\ex_{K_{t,t}}(n, \emptyset)$ exactly, with the optimal construction a balanced complete bipartite graph. More generally, Brown and Sidorenko~\cite{BS94} showed that if $H$ is complete bipartite then the graphs attaining the Tur\'an $H$-number may be chosen to be themselves complete bipartite. Given a graph $H$ and an integer $b\geq 1$, the \emph{(balanced) $b$-blow-up} of $H$, denoted $H(b)$, is the graph on $b \size{V(H)}$ vertices obtained by taking for every vertex $x \in V(H)$ a set of $b$ vertices $x_1,x_2, \dots, x_b$ and putting an edge between $x_i$ and $y_j$ if and only if $xy \in E(H)$. Bollob\'as, Egawa, Harris and Jin~\cite{BEHJ95} proved that for all $t \in \mathbb{N}$ and all $b$ sufficiently large, the Tur\'an $K_t(b)$-number $\ex_{K_t(b)}(n, \emptyset)$ is attained by balanced blow-ups of $K_t$. This was recently generalised in an asymptotic sense by Hatami, Hirst and Norine~\cite{HHN11} who proved that for any graph $H$ and for all $b$ sufficiently large, the Tur\'an $H(b)$-density is given by considering the `limit' of balanced blow-ups of $H$. Their proof relied on the use of weighted graphs. Finally, several $H$-density results for small $H$ were obtained this year by Grzesik~\cite{G11}, Hatami, Hladk\'y, Kr\'al, Norine and Razborov~\cite{HHKNR11a}, Hirst~\cite{H11} and Sperfeld~\cite{S11}, all using the semi-definite method of Razborov~\cite{R07}. Grzesik~\cite{G11}, and independently Hatami, Hladk\'y, Kr\'al, Norine and Razborov~\cite{HHKNR11a}, proved an old conjecture of Erd\H os~\cite{E84} that the number of (induced) copies of the $5$-cycle $C_5=([5],\{12,23,34,45,51\})$ in a triangle-free graph on $n$ vertices is at most $(n/5)^5$. This bound is attained by a balanced blow-up of $C_5$, thus establishing that \[\pi_{C_5}(K_3)=24/625.\] To describe the other two sets of results, we need to make some more definitions. 
Let \[K_{1,1,2}=([4], \{12,13,14, 23,24\}),\ \text{paw}=([4], \{12,23, 31, 14\})\] and \[\vec{C}_3=([3], \{\vec{12}, \vec{23}, \vec{31}\}), \ \vec{K}_2\sqcup E_1= ([3], \{\vec{12}\}).\] Then Hirst showed that \[\pi_{K_{1,1,2}}(\emptyset)=72/125, \ \pi_{\text{paw}}(\emptyset) =3/8,\] with extremal configurations a balanced blow-up of $K_5$ and the complement of a balanced blow-up of $([4],\{12,34\})$ respectively. Sperfeld proved \[\pi_{\vec{C}_3}(\emptyset)=1/4, \ \pi_{\vec{K}_2\sqcup E_1}(\emptyset)=3/4,\] with extremal configurations a random tournament on $n$ vertices and the disjoint union of two tournaments on $n/2$ vertices respectively. \subsection{Flag algebras and Flagmatic} \label{flagmaticsec} Similarly to the works cited above~\cite{G11, HHKNR11a, H11, S11}, the upper bounds on Tur\'an $H$-densities we present in this paper have been obtained using the semi-definite method of Razborov~\cite{R07}. A by-product of the theory of flag algebras, the semi-definite method gives us a systematic way of proving linear inequalities between subgraph densities. It has recently been used in a variety of contexts and has yielded many new results and improved bounds. (See e.g{.} \cite{BT10, BT11, FRV11, G11, HHKNR11a, HHKNR11b, H11, KMS11, R10, R11, S11}.) While it is clearly a powerful and useful tool in extremal combinatorics, the semi-definite method requires its users to overcome two barriers. First of all, a presentation of the method is usually given in the language of flag algebras, quantum graphs or graphons, which, while not impenetrable, is certainly forbidding at first. Second, the method involves numerous small computations, the enumeration of large graph families and optimisation of the entries of large positive semi-definite matrices; none of which can practically be done by hand. The assistance of a computer program is therefore necessary to use the semi-definite method in any nontrivial fashion.
In our earlier paper~\cite{FRV11}, we sought to remove these two obstacles by giving an elementary presentation of the semi-definite method from the point of view of extremal combinatorics, stripping it away from the more general framework of flag algebras, and by releasing `Flagmatic', an open-source implementation of Razborov's semi-definite method. Additionally, in an effort to avoid having large matrices and lists of graphs cluttering the main body of the paper, we have used Flagmatic to produce certificates of our results. These certificates, along with Flagmatic, can be downloaded from our website: \begin{quote} \url{http://www.maths.qmul.ac.uk/~ev/flagmatic/} \end{quote} The certificates are also given in the ancillary files section of our arXiv submission; they are in a straightforward, human-readable format, which is documented in our previous paper \cite{FRV11}. The website also contains an independent checker program called \verb|inspect_certificate.py|, which can be used to examine the certificates and help verify our proofs. We shall not repeat here our introduction to the semi-definite method, nor our discussion of certificates and checker programs, but refer the reader back to~\cite{FRV11} for details and use Flagmatic as a `black box' for the remainder of this paper. Finally, let us note that some information on extremal constructions can sometimes be extracted from proofs via the semi-definite method. We address this, and in particular the issue of stability, in a forthcoming paper~\cite{FRV12}. \subsection{Contents and structure of the paper} Let us define formally the $3$-graphs that we study in this paper. First of all, we have the complete 3-graph on $4$ vertices, $K_4$, also known as the \emph{tetrahedron}. We shall also be interested in $K_4^-$, the unique (up to isomorphism) $3$-graph on $4$ vertices with exactly $3$ edges, and in the \emph{(strong) $5$-cycle}, $C_5=([5], \{123,234,345,451,512\})$.
Let also $K_t$ denote the complete $3$-graph on $t$ vertices and $K_t^-$ the $3$-graph obtained from $K_t$ by deleting a $3$-edge, and let $H_6$ be the $3$-graph obtained from $C_5$ by adding a new vertex labelled `$6$' to the vertex set and adding the following five edges: $136,356, 526,246,416$. A $3$-graph is said to have \emph{independent neighbourhoods} if for any pair of distinct vertices $x,y$, the joint neighbourhood of $x,y$, \[\Gamma_{xy}=\{z : \text{ $xyz$ is an edge}\}\] is an independent set. Having independent neighbourhoods is easily seen to be equivalent to not containing the graph $F_{3,2}=([5], \{123,124,125,345\})$ as a subgraph. Finally, following the notation used by Flagmatic, we write $m.k$ for the collection of all $3$-graphs on $m$ vertices spanning exactly $k$ edges, up to isomorphism. For example, $4.3=\{K_4^-\}$. Our exact results for Tur\'an $H$-densities of $3$-graphs are listed in the following table: \begin{center} \begin{tabular}{l p{6.5cm}} Result & Extremal construction \\ \hline $\pi_{K_4^-}(K_4) = 16/27$ & Tur\'an's construction: balanced blow-up of $([3], \{112,223,331, 123\})$.\\ $\pi_{4.2}(\emptyset) = 3/4$ & Random geometric construction; see Theorem~\ref{4.2}.\\ $\pi_{4.2}(C_5, F_{3,2}) = 9/16$ & Balanced blow-up of $K_4$.\\ $\pi_{4.2}(K_4^-, F_{3,2}) = 5/9$ & Balanced blow-up of $H_6$.\\ $\pi_{4.2}(K_4^-,C_5, F_{3,2}) = 4/9$ & Balanced blow-up of a 3-edge.\\ $\pi_{K_4}(F_{3,2}) = 3/32$ & Balanced blow-up of $K_4$.\\ $\pi_{K_4^-}(F_{3,2}) = 27/64$ & Unbalanced blow-up of $([2], \{112\})$.\\ $\pi_{5.6}(\emptyset) = 20/27$ & Balanced blow-up of the $3$-graph $([3], \{112, 221, 223, 332, 113, 331\})$. \\ $\pi_{5.7}(\emptyset) = 20/27$ & Balanced blow-up of the $3$-graph $([3], \{111, 222, 333, 112, 223, 331, 123\})$. \\ $\pi_{5.9}(\emptyset) = 5/8$ & Balanced complete bipartite 3-graph.\\ \hline \end{tabular} \end{center} In addition, we prove two inducibility results for directed graphs. 
We define the \emph{out-star} of order $k$ to be the directed graph \[\vec{S}_k= ([k], \{\vec{1i}: \ i \in [k]\setminus\{1\}\}). \] We prove that \[\pi_{\vec{S}_3}(\emptyset)= 2\sqrt{3}-3,\] with the extremal construction being an unbalanced blow-up of $\vec{S}_2$, iterated inside the part corresponding to the vertex labelled $2$. (Here `iterated' just means: repeat the construction inside the vertices that were allocated to part $2$ after each iteration of the construction, until you run out of vertices.) Sperfeld \cite{S11} previously gave bounds for this problem. This result is interesting to us for two reasons: first of all, this directed $2$-graph problem has a somewhat close and unexpected relation to the Tur\'an problem of maximising the number of $3$-edges in a $C_5$-free $3$-graph. Second, we believe this is the first `simple' instance for which it can be shown that an iterated blow-up construction is extremal. (We elaborate on this in Section~3.) While it is not directly relevant to $3$-graphs, which are the main focus of this paper, we also determine $\pi_{\vec{S}_4}(\emptyset)$ and make a conjecture regarding the value of $\pi_{\vec{S}_k}(\emptyset)$ for all $k \ge 5$. Our paper is structured as follows: in Section~2 we present our $3$-graph results. Section~2.1 deals with the case where we forbid $K_4$ and other complete graphs, while Sections~2.2, 2.3 and 2.4 are concerned with the cases where we forbid $C_5$, $K_4^-$ and both $C_5$ and $K_4^-$ respectively. In Section~2.5 we consider $3$-graphs with the independent neighbourhood property, and Section~2.6 gathers our results on inducibilities of $3$-graphs, in particular our proof that $\pi_{4.2}(\emptyset)=3/4$. Finally, in Section~3 we move on to consider directed $2$-graphs and discuss the relation between $\pi_{\vec{S}_3}(\emptyset)$ and a conjecture of Mubayi and R\"odl regarding the Tur\'an density of the $3$-graph $C_5$. 
As previously mentioned, the certificates for all the results are available on the Flagmatic website, and in the ancillary files of our arXiv submission. Each certificate has a unique filename, which is given in the following table: \begin{center} {\small \begin{tabular}{p{2.5cm}p{3.1cm}|p{2.5cm}p{2.7cm}} Result & Certificate & Result & Certificate \\ \hline Theorem~\ref{k4-nok4} & \verb|k4max43.js| & Theorem~\ref{4.2} & \verb|max42.js| \\ Proposition~\ref{k4-noc5} & \verb|c5max43.js| & Proposition~\ref{k4-inducibility} & \verb|max43.js| and \verb|41max43.js| \\ Proposition~\ref{4.2noc5} & \verb|c5max42.js| & Theorem~\ref{5.6} & \verb|max56.js| \\ Theorem~\ref{4.2noc5f32} & \verb|c5f32max42.js| & Theorem~\ref{5.7} & \verb|max57.js| \\ Proposition~\ref{4.2nok4-} & \verb|k4-max42.js| & Theorem~\ref{5.9} & \verb|max59.js| \\ Theorem~\ref{4.2nok4-nof32} & \verb|k4-f32max42.js| & Proposition~\ref{f32} & \verb|maxf32.js| \\ Proposition~\ref{4.2nok4-c5} & \verb|k4-c5max42.js| & Proposition~\ref{c5} & \verb|maxc5.js| \\ Theorem~\ref{4.2nok4-c5f32} & \verb|k4-c5f32max42.js| & Theorem~\ref{1213} & \verb|maxs3.js| \\ Theorem~\ref{k4-nof32} & \verb|f32max43.js| & Theorem~\ref{121314} & \verb|maxs4.js| \\ Theorem~\ref{k4nof32} & \verb|f32max44.js| & & \\ Proposition~\ref{4.2nof32} & \verb|f32max42.js| and \verb|f32max41.js| & & \\ \hline \end{tabular} } \end{center} \section{Main results} \subsection{Forbidding $K_4$} The problem of determining the Tur\'an density of the complete $3$-graph on $4$ vertices, $K_4$, has been open for more than sixty years. Tur\'an conjectured that the answer is $5/9$, with the lower bound coming from a balanced blow-up of $([3], \{112,223,331, 123\})$. 
\begin{conjecture}[Tur\'an] \label{turanconj} \[\pi(K_4)=5/9.\] \end{conjecture} Many other non-isomorphic $K_4$-free constructions with asymptotic edge-density $5/9$ have since been found~\cite{B83, FDF88, F08, K82}, so that if Tur\'an's conjecture is true, there is no stable extremal configuration and a proof is likely to be very hard. Razborov observed that Tur\'an's original construction is the only one known in which no $4$-set spans exactly one $3$-edge. Adding in this restriction, he found that he could use the semi-definite method to prove a weaker form of Tur\'an's conjecture: \begin{theorem}[Razborov~\cite{R10}] \[\pi(K_4, \Text{ induced }4.1)=5/9.\] \end{theorem} What is more, Pikhurko~\cite{P11a} showed that Tur\'an's construction is the unique, stable extremal configuration for this problem. We can show that in fact what Tur\'an's construction does is to maximise the $K_4^-$-density in $K_4$-free $3$-graphs; this can be thought of as the most natural weakening of Tur\'an's conjecture. \begin{theorem}\label{k4-nok4} \[\pi_{K_4^-}(K_4)=16/27\] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound is from Tur\'an's construction, a balanced blow-up of $([3], \{112,223,331, 123\})$. \end{proof} In addition, by essentially mimicking Pikhurko's argument, it is possible to show that any $K_4$-free $3$-graph with $K_4^-$-density `close' to $16/27$ is `close' to Tur\'an's construction in the \emph{edit distance}. That is, one can make it into a copy of Tur\'an's construction by changing `few' edges. We address this, and the more general issue of obtaining stability from proofs via the semi-definite method, in a forthcoming note~\cite{FRV12}. Having established that $\pi_{K_4^-}(K_4)=16/27$, can we say anything about $\pi_{K_5^-}(K_5)$? 
In Section~2.6 we give a result, Theorem~\ref{5.9}, that implies $\pi_{K_5^-}(K_5)=5/8$, with the lower bound coming from a complete balanced bipartite $3$-graph. More generally, we believe we know what the value of $\pi_{K_t^-}(K_t)$ should be. Define a sequence $(D_t)_{t\geq 2}$ of degenerate $3$-graphs on $t$ vertices as follows (we avoid the letter $H$ here, since $H_6$ already denotes the extension of $C_5$ defined above). Let \[D_2=\left([2], \{111, 222, 112, 221\}\right),\] and \[D_3=\left([3], \{111, 222, 333, 112, 223, 331\}\right).\] Now for $t\geq 4$, define $D_t$ by adding vertices $t-1$ and $t$ to $D_{t-2}$, together with the edges \[ (t-1)(t-1)(t-1), \ ttt, \ (t-1)(t-1)t, \ (t-1)tt. \] Then let $G_t(n)$ denote the complement of a balanced blow-up of $D_{t-1}$ on $n$ vertices. This construction is due to Keevash and Mubayi, and is well known (see for example Keevash~\cite{Keevash survey}) to be $K_{t}$-free. \begin{conjecture}\label{superconjecture} $G_t(n)$ is the unique (up to isomorphism) $3$-graph with $\ex(n, K_t)$ edges and $\ex_{K_t^-}(n, K_t)$ induced copies of $K_t^-$. \end{conjecture} It is easy to work out that $G_t(n)$ has edge-density \[1-\frac4{(t-1)^2}+o(1),\] and a slightly more involved calculation shows that its $K_t^-$-density is \[\dfrac{t!}{(t-1)^{t-1}} \dfrac{2^{(t-1)/2}}{3}+o(1) \] if $t$ is odd, and \[ \dfrac{t!}{(t-1)^t} \dfrac{(5t-8)}{3} 2^{(t-6)/2}+o(1) \] if $t$ is even. Note that for $t=4$ and $t=5$ this agrees with Theorems~\ref{k4-nok4} and~\ref{5.9} respectively. \subsection{Forbidding $C_5$} Mubayi and R\"odl~\cite{MR02} studied the Tur\'an density problem for $C_5$, and came up with the following ingenious construction. Partition the vertex set into two parts $A$ and $B$ with $\size{A} \approx \sqrt{3} \size{B}$, and add all edges that have two vertices in $A$ and one vertex in $B$, and then iterate inside $B$. This can be described succinctly as an unbalanced blow-up of the (degenerate) $3$-graph $([2], \{112\})$, iterated inside part $2$.
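Both sets of numbers above are easy to check. The $K_t^-$-density formulas evaluate to $16/27$ at $t=4$ and $5/8$ at $t=5$, matching Theorems~\ref{k4-nok4} and~\ref{5.9}, and the asymptotic edge-density of the Mubayi--R\"odl construction just described is $\max_x 3x(1-x)^2/(1-x^3)=2\sqrt{3}-3$, attained at $x=(\sqrt{3}-1)/2\approx 0.366$, where $x$ denotes the proportion of vertices assigned to part $B$ at each stage. The density formula for the construction is our own bookkeeping, not taken from the text:

```python
from fractions import Fraction
from math import factorial, isclose, sqrt

def ktminus_density(t):
    """Asymptotic K_t^- -density of G_t(n), per the displayed formulas."""
    if t % 2 == 1:
        return Fraction(factorial(t), (t - 1) ** (t - 1)) * Fraction(2) ** ((t - 1) // 2) / 3
    return Fraction(factorial(t), (t - 1) ** t) * Fraction(5 * t - 8, 3) * Fraction(2) ** ((t - 6) // 2)

assert ktminus_density(4) == Fraction(16, 27)
assert ktminus_density(5) == Fraction(5, 8)
# edge-density 1 - 4/(t-1)^2 gives 5/9 and 3/4 at t = 4, 5.
assert [1 - Fraction(4, (t - 1) ** 2) for t in (4, 5)] == [Fraction(5, 9), Fraction(3, 4)]

# Mubayi-Rodl construction: x = proportion of vertices in part B at each stage.
h = lambda x: 3 * x * (1 - x) ** 2 / (1 - x ** 3)
x_opt = (sqrt(3) - 1) / 2
assert isclose(h(x_opt), 2 * sqrt(3) - 3, rel_tol=1e-12)
```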
We leave it as an exercise for the reader to verify that this is indeed $C_5$-free. Mubayi and R\"odl conjectured that this construction is best possible, and recent applications of the semi-definite method~\cite{FRV11, R10} have provided strong evidence in that direction. \begin{conjecture}[Mubayi, R\"odl~\cite{MR02}]\label{c5conj} \[\pi(C_5)= 2\sqrt{3}-3.\] \end{conjecture} Observe now that Mubayi and R\"odl's construction avoids $K_4$ as well as $C_5$. If their conjecture is true, then $\pi(C_5)=\pi(C_5, K_4)$. We would thus expect their construction to also maximise the number of copies of $K_4^-$ in a $C_5$-free $3$-graph. This appears to be the case, with a minor caveat: the construction is the right one, but the weights we place on each part need to be adjusted slightly. \begin{proposition}\label{k4-noc5} \[0.423570 < \alpha \leq \pi_{K_4^-}(C_5) < 0.423592,\] where $\alpha$ is the maximum value of \[f(x)=\frac{4x(1-x)^3}{1-x^4},\] in the interval $[0,1]$, which, by solving a cubic equation, can be computed explicitly to be \[\alpha = 4-6\left((\sqrt{2}+1)^{1/3}-(\sqrt{2}-1)^{1/3}\right).\] \end{proposition} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound is from a blow-up of $([2], \{112\})$, with proportion $x$ of the vertices placed inside part $2$, iterated inside part $2$. The function $f(x)$ then calculates exactly the asymptotic density of $K_4^-$ in such a construction. The sign of the derivative of $f$ is determined by the product of a cubic and a linear factor. Performing the required calculus, the maximum of $f$ can then be determined in closed form. \end{proof} Note that the maximum of $f$ occurs at a cubic irrational, and not at a quadratic irrational as happens when we maximise the number of $3$-edges. 
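The closed form for $\alpha$ can be confirmed numerically; the short computation below (our own check, not part of the proof) maximises $f$ by ternary search and compares the result with the cubic-irrational expression:

```python
from math import sqrt

def f(x):
    # K_4^- -density of the construction with proportion x in part 2.
    return 4 * x * (1 - x) ** 3 / (1 - x ** 4)

# ternary search for the maximum of the unimodal f on (0, 1)
lo, hi = 0.0, 0.99
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if f(m1) < f(m2):
        lo = m1
    else:
        hi = m2
x_star = (lo + hi) / 2

alpha = 4 - 6 * ((sqrt(2) + 1) ** (1 / 3) - (sqrt(2) - 1) ** (1 / 3))
assert abs(f(x_star) - alpha) < 1e-9
assert 0.423570 < alpha < 0.423592     # the bounds in the proposition
assert abs(x_star - 0.253077) < 1e-4   # the optimal proportion quoted below
```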
What is more, we place proportion approximately $0.366025$ (i.e{.} a little more than $1/3$) of the vertices inside part $B$ when maximising the edge-density; and this drops down to approximately $0.253077$ (i.e{.} a little more than $1/4$) when maximising the $K_4^-$ density. This is to be expected; in the first case we want an average $3$-set to have about one vertex in part $B$, while in the latter case we want an average $4$-set to have about one vertex in part $B$. We conjecture that the lower bound in Proposition~\ref{k4-noc5} is tight: \begin{conjecture}\label{k4-noc5conj} \[\pi_{K_4^-}(C_5) = 4-6\left((\sqrt{2}+1)^{1/3}-(\sqrt{2}-1)^{1/3}\right).\] \end{conjecture} Given the difference in the proportion of vertices assigned to part $2$ between the case where we are maximising the number of edges and the case where we are maximising the number of copies of $K_4^-$ in a $C_5$-free $3$-graph, one could expect that the way to maximise the number of copies of $4.2$---that is, of $4$-sets spanning exactly $2$ edges---would also be to take a blow-up of $([2], \{112\})$, iterated inside part $2$, with a suitable proportion of vertices (say a little over $1/2$) being assigned to part $2$ at each stage of the iteration. This yields an asymptotic density of only \[\max_{x \in [0,1]} \frac{6x^2(1-x)^2}{1-x^4},\] which is approximately 0.404653. However, it turns out we can do much better using a different construction: \begin{proposition}\label{4.2noc5} \[ 0.571428 < 4/7 \leq \pi_{4.2}(C_5) < 0.583852. \] \end{proposition} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). For the lower bound, consider a balanced blow-up of $K_4$, iterated inside each part. 
\end{proof} We believe that the lower bound in Proposition~\ref{4.2noc5} is tight: \begin{conjecture}\label{4.2noc5conj} \[\pi_{4.2}(C_5)=4/7.\] \end{conjecture} While the upper bound we can obtain is still some way off $4/7$, the following exact result gives us rather more confidence about Conjecture~\ref{4.2noc5conj}: \begin{theorem}\label{4.2noc5f32} \[\pi_{4.2}(C_5, F_{3,2})=9/16.\] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). For a lower bound construction, take a balanced blow-up of $K_4$. \end{proof} In a sense Theorem~\ref{4.2noc5f32} tells us that if we do not allow ourselves to use iterated blow-up constructions, then a blow-up of $K_4$ is the best we can do. This trick of forbidding $F_{3,2}$ when we think an iterated construction is best, but cannot close the gap using the semi-definite method, is often helpful, and we shall use it frequently in this paper. As discussed in detail in~\cite{FRV11}, there are heuristic reasons why one would not expect problems that admit iterated blow-up structures as extremal examples to be easily tackled using the semi-definite method; in many cases it is thus sensible to first study extremal problems in the context of $3$-graphs with independent neighbourhoods. \subsection{Forbidding $K_4^-$} The $3$-graph on four vertices with three edges, $K_4^-$, is the smallest $3$-graph with non-trivial Tur\'an density, both in terms of the number of vertices and the number of edges. Disproving an earlier conjecture of Tur\'an, Frankl and F\"uredi~\cite{FF84} showed that $\pi(K_4^-)\geq 2/7$ by considering a balanced blow-up of $H_6$, iterated inside each of its $6$ parts. 
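Several of the iterated lower bounds in this paper follow a single pattern, which is worth recording (the bookkeeping here is ours, not the paper's). In a blow-up of a simple $3$-graph on $k$ vertices, every edge uses three distinct parts, so the count of an $m$-vertex pattern is determined at the top level except on $m$-sets lying entirely inside one part, which occur with probability $k\cdot(1/k)^m=k^{1-m}$ in the balanced case. Iterating inside every part therefore boosts a flat blow-up density $d$ to $d/(1-k^{1-m})$; in particular the flat blow-up of $H_6$ has edge-density $10\cdot 3!/6^3=5/18$, and the identity recovers the $2/7$ just quoted:

```python
from fractions import Fraction as F

def iterate_density(d, k, m):
    # density after iterating inside all k parts; an m-set recurses
    # (lands in a single part) with probability k ** (1 - m)
    return d / (1 - F(1, k ** (m - 1)))

# (flat blow-up density, number of parts, pattern size, iterated density)
checks = [
    (F(5, 18),    6, 3, F(2, 7)),    # edges of H_6: pi(K_4^-) lower bound
    (F(9, 16),    4, 4, F(4, 7)),    # 4.2 in a blow-up of K_4
    (F(5, 9),     6, 4, F(24, 43)),  # 4.2 in a blow-up of H_6
    (F(4, 9),     3, 4, F(6, 13)),   # 4.2 in a blow-up of a 3-edge
    (F(120, 343), 7, 4, F(20, 57)),  # 4.2 in a blow-up of H_7
]
for d, k, m, expected in checks:
    assert iterate_density(d, k, m) == expected
```

The last three entries are the iterated bounds appearing in the remainder of this section and the next.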
Using his semi-definite method, Razborov~\cite{R10} proved upper bounds for $\pi(K_4^-)$ quite close to this value (and small improvements were subsequently given in \cite{BT10} and \cite{FRV11}), leading to the natural conjecture that the construction of Frankl and F\"uredi is in fact best possible: \begin{conjecture}[Frankl-F\"uredi, Razborov] \label{k4-conj} \[\pi(K_4^-)=2/7.\] \end{conjecture} Should the conjecture be true, one would expect that an iterated blow-up of $H_6$ also maximises the number of induced copies of $4.2$. As in the previous subsection, the semi-definite method is not quite able to close the gap; again we refer the reader to~\cite{FRV11} for a discussion of why iterated blow-up constructions might be `hard' for the method. \begin{proposition}\label{4.2nok4-} \[0.558139 < 24/43 \leq \pi_{4.2}(K_4^-) < 0.558378\] \end{proposition} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound is from a balanced iterated blow-up of $H_6$. \end{proof} We believe that the lower bound is tight: \begin{conjecture}\label{4.2nok4-conj} \[\pi_{4.2}(K_4^-)=24/43.\] \end{conjecture} As before, restricting the setting to that of $3$-graphs with independent neighbourhoods helps quite a lot, both for the original Tur\'an problem and for the Tur\'an $4.2$-density problem. In \cite{FRV11} it was proved that $\pi(K_4^-, F_{3,2})=5/18$. The extremal construction, a balanced blow-up of $H_6$, is also extremal for the following problem. \begin{theorem}\label{4.2nok4-nof32} \[ \pi_{4.2}(K_4^-,F_{3,2})=5/9. \] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound is from a balanced blow-up of $H_6$. \end{proof} \subsection{Forbidding $K_4^-$ and $C_5$} In~\cite{FRV11}, we considered the problem of forbidding both $K_4^-$ and $C_5$. 
We have a lower bound of $\pi(K_4^-,C_5) \geq 1/4$ by considering a balanced blow-up of a $3$-edge, with the construction iterated inside each of the $3$ parts; and we gave an upper bound of $\pi(K_4^-, C_5) < 0.251073$ using the semi-definite method, leading us to conjecture that the lower bound is tight: \begin{conjecture}[\cite{FRV11}]\label{nok4-c5conj} \[\pi(K_4^-,C_5)=1/4.\] \end{conjecture} Another construction yielding the same lower bound is as follows: let $H_7$ be the $6$-regular 3-graph on $7$ vertices \[H_7=([7], \{124,137,156,235,267,346,457,653,647,621,542,517,431,327\}).\] This can be thought of as the unique (up to isomorphism) $3$-graph $G$ on $7$ vertices such that for every vertex $x \in V(G)$, the link-graph $G_{x}=(V(G)\setminus\{x\}, \{yz: \ xyz \in E(G)\})$ is the $6$-cycle. Alternatively, $H_7$ can be obtained as the union of two edge-disjoint copies of the Fano plane on the same vertex set \begin{align*} F_1&=([7], \{124,137,156,235,267,346,457\}) \textrm{ and }\\ F_2 &=([7], \{653,647,621,542,517,431,327\}), \end{align*} as depicted in Figure~\ref{furedifano}. (This elegant perspective is due to F\"uredi.) \begin{figure} \caption{F\"uredi's double Fano construction.} \label{furedifano} \end{figure} It is an easy exercise to check that a balanced blow-up of $H_7$ with the construction iterated inside each of the $7$ parts is both $C_5$-free and $K_4^-$-free. (Alternatively, see~\cite{FRV11} for details.) This also gives us a lower bound of $1/4$ on $\pi(K_4^-, C_5)$. When we require independent neighbourhoods, iterated blow-ups are prohibited, and it turns out that a non-iterated blow-up of $H_7$ does better than a blow-up of a $3$-edge (which gives edge-density $2/9$): \begin{theorem}[\cite{FRV11}]\label{nok4-c5f32} \[\pi(K_4^-, C_5,F_{3,2})=12/49,\] with the lower bound attained by a balanced blow-up of $H_7$. \end{theorem} Let us now turn to the problem of maximising the number of copies of $4.2$ in a $(C_5, K_4^-)$-free $3$-graph. 
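Before doing so, we note that the stated properties of $H_7$ (that it is $6$-regular, that it is the union of the two edge-disjoint Fano planes $F_1$ and $F_2$, and that every link-graph is a $6$-cycle) can be confirmed mechanically; the following check is ours and is not a substitute for the exercise above:

```python
from itertools import combinations

F1 = [{1,2,4}, {1,3,7}, {1,5,6}, {2,3,5}, {2,6,7}, {3,4,6}, {4,5,7}]
F2 = [{6,5,3}, {6,4,7}, {6,2,1}, {5,4,2}, {5,1,7}, {4,3,1}, {3,2,7}]
H7 = F1 + F2

# 6-regular: every vertex lies in exactly 6 of the 14 edges
assert all(sum(v in e for e in H7) == 6 for v in range(1, 8))

# F_1 and F_2 are edge-disjoint Fano planes: every pair of vertices
# lies in exactly one triple of each
assert not any(e in F2 for e in F1)
for plane in (F1, F2):
    for pair in combinations(range(1, 8), 2):
        assert sum(set(pair) <= e for e in plane) == 1

# every link-graph is a single 6-cycle: 2-regular and connected
for v in range(1, 8):
    link = [e - {v} for e in H7 if v in e]
    verts = set(range(1, 8)) - {v}
    assert all(sum(u in p for p in link) == 2 for u in verts)
    seen, stack = set(), [next(iter(verts))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack += [w for p in link if u in p for w in p if w != u]
    assert seen == verts
```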
As we are forbidding $K_4^-$ (which is the same as forbidding $4.3$), one might expect the problem of maximising the density of $4$-sets spanning $2$ edges to be essentially equivalent to the problem of maximising the number of edges. However, the extremal behaviour of the two problems is different. An iterated blow-up of $H_7$ yields a lower bound of $20/57$ ($\approx 0.350877$) for $\pi_{4.2}(K_4^-,C_5)$, but an iterated blow-up of a $3$-edge does much better: \begin{proposition}\label{4.2nok4-c5} \[0.461538 < 6/13\leq \pi_{4.2}(K_4^-,C_5) < 0.461645.\] \end{proposition} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound is from a balanced iterated blow-up of a $3$-edge. \end{proof} We make the inevitable conjecture that the lower bound in Proposition~\ref{4.2nok4-c5} is tight: \begin{conjecture}\label{4.2nok4-c5conj} \[\pi_{4.2}(K_4^-,C_5)=6/13.\] \end{conjecture} Besides the relative proximity of the upper and lower bounds in Proposition~\ref{4.2nok4-c5}, further motivation for Conjecture~\ref{4.2nok4-c5conj} can be found in the following exact result. \begin{theorem}\label{4.2nok4-c5f32} \[ \pi_{4.2}(K_4^-,C_5,F_{3,2})=4/9.\] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound is from a balanced blow-up of a $3$-edge. \end{proof} By contrast, a balanced blow-up of $H_7$ only gives a lower bound of $120/343$. Thus when $\{K_4^-, C_5, F_{3,2}\}$ is forbidden, the construction that maximises the density of the most dense $3$-graph on four vertices that is allowed, is different to the construction that maximises the edge-density. And it is different in a rather strong sense: not only are the constructions not isomorphic, but there is no homomorphism from $H_7$ into (a blow-up of) a $3$-edge. 
Indeed, label the $3$ parts of the blow-up $G$ of a $3$-edge $A$, $B$ and $C$, and suppose $f:\ V(H_7) \rightarrow A\sqcup B\sqcup C$ is a homomorphism. Since $137$ is an edge of $H_7$, it must then be that $1$, $3$ and $7$ are each mapped to different parts $A,B,C$; without loss of generality we may assume that $f(1) \in A$, $f(3) \in B$ and $f(7) \in C$. Since $134$ is also an edge of $H_7$, we must then have $f(4) \in C$. But then $467$ is an edge of $H_7$ with $f(4),f(7)\in C$, and so cannot be mapped by $f$ to an edge of $G$, contradicting our assumption that $f$ is a homomorphism. This structural difference between the problems of maximising the number of $3$-edges and of maximising the number of copies of $4.2$ in a $K_4^-$-free $3$-graph is a somewhat surprising phenomenon. We ask whether this is due solely to the fact that we are forbidding $C_5$ and $F_{3,2}$ on top of $K_4^-$: \begin{question}\label{simultextrem} Let $m$ and $2\leq t \leq \binom{m}{3}$ be integers. Does there exist for every $n \in \mathbb{N}$ an $m.t$-free $3$-graph on $n$ vertices that has both the maximum number of edges and the maximum number of copies of $m.(t-1)$ possible in an $m.t$-free graph? \end{question} Of course this question is most interesting when $t=\binom{m}{3}$; here $m.t$ and $m.(t-1)$ consist of just $K_m$ and $K_m^-$ respectively. In this case we believe the answer to Question~\ref{simultextrem} is `yes', which is, in a weaker form, our Conjecture~\ref{superconjecture} from Section~2.1. \subsection{Independent neighbourhoods} We have now seen several examples of how restricting the setting to $3$-graphs with independent neighbourhoods can render Tur\'an problems significantly more tractable to the semi-definite method; we refer the reader to~\cite{FRV11} for a heuristic discussion of why this might be so. In this subsection, we study Tur\'an $H$-density problems in $F_{3,2}$-free $3$-graphs for their own sake.
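As a warm-up, the arithmetic behind the extremal blow-ups in this subsection is elementary; the following check (ours, not part of any proof) confirms that $F_{3,2}$ itself fails to have independent neighbourhoods, and records the blow-up densities $4/9$ and $27/64$ that appear below:

```python
from fractions import Fraction as F
from math import comb

# F_{3,2}: the joint neighbourhood of the pair {1,2} spans the edge 345,
# so F_{3,2} does not have independent neighbourhoods
edges = [{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {3, 4, 5}]
gamma_12 = {next(iter(e - {1, 2})) for e in edges if {1, 2} <= e}
assert gamma_12 == {3, 4, 5} and {3, 4, 5} in edges

# blow-up of ([2],{112}) with proportion p in part 1: a triple is an edge
# iff it has two vertices in part 1 and one in part 2, and a 4-set induces
# K_4^- iff it has three vertices in part 1 and one in part 2
edge_density = lambda p: 3 * p ** 2 * (1 - p)
k4minus_density = lambda p: comb(4, 3) * p ** 3 * (1 - p)
assert edge_density(F(2, 3)) == F(4, 9)       # the value of pi(F_{3,2})
assert k4minus_density(F(3, 4)) == F(27, 64)  # the value in Theorem k4-nof32
```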
The Tur\'an density problem for $F_{3,2}$ was solved by F\"uredi, Pikhurko and Simonovits: \begin{theorem}[F\"uredi, Pikhurko, Simonovits~\cite{FPS03}] \[\pi(F_{3,2})= 4/9.\] \end{theorem} In fact, they showed rather more: the unique, stable extremal configuration is an unbalanced blow-up of $([2],\{112\})$, with the size of the two parts chosen so as to maximise the number of edges, so that roughly $2/3$ of the vertices are assigned to part $1$ and $1/3$ to part $2$~\cite{FPS05}. Note that this configuration is $K_4$-free. We therefore expect it to maximise the induced density of $K_4^-$ in an $F_{3,2}$-free graph. This does turn out to be the case, with the minor caveat that we need to change the proportion of vertices in each part; we now want a random $4$-set to have exactly three vertices in part $1$ and one in part $2$, rather than a random $3$-set to have two vertices in part $1$ and one in part $2$. \begin{theorem}\label{k4-nof32} \[\pi_{K_4^-}(F_{3,2})=27/64.\] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound is from a blow-up of $([2], \{112\})$, with three quarters of the vertices assigned to part $1$ and the rest to part $2$. \end{proof} As the above construction is $C_5$-free, Theorem~\ref{k4-nof32} also implies \[\pi_{K_4^-}(C_5, F_{3,2})=27/64, \] providing us with an analogue for $K_4^-$ of Theorem~\ref{4.2noc5f32} from Section~2.2. The next $3$-graph whose density in $F_{3,2}$-free $3$-graphs we investigate is $K_4$. Observing that $K_5$ is not $F_{3,2}$-free, one is naturally led to guess that the $K_4$-density is maximised by taking a balanced blow-up of $K_4$. This does indeed turn out to be the case: \begin{theorem}\label{k4nof32} \[\pi_{K_4}(F_{3,2})=3/32.\] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). 
The lower bound is from a balanced blow-up of $K_4$. \end{proof} Thus we are left with two $3$-graphs on $4$ vertices whose density in $F_{3,2}$-free $3$-graphs we would like to maximise. However, we have been unable to obtain sharp results: \begin{proposition}\label{4.2nof32} \[ \begin{array}{rcccl} 4/9 & \leq & \pi_{4.1}(F_{3,2}) & < & 0.514719,\\ 9/16 & \leq & \pi_{4.2}(F_{3,2}) & < & 0.627732. \end{array} \] \end{proposition} \begin{proof} The upper bounds are from flag algebra calculations using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bounds are from balanced blow-ups of $([3], \{112,223,331\})$ and $K_4$ respectively. \end{proof} \subsection{Inducibility} In this subsection, we study $\pi_{H}(\emptyset)$ for small $3$-graphs $H$. The quantity $\pi_{H}(\emptyset)$ is often called the \emph{inducibility} of $H$. Let $\bar G$ denote the \emph{complement} of a 3-graph $G$; that is, the $3$-graph on the same vertex set containing exactly those edges not present in $G$. A $3$-graph $G$ is said to be \emph{self-complementary} if $G$ and $\bar G$ are isomorphic. It is easy to see that the $H$-density of a $3$-graph $G$ is equal to the $\bar H$-density of $\bar G$. Two immediate consequences of this are: \begin{lemma} \label{complementarity} For any $3$-graph $H$, \[\pi_H(\emptyset) = \pi_{\bar H}(\emptyset).\] \end{lemma} \begin{lemma} If $H$ is self-complementary, then either there are at least two extremal constructions, or the extremal construction is itself self-complementary. \end{lemma} We first study $\pi_{H}(\emptyset)$ for the $3$-graphs $H$ with $\vert V(H) \vert = 4$. Clearly we have $\pi_{K_4}(\emptyset)=\pi_{\bar K_4}(\emptyset)=1$, so this leaves us only two values to determine, $\pi_{4.2}(\emptyset)$ and $\pi_{K_4^-}(\emptyset)$ (which by Lemma~\ref{complementarity} is the same as $\pi_{4.1}(\emptyset)$).
\begin{theorem} \label{4.2} \[\pi_{4.2}(\emptyset) = 3/4.\] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound is from the following random geometric construction. First of all, place $n$ vertices on the boundary of the unit disc, spaced at equal intervals. Each pair of vertices $(x,y)$ defines a chord of the unit circle. Consider the division of the unit disc into polygonal regions given by these chords. We independently assign each region a value $0$ or $1$ with equal probability. Then, for each triple of vertices $(x,y,z)$, we add the 3-edge $xyz$ if and only if the sum of the values of the regions contained inside the triangle $xyz$ is odd. This gives us our construction. We shall now prove that with positive probability, at least $3/4$ of the 4-sets of vertices induce the graph $4.2$. Let us begin with two observations. First of all, let $R$ be any nonempty collection of regions. Then the probability that the sum of their values is odd is exactly $1/2$. (So in particular, our construction has $3$-edge density $1/2$.) Second, if $R$ and $R'$ are two disjoint collections of regions, the parity of the sum of the values of the regions in $R$ is independent of the parity of the sum of the values of the regions in $R'$. From now on, let us speak of the parity of a collection of regions as a shorthand for the parity of the sum of the values of the regions it contains. Consider a 4-set of vertices $S=\{a,b,c,d\}$. We may assume without loss of generality that when traversing the unit circle clockwise from $a$, the vertices $b$, $c$ and $d$ are met in that order, as depicted in Figure~\ref{quadrilateral}. So $a,b,c,d$ are the vertices of a convex quadrilateral. Let $e$ be the intersection point of the diagonals $ac$ and $bd$, and let $R_1$, $R_2$, $R_3$ and $R_4$ denote the triangles $abe$, $bce$, $cde$ and $ade$.
\begin{figure} \caption{From the proof of Theorem~\ref{4.2}.} \label{quadrilateral} \end{figure} Now, if zero or four of the $R_i$ have odd parity, then the $4$-set $S=\{a,b,c,d\}$ spans no edges in our construction. If one or three of the $R_i$ have odd parity, then $S$ spans two edges; this happens with probability $1/2$. If two of the $R_i$ have odd parity, there are two cases to consider: either the two $R_i$ with odd parity are adjacent to each other---i.e{.} their boundaries intersect in a nontrivial line segment---in which case $S$ spans two edges, or they are opposite one another, in which case $S$ spans four edges. The former case occurs with probability $1/4$. Therefore the probability that $S$ spans two edges is $3/4$. Since the choice of $S$ was arbitrary, it follows that with positive probability our construction gives a $4.2$-density of at least $3/4$, whence we are done. \end{proof} We note that the lower bound construction is quite different from previously known $3$-graph constructions. Of those that have appeared in the literature, it resembles most the geometric construction of Frankl and F\"uredi~\cite{FF84}, which it in some sense generalises. This construction also features vertices on the unit circle, where $3$-edges are added whenever the corresponding triangle contains the origin in its interior. (Incidentally, this construction has a $4.2$-density of $1/2$.) Let us now consider the inducibility of $K_4^-$. Here by contrast we do not believe we have a good lower bound. We get a similar upper bound if we forbid $4$-sets of vertices from spanning exactly one edge. \begin{proposition}\label{k4-inducibility} \[0.592592 < 16/27 \leq \pi_{K_4^-}(\emptyset) < 0.651912.\] Also, \[16/27 \leq \pi_{K_4^-}(\Text{induced } 4.1) \leq 0.650930.\] \end{proposition} \begin{proof} The upper bounds are from flag algebra calculations using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate).
The lower bound in both cases is from Tur\'an's construction: a balanced blow-up of $([3], \{123,112,223,331\})$. \end{proof} It seems likely that both $\pi_{K_4^-}(\emptyset)$ and $\pi_{K_4^-}(\text{induced } 4.1)$ take values close to $0.65$. Since Tur\'an's construction has no induced copies of $4.1$ and is (by Theorem~\ref{k4-nok4}) a $K_4$-free $3$-graph maximising the $K_4^-$-density, this would indicate that the actual extremal construction(s) for the inducibility of $K_4^-$ have strictly positive $K_4$-density. Turning to $5$-vertex graphs, we are able to obtain a few more exact results. \begin{theorem}\label{5.6} \[\pi_{5.4}(\emptyset)=\pi_{5.6}(\emptyset)= 20/27.\] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound (for $5.6$) is from a balanced blow-up of $([3], \{112, 221, 223, 332, 113, 331\})$. (This is just a balanced tripartition with all $3$-edges meeting a part in exactly two vertices.) \end{proof} \begin{theorem}\label{5.7} \[\pi_{5.3}(\emptyset) =\pi_{5.7}(\emptyset)= 20/27.\] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound (for $5.7$) is from a balanced blow-up of $([3], \{111, 222, 333, 123, 112, 223, 331\})$. (This is just Tur\'an's construction with all three parts made complete.) \end{proof} \begin{theorem}\label{5.9} \[\pi_{5.1}(\emptyset)=\pi_{5.9}(\emptyset)=5/8.\] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound is obtained by taking a complete balanced bipartite $3$-graph. \end{proof} In the forthcoming note~\cite{FRV12}, we prove that the complete balanced bipartite $3$-graph is in fact the stable extremum for the inducibility of $5.9$.
This relates Theorem~\ref{5.9} to a conjecture of Tur\'an on the Tur\'an density of $K_5$, the complete $3$-graph on $5$ vertices. \begin{conjecture}[Tur\'an]\label{k5conj} \[\pi(K_5)=3/4.\] \end{conjecture} One of the constructions attaining the bound is given by taking a balanced complete bipartite $3$-graph. Many other non-isomorphic constructions are known~\cite{S95}. However, what Theorem~\ref{5.9} shows is that the complete bipartite $3$-graph is, out of all of these, the one which maximises the number of induced copies of $K_5^-$, that is, of $5$-sets spanning all but one of the possible $3$-edges. This is a direct analogue of our earlier result, Theorem~\ref{k4-nok4}. We close this section on $3$-graphs by giving upper bounds on the inducibility of two other $3$-graphs on $5$ vertices. \begin{proposition}\label{f32} \[0.349325 < \alpha < \pi_{F_{3,2}}(\emptyset) < 0.349465,\] where $\alpha$ is the maximum of \[\frac{10 x^2 (1-x)^3}{1-x^5}\] in the interval $[0, 1]$. \end{proposition} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound is obtained by taking an unbalanced blow-up of $([2], \{112,222\})$, iterated inside part $1$, where the proportion of vertices assigned to part $1$ at each stage is the value of $x$ at which the maximum is attained.
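Although not part of the formal argument, the maximisation defining $\alpha$ is easy to carry out numerically; a minimal grid-search sketch in Python (all details illustrative):

```python
# Grid search for alpha = max over [0,1] of 10 x^2 (1-x)^3 / (1 - x^5).
# The endpoint x = 1 is a removable singularity, so we stay inside (0, 1).

def f(x):
    return 10 * x**2 * (1 - x)**3 / (1 - x**5)

alpha, xstar = max((f(i / 10**5), i / 10**5) for i in range(1, 10**5))
assert 0.3493 < alpha < 0.34947  # consistent with the bounds in the proposition
```

The maximum is attained near $x\approx 0.4$.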
\end{proof} We believe that the lower bound construction given above is extremal: \begin{conjecture} \[\pi_{F_{3,2}}(\emptyset)=\max_{x \in [0,1]} \frac{10 x^2 (1-x)^3}{1-x^5}.\] \end{conjecture} Finally, we note that the random geometric construction given in Theorem~\ref{4.2}, which is extremal for the inducibility of the self-complementary graph $4.2$, also gives a reasonably good lower bound on the inducibility of the self-complementary graph $C_5$: \begin{proposition} \label{c5} \[ 0.1875=3/16\leq \pi_{C_5}(\emptyset) < 0.198845.\] \end{proposition} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound comes from considering the random geometric construction we introduced in the proof of Theorem~\ref{4.2}. As the vertices are scattered on the unit circle, any five of them define a convex pentagon. Drawing in the diagonals divides this pentagon into $11$ disjoint regions. The result then follows from a rather tedious case analysis. \end{proof} \begin{figure} \caption{From the proof of Proposition~\ref{dto3}}\label{dto3fig} \end{figure} \section{A digression into directed graphs} \subsection{The out-star of order $3$} We define the \emph{out-star} of order $k$ to be the directed graph \[\vec{S}_k= ([k], \{\vec{1i}: \ i \in [k]\setminus\{1\}\}). \] That is, the star with $k-1$ edges oriented away from the centre. In this subsection, we shall be interested in particular in $\vec{S}_3$ and its relation to the $3$-graph $C_5$, the strong cycle on $5$ vertices. Given a directed graph $D$ on $n$ vertices, let us define a $3$-graph $G(D)$ on the same vertex set by setting $xyz$ to be a $3$-edge whenever the $3$-set $\{x,y,z\}$ induces a copy of $\vec{S}_3$ in $D$. \begin{proposition}\label{dto3} $G(D)$ is a $(C_5, K_4)$-free $3$-graph. \end{proposition} \begin{proof} Let us first show that $G(D)$ is $K_4$-free.
Suppose $\{a,b,c,d\}$ is a $4$-set of vertices in $G(D)$ that spans a $K_4$. Without loss of generality, we may assume that $\vec{ab}, \vec{ac}$ are in $E(D)$. Therefore neither $\vec{bc}, \vec{cb}$ are in $E(D)$. Since $\{a,b,d\}$ also spans a $3$-edge in $G(D)$, it follows that $\vec{ad} \in E(D)$ and $\vec{bd}, \vec{db}\notin E(D)$. But then $\{b,d,c\}$ spans at most one edge of $D$, and hence cannot be a $3$-edge of $G(D)$, a contradiction. Now suppose $\{a,b,c,d,e\}$ is a $5$-set of vertices that spans a $C_5$ in $G(D)$, with edges $abc$, $bcd$, $cde$, $dea$ and $eab$. Since $abc$ is an edge, $\{a,b,c\}$ must induce a copy of $\vec{S}_3$ in $D$. First of all, suppose we have $\vec{ab}, \vec{ac}$ in $E(D)$, and $\vec{bc}, \vec{cb}$ not in $E(D)$, as depicted in Figure~\ref{dto3fig}. As $bcd \in E(G(D))$, $\{b,c,d\}$ must span a copy of $\vec{S}_3$, and we must have $\vec{db}, \vec{dc} \in E(D)$. Similarly, as $cde \in E(G(D))$ we have $\vec{de} \in E(D)$ and $\vec{ec},\vec{ce} \notin E(D)$. Again as $dea \in E(G(D))$ we must have $\vec{da} \in E(D)$ and $\vec{ae}, \vec{ea} \notin E(D)$. But then $\{e,a,b\}$ cannot induce a copy of $\vec{S}_3$ in $D$, and hence $eab$ cannot be a $3$-edge of $G(D)$, a contradiction. By symmetry, this argument also rules out the possibility of having $\vec{ca}, \vec{cb}$ both in $E(D)$ and $\vec{ab}, \vec{ba} \notin E(D)$. This leaves us with one last possibility, namely that both $\vec{ba}, \vec{bc}$ are in $E(D)$ and neither of $\vec{ac}, \vec{ca}$ is in $E(D)$. Since $bcd$ is an edge of $G(D)$, this implies that $\vec{bd}$ is in $E(D)$ while neither of $\vec{cd}, \vec{dc}$ is. But this also leads to a contradiction by our previous argument, with $bcd$ now playing the role of $abc$. Thus $G(D)$ must be $C_5$-free, as claimed. 
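Both freeness claims are also easy to confirm mechanically on small instances; a brute-force sketch (the random digraph model and all parameters are illustrative choices, not part of the proof):

```python
import itertools
import random

def g_of_d(n, arcs):
    """3-graph G(D): xyz is an edge iff {x, y, z} induces an out-star in D."""
    edges = set()
    for t in itertools.combinations(range(n), 3):
        inside = [(u, v) for u, v in itertools.permutations(t, 2) if (u, v) in arcs]
        # an induced copy of S_3: exactly two arcs, leaving a common centre
        if len(inside) == 2 and inside[0][0] == inside[1][0]:
            edges.add(frozenset(t))
    return edges

def spans_k4(n, edges):
    return any(all(frozenset(t) in edges for t in itertools.combinations(s, 3))
               for s in itertools.combinations(range(n), 4))

def spans_c5(n, edges):
    # some cyclic ordering p of a 5-set carries all five edges of C_5
    return any(all(frozenset((p[i], p[(i + 1) % 5], p[(i + 2) % 5])) in edges
                   for i in range(5))
               for s in itertools.combinations(range(n), 5)
               for p in itertools.permutations(s))

random.seed(0)
n = 9
arcs = {(u, v) for u in range(n) for v in range(n)
        if u != v and random.random() < 0.3}
h = g_of_d(n, arcs)
assert not spans_k4(n, h) and not spans_c5(n, h)
```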
\end{proof} In fact more is true: the proof of the second part of Proposition~\ref{dto3} generalises to show that, for all integers $t\geq 3$ with $t$ congruent to $1$ or $2$ modulo $3$, $G(D)$ contains no copy of the strong $t$-cycle \[C_{t}=([t], \{ 123, \ 234, \ \dots \ , (t-2)(t-1)t, \ (t-1)t1, \ t12 \}).\] An interesting question is whether some kind of converse is true. Note that an immediate consequence of Proposition~\ref{dto3} is the following: \begin{corollary}\label{s3c5ineq} \[\pi_{\vec{S}_3}(\emptyset) \leq \pi(K_4, C_5, C_7).\] \end{corollary} It is easy to check that the conjectured extremal $3$-graph construction of Mubayi and R\"odl for the $\pi(C_5)$ problem is both $K_4$-free and $C_{t}$-free for all $t\geq 3$ with $t$ congruent to $1$ or $2$ modulo $3$. We ask therefore the following question: \begin{question}\label{3gtodq} Does there exist, for every $\varepsilon>0$, a $\delta=\delta(\varepsilon)>0$ and $N=N(\varepsilon)$ such that if $G$ is a $C_5$-free $3$-graph on $n>N$ vertices with at least $(2\sqrt{3}-3 - \delta)\binom{n}{3}$ edges, then there is a directed graph $D$ on $n$ vertices such that the $3$-graphs $G$ and $G(D)$ differ on at most $\varepsilon \binom{n}{3}$ edges? \end{question} An affirmative answer to Question~\ref{3gtodq} would, by our next result, automatically imply Conjecture~\ref{c5conj}: \begin{theorem}\label{1213} \[\pi_{\vec{S}_3}(\emptyset)= 2\sqrt{3}-3.\] \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound comes from an unbalanced blow-up of the directed graph \[\vec{S}_2=([2], \{\vec{12}\}),\] and iterating the construction inside part $1$, setting at each stage of the construction a proportion $(\sqrt{3}-1)/2$ of the vertices in part $1$ and the remaining $(3-\sqrt{3})/2$ proportion of the vertices in part $2$.
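A short calculation confirms this choice of proportions. If a proportion $x$ of the vertices is placed in part $1$ at every level of the iteration, then a $3$-set induces a copy of $\vec{S}_3$ precisely when exactly one of its vertices lies in part $1$ at the first level separating the set, so the $\vec{S}_3$-density $d(x)$ satisfies $d(x)=3x(1-x)^2+x^3d(x)$, that is,
\[ d(x)=\frac{3x(1-x)^2}{1-x^3}=\frac{3x(1-x)}{1+x+x^2}, \qquad d'(x)=\frac{3(1-2x-2x^2)}{(1+x+x^2)^2}, \]
and $d'$ vanishes on $[0,1]$ only at $x=(\sqrt{3}-1)/2$, where $d(x)=2\sqrt{3}-3$.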
\end{proof} Denote the lower bound construction in Theorem~\ref{1213} by $D$; then $G(D)$ is exactly the $C_5$-free construction of Mubayi and R\"odl described in Section~2.2. It is an interesting question why Flagmatic can give us exact bounds for the $\vec{S}_3$-density problem for directed graphs, but not for the Tur\'an density problem for the $3$-graph $C_5$. In a forthcoming note~\cite{FRV12}, we use the directed graph removal lemma of Alon and Shapira~\cite{AS04} to prove that the construction $D$ is stable for this problem. Theorem~\ref{1213} is, to the best of our knowledge, the first known irrational inducibility. But perhaps more significantly, it is the first `simple' problem for which an iterated blowup construction can be shown to be extremal. Pikhurko~\cite{P11b} has shown the far stronger result that every iterated blowup construction for $3$-graphs is the unique extremal configuration for \emph{some} Tur\'an density problem. However his proof works by a kind of compactness argument, and does not give explicit families of suitable forbidden $3$-graphs, but rather proves that such families exist. \subsection{Other directed graphs} Let us now consider $\vec{S}_4$. As in the previous subsection, given a directed graph $D$ we define a $3$-graph $G(D)$ on the same vertex set by letting $xyz$ be a $3$-edge if the $3$-set $\{x,y,z\}$ induces a copy of the out-star of order $3$, $\vec{S}_3$. Then the number of copies of $K_4^-$ in $G(D)$ is exactly the number of copies of $\vec{S}_4$ in $D$, whence we have: \begin{proposition} \[\pi_{\vec{S}_4}(\emptyset) \leq \pi_{K_4^-}(C_5).\] \end{proposition} \begin{proof} By Proposition~\ref{dto3}, for every directed graph $D$, $G(D)$ is $C_5$-free. The claimed inequality follows directly from our remark that copies of $K_4^-$ in $G(D)$ correspond exactly to copies of $\vec{S}_4$ in $D$.
\end{proof} We conjectured in Section~2.2 that \[\pi_{K_4^-}(C_5)=4-6\left((\sqrt{2}+1)^{1/3}-(\sqrt{2}-1)^{1/3}\right),\] or, more helpfully, the maximum of \[ \frac{ 4 x(1-x)^3}{1-x^4} \] for $x \in [0,1]$, which is attained at the unique real root of $3t^3+3t^2+3t-1$. We have been unable to prove this using the semi-definite method, but, just as in the previous subsection, the directed graph problem proves to be more tractable, allowing us to show: \begin{theorem}\label{121314} \[\pi_{\vec{S}_4}(\emptyset) = \frac{ 4 p(1-p)^3}{1-p^4}, \] where $p$ is the real root of $3t^3+3t^2+3t-1$. \end{theorem} \begin{proof} The upper bound is from a flag algebra calculation using Flagmatic (see Section~\ref{flagmaticsec} for how to obtain a certificate). The lower bound comes from an unbalanced blow-up of $\vec{S}_2$ and iterating the construction inside part $1$, setting at each stage of the construction a proportion $p$ of the vertices in part $1$ and the remaining $1-p$ proportion of the vertices in part $2$. \end{proof} As in Theorem~\ref{1213}, call $D$ our lower bound construction for Theorem~\ref{121314}. Then $G(D)$ coincides exactly with our lower bound construction in Section~2.2 for $\pi_{K_4^-}(C_5)$, which we conjectured to be optimal. So what about $\pi_{\vec{S}_k}(\emptyset)$ for general $k$? Given Theorems~\ref{1213} and~\ref{121314} it is natural to guess that in general an unbalanced blowup of $\vec{S}_2$ iterated inside part $1$ should be best possible. As we have shown, this is true for the cases $k=3$ and $k=4$, and we conjecture that this remains true for general $k$: \begin{conjecture}\label{skconj} For every $k\geq 3$, \[\pi_{\vec{S}_k}(\emptyset)= \alpha_k,\] where \[ \alpha_k = \max_{x \in [0,1]} \frac{ k x(1-x)^{k-1}}{1-x^k}, \] with the unique stable extremal configuration being a blow-up of $\vec{S}_2$ iterated inside part $1$, with the proportion of vertices assigned to part $1$ at every iteration being the value of $x$ at which the maximum is attained.
\end{conjecture} With a little bit of calculus, we can describe $\alpha_k$ more precisely; the maximum of \[ \frac{ k x(1-x)^{k-1}}{1-x^k} \] occurs when $x=x_k$, where $x_k$ is the unique positive root of the polynomial \[ (k-1)(t + t^2 + \cdots + t^{k-1}) - 1. \] Note that $x_k \in [0,1/(k-1)]$ and $x_k \rightarrow 1/(k-1)$ as $k\rightarrow \infty$, as we would expect from our construction. Thus also $\alpha_k \rightarrow 1/e$ as $k \rightarrow \infty$. \end{document}
\begin{document} \maketitle \centerline{\scshape Kim Knudsen and Aksel Kaastrup Rasmussen$^*$} {\footnotesize \centerline{Technical University of Denmark} \centerline{Department of Applied Mathematics and Computer Science} \centerline{DK-2800 Kgs. Lyngby, Denmark} } \begin{abstract} Electrical Impedance Tomography gives rise to the severely ill-posed Calderón problem of determining the electrical conductivity distribution in a bounded domain from knowledge of the associated Dirichlet-to-Neumann map for the governing equation. The uniqueness and stability questions for the three-dimensional problem were largely answered in the affirmative in the 1980s using complex geometrical optics solutions, and this led further to a direct reconstruction method relying on a non-physical scattering transform. In this paper, the reconstruction problem is taken one step further towards practical applications by considering data contaminated by noise. Indeed, a regularization strategy for the three-dimensional Calderón problem is presented based on a suitable and explicit truncation of the scattering transform. This gives a certified, stable and direct reconstruction method that is robust to small perturbations of the data. Numerical tests on simulated noisy data illustrate the feasibility and regularizing effect of the method, and suggest that the numerical implementation performs better than predicted by theory. \end{abstract} \section{Introduction} Electrical Impedance Tomography (EIT) provides a noninvasive method of obtaining information on the electrical conductivity distribution of electric conductive media from exterior electrostatic measurements of currents and voltages.
There are many applications in medical imaging including early detection of breast cancer \cite{cherepenin2002,Zou200379}, hemorrhagic stroke detection \cite{malone2014a,goren2018a}, pulmonary function monitoring \cite{Adler19971762,frerichs2019a,Leonhardt20121917} and targeting control in transcranial brain stimulation \cite{schmidt2015a}. Applications also include industrial testing, for example, crack damage detection in cementitious structures \cite{Hou2009ElectricalIT,Hallaji2014}, and subsurface geophysical imaging \cite{zhao2013a}. The mathematical problem of EIT is called the Calderón problem and was first formulated by A.P. Calderón in 1980 \cite{calderoninverse} as follows: Consider a bounded Lipschitz domain $\Omega \subset \mathbb{R}^3$ filled with a conductor with a distribution $\gamma \in L^\infty(\Omega)$, $\gamma\geq c >0$. Under the assumption of no sinks or sources of current in the domain, applying an electrical surface potential $f \in H^{1/2}(\partial \Omega)$ induces an electrical potential $u \in H^1(\Omega)$, which uniquely solves the conductivity equation \begin{equation} \label{eq:cond} \begin{aligned} \nabla \cdot (\gamma \nabla u) &=0 & & \text { in } \Omega, \\ u &=f & & \text { on } \partial \Omega. \end{aligned} \end{equation} The Dirichlet-to-Neumann map $\Lambda_{\gamma}: H^{1/2}(\partial \Omega) \rightarrow H^{-1/2}(\partial \Omega)$ is defined as \begin{equation} \Lambda_{\gamma} f=\gamma \partial_{\nu} u|_{\partial \Omega}, \end{equation} and associates a voltage potential on the boundary with a corresponding normal current flux. All pairs $(f,\gamma \partial_{\nu} u|_{\partial \Omega})$, or equivalently the Dirichlet-to-Neumann map, constitute the available {data}. The forward problem is the problem of determining the Dirichlet-to-Neumann map given the conductivity, and it amounts to solving the boundary value problem \eqref{eq:cond} for all possible $f$. 
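As a simple illustration of the forward map (a standard example, not specific to this paper): for the homogeneous conductivity $\gamma\equiv 1$ on the unit ball, the solution of \eqref{eq:cond} with boundary data the trace of a homogeneous harmonic polynomial $h$ of degree $l$ is $h$ itself, and Euler's relation $x\cdot\nabla h=lh$ shows that $\Lambda_1$ acts on such data as multiplication by $l$. A symbolic check:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

def laplacian(u):
    return sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)

def normal_derivative(u):
    # the outward unit normal on the unit sphere is the position vector (x, y, z)
    return x * sp.diff(u, x) + y * sp.diff(u, y) + z * sp.diff(u, z)

# homogeneous harmonic polynomials with their degrees
examples = [(x, 1), (x * y, 2), (x**2 - y**2, 2), (x**3 - 3 * x * y**2, 3)]
for h, l in examples:
    # harmonic, so u = h solves the conductivity equation with gamma = 1
    assert sp.expand(laplacian(h)) == 0
    # Lambda_1 h = l * h on the unit sphere
    assert sp.expand(normal_derivative(h) - l * h) == 0
```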
The Calderón problem now asks whether $\gamma$ is uniquely determined by $\Lambda_\gamma$, and how to stably reconstruct $\gamma$ from $\Lambda_\gamma$, if possible. Uniqueness and reconstruction were settled for sufficiently regular conductivity distributions in dimension $n\geq 3$ in a series of papers \cite{Nachman1988595, nachman1988a, Novikov1988263, sylvester1987a, carorogers2016}. The results are based on complex geometrical optics (CGO) solutions to a Schrödinger equation derived from \eqref{eq:cond}. The first step of the reconstruction method is to recover the CGO solutions on $\partial \Omega$ by solving a weakly singular boundary integral equation with an exponentially growing kernel. The second step is obtaining the so-called non-physical scattering transform, which, in the limit of large complex frequency, approximates the Fourier transform of $\gamma^{-1/2}\Delta \gamma^{1/2}$. Applying the inverse Fourier transform and solving a boundary value problem yields $\gamma$ in the third step. Numerical algorithms following the scattering transform approach in dimension $n\geq 3$ have been developed by approximating the scattering transform \cite{bikowski2011a, knudsen2011a,hamilton2020a,boverman2009a}, by approximating the boundary integral equations \cite{delbary2012a}, and for the full theoretical reconstruction algorithm \cite{delbary2014a}. A reconstruction algorithm for conductivity distributions close to a constant has been suggested, but not implemented \cite{cornean2006a}. A similar scattering transform approach combined with tools from complex analysis enables uniqueness and reconstruction \cite{nachman1996a} for the two-dimensional Calderón problem. More recently, a final affirmative answer was given to the question of uniqueness for a general bounded conductivity distribution in two dimensions \cite{astala2006a}.
Numerical algorithms and implementation for the two-dimensional problem have been considered \cite{knudsen2003a, knudsen2007a, mueller2003a, mueller2002a, siltanen2001a, siltanen2000a} and a regularization analysis and full implementation was given in \cite{Knudsen2009a}. We stress that in any practical case the Calderón problem is three-dimensional, since applying potentials on the boundary of a planar cross section of $\Omega$ leads to current flow leaving the plane. The Calderón problem is known to be severely ill-posed. Conditional stability estimates exist \cite{alessandrini1988a,alessandrini1990a} of the form \begin{equation}\label{eq:stability} \|\gamma_1-\gamma_2\|_{L^\infty(\Omega)} \leq f(\|\Lambda_{\gamma_1}-\Lambda_{\gamma_2}\|_{Z}), \end{equation} for an appropriate function space $Z$ and a continuous function $f$ of logarithmic type with $f(0)=0$. Furthermore, logarithmic stability is optimal \cite{mandache2001a}. While this is relevant for the theoretical reconstruction, there is no guarantee that a practically measured, perturbed Dirichlet-to-Neumann map $\Lambda_\gamma^\varepsilon$ is the Dirichlet-to-Neumann map of any conductivity. We emphasize that in any practical case we cannot have infinite-precision data, but rather a noisy finite approximation. Consequently, any computational algorithm for the problem needs regularization. Classical regularization theory for inverse problems is given in \cite{engl1996a,kirsch1996a} with a focus on least squares formulations. A common approach to regularization for the Calderón problem is based on iterative regularized least-squares, and convergence of such methods is analyzed in \cite{dobson1992, rondi2008a, rondi2016a, lechleiter2008a, jin2012a} in the context of EIT. A quantitative comparison of CGO-based methods and iterative regularized methods is given in \cite{hamilton2020a}.
Reconstruction by statistical inversion is developed in \cite{kaipio2000a, dunlop2016a}, where in the latter, the problem is posed in an infinite-dimensional Bayesian framework. A different statistical approach to the Calderón problem shows stable reconstruction of the surface conductivity on a domain given noisy data \cite{caro2017}. Convergence estimates in probability of a statistical estimator (posterior mean) to the true conductivity given noisy data with a sufficiently small noise level are considered in \cite{kweku2019}. In this paper we provide a direct CGO-based regularization strategy with an admissible parameter choice rule for reconstruction in the three-dimensional Calderón problem under the following assumptions: \begin{assumption} For simplicity of exposition, we assume the domain of interest $\Omega$ is embedded in the unit ball in $\mathbb{R}^3$. Furthermore, we assume $\partial \Omega$ is smooth. \end{assumption} \begin{assumption}[Parameter and data space]\label{assumption2} We consider the forward map $F:D(F)\subset L^\infty(\Omega) \rightarrow Y$, $\gamma\mapsto \Lambda_\gamma$ with the following definition of $D(F)$. Let $\Pi>0$ and $0<\rho<1$; then $\gamma \in D(F)\subset L^\infty(\Omega)$ satisfies \begin{equation}\label{Df} \begin{aligned} \|\gamma\|_{C^2(\overline{\Omega})}&\leq \Pi,\\ \gamma(x) &\geq \Pi^{-1} \quad \text{for all $x\in \Omega$,}\\ \gamma(x) &\equiv 1 \qquad \, \text{for $\mathrm{dist}(x,\partial \Omega)< \rho$,} \end{aligned} \end{equation} where we assume knowledge of $\Pi$ and $\rho$. We extend $\gamma$ continuously by $\gamma\equiv 1$ outside $\Omega$.
The data space $Y\subset \mathcal{L}(H^{1/2}(\partial \Omega), H^{-1/2}(\partial \Omega))$ consists of bounded linear operators $\Lambda:H^{1/2}(\partial\Omega)\rightarrow H^{-1/2}(\partial\Omega)$ that are Dirichlet-to-Neumann-like in the sense \begin{equation} \begin{aligned} \Lambda(1)&=0,\\ \int_{\partial \Omega} (\Lambda f)(x) \, d\sigma(x) &= 0 \quad \text{for every $f \in H^{1/2}(\partial \Omega)$.} \end{aligned} \end{equation} We equip $D(F)$ and $Y$ with the inherited norms $\|\cdot\|_{D(F)} = \|\cdot \|_{L^\infty(\Omega)}$ and $\|\cdot\|_Y = \|\cdot\|_{H^{1/2}(\partial \Omega)\rightarrow H^{-1/2}(\partial \Omega)}$. \end{assumption} There is no reason to believe that the regularity assumptions on $\gamma$ are optimal; in fact, we expect that the strategy generalizes to the less regular setting of \cite{carorogers2016}. We recall the adaptation of the definitions in \cite{engl1996a,kirsch1996a} presented in \cite{Knudsen2009a} of a regularization strategy in the nonlinear setting. \begin{definition}\label{def:reg1} A family of continuous mappings $\mathcal{R}_\alpha:Y\rightarrow L^\infty(\Omega)$, parametrized by the \textit{regularization parameter} $0<\alpha<\infty$, is called a \textit{regularization strategy} for $F$ if \begin{equation}\label{reqweak} \lim_{\alpha \rightarrow 0} \|\mathcal{R}_\alpha \Lambda_\gamma - \gamma\|_{L^\infty(\Omega)}=0, \end{equation} for each fixed $\gamma \in D(F)$. \end{definition} We define the perturbed Dirichlet-to-Neumann map as \begin{equation}\label{eq:perturbdata} \Lambda_\gamma^\varepsilon = \Lambda_\gamma+\mathcal{E}, \end{equation} with $\mathcal{E}\in Y$ and $\|\mathcal{E}\|_Y \leq \varepsilon$ for some $\varepsilon>0$. We call $\varepsilon$ the noise level, since we eventually simulate perturbations $\mathcal{E}$ as random noise.
\begin{definition}\label{def:reg2} Furthermore, a regularization strategy $\mathcal{R}_\alpha: Y\rightarrow L^\infty(\Omega)$, $0<\alpha<\infty$, is called \textit{admissible} if \begin{equation}\label{alphaprop} \alpha(\varepsilon)\rightarrow 0 \quad \text{ as } \quad \varepsilon \rightarrow 0, \end{equation} and for any fixed $\gamma \in \mathcal{D}(F)$ we have \begin{equation}\label{reqstrong} \sup_{\Lambda_\gamma^\varepsilon\in Y} \{\|\mathcal{R}_{\alpha(\varepsilon)}\Lambda_\gamma^\varepsilon -\gamma\|_{L^\infty(\Omega)}\mid\|\Lambda_\gamma^\varepsilon-\Lambda_\gamma\|_Y\leq \varepsilon\}\rightarrow 0\quad\text{ as }\quad \varepsilon \rightarrow 0. \end{equation} \end{definition} The topology in which we require convergence is essential; we require convergence in strong operator topology, but not in norm topology. The main result of this paper is then as follows. \begin{theorem}\label{maintheorem} Suppose $\Pi>0$ and $0<\rho<1$ are given and let $D(F)$ be as in Assumption \ref{assumption2}. Then there exists $\varepsilon_0>0$, depending only on $\Pi$ and $\rho$, such that the family $\mathcal{R}_\alpha$ defined by \eqref{def:regstrat} is an admissible regularization strategy for $F$ with the following choice of regularization parameter: \begin{equation}\label{alphadef} \alpha(\varepsilon)=\begin{cases} (-1/11\log(\varepsilon))^{-1/p} & \text{ for } 0<\varepsilon < \varepsilon_0,\\ \frac{\varepsilon}{\varepsilon_0}(-1/11\log({\varepsilon_0}))^{-1/p} & \text{ for } \varepsilon \geq \varepsilon_0, \end{cases} \end{equation} with $p>3/2$. \end{theorem} This gives theoretical justification for practical reconstruction of the Calderón problem in three dimensions. This is the first deterministic regularization analysis for the three-dimensional Calderón problem known to the authors. Similar results have been shown for the related two-dimensional D-bar reconstruction \cite{Knudsen2009a}, and we will in fact adopt the spectral truncation from there to our setting.
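For concreteness, the parameter choice rule \eqref{alphadef} is elementary to evaluate; in the sketch below $p=2$ and $\varepsilon_0=10^{-2}$ are illustrative stand-ins, since the theorem only guarantees the existence of $\varepsilon_0$:

```python
import math

P, EPS0 = 2.0, 1e-2  # illustrative: the theorem requires p > 3/2 and some eps0 > 0

def alpha(eps):
    if 0 < eps < EPS0:
        return (-math.log(eps) / 11) ** (-1 / P)
    return (eps / EPS0) * (-math.log(EPS0) / 11) ** (-1 / P)

# alpha(eps) decreases to 0 with the noise level, as admissibility requires,
# but only at a logarithmic rate
vals = [alpha(10.0 ** (-k)) for k in (3, 6, 12, 24, 48)]
assert all(a > b for a, b in zip(vals, vals[1:]))
# the two branches match continuously at eps = eps0
assert abs(alpha(EPS0) - alpha(EPS0 * (1 - 1e-12))) < 1e-9
```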
{This extension is non-trivial in part because there are no existence and uniqueness guarantees for the CGO solutions that are independent of the magnitude of the complex frequency in the three-dimensional case. In addition, while the two-dimensional D-bar method enjoys the continuous dependence of the solution to the D-bar equation on the scattering transform, it is not obvious when the frequency information of $\gamma$ is stably recovered from the scattering transform corresponding to a perturbed Dirichlet-to-Neumann map in the three-dimensional case. } We denote the set of bounded linear operators between Banach spaces $X$ and $Y$ by $\mathcal{L}(X,Y)$ and use $\mathcal{L}(X):=\mathcal{L}(X,X)$. We denote the Euclidean matrix operator norm by $\|\cdot\|_N := \|\cdot\|_{\mathbb{C}^{(N+1)^2}\rightarrow \mathbb{C}^{(N+1)^2}}$. The operator norm of $A:H^{s}(\partial \Omega)\rightarrow H^t(\partial \Omega)$ is denoted by $\|A\|_{s,t}$. We reserve $C$ for generic constants and $C_1,C_2,\hdots$ for constants of specific value. Finally, exponential functions of the form $e^{ix\cdot\zeta}$, $x\in \mathbb{R}^3$, $\zeta\in \mathbb{C}^3$, are denoted $e_\zeta(x)$. In Section \ref{sec:2}, the full non-linear reconstruction algorithm for the three-dimensional Calderón problem is given. Section \ref{sec:3} gives technical estimates regarding the boundary integral equation and the scattering transform and provides a regularizing method for perturbed data with $\varepsilon$ sufficiently small. Then Section \ref{sec:35} continuously extends the method to a regularization strategy $\mathcal{R}_\alpha$ defined on $Y$ and proves Theorem \ref{maintheorem}. In Section \ref{sec:4}, the necessary numerical details concerning the representation of the Dirichlet-to-Neumann map and computation of the relevant norm are given. In addition, a noise model is given.
Section \ref{sec:5} presents and discusses numerical results of noise tests with a piecewise constant conductivity distribution using an implementation given in \cite{delbary2014a}, which is available from the corresponding author by request. \section{The full non-linear reconstruction method}\label{sec:2} Let $v=\gamma^{1/2}u$; then $v$ is a solution to the Schrödinger equation \begin{equation}\label{eq:schrod} \begin{aligned} (-\Delta+q) v &=0 \quad \text { in } \Omega, \\ v &=g \quad \text { on } \partial \Omega, \end{aligned} \end{equation} with $q=\gamma^{-1/2}\Delta\gamma^{1/2}$ if and only if $u$ is a solution to \eqref{eq:cond} with $f=\gamma^{-1/2}g$. Note that in our setting $q=0$ near $\partial \Omega$, that $q$ is extended continuously by $q\equiv 0$ outside $\Omega$, and further that $\Lambda_q g = \partial_\nu v = \Lambda_\gamma f$. The reconstruction method considered here is based on CGO solutions $\psi_\zeta$ of \begin{equation}\label{cgo1} (-\Delta+q)\psi_\zeta = 0 \quad \text{ in } \mathbb{R}^3, \end{equation} satisfying $\psi_\zeta(x) = e^{ix\cdot \zeta}(1+r_\zeta(x))$. Here the complex frequency $\zeta \in \mathbb{C}^3$ satisfies $\zeta \cdot \zeta = 0$ making $e^{ix\cdot \zeta}$ harmonic, and the remainder $r_\zeta$ belongs to certain weighted $L^2$ spaces. In the three-dimensional case, existence and uniqueness of CGO solutions have been shown for large complex frequencies, \begin{equation}\label{largezeta} |\zeta|>C_0\|q\|_{L^\infty(\Omega)}=:D_q \end{equation} for some constant $C_0>0$, or alternatively for $|\zeta|$ small \cite{sylvester1987a,cornean2006a}.
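The passage between \eqref{eq:cond} and \eqref{eq:schrod} is the Liouville identity $\nabla\cdot(\gamma\nabla(\gamma^{-1/2}v)) = \gamma^{1/2}(\Delta-q)v$; a one-dimensional symbolic check, with an arbitrary smooth positive $\gamma$ chosen purely for illustration:

```python
import sympy as sp

x = sp.symbols('x', real=True)
v = sp.Function('v')(x)
gamma = 2 + sp.sin(x)            # an arbitrary smooth, positive conductivity
w = sp.sqrt(gamma)               # gamma^(1/2)
q = sp.diff(w, x, 2) / w         # q = gamma^(-1/2) * (gamma^(1/2))''

u = v / w                        # u = gamma^(-1/2) v
lhs = sp.diff(gamma * sp.diff(u, x), x)   # one-dimensional div(gamma grad u)
rhs = w * (sp.diff(v, x, 2) - q * v)      # gamma^(1/2) (Delta - q) v

assert sp.simplify(lhs - rhs) == 0
```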
The analysis involves the Faddeev Green's function \begin{equation} G_\zeta(x) := e^{i\zeta\cdot x}g_\zeta(x)\qquad g_\zeta(x):=\frac{1}{(2\pi)^3}\int_{\mathbb{R}^3}\frac{e^{ix\cdot\xi}}{|\xi|^2+2\xi\cdot \zeta}\,d\xi, \end{equation} where $g_\zeta$ is defined in the sense of the inverse Fourier transform of a tempered distribution and interpretable as a fundamental solution of $(-\Delta-2i\zeta\cdot \nabla)$. {Boundedness of convolution by $g_\zeta$ on $\Omega$ is well known \cite{sylvester1987a,brown1996a,salo2006a}: } \begin{equation}\label{convgzeta} \|g_\zeta * f \|_{L^2(\Omega)}\leq C|\zeta|^{s-1}\|f\|_{L^2(\Omega)}, \quad s\in \{0,1,2\}, \end{equation} where $|\zeta|$ is bounded away from zero, and $C$ is independent of $\zeta$ and $f$. The non-physical scattering transform is defined for all those $\zeta$ that give rise to a unique CGO solution $\psi_\zeta$ as \begin{equation}\label{eq:tformeq} \mathbf{t}(\xi, \zeta)=\int_{\mathbb{R}^{3}} e^{-i x \cdot(\xi+\zeta)} \psi_{\zeta}(x) q(x)\, d x, \quad \xi \in \mathbb{R}^3. \end{equation} It is useful to see the scattering transform as a non-linear Fourier transform of the potential $q$. Indeed, for $|\zeta|>D_q$ we have \begin{equation}\label{lemmaont} |\mathbf{t}(\xi,\zeta)-\hat q(\xi)| \leq C\|q\|_{L^{{\infty}}(\Omega)}^2|\zeta|^{-1}, \end{equation} for all $\xi \in \mathbb{R}^3$, where $C$ is independent of $\zeta$ and $q$. Whenever $(\zeta+\xi)\cdot (\zeta+\xi)=0$, integration by parts in \eqref{eq:tformeq} yields \begin{equation}\label{scatter2} \mathbf{t}(\xi, \zeta)=\int_{\partial \Omega} e^{-i x \cdot(\xi+\zeta)}(\Lambda_{\gamma}-\Lambda_{1})\psi_\zeta(x)\, d\sigma(x), \end{equation} where $d\sigma$ denotes the surface measure on $\partial \Omega$. 
For fixed $\xi \in \mathbb{R}^3$ this gives rise to the set $\mathcal{V}_\xi = \{\zeta\in \mathbb{C}^3\setminus \{0\} \mid \zeta\cdot \zeta = 0,\, (\zeta+\xi)\cdot (\zeta+\xi) = 0\}$ parametrized by \begin{equation}\label{zetaxieq} \zeta(\xi)=\left(-\frac{\xi}{2}+\left(\kappa^{2}-\frac{|\xi|^{2}}{4}\right)^{1 / 2} k^{\perp\perp}\right)+i \kappa k^{\perp}, \end{equation} where $\kappa\geq \frac{|\xi|}{2}$, $k^\perp, k^{\perp\perp} \in \mathbb{R}^3$ are unit vectors, and $\{\xi, k^\perp, k^{\perp\perp} \}$ is an orthogonal set \cite{delbary2014a}. Note that for $\zeta(\xi)\in \mathcal{V}_\xi$ and $\kappa\geq \frac{|\xi|}{2}$ we have $|\zeta(\xi)|= \sqrt{2}\kappa$; consequently $\lim_{\kappa\rightarrow \infty} |\zeta(\xi)|=\infty$. For each fixed $\zeta$ the trace of the CGO solution $\psi_\zeta|_{\partial \Omega}$ is recoverable from the boundary integral equation \begin{equation}\label{bie} \psi_{\zeta}|_{\partial \Omega}+\mathcal{S}_{\zeta}\left(\Lambda_{\gamma}-\Lambda_{1}\right) (\psi_{\zeta}|_{\partial \Omega})=e_{\zeta}|_{\partial \Omega}, \end{equation} where $\mathcal{S}_{\zeta}: H^{-1/2}(\partial \Omega)\rightarrow H^{1/2}(\partial \Omega)$ is {the boundary single layer operator} defined by \begin{equation}\label{fsinglelayer2} \left(\mathcal{S}_{\zeta} \varphi\right)(x)=\int_{\partial \Omega} G_{\zeta}(x-y) \varphi(y) d \sigma(y),\quad x \in \partial\Omega. \end{equation} {With $\mathcal{S}_0$ we denote the boundary single layer operator corresponding to the usual Green’s function $G_0$ for the Laplacian.
Occasionally we use the same notation when $x \in \mathbb{R}^3\setminus \partial \Omega$; it is well known that $\mathcal{S}_0 \varphi$, and hence $\mathcal{S}_\zeta \varphi$, is continuous in $\mathbb{R}^3$ \cite{colton1992a}.} We let $$B_\zeta:=[I+\mathcal{S}_{\zeta}\left(\Lambda_{\gamma}-\Lambda_{1}\right)]$$ denote the boundary integral operator and note that the boundary integral equation \eqref{bie} is a uniquely solvable Fredholm equation of the second kind for $|\zeta|>D_q$ \cite{nachman1996a}. This gives a method of recovering the Fourier transform of $q$ at every frequency through the scattering transform \eqref{scatter2} as $|\zeta|\rightarrow \infty$. This method of reconstruction for the Calderón problem in three dimensions was first explicitly given in \cite{nachman1988a,Novikov1988263}. We summarize the method in three steps. \begin{method}\label{method:1} CGO reconstruction in three dimensions \begin{description} \item[\textbf{Step} $\mathbf{1}$] Fix $\xi\in \mathbb{R}^3$ and solve the boundary integral equation \eqref{bie} for all $\zeta(\xi)\in \mathcal{V}_\xi$. Compute $\mathbf{t}(\xi,\zeta(\xi))$ by \eqref{scatter2}. \item[\textbf{Step} $\mathbf{2}$] Compute $\hat{q}(\xi)$ by \begin{equation} \lim_{|\zeta(\xi)| \rightarrow \infty} \mathbf{t}(\xi,\zeta(\xi)) = \hat q(\xi), \quad \xi\in\mathbb{R}^3, \end{equation} and $q(x)$ by the inverse Fourier transform. \item[\textbf{Step} $\mathbf{3}$] Solve the boundary value problem \begin{equation} \begin{aligned} (-\Delta+q) \gamma^{1 / 2} &=0 \quad \text { in } \Omega, \\ \gamma^{1 / 2} &=1 \quad \text { on } \partial \Omega, \end{aligned} \end{equation} and extract $\gamma$. \end{description} \end{method} We remark that it is sufficient to solve the boundary integral equation in Step 1 for a sequence $\{\zeta_k(\xi)\}_{k=1}^\infty$ of complex frequencies in $\mathcal{V}_\xi$ tending to infinity.
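In practice, the complex frequencies used in Step 1 are generated directly from the parametrization \eqref{zetaxieq}. The following minimal numerical sketch (the helper \texttt{zeta\_of\_xi} and the choice of seed vector are our own illustration, not part of the method) verifies the algebraic identities defining $\mathcal{V}_\xi$:

```python
import numpy as np

def zeta_of_xi(xi, kappa):
    """Illustrative helper: build zeta(xi) in V_xi from the parametrization,
    assuming xi is not parallel to the seed vector e_3."""
    xi = np.asarray(xi, dtype=float)
    assert kappa >= np.linalg.norm(xi) / 2
    # orthonormal pair k_perp, k_perpperp, both orthogonal to xi
    k_perp = np.cross(xi, [0.0, 0.0, 1.0])
    k_perp /= np.linalg.norm(k_perp)
    k_pp = np.cross(xi, k_perp)
    k_pp /= np.linalg.norm(k_pp)
    c = np.sqrt(kappa**2 - np.dot(xi, xi) / 4)
    return (-xi / 2 + c * k_pp) + 1j * kappa * k_perp

xi = np.array([1.0, 2.0, 0.5])
kappa = 4.0
z = zeta_of_xi(xi, kappa)
print(abs(np.dot(z, z)))            # ~ 0:  zeta . zeta = 0
print(abs(np.dot(z + xi, z + xi)))  # ~ 0:  (zeta + xi) . (zeta + xi) = 0
print(np.linalg.norm(z) / kappa)    # ~ sqrt(2):  |zeta| = sqrt(2) * kappa
```

Note that \texttt{np.dot} on complex vectors computes the bilinear product $\sum_i z_i^2$ without conjugation, which is exactly the product appearing in the definition of $\mathcal{V}_\xi$.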
\section{Regularized reconstruction by truncation}\label{sec:3} We continue by mimicking Method \ref{method:1} with $\Lambda_\gamma$ replaced by $\Lambda_\gamma^\varepsilon$ with $\varepsilon$ small. We note that, in any case, using $\psi_\zeta$ with $|\zeta|$ large is impractical. {Indeed, when using perturbed measurements naively in \eqref{scatter2}, the propagated perturbation of $\mathbf{t}$ is $\varepsilon$ multiplied by a factor growing exponentially in $|\zeta|$. This factor originates from the solution of the perturbed boundary integral equation} \begin{equation}\label{bienoisy} B_\zeta^\varepsilon(\psi_{\zeta}^\varepsilon|_{\partial \Omega}):=\psi_{\zeta}^\varepsilon|_{\partial \Omega}+\mathcal{S}_{\zeta}\left(\Lambda^\varepsilon_{\gamma}-\Lambda_{1}\right) (\psi^\varepsilon_{\zeta}|_{\partial \Omega})=e_{\zeta}|_{\partial \Omega}, \end{equation} {and from the multiplication by} $e^{-ix\cdot(\xi+\zeta{(\xi)})}${; see Lemma \ref{lemma3}}. We will show below that \eqref{bienoisy} is solvable for sufficiently small $\varepsilon$. To mitigate this exponential behavior we propose a reconstruction method that makes use of two coupled truncations: one of the complex frequency $\zeta$ and one of the real frequency of the signal $q^\varepsilon$, the perturbed analog of $q$. As we shall see, an upper bound on the magnitude $|\zeta(\xi)|$ limits how closely $\mathbf{t}$ can approximate $\hat{q}$ when using perturbed data. From \eqref{zetaxieq} we have \begin{equation} |\zeta(\xi)|\geq\frac{|\xi|}{\sqrt{2}}, \end{equation} and hence fixing $|\zeta(\xi)|$ gives a bounded region in $\mathbb{R}^3$, $|\xi|<M$ for some $M>0$, in which $\mathbf{t}$ can be computed. This gives the following method. \begin{method}\label{method:2} Truncated CGO reconstruction in three dimensions \begin{description} \item[\textbf{Step} $\mathbf{1}^\varepsilon$] Let $M=M(\varepsilon)>0$ be determined by a sufficiently small $\varepsilon$.
For each fixed $\xi$ with $|\xi|<M$, take $\zeta(\xi)\in \mathcal{V}_\xi$ with an appropriate size determined by $M$ and solve \eqref{bienoisy} to recover $\psi_{\zeta}^\varepsilon|_{\partial \Omega}$. Compute the truncated scattering transform by \begin{equation} \mathbf{t}^{\varepsilon}_{M(\varepsilon)}(\xi, \zeta(\xi)):= \begin{cases} \int_{\partial \Omega} e^{-i x \cdot(\xi+\zeta(\xi))}(\Lambda_{\gamma}^\varepsilon-\Lambda_{1})\psi_\zeta^\varepsilon(x) d\sigma(x), & |\xi|< M(\varepsilon),\\ 0, &|\xi|\geq M(\varepsilon). \end{cases} \end{equation} \item[\textbf{Step} $\mathbf{2}^\varepsilon$] Set $\widehat{q^\varepsilon}(\xi):=\mathbf{t}^{\varepsilon}_{M(\varepsilon)}(\xi, \zeta(\xi))$ and compute the inverse Fourier transform to obtain $q^\varepsilon$. \item[\textbf{Step} $\mathbf{3}^\varepsilon$] Solve the boundary value problem \begin{equation} \begin{aligned} (-\Delta+q^\varepsilon)(\gamma^\varepsilon)^{1/2} &=0 \quad && \text { in } \Omega, \\ (\gamma^\varepsilon)^{1/2}&=1 \quad && \text { on } \partial \Omega, \end{aligned} \end{equation} and extract $\gamma^\varepsilon$. \end{description} \end{method} We call $M$ the truncation radius and note that it depends on $\varepsilon$. Truncation of the scattering transform with truncation radius $M$ is well known in regularization theory for the two-dimensional D-bar reconstruction method \cite{Knudsen2009a}. We can see the real-frequency truncation as low-pass filtering in the frequency domain; this leads to additional smoothing in the spatial domain. Note that $M$ determines the level of regularization and plays the role of a regularization parameter $\alpha=M^{-1}$ in the sense of \eqref{reqstrong}. { In the following section we derive the required properties of $\mathcal{S}_\zeta$, $B_\zeta^{-1}$ and $(B_\zeta^\varepsilon)^{-1}$.
The invertibility of $B_\zeta^\varepsilon$ depends on the invertibility of the unperturbed boundary integral operator $B_\zeta$, which is well known due to the mapping properties of $\mathcal{S}_\zeta$. Although boundedness of $\mathcal{S}_\zeta$ and $B_\zeta^{-1}$ in the three-dimensional case follows by arguments similar to those in the two-dimensional case \cite{Knudsen2009a}, it is not immediately clear when $(B_\zeta^\varepsilon)^{-1}$ exists in the absence of existence and uniqueness guarantees for $\psi_\zeta$ for small $|\zeta|$. Neither is it clear under which circumstances $q^\varepsilon$ approximates $q$ as the noise level goes to zero. This is dealt with in Lemma \ref{lemma4} below by choosing a suitable rate at which $|\zeta|$ and $M$ go to infinity as $\varepsilon$ goes to zero. } \subsection{The perturbed boundary integral equation}\label{sec:per} { When $|\zeta|$ is bounded away from zero we can bound $\mathcal{S}_\zeta$ using the mapping properties \eqref{convgzeta} of convolution with $g_\zeta$ between Sobolev spaces defined on $\Omega$. We note that one can give better bounds than the following result for arbitrarily small $|\zeta|<1$ by considering the integral operator $\mathcal{S}_\zeta-\mathcal{S}_0$, which has a smooth kernel, see \cite{cornean2006a,Knudsen2009a}. } \begin{lemma}\label{lemma1} Let $\varphi\in H^{-1/2}\left(\partial \Omega\right)$ be such that $\int_{\partial \Omega} \varphi(x)\,d\sigma(x) = 0$ and let $\zeta\in \mathbb{C}^3$ with $\zeta\cdot\zeta=0$ {and $|\zeta|>\beta>0$}. Then for the boundary single layer operator $\mathcal{S}_\zeta$ we have that \begin{equation}\label{upperboundszeta} \|\mathcal{S}_\zeta \varphi\|_{H^{1/2}\left(\partial \Omega\right)} \leq C_1 (1+|\zeta|)e^{2|\zeta|}\|\varphi\|_{H^{-1/2}\left(\partial \Omega\right)}, \end{equation} where the constant $C_1$ is independent of $\zeta$. \end{lemma} \begin{proof} We follow \cite{Knudsen2009a}.
Letting $x\in \mathbb{R}^3\setminus \overline{\Omega}$ and introducing $u\in H^1(\Omega)$ with $\Delta u = 0$ and $\partial_\nu u = \varphi$ we write \ \vspace*{-10pt} \begin{align} (\mathcal{S}_\zeta\varphi)(x) &= \int_{\partial \Omega} G_\zeta(x-y)\varphi(y) \, d\sigma(y),\\ &= \int_{\Omega} \nabla_{y} G_{\zeta}(x-y)\cdot \nabla u(y) d y,\\ &=-\nabla\cdot\left(G_{\zeta} *(\nabla u)\right)(x),\\ &= -\nabla\cdot \left[e^{ix\cdot \zeta}\left(g_{\zeta} *(e^{-iy\cdot \zeta}\nabla u)\right)\right](x), \end{align} using integration by parts, the chain rule and the fact that $G_\zeta(x-\cdot)$ is smooth in $\Omega$. By the continuity of $\mathcal{S}_\zeta$ the above holds for $x\in \partial \Omega$ as well. Note from \eqref{convgzeta} and Leibniz' rule that \begin{equation} \|\nabla \cdot \left[e^{ix\cdot \zeta}\left(g_{\zeta} *(e^{-iy\cdot \zeta}\nabla u)\right)\right]\|_{L^2(\Omega)}\leq Ce^{2|\zeta|}\|\nabla u\|_{L^2(\Omega)}, \end{equation} and \begin{equation} \|\partial_{x_i}\nabla \cdot \left[e^{ix\cdot \zeta}\left(g_{\zeta} *(e^{-iy\cdot \zeta}\nabla u)\right)\right]\|_{L^2(\Omega)}\leq C|\zeta|e^{2|\zeta|}\|\nabla u\|_{L^2(\Omega)}, \end{equation} for $i=1,2,3$. This yields \begin{align} \|\mathcal{S}_\zeta\varphi\|_{H^{1/2}(\partial\Omega)} &\leq \|\nabla\cdot \left[e^{ix\cdot \zeta}\left(g_{\zeta} *(e^{-iy\cdot \zeta}\nabla u)\right)\right]\|_{H^1(\Omega)},\\ &\leq C(1+|\zeta|)e^{2|\zeta|}\|\nabla u\|_{L^2(\Omega)},\\ &\leq C(1+|\zeta|)e^{2|\zeta|}\|\varphi\|_{H^{-1/2}(\partial\Omega)}, \end{align} using the trace theorem and stability of the Neumann problem for $u$. Here $C$ is dependent on $\beta$ since $|\zeta|>\beta$. \end{proof} We have the following estimate of $B_\zeta^{-1}$. The main idea of the proof is to consider a solution $f\in H^{1/2}(\partial \Omega)$ to $B_\zeta f = h$ for some $h\in H^{1/2}(\partial \Omega)$ and then control the exponential component of $f$ by creating a link to the CGO solutions of the Schrödinger equation. 
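Before turning to this estimate, we note where the exponential factors come from: for $\zeta\cdot\zeta=0$ the real and imaginary parts of $\zeta$ are orthogonal and of equal length, so $|\operatorname{Im}\zeta| = |\zeta|/\sqrt{2}$ and on the closed unit ball $|e^{ix\cdot\zeta}| = e^{-x\cdot\operatorname{Im}\zeta} \leq e^{|\zeta|/\sqrt{2}} \leq e^{|\zeta|}$. A quick numerical check (our own illustration, with an arbitrary admissible $\zeta$):

```python
import numpy as np

# A zeta with zeta . zeta = 0: orthogonal real/imaginary parts of equal
# length, so |Im(zeta)| = |zeta| / sqrt(2).
zeta = np.array([3.0, 0.0, 0.0]) + 1j * np.array([0.0, 3.0, 0.0])

rng = np.random.default_rng(1)
x = rng.standard_normal((100_000, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)   # points on the unit sphere

mags = np.abs(np.exp(1j * x @ zeta))            # |e^{ix.zeta}| = e^{-x.Im(zeta)}
print(mags.max())                               # close to e^{|zeta|/sqrt(2)} = e^3
print(np.exp(np.linalg.norm(zeta)))             # the cruder bound e^{|zeta|}
```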
\begin{lemma}\label{lemma2} For $\zeta\in \mathbb{C}^3\backslash\{0\}$ with $\zeta \cdot \zeta = 0$ and $|\zeta|>D_q$ as in \eqref{largezeta}, the operator $B_\zeta$ is invertible with \begin{equation}\label{Bzetainv} \|B_{\zeta}^{-1}\|_{1 / 2} \leq C_{2} (1+|\zeta|)e^{2|\zeta|}, \end{equation} where $C_2$ is a constant depending only on the \textit{a priori} knowledge $\Pi$ and $\rho$. \end{lemma} \begin{proof} { We follow \cite{Knudsen2009a}. Using integration by parts note that $B_\zeta f=f+G_\zeta \ast(qv_f)$ on $\partial \Omega$, where $v_f\in H^1(\Omega)$ is the unique solution to \begin{equation} \begin{aligned} (-\Delta+q)v_f &=0 \quad && \text { in } \Omega, \\ v_f &=f \quad && \text { on } \partial \Omega. \end{aligned} \end{equation} To bound $f$ we bound $v_f$ by writing $v_f = v-u^{\mathrm{exp}}$ with \begin{equation} \begin{aligned} \Delta v &=0 \quad && \text { in } \Omega, \\ v &= B_\zeta f \quad && \text { on } \partial \Omega, \end{aligned} \end{equation} and $u^{\mathrm{exp}}:=G_\zeta *(qv_f)$. From the stability property of the Dirichlet problem it is sufficient to bound $u^{\mathrm{exp}}$ in terms of $v$. Note $(-\Delta+q)u^{\mathrm{exp}}=qv$ and hence conjugating with exponentials yields the equation in $\mathbb{R}^3$, \begin{equation}\label{eq:condconj} (-\Delta-2i\zeta \cdot \nabla+q)u = qve^{-ix\cdot \zeta}, \end{equation} where we set $u = e^{-ix\cdot \zeta}u^{\mathrm{exp}}$. It is well known that $u$ is the unique solution among functions in certain weighted $L^2(\mathbb{R}^3)$-spaces satisfying $$\|u\|_{L^2(\Omega)} \leq C\|q\|_{L^\infty}\frac{e^{|\zeta|}}{|\zeta|}\|v\|_{L^2(\Omega)},$$ whenever $|\zeta|>D_q$, see \cite{sylvester1987a}. Indeed, convolution with $g_\zeta$ on both sides of \eqref{eq:condconj} gives $$u = g_\zeta*(-qu+qve^{-ix\cdot \zeta}),$$ which upgrades the estimate to $$\|u\|_{H^1(\Omega)} \leq C\|q\|_{L^\infty}e^{|\zeta|}\|v\|_{L^2(\Omega)},$$ using \eqref{convgzeta}. 
Now the estimate \eqref{Bzetainv} follows straightforwardly from the trace theorem. } \end{proof} We note that a main difference between the boundary integral equation in two dimensions and in three dimensions is the possible existence of certain $\zeta$ for which no unique CGO solution to \eqref{cgo1} exists. The next result shows that Lemma \ref{lemma1} and Lemma \ref{lemma2} imply solvability of the perturbed boundary integral equation, using a Neumann series argument based on the factorization \begin{equation} B_{\zeta}^\varepsilon = I+\mathcal{S}_\zeta(\Lambda_\gamma^\varepsilon-\Lambda_\gamma) +\mathcal{S}_\zeta(\Lambda_\gamma-\Lambda_1)=[I+A_\zeta^\varepsilon] B_\zeta, \end{equation} where $A^\varepsilon_\zeta:=\mathcal{S}_\zeta \mathcal{E} B_\zeta^{-1}$ is a bounded operator in $H^{1/2}(\partial \Omega)$. {It is clear from Lemma \ref{lemma2} that $q$ fixes a lower bound for $|\zeta|$ above which $B_\zeta$ is guaranteed to be invertible.} When the noise level is sufficiently small, so that $D_q<|\zeta|<R(\varepsilon)$ for some $R$, we may invert $B_\zeta^\varepsilon$. We have the following result. \begin{lemma}\label{lemma3} Let $R = R(\varepsilon):=-\frac{1}{6}\log{\varepsilon}$, and suppose $D_q<|\zeta |< R(\varepsilon_1)$ for some $0<\varepsilon_1<1$. Then there exists $0<\varepsilon_2\leq\varepsilon_1$ for which $B_\zeta^\varepsilon$ is invertible whenever $0<\varepsilon < \varepsilon_2$. Furthermore we have the estimate \begin{equation} \|\psi^\varepsilon_\zeta - \psi_\zeta\|_{H^{1/2}(\partial \Omega)} \leq C_3\varepsilon(1+R)^4e^{7R}, \end{equation} where $C_3$ is a constant depending only on the \textit{a priori} knowledge $\Pi$ and $\rho$. \end{lemma} \begin{proof} Since $\mathcal{E}\in Y$, it maps into the space of trace functions with zero mean on the boundary.
Then from Lemma \ref{lemma1} and Lemma \ref{lemma2} we find \begin{align}\label{epsilon0} \|A_\zeta^\varepsilon\|_{1/2}= \|\mathcal{S}_\zeta \mathcal{E} B_\zeta^{-1}\|_{1/2}&\leq C_1C_2 \varepsilon (1+R)^2 e^{4R},\\ &\leq C \varepsilon e^{5R}, \label{eq:rhs} \end{align} where we have absorbed the polynomial in $R$ into the exponential and thereby obtained a new constant. By the definition of $R$, we note the right-hand side of \eqref{eq:rhs} goes to zero as $\varepsilon$ goes to zero, and hence there exists a $0<\varepsilon_2 \leq \varepsilon_1$ such that $\|A_\zeta^\varepsilon\|_{1/2}<\frac{1}{2}$. Then by a Neumann series argument, $I+A_\zeta^\varepsilon$ is invertible with $\|(I+A_\zeta^\varepsilon)^{-1} \|_{1/2}<2$, and $(B_\zeta^\varepsilon)^{-1}=B_\zeta^{-1}[I+A_\zeta^\varepsilon]^{-1}$. From the boundary integral equations we have $\psi_\zeta=B^{-1}_\zeta (e_\zeta|_{\partial \Omega})$ and $\psi^\varepsilon_\zeta=(B_\zeta^\varepsilon)^{-1}(e_\zeta|_{\partial \Omega})$. Then with the use of Lemma \ref{lemma2} we have for $0<\varepsilon<\varepsilon_2$ \begin{align} \|\psi^\varepsilon_\zeta\|_{H^{1/2}(\partial \Omega)} &\leq \|(B_\zeta^\varepsilon)^{-1}(e_\zeta|_{\partial \Omega})\|_{H^{1/2}(\partial \Omega)},\\ &\leq 2\|B_\zeta^{-1}\|_{1/2} \|e^{ix\cdot \zeta}\|_{H^{1/2}(\partial \Omega)},\label{Bzetaeps}\\ &\leq C (1+|\zeta|)^2e^{3|\zeta|}.\label{psiestimate} \end{align} With the use of Lemma \ref{lemma2} we have for $0<\varepsilon<\varepsilon_2$ \begin{align} \|(B_\zeta^\varepsilon)^{-1}-B^{-1}_\zeta\|_{1/2}&=\|B_\zeta^{-1}[(I+A_\zeta^\varepsilon)^{-1}-I]\|_{1/2},\\ &\leq \|B_\zeta^{-1}\|_{1/2}\|(I+A_\zeta^\varepsilon)^{-1}[I-(I+A_\zeta^\varepsilon)]\|_{1/2},\\ &\leq \|B_\zeta^{-1}\|_{1/2}\|(I+A_\zeta^\varepsilon)^{-1}\|_{1/2}\|A_\zeta^\varepsilon\|_{1/2},\\ &\leq 2 C_1C_{2}^2 \varepsilon(1+R)^3e^{6R}. 
\end{align} Finally we obtain \begin{align} \|\psi^\varepsilon_\zeta - \psi_\zeta\|_{H^{1/2}(\partial \Omega)} &=\|[(B_\zeta^\varepsilon)^{-1}-B^{-1}_\zeta]e_\zeta\|_{H^{1/2}(\partial \Omega)},\\ &\leq \|(B_\zeta^\varepsilon)^{-1}-B^{-1}_\zeta\|_{1/2} \|e^{ix\cdot \zeta}\|_{H^{1/2}(\partial \Omega)},\\ &\leq 2 C_1C_{2}^2 \varepsilon(1+R)^4e^{7R},\label{psidifestimate} \end{align} for $0<\varepsilon<\varepsilon_2$. \end{proof} \subsection{Truncation of the scattering transform}\label{sec:trunc} We now show that fixing the magnitude of the complex frequency $|\zeta(\xi)|=(M(\varepsilon))^p$ with $p>3/2$ enables control over the proximity of the truncated scattering transform $\mathbf{t}^\varepsilon_M(\cdot,\zeta)$ to $\hat q$ for small noise levels. This choice is justified by the following result. \begin{lemma}\label{lemma4} Let $M(\varepsilon) = (-1/11 \log(\varepsilon))^{1/p}$ be a truncation radius depending on $\varepsilon$ and some exponent $p>3/2$. Fix $\xi \in \mathbb{R}^3$ with $|\xi|< M(\varepsilon)$, suppose $\zeta(\xi)\in \mathcal{V}_\xi$ with \begin{equation} |\zeta(\xi)|=(M(\varepsilon))^p=-\frac{1}{11}\log(\varepsilon) \end{equation} and let $\varepsilon_2$ be defined as in the proof of Lemma \ref{lemma3}. Further fix $q\in L^\infty(\Omega)$ corresponding to a $\gamma \in D(F)$. Then $\mathbf{t}^\varepsilon_M$ is well defined by \eqref{errtscat} for $0<\varepsilon<\varepsilon_2$ and \begin{equation} \lim_{\varepsilon \rightarrow 0} \|\mathbf{t}^\varepsilon_{M(\varepsilon)}-\hat q\|_{L^2(\mathbb{R}^3)}=0. \end{equation} \end{lemma} \begin{proof} We first prove well-definedness. Fix $|\xi|< M(\varepsilon)$ and note by the triangle inequality that \begin{equation}\label{trianglerelation} |\mathbf{t}^{\varepsilon}_{M(\varepsilon)}(\xi,\zeta(\xi))-\hat q(\xi)|\leq |\mathbf{t}^\varepsilon_{M(\varepsilon)}(\xi,\zeta(\xi))-\mathbf{t}(\xi,\zeta(\xi)) |+ |\mathbf{t}(\xi,\zeta(\xi))-\hat q(\xi)|.
\end{equation} By Lemma \ref{lemma3} there exists a unique solution $\psi^\varepsilon_\zeta$ to the perturbed boundary integral equation and hence $\mathbf{t}^\varepsilon_M$ is well defined. Using \eqref{psiestimate} and \eqref{psidifestimate}, we find the following, in which we set $R=R(\varepsilon)$, $M=M(\varepsilon)$ and $\zeta = \zeta(\xi)$ for simplicity of exposition, \begin{align} |\mathbf{t}^\varepsilon_M(\xi,\zeta)-\mathbf{t}(\xi,\zeta)| &= \left|\int_{\partial \Omega} e^{-ix\cdot(\xi+\zeta)}[(\Lambda^\varepsilon_\gamma-\Lambda_1)\psi_\zeta^\varepsilon(x)-(\Lambda_\gamma-\Lambda_1)\psi_\zeta(x)]d\sigma(x) \right|,\\ &\leq \|e ^{-ix\cdot(\xi+\zeta)}\|_{H^{1/2}(\partial \Omega)}\|\Lambda_\gamma-\Lambda_1\|_Y\|\psi_\zeta^\varepsilon-\psi_\zeta\|_{H^{1/2}(\partial \Omega)}\\ \label{testimation} &\phantom{=}\,\,+ \|e ^{-ix\cdot(\xi+\zeta)}\|_{H^{1/2}(\partial \Omega)}\|\Lambda_\gamma^\varepsilon-\Lambda_\gamma\|_Y\|\psi_\zeta^\varepsilon\|_{H^{1/2}(\partial \Omega)},\\ &\leq C(1+|\zeta|)e^{|\zeta|}\left[\varepsilon(1+|\zeta|)^4e^{7|\zeta|} + \varepsilon(1+|\zeta|)^2e^{3|\zeta|} \right ], \end{align} where we use the fact that $\|\Lambda_\gamma - \Lambda_1\|_Y \leq C$, where $C$ depends only on $\Pi$ by the continuity of the forward map $\gamma\mapsto \Lambda_\gamma$. Then, \begin{equation} |\mathbf{t}^\varepsilon_M(\xi,\zeta)-\mathbf{t}(\xi,\zeta)|\leq C\varepsilon e^{9|\zeta|}. \end{equation} Using \eqref{trianglerelation} and the property \eqref{lemmaont} we conclude for $|\xi|< M(\varepsilon)$ that \begin{equation}\label{eq:iestimate} |\mathbf{t}^{\varepsilon}_M(\xi,\zeta)-\hat q(\xi)|\leq C\varepsilon e^{9|\zeta|}+ C|\zeta|^{-1}. 
\end{equation} For the convergence claim, using the triangle inequality and \eqref{eq:iestimate} we find \begin{align} \|\mathbf{t}^\varepsilon_M-\hat q\|_{L^2(\mathbb{R}^3)}&\leq \|\mathbf{t}^\varepsilon_M-\hat q\|_{L^2(|\xi|< M)}+\|\hat q\|_{L^2(|\xi|\geq M)},\\ &\leq C(\varepsilon e^{9|\zeta|}+M^{-p})\left(\int_{0}^M r^2 \, dr\right)^{1/2}+\|\hat q\|_{L^2(|\xi|\geq M)},\\ & \leq C(\varepsilon e^{10|\zeta|}+M^{3/2-p})+\|\hat q\|_{L^2(|\xi|\geq M)},\\ &\leq C\varepsilon^{1/11}+C(-1/11\log(\varepsilon))^{3/2-p}+\|\hat q\|_{L^2(|\xi|\geq M)}, \end{align} for $0<\varepsilon<\varepsilon_2$. Since $q\in L^\infty(\Omega)$ is compactly supported in $\Omega$, we have $q\in L^2(\mathbb{R}^3)$, and hence the energy of the tail of $\hat q$ converges to zero as $M(\varepsilon)$ goes to infinity. The result follows as $p>3/2$. \end{proof} One may obtain an explicit decay rate for $\hat q$ by assuming a certain regularity of $q$. Note that the proof above also works with the choice $|\zeta| = K_1M^{p}+K_2$ for some $0<K_1<1$, $K_2>0$ and $p>3/2$. One may choose freely among such $|\zeta|$; $p=3/2$ is the critical exponent. We now prove that $\gamma^\varepsilon$ exists and is unique, and that the propagated reconstruction error tends to zero as $\varepsilon\rightarrow 0$, provided $\|q^\varepsilon-q\|_{L^2(\Omega)}$ is sufficiently small. This is possible in $H^2(\Omega)$ by a Neumann series argument and elliptic regularity. For the boundary value problem \begin{equation} \begin{aligned} (-\Delta+q^\varepsilon)u&=f \quad && \text { in } \Omega, \\ u&=0 \quad && \text { on } \partial \Omega, \end{aligned} \end{equation} with $f\in L^2(\Omega)$, we introduce the notation $L^\varepsilon: H^1_0(\Omega)\cap H^2(\Omega)\rightarrow L^2(\Omega)$, $L^\varepsilon: u\mapsto f$, defined for any $q^\varepsilon\in L^2(\Omega)$, and then note \begin{equation}\label{eq:finalstep} \gamma^\varepsilon = [(L^\varepsilon)^{-1}(-q^\varepsilon)+1]^2, \end{equation} whenever $(L^\varepsilon)^{-1}$ exists.
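Both invertibility arguments, for $B_\zeta^\varepsilon$ in Lemma \ref{lemma3} and for $L^\varepsilon$ in Lemma \ref{lemma5} below, rest on the same Neumann series fact: if $\|A\|<\frac12$ then $I+A$ is invertible with $\|(I+A)^{-1}\|<2$ and $(I+A)^{-1}=\sum_{k\geq 0}(-A)^k$. A small finite-dimensional sketch (matrix size, scaling and random seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A *= 0.4 / np.linalg.norm(A, 2)   # rescale so the operator norm is 0.4 < 1/2

inv = np.linalg.inv(np.eye(5) + A)
# truncated Neumann series: (I + A)^{-1} = sum_k (-A)^k
neumann = sum(np.linalg.matrix_power(-A, k) for k in range(60))

print(np.linalg.norm(inv, 2))            # < 2, as guaranteed by ||A|| < 1/2
print(np.linalg.norm(inv - neumann, 2))  # ~ 0, geometric convergence of the series
```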
\begin{lemma}\label{lemma5} Let $q=\Delta \gamma^{1/2}\gamma^{-1/2}$ be a potential with $\gamma\in D(F)$. Then there exists a $0<\varepsilon_3<1$ such that for $0<\varepsilon<\min(\varepsilon_2,\varepsilon_3)=:\varepsilon_0$ the boundary value problem \begin{equation}\label{bvp1} \begin{aligned} (-\Delta+q^\varepsilon)(\gamma^\varepsilon)^{1/2}&=0 \quad && \text { in } \Omega, \\ (\gamma^\varepsilon)^{1/2} &=1 \quad && \text { on } \partial \Omega, \end{aligned} \end{equation} has a unique solution in $H^2(\Omega)$. Furthermore the following inequality holds \begin{equation}\label{estimatelem5} \|\gamma^{1 / 2}-(\gamma^{\varepsilon})^{1 / 2}\|_{H^{2}(\Omega)} \leq C_4\|q-q^{\varepsilon}\|_{L^{2}(\Omega)}, \end{equation} where $C_4$ depends only on $\Pi$ and $\rho$. \end{lemma} \begin{proof} Note that $(-\Delta+q)^{-1}$ exists and is bounded from $L^2(\Omega)$ into $H^1_0(\Omega)\cap H^2(\Omega)$ with \begin{equation}\label{boundedsol} \|u\|_{H^{2}(\Omega)} \leq C\|f\|_{L^{2}(\Omega)}, \end{equation} by elliptic regularity \cite{evans2010a}. Here $C$ depends only on $\Pi$. We write \begin{equation}\label{construction} L^\varepsilon u = (-\Delta+q)[I+(-\Delta+q)^{-1}(q^\varepsilon-q)]u, \end{equation} and aim to bound $(-\Delta+q)^{-1}(q^\varepsilon-q)$ in $H^2(\Omega)$. For any $u\in H^2(\Omega)$ \begin{align} \|(-\Delta+q)^{-1}(q^\varepsilon-q)u\|_{H^2(\Omega)} \leq C \|q^\varepsilon-q\|_{L^2(\Omega)} \|u\|_{H^2(\Omega)}, \end{align} using \eqref{boundedsol} and Sobolev embedding theory. By Lemma \ref{lemma4}, there exists a $0<\varepsilon_3<1$ such that for all $0<\varepsilon<\min(\varepsilon_2,\varepsilon_3)$ \begin{equation} \|(-\Delta+q)^{-1}(q^\varepsilon-q)\|_{H^{2}(\Omega)\rightarrow H^{2}(\Omega)}\leq C\|q^\varepsilon-q \|_{L^2(\Omega)}<\frac{1}{2}. \end{equation} Hence $(L^\varepsilon)^{-1}$ exists and is uniformly bounded with respect to $0<\varepsilon < \varepsilon_0$.
Finally, since $\gamma \in L^\infty(\Omega)$ we have $(q^\varepsilon-q)\gamma^{1/2}\in L^2(\Omega)$, and by solving \begin{alignat}{2} L^\varepsilon(\gamma^{1/2}-(\gamma^\varepsilon)^{1/2})&=(q^\varepsilon-q)\gamma^{1/2} \quad && \text { in } \Omega, \\ \gamma^{1/2}-(\gamma^\varepsilon)^{1/2}&=0 \quad && \text { on } \partial \Omega, \end{alignat} we obtain the estimate \eqref{estimatelem5}. \end{proof} We conclude that $\gamma^\varepsilon$ of Method \ref{method:2} exists, is unique, and approximates $\gamma$ in the $H^2(\Omega)$-norm whenever $\varepsilon < \varepsilon_0$. \section{Extending the method to a regularization strategy}\label{sec:35} From the definition of an admissible regularization strategy it is clear that $\mathcal{R}_\alpha$ must be defined on all of $Y$ and not only on an $\varepsilon_0$-neighborhood of $F(\mathcal{D}(F))$. However, $(B_\zeta^\varepsilon)^{-1}$ and $(L^\varepsilon)^{-1}$ exist only for small enough $\varepsilon$. We address this by extending these operators to $(B_\zeta^\varepsilon)_\alpha^{\dagger}$ and $(L^\varepsilon)_\alpha^{\dagger}$, coinciding with $(B_\zeta^\varepsilon)^{-1}$ and $(L^\varepsilon)^{-1}$ for $\varepsilon < \varepsilon_0$, such that $\mathcal{R}_\alpha$ is continuous and well defined on $Y$. There are several ways to obtain such extensions; however, we follow \cite{Knudsen2009a} and construct explicit pseudoinverses by means of functional calculus. Define the normal operator \begin{equation} S^\varepsilon_\zeta:=(B_\zeta^\varepsilon)^\ast(B_\zeta^\varepsilon)\in\mathcal{L}(H^{1/2}(\partial \Omega)), \end{equation} where $(B_\zeta^\varepsilon)^\ast$ is the adjoint operator of $(B_\zeta^\varepsilon)\in \mathcal{L}(H^{1/2}(\partial \Omega))$. Similarly we define \begin{equation} T^\varepsilon:=(L^\varepsilon)^\ast(L^\varepsilon)\in\mathcal{L}(L^2(\Omega)).
\end{equation} Let $h_\alpha^1$ and $h_\alpha^2$ be two real functions defined for $0<\alpha<\infty$ as \begin{equation} h^i_\alpha(t) := \begin{cases} t^{-1} & \text{ for } t>\kappa_i(\alpha),\\ \kappa_i(\alpha)^{-1} & \text{ for } t\leq \kappa_i(\alpha), \end{cases} \end{equation} for $i=1,2$ with $\kappa_i(\alpha)=\frac{1}{4}r_i(\alpha)^2$, where, as we will see below, the estimates \eqref{estimatelem5} and \eqref{boundB} motivate the definition \begin{equation} r_i(\alpha) := \begin{cases} \frac{1}{C_2(1+\alpha^{-p})e^{2\alpha^{-p}}} & \text{ for } i=1,\\ \frac{1}{C_4} & \text{ for } i=2, \end{cases} \end{equation} with $p>3/2$. We define the $\alpha$-pseudoinverses ${(B_\zeta^\varepsilon)}_\alpha^{\dagger}$ of $B_\zeta^\varepsilon$ and ${(L^\varepsilon)}_\alpha^{\dagger}$ of $L^\varepsilon$ for any $0<\alpha<\infty$ as \begin{align} (B_\zeta^\varepsilon)_\alpha^{\dagger} &:= h^1_\alpha(S_\zeta^\varepsilon)(B_\zeta^\varepsilon)^\ast,\\ (L^\varepsilon)_\alpha^{\dagger} &:= h^2_\alpha(T^\varepsilon)(L^\varepsilon)^\ast, \end{align} where the operators $h^1_\alpha(S_\zeta^\varepsilon)$ in $\mathcal{L}(H^{1/2}(\partial \Omega))$ and $h^2_\alpha(T^\varepsilon)$ in $\mathcal{L}(L^2(\Omega))$ are defined in the sense of continuous functional calculus (see for example \cite{reed1980a,so2018a}) and depend continuously on $S_\zeta^\varepsilon$ and $T^\varepsilon$, respectively (see for example \cite[Lemma 3.1]{Knudsen2009a}). This implies that $\Lambda_\gamma^\varepsilon \mapsto (B_\zeta^\varepsilon)_\alpha^{\dagger}$ and $q^\varepsilon \mapsto (L^\varepsilon)_\alpha^{\dagger}$ are continuous mappings. Explicitly, for a self-adjoint operator $S:\mathcal{H}\rightarrow \mathcal{H}$ on a Hilbert space $\mathcal{H}$ we set \begin{equation}\label{spectraldecomp} h^i_\alpha(S) = \int_{\sigma(S)} h^i_\alpha(\lambda)\, dP(\lambda), \end{equation} where $\sigma(S)\subset \mathbb{R}$ denotes the spectrum of $S$, and $P$ is the associated spectral measure on $\sigma(S)$.
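In finite dimensions the $\alpha$-pseudoinverse reduces to an eigenvalue cutoff of the normal operator. The following sketch (the matrices and the threshold are our own illustrative choices; the operator-valued construction above is the actual definition) shows both regimes of $h_\alpha$, inversion above the threshold and clamping below it:

```python
import numpy as np

def alpha_pseudoinverse(B, kappa):
    """Sketch of B_dagger = h_alpha(B^T B) B^T: eigenvalues of the normal
    operator above kappa are inverted, the rest are clamped to 1/kappa."""
    lam, V = np.linalg.eigh(B.T @ B)
    h = 1.0 / np.maximum(lam, kappa)   # h_alpha(t): 1/t if t > kappa, else 1/kappa
    return (V * h) @ V.T @ B.T         # V diag(h) V^T B^T

B = np.diag([2.0, 3.0])
print(alpha_pseudoinverse(B, 0.25))    # equals inv(B): eigenvalues 4, 9 > 0.25
print(alpha_pseudoinverse(np.zeros((2, 2)), 0.25))  # zero matrix: clamped, no blow-up
```

On the invertible, well-conditioned example the pseudoinverse coincides with the true inverse, mirroring the identity $(B_\zeta^\varepsilon)_\alpha^{\dagger}=(B_\zeta^\varepsilon)^{-1}$ used below; on the degenerate example the output stays bounded by construction.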
\begin{method}\label{Method2} Regularized CGO reconstruction in three dimensions \begin{description} \item[\textbf{Step} $\mathbf{1_\alpha}$] Given $\alpha>0$, set $M=\alpha^{-1}$. For each $|\xi|<M$ take $\zeta(\xi)\in \mathcal{V}_\xi$ with $|\zeta(\xi)|=M^{p}$ for $p>3/2$ and define \begin{equation} \tilde{\psi}_\alpha:=(B_\zeta^\varepsilon)_\alpha^{\dagger}(e_\zeta|_{\partial \Omega}) \end{equation} and compute the truncated scattering transform $\tilde{\mathbf{t}}_\alpha(\xi,\zeta(\xi))$ for $\zeta(\xi)$ in $\mathcal{V}_\xi$ by \begin{equation}\label{errtscat} \tilde{\mathbf{t}}_\alpha(\xi, \zeta(\xi))= \begin{cases} \int_{\partial \Omega} e^{-i x \cdot(\xi+\zeta(\xi))}(\Lambda_{\gamma}^\varepsilon-\Lambda_{1})\tilde{\psi}_\alpha(x) d\sigma(x), & |\xi|< M,\\ 0, &|\xi|\geq M. \end{cases} \end{equation} \item[\textbf{Step} $\mathbf{2_\alpha}$] Define $\widehat{q_\alpha}(\xi):=\tilde{\mathbf{t}}_\alpha(\xi,\zeta(\xi))$ and compute the inverse Fourier transform to obtain $q_\alpha$. \item[\textbf{Step} $\mathbf{3_\alpha}$] Solve the boundary value problem \eqref{bvp1} by computing $(L^\varepsilon)_\alpha^{\dagger}(-q_\alpha)$ and set \begin{equation}\label{def:regstrat} \mathcal{R}_\alpha \Lambda_\gamma^\varepsilon := [(L^\varepsilon)_\alpha^{\dagger}(-q_\alpha)+1]^2. \end{equation} \end{description} \end{method} \begin{proof}[Proof of Theorem \ref{maintheorem}] Given $\Lambda_\gamma^\varepsilon$ in $Y$ we have \begin{align} |\tilde{\mathbf{t}}_\alpha(\xi, \zeta(\xi))| &\leq \left|\int_{\partial \Omega} e^{-ix\cdot(\xi+\zeta)}[(\Lambda^\varepsilon_\gamma-\Lambda_\gamma)\tilde{\psi}_\alpha(x)+(\Lambda_\gamma-\Lambda_1)\tilde{\psi}_\alpha(x)]d\sigma(x) \right|,\\ &\leq \|e ^{-ix\cdot(\xi+\zeta)}\|_{H^{1/2}(\partial \Omega)}\|\Lambda^\varepsilon_\gamma-\Lambda_\gamma\|_Y\|\tilde{\psi}_\alpha\|_{H^{1/2}(\partial \Omega)}\\ &\phantom{=}\,\,+ \|e ^{-ix\cdot(\xi+\zeta)}\|_{H^{1/2}(\partial \Omega)}\|\Lambda_\gamma-\Lambda_1\|_Y\|\tilde{\psi}_\alpha\|_{H^{1/2}(\partial \Omega)},\\ &<
\infty,\label{teruendelig} \end{align} for all $\xi \in \mathbb{R}^3$, since $(B_\zeta^\varepsilon)_\alpha^{\dagger}$ is bounded in $H^{1/2}(\partial\Omega)$. Then, by compact support, $\tilde{\mathbf{t}}_\alpha\in L^2(\mathbb{R}^3)$. It follows that the inverse Fourier transform of this object is well defined and hence the family of operators $\mathcal{R}_\alpha$ is well defined. Using the continuity of the maps $\Lambda_\gamma^\varepsilon \mapsto (B_\zeta^\varepsilon)_\alpha^{\dagger}$ and $q_\alpha \mapsto (L^\varepsilon)_\alpha^{\dagger}$, an estimation parallel to \eqref{testimation} and the linearity and boundedness of the inverse Fourier transform in $L^2(\mathbb{R}^3)$, it is clear that $\mathcal{R}_\alpha$ is a family of continuous mappings. Now recall from Lemma \ref{lemma2} and \eqref{Bzetaeps} that for $0<\varepsilon<\varepsilon_0$ we have that \begin{equation}\label{boundB} \|(B_\zeta^\varepsilon)\|_{1/2}^{-1} \leq \|(B_\zeta^\varepsilon)^{-1}\|_{1/2}\leq 2 C_{2} (1+|\zeta|)e^{2|\zeta|}. \end{equation} Set $|\zeta|=\alpha^{-p}$ and note \begin{equation} S_\zeta^\varepsilon \geq \frac{1}{4}r_1(\alpha)^2 I. \end{equation} By definition of the $\alpha$-pseudoinverse and \eqref{spectraldecomp} we have that $(B_\zeta^\varepsilon)_\alpha^{\dagger}=(B_\zeta^\varepsilon)^{-1}$ for $0<\varepsilon<\varepsilon_0$, and hence $\tilde{\psi}_\alpha=\psi_\zeta^\varepsilon$ is unique. It follows by Lemma \ref{lemma4} that $\tilde{\mathbf{t}}_\alpha(\cdot,\zeta(\cdot))$ is well defined and that $q_\alpha = q^\varepsilon$ converges to $q$ as $\varepsilon$ goes to zero. Similarly, for $0<\varepsilon<\varepsilon_0$ we have $(L^\varepsilon)_\alpha^{\dagger}=(L^\varepsilon)^{-1}$, and hence by Lemma \ref{lemma5} and the Sobolev embedding $H^2(\Omega)\subset C^0(\overline{\Omega})$, \eqref{reqstrong} is satisfied. Note also that the weaker requirement \eqref{reqweak} follows analogously. The property \eqref{alphaprop} is satisfied by \eqref{alphadef}.
\end{proof} A direct consequence of the truncation of the scattering transform is the following property of the reconstruction $\mathcal{R}_{\alpha(\varepsilon)} \Lambda_\gamma^\varepsilon$ for sufficiently small $\varepsilon$: the regularized reconstructions are as smooth as the domain permits. \begin{proposition} Suppose $\Lambda_\gamma^\varepsilon=\Lambda_\gamma+\mathcal{E}$ with $\|\mathcal{E}\|_Y\leq \varepsilon < \varepsilon_0$. Then $\mathcal{R}_{\alpha(\varepsilon)} \Lambda_\gamma^\varepsilon\in C^\infty(\overline{\Omega})$. \end{proposition} \begin{proof} Since $\tilde{\mathbf{t}}_\alpha(\cdot, \zeta(\cdot))\in L^1(\mathbb{R}^3)$ has compact support, it follows that $q_\alpha$ is smooth. Since $\partial \Omega$ is smooth, it follows that $\mathcal{R}_\alpha \Lambda_\gamma^\varepsilon\in C^\infty(\overline{\Omega})$ by elliptic regularity \cite{evans2010a}. \end{proof} \section{Computational methods}\label{sec:4} In this section we outline methods of representing and computing the Dirichlet-to-Neumann map numerically and consider the discretization of the boundary integral equations. We assume $\Omega = B(0,1)$ in order to utilize spherical harmonics in the representation of functions on $\partial \Omega$. \subsection{Representation and computation of the Dirichlet-to-Neumann map}\label{sec41} We consider the Hilbert space $H^s(\partial \Omega)$, $s>0$, defined as the space of all functions $f$ in $L^2(\partial \Omega)$ that satisfy \begin{equation}\label{h1norm1} \|f\|_{L^{2}(\partial \Omega)}^{2}+\|(-\Delta_S)^{s / 2} f\|_{L^{2}(\partial \Omega)}^{2}<\infty, \end{equation} where $(-\Delta_S)^{s / 2}$ is the fractional order spherical Laplace operator on the unit sphere.
Since spherical harmonics, say $\{Y_n^m\}_{n\in\mathbb{N}_0, |m|\leq n}$, constitute an orthonormal basis of $L^2(\partial \Omega)$ (see for example \cite{colton1992a}), we may expand $f \in L^2(\partial \Omega)$ as \begin{equation} f = \sum_{n=0}^\infty\sum_{m=-n}^n \langle f, Y^m_n \rangle Y^m_n, \qquad \langle f, Y^m_n \rangle = \int_{\partial \Omega} f(x) \overline{Y^m_n(x)}\, d\sigma(x). \end{equation} The spherical harmonics are eigenvectors of $(-\Delta_S)$; in particular, \begin{equation} \left(-\Delta_{S}\right)^{s / 2} Y=(n(n+1))^{s / 2} Y, \end{equation} for any spherical harmonic $Y$ of degree $n$. Then the requirement \eqref{h1norm1} gives rise to a characterization of $H^s(\partial \Omega)$, suitable for $s\in \mathbb{R}$, as those functions $f \in L^2(\partial \Omega)$ that satisfy \begin{equation} \sum_{n=0}^\infty\sum_{m=-n}^n (1+n^2)^s|\langle f, Y^m_n \rangle |^2 < \infty. \end{equation} See \cite[Chapter 1.7]{MR0350177} for a more general treatment and the case $s<0$. Thus we define the $H^s(\partial \Omega)$ inner product as \begin{equation} \langle f,g \rangle_s:=\langle f,g \rangle_{H^s(\partial \Omega)}=\sum_{n=0}^\infty\sum_{m=-n}^n w_s(n)\langle f,Y^m_n\rangle \overline{w_s(n) \langle g,Y^m_n\rangle}, \end{equation} where the multiplier functions are defined as \begin{equation} w_s(n) :=(1+n^2)^{s/2}, \qquad \text{for $n\in \mathbb{N}_0$, $s\in \mathbb{R}$}, \end{equation} and hence $\|f\|_{H^s(\partial \Omega)} = \langle f, f \rangle_s^{1/2}$. We build an orthonormal basis $\{\phi_{n,m}^s\}_{n\in \mathbb{N}_0, |m|\leq n}$ of $H^s(\partial \Omega)$ with \begin{equation} \phi_{n,m}^s = w_{-s}(n)Y^m_n, \end{equation} and hence any $g\in H^s(\partial\Omega)$ has the expansion \begin{equation} g = \sum_{n=0}^\infty\sum_{m=-n}^n \langle g, \phi_{n,m}^s \rangle_s \phi_{n,m}^s.
\end{equation} Consider the $L^2(\partial \Omega)$ orthogonal projection $P_N$ onto the space spanned by spherical harmonics of degree less than or equal to $N$, given by \begin{equation} P_Ng = \sum_{n=0}^N\sum_{m=-n}^n \langle g, Y^m_n\rangle Y^m_n. \end{equation} Note that $\langle g, Y^m_n\rangle$, as an integral over the unit sphere, may be approximated by coefficients $c_{n,m}(\underline{g})$ using Gauss-Legendre quadrature in $2(N+1)^2$ appropriately chosen quadrature points $\{x_k\}_{k=1}^{2(N+1)^2}$ on the unit sphere as in \cite{delbary2014a}. Here we denote $\underline{g} = (g(x_1),\hdots, g(x_{2(N+1)^2}))$. Define \begin{equation} L_N g := \sum_{n=0}^N\sum_{m=-n}^n c_{n,m}(\underline{g}) Y^m_n. \end{equation} We may approximate any operator $\Lambda:H^{s}(\partial \Omega)\rightarrow H^{-s}(\partial \Omega)$ using $Q$, a matrix in $\mathbb{C}^{2(N+1)^2\times 2(N+1)^2}$ defined by \begin{equation}\label{lambdaapprox} (\Lambda g)(x_k) \simeq [{ Q}\underline{g}]_k := \sum_{n=0}^N\sum_{m=-n}^n c_{n,m}(\underline{g}) (\Lambda Y_n^m)(x_k), \quad k=1,\hdots,2(N+1)^2. \end{equation} {From here it is clear that we can write ${ Q}$ as \begin{equation} {Q} = \widetilde{{Q}}{A}, \label{Atransform} \end{equation} where ${A}:\underline{g}\mapsto (c_{0,0}(\underline{g}),\hdots,c_{N,N}(\underline{g}))$ and $[\widetilde{{Q}}]_{k\ell} = (\Lambda Y_\ell)(x_k)$, where $Y_\ell$ is the $\ell$th spherical harmonic in the natural order. We can think of ${A}$ as the matrix that takes a point-cloud representation of a function on $\partial \Omega$ and gives the spherical harmonic representation.
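As a quick numerical sanity check (an illustrative sketch only, not part of the reconstruction code), one can compare the two quadratic forms above degree by degree: the one induced by the fractional spherical Laplacian, with eigenvalues $n(n+1)$, and the weighted characterization with $w_s(n)^2=(1+n^2)^s$. Their ratio is bounded above and below uniformly in $n$, which is exactly the asserted norm equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 25, 1.5  # truncation degree and smoothness index (illustrative values)

# random coefficients c_{n,m} of a band-limited f = sum c_{n,m} Y_n^m
coeffs = [rng.standard_normal(2 * n + 1) + 1j * rng.standard_normal(2 * n + 1)
          for n in range(N + 1)]

# ||f||_{L^2}^2 + ||(-Delta_S)^{s/2} f||_{L^2}^2, via the eigenvalues n(n+1)
hs_laplacian = sum(((1 + (n * (n + 1)) ** s) * np.abs(c) ** 2).sum()
                   for n, c in enumerate(coeffs))

# equivalent characterization with weights w_s(n)^2 = (1 + n^2)^s
hs_weights = sum(((1 + n ** 2) ** s * np.abs(c) ** 2).sum()
                 for n, c in enumerate(coeffs))

# per-degree ratio of the two weight sequences: bounded uniformly in n
ratios = [(1 + (n * (n + 1)) ** s) / (1 + n ** 2) ** s for n in range(N + 1)]
```

Since $1+n^2\leq n(n+1)$ for $n\geq1$ and $n(n+1)\leq 2n^2$, the ratios stay in $[1,\,1+2^s]$ for every degree, so the two norms are equivalent independently of the truncation.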
} Similarly to \cite{Knudsen2009a}, an approximation of the operator norm then takes the form \begin{equation}\label{normapprox} \|\Lambda\|_{s,-s} \simeq \sup_f \frac{\|{{\mathcal{Q}}}f\|_{\mathbb{C}^{(N+1)^2}}}{\|f\|_{\mathbb{C}^{(N+1)^2}}} = \|{{{\mathcal{Q}}}}\|_{N}, \end{equation} where $[{{{\mathcal{Q}}}}]_{ij} = \langle \Lambda \phi^s_{n,m}, \phi^{{-}s}_{n',m'} \rangle_{-s}$ with $i = n'^2+n'+m'+1$ and $j = n^2+n+m+1$. We may approximate \begin{align} \langle \Lambda \phi^s_{n,m}, \phi^{{-}s}_{n',m'} \rangle_{-s} &= w_{-s}(n)w_{-s}(n')\langle \Lambda Y_n^m, Y_{n'}^{m'}\rangle, \\ &\simeq w_{-s}(n)w_{-s}(n') {c_{n',m'}(\underline{\Lambda Y_{n}^m})}. \label{innerapprox} \end{align} {With $\mathcal{B}$ we denote the map that takes the matrix ${Q}$ and gives the approximation of ${{\mathcal{Q}}}$ defined by \eqref{innerapprox}}. For $\Lambda = \Lambda_\gamma$ we denote the approximation \eqref{lambdaapprox} by ${Q}_\gamma$. From \eqref{lambdaapprox} {it} is clear that to represent $\Lambda_\gamma$ we only need to compute $(\Lambda_\gamma Y_n^m)(x_k)$ in the quadrature points $x_k$. In this paper we compute $(\Lambda_\gamma Y_n^m)(x_k)$ efficiently by the boundary integral approach for piecewise constant conductivities given in \cite{delbary2014a}, an approach which, despite the lack of reconstruction theory, has been shown to perform well. \subsection{Noise model} We simulate a perturbation of the Dirichlet-to-Neumann map by adding Gaussian noise to ${Q}_\gamma$. We let \begin{equation} {Q}_\gamma^\varepsilon = {Q}_\gamma+\delta {E}, \label{noisemodel} \end{equation} where $\delta >0$ and the elements of the $2(N+1)^2\times 2(N+1)^2$ matrix ${E}$ are independent Gaussian random variables with zero mean and unit variance. We modify ${E}$ such that $\mathcal{B}{E}$ has its first row and column equal to zero, so that we may consider $\mathcal{B}{E}$ as an approximation of a linear and bounded operator $\mathcal{E}\in Y$.
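A minimal sketch of this noise model (with a generic random matrix standing in for ${Q}_\gamma$, the matrix $2$-norm standing in for the discrete operator norm $\|\cdot\|_{N}$, and the zeroing of the first row and column of $\mathcal{B}{E}$ omitted) shows how $\delta$ is calibrated to a target absolute noise level:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                        # smaller than the N = 25 used in the paper
n_pts = 2 * (N + 1) ** 2     # number of quadrature points on the sphere

Q_gamma = rng.standard_normal((n_pts, n_pts))  # stand-in for the DN matrix
E = rng.standard_normal((n_pts, n_pts))        # zero mean, unit variance entries

target_eps = 1e-3
# calibrate delta so that the norm of the perturbation matches target_eps
delta = target_eps / np.linalg.norm(E, 2)

Q_eps = Q_gamma + delta * E
rel_noise = np.linalg.norm(delta * E, 2) / np.linalg.norm(Q_gamma, 2)
```

The same calibration works with the paper's norm approximation \eqref{normapprox} in place of the matrix $2$-norm used here.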
Furthermore, we approximate $\|\mathcal{E}\|_Y$ using \eqref{normapprox} and \eqref{innerapprox}, and note that we can specify an absolute level of noise $\|\mathcal{E}\|_Y \approx \varepsilon$ by choosing $\delta$ appropriately. The relative noise level is then \begin{equation} \delta \frac{\|\mathcal{E}\|_Y}{\|\Lambda_\gamma\|_Y} \approx \delta \frac{\|{\mathcal{B}}{E}\|_{N}}{\|{\mathcal{B}}{Q}_\gamma\|_{N}}. \end{equation} {Note that the noise model in \cite{delbary2014a} scales each element of ${E}$ with the corresponding element of ${Q}_\gamma$. Noise models for electrode data simulation typically take the form $$V_j^\varepsilon = V_j+\delta_j E_j,$$ as in \cite{hamilton2020a}, where $V_j$ is the voltage vector corresponding to the $j$th current pattern, $\delta_j>0$ is a scaling parameter dependent on $V_j$ and $E_j$ is a Gaussian vector independent of $E_{j'}$ for $j\neq j'$. For our case such a noise model corresponds best to adding to $\widetilde{{Q}}_\gamma$ in \eqref{Atransform} a matrix $\widetilde{{E}}$ whose columns are $\delta_jE_j$. One may check by vectorizing ${A}^T \widetilde{{E}}^T$ that the corresponding ${E}$ of \eqref{noisemodel} consists of independent and identically distributed Gaussian vectors as rows. However, the elements of each row are now correlated with covariance matrix ${A}^T\mathrm{diag}(\delta){A}$.} {Finally, we define the signal-to-noise ratio as \begin{equation} \mathrm{SNR} = \frac{1}{(N+1)^2}\sum_{n=0}^N\sum_{m=-n}^n \frac{\|{Q}_\gamma \underline{Y_n^m}\|_{\mathbb{C}^{2(N+1)^2}}}{\delta\|{E} \underline{Y_n^m}\|_{\mathbb{C}^{2(N+1)^2}}}.
\end{equation} } \subsection{Solving the boundary integral equations} Following \cite{delbary2014a}, we discretize the perturbed boundary integral equations \eqref{bienoisy} by \begin{equation}\label{discbie} \left[I+(\mathcal{S}_{0} L_{N}+\mathcal{H}_{\zeta}^{N})(\Lambda_{\gamma}^\varepsilon-\Lambda_{1}) L_{N} \right] ((\psi_{\zeta}^{N})^\varepsilon|_{\partial \Omega})=e_{\zeta}|_{\partial \Omega}, \end{equation} where $\mathcal{H}_{\zeta}^{N}$ is the approximation of {the integral operator $\mathcal{S}_\zeta-\mathcal{S}_0$} using the Gauss-Legendre quadrature rule of order $N+1$ on the unit sphere in the aforementioned quadrature points $\{x_k\}_{k=1}^{2(N+1)^2}$. Analogously to \cite{delbary2012a,delbary2014a}, we obtain the following result regarding the convergence of the perturbed solutions $(\psi_\zeta^N)^\varepsilon$ of \eqref{discbie}. \begin{theorem} Suppose $D<|\zeta(\xi)|<-\frac{1}{6}\log\varepsilon_2$ and $\mathcal{E}$ is a linear bounded operator from $H^s(\partial \Omega)$ to $H^t(\partial \Omega)$ for all $s\geq 1/2$ and $t>s$. Then for all $s>3/2$, there exists $N_0 \in \mathbb{N}$ such that for all $N\geq N_0$ the operator $I+(\mathcal{S}_{0} L_{N}+\mathcal{H}_{\zeta}^{N})(\Lambda_{\gamma}^\varepsilon-\Lambda_{1}) L_{N}$ is invertible in $H^s(\partial \Omega)$. Furthermore, we have \begin{equation} \|(\psi_{\zeta}^{N})^\varepsilon - \psi_\zeta^\varepsilon\|_{H^s(\partial \Omega)}\leq \frac{C}{N^{s-3/2}}\|e_\zeta\|_{H^s(\partial\Omega)}. \end{equation} \end{theorem} \begin{proof} The result follows from a Neumann series argument as in Lemma 3.1 and Theorem 3.2 of \cite{delbary2014a}, since for $D<|\zeta(\xi)|<-\frac{1}{6}\log\varepsilon_2$ there exists a bounded inverse $(B_\zeta^\varepsilon)^{-1}$ by Lemma \ref{lemma3}. \end{proof} This result ensures that the solutions of the discretized perturbed boundary integral equations are unique and converge to the solutions of \eqref{bienoisy}.
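The Neumann series argument behind the invertibility can be sketched generically: if the discretized perturbation operator $K$ (playing the role of $(\mathcal{S}_{0}L_{N}+\mathcal{H}_{\zeta}^{N})(\Lambda_{\gamma}^\varepsilon-\Lambda_{1})L_{N}$) has norm less than one, then $I+K$ is invertible and the partial sums of $\sum_j(-K)^j e$ converge to the solution. The matrix below is a random contraction standing in for the actual discretization:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
# random stand-in for the discretized perturbation operator, scaled so that
# ||K|| = 0.5 < 1 and the Neumann series for (I + K)^{-1} converges
K = rng.standard_normal((n, n))
K *= 0.5 / np.linalg.norm(K, 2)

e = rng.standard_normal(n)               # stand-in for e_zeta on the boundary

psi = np.linalg.solve(np.eye(n) + K, e)  # direct solve of (I + K) psi = e

# partial sums of the Neumann series sum_j (-K)^j e converge to psi
approx = np.zeros(n)
term = e.copy()
for _ in range(60):
    approx += term
    term = -K @ term
```

With $\|K\|=\tfrac12$, sixty terms of the series agree with the direct solve far beyond the tolerances relevant in practice.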
\subsection{Choice of $|\zeta(\xi)|$ and truncation radius} It is clear from Method \ref{Method2} that we should set $|\zeta(\xi)|=M^{p}$ for some exponent $p>3/2$. Due to the high sensitivity of the CGO solutions with respect to $|\zeta(\xi)|$, we may choose $|\zeta(\xi)|$ differently in practice, although we will not necessarily have a regularization strategy in theory. One idea of \cite{delbary2012a} is to set $|\zeta(\xi)|$ minimal in the admissible set \eqref{zetaxieq}, that is \begin{equation}\label{fix} |\zeta(\xi)| = \frac{M}{\sqrt{2}}. \end{equation} A different idea is to choose $|\zeta(\xi)|$ independently for each $\xi$ such that $|\zeta(\xi)|$ is minimal, with $|\zeta(\xi)| = \frac{|\xi|}{\sqrt{2}}$. We take the critical choice $|\zeta(\xi)|=K_1M^{3/2}$ for some constant $0<K_1<1$ to keep $|\zeta|$ as small as possible within the boundaries of the theory. In practice we compute $\mathbf{t}_{M(\varepsilon)}(\xi,\zeta(\xi))$ in a $\xi$-grid of points $|\xi|\leq M$ as in \cite{delbary2014a}. The Shannon sampling theorem ensures that we can recover the inverse Fourier transform uniquely if we sample densely enough. We use the discrete Fourier transform in equidistant $\xi$- and $x$-grids in three dimensions, \begin{equation} \xi_k^j = -M+k\frac{2M}{K-1} \quad \text{ and } \quad x_n^j = -x_{\mathrm{max}}+n\frac{2x_{\mathrm{max}}}{K-1}, \end{equation} for $n,k=0,\hdots,K-1$, $j=1,2,3$ and some $x_{\mathrm{max}}$ determined by $K$ and $M$. Indeed, the discrete Fourier transform requires \begin{equation}\label{def:K} M= \frac{\pi (K-1)^2}{2Kx_{\mathrm{max}}} \end{equation} to recover $q^{\varepsilon}(x_n^j)$ for all $n=0,\hdots,K-1$, $j=1,2,3$. In practical applications, we do not know the noise level, in which case we choose $M$ and $K$ and consequently determine $x_{\mathrm{max}}$. Then we recover $q^{\varepsilon}$ in an appropriate finite element mesh of the unit ball using trilinear interpolation.
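The requirement \eqref{def:K} is precisely the standard compatibility condition $\Delta\xi\,\Delta x=2\pi/K$ between the spacings of a $K$-point DFT grid pair; a short check (with an illustrative value of $x_{\mathrm{max}}$) makes this explicit:

```python
import numpy as np

K = 12            # number of grid points per dimension, as in the experiments
x_max = 1.5       # illustrative stand-in for x_max chosen in practice

# truncation radius required by the discrete Fourier transform (eq. def:K)
M = np.pi * (K - 1) ** 2 / (2 * K * x_max)

j = np.arange(K)
xi_grid = -M + j * 2 * M / (K - 1)         # equispaced xi-grid on [-M, M]
x_grid = -x_max + j * 2 * x_max / (K - 1)  # equispaced x-grid on [-x_max, x_max]

d_xi = xi_grid[1] - xi_grid[0]
d_x = x_grid[1] - x_grid[0]
# this choice of M enforces exactly d_xi * d_x = 2*pi / K, the DFT
# compatibility condition between the frequency and spatial spacings
```
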
The discrete Fourier transform is computed efficiently with the use of the FFT \cite{frigo2005design} with complexity $\mathcal{O}(K^3\log{K^3})$. The problem of finding the optimal truncation radius given noisy data $\Lambda_\gamma^\varepsilon$ is largely open and is related to the problem of systematically choosing a regularization parameter for the regularized reconstruction of an inverse problem. In this paper, we choose the truncation radius by inspection for the simulated data. For further details on the implementation of the reconstruction algorithm we refer to \cite{delbary2012a,delbary2014a}. \section{Numerical results}\label{sec:5} We test Method \ref{method:2} as a regularization strategy. We are interested in whether the reconstruction converges to the true conductivity distribution as the noise level goes to zero, and likewise as the regularization parameter $\alpha$ goes to zero for a noise-free Dirichlet-to-Neumann map. To this end, we simulate a Dirichlet-to-Neumann map for a well-known phantom. \subsection{Test phantom} The piecewise constant heart-lungs phantom consists of two spheroidal inclusions and a ball inclusion embedded in the unit sphere with a background conductivity of $1$. The phantom is summarized in Table \ref{hl-phantom}. We compute and represent the Dirichlet-to-Neumann map and its noisy counterparts as described in Section \ref{sec41}. In particular, the forward map is computed using $2(N+1)^2$ boundary points on the unit sphere and spherical harmonics up to maximal degree $N=25$. \begin{table}[ht!]
\centering \resizebox{\textwidth}{!}{ \begin{tabular}{@{}lllll@{}} \toprule Inclusion & Center & Radii & Axes & Conductivity \\ \midrule Ball & $(-0.09,-0.55,0)$ & $r = 0.273$ & & 2 \\ \midrule Left spheroid & $0.55(-\sin(\frac{5\pi}{12}), \cos(\frac{5\pi}{12}), 0)$ & \begin{tabular}[c]{@{}l@{}}$r_1 = 0.468$,\\ $r_2 = 0.234$,\\ $r_3 = 0.234$\end{tabular} & \begin{tabular}[c]{@{}l@{}}$(\cos(\frac{5\pi}{12}),\sin(\frac{5\pi}{12}),0)$, \\ $(-\sin(\frac{5\pi}{12}),\cos(\frac{5\pi}{12}),0)$, \\ $(0,0,1)$\end{tabular} & 0.5 \\ \midrule Right spheroid & $0.45(\sin(\frac{5\pi}{12}), \cos(\frac{5\pi}{12}), 0)$ & \begin{tabular}[c]{@{}l@{}}$r_1 = 0.546$,\\ $r_2 = 0.273$,\\ $r_3 = 0.273$\end{tabular} & \begin{tabular}[c]{@{}l@{}}$(\cos(\frac{5\pi}{12}),-\sin(\frac{5\pi}{12}),0)$, \\ $(\sin(\frac{5\pi}{12}), \cos(\frac{5\pi}{12}), 0)$, \\ $(0,0,1)$\end{tabular} & 0.5 \\ \bottomrule \end{tabular} } \caption{Summary of the piecewise constant heart-lungs phantom consisting of three inclusions.} \label{hl-phantom} \end{table} \begin{figure} \caption{The piecewise constant heart-lungs phantom in a three-dimensional view.} \label{fig:phantom1} \end{figure} \subsection{Regularization in practice} We now consider the regularization strategy, Method \ref{method:2}, in practice. Alluding to \eqref{reqweak}, we test the reconstruction algorithm by keeping the test data fixed and varying the regularization parameter. \begin{figure} \caption{Cross sections $(x^3=0)$ of reconstructions using the regularized reconstruction algorithm with different choices of truncation radius $M$, with $K=12$ and $|\zeta(\xi)|=\frac{1}{4}M^{3/2}$.} \label{fig:reg1} \end{figure} In Figure \ref{fig:reg1}, we see cross-sectional plots of reconstructed conductivities for different truncation radii $M=\alpha^{-1}$. We use $|\zeta(\xi)|=\frac{1}{4} M^{3/2}$ as the critical choice such that $\zeta(\xi)\in \mathcal{V}_\xi$ for $M\geq 8$, and use the accurate Dirichlet-to-Neumann map with no added noise.
The figure shows increasing accuracy and contrast for increasing truncation radii. Similar to the findings of \cite{delbary2014a}, we experience failing reconstructions for large enough truncation radii, as the frequency data is dominated by exponentially amplified noise inherent to the finite-precision representation of $\Lambda_\gamma$. This happens since there is noise present in the representation of the Dirichlet-to-Neumann map, no matter how accurately it represents the true infinite-precision data. We see the effect of truncation in practice: low resolution, smaller dynamic range and more smoothness caused by the missing high frequency data. Though not immediately clear from this figure, the reconstructions slightly overshoot the conductivity dip of the resistive spheroidal inclusions, attaining values as small as 0.38. In addition, the reconstruction algorithm seems to work well in practice on piecewise constant conductivity distributions. In Figure \ref{fig:reg2}, we see cross-sectional plots of reconstructed conductivities using Dirichlet-to-Neumann maps with added noise and for fixed $|\zeta(\xi)|= \frac{1}{3\sqrt{2}}M^{3/2}$. {Here, $K_1$ is chosen such that $\zeta(\xi)$ is small and admissible for $M\geq 9$.} The truncation radii are chosen optimally by visual inspection. The figure shows reconstructions in the presence of noise of levels ranging from $\varepsilon=10^{-6}$ to $\varepsilon=10^{-3}$ in the Dirichlet-to-Neumann map. We see improving quality of reconstruction as the noise level decreases, in accordance with Definition \ref{def:reg2}. Beyond noise levels of $10^{-3}$, reconstruction is still feasible without the corruption of unstable noise, although the reconstructions need heavy regularization and start to lack visible features of the phantom. In Figure \ref{fig:noisyreg}, we see the conductivity reconstruction using noisy data with $\varepsilon=10^{-{2}}$, corresponding to approximately $1\%$ relative noise.
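The admissibility thresholds quoted above ($M\geq8$ for $K_1=\tfrac{1}{4}$ and $M\geq9$ for $K_1=\tfrac{1}{3\sqrt{2}}$) follow from requiring $K_1M^{3/2}\geq M/\sqrt{2}$, the minimal admissible magnitude \eqref{fix}; a two-line check recovers both:

```python
import numpy as np

def min_admissible_M(K1, p=1.5):
    # |zeta| = K1 * M^p is admissible when K1 * M^p >= M / sqrt(2),
    # i.e. when M^(p-1) >= 1 / (K1 * sqrt(2))
    return (1.0 / (K1 * np.sqrt(2.0))) ** (1.0 / (p - 1.0))

M_a = min_admissible_M(0.25)                   # choice |zeta| = (1/4) M^{3/2}
M_b = min_admissible_M(1.0 / (3.0 * np.sqrt(2.0)))  # choice |zeta| = M^{3/2}/(3 sqrt 2)
```
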
The resistive spheroidal inclusions start to connect {and the conductive spherical inclusion is not as accurately placed. The remaining intensity in the signal compared to the case $M=9.7$ in Figure \ref{fig:noisyreg} could suggest that additional regularization is needed.} \begin{figure} \caption{Cross sections $(x^3=0)$ of reconstructions using the regularized reconstruction algorithm on noisy Dirichlet-to-Neumann maps. {The noise levels correspond to relative noise levels of $\varepsilon \approx 0.1\%$ and the corresponding $\mathrm{SNR}$.}} \label{fig:reg2} \end{figure} \begin{figure} \caption{Regularized reconstruction using noisy Dirichlet-to-Neumann maps with $\varepsilon = 10^{-2}$.} \label{fig:partA} \label{fig:partB} \label{fig:noisyreg} \end{figure} \begin{figure} \caption{The truncation radii as predicted by theory, $M =(-1/11\log(\varepsilon))^{-1/p}$.} \label{fig:reg3} \end{figure} The truncation radii of the reconstructions in Figures \ref{fig:reg2} and \ref{fig:noisyreg} chosen by visual inspection are plotted and compared to the theoretically predicted truncation radius in Figure \ref{fig:reg3}. This comparison suggests the prediction is somewhat pessimistic and that the practical algorithm allows for lighter regularization in comparison to what the theoretical estimates portend. {However, the prediction and practical reconstructions are not directly comparable, since we should pick $|\zeta(\xi)|=K_1M^p$ with $p$ strictly larger than $3/2$ according to theory}. Finally, we note that the noise models utilized by \cite{delbary2014a} and \cite{hamilton2020a} give somewhat different results compared to our {unnormalized perturbation}. {The results also raise the question of how practical the reconstruction method is for more realistic data. Had we decreased the resolution of the basis of spherical harmonics to which voltages and currents are projected, the approximation error of highly oscillatory functions would increase.
In this case we can expect to pick the truncation radius smaller to get a stable reconstruction. Investigating the reconstruction method for electrode data is the subject of further study and is related to \cite{isaacson2004} for the two-dimensional D-bar method and \cite{hamilton2020a} for the three-dimensional so-called $\mathbf{t}^{\mathrm{exp}}$ approximation. Possible improvements to the truncation strategy include extending the support of $\mathbf{t}$ with prior information using the forward map as in \cite{MR3554880}. In addition, one could experiment with a truncation by thresholding as in \cite{MR3626801}.} \section{Conclusions} In this paper we provide and investigate a regularization strategy for the Calder\'on problem in three dimensions. The main result of the paper is Theorem \ref{maintheorem}, which shows that the algorithm defined by Method \ref{Method2} yields reconstructions approximating the true conductivity when using data corrupted by a sufficiently small perturbation. The proof relies on a gap in the magnitude of the complex frequency in which the existence of unique CGO solutions is guaranteed and the noise level allows a stable and unique solution to the boundary integral equation. The reconstructions from this strategy are regular as a result of the spectral filtering. Numerical results show the regularizing behavior of the reconstruction algorithm in practice and suggest that one can utilize higher frequency information in the data than suggested by the theory. The reconstructions of piecewise constant conductivity data show promise even in the case of $1\%$ relative noise. \section*{Acknowledgments} AKR and KK were supported by The Villum Foundation (grant no. 25893). \providecommand{\href}[2]{#2} \providecommand{\arxiv}[1]{\href{http://arxiv.org/abs/#1}{arXiv:#1}} \providecommand{\url}[1]{\texttt{#1}} \providecommand{\urlprefix}{URL } \end{document}
\begin{document} \title{Homogeneous number of free generators} \author{Menny Aka} \author{Tsachik Gelander} \author{Gregory A. So{\u\i}fer} \begin{abstract} We address two questions of Simon Thomas. First, we show that for any $n\geq3$ one can find a four-generated free subgroup of $SL_{n}({\mathbb Z})$ which is profinitely dense. More generally, we show that an arithmetic group $\Gamma$ which admits the congruence subgroup property has a profinitely dense free subgroup, with an explicit bound on its rank. Next, we show that the set of profinitely dense, locally free subgroups of such an arithmetic group $\Gamma$ is uncountable. \end{abstract} \maketitle \section{Introduction} Let $G$ be a simply-connected semisimple algebraic group defined over ${\mathbb Q}$ with a fixed embedding into $GL_{n}$. Let $\Gamma$ be an arithmetic subgroup of $G(\bar{{\mathbb Q}})$, i.e., a group commensurable to $G({\mathbb Z}):=G(\bar{{\mathbb Q}})\cap GL_{n}({\mathbb Z})$. Assume moreover that $G({\mathbb R})$ is non-compact. The aim of this paper is to show the following: \begin{thm} \label{thm:Main Theorem}Assume that $G$ admits the congruence subgroup property (see $\S$\ref{sub:CSP}), and let $d(\widehat{\Gamma})$ be the minimal number of generators of the profinite completion of $\Gamma$ (see $\S$\ref{sub:Basic-properties-profinite}). Then, there exists a free subgroup $F\subset\Gamma$ on at most $2+d(\widehat{\Gamma})$ generators which is profinitely dense, i.e., maps onto any finite quotient of $\Gamma$. \end{thm} Let $\alpha(\Gamma)$ be the minimal rank of a profinitely dense free subgroup of $\Gamma$. That is, Theorem \ref{thm:Main Theorem} claims that $\alpha(\Gamma)\leq d(\widehat{\Gamma})+2$. In \cite{SV2000} it is proved that $\Gamma$ as above admits a profinitely dense free subgroup of finite rank. Consequently, Simon Thomas asked whether one can find a uniform bound on $\alpha(SL_{n}({\mathbb Z})),n\geq3$.
It is known that $SL_{n}({\mathbb Z})$ is generated by two elements for all $n\ge2$ (see \cite{Tr62}) and that $SL_{n}({\mathbb Z})$ admits the Congruence Subgroup Property for $n\geq3$ (see $\S$\ref{sub:CSP}). Thus Theorem \ref{thm:Main Theorem} implies that for any $n\geq3$ there exists a free profinitely dense subgroup of $SL_{n}({\mathbb Z})$ of rank $\leq4$. In particular, given a family $\{\Gamma_{n}\}$ of such arithmetic groups, a uniform bound on $d(\widehat{\Gamma_{n}})$ will provide a uniform bound on $\alpha(\Gamma_{n})$. It is interesting to ask whether $\alpha(SL_{n}({\mathbb Z}))=2$ for all $n$. We note that there are arithmetically defined families such that $\alpha(\Gamma_{n})$ is not uniformly bounded. For example, let $\Delta_{n}:=SL_{n}({\mathbb Z})$ and for a rational prime $p$ let $\Delta_{n}(p):=\{\gamma\in\Delta_{n}:\gamma\equiv I(mod\, p)\}$. It is shown in \cite[Theorem 1.1]{LS76} that the finite quotient $\Delta_{n}(p)/\Delta_{n}(p^{2})$ is a vector space over ${\mathbb F}_{p}$ of dimension $n^{2}-1$, so any profinitely dense free subgroup has at least $n^{2}-1$ generators. That is, $\alpha(\Delta_{n}(p))\geq n^{2}-1$, and in particular $\alpha(\Delta_{n}(p))$ is not uniformly bounded. We remark that although it is shown in \cite{BG2007} that the profinite completion $\widehat{\Gamma}$ of any non-virtually solvable subgroup $\Gamma$ has a free dense subgroup of rank $d(\Gamma)$, it is in general impossible to find such a free group inside $\Gamma$ itself. For example, Fuchsian groups are LERF \cite{SC78} (i.e., every finitely generated subgroup is the intersection of the finite-index subgroups that contain it), hence cannot admit a finitely generated profinitely dense proper subgroup. We also address another question of Simon Thomas. Let $\mathfrak{U}$ be the set of locally free subgroups of $\Gamma$ containing a profinitely dense finitely generated subgroup.
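To make the phrase ``maps onto any finite quotient'' concrete in the smallest congruence quotient: the reductions mod $2$ of the elementary matrices generate all of $SL_{3}({\mathbb F}_{2})$, a group of order $(8-1)(8-2)(8-4)=168$. The brute-force closure below is only an illustration of this surjectivity and plays no role in the proofs:

```python
import numpy as np
from itertools import product

p, n = 2, 3

def key(m):
    # hashable representation of a matrix over F_p
    return tuple(map(tuple, m % p))

# reductions mod p of the elementary matrices I + E_ij (i != j),
# which generate SL_n(Z) and hence map to generators of SL_n(F_p)
gens = []
for i, j in product(range(n), repeat=2):
    if i != j:
        g = np.eye(n, dtype=int)
        g[i, j] = 1
        gens.append(g)

# breadth-first closure of the generated subgroup inside SL_n(F_p)
identity = np.eye(n, dtype=int)
seen = {key(identity)}
frontier = [identity]
while frontier:
    nxt = []
    for h in frontier:
        for g in gens:
            m = (h @ g) % p
            k = key(m)
            if k not in seen:
                seen.add(k)
                nxt.append(m)
    frontier = nxt
```

Since each generator is an involution over ${\mathbb F}_2$, closure under right multiplication already yields the whole generated subgroup.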
\begin{thm} \label{thm:Main Theorem maximal subgroups}The set $\mathfrak{U}_{m}$ of maximal elements of $\mathfrak{U}$ is uncountable. \end{thm} In essence, these theorems can be proved using techniques and results of \cite{MS81,BG2007}. The advantage of the following scheme of proof is that it is elementary, using simple tools from ergodic theory.\\ Following Tits \cite{Tits1972}, in order to find free subgroups we use dynamics on projective spaces; we review the relevant definitions and properties in $\S$\ref{sec:Dynamics-on-projective}. Tits' original result allows us to find a Zariski-dense free subgroup $\langle h_{1},h_{2}\rangle$ of $\Gamma$ of rank $2$. Assuming the Congruence Subgroup Property, the closure of $\langle h_{1},h_{2}\rangle$ in the profinite topology is of the form $\widehat{\Gamma'}$ for a finite-index subgroup $\Gamma'<\Gamma$, as we explain in $\S$\ref{sec:Profinite-completions}. In order to find a profinitely dense free subgroup, we will find so-called {}``ping-pong'' partners to $h_{1}$ and $h_{2}$ that will belong to specified cosets of $\Gamma'$ in $\Gamma$. We will need to add at most $d(\widehat{\Gamma})$ elements in order to construct a profinitely dense free subgroup. This will be done in two steps. The first step is to find elements in cosets with prescribed dynamics on the projective space. In this step we establish the main new technique of this paper; we use the mixing property of the action of $G$ on the homogeneous space $G/\Gamma$. This is done in $\S$\ref{sec:Elements-in-cosets}, where we also recall the necessary notions from ergodic theory. The second step is to use, with the necessary modifications, the mechanism of rooted free systems (which originates in \cite{MS81}) in order to inductively add {}``ping-pong'' partners with desirable properties. This is done in $\S$\ref{sec:Free-rooted-systems}.
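The classical prototype of a ping-pong pair is Sanov's pair $\left(\begin{smallmatrix}1&2\\0&1\end{smallmatrix}\right)$, $\left(\begin{smallmatrix}1&0\\2&1\end{smallmatrix}\right)$, which generates a free subgroup of $SL_{2}({\mathbb Z})$; this standard example only illustrates the ping-pong mechanism and is not the construction of the paper. One can check directly that no short reduced word in these generators is trivial:

```python
import numpy as np

# Sanov's generators and their inverses, labeled a, A = a^{-1}, b, B = b^{-1}
gens = {"a": np.array([[1, 2], [0, 1]]), "A": np.array([[1, -2], [0, 1]]),
        "b": np.array([[1, 0], [2, 1]]), "B": np.array([[1, 0], [-2, 1]])}
inverse = {"a": "A", "A": "a", "b": "B", "B": "b"}
identity = np.eye(2, dtype=int)

# enumerate all reduced words of length 1..6 together with their products
all_words = []
layer = [("", identity)]
for _ in range(6):
    nxt = []
    for w, m in layer:
        for letter, g in gens.items():
            if w and inverse[w[-1]] == letter:
                continue  # skip non-reduced words (immediate cancellation)
            nxt.append((w + letter, m @ g))
    all_words.extend(nxt)
    layer = nxt

# freeness predicts that no nontrivial reduced word equals the identity
trivial = [w for w, m in all_words if np.array_equal(m, identity)]
```

There are $4\cdot3^{k-1}$ reduced words of length $k$, so the loop inspects $1456$ words in total; by the ping-pong lemma none of them can be the identity.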
Using all the above ingredients, we conclude the proofs of Theorems \ref{thm:Main Theorem} and \ref{thm:Main Theorem maximal subgroups} in $\S$\ref{sec:Proof-main-thm}. \section{Dynamics on projective spaces\label{sec:Dynamics-on-projective}} \subsection{Proximal and Hyperbolic elements} Let $V$ be a vector space of dimension $n$ over a local field $K$, ${\mathbb P}(V)$ the associated projective space and $v\mapsto[v]$ the associated projection map. The group $GL(V)$ acts naturally on ${\mathbb P}(V)$ by $g[v]=[gv]$. An element $g\in\operatorname{GL}(V)$ is called \emph{hyperbolic} if it is semisimple and admits a unique eigenvalue of maximal absolute value and a unique eigenvalue of minimal absolute value (counting multiplicities). We denote by $\mathfrak{H}(G)$ the set of hyperbolic elements of $G$. For $g\in\mathfrak{H}(G)$, let $\{v_{1},\dots,v_{n}\}$ be a basis of eigenvectors such that $v_{1}$ corresponds to the unique maximal eigenvalue of $g$ and $v_{n}$ corresponds to the unique minimal eigenvalue of $g$. Note that although $g$ is not necessarily diagonalizable over $K$, $span(v_{1},\dots,v_{n-1})$ and $span(v_{2},\dots,v_{n})$ are defined over $K$. Indeed, this follows since there is a unique extension of the norm on $K$ to any algebraic extension of $K$ (see e.g. \cite[Proposition 6.4]{Neukirch}). We denote the following subsets of ${\mathbb P}(V)$ by \[ A^{+}(g)=[v_{1}],A^{-}(g)=[v_{n}] \] \[ B^{+}(g)=[span(v_{2},\dots,v_{n})],B^{-}(g)=[span(v_{1},\dots,v_{n-1})] \] and $A^{\pm}(g)=A^{+}(g)\cup A^{-}(g),B^{\pm}(g)=B^{+}(g)\cup B^{-}(g)$. Note that $A^{\pm}(h)\subset B^{\pm}(h)$. For further use we record some basic properties: \[ hA^{+}(g)=A^{+}(hgh^{-1}) \] and likewise for $A^{-},B^{+},B^{-}$.
Also \begin{equation} \forall g\in\mathfrak{H}(G)\quad A^{\pm}(g)=A^{\pm}(g^{n}),B^{\pm}(g)=B^{\pm}(g^{n})\label{eq:attractig same for powers} \end{equation} and \begin{equation} \forall g\in\mathfrak{H}(G)\quad A^{+}(g^{-1})=A^{-}(g).\label{eq:inverse} \end{equation} \subsection{Distance on projective spaces} In order to study separation properties of projective transformations, we consider the following, so-called \emph{standard metric} on ${\mathbb P}(V)$. For any $a=[v],b=[w]\in{\mathbb P}(V)$ let \[ d(a,b)=\frac{\Vert v\wedge w\Vert}{\Vert v\Vert\Vert w\Vert}. \] See \cite[Section 3]{BG2003} for the choices of the above norms and related properties. The reader should check that $d$ is well-defined and that $d$ induces the topology on ${\mathbb P}(V)$ that is inherited from the local field $K$. For any two sets $A,B\subset{\mathbb P}(V)$ we set $d(A,B)=min\{d(a,b):a\in A,b\in B\}.$ We also need the following notion of distance between sets. For any two sets $A,B\subset{\mathbb P}(V)$ we define the Hausdorff distance \[ d_{h}(A,B)=max\{\underset{a\in A}{sup}\,\underset{b\in B}{inf}d(a,b),\underset{b\in B}{sup}\,\underset{a\in A}{inf}d(a,b)\}. \] The only property of the Hausdorff distance that we are going to use is the following: Let $a\in{\mathbb P}(V)$ and $B,C\subset{\mathbb P}(V)$. Then $d(a,C)\leq d(a,B)+d_{h}(B,C)$. \subsection{Transversality} We call two elements $g,h\in\mathfrak{H}(G)$ \emph{transversal}, and denote $g\perp h$, if $A^{\pm}(g)\subset{\mathbb P}\setminus B^{\pm}(h)$ and $A^{\pm}(h)\subset{\mathbb P}\setminus B^{\pm}(g)$. If, moreover, $d(A^{\pm}(g),B^{\pm}(h))>\epsilon$ and $d(A^{\pm}(h),B^{\pm}(g))>\epsilon$, they are called \emph{$\epsilon$-transversal}. Clearly, any finite set of pairwise transversal elements is $\epsilon$-transversal for some $\epsilon>0$.
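For $K={\mathbb R}$ and $\dim V=3$, the standard metric can be computed with the cross product realizing the wedge, and the attracting point $A^{+}(g)$ of a hyperbolic element is visible by iteration; this numerical sketch is purely illustrative:

```python
import numpy as np

def proj_dist(v, w):
    # standard metric on P(R^3): d([v],[w]) = ||v wedge w|| / (||v|| ||w||),
    # with the wedge realized as the cross product in three dimensions
    return np.linalg.norm(np.cross(v, w)) / (np.linalg.norm(v) * np.linalg.norm(w))

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

d12 = proj_dist(e1, e2)                     # orthogonal lines are at distance 1
d_scaled = proj_dist(3.0 * e1, -5.0 * e2)   # well-defined: depends only on the lines

# a hyperbolic element: unique maximal and minimal eigenvalue moduli
g = np.diag([4.0, 2.0, 1.0])
x = np.array([1.0, 1.0, 1.0])
for _ in range(40):
    x = g @ x
    x /= np.linalg.norm(x)  # iterating g on P(R^3) drives [x] toward A^+(g) = [e1]
```
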
\section{Profinite completions and the congruence subgroup property\label{sec:Profinite-completions}} \subsection{Basic properties\label{sub:Basic-properties-profinite}} For background on profinite groups the reader can consult the book of Ribes and Zalesskii \cite{RZ2010}. For completeness, we record here the definition of the profinite completion of a group. We write $A<_{fi}B$ to mean that $A$ is a finite-index subgroup of $B$. Let $\Gamma$ be a finitely generated group. The profinite topology on $\Gamma$ is defined by taking as a fundamental system of neighborhoods of the identity the collection of all normal subgroups $N$ of $\Gamma$ such that $\Gamma/N$ is finite. One can easily show that a subgroup $H$ is open in $\Gamma$ if and only if $H<_{fi}\Gamma$. We can complete $\Gamma$ with respect to this topology to get \[ \widehat{\Gamma}:=\varprojlim\{\Gamma/N:N\vartriangleleft_{fi}\Gamma\}; \] this is the profinite completion of $\Gamma$. There is a natural homomorphism $i:\Gamma\rightarrow\widehat{\Gamma}$ given by $i(\gamma)=\varprojlim(\gamma N)$. A subgroup $\Lambda<\Gamma$ is profinitely dense in $\Gamma$ if and only if $i(\Lambda)$ is dense in $\widehat{\Gamma}$, which in turn is equivalent to the property that $\Lambda$ maps onto every finite quotient of $\Gamma$. The minimal number of elements of $\widehat{\Gamma}$ needed to generate a dense subgroup of $\widehat{\Gamma}$ is denoted by $d(\widehat{\Gamma})$. \begin{lem} \label{lem:subgroups of profinite completion} Let $\Gamma$ be a residually finite group. There is a one-to-one correspondence between the set ${\mathcal X}$ of all cosets of finite-index subgroups of $\Gamma$ and the set ${\mathcal Y}$ of all cosets of open subgroups of $\widehat{\Gamma}$, given by \[ X\mapsto cl(X)\quad(X\in{\mathcal X}) \] \[ Y\mapsto Y\cap\Gamma\quad(Y\in{\mathcal Y}) \] where $cl(X)$ denotes the closure of $X$ in $\widehat{\Gamma}$.
Moreover, this correspondence maps normal subgroups to normal subgroups.\end{lem} \begin{proof} See \cite[Window: profinite groups, Proposition 16.4.3]{LubotzkySegal} for a proof of the correspondence between finite-index subgroups of $\Gamma$ and the set ${\mathcal Y}$ of all open subgroups of $\widehat{\Gamma}$. The natural generalization to cosets is immediate. \end{proof} \subsection{The congruence subgroup property\label{sub:CSP}} We give here a brief survey; for proofs of the assertions below the reader can consult \cite{PR08}. For arithmetic groups, which are by definition commensurable to groups of the form $G({\mathcal O}_{k})$, we can give another interesting topology which is weaker than the profinite topology. As stated in the beginning, $G$ is a simply-connected semisimple algebraic group defined over a number field $k$ and equipped with an implicit $k$-embedding into $GL_{n}$. For any non-zero ideal $I$ of ${\mathcal O}_{k}$ let $K_{I}$ be the kernel of the map \begin{equation} G({\mathcal O}_{k})\rightarrow G({\mathcal O}_{k}/I).\label{eq:Cong map} \end{equation} The completion of $G({\mathcal O}_{k})$, with respect to the topology in which these kernels form a system of neighborhoods of the identity, is called the congruence completion. We denote this completion by $\overline{G({\mathcal O}_{k})}$. Under the assumptions we made on $G$, the strong approximation theorem holds for $G$. This means that the maps in \eqref{eq:Cong map} are surjective for all but finitely many ideals $I$ (see \cite[Theorem 7.12]{PR94}). Using this, one can show that $\overline{G({\mathcal O}_{k})}$ is naturally isomorphic to the open compact subgroup $\prod_{v\in V^{f}}G({\mathcal O}_{v})$ of $G({\mathbb A}_{k,f})$, where $V^{f}$ denotes the finite valuations of $k$ and ${\mathbb A}_{k,f}$ denotes the ring of finite Adeles (see \cite[section 1.2]{PR94}).
As the profinite topology is stronger, we have the following short exact sequence \[ 1\rightarrow C(G)\rightarrow\widehat{G({\mathcal O}_{k})}\rightarrow\overline{G({\mathcal O}_{k})}\rightarrow1, \] and $C(G)$ is called the congruence kernel of $G$ w.r.t. $S$. If $C(G)$ is finite, we say that $G$ admits the congruence subgroup property. Finally, if $\Gamma$ is commensurable to $G({\mathcal O}_{k})$ then the completion of $\Gamma$ w.r.t. the family $\{K_{I}\cap\Gamma\}_{I\vartriangleleft{\mathcal O}_{k}}$ is called the congruence completion of $\Gamma$ and is denoted by $\overline{\Gamma}$. One has a similar short exact sequence \[ 1\rightarrow C(\Gamma)\rightarrow\widehat{\Gamma}\rightarrow\overline{\Gamma}\rightarrow1, \] and one says that $\Gamma$ has the congruence subgroup property if $|C(\Gamma)|<\infty$. Assuming $\Gamma$ has the congruence subgroup property, the closure $\overline{\Lambda}<\widehat{\Gamma}$ of a subgroup $\Lambda<\Gamma$ has finite index if and only if for all but finitely many $I\vartriangleleft{\mathcal O}_{k}$, $\Lambda$ maps onto $\Gamma/(K_{I}\cap\Gamma)$. As stated above, by \cite{Weis84,PR94} every Zariski dense subgroup of $G(\bar{k})$ satisfies the last condition. \section{Elements in cosets with prescribed dynamics\label{sec:Elements-in-cosets}} In \cite[Theorem 3]{Tits1972}, Tits constructs a strongly irreducible representation (i.e.\ no finite union of proper subspaces is invariant) $\rho:G(k)\rightarrow GL_{d}(K)$ for some $d\in{\mathbb N}$ and some local field $K$. From now on, we identify $G$ (also topologically) with its image under $\rho$. We let ${\mathbb P}={\mathbb P}(K^{d})$ and for $p\in{\mathbb P}$ we write $gp:=\rho(g)p$.
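To build intuition for the dynamics exploited below (the attracting fixed points $A^{\pm}(g)$ of a hyperbolic element and the contraction of projective neighborhoods by its powers), here is a small numerical sketch of the projective action; the concrete matrix is our own choice for illustration and is not taken from the paper:

```python
# Hedged numerical sketch (our own illustration): for a hyperbolic
# matrix g, the powers g^n attract a generic point of the projective
# line P(R^2) to the direction A^+(g) of the top eigenvector.
import math

def act(g, p):
    """Projective action gp := rho(g)p, with p kept as a unit vector in R^2."""
    x = g[0][0] * p[0] + g[0][1] * p[1]
    y = g[1][0] * p[0] + g[1][1] * p[1]
    r = math.hypot(x, y)
    return (x / r, y / r)

def proj_dist(p, q):
    """Distance on P^1: sine of the angle between the lines R.p and R.q."""
    return abs(p[0] * q[1] - p[1] * q[0])

g = ((2.0, 1.0), (1.0, 1.0))          # hyperbolic: eigenvalues (3 +/- sqrt 5)/2
phi = (1.0 + math.sqrt(5.0)) / 2.0
norm = math.hypot(phi, 1.0)
a_plus = (phi / norm, 1.0 / norm)     # top eigendirection, i.e. A^+(g)

p = (1.0, 0.0)                        # a generic starting direction
for n in range(60):
    p = act(g, p)
print(proj_dist(p, a_plus) < 1e-12)   # p has converged to A^+(g)
```

The convergence rate is governed by the ratio of the two eigenvalues, which is also what makes the sets $Wh^{n}W$ in the first lemma below behave like high powers of $h$.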
We begin with several lemmata: \begin{lem} \label{lem:Wg^nW lemma}Let $g_{0}\in\mathfrak{H}(G)$, and for $j=+,-$ set \[ X_{1}^{j}(\epsilon)=\{g\in\mathfrak{H}(G):d(A^{j}(g_{0}),A^{j}(g))<\epsilon\}, \] \[ X_{2}^{j}(\epsilon)=\{g\in\mathfrak{H}(G):d_{h}(B^{j}(g_{0}),B^{j}(g))<\epsilon\}. \] Then, there exists $\epsilon>0$ such that for any $i=1,2$, $j=+,-$ and $h\in X_{i}^{j}(\epsilon)$, there exist a symmetric neighborhood of the identity $W\subset G$ and $N\in{\mathbb N}$ such that for all $n>N$ we have $Wh^{n}W\subset X_{i}^{j}(\epsilon)$.\end{lem} \begin{proof} We start with $i=1$ and $j=+$. We first choose $\epsilon$ such that \[ \bigcup_{g\in X_{1}^{+}(\epsilon)}B^{+}(g) \] and $B(A^{+}(g_{0}),\epsilon)$ (the ball of radius $\epsilon$) are disjoint, and denote $X_{1}^{+}=X_{1}^{+}(\epsilon)$. Let $h\in X_{1}^{+}$ be an arbitrary element. We can choose a neighborhood $U\subset B(A^{+}(g_{0}),\epsilon)$ of $A^{+}(g_{0})$, and a neighborhood $W$ of the identity in $G$, such that $WU=\{wu:w\in W,u\in U\}\subset B(A^{+}(g_{0}),\epsilon)$ and $A^{+}(h)\in U$. By choosing $W$ even smaller we can also find $U'\subset U$ such that the following are satisfied: \begin{align*} & A^{+}(h)\in U',\\ & WU'=\{wu:w\in W,u\in U'\}\subset U,\\ & WU\subset{\mathbb P}\setminus B^{+}(h),\\ & \forall n\quad Wh^{n}W\subset\mathfrak{H}(G). \end{align*} Then, there exists $N\in{\mathbb N}$ such that $h^{n}(WU)\subset U'$ for all $n>N$, and it follows that $(Wh^{n}W)U\subset U$. Therefore, any element $\tilde{h}\in Wh^{n}W$ with $n>N$ has a fixed point inside $U$, which is necessarily $A^{+}(\tilde{h})$. Thus, $A^{+}(\tilde{h})\in U$, so by the choice of $U$ we have $d(A^{+}(\tilde{h}),A^{+}(g_{0}))<\epsilon$, which implies that $Wh^{n}W\subset X_{1}^{+}$ for all $n>N$. Using equation \ref{eq:inverse}, a proof along the exact same lines shows the case $i=1,j=-$.
For the case $i=2$, one can use the duality of hyperplanes and points in the projective space and proceed with the same proof as above. Alternatively, we can apply the same proof with the natural action on ${\mathbb P}(\wedge^{d-1}K^{d})$, where hyperplanes of the form $B^{+}(h)$ for some $h\in\mathfrak{H}(G)$ correspond to points. \end{proof} \begin{lem} \label{lem:adding arbitrary trans elt}Let $g_{1},\dots,g_{n}$ be hyperbolic elements of $G$. There exists $g\in\mathfrak{H}(G)$ with $g\perp g_{i}$ for all $i=1,\dots,n$.\end{lem} \begin{proof} Let $U^{+}:=\{g\in G:gA^{+}(g_{1})\notin B^{\pm}(g_{j}),\forall j=1,\dots,n\}$; we claim that $U^{+}$ is a non-empty Zariski open set. Assume not; then $\cup_{i=1}^{n}B^{\pm}(g_{i})$ would contain an invariant set which is a union of proper subspaces, contradicting our assumption. Similarly, $U^{-}:=\{g\in G:g^{-1}A^{-}(g_{j})\notin B^{\pm}(g_{1}),\forall j=1,\dots,n\}$ is a non-empty Zariski open set. As $G$ is Zariski connected, $U=U^{+}\cap U^{-}$ is a non-empty open set, and for any $g\in U$ we have $A^{\pm}(gg_{1}g^{-1})\notin B^{\pm}(g_{j})$ for all $j=1,\dots,n$. A similar argument shows that there exists an open $V\subset G$ such that for any $g\in V$ we have $g^{-1}A^{\pm}(g_{j})\notin B^{\pm}(g_{1})\Leftrightarrow A^{\pm}(g_{j})\notin B^{\pm}(gg_{1}g^{-1})$. Thus, any element of the form $gg_{1}g^{-1}$ for some $g\in U\cap V$ is transversal to all $g_{j}$, as needed. \end{proof} The aim of this section is to find such an element $g$ in a given coset of $G/\Lambda$, where $\Lambda$ is a lattice in $G$. To this end, we begin by ``approximating'' the transversal element we get from Lemma \ref{lem:adding arbitrary trans elt}. \begin{lem} \label{lem:W neigh}Given $\epsilon>0$, assume that $\{g_{1},\dots,g_{s},g\}$ is a set of pairwise $\epsilon$-transversal hyperbolic elements.
There exist an integer $N=N(g,\epsilon)$ and a symmetric neighborhood $W=W(g,\epsilon)$ of the identity in $G$ such that if $n>N$ then for any $h\in Wg^{n}W$, $\{g_{1},\dots,g_{s},h\}$ is a set of pairwise $\frac{\epsilon}{2}$-transversal hyperbolic elements.\end{lem} \begin{proof} We first claim that any $h\in\mathfrak{H}(G)$ with \begin{equation} \begin{alignedat}{1}d(A^{+}(h),A^{+}(g))<\frac{\epsilon}{2},\quad & d(A^{-}(h),A^{-}(g))<\frac{\epsilon}{2}\\ d_{h}(B^{+}(h),B^{+}(g))<\frac{\epsilon}{2},\quad & d_{h}(B^{-}(h),B^{-}(g))<\frac{\epsilon}{2} \end{alignedat} \label{eq:close to g} \end{equation} is $\frac{\epsilon}{2}$-transversal to any element which is $\epsilon$-transversal to $g$. Indeed, say $g'$ is $\epsilon$-transversal to $g$; then we have for example: \begin{align*} \epsilon<d(A^{+}(g),B^{\pm}(g'))\leq d(A^{+}(g),A^{+}(h))+d(A^{+}(h),B^{\pm}(g'))\\ \epsilon<d(A^{\pm}(g'),B^{+}(g))\leq d(A^{\pm}(g'),B^{+}(h))+d_{h}(B^{+}(h),B^{+}(g)) \end{align*} so using \ref{eq:close to g} we see that $g'$ and $h$ are $\frac{\epsilon}{2}$-transversal. It follows readily from Lemma \ref{lem:Wg^nW lemma} that there exist a symmetric neighborhood $W$ of the identity of $G$ and $N=N(g,\epsilon)\in{\mathbb N}$ such that for any $n>N$, any $h\in Wg^{n}W$ is hyperbolic and satisfies \ref{eq:close to g}. \end{proof} Before proceeding to the main proposition of this section we recall some notions from ergodic theory. By the Borel--Harish-Chandra theorem (see \cite[\S 4.6]{PR94}), arithmetic subgroups are lattices. By definition, $\Lambda$ is a lattice in $G$ if $\Lambda$ is discrete and $G/\Lambda$ carries a finite $G$-invariant measure which we denote by $\mu$. The action of $G$ on $G/\Lambda$ allows us to use techniques from ergodic theory. In particular, for semisimple groups $G$ as in our case, we have the vanishing theorem of Howe--Moore (see \cite[Chapter III]{BM2000} and the references therein).
It states that for any $g\in G$ with the property that for any compact subset $K\subset G$ there exists $M$ such that $g^{n}\notin K$ for all $n>M$, we have \begin{equation} \lim_{n\rightarrow\infty}\mu(A\cap g^{n}B)=\mu(A)\mu(B)\label{eq:mixing property} \end{equation} for any measurable sets $A,B\subset G/\Lambda$. \begin{prop} \label{prop:trans elmt in coset}Let $\Lambda$ be a lattice in $G$, $\{g_{1},\dots,g_{s}\}$ a set of pairwise transversal hyperbolic elements, and $x\in G$. Then there exists $h\in x\Lambda$ such that $\{g_{1},\dots,g_{s},h\}$ is a set of pairwise transversal hyperbolic elements.\end{prop} \begin{proof} Using Lemma \ref{lem:adding arbitrary trans elt} we find some $g\in\mathfrak{H}(G)$ such that $\{g_{1},\dots,g_{s},g\}$ is a set of pairwise $\epsilon_{0}$-transversal hyperbolic elements for some $\epsilon_{0}>0$. In order to finish the proof, we only need to find $h\in x\Lambda$ which also satisfies $h\in Wg^{n}W$ for some $n>N$, where $W=W(g,\epsilon_{0}),N=N(g,\epsilon_{0})$ are supplied by Lemma \ref{lem:W neigh}. To this end, we consider the homogeneous space $G/\Lambda$ equipped with the unique left invariant probability measure $\mu$. As $g\in\mathfrak{H}(G)$, for any compact subset $K\subset G$ there exists $M$ such that $g^{n}\notin K$ for all $n>M$. Therefore, by the Howe--Moore theorem stated above, we have \[ \lim_{n\rightarrow\infty}\mu(g^{n}W\Lambda\cap Wx\Lambda)=\mu(W\Lambda)\mu(Wx\Lambda)>0 \] so for large enough $n$, and in particular for some $n>N$, $g^{n}W\Lambda\cap Wx\Lambda\neq\emptyset$. This yields some $w_{1},w_{2}\in W,\gamma\in\Lambda$ with $g^{n}w_{1}=w_{2}x\gamma$, so $h:=x\gamma=w_{2}^{-1}g^{n}w_{1}$, as desired.
\end{proof} \section{Free $g_{0}$-rooted systems\label{sec:Free-rooted-systems}} \begin{defn} Given $g_{0}\in\mathfrak{H}(G)$, a tuple $\{g_{i}\}_{i=1}^{s}\subset\mathfrak{H}(G)$ is called a $g_{0}$-rooted free system if there exist open sets $\{X_{i}\}_{i=0}^{s}$ of ${\mathbb P}$ satisfying: \end{defn} \begin{enumerate} \item $\{X_{i}\}_{i=0}^{s}$ are pairwise disjoint, \item $A^{\pm}(g_{i})\subseteq X_{i}\subset\overline{X_{i}}\subset{\mathbb P}\setminus B^{\pm}(g_{0})$ for all $i=0,\dots,s$, \item $A^{\pm}(g_{0})\subseteq X_{0}\subset\overline{X_{0}}\subset{\mathbb P}\setminus\cup_{i=1}^{s}B^{\pm}(g_{i})$, \item $g_{i}(\overline{X_{j}})\subset X_{i}$ for any $i,j\in\{0,\dots,s\}$. \end{enumerate} Note that by the so-called ping-pong Lemma \cite[Proposition 1.1]{Tits1972}, the elements $g_{0},\dots,g_{s}$ are independent, i.e.\ the group $\langle g_{0},g_{1},\dots,g_{s}\rangle$ is free. The following lemma exemplifies how one can use $g_{0}$-rooted systems to enlarge a given independent set of elements. \begin{lem} \label{lem:Adding element with infinite occurences}Assume we are given a $g_{0}$-rooted free system $\{g_{1},\dots,g_{s}\}$, a subset $F\subset G$ and an element $h\in F\cap\mathfrak{H}(G)$ such that \begin{itemize} \item the set $M_{1}=\{n:g_{0}^{n}hg_{0}^{-n}\in F\}$ is infinite, \item the sets $M_{2,l}=\{n:g_{0}^{l}h^{n}g_{0}^{-l}\in F\}$ are infinite whenever $l\in M_{1}$, \item $h\perp g_{i}$ for $i=0,\dots,s$. \end{itemize} Then, there exist $k,n_{0},n_{1}\in{\mathbb N}$ such that $g_{s+1}:=g_{0}^{n_{0}}h^{n_{1}}g_{0}^{-n_{0}}\in F$ and $\{g_{1},\dots,g_{s},g_{s+1}\}$ is a $g_{0}^{k}$-rooted free system.\end{lem} \begin{proof} Let $\{X_{i}\}_{i=0}^{s}$ be the sets showing that $\{g_{1},\dots,g_{s}\}$ is a $g_{0}$-rooted free system and let $h_{n}:=g_{0}^{n}hg_{0}^{-n}$.
As $g_{0}\perp h$, there exists $N_{1}\in{\mathbb N}$ such that for any $n>N_{1}$ the following holds: \begin{equation} A^{\pm}(h_{n})=g_{0}^{n}(A^{\pm}(h))\subset X_{0}\label{eq:in X0-1} \end{equation} and for any $i=1,\dots,s$ we have $g_{0}^{-n}\overline{X_{i}}\subset{\mathbb P}\setminus B^{\pm}(h)$, which is equivalent to \begin{equation} \overline{X_{i}}\subset{\mathbb P}\setminus g_{0}^{n}B^{\pm}(h)={\mathbb P}\setminus B^{\pm}(h_{n}).\label{eq: trans to i-1} \end{equation} This shows that $\{g_{1},\dots,g_{s},h_{n}\}$ are pairwise transversal for all $n>N_{1}$. Furthermore, it is easy to see that $g_{0}$ and $h_{n}$ are transversal for any $n\in{\mathbb N}$. By assumption, there are arbitrarily large $n\in{\mathbb N}$ for which $h_{n}\in F$, so we can choose $n_{0}>N_{1}$ such that $h_{n_{0}}\in F$. We now define open subsets \begin{equation} \{Y_{i}\}_{i=0}^{s+1}\label{eq:Y_i's} \end{equation} that will show that for some $n_{1}$, $\{g_{1},\dots,g_{s},h_{n_{0}}^{n_{1}}\}$ is a $g_{0}^{k}$-rooted free system ($n_{1}$ and $k$ will be defined momentarily).
Let $Y_{i}=X_{i}$ for $i=1,\dots,s$ and let $Y_{s+1}$ be an open subset of ${\mathbb P}$ such that $Y_{s+1}\subset\overline{X_{0}}$ and \begin{equation} A^{\pm}(h_{n_{0}})\subset Y_{s+1}\subset\overline{Y_{s+1}}\subset{\mathbb P}\setminus B^{\pm}(g_{0}).\label{eq:n0 away from g0-1} \end{equation} Furthermore, we let $Y_{0}$ be an open subset of ${\mathbb P}$ such that \begin{equation} A^{\pm}(g_{0})\subset Y_{0}\subset\overline{Y_{0}}\subset X_{0}\,\text{ and }\,\overline{Y_{0}}\subset{\mathbb P}\setminus B^{\pm}(h_{n_{0}}).\label{eq:def of Y0-1} \end{equation} Such sets exist since we have seen that $\{g_{0},\dots,g_{s},h_{n_{0}}\}$ are pairwise transversal; this also implies that there exists $N_{2}$ such that for any integer $n$ with $|n|>N_{2}$ we have \begin{equation} h_{n_{0}}^{n}(Y_{i})\subset Y_{s+1}\,\,\text{for }i=0,\dots,s.\label{eq: large power to Xs-1} \end{equation} Finally, by assumption, the set \begin{equation} M:=\{n\in{\mathbb N}:h_{n_{0}}^{n}\in F\}\label{eq: infinite set of powers in the coset-1} \end{equation} is infinite. Let $g_{s+1}=h_{n_{0}}^{n_{1}}$ for some $n_{1}\in M$ with $n_{1}>N_{2}$. The above transversality also implies that there exists $k\in{\mathbb N}$ such that \begin{equation} g_{0}^{k}(\cup_{i=0}^{s+1}Y_{i})\subset Y_{0}.\label{eq:large enough k-1} \end{equation} We end the proof by showing that $\{g_{1},\dots,g_{s},g_{s+1}\}$ is a $g_{0}^{k}$-rooted free system, using the sets $\{Y_{i}\}_{i=0}^{s+1}$. Condition (1) is satisfied by the choice of the sets $\{Y_{i}\}_{i=0}^{s+1}$. Using equation \ref{eq:attractig same for powers}, we see that equation \ref{eq:n0 away from g0-1} and the fact that $X_{i}=Y_{i}$ for $i=1,\dots,s$ imply Condition (2). Similarly, equation \ref{eq:def of Y0-1} implies Condition (3). Lastly, equation \ref{eq: large power to Xs-1} implies Condition (4) for $i=s+1$ and equation \ref{eq:large enough k-1} does it for $i=0$.
\end{proof} The following two propositions are the main ingredients in the proofs of our main theorems: \begin{prop} \label{prop:adding elt in coset to free system}Let $\Gamma_{1}\lhd_{fi}\Gamma$, $x\in\Gamma$ and let $\{g_{1},\dots,g_{s}\}$ be a $g_{0}$-rooted free system. Then, there exist $k\in{\mathbb N}$ and a hyperbolic element $g_{s+1}\in x\Gamma_{1}$ such that $\{g_{1},\dots,g_{s},g_{s+1}\}$ is a $g_{0}^{k}$-rooted free system.\end{prop} \begin{proof} By Proposition \ref{prop:trans elmt in coset} we can find a hyperbolic $h\in x\Gamma_{1}$ such that $\{g_{1},\dots,g_{s},h\}$ are pairwise transversal. Moreover, as $g_{0}\in\Gamma$ and $\Gamma_{1}\lhd_{fi}\Gamma$, the set $M_{1}=\{n:g_{0}^{n}hg_{0}^{-n}\in x\Gamma_{1}\}$ is infinite, and so are the sets $M_{2,l}=\{n:g_{0}^{l}h^{n}g_{0}^{-l}\in x\Gamma_{1}\}$ whenever $l\in M_{1}$. Thus, applying Lemma \ref{lem:Adding element with infinite occurences} with $F=x\Gamma_{1}$ we find $k\in{\mathbb N}$ and a hyperbolic element $g_{s+1}\in x\Gamma_{1}$ such that $\{g_{1},\dots,g_{s},g_{s+1}\}$ is a $g_{0}^{k}$-rooted free system, as claimed.\end{proof} \begin{prop} \label{prop:adding elt outside a subgroup}Let $H<\Gamma$ be a profinitely dense subgroup and let $\{g_{1},\dots,g_{s}\}$ be a $g_{0}$-rooted free system with $\{g_{i}\}_{i=0}^{s}\subset\Gamma$. Then, there exists a hyperbolic element $g_{s+1}\notin H$ such that $\{g_{1},\dots,g_{s},g_{s+1}\}$ is a $\tilde{g}_{0}$-rooted free system for some $\tilde{g}_{0}\in\mathfrak{H}(G)\cap\Gamma$ (typically $\tilde{g}_{0}=g_{0}^{k}$ for some $k\in{\mathbb N}$).\end{prop} \begin{proof} Consider first the case when $g_{0}\in H$. By Proposition 3 of \cite{MS81} we know that $H$ is Zariski dense since it is profinitely dense. It follows from \cite[Lemma 8]{MS81} that there exists $\tilde{h}\in\mathfrak{H}(G)$ with $\tilde{h}\in\Gamma\setminus H$.
Now, \[ W=\{w\in G:\forall i\in\{0,\dots,s\},(w\tilde{h}w^{-1})\perp g_{i}\} \] is Zariski open; indeed, \[ W=\cap_{i=0}^{s}\{w:(wA^{\pm}(\tilde{h}))\cap B^{\pm}(g_{i})=\emptyset\}\bigcap\cap_{i=0}^{s}\{w:B^{\pm}(\tilde{h})\cap w^{-1}A^{\pm}(g_{i})=\emptyset\} \] and each one of the sets in the intersection is clearly a Zariski open set. As $H$ is Zariski dense, we can find some element $w\in H\cap W$. Let $h=w\tilde{h}w^{-1}$. Clearly, $h\in\mathfrak{H}(G)$, $h\in\Gamma\setminus H$ and the elements of $\{g_{1},\dots,g_{s},h\}$ are pairwise transversal. Moreover, as $g_{0}\in H$, the set $M_{1}=\{n:g_{0}^{n}hg_{0}^{-n}\in\Gamma\setminus H\}={\mathbb N}$ and the sets $M_{2,l}=\{n:g_{0}^{l}h^{n}g_{0}^{-l}\in\Gamma\setminus H\}$ are infinite whenever $l\in M_{1}$. Thus, applying Lemma \ref{lem:Adding element with infinite occurences} with $F=\Gamma\setminus H$ we find $k\in{\mathbb N}$ and a hyperbolic element $g_{s+1}\in\Gamma\setminus H$ such that $\{g_{1},\dots,g_{s},g_{s+1}\}$ is a $g_{0}^{k}$-rooted free system, which concludes the case when $g_{0}\in H$. Now assume $g_{0}\notin H$. Let $\{X_{i}\}_{i=0}^{s}$ be the sets showing that $\{g_{1},\dots,g_{s}\}$ is a $g_{0}$-rooted free system and set $d=\min_{1\leq i\leq s}d(B^{\pm}(g_{0}),\overline{X_{i}})$. Let \[ U_{1}=\{g\in G:g\in\mathfrak{H}(G),g\perp g_{0}\}, \] \[ U_{2}=\{g\in G:A^{\pm}(g)\subset X_{0},d_{h}(B^{\pm}(g_{0}),B^{\pm}(g))<\frac{d}{4}\} \] and $U=U_{1}\cap U_{2}$. We now consider two possibilities: if $U\cap H\neq\emptyset$, it is easy to see that there exist $h_{0}\in U\cap H$ and $k\in{\mathbb N}$ such that $\{g_{1},\dots,g_{s}\}$ is an $h_{0}^{k}$-rooted free system. Thus, this possibility reduces to the first case with $g_{0}$ interchanged with $h_{0}^{k}\in H$. Therefore we assume that $U\cap H=\emptyset$. Note that by Lemma \ref{lem:Wg^nW lemma}, for any $g\in U$ there exist $W$ and $N_{0}\in{\mathbb N}$ such that $Wg^{n}W\subset U$ for all $n>N_{0}$.
As $g\in\mathfrak{H}(G)$, for any compact subset $K\subset G$ there exists $M$ such that $g^{n}\notin K$ for all $n>M$. Therefore, by the Howe--Moore theorem stated above, we have \[ \lim_{n\rightarrow\infty}\mu(g^{n}W\Gamma\cap W\Gamma)=\mu(W\Gamma)\mu(W\Gamma)>0 \] so for large enough $n$, and in particular for some $n>N_{0}$, $g^{n}W\Gamma\cap W\Gamma\neq\emptyset$. This yields some $w_{1},w_{2}\in W,\gamma\in\Gamma$ with $g^{n}w_{1}=w_{2}\gamma$, so $h:=\gamma=w_{2}^{-1}g^{n}w_{1}\in\Gamma\cap U$, as desired. By the definition of $U$, the elements of $\{g_{1},\dots,g_{s},h\}$ are pairwise transversal, and all of them are transversal to $g_{0}$. To show that they form a $g_{0}$-rooted free system, we can follow the proof of Lemma \ref{lem:Adding element with infinite occurences} from equation \ref{eq:Y_i's} onwards, interchanging $h_{n_{0}}$ with $h$ everywhere, and noting that \ref{eq: infinite set of powers in the coset-1} is also true in our situation, as the set \[ M=\{n:h^{n}\notin H\} \] is infinite since $h\notin H$ by the assumption that $U\cap H=\emptyset$. \end{proof} \section{Proof of Theorems \ref{thm:Main Theorem} and \ref{thm:Main Theorem maximal subgroups}\label{sec:Proof-main-thm}} \begin{proof}[Proof of Theorem \ref{thm:Main Theorem}] We first claim that there exist $h_{1},h_{2},g_{0}\in\Gamma$ such that $\{h_{1},h_{2}\}$ is a $g_{0}$-rooted free system and $\langle h_{1},h_{2}\rangle$ is Zariski dense in $G$. Indeed, by \cite[Theorem 3]{Tits1972} we can find $f_{1},f_{2}\in\Gamma$ such that $\langle f_{1},f_{2}\rangle$ is free and Zariski dense in $G$. By connectedness of $G$, the same is true for $f_{1}^{k},f_{2}^{k}$ for any $k\in{\mathbb N}$. Moreover, by Proposition \ref{prop:trans elmt in coset} we can find $f_{0}\in\Gamma$ such that $\{f_{0},f_{1},f_{2}\}$ are pairwise transversal. Therefore, there exists $k\in{\mathbb N}$ such that $\{f_{1}^{k},f_{2}^{k}\}$ is an $f_{0}^{k}$-rooted free system.
Thus the claim is proved by letting $g_{0}:=f_{0}^{k},h_{1}:=f_{1}^{k},h_{2}:=f_{2}^{k}$. As explained at the end of Section \ref{sub:CSP}, since $\Gamma$ has the congruence subgroup property and $\langle h_{1},h_{2}\rangle$ is Zariski dense, the closure of $\langle h_{1},h_{2}\rangle$ is of finite index in $\widehat{\Gamma}$. By Lemma \ref{lem:subgroups of profinite completion} there exists $\Gamma_{1}\vartriangleleft_{fi}\Gamma$ such that $\widehat{\Gamma_{1}}\subset\overline{\langle h_{1},h_{2}\rangle}$. Let $y_{1},\dots,y_{m}$ be elements of $\widehat{\Gamma}$ which generate $\widehat{\Gamma}/\widehat{\Gamma_{1}}$ with $m\leq d(\widehat{\Gamma})$. By Lemma \ref{lem:subgroups of profinite completion}, there exist $x_{1},\dots,x_{m}$ with $x_{i}\Gamma_{1}=y_{i}\widehat{\Gamma_{1}}\cap\Gamma$. Then, using Proposition \ref{prop:adding elt in coset to free system} inductively, we find $\{g_{i}\}_{i=1}^{m}\subset\Gamma$ such that $g_{i}\in x_{i}\Gamma_{1}$ and $\{h_{1},h_{2},g_{1},\dots,g_{m}\}$ is a $g_{0}^{k}$-rooted free system for some $k\in{\mathbb N}$. Therefore the image of $\Gamma_{1}$ in $\widehat{\Gamma}$ together with $\{g_{1},\dots,g_{m}\}$ generates $\widehat{\Gamma}$ (topologically). This shows that $\langle h_{1},h_{2},g_{1},\dots,g_{m}\rangle$ is profinitely dense and is free of rank $m+2$, as asserted. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:Main Theorem maximal subgroups}] Assume, by way of contradiction, that $\mathfrak{U_{m}}$ is countable and let $\{U_{i}\}_{i=1}^{\infty}$ be some enumeration of it. Let $\{F_{i}\}_{i=1}^{\infty}$ be an enumeration of all the cosets of finite-index subgroups of $\Gamma$. Let $g_{0}\in\Gamma$ be some hyperbolic element.
Using Proposition \ref{prop:adding elt in coset to free system} (with the empty $g_{0}$-rooted free system) we can find $g_{1}\in F_{1}$ and $g_{0,1}\in\mathfrak{H}(G)\cap\Gamma$ such that $\{g_{1}\}$ is a $g_{0,1}$-rooted free system. Similarly, using Proposition \ref{prop:adding elt outside a subgroup}, we can find $g_{2}$ and $g_{0,2}\in\mathfrak{H}(G)\cap\Gamma$ such that $g_{2}\notin U_{1}$ and $\{g_{1},g_{2}\}$ is a $g_{0,2}$-rooted free system. Continuing in this fashion and using Propositions \ref{prop:adding elt in coset to free system} and \ref{prop:adding elt outside a subgroup} alternately, we find sequences $\{g_{i}\}_{i=1}^{\infty},\{g_{0,i}\}_{i=1}^{\infty}\subset\mathfrak{H}(G)\cap\Gamma$ such that for any $k\in{\mathbb N}$, $g_{2k}\notin U_{k}$, $g_{2k-1}\in F_{k}$, and for any $l\in{\mathbb N}$, $\{g_{1},\dots,g_{l}\}$ is a $g_{0,l}$-rooted free system. Let $H=\langle\{g_{i}\}_{i=1}^{\infty}\rangle$; we claim that $H$ is free and profinitely dense. Indeed, $H$ is freely generated by $\{g_{i}\}_{i=1}^{\infty}$ since any finite subset of $\{g_{i}\}_{i=1}^{\infty}$ is contained in some free rooted system. It is profinitely dense as otherwise $H$ would be contained in a finite-index subgroup $\Lambda$, which is impossible as $H$ has elements in each coset of $\Lambda$ in $\Gamma$. Therefore $H\subseteq U_{i}$ for some $i$. This is a contradiction, as $g_{2i}\notin U_{i}$. \end{proof} \subsection*{Acknowledgments} We acknowledge the support of the ERC grant 226135, the ERC grant 203418, the ISEF foundation, the Ilan and Asaf Ramon memorial foundation, the ``Hoffman Leadership and Responsibility'' fellowship program, the ISF grant 1003/11 and the BSF grant 2010295. \end{document}
\begin{document} \title{Axiomatizing origami planes} \author[1]{L. Beklemishev} \author[2]{A. Dmitrieva} \author[3]{J.A. Makowsky} \affil[1]{Steklov Mathematical Institute of RAS, Moscow, Russia} \affil[1]{National Research University Higher School of Economics, Moscow} \affil[2]{University of Amsterdam, Amsterdam, The Netherlands} \affil[3]{Technion -- Israel Institute of Technology, Haifa, Israel} \date{\today} \maketitle \begin{abstract} We provide a variant of an axiomatization of elementary geometry based on logical axioms in the spirit of Huzita--Justin axioms for the origami constructions. We isolate the fragments corresponding to natural classes of origami constructions such as Pythagorean, Euclidean, and full origami constructions. The set of origami constructible points for each of the classes of constructions provides the minimal model of the corresponding set of logical axioms. Our axiomatizations are based on Wu's axioms for orthogonal geometry and some modifications of Huzita--Justin axioms. We work out bi-interpretations between these logical theories and theories of fields as described in J.A. Makowsky (2018). Using a theorem of M. Ziegler (1982) which implies that the first order theory of Vieta fields is undecidable, we conclude that the first order theory of our axiomatization of origami is also undecidable. \end{abstract} \epigraph{Dedicated to Professor Dick de Jongh on the occasion of his 81st birthday} \section{Introduction} \label{se:intro} The ancient art of paper folding, known in Japan and all over the world as \emph{origami}, has a long tradition in mathematics.\footnote{The book by M. Friedman \cite{Fri-Orig} is an excellent source on the history of mathematical origami.} The pioneering book by T. Sundara Rao \cite{Rao} on the science of paper folding attracted the attention of Felix Klein.
Adolf Hurwitz dedicated a few pages of his diaries to paper folding constructions such as the construction of the golden ratio and of the regular pentagon, \ignore{Among other things he developed an (approximate) construction by paper folding (recorded in his notebooks) of a regular pentagon} see \cite{Fri-Orig}. Early books on origami science, such as Young and Young \cite{YY}, considered paper folding mainly as recreational mathematics and as a means of introducing geometry to children. Mathematical origami was advanced by Margherita Piazzolla Beloch in the 1930s. Even though her work was published in Italian at the time of Mussolini and remained essentially unnoticed, she found several key ideas that were rediscovered only much later, see \cite{Beloch}. In particular, she introduced a new operation, the so-called \emph{Beloch fold}, using which she showed a construction of a segment of length $\sqrt[3]{2}$ and, more generally, showed how to construct the roots of equations of degrees $3$ and $4$. The interest towards mathematical origami reemerged in the 1980s through the work of several enthusiasts such as Humiaki Huzita, who organized the successful \emph{First international meeting on origami science and technology} \cite{Proc-Huzita}. Among the various mathematical and computational questions studied in relation to origami in modern literature are whether a given crease pattern can be flat-folded, the computational complexity of folding problems, and many others (see e.g.\ \cite{bk:DemRou}). The general problem of origami design is, given a three-dimensional shape, to find a sequence of folds to create an origami approximating that shape (if possible) from a square sheet of paper. Great practical advances in this problem have been achieved by Robert J.~Lang who developed an algorithm and a program that helped to create folding patterns for certain basic shapes. These basic shapes are subsequently relatively easy to refine into the required artistic images.
Using his algorithms Lang designed origami models of very complicated and realistic nature \cite{LangArt}. Apart from the visual art, origami science has impressive practical applications in the design of unfoldable structures, for example, satellite solar batteries, telescope mirrors, and even coronary bypasses. One of the basic mathematical questions about origami, highly relevant for this paper, is origami constructibility: which lengths can be constructed by foldings from a square sheet of paper or, more generally, from a given set of initial points on the plane? This problem is similar to the classical one about compass and ruler constructions and has been studied from the very beginnings of mathematical origami. To systematically investigate this problem and to prove results on non-constructibility, there was a need to formulate basic rules of the game: a finite set of operations to which one can reduce any complex folding. J. Justin \cite{justin1989resolution} and H. Huzita \cite{Huzita} identified a list of six such operations (H-1), $\ldots$ , (H-6), later called the Huzita -- Justin or Huzita -- Hatori axioms. (The seventh operation proposed by Justin and Hatori was later shown to be reducible to the other ones.) \begin{description} \item[(H-1):] Given two points $P_1$ and $P_2$, one can make a fold that passes through both of them. \item[(H-2):] Given two points $P_1$ and $P_2$, one can make a fold that places $P_1$ onto $P_2$. \item[(H-3):] Given two lines $\ell_1$ and $\ell_2$, one can make a fold that places $\ell_1$ onto $\ell_2$. \item[(H-4):] Given a point $P$ and a line $\ell$, one can make a fold orthogonal to $\ell$ that passes through $P$. \item[(H-5):] Given two points $P_1$ and $P_2$ and a line $\ell_1$, one can make a fold that places $P_1$ onto $\ell_1$ and passes through $P_2$. \item[(H-6):] Given two points $P_1$ and $P_2$ and two lines $\ell_1$ and $\ell_2$, one can make a fold that places $P_1$ onto $\ell_1$ and $P_2$ onto $\ell_2$. 
\item[(H-7):] Given a point $P$ and two lines $\ell_1,\ell_2$ one can make a fold orthogonal to $\ell_1$ that places $P$ onto $\ell_2$. \end{description} We note that (H-6) is essentially the Beloch fold mentioned earlier. Some of the expositions also take the points in (H-1) and (H-2) to be distinct and then demand that the lines obtained in (H-1), (H-2) and (H-4) be unique. This obviously does not affect origami constructibility. Therefore, for reasons of simplicity, we describe the rules without the requirements of uniqueness. Lang and Alperin~\cite{AlpLang} showed that (H-1) -- (H-7) can be characterized as all the operations that define a unique fold by alignment of points and finite line segments. Using these operations, elegant solutions of the two classical problems --- the doubling of the cube and the angle trisection --- were found. Alperin \cite{alperin2000mathematical} characterized the classes of points (corresponding to certain subfields of $\mathbb{C}$) constructible using foldings defined by natural subsets of Huzita--Justin operations. Our paper uses the ideas of Alperin in the part that deals with the interpretations between geometric theories and the theories of the corresponding classes of fields. As mentioned above, Huzita -- Justin axioms were not meant to be understood as axioms in a logical sense but rather as specifying a (not necessarily deterministic or always defined) set of operations generating the origami constructible points on the plane. The distinction between logical axioms and the operational approach was clear in the early XXth century works on origami mathematics (see Chapter 5 of \cite{Fri-Orig}, especially the discussion on page~285). In fact, it was considered a weakness of the origami approach compared to axiomatic geometry in the tradition of Hilbert. The aim of this paper is to connect the two views and to adapt the principles of origami constructions to serve as logical axioms of planar geometry.
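The simplest of these operations have direct analytic readings. As a hedged sketch (our own encoding, not taken from the paper), representing a line by a triple $(a,b,c)$ with $ax+by=c$, the folds (H-1), (H-2) and (H-4) become explicit constructions:

```python
# Hedged computational reading (our own sketch) of the elementary folds,
# with a line encoded as (a, b, c) meaning ax + by = c.
def fold_h1(p1, p2):
    """H-1: the fold (line) through two distinct points."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    return (a, b, a * x1 + b * y1)

def fold_h2(p1, p2):
    """H-2: the fold placing p1 onto p2 -- the perpendicular bisector."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = x2 - x1, y2 - y1
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    return (a, b, a * mx + b * my)

def fold_h4(p, line):
    """H-4: the fold through p orthogonal to the given line."""
    a, b, _ = line
    # the direction (-b, a) of the given line is the normal of the fold
    return (-b, a, -b * p[0] + a * p[1])

def on_line(p, line, eps=1e-9):
    a, b, c = line
    return abs(a * p[0] + b * p[1] - c) < eps

f = fold_h2((0.0, 0.0), (2.0, 0.0))   # places the origin onto (2, 0)
print(on_line((1.0, 5.0), f))          # the bisector is the vertical x = 1
```

The remaining folds are genuinely richer: (H-5) amounts to intersecting a line with a parabola, hence to solving a quadratic, while (H-6), the Beloch fold, amounts to finding a common tangent of two parabolas, hence to solving a cubic, which is why origami goes beyond ruler-and-compass constructions.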
In \cite{makowsky2018undecidability,makowsky2019can} a proof was outlined that the first order theory of origami planes is undecidable. Although the proof strategy is feasible, the exact definition of the first order theory of origami planes was left imprecise. In particular, the role of the betweenness relation was overlooked. The purpose of this paper is to provide a precise definition of the first order theory of origami planes and to establish its properties. The second aim of the paper is to work out mutual first order interpretations between our logical theories of origami and certain classes of fields, as suggested in \cite{makowsky2019can}. Ziegler's theorem then shows the algorithmic undecidability of the first order theories of these structures. Even though these results are not really surprising, there are many details to take care of in their accurate proofs. The proofs rely on certain background from several areas: the development of axiomatic orthogonal geometry, the construction of coordinatization in the context of sufficiently weak axioms of geometry, and the logical techniques of first order interpretations. Since we deal with the questions of first order definability and interpretability, we need to be careful about the details of the constructions usually presented in a less formal way. Therefore, our paper has a long introductory part where we present these background results. We assume that the reader has some basic knowledge of elementary algebra, analytic geometry and first order logic, including the notion of interpretation. \ignore{ As it turns out, if one directly translates Huzita -- Justin axioms into a logical language treating the free variables in these axioms as universally quantified, the axioms (H-5) and (H-6) become inconsistent. Therefore, the question arises, how we can adapt the principles of origami constructions to serve as logical axioms of planar geometry.
There are two issues here: what are the basic predicates, in which the axioms (H-1), $\ldots$ , (H-6) can be formulated, and what are the additional axioms which govern the basic predicates.} \ignore{In the next section we give background and outline our main results. The axioms mentioned in this section are given explicitly in Section \ref{se:pythagoras} for Wu planes, in Section \ref{se:hjaxioms} for the first four Huzita--Justin axioms, and in Section \ref{se:ordered} for the betweenness axioms.} The plan of the paper is as follows. In Section \ref{se:wuplanes} we present background on orthogonal geometry using the book by Wu \cite{bk:Wu1994} as our main reference source. We introduce the notion of a metric Wu plane, which is basic for all further developments. In Section \ref{interp} we present background on interpretations. In Section \ref{se:ptrings} we present the details of the coordinatization of Wu planes. This is an expanded version of the corresponding section of \cite{makowsky2019can}. In Section \ref{se:pyth} we prove the result on the bi-interpretability between the classes of metric Wu planes and of Pythagorean fields. In Section \ref{se:undec} we state a general undecidability result for geometric theories of metric Wu planes based on Ziegler's theorem. In Section \ref{se:hjaxioms} we introduce appropriate logical versions of the Huzita--Justin axioms. We also show that metric Wu planes satisfy the first four Huzita--Justin axioms. In Section \ref{se:ordered} we discuss the role of order axioms, introduce ordered metric Wu planes and their relation to ordered Pythagorean fields. In Section \ref{se:vieta} we introduce Euclidean and Vieta fields and prove our main Theorems \ref{th:pyth} and \ref{th:vieta}. Finally, in Section \ref{se:conclu} we discuss the remaining open questions. \label{se:background} Our axiomatization uses two basic sorts of variables, denoting lines and points, respectively.
We use $P, P_1, \ldots, P_i$, and more liberally, upper case letters, to denote points and $\ell, \ell_1, \ldots, \ell_i$, and more liberally, lower case letters, to denote lines. In our axiomatization we use the following basic relations: \begin{enumerate}[(i)] \item the \emph{incidence} relation $ P \in \ell$ between points and lines, \item the \emph{orthogonality} relation $\ell_1 \perp \ell_2$ between two lines, \item and the \emph{betweenness} (or \emph{order}) relation between three points $P_1, P_2, P_3$ denoted by $Be(P_1, P_2, P_3)$. \end{enumerate} The sorts are, however, definable using the incidence relation: $\ell$ is a line iff $\exists P (P \in \ell)$, and $P$ is a point iff $\exists \ell (P \in \ell)$. We also note that the equidistance relation $PQ\cong RS$ between points $P,Q$ and $R,S$ is definable from incidence and orthogonality~\cite[page 25]{bk:Wu1994}. We start with Wu's axiomatization of orthogonal geometry, cf.~\cite{bk:Wu1994}, augmented by the axioms for betweenness and appropriate versions of the Huzita--Justin axioms. We denote by $\tau_{wu}$ the vocabulary corresponding to (i)--(ii) and by $\tau_{o-wu}$ the vocabulary corresponding to (i)--(iii). By $\mathrm{FOL}(\tau_{wu})$ and $\mathrm{FOL}(\tau_{o-wu})$ we denote the corresponding sets of first order formulas. The Huzita--Justin axioms (H-1), (H-2), (H-4) can be used as they are. However, we modify axioms (H-3), (H-5), (H-6) to (H*3), (H*5), (H*6) so as to ensure that they are always applicable. The axioms (H-1), (H-2), (H*3), (H-4), (H*6) can be formulated in $\mathrm{FOL}(\tau_{wu})$. In order to formulate (H*5) we use the betweenness relation. As the most general class of structures for our study we consider \emph{metric Wu planes} (see \cite{makowsky2019can}). Metric Wu planes already satisfy the Huzita--Justin axioms (H-1), (H-2), (H*3), (H-4), where axiom (H-3) is modified so that it holds more generally in arbitrary metric Wu planes rather than just in the ordered ones.
Then we add to the axioms of metric Wu planes the axioms of betweenness and further origami axioms (H*5), (H*6) and obtain the corresponding classes of structures. See Section \ref{se:wuplanes} for the list of axioms used in the definitions below. \begin{definition} \mbox{} \begin{enumerate}[(i)] \item A $\tau_{wu}$-structure $\Pi$ is a {\em metric Wu plane} if it satisfies (I-1), (I-2), (I-3), (O-1), $\ldots$, (O-5), the axiom of infinity (InfLines), (ParAx), the two axioms of Desargues (De-1) and (De-2) and (AxSymAx). \item An {\em ordered metric Wu plane} is a metric Wu plane satisfying the axioms of order (B-1), $\ldots$, (B-4). \item A {\em Euclidean ordered metric Wu plane} is an ordered metric Wu plane satisfying (H*5). \item An {\em ordered origami plane} is an ordered metric Wu plane satisfying (H*5) and (H*6). \end{enumerate} \end{definition} Only items (iii) and (iv) involve origami or modified origami axioms in their definitions. The following theorem is from J.A. Makowsky \cite{makowsky2019can}. It can be seen as a logical formalization of the classical results on coordinatization (see \cite{hall1943projective,bk:Wu1994}). \begin{theorem} \label{prop:JAM} \begin{enumerate}[(i)] \item The theory of metric Wu planes is bi-interpretable with the theory of Pythagorean fields of characteristic $0$. \item The theory of ordered metric Wu planes is bi-interpretable with the theory of ordered Pythagorean fields. \end{enumerate} \end{theorem} Ziegler \cite{ar:ziegler} showed that any finitely axiomatized subtheory of the first order theory of real closed fields is undecidable. One of the main ideas of \cite{makowsky2019can} was the use of this result (via bi-interpretability) to show that certain elementary theories of geometries are undecidable. For example, as a corollary of Ziegler's theorem one can conclude from Theorem \ref{prop:JAM} that the elementary theories of the classes of metric Wu planes and of ordered metric Wu planes are undecidable.
The main contributions of this paper are as follows. \begin{theorem} \label{th:main-1} \begin{enumerate}[(i)] \item The theory of Euclidean ordered metric Wu planes is bi-interpretable with the theory of Euclidean fields; \item The theory of ordered origami planes is bi-interpretable with the theory of Vieta fields. \end{enumerate} \end{theorem} Again, via bi-interpretability, Ziegler's theorem is applicable to both theories, hence both are undecidable. \section{Background on axiomatic geometry} \label{se:wuplanes} \label{se:pythagoras} In this section we present the necessary background on axiomatic geometry and begin our discussion of the question of how one can adapt the principles of origami constructions to serve as logical axioms of planar geometry. There are two issues here: what are the basic predicates in which the logical analogs of the axioms (H-1), $\ldots$ , (H-6) can be formulated, and what are the additional axioms which govern the basic predicates. Looking at the usual statements of (H-1), $\ldots$ , (H-6) we see that these principles use the primitive notions of points, lines (folds), and incidence (\emph{``a fold $\ell$ passes through a point $P$''}), which are basic for all standard axiomatizations of elementary geometry. In addition, the predicate \emph{``a fold along $\ell$ identifies points $P$ and $Q$''} and possibly some others are used. The notion of a fold along a given line is similar to that of reflection with respect to a line. There are axiomatizations of geometry based on reflection as a basic notion (see \cite{bk:Bach1973}). However, in this paper we find it convenient to rely on the well-developed axiomatizations of geometry based on the notion of orthogonality of lines. The predicate $\ell\perp m$ \emph{``lines $\ell$ and $m$ are orthogonal''} is sufficiently natural from the point of view of origami constructions: orthogonality can be tested by folding the paper along one of the lines.
Moreover, we will see later that the majority of the principles corresponding to the Huzita--Justin axioms are easily expressible using incidence and orthogonality. Thus, we take Wu's axiomatization of orthogonal plane geometry as basic \cite{bk:Wu1994}. We use this book as a reference source for careful proofs of many technical statements that we will need. As we will see, for some of the logical versions of the Huzita--Justin axioms, namely (H-3), (H-5) and (H-6), the notion of \emph{betweenness} also plays a role. We are going to analyze the situation in Section \ref{se:ordered}. The predicate $Be(P_1, P_2, P_3)$ \emph{``$P_2$ is between $P_1$ and $P_3$''} holds if $P_2$ belongs to the line segment $P_1P_3$ and all three points are distinct. In order to state the full axiomatization of origami geometry, the predicate of betweenness will be added to our vocabulary. Our axiomatization uses two basic sorts of variables, denoting lines and points, respectively. We use $P, P_1, \ldots, P_i$, and more liberally, upper case letters, to denote points and $\ell, \ell_1, \ldots, \ell_i$, and more liberally, lower case letters, to denote lines. We use the following basic relations: \begin{enumerate}[(i)] \item the \emph{incidence} relation $ P \in \ell$ between points and lines, \item the \emph{orthogonality} relation $\ell_1 \perp \ell_2$ between two lines, \item and the \emph{betweenness} (or \emph{order}) relation between three points $P_1, P_2, P_3$ denoted by $Be(P_1, P_2, P_3)$. \end{enumerate} The two-sorted language can be considered as a notational variant of a single-sorted one, as the set of lines and the set of points are definable using the incidence relation: $\ell$ is a line iff $\exists P (P \in \ell)$, and $P$ is a point iff $\exists \ell (P \in \ell)$. This allows us to apply various standard notions and results for (one-sorted) first order logic in our context.
\ignore{We also note that the equidistance relation $PQ\cong RS$ between points $P,Q$ and $R,S$ is definable from incidence and orthogonality~\cite[page 25]{bk:Wu1994}.} We denote by $\tau_{wu}$ the vocabulary corresponding to (i)--(ii) and by $\tau_{o-wu}$ the vocabulary corresponding to (i)--(iii). By $\mathrm{FOL}(\tau_{wu})$ and $\mathrm{FOL}(\tau_{o-wu})$ we denote the corresponding sets of first order formulas. Next we turn to the axioms of two-dimensional orthogonal geometry as presented in \cite{bk:Wu1994}. The axioms are subdivided into several groups. \paragraph{Hilbert's axioms of incidence} \begin{description} \item[(I-1):] For any two distinct points $A,B$ there is a unique line $\ell$ with $A \in \ell$ and $B\in \ell$. \item[(I-2):] Every line contains at least two distinct points. \item[(I-3):] There exist three distinct points $A, B, C$ such that no line $\ell$ contains all of them. \end{description} \paragraph{Hilbert's (sharper) axiom of parallels} \begin{description} \item[(ParAx):] Let $\ell$ be any line and $A$ a point not on $\ell$. Then there exists one and only one line determined by $\ell$ and $A$ that passes through $A$ and does not intersect $\ell$. \end{description} \paragraph{Axiom schema of infinity and Desargues' axioms} \begin{description} \item[(InfLines):] Given distinct non-collinear $A, B, C$ and $\ell$ with $A \in \ell$, $B, C \not\in \ell$ we construct a line $\ell_1$ going through $C$ and parallel to $AB$ and define $A_1$ as the intersection of $\ell_1$ and $\ell$. Inductively, we define $\ell_{n+1}$ as the line going through $C$ and parallel to $A_nB$ and define $A_{n+1}$ as its intersection with $\ell$. Then all the $A_i$ are distinct.
\item[(De-1):] If the three pairs of the corresponding sides of two triangles $ABC$ and $A' B'C'$ are all parallel to each other, i.e., $AB \parallel A'B'$, $AC \parallel A'C'$, $BC \parallel B'C'$, then the three lines $AA'$, $BB'$, $CC'$ joining the corresponding vertices of these two triangles are either parallel to each other or concurrent. \item[(De-2):] If two pairs of the corresponding sides of two triangles $ABC$ and $A' B'C'$ are parallel to each other, say $AB \parallel A'B'$, $AC \parallel A'C'$, and the three lines joining the corresponding vertices are distinct yet either concurrent or parallel to each other, then the third pair of the corresponding sides are also parallel to each other, i.e., $BC \parallel B'C'$. \end{description} \begin{definition} A $\tau_{\in}$ structure $\Pi$ is a \emph{Desarguesian plane} if it satisfies (I-1, I-2, I-3), the axiom of infinity (InfLines), (ParAx) and the two axioms of Desargues (De-1) and (De-2). \end{definition} In order to introduce the orthogonality axioms in the plane, we consider a new relation of orthogonality $\perp$ and the language $\tau_{wu}$ consisting of $\in$ and $\perp$. \paragraph{Orthogonality axioms} \begin{description} \item[(O-1):] $\ell_1 \perp \ell_2$ iff $\ell_2 \perp \ell_1$. \item[(O-2):] Given $O$ and $\ell_1$, there exists exactly one line $\ell_2$ with $\ell_1 \perp \ell_2$ and $O \in \ell_2$. \item[(O-3):] If $\ell_1 \perp \ell_2$ and $\ell_1 \perp \ell_3$ then $\ell_2 \parallel \ell_3$ or $\ell_2 = \ell_3$. \item[(O-4):] For every $O$ there is an $\ell$ with $O \in \ell$ and $\ell \not \perp \ell$. \item[(O-5):] The three heights of a triangle intersect in one point. \end{description} Concerning Axiom (O-4) we remark that lines $\ell$ such that $\ell\perp \ell$ are called \emph{isotropic}. {\bf Caveat:} Without any axioms of order, the axioms of metric Wu planes do not exclude the existence of isotropic lines. 
Since our analysis concerns the role of the axioms of order in the statements of origami principles, we do not want to assume outright that there are no isotropic lines. \begin{definition} A $\tau_{wu}$ structure $\Pi$ is an \emph{orthogonal Wu plane} if it is a Desarguesian plane satisfying orthogonality axioms (O-1, O-2, O-3, O-4, O-5). \end{definition} \paragraph{Axiom of symmetric axis.} We assume that we are working in an orthogonal Wu plane. In order to formulate the next axiom, we follow \cite[page 22, Definition 3]{bk:Wu1994} to define the relation of being a symmetric point using only the incidence relation. For two arbitrary points $A \neq B$ on a line $\ell$, take an arbitrary $E \notin \ell$ and construct a line $\ell'$ parallel to $\ell$ such that $E \in \ell'$. Let $D$ be the intersection of $\ell'$ and a line going through $B$ parallel to $AE$. Then $ABDE$ is a parallelogram. Finally, construct $C$ as the intersection of $\ell$ and the line going through $D$ and parallel to $EB$. Then, due to Desargues' axioms, point $C$ is independent of the choice of $E$. We say that $C$ is the \emph{symmetric point of $A$ with respect to $B$}. In addition, for any point $P$ we say that $P$ is a symmetric point of $P$ with respect to $P$. Next we define a notion of a midpoint, following \cite[page 23, Definition 4]{bk:Wu1994}. Let $A, B$ be two points on a line $\ell$. Draw through $A$ a line $\ell'$ distinct from $\ell$ and take thereon a point $M'$ distinct from $A$. Construct the symmetric point $B'$ of $A$ with respect to $M'$ and draw through $M'$ a line $M'M \parallel B'B$, meeting $\ell$ at $M$. Then $M$ is independent of the choice of $\ell'$, $M'$, and is called the \emph{midpoint} of $A$ and $B$. Define the midpoint of two coincident points to be the point itself. Now we use the relation of orthogonality to define the symmetric axis, following \cite[page 75]{bk:Wu1994}.
For any pair $A, B$ of two distinct points, let the unique line through the midpoint of $A$ and $B$ and perpendicular to the line $AB$ be the \emph{perpendicular bisector of $A, B$}. Clearly, if $AB$ is an isotropic line, then its perpendicular bisector is $AB$ itself. Let the perpendicular bisector of $A, B$ be $\ell$. We call $A$ the \emph{symmetric point of $B$ with respect to $\ell$} or \emph{$\ell$ the symmetric axis of $A, B$}. Any point $A$ on $\ell$ is said to be a symmetric point of itself with respect to $\ell$. (Whenever $\ell$ is isotropic, it is the symmetric axis for any two points $A, B$ on $\ell$.) We denote this relation as $Sym(A, \ell, B)$. It will be important in our treatment of origami geometry. Let $\ell_1$ be any line and $\ell$ be a non-isotropic line. By \cite[page 76, Property 1]{bk:Wu1994}, the points symmetric to the points from $\ell_1$ with respect to $\ell$ also lie on a unique line, say $\ell_2$. In this case we call $\ell_2$ the \emph{symmetric line of $\ell_1$ with respect to $\ell$}, or $\ell$ a \emph{symmetric axis of $\ell_1$ and $\ell_2$}. Finally, we can add the following axiom to the list. \begin{description} \item[(AxSymAx):] Any two intersecting non-isotropic lines have a symmetric axis. \end{description} Now we are ready to give \begin{definition} A $\tau_{wu}$ structure $\Pi$ is a \emph{metric Wu plane} if it is an orthogonal Wu plane satisfying (AxSymAx). \end{definition} Following Wu, we use the word `metric' here, since assuming the axiom of symmetric axes gives us some metric properties, in particular the notion of congruence. \section{Background on interpretations} \label{interp} We assume that the reader is familiar with the notion of first-order interpretation; we follow the terminology and notation of Makowsky~\cite{makowsky2019can}. More background on interpretations can be found, for example, in \cite{Sho67,bk:Hodges93,FrVi,ar:MakowskyTARSKI}.
Particular interpretations that we consider here are many-dimensional relative interpretations with parameters. In this section we briefly recall the notions of a translation scheme, an interpretation, and bi-interpretability of first order theories. Readers already familiar with these concepts can skip it and move on to the next section. \begin{definition} Let $K$ and $L$ be first-order relational signatures, $n$ a positive integer, $\vec p$ a list of variables. An ($n$-dimensional) \emph{translation scheme} $\Sigma$ with definable parameters $\vec{p}$ from $L$ to $K$ is specified by three items: \begin{itemize} \item[-] a formula $\delta_\Sigma(x_0, \ldots, x_{n-1}, \vec{p})$ of signature $K$, \item[-] for each relational symbol $R (y_0, \ldots, y_{m-1})$ of $L$, a formula $R_\Sigma(\vec{x}_0, \ldots, \vec{x}_{m-1}, \vec{p})$ of $K$ in which the $\vec{x}_i$ are disjoint $n$-tuples of distinct variables, \item[-] a formula $\psi_\Sigma(\vec{p})$ in $K$. \end{itemize} Intuitively, $\delta_\Sigma$ defines the domain of $\Sigma$, the formulas $R_\Sigma$ define the relations of $\Sigma$, and $\psi_\Sigma(\vec{p})$ specifies the admissible range of the parameters of $\Sigma$. In case $L$ is multi-sorted we may use a $\delta_\Sigma$ for each sort. Then the defining formulas for different sorts can also have different dimensions. \end{definition} \ignore{ In our case the translation $RF_{field}$ from the language of fields to $\tau_{wu}$ has a domain formula $x_0 \in \ell_0$, other defining formulas $add_T$, $mult_T$, $x_0 = 0$, $x_0 = 1$ and a parameter defining formula based on our restrictions on the parameters $\ell_0, m_0, Z_0$. The other translation $PP_{wu}$ from $\tau_{wu}$ to the language of fields has two domain formulas: $x_0 = x_0 \ \wedge \ x_1 = x_1$ (or any other tautology in $x_0, x_1$) defining points and $a_0 \neq 0 \vee a_1 \neq 0 \vee a_2 \neq 0$ defining lines.
Other defining formulas are \begin{itemize} \item[-] $x_0 = y_0 \wedge x_1 = y_1$ for $x = y$, \item[-] $\exists k ( a_0 = k \cdot b_0 \ \wedge \ a_1 = k \cdot b_1 \ \wedge \ a_2 = k \cdot b_2)$ for $\ell = \ell'$, \item[-] $a_0 \cdot x_0 + a_1 \cdot x_1 + a_2 = 0$ for $x \in \ell$, \item[-] $a_0 \cdot b_0 + a_1 \cdot b_1 = 0$ for $\ell_1 \perp \ell_2$. \end{itemize} } Any translation scheme $\Sigma$ from $L$ to $K$ naturally defines a map from $L$-formulas to $K$-formulas by making it commute with the propositional connectives and relativizing the quantifiers to the domain specified by $\delta_\Sigma$. We denote this map by $\phi \mapsto \phi^\Sigma$. \begin{definition} Let $T$ be a $K$-theory and $S$ be an $L$-theory. A translation scheme $\Sigma$ from $L$ to $K$ is an \emph{interpretation of $S$ in $T$} if \begin{enumerate} \item $T$ proves $\exists \vec p\:\psi(\vec p)$ and $\forall \vec p\:(\psi(\vec p)\to \exists x_0\dots \exists x_{n-1}\delta(x_0,\dots,x_{n-1},\vec p))$. \item For any $L$-sentence $\phi$, $$S \vdash \phi \ \Rightarrow \ T \vdash \forall \vec{p} \ (\psi(\vec{p}) \rightarrow \phi^\Sigma).$$ \end{enumerate} \end{definition} Let $\Sigma$ be an interpretation of $S$ in $T$. Let $M$ be a model of $T$, and let $\vec{m}$ be a tuple of elements of $M$ such that $M \models \psi(\vec{m})$. The interpretation $\Sigma$ naturally defines an internal model $\Sigma^*(M,\vec m)$ of $S$ within $M$. The domain of $\Sigma^*(M,\vec m)$ is the quotient of the set $D:=\{\vec x\in M: M\models \delta(\vec x,\vec m)\}$ by the equivalence relation defined on $D$ by the interpreted equality relation. The formulas $R_\Sigma$ then define the interpretation of the relation symbols of $S$ in $\Sigma^*(M,\vec m)$. In fact, $\Sigma^*(M,\vec m)$ will be a model of $S$, since for every $\alpha \in S$ we have $M \models \alpha^\Sigma(\vec m)$. We can often ignore the dependence of $\Sigma^*(M,\vec m)$ on the choice of parameters $\vec m$ and will denote it by $\Sigma^*(M)$.
Next we introduce the notion of bi-interpretability of theories (and of the corresponding classes of models). \begin{definition} Let $T$ be a $K$-theory and $S$ be an $L$-theory. Suppose there are interpretations $\Sigma$ of $S$ in $T$ and $\Xi$ of $T$ in $S$. Then, for each model $M$ of $T$, $\Xi^*(\Sigma^*(M))$ is a model of $T$ and, for each model $N$ of $S$, $\Sigma^*(\Xi^*(N))$ is a model of $S$. We demand that the models in these pairs be not merely isomorphic but definably isomorphic, in the following sense. Suppose there are formulas $\alpha$ in $K$ and $\beta$ in $L$ such that, for any model $M$ of $T$, $\alpha$ defines in $M$ an isomorphism between $M$ and $\Xi^*(\Sigma^*(M))$, for any admissible choice of parameters, and, for any model $N$ of $S$, $\beta$ defines in $N$ an isomorphism between $N$ and $\Sigma^*(\Xi^*(N))$, for any admissible choice of parameters. Then we say that $T$ and $S$ are \emph{bi-interpretable}. \end{definition} Bi-interpretation is a rather strong form of equivalence of theories; in particular, it reduces the decision problem for one theory to that of the other.\footnote{There are also well-known weaker conditions under which interpretations preserve decidability of theories. For example, it is sufficient to require that each model of $S$ is isomorphic to a model of the form $\Sigma^*(M)$ for some model $M$ of $T$, see Lemma 4.1 in \cite{makowsky2017can}.} \begin{proposition} If $S$ and $T$ are bi-interpretable and $S$ is decidable, then so is $T$. \end{proposition} We will give concrete examples of how this works below. \section{Background on coordinatization} \label{se:ptrings} \begin{definition} A \emph{Pythagorean field} is a field in which every sum of two squares is a square: $$\forall x, y \ \exists z \ (x^2 + y^2 = z^2).$$ \end{definition} We would like to describe mutual interpretations between the classes of metric Wu planes and of Pythagorean fields.
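As a quick sanity check on this definition: the reals form a Pythagorean field, while $\mathbb{Q}$ does not, since already $1^2 + 1^2 = 2$ is a sum of two squares that is not a square in $\mathbb{Q}$. The following Python sketch (our own illustration, not part of the formal development; the function name is ours) decides whether a rational number is a square:

```python
from fractions import Fraction
from math import isqrt

def is_square_in_Q(q):
    """Decide whether a non-negative rational q is a square in Q.
    A reduced fraction n/d is a square iff n and d are perfect squares."""
    q = Fraction(q)  # Fraction normalizes to lowest terms
    n, d = q.numerator, q.denominator
    rn, rd = isqrt(n), isqrt(d)
    return rn * rn == n and rd * rd == d

# Q fails the Pythagorean property already at x = y = 1:
# 1^2 + 1^2 = 2 is a sum of two squares but not a square in Q.
```

In contrast, every real closed field is Pythagorean, since there every non-negative element is a square.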
Given a field $\mathcal{F}$ one can define a plane $\Pi$ in a standard manner via Cartesian coordinates; we denote such a $\Pi$ as $PP^*_{wu}(\mathcal{F})$, following \cite{makowsky2019can}. It is rather obvious that the two sorts (points and lines) and the predicates of incidence and orthogonality will be definable in the language of fields, which in fact yields a first-order (two-dimensional) interpretation. Notice that this interpretation is parameter-free. On the other hand, given a plane $\Pi$ we can define a field $\mathcal{F}$ in it by a classical construction known as `coordinatization'. This construction was explicitly introduced by M. Hall in \cite{hall1943projective}, who credits \cite{von1857beitrage,hilbert1971foundations} for the original idea. A good exposition can be found in \cite{blumenthal1980modern,szmielew1983affine}. Since coordinatization is more complicated than the map in the opposite direction, it is not immediately obvious that it yields a first-order interpretation. In fact, the amount of work needed to see it is considerable. The purpose of this section is to give a logically-minded reader enough details, without writing out all the first-order formulas explicitly, to be convinced that it indeed does. We follow here \cite{ivanov2016affine}, which contains a particularly nice exposition of this construction. Let $\Pi$ be a structure satisfying (I-1, I-2, I-3) and (ParAx), with two distinguished intersecting lines $\ell_0, m_0$ in $\Pi$. Let $O$ be the point of intersection of the lines $\ell_0$ and $m_0$. Take any point $Z_0$ such that $Z_0 \notin \ell_0 \cup m_0$. All the constructions below will depend on a particular choice of the parameters $\ell_0$, $m_0$, and $Z_0$ in the above configuration.
\begin{lemma} \label{bijection} There is a formula $\textit{bij}(X,Y,\ell_0, m_0, Z_0) \in \mathrm{FOL}_{\in}$ which, for every choice of $\ell_0$, $m_0$ and $Z_0$ as above, defines a bijection between the points of $\ell_0$ and of $m_0$. \end{lemma} \begin{proof} Let $d$ be the line going through the points $O$ and $Z_0$. Let $X \in \ell_0$ and $h(X)$ be the point at the intersection of $d$ and the line $m_X$ parallel to $m_0$ containing $X$. Let $y(X) \in m_0$ be the point at the intersection of $m_0$ and the line $\ell_X$ parallel to $\ell_0$ and containing $h(X)$. Clearly $f: \ell_0 \rightarrow m_0$ given by $f(X) =y(X)$ is a bijection and is $\mathrm{FOL}$ definable by a formula $\textit{bij}(X,Y,\ell_0, m_0, Z_0)$. \end{proof} We will define a structure $RF^*_{field}(\Pi)$ (this notation is once again taken from \cite{makowsky2019can}) whose universe will be denoted $K$, which we take to be ${\{A: A\in\ell_0\}}$. Thinking of $\ell_0$ and $m_0$ as the axes of a {\em coordinate system} we can identify the points of $\Pi$ with elements of $K^2$ as follows. Let $P$ be a point of $\Pi$. The projection of a point $P$ onto $ \ell_0$ is defined by the point $X \in \ell_0$ which is the intersection of the line $m_P$ parallel to $m_0$ with $P \in m_P$. After analogously projecting the point $P$ onto $m_0$, we get a point $Y \in m_0$. Then the coordinates of $P$ are the pair $(X, f^{-1}(Y)) \in K^2$, where $f$ is the bijection constructed in Lemma \ref{bijection}. We define $0$ and $1$ in $K$ by saying that the point $O$ has coordinates $(0,0)$ and the point $Z_0$ has coordinates $(1, 1)$. Coordinates and elements of $K$ will also be denoted by lower case letters. It should be clear from the context whether lower case letters denote lines or elements of $K$. Next we define the {\em slope} $sl(\ell) \in K \cup \{\infty\}$ of a line $\ell$ in $\Pi$. If $\ell$ is parallel to $\ell_0$, its slope is $0$ and it is called a {\em horizontal} line.
If $\ell$ is parallel to $m_0$, its slope is $\infty$ and it is called a {\em vertical} line. For $\ell$ not vertical, let $\ell_1$ be the line parallel to $\ell$ and passing through $0$. Let $(1,a)$ be the coordinates of the intersection of $\ell_1$ with the vertical line $\ell_2$ passing through $(1,0)$. Then we define the slope by $sl(\ell)= a \in K$. This shows: \begin{lemma} \label{slope} There is a first order formula $slope(\ell, A, \ell_0, m_0, Z_0) \in \mathrm{FOL}_{\in}$ which expresses $sl(\ell)=A$, for any choice of parameters $\ell_0, m_0, Z_0$. There is also a first order formula $slope_{\infty}(\ell, m_0) \in \mathrm{FOL}_{\in}$ expressing $sl(\ell)=\infty$. \end{lemma} \begin{lemma} \label{slope-1} \begin{enumerate}[(i)] \item Two lines $\ell, \ell_1$ have the same slope, $sl(\ell)=sl(\ell_1)$, iff they are parallel. \item For the line $d$ defined in Lemma \ref{bijection} we have $sl(d)=1$ (because $(1,1) \in d$). \end{enumerate} \end{lemma} We now define a ternary operation $T: K^3 \rightarrow K$ on the set $K = \{A:A\in\ell_0\}$. We think of $T(a,x,b)= \langle ax+b \rangle$ as the result of multiplying $a$ with $x$ and then adding $b$. But we have yet to define multiplication and addition. Let $a,b,x \in K$. Let $\ell$ be the unique line with $sl(\ell)=a \neq \infty$ intersecting the line $m_0$ at the point $P_1$ with coordinates $P_1 =(0,b)$. Let $\ell_1 =\{ (x,z) \in K^2 : z \in K\}$. The line $\ell$ intersects $\ell_1$ at a unique point, say $P_2= (x,y)$. We set $T(a,x,b) = y$. \begin{lemma} \label{ptr} There is a formula $\textit{Ter}(a,x,b,y,\ell_0, m_0, Z_0) \in \mathrm{FOL}_{\in}$, where $a,b,x,y$ range over coordinates and $\ell_0, m_0, Z_0$ are parameters of lines and points, which expresses that $T(a,x,b)=y$.
\end{lemma} \begin{lemma} \label{ptr-1} The ternary operation $T(a,x,b)$ has the following properties and interpretations: \begin{description} \item[(T-1):] $T(1,x,0)=T(x,1,0)=x$ \\ \emph{$T(1,x,0)=x$ means that the auxiliary line $d =\{ (x,x) \in K^2 : x \in K \}$ is a line with $sl(d)=1$. \\ $T(x,1,0)=x$ means that the slope of the line $\ell$ passing through $(0,0)$ and $(1,x)$ is given by $sl(\ell)=x$. } \item[(T-2):] $T(a,0,b)=T(0,a,b)=b$ \\ \emph{The equation $T(a,0,b)=b$ means that the line $\ell$ defined by $T(a,x,b)=y$ intersects $m_0$ at $(0,b)$ (which is the meaning of $ax+b$ in analytic geometry). \\ The equation $T(0,a,b)=b$ means that the horizontal line $\ell_1$ passing through $(0,b)$ consists of the points $\{ (a,b) \in K^2 : a \in K \}$.} \item[(T-3):] For all $a,x,y \in K$ there is a unique $b \in K$ such that $T(a,x,b) =y$ \\ \emph{This means that for every slope $s$ different from $\infty$ there is a unique line $\ell$ with $sl(\ell)=s$ passing through $(x,y)$.} \item[(T-4):] For every $a, a', b, b' \in K$ and $a \neq a'$ the equation $T(a,x,b) = T(a',x,b')$ has a unique solution $x \in K$. \\ \emph{This means that two lines $\ell_1$ and $\ell_2$ with different slopes not equal to $\infty$ intersect at a unique point $P$.} \item[(T-5):] For every $x, y, x', y' \in K$ and $x \neq x'$ there is a unique pair $a,b \in K$ such that $T(a,x,b)=y$ and $T(a,x',b)=y'$. \\ \emph{This means that any two points $P_1, P_2$ not on the same vertical line are contained in a unique line $\ell$ with slope different from $\infty$.} \end{description} \end{lemma} A structure $\langle K, T_K, 0, 1 \rangle$ with a ternary operation $T_K$ and $0, 1 \in K$ satisfying (T-1)--(T-5) is called a {\em planar ternary ring} (PTR). We also define addition $add_T(a,b,c)$ by $T(a, 1, b) =c$ and multiplication $mult_T(a,x,c)$ by $T(a, x, 0) =c$. Following \cite{makowsky2019can}, we denote the structure $(K; add_T, mult_T, 0, 1)$ as $RF^*_{field}(\Pi)$.
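In the Cartesian plane over a field, the geometric definition of $T$ can be carried out explicitly, and the properties above become identities of $ax+b$. The following Python sketch (our own illustration over $\mathbb{Q}$, using exact rational arithmetic; all function names are ours) computes $T(a,x,b)$ by intersecting the slope-$a$ line through $(0,b)$ with the vertical line through $(x,0)$, and derives addition and multiplication as in the text:

```python
from fractions import Fraction as F

def line_through(P, Q):
    """Coefficients (A, B, C) of the line A*x + B*y + C = 0 through P and Q."""
    (x1, y1), (x2, y2) = P, Q
    return (y2 - y1, x1 - x2, x2 * y1 - x1 * y2)

def T(a, x, b):
    """T(a, x, b): intersect the line of slope a through (0, b)
    with the vertical line {(x, z) : z in K}."""
    A, B, C = line_through((F(0), b), (F(1), a + b))  # a line of slope a
    return (-C - A * x) / B  # y-coordinate of the intersection point

add = lambda a, b: T(a, F(1), b)   # a + b
mul = lambda a, x: T(a, x, F(0))   # a * x

vals = [F(n, d) for n in range(-2, 3) for d in (1, 2)]
# (T-1) and (T-2) hold in the Cartesian model:
assert all(T(F(1), x, F(0)) == T(x, F(1), F(0)) == x for x in vals)
assert all(T(a, F(0), b) == T(F(0), a, b) == b for a in vals for b in vals)
```

In this model $T(a,x,b)$ evaluates to $ax+b$, which is exactly the intended reading $T(a,x,b)=\langle ax+b \rangle$.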
It is shown in \cite{hilbert2013grundlagen} that if $\Pi$ is a Desarguesian plane, then $RF^*_{field}(\Pi)$ is a skew-field (a field in which multiplication is not assumed to be commutative) of characteristic 0. Moreover, as proved in \cite[page 42, Theorem 1]{bk:Wu1994}, for any such $\Pi$ and any two choices of $\ell_0, m_0, Z_0$, there is an isomorphism between the two obtained skew-fields. \section{Metric Wu planes and Pythagorean fields} \label{se:pyth} The following theorem is stated in a somewhat weaker form in \cite{makowsky2019can}; however, it is based on previous classical results, in particular by Hall \cite{hall1943projective} and Wu \cite{bk:Wu1994}. \begin{theorem} \label{Mak} \begin{itemize} \item[(i)] Let $\mathcal{F}$ be a Pythagorean field of characteristic $0$. Then $PP^*_{wu}(\mathcal{F})$ is a metric Wu plane. \item[(ii)] Let $\Pi$ be a metric Wu plane. Then $RF^*_{field}(\Pi)$ is a Pythagorean field of characteristic $0$. \item[(iii)] $RF^*_{field}(PP^*_{wu}(\mathcal{F}))$ is isomorphic to $\mathcal{F}$. \item[(iv)] $PP^*_{wu}(RF^*_{field}(\Pi))$ is isomorphic to $\Pi$. \item[(v)] The isomorphisms in (iii) and (iv) are definable, that is, they form a bi-interpretation between the classes of metric Wu planes and of Pythagorean fields. \end{itemize} \end{theorem} \begin{proof} (i) Axioms (I-1, I-2, I-3) and (ParAx) are verified in~\cite[Proposition 14.1]{bk:Hartshorne2000}. The infinity axiom holds, since $\mathcal{F}$ has characteristic $0$. Concerning Desargues' axioms, Proposition 14.4 in~\cite{bk:Hartshorne2000} shows that Pappus' theorem holds in a plane defined over a field. Then by Hessenberg's theorem~\cite[page 67]{bk:Wu1994}, Desargues' axioms also hold. We naturally define the lines $a_0x + b_0y + c_0 = 0$ and $a_1x + b_1y + c_1 = 0$ to be orthogonal if $a_0a_1 + b_0b_1 = 0$. Then axiom (O-1) holds by the commutativity of multiplication. The axioms (O-2) and (O-3) hold since we are able to solve systems of linear equations.
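To illustrate the verification of (O-2) in coordinates: with orthogonality of $a_0x + b_0y + c_0 = 0$ and $a_1x + b_1y + c_1 = 0$ defined by $a_0a_1 + b_0b_1 = 0$, the perpendicular through a given point can be written down directly. A minimal Python sketch (our own illustration; the helper names are ours):

```python
def orthogonal(l1, l2):
    """Orthogonality of lines given as coefficient triples (a, b, c)."""
    return l1[0] * l2[0] + l1[1] * l2[1] == 0

def perpendicular_through(l, O):
    """The line through the point O orthogonal to l = (a, b, c):
    the coefficient pair (b, -a) satisfies a*b + b*(-a) = 0,
    and the constant term is chosen so that O lies on the line."""
    a, b, c = l
    x0, y0 = O
    return (b, -a, a * y0 - b * x0)
```

Uniqueness in (O-2) corresponds to the fact that the coefficient pair $(b,-a)$ is determined up to a nonzero scalar.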
Considering axiom (O-5), let $(a_1, b_1)$, $(a_2, b_2)$ and $(a_3, b_3)$ be the coordinates of a triangle. Then the height through $(a_1, b_1)$, perpendicular to the opposite side, is defined by the equation $$ (x - a_1)(a_3 - a_2) + (y - b_1)(b_3 - b_2) = 0,$$ and similarly for the other two vertices. Since systems of linear equations are solvable over any field, any two heights intersect; moreover, the three height equations sum to zero identically, so the intersection point of two heights satisfies the third equation as well. Hence the orthocenter $(x,y)$ exists and (O-5) holds. Since $1 \cdot 1 + 0 \cdot 0 = 1 \neq 0$, any line of the form $x = c$ is non-isotropic. Then for any point $(x_0, y_0)$ there is a non-isotropic line $x = x_0$ passing through it and (O-4) holds. Suppose an angle is formed by two non-isotropic lines given by $ l_{1}x+m_{1}y+n_{1}=0$ and $l_{2}x+m_{2}y+n_{2}=0$. Then the internal and external bisectors of the angle are given by the two equations $$ \frac {l_{1}x+m_{1}y+n_{1}}{\sqrt {l_{1}^{2}+m_{1}^{2}}}=\pm {\frac {l_{2}x+m_{2}y+n_{2}}{\sqrt {l_{2}^{2}+m_{2}^{2}}}}. $$ Since $\mathcal{F}$ is Pythagorean, the square roots exist and are not equal to $0$, because the lines are non-isotropic. Therefore, bisectors exist and (AxSymAx) holds. (ii) This statement is extensively discussed in \cite{bk:Wu1994}. As mentioned above, Hilbert showed in \emph{Grundlagen der Geometrie} that $RF^*_{field}(\Pi)$ forms a skew-field of characteristic $0$. Wu first proves in \cite[Section 2.1]{bk:Wu1994} that the Linear Pascalian axiom (a version of Pappus' theorem) suffices to obtain the commutativity of multiplication and then on \cite[page 72]{bk:Wu1994} shows that the Linear Pascalian axiom holds in any metric Wu plane. Finally, to conclude that $RF^*_{field}(\Pi)$ is Pythagorean, we refer to the Pythagorean theorem (known as the `kou-ku theorem' \cite{bk:Wu1994}) proved on \cite[page 97]{bk:Wu1994}.
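The concurrency argument for the heights in the proof of (O-5) above can be checked symbolically over $\mathbb{Q}$. The sketch below (an illustration of ours, with helper names of our own choosing) builds the three height equations, verifies that they sum to zero identically, and solves two of them for the orthocenter:

```python
from fractions import Fraction as F

def height(p, q, r):
    """Coefficients (A, B, C) of the line A*x + B*y + C = 0 through p,
    perpendicular to the side from q to r:
    (x - p0)(r0 - q0) + (y - p1)(r1 - q1) = 0."""
    A, B = r[0] - q[0], r[1] - q[1]
    return (A, B, -(A * p[0] + B * p[1]))

P1, P2, P3 = (F(0), F(0)), (F(4), F(0)), (F(1), F(3))
h1 = height(P1, P2, P3)
h2 = height(P2, P3, P1)
h3 = height(P3, P1, P2)

# The three height equations sum to zero identically,
# so two heights determine the third.
assert all(h1[i] + h2[i] + h3[i] == 0 for i in range(3))

# Solve h1 and h2 by Cramer's rule and check the point lies on h3.
A1, B1, C1 = h1
A2, B2, C2 = h2
det = A1 * B2 - A2 * B1
x = (B1 * C2 - B2 * C1) / det
y = (A2 * C1 - A1 * C2) / det
assert h3[0] * x + h3[1] * y + h3[2] == 0
```

The identity holds for any triangle, since the coefficient sums telescope; the sample triangle is chosen with rational coordinates only so the check is exact.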
(iii) As mentioned above, it is shown in \cite[page 42, Theorem 1]{bk:Wu1994} that any choice of parameters $\ell_0$, $m_0$ and $Z_0$ in the definition of $RF^*_{field}(\Pi)$ gives us isomorphic fields. Wu provides an explicit construction of such an isomorphism and we briefly describe it here. Let $\ell_0$, $m_0$, $Z_0$ and $\ell'_0$, $m'_0$, $Z'_0$ be two choices of parameters and let $O$, $I$ and $O'$, $I'$ be the corresponding zero and one points. First consider the case when $O = O'$ but $\ell_0 \neq \ell'_0$. Then, as shown by Wu, the parallel projection with respect to the line $II'$ defines the required isomorphism. Now consider the case $\ell_0 \parallel \ell'_0$ and $OO' \parallel II'$. Then the parallel projection with respect to the line $OO'$ defines the required isomorphism. Finally, observe that any other case can be reduced to those two by taking compositions of isomorphisms. For example, in the general case of $\ell_0$ and $\ell'_0$ being neither coincident nor parallel and their intersection point not equal to $O$ or $O'$, construct a line $\ell''_0$ through $O$ parallel to $\ell'_0$. Then take $I''$ to be a point on $\ell''_0$ such that $O'O \parallel I'I''$. The line $\ell''_0$ and the points $O$, $I''$ define a field by taking $m''_0 \perp \ell''_0$, $O \in m''_0$ and $Z''_0$ defined by $I''$. Now, using the first case, obtain an isomorphism between the fields of $\ell_0$ and $\ell''_0$, and, using the second case, an isomorphism between the fields of $\ell''_0$ and $\ell'_0$. Then the required isomorphism is their composition. Clearly, if we choose $\ell_0, m_0$ to be the axes of $PP^*_{wu}(\mathcal{F})$ and $Z_0 = (1, 1)$, then the constructed field is isomorphic to $\mathcal{F}$. Hence $RF^*_{field}(PP^*_{wu}(\mathcal{F}))$ is isomorphic to $\mathcal{F}$ for every choice of $\ell_0, m_0, Z_0$. This result can also be found in~\cite[Theorem 5.9]{hall1943projective}. (iv) Let $\Pi$ be a metric Wu plane and let $\mathcal{F}=RF^*_{field}(\Pi)$.
In order to establish an isomorphism between $PP^*_{wu}(\mathcal{F})$ and $\Pi$ we need to define two maps: a map of points and a map of lines. Points of $PP^*_{wu}(\mathcal{F})$ are pairs $(x,y)\in \mathcal{F}^2$. We recall that the universe of $\mathcal{F}$ is the set of points incident to the line $\ell_0$, and that there is a definable bijection $f$ between the points of $\ell_0$ and $m_0$ (the coordinate axes in $\Pi$, parameters of the considered interpretation). Hence, given $(x,y)$ we can define two auxiliary lines: $m_x$ going through $x\in\ell_0$ and parallel to $m_0$, and $\ell_y$, going through $f(y)\in m_0$ and parallel to $\ell_0$. Let $A$ be the intersection of $m_x$ and $\ell_y$. We map $(x,y)$ to $A$, and it is clear that $A$ has coordinates $(x,y)$. Thus, we have described a (definable) bijection between the sets of points of $PP^*_{wu}(\mathcal{F})$ and $\Pi$. Lines of $PP^*_{wu}(\mathcal{F})$ can be specified by equations $ax+by+c=0$ where not all of $a,b,c$ are $0$. Thus, a line is interpreted by a triple $(a,b,c)\in \mathcal{F}^3$. Two lines are defined to be equal if $(a,b,c)$ and $(a',b',c')$ are proportional. A point $(x,y)$ is incident to a line $(a,b,c)$ if $ax+by+c=0$. Each line in $PP^*_{wu}(\mathcal{F})$ is equal to a line defined by the equation $y=ax+b$ or to a vertical line $x=c$. We construct the corresponding line in $\Pi$ by drawing, in the first case, a line through the point $(0,b)$ with the slope $a$, and in the second case a vertical line (parallel to $m_0$) through $(c,0)$. This maps lines in $PP^*_{wu}(\mathcal{F})$ to lines in $\Pi$ and is clearly a (definable) bijection preserving the incidence relation. Concerning the orthogonality relation, we may assume that the coordinate axes $\ell_0$ and $m_0$ in $\Pi$ are selected to be orthogonal. (By Wu, the field $\mathcal{F}$ does not depend on the choice of parameters, up to isomorphism.) 
We can define lines $(a,b,c)$ and $(a',b',c')$ in $PP^*_{wu}(\mathcal{F})$ to be orthogonal iff $aa'+bb'=0$. Then the usual arguments show that this agrees with the orthogonality of the corresponding lines in $\Pi$. (v) In the proof of (iii) we use the isomorphisms established by Wu in \cite[page 42, Theorem 1]{bk:Wu1994}. The construction of these isomorphisms mostly involves drawing various parallel lines and is clearly definable. The isomorphism between $\mathcal{F}$ and $RF^*_{field}(PP^*_{wu}(\mathcal{F}))$, where $\ell_0, m_0$ are the axes of $PP^*_{wu}(\mathcal{F})$ and $Z_0 = (1, 1)$, is defined by the map $x \mapsto (x, 0)$, which is first-order definable. The construction given in the proof of (iv) explicitly provides us with a definable isomorphism. \end{proof} \section{Undecidability} \label{se:undec} To establish the undecidability of the theory of Pythagorean fields we refer to the following theorem of M. Ziegler~\cite{ar:ziegler,BeesonZiegler}: \begin{theorem} \label{Ziegler} Let $T$ be a finite subtheory of the theory of the field of reals $(\mathbb{R}; +, \times)$. Then \begin{enumerate}[(i)] \item $T$ is undecidable; \item The same holds for the extension of $T$ by the axioms stating that the characteristic of the field is $0$. \end{enumerate} \end{theorem} Although the second part is not mentioned as a result in~\cite{ar:ziegler,BeesonZiegler}, it easily follows from Ziegler's proof. \begin{corollary} \label{Euc} The theories of Pythagorean fields and of Pythagorean fields of characteristic $0$ are undecidable. \end{corollary} Using the bi-interpretability of Pythagorean fields and metric Wu planes, we obtain the undecidability of the theory of metric Wu planes. In fact, we prove a more general theorem establishing the undecidability of a sufficiently wide class of geometric theories. As a preparation for its proof, we define syntactic translations between formulas in the language of fields and formulas in the language $\tau_{wu}$.
Consider any formula $\phi$ in the language of fields. Using the formulas $add$ and $mult$ from the construction of $RF^*_{field}(\Pi)$ to interpret addition and multiplication, we obtain a formula $\phi^{wu}(\ell_0,m_0,Z_0)$ in the language $\tau_{wu}$, where $\ell_0, m_0, Z_0$ are the parameters (free variables) of the formula. Let $\textit{Par}(\ell_0,m_0,Z_0)$ denote the formula stating that $\ell_0$ and $m_0$ are lines intersecting in exactly one point and that $Z_0$ is not incident with either $\ell_0$ or $m_0$. These conditions definably specify the admissible values of the parameters. Then, for any metric Wu plane $\Pi$, \begin{equation}\label{wu}RF^*_{field}(\Pi) \models \phi \Longleftrightarrow \Pi \models \forall{\ell_0,m_0,Z_0}\:(\textit{Par}(\ell_0,m_0,Z_0)\to \phi^{wu}).\end{equation} Similarly, for the other interpretation, consider any formula $\phi$ in the language of $\tau_{wu}$. Using the formulas from the construction of $PP^*_{wu}$ to interpret the two sorts of variables, equality, incidence and orthogonality, we obtain a formula $\phi^{field}$ in the language of fields. Then, for any field $\mathcal{F}$, \begin{equation}\label{field}PP^*_{wu}(\mathcal{F}) \models \phi \Longleftrightarrow \mathcal{F} \models \phi^{field}.\end{equation} Now we are ready to state a geometric version of Ziegler's Theorem. Let $\Pi_\mathbb{R}$ denote the real plane $PP^*_{wu}(\mathbb{R})$. Let $\mathrm{WU}$ denote the first order theory of metric Wu planes and let $\mathrm{PF}$ denote the first order theory of Pythagorean fields of characteristic $0$. \begin{theorem} \label{G-Ziegler} Let $T$ be a finite set of axioms in the vocabulary $\tau_{wu}$ such that $\Pi_\mathbb{R} \models T$. Then $T \cup \mathrm{WU}$ is undecidable. \end{theorem} \begin{proof} Let $T' = \{ \phi^{field} \mid \phi \in T\}$. Then by Ziegler's theorem, $T' \cup \mathrm{PF}$ is undecidable.
We want to prove that \begin{equation}\label{eq} T' \cup \mathrm{PF} \models \phi \iff T \cup \mathrm{WU} \models \forall{\ell_0,m_0,Z_0}\:(\textit{Par}(\ell_0,m_0,Z_0)\to \phi^{wu}). \end{equation} Then, since the translation $(\cdot)^{wu}$ is computable, this provides a computable reduction of $T' \cup \mathrm{PF}$ to $T \cup \mathrm{WU}$ and proves that the latter is undecidable. To prove (\ref{eq}), suppose $T \cup \mathrm{WU} \models \forall{\ell_0,m_0,Z_0}\:(\textit{Par}(\ell_0,m_0,Z_0)\to \phi^{wu})$. Take any $\mathcal{F} \models T' \cup \mathrm{PF}$. Then by Theorem \ref{Mak}, $PP^*_{wu}(\mathcal{F})$ is a metric Wu plane and, using (\ref{field}), we have $PP^*_{wu}(\mathcal{F}) \models T \cup \mathrm{WU}$. Hence, $$PP^*_{wu}(\mathcal{F}) \models \forall{\ell_0,m_0,Z_0}\:(\textit{Par}(\ell_0,m_0,Z_0)\to \phi^{wu}).$$ Then, by (\ref{wu}), $RF^*_{field}(PP^*_{wu}(\mathcal{F})) \models \phi$ and, since ${\mathcal{F} \cong RF^*_{field}(PP^*_{wu}(\mathcal{F}))}$, we obtain $\mathcal{F} \models \phi$. As $\mathcal{F}$ was an arbitrary model of $T' \cup \mathrm{PF}$, it follows that $T' \cup \mathrm{PF} \models \phi$. Conversely, suppose $T' \cup \mathrm{PF} \models \phi$. Consider any $\Pi \models T \cup \mathrm{WU}$. Let $\mathcal{F} = RF^*_{field}(\Pi)$. By Theorem \ref{Mak}, $\mathcal{F}$ is a Pythagorean field of characteristic $0$ and $\Pi$ is isomorphic to $PP^*_{wu}(\mathcal{F})$. Then, using (\ref{field}), we obtain $\mathcal{F} \models T' \cup \mathrm{PF}$. Therefore, $\mathcal{F} \models \phi$ and by (\ref{wu}), $\Pi \models \forall{\ell_0,m_0,Z_0}\:(\textit{Par}(\ell_0,m_0,Z_0)\to \phi^{wu}).$ It follows that $T \cup \mathrm{WU} \models \forall{\ell_0,m_0,Z_0}\:(\textit{Par}(\ell_0,m_0,Z_0)\to \phi^{wu})$. This completes the proof of (\ref{eq}) and thereby of Theorem~\ref{G-Ziegler}. \end{proof} \begin{corollary} \label{undecidability} The theory of metric Wu planes is undecidable.
\end{corollary} \section{Logical Huzita -- Justin axioms} \label{se:hjaxioms} Looking at the Huzita--Justin axioms, it appears as if one could axiomatize origami geometry in the language of incidence and orthogonality only, taking as the underlying geometry a metric Wu plane. However, it turns out that there are several reasons for choosing as the underlying geometry an {\em ordered} metric Wu plane. Firstly, the Huzita--Justin axiom (H-3), which says that for any two lines $\ell_1, \ell_2$ there is a fold which places $\ell_1$ on $\ell_2$, is only true if $\ell_1, \ell_2$ are non-isotropic (not orthogonal to themselves). Our formulation of (H*3) takes this into account. Secondly, Axiom (H-5) states that, given two points $P_1$ and $P_2$ and a line $\ell$, one should be able to construct a fold that places $P_1$ onto $\ell$ and passes through $P_2$. However, such a fold only exists provided $P_2$ is closer to the line $\ell$ than to the point $P_1$. This condition needs to be expressible in the language, which we achieve by introducing the betweenness relation, as formulated in (H*5). Other possibilities are discussed at the end of the paper. We say that a metric Wu plane is orderable if it can be equipped with a ternary relation $Be(A,B,C)$ which satisfies the Hilbertian axioms of betweenness. In Proposition \ref{prop:orderable-2} we prove that a metric Wu plane is orderable iff it has no isotropic lines. If $A,B,C$ are collinear on a line $\ell$, let $\ell'$ be the line orthogonal to $\ell$ going through the point $C$ and let $B'$ be the point on $\ell$ obtained by placing $B$ on $\ell$ after folding along $\ell'$. In the real plane, in the presence of the betweenness relation $Be(A,B,C)$ (``$B$ is between $A$ and $C$''), we have $Out(A,C,B)$ iff either $Be(A,B,C)$ or $Be(A,B',C)$. Therefore $Out(A,C,B)$ is definable using $Be(A,B,C)$ and the usual axioms for betweenness. It is not obvious, however, whether $Be(A,B,C)$ can be formulated using an axiomatization of $Out(A,C,B)$.
The same can be said about $\textit{Closer}(A, \ell, B)$. A reasonable axiomatization of origami geometry can be obtained from ordered metric Wu planes by adding a finite set of axioms. Metric Wu planes already satisfy the Huzita--Justin axioms (H-1), (H-2), (H*3) and (H-4), where (H*3) is our modification of (H-3); see Proposition \ref{prop:isotropic}. An {\em ordered origami plane} is an ordered metric Wu plane which also satisfies our modified axioms (H*5) and (H*6); see Section \ref{se:vieta}. This deviates from the definition given in \cite{makowsky2018undecidability,makowsky2019can}, where an ordered origami plane is defined in an inconsistent way. Ordered metric Wu planes are bi-interpretable with ordered Pythagorean fields, whose first-order theory is undecidable by Ziegler's theorem. Similarly, we obtain that the first order theory of our axiomatization of origami geometry is also undecidable. The Huzita--Justin axioms were not meant to be axioms in the logical sense, but rather rules of folding. Yet, one can try to formulate them naively in the language $\tau_{wu}$ by treating the requirement to construct an object (satisfying given conditions) as a classical existential statement. The Huzita--Justin axioms are naturally stated using the relation $\textit{Sym}(P_1, \ell, P_2)$ \emph{``points $P_1$ and $P_2$ are symmetric with respect to line $\ell$''} defined in Section~\ref{se:wuplanes}. Then one obtains the following versions of the Huzita--Justin axioms. \begin{description} \item[(H-1):] Given two points $P_1$ and $P_2$, one can make a fold that passes through both of them: $$ \forall P_1, P_2 \,\exists \ell \:(P_1 \in \ell \wedge P_2 \in \ell). $$ \item[(H-2):] Given two points $P_1$ and $P_2$, one can make a fold that places $P_1$ onto $P_2$: $$ \forall P_1, P_2\, \exists \ell\: \textit{Sym}(P_1, \ell, P_2).
$$ \item[(H-3):] Given two lines $\ell_1$ and $\ell_2$, one can make a fold that places $\ell_1$ onto $\ell_2$: $$ \forall \ell_1, \ell_2\, \exists k\, \forall P_1\: \left( P_1 \in \ell_1 \rightarrow \exists P_2 \left( P_2 \in \ell_2 \wedge \textit{Sym}(P_1, k, P_2) \right) \right). $$ \item[(H-4):] Given a point $P$ and a line $\ell$, one can make a fold orthogonal to $\ell$ that passes through $P$: $$ \forall P, \ell\, \exists k\: (P \in k \wedge \ell \perp k). $$ \item[(H-5):] Given two points $P_1$ and $P_2$ and a line $\ell_1$, one can make a fold that places $P_1$ onto $\ell_1$ and passes through $P_2$: $$ \forall P_1, P_2, \ell_1 \,\exists \ell_2\: (P_2 \in \ell_2 \wedge \exists P_3\: (\textit{Sym}(P_1, \ell_2, P_3) \wedge P_3 \in \ell_1)). $$ \item[(H-6):] Given two points $P_1$ and $P_2$ and two lines $\ell_1$ and $\ell_2$, one can make a fold that places $P_1$ onto $\ell_1$ and $P_2$ onto $\ell_2$: \begin{multline*} \forall P_1, P_2, \ell_1, \ell_2 \, \exists \ell_3 \: \left( \exists Q_1\: (\textit{Sym}(P_1, \ell_3, Q_1) \wedge Q_1 \in \ell_1) \wedge \right. \\ \left. \wedge \exists Q_2\: (\textit{Sym}(P_2, \ell_3, Q_2) \wedge Q_2 \in \ell_2) \right). \end{multline*} \end{description} Since the original Huzita--Justin axioms are folding rules rather than assertions, Axioms (H-5) and (H-6) as formulated above do not hold in the real plane: the described folds do not exist for all configurations. The exceptional configurations have to be excluded explicitly, so we amend these axioms with the most natural conditions under which the folds do exist. On the other hand, the formalizations of Axioms (H-1, H-2, H-3, H-4) obviously hold in the real plane and, as explained below, almost hold in any metric Wu plane. We would like to state that metric Wu planes satisfy the first four origami axioms (H-1, H-2, H-3, H-4). However, since there may exist isotropic lines in a metric Wu plane, such planes do not necessarily satisfy (H-3).
Thus, we modify this axiom. \begin{description} \item[(H*3):] Given two non-isotropic lines $\ell_1$ and $\ell_2$, there is a fold (line) that places $\ell_1$ onto $\ell_2$. \end{description} \begin{proposition} \label{prop:isotropic} Every metric Wu plane satisfies the origami axioms (H-1, H-2, H*3, H-4). \end{proposition} \begin{proof} (H-1) is equivalent to (I-1) and (H-4) is equivalent to (O-2). To prove (H-2) we use the construction from \cite[page 75]{bk:Wu1994}. (AxSymAx) is an analogue of (H*3) for intersecting lines, so it remains to consider the case of parallel lines. Let $\ell_1$ and $\ell_2$ be parallel. Take any point $P_1 \in \ell_1$ and drop a perpendicular from $P_1$ onto $\ell_2$; let its foot be $P_2$. Then the perpendicular bisector of $P_1 P_2$ is the required fold. \end{proof} \section{Ordered metric Wu planes and Pythagorean fields} \label{se:ordered} We have established a correspondence between Pythagorean fields and metric Wu planes. Following \cite{alperin2000mathematical}, the next step would be a correspondence between Euclidean fields, defined below, and planes satisfying some analogue of (H-5). \begin{definition} \label{eucfield} A \emph{Euclidean field} is a formally real Pythagorean field such that every element is either a square or the opposite of a square: $$\forall x \exists y \ (x=y^2 \lor -x=y^2).$$ \end{definition} As we will discuss in the next section, Euclidean fields are always uniquely ordered, therefore we want our plane to be in some sense ``ordered'' as well. One way to do so would be to take as an axiom that there are no isotropic lines. Then the corresponding field would be formally real and hence orderable (see below). We take a different approach and follow \cite{bk:Wu1994}. As the concept of ``lying between'' is not definable in a metric Wu plane, we introduce a new betweenness relation together with the axioms that describe it.
We interpret $Be(P_1, P_2, P_3)$ as \emph{the three points are distinct, lie on the same line, and $P_2$ is between $P_1$ and $P_3$}. Note that if the three points are not all distinct, none of them lies between the others. Let $\tau_{o-wu}$ be the signature consisting of $\in$, $\perp$ and $Be$. \paragraph{Axioms of betweenness} \begin{description} \item[(B-1):] Let $A, B, C$ be three distinct points on a line. If $B$ lies between $A$ and $C$, then $B$ also lies between $C$ and $A$. \item[(B-2):] For any two distinct points $A$ and $C$ on a line, there always exists another point $B$ which lies between $A$ and $C$, and another point $D$ such that $C$ lies between $A$ and $D$. \item[(B-3):] Given any three distinct points $A, B, C$ on a line, one and only one of the following three cases holds: $B$ lies between $A$ and $C$, $A$ lies between $B$ and $C$, or $C$ lies between $A$ and $B$. \item[(B-4):] (Pasch) Assume the points $A, B, C$ and the line $\ell$ are in general position, i.e., the three points are not on one line and none of them is on $\ell$. Suppose $\ell$ intersects the line $AB$ at a point $D$. If $Be(A, D, B)$, then there is $D' \in \ell$ with $Be(A, D', C)$ or $Be(B, D', C)$. \end{description} \begin{definition} A $\tau_{o-wu}$ structure $\Pi$ is an ordered metric Wu plane if it is a metric Wu plane satisfying the axioms of betweenness (B-1, B-2, B-3, B-4). \end{definition} \begin{proposition} \label{prop:orderable-1} Every ordered metric Wu plane satisfies (H-3) and hence the origami axioms (H-1, H-2, H-3, H-4). \end{proposition} \begin{proof} There are no isotropic lines in ordered metric Wu planes, as proven in \cite[page 107, Theorem 3]{bk:Wu1994}; hence (H*3) applies to all pairs of lines and yields (H-3). \end{proof} If $\mathcal{F}$ is an ordered field, we define the relation of betweenness on $PP^*_{wu}(\mathcal{F})$ in the standard way. If $\Pi$ is an ordered metric Wu plane, we follow \cite[page 105]{bk:Wu1994} to define an order on $RF^*_{field}(\Pi)$.
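For a plane over an ordered field, the ``standard'' betweenness is: $B$ lies between $A$ and $C$ iff the three points are distinct, collinear, and $B = A + t(C-A)$ for some $0 < t < 1$. The sketch below (an illustrative check of ours over the rationals) verifies instances of (B-1)--(B-3) for this definition:

```python
from fractions import Fraction as F

def between(a, b, c):
    """Be(a, b, c): the points are distinct, collinear, and
    b = a + t*(c - a) for some parameter 0 < t < 1."""
    if len({a, b, c}) < 3:
        return False
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    if (bx - ax) * (cy - ay) != (by - ay) * (cx - ax):  # not collinear
        return False
    # parameter t along a coordinate in which a and c differ
    t = (bx - ax) / (cx - ax) if cx != ax else (by - ay) / (cy - ay)
    return 0 < t < 1

A, B, C = (F(0), F(0)), (F(1), F(2)), (F(3), F(6))
assert between(A, B, C) and between(C, B, A)   # (B-1): symmetric in the outer points
# (B-3): exactly one of the three points lies between the other two
assert sum([between(A, B, C), between(B, A, C), between(A, C, B)]) == 1
# (B-2) witness: the midpoint of two distinct points lies between them
M = ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2)
assert between(A, M, C)
```

The check is exact since all coordinates are rational; it only illustrates the definition and does not replace the axiomatic treatment in the text.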
Using the result \cite[page 103, Separation property 1]{bk:Wu1994}, we separate all points on $\ell_0$ distinct from $0$ into two parts. Then $0$ lies between $A, B$ when $A, B$ lie in different parts, and $0$ does not lie between $A, B$ when $A, B$ lie in the same part. We define those numbers in $RF^*_{field}(\Pi)$ whose corresponding points lie in the same part as $1$ to be positive numbers and those whose corresponding points lie in the other part to be negative numbers. Then we can say that $a < b$ whenever $b - a$ is a positive number. \begin{theorem} \label{o-wuplanes} \begin{itemize} \item[(i)] Let $\mathcal{F}$ be an ordered Pythagorean field. Then $PP^*_{wu}(\mathcal{F})$ is an ordered metric Wu plane. \item[(ii)] Let $\Pi$ be an ordered metric Wu plane. Then $RF^*_{field}(\Pi)$ is an ordered Pythagorean field. \item[(iii)] $RF^*_{field}(PP^*_{wu}(\mathcal{F}))$ is isomorphic to $\mathcal{F}$. \item[(iv)] $PP^*_{wu}(RF^*_{field}(\Pi))$ is isomorphic to $\Pi$. \item[(v)] The isomorphisms in (iii) and (iv) are definable, that is, form a bi-interpretation between the classes of ordered metric Wu planes and of ordered Pythagorean fields. \end{itemize} \end{theorem} \begin{proof} (i) Using properties of ordered fields, it is easy to check that $PP^*_{wu}(\mathcal{F})$ satisfies the axioms of betweenness (B-1, B-2, B-3, B-4). (ii) We only need to show that $RF^*_{field}(\Pi)$ is an ordered field. This is proved on \cite[page 105, Theorem 1]{bk:Wu1994}. (iii) By Theorem \ref{Mak} it is sufficient to check that the relation of order is preserved. As discussed on \cite[page 105]{bk:Wu1994}, if we take different $\ell_0$, $m_0$, $Z_0$ in the construction of $RF^*_{field}(\Pi)$, the canonical isomorphism between the obtained fields will preserve order. If we choose $\ell_0, m_0$ to be the axes of $PP^*_{wu}(\mathcal{F})$ and $Z_0 = (1, 1)$, then the field is clearly isomorphic to $\mathcal{F}$. 
It follows that $\mathcal{F}$ is isomorphic to $RF^*_{field}(PP^*_{wu}(\mathcal{F}))$ for any choice of parameters. (iv) Let $\mathcal{F}$ denote the field $RF^*_{field}(\Pi)$. By Theorem \ref{Mak} it is sufficient to check that the betweenness relation is preserved under the isomorphism of the metric Wu planes $\Pi$ and $PP^*_{wu}(\mathcal{F})$. Since collinearity is preserved, it is sufficient to consider the betweenness relation for points on the same line. Suppose three points $A,B,C$ on a line $\ell$ in $\Pi$ are given. Assume $\ell$ is not vertical and consider the intersections $A', B', C'$ of the $\ell_0$ axis with the lines parallel to $m_0$ going through $A, B, C$ respectively. By Corollary 1 in \cite[page 104]{bk:Wu1994}, $Be(A, B, C)$ holds in an ordered metric Wu plane if and only if $Be(A', B', C')$ does. On the other hand, the interpretation of betweenness in $PP^*_{wu}(\mathcal{F})$ for points of the coordinate axis is the same as that in $\Pi$. This shows the claim in the case $\ell$ is not vertical. If $\ell$ is vertical, we consider the projections of $A,B,C$ on the $m_0$ axis and the corresponding points on $\ell_0$ via the bijection $f$. By the same principle, $f$ preserves betweenness on the respective coordinate axes, hence $Be(A,B,C)$ holds in $\Pi$ iff it holds in $PP^*_{wu}(\mathcal{F})$. (v) We use the same isomorphisms as in Theorem \ref{Mak}, which were already shown to be definable. \end{proof} After establishing Theorem \ref{o-wuplanes}, we also want to obtain an ordered analogue of Theorem \ref{G-Ziegler}. For this purpose, we need an ordered version of Theorem \ref{Ziegler}. Although~\cite{ar:ziegler,BeesonZiegler} concerns only theories in the language of fields, the proof of Ziegler's Theorem essentially consists of constructing a model of $T$ in which the ring of integers is interpretable. Then it is easy to see that the proof still holds for the case of ordered fields, which gives us the following result.
\begin{theorem} \label{o-Ziegler} Let $T$ be a finite subtheory of the theory of the ordered field of reals $(\mathbb{R}; +, \times, \leq)$. Then \begin{enumerate}[(i)] \item $T$ is undecidable; \item The same holds for the extension of $T$ by the axioms stating that the characteristic of the field is $0$. \end{enumerate} \end{theorem} Then using Theorem \ref{o-wuplanes} and the same technique as in Theorem \ref{G-Ziegler}, we obtain an ordered version of the geometrical Ziegler's theorem. Let $\Pi_\mathbb{R}$ denote the real plane $PP^*_{wu}(\mathbb{R})$. Let $\mathrm{OWU}$ denote the first order theory of ordered metric Wu planes. \begin{theorem} \label{o-G-Ziegler} Let $T$ be a finite set of axioms in the vocabulary $\tau_{o-wu}$ such that $\Pi_\mathbb{R} \models T$. Then $T \cup \mathrm{OWU}$ is undecidable. \end{theorem} Next we consider the question of orderability. Recall that a field is orderable (or \emph{formally real}) if $-1$ is not a sum of squares. For Pythagorean fields this is equivalent to saying that $-1$ is not a square. The statement that there are no isotropic lines plays a similar role for metric Wu planes. \begin{proposition} \label{prop:orderable-2} A metric Wu plane $\Pi$ is orderable iff there are no isotropic lines in $\Pi$. \end{proposition} \begin{proof} In one direction, we have already mentioned the theorem of Wu that ordered metric Wu planes have no isotropic lines. In the other direction, assume a plane $\Pi$ without isotropic lines is given, take as a coordinate system a pair of orthogonal lines, and consider the corresponding field ${\cal F}=RF^*_{field}(\Pi)$. By Theorem \ref{Mak} (iv), $PP^*_{wu}({\cal F})$ is isomorphic to $\Pi$. We claim that ${\cal F}$ is formally real. Assume the contrary; then, since in a Pythagorean field every sum of squares is a square, $d^2+1=0$ in $\cal F$ for some $d$. Then the line defined by the points $(1,0)$ and $(0,d)$ (and the parallel line given by the equation $dx+y=0$) is isotropic.
This contradicts the assumption that $\Pi$ has no isotropic lines, so $\cal F$ is formally real. Hence $PP^*_{wu}({\cal F})$ is orderable, and, being isomorphic to $PP^*_{wu}({\cal F})$, the plane $\Pi$ is orderable as well. \end{proof} \ignore{{\color{blue} Let $T$ be a first order theory in a signature $\tau$, and let $\sigma$ be obtained from $\tau$ by adding a new $n$-ary predicate letter $P$. A \emph{definitional extension of $T$ in $\sigma$} is a theory obtained from $T$ by adding the axiom $$\forall x_1,\dots,x_n\:(P(x_1,\dots, x_n)\leftrightarrow \phi(x_1,\dots, x_n)),$$ where $\phi$ is a formula in $\tau$ with exactly the variables $x_1,\dots, x_n$ free. \begin{proposition} The theory of ordered metric Wu plains is deductively equivalent to a definitional extension of the theory of metric Wu plains without isotropic lines. \end{proposition} \begin{proof} It is sufficient to show that there is a formula $\phi_{Be}(A,B,C)$ in $\tau_{wu}$ defining the betweenness relation in every metric Wu plane without isotropic lines. We use the definability of the isomorphism of a metric Wu plane $\Pi$ and the interpreted structure $PP^*_{wu}(RF^*_{field}(\Pi))$. Let $\Pi$ be a metric Wu plane without isotropic lines and let $\mathcal{F}$ denote $RF^*_{field}(\Pi)$. We know that $\mathcal{F}$ is an orderable Pythagorean field (interpreted in $\Pi$). Moreover, the order on $\mathcal{F}$ is definable in the language of fields (non-negative elements are the squares). Using the definition of order in $\mathcal{F}$ we can define the betweenness relation in $PP^*_{wu}(RF^*_{field}(\Pi))$. However, this structure is interpreted in $\Pi$ and is definably isomorphic to $\Pi$. Using the definition of the isomorphism we obtain a definition of betweenness in $\Pi$. \end{proof} The undecidability result for the unordered geometries can now be extended to the ordered ones. Let $\Pi_\mathbb{R}$ denote the ordered real plane. \begin{theorem} \label{OG-Ziegler} Let $T$ be a finite set of axioms in the vocabulary $\tau_{o-wu}$ such that $\Pi_\mathbb{R} \models T$.
Then the extension of the theory of ordered metric Wu planes by $T$ (and even its fragment in $\tau_{wu}$) is undecidable. \end{theorem} \begin{proof} Let $S$ be the set of formulas obtained from $T$ by replacing $Be$ by its definition $\phi$ in metric Wu planes without isotropic lines. Clearly, $T$ is a conservative definitional extension of the theory of metric Wu planes without isotropic lines together with $S$. However, $S$ is undecidable by Theorem~\ref{G-Ziegler}, therefore so is $T$. \end{proof} } } \section{Euclidean and Vieta fields and origami axioms} \label{se:vieta} As already mentioned in the previous section, our goal is to establish a correspondence between Euclidean fields and planes satisfying some amended version of (H-5). Recall that a Euclidean field is a formally real Pythagorean field such that every element is either a square or the opposite of a square. The nonzero squares of a Euclidean field constitute a positive cone, hence (see~\cite{Becker}) Euclidean fields admit a unique ordering: $$x\leq y \iff \exists z\ (x+z^2=y).$$ Since the ordering is definable, one often considers Euclidean fields as ordered fields. An ordered field is Euclidean iff each positive element in it is a square. \begin{proposition} The first order theory of Euclidean fields is undecidable. \end{proposition} \begin{proof} Immediate from Theorem \ref{Ziegler}, since Euclidean fields are axiomatized by finitely many sentences true in $\mathbb{R}$. \end{proof} Next we formulate our amended version of Axiom (H-5), in which we add an appropriate precondition for the constructed fold to exist. Below we use the notion \emph{a point $A$ is closer to a line $\ell$ than to a point $B$}, $\textit{Closer}(A, \ell, B)$, which can be formulated in the language $\tau_{o-wu}$ by saying that there exist points $H \in \ell$ and $B'$ and a line $m\ni A$ such that $\textit{Sym}(H,m,B')$, and either $Be(A, B', B)$ or $B = B'$.
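In the real plane this definition of $\textit{Closer}(A,\ell,B)$ unfolds to a distance comparison: reflecting $H \in \ell$ across a line $m$ through $A$ preserves $|AH|$, so the condition says that some point of $\ell$ lies at distance at most $|AB|$ from $A$, i.e. $\mathrm{dist}(A,\ell) \le |AB|$. A numeric sketch (floating point, with helper names of our own choosing):

```python
import math

def dist_point_line(a, line):
    # line is given as a pair (p, q) of distinct points on it;
    # distance = |cross(q - p, a - p)| / |q - p|
    (px, py), (qx, qy) = line
    ax, ay = a
    num = abs((qx - px) * (ay - py) - (qy - py) * (ax - px))
    return num / math.hypot(qx - px, qy - py)

def closer(a, line, b):
    """Closer(A, l, B) in the real plane: some H on l satisfies
    |AH| <= |AB|, equivalently dist(A, l) <= |AB|."""
    return dist_point_line(a, line) <= math.dist(a, b)

x_axis = ((0.0, 0.0), (1.0, 0.0))
assert closer((0.0, 2.0), x_axis, (5.0, 2.0))      # dist 2 <= |AB| = 5
assert not closer((0.0, 2.0), x_axis, (1.0, 2.0))  # dist 2 >  |AB| = 1
```

This real-plane unfolding is exactly the precondition under which the fold of (H*5) below exists: the circle around $P_2$ of radius $|P_1P_2|$ must meet the target line.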
\begin{description} \item[(H*5):] Given two points $P_1$ and $P_2$ and a line $\ell_1$, if $P_2$ is closer to $\ell_1$ than to $P_1$, then there is a fold (line) that places $P_1$ onto $\ell_1$ and passes through $P_2$. \end{description} \begin{definition} A $\tau_{o-wu}$ structure $\Pi$ is a \emph{Euclidean ordered metric Wu plane} if it is an ordered metric Wu plane satisfying (H*5). \end{definition} \begin{theorem} \label{th:pyth} \begin{itemize} \item[(i)] Let $\mathcal{F}$ be a Euclidean field. Then $PP^*_{wu}(\mathcal{F})$ is a Euclidean ordered metric Wu plane. \item[(ii)] Let $\Pi$ be a Euclidean ordered metric Wu plane. Then $RF^*_{field}(\Pi)$ is a Euclidean field. \item[(iii)] Furthermore, $RF^*_{field}(PP^*_{wu}(\mathcal{F}))$ is isomorphic to $\mathcal{F}$. \item[(iv)] $PP^*_{wu}(RF^*_{field}(\Pi))$ is isomorphic to $\Pi$. \item[(v)] The isomorphisms in (iii) and (iv) are definable, that is, form a bi-interpretation between the classes of Euclidean metric Wu planes and of Euclidean fields. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] In order to show (H*5) it is enough to prove that a circle intersects a line whenever the distance between the center and the line does not exceed the radius. In $PP^*_{wu}(\mathcal{F})$ this amounts to solving a quadratic equation. Therefore, if $\mathcal{F}$ is Euclidean, (H*5) holds. \item[(ii)] Consider any positive $s \in RF^*_{field}(\Pi)$; we show that a square root of $s$ exists. Without loss of generality we assume $s > 1$, since otherwise we can find a square root of $s^{-1}$. Let $P_1 = (0, s)$, $P_2 = (0, \frac{s - 1}{2})$ and let $\ell_0$ be the $x$ axis. Since the distance from $P_2$ to $\ell_0$ is $\frac{s-1}{2} < \frac{s+1}{2} = |P_1 P_2|$, by (H*5) there is a fold through $P_2$ placing $P_1$ onto $\ell_0$; let $P_3 = (x, y)$ be the image of $P_1$, so that $P_3 \in \ell_0$ and $|P_1 P_2| = |P_3 P_2|$. The first condition gives us $y = 0$. The second one, if we calculate the squares of distances, means ${\left( \frac{s-1}{2} \right)^2 + x^2 = \left( \frac{s+1}{2} \right)^2}$, which is equivalent to $x^2 = s$.
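Over $\mathbb{R}$ the computation in part (ii) can be checked numerically. The sketch below (illustrative only, with a hypothetical helper name) verifies that the fold through $P_2$ indeed sends $P_1 = (0,s)$ to $(\sqrt{s}, 0)$: the fold is the perpendicular bisector of $P_1P_3$, and it passes through $P_2$ precisely when $|P_2P_1| = |P_2P_3|$.

```python
import math

def check_fold(s):
    """For s > 1, take P1 = (0, s) and P2 = (0, (s - 1)/2) as in the
    proof of (ii); the fold placing P1 onto the x-axis through P2
    sends P1 to P3 = (sqrt(s), 0).  P2 lies on the perpendicular
    bisector of P1 P3 iff |P2 P1| = |P2 P3|."""
    p1 = (0.0, s)
    p2 = (0.0, (s - 1) / 2)
    p3 = (math.sqrt(s), 0.0)
    return math.isclose(math.dist(p2, p1), math.dist(p2, p3))

assert all(check_fold(s) for s in (2.0, 9.0, 12.25))
```

Algebraically this is the identity $\left(\frac{s+1}{2}\right)^2 - \left(\frac{s-1}{2}\right)^2 = s$ from the proof, so the check succeeds for every $s > 1$.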
\item[(iii-v)] We use Theorem \ref{o-wuplanes}, since every Euclidean field is a Pythagorean field and every Euclidean ordered metric Wu plane is an ordered metric Wu plane. \end{itemize} \end{proof} Then by Theorem \ref{o-G-Ziegler}, we obtain: \begin{corollary} The theory of Euclidean ordered Wu planes is undecidable. \end{corollary} Finally, we would like to find a correct version of (H*6) and to establish its correspondence with Vieta fields as defined below. \begin{definition} A \emph{Vieta field} is a Euclidean field in which every element is a cube: $$\forall x \ \exists y \ y^3 = x.$$ \end{definition} It follows from Cardano's formula that any cubic polynomial over a Vieta field has at least one root. \begin{proposition} The first order theory of Vieta fields is undecidable. \end{proposition} \begin{proof} Immediate from Ziegler's Theorem. \end{proof} The following version of (H-6) is inspired by~\cite[Proposition 6]{ghourabi2012algebraic}. \begin{description} \item[(H*6):] Given two points $P_1$ and $P_2$ and two lines $\ell_1$ and $\ell_2$, if $P_1 \notin \ell_1$, $P_2 \notin \ell_2$, $\ell_1$ and $\ell_2$ are not parallel, and either the points are distinct or the lines are distinct, then there is a fold (line) that places $P_1$ onto $\ell_1$ and $P_2$ onto $\ell_2$. \end{description} \begin{definition} $\Pi$ is an \emph{ordered Wu origami plane} if it is an ordered metric Wu plane which also satisfies (H*5) and (H*6). \end{definition} \begin{theorem} \label{th:vieta} \begin{itemize} \item[(i)] Let $\mathcal{F}$ be a Vieta field. Then $PP^*_{wu}(\mathcal{F})$ is an ordered Wu origami plane. \item[(ii)] Let $\Pi$ be an ordered Wu origami plane. Then $RF^*_{field}(\Pi)$ is a Vieta field. \item[(iii)] Furthermore, $RF^*_{field}(PP^*_{wu}(\mathcal{F}))$ is isomorphic to $\mathcal{F}$. \item[(iv)] $PP^*_{wu}(RF^*_{field}(\Pi))$ is isomorphic to $\Pi$.
\item[(v)] The isomorphisms in (iii) and (iv) are definable, that is, they form a bi-interpretation between the classes of ordered Wu origami planes and of Vieta fields. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] It suffices to show that (H*6) holds in $PP^*_{wu}(\mathcal{F})$. We use~\cite[Proposition 6]{ghourabi2012algebraic} to conclude that the conditions we chose are sufficient for the existence of a fold. Although the original result was proven specifically for the real plane, we note that it essentially uses only the Vieta property of $\mathbb{R}$ and therefore holds for any plane over a Vieta field. \item[(ii)] Take any nonzero $r \in RF^*_{field}(\Pi)$ (for $r = 0$ the claim is trivial). Let $P_1 = (-1, 0)$, $P_2 = (0, -r)$, let $\ell_1$ be the line $x = 1$ and $\ell_2$ the line $y = r$. Then by (H*6) we can find a fold $\ell$ placing $P_1$ onto $\ell_1$ and $P_2$ onto $\ell_2$. Drop perpendiculars from $P_1$ and $P_2$ on $\ell$ and let the constructed feet be $H_1$ and $H_2$. Since the reflection in $\ell$ maps $P_1$ to a point of $\ell_1$ and $P_2$ to a point of $\ell_2$, and $H_1$, $H_2$ are the midpoints between each point and its image, the coordinates of $H_1$ and $H_2$ have to be of the form $(0, s)$ and $(t, 0)$ respectively. The lines $P_1 H_1$ and $P_2 H_2$ are both perpendicular to $\ell$ and hence parallel, therefore $t = \frac{r}{s}$. Finally, $H_1 H_2$ is perpendicular to $P_1 H_1$, which means $\frac{r}{s} - s^2 = 0$. Hence, $s^3 = r$. \item[(iii-v)] Once again these follow from Theorem \ref{o-wuplanes}, since every Vieta field is a Pythagorean field and every ordered Wu origami plane is an ordered metric Wu plane. \end{itemize} \end{proof} \begin{corollary} The theory of ordered Wu origami planes is undecidable. \end{corollary} \section{Discussion} \label{se:conclu} We have axiomatized the classes of orthogonal Wu planes using versions of origami axioms and established their bi-interpretations with the first order theories of fields (corresponding to the classes of Pythagorean, Euclidean and Vieta fields). A few natural questions concerning the axiomatization of geometry via origami constructions were left open.
One such question concerns the choice of the considered language. Although the orthogonality of lines is natural from the point of view of origami constructions --- orthogonality can be tested just by a single fold --- we see that the Huzita--Justin axioms are easily formulated using the notion of symmetry of two points w.r.t.\ a line $\textit{Sym}(P,\ell,Q)$. It would be natural to consider this notion as basic and orthogonality as definable. The $\textit{Sym}$ predicate behaves well provided the metric Wu plane is orderable, that is, has no isotropic lines. \begin{problem} Find a natural axiomatization of orderable metric Wu planes in terms of $\in$ and $\textit{Sym}$. \end{problem} A similar question can be asked about betweenness. We have based our axiomatization on the standard Hilbertian axioms for betweenness. One can, however, consider as basic the relation $\textit{Closer}(A,\ell,B)$ which holds if $A$ is closer to line $\ell$ than to $B$. It is the relation that was used in the statement of (H*5). \begin{problem} Find a natural axiomatization of the class of ordered metric Wu planes in terms of $\in$, $\perp$ and \textit{Closer}. In particular, this requires that there is a first-order formula that works as a definition of betweenness in each structure satisfying these axioms. \end{problem} Another question concerns the definability of betweenness in Euclidean metric Wu planes. Recall that in Euclidean fields the ordering is definable. This suggests that there is an axiomatization of Euclidean ordered metric Wu planes in the language $\tau_{wu}$ only. In fact, one such axiomatization based on the so-called \emph{Euclidean axiom of betweenness} is well-known~\cite[page 149]{degen-profke}. This axiom defines the betweenness relation $Be(A,B,C)$ by stating that $A,B,C$ are collinear and there exists a point $D$ such that $DA\perp DC$ and $DB\perp AC$. 
Then, a metric Wu plane is Euclidean ordered iff it has no isotropic lines and the above relation $Be$ satisfies the usual axioms of betweenness. It would be interesting to know if the Euclidean axiom of betweenness can be replaced by its alternative suggested by the origami axiom (H-5). First, \emph{define} $\textit{Closer}(A,\ell,B)$ by saying that there is a fold $m$ that goes through $A$ and places $B$ on $\ell$. Second, define $Be(A,B,C)$ by saying that $A,B,C$ are collinear and there is a line $\ell\ni B$ such that $\ell\perp AC$, $\textit{Closer}(A,\ell,C)$ and $\textit{Closer}(C,\ell,A)$. Third, one would state (some of) the betweenness axioms for the relation so defined. Though this approach may work, there are a number of details to be worked out here that we leave for a future study. \begin{problem} Find an axiomatization of Euclidean orderable Wu planes in the language $\tau_{wu}$ that would be natural from the point of view of origami. \end{problem} Yet another interesting direction of study is to develop a constructive version of origami geometry as a logical theory based on intuitionistic logic, in the spirit of the work of Beeson~\cite{Beeson}. In such a theory existential statements would yield actual origami constructions rather than just be classically true. \begin{problem} Develop a constructive version of origami geometry. \end{problem} \end{document}
\begin{document} \newtheorem{theorem}{\textsc{Theorem}}[section] \newtheorem{problem}[theorem]{\textsc{Problem}} \newtheorem{exercise}{\textsc{Exercise}}[section] \newtheorem{proposition}[theorem]{\textsc{Proposition}} \newtheorem{lemma}[theorem]{\textsc{Lemma}} \newtheorem{corollary}[theorem]{\textsc{Corollary}} \newtheorem{definition}[theorem]{\textsc{Definition}} \newtheorem{remark}[theorem]{\rm \textsc{Remark}} \newtheorem{example}[theorem]{\rm \textsc{Example}} \setcounter{section}{0}
\title{Asymptotics for the level set equation near a maximum} \author{Nick Strehlke} \date{} \maketitle \begin{abstract} We give asymptotics for the level set equation for mean curvature flow on a convex domain near the point where it attains a maximum. It is known that solutions are not necessarily $C^3,$ and we recover this result and construct non-smooth solutions which are $C^3.$ We also construct solutions having prescribed behavior near the maximum. We do this by analyzing the asymptotics for rescaled mean curvature flow converging to a stationary sphere. \end{abstract} \section{Introduction} Let $\Omega$ be a smooth bounded mean-convex domain in $\mathbb R^{n+1}.$ The level set equation on $\Omega$ is a degenerate elliptic boundary value problem asking for a function $t\colon \Omega \to\mathbb R$ with $t = 0$ on the boundary $\partial\Omega$ and \begin{align} |\nabla t| \operatorname{div}\left(\frac{\nabla t}{|\nabla t|}\right) = -1. \label{lse} \end{align} This problem is known to admit a unique, twice-differentiable solution that satisfies (\ref{lse}) in the classical sense away from critical points. Away from critical points, the equation is non-degenerate elliptic and the solution is smooth. The second derivative, however, may in general be discontinuous at critical points. If $t$ solves (\ref{lse}) for a mean convex domain, then the level sets $M_\tau = \{x\in \Omega\colon t(x) = \tau\}$ form a mean curvature flow starting from $M_0 = \partial\Omega,$ that is, the position vector $X(\tau)$ of $M_\tau$ satisfies \begin{align*} N\cdot \frac{\mathrm d X}{\mathrm d\tau} &= - H, \end{align*} where $N$ is the outer unit normal for $M_\tau$ at the point $X$ and $H = \operatorname{div}_{M_\tau} N$ is the scalar mean curvature.
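As a quick consistency check on the sign conventions in (\ref{lse}) (an elementary example, recorded here for orientation), take $\Omega$ to be the round ball of radius $R$ centered at the origin and $t(x) = (R^2 - |x|^2)/(2n).$ Then $\nabla t = -x/n$ and $\nabla t/|\nabla t| = -x/|x|,$ and since $\operatorname{div}(x/|x|) = n/|x|$ in $\mathbb R^{n+1},$ \begin{align*} |\nabla t| \operatorname{div}\left(\frac{\nabla t}{|\nabla t|}\right) &= \frac{|x|}{n}\cdot\left(-\frac{n}{|x|}\right) = -1. \end{align*} The level sets are the concentric spheres $|x| = (R^2 - 2n\tau)^{1/2},$ which move by their mean curvature $H = n/(R^2-2n\tau)^{1/2}$ and shrink to the point $x_0 = 0$ at time $T = R^2/(2n).$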
Mean-convexity (meaning that the mean curvature $H$ of the boundary $\partial\Omega$ is nonnegative) is the condition required to ensure that the surfaces making up the mean curvature flow are disjoint. If $x\in \Omega,$ the value $t(x)$ is therefore the time at which the mean curvature flow starting from $M_0 = \partial \Omega$ arrives at the point $x.$ For this reason, the function $t$ is sometimes called the \emph{arrival time} for mean curvature flow. If $\Omega$ is a bounded convex domain, it was proved by Huisken in \cite{Hu84} that the mean curvature flow $\{M_\tau\}$ starting from $\partial\Omega$ contracts smoothly to a single point $x_0\in \Omega$ at some finite time $T.$ Moreover, the translated and rescaled flow $(T-\tau)^{-1/2}(M_\tau-x_0)$ converges at time $T$ to the round sphere $\mathbf{S}^n$ of radius $(2n)^{1/2}$ centered at the origin. The function $t$ solving (\ref{lse}) for $\Omega$ therefore has a single critical point $x_0$ inside $\Omega,$ where $t(x_0) = T$ is the maximum for $t.$ In this case, $t$ is actually $C^2$ on $\Omega$ and the second derivative $\nabla^2 t(x_0)$ of $t$ at this critical point is a negative multiple of the identity: $\partial_i\partial_j t(x_0) = -\delta_{ij}/n.$\footnote{See Theorem 6.1 of \cite{Hu93}.} In the case of a general mean-convex domain, the arrival time $t$ is known to be twice differentiable but not necessarily $C^2$; see \cite{CoMi16}, \cite{CoMi17}, and \cite{CoMi18}. In fact, it was shown in \cite{Wh00} (Theorem 1.2) and \cite{Wh11} that any tangent flow of a smooth mean convex mean curvature flow is a generalized cylinder. From this one can determine what the Hessian of the arrival time function must be if it exists. The remaining issue was to show that the Hessian exists, which is equivalent to the problem of uniqueness of tangent flows. This was solved in \cite{CoMi15}.
The study of the arrival time is referred to as the \emph{level set method} in the mean curvature flow literature, because it gives a means of rigorously extending mean curvature flow beyond singularities. This point of view was first taken in a computational context by Osher and Sethian, \cite{OsSe88}, and the theory was then developed in \cite{ChGi91}, \cite{EvSp91}, \cite{EvSp92}, \cite{EvSp92a}, and \cite{EvSp95}. We will restrict attention to the case in which the domain of the arrival time function is convex. In \cite{KoSe06}, Robert Kohn and Sylvia Serfaty proved that the solution to equation (\ref{lse}) on a convex planar domain $\Omega$ is always $C^3,$ and they asked whether this is true in higher dimensions. Natasa Sesum demonstrated in \cite{Se08} that the answer is negative: if $n\geq 2,$ there exists a convex domain $\Omega\subset \mathbb R^{n+1}$ for which the solution $t$ to (\ref{lse}) is not three times differentiable. To prove this, she analyzed the rate of convergence of a rescaled MCF $(T-\tau)^{-1/2}M_\tau,$ proving the existence of solutions for which this rescaled flow converges to the sphere like $(T-\tau)^{1/n}.$ We recover this result and extend it by describing all possible rates of convergence for rescaled MCF over the sphere. As a result, we are able to describe the first terms of all possible Taylor expansions of a solution $t$ to equation (\ref{lse}) on a convex domain $\Omega$ at the point where $t$ attains its maximum. We also construct solutions which have the prescribed asymptotics, but we do not prove here that they are actually Taylor expansions (we do not show that the solutions are better than $C^2$ on the domain $\Omega$). The main result is the following theorem. \begin{theorem} \label{arrival-time} Let $t$ be a solution to the level set equation (\ref{lse}) on a smooth bounded convex domain $\Omega\subset\mathbb R^{n+1}$ which attains its maximum $T$ at the origin.
Then either $\Omega$ is a round ball and $t = T-|x|^2/(2n)$ for $x\in \Omega,$ or there exists an integer $k\geq 2$ and a nonzero homogeneous harmonic polynomial $P$ of degree $k$ for which $t$ has, at the origin, the asymptotic expansion \begin{align} t(x) &= T- \frac{|x|^2}{2n} + |x|^{k(k-1)/n}P(x) + O\left(|x|^{\sigma + k + k(k-1)/n}\right) \label{taylor} \end{align} for some $\sigma>0.$ Moreover, if $P$ is a homogeneous harmonic polynomial of degree $k$ there exists a smooth bounded convex domain $\Omega \subset\mathbb R^{n+1}$ for which the corresponding arrival time $t$ satisfies (\ref{taylor}) near the origin where it attains its maximum $T.$ \end{theorem} \emph{Remark.} Part of the statement of the theorem is a unique continuation result for the arrival time on a convex domain: if the arrival time attains its maximum at the origin and coincides to infinite order there with the arrival time $T-|x|^2/(2n)$ for a ball, then in fact the domain is a ball and the arrival time is identically equal to $T-|x|^2/(2n).$ This is proved in a companion paper, \cite{Str18}, as a consequence of the fact that a rescaled mean curvature flow cannot converge to a sphere faster than any exponential unless it is identically equal to the sphere.\footnote{A different and more complicated parabolic unique continuation property for self-shrinkers was recently proved by Jacob Bernstein in \cite{Be17}.} As will be seen in Section \ref{as-arr}, Theorem \ref{arrival-time} follows straightforwardly from Theorems \ref{stable-mfld} and \ref{prescribed-lin} on the possible rates of convergence for rescaled mean curvature flow. As a consequence, the statement of the asymptotic expansion (\ref{taylor}) in Theorem \ref{arrival-time} may be sharpened in keeping with the slightly more complicated statement of Theorem \ref{stable-mfld}.
The most precise statement is: Let $\lambda_j = j(j+n-1)/(2n)-1$ be the $j$th eigenvalue for the operator $\Delta+1$ on the sphere $\mathbf{S}^n$ of radius $(2n)^{1/2}$ centered at the origin. For $j\geq k$ such that $\lambda_j<2\lambda_k,$ there exists a homogeneous harmonic polynomial $P_j$ of degree $j$ such that \begin{align*} t(x) &= T- \frac{|x|^2}{2n} + \sum_{\genfrac{}{}{0pt}{2}{j\geq k}{j + j(j-1)/n<2k + 2k(k-1)/n}}|x|^{j(j-1)/n}P_j(x) + O\left(|x|^{2\sigma}\right) \end{align*} for all $\sigma < k + k(k-1)/n.$ Notice that $j+j(j-1)/n = 2+2\lambda_j.$ In particular, when $k\geq 3$ or $n=1$ or $2,$ the exponent $\sigma$ appearing in (\ref{taylor}) can be taken equal to $1.$ If $k=2$ and $n\geq 3,$ then we can choose any $\sigma <2/n.$ We do not prove in this paper that the asymptotic expansion (\ref{taylor}) of Theorem \ref{arrival-time} is actually a Taylor expansion, though of course it is true that the Taylor expansion at the origin must coincide with (\ref{taylor}) \emph{if it exists}. Proving existence requires bounding the derivative of the arrival time in a neighborhood of the origin, an analysis we do not carry out here. It follows, however, from results of Huisken and Sesum,\footnote{See Theorem 6.1 of \cite{Hu93} for Huisken's proof that the arrival time is $C^2$ and Corollary 5.1 of \cite{Se08} for Sesum's proof that the arrival time is $C^3$ in case $k\geq 3$ in our Theorem \ref{arrival-time}.} that the arrival time for a convex domain is $C^2$ in all cases and that, in case $k\geq 3$ in our Theorem \ref{arrival-time}, the arrival time is $C^3.$ In the following section, we introduce the rescaled mean curvature flow and describe the relationship between rates of convergence for rescaled MCF and the Taylor expansion of the arrival time near its maximum.
\section{Rate of convergence of MCF and relation to level set equation} \label{as-arr} We begin by recalling the rescaled mean curvature flow. Let $\Omega$ be a convex domain and let $\{M_\tau\}_{\tau\in [0,T)}$ be the mean curvature flow starting from $M_0 = \partial\Omega.$ As mentioned in the introduction, $M_\tau$ shrinks smoothly down to a point $x_0\in \Omega$ as $\tau\to T$ in such a way that the rescaled surfaces $(T-\tau)^{-1/2}(M_\tau - x_0)$ converge in $C^k,$ for any $k,$ to the sphere $\mathbf{S}^n$ of radius $(2n)^{1/2}$ centered at the origin in $\mathbb R^{n+1}.$ It is natural therefore to study the surfaces $(T-\tau)^{-1/2} (M_\tau - x_0),$ and the analysis is simplified by a change of variable: we put $s = -\log{(T-\tau)}$ and for $s\geq -\log{T}$ define $\Sigma_s = (T-\tau)^{-1/2}(M_\tau - x_0) = e^{s/2} (M_{T-e^{-s}} - x_0).$ The $1$-parameter family $\{\Sigma_s\}$ is called a \emph{rescaled mean curvature flow}. Its position vector $X(s)$ satisfies the equation \begin{align*} \frac{\mathrm d X}{\mathrm d s} \cdot N = -H + \frac{1}{2}X\cdot N, \end{align*} with $N$ and $H$ now the outer unit normal and scalar mean curvature of $\Sigma_s.$ The sphere $\mathbf{S}^n$ of radius $(2n)^{1/2}$ centered at the origin is stationary under the rescaled mean curvature flow.\footnote{Surfaces that are stationary for this flow are in general called \emph{self-shrinkers}, because they shrink homothetically under mean curvature flow.
It was shown in \cite{Br16} that the sphere $\mathbf{S}^n$ is the only compact embedded self-shrinker with genus zero.} Let $\mathbf{n}$ be the outer unit normal to the sphere $\mathbf{S}^n.$ If $\{\Sigma_s\}$ is a convex rescaled mean curvature flow, then it converges as $s\to\infty$ to $\mathbf{S}^n$ in $C^2.$ This means that there exists $s_0\in \mathbb R$ and a scalar function $u\colon \mathbf{S}^n\times[s_0,\infty)\to \mathbb R$ with the property that $\Sigma_s$ is the normal graph of $u(\cdot,s)$ over the sphere $\mathbf{S}^n$ for $s\geq s_0$: \begin{align*} \Sigma_s = \left\{y + u(y,s)\mathbf{n}(y)\colon y\in \mathbf{S}^n\right\}. \end{align*} The function $u$ is uniquely determined and solves a quasilinear parabolic equation \begin{align} \partial_s u &= \Delta u + u + N(u,\nabla u,\nabla^2u) \label{rMCF} \end{align} where $\Delta$ is the Laplacian on $\mathbf{S}^n$ and $N$ is a nonlinear term of the following form: \begin{align*} N(u,\nabla u,\nabla^2 u) = f(u,\nabla u) + \operatorname{trace}(B(u,\nabla u)\nabla^2u), \end{align*} where $f$ and $B$ are smooth and $f(0,0),$ $df(0,0),$ and $B(0,0)$ are zero. We now state our results on the rate of convergence of rescaled mean curvature flow to the sphere. Our first main result is that a solution to the equation (\ref{rMCF}) that converges to zero as $s\to\infty$ approaches a solution to the linear equation $\partial_s u = \Delta u + u.$ The linear operator $\Delta + 1$ has discrete spectrum with eigenvalues $\lambda_k = k(k+n-1)/(2n)-1,$ for $k=0,1,2,\dots,$ each corresponding to an eigenspace $E_k$ of finite dimension $\binom{n+k}{n}-\binom{n+k-2}{n}.$ Notice that zero is not an eigenvalue.
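Two elementary checks may help orient the reader; neither is needed in the proofs. First, on the sphere of radius $(2n)^{1/2}$ we have $H = n/(2n)^{1/2}$ and $X\cdot N = (2n)^{1/2},$ so \begin{align*} -H + \frac{1}{2}X\cdot N = -\left(\frac{n}{2}\right)^{1/2} + \left(\frac{n}{2}\right)^{1/2} = 0, \end{align*} confirming that $\mathbf{S}^n$ is stationary for the rescaled flow. Second, a degree-$k$ spherical harmonic $Y_k$ restricted to this sphere satisfies $\Delta Y_k = -\frac{k(k+n-1)}{2n}Y_k,$ so solutions of the linearized equation $\partial_s u = \Delta u + u$ built from $Y_k$ evolve like $e^{-\lambda_k s}Y_k.$ In particular $\lambda_0 = -1$ and $\lambda_1 = -\frac{1}{2}$ are growing modes, corresponding to dilations and translations of the shrinking sphere, while $\lambda_2 = \frac{1}{n} > 0,$ consistent with the restriction $k\geq 2$ in the results below.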
Let $d_k$ be the dimension of the space of eigenfunctions corresponding to eigenvalues $\lambda_0,\dots,\lambda_{k-1}$ which are strictly smaller than $\lambda_k.$ With this notation, the precise result is the following. \begin{theorem} \label{stable-mfld} For any integer $r>n/2+1$ and any integer $k\geq 2,$ there exists an open neighborhood $B = B(k,r)$ of the origin in $H^r(\mathbf{S}^n)$ with the property that the set of initial data $u_0\in B$ for which the solution $u$ to the rescaled MCF equation (\ref{rMCF}) exists for all time $s\geq 0$ and converges to zero with exponential rate $\lambda_k$ is a codimension $d_k$ submanifold of $B$ which is invariant for equation (\ref{rMCF}). For such initial data, there exist, for $j\geq k$ with $\lambda_j<2\lambda_k,$ eigenfunctions $P_j \in E_j$ for which the corresponding solution $u$ satisfies \begin{align*} \big\| u(y,s) - \sum_{\genfrac{}{}{0pt}{2}{j\geq k}{\lambda_j<2\lambda_k}} e^{-\lambda_j s} P_j(y)\big\|_{H^r(\mathbf{S}^n)} \leq C e^{-2\sigma s} \end{align*} for some constant $C>0$ and all $\sigma<\lambda_k.$ \end{theorem} \emph{Remark.} The proof closely follows the proof of the analogous theorem for ODEs. Moreover, the proof of the existence of an invariant manifold is modeled on the argument of \cite{Na88} (which generalizes \cite{EpWe87}). We also prove that the leading eigenfunction $P_k$ to which $e^{\lambda_k s}u(x,s)$ converges in $H^r(\mathbf{S}^n)$ may be prescribed.
\begin{theorem} \label{prescribed-lin} Suppose $k\geq 2$ and let $P \in E_k$ be an eigenfunction for the operator $\Delta+1$ on the sphere $\mathbf{S}^n$ corresponding to the eigenvalue $\lambda_k.$ There exists $s_0\geq0$ and $u\colon \mathbf{S}^n\times[s_0,\infty)\to\mathbb R$ which solves the rescaled MCF equation (\ref{rMCF}) and satisfies \begin{align*} \|e^{\lambda_k s} u(y,s) - P(y)\|_{H^r(\mathbf{S}^n)} \leq Ce^{-\sigma s} \end{align*} for some constants $C>0$ and $\sigma>0$ and for all $s\geq s_0.$ \end{theorem} \emph{Remarks.} If $k\geq 3$ or $n=1$ or $2,$ then we may take $\sigma = \lambda_{k+1}$ in the statement of the theorem, and if $n\geq 3$ and $k=2$ we may take any $\sigma <2\lambda_2 = 2/n.$ The precise asymptotics of the limit, and the prescription of them, are inspired by \cite{AnVe97}. In fact, the present investigation came from the author's wish to determine similar asymptotics in the simpler compact setting. We now show the relationship between these results and the level set equation. We will derive Theorem \ref{arrival-time} from Theorems \ref{stable-mfld} and \ref{prescribed-lin}. Let $\Omega\subset\mathbb R^{n+1}$ be a bounded convex region and suppose $t\colon \Omega \to \mathbb R$ with $t(x) = 0$ on $\partial\Omega$ solves the level set equation (\ref{lse}) on $\Omega.$ Let $M_\tau = \{x\in \Omega\colon t(x) = \tau\}$ be the corresponding mean curvature flow and $\Sigma_s$ the corresponding rescaled MCF. Then $\Sigma_s$ converges to the sphere $\mathbf{S}^n$ as $s\to\infty,$ and, as remarked previously, it follows that for sufficiently large $s$ the surface $\Sigma_s$ is a normal graph over $\mathbf{S}^n$: there exists $s_0\geq0$ and a function $u\colon \mathbf{S}^n\times [s_0,\infty)\to\mathbb R$ which solves (\ref{rMCF}) and for which \begin{align*} \Sigma_s &= \{y+ u(y,s)\mathbf{n}(y)\colon y\in\mathbf{S}^n\}.
\end{align*} By rescaling the initial mean curvature flow if necessary, we may take $s_0=0$ without loss of generality. By Theorem \ref{stable-mfld}, either $u$ is identically zero or there exists $k\geq 2$ and a nonzero homogeneous harmonic polynomial $P$ of degree $k,$ the restriction of which to $\mathbf{S}^n$ is an eigenfunction in $E_k$ corresponding to the eigenvalue $\lambda_k$ of $\Delta + 1,$ for which \begin{align} u(y,s) &= e^{-\lambda_k s} P(y) + O\left(e^{-(\lambda_k + \sigma) s}\right) \label{asymptotic1} \end{align} in $H^{r+1}(\mathbf{S}^n)$ as $s\to\infty,$ where $\sigma>0.$ Since $r>n/2+1$ this bound actually holds in $L^\infty(\mathbf{S}^n)$ by the Sobolev imbedding theorem. The position vector of a point $x$ of $M_t = (T-t)^{1/2}\Sigma_s$ must satisfy the equation \begin{align*} x &= (T-t)^{1/2}\frac{x}{|x|}(2n)^{1/2} + (T-t)^{1/2}u\left(\frac{x}{|x|}, s\right) \frac{x}{|x|}, \end{align*} remembering as always that $s = -\log{(T-t)}.$ In other words, \begin{align*} \frac{|x|}{(2n)^{1/2}} &= (T-t)^{1/2}\left(1 + u\left(\frac{x}{|x|}, s\right)\right). \end{align*} Substituting the asymptotic (\ref{asymptotic1}) for $u$ and replacing $s$ with $-\log{(T-t)}$ gives \begin{align*} \frac{|x|}{(2n)^{1/2}} &= (T-t)^{1/2}\left(1 + e^{-\lambda_k s} P\left(\frac{x}{|x|}\right) + O\left(e^{-(\lambda_k + \sigma) s}\right)\right)\\ &= (T-t)^{1/2}\left(1 + (T-t)^{\lambda_k }|x|^{-k}P(x) + O\left( (T-t)^{\lambda_k + \sigma}\right)\right).
\end{align*} Since it is known that $T-t\to 0$ as $x\to0,$ this equation implies that $T-t = |x|^2/(2n) + o(|x|^2)$ as $x\to0.$ But then squaring and rearranging and substituting this for $T-t$ we obtain \begin{align} T-t &= \frac{|x|^2}{2n} - (T-t)^{1+\lambda_k} |x|^{-k}P(x) + O\left((T-t)^{1+\lambda_k + \sigma}\right) \label{asymptotic2}\\ &= \frac{|x|^2}{2n} - \left(\frac{|x|^2}{2n} + o\left(|x|^2\right)\right)^{1+\lambda_k} |x|^{-k}P(x) + O\left( |x|^{2 + 2\lambda_k + 2\sigma}\right)\nonumber\\ &= \frac{|x|^2}{2n} - \frac{|x|^{2+2\lambda_k - k}}{(2n)^{1+\lambda_k}} P(x) + o\left(|x|^{2+2\lambda_k}\right). \label{asymptotic3} \end{align} Finally, substituting this improved asymptotic (\ref{asymptotic3}) for each occurrence of $T-t$ in the first line (\ref{asymptotic2}) and carrying out the same computation gives the improvement \begin{align*} T-t &= \frac{|x|^2}{2n} - \frac{|x|^{2+2\lambda_k - k}}{(2n)^{1+\lambda_k}} P(x) + O\left(|x|^{2+4\lambda_k}+|x|^{2 + 2\lambda_k + 2\sigma}\right), \end{align*} which is equivalent to the conclusion of Theorem \ref{arrival-time}. In the remainder of the paper, we prove the results of Theorems \ref{stable-mfld} and \ref{prescribed-lin}. We prove Theorem \ref{stable-mfld} in the next section and afterward prove Theorem \ref{prescribed-lin}. \section{Construction of the invariant manifolds} \label{inv-man} In this section, we adapt the argument of \cite{Na88}, which is a general stable manifold theorem for geometric evolution equations, to our situation in order to construct invariant manifolds of solutions which converge with prescribed exponential rate.
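Before entering the details, it may help to record the codimension count in the lowest case; this is a direct computation from the definitions above and is not needed elsewhere. By the dimension formula for the eigenspaces, $\dim E_0 = 1$ and $\dim E_1 = \binom{n+1}{n} - \binom{n-1}{n} = n+1,$ and both $\lambda_0 = -1$ and $\lambda_1 = -\frac{1}{2}$ are strictly smaller than $\lambda_2 = \frac{1}{n}.$ Hence for $k = 2$ the invariant manifold of Theorem \ref{stable-mfld} has codimension \begin{align*} d_2 = \dim E_0 + \dim E_1 = n+2 \end{align*} in $B,$ the excluded directions being the constant and linear modes on $\mathbf{S}^n.$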
We now briefly summarize the main result of \cite{Na88} and explain how our results differ: Let $M$ be a closed Riemannian manifold of dimension $n$ and let $L$ be an elliptic differential operator on $M$ which is symmetric in the $L^2(M)$ inner product and which has discrete spectrum accumulating only at $+\infty$ (in particular the operator is assumed to be bounded below). Suppose $N = N(u)$ is a nonlinear function defined on $H^{r-1}(M)$ for an integer $r>n/2+1$ which satisfies $N(0) = 0$ and a bound of the form we prove in Lemma \ref{bilin-bd}. In this situation, Naito proves the following: \begin{theorem}[(Naito, \cite{Na88})] \label{na-thm} There exists a ball $B$ centered at the origin in $H^{r+1}(M)$ in which the nonlinear evolution equation \begin{align*} \partial_su &= Lu + N(u) \end{align*} has an invariant stable manifold of finite codimension. \end{theorem} The codimension is equal to the codimension of the space on which $L$ is negative definite (the index of $L$ plus the dimension of the kernel). Naito's argument is modeled on Epstein \& Weinstein's earlier proof of a stable manifold theorem for mean curvature flow in the plane, \cite{EpWe87}, and both of these arguments follow closely the proof of the stable manifold theorem for ODE.\footnote{For a treatment of the stable manifold theorem in the finite-dimensional ODE context, see, e.g., \cite{Ha09}, \S III.6.} Theorem \ref{na-thm} already almost implies part of the conclusion of Theorem \ref{stable-mfld}, though it does not include the precise rate of convergence and does not describe the asymptotics of the limit.
Using the notation of Theorem \ref{stable-mfld} from the preceding subsection and assuming $k\geq 2,$ one would like, in our situation, to replace a solution $u(x,s)$ of (\ref{rMCF}) with $e^{\lambda_{k} s}u(x,s)$ and to replace the linear term $\Delta + 1$ on the right side of (\ref{rMCF}) with $L = \Delta + 1 + \lambda_k$ and then to apply Naito's theorem. The main issue then is that the nonlinear term will depend on the time parameter $s,$ but this is easy to overcome in this context because the time-dependent nonlinear term satisfies a bound that is uniform in $s.$ Notice that, assuming this argument is carried out successfully, the stable manifold one obtains in this case from Theorem \ref{na-thm} is the set of solutions for which $e^{\lambda_k s}u(s)\to 0,$ and its codimension will be the total dimension of the eigenspaces corresponding to the eigenvalues $\lambda_j$ with $j\leq k$ (the index plus nullity of $\Delta+1+\lambda_k$). If we want precisely the solutions for which $s\mapsto e^{\lambda_k s}u(x,s)$ is bounded, that is, precisely the solutions for which $u$ converges to $0$ exponentially at rate $\lambda_k$ as $s\to\infty,$ we must instead apply Theorem \ref{na-thm} to $e^{(\lambda_k -\varepsilon)s} u(x,s)$ and $L = \Delta + 1 +\lambda_k - \varepsilon$ for sufficiently small $\varepsilon.$ The ultimate conclusion of this analysis is that there exists a codimension $d_k$ invariant submanifold for the equation (\ref{rMCF}) with the property that any solution in this invariant submanifold converges to zero at exponential rate $\lambda_k - \varepsilon$ for all $\varepsilon>0.$ In particular, this argument does not prove that $e^{\lambda_ks}u(s)$ is bounded in $H^{r+1}(M),$ though this can be proved (and we prove it below) using the bound on the nonlinear term.\footnote{The assertion is not true for a general nonlinear term, as is
already apparent in the finite-dimensional ODE case, for essentially the same reason that a center manifold need not be stable. Consider for example the ODE \begin{align*} \dot{x} = -\varepsilon x - \frac{x}{\log{|x|}} \end{align*} on $\mathbb R.$ For small initial data, the solution converges to zero like $te^{-\varepsilon t}$ as $t\to\infty.$ If the nonlinear term is $O(|x|^{1+\alpha})$ for some $\alpha>0$ as $x\to0$ this cannot happen.} Thus the bound on the nonlinear term does imply that the rate of convergence is better than shown in \cite{Na88} or \cite{EpWe87}.\footnote{Cf. Proposition 5.2 of \cite{Na88}, where the author establishes convergence to zero with exponential rate $\sigma$ for any $\sigma$ smaller than the first positive eigenvalue of the linear operator, and Remark 3, page 136 of \cite{EpWe87}, where the same claim is made.} The same argument improves the rate of convergence in Naito's general theorem, because we only use his bound on the nonlinear term. Rather than apply the conclusion of Theorem \ref{na-thm} in this way, we prefer to adapt the argument to our situation. This is done in this section (Section \ref{inv-man}). Section \ref{lin-an} collects some bounds required for the construction in Section \ref{contraction}, and both sections follow closely arguments of \cite{Na88} and \cite{EpWe87}. We also include, for the convenience of the reader, a proof that a quasilinear nonlinear term $N$ of second order does satisfy the bound required by Naito's hypotheses in \cite{Na88} and Theorem \ref{na-thm}. This occupies Section \ref{nonlin-est}. In Section \ref{asymptotics}, we establish the rest of Theorem \ref{stable-mfld}, namely, the precise rate of convergence and the asymptotics. This part does not overlap with \cite{Na88} or \cite{EpWe87}, and in fact the same arguments extend the results of \cite{Na88} in the more general setting of that paper.
We also show that the asymptotics can be prescribed as in Theorem \ref{prescribed-lin}. Analysis of the asymptotics requires a closer look at the construction of the stable invariant manifold in the first place, and this is part of the reason we prefer to argue directly in the proof of Theorem \ref{stable-mfld} rather than attempt to apply the conclusion of Theorem \ref{na-thm} to our situation. \subsection{Linear estimates} \label{lin-an} Throughout, we write $\langle v,w\rangle$ for the $L^2(\mathbf{S}^n)$ inner product: \begin{align*} \langle v,w\rangle = \int_{\mathbf{S}^n} vw. \end{align*} Let $L$ be the linear operator $\Delta + 1$ on the sphere $\mathbf{S}^n,$ and let $F_k$ be the subspace of $H^r(\mathbf{S}^n)$ defined by \begin{align*} F_k &= \bigoplus_{j=k}^\infty E_j \end{align*} with $E_j$ as before the eigenspace for $L$ corresponding to the $j$th eigenvalue $\lambda_j = j(j+n-1)/(2n) - 1.$ From now on, we fix an integer $k\geq 2$ so that $L$ is negative definite and bounded above on $F_k,$ satisfying $\langle Lv,v\rangle \leq -\lambda_k\|v\|_{L^2(\mathbf{S}^n)}^2$ for $v\in F_k.$ For $v\in F_k,$ we may \emph{define} the $H^\ell(\mathbf{S}^n)$ norm for integer $\ell\geq 0$ by \begin{align*} \|v\|_{H^\ell}^2 := \langle (-L)^\ell v,v\rangle. \end{align*} This norm is equivalent to the usual $H^\ell$ norm. \begin{lemma} If $s\mapsto v(s)$ is a continuously differentiable path in $F_k\cap H^{r+1}(\mathbf{S}^n),$ then for any $\varepsilon>0$ and any integer $r\geq 1,$ \begin{align} \frac{1}{2}\frac{\mathrm d}{\mathrm ds} \|v(s)\|_{H^r(\mathbf{S}^n)}^2 + (1-\varepsilon) \|v(s)\|_{H^{r+1}(\mathbf{S}^n)}^2 \leq \frac{1}{4\varepsilon} \|(\partial_s - L)v(s)\|_{H^{r-1}(\mathbf{S}^n)}^2. \label{lin-est1} \end{align} \end{lemma} \begin{proof} Write $f = (\partial_s - L)v$ for brevity.
Use Cauchy--Schwarz to get, for any $\varepsilon>0,$ \begin{align*} \langle (-L)^r v, f\rangle &\leq \varepsilon \langle (-L)^{r+1}v,v\rangle + \frac{1}{4\varepsilon} \langle (-L)^{r-1} f,f\rangle. \end{align*} Rearranging and substituting $f = (\partial_s - L)v$ on the left gives \begin{align*} \langle (-L)^r v, (\partial_s - (1-\varepsilon)L)v \rangle \leq \frac{1}{4\varepsilon}\langle (-L)^{r-1}f,f\rangle = \frac{1}{4\varepsilon} \|f\|_{H^{r-1}}^2, \end{align*} and because $\partial_s \|v\|_{H^r}^2/2 = \langle (-L)^r v, \partial_s v\rangle$ and $\langle (-L)^{r+1}v,v\rangle = \|v\|_{H^{r+1}}^2,$ this is equivalent to the conclusion of the lemma. \end{proof} \begin{corollary} \label{sup-bd} If $v(s)\in F_k$ for all $s\geq 0,$ then for any $\sigma$ with $0<\sigma<\lambda_k$ and any integer $r\geq 1,$ \begin{align*} e^{2\sigma s}\|v(s)\|_{H^r(\mathbf{S}^n)}^2 \leq \|v(0)\|_{H^r(\mathbf{S}^n)}^2 + \frac{\lambda_k}{2(\lambda_k - \sigma)}\int_0^s e^{2\sigma\tau}\|(\partial_s - L)v(\tau)\|_{H^{r-1}(\mathbf{S}^n)}^2\,\mathrm d\tau. \end{align*} \end{corollary} \begin{proof} Notice that the left side of (\ref{lin-est1}) can be bounded below for $v\in F_k$ using $\|v(s)\|_{H^{r+1}}^2\geq \lambda_k\|v(s)\|_{H^r}^2.$ The result is \begin{align*} \frac{\mathrm d}{\mathrm ds} \|v(s)\|_{H^r(\mathbf{S}^n)}^2 + 2(1-\varepsilon)\lambda_k \|v(s)\|_{H^{r}(\mathbf{S}^n)}^2 &\leq \frac{1}{2\varepsilon} \|(\partial_s - L)v(s)\|_{H^{r-1}(\mathbf{S}^n)}^2.
\end{align*} This is equivalent to the statement of the corollary with $\sigma = (1-\varepsilon)\lambda_k$ because the left side can be written \begin{align*} \frac{\mathrm d}{\mathrm ds} \|v(s)\|_{H^r(\mathbf{S}^n)}^2 + 2(1-\varepsilon)\lambda_k \|v(s)\|_{H^{r}(\mathbf{S}^n)}^2&= e^{-2(1-\varepsilon)\lambda_ks} \frac{\mathrm d}{\mathrm ds} \left(e^{2(1-\varepsilon)\lambda_ks}\|v(s)\|_{H^r}^2\right) \end{align*} and we can multiply through by $e^{2(1-\varepsilon)\lambda_ks}$ and integrate. \end{proof} \begin{corollary} \label{L2-bd} In the situation of the lemma, if $r\geq 1$ is an integer and $\|v(s_j)\|_{H^r(\mathbf{S}^n)}\to 0$ for some sequence $s_j$ increasing to infinity, then \begin{align*} \int_0^\infty \|v(s)\|_{H^{r+1}(\mathbf{S}^n)}^2\,\mathrm ds &\leq \|v(0)\|_{H^r(\mathbf{S}^n)}^2 + \int_0^\infty \|(\partial_s - L)v(s)\|_{H^{r-1}(\mathbf{S}^n)}^2\,\mathrm ds. \end{align*} \end{corollary} \subsection{Nonlinear estimate} \label{nonlin-est} The nonlinear term $N\colon \mathbb R\times \Gamma(T\mathbf{S}^n) \times \Gamma(T^* \mathbf{S}^n\otimes T\mathbf{S}^n)\to\mathbb R$ (here $\Gamma(T\mathbf{S}^n)$ is the space of sections of the tangent bundle, for instance) appearing in the rescaled mean curvature flow equation (\ref{rMCF}) over the sphere has the form \begin{align} N(u,\nabla u,\nabla^2 u) &= f(u,\nabla u) + \operatorname{trace}(B(u,\nabla u)\nabla^2u) \label{quasilin-assump} \end{align} where $f\colon \mathbb R\times \Gamma(T\mathbf{S}^n) \to\mathbb R$ is smooth with $f(0,0) = 0$ and $Df(0,0) = 0,$ and where $B\colon \mathbb R\times \Gamma(T\mathbf{S}^n) \to \Gamma(T^*\mathbf{S}^n\otimes T\mathbf{S}^n)$ is smooth and satisfies $B(0,0) = 0.$\footnote{See \cite{CoMi15}, Appendix A, for a proof of this fact.} In this section, we prove the following Sobolev estimate for a nonlinear term
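Corollary \ref{L2-bd} is stated without proof; for completeness, here is a minimal derivation from the lemma (our reconstruction), obtained by taking $\varepsilon = 1/2$ in (\ref{lin-est1}) and multiplying through by $2$:

```latex
\begin{align*}
\frac{\mathrm d}{\mathrm ds}\|v(s)\|_{H^r(\mathbf{S}^n)}^2
  + \|v(s)\|_{H^{r+1}(\mathbf{S}^n)}^2
  &\leq \|(\partial_s - L)v(s)\|_{H^{r-1}(\mathbf{S}^n)}^2.
\end{align*}
```

Integrating over $[0,s_j]$ and letting $j\to\infty,$ the term $\|v(s_j)\|_{H^r(\mathbf{S}^n)}^2$ vanishes by hypothesis, and what remains is exactly the stated inequality.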
$N$ of this form. We abbreviate $N(u,\nabla u,\nabla^2 u)$ by $N(u).$ \begin{lemma} \label{bilin-bd} Let $r$ be an integer with $r>n/2+1,$ let $N$ be a smooth function of the form (\ref{quasilin-assump}), and let $R>0$ be fixed. There exists a constant $C$ depending on $N$ and $R$ and $r$ with the property that all $v,w\in C^\infty(\mathbf{S}^n)$ with $\|v\|_{H^r(\mathbf{S}^n)},\|w\|_{H^r(\mathbf{S}^n)}\leq R$ satisfy \begin{align*} \|N(v) - N(w)\|_{H^{r-1}(\mathbf{S}^n)} \leq C\left( \|v\|_{H^{r+1}(\mathbf{S}^n)}\|v-w\|_{H^r(\mathbf{S}^n)} + \|w\|_{H^r(\mathbf{S}^n)} \|v-w\|_{H^{r+1}(\mathbf{S}^n)}\right). \end{align*} \end{lemma} For the proof of Lemma \ref{bilin-bd}, we need a standard Sobolev product lemma. In this simple case ($s$ an integer) it can be proved using H\"older's inequality and the Sobolev imbedding theorems. \begin{lemma} \label{sobolev-prod} Suppose $M = M^n$ is a closed Riemannian manifold of dimension $n,$ and $s_1,s_2,$ and $s$ satisfy $s_i\geq s$ and $s_1+s_2\geq s+n/2.$ Then there is a constant $C$ depending on $s$ and the Sobolev constant for $M$ such that \begin{align*} \|vw\|_{H^s(M)} \leq C\|v\|_{H^{s_1}(M)}\|w\|_{H^{s_2}(M)} \end{align*} for all $v,w\in C^\infty(M).$ \end{lemma} We now indicate the proof of Lemma \ref{bilin-bd}, demonstrating the bound on the $f$ term of $N.$ The other term is similar, so we omit the details. For clarity, let us now work in a coordinate chart (it makes no difference in the analysis).
Thus let $u_j = \partial_ju$ be the components of the gradient $\nabla u.$ Under the preceding assumptions, we can express $f$ as \begin{align*} f(u,\nabla u) &= g_0(u,\nabla u) u^2 + \sum_{j=1}^n g_j(u,\nabla u) u_j^2 \end{align*} for some smooth functions $g_j.$ In particular, \begin{align*} f(u,\nabla u) - f(v,\nabla v) &= g_0(u,\nabla u) (u-v)(u+v) + (g_0(u,\nabla u) - g_0(v,\nabla v))v^2 \\ &\quad + \sum_{j=1}^n \left( g_j(u,\nabla u) (u_j - v_j)(u_j+v_j) + (g_j(u,\nabla u) - g_j(v,\nabla v))v_j^2 \right). \end{align*} Now suppose that $u$ and $v$ are in $H^{r}(\mathbf{S}^n),$ where $r>n/2+1.$ There is a continuous imbedding $H^r(\mathbf{S}^n) \hookrightarrow C^1(\mathbf{S}^n),$ and so the $C^1$ norms of $u$ and $v$ are controlled by the $H^r$ norms. In this situation, if we assume that $\|u\|_{H^r},\|v\|_{H^r}\leq R,$ we can deduce that the functions $g_j(u,\nabla u)$ satisfy \begin{align*} \|g_j(u,\nabla u)\|_{H^\ell}\leq C(1 + \|u\|_{H^{\ell+1}}) \end{align*} for any integer $\ell\geq 0,$ where $C$ is a constant that depends on the function $g_j$ and on $R.$ (The proof is by induction, and we use the fact that the domain $\mathbf{S}^n$ has finite volume.)
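To illustrate the induction mentioned parenthetically above, here is the base step $\ell = 1$ (our sketch, not part of the original argument): by the chain rule,

```latex
\begin{align*}
\nabla\bigl(g_j(u,\nabla u)\bigr)
  &= \partial_1 g_j(u,\nabla u)\,\nabla u
   + \partial_2 g_j(u,\nabla u)\cdot \nabla^2 u.
\end{align*}
```

Since $\|u\|_{C^1}$ is controlled by $\|u\|_{H^r}\leq R,$ the coefficients $\partial_i g_j(u,\nabla u)$ are bounded by a constant depending on $g_j$ and $R,$ and the finite volume of $\mathbf{S}^n$ bounds the $L^2$ part of the norm, giving $\|g_j(u,\nabla u)\|_{H^1}\leq C(1+\|u\|_{H^2}).$ The inductive step differentiates repeatedly and estimates the resulting products using Lemma \ref{sobolev-prod}.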
In particular, $g_j(u,\nabla u)$ and $g_j(v,\nabla v)$ are in $H^{r-1},$ and since $r-1>n/2$ we may apply the Sobolev product theorem (with $r-1=s=s_1=s_2$) to terms like $g_j(u,\nabla u) (u_j - v_j)(u_j+v_j).$ To deal with the terms $(g_j(u,\nabla u) - g_j(v,\nabla v))v_j^2,$ we write \begin{align*} g_j(u,\nabla u) - g_j(v,\nabla v) &= \int_0^1 \partial_1 g_j(u + t(v-u),\nabla u + t\nabla(v-u))\,\mathrm dt\, (v-u) \\ &\qquad + \sum_{i=2}^{n+1} \int_0^1 \partial_i g_j(u + t(v-u),\nabla u + t\nabla (v-u))\,\mathrm dt\, (v_i - u_i). \end{align*} The functions $\int_0^1 \partial_i g_j(u + t(v-u),\nabla u + t\nabla (v-u))\,\mathrm dt$ are in $H^{r-1}$ for the same reason that $g_j(u,\nabla u)$ is, and so we may apply the Sobolev product theorem to these terms as well. Combining everything, we get a bound \begin{align*} \|f(u,\nabla u) - f(v,\nabla v)\|_{H^{r-1}} &\leq C\|(u-v)(u+v)\|_{H^{r-1}} + C\sum_{j=1}^n\|(u_j-v_j)(u_j+v_j)\|_{H^{r-1}} \\ &\qquad + C\|(u-v)(v^2+ |\nabla v|^2)\|_{H^{r-1}} + C\sum_{j=1}^n\|(u_j-v_j)(v^2 + |\nabla v|^2)\|_{H^{r-1}} \end{align*} where the constant $C$ depends on $f$ and $R$ and $r.$ We can now apply the Sobolev product theorem to the right side to obtain \begin{align*} \|f(u,\nabla u) - f(v,\nabla v)\|_{H^{r-1}} &\leq C\|u+v\|_{H^r}\|u-v\|_{H^r} + C\|v\|_{H^r}^2 \|u-v\|_{H^r}.
\end{align*} Since $\|v\|_{H^r}\leq R$ by assumption, this is bounded by $C \|u-v\|_{H^r} (\|u\|_{H^r} + \|v\|_{H^r}).$ \subsection{Constructing the invariant manifolds: contraction argument} \label{contraction} Let $\Pi_k\colon H^r(\mathbf{S}^n) \to F_k$ be the orthogonal projection onto $F_k.$ This orthogonal projection operator is the same for all $r$ because of the way we have defined $H^r.$ Now fix an integer $r\geq 1.$ Define $X_{r,\sigma}$ to be the Banach space of paths $v = v(s)\colon \mathbb R\to H^{r+1}(\mathbf{S}^n)$ for which the norm $\|\cdot \|_{r,\sigma}$ defined by \begin{align*} \|v\|_{r,\sigma} &= \left(\int_0^\infty \|v(s)\|_{H^{r+1}(\mathbf{S}^n)}^2\,\mathrm ds\right)^{1/2} + \sup_{s\geq 0} e^{\sigma s}\|v(s)\|_{H^r(\mathbf{S}^n)} \end{align*} is finite. We define an operator $T$ for $(v(s),u_0)\in X_{r,\sigma} \times F_k$ by requiring that the path $T(s) = T(v;u_0)(s)$ solve the equation \begin{align} (\partial_s - L)T(s) & = N(v(s)) \label{soln-op} \\ T(0) &= u_0 - \int_0^\infty e^{-L\tau}(1-\Pi_k) N(v(\tau))\,\mathrm d\tau. \nonumber \end{align} The integral in the second equation makes sense pointwise because $1-\Pi_k$ projects onto a finite-dimensional invariant subspace for $L.$ We will see moreover that for $N$ satisfying our requirements it is convergent and defines an element of $H^r$ for $v\in X_{r,\sigma}$ with $\lambda_{k-1}<\sigma <\lambda_k.$ Notice that if $v$ is a fixed point of $T(\cdot;u_0),$ then $v$ solves the nonlinear evolution equation (\ref{rMCF}). If this fixed point lies in the space $X_{r,\sigma},$ then by definition it converges to zero exponentially.
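The role of the initial datum chosen in (\ref{soln-op}) is easiest to see from the variation-of-constants formula; the following identity is a routine computation, recorded here for the reader:

```latex
\begin{align*}
T(v;u_0)(s) &= e^{Ls}\,T(0) + \int_0^s e^{L(s-\tau)} N(v(\tau))\,\mathrm d\tau,
\intertext{so that, inserting the formula for $T(0)$ (note $(1-\Pi_k)u_0 = 0$
for $u_0\in F_k$) and combining the integrals,}
(1-\Pi_k)\,T(v;u_0)(s) &= -\int_s^\infty e^{L(s-\tau)}(1-\Pi_k)N(v(\tau))\,\mathrm d\tau.
\end{align*}
```

In other words, the correction to $u_0$ removes precisely the part of the solution that would otherwise grow along the finitely many slow modes in the range of $1-\Pi_k.$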
We will show that for small enough $u_0\in F_k$ and for $\lambda_{k-1}<\sigma<\lambda_k,$ the mapping $T(\cdot;u_0)$ has precisely one fixed point $v$ in a small ball centered at the origin in $X_{r,\sigma}.$ This fixed point depends smoothly in $H^r$ on the parameter $u_0,$ and the initial datum of the corresponding evolution is $v(0).$ The orthogonal projection of $v(0)$ onto $F_k$ is just $u_0,$ and it follows easily that the set of initial data in a small ball of $H^r$ centered at $0$ whose solutions converge to zero exponentially with rate between $\lambda_{k-1}$ and $\lambda_k$ is a graph over $F_k.$ The size of the ball in $H^r$ on which this is true depends on the exponential rate $\sigma\in (\lambda_{k-1},\lambda_k),$ but since the solution converges to zero and therefore enters every ball centered at zero, it is in fact true that the exponential rate of convergence to zero is automatically better than $\sigma$ for any $\sigma<\lambda_k.$ The main result of this section is the following theorem. \begin{theorem} \label{contraction-thm} If $r>n/2+1$ and $\lambda_{k-1}<\sigma<\lambda_k$ and if $u_0\in F_k$ with $\|u_0\|_{H^r}$ sufficiently small, then $T(\cdot;u_0)$ maps a small ball centered at the origin in $X_{r,\sigma}$ into itself and satisfies \begin{align} \|T(v;u_0) - T(w;u_0)\|_{r,\sigma} \leq C \left(\|v\|_{r,\sigma} + \|w\|_{r,\sigma}\right) \|v-w\|_{r,\sigma} \label{cont-bd} \end{align} for some constant $C= C(r,\sigma,k)$ depending on $r,\sigma,$ and $k.$ \end{theorem} \begin{corollary} The mapping $T$ is a contraction mapping of a small ball centered at the origin in $X_{r,\sigma}$ into itself. Consequently, it has a unique fixed point in this ball.
\end{corollary} \begin{proof}[Proof of Theorem \ref{contraction-thm}] We first prove the bound (\ref{cont-bd}) on a small ball, and then we show that if this ball is small enough it is mapped into itself by $T.$ If $v$ and $w$ are in $X_{r,\sigma}$ and $u_0 \in F_k,$ then the difference $D(s) = T(v;u_0)(s) - T(w;u_0)(s)$ is continuously differentiable and satisfies the equation \begin{align} (\partial_s - L)D(s) &= N(v(s)) - N(w(s)) \label{diff-eq} \\ D(0) &= -\int_0^\infty e^{-L\tau}(1-\Pi_k) \left(N(v(\tau)) - N(w(\tau))\right)\,\mathrm d\tau.\nonumber \end{align} To bound $D,$ we break it up into components using the orthogonal projection $\Pi_k\colon H^r\to F_k.$ The bound on the component $(1-\Pi_k)D(s)$ is simple, so we take care of that first. The more interesting bound is on $\Pi_kD(s),$ and for this we make use of Corollaries \ref{L2-bd} and \ref{sup-bd}, which apply because $\Pi_kD(s)\in F_k$ for all $s\geq 0$ (this is why we break $D$ into components in the first place). We now show how $(1-\Pi_k)D(s)$ is controlled in $X_{r,\sigma}.$ First, $1-\Pi_k$ projects onto a finite-dimensional subspace of $H^r,$ and $(1-\Pi_k)D(s)$ can be expressed as an integral \begin{align*} (1-\Pi_k)D(s) &= e^{Ls}(1-\Pi_k)D(0) + \int_0^s e^{L(s-\tau)}\left(1-\Pi_k\right)\left(N(v(\tau)) - N(w(\tau))\right)\,\mathrm d\tau\\ &= -\int_s^\infty e^{L(s-\tau)} \left(1-\Pi_k\right)\left(N(v(\tau)) - N(w(\tau))\right)\,\mathrm d\tau, \end{align*} where the second line is obtained from the first by substituting the expression for $D(0)$ and simplifying.
For $\tau>s,$ the operator $e^{L(s-\tau)}$ has norm at most $e^{\lambda_{k-1}(\tau-s)}$ on $\operatorname{range}(1-\Pi_k).$ Because the range is finite-dimensional, and all norms on it are equivalent, we may write \begin{align*} \|(1-\Pi_k) D(s)\|_{H^r(\mathbf{S}^n)} &\leq C\int_s^\infty e^{\lambda_{k-1}(\tau-s)} \|(1-\Pi_k) \left(N(v(\tau)) - N(w(\tau))\right)\|_{H^{r-1}(\mathbf{S}^n)}\,\mathrm d\tau \\ &\leq C \int_s^\infty e^{\lambda_{k-1}(\tau-s)} \|N(v(\tau)) - N(w(\tau))\|_{H^{r-1}(\mathbf{S}^n)}\,\mathrm d\tau \end{align*} where $C$ is a constant that depends on $k$ and $r.$ Now we just use the nonlinear estimate Lemma \ref{bilin-bd} to bound the right side and obtain \begin{align*} \|(1-\Pi_k)D(s)\|_{H^r} \leq C\int_s^\infty e^{\lambda_{k-1}(\tau-s)} \left(\|v(\tau)\|_{H^{r+1}}\|v(\tau) - w(\tau)\|_{H^{r}} +\|w(\tau)\|_{H^r} \|v(\tau) - w(\tau)\|_{H^{r+1}}\right)\, \mathrm d\tau. \end{align*} Finally, assuming $\lambda_{k-1}<\sigma<\lambda_k,$ we bound the right side by the $\|\cdot\|_{r,\sigma}$ norm straightforwardly as follows (using the first summand for an example): \begin{align*} \int_s^\infty e^{\lambda_{k-1}(\tau-s)} \|v(\tau)\|_{H^{r+1}} &\|v(\tau) - w(\tau)\|_{H^{r}}\,\mathrm d\tau \\ & \leq \sup_{\tau\geq s} e^{\sigma (\tau-s)} \|v(\tau) - w(\tau)\|_{H^r} \int_s^\infty e^{-(\sigma-\lambda_{k-1})(\tau-s)}\|v(\tau)\|_{H^{r+1}}\,\mathrm d\tau \\ &\leq e^{-\sigma s}\|v-w\|_{r,\sigma} \left(\int_0^\infty e^{-2(\sigma-\lambda_{k-1})\tau }\,\mathrm d\tau \right)^{1/2}\left(\int_0^\infty \|v(\tau)\|_{H^{r+1}}^2\,\mathrm d\tau\right)^{1/2} \\ &\leq e^{-\sigma s} \|v-w\|_{r,\sigma} \|v\|_{r,\sigma} \frac{1}{(2(\sigma - \lambda_{k-1}))^{1/2}}. \end{align*} The passage from the first to the second line is just Cauchy--Schwarz.
All told, we obtain \begin{align} e^{\sigma s} \|(1-\Pi_k)D(s)\|_{H^r} \leq C(\|v\|_{r,\sigma} + \|w\|_{r,\sigma})\|v-w\|_{r,\sigma}, \label{proj-bd1} \end{align} where $C$ depends on $k$ and $\sigma.$ Since the $H^r$ and $H^{r+1}$ norms are equivalent on the range of $1-\Pi_k,$ we see from the bound (\ref{proj-bd1}) that \begin{align*} \|(1-\Pi_k)D(s)\|_{H^{r+1}} \leq e^{-\sigma s}C(\|v\|_{r,\sigma} + \|w\|_{r,\sigma})\|v-w\|_{r,\sigma}, \end{align*} and since $e^{-\sigma s}$ is square-integrable over $[0,\infty)$ for $\sigma>0$ we obtain \begin{align} \int_0^\infty \|(1-\Pi_k)D(s)\|_{H^{r+1}}^2 \,\mathrm ds &\leq C(\|v\|_{r,\sigma} + \|w\|_{r,\sigma})^2\|v-w\|_{r,\sigma}^2. \label{proj-bd2} \end{align} Combining (\ref{proj-bd1}) and (\ref{proj-bd2}) gives the desired bound \begin{align*} \|(1-\Pi_k)D\|_{r,\sigma} \leq C(\|v\|_{r,\sigma} + \|w\|_{r,\sigma})\|v-w\|_{r,\sigma}, \end{align*} with $C$ depending on $k$ and $\sigma$ and $r.$ Let us now bound $\|\Pi_k D(s)\|_{r,\sigma}.$ Notice that $\Pi_kD(0) = 0,$ so that Corollary \ref{sup-bd} implies \begin{align*} e^{2\sigma s}\left\|\Pi_k D(s)\right\|_{H^r(\mathbf{S}^n)}^2 &\leq\frac{\lambda_k}{2(\lambda_k - \sigma)} \int_0^s e^{2\sigma\tau} \|\Pi_k\left[ N(v(\tau)) - N(w(\tau))\right]\|_{H^{r-1}(\mathbf{S}^n)}^2\,\mathrm d\tau \\ & \leq\frac{\lambda_k}{2(\lambda_k - \sigma)} \int_0^s e^{2\sigma\tau} \|N(v(\tau)) - N(w(\tau))\|_{H^{r-1}(\mathbf{S}^n)}^2\,\mathrm d\tau. \end{align*} To pass from the first line to the second we just use the fact that $\Pi_k$ does not increase the $H^{r-1}$ norm.
Inserting the bilinear estimate for $N$ into this we bound the integral as \begin{align*} \int_0^s e^{2\sigma\tau} & \|N(v(\tau)) - N(w(\tau))\|_{H^{r-1}}^2\,\mathrm d\tau \\ & \leq C \int_0^s e^{2\sigma \tau}\left( \|v(\tau)\|_{H^{r+1}}^2 \|v(\tau)-w(\tau)\|_{H^r}^2 + \|w(\tau)\|_{H^r}^2 \|v(\tau)-w(\tau)\|_{H^{r+1}}^2\right)\,\mathrm d\tau, \end{align*} from which, using the definition of $\|\cdot \|_{r,\sigma},$ we straightforwardly obtain \begin{align*} \int_0^s e^{2\sigma\tau} \|N(v(\tau)) - N(w(\tau))\|_{H^{r-1}}^2\,\mathrm d\tau &\leq C\left(\|v\|_{r,\sigma}^2 + \|w\|_{r,\sigma}^2\right)\|v-w\|_{r,\sigma}^2. \end{align*} Combining this with the estimate (\ref{proj-bd1}) for $(1-\Pi_k)D(s)$ we get \begin{align*} e^{2\sigma s}\left\|D(s)\right\|_{H^r(\mathbf{S}^n)}^2 &\leq C\left(1 + \frac{\lambda_k}{\lambda_k - \sigma} \right) \left(\|v\|_{r,\sigma}^2 + \|w\|_{r,\sigma}^2\right)\|v-w\|_{r,\sigma}^2. \end{align*} By Corollary \ref{L2-bd} and an analogous use of the nonlinear estimate, we similarly obtain \begin{align*} \int_0^\infty \left\|\Pi_k D(s)\right\|_{H^{r+1}}^2\,\mathrm ds & \leq \int_0^\infty\|N(v(s)) - N(w(s))\|_{H^{r-1}}^2\,\mathrm ds \leq C \left(\|v\|_{r,\sigma}^2 + \|w\|_{r,\sigma}^2\right)\|v-w\|_{r,\sigma}^2. \end{align*} This completes the bound on $\|\Pi_k D(s)\|_{r,\sigma}.$ Combining all of these estimates gives us the final bound: \begin{align*} \|D\|_{r,\sigma} \leq \|(1-\Pi_k)D\|_{r,\sigma} + \|\Pi_kD\|_{r,\sigma} &\leq C\left( \frac{\lambda_k}{\lambda_k - \sigma}\right)^{1/2} \left(\|v\|_{r,\sigma} + \|w\|_{r,\sigma}\right)\|v-w\|_{r,\sigma}. \end{align*} This proves (\ref{cont-bd}). Now let us show that $T$ maps a small ball centered at the origin in $X_{r,\sigma}$ into itself.
Let $U(s) = e^{Ls}u_0$ be the solution to the linear homogeneous equation $(\partial_s - L) U = 0$ with initial data $U(0) = u_0.$ First, taking $w=0$ in (\ref{cont-bd}) shows, since $T(0;u_0) = U$ by the definition (\ref{soln-op}) of $T,$ that \begin{align*} \|T(v;u_0) - U\|_{r,\sigma} &\leq C\|v\|_{r,\sigma}^2. \end{align*} Therefore if $0<\delta<1/C$ and $\|U\|_{r,\sigma}<\delta - C\delta^2,$ then $\|T(v;u_0)\|_{r,\sigma}<\delta$ whenever $\|v\|_{r,\sigma}<\delta.$ That is, $T(\cdot;u_0)$ maps the ball of radius $\delta$ centered at zero in $X_{r,\sigma}$ into itself. It now remains only to show that $\|U\|_{r,\sigma}$ can be controlled by $\|u_0\|_{H^r(\mathbf{S}^n)}.$ But this follows immediately from the estimates of Corollaries \ref{sup-bd} and \ref{L2-bd} since $U(s)\in F_k$ for all $s\geq 0.$ \end{proof} \section{Asymptotics of the limit} \label{asymptotics} In the preceding section, we constructed, for each $k\geq 2,$ a codimension $d_k$ invariant submanifold for equation (\ref{rMCF}) consisting of solutions which converge to zero with exponential rate $\sigma$ for every $\sigma<\lambda_k.$ In this section, we show that any such solution must actually converge to zero with exponential rate $\lambda_k,$ and we show also that any such solution is approximated well by a solution to the linear equation. The first result is the following.
\begin{proposition} \label{rate} Suppose $k\geq 2$ is an integer and $u\colon \mathbf{S}^n\times[s_0,\infty)\to\mathbb R$ is a solution to (\ref{rMCF}) which satisfies \begin{align*} \sup_{s\geq s_0} e^{\sigma s} \|u(s)\|_{H^{r+2}(\mathbf{S}^n)} <\infty \end{align*} for all $\sigma <\lambda_k.$ Then \begin{align*} \sup_{s\geq s_0} e^{\lambda_k s} \|u(s)\|_{H^r(\mathbf{S}^n)} <\infty \end{align*} and in fact there exists $P\in E_k$ such that \begin{align*} u(y,s) &= e^{-\lambda_k s} P(y) + O\left(e^{-2\lambda_k s} + e^{-\lambda_{k+1}s}\right) \end{align*} in $H^r(\mathbf{S}^n)$ as $s\to\infty.$ \end{proposition} We now prove a lemma showing that the first hypothesis of Proposition \ref{rate} is met automatically for all solutions of (\ref{rMCF}) satisfying \begin{align*} \sup_{s\geq s_0} e^{\sigma s} \|u(s)\|_{H^{r}(\mathbf{S}^n)} <\infty \end{align*} for all $\sigma <\lambda_k.$ Notice that Proposition \ref{rate} requires this condition to hold for the $H^{r+2}(\mathbf{S}^n)$ norm, and not just the $H^{r}(\mathbf{S}^n)$ norm. We show that it always holds in the $H^{r+2}(\mathbf{S}^n)$ norm if it holds in the $H^{r}(\mathbf{S}^n)$ norm.
\begin{lemma} Suppose $u\colon \mathbf{S}^n\times [s_0,\infty)\to\mathbb R$ is a solution to (\ref{rMCF}) converging to zero in $L^2(\mathbf{S}^n)$ as $s\to\infty.$ Then either $u$ is identically zero or \begin{align*} \sup_{s \geq s_0} \frac{\|u(s)\|_{H^{r+1}(\mathbf{S}^n)}}{\|u(s)\|_{H^r(\mathbf{S}^n)}} <\infty \end{align*} for every integer $r\geq 0.$ \end{lemma} \begin{proof} The crucial feature of rescaled mean curvature flow making this work is that a solution $u$ to (\ref{rMCF}) converging to zero in $L^2(\mathbf{S}^n)$ also converges to zero in $H^r(\mathbf{S}^n)$ for every $r\geq 0.$ This follows from Huisken's result, \cite{Hu84} (see Remark (i) after Theorem 1.1), that convergence of a convex mean curvature flow to the sphere is exponential in $C^k$ for any $k.$ The rest of the proof uses generalities about the equation (\ref{rMCF}) satisfied by $u.$ Since $u$ converges to zero in $H^r(\mathbf{S}^n)$ for every $r,$ it lies in one of the invariant manifolds of Theorem \ref{stable-mfld}, as proved in the preceding section. Moreover, $u$ cannot converge to zero faster than any exponential unless it is identically zero, as proved in \cite{Str18} (see Theorem 2.2).
Therefore, if $u$ is not identically zero, there is a largest integer $k = k(r)\geq 2,$ depending on $r,$ with the property that \begin{align*} \sup_{s\geq s_0} e^{\sigma s} \|u(s)\|_{H^r(\mathbf{S}^n)} <\infty \end{align*} for all $\sigma <\lambda_k.$ Since $\|\cdot\|_{H^{r+1}(\mathbf{S}^n)}\geq \|\cdot\|_{H^{r}(\mathbf{S}^n)},$ the integer $k(r)$ does not increase with $r.$ This means that eventually $k(r)$ is constant in $r,$ that is, there exists some $r_0$ such that $k(r) = k(r_0)$ for $r\geq r_0.$ Then for $r\geq r_0$ we can apply Proposition \ref{rate} to conclude that \begin{align*} \sup_{s\geq s_0} e^{\lambda_k s} \|u(s)\|_{H^r(\mathbf{S}^n)}<\infty, \end{align*} where $k = k(r) = k(r_0),$ and that there exists $P\in E_k$ with the property that $\|e^{\lambda_k s} u(s) - P\|_{H^r(\mathbf{S}^n)} \leq Ce^{-\sigma s}$ for some $\sigma>0.$ Now $P$ must be nonzero, for otherwise $e^{\sigma s}\|u(s)\|_{H^r(\mathbf{S}^n)}$ would be bounded for all $\sigma<\lambda_{k+1},$ and $k$ would not be the largest integer with this property. (It now follows easily that $k=k(r)$ is the same for all $r\geq 0$ and not just all sufficiently large $r.$) This is enough to conclude that $\|u(s)\|_{H^{r+1}(\mathbf S^n)}/\|u(s)\|_{H^r(\mathbf S^n)}$ is bounded in $s$ for any $r,$ since \begin{align*} \frac{\|u(s)\|_{H^{r+1}(\mathbf S^n)}}{\|u(s)\|_{H^r(\mathbf S^n)}} \leq \frac{e^{-\lambda_ks} \|P\|_{H^{r+1}(\mathbf S^n)} + C_1 e^{-(\lambda_k + \sigma)s}}{ e^{-\lambda_ks} \|P\|_{H^{r}(\mathbf S^n)} - C_2 e^{-(\lambda_k + \sigma)s}} \end{align*} for some constants $C_1,C_2,\sigma>0,$ and for $s$ sufficiently large the expression on the right is bounded.
\end{proof} The proof of Proposition \ref{rate}, to which we now turn, will be the consequence of a series of three lemmas in which we bound the projections of $u(s)$ onto $F_{k+1} = \bigoplus_{j\geq k+1} E_j,$ onto $E_k,$ and onto $F_k^\perp = \bigoplus_{j<k} E_j,$ with $E_j$ as always the eigenspace for $\Delta+1$ corresponding to $\lambda_j.$ We begin by bounding the projection onto $F_{k+1}.$ \begin{lemma} \label{st-bd} Under the hypotheses of Proposition \ref{rate}, \begin{align*} \|\Pi_{k+1} u(s)\|_{H^r(\mathbf{S}^n)} = O(e^{-\lambda_{k+1}s} + e^{-2\sigma s}) \end{align*} as $s\to\infty,$ for any $\sigma <\lambda_k.$ \end{lemma} \begin{proof} Notice that \begin{align*} \|\Pi_{k+1} u(s)\|_{H^r} \frac{\mathrm d}{\mathrm ds} \|\Pi_{k+1}u(s)\|_{H^r} &= \frac{\mathrm d}{\mathrm ds} \|\Pi_{k+1}u(s)\|_{H^r}^2/2 \\ &= \langle \Pi_{k+1} u(s),Lu(s)\rangle_{H^r} + \langle \Pi_{k+1}u(s),N(u(s))\rangle_{H^r} \\ &\leq -\lambda_{k+1} \|\Pi_{k+1}u(s)\|_{H^r}^2 + \|\Pi_{k+1}u(s)\|_{H^r} \|N(u(s))\|_{H^r}. \end{align*} Thus if $\mu\leq \lambda_{k+1},$ we get \begin{align} \frac{\mathrm d}{\mathrm ds} \left(e^{\mu s} \|\Pi_{k+1} u(s)\|_{H^r}\right) &\leq e^{\mu s}\|N(u(s))\|_{H^r}. \label{st-bd1} \end{align} On the other hand \begin{align} \|N(u(s))\|_{H^r} \leq C\|u(s)\|_{H^{r+1}}\|u(s)\|_{H^{r+2}} \leq C e^{-2\sigma s} \label{st-bd2} \end{align} for any $\sigma <\lambda_k$ (the constant may depend on $\sigma$). The first inequality is the nonlinear estimate Lemma \ref{bilin-bd} and the second follows from the hypothesis of Proposition \ref{rate}. Inserting (\ref{st-bd2}) into (\ref{st-bd1}) and integrating shows that $e^{\mu s} \|\Pi_{k+1}u(s)\|_{H^r}$ is bounded provided $\mu\leq \lambda_{k+1}$ and $\mu<2\lambda_k.$ This is the same as the conclusion of the lemma.
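The integration in the last step can be made explicit (our spelling-out of the computation): combining (\ref{st-bd1}) and (\ref{st-bd2}) and integrating from $s_0$ to $s$ gives

```latex
\begin{align*}
e^{\mu s}\|\Pi_{k+1}u(s)\|_{H^r(\mathbf{S}^n)}
  &\leq e^{\mu s_0}\|\Pi_{k+1}u(s_0)\|_{H^r(\mathbf{S}^n)}
   + C\int_{s_0}^s e^{(\mu-2\sigma)\tau}\,\mathrm d\tau,
\end{align*}
```

and the integral remains bounded as $s\to\infty$ provided $\mu<2\sigma$; since $\sigma<\lambda_k$ is arbitrary, this yields boundedness for every $\mu$ with $\mu\leq\lambda_{k+1}$ and $\mu<2\lambda_k.$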
\end{proof} Next we bound the projection onto $F_k^\perp.$ \begin{lemma} \label{unst-bd} In the situation of Proposition \ref{rate}, \begin{align*} \|(1-\Pi_k) u(s)\|_{H^r(\mathbf{S}^n)} = O( e^{-2\sigma s}) \end{align*} as $s\to\infty,$ for any $\sigma <\lambda_k.$ \end{lemma} \begin{proof} Reasoning as in the proof of Lemma \ref{st-bd}, we obtain \begin{align*} \frac{\mathrm d}{\mathrm ds} \|(1-\Pi_k) u(s)\|_{H^r(\mathbf{S}^n)} &\geq -\lambda_{k-1}\|(1-\Pi_k) u(s)\|_{H^r(\mathbf{S}^n)} - C\|N(u(s))\|_{H^r} \\ &\geq - \lambda_{k-1} \|(1-\Pi_k) u(s)\|_{H^r(\mathbf{S}^n)} - C e^{-2\sigma s} \end{align*} for any $\sigma <\lambda_k.$ In other words, \begin{align*} \frac{\mathrm d}{\mathrm ds}\left( e^{\lambda_{k-1}s} \|(1-\Pi_k) u(s)\|_{H^r}\right) &\geq -C e^{(\lambda_{k-1} - 2\sigma) s}. \end{align*} Integrating this gives, for $s<t$ and $\sigma >\lambda_{k-1}/2,$ \begin{align*} e^{\lambda_{k-1} t} \|(1-\Pi_k) u(t)\|_{H^r} &\geq e^{\lambda_{k-1} s} \|(1-\Pi_k) u(s)\|_{H^r} -C e^{(\lambda_{k-1} - 2\sigma) s} \\ &= e^{\lambda_{k-1} s} \left(\|(1-\Pi_k) u(s)\|_{H^r} - C e^{-2\sigma s}\right). \end{align*} Now let $t\to\infty.$ Because $\lambda_{k-1}<\lambda_k,$ the hypothesis of Proposition \ref{rate} tells us that the left side converges to zero. Consequently the right side must be non-positive, or, in other words, \begin{align*} \|(1-\Pi_k) u(s)\|_{H^r} \leq C e^{-2\sigma s} \end{align*} for all $s\geq 0$ and any $\sigma <\lambda_k$ (with $C$ depending on $\sigma,$ of course).
\end{proof} Finally, we turn to the projection onto $E_k.$ Write $\pi_k = \Pi_k - \Pi_{k+1}$ for the projection of $H^r$ onto $E_k.$ \begin{lemma} \label{ctr-bd} Under the hypotheses of Proposition \ref{rate} and assuming $s<t,$ \begin{align*} \| e^{\lambda_k t}\pi_ku(t) - e^{\lambda_k s}\pi_k u(s)\|_{H^r(\mathbf{S}^n)} &\leq C e^{-\sigma s} \end{align*} for any $\sigma <\lambda_k.$ \end{lemma} \begin{proof} First, \begin{align*} \frac{\mathrm d}{\mathrm ds} e^{\lambda_k s} \pi_k u(s) &= e^{\lambda_k s}\pi_k N(u(s)) \end{align*} since $L\pi_k u = -\lambda_k \pi_k u.$ Now we integrate this (the equation is in a finite dimensional vector space, namely, the range of $\pi_k$) and use the triangle inequality to obtain \begin{align*} \| e^{\lambda_k t}\pi_ku(t) - e^{\lambda_k s}\pi_k u(s)\|_{H^r} &= \left\| \int_s^t e^{\lambda_k \tau}\pi_k N(u(\tau))\,\mathrm d\tau \right\|_{H^r} \\ &\leq \int_s^t e^{\lambda_k \tau}\|\pi_kN(u(\tau))\|_{H^r}\,\mathrm d\tau \\ &\leq C \int_s^t e^{(\lambda_k- 2\sigma )\tau}\,\mathrm d\tau \end{align*} for $s<t$ and any $\sigma<\lambda_k.$ This implies the lemma. \end{proof} An immediate corollary of Lemma \ref{ctr-bd} is that $e^{\lambda_k s} \pi_k u(s)$ converges in $H^r$ norm exponentially fast.
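As an aside, the scalar mechanism behind these three lemmas (take norms, apply an integrating factor, integrate the exponentially decaying nonlinear forcing) can be illustrated on a toy model. The following sketch is illustration only: the constants \texttt{lam}, \texttt{sigma}, \texttt{C}, \texttt{y0} are invented for the example and are not tied to the operator $L$ or the nonlinearity $N$ of the paper; it merely checks numerically that a solution of $y' = -\lambda y + Ce^{-2\sigma s}$ obeys $y(s) = O(e^{-\lambda s} + e^{-2\sigma s}).$

```python
import math

# Toy scalar model of a projected mode: y' = -lam*y + C*exp(-2*sigma*s).
# The constants below are illustrative only, not taken from the paper.
lam, sigma, C, y0 = 3.0, 1.0, 5.0, 2.0

def y(s):
    # Exact solution via the integrating factor exp(lam*s):
    # (e^{lam s} y)' = C e^{(lam - 2 sigma) s}, integrated from 0 to s.
    if abs(lam - 2 * sigma) < 1e-12:
        return math.exp(-lam * s) * (y0 + C * s)
    return (math.exp(-lam * s) * y0
            + C * (math.exp(-2 * sigma * s) - math.exp(-lam * s)) / (lam - 2 * sigma))

# Forward-Euler integration of the same ODE, compared against the claimed bound.
h, s, ynum = 1e-4, 0.0, y0
M = 0.0  # running maximum of |y(s)| / (e^{-lam s} + e^{-2 sigma s})
while s < 10.0:
    ynum += h * (-lam * ynum + C * math.exp(-2 * sigma * s))
    s += h
    M = max(M, abs(ynum) / (math.exp(-lam * s) + math.exp(-2 * sigma * s)))

# The ratio stays bounded, i.e. y(s) = O(e^{-lam s} + e^{-2 sigma s}).
assert M < 10.0
assert abs(ynum - y(s)) < 1e-3  # Euler agrees with the exact solution
```

The same one-line integrating-factor computation is what produces the bounds of Lemmas \ref{st-bd} and \ref{ctr-bd} after taking $H^r$ norms.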
The limit of course must be an element of $E_k,$ that is, an eigenfunction $P$ of $\Delta +1$ with eigenvalue $\lambda_k.$ Altogether, Lemmas \ref{st-bd}, \ref{unst-bd}, and \ref{ctr-bd} imply that \begin{align*} u(y,s) &= e^{-\lambda_k s} P(y) + O\left(e^{-\lambda_{k+1}s} +e^{-2\sigma s}\right) \end{align*} in $H^r(\mathbf{S}^n)$ as $s\to\infty,$ for any $\sigma <\lambda_k.$ In particular, $\|u(s)\|_{H^r} \leq C e^{-\lambda_k s}$ for some $C>0,$ and this fact can be used to improve the asymptotics and take $\sigma = \lambda_k$ in Lemmas \ref{unst-bd} and \ref{ctr-bd} (but not in Lemma \ref{st-bd}) as follows. The appearance of $\sigma$ came through the $H^r$ bounds on the nonlinear term $N(u(s))$ used in the proofs of Lemmas \ref{st-bd}, \ref{unst-bd}, and \ref{ctr-bd}: we could only say, based on the hypotheses of Proposition \ref{rate}, that $\|N(u(s))\|_{H^r} \leq C e^{-2\sigma s}$ for any $\sigma<\lambda_k$ and for some $C>0$ depending on $\sigma.$ But now that we know $\|u(s)\|_{H^r}$ (hence $\|u(s)\|_{H^{r+1}}$ and $\|u(s)\|_{H^{r+2}},$ by standard parabolic estimates using the fact that $u$ is a solution to (\ref{rMCF})) is actually $O(e^{-\lambda_k s}),$ we can say that $N(u(s)) = O(e^{-2\lambda_k s})$ and then obtain improvements on the error in Lemmas \ref{unst-bd} and \ref{ctr-bd}. A close look at the proof of Lemma \ref{st-bd} reveals that the same method does not work there, and we are stuck with the appearance of $\sigma$ in the conclusion. Of course it does not matter so long as $2\lambda_k>\lambda_{k+1},$ which is the case for most $k.$ We summarize these observations in a corollary, which states a more precise version of Proposition \ref{rate}.
\begin{corollary} \label{precise-rate} Under the hypotheses of Proposition \ref{rate}: \begin{itemize} \item[(a)] The projection $\Pi_{k+1}u$ onto the sum $F_{k+1}$ of eigenspaces corresponding to eigenvalues $\lambda_j$ with $j>k$ satisfies \begin{align*} \|\Pi_{k+1}u(s)\|_{H^r(\mathbf{S}^n)} \leq C\left(e^{-\lambda_{k+1} s} + e^{-2\sigma s}\right) \end{align*} for any $\sigma <\lambda_k,$ some $C>0$ (depending on $\sigma$), and all $s\geq s_0.$ \item[(b)] The projection $(1-\Pi_k)u$ onto the sum $F_k^\perp$ of eigenspaces corresponding to eigenvalues $\lambda_j$ with $j<k$ satisfies \begin{align*} \|(1-\Pi_k) u(s)\|_{H^r(\mathbf{S}^n)} \leq C e^{-2\lambda_k s} \end{align*} for some $C>0$ and all $s\geq s_0.$ \item[(c)] The projection $\pi_k u$ onto the eigenspace $E_k$ corresponding to the eigenvalue $\lambda_k$ satisfies \begin{align*} \|e^{\lambda_k s} \pi_k u(s) - P\|_{H^r(\mathbf{S}^n)} \leq C e^{-\lambda_k s} \end{align*} for some $P\in E_k,$ some $C>0,$ and all $s\geq s_0.$ \end{itemize} \end{corollary} We now obtain more precise asymptotics.
\begin{lemma} \label{higher-order} Suppose $u$ satisfies the hypotheses of Proposition \ref{rate}, and suppose $j\geq k$ and $\lambda_j <2\lambda_k.$ Then there exists $P_j$ in the eigenspace $E_j$ corresponding to $\lambda_j$ such that \begin{align*} \|e^{\lambda_j s}\pi_j u(s) - P_j\|_{H^r(\mathbf{S}^n)} \leq C e^{(\lambda_j - 2\lambda_k)s} \end{align*} for all $s\geq s_0.$ \end{lemma} \begin{proof} We argue as in Lemma \ref{ctr-bd}, using \begin{align*} \frac{\mathrm d}{\mathrm ds} e^{\lambda_j s} \pi_j u(s) = e^{\lambda_j s} \pi_j N(u(s)) \end{align*} to obtain \begin{align*} \|e^{\lambda_j t} \pi_j u(t) - e^{\lambda_j s} \pi_j u(s)\|_{H^r} &\leq C \int_s^t e^{\lambda_j \tau} \|N(u(\tau))\|_{H^r}\,\mathrm d\tau \leq C \int_s^t e^{(\lambda_j - 2\lambda_k) \tau}\,\mathrm d\tau. \end{align*} The right side is $O(e^{(\lambda_j - 2\lambda_k)s})$ independently of $t$ provided that $\lambda_j<2\lambda_k,$ and this gives the conclusion of the lemma. \end{proof} From Lemma \ref{higher-order} we obtain higher order asymptotics in certain cases (when $k$ is large). \begin{corollary} Let $u$ satisfy the hypotheses of Proposition \ref{rate}. Then for each $j\geq k$ such that $\lambda_j<2\lambda_k,$ there exists $P_j \in E_j$ such that \begin{align*} u(y,s) &= \sum_{\genfrac{}{}{0pt}{2}{j\geq k}{\lambda_j<2\lambda_k}} e^{-\lambda_j s} P_j(y) + O(e^{-2\sigma s}) \end{align*} for any $\sigma<\lambda_k.$ \end{corollary} \subsection{Prescribing the first-order asymptotics} In this section we prove Theorem \ref{prescribed-lin}.
Given $a\in F_k$ sufficiently small in $H^r$ for $r>n/2+1,$ we constructed in Section \ref{contraction} a unique solution $u(s;a)$ to (\ref{rMCF}) satisfying $\Pi_ku(0;a) = a$ and converging to zero with exponential rate $\lambda_k.$ In the preceding subsection we showed that \begin{align*} P(a) &= \lim_{s\to\infty} e^{\lambda_k s} u(s;a) \end{align*} exists and is an element of the eigenspace $E_k$ corresponding to $\lambda_k.$ Here we will study the map $a\mapsto P(a).$ We will show that the image of this map contains a small ball centered at the origin in $E_k.$ This is enough to conclude that every $P\in E_k$ is attained as the limit of $e^{\lambda_k s} u(s)$ for some solution $u$ to (\ref{rMCF}), because we can always replace $u$ with $\tilde{u}(s) = u(s-s_0)$ for $s\geq s_0,$ thereby scaling the limit by a factor $e^{\lambda_k s_0}.$ Actually, we do not need to look at arbitrary $a\in F_k$ to obtain surjectivity: we may restrict attention to $a\in E_k.$ The precise result is the following: \begin{proposition} There exists $\delta>0$ such that if $b\in E_k$ satisfies $\|b\|_r<\delta,$ then there exists $a\in E_k$ with $P(a) = b.$ \end{proposition} \emph{Remark.} A slightly more careful argument along the lines of the proof below shows that $P$ actually maps a small neighborhood of the origin in $E_k$ homeomorphically onto another small neighborhood of the origin. \begin{proof} Let us first recall the norm $\|\cdot\|_{r,\sigma}$ from Section \ref{contraction}: \begin{align*} \|v\|_{r,\sigma} &= \left(\int_0^\infty \|v(s)\|_{r+1}^2\,\mathrm ds\right)^{1/2} + \sup_{s\geq 0} e^{\sigma s}\|v(s)\|_r.
\end{align*} It follows from Theorem \ref{contraction-thm} that if $\sigma<\lambda_k$ and $a\in F_k$ is sufficiently small in $H^r,$ then \begin{align*} \|u(s;a) - e^{Ls}a\|_{r,\sigma} \leq C\|u(\cdot;a)\|_{r,\sigma}^2 \end{align*} for some constant $C>0$ depending only on $\|a\|_{H^r}.$ By making $a$ smaller if necessary, we may moreover assume that $\|u(\cdot;a)\|_{r,\sigma}<1/(2C),$ so that we get the bound \begin{align} \|u\|_{r,\sigma} \leq 2\|e^{Ls}a\|_{r,\sigma} \leq 4\|a\|_{H^r}. \label{pr.init-bd} \end{align} The last inequality is just an $H^r$ estimate for the homogeneous linear equation. Thus we can bound $\|u\|_{r,\sigma}$ by a constant times $\|a\|_{H^r}$ for any $\sigma<\lambda_k,$ provided that $\|a\|_{H^r}$ is small enough. We now make use of the representation \begin{align*} e^{\lambda_k s} u(s;a) & = e^{(\lambda_k+L)s}a \\ &+ \int_{0}^s e^{(\lambda_k + L)(s-t)} e^{\lambda_kt} \Pi_{k} N(u(t;a))\,\mathrm dt - \int_s^\infty e^{(\lambda_k + L)(s-t)}e^{\lambda_k t}(1-\Pi_{k}) N(u(t;a))\,\mathrm dt, \end{align*} which is valid for $u$ because $e^{\lambda_k s}u(s;a)$ is bounded. By taking the $H^r$ norm of both sides and applying the triangle inequality we deduce \begin{align*} \left\|e^{\lambda_k s} u(s) -e^{(\lambda_k + L)s} a\right\|_{H^r} &\leq \int_{0}^\infty e^{\lambda_kt} \|N(u(t))\|_{H^r}\,\mathrm dt \leq C\int_{0}^\infty e^{\lambda_k t} \|u(t)\|_{H^{r+1}}\|u(t)\|_{H^{r+2}} \,\mathrm dt, \end{align*} where in the last inequality we have used the nonlinear bound $\|N(u)\|_{H^r}\lesssim \|u\|_{H^{r+1}}\|u\|_{H^{r+2}}$ from Lemma \ref{bilin-bd}.
Now, the right side can be bounded by $\|u\|_{r+2,3\lambda_k/4}^2,$ for example, as follows: \begin{align*} \int_{0}^\infty e^{\lambda_k t}\|u(t)\|_{H^{r+1}} \|u(t)\|_{H^{r+2}} \,\mathrm dt &\leq\int_{0}^\infty\left( e^{3\lambda_k t/4}\|u(t)\|_{H^{r+2}}\right)^2e^{-\lambda_k t/2} \,\mathrm dt \leq \frac{2}{\lambda_k}\left(\sup_{t\geq 0} e^{3\lambda_k t/4}\|u(t)\|_{H^{r+2}}\right)^2. \end{align*} If we now assume that $a$ is sufficiently small in $H^{r+2}$ rather than in $H^r$ and employ the bound (\ref{pr.init-bd}) (with $r+2$ instead of $r$), we obtain \begin{align*} \left\|e^{\lambda_k s} u(s) -e^{(\lambda_k + L)s} a\right\|_{H^r} \leq C \|a\|_{H^{r+2}}^2. \end{align*} On the left side we let $s\to\infty.$ If $\pi_ka$ is the projection of $a$ onto the eigenspace $E_k,$ the result is \begin{align*} \frac{1}{\lambda_k^2} \|P(a) - \pi_k a\|_{H^{r+2}} = \| P(a) - \pi_ka\|_{H^r} \leq C\|a\|_{H^{r+2}}^2. \end{align*} The first equality is just the definition of the $H^r$ norm on the eigenspace $E_k.$ To finish the argument, we restrict attention to $a\in E_k.$ For such $a$ we have $\pi_k a = a,$ and the foregoing estimate reduces to $\|P(a) - a\| \leq C\|a\|^2$ for all sufficiently small $a$ in $E_k$ (the choice of norm is unimportant because $E_k$ is finite-dimensional). This is enough to prove that the image of $P$ contains a small ball in $E_k$ centered at the origin. Indeed, we run a standard contraction argument, as in one direction of the proof of the inverse function theorem: given $b\in E_k,$ we want to solve the fixed point equation $a = b - (P(a) - a).$ If $\delta <1/(2C)$ and $0<\|b\|\leq \delta - C\delta^2,$ then the map $F$ defined by $F(a) = b-(P(a) - a)$ sends the closed ball $\|a\|\leq\delta$ to itself. Indeed, if $\|a\|\leq \delta$ then \begin{align*} \|F(a)\| = \|b- (P(a)-a)\| \leq \|b\| + \|P(a) - a\| \leq \delta - C\delta^2 + C\delta^2 = \delta.
\end{align*} On the other hand, $F$ depends continuously on $a$ (this follows from the proof of Theorem \ref{contraction-thm}), and so, being a continuous mapping of a closed ball in the finite-dimensional space $E_k$ into itself, it must have a fixed point $a$ by Brouwer's theorem; that is, there is a solution to $P(a) = b.$ \end{proof} \end{document}
\begin{document} \centerline {\LARGE\bf On S.L.~Tabachnikov's conjecture} \centerline {A.I.~Nazarov, F.V.~Petrov} {\small S.L.~Tabachnikov's conjecture is proved: for any closed curve $\Gamma$ lying inside a convex closed planar curve $\Gamma_1$, the mean absolute curvature $T(\Gamma)$ exceeds $T(\Gamma_1)$ if $\Gamma\ne k\Gamma_1$. The inequality $T(\Gamma)\ge T(\Gamma_1)$ is proved for curves in a hemisphere.} \section*{1. Problem setting. Main ideas} Let $\Gamma(s)$, $s\in[0,L(\Gamma)]$, be a naturally parametrized closed curve in the plane. We say that $\Gamma(s)$ belongs to the class $BV^1$ if the velocity $\Gamma'(s)$ exists and is continuous on $[0,L(\Gamma)]$ with the exception of a countable set; at the points of this set $\Gamma'$ has left and right limits, and the variation of $\Gamma'$ is bounded\footnote {The variation of a function $f$ mapping into the unit circle is defined as the supremum of the sums $\sum_{i=1}^n \rho(f(t_i),f(t_{i-1}))+\rho(f(t_n),f(t_0))$ taken over all subdivisions $t_0<t_1<\dots<t_n$ of the segment on which $f$ is defined, provided $f$ is defined at the nodes $t_i$; here $\rho$ is the intrinsic metric of the circle.}. The full variation of $\Gamma'$ is called the {\it full rotation} of the curve $\Gamma$ and is denoted by $V(\Gamma)$. Note the following properties of the full rotation: $1^{\circ}$. For $C^2$-smooth curves the full rotation equals the integral of the modulus of curvature with respect to the natural parameter. $2^{\circ}$. The full rotation of a closed polygonal line equals the sum of the external angles at all its vertices. $3^{\circ}$. The full rotation of a closed convex curve exists and equals $2\pi$. Define the {\it mean absolute curvature} of a curve $\Gamma\in BV^1$ as its full rotation divided by its length: $T(\Gamma)=V(\Gamma)/L(\Gamma)$. S.L.~Tabachnikov [1] formulated the following conjecture, which he called the {\it DNA inequality}: { \bf Theorem ${\cal P}$.} 1.
The mean absolute curvature $T(\Gamma)$ of a closed curve $\Gamma\in BV^1$ (the ``DNA'') lying inside a convex closed curve $\Gamma_1$ (the ``cell'') is not less than $T(\Gamma_1)$. {2.} If $T(\Gamma)=T(\Gamma_1)$, then the curve $\Gamma$ is a multiple circuit of $\Gamma_1$. A survey of results concerning this conjecture and its generalisations is given in [1]. The first part of Theorem ${\cal P}$ is proved in [2]. We prove the DNA inequality in full generality. The proof of the first part partially follows the strategy of [2], but it is more transparent, and it is used in proving the second part. In order to make the paper self-contained, we give (significantly simplified) proofs of all the lemmas from [2] that are used. Without loss of generality, we may assume that $\Gamma_1$ is the boundary of the convex hull of $\Gamma$. A curve $\widetilde\Gamma$ is said to be {\it better} than a curve $\Gamma$ if $T(\Gamma)\ge T(\widetilde\Gamma)$, and {\it strictly better} if $T(\Gamma)>T(\widetilde\Gamma)$. We call the replacement of a curve by a better (resp. strictly better) one an {\it improvement} (resp. a {\it strict improvement}), provided the convex hull of the ``new''\ curve is not larger than the convex hull of the ``old''\ one. Note that if after some improvements of a curve $\Gamma$ we get a multiple circuit of $\Gamma_1$, then Claim 1 of Theorem ${\cal P}$ is proved for the curve $\Gamma$. If, moreover, at least one improvement is strict, then the strict inequality $T(\Gamma)>T(\Gamma_1)$ is established. First, Claim 1 is reduced to the case of polygonal lines. Then the vertices of the polygonal line are moved to the boundary (here and below: the boundary of the convex hull). After that, each change in the direction of rotation admits an improvement of the curve. A finite number of such improvements allows us to obtain a curve that rotates in only one direction (say, only clockwise), for which Claim 1 is almost obvious. Then we prove that every curve different from a multiple circuit of the boundary may be strictly improved. This proves Claim 2.
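For readers who want to experiment, the quantities $V$, $L$ and $T$ of a closed polygonal line are straightforward to compute. The sketch below is illustration only (the square ``cell'' and the zig-zag ``DNA'' are hypothetical examples chosen for this note, not configurations from the paper); it checks the DNA inequality on one concrete pair of curves.

```python
import math

def full_rotation(pts):
    """Full rotation V of a closed polygonal line: the sum over all vertices
    of the absolute turning angle |pi - interior angle| (property 2 above)."""
    n = len(pts)
    total = 0.0
    for i in range(n):
        ax, ay = pts[i - 1]
        bx, by = pts[i]
        cx, cy = pts[(i + 1) % n]
        # turning angle between consecutive edge directions at vertex pts[i]
        u = (bx - ax, by - ay)
        v = (cx - bx, cy - by)
        dot = u[0] * v[0] + u[1] * v[1]
        cross = u[0] * v[1] - u[1] * v[0]
        total += abs(math.atan2(cross, dot))
    return total

def length(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def mean_abs_curvature(pts):
    return full_rotation(pts) / length(pts)

# "Cell": the unit square; it is convex, so its full rotation is 2*pi.
cell = [(0, 0), (1, 0), (1, 1), (0, 1)]

# "DNA": a non-convex closed polyline strictly inside the square.
dna = [(0.1, 0.2), (0.5, 0.8), (0.9, 0.2), (0.5, 0.5)]

assert abs(full_rotation(cell) - 2 * math.pi) < 1e-9   # property 3
assert mean_abs_curvature(dna) >= mean_abs_curvature(cell)  # DNA inequality
```

Of course, a single example verifies nothing; the point of the paper is the proof for all curves in $BV^1$.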
\section*{2. Reduction to a polygonal line} Consider a point where the function $\Gamma'$ jumps. Note that the variation of $\Gamma'$ does not change if we redefine $\Gamma'$ at this point to be an arbitrary vector lying on the unit circle between\footnote {i.e. on the smaller of the two arcs.} the left and right limits of $\Gamma'$. Further, when speaking about the value set of the velocity on some subinterval, we take into account that we add to it these sets of admissible values at the jump points mentioned above. Thus the value set of the velocity on any interval is an arc of the unit circle. We need the following {\bf Lemma 1.} Consider two points $A$ and $B$ on a curve $\Gamma$. The full rotation of the curve $\Gamma$ between $A$ and $B$ (this part of $\Gamma$ will be denoted by $\Gamma_{AB}$) is not less than $\rho(\Gamma'(A),e)+\rho(\Gamma'(B),e)$, where $e$ is the unit vector directed along $\overline{AB}$. {\bf Proof.} If the vector $e$ lies in the value set of $\Gamma'$ on the part from $A$ to $B$, then the claim is clear (it suffices to consider a subdivision of $\Gamma_{AB}$ with nodes $A$, $B$ and a point $C$ with $\Gamma'(C)=e$). In the opposite case, the value set of $\Gamma'$ from $A$ to $B$ is an arc not less than a semicircle (otherwise one could find a half-plane which contains this value set and does not contain $\overline{AB}$). Consider a subdivision of $\Gamma_{AB}$ with nodes at the points with extremal velocities. We get that the rotation of $\Gamma$ from $A$ to $B$ is not less than the larger arc between $\Gamma'(A)$ and $\Gamma'(B)$. This proves the lemma in this case. $\square$ {\bf Lemma 2.} Assume that a curve $\Gamma$ does not satisfy Claim 1 of Theorem ${\cal P}$. Then there exists a polygonal line which also does not satisfy this claim. {\bf Proof.} By our assumptions we have $T(\Gamma)<T(\Gamma_1)$.
We inscribe a polygonal line $\Delta$ into $\Gamma$ so that the length of $\Delta$ is sufficiently close to the length of $\Gamma$ (namely, $L(\Delta)> L(\Gamma)\cdot \frac {T(\Gamma)}{ T(\Gamma_1)}$). Clearly, the convex hull of $\Delta$ (we denote its boundary by $\Delta_1$) lies inside $\Gamma_1$. To prove that $V(\Delta)\le V(\Gamma)$ it suffices to sum the inequalities of Lemma 1 over all edges of the polygonal line, and then to use the triangle inequality. Hence $$T(\Delta)=\frac {V(\Delta)}{ L(\Delta)} <\frac {V(\Gamma)}{L(\Gamma)}\cdot \frac {T(\Gamma_1)}{ T(\Gamma)}=T(\Gamma_1)\le T(\Delta_1),$$ \noindent and the lemma follows. $\square$ Let $A_1A_2\dots A_nA_1$ be a closed polygonal line. We denote by $L$ its length, by $P$ the perimeter of its convex hull, and by $V:=\sum_{i=1}^n(\pi-\angle A_{i-1}A_iA_{i+1})$ its full rotation (the enumeration of indices is cyclic modulo $n$). We assume that no vertex $A_i$ of the polygonal line lies on the segment $[A_{i-1}A_{i+1}]$. Such vertices may appear in the process of improvement; in this case they are removed immediately. In the above notation, Claim 1 of Theorem ${\cal P}$ for polygonal lines may be reformulated as follows: {\bf Lemma 3.} $\displaystyle{\frac {L}{ V}\le \frac {P}{ 2\pi}}$. Note that Lemmas 2 and 3 imply Claim 1 in the general case. \section*{3. Quadrilaterals} Here we prove two lemmas, which constitute the claim of Lemma 3 for quadrilaterals. These lemmas will be used later for the improvement of an arbitrary polygonal line. {\bf Lemma 4.} For any triangle $ABC$ the inequality $$\frac {AB+BC}{ 2\pi-\beta}< \frac {AB+BC+AC}{ 2 \pi},\eqno(1)$$ \noindent holds, where $\beta=\angle ABC$.
{\bf Proof.} By the law of sines we have $$\frac {AB+BC}{ AC}= \frac {\sin \angle A+\sin \angle C}{ \sin \beta}= \frac {2\sin(\frac {\angle A+\angle C}{ 2}) \cos(\frac {\angle A-\angle C}{ 2})}{ 2\sin \frac {\beta}{ 2} \cos\frac{\beta}{2}} =\frac {\cos(\frac {\angle A-\angle C}{ 2})}{ \sin\frac {\beta}{ 2}}\le \frac {1}{\sin\frac {\beta}{ 2}}.$$ Due to the concavity of the sine on $[0,\pi/2]$ we have $\sin \frac {\beta }{ 2}> \frac {\beta}{ \pi}$. Hence $$\frac {AB+BC}{ AC}< \frac {\pi}{ \beta}< \frac {2\pi-\beta}{ \beta},$$ \noindent which is equivalent to (1). $\square$ {\bf Lemma 5.} Let $ABCD$ be a convex quadrilateral and let $O$ be the intersection point of its diagonals. Put $\varphi=\angle AOB$. Then $$\frac {AB+BD+DC+CA}{ 2(\pi+\varphi)}< \frac {AB+BC+CD+DA}{ 2\pi}.\eqno(2)$$ {\bf Proof.} By Lemma 4 we have $AB> \frac {\varphi}{\pi}(AO+OB)$ and $CD> \frac {\varphi}{\pi}(CO+OD)$. Adding these inequalities we get $AB+CD> \frac {\varphi}{\pi}(AC+BD)$. Analogously, $BC+AD>(1-\frac {\varphi}{\pi})(AC+BD)$. Hence $$\frac {\varphi}{\pi}\cdot \frac {AB+CD}{AC+BD}+ (1+\frac {\varphi}{\pi})\cdot \frac {BC+DA}{AC+BD}>\frac {\varphi}{\pi}\cdot\frac {\varphi}{\pi}+ (1+\frac {\varphi}{\pi})\cdot(1-\frac {\varphi}{\pi})=1. $$ Therefore, $$ \frac {AB+CD} {AC+BD}+1< (1+\frac {\varphi}{\pi}) (\frac {AB+CD}{AC+BD}+\frac {BC+DA}{AC+BD}). $$ This inequality is equivalent to (2). $\square$ The statements of Lemmas 4 and 5 are nothing more than the special cases of Lemma 3 for concave and self-intersecting quadrilaterals, respectively. {\it Remark 1.} The inequalities of Lemmas 4 and 5 (with the sign $\le$) also hold for a degenerate triangle $ABC$ and a degenerate quadrilateral $ABCD$, respectively. \section*{4. Moving vertices to the boundary} Here we reduce the proof of Lemma 3 to the case where all vertices of the polygonal line belong to the boundary of its convex hull. Assume that a vertex $A_i$ lies strictly inside the convex hull. Let us consider three cases. Case {\bf a)}.
The line $A_iA_{i+1}$ does not separate the points $A_{i-1}$ and $A_{i+2}$. In this case we can strictly improve the polygonal line, increasing its length without changing its rotation: just move the vertex $A_i$ along the extension of the segment $A_{i-1}A_i$ until it touches either the boundary or the ray $A_{i+2}A_{i+1}$. Thus we get a better polygonal line with a smaller number of vertices lying strictly inside the convex hull (it may well happen that the total number of vertices also decreases). A similar operation is possible if $A_iA_{i-1}$ does not separate $A_{i+1}$ and $A_{i-2}$. Case {\bf b)}. Assume that the line $A_iA_{i+1}$ separates $A_{i-1}$ and $A_{i+2}$, while $A_iA_{i-1}$ separates $A_{i+1}$ and $A_{i-2}$. Let $A_{i-2}$ and $A_{i+2}$ lie in the angles supplementary (on the side containing $A_i$) to the angles $\angle A_{i+1}A_{i-1}A_i$ and $\angle A_{i-1}A_{i+1}A_i$, respectively. Then we replace the edges $A_{i-1}A_i$ and $A_iA_{i+1}$ of the polygonal line $A_1A_2\dots A_n$ by the single edge $A_{i-1}A_{i+1}$. Assume that the old polygonal line does not satisfy the inequality of Lemma 3 while the new one does, i.e. $$\frac {L}{ V}> \frac {P}{ 2\pi}\ge \frac {L-(A_{i-1}A_i+A_iA_{i+1}-A_{i-1}A_{i+1})}{ V-2(\pi-\beta)},\eqno(3)$$ where $\beta=\angle A_{i-1}A_iA_{i+1}$. Then $$P\cdot V-2(\pi-\beta)\cdot P+ 2\pi(A_{i-1}A_i+A_iA_{i+1}-A_{i-1}A_{i+1})\ge 2\pi L> P\cdot V,$$ hence $$ 2\pi(A_{i-1}A_i+A_iA_{i+1}-A_{i-1}A_{i+1}) > 2(\pi-\beta) P \ge2(\pi-\beta)(A_{i-1}A_i+A_iA_{i+1}+A_{i-1}A_{i+1}), $$ and we obtain a contradiction with Lemma 4. Thus the new polygonal line is also a counterexample to Lemma 3, while it has fewer inner vertices. It remains to consider case {\bf c)}, where, for example, the line $A_{i-1}A_{i+1}$ separates $A_{i+2}$ and $A_i$ (in this case the vertex $A_{i+1}$ also lies strictly inside the convex hull). Without loss of generality we assume that the angle $A_{i-1}A_{i+1}A_i$ is the smallest among all indices $i$ satisfying this condition.
Let us replace $i$ by $i+1$ and consider the analogous cases. The vertex $A_{i-1}$ lies in the angle supplementary to $\angle A_iA_{i+2}A_{i+1}$ with respect to $A_iA_{i+1}$. If the vertex $A_{i+3}$ does not lie in the angle vertical to $\angle A_{i+1}A_{i+2}A_i$, the polygonal line may be improved as shown above (with the change $i\rightarrow i+1$). In the opposite case we get a contradiction with the choice of $i$: the angle $\angle A_iA_{i+2}A_{i+1}$ is less than the angle $\angle A_iA_{i+1}A_{i-1}$ (since $\angle A_iA_{i+2}A_{i+1}+\angle A_iA_{i+1}A_{i+2}< \angle A_iA_{i+1}A_{i-1}+\angle A_iA_{i+1}A_{i+2}$). So, after a finite number of steps we reduce the general case to the case where all the vertices $A_i$ of the polygonal line $A_1A_2\dots A_n$ lie on the boundary of its convex hull. \section*{5. Decreasing the number of direction changes} Fix an orientation of the plane. We say that the polygonal line $A_1A_2\dots A_n$ turns to the right at a vertex $A_i$ if the basis $\overline{A_{i-1}A_i}$, $\overline{A_iA_{i+1}}$ is negatively oriented. Otherwise (in particular, if these vectors are collinear) we say that it turns to the left. If the polygonal line turns to the right (or to the left) two times in succession, we may replace the edge between these turns by the part of the boundary passed in the same direction. This operation improves the polygonal line. We call it {\it the stretching} of the polygonal line. {\bf Lemma 6.} Assume that some consecutive edges of our polygonal line form a full circuit of the boundary, and the first and the last edges coincide (i.e. the boundary is a convex polygon $C_1C_2\dots C_m$ while the polygonal line has a part $XC_1C_2\dots C_mC_1C_2Y$). Then the claim of Lemma 3 for this polygonal line is equivalent to the claim of Lemma 3 for the polygonal line with this circuit removed (i.e. for the polygonal line in which this part is replaced by just $XC_1C_2Y$).
{\bf Proof.} Note that the length of the polygonal line after removing the circuit equals $L-P$, and its full rotation equals $V-2\pi$. The statement of Lemma 3 for the new polygonal line claims that $\frac{L-P}{V-2\pi}\le \frac{P}{2\pi}$, which is equivalent to the statement of Lemma 3 for the initial polygonal line. $\square$ Let us repeat the operation of Lemma 6 as long as it is possible. This process must stop, since the number of edges decreases. Note that the number of changes of turn direction does not change. Now the polygonal line is partitioned into parts in which all the turns have the same direction; and in each part all the edges (except, possibly, the first and the last) go along the boundary and, due to Lemma 6, are distinct. Let us perform the following operation. Choose a part $A_iA_{i+1}\dots A_k$ in which all turns are, say, left ones (namely, the turns at the vertices $A_{i+1}$, $A_{i+2},\dots$, $A_{k-1}$ are left, and the turns at $A_i$ and $A_k$ are right). Replace the path $A_iA_{i+1}\dots A_k$ by the part of the boundary $A_i\dots A_k$, bypassing the boundary in the opposite (in our case, negative) direction. The number of direction changes decreases after this operation. Assume that the initial polygonal line does not satisfy the inequality of Lemma 3. Our goal is to prove that in this case the new polygonal line does not satisfy it either. Six cases are possible; they are determined by the order of the vertices $A_i$, $A_{i+1}$, $A_{k-1}$, $A_k$ when the boundary is traversed in the positive direction: $1^{\circ}$. $A_iA_{i+1}A_{k-1}A_k$; $2^{\circ}$. $A_iA_kA_{i+1}A_{k-1}$; $3^{\circ}$. $A_iA_{i+1}A_kA_{k-1}$; $4^{\circ}$. $A_iA_{k-1}A_kA_{i+1}$; $5^{\circ}$. $A_iA_{k-1}A_{i+1}A_k$; $6^{\circ}$. $A_iA_{k-1}A_kA_{i+1}$. Cases 3 and 4 are equivalent up to renaming and symmetry. Denote the length of the polygonal line $A_iA_{i+1}\dots A_k$ by $s$, and the length of its replacement by $s'$.
\begin{floatingfigure}{240pt} \caption{} \begin{picture}(220,200)(0,40) \put(15,195){$A_i$} \put(20,130){$A_{i+1}$} \put(180,145){$A_{k-1}$} \put(190,170){$A_k$} \put(10,200){\line(-1,-3){5}} \put(10,200){\vector(1,1){10}} \put(20,210){\vector(2,1){20}} \put(40,220){\vector(4,1){40}} \put(80,230){\vector(1,0){40}} \put(120,230){\vector(3,-1){30}} \put(150,220){\vector(2,-1){40}} \put(190,200){\vector(1,-1){20}} \put(210,180){\line(1,-2){5}} \thicklines \put(10,200){\vector(1,-2){30}} \put(40,140){\vector(1,-1){40}} \put(80,100){\vector(2,-1){30}} \put(110,85){\vector(3,-1){30}} \put(140,75){\vector(1,0){20}} \put(160,75){\vector(2,1){30}} \put(190,90){\vector(1,3){20}} \put(210,150){\vector(0,1){30}} \end{picture} \end{floatingfigure} $1^{\circ}$ (see Figure 1). Denote $\angle A_{i+1}A_iA_k=\alpha$, $\angle A_{k-1}A_kA_i=\beta$. After the change of the polygonal line, its full rotation decreases by $2(\alpha+\beta)$, and its length decreases by $s-s'$. If the new polygonal line satisfies the inequality of Lemma 3, we have $$\frac {L}{ V}>\frac {P}{ 2\pi}\ge \frac {L+s'-s}{V-2(\alpha+\beta)},\eqno(4)$$ or $$P\cdot V -2P(\alpha+\beta)+2\pi (s-s')\ge 2\pi L>P\cdot V,$$ hence $2\pi (s-s')>2(\alpha+\beta)P\ge 2(\alpha+\beta)(s+s')$, and $2(\pi-\alpha-\beta)s>2(\pi+\alpha+\beta)s'$. The last inequality may hold only if $\alpha+\beta<\pi$. In this case the rays $A_iA_{i+1}$ and $A_kA_{k-1}$ meet at a point $C$, and $CA_i+CA_k\ge s$, $A_iA_k\le s'$. So $$(\pi-\alpha-\beta)(CA_i+CA_k)>(\pi+\alpha+\beta) A_iA_k,$$ which contradicts Lemma 4. $2^{\circ}$ (see Figure 2). Denote by $O$ the intersection point of the segments $A_iA_{i+1}$ and $A_kA_{k-1}$, and put $\angle A_iOA_k=\varphi$. After the change of the polygonal line, its full rotation decreases by $2\varphi$, and its length decreases by $s-s'$.
\begin{floatingfigure}{240pt} \caption{} \begin{picture}(220,220)(0,50) \put(130,235){$A_i$} \put(20,130){$A_{i+1}$} \put(180,145){$A_{k-1}$} \put(10,215){$A_k$} \put(30,210){\line(2,1){20}} \put(130,230){\vector(2,-1){50}} \put(180,205){\vector(1,-1){20}} \put(200,185){\vector(1,-2){10}} \put(210,165){\vector(0,-1){15}} \put(40,140){\vector(-1,2){20}} \put(20,180){\vector(0,1){20}} \put(20,200){\vector(1,1){10}} \put(130,230){\line(-4,1){20}} \thicklines \put(130,230){\vector(-1,-1){90}} \put(40,140){\vector(1,-1){40}} \put(80,100){\vector(2,-1){30}} \put(110,85){\vector(3,-1){30}} \put(140,75){\vector(1,0){20}} \put(160,75){\vector(2,1){30}} \put(190,90){\vector(1,3){20}} \put(210,150){\vector(-3,1){180}} \end{picture} \end{floatingfigure} If the new polygonal line satisfies Lemma 3, then $$\frac {L}{V}>\frac {P}{2\pi}\ge \frac {L+s'-s}{V-2\varphi}.$$ From here, analogously to case $1^{\circ}$, we obtain $2\pi (s-s')>2\varphi P$, whence $$\frac {A_iA_{i+1}+A_kA_{k-1}-(A_iA_{k-1}+A_kA_{i+1})} {A_iA_k+A_kA_{i+1}+A_{i+1}A_{k-1}+A_iA_{k-1}}>\frac {\varphi}{\pi}.$$ This contradicts Lemma 5 (for the quadrilateral $A_iA_{k-1}A_{i+1}A_k$). The other cases are analogous to these two: Lemma 4 is used in cases 3 and 6, and Lemma 5 in case 5. So, after a finite number of steps we get a polygonal line which turns only to the right. Using stretching, we obtain from this polygonal line a multiple circuit of the boundary; hence the statement of Lemma 3 holds for it. So the initial assumption was wrong, and Lemma 3 is proved. Claim 1 of Theorem $\cal P$ is proved as well. \section*{6. Proof of Claim 2} Assume that a curve $\Gamma$ is not a (multiple) circuit of the boundary, but $T(\Gamma)=T(\Gamma_1)$. We select a finite number of points on $\Gamma$ so that the sum of the velocity jumps at all other points is sufficiently small (say, less than $\pi/180$). The union of this finite set and the set $\Gamma\cap\Gamma_1$ is closed.
The preimage of its complement (recall that the curve is naturally parametrised) is a union of countably many intervals. Consider one of these intervals; let it correspond to the part of $\Gamma$ between points $A$ and $B$. A part $\Gamma_{CD}$ is said to be {\it small} if the set of values of the velocity on this part is an arc of length at most $\pi/4$, and the circle with diameter $CD$ lies strictly inside $\Gamma_1$. It is easy to see that any inner point of $\Gamma_{AB}$ belongs to some small subpart. Consider a small part $\Gamma_{CD}$. Redefine, if necessary, the velocities $\Gamma'(C)$ and $\Gamma'(D)$ as their right limits. Define a parallelogram $CPQD$ with the vectors $\overline{CP}$ and $\overline{CQ}$ directed along the extremal directions of the velocity of the curve $\Gamma$ on the part $\Gamma_{CD}$. This parallelogram lies strictly inside $\Gamma_1$ (since the extremal directions are quite close to the direction of the vector $\overline{CD}$). There are points $X$ and $Y$ on $\Gamma_{CD}$ with tangents parallel to $CP$ and $CQ$, respectively. Without loss of generality we may assume that the order of the points is $C-X-Y-D$. Replace the part $\Gamma_{CD}$ by the polygonal line $CPD$. Note that the full rotation of $\Gamma_{CD}$ is not less than $$v:=\rho(\Gamma'(C),\Gamma'(X))+\rho(\Gamma'(X),\Gamma'(Y))+ \rho(\Gamma'(Y),\Gamma'(D)),$$ while the full rotation of the new part equals $v$. Equality holds only if $\Gamma$ is convex from $C$ to $X$, from $X$ to $Y$ and from $Y$ to $D$. Furthermore, the length of $\Gamma_{CD}$ does not exceed $CP+PD$. To prove this, consider an arbitrary polygonal line inscribed in $\Gamma_{CD}$. The directions of its edges lie between the directions of the vectors $\overline{CP}$ and $\overline{CQ}$. So, after rearranging them in the order monotone (in the sense of direction) from $\overline{CP}$ to $\overline{CQ}$ we get a convex polygonal line $C\dots D$ lying inside the triangle $CPD$, and hence its length does not exceed $CP+PD$.
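The length bound used here — an inscribed polygonal line whose edge directions lie in the cone spanned by unit vectors $u_\alpha$, $u_\beta$ has length at most $CP+PD$, where $D-C=|CP|\,u_\alpha+|PD|\,u_\beta$ — is easy to test numerically. A minimal sketch in Python (the cone width and the random edges are arbitrary choices):

```python
import math, random

random.seed(0)
a_ang, b_ang = 0.1, 0.1 + math.pi / 4          # extreme directions of the cone
ua = (math.cos(a_ang), math.sin(a_ang))
ub = (math.cos(b_ang), math.sin(b_ang))

# random edges with directions inside the cone
edges = []
for _ in range(50):
    t = random.uniform(a_ang, b_ang)
    r = random.uniform(0.1, 1.0)
    edges.append((r * math.cos(t), r * math.sin(t)))

# decompose the total displacement D - C = a*ua + b*ub (2x2 linear solve)
sx = sum(e[0] for e in edges)
sy = sum(e[1] for e in edges)
det = ua[0] * ub[1] - ua[1] * ub[0]            # sin of the cone angle, positive
a = (sx * ub[1] - sy * ub[0]) / det            # = |CP|
b = (ua[0] * sy - ua[1] * sx) / det            # = |PD|
assert a >= 0 and b >= 0

length = sum(math.hypot(*e) for e in edges)
assert length <= a + b + 1e-9                  # polyline no longer than CP + PD
```

Each edge decomposes with non-negative coefficients in the basis $(u_\alpha,u_\beta)$, and $|e|\le a_i+b_i$ by the triangle inequality, which is why the bound holds.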
So, after the replacement of the part $\Gamma_{CD}$ by the polygonal line $CPD$ the full rotation does not decrease, and the length does not increase, i.e. the curve $\Gamma$ improves. But it cannot be strictly improved, due to Claim 1 already proved. Hence after such a replacement neither the full rotation nor the length changes. The first is possible only if $\Gamma_{CD}$ can be split into at most three convex parts ($C-X$, $X-Y$, $Y-D$). The second is possible only if each of these parts is a polygonal line with at most two edges. So, $\Gamma_{CD}$ is a polygonal line with at most six edges (the number of edges could be decreased, but we do not need this). Now we fix points $A'$ and $B'$ on the open arc $\Gamma_{AB}$ and cover $\Gamma_{A'B'}$ by a finite number of small parts. The curve $\Gamma$ is a polygonal line on each small part, hence $\Gamma_{A'B'}$ is a polygonal line too. Now we prove that if $\Gamma_{A'B'}$ has at least four edges then $\Gamma$ may be strictly improved. Consider the cases of \S4. In case {\bf a)} we use only the local structure of a polygonal line, and the same argument works in our situation. In case {\bf b)}, using Claim 1 for the changed curve, we obtain the inequality analogous to (3): $$\frac {L(\Gamma)}{V(\Gamma)}=\frac {L(\Gamma_1)}{2\pi}\ge \frac {L(\Gamma)-(A_{i-1}A_i+A_iA_{i+1}-A_{i-1}A_{i+1})} {V(\Gamma)-2(\pi-\beta)},$$ \noindent hence $$2\pi(A_{i-1}A_i+A_iA_{i+1}-A_{i-1}A_{i+1}) \ge 2(\pi-\beta)(A_{i-1}A_i+A_iA_{i+1}+A_{i-1}A_{i+1}),$$ which contradicts Lemma 4. In case {\bf c)}, if $A_{i+2}$ and $A_i$ are separated by the line $A_{i-1}A_{i+1}$ (in this case the vertex $A_{i+1}$ also lies strictly inside the convex curve), the polygonal line may be strictly improved by replacing the edge $A_iA_{i+1}$ by a parallel longer edge $A_i'A_{i+1}'\parallel A_iA_{i+1}$, where $A_i\in [A_{i-1}A_i'[$, $A_{i+1}'\in ]A_{i+1}A_{i+2}]$. So, $\Gamma_{A'B'}$ is a polygonal line with at most three edges.
Since $A'$ and $B'$ were chosen arbitrarily, the curve $\Gamma_{AB}$ is a polygonal line with at most three edges too. Now we add the points of ``large turn'', excluded before, to the considered intervals. Then the whole inner part of the curve $\Gamma$ splits into an at most countable set of polygonal lines with a finite number of edges. If one of these parts contains more than one edge, the curve $\Gamma$ may be strictly improved as was done in \S4. So, all points of $\Gamma$ lie either on the boundary or on segments joining boundary points. Assume that the number of segments is infinite. Then there exists a sequence of segments with lengths tending to 0 and endpoints tending to some point $C\in\Gamma_1$. Let us fix a small neighborhood of the point $C$ in which the full rotation of $\Gamma_1$ equals $\varphi_0<\pi$. Consider one of the segments ${\overline{AB}}$ ($A,\,B\in \Gamma_1$) lying in this neighborhood. If the vectors $\Gamma'(A-)$ and $\Gamma'(B+)$ point to different sides of the line $AB$, then the curve $\Gamma$ may be strictly improved by stretching the segment $AB$ to the boundary, which is impossible. So, the variation of $\Gamma'$ on $AB$ is not less than $\pi-\varphi_0$, hence the full variation is infinite. So, the curve $\Gamma$ consists of a finite number of boundary pieces and a finite number of segments between them. If $\Gamma$ contains return points on the boundary (since $\Gamma\in BV^1$ there may be only a finite number of such points), we consider them as ``inner segments of zero length''. If two consecutive pieces of the boundary have the same circuit direction, then $\Gamma$ admits an improvement: just stretch the segment between them. Further, we may remove all the full circuits of the boundary as in Lemma 6. Now consider an arc $\Gamma_{AB}$ consisting of a segment $AA_1$, a piece of the boundary $\Gamma_{A_1B_1}$ (which has, say, positive direction), and a segment $B_1B$.
Analogously to \S5, replace $\Gamma_{AB}$ by a ``negative'' arc of the boundary between $A$ and $B$. Here we have to consider the six cases of \S5 again; the cases depend on the order of the points $A$, $A_1$, $B$, $B_1$ in a positive circuit. For example, in case $1^{\circ}$ (order $AA_1B_1B$), we use Claim 1 for the new curve and get the inequality analogous to (4), $$\frac {L(\Gamma)}{V(\Gamma)}=\frac {L(\Gamma_1)}{ 2\pi}\ge \frac {L(\Gamma)+s'-s}{V(\Gamma)-2(\angle A_1AB+\angle B_1BA)}.$$ From here we deduce, as in \S5, that the rays $AA_1$ and $BB_1$ meet at a point $C$, and $$(\pi-\angle A_1AB-\angle B_1BA)\cdot(CA+CB)\ge (\pi+\angle A_1AB+\angle B_1BA)\cdot AB,$$ which contradicts Lemma 4. Analogous contradictions can be obtained in the remaining cases. This shows that the curve $\Gamma$ cannot have inner segments, and hence it is a circuit of the boundary. Recalling the circuits removed earlier, we see that initially $\Gamma$ was a multiple circuit of the boundary. Claim 2 is proved. $\square$ \section*{7. The surfaces of constant curvature} Now we prove a statement generalizing the DNA inequality to the spherical case. Let $\Gamma$ be a closed curve lying in some hemisphere (here and in what follows, of unit radius). Let the variation of the right rotation $V(\Gamma)$ be finite. For the definitions we refer to [3]. Note that if $\Gamma:\ A_1A_2\dots A_nA_1$ is a closed polygonal line then $V(\Gamma)=\sum_{i=1}^n (\pi-\angle A_{i-1}A_iA_{i+1})$ (the enumeration of indices is cyclic). Define the {\it mean absolute geodesic curvature} $T(\Gamma)$ of a closed curve on a sphere as $T(\Gamma)=V(\Gamma)/L(\Gamma)$. { \bf Theorem ${\cal S}$.} Let $\Gamma$ be a closed curve in a hemisphere, and let the variation of its right rotation be finite. Let $\Gamma_1$ be the boundary of its convex hull. Then $T(\Gamma)\ge T(\Gamma_1)=(2\pi-S)/L(\Gamma_1)$, where $S$ is the area of the convex hull. The plan of the proof of Theorem ${\cal S}$ is the same as in the planar case.
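Before moving to the sphere, the planar inequality $T(\Gamma)\ge T(\Gamma_1)$ of Theorem $\cal P$ can be sanity-checked numerically for closed polygonal lines, where the full rotation is the sum of the absolute turning angles at the vertices and the rotation of a convex circuit is $2\pi$. A small self-contained sketch (the sample polyline is an arbitrary choice):

```python
import math

def turn(p, q, r):
    """Absolute turning angle at vertex q of the path p -> q -> r."""
    ax, ay = q[0] - p[0], q[1] - p[1]
    bx, by = r[0] - q[0], r[1] - q[1]
    return abs(math.atan2(ax * by - ay * bx, ax * bx + ay * by))

def rotation_and_length(poly):
    n = len(poly)
    V = sum(turn(poly[i - 1], poly[i], poly[(i + 1) % n]) for i in range(n))
    L = sum(math.dist(poly[i - 1], poly[i]) for i in range(n))
    return V, L

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(pts))
    def half(ps):
        h = []
        for p in ps:
            while len(h) >= 2 and ((h[-1][0]-h[-2][0])*(p[1]-h[-2][1]) -
                                   (h[-1][1]-h[-2][1])*(p[0]-h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

# a non-convex closed polyline and the boundary of its convex hull
gamma = [(0, 0), (4, 0), (2, 1), (4, 3), (0, 3), (2, 2)]
hull = convex_hull(gamma)
V, L = rotation_and_length(gamma)
Vh, Lh = rotation_and_length(hull)
assert abs(Vh - 2 * math.pi) < 1e-9      # convex circuit: full rotation 2*pi
assert V / L >= Vh / Lh                  # DNA inequality T(gamma) >= T(hull)
```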
First of all, we formulate the corresponding statement for polygonal lines. { \bf Theorem ${\cal S}'$.} Let $\Gamma$ be a closed polygonal line in a hemisphere, and let $\Gamma_1$ be the boundary of its convex hull. If $\Gamma$ is not a multiple circuit of $\Gamma_1$ then $T(\Gamma)> T(\Gamma_1)$. Before we pass to the case of quadrilaterals, we prove the following claim, elaborating (in a particular case) the theorem of A.D.~Aleksandrov on the comparison of angles. {\bf Lemma $1s$.} Let $ABC$ be a non-degenerate triangle on a sphere. We denote its sides by $BC=a$, $CA=b$, $AB=c$, and its angles by $\alpha$, $\beta$, $\gamma$ respectively. We denote by $\alpha'$, $\beta'$, $\gamma'$ the angles of a triangle with sides $a$, $b$, $c$ in the plane. Then $$\alpha-\alpha'< (\beta-\beta')+(\gamma-\gamma'). \eqno(1_s)$$ {\bf Proof.} We denote $a+b+c=4S$, $S-a/2=X$, $S-b/2=Y$, $S-c/2=Z$; ${\cal E}=\alpha+\beta+\gamma-\pi$ is the area of the triangle $ABC$. The inequality $(1_s)$ is equivalent to the inequality $\alpha'> \alpha-{\cal E}/2$ or, in other words, $$\tan \frac{\alpha'}2 > \tan(\alpha/2-{\cal E}/4).\eqno(2_s)$$ Substituting here the formulas $$\gathered \tan\frac {\alpha} 2=\sqrt\frac {\sin 2Y \sin 2Z} {\sin 2X \sin 2S},\quad \tan\frac {\cal E}4= \sqrt {\tan S\cdot \tan X\cdot\tan Y\cdot\tan Z },\\ \tan \frac {\alpha'}2= \sqrt {\frac{YZ}{X(X+Y+Z)}} \endgathered$$ (the first formula is [3, (28)], the second is [4], the third is [5, (20)]), we convert ($2_s$) to the inequality $$ \frac {\sin Z\sin Y} {\sin X\sin S}\cdot \frac {\cos Y\cos Z-\sin X\sin S}{\cos X\cos S+\sin Y\sin Z}< \sqrt{\frac {YZ} {X(X+Y+Z)}}\cdot\sqrt {\frac {\sin(2Z)\sin(2Y)} {\sin(2X)\sin(2S)}}.\eqno (3_s) $$ Since $S=X+Y+Z$, we have $\cos(S-X)=\cos(Y+Z)$, hence the second factor on the left-hand side of ($3_s$) equals 1. Let us denote $f(x)=x\cot x$; then ($3_s$) reduces to $$f(Y)f(Z)> f(X)f(X+Y+Z).
\eqno(4_s)$$ Since $f'(x)=\frac {\sin (2x)-2x}{2\sin^2 x}<0$ for $0<x<\frac{\pi}2$, the function $f$ strictly decreases on $[0,\frac{\pi}2]$. Since all the arguments in ($4_s$) lie in $[0,\frac{\pi}2]$ (we recall that $X+Y+Z=(a+b+c)/4\le \pi/2$), we may suppose that $X=0$ and prove the inequality $$f(Y)f(Z)> f(0)f(Y+Z).\eqno(5_s)$$ We have $(\ln f)''(x)= \frac {4}{\sin^2(2x)}\left(\cos(2x)- \frac {\sin^2(2x)}{4x^2}\right)$. We omit an elementary proof of the inequality $\cos t< (\frac {\sin t}t)^2$ for $t=2x\in\,]0,\pi[$. This shows that $\ln f$ is strictly concave on $[0,\frac{\pi}2]$, and ($5_s$) follows. $\square$ Now we are ready to prove the analogues of Lemmas 4 and 5 on the sphere. {\bf Lemma $2s$}. Let $ABC$ be a non-degenerate triangle on a sphere. Then, in the notation of Lemma $1s$, $$ \frac {a+c}{2\pi-\beta}< \frac {a+b+c}{2\pi-{\cal E}}. $$ {\bf Proof.} The statement follows from the chain of inequalities $$\frac {a+c}{a+b+c}<\frac {2\pi-\beta'}{2\pi}< \frac {2\pi-\beta+{\cal E}/2}{2\pi}\le \frac {2\pi-\beta}{2\pi-{\cal E}}$$ (the first inequality is Lemma 4, the second is Lemma $1s$, the third reduces to the obvious $\beta\le \pi+{\cal E}/2$). $\square$ {\bf Lemma $3s$}. Consider a convex spherical quadrilateral $ABCD$ in a hemisphere and denote by $O$ the intersection point of its diagonals. Denote $AB=a$, $BC=b$, $CD=c$, $DA=d$, $BD=m$, $AC=n$, and $\angle AOB=\varphi$. We denote by ${\cal E}_1$, ${\cal E}_2$, ${\cal E}_3$, ${\cal E}_4$ the areas of the triangles $OAB$, $OBC$, $OCD$, $ODA$, respectively, and put ${\cal E}={\cal E}_1+{\cal E}_2+{\cal E}_3+{\cal E}_4$. Then $$ \frac {a+c+m+n}{2\pi-({\cal E}_1+{\cal E}_3)+2\varphi}< \frac {a+b+c+d}{2\pi-{\cal E}}.\eqno(6_s) $$ {\bf Proof.} We denote by $\varphi'$ the angle of a planar triangle with sides $a$, $AO$, $BO$ opposite to the side $a$.
Then $$\frac a{AO+BO}>\frac {\varphi'}{\pi}> \frac {\varphi-{\cal E}_1/2}{\pi} >\frac {\varphi-({\cal E}_1+{\cal E}_3)/2}{\pi}, \eqno (7_s)$$ (the first inequality follows from the proof of Lemma 4, the second from Lemma $1s$). Analogously $$\frac c{CO+DO}>\frac {\varphi-({\cal E}_1+{\cal E}_3)/2}{\pi}. \eqno (8_s)$$ Estimates ($7_s$) and ($8_s$) imply that $$x:=\frac {a+c}{m+n}>\frac {\varphi-({\cal E}_1+{\cal E}_3)/2}{\pi}. $$ Analogously, $$ y:=\frac {b+d}{m+n}>\frac {\pi-\varphi-({\cal E}_2+{\cal E}_4)/2}{\pi}. $$ We substitute the lower bounds for $x$ and $y$ into the identity $$ z:=\frac {a+c+m+n}{a+c+b+d}=\frac {x+1}{x+y}=1+\frac {1-y}{x+y}. $$ Since $y<1$, we get an upper bound for $z$, which gives $(6_s)$. $\square$ Now we briefly explain the plan of the proof of Theorem ${\cal S}'$. The arguments used in cases {\bf a)} and {\bf c)} of Section 4 can be transferred with minor changes (the natural changes arising from the spherical excess $\cal E$ work in our favor). In case {\bf b)} we choose the index $i$ so that the angle vertical to $\angle A_{i-1}A_{i+1}A_i$ has the least area in the hemisphere. The angle vertical to $\angle A_iA_{i+2}A_{i+1}$ is contained in the angle vertical to $\angle A_{i-1}A_{i+1}A_i$ (in the hemisphere), since there are no conjugate points in a hemisphere. This leads to a contradiction. The arguments of Section 5 are changed in the same way as those of case {\bf c)} of Section 4. { \bf Deduction of Theorem ${\cal S}$ from Theorem ${\cal S}'$.} First of all we note that the curve $\Gamma$ can be split into a finite number of parts without self-intersections. Indeed, if a part of $\Gamma$ with self-intersections has sufficiently small length then its rotation is not less than $\pi/2$. Let $0=t_1<t_2<\dots<t_n<t_{n+1}=L(\Gamma)$ be the nodes of this partition (we may suppose that $\Gamma$ is naturally parametrized with the starting point at a node). Set $A_i:=\Gamma(t_i)$, $\Gamma_i:=\Gamma_{[t_i,t_{i+1}]}$.
By Theorem 1 of [4], for $i=1,\dots,n$ there exists a sequence of polygonal lines $g_k^i:\ [t_i,t_{i+1}]\rightarrow S^2$ $(k=1,\,2,\dots)$ such that $g_k^i(t_i)=A_i$, $g_k^i(t_{i+1})=A_{i+1}$, $g_k^i$ converge to $\Gamma_i$ from the right, and $\limsup V(g_k^i)\le V(\Gamma_i)$. Moreover, the directions of $g_k^i$ at the points $A_i$ and $A_{i+1}$ converge to the directions (right and left, respectively) of the curve $\Gamma$ at these points. We denote by $g_k$ the concatenation of the polygonal lines $g_k^i$ over $i=1,\dots,n$, and by $G_k$ the boundary of the convex hull of $g_k$. Then $\limsup V(g_k)\le V(\Gamma)$. Further, $L(\Gamma)\le \liminf L(g_k)$. Finally, since $g_k\rightarrow \Gamma$ uniformly, we have $G_k\rightarrow \Gamma_1$, whence $L(G_k)\rightarrow L(\Gamma_1)$ and $T(G_k)\rightarrow T(\Gamma_1)$. By Theorem ${\cal S}'$, $$T(\Gamma)\ge \limsup T(g_k)\ge \limsup T(G_k)= T(\Gamma_1).$$ This completes the proof. $\square$ Unfortunately, we cannot transfer Claim 2 of Theorem $\cal P$ to the sphere. Note that in the Lobachevskii plane the DNA inequality is not true. To show this, we consider in the Lobachevskii plane a triangle $ABC$ and the polygonal line $\Gamma=ABCC_1B_1BCA$ with some points $B_1\in AB$, $C_1\in AC$. Then $\Gamma_1=ABCA$, and $$\frac {V(\Gamma)}{V(\Gamma_1)}=2-\frac {S(A_1B_1C_1)}{V(\Gamma_1)}<2-\frac {S(A_1B_1C_1)}{3\pi}.$$ Moving the vertex $B$ sufficiently far along the ray $AB_1$ we make the quotient $L(\Gamma)/L(\Gamma_1)$ arbitrarily close to $2$, which gives $T(\Gamma)<T(\Gamma_1)$. We are grateful to V.A.~Zalgaller, who drew our attention to the references [3], [4]. We also thank S.V.~Duzhin for his attention. This work was supported by grant VNSh-2261.2003.1 (the first author) and by grants VNSh-2251.2003.1 and RFFR 02-01-00093 (the second author). {\centerline {References} } 1. S. Tabachnikov. The tale of a geometric inequality // MASS colloquium lecture, 2001. 2. J. Lagarias, T. Richardson.
Convexity and the average curvature of plane curves // Geom. Dedicata, 67 (1997), 1--38. 3. A.D. Aleksandrov. Intrinsic Geometry of Convex Surfaces. OGIZ, Moscow -- Leningrad, 1948 (Russian). English transl.: A.D. Alexandrov. Selected works: Intrinsic Geometry of Convex Surfaces. CRC, 2005. 4. V. A. Zalgaller. On curves with curvature of bounded variation on a convex surface // Mat. Sbornik, 26 (1950), 205--214 (Russian). 5. http://mathworld.wolfram.com/SphericalTrigonometry.html 6. http://mathworld.wolfram.com/SphericalExcess.html 7. http://mathworld.wolfram.com/Triangle.html \end{document}
\begin{document} \title {A short note on Godbersen's Conjecture} \date{} \author{S. Artstein-Avidan\thanks{Supported by ISF}} \maketitle A convex body $K \subset {\mathbb R}^n$ is a compact convex set with non-empty interior. For compact convex sets $K_1, \ldots,K_m \subset {\mathbb R}^n$, and non-negative real numbers $\lambda_1, \ldots,\lambda_m$, a classical result of Minkowski states that the volume of $\sum \lambda_i K_i$ is a homogeneous polynomial of degree $n$ in $\lambda_i$, \begin{equation}\label{Eq_Vol-Def} {\rm Vol} \left(\sum_{i=1}^m \lambda_i K_i\right) = \sum_{i_1,\dots,i_n=1}^m \lambda_{i_1}\cdots\lambda_{i_n} V(K_{i_1},\dots,K_{i_n}). \end{equation} The coefficient $V(K_{i_1},\dots,K_{i_n})$, which depends solely on $K_{i_1}, \ldots, K_{i_n}$, is called the mixed volume of $K_{i_1}, \ldots, K_{i_n}$. The mixed volume is a non-negative, translation invariant function, monotone with respect to set inclusion, invariant under permutations of its arguments, and positively homogeneous in each argument. For $K$ and $L$ compact and convex, we denote by $V(K[j], L[n-j])$ the mixed volume of $j$ copies of $K$ and $(n-j)$ copies of $L$. One has $V(K[n]) = {\rm Vol}(K)$. By Alexandrov's inequality, $V(K[j],-K[n-j])\ge {\rm Vol}(K)$, with equality if and only if $K= x_0-K$ for some $x_0$, that is, some translate of $K$ is centrally symmetric. For further information on mixed volumes and their properties, see Section 5.1 of \cite{Schneider-book}. Recently, in the paper \cite{ArtsteinEinhornFlorentinOstrover} we have shown that for any $\lambda \in [0,1]$ and for any convex body $K$ one has \[ \lambda^j (1-\lambda)^{n-j} V(K[j], -K[n-j])\le {{\rm Vol}(K)}. \] In particular, picking $\lambda = \frac{j}{n}$, we get that \[V(K[j], -K[n-j])\le \frac{n^n}{j^j (n-j)^{n-j}}{\rm Vol}(K)\sim \binom{n}{j} \sqrt{2\pi \frac{j(n-j)}{n}}.
\] The conjectured tight upper bound $\binom{n}{j}$, which is what one gets for an affine image of the simplex, was suggested in 1938 by Godbersen \cite{Godbersen} (and independently by Hajnal and Makai Jr. \cite{Makai}). \begin{conj}[Godbersen's conjecture]\label{conj:god} For any convex body $K\subset {\mathbb R}^n$ and any $1\le j\le n-1$, \begin{equation}\label{eq:Godbersen-conj} V(K[j], -K[n-j])\le \binom{n}{j} {\rm Vol}(K),\end{equation} with equality attained only for simplices. \end{conj} We mention that Godbersen \cite{Godbersen} proved the conjecture for certain classes of convex bodies, in particular for those of constant width. We also mention that the conjecture holds for $j=1,n-1$ by the inclusion $K\subset n(-K)$ for bodies $K$ with center of mass at the origin, an inclusion which is tight for the simplex; see Schneider \cite{Schneider-simplex}. The bound from \cite{ArtsteinEinhornFlorentinOstrover} quoted above seems to be the currently smallest known upper bound for general $j$. In this short note we improve the aforementioned inequality and show \begin{theorem}\label{theorem-on-sum} For any convex body $K\subset {\mathbb R}^n$ and for any $\lambda \in [0,1]$ one has \[\sum_{j=0}^n \lambda^j (1-\lambda)^{n-j} V(K[j], -K[n-j])\le {{\rm Vol}(K)}. \] \end{theorem} The proof of the inequality goes via the consideration of two bodies, $C\subset {\mathbb R}^{n+1}$ and $T\subset {\mathbb R}^{2n+1}$. Both were used in the paper of Rogers and Shephard \cite{RS}. We shall show, by imitating the methods of \cite{RS}, that \begin{lemma}\label{lem:vol-of-C} Given a convex body $K\subset {\mathbb R}^n$ define $C\subset {\mathbb R}\times {\mathbb R}^n$ by \[ C = {\rm conv} (\{0\}\times(1-\lambda) K \cup \{1\}\times -\lambda K).
\] Then we have \[ {\rm Vol}(C)\le \frac{{\rm Vol}(K)}{n+1}.\] \end{lemma} With this lemma in hand, we may prove our main claim by a simple computation. \begin{proof}[Proof of Theorem \ref{theorem-on-sum}] \begin{eqnarray*} {\rm Vol}(C) &=& \int_{0}^{1} {\rm Vol}((1-\eta)(1-\lambda)K - \eta \lambda K) d\eta \\ &=& \sum_{j=0}^n \binom{n}{j}(1-\lambda)^{n-j}\lambda^j V(K[j],-K[n-j]) \int_{0}^{1}(1-\eta)^{n-j}\eta^j d\eta\\ & = & \frac{1}{n+1}\sum_{j=0}^n(1-\lambda)^{n-j}\lambda^j V(K[j],-K[n-j]). \end{eqnarray*} Thus, using Lemma \ref{lem:vol-of-C}, we have that \[ \sum_{j=0}^n(1-\lambda)^{n-j}\lambda^j V(K[j],-K[n-j])\le {\rm Vol}(K).\] \end{proof} Before turning to the proof of Lemma \ref{lem:vol-of-C}, let us state a few consequences of Theorem \ref{theorem-on-sum}. First, integration with respect to the parameter $\lambda$ yields \begin{cor}\label{cor-AVERGAE-UNIFORM} For any convex body $K\subset {\mathbb R}^n$ \[ \frac{1}{n+1} \sum_{j=0}^n \frac{V(K[j],-K[n-j])}{\binom{n}{j}}\le {\rm Vol}(K),\] which can be rewritten as \[ \frac{1}{n-1} \sum_{j=1}^{n-1} \frac{V(K[j],-K[n-j])}{\binom{n}{j}}\le {\rm Vol}(K).\] \end{cor} So, on average Godbersen's conjecture is true. Of course, the fact that it holds on average was known before, but with a different kind of average. Indeed, the Rogers-Shephard inequality for the difference body, which is \[ {\rm Vol}(K-K)\le \binom{2n}{n} {\rm Vol}(K)\] (see for example \cite{Schneider-book} or \cite{AGMbook}), can be rewritten as \[ \frac{1}{\binom{2n}{n}} \sum_{j=0}^n {\binom{n}{j}} V(K[j], -K[n-j]) \le {\rm Vol}(K).\] However, our new average, in Corollary \ref{cor-AVERGAE-UNIFORM}, is a uniform one, so we know for instance that the median of the sequence $( {\binom{n}{j}}^{-1}{V(K[j], -K[n-j])})_{j=1}^{n-1}$ is less than $2\,{\rm Vol}(K)$, so that for at least one half of the indices $j=1,2,\ldots, n-1$ the mixed volumes satisfy Godbersen's conjecture up to a factor of $2$.
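In the plane the theorem is tight for a simplex for every $\lambda$: for $n=2$ one has $V(K[0],-K[2])=V(K[2],-K[0])={\rm Vol}(K)$ and, for a triangle, $V(K,-K)=2\,{\rm Vol}(K)$, so the sum collapses to ${\rm Vol}(K)$. A numerical sketch of this (pure-Python convex hull and shoelace area; the particular triangle is an arbitrary choice):

```python
from itertools import product

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(pts))
    def half(ps):
        h = []
        for p in ps:
            while len(h) >= 2 and ((h[-1][0]-h[-2][0])*(p[1]-h[-2][1]) -
                                   (h[-1][1]-h[-2][1])*(p[0]-h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lo, up = half(pts), half(list(reversed(pts)))
    return lo[:-1] + up[:-1]

def area(poly):
    """Shoelace formula for a simple polygon given by its vertices in order."""
    return abs(sum(p[0]*q[1] - q[0]*p[1]
                   for p, q in zip(poly, poly[1:] + poly[:1]))) / 2

T = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]        # a planar simplex, area 1/2
S = area(convex_hull(T))

def minkowski_area(lam):
    """area((1-lam)T + lam(-T)); for convex polygons the Minkowski sum
    is the convex hull of all pairwise vertex sums."""
    pts = [((1-lam)*p[0] - lam*q[0], (1-lam)*p[1] - lam*q[1])
           for p, q in product(T, T)]
    return area(convex_hull(pts))

# extract the mixed area V(T,-T) from A(1/2) = S/4 + V/2 + S/4
V = 2*minkowski_area(0.5) - S
assert abs(V - 2*S) < 1e-9                       # Godbersen bound binom(2,1)S, tight

# theorem: sum_j lam^j (1-lam)^(n-j) V_j <= Vol(K); equality for the simplex
for lam in (0.1, 0.3, 0.7):
    total = (1-lam)**2 * S + lam*(1-lam) * V + lam**2 * S
    assert abs(total - S) < 1e-9
```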
More generally, applying Markov's inequality for the uniform measure on $\{1, \ldots, n-1\}$ we get \begin{cor} Let $K\subset {\mathbb R}^n$ be a convex body with ${\rm Vol}(K)=1$. For at least $k$ of the indices $j=1,2,\ldots n-1$ it holds that \[ V(K[j],-K[n-j]) \le \frac{n-1}{n-k} \binom{n}{j}.\] \end{cor} We mention that the inequality of Theorem \ref{theorem-on-sum} can be reformulated, for $K$ with ${\rm Vol}(K)=1$, say, as \[\sum_{j=1}^{n-1} \lambda^{j-1} (1-\lambda)^{n-j-1} [V(K[j], -K[n-j])-\binom{n}{j}] \le 0,\] so that by taking $\lambda=0,1$ we see, once again, that $V(K, -K[n-1]) = V(K[n-1],-K) \le n$. A key ingredient in the proof of Lemma \ref{lem:vol-of-C} is the Rogers-Shephard inequality for sections and projections from \cite{RS}, which states that \begin{lem}[Rogers and Shephard]\label{lem-RSsec-proj} Let $T\subset {\mathbb R}^m$ be a convex body, and let $E\subset {\mathbb R}^m$ be a subspace of dimension $j$. Then \[ {\rm Vol}(P_{E^{\perp}}T)\,{\rm Vol}(T\cap E) \le \binom{m}{j}{\rm Vol}(T), \] where $P_{E^{\perp}} $ denotes the projection operator onto $E^{\perp}$. \end{lem} We turn to the proof of Lemma \ref{lem:vol-of-C} regarding the volume of $C$. \begin{proof}[Proof of Lemma \ref{lem:vol-of-C}] We borrow directly the method of \cite{RS}. Let $K_1,K_2\subset {\mathbb R}^n$ be convex bodies; we shall consider $T\subset {\mathbb R}^{2n+1}= {\mathbb R} \times {\mathbb R}^n \times {\mathbb R}^n$ defined by \[ T = {\rm conv} (\{(0,0,y): y\in K_2\} \cup \{ (1,x,-x): x\in K_1\}). \] Written out in coordinates this is simply \begin{eqnarray*} T &=& \{ (\theta, \theta x , -\theta x + (1-\theta)y ): x\in K_1, y\in K_2\}\\ & = & \{ (\theta, w, z): w\in \theta K_1, z+w\in (1-\theta)K_2\}. \end{eqnarray*} The volume of $T$ is thus, by simple integration, equal to \[ {\rm Vol}(T) = {\rm Vol}(K_1) {\rm Vol}(K_2) \int_0^1 \theta^n (1-\theta)^n d\theta = \frac{n!n!}{(2n+1)!}{\rm Vol}(K_1) {\rm Vol}(K_2).
\] We now take the section of $T$ by the $n$-dimensional affine subspace \[ E = \{(\theta_0, x, 0): x\in {\mathbb R}^n\}\] and project it onto the complement $E^\perp$. We get for the section: \[ T\cap E = \{ (\theta_0, x, 0): x\in \theta_0 K_1 \cap (1-\theta_0)K_2\}\] and so ${\rm Vol}_n(T\cap E) = {\rm Vol}(\theta_0 K_1 \cap (1-\theta_0) K_2 )$. As for the projection, we get \begin{eqnarray*} P_{E^{\perp}}T &=& \{ (\theta, 0, y): \exists x {\rm ~with~} (\theta, x, y)\in T \}\\ & = & \{ (\theta, 0, y): \theta K_1 \cap ((1-\theta)K_2-y) \neq \emptyset \}\\ & = & \{ (\theta, 0, y): y\in (1-\theta)K_2-\theta K_1 \}.\end{eqnarray*} Thus ${\rm Vol}_{n+1}(P_{E^\perp}T)= {\rm Vol}(\{(\theta, y): y\in (1-\theta)K_2-\theta K_1\})$, which is precisely a set of the type we considered before in ${\mathbb R}^{n+1}$. In fact, putting the set $\lambda K$ in place of $K_1$ and the set $(1-\lambda)K$ in place of $K_2$, we get that $P_{E^\perp}T = C$. Staying with our original $K_1$ and $K_2$, and using the Rogers-Shephard bound of Lemma \ref{lem-RSsec-proj} for sections and projections, we see that \[ {\rm Vol}(P_{E^{\perp}}T)\,{\rm Vol}(T\cap E) \le \binom{2n+1}{n}{\rm Vol}(T), \] which translates to the following inequality: \[ {\rm Vol} ({\rm conv} (\{0\}\times K_2 \cup \{1\}\times (-K_1))) \le \frac{1}{n+1} \frac{{\rm Vol}(K_1){\rm Vol}(K_2)}{{\rm Vol}(\theta_0 K_1 \cap (1-\theta_0)K_2)}. \] We mention that this exact same construction was performed and analysed by Rogers and Shephard for the special choice $\theta_0 = 1/2$, which is optimal if $K_1 = K_2$.
For our special choice of $K_2 = (1-\lambda)K$ and $K_1 = \lambda K$ we pick $\theta_0 = 1-\lambda$, so that the intersection in question is simply $\lambda (1-\lambda) K$, which cancels out when we compute the volumes in the numerator. We end up with \[ {\rm Vol} ({\rm conv} (\{0\}\times (1-\lambda)K \cup \{1\}\times (-\lambda K))) \le \frac{1}{n+1} {\rm Vol}(K), \] which was the statement of the lemma. \end{proof} Our next assertion is connected with the following conjecture regarding the unbalanced difference body \[ D_\lambda K = (1-\lambda)K + \lambda(-K).\] \begin{conj}\label{unbalancedRS} For any $\lambda\in (0,1)$ one has \[ \frac{{\rm Vol}(D_\lambda K)}{{\rm Vol}(K)} \le \frac{{\rm Vol}(D_\lambda \Delta )}{{\rm Vol}(\Delta )} \] where $\Delta$ is an $n$-dimensional simplex. \end{conj} Reformulating, Conjecture \ref{unbalancedRS} asks whether the following inequality holds: \begin{eqnarray} \sum_{j=0}^n \binom{n}{j} \lambda^j (1-\lambda)^{n-j} V_j \le \sum_{j=0}^n \binom{n}{j}^2 \lambda^j (1-\lambda)^{n-j}, \end{eqnarray} where we have denoted $V_j= V(K [j], -K[n-j])/{\rm Vol}(K)$. Clearly Conjecture \ref{unbalancedRS} follows from Godbersen's conjecture. Conjecture \ref{unbalancedRS} holds for $\lambda = 1/2$ by the Rogers-Shephard difference body inequality, it holds for $\lambda = 0,1$ as then both sides are $1$, and it holds on average over $\lambda$ by Lemma \ref{lem:vol-of-C} (one should apply Lemma \ref{lem:vol-of-C} to the body $2K$ with $\lambda_0 = 1/2$). We rewrite two of the inequalities that we know for the sequence $V_j$: \begin{eqnarray} \sum_{j=0}^n \lambda^j (1-\lambda)^{n-j} V_j \le \sum_{j=0}^n \binom{n}{j} \lambda^j (1-\lambda)^{n-j}. \end{eqnarray}\begin{eqnarray} \sum_{j=0}^n \binom{n}{j} V_j \le \sum_{j=0}^n \binom{n}{j}^2 . \end{eqnarray} In all inequalities we may disregard the $0^{th}$ and $n^{th}$ terms, as they are equal on both sides.
We may take advantage of the fact that the $j^{th}$ and the $(n-j)^{th}$ terms are the same in each inequality, and sum only up to $n/2$ (but one should be careful: if $n$ is odd then each term appears twice, while if $n$ is even then the $(n/2)^{th}$ term appears only once). \begin{theorem} For $n=4,5$ Conjecture \ref{unbalancedRS} holds. \end{theorem} \begin{proof} For $n=4$ we have that $V_0 = V_4 = 1$ and $V_1 = V_3$. We thus know that \[ 8V_1+6V_2 \le 32+36\] and that for any $\lambda \in [0,1]$ we have \[ (\lambda^3(1-\lambda)+\lambda(1-\lambda)^3)V_1 + \lambda^2 (1-\lambda)^2V_2 \le 4(\lambda^3(1-\lambda)+\lambda(1-\lambda)^3) + 6\lambda^2 (1-\lambda)^2. \] We need to prove that \[ 4(\lambda^3(1-\lambda)+\lambda(1-\lambda)^3)V_1 + 6\lambda^2 (1-\lambda)^2V_2 \le 16(\lambda^3(1-\lambda)+\lambda(1-\lambda)^3) + 36\lambda^2 (1-\lambda)^2. \] If we find $a,b\ge 0$ such that \[ (\lambda^3(1-\lambda)+\lambda(1-\lambda)^3)a + 8b = 4(\lambda^3(1-\lambda)+\lambda(1-\lambda)^3)\] and \[ \lambda^2 (1-\lambda)^2a + 6b = 6\lambda^2 (1-\lambda)^2,\] then by summing the two inequalities with these coefficients we shall get the needed inequality. We thus should check whether the following system of equations has a non-negative solution in $a,b$: \[ \left(\begin{array}{ll} \lambda^3(1-\lambda)+\lambda(1-\lambda)^3 & 8 \\ \lambda^2 (1-\lambda)^2 & 6 \\ \end{array} \right)\left(\begin{array}{l} a \\ b\\ \end{array}\right) = \left(\begin{array}{l} 4(\lambda^3(1-\lambda)+\lambda(1-\lambda)^3) \\ 6\lambda^2 (1-\lambda)^2 \\ \end{array}\right) .
\] The determinant of the matrix of coefficients is positive: \begin{eqnarray*} 6(\lambda^3(1-\lambda)+\lambda(1-\lambda)^3)-8\lambda^2 (1-\lambda)^2 = \\ 2\lambda (1-\lambda) [3(\lambda^2 + (1-\lambda)^2)-4\lambda(1-\lambda)] = \\ 2\lambda (1-\lambda) [3(1-2\lambda)^2 +2 \lambda(1-\lambda)] \ge 0. \end{eqnarray*} We invert it to get, up to a positive multiple, \begin{eqnarray*} \left(\begin{array}{l} a \\ b\\ \end{array}\right) & =& c \left(\begin{array}{ll} 6 & -8 \\ -\lambda^2 (1-\lambda)^2 & \lambda^3(1-\lambda)+\lambda(1-\lambda)^3 \\ \end{array} \right) \left(\begin{array}{l} 4(\lambda^3(1-\lambda)+\lambda(1-\lambda)^3) \\ 6\lambda^2 (1-\lambda)^2 \\ \end{array}\right) \\ & = & c \left(\begin{array}{l} 24(\lambda^3(1-\lambda)+\lambda(1-\lambda)^3) - 48\lambda^2 (1-\lambda)^2 \\ 2(\lambda^3(1-\lambda)+\lambda(1-\lambda)^3)\lambda^2 (1-\lambda)^2 \\ \end{array}\right) \\ & = & c \left(\begin{array}{l} 24\lambda (1-\lambda) (1-2\lambda)^2 \\ 2(\lambda^3(1-\lambda)+\lambda(1-\lambda)^3)\lambda^2 (1-\lambda)^2 \\ \end{array}\right) . \end{eqnarray*} We see that indeed the resulting $a,b$ are non-negative. For $n=5$ we do the same, namely we have $V_0 = V_5 = 1$, $V_1= V_4$ and $V_2 = V_3$, so we again have just two unknowns, for which we know that \[ 5V_1+10 V_2 \le 25+100 \] and that for any $\lambda \in [0,1]$ we have \[ (\lambda^4(1-\lambda)+\lambda(1-\lambda)^4)V_1 + (\lambda^2 (1-\lambda)^3+ \lambda^3(1-\lambda)^2)V_2 \le 5(\lambda^4(1-\lambda)+\lambda(1-\lambda)^4) + 10(\lambda^2 (1-\lambda)^3+ \lambda^3(1-\lambda)^2).
\] We need to prove that \[ 5(\lambda^4(1-\lambda)+\lambda(1-\lambda)^4)V_1 + 10 (\lambda^2 (1-\lambda)^3 + \lambda^3 (1-\lambda)^2)V_2 \le 25(\lambda^4(1-\lambda)+\lambda(1-\lambda)^4) + 100 (\lambda^2 (1-\lambda)^3 + \lambda^3 (1-\lambda)^2). \] We are thus looking for a non-negative solution of the equation \[ \left(\begin{array}{ll} (\lambda^4(1-\lambda)+\lambda(1-\lambda)^4) & 5 \\ (\lambda^2 (1-\lambda)^3 + \lambda^3 (1-\lambda)^2) & 10 \\ \end{array} \right)\left(\begin{array}{l} a \\ b\\ \end{array}\right) = \left(\begin{array}{l} 5 (\lambda^4(1-\lambda)+\lambda(1-\lambda)^4) \\ 10(\lambda^2 (1-\lambda)^3 + \lambda^3 (1-\lambda)^2) \\ \end{array}\right) . \] The determinant is positive, since the left-hand column is decreasing and the right-hand column is increasing. Up to a positive constant $c$ we thus have \[ \left(\begin{array}{l} a \\ b\\ \end{array}\right) = c \left(\begin{array}{ll} 10 & -5 \\ - (\lambda^2 (1-\lambda)^3 + \lambda^3 (1-\lambda)^2) & (\lambda^4(1-\lambda)+\lambda(1-\lambda)^4) \\ \end{array} \right) \left(\begin{array}{l} 5 (\lambda^4(1-\lambda)+\lambda(1-\lambda)^4) \\ 10(\lambda^2 (1-\lambda)^3 + \lambda^3 (1-\lambda)^2) \\ \end{array}\right) . \] Multiplying, we see that the solution is non-negative. (We use that $\lambda^j (1-\lambda)^{n-j} + \lambda^{n-j} (1-\lambda)^j$ is decreasing in $j\in \{0,1,\ldots, n/2\}$, an easy fact to check.) \end{proof} We end this note with a simple geometric proof of the following inequality from \cite{ArtsteinEinhornFlorentinOstrover} (which reappeared independently in \cite{AGJV}). \begin{theorem} \label{thm:strange} Let $K,L \subset {\mathbb R}^n$ be convex bodies which contain the origin.
Then $$ {\rm Vol} ({\rm conv}(K \cup - L) ) \, {\rm Vol} ((K^\circ + L^\circ)^\circ ) \le {\rm Vol}(K) \, {\rm Vol}(L).$$ \end{theorem} We remark that this inequality can be thought of as a dual to the Milman-Pajor inequality \cite{MilmanPajor}, stating that when $K$ and $L$ have center of mass at the origin one has \[ {\rm Vol} ({\rm conv}(K \cap - L) ) \, {\rm Vol} (K + L) \ge {\rm Vol}(K) \, {\rm Vol}(L).\] \begin{proof}[Simple geometric proof of Theorem \ref{thm:strange}] Consider two convex bodies $K$ and $L$ in ${\mathbb R}^n$ and build the body in ${\mathbb R}^{2n}$ given by \[ C = {\rm conv} (K\times \{0\}\cup \{0\}\times L).\] The volume of $C$ is simply \[ {\rm Vol}(C) = {\rm Vol}(K) {\rm Vol}(L) \frac{1}{\binom{2n}{n}}. \] Let us look at the two orthogonal subspaces of ${\mathbb R}^{2n}$ of dimension $n$ given by $E = \{ (x,x): x\in {\mathbb R}^n\}$ and $E^{\perp} = \{ (y,-y): y \in {\mathbb R}^{n}\}$. First we compute $C\cap E$: \[ C\cap E = \{ (x,x): x = \lambda y,\ x=(1-\lambda)z,\ \lambda \in [0,1],\ y\in K,\ z\in L\}. \] In other words, \[ C\cap E = \{ (x,x): x\in \cup_{\lambda\in [0,1]}( \lambda K \cap (1-\lambda)L)\}= \{ (x,x): x\in (K^{\circ}+L^{\circ})^{\circ}\}. \] Next let us calculate the projection of $C$ onto $E^{\perp}$: since $C$ is a convex hull, we may project $K\times \{0\}$ and $\{0\}\times L$ onto $E^{\perp}$ and then take a convex hull. In other words, we are searching for all $(x,-x)$ such that there exists $(y,y)$ with $(x+y, -x+y)$ in $K\times \{0\}$ or $\{0\}\times L$.
Clearly this means that $y$ is either $x$, in the first case, or $-x$, in the second, which means we get \[P_{E^{\perp}}C = {\rm conv} \{ (x,-x): 2x\in K {\rm ~or~} -2x \in L\} = \{ (x,-x): x\in {\rm conv} (K/2 \cup -L/2) \}.\] In terms of volume we get that \[ {\rm Vol}_n(C\cap E) = \sqrt{2}^n {\rm Vol}_n((K^{\circ}+L^{\circ})^{\circ}) \] and \[ {\rm Vol}_n (P_{E^{\perp}}(C)) = \sqrt{2}^{-n} {\rm Vol}_n ({\rm conv} (K \cup -L)),\] and so their product is precisely the quantity on the left hand side of Theorem \ref{thm:strange}. By the Rogers--Shephard inequality for sections and projections, Lemma \ref{lem-RSsec-proj}, we know that \[ {\rm Vol}_n(C\cap E){\rm Vol}_n (P_{E^{\perp}}(C)) \le {\rm Vol}_n(C)\binom{2n}{n}. \] Plugging in the volume of $C$, we get the inequality of Theorem \ref{thm:strange}.\end{proof} \begin{remark} Note that taking, for example, $K = L$ in the last construction, but taking $E_\lambda = \{(\lambda x, (1-\lambda)x): x\in {\mathbb R}^n\}$, we get that \[ C\cap E_\lambda = \{ (\lambda x, (1-\lambda)x): x\in K\}\] and \[ P_{E_\lambda^\perp} C = \{ ((1-\lambda)x, -\lambda x): x \in \frac{1}{\lambda^2 + (1-\lambda)^2} {\rm conv}((1-\lambda)K \cup -\lambda K) \}. \] In particular, the product of their volumes, which is simply \[ {\rm Vol}( {\rm conv}((1-\lambda)K \cup -\lambda K))\,{\rm Vol}(K),\] is bounded by $\binom{2n}{n}{\rm Vol}(C)$, which is itself ${\rm Vol}(K)^2$. This gives yet another proof of the following inequality from \cite{ArtsteinEinhornFlorentinOstrover}, valid for a convex body $K$ such that $0\in K$: \[{\rm Vol}\big({\rm conv}((1-\lambda)K \cup -\lambda K)\big) \le {\rm Vol}(K), \] and more importantly a realization of all these sets as projections of a certain body.
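As a quick numerical sanity check of the inequality ${\rm Vol}({\rm conv}((1-\lambda)K\cup-\lambda K))\le{\rm Vol}(K)$, one can test it on segments in ${\mathbb R}^1$, where the convex hull of a union of intervals is just the smallest covering interval (a hedged toy illustration; the parameter grid is an arbitrary choice):

```python
# K = [-a, b] with a, b >= 0, so that 0 lies in K.  In dimension one,
# conv((1-lam)K u (-lam K)) is the smallest interval covering both pieces.
def hull_length(a, b, lam):
    lo = min(-(1.0 - lam) * a, -lam * b)   # left endpoint of the hull
    hi = max((1.0 - lam) * b, lam * a)     # right endpoint of the hull
    return hi - lo

ok = True
grid = [i / 20.0 for i in range(21)]       # a, b, lam sampled in [0, 1]
for a in grid:
    for b in grid:
        for lam in grid:
            # Vol(conv((1-lam)K u -lam K)) <= Vol(K) = a + b
            if hull_length(a, b, lam) > a + b + 1e-12:
                ok = False
assert ok
```

A short case analysis on which endpoint of each scaled copy dominates shows the one-dimensional hull length is always one of $(1-\lambda)(a+b)$, $\lambda(a+b)$, $a$, or $b$, each at most $a+b$.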
\end{remark} {\small \noindent Shiri Artstein-Avidan \noindent School of Mathematical Science, Tel Aviv University, Ramat Aviv, Tel Aviv, 69978, Israel.\vskip 2pt \noindent Email address: [email protected]} \end{document}
\begin{document} \title[Positive solutions of resonant singular equations] {Pairs of positive solutions for resonant singular equations with the $p$-Laplacian} \author[N. S. Papageorgiou, V. D. R\u{a}dulescu, D. D. Repov\v{s}] {Nikolaos S. Papageorgiou, Vicen\c{t}iu D. R\u{a}dulescu, Du\v{s}an D. Repov\v{s}} \address{Nikolaos S. Papageorgiou \newline National Technical University, Department of Mathematics, Zografou Campus, Athens 15780, Greece} \email{[email protected]} \address{Vicen\c{t}iu D. R\u{a}dulescu (corresponding author) \newline Department of Mathematics, Faculty of Sciences, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia. \newline Department of Mathematics, University of Craiova, Craiova 200585, Romania} \email{[email protected]} \address{Du\v{s}an D. Repov\v{s} \newline Faculty of Education and Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana 1000, Slovenia} \email{[email protected]} \subjclass[2010]{35J20, 35J25, 35J67} \keywords{Singular reaction; resonance; regularity; positive solutions; \break\indent maximum principle; mountain pass theorem} \begin{abstract} We consider a nonlinear elliptic equation driven by the Dirichlet $p$-Laplacian with a singular term and a $(p-1)$-linear perturbation which is resonant at $+\infty$ with respect to the principal eigenvalue. Using variational tools, together with suitable truncation and comparison techniques, we show the existence of at least two positive smooth solutions. \end{abstract} \maketitle \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \allowdisplaybreaks \section{Introduction} Let $\Omega\subseteq{\mathbb R}^N$ be a bounded domain with a $C^2$-boundary $\partial\Omega$. 
In this paper, we study the following nonlinear elliptic problem with singular reaction \begin{equation}\label{eq1} \begin{gathered} -\Delta_pu(z)=u(z)^{-\mu}+f(z,u(z))\quad \text{in } \Omega,\\ u|_{\partial\Omega}=0,\quad u>0,\quad 1<p<\infty,\; 0<\mu<1. \end{gathered} \end{equation} In this problem, $\Delta_p$ denotes the $p$-Laplacian differential operator defined by $$ \Delta_pu={\rm div}\,(|Du|^{p-2}Du)\quad \text{for all } u\in W^{1,p}(\Omega),\ 1<p<\infty. $$ In the reaction term, $u^{-\mu}$ (with $0<\mu<1$) is the singular part and $f:\Omega\times{\mathbb R}\to{\mathbb R}$ is a Carath\'eodory perturbation (that is, for all $x\in{\mathbb R}$ the mapping $z\mapsto f(z,x)$ is measurable and for almost all $z\in\Omega$ the map $x\mapsto f(z,x)$ is continuous) which exhibits $(p-1)$-linear growth near $+\infty$. Using variational tools, together with suitable truncation and comparison techniques, we prove a multiplicity theorem establishing the existence of two positive smooth solutions. Such multiplicity theorems for singular problems were proved by Hirano, Saccon and Shioji \cite{7}, Papageorgiou and R\u adulescu \cite{12}, Sun, Wu and Long \cite{16} (semilinear problems driven by the Laplacian) and Giacomoni and Saudi \cite{4}, Giacomoni, Schindler and Tak\'a\v{c} \cite{5}, Kyritsi and Papageorgiou \cite{6}, Papageorgiou and Smyrlis \cite{13, 14}, Perera and Zhang \cite{15} (nonlinear problems). In all these papers the reaction term is parametric. The presence of the parameter permits a more precise control of the nonlinearity as the positive parameter $\lambda$ becomes small. A complete overview of the theory of singular elliptic equations can be found in the book by Ghergu and R\u adulescu \cite{3}. \section{Mathematical background and hypotheses} Let $X$ be a Banach space and $X^*$ its topological dual. By $\langle \cdot,\cdot\rangle$ we denote the duality brackets for the pair $(X^*,X)$.
Given $\varphi\in C^1(X,{\mathbb R})$, we say that $\varphi$ satisfies the ``Cerami condition'' (the ``C-condition'' for short), if the following property holds: \begin{quote} Every sequence $\{u_n\}_{n\geq 1}\subseteq X$ such that $\{\varphi(u_n)\}_{n\in {\mathbb N}}\subseteq{\mathbb R}$ is bounded and $(1+\|u_n\|)\varphi'(u_n)\to 0$ in $X^*$ as $n\to\infty$, admits a strongly convergent subsequence. \end{quote} This is a compactness-type condition on the functional $\varphi$. It leads to a deformation theorem from which we can deduce the minimax theory of the critical values of $\varphi$. One of the main results of this theory is the so-called ``mountain pass theorem'', which we recall here. \begin{theorem}\label{thm1} Assume that $\varphi\in C^1(X,{\mathbb R})$ satisfies the C-condition, $0<\rho<\|u_0-u_1\|$, $$ \max\{\varphi(u_0),\varphi(u_1)\}< \inf\{\varphi(u):\|u-u_0\|=\rho\}=m_{\rho} $$ and $$ c=\inf_{\gamma\in\Gamma}\max_{0\leq t\leq 1}\varphi(\gamma(t)), $$ where $$ \Gamma=\{\gamma\in C([0,1],X):\gamma(0)=u_0,\gamma(1)=u_1\}. $$ Then $c\geq m_{\rho}$ and $c$ is a critical value of $\varphi$ (that is, there exists $\hat{u}\in X$ such that $\varphi(\hat{u})=c$ and $\varphi'(\hat{u})=0$). \end{theorem} In the analysis of problem \eqref{eq1} we will use the Sobolev space $W^{1,p}_0(\Omega)$ and the Banach space $C^1_0(\overline{\Omega})=\{u\in C^1(\overline{\Omega}):u|_{\partial\Omega}=0\}$. In what follows, we denote by $\|\cdot\|$ the norm of the Sobolev space $W^{1,p}_0(\Omega)$. On account of the Poincar\'e inequality, we have $$ \|u\|=\|Du\|_p\quad \text{for all } u\in W^{1,p}_0(\Omega). $$ The Banach space $C^1_0(\overline{\Omega})$ is an ordered Banach space with positive (order) cone given by $$ C_+(\overline{\Omega})=C_+=\{u\in C^1_0(\overline{\Omega}):u(z) \geq 0\ \text{for all}\ z\in\overline{\Omega}\}. $$ This cone has a nonempty interior $$ \operatorname{int}C_+=\big\{u\in C_+:u(z)>0 \text{ for all } z\in\Omega,\frac{\partial u}{\partial n}\big|_{\partial\Omega}<0\big\}.
$$ Here, $\frac{\partial u}{\partial n}=(Du,n)_{{\mathbb R}^N}$ with $n(\cdot)$ being the outward unit normal on $\partial\Omega$. Let $A:W^{1,p}_0(\Omega)\to W^{-1,p'}(\Omega)=W^{1,p}_0(\Omega)^*$ (with $\frac{1}{p}+\frac{1}{p'}=1$) be the nonlinear map defined by $$ \langle A(u),h\rangle=\int_{\Omega}|Du|^{p-2}(Du,Dh)_{{\mathbb R}^N}dz\quad \text{for all } u,h\in W^{1,p}_0(\Omega). $$ This map has the following properties (see, for example, Motreanu, Motreanu and Papageorgiou \cite[p. 40]{11}). \begin{proposition}\label{prop2} The map $A:W^{1,p}_0(\Omega)\to W^{-1,p'}(\Omega)$ is bounded (that is, maps bounded sets to bounded sets), continuous, strictly monotone (hence maximal monotone, too) and of type $(S)_+$, that is, \[ u_n\stackrel{w}{\to}u \text{ in } W^{1,p}_0(\Omega) \text{ and } \limsup_{n\to\infty}\langle A(u_n),u_n-u\rangle \leq 0\Rightarrow u_n\to u \text{ in } W^{1,p}_0(\Omega). \] \end{proposition} We will also need some facts about the spectrum of the Dirichlet $p$-Laplacian. So, we consider the following nonlinear eigenvalue problem $$ -\Delta_pu(z)=\hat{\lambda}m(z)|u(z)|^{p-2}u(z) \text{ in } \Omega,\quad u|_{\partial\Omega}=0. $$ Here, $m\in L^{\infty}(\Omega)$, $m\geq 0$, $m\neq 0$. We say that $\hat{\lambda}$ is an ``eigenvalue'', if the above problem admits a nontrivial solution $\hat{u}$ known as an ``eigenfunction'' corresponding to the eigenvalue $\hat{\lambda}$. The nonlinear regularity theory (see, for example, Gasinski and Papageorgiou \cite[pp. 737-738]{2}) implies that $\hat{u}\in C^1_0(\overline{\Omega})$.
There exists a smallest eigenvalue $\hat{\lambda}_1(m)$ such that: \begin{itemize} \item $\hat{\lambda}_1(m)>0$ and is isolated in the spectrum $\hat{\sigma}(p)$ of $(-\Delta_p,W^{1,p}_0(\Omega),m)$ (that is, there exists $\epsilon>0$ such that $(\hat{\lambda}_1(m),\hat{\lambda}_1(m)+\epsilon)\cap\hat{\sigma}(p)=\emptyset$); \item $\hat{\lambda}_1(m)>0$ is simple in the sense that if $\hat{u},\hat{v}$ are two eigenfunctions corresponding to $\hat{\lambda}_1(m)>0$, then $\hat{u}=\xi\hat{v}$ for some $\xi\in{\mathbb R}\backslash\{0\}$; \item \begin{equation}\label{eq2} \hat{\lambda}_1(m)=\inf\Big[\frac{\|Du\|^p_p}{\int_{\Omega}m(z)|u|^pdz}: u\in W^{1,p}_0(\Omega),u\neq 0\Big]. \end{equation} \end{itemize} The infimum in \eqref{eq2} is realized on the one-dimensional eigenspace corresponding to $\hat{\lambda}_1(m)$. From the above properties it follows that the elements of this eigenspace have constant sign. We denote by $\hat{u}_1(m)$ the $L^p$-normalized (that is, $\|\hat{u}_1(m)\|_p=1$) positive eigenfunction for the eigenvalue $\hat{\lambda}_1(m)$. As we have already mentioned, $\hat{u}_1(m)\in C_+$. In fact, the nonlinear maximum principle (see, for example, Gasinski and Papageorgiou \cite[p. 738]{2}) implies that $\hat{u}_1(m)\in \operatorname{int}C_+$. If $m\equiv 1$, then we write $$ \hat{\lambda}_1(1)=\hat{\lambda}_1>0\quad \text{and}\quad \hat{u}_1(1)=\hat{u}_1\in \operatorname{int}C_+. $$ The map $m\mapsto\hat{\lambda}_1(m)$ exhibits the following strict monotonicity property. \begin{proposition}\label{prop3} If $m_1,m_2\in L^{\infty}(\Omega)$, $0\leq m_1(z)\leq m_2(z)$ for almost all $z\in\Omega$ and $m_1\neq 0$, $m_2\neq m_1$, then $\hat{\lambda}_1(m_2)<\hat{\lambda}_1(m_1)$. \end{proposition} We mention that every eigenfunction $\hat{u}$ corresponding to an eigenvalue $\hat{\lambda}\neq\hat{\lambda}_1(m)$ is necessarily nodal (that is, sign changing). For details on the spectrum of $(-\Delta_p, W^{1,p}_0(\Omega),m)$ we refer to \cite{2, 11}.
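For instance, for $p=2$, $m\equiv 1$ and $\Omega=(0,1)$ one has $\hat{\lambda}_1=\pi^2$, with sign-constant eigenfunction $\sin(\pi z)$ up to normalization. These facts can be checked with a standard finite-difference discretization (a hedged numerical illustration only; the grid size is an arbitrary choice):

```python
import math
import numpy as np

# Dirichlet Laplacian on (0,1): second-order finite differences on N interior
# grid points give the tridiagonal matrix with 2/h^2 on the diagonal and
# -1/h^2 on the off-diagonals.
N = 200
h = 1.0 / (N + 1)
A = (np.diag(2.0 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order

lam1 = eigvals[0]                      # approximates pi^2 = 9.8696...
u1 = eigvecs[:, 0]
u1 = u1 if u1.sum() > 0 else -u1       # eigh fixes the sign arbitrarily

assert abs(lam1 - math.pi**2) < 1e-2   # principal eigenvalue is pi^2
assert (u1 > 0).all()                  # principal eigenfunction has constant sign
```

The second assertion illustrates the constant-sign property of the principal eigenspace; every higher discrete eigenvector changes sign, matching the nodal property recalled above.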
For $x\in{\mathbb R}$ we define $x^{\pm}=\max\{\pm x,0\}$. Then, given $u\in W^{1,p}_0(\Omega)$, we set $u^{\pm}(\cdot)=u(\cdot)^{\pm}$. We have $$ u^{\pm}\in W^{1,p}_0(\Omega),\ u=u^+-u^-,\ |u|=u^++u^-. $$ Given a measurable function $g:\Omega\times{\mathbb R}\to{\mathbb R}$ (for example, a Carath\'eodory function), we denote by $N_g(\cdot)$ the Nemitsky (superposition) operator corresponding to $g$, that is, $$ N_g(u)(\cdot)=g(\cdot,u(\cdot))\ \text{for all}\ u\in W^{1,p}_0(\Omega). $$ We know that $z\mapsto N_g(u)(z)=g(z,u(z))$ is measurable. The hypotheses on the perturbation term $f(z,x)$ are the following: \noindent(H1):\ $f:\Omega\times{\mathbb R}\to{\mathbb R}$ is a Carath\'eodory function such that $f(z,0)=0$ for almost all $z\in\Omega$ and \begin{itemize} \item[(i)] for every $\rho>0$, there exists $a_{\rho}\in L^{\infty}(\Omega)$ such that $$ |f(z,x)|\leq a_{\rho}(z)\quad \text{for almost all } z\in\Omega, \text{ all } 0\leq x\leq\rho $$ and there exists $w\in C^1(\overline{\Omega})$ such that $$ w(z)\geq\hat{c}>0 \text{ for all } z\in\overline{\Omega} \text{ and } -\Delta_pw\geq 0 \text{ in } W^{1,p}_0(\Omega)^*=W^{-1,p'}(\Omega) $$ and for every compact $K\subseteq \Omega$ we can find $c_K>0$ such that $$ w(z)^{-\mu}+f(z,w(z))\leq-c_K<0 \text{ for almost all } z\in K; $$ \item[(ii)] if $F(z,x)=\int^x_0f(z,s)ds$, then there exists $\eta\in L^{\infty}(\Omega)$ such that \begin{gather*} \hat{\lambda}_1\leq\liminf_{x\to+\infty}\frac{f(z,x)}{x^{p-1}} \leq\limsup_{x\to+\infty}\frac{f(z,x)}{x^{p-1}}\leq \eta(z) \text{ uniformly for almost all } z\in\Omega,\\ f(z,x)x-pF(z,x)\to-\infty\text{ as } x\to+\infty \text{ uniformly for almost all } z\in\Omega; \end{gather*} \item[(iii)] there exists $\delta\in(0,\hat{c})$ such that for all compact $K\subseteq\Omega$ we have $$ f(z,x)\geq\hat{c}_K>0\ \text{ for almost all } z\in K, \text{ all } 0<x\leq \delta; $$ \item[(iv)] for every $\rho>0$, there exists $\hat{\xi}_{\rho}>0$ such that for almost all $z\in\Omega$ the 
mapping $$ x\mapsto f(z,x)+\hat{\xi}_{\rho}x^{p-1} $$ is nondecreasing on $[0,\rho]$. \end{itemize} \begin{remark} \rm Since we are looking for positive solutions and all the above hypotheses concern the positive semiaxis ${\mathbb R}_+=\left[0,+\infty\right)$, we may assume without any loss of generality that \begin{equation}\label{eq3} f(z,x)=0 \text{ for almost all } z\in\Omega,\text{ all } x\leq 0. \end{equation} \end{remark} Hypothesis (H1)(ii) permits resonance with respect to the principal eigenvalue $\hat{\lambda}_1>0$. The second convergence condition in (H1)(ii) implies that the resonance at $+\infty$ with respect to $\hat{\lambda}_1>0$ is from the right of the principal eigenvalue, in the sense that $$ \hat{\lambda}_1x^{p}-pF(z,x)\to-\infty\ \text{as}\ x\to+\infty \text{ uniformly for almost all } z\in\Omega $$ (see the proof of Proposition \ref{prop5}). This makes the problem noncoercive and so the direct method of the calculus of variations is not applicable. Hypothesis (H1)(iv) is satisfied if, for example, $f(z,\cdot)$ is differentiable and the derivative $f'_x(z,\cdot )$ satisfies for some $\rho>0$ $$ f'_x(z,x)\geq -\tilde{c}_\rho x^{p-2} \text{ for almost all $z\in\Omega$, for all $0\leq x\leq\rho$ and some $\tilde{c}_\rho>0$.} $$ \begin{example} \rm The following function satisfies hypotheses (H1). For the sake of simplicity we drop the $z$-dependence: $$ f(x)=\begin{cases} x^{p-1}-2x^{r-1}&\text{if } 0\leq x\leq 1\\ \eta x^{p-1}+x^{\tau-1}-(2+\eta)x^{q-1}&\text{if } 1<x, \end{cases} $$ with $\eta\geq\hat{\lambda}_1$ and $1<\tau,\ q<p<r<\infty$. \end{example} \section{Pair of positive solutions} In this section we prove the existence of two positive smooth solutions for problem \eqref{eq1}. We start by considering the auxiliary singular Dirichlet problem \begin{equation}\label{eq4} -\Delta_pu(z)=u(z)^{-\mu} \text{ in } \Omega,\ u|_{\partial\Omega}=0,\ u>0.
\end{equation} By Papageorgiou and Smyrlis \cite[Proposition 5]{14}, we know that problem \eqref{eq4} has a unique positive solution $\tilde{u}\in \operatorname{int}C_+$. Let $\delta>0$ be as postulated by hypothesis (H1)(iii) and let $$ 0<t\leq \min\big\{1,\frac{\delta}{\|\tilde{u}\|_{\infty}}\big\}. $$ We set $\underline{u}=t\tilde{u}$. Then $\underline{u}\in \operatorname{int}C_+$ and we have \begin{equation} \label{eq5} \begin{aligned} -\Delta_p\underline{u}(z)=t^{p-1}[-\Delta_p\tilde{u}(z)] &= t^{p-1}\tilde{u}(z)^{-\mu} \\ &\leq \underline{u}(z)^{-\mu}\quad (\text{since } 0<t\leq 1) \\ &\leq \underline{u}(z)^{-\mu}+f(z,\underline{u}(z)) \text{ for almost all } z\in\Omega \end{aligned} \end{equation} (see \cite{14}, note that $\underline{u}(z)\in\left(0,\delta\right]$ for all $z\in\overline{\Omega}$ and see hypothesis (H1)(iii)). Also note that $\underline{u}\leq w$. We introduce the following truncation of the reaction term in \eqref{eq1}: \begin{equation}\label{eq6} \hat{f}(z,x)=\begin{cases} \underline{u}(z)^{-\mu}+f(z,\underline{u}(z))&\text{if } x<\underline{u}(z)\\ x^{-\mu}+f(z,x)&\text{if } \underline{u}(z)\leq x\leq w(z)\\ w(z)^{-\mu}+f(z,w(z))&\text{if } w(z)<x. \end{cases} \end{equation} This is a Carath\'eodory function. We set $\hat{F}(z,x)=\int^x_0\hat{f}(z,s)ds$ and consider the functional $\hat{\varphi}:W^{1,p}_0(\Omega)\to{\mathbb R}$ defined by $$ \hat{\varphi}(u)=\frac{1}{p}\|Du\|^p_p-\int_{\Omega}\hat{F}(z,u)dz\quad \text{for all } u\in W^{1,p}_0(\Omega). $$ By Papageorgiou and Smyrlis \cite[Proposition 3]{14} we have $\hat{\varphi}\in C^1(W^{1,p}_0(\Omega))$. In what follows, we denote by $[\underline{u},w]$ the order interval $$ [\underline{u},w]=\{u\in W^{1,p}_0(\Omega):\underline{u}(z) \leq u(z)\leq w(z) \text{ for almost all } z\in\Omega\}. $$ Also, we denote by $\operatorname{int}_{C^1_0(\overline{\Omega})} [\underline{u},w]$ the interior in the $C^1_0(\overline{\Omega})$-norm topology of $[\underline{u},w]\cap C^1_0(\overline{\Omega})$.
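Computationally, the truncation \eqref{eq6} simply clamps its argument into the order interval $[\underline{u}(z),w(z)]$ before evaluating the reaction $x^{-\mu}+f(z,x)$; a minimal sketch (the helper name and the one-dimensional toy data are hypothetical, not taken from the paper):

```python
def truncated_reaction(f, mu, u_low, w):
    """Truncation in the spirit of (6): clamp x into [u_low(z), w(z)],
    then evaluate the reaction x^(-mu) + f(z, x) at the clamped value."""
    def f_hat(z, x):
        xc = min(max(x, u_low(z)), w(z))   # clamp into the order interval
        return xc ** (-mu) + f(z, xc)
    return f_hat

# toy one-dimensional data (hypothetical): 0 < u_low < w on (0, 1)
u_low = lambda z: 0.1
w = lambda z: 1.0
f = lambda z, x: x                         # placeholder perturbation

f_hat = truncated_reaction(f, mu=0.5, u_low=u_low, w=w)

assert f_hat(0.3, -1.0) == f_hat(0.3, 0.1)   # below u_low: frozen at u_low
assert f_hat(0.3, 5.0) == f_hat(0.3, 1.0)    # above w: frozen at w
assert abs(f_hat(0.3, 0.25) - (0.25 ** -0.5 + 0.25)) < 1e-12
```

The freezing below $\underline{u}$ is what removes the singularity at $0$ from the truncated problem, since the truncated reaction is then bounded by an $L^{p'}$ function of $z$ alone.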
In the next proposition we produce a positive smooth solution located in the above order interval. \begin{proposition}\label{prop4} If hypotheses {\rm (H1)} hold, then problem \eqref{eq1} has a positive solution $u_0\in \operatorname{int}_{C^1_0(\overline{\Omega})}[\underline{u},w]$. \end{proposition} \begin{proof} We know that $\underline{u}\in \operatorname{int}C_+$. So, using Marano and Papageorgiou \cite[Proposition 2.1]{10} we can find $c_0>0$ such that \[ \hat{u}^{1/p'}_{1}\leq c_0\underline{u}\quad \Rightarrow\quad \underline{u}^{-\mu}\leq c_0^{\mu}\hat{u}_1^{-\mu/p'}. \] Hence, using the lemma of Lazer and McKenna \cite{9}, we have that $$ \underline{u}^{-\mu}\in L^{p'}(\Omega). $$ Therefore by \eqref{eq6} we see that $\hat{\varphi}(\cdot)$ is coercive. Also, using the Sobolev embedding theorem, we see that $\hat{\varphi}$ is sequentially weakly lower semicontinuous. So, by the Weierstrass-Tonelli theorem, we can find $u_0\in W^{1,p}_0(\Omega)$ such that \begin{equation} \label{eq7} \begin{aligned} &\hat{\varphi}(u_0)=\inf[\hat{\varphi}(u):u\in W^{1,p}_0(\Omega)], \\ &\Rightarrow \hat{\varphi}'(u_0)=0, \\ &\Rightarrow \langle A(u_0),h\rangle=\int_{\Omega}\hat{f}(z,u_0)hdz \text{ for all } h\in W^{1,p}_0(\Omega). \end{aligned} \end{equation} In \eqref{eq7} we first choose $h=(\underline{u}-u_0)^+\in W^{1,p}_0(\Omega)$. Then \begin{align*} \langle A(u_0),(\underline{u}-u_0)^+\rangle &=\int_{\Omega}[\underline{u}^{-\mu}+f(z,\underline{u})](\underline{u}-u_0)^+dz \quad \text{(see \eqref{eq6})} \\ &\geq \langle A(\underline{u}),(\underline{u}-u_0)^+\rangle\quad (\text{see \eqref{eq5}}) \end{align*} which implies \[ \langle A(\underline{u})-A(u_0),(\underline{u}-u_0)^+\rangle\leq 0, \] and this implies $\underline{u}\leq u_0$. Next, in \eqref{eq7} we choose $h=(u_0-w)^+\in W^{1,p}_0(\Omega)$ (see hypothesis (H1)(i)).
Then \begin{align*} \langle A(u_0),(u_0-w)^+\rangle &=\int_{\Omega}[w^{-\mu}+f(z,w)](u_0-w)^+dz\\ &\leq\langle A(w),(u_0-w)^+\rangle\quad (\text{see hypothesis (H1)(i)}), \end{align*} which implies \[ \langle A(u_0)-A(w),(u_0-w)^+\rangle\leq 0, \] and this implies $u_0\leq w$. So, we have proved that \begin{equation}\label{eq8} u_0\in[\underline{u},w]=\{u\in W^{1,p}_0(\Omega):\underline{u}(z) \leq u(z)\leq w(z)\quad \text{ for almost all } z\in\Omega\}. \end{equation} Clearly, $u_0\neq \underline{u}$ (see hypothesis (H1)(iii)) and $u_0\neq w$ (see hypothesis (H1)(i)). From \eqref{eq6}, \eqref{eq7}, \eqref{eq8}, we have \[ \langle A(u_0),h\rangle=\int_{\Omega}[u_0^{-\mu}+f(z,u_0)]hdz,\quad 0\leq u_0^{-\mu}\leq \underline{u}^{-\mu}\in L^{p'}(\Omega) \] which implies \begin{equation}\label{eq9} -\Delta_pu_0(z)=u_0(z)^{-\mu}+f(z,u_0(z)) \quad\text{for a.a. } z\in\Omega,\; u_0|_{\partial\Omega}=0, \end{equation} see \cite{14}. Also, by Gilbarg and Trudinger \cite[Lemma 14.16, p. 355]{6} we know that there exists small $\delta_0>0$ such that, if $\Omega_{\delta_0}=\{z\in\Omega:d(z,\partial\Omega)<\delta_0\}$, then $$ d\in \operatorname{int}C_+(\overline{\Omega}_{\delta_0}), $$ where $d(\cdot)=d(\cdot,\partial\Omega)$. Let $D^*=\overline{\Omega}\backslash \Omega_{\delta_0}$. Setting $C(D^*)_+=\{h\in C(D^*):h(z)\geq 0 \text{ for all } z\in D^*\}$, we have $d\in \operatorname{int}C(D^*)_+\subseteq \operatorname{int}C_+(D^*)$. Then, as before, via Marano and Papageorgiou \cite[Proposition 2.1]{10} we find $0<c_1<c_2$ such that \begin{equation}\label{eq10} c_1d\leq \underline{u}\leq c_2d. \end{equation} Then by \eqref{eq9}, \eqref{eq10}, hypotheses (H1)(i), (H1)(iv) and Giacomoni and Saudi \cite[Theorem B.1]{4}, we have $$ u_0\in \operatorname{int}C_+. $$ Now let $\rho=\|w\|_{\infty}$ and let $\hat{\xi}_{\rho}>0$ be as postulated by hypothesis (H1)(iv).
We have \begin{align*} &-\Delta_p u_0(z)-u_0(z)^{-\mu}+\hat{\xi}_{\rho}u_0(z)^{p-1}\\ &= f(z,u_0(z))+\hat{\xi}_{\rho}u_0(z)^{p-1}\quad (\text{see \eqref{eq9}})\\ &\geq f(z,\underline{u}(z))+\hat{\xi}_{\rho}\underline{u}(z)^{p-1}\quad (\text{see \eqref{eq8} and hypothesis (H1)(iv)})\\ &> \hat{\xi}_{\rho}\underline{u}(z)^{p-1}\quad (\text{see hypothesis (H1)(iii)})\\ &\geq -\Delta_p\underline{u}(z)-\underline{u}(z)^{-\mu}+\hat{\xi}_{\rho} \underline{u}(z)^{p-1}\quad (\text{see \eqref{eq5}})\ \text{for almost all } z\in\Omega. \end{align*} Hence, invoking Proposition 4 of Papageorgiou and Smyrlis \cite{14}, we have $$ u_0-\underline{u}\in \operatorname{int}C_+. $$ From the hypothesis on the function $w(\cdot)$ (see (H1)(i)), we see that $$ D_0=\{z\in\Omega:u_0(z)=w(z)\}\ \text{is compact in}\ \Omega. $$ Then we can find an open set $\mathcal{U}\subseteq\Omega$ with Lipschitz boundary, such that $$ D_0\subseteq\mathcal{U}\subseteq\overline{\mathcal{U}}\subseteq\Omega \text{ and } d(z,D_0)\leq \delta_1 \text{ for all } z\in\overline{\mathcal{U}}, \text{ with } \delta_1>0. $$ Let $\epsilon>0$ be such that \begin{equation}\label{eq11} u_0(z)+\epsilon\leq w(z) \text{ for all } z\in\partial\mathcal{U} \end{equation} (such an $\epsilon>0$ exists since $\partial\mathcal{U}$ is compact and $w-u_0\in C(\overline{\Omega})$). Exploiting the uniform continuity of the map $x\mapsto x^{p-1}$ on $[0,\rho]$ we can find $\delta_2>0$ such that \begin{equation}\label{eq12} \hat{\xi}_{\rho}|x^{p-1}-v^{p-1}|\leq\epsilon\quad \text{for all } x,v\in[\min_{\overline{\mathcal{U}}}u_0,\max_{\overline{\mathcal{U}}}w],\; |x-v|\leq\delta_2. \end{equation} Similarly, the uniform continuity of $x\mapsto x^{-\mu}$ on any compact subset of $(0,+\infty)$, implies that we can find $\delta_3\in\left(0,\delta_2\right]$ such that \begin{equation}\label{eq13} |x^{-\mu}-v^{-\mu}|\leq\epsilon\quad \text{for all } x,v\in\big[\frac{\hat{c}}{2},\|w\|_{\infty}\big],\; |x-v|\leq\delta_3.
\end{equation} Then choosing $\delta_1\in(0,\delta_3)$ small enough and $\tilde{\delta}\in(0,\delta_1)$ we have \begin{equation} \label{eq14} \begin{aligned} &-\Delta_p(u_0+\tilde{\delta})(z)+\hat{\xi}_{\rho}(u_0+\tilde{\delta})(z)^{p-1} \\ &\leq -\Delta_pu_0(z)+\hat{\xi}_{\rho}u_0(z)^{p-1}+\epsilon \quad (\text{see \eqref{eq12}}) \\ &= u_0(z)^{-\mu}+f(z,u_0(z))+\hat{\xi}_{\rho}u_0(z)^{p-1}+\epsilon \quad (\text{see \eqref{eq9}}) \\ &\leq w(z)^{-\mu}+f(z,w(z))+\hat{\xi}_{\rho}w(z)^{p-1}+2\epsilon \quad (\text{see \eqref{eq13}, \eqref{eq8}, (H1)(iv)}) \\ &\leq -c_{\overline{\mathcal{U}}}+2\epsilon+\hat{\xi}_{\rho}w(z)^{p-1} \text{ for almost all } z\in\Omega \quad (\text{see (H1)(i)}). \end{aligned} \end{equation} Choosing $\epsilon\in\left(0,c_{\overline{\mathcal{U}}}/2\right)$ and using once more hypothesis (H1)(i), we deduce from \eqref{eq14} that \begin{equation}\label{eq15} -\Delta_p(u_0+\tilde{\delta})+\hat{\xi}_{\rho}(u_0+\tilde{\delta})^{p-1} \leq-\Delta_pw+\hat{\xi}_{\rho}w^{p-1}\ \text{in } W^{1,p}_0(\Omega)^*=W^{-1,p'}(\Omega). \end{equation} From \eqref{eq15}, \eqref{eq11} and the weak comparison principle of Tolksdorf \cite[Lemma 3.1]{17}, we have $$ (u_0+\tilde{\delta})(z)\leq w(z) \text{ for all } z\in\overline{\mathcal{U}}. $$ But $D_0\subseteq\overline{\mathcal{U}}$. Therefore $D_0=\emptyset$ and so $$ 0<(w-u_0)(z)\text{ for all } z\in\overline{\Omega}. $$ We conclude that $$ u_0\in \operatorname{int}_{C^1_0(\overline{\Omega})}[\underline{u},w]. $$ The proof is now complete. \end{proof} Next we produce a second positive smooth solution for problem \eqref{eq1}. \begin{proposition}\label{prop5} If hypotheses {\rm (H1)} hold, then \eqref{eq1} has a second positive solution $\hat{u}\in \operatorname{int} C_+$.
\end{proposition} \begin{proof} Consider the following truncation of the reaction term in \eqref{eq1}: \begin{equation} \label{eq16} g(z,x)=\begin{cases} \underline{u}(z)^{-\mu}+f(z,\underline{u}(z))&\text{if } x\leq\underline{u}(z)\\ x^{-\mu}+f(z,x)&\text{if } \underline{u}(z)<x. \end{cases} \end{equation} This is a Carath\'eodory function. We set $G(z,x)=\int^x_0g(z,s)ds$ and consider the functional $\varphi_0:W^{1,p}_0(\Omega)\to{\mathbb R}$ defined by $$ \varphi_0(u)=\frac{1}{p}\|Du\|^p_p-\int_{\Omega}G(z,u)dz\quad \text{for all } u\in W^{1,p}_0(\Omega). $$ As before, Papageorgiou and Smyrlis \cite[Proposition 3]{14} implies that $$ \varphi_0\in C^1(W^{1,p}_0(\Omega)). $$ \noindent\textbf{Claim.} $\varphi_0$ satisfies the C-condition. We consider a sequence $\{u_n\}_{n\geq 1}\subseteq W^{1,p}_0(\Omega)$ such that \begin{gather} |\varphi_0(u_n)|\leq M_1\quad \text{for some $M_1>0$ and for all } n\in {\mathbb N},\label{eq17}\\ (1+\|u_n\|)\varphi'_0(u_n)\to 0\quad \text{in } W^{-1,p'}(\Omega) \text{ as } n\to\infty.\label{eq18} \end{gather} From \eqref{eq18} we have \begin{equation} \label{eq19} \big|\langle A(u_n),h\rangle-\int_{\Omega}g(z,u_n)hdz\big| \leq\frac{\epsilon_n\|h\|}{1+\|u_n\|} \end{equation} for all $h\in W^{1,p}_0(\Omega)$ with $\epsilon_n\to 0^+$. In \eqref{eq19} we choose $h=-u^-_n\in W^{1,p}_0(\Omega)$. Then \[ \|Du^-_n\|^p_p-\int_{\Omega}[\underline{u}^{-\mu}+f(z,\underline{u})](-u^-_n)dz \leq\epsilon_n\quad \text{for all $n\in {\mathbb N}$, (see \eqref{eq16})} \] which implies \begin{equation} \label{eq20} \begin{aligned} &\|u^-_n\|^p\leq c_3\|u^-_n\|\quad \text{for some $c_3>0$ and for all } n\in {\mathbb N}, \\ &\Rightarrow \{u^-_n\}_{n\geq 1}\subseteq W^{1,p}_0(\Omega) \text{ is bounded}. \end{aligned} \end{equation} Suppose that $\{u^+_n\}_{n\geq 1}\subseteq W^{1,p}_0(\Omega)$ is unbounded.
By passing to a subsequence if necessary, we may assume that \begin{equation}\label{eq21} \|u^+_n\|\to\infty. \end{equation} Let $y_n=\frac{u^+_n}{\|u^+_n\|}$, $n\in {\mathbb N}$. Then $\|y_n\|=1$, $y_n\geq 0$ for all $n\in {\mathbb N}$. So, we may assume that \begin{equation}\label{eq22} y_n\stackrel{w}{\to}y\ \text{in}\ W^{1,p}_0(\Omega)\quad \text{and}\quad y_n\to y \text{ in } L^p(\Omega),\; y\geq 0. \end{equation} From \eqref{eq19} and \eqref{eq20} we have \[ \big|\langle A(u^+_n),h\rangle-\int_{\Omega}g(z,u^+_n)hdz\big| \leq c_4\|h\|\quad \text{for some $c_4>0$ and all } n\in {\mathbb N} \] which implies \begin{equation} \label{eq23} \big|\langle A(y_n),h\rangle-\int_{\Omega}\frac{N_g(u^+_n)}{\|u^+_n\|^{p-1}}hdz \big|\leq\frac{c_4\|h\|}{\|u^+_n\|^{p-1}}\quad \text{for all } n\in {\mathbb N}. \end{equation} Hypotheses (H1)(i) and (H1)(ii) imply that there exists $c_5>0$ such that $$ |f(z,x)|\leq c_5(1+x^{p-1})\quad \text{for almost all $z\in\Omega$ and all } x>0. $$ From this growth estimate and \eqref{eq16}, it follows that $$ \Big\{\frac{N_g(u^+_n)}{\|u^+_n\|^{p-1}}\Big\}_{n\geq 1} \subseteq L^{p'}(\Omega) \text{ is bounded}. $$ So, by passing to a suitable subsequence if necessary and using hypothesis (H1)(ii) we have \begin{equation}\label{eq24} \frac{N_g(u^+_n)}{\|u^+_n\|^{p-1}}\stackrel{w}{\to}\tilde{\eta}(z)y^{p-1}\quad \text{in $L^{p'}(\Omega)$ as } n\to\infty, \end{equation} with $\hat{\lambda}_1\leq\tilde{\eta}(z)\leq\eta(z)$ for almost all $z\in\Omega$, see Aizicovici, Papageorgiou and Staicu \cite[proof of Proposition 16]{1}. Recall that $\underline{u}^{-\mu}\in L^{p'}(\Omega)$. Therefore \[ \big|\int_{\Omega}\underline{u}^{-\mu}hdz\big| \leq c_6\|h\|\quad \text{for some $c_6>0$ and all } h\in W^{1,p}_0(\Omega) \] which implies \begin{equation}\label{eq25} \frac{1}{\|u^+_n\|^{p-1}}\int_{\Omega}\underline{u}^{-\mu}hdz\to 0\quad \text{as $n\to\infty$, (see \eqref{eq21})}.
\end{equation} If in \eqref{eq23} we choose $h=y_n-y\in W^{1,p}_0(\Omega)$ and pass to the limit as $n\to\infty$, then using \eqref{eq22}, \eqref{eq24}, \eqref{eq25} we have $\lim_{n\to\infty}\langle A(y_n),y_n-y\rangle=0$ which implies \begin{equation}\label{eq26} y_n\to y \text{ in } W^{1,p}_0(\Omega),\quad \|y\|=1,\; y\geq 0\; (\text{see Proposition \ref{prop2}}). \end{equation} So, if in \eqref{eq23} we pass to the limit as $n\to\infty$ and use \eqref{eq24}, \eqref{eq25}, \eqref{eq26}, we obtain \[ \langle A(y),h\rangle=\int_{\Omega}\tilde{\eta}(z)y^{p-1}hdz\quad \text{for all } h\in W^{1,p}_0(\Omega) \] which implies \begin{equation}\label{eq27} -\Delta_py(z)=\tilde{\eta}(z)y(z)^{p-1}\quad \text{for almost all } z\in\Omega,\quad y|_{\partial \Omega}=0. \end{equation} Recall that $$ \hat{\lambda}_1\leq\tilde{\eta}(z)\leq\eta(z)\quad \text{for almost all } z\in\Omega\ (\text{see \eqref{eq24}}). $$ We first assume that $\hat{\lambda}_1\not\equiv\tilde{\eta}$. Then using Proposition \ref{prop3} we have $$ \hat{\lambda}_1(\tilde{\eta})<\hat{\lambda}_1(\hat{\lambda}_1)=1. $$ Also, from \eqref{eq27} and since $\|y\|=1$ (hence $y\neq 0$, see \eqref{eq26}), we infer that $y(\cdot)$ must be nodal, a contradiction to \eqref{eq22}. Next, we assume that $\tilde{\eta}(z)=\hat{\lambda}_1$ for almost all $z\in\Omega$. It follows from \eqref{eq27} that \[ y=\vartheta\hat{u}_1\quad \text{with $\vartheta>0$, see \eqref{eq26}}. \] Then $y\in \operatorname{int}C_+$ and so $y(z)>0$ for all $z\in\Omega$. Therefore \begin{equation} u^+_n(z)\to+\infty\ \text{for all}\ z\in\Omega\quad \text{as } n\to\infty,\label{eq28} \end{equation} which implies \[ f(z,u^+_n(z))u^+_n(z)-pF(z,u^+_n(z))\to-\infty \] for almost all $z\in\Omega$ as $n\to\infty$, see hypothesis (H1)(ii). This in turn implies \begin{equation} \int_{\Omega}[f(z,u^+_n)u^+_n-pF(z,u^+_n)]dz\to-\infty\quad (\text{by Fatou's lemma}).
\label{eq29} \end{equation} From \eqref{eq19} with $h=u^+_n\in W^{1,p}_0(\Omega)$, we have \begin{equation}\label{eq30} -\|Du^+_n\|^p_p+\int_{\Omega}g(z,u^+_n)u^+_ndz\geq-\epsilon_n\quad \text{for all } n\in {\mathbb N}. \end{equation} On the other hand, from \eqref{eq17} and \eqref{eq20}, we have \begin{equation}\label{eq31} \|Du^+_n\|^p_p-\int_{\Omega}pG(z,u^+_n)dz\geq-M_2\quad \text{for some $M_2>0$ and all } n\in {\mathbb N}. \end{equation} Adding \eqref{eq30} and \eqref{eq31}, we obtain \[ \int_{\Omega}[g(z,u^+_n)u^+_n-pG(z,u^+_n)]dz\geq-M_3\quad \text{for some $M_3>0$ and all } n\in {\mathbb N}, \] which implies \begin{equation}\label{eq32} \int_{\Omega}[f(z,u^+_n)u^+_n-pF(z,u^+_n)]dz\geq-M_4 \end{equation} for some $M_4>0$ and all $n\in{\mathbb N}$ (see \eqref{eq16} and \eqref{eq28}). Comparing \eqref{eq29} and \eqref{eq32}, we have a contradiction. This proves that \begin{align*} &\{u^+_n\}_{n\geq 1}\subseteq W^{1,p}_0(\Omega) \text{ is bounded},\\ &\Rightarrow \{u_n\}_{n\geq 1}\subseteq W^{1,p}_0(\Omega) \text{ is bounded (see \eqref{eq20})}. \end{align*} So, we may assume that $$ u_n\stackrel{w}{\to}u \text{ in } W^{1,p}_0(\Omega)\quad \text{and}\quad u_n\to u \text{ in } L^p(\Omega). $$ Then we obtain \begin{equation}\label{eq33} \int_{\Omega}g(z,u_n)(u_n-u)dz\to 0\quad \text{as } n\to\infty. \end{equation} If in \eqref{eq19} we choose $h=u_n-u\in W^{1,p}_0(\Omega)$, then \begin{align*} &\lim_{n\to\infty}\langle A(u_n),u_n-u\rangle=0,\\ &\Rightarrow u_n\to u\quad \text{in } W^{1,p}_0(\Omega)\ (\text{see Proposition \ref{prop2}}). \end{align*} This proves the claim. Note that \begin{equation}\label{eq34} \hat{\varphi}\big|_{[\underline{u},w]} =\varphi_0\big|_{[\underline{u},w]}\quad (\text{see \eqref{eq6} and \eqref{eq16}}). \end{equation} From the proof of Proposition \ref{prop4} we know that $u_0\in \operatorname{int}_{C^1_0(\overline{\Omega})}[\underline{u},w]$ is a minimizer of $\hat{\varphi}$. 
Hence it follows from \eqref{eq34} that $u_0$ is a local $C^1_0(\overline{\Omega})$-minimizer of $\varphi_0$. Invoking Giacomoni and Saudi \cite[Theorem 1.1]{4}, we can say that $u_0$ is a local $W^{1,p}_0(\Omega)$-minimizer of $\varphi_0$. Using \eqref{eq16} we can easily see that \begin{align*} K_{\varphi_0} &= \{u\in W^{1,p}_0(\Omega):\varphi'_0(u)=0\}\\ &\subseteq[\underline{u})\cap C_+ :=\{u\in C^1_0(\overline{\Omega}):\underline{u}(z)\leq u(z) \text{ for all } z\in\overline{\Omega}\}. \end{align*} So, we may assume that $K_{\varphi_0}$ is finite, or otherwise we already have infinitely many positive smooth solutions of \eqref{eq1}. Since $u_0$ is a local minimizer of $\varphi_0$, we can find $\rho\in(0,1)$ small such that \begin{equation}\label{eq35} \varphi_0(u_0)<\inf[\varphi_0(u):\|u-u_0\|=\rho]=m_{\rho} \end{equation} (see Aizicovici, Papageorgiou and Staicu \cite[proof of Proposition 29]{1}). Hypothesis (H1)(ii) implies that given any $\xi>0$, we can find $M_5=M_5(\xi)>0$ such that \begin{equation}\label{eq36} f(z,x)x-pF(z,x)\leq-\xi \text{ for almost all $z\in\Omega$ and all } x\geq M_5. \end{equation} We have \begin{align*} \frac{d}{dx}\Big(\frac{F(z,x)}{x^p}\Big) &=\frac{f(z,x)x^{p}-px^{p-1}F(z,x)}{x^{2p}} \\ &=\frac{f(z,x)x-pF(z,x)}{x^{p+1}} \\ &\leq-\frac{\xi}{x^{p+1}} \end{align*} for almost all $z\in\Omega$ and all $x\geq M_5$, see \eqref{eq36}. This implies \begin{equation}\label{eq37} \frac{F(z,x)}{x^p}-\frac{F(z,y)}{y^p} \leq\frac{\xi}{p}\big[\frac{1}{x^p}-\frac{1}{y^p}\big] \end{equation} for almost all $z\in\Omega$ and all $x\geq y\geq M_5$. Hypothesis (H1)(iii) implies \begin{equation}\label{eq38} \hat{\lambda}_1\leq\liminf_{x\to+\infty}\frac{pF(z,x)}{x^p} \leq\limsup_{x\to+\infty}\frac{pF(z,x)}{x^p}\leq\eta(z) \end{equation} uniformly for almost all $z\in\Omega$. In \eqref{eq37} we pass to the limit as $x\to+\infty$ and use \eqref{eq38}. We obtain that $\hat{\lambda}_1y^p-pF(z,y)\leq-\xi$ for almost all $z\in\Omega$ and all $y\geq M_5$. 
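In more detail, the limit step can be written out as follows: by the first inequality in \eqref{eq38}, $\liminf_{x\to+\infty}\frac{F(z,x)}{x^p}\geq\frac{\hat{\lambda}_1}{p}$, while $\frac{1}{x^p}\to 0$ as $x\to+\infty$. Hence, letting $x\to+\infty$ in \eqref{eq37}, we get, for almost all $z\in\Omega$ and all $y\geq M_5$, \[ \frac{\hat{\lambda}_1}{p}-\frac{F(z,y)}{y^p}\leq-\frac{\xi}{p}\,\frac{1}{y^p}, \] and multiplying by $py^p$ yields the inequality above.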
This implies \begin{equation}\label{eq39} \hat{\lambda}_1y^p-pF(z,y)\to-\infty\quad \text{as } y\to+\infty \text{ uniformly for a.a. } z\in\Omega. \end{equation} For $t>0$ big (so that $t\hat{u}_1\geq \underline{u}$; recall that $\hat{u}_1\in \operatorname{int}C_+$), we have \[ \varphi_0(t\hat{u}_1)\leq\frac{t^p}{p}\hat{\lambda}_1-\int_{\Omega}F(z,t\hat{u}_1)dz +c_7\quad \text{for some $c_7>0$, see \eqref{eq16}}, \] which implies \[ p\varphi_0(t\hat{u}_1)\leq\int_{\Omega}[\hat{\lambda}_1(t\hat{u}_1)^p -pF(z,t\hat{u}_1)]dz+pc_7, \] which in turn implies \begin{equation}\label{eq40} p\varphi_0(t\hat{u}_1)\to-\infty\quad \text{as } t\to+\infty\ (\text{see \eqref{eq39} and use Fatou's lemma}). \end{equation} Then \eqref{eq35}, \eqref{eq40} and the claim permit the use of Theorem \ref{thm1} (the mountain pass theorem) and so we can find $\hat{u}\in W^{1,p}_0(\Omega)$ such that \begin{equation}\label{eq41} \hat{u}\in K_{\varphi_0}\quad \text{and}\quad m_{\rho}\leq\varphi_0(\hat{u}). \end{equation} It follows from \eqref{eq35} and \eqref{eq41} that $\hat{u}\neq u_0$, $\hat{u}\in[\underline{u})\cap C_+$ and so $\hat{u}\in \operatorname{int}C_+$ is a second positive smooth solution of problem \eqref{eq1}. \end{proof} So, we can state the following multiplicity theorem for problem \eqref{eq1}. \begin{theorem}\label{thm6} If hypotheses {\rm (H1)} hold, then problem \eqref{eq1} has at least two positive smooth solutions $u_0$ and $\hat{u}$ in $\operatorname{int}C_+$. \end{theorem} \subsection*{Acknowledgments} This research was supported by the Slovenian Research Agency grants P1-0292, J1-8131, and J1-7025. V. D. R\u{a}dulescu acknowledges the support through a grant of the Ministry of Research and Innovation, CNCS--UEFISCDI, project number PN-III-P4-ID-PCE-2016-0130, within PNCDI III. \end{document}
\begin{document} \allowdisplaybreaks \title{Path independence of the additive functionals for stochastic differential equations driven by G-L\'evy processes*} \author{Huijie Qiao$^{1,2}$ and Jiang-Lun Wu$^3$} \thanks{{\it AMS Subject Classification(2010):} 60H10, 60G51.} \thanks{{\it Keywords:} Path independence, additive functionals, G-L\'evy processes, stochastic differential equations driven by G-L\'evy processes.} \thanks{*This work was partly supported by NSF of China (No. 11001051, 11371352, 11671083) and China Scholarship Council under Grant No. 201906095034.} \subjclass{} \date{} \dedicatory{1. School of Mathematics, Southeast University\\ Nanjing, Jiangsu 211189, China\\ 2. Department of Mathematics, University of Illinois at Urbana-Champaign\\ Urbana, IL 61801, USA\\ [email protected]\\ 3. Department of Mathematics, Computational Foundry, Swansea University\\ Bay Campus, Swansea SA1 8EN, UK\\ [email protected]} \begin{abstract} In this paper, we consider a type of stochastic differential equation driven by G-L\'evy processes. We prove that a class of their additive functionals is path independent, extending some known results. \end{abstract} \maketitle \section{Introduction} Recently, the development of mathematical finance has motivated the introduction of a new type of process, the G-Brownian motion (\cite{p2}). The related theory, such as stochastic calculus and stochastic differential equations (SDEs for short) driven by G-Brownian motions, has been widely studied (\cite{g, p2, peng, q1}). However, in some financial models, volatility uncertainty makes G-Brownian motions insufficient for simulating these models. One important reason lies in the continuity of their paths with respect to the time variable. 
To overcome this, Hu and Peng \cite{hp} introduced G-L\'evy processes, which have discontinuous (right continuous with left limits) paths. Later, Paczka \cite{pk1} defined It\^o-L\'evy stochastic integrals, deduced the It\^o formula, established SDEs driven by G-L\'evy processes and proved the existence and uniqueness of solutions of these equations under Lipschitz conditions. Most recently, under non-Lipschitz conditions, Wang and Gao \cite{wg} proved the well-posedness of SDEs driven by G-L\'evy processes and investigated the exponential stability of their solutions. Here we follow the line of \cite{wg}: we define additive functionals of SDEs driven by G-L\'evy processes and study their path independence. Concretely speaking, we consider the following SDE on ${\mathbb R}^d$: \begin{eqnarray} \mathrm{d} Y_t=b(t,Y_t)\mathrm{d} t +h_{ij}(t,Y_t)\mathrm{d} {\langle}B^i, B^j{\rangle}_t+\sigma(t,Y_t)\mathrm{d} B_t+\int_{{\mathbb R}^d\setminus\{0\}}f(t,Y_t,u)L(\mathrm{d} t,\mathrm{d} u), \label{glesde} \end{eqnarray} where $B$ is a G-Brownian motion, ${\langle}B^i, B^j{\rangle}_t$ is the mutual variation process of $B^i$ and $B^j$ for $i,j=1,2,\cdots,d$ and $L(\mathrm{d} t,\mathrm{d} u)$ is a G-random measure (see Subsection \ref{itointe}). The coefficients $b: [0,T]\times{\mathbb R}^d\mapsto{\mathbb R}^d$, $h_{ij}=h_{ji}: [0,T]\times{\mathbb R}^d\mapsto{\mathbb R}^d$, $\sigma: [0,T]\times{\mathbb R}^d\mapsto{\mathbb R}^{d\times d}$ and $f: [0,T]\times{\mathbb R}^d\times({\mathbb R}^d\setminus\{0\})\mapsto{\mathbb R}^d$ are Borel measurable. Here and hereafter we use the convention that repeated indices stand for summation. Under $({\bf H}^1_{b,h,\sigma,f})$-$({\bf H}^2_{b,h,\sigma,f})$ in Subsection \ref{sdegle}, by \cite[Theorem 3.1]{wg}, we know that Eq.(\ref{glesde}) has a unique solution $Y_t$. 
We then introduce additive functionals of $Y_t$ and define path independence for these functionals. Finally, we prove that these functionals are path independent under suitable assumptions. Next, we describe our motivations. First, Ren and Yang \cite{ry} proved path independence of additive functionals for SDEs driven by G-Brownian motions. Since those equations cannot fully meet the demands of applications, extending them is one of our motivations. Second, by analyzing some special cases, we find, somewhat surprisingly, that these additive functionals can be expressed explicitly, whereas in our previous results (\cite{qw1,qw2,qw3}) it is difficult to express additive functionals explicitly. This is our other motivation. This paper is arranged as follows. In Section \ref{pre}, we introduce G-L\'evy processes, It\^o-L\'evy stochastic integrals, SDEs driven by G-L\'evy processes, additive functionals, path independence and some related results. The main results and their proofs are placed in Section \ref{main}. Moreover, we analyse some special cases and compare our result with some known results (\cite{qw1,qw2,qw3, ry}) in Subsection \ref{com}. \section{Preliminary}\label{pre} In this section, we introduce some concepts and results used in the sequel. \subsection{Notation} In this subsection, we introduce the notation used in the sequel. For convenience, we shall use $|\cdot|$ and $\|\cdot\|$ for the norms of vectors and matrices, respectively. Furthermore, let $\langle\cdot,\cdot\rangle$ denote the scalar product in ${\mathbb R}^d$ and let $Q^*$ denote the transpose of a matrix $Q$. Let $lip({\mathbb R}^n)$ be the set of all Lipschitz continuous functions on ${\mathbb R}^n$ and $C_{b, lip}({\mathbb R}^d)$ the collection of all bounded Lipschitz continuous functions on ${\mathbb R}^d$. 
Let $C_b^3({\mathbb R}^d)$ be the space of bounded, three times continuously differentiable functions with bounded derivatives of all orders less than or equal to $3$. \subsection{G-L\'evy processes}\label{glevy} In this subsection, we introduce G-L\'evy processes (cf. \cite{hp}). Let $\Omega$ be a given set and ${\mathcal H}$ a linear space of real functions defined on $\Omega$ such that if $X_1,\dots,X_n\in{\mathcal H}$, then $\phi(X_1,\dots,X_n)\in{\mathcal H}$ for each $\phi\in lip({\mathbb R}^n)$. If $X\in{\mathcal H}$, we call $X$ a random variable. \begin{definition}\label{subexp} If a functional $\bar{{\mathbb E}}: {\mathcal H}\mapsto {\mathbb R}$ satisfies, for $X, Y\in{\mathcal H}$, (i) $X\geq Y$ implies $\bar{{\mathbb E}}[X]\geq\bar{{\mathbb E}}[Y]$, (ii) $\bar{{\mathbb E}}[X+Y]\leq\bar{{\mathbb E}}[X]+\bar{{\mathbb E}}[Y]$, (iii) for all $\lambda\geq 0$, $\bar{{\mathbb E}}[\lambda X]=\lambda\bar{{\mathbb E}}[X]$, (iv) for all $c\in{\mathbb R}$, $\bar{{\mathbb E}}[X+c]=\bar{{\mathbb E}}[X]+c$, we call $\bar{{\mathbb E}}$ a sublinear expectation on ${\mathcal H}$ and $(\Omega, {\mathcal H}, \bar{{\mathbb E}})$ a sublinear expectation space. \end{definition} Next, we define the distribution of a random vector on $(\Omega, {\mathcal H}, \bar{{\mathbb E}})$. For an $n$-dimensional random vector $X=(X_1, X_2, \cdots, X_n)$ with $X_i\in{\mathcal H}$, $i=1,2, \cdots, n$, set $$ F_X(\phi):=\bar{{\mathbb E}}(\phi(X)), \qquad \phi\in lip({\mathbb R}^n), $$ and we call $F_X$ the distribution of $X$. \begin{definition}\label{samedist} Assume that $X_1, X_2$ are two $n$-dimensional random vectors defined on possibly different sublinear expectation spaces. If for all $\phi\in lip({\mathbb R}^n)$, $$ F_{X_1}(\phi)=F_{X_2}(\phi), $$ we say that the distributions of $X_1, X_2$ are the same. 
\end{definition} \begin{definition}\label{indepen} For two random vectors $Y=(Y_1, Y_2, \cdots, Y_m)$ with $Y_j\in{\mathcal H}$ and $X=(X_1, X_2, \cdots, X_n)$ with $X_i\in{\mathcal H}$, if for all $\phi\in lip({\mathbb R}^n\times{\mathbb R}^m)$, $$ \bar{{\mathbb E}}[\phi(X,Y)]=\bar{{\mathbb E}}[\bar{{\mathbb E}}[\phi(x,Y)]_{x=X}], $$ we say that $Y$ is independent from $X$. \end{definition} Here, we use the two concepts above to define L\'evy processes on $(\Omega, {\mathcal H}, \bar{{\mathbb E}})$. \begin{definition}\label{defle} Let $X=(X_t)_{t\geq0}$ be a $d$-dimensional c\`adl\`ag process on $(\Omega, {\mathcal H}, \bar{{\mathbb E}})$. If $X$ satisfies (i) $X_0=0$; (ii) for $t, s\geq 0$, the increment $X_{s+t}-X_t$ is independent from $(X_{t_1}, X_{t_2}, \cdots, X_{t_n})$ for any $n$ and $0\leq t_1<t_2<\cdots<t_n\leq t$; (iii) the distribution of $X_{s+t}-X_t$ does not depend on $t$; we call $X$ a L\'evy process. \end{definition} \begin{definition}\label{defgle} Assume that $X$ is a $d$-dimensional L\'evy process. If there exists a decomposition $X_t=X_t^c+X_t^d$ for $t\geq 0$, where $(X_t^c, X_t^d)$ is a $2d$-dimensional L\'evy process satisfying $$ \lim\limits_{t\downarrow 0}\frac{\bar{{\mathbb E}}|X_t^c|^3}{t}=0, \quad \bar{{\mathbb E}}|X_t^d|\leq Ct, \quad t\geq 0, \quad C\geq 0, $$ we call $X$ a G-L\'evy process. \end{definition} In the following, we characterize G-L\'evy processes by partial differential equations. \begin{theorem}\label{gleop} Assume that $X$ is a $d$-dimensional G-L\'evy process. 
Then for $g\in C_b^3({\mathbb R}^d)$ with $g(0)=0$, set $$ G_X[g(\cdot)]:=\lim\limits_{t\downarrow 0}\frac{\bar{{\mathbb E}}[g(X_t)]}{t}, $$ and then $G_X$ has the following L\'evy-Khintchine representation \begin{eqnarray} G_X[g(\cdot)]=\sup\limits_{(\nu,\zeta,Q)\in{\mathcal U}}\left\{\int_{{\mathbb R}^d\setminus\{0\}}g(u)\nu(\mathrm{d} u)+{\langle}\partial_x g(0), \zeta{\rangle}+\frac{1}{2}\mathrm{tr}[\partial_x^2g(0)QQ^*]\right\}, \label{levykhin} \end{eqnarray} where ${\mathcal U}$ is a subset of ${\mathcal M}({\mathbb R}^d\setminus\{0\})\times{\mathbb R}^d\times{\mathbb R}^{d\times d}$, ${\mathcal M}({\mathbb R}^d\setminus\{0\})$ is the collection of all measures on $({\mathbb R}^d\setminus\{0\}, {\mathscr B}({\mathbb R}^d\setminus\{0\}))$, ${\mathbb R}^{d\times d}$ is the set of all $d\times d$ matrices and ${\mathcal U}$ satisfies \begin{eqnarray} \sup\limits_{(\nu,\zeta,Q)\in{\mathcal U}}\left\{\int_{{\mathbb R}^d\setminus\{0\}}|u|\nu(\mathrm{d} u)+|\zeta|+\frac{1}{2}\mathrm{tr}[QQ^*]\right\}<\infty. \label{usatcon} \end{eqnarray} \end{theorem} \begin{theorem}\label{glevis} Suppose that $X$ is a $d$-dimensional G-L\'evy process. Then for $\phi\in C_{b, lip}({\mathbb R}^d)$, $v(t,x):=\bar{{\mathbb E}}[\phi(x+X_t)]$ is the unique viscosity solution of the following partial integro-differential equation: \begin{eqnarray*} 0&=&\partial_t v(t,x)-G_X[v(t,x+\cdot)-v(t,x)]\\ &=&\partial_t v(t,x)-\sup\limits_{(\nu,\zeta,Q)\in{\mathcal U}}\bigg\{\int_{{\mathbb R}^d\setminus\{0\}}[v(t,x+u)-v(t,x)]\nu(\mathrm{d} u)+{\langle}\partial_x v(t,x), \zeta{\rangle}\\ &&\qquad\qquad\qquad\qquad+\frac{1}{2}\mathrm{tr}[\partial_x^2v(t,x)QQ^*]\bigg\} \end{eqnarray*} with the initial condition $v(0,x)=\phi(x)$. 
\end{theorem} Conversely, given a set ${\mathcal U}$ satisfying (\ref{usatcon}), is there a $d$-dimensional G-L\'evy process having the L\'evy-Khintchine representation (\ref{levykhin}) with the same set ${\mathcal U}$? The answer is affirmative. We take $\Omega:=D_0({\mathbb R}^+, {\mathbb R}^d)$, where $D_0({\mathbb R}^+, {\mathbb R}^d)$ is the space of all c\`adl\`ag functions ${\mathbb R}_+\ni t\mapsto \omega_t\in{\mathbb R}^d$ with $\omega_0=0$, equipped with the Skorokhod topology. \begin{theorem}\label{exisgle} Suppose that ${\mathcal U}$ satisfies (\ref{usatcon}). Then there exists a sublinear expectation $\bar{{\mathbb E}}$ on $\Omega$ such that the canonical process $X$ is a $d$-dimensional G-L\'evy process having the L\'evy-Khintchine representation (\ref{levykhin}) with the same set ${\mathcal U}$. \end{theorem} \subsection{A capacity}\label{capa} In this subsection, we introduce a capacity and related definitions. First of all, fix a set ${\mathcal U}$ satisfying (\ref{usatcon}) and $T>0$, take $\Omega_T:=D_0([0,T], {\mathbb R}^d)$ and the sublinear expectation $\bar{{\mathbb E}}$ from Theorem \ref{exisgle}. Thus, $(\Omega_T, {\mathcal H}, \bar{{\mathbb E}})$ is a sublinear expectation space, and we work on this space. Let $L_G^p(\Omega_T)$ be the completion of $lip(\Omega_T)$ under the norm $\|\cdot\|_p:=(\bar{{\mathbb E}}|\cdot|^p)^{1/p}$, $p\geq 1$. Let $$ {\mathcal V}:=\{\nu\in{\mathcal M}({\mathbb R}^d\setminus\{0\}): \exists (\zeta, Q)\in{\mathbb R}^d\times{\mathbb R}^{d\times d} ~\mbox{such that}~ (\nu, \zeta, Q)\in{\mathcal U}\} $$ and let ${\mathcal G}$ be the set of all Borel measurable functions $g: {\mathbb R}^d\mapsto{\mathbb R}^d$ with $g(0)=0$. 
{\bf Assumption:} \begin{enumerate}[(i)] \item There exists a measure $\mu\in{\mathcal M}({\mathbb R}^d)$ such that \begin{eqnarray*} \int_{{\mathbb R}^d\setminus\{0\}}|z|\mu(\mathrm{d} z)<\infty, \quad \mu(\{0\})=0, \end{eqnarray*} and for all $\nu\in{\mathcal V}$ there exists a function $g_{\nu}\in{\mathcal G}$ satisfying $$ \nu(A)=\mu(g_{\nu}^{-1}(A)), \quad \forall A\in{\mathscr B}({\mathbb R}^d\setminus\{0\}). $$ \item There exists $0<q<1$ such that $$ \sup\limits_{\nu\in{\mathcal V}}\int_{0<|z|<1}|z|^q\nu(\mathrm{d} z)<\infty. $$ \item $$ \sup\limits_{\nu\in{\mathcal V}}\nu({\mathbb R}^d\setminus\{0\})<\infty. $$ \end{enumerate} Let $(\tilde{\Omega}, {\mathscr F}, {\mathbb P})$ be a probability space supporting a Brownian motion $W$ and a Poisson random measure $N(\mathrm{d} t, \mathrm{d} z)$ with intensity measure $\mu(\mathrm{d} z)\mathrm{d} t$. Let $$ {\mathscr F}_t:=\sigma\left\{W_s, N((0,s],A): 0\leq s\leq t, A\in{\mathscr B}({\mathbb R}^d\setminus\{0\})\right\}\vee{\mathcal N}, \quad {\mathcal N}:=\{U\in{\mathscr F}: {\mathbb P}(U)=0\}. $$ We introduce the following set. \begin{definition}\label{autt} ${\mathcal A}_{0,T}^{{\mathcal U}}$ is the set of all processes $\theta_t=(\theta^d_t, \theta_t^{1,c}, \theta_t^{2,c})$, $t\in[0,T]$, satisfying (i) $(\theta_t^{1,c}, \theta_t^{2,c})$ is an ${\mathscr F}_t$-adapted process and $\theta^d$ is an ${\mathscr F}_t$-predictable random field on $[0,T]\times{\mathbb R}^d$, (ii) for ${\mathbb P}$-a.s. $\omega$ and a.e. 
$t\in[0,T]$, $$ (\theta^d(t,\cdot)(\omega), \theta^{1,c}_t(\omega), \theta^{2,c}_t(\omega))\in\left\{(g_{\nu},\zeta,Q)\in{\mathcal G}\times{\mathbb R}^d\times{\mathbb R}^{d\times d}: (\nu,\zeta,Q)\in{\mathcal U}\right\}, $$ (iii) $$ {\mathbb E}^{{\mathbb P}}\left[\int_0^T{\Big(}|\theta^{1,c}_t|+\|\theta^{2,c}_t\|^2+\int_{{\mathbb R}^d\setminus\{0\}}|\theta^d(t,z)|\mu(\mathrm{d} z){\Big)}\mathrm{d} t\right]<\infty. $$ \end{definition} For $\theta\in{\mathcal A}_{0,T}^{{\mathcal U}}$, set $$ B_t^{0,\theta}:=\int_0^t\theta^{1,c}_s\mathrm{d} s+\int_0^t\theta^{2,c}_s\mathrm{d} W_s+\int_0^t\int_{{\mathbb R}^d\setminus\{0\}}\theta^d(s,z)N(\mathrm{d} s,\mathrm{d} z), \quad t\in[0,T], $$ and by Corollary 14 in \cite{pk1}, it holds that for $\xi\in L_G^1(\Omega_T)$, $$ \bar{{\mathbb E}}[\xi]=\sup\limits_{\theta\in{\mathcal A}_{0,T}^{{\mathcal U}}}{\mathbb E}^{{\mathbb P}^{\theta}}[\xi], \quad {\mathbb P}^{\theta}={\mathbb P}\circ (B_{\cdot}^{0,\theta})^{-1}. $$ Then define $$ \bar{C}(D):=\sup\limits_{\theta\in{\mathcal A}_{0,T}^{{\mathcal U}}}{\mathbb P}^{\theta}(D), \quad D\in{\mathscr B}(\Omega_T); $$ $\bar{C}$ is a capacity. For $D\in{\mathscr B}(\Omega_T)$, if $\bar{C}(D)=0$, we call $D$ a polar set, and if a property holds outside a polar set, we say that the property holds quasi-surely (q.s. for short). \subsection{The It\^o integrals with respect to G-L\'evy processes}\label{itointe} In this subsection, we introduce the It\^o integrals with respect to G-L\'evy processes under the framework of the preceding subsection. Let $X$ denote the canonical process on this space, i.e. $X_t(\omega)=\omega_t$, $t\in[0,T]$; then $X$ is a $d$-dimensional G-L\'evy process. Although the It\^o integrals with respect to G-Brownian motions have been introduced in \cite{q1}, we need to introduce two related spaces used in the sequel. 
Take $0=t_0<t_1<\cdots<t_N=T$ and let $p\geq 1$ be fixed. Define \begin{eqnarray*} {\mathcal M}^{p,0}_G(0,T):=\Big\{\eta_t(\omega)=\sum\limits_{j=1}^N\xi_{j-1}(\omega)1_{[t_{j-1},t_j)}(t):\xi_{j-1}(\omega)\in L^p_G(\Omega_{t_{j-1}})\Big\}. \end{eqnarray*} Let ${\mathcal M}^p_G(0,T)$ and ${\mathcal H}_G^p(0,T)$ denote the completions of ${\mathcal M}^{p,0}_G(0,T)$ under the norms $$ \|\eta\|_{{\mathcal M}^p_G(0,T)}=\left(\int_0^T\bar{{\mathbb E}}|\eta_t|^p\mathrm{d} t\right)^{\frac{1}{p}} ~\mbox{and}~\|\eta\|_{{\mathcal H}^p_G(0,T)}=\left(\bar{{\mathbb E}}\left(\int_0^T|\eta_t|^2\mathrm{d} t\right)^{\frac{p}{2}}\right)^{\frac{1}{p}}, $$ respectively. Let ${\mathcal M}^p_G([0,T], {\mathbb R}^d)$ and ${\mathcal H}^p_G([0,T], {\mathbb R}^d)$ be the collections of all processes $$ \eta_t=(\eta^1_t, \eta^2_t, \cdots, \eta^d_t), \quad t\in[0,T], \quad \eta^i\in {\mathcal M}^{p}_G(0,T) ~\mbox{and}~ {\mathcal H}_G^p(0,T), $$ respectively. Next, we introduce the It\^o integrals with respect to random measures. First, define a random measure: for any $0\leq t\leq T$ and $A\in{\mathscr B}({\mathbb R}^d\setminus\{0\})$, \begin{eqnarray*} \kappa_t:=X_t-X_{t-}, \quad L((0,t], A):=\sum\limits_{0<s\leq t}I_A(\kappa_s), \quad q.s.. \end{eqnarray*} Then we define the It\^o integral with respect to the random measure $L(\mathrm{d} t, \mathrm{d} u)$. 
Let ${\mathcal H}^S_G([0,T]\times({\mathbb R}^d\setminus\{0\}))$ be the collection of all processes defined on $[0,T]\times({\mathbb R}^d\setminus\{0\})\times\Omega$ of the form $$ f(s,u)(\omega)=\sum\limits_{k=1}^{n-1}\sum\limits_{l=1}^m\phi_{k,l}(X_{t_1}, X_{t_2}-X_{t_1}, \cdots, X_{t_k}-X_{t_{k-1}})I_{[t_k, t_{k+1})}(s)\psi_l(u), \quad n,m\in{\mathbb N}, $$ where $0\leq t_1<\cdots<t_n\leq T$ is a partition of $[0,T]$, $\phi_{k,l}\in C_{b, lip}({\mathbb R}^{d\times k})$ and $\{\psi_l\}_{l=1}^m\subset C_{b, lip}({\mathbb R}^d)$ are functions with disjoint supports and $\psi_l(0)=0$. \begin{definition}\label{simito} For any $f\in{\mathcal H}^S_G([0,T]\times({\mathbb R}^d\setminus\{0\}))$, set $$ \int_0^t\int_{{\mathbb R}^d\setminus\{0\}}f(s,u)L(\mathrm{d} s, \mathrm{d} u):=\sum\limits_{0<s\leq t}f(s,\kappa_s)I_{{\mathbb R}^d\setminus\{0\}}(\kappa_s), \quad q.s.. $$ \end{definition} By Theorem 28 in \cite{pk1}, we have that $\int_0^t\int_{{\mathbb R}^d\setminus\{0\}}f(s,u)L(\mathrm{d} s, \mathrm{d} u)\in L_G^2(\Omega_T)$. Let ${\mathcal H}^2_G([0,T]\times({\mathbb R}^d\setminus\{0\}))$ be the completion of ${\mathcal H}^S_G([0,T]\times({\mathbb R}^d\setminus\{0\}))$ with respect to the norm $\|\cdot\|_{{\mathcal H}^2_G([0,T]\times({\mathbb R}^d\setminus\{0\}))}$, where $$ \|f\|_{{\mathcal H}^2_G([0,T]\times({\mathbb R}^d\setminus\{0\}))}:=\bar{{\mathbb E}}\left[\int_0^T\sup\limits_{\nu\in{\mathcal V}}\int_{{\mathbb R}^d\setminus\{0\}}|f(s,u)|^2\nu(\mathrm{d} u)\mathrm{d} s\right]^{1/2}, \quad f\in{\mathcal H}^S_G([0,T]\times({\mathbb R}^d\setminus\{0\})). 
$$ Thus, Corollary 29 in \cite{pk1} allows us to conclude that for $f\in{\mathcal H}^2_G([0,T]\times({\mathbb R}^d\setminus\{0\}))$, \begin{eqnarray} \int_0^t\int_{{\mathbb R}^d\setminus\{0\}}f(s,u)L(\mathrm{d} s, \mathrm{d} u)=\sum\limits_{0<s\leq t}f(s,\kappa_s)I_{{\mathbb R}^d\setminus\{0\}}(\kappa_s), \quad q.s.. \label{jumpin} \end{eqnarray} Let ${\mathcal H}^2_G([0,T]\times({\mathbb R}^d\setminus\{0\}), {\mathbb R}^d)$ be the space of all processes $$ f(t,u)={\Big(}f^1(t,u), f^2(t,u), \cdots, f^d(t,u){\Big)}, \quad f^i\in{\mathcal H}^2_G([0,T]\times({\mathbb R}^d\setminus\{0\})). $$ \subsection{Stochastic differential equations driven by G-L\'evy processes}\label{sdegle} In this subsection, we introduce stochastic differential equations driven by G-L\'evy processes and the related additive functionals. First, we introduce some notation. Let ${\mathbb S}^d$ be the space of all $d\times d$ symmetric matrices. For $A\in{\mathbb S}^d$, set $$ G(A):=\frac{1}{2}\sup\limits_{Q\in{\mathcal Q}}\mathrm{tr}[QQ^*A], $$ where ${\mathcal Q}$ is a nonempty, bounded, closed and convex subset of ${\mathbb R}^{d\times d}$. Then $G: {\mathbb S}^d\mapsto{\mathbb R}$ is a monotone, sublinear and positively homogeneous functional (\cite{peng}). We choose ${\mathcal U}\subset{\mathcal M}({\mathbb R}^d\setminus\{0\})\times\{0\}\times{\mathcal Q}$ satisfying (\ref{usatcon}) and still work under the framework of Subsection \ref{capa}. So, the canonical process $X_t$ can be represented as $X_t=B_t+X_t^d$, where $B_t$ is a G-Brownian motion associated with ${\mathcal Q}$ and $X_t^d$ is a pure jump G-L\'evy process associated with ${\mathcal M}({\mathbb R}^d\setminus\{0\})$. 
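To illustrate the functional $G$, we recall a standard one-dimensional example (cf. \cite{peng}); it is not needed in the sequel. For $d=1$ and ${\mathcal Q}=[\underline{\sigma},\overline{\sigma}]$ with $0\leq\underline{\sigma}\leq\overline{\sigma}$, we have, for $a\in{\mathbb R}$, \[ G(a)=\frac{1}{2}\sup_{q\in[\underline{\sigma},\overline{\sigma}]}q^2a=\frac{1}{2}\big(\overline{\sigma}^2a^+-\underline{\sigma}^2a^-\big), \] where $a^+:=\max\{a,0\}$ and $a^-:=\max\{-a,0\}$; this is the generator associated with the classical one-dimensional G-Brownian motion.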
Next, we consider Eq.(\ref{glesde}) and assume: \begin{enumerate}[(${\bf H}^1_{b,h,\sigma,f}$)] \item There exists a constant $C_1>0$ such that for any $t\in[0,T]$ and $x,y\in{\mathbb R}^d$, \begin{eqnarray*} &&|b(t,x)-b(t,y)|^2+|h_{ij}(t,x)-h_{ij}(t,y)|^2+\|\sigma(t,x)-\sigma(t,y)\|^2\\ &&+\sup\limits_{\nu\in{\mathcal V}}\int_{{\mathbb R}^d\setminus\{0\}}|f(t,x,u)-f(t,y,u)|^2\nu(\mathrm{d} u)\leq C_1\rho(|x-y|^2), \end{eqnarray*} where $\rho:(0,+\infty)\mapsto(0,+\infty)$ is a continuous, increasing and concave function such that \begin{eqnarray*} \rho(0+)=0, \quad \int_0^1\frac{dr}{\rho(r)}=+\infty. \end{eqnarray*} \end{enumerate} \begin{enumerate}[(${\bf H}^2_{b,h,\sigma,f}$)] \item There exists a constant $C_2>0$ such that for any $t\in[0,T]$, \begin{eqnarray*} |b(t,0)|^2+|h_{ij}(t,0)|^2+\|\sigma(t,0)\|^2+\sup\limits_{\nu\in{\mathcal V}}\int_{{\mathbb R}^d\setminus\{0\}}|f(t,0,u)|^2\nu(\mathrm{d} u)\leq C_2. \end{eqnarray*} \end{enumerate} By \cite[Theorem 3.1]{wg}, we know that under $({\bf H}^1_{b,h,\sigma,f})$-$({\bf H}^2_{b,h,\sigma,f})$, Eq.(\ref{glesde}) has a unique solution $Y_t$ with \begin{eqnarray} \bar{{\mathbb E}}\left[\sup\limits_{t\in[0,T]}|Y_t|^2\right]<\infty. 
\label{estiy} \end{eqnarray} Then we introduce the following additive functional: \begin{eqnarray} F_{s,t}&:=&\alpha\int_s^t G(g_1)(r,Y_r)\mathrm{d} r+\beta\int_s^t g^{ij}_1(r,Y_r)\mathrm{d} {\langle}B^i,B^j{\rangle}_r+\int_s^t {\langle}g_2(r,Y_r),\mathrm{d} B_r{\rangle}\nonumber\\ &&+\int_s^t\int_{{\mathbb R}^d\setminus\{0\}}g_3(r,Y_r, u)L(\mathrm{d} r, \mathrm{d} u)+\int_s^t\sup\limits_{\nu\in{\mathcal V}}\int_{{\mathbb R}^d\setminus\{0\}}\gamma g_3(r,Y_r, u)\nu(\mathrm{d} u)\mathrm{d} r, \nonumber\\ &&\quad 0\leq s<t\leq T, \label{addfun} \end{eqnarray} where $\alpha, \beta, \gamma\in{\mathbb R}$ are three constants and \begin{eqnarray*} &&g_1: [0,T]\times{\mathbb R}^d\mapsto{\mathbb R}^{d\times d}, \quad g_1^{ij}=g_1^{ji}, \\ &&g_2:[0,T]\times{\mathbb R}^d\mapsto{\mathbb R}^d, \\ &&g_3:[0,T]\times{\mathbb R}^d\times({\mathbb R}^d\setminus\{0\})\mapsto{\mathbb R}, \end{eqnarray*} are Borel measurable, so that $F_{s,t}$ is well-defined. \begin{definition}\label{pathinde} The additive functional $F_{s,t}$ is called path independent if there exists a function $$ V: [0,T]\times{\mathbb R}^d\mapsto{\mathbb R}, $$ such that for any $s\in[0, T]$ and $Y_s\in L^2_G(\Omega_T)$, the solution $(Y_t)_{t\in[s,T]}$ of Eq.(\ref{glesde}) satisfies \begin{eqnarray} F_{s,t}=V(t,Y_t)-V(s,Y_s). \label{defi} \end{eqnarray} \end{definition} \section{Main results and their proofs}\label{main} In this section, we state and prove the main results under the framework of Subsection \ref{sdegle}; we then analyse some special cases and compare our result with some known results. \subsection{Main results}\label{maires} In this subsection, we state and prove the main results. Let us begin with a key lemma. 
\begin{lemma}\label{zero} Assume that ${\mathcal Q}$ is bounded away from $0$ and that $Z_t$ is a $1$-dimensional G-It\^o-L\'evy process, i.e. \begin{eqnarray} Z_t=\int_0^t\Gamma_s\mathrm{d} s+\int_0^t\Phi_{ij}(s)\mathrm{d} {\langle}B^i, B^j{\rangle}_s+\int_0^t{\langle}\Psi_s,\mathrm{d} B_s{\rangle}+\int_0^t\int_{{\mathbb R}^d\setminus\{0\}}K(s,u)L(\mathrm{d} s, \mathrm{d} u), \label{gitole} \end{eqnarray} where $\Gamma\in{\mathcal M}^1_G(0,T)$, $\Phi_{ij}\in{\mathcal M}^1_G(0,T)$, $\Phi_{ij}=\Phi_{ji}$, $i,j=1,2,\cdots,d$, $\Psi\in{\mathcal H}^1_G([0,T], {\mathbb R}^d)$ and $K\in{\mathcal H}^2_G([0,T]\times({\mathbb R}^d\setminus\{0\}))$. Then $Z_t=0$ for all $t\in[0,T]$ q.s. if and only if $\Gamma_t=0, \Phi_{ij}(t)=0, \Psi_t=0$ a.e.$\times$q.s. on $[0,T]\times\Omega_T$ and $K(t,u)=0$ a.e.$\times$a.e.$\times$q.s. on $[0,T]\times({\mathbb R}^d\setminus\{0\})\times\Omega_T$. \end{lemma} \begin{proof} Sufficiency is immediate upon inserting $\Gamma_t=0, \Phi_{ij}(t)=0, \Psi_t=0, K(t,u)=0$ into (\ref{gitole}). Let us prove necessity. If $Z_t=0$ for all $t\in[0,T]$, we get that \begin{eqnarray} 0=\int_0^t\Gamma_s\mathrm{d} s+\int_0^t\Phi_{ij}(s)\mathrm{d} {\langle}B^i, B^j{\rangle}_s+\int_0^t\Psi_s^*\mathrm{d} B_s+\int_0^t\int_{{\mathbb R}^d\setminus\{0\}}K(s,u)L(\mathrm{d} s, \mathrm{d} u). 
\label{gitole1}
\end{eqnarray}
By taking the quadratic covariation with $\int_0^t{\langle}\Psi_s, \mathrm{d} B_s{\rangle}$ on both sides of (\ref{gitole1}), it holds that
\begin{eqnarray*}
0&=&{\langle}\int_0^{\cdot}\Psi^*_s\mathrm{d} B_s, \int_0^{\cdot}\Psi^*_s\mathrm{d} B_s{\rangle}_t=\int_0^t\Psi^{i}_s\Psi^{j}_s\mathrm{d} {\langle}B^i, B^j{\rangle}_s=\int_0^t\tr(\Psi_s\Psi^*_s\mathrm{d} {\langle}B{\rangle}_s)\\
&=&\int_0^t\tr(\mathrm{d} {\langle}B{\rangle}_s\Psi_s\Psi^*_s)=\int_0^t{\langle}\mathrm{d} {\langle}B{\rangle}_s\Psi_s, \Psi_s{\rangle},
\end{eqnarray*}
where
\begin{eqnarray*}
{\langle}B{\rangle}:=\left(\begin{array}{cccc}
{\langle}B^1,B^1{\rangle} & {\langle}B^1,B^2{\rangle} & \cdots & {\langle}B^1,B^d{\rangle}\\
\vdots & \vdots & & \vdots\\
{\langle}B^d,B^1{\rangle} & {\langle}B^d,B^2{\rangle} & \cdots & {\langle}B^d,B^d{\rangle}
\end{array}\right).
\end{eqnarray*}
Note that ${\mathcal Q}$ is bounded away from $0$. Thus, there exists a constant $\iota>0$ such that ${\langle}B{\rangle}_s\geq\iota sI_d$, and then
$$
0=\int_0^t{\langle}\mathrm{d} {\langle}B{\rangle}_s\Psi_s, \Psi_s{\rangle}\geq \iota\int_0^t{\langle}\Psi_s, \Psi_s{\rangle}\mathrm{d} s.
$$
From this, we know that $\Psi_t=0$ a.e.$\times$q.s. So, (\ref{gitole1}) becomes
$$
0=\int_0^t\Gamma_s\mathrm{d} s+\int_0^t\Phi_{ij}(s)\mathrm{d} {\langle}B^i, B^j{\rangle}_s+\int_0^t\int_{{\mathbb R}^d\setminus\{0\}}K(s,u)L(\mathrm{d} s, \mathrm{d} u).
$$
Next, set
$$
\tau_0=0, \quad \tau_n:=\inf\{t>\tau_{n-1}: \kappa_t\neq 0 \}, \quad n=1,2,\cdots;
$$
then $\{\tau_n\}$ is a sequence of stopping times with respect to $({\mathscr B}_t)_{t\geq0}$, and $\tau_n\uparrow\infty$ as $n\rightarrow\infty$ q.s. (cf. \cite[Proposition 16]{pk1}).
So, by (\ref{jumpin}) it holds that for $t\in[0, \tau_1\wedge T)$,
\begin{eqnarray*}
0=\int_0^{\tau_1\wedge T}\Gamma_s\mathrm{d} s+\int_0^{\tau_1\wedge T}\Phi_{ij}(s)\mathrm{d} {\langle}B^i, B^j{\rangle}_s,
\end{eqnarray*}
i.e.
$$
-\int_0^{\tau_1\wedge T}\Gamma_s\mathrm{d} s=\int_0^{\tau_1\wedge T}\Phi_{ij}(s)\mathrm{d} {\langle}B^i, B^j{\rangle}_s.
$$
Thus, by a deduction similar to that in \cite[Corollary 1]{ry}, one can obtain that
$$
\bar{{\mathbb E}}\int_0^{\tau_1\wedge T}\left(\tr[\Phi_s\Phi_s]\right)^{1/2}\mathrm{d} s=\bar{{\mathbb E}}\int_0^{\tau_1\wedge T}|\Gamma_s|\mathrm{d} s=0.
$$
Based on this, we know that $\Phi_t=0, \Gamma_t=0$ for $t\in[0, \tau_1\wedge T)$. If $\tau_1\geq T$, the proof is finished; if $\tau_1<T$, we continue. For $t=\tau_1$, (\ref{gitole1}) yields
$$
\Phi_t=0, \quad \Gamma_t=0, \quad K(t,\kappa_t)=0.
$$
For $t\in[\tau_1, \tau_2\wedge T)$, by the same argument as for $t\in[0, \tau_1\wedge T)$, we get that $\Phi_t=0, \Gamma_t=0$ for $t\in[\tau_1, \tau_2\wedge T)$. If $\tau_2\geq T$, the proof is finished; if $\tau_2<T$, we continue until $T\leq\tau_n$. Thus, we obtain that $\Gamma_t=0, \Phi_{ij}(t)=0, \Psi_t=0$ a.e.$\times$q.s. on $[0,T]\times\Omega_T$ and $K(t,u)=0$ a.e.$\times$a.e.$\times$q.s. on $[0,T]\times({\mathbb R}^d\setminus\{0\})\times\Omega_T$. The proof is complete.
\end{proof}

The main result of this section is the following theorem.

\bt\label{suffnece}
Assume that ${\mathcal Q}$ is bounded away from $0$ and that $b, h, \sigma, f$ satisfy (${\bf H}^1_{b,h,\sigma,f}$)-(${\bf H}^2_{b,h,\sigma,f}$).
Then for $V\in C^{1,2}_b([0,T]\times{\mathbb R}^d)$, $F_{s,t}$ is path independent in the sense of (\ref{defi}) if and only if $(V, g_1, g_2, g_3)$ satisfies the partial integro-differential equation
\begin{eqnarray}\left\{\begin{array}{ll}
\partial_tV(t,x)+{\langle}\partial_xV(t,x), b(t,x){\rangle}=\alpha G(g_1)(t,x)+\sup\limits_{\nu\in{\mathcal V}}\int_{{\mathbb R}^d\setminus\{0\}}\gamma g_3(t,x, u)\nu(\mathrm{d} u),\\
{\langle}\partial_xV(t,x), h_{ij}(t,x){\rangle}+\frac{1}{2}{\langle}\partial^2_xV(t,x)\sigma^i(t,x), \sigma^j(t,x){\rangle}=\beta g^{ij}_1(t, x),\\
(\sigma^T\partial_x V)(t, x)=g_2(t, x),\\
V{\Big(}t, x+f(t,x,u){\Big)}-V(t, x)=g_3(t, x,u), \\
t\in[0,T], x\in{\mathbb R}^d, u\in{\mathbb R}^d\setminus\{0\}.
\end{array}\right.
\label{three}
\end{eqnarray}
\et

\begin{proof}
First, we prove necessity. On the one hand, since $F_{s,t}$ is path independent in the sense of (\ref{defi}), by Definition \ref{pathinde} it holds that
\begin{eqnarray}
V(t,Y_t)-V(s,Y_s)&=&\alpha\int_s^t G(g_1)(r,Y_r)\mathrm{d} r+\beta\int_s^t g^{ij}_1(r,Y_r)\mathrm{d} {\langle}B^i,B^j{\rangle}_r+\int_s^t {\langle}g_2(r,Y_r),\mathrm{d} B_r{\rangle}\nonumber\\
&&+\int_s^t\int_{{\mathbb R}^d\setminus\{0\}}g_3(r,Y_r, u)L(\mathrm{d} r, \mathrm{d} u)+\int_s^t\sup\limits_{\nu\in{\mathcal V}}\int_{{\mathbb R}^d\setminus\{0\}}\gamma g_3(r,Y_r, u)\nu(\mathrm{d} u)\mathrm{d} r.\nonumber\\
\label{pathindepen}
\end{eqnarray}
On the other hand, applying the It\^o formula for G-It\^o-L\'evy processes (\cite[Theorem 32]{pk1}) to $V(t,Y_t)$, one can obtain that
\begin{eqnarray}
V(t,Y_t)-V(s,Y_s)&=&\int_s^t\partial_rV(r,Y_r)\mathrm{d} r+\int_s^t\partial_kV(r,Y_r)b^k(r,Y_r)\mathrm{d} r\nonumber\\
&&+\int_s^t\partial_kV(r,Y_r)h^k_{ij}(r,Y_r)\mathrm{d} {\langle}B^i,
B^j{\rangle}_r+\int_s^t{\langle}(\sigma^T\partial_xV)(r,Y_r),\mathrm{d} B_r{\rangle}\nonumber\\
&&+\int_s^t\int_{{\mathbb R}^d\setminus\{0\}}{\Big(}V(r,Y_r+f(r,Y_r,u))-V(r,Y_r){\Big)}L(\mathrm{d} r,\mathrm{d} u)\nonumber\\
&&+\frac{1}{2}\int_s^t\partial_{kl}V(r,Y_r)\sigma^{ki}(r,Y_r)\sigma^{lj}(r,Y_r)\mathrm{d} {\langle}B^i, B^j{\rangle}_r.
\label{itofor}
\end{eqnarray}
By (\ref{estiy}) and (${\bf H}^1_{b,h,\sigma,f}$)-(${\bf H}^2_{b,h,\sigma,f}$), one can verify that
\begin{eqnarray*}
&&\partial_rV(r,Y_r)+\partial_kV(r,Y_r)b^k(r,Y_r)\in{\mathcal M}^1_G(0,T),\\
&&\partial_kV(r,Y_r)h^k_{ij}(r,Y_r)+\frac{1}{2}\partial_{kl}V(r,Y_r)\sigma^{ki}(r,Y_r)\sigma^{lj}(r,Y_r)\in{\mathcal M}^1_G(0,T),\\
&&(\sigma^T\partial_xV)(r,Y_r)\in{\mathcal H}^1_G([0,T], {\mathbb R}^d),\\
&&V(r,Y_r+f(r,Y_r,u))-V(r,Y_r)\in{\mathcal H}^2_G([0,T]\times({\mathbb R}^d\setminus\{0\})).
\end{eqnarray*}
Thus, by (\ref{pathindepen}), (\ref{itofor}) and Lemma \ref{zero}, we know that
\begin{eqnarray*}\left\{\begin{array}{ll}
\partial_rV(r,Y_r)+{\langle}\partial_xV(r,Y_r), b(r,Y_r){\rangle}=\alpha G(g_1)(r,Y_r)+\sup\limits_{\nu\in{\mathcal V}}\int_{{\mathbb R}^d\setminus\{0\}}\gamma g_3(r,Y_r, u)\nu(\mathrm{d} u),\\
{\langle}\partial_xV(r,Y_r), h_{ij}(r,Y_r){\rangle}+\frac{1}{2}{\langle}\partial^2_xV(r,Y_r)\sigma^i(r,Y_r), \sigma^j(r,Y_r){\rangle}=\beta g^{ij}_1(r,Y_r),\\
(\sigma^T\partial_x V)(r,Y_r)=g_2(r,Y_r),\\
V{\Big(}r,Y_r+f(r,Y_r,u){\Big)}-V(r,Y_r)=g_3(r,Y_r,u), \quad a.e.\times q.s.
\end{array}\right.
\end{eqnarray*}
Now, we insert $r=s, Y_s=x\in{\mathbb R}^d$ into the above equalities and get that
\begin{eqnarray*}\left\{\begin{array}{ll}
\partial_sV(s,x)+{\langle}\partial_xV(s,x), b(s,x){\rangle}=\alpha G(g_1)(s,x)+\sup\limits_{\nu\in{\mathcal V}}\int_{{\mathbb R}^d\setminus\{0\}}\gamma g_3(s,x, u)\nu(\mathrm{d} u),\\
{\langle}\partial_xV(s,x), h_{ij}(s,x){\rangle}+\frac{1}{2}{\langle}\partial^2_xV(s,x)\sigma^i(s,x), \sigma^j(s,x){\rangle}=\beta g^{ij}_1(s, x),\\
(\sigma^T\partial_x V)(s, x)=g_2(s, x),\\
V{\Big(}s, x+f(s,x,u){\Big)}-V(s, x)=g_3(s, x,u), \\
s\in[0,T], x\in{\mathbb R}^d, u\in{\mathbb R}^d\setminus\{0\}.
\end{array}\right.
\end{eqnarray*}
Since $s, x$ are arbitrary, we have (\ref{three}).

Next, we treat sufficiency. Applying the It\^o formula for G-It\^o-L\'evy processes to $V(t,Y_t)$, we have (\ref{itofor}). Then one can apply (\ref{three}) to (\ref{itofor}) to get (\ref{pathindepen}). That is, $F_{s,t}$ is path independent in the sense of (\ref{defi}). The proof is complete.
\end{proof}

\subsection{Some special cases}

In this subsection, we analyze some special cases.

If $b(t,x)=0, h_{ij}(t,x)=0, \sigma(t,x)=I_d, f(t,x,u)=u$, Eq.(\ref{glesde}) becomes
\begin{eqnarray*}
\mathrm{d} Y_t=\mathrm{d} B_t+\int_{{\mathbb R}^d\setminus\{0\}}uL(\mathrm{d} t, \mathrm{d} u)=\mathrm{d} X_t.
\end{eqnarray*}
We take $\alpha=1, \beta=\frac{1}{2}, \gamma=1$.
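As a quick check for the reader, substituting $b=0$, $h_{ij}=0$, $\sigma=I_d$, $f(t,x,u)=u$ and these constants into (\ref{three}) reduces its equations term by term (here $e_i$ denotes the $i$-th coordinate vector):
\begin{eqnarray*}
&&\frac{1}{2}{\langle}\partial^2_xV(t,x)e_i, e_j{\rangle}=\frac{1}{2}g^{ij}_1(t,x)\ \Longrightarrow\ g_1=\partial^2_xV,\\
&&(\sigma^T\partial_x V)(t,x)=\partial_xV(t,x)=g_2(t,x),\\
&&V(t,x+u)-V(t,x)=g_3(t,x,u),
\end{eqnarray*}
while the drift term in the first equation of (\ref{three}) vanishes since $b=0$.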
Thus, by Theorem \ref{suffnece}, it holds that $F_{s,t}$ is path independent in the sense of (\ref{defi}) if and only if $(V, g_1, g_2, g_3)$ satisfies the partial integro-differential equation
\begin{eqnarray*}\left\{\begin{array}{ll}
\partial_tV(t,x)=G(\partial^2_xV)(t,x)+\sup\limits_{\nu\in{\mathcal V}} \int_{{\mathbb R}^d\setminus\{0\}}(V(t, x+u)-V(t, x))\nu(\mathrm{d} u),\\
\partial^2_xV(t,x)= g_1(t, x),\\
\partial_x V(t, x)=g_2(t, x),\\
V(t, x+u)-V(t, x)=g_3(t, x,u), \\
t\in[0,T], x\in{\mathbb R}^d, u\in{\mathbb R}^d\setminus\{0\}.
\end{array}\right.
\end{eqnarray*}
Besides, by Theorem \ref{glevis}, it holds that for $\phi\in C_{b, lip}({\mathbb R}^d)$, $V(t,x)=\bar{{\mathbb E}}[\phi(x+X_t)]$ is the unique viscosity solution of the following partial integro-differential equation:
\begin{eqnarray*}
\partial_t V(t,x)-G(\partial^2_xV)(t,x)-\sup\limits_{\nu\in{\mathcal V}}\int_{{\mathbb R}^d\setminus\{0\}}{\Big(}V(t, x+u)-V(t, x){\Big)}\nu(\mathrm{d} u)=0
\end{eqnarray*}
with the initial condition $V(0,x)=\phi(x)$. So,
\begin{eqnarray*}
&&g_1(t, x)=\partial^2_x\bar{{\mathbb E}}[\phi(x+X_t)], \\
&&g_2(t, x)=\partial_x\bar{{\mathbb E}}[\phi(x+X_t)], \\
&&g_3(t, x,u)=\bar{{\mathbb E}}[\phi(x+u+X_t)]-\bar{{\mathbb E}}[\phi(x+X_t)].
\end{eqnarray*}
That is, we can determine $g_1, g_2, g_3$ explicitly.
If $d=1, \alpha=1, \beta=0, \gamma=0$, it follows from Theorem \ref{suffnece} that $F_{s,t}$ is path independent in the sense of (\ref{defi}) if and only if $(V, g_1, g_2, g_3)$ satisfies the partial integro-differential equation
\begin{eqnarray*}\left\{\begin{array}{ll}
\partial_tV(t,x)+\partial_xV(t,x)b(t,x)=G(g_1)(t,x),\\
\partial_xV(t,x)h(t,x)+\frac{1}{2}\partial^2_xV(t,x)\sigma^2(t,x)=0,\\
(\sigma\partial_x V)(t, x)=g_2(t, x),\\
V{\Big(}t, x+f(t,x,u){\Big)}-V(t, x)=g_3(t, x,u), \\
t\in[0,T], x\in{\mathbb R}, u\in{\mathbb R}\setminus\{0\}.
\end{array}\right.
\end{eqnarray*}
If, moreover, $\sigma(t,x)\neq 0$, the unique solution of the second equation above is
$$
V(t,x)=V(t,0)+\partial_xV(t,0)\int_0^xe^{-2\int_0^z\frac{h(t,v)}{\sigma^2(t,v)}\mathrm{d} v}\mathrm{d} z.
$$
Besides, since ${\mathcal Q}$ is bounded away from $0$, $G$ is invertible. Thus,
\begin{eqnarray*}
&&g_1(t,x)=G^{-1}{\Big(}\partial_tV(t,x)+b(t,x)\partial_xV(t,0)e^{-2\int_0^x\frac{h(t,v)}{\sigma^2(t,v)}\mathrm{d} v}{\Big)},\\
&&g_2(t, x)=\sigma(t, x)\partial_xV(t,0)e^{-2\int_0^x\frac{h(t,v)}{\sigma^2(t,v)}\mathrm{d} v},\\
&&g_3(t, x,u)=\partial_xV(t,0)\int_x^{x+f(t,x,u)}e^{-2\int_0^z\frac{h(t,v)}{\sigma^2(t,v)}\mathrm{d} v}\mathrm{d} z.
\end{eqnarray*}
That is, we can also give $g_1, g_2, g_3$ explicitly in this case.

\br
In the above special cases, we can describe $g_1, g_2, g_3$ concretely. This is interesting and is also one of our motivations.
\er

\subsection{Comparison with some known results}\label{com}

In this subsection, we compare our result with some known results.

First, if we take $f(t,x,u)=0$ in Eq.(\ref{glesde}) and $g_3(t,x,u)=0$ in (\ref{addfun}), Theorem \ref{suffnece} becomes \cite[Theorem 2]{ry}. Therefore, our result is more general.
Second, we take ${\mathcal M}({\mathbb R}^d\setminus\{0\})=\{\nu\}, {\mathcal Q}=\{I_d\}$ in Subsection \ref{sdegle}. Thus, $B$ is a classical Brownian motion with ${\langle}B^i, B^j{\rangle}_t=I_{i=j}t$ and $L(\mathrm{d} t, \mathrm{d} u)$ is a classical Poisson random measure. Then Eq.(\ref{glesde}) becomes
\begin{eqnarray}
\mathrm{d} Y_t=b(t,Y_t)\mathrm{d} t +\sum\limits_{i=1}^d h_{ii}(t,Y_t)\mathrm{d} t+\sigma(t,Y_t)\mathrm{d} B_t+\int_{{\mathbb R}^d\setminus\{0\}}f(t,Y_t,u)L(\mathrm{d} t,\mathrm{d} u).
\label{special}
\end{eqnarray}
Note that $\nu({\mathbb R}^d\setminus\{0\})<\infty$. So, by (\ref{estiy}) and (${\bf H}^1_{b,h,\sigma,f}$)-(${\bf H}^2_{b,h,\sigma,f}$), \cite[Theorem 13]{pk2} allows us to conclude that
$$
\int_0^t\int_{{\mathbb R}^d\setminus\{0\}}f(s,Y_s,u)\tilde{L}(\mathrm{d} s,\mathrm{d} u):=\int_0^t\int_{{\mathbb R}^d\setminus\{0\}}f(s,Y_s,u)L(\mathrm{d} s,\mathrm{d} u)-\int_0^t\int_{{\mathbb R}^d\setminus\{0\}}f(s,Y_s,u)\nu(\mathrm{d} u)\mathrm{d} s
$$
is a ${\mathscr B}_t$-martingale, where ${\mathscr B}_t:=\sigma\{\omega_s, 0\leq s\leq t\}, 0\leq t\leq T$. Therefore, we can rewrite Eq.(\ref{special}) as
\begin{eqnarray*}
\mathrm{d} Y_t&=&b(t,Y_t)\mathrm{d} t +\sum\limits_{i=1}^d h_{ii}(t,Y_t)\mathrm{d} t+\int_{{\mathbb R}^d\setminus\{0\}}f(t,Y_t,u)\nu(\mathrm{d} u)\mathrm{d} t+\sigma(t,Y_t)\mathrm{d} B_t\\
&&+\int_{{\mathbb R}^d\setminus\{0\}}f(t,Y_t,u)\tilde{L}(\mathrm{d} t,\mathrm{d} u).
\end{eqnarray*}
This is a classical stochastic differential equation with jumps. Thus, by \cite[Theorem 1.2]{q0} it holds that under (${\bf H}^1_{b,h,\sigma,f}$)-(${\bf H}^2_{b,h,\sigma,f}$), the above equation has a unique solution. In the following, note that $G(g_1)=\frac{1}{2}\tr(g_1)=\frac{1}{2}\sum\limits_{i=1}^d g_1^{ii}$.
Then $F_{s,t}$ can be represented as
\begin{eqnarray*}
F_{s,t}&=&\int_s^t \left(\frac{\alpha}{2}+\beta\right)\sum\limits_{i=1}^d g_1^{ii}(r,Y_r)\mathrm{d} r+\int_s^t {\langle}g_2(r,Y_r),\mathrm{d} B_r{\rangle}+\int_s^t\int_{{\mathbb R}^d\setminus\{0\}}g_3(r,Y_r, u)\tilde{L}(\mathrm{d} r, \mathrm{d} u)\\
&&+\int_s^t\int_{{\mathbb R}^d\setminus\{0\}}(1+\gamma )g_3(r,Y_r, u)\nu(\mathrm{d} u)\mathrm{d} r.
\end{eqnarray*}
This is exactly \cite[(3)]{qw3} without the distribution of $Y_r$ for $r\in[s,t]$. So, in this case Definition \ref{pathinde} and Theorem \ref{suffnece} are Definition 2.1 and Theorem 3.2 in \cite{qw3} without the distribution of $Y_r$ for $r\in[s,t]$, respectively. Therefore, our result overlaps \cite[Theorem 3.2]{qw3} in some sense.

\bigskip

\textbf{Acknowledgements:} The authors are very grateful to Professor Xicheng Zhang for valuable discussions. The first author also thanks Professor Renming Song for providing her with an excellent working environment at the University of Illinois at Urbana-Champaign.

\begin{thebibliography}{999}

\bibitem{g} F. Gao: Pathwise properties and homeomorphic flows for stochastic differential equations driven by G-Brownian motion, {\it Stochastic Processes and their Applications}, 119 (2009), 3356-3382.

\bibitem{hp} M. Hu and S. Peng: G-L\'evy processes under sublinear expectations, arXiv:0911.3533v1.

\bibitem{pk1} K. Paczka: It\^o calculus and jump diffusions for G-L\'evy processes, arXiv:1211.2973v3.

\bibitem{pk2} K. Paczka: G-martingale representation in the G-L\'evy setting, arXiv:1404.2121v1.

\bibitem{p2} S. Peng: Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, {\it Stochastic Processes and their Applications}, 118 (2008), 2223-2253.

\bibitem{peng} S. Peng: {\it Nonlinear Expectations and Stochastic Calculus under Uncertainty with Robust CLT and G-Brownian Motion},
Probability Theory and Stochastic Modelling 95, Springer, 2019.

\bibitem{q0} H. Qiao: Euler-Maruyama approximation for SDEs with jumps and non-Lipschitz coefficients, {\it Osaka Journal of Mathematics}, 51 (2014), 47-66.

\bibitem{q1} H. Qiao: The cocycle property of stochastic differential equations driven by G-Brownian motion, {\it Chinese Annals of Mathematics, Series B}, 36 (2015), 147-160.

\bibitem{qw1} H. Qiao and J.-L. Wu: Characterising the path-independence of the Girsanov transformation for non-Lipschitz SDEs with jumps, {\it Statistics and Probability Letters}, 119 (2016), 326-333.

\bibitem{qw2} H. Qiao and J.-L. Wu: On the path-independence of the Girsanov transformation for stochastic evolution equations with jumps in Hilbert spaces, {\it Discrete and Continuous Dynamical Systems-B}, 24 (2019), 1449-1467.

\bibitem{qw3} H. Qiao and J.-L. Wu: Path independence of the additive functionals for McKean-Vlasov stochastic differential equations with jumps, arXiv:1911.03830.

\bibitem{ry} P. Ren and F. Yang: Path independence of additive functionals for stochastic differential equations under G-framework, {\it Frontiers of Mathematics in China}, 14 (2019), no. 1, 135-148.

\bibitem{twwy} A. Truman, F.-Y. Wang, J.-L. Wu and W. Yang: A link of stochastic differential equations to nonlinear parabolic equations, {\it Science China Mathematics}, 55 (2012), 1971-1976.

\bibitem{wg} B. Wang and H. Gao: Exponential stability of solutions to stochastic differential equations driven by G-L\'evy process, to appear in {\it Applied Mathematics \& Optimization}.

\bibitem{wy} J.-L. Wu and W. Yang: On stochastic differential equations and a generalised Burgers equation, pp. 425-435 in {\it Stochastic Analysis and Its Applications to Finance: Essays in Honor of Prof. Jia-An Yan} (eds. T. S. Zhang and X. Y. Zhou), Interdisciplinary Mathematical Sciences, Vol. 13, World Scientific, Singapore, 2012.

\end{thebibliography}

\end{document}
\begin{document} \title{Interpreting the Ising Model: The Input Matters} \begin{abstract} The Ising model is a model for pairwise interactions between binary variables that has become popular in the psychological sciences. It was first introduced as a theoretical model for the alignment between positive (1) and negative (-1) atom spins. In many psychological applications, however, the Ising model is defined on the domain $\{0,1\}$ instead of the classical domain $\{-1,1\}$. While it is possible to transform the parameters of the Ising model in one domain to obtain a statistically equivalent model in the other domain, the parameters in the two versions of the Ising model lend themselves to different interpretations and imply different dynamics when the Ising model is studied as a dynamical system. In this tutorial paper, we provide an accessible discussion of the interpretation of threshold and interaction parameters in the two domains and show how the dynamics of the Ising model depends on the choice of domain. Finally, we provide a transformation that allows one to transform the parameters of an Ising model in one domain into a statistically equivalent Ising model in the other domain.
\end{abstract} \xdef\@thefnmark{}\@footnotetext{This article has been accepted for publication in Multivariate Behavioral Research, published by Taylor \& Francis.} \section{Introduction}\label{sec_intro} The Ising model is a model for pairwise interactions between binary variables that originated in statistical mechanics \citep{ising1925beitrag, glauber1963time} but is now used in a large array of applications in the psychological sciences \citep[e.g.][]{borsboom2013network, marsman2015bayesian, boschloo2015network, boschloo2016network, fried2015loss, cramer2016major, dalege2016toward, rhemtulla2016network, van2017network, haslbeck2017predictable, afzali2017network, deserno2017multicausal, savi2018wiring, marsman2019network}. The original Ising model was introduced as a model for the interactions between atom spins, which can be positive (1) or negative (-1) \citep{brush1967history}. In this setting, with variables taking values in the domain $\{-1,1\}$, the interaction parameters in the Ising model determine the \emph{alignment} between variables: If an interaction parameter between two variables is positive, the two variables tend to take on the same value; on the other hand, if the interaction parameter is negative, the two variables tend to take on different values. In most psychological applications, however, the Ising model is defined with variables taking values in the domain $\{0,1\}$. While it is possible to transform the parameters of a given Ising model in one domain to obtain a statistically equivalent model in the other domain, the parameters in the two versions of the Ising model lend themselves to different interpretations and imply different dynamics when the Ising model is studied as a dynamical system. If one is unaware of these subtle differences, one might erroneously apply theoretical results from the $\{-1,1\}$ domain to an estimated model in the $\{0,1\}$ domain, or simply interpret parameters incorrectly.
To prevent such confusion in the emerging psychological networks literature which makes heavy use of the Ising model, we provide a detailed discussion of both versions of the Ising model in the present tutorial paper. We begin by discussing the different interpretations of the Ising model in the $\{-1,1\}$ and $\{0,1\}$ domain in Section \ref{sec_dd_p2}, using a simple example with two variables which allows the reader to follow all calculations while reading. We explain the differences in the interpretation of the threshold and interaction parameters in the two versions of the Ising model, and discuss in which situation which version might be more appropriate. While most psychological applications of the Ising model use it as a statistical model, it has also been studied as a dynamical system in psychological research \citep[e.g.,][]{cramer2016major, dalege2016toward, lunansky2019personality}. In Section \ref{sec_con_dynamics} we discuss how the dynamics of the Ising model depends on the choice of domain, and show that the domain changes the \emph{qualitative} behavior of the model. Finally, in Section \ref{sec_IsingTrans_short} we provide a transformation that allows one to transform the parameters in an Ising model in one domain into a statistically equivalent Ising model in the other domain. \section{Different Domain, Different Interpretation}\label{sec_dd_p2} In this section we estimate an Ising model with $p=2$ variables in both domains, $\{-1, 1 \}$ and $\{0, 1\}$, and show that the resulting threshold and interaction parameters have different values and lend themselves to different interpretations. We choose the $p=2$ variable case to make the explanation as accessible as possible. However, all results immediately extend to the general situation with $p$ variables. 
The Ising model for two variables is given by \begin{equation}\label{twovar_Ising} P(y_1, y_2) = \frac{1}{Z} \exp \left \{ \alpha_1 y_1 + \alpha_2 y_2 + \beta_{12} y_1 y_2 \right \}, \end{equation} \noindent where $y_1, y_2$ are either elements of $\{-1,1\}$ or $\{0,1\}$, $P(y_1, y_2)$ is the probability of the event $(y_1, y_2)$, $\alpha_1, \alpha_2, \beta_{12}$ are parameters in $\mathbb{R}$, and $Z$ is a normalization constant which denotes the sum of the exponential term across all possible states. There are $2^p=4$ states in this example. To illustrate the differences across models, we generate $n = 1000$ samples of the labels $A, B$ with the relative frequencies shown in Table \ref{table_datagen}: \begin{table}[H] $$ \bordermatrix{~ & A & B \cr A & 0.14 & 0.18 \cr B & 0.18 & 0.50 \cr} $$ \caption{Relative frequency of states in the example data set.}\label{table_datagen} \end{table} In applications, the labels $A, B$ can stand for any pair of categories such as being for or against something, some event having happened or not, or a symptom being present or not. The two domains are two different ways to numerically represent these labels. We obtain the Maximum Likelihood Estimates (MLE) of the parameters in two different ways: once, by filling in $\{-1, 1 \}$ for $\{A, B\}$; and once by filling in $\{0, 1 \}$ for $\{A, B\}$. Figure \ref{fig_intro} summarizes the two resulting models: \begin{figure} \caption{The threshold and interaction parameters estimated from the data generated from Table \ref{table_datagen}.}\label{fig_intro} \end{figure} The first column in Figure \ref{fig_intro} shows the parameter estimates $\alpha_1, \alpha_2$ and $\beta_{12}$, and the log potentials in domain $\{-1, 1 \}$. We first focus on the interpretation of the interaction parameter $\beta_{12}$.
To understand the interpretation of this parameter we take a look at the log potentials for all four states $\{ (-1,-1), (-1,1), (1,-1), (1,1) \}$, which we obtain by plugging the four states into the expression within the exponential in equation (\ref{twovar_Ising}). The resulting log potentials are displayed in the second row in Figure \ref{fig_intro} and show us the following: if $\beta_{12}$ becomes larger, the probability of the states $(-1,-1), (1,1)$ increases relative to the probability of the states $(-1,1), (1,-1)$. This means that the interaction parameter determines the degree of \emph{alignment} of two variables. That is, if $\beta_{12}>0$ the \emph{same} labels align with each other, and if $\beta_{12}<0$ \emph{opposite} labels align with each other. In other words, $\beta_{12}$ models the probability of the states $(-1,-1), (1,1)$ relative to the probability of the states $(-1,1), (1,-1)$. This is not the case in the $\{0, 1 \}$ domain. The second column in Figure \ref{fig_intro} shows the parameter estimates $\alpha_1^*, \alpha_2^*, \beta_{12}^*$ in domain $\{0, 1 \}$, and we see that they have different values than in the $\{-1, 1 \}$ domain. To understand why this is the case, we again look at the interpretation of the interaction parameter $\beta_{12}^*$ by inspecting the four log potentials. The key observation is that $\beta_{12}^*$ only appears in the log potential of the state $(1,1)$. What happens if $\beta_{12}^*$ increases? Then the probability of the state $(1,1)$ increases relative to the probability of all other states $(0,1),(1,0), (0,0)$. In other words, $\beta_{12}^*$ models the probability of state $(1,1)$ relative to the probability of the states $(0,1),(1,0),(0,0)$.
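Written out explicitly from equation (\ref{twovar_Ising}), the log potentials in the two domains are
\begin{eqnarray*}
\{-1,1\}: && (1,1):\ \alpha_1+\alpha_2+\beta_{12}, \quad (-1,-1):\ -\alpha_1-\alpha_2+\beta_{12},\\
&& (1,-1):\ \alpha_1-\alpha_2-\beta_{12}, \quad (-1,1):\ -\alpha_1+\alpha_2-\beta_{12};\\
\{0,1\}: && (1,1):\ \alpha_1^*+\alpha_2^*+\beta_{12}^*, \quad (1,0):\ \alpha_1^*, \quad (0,1):\ \alpha_2^*, \quad (0,0):\ 0.
\end{eqnarray*}
This displays directly that $\beta_{12}$ enters all four log potentials (with positive sign for the aligned states and negative sign for the non-aligned states), whereas $\beta_{12}^*$ enters only the log potential of the state $(1,1)$.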
That is, $\alpha, \alpha^* > 0$ implies a larger probability for the states $(1) \in \{0, 1 \}$, $(1) \in \{-1, 1 \}$ than for states $(0) \in \{0, 1 \}$, $(-1) \in \{-1, 1 \}$. If $\alpha, \alpha^* < 0$ the reverse is true, and if $\alpha, \alpha^* = 0$, the corresponding states have both probability $0.5$. However, in the general case in which interaction parameters are allowed to be nonzero, the interpretation depends on the domain: in the $\{-1, 1 \}$ domain the threshold parameter indicates the tendency of a variable averaged over all possible states of all other variables. In more formal terms, the threshold parameter of a given variable indicates the marginal mean of that variable. In contrast, the threshold in the $\{0, 1 \}$ domain indicates the tendency of a variable when all other variables are equal to zero. We return to the different interpretations of thresholds in Section \ref{sec_con_dynamics}, in which we discuss the dynamics of the Ising model. In this section we showed that depending on its domain, the parameters of the Ising model have different interpretations. What are the consequences for applied researchers? In terms of reporting, it is important to state which domain has been used such that the reported model can be re-used in the correct way: if someone reports a set of parameters estimated in the $\{0, 1 \}$ domain, and a reader applies it to the $\{-1, 1 \}$ domain, they will obtain the incorrect probabilities. Note that in order to use the model one also has to report the threshold parameters. Not reporting the threshold parameters is a common problem irrespective of the issue discussed in this paper. The only situation in which the domain does not matter is if the only goal is to compare the relative size of interaction parameters, since the relative size is the same in both domains (see Section \ref{sec_IsingTrans_short}).
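For the two-variable model in equation (\ref{twovar_Ising}), the substitution $y_i = 2\tilde{y}_i - 1$, with $\tilde{y}_i \in \{0,1\}$, makes this explicit (the general transformation is given in Section \ref{sec_IsingTrans_short}):
\begin{eqnarray*}
\alpha_1 y_1 + \alpha_2 y_2 + \beta_{12} y_1 y_2 = (2\alpha_1 - 2\beta_{12})\tilde{y}_1 + (2\alpha_2 - 2\beta_{12})\tilde{y}_2 + 4\beta_{12}\tilde{y}_1\tilde{y}_2 + \mathrm{const},
\end{eqnarray*}
so that $\alpha_i^* = 2\alpha_i - 2\beta_{12}$ and $\beta_{12}^* = 4\beta_{12}$, with the constant absorbed into $Z$. Since all interaction parameters are scaled by the same factor of four, their relative sizes are indeed unchanged.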
The second consequence is that researchers have to choose which version of the Ising model is more appropriate for the phenomenon at hand. The interpretations clarified above of the Ising model in its two different domains allow one to make this decision. For example, the $\{-1, 1\}$ parameterization may be more plausible for labels that are not qualitatively different, but rather opposing each other in some way, such as supporting or opposing a certain viewpoint, for example agreeing or disagreeing with a statement like ``Elections should be held every two years instead of every four years''. This also reflects the origin of the Ising model as a model for atom spins, which are either positive or negative. The parameterization implied by $\{0, 1\}$ could be more appropriate if the two labels are qualitatively different, such as the presence or absence of an event or a characteristic. Take psychiatric symptoms as an example: while it seems plausible that \emph{fatigue} leads to \emph{lack of concentration}, it is less clear whether the absence of \emph{fatigue} also leads to an increase in \emph{concentration}. In such a case, we can encode the possible belief that the absence of something cannot have an influence on something else by choosing the $\{0, 1\}$ domain. Importantly, the decision of which version to pick has to be based on information beyond the data, because the models are statistically equivalent and therefore cannot be distinguished by observational data. In Appendix \ref{A_probcalc} we prove this equivalence for the example shown in Figure \ref{fig_intro}. While Ising models in psychological research are usually fit to cross-sectional data, one is typically interested in within-subjects dynamics. In this context, one is often interested in inferring from an estimated Ising model how to best intervene on the system.
In the next section we will show how the dynamics of the Ising model depends on its domain, and that the different versions of the Ising model make different predictions for optimal interventions. \section{Different Domain, Different Dynamics}\label{sec_con_dynamics} The choice of domain also determines the dynamics of the Ising model, when studying it as a dynamical system describing within-person dynamics. The dynamical version of the Ising model is initialized by $p$ initial values at $t=1$, and then each variable at time $t$ is a function of all variables it is connected to via a nonzero interaction parameter at $t-1$ \footnote{Glauber dynamics \citep{glauber1963time} describe a different way to sample from a dynamic Ising model. The qualitative results presented in this section also hold for Glauber dynamics.}. An often studied characteristic in this model is how its behavior changes when the size of the interaction parameters increases. A typical behavior of interest is the number of variables in state $(1)$ \citep[e.g.][]{dalege2016toward, cramer2016major}. Which behavior would we expect in the two domains $\{-1,1\}$ and $\{0,1\}$? From the previous section we know that in domain $\{-1,1\}$, the interaction parameter $\beta_{ij}$ models the probability of states $\{ (-1,-1), (1,1)\}$ relative to the states $\{(-1,1), (1,-1)\}$. Now, when increasing all $\beta_{ij}$, connected variables become more synchronized, which means that all (connected) variables tend to be either \emph{all} in state $(-1)$ or $(1)$. In terms of number of variables in state $(1)$, we would therefore predict that the expected number of variables in state $(1)$ remains unchanged, because the states $(-1)$ and $(1)$ occur equally often in the aligned ($(-1,-1)$ and $(1,1)$) and not aligned ($(-1,1)$ and $(1,-1)$) states. And second, we predict that the probability that at a given time point either all variables are in state $(-1)$, or all variables are in state $(1)$, increases. 
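For $p=2$ with both thresholds set to zero, the first prediction can be verified by direct enumeration: by equation (\ref{twovar_Ising}), the four states $(1,1), (1,-1), (-1,1), (-1,-1)$ have unnormalized probabilities $e^{\beta_{12}}, e^{-\beta_{12}}, e^{-\beta_{12}}, e^{\beta_{12}}$, so the expected number $N$ of variables in state $(1)$ is
\begin{eqnarray*}
E[N] = \frac{2\cdot e^{\beta_{12}} + 1\cdot e^{-\beta_{12}} + 1\cdot e^{-\beta_{12}} + 0\cdot e^{\beta_{12}}}{2e^{\beta_{12}} + 2e^{-\beta_{12}}} = 1
\end{eqnarray*}
for every value of $\beta_{12}$: increasing the interaction parameter moves probability mass towards the aligned states $(-1,-1)$ and $(1,1)$ without changing the mean.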
The reason is that, in the $\{-1,1\}$ domain, the larger the interaction parameter, the stronger the alignment between variables. This second prediction implies that the variance of the number of variables in state $(1)$ increases. In the domain $\{0,1\}$, the interaction parameter $\beta_{ij}^*$ models the probability of the state $(1,1)$ relative to the remaining three states $\{(0,1), (1,0), (0,0)\}$. Now, when increasing $\beta_{ij}^*$, connected variables will have a higher probability of all being in state $(1)$. Importantly, the frequency of $1$s in the high-probability state $(1,1)$ is higher than in the other three states. We therefore expect that the number of variables in state $(1)$ increases and that the probability that all variables are in state $(1)$ increases. The second prediction implies that the variance of the number of variables in state $(1)$ decreases. In Appendix \ref{A_marginal_p2} we prove, for the case of $p=2$ variables with $\beta_{12} > 0$, that the expected number of variables in state $(1)$ remains unchanged in domain $\{-1,1\}$ and increases in domain $\{0,1\}$. Here, we show via simulation that our predictions are correct. We sample $n = 10^6$ observations from a fully connected (i.e., all interaction parameters are nonzero) Ising model with $p=10$ variables in which all edge weights (interaction parameters) have the same size and all thresholds are set to zero. We vary both the size of the interaction parameters $\beta_{ij} \in \{0, .1, .2\}$ and the domain\footnote{The code to reproduce the simulation and Figure \ref{fig_dynamics} is available at \url{http://github.com/jmbh/IsingVersions}.}. Figure \ref{fig_dynamics} shows the distribution (over time steps) of the number of variables that are in state $(1)$. The first row of Figure \ref{fig_dynamics} shows the distribution of the number of variables in state $(1)$ across time when all interaction parameters are equal to zero. We see a symmetric, unimodal distribution with mean $5$ for both domains. 
This is what we would expect, since the probability of each variable being in state $(1)$ can be seen as an unbiased (because the thresholds are zero) coin flip that is independent of all other variables. Thus, since we have 10 variables, the means are equal to $10 \times 0.5 = 5$. \begin{figure} \caption{The distribution of the number of variables in state one as a function of the size of the interaction parameter $\beta_{ij} \in \{0, .1, .2\}$ in a fully connected Ising model, for both domains.} \label{fig_dynamics} \end{figure} However, when increasing the interaction parameter from $0$ to $0.1$ (second row), the distributions become different: in domain $\{-1,1\}$ the mean remains unchanged and the probability mass shifts from around 5 to more extreme values, resulting in increased variance. In contrast, in domain $\{0,1\}$ the distribution shifts to the right, which implies that the mean increases and the variance slightly decreases. When further increasing the interaction parameters to $0.2$ (third row), in domain $\{-1,1\}$ most of the probability mass is concentrated on 0 and 10, while leaving the mean unchanged; in domain $\{0,1\}$ the mean further increases and the variance further decreases. From a dynamical perspective, this means that for strongly connected Ising models (with thresholds equal to zero) the domain $\{-1,1\}$ implies two stable states (all variables in state $(-1)$ or $(1)$), while the domain $\{0,1\}$ implies only a single stable state, whose position depends on whether the interaction parameters are positive (all variables in state $(1)$) or negative. This means that the dynamic Ising model in the $\{-1,1\}$ domain can switch between stable states, while in the $\{0,1\}$ domain it always stays in the same stable state\footnote{The result about bistability is true for the considered fully connected Ising model with zero thresholds. It is also possible to construct a bistable Ising model in the $\{0,1\}$ domain by choosing large negative thresholds and large positive interaction parameters. 
The relationship between mean/variance and changing the interaction parameter in the two domains, however, is always true.}. For the general case of Ising models that are not fully connected and also have negative interaction parameters, the results described above extend to local clusters of two or more variables: in the domain $\{-1,1\}$, increasing the interaction parameter will leave the means of all variables in the cluster unchanged; however, the variables become increasingly aligned (if interaction parameters are positive) or disaligned (if interaction parameters are negative). Alignment will lead to an increase in variance, while disalignment will lead to a decrease in variance. In contrast, in the $\{0,1\}$ domain the mean of all variables in the cluster will increase in the case of positive interaction parameters, and decrease in the case of negative interaction parameters. This shows that, depending on which domain is used, one can come to entirely different conclusions about the dynamics of the Ising model. For example, \cite{cramer2016major} model the interactions between psychiatric symptoms with an Ising model in domain $\{0,1\}$ and conclude that densely connected Ising models imply a larger number of active (in state $(1)$) symptoms and therefore represent ``pathological'' models. The above argument and simulation show that this is only true when using the $\{0,1\}$ domain, which encodes the belief that the absence of a symptom cannot influence the absence of another symptom. If one decides that an alignment between variables is a more plausible interaction (as implied by the $\{-1,1\}$ domain), then densely connected Ising models do not imply a large number of active symptoms. Instead, high density implies high variance and two stable states. Thus, the characterization of dense networks as pathological networks as in \cite{cramer2016major} hinges on choosing the $\{0,1\}$ domain. 
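For small $p$, the pattern seen in the simulation can also be checked exactly, without sampling, by enumerating all $2^p$ states of the equilibrium distribution. The following Python sketch (our own illustration, not the authors' simulation code; the function name is hypothetical) computes the mean and variance of the number of variables in state $(1)$ for a fully connected model with zero thresholds in both domains:

```python
# Exact moments of the number of variables in state (1) under a fully
# connected Ising model with uniform interaction parameter beta and all
# thresholds set to zero, computed by enumerating all 2^p states.
import itertools
import math

def moments_of_ones(p, beta, domain):
    """Return (mean, variance) of the number of variables in state 1."""
    weighted, total = [], 0.0
    for s in itertools.product([0, 1], repeat=p):
        # map to {-1,1} ("pm") or keep {0,1} ("01")
        x = [2 * v - 1 for v in s] if domain == "pm" else list(s)
        energy = beta * sum(x[i] * x[j] for i in range(p) for j in range(i + 1, p))
        w = math.exp(energy)  # unnormalized probability (thresholds are zero)
        weighted.append((sum(s), w))  # sum(s) = number of variables in state 1
        total += w
    mean = sum(k * w for k, w in weighted) / total
    var = sum((k - mean) ** 2 * w for k, w in weighted) / total
    return mean, var

for beta in (0.0, 0.1, 0.2):
    m_pm, v_pm = moments_of_ones(10, beta, "pm")
    m_01, v_01 = moments_of_ones(10, beta, "01")
    print(f"beta={beta}: domain {{-1,1}} mean={m_pm:.2f} var={v_pm:.2f} | "
          f"domain {{0,1}} mean={m_01:.2f} var={v_01:.2f}")
```

As in the simulation, increasing $\beta$ leaves the mean at $5$ and inflates the variance in the $\{-1,1\}$ domain, while it shifts the mean upward (and shrinks the variance) in the $\{0,1\}$ domain; which conclusion one draws about densely connected networks therefore depends on the chosen domain.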
This has important consequences: when choosing the $\{0,1\}$ domain, we would conclude that highly connected symptom networks are necessarily ``bad'', and interventions on the interactions between symptoms as suggested by \cite{borsboom2017network} should always reduce symptom activation. On the other hand, in the $\{-1,1\}$ domain highly connected symptom networks are not necessarily bad, but in fact can lead to high resilience, if the threshold parameters are large negative values. In such a situation, strong interactions would keep the system in a state in which all symptoms are deactivated. \section{Transforming from $\{-1, 1\}$ to $\{0, 1\}$ and vice versa}\label{sec_IsingTrans_short} The Ising model is typically estimated by a sequence of $p$ logistic regressions, which require the domain $\{0, 1\}$. However, the previous sections showed that in some situations the domain $\{-1, 1 \}$ may be more appropriate. In Table \ref{tab:tab2_intro} we present a transformation that allows one to obtain the parameterization based on domain $\{-1, 1\}$ from the parameterization based on domain $\{0, 1\}$ and vice versa (see Appendix \ref{sec_IsingTrans} for the derivation of the transformations). We define $\beta_{i+}^\ast = \sum\limits_{j = 1}^{p} \beta_{ij}^\ast$ as the sum over the interaction parameters associated with a given variable $y_i$. \begin{table}[h] \centering \begin{tabular}{ r | c c } \textbf{Transformation} & Thresholds & Interactions \\ \hline $\{0, 1\} \Rightarrow \{-1, 1\}$ & $\alpha_i = \frac{1}{2}\alpha_i^\ast + \frac{1}{4}\beta_{i+}^\ast$ & $\beta_{ij} = \frac{1}{4}\beta_{ij}^\ast$ \\ $\{-1, 1\} \Rightarrow \{0, 1\}$ & $\alpha_i^\ast = 2\alpha_i - 2 \beta_{i+}$ & $\beta_{ij}^\ast = 4\beta_{ij}$ \end{tabular} \caption{Transformation functions to obtain the threshold and interaction parameters of one parameterization from the threshold and interaction parameters of the other parameterization. 
Parameters with an asterisk indicate parameters in the $\{0, 1\}$ domain.} \label{tab:tab2_intro} \end{table} Table \ref{tab:tab2_intro} shows that the interaction parameters $\beta_{ij}$ in the $\{-1, 1\}$ domain are 4 times smaller than the interaction parameters $\beta_{ij}^*$ in the $\{0, 1\}$ domain. We also see that the threshold parameter $\alpha_i$ is a function of \emph{both} the threshold and the interaction parameters $\alpha_i^*, \beta_{ij}^*$ in the other parameterization. We now apply the transformations in Table \ref{tab:tab2_intro} to the $p=2$ variable example in Figure \ref{fig_intro}. We choose to transform from $\{0, 1\}$ to $\{-1, 1\}$: \begin{align*} \alpha_1 & = \frac{1}{2} \alpha_1^\ast + \frac{1}{4}\beta_{1+}^\ast = \frac{0.251}{2} + \frac{0.77}{4} = 0.318 \\ \beta_{12} &= \frac{1}{4} \beta_{12}^\ast = \frac{.77}{4} = 0.1925 \approx 0.193 \end{align*} \noindent Indeed, we obtain the parameters found when estimating the Ising model in the $\{-1, 1 \}$ domain (see first column in Figure \ref{fig_intro}). From the transformation in Table \ref{tab:tab2_intro} it follows that the two models are statistically equivalent. This implies that one could also estimate the model in the $\{-1, 1\}$ domain, transform the parameters, and would obtain the parameters one would have obtained from estimating in the $\{0, 1\}$ domain. Also, note that the standard errors of estimates are subject to the same transformation, and therefore one always reaches the same conclusion regarding statistical significance in both domains. However, note that one does not necessarily arrive at statistically equivalent models when estimating in the two different domains using biased estimators. An example of a biased estimation method is the popular $\ell_1$-regularized estimator \citep{van2014new}. We discuss why statistical equivalence is not guaranteed in this specific example in Appendix \ref{A_penMS}. 
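The transformations in Table \ref{tab:tab2_intro} and the worked example above are easy to check directly. The following Python sketch (our own illustration; the helper names are hypothetical) implements both directions for a symmetric interaction matrix with zero diagonal:

```python
# Transformations between the {0,1} and {-1,1} parameterizations of the
# Ising model (Table 2). beta is a symmetric p x p list of lists with a
# zero diagonal; parameters named "_star" live in the {0,1} domain.

def to_pm1(alpha_star, beta_star):
    """Map {0,1}-domain parameters (alpha*, beta*) to the {-1,1} domain."""
    p = len(alpha_star)
    beta_plus = [sum(beta_star[i]) for i in range(p)]  # row sums beta*_{i+}
    alpha = [0.5 * alpha_star[i] + 0.25 * beta_plus[i] for i in range(p)]
    beta = [[0.25 * b for b in row] for row in beta_star]
    return alpha, beta

def to_01(alpha, beta):
    """Map {-1,1}-domain parameters (alpha, beta) to the {0,1} domain."""
    p = len(alpha)
    beta_plus = [sum(beta[i]) for i in range(p)]
    alpha_star = [2.0 * alpha[i] - 2.0 * beta_plus[i] for i in range(p)]
    beta_star = [[4.0 * b for b in row] for row in beta]
    return alpha_star, beta_star

# the p = 2 example from the text: alpha* = 0.251, beta*_12 = 0.77
alpha, beta = to_pm1([0.251, 0.251], [[0.0, 0.77], [0.77, 0.0]])
print(alpha[0], beta[0][1])  # 0.318 and 0.1925, up to floating-point rounding
```

Applying `to_01` to the result recovers the original $\{0, 1\}$ parameters, which is exactly the round trip implied by the statistical equivalence of the two parameterizations.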
The possibility that different domains lead to models that are not statistically equivalent highlights the importance of choosing the most plausible Ising model on substantive grounds. \section{Conclusions} In this paper we have investigated the subtleties in choosing the domain of the Ising model. We showed that estimating the Ising model in the domains $\{0, 1\}$ and $\{-1, 1\}$ results in parameters with different values and different interpretations. We also showed that the qualitative behavior of the dynamical Ising model depends on the chosen domain. Finally, we provided a transformation that explains the relation between the two parameterizations and allows one to obtain one from the other. This is useful in practice, because typically used software packages require the $\{0, 1\}$ domain. This transformation also implies that the two parameterizations are statistically equivalent, which means that one cannot choose one over the other on empirical grounds. Thus, researchers should carefully reflect on which interactions between variables are plausible and choose the domain accordingly. \appendix \section{Statistical equivalence worked out for the two-variable example}\label{A_probcalc} Here we show that the two models shown in Figure \ref{fig_intro} are statistically equivalent. Two models are statistically equivalent if they assign the same probability to every state on which the models are defined. We begin with the model estimated on the domain $\{-1,1\}$. 
We first compute the potentials for the four states $\{(-1,-1), (-1, 1), (1, -1), (1, 1)\}$: $$ \exp \left \{ 0.318(-1) + 0.318(-1) + 0.193(-1)(-1) \right \} = 0.6415304 $$ $$ \exp \left \{ 0.318(-1) + 0.318(1) + 0.193(-1)(1) \right \} = 0.8248249 $$ $$ \exp \left \{ 0.318(1) + 0.318(-1) + 0.193(1)(-1) \right \} = 0.8248249 $$ $$ \exp \left \{ 0.318(1) + 0.318(1) + 0.193(1)(1) \right \} = 2.29118 $$ \noindent and then the normalization constant $$ Z = 0.6415304+0.8248249+0.8248249+2.29118 = 4.58236 $$ \noindent We divide the potentials by $Z$ and obtain the probabilities $$ P(Y_1 = -1, Y_2 = -1) = \frac{0.6415304}{Z} = 0.14 $$ $$ P(Y_1 = -1, Y_2 = 1) = \frac{0.8248249}{Z} = 0.18 $$ $$ P(Y_1 = 1, Y_2 = -1) = \frac{0.8248249}{Z} = 0.18 $$ $$ P(Y_1 = 1, Y_2 = 1) = \frac{2.29118}{Z} = 0.5 $$ We now repeat the same with domain $\{0, 1\}$ and first compute the potentials for the states $\{(0,0), (0, 1), (1, 0), (1, 1)\}$: $$ \exp \left \{ 0.251(0) + 0.251(0) + 0.77(0)(0) \right \} = 1 $$ $$ \exp \left \{ 0.251(0) + 0.251(1) + 0.77(0)(1) \right \} = 1.285714 $$ $$ \exp \left \{ 0.251(1) + 0.251(0) + 0.77(1)(0) \right \} = 1.285714 $$ $$ \exp \left \{ 0.251(1) + 0.251(1) + 0.77(1)(1) \right \} = 3.571429 $$ \noindent and then the normalization constant $$ Z = 1 + 1.285714 + 1.285714 + 3.571429 = 7.142857 $$ \noindent We divide the potentials by $Z$ and obtain the probabilities $$ P(X_1 = 0, X_2 = 0) = \frac{1}{Z} = 0.14 $$ $$ P(X_1 = 0, X_2 = 1) = \frac{1.285714}{Z} = 0.18 $$ $$ P(X_1 = 1, X_2 = 0) = \frac{1.285714}{Z} = 0.18 $$ $$ P(X_1 = 1, X_2 = 1) = \frac{3.571429}{Z} = 0.5 $$ We see that both models predict the same probabilities and are therefore statistically equivalent. 
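The hand computation above can be reproduced numerically. The following Python sketch (our own check, not part of the original appendix) evaluates both fitted models over their four states; because the reported parameters are rounded to three decimals, the probabilities agree to about three decimal places:

```python
# Compare the state probabilities of the two fitted p = 2 models:
# {-1,1}: alpha = 0.318, beta_12 = 0.193 vs {0,1}: alpha* = 0.251,
# beta*_12 = 0.77. Both thresholds are equal in this example.
import itertools
import math

def ising_probs(alpha, beta12, states):
    """Normalized probabilities of a p = 2 Ising model over `states`."""
    pots = [math.exp(alpha * (s1 + s2) + beta12 * s1 * s2) for s1, s2 in states]
    z = sum(pots)  # normalization constant
    return [w / z for w in pots]

p_pm = ising_probs(0.318, 0.193, list(itertools.product([-1, 1], repeat=2)))
p_01 = ising_probs(0.251, 0.77, list(itertools.product([0, 1], repeat=2)))
for a, b in zip(p_pm, p_01):
    print(f"{a:.3f}  {b:.3f}")  # rows: 0.140, 0.180, 0.180, 0.500 in both columns
```

The state orderings correspond under the mapping $x = (y + 1)/2$, so matching rows compare corresponding states of the two models.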
\section{Increasing interaction parameters only changes the marginal probabilities in domain $\{0, 1\}$}\label{A_marginal_p2} Here we show that for an Ising model with $p=2$ variables, $\alpha_1 = \alpha_2 = 0$, and $\beta_{12} > 0$, it holds that \begin{equation}\label{claim_1} P(X_1=-1) = P(X_2=-1) = P(X_1=1) = P(X_2=1) \end{equation} \noindent for the domain $\{-1, 1\}$, and that \begin{equation}\label{claim_2} P(X_1=0) = P(X_2=0) < P(X_1=1) = P(X_2=1) \end{equation} \noindent for the domain $\{0, 1\}$. We first show (\ref{claim_1}). We assume $\alpha_1 = \alpha_2 = 0$ and $\beta_{12} > 0$. Then the Ising model is given by \begin{align*} P(X_1, X_2) &= \frac{1}{Z} \exp\{ \alpha_1 X_1 + \alpha_2 X_2 + \beta_{12} X_2 X_1\} \\ &= \frac{1}{Z} \exp\{ \beta_{12} X_2 X_1 \}, \end{align*} \noindent where $Z$ is the normalizing constant summing over all $2^p=4$ states. We calculate the probability of the four possible states: $$ P(X_1 = 1 , X_2 = -1) = \frac{1}{Z} \exp\{ -\beta_{12} \}, $$ $$ P(X_1 = 1 , X_2 = 1) = \frac{1}{Z} \exp\{ \beta_{12} \}, $$ $$ P(X_1 = -1 , X_2 = -1) = \frac{1}{Z} \exp\{ \beta_{12} \}, $$ $$ P(X_1 = -1 , X_2 = 1) = \frac{1}{Z} \exp\{ -\beta_{12} \}. $$ We then sum over the states of $X_2$ to obtain the marginal probabilities $P(X_1)$: $$ P(X_1 = 1) = P(X_1 = 1 , X_2 = -1) + P(X_1 = 1 , X_2 = 1) = \frac{1}{Z} \exp\{ -\beta_{12} \} + \frac{1}{Z} \exp\{ \beta_{12} \}$$ $$ P(X_1 = -1) = P(X_1 = -1 , X_2 = -1) + P(X_1 = -1 , X_2 = 1) = \frac{1}{Z} \exp\{ \beta_{12} \} + \frac{1}{Z} \exp\{ -\beta_{12} \}$$ We see that $P(X_1 = 1) = P(X_1 = -1)$. By symmetry the same is true for $X_2$, which proves our claim.\\ We next prove (\ref{claim_2}). 
We again assume $\alpha_1 = \alpha_2 = 0$ and $\beta_{12} > 0$ and calculate the probabilities of the four possible states: $$ P(X_1 = 1 , X_2 = 0) = \frac{1}{Z} \exp\{ 0 \}, $$ $$ P(X_1 = 1 , X_2 = 1) = \frac{1}{Z} \exp\{ \beta_{12} \}, $$ $$ P(X_1 = 0 , X_2 = 0) = \frac{1}{Z} \exp\{ 0 \}, $$ $$ P(X_1 = 0 , X_2 = 1) = \frac{1}{Z} \exp\{ 0 \}. $$ The marginal probabilities $P(X_1)$ are: $$ P(X_1 = 1) = P(X_1 = 1 , X_2 = 0) + P(X_1 = 1 , X_2 = 1) = \frac{1}{Z} \exp\{ 0 \} + \frac{1}{Z} \exp\{ \beta_{12} \}$$ $$ P(X_1 = 0) = P(X_1 = 0 , X_2 = 0) + P(X_1 = 0 , X_2 = 1) = 2 \frac{1}{Z} \exp\{ 0 \}$$ Since $\exp\{ \beta_{12} \} > \exp\{ 0 \}$, we have $P(X_1 = 1) > P(X_1 = 0)$ if $\beta_{12} > 0$. By symmetry the same is true for $X_2$, which proves our claim.\\ Note that if we assume $\beta_{12} < 0$, (\ref{claim_1}) holds again for $\{-1, 1\}$, while for $\{0, 1\}$ we have \begin{equation*} P(X_1=0) = P(X_2=0) > P(X_1=1) = P(X_2=1) \end{equation*} \noindent instead. \section{Derivation of Transformation from $\{0, 1\}$ to $\{-1, 1\}$ and vice versa}\label{sec_IsingTrans} In this section, we first introduce the Ising model for $p$ variables with domain $\{-1, 1 \}$, which is the domain used in physics applications. Next, we introduce the Ising model for $p$ variables with domain $\{0, 1 \}$, which is mostly used in the statistics literature. We connect both models by deriving a formula for the parameters of one parameterization as a function of the parameters of the other parameterization. This allows us to transform the parameterization based on domain $\{-1, 1 \}$ into the parameterization of domain $\{0, 1 \}$ and vice versa. In the physics domain, variables can take on values in $\{-1, 1 \}$. 
The probability distribution of the Ising model for $p$ such random variables is specified by \begin{equation}\label{eq_Ising_X} p(y) = \frac{\exp\left(\sum\limits_{i=1}^p\alpha_iy_i +\sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \beta_{ij}y_iy_j\right)}{\sum\limits_y\exp\left(\sum\limits_{i=1}^p\alpha_iy_i +\sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \beta_{ij}y_iy_j\right)}, \end{equation} \noindent where $y, y \in \{-1\text{, }1\}^p$, denotes a configuration of the $p$ random variables, and the sum $\sum\limits_y$ in the denominator denotes a sum that ranges over all $2^p$ possible configurations or realizations of $y$. From a statistical perspective, the Ising model is a model that is completely determined by the spin variables' main effects and their pairwise interactions. A spin variable in the network tends to have a positive value ($y_i = 1$) when its main effect is positively valued ($\alpha_i > 0$), and tends to have a negative value ($y_i = -1$) when its main effect is negatively valued ($\alpha_i < 0$). Furthermore, any two variables $y_i$ and $y_j$ in the network tend to align their values when their interaction effect is positive ($\beta_{ij} > 0$), and tend to be in different states when their interaction effect is negative ($\beta_{ij} < 0$). In statistical applications, the Ising model is typically used to describe the probability distribution of $p$ binary random variables, \begin{equation} p(x) = \frac{\exp\left(\sum\limits_{i=1}^p\alpha_i^\ast x_i + \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \beta_{ij}^\ast x_ix_j\right)}{\sum\limits_x\exp\left(\sum\limits_{i=1}^p\alpha_i^\ast x_i + \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \beta_{ij}^\ast x_ix_j\right)}, \end{equation} \noindent where $x$, $x \in \{0\text{, }1\}^p$, denotes a configuration of the $p$ binary random variables, and again we use $\sum\limits_x$ to denote the sum that ranges over all $2^p$ possible configurations or realizations of $x$. 
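A useful consequence of the $\{0,1\}$ form is that the conditional distribution of each $x_i$ given the remaining variables is logistic with linear predictor $\alpha_i^\ast + \sum_{j \neq i} \beta_{ij}^\ast x_j$, which is why the model can be estimated by a sequence of $p$ logistic regressions. The following Python sketch (our own illustration, not part of the original text) verifies this for random parameters with $p = 3$ by comparing the conditional computed from the joint distribution with the logistic form:

```python
# Verify that the conditional P(x_1 = 1 | x_2, ..., x_p) of the {0,1}-domain
# Ising model is logistic in the remaining variables.
import itertools
import math
import random

random.seed(0)
p = 3
alpha = [random.uniform(-1, 1) for _ in range(p)]
beta = [[0.0] * p for _ in range(p)]
for i in range(p):
    for j in range(i + 1, p):
        beta[i][j] = beta[j][i] = random.uniform(-1, 1)

def weight(x):
    """Unnormalized potential exp(sum_i alpha_i x_i + sum_{i<j} beta_ij x_i x_j)."""
    e = sum(alpha[i] * x[i] for i in range(p))
    e += sum(beta[i][j] * x[i] * x[j] for i in range(p) for j in range(i + 1, p))
    return math.exp(e)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# conditional for the first variable, computed from the joint, vs logistic form
for rest in itertools.product([0, 1], repeat=p - 1):
    cond = weight((1,) + rest) / (weight((0,) + rest) + weight((1,) + rest))
    logistic = sigmoid(alpha[0] + sum(beta[0][j + 1] * rest[j] for j in range(p - 1)))
    assert abs(cond - logistic) < 1e-12
print("conditional of x_1 is logistic in the remaining variables")
```

The normalization constant cancels in the conditional, so only the terms involving $x_1$ survive, leaving exactly the logistic regression of $x_1$ on the other variables.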
Even though the model is again completely determined by main effects and pairwise interactions, its interaction parameters $\beta^\ast$ carry a different interpretation than the interaction parameters of the Ising model for variables $Y$ in the $\{-1, 1\}$ domain. Here, two binary variables $x_i$ and $x_j$ in the network tend to both equal one ($x_ix_j = 1$) when their interaction effect is positive ($\beta_{ij}^\ast >0$), but their product tends to equal zero ($x_ix_j = 0$) when their interaction effect is negative ($\beta_{ij}^\ast <0$). That is, whenever the interaction between two binary variables $x_i$ and $x_j$ in the network is negative ($\beta_{ij}^\ast < 0$), they tend to be in one of the states $(0\text{, }0)$, $(0\text{, }1)$ or $(1\text{, }0)$. Despite the different interpretations of the two Ising model formulations, one can translate between the two specifications with a simple change of variables. To wit, assume that we have obtained an Ising model $p(x)$ for $p$ binary variables and wish to express its solution in terms of variables in the $\{-1, 1 \}$ domain; then we require the change of variables \begin{equation} x_i = \frac{1}{2}(y_i + 1) \text{ with inverse relation } y_i = 2x_i - 1. 
\end{equation} We use this transformation in the distribution of the binary random variables, \begin{align} p(x) &= \frac{\exp\left(\sum\limits_{i=1}^p\alpha_i^\ast x_i +\sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \beta_{ij}^\ast x_ix_j\right)}{\sum\limits_x\exp\left(\sum\limits_{i=1}^p\alpha_i^\ast x_i + \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \beta_{ij}^\ast x_ix_j\right)} \nonumber \\ &= \frac{\exp\left(\sum\limits_{i=1}^p\alpha_i^\ast \frac{1}{2}(y_i + 1) +\sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \beta_{ij}^\ast \frac{1}{2}(y_i + 1)\frac{1}{2}(y_j + 1)\right)}{\sum\limits_y\exp\left(\sum\limits_{i=1}^p\alpha_i^\ast \frac{1}{2}(y_i + 1) +\sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \beta_{ij}^\ast \frac{1}{2}(y_i + 1) \frac{1}{2}(y_j + 1)\right)} = p(y), \end{align} \noindent and observe that this transformation affects both main effects and pairwise interactions. Working out the sum over pairs of variables, we find \begin{align} \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \beta_{ij}^\ast \frac{1}{2}(y_i + 1)\frac{1}{2}(y_j + 1) &= \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast\left( y_iy_j + y_i + y_j + 1 \right) \nonumber \\ &= \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast y_iy_j + \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast y_i + \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast y_j \nonumber \\ &\qquad + \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast \nonumber \\ &= \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast y_iy_j + \sum\limits_{i=1}^p \sum\limits_{\substack{j=1 \\ j \neq i}}^p \frac{1}{4} \beta_{ij}^\ast y_i + \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast \nonumber \\ &= \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast y_iy_j + \sum\limits_{i=1}^p \frac{1}{4} \beta_{i+}^\ast y_i + \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast \,, 
\end{align} \noindent where the first term reflects pairwise interactions between the variables $y$, the second term reflects main effects of the variables, with $\beta_{i+}^\ast = \sum\limits_{j = 1}^p \beta_{ij}^\ast$, and the last term is constant with respect to (w.r.t.) the variables $y$. Similarly, we can express the sum over the main effects as \begin{equation} \sum\limits_{i=1}^p\alpha_i^\ast \frac{1}{2}(y_i + 1) = \sum\limits_{i=1}^p\alpha_i^\ast \frac{1}{2}y_i + \sum\limits_{i=1}^p\alpha_i^\ast \frac{1}{2}, \end{equation} \noindent where the last term is again constant w.r.t. the variables $y$. Collecting the main effects, \begin{equation} \sum\limits_{i=1}^p\frac{1}{2}\alpha_i^\ast y_i + \sum\limits_{i=1}^p \frac{1}{4} \beta_{i+}^\ast y_i = \sum\limits_{i=1}^p\left(\frac{1}{2}\alpha_i^\ast + \frac{1}{4}\beta_{i+}^\ast\right)y_i, \end{equation} \noindent and constant terms, \begin{equation} C = \sum\limits_{i=1}^p \frac{1}{2}\alpha_i^\ast + \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast, \end{equation} \noindent we obtain: \begin{align} p(y) &= \frac{\exp\left(\sum\limits_{i=1}^p\left( \frac{1}{2}\alpha_i^\ast + \frac{1}{4}\beta_{i+}^\ast\right)y_i +\sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast y_iy_j + C\right)}{\sum\limits_y\exp\left(\sum\limits_{i=1}^p\left( \frac{1}{2}\alpha_i^\ast + \frac{1}{4}\beta_{i+}^\ast\right)y_i +\sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast y_iy_j + C\right)} \nonumber \\ &= \frac{\exp\left(\sum\limits_{i=1}^p\left( \frac{1}{2}\alpha_i^\ast + \frac{1}{4}\beta_{i+}^\ast\right)y_i+ \sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast y_iy_j \right)}{\sum\limits_y\exp\left(\sum\limits_{i=1}^p\left( \frac{1}{2}\alpha_i^\ast + \frac{1}{4}\beta_{i+}^\ast\right)y_i+\sum\limits_{i=1}^{p-1}\sum\limits_{j>i}^p \frac{1}{4} \beta_{ij}^\ast y_iy_j\right)}, \end{align} \noindent which is equal to the Ising model for variables in the $\{-1, 1
\}$ domain when we write $\alpha_i = \frac{1}{2}\alpha_i^\ast + \frac{1}{4}\beta_{i+}^\ast$ and $\beta_{ij} = \frac{1}{4}\beta_{ij}^\ast$. In a similar way, one can obtain the parameter values of the binary case from a solution of the Ising model for variables in the $\{-1, 1 \}$ domain using $\alpha_i^\ast = 2\alpha_i - 2 \beta_{i+}$ and $\beta_{ij}^\ast = 4\beta_{ij}$. Thus, we can obtain the binary Ising model parameters $\alpha^\ast$ and $\beta^\ast$ from a simple transformation of the $\{-1, 1\}$ coded Ising model parameters $\alpha$ and $\beta$, and vice versa. Table \ref{tab:tab2} summarizes these transformations: \begin{table}[h] \centering \begin{tabular}{ r | c c } \textbf{Transformation} & $\alpha$ & $\beta$\\ \hline $\{0,1\} \Rightarrow \{-1,1\}$ & $\alpha_i = \frac{1}{2}\alpha_i^\ast + \frac{1}{4}\beta_{i+}^\ast$ & $\beta_{ij} = \frac{1}{4}\beta_{ij}^\ast$ \\ $\{-1,1\} \Rightarrow \{0,1\}$ & $\alpha_i^\ast = 2\alpha_i - 2 \beta_{i+}$ & $\beta_{ij}^\ast = 4\beta_{ij}$ \end{tabular} \caption{Transformation functions to obtain the threshold and interaction parameters in one parameterization from the threshold and interaction parameters in the other parameterization. Parameters with an asterisk indicate parameters in the $\{0, 1\}$ domain.} \label{tab:tab2} \end{table} \section{Model equivalence across domains with penalized estimation}\label{A_penMS} If one estimates the Ising model with an unbiased estimator, one can estimate with domain $\{0, 1\}$ and obtain by transformation the estimates one would have obtained by estimating with domain $\{-1,1\}$ (and vice versa). In this section we ask whether this is also the case for penalized estimation, which is a popular way to estimate the Ising model \citep[e.g.][]{van2014new, ravikumar2010high}. 
In penalized estimation, the likelihood is maximized subject to a constraint $c$, typically on the $\ell_1$-norm of the vector of interaction parameters $\beta_{ij}$: $$ \sum_{i=1}^{p} \sum_{\substack{j=1\\ j \neq i}}^{p} | \beta_{ij} | < c. $$ Estimation with an $\ell_1$-penalty is attractive because it sets small parameter estimates to zero, which makes it easier to interpret the model. The key problem in this setting is selecting an appropriate constraint $c$. A popular approach is to consider a sequence of candidate constraints $C = \{c_1, \dots, c_k\}$ and select the $c_i$ that minimizes the Extended Bayesian Information Criterion (EBIC) \citep{foygel2010extended}, which extends the BIC \citep{schwarz1978estimating} by an additional penalty (weighted by $\gamma$) for the number of nonzero interaction parameters $$ \text{EBIC}_{c_i} = -2 LL_{c_i}+ s_0 \log n + 4 s_0 \gamma \log p , $$ \noindent where $LL_{c_i}$ is the maximized log-likelihood under constraint $c_i$, $s_0$ is the number of nonzero interaction parameters, $n$ is the number of observations and $p$ the number of estimated interaction parameters. We are interested in whether selecting models with this procedure in the two domains, $\{0, 1\}$ and $\{-1, 1\}$, leads to statistically equivalent models. This is indeed the case for the following reason: assume that $c^*$ minimizes the EBIC for domain $\{0, 1\}$, then from the transformation in Table \ref{tab:tab2_intro}, $\frac{c^*}{4}$ should give the lowest EBIC in domain $\{-1,1\}$, because the constraint $|| \beta^* ||_1 < c^*$ in domain $\{0, 1\}$ is equivalent to the constraint $|| \beta ||_1 < \frac{c^*}{4}$ in domain $\{-1,1\}$. Thus, if $\frac{c^*}{4}$ is included in the candidate set $C$ when estimating in domain $\{-1,1\}$, two statistically equivalent models should be selected. Note that exactly $\frac{c^*}{4}$ has to be included, because a slightly larger/smaller constraint can lead to a very different model, if the number of nonzero parameters changes. 
This nonlinearity arises from the EBIC, in which $s_0$ decreases by 1 (large change) if some parameter with a tiny value (e.g. 0.0001) is set to zero (small change). Therefore, in order to ensure statistically equivalent models one would need to search a dense sequence $C$. Clearly, this is infeasible in practice. This means that, in practice, $\ell_1$-regularized estimation can return models from domains $\{0, 1\}$ and $\{-1,1\}$ that are not statistically equivalent. We leave the task of investigating this issue for different estimation algorithms for future research. In what follows we provide an extended version of this argument. We define: \begin{align*} c^* &= \arg_{c \in C} \min \text{EBIC}_c \\ &= \arg_{c \in C} \min -2 \log \left [ \frac{1}{Z} \prod_{m=1}^{n} \exp \left \{ \sum_{i=1}^p \alpha_i^* X_i + \sum_{i=1}^{p} \sum_{\substack{j=1\\ j \neq i}}^{p} \beta_{ij}^* X_i X_j \right \} \right] \\ &+ s_0 \log n + 4 s_0 \gamma \log [p(p-1) / 2], \end{align*} \noindent with constraint $$ \sum_{i=1}^{p} \sum_{\substack{j=1\\ j \neq i}}^{p} | \beta_{ij}^* | < c, $$ \noindent where $s_0$ is the number of nonzero interaction parameters, $n$ is the sample size, $p$ is the number of variables, $\frac{p(p-1)}{2}$ is the total number of interaction parameters, and $\gamma$ is a tuning parameter. Now, we would like to show that if $c^*$ minimizes the EBIC in domain $\{0,1\}$, then $\frac{c^*}{4}$ minimizes the EBIC in $\{-1,1\}$. 
We use the transformation in Table \ref{tab:tab2_intro} to rewrite the EBIC into the parameterization implied by $\{-1, 1\}$: \begin{align*} c^* &= \arg_{c \in C} \min \text{EBIC}_c \\ &= \arg_{c \in C} \min -2 \log \left [ \frac{1}{Z} \prod_{m=1}^{n} \exp \left \{ \sum_{i=1}^p \left(\frac{1}{2} \alpha_i^* + \frac{1}{4} \sum_{\substack{j=1 \\ j\neq i}}^{p} \beta_{ij}^*\right) X_i + \sum_{i=1}^{p} \sum_{\substack{j=1\\ j \neq i}}^{p} \frac{1}{4} \beta_{ij}^* X_i X_j \right \} \right] \\ &+ s_0 \log n + 4 s_0 \gamma \log [p(p-1) / 2], \end{align*} \noindent with constraint $$ \sum_{i=1}^{p} \sum_{\substack{j=1\\ j \neq i}}^{p} | \frac{1}{4} \beta_{ij}^* | < c^*. $$ We can rewrite the constraint into $$ \sum_{i=1}^{p} \sum_{\substack{j=1\\ j \neq i}}^{p} | \beta_{ij}^* | < 4c^*. $$ The last inequality shows that the constraint is $4$ times larger for the parameterization in domain $\{0,1\}$. Put the other way around, the constraint in $\{-1,1\}$ is a factor of $4$ smaller than in $\{0,1\}$. We know that the models are statistically equivalent across domains. Therefore, the likelihood of the model with constraint $c$ in domain $\{0,1\}$ is equal to the likelihood of the model with constraint $\frac{c}{4}$ in domain $\{-1,1\}$. Now, since, with probability 1, the transformation never changes a zero estimate into a nonzero estimate or vice versa, the terms $s_0 \log n + 4 s_0 \gamma \log [p(p-1) / 2]$ in the EBIC also remain constant across domains. It follows that, if $c^* = \arg_{c \in C} \min \text{EBIC}_c$ in domain $\{0,1 \}$, then $\frac{c^*}{4} = \arg_{c \in C} \min \text{EBIC}_c$ in domain $\{-1, 1\}$. \end{document}
\begin{document} \maketitle \begin{abstract} We investigate the consequences of natural conjectures of Montgomery type on the non-vanishing of Dirichlet $L$-functions at the central point. We first justify these conjectures using probabilistic arguments. We then show using a result of Bombieri, Friedlander and Iwaniec and a result of the author that they imply that almost all Dirichlet $L$-functions do not vanish at the central point. We also deduce a quantitative upper bound for the proportion of Dirichlet $L$-functions for which $L(\frac 12,\chi)=0$. \end{abstract} \section{Introduction and statement of results} The central values of $L$-functions and their derivatives are of crucial importance in number theory. Perhaps the most important examples are the values $L^{(k)}(E,1)$ for an elliptic curve $E$, which are strongly linked with important invariants of $E$. For $k=1$ this is the Gross-Zagier Formula, and for $k\leq r(E)$ (the rank of $E$), this is the Birch and Swinnerton-Dyer Conjecture. It is widely believed that the vanishing of $L$-functions at the central point should be explained by arithmetical reasons. The Birch and Swinnerton-Dyer Conjecture is such a reason, and another type of reason is the value of the root number. Indeed, self-dual $L$-functions whose root number is $-1$ must vanish to odd order at the central point. As for Dirichlet $L$-functions, it is believed that we always have $L(\frac 12,\chi)\neq 0$; this was first conjectured by Chowla \cite{Ch} for real primitive characters $\chi$. A good reason to believe this conjecture is that the root number of self-dual Dirichlet $L$-functions, that is $L(s,\chi)$ with $\chi$ real and primitive, can never equal $-1$\footnote{This can be deduced by an exact Gauss sum computation (see Chapters 2 and 9 of \cite{Da})}. While Chowla's Conjecture is still open, there has been substantial progress towards this question. 
A famous result of Soundararajan \cite{So} states that the proportion of Dirichlet $L$-functions $L(s,\chi_{8d})$ with $d$ odd and squarefree which do not vanish at $s=\frac 12$ is at least $\frac 78$; this result was extended by Conrey and Soundararajan \cite{CS} to show that at least $20\%$ of these $L$-functions do not vanish on the whole interval $s\in [0,1]$. As for general Dirichlet characters $\chi \bmod q$, Balasubramanian and K. Murty \cite{BK}, improving on \cite{B}, have shown that at least $4\%$ of the Dirichlet $L$-functions with $\chi \bmod q$ do not vanish at $s=\frac 12$. This proportion was subsequently improved to $\frac 13$ by Iwaniec and Sarnak\footnote{These authors have also shown \cite{IS2} that in certain families of newforms of either varying weight or level, at least $\frac 12$ of the members satisfy $L(\frac 12,f\otimes \chi_D) >0$, for any fixed $D$. Iwaniec and Sarnak further proved that any improvement of the constant $\frac 12$ would imply a significant bound on Landau-Siegel zeros.} \cite{IS1}, and more recently to $34.11\%$ by Bui \cite{Bu}. Under GRH, Murty \cite{Mu} (see also \cite{Si}) has shown that this proportion is at least $50\%$\footnote{One can interpret this result as an asymptotic for the $1$-level density of low-lying zeros of Dirichlet $L$-functions for test functions whose Fourier transform has support contained in $(-2,2)$.}. Sarnak\footnote{Private conversation.} noticed that Montgomery's Conjecture on primes in arithmetic progressions implies the Katz-Sarnak prediction for the $1$-level density for any finite support, and as a consequence almost all Dirichlet $L$-functions do not vanish at the central point. However as we will see below, Montgomery's Conjecture heavily depends on the assumption that $L(\frac 12,\chi)\neq 0$. The goal of the current paper is to formulate an analogue of Montgomery's Conjecture which is independent of real zeros.
From this we will deduce that $L(\tfrac 12,\chi)\neq 0$ for almost all $\chi \bmod q$, with $Q< q \leq 2Q$. We should also mention that corresponding questions for the derivatives $L^{(k)}(\frac 12,\chi)$ have been studied. Bui and Milinovich have shown that asymptotically for $q$ and $k$ tending to infinity, $L^{(k)}(\frac 12,\chi)\neq 0$ for almost all $\chi \bmod q$. A corresponding result for completed Dirichlet $L$-functions $\Lambda(s,\chi)$ had earlier been obtained by Michel and VanderKam \cite{MV}, but with limiting proportion $\frac 23$. The unconditional results mentioned earlier rely heavily on mollification methods, which have greatly flourished in the past years. The goal of the current paper is to take a different viewpoint on the vanishing of $L(s,\chi)$ at the central point, by inputting probabilistic arguments. Bombieri, Friedlander and Iwaniec have shown \cite{BFI3} that in the range $Q_0< Q \leq 2 Q_0$, with $Q_0=x^{\frac 12} (\log x)^A$ and $a\neq 0$ a fixed integer, \begin{equation} \sum_{\substack{Q<q \leq 2Q \\ (q,a)=1}} \left| \psi(x;q,a) - \frac{\psi(x;\chi_0)}{\phi(q)} \right| \ll x \left ( \frac{\log\log x}{\log x} \right)^2. \end{equation} As a consequence, taking $a=1$ and using the orthogonality relations we obtain the bound \begin{equation} \sum_{\substack{Q<q \leq 2Q}} \bigg| \frac 1{\phi(q)} \sum_{\chi \neq \chi_0} \sum_{\rho_{\chi}} \frac{x^{\rho_{\chi}}}{\rho_{\chi}} \bigg| \ll x \left ( \frac{\log\log x}{\log x} \right)^2, \label{equation corollary of BFI} \end{equation} where $\rho_{\chi}$ runs through the nontrivial zeros of $L(s,\chi)$. If $\rho_{\chi} \notin \mathbb R$, then the term $x^{\rho_{\chi}}/\rho_{\chi}$ oscillates; however potential real zeros $\rho_{\chi}$ would result in non-oscillating terms on the left hand side of \eqref{equation corollary of BFI}.
It is therefore natural to believe that a better bound holds after removing the real zeros, or at least it is very natural to believe that \begin{equation} \sum_{Q<q \leq 2Q} \bigg| \frac 1{\phi(q)} \sum_{\chi \neq \chi_0} \sum_{\rho_{\chi}\notin \mathbb R} \frac{x^{\rho_{\chi}}}{\rho_{\chi}} \bigg| \ll x \left ( \frac{\log\log x}{\log x} \right)^2. \label{equation corollary of BFI without real zeros} \end{equation} We first remark that this last bound implies the non-vanishing of almost all Dirichlet $L$-functions at the central point. \begin{proposition} \label{theorem nonvanishing with BFI} Fix $A>-2,$ and assume that \eqref{equation corollary of BFI without real zeros} holds in the range $x^{\frac 12} (\log x)^{A}< Q \leq 2 x^{\frac 12} (\log x)^A$. Then almost all Dirichlet $L$-functions do not vanish at the central point. More precisely, for $Q$ large enough we have \begin{equation} \frac 1{Q^2} \sum_{q \leq Q} \sum_{\chi \bmod q} z(\chi) \ll \frac {(\log\log Q)^2}{(\log Q)^{2+A} }, \label{equation nonvanishing} \end{equation} where $z(\chi)$ is the number of real zeros of $L(s,\chi)$ in the critical strip, counted with multiplicity. \end{proposition} \begin{remark} Montgomery's probabilistic argument (see below) supports \eqref{equation corollary of BFI without real zeros} (and predicts a stronger bound). As for \eqref{equation corollary of BFI} (which is known unconditionally), one would need to add the assumption that $L(\frac 12,\chi)\neq 0$ for Montgomery's argument to support this bound. \end{remark} We now investigate the implications of a more powerful conjecture than \eqref{equation corollary of BFI without real zeros} on the non-vanishing of Dirichlet $L$-functions at the central point.
Montgomery's Conjecture, which is motivated by a probabilistic argument, states that in a certain range of $q$ and $x$ with $(a,q)=1$, $$ \psi(x;q,a) - \frac{\psi(x;\chi_0)}{\phi(q)} = \frac 1{\phi(q)} \sum_{\chi\neq \chi_0} \overline{\chi}(a) \psi(x;\chi) \ll_{\epsilon} \frac{x^{\frac 12+\epsilon}}{q^{\frac 12}}. $$ This conjecture is based on the fact that under GRH we have \begin{equation} \frac {x^{-\frac 12}}{\phi(q)} \sum_{\chi\neq \chi_0} \overline{\chi}(a) \psi(x;\chi) = -\frac 1{\phi(q)}\sum_{\chi\neq \chi_0} \overline{\chi}(a) \sum_{\rho_{\chi}} \frac{x^{ i\gamma_{\chi}}}{\rho_{\chi}}+O(x^{-\frac 12}(\log qx)^2), \label{equation explicit formula} \end{equation} and one can show (see Appendix \ref{appendix}) that if the $\gamma_{\chi}$ are distinct and nonzero, then the first term on the right hand side of \eqref{equation explicit formula} has a limiting logarithmic distribution with zero mean and variance $\asymp \phi(q)^{-1}\log q$. Hence we believe that this term should not exceed $q^{-\frac 12+\epsilon}$. If we remove the assumption that the $\gamma_{\chi}$ are nonzero, then we need to reformulate Montgomery's Conjecture. Indeed if the proportion of $\chi \bmod q$ such that $L(\frac 12 ,\chi)=0 $ is not exactly zero, then Montgomery's Conjecture is false\footnote{\label{footnote sarnak}The contrapositive of this statement follows from Theorem 2.13 of \cite{FiMi}. Indeed, taking the test function $\eta_{\kappa}(y):=(\sin(\kappa\pi y )/\kappa\pi y)^2$, whose Fourier transform is supported in the interval $[-\kappa,\kappa]$, the $1$-level density is asymptotically $\widehat{\eta_{\kappa}}(0)=1/\kappa$, and taking arbitrarily large values of $\kappa$ gives the desired conclusion. }. We now reformulate this conjecture, depending on a parameter $0<\eta<1$. \begin{hypothesis}[Modified Montgomery Conjecture] \label{hypothesis reformulated montgomery} Fix $\epsilon>0$.
In the range $q \leq x^{\eta}$, we have for $(a,q)=1$ that \begin{equation} \frac 1{\phi(q)}\sum_{\chi\neq \chi_0} \overline{\chi}(a) \sum_{\rho_{\chi} \notin \mathbb R} \frac{x^{ \rho_{\chi}}}{\rho_{\chi}} \ll_{\epsilon} \frac{x^{\frac 12+\epsilon}}{q^{\frac12}}. \label{equation strong hypothesis} \end{equation} \end{hypothesis} We will show that this hypothesis implies a strong non-vanishing result on Dirichlet $L$-functions at the central point. \begin{theorem} \label{theorem strong nonvanishing} Fix $\epsilon>0$. Assume GRH, and assume that for some $\frac 12< \eta<1$, Hypothesis \ref{hypothesis reformulated montgomery} holds\footnote{It is actually sufficient to assume that \eqref{equation strong hypothesis} holds on average over $q \leq Q$, with $Q\leq x^{\eta}$ and $a=1$.}. Then we have that \begin{equation} \frac 1{Q^2} \sum_{q\leq Q} \sum_{\chi \bmod q} z(\chi) \ll_{\epsilon} \frac 1{Q^{\frac 12-\epsilon}}, \label{equation nonvanishing strong} \end{equation} where $z(\chi)$ is the order of vanishing of $L(s,\chi)$ at $s=\frac 12$. \end{theorem} \begin{remark}In contrast with Montgomery's Conjecture, Hypothesis \ref{hypothesis reformulated montgomery} does not imply GRH, but rather implies that the nonreal zeros of $L(s,\chi)$ lie on the line $\mathrm{Re}(s)=\frac 12$. This last statement was used as a hypothesis in the work of Sarnak and Zaharescu \cite{SZ}, who showed that it implies an effective bound on the class number of imaginary quadratic fields. \end{remark} \begin{remark} As mentioned earlier in Footnote \ref{footnote sarnak}, Montgomery's Conjecture implies the Katz-Sarnak prediction for the $1$-level density in the family of Dirichlet $L$-functions modulo $q$, and thus it follows that almost all members of this family do not vanish at the central point.
However, Montgomery's Conjecture has the assumption that $L(\frac 12,\chi)\neq 0$ built in, and moreover the Katz-Sarnak density conjecture does not allow one to obtain an explicit error term as in \eqref{equation nonvanishing} and \eqref{equation nonvanishing strong}. Finally, the range $x^{\frac 12-o(1)} < Q< x^{\frac 12+o(1)} $ in which we are working in Proposition \ref{theorem nonvanishing with BFI} corresponds in the Katz-Sarnak problem to test functions whose Fourier transform is supported in $(-2-\epsilon,-2+\epsilon) \cup (2-\epsilon,2+\epsilon)$, and thus does not allow one to tackle the rest of the support, which is needed to obtain the non-vanishing of almost all Dirichlet $L$-functions at the central point. \end{remark} Proposition \ref{theorem nonvanishing with BFI} follows from a fairly straightforward argument. As for Theorem \ref{theorem strong nonvanishing}, the proof is more involved and relies on the properties of the sum $$S(Q;x):=-\sum_{ Q<q\leq 2Q} \left( \psi(x;q,1) - \frac{\psi(x,\chi_0)}{\phi(q)} \right), $$ which we will study using two different techniques. We now record one of the resulting estimates which we believe is of independent interest. \begin{proposition} \label{proposition strong estimate for S(Q;x)} Fix $\epsilon>0$, assume GRH and assume that Hypothesis \ref{hypothesis reformulated montgomery} holds for some $\frac 12 < \eta <1$. Then in the range $x^{\frac 12}<Q \leq x$ we have that $$ S(Q;x) = \frac Q2 \log(x/Q) +C_3 Q +O_{\epsilon} \left(\frac{x^{1+\epsilon}} {Q^{\frac 12}} +Q^{\frac 32-\epsilon} x^{-\frac 12+\epsilon}\right), $$ where $$ C_3:= \frac 12\left( \log 2\pi + \gamma +\sum_p \frac{\log p}{p(p-1)} +1 \right)-\log 2. $$ (Note that this gives an asymptotic for $S(Q,x)$ in the range $x^{\frac 23+o(1)} < Q = o(x)$, and that the error term is independent of $\eta$.)
\end{proposition} \section{An application of the Bombieri-Friedlander-Iwaniec Theorem} In this section we prove Proposition \ref{theorem nonvanishing with BFI}. \begin{proof}[Proof of Proposition \ref{theorem nonvanishing with BFI}] Applying the triangle inequality twice gives that in the range \linebreak $x^{\frac 12} (\log x)^{A}< Q \leq 2 x^{\frac 12} (\log x)^A$, \begin{align*} \bigg|\sum_{Q<q\leq 2Q} \frac 1{\phi(q)} \sum_{\chi \neq \chi_0} \sum_{\rho_{\chi}\in \mathbb R} \frac{x^{\rho_{\chi}}}{\rho_{\chi}} \bigg| &\leq \sum_{Q<q\leq 2Q} \bigg|\frac 1{\phi(q)} \sum_{\chi \neq \chi_0} \sum_{\rho_{\chi} } \frac{x^{\rho_{\chi}}}{\rho_{\chi}} \bigg| + \sum_{Q<q\leq 2Q} \bigg|\frac 1{\phi(q)} \sum_{\chi \neq \chi_0} \sum_{\rho_{\chi} \notin \mathbb R} \frac{x^{\rho_{\chi}}}{\rho_{\chi}} \bigg|\\ & \ll x \left(\frac{\log\log x}{\log x}\right)^2, \end{align*} by \eqref{equation corollary of BFI} and \eqref{equation corollary of BFI without real zeros}. We therefore have that \begin{equation} x \left ( \frac{\log\log x}{\log x} \right)^2\gg \sum_{Q<q\leq 2Q} \frac 1{\phi(q)} \sum_{\chi \neq \chi_0} \sum_{\rho_{\chi}\in \mathbb R} \frac{x^{\rho_{\chi}}}{\rho_{\chi}} \geq \sum_{Q<q\leq 2Q} \frac 1{q} \sum_{\chi \neq \chi_0} \sum_{\rho_{\chi}\in \mathbb R} \frac{x^{\rho_{\chi}}}{\rho_{\chi}}.
\label{equation proof 1} \end{equation} We now note that if $\rho \in \mathbb R$ is a zero of $L(s,\chi)$, then $1-\rho$ is also a zero of $L(s,\overline{\chi})$ with the same multiplicity $m_{\rho}$, hence this pair of zeros gives a contribution of $$ m_{\rho}\frac{x^{\rho}}{\rho} +m_{\rho} \frac{x^{1-\rho}}{1-\rho} \geq m_{\rho} x^{\frac 12}.$$ Therefore grouping characters by conjugate pairs in \eqref{equation proof 1}, we obtain that the last term on the right is $$\geq \sum_{Q<q\leq 2Q} \frac 1{q} \sum_{\chi \neq \chi_0} \frac 12 \sum_{\rho_{\chi}\in \mathbb R} x^{\frac 12} \geq \frac {x^{\frac 12}}{4Q} \sum_{Q<q\leq 2Q} \sum_{\chi \neq \chi_0}z(\chi).$$ We conclude that $$\frac 1{Q^2} \sum_{Q<q\leq 2Q} \sum_{\chi \bmod q}z(\chi) \ll \frac{x^{\frac 12}} Q \left ( \frac{\log\log x}{\log x} \right)^2.$$ A standard argument using dyadic intervals gives the claimed bound. \end{proof} \section{Applications of Montgomery's Conjecture} In this section we study the quantity \begin{equation} \label{equation definition S(Q;x)} S(Q;x)=-\sum_{ Q<q\leq 2Q} \left( \psi(x;q,1) - \frac{\psi(x,\chi_0)}{\phi(q)} \right), \end{equation} using two different techniques. The proof of Theorem \ref{theorem strong nonvanishing} will follow by comparing these two estimates. We first give a conditional bound on $S(Q;x)$ using techniques of \cite{fiorilli}, which ultimately rely on Hooley's variant of the divisor switching method \cite{H}. \begin{lemma} \label{lemma weak divisor switch} Fix $\epsilon>0$ and assume GRH.
In the range $x^{\frac 12} \leq Q \leq x$, we have the estimate \begin{equation} S(Q;x)= \frac Q2 \log (x/Q) + C_3 Q +O_{\epsilon} \left(\frac{x^{\frac 32} (\log x)^2} Q +Q^{\frac 32-\epsilon} x^{-\frac 12+\epsilon}\right), \end{equation} where $S(Q;x)$ is defined in \eqref{equation definition S(Q;x)} and $$ C_3:= \frac 12\left( \log 2\pi + \gamma +\sum_p \frac{\log p}{p(p-1)} +1 \right)-\log 2. $$ (Note that this gives an asymptotic for $S(Q;x)$ in the range $x^{\frac 34} \log x = o(Q)$, $Q\leq x$.) \end{lemma} \begin{proof} We evaluate $S(Q,x)$ by following the argument in the proof of Proposition 6.1 of \cite{fiorilli} (see also \cite{Fo, FrGr, FGHM,H}). We first write $$ S(Q,x)= \sum_{2Q<q\leq x} \psi(x;q,1) -\sum_{Q<q\leq x} \psi(x;q,1) + \psi(x;\chi_0) \sum_{Q< q\leq 2Q} \frac 1{\phi(q)}=I-II+III. $$ Lemma 5.2 of \cite{fiorilli} combined with the Riemann Hypothesis implies that $$III = C_1 x \log 2 +O\left( x\frac{\log Q}Q +x^{\frac 12}(\log x)^2\right),$$ where $C_1:= \zeta(2)\zeta(3)/\zeta(6)$. We treat $I$ and $II$ as follows (see Lemma 5.1 of \cite{fiorilli}): \begin{align} II&=\sum_{Q<q\leq x} \sum_{\substack{ Q< n\leq x \\ q \mid n-1}} \Lambda(n) = \sum_{\substack{1\leq r <(x-1)/Q}} \sum_{\substack{rQ+1 < n \leq x \\ r \mid n-1 }} \Lambda(n) \notag\\ &= \sum_{\substack{1\leq r <(x-1)/Q}} (\psi(x;r,1)-\psi(rQ+1;r,1)) \label{equation to iterate 1} \\ &= \sum_{\substack{1\leq r <(x-1)/Q}} \frac{x-rQ-1}{\phi(r)} + O\left( \frac{x^{\frac 32} (\log x)^2} Q\right) \label{equation to iterate 2} \\ &=x \left(C_1 \log(x/Q) +C_2 + \frac{\log (x/Q)}{2x/Q} +\frac{C_0 Q}x + O_{\epsilon}\left(\left(\frac Qx \right)^{\frac 32-\epsilon} +\frac{x^{\frac 12} (\log x)^2} Q \right)\right)\notag \end{align} by GRH and Lemma 5.9 of \cite{fiorilli}.
Here, $$ C_2:= C_1 \left( \gamma-1 -\sum_p \frac{\log p}{p^2-p+1} \right), \hspace{1cm} C_0:= \frac 12\left( \log 2\pi + \gamma +\sum_p \frac{\log p}{p(p-1)} +1 \right). $$ Note that at this point we cannot apply Hypothesis \ref{hypothesis reformulated montgomery} in going from \eqref{equation to iterate 1} to \eqref{equation to iterate 2}, since we have no information on the real zeros of $L(s,\chi)$. Later we will reiterate this proof and apply our non-vanishing results at this step to get a better error term. We conclude the proof by collecting our estimates for $I$, $II$ and $III$: \begin{align*} S(Q,x) &= \frac Q2 \log(x/Q) +Q( C_0-\log 2) +O_{\epsilon}\left( x^{\frac 12}(\log x)^2+ Q^{\frac 32-\epsilon} x^{-\frac 12+\epsilon} +\frac{x^{\frac 32} (\log x)^2} Q\right). \end{align*} \end{proof} We now combine Lemma \ref{lemma weak divisor switch} with Hypothesis \ref{hypothesis reformulated montgomery} to obtain a first non-vanishing result. \begin{lemma} \label{lemma weaker result} Fix $\epsilon>0$. Assume GRH, and assume that for some $\frac 12< \eta<1$, Hypothesis \ref{hypothesis reformulated montgomery} holds\footnote{It is actually sufficient to assume that for $Q\asymp x^{\eta}$, $ \sum_{Q<q \leq 2Q}\frac 1{\phi(q)}\sum_{\chi\neq \chi_0} \sum_{\rho_{\chi} \notin \mathbb R} \frac{x^{ \rho_{\chi}}}{\rho_{\chi}} \ll_{\epsilon} Q^{\frac 12}x^{\frac 12+\epsilon}.$}. Then we have that \begin{equation} \frac 1{Q^2} \sum_{Q<q\leq 2Q} \sum_{\chi \bmod q} z(\chi) \ll_{\epsilon} \frac 1{Q^{\min(\frac 12,2-\frac 1{\eta})-\epsilon}}, \label{equation nonvanishing weak} \end{equation} where $z(\chi)$ is the number of real zeros of $L(s,\chi)$ in the critical strip, counted with multiplicity.
(Note that if $\frac 23 \leq \eta < 1$, then the right hand side of \eqref{equation nonvanishing weak} equals $Q^{-\frac 12+\epsilon}$). \end{lemma} \begin{proof} We study the quantity $$S(Q;x)=-\sum_{ Q<q\leq 2Q} \left( \psi(x;q,1) - \frac{\psi(x,\chi_0)}{\phi(q)} \right), $$ in the range $x^{\eta}/3 \leq Q\leq x^{\eta}/2$. On one hand, we apply the explicit formula and GRH: \begin{align} S(Q;x) &= \sum_{ Q<q\leq 2Q} \frac 1{\phi(q)}\sum_{\chi\neq \chi_0} \sum_{\rho_{\chi} } \frac{x^{ \rho_{\chi}}}{\rho_{\chi}} +O(Q (\log Qx)^2) \notag \\&= 2x^{\frac 12}\sum_{ Q<q\leq 2Q} \frac 1{\phi(q)} \sum_{\chi \neq \chi_0} z(\chi) +\sum_{ Q<q\leq 2Q} \frac 1{\phi(q)}\sum_{\chi\neq \chi_0} \sum_{\rho_{\chi} \notin \mathbb R } \frac{x^{\rho_{\chi}}}{\rho_{\chi}} +O(Q (\log Qx)^2) \notag \\ & \geq \frac{x^{\frac 12}}{Q} \sum_{Q<q\leq 2Q} \sum_{\chi \neq \chi_0} z(\chi) +O_{\epsilon}(x^{\frac 12+\frac{\epsilon}2} Q^{\frac 12}), \label{equation first estimate for S} \end{align} by Hypothesis \ref{hypothesis reformulated montgomery}. On the other hand, we compare this with the estimate for $S(Q;x)$ in Lemma \ref{lemma weak divisor switch}, yielding $$\frac 1{Q^2}\sum_{Q<q\leq 2Q} \sum_{\chi \neq \chi_0} z(\chi) \ll_{\epsilon} x^{\frac{\epsilon}2} Q^{-\frac 12}+ \frac{\log x}{x^{\frac 12}} + \frac{x (\log x)^2} {Q^2} \ll_{\epsilon} Q^{\epsilon-\frac 12} + Q^{\frac 1{\eta} -2+\epsilon}. $$ \end{proof} We now refine Lemma \ref{lemma weaker result}, by re-inserting Hypothesis \ref{hypothesis reformulated montgomery} in its proof. We will iterate this process several times, until we reach the error term appearing in Theorem \ref{theorem strong nonvanishing}.
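The divisor-switching rearrangement used in the proof of Lemma \ref{lemma weak divisor switch} (going from the double sum defining $II$ to \eqref{equation to iterate 1}) is an exact identity, and can be checked numerically for small parameters. A minimal sketch; the cutoffs $x=500$ and $Q=20$ are illustrative:

```python
import math
from sympy import factorint

N = 500
# table of the von Mangoldt function Lambda(n) for n <= N
Lam = [0.0] * (N + 1)
for n in range(2, N + 1):
    f = factorint(n)
    if len(f) == 1:                      # n is a prime power p^k
        Lam[n] = math.log(next(iter(f)))

def psi(x, r):
    """Chebyshev function psi(x; r, 1): sum of Lambda(n) over n <= x, n = 1 mod r."""
    return sum(Lam[n] for n in range(2, x + 1) if n % r == 1 % r)

x, Q = N, 20
# direct form: II = sum over Q < q <= x of psi(x; q, 1)
direct = sum(psi(x, q) for q in range(Q + 1, x + 1))
# after switching divisors (r = (n-1)/q): sum over 1 <= r < (x-1)/Q
switched = sum(psi(x, r) - psi(r * Q + 1, r)
               for r in range(1, math.ceil((x - 1) / Q)))
assert abs(direct - switched) < 1e-9
```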
\begin{lemma} Fix $\epsilon>0$, assume GRH and assume that Hypothesis \ref{hypothesis reformulated montgomery} holds\footnote{Again it is sufficient to assume that \eqref{equation strong hypothesis} holds on average over $q \leq Q$, with $Q\leq x^{\eta}$ and $a=1$.} for some $\frac 12 < \eta < 1$. Assume further that for $\kappa(\eta)$ a function of $\eta$ satisfying $ 0<\kappa(\eta)<\frac 12$, we have \begin{equation} \frac 1{Q^2}\sum_{Q< q\leq 2Q} \sum_{\chi \neq \chi_0} z(\chi) \ll_{\epsilon} \frac 1{Q^{\frac 12-\epsilon}}+ \frac 1{Q^{\kappa(\eta) -\epsilon}}. \label{estimate to improve} \end{equation} Then it follows that \begin{equation} \frac 1{Q^2}\sum_{Q<q\leq 2Q} \sum_{\chi \neq \chi_0} z(\chi) \ll_{\epsilon} \frac 1{Q^{\frac 12-\epsilon}}+\frac 1{Q^{2-\frac 1{\eta} -\kappa(\eta)(1-\frac 1{\eta}) -\epsilon}}. \label{estimate improved} \end{equation} \label{lemma iterative process} \end{lemma} \begin{proof} We set $x^{\eta}/3 \leq Q \leq x^{\eta}/2$ and follow the proofs of Lemmas \ref{lemma weak divisor switch} and \ref{lemma weaker result}, applying \eqref{estimate to improve} in going from \eqref{equation to iterate 1} to \eqref{equation to iterate 2}.
Note that \eqref{estimate to improve}, GRH and Hypothesis \ref{hypothesis reformulated montgomery} imply that \begin{align*} \sum_{\substack{1\leq r <(x-1)/Q}} & \left(\psi(x;r,1) -\psi(rQ+1;r,1) - \frac {\psi(x,\chi_0)-\psi(rQ+1,\chi_0)}{\phi(r)}\right) \\ &= \sum_{\substack{1\leq r <(x-1)/Q}} \frac 1{\phi(r)}\sum_{\chi \neq \chi_0} (\psi(x,\chi)-\psi(rQ+1,\chi)) \\ &= \sum_{\substack{1\leq r <(x-1)/Q}} \frac 1{\phi(r)} \sum_{\chi \neq \chi_0}\sum_{\gamma_{\chi}\neq 0} \frac{x^{\frac 12+i\gamma_{\chi}} - (rQ+1)^{\frac 12+i\gamma_{\chi}}}{\frac 12+i\gamma_{\chi}} \\ & \hspace{1cm}+ 2\sum_{\substack{1\leq r <(x-1)/Q}} \frac 1{\phi(r)} \sum_{\chi\neq \chi_0}z(\chi) (x^{\frac 12} - (rQ+1)^{\frac 12}) \\ &\ll_{\epsilon} \frac{x^{1+\epsilon}} {Q^{\frac 12}} + x^{\frac 32-\kappa(\eta)+\epsilon}Q^{-1+\kappa(\eta)}, \end{align*} since for $r <(x-1)/Q$ we always have $r\leq (rQ+1)^{\eta}$, thanks to the fact that $\frac 12 < \eta < 1$. Also in applying \eqref{estimate to improve} we used a dyadic decomposition of the sum over $r$. Following the subsequent steps of the proofs of Lemmas \ref{lemma weak divisor switch} and \ref{lemma weaker result} we obtain that since $Q\geq x^{\frac 12}$, $$\frac 1{Q^2}\sum_{Q<q\leq 2Q} \sum_{\chi \neq \chi_0} z(\chi) \ll_{\epsilon} \frac{x^{\frac{\epsilon}2}}{Q^{\frac 12}}+ x^{1-\kappa(\eta)+\frac{\epsilon}2}Q^{-2+\kappa(\eta)}+x^{\frac 12+\epsilon}Q^{-\frac 32}\ll_{\epsilon} \frac 1{Q^{\min(\frac 12,2-\frac 1{\eta} -\kappa(\eta)(1-\frac 1{\eta}))-\epsilon}}. $$ \end{proof} We now show that starting from Lemma \ref{lemma weaker result} with a fixed $\frac 12<\eta< 1$ and applying Lemma \ref{lemma iterative process} iteratively, we eventually obtain the error term $Q^{-\frac 12+\epsilon}$. \begin{lemma} \label{lemma iterative process stops} Fix $\frac 12< \eta <1$, and define $f(t):=2-\frac 1{\eta} - t (1-\frac 1{\eta} )$.
Then for $n$ large enough (depending on $\eta$), we have that $$ f^{(n)}\left(2-\frac 1\eta\right) >\frac 12, $$ where $f^{(n)}$ is the $n$-th iterate of $f$. \end{lemma} \begin{proof} One easily shows the following formula: $$f^{(n)}\left(2-\frac 1\eta\right) =\left( 2-\frac 1{\eta} \right) \sum_{k=0}^n \left( \frac 1{\eta}-1 \right)^k = 1- \left( \frac 1{\eta}-1\right)^{n+1}. $$ It follows that for any fixed $\frac 12 < \eta < \infty,$ $$ \lim_{n\rightarrow \infty} f^{(n)}\left(2-\frac 1\eta\right) = 1.$$ \end{proof} We are now ready to prove Theorem \ref{theorem strong nonvanishing}. \begin{proof}[Proof of Theorem \ref{theorem strong nonvanishing}] Fix $\frac 12<\eta<1$. By Lemma \ref{lemma weaker result}, we have that \begin{equation} \frac 1{Q^2} \sum_{Q<q\leq 2Q} \sum_{\chi \bmod q} z(\chi) \ll_{\epsilon} \frac 1{Q^{\frac 12-\epsilon }}+ \frac 1{Q^{2-\frac 1{\eta}-\epsilon}}. \end{equation} We apply Lemma \ref{lemma iterative process} iteratively to this estimate; Lemma \ref{lemma iterative process stops} implies that after a finite number of steps we will obtain the bound \begin{equation} \frac 1{Q^2} \sum_{Q<q\leq 2Q} \sum_{\chi \bmod q} z(\chi) \ll_{\epsilon} \frac 1{Q^{\frac 12-\epsilon }}. \end{equation} The desired estimate follows from a decomposition into dyadic intervals. \end{proof} \begin{proof}[Proof of Proposition \ref{proposition strong estimate for S(Q;x)}] We follow the proof of Lemma \ref{lemma weak divisor switch}.
We apply Theorem \ref{theorem strong nonvanishing} in going from \eqref{equation to iterate 1} to \eqref{equation to iterate 2}; as seen in the proof of Lemma \ref{lemma iterative process}, this will yield that $$ S(Q;x) = \frac Q2 \log(x/Q) +C_3Q + O_{\epsilon}\left( x^{\frac 12} (\log x)^2 + Q^{\frac 32-\epsilon} x^{-\frac 12+\epsilon} + \frac{x^{1+\epsilon}}{Q^{\frac 12}}\right). $$ The proof follows since $x^{\frac 12}<Q \leq x$. \end{proof} \appendix \section{The distribution of the error term in the prime number theorem in arithmetic progressions} \label{appendix} In this appendix we study the limiting logarithmic distribution of the term on the left hand side of \eqref{equation strong hypothesis}, and justify Hypothesis \ref{hypothesis reformulated montgomery}. Let us first study the remainder term in the prime number theorem for arithmetic progressions: $$T(x;q,a):= -x^{-\frac 12}\left( \psi(x;q,a)-\frac{\psi(x,\chi_0)}{\phi(q)}\right) =\frac {x^{-\frac 12}}{\phi(q)}\sum_{\chi\neq \chi_0} \overline{\chi}(a) \sum_{\rho_{\chi}} \frac{x^{ \rho_{\chi}}}{\rho_{\chi}}+o(1).$$ Assuming GRH, one can show that $T(x;q,a)$ has a limiting logarithmic distribution $\mu_{q;a}$, a probability measure whose associated random variable will be denoted by $X_{q;a}$. \begin{proposition} \label{proposition limiting distribution} Assume GRH. Then $T(e^y;q,a)$ has a limiting probability distribution $\mu_{q;a}$ as $y \rightarrow \infty$, whose mean is given by $$ \mathbb E[X_{q;a}]=\int_{\mathbb R} t \, d\mu_{q;a}(t) = \frac 2{\phi(q)}\sum_{\chi \neq \chi_0} \overline{\chi}(a) z(\chi),$$ where $z(\chi)$ is the order of vanishing of $L(s,\chi)$ at $s=\frac 12$.
The variance of $X_{q;a}$ is given by $$\text{Var}[X_{q;a}]= \int_{\mathbb R} (t-\mathbb E[X_{q;a}])^2 \, d\mu_{q;a}(t) = \frac 1{\phi(q)^2}\sum_{\chi \neq \chi_0} |\chi(a)|^2\sum_{\gamma_{\chi}\neq 0} \frac{m_{\gamma_{\chi}}^2}{\frac 14+ \gamma_{\chi}^2}, $$ where $m_{\gamma_{\chi}}$ denotes the multiplicity of $\gamma_{\chi}$ in the multiset $S(q):= \{ \gamma_{\chi} : L(\frac 12+i\gamma_{\chi},\chi)=0,\chi \bmod q \}$. \end{proposition} \begin{proof} The existence of the limiting distribution follows from \cite{NgSh}. The computation of the first two moments is almost identical to that in Lemmas 2.4 and 2.5 of \cite{Fi}. \end{proof} We now study the left hand side of \eqref{equation strong hypothesis} by defining $$T^*(x;q,a):= \frac {x^{-\frac 12}}{\phi(q)}\sum_{\chi\neq \chi_0} \overline{\chi}(a) \sum_{\rho_{\chi}\notin \mathbb R} \frac{x^{ \rho_{\chi}}}{\rho_{\chi}}.$$ Similarly as in Proposition \ref{proposition limiting distribution}, one shows under GRH that $T^*(x;q,a)$ has a limiting logarithmic distribution whose mean is exactly zero and whose variance is given by $$ V^*(q;a):=\frac 1{\phi(q)^2}\sum_{\chi \neq \chi_0} |\chi(a)|^2\sum_{\gamma_{\chi}\neq 0} \frac{m_{\gamma_{\chi}}^2}{\frac 14+ \gamma_{\chi}^2}. $$ Assuming that the $m_{\gamma_{\chi}}$ are uniformly bounded, we deduce using the Riemann-von Mangoldt formula that $V^*(q;a) \asymp \phi(q)^{-1}\log q$. Hence, if $\psi(q)$ is any function tending to infinity, then Chebyshev's Inequality gives $$\text{Prob}[|X_{q;a}| \geq \psi(q) \phi(q)^{-\frac 12} (\log q)^{\frac 12}] \ll \frac 1{\psi(q)^2}, $$ that is, $X_{q;a}$ is typically bounded above by $\phi(q)^{-\frac 12} (\log q)^{\frac 12}$. We need however to be careful in making conjectures about the size of $T^*(x;q,a)$, since, even though very rare, `Littlewood phenomena' do happen.
For this reason we add the $x^{\epsilon}$ factor, which gives \eqref{equation strong hypothesis}. We should also be careful with the range $q\leq x^{\eta}$ in \eqref{equation strong hypothesis}, since by the work of Friedlander and Granville \cite{FG1,FG2}, the primes up to $x$ are not equidistributed in arithmetic progressions modulo $q$ when $q \asymp x/(\log x)^B$. \end{document}
\begin{document} \title{Generalized Lyapunov-Sylvester operators for Kuramoto-Sivashinsky Equation} \author{Abdelhamid BEZIA \and Anouar Ben Mabrouk} \maketitle \begin{abstract} A numerical method is developed leading to algebraic systems based on generalized Lyapunov-Sylvester operators to approximate the solution of the two-dimensional Kuramoto-Sivashinsky equation. It consists of an order reduction method and a finite difference discretization which is proved to be uniquely solvable, stable and convergent by using the Lyapunov criterion and manipulating generalized Lyapunov-Sylvester operators. Some numerical implementations are provided at the end to validate the theoretical results. \newline \textbf{Mathematics Subject Classification (2010)}. 65M06, 65M12, 65M22, 15A30, 37B25. \textbf{Key words}: Kuramoto-Sivashinsky equation, Finite difference method, Lyapunov-Sylvester operators. \end{abstract} \section{Introduction} The present paper is devoted to the development of a computational method based on a two-dimensional finite difference scheme to approximate the solution of the nonlinear Kuramoto-Sivashinsky equation \begin{equation} \frac{\partial u}{\partial t}=q\Delta u-\kappa \Delta ^{2}u+\lambda \left\vert \nabla u\right\vert ^{2},\ ((x,y),t)\in \Omega \times (t_{0},+\infty ), \label{E-Kura-Shiva} \end{equation} with initial conditions \begin{equation} u(x,y,t_{0})=\varphi (x,y)\;;\quad (x,y)\in \Omega \label{eqn1-3} \end{equation} and boundary conditions \begin{equation} \frac{\partial u}{\partial \eta }\left( x,y,t\right) =0\;;\quad ((x,y),t)\in \partial \Omega \times (t_{0},+\infty ), \label{eqn1-4} \end{equation} on a rectangular domain $\Omega =[L_{0},L_{1}]\times \lbrack L_{0},L_{1}]$ in $\mathbb{R}^{2},$ where $t_{0}\geq 0$ is a real parameter fixed as the initial time.
$\frac{\partial }{\partial t}$ is the time derivative, $\nabla $ is the space gradient operator and $\Delta =\frac{\partial ^{2}}{\partial x^{2}} +\frac{\partial ^{2}}{\partial y^{2}}$ is the Laplace operator in $\mathbb{R} ^{2}$, $q$,$\,\kappa $,$\,\lambda $ are real parameters. $\varphi $ and $ \psi $ are twice differentiable real valued functions on $\overline{\Omega }$. We propose to apply an order reduction of the derivation and thus to solve a coupled system of equations involving second order differential operators. We set $v=qu-\kappa \Delta u$ and thus we have to solve the system \begin{equation} \left\{ \begin{array}{l} \frac{\partial u}{\partial t}=\Delta v+\lambda \left\vert \nabla u\right\vert ^{2},\quad (x,y,t)\in \Omega \times (t_{0},+\infty ) \\ v=qu-\kappa \Delta u,\quad (x,y,t)\in \Omega \times (t_{0},+\infty ) \\ \left( u,v\right) (x,y,t_{0})=\left( \varphi ,\psi \right) (x,y),\quad (x,y)\in \overline{\Omega } \\ \overrightarrow{\nabla }(u,v)(x,y,t)=0,\quad (x,y,t)\in \partial \Omega \times (t_{0},+\infty ) \end{array} \right. \label{Problem1} \end{equation} The Kuramoto-Sivashinsky equation (KS) has been one of the most famous equations in mathematical physics for many decades. It has its origin in the work of Kuramoto in the 1970s on reaction-diffusion equations. The equation was then considered by Sivashinsky in modeling small thermal diffusion instabilities for laminar flames and modeling the reference flux of a film layer on an inclined plane. Since then the KS equation has experienced a growing development in theoretical mathematics and numerics as well as in physical mechanics, nonlinear physics, hydrodynamics, chemistry, plasma physics, particle distribution advection, surface morphology, etc. See \cite{Benachour}, \cite{Hansen}, \cite{Hong}, \cite{Jayaprakash}, \cite{Kuramoto}, \cite{Nadjafikhah}, \cite{Procaccia}, \cite{Rost}, \cite{Sivashinsky}, \cite{Sivashinsky2}.
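The order reduction leading to system (\ref{Problem1}) can be verified symbolically: substituting $v=qu-\kappa\Delta u$ into $\Delta v+\lambda\left\vert\nabla u\right\vert^{2}$ recovers the right-hand side of (\ref{E-Kura-Shiva}). A minimal sketch using an abstract smooth function $u(x,y)$:

```python
import sympy as sp

x, y, q, kappa, lam = sp.symbols('x y q kappa lambda')
u = sp.Function('u')(x, y)   # an arbitrary smooth function of (x, y)

def laplacian(f):
    """Two-dimensional Laplace operator."""
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

grad_sq = sp.diff(u, x)**2 + sp.diff(u, y)**2   # |grad u|^2

# reduced system: v = q*u - kappa*Lap(u), so du/dt = Lap(v) + lam*|grad u|^2
v = q * u - kappa * laplacian(u)
rhs_reduced = laplacian(v) + lam * grad_sq

# original fourth-order right-hand side of the Kuramoto-Sivashinsky equation
rhs_original = q * laplacian(u) - kappa * laplacian(laplacian(u)) + lam * grad_sq

assert sp.simplify(rhs_reduced - rhs_original) == 0
```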
In \cite{Nadjafikhah}, the symmetry problem of the model was studied based on the theory of Lie algebras. In \cite{Benachour}, an anisotropic version of the KS equation has been proposed, leading to global resolutions of the equation on rectangular domains. Sufficient conditions were given for the existence of a global solution. From the dimensional point of view, the KS equation has been widely studied in one dimension, \cite{Giacomelli}, \cite{Goodman}, \cite{Ilyashenko}, \cite{Nicolaenko}, \cite{Otto}, \cite{Temam}. However, since its appearance this equation has been related to the modeling of flame spread, which is a two-dimensional problem. In this context, an important result was developed in \cite{Sell}, where the authors showed, by adapting the method developed in \cite{Raugel}, the existence of a set of bounded solutions on a rectangular domain. This major importance of the two-dimensional model was a main motivation behind this work. A model representing a nonlinear dynamical system defined in a two-dimensional space is considered, where the solution $u(x,y,t)$ satisfies a fourth order partial differential equation of the form (\ref{E-Kura-Shiva}), where $u$ is the height of the interface and $q$ is a pre-factor proportional to the coefficient of surface tension expressed by $q\nabla ^{2}u$. The quantity $\kappa \nabla ^{4}u$ represents the result of surface diffusion due to the curvature-induced chemical potential gradient. The pre-factor $\kappa $ represents the surface diffusion term. The quantity $\lambda |\nabla u|^{2}$ is due to the existence of overhangs and vacancies during the deposition process. Finally, the quantity $\nu \nabla ^{2}u+\lambda \left\vert \nabla u\right\vert ^{2}$ is referred to as modeling the effect of deposited atoms; cf. \cite{Hong}.
In the present work we propose to make use of algebraic operators to approximate the solutions of the Kuramoto-Sivashinsky (KS) equation in two spatial dimensions and one temporal dimension, without adapting classical developments based on separation of variables, radial solutions, tridiagonal operators, etc. This is one aim of the present paper. A second crucial idea is to transform the continuous KS equation into a generalized Lyapunov-Sylvester equation of the form \begin{equation} \sum_{i}A_{i}X_{n}B_{i}=C_{n} \label{Lyp-Sylv-Equa} \end{equation} where $A_{i}$ and $B_{i}$ are appropriate matrices depending on the discretization procedure and the problem parameters. $X_{n}$ represents the numerical solution at time $n$ and $C_{n}$ usually depends on the past values $X_{k},$ $k\leq n-1$, of the solution. The equation $\left( \ref{Lyp-Sylv-Equa}\right) $ is known as a generalized Lyapunov-Sylvester equation. Such equations have their origin in the work of Sylvester on classical matrix equations. In the particular case \begin{equation} \sum_{i} A_{i}X\overline{A}_{i}^{T}=C \end{equation} the equation is known restrictively as a Lyapunov equation \cite{Simoncini}. Generally speaking, the equation \begin{equation} \sum_{i}A_{i}XB_{i}=C \label{LSG1} \end{equation} is very difficult to invert, and its resolution remains an open problem in algebra. Nevertheless, some works have been developed and proved that, under suitable conditions on the coefficient matrices, one may get a unique solution, but its exact computation remains hard. It requires computing eigenvalues, or bounds/estimates of eigenvalues, or direct inverses of large matrices, which remains a complicated problem and is usually impractical. In \cite{Lancaster}, a naive method to solve $\left( \ref{LSG1}\right) $ is investigated, based on the Kronecker product and an equivalent matrix-vector equation.
The Sylvester equation is transformed into an equivalent linear one of the form $Gx=c,$ with a matrix $G$ obtained by tensor products issued from the $A_{j}$'s and the $B_{j}$'s. However, the general case remains complicated. The authors thus restricted themselves to special cases where the matrices $A_{j}$ and $B_{j}$ are scalar polynomials in two special fixed matrices $A\ $and $B.$ Denoting by $\sigma \left( A\right) $ the spectrum of $A$ and by $\sigma (B)$ that of $B$, the spectrum $\sigma (G)$ may then be determined in terms of these spectra. Indeed, with these assumptions on the $A_{j}$'s and the $B_{j}$'s, the equation $\left( \ref{LSG1}\right) $ can be written as \begin{equation*} \sum_{j,k}\alpha _{j,k}A^{j}XB^{k}=C, \end{equation*} where the $\alpha _{jk}$ are complex numbers. Hence, the tensor matrix $G$ can be written in the form $G=\phi \left( A,B\right) ,$ where $\phi $ is the 2-variable polynomial $\phi \left( x,y\right) =\sum_{j,k}\alpha _{j,k}x^{j}y^{k}.$ Thus, $G$ is singular if and only if $\phi \left( \lambda ,\mu \right) =0$ for some eigenvalues $\lambda $ and $\mu $ of $A$ and $B$ respectively. In the case of square matrices another criterion for the existence of a solution of $\left( \ref{LSG1}\right) $ was pointed out by Roth \cite{Roth}. It was shown that a solution exists if and only if the matrices \begin{equation*} \left[ \begin{array}{cc} A & C \\ 0 & B \end{array} \right] ,\ \ \ \text{and \ \ }\left[ \begin{array}{cc} A & 0 \\ 0 & B \end{array} \right] \end{equation*} are similar. However, in numerical studies of PDEs we may be confronted with matrices $G$ whose spectral properties are not easy to compute: they require enormous calculations and sometimes induce slow algorithms and bad convergence rates.
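For illustration, the Kronecker-product route just described can be sketched in a few lines (a NumPy sketch with names of our own choosing, not code from \cite{Lancaster}). It also makes the cost visible: $G=\sum_{i}B_{i}^{T}\otimes A_{i}$ has size $n^{2}\times n^{2}$, so a dense solve costs $O(n^{6})$, which is exactly why direct inversion is avoided for fine discretizations.

```python
import numpy as np

def solve_sylvester_kron(As, Bs, C):
    """Solve sum_i A_i X B_i = C by Kronecker vectorization.

    Uses the identity vec(A X B) = (B^T kron A) vec(X), with
    column-major (Fortran-order) vec. Dense and O(n^6): purely
    illustrative, not a practical solver.
    """
    n = C.shape[0]
    G = sum(np.kron(B.T, A) for A, B in zip(As, Bs))
    x = np.linalg.solve(G, C.flatten(order='F'))
    return x.reshape((n, n), order='F')

# small sanity check: recover a known X from A1 X + X B2 = C
rng = np.random.default_rng(0)
A1, B1 = rng.standard_normal((3, 3)), np.eye(3)
A2, B2 = np.eye(3), rng.standard_normal((3, 3))
X_true = rng.standard_normal((3, 3))
C = A1 @ X_true @ B1 + A2 @ X_true @ B2
X = solve_sylvester_kron([A1, A2], [B1, B2], C)
```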
Thus, one motivation here, issued from \cite{Bezia}, consists in avoiding such problems and proving the invertibility of the algebraic operator yielded by the numerical scheme using a topological method, instead of classical ones such as tridiagonal transformations. We thus aim to prove that generalized Lyapunov-Sylvester operators can be good candidates for investigating numerical solutions of PDEs in multi-dimensional spaces. The paper is organized as follows. In the next section the discretization of $\left( \ref{Problem1}\right) $ is developed. In section 3, the solvability of the scheme is analyzed. In section 4, the consistency of the method is shown and, in section 5, the stability and convergence are proved based on the Lyapunov method and the Lax-Richtmyer theorem. Finally, a numerical implementation is provided leading to the computation of the numerical solution and error estimates. \section{The numerical scheme} Let $J\in \mathbb{N}^{\ast }$ and $h=\frac{L_{1}-L_{0}}{J}$ be the space step. Denote for $\left( j,m\right) \in I^{2}=\left\{ 0,1,...,J\right\} ^{2},$ $x_{j}=L_{0}+jh$ and $y_{m}=L_{0}+mh$. Next, let $l=\Delta t$ be the time step and $t_{n}=t_{0}+nl,$ $n\in \mathbb{N}$ be the time grid. We denote also by $\widetilde{\Omega }=\left\{ \left( x_{j},y_{m},t_{n}\right) ;\ \left( j,m\right) \in I^{2},\ n\in \mathbb{N}\right\} $ the associated discrete space. Finally, for $\left( j,m\right) \in I^{2}$ and $n\geq 0$, $u_{j,m}^{n}$ denotes the net function $u(x_{j},y_{m},t_{n})$ and $U_{j,m}^{n}$ is the numerical solution. The following discrete approximations will be applied for the different differential operators involved in the problem.
For time derivatives, we set \begin{equation*} \frac{\partial u}{\partial t}\leadsto \frac{U_{j,m}^{n+1}-U_{j,m}^{n-1}}{2l} \end{equation*} and for space derivatives, we shall use for the first order ones \begin{equation*} \frac{\partial u}{\partial x}\leadsto \frac{U_{j+1,m}^{n}-U_{j-1,m}^{n}}{2h},\ \hbox{ \ \ and \ \ }\frac{\partial u}{\partial y}\leadsto \frac{U_{j,m+1}^{n}-U_{j,m-1}^{n}}{2h} \end{equation*} and for the second order ones, we set \begin{equation*} \frac{\partial ^{2}u}{\partial x^{2}}\leadsto \frac{U_{j+1,m}^{n,\alpha }-2U_{j,m}^{n,\alpha }+U_{j-1,m}^{n,\alpha }}{h^{2}},\ \hbox{and\ \ \ }\frac{\partial ^{2}u}{\partial y^{2}}\leadsto \frac{U_{j,m+1}^{n,\alpha }-2U_{j,m}^{n,\alpha }+U_{j,m-1}^{n,\alpha }}{h^{2}}. \end{equation*} Finally, for $n\in \mathbb{N}^{\ast }$ and $\alpha \in \mathbb{R}$, we denote \begin{equation*} U^{n,\alpha }=\alpha U^{n-1}+\left( 1-2\alpha \right) U^{n}+\alpha U^{n+1} \end{equation*} to designate the estimation of $U_{j,m}^{n}$ with an $\alpha $-extrapolation/interpolation barycenter method. By applying these discrete approximations, we obtain \begin{eqnarray*} U_{j,m}^{n+1}-U_{j,m}^{n-1} &=&\frac{2l}{h^{2}}\left[ V_{j-1,m}^{n,\beta }-2V_{j,m}^{n,\beta }+V_{j+1,m}^{n,\beta }+V_{j,m-1}^{n,\beta }-2V_{j,m}^{n,\beta }+V_{j,m+1}^{n,\beta }\right] \\ &&+\frac{2l}{h^{2}}\lambda \left[ \frac{1}{4}\left( U_{j+1,m}^{n}-U_{j-1,m}^{n}\right) ^{2}+\frac{1}{4}\left( U_{j,m+1}^{n}-U_{j,m-1}^{n}\right) ^{2}\right] .
\end{eqnarray*} In what follows, we set \begin{equation*} F_{j,m}^{n}=\frac{1}{4}\left[ \left( U_{j+1,m}^{n}-U_{j-1,m}^{n}\right) ^{2}+\left( U_{j,m+1}^{n}-U_{j,m-1}^{n}\right) ^{2}\right] \end{equation*} We thus get \begin{eqnarray*} U_{j,m}^{n+1}-U_{j,m}^{n-1} &=&\sigma \lbrack \beta V_{j-1,m}^{n+1}+\left( 1-2\beta \right) V_{j-1,m}^{n}+\beta V_{j-1,m}^{n-1} \\ &&-2\beta V_{j,m}^{n+1}-2\left( 1-2\beta \right) V_{j,m}^{n}-2\beta V_{j,m}^{n-1} \\ &&+\beta V_{j+1,m}^{n+1}+\left( 1-2\beta \right) V_{j+1,m}^{n}+\beta V_{j+1,m}^{n-1} \\ &&+\beta V_{j,m-1}^{n+1}+\left( 1-2\beta \right) V_{j,m-1}^{n}+\beta V_{j,m-1}^{n-1} \\ &&-2\beta V_{j,m}^{n+1}-2\left( 1-2\beta \right) V_{j,m}^{n}-2\beta V_{j,m}^{n-1} \\ &&+\beta V_{j,m+1}^{n+1}+\left( 1-2\beta \right) V_{j,m+1}^{n}+\beta V_{j,m+1}^{n-1}]+\sigma \lambda F_{j,m}^{n} \end{eqnarray*} where $\sigma =\frac{2l}{h^{2}}$. Equivalently, this can be written as \begin{eqnarray*} &&U_{j,m}^{n+1}-\sigma \beta \left[ V_{j-1,m}^{n+1}-2V_{j,m}^{n+1}+V_{j+1,m}^{n+1}+V_{j,m-1}^{n+1}-2V_{j,m}^{n+1}+V_{j,m+1}^{n+1}\right] \\ &=&U_{j,m}^{n-1}+\sigma \left( 1-2\beta \right) \left[ V_{j-1,m}^{n}-2V_{j,m}^{n}+V_{j+1,m}^{n}+V_{j,m-1}^{n}-2V_{j,m}^{n}+V_{j,m+1}^{n}\right] \\ &&+\sigma \beta \left[ V_{j-1,m}^{n-1}-2V_{j,m}^{n-1}+V_{j+1,m}^{n-1}+V_{j,m-1}^{n-1}-2V_{j,m}^{n-1}+V_{j,m+1}^{n-1}\right] +\sigma \lambda F_{j,m}^{n}.
\end{eqnarray*} Taking into account the boundary conditions, we obtain the full matrix expression \begin{eqnarray} U^{n+1}-\sigma \beta \left( AV^{n+1}+V^{n+1}A^{T}\right) &=&U^{n-1}+\sigma \left( 1-2\beta \right) \left( AV^{n}+V^{n}A^{T}\right) \notag \\ &&+\sigma \beta \left( AV^{n-1}+V^{n-1}A^{T}\right) +\sigma \lambda F^{n} \label{FinalDiscrete1} \end{eqnarray} where \begin{equation*} A=\left( \begin{array}{cccccc} -2 & 2 & 0 & \cdots & \cdots & 0 \\ 1 & -2 & 1 & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & 1 & -2 & 1 \\ 0 & \cdots & \cdots & 0 & 2 & -2 \end{array} \right) \end{equation*} $U^{n}=\left( U_{j,m}^{n}\right) _{0\leq j,m\leq J}$ and $V^{n}=\left( V_{j,m}^{n}\right) _{0\leq j,m\leq J}$. Next, for $Q\in M_{\left( J+1\right) ^{2}}\left( \mathbb{R}\right) ,$ we denote by $\mathcal{L}_{Q}$ the linear operator which associates to $X\in M_{\left( J+1\right) ^{2}}\left( \mathbb{R}\right) $ its image $\mathcal{L}_{Q}\left( X\right) =QX+XQ^{T}$. Thus, equation (\ref{FinalDiscrete1}) can be written as \begin{equation} U^{n+1}-\sigma \beta \mathcal{L}_{A}\left( V^{n+1}\right) =U^{n-1}+\sigma \left( 1-2\beta \right) \mathcal{L}_{A}\left( V^{n}\right) +\sigma \beta \mathcal{L}_{A}\left( V^{n-1}\right) +\sigma \lambda F^{n}. \end{equation} Now, applying similar techniques as previously, we obtain \begin{equation*} V_{j,m}^{n,\beta }=qU_{j,m}^{n,\alpha }-\frac{\kappa }{h^{2}}\left( U_{j-1,m}^{n,\alpha }-2U_{j,m}^{n,\alpha }+U_{j+1,m}^{n,\alpha }+U_{j,m-1}^{n,\alpha }-2U_{j,m}^{n,\alpha }+U_{j,m+1}^{n,\alpha }\right) . \end{equation*} Let $\delta =\frac{1}{h^{2}}$.
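For concreteness, the matrix $A$ and the operator $\mathcal{L}_{A}$ just defined can be sketched as follows (a NumPy illustration of our own, not part of the paper). Note that $\mathcal{L}_{A}(X)=AX+XA^{T}$ acts as the five-point discrete Laplacian with Neumann boundary rows, so it annihilates constant grid functions:

```python
import numpy as np

def second_difference_matrix(J):
    """Build the (J+1)x(J+1) matrix A of the scheme: tridiagonal
    (1, -2, 1) with boundary rows (-2, 2) and (2, -2) encoding the
    homogeneous Neumann condition."""
    n = J + 1
    A = np.zeros((n, n))
    np.fill_diagonal(A, -2.0)
    A[np.arange(n - 1), np.arange(1, n)] = 1.0  # superdiagonal
    A[np.arange(1, n), np.arange(n - 1)] = 1.0  # subdiagonal
    A[0, 1] = 2.0
    A[-1, -2] = 2.0
    return A

def L_A(A, X):
    """Lyapunov-Sylvester operator L_A(X) = A X + X A^T: the 2-D
    five-point Laplacian of the grid function X, up to a 1/h^2 factor."""
    return A @ X + X @ A.T

A = second_difference_matrix(4)
# the discrete Laplacian of a constant field vanishes
assert np.allclose(L_A(A, np.ones((5, 5))), 0.0)
```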
This results in the following equation \begin{eqnarray*} \beta V_{j,m}^{n+1}+\left( 1-2\beta \right) V_{j,m}^{n}+\beta V_{j,m}^{n-1} &=&q\left[ \alpha U_{j,m}^{n+1}+\left( 1-2\alpha \right) U_{j,m}^{n}+\alpha U_{j,m}^{n-1}\right] \\ &&-\delta \kappa \lbrack \alpha U_{j-1,m}^{n+1}+\left( 1-2\alpha \right) U_{j-1,m}^{n}+\alpha U_{j-1,m}^{n-1} \\ &&-2\alpha U_{j,m}^{n+1}-2\left( 1-2\alpha \right) U_{j,m}^{n}-2\alpha U_{j,m}^{n-1} \\ &&+\alpha U_{j+1,m}^{n+1}+\left( 1-2\alpha \right) U_{j+1,m}^{n}+\alpha U_{j+1,m}^{n-1} \\ &&+\alpha U_{j,m-1}^{n+1}+\left( 1-2\alpha \right) U_{j,m-1}^{n}+\alpha U_{j,m-1}^{n-1} \\ &&-2\alpha U_{j,m}^{n+1}-2\left( 1-2\alpha \right) U_{j,m}^{n}-2\alpha U_{j,m}^{n-1} \\ &&+\alpha U_{j,m+1}^{n+1}+\left( 1-2\alpha \right) U_{j,m+1}^{n}+\alpha U_{j,m+1}^{n-1}], \end{eqnarray*} or equivalently, \begin{eqnarray*} &&\beta V_{j,m}^{n+1}-q\alpha U_{j,m}^{n+1} \\ &&+\delta \kappa \alpha \left[ U_{j-1,m}^{n+1}-2U_{j,m}^{n+1}+U_{j+1,m}^{n+1}+U_{j,m-1}^{n+1}-2U_{j,m}^{n+1}+U_{j,m+1}^{n+1}\right] \\ &=&-\left( 1-2\beta \right) V_{j,m}^{n}-\beta V_{j,m}^{n-1}+q\left[ \left( 1-2\alpha \right) U_{j,m}^{n}+\alpha U_{j,m}^{n-1}\right] \\ &&-\delta \kappa \lbrack \left( 1-2\alpha \right) \left[ U_{j-1,m}^{n}-2U_{j,m}^{n}+U_{j+1,m}^{n}+U_{j,m-1}^{n}-2U_{j,m}^{n}+U_{j,m+1}^{n}\right] \\ &&+\alpha \left[ U_{j-1,m}^{n-1}-2U_{j,m}^{n-1}+U_{j+1,m}^{n-1}+U_{j,m-1}^{n-1}-2U_{j,m}^{n-1}+U_{j,m+1}^{n-1}\right] ]. \end{eqnarray*} Now, applying the boundary conditions, we obtain \begin{eqnarray} &&\beta V^{n+1}-\alpha \left[ qU^{n+1}-\delta \kappa \mathcal{L}_{A}\left( U^{n+1}\right) \right] \notag \\ &=&\left( 1-2\alpha \right) \left( qU^{n}-\delta \kappa \mathcal{L}_{A}\left( U^{n}\right) \right) +\alpha \left( qU^{n-1}-\delta \kappa \mathcal{L}_{A}\left( U^{n-1}\right) \right) \notag \\ &&-\left( 1-2\beta \right) V^{n}-\beta V^{n-1}. \end{eqnarray} As a result, we finally obtain the following discrete coupled system.
\begin{equation} \left\{ \begin{array}{l} U^{n+1}-\sigma \beta \mathcal{L}_{A}\left( V^{n+1}\right) =U^{n-1}+\sigma \left( 1-2\beta \right) \mathcal{L}_{A}\left( V^{n}\right) +\sigma \beta \mathcal{L}_{A}\left( V^{n-1}\right) +\sigma \lambda F^{n} \\ \beta V^{n+1}-\alpha \left[ qU^{n+1}-\delta \kappa \mathcal{L}_{A}\left( U^{n+1}\right) \right] =\left( 1-2\alpha \right) \left( qU^{n}-\delta \kappa \mathcal{L}_{A}\left( U^{n}\right) \right) \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\alpha \left( qU^{n-1}-\delta \kappa \mathcal{L}_{A}\left( U^{n-1}\right) \right) \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\left( 1-2\beta \right) V^{n}-\beta V^{n-1} \end{array} \right. \label{LyapunovSystem} \end{equation} \section{Solvability of the discrete method} In this section we examine the solvability of the discrete scheme. The main idea is to transform the system $\left(\ref{LyapunovSystem}\right)$ into an equality of the form $\left(U^{n+1},V^{n+1}\right)=\phi \left(U^{n},V^{n},U^{n-1},V^{n-1}\right)$ with an appropriate function $\phi$. We prove precisely that $\phi$ can be expressed in generalized Lyapunov-Sylvester form. Next, using general properties of such operators, we prove that $\phi $ is an isomorphism. In most studies, even recent ones such as \cite{Benmabrouk2}, the authors proved that $\phi$ or some modified versions may be contractive by inserting translation-dilation parameters, leading to fixed point theory. In the present context it seems that such a transformation is not possible, as $\left\Vert\phi\right\Vert$ may be greater than $1$ and thus no contraction may occur. To overcome this problem we come back to differential calculus and topological properties. \begin{theorem} The system $\left(\ref{LyapunovSystem}\right)$ is uniquely solvable whenever $U^{0}$ and $U^{1}$ are known.
\end{theorem} The proof of this result is based on the following preliminary lemma. \begin{lemma} Let $E$ be a finite dimensional vector space over $\mathbb{R}$ (or $\mathbb{C}$), and $\left(\phi_{n}\right)_{n}$ be a sequence of endomorphisms converging uniformly to an invertible endomorphism $\phi$. Then there exists $n_{0}$ such that the endomorphism $\phi_{n}$ is invertible for all $n\geq n_{0}$. \end{lemma} \begin{proof} Consider the endomorphism $\phi $ on $M_{\left( J+1\right) ^{2}}\left( \mathbb{R}\right) \times M_{\left( J+1\right) ^{2}}\left( \mathbb{R}\right) $ defined by \begin{equation} \phi \left( X,Y\right) =\left( X-\sigma \beta \mathcal{L}_{A}\left( Y\right) ,\beta Y-\alpha \Gamma \left( X\right) \right) \end{equation} where $\Gamma \left( X\right) =qX-\delta \kappa \mathcal{L}_{A}\left( X\right) .$ To prove the theorem, we show that $\phi $ is one to one, i.e. that $\ker \phi $ is reduced to $0.$ Indeed, \begin{equation*} \phi \left( X,Y\right) =0\Leftrightarrow \left( X-\sigma \beta \mathcal{L}_{A}\left( Y\right) ,\beta Y-\alpha \left[ qX-\delta \kappa \mathcal{L}_{A}\left( X\right) \right] \right) =\left( 0,0\right) . \end{equation*} This is equivalent to \begin{equation} \left\{ \begin{array}{l} X=\sigma \beta \mathcal{L}_{A}\left( Y\right) \\ \beta Y=\alpha \Gamma \left( X\right) \end{array} \right. \label{SolvabiltySystem} \end{equation} For $\beta \neq 0$ we get \begin{equation} Y=\sigma \alpha \Gamma \mathcal{L}_{A}\left( Y\right) \end{equation} Choosing $l=o\left( h^{s+4}\right) $ (which is always possible), the operator $\mathcal{K}=I-\sigma \alpha \Gamma \mathcal{L}_{A}$ tends uniformly to $I$ whenever $h$ tends to zero.
Indeed, denote \begin{equation*} W=\sigma \alpha \left( qA-\delta \kappa A^{2}\right) =\frac{2l}{h^{2}}\alpha \left( qA-\frac{1}{h^{2}}\kappa A^{2}\right) \hbox{.} \end{equation*} We have \begin{equation*} \mathcal{K}\left( X\right) =X-\sigma \alpha \Gamma \mathcal{L}_{A}\left( X\right) =X-WX-XW^{T}+2\sigma \alpha \delta \kappa AXA^{T}. \end{equation*} Thus, \begin{eqnarray*} \left\Vert \left( \mathcal{K}-I\right) X\right\Vert &=&\left\Vert WX+XW^{T}-2\left( \sigma \alpha \delta \kappa \right) AXA^{T}\right\Vert \\ &\leq &2\left\Vert W\right\Vert \left\Vert X\right\Vert +32\sigma \alpha \delta \left\vert \kappa \right\vert \left\Vert X\right\Vert \end{eqnarray*} Since we have $\left\Vert W\right\Vert \leq 4\sigma \left\vert \alpha \right\vert \left[ \left\vert q\right\vert +4\delta \left\vert \kappa \right\vert \right] ,$ we obtain \begin{equation*} \left\Vert \left( \mathcal{K}-I\right) X\right\Vert \leq 8\sigma \alpha \left[ \left\vert q\right\vert +8\delta \left\vert \kappa \right\vert \right] \left\Vert X\right\Vert \end{equation*} For $l=o\left( h^{4+s}\right) $ this implies that \begin{equation*} \left\Vert \mathcal{K}\left( X\right) -I\left( X\right) \right\Vert \leq 16\alpha \left[ \left\vert q\right\vert h^{2+s}+8h^{s}\left\vert \kappa \right\vert \right] \left\Vert X\right\Vert \end{equation*} Consequently, the operator $\mathcal{K}$ converges uniformly to the identity whenever $h$ tends toward $0$ and $l=o\left( h^{4+s}\right) ,$ with $s>0.$ Thus, using the preliminary lemma, $\phi $ is invertible for $l$, $h$ small enough with $l=o\left( h^{4+s}\right) .$\newline For $\beta =0$ we obtain the system \begin{equation} \left\{ \begin{array}{l} U^{n+1}=U^{n-1}+\sigma \mathcal{L}_{A}\left( V^{n}\right) +\sigma \lambda F^{n} \\ V^{n}=\alpha \Gamma \left( U^{n+1}\right) +\left( 1-2\alpha \right) \Gamma \left( U^{n}\right) +\alpha \Gamma \left( U^{n-1}\right) \end{array} \right.
\label{ThesystemB=0} \end{equation} and thus, \begin{equation} U^{n+1}-\sigma \alpha \Gamma \mathcal{L}_{A}\left( U^{n+1}\right) =U^{n-1}+\sigma \lbrack \left( 1-2\alpha \right) \Gamma \mathcal{L}_{A}\left( U^{n}\right) +\alpha \Gamma \mathcal{L}_{A}\left( U^{n-1}\right) +\lambda F^{n}] \end{equation} Under the same assumptions on $l$ and $h$ as above, the same operator $\mathcal{K}\left( X\right) =X-\sigma \alpha \Gamma \mathcal{L}_{A}\left( X\right) $ tends toward the identity as $h$ tends to $0$. \end{proof} \section{Consistency} The consistency of the proposed method will be proved by evaluating the local truncation error arising from the scheme introduced for the discretization of the system $\left( \ref{Problem1}\right) $. Applying Taylor's expansion, and assuming $u$ and $v$ to be sufficiently differentiable, we get \begin{equation} U_{j,m}^{n+1}=u+l\frac{\partial u}{\partial t}+\frac{l^{2}}{2}\frac{\partial ^{2}u}{\partial t^{2}}+\frac{l^{3}}{6}\frac{\partial ^{3}u}{\partial t^{3}}+\frac{l^{4}}{24}\frac{\partial ^{4}u}{\partial t^{4}}+... \label{Un+1jm} \end{equation} Similarly, \begin{equation} U_{j,m}^{n-1}=u-l\frac{\partial u}{\partial t}+\frac{l^{2}}{2}\frac{\partial ^{2}u}{\partial t^{2}}-\frac{l^{3}}{6}\frac{\partial ^{3}u}{\partial t^{3}}+\frac{l^{4}}{24}\frac{\partial ^{4}u}{\partial t^{4}}+... \label{Un-1jm} \end{equation} Hence, \begin{equation} \frac{U_{j,m}^{n+1}-U_{j,m}^{n-1}}{2l}=\frac{\partial u}{\partial t}+\frac{l^{2}}{6}\frac{\partial ^{3}u}{\partial t^{3}}+...
\end{equation} Next, we get also \begin{eqnarray} V_{j-1,m}^{n-1} &=&\left[ v-l\frac{\partial v}{\partial t}+\frac{l^{2}}{2} \frac{\partial ^{2}v}{\partial t^{2}}-\frac{l^{3}}{6}\frac{\partial ^{3}v}{ \partial t^{3}}+\frac{l^{4}}{24}\frac{\partial ^{4}v}{\partial t^{4}}\right] \notag \\ &&-h\left[ \frac{\partial v}{\partial x}-l\frac{\partial ^{2}v}{\partial t\partial x}+\frac{l^{2}}{2}\frac{\partial ^{3}v}{\partial t^{2}\partial x}- \frac{l^{3}}{6}\frac{\partial ^{4}v}{\partial t^{3}\partial x}+\frac{l^{4}}{ 24}\frac{\partial ^{5}v}{\partial t^{4}\partial x}\right] \notag \\ &&+\frac{h^{2}}{2}\left[ \frac{\partial ^{2}v}{\partial x^{2}}-l\frac{ \partial ^{3}v}{\partial t\partial x^{2}}+\frac{l^{2}}{2}\frac{\partial ^{4}v }{\partial t^{2}\partial x^{2}}-\frac{l^{3}}{6}\frac{\partial ^{5}v}{ \partial t^{3}\partial x^{2}}+\frac{l^{4}}{24}\frac{\partial ^{6}v}{\partial t^{4}\partial x^{2}}\right] \notag \\ &&-\frac{h^{3}}{6}\left[ \frac{\partial ^{3}v}{\partial x^{3}}-l\frac{ \partial ^{4}v}{\partial t\partial x^{3}}+\frac{l^{2}}{2}\frac{\partial ^{5}v }{\partial t^{2}\partial x^{3}}-\frac{l^{3}}{6}\frac{\partial ^{6}v}{ \partial t^{3}\partial x^{3}}+\frac{l^{4}}{24}\frac{\partial ^{7}v}{\partial t^{4}\partial x^{3}}\right] \notag \\ &&+\frac{h^{4}}{24}\left[ \frac{\partial ^{4}v}{\partial x^{4}}-l\frac{ \partial ^{5}v}{\partial t\partial x^{4}}+\frac{l^{2}}{2}\frac{\partial ^{6}v }{\partial t^{2}\partial x^{4}}-\frac{l^{3}}{6}\frac{\partial ^{7}v}{ \partial t^{3}\partial x^{4}}+\frac{l^{4}}{24}\frac{\partial ^{8}v}{\partial t^{4}\partial x^{4}}\right] +... \end{eqnarray} and \begin{equation} V_{j-1,m}^{n}=v-h\frac{\partial v}{\partial x}+\frac{h^{2}}{2}\frac{\partial ^{2}v}{\partial x^{2}}-\frac{h^{3}}{6}\frac{\partial ^{3}v}{\partial x^{3}}+ \frac{h^{4}}{24}\frac{\partial ^{4}v}{\partial x^{4}}+... 
\end{equation} and finally, \begin{eqnarray} V_{j-1,m}^{n+1} &=&\left[ v+l\frac{\partial v}{\partial t}+\frac{l^{2}}{2} \frac{\partial ^{2}v}{\partial t^{2}}+\frac{l^{3}}{6}\frac{\partial ^{3}v}{ \partial t^{3}}+\frac{l^{4}}{24}\frac{\partial ^{4}v}{\partial t^{4}}\right] \notag \\ &&-h\left[ \frac{\partial v}{\partial x}+l\frac{\partial ^{2}v}{\partial t\partial x}+\frac{l^{2}}{2}\frac{\partial ^{3}v}{\partial t^{2}\partial x}+ \frac{l^{3}}{6}\frac{\partial ^{4}v}{\partial t^{3}\partial x}+\frac{l^{4}}{ 24}\frac{\partial ^{5}v}{\partial t^{4}\partial x}\right] \notag \\ &&+\frac{h^{2}}{2}\left[ \frac{\partial ^{2}v}{\partial x^{2}}+l\frac{ \partial ^{3}v}{\partial t\partial x^{2}}+\frac{l^{2}}{2}\frac{\partial ^{4}v }{\partial t^{2}\partial x^{2}}+\frac{l^{3}}{6}\frac{\partial ^{5}v}{ \partial t^{3}\partial x^{2}}+\frac{l^{4}}{24}\frac{\partial ^{6}v}{\partial t^{4}\partial x^{2}}\right] \notag \\ &&-\frac{h^{3}}{6}\left[ \frac{\partial ^{3}v}{\partial x^{3}}+l\frac{ \partial ^{4}v}{\partial t\partial x^{3}}+\frac{l^{2}}{2}\frac{\partial ^{5}v }{\partial t^{2}\partial x^{3}}+\frac{l^{3}}{6}\frac{\partial ^{6}v}{ \partial t^{3}\partial x^{3}}+\frac{l^{4}}{24}\frac{\partial ^{7}v}{\partial t^{4}\partial x^{3}}\right] \notag \\ &&+\frac{h^{4}}{24}\left[ \frac{\partial ^{4}v}{\partial x^{4}}+l\frac{ \partial ^{5}v}{\partial t\partial x^{4}}+\frac{l^{2}}{2}\frac{\partial ^{6}v }{\partial t^{2}\partial x^{4}}+\frac{l^{3}}{6}\frac{\partial ^{7}v}{ \partial t^{3}\partial x^{4}}+\frac{l^{4}}{24}\frac{\partial ^{8}v}{\partial t^{4}\partial x^{4}}\right] +... 
\end{eqnarray} Thus, \begin{eqnarray} V_{j-1,m}^{n,\beta } &=&v+\beta l^{2}\frac{\partial ^{2}v}{\partial t^{2}} +\beta \frac{l^{4}}{12}\frac{\partial ^{4}v}{\partial t^{4}} \notag \\ &&+h\left[ -\frac{\partial v}{\partial x}-\beta l^{2}\frac{\partial ^{3}v}{ \partial t^{2}\partial x}-\beta \frac{l^{4}}{12}\frac{\partial ^{5}v}{ \partial t^{4}\partial x}\right] \notag \\ &&+\frac{h^{2}}{2}\left[ \frac{\partial ^{2}v}{\partial x^{2}}+\beta l^{2} \frac{\partial ^{4}v}{\partial t^{2}\partial x^{2}}+\beta \frac{l^{4}}{12} \frac{\partial ^{6}v}{\partial t^{4}\partial x^{2}}\right] \notag \\ &&+\frac{h^{3}}{6}\left[ -\frac{\partial ^{3}v}{\partial x^{3}}-\beta l^{2} \frac{\partial ^{5}v}{\partial t^{2}\partial x^{3}}-\beta \frac{l^{4}}{12} \frac{\partial ^{7}v}{\partial t^{4}\partial x^{3}}\right] \notag \\ &&+\frac{h^{4}}{24}\left[ \frac{\partial ^{4}v}{\partial x^{4}}+\beta l^{2} \frac{\partial ^{6}v}{\partial t^{2}\partial x^{4}}+\beta \frac{l^{4}}{12} \frac{\partial ^{8}v}{\partial t^{4}\partial x^{4}}\right] +... \end{eqnarray} Similarly, \begin{equation} V_{j,m}^{n,\beta }=v+\beta l^{2}\frac{\partial ^{2}v}{\partial t^{2}}+\beta \frac{l^{4}}{12}\frac{\partial ^{4}v}{\partial t^{4}}+... 
\end{equation} We have also \begin{eqnarray} V_{j+1,m}^{n,\beta } &=&v+\beta l^{2}\frac{\partial ^{2}v}{\partial t^{2}} +\beta \frac{l^{4}}{12}\frac{\partial ^{4}v}{\partial t^{4}} \notag \\ &&+h\left[ \frac{\partial v}{\partial x}+\beta l^{2}\frac{\partial ^{3}v}{ \partial t^{2}\partial x}+\beta \frac{l^{4}}{12}\frac{\partial ^{5}v}{ \partial t^{4}\partial x}\right] \notag \\ &&+\frac{h^{2}}{2}\left[ \frac{\partial ^{2}v}{\partial x^{2}}+\beta l^{2} \frac{\partial ^{4}v}{\partial t^{2}\partial x^{2}}+\beta \frac{l^{4}}{12} \frac{\partial ^{6}v}{\partial t^{4}\partial x^{2}}\right] \notag \\ &&+\frac{h^{3}}{6}\left[ \frac{\partial ^{3}v}{\partial x^{3}}+\beta l^{2} \frac{\partial ^{5}v}{\partial t^{2}\partial x^{3}}+\beta \frac{l^{4}}{12} \frac{\partial ^{7}v}{\partial t^{4}\partial x^{3}}\right] \notag \\ &&+\frac{h^{4}}{24}\left[ \frac{\partial ^{4}v}{\partial x^{4}}+\beta l^{2} \frac{\partial ^{6}v}{\partial t^{2}\partial x^{4}}+\beta \frac{l^{4}}{12} \frac{\partial ^{8}v}{\partial t^{4}\partial x^{4}}\right] +... \end{eqnarray} Finally, \begin{eqnarray} \frac{V_{j,m-1}^{n,\beta }-2V_{j,m}^{n,\beta }+V_{j,m+1}^{n,\beta }}{h^{2}} &=&\left[ \frac{\partial ^{2}v}{\partial y^{2}}+\beta l^{2}\frac{\partial ^{4}v}{\partial t^{2}\partial y^{2}}+\beta \frac{l^{4}}{12}\frac{\partial ^{6}v}{\partial t^{4}\partial y^{2}}\right] \notag \\ &&+\frac{h^{2}}{12}\left[ \frac{\partial ^{4}v}{\partial y^{4}}+\beta l^{2} \frac{\partial ^{6}v}{\partial t^{2}\partial y^{4}}+\beta \frac{l^{4}}{12} \frac{\partial ^{8}v}{\partial t^{4}\partial y^{4}}\right] +... 
\end{eqnarray} Now, \begin{eqnarray} &&\frac{U_{j,m}^{n+1}-U_{j,m}^{n-1}}{2l} \notag \\ &&-\left[ \frac{V_{j-1,m}^{n,\beta }-2V_{j,m}^{n,\beta }+V_{j+1,m}^{n,\beta }}{h^{2}}+\frac{V_{j,m-1}^{n,\beta }-2V_{j,m}^{n,\beta }+V_{j,m+1}^{n,\beta }}{h^{2}}\right] \notag \\ &=&\frac{\partial u}{\partial t}-\Delta v+\frac{l^{2}}{6}\frac{\partial ^{3}u}{\partial t^{3}}-\beta l^{2}\frac{\partial ^{2}}{\partial t^{2}}\left( \Delta v\right) -\frac{h^{2}}{12}\left( \frac{\partial ^{4}v}{\partial x^{4}}+\frac{\partial ^{4}v}{\partial y^{4}}\right) \notag \\ &&+\beta l^{2}\frac{\partial ^{2}}{\partial t^{2}}\left( \frac{\partial ^{4}v}{\partial x^{4}}+\frac{\partial ^{4}v}{\partial y^{4}}\right) -\beta \frac{l^{4}}{12}\left[ \frac{\partial ^{6}v}{\partial t^{4}\partial x^{2}}+\frac{\partial ^{6}v}{\partial t^{4}\partial y^{2}}\right] \notag \\ &&-\beta \frac{l^{4}}{12}\frac{h^{2}}{12}\left[ \frac{\partial ^{8}v}{\partial t^{4}\partial x^{4}}+\frac{\partial ^{8}v}{\partial t^{4}\partial y^{4}}\right] +... \end{eqnarray} We now examine the second equation in $\left( \ref{Problem1}\right) $. Applying the same calculations as above, we get \begin{equation*} v=\left( qu-\kappa \Delta u\right) -\beta l^{2}\frac{\partial ^{2}v}{\partial t^{2}}+q\alpha l^{2}\frac{\partial ^{2}u}{\partial t^{2}}-\kappa \alpha l^{2}\frac{\partial ^{2}\left( \Delta u\right) }{\partial t^{2}}-\kappa \frac{h^{2}}{12}\left( \frac{\partial ^{4}u}{\partial x^{4}}+\frac{\partial ^{4}u}{\partial y^{4}}\right) +o\left( l^{2}+h^{2}\right) .
\end{equation*} It results from the above that the principal part of the local truncation error due to the first equation in system $\left( \ref{Problem1}\right) $ is \begin{eqnarray} \mathcal{L}_{u,v}^{1}\left( t,x,y\right) &=&\beta l^{2}\frac{\partial ^{2}v}{\partial t^{2}}-\alpha ql^{2}\frac{\partial ^{2}u}{\partial t^{2}}-\alpha \kappa l^{2}\frac{\partial ^{2}\left( \Delta v\right) }{\partial t^{2}} \notag \\ &&-\kappa \frac{l^{2}}{12}\left( \frac{\partial ^{4}v}{\partial x^{4}}+\frac{\partial ^{4}v}{\partial y^{4}}\right) +o\left( l^{2}+h^{2}\right) . \end{eqnarray} The principal part of the local truncation error due to the second equation is \begin{eqnarray} \mathcal{L}_{u,v}^{2}\left( t,x,y\right) &=&\beta \frac{l^{2}}{2}\frac{\partial ^{2}}{\partial t^{2}}\left( v-qu-\kappa \alpha \Delta u\right) \notag \\ &&-\kappa \frac{h^{2}}{12}\left( \frac{\partial ^{4}v}{\partial x^{4}}+\frac{\partial ^{4}v}{\partial y^{4}}\right) +o\left( l^{2}+h^{2}\right) . \end{eqnarray} As a result, we get the following lemma. \begin{lemma} The numerical method is consistent with order $2$ in space and time. \end{lemma} \begin{proof} It is clear that the two operators $\mathcal{L}_{u,v}^{1}$ and $\mathcal{L}_{u,v}^{2}$ tend towards $0$ as $l$ and $h$ tend to $0$, which ensures the consistency of the method. Furthermore, the method is consistent with order $2$ in time and space. \end{proof} \section{Stability and convergence} The stability of the discrete scheme will be evaluated using the Lyapunov criterion, which states that a linear system $\mathcal{L}\left( x_{n+1},x_{n},x_{n-1},...\right) =0$ is stable in the sense of Lyapunov if for any bounded initial solution $x_{0},$ the solution $x_{n}$ remains bounded for all $n\geq 0.$ In this section we prove precisely the following result. \begin{lemma} The solution $\left( U^{n},V^{n}\right) $ is bounded independently of $n$ whenever the initial solution $\left( U^{0},V^{0}\right) $ is.
\end{lemma} \begin{proof} We proceed by induction on $n.$ Assume $\left\Vert \left( U_{0},V_{0}\right) \right\Vert \leq \eta $ for some positive $\eta $. The system $\left( \ref{LyapunovSystem}\right) $ can be written in the form \begin{equation} \left\{ \begin{array}{c} U^{n+1}-\sigma \beta \mathcal{L}_{A}\left( V^{n+1}\right) =U^{n-1}+\sigma \left( 1-2\beta \right) \mathcal{L}_{A}\left( V^{n}\right) +\sigma \beta \mathcal{L}_{A}\left( V^{n-1}\right) +\sigma \lambda F^{n}. \\ \beta V^{n+1}=\alpha \Gamma \left( U^{n+1}\right) +\left( 1-2\alpha \right) \Gamma \left( U^{n}\right) +\alpha \Gamma \left( U^{n-1}\right) -\left( 1-2\beta \right) V^{n}-\beta V^{n-1}. \end{array} \right. \label{StabilitySystem} \end{equation} The last equation gives \begin{eqnarray*} \beta \mathcal{L}_{A}\left( V^{n+1}\right) &=&\alpha \mathcal{L}_{A}\Gamma \left( U^{n+1}\right) +\left( 1-2\alpha \right) \mathcal{L}_{A}\Gamma \left( U^{n}\right) \\ &&+\alpha \mathcal{L}_{A}\Gamma \left( U^{n-1}\right) -\left( 1-2\beta \right) \mathcal{L}_{A}\left( V^{n}\right) -\beta \mathcal{L}_{A}\left( V^{n-1}\right) . \end{eqnarray*} Substituting in the first one, we obtain \begin{equation} \mathcal{K}\left( U^{n+1}\right) =\sigma \left( 1-2\alpha \right) \mathcal{L}_{A}\Gamma \left( U^{n}\right) +\sigma \alpha \mathcal{L}_{A}\Gamma \left( U^{n-1}\right) +U^{n-1}+\sigma \lambda F^{n}. \label{TUn+1} \end{equation} Next, recall that \begin{equation*} \left\vert F_{j,m}^{n}\right\vert =\frac{1}{4}\left\vert \left( U_{j+1,m}^{n}-U_{j-1,m}^{n}\right) ^{2}+\left( U_{j,m+1}^{n}-U_{j,m-1}^{n}\right) ^{2}\right\vert . \end{equation*} Thus, \begin{equation*} \left\vert F_{j,m}^{n}\right\vert \leq \frac{1}{2}\left[ \left( U_{j+1,m}^{n}\right) ^{2}+\left( U_{j-1,m}^{n}\right) ^{2}+\left( U_{j,m+1}^{n}\right) ^{2}+\left( U_{j,m-1}^{n}\right) ^{2}\right] \end{equation*} and consequently, \begin{equation} \left\Vert F^{n}\right\Vert _{2}\leq 2\left\Vert U^{n}\right\Vert _{2}^{2}.
\label{NormeFn} \end{equation} Finally, $\left( \ref{TUn+1}\right) $ yields that \begin{eqnarray*} \left\Vert \mathcal{K}\left( U^{n+1}\right) \right\Vert &\leq &\sigma \left\vert 1-2\alpha \right\vert \left\Vert \mathcal{L}_{A}\Gamma \left( U^{n}\right) \right\Vert +\sigma \left\vert \alpha \right\vert \left\Vert \mathcal{L}_{A}\Gamma \left( U^{n-1}\right) \right\Vert +\left\Vert U^{n-1}\right\Vert \\ &&+\sigma \left\vert \lambda \right\vert \left\Vert F^{n}\right\Vert . \end{eqnarray*} Setting $\omega =\left\vert q\right\vert +8\delta \left\vert \kappa \right\vert $, we obtain \begin{equation} \left\Vert \mathcal{K}\left( U^{n+1}\right) \right\Vert \leq 8\omega \sigma \left\vert 1-2\alpha \right\vert \left\Vert U^{n}\right\Vert +\left[ 1+8\omega \sigma \left\vert \alpha \right\vert \right] \left\Vert U^{n-1}\right\Vert +2\sigma \left\vert \lambda \right\vert \left\Vert U^{n}\right\Vert ^{2}. \label{NormeTUn+1} \end{equation} \newline We now evaluate $\left\Vert V^{n+1}\right\Vert .$ Applying $\Gamma $ to the first equation in the system $\left( \ref{StabilitySystem}\right) $, we get \begin{eqnarray*} \Gamma \left( U^{n+1}\right) &=&\sigma \beta \Gamma \mathcal{L}_{A}\left( V^{n+1}\right) +\Gamma \left( U^{n-1}\right) +\sigma \left( 1-2\beta \right) \Gamma \left( \mathcal{L}_{A}\left( V^{n}\right) \right) \\ &&+\sigma \beta \Gamma \left( \mathcal{L}_{A}\left( V^{n-1}\right) \right) +\sigma \lambda \Gamma \left( F^{n}\right) . 
\end{eqnarray*} Substituting into the second equation of $\left( \ref{SolvabiltySystem}\right) $\ we obtain \begin{eqnarray} \beta \mathcal{K}\left( V^{n+1}\right) &=&\left( 1-2\beta \right) \left[ \sigma \alpha \Gamma \left( \mathcal{L}_{A}\left( V^{n}\right) \right) -V^{n} \right] \notag \\ &&+\beta \left[ \sigma \alpha \Gamma \left( \mathcal{L}_{A}\left( V^{n-1}\right) \right) -V^{n-1}\right] \notag \\ &&+2\alpha \Gamma \left( U^{n-1}\right) +\left( 1-2\alpha \right) \Gamma \left( U^{n}\right) +\sigma \alpha \lambda \Gamma \left( F^{n}\right) . \label{ThetaVn+1} \end{eqnarray} We get from $\left( \ref{ThetaVn+1}\right) $ and $\left( \ref{NormeFn} \right) $, \begin{eqnarray} \left\Vert \beta \mathcal{K}\left( V^{n+1}\right) \right\Vert &\leq &\left\vert 1-2\beta \right\vert \left[ 8\sigma \left\vert \alpha \right\vert \omega +1\right] \left\Vert V^{n}\right\Vert \label{NormThetaVn+1} \\ &&+\left\vert \beta \right\vert \left[ \left[ 8\sigma \left\vert \alpha \right\vert \omega +1\right] \left\Vert V^{n-1}\right\Vert \right] +2\left\vert \alpha \right\vert \omega \left\Vert U^{n-1}\right\Vert \notag \\ &&+\left( 1-2\alpha \right) \omega \left\Vert U^{n}\right\Vert +2\sigma \alpha \lambda \omega \left\Vert U^{n}\right\Vert ^{2}. \notag \end{eqnarray} \newline Now, coming back to $\left( \ref{Problem1}\right) $ and applying the boundary conditions, we get \begin{equation} U^{-1}=U^{0}+l\widetilde{\varphi }\hbox{ \ \ and \ \ }V^{-1}=qU^{0}+ \widetilde{\psi } \end{equation} where \begin{equation*} \widetilde{\varphi }=-q\Delta \varphi +\kappa \Delta ^{2}\varphi -\lambda \left\vert \nabla \varphi \right\vert ^{2} \end{equation*} and \begin{equation*} \widetilde{\psi }=-\left( lq^{2}+\kappa \right) \Delta \varphi +2l\kappa q\Delta ^{2}\varphi -l\kappa ^{2}\Delta ^{3}\varphi -\lambda lq\left\vert \nabla \varphi \right\vert ^{2}+\lambda l\kappa \Delta \left( \left\vert \nabla \varphi \right\vert ^{2}\right) . 
\end{equation*} Hence, \begin{equation} \left\Vert U^{-1}\right\Vert \leq \left\Vert U^{0}\right\Vert +l\left\Vert \widetilde{\varphi }\right\Vert \hbox{ \ \ and \ \ }\left\Vert V^{-1}\right\Vert \leq \left\vert q\right\vert \left\Vert U^{0}\right\Vert +\left\Vert \widetilde{\psi }\right\Vert . \label{NromeU-1andV-1} \end{equation} Now, the Lyapunov criterion for stability states exactly that \begin{equation*} \forall \varepsilon >0,\exists \eta >0:\ \left\Vert \left( U^{0},V^{0}\right) \right\Vert \leq \eta \Rightarrow \left\Vert \left( U^{n},V^{n}\right) \right\Vert \leq \varepsilon ,\ \ \ \ \forall n\geq 0. \end{equation*} \newline For $n=1$ and any given $\varepsilon >0,$ we seek $\eta >0$ for which $\left\Vert \left( U^{0},V^{0}\right) \right\Vert <\eta $ implies $\left\Vert \left( U^{1},V^{1}\right) \right\Vert \leq \varepsilon .$\newline By direct substitution in $\left( \ref{NormeTUn+1}\right) $, for $n=0,$ we obtain \begin{equation*} \left\Vert \mathcal{K}\left( U^{1}\right) \right\Vert \leq 8\omega \sigma \left\vert 1-2\alpha \right\vert \left\Vert U^{0}\right\Vert +\left[ 1+8\omega \sigma \left\vert \alpha \right\vert \right] \left\Vert U^{-1}\right\Vert +2\sigma \left\vert \lambda \right\vert \left\Vert U^{0}\right\Vert ^{2}. \end{equation*} From $\left( \ref{NromeU-1andV-1}\right) ,$ we obtain \begin{eqnarray*} \left\Vert \mathcal{K}\left( U^{1}\right) \right\Vert &\leq &2\sigma \left\vert \lambda \right\vert \left\Vert U^{0}\right\Vert ^{2}+\left( 1+8\omega \sigma \left[ \left\vert 1-2\alpha \right\vert +\left\vert \alpha \right\vert \right] \right) \left\Vert U^{0}\right\Vert \\ &&+l\left( 1+8\sigma \left\vert \alpha \right\vert \omega \right) \left\Vert \widetilde{\varphi }\right\Vert . 
\end{eqnarray*} Observing that \begin{equation*} \left\vert 1-2\alpha \right\vert +\left\vert \alpha \right\vert \leq \left( 1+3\left\vert \alpha \right\vert \right) , \end{equation*} we get \begin{equation*} \left\Vert \mathcal{K}\left( U^{1}\right) \right\Vert \leq 2\sigma \left\vert \lambda \right\vert \left\Vert U^{0}\right\Vert ^{2}+8\omega \sigma \left( 1+3\left\vert \alpha \right\vert \right) \left\Vert U^{0}\right\Vert +l\left( 1+8\sigma \left\vert \alpha \right\vert \omega \right) \left\Vert \widetilde{\varphi }\right\Vert . \end{equation*} Next, choosing $l=o\left( h^{4+s}\right) $ small enough, we obtain \begin{equation*} \left\Vert U^{1}\right\Vert \leq 4\sigma \left\vert \lambda \right\vert \left\Vert U^{0}\right\Vert ^{2}+16\omega \sigma \left( 1+3\left\vert \alpha \right\vert \right) \left\Vert U^{0}\right\Vert +2l\left( 1+8\sigma \left\vert \alpha \right\vert \omega \right) \left\Vert \widetilde{\varphi } \right\Vert . \end{equation*} Now, for $\varepsilon >0$, we seek $\eta >0$ such that \begin{equation} 4\sigma \left\vert \lambda \right\vert \eta ^{2}+16\omega \sigma \left( 1+3\left\vert \alpha \right\vert \right) \eta +2l\left( 1+8\sigma \left\vert \alpha \right\vert \omega \right) \left\Vert \widetilde{\varphi }\right\Vert <\varepsilon \label{equality1} \end{equation} that is, \begin{equation*} 4\sigma \left\vert \lambda \right\vert \eta ^{2}+16\omega \sigma \left( 1+3\left\vert \alpha \right\vert \right) \eta +2l\left( 1+8\sigma \left\vert \alpha \right\vert \omega \right) \left\Vert \widetilde{\varphi }\right\Vert -\varepsilon <0. \end{equation*} The reduced discriminant of this quadratic inequality is \begin{equation*} \Delta ^{\prime }=64\left( \omega \sigma \left( 1+3\left\vert \alpha \right\vert \right) \right) ^{2}-4\sigma \left\vert \lambda \right\vert (2l\left( 1+8\sigma \left\vert \alpha \right\vert \omega \right) \left\Vert \widetilde{\varphi }\right\Vert -\varepsilon ). 
\end{equation*} Under the same assumption on $l$ and $h$ as above, \begin{equation*} \Delta ^{\prime }\sim 64\left( \omega \sigma \left( 1+3\left\vert \alpha \right\vert \right) \right) ^{2}+4\sigma \left\vert \lambda \right\vert \varepsilon >0. \end{equation*} Consequently, the associated quadratic has two zeros $\eta _{1}<\eta _{2}$. Furthermore, replacing $\eta $ with $0$ gives a negative quantity, thus $\eta _{1}<0<\eta _{2}$, and $\eta _{2}$ is a suitable candidate. Now, choosing $\left\Vert (U^{0},V^{0})\right\Vert \leq \eta _{2}$, we get immediately $\left\Vert U^{1}\right\Vert <\varepsilon .$\newline Next, for $n=0,$ we get, similarly to the previous case, \begin{eqnarray*} \left\Vert \beta \mathcal{K}\left( V^{1}\right) \right\Vert &\leq &\left\vert 1-2\beta \right\vert \left[ 8\sigma \left\vert \alpha \right\vert \omega +1\right] \left\Vert V^{0}\right\Vert +\left\vert \beta \right\vert \left[ \left[ 8\sigma \left\vert \alpha \right\vert \omega +1 \right] \left\Vert V^{-1}\right\Vert \right] \\ &&+2\left\vert \alpha \right\vert \omega \left\Vert U^{-1}\right\Vert +\left\vert 1-2\alpha \right\vert \omega \left\Vert U^{0}\right\Vert +2\sigma \alpha \lambda \omega \left\Vert U^{0}\right\Vert ^{2}. \end{eqnarray*} \newline Choosing $l=o\left( h^{4+s}\right) $ small enough as above and setting $\mu =8\sigma \left\vert \alpha \right\vert \omega +1$, we obtain \begin{eqnarray*} \frac{\left\vert \beta \right\vert }{2}\left\Vert V^{1}\right\Vert &\leq &\left\Vert \beta \mathcal{K}\left( V^{1}\right) \right\Vert \leq \mu \left\vert 1-2\beta \right\vert \left\Vert V^{0}\right\Vert +\mu \left\vert \beta \right\vert \left[ \left\Vert V^{-1}\right\Vert \right] \\ &&+2\left\vert \alpha \right\vert \omega \left\Vert U^{-1}\right\Vert +\left\vert 1-2\alpha \right\vert \omega \left\Vert U^{0}\right\Vert +2\sigma \alpha \lambda \omega \left\Vert U^{0}\right\Vert ^{2}. \end{eqnarray*} Next, recall that \begin{equation*} \left\Vert U^{-1}\right\Vert \leq 
\left\Vert U^{0}\right\Vert +\left\Vert \widetilde{\varphi }\right\Vert \hbox{ and }\left\Vert V^{-1}\right\Vert \leq \left\vert q\right\vert \left\Vert U^{0}\right\Vert +l\left\Vert \widetilde{\psi }\right\Vert , \end{equation*} we get \begin{eqnarray*} \left\vert \beta \right\vert \left\Vert V^{1}\right\Vert &\leq &2\mu \left\vert 1-2\beta \right\vert \left\Vert V^{0}\right\Vert +2\mu \left\vert \beta \right\vert \left( \left\vert q\right\vert \left\Vert U^{0}\right\Vert +\left\Vert \widetilde{\psi }\right\Vert \right) \\ &&+4\left\vert \alpha \right\vert \omega \left( \left\Vert U^{0}\right\Vert +l\left\Vert \widetilde{\varphi }\right\Vert \right) +2\left\vert 1-2\alpha \right\vert \omega \left\Vert U^{0}\right\Vert +4\sigma \alpha \lambda \omega \left\Vert U^{0}\right\Vert ^{2}. \end{eqnarray*} Henceforth, \begin{eqnarray*} \left\vert \beta \right\vert \left\Vert V^{1}\right\Vert &\leq &4\sigma \alpha \lambda \omega \left\Vert U^{0}\right\Vert ^{2}+2\mu \left\vert 1-2\beta \right\vert \left\Vert V^{0}\right\Vert \\ &&+2\left( \mu \left\vert q\right\vert \left\vert \beta \right\vert +\omega \left( 2\left\vert \alpha \right\vert +\left\vert 1-2\alpha \right\vert \right) \right) \left\Vert U^{0}\right\Vert \\ &&+2\mu \left\vert \beta \right\vert \left\Vert \widetilde{\psi }\right\Vert +4\left\vert \alpha \right\vert \omega l\left\Vert \widetilde{\varphi } \right\Vert . 
\end{eqnarray*} Now, proceeding as for $U^{1}$, we prove that for all $\varepsilon >0$ there is an $\eta _{2}^{\prime }>0$ such that $\left\Vert V^{1}\right\Vert <\varepsilon $ whenever $\left\Vert (U^{0},V^{0})\right\Vert \leq \eta _{2}^{\prime }.$ Finally, $\eta =\min \left( \eta _{2},\eta _{2}^{\prime }\right) $ answers the question.\newline Assume now that $\left( U^{k},V^{k}\right) $ is bounded by $\varepsilon _{1}$ for $k=1,2,...,n$ whenever $\left( U^{0},V^{0}\right) $ is bounded by $\eta $, and let $\varepsilon >0.$ We shall prove that it is possible to choose $\eta $ such that $\left\Vert \left( U^{n+1},V^{n+1}\right) \right\Vert \leq \varepsilon .$ \end{proof} The induction step rests on the following classical result. \begin{lemma} (Banach) Let $E,F$ be two Banach spaces and let $\phi :E\rightarrow F$ be linear. If $\phi $ is continuous and bijective, then $\phi $ is a homeomorphism. \end{lemma} Consider as above the endomorphism $\phi $ on $M_{\left( J+1\right) ^{2}}\left( \mathbb{R}\right) \times M_{\left( J+1\right) ^{2}}\left( \mathbb{R}\right) $ defined by \begin{equation*} \phi \left( X,Y\right) =\left( X-\sigma \beta \mathcal{L}_{A}\left( Y\right) ,\beta Y-\alpha \left[ qX-\delta \kappa \mathcal{L}_{A}\left( X\right) \right] \right) . \end{equation*} Consider also \begin{equation*} f_{1}\left( X,Y,Z,W\right) =X+\sigma \beta \mathcal{L}_{A}\left( Y\right) +\sigma \left( 1-2\beta \right) \mathcal{L}_{A}\left( Z\right) +\sigma \lambda W \end{equation*} and \begin{equation*} f_{2}\left( X,Y,Z,T\right) =\alpha \Gamma \left( X\right) -\beta Y+\left( 1-2\alpha \right) \Gamma \left( Z\right) -\left( 1-2\beta \right) T. \end{equation*} We thus obtain \begin{eqnarray*} \phi \left( U^{n+1},V^{n+1}\right) &=&\left( U^{n+1}-\sigma \beta \mathcal{L}_{A}\left( V^{n+1}\right) ,\beta V^{n+1}-\alpha \Gamma \left( U^{n+1}\right) \right) \\ &=&\left( f_{1}\left( U^{n-1},V^{n-1},V^{n},F^{n}\right) ,f_{2}\left( U^{n-1},V^{n-1},U^{n},V^{n}\right) \right) . \end{eqnarray*} We have already proved that $\phi $ is one to one for $l$ and $h$ small enough. Since $\phi $ is linear and continuous, it has, by the lemma above, a continuous inverse, so $\phi $ is a homeomorphism on $M_{\left( J+1\right) ^{2}}\left( \mathbb{R}\right) \times M_{\left( J+1\right) ^{2}}\left( \mathbb{R}\right) $. Furthermore, \begin{equation*} \left( U^{n+1},V^{n+1}\right) =\phi ^{-1}\left( f_{1}\left( U^{n-1},V^{n-1},V^{n},F^{n}\right) ,f_{2}\left( U^{n-1},V^{n-1},U^{n},V^{n}\right) \right) . \end{equation*} As $\phi ^{-1}$ is continuous, there exists $C>0$ such that \begin{eqnarray*} \left\Vert \left( U^{n+1},V^{n+1}\right) \right\Vert &=&\left\Vert \phi ^{-1}\left( f_{1}\left( U^{n-1},V^{n-1},V^{n},F^{n}\right) ,f_{2}\left( U^{n-1},V^{n-1},U^{n},V^{n}\right) \right) \right\Vert \\ &\leq &C\left\Vert \left( f_{1}\left( U^{n-1},V^{n-1},V^{n},F^{n}\right) ,f_{2}\left( U^{n-1},V^{n-1},U^{n},V^{n}\right) \right) \right\Vert . \end{eqnarray*} \newline By choosing $\left\Vert \left( U^{k},V^{k}\right) \right\Vert \leq \eta $ for all $k=0,1,...,n$, we get \begin{equation*} \left\Vert f_{1}\left( U^{n-1},V^{n-1},V^{n},F^{n}\right) \right\Vert \leq 2\sigma \left\vert \lambda \right\vert \eta ^{2}+\left( 1+8\sigma \left( 1+3\left\vert \beta \right\vert \right) \right) \eta . \end{equation*} Now, it suffices to prove that there exists $\eta >0$ for which \begin{equation*} \left( 1+8\sigma \left( 1+3\left\vert \beta \right\vert \right) \right) \eta +2\sigma \left\vert \lambda \right\vert \eta ^{2}\leq \varepsilon \ \ \ \Leftrightarrow \ \ \ 2\sigma \left\vert \lambda \right\vert \eta ^{2}+\left( 1+8\sigma \left( 1+3\left\vert \beta \right\vert \right) \right) \eta -\varepsilon \leq 0. \end{equation*} The discriminant is \begin{equation*} \Delta =\left( 1+8\sigma \left( 1+3\left\vert \beta \right\vert \right) \right) ^{2}+8\sigma \left\vert \lambda \right\vert \varepsilon >0. 
\end{equation*} Hence, the associated quadratic has two zeros, \begin{equation*} \eta _{1}=\frac{-\left( 1+8\sigma \left( 1+3\left\vert \beta \right\vert \right) \right) -\sqrt{\Delta }}{4\sigma \left\vert \lambda \right\vert } \hbox{ and }\eta _{2}=\frac{-\left( 1+8\sigma \left( 1+3\left\vert \beta \right\vert \right) \right) +\sqrt{\Delta }}{4\sigma \left\vert \lambda \right\vert }. \end{equation*} It is straightforward that $0\in ]\eta _{1},\eta _{2}[,$ hence $\eta _{2}>0$. Now for $f_{2}$ we get \begin{equation*} \left\Vert f_{2}\left( U^{n-1},V^{n-1},U^{n},V^{n}\right) \right\Vert \leq \left[ \omega \left( 1+3\left\vert \alpha \right\vert \right) +\left( 1+3\left\vert \beta \right\vert \right) \right] \eta . \end{equation*} For \begin{equation*} \eta \leq \eta _{3}=\frac{\varepsilon }{\omega \left( 1+3\left\vert \alpha \right\vert \right) +\left( 1+3\left\vert \beta \right\vert \right) }, \end{equation*} we obtain $\left\Vert V^{n+1}\right\Vert \leq \varepsilon .$ Finally, choosing $\eta $ as the minimum of $\eta _{2}$ and $\eta _{3}$, the criterion is proved. \begin{lemma} Since the numerical scheme is consistent and stable, it is convergent by the Lax--Richtmyer equivalence theorem. \end{lemma} \section{Numerical Implementations} In this section we present some numerical examples to validate the theoretical results developed previously. The error between the exact solutions and the numerical ones will be estimated in a discrete $L_{2}$ norm. The matrix norm used is \begin{equation*} \Vert X\Vert _{2}=\left( \displaystyle\sum_{i,j=1}^{N}|X_{ij}|^{2}\right) ^{1/2} \end{equation*} for a matrix $X=(X_{ij})\in \mathcal{M}_{N+2}(\mathbb{C})$. Denote by $u^{n}$ the net function $u(x,y,t^{n})$ and by $U^{n}$ the numerical solution. 
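In practice the discrete $L_{2}$ norm above is simply the Frobenius norm of the solution matrix. A minimal NumPy sketch (the test matrix below is illustrative, not taken from the experiments):

```python
import numpy as np

def discrete_l2_norm(X):
    """Discrete L2 norm ||X||_2 = (sum_{i,j} |X_ij|^2)^(1/2) (Frobenius norm)."""
    return np.sqrt(np.sum(np.abs(X) ** 2))

# Sanity check: ||[[3,4],[0,0]]||_2 = sqrt(9 + 16) = 5.
X = np.array([[3.0, 4.0], [0.0, 0.0]])
assert abs(discrete_l2_norm(X) - 5.0) < 1e-12
assert abs(discrete_l2_norm(X) - np.linalg.norm(X, "fro")) < 1e-12
```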
We propose to compute the discrete error \begin{equation} Er=\displaystyle\max_{n}\Vert U^{n}-u^{n}\Vert _{2} \label{Er} \end{equation} on the grid $(x_{i},y_{j})$, $0\leq i,j\leq J+1$, and, to validate the convergence rate of the proposed scheme, the ratio \begin{equation*} C=\displaystyle\frac{Er}{l^{2}+h^{2}}. \end{equation*} We consider the inhomogeneous problem \begin{equation} \left\{ \begin{array}{l} \frac{\partial u}{\partial t}=\Delta v+\lambda \left\vert \nabla u\right\vert ^{2}+g(x,y,t),\quad (x,y,t)\in \Omega \times (t_{0},+\infty ) \\ v=qu-\kappa \Delta u,\quad (x,y,t)\in \Omega \times (t_{0},+\infty ) \\ \left( u,v\right) (x,y,t_{0})=\left( \varphi ,\psi \right) (x,y),\quad (x,y)\in \overline{\Omega } \\ \overrightarrow{\nabla }(u,v)(x,y,t)=0,\quad (x,y,t)\in \partial \Omega \times (t_{0},+\infty ) \end{array} \right. \label{Problem2} \end{equation} on the rectangular domain $\Omega =[-1,1]\times \lbrack -1,1]$, where \begin{eqnarray} g(x,y,t) &=&Ke(t)[e(t)[C^{4}(x)S^{2}(x)C^{6}(y)+C^{6}(x)C^{4}(y)S^{2}(y)] \\ &&-\left[ C(x)S^{2}(x)C(y)S^{2}(y)\right] ] \end{eqnarray} and the exact solution \begin{equation*} (u,v)(x,y,t)=(e(t)C^{3}(x)C^{3}(y),...), \end{equation*} with \begin{equation*} C(x)=\cos (\frac{\pi x}{2})\text{ \ , \ }S(x)=\sin (\frac{\pi x}{2})\text{ \ , \ }e(t)=e^{-9\pi ^{4}t/2}\text{ \ \ and \ \ }K=\frac{9\pi ^{4}}{2}. \end{equation*} In the following tables, numerical results are provided. We computed, for different space and time steps, the discrete $L_{2}$-error estimates defined by (\ref{Er}). The time interval is $[0,1]$, corresponding to the choice $t_{0}=0$ and $T=1$. The following results are obtained for different values of the parameters $J$ (and thus $h$) and $l$ (and thus $N$). 
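The computation of (\ref{Er}) and of the ratio $C$ can be sketched as follows. Since the scheme itself is not reproduced here, the "numerical" solution is simulated by a small perturbation of the sampled exact solution $u=e(t)C^{3}(x)C^{3}(y)$; all variable names are illustrative:

```python
import numpy as np

def exact_u(x, y, t):
    """Exact solution u(x,y,t) = e(t) C^3(x) C^3(y) of the test problem."""
    C = lambda s: np.cos(np.pi * s / 2.0)
    e = np.exp(-9.0 * np.pi**4 * t / 2.0)
    return e * C(x) ** 3 * C(y) ** 3

def error_and_ratio(U, u, l, h):
    """Er = max_n ||U^n - u^n||_2 and C = Er / (l^2 + h^2)."""
    Er = max(np.sqrt(np.sum((Un - un) ** 2)) for Un, un in zip(U, u))
    return Er, Er / (l**2 + h**2)

# Illustrative use on a coarse grid (J+2 points per direction on [-1,1]).
J, l = 10, 1e-3
x = np.linspace(-1.0, 1.0, J + 2)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
u_net = [exact_u(X, Y, n * l) for n in range(3)]
U_num = [un + 1e-8 for un in u_net]   # stand-in for the scheme's output
Er, C = error_and_ratio(U_num, u_net, l, h)
```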
The parameters $\alpha $ and $\beta $ are fixed to $\alpha =\frac{1}{3}$ and $\beta =\frac{1}{5}.$ We notice that variations of these latter parameters induce an important variation in the error estimates, which reflects their role: they calibrate the position of the approximate solution around the exact one. The parameters $q,$ $\lambda $ and $\kappa $ play the role of viscosity-type coefficients and are fixed to the values $\kappa =1,$ $\lambda =-2\pi ^{2}$ and $q=\frac{11\pi ^{2}}{2}.$ The following tables report the error estimates relative to the discrete $L^{2}$-norm defined above for different values of the space and time steps. First, we provide estimates when the optimal condition $l=o(h^{4+s})$, $s>0$, is fulfilled; $s$ is fixed to $0.01$ for the first table. Next, to further assess the effect of this assumption, which is due to the presence of a second-order Laplacian in the original problem, we tested the convergence of the scheme at orders below the optimal power $4$. The second table provides the estimates for a slightly sub-critical power $4-s$ with $s>0$ small enough, here also fixed to $s=0.01$, and finally, in the third table, we tested the discrete scheme for a strongly sub-critical power. 
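As a consistency check on the reported pairs $(J,N)$, they are reproduced, up to rounding, by taking $h=2/J$, $l=h^{4.01}$ and $N=\lceil T/l\rceil $ with $T=1$; this reconstruction of the exact discretization conventions is an assumption inferred from the tabulated values, not stated in the text:

```python
import math

def steps_for(J, power=4.01, T=1.0):
    """Number of time steps N when l = h**power with h = 2/J."""
    h = 2.0 / J
    l = h ** power
    return math.ceil(T / l)

# Compare with the first rows of the first table: (10, 640), (12, 1320), (14, 2450).
for J, N_table in [(10, 640), (12, 1320), (14, 2450)]:
    assert abs(steps_for(J) - N_table) / N_table < 0.01   # agreement within 1%
```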
\ \ \ \ \begin{table}[tbp] \centering \begin{tabular}{|c|c|c|} \hline $J$ & $N$ & $Er2$ \\ \hline $10$ & $640$ & $1,25.10^{-5}$ \\ \hline $12$ & $1320$ & $2,46.10^{-6}$ \\ \hline $14$ & $2450$ & $6,14.10^{-7}$ \\ \hline $16$ & $4183$ & $1,84.10^{-7}$ \\ \hline $18$ & $6707$ & $6,38.10^{-8}$ \\ \hline $20$ & $10233$ & $2,46.10^{-8}$ \\ \hline $22$ & $14997$ & $1,04.10^{-8}$ \\ \hline $24$ & $21258$ & $4,76.10^{-9}$ \\ \hline $25$ & $25039$ & $3,29.10^{-9}$ \\ \hline $30$ & $52015$ & $6,36.10^{-10}$ \\ \hline \end{tabular} \caption{Error estimates for $l=o(h^{4.01})$}\label{tab1} \end{table} \begin{table}[tbp] \centering \begin{tabular}{|c|c|c|} \hline $J$ & $N$ & $Er2$ \\ \hline $10$ & $616$ & $1,35.10^{-5}$ \\ \hline $12$ & $1273$ & $2,55.10^{-6}$ \\ \hline $14$ & $2355$ & $6,65.10^{-7}$ \\ \hline $16$ & $4012$ & $2,00.10^{-7}$ \\ \hline $18$ & $6419$ & $6,96.10^{-8}$ \\ \hline $20$ & $7973$ & $4,06.10^{-8}$ \\ \hline $22$ & $14295$ & $1,14.10^{-8}$ \\ \hline $24$ & $20228$ & $5,26.10^{-9}$ \\ \hline $25$ & $23806$ & $3,64.10^{-9}$ \\ \hline $30$ & $49273$ & $7,09.10^{-10}$ \\ \hline \end{tabular} \caption{Error estimates for $l=o(h^{3.99})$}\label{tab2} \end{table} \begin{table}[tbp] \centering \begin{tabular}{|c|c|c|} \hline $J$ & $N$ & $Er2$ \\ \hline $10$ & $128$ & $3,10.10^{-4}$ \\ \hline $12$ & $220$ & $8,82.10^{-5}$ \\ \hline $14$ & $350$ & $2,99.10^{-5}$ \\ \hline $16$ & $523$ & $1,17.10^{-5}$ \\ \hline $18$ & $746$ & $9,59.10^{-6}$ \\ \hline $20$ & $1024$ & $2,45.10^{-6}$ \\ \hline $22$ & $1364$ & $1,26.10^{-6}$ \\ \hline $24$ & $1772$ & $6,84.10^{-7}$ \\ \hline $25$ & $2004$ & $5,14.10^{-7}$ \\ \hline $30$ & $3468$ & $1,34.10^{-7}$ \\ \hline $50$ & $16137$ & $3,96.10^{-9}$ \\ \hline \end{tabular} \caption{Error estimates for $l=o(h^{3.01})$}\label{tab3} \end{table} \ \ \section{Conclusion} This paper investigated the solution of the well-known Kuramoto-Sivashinsky equation in two-dimensional case by applying a two-dimensional finite difference 
discretization. The original equation is a fourth-order partial differential equation. Thus, in a first step, it was recast into a system of second-order partial differential equations by an order reduction. Next, the continuous system of coupled PDEs was transformed into an algebraic discrete system involving generalized Lyapunov-Sylvester type operators. Solvability, consistency, stability and convergence were then established by applying well-known methods such as the Lax-Richtmyer equivalence theorem and Lyapunov stability, and by examining the topological properties of the obtained Lyapunov-Sylvester type operators. The method was finally validated on a numerical example. \end{document}
\begin{document} \title{Comparison of two equivariant $\eta$-forms} \author{Bo LIU} \address{School of Mathematical Sciences, Shanghai Key Laboratory of PMMP, East China Normal University, 500 Dongchuan Road, Shanghai, 200241 P.R. China} \email{[email protected]} \author{Xiaonan MA} \address{Universit\'e de Paris and Sorbonne Université, CNRS, IMJ-PRG, F-75013 Paris, France} \email{[email protected]} \date{\today} \begin{abstract} In this paper, we first define the equivariant infinitesimal $\eta$-form, then we compare it with the equivariant $\eta$-form, modulo exact forms, by a locally computable form. As a consequence, we obtain the singular behavior of the equivariant $\eta$-form, modulo exact forms, as a function on the acting Lie group. This result extends a result of Goette and it plays an important role in our recent work on the localization of $\eta$-invariants and on the differential $K$-theory. \end{abstract} \maketitle \tableofcontents \setcounter{section}{-1} \section{Introduction} \label{s00} In order to find a well-defined index for a first order elliptic differential operator over an even-dimensional compact manifold with nonempty boundary, Atiyah-Patodi-Singer \cite{APS75} introduced a global boundary condition which is particularly significant for applications. In this final index formula, the contribution from the boundary is given by the Atiyah-Patodi-Singer (APS) $\eta$-invariant associated with the restriction of the operator on the boundary. Formally, the $\eta$-invariant is equal to the number of positive eigenvalues of the self-adjoint operator minus the number of its negative eigenvalues. If the manifold admits a compact Lie group action, in \cite{D78}, extending the APS index theorem \cite{APS75}, Donnelly proved a Lefschetz type formula for manifolds with boundary. The contribution of the boundary is expressed as the equivariant $\eta$-invariant $\eta_g$. 
Note that the $\eta$-invariant and the equivariant $\eta$-invariant are well-defined for any compact manifold. In \cite[Theorem 0.5]{Go00}, Goette studied the singularity of $\eta_g$ at the identity element $g=e$, when the group action is locally free. He defined the equivariant infinitesimal $\eta$-invariant as a formal power series and expressed the singularity of $\eta_g$ at $g=e$ as a locally computable term through the comparison of the equivariant infinitesimal $\eta$-invariant and the equivariant $\eta$-invariant. In \cite{BG00,BG04}, Bismut and Goette established the general comparison formulas for holomorphic analytic torsions and de Rham torsions. They used the analytic localization techniques developed by Bismut and Lebeau in \cite{BL91} and developed new techniques to overcome the difficulty that the operators do not have lower bounds. In the holomorphic case \cite[Theorem 0.1]{BG00}, besides the predictable Bott-Chern current, in the final formula, there is an exotic additive characteristic class of the normal bundle, which is closely related to the Gillet-Soul\'e R-genus \cite{GilS} and Bismut's equivariant extension \cite{Bi94}. In the real case \cite[Theorem 0.1]{BG04}, in the final formula, besides the predictable Chern-Simons current, they discovered an exotic locally computable diffeomorphism invariant of the fixed point set, the so-called $V$-invariant. The mysterious $V$-invariant should be understood as a finite dimensional analogue of the real analytic (de Rham) torsion. 
On the other hand, extending the works of Bismut-Freed \cite{BF86II} and Cheeger \cite{Ch87} on the Witten's holonomy conjecture, Bismut and Cheeger \cite{BC89} studied the adiabatic limit for a fibration of compact spin manifolds and found that under the invertible assumption of the fiberwise Dirac operator, the adiabatic limit of the $\eta$-invariant of the associated Dirac operators on the total space is expressible in terms of a canonically constructed differential form, $\tilde{\eta}$, so-called Bismut-Cheeger $\eta$-form, on the base space. Later, Dai \cite{Dai91} extended this result to the case when the kernels of the fiberwise Dirac operators form a vector bundle over the base manifold. The Bismut-Cheeger $\eta$-form, $\tilde{\eta}$, is the families version of the $\eta$-invariant and its $0$-degree part is just the APS $\eta$-invariant. It appears naturally as the boundary contribution of the family index theorem for manifolds with boundary (cf. \cite{BC90I,BC90II,MP97,Mu95}). We cite also \cite{Zh94} for a nice topological application of eta forms. As the holomorphic analytic torsion and its family version, Bismut-K\"ohler holomorphic torsion form \cite{BK92} are the analytic counterpart to the direct image in Arakelov geometry \cite{Soule92}, whose foundation was developed by Gillet-Soul\'e and Bismut in the 1980s, the Bismut-Cheeger $\eta$-form is also the analytic counterpart to the direct image in differential $K$-theory introduced by Freed-Hopkins \cite{FH00} and developed further by \cite{BS09}, \cite{FreedLott10}, \cite{HopkinsSinger05}, \cite{SimonsS08}, etc. When the fibration admits a fiberwise compact Lie group action, the Bismut-Cheeger $\eta$-form could be naturally extended to the equivariant $\eta$-form $\widetilde{\eta}_g$. 
Recently, the functoriality of equivariant $\eta$-forms with respect to the composition of two submersions was established in \cite{Liu17a}, which extends the previous work of Bunke-Ma \cite{BM04} for usual $\eta$-forms for flat vector bundles with duality, cf. \cite{Be09,BB94,BKR11,DZ15,Ma99,Ma00,Ma00a,Puchol16} for related works on $\eta$-forms and holomorphic torsions. In the same way as the fixed-point formula has two equivariant versions, the Lefschetz fixed-point formula and the Kirillov-type formula of Berline-Vergne \cite{BV83}, the same is true for equivariant $\eta$-forms. In this paper, we use the analytic techniques of Bismut-Goette in \cite{BG00} to define the equivariant infinitesimal Bismut-Cheeger $\eta$-form and prove a general comparison formula between the equivariant infinitesimal Bismut-Cheeger $\eta$-form and the equivariant Bismut-Cheeger $\eta$-form, which extends the work of Goette \cite{Go00}. In particular, we express the singularity of $\widetilde{\eta}_g$, modulo exact forms, at any $g\in G$, as a locally computable differential form. Let $G$ be a compact Lie group with Lie algebra $\mathfrak{g}$. We assume that $G$ acts isometrically on an odd-dimensional compact oriented Riemannian manifold $X$ and the $G$-action lifts to a Clifford module $\mathcal{E}$ over $X$. In general, the equivariant APS $\eta$-invariant $\eta_g$ is not a continuous function of $g\in G$. In \cite{Go00}, Goette studied the singularity of the equivariant $\eta$-invariant $\eta_g$ at $g=e$. 
He defined a formal power series $\eta_K\in \field{C}[[\mathfrak{g}^*]]$ for $K\in \mathfrak{g}$, called the equivariant infinitesimal $\eta$-invariant, and showed that if the Killing vector field $K^X$ induced by $K$ has no zeroes on $X$, for any $N\in \field{N}$, as $0\neq t\rightarrow 0$, \begin{align}\label{eq:0.01} [\eta_{tK}]_N-\eta_{e^{tK}}=\mathcal{M}_{tK}+\mathcal{O}(t^N), \end{align} where $[\eta_{tK}]_N$ is the part of the formal power series $\eta_{tK}$ with degree $\leq N$ and $\mathcal{M}_{tK}$ could be expressed precisely as a locally computable term. Moreover, there exist $c_j(K)\in \field{C}$ such that when $t\rightarrow 0$, \begin{align}\label{eq:0.02} \mathcal{M}_{tK}=\sum_{j=1}^{(\dim X+1)/2}c_j(K)t^{-j}+\mathcal{O}(t^0). \end{align} It means that if the Killing vector field $K^X$ is nowhere vanishing, the singular behavior of $\eta_{e^{tK}}$ when $t\rightarrow 0$ could be computed as the integral of the local terms explicitly. In this paper, we show first that $\eta_{tK}$ is an analytic function of $t$ for $t$ small enough and for any $0\neq K\in \mathfrak{g}$, \begin{align}\label{eq:0.03} \eta_{tK}-\eta_{e^{tK}}=\mathcal{M}_{tK},\quad \text{for $t\neq 0$ small enough.} \end{align} In Theorem \ref{thm:0.2}, we establish a general version of (\ref{eq:0.03}), in particular, its family version. Let us explain our result in detail. Let $\pi:W\rightarrow B$ be a smooth submersion of smooth compact manifolds with fiber $X$. Note that $n=\dim X$ can be even or odd. Let $TX=TW/B$ be the relative tangent bundle to the fiber $X$. We assume that $TX$ is oriented and that the compact Lie group $G$ acts fiberwise on $W$ and as identity on $B$ and preserves the orientation of $TX$. Let $g^{TX}$ be a $G$-invariant metric on $TX$. 
Let $(\mathcal{E},h^{\mathcal{E}})$ be a Clifford module of $TX$ along the fiber $X$; we assume that the $G$-action lifts to $(\mathcal{E},h^{\mathcal{E}})$ and is compatible with the Clifford action. Let $\nabla^{\mathcal{E}}$ be a $G$-invariant Clifford connection on $(\mathcal{E}, h^{\mathcal{E}})$, i.e., $\nabla^{\mathcal{E}}$ is a $G$-invariant Hermitian connection on $(\mathcal{E}, h^{\mathcal{E}})$ and compatible with the Clifford action (see (\ref{eq:1.17})). Let $D$ be the fiberwise Dirac operator associated with ($g^{TX}, \nabla^{\mathcal{E}}$) (see (\ref{eq:1.18})). \textbf{We assume that the kernels $\Ker (D)$ form a vector bundle over $B$.} Then for any $g\in G$, the equivariant $\eta$-form $\tilde{\eta}_g$ is well-defined (see Definition \ref{defn:1.03})\footnote{For even dimensional fiber, any family of Dirac operators could be deformed to another one which satisfies this assumption and has the same family index in $K^0(B)$ (see e.g., \cite[\S 9.5]{BGV}). But for odd dimensional fiber, some topological obstruction appears: if a family of Dirac operators $D$ satisfies this assumption, the family index of $D$ vanishes in $K^1(B)$ (this fact is implicitly contained in \cite{AS69}, a proof of which is presented in \cite[Theorem 4.1]{E13}). Recently, in the odd dimensional fiber case, Wittmann \cite{Wi15} defined an $\eta$-form under the assumption that the family of Dirac operators has one eigenvalue of multiplicity one crossing zero transversally. It is expected that many properties of the Bismut-Cheeger $\eta$-form could be extended to this case.}. In the whole paper, if $n=\dim X$ is even, $\mathcal{E}$ is naturally $\field{Z}_2$-graded by the chirality operator $\Gamma$ defined in (\ref{eq:1.15}) and the supertrace for $A\in \mathrm{End}(\mathcal{E})$ is defined by $\tr_s[A]:=\tr[\Gamma A]$; if $\dim X$ is odd, $\mathcal{E}$ is ungraded. 
For $\sigma=\alpha\otimes A$ with $\alpha\in \Lambda(T^*B)$, $A\in \End(\mathcal{E})$, we define $\tr[\sigma]:=\alpha\cdot\tr[A]$. We denote by $\tr^{\mathrm{odd}}[\sigma]$ the odd degree part of $\tr[\sigma]$. Set \begin{align}\label{eq:0.04} \widetilde{\tr}[\sigma]= \left\{ \begin{array}{ll} \tr_s[\sigma]\hspace{10mm} & \hbox{if $n=\dim X$ is even;} \\ \tr^{\mathrm{odd}}[\sigma]\hspace{10mm} & \hbox{if $n=\dim X$ is odd.} \end{array} \right. \end{align} For $\alpha\in \Omega^j(\field{R}\times B)$, the space of $j$-forms on $\field{R}\times B$, set \begin{align}\label{eq:0.05} \psi_{\field{R}\times B}(\alpha)=\left\{ \begin{array}{ll} \left(2i\pi \right)^{-\frac{j}{2}}\cdot \alpha \hspace{10mm} & \hbox{if $j$ is even;} \\ \pi^{-\frac{1}{2}}\left(2i\pi \right)^{-\frac{j-1}{2}}\cdot \alpha \hspace{10mm} & \hbox{if $j$ is odd.} \end{array} \right. \end{align} Let $t$ be the coordinate of $\field{R}$ in $\field{R}\times B$. If $\alpha=\alpha_0+dt\wedge \alpha_1$, with $\alpha_0, \alpha_1\in \Lambda (T^*B)$, set \begin{align}\label{eq:0.06} [\alpha]^{dt}:=\alpha_1. \end{align} Let $\mathcal{L}_{K}$ be the infinitesimal action on $\mathscr{C}^{\infty}(W,\mathcal{E})$ induced by $K\in \mathfrak{g}$ (see (\ref{eq:2.02b})). For $g\in G$, we denote by $Z(g)\subset G$ the centralizer of $g$, with Lie algebra $\mathfrak{z}(g)$. Let $W^g=\{x\in W: gx=x \}$ be the fixed point set of $g$. Then the restriction of $\pi$ to $W^g$, $\pi|_{W^g}:W^g\rightarrow B$, is a fibration with compact fiber $X^g$. Let $\mathbb{B}_t$ be the rescaled Bismut superconnection defined in (\ref{eq:1.22}). Let $d$ be the exterior differential operator. Let $\widehat{\mathrm{A}}_{g,K}(\cdot)$ and $\ch_{g,K}(\cdot)$ be the equivariant infinitesimal versions of the $\widehat{\mathrm{A}}$-form and the Chern character form (cf. (\ref{eq:2.14}) and (\ref{eq:2.15})).
The following result extends the equivariant infinitesimal $\eta$-invariant to the family case at any $g\in G$ (see Definition \ref{defn:2.03}, (\ref{eq:2.29}), (\ref{eq:2.30}), (\ref{eq:2.33}) and (\ref{eq:2.34})). \begin{thm}\label{thm:0.1} For any $g\in G$, there exists $\beta>0$ such that if $K\in \mathfrak{z}(g)$ with $|K|<\beta$, the integral \begin{align}\label{eq:0.07} \widetilde{\eta}_{g,K}=-\int_{0}^{+\infty}\left\{\psi_{\field{R}\times B} \widetilde{\tr}\left[g\exp\left(-\left(\mathbb{B}_{t} +\frac{c(K^X)}{4\sqrt{t}}+dt\wedge \frac{\partial}{\partial t}\right)^2-\mathcal{L}_{K}\right) \right]\right\}^{dt}dt \end{align} is a well-defined differential form on $B$, and \begin{align}\label{eq:0.08} d\tilde{\eta}_{g,K}=\left\{ \begin{aligned} &\int_{X^g}\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX})\, \ch_{g,K}(\mathcal{E}/\mS, \nabla^{\mathcal{E}})\\ &\hspace{50mm}-\ch_{ge^K}(\Ker (D), \nabla^{\Ker (D)}) & \hbox{if $n$ is even;} \\ &\int_{X^g}\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX})\, \ch_{g,K}(\mathcal{E}/\mS, \nabla^{\mathcal{E}}) & \hbox{if $n$ is odd.} \end{aligned} \right. \end{align} Moreover, for fixed $K\in \mathfrak{z}(g)$, $\widetilde{\eta}_{g,zK}$ is an analytic function of $z\in \field{C}$ for $|zK|<\beta$. \end{thm} In the sequel, $\widetilde{\eta}_{g,K}$ is called the equivariant infinitesimal (Bismut-Cheeger) $\eta$-form. Let $\vartheta_K\in T^*X$ be the $1$-form dual to $K^X$ with respect to the metric $g^{TX}$. Now we state the main result of this paper.
\begin{thm}\label{thm:0.2} For $g\in G$ and $K_{0}\in \mathfrak{z}(g)$, there exists $\beta>0$ such that for any $K=zK_{0}$ with $K\neq 0$ and $-\beta<z<\beta$, modulo exact forms on $B$, we have \begin{align}\label{eq:0.09} \tilde{\eta}_{g,K}=\tilde{\eta}_{ge^K}+\mathcal{M}_{g,K}, \end{align} where $\mathcal{M}_{g,K}$ is the well-defined integral \begin{align}\label{eq:0.10} \mathcal{M}_{g,K}=-\int_0^{+\infty}\int_{X^g}\frac{\vartheta_K}{2 i\pi v} \exp\left(\frac{d\vartheta_K-2i\pi |K^X|^2}{2 i\pi v} \right)\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX}) \ch_{g,K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}})\frac{dv}{v}, \end{align} and $t^{\lfloor(\dim W^g+1)/2\rfloor} \mathcal{M}_{g,tK}$ is real analytic in $t\in \field{R}$, $|t|<1$. Moreover, we have \begin{multline}\label{eq:0.12} d\mathcal{M}_{g,K}=\int_{X^{g}} \widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX}) \ch_{g,K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}}) \\ - \int_{X^{ge^K}} \widehat{\mathrm{A}}_{ge^K}(TX,\nabla^{TX}) \ch_{ge^K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}}). \end{multline} \end{thm} By Theorem \ref{thm:0.1}, $\tilde{\eta}_{g,tK}$ is an analytic function of $t$ near $t=0$. Thus as $t\rightarrow 0$, modulo exact forms, the singularity of $\tilde{\eta}_{ge^{tK}}$ is the same as that of $-\mathcal{M}_{g,tK}$. Note that the general comparison formula for the two versions of equivariant holomorphic analytic torsion is established in \cite[Theorem 5.1]{BG00}, which serves as the model for our paper. The analytic tools in this paper are inspired by those of \cite{BG00}, with the necessary modifications. For the analogous problem for de Rham torsion forms, a comparison formula is stated in \cite[Theorem 5.13]{BG04}.
\begin{rem}\label{rem:0.3} Let $G$ act on an odd dimensional compact Riemannian manifold $(X,g^{TX})$ and on a Clifford module $(\mathcal{E}, h^{\mathcal{E}}, \nabla^{\mathcal{E}})$ compatibly with the Clifford action. Then for $g=e$, the identity element of $G$, (\ref{eq:0.07}) defines a complex number $\eta_{K}$ for any $K\in \mathfrak{g}$, $|K|<\beta$. As a formal power series in $K$, this $\eta_K$ is just the equivariant infinitesimal $\eta$-invariant $\eta_{K}$ of \cite[Definition 0.4]{Go00}. Let $P\rightarrow B$ be a principal $G$-bundle with connection and associated curvature $\Omega$. Then we naturally obtain a fibration $P\times_GX\rightarrow B$ with fiber $X$. Let $\widetilde{\eta}$ be the associated Bismut-Cheeger $\eta$-form. For this fibration, by Bismut \cite[\S1d), \S3b)]{Bi86}, in the notation of (\ref{eq:1.22}), the term $c(T^H)$ in the Bismut superconnection is $c(\Omega)$ and $(\nabla^{\mathbb{E},u})^2=\mathcal{L}_\Omega$, thus we recover \cite[Lemma 1.14]{Go00}: \begin{align}\label{eq:19b} \widetilde{\eta} =\eta_{\frac{i}{2\pi}\Omega}. \end{align} Thus the formal power series $\eta_K$ can be understood as a universal $\eta$-form. \end{rem} \begin{rem}\label{rem:0.4} Assume temporarily that $B=\mathrm{pt}$, that $\dim X=n$ is odd, and that $X$ is the boundary of a $G$-equivariant Riemannian manifold $Z$ which has product structure near $X$. We also assume that $\mathcal{E}_Z=\mathcal{E}_Z^+\oplus \mathcal{E}_Z^-$ is a $G$-equivariant Clifford module on $Z$ such that $\left.\mathcal{E}_Z^+\right|_X=\mathcal{E}$ and that near $X$, $\mathcal{E}_Z^{\pm}$ is the pull-back of $\mathcal{E}$ as a Hermitian vector bundle with connection. Let $D_Z$ be the associated Dirac operator on $\mathcal{E}_Z$ over $Z$. Then the index of $D_Z^+:=D_Z|_{\mathscr{C}^{\infty}(Z,\mathcal{E}_Z^+)}$ with respect to the Atiyah-Patodi-Singer (APS) boundary condition is a virtual representation of $G$.
For $g\in G$, its equivariant APS index $\Ind_{\mathrm{APS},g}(D_Z^+)$ can be computed by Donnelly's theorem \cite{D78}: \begin{align}\label{eq:0.12b} \Ind_{\mathrm{APS},g}(D_Z^+)=\int_{Z^g} \widehat{\mathrm{A}}_{g}(TZ,\nabla^{TZ}) \ch_{g}(\mathcal{E}_Z/\mS_Z,\nabla^{\mathcal{E}_Z}) -\frac{1}{2}\left(\eta_g(D)+\tr|_{\Ker(D)}[g] \right). \end{align} By combining (\ref{eq:0.09}), (\ref{eq:0.12}) (more precisely, the Stokes formula \cite[p. 775]{BrM06}, (\ref{eq:3.25b}) and (\ref{eq:3.26a})) and (\ref{eq:0.12b}), for any $K\in \mathfrak{g}$, there exists $\beta>0$ such that, for any $-\beta<t<\beta$, we have \begin{align}\label{eq:0.12c} \Ind_{\mathrm{APS},e^{tK}}(D_Z^+)=\int_{Z} \widehat{\mathrm{A}}_{tK}(TZ,\nabla^{TZ}) \ch_{tK}(\mathcal{E}_Z/\mS_Z,\nabla^{\mathcal{E}_Z}) -\frac{1}{2}\left(\eta_{tK}(D)+\tr|_{\Ker(D)}[e^{tK}] \right). \end{align} Here $\widehat{\mathrm{A}}_{tK}(\cdot):= \widehat{\mathrm{A}}_{e,tK}(\cdot)$ and $\ch_{tK}(\cdot):=\ch_{e,tK}(\cdot)$. \end{rem} The main result of this paper was announced in \cite{LM18} and plays an important role in our recent work \cite{LM18a}. This paper is organized as follows. In Section \ref{s01}, we recall the definition of the equivariant Bismut-Cheeger $\eta$-form. In Section \ref{s02}, we state the family Kirillov formula and define the equivariant infinitesimal $\eta$-form; in particular, we establish Theorem \ref{thm:0.1} modulo some technical details. In Section \ref{s03}, we prove that $\mathcal{M}_{g,tK}$ in (\ref{eq:0.10}) is well-defined and state our main result, Theorem \ref{thm:0.2}. In Section \ref{s04}, we state some intermediate results and prove Theorem \ref{thm:0.2}. In Section \ref{s05}, we give an analytic proof of the family Kirillov formula and supply the technical details needed to establish Theorem \ref{thm:0.1}, following the lines of \cite[\S 7]{BG00}.
To facilitate comparison of the arguments here with those in \cite[\S 7]{BG00}, especially regarding how the extra terms of the family version appear, this section is structured almost identically to \cite[\S 7]{BG00}. In Section \ref{s06}, we prove the intermediate results of Section \ref{s04} using the analytic techniques of \cite[\S 8, \S9]{BG00}. Following Remark \ref{rem:1.03}, to simplify the presentation, in Sections \ref{s05} and \ref{s06} we will {\bf assume} that $TX^g$ is oriented. \textbf{Notation}: we use the Einstein summation convention in this paper: when an index variable appears twice in a single term and is not otherwise defined, summation of that term over all values of the index is implied. We denote by $\lfloor x\rfloor$ the largest integer not exceeding $x$. We denote by $d$ the exterior differential operator, and by $d^B$ when we want to emphasize the base manifold $B$. Let $\Omega^{\mathrm{even/odd}}(B,\field{C})$ be the space of even/odd degree complex valued differential forms on $B$. For a real vector bundle $E$, we denote by $\dim E$ the real rank of $E$. If $\mathcal{A}$ is a $\field{Z}_2$-graded algebra and $a,b\in \mathcal{A}$, we write $[a,b]:=ab-(-1)^{\deg a\cdot\deg b}ba$ for the supercommutator of $a$ and $b$. In the whole paper, if $\mathcal{A}$, $\mathcal{A}'$ are $\field{Z}_2$-graded algebras, we write $\mathcal{A}\widehat{\otimes}\mathcal{A}'$ for the $\field{Z}_2$-graded tensor product as in \cite[\S 1.3]{BGV}. If one of $\mathcal{A}, \mathcal{A}'$ is ungraded, we regard it as $\field{Z}_2$-graded by taking its odd part to be zero. For the fiber bundle $\pi: W\rightarrow B$, we will often use integration of differential forms along the oriented fibers $X$.
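For instance, with this sign convention, two homogeneous elements of odd degree satisfy $[a,b]=ab+ba$, while $[a,b]=ab-ba$ if $a$ or $b$ has even degree; in the Clifford algebra introduced below, the defining relations (\ref{eq:1.01}) thus read $[e_i,e_j]=-2\delta_{ij}$.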
Since the fibers may be odd dimensional, we must make our sign conventions precise: for $\alpha\in \Omega^{\bullet}(B)$ and $\beta\in \Omega^{\bullet}(W)$, \begin{align}\label{eq:0.11} \int_X(\pi^*\alpha)\wedge\beta=\alpha\wedge \int_X\beta. \end{align} \noindent{\bf Acknowledgments}. B.\ L.\ is partially supported by Science and Technology Commission of Shanghai Municipality (STCSM), grant No.18dz2271000, Natural Science Foundation of Shanghai, grant No.20ZR1416700 and NSFC No.11931007. X.\ M.\ is partially supported by NSFC No.11528103, No.11829102, ANR-14-CE25-0012-01, and funded through the Institutional Strategy of the University of Cologne within the German Excellence Initiative. Part of this work was done while the authors were visiting the University of Science and Technology of China and Wuhan University. \section{Equivariant $\eta$-forms}\label{s01} In this section, we recall the definition of the equivariant $\eta$-form in the language of Clifford modules. In Section \ref{s0101}, we recall the definition of the Clifford algebra. In Section \ref{s0102}, we explain the Bismut superconnection. In Section \ref{s0103}, we define the equivariant $\eta$-form for Clifford modules. \subsection{Clifford algebras}\label{s0101} Let $(V, \langle\cdot,\cdot\rangle)$ be a Euclidean space with $\dim V=n$ and orthonormal basis $\{e_i\}_{i=1}^n$. Let $c(V)$ be the Clifford algebra of $V$, defined by the relations \begin{align}\label{eq:1.01} e_ie_j+e_je_i=-2\delta_{ij}. \end{align} To avoid ambiguity, we denote by $c(e_i)$ the element of $c(V)$ corresponding to $e_i$. If $e\in V$, let $e^*\in V^*$ correspond to $e$ via the scalar product $\langle \cdot,\cdot \rangle$ of $V$.
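As an elementary illustration of (\ref{eq:1.01}), note that for any $e=\sum_i a_ie_i\in V$, bilinearity gives
\begin{align*}
e\cdot e=\sum_{i,j}a_ia_j\,e_ie_j=\frac{1}{2}\sum_{i,j}a_ia_j\left(e_ie_j+e_je_i\right)=-|e|^2,
\end{align*}
so every nonzero $e\in V$ is invertible in $c(V)$, with inverse $-e/|e|^2$. As real algebras, $c(V)\cong \field{C}$ for $n=1$ and $c(V)\cong \field{H}$, the quaternions, for $n=2$.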
The exterior algebra $\Lambda V^*$ is a $c(V)$-module, with action defined by \begin{align}\label{eq:1.02} c(e)\alpha=e^*\wedge\alpha -i_{e}\alpha \end{align} for any $\alpha\in\Lambda V^*$, where $\wedge$ is the exterior product and $i$ is the contraction operator. The map $a\mapsto c(a)\cdot 1$, $a\in c(V)$, induces an isomorphism of vector spaces \begin{align}\label{eq:1.03} \sigma:c(V)\rightarrow \Lambda V^*. \end{align} \subsection{Bismut superconnection}\label{s0102} Let $\pi:W\rightarrow B$ be a smooth submersion of smooth compact manifolds with $n$-dimensional fibers $X$. Let $TX=TW/B$ be the relative tangent bundle to the fibers $X$. Let $G$ be a compact Lie group acting on $W$ along the fibers $X$, that is, $\pi\circ g=\pi$ for any $g\in G$. Then $G$ acts on $TW$ and on $TX$. Let $T^HW\subset TW$ be a $G$-invariant horizontal subbundle, so that \begin{align}\label{eq:1.04} TW=T^HW\oplus TX. \end{align} Since $G$ is compact, such a $T^HW$ always exists. Let $P^{TX}:TW \rightarrow TX$ be the projection associated with the splitting (\ref{eq:1.04}). Note that \begin{align}\label{eq:1.05} T^HW\cong \pi^*TB. \end{align} Let $g^{TX}$ be a $G$-invariant metric on $TX$. Let $g^{TB}$ be a Riemannian metric on $TB$. We equip $TW$ with the $G$-invariant metric, via (\ref{eq:1.04}) and (\ref{eq:1.05}), \begin{align}\label{eq:1.06} g^{TW}=\pi^*g^{TB}\oplus g^{TX}. \end{align} Let $\nabla^{TW,L}$ (resp. $\nabla^{TB}$) be the Levi-Civita connection on $(TW,g^{TW})$ (resp. $(TB, g^{TB})$). Let $\nabla^{TX}$ be the connection on $TX$ defined by \begin{align}\label{eq:1.07} \nabla^{TX}=P^{TX}\nabla^{TW,L}P^{TX}. \end{align} It is $G$-invariant. Let $\nabla^{TW}$ be the $G$-invariant connection on $TW$ defined, via (\ref{eq:1.04}) and (\ref{eq:1.05}), by \begin{align}\label{eq:1.08} \nabla^{TW}=\pi^*\nabla^{TB}\oplus\nabla^{TX}.
\end{align} Put \begin{align}\label{eq:1.09} S=\nabla^{TW,L}-\nabla^{TW}. \end{align} Then $S$ is a $1$-form on $W$ with values in the antisymmetric elements of $\End(TW)$. Let $T$ be the torsion of $\nabla^{TW}$. By \cite[(1.28)]{Bi86}, if $U,V,Z\in TW$, \begin{align}\label{eq:1.10} \begin{split} &S(U)V-S(V)U+T(U,V)=0, \\ &2\langle S(U)V,Z\rangle+\langle T(U,V),Z\rangle+\langle T(Z,U),V\rangle-\langle T(V,Z),U\rangle=0. \end{split} \end{align} If $U$ is a vector field on $B$, let $U^H$ be its lift in $T^HW$ and let $\mathcal{L}_{U^H}$ be the Lie derivative operator associated with the vector field $U^H$. Then $\mathcal{L}_{U^H}$ acts on the tensor algebra of $TX$. In particular, if $U\in TB$, $\left(g^{TX}\right)^{-1} \mathcal{L}_{U^H}g^{TX}$ defines a self-adjoint endomorphism of $TX$. If $U,V$ are vector fields on $B$, from \cite[Theorem 1.1]{Bi97}, \begin{align}\label{eq:1.11} T(U^H,V^H)=-P^{TX}[U^H,V^H], \end{align} and if $U\in TB$, $Z, Z'\in TX$, \begin{align}\label{eq:1.12} T(U^H,Z)=\frac{1}{2}\left(g^{TX}\right)^{-1}\mathcal{L}_{U^H}g^{TX}Z, \quad T(Z,Z')=0. \end{align} From (\ref{eq:1.10}) and (\ref{eq:1.12}), if $U\in TB$, $Z,Z'\in TX$, we have \begin{align}\label{eq:1.13} \langle S(Z)Z',U^H\rangle=-\langle T(U^H,Z),Z'\rangle=-\langle T(U^H,Z'),Z\rangle. \end{align} We recall some properties from \cite[\S 1.1]{Bi97}. \begin{prop}\label{prop:1.01} 1) The connection $\nabla^{TX}$ does not depend on $g^{TB}$, and on each fiber $X$ it restricts to the Levi-Civita connection of $(TX,g^{TX})$. 2) If $U\in TB$, then \begin{align}\label{eq:1.14} \nabla^{TX}_{U^H}=\mathcal{L}_{U^H}+\frac{1}{2} \left(g^{TX}\right)^{-1}\mathcal{L}_{U^H}g^{TX}. \end{align} 3) The tensors $T$ and $\langle S(\cdot)\cdot,\cdot\rangle$ do not depend on $g^{TB}$.
\end{prop} Let $c(TX)$ be the Clifford algebra bundle of $(TX, g^{TX})$, whose fiber at $x\in W$ is the Clifford algebra $c(T_xX)$ of the Euclidean space $(T_xX, g^{T_xX})$. Let $\mathcal{E}$ be a Clifford module of $c(TX)$. This means that $\mathcal{E}$ is a complex vector bundle whose fiber $\mathcal{E}_x$ is a representation of $c(T_xX)$. We assume that the $G$-action lifts to $\mathcal{E}$ and commutes with the Clifford action. From now on, we assume that $TX$ is $G$-equivariantly oriented. \emph{In the whole paper}, if $n$ is even, as in \cite[Lemma 3.17]{BGV}, for a local oriented orthonormal frame $e_1,\cdots,e_n$ of $TX$, we define the chirality operator by \begin{align}\label{eq:1.15} \Gamma=i^{n/2}c(e_1)\cdots c(e_n). \end{align} Then $\Gamma$ does not depend on the choice of the frame, commutes with the $G$-action, and $\Gamma^2=\Id$. Thus $\mathcal{E}$ is naturally $\field{Z}_2$-graded by the chirality operator $\Gamma$. The supertrace of $A\in \End(\mathcal{E})$ is defined by \begin{align}\label{eq:1.16} \tr_s[A]:=\tr[\Gamma A]. \end{align} If $n$ is odd, $\mathcal{E}$ is ungraded. Let $h^{\mathcal{E}}$ be a $G$-invariant Hermitian metric on $\mathcal{E}$. For $b\in B$, let $\mathbb{E}_{b}$ be the set of smooth sections over $X_b=\pi^{-1}(b)$ of $\mathcal{E}|_{X_b}$. As in \cite{Bi86}, we will regard $\mathbb{E}$ as an infinite dimensional vector bundle over $B$. Let $dv_X(x)$ be the Riemannian volume element of $X_b$. The bundle $\mathbb{E}_b$ is naturally endowed with the Hermitian product \begin{align}\label{eq:1.19} \langle s,s'\rangle_0=\int_{X_b}\langle s,s'\rangle(x)dv_X(x), \quad \text{for}\ s,s'\in \mathbb{E}. \end{align} Then $G$ acts on $\mathbb{E}_{b}=\mathscr{C}^{\infty} (X_b,\mathcal{E}|_{X_b})$ by \begin{align}\label{eq:1.19b} (g.s)(x)=g(s(g^{-1}x)) \quad \text{for any}\ g\in G.
\end{align} Let $\nabla^{\mathcal{E}}$ be a $G$-invariant Clifford connection on $\mathcal{E}$ (cf. \cite[\S 10.2]{BGV}), that is, $\nabla^{\mathcal{E}}$ is $G$-invariant, preserves $h^{\mathcal{E}}$, and for any $U\in TW$, $Z\in \mathscr{C}^{\infty}(W,TX)$, \begin{align}\label{eq:1.17} \left[\nabla_U^{\mathcal{E}}, c(Z)\right]=c\left(\nabla^{TX}_U Z\right). \end{align} The fiberwise Dirac operator is defined by \begin{align}\label{eq:1.18} D=\sum_{i=1}^nc(e_{i})\nabla_{e_i}^{\mathcal{E}}, \end{align} which is independent of the choice of the orthonormal frame $\{e_i \}_{i=1}^n$. Let $k\in (T^HW)^*$ be such that for any $U\in TB$, $\mathcal{L}_{U^H}dv_X(x)/dv_X(x)=2k(U^H)(x)$. The connection $\nabla^{\mathbb{E},u}$ on $\mathbb{E}$ defined by (cf. \cite[Definition 1.3]{BF86I}) \begin{align}\label{eq:1.20} \nabla_U^{\mathbb{E},u}s:=\nabla_{U^H}^{\mathcal{E}}s+k(U^H) s \quad \text{ for } s\in \mathscr{C}^{\infty}(B, \mathbb{E}) =\mathscr{C}^{\infty}(W, \mathcal{E}) , \end{align} is $G$-invariant and preserves the $G$-invariant $L^2$-product (\ref{eq:1.19}) (see e.g., \cite[Proposition 1.4]{BF86I}). Let $\{f_p\}$ be a local frame of $TB$ and $\{f^p\}$ its dual frame. Set \begin{align}\label{eq:1.21} \nabla^{\mathbb{E},u}=f^p\wedge \nabla^{\mathbb{E}, u}_{f_p},\quad c(T^H)=\frac{1}{2}\,c\left(T(f_p^H, f_q^H)\right)f^p\wedge f^q\wedge. \end{align} Then $c(T^H)$ is a section of $\pi^*\Lambda^2(T^*B) \widehat{\otimes}\End(\mathcal{E})$. By \cite[(3.18)]{Bi86}, the rescaled Bismut superconnection $\mathbb{B}_u$, $u>0$, is defined by \begin{align}\label{eq:1.22} \mathbb{B}_u=\sqrt{u}D+\nabla^{\mathbb{E},u}-\frac{1}{4 \sqrt{u}}c(T^H):\mathscr{C}^{\infty}(B,\Lambda(T^*B)\widehat{ \otimes}\mathbb{E})\rightarrow\mathscr{C}^{\infty}(B,\Lambda(T^*B) \widehat{\otimes}\mathbb{E}). \end{align} The Bismut superconnection $\mathbb{B}_u$ clearly commutes with the $G$-action.
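For instance, if $B$ is a point, then there are no horizontal directions, so $\nabla^{\mathbb{E},u}=0$ and $c(T^H)=0$, and (\ref{eq:1.22}) reduces to
\begin{align*}
\mathbb{B}_u=\sqrt{u}\,D,\qquad \mathbb{B}_u^2=uD^2,
\end{align*}
so that $\exp(-\mathbb{B}_u^2)=\exp(-uD^2)$ is the usual heat operator of the Dirac operator $D$.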
Furthermore, $\mathbb{B}_u^2$ is a second order elliptic differential operator along the fibers $X$ (cf. \cite[(3.4)]{Bi86}) acting on $\Lambda(T^*B) \widehat{\otimes}\mathbb{E}$. Let $\exp(-\mathbb{B}_u^2)$ be the heat operator associated with the fiberwise elliptic operator $\mathbb{B}_u^2$. \subsection{Equivariant $\eta$-forms}\label{s0103} Fix $g\in G$ and set $W^g=\{x\in W: gx=x\}$, the fixed point set of $g$. Then $W^g$ is a submanifold of $W$ and $\pi|_{W^g}:W^g\rightarrow B$ is a fibration with compact fiber $X^g$. Let $N_{W^g/W}$ denote the normal bundle of $W^g$ in $W$; then \begin{align}\label{eq:1.23} N_{W^g/W}:=\frac{TW}{TW^g}=\frac{TX}{TX^g}=:N_{X^g/X}. \end{align} Let $\{X^g_{\alpha} \}_{\alpha\in \mathfrak{B}}$ be the connected components of $X^g$ with \begin{align}\label{eq:1.24} \dim X^g_{\alpha}=\ell_{\alpha}. \end{align} By abuse of notation, we will often simply denote all the $\ell_{\alpha}$ by the same symbol $\ell$. \begin{assump}\label{ass:1.02} We assume that the kernels $\Ker (D)$ form a vector bundle over $B$. \end{assump} For $\sigma=\alpha\widehat{\otimes} A$ with $\alpha\in \Lambda(T^*B)$, $A\in \End(\mathcal{E})$, we define \begin{align}\label{eq:1.25} \tr[\sigma]=\alpha\cdot\tr[A],\quad \tr^{\mathrm{odd}}[\sigma]=\{\alpha\}^{\mathrm{odd}}\cdot \tr[A],\quad \tr^{\mathrm{even}}[\sigma]=\{\alpha\}^{ \mathrm{even}}\cdot\tr[A], \end{align} where $\{\alpha\}^{\mathrm{odd/even}}$ is the odd or even degree part of $\alpha$. Set \begin{align}\label{eq:1.26} \widetilde{\tr}[\sigma]=\left\{ \begin{array}{ll} \tr_s[\sigma]:=\alpha\cdot\tr[\Gamma A]\hspace{10mm} & \hbox{if $n=\dim X$ is even;} \\ \tr^{\mathrm{odd}}[\sigma]\hspace{10mm} & \hbox{if $n =\dim X$ is odd.} \end{array} \right.
\end{align} Let $\End_{c(TX)}(\mathcal{E})$ be the set of endomorphisms of $\mathcal{E}$ supercommuting with the Clifford action. It is a vector bundle over $W$. As in \cite[Definition 3.28]{BGV}, we define the relative trace $\tr^{\mathcal{E}/\mS}:\End_{c(TX)}(\mathcal{E})\rightarrow \field{C}$ by: for any $A\in \End_{c(TX)}(\mathcal{E})$, \begin{align}\label{eq:1.27} \tr^{\mathcal{E}/\mS}[A]=\left\{ \begin{array}{ll} 2^{-n/2}\tr_s[\Gamma A]\hspace{10mm} & \hbox{if $n=\dim X$ is even;} \\ 2^{-(n-1)/2}\tr[A]\hspace{10mm} & \hbox{if $n=\dim X$ is odd.} \end{array} \right. \end{align} Let $R^{TX}=\left(\nabla^{TX}\right)^2$, $R^{\mathcal{E}}=\left(\nabla^{\mathcal{E}}\right)^2$ be the curvatures of $\nabla^{TX}$, $\nabla^{\mathcal{E}}$. Then \begin{align}\label{eq:1.28} R^{\mathcal{E}/\mS}:=R^{\mathcal{E}}-\frac{1}{4}\langle R^{TX}e_i, e_j\rangle c(e_i)c(e_j) \in \mathscr{C}^{\infty}(W, \Lambda^2(T^*W)\otimes \End_{c(TX)}(\mathcal{E})) \end{align} is the twisting curvature of the Clifford module $\mathcal{E}$, as in \cite[Proposition 3.43]{BGV}. Note that if $TX$ has a $G$-equivariant spin structure, then there exists a $G$-equivariant Hermitian vector bundle $E$ such that $\mathcal{E}=\mS_X\otimes E$, with $\mS_X$ the spinor bundle of $TX$; in this case $\nabla^{\mathcal{E}}$ is induced by $\nabla^{TX}$ and a $G$-invariant Hermitian connection $\nabla^E$ on $E$, and \begin{align}\label{eq:1.29} R^{\mathcal{E}/\mS}=R^E=(\nabla^E)^2. \end{align} We denote by $dg$ the differential of $g$, which gives a bundle isometry $dg: N_{X^g/X}\rightarrow N_{X^g/X}$.
As $G$ is compact, there is an orthogonal decomposition of real vector bundles over $W^g$, \begin{align}\label{eq:1.30} TX|_{W^g}=TX^g\oplus N_{X^g/X} =TX^g\oplus \bigoplus_{0<\theta\leq \pi}N(\theta), \end{align} where $dg|_{N(\pi)}=-\mathrm{Id}$ and, for each $\theta$ with $0<\theta<\pi$, $N(\theta)$ is the underlying real vector bundle of a complex vector bundle $N_{\theta}$ over $W^g$ on which $dg$ acts by multiplication by $e^{i\theta}$. Since $g$ preserves the metric and the orientation of $TX$, we have $\det(dg|_{N(\pi)})=1$, which means that $\dim N(\pi)$ is even. Hence the normal bundle $N_{X^g/X}$ is even dimensional. Since $\nabla^{TX}$ commutes with the group action, its restriction to $W^g$, $\nabla^{TX}|_{W^g}$, preserves the decomposition (\ref{eq:1.30}). Let $\nabla^{TX^g}$ and $\nabla^{N(\theta)}$ be the corresponding induced connections on $TX^g$ and $N(\theta)$, with curvatures $R^{TX^g}$ and $R^{N(\theta)}$. Set \begin{multline}\label{eq:1.31} \widehat{\mathrm{A}}_g(TX,\nabla^{TX})=\mathrm{det}^{\frac{1}{2}} \left(\frac{\frac{ i}{4\pi}R^{TX^g}}{\sinh \left(\frac{ i}{4\pi}R^{TX^g}\right)}\right) \\ \cdot \prod_{0<\theta\leq \pi}\left( i^{\frac{1}{2}\dim N(\theta)}\mathrm{det}^{\frac{1}{2}}\left(1-g \exp\left(\frac{ i}{2\pi}R^{N(\theta)}\right)\right) \right)^{-1}\in \Omega^{2\bullet}(W^g,\field{C}). \end{multline} The sign convention in (\ref{eq:1.31}) is that the degree $0$ part of the factor indexed by $\theta$ in $\prod_{0<\theta\leq \pi}$ is given by $\left(\frac{e^{i\theta/2}}{e^{i\theta}-1}\right)^{\frac{1}{2} \dim N(\theta)}$. By \cite[Lemma 6.10]{BGV}, along $W^g$, the action of $g\in G$ on $\mathcal{E}$ may be identified with a section $g^{\mathcal{E}}$ of $c(N_{X^g/X})\otimes \End_{c(TX)}(\mathcal{E})$. Under the isomorphism (\ref{eq:1.03}), $\sigma(g^{\mathcal{E}})\in \mathscr{C}^{\infty}(W^g, \Lambda (N_{X^g/X}^*)\otimes \End_{c(TX)}(\mathcal{E}))$.
Let $\sigma_{n-\ell}(g^{\mathcal{E}})\in \mathscr{C}^{\infty}(W^g, \Lambda^{n-\ell}(N_{X^g/X}^*)\otimes \End_{c(TX)}(\mathcal{E}))$ be the highest degree part of $\sigma(g^{\mathcal{E}})$ in $\Lambda(N_{X^g/X}^*)$. Then we define the localized relative Chern character $\ch_g(\mathcal{E}/\mS,\nabla^{\mathcal{E}})$ as in \cite[Definition 6.13]{BGV}: \begin{multline}\label{eq:1.32} \ch_g(\mathcal{E}/\mS, \nabla^{\mathcal{E}}):=\frac{2^{(n-\ell)/2}} {\mathrm{det}^{1/2}(1-g|_{N_{X^g/X}})} \tr^{\mathcal{E}/\mS}\left[\sigma_{n-\ell}(g^{\mathcal{E}})\exp\left(-\frac{R^{ \mathcal{E}/\mS}|_{W^g}}{2i\pi}\right)\right] \\ \in \Omega^{\bullet}\big(W^g,\det N_{X^g/X}\big). \end{multline} \begin{rem}\label{rem:1.03} In general, $TX^g$ is not necessarily oriented. The orientation of $TX$ allows us to identify $\det N_{X^g/X}$ with the orientation line of $X^{g}$, so that the integral $\int_{X^g}$ of a form in $\Omega^{\bullet}\big(W^g,\det N_{X^g/X}\big)$ makes sense, as in \cite[Theorem 6.16]{BGV}. If $TX^g$ is oriented, then the orientations of $TX^g$ and $TX$ canonically induce an orientation on $N_{X^g/X}$. By pairing with the volume form of $N_{X^g/X}$, we obtain \begin{align} \ch_g(\mathcal{E}/\mS, \nabla^{\mathcal{E}})\in \Omega^{\bullet}(W^g,\field{C}). \end{align} If $TX$ has a $G$-equivariant spin$^c$ structure, then $TX^g$ is canonically oriented (cf. \cite[Proposition 6.14]{BGV}, \cite[Lemma 4.1]{LMZ00}). If $TX$ has a $G$-equivariant spin structure, then under the above convention $\ch_g(\mathcal{E}/\mS,\nabla^{\mathcal{E}})$ is just the usual equivariant Chern character (cf. (\ref{eq:1.29})) \begin{align}\label{eq:1.33} \ch_g(E,\nabla^E)=\tr^{E}\left[g \exp\left(-\frac{R^{E}|_{W^g}}{2i\pi}\right)\right].
\end{align} \end{rem} As in (\ref{eq:0.05}), for $\alpha\in \Omega^j(B)$, set \begin{align}\label{eq:1.34} \psi_B(\alpha)=\left\{ \begin{array}{ll} \left(2i\pi\right)^{-\frac{j}{2}}\cdot \alpha\hspace{10mm} & \hbox{if $j$ is even;} \\ \pi^{-\frac{1}{2}}\left(2i\pi\right)^{-\frac{j-1}{2}} \cdot \alpha\hspace{10mm} & \hbox{if $j$ is odd.} \end{array} \right. \end{align} Then from the equivariant family local index theorem (see e.g., \cite[Theorem 4.17]{Bi86}, \cite[Theorem 2.10]{BF86II}, \cite[Theorem 2.2]{Liu17b}, \cite[Theorem 1.3]{LM00}), for any $u>0$, the differential form $\psi_B\widetilde{\tr}[g\exp(-\mathbb{B}_u^2)] \in \Omega^{\bullet}(B, \field{C})$ is closed, its cohomology class is independent of $u>0$, and \begin{align}\label{eq:1.35} \lim_{u\rightarrow 0}\psi_B\widetilde{\tr}[g\exp(-\mathbb{B}_u^2)] =\int_{X^g}\widehat{\mathrm{A}}_g(TX,\nabla^{TX})\, \ch_g(\mathcal{E}/\mS,\nabla^{\mathcal{E}}). \end{align} Let $P^{\Ker (D)}:\mathbb{E}\rightarrow \Ker (D)$ be the orthogonal projection with respect to (\ref{eq:1.19}). Let \begin{align}\label{eq:1.36} \nabla^{\Ker (D)}=P^{\Ker (D)}\nabla^{\mathbb{E},u}P^{\Ker (D)} \end{align} and let $R^{\Ker(D)}$ be the curvature of the connection $\nabla^{\Ker(D)}$ on $\Ker(D)$. $\bullet$ If $n=\dim X$ is even, from the natural equivariant extension of \cite[Theorem 9.19]{BGV}, we have \begin{align}\label{eq:1.37} \lim_{u\rightarrow +\infty}\psi_B\tr_s[g\exp(-\mathbb{B}_u^2)] =\tr_s\left[g\exp\left(-\frac{R^{\Ker(D)}}{2i\pi} \right)\right]=\ch_g(\Ker(D), \nabla^{\Ker(D)}). \end{align} Since $\mathbb{B}_{u}$ is $G$-invariant, the equivariant version of \cite[Theorem 9.17]{BGV} shows that \begin{align}\label{eq:1.38} \frac{\partial}{\partial u}\tr_s\left[g \exp(-\mathbb{B}_{u}^{2})\right] =-d^B\tr_s\left[g\frac{\partial \mathbb{B}_{u} }{\partial u} \exp(-\mathbb{B}_{u}^{2})\right].
\end{align} Thus for $0<\varepsilon<T<+\infty$, \begin{align}\label{eq:1.39} \tr_s\left[g \exp(-\mathbb{B}_{\varepsilon}^{2})\right]-\tr_s\left[g \exp(-\mathbb{B}_{T}^{2})\right] =d^B\int_{\varepsilon}^T\tr_s\left[g\frac{\partial \mathbb{B}_{u} }{\partial u} \exp(-\mathbb{B}_{u}^{2})\right]du. \end{align} The natural equivariant extension of \cite[Theorems 9.23 and 10.32(1)]{BGV} (cf. e.g., \cite[(2.72) and (2.77)]{Liu17a}) shows that \begin{align}\label{eq:1.40} \begin{split} &\tr_s\left[g\frac{\partial \mathbb{B}_{u} }{\partial u} \exp(-\mathbb{B}_{u}^{2})\right]=\mathcal{O}(u^{-1/2})\quad \hbox{as $u\rightarrow 0$,} \\ &\tr_s\left[g\frac{\partial \mathbb{B}_{u} }{\partial u} \exp(-\mathbb{B}_{u}^{2})\right]=\mathcal{O}(u^{-3/2})\quad \hbox{as $u\rightarrow +\infty$.} \end{split} \end{align} In this case, by (\ref{eq:1.34}) and (\ref{eq:1.40}), the equivariant $\eta$-form is defined by \begin{align}\label{eq:1.41} \tilde{\eta}_g=\int_0^{+\infty} \frac{1}{2 i \sqrt{\pi}}\psi_{B} \tr_s\left[g\frac{\partial\mathbb{B}_{u}}{\partial u}\exp(-\mathbb{B}_{u}^{2})\right] du\in \Omega^{\mathrm{odd}}(B,\field{C}). \end{align} By (\ref{eq:1.35}), (\ref{eq:1.37}), (\ref{eq:1.39}) and (\ref{eq:1.41}), we have \begin{align}\label{eq:1.42} d^B\tilde{\eta}_g=\int_{X^g} \widehat{\mathrm{A}}_{g}(TX,\nabla^{TX}) \ch_{g}(\mathcal{E}/\mS, \nabla^{\mathcal{E}}) -\ch_{g}(\Ker (D), \nabla^{\Ker (D)}). \end{align} $\bullet$ If $n$ is odd, since the equivariant extension of \cite[Theorem 9.19]{BGV} also holds, we have \begin{align}\label{eq:1.43} \lim_{u\rightarrow +\infty}\tr^{\mathrm{odd}}[g \exp(-\mathbb{B}_u^2)] =\tr^{\mathrm{odd}}\left[g\exp\left(-R^{\Ker(D)} \right)\right]=0.
\end{align} As an analogue of (\ref{eq:1.39}), for $0<\varepsilon<T<+\infty$, we have \begin{align}\label{eq:1.44} \tr^{\mathrm{odd}}\left[g \exp(-\mathbb{B}_{\varepsilon}^{2})\right]-\tr^{\mathrm{odd}}\left[g \exp(-\mathbb{B}_{T}^{2})\right] =d^B\int_{\varepsilon}^T\tr^{\mathrm{even}}\left[g \frac{\partial \mathbb{B}_{u} }{\partial u} \exp(-\mathbb{B}_{u}^{2})\right]du. \end{align} Following the same arguments as in the proof of (\ref{eq:1.40}), we have \begin{align}\label{eq:1.45} \begin{split} &\tr^{\mathrm{even}}\left[g\frac{\partial \mathbb{B}_{u} }{\partial u} \exp(-\mathbb{B}_{u}^{2})\right]=\mathcal{O}(u^{-1/2})\quad \hbox{as $u\rightarrow 0$,} \\ &\tr^{\mathrm{even}}\left[g\frac{\partial \mathbb{B}_{u} }{\partial u} \exp(-\mathbb{B}_{u}^{2})\right]=\mathcal{O}(u^{-3/2})\quad \hbox{as $u\rightarrow +\infty$.} \end{split} \end{align} In this case, by (\ref{eq:1.34}) and (\ref{eq:1.45}), the equivariant $\eta$-form is defined by \begin{align}\label{eq:1.46} \tilde{\eta}_g=\int_0^{+\infty} \frac{1}{ \sqrt{\pi}}\psi_{B} \tr^{\mathrm{even}}\left[g\frac{\partial \mathbb{B}_{u}}{\partial u}\exp(-\mathbb{B}_{u}^{2})\right] du\in \Omega^{\mathrm{even}}(B,\field{C}). \end{align} From (\ref{eq:1.35}), (\ref{eq:1.43}), (\ref{eq:1.44}) and (\ref{eq:1.46}), we get \begin{align}\label{eq:1.47} d^B\tilde{\eta}_g=\int_{X^g}\widehat{ \mathrm{A}}_{g}(TX,\nabla^{TX}) \ch_{g}(\mathcal{E}/\mS, \nabla^{\mathcal{E}}). \end{align} We now write the definitions (\ref{eq:1.41}) and (\ref{eq:1.46}) of the equivariant $\eta$-form in a uniform way, using the notation $\{\cdot \}^{du}$ as in (\ref{eq:0.06}).
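In the simplest case where $B$ is a point and $n$ is odd, we have $\mathbb{B}_u=\sqrt{u}D$ and $\frac{\partial \mathbb{B}_u}{\partial u}=\frac{D}{2\sqrt{u}}$, so that (\ref{eq:1.46}) reduces to
\begin{align*}
\tilde{\eta}_g=\frac{1}{2\sqrt{\pi}}\int_0^{+\infty} u^{-1/2}\tr\left[gD\exp(-uD^2)\right]du=\frac{1}{2}\,\eta_g(D),
\end{align*}
one half of the equivariant $\eta$-invariant of $D$; this is consistent with the factor $\frac{1}{2}$ in front of $\eta_g(D)$ in (\ref{eq:0.12b}).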
\begin{defn}\label{defn:1.03}\cite[Definition 2.3]{Liu17a} For $g\in G$ fixed, under Assumption \ref{ass:1.02}, the equivariant Bismut-Cheeger $\eta$-form is defined by \begin{align}\label{eq:1.48} \tilde{\eta}_g:=-\int_0^{+\infty}\left\{\psi_{\field{R}\times B} \widetilde{\tr}\left[g\exp\left(-\left(\mathbb{B}_{u} +du\wedge\frac{\partial}{\partial u}\right)^2\right)\right] \right\}^{du}du\in \Omega^{\bullet}(B,\field{C}). \end{align} \end{defn} If $g=e$, the identity element of $G$, then (\ref{eq:1.48}) is exactly the Bismut-Cheeger $\eta$-form defined in \cite{BC89}. If $B$ is noncompact, (\ref{eq:1.40}) and (\ref{eq:1.45}) hold uniformly on any compact subset of $B$; thus Definition \ref{defn:1.03}, (\ref{eq:1.42}) and (\ref{eq:1.47}) still hold. \section{Equivariant infinitesimal $\eta$-forms}\label{s02} In this section, we state the family Kirillov formula and define the equivariant infinitesimal $\eta$-form. In Section \ref{s0201}, we state the families version of the Kirillov formula. In Section \ref{s0202}, we define the equivariant infinitesimal $\eta$-form, and establish Theorem \ref{thm:0.1} modulo some technical details. In this section, we use the same notations and assumptions as in Section 1. In particular, $TX$ is $G$-equivariantly oriented and Assumption \ref{ass:1.02} holds throughout this section. \subsection{Moment maps and the family Kirillov formula}\label{s0201} Let $|\cdot|$ be a $G$-invariant norm on the Lie algebra $\mathfrak{g}$ of $G$. For $K\in \mathfrak{g}$, let \begin{align}\label{eq:2.01} K^X(x)=\left.\frac{\partial}{\partial t}\right|_{t=0} e^{tK}\cdot x\quad \text{for}\ x\in W \end{align} be the induced vector field on $W$. Since $G$ acts fiberwise on $W$, $K^X\in \mathscr{C}^{\infty}(W,TX)$ and \begin{align}\label{eq:2.01b} [K^X,K'^X]=-[K,K']^X\quad\quad \text{for any}\ K,K'\in \mathfrak{g}.
\end{align} For $K\in \mathfrak{g}$, let $\mathcal{L}_{K}$ be the corresponding Lie derivative given by \begin{align}\label{eq:2.02b} \mathcal{L}_Ks=\left.\frac{\partial}{\partial t}\right|_{t=0} \left(e^{-tK}. s\right), \end{align} for $s\in \mathscr{C}^{\infty}(W,\mathcal{E})$ (cf. (\ref{eq:1.19b})). The associated moment maps $m^{TX}(\cdot)$, $m^{\mathcal{E}}(\cdot)$ are defined by \cite[Definition 7.5]{BGV} (see also \cite[Definition 2.1]{BG00}), \begin{align}\label{eq:2.02} \begin{split} m^{TX}(K)&:=\nabla^{TX}_{K^X} -\mathcal{L}_{K}\vert_{TX}\in \mathscr{C}^{\infty}(W,\End(TX)), \\ m^{\mathcal{E}}(K)&:=\nabla^{\mathcal{E}}_{K^X}-\mathcal{L}_{K}\vert_{\mathcal{E}}\in \mathscr{C}^{\infty}(W,\End(\mathcal{E})). \end{split} \end{align} Since the vector field $K^X$ is Killing and $\nabla^{TX}$, $\nabla^{\mathcal{E}}$ preserve the corresponding metrics, $m^{TX}(K)$ and $m^{\mathcal{E}}(K)$ are skew-adjoint sections of $\End(TX)$ and $\End(\mathcal{E})$ respectively. By Proposition \ref{prop:1.01}, the connection $\nabla^{TX}$ is the Levi-Civita connection of $(TX, g^{TX})$ when restricted to a fiber. Since the $G$-action is along the fiber, we have \begin{align}\label{eq:2.03} m^{TX}(K)=\nabla^{TX}_{\cdot}K^X\in \mathscr{C}^{\infty}(W,\End(TX)). \end{align} Since the connection $\nabla^{TX}$ is $G$-invariant, from (\ref{eq:2.02}) (cf. \cite[(7.4)]{BGV} or \cite[(2.8)]{BG00}), \begin{align}\label{eq:2.05} \nabla^{TX}_{\cdot}m^{TX}(K)+i_{K^X}R^{TX}=0. \end{align} We define $m^{\mS}(K)\in \End(\mathcal{E})$ by \begin{align}\label{eq:2.06} m^{\mS}(K):=\frac{1}{4}\langle m^{TX}(K)e_i, e_j\rangle c(e_i)c(e_j). \end{align} If $TX$ is spin, $m^{\mS}(K)$ is just the moment map of the spinor. Set \begin{align}\label{eq:2.07} m^{\mathcal{E}/\mS}(K):=m^{\mathcal{E}}(K)-m^{\mS}(K).
\end{align} From (\ref{eq:1.28}), we set (cf. \cite[(2.30)]{BG00}) \begin{align}\label{eq:2.08} R_K^{TX}=R^{TX}-2i\pi\, m^{TX}(K),\quad R_K^{\mathcal{E}/\mS}=R^{\mathcal{E}/\mS}-2i\pi\, m^{\mathcal{E}/\mS}(K). \end{align} Then $R_K^{TX}$ (resp. $R_K^{\mathcal{E}/\mS}$) is called the equivariant curvature of $TX$ (resp. equivariant twisted curvature of $\mathcal{E}$). Let $Z(g)\subset G$ be the centralizer of $g\in G$ with Lie algebra $\mathfrak{z}(g)$. Then, in the sense of the adjoint action, \begin{align}\label{eq:2.09} \mathfrak{z}(g)=\{K\in \mathfrak{g}: g.K=K\}. \end{align} We fix $g\in G$ from now on. In the sequel, we always take $K\in \mathfrak{z}(g)$. Put \begin{align}\label{eq:2.10} W^K=\{x\in W: K^X(x)=0 \}. \end{align} Then $W^K$, which is the fixed point set of the one-parameter subgroup generated by $K$, is a totally geodesic submanifold along each fiber $X$. Set \begin{align}\label{eq:2.11} W^{g,K}=W^g\cap W^K. \end{align} Then $W^{g,K}$ is also a totally geodesic submanifold along each fiber $X$. Moreover, if $K_0\in \mathfrak{z}(g)$ and $z\in \field{R}$, for $z$ small enough, we have \begin{align}\label{eq:2.12} W^{g,zK_0}=W^{ge^{zK_0}}. \end{align} Since the $G$-action is trivial on $B$, $W^K\rightarrow B$ and $W^{g,K}\rightarrow B$ are fibrations with compact fibers $X^K$ and $X^{g,K}$ respectively. As in (\ref{eq:1.24}), by an abuse of notation, we will often simply write \begin{align}\label{eq:2.13} \dim X^{g,K}=\ell'. \end{align} Observe that $m^{TX}(K)|_{X^g}$ acts on $TX^g$ and $N_{X^g/X}$. It also preserves the splitting (\ref{eq:1.30}). Let $m^{TX^g}(K)$ and $m^{N(\theta)}(K)$ be the restrictions of $m^{TX}(K)|_{X^g}$ to $TX^g$ and $N(\theta)$. We define the corresponding equivariant curvatures $R_K^{TX^g}$, $R^{N(\theta)}_K$ as in (\ref{eq:2.08}).
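As an elementary illustration of the moment map (\ref{eq:2.03}) (a local model only: the plane below is noncompact, unlike the fibers $X$ considered here), take the rotation action on the flat plane:

```latex
% Illustration under simplifying assumptions: X = \mathbb{R}^2 with the flat
% metric, G = SO(2) acting by rotations, K the standard generator. Then
K^X(x,y) = -y\,\partial_x + x\,\partial_y ,
\qquad
m^{TX}(K) = \nabla^{TX}_{\cdot}K^X =
\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},
% a constant skew-adjoint endomorphism, as it must be; (2.05) holds
% trivially here since m^{TX}(K) is parallel and R^{TX} = 0.
```

In this model the zero set of $K^X$, as in (\ref{eq:2.10}), is the origin, on which $m^{TX}(K)$ is an invertible skew-adjoint endomorphism of the normal directions.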
For $K\in \mathfrak{z}(g)$ with $|K|$ small enough, comparing with (\ref{eq:1.31}), set \begin{multline}\label{eq:2.14} \widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX}) =\mathrm{det}^{\frac{1}{2}}\left(\frac{\frac{ i}{4\pi} R_K^{TX^g}}{\sinh\left(\frac{ i}{4\pi}R_K^{TX^g}\right)} \right) \\ \cdot\prod_{k>0}\left( i^{\frac{1}{2}\dim N(\theta)}\mathrm{det}^{\frac{1}{2}}\left(1-g\exp\left( \frac{ i}{2\pi}R_K^{N(\theta)}\right)\right)\right)^{-1} \in \Omega^{2\bullet}(W^g,\field{C}). \end{multline} Note that the compactness of $W$ and the smallness of $|K|$ guarantee that the denominator in (\ref{eq:2.14}) is invertible. Comparing with (\ref{eq:1.32}), set \begin{align}\label{eq:2.15} \ch_{g,K}(\mathcal{E}/\mS, \nabla^{\mathcal{E}}):=\frac{2^{(n-\ell)/2}}{ \mathrm{det}^{1/2}(1-g|_{N_{X^g/X}})}\tr^{\mathcal{E}/\mS}\left[ \sigma_{n-\ell}(g^{\mathcal{E}})\exp\left(-\frac{R_K^{\mathcal{E}/\mS}|_{W^g}}{ 2i\pi}\right)\right]. \end{align} As in (\ref{eq:1.33}), if $TX$ has a $G$-equivariant spin structure, $\ch_{g,K}(\mathcal{E}/\mS, \nabla^{\mathcal{E}})$ is just the equivariant infinitesimal Chern character in \cite[Definition 2.7]{BG00}, \begin{align}\label{eq:2.16} \ch_{g,K}(E, \nabla^{E})=\tr^{E}\left[g \exp\left(-\frac{R_K^E|_{W^g}}{ 2i\pi}\right)\right]\in \Omega^{2\bullet}(W^g,\field{C}), \end{align} where $m^E(K)=\nabla_{K^X}^E-\mathcal{L}_K$ and $R_K^E:=R^E-2i\pi m^E(K)$ as in (\ref{eq:2.02}) and (\ref{eq:2.08}). Set \begin{align}\label{eq:2.17} d_K=d-2i\pi\ i_{K^X}. \end{align} Then by (\ref{eq:2.05}) (cf. \cite[Theorem 7.7]{BGV}), \begin{align}\label{eq:2.18} d_K\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX})=0, \quad d_K\ch_{g,K}(\mathcal{E}/\mS, \nabla^{\mathcal{E}})=0. \end{align} Recall that $\mathbb{B}_t$ is the rescaled Bismut superconnection in (\ref{eq:1.22}). Set \begin{align}\label{eq:2.19} \mathbb{B}_{K,t}=\mathbb{B}_{t}+\frac{c(K^X)}{4\sqrt{t}}.
\end{align} Then $\mathbb{B}_{K,t}^2$ is a second-order elliptic differential operator along the fiber $X$ acting on $\Lambda(T^*B)\widehat{\otimes}\mathbb{E}$. If the base $B$ is a point, then the operator $\mathbb{B}_{K,t}$ is $\sqrt{t}D+\frac{c(K^X)}{4\sqrt{t}}$, which was introduced by Bismut \cite{Bi85} in his heat kernel proof of the Kirillov formula for the equivariant index. As observed by Bismut \cite[\S 1d), \S 3b)]{Bi86} (cf. also \cite[\S 10.7]{BGV}), its square plus $\mathcal{L}_{K^X}$ is the square of the Bismut superconnection for a fibration with compact structure group, with $K^X$ replaced by the curvature of the fibration. Thus we can roughly interpret $\mathbb{B}_{K,t}$ as the Bismut superconnection obtained by extending our fibration by a fibration with compact structure group. Now we state the families version of the Kirillov formula and delay a heat kernel proof of it to Section 5. \begin{thm}\label{thm:2.01} For any $K\in \mathfrak{z}(g)$ and $|K|$ small, \begin{itemize} \item { if $n$ is even, for $t>0$, the differential form $$ \psi_B\tr_s\left[g\exp\left(-\mathbb{B}_{K,t}^2- \mathcal{L}_K\right)\right]\in \Omega^{\mathrm{even}}(B, \field{C}) $$ is closed, the cohomology class defined by it is independent of $t$, and \begin{align}\label{eq:2.20} \lim_{t\rightarrow 0}\psi_B\tr_s \left[g\exp\left(-\mathbb{B}_{K,t}^2-\mathcal{L}_{K}\right)\right] =\int_{X^g}\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX})\, \ch_{g,K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}}).
\end{align} } \item { if $n$ is odd, for $t>0$, the differential form $$ \psi_B\tr^{\mathrm{odd}}\left[g\exp\left(- \mathbb{B}_{K,t}^2- \mathcal{L}_K\right)\right]\in \Omega^{\mathrm{odd}}(B, \field{C}) $$ is closed, the cohomology class defined by it is independent of $t$, and \begin{align}\label{eq:2.21} \lim_{t\rightarrow 0}\psi_B\tr^{\mathrm{odd}} \left[g\exp\left(-\mathbb{B}_{K,t}^2-\mathcal{L}_{K}\right)\right] =\int_{X^g}\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX})\, \ch_{g,K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}}). \end{align} } \end{itemize} \end{thm} If $B$ is a point and $g=e$, this heat kernel proof of the Kirillov formula was given by Bismut in \cite{Bi85} (see also \cite[Theorem 8.2]{BGV}). If $B$ is a point, (\ref{eq:2.20}) is established in \cite{BG00}. For $g=e$, (\ref{eq:2.20}) is obtained in \cite{Wa14}. \subsection{Equivariant infinitesimal $\eta$-forms: Theorem \ref{thm:0.1}}\label{s0202} For $t>0$, set \begin{align}\label{eq:2.22} \mathcal{B}_{K,t}=\mathbb{B}_{K,t}+ dt\wedge \frac{\partial}{\partial t}. \end{align} Then by (\ref{eq:2.19}), \begin{align}\label{eq:2.23} \mathcal{B}_{K,t}^2=\mathbb{B}_{K,t}^2 + dt\wedge \frac{\partial \mathbb{B}_{K,t}}{\partial t} =\left(\mathbb{B}_{t}+\frac{c(K^X)}{4\sqrt{t}} \right)^2+ dt\wedge \frac{\partial}{\partial t}\left(\mathbb{B}_{t}+\frac{c(K^X)}{4\sqrt{t}}\right).
\end{align} \begin{thm}\label{thm:2.02} There exist $\beta>0$, $\delta,\delta'>0$ and $C>0$ such that for $K\in \mathfrak{z}(g)$ and $z\in \field{C}$ with $|zK|\leq \beta$, a) for any $t\geq 1$, \begin{align}\label{eq:2.24} \left|\left\{\widetilde{\tr}\Big[g\exp\big(-\mathcal{B}_{zK,t}^2-z\mathcal{L}_{K}\big) \Big]\right\}^{dt}\right|\leq \frac{C}{t^{1+\delta}}; \end{align} b) for any $0<t\leq 1$, \begin{align}\label{eq:2.25} \left|\left\{\widetilde{\tr}\Big[g\exp\big(-\mathcal{B}_{zK,t}^2-z\mathcal{L}_{K}\big) \Big]\right\}^{dt}\right|\leq C\,t^{\delta'-1}. \end{align} \end{thm} We delay the proof of Theorem \ref{thm:2.02} to Section 5. $\bullet$ If $n=\dim X$ is even, then for $t>0$, as $\mathbb{B}_{K,t}$ commutes with $g$ and $\mathcal{L}_K$, by \cite[Lemma 9.15]{BGV}, \begin{align}\label{eq:2.26b} d^B\tr_s\left[g \exp(-\mathbb{B}_{K,t}^{2}-\mathcal{L}_K)\right] =\tr_s\Big[\big[\mathbb{B}_{K,t}, g \exp(-\mathbb{B}_{K,t}^{2}-\mathcal{L}_K)\big]\Big] =0. \end{align} As in (\ref{eq:1.37}) (cf. \cite[Proposition 8.11 and Theorem 9.19]{BGV}), we have \begin{align}\label{eq:2.26} \lim_{t\rightarrow +\infty}\psi_B\tr_s\Big[g\exp\big( -\mathbb{B}_{K,t}^2-\mathcal{L}_{K} \big)\Big] =\ch_{ge^K}(\Ker(D), \nabla^{\Ker(D)}). \end{align} As in (\ref{eq:1.38}), \begin{multline}\label{eq:2.27} \frac{\partial}{\partial t}\tr_s\left[g \exp(-\mathbb{B}_{K,t}^{2}-\mathcal{L}_K)\right] =-d^B\tr_s\left[g\frac{\partial \mathbb{B}_{K,t} }{\partial t} \exp(-\mathbb{B}_{K,t}^{2}-\mathcal{L}_K)\right] \\ =d^B\left\{\tr_s\left[g \exp(-\mathcal{B}_{K,t}^{2}-\mathcal{L}_K)\right]\right\}^{dt}.
\end{multline} Thus from (\ref{eq:2.27}), for $0<\varepsilon<T<+\infty$, \begin{multline}\label{eq:2.28} \tr_s\left[g \exp(-\mathbb{B}_{K,T}^{2}-\mathcal{L}_K)\right]-\tr_s\left[g \exp(-\mathbb{B}_{K,\varepsilon}^{2}-\mathcal{L}_K)\right] \\ =d^B\int_{\varepsilon}^T\left\{\tr_s\left[g \exp(-\mathcal{B}_{K,t}^{2}-\mathcal{L}_K)\right]\right\}^{dt}dt. \end{multline} In this case, for $|K|\leq \beta$, by Theorem \ref{thm:2.02}, the equivariant infinitesimal $\eta$-form is defined by \begin{multline}\label{eq:2.29} \tilde{\eta}_{g,K}=-\int_0^{+\infty}\frac{1}{2 i \sqrt{\pi}}\psi_{B} \left\{\tr_s\left[g \exp(-\mathcal{B}_{K,t}^{2}-\mathcal{L}_K)\right]\right\}^{dt}dt \\ =\int_0^{+\infty}\frac{1}{2 i \sqrt{\pi}}\psi_{B} \tr_s\left[g\frac{\partial \mathbb{B}_{K,t} }{\partial t} \exp(-\mathbb{B}_{K,t}^{2}-\mathcal{L}_K)\right]dt \in \Omega^{\mathrm{odd}}(B,\field{C}). \end{multline} By (\ref{eq:2.20}), (\ref{eq:2.28}) and (\ref{eq:2.29}), we have \begin{align}\label{eq:2.30} d^B\tilde{\eta}_{g,K}=\int_{X^g}\widehat{ \mathrm{A}}_{g,K}(TX,\nabla^{TX}) \ch_{g,K}(\mathcal{E}/\mS, \nabla^{\mathcal{E}})-\ch_{ge^K} (\Ker (D), \nabla^{\Ker (D)}). \end{align} $\bullet$ If $n$ is odd, then for $t>0$, as $\mathbb{B}_{K,t}$ commutes with $g$ and $\mathcal{L}_K$, again by the argument in \cite[Lemma 9.15]{BGV}, \begin{align}\label{eq:2.31b} d^B\tr^{\mathrm{odd}}\left[g \exp(-\mathbb{B}_{K,t}^{2}-\mathcal{L}_K)\right] =\tr^{\mathrm{even}}\Big[\big[\mathbb{B}_{K,t}, g \exp(-\mathbb{B}_{K,t}^{2}-\mathcal{L}_K)\big]\Big] =0. \end{align} By the same argument as in (\ref{eq:1.43}), \begin{align}\label{eq:2.31} \lim_{t\rightarrow +\infty}\tr^{\mathrm{odd}} \Big[g\exp\big( -\mathbb{B}_{K,t}^2-\mathcal{L}_{K} \big)\Big]=0.
\end{align} Comparing with (\ref{eq:1.38}) and (\ref{eq:2.27}), we have \begin{multline}\label{eq:2.32} \frac{\partial}{\partial t}\tr^{\mathrm{odd}}\left[g \exp(-\mathbb{B}_{K,t}^{2}-\mathcal{L}_K)\right] =-d^B\tr^{\mathrm{even}}\left[g\frac{\partial \mathbb{B}_{K,t}}{\partial t} \exp(-\mathbb{B}_{K,t}^{2}-\mathcal{L}_K)\right] \\ =d^B\left\{\tr^{\mathrm{odd}}\left[g \exp(-\mathcal{B}_{K,t}^{2}-\mathcal{L}_K)\right]\right\}^{dt}. \end{multline} From Theorem \ref{thm:2.02}, in this case, for $|K|\leq \beta$, the equivariant infinitesimal $\eta$-form is defined by \begin{multline}\label{eq:2.33} \tilde{\eta}_{g,K}=-\int_0^{+\infty}\frac{1}{ \sqrt{\pi}}\psi_{B} \left\{\tr^{\mathrm{odd}}\left[g \exp(-\mathcal{B}_{K,t}^{2}-\mathcal{L}_K)\right]\right\}^{dt} dt \\ =\int_0^{+\infty}\frac{1}{ \sqrt{\pi}}\psi_{B} \tr^{\mathrm{even}}\left[g\frac{\partial \mathbb{B}_{K,t}}{\partial t} \exp(-\mathbb{B}_{K,t}^{2}-\mathcal{L}_K)\right] dt \in \Omega^{\mathrm{even}}(B,\field{C}). \end{multline} As in (\ref{eq:1.47}), by (\ref{eq:2.21}), (\ref{eq:2.31}), (\ref{eq:2.32}) and (\ref{eq:2.33}), we get \begin{align}\label{eq:2.34} d^B\tilde{\eta}_{g,K}=\int_{X^g}\widehat{ \mathrm{A}}_{g,K}(TX,\nabla^{TX}) \ch_{g,K}(\mathcal{E}/\mS, \nabla^{\mathcal{E}}). \end{align} \begin{defn}\label{defn:2.03} For $K\in \mathfrak{z}(g)$ with $|K|\leq \beta$, where $\beta$ is determined in Theorem \ref{thm:2.02}, under Assumption \ref{ass:1.02}, the equivariant infinitesimal Bismut-Cheeger $\eta$-form is defined by \begin{align}\label{eq:2.35} \widetilde{\eta}_{g,K}=-\int_{0}^{+\infty}\left\{\psi_{\field{R}\times B} \widetilde{\tr}\Big[g\exp\left(-\mathcal{B}_{K,t}^2-\mathcal{L}_{K} \right) \Big]\right\}^{dt}dt. \end{align} \end{defn} By (\ref{eq:0.05}) and (\ref{eq:1.34}), (\ref{eq:2.35}) is a reformulation of (\ref{eq:2.29}) and (\ref{eq:2.33}).
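The estimates of Theorem \ref{thm:2.02} are exactly what make the integral in (\ref{eq:2.35}) absolutely convergent: splitting at $t=1$,

```latex
\int_{0}^{1}\Big|\big\{\cdots\big\}^{dt}\Big|\,dt
\;\leq\; C\int_{0}^{1} t^{\delta'-1}\,dt \;=\; \frac{C}{\delta'}<\infty,
\qquad
\int_{1}^{+\infty}\Big|\big\{\cdots\big\}^{dt}\Big|\,dt
\;\leq\; C\int_{1}^{+\infty} t^{-1-\delta}\,dt \;=\; \frac{C}{\delta}<\infty,
```

where $\{\cdots\}^{dt}$ abbreviates the integrand of (\ref{eq:2.35}); the small-time estimate (\ref{eq:2.25}) controls the first integral and the large-time estimate (\ref{eq:2.24}) the second.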
From (\ref{eq:2.30}) and (\ref{eq:2.34}), we establish the first part of Theorem \ref{thm:0.1}. Remark that the compactness of $B$ guarantees the existence of the constant $\beta>0$ in Definition \ref{defn:2.03}. From (\ref{eq:2.29}) and (\ref{eq:2.33}), it is obvious that if $K=0$, then $\widetilde{\eta}_{g,K}=\widetilde{\eta}_{g}$ as in (\ref{eq:1.48}). From Duhamel's formula (cf. e.g., \cite[Theorem 2.48]{BGV}), we have \begin{align}\label{eq:2.36} \frac{\partial}{\partial\bar{z}}\widetilde{\tr}\Big[g\exp\big( -\mathcal{B}_{zK,t}^2-z\mathcal{L}_{K}\big)\Big] =-\widetilde{\tr}\left[g\frac{\partial (\mathcal{B}_{zK,t}^2+z\mathcal{L}_{K})}{\partial \bar{z}}\exp\big(-\mathcal{B}_{zK,t}^2-z\mathcal{L}_{K}\big)\right]=0. \end{align} Thus, $\widetilde{\tr}\Big[g\exp\big(-\mathcal{B}_{zK,t}^2-z\mathcal{L}_{K}\big)\Big]$ is $\mathscr{C}^{\infty}$ in $t>0$ and holomorphic in $z\in \field{C}$. We fix $K\in \mathfrak{z}(g)$. Thus for $0<\varepsilon<T<+\infty$, the function $$\int_{\varepsilon}^{T}\left\{ \psi_{\field{R}\times B}\widetilde{\tr}\Big[g\exp\left(-\mathcal{B}_{zK,t}^2-z\mathcal{L}_{K} \right) \Big]\right\}^{dt}dt$$ is holomorphic in $z$. By Theorem \ref{thm:2.02} and the dominated convergence theorem, we have that \begin{align}\label{eq:2.37} \widetilde{\eta}_{g,zK}:=-\int_{0}^{+\infty}\left\{\psi_{\field{R}\times B} \widetilde{\tr}\Big[g\exp\left(-\mathcal{B}_{zK,t}^2-z\mathcal{L}_{K} \right) \Big]\right\}^{dt}dt \end{align} is holomorphic for $z\in \field{C}$, $|zK|< \beta$. Thus we get the last part of Theorem \ref{thm:0.1}. The proof of Theorem \ref{thm:0.1} is completed. \section{Comparison of two equivariant $\eta$-forms}\label{s03} In this section, we state our main result. We use the same notations and assumptions as in Sections 1 and 2.
Let $\vartheta_K\in T^*X$ be the $1$-form dual to $K^X$ with respect to the metric $g^{TX}$, i.e., for any $U\in TX$, \begin{align}\label{eq:3.01} \vartheta_K(U)=\langle K^X, U\rangle. \end{align} We identify $\vartheta_K$ with a vertical $1$-form on $W$, i.e., with a $1$-form which vanishes on $T^HW$. Then by (\ref{eq:2.17}) and (\ref{eq:3.01}), we have \begin{align}\label{eq:3.02} d_K\vartheta_{K}=d\vartheta_K-2i\pi\,|K^X|^2. \end{align} Let $d^X$ be the exterior differential operator along the fiber $X$. By (\ref{eq:2.03}) and (\ref{eq:3.01}) (cf. \cite[Lemma 7.15 (1)]{BGV}), for $U,U'\in TX$, we have \begin{align}\label{eq:3.03} d^X\vartheta_K(U,U')=2\langle \nabla^{TX}_UK^X,U'\rangle =2\langle m^{TX}(K)U,U'\rangle. \end{align} From (\ref{eq:1.11}) and (\ref{eq:1.12}), set \begin{align}\label{eq:3.04} \widetilde{T}=2T(f_{p}^H,e_i)f^p\wedge e^i\wedge +\frac{1}{2}T(f_{p}^H,f_q^H)f^p\wedge f^q\wedge. \end{align} From \cite[Proposition 10.1]{BGV} or \cite[(3.61) and (3.94)]{BG04}, \begin{align}\label{eq:3.05} d\vartheta_K=d^X\vartheta_K+\langle \widetilde{T},K^X\rangle =d^X\vartheta_K+\vartheta_K(\widetilde{T}). \end{align} For $K\in \mathfrak{z}(g)$, $|K|$ small, $v>0$, set \begin{align}\label{eq:3.07} \begin{split} \alpha_K&=\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX}) \ch_{g,K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}})\in \Omega^{2\bullet}\big(W^g, \det N_{X^g/X}\big),\\ \widetilde{e}_v&=-\int_{X^g}\frac{\vartheta_K}{8vi\pi}\exp\left(\frac{d_{K} \vartheta_K}{8vi\pi}\right)\alpha_K\in \Omega^{\bullet}(B,\field{C}). \end{split}\end{align} Note that if $W^{g,K}=W^g$, as $\vartheta_K=0$ on $W^{g,K}$, we get $\widetilde{e}_v=0$. \begin{lemma}\label{lem:3.01} Assume that $W^{g,K}\varsubsetneq W^g$.
Then $\widetilde{e}_v=\mathcal{O}(v^{-1})$ as $v\rightarrow +\infty$ and $\widetilde{e}_v=\mathcal{O}(v^{1/2})$ as $v\rightarrow 0$. \end{lemma} \begin{proof} By (\ref{eq:3.02}) and (\ref{eq:3.07}), we have \begin{align}\label{eq:3.08} \widetilde{e}_v=-\sum_{j=0}^{\lfloor \dim W^g/2\rfloor}\frac{1}{j!} \left(\frac{1}{2i\pi}\right)^{j+1} \int_{X^g}\frac{\vartheta_K}{4v}\left(\frac{d\vartheta_K}{4v} \right)^{j}\exp\left(-\frac{|K^X|^2}{4v} \right)\cdot\alpha_K. \end{align} Thus when $v\rightarrow +\infty$, $\widetilde{e}_v=\mathcal{O}(v^{-1})$. For $v\rightarrow 0$, we follow the argument in the proof of \cite[Theorem 1.3]{Bi86a}. For $x\in W^g$, if $K^X_x\neq 0$, then as $v\rightarrow 0$ the integrand in (\ref{eq:3.08}) at $x$ decays exponentially. So the integral in (\ref{eq:3.08}) can be localized to a neighbourhood of $W^{g,K}$. Let $N_{X^{g,K}/X^g}$ be the normal bundle of $W^{g,K}$ in $W^g$, which we identify with the orthogonal complement of $TX^{g,K}=TX^g|_{W^{g,K}}\cap TX^K|_{W^{g,K}}$ in $TX^g|_{W^{g,K}}$. Recall that as $K^X$ is a Killing vector field, for any $b\in B$, $X^{g,K}_b$ is totally geodesic in $X^g_b$, and, by the same argument as in Section \ref{s0201}, $\nabla^{TX^g}$, $m^{TX}(K)$ preserve the splitting \begin{align}\label{eq:3.18b} TX^g=TX^{g,K}\oplus N_{X^{g,K}/X^g} \quad \text{ on } W^{g,K} \end{align} and $m^{TX}(K)=0$ on $TX^{g,K}$. In particular, \begin{align}\label{eq:3.09b} m^{N_{X^{g,K}/X^g}}(K)=m^{TX}(K)|_{N_{X^{g,K}/X^g}}\in \End(N_{X^{g,K}/X^g}) \text{ is skew-adjoint and invertible.} \end{align} Combined with (\ref{eq:2.05}), this implies that $N_{X^{g,K}/X^g}$ is orientable, and we fix an orientation. Then the orientations on $TX$, $N_{X^{g,K}/X^g}$ induce the identifications over $W^{g,K}$, \begin{align}\label{eq:3.10b} \det(N_{X^{g}/X})\simeq \det(TX^g)\simeq \det(TX^{g,K}).
\end{align} Given $\varepsilon>0$, let $\mathcal{U}_{\varepsilon}''$ be the $\varepsilon$-neighborhood of $W^{g,K}$ in $N_{X^{g,K}/X^g}$. There exists $\varepsilon_0>0$ such that for $0<\varepsilon\leq \varepsilon_0$, the fiberwise exponential map $(y,Z)\in N_{b, X^{g,K}/X^g}\rightarrow \exp_{y}^{X}(Z)\in X_b^g$ is a diffeomorphism from $\mathcal{U}_{\varepsilon}''$ onto the tubular neighborhood $\mathcal{V}_{\varepsilon}''$ of $W^{g,K}$ in $W^g$. We denote by $\widetilde{\mathcal{V}}_{\varepsilon}''$ the fiber of the fibration $\mathcal{V}_{\varepsilon}'' \rightarrow B$. With this identification, let $\bar{k}(y,Z)$ be the function such that \begin{align}\label{eq:3.09} dv_{X^g}(y,Z)=\bar{k}(y,Z)dv_{X^{g,K}}(y)dv_{N_{X^{g,K}/X^g}}(Z). \end{align} Here $dv_{X^g}\in \Lambda^{\max}(T^*X^g)\otimes \det(T^*X^g)$, $dv_{X^{g,K}}\in \Lambda^{\max}(T^*X^{g,K}) \otimes \det(T^*X^{g,K})$ are the Riemannian volume forms of $X^g$ and $X^{g,K}$, and $dv_{N_{X^{g,K}/X^g}}$ is the Euclidean volume form on $N_{X^{g,K}/X^g}$. Let $e^1,\cdots,e^{\ell}$ be a local orthonormal frame of $T^*X^g$. For $\beta\in \Omega^{\bullet} \big(W^g,\det (N_{X^g/X})\big)$, let $[\beta]^{\max}$ be the coefficient of $e^1\wedge\cdots\wedge e^{\ell}\otimes \widehat{e^1\wedge\cdots\wedge e^{\ell}}$ in $\beta$, where $\widehat{e^1\wedge\cdots\wedge e^{\ell}}$ denotes the local frame of $\det(N_{X^g/X})$ induced by $e^1\wedge\cdots\wedge e^{\ell}$ via (\ref{eq:3.10b}). Consider the dilation $\delta_v$, $v>0$, of $N_{X^{g,K}/X^g}$ given by $\delta_v(y,Z)=(y,\sqrt{v}Z)$.
We have \begin{multline}\label{eq:3.10} \int_{\widetilde{\mathcal{V}}_{\varepsilon}''}\frac{\vartheta_K}{4v}\left(\frac{d\vartheta_K }{4v}\right)^{j}\exp\left(-\frac{|K^X|^2}{4v} \right)\alpha_K \\ =\int_{X^{g,K}}\int_{Z\in N_{X^{g,K}/X^g}, |Z|< \varepsilon}\left[\frac{ \vartheta_K|_{(y,Z)}}{4v}\left(\frac{d\vartheta_K |_{(y,Z)}}{4v}\right)^{j} \exp\left(-\frac{|K^X(y,Z)|^2}{4v}\right)\alpha_K(y,Z) \right]^{\max} \\ \cdot \bar{k}(y,Z)dv_{X^{g,K}}(y)dv_{N_{X^{g,K}/X^g}}(Z) \\ =\int_{X^{g,K}}\int_{Z\in N_{X^{g,K}/X^g}, |Z|< \varepsilon/\sqrt{v}} \bigg[\frac{\delta_v^*\vartheta_K|_{(y,Z)}}{4v}\left(\frac{ d\delta_v^*\vartheta_K|_{(y,Z)}}{4v}\right)^{j}\exp\left(- \frac{|K^X(y,\sqrt{v}Z)|^2}{4v}\right) \\ \cdot\delta_v^*\alpha_K|_{(y,Z)}\bigg]^{\max} \cdot \bar{k}(y,\sqrt{v}Z)dv_{X^{g,K}}(y) dv_{N_{X^{g,K}/X^g}}(Z). \end{multline} Let $\nabla^{N_{X^{g,K}/X^g}}$ be the connection on $N_{X^{g,K}/X^g}$ induced by $\nabla^{TX}$ as explained after (\ref{eq:1.30}). Let $\pi_N:N_{X^{g,K}/X^g} \rightarrow W^{g,K}$ be the obvious projection. With respect to $\nabla^{N_{X^{g,K}/X^g}}$, we have the canonical splitting of bundles over $N_{X^{g,K}/X^g}$, \begin{align}\label{eq:3.11} TN_{X^{g,K}/X^g}=T^HN_{X^{g,K}/X^g} \oplus \pi_N^*N_{X^{g,K}/X^g}. \end{align} By (\ref{eq:1.04}) and (\ref{eq:3.11}), we have \begin{align}\label{eq:3.11b} T^HN_{X^{g,K}/X^g}\simeq \pi_N^*TW^{g,K} \simeq \pi_N^*(T^HW\oplus TX^{g,K}). \end{align} On $N_{X^{g,K}/X^g}$, by (\ref{eq:3.11}) and (\ref{eq:3.11b}), we have \begin{multline}\label{eq:3.12} \Lambda(T^*N_{X^{g,K}/X^g})=\Lambda (T^{H*}N_{X^{g,K}/X^g})\widehat{\otimes} \pi_N^*\Lambda (N_{X^{g,K}/X^g}^*) \\ \simeq \pi_N^*\left( \Lambda(T^*W^{g,K})\widehat{\otimes} \Lambda (N_{X^{g,K}/X^g}^*)\right).
\end{multline} For fixed $y\in W^{g,K}$, we take $Y_1, Y_1'\in T_yW^{g,K}$ and $Y^V, Y'^V\in N_{X^{g,K}/X^g,y}$; then $Y=Y_1+Y^V$, $Y'=Y_1'+Y'^V$ are sections of $TN_{X^{g,K}/X^g}$ along $N_{X^{g,K}/X^g,y}$ under our identification (\ref{eq:3.11}), i.e., \begin{align}\label{eq:3.13} Y_{(y,Z)}=Y_1^H(y,Z)+Y^V, \quad Y_{(y,Z)}'=Y_1'^H(y,Z)+Y'^V. \end{align} Here $Y_1^H, Y_1'^H\in T^HN_{X^{g,K}/X^g}$ are the lifts of $Y_1, Y_1'$. Let $\theta_0$ be the one-form on the total space $\mathcal{N}$ of $N_{X^{g,K}/X^g}=N_{W^{g,K}/W^g}$ given by \begin{align}\label{eq:3.16} \theta_0(Y)_{(y,Z)}=\langle m^{TX}(K)Z, Y^V\rangle_{y} \quad \text{for}\ Y=Y_1^H+Y^V\in T^HN_{X^{g,K}/X^g} \oplus (\pi_N^*N_{X^{g,K}/X^g}). \end{align} By \cite[Lemma 7.15 (2)]{BGV}, we have \begin{align}\label{eq:3.17} \frac{1}{v}\delta_v^*\vartheta_K=\theta_0+\mathcal{O}(v^{1/2}). \end{align} From (\ref{eq:3.17}), we get \begin{align}\label{eq:3.18} \frac{1}{v}\delta_v^*d\vartheta_K= \frac{1}{v}d\delta_v^*\vartheta_K=d\theta_0+\mathcal{O}(v^{1/2}). \end{align} As in the argument before \cite[p.~218, Lemma 7.16]{BGV}, by (\ref{eq:3.18b}), we calculate that for $(y,Z)\in N_{X^{g,K}/X^g}$, \begin{align}\label{eq:3.21} d\theta_0(Y,Y')_{(y,Z)}=2\langle m^{TX}(K)Y^V, Y'^V\rangle_{y} -\langle R^{TX}(Y_1^H,Y_1'^H)(m^{TX}(K)Z), Z\rangle_{y}. \end{align} By (\ref{eq:2.03}) and (\ref{eq:2.11}), for $y\in W^{g,K}$, \begin{align}\label{eq:3.19} \frac{1}{v}|K^X(y,\sqrt{v}Z)|^2=|m^{TX}(K)Z|^2+\mathcal{O}(v^{1/2}).
\end{align} From (\ref{eq:3.10}), (\ref{eq:3.17}), (\ref{eq:3.18}) and (\ref{eq:3.19}), for any $\alpha\in \Omega^{\bullet} \big(W^g,\det (N_{X^g/X})\big)$, as $v\rightarrow 0$, \begin{multline}\label{eq:3.20} \int_{\widetilde{\mathcal{V}}_{\varepsilon}''}\frac{\vartheta_K}{4v}\left(\frac{d\vartheta_K }{4v}\right)^{j}\exp\left(-\frac{|K^X|^2}{4v} \right)\alpha \\ =\int_{X^{g,K}}\alpha_y\int_{Z\in N_{X^{g,K}/X^g}} \frac{\theta_0}{4}\left(\frac{d\theta_0 }{4}\right)^{j}\exp\left(-\frac{|m^{TX}(K)Z|^2}{4} \right) +\mathcal{O}(v^{1/2}). \end{multline} From (\ref{eq:3.21}), $d\theta_0$ is an even polynomial in $Z$, while, from (\ref{eq:3.16}), $\theta_0$ is linear in $Z$. Hence the integrand is odd in $Z$, and the last integral in (\ref{eq:3.20}) is zero. Therefore, as $v\rightarrow 0$, we have \begin{align}\label{eq:3.25} \int_{X^g}\frac{\vartheta_K}{4v}\left(\frac{d\vartheta_K }{4v}\right)^{j}\exp\left(-\frac{|K^X|^2}{4v} \right)\alpha_K= \mathcal{O}(v^{1/2}). \end{align} The proof of Lemma \ref{lem:3.01} is completed. \end{proof} Remark that when $B$ is a point and $g=e$, Lemma \ref{lem:3.01} is proved in \cite[Proposition 2.2]{Go09}. From Lemma \ref{lem:3.01} and (\ref{eq:3.07}), the following integral is well-defined: \begin{align}\label{eq:3.26} \mathcal{M}_{g,K}:=\int_0^{+\infty}\widetilde{e}_v\frac{dv}{v}. \end{align} \begin{prop}\label{prop:3.02} For any $K_{0}\in \mathfrak{z}(g)$, there exists $\beta>0$ such that for $K=zK_{0}$, $-\beta<z<\beta$, we have \begin{multline}\label{eq:3.27a} d^B\mathcal{M}_{g,K}=\int_{X^{g}}\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX}) \ch_{g,K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}}) \\ - \int_{X^{ge^K}} \widehat{\mathrm{A}}_{ge^K}(TX,\nabla^{TX}) \ch_{ge^K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}}).
\end{multline} Furthermore, there exist $c_j(K)\in \Omega^{\bullet}(B,\field{C})$ for $1\leq j\leq \lfloor(\dim W^g+1)/2\rfloor$ such that $\mathcal{M}_{g,tK}$ is smooth in $t$ for $0<|t|< 1$ and, as $t\rightarrow 0$, we have \begin{align}\label{eq:3.27} \mathcal{M}_{g,tK}=\sum_{j=1}^{\lfloor(\dim W^g+1)/2\rfloor}c_j(K)t^{-j}+\mathcal{O}(t^0). \end{align} Moreover, $t^{\lfloor(\dim W^g+1)/2\rfloor}\mathcal{M}_{g,tK} $ is real analytic in $t$ for $|t|<1$. \end{prop} \begin{proof} By (\ref{eq:2.17}), $d_K^2=-2i\pi\mathcal{L}_K$, and $\vartheta_K$ is $K$-invariant. Thus \begin{align}\label{eq:3.22b} \frac{\partial}{\partial v}\left(\exp\left( \frac{d_K\vartheta_K}{2vi\pi}\right) \right) =-\frac{1}{v^2}d_K\left( \frac{\vartheta_K}{2i\pi} \exp\left(\frac{d_K\vartheta_K}{2vi\pi} \right)\right). \end{align} We define the corresponding equivariant curvature $R^{N_{X^{g,K}/X^g}}_K$ as in (\ref{eq:2.08}) via (\ref{eq:3.18b}). By the proof of (\ref{eq:3.25}) and \cite[Theorem 1.3]{Bi86a}, we know that there exists $C>0$ such that for any $v\in (0,1]$ and $\alpha\in \Omega^{\bullet}(W^g,\det (N_{X^g/X}))= \Omega^{\bullet}(W^g,o(TX^g))$, \begin{align}\label{eq:3.23b} \begin{split} &\left|\int_{X^g}\exp\left( \frac{d_K\vartheta_K}{2vi\pi}\right) \alpha -\int_{X^{g,K}}\frac{i^{-(\ell-\ell')/2}\alpha}{ \mathrm{det}^{1/2}\left( R^{N_{X^{g,K}/X^g}}_K/(2i\pi)\right)}\right| \leq C\sqrt{v}\|\alpha\|_{\mathscr{C}^1(W^g)}, \\ &\left|\int_{X^g}\frac{\vartheta_K}{2vi\pi} \exp\left(\frac{d_K\vartheta_K}{2vi\pi}\right) \alpha \right| \leq C\sqrt{v}\|\alpha\|_{\mathscr{C}^1(W^g)}.
\end{split} \end{align} Let $Q_K$ be the current on $W^g$ such that if $\alpha\in \Omega^{\bullet}(W^g,\det (N_{X^g/X}))$, then \begin{align}\label{eq:3.24b} \int_{X^g}Q_K\alpha=-\int_0^{+\infty}\int_{X^g} \frac{\vartheta_K}{2vi\pi} \exp\left(\frac{d_K\vartheta_K}{2vi\pi}\right)\alpha \frac{dv}{v}. \end{align} From (\ref{eq:3.08}) and the second equation of (\ref{eq:3.23b}), we know that (\ref{eq:3.24b}) is well-defined. From (\ref{eq:3.22b})-(\ref{eq:3.24b}), the following equality of currents on $W^g$ holds (cf. \cite[Theorem 1.8]{B11a}): \begin{align}\label{eq:3.25b} d_KQ_K=1-\frac{i^{-(\ell-\ell')/2}\delta_{W^{g,K}}}{ \mathrm{det}^{1/2}\left( R^{N_{X^{g,K}/X^g}}_K/(2i\pi)\right)}, \end{align} where $\delta_{W^{g,K}}$ is the current of integration on $W^{g,K}$. From (\ref{eq:3.07}), (\ref{eq:3.26}) and (\ref{eq:3.24b}), we get \begin{multline}\label{eq:3.26b} \mathcal{M}_{g,K}=\int_{X^g}Q_K\alpha_K \\ =-\int_0^{+\infty}\int_{X^g}\frac{\vartheta_K}{2vi\pi} \exp\left(\frac{d_K\vartheta_K}{2vi\pi} \right)\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX}) \ch_{g,K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}})\frac{dv}{v}. \end{multline} For $x\in W^g$, $K\in \mathfrak{z}(g)$, we have $K^X(x)\in T_xX^g$. From \cite[(1.7)]{BGV}, for $\sigma\in \Omega^{\bullet}(W^g,o(TX^g))$, using the sign convention in (\ref{eq:0.11}), we have \begin{align}\label{eq:3.23} d^B\int_{X^g}\sigma=\int_{X^g}d\sigma=\int_{X^g}d_K\sigma.
\end{align} From (\ref{eq:2.11}), proceeding by the same calculation as in the proof of \cite[Theorem 8.2]{BGV}, we get, as elements of $\Omega^{\bullet}(W^{g,K}, \det (N_{X^g/X}))$, \begin{align}\label{eq:3.26a} \frac{\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX}) \ch_{g,K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}}) }{i^{(\ell-\ell')/2} \mathrm{det}^{1/2}\left( R^{N_{X^{g,K}/X^g}}_K/(2i\pi)\right)}= \widehat{\mathrm{A}}_{ge^K}(TX,\nabla^{TX}) \ch_{ge^K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}}). \end{align} As $\alpha_K$ is $d_K$-closed, by (\ref{eq:2.12}) and (\ref{eq:3.24b})-(\ref{eq:3.26a}), we get (\ref{eq:3.27a}). For $t\neq 0$, by (\ref{eq:3.26b}) and the change of variables $v\mapsto vt^2$, we have \begin{align}\label{eq:3.29} \mathcal{M}_{g,tK} =-\int_0^{+\infty}\int_{X^g}\frac{\vartheta_K}{2vi\pi t} \sum_{k=0}^{\lfloor(\dim W^g-1)/2\rfloor}\left(\frac{(d \vartheta_K)^k}{(2vi\pi t)^k k! } \right) \exp\left(-\frac{|K^X|^2}{v}\right)\alpha_{tK} \frac{dv}{v}. \end{align} From the arguments in the proof of (\ref{eq:3.25}), we get (\ref{eq:3.27}). From (\ref{eq:2.14}), (\ref{eq:2.15}) and (\ref{eq:3.07}), we see that $\alpha_{tK}$ is real analytic in $t$ for $|t|<1$. Following the proof of (\ref{eq:3.25}), $$\int_0^{+\infty}\int_{X^g}\frac{\vartheta_K}{v} \left(\frac{d\vartheta_K}{v}\right)^k \exp\left(-\frac{|K^X|^2}{v}\right)\alpha_{tK} \frac{dv}{v}$$ is uniformly absolutely integrable in $v$ for $|t|<1$. Thus $t^{\lfloor(\dim W^g+1)/2\rfloor} \mathcal{M}_{g,tK}$ is real analytic in $t$ for $|t|<1$. The proof of Proposition \ref{prop:3.02} is completed. \end{proof} From Proposition \ref{prop:3.02}, we can state our main result, Theorem \ref{thm:0.2}, as follows.
\begin{thm}\label{thm:3.03} For any $g\in G$, $K_{0}\in \mathfrak{z}(g)$, there exists $\beta>0$ such that for $K=zK_{0}$, $-\beta<z<\beta$, $K\neq 0$, we have \begin{align}\label{eq:3.30} \tilde{\eta}_{g,K}=\tilde{\eta}_{ge^K}+\mathcal{M}_{g,K}\in \Omega^{\bullet}(B,\field{C})/d\Omega^{\bullet}(B,\field{C}). \end{align} \end{thm} Observe that by (\ref{eq:2.37}), $\tilde{\eta}_{g,tK}$ is analytic in $t$ for $t$ small. By (\ref{eq:3.30}), as $t\rightarrow 0$, modulo exact forms, the singularity of $\tilde{\eta}_{ge^{tK}}$ is the same as that of $-\mathcal{M}_{g,tK}$ in (\ref{eq:3.27}). Note that Theorem \ref{thm:3.03} is compatible with (\ref{eq:1.42}), (\ref{eq:1.47}), (\ref{eq:2.30}), (\ref{eq:2.34}) and (\ref{eq:3.27a}). \begin{rem}\label{rem:3.04} For $K\in \mathfrak{z}(g)$, $M=\lfloor(\dim W^g-1)/2 \rfloor$, on $W^g\setminus \{K^X=0\}$, we have \begin{multline}\label{eq:3.31} Q_K=-\sum_{j= 0}^M \frac{1}{j!}\left(\frac{1}{2i\pi} \right)^{j+1}\int_0^{+\infty}\frac{\vartheta_K}{v} \left(\frac{d\vartheta_K}{v}\right)^j \exp\left(-\frac{|K^X|^2}{v}\right)\frac{dv}{v} \\ =-\sum_{j= 0}^M \frac{1}{j!}\left(\frac{1}{2i\pi} \right)^{j+1}\frac{\vartheta_K}{|K^X|^2} \frac{(d\vartheta_K)^j}{|K^X|^{2j}} \int_0^{+\infty}v^je^{-v}dv \\ =-\sum_{j= 0}^M \frac{\vartheta_K(d\vartheta_K)^j}{ (2i\pi)^{j+1}|K^X|^{2j+2}} =-\frac{\vartheta_K}{2i\pi|K^X|^2}\left(1- \frac{d\vartheta_K}{2i\pi|K^X|^{2}}\right)^{-1}. \end{multline} From (\ref{eq:3.17})-(\ref{eq:3.19}), we know that there exists $C>0$ such that \begin{align}\label{eq:3.34b} \left|K^X(y,Z) \right|^2\geq C|Z|^2, \end{align} and for $Y_1\in T_yW^{g,K}$, \begin{align}\label{eq:3.34c} i_{Y_1^H}\vartheta_{K}=\mathcal{O}(|Z|^3), \quad i_{Y_1^H}d\vartheta_{K}=\mathcal{O}(|Z|^2).
\end{align} From (\ref{eq:3.31})-(\ref{eq:3.34c}) and the fact that the rank $\ell-\ell'$ of $N_{X^{g,K}/X^g}$ is even, we know that near $W^{g,K}$, \begin{align}\label{eq:3.35b} Q_K(y,Z)=\mathcal{O}(|Z|^{1-(\ell-\ell')}). \end{align} Thus, as a current over $W^g$, $Q_K$ is in fact locally integrable over $W^g$ and given by (\ref{eq:3.31}). For $g=e$ and $B=\mathrm{pt}$, this is exactly \cite[Proposition 2.2]{Go09}. Assume now that $K^X$ has no zeros. For $t\neq 0$ small enough, by (\ref{eq:3.07}), (\ref{eq:3.26}), (\ref{eq:3.30}) and (\ref{eq:3.31}), we have \begin{align}\label{eq:3.33} \widetilde{\eta}_{g,tK}=\widetilde{\eta}_{ge^{tK}} - \int_{X^g}\frac{\vartheta_{K}}{2i\pi t|K^X|^2}\left(1- \frac{d\vartheta_{K}}{2i\pi t|K^X|^{2}}\right)^{-1} \alpha_{tK} \in \Omega^{\bullet}(B,\field{C})/d\Omega^{\bullet}(B,\field{C}). \end{align} In particular, for $g=e$ and $B=\mathrm{pt}$, the Taylor expansion of (\ref{eq:3.33}) at $t=0$ is \cite[Theorem 0.5]{Go00}. \end{rem} \section{A proof of Theorem \ref{thm:3.03}}\label{s04} In this section, we state some intermediate results and prove Theorem \ref{thm:3.03}. The proofs of the intermediate results are deferred to Section 6. \subsection{Some intermediate results}\label{s0401} For $t>0$, $v>0$, set \begin{align}\label{eq:4.01} \mathcal{C}_{v,t}=\mathbb{B}_t+\frac{\sqrt{t}c(K^X)}{4} \left(\frac{1}{t}-\frac{1}{v}\right) +dt\wedge \frac{\partial}{\partial t} +dv\wedge \frac{\partial}{\partial v}. \end{align} Then $\mathcal{C}_{v,t}$ is a superconnection associated with the fibration $(\field{R}_+^*)^2\times W \rightarrow (\field{R}_+^*)^2\times B$. From the argument in the proof of \cite[Theorem 9.17]{BGV}, we have \begin{align}\label{eq:4.02} d^{\field{R}^2\times B}\widetilde{\tr}[g\exp(-\mathcal{C}_{v,t}^2 -\mathcal{L}_{K})]=0.
\end{align} For $\alpha\in \Lambda(T^*(\field{R}^2\times B))$, written as \begin{align}\label{eq:4.03} \alpha=\alpha_0+dv\wedge \alpha_1+dt\wedge \alpha_2+ dv\wedge dt\wedge \alpha_3,\quad \alpha_i\in \Lambda (T^*B),\ i=0,1,2,3, \end{align} as in (\ref{eq:0.06}), set \begin{align}\label{eq:4.04} [\alpha]^{dv}:=\alpha_1,\quad [\alpha]^{dt}:=\alpha_2,\quad [\alpha]^{dv\wedge dt}:=\alpha_3. \end{align} \begin{defn}\label{defn:4.01} We define $\beta_{g,K}$ to be the part of $-\psi_{\field{R}^2\times B} \widetilde{\tr}[g\exp(-\mathcal{C}_{v,t}^2-\mathcal{L}_{K})]$ of degree one with respect to the coordinates $(v,t)$, and we set \begin{align}\label{eq:4.06} \alpha_{g,K}=-\left\{\psi_{\field{R}^2\times B} \widetilde{\tr}[g\exp(-\mathcal{C}_{v,t}^2-\mathcal{L}_{K})]\right\}^{ dv\wedge dt}. \end{align} \end{defn} Comparing the coefficients of the $dv\wedge dt$ part of (\ref{eq:4.02}), we have \begin{align}\label{eq:4.07} \left(dv\wedge\frac{\partial}{\partial v}+dt\wedge\frac{ \partial}{\partial t}\right)\beta_{g,K}=-dv\wedge dt\wedge d^B\alpha_{g,K}. \end{align} Take $a,A$ with $0<a\leq 1\leq A<+\infty$.
Let $\Gamma= \Gamma_{a,A}$ be the oriented contour in $\field{R}_{+,v} \times\field{R}_{+,t}$: \begin{equation*} \begin{tikzpicture} \draw[->][ -triangle 45] (-0.25,0) -- (5.5,0); \draw[->][ -triangle 45] (0,-0.25) -- (0,5); \draw[->][ -triangle 45] (1,1) -- (2.5,1); \draw (2.5,1) -- (4,1); \draw[->][ -triangle 45] (4,1) -- (4,2.5); \draw (4,2.5) -- (4,4); \draw[->][ -triangle 45] (4,4) -- (2.5,2.5); \draw (2.5,2.5) -- (1,1); \draw[dashed] (0,1) -- (1,1); \draw[dashed] (0,4) -- (4,4); \draw[dashed] (1,0) -- (1,1); \draw[dashed] (4,0) -- (4,1); \foreach \x in {0} \draw (\x cm,1pt) -- (\x cm,1pt) node[anchor=north east] {$\x$}; \draw (3,1.75) node {$\Delta$}(3,1.75); \draw (0,4.75) node[anchor=east] {$t$}(0,4.75); \draw (5.5,0) node[anchor=west] {$v$}(5.5,0); \draw (0,1) node[anchor=east] {$a$}(0,1); \draw (0,4) node[anchor=east] {$A$}(0,4); \draw (1,0) node[anchor=north] {$a$}(1,0); \draw (4,0) node[anchor=north] {$A$}(4,0); \draw (4.25,2.5) node[anchor=west] {\small{$\Gamma_1$}}(4.25,2.5); \draw (2.5,0.75) node[anchor=north] {\small{$\Gamma_3$}}(2.5,0.75); \draw (2.25,2.5) node[anchor=south] {\small{$\Gamma_2$}}(2.25,2.5); \end{tikzpicture} \end{equation*} The contour $\Gamma$ is made of three oriented pieces $\Gamma_1, \Gamma_2, \Gamma_3$ indicated in the above picture. For $1\leq k\leq 3$, set $I_k^0=\int_{\Gamma_k}\beta_{g,K}$. Also, $\Gamma$ bounds an oriented triangular domain $\Delta$. By Stokes' formula and (\ref{eq:4.07}), \begin{align}\label{eq:4.08} \sum_{k=1}^3I_k^0=\int_{\partial \Delta}\beta_{g,K}= \int_{\Delta}\left(dv\wedge\frac{\partial}{\partial v} +dt\wedge\frac{\partial}{\partial t}\right)\beta_{g,K} =-d^B\left(\int_{\Delta} \alpha_{g,K} dv\wedge dt\right). \end{align} The proof of the following theorem is left to Section \ref{s0511}.
\begin{thm}\label{thm:4.02} For $K\in \mathfrak{z}(g)$ with $|K|$ small enough, there exist $\delta>0$, $C>0$ such that for any $t\geq 1$, $v\geq t$, we have \begin{align}\label{eq:4.09} \left|[\beta_{g,K}(v,t)]^{dt}\right|\leq \frac{C}{t^{1+\delta}}. \end{align} \end{thm} For $\alpha\in \Omega^{j}(B,\field{C})$, we define \begin{align}\label{eq:4.10} \phi(\alpha):=\{\psi_{\field{R}\times B}(dv\wedge \alpha) \}^{dv}=\left\{ \begin{array}{ll} \pi^{-\frac{1}{2}}\left(2i\pi\right)^{-\frac{j}{2}}\cdot \alpha\hspace{10mm} & \hbox{if $j$ is even;} \\ \left(2i\pi\right)^{-\frac{j+1}{2}}\cdot \alpha \hspace{10mm} & \hbox{if $j$ is odd.} \end{array} \right. \end{align} Comparing with (\ref{eq:1.26}), we set \begin{align}\label{eq:4.10a} \widetilde{\tr}'=\left\{ \begin{array}{ll} \tr_s & \hbox{if $n$ is even;} \\ \tr^{\mathrm{even}} & \hbox{if $n$ is odd.} \end{array} \right. \end{align} For $0<t\leq v$, set \begin{align}\label{eq:4.11} \mathcal{B}_{K,t,v}=\left(\mathbb{B}_t+\frac{\sqrt{t}c(K^X)}{4}\left( \frac{1}{t}-\frac{1}{v} \right)\right)^2+\mathcal{L}_{K}.
\end{align} Then by Definition \ref{defn:4.01}, (\ref{eq:4.01}) and (\ref{eq:4.11}), we have \begin{align}\label{eq:4.05} \begin{split} &[\beta_{g,K}(v,t)]^{dt}\\ &\hspace{5mm}=-\left\{\psi_{\field{R}_t\times B} \widetilde{\tr}\left[g\exp\left(-\mathcal{B}_{K,t,v} -dt\wedge \frac{\partial}{\partial t} \left(\mathbb{B}_t+\frac{\sqrt{t} c(K^X)}{4}\left(\frac{1}{t}-\frac{1}{v}\right)\right)\right) \right]\right\}^{dt} \\ &\hspace{5mm}=\phi\widetilde{\tr}'\left[g\frac{\partial}{\partial t} \left(\mathbb{B}_t+\frac{\sqrt{t} c(K^X)}{4}\left(\frac{1}{t}-\frac{1}{v}\right)\right) \exp\left(-\mathcal{B}_{K,t,v}\right) \right], \\ &[\beta_{g,K}(v,t)]^{dv}=-\left\{\psi_{\field{R}_v\times B} \widetilde{\tr}\left[g\exp\left(-\mathcal{B}_{K,t,v}-dv \frac{\sqrt{t}c(K^X)}{4v^2}\right) \right]\right\}^{dv} \\ &\hspace{5mm}=\phi\widetilde{\tr}'\left[g\frac{\sqrt{t}c(K^X)}{4v^2} \exp\left(-\mathcal{B}_{K,t,v}\right) \right]. \end{split} \end{align} Thus, as $\mathcal{B}_{K,t,t}=\mathbb{B}_t^2+\mathcal{L}_K$, by (\ref{eq:4.05}), on $\Gamma_2$ we have \begin{multline}\label{eq:4.13b} \beta_{g,K}(v,t)=dt\wedge \phi\widetilde{\tr}'\left[ g\frac{\partial\mathbb{B}_t}{\partial t} \exp\left(-\mathbb{B}_t^2-\mathcal{L}_K\right) \right] \\ =-dt\wedge \left\{\psi_{\field{R}\times B} \widetilde{\tr}\left[g\exp\left(-\left( \mathbb{B}_{t}+dt\wedge\frac{\partial}{\partial t}\right)^2-\mathcal{L}_K\right)\right]\right\}^{dt}. \end{multline} In the remainder of this section, we use Theorem \ref{thm:4.02} and the following estimates to prove Theorem \ref{thm:3.03}. The proofs of these estimates are deferred to Section 6. Recall that $\widetilde{e}_v$ is defined in (\ref{eq:3.07}).
\begin{thm}\label{thm:4.03} For $K_{0}\in \mathfrak{z}(g)$, there exists $\beta>0$ such that for $K=zK_{0}$, $-\beta<z<\beta$, $K\neq 0$: a) when $t\rightarrow 0$, \begin{align}\label{eq:4.12} \phi\widetilde{\tr}'\left[g\frac{\sqrt{t}c(K^X)}{4v}\exp\left(- \mathcal{B}_{K,t,v}\right) \right]\rightarrow -\widetilde{e}_v; \end{align} b) there exist $C>0$, $\delta\in (0,1]$ such that for $t\in (0,1]$, $v\in [t,1]$, \begin{align}\label{eq:4.13} \left|\phi\widetilde{\tr}'\left[g\frac{\sqrt{t}c(K^X)}{4v}\exp\left( -\mathcal{B}_{K,t,v}\right) \right]+\widetilde{e}_v\right|\leq C\left(\frac{t }{v} \right)^{\delta}; \end{align} c) there exists $C>0$ such that for $t\in (0,1]$, $v\geq 1$, \begin{align}\label{eq:4.14} \left|\widetilde{\tr}'\left[g\frac{\sqrt{t}c(K^X)}{4v}\exp\left( -\mathcal{B}_{K,t,v}\right) \right]\right|\leq \frac{C}{v}; \end{align} d) for $v\geq 1$, \begin{align}\label{eq:4.15} \lim_{t\rightarrow 0}\widetilde{\tr}'\left[g\frac{c(K^X)}{4\sqrt{t}v} \exp\left(-\mathcal{B}_{K,t,tv}\right) \right]=0. \end{align} \end{thm} \subsection{A proof of Theorem \ref{thm:3.03}}\label{s0402} We now finish the proof of Theorem \ref{thm:3.03} by using Theorems \ref{thm:4.02} and \ref{thm:4.03}. By (\ref{eq:4.08}), we know that $I_1^0+I_2^0+I_3^0$ is an exact form on $B$. We take the limits $A\rightarrow+\infty$ and then $a\rightarrow 0$, in the indicated order. We claim that the limit of $I_j^0(A,a)$ as $A\rightarrow +\infty$ exists, denoted by $I_j^1(a)$, and that the limit of $I_j^1(a)$ as $a\rightarrow 0$ exists, denoted by $I_j^2$, for $j=1,2,3$.
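To orient the reader, we record in advance the values of these limits; this summary merely anticipates (\ref{eq:4.18}), (\ref{eq:4.19}) and (\ref{eq:4.25}) below, where the three identifications are established: \begin{align*} I_1^2=\widetilde{\eta}_{g,K},\qquad I_2^2=-\widetilde{\eta}_{ge^K},\qquad I_3^2=-\mathcal{M}_{g,K}, \end{align*} so that the exactness of $I_1^2+I_2^2+I_3^2$ yields (\ref{eq:3.30}).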
i) By (\ref{eq:4.11}) and (\ref{eq:4.05}), $[\beta_{g,K} (v,t)]^{dt}$ is uniformly bounded for $v\geq 1$ and $t$ in a compact interval $I\subset (0,+\infty)$, and \begin{align}\label{eq:4.17b} \lim_{v\rightarrow +\infty}[\beta_{g,K}(v,t)]^{dt} =[\beta_{g,K}(+\infty,t)]^{dt}. \end{align} From Theorem \ref{thm:4.02}, (\ref{eq:2.23}), (\ref{eq:4.05}) and the dominated convergence theorem, we see that \begin{align}\label{eq:4.17} I_1^1(a)=\lim_{A\rightarrow +\infty}\int_a^A [\beta_{g,K} (A,t)]^{dt}dt=-\int_{a}^{+\infty}\left\{\psi_{\field{R}\times B} \widetilde{\tr}\Big[g\exp\left(-\mathcal{B}_{K,t}^2-\mathcal{L}_{K} \right) \Big]\right\}^{dt}dt. \end{align} Thus by Theorem \ref{thm:2.02} and Definition \ref{defn:2.03}, we have \begin{align}\label{eq:4.18} I_1^2=-\int_{0}^{+\infty}\left\{\psi_{\field{R}\times B} \widetilde{\tr}\Big[g\exp\left(-\mathcal{B}_{K,t}^2-\mathcal{L}_{K} \right) \Big]\right\}^{dt}dt=\widetilde{\eta}_{g,K}. \end{align} ii) From Definition \ref{defn:1.03}, (\ref{eq:2.02b}) and (\ref{eq:4.13b}), we have \begin{align}\label{eq:4.19} I_2^2=\int_0^{+\infty}\left\{\psi_{\field{R}\times B} \widetilde{\tr}\left[g\exp\left(-\left( \mathbb{B}_{t}+dt\wedge\frac{\partial}{\partial t}\right)^2-\mathcal{L}_K\right)\right]\right\}^{dt}dt=-\widetilde{ \eta}_{ge^K}. \end{align} iii) For the term $I_3^{0}(A,a)$, set \begin{align}\label{eq:4.20} \begin{split} &J_1=-\int_a^1\widetilde{e}_v\frac{dv}{v}, \\ &J_2=\int_1^{+\infty}\phi\widetilde{\tr}'\left[g\frac{\sqrt{a}c(K^X)}{ 4v}\exp\left(-\mathcal{B}_{K,a,v}\right) \right]\frac{dv}{v}, \\ &J_3=\int_1^{1/a}\left(\phi\widetilde{\tr}'\left[g\frac{c(K^X)}{ 4\sqrt{a}v}\exp\left(-\mathcal{B}_{K,a,av}\right) \right]+\widetilde{e}_{av} \right)\frac{dv}{v}.
\end{split} \end{align} Clearly, by Theorem \ref{thm:4.03} c) and (\ref{eq:4.05}), we have \begin{align}\label{eq:4.21} I_3^1(a)=J_1+J_2+J_3. \end{align} By (\ref{eq:4.12}), (\ref{eq:4.14}) and (\ref{eq:4.20}), from the dominated convergence theorem we find that as $a\rightarrow 0$, \begin{align}\label{eq:4.22} J_2\rightarrow J_2^1=-\int_{1}^{+\infty}\widetilde{e}_v\frac{dv}{v}. \end{align} By (\ref{eq:4.13}), there exist $C>0$, $\delta\in (0,1]$ such that for $a\in (0,1]$, $1\leq v\leq 1/a$, \begin{align}\label{eq:4.23} \left|\phi\widetilde{\tr}'\left[g\frac{c(K^X)}{4\sqrt{a}v}\exp\left( -\mathcal{B}_{K,a,av}\right) \right]+\widetilde{e}_{av}\right|\leq \frac{C }{v^{\delta}}. \end{align} Using Lemma \ref{lem:3.01}, (\ref{eq:4.15}), (\ref{eq:4.20}), (\ref{eq:4.23}) and the dominated convergence theorem, as $a\rightarrow 0$, \begin{align}\label{eq:4.24} J_3\rightarrow J_3^1=0. \end{align} By (\ref{eq:3.26}), (\ref{eq:4.20})-(\ref{eq:4.22}) and (\ref{eq:4.24}), we have \begin{align}\label{eq:4.25} I_3^2=-\int_0^{+\infty}\widetilde{e}_v\frac{dv}{v}=-\mathcal{M}_{g,K}. \end{align} By \cite[\S22, Theorem 17]{dR73}, $d\Omega^{\bullet}(B,\field{C})$ is closed under uniform convergence. Thus, by (\ref{eq:4.08}), \begin{align}\label{eq:4.26} \sum_{j=1}^{3}I_j^2\equiv0\ \mathrm{mod}\ d\Omega^{\bullet}(B,\field{C}). \end{align} By (\ref{eq:4.18}), (\ref{eq:4.19}), (\ref{eq:4.25}) and (\ref{eq:4.26}), the proof of Theorem \ref{thm:3.03} is completed. \section{Construction of the equivariant infinitesimal $\eta$-forms}\label{s05} In this section, we prove Theorems \ref{thm:2.02} and \ref{thm:4.02} following the lines of \cite[\S 7]{BG00}, and give a heat kernel proof of the family Kirillov formula, Theorem \ref{thm:2.01}.
To make it easy to compare the arguments in this section with those in \cite{BG00}, and in particular to see how the extra terms of the family version appear, the structure of this section closely follows that of \cite[\S 7]{BG00}. This section is organized as follows. In Section \ref{s0501}, we prove Theorem \ref{thm:2.02} a). In Sections \ref{s0502}-\ref{s0510}, we prove Theorems \ref{thm:2.01} and \ref{thm:2.02} b). In Section \ref{s0511}, we prove Theorem \ref{thm:4.02}. \subsection{The behaviour of the trace as $t\rightarrow +\infty$}\label{s0501} Set \begin{align}\label{eq:5.01} C_{K,t}=\mathbb{B}_{t}+\frac{c(K^X)}{4\sqrt{t}}+ t\cdot dt\wedge \frac{\partial}{\partial t}. \end{align} For $z\in \field{C}$, we set \begin{align}\label{eq:5.02} \mathcal{A}_{zK,t}:=C_{zK,t}^2+z\mathcal{L}_{K}. \end{align} Then Theorem \ref{thm:2.02} a) is implied by the following estimate. \begin{thm}\label{thm:5.01} For $\beta>0$ fixed, there exist $C>0$, $\delta>0$ such that if $K\in \mathfrak{g}$, $z\in \field{C}$, $|zK|\leq \beta$, $t\geq 1$, \begin{align}\label{eq:5.03} \left|\left\{\widetilde{\tr}[g\exp(-\mathcal{A}_{zK,t})]\right\}^{dt}\right|\leq \frac{C}{t^{\delta}}. \end{align} \end{thm} \begin{proof} The remainder of this subsection is devoted to the proof of Theorem \ref{thm:5.01}. \end{proof} In this subsection, we fix $\beta>0$. The constants in this subsection may depend on $\beta$. For $b\in B$, recall that $\mathbb{E}_{b}$ is the vector space of smooth sections of $\mathcal{E}$ on $X_b$. For $\mu\in \field{R}$, let $\mathbb{E}_{b}^{\mu}$ be the Sobolev space of order $\mu$ of sections of $\mathcal{E}$ on $X_b$. We equip $\mathbb{E}_{b}^{0}$ with the Hermitian product $\langle\ ,\ \rangle_0$ in (\ref{eq:1.19}). Let $\|\cdot\|_0$ be the corresponding norm on $\mathbb{E}_{b}^{0}$.
For $\mu\in \field{Z}$, let $\|\cdot\|_{\mu}$ be the Sobolev norm on $\mathbb{E}_{b}^{\mu}$ induced by $\nabla^{TX}$ and $\nabla^{\mathcal{E}}$. Recall that we assume that the kernels $\Ker (D)$ form a vector bundle over $B$. We denote by $P$ the orthogonal projection from $\mathbb{E}^0$ to $\Ker (D)$ and set $P^{\bot}=1-P$. Recall that $P^{TX}:TW=T^HW\oplus TX\rightarrow TX$ is the projection defined by (\ref{eq:1.04}). For $s,s'\in \mathbb{E}$, $t\geq 1$, we set \begin{align}\label{eq:5.04} \begin{split} |s|_{t,0}^2:&=\|s\|_0^2\,, \\ |s|_{t,1}^2:&=\|Ps\|_{0}^2+t\|P^\bot s\|_0^2 +t\|\nabla^{\mathcal{E}}_{P^{TX}\cdot}P^\bot s\|_0^2\,. \end{split} \end{align} Set \begin{align}\label{eq:5.06} |s|_{t,-1}=\sup_{0\neq s'\in \mathbb{E}^1}\frac{|\langle s,s'\rangle_{0}|}{|s'|_{t,1}}. \end{align} Then (\ref{eq:5.04}) and (\ref{eq:5.06}) define Sobolev norms on $\mathbb{E}^1$ and $\mathbb{E}^{-1}$. Since $\nabla^{\mathcal{E}}_{P^{TX}\cdot}P$ is an operator along the fiber $X$ with smooth kernel, the norm $|\cdot|_{t,1}$ (resp. $|\cdot|_{t,-1}$) is equivalent to $\|\cdot\|_1$ (resp. $\|\cdot\|_{-1}$) on $\mathbb{E}^{1}$ (resp. $\mathbb{E}^{-1}$). Let $\mathcal{A}_{zK,t}^{(0)}$ be the piece of $\mathcal{A}_{zK,t}$ which has degree 0 in $\Lambda(T^*(\field{R}\times B))$. \begin{lemma}\label{lem:5.02} There exist $c_1, c_2, c_3, c_4>0$ such that for any $t\geq 1$, $K\in\mathfrak{g}$, $z\in \field{C}$, $|zK|\leq \beta$, $s,s'\in\mathbb{E}$, \begin{align}\label{eq:5.07} \begin{split} {\rm Re}\left\langle \mathcal{A}_{zK,t}^{(0)}s,s\right\rangle_{0}&\geq c_1|s|_{t,1}^2-c_2|s|_{t,0}^2, \\ \left|{\rm Im}\left\langle \mathcal{A}_{zK,t}^{(0)}s,s\right\rangle_{0} \right|&\leq c_3|s|_{t,1}|s|_{t,0}, \\ \left|\left\langle \mathcal{A}_{zK,t}^{(0)}s,s'\right\rangle_{0} \right|&\leq c_4|s|_{t,1}|s'|_{t,1}.
\end{split} \end{align} \end{lemma} \begin{proof} From (\ref{eq:1.22}), (\ref{eq:5.01}) and (\ref{eq:5.02}), we have \begin{align}\label{eq:5.08} \mathcal{A}_{zK,t}^{(0)}=tD^{2}+\frac{z}{4}\left[D,c(K^X)\right] -z^2\frac{|K^X|^2}{16t}+z\mathcal{L}_K. \end{align} So we have \begin{align}\label{eq:5.09} \begin{split} &{\rm Re} \langle \mathcal{A}_{zK,t}^{(0)}s,s\rangle_0=\left\langle\left(tD^{2} +{\rm Im}(z)i\left(\frac{1}{4}\left[D,c(K^X)\right]+\mathcal{L}_K\right) -{\rm Re}(z^2) \frac{|K^X|^2}{16t}\right)s,s\right\rangle_0, \\ &{\rm Im} \langle \mathcal{A}_{zK,t}^{(0)}s,s\rangle_0=\left\langle\left( -{\rm Re}(z)i\left(\frac{1}{4} \left[D,c(K^X)\right]+\mathcal{L}_K\right)-{\rm Im}(z^2) \frac{|K^X|^2}{16t}\right)s,s\right\rangle_0. \end{split} \end{align} From (\ref{eq:5.04}), there exist $c_1', c_2', c_3', c_4'>0$ such that for any $t\geq 1$, $|zK|\leq \beta$, $\epsilon>0$, \begin{align}\label{eq:5.10} \begin{split} \left\langle\left(tD^{2} -{\rm Re}(z^2) \frac{|K^X|^2}{16t}\right)s,s\right\rangle_{0}&\geq c_1'|s|_{t,1}^2-c_2'|s|_{t,0}^2, \\ \left|\left\langle \frac{{\rm Im}(z)}{4}\left[D,c(K^X)\right]s, s\right\rangle_{0}\right|&\leq c_3'|s|_{t,1}|s|_{t,0}\leq c_3' \epsilon|s|_{t,1}^2+\frac{c_3'}{4\epsilon}|s|_{t,0}^2, \\ \left|\left\langle |z|\mathcal{L}_Ks,s\right\rangle_{0} \right|&\leq c_4'|s|_{t,1}|s|_{t,0}\leq c_4' \epsilon|s|_{t,1}^2+\frac{c_4'}{4\epsilon}|s|_{t,0}^2. \end{split} \end{align} Taking $\epsilon=\min\{c_1'/(4c_3'), c_1'/(4c_4')\}$, from (\ref{eq:5.09}) we get the first estimate of (\ref{eq:5.07}). The other estimates in (\ref{eq:5.07}) follow directly from (\ref{eq:5.04}) and (\ref{eq:5.09}). The proof of Lemma \ref{lem:5.02} is completed.
\end{proof} By using Lemma \ref{lem:5.02} and exactly the same argument as in \cite[Theorem 11.27]{BL91}, we get the following lemma. \begin{lemma}\label{lem:5.03} There exist $c, C>0$ such that if $t\geq 1$, $K\in\mathfrak{g}$, $z\in \field{C}$, $|zK|\leq \beta$, and \begin{align}\label{eq:5.11} \lambda\in U_c:=\left\{\lambda\in\field{C}:{\rm Re}(\lambda)\leq\frac{ {\rm Im}(\lambda)^2}{4c^2}-c^2\right\}, \end{align} then the resolvent $(\lambda-\mathcal{A}_{zK,t}^{(0)})^{-1}$ exists, and moreover for any $s\in \mathbb{E}$, \begin{align}\label{eq:5.12} \begin{split} &|(\lambda-\mathcal{A}_{zK,t}^{(0)})^{-1}s|_{t,0}\leq C|s|_{t,0}, \\ &|(\lambda-\mathcal{A}_{zK,t}^{(0)})^{-1}s|_{t,1}\leq C(1+|\lambda|)^2|s|_{t,-1}. \end{split} \end{align} \end{lemma} The following lemma is the analogue of \cite[Theorem 9.15]{Bi97}. \begin{lemma}\label{lem:5.04} There exist $C>0$, $k\in \field{N}$ such that for $t\geq 1$, $K\in\mathfrak{g}$, $z\in \field{C}$, $|zK|\leq \beta$, $\lambda\in U_{c}$, with $c$ as in Lemma \ref{lem:5.03}, the resolvent $(\lambda-\mathcal{A}_{zK,t})^{-1}$ exists, extends to a continuous linear operator from $\Lambda(T^*(\field{R}\times B)) \otimes\mathbb{E}^{-1}$ into $\Lambda(T^*(\field{R}\times B))\otimes\mathbb{E}^1$, and moreover for $s\in \mathbb{E}$, \begin{align}\label{eq:5.16} |(\lambda-\mathcal{A}_{zK,t})^{-1}s|_{t,1}\leq C(1+|\lambda|)^k|s|_{t,-1}.
\end{align} \end{lemma} \begin{proof} From (\ref{eq:1.01}), (\ref{eq:1.22}), (\ref{eq:5.01}) and (\ref{eq:5.02}), \begin{multline}\label{eq:5.13} \mathcal{A}_{zK,t}-\mathcal{A}_{zK,t}^{(0)}=\sqrt{t}\left([D, \nabla^{\mathbb{E},u}]+\frac{1}{2}dt\wedge D \right) +\left(\nabla^{\mathbb{E},u}\right)^2 -\frac{1}{4}[D,c(T^H)] \\ +\frac{1}{8\sqrt{t}}\Big(2[\nabla^{\mathbb{E},u}, z c(K^X) -c(T^H)]-dt\wedge (z c(K^X)-c(T^H))\Big) \\ +\frac{1}{16t}\Big(2z \langle K^X,T^H\rangle+c(T^H)^2\Big). \end{multline} By \cite[Theorem 2.5]{Bi86}, $[D, \nabla^{ \mathbb{E},u}]$ and $\left(\nabla^{\mathbb{E},u}\right)^2$ are first order differential operators along the fiber. From $P[D,\nabla^{\mathbb{E},u}]P=0$, we get \begin{align}\label{eq:5.14} \Big|\langle \sqrt{t}[D, \nabla^{\mathbb{E},u}]s, s'\rangle_0\Big|\leq C(|s|_{t,0}|s'|_{t,1}+|s|_{t,1}|s'|_{t,0}). \end{align} By (\ref{eq:5.13}) and (\ref{eq:5.14}), there exists $C'>0$ such that for any $t\geq 1$, we have \begin{align}\label{eq:5.15} |(\mathcal{A}_{zK,t}-\mathcal{A}_{zK,t}^{(0)})s|_{t,-1}\leq C'|s|_{t,1}. \end{align} Take $\lambda\in U_{c}$. Since $\mathcal{A}_{zK,t}-\mathcal{A}_{zK,t}^{(0)}$ has positive degree in $\Lambda(T^*(\field{R}\times B))$, we have \begin{align}\label{eq:5.17} (\lambda-\mathcal{A}_{zK,t})^{-1}=\sum_{m=0}^{1+\dim B}(\lambda- \mathcal{A}_{zK,t}^{(0)})^{-1}\Big((\mathcal{A}_{zK,t}-\mathcal{A}_{zK,t}^{(0)}) (\lambda-\mathcal{A}_{zK,t}^{(0)})^{-1}\Big)^m. \end{align} Therefore, by (\ref{eq:5.12}), (\ref{eq:5.15}) and (\ref{eq:5.17}), we obtain (\ref{eq:5.16}). The proof of Lemma \ref{lem:5.04} is completed.
\end{proof} \begin{prop}\label{prop:5.05} There exists $C>0$ such that for $t\geq 1$, $K\in\mathfrak{g}$, $z\in \field{C}$, $|zK|\leq \beta$, $s\in \mathbb{E}$, \begin{align}\label{eq:5.18} \left\|\Big(\exp(-\mathcal{A}_{zK,t})-\exp(-\mathbb{B}_{zK,t}^2-z\mathcal{L}_K )\Big)s\right\|_0\leq \frac{C}{\sqrt{t}} \|s\|_0. \end{align} \end{prop} \begin{proof} From (\ref{eq:5.04}) and (\ref{eq:5.06}), we know that for $s\in \mathbb{E}$, \begin{align}\label{eq:5.18b} |P^{\bot}s|_{t,-1}=\sup_{0\neq s'\in \mathbb{E}^1,\, Ps'=0} \frac{|\langle P^{\bot}s,s'\rangle_0|}{|s'|_{t,1}} =\frac{1}{\sqrt{t}}\|P^{\bot}s\|_{-1} \leq \frac{1}{\sqrt{t}}\|P^{\bot}s\|_0. \end{align} Note that from (\ref{eq:2.19}), (\ref{eq:5.01}) and (\ref{eq:5.02}), we have \begin{align}\label{eq:5.19b} \mathcal{A}_{zK,t}=\mathbb{B}_{zK,t}^2+z\mathcal{L}_K+dt\wedge \left(\frac{1}{2} \sqrt{t}D-\frac{1}{8\sqrt{t}}\big(zc(K^X)-c(T^H)\big) \right). \end{align} Thus $\mathbb{B}_{zK,t}^2+z\mathcal{L}_K$ has the same spectrum as $\mathcal{A}_{zK,t}$, and by omitting the $dt$ part, we see that Lemma \ref{lem:5.04} holds for $\mathbb{B}_{zK,t}^2+z\mathcal{L}_K$. Thus from (\ref{eq:5.16}) and (\ref{eq:5.18b}), for $\lambda\in U_c$, we have \begin{multline}\label{eq:5.20b} \left\|(\lambda-\mathcal{A}_{zK,t})^{-1}\sqrt{t}D\Big(\lambda- (\mathbb{B}_{zK,t}^2+z\mathcal{L}_K)\Big)^{-1}s\right\|_0 \\ \leq \frac{C}{\sqrt{t}}(1+|\lambda|)^k\left\| \sqrt{t}DP^{\bot}\Big(\lambda- (\mathbb{B}_{zK,t}^2+z\mathcal{L}_K)\Big)^{-1}s\right\|_0 \\ \leq \frac{C}{\sqrt{t}}(1+|\lambda|)^k\left| \Big(\lambda- (\mathbb{B}_{zK,t}^2+z\mathcal{L}_K)\Big)^{-1}s\right|_{t,1} \\ \leq \frac{C^2}{\sqrt{t}}(1+|\lambda|)^{2k}|s|_{t,-1}\leq \frac{C^2}{\sqrt{t}}(1+|\lambda|)^{2k}\|s\|_0.
\end{multline} Note that \begin{align}\label{eq:5.21b} \exp(-\mathcal{A}_{zK,t})=\frac{1}{2i\pi} \int_{\partial U_c}e^{-\lambda}(\lambda-\mathcal{A}_{zK,t})^{-1}d\lambda, \end{align} and that (\ref{eq:5.21b}) also holds for $\mathbb{B}_{zK,t}^2+z\mathcal{L}_K$. From (\ref{eq:5.19b}), \begin{multline}\label{eq:5.22b} (\lambda-\mathcal{A}_{zK,t})^{-1}-(\lambda- (\mathbb{B}_{zK,t}^2+z\mathcal{L}_K))^{-1} \\ =(\lambda-\mathcal{A}_{zK,t})^{-1}\cdot \left(dt\wedge \left(\frac{1}{2} \sqrt{t}D-\frac{1}{8\sqrt{t}}\big(zc(K^X)-c(T^H)\big) \right) \right)\cdot(\lambda- (\mathbb{B}_{zK,t}^2+z\mathcal{L}_K))^{-1}. \end{multline} Now from (\ref{eq:5.20b})-(\ref{eq:5.22b}), we get (\ref{eq:5.18}). The proof of Proposition \ref{prop:5.05} is completed. \end{proof} Since $B$ is compact, there exists a family $U_1,\cdots,U_m$ of smooth sections of $TX$ such that for any $x\in W$, $U_1(x),\cdots, U_m(x)$ span $T_xX$. Let $\mathcal{D}$ be the family of operators on $\mathbb{E}$ given by \begin{align}\label{eq:5.22} \mathcal{D}=\left\{P^\bot \, \nabla^{\mathcal{E}}_{U_i}P^\bot\right\}. \end{align} From (\ref{eq:5.08}) and (\ref{eq:5.13}), by the same argument as in the proof of \cite[Proposition 11.29]{BL91} (see also, e.g., \cite[Theorem 9.17]{Bi97}, \cite[Lemma 5.17]{Liu17a}), we get the following lemma. \begin{lemma}\label{lem:5.06} For any fixed $k\in \mathbb{N}$, there exists $C_k>0$ such that for $t\geq 1$, $K\in\mathfrak{g}$, $z\in \field{C}$, $|zK|\leq \beta$, $Q_1,\cdots, Q_k\in\mathcal{D}$ and $s,s'\in \mathbb{E}$, we have \begin{align}\label{eq:5.23} |\langle [Q_1, [Q_2,\cdots[Q_k, \mathcal{A}_{zK,t}],\cdots]]s,s' \rangle_{0}|\leq C_k|s|_{t,1}|s'|_{t,1}.
\end{align} \end{lemma} For $k\in \mathbb{N}$, let $\mathcal{D}^k$ be the family of operators $Q$ which can be written in the form \begin{align}\label{eq:5.24} Q=Q_1\cdots Q_k,\quad Q_i\in \mathcal{D}. \end{align} For $k\in \mathbb{N}$, we define the Hilbert norm $\|\cdot \|_{k}'$ by \begin{align}\label{eq:5.25} \|s\|_{k}^{'2}=\sum_{\ell=0}^k\sum_{Q\in \mathcal{D}^\ell}\|Qs\|^2_0. \end{align} Since $P \nabla_{P^{TX}\cdot}^{\mathcal{E}}$ and $\nabla_{P^{TX}\cdot}^{\mathcal{E}} P$ are operators along the fiber with smooth kernels, the Sobolev norm $\|\cdot\|_{k}'$ is equivalent to the Sobolev norm $\|\cdot\|_k$. Thus we also denote by $\mathbb{E}^k$ the Sobolev space with respect to $\|\cdot \|_{k}'$. By using Lemma \ref{lem:5.06}, as in the proof of \cite[Theorem 11.30]{BL91}, we get the following lemma. \begin{lemma}\label{lem:5.07} For any $m\in \mathbb{N}$, there exist $p_m\in\mathbb{N}$ and $C_m>0$ such that for $t\geq 1$, $\lambda\in U_c$, $s\in \mathbb{E}$, \begin{align}\label{eq:5.26} \|(\lambda-\mathcal{A}_{zK,t})^{-1}s\|_{m+1}'\leq C_m(1+|\lambda|)^{p_m}\|s\|_{m}'. \end{align} \end{lemma} Let $\exp(-\mathcal{A}_{zK,t})(x,x')$, $\exp(-\mathbb{B}_{zK,t}^2 -z\mathcal{L}_K)(x,x')$ be the smooth kernels of the operators $\exp(-\mathcal{A}_{zK,t})$, $\exp(-\mathbb{B}_{zK,t}^2-z\mathcal{L}_K)$ with respect to $dv_{X}(x')$. By using Lemma \ref{lem:5.07} and following the same procedure as in the proof of \cite[Theorem 11.31]{BL91}, we get the following proposition. \begin{prop}\label{prop:5.08} For $m\in \field{N}$, there exists $C>0$ such that for $b\in B$, $x,x'\in X_b$, $t\geq 1$, $K\in\mathfrak{g}$, $z\in \field{C}$, $|zK|\leq \beta$, \begin{align}\label{eq:5.27} \sup_{|\alpha|,|\alpha'|\leq m}\left|\frac{ \partial^{|\alpha|+|\alpha'|}} {\partial x^{\alpha}\partial {x}^{'\alpha'}} \exp(-\mathcal{A}_{zK,t})(x,x')\right|\leq C.
\end{align} \end{prop} By omitting the $dt$ part, we know that Proposition \ref{prop:5.08} holds for $\exp(-\mathbb{B}_{zK,t}^2-z\mathcal{L}_K)(x,x')$. From Propositions \ref{prop:5.05}, \ref{prop:5.08} and (\ref{eq:5.19b}), by the arguments in \cite[\S 11 p)]{BL91}, there exist $C>0$, $\delta>0$ such that for $t\geq 1$, $K\in \mathfrak{g}$, $|zK|\leq \beta$, \begin{align}\label{eq:5.28} \left|\exp\left(-\mathcal{A}_{zK,t} \right) (x,x')-\exp(-\mathbb{B}_{zK,t}^2-z\mathcal{L}_K)(x,x')\right| \leq \frac{C}{t^\delta}. \end{align} From (\ref{eq:5.19b}), \begin{align}\label{eq:5.27b} dt\wedge \left\{\widetilde{\tr}\left[g\exp\left(-\mathcal{A}_{zK,t}\right) \right]\right\}^{dt}=\widetilde{\tr}\left[g\Big(\exp\left(-\mathcal{A}_{zK,t}\right) -\exp(-\mathbb{B}_{zK,t}^2-z\mathcal{L}_K)\Big)\right]. \end{align} From (\ref{eq:5.28}) and (\ref{eq:5.27b}), we get Theorem \ref{thm:5.01}. \subsection{A proof of Theorems \ref{thm:2.01} and \ref{thm:2.02} b)}\label{s0502} Section \ref{s0503} is devoted to the proof of the following theorem. \begin{thm}\label{thm:5.09} There exist $\beta>0$, $C>0$, $0<\delta\leq 1$ such that if $K\in \mathfrak{z}(g)$, $z\in \field{C}$, $|zK|\leq \beta$, $0<t\leq 1$, \begin{align}\label{eq:5.32} \left|\psi_{\field{R}\times B} \widetilde{\tr}\left[g\exp\left(-\mathcal{A}_{zK,t}\right) \right]-\int_{X^g}\widehat{\mathrm{A}}_{g,zK}(TX,\nabla^{TX})\, \ch_{g,zK}(\mathcal{E}/\mS,\nabla^{\mathcal{E}})\right|\leq Ct^{\delta}. \end{align} \end{thm} Since $\int_{X^g}\widehat{\mathrm{A}}_{g,zK}(TX,\nabla^{TX})\, \ch_{g,zK}(\mathcal{E}/\mS,\nabla^{\mathcal{E}})$ has no $dt$ term, Theorem \ref{thm:2.02} b) follows from Theorem \ref{thm:5.09}, which we reformulate as follows.
\begin{thm}\label{thm:5.10} There exist $\beta>0$, $C>0$, $\delta>0$, such that if $K\in \mathfrak{z}(g)$, $z\in \mathbb{C}$, $|zK|\leq \beta$, $0<t\leq 1$, \begin{align}\label{eq:5.33} \left|\left\{\widetilde{\tr}\Big[g\exp\big(-\mathcal{A}_{zK,t} \big)\Big]\right\}^{dt}\right| \leq Ct^{\delta}. \end{align} \end{thm} \begin{proof}[Proof of Theorem \ref{thm:2.01}] If we omit the $dt$ term in (\ref{eq:5.32}) and take $z=1$, it follows that \begin{multline}\label{eq:5.34} \left|\psi_{B}\widetilde{\tr}\left[g\exp\left(-\left(\mathbb{B}_t +\frac{c(K^X)}{4\sqrt{t}}\right)^2-\mathcal{L}_{K} \right) \right]\right. \\ \left.-\int_{X^g}\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX})\, \ch_{g,K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}})\right|\leq Ct^{\delta}. \end{multline} From (\ref{eq:5.34}), we get (\ref{eq:2.20}) and (\ref{eq:2.21}). From (\ref{eq:2.26b}), (\ref{eq:2.27}), (\ref{eq:2.31b}) and (\ref{eq:2.32}), we get the other parts of Theorem \ref{thm:2.01}. The proof of Theorem \ref{thm:2.01} is completed. \end{proof} For simplicity, we will assume in the remainder of this section that $n=\dim X$ is even. The functional analytic part of the argument is exactly the same in the even- and odd-dimensional cases. We only explain in Remark \ref{rem:5.22} how to use the argument in the proof of \cite[Theorem 2.10]{BF86II} to compute the local index in the odd-dimensional case. \subsection{Finite propagation speed and localization}\label{s0503} The proof of the following lemma is the same as that of Lemma \ref{lem:5.02}.
\begin{lemma}\label{lem:5.11} Given $\beta>0$, there exist $C_1, C_2, C_2'(\beta), C_3(\beta), C_3'(\beta), C_4, C_5(\beta)>0$ such that if $K\in \mathfrak{g}$, $z\in \mathbb{C}$, $|zK|\leq \beta$, $s, s'\in \mathbb{E}$, $0<t\leq 1$, \begin{align}\label{eq:5.35}\begin{split} {\rm Re}\,\langle t\mathcal{A}_{zK,t}^{(0)}s,s\rangle_0&\geq C_1t^2\|s\|_1^2 -(C_2t^2+C_2'(\beta))\|s\|_0^2,\\ |{\rm Im}\langle t\mathcal{A}_{zK,t}^{(0)}s,s\rangle_0|&\leq C_3(\beta)t\|s\|_1 \|s\|_0+C_3'(\beta)\|s\|_0^2,\\ |\langle t\mathcal{A}_{zK,t}^{(0)}s,s'\rangle_0|&\leq C_4(t\|s\|_1+C_5(\beta)\|s\|_0)(t\|s'\|_1+C_5(\beta)\|s'\|_0). \end{split} \end{align} Moreover, as $\beta\rightarrow 0$, $C_2'(\beta)$, $C_3(\beta)$, $C_3'(\beta)$, $C_5(\beta)\rightarrow 0$. \end{lemma} In the sequel, we take $\beta>0$ and always assume that $K\in \mathfrak{g}$, $|zK|\leq \beta$. For $c>0$, put \begin{align}\label{eq:5.36} \begin{split} &V_c=\left\{\lambda\in\mathbb{C} : {\rm Re}(\lambda)\geq \frac{{\rm Im}(\lambda)^2}{4c^2}-c^2\right\}, \\ &\Gamma_c=\left\{\lambda\in\mathbb{C} : {\rm Re}(\lambda)= \frac{{\rm Im}(\lambda)^2}{4c^2}-c^2\right\}. \end{split} \end{align} Note that $U_c, V_c, \Gamma_c$ are the images of $\{\lambda\in\mathbb{C}: |{\rm Im}(\lambda)|\geq c \}$, $\{\lambda\in\mathbb{C}: |{\rm Im}(\lambda)|\leq c \}$, $\{\lambda\in\mathbb{C}: |{\rm Im}(\lambda)|= c \}$ by the map $\lambda\mapsto \lambda^2$. (Indeed, if $\lambda=u+iv$ with $u,v\in\mathbb{R}$, then ${\rm Re}(\lambda^2)=u^2-v^2$ and ${\rm Im}(\lambda^2)=2uv$, so that $|v|=c$ exactly when ${\rm Re}(\lambda^2)={\rm Im}(\lambda^2)^2/(4c^2)-c^2$.) The following lemma is an analogue of \cite[Theorem 7.12]{BG00}.
\begin{lemma}\label{lem:5.12} There exists $C>0$ such that given $c\in (0,1]$, for $\beta>0$ and $t\in(0,1]$ small enough, if $\lambda\in U_c$, $|zK|\leq \beta$, the resolvent $(\lambda-t\mathcal{A}_{zK,t}^{(0)})^{-1}$ exists, extends to a continuous operator from $\mathbb{E}^{-1}$ into $\mathbb{E}^1$, and moreover, for $s\in \mathbb{E}$, \begin{align}\label{eq:5.37} \begin{split} &\|(\lambda-t\mathcal{A}_{zK,t}^{(0)})^{-1}s\|_0\leq \frac{2}{c^2}\|s\|_0, \\ &\|(\lambda-t\mathcal{A}_{zK,t}^{(0)})^{-1}s\|_{1}\leq \frac{C}{c^2t^4}(1+|\lambda|)^2\|s\|_{-1}. \end{split} \end{align} \end{lemma} \begin{proof} By the same arguments as in \cite[(7.47)-(7.49)]{BG00}, using Lemma \ref{lem:5.11}, if $\lambda\in \mathbb{R}$, $\lambda\leq -(C_2t^2+C_2'(\beta))$, the resolvent $(\lambda-t\mathcal{A}_{zK,t}^{(0)})^{-1}$ exists. Now we take $\lambda=a+ib$, $a,b\in\mathbb{R}$. By (\ref{eq:5.35}), \begin{multline}\label{eq:5.38} |\langle(t\mathcal{A}_{zK,t}^{(0)}-\lambda)s,s\rangle|\geq \sup \Big\{C_1t^2\|s\|_1^2-(C_2t^2+C_2'(\beta)+a)\|s\|_0^2, \\ -C_3(\beta)t\|s\|_1\|s\|_0+(|b|-C_3'(\beta))\|s\|_0^2 \Big\}. \end{multline} Set \begin{align}\label{eq:5.39} C(\lambda, t)=\inf_{u\geq 1}\sup\Big\{C_1(tu)^2-(C_2t^2 +C_2'(\beta)+a),-C_3(\beta)tu-C_3'(\beta)+|b| \Big\}. \end{align} Since $\|s\|_0\leq \|s\|_1$, using (\ref{eq:5.38}), (\ref{eq:5.39}), we get \begin{align}\label{eq:5.40} |\langle(t\mathcal{A}_{zK,t}^{(0)}-\lambda)s,s\rangle|\geq C(\lambda,t)\|s\|_0^2. \end{align} Now we fix $c\in (0,1]$. Suppose that $\lambda\in U_c$, i.e., \begin{align}\label{eq:5.41} a\leq \frac{b^2}{4c^2}-c^2.
\end{align} Assume that $u\in\mathbb{R}$ is such that \begin{align}\label{eq:5.42} |b|-C_3(\beta)tu-C_3'(\beta)\leq c^2. \end{align} Then by (\ref{eq:5.41}) and (\ref{eq:5.42}), \begin{multline}\label{eq:5.43} C_1(tu)^2-(C_2t^2+C_2'(\beta)+a)\geq C_1(tu)^2-\frac{b^2}{4c^2}+c^2-C_2t^2-C_2'(\beta) \\ \geq \left(C_1-\frac{C_3(\beta)^2}{4c^2} \right)(tu)^2-\frac{(c^2+C_3'(\beta))C_3(\beta)}{2c^2}tu+c^2 -\frac{(c^2+C_3'(\beta))^2}{4c^2}-C_2t^2-C_2'(\beta). \end{multline} The discriminant $\Delta$ of the polynomial in the variable $tu$ in the right-hand side of (\ref{eq:5.43}) is given by \begin{multline}\label{eq:5.44} \Delta=-3c^2C_1+2C_1(C_3'(\beta)+2C_2t^2+2C_2'(\beta)) +C_3(\beta)^2 \\ +\frac{1}{c^2}(C_1C_3'(\beta)^2-C_2C_3(\beta)^2t^2 -C_2'(\beta)C_3(\beta)^2). \end{multline} Clearly, for $\beta$, $t$ small enough, \begin{align}\label{eq:5.45} \Delta\leq -2c^2C_1,\quad C_1-\frac{C_3^2(\beta)}{4c^2}>0. \end{align} From (\ref{eq:5.43})-(\ref{eq:5.45}), we get \begin{align}\label{eq:5.46} C_1(tu)^2-(C_2t^2+C_2'(\beta)+a)\geq -\frac{\Delta}{4(C_1-C_3^2(\beta)/(4c^2))}\geq \frac{c^2}{2}. \end{align} Finally, by (\ref{eq:5.39})-(\ref{eq:5.46}), we find that for $\beta>0$, $t\in (0,1]$ small enough, if $\lambda\in U_c$, \begin{align}\label{eq:5.47} C(\lambda,t)\geq \frac{c^2}{2}. \end{align} From (\ref{eq:5.38}), (\ref{eq:5.39}) and (\ref{eq:5.47}), we get the first inequality of (\ref{eq:5.37}). Then, combining with (\ref{eq:5.35}) and the argument in \cite[(7.64)-(7.68)]{BG00}, we get the other part of Lemma \ref{lem:5.12}. The proof of Lemma \ref{lem:5.12} is completed.
\end{proof} As in (\ref{eq:5.15}), from (\ref{eq:5.13}), there exists $C>0$ such that for any $0<t\leq 1$, $s\in\mathbb{E}^1$, \begin{align}\label{eq:5.48} \|(t\mathcal{A}_{zK,t}-t\mathcal{A}_{zK,t}^{(0)})s\|_{-1}\leq C\|s\|_{1}. \end{align} From Lemma \ref{lem:5.12}, following the same process as in the proof of (\ref{eq:5.16}), we get the following lemma. \begin{lemma}\label{lem:5.13} There exist $k,m\in \mathbb{N}$, $C>0$, such that given $c\in(0,1]$, for $\beta>0$ and $t\in(0,1]$ small enough, if $\lambda\in U_c$, $|zK|\leq \beta$, the resolvent $(\lambda-t\mathcal{A}_{zK,t})^{-1}$ exists, extends to a continuous operator from $\Lambda(T^*(\mathbb{R}\times B))\otimes\mathbb{E}^{-1}$ into $\Lambda(T^*(\mathbb{R}\times B))\otimes\mathbb{E}^1$, and moreover, for $s\in \mathbb{E}$, \begin{align}\label{eq:5.49} \|(\lambda-t\mathcal{A}_{zK,t})^{-1}s\|_1\leq\frac{C}{c^kt^k} (1+|\lambda|)^{m}\|s\|_{-1}. \end{align} \end{lemma} \begin{defn}\label{defn:5.14} If $H, H'$ are separable Hilbert spaces and $1\leq p< +\infty$, set \begin{align}\label{eq:5.50} \mathscr{L}_p(H, H') =\{A\in \mathscr{L}(H, H'): \tr[(A^*A)^{p/2}]<+\infty\}. \end{align} If $A\in \mathscr{L}_p(H, H')$, set \begin{align}\label{eq:5.51} \|A\|_{(p)}:=\left(\tr[(A^*A)^{p/2}]\right)^{1/p}. \end{align} Then by \cite[Chapter IX, Proposition 6]{RS75}, $\|\cdot\|_{(p)}$ is a norm on $\mathscr{L}_p(H,H')$. Similarly, if $A\in \mathscr{L}(H,H')$, let $\|A\|_{(\infty)}$ be the usual operator norm of $A$. \end{defn} In the sequel, the norms $\|\cdot\|_{(p)}$, $\|\cdot\|_{(\infty)}$ will always be calculated with respect to the Sobolev space $\mathbb{E}^0$. From Lemma \ref{lem:5.13}, we get the following lemma with the same proof as that of \cite[Theorem 7.13]{BG00}.
\begin{lemma}\label{lem:5.15} Given $q\geq 2\dim X+1$, there exist $C>0$, $k,m\in\mathbb{N}$, such that given $c\in(0,1]$, for $\beta>0$ and $t\in (0,1]$ small enough, if $\lambda\in U_c$, $|zK|\leq \beta$, \begin{align}\label{eq:5.52} \begin{split} &\|(\lambda-t\mathcal{A}_{zK,t})^{-1}\|_{(q)}\leq\frac{C}{c^kt^k} (1+|\lambda|)^m, \\ &\|(\lambda-t\mathcal{A}_{zK,t})^{-q}\|_{(1)}\leq\frac{C^q}{ (c^kt^k)^q}(1+|\lambda|)^{mq}. \end{split} \end{align} \end{lemma} Let $a_X$ be the infimum of the injectivity radii of the fibers $X$. Let $\alpha\in (0,a_X/8]$. The precise value of $\alpha$ will be fixed later. The constants $C>0$, $C'>0,\ldots$ may depend on the choice of $\alpha$. Let $f:\mathbb{R}\rightarrow [0,1]$ be a smooth even function such that \begin{align}\label{eq:5.53} f(u)=\left\{ \begin{aligned} 1\quad &\hbox{for $|u|\leq \alpha/2$;} \\ 0\quad &\hbox{for $|u|\geq \alpha$.} \end{aligned} \right. \end{align} Set \begin{align}\label{eq:5.54} g(u)=1-f(u). \end{align} For $t>0$, $a\in\mathbb{C}$, put \begin{align}\label{eq:5.55} \left\{ \begin{aligned} &F_t(a)=\int_{-\infty}^{+\infty}\exp(\sqrt{2} iua)\exp \left(-\frac{u^2}{2}\right)f(\sqrt{t}u)\frac{du}{\sqrt{2\pi}}, \\ &G_t(a)=\int_{-\infty}^{+\infty}\exp(\sqrt{2} iua)\exp \left(-\frac{u^2}{2}\right)g(\sqrt{t}u)\frac{du}{\sqrt{2\pi}}. \end{aligned} \right. \end{align} Then $F_t(a)$, $G_t(a)$ are even holomorphic functions of $a$ such that \begin{align}\label{eq:5.56} \exp(-a^2)=F_t(a)+G_t(a). \end{align} Moreover, when restricted to $\mathbb{R}$, $F_t$ and $G_t$ both lie in the Schwartz space $\mS(\mathbb{R})$. Put \begin{align}\label{eq:5.57} I_t(a)=G_t(a/\sqrt{t}).
\end{align} Clearly, there exist uniquely defined holomorphic functions $\widetilde{F}_t(a)$, $\widetilde{G}_t(a)$, $\widetilde{I}_t(a)$ such that \begin{align}\label{eq:5.58} F_t(a)=\widetilde{F}_t(a^2),\quad G_t(a)=\widetilde{G}_t(a^2),\quad I_t(a)=\widetilde{I}_t(a^2). \end{align} By (\ref{eq:5.56}) and (\ref{eq:5.57}), we have \begin{align}\label{eq:5.59} \exp(-a)=\widetilde{F}_t(a)+\widetilde{G}_t(a),\quad \widetilde{I}_t(a)=\widetilde{G}_t(a/t). \end{align} From (\ref{eq:5.59}), \begin{align}\label{eq:5.60} \exp(-\mathcal{A}_{zK,t})=\widetilde{F}_t(\mathcal{A}_{zK,t}) +\widetilde{I}_t(t\mathcal{A}_{zK,t}). \end{align} From Lemma \ref{lem:5.15}, the proof of the following lemma is the same as that of \cite[Theorem 7.15]{BG00}. \begin{lemma}\label{lem:5.16} There exist $\beta>0$, $C>0$, $C'>0$ such that if $t\in(0,1]$, $K\in \mathfrak{g}$, $|zK|\leq \beta$, \begin{align}\label{eq:5.61} \|\widetilde{I}_t(t\mathcal{A}_{zK,t})\|_{(1)}\leq C\exp(-C'/t). \end{align} \end{lemma} By (\ref{eq:5.60}) and (\ref{eq:5.61}), we find that to establish (\ref{eq:5.32}), we may as well replace $\exp(-\mathcal{A}_{zK,t})$ by $\widetilde{F}_t(\mathcal{A}_{zK,t})$. Let $\widetilde{F}_t(\mathcal{A}_{zK,t})(x,x')$, $(x,x'\in X_b, b\in B)$, be the smooth kernel associated with the operator $\widetilde{F}_t(\mathcal{A}_{zK,t})$ with respect to $dv_X(x')$. Clearly, the kernel of $g\widetilde{F}_t(\mathcal{A}_{zK,t})$ is given by $g\widetilde{F}_t(\mathcal{A}_{zK,t})(g^{-1}x,x')$. Then, \begin{align}\label{eq:5.62} \tr_s[g\widetilde{F}_t(\mathcal{A}_{zK,t})]=\int_X \tr_s[g\widetilde{F}_t(\mathcal{A}_{zK,t})(g^{-1}x,x)]dv_X(x). \end{align} For $\varepsilon>0$, $x\in X_b$, $b\in B$, let $B^X(x,\varepsilon)$ be the open ball in $X_b$ with centre $x$ and radius $\varepsilon$. Using finite propagation speed for solutions of hyperbolic equations (cf.
\cite[Appendix D.2]{MM07}), we find that given $x\in X_b$, $\widetilde{F}_t(\mathcal{A}_{zK,t})(x,\cdot)$ vanishes on the complement of $B^X(x,\alpha)$ in $X_b$, and depends only on the restriction of the operator $\mathcal{A}_{zK,t}$ to the ball $B^X(x,\alpha)$. Therefore, the proof of (\ref{eq:5.32}) can be made local on $X_b$. Thus, we may and we will assume that $TX_b$ is spin and \begin{align}\label{eq:5.63} \mathcal{E}=\mS_X\otimes E \end{align} over $X_b$, where $\mS_X$ is the spinor bundle of $TX$ and $E$ is a complex vector bundle, and the metric and the connection on $\mathcal{E}$ are induced from those on $TX$ and $E$. By the above, it follows that $g\widetilde{F}_t(\mathcal{A}_{zK,t})( g^{-1}x,x)$, $x\in X_b$, vanishes if $d^{X_b}(g^{-1}x,x)\geq \alpha$. Here $d^{X_b}$ is the distance in $(X_b, g^{TX})$. Now we explain our choice of $\alpha$. Recall that $N_{X^g/X}$ is identified with the orthogonal bundle to $TX^g$ in $TX|_{X^g}$. Given $\varepsilon>0$, let $\mathcal{U}_{\varepsilon}$ be the $\varepsilon$-neighborhood of $X_b^g$ in $N_{X^g/X}$. There exists $\varepsilon_0\in (0,a_X/32]$ such that if $\varepsilon\in (0,16\varepsilon_0]$, the fiberwise exponential map $(x,Z)\in N_{X^g/X}\rightarrow \exp_x^X(Z)$ is a diffeomorphism of $\mathcal{U}_{\varepsilon}$ onto the tubular neighborhood $\mathcal{V}_{\varepsilon}$ of $X^g$ in $X$. In the sequel, we identify $\mathcal{U}_{\varepsilon}$ and $\mathcal{V}_{\varepsilon}$. This identification is $g$-equivariant. We will assume that $\alpha\in (0,\varepsilon_0]$ is small enough so that for any $b\in B$, if $x\in X_b$ and $d^{X_b}(g^{-1}x,x)\leq \alpha$, then $x\in \mathcal{V}_{\varepsilon_0}$. By (\ref{eq:5.61}), (\ref{eq:5.62}) and the above considerations, it follows that for $\beta>0$ small enough, the problem is localized on the $\varepsilon_0$-neighborhood $\mathcal{V}_{\varepsilon_0}$ of $X^g$.
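A minimal sketch, under the assumption that $g$ acts unitarily on $\mathbb{E}^0$, of why the replacement of $\exp(-\mathcal{A}_{zK,t})$ by $\widetilde{F}_t(\mathcal{A}_{zK,t})$ costs nothing in (\ref{eq:5.32}): by (\ref{eq:5.60}) and Lemma \ref{lem:5.16}, \begin{align} \left|\tr_s\big[g\exp(-\mathcal{A}_{zK,t})\big]-\tr_s\big[g\widetilde{F}_t(\mathcal{A}_{zK,t})\big]\right| =\left|\tr_s\big[g\widetilde{I}_t(t\mathcal{A}_{zK,t})\big]\right| \leq \|\widetilde{I}_t(t\mathcal{A}_{zK,t})\|_{(1)} \leq C\exp(-C'/t), \end{align} and since $\exp(-C'/t)=\mathcal{O}(t^{\delta})$ as $t\rightarrow 0$ for every $\delta>0$, this error is absorbed in the right-hand side of (\ref{eq:5.32}).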
As in (\ref{eq:3.09}), let $k(x,Z)$ be the smooth function on $\mathcal{U}_{\varepsilon_0}$ such that \begin{align}\label{eq:5.64} dv_X(x,Z)=k(x,Z)dv_{X^g}(x)dv_{N_{X^g/X}}(Z). \end{align} In particular, $k|_{X^g}=1$. For $\omega\in \Lambda(T^*\mathbb{R})\widehat{\otimes}\Lambda(T^*W^g)$, via (\ref{eq:1.04}) and (\ref{eq:1.05}), we will write $$\omega=\sum_{1\leq i_1<\cdots<i_p\leq \ell} \omega_{i_1,\cdots, i_p} \wedge e^{i_1}\wedge\cdots\wedge e^{i_p},\quad \text{for}\ \omega_{i_1,\cdots, i_p}\in \Lambda(T^*\mathbb{R}) \widehat{\otimes}\pi^*\Lambda(T^*B).$$ We denote by \begin{align}\label{eq:5.65} \omega^{\max}:=\omega_{1,\cdots,\ell}\in \Lambda(T^*\mathbb{R}) \widehat{\otimes}\pi^*\Lambda(T^*B). \end{align} Note that if the fiber is odd dimensional, our sign convention here is compatible with that in (\ref{eq:0.11}). \begin{thm}\label{thm:5.17} There exist $\beta>0$, $C>0$, $\delta\in (0,1]$ such that if $K\in \mathfrak{z}(g)$, $z\in \mathbb{C}$, $|zK|\leq \beta$, $t\in (0,1]$, $x\in X^g$, \begin{multline}\label{eq:5.66} \left|t^{\frac{1}{2}\dim N_{X^g/X}}\int_{Z\in N_{X^g/X},|Z|\leq \varepsilon_0/\sqrt{t}}\psi_{\mathbb{R}\times B}\tr_s[g\widetilde{F}_t(\mathcal{A}_{zK,t})( g^{-1}(x,\sqrt{t}Z),(x,\sqrt{t}Z))]\right. \\ \cdot k(x,\sqrt{t}Z)dv_{N_{X^g/X}}(Z)- \left\{\widehat{\mathrm{A}}_{g,zK} (TX,\nabla^{TX})\ch_{g,zK}(E, \nabla^E) \right\}^{\max} \Big|\leq Ct^{\delta}. \end{multline} \end{thm} \begin{proof} Sections \ref{s0504}-\ref{s0510} are devoted to the proof of this theorem.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:5.09}] By (\ref{eq:5.62}) and the finite propagation speed argument above, we have \begin{multline}\label{eq:5.67} \int_X\tr_s[g\widetilde{F}_t(\mathcal{A}_{zK,t})(g^{-1}x,x)]dv_X(x) =\int_{\mathcal{U}_{\varepsilon_0}}\tr_s[g\widetilde{F}_t(\mathcal{A}_{zK,t})(g^{-1}x,x)] dv_X(x) \\ =\int_{(x,Z)\in\mathcal{U}_{\varepsilon_0/\sqrt{t}}}t^{\frac{1}{2}\dim N_{X^g/X}}\tr_s[g\widetilde{F}_t(\mathcal{A}_{zK,t}) (g^{-1}(x,\sqrt{t}Z),(x,\sqrt{t}Z))] \\ \times k(x,\sqrt{t}Z)dv_{X^g}(x)dv_{N_{X^g/X}}(Z). \end{multline} By Lemma \ref{lem:5.16}, Theorem \ref{thm:5.17} and (\ref{eq:5.67}), there exist $\beta>0$, $\delta\in (0,1]$ such that for $K\in \mathfrak{z}(g)$, $|zK|\leq \beta$, $t\in (0,1]$, \begin{align}\label{eq:5.68} \left|\psi_{\mathbb{R}\times B}\tr_s\left[g\exp\left(-\mathcal{A}_{zK,t} \right) \right]-\int_{X^g}\widehat{\mathrm{A}}_{g,zK}(TX, \nabla^{TX})\,\ch_{g,zK}(\mathcal{E}/\mS,\nabla^{\mathcal{E}})\right| \leq Ct^{\delta}. \end{align} So we obtain Theorem \ref{thm:5.09}. \end{proof} \subsection{A Lichnerowicz formula}\label{s0504} Let $e_1,\cdots,e_n$ be a locally defined smooth orthonormal frame of $TX$. Let $(F,\nabla^F)$ be a vector bundle with connection on $X$. We use the notation \begin{align}\label{eq:5.69} (\nabla_{e_i}^F)^2=\sum_{i=1}^n(\nabla_{e_i}^F)^2 -\nabla^F_{\sum_{i=1}^n\nabla^{TX}_{e_i}e_i}. \end{align} Let $H$ be the scalar curvature of $X$. The following result is a combination of \cite[Theorem 1.6]{Bi85}, \cite[Proposition 7.18]{BG00} (for the term involving $K^X$, in the case of the base $B=\mathrm{pt}$), \cite[Theorem 3.5]{Bi86} (for Bismut's Lichnerowicz formula for the Bismut superconnection) and \cite[Theorem 2.10]{BGSII} (for the term involving $dt$).
\begin{prop}\label{prop:5.18} The following identity holds: \begin{multline}\label{eq:5.70} \mathcal{A}_{zK,t}=-t\left(\nabla_{e_i}^{\mathcal{E}}+\frac{1}{2 \sqrt{t}}\langle S(e_i)e_j, f_p^H\rangle c(e_j)f^p\wedge\right. \\ \left.+\frac{1}{4t}\langle S(e_i)f_p^H, f_q^H\rangle f^p\wedge f^q\wedge-\frac{z\langle K^X,e_i\rangle}{4t}-dt\wedge\frac{ c(e_i)}{4\sqrt{t}}\right)^2 \\ +\frac{t}{4}H+\frac{t}{2}R^{\mathcal{E}/\mS}(e_i,e_j)c(e_i)c(e_j) +\sqrt{t}R^{\mathcal{E}/\mS}(e_i,f_p^H)c(e_i)f^p\wedge \\ +\frac{1}{2}R^{\mathcal{E}/\mS}(f_p^H,f_q^H)f^p\wedge f^q\wedge-zm^{\mathcal{E}/\mS}(K). \end{multline} \end{prop} \begin{proof} From Bismut's Lichnerowicz formula (cf. \cite[Theorem 3.5]{Bi86}), \begin{multline}\label{eq:5.70a} \mathbb{B}_t^2=-t\left(\nabla_{e_i}^{\mathcal{E}}+\frac{1}{2 \sqrt{t}}\langle S(e_i)e_j, f_p^H\rangle c(e_j)f^p\wedge +\frac{1}{4t}\langle S(e_i)f_p^H, f_q^H\rangle f^p\wedge f^q\wedge\right)^2 \\ +\frac{t}{4}H+\frac{t}{2}R^{\mathcal{E}/\mS}(e_i,e_j)c(e_i)c(e_j) +\sqrt{t}R^{\mathcal{E}/\mS}(e_i,f_p^H)c(e_i)f^p\wedge +\frac{1}{2}R^{\mathcal{E}/\mS}(f_p^H,f_q^H)f^p\wedge f^q\wedge. \end{multline} From (\ref{eq:1.17}), (\ref{eq:2.03}) and (\ref{eq:2.06}), \begin{align} \frac{1}{4}[D,c(K^X)] =\frac{1}{4}c(e_k)c\left(\nabla_{e_k}^{TX}K^X\right) -\frac{1}{2}\langle K^X,e_j\rangle\nabla_{e_j}^{\mathcal{E}} =m^{\mS}(K)-\frac{1}{2}\nabla_{K^X}^{\mathcal{E}}. \end{align} Since the $G$-action preserves the splitting (\ref{eq:1.04}), $\langle[K^X,f_p^H],e_j\rangle=0$. Thus from (\ref{eq:1.09}), (\ref{eq:1.17}) and (\ref{eq:1.21}), \begin{multline} [\nabla^{\mathbb{E},u},c(K^X)] =f^p\wedge c\left(\nabla_{f_p^H}^{TX}K^{X}\right) =-\langle \nabla_{f_p^H}^{TW,L}K^X,e_j\rangle c(e_j)f^p\wedge \\ =\langle \nabla_{K^X}^{TW,L}e_j,f_p^H\rangle c(e_j)f^p\wedge =\langle S(K^X)e_j,f_p^H\rangle c(e_j)f^p\wedge.
\end{multline} From (\ref{eq:1.10})-(\ref{eq:1.12}), we get \begin{align}\label{eq:5.72b} S(e_j)e_k=S(e_k)e_j,\quad \langle S(e_j)f_p^H,f_q^H\rangle =\frac{1}{2}\langle T(f_p^H, f_q^H),e_j\rangle. \end{align} Thus from (\ref{eq:1.21}), \begin{multline}\label{eq:5.70b} [c(T^H),c(K^X)] =\langle S(e_j)f_p^H,f_q^H\rangle f^p\wedge f^q\wedge [c(e_j),c(K^X)] \\ =-2\langle S(K^X)f_p^H,f_q^H\rangle f^p\wedge f^q\wedge. \end{multline} Thus from (\ref{eq:5.01}), (\ref{eq:5.02}) and (\ref{eq:5.70a})-(\ref{eq:5.70b}), we get (\ref{eq:5.70}) without the $dt$ term. By comparing directly the coefficients of $dt$ on both sides of (\ref{eq:5.70}) as in \cite[Theorem 2.10]{BGSII}, we get (\ref{eq:5.70}). The proof of Proposition \ref{prop:5.18} is completed. \end{proof} \subsection{A local coordinate system near $X^g$}\label{s0505} Take $x\in W^g$. Then the fiberwise exponential map $Z\in T_{x}X\rightarrow\exp_{x}^X(Z)\in X$ identifies $B^{T_{x}X}(0,16\varepsilon_0)$ with $B^{X}(x,16\varepsilon_0)$. With this identification, there exists a smooth function $k_{x}'(Z)$, $Z\in B^{T_{x}X}(0,a_X/2)$, such that \begin{align}\label{eq:5.72} dv_X(Z)=k_{x}'(Z)dv_{TX}(Z),\quad \text{with}\ k_{x}'(0)=1. \end{align} We may and we will assume that $\varepsilon_0$ is small enough so that if $Z\in T_{x}X$, $|Z|\leq 4\varepsilon_0$, \begin{align}\label{eq:5.73} \frac{1}{2}g_{x}^{TX}\leq g_Z^{TX}\leq \frac{3}{2}g_{x}^{TX}. \end{align} From now on, we assume that $K\in \mathfrak{z}(g)$. Recall that $\theta_K$ is the one-form dual to $K^X$ defined in (\ref{eq:3.01}).
\begin{defn}\label{defn:5.19} Let $\,^1\nabla^{\mathcal{E},t}$ be the connection on $\Lambda(T^*\mathbb{R}) \widehat{\otimes} \pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E}$ along the fibers, \begin{multline}\label{eq:5.74} \,^1\nabla^{\mathcal{E},t}_{\cdot}:=\nabla^{\mathcal{E}}_{\cdot}+\frac{1}{ 2\sqrt{t}}\langle S(\cdot)e_j, f_p^H\rangle c(e_j)f^p\wedge +\frac{1}{4t}\langle S(\cdot)f_p^H, f_q^H\rangle f^p\wedge f^q\wedge \\ -\frac{z\theta_K(\cdot)}{4t} -dt\wedge\frac{c(\cdot)}{4\sqrt{t}}. \end{multline} \end{defn} In the sequel, we will trivialize $\Lambda(T^*\mathbb{R}) \widehat{\otimes} \pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E}$ by parallel transport along $u\in [0,1]\rightarrow uZ$ with respect to the connection $\,^1\nabla^{\mathcal{E},t}$. Observe that the above connection is $g$-invariant. From (\ref{eq:1.10}) and (\ref{eq:1.13}), we have $S(e_i)e_j=S(e_j)e_i$. Let $L$ be a trivial line bundle over $W$. We equip $L$ with the connection \begin{align}\label{eq:5.75} \nabla^L =d-\frac{z\theta_K}{4}. \end{align} Thus \begin{align}\label{eq:5.76} R^L=(\nabla^L)^2=-\frac{z\,d\theta_K}{4}. \end{align} From (\ref{eq:1.29}), (\ref{eq:3.03}), (\ref{eq:3.05}), (\ref{eq:5.72b}), (\ref{eq:5.75}) and (\ref{eq:5.76}), we can calculate that \begin{multline}\label{eq:5.77} \left(\,^1\nabla^{\mathcal{E},1}\right)^2(e_i,e_j)=\frac{1}{4}\langle R^{TX}(e_k,e_l)e_i,e_j\rangle c(e_k)c(e_l)+\frac{1}{2}\langle R^{TX}(e_k,f_p^H)e_i,e_j\rangle c(e_k)f^p\wedge \\ +\frac{1}{4}\langle R^{TX}(f_p^H,f_q^H)e_i,e_j\rangle f^p\wedge f^q\wedge+R^{E}(e_i,e_j)-\frac{z}{2}\langle m^{TX}(K)e_i, e_j\rangle. \end{multline} Note that when $K=0$, (\ref{eq:5.77}) is \cite[Theorem 4.14]{Bi86}, \cite[Theorem 10.11]{BGV} or \cite[Theorem 11.8]{Bi97}.
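The curvature identity (\ref{eq:5.76}) is the standard computation for a connection of the form $d+\alpha$ on a trivial line bundle: since $\theta_K$ is a one-form, $\theta_K\wedge\theta_K=0$, so for a section $s$ of $L$, \begin{align} (\nabla^L)^2s=\left(d-\frac{z\theta_K}{4}\right)^2s =-\frac{z}{4}\,d(\theta_K\, s)-\frac{z}{4}\,\theta_K\wedge ds =-\frac{z\,d\theta_K}{4}\,s, \end{align} where we used $d(\theta_K\, s)=(d\theta_K)\,s-\theta_K\wedge ds$.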
Note that from (\ref{eq:5.74}), $\left(\,^1\nabla^{\mathcal{E},t}\right)^2$ can be obtained from $\left(\,^1\nabla^{\mathcal{E},1}\right)^2$ by replacing $f^p\wedge$, $f^q\wedge$ and $K$ by $\frac{f^p}{\sqrt{t}}\wedge$, $\frac{f^q}{\sqrt{t}}\wedge$ and $\frac{K}{t}$. Let $A$, $A'$ be smooth sections of $TX$. By (\ref{eq:5.74}), \begin{align}\label{eq:5.78a} \,^1\nabla_A^{\mathcal{E},1}c(A')=c(\nabla_A^{TX}A')+ \langle S(A)A',f_p^H\rangle f^p\wedge +\frac{1}{2}\langle A, A'\rangle dt. \end{align} Let $c^1(TX)\simeq TX$ be the set of elements of length $1$ in $c(TX)$. It follows from (\ref{eq:5.78a}) that parallel transport along the fiber $X$ with respect to $\,^1\nabla^{\mathcal{E},1}$ maps $c^1(TX)$ into $c^1(TX)\oplus T^*B\oplus T^*\mathbb{R}$, while leaving $\Lambda(T^*B)\widehat{\otimes} \Lambda(T^*\mathbb{R})$ invariant. \subsection{Replacing $X$ by $T_{x}X$}\label{s0506} Let $\gamma(u)$ be a smooth even function from $\mathbb{R}$ into $[0,1]$ such that \begin{align}\label{eq:5.79} \gamma(u)=\left\{ \begin{aligned} 1\quad &\hbox{if $|u|\leq 1/2$;} \\ 0\quad &\hbox{if $|u|\geq 1$.} \end{aligned} \right. \end{align} If $Z\in T_{x}X$, put \begin{align}\label{eq:5.80} \rho(Z)=\gamma\left(\frac{|Z|}{4\varepsilon_0}\right). \end{align} Then \begin{align}\label{eq:5.81} \rho(Z)=\left\{ \begin{aligned} 1\quad&\hbox{if $|Z|\leq 2\varepsilon_0$;} \\ 0\quad&\hbox{if $|Z|\geq 4\varepsilon_0$.} \end{aligned} \right. \end{align} For $x\in W^g$, let $\mathbf{H}_x$ be the vector space of smooth sections of $\Lambda(T^*\mathbb{R})\widehat{\otimes} \pi^*(\Lambda (T^*B))\widehat{\otimes} \mathcal{E}_x$ over $T_xX$. Let $\Delta^{TX}$ be the (negative) standard Laplacian on the fibers of $TX$.
Let $L_{x,zK}^{1,t}$ be the differential operator acting on $\mathbf{H}_x$, \begin{align}\label{eq:5.82} L_{x,zK}^{1,t}=(1-\rho^2(Z))(-t\Delta^{TX})+\rho^2(Z)\mathcal{A}_{zK,t}. \end{align} Let $\widetilde{F}_t(L_{x,zK}^{1,t})(Z,Z')$ be the smooth kernel of $\widetilde{F}_t(L_{x,zK}^{1,t})$ with respect to $dv_{TX}(Z')$. Using the finite propagation speed for solutions of hyperbolic equations \cite[Appendix D.2]{MM07} and (\ref{eq:5.72}), we find that if $Z\in N_{X^g/X,x}$, $|Z|\leq \varepsilon_0$, then \begin{align}\label{eq:5.83} \widetilde{F}_t(\mathcal{A}_{zK,t})(g^{-1}Z,Z)k_x'(Z) =\widetilde{F}_t(L_{x,zK}^{1,t})(g^{-1}Z,Z). \end{align} Thus, in the proof of Theorem \ref{thm:5.17}, we can replace $\mathcal{A}_{zK,t}$ by $L_{x,zK}^{1,t}$. \subsection{The Getzler rescaling}\label{s0507} Let $\mathrm{Op}_x$ be the set of scalar differential operators on $T_xX$ acting on $\mathbf{H}_x$. Then by (\ref{eq:5.63}), \begin{align}\label{eq:5.84} L_{x,zK}^{1,t}\in (\Lambda(T^*\mathbb{R})\widehat{\otimes} \pi^*(\Lambda (T^*B))\widehat{\otimes} c(TX) \otimes\mathrm{End}( E))_x\otimes \mathrm{Op}_x. \end{align} For $t>0$, let $H_t:\mathbf{H}_x\rightarrow \mathbf{H}_x$ be the linear map \begin{align}\label{eq:5.85} H_th(Z)=h(Z/\sqrt{t}). \end{align} Let $L_{x,zK}^{2,t}$ be the differential operator acting on $\mathbf{H}_x$ defined by \begin{align}\label{eq:5.86} L_{x,zK}^{2,t}=H_t^{-1}L_{x,zK}^{1,t}H_t. \end{align} By (\ref{eq:5.84}), \begin{align}\label{eq:5.87} L_{x,zK}^{2,t}\in (\Lambda(T^*\mathbb{R})\widehat{\otimes} \pi^*(\Lambda (T^*B))\widehat{\otimes} c(TX) \otimes\mathrm{End}( E))_x\otimes \mathrm{Op}_x. \end{align} Recall that $\dim X^g=\ell$ and $\dim N_{X^g/X}=n-\ell$.
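As a consistency check on the conjugation (\ref{eq:5.86}) (a sketch, covering only the flat Laplacian term of (\ref{eq:5.82})): if $h\in\mathbf{H}_x$, then $\Delta^{TX}(H_th)(Z)=t^{-1}(\Delta^{TX}h)(Z/\sqrt{t})$, so that \begin{align} H_t^{-1}\Big((1-\rho^2(Z))(-t\Delta^{TX})\Big)H_t =(1-\rho^2(\sqrt{t}Z))(-\Delta^{TX}). \end{align} In particular, the factor $t$ in (\ref{eq:5.82}) is exactly compensated by the rescaling $H_t$, and the remaining $t$-dependence of the conjugated operator enters only through the rescaled variable $\sqrt{t}Z$.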
Let $(e_1,\cdots,e_{\ell})$ be an oriented orthonormal basis of $T_xX^g$, and let $(e_{\ell+1},\cdots,e_{n})$ be an oriented orthonormal basis of $N_{X^g/X}$, so that $(e_1,\cdots,e_n)$ is an oriented orthonormal basis of $T_xX$. We denote with a superscript the corresponding dual basis. For $1\leq j\leq \ell$, the operators $e^j\wedge$ and $i_{e_j}$ act as odd operators on $\Lambda(T^*X^g)$. \begin{defn}\label{defn:5.20} For $t>0$, put \begin{align}\label{eq:5.88} c_t(e_j)=\frac{1}{\sqrt{t}}e^j\wedge-\sqrt{t}i_{e_j}, \quad 1\leq j\leq \ell. \end{align} \end{defn} Let $L_{x,zK}^{3,t}$ be the differential operator acting on $\mathbf{H}_x$ obtained from $L_{x,zK}^{2,t}$ by replacing $c(e_j)$ by $c_t(e_j)$ for $1\leq j\leq \ell$. For $A\in (\Lambda(T^*\mathbb{R})\widehat{\otimes} \pi^*(\Lambda (T^*B))\widehat{\otimes} c(TX) \otimes\mathrm{End}( E))_x\otimes \mathrm{Op}_x$, we denote by $[A]_t^{(3)}$ the differential operator obtained from $A$ by using the Getzler rescaling of the Clifford variables given in Definition \ref{defn:5.20}. Let $\tau e_j(Z)$ be the parallel transport of $e_j$ along the curve $u\in [0,1]\rightarrow uZ$ with respect to the connection $\nabla^{TX}$. Let $\mathcal{O}_1(|Z|^2)$ be any object in $\Lambda(T^*\mathbb{R})\widehat{\otimes} \pi^*(\Lambda (T^*B))\widehat{\otimes} c(TX)$ which is of length at most $1$ and is also $\mathcal{O}(|Z|^2)$. By (\ref{eq:5.78a}), in the trivialization associated with $\,^1\nabla^{\mathcal{E},t}$, \begin{align}\label{eq:5.89a} c(\tau e_j(Z))=c(e_j)+\frac{1}{\sqrt{t}}\langle S(Z)e_j,f_p^H\rangle f^p\wedge +\frac{1}{2\sqrt{t}}\langle Z, e_j\rangle dt\wedge +\mathcal{O}_1(t^{-1/2}|Z|^2).
\end{align} From (\ref{eq:5.89a}), for $1\leq j\leq \ell$, \begin{align}\label{eq:5.89b} \left[\sqrt{t}c(\tau e_j(\sqrt{t}Z)) \right]_t^{(3)} =e^j\wedge +\mathcal{O}(\sqrt{t}|Z|); \end{align} for $\ell+1\leq j\leq n$, \begin{align}\label{eq:5.89c} \left[c(\tau e_j(\sqrt{t}Z)) \right]_t^{(3)} =c(e_j)+\langle S(Z)e_j,f_p^H\rangle f^p\wedge +\frac{1}{2}\langle Z, e_j\rangle dt\wedge +\mathcal{O}(\sqrt{t}|Z|^2). \end{align} From \cite[Proposition 1.18]{BGV}, (\ref{eq:5.70}), (\ref{eq:5.77}), (\ref{eq:5.82}), (\ref{eq:5.86}) and (\ref{eq:5.88}), we calculate that \begin{multline}\label{eq:5.89} L_{x,zK}^{3,t}=(1-\rho^2(\sqrt{t}Z))(-\Delta^{TX}) +\rho^2(\sqrt{t}Z)\cdot\left\{-g^{ij}(\sqrt{t}Z) \left(\nabla_{e_i}'\nabla_{e_j}' -\Gamma_{ij}^k(\sqrt{t}Z)\sqrt{t}\nabla_{e_k}' \right)\right. \\ \left.+\frac{t}{4}H_{\sqrt{t}Z}+\frac{t}{2} R^{E}_{\sqrt{t}Z}(\tau e_i,\tau e_j)\left[ c(\tau e_i(\sqrt{t}Z))c(\tau e_j(\sqrt{t}Z))\right]_t^{(3)}\right. \\ +\sqrt{t}R^{E}_{\sqrt{t}Z}(\tau e_j,f_p^H)\left[ c(\tau e_j(\sqrt{t}Z))\right]_t^{(3)} f^p\wedge \left.+\frac{1}{2}R^{E}_{\sqrt{t}Z} (f_p^H, f_q^H)f^p\wedge f^q\wedge -m_{\sqrt{t}Z}^{E}(zK)\right\}, \end{multline} where $\left(g^{ij}(Z)\right)$ is the inverse matrix of $\left(g_{ij}(Z)=\langle e_i, e_j\rangle_Z \right)$, $\left(\nabla^{TX}_{e_i}e_j\right)_Z=\Gamma_{ij}^k(Z)e_k$ and \begin{multline}\label{eq:5.90} \nabla_{e_i}'=\nabla_{\tau e_i(\sqrt{t}Z)}+\frac{t}{8} \Big(\langle R^{TX}_x(e_k,e_l)Z,e_i\rangle+\mathcal{O}(\sqrt{t}|Z|^2)\Big) \left[ c(\tau e_k(\sqrt{t}Z))c(\tau e_l(\sqrt{t}Z))\right]_t^{(3)} \\ +\frac{\sqrt{t}}{4}\Big(\langle R^{TX}_x(e_j,f_p^H)Z, e_i\rangle+\mathcal{O}(\sqrt{t}|Z|^2)\Big)\left[ c(\tau e_j(\sqrt{t}Z))\right]_t^{(3)}f^p\wedge \\ +\frac{1}{8}\Big(\langle R^{TX}_x(f_p^H,f_q^H)Z,e_i\rangle +\mathcal{O}(\sqrt{t}|Z|^2)\Big) f^p\wedge f^q\wedge + \frac{t}{2}\left( R_x^E(Z, e_i)+\mathcal{O}(\sqrt{t}|Z|^2)\right) \\
-\frac{1}{4}\langle m^{TX}_x(zK)Z,e_i\rangle +\frac{1}{\sqrt{t}}h_i(zK,\sqrt{t}Z). \end{multline} Here $\nabla_U$ is the ordinary differentiation operator on $TX$ in the direction $U$, and $h_i(zK, Z)$ is a function depending linearly on $zK$ such that $h_i(zK,Z)=\mathcal{O}(|Z|^2)$ for $|zK|<\beta$. Let $\widetilde{F}_t(L_{x,zK}^{3,t})(Z,Z')$ be the smooth kernel associated with $\widetilde{F}_t(L_{x,zK}^{3,t})$ with respect to $dv_{TX}(Z')$. By the finite propagation speed argument explained before (\ref{eq:5.63}), we may also assume that $TX^g$ and $N_{X^g/X}$ are spin. Let $\mS_{X^g}$ and $\mS_N$ be the spinor bundles of $TX^g$ and $N_{X^g/X}$ respectively. Then $\mS_X=\mS_{X^g}\widehat{\otimes} \mS_N$. Recall that $g$ acts on $(\mS_N\otimes E)_x$. We may write $\widetilde{F}_t(L_{x,zK}^{3,t})(Z,Z')$ in the form \begin{multline}\label{eq:5.91} \widetilde{F}_t(L_{x,zK}^{3,t})(Z,Z')=\sum \widetilde{F}^{j_1 \cdots j_q}_{t,i_1\cdots i_p}(Z,Z') e^{i_1}\wedge\cdots \wedge e^{i_p}i_{e_{j_1}}\cdots i_{e_{j_q}}, \\ 1\leq i_1<\cdots<i_p\leq \ell,\ 1\leq j_1< \cdots<j_q\leq \ell, \end{multline} with $\widetilde{F}^{j_1\cdots j_q}_{t,i_1\cdots i_p}(Z,Z')\in \Lambda(T^*\mathbb{R})\widehat{\otimes}\pi^*\Lambda(T^*B) \otimes \big(c(N_{X^g/X})\otimes \mathrm{End}(E)\big)_x$. As explained in Section \ref{s0103}, $\ell=\dim X^g$ has the same parity as $n=\dim X$. As in (\ref{eq:5.65}), put \begin{align}\label{eq:5.92} [\widetilde{F}_t(L_{x,zK}^{3,t})(Z,Z') ]^{\max}= \widetilde{F}_{t,1\cdots \ell} (Z,Z'). \end{align} In other words, $\widetilde{F}_{t,1\cdots \ell}(Z,Z')$ is the coefficient of $e^1\wedge\cdots\wedge e^{\ell}$ in (\ref{eq:5.91}).
\begin{prop}\label{prop:5.21} If $Z\in T_xX$, $|Z|\leq \varepsilon_0/\sqrt{t}$, then \begin{multline}\label{eq:5.93} t^{(n-\ell)/2}\tr_s[g\widetilde{F}_t(\mathcal{A}_{zK,t}) (g^{-1}(\sqrt{t}Z), \sqrt{t}Z)]k_x'(\sqrt{t}Z) \\ =(-i)^{\ell/2}2^{\ell/2}\tr_s^{\mS_N\otimes E}[g \widetilde{F}_t(L_{x,zK}^{3,t})(g^{-1}Z,Z) ]^{\max}. \end{multline} \end{prop} \begin{proof} As $K\in \mathfrak{z}(g)$, $\,^1\nabla^{\mathcal{E},t}$ is $g$-equivariant. Thus the trivialization of $\Lambda(T^*\field{R})\widehat{\otimes} \pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E}$ is $g$-equivariant, and the action of $g$ on $\big(\Lambda(T^*\field{R})\widehat{\otimes} \pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E} \big)_{g^{-1}Z}$ coincides with the action of $g$ on $\big(\Lambda(T^*\field{R})\widehat{\otimes} \pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E} \big)_x$, which is an element of $\big(c(N_{X^g/X})\otimes \mathrm{End}(E)\big)_x$. Now we get Proposition \ref{prop:5.21} by the same proof as that of \cite[Proposition 7.25]{BG00}. \end{proof} \begin{rem}\label{rem:5.22} As in \cite[(1.6) and (1.7)]{BF86II}, if $n=\dim X$ is even, \begin{align}\label{eq:5.94} \begin{split} &\tr_s^{\mS_X}[c(e_{i_1})\cdots c(e_{i_p})]=0, \quad \text{for}\ p<n,\ 1\leq i_1<\cdots<i_p\leq n, \\ &\tr_s^{\mS_X}[c(e_1)\cdots c(e_n)]=(-2i)^{n/2}; \end{split} \end{align} if $n=\dim X$ is odd, \begin{align}\label{eq:5.95} \tr^{\mS_X}[1]=2^{(n-1)/2},\quad \tr^{\mS_X}[c(e_1)\cdots c(e_n)] =(-i)^{(n+1)/2}2^{(n-1)/2}, \end{align} and the trace of any other monomial is zero. If $n=\dim X$ is odd, since (\ref{eq:5.95}) holds and the total degree of $\widetilde{F}_t(\mathcal{A}_{zK,t})$ is even, we need only take the trace of the odd degree Clifford part.
In this case, (\ref{eq:5.66}) is replaced by \begin{multline}\label{eq:5.96} \left|t^{(n-\ell)/2}\int_{Z\in N,|Z|\leq\frac{ \varepsilon_0}{\sqrt{t}}}\psi_{\field{R}\times B}\tr^{\mathrm{odd}} [g\widetilde{F}_t(\mathcal{A}_{zK,t})(g^{-1}(x,\sqrt{t}Z),(x,\sqrt{t}Z))] \right. \\ \left.\times k(x,\sqrt{t}Z)dv_{N_{X^g/X}}(Z)-\left\{\widehat{ \mathrm{A}}_{g,zK}(TX,\nabla^{TX})\ch_{g,zK}(E, \nabla^E) \right\}^{\max}\right|\leq Ct^{\delta}. \end{multline} In particular, since $n-\ell$ is even, \begin{multline}\label{eq:5.97} \tr^{\mS_X}[c(e_1)\cdots c(e_n)] \\ =(-i)^{(\ell+1)/2}2^{(\ell-1)/2} t^{\ell/2}\left\{\tr_s^{\mS_N}[c_t(e_1)\cdots c_t(e_{\ell}) c(e_{\ell+1})\cdots c(e_n)] \right\}^{\max}, \end{multline} and the analogue of (\ref{eq:5.93}) is \begin{multline}\label{eq:5.98} t^{(n-\ell)/2}\tr^{\mathrm{odd}}[g\widetilde{F}_t(\mathcal{A}_{zK,t}) (g^{-1}(\sqrt{t}Z), \sqrt{t}Z)]k_x'(\sqrt{t}Z) \\ =(-i)^{(\ell+1)/2}2^{(\ell-1)/2} \{\tr_s^{\mS_N\otimes E}[g \widetilde{F}_t(L_{x,zK}^{3,t})(g^{-1}Z,Z) ]\}^{\max}. \end{multline} \end{rem} Let $\jmath:W^g\rightarrow W$ be the obvious embedding. \begin{defn}\label{defn:5.23} Let $L_{x,zK}^{3,0}$ be the operator in \begin{align}\label{eq:5.99} (\pi^*(\Lambda (T^*B))\widehat{\otimes} \Lambda(T^*X^g)\widehat{\otimes} c(N_{X^g/X}) \otimes\mathrm{End}( E))_x\otimes\mathrm{Op}_x, \end{align} under the notation (\ref{eq:5.69}), given by \begin{align}\label{eq:5.100} L_{x,zK}^{3,0}=-\left(\nabla_{e_i}+\frac{1}{4}\left\langle (\jmath^*R^{TX}_x-m^{TX}(zK)_x)Z,e_i\right\rangle\right)^2 +\jmath^*R^E_x-m^{E}(zK)_x. \end{align} \end{defn} In the sequel, we say that a sequence of differential operators on $T_xX$ converges if its coefficients converge, together with their derivatives, uniformly on compact subsets of $T_xX$.
Comparing with \cite[Proposition 7.27]{BG00}, from (\ref{eq:5.89b})-(\ref{eq:5.90}), we get \begin{prop}\label{prop:5.24} As $t\rightarrow 0$, \begin{align}\label{eq:5.101} L_{x,zK}^{3,t}\rightarrow L_{x,zK}^{3,0}. \end{align} \end{prop} \subsection{A family of norms}\label{s0508} For $x\in W^g$, let $\mathbf{I}_x$ be the vector space of smooth sections of $(\Lambda(T^*\field{R})\widehat{\otimes} \pi^*\Lambda(T^*B)\widehat{\otimes} \Lambda(T^*X^g)\widehat{\otimes} \mS_N\otimes E)_x$ on $T_xX$, and let $\mathbf{I}_{(r,q),x}$ be the vector space of smooth sections of $$ \Big(\big(T^*\field{R}\widehat{\otimes}\pi^* \Lambda^{r-1}(T^*B)\oplus \pi^*\Lambda^{r}(T^*B)\big)\widehat{\otimes} \Lambda^q(T^*X^g)\widehat{\otimes} \mS_N\otimes E\Big)_x$$ on $T_xX$. We denote by $\mathbf{I}^0_x =\bigoplus_{r,q} \mathbf{I}^0_{(r,q),x}$ the corresponding vector space of square-integrable sections. Put $k=\dim B$. \begin{defn}\label{defn:5.25} If $s\in \mathbf{I}_{(r,q),x}$ has compact support, put \begin{align}\label{eq:5.102} |s|_{t,x,0}^2=\int_{T_xX}|s(Z)|^2\left(1+|Z|\,\rho\left( \frac{\sqrt{t}Z}{2}\right) \right)^{2(k+\ell+1-q-r)}dv_{TX}(Z). \end{align} \end{defn} Recall that by (\ref{eq:5.81}), if $\rho(\sqrt{t}Z)>0$, then $|\sqrt{t}Z|\leq 4\varepsilon_0$, and if $\sqrt{t}|Z|\leq 4\varepsilon_0$, then $\rho(\sqrt{t}Z/2)=1$. By the same arguments as in \cite[Proposition 11.24]{BL91}, for $t\in (0,1]$, the following family of operators acting on $(\mathbf{I}_x^0, |\cdot|_{t,x,0})$ is uniformly bounded: \begin{align}\label{eq:5.103} \begin{split} &1_{|\sqrt{t}Z|\leq 4\varepsilon_0}\sqrt{t}c_t(e_j),\quad 1_{ |\sqrt{t}Z|\leq 4\varepsilon_0}|Z|\sqrt{t}c_t(e_j),\quad \text{for}\ 1\leq j\leq \ell, \\ &1_{|\sqrt{t}Z|\leq 4\varepsilon_0}|Z|f^p\wedge,\quad 1_{|\sqrt{t}Z|\leq 4\varepsilon_0}|Z| dt\wedge.
\end{split} \end{align} \begin{defn}\label{defn:5.26} If $s\in \mathbf{I}_x$ has compact support, put \begin{align}\label{eq:5.104} |s|_{t,x,1}^2=|s|_{t,x,0}^2+\sum_{i=1}^{n}|\nabla_{e_i} s|_{t,x,0}^2, \end{align} and \begin{align}\label{eq:5.105} |s|_{t,x,-1}=\sup_{0\neq s'\in \mathbf{I}_x}\frac{|\langle s,s' \rangle_{t,x,0}|}{|s'|_{t,x,1}}. \end{align} \end{defn} Let $(\mathbf{I}_x^1, |\cdot|_{t,x,1})$ be the Hilbert closure of the above vector space with respect to $|\cdot|_{t,x,1}$, and let $(\mathbf{I}_x^{-1}, |\cdot|_{t,x,-1})$ be the antidual of $(\mathbf{I}_x^1, |\cdot|_{t,x,1})$. Then $(\mathbf{I}_x^1, |\cdot|_{t,x,1})$ and $(\mathbf{I}_x^0, |\cdot|_{t,x,0})$ are densely embedded in $(\mathbf{I}_x^0, |\cdot|_{t,x,0})$ and $(\mathbf{I}_x^{-1}, |\cdot|_{t,x,-1})$ respectively, with embedding norms at most $1$. Comparing with \cite[Proposition 7.31]{BG00}, by (\ref{eq:5.89}) and (\ref{eq:5.103}), we get the following estimates. \begin{lemma}\label{lem:5.27} There exist constants $C_i>0$, $i=1,2,3,4$, such that if $t\in (0,1]$, $z\in \field{C}$ with $|zK|\leq 1$, $n\in \field{N}$, $x\in X^g$, and the support of $s,s'\in \mathbf{I}_x$ is included in $\{Z\in T_xX: |Z|\leq n \}$, then \begin{align}\label{eq:5.106} \begin{split} &{\rm Re}\langle L_{x,zK}^{3,t}s,s\rangle_{t,x,0}\geq C_1|s|_{t,x,1}^2-C_2(1+|nzK|^2)|s|_{t,x,0}^2, \\ &|{\rm Im}\langle L_{x,zK}^{3,t}s,s\rangle_{t,x,0}|\leq C_3\Big((1+|nzK|)|s|_{t,x,1}|s|_{t,x,0}+|nzK|^2|s|_{t,x,0}^2\Big), \\ &|\langle L_{x,zK}^{3,t}s,s'\rangle_{t,x,0}|\leq C_4(1+|nzK|^2)|s|_{t,x,1}|s'|_{t,x,1}.
\end{split} \end{align} \end{lemma} \begin{proof} We only need to observe that the terms containing $|nzK|^2$ come from the term \begin{align}\label{eq:5.107} \left|\left\langle\left(\rho(\sqrt{t}Z)\left(-\frac{1}{4} \langle m^{TX}(zK)Z,e_i \rangle+\frac{1}{\sqrt{t}}h_i(zK,\sqrt{t}Z)\right)\right)^2s,s\right\rangle_{t,x,0} \right|, \end{align} which can be dominated by $C(1+|nzK|^2)|s|_{t,x,0}^2$. The proof of Lemma \ref{lem:5.27} is completed. \end{proof} \subsection{The kernel $\widetilde{F}_t(L_{x,K}^{3,t})$ as an infinite sum}\label{s0509} Let $h$ be a smooth even function from $\field{R}$ into $[0,1]$ such that \begin{align}\label{eq:5.108} h(u)=\left\{ \begin{aligned} 1\quad&\hbox{if $|u|\leq \frac{1}{2}$;} \\ 0\quad&\hbox{if $|u|\geq 1$.} \end{aligned} \right. \end{align} For $n\in \field{N}$, put \begin{align}\label{eq:5.109} h_n(u)=h\left(u+\frac{n}{2}\right)+h\left(u-\frac{n}{2}\right). \end{align} Then $h_n$ is a smooth even function whose support is included in $\left[-\frac{n}{2}-1,-\frac{n}{2}+1\right]\cup \left[\frac{n}{2}-1,\frac{n}{2}+1\right]$. Set \begin{align}\label{eq:5.110} \mathcal{H}(u)=\sum_{n\in \field{N}}h_n(u). \end{align} The above sum is locally finite, and $\mathcal{H}(u)$ is a bounded smooth even function which takes positive values and has a positive lower bound on $\field{R}$. Put \begin{align}\label{eq:5.111} k_n(u)=\frac{h_n}{\mathcal{H}}(u). \end{align} Then the $k_n$ are bounded smooth even functions with bounded derivatives, and moreover \begin{align}\label{eq:5.112} \sum_{n\in \field{N}}k_n=1. \end{align} Note that from now on $n$ is used as an index running over the natural numbers; it no longer denotes $\dim X$ as in the previous sections.
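The partition-of-unity construction (\ref{eq:5.108})-(\ref{eq:5.112}) can be checked numerically. The following sketch (an illustration only, not part of the argument; the concrete smooth bump standing in for $h$ is an ad hoc choice, since the text only fixes the support properties of $h$) verifies that the resulting $k_n$ are even and sum to $1$:

```python
import numpy as np

def h(u):
    # Ad hoc smooth even cutoff with h = 1 on |u| <= 1/2 and h = 0 on
    # |u| >= 1; the paper only prescribes these support properties.
    u = np.atleast_1d(np.abs(np.asarray(u, dtype=float)))
    out = np.zeros_like(u)
    out[u <= 0.5] = 1.0
    mid = (u > 0.5) & (u < 1.0)
    s = 2.0 * (u[mid] - 0.5)          # s in (0, 1) on the transition region
    g = np.exp(-1.0 / (1.0 - s))      # classical smooth-step ingredient
    out[mid] = g / (g + np.exp(-1.0 / s))
    return out

def h_n(u, n):
    # (5.109): h_n(u) = h(u + n/2) + h(u - n/2)
    return h(u + n / 2.0) + h(u - n / 2.0)

def k_n_all(u, n_max):
    # (5.110)-(5.111): k_n = h_n / H with H = sum_n h_n (locally finite)
    hs = [h_n(u, n) for n in range(n_max + 1)]
    H = sum(hs)                       # H >= 1 on the grid covered by n <= n_max
    return [hn / H for hn in hs]

u = np.linspace(-40.0, 40.0, 2001)
ks = k_n_all(u, n_max=200)            # n_max large enough to cover [-40, 40]
assert np.allclose(sum(ks), 1.0)      # (5.112): sum_n k_n = 1
assert all(np.allclose(k, k[::-1]) for k in ks)   # each k_n is even
```

Each point of $\field{R}$ lies within $\frac12$ of some center $\pm\frac n2$, which is why $\mathcal{H}$ has a positive lower bound.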
\begin{defn}\label{defn:5.28} For $t\in [0,1]$, $n\in \field{N}$, $a\in\field{C}$, put \begin{align}\label{eq:5.113} F_{t,n}(a)=\int_{-\infty}^{+\infty}\exp(\sqrt{2}iua) \exp\left(-\frac{u^2}{2}\right)f(\sqrt{t}u)k_n(u)\frac{du }{\sqrt{2\pi}}. \end{align} \end{defn} By (\ref{eq:5.55}), (\ref{eq:5.112}) and (\ref{eq:5.113}), \begin{align}\label{eq:5.114} F_t(a)=\sum_{n\in \field{N}}F_{t,n}(a). \end{align} Also, given $m,m'\in\field{N}$, there exist $C>0$, $C'>0$, $C''>0$ such that for any $t\in [0,1]$, $n\in \field{N}$, $c>0$, \begin{align}\label{eq:5.115} \sup_{a\in \field{C}, |{\rm Im}(a)|\leq c}|a|^m\left|F_{t,n}^{(m')}(a) \right|\leq C\exp(-C'n^2+C''c^2). \end{align} Let $\widetilde{F}_{t,n}(a)$ be the unique holomorphic function such that \begin{align}\label{eq:5.116} F_{t,n}(a)=\widetilde{F}_{t,n}(a^2). \end{align} Such an $\widetilde{F}_{t,n}$ exists because $f$ and $k_n$ are even, so that by (\ref{eq:5.113}), $F_{t,n}$ is an even entire function of $a$, and hence a holomorphic function of $a^2$. Recall that $V_c$ was defined in (\ref{eq:5.36}). By (\ref{eq:5.115}), given $m,m'\in\field{N}$, there exist $C>0$, $C'>0$, $C''>0$ such that for any $t\in [0,1]$, $n\in \field{N}$, $c>0$, $\lambda\in V_c$, \begin{align}\label{eq:5.117} |\lambda|^m\left|\widetilde{F}_{t,n}^{(m')}(\lambda) \right|\leq C\exp(-C'n^2+C''c^2). \end{align} By (\ref{eq:5.114}), \begin{align}\label{eq:5.118} \widetilde{F}_t(a)=\sum_{n\in \field{N}}\widetilde{F}_{t,n}(a). \end{align} Using (\ref{eq:5.118}), we get \begin{align}\label{eq:5.119} \widetilde{F}_t(L_{x,zK}^{3,t})=\sum_{n\in \field{N}}\widetilde{F}_{t,n} (L_{x,zK}^{3,t}).
\end{align} More precisely, by (\ref{eq:5.117}) and standard elliptic estimates, for $t\in (0,1]$ we have the identity \begin{align}\label{eq:5.120} \widetilde{F}_t(L_{x,zK}^{3,t})(Z,Z')=\sum_{n\in \field{N}}\widetilde{F}_{t,n}(L_{x,zK}^{3,t})(Z,Z'), \end{align} and the series in the right-hand side of (\ref{eq:5.120}) converges, together with its derivatives, uniformly on compact subsets of $T_xX\times T_xX$. \begin{defn}\label{defn:5.29} For $\gamma$ in (\ref{eq:5.79}), put \begin{align}\label{eq:5.121} L_{x,zK,n}^{3,t}=-\left(1-\gamma^2\left(\frac{|Z|}{2(n+2)} \right)\right)\Delta^{TX}+\gamma^2\left(\frac{|Z|}{2(n+2)} \right)L_{x,zK}^{3,t}. \end{align} \end{defn} Observe that if $k_n(u)\neq 0$, then $|u|\leq \frac{n}{2}+1$. Using finite propagation speed and (\ref{eq:5.73}), we find that if $Z\in T_xX$, the support of $\widetilde{F}_{t,n}(L_{x,zK}^{3,t})(Z,\cdot)$ is included in $\{Z'\in T_xX: |Z'-Z|\leq n+2 \}$. Therefore, given $p\in\field{N}$, if $Z\in T_xX$, $|Z|\leq p$, the support of $\widetilde{F}_{t,n}(L_{x,zK}^{3,t})(Z,\cdot)$ is included in $\{Z'\in T_xX: |Z'|\leq n+p+2 \}$. If $|Z|\leq n+p+2$, then $\gamma\big(|Z|/(2(n+p+2))\big)=1$. Using finite propagation speed again, we see from (\ref{eq:5.121}) that for $Z\in T_xX$, $|Z|\leq p$, \begin{align}\label{eq:5.122} \widetilde{F}_{t,n}(L_{x,zK}^{3,t})(Z,Z')=\widetilde{F}_{t,n}(L_{x,zK, n+p}^{3,t})(Z,Z'). \end{align} From Lemma \ref{lem:5.27}, we have \begin{align}\label{eq:5.123} \begin{split} &{\rm Re}\langle L_{x,zK,n}^{3,t}s,s\rangle_{t,x,0}\geq C_1|s|_{t,x,1}^2-C_2(1+|nzK|^2)|s|_{t,x,0}^2, \\ &|{\rm Im}\langle L_{x,zK,n}^{3,t}s,s\rangle_{t,x,0}|\leq C_3\Big((1+|nzK|)|s|_{t,x,1}|s|_{t,x,0}+|nzK|^2 |s|_{t,x,0}^2\Big), \\ &|\langle L_{x,zK,n}^{3,t}s,s'\rangle_{t,x,0}|\leq C_4(1+|nzK|^2)|s|_{t,x,1}|s'|_{t,x,1}.
\end{split} \end{align} Put \begin{align}\label{eq:5.124} L_{x,zK,n}^{3,0}=-\left(1-\gamma^2\left(\frac{|Z|}{2(n+2)} \right)\right)\Delta^{TX}+\gamma^2\left(\frac{|Z|}{2(n+2)} \right)L_{x,zK}^{3,0}. \end{align} By Proposition \ref{prop:5.24}, as $t\rightarrow 0$, \begin{align}\label{eq:5.125} L_{x,zK,n}^{3,t}\rightarrow L_{x,zK,n}^{3,0}. \end{align} By (\ref{eq:5.123}), the functional analysis arguments in \cite[\S 7.10-7.12]{BG00} apply here without change. We obtain the following uniform estimate, which is formally the same as \cite[Theorem 7.38]{BG00}. Indeed, since the estimates in (\ref{eq:5.106}) and (\ref{eq:5.123}) are the analogues of \cite[(7.131) and (7.148)]{BG00}, the proof of the following theorem is exactly the same as that of \cite[Theorem 7.38]{BG00}. \begin{thm}\label{thm:5.30} There exist $C'>0$, $C''>0$, $C'''>0$ such that for $\eta>0$ small enough, there is $c_{\eta}\in (0,1]$ such that for any $m\in \field{N}$, there are $C>0$, $r\in \field{N}$ such that for $t\in (0,1]$, $|zK|\leq c_{\eta}$, $n\in \field{N}$, $x\in X^g$, $Z, Z'\in T_xX$, \begin{multline}\label{eq:5.126} \sup_{|\alpha|,|\alpha'|\leq m}\left|\frac{\partial^{|\alpha| +|\alpha'|}}{\partial Z^{\alpha}\partial Z'^{\alpha'}}\widetilde{F}_{t,n}(L_{x,zK}^{3,t})(Z,Z') \right| \\ \leq C(1+|Z|+|Z'|)^r\exp\Big(-C'n^2/4+2C''\eta^2\sup (|Z|^2, |Z'|^2)-C'''|Z-Z'|^2 \Big). \end{multline} \end{thm} \subsection{A proof of Theorem \ref{thm:5.17}}\label{s0510} Note that, as explained in the introduction of \cite{BG00}, $L_{x,zK}^{3,t}$ does not have a uniform lower bound, so it is not possible to define a priori an honest heat kernel for $\exp(-L_{x,zK}^{3,t})$; hence we cannot prove Theorem \ref{thm:5.17} by following the arguments in \cite[\S 11]{BL91}. Since $L_{x,zK,n+p}^{3,0}$ coincides with $-\Delta^{TX}$ near infinity, the operator $\widetilde{F}_{0,n}(L_{x,zK,n+p}^{3,0})$ is well-defined.
Also, by proceeding as in (\ref{eq:5.122}) and using finite propagation speed, we find that for $|Z|, |Z'|\leq p$, the kernel $\widetilde{F}_{0,n}(L_{x,zK,n+p}^{3,0})(Z,Z')$ does not depend on $p$. Finally, this kernel satisfies estimates similar to (\ref{eq:5.126}) for $\eta>0$ small enough and $|zK|\leq c_{\eta}$. Therefore we may define the kernel $\exp(-L_{x,zK}^{3,0})(Z,Z')$ by \begin{align}\label{eq:5.127} \exp(-L_{x,zK}^{3,0})(Z,Z')=\sum_{n\in \field{N}}\widetilde{F}_{0,n}(L_{x,zK, n+p}^{3,0})(Z,Z'),\quad \text{for}\ |Z|, |Z'|\leq p. \end{align} Note that the estimate (\ref{eq:5.126}) also holds for $t=0$. Thus the series in (\ref{eq:5.127}) converges, together with its derivatives, uniformly on compact subsets of $T_xX\times T_xX$. From (\ref{eq:5.89}), (\ref{eq:5.100}), (\ref{eq:5.121}) and (\ref{eq:5.124}), there exists $C>0$ such that for $t\in (0,1]$, $z\in \field{C}$, $|zK|\leq 1$, $n\in \field{N}$, $x\in X^g$, if $s\in \mathbf{I}_x$ has compact support, then \begin{align}\label{eq:5.127a} \left|(L_{x,zK,n}^{3,t}-L_{x,zK,n}^{3,0})s \right|_{t,x,-1} \leq C\sqrt{t}(1+n^4)|s|_{0,x,1}. \end{align} From Theorem \ref{thm:5.30}, (\ref{eq:5.127}) and (\ref{eq:5.127a}), the proof of the following theorem is exactly the same as that of \cite[Theorem 7.43]{BG00}. \begin{thm}\label{thm:5.31} There exist $C''>0$, $C'''>0$ such that for $\eta>0$ small enough, there exist $c_{\eta}\in (0,1]$, $r\in \field{N}$, $C>0$, such that for $t\in (0,1]$, $z\in \field{C}$, $|zK|\leq c_{\eta}$, $x\in X^g$, $Z, Z'\in T_xX$, \begin{multline}\label{eq:5.128} \left|\left(\widetilde{F}_{t}(L_{x,zK}^{3,t}) -\exp(-L_{x,zK}^{3,0})\right)(Z,Z') \right| \leq Ct^{\frac{1}{4(\dim X+1)}}(1+|Z|+|Z'|)^r \\ \cdot\exp\big(2C''\eta^2\sup (|Z|^2, |Z'|^2) -C'''|Z-Z'|^2/2 \big). \end{multline} \end{thm} Now there is $C>0$ such that if $Z\in N_{X^g/X}$, then \begin{align}\label{eq:5.129} |g^{-1}Z-Z|\geq C|Z|.
\end{align} By (\ref{eq:5.128}) and (\ref{eq:5.129}), we find that there exists $C''''>0$ such that if $Z\in N_{X^g/X}$, \begin{multline}\label{eq:5.130} \left|(\widetilde{F}_{t}(L_{x,zK}^{3,t})-\exp(-L_{x,zK}^{3,0})) (g^{-1}Z,Z) \right| \\ \leq Ct^{\frac{1}{4(\dim X+1)}}(1+|Z|)^r \cdot\exp\left(2C''\eta^2|Z|^2-C''''|Z|^2 \right). \end{multline} For $\eta>0$ small enough, \begin{align}\label{eq:5.131} 2C''\eta^2-C''''\leq -C''''/2. \end{align} So by (\ref{eq:5.130}), if $Z\in N_{X^g/X}$, \begin{align}\label{eq:5.132} \left|\left(\widetilde{F}_{t}(L_{x,zK}^{3,t}) -\exp(-L_{x,zK}^{3,0})\right)(g^{-1}Z,Z) \right| \leq Ct^{\frac{1}{4(\dim X+1)}}\exp\left(-C''''|Z|^2/4 \right). \end{align} For $K\in \mathfrak{z}(g)$, put \begin{align}\label{eq:5.133} H^{TX}=\jmath^*R^{TX}-m^{TX}(zK). \end{align} Clearly $H^{TX}$ splits under $TX=TX^g\oplus N_{X^g/X}$ as \begin{align}\label{eq:5.134} H^{TX}=H^{TX^g}+H^N. \end{align} Using Mehler's formula (cf., e.g., \cite[(1.34)]{LM00}), by (\ref{eq:5.100}), for $|zK|$ small enough, \begin{multline}\label{eq:5.135} \exp(-L_{x,zK}^{3,0})(g^{-1}Z,Z)= (4\pi)^{-\dim X/2}\mathrm{det}^{\frac{1}{2}}\left(\frac{ H^{TX}/2}{\sinh(H^{TX}/2)}\right) \\ \cdot\exp\left(-\frac{1}{2}\left\langle \frac{H^{N}/2}{\sinh(H^{N}/2)} \big(\cosh(H^{N}/2)-\exp(H^{N}/2)g^{-1}\big)Z,Z \right\rangle \right) \\ \cdot\exp(-\jmath^*R^{E}+m^{E}(zK)). \end{multline} Observe that for $z\in \field{C}$ with $|zK|$ small enough, the right-hand side of (\ref{eq:5.135}) is well-defined.
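Mehler's formula, invoked for (\ref{eq:5.135}), can be sanity-checked in the simplest scalar model, the harmonic oscillator $H=-\partial_x^2+x^2$ on $\field{R}$, whose heat kernel is the classical $e^{-tH}(x,y)=(2\pi\sinh 2t)^{-1/2}\exp\big(-((x^2+y^2)\cosh 2t-2xy)/(2\sinh 2t)\big)$. The following numerical sketch (an illustration only, not part of the argument; grid size and tolerances are ad hoc choices) compares this closed form with a discretized heat kernel:

```python
import numpy as np

# Scalar sanity check of Mehler's formula for H = -d^2/dx^2 + x^2 on R.
N, L, t = 1001, 8.0, 0.3
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# Second-order finite-difference Laplacian (Dirichlet boundary; the kernel
# is Gaussian-small at |x| = L, so the truncation is harmless here).
D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), 1)) / dx**2
H = -D2 + np.diag(x**2)

# e^{-tH} via the spectral decomposition of the symmetric matrix H;
# dividing the matrix entries by dx approximates the continuum kernel.
w, U = np.linalg.eigh(H)
K_num = (U * np.exp(-t * w)) @ U.T / dx

def mehler(xa, ya, s):
    sh, ch = np.sinh(2 * s), np.cosh(2 * s)
    return (2 * np.pi * sh) ** -0.5 * np.exp(
        -((xa**2 + ya**2) * ch - 2 * xa * ya) / (2 * sh))

i, j = N // 2, N // 2 + 40        # compare at a pair of nearby grid points
exact = mehler(x[i], x[j], t)
assert abs(K_num[i, j] - exact) < 1e-2 * exact
```

The infinite-dimensional statement (\ref{eq:5.135}) is the operator-valued analogue of this closed form, with the curvature $H^{TX}$ playing the role of the oscillator frequency.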
Using (\ref{eq:5.135}) and comparing with \cite[(1.37)]{LM00}, if $|zK|$ is small enough, \begin{multline}\label{eq:5.138} \int_{N_{X^g/X}}\exp(-L_{x,zK}^{3,0})(g^{-1}Z,Z)dv_N(Z) =(4\pi)^{-\ell/2}\mathrm{det}^{\frac{1}{2}} \left(\frac{H^{TX^g}/2}{\sinh( H^{TX^g}/2)}\right) \\ \cdot\Big(\mathrm{det}^{1/2}(1-g^{-1}|_N) \mathrm{det}^{1/2}(1-g\exp (-H^N))\Big)^{-1} \cdot\exp(-\jmath^*R^{E}+m^{E}(zK)). \end{multline} Also, comparing with \cite[(1.38)]{LM00}, \begin{multline}\label{eq:5.139} \tr_s^{\mS_N\otimes E}[g\exp(-\jmath^*R^{E} +m^{E}(zK))] \\ =(-i)^{(\dim X-\ell)/2}\mathrm{det}^{1/2}(1-g^{-1}|_N) \tr^E[g\exp(-\jmath^*R^{E}+m^{E}(zK))]. \end{multline} Using (\ref{eq:2.14}), (\ref{eq:2.15}), (\ref{eq:5.138}) and (\ref{eq:5.139}), we get \begin{multline}\label{eq:5.140} \psi_{\field{R}\times B}\int_{N_{X^g/X}}(-i)^{\ell/2}2^{\ell/2} \left\{\tr_s^{\mS_N\otimes E}[g\exp(-L_{x,zK}^{3,0})(g^{-1}Z,Z)]\right\}^{\max}dv_N(Z) \\ =\left\{\widehat{\mathrm{A}}_{g,zK}(TX,\nabla^{TX}) \ch_{g,zK}(E,\nabla^{E})\right\}^{\max}. \end{multline} From (\ref{eq:5.93}), (\ref{eq:5.132}) and (\ref{eq:5.140}), we obtain Theorem \ref{thm:5.17} for $\dim X$ even. If $\dim X$ is odd, the proof is the same, following the explanation in Remark \ref{rem:5.22}. The proof of Theorem \ref{thm:5.17} is completed. \subsection{A proof of Theorem \ref{thm:4.02}}\label{s0511} Since $v\geq t>0$, we have \begin{align}\label{eq:5.141} 0\leq t^{-1}-v^{-1}< t^{-1}. \end{align} Set \begin{align}\label{eq:5.142} \mathcal{A}_{K,t,v}'=\left(\mathbb{B}_t+\frac{\sqrt{t}c(K^X)}{4}\left( \frac{1}{t}-\frac{1}{v} \right)+t\cdot dt\wedge \frac{\partial}{\partial t}\right)^2+\mathcal{L}_K. \end{align} Let $\mathcal{A}_{K,t,v}'^{(0)}$ be the piece of $\mathcal{A}_{K,t,v}'$ which has degree $0$ in $\Lambda(T^*(\field{R}\times B))$.
Then from (\ref{eq:5.141}), $\mathcal{A}_{K,t,v}'^{(0)}$ satisfies the same estimate as in Lemma \ref{lem:5.02}, and the estimate (\ref{eq:5.15}) for $\mathcal{A}_{K,t}-\mathcal{A}_{K,t}^{(0)}$ also holds for $\mathcal{A}_{K,t,v}' -\mathcal{A}_{K,t,v}'^{(0)}$, uniformly in $v\geq t\geq 1$. Since $v\geq t$, as $t\rightarrow +\infty$, we have \begin{align}\label{eq:5.143} \left|\frac{\partial}{\partial t}\left(\frac{\sqrt{t}c(K^X)}{4} \left(\frac{1}{t}-\frac{1}{v} \right)-\frac{c(T^H)}{4\sqrt{t}} \right)\right|=\mathcal{O}(t^{-3/2}). \end{align} Then the analogues of Propositions \ref{prop:5.05} and \ref{prop:5.08} hold for $\mathcal{A}_{K,t,v}'$ uniformly in $v\geq t\geq 1$. Thus, replacing $\mathcal{A}_{zK,t}$ by $\mathcal{A}_{K,t,v}'$ in the proof of Theorem \ref{thm:5.01}, we obtain Theorem \ref{thm:4.02}. \section{A proof of Theorem \ref{thm:4.03}}\label{s06} In this section, we prove Theorem \ref{thm:4.03}. This section is organized as follows. In Section \ref{s0601}, we establish a Lichnerowicz formula for $\mathcal{B}_{K,t,v}$ in (\ref{eq:4.11}). In Section \ref{s0602}, we prove Theorem \ref{thm:4.03} a). In Sections \ref{s0603} - \ref{s0608}, we prove Theorem \ref{thm:4.03} b). In Section \ref{s0609}, we prove Theorem \ref{thm:4.03} c). In Section \ref{s0610}, we prove Theorem \ref{thm:4.03} d). Throughout this section, we use the assumptions and notations of Section \ref{s04}. \subsection{A Lichnerowicz formula}\label{s0601} Let $L$ be a trivial line bundle over $W$. We equip $L$ with the connection \begin{align}\label{eq:6.01} \nabla_v^L =d-\frac{\theta_K}{4v}. \end{align} Thus \begin{align}\label{eq:6.02} R_v^L=(\nabla_v^L)^2=-\frac{d\theta_K}{4v}. \end{align} Let $\nabla_v^{\mathcal{E}\otimes L}$ be the connection on $\mathcal{E}\otimes L$ induced by $\nabla^{\mathcal{E}}$ and $\nabla_v^L$.
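As a quick verification of (\ref{eq:6.02}) (the computation is elementary and not spelled out in the text): for any $1$-form $\alpha$, the connection $d-\alpha$ on a trivial line bundle has curvature $-d\alpha$, since for a section $s$,

```latex
\begin{align*}
(d-\alpha)^2 s
&= d\big(ds-\alpha s\big)-\alpha\wedge\big(ds-\alpha s\big) \\
&= -(d\alpha)\,s+\alpha\wedge ds-\alpha\wedge ds+(\alpha\wedge\alpha)\,s
 = -(d\alpha)\,s,
\end{align*}
```

because $\alpha\wedge\alpha=0$ for a $1$-form. Taking $\alpha$ to be the $1$-form in (\ref{eq:6.01}) gives exactly (\ref{eq:6.02}).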
The corresponding Dirac operator is \begin{align}\label{eq:6.03} D_v=\sum_{i=1}^nc(e_i)\nabla^{\mathcal{E}\otimes L}_{v,e_i}=D -\frac{c(K^X)}{4v}. \end{align} Since \begin{align}\label{eq:6.04} \nabla^{\mathcal{E}\otimes L}_{v,f_p^H}=\nabla^{\mathcal{E}}_{f_p^H}, \end{align} from (\ref{eq:6.03}), the new Bismut superconnection associated with $\mathcal{E}\otimes L$ is \begin{align}\label{eq:6.05} \mathbb{B}_t^v=\mathbb{B}_t-\frac{\sqrt{t}c(K^X)}{4v}. \end{align} \begin{thm}\label{thm:6.01} The following identity holds: \begin{multline}\label{eq:6.06} \mathcal{B}_{K,t,v}=-t\left(\nabla_{e_i}^{\mathcal{E}} +\frac{1}{2\sqrt{t}}\langle S(e_i)e_j, f_p^H\rangle c(e_j)f^p \wedge\right. \\ \left.+\frac{1}{4t}\langle S(e_i)f_p^H, f_q^H\rangle f^p\wedge f^q\wedge-\frac{\langle K^X,e_i\rangle}{4}\left(\frac{1}{t} +\frac{1}{v}\right)\right)^2 \\ +\frac{t}{4}H+\frac{t}{2}\left(R^{\mathcal{E}/\mS}(e_i,e_j)-\frac{1}{2v} \langle\nabla_{e_i}^{TX}K^X,e_j\rangle\right)c(e_i)c(e_j) \\ +\sqrt{t}\left(R^{\mathcal{E}/\mS}(e_i,f_p^H)-\frac{1}{2v}\langle T(e_i,f_p^H),K^X\rangle \right)c(e_i)f^p\wedge \\ +\frac{1}{2}\left(R^{\mathcal{E}/\mS}(f_p^H,f_q^H)-\frac{1}{8v}\langle T(f_p^H,f_q^H), K^X\rangle \right)f^p\wedge f^q\wedge-m^{\mathcal{E}/\mS}(K^X)+\frac{1}{4v}|K^X|^2. \end{multline} \end{thm} \begin{proof} From (\ref{eq:4.11}), (\ref{eq:5.01}), (\ref{eq:5.02}), (\ref{eq:5.70}) and (\ref{eq:6.05}), we have \begin{multline}\label{eq:6.07} \mathcal{B}_{K,t,v}=\left(\mathbb{B}_t^v+\frac{c(K^X)}{4\sqrt{t}} \right)^2+\mathcal{L}_{K}=-t\left(\nabla_{v,e_i}^{\mathcal{E}\otimes L} +\frac{1}{2\sqrt{t}}\langle S(e_i)e_j, f_p^H\rangle c(e_j)f^p \wedge\right.
\\ \left.+\frac{1}{4t}\langle S(e_i)f_p^H, f_q^H\rangle f^p\wedge f^q\wedge-\frac{\langle K^X,e_i\rangle}{4t}\right)^2 +\frac{t}{4}H+\frac{t}{2}R^{\mathcal{E}\otimes L/\mS}(e_i,e_j) c(e_i)c(e_j) \\ +\sqrt{t}R^{\mathcal{E}\otimes L/\mS}(e_i,f_p^H)c(e_i)f^p\wedge +\frac{1}{2}R^{\mathcal{E}\otimes L/\mS}(f_p^H,f_q^H)f^p\wedge f^q\wedge-m^{\mathcal{E}\otimes L/\mS}(K^X). \end{multline} Since $G$ acts trivially on $L$, the corresponding $m^L(K)$ in the sense of (\ref{eq:2.02}) is given by \begin{align}\label{eq:6.08} m^L(K^X)=-K^X+\nabla^L_{v,K^X}=-\frac{|K^X|^2}{4v}. \end{align} Then (\ref{eq:6.06}) follows from (\ref{eq:3.03})-(\ref{eq:3.05}), (\ref{eq:6.02}), (\ref{eq:6.07}) and (\ref{eq:6.08}). The proof of Theorem \ref{thm:6.01} is completed. \end{proof} \subsection{A proof of Theorem \ref{thm:4.03} a)}\label{s0602} Comparing with (\ref{eq:5.74}), we set \begin{align}\label{eq:6.18} \,^2\nabla^{\mathcal{E},t}_{\cdot}:=\nabla^{\mathcal{E}}_{\cdot}+\frac{1}{ 2\sqrt{t}}\langle S(\cdot)e_j, f_p^H\rangle c(e_j)f^p\wedge +\frac{1}{4t}\langle S(\cdot)f_p^H, f_q^H\rangle f^p\wedge f^q\wedge -\frac{\theta_K(\cdot)}{4t}\left(1+\frac{t}{v}\right). \end{align} We trivialize $\pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E}$ by parallel transport along $u\in [0,1]\rightarrow uZ$ with respect to the connection $\,^2\nabla^{\mathcal{E},t}$. Observe that this connection is $g$-equivariant, as $K\in \mathfrak{z}(g)$. Let $A$, $A'$ be smooth sections of $TX$. As in (\ref{eq:5.78a}), from (\ref{eq:6.18}), \begin{align}\label{eq:6.18a} \,^2\nabla_A^{\mathcal{E},1}c(A')=c(\nabla_A^{TX}A')+ \langle S(A)A',f_p^H\rangle f^p\wedge. \end{align} For $x\in W^g$, in this section we denote by $\mathbf{H}_x$ the vector space of smooth sections of $\pi^*(\Lambda(T^*B)) \widehat{\otimes}\mathcal{E}_x$.
Let $L_{x,K}^{1,(t,v)}$ be the differential operator acting on $\mathbf{H}_x$, \begin{align}\label{eq:6.18b} L_{x,K}^{1,(t,v)}=(1-\rho^2(Z))(-t\Delta^{TX})+\rho^2(Z)\mathcal{B}_{K,t,v}. \end{align} We define $L_{x,K}^{2,(t,v)}:=H_t^{-1}L_{x,K}^{1,(t,v)}H_t$ and $L_{x,K}^{3,(t,v)}:=\left[L_{x,K}^{2,(t,v)} \right]_t^{(3)}$ as in Section \ref{s0507}. By Proposition \ref{prop:5.24} for $(\mathcal{E}\otimes L, \nabla_v^{\mathcal{E}\otimes L})$, we have \begin{align}\label{eq:6.18c} L_{x,K}^{3,(0,v)}=-\left(\nabla_{e_i}+\frac{1}{4}\left\langle (\jmath^*R^{TX}_x-m^{TX}(K)_x)Z,e_i\right\rangle\right)^2 +\jmath^*R^{E\otimes L}_x-m^{E\otimes L}(K)_x, \end{align} and as $t\rightarrow 0$, \begin{align}\label{eq:6.18d} L_{x,K}^{3,(t,v)}\rightarrow L_{x,K}^{3,(0,v)}. \end{align} By (\ref{eq:6.18a}), as in (\ref{eq:5.89a}) and (\ref{eq:5.89c}), \begin{align}\label{eq:6.16b} \left[\sqrt{t}c(K^X)(\sqrt{t}Z) \right]_t^{(3)}=\jmath^*\theta_{K} +\mathcal{O}(\sqrt{t}|Z|+\sqrt{t}). \end{align} By (\ref{eq:2.08}), (\ref{eq:2.16}), (\ref{eq:3.02}), (\ref{eq:6.02}) and (\ref{eq:6.08}), we get \begin{align} \jmath^*R_{v,K}^L=\jmath^*R_v^L-2i\pi m^L(K)=-\frac{1}{4v} (d^{W^g}\theta_{K}-2i\pi|K^X|^2) =-\frac{d_K^{W^g}\theta_{K}}{4v}. \end{align} Then by (\ref{eq:2.17}), \begin{align} \ch_{g,K}(L,\nabla_v^L)=\exp\left(\frac{d_K^{W^g}\theta_{K} }{8\pi iv} \right). \end{align} From (\ref{eq:2.14}), (\ref{eq:2.15}), (\ref{eq:3.01}), (\ref{eq:3.02}) and (\ref{eq:3.07}), set \begin{align}\label{eq:6.17b} \gamma_{K,v}=-\frac{\theta_K}{8vi\pi} \exp\left(\frac{d_K\theta_K}{8vi\pi} \right)\widehat{\mathrm{A}}_{g,K}(TX,\nabla^{TX}) \ch_{g,K}(\mathcal{E}/\mS,\nabla^{\mathcal{E}/\mS})\in \Omega(W^g, \det(N_{X^g/X})).
\end{align} By (\ref{eq:4.10}), (\ref{eq:6.18c}) and (\ref{eq:6.17b}), if $\dim X$ is even, as in (\ref{eq:5.140}), we get \begin{multline}\label{eq:6.19b} \phi\int_{N_{X^g/X}}(-i)^{\ell/2}2^{\ell/2} \left\{\tr_s^{\mS_N\otimes E\otimes L}\left[g\frac{\jmath^*\theta_{K}}{4v} \exp(-L_{x,K}^{3,(0,v)}) (g^{-1}Z,Z)\right]\right\}^{\max}dv_N(Z) \\ =-\left\{\gamma_{K,v}\right\}_x^{\max}. \end{multline} By (\ref{eq:3.07}) and (\ref{eq:6.18d})-(\ref{eq:6.19b}), by the same argument as in Section \ref{s0507}, and by (\ref{eq:5.132}) for $(\mathcal{E}\otimes L, \nabla_v^{\mathcal{E}\otimes L})$, we obtain Theorem \ref{thm:4.03} a) for $\dim X$ even. If $n$ is odd, the proof is the same, following the explanation in Remark \ref{rem:5.22}. The proof of Theorem \ref{thm:4.03} a) is completed. \subsection{Localization of the problem}\label{s0603} Sections \ref{s0603}-\ref{s0608} are devoted to the proof of Theorem \ref{thm:4.03} b). Let $\mathcal{B}^{0}$ be the piece of $\mathcal{B}_{K,t,v}$ which has degree $0$ in $\Lambda(T^*B)$. Then for $t\in(0,1]$, $v\in [t,1]$, by (\ref{eq:5.141}), $t\mathcal{B}^{0}$ satisfies the same estimates as in Lemma \ref{lem:5.11}, uniformly for $v\in [t,1]$. Thus, following the same arguments as in the proof of Lemma \ref{lem:5.16}, we have \begin{thm}\label{thm:6.03} There exist $\beta>0$, $C>0$, $C'>0$ such that if $K\in\mathfrak{g}$, $|K|\leq \beta$, $t\in(0,1]$, $v\in [t,1]$, \begin{align}\label{eq:6.14} \|\widetilde{I}_t(t\mathcal{B}_{K,t,v})\|_{(1)}\leq C\exp(-C'/t). \end{align} \end{thm} So our proof of inequality (\ref{eq:4.13}) in Theorem \ref{thm:4.03} can be localized near $X^g$. As in Section 5.3, we may and we will assume that $W=B\times X$, $TX$ is spin and $\mathcal{E}=\mS_X\otimes E$.
\subsection{A rescaling of the normal coordinate to $X^{g,K}$ in $X^g$}\label{s0604} In the sequel, we fix $g\in G$, $0\neq K_0\in \mathfrak{z}(g)$ and \begin{align} K=zK_0,\quad z\in \field{R}^*. \end{align} Recall that $X^g$ and $X^{g,K}$ are totally geodesic in $X$. Given $\varepsilon>0$, let $\mathcal{U}_{\varepsilon}''$ be the $\varepsilon$-neighbourhood of $X^{g,K}$ in $N_{X^{g,K}/X^g}$ (cf. the notation in the proof of Lemma \ref{lem:3.01}). After possibly shrinking the $\varepsilon_0\in(0,a_X/32]$ of Section \ref{s0503}, we may assume that the map $(y_0,Z_0)\in N_{X^{g,K}/X^g}\rightarrow \exp_{y_0}^{X^g}(Z_0)\in X^g$ is a diffeomorphism from $\mathcal{U}_{\varepsilon}''$ onto the tubular neighbourhood $\mathcal{V}_{\varepsilon}''$ of $X^{g,K}$ in $X^g$ for any $0<\varepsilon\leq 16\varepsilon_0$. Since $X^g$ is totally geodesic in $X$, the connection $\nabla^{TX}$ induces the connection $\nabla^{N_{X^g/X}}$ on $N_{X^g/X}$ (cf. (\ref{eq:1.30}) and (\ref{eq:3.18b})). For $(y_0,Z_0)\in\mathcal{U}_{\varepsilon}''$, we identify $N_{X^g/X,(y_0,Z_0)}$ with $N_{X^g/X,y_0}$ by parallel transport along the geodesic $u\in [0,1]\rightarrow u Z_0$ with respect to $\nabla^{TX}$. If $y_0\in X^{g,K}$, $Z_0\in N_{X^{g,K}/X^g,y_0}$, $Z\in N_{X^g/X,y_0}$, $|Z_0|,|Z|\leq 4\varepsilon_0$, we identify $(y_0,Z_0,Z)$ with $\exp_{\exp_{y_0}^{X^g}(Z_0)}^X(Z)\in X$. Therefore, $(y_0, Z_0,Z)$ defines a coordinate system on $X$ near $X^{g,K}$. From (\ref{eq:2.14}), (\ref{eq:2.15}) and (\ref{eq:6.17b}), for $|z|$ small enough, $\gamma_{K,v}$ is a smooth form on $W^g$. Recall that the function $k$ is defined in (\ref{eq:5.64}) and that $\ell'=\dim X^{g,K}$.
\begin{thm}\label{thm:6.04} There exist $\beta\in (0,1]$, $\delta\in (0,1]$ such that for any $p\in \field{N}$, there is $C>0$ such that if $z\in \field{R}^*$, $|z|\leq \beta$, $t\in (0,1]$, $v\in [t,1]$, $y_0\in X^{g,K}$, $Z_0\in N_{X^{g,K}/X^g,y_0}$, $|Z_0|\leq \varepsilon_0/\sqrt{v}$, then for $K=zK_0$, \begin{multline}\label{eq:6.16} \left|v^{\frac{1}{2}\dim N_{X^{g,K}/X^g}}\left(\phi\int_{Z\in N_{X^g/X,y_0},|Z|\leq \varepsilon_0}\widetilde{\tr}'\left[g\frac{\sqrt{t} c(K^X)}{4v}\widetilde{F}_t\left(-\mathcal{B}_{K,t,v}\right)\right.\right.\right. \\ \left.\Big.\left.(g^{-1}(y_0,\sqrt{v}Z_0,Z), (y_0,\sqrt{v}Z_0,Z)) \right]\cdot k(y_0,\sqrt{v}Z_0,Z)dv_{N_{X^g/X}}(Z)\right. \\ \left.+\{\gamma_{K,v}\}^{\max} (y_0,\sqrt{v}Z_0)\Big)\right| \leq C\frac{(1+|Z_0|)^{\ell'+1}}{(1+|zZ_0|)^p}\left(\frac{t}{v} \right)^{\delta}. \end{multline} \end{thm} \begin{proof} Sections 6.5-6.7 will be devoted to the proof of Theorem \ref{thm:6.04}. \end{proof} \subsection{A new trivialization and Getzler rescaling near $X^{g,K}$}\label{s0605} Since $g$ preserves geodesics and parallel transport, in the coordinate system of the above subsection, \begin{align}\label{eq:6.17} g(Z_0,Z)=(Z_0,gZ). \end{align} By an abuse of notation, we will often write $Z_0+Z$ instead of $\exp_{\exp_{y_0}^{X^g}(Z_0)}^X(Z)$. First, we fix $Z_0\in N_{X^{g,K}/X^g,y_0}$, $|Z_0|\leq \varepsilon_0$, and we take $Z\in T_{y_0}X$, $|Z|\leq 4\varepsilon_0$. The curve $u\in [0,1]\rightarrow Z_0+uZ$ lies in $B_{y_0}^X(0,5\varepsilon_0)$. We identify $TX_{Z_0+Z}$, $\pi^*\Lambda(T^*B)\otimes\mathcal{E}_{Z_0+Z}$ with $TX_{Z_0}$, $\pi^*\Lambda(T^*B)\otimes\mathcal{E}_{Z_0}$ by parallel transport with respect to the connections $\nabla^{TX}$, $\,^2\nabla^{\mathcal{E},t}$ along this curve.
When $Z_0\in N_{X^{g,K}/X^g,y_0}$ is allowed to vary, we identify $TX_{Z_0}$, $\pi^*\Lambda(T^*B)\otimes\mathcal{E}_{Z_0}$ with $TX_{y_0}$, $\pi^*\Lambda(T^*B)\otimes\mathcal{E}_{y_0}$ by parallel transport with respect to the connections $\nabla^{TX}$, $\nabla^{\mathcal{E}}$ along the curve $u\in [0,1]\rightarrow uZ_0$. Then $\mathbf{H}_{Z_0}$ is identified with $\mathbf{H}_{y_0}$ associated with this trivialization. Furthermore, the fibers of $\pi^*\Lambda(T^*B)\otimes\mathcal{E}$ at $Z_0+Z$ and at $y_0$ are identified by parallel transport along the broken curve $u\in [0,1]\rightarrow 2uZ_0$ for $0\leq u\leq \frac{1}{2}$; $Z_0+(2u-1)Z$ for $\frac{1}{2}\leq u\leq 1$. Note that here we use the trick in \cite[Section 11.4]{Bi97} (cf. also \cite[Section 9.5]{BG00}), and the trivialization here is different from that in the proof of Theorem \ref{thm:4.03} a) in Section \ref{s0602}. Under this new trivialization, the identification between $\mathbf{H}_{y_0}$ and $\mathbf{H}_{Z_0}$ is an isometry with respect to (\ref{eq:1.19}). For $Z_0\in N_{X^{g,K}/X^g,y_0}$, $|Z_0|\leq \varepsilon_0$, the considered trivializations depend explicitly on $Z_0$. We denote by $(\mathcal{B}_{K,t,v})_{Z_0}$ the action of $\mathcal{B}_{K,t,v}$ centered at $Z_0$. Thus the operator $(\mathcal{B}_{K,t,v})_{Z_0}$ acts on $\mathbf{H}_{Z_0}$. As $\mathbf{H}_{Z_0}$ is identified with $\mathbf{H}_{y_0}$, ultimately $(\mathcal{B}_{K,t,v})_{Z_0}$ acts on $\mathbf{H}_{y_0}$. We may and we will assume that $\varepsilon_0$ is small enough so that if $|Z_0|\leq \varepsilon_0$, $|Z|\leq 4\varepsilon_0$, then \begin{align}\label{eq:6.19} \frac{1}{2}g^{TX}_{y_0}\leq g^{TX}_{Z_0+Z}\leq \frac{3}{2} g^{TX}_{y_0}. \end{align} We define $k_{(y_0,Z_0)}'(Z)$ as in (\ref{eq:5.72}). Recall that $\rho(Z)$ is defined in (\ref{eq:5.81}).
\begin{defn}\label{defn:6.05} Let $L_{Z_0,K}'^{1,(t,v)}$ be the differential operator acting on $\mathbf{H}_{y_0}$, \begin{align}\label{eq:6.21} L_{Z_0,K}'^{1,(t,v)}=(1-\rho^2(Z))(-t\Delta^{TX})+\rho^2(Z) (\mathcal{B}_{K,t,v})_{Z_0}. \end{align} \end{defn} By proceeding as in (\ref{eq:5.83}), and using Theorem \ref{thm:6.03} and (\ref{eq:6.17}), we find that if $Z_0\in N_{X^{g,K}/X^g,y_0}$, $Z\in N_{X^g/X,y_0}$, $|Z|, |Z_0|\leq \varepsilon_0$, \begin{align}\label{eq:6.22} \widetilde{F}_t(\mathcal{B}_{K,t,v})(g^{-1}(Z_0,Z),(Z_0,Z))k_{(y_0,Z_0)}'(Z) =\widetilde{F}_t(L_{Z_0,K}'^{1,(t,v)})(g^{-1}Z,Z). \end{align} We still define $H_t$ as in (\ref{eq:5.85}). Let \begin{align}\label{eq:6.23} L_{Z_0,K}'^{2,(t,v)}=H_{t}^{-1}L_{Z_0,K}'^{1,(t,v)}H_{t}. \end{align} Let $(e_1,\cdots,e_{\ell'})$, $(e_{\ell'+1},\cdots,e_{\ell})$, $(e_{\ell+1},\cdots,e_{n})$ be orthonormal bases of $T_{y_0}X^{g,K}$, $N_{X^{g,K}/X^g,y_0}$, $N_{X^g/X,y_0}$ respectively. \begin{defn}\label{defn:6.07} Let $L_{Z_0,K}'^{3,(t,v)}$ be the differential operator acting on $\mathbf{H}_{y_0}$ obtained from $L_{Z_0,K}'^{2,(t,v)}$ by replacing $c(e_j)$ by $c_{t}(e_j)$ (cf. (\ref{eq:5.88})) for $1\leq j\leq \ell'$, by $c_{t/v}(e_j)$ for $\ell'+1\leq j\leq \ell$, while leaving unchanged the $c(e_j)$'s for $\ell+1\leq j\leq n$. \end{defn} For $A\in (\pi^*(\Lambda (T^*B))\widehat{\otimes} c(TX) \otimes\End(E))_x\otimes \mathrm{Op}_x$, we denote by $[A]_{(t,v)}^{(3)}$ the differential operator obtained from $A$ by using the Getzler rescaling of the Clifford variables which is given in Definition \ref{defn:6.07}.
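To see the effect of this rescaling on a quadratic Clifford term, here is a minimal illustration, assuming the usual convention $c_t(e_j)=\frac{1}{\sqrt{t}}e^j\wedge-\sqrt{t}\,i_{e_j}$ (the definition (\ref{eq:5.88}) lies outside this section, so this normalization is an assumption): for $1\leq i,j\leq \ell'$,

```latex
% Quadratic Clifford term under the rescaling c(e_j) -> c_t(e_j):
t\,c_t(e_i)c_t(e_j)
  = e^i\wedge e^j\wedge
    - t\big(e^i\wedge i_{e_j}+i_{e_i}\,e^j\wedge\big)
    + t^2\, i_{e_i} i_{e_j}.
```

Thus the coefficient $\frac{t}{2}$ multiplying the curvature terms in (\ref{eq:6.25}) yields a finite exterior-algebra limit as $t\rightarrow 0$; for $\ell'+1\leq j\leq \ell$ the same computation applies with $t$ replaced by $t/v$.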
If $Z_0\in N_{X^{g,K}/X^g,y_0}$, $|Z_0|\leq \varepsilon_0$, $Z\in T_{y_0}X$, $|Z|\leq 4\varepsilon_0$, and if $U\in T_{y_0}X$, let $\tau^{Z_0}U(Z)\in TX_{Z_0+Z}$ be the parallel transport of $U$ along the curve $u\rightarrow 2uZ_0$, $0\leq u\leq \frac{1}{2}$, $u\rightarrow \exp_{Z_0}^X((2u-1)Z)$, $\frac{1}{2}\leq u\leq 1$, with respect to $\nabla^{TX}$. By (\ref{eq:6.18a}), under the identification of $\pi^*\Lambda(T^*B)\otimes \mathcal{E}_{Z_0+Z}$ and $\pi^*\Lambda(T^*B)\otimes \mathcal{E}_{y_0}$ at the beginning of this subsection, in this trivialization, \begin{align}\label{eq:6.25a} c\left(\tau^{Z_0} e_j(Z)\right)=c(e_j)+\frac{1}{\sqrt{t}} \big(\langle S(Z)e_j,f_p^H\rangle_{Z_0} +\mathcal{O}(|Z|^2)\big) f^p\wedge. \end{align} Then, comparing with (\ref{eq:5.89}) and (\ref{eq:5.90}), from (\ref{eq:6.06}), we have \begin{multline}\label{eq:6.25} L_{Z_0,K}'^{3,(t,v)}=-(1-\rho^2(\sqrt{t}Z))\Delta^{TX} +\rho^2(\sqrt{t}Z)\cdot\left\{ -g^{ij}(\sqrt{t}Z)\left(\nabla_{e_i}''\nabla_{e_j}'' -\Gamma_{ij}^k(\sqrt{t}Z)\sqrt{t}\nabla_{e_k}'' \right)\right.
\\ +\frac{t}{2}\left( R^{\mathcal{E}/\mS}_{(Z_0,\sqrt{t}Z)}(e_i,e_j)-\frac{1}{2v} \langle\nabla_{e_i}^{TX}K^X,e_j\rangle_{(Z_0,\sqrt{t}Z)} \right)\left[c\left(\tau^{Z_0} e_i(\sqrt{t}Z)\right) c\left(\tau^{Z_0} e_j(\sqrt{t}Z)\right)\right]_{(t,v)}^{(3)} \\ +\sqrt{t}\left(R^{\mathcal{E}/\mS}_{(Z_0,\sqrt{t}Z)}(e_i,f_p^H)-\frac{1}{2v} \langle T(e_i,f_p^H),K^X\rangle_{(Z_0,\sqrt{t}Z)}\right) \left[c\left(\tau^{Z_0} e_i(\sqrt{t}Z)\right) \right]_{(t,v)}^{(3)}f^p\wedge \\ +\frac{1}{2}\left(R^{\mathcal{E}/\mS}_{(Z_0,\sqrt{t}Z)}(f_p^H, f_q^H)- \frac{1}{8v}\langle T(f_p^H,f_q^H), K^X \rangle_{(Z_0,\sqrt{t}Z)}\right)f^p\wedge f^q\wedge \\ \left.+\frac{t}{4}H_{(Z_0,\sqrt{t}Z)} -m_{(Z_0,\sqrt{t}Z)}^{\mathcal{E}/\mS}(K^X)+\frac{1}{4v} |K^X(Z_0,\sqrt{t}Z)|^2\right\}, \end{multline} where \begin{multline}\label{eq:6.26} \nabla_{e_i}''=\nabla_{\tau^{Z_0}e_i(\sqrt{t}Z)}+\frac{t}{8}\Big( \langle R^{TX}_{Z_0}(e_k,e_l)Z, e_i\rangle+\mathcal{O}(\sqrt{t}|Z|^2)\Big) \left[c\left(\tau^{Z_0} e_k(\sqrt{t}Z)\right) c\left(\tau^{Z_0} e_l(\sqrt{t}Z)\right)\right]_{(t,v)}^{(3)} \\ +\frac{\sqrt{t}}{4}\Big(\langle R^{TX}_{Z_0} (e_k,f_p^H)Z,e_i\rangle +\mathcal{O}(\sqrt{t}|Z|^2)\Big) \left[c\left(\tau^{Z_0} e_k(\sqrt{t}Z)\right) \right]_{(t,v)}^{(3)}f^p\wedge \\ +\frac{1}{8} \Big(\langle R^{TX}_{Z_0} (f_p^H,f_q^H)Z,e_i\rangle +\mathcal{O}(\sqrt{t}|Z|^2)\Big) f^p\wedge f^q\wedge + \frac{t}{2}\left(R^E_{Z_0}(Z, e_i)+\mathcal{O}(\sqrt{t}|Z|^2) \right) \\ -\frac{1}{4}\left(1+\frac{t}{v}\right)\langle m^{TX}_{Z_0} (K)Z,e_i\rangle +\sqrt{t}h_i(K,\sqrt{t}Z)\left(\frac{1}{t}+\frac{1}{v}\right). \end{multline} Here $h_i(K, Z)$ is a function depending linearly on $K$, and $h_i(K,Z)=\mathcal{O}(|Z|^2)$ for $|K|$ bounded. Let $\psi_v\in \End(\Lambda(T^*X^g))$ be the morphism of exterior algebras such that \begin{align}\label{eq:6.31} \begin{split} &\psi_v(e^j)=e^j,\quad 1\leq j\leq \ell', \\ &\psi_v(e^j)=\sqrt{v}e^j,\quad \ell'+1\leq j\leq \ell.
\end{split} \end{align} Recall that for $x=(y_0,Z_0)\in X^g$, $\Lambda(T^*X^g)_{(y_0, Z_0)}$ has been identified with $\Lambda(T^*X^g)_{y_0}$. \begin{defn}\label{defn:6.10} Let $L_{x,K}'^{3,(0,v)}$ be the operator \begin{align}\label{eq:6.32} L_{x,K}'^{3,(0,v)}=\psi_v L_{x,K}^{3,(0,v)}\psi_v^{-1}. \end{align} \end{defn} By Definitions \ref{defn:6.07} and \ref{defn:6.10}, (\ref{eq:6.18d}) and (\ref{eq:6.31}), as $t\rightarrow 0$, \begin{align}\label{eq:6.33} L_{Z_0,K}'^{3,(t,v)}\rightarrow L_{(y_0,Z_0),K}'^{3,(0,v)}. \end{align} \subsection{A family of norms}\label{s0606} For $0\leq p\leq \ell'$, $0\leq q\leq \ell-\ell'$, put \begin{align}\label{eq:6.34} \Lambda^{(p,q)}(T^*X^g)_{y_0}=\Lambda^{p}(T^*X^{g,K})_{y_0} \widehat{\otimes}\Lambda^{q}(N^*_{X^{g,K}/X^g})_{y_0}. \end{align} The various $\Lambda^{(p,q)}(T^*X^g)_{y_0}$ are mutually orthogonal in $\Lambda(T^*X^g)_{y_0}$. Let $\mathbf{I}_{y_0}$ be the vector space of smooth sections of $(\pi^*\Lambda(T^*B)\otimes\Lambda(T^*X^g)\widehat{\otimes} \mS_N\otimes E)_{y_0}$ on $T_{y_0}X$, and let $\mathbf{I}_{(r,p,q),y_0}$ be the vector space of smooth sections of $(\pi^*\Lambda^r(T^*B)\otimes\Lambda^{(p,q)}(T^*X^g) \widehat{\otimes}\mS_N\otimes E)_{y_0}$ on $T_{y_0}X$. Let $\mathbf{I}^0_{y_0}$, $\mathbf{I}^0_{(r,p,q),y_0}$ be the corresponding vector spaces of square-integrable sections. Now we imitate the constructions in \cite[\S 11]{BL91}. Recall that $\dim B=k$.
\begin{defn}\label{defn:6.12} For $t\in [0,1]$, $v\in \field{R}_+^*$, $y_0\in X^{g,K}$, $Z_0\in N_{X^{g,K}/X^g,y_0}$, $|Z_0|\leq \varepsilon_0/\sqrt{v}$, $s\in \mathbf{I}_{(r,p,q),y_0}$, set \begin{multline}\label{eq:6.35} |s|_{t,v,Z_0,0}^2=\int_{T_{y_0}X}|s(Z)|^2\left(1+\big(|Z_0|+|Z|\big) \rho\left(\frac{\sqrt{t}Z}{2}\right) \right)^{2(k+\ell'-p-r)} \\ \cdot\left(1+\sqrt{v}|Z|\rho\left(\frac{\sqrt{t}Z}{2} \right) \right)^{2(\ell-\ell'-q)}dv_{TX}(Z). \end{multline} \end{defn} Then (\ref{eq:6.35}) induces a Hermitian product $\langle\cdot, \cdot\rangle_{t,v,Z_0,0}$ on $\mathbf{I}^0_{(r,p,q),y_0}$. We equip $\mathbf{I}^0_{y_0}=\bigoplus \mathbf{I}^0_{(r,p,q),y_0}$ with the direct sum of these Hermitian metrics. Recall that by (\ref{eq:5.81}), if $\rho(\sqrt{t}Z)>0$, then $|\sqrt{t}Z|\leq 4\varepsilon_0$. The proof of the following proposition is almost the same as that of \cite[Proposition 8.16]{BG00} (cf. also \cite[Proposition 11.24]{BL91}). \begin{prop}\label{prop:6.13} For $t\in (0,1]$, $v\in [t,1]$, $y_0\in X^{g,K}$, $Z_0\in N_{X^{g,K}/X^g,y_0}$, $|Z_0|\leq \varepsilon_0/\sqrt{v}$, the following families of operators acting on $(\mathbf{I}_{y_0}^0, |\cdot|_{t,v,Z_0,0})$ are uniformly bounded: \begin{multline}\label{eq:6.36} 1_{|\sqrt{t}Z|\leq 4\varepsilon_0}\sqrt{t}c_{t}(e_j),\ 1_{ |\sqrt{t}Z|\leq 4\varepsilon_0}|Z|\sqrt{t}c_{t}(e_j),\ 1_{|\sqrt{t}Z|\leq 4\varepsilon_0}|Z_0|\sqrt{t}c_{t}(e_j),\ \text{for}\ \ 1\leq j\leq \ell', \\ 1_{|\sqrt{t}Z|\leq 4\varepsilon_0}|Z_0|f^p\wedge,\quad 1_{|\sqrt{t}Z|\leq 4\varepsilon_0}|Z|f^p\wedge, \\ 1_{|\sqrt{t}Z|\leq 4\varepsilon_0}\sqrt{\frac{t}{v}} c_{\frac{t}{v}}(e_j),\quad 1_{ |\sqrt{t}Z|\leq 4\varepsilon_0}|Z|\sqrt{t}c_{\frac{t}{v}}(e_j),\ \text{for}\ \ \ell'+1\leq j\leq \ell.
\end{multline} \end{prop} \begin{defn}\label{defn:6.14} For $t\in [0,1]$, $v\in \field{R}_+^*$, $y_0\in X^{g,K}$, $Z_0\in N_{X^{g,K}/X^g,y_0}$, $|Z_0|\leq \varepsilon_0/\sqrt{v}$, if $s\in \mathbf{I}_{y_0}$ has compact support, set \begin{align}\label{eq:6.37} |s |_{t,v,Z_0,1}^2=|s|_{t,v,Z_0,0}^2+\frac{1}{v} \left|\rho(\sqrt{t}Z)|K^X|(\sqrt{v}Z_0+\sqrt{t}Z)s \right|_{t,v,Z_0,0}^2+\sum_{i=1}^{n}|\nabla_{e_i}s|_{t,v, Z_0,0}^2. \end{align} \end{defn} Note that $|s|_{t,v,Z_0,1}$ depends explicitly on $K=zK_0$; in particular, it depends on $z\in \field{R}^*$. \begin{thm}\label{thm:6.15} There exist constants $C_i>0$, $i=1,2,3,4$, such that if $t\in (0,1]$, $v\in [t,1]$, $n\in\field{N}$, $y_0\in X^{g,K}$, $Z_0\in N_{X^{g,K}/X^g,y_0}$, $|Z_0|\leq \varepsilon_0/\sqrt{v}$, $z\in \field{R}$, $|z|\leq 1$, and if the support of $s,s'\in \mathbf{I}_{y_0}$ is included in $\{Z\in T_{y_0}X: |Z|\leq n \}$, then \begin{align}\label{eq:6.38} \begin{split} &{\rm Re}\langle L_{\sqrt{v}Z_0,zK_0}'^{3,(t,v)}s,s\rangle_{t,v,Z_0,0}\geq C_1|s|_{t,v,Z_0,1}^2-C_2(1+|nz|^2)|s|_{t,v,Z_0,0}^2, \\ &|{\rm Im}\langle L_{\sqrt{v}Z_0,zK_0}'^{3,(t,v)}s,s\rangle_{t,v,Z_0,0}|\leq C_3((1+|nz|)|s|_{t,v,Z_0,1}|s|_{t,v,Z_0,0}+|nz|^2|s|_{t,v,Z_0,0}^2), \\ &|\langle L_{\sqrt{v}Z_0,zK_0}'^{3,(t,v)}s,s'\rangle_{t,v,Z_0,0}|\leq C_4(1+|nz|^2)|s|_{t,v,Z_0,1}|s'|_{t,v,Z_0,1}.
\end{split} \end{align} \end{thm} \begin{proof} Comparing with $L_{x,K}^{3,t}$ in (\ref{eq:5.89}) and (\ref{eq:5.106}), there are four additional terms in (\ref{eq:6.25}) which should be estimated: \begin{align}\label{eq:6.39} \frac{1}{4v}\left|\rho(\sqrt{t}Z)|zK_0^X|(\sqrt{v}Z_0+\sqrt{t}Z)s \right|_{t,v,Z_0,0}^2, \end{align} \begin{multline}\label{eq:6.40} -\rho^2(\sqrt{t}Z)\frac{t}{4v}\left\langle \langle\nabla^{TX}_{ \tau^{\sqrt{v}Z_0}e_i(\sqrt{t}Z)}zK_0^X(\sqrt{v}Z_0+\sqrt{t}Z), \tau^{\sqrt{v}Z_0}e_j(\sqrt{t}Z)\rangle\right. \\ \left. \cdot \left[c\left(\tau^{\sqrt{v}Z_0} e_i(\sqrt{t}Z)\right) c\left(\tau^{\sqrt{v}Z_0} e_j(\sqrt{t}Z)\right)\right]_{(t,v)}^{(3)}s,s \right\rangle_{t,v,Z_0,0}, \end{multline} \begin{align}\label{eq:6.41} -\rho^2(\sqrt{t}Z)\frac{\sqrt{t}}{2v}\left\langle\langle T(e_i,f_p^H), zK_0^X\rangle (\sqrt{v}Z_0+\sqrt{t}Z) \left[c\left(\tau^{\sqrt{v}Z_0} e_i(\sqrt{t}Z)\right) \right]_{(t,v)}^{(3)}f^p\wedge s, s\right\rangle_{t,v,Z_0,0}, \end{align} and \begin{align}\label{eq:6.42} -\rho^2(\sqrt{t}Z)\frac{1}{16v}\left\langle\langle T(f_p^H, f_q^H), zK_0^X\rangle (\sqrt{v}Z_0+\sqrt{t}Z)f^p\wedge f^q\wedge s,s \right\rangle_{t,v,Z_0,0}. \end{align} The first term is controlled by (\ref{eq:6.37}), and the second term was estimated in the proof of \cite[Theorem 8.18]{BG00}. We only need to estimate (\ref{eq:6.41}) and (\ref{eq:6.42}), which are new terms in the family case. By (\ref{eq:3.04}), $\widetilde{T}$ is $G$-invariant; thus $[K^X,\widetilde{T}]=0$. Since $m^{TX}(K)$ is skew-adjoint, by (\ref{eq:2.03}), \begin{align}\label{eq:6.60b} Z\langle \widetilde{T}, K^X\rangle =\langle \nabla_Z^{TX}\widetilde{T},K^X\rangle +\langle\widetilde{T},\nabla^{TX}_{Z}K^X\rangle =\langle \nabla_Z^{TX}\widetilde{T},K^X\rangle -\langle \nabla_{K^X}^{TX}\widetilde{T}, Z\rangle. \end{align} As $y_0\in X^{g,K}\subset X^K$, we know $K_{y_0}^X=0$.
Thus from (\ref{eq:6.60b}), we have \begin{align}\label{eq:6.61} \frac{\partial}{\partial s}\langle \widetilde{T}, K^X\rangle_{(y_0,sZ)}|_{s=0}=0. \end{align} From (\ref{eq:6.61}), we have \begin{align}\label{eq:6.62} \langle \widetilde{T},K^X\rangle_{(y_0,Z)}=\mathcal{O}(|Z|^2). \end{align} Thus we have \begin{multline}\label{eq:6.43} \rho^2(\sqrt{t}Z)\frac{\sqrt{t}}{2v}\langle T(e_i,f_p^H),zK_0^X\rangle (\sqrt{v}Z_0+\sqrt{t}Z)\left[c\left(\tau^{\sqrt{v}Z_0} e_i(\sqrt{t}Z)\right) \right]_{(t,v)}^{(3)}f^p\wedge \\ =\rho^2(\sqrt{t}Z)v^{-1}\sqrt{t}|z|\left[c\left(\tau^{\sqrt{v}Z_0} e_i(\sqrt{t}Z)\right) \right]_{(t,v)}^{(3)}f^p\wedge\cdot \mathcal{O}((\sqrt{v}|Z_0|+\sqrt{t}|Z|)^2), \end{multline} and \begin{multline}\label{eq:6.44} \rho^2(\sqrt{t}Z)\frac{1}{16v}\langle T(f_p^H, f_q^H),zK_0^X\rangle (\sqrt{v}Z_0+\sqrt{t}Z)f^p\wedge f^q\wedge \\ =\rho^2(\sqrt{t}Z)|z|f^p\wedge f^q\wedge \cdot \mathcal{O}((|Z_0| +\sqrt{t/v}|Z|)^2). \end{multline} Using the fact that $v\leq 1$ and $t/v\leq 1$, and also Proposition \ref{prop:6.13}, from (\ref{eq:6.25a}), we find that the operators in (\ref{eq:6.43}) and (\ref{eq:6.44}) remain uniformly bounded with respect to $|\cdot|_{t,v,Z_0,0}$. The proof of Theorem \ref{thm:6.15} is completed. \end{proof} \begin{defn}\label{defn:6.16} Put \begin{align}\label{eq:6.45} L_{Z_0,K,n}^{3,(t,v)}=-\left(1-\gamma^2\left(\frac{|Z|}{2(n+2)} \right)\right)\Delta^{TX}+\gamma^2\left(\frac{|Z|}{2(n+2)} \right)L_{Z_0,K}^{3,(t,v)}. \end{align} \end{defn} Let $\widetilde{F}_t(L_{Z_0,K}^{3,(t,v)})(Z,Z')$ and $\widetilde{F}_t(L_{Z_0,K,n}^{3,(t,v)})(Z,Z')$ be the smooth kernels associated with $\widetilde{F}_t(L_{Z_0,K}^{3,(t,v)})$ and $\widetilde{F}_t(L_{Z_0,K,n}^{3,(t,v)})$ with respect to $dv_{TX}(Z')$.
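The comparison of kernels in (\ref{eq:6.46}) below relies on finite propagation speed for solutions of the wave equation. As a hedged reminder (the precise normalization of $\widetilde{F}_t$ is fixed in Section 5, outside this excerpt, so only the generic mechanism is recalled here):

```latex
% If F is even and its Fourier transform \widehat{F} is supported in [-a,a],
F\big(\sqrt{L}\big)=\frac{1}{2\pi}\int_{-a}^{a}\widehat{F}(u)
  \cos\big(u\sqrt{L}\big)\,du,
% and cos(u\sqrt{L})(Z,\cdot) is supported in a ball of radius
% comparable to |u| around Z, so the kernel F(\sqrt{L})(Z,Z') depends
% only on the restriction of L to that ball.
```

Since the two operators being compared coincide on a sufficiently large ball around $\{|Z|\leq p\}$, their kernels agree there.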
Using (\ref{eq:6.19}) and proceeding as in (\ref{eq:5.122}), i.e., using finite propagation speed, we see that if $Z\in T_{y_0}X$, $|Z|\leq p$, \begin{align}\label{eq:6.46} \widetilde{F}_{t,n}(L_{Z_0,K}^{3,(t,v)})(Z,Z')=\widetilde{F}_{t,n} (L_{Z_0,K,n+p}^{3,(t,v)})(Z,Z'). \end{align} Clearly, when replacing $L_{\sqrt{v}Z_0,zK_0}^{3,(t,v)}$ in (\ref{eq:6.38}) by $L_{\sqrt{v}Z_0,zK_0,n}^{3,(t,v)}$, the estimates (\ref{eq:6.38}) still hold. \subsection{A proof of Theorem \ref{thm:6.04}}\label{s0607} Since $W$ is a compact manifold, there exists a finite family of smooth functions $f_1,\cdots,f_r:W\rightarrow [-1,1]$ with the following properties: \begin{itemize} \item $W^K=\bigcap_{j=1}^r\{x\in W: f_j(x)=0 \}$; \item on $W^K$, $df_1,\cdots, df_r$ span $N_{X^{g,K}/X}$. \end{itemize} \begin{defn}\label{defn:6.17} Let $\mathcal{Q}_{t,v,Z_0}$ be the family of operators \begin{align}\label{eq:6.47} \mathcal{Q}_{t,v,Z_0}=\left\{\nabla_{e_i}, 1\leq i\leq \dim X; \frac{z}{\sqrt{v}}\rho(\sqrt{t}Z)f_j(\sqrt{v}Z_0+\sqrt{t}Z), 1\leq j\leq r \right\}. \end{align} For $j\in\field{N}$, let $\mathcal{Q}_{t,v,Z_0}^j$ be the set of operators $Q_1\cdots Q_j$, with $Q_i\in \mathcal{Q}_{t,v,Z_0}$, $1\leq i\leq j$. \end{defn} Following the arguments in \cite[\S 8.8-8.10]{BG00}, we have the following uniform estimate, which is formally the same as \cite[(8.76)]{BG00}. We only need to take care that, in the proofs of the analogues of \cite[Proposition 8.22 and Theorems 8.23, 8.24]{BG00}, two new terms analogous to (\ref{eq:6.41}) and (\ref{eq:6.42}) appear in our family case; however, they can be controlled as in (\ref{eq:6.43}) and (\ref{eq:6.44}).
\begin{thm} There exist $C>0$, $C'>0$ such that given $m>0$, there exists $\beta_1>0$ such that if $t\in (0,1]$, $v\in [t,1]$, $z\in \field{R}$, $|z|\leq \beta_1$, $y_0\in X^{g,K}$, $Z_0\in N_{X^{g,K}/X^g, y_0}$, $|Z_0|\leq \varepsilon_0/\sqrt{v}$, $Z\in N_{X^g/X,y_0}$, $|Z|\leq \varepsilon_0/\sqrt{t}$, \begin{multline}\label{eq:6.48} \left|\left(\widetilde{F}_{t}\left(L_{\sqrt{v}Z_0,zK_0}^{3,(t,v)} \right) -\exp\left(-L_{\sqrt{v}Z_0,zK_0}^{3,(0,v)}\right) \right)(g^{-1}Z,Z) \right| \\ \leq C\left(\frac{t}{v}\right)^{\frac{1}{4(\dim X+1)}} \cdot\frac{(1+|Z_0|)^{\ell'+1}}{(1+|zZ_0|)^m}\exp\left( -C'|Z|^2/4 \right). \end{multline} \end{thm} The kernel $\exp\left(-L_{\sqrt{v}Z_0,zK_0}^{3,(0,v)}\right)(g^{-1}Z,Z)$ here is defined in the same way as in (\ref{eq:5.127}). From (\ref{eq:6.25a}), we get \begin{align}\label{eq:6.50b} \sqrt{\frac{t}{v}}\left[c\left(\tau^{\sqrt{v}Z_0} e_j(\sqrt{t}Z)\right) \right]_{(t,v)}^{(3)}=\left\{ \begin{aligned} &\frac{1}{\sqrt{v}}e^j\wedge +\sqrt{\frac{t}{v}}\Big(\mathcal{O}(\sqrt{t})+ \mathcal{O}(|Z|)\Big),\ &\hbox{if $1\leq j\leq \ell'$;} \\ &e^j\wedge +\sqrt{\frac{t}{v}} \mathcal{O}(1+|Z|),\quad&\hbox{if $\ell'+1\leq j\leq \ell$;} \\ &\sqrt{\frac{t}{v}}\big(c(e_j)+\mathcal{O}(|Z|) \big), \quad&\hbox{if $\ell+1\leq j\leq n$.} \end{aligned} \right. \end{align} Moreover, as $K^X$ vanishes on $W^{g,K}$, we have \begin{align}\label{eq:6.51b} \begin{split} \langle K_0^X(\sqrt{v}Z_0+\sqrt{t}Z),\tau^{\sqrt{v}Z_0} e_j(\sqrt{t}Z)\rangle &=\langle K_0^X(\sqrt{v}Z_0),\tau^{\sqrt{v}Z_0} e_j\rangle_{(y_0,\sqrt{v}Z_0)}+\mathcal{O}(\sqrt{t}|Z|), \\ \langle K_0^X(\sqrt{v}Z_0),\tau^{\sqrt{v}Z_0} e_j\rangle_{(y_0,\sqrt{v}Z_0)}&=\mathcal{O}(\sqrt{v}|Z_0|).
\end{split} \end{align} By (\ref{eq:3.01}), (\ref{eq:6.31}), (\ref{eq:6.50b}) and (\ref{eq:6.51b}), we get \begin{align}\label{eq:6.52b} \frac{\sqrt{t}}{4v}\left[c\left(K^X_{\sqrt{v}Z_0+ \sqrt{t}Z}\right) \right]_{(t,v)}^{(3)}=\left(\psi_v \frac{\jmath^*\vartheta_K}{4v} \psi_v^{-1}\right)_{\sqrt{v}Z_0}+\frac{\sqrt{t}}{v}z \mathcal{O}(\sqrt{v}|Z_0|+\sqrt{t}|Z|)\mathcal{O}(1+|Z|). \end{align} Note that we have $(\delta_v^*\alpha)_{Z_0}=(\psi_v\alpha\psi_v^{-1})_{\sqrt{v}Z_0}$ for any $\alpha\in \Omega(W^g)$, with $\delta_v$ defined above (\ref{eq:3.10}). Therefore, from (\ref{eq:3.17}), (\ref{eq:6.48}) and (\ref{eq:6.52b}), we get Theorem \ref{thm:6.04}. \subsection{A proof of Theorem \ref{thm:4.03} b)}\label{s0608} Theorem \ref{thm:4.03} b) follows directly from the following theorem. \begin{thm}\label{thm:6.19} There exist $\beta_1>0$, $r\in \field{N}$, $C>0$, $\delta\in (0,1]$, such that if $t\in (0,1]$, $v\in [t,1]$, and if $z\in \field{R}\backslash \{0\}$, $|z|\leq \beta_1$, then \begin{align}\label{eq:6.51} |z|^r\left|\phi\widetilde{\tr}'\left[g\frac{\sqrt{t}c(K^X)}{4v} \exp\left(-\mathcal{B}_{K,t,v}\right) \right]+\widetilde{e}_v\right| \leq C\left(\frac{t}{v} \right)^{\delta}. \end{align} \end{thm} \begin{proof} Recall that $\mathcal{U}_{\varepsilon}$, $\mathcal{U}_{\varepsilon}'$, $\mathcal{U}_{\varepsilon}''$ are the $\varepsilon$-neighborhoods of $X^g$, $X^{g,K}$, $X^{g,K}$ in $N_{X^g/X}$, $N_{X^{g,K}/X}$, $N_{X^{g,K}/X^g}$ respectively. Let $\bar{k}(y_0,Z_0)$ be the function defined on $X^g\cap \mathcal{U}_{\varepsilon}''$ by the relation \begin{align}\label{eq:6.52} dv_{X^g}(y_0,Z_0)=\bar{k}(y_0,Z_0)dv_{X^{g,K}}(y_0) dv_{N_{X^{g,K}/X^g}}(Z_0). \end{align} Then \begin{align}\label{eq:6.53} \bar{k}|_{X^{g,K}}=1.
\end{align} Recall that $\widetilde{F}_t(\mathcal{B}_{K,t,v})(g^{-1}x,x)$ vanishes on $X\backslash \mathcal{U}_{\varepsilon_0}$. Using (\ref{eq:5.72}) and (\ref{eq:6.52}), we get \begin{multline}\label{eq:6.54} \phi\int_{\mathcal{U}_{\varepsilon_0}'}\widetilde{\tr}'\left[g\frac{\sqrt{t}c(K^X)}{ 4v}\exp\left(-\mathcal{B}_{K,t,v}\right)(g^{-1}x,x) \right]dv_X(x) +\int_{X^g\cap\mathcal{U}_{\varepsilon_0}'} \{\gamma_{K,v}\}^{\max} dv_{X^g} \\ =\int_{y_0\in X^{g,K}}v^{(\ell-\ell')/2}\int_{|Z_0|\leq \varepsilon_0/\sqrt{v}} \left[\phi\int_{|Z|\leq \varepsilon_0}\widetilde{\tr}' \left[g\frac{\sqrt{t}c(K^X)}{4v}\exp\left(-\mathcal{B}_{K,t,v} \right)\right.\right. \\ (g^{-1}(y_0,\sqrt{v}Z_0,Z),(y_0,\sqrt{v}Z_0,Z)) \Big]\cdot k(y_0,\sqrt{v}Z_0,Z)dv_{N_{X^g/X}}(Z) \\ +\{\gamma_{K,v}\}^{\max}(y_0,\sqrt{v}Z_0) \Big] \bar{k}(y_0,\sqrt{v}Z_0) dv_{N_{X^{g,K}/X^g}}(Z_0)dv_{X^{g,K}} (y_0). \end{multline} Using Theorem \ref{thm:6.04} and (\ref{eq:6.54}), we find that there exist $C>0$ and $\beta_1>0$ such that for $z\in \field{R}^*$, $|z|\leq \beta_1$, \begin{multline}\label{eq:6.55} |z|^{\ell+1}\left|\phi\int_{\mathcal{U}_{\varepsilon_0}'} \widetilde{\tr}'\left[g\frac{\sqrt{t}c(K^X)}{4v}\exp\left(-\mathcal{B}_{K,t,v} \right)(g^{-1}x,x) \right]dv_X(x)+\int_{X^g\cap\mathcal{U}_{\varepsilon_0}'} \gamma_{K,v} \right| \\ \leq C|z|^{\ell-\ell'}\int_{y_0\in X^{g,K}}\int_{Z_0\in N_{X^{g,K}/X^g}, |Z_0|<\varepsilon_0}(1+|zZ_0|)^{-(\ell-\ell')-1}dZ_0\cdot \left(\frac{t}{v}\right)^{\delta} \leq C\left(\frac{t}{v}\right)^{\delta}. \end{multline} Similar estimates can be obtained for \begin{align}\label{eq:6.56} \left|\phi\int_{X\backslash\mathcal{U}_{\varepsilon_0}'}\widetilde{\tr}'\left[g\frac{ \sqrt{t}c(K^X)}{4v}\exp\left(-\mathcal{B}_{K,t,v}\right)(g^{-1}x,x) \right]dv_X(x)+\int_{X^g\backslash\mathcal{U}_{\varepsilon_0}'} \gamma_{K,v} \right|.
\end{align} In fact, on $X\backslash\mathcal{U}_{\varepsilon_0}'$, the function $|K^X|^2/2v$ has a positive lower bound. Then we apply the above techniques to the case where $X^{g,K}=\emptyset$: the potentially annoying term $\frac{\sqrt{t}c(K^X)}{4v}$ can be dominated by the term $|K^X|^2/2v$. The proof of Theorem \ref{thm:6.19} is completed. \end{proof} \subsection{A proof of Theorem \ref{thm:4.03} c)}\label{s0609} When $v\in [1,+\infty)$, $\frac{1}{v}$ remains bounded. By using the methods of the last section and of the present section, one sees easily that for $K_0\in \mathfrak{z}(g)$, $K=zK_0$, there exist $C>0$, $\beta>0$ such that for $t\in (0,1]$, $v\in [1,+\infty)$, $|zK_0|<\beta$, we have \begin{align}\label{eq:6.57} \left|\widetilde{\tr}'\left[g\sqrt{t}c(K^X)\exp\left(-\mathcal{B}_{K,t,v} \right) \right]\right|\leq C, \end{align} which is equivalent to Theorem \ref{thm:4.03} c). The proof of Theorem \ref{thm:4.03} c) is completed. \subsection{A proof of Theorem \ref{thm:4.03} d)}\label{s0610} In this subsection, we prove Theorem \ref{thm:4.03} d) by using the method in \cite[\S 9]{BG00}. Since the singular term there does not appear here, our proof is in fact much easier. We fix $g\in G$, $0\neq K_0\in \mathfrak{z}(g)$, and take $K=zK_0$ with $z\in \field{R}^*$. From Theorem \ref{thm:6.01}, we have \begin{multline}\label{eq:6.58} \mathcal{B}_{K,t,tv}=-t\left(\nabla_{e_i}^{\mathcal{E}} +\frac{1}{2\sqrt{t}}\langle S(e_i)e_j, f_p^H\rangle c(e_j)f^p \wedge\right.
\\ \left.+\frac{1}{4t}\langle S(e_i)f_p^H, f_q^H\rangle f^p\wedge f^q\wedge-\frac{\langle K^X,e_i\rangle}{4t}\left(1 +\frac{1}{v}\right)\right)^2 \\ +\frac{t}{4}H+\frac{t}{2}\left(R^{\mathcal{E}/\mS}(e_i,e_j)-\frac{1}{2vt} \langle\nabla_{e_i}^{TX}K^X,e_j\rangle\right)c(e_i)c(e_j) \\ +\sqrt{t}\left(R^{\mathcal{E}/\mS}(e_i,f_p^H)-\frac{1}{2vt}\langle T(e_i,f_p^H),K^X\rangle\right)c(e_i)f^p\wedge \\ +\frac{1}{2}\left(R^{\mathcal{E}/\mS}(f_p^H,f_q^H)-\frac{1}{8vt}\langle T(f_p^H,f_q^H), K^X\rangle\right)f^p\wedge f^q\wedge-m^{\mathcal{E}/\mS}(K^X)+\frac{1}{4vt}|K^X|^2. \end{multline} As in Sections \ref{s0503} and \ref{s0603}, the proof of Theorem \ref{thm:4.03} d) can be localized near $X^g$. In the following, we will concentrate on the estimates near $X^{g,K}$. As in (\ref{eq:6.56}), the proof of the estimates near $X^g$ and far from $X^{g,K}$ is much easier. We may assume that, for the $\varepsilon_0$ taken in Section \ref{s0604}, if $\varepsilon\in (0,8\varepsilon_0]$, the map $(y_0,Z)\in N_{X^{g,K}/X} \rightarrow \exp_{y_0}^X(Z)\in X$ induces a diffeomorphism from the $\varepsilon$-neighborhood $\mathcal{U}_{\varepsilon}'$ of $X^{g,K}$ in $N_{X^{g,K}/X}$ onto the tubular neighborhood $\mathcal{V}_{\varepsilon}'$ of $X^{g,K}$ in $X$, as in the proof of Theorem \ref{thm:6.19}. As in (\ref{eq:5.74}) and (\ref{eq:6.18}), we put \begin{multline}\label{eq:6.59} \,^3\nabla^{\mathcal{E},t}_{\cdot}:=\nabla^{\mathcal{E}}_{\cdot}+\frac{1}{ 2\sqrt{t}}\langle S(\cdot)e_j, f_p^H\rangle c(e_j)f^p\wedge \\ +\frac{1}{4t}\langle S(\cdot)f_p^H, f_q^H\rangle f^p\wedge f^q\wedge -\frac{\vartheta_K(\cdot)}{4t}\left(1+\frac{1}{v}\right). \end{multline} Take $y_0\in W^{g,K}$ in (\ref{eq:2.11}).
If $Z\in N_{X^{g,K}/X,y_0}$, $|Z|\leq 4\varepsilon_0$, we identify $\pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E}_Z$ with $\pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E}_{y_0}$ by parallel transport with respect to the connection $\,^3\nabla^{\mathcal{E},t}$ along the curve $u\in [0,1] \rightarrow uZ$. Recall that $\rho$ is the cut-off function in (\ref{eq:5.81}). Let \begin{align}\label{eq:6.60} L_{y_0,K}^{1,(t,v)}=(1-\rho^2(Z))(-t\Delta^{TX})+\rho^2(Z) (\mathcal{B}_{K,t,tv}). \end{align} We still define $H_t$ as in (\ref{eq:5.85}), and define $L_{y_0,K}^{2,(t,v)}$ as in (\ref{eq:5.86}) from $L_{y_0,K}^{1,(t,v)}$. Let $L_{y_0,K}^{3,(t,v)}$ be the operator obtained from $L_{y_0,K}^{2,(t,v)}$ by replacing $c(e_j)$ by $c_t(e_j)$ as in (\ref{eq:5.88}) for $1\leq j\leq \ell'$ (cf. (\ref{eq:2.13})), while leaving the $c(e_j)$'s unchanged for $\ell'+1\leq j\leq n$. As in (\ref{eq:3.19}), we have \begin{align}\label{eq:6.63} |K^X(y_0,Z)|^2=|m_{y_0}^{TX}(K)Z|^2+\mathcal{O}(|Z|^3). \end{align} Let $\jmath':W^{g,K}\rightarrow W$ be the obvious embedding. Put \begin{multline}\label{eq:6.64} L_{y_0,K}^{3,(0,v)}=-\left(\nabla_{e_i}+\frac{1}{4}\left\langle \left(\jmath'^*R^{TX}-\left(1+\frac{1}{v}\right)m^{TX}(K)\right)Z, e_i\right\rangle\right)^2 \\ +\jmath'^*R^E_{y_0}-m^{E}(K)_{y_0} -\frac{1}{4v}\sum_{j,k\geq \ell'+1} \langle m^{TX}(K)e_j,e_k\rangle_{y_0} c(e_j)c(e_k) \\ +\frac{1}{4v}\langle \jmath'^*R^{TX}(m^{TX}(K)Z), Z\rangle_{y_0} +\frac{1}{4v}|m_{y_0}^{TX}(K)Z|^2. \end{multline} From (\ref{eq:3.05}), (\ref{eq:3.18}), (\ref{eq:3.21}), (\ref{eq:6.62}), (\ref{eq:6.58}) and (\ref{eq:6.64}), as in Proposition \ref{prop:5.24}, we have, as $t\rightarrow 0$, \begin{align}\label{eq:6.65} L_{y_0,K}^{3,(t,v)}\rightarrow L_{y_0,K}^{3,(0,v)}. \end{align} Now we take a new trivialization as in Section \ref{s0605}. Take $Z_0\in N_{X^{g,K}/X^g,y_0}$, $|Z_0|\leq \varepsilon_0$.
If $Z\in T_{y_0}X$, $|Z|\leq 4\varepsilon_0$, we identify $\pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E}_{Z+Z_0}$ with $\pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E}_{Z_0}$ by parallel transport along the curve $u\in[0,1]\rightarrow \exp_{Z_0}^X(uZ)$ with respect to the connection $\,^3\nabla^{\mathcal{E},t}$. Also, we identify $\pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E}_{Z_0}$ with $\pi^*\Lambda(T^*B)\widehat{\otimes}\mathcal{E}_{y_0}$ by parallel transport along the curve $u\in[0,1]\rightarrow uZ_0$ with respect to the connection $\nabla^{\mathcal{E}}$. Using this trivialization, the analogues of \cite[Theorems 9.19 and 9.22]{BG00} hold here by the same arguments, except that the norm in \cite[(9.43)]{BG00} is replaced by \begin{align} |s|_{t,Z_0,0}^2=\int_{T_{y_0}X}|s(Z)|^2\left(1+(|Z|+|Z_0|)\,\rho\left( \frac{\sqrt{t}Z}{2}\right) \right)^{2(k+\ell'-p-r)}dv_{TX}(Z). \end{align} Here $s$ is a square-integrable section of $\left(\pi^*\Lambda^r (T^*B)\widehat{\otimes}\Lambda^p(T^*X^{g,K})\widehat{\otimes} \mS_{N_{X^{g,K}/X}}\otimes E\right)_{y_0}$ over $T_{y_0}X$, and $\dim B=k$. As in \cite[(9.52)-(9.57)]{BG00}, combining with (\ref{eq:3.17}), if $n$ is even, there exists $\beta>0$ such that if $z\in \field{R}^*$, $|z|\leq \beta$, then as $t\rightarrow 0$, \begin{multline}\label{eq:6.66} \int_{X^{g,K}}\int_{\substack{(Z_0,Z)\in N_{X^{g,K}/X^g} \times N_{X^g/X}, \\ |Z_0|,|Z|\leq \varepsilon_0}}\tr_s\left[g \frac{c(K^X)}{4\sqrt{t}v}\widetilde{F}_t(\mathcal{B}_{zK_0,t,tv}) (g^{-1}(y_0,Z_0,Z),\right. \\ \left. (y_0, Z_0, Z))\right]dv_X(y_0, Z_0, Z) \\ \rightarrow \int_{X^{g,K}}\int_{N_{X^{g,K}/X}}(-i)^{\ell'/2} 2^{\ell'/2} \tr_s\left[g\frac{c\left(m_{y_0}^{TX}(K)Z\right)}{4v} \exp\left(-L_{y_0,zK_0}^{3,(0,v)} \right)(g^{-1}Z,Z)\right] dv_{N_{X^{g,K}/X}}(Z).
\end{multline} The heat kernel $\exp\left(-L_{y_0,zK_0}^{3,(0,v)} \right)(g^{-1}Z,Z)$ can be calculated as in (\ref{eq:5.135}) by \cite[Theorem 4.13]{BG00}; it is an even function of $Z$ and can be controlled by $C\exp(-C'|Z|^2)$. So the right-hand side of (\ref{eq:6.66}) is the integral of an odd function of $Z$ over $N_{X^{g,K}/X}$, which is zero. If $n$ is odd, by Remark \ref{rem:5.22} and the same argument as above, as $t\rightarrow 0$, \begin{align}\label{eq:6.67} \int_{\mathcal{U}_{\varepsilon_0}'}\tr^{\mathrm{even}}\left[g\frac{c(K^X)}{ 4\sqrt{t}v}\exp\left(-\mathcal{B}_{K,t,tv} \right) \right]\rightarrow 0. \end{align} After adapting the above technique to the case where $X^{g,K} =\emptyset$, for $z\in \field{R}^*$, $|z|$ small enough, as $t\rightarrow 0$, we have \begin{align}\label{eq:6.68} \int_{X\backslash \mathcal{U}_{\varepsilon_0}'}\widetilde{\tr}'\left[g\frac{c(K^X)}{ 4\sqrt{t}v}\exp\left(-\mathcal{B}_{K,t,tv} \right) \right]\rightarrow 0. \end{align} The proof of Theorem \ref{thm:4.03} d) is completed. \begin{thebibliography}{1} \bibitem{APS75} M.~F. Atiyah, V.~K. Patodi, and I.~M. Singer, \emph{Spectral asymmetry and {R}iemannian geometry. {I}}, Math. Proc. Cambridge Philos. Soc. \textbf{77} (1975), 43--69. \bibitem{AS69} M.~F.~Atiyah and I.~M.~Singer, \emph{Index theory for skew-adjoint {F}redholm operators}, Inst. Hautes \'Etudes Sci. Publ. Math. \textbf{37} (1969), 5--26. \bibitem{BGV} N.~Berline, E.~Getzler, and M.~Vergne, \emph{Heat kernels and {D}irac operators}, Grundlehren Text Editions, Springer-Verlag, Berlin, 2004, Corrected reprint of the 1992 original. \bibitem{BV83} N.~Berline and M.~Vergne, \emph{Z\'eros d'un champ de vecteurs et classes caract\'eristiques \'equivariantes}, Duke Math. J. \textbf{50} (1983), no.~2, 539--549. \bibitem{Be09} A.~Berthomieu, \emph{Direct image for some secondary {K}-theories}, Ast\'erisque \textbf{327} (2009), 289--360.
\bibitem{BB94} A.~Berthomieu and J.-M.~Bismut, \emph{Quillen metrics and higher analytic torsion forms}, J. Reine Angew. Math. \textbf{457} (1994), 85--184. \bibitem{Bi85} J.-M.~Bismut, \emph{The infinitesimal {L}efschetz formulas: a heat equation proof}, J. Funct. Anal. \textbf{62} (1985), no.~3, 435--457. \bibitem{Bi86} J.-M.~Bismut, \emph{The {A}tiyah-{S}inger index theorem for families of {D}irac operators: two heat equation proofs}, Invent. Math. \textbf{83} (1986), no.~1, 91--151. \bibitem{Bi86a} J.-M.~Bismut, \emph{Localization formulas, superconnections, and the index theorem for families}, Comm. Math. Phys. \textbf{103} (1986), no.~1, 127--166. \bibitem{Bi94} J.-M.~Bismut, \emph{Equivariant short exact sequences of vector bundles and their analytic torsion forms}, Comp. Math. \textbf{93} (1994), 291--354. \bibitem{Bi97} J.-M.~Bismut, \emph{Holomorphic families of immersions and higher analytic torsion forms}, Ast\'erisque \textbf{244} (1997), viii+275 pp. \bibitem{B11a} J.-M. Bismut, \emph{{D}uistermaat-{H}eckman formulas and index theory}, Geometric aspects of analysis and mechanics, Progr. Math., vol. \textbf{292}, Birkh\"auser/Springer, New York, 2011, pp.~1--55. \bibitem{BC89} J.-M.~Bismut and J.~Cheeger, \emph{$\eta$-invariants and their adiabatic limits}, J. Amer. Math. Soc. \textbf{2} (1989), 33--70. \bibitem{BC90I} J.-M.~Bismut and J.~Cheeger, \emph{Families index for manifolds with boundary, superconnections, and cones. {I}. {F}amilies of manifolds with boundary and {D}irac operators}, J. Funct. Anal. \textbf{89} (1990), no.~2, 313--363. \bibitem{BC90II} J.-M.~Bismut and J.~Cheeger, \emph{Families index for manifolds with boundary, superconnections and cones. {II}.\ {T}he {C}hern character}, J. Funct. Anal. \textbf{90} (1990), no.~2, 306--354. \bibitem{BF86I} J.-M.~Bismut and D.~Freed, \emph{The analysis of elliptic families. {I}. {M}etrics and connections on determinant bundles}, Comm. Math. Phys.
\textbf{106} (1986), no.~1, 159--176. \bibitem{BF86II} J.-M.~Bismut and D.~Freed, \emph{The analysis of elliptic families. {II}. {D}irac operators, eta invariants, and the holonomy theorem}, Comm. Math. Phys. \textbf{107} (1986), no.~1, 103--163. \bibitem{BGSII} J.-M.~Bismut, H.~Gillet, and C.~Soul\'e, \emph{Analytic torsion and holomorphic determinant bundles. {II}. {D}irect images and {B}ott-{C}hern forms}, Comm. Math. Phys. \textbf{115} (1988), no.~1, 79--126. \bibitem{BG00} J.-M.~Bismut and S.~Goette, \emph{Holomorphic equivariant analytic torsions}, Geom. Funct. Anal. \textbf{10} (2000), no.~6, 1289--1422. \bibitem{BG04} J.-M.~Bismut and S.~Goette, \emph{Equivariant de {R}ham torsions}, Ann. of Math. (2) \textbf{159} (2004), no.~1, 53--216. \bibitem{BL91} J.-M.~Bismut and G.~Lebeau, \emph{Complex immersions and {Q}uillen metrics}, Inst. Hautes \'Etudes Sci. Publ. Math. \textbf{74} (1991), ii+298 pp. \bibitem{BK92} J.-M.~Bismut and K.~K\"ohler, \emph{Higher analytic torsion forms for direct images and anomaly formulas}, J. Algebraic Geom. \textbf{1} (1992), no.~4, 647--684. \bibitem{BKR11} J.~Br\"uning, F.~W.~Kamber and K.~Richardson, \emph{Index theory for basic Dirac operators on Riemannian foliations}, Noncommutative geometry and global analysis, 39--81, Contemp. Math., 546, Amer. Math. Soc., Providence, RI, 2011. \bibitem{BrM06} J.~Br\"uning and X.~Ma, \emph{An anomaly formula for {R}ay-{S}inger metrics on manifolds with boundary}, Geom. Funct. Anal. \textbf{16} (2006), 767--837. \bibitem{BM04} U.~Bunke and X.~Ma, \emph{Index and secondary index theory for flat bundles with duality}, Aspects of boundary problems in analysis and geometry, Oper. Theory Adv. Appl., vol. 151, Birkh\"auser, Basel, 2004, pp.~265--341. \bibitem{BS09} U.~Bunke and T.~Schick, \emph{Smooth {$K$}-theory}, Ast\'erisque \textbf{328} (2009), 45--135.
\bibitem{Ch87} J.~Cheeger, \emph{$\eta$-invariants, the adiabatic approximation and conical singularities. I. The adiabatic approximation}, J. Differential Geom. \textbf{26} (1987), no.~1, 175--221. \bibitem{Dai91} X.~Dai, \emph{Adiabatic limits, nonmultiplicativity of signature, and {L}eray spectral sequence}, J. Amer. Math. Soc. \textbf{4} (1991), no.~2, 265--321. \bibitem{DZ15} X.~Dai and W.~Zhang, \emph{Eta invariant and holonomy: the even dimensional case}, Adv. Math. \textbf{279} (2015), 291--306. \bibitem{dR73} G.~de Rham, \emph{Vari\'et\'es diff\'erentiables. {F}ormes, courants, formes harmoniques}, Hermann, Paris, 1973, Troisi{\`e}me {\'e}dition revue et augment{\'e}e, Publications de l'Institut de Math{\'e}matique de l'Universit{\'e} de Nancago, III, Actualit{\'e}s Scientifiques et Industrielles, No. 1222b. \bibitem{D78} H.~Donnelly, \emph{Eta invariants for {$G$}-spaces}, Indiana Univ. Math. J. \textbf{27} (1978), no.~6, 889--918. \bibitem{E13} J.~Ebert, \emph{A vanishing theorem for characteristic classes of odd-dimensional manifold bundles}, J. Reine Angew. Math. \textbf{684} (2013), 1--29. \bibitem{FH00} D.~S.~Freed and M.~Hopkins, \emph{On {R}amond-{R}amond fields and {$K$}-theory}, J. High Energy Phys. (2000), no.~5, Paper 44, 14 pp. \bibitem{FreedLott10} D.~S.~Freed and J.~Lott, \emph{An index theorem in differential {$K$}-theory}, Geom. Topol. \textbf{14} (2010), no.~2, 903--966. \bibitem{GilS} H.~Gillet and C.~Soul\'e, \emph{Analytic torsion and the arithmetic Todd genus}, Topology \textbf{30} (1991), no.~1, 21--54. \bibitem{Go00} S.~Goette, \emph{Equivariant {$\eta$}-invariants and {$\eta$}-forms}, J. Reine Angew. Math. \textbf{526} (2000), 181--236. \bibitem{Go09} S.~Goette, \emph{Eta invariants of homogeneous spaces}, Pure Appl. Math. Q. \textbf{5} (2009), no.~3, 915--946. \bibitem{HopkinsSinger05} M.~J.~Hopkins and I.~M.~Singer, \emph{Quadratic functions in geometry, topology, and $M$-theory}, J.
Differ. Geom. \textbf{70} (2005), 329--452. \bibitem{Liu17a} B.~Liu, \emph{Functoriality of equivariant eta forms}, J. Noncommut. Geom. \textbf{11} (2017), no.~1, 225--307. \bibitem{Liu17b} B.~Liu, \emph{Real embedding and equivariant eta forms}, Math. Z. \textbf{292} (2019), 849--878. \bibitem{LM18} B.~Liu and X.~Ma, \emph{Differential K-theory, $\eta$-invariants and localization}, C. R. Math. Acad. Sci. Paris \textbf{357} (2019), 803--813. \bibitem{LM18a} B.~Liu and X.~Ma, \emph{Differential {$K$}-theory and localization formula for $\eta$-invariants}, Invent. Math. \textbf{222} (2020), 545--613. \bibitem{LM00} K.~Liu and X.~Ma, \emph{On family rigidity theorems. {I}}, Duke Math. J. \textbf{102} (2000), no.~3, 451--474. \bibitem{LMZ00} K.~Liu, X.~Ma, and W.~Zhang, \emph{{${\rm Spin}^c$} manifolds and rigidity theorems in {$K$}-theory}, Asian J. Math. \textbf{4} (2000), no.~4, 933--959. \bibitem{Ma99} X.~Ma, \emph{Formes de torsion analytique et familles de submersions. {I}}, Bull. Soc. Math. France \textbf{127} (1999), no.~4, 541--621. \bibitem{Ma00} X.~Ma, \emph{Formes de torsion analytique et familles de submersions. {II}}, Asian J. Math. \textbf{4} (2000), 633--667. \bibitem{Ma00a} X.~Ma, \emph{Submersions and equivariant Quillen metrics}, Ann. Inst. Fourier (Grenoble) \textbf{50} (2000), 1539--1588. \bibitem{MM07} X.~Ma and G.~Marinescu, \emph{Holomorphic {M}orse inequalities and {B}ergman kernels}, Progress in Mathematics, vol. 254, Birkh\"auser Verlag, Basel, 2007. \bibitem{MP97} R.~B.~Melrose and P.~Piazza, \emph{Families of {D}irac operators, boundaries and the {$b$}-calculus}, J. Differential Geom. \textbf{46} (1997), no.~1, 99--180. \bibitem{Mu95} W.~M\"uller, \emph{The eta invariant (some recent developments)}, S\'eminaire Bourbaki, Vol. 1993/94, Ast\'erisque \textbf{227} (1995), Exp. No. 787, 335--364. \bibitem{Puchol16} M.~Puchol, \emph{L'asymptotique des formes de torsion analytique holomorphe}, C. R.
Acad. Sci. Paris \textbf{354} (2016), no.~3, 301--306. \bibitem{RS75} M.~Reed and B.~Simon, \emph{Methods of modern mathematical physics. {II}. {F}ourier analysis, self-adjointness}, Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1975. \bibitem{SimonsS08} J.~Simons and D.~Sullivan, \emph{Axiomatic characterization of ordinary differential cohomology}, J. Topol. \textbf{1} (2008), no.~1, 45--56. \bibitem{Soule92} C.~Soul\'e, \emph{Lectures on {A}rakelov geometry}, Cambridge Studies in Advanced Mathematics, vol.~33, Cambridge University Press, Cambridge, 1992. With the collaboration of D. Abramovich, J.-F. Burnol and J. Kramer. \bibitem{Wa14} Y.~Wang, \emph{The noncommutative infinitesimal equivariant index formula}, J. K-Theory \textbf{14} (2014), no.~1, 73--102. \bibitem{Wi15} A.~Wittmann, \emph{Analytical index and eta forms for Dirac operators with one-dimensional kernel over a hypersurface}, arXiv:1503.02002. \bibitem{Zh94} W.~Zhang, \emph{Circle bundles, adiabatic limits of $\eta$-invariants and Rokhlin congruences}, Ann. Inst. Fourier (Grenoble) \textbf{44} (1994), no.~1, 249--270. \end{thebibliography} \end{document}
\begin{document} \title{Naimark extension for the single-photon canonical phase measurement} \author{Nicola Dalla Pozza} \affiliation{Quantum Driving and Bio-complexity, Dipartimento di Fisica e Astronomia, Universit\`a degli Studi di Firenze, I-50019 Sesto Fiorentino, Italy}\email{[email protected]} \author{Matteo G. A. Paris} \affiliation{Quantum Technology Lab, Dipartimento di Fisica 'Aldo Pontremoli', Universit\`a degli Studi di Milano, I-20133 Milano, Italy}\email{[email protected]} \date{\today} \begin{abstract} We address the implementation of the positive operator-valued measure (POVM) describing the optimal $M$-outcome discrimination of the polarization state of a single photon. First, the POVM elements are extended to projective operators by the Naimark theorem; then the resulting projective measure is implemented by a Knill-Laflamme-Milburn scheme involving an optical network and photon counters. We find the analytical expression of the Naimark extension and the detection scheme that realises it for an arbitrary number of outcomes $M=2^N$. \end{abstract} \maketitle \section{\label{introduction}Introduction} Quantum information processing with flying qubits, especially photons, has been extensively studied in the last decades, with applications in communication \cite{Gisin2007, Gyongyosi2018}, quantum key distribution (QKD) \cite{Gisin2002, Lo2014}, quantum networking \cite{Elliott2002, Lloyd2004, Kimble2008} and universal quantum computing \cite{Knill2001, Raussendorf2001, Ralph2006}. While remarkable results for communication and QKD have already been proved \cite{Chen2012, Pfister2018, Boaron2018, Zhang2018}, there is great expectation for the medium-to-long-term realization of a quantum internet \cite{Dahlberg2018, Wehner2018}, and possibly of a photonic quantum computer \cite{Takeda2019}. In all these scenarios, photons are used as qubit encoders and information carriers because of their ability to travel long distances with minimal decoherence.
Information processing is performed at the encoding and decoding stage, and in intermediate stages as well (e.g. in repeaters \cite{Briegel1998, Azuma2015}) to implement quantum gates. Depending on the qubit encoding into the possible degrees of freedom, gates may have easy or more challenging implementations in terms of resources \cite{Ralph2005}, with two-qubit gates being the hardest components to realize due to the small photon-photon coupling that can be obtained via matter-mediated processes \cite{Thompson2011}. Several theoretical and experimental proposals have been put forward \cite{OBrien2009, Northup2014, Diamanti2016, ShenoyHejamadi2017, Pirandola2019, Kok2007, Krovi2017}, but the transition from theory to practice is not always straightforward. In this paper, we focus our attention on quantum measurements and consider a detection scheme which arises in optimal discrimination theory. The detection operators are described by an analytical expression, and we translate it into an optical scheme. We believe the steps undertaken in this paper are relevant in the framework of current technology and that they will prove useful for the implementation of other measurement schemes in linear optics quantum computing with discrete variables. \par The most general description of a quantum measurement is provided by positive operator-valued measures (POVMs) acting on the Hilbert space of the systems under investigation. Mathematically speaking, a POVM is a resolution of identity made by sets of positive operators $\{\Pi_k\}$, $\sum_k \Pi_k = {\mathbb I}$. The cardinality of the POVM is not limited by the dimension of the Hilbert space and the operators $\{\Pi_k\}$ are not required to be projectors. POVMs have found useful applications in several fields of quantum information theory, e.g. in unambiguous quantum discrimination, where the optimal detection scheme may not correspond to a projective measurement. 
\par Experimental realizations, however, always involve {\em observable} quantities, which strictly correspond to projective measures. Therefore, the challenge usually comes in finding an appropriate detection scheme to realize a given POVM. Fortunately, there is a canonical route to achieve this goal, which is provided by the Naimark theorem \cite{Naimark1940, Akhiezer1993, Helstrom1973, Helstrom1976, Holevo2001}, ensuring that the POVM formulation may always be extended to a projective one, which may then be implemented experimentally. More explicitly, the Naimark theorem states that for any POVM $\{\Pi_k\}_{k\in{\cal K}}$ on the Hilbert space $\mathcal{H}_S$, generating a probability distribution $p(k) = \hbox{Tr}_{S}[\rho\, \Pi_k]$, $\forall \rho$, there exists a set of orthogonal projectors $\{P_k\}_{k\in{\cal K}}$ on the enlarged Hilbert space $\mathcal{H}_A \otimes \mathcal{H}_S$ and a pure state $|\omega\rangle\in \mathcal{H}_A$ such that $$ p(k)=\hbox{Tr}_{S}[\rho\, \Pi_k] =\hbox{Tr}_{AS}[\rho\otimes |\omega\rangle\langle\omega|\, P_k]\ . $$ \par In this paper, we address the implementation of the POVM $\{\Pi_k\}$, $k= 0,1,\ldots ,M-1$, which acts on the Hilbert space of a two-level system and describes the optimal $M$-outcome discrimination of the polarization state of a single photon. The explicit expression is given by \begin{align} \Pi_k & = \frac{2}{M}\, \pure{\psi_k} \label{povm}\\ |\psi_k\rangle & = \frac1{\sqrt{2}} \left(e^{-i \frac\pi{M} k} |0\rangle + e^{i \frac{\pi}{M}k}|1\rangle\right)\,. \end{align} Our strategy to solve the problem is the following: firstly, the POVM elements $\Pi_k$ are extended to projective operators upon exploiting the Naimark theorem; secondly, the resulting projective measure is implemented by a Knill-Laflamme-Milburn scheme involving an optical network and photon counters.
The key idea is to consider an interferometer with a single rail input mode to receive the input signal, and multiple output path modes, one for each outcome. Each of these output modes goes to photon counters, such that when we record a click we assign the corresponding outcome. In this way, we have found explicitly the Naimark extension and the detection scheme for an arbitrary number of outcomes $M=2^N$, $N\geq 1$. \par The paper is structured as follows. In Section \ref{sec:Naimark} the Naimark theorem is introduced and the algorithm to evaluate the extension of the phase measurement is described. In Section \ref{application} the application of the Naimark extension to the phase measurement of the polarization of a single photon is presented. In Section \ref{Zconstruction} we give the corresponding expressions in the case of $M=8$. The extension is factorized into a sequence of unitaries in Section \ref{decomposition}, with Section \ref{Zdecomposition} treating the case $M=8$. The implementation of the unitaries is provided in Section \ref{physicalRealization}, and two different schemes for the phase measurement are designed in Sections \ref{direct} and \ref{folded}. Section IV closes the paper with some concluding remarks. \section{Naimark extension of phase POVM} \label{sec:Naimark} We are going to consider phase-measurements on a qubit. In particular, we consider measurements described by the POVM $\{\Pi_k\}$, where the $k$-th outcome, $k \in \{0, 1, \ldots, M-1\}$, is associated with the phase value $\theta_k = k\frac{2\pi}{M}$. The number of outcomes $M$ defines the resolution of the measurement scheme, and it may be arbitrarily high. The scheme we are going to discuss works for $M$ being a power of 2, i.e. $M=2^N$ with $N \in \mathbb{N}$, $N \geq 1$. \par As a matter of fact, many different kinds of phase measurements have been analysed and discussed, with the main goal of achieving optimal phase estimation.
In this paper we study how to implement the phase measurement which is the solution of the following problem. Given the states $$\ket{\varphi_k} = \frac{\ket{0} + e^{i \varphi_k} \ket{1}}{\sqrt{2}}\,,$$ where $\varphi_k = \frac{2\pi}{M}k$, $ k \in \{0, 1, \ldots, M-1\}$, drawn with equal probability $\frac{1}{M}$, find the optimal POVM $\{\Pi_k\}$ that maximizes the probability of guessing correctly, i.e. \begin{equation} P_{guessing} = \frac{1}{M}\sum_k P[\theta_k | \varphi_k] = \frac{1}{M}\sum_k \san{\varphi_k}{\Pi_k}{\varphi_k} \ . \label{correctProbability} \end{equation} The problem is well known because of the symmetry of the states, and the solution, which was found long ago \cite{Helstrom1976}, is the POVM in Eq. (\ref{povm}). This optimal POVM may also be seen as an approximate \emph{canonical phase-measurement}, that is, the measurement described by the POVM $\{\Pi_\theta\}$, $\theta \in [0, 2\pi)$, given in the standard basis as \begin{equation} \Pi_\theta = \frac{1}{2 \pi}\,\pure{\theta}\,, \quad \ket{\theta} = \sum_n e^{{i\mkern1mu} n \theta} \ket{n}\ . \label{canonicalPhaseMeasurement} \end{equation} When we restrict the Hilbert space to the subspace spanned by $\ket{n}=\{\ket{0},\ \ket{1}\}$ and we discretize the outcome $\theta$ into the $M$ values $\theta_k$, we obtain the POVM $\{\Pi_k\}$ of Eq. (\ref{povm}). Note that each POVM element may be expressed in terms of the column vectors \begin{equation} \ket{\psi_k} = \frac{1}{\sqrt{2}} \begin{bmatrix} e^{-i k\frac{\pi}{M}} \\ e^{i k \frac{\pi}{M}} \end{bmatrix} \label{phaseMeasurementPOVM} \end{equation} for $k= 0,1,\ldots ,M-1$, and also as \begin{equation} \Pi_k = X_k X_k^{\dagger}, \quad X_k=\frac{1}{\sqrt{M}} \begin{bmatrix} e^{-i k\frac{\pi}{M}} \\ e^{i k \frac{\pi}{M}} \end{bmatrix}\,, \end{equation} i.e., using the \emph{unnormalized} column vectors $X_k$. Note also that the POVM elements are not orthogonal, except for $M=2$: $\Pi_k \Pi_l \neq \Pi_k \delta_{k,l}$.
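As a quick numerical sanity check (a sketch of ours, not part of the original derivation; variable names are hypothetical), the POVM of Eq.~(\ref{povm}) can be built explicitly, its completeness verified, and the guessing probability for the equiprobable states $\ket{\varphi_k}$ evaluated; with the equal prior $1/M$ it comes out as $2/M$:

```python
import numpy as np

M = 8  # number of outcomes, M = 2^N

# |psi_k> and Pi_k = (2/M)|psi_k><psi_k| from Eq. (povm)
psi = [np.array([np.exp(-1j*np.pi*k/M), np.exp(1j*np.pi*k/M)]) / np.sqrt(2)
       for k in range(M)]
Pi = [2/M * np.outer(p, p.conj()) for p in psi]

# Resolution of the identity: sum_k Pi_k = I
assert np.allclose(sum(Pi), np.eye(2))

# Guessing probability for the equiprobable states |phi_k>, phi_k = 2*pi*k/M,
# including the 1/M prior; the optimum equals 2/M (= 1 for M = 2).
phi = [np.array([1, np.exp(2j*np.pi*k/M)]) / np.sqrt(2) for k in range(M)]
p_guess = sum(np.vdot(phi[k], Pi[k] @ phi[k]).real for k in range(M)) / M
assert np.isclose(p_guess, 2/M)
```

The check exploits the fact that $\ket{\psi_k}$ coincides with $\ket{\varphi_k}$ up to a global phase, so each conditional probability equals $2/M$.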
\par The POVM elements are operators on the original system Hilbert space $\mathcal{H}_S$ and, according to the Naimark Theorem \cite{Naimark1940, Akhiezer1993, Helstrom1973, Helstrom1976, Holevo2001}, may be implemented as a \emph{projective measurement} in a larger Hilbert space $\mathcal{H}$, usually referred to as the \emph{Naimark extension} of the POVM. Actually, the theorem ensures that a \emph{canonical extension} exists amongst infinitely many others, i.e., an implementation as an indirect measurement, where the system under investigation is coupled to an independently prepared probe system \cite{Peres1990} and then only the probe is subject to a (projective) measurement \cite{He2007, Bergou2010, Paris2012}, whose statistics mimic that of the POVM. \par Here we look for a Naimark extension of the POVM $\{\Pi_k\}$ in Eq. (\ref{povm}) using a recursive algorithm designed in a previous paper \cite{DallaPozza2017}. The algorithm builds the projectors one by one, enlarging the size of the Hilbert space $\mathcal{H}$ only when necessary. As we will see in a moment, the projectors have rank one and may be described in a matrix representation as $P_k = Z_k Z_k^{\dagger}$, with $Z_k$ a column vector. Each projector must verify a set of orthogonality conditions, which translates to constraints on $Z_k$, i.e. \begin{equation} P_k P_l = 0, \ k\neq l \quad \iff \quad Z_k^\dagger Z_l = 0 \ , \label{orthogonalityCondition} \end{equation} as well as an idempotent condition, \begin{equation} (P_k )^2 = P_k \quad \iff \quad Z_k^\dagger Z_k = 1 \ . \label{idempotentCondition} \end{equation} In addition, in order to be the extension of a POVM element, each projector $P_k$ must satisfy \begin{equation} \Pi_k = \tr[A]{P_k ( \rho_A \otimes \mathbb{I}_S) }, \label{POVMrelation} \end{equation} which is the constraint required so that the outcome probabilities evaluated in $\mathcal{H}_S$ and in $\mathcal{H}$ coincide, i.e. \begin{equation} \tr[S]{\Pi_k \rho_S} = \tr[AS]{P_k ( \rho_A \otimes \rho_S) }\ .
\label{sameOutcomeProbabilityConstraint} \end{equation} In Eqs.~\eqref{POVMrelation} and \eqref{sameOutcomeProbabilityConstraint}, we have introduced the enlarged Hilbert space $\mathcal{H}$ given by the tensor product of an ancillary Hilbert space $\mathcal{H}_A$ and the original Hilbert space $\mathcal{H}_S$, i.e. $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_S$. We have also introduced an auxiliary state $\rho_A$ defined in $\mathcal{H}_A$, whose choice gives some degrees of freedom in building the extension. Following the suggestion of Helstrom \cite{Helstrom1976}, we use $\rho_A = \pure{e_1^{\mathcal{A}}}$, where $\ket{e_1^{\mathcal{A}}}$ is the ancillary pure state whose column representation is the vector $e_1$ of the canonical basis with the appropriate size \footnote{Since the algorithm enlarges the size of the Hilbert space $\mathcal{H}$ only when necessary, the size of $\mathcal{H}_A$ is only determined at the end.}, i.e., with all the entries equal to zero except for the first one. The recursive algorithm works by building the columns $Z_k$ one at a time. In each column, the upper coefficients are set equal to $X_k$ \footnote{By Eq.~\eqref{POVMrelation}, the choice \unexpanded{$\rho_A = \pure{e_1^{\mathcal{A}}}$} imposes that the first entries of $Z_k$ are equal to $X_k$.}. Then, the following coefficients are found by imposing the orthogonality condition \eqref{orthogonalityCondition} with the previously found $Z_0, \ldots, Z_{k-1}$. Finally, the last coefficient is obtained by solving Eq.~\eqref{idempotentCondition}. All following coefficients are set to zero. If, during the evaluation, the provisional vector is already orthogonal to the $Z_l$, $l<k$, or already has unit norm, it is not necessary to add another coefficient. This helps in reducing the growth in size of $\mathcal{H}$. The paper \cite{DallaPozza2017} actually finds a general expression for the coefficients to solve the orthogonality and idempotency constraints.
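As a numerical illustration of the extension idea (this is our own sketch, not the recursive coefficient-by-coefficient scheme of \cite{DallaPozza2017}): since the columns $X_k$ form the rows of a $2\times M$ partial isometry, any completion of those two rows to a unitary yields \emph{a} valid Naimark extension, e.g. via an SVD. The hypothetical snippet below checks that such a completion reproduces the POVM statistics with the ancilla prepared in the first basis vector:

```python
import numpy as np

M = 8
# A = [X_0 ... X_{M-1}] is 2 x M with A A^dag = I_2 (POVM completeness)
A = np.array([[np.exp(-1j*np.pi*k/M), np.exp(1j*np.pi*k/M)]
              for k in range(M)]).T / np.sqrt(M)

# Complete the two rows of A to an M x M unitary: the trailing right
# singular vectors span the orthogonal complement of A's row space.
_, _, vh = np.linalg.svd(A)
U = np.vstack([A, vh[2:]])
assert np.allclose(U @ U.conj().T, np.eye(M))

# Each column Z_k = U[:, k] starts with X_k, so for a pure system state
# |phi> and ancilla |e_1>:  |<Z_k|(e_1 (x) phi)>|^2 = <phi|Pi_k|phi>.
phi = np.array([1, np.exp(2j*np.pi*3/M)]) / np.sqrt(2)   # e.g. |phi_3>
for k in range(M):
    p_naimark = abs(np.vdot(U[:, k], np.concatenate([phi, np.zeros(M - 2)])))**2
    p_povm = abs(np.vdot(A[:, k], phi))**2               # <phi|Pi_k|phi>
    assert np.isclose(p_naimark, p_povm)
```

This generic completion needs no particular ordering; the recursive algorithm described above instead controls where the nonzero coefficients appear, which is what makes the closed-form expression of the next section possible.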
The algorithm can be implemented numerically to find the Naimark extension of the POVM $\{\Pi_k\}$. However, an analytical expression for the projectors for arbitrarily high $M$ can be found when employing the order $Z_0$, $Z_{M/2}$, $Z_1$, $Z_{M/2+1}$, \dots, $Z_{M-1}$ for their evaluation. The projectors are hence extended in pairs, evaluating $Z_k$ and $Z_{k + M/2}$ for $k=0,\dots,M/2-1$, resulting in the overall expressions of Eq. \eqref{Zk}. To better illustrate how the recursive algorithm works, we show the case $M=8$ in Section \ref{Zconstruction}. As expected, the first two coefficients are $X_k$. Then, $2(k+1)$ coefficients are defined, followed by zero entries that pad the vector up to the size of $M$. For $k=M/2-1$ and $k=M-1$ the last coefficients (which would overflow the length of $M$) are zeros, so that $Z_{M/2-1}$ and $Z_{M-1}$ can be truncated to the correct length. The columns $Z_k$ can be packed in the $M \times M$ matrix $Z$, \begin{equation} Z = \left[ Z_0 \ Z_{M/2} \ Z_1 \ Z_{M/2+1} \ \cdots \ Z_{M/2-1} \ Z_{M-1} \right] \label{matrixZ} \end{equation} and Eqs.~\eqref{orthogonalityCondition}, \eqref{idempotentCondition} can be checked analytically or numerically to verify $Z^{\dagger} \cdot Z = Z \cdot Z^{\dagger} = I$. As an example, we evaluate $Z$ for $M=8$ in Section \ref{Zconstruction} and report it in Eq.~\eqref{matrixZ8}. Expression \eqref{Zk} may be proved by induction.
\begin{widetext} \begin{equation} Z_k =\begin{bmatrix} \frac{e^{-i\frac{k}{M}\pi}}{\sqrt{M}} \\ \frac{e^{i\frac{k}{M}\pi}}{\sqrt{M}} \\ - \frac{2}{\sqrt{M(M-2)}}\cos \left(\frac{k}{M} \pi \right)\\ -\frac{2}{\sqrt{M(M-2)}}\sin \left(\frac{k}{M} \pi\right) \\ - \frac{2}{\sqrt{(M-2)(M-4)}}\cos \left(\frac{k-1}{M} \pi\right)\\ - \frac{2}{\sqrt{(M-2)(M-4)}}\sin \left(\frac{k-1}{M} \pi\right) \\ \vdots \\ - \frac{2}{\sqrt{(M-2k+2)(M-2k)}}\cos \left(\frac{1}{M} \pi\right)\\ - \frac{2}{\sqrt{(M-2k+2)(M-2k)}}\sin \left(\frac{1}{M} \pi\right) \\ \sqrt{\frac{M-2k-2}{M-2k}} \\ 0 \\ 0\\ \vdots\\ 0 \end{bmatrix} , \quad Z_{k+M/2} =\begin{bmatrix} \frac{e^{-i\frac{k+M/2}{M}\pi}}{\sqrt{M}} \\ \frac{e^{i\frac{k+M/2}{M}\pi}}{\sqrt{M}} \\ \frac{2}{\sqrt{M(M-2)}}\sin \left(\frac{k}{M} \pi\right)\\ - \frac{2}{\sqrt{M(M-2)}}\cos \left(\frac{k}{M} \pi\right) \\ \frac{2}{\sqrt{(M-2)(M-4)}}\sin \left(\frac{k-1}{M} \pi\right)\\ - \frac{2}{\sqrt{(M-2)(M-4)}}\cos \left(\frac{k-1}{M} \pi\right) \\ \vdots \\ \frac{2}{\sqrt{(M-2k+2)(M-2k)}}\sin \left(\frac{1}{M} \pi\right)\\ - \frac{2}{\sqrt{(M-2k+2)(M-2k)}}\cos \left(\frac{1}{M} \pi\right) \\ 0\\ \sqrt{\frac{M-2k-2}{M-2k}} \\ 0\\ \vdots\\ 0 \end{bmatrix} \label{Zk} \end{equation} \end{widetext} \subsection{\label{Zconstruction}Recursive evaluation of $Z_k$ for $M=8$} In this section we illustrate how the recursive algorithm presented in \cite{DallaPozza2017} builds the columns of $Z$ for $M=8$. The procedure can be followed by looking at the columns of Eq. \eqref{matrixZ8}, which represent the resulting matrix. As anticipated in Section \ref{sec:Naimark}, the columns are evaluated one at a time starting from $Z_0$ and following the order of Eq. \eqref{matrixZ}. In $Z_0$, the first two coefficients are $X_0 = [1/\sqrt{M},\ 1/\sqrt{M} ]^T$. Then, the next coefficient is obtained by imposing the condition that the overall $Z_0$ has unit norm, as in Eq.~\eqref{idempotentCondition}.
These three coefficients will be extended and padded with zeros once the final length is known. For the second column, which corresponds to $Z_4$, the first two coefficients are $X_4 = [-{i\mkern1mu}/\sqrt{M},\ {i\mkern1mu}/\sqrt{M}]^T$. The following one is obtained by imposing the orthogonality constraint \eqref{orthogonalityCondition} with $Z_0$, which gives a zero coefficient in the third entry. The next coefficient is obtained from \eqref{idempotentCondition} imposing the unit norm. The third column, which corresponds to $Z_1$, is evaluated with the same procedure. The first two coefficients are $X_1=[e^{-i \pi /8 }/\sqrt{M},\ e^{i \pi /8 }/\sqrt{M}]^T$. The following coefficients are obtained from the orthogonality constraint with $Z_0$ and $Z_4$, giving $-2\cos \left(\pi /8 \right)/\sqrt{(M-2)M}$ and $-2 \sin \left(\pi /8 \right)/\sqrt{(M-2)M}$, respectively. The next coefficient is again obtained from the idempotent constraint \eqref{idempotentCondition}. The recursive procedure continues in the same way for the remaining columns, first by copying the coefficients of $X_k$, then by imposing the orthogonality constraint \eqref{orthogonalityCondition} with all the previous columns, and finally evaluating the last coefficient by the idempotent constraint \eqref{idempotentCondition}. Note that although this procedure may produce up to $M+2$ coefficients per column, the last coefficients of the final two columns are zero for $M=8$, so the columns can be truncated to the correct length $M=8$.
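The closed form of Eq.~\eqref{Zk} can be transcribed directly into code. The sketch below is our own transcription (0-based indices, columns in the order of Eq.~\eqref{matrixZ}) and checks the orthogonality and idempotency conditions numerically:

```python
import numpy as np

def naimark_Z(M):
    """Matrix Z of Eq. (matrixZ), columns built from the closed form
    of Eq. (Zk): Z_0, Z_{M/2}, Z_1, Z_{M/2+1}, ..., Z_{M/2-1}, Z_{M-1}."""
    cols = []
    for k in range(M // 2):
        zk = np.zeros(M, dtype=complex)               # Z_k
        zs = np.zeros(M, dtype=complex)               # Z_{k+M/2}
        zk[0] = np.exp(-1j*np.pi*k/M) / np.sqrt(M)
        zk[1] = np.exp(+1j*np.pi*k/M) / np.sqrt(M)
        zs[0], zs[1] = -1j*zk[0], 1j*zk[1]            # extra phases exp(-+ i pi/2)
        for j in range(k):                            # cos/sin pairs of Eq. (Zk)
            c = 2 / np.sqrt((M - 2*j) * (M - 2*j - 2))
            ang = (k - j) * np.pi / M
            zk[2 + 2*j], zk[3 + 2*j] = -c*np.cos(ang), -c*np.sin(ang)
            zs[2 + 2*j], zs[3 + 2*j] = +c*np.sin(ang), -c*np.cos(ang)
        if 2 + 2*k < M:                               # square-root entries
            zk[2 + 2*k] = np.sqrt((M - 2*k - 2) / (M - 2*k))
        if 3 + 2*k < M:
            zs[3 + 2*k] = np.sqrt((M - 2*k - 2) / (M - 2*k))
        cols += [zk, zs]
    return np.column_stack(cols)

Z = naimark_Z(8)
assert np.allclose(Z.conj().T @ Z, np.eye(8))   # Eqs. (orthogonality)+(idempotent)
assert np.isclose(Z[0, 1], -1j / np.sqrt(8))    # first entry of the Z_4 column
```

The same function with other powers of two reproduces the unitarity of $Z$ for any $M=2^N$.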
\begin{widetext} \begin{equation} Z =\begin{bmatrix} \frac{1}{\sqrt{M}} & -\frac{i}{\sqrt{M}} & \frac{e^{-i \pi /8 }}{\sqrt{M}} & \frac{e^{-i5 \pi /8 }}{\sqrt{M}} & \frac{e^{-i 2 \pi /8 }}{\sqrt{M}} & \frac{e^{-i6 \pi /8 }}{\sqrt{M}} & \frac{e^{-i3 \pi /8 }}{\sqrt{M}} & \frac{e^{-i 7 \pi /8 }}{\sqrt{M}} \\ \frac{1}{\sqrt{M}} & \frac{i}{\sqrt{M}} & \frac{e^{i \pi /8 }}{\sqrt{M}} & \frac{e^{i5 \pi /8 }}{\sqrt{M}} & \frac{e^{i2 \pi /8 }}{\sqrt{M}} & \frac{e^{i6 \pi /8 }}{\sqrt{M}} & \frac{e^{i3 \pi /8 }}{\sqrt{M}} & \frac{e^{i7 \pi /8 }}{\sqrt{M}} \\ \sqrt{\frac{M-2}{M}} & 0 & -\frac{2\cos \left(\pi /8 \right) }{\sqrt{(M-2)M}} & \frac{2 \sin \left(\pi /8 \right)}{\sqrt{(M-2)M}} & - \frac{2 \cos \left(2\pi /8 \right)}{\sqrt{M(M-2)}} & \frac{2 \sin \left(2\pi /8 \right)}{\sqrt{M(M-2)}}& - \frac{2 \cos \left(3\pi /8 \right)}{\sqrt{M(M-2)}}& \frac{2 \sin \left(3\pi /8 \right)}{\sqrt{M(M-2)}}\\ 0 & \sqrt{\frac{M-2}{M}} & -\frac{2 \sin \left(\pi /8 \right)}{\sqrt{(M-2)M}} & -\frac{2 \cos \left(\pi /8 \right) }{\sqrt{(M-2)M}}& - \frac{2 \sin \left(2\pi /8 \right)}{\sqrt{M(M-2)}} & - \frac{2 \cos \left(2\pi /8 \right)}{\sqrt{M(M-2)}} & - \frac{2 \sin \left(3\pi /8 \right)}{\sqrt{M(M-2)}} & - \frac{2 \cos \left(3\pi /8 \right)}{\sqrt{M(M-2)}} \\ 0 & 0 & \sqrt{\frac{M-4}{M-2}} & 0 & - \frac{2\cos \left(\pi /8 \right)}{\sqrt{(M-2)(M-4)}}& \frac{2 \sin \left(\pi /8 \right)}{\sqrt{(M-2)(M-4)}}& - \frac{2 \cos \left(2\pi /8 \right)}{\sqrt{(M-2)(M-4)}} & \frac{2\sin \left(2\pi /8 \right)}{\sqrt{(M-2)(M-4)}}\\ 0 & 0 & 0 & \sqrt{\frac{M-4}{M-2}} & - \frac{2 \sin \left(\pi /8 \right)}{\sqrt{(M-2)(M-4)}} & - \frac{2 \cos \left(\pi /8 \right)}{\sqrt{(M-2)(M-4)}} & - \frac{2 \sin \left(2\pi /8 \right)}{\sqrt{(M-2)(M-4)}} & - \frac{2 \cos \left(2\pi /8 \right)}{\sqrt{(M-2)(M-4)}} \\ 0 & 0 & 0 & 0 & \sqrt{\frac{M-6}{M-4}} & 0& - \frac{2 \cos \left(\pi /8 \right)}{\sqrt{(M-4)(M-6)}}& \frac{2 \sin \left(\pi /8 \right)}{\sqrt{(M-4)(M-6)}}\\ 0 & 0 & 0 & 0 & 0 & \sqrt{\frac{M-6}{M-4}}& - \frac{2 
\sin \left(\pi /8 \right)}{\sqrt{(M-4)(M-6)}} & - \frac{2 \cos \left(\pi /8 \right)}{\sqrt{(M-4)(M-6)}} \\ 0 & 0 & 0 & 0 & 0 & 0 & \sqrt{\frac{M-8}{M-6}} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \sqrt{\frac{M-8}{M-6}} \end{bmatrix} \label{matrixZ8} \end{equation} \end{widetext} \section{Phase-Measurement on a Single Photon} \label{application} Let us denote by $\rho_S$ the state of the qubit, defined on a two-level system representing the \emph{single rail polarization encoding}, that is, identifying the logical system-basis $\{\ket{0}_L,\ \ket{1}_L\}$ with the polarization modes $\ket{0}_L = \ket{H} = a^{\dagger (m)}_{H} \ket{0},\ \ket{1}_L = \ket{V} = a^{\dagger (m)}_{V} \ket{0}$. Operators $a^{\dagger (m)}_{H},\ a^{\dagger (m)}_{V}$ are the creation operators of the polarization modes on the $m$-th path and $\ket{0}$ is the vacuum state. In this case the optical state is defined on a \emph{single} path, as opposed to the \emph{dual rail} encoding which employs the $m$-th and $n$-th spatial modes to define the logical basis $\ket{0}_L = \ket{10}_{mn} = a^{\dagger(m)} \ket{00}_{mn}$ and $ \ket{1}_L = \ket{01}_{mn} = a^{\dagger(n)} \ket{00}_{mn}$. \par Note that even though our system qubit is defined with the single rail polarization encoding, in the following we will also employ the dual rail encoding when speaking about the implementation scheme of the phase measurement. In that framework, we will denote the mode number in the superscript, while making the polarization explicit in the subscript. \par Let us start by summarising the key idea behind our detection scheme. We implement the Naimark extension of the POVM by an optical network that receives the quantum state $\rho_S$ to be measured as input, and (probabilistically) outputs a single photon towards an array of photon counters. Each detector is associated with an outcome, so that a click in a specific detector yields the corresponding outcome.
In the ideal case of no losses in the network, and no detector noise, every time we send $\rho_S$ into the optical network we get one and only one click. In this respect, our scheme resembles the KLM scheme for measurements \cite{Knill2001} since the measurement device is implemented with a unitary rotation followed by a set of projectors. In our case the overall projectors $P_k$ to be applied on $\rho_A \otimes \rho_S$ can be obtained as $P_k = Z \pure{e_k^{\mathcal{H}}} Z^{\dagger}$, where $\ket{e_k^{\mathcal{H}}}$ is the state defined in $\mathcal{H}$ whose column representation is the $k$-th element of the canonical basis. Since \begin{equation} \tr[]{P_k \left( \rho_A \otimes \rho_S \right)} = \tr[]{ \left( Z^{\dagger} \left( \rho_A \otimes \rho_S \right) Z \right)\pure{e_k^{\mathcal{H}}}}, \end{equation} the unitary rotation is defined by $Z^{\dagger}$ and implemented with the optical network, while the projector $\pure{e_k^{\mathcal{H}}}$ is implemented with a photon counter on the $k$-th output mode. \subsection{Decomposition of the unitary $Z^{\dagger}$} \label{decomposition} The unitary $Z^{\dagger}$ can be decomposed as a product of simpler unitary rotations \cite{HornJohnson} usually referred to as \emph{Givens Rotations} (GR), i.e., rotations in the plane spanned by two coordinate axes, often employed to zero out a particular entry in a vector. Section \ref{physicalRealization} describes how to implement each GR, so that the overall sequence realizes the interferometer associated with $Z^{\dagger}$. A GR has a matrix representation that looks like the identity matrix, except for the coefficients on two rows and two columns, which define the mixing between the two.
We define such an $M \times M$ matrix as \begin{equation} W(u,v, \omega) = \ \begin{blockarray}{cccccc} & & u & v & \\ \begin{block}{(ccccc)c} 1 & 0 & 0 & 0 & 0 & \\ 0 & \ddots & 0 & 0 & 0 & \\ 0 & 0 & \cos(\omega) & \sin(\omega) & 0 & \ u \\ 0 & 0 & -\sin(\omega) & \cos(\omega) & 0 & \ v \\ 0 & 0 & 0 & 0 & 1 & \\ \end{block} \end{blockarray} \end{equation} with $u,v$ the indices of the rows and columns being mixed, and $\omega$ a parameter defining the mixing. We also define the matrix $S(u, \phi)$ \begin{equation} S(u, \phi) = \ \begin{blockarray}{cccccc} & & & u & \\ \begin{block}{(ccccc)c} 1 & 0 & 0 & 0 & 0 & \\ 0 & \ddots & & \vdots & & \\ 0 & & 1 & 0 & 0 & \\ 0 & \cdots & 0 & e^{-i\phi} & 0 & \ u \\ 0 & & 0 & 0 & 1 & \\ \end{block} \end{blockarray}\,, \end{equation} which corresponds to a phase shift on the $u$-th vector of the basis. \par In decomposing $Z^{\dagger}$ we take advantage of the structure of $Z$, which is almost lower-triangular due to the zero padding of $Z_k$ to reach the length of $M$ coefficients (see for instance the structure of $Z$ in Eq.~\eqref{matrixZ8} in the example in Section~\ref{Zconstruction}). Further details on the decomposition of $Z^{\dagger}$ are reported in Section~\ref{Zdecomposition}. A pattern in the sequence of unitaries $W$ and $S$ emerges, suggesting an analytical expression for the decomposition for any $M$ (see for instance Eq.~\eqref{decomposition8}). In fact, with the exception of $S(2, \pi/2)$ and $W(1,2,\pi/4)$, the GRs can be grouped into triplets of unitaries where the $u,\ v$ indices act on the same group of modes, e.g. \{5,6,7,8\}, \{3,4,5,6\} or \{1,2,3,4\}. The parameter $\omega$ also shows a pattern: it takes the same value in the first two GRs of each triplet, while the third GR has the same value of $\omega$ across different triplets. These patterns have a direct effect on the physical realization of the $Z^{\dagger}$ decomposition (see Section \ref{physicalRealization}).
\par \begin{widetext} \begin{align} Z^{\dagger} = & W\left(7,8,\pi + \frac{\pi}{M} \right) \cdot W\left(6,8,\arctan\sqrt{\frac{M-6}{2}}\right) \cdot W\left(5,7,\arctan\sqrt{\frac{M-6}{2}}\right) \nonumber \\ & \cdot W\left(5,6,\pi + \frac{\pi }{M} \right) \cdot W\left(4,6, \arctan\sqrt{\frac{M-4}{2}}\right) \cdot W\left(3,5,\arctan \sqrt{\frac{M-4}{2}}\right) \nonumber \\ & \cdot W\left(3,4,\pi + \frac{\pi }{M} \right) \cdot W\left(2,4, \arctan \sqrt{\frac{M-2}{2}}\right) \cdot W\left(1,3,\arctan \sqrt{\frac{M-2}{2}}\right) \nonumber \\ & \cdot S\left(2,\frac{\pi}{2}\right) \cdot W\left(1,2,\frac{\pi }{4}\right) \label{decomposition8} \end{align} \end{widetext} Expression~\eqref{decomposition8} can easily be checked by multiplying it by $Z$ and obtaining the identity matrix. \subsection{\label{Zdecomposition} Decomposition of $Z$ for $M=8$} In this section we describe in more detail how the decomposition of $Z^{\dagger}$ can be obtained. We consider the case $M=8$, which is reported in Eq.~\eqref{decomposition8}. We start from the corresponding matrix $Z$, whose expression is reported in Eq.~\eqref{matrixZ8}. This matrix is unitary, and therefore can be decomposed as a sequence of GR \cite{HornJohnson}. To find this decomposition, a handy procedure is to left-multiply $Z$ by matrices $W,\ S$ until we obtain the identity matrix. In short, we should multiply $Z$ by GR that nullify its off-diagonal entries. The sequence of $W,\ S$ then corresponds to the decomposition we are looking for. To simplify the procedure, we first multiply $Z$ by $W_0 = W(1,\ 2,\ \pi/4)$ and $S_0 = S(2,\ \pi/2)$ to convert the complex entries in the first two rows of $Z$ into their corresponding real and imaginary parts.
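The structure of Eq.~\eqref{decomposition8} can also be cross-checked numerically. The sketch below (ours, with hypothetical function names; indices are 1-based as in the text) assembles the initial block followed by the GR triplets for $M=8$ and verifies that the resulting product is unitary, so that $Z$ is recovered as its conjugate transpose:

```python
import numpy as np

def W(M, u, v, w):
    """Givens rotation mixing basis vectors u and v (1-based) by angle w."""
    G = np.eye(M, dtype=complex)
    G[u-1, u-1] = G[v-1, v-1] = np.cos(w)
    G[u-1, v-1] = np.sin(w)
    G[v-1, u-1] = -np.sin(w)
    return G

def S(M, u, phi):
    """Phase shift exp(-i*phi) on the u-th basis vector."""
    P = np.eye(M, dtype=complex)
    P[u-1, u-1] = np.exp(-1j * phi)
    return P

def Zdag(M):
    """Assemble the decomposition: initial block, then the GR triplets."""
    U = S(M, 2, np.pi/2) @ W(M, 1, 2, np.pi/4)
    for k in range(M//2 - 1):
        t = np.arctan(np.sqrt((M - 2 - 2*k) / 2))
        U = W(M, 1 + 2*k, 3 + 2*k, t) @ U                 # e.g. W(1,3,.)
        U = W(M, 2 + 2*k, 4 + 2*k, t) @ U                 # e.g. W(2,4,.)
        U = W(M, 3 + 2*k, 4 + 2*k, np.pi + np.pi/M) @ U   # e.g. W(3,4,.)
    return U

U = Zdag(8)
# Product of unitaries, hence unitary: Z = U^dagger is a valid interferometer
assert np.allclose(U @ U.conj().T, np.eye(8))
```

Feeding the single-photon input state of the text into `U` and taking the squared moduli of the output amplitudes reproduces normalized click statistics over the $M$ counters.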
The matrix becomes \begin{equation} S_0 \cdot W_0\cdot Z = \begin{bmatrix} \frac{\sqrt{2}}{\sqrt{M}} & 0 & \frac{\sqrt{2}\cos(\pi/M)}{\sqrt{M}} &\dots\\ 0 & \frac{\sqrt{2}}{\sqrt{M}} & \frac{\sqrt{2}\sin(\pi/M)}{\sqrt{M}} &\\ \sqrt{\frac{M-2}{M}} & 0 & -\frac{2\cos(\pi/M)}{\sqrt{(M-2)M}} & \\ 0 & \sqrt{\frac{M-2}{M}} & -\frac{2\sin(\pi/M)}{\sqrt{(M-2)M}} & \\ 0 & 0 & \sqrt{\frac{M-4}{M-2}} \\ \vdots & & & \ddots \end{bmatrix} . \label{SWZ} \end{equation} We can then focus on nulling the entries below the diagonal. We need a GR for each of the entries in the first and second columns, $W_1 = W(1,3,\omega_{13})$ and $W_2=W(2,4, \omega_{24})$ respectively, with $\omega_{13} = \omega_{24} =\arctan (\sqrt{(M-2)/2})$. We then obtain \begin{equation} {W_2 \cdot W_1 \cdot S_0 \cdot W_0 \cdot Z} =\begin{bmatrix} 1 & 0 & 0 & 0 & \dots \\ 0 & 1 & 0 & 0 & \\ 0 & 0 & \frac{\sqrt{2}\cos(\frac{\pi}{M})}{\sqrt{M-2}} & -\frac{\sqrt{2}\sin(\frac{\pi}{M})}{\sqrt{M-2}} & \\ 0 & 0 & \frac{\sqrt{2}\sin(\frac{\pi}{M})}{\sqrt{M-2}} & \frac{\sqrt{2}\cos(\frac{\pi}{M})}{\sqrt{M-2}} & \\ 0 & 0 & \sqrt{\frac{M-4}{M-2}} & 0 & \\ 0 & 0 & 0 & \sqrt{\frac{M-4}{M-2}} & \\ \vdots & & & & \ddots \end{bmatrix} . \end{equation} Note that at this point all the off-diagonal entries in the first and second rows, as well as those in the first and second columns, are zero. If we then multiply the matrix by $W_3 = W(3,4,\pi + \pi/M )$ to nullify the first off-diagonal entry in the fourth row, we obtain a matrix that resembles \eqref{SWZ} except for $M-2$ in place of $M$, i.e. \begin{equation} W_3 \cdot W_2 \cdot W_1 \cdot S_0 \cdot W_0 \cdot Z =\begin{bmatrix} 1 & 0 & 0 & 0 & \dots \\ 0 & 1 & 0 & 0 & \\ 0 & 0 & \frac{\sqrt{2}}{\sqrt{M-2}} & 0 & \\ 0 & 0 & 0 & \frac{\sqrt{2}}{\sqrt{M-2}} & \\ 0 & 0 & \sqrt{\frac{M-4}{M-2}} & 0 & \\ 0 & 0 & 0 & \sqrt{\frac{M-4}{M-2}} & \\ \vdots & & & & \ddots \end{bmatrix}.
\end{equation} The multiplication by $W_3 \cdot W_2 \cdot W_1$ has effectively nullified the remaining off-diagonal entries in the first four rows of $S_0 \cdot W_0\cdot Z$. From here on, we can find a triplet of GR $\{W_6,\ W_5,\ W_4\}$ that acts like $\{W_3,\ W_2,\ W_1\}$ to nullify the remaining off-diagonal entries in rows $5,\ 6$. This procedure can be repeated for the remaining rows and gives the pattern of GR triplets in the decomposition \eqref{decomposition8}. Once we obtain the final identity matrix, the product of the GR employed, $W_9 \cdot W_8 \dots W_3 \cdot W_2 \cdot W_1 \cdot S_0 \cdot W_0$, is a decomposition of $Z^{\dagger}$. The decomposition generalizes to arbitrarily large $M$ since the matrix $Z$ retains the same structure. \subsection{Physical realization of Givens Rotations} \label{physicalRealization} Without loss of generality, we can consider the measurement of $\rho_S = \pure{\varphi^{S}}$, with $\ket{\varphi^{S}}$ being a single-photon state, \begin{equation} \ket{\varphi^{S}} = \frac{\ket{0}_L + e^{i \varphi} \ket{1}_L}{\sqrt{2}} = \frac{a^{\dagger(1)}_H + e^{i \varphi} a^{\dagger(1)}_V}{\sqrt{2}} \ket{0}, \label{singlePhoton} \end{equation} and $\varphi \in [0, 2\pi)$ an unknown phase to be estimated. Again, $a^{\dagger(1)}_H$ and $a^{\dagger(1)}_V$ are the creation operators for the first path mode, for the horizontal and vertical mode respectively. In the case of a mixed state, the result of the phase measurement follows by linearity from the measurement of the eigenvectors of $\rho_S$. To define the vector representation of the state in the enlarged Hilbert space $\mathcal{H}$, we collect the coefficients of the creation operators $a^{\dagger(m)}_H,\ a^{\dagger(m)}_V$ and stack them in order in a column.
For instance, the input quantum state \eqref{singlePhoton} is represented as \begin{equation} \ket{\varphi^{AS}} = \ket{e_1^{A}} \otimes \ket{\varphi^{S}} \quad \longrightarrow \quad \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{e^{i \varphi}}{\sqrt{2}} \\ 0 \\ \vdots \\ 0 \end{bmatrix} \end{equation} because the coefficients of $a^{\dagger(1)}_H,\ a^{\dagger(1)}_V$ are placed in the first two entries of the column representation, while the zeros are the coefficients of $a^{\dagger(m)}_H,\ a^{\dagger(m)}_V,\ m>1$. This representation is useful because in the Hilbert space spanned by the polarizations of a single photon on multiple modes, the mixing of $n$ optical modes is represented by an $n \times n$ unitary matrix \footnote{On the contrary, linear mixing between annihilation and creation operators requires nonlinear optical interactions, as happens in squeezing transformations.}. As a consequence, the states that define the canonical basis in this representation and in the unitary $Z^{\dagger}$ are single-photon states of some polarization and path modes. The auxiliary state $\ket{e_1^{A}}$ is just the tensor product of many vacuum states corresponding to multiple modes. In this Hilbert space the converse also holds, i.e., any unitary transformation can be achieved with a set of passive devices such as beam splitters, polarizing beam splitters, waveplates and mirrors \cite{Reck1994}. We will leverage this result to provide a possible realization for the unitaries $S(u, \phi)$ and $W(u,v, \omega)$. The unitary $S(u, \phi)$ can be realized with a waveplate of the appropriate thickness whose fast axis is aligned with the horizontal mode and slow axis with the vertical mode. In this way, the vertical mode gains a phase shift equal to $-\phi$ with respect to the horizontal one.
The corresponding transformation given by the waveplate can be expressed as \begin{equation} \begin{bmatrix} \hat{a}_H^{\dagger(m)} \\ \hat{a}_V^{\dagger(m)} \\ \end{bmatrix}_{(out)} = \begin{bmatrix} 1 & 0 \\ 0 & e^{-i\phi} \\ \end{bmatrix} \begin{bmatrix} \hat{a}_H^{\dagger(m)}\\ \hat{a}_V^{\dagger(m)} \\ \end{bmatrix}_{(in)} \ , \label{phaseShiftUnitary} \end{equation} where the index $u$ in $S(u, \phi)$ specifies the column and row associated with $\hat{a}_V^{\dagger(m)}$. The transformation $W(u,v, \omega)$ can be realized differently depending on whether the modes involved refer to different polarizations of the same rail, or two spatial modes on different rails. In the first case, a simple rotation equal to $\omega$ of the coordinate system on the polarization plane realizes the transformation, i.e., \begin{equation} \begin{bmatrix} \hat{a}^{\dagger(m)}_H \\ \hat{a}^{\dagger(m)}_V \\ \end{bmatrix}_{(out)} = \begin{bmatrix} \cos (\omega) & \sin(\omega) \\ -\sin (\omega) & \cos(\omega) \end{bmatrix} \begin{bmatrix} \hat{a}^{\dagger(m)}_H \\ \hat{a}^{\dagger(m)}_V \\ \end{bmatrix}_{(in)} \ . \label{waveplateMixing} \end{equation} In this case, the indices $u,v$ specify the columns and rows of $\hat{a}^{\dagger(m)}_H, \hat{a}^{\dagger(m)}_V $ respectively. If a mixing between two different spatial modes is required, a beam splitter (BS) with the appropriate transmissivity and reflectivity may be used.
The transmissivity and reflectivity may even depend on the polarization, and in this case a partially polarizing beam splitter (PPBS) is required to realize the transformation \begin{equation} \begin{bmatrix} \cos (\omega_H) & 0 & \sin(\omega_H) & 0\\ 0 & \cos (\omega_V) & 0 & \sin(\omega_V) \\ -\sin (\omega_H) & 0 & \cos(\omega_H) & 0 \\ 0 & -\sin (\omega_V) & 0 & \cos(\omega_V) \\ \end{bmatrix} \label{ppbsMixing} \end{equation} from the input to the output creation operators listed in the column $ [\ \hat{a}^{\dagger(m)}_H,\ \hat{a}^{\dagger(m)}_V,\ \hat{a}^{\dagger(n)}_H, \ \hat{a}^{\dagger(n)}_V ]^T$. In this case, we are actually implementing the transformation $W(u,v,\omega_H) \cdot W(u',v',\omega_V)$, where $u,v$ point at the coefficients of the horizontal polarizations and $u',v'$ at those of the vertical ones. When $\omega_H = \omega_V$, we recover the transformation of a BS. An extreme example of a PPBS is the polarizing beam splitter (PBS), which completely transmits the horizontal polarizations and reflects the vertical polarizations, and corresponds to the transformation \eqref{ppbsMixing} with $\omega_H = 0,\ \omega_V = \pi/2$. In order to obtain the implementation of \eqref{decomposition8}, we follow the sequence of unitary transformations $S(u, \phi),\ W(u,v, \omega)$ from right to left and cascade the corresponding implementations. Two possible implementations arise, a \emph{direct} one and a \emph{folded} one, which are the topic of the next sections. \subsection{Direct scheme} \label{direct} The \emph{direct} implementation is designed following the sequence of GR in Eq. \eqref{decomposition8}. Figure \ref{directScheme} depicts the scheme for $M=8$, where the optical network and the photon counters can be clearly recognized. \begin{figure*} \caption{Direct scheme for M=8.
The qubit $\ket{\varphi^S}$ is sent through the optical network towards the photon counters.\label{directScheme}} \end{figure*} The scheme presents an \emph{initial block} implementing the unitaries $W\left(1,2,\pi/4 \right)$ and $S\left(2,\pi/2 \right)$. The qubit to be measured is defined on the Cartesian coordinate system of the polarization plane, which is rotated by $\pi/4$ in order to implement $W(1,2,\pi/4)$. In Fig.~\ref{directScheme}, such a rotation is indicated with a curved arrow. The optical modes $H^{(1)},\ V^{(1)}$ must then go through a quarter-wave plate which realizes $S(2,\pi/2)$. The waveplate, indicated in Fig.~\ref{directScheme} with a slim rectangular box, is aligned with the new coordinate system, and the same holds for the following components. The decomposition \eqref{decomposition8} highlights a structure for the matrices following the initial block. In particular, the GR can be grouped in triplets which work on the same group of modes, e.g. \begin{align} & W\left(3,4,\omega_{34} \right) \cdot W\left(2,4,\omega_{24} \right) \cdot W\left(1,3,\omega_{13} \right) \ . \label{IBmatrices} \end{align} The same holds if we add $2k$, $k=0, \ldots, \frac{M}{2}-2$, to the indices of the modes and employ the angles $\omega_{34}~=~\pi + \frac{\pi}{M}$ and $\omega_{24} = \omega_{13} = \arctan \sqrt{(M-2-2k)/2}$. This observation suggests a modular implementation of the triplet, repeated several times. The modular block is shown within the dashed line in Fig. \ref{directScheme}. In general, the unitaries $W\left(2,4,\omega_{24} \right) \cdot W\left(1,3,\omega_{13} \right)$ may be implemented with a PPBS realizing \eqref{ppbsMixing} with $\omega_H = \omega_{13},\ \omega_V = \omega_{24}$, where the horizontal creation operators have indices $1,3$ and the vertical ones have indices $2,4$. However, since $\omega_H = \omega_V = \omega_{24} = \arctan \sqrt{(M-2-2k)/2}$, a BS suffices to implement the transformation.
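The identification of the PPBS matrix \eqref{ppbsMixing} with the product of two embedded GR, and its PBS limit, can be verified numerically. The following sketch is ours (function names are hypothetical; indices are 1-based as in the text):

```python
import numpy as np

def W4(u, v, w):
    """4x4 Givens rotation on (1-based) axes u and v, angle w."""
    G = np.eye(4)
    G[u-1, u-1] = G[v-1, v-1] = np.cos(w)
    G[u-1, v-1] = np.sin(w)
    G[v-1, u-1] = -np.sin(w)
    return G

def ppbs(wH, wV):
    """PPBS acting on the column [aH(m), aV(m), aH(n), aV(n)]."""
    return np.array([
        [ np.cos(wH), 0.0,         np.sin(wH), 0.0        ],
        [ 0.0,        np.cos(wV),  0.0,        np.sin(wV) ],
        [-np.sin(wH), 0.0,         np.cos(wH), 0.0        ],
        [ 0.0,       -np.sin(wV),  0.0,        np.cos(wV) ],
    ])

# PPBS = W(1,3,wH) . W(2,4,wV): horizontal modes sit at 1,3; vertical at 2,4
wH, wV = 0.3, 1.1
assert np.allclose(ppbs(wH, wV), W4(1, 3, wH) @ W4(2, 4, wV))

# PBS limit: wH = 0, wV = pi/2 transmits H and reflects V (up to a sign)
pbs = ppbs(0.0, np.pi/2)
assert np.allclose(np.abs(pbs @ np.array([0.0, 1.0, 0.0, 0.0])),
                   np.array([0.0, 0.0, 0.0, 1.0]))
```

Setting `wH == wV` reproduces the plain BS used in the modular block.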
After this, two of the outgoing modes go to a PBS to be separated into horizontal and vertical polarization modes, and then on to photon counters to record a possible click. The other two outgoing modes are mixed with a rotation of the polarization plane, realizing the unitary $W(3,4,\omega_{34} )$ as in \eqref{waveplateMixing}, with $\omega = \omega_{34} = \pi + \frac{\pi}{M}$. Note that the actual transformation implemented by the BS followed by the rotation of the polarization plane would be (supposing $k=0$) \begin{equation} \resizebox{\columnwidth}{!}{ $ \begin{bmatrix} \sqrt{\frac{2}{M}} & 0 & \sqrt{\frac{M-2}{M}} & 0\\ 0 & \sqrt{\frac{2}{M}} & 0 & \sqrt{\frac{M-2}{M}} \\ \sqrt{\frac{M-2}{M}} \cos(\frac{\pi}{M}) & \sqrt{\frac{M-2}{M}} \sin(\frac{\pi}{M}) & -\sqrt{\frac{2}{M}} \cos(\frac{\pi}{M}) & -\sqrt{\frac{2}{M}} \sin(\frac{\pi}{M}) \\ -\sqrt{\frac{M-2}{M}} \sin(\frac{\pi}{M}) & \sqrt{\frac{M-2}{M}} \cos(\frac{\pi}{M}) & \sqrt{\frac{2}{M}} \sin(\frac{\pi}{M}) & -\sqrt{\frac{2}{M}} \cos(\frac{\pi}{M}) \end{bmatrix}. $ } \end{equation} However, two of the input modes of the beam splitter are vacuum states, and the effective transformation from the coefficients of $[\hat{a}^{\dagger(m)}_H,\ \hat{a}^{\dagger(m)}_V ]^T$ to those of $[\ \hat{a}^{\dagger(m)}_H,\ \hat{a}^{\dagger(m)}_V,\ \hat{a}^{\dagger(n)}_H,\ \hat{a}^{\dagger(n)}_V ]^T$ results in \begin{equation} \resizebox{\columnwidth}{!}{ $ \begin{bmatrix} \hat{a}^{\dagger(m)}_H \\ \hat{a}^{\dagger(m)}_V \\ \hat{a}^{\dagger(n)}_H \\ \hat{a}^{\dagger(n)}_V \end{bmatrix} = \begin{bmatrix} \sqrt{\frac{2}{M}} & 0 \\ 0 & \sqrt{\frac{2}{M}} \\ \sqrt{\frac{M-2}{M}} \cos(\frac{\pi}{M}) & \sqrt{\frac{M-2}{M}} \sin(\frac{\pi}{M}) \\ -\sqrt{\frac{M-2}{M}} \sin(\frac{\pi}{M}) & \sqrt{\frac{M-2}{M}} \cos(\frac{\pi}{M}) \end{bmatrix} \begin{bmatrix} \hat{a}^{\dagger(m)}_H \\ \hat{a}^{\dagger(m)}_V \end{bmatrix}.
$ } \label{equivalentTransformation} \end{equation} The horizontal and vertical polarizations of mode $m$ then go to a PBS followed by two photon counters, while those in mode $n$ go to the next modular block, or to additional photon counters in the case of the last module. As a side note, we would like to point out that, in general, photonic implementations with BS require the rails to be swapped and brought close in order to perform the unitary operation on adjacent modes. Both the direct and the folded schemes are free of this issue, as can be seen from the schematics of Figs.~\ref{directScheme} and \ref{foldedScheme}. This is also true for arbitrarily high $M$, since this property originates from the pattern of GR triplets in the decomposition of $Z^{\dagger}$ and their modular implementation with a BS and a polarization plane rotation. \subsection{Folded scheme} \label{folded} The folded scheme is a variation of the direct scheme which comes from the following considerations. Firstly, an experimental implementation of the direct scheme requires a number of photon counters equal to the number of outcomes $M$, which may become highly expensive to realize if a fine resolution of the phase is required, i.e., a large $M$. It is therefore worthwhile exploring the possibility of reducing the number of devices required. Secondly, the photon counters and the modular blocks in general are not ``used'' at the same time, but at different times, as a photon clicks later in PD$(k)$ than in PD$(0)$. This opens up the possibility to exploit ``recursive'' schemes that reuse the same modular block in different time slots. The folded scheme is obtained by putting a delay line after the first modular block, connecting the output modes of this block to the input modes of the PPBS, effectively ``folding'' all the blocks onto the first one. Fig.~\ref{foldedScheme} shows a possible implementation of the scheme.
Note that since the parameters $\omega_H,\ \omega_V$ change from one modular block to another, a time-varying BS is required, which may be realized with an interferometer with the appropriate phase shift on one arm. This interferometer is singled out inside the dashed enclosure in Fig.~\ref{foldedScheme}. The photon to be measured enters the loop and in the following $M/2$ time slots (each lasting a round-trip time in the delay line) it exits towards the PBS and the photon counters. The outcome depends on both the polarization and the time slot: a click recorded in the $(k+1)$-th time slot corresponds to outcome $k$ or $k+M/2$ for the photon counter associated with the horizontal or the vertical polarization, respectively. \begin{figure*} \caption{Folded scheme for the phase measurement. The qubit $\ket{\varphi^S}$ is sent through the loop towards the photon counters.\label{foldedScheme}} \end{figure*} A comment on the experimental feasibility of these schemes is in order. As a matter of fact, the direct scheme can straightforwardly be realised with bulk optics or in integrated optical circuits. On the other hand, the folded scheme requires a careful design of the delay line. In particular, its length defines the time slot where a photon can be recorded at the photon counters, and must at least amount to the temporal span of the single photon plus the time required by the time-varying BS to adjust its parameters. This latter duration is the limiting constraint, with commercial devices reporting switching frequencies of the order of tens of MHz in their datasheets. The resulting length for the delay line is of the order of tens of meters, which is feasible to realize in a lab. The dead time of the photon counters, which may blind successive time slots, does not impact the current measurement, though it may affect the time slots of the following one. This can be solved by imposing an idle time interval between consecutive measurements, which reduces the overall rate of measurements that can be performed.
However, for a proof-of-principle experimental test this is usually not an issue. \section{Conclusions} In conclusion, we have addressed the optical implementation of the POVM corresponding to the optimal $M$-outcome discrimination of the polarization state of a single photon. In particular, we have found an explicit Naimark extension and optical implementation for any $M=2^N,\ N > 1$, so that the resolution of the estimated phase can be arbitrarily small. \par The measurement scheme has been devised to estimate the phase of the polarization of a single photon. The single photon passes through an optical network towards a set of photon counters, providing information about which path was taken, depending on its polarization. The optical network is defined by the unitary obtained from the projectors, and it is realized as a sequence of modular blocks that reflects the structure of the unitary decomposition in GR. Each block is a combination of beam splitters and waveplates that act on multiple polarization modes. The photon counters are placed at the outgoing modes, and at each recorded click they assign the corresponding outcome. \par We have provided the analytical expression for both the Naimark extension and its decomposition in GR, and we have proposed an implementation for the measurement scheme of the polarization, but other phase measurements can in principle be realized. Our results pave the way for realistic implementations of the canonical phase POVM for single-photon states and for extensions to higher-dimensional Hilbert spaces. \begin{acknowledgments} N. Dalla Pozza thanks G. Vallone, M. Avesani and J. Tinsley for useful discussions and comments. M. G. A. Paris is member of GNFM-INdAM and thanks S. Olivares and S. Cialdi for discussions. \end{acknowledgments} \pagebreak \end{document}
\begin{document} \title{Higher order traps for some strongly degenerate quantum control systems} \author{Boris O. Volkov\footnote{E-mail: \href{mailto:[email protected]}{[email protected]} URL: \href{https://www.mathnet.ru/php/person.phtml?&personid=94935&option_lang=eng}{mathnet.ru/eng/person94935} ORCID \href{https://orcid.org/0000-0002-6430-9125}{0000-0002-6430-9125}} ~and~Alexander N. Pechen\footnote{E-mail: \href{mailto:[email protected]}{[email protected]} URL: \href{https://www.mathnet.ru/eng/person17991}{mathnet.ru/eng/person17991} ORCID \href{https://orcid.org/0000-0001-8290-8300}{0000-0001-8290-8300}}\\ Department of Mathematical Methods for Quantum Technologies, Steklov Mathematical Institute of Russian Academy of Sciences, 8 Gubkina str., Moscow, 119991, Russia} \date{} \maketitle \makeatletter \renewcommand{\@makefnmark}{} \makeatother \footnotetext{This work is supported by the Russian Science Foundation under grant \textnumero~22-11-00330, \url{https://rscf.ru/en/project/22-11-00330/}.} Control of quantum systems attracts high interest due to fundamental reasons and applications in quantum technologies~\cite{Koch_2022}. Controlled dynamics of an $N$-level closed quantum system is described by the Schr\"odinger equation $i\dot U_t^f=(H_0+f(t)V)U_t^f$ with the initial condition $U_{t=0}^f=\mathbb I$ for the unitary evolution operator $U_t^f$ in the Hilbert space ${\cal H}=\mathbb C^N$. Here $H_0=H_0^\dagger$ and $V=V^\dagger$ are free and interaction Hamiltonians (Hermitian operators in ${\cal H}$), $f\in\mathfrak{H}^0:= L_2([0,T];\mathbb R)$ is a control function, $T>0$ is some target time. Consider a Mayer-type quantum control objective functional of the form $J_O={\rm Tr}(OU_T^f\rho_0 U_T^{f\dagger})\to\max$, where $\rho_0$ is the initial density matrix (a Hermitian operator in $\cal H$ such that $\rho_0\ge 0$ and ${\rm Tr}\rho_0=1$) and $O=O^\dagger$ is the target operator (a Hermitian operator in $\cal H$).
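The objective $J_O$ and its behaviour at the zero control can be probed numerically. The sketch below is ours (sample values of $a$, $b$, $v_k$ and a simple piecewise-constant propagator; names are not from the text): it evaluates $J_O={\rm Tr}(OU_T^f\rho_0 U_T^{f\dagger})$ for a three-level system of the kind considered later in the text.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative N = 3 system: H0 = diag(a, b, b), V tridiagonal with v1 = v2 = 1
N, a, b = 3, 1.0, 0.0
H0 = np.diag([a, b, b]).astype(complex)
V = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=complex)

def evolve(f, T, steps=200):
    """Propagate i dU/dt = (H0 + f(t) V) U with a piecewise-constant control f."""
    dt, U = T / steps, np.eye(N, dtype=complex)
    for k in range(steps):
        U = expm(-1j * dt * (H0 + f((k + 0.5) * dt) * V)) @ U
    return U

def J(f, T, rho0, O):
    """Mayer objective J_O = Tr(O U_T rho0 U_T^dagger)."""
    U = evolve(f, T)
    return np.real(np.trace(O @ U @ rho0 @ U.conj().T))

rho0 = np.diag([0, 0, 1.0]).astype(complex)   # rho0 = |N><N|
O = np.diag([2.0, -1.0, 0.0])                 # lambda_1 > lambda_N > lambda_{N-1}
```

With $f\equiv 0$ the free evolution leaves the diagonal $\rho_0=|N\rangle\langle N|$ invariant, so $J_O=\lambda_N=0$, while $\lambda_1>0$ is attainable by complete controllability; this is the numerical face of $f_0\equiv 0$ being a critical point that is not a global maximum.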
A quantum system $(H_0,V)$ is called {\it completely controllable} if there exists a time $T_{\rm min}$ such that for all $T\geq T_{\rm min}$ and $U\in U(N)$ there exists a control $f\in \mathfrak{H}^0$ such that $U=U_T^fe^{i\alpha}$ for some $\alpha\in \mathbb{R}$. An important question for quantum control is to establish, for a controlled system, whether the objective has trapping behaviour or not~\cite{Rabitz2004}. An {\it $n$-th order trap} for the objective functional $J_O$, where $n\geq2$, is a control $f_0\in \mathfrak{H}^0$ such that (a) $f_0$ is not a point of global extremum of $J_O$ and (b) the Taylor expansion of the objective functional at the point $f_0$ has the form $$ J_O(f_0+\delta f)=J_O(f_0)+\sum\limits_{j=2}^{n} \frac 1{j!}J^{(j)}_O(f_0)(\delta f,\ldots,\delta f)+o(\|\delta f\|^{n}),\text{\;for $\|\delta f\|\rightarrow 0$,} $$ where the nonzero functional $R(\delta f):=\sum\limits_{j=2}^{n} \frac 1{j!}J^{(j)}_O(f_0)(\delta f,\ldots,\delta f)$ is such that for any $\delta f\in \mathfrak{H}^0$ there exists $\varepsilon>0$ such that $R(t\delta f)\leq 0$ for all $t\in (-\varepsilon,\varepsilon)$. The analysis of traps is important since traps, if they exist, determine the level of difficulty of the search for globally optimal controls, including in practical applications. The absence of traps for 2-level quantum systems ($N=2$) was proved in~\cite{PechenPRA2012,PechenRMS2015,VolkovJPA2021}. In~\cite{Pechen2011,Pechen2012}, some examples of {\it third order traps} were constructed for special $N$--level degenerate quantum systems with $N\ge 3$. Traps were discovered for some systems with $N\geq 4$ in~\cite{deFouquieresSchirmer}. In this work, we prove the existence of traps of an arbitrary order for special highly degenerate quantum systems.
We will consider the $N$-level quantum system with the Hamiltonian pair $(H_0,V)$: \begin{equation*} \label{HN} H_0=a|1\rangle \langle1|+\sum_{k=2}^N b|k\rangle \langle k|,\quad V=\sum_{k=1}^{N-1}\overline{v}_{k}|k\rangle\langle k+1|+v_{k}|k+1\rangle\langle k|\,. \end{equation*} Here $a\neq b$ and all $v_{k}\in \mathbb{R}$ are nonzero. It is known~\cite{SFS,deFouquieresSchirmer} that such a system is completely controllable for any $N$ (a generalization to the case of $v_{k}\in\mathbb C$ for $N=4$ is given in~\cite{KuznetsovPechen}). \textbf{Theorem.} {\it Let $N\ge 3$, $\rho_0=|N\rangle \langle N|$ and $O=\sum_{k=1}^N \lambda_k |k\rangle\langle k|$ such that $\lambda_1>\lambda_N>\lambda_{N-1}$. Then for any $T\geq T_{\rm min}$ the control $f_0\equiv 0$ is a trap of order $2N-3$ for $J_O$.} To prove this statement, note that for such $\rho_0$ and $O$ the complete controllability of the quantum system implies that the control $f_0\equiv0$ is not a point of global extremum of $J_O$ for $T\geq T_{\rm min}$~\cite{Pechen2011}. Without loss of generality we can assume that $\lambda_N=0$~\cite{Pechen2011}. Let $V_t:=e^{itH_0}Ve^{-itH_0}$. Let $A^n_{lk}\colon \mathfrak{H}^0 \to\mathbb{C}$ be the form of order $n$ defined as $$ A^n_{lk}\langle f\rangle:= \int_0^Tdt_1\int_0^{t_1}dt_2\ldots \int_0^{t_{n-1}}dt_nf(t_1)\ldots f(t_n)\langle l|V_{t_1}\ldots V_{t_n}|k\rangle. $$ Let $A^0_{lk}=\delta_{lk}$ (the Kronecker symbol). By direct calculation, one can obtain a formula for the Fr\'echet differential of order $n$ of the objective functional $J_O$ at~$f_0$: \begin{equation} \label{Jm1} \frac 1{n!}J^{(n)}_O(f_0)(f,\ldots,f)=\sum_{j=0}^n \sum_{l=1}^{N-1} (-1)^{n-j}i^{n} \lambda_l A^j_{lN}\langle f \rangle \overline{A^{n-j}_{lN}\langle f \rangle}. \end{equation} For $n=1$ we get that $J'_O(f_0)=0$.
Due to $\langle l|V|N\rangle=0$ for $l\neq N-1$, the second differential of $J_O$ at $f_0$ has the form $$\frac 1{2!}J''_O(f_0)(f,f)=\lambda_{N-1}|A^1_{(N-1)N}\langle f\rangle|^2=\lambda_{N-1}v^2_{N-1}\left(\int_0^Tf(t)dt\right)^2.$$ Introduce the space $\mathfrak{H}^{1}=\{f \in \mathfrak{H}^0\colon \int_0^Tf(t)dt=0\}$. Then $J''_O(f_0)(f,f)<0$ for $f\in \mathfrak{H}^0 \setminus\mathfrak{H}^1$ and $J''_O(f_0)(f,f)=0$ for $f\in \mathfrak{H}^1$. Note that the following holds true for the quantum system $(H_0,V)$. If $l>1$ and $n\leq N-1$, then $\langle l|V_{s_n}\ldots V_{s_1}|N\rangle=\langle l|V^n|N\rangle$ and, hence, the form $A^n_{lN}\langle f\rangle=\frac{\langle l|V^n|N\rangle}{n!}\left(\int\limits_0^Tf(t)dt\right)^{n}$ vanishes on $\mathfrak{H}^{1}$. Also, if $n<N-1$, then $\langle 1|V_{s_n}\ldots V_{s_1}|N\rangle=0$ and hence $A^n_{1N}=0$. Then it follows from formula~(\ref{Jm1}) that $J^{(n)}_O(f_0)(f,\ldots,f)=0$ for $3\leq n \leq 2N-3$ and for $f\in \mathfrak{H}^1$. Moreover, \begin{multline} \frac 1{(2N-2)!} J_O^{(2N-2)}(f_0)(f,\ldots,f)=\lambda_1|A_{1N}^{N-1}\langle f \rangle|^2=\\=\lambda_1\Bigl|\int\limits_{[0,T]^{N-1}}K(t_1,\ldots,t_{N-1})f(t_1)\ldots f(t_{N-1})dt_1\ldots dt_{N-1}\Bigr|^2\geq 0 \end{multline} for $f\in\mathfrak{H}^1$, where $K(t_1,t_2,\ldots,t_{N-1})=\frac 1{(N-1)!}v_{1}v_{2}\ldots v_{N-1}e^{i(a-b)\max(t_1,\ldots,t_{N-1})}$. Thus the control $f_0\equiv 0$ is a trap of order $2N-3$. This completes the proof of the Theorem. The authors are grateful to S.A.~Kuznetsov for pointing out the proof of controllability in~\cite{SFS}. \end{document}
\begin{document} \title{Vanishing theorem for the cohomology of line bundles on Bott-Samelson varieties} \author{Boris Pasquier} \maketitle \begin{abstract} We use the toric degeneration of Bott-Samelson varieties and the description of the cohomology of line bundles on toric varieties to deduce vanishing results for the cohomology of line bundles on Bott-Samelson varieties. \end{abstract} \section*{Introduction} Bott-Samelson varieties were originally defined as desingularizations of Schubert varieties and were used to describe the geometry of Schubert varieties. In particular, the cohomology of some line bundles on Bott-Samelson varieties was used to prove that Schubert varieties are normal, Cohen-Macaulay and with rational singularities (see for example \cite{BK05}). In this paper, we will be interested in the cohomology of all line bundles on Bott-Samelson varieties. We consider a Bott-Samelson variety $X(\tilde{w})$ over an algebraically closed field $\Bbbk$ associated to an expression $\tilde{w}=s_{\beta_1}\dots s_{\beta_N}$ of an element $w$ in the Weyl group of a Kac-Moody group $G$ over $\Bbbk$ (see Definition \ref{BS} (i)). In the case where $G$ is semi-simple, N.~Lauritzen and J.F.~Thomsen proved, using Frobenius splitting, the vanishing of the cohomology in positive degree of line bundles on $X(\tilde{w})$ of the form $\mathcal{L}(-D)$, where $\mathcal{L}$ is any globally generated line bundle on $X(\tilde{w})$ and $D$ a subdivisor of the boundary of $X(\tilde{w})$ corresponding to a reduced expression of $w$ \cite[Th7.4]{LT04}.
The aim of this paper is to give the vanishing in some degrees of the cohomology of any line bundle on $X(\tilde{w})$.\\ Let us define, for all $\epsilon=(\epsilon_k)_{k\in\{1,\dots,N\}}\in\{+,-\}^N$ and for all integers $1\leq i<j\leq N$, $$\alpha_{ij}^\epsilon:=\langle\beta_i^\vee,(\prod_{i<k<j,\,\epsilon_k=-}s_{\beta_k})(\beta_j)\rangle.$$ These integers are natural geometric invariants of the Bott-Samelson variety; they also appear, for example, in \cite[Theorem 3.21]{Wi06} in a product formula in the equivariant cohomology of complex Bott-Samelson varieties. Since $X(\tilde{w})$ is smooth, we can consider divisors instead of line bundles. Thus, let us denote by $Z_1,\dots,Z_N$ the natural basis of divisors of $X(\tilde{w})$ (see Definition \ref{BS} (ii)). Let $D:=\sum_{i=1}^Na_iZ_i$ be any divisor of $X(\tilde{w})$. Let $i\in\{1,\dots,N\}$. We say that $D$ satisfies condition $(C_i^+)$ if for all $\epsilon\in\{+,-\}^N$, we have $$C_i^\epsilon:=a_i+\sum_{j>i,\,\epsilon_j=+}\alpha_{ij}^\epsilon a_j\geq -1$$ and we say that $D$ satisfies condition $(C_i^-)$ if for all $\epsilon\in\{+,-\}^N$, we have $$C_i^\epsilon:=a_i+\sum_{j>i,\,\epsilon_j=+}\alpha_{ij}^\epsilon a_j\leq -1.$$ The main result of this paper is the following. \begin{teo}\label{main} Let $X(\tilde{w})$ be a Bott-Samelson variety and $D$ a divisor of $X(\tilde{w})$. Let $\eta\in\{+,-,0\}^N$. Define two integers $\eta^+:=\sharp\{1\leq j \leq N\mid\eta_j=+\}$ and $\eta^-:=\sharp\{1\leq j \leq N\mid\eta_j=-\}.$ Suppose that $D$ satisfies conditions $(C_i^{\eta_i})$ for all $i\in\{1,\dots,N\}$ such that $\eta_i\neq 0$. Then $H^i(X,D)=0,\mbox{ for all }i<\eta^-\mbox{ and for all }i>N-\eta^+.$ \end{teo} Let us remark that conditions $(C_N^+)$ and $(C_N^-)$ are respectively $a_N\geq -1$ and $a_N\leq -1$, so that $\eta_N$ can always be chosen different from $0$. Thus, for any divisor $D$ of $X(\tilde{w})$, Theorem \ref{main} gives the vanishing of the cohomology of $D$ in at least one degree.
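A minimal sanity check of the theorem, which we add here as an illustration (it is not part of the original argument), is the case $N=1$:

```latex
% Illustration (ours): the case N = 1.
For $N=1$ we have $X(s_{\beta_1})=P_{\beta_1}/B\cong\mathbb{P}^1$ and $Z_1$ is a
point, so $D=a_1Z_1$ corresponds to $\mathcal{O}_{\mathbb{P}^1}(a_1)$. Condition
$(C_1^+)$ reads $a_1\geq-1$, and the theorem gives $H^i(X,D)=0$ for
$i>N-\eta^+=0$, matching the classical vanishing
$H^1(\mathbb{P}^1,\mathcal{O}(a_1))=0$ for $a_1\geq-1$. Condition $(C_1^-)$
reads $a_1\leq-1$ and gives $H^i(X,D)=0$ for $i<\eta^-=1$, matching
$H^0(\mathbb{P}^1,\mathcal{O}(a_1))=0$ for $a_1<0$.
```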
Although Theorem \ref{main} gives many cases of vanishing, it does not allow us to recover the full result of N.~Lauritzen and J.F.~Thomsen. See Example \ref{exemple} for an illustration of this fact. However, for many divisors, Theorem \ref{main} gives the vanishing of their cohomology in all degrees except one. More precisely, we have the following. \begin{cor}\label{cornonvide} Let $D=\sum_{i=1}^Na_iZ_i$ be a divisor of $X=X(\tilde{w})$. Suppose that, for all $i\in\{1,\dots,N\}$, one of the following two conditions $\tilde{C}_i^+$ and $\tilde{C}_i^-$ is satisfied: $$\begin{array}{cc} \tilde{C}_i^+: & a_i\geq -1+\max_{\epsilon\in\{+,-\}^N}(-\sum_{j>i,\,\epsilon_j=+}\alpha_{ij}^\epsilon a_j) \\ \tilde{C}_i^-: & a_i\leq -1+\min_{\epsilon\in\{+,-\}^N}(-\sum_{j>i,\,\epsilon_j=+}\alpha_{ij}^\epsilon a_j) \end{array}.$$ Then, $H^i(X,D)=0$ for all $i\neq\sharp\{1\leq j\leq N\mid\tilde{C}_j^-\mbox{ is satisfied }\}$. \end{cor} Let us remark that, for all $\eta\in\{+,-\}^N$, the set of points $(a_i)\in{\mathbb Z}^N$ satisfying $\tilde{C}_j^{\eta_j}$ for all $j\in\{1,\dots,N\}$ is a nonempty cone, so that Corollary \ref{cornonvide} can be applied to infinitely many divisors.\\ The strategy of the proof of Theorem \ref{main} is the following. In Section \ref{1}, we define and describe a family of deformations whose general fibers are the Bott-Samelson variety and whose special fiber is a toric variety. The toric variety we obtain is a Bott tower; its fan has a simple and well-understood structure (for example it has $2N$ cones of dimension~1 and $2^N$ cones of dimension $N$). In Section \ref{2}, we describe how to compute the cohomology of divisors on the special fiber and we prove the same vanishings as in Theorem \ref{main} but for divisors on this toric variety. Then Theorem \ref{main} is a direct consequence of the semicontinuity theorem \cite[III 12.8]{Ha77}.
\section{Toric degeneration of Bott-Samelson varieties}\label{1} In this section we rewrite the theory of M.~Grossberg and Y.~Karshon \cite{GK94} on Bott towers, in the case of Bott-Samelson varieties and from an algebraic point of view. Let $A=(a_{ij})_{1\leq i,j\leq n}$ be a generalized Cartan matrix, {\it i.e.} such that (for all $i,j$) $a_{ii}=2$, $a_{ij}\leq 0$ for $i\neq j$, and $a_{ij}=0$ if $a_{ji}=0$. Let $G$ be the ``maximal'' Kac-Moody group over $\Bbbk$ associated to $A$ constructed in \cite[Section 6.1]{Ku02} (see \cite{Ti81a} and \cite{Ti81b} in arbitrary characteristic). Note that, in the finite case, $G$ is the simply-connected semisimple algebraic group over $\Bbbk$. Denote by $B$ the standard Borel subgroup of $G$ containing the standard maximal torus $T$. Let $\alpha_1,\dots,\alpha_n$ be the simple roots of $(G,B,T)$ and $s_{\alpha_1},\dots,s_{\alpha_n}$ the associated simple reflections generating the Weyl group $W$. For all $i\in\{1,\dots,n\}$, denote by $P_{\alpha_i}:=B\cup Bs_{\alpha_i}B$ the minimal parabolic subgroup containing $B$ associated to $\alpha_i$. Let $w\in W$ and $\tilde{w}:=s_{\beta_1}\dots s_{\beta_N}$ be an expression (not necessarily reduced) of $w$, with $\beta_1,\dots,\beta_N$ simple roots. For all $i$ and $j$ in $\{1,\dots,N\}$, denote by $\beta_{ij}$ the integer $\langle\beta_i^\vee,\beta_j\rangle$. \begin{defi}\label{BS} \begin{enumerate}[(i)] \item The Bott-Samelson variety associated to $\tilde{w}$ is $$X(\tilde{w}):=P_{\beta_1}\times^B\dots\times^B P_{\beta_N}/B$$ where the action of $B^N$ on $P_{\beta_1}\times\dots\times P_{\beta_N}$ is defined by $$(p_1,\dots,p_N).(b_1,\dots,b_N)=(p_1b_1,b_1^{-1}p_2b_2,\dots,b_{N-1}^{-1}p_Nb_N),\,\forall p_i\in P_{\beta_i},\,\forall b_i\in B.$$ \item For all $i\in\{1,\dots,N\}$, we denote by $Z_i$ the divisor of $X(\tilde{w})$ defined by $\{(p_1,\dots,p_N)\in X(\tilde{w})\mid p_i\in B\}$.
Thus $(Z_i)_{i\in\{1,\dots,N\}}$ is a basis of the Picard group of $X(\tilde{w})$, and if $\tilde{w}$ is reduced it is the basis of effective divisors \cite[Section 3]{LT04}. \end{enumerate} \end{defi} In order to define a toric degeneration of a Bott-Samelson variety, we need to introduce particular endomorphisms of $G$ and $B$. Since the simple roots are linearly independent elements in the character group of $G$, one can choose a positive integer $q$ and an injective morphism $\lambda:\Bbbk^*\longrightarrow T$ such that for all $i\in\{1,\dots,n\}$ and all $u\in \Bbbk^*$, $\alpha_i(\lambda(u))=u^q$. Let us define, for all $u\in \Bbbk^*$, $$\begin{array}{cccc} \tilde{\psi_u} : & G & \longrightarrow & G \\ & g & \longmapsto & \lambda(u)g\lambda(u)^{-1}. \end{array}$$ The morphism $\psi$ from $\Bbbk^*$ to the set of endomorphisms of $B$ defined by $\psi(u)=\tilde{\psi_u}_{|B}$ can be continuously extended to $0$. Indeed, the unipotent radical $U$ of $B$ lives in a group (denoted by $U^{(1)}$ in \cite{Ti81b}) where the action of $t\in T$ by conjugation is, on some generators (except the identity), the multiplication by some positive powers of $\alpha_i(t)$ for some $i\in\{1,\dots,n\}$. Then, for all $x\in U$, $\psi(u)(x)$ goes to the identity when $u$ goes to zero. We denote, for all $u\in \Bbbk$, by $\psi_u$ the morphism $\psi(u)$. Remark that $\psi_0$ is the projection from $B$ to $T$.
We are now able to give the following definition. \begin{defi} \begin{enumerate}[(i)] \item Let $\mathfrak{X}\longrightarrow \Bbbk$ be the variety defined by $$\mathfrak{X}:=\Bbbk\times P_{\beta_1}\times\dots\times P_{\beta_N}/B^N$$ where the action of $B^N$ on $\Bbbk\times P_{\beta_1}\times\dots\times P_{\beta_N}$ is defined by $\forall u\in \Bbbk,\,\forall p_i\in P_{\beta_i},\,\forall b_i\in B$, $$(u,p_1,\dots,p_N).(b_1,\dots,b_N)=(u,p_1b_1,\psi_u(b_1)^{-1}p_2b_2,\dots,\psi_u(b_{N-1})^{-1}p_Nb_N).$$ \item For all $i\in\{1,\dots,N\}$, we denote by $\mathcal{Z}_i$ the divisor of $\mathfrak{X}$ defined by $$\{(u,p_1,\dots,p_N)\in \mathfrak{X}\mid p_i\in B\}.$$ \end{enumerate} \end{defi} For all $u\in \Bbbk$, we denote by $\mathfrak{X}(u)$ the fiber of $\mathfrak{X}\longrightarrow \Bbbk$ over $u$. \begin{prop} \begin{enumerate}[(i)] \item For all $u\in \Bbbk^*$, $\mathfrak{X}(u)$ is isomorphic to the Bott-Samelson variety $X(\tilde{w})$ such that, for all $i\in\{1,\dots,N\}$, the divisor $\mathcal{Z}_i(u):=\mathfrak{X}(u)\cap\mathcal{Z}_i$ corresponds to the divisor $Z_i$ of $X(\tilde{w})$. \item $\mathfrak{X}(0)$ is a toric variety of dimension $N$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate}[(i)] \item Remark first that $\mathfrak{X}(1)$ is by definition the Bott-Samelson variety and that, for all $i\in\{1,\dots,N\}$, $\mathcal{Z}_i(1)=Z_i$. Now let $u\in \Bbbk^*$ and check that $$\begin{array}{cccl} \theta_u : & \mathfrak{X}(1) & \longrightarrow & \mathfrak{X}(u) \\ & (p_1,\dots,p_N) & \longmapsto & (p_1,\tilde{\psi_u}(p_2),\tilde{\psi_u}^2(p_3),\dots,\tilde{\psi_u}^{N-1}(p_N)) \end{array}$$ is well defined and is an isomorphism. Moreover, for all $i\in\{1,\dots,N\}$, $p_i$ is in $B$ if and only if $\tilde{\psi_u}(p_i)$ is in $B$, so that $\theta_u(Z_i)=\mathcal{Z}_i(u)$. \item Let $T_{\beta_i}$ be the maximal subtorus of $T$ acting trivially on $P_{\beta_i}/B\simeq {\mathbb P}^1_\Bbbk$.
Now, since $\psi_0(b)$ commutes with $T$ for all $b\in B$, one can define an effective action of $\prod_{i=1}^NT/T_{\beta_i}\simeq (\Bbbk^*)^N$ on $\mathfrak{X}(0)$ as follows $$\forall t_i\in T,\,\forall p_i\in P_{\beta_i},\,(t_1,\dots,t_N).(p_1,\dots,p_N)=(t_1p_1t_1^{-1},t_2p_2t_2^{-1},\dots,t_Np_Nt_N^{-1}).$$ Moreover, since $T/T_{\beta_i}\simeq\Bbbk^*$ acts on $P_{\beta_i}/B\simeq{\mathbb P}^1_\Bbbk$ with an open orbit, $(\Bbbk^*)^N$ also acts with an open orbit on $\mathfrak{X}(0)$. \end{enumerate} \end{proof} \begin{prop}\label{fan} Let $(e_1^+,\dots,e_N^+)$ be a basis of ${\mathbb Z}^N$. Define, for all $i\in\{1,\dots,N\}$, the vector $e_i^-:=-e_i^+-\sum_{j>i}\beta_{ij}e_j^+$. Then a fan $\mathbb{F}$ of $\mathfrak{X}(0)$ consists of the cones generated by the subsets of $\{e_1^+,\dots,e_N^+,e_1^-,\dots,e_N^-\}$ containing no subset of the form $\{e_i^+,e_i^-\}$. (In other words, it is the fan whose maximal cones are the cones generated by $e_1^{\epsilon_1},\dots,e_N^{\epsilon_N}$ with $\epsilon\in\{+,-\}^N$.) Moreover, for all $i\in\{1,\dots,N\}$, $\mathcal{Z}_i(0)$ is the irreducible $(\Bbbk^*)^N$-stable divisor of $\mathfrak{X}(0)$ corresponding to the one-dimensional cone of $\mathbb{F}$ generated by $e_i^+$. \end{prop} \begin{ex}\label{exemple1} If $G=\operatorname{SL(3)}$ and $\tilde{w}=s_{\alpha_1}s_{\alpha_1}$, we have the following fan. \begin{center} \includegraphics[width=5cm]{figure1.eps} \end{center} \end{ex} In fact, one can prove that the Bott-Samelson variety of Example \ref{exemple1} is isomorphic to the toric variety $\mathfrak{X}(0)$. But this is not the case in general. For example, if $G=\operatorname{SL(3)}$ and $\tilde{w}=s_{\alpha_1}s_{\alpha_2}$, $X(\tilde{w})$ is a toric variety (it is in fact ${\mathbb P}^1_\Bbbk\times{\mathbb P}^1_\Bbbk$) but it is not isomorphic to $\mathfrak{X}(0)$. And, if $G=\operatorname{SL(4)}$ and $\tilde{w}=s_{\alpha_2}s_{\alpha_1}s_{\alpha_3}s_{\alpha_2}$, then $X(\tilde{w})$ is not a toric variety.
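The fan of Proposition \ref{fan} is completely determined by the integers $\beta_{ij}$. The following Python sketch (our own helper names, not from the paper) builds the ray generators $e_i^\pm$ and the $2^N$ maximal cones:

```python
# Sketch (hypothetical helpers): rays and maximal cones of the fan F of X(0),
# following Proposition fan: e_i^- = -e_i^+ - sum_{j>i} beta_{ij} e_j^+.
from itertools import product

def fan_rays(B):
    """B[i][j] = beta_{ij} = <beta_i^vee, beta_j> for i < j (other entries unused).
    Returns the lists of vectors e_i^+ and e_i^- in the basis (e_1^+, ..., e_N^+)."""
    N = len(B)
    plus = [[1 if k == i else 0 for k in range(N)] for i in range(N)]
    minus = [[-plus[i][k] - sum(B[i][j] * plus[j][k] for j in range(i + 1, N))
              for k in range(N)] for i in range(N)]
    return plus, minus

def maximal_cones(B):
    """One maximal cone, generated by e_1^{eps_1},...,e_N^{eps_N}, per sign vector."""
    plus, minus = fan_rays(B)
    return {eps: [plus[i] if eps[i] == '+' else minus[i] for i in range(len(B))]
            for eps in product('+-', repeat=len(B))}
```

For Example \ref{exemple1} ($\beta_{12}=\langle\alpha_1^\vee,\alpha_1\rangle=2$), this yields the rays $e_1^-=(-1,-2)$ and $e_2^-=(0,-1)$ of the pictured fan.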
\begin{proof} Let us first record a few technical results. For every simple root $\alpha$, there exists a unique closed subgroup $\mathcal{U}_\alpha$ of $G$ and an isomorphism $$u_\alpha: {\mathbb G}_a\longrightarrow \mathcal{U}_\alpha\mbox{ such that } \forall t\in T,\,\forall x\in \Bbbk,\, tu_\alpha(x)t^{-1}=u_\alpha(\alpha(t)x).$$ Moreover, the $u_\alpha$ can be chosen such that $n_\alpha:=u_\alpha(1)u_{-\alpha}(-1)u_\alpha(1)$ is in the normalizer of $T$ in $G$ and has image $s_\alpha$ in $W$. And for all $x\in\Bbbk^*$ we have $$u_{-\alpha}(x)u_{\alpha}(-x^{-1})u_{-\alpha}(x)=\alpha^\vee(x^{-1})n_{-\alpha}.$$ See \cite[Chapter 8]{Sp98} for the finite case; the general case reduces to the finite case by the construction of $G$. Then for all $x\in\Bbbk^*$ we also have \begin{equation}\label{formule1} n_{-\alpha} u_{-\alpha}(-x)=\alpha^\vee(x)u_{-\alpha}(x)u_\alpha(-x^{-1})=u_{-\alpha}(x^{-1})\alpha^\vee(x)u_\alpha(-x^{-1}).\end{equation} Remark also that, for every simple root $\alpha$, the subgroup $\mathcal{U}_{-\alpha}$ is a subgroup of $P_\alpha$ and $n_{-\alpha}\in P_\alpha$. Then, for all $\epsilon\in\{0,1\}^N$, we can define an embedding $\phi_\epsilon$ of $\Bbbk^N$ in $\mathfrak{X}(0)$ by $$\phi_\epsilon(x_1,\cdots,x_N)=((n_{-\beta_1})^{\epsilon_1}u_{-\beta_1}((-1)^{\epsilon_1}x_1),\dots,(n_{-\beta_N})^{\epsilon_N}u_{-\beta_N}((-1)^{\epsilon_N}x_N)).$$ Note that the $\phi_\epsilon(\Bbbk^N)$, with $\epsilon\in\{0,1\}^N$, are the maximal affine $(\Bbbk^*)^N$-stable subvarieties of $\mathfrak{X}(0)$.
Moreover, if $x_i\in\Bbbk^*$ for all $i\in\{1,\dots,N\}$, we prove by induction, using Equation \ref{formule1} and the definition of $\mathfrak{X}(0)$, that \begin{eqnarray} \phi_\epsilon(x_1,\cdots,x_N) & = & (u_{-\beta_1}(x_1^{(-1)^{\epsilon_1}})\beta_1^\vee(x_1^{-1})^{\epsilon_1},\dots,u_{-\beta_N}(x_N^{(-1)^{\epsilon_N}})\beta_N^\vee(x_N^{-1})^{\epsilon_N})\nonumber\\ & = & (u_{-\beta_1}(x_1^{(-1)^{\epsilon_1}}), \dots, u_{-\beta_i}(x_i^{(-1)^{\epsilon_i}}\prod_{j<i}x_j^{-\epsilon_j\beta_{ji}}),\dots).\label{formule2}\end{eqnarray} Now, let us compute the weights of the regular functions on all these affine subvarieties. We first need to fix a basis of characters of $(\Bbbk^*)^N$. Let us denote, for all $i\in\{1,\dots,N\}$, by $X_i$ the function in $\Bbbk(\mathfrak{X}(0))=\Bbbk(\phi_{(0,\dots,0)}(({\mathbb G}_a)^N))$ defined by $X_i((u_{-\beta_1}(x_1),\dots,u_{-\beta_N}(x_N)))=x_i$. Denote also by $(\chi_i)_{i\in\{1,\dots,N\}}$ the weights with which $(\Bbbk^*)^N$ acts on $(X_i)_{i\in\{1,\dots,N\}}$, and by $(e_i^+)_{i\in\{1,\dots,N\}}$ the dual basis of $(\chi_i)_{i\in\{1,\dots,N\}}$. Then, if $\chi=\sum_{i=1}^Nk_i\chi_i$, we can check, using Equation \ref{formule2}, that $$\prod_{i=1}^NX_i^{k_i}\in \Bbbk[\phi_\epsilon(({\mathbb G}_a)^N)]\Longleftrightarrow \forall i\in\{1,\dots,N\},\,\left\{\begin{array}{ll} k_i=\langle\chi,e_i^+\rangle\geq 0 & \mbox{if }\epsilon_i=0 \\ -k_i-\sum_{j>i}\beta_{ij}k_j=\langle\chi,e_i^-\rangle\geq 0 & \mbox{if }\epsilon_i=1 \end{array}\right..$$ In other words, the cone associated to $\phi_\epsilon(\Bbbk^N)$ is generated by $e_1^{\epsilon_1'},\dots,e_N^{\epsilon_N'}$, where $\epsilon_i'=+$ if $\epsilon_i=0$ and $\epsilon_i'=-$ if $\epsilon_i=1$. This proves the first statement of the proposition. For the last statement, just remark that $\mathcal{Z}_i(0)$ is the divisor of $\mathfrak{X}(0)$ defined by the equation $X_i=0$, and that $X_i$ has weight $\chi_i$, which is the dual of $e_i^+$.
\end{proof} \section{Cohomology of divisors on the toric variety $\mathfrak{X}(0)$}\label{2} Let us first recall the result of M.~Demazure \cite{De70} on the cohomology of line bundles on smooth toric varieties. For the general theory of toric varieties, see \cite{Od88} or \cite{Fu93}. Let $X$ be a smooth complete toric variety of dimension $N$ associated to a complete fan $\mathbb{F}$. Let $\Delta(1)$ be the set of primitive elements of the one-dimensional cones of $\mathbb{F}$. For all $\rho\in\Delta(1)$, we denote by $D_\rho$ the corresponding irreducible $(\Bbbk^*)^N$-stable divisor of $X$. Let $D:=\sum_{\rho\in\Delta(1)}a_\rho D_\rho$. Let $h_D$ be the piecewise linear function associated to $D$, {\it i.e.} if $\mathcal{C}$ is the cone generated by $\rho_1,\dots,\rho_N$ then $h_{D|\mathcal{C}}$ is the linear function which takes the value $a_{\rho_i}$ at $\rho_i$. Denote by $X((\Bbbk^*)^N)$ the set of characters of $(\Bbbk^*)^N$. For all $m\in X((\Bbbk^*)^N)$, define the piecewise linear function $\phi_m:n\longmapsto \langle m,n\rangle+h_D(n)$. Let $\Delta(1)_m:=\{\rho\in\Delta(1)\mid\phi_m(\rho)<0\}.$ And define the simplicial scheme $\Sigma_m$ to be the set of all subsets of $\Delta(1)_m$ generating a cone of $\mathbb{F}$ (we refer to \cite[Chapter I.3]{Go58} for the cohomology of simplicial schemes). Each cohomology space $H^i(X,D)$ is a $(\Bbbk^*)^N$-module, so we have the following decomposition $$H^i(X,D)=\bigoplus_{m\in X((\Bbbk^*)^N)}H^i(X,D)_m.$$ M.~Demazure proved the following result. \begin{teo}[\cite{De70}]\label{toriccohom} With the notation above, \begin{enumerate}[(i)] \item if $\Sigma_m=\emptyset$, then $H^0(X,D)_m=\Bbbk$ and $H^i(X,D)_m=0$ for all $i>0$; \item if $\Sigma_m\neq\emptyset$, then $H^0(X,D)_m=0$, $H^1(X,D)_m=H^0(\Sigma_m,\Bbbk)/\Bbbk$ and $H^i(X,D)_m=H^{i-1}(\Sigma_m,\Bbbk)$ for all $i>1$.
\end{enumerate} \end{teo} Applying Theorem \ref{toriccohom} to $\mathfrak{X}(0)$, with the notation of the first section, one can deduce the following (the proof is left to the reader). \begin{cor}\label{cortoriccohom} Let $\mathcal{D}=\sum_{i=1}^Na_i\mathcal{Z}_i$ be a divisor of $\mathfrak{X}$ and $\mathcal{D}(0)$ be the corresponding divisor $\sum_{i=1}^Na_i\mathcal{Z}_i(0)$ of $\mathfrak{X}(0)$. \begin{enumerate}[(i)] \item If there is an integer $j$ such that $\phi_m(e_j^+)\geq 0$ and $\phi_m(e_j^-)< 0$, or, $\phi_m(e_j^+)< 0$ and $\phi_m(e_j^-)\geq 0$, then $H^i(\mathfrak{X}(0),\mathcal{D}(0))_m=0$ for all $i\geq 0$. \item If the condition above is not satisfied, let $j_m:=\sharp\{i\in\{1,\dots,N\}\mid\phi_m(e_i^+)<0\}$; then $H^i(\mathfrak{X}(0),\mathcal{D}(0))_m=0$ for all $i\neq j_m$ and $H^{j_m}(\mathfrak{X}(0),\mathcal{D}(0))_m=\Bbbk$. \end{enumerate} \end{cor} \begin{ex} If $G=\operatorname{SL(3)}$ and $\tilde{w}=s_{\alpha_1}s_{\alpha_2}$, and if the simplicial scheme $\Sigma_m$ is not empty, it is one of the following, modulo symmetries. \begin{center} \includegraphics[width=12cm]{figure2.eps} \end{center} In the first three cases, $H^0(\Sigma_m,\Bbbk)=\Bbbk$ and the cohomology of $\Sigma_m$ in positive degrees vanishes, and we are in case (i) of Corollary \ref{cortoriccohom}. In the fourth and fifth cases, we are in case (ii) of Corollary \ref{cortoriccohom}. In the fourth case, the non-trivial cohomology groups are $H^0(\Sigma_m,\Bbbk)=\Bbbk^2$ and $H^1(\Sigma_m,\Bbbk)=\Bbbk$. In the fifth case, the only non-trivial cohomology group is $H^0(\Sigma_m,\Bbbk)=\Bbbk^2$. \end{ex} \begin{proof}[Sketch of proof of Corollary \ref{cortoriccohom}] \begin{enumerate}[(i)] \item Suppose $\phi_m(e_j^+)\geq 0$ and $\phi_m(e_j^-)< 0$. Then all maximal simplices of $\Sigma_m$ contain $e_j^-$, so that $\Sigma_m$ is contractible. \item One can check that $\Sigma_m$ is the set of faces of a $j_m$-dimensional convex polytope. \end{enumerate} \end{proof} We will now prove two lemmas.
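Corollary \ref{cortoriccohom} is easy to check by machine: on $\mathfrak{X}(0)$ one has $\phi_m(e_i^+)=m_i+a_i$ and $\phi_m(e_i^-)=-m_i-\sum_{j>i}\beta_{ij}m_j$, so the weight space at $m$ either vanishes in every degree (case (i)) or is one-dimensional in the single degree $j_m$ (case (ii)). The following Python sketch (our own helper names, not from the paper) implements this dichotomy and brute-forces cohomology dimensions over a finite box of weights:

```python
# Sanity-check sketch of Corollary cortoriccohom (hypothetical helper names):
# phi_m(e_i^+) = m_i + a_i and phi_m(e_i^-) = -m_i - sum_{j>i} beta_{ij} m_j.
from itertools import product

def weight_degree(B, a, m):
    """Return None when case (i) kills all cohomology at the weight m; otherwise
    the single degree j_m with H^{j_m}(X(0), D(0))_m = k (case (ii))."""
    N = len(B)
    jm = 0
    for i in range(N):
        p = m[i] + a[i]
        q = -m[i] - sum(B[i][j] * m[j] for j in range(i + 1, N))
        if (p >= 0) != (q >= 0):
            return None                 # case (i): vanishing in every degree
        if p < 0:
            jm += 1                     # a "both negative" index contributes to j_m
    return jm

def cohomology_dims(B, a, box=20):
    """Brute-force dim H^i(X(0), D(0)) by summing over weights m in [-box, box]^N;
    the box must be large enough to contain all contributing weights."""
    N = len(B)
    dims = [0] * (N + 1)
    for m in product(range(-box, box + 1), repeat=N):
        d = weight_degree(B, a, m)
        if d is not None:
            dims[d] += 1
    return dims
```

For $N=1$ the special fiber is ${\mathbb P}^1_\Bbbk$, and the sketch reproduces the classical dimensions $h^0({\mathbb P}^1,\mathcal{O}(a))=a+1$ for $a\geq 0$ and $h^1({\mathbb P}^1,\mathcal{O}(a))=-a-1$ for $a\leq -2$.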
In the first one, we give a necessary condition on $m\in X((\Bbbk^*)^N)$ to satisfy the condition of Corollary \ref{cortoriccohom} (ii). The second lemma will be used to compute, in Case (ii), the possible values of $j_m$, which depend on the Conditions $(C_i^\pm)$.\\ First, for all $\epsilon\in\{+,-\}^N$, we define $x^\epsilon\in{\mathbb Z}^N$ by induction, as follows: $$x^\epsilon_i= \left\{\begin{array}{ll} -a_i & \mbox{ if }\epsilon_i=+\\ -\sum_{j>i}\beta_{ij}x^\epsilon_j & \mbox{ if }\epsilon_i=- \end{array}\right.$$ \begin{lem}\label{convexhull} Let $m\in{\mathbb Z}^N$ be such that for all $i\in\{1,\dots,N\}$, we have either $\phi_m(e_i^+)\geq 0$ and $\phi_m(e_i^-)\geq 0$, or, $\phi_m(e_i^+)< 0$ and $\phi_m(e_i^-)< 0$. Then $m$ is in the convex hull of the $x^\epsilon$. \end{lem} \begin{proof} Since, for all $i\in\{1,\dots,N\}$, $\phi_m(e_i^+)=m_i+a_i$ and $-\phi_m(e_i^-)=m_i+\sum_{j>i}\beta_{ij}m_j$ have opposite signs, there exist $N$ real numbers $\lambda_1,\dots,\lambda_N$ in $[0,1]$ such that, for all $i\in\{1,\dots,N\}$, $m_i=-\lambda_ia_i-(1-\lambda_i)\sum_{j>i}\beta_{ij}m_j$. Denote, for all $\epsilon\in\{+,-\}^N$, by $m^\epsilon$ the product $$\prod_{\begin{array}{c}\scriptstyle 1\leq i\leq N\\ \scriptstyle \epsilon_i=+\end{array}}\lambda_i\times\prod_{\begin{array}{c}\scriptstyle 1\leq i\leq N\\ \scriptstyle \epsilon_i=-\end{array}}(1-\lambda_i).$$ Remark that $m^\epsilon\in [0,1]$. Let us prove by induction that for all $i\in\{1,\dots,N\}$ we have $m_i=\sum_{\epsilon\in\{+,-\}^N}m^\epsilon x^\epsilon_i$, {\it i.e.} $m=\sum_{\epsilon\in\{+,-\}^N}m^\epsilon x^\epsilon$. We will use the following easy fact: for all $i\in\{1,\dots,N\}$, \begin{equation}\lambda_i=\sum_{\begin{array}{c}\scriptstyle \epsilon\in\{+,-\}^N\\ \scriptstyle \epsilon_i=+\end{array}}\label{lambdai} m^\epsilon.\end{equation} In particular, for $i=N$, we deduce, with the definition of $x_N^\epsilon$, that $\sum_{\epsilon\in\{+,-\}^N}m^\epsilon x^\epsilon_N=-\lambda_Na_N=m_N$.
Now let $i<N$ be such that, for all $j>i$, $m_j=\sum_{\epsilon\in\{+,-\}^N}m^\epsilon x^\epsilon_j$. Then \begin{eqnarray*} m_i & = & -\lambda_ia_i-(1-\lambda_i)\sum_{j>i}\beta_{ij}m_j\\ & = & -\lambda_ia_i-(1-\lambda_i)\sum_{j>i}\beta_{ij}\sum_{\epsilon\in\{+,-\}^N}m^\epsilon x^\epsilon_j\\ & = & -\lambda_ia_i-(1-\lambda_i)\sum_{\epsilon\in\{+,-\}^N}m^\epsilon \sum_{j>i}\beta_{ij} x^\epsilon_j. \end{eqnarray*} Moreover, if, for all $\epsilon\in\{+,-\}^N$, we define $\epsilon'\in\{+,-\}^N$ by $\epsilon'_j=\epsilon_j$ for all $j\neq i$ and $\epsilon'_i=-\epsilon_i$, we have $$\sum_{j>i}\beta_{ij} x^\epsilon_j= \left\{\begin{array}{ll} -x^\epsilon_i & \mbox{ if }\epsilon_i=-\\ -x^{\epsilon'}_i & \mbox{ if }\epsilon_i=+ \end{array}\right.$$ Then $$m_i= -\lambda_ia_i-(1-\lambda_i)\sum_{\begin{array}{c}\scriptstyle \epsilon\in\{+,-\}^N\\ \scriptstyle \epsilon_i=-\end{array}}(m^\epsilon+m^{\epsilon'})x^\epsilon_i.$$ We conclude by Equation \ref{lambdai} and by checking that, for all $\epsilon\in\{+,-\}^N$ such that $\epsilon_i=-$, we have $(1-\lambda_i)(m^\epsilon+m^{\epsilon'})=m^\epsilon$. \end{proof} \begin{lem}\label{lemme2} For all $\epsilon\in\{+,-\}^N$ and all $i\in\{1,\dots,N\}$, we have $\phi_{x^\epsilon }(e_i^+)=0$ and $\phi_{x^\epsilon }(e_i^-)=a_i+\sum_{j>i,\,\epsilon_j=+}\alpha_{ij}^\epsilon a_j$ if $\epsilon_i=+$ (and conversely if $\epsilon_i=-$). \end{lem} \begin{proof} Fix $\epsilon\in\{+,-\}^N$. The lemma follows from the three following steps.\\ Step 1. Let us first prove by induction that, for all $i\in\{1,\dots,N\}$, $$x^\epsilon_i=\sum_{\begin{array}{c}\scriptstyle i+1\leq h\leq N \\ \scriptstyle \epsilon_h=+\end{array}}\left(\sum_{k\geq 1}\sum_{\begin{array}{c}\scriptstyle i=i_0<i_1<\dots<i_k=h \\ \scriptstyle \forall x<k, \,\epsilon_{i_x}=-\end{array}}(-1)^{k+1}\prod_{x=0}^{k-1}\beta_{i_xi_{x+1}}\right)a_h+\left\{\begin{array}{ll} -a_i & \mbox{ if }\epsilon_i=+\\ 0 & \mbox{ if }\epsilon_i=- \end{array}\right.$$ Let $i\in\{1,\dots,N\}$.
Remark that if $\epsilon_i=+$, this equality is clearly true because for all $k\geq 1$ there exists no $i=i_0<i_1<\dots<i_k=h$ such that $\forall x<k, \,\epsilon_{i_x}=-$. Remark also, for a similar reason, that the sum from $h=i+1$ to $N$ can be replaced by the sum from $h=i$ to $N$ (always with the condition $\epsilon_h=+$). Suppose now that $\epsilon_i=-$ and that for all $j>i$ the equality holds. Then \begin{eqnarray*} x^\epsilon_i & = & -\sum_{j>i}\beta_{ij}x^\epsilon_j \\ & = & -\sum_{j>i}\sum_{\begin{array}{c}\scriptstyle j\leq h\leq N \\\scriptstyle \epsilon_h=+\end{array}}\left(\sum_{k\geq 1}\sum_{\begin{array}{c}\scriptstyle j=j_0<j_1<\dots<j_k=h \\ \scriptstyle \forall x<k, \,\epsilon_{j_x}=-\end{array}}(-1)^{k+1}\beta_{ij}\prod_{x=0}^{k-1}\beta_{j_xj_{x+1}}\right)a_h+\sum_{\begin{array}{c}\scriptstyle j>i\\ \scriptstyle \epsilon_j=+\end{array}}\beta_{ij}a_j \\ & = & \sum_{\begin{array}{c}\scriptstyle i+1\leq h\leq N \\ \scriptstyle \epsilon_h=+\end{array}}\left(\sum_{j=i+1}^h\sum_{k\geq 1}\sum_{\begin{array}{c}\scriptstyle j=j_0<j_1<\dots<j_k=h \\ \scriptstyle \forall x<k, \,\epsilon_{j_x}=-\end{array}}(-1)^{k+2}\beta_{ij}\prod_{x=0}^{k-1}\beta_{j_xj_{x+1}}\right)a_h+\sum_{\begin{array}{c}\scriptstyle j>i\\ \scriptstyle \epsilon_j=+\end{array}}\beta_{ij}a_j\\ & = & \sum_{\begin{array}{c}\scriptstyle i+1\leq h\leq N \\ \scriptstyle \epsilon_h=+\end{array}}\left(\sum_{k\geq 2}\sum_{\begin{array}{c}\scriptstyle i=i_0<i_1<\dots<i_k=h \\ \scriptstyle \forall x<k, \,\epsilon_{i_x}=-\end{array}}(-1)^{k+2}\prod_{x=0}^{k-1}\beta_{i_xi_{x+1}}\right)a_h+\sum_{\begin{array}{c}\scriptstyle h>i\\ \scriptstyle \epsilon_h=+\end{array}}\beta_{ih}a_h. \end{eqnarray*} But for all $h\in\{i+1,\dots,N\}$, $\beta_{ih}$ equals $$\sum_{\begin{array}{c}\scriptstyle i=i_0<i_1<\dots<i_k=h \\ \scriptstyle \forall x<k, \,\epsilon_{i_x}=-\end{array}}(-1)^{k+2}\prod_{x=0}^{k-1}\beta_{i_xi_{x+1}}$$ when $k=1$, so that we obtain the desired equation.\\ Step 2.
For all $i\in\{1,\dots,N\}$ and $j\in\{i+1,\dots,N\}$ we have $$\alpha^\epsilon_{ij}=\sum_{k\geq 1}\sum_{\begin{array}{c}\scriptstyle i=i_0<i_1<\dots<i_k=j \\ \scriptstyle \forall x<k, \,\epsilon_{i_x}=-\end{array}}(-1)^{k+1}\prod_{x=0}^{k-1}\beta_{i_xi_{x+1}}.$$ The proof, by induction on $j$, of this formula is the same as in \cite[Lemma 3.5]{Pe05} and is left to the reader.\\ Step 3. Recall that $\phi_m(e_i^+)=m_i+a_i$ and that $\phi_m(e_i^-)=-m_i-\sum_{j>i}\beta_{ij}m_j$. Then, if $\epsilon_i=+$, we have $\phi_{x^\epsilon }(e_i^+)=0$ and $\phi_{x^\epsilon }(e_i^-)=-x^\epsilon_i-\sum_{j>i}\beta_{ij}x^\epsilon_j$. And, if $\epsilon_i=-$, we have $\phi_{x^\epsilon }(e_i^-)=0$ and $\phi_{x^\epsilon }(e_i^+)=a_i+x^\epsilon_i$. In fact, we only have to compute $\phi_{x^\epsilon }(e_i^+)$ in the case where $\epsilon_i=-$, {\it i.e.} $a_i+x^\epsilon_i$. Indeed, if $\epsilon_i=+$, define $\epsilon'\in\{+,-\}^N$ by $\epsilon'_j=\epsilon_j$ for all $j\neq i$ and $\epsilon'_i=-$. Then $\phi_{x^\epsilon }(e_i^-)=\phi_{x^{\epsilon'} }(e_i^+)$. \end{proof} We are now able to prove the vanishing theorem for divisors on the toric variety $\mathfrak{X}(0)$. \begin{teo}\label{maintoric} Let $\mathcal{D}=\sum_{i=1}^Na_i\mathcal{Z}_i$ be a divisor of $\mathfrak{X}$ and $\eta\in\{+,-,0\}^N$. Suppose that the coefficients $(a_i)_{i\in\{1,\dots,N\}}$ satisfy conditions $(C_i^{\eta_i})$ for all $i\in\{1,\dots,N\}$ such that $\eta_i\neq 0$. Then $$H^i(\mathfrak{X}(0),\mathcal{D}(0))=0,\mbox{ for all }i<\sharp\{1\leq j \leq N\mid\eta_j=-\}\mbox{ and for all }i>N-\sharp\{1\leq j \leq N\mid\eta_j=+\}.$$ \end{teo} \begin{proof} Let $m\in X((\Bbbk^*)^N)$ be such that $H^i(\mathfrak{X}(0),\mathcal{D}(0))_m$ is not zero for some $i\geq 0$.
Then, by Corollary \ref{cortoriccohom} (i) and Lemma \ref{convexhull}, there exist non-negative real numbers $m^\epsilon$ with $\epsilon\in\{+,-\}^N$ such that $\sum_{\epsilon\in\{+,-\}^N}m^\epsilon=1$ and $m=\sum_{\epsilon\in\{+,-\}^N}m^\epsilon x^\epsilon$. Then, by Lemma \ref{lemme2}, $$\phi_m(e_i^+)=\sum_{\begin{array}{c}\scriptstyle \epsilon\in\{+,-\}^N\\ \scriptstyle \epsilon_i=-\end{array}}m^\epsilon C_i^\epsilon \hspace{1cm}\mbox{ and }\hspace{1cm} \phi_m(e_i^-)=\sum_{\begin{array}{c}\scriptstyle \epsilon\in\{+,-\}^N\\ \scriptstyle \epsilon_i=+\end{array}}m^\epsilon C_i^\epsilon.$$ Then, if Condition $(C_i^-)$ is satisfied, $\phi_m(e_i^+)$ and $\phi_m(e_i^-)$ are both negative. And if Condition $(C_i^+)$ is satisfied and if the integers $\phi_m(e_i^+)$ and $\phi_m(e_i^-)$ are not both non-negative, then one of them equals $-1$ (say for example $\phi_m(e_i^+)$). This means that for all $\epsilon \in\{+,-\}^N$ such that $\epsilon_i=+$, we have $m^\epsilon=0$. Then $\phi_m(e_i^-)=0$, which is not possible by the hypothesis on $m$ and Corollary \ref{cortoriccohom} (i). We conclude the proof by Corollary \ref{cortoriccohom} (ii). \end{proof} \begin{ex} If $G=\operatorname{SL(3)}$ and $\tilde{w}=s_{\alpha_1}s_{\alpha_2}$, the vanishing results for the cohomology of the divisor $\mathcal{D}=a_1\mathcal{Z}_1+a_2\mathcal{Z}_2$ obtained from Theorem \ref{maintoric} are represented in the following picture. \begin{center} \includegraphics[width=12cm]{figure3.eps} \end{center} \end{ex} Let us now discuss, with a more general example, what sort of vanishings Theorem \ref{main} gives. \begin{ex}\label{exemple} Let $G=\operatorname{SL(4)}$ and $\tilde{w}=s_{\alpha_2}s_{\alpha_1}s_{\alpha_3}s_{\alpha_2}$ (with natural notation). Let $D=\sum_{i=1}^4a_iZ_i$ be a divisor of $X(\tilde{w})$.
Then, all the integers $C_i^\epsilon$ we obtain are the following: $$\begin{array}{cc} i=4 & a_4\\ i=3 & a_3,\,a_3-a_4\\ i=2 & a_2,\,a_2-a_4\\ i=1 & a_1,\,a_1-a_2,\,a_1-a_3,\,a_1-a_2-a_3,\,a_1-a_2+a_4,\,a_1-a_3+a_4,\,a_1-a_2-a_3+2a_4. \end{array}$$ In particular, Conditions $(C_i^+)_{i\in\{1,2,3,4\}}$ are equivalent to $a_4\geq -1$, $a_3\geq a_4-1$, $a_2\geq a_4-1$ and $a_1\geq a_2+a_3-1$. In that case, Theorem \ref{main} tells us that the cohomology of $D$ vanishes in non-zero degrees. But this fact can already be deduced from \cite[Theorem 7.4]{LT04}. Actually, the theorem of N.~Lauritzen and J.F.~Thomsen gives the vanishing of the cohomology of $D$ in non-zero degrees exactly when $a_4\geq -1$, $a_3\geq \max(a_4-1,-1)$, $a_2\geq \max(a_4-1,-1)$ and $a_1\geq \max(a_2+a_3-a_4-1,-1)$. Let us consider $D=2Z_1+2Z_2+2Z_3+2Z_4$; by the latter assertion, the cohomology of $D$ vanishes in non-zero degrees. But one can compute that the cohomology of the corresponding divisor on $\mathfrak{X}(0)$ is not trivial in degree~1 (indeed, we have for example $H^1(\mathfrak{X}(0),\mathcal{D}(0))_m=\Bbbk$ when $m=\frac{1}{2}(x^{(-,+,+,+)}+x^{(-,+,+,-)})=(0,-2,-2,-3)$). Theorem \ref{main} is not as powerful as the results of N.~Lauritzen and J.F.~Thomsen for ``positive'' divisors (or also for ``negative'' divisors). But for all other divisors it gives many new vanishing results. For example, if $a_4\geq 0$, $a_3\geq a_4$, $a_2<0$, Theorem \ref{main} gives the vanishing of the cohomology of $D$ in degrees 0, 3 and 4. \end{ex} \begin{rems} Theorem \ref{main} is easy to apply to a given divisor of a Bott-Samelson variety. Indeed, we have written a program that takes a triple $(A,\tilde{w},Z)$ consisting of a Cartan matrix $A$, an expression $\tilde{w}$ and a divisor $Z$ of $X(\tilde{w})$, and that computes the vanishing results in the cohomology of $Z$ given by Theorem \ref{main} (contact the author for more details).
We can also obtain vanishing results in the cohomology of line bundles on Schubert varieties. These results are also computable. As for Bott-Samelson varieties, we do not recover all the already-known vanishing results for ``positive'' line bundles, but we obtain new results for more general line bundles. We also remark that the result we obtain depends on the chosen reduced expression of the element of the Weyl group associated to the Schubert variety. \end{rems} \noindent Boris {\sc Pasquier}, Hausdorff Center for Mathematics, Universit{\"a}t Bonn, Landwirtschaftskammer (Neubau) Endenicher Allee 60, 53115 Bonn, Germany. \noindent {\it email}: \texttt{[email protected]} \end{document}
\begin{document} \title{Generalized Sweeping Line Spanners} \begin{abstract} We present \emph{sweeping line graphs}, a generalization of $\Theta$-graphs. We show that these graphs are spanners of the complete graph, as well as of the visibility graph when line segment constraints or polygonal obstacles are considered. Our proofs use general inductive arguments to make the step to the constrained setting that could apply to other spanner constructions in the unconstrained setting, removing the need to find separate proofs that they are spanning in the constrained and polygonal obstacle settings. \end{abstract} \section{Introduction} A \emph{geometric graph} $G$ is a graph whose vertices are points in the plane and whose edges are line segments between pairs of points. Every edge in the graph has weight equal to the Euclidean distance between its two endpoints. The distance between two vertices $u$ and $v$ in $G$, denoted by $\delta_{G}(u,v)$, or simply $\delta(u,v)$ when $G$ is clear from the context, is defined as the sum of the weights of the edges along the shortest path between $u$ and $v$ in $G$. A subgraph $H$ of $G$ is a $t$-spanner of $G$ (for $t \ge 1$) if for each pair of vertices $u$ and $v$, $\delta_{H}(u,v) \le t \cdot \delta_{G}(u,v)$. The smallest $t$ for which $H$ is a $t$-spanner is the \emph{spanning ratio} or \emph{stretch factor} of $H$. The graph $G$ is referred to as the \emph{underlying graph} of $H$. The spanning properties of various geometric graphs have been studied extensively in the literature (see~\cite{BS11, NS-GSN-06} for an overview). The spanning ratio of a class of graphs is the supremum over the spanning ratios of all members of that graph class. Since spanners preserve the lengths of all paths up to a factor of $t$, these graphs have applications in the context of geometric problems, including motion planning and optimizing network costs and delays. 
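The spanning ratio defined above is straightforward to measure on a concrete graph. The following Python sketch (a hypothetical helper, not part of the paper) computes all-pairs shortest paths with Floyd-Warshall and reports the worst-case ratio of graph distance to Euclidean distance:

```python
# Illustration (our own helper, not from the paper): the stretch factor of a
# geometric graph H, i.e. max over vertex pairs of delta_H(u,v) / |uv|.
import math
from itertools import combinations

def stretch_factor(points, edges):
    """points: list of (x, y); edges: set of index pairs {u, v}."""
    n = len(points)
    INF = float('inf')
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for u, v in edges:
        w = math.dist(points[u], points[v])   # edge weight = Euclidean length
        dist[u][v] = dist[v][u] = w
    for k in range(n):                        # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return max(dist[u][v] / math.dist(points[u], points[v])
               for u, v in combinations(range(n), 2))
```

For example, the path graph on the corners of a unit square has stretch factor 3, realized by the two endpoints of the path.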
We introduce a generalization of an existing geometric spanner ($\Theta$-graphs) which we call \emph{sweeping line spanners}. We show that these graphs are spanners, also when considering line segment obstacles or polygonal obstacles during its construction. We show this using a very general method that we conjecture applies to other geometric spanners as well, meaning that such proofs do not have to be reinvented for different spanner constructions. Clarkson~\cite{C87} and Keil~\cite{K88} independently introduced $\Theta$-graphs as follows: for each vertex $u$, we partition the plane into $k$ disjoint cones with apex $u$. Each cone has aperture equal to $\theta = \frac{2\pi}{k}$ (see Figure~\ref{fig:cones}) and the orientation of these cones is identical for every vertex. The $\Theta$-graph is constructed by, for each cone with apex $u$, connecting $u$ to the vertex $v$ whose projection onto the bisector of the cone is closest to $u$ (see Figure~\ref{fig:closest}). When $k$ cones are used, we denote the resulting $\Theta$-graph by the $\Theta_{k}$-graph. We note that while $\Theta$-graphs can be defined using an arbitrary line in the cone instead of the bisector (see Chapter 4 of \cite{NS-GSN-06}), the bisector is by far the most commonly used construction and the spanning ratios mentioned in the following paragraph only apply when the bisector is used in the construction process. \begin{figure} \caption{The plane around $u$ is split into 10 cones.} \label{fig:cones} \caption{Vertex $v$ is the vertex with the projection closest to $u$.} \label{fig:closest} \end{figure} Ruppert and Seidel~\cite{RS91} bounded the spanning ratio of these graphs (when there are at least 7 cones) from above by $1/(1 - 2 \sin (\theta/2))$, when $\theta < \pi/3$. Bonichon~\emph{et~al.}\xspace~\cite{BGHI10} showed that the $\Theta_6$-graph has a tight spanning ratio of 2, i.e., it has a matching upper and lower bound of 2.
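The cone-based construction described above can be sketched directly in code. The following Python helper (our own, hypothetical) connects each vertex, in each of its $k$ cones, to the vertex minimizing the projection onto a chosen sweep direction; with `gamma=0` the sweep direction is the cone bisector, recovering the standard $\Theta_k$-graph, while a non-zero `gamma` anticipates the sweeping line graphs introduced later:

```python
# Sketch of the cone-based construction (generic helper names, ours): for each
# vertex u and each of k cones, connect u to the vertex in the cone whose
# projection on the sweep direction is smallest; gamma = 0 gives the Theta_k-graph.
import math

def sweeping_line_graph(points, k, gamma=0.0):
    theta = 2 * math.pi / k
    edges = set()
    for u, (ux, uy) in enumerate(points):
        for c in range(k):
            bisector = c * theta + theta / 2       # cone c covers [c*theta, (c+1)*theta)
            sweep_dir = bisector + gamma           # normal of the sweeping line
            best, best_key = None, None
            for v, (vx, vy) in enumerate(points):
                if v == u:
                    continue
                ang = math.atan2(vy - uy, vx - ux) % (2 * math.pi)
                if not (c * theta <= ang < (c + 1) * theta):
                    continue                       # v lies outside cone c
                key = ((vx - ux) * math.cos(sweep_dir)
                       + (vy - uy) * math.sin(sweep_dir))
                if best is None or key < best_key:
                    best, best_key = v, key
            if best is not None:
                edges.add((min(u, best), max(u, best)))
    return edges
```

On three near-collinear points, for instance, the construction links consecutive points rather than connecting the two extremes directly.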
Other recent results include a tight spanning ratio of $1 + 2 \sin(\theta/2)$ for $\Theta$-graphs with $4m + 2$ cones, where $m \geq 1$ is an integer, and improved upper bounds for the other families of $\Theta$-graphs~\cite{BCMRV16}. When there are fewer than 6 cones, most inductive arguments break down. Hence, it was only recently that upper bounds on the spanning ratio of the $\Theta_5$-graph and the $\Theta_4$-graph were determined: $\sqrt{50 + 22 \sqrt{5}} \approx 9.960$ for the $\Theta_5$-graph~\cite{BMRV2015} and $(1 + \sqrt{2}) \cdot (\sqrt{2} + 36) \cdot \sqrt{4 + 2 \sqrt{2}} \approx 237$ for the $\Theta_4$-graph~\cite{BBCRV2013}. These bounds were recently improved to $5.70$ for the $\Theta_5$-graph~\cite{BHO21} and $17$ for the $\Theta_4$-graph~\cite{BCDS19}. Constructions similar to those demonstrated by El Molla~\cite{E09} for Yao-graphs show that $\Theta$-graphs with fewer than 4 cones are not spanners. In fact, until recently it was not known that the $\Theta_3$-graph is connected~\cite{ABBBKRTV2013}. An alternative way of describing the $\Theta$-graph construction is that for each cone with apex $u$, we visualize a line perpendicular to the bisector of the cone that sweeps outwards from the apex $u$. The $\Theta$-graph is then constructed by connecting $u$ to the first vertex $v$ in the cone encountered by the sweeping line as it moves outwards from $u$ (see Figure~\ref{fig:sweepingTheta}). \begin{figure} \caption{The sweeping line of a cone in a $\Theta$-graph. The sweeping line is a thick black segment inside the cone and grey dotted outside, as vertices outside the cone are ignored.} \label{fig:sweepingTheta} \caption{The sweeping line of a cone in the sweeping line graph.
For comparison to $\Theta$-graphs, the line through $u$ perpendicular to the sweeping line is shown in red.} \label{fig:sweepingLine} \end{figure} The \emph{sweeping line graph} generalizes this construction by allowing the sweeping line to be at an angle $\gamma$ to the line perpendicular to the bisector of the cone (see Figure~\ref{fig:sweepingLine}, a more precise definition follows in Section 2). When $\gamma \in [0, \frac{\pi - 3\theta}{2})$ we show that the resulting graph is a spanner whose spanning ratio depends only on $\theta$ and $\gamma$. We note that this angle $\gamma$ implies that the line perpendicular to the sweeping line can be \emph{outside} the cone associated with that sweeping line, which is not supported in the common $\Theta$-graph (using bisectors) or the more general ones described in~\cite{NS-GSN-06}. For example, when 10 cones are used (i.e., $\theta = \pi/5$) the construction in~\cite{NS-GSN-06} allows for an angle up to $\theta/2 = \pi/10$, while our construction allows the far larger value of $7\pi/20$ and this difference increases as the number of cones increases (i.e., $\theta$ decreases). Pushing the generalization of these graphs even further, we consider spanners in two more general settings by introducing line segment constraints and polygonal obstacles. Specifically, given a set $P$ of points in the plane, let $S$ be a set of line segments connecting pairs of vertices in $P$ (not every vertex in $P$ needs to be an endpoint of a constraint in $S$). We refer to $S$ as \emph{line segment constraints}, or simply \emph{constraints}. The set of line segment constraints is planar, i.e. no two constraints intersect properly. Two vertices $u$ and $v$ can see each other if and only if either the line segment $uv$ does not properly intersect any constraint or $uv$ is itself a constraint. 
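The visibility predicate just defined translates directly into code. The following Python sketch (our own illustration; constraints are assumed to be given as pairs of endpoint tuples) uses the standard orientation test to detect proper intersections:

```python
def orient(a, b, c):
    """Sign of the cross product (b-a) x (c-a): 1 for a left turn,
    -1 for a right turn, 0 if the three points are collinear."""
    d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (d > 0) - (d < 0)

def properly_intersects(p, q, r, s):
    """True iff segments pq and rs cross at a point interior to both."""
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

def can_see(u, v, constraints):
    """u and v see each other iff uv is itself a constraint or uv does
    not properly intersect any constraint (the definition above)."""
    if (u, v) in constraints or (v, u) in constraints:
        return True
    return not any(properly_intersects(u, v, a, b) for (a, b) in constraints)
```

Note that a segment sharing an endpoint with a constraint does not properly intersect it, which matches the definition.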
If two vertices $u$ and $v$ can see each other, the line segment $uv$ is a \emph{visibility edge} (this notion can be generalized to apply to arbitrary points that can see each other). The \emph{visibility graph} of $P$ with respect to a set of constraints $S$, denoted by $Vis(P,S)$, has $P$ as vertex set and all visibility edges defined by vertices in $P$ as edge set. In other words, it is the complete graph on $P$ minus all edges that properly intersect one or more constraints in $S$. The aim of this generalization is to construct a spanner whose edges do not properly intersect any constraint; in other words, to construct a spanner of $Vis(P,S)$. Polygonal obstacles generalize the notion of line segment constraints by allowing the constraints to be simple polygons instead of line segments. In this situation, $S$ is a finite set of simple polygonal obstacles where each corner of each obstacle is a vertex in $P$, such that no two obstacles intersect. We assume that each vertex is part of at most one polygonal obstacle and occurs at most once along its boundary, i.e., the obstacles are vertex-disjoint simple polygons. Note that $P$ can also contain vertices that do not lie on the corners of the obstacles. The definitions of visibility edge and visibility graph are analogous to the ones for line segment constraints. In the context of motion planning amid obstacles, Clarkson~\cite{C87} showed how to construct a linear-sized $(1+\epsilon)$-spanner of $Vis(P,S)$. Subsequently, Das~\cite{D97} showed how to construct a spanner of $Vis(P,S)$ with constant spanning ratio and constant degree. More recently, the constrained $\Theta_6$-graph was shown to be a 2-spanner of $Vis(P,S)$~\cite{BFRV12} when considering line segment constraints. This result was recently generalized to polygonal obstacles~\cite{polygonallemma}.
Most related to this paper is the result by Bose and van Renssen~\cite{BR2019}, who generalized the results from~\cite{BCMRV16} to the setting with line segment constraints, without increasing the spanning ratios of the graphs. In this paper, we examine the sweeping line graph in the setting without constraints (or the unconstrained setting), with line segment constraints (or the constrained setting), and with polygonal obstacles. First, we generalize the spanning proof of the $\Theta$-graph given in the book by Narasimhan and Smid~\cite{NS-GSN-06} to the sweeping line graph in the unconstrained setting. Next, we extend the proof to the constrained setting and finally apply it to the case of polygonal obstacles. In all three cases, we prove that the spanning ratio is upperbounded by $\frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin\theta}$, where $\theta = \frac{2\pi}{k}$ ($k \geq 7$) and $\gamma \in [0, \frac{\pi - 3\theta}{2})$. The most interesting aspect of our approach is that the step from the unconstrained to the constrained and polygonal obstacle settings is very general and could apply to other spanner constructions in the unconstrained setting as well, making it a step towards a general condition for which spanners in the unconstrained setting can be proven to be spanners in the presence of obstacles. \section{Preliminaries} Throughout this paper, the notation $|pq|$ refers to the Euclidean distance between $p$ and $q$. We also emphasize that a point can be any point in the plane, while a vertex is restricted to being one of the points in the point set $P$. Before we formally define the \emph{sweeping line graph}, we first need a few other notions. A cone is the region in the plane between two rays that emanate from the same point, called the apex of the cone. Let $k \ge 7$ and define $\theta = \frac{2\pi}{k}$. If we rotate the positive $x$-axis by angles $i \cdot \theta$, $0 \le i < k$, then we get $k$ rays.
Each pair of consecutive rays defines a cone whose apex is at the origin. We denote the collection of these $k$ cones by $\zeta_{k}$. Let $C$ be a cone of $\zeta_{k}$. For any point $p$ in the plane, we define $C_{p}$ to be the cone obtained by translating $C$ such that its apex is at $p$. Next, given an angle $\theta$ for a particular cone and an angle $\gamma \in [0, \frac{\pi - 3\theta}{2})$, we give the definition of the sweeping line: For any vertex $x$ in a cone, let the \emph{sweeping line} be the line through the vertex $x$ that is at an angle of $\gamma$ to the line perpendicular to the bisector of the cone. We then define $x_{\gamma}$ to be the intersection of the left side of the cone and this sweeping line (see Figure~\ref{fig:definition}). \begin{figure} \caption{Defining the point $x_{\gamma}$.} \label{fig:definition} \end{figure} Finally, we define the sweeping line graph: \begin{definition}[Sweeping line graph] Given a set of points $P$ in the plane, an integer $k \ge 7$, $\theta = \frac{2\pi}{k}$, and $\gamma \in [0, \frac{\pi - 3\theta}{2})$. The sweeping line graph is defined as follows: \begin{enumerate} \item The vertices of the graph are the vertices of $P$. \item For each vertex $u$ of $P$ and for each cone $C$ of $\zeta_{k}$, such that the translated cone $C_{u}$ contains one or more vertices of $P \setminus \{u\}$, the spanner contains the undirected edge $(u, r)$, where $r$ is the vertex in $C_{u} \cap P \setminus \{u\}$ that minimizes $|ur_{\gamma}|$. This vertex $r$ is referred to as the \emph{closest} vertex in this cone of $u$. In the remainder of the paper, we use $|ur_{\gamma}|$ when measuring the closeness between a vertex $r$ and the apex $u$ of a cone that contains it.
\end{enumerate} \end{definition} For ease of exposition, we only consider point sets in general position: no two vertices lie on a line parallel to one of the rays that define the cones, no two vertices lie on a line parallel to the sweeping line of any cone, and no three points are collinear. Using the structure of the sweeping line graph, we define a simple algorithm called \emph{sweeping-routing} to construct a path between any two vertices $s$ and $t$. The name comes from the fact that this is also a 1-local routing algorithm on the sweeping line graph. Let $t$ be the destination of the routing algorithm and let $u$ be the current vertex (initially $u = s$). If there exists a direct edge to $t$, follow this edge. Otherwise, follow the edge to the closest vertex in the cone of $u$ that contains $t$. \subsection{Auxiliary Lemmas} In order to prove that the sweeping line graph is indeed a spanner, we start with a number of auxiliary lemmas needed to prove the main geometric lemma used throughout our proofs. \begin{lemma} \label{lem:lemma1} Let $\theta \in (0, \frac{2\pi}{7}]$ and $\gamma \in [0, \frac{\pi - 3\theta}{2})$. Then $\cos(\frac{\theta}{2} + \gamma) - \sin \theta > 0$. \end{lemma} \begin{proof} Since $\cos(\frac{\theta}{2} + \gamma) - \sin\theta$ is decreasing with respect to $\gamma$ in our domain, it is minimized when $\gamma$ is maximized. It follows that: \begin{align*} \cos\left(\frac{\theta}{2} + \gamma\right) - \sin \theta &> \cos\left(\frac{\theta}{2} + \frac{\pi}{2} - \frac{3\theta}{2}\right) - \sin \theta \\ &= \cos\left(\frac{\pi}{2} - \theta\right) - \sin\theta\\ &= \sin\theta - \sin\theta\\ &= 0 \end{align*} Therefore, within our domain, $\cos(\frac{\theta}{2} + \gamma) - \sin \theta > 0$. \end{proof} \begin{lemma} \label{lem:lemma2} Let $\theta \in (0, \frac{2\pi}{7}]$, $\gamma \in [0,\frac{\pi - 3\theta}{2})$, and $\kappa \in [0,\theta]$. 
Then $\cos(\frac{\theta}{2} - \gamma - \kappa) > 0$ and $\cos(\frac{\theta}{2} + \gamma - \kappa) > 0$. \end{lemma} \begin{proof} To prove this, it suffices to show that $-\frac{\pi}{2} < \frac{\theta}{2} - \gamma - \kappa, \frac{\theta}{2} + \gamma - \kappa < \frac{\pi}{2}$ as within this domain, $\cos(\frac{\theta}{2} - \gamma - \kappa)$ and $\cos(\frac{\theta}{2} + \gamma - \kappa)$ are greater than 0. \emph{Proof of $-\frac{\pi}{2} < \frac{\theta}{2} - \gamma - \kappa < \frac{\pi}{2}$:} First, we show that $\frac{\theta}{2} - \gamma - \kappa$ is upperbounded by $\frac{\pi}{2}$: \begin{align*} \frac{\theta}{2} - \gamma - \kappa \le \frac{\theta}{2} &\le \frac{\pi}{7} \text{ (using the domain of $\theta$)}\\ &< \frac{\pi}{2} \end{align*} Next, we show that $\frac{\theta}{2} - \gamma - \kappa$ is lowerbounded by $-\frac{\pi}{2}$: \begin{align*} \frac{\theta}{2} - \gamma - \kappa > -\gamma - \kappa &\ge -\frac{\pi - 3\theta}{2}- \theta \text{ (using the domain of $\gamma$ and $\kappa$)}\\ &> -\frac{\pi}{2} \end{align*} \emph{Proof of $-\frac{\pi}{2} < \frac{\theta}{2} + \gamma - \kappa < \frac{\pi}{2}$:} First, we show that $\frac{\theta}{2} + \gamma - \kappa$ is upperbounded by $\frac{\pi}{2}$: \begin{align*} \frac{\theta}{2} + \gamma - \kappa \le \frac{\theta}{2} + \gamma &< \frac{\theta}{2} + \frac{\pi - 3\theta}{2} \text{ (using the bounds on $\gamma$)}\\ &= \frac{\pi -2\theta}{2}\\ &< \frac{\pi}{2} \end{align*} Next, we show that $\frac{\theta}{2} + \gamma - \kappa$ is lowerbounded by $-\frac{\pi}{2}$: \begin{align*} \frac{\theta}{2} + \gamma - \kappa > -\kappa &\ge -\frac{2\pi}{7} \text{ (using the bounds on $\kappa$)}\\ &> -\frac{\pi}{2} \end{align*} \end{proof} \begin{lemma} \label{lem:lemma3} Let $a$ and $b$ be positive reals and $\theta \in (0, \frac{2\pi}{7}]$, $\gamma \in [0,\frac{\pi - 3\theta}{2})$, and $\kappa \in [0,\theta]$. 
Then $a-\frac{b(\cos(\frac{\theta}{2} + \gamma) - \sin \theta)}{\cos (\frac{\theta}{2} - \gamma - \kappa)} \le a - b(\cos(\frac{\theta}{2} + \gamma) - \sin \theta)$ and $a-\frac{b(\cos(\frac{\theta}{2} + \gamma) - \sin \theta)}{\cos (\frac{\theta}{2} + \gamma - \kappa)} \le a - b(\cos(\frac{\theta}{2} + \gamma) - \sin \theta)$. \end{lemma} \begin{proof} By Lemma~\ref{lem:lemma2}, we know that $0 < \cos (\frac{\theta}{2} - \gamma - \kappa) \le 1$. This implies that $1 \le \frac{1}{\cos (\frac{\theta}{2} - \gamma - \kappa)}$ and thus $-\frac{1}{\cos \left(\frac{\theta}{2} - \gamma - \kappa\right)} \le -1$. Using that $(\cos(\frac{\theta}{2} + \gamma) - \sin \theta) > 0$ from Lemma~\ref{lem:lemma1}, we obtain: \begin{align*} -\frac{b(\cos(\frac{\theta}{2} + \gamma) - \sin \theta)}{\cos (\frac{\theta}{2} - \gamma - \kappa)} &\le -b\left(\cos\left(\frac{\theta}{2} + \gamma\right) - \sin \theta\right) \end{align*} which implies that \begin{align*} a - \frac{b(\cos(\frac{\theta}{2} + \gamma) - \sin \theta)}{\cos (\frac{\theta}{2} - \gamma - \kappa)} &\le a - b\left(\cos\left(\frac{\theta}{2} + \gamma\right) - \sin \theta\right) \end{align*} An analogous argument shows that $a-\frac{b(\cos(\frac{\theta}{2} + \gamma) - \sin \theta)}{\cos (\frac{\theta}{2} + \gamma - \kappa)} \le a - b(\cos(\frac{\theta}{2} + \gamma) - \sin \theta)$. \end{proof} \begin{lemma} \label{lem:lemma4} Let $\theta \in (0, \frac{2\pi}{7}]$ and $\gamma \in [0, \frac{\pi - 3\theta}{2})$. Then $\cos(\frac{\theta}{2} - \gamma) \ge \cos(\frac{\theta}{2} + \gamma)$. \end{lemma} \begin{proof} To prove this, we show that $\cos\left(\frac{\theta}{2} - \gamma\right) -\cos\left(\frac{\theta}{2} + \gamma\right)$ is at least 0. 
\begin{align*} \cos\left(\frac{\theta}{2} - \gamma\right) -\cos\left(\frac{\theta}{2} + \gamma\right) &= \cos\frac{\theta}{2}\cos \gamma + \sin \frac{\theta}{2} \sin \gamma - \cos \frac{\theta}{2} \cos \gamma + \sin \frac{\theta}{2} \sin \gamma\\ &= 2\sin \frac{\theta}{2} \sin \gamma\\ &\ge 0 \text{ (due to the domain of $\theta$ and $\gamma$)} \end{align*} \end{proof} \begin{lemma} \label{lem:lemma5} Let $\theta \in (0, \frac{2\pi}{7}]$, $\gamma \in [0, \frac{\pi-3\theta}{2})$, and $\kappa \in [0, \theta]$. Then $\cos(\frac{\theta}{2} - \gamma - \kappa) \ge \cos(\frac{\theta}{2} + \gamma)$. \end{lemma} \begin{proof} We observe that $\cos(\frac{\theta}{2} - \kappa - \gamma) = \cos(\kappa - (\frac{\theta}{2} - \gamma))$. Note that $ -\frac{\pi}{2} < \frac{\theta}{2} - \gamma < \frac{\pi}{2}$ and that $\cos(\kappa - (\frac{\theta}{2} - \gamma))$ corresponds to the shifted $\cos \kappa$ function. To prove the lemma, we distinguish between two cases: \emph{Case 1:} If $\frac{\theta}{2} - \gamma \le 0$, $\cos(\kappa - (\frac{\theta}{2}-\gamma))$ corresponds to translating $\cos \kappa$ to the left by $\gamma - \frac{\theta}{2}$. Therefore, $\cos(\kappa - (\frac{\theta}{2} - \gamma))$ is decreasing over the domain of $\kappa$, which implies that $\cos(\frac{\theta}{2} - \gamma - \kappa) \ge \cos(\frac{\theta}{2} - \gamma - \theta) = \cos(\frac{\theta}{2} + \gamma)$. \emph{Case 2:} If $\frac{\theta}{2} - \gamma > 0$, $\cos(\kappa - (\frac{\theta}{2}-\gamma))$ corresponds to translating $\cos \kappa$ to the right by $\frac{\theta}{2}-\gamma$. Therefore, for $\kappa \in (\frac{\theta}{2}-\gamma, \theta]$ the function is decreasing, so we can apply an argument analogous to that in Case 1 to prove the result in this domain. It remains to prove the result for $\kappa \in [0, \frac{\theta}{2}-\gamma]$. In this domain, $\cos(\kappa - (\frac{\theta}{2}-\gamma))$ is increasing.
Therefore to prove the result, we need to show that at $\kappa = 0$ (where $\cos(\kappa - (\frac{\theta}{2}-\gamma))$ is minimized in this domain), $\cos(\kappa - (\frac{\theta}{2}-\gamma)) \ge \cos(\frac{\theta}{2} + \gamma)$. After substituting $\kappa = 0$, we see that this corresponds to showing that $\cos(\frac{\theta}{2} - \gamma) \ge \cos(\frac{\theta}{2} + \gamma)$, which follows from Lemma~\ref{lem:lemma4}. \end{proof} \begin{lemma} \label{lem:lemma6} Let $\theta \in (0, \frac{2\pi}{7}]$, $\gamma \in [0, \frac{\pi-3\theta}{2})$, and $\kappa \in [0, \theta]$. Then $\cos(\frac{\theta}{2} + \gamma - \kappa) \ge \cos(\frac{\theta}{2} + \gamma)$. \end{lemma} \begin{proof} We observe that $\cos(\frac{\theta}{2} - \kappa + \gamma) = \cos(\kappa - (\frac{\theta}{2} + \gamma))$. Note that $ 0 < \frac{\theta}{2} + \gamma < \frac{\pi}{2}$ and that $\cos(\kappa - (\frac{\theta}{2} + \gamma))$ corresponds to translating $\cos \kappa$ to the right by $\frac{\theta}{2}+\gamma$, since $\frac{\theta}{2}+\gamma$ is positive. Therefore, for $\kappa \in [0, \frac{\theta}{2}+\gamma]$, $\cos(\kappa - (\frac{\theta}{2} + \gamma))$ is increasing and so $\cos(\kappa - (\frac{\theta}{2} + \gamma)) \ge \cos(-(\frac{\theta}{2} + \gamma)) = \cos(\frac{\theta}{2} + \gamma).$ For $\kappa \in (\frac{\theta}{2}+\gamma, \theta]$, $\cos(\kappa - (\frac{\theta}{2} + \gamma))$ is decreasing. Therefore to prove the result, we need to show that at $\kappa = \theta$ (where $\cos(\kappa - (\frac{\theta}{2}+\gamma))$ is minimized in this domain), $\cos(\kappa - (\frac{\theta}{2}+\gamma)) \ge \cos(\frac{\theta}{2} + \gamma)$. After substituting $\kappa = \theta$, we see that this corresponds to showing that $\cos(\frac{\theta}{2} - \gamma) \ge \cos(\frac{\theta}{2} + \gamma)$, which we proved in Lemma~\ref{lem:lemma4}. \end{proof} \subsection{Main Geometric Lemma} Now that we have our auxiliary lemmas, we are ready to prove the main geometric lemma that we use throughout our later proofs.
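Before doing so, a brief numerical aside (not part of the formal development): the quantity $\cos(\frac{\theta}{2} + \gamma) - \sin\theta$ from Lemma~\ref{lem:lemma1} is the reciprocal of the spanning ratio claimed in the introduction, so its behaviour over the stated domain can be spot-checked directly. The following Python sketch (our own, purely illustrative) samples the domain and evaluates the ratio:

```python
import math

def spanning_ratio(k, gamma):
    """t = 1 / (cos(theta/2 + gamma) - sin(theta)) with theta = 2*pi/k.
    Well defined for k >= 7 and 0 <= gamma < (pi - 3*theta)/2 (Lemma 1)."""
    theta = 2 * math.pi / k
    assert k >= 7 and 0 <= gamma < (math.pi - 3 * theta) / 2
    denom = math.cos(theta / 2 + gamma) - math.sin(theta)
    assert denom > 0  # the inequality of Lemma 1
    return 1 / denom

# The denominator stays positive over a sample of the whole domain.
for k in range(7, 40):
    theta = 2 * math.pi / k
    for i in range(100):
        spanning_ratio(k, i / 100 * (math.pi - 3 * theta) / 2)

# gamma = 0 recovers a Theta-graph-style ratio; larger gamma increases t.
print(spanning_ratio(7, 0.0))
```

As expected, the ratio degrades as $\gamma$ grows towards the end of its domain, where the denominator tends to $0$.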
\begin{lemma} \label{lem:lemma7} Let $k \geq 7$ be an integer, $\theta = \frac{2\pi}{k}$, and $\gamma \in [0, \frac{\pi - 3\theta}{2})$. Let $p$ and $q$ be two distinct points in the plane and let $C$ be the cone of $\zeta_{k}$ such that $q \in C_{p}$. Let $r$ be a point in $C_{p}$ such that it is at least as close to $p$ as $q$ is to $p$ (that is, $|pr_{\gamma}| \le |pq_{\gamma}|$). Then $| rq | \le | pq | - (\cos (\frac{\theta}{2} + \gamma) - \sin \theta) | pr |$. \end{lemma} \begin{proof} If $r = q$ then the claim holds. We assume in the rest of the proof that $r \neq q$. Let $\ell$ be the line through $p$ and $q$. Let $s$ be the intersection of $\ell$ and the sweeping line through $r$. Let $a$ and $b$ be the intersection of the sweeping line through $r$ with the left and right side of the cone respectively. Let $x$ be the intersection of the right side of the cone and the line through $a$ perpendicular to the bisector of the cone. Finally, let $\alpha$ be the angle between the segments $pq$ and $pr$ and let $\beta$ be the angle between the segment $pr$ and either the left or right side of the cone such that $\alpha$ and $\beta$ do not overlap. We have $0 \leq \alpha, \beta \leq \theta$ and $0 \leq \alpha + \beta \leq \theta$. We distinguish two cases depending on whether $r$ lies to the left or right of $\ell$. \emph{Case 1:} If $r$ lies to the left of $\ell$ (see Figure~\ref{fig:mainLemmaCase1}), we have that since $\triangle pax$ is isosceles, $\angle pax = \frac{\pi - \theta}{2}$. By considering $\triangle pas$, we can then deduce that $\angle asp = \frac{\pi}{2} + \frac{\theta}{2} - \gamma - (\alpha + \beta)$. Finally, by considering $\triangle prs$, we can deduce that $\angle prs = \frac{\pi}{2} - \frac{\theta}{2} + \gamma + \beta$.
\begin{figure} \caption{The points and angles defined for Case 1.} \label{fig:mainLemmaCase1} \end{figure} Applying the sine rule and trigonometric rewrite rules, we have: \begin{align*} | rs | &= | pr | \frac{\sin \alpha}{\cos (\frac{\theta}{2} - (\alpha + \beta) - \gamma)} \\ &\le | pr | \frac{\sin \theta}{\cos (\frac{\theta}{2} - (\alpha + \beta) - \gamma)} \end{align*} and \begin{align*} | ps | &= | pr | \frac{\cos (\frac{\theta}{2} - \beta - \gamma)}{\cos (\frac{\theta}{2} - (\alpha + \beta) - \gamma)} \\ &\ge | pr | \frac{\cos (\frac{\theta}{2} + \gamma)}{\cos (\frac{\theta}{2} - (\alpha + \beta) - \gamma)} \text{ (using Lemma~\ref{lem:lemma5}).} \end{align*} Applying the triangle inequality, we get: \begin{align*} | rq | &\le | rs | + | sq | \\ &= | rs | + | pq | - | ps | \\ &\le | pq | + | pr | \frac{\sin \theta}{\cos (\frac{\theta}{2} - (\alpha + \beta) - \gamma)} - | pr | \frac{\cos (\frac{\theta}{2} + \gamma)}{\cos (\frac{\theta}{2} - (\alpha + \beta) - \gamma)} \\ &= | pq | - | pr | \frac{1}{\cos \left(\frac{\theta}{2} - (\alpha + \beta) - \gamma\right)}\left(\cos \left(\frac{\theta}{2} + \gamma\right) - \sin{\theta}\right) \\ &\le | pq | - | pr | \left(\cos \left(\frac{\theta}{2} + \gamma\right) - \sin{\theta}\right) \text{ (using Lemma~\ref{lem:lemma3}).} \end{align*} \emph{Case 2:} If $r$ lies to the right of $\ell$ (see Figure~\ref{fig:mainLemmaCase2}), we have that since $\triangle pax$ is an isosceles triangle, $\angle pxa = \frac{\pi - \theta}{2}$. This implies that $\angle axb = \frac{\pi + \theta}{2}$. By considering $\triangle abx$, we can then deduce that $\angle abx = \frac{\pi}{2} - \frac{\theta}{2} - \gamma$. We can then deduce that $\angle psb = \frac{\pi}{2} + \frac{\theta}{2} + \gamma - (\alpha + \beta)$ by considering $\triangle psb$. Finally, by considering $\triangle psr$, we can deduce that $\angle srp = \frac{\pi}{2} - \frac{\theta}{2} - \gamma + \beta$.
\begin{figure} \caption{The points and angles defined for Case 2.} \label{fig:mainLemmaCase2} \end{figure} By applying the sine rule and using trigonometric rewrite rules we have: \begin{align*} | rs | &= | pr | \frac{\sin \alpha}{\cos (\frac{\theta}{2} - (\alpha + \beta) + \gamma)} \\ &\le | pr | \frac{\sin \theta}{\cos (\frac{\theta}{2} - (\alpha + \beta) + \gamma)} \end{align*} and \begin{align*} | ps | &= | pr | \frac{\cos (\frac{\theta}{2} - \beta + \gamma)}{\cos (\frac{\theta}{2} - (\alpha + \beta) + \gamma)} \\ &\ge | pr | \frac{\cos (\frac{\theta}{2} + \gamma)}{\cos (\frac{\theta}{2} - (\alpha + \beta) + \gamma)} \text{ (using Lemma~\ref{lem:lemma6}).} \end{align*} An argument identical to that of Case 1 completes the proof of this case. \end{proof} \section{The Unconstrained Setting} Now that we have our tools ready, it is time to prove that the sweeping line graph is a spanner in the unconstrained setting. \begin{theorem} Let $k \ge 7$ be an integer, let $\theta = \frac{2\pi}{k}$, and let $\gamma \in [0, \frac{\pi - 3\theta}{2})$. Then the sweeping line construction produces a $t$-spanner, where $t = \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin\theta}$. \end{theorem} \begin{proof} Let $u$ and $w$ be two distinct vertices of $P$. We consider the path $u = v_{0}, v_{1}, v_{2}, \ldots$ that is constructed by the \emph{sweeping-routing} algorithm. We start by showing that this path terminates at $w$. Let $i \ge 0$ and assume that $v_{i} \neq w$. Hence, vertex $v_{i+1}$ exists. Consider the three vertices $v_{i}$, $v_{i+1}$, and $w$. Let $C$ be the cone such that $w \in C_{v_{i}}$. By the construction of the sweeping line graph, $v_{i+1}$ is at least as close to $v_{i}$ as $w$ is to $v_{i}$.
Hence, by applying Lemma~\ref{lem:lemma7} followed by Lemma~\ref{lem:lemma1} we obtain: \[| v_{i+1}w | \le | v_{i}w | - \left(\cos\left(\frac{\theta}{2} + \gamma\right) - \sin\theta\right)| v_{i}v_{i + 1} | < | v_{i}w |.\] Hence, the vertices $v_{0}, v_{1}, v_{2}, \ldots$ on the path starting at $u$ are pairwise distinct, as each vertex on this path lies strictly closer to $w$ than any of its predecessors. Since the set $P$ is finite, this implies that the algorithm terminates. Therefore, the algorithm constructs a path between $u$ and $w$. We now prove an upper bound on the length of this path. Let $m$ be the index such that $v_{m} = w$. Rearranging $| v_{i+1}w | \le | v_{i}w | - (\cos(\frac{\theta}{2} + \gamma) - \sin\theta)| v_{i}v_{i + 1} |$ yields \[| v_{i}v_{i + 1} | \le \frac{1}{\cos\left(\frac{\theta}{2} + \gamma\right) - \sin\theta}(| v_{i}w | - | v_{i+1}w |),\] for each $i$ such that $0 \le i < m$. Therefore, the path between $u$ and $w$ has length \begin{align*} \sum_{i=0}^{m-1} | v_{i}v_{i+1} | &\leq \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin\theta} \sum_{i=0}^{m-1} (| v_{i}w | - | v_{i+1}w |)\\ &= \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin\theta}(| v_{0}w | - | v_{m}w |)\\ &= \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin\theta} |uw |, \end{align*} completing the proof. \end{proof} In addition to showing that the graph is a spanner, the above proof shows that the sweeping-routing algorithm constructs a bounded-length path; thus we obtain a local competitive routing algorithm. \begin{corollary} Let $k \ge 7$ be an integer, let $\theta = \frac{2\pi}{k}$, and let $\gamma \in [0, \frac{\pi - 3\theta}{2})$. Then for any pair of vertices $u$ and $w$ the sweeping-routing algorithm produces a path from $u$ to $w$ of length at most $\frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin\theta} \cdot |uw|$.
\end{corollary} \section{The Constrained Setting} Next, we extend the sweeping line graph to a more general setting by introducing \emph{line segment constraints}. Recall that $P$ is a set of points in the plane and that $S$ is a set of line segments connecting two vertices in $P$ (not every vertex in $P$ needs to be an endpoint of a constraint in $S$). The set of constraints is planar, i.e., no two constraints intersect properly. Let vertex $u$ be an endpoint of a constraint $c$ and let the other endpoint be $v$. Let $C$ be the cone of $\zeta_{k}$ such that $v \in C_{u}$. The line through $c$ splits $C_u$ into two \emph{subcones} and for simplicity, we say that $v$ is contained in both of these. In general, a vertex $u$ can be an endpoint of several constraints and thus a cone can be split into several subcones. For ease of exposition, when a cone $C_u$ is not split, we consider $C_u$ itself to be a single subcone. We use $C^{j}_u$ to denote the $j$-th subcone of $C_u$. Recall that for any vertex $x$ in a cone, we defined $x_{\gamma}$ to be the intersection of the left side of the cone and the sweeping line through $x$. We define the constrained sweeping line graph (see Figure~\ref{fig:defnConstrained}): \begin{definition}[Constrained sweeping line graph] Given a set of points $P$ in the plane, a plane set $S$ of line segment constraints connecting pairs of vertices in $P$, an integer $k \ge 7$, $\theta = \frac{2\pi}{k}$, and $\gamma \in [0, \frac{\pi - 3\theta}{2})$. The constrained sweeping line graph is defined as follows: \begin{enumerate} \item The vertices of the graph are the vertices of $P$. \item For each vertex $u$ of $P$ and for each \emph{subcone} $C^j_u$ that contains one or more vertices of $P \setminus \{u\}$ visible to $u$, the spanner contains the undirected edge $(u, r)$, where $r$ is the vertex in $C^j_u \cap P \setminus \{u\}$ that is visible to $u$ and minimizes $|ur_{\gamma}|$.
This vertex $r$ is referred to as the \emph{closest} visible vertex in this subcone of $u$. \end{enumerate} \end{definition} \begin{figure} \caption{The edges in a cone of the constrained sweeping line graph. The thick red segment represents a constraint. The sweeping line of a subcone is a thick black segment inside the subcone and grey dotted outside, as vertices outside the subcone are ignored.} \label{fig:defnConstrained} \end{figure} To prove that the above graph is a spanner, we need three lemmas. A proof of Lemma~\ref{lem:convexChainConstrained} can be found in~\cite{constrained}. \begin{lemma} \label{lem:convexChainConstrained} Let $u$, $v$, and $w$ be three arbitrary points in the plane such that $uw$ and $vw$ are visibility edges and $w$ is not the endpoint of a constraint intersecting the interior of triangle $uvw$. Then there exists a convex chain of visibility edges (different from the chain consisting of $uw$ and $wv$) from $u$ to $v$ in triangle $uvw$, such that the polygon defined by $uw$, $wv$ and the convex chain is empty and does not contain any constraints. \end{lemma} \begin{lemma} \label{lem:visibilityEdgeConstrained} Let $u$ and $w$ be two distinct vertices in the constrained sweeping line graph such that $uw$ is a visibility edge and let $C$ be the cone of $\zeta_{k}$ such that $w \in C_{u}$. Let $v_{0}$ be the closest visible vertex in the subcone of $C_{u}$ that contains $w$. Let $\ell$ be the line through $u$ and $w$. Let $s$ be the intersection of $\ell$ and the sweeping line through $v_{0}$. Then $v_{0}s$ is a visibility edge. \end{lemma} \begin{proof} We use a proof by contradiction (see Figure~\ref{fig:constrained} for an example layout). Assume $v_{0}s$ is not a visibility edge. Then there must be a line segment constraint intersecting $v_{0}s$. This implies that one of its endpoints lies in $\triangle uv_{0}s$, as $uw$ and $uv_{0}$ are visibility edges and thus the constraint cannot pass through them.
However, such an endpoint would imply that there exists a vertex that is visible to $u$ and closer to $u$ than $v_{0}$ (in particular the first vertex hit by the sweeping line starting from $u$), contradicting that $v_0$ is the closest visible vertex. Therefore, no constraint intersects $v_{0}s$. \end{proof} \begin{figure} \caption{An example layout of $u$, $w$, $v_0$, and $s$.} \label{fig:constrained} \end{figure} The following lemma ensures that we can apply induction later. \begin{lemma} \label{lem:segmentLengths} Let $k \ge 7$ be an integer, let $\theta = \frac{2\pi}{k}$, and let $\gamma \in [0, \frac{\pi - 3\theta}{2})$. Let $u$ and $w$ be two distinct vertices in the constrained sweeping line graph and let $C$ be the cone of $\zeta_{k}$ such that $w \in C_{u}$. Let $v_{0}$ be the closest visible vertex to $u$ in $C_{u}$. Let $\ell$ be the line through $uw$ and let $s$ be the intersection of $\ell$ and the sweeping line through $v_{0}$. Then $| v_{0}s | < | uw |$, $| sw | < | uw |$, and $| v_{0}w | < | uw |$. \end{lemma} \begin{proof} Refer to Figure~\ref{fig:constrained} for an example layout. We first show that $| v_{0}s | < | uw |$: By applying Lemma~\ref{lem:lemma7} to $u$, $s$, and $v_0$, we obtain that $| v_{0}s | \le | us | - (\cos(\frac{\theta}{2} + \gamma) - \sin\theta) \cdot | uv_{0} |$. Using Lemma~\ref{lem:lemma1}, this implies that $| v_{0}s | < | us |$. Finally, using the fact that $us$ is contained in $uw$, we conclude that $| v_{0}s | < | uw |$. Next, since $sw$ is contained in $uw$, it follows that $| sw | < | uw |$. Finally, we argue that $| v_{0}w | < | uw |$: By applying Lemma~\ref{lem:lemma7} to $u$, $w$, and $v_0$, we obtain that $| v_{0}w | \le | uw | - (\cos(\frac{\theta}{2} + \gamma) - \sin\theta) \cdot | uv_{0} |$. We can then apply Lemma~\ref{lem:lemma1} to obtain that $| v_{0}w | < | uw |$. \end{proof} We are now ready to prove that the constrained sweeping line graph is a spanner of the visibility graph.
\begin{theorem} \label{theo:constrained} Let $k \geq 7$ be an integer, let $\theta = \frac{2\pi}{k}$, and let $\gamma \in [0, \frac{\pi - 3\theta}{2})$. Let $u$ and $w$ be two distinct vertices in the plane that can see each other. There exists a path connecting $u$ and $w$ in the constrained sweeping line graph of length at most $\frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin(\theta)} \cdot | uw |$. \end{theorem} \begin{proof} Let $C$ be the cone of $\zeta_{k}$ such that $w \in C_{u}$. We prove the theorem by induction on the rank of the pairs of vertices that can see each other, based on the Euclidean distance between them. Our inductive hypothesis is the following: $\delta(u,w) \le \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin\theta} | uw |$, where $u$ and $w$ are two distinct vertices that can see each other and $\delta(u,w)$ is the length of the shortest path between them in the constrained sweeping line graph. \emph{Base case:} In this case, $u$ and $w$ are the Euclidean closest visible pair. If there exists an edge between $u$ and $w$, then $\delta(u, w) = | uw | \le \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin\theta} | uw |$, so the induction hypothesis holds. It remains to show that, indeed, there exists an edge between the Euclidean closest visible pair. We prove this by contradiction. Assume that there is no edge between $u$ and $w$. Then the subcone $C^j_u$ that contains $w$ must contain a vertex $v_{0}$ that has an edge to $u$ in the constrained sweeping line graph. Let $\ell$ be the line through $uw$. Let $s$ be the intersection of $\ell$ and the sweeping line through $v_{0}$. Note that $sw$ is a visibility edge, due to $uw$ being a visibility edge, and $v_{0}s$ is a visibility edge, by Lemma~\ref{lem:visibilityEdgeConstrained}. Therefore, by applying Lemma~\ref{lem:convexChainConstrained}, there exists a convex chain $v_{0}, v_{1}, \ldots, v_{m-1}, v_{m} = w$ of visibility edges from $v_{0}$ to $w$ inside $\triangle v_0 s w$.
By applying Lemma~\ref{lem:segmentLengths} using $u$, $w$, and $v_{0}$, we infer that all sides of $\triangle v_0 s w$ have length less than the Euclidean distance between $u$ and $w$. Since the convex chain is contained in this triangle, it follows that any pair of consecutive vertices along it has a smaller Euclidean distance than the Euclidean distance between $u$ and $w$. This contradicts that $uw$ is the closest Euclidean pair of visible vertices. \emph{Induction step:} We assume that the induction hypothesis holds for all pairs of vertices that can see each other and whose Euclidean distance is smaller than the Euclidean distance between $u$ and $w$. If $uw$ is an edge in the constrained sweeping line graph, the induction hypothesis follows by the same argument as in the base case. If there is no edge between $u$ and $w$, let $v_{0}$ be the closest visible vertex to $u$ (using the sweeping line) in the subcone $C^j_u$ that contains $w$. By construction, $(u,v_0)$ is an edge of the graph. Let $\ell$ be the line passing through $u$ and $w$. Let $s$ be the intersection of $\ell$ and the sweeping line through $v_{0}$ (see Figure~\ref{fig:constrained}). By definition, $\delta(u,w) \le | uv_{0} | + \delta(v_{0}, w)$. We know that $sw$ is a visibility edge, since $uw$ is a visibility edge, and we know $v_{0}s$ is a visibility edge by Lemma~\ref{lem:visibilityEdgeConstrained}. Therefore, by Lemma~\ref{lem:convexChainConstrained} there exists a convex chain $v_{0},...,v_{m} = w$ of visibility edges inside $\triangle v_0 s w$ connecting $v_{0}$ and $w$. Applying Lemma~\ref{lem:segmentLengths} to the points $u$, $v_{0}$, and $w$, we infer that each side of $\triangle v_{0}sw$ has length smaller than $|uw|$. Therefore, we can apply induction to every visibility edge along the convex chain from $v_0$ to $w$, as each has length smaller than $|uw|$. 
Therefore, \begin{align*} \delta(u,w) &\le | uv_{0} | + \sum_{i = 0}^{m-1} \delta(v_{i},v_{i + 1}) \\ &\le | uv_{0} | + \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin\theta}\sum_{i = 0}^{m-1} | v_{i}v_{i + 1} | \text{ (using the induction hypothesis)}\\ &\le | uv_{0} | + \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin\theta} (| v_{0}s | + | sw |) \text{ (using the fact that the chain is convex)} \end{align*} Finally, we apply Lemma~\ref{lem:lemma7}, using $r = v_{0}$, $q = s$, and $p = u$. This gives us that $| v_{0}s | \le | us | - | uv_{0} | \left(\cos\left(\frac{\theta}{2} + \gamma\right) - \sin \theta\right)$, which can be rewritten as $| uv_{0} | + | v_{0}s |/(\cos(\frac{\theta}{2} + \gamma) - \sin \theta) \le | us |/(\cos(\frac{\theta}{2} + \gamma) - \sin \theta)$. By adding $| sw |/(\cos(\frac{\theta}{2} + \gamma) - \sin\theta)$ to both sides and using the fact that $|us| + |sw| = |uw|$, we obtain: \begin{align*} | uv_{0} | + \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin \theta}(| v_{0}s | + | sw |) &\le \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin \theta} (| us | + | sw |)\\ &= \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin \theta} |uw|. \end{align*} Hence, we conclude that $\delta(u,w) \le \frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin \theta} | uw |$. \end{proof} \section{Polygonal Obstacles} Finally, we generalize the result from the previous section to more complex obstacles. Recall that in this setting $S$ is a finite set of polygonal obstacles where each corner of each obstacle is a vertex in $P$, such that no two obstacles intersect, and that each vertex is part of at most one polygonal obstacle and occurs at most once along its boundary. As in the constrained setting, the line segment between two visible vertices is called a \emph{visibility edge} and the \emph{visibility graph} of a point set $P$ and a set of polygonal obstacles $S$ is the complete graph on $P$ excluding all the edges that properly intersect some obstacle.
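As a numerical sanity check on the constant in Theorem~\ref{theo:constrained} (the same constant reappears in the polygonal-constrained theorem below), the following sketch evaluates the bound $\frac{1}{\cos(\theta/2+\gamma)-\sin\theta}$ and verifies that its denominator stays positive throughout the allowed range of $\gamma$; the function name and sample parameters are ours, not from the paper.

```python
import math

def spanning_ratio(k: int, gamma: float = 0.0) -> float:
    """Evaluate the bound 1 / (cos(theta/2 + gamma) - sin(theta))
    from the theorem, with theta = 2*pi/k."""
    assert k >= 7, "the proof requires at least 7 cones"
    theta = 2 * math.pi / k
    # gamma must lie in [0, (pi - 3*theta)/2) for the theorem to apply
    assert 0 <= gamma < (math.pi - 3 * theta) / 2, "gamma out of range"
    denom = math.cos(theta / 2 + gamma) - math.sin(theta)
    # strictly inside the gamma range the denominator is positive,
    # so the bound is meaningful
    assert denom > 0
    return 1.0 / denom

# For k = 7 and gamma = 0 the bound evaluates to roughly 8.39;
# it grows with gamma and shrinks as the number of cones k increases.
```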
Cones that are split are considered to be subcones of the original cone. Note that since $S$ consists of vertex-disjoint simple polygons, a cone can be split into at most two subcones. By focusing on the subcones, the polygonal-constrained sweeping line graph is defined analogously to the constrained sweeping line graph: for each subcone $C^{j}_u$ of each vertex $u$, we add an undirected edge between $u$ and the closest vertex in that subcone that can see $u$, where the distance is measured along the left side of the original cone of $u$ (not the left side of the subcone). We now introduce modifications to Lemmas~\ref{lem:convexChainConstrained} and~\ref{lem:visibilityEdgeConstrained} to adapt them to polygonal obstacles. A proof of Lemma~\ref{lem:convexChainPolygonal} can be found in \cite{polygonallemma}. \begin{lemma} \label{lem:convexChainPolygonal} Let $u$, $v$, and $w$ be three points where $(w,u)$ and $(u,v)$ are both visibility edges and $u$ is not a vertex of any polygonal obstacle $O$ whose open polygon $O'$ intersects $\triangle wuv$. The area $A$, bounded by $(w,u)$, $(u,v)$, and a convex chain formed by visibility edges between $w$ and $v$ inside $\triangle wuv$, does not contain any vertices and is not intersected by any obstacles. \end{lemma} \begin{lemma} \label{lem:visibilityEdgePolygonal} Let $u$ and $w$ be two distinct vertices in the polygonal-constrained sweeping line graph such that $uw$ is a visibility edge and let $C$ be the cone of $\zeta_{k}$ such that $w \in C_{u}$. Let $v_{0}$ be the closest visible vertex in the subcone of $C_{u}$ that contains $w$. Let $\ell$ be the line through $u$ and $w$. Let $s$ be the intersection of $\ell$ and the sweeping line through $v_{0}$. Then $v_{0}s$ is a visibility edge. \end{lemma} \begin{proof} This proof is analogous to the proof of Lemma~\ref{lem:visibilityEdgeConstrained}.
\end{proof} Using these two modified lemmas, we can prove that the polygonal-constrained sweeping line graph is a spanner of the visibility graph. \begin{theorem} Let $k \geq 7$ be an integer, let $\theta = \frac{2\pi}{k}$, and let $\gamma \in [0, \frac{\pi - 3\theta}{2})$. Let $u$ and $w$ be two distinct vertices in the plane that can see each other. There exists a path connecting $u$ and $w$ in the polygonal-constrained sweeping line graph of length at most $\frac{1}{\cos(\frac{\theta}{2} + \gamma) - \sin(\theta)} \cdot | uw |$. \end{theorem} \begin{proof} The proof is analogous to the proof of Theorem~\ref{theo:constrained}. The only changes required are that the uses of Lemma~\ref{lem:convexChainConstrained} and Lemma~\ref{lem:visibilityEdgeConstrained} are replaced with Lemma~\ref{lem:convexChainPolygonal} and Lemma~\ref{lem:visibilityEdgePolygonal} respectively. Note that all other arguments still hold, as they are arguments based on Euclidean distance, rather than the specific shape of the (straight-line) obstacles. \end{proof} \section{Conclusion} We showed that the sweeping line construction produces a spanner in the unconstrained, constrained, and polygonal constrained settings. These graphs are a generalization of $\Theta$-graphs and thus we also showed that every $\Theta$-graph with at least 7 cones is a spanner in the presence of polygonal obstacles. We also note that the proof in the unconstrained case immediately implied a local routing algorithm with competitive ratio equal to (the current upper bound of) the spanning ratio. Our proofs rely on Lemma~\ref{lem:lemma7}, which bounds the length of the inductive part of our path. We conjecture that any proof strategy that uses induction must satisfy a condition similar to Lemma~\ref{lem:lemma7} in order to upper bound the spanning ratio. 
An analogous argument could then be applied to prove the construction to be a spanner in all three settings using the methods from this paper (i.e., finding a vertex $v_0$ satisfying the conditions of Lemmas~\ref{lem:visibilityEdgeConstrained} and~\ref{lem:segmentLengths}). This would greatly simplify spanner construction for the constrained and polygonal obstacle settings, by putting the focus on the simpler unconstrained setting. In particular, we conjecture that the strategy described in this paper can be applied to generalize the known results for Yao-graphs. \end{document}
\begin{document} \maketitle \begin{abstract} In this paper we provide the second variation formula for L-minimal Lagrangian submanifolds in a pseudo-Sasakian manifold. We apply it to the case of Lorentzian-Sasakian manifolds and relate the L-stability of L-minimal Legendrian submanifolds in a Sasakian manifold $M$ to their L-stability in an associated Lorentzian-Sasakian structure on $M$. \end{abstract} \section*{Introduction} Let $(M, g)$ be a Riemannian manifold. A Riemannian submanifold $f:L \rightarrow M$ is called \emph{minimal} if $t=0$ is a critical point of the volume functional for all deformations $f_t: L \rightarrow M$ with $t \in (-\varepsilon, \varepsilon)$ and $f_0 = f$. Equivalently, $L$ is minimal if and only if its mean curvature vector vanishes. The submanifold is called \emph{stable} if $t=0$ is actually a minimum, that is, if the second derivative of the volume functional at $t=0$ is nonnegative. The explicit expressions of the first and second derivatives of the volume are standard and can be found, for instance, in \cite{simons}. When $(M, \omega)$ is K\"ahler of real dimension $2n$, it is natural to study the above problem restricted to minimal \emph{Lagrangian} submanifolds, namely $n$-dimensional submanifolds that are minimal in the Riemannian sense and on which $\omega$ vanishes. Let us restrict ourselves to deformations that keep $L$ Lagrangian, namely such that $f_t^* \omega = 0$. Infinitesimally this is reflected in the fact that $\mathcal L_X \omega = d(\iota_X \omega) = 0$, where $X$ is the normal component of the derivative of $f_t$. These deformations are called \emph{Lagrangian}. In \cite{Oh_InvMath}, Oh has introduced the notion of \emph{Hamiltonian stability}.
A minimal Lagrangian submanifold is Hamiltonian stable (H-stable) if its volume is a minimum among all infinitesimal \emph{Hamiltonian} deformations, namely those given infinitesimally by normal fields $X$ such that $\iota_X \omega$ is \emph{exact}, i.e. Hamiltonian vector fields. He then computes the Jacobi operator of a minimal Lagrangian submanifold and applies his second variation formula to provide a stability criterion for a submanifold $L$ in a K\"ahler-Einstein ambient in terms of the first eigenvalue $\lambda_1(L)$ of the Laplacian on $L$. Namely, $L$ is H-stable if, and only if, $\lambda_1(L)$ is greater than or equal to the Einstein constant of $M$. There are several examples of minimal H-stable submanifolds of $\mathbb CP^n$ or other Hermitian symmetric spaces that are not stable in the usual sense. A survey of results and techniques, mostly for the homogeneous case, can be found in Ohnita's paper \cite{Ohnita_surv}. A slight generalization of minimal Lagrangian submanifolds are \emph{H-minimal} ones, namely Lagrangians that extremize the volume under all \emph{Hamiltonian} variations or, equivalently, such that the mean curvature vector is $L^2$-orthogonal to all Hamiltonian vector fields, see \cite{Oh_Hmin}. The odd dimensional counterpart of K\"ahler geometry is Sasakian geometry, which merges together Riemannian, contact and CR structures. A natural contact geometric object analogous to Lagrangian submanifolds are Legendrian submanifolds. A submanifold $f: L^n \rightarrow (M^{2n+1}, \eta)$ of a contact manifold is \emph{Legendrian} if $f^*\eta = 0$, and a deformation $f_t$ that preserves the Legendre condition is called \emph{Legendrian}. Infinitesimally, this translates into the variation field being a contact vector field. Oh's notion of H-stability is here replaced by Legendrian stability (L-stability), namely the requirement that the second derivative of the volume functional be nonnegative for all contact vector fields.
The computation of the second variation of minimal Legendrian submanifolds in Sasakian manifolds has been provided by Ono \cite{Ono}, along with a stability criterion -- for an $\eta$-Sasaki-Einstein ambient -- in terms of the spectrum of the Laplacian. Namely, if the ambient Ricci tensor satisfies $\operatorname{Ric} = A g + (2n-A) \eta \otimes \eta$, then $L$ is L-stable if, and only if, $\lambda_1(L) \geq A+2$, the K\"ahler-Einstein constant of the transverse metric. Using Ono's expression of the second variation and the known properties of the Jacobi operator, Calamai and the first author \cite{calapet} were able to construct eigenfunctions of the Laplacian with eigenvalue $A+2$, under the assumption of the presence of nontrivial Sasaki ambient automorphisms. Ono's work has been generalized by Kajigaya \cite{Kajigaya}, who has introduced the notion of L-minimal Legendrian submanifolds, namely those that are stationary points of the volume under Legendrian deformations, and computed their second variation. In this case, a criterion involving the spectrum of the Laplacian cannot be provided in dimension (of the ambient manifold) greater than three. The minimality condition extends of course to the pseudo-Riemannian setting and is treated in Anciaux's monograph \cite{anciaux_book}. The compatible combination of a pseudo-Riemannian metric and a complex structure leads to the notion of pseudo-K\"ahler structures that, being symplectic, allow us to speak about Lagrangian submanifolds. A similar, up to a certain point, structure of symplectic pseudo-Riemann\-ian manifold is given by para-K\"ahler ones, for which we refer to \cite{survey_AMT}, \cite{paracomplex_survey96}. The study of the Hamiltonian stability of minimal Lagrangians in pseudo- and para-K\"ahler manifolds has been carried out by Anciaux and Georgiou \cite{anc_geo}, who compute the second variation of such submanifolds and give a stability criterion analogous to Oh's in case these are space-like.
In this paper we treat the analogous problem for pseudo-Sasakian manifolds. These structures have been introduced by Takahashi in \cite{Takahashi} and consist of normal almost contact structures endowed with compatible pseudo-Riemannian metrics. Our main result is the following (see Section \ref{sec:legendrian}). \renewcommand{\thethmref}{\ref{thm:2var}} \begin{thmref} Let $L$ be an L-minimal Legendrian submanifold, possibly with boundary $\partial L$, of a pseudo-Sasakian manifold $(M, \eta, \xi, g,\varphi, \varepsilon)$ with $\varepsilon = |\xi|^2 = \pm 1$. Then, in the normal Legendrian direction $V = f \xi + \frac 1 2 \varphi \nabla f$ vanishing on $\partial L$, the second variation of the volume is \begin{align*} \frac{d^2}{dt^2}\biggr |_{t=0} \vol(L_t) = \frac 1 4 \int_L \biggl \{ &(\Delta f)^2 -2 \varepsilon |\nabla f|^2 - \overline{\operatorname{Ric}}(\varphi \nabla f, \varphi \nabla f) \\ &-2 g(H, h(\nabla f, \nabla f)) + g(H, \varphi \nabla f)^2 \biggr \} dv_0 \end{align*} where $H$ is the mean curvature vector, $\overline{\operatorname{Ric}}$ is the Ricci tensor of $(M, g)$ and $dv_0$ is the volume form of $(L, g)$. \end{thmref} In the special case when $L$ is minimal ($H=0$) and $g$ is $\eta$-Einstein ($\overline{\operatorname{Ric}} = A g + (2n-\varepsilon A) \eta \otimes \eta$), the formula above simplifies to \begin{align*} \frac{d^2}{dt^2}\biggr |_{t=0} \vol(L_t) &= \frac 1 4 \int_L \biggl \{ (\Delta f)^2 - (A+2 \varepsilon)|\nabla f|^2 \biggr \} dv_0 \end{align*} and we are able to give the following stability criterion in case $L$ is space-like. \renewcommand{\thethmref}{\ref{prop:critlambda1}} \begin{propref} The minimal space-like Legendrian $L$ in the pseudo-Sasaki $\eta$-Einstein manifold $M$ is Legendrian stable if and only if its first eigenvalue of the Laplacian on functions $\lambda_1(L)$ satisfies \begin{equation} \label{est:lambda1} \lambda_1(L) \geq A+2 \varepsilon.
\end{equation} \end{propref} For $\varepsilon = 1$ we recover Ono's formula and stability criterion. Concerning usual stability in the pseudo-Riemannian case, it is known that every minimal submanifold is always unstable if the ambient metric is indefinite on its tangent or normal bundle, see \cite[Thm.~37]{anciaux_book}. In contrast, we have the following. \begin{corolref} If $A+2\varepsilon \leq 0$, then every minimal Legendrian submanifold is Legendrian stable. In particular this holds in the pseudo-Sasaki-Einstein case (with $A=-2n$). \end{corolref} Then we focus on Lorentzian-Sasakian manifolds, namely the case when the signature is $(2n,1)$ and $\varepsilon = -1$. They appeared in \cite{Baum}, \cite{Bohle} in the study of twistor and Killing spinors on Lorentzian manifolds. Their study was later proposed in Sasakian geometry, see \cite{BGM_eta} or \cite[Sect.~11.8.1]{monoBG}. In particular it is proved in \cite{BGM_eta} that every negative Sasakian manifold admits a Lorentzian-Sasaki-Einstein metric and conversely. In Subsection \ref{sec:tanno} we consider deformations that map every Sasakian structure to a Lorentzian-Sasakian one. They generalize the well-known $D$-homothetic deformations of Tanno \cite{Tanno}. We then prove that a minimal Legendrian submanifold $L$ of a Sasakian manifold $M$ is Legendrian stable if, and only if, it is Legendrian stable in the associated Lorentzian-Sasakian structure on $M$. \section{Pseudo-Sasakian manifolds} In this section we recall the definition and main properties of pseudo-Sasakian structures, following \cite{Takahashi}. Let $M^{2n+1}$ be a differentiable manifold and let $\xi$ be a vector field on $M$, $\eta$ a $1$-form and $\varphi$ a section of $\End(TM)$. Then the triple $(\xi, \eta, \varphi)$ is an \emph{almost contact structure} if $\eta(\xi) = 1$ and $\varphi^2 = - \id + \eta \otimes \xi$, see e.g. \cite{Blair}. If $g$ is a pseudo-Riemannian metric, then we have the following.
\begin{defi} The tuple $(\xi, \eta, \varphi, g, \varepsilon)$ is an \emph{almost contact metric structure} if $(\xi, \eta, \varphi)$ is an almost contact structure and the following compatibility relations hold \begin{enumerate} \item $g(\xi, \xi) = \varepsilon \in \{ \pm 1 \}$; \item $\eta(X) = \varepsilon g(\xi, X)$; \item $g(\varphi X, \varphi Y) = g(X,Y) - \varepsilon \eta(X) \eta(Y)$. \end{enumerate} \end{defi} A tuple as above is a \emph{contact metric structure} if\footnote{Unlike Takahashi, in this paper we use the convention $d\eta(X,Y) = X \eta(Y) - Y \eta(X) - \eta([X,Y])$.} $d\eta = 2g(\varphi \cdot, \cdot)$. \begin{defi} A contact metric structure is \emph{normal} or \emph{Sasakian} if \begin{equation} \label{defnabla} (\nabla_X \varphi) Y = \varepsilon \eta(Y)X-g(X,Y)\xi \end{equation} where $\nabla$ is the Levi-Civita connection of $g$. \end{defi} For an almost contact metric structure we have the following. \begin{prop} \label{prop:nablaxi} If the identity \eqref{defnabla} holds, then $\nabla \xi = \varepsilon \varphi$, the field $\xi$ is Killing and the structure is contact metric. \end{prop} In this paper we focus on a special kind of pseudo-Sasakian manifolds, namely Lorentzian Sasakian. They are characterized by their signature $(2n, 1)$ and $\varepsilon = -1$, see e.g. \cite{Baum},\cite{Bohle}. Before giving some properties of pseudo-Sasakian manifolds we need to fix a sign convention for the curvature tensor $R$ of a connection $D$ on a vector bundle $E \rightarrow M$ \[ R(X,Y) \sigma = D_X D_Y \sigma - D_Y D_X \sigma - D_{[X,Y]} \sigma \text{ for } X,Y \in \Gamma(TM) \text{ and } \sigma \in \Gamma(E). \] We have the following properties, some of them proved in \cite{schaefer} for the Lorentzian case. 
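Proposition~\ref{prop:nablaxi} is used repeatedly below; for the reader's convenience, here is a sketch (our reconstruction, using only \eqref{defnabla} and the almost contact relations) of why it forces $\nabla \xi = \varepsilon \varphi$.

```latex
% Differentiating \varphi\xi = 0 along X and using \eqref{defnabla}:
%   0 = (\nabla_X \varphi)\xi + \varphi(\nabla_X \xi),
% so that
\begin{align*}
\varphi(\nabla_X \xi) = -\varepsilon\,\eta(\xi)\,X + g(X,\xi)\,\xi
                      = -\varepsilon X + \varepsilon\,\eta(X)\,\xi .
\end{align*}
% Since g(\xi,\xi) = \varepsilon is constant,
% \eta(\nabla_X\xi) = \varepsilon\, g(\xi,\nabla_X\xi) = 0,
% hence applying \varphi once more and using \varphi^2 = -\id + \eta\otimes\xi:
\begin{align*}
-\nabla_X \xi = \varphi^2(\nabla_X \xi)
 = \varphi(-\varepsilon X + \varepsilon\,\eta(X)\,\xi)
 = -\varepsilon\,\varphi X,
\qquad \text{i.e.} \quad \nabla_X \xi = \varepsilon\,\varphi X .
\end{align*}
```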
\begin{lemma} Let $(M,g, \xi, \eta, \varphi, \varepsilon)$ be a pseudo-Sasakian manifold, then for $X,Y,Z \in TM$ one has \begin{align} \varphi^2 X &= -X+\eta(X)\xi, \label{eq_Lem_Sas1} \\ (\overline \nabla_X \varphi)Y &= -g(X,Y) \xi + \varepsilon \eta(Y)X, \label{eq_Lem_Sas1bis} \\ g(\varphi X,Y) &=- g(X, \varphi Y), \label{eq_Lem_Sas1bisbis} \\ \omega(X,Y) &= (\overline \nabla_X \eta) Y=g(\varphi X, Y), \label{eq_Lem_Sas2a} \\ (\overline \nabla_X \omega)(Y,Z) &= \varepsilon g(X,Z)\eta(Y) - \varepsilon g(X,Y) \eta(Z), \label{eq_Lem_Sas3} \\ \overline{\operatorname{Rm}}(X,Y)\xi &= \eta(X)Y - \eta(Y)\, X, \label{eq_Lem_Sas6} \\ \overline{\operatorname{Rm}}(X,\xi, \xi, Y) &= g(X,Y) -\varepsilon \eta(X) \eta(Y), \label{eq_Lem_Sas7}\\ \overline{\operatorname{Ric}}(\xi,\xi) &= 2n, \label{Rici_in_Reeb} \\ \overline{\operatorname{Rm}}(X,Y) \varphi Z &= \varphi \overline{\operatorname{Rm}}(X,Y)Z + \varepsilon \biggl( -g(\varphi Y, Z) X + g(\varphi X, Z) Y \nonumber\\ &\quad - g(Y,Z)\varphi X + g(X,Z) \varphi Y\biggr) \label{eq_RmPhi} \end{align} where $\overline{\operatorname{Rm}}$ is the Riemann curvature tensor of $(M, g)$ and $\overline{\operatorname{Ric}}$ is its Ricci tensor. \end{lemma} \subsection{Legendrian submanifolds} A submanifold $f\,:\, L \rightarrow M$ of a contact manifold $(M^{2n+1}, \eta)$ is called \emph{horizontal} if it satisfies $f^* \eta=0$. In particular, it follows that $f^* d\eta= f^*\omega=0.$ A \emph{Legendrian submanifold} is a maximally isotropic submanifold $L^n$, i.e. a horizontal submanifold with $\dim L=n$. Let us consider a smooth Riemannian immersion $ f \,:\, L \rightarrow (M,g)$ into a Lorentzian manifold $(M,g),$ i.e. such that $ f^*g$ defines a positive definite metric on $L$.
Then the \emph{second fundamental form} $h\in \Gamma(T^*L \otimes T^*L\otimes NL),$ where $NL$ denotes the \emph{normal bundle} of $ f \,:\, L \rightarrow (M,g),$ is given by \[ h(X,Y)= \nabla_X(df)Y=(f^*\overline \nabla)_X(df Y)- df(\nabla_XY), \] where $\overline \nabla$ and $\nabla,$ resp. are the Levi-Civita connections of $g$ and $f^*g,$ resp.\footnote{To keep notation short, we later write $\overline \nabla $ for $f^*\overline \nabla$ and $g $ for $f^*g$.} Further, for a section $\nu$ of $NL$ we define the normal connection $\nabla^\perp$ as the normal part and the shape operator $A_\nu$ as the tangential part of $f^*\overline \nabla_X \nu,$ i.e. via \[ f^*\overline \nabla_X \nu = \nabla^\perp_X \nu - A_\nu X \in NL \oplus f_*(TL), \] where $A \in \Gamma(N^*L \otimes T^*L\otimes TL).$ The \emph{mean curvature} is defined as \[ H:= \tr_g^L h. \] Later we need Gauss' formula for pseudo-Riemannian submanifolds (see e.g. \cite[p.~100]{Oneill}).\footnote{Beware that O'Neill uses the opposite convention than ours for Riemannian curvature.} \begin{lemma} \label{lemma:gauss} The Riemann curvature tensor $\operatorname{Rm}$ of a pseudo-Riemannian submanifold $L$ in the pseudo-Riemannian manifold $(M,g)$ is related to the ambient curvature tensor $\overline{\operatorname{Rm}}$ and to the second fundamental form $h$ of the immersion by \[ \overline{\operatorname{Rm}} (A,B,C,D) = \operatorname{Rm}(A,B,C,D) - g( h(B,C), h(A,D)) + g( h(A,C), h(B,D)) \] for vectors $A,B,C,D$ tangent to $L$. \end{lemma} Following \cite{Ono}, we give the following definition. \begin{defi} Let $(M,g,\xi,\eta,\varphi,\varepsilon)$ be a pseudo-Sasakian manifold and $f \colon L \rightarrow M$ a Legendrian immersion.
A smooth family of immersions $\{ f_t \}_{t\in (-\delta,\delta)}$ is called a \emph{Legendrian deformation} of $L$ if $f_t$ is Legendrian for all $t \in (-\delta,\delta)$ and $f_0=f.$ \end{defi} By the curvature properties of pseudo-Sasakian metrics, we have the following. \begin{lemma} \label{lemma:Rm} For a Legendrian submanifold $L$ in a pseudo-Sasakian manifold $(M,g,\xi,\eta,\varphi,\varepsilon)$ and in a normal orthonormal frame $e_1, \ldots, e_n$ with $\varepsilon_i= g(e_i,e_i)$, along the Legendrian submanifold $L$ one has \begin{enumerate} \item $\sum_{i=1}^n \varepsilon_i \overline{\operatorname{Rm}}(\varphi e_i, \xi,\xi, \varphi e_i) = n,$ \item $\varepsilon_i \overline{\operatorname{Rm}}(\varphi e_i, \xi, \varphi e_i, V_H) = 0, \mbox{ for } V_H \in \mathcal D.$ \end{enumerate} \end{lemma} \begin{proof} The first is a consequence of \eqref{eq_Lem_Sas7}, i.e. $\overline{\operatorname{Rm}}(\varphi e_i, \xi,\xi, \varphi e_i) = g(\varphi e_i,\varphi e_i).$ The second follows from \eqref{eq_Lem_Sas6} and $\eta(\varphi e_i)=0.$ \end{proof} We state a property of the second fundamental form of a Legendrian submanifold in a pseudo-Sasakian manifold, whose proof is basically the same as for its Riemannian counterpart; see \cite[Prop.~3.4]{Ono}. \begin{lemma} \label{lemma:2ff} The second fundamental form $h$ of a Legendrian submanifold $L$ in a pseudo-Sasakian manifold $(M, g, \eta, \xi, \varphi, \varepsilon)$ satisfies the following properties. \begin{enumerate} \item $g(h(X,Y), \xi) = 0$ for all $X, Y$ tangent to $L$; \item $g(h(X,Y), \varphi Z) = g(h(X,Z), \varphi Y)$ for all $X,Y,Z$ tangent to $L$.
\end{enumerate} \end{lemma} \section{L-minimal Legendrian submanifolds of pseudo-Sasakian manifolds} \label{sec:legendrian} \subsection{The second variation along Legendrian deformations} \begin{defi} Let $ f \colon L \rightarrow M$ be a Legendrian immersion into a pseudo-Sasakian manifold $(M,g,\xi,\eta,\varphi,\varepsilon).$ Then $f$ is called \emph{Legendrian minimal} or \emph{L-minimal} if \[ \frac{d}{dt}\biggr |_{t=0} \vol(L_t)=0 \] for all Legendrian deformations $L_t$ of $L$. \end{defi} Taking the normal field $V = \biggl(\frac{\partial}{\partial t} |_{t=0} f_t(\cdot)\biggr)^\perp$, the family $f_t$ is Legendrian if, and only if, $\mathcal L_V \eta = 0$. From the known expression of the first variation (see e.g. \cite{anciaux_book}), namely \[ \frac{d}{dt}\biggr |_{t=0} \vol(L_t) = -\int_L g(V,H) dv_0, \] we see that L-minimality is equivalent to requiring $H$ to be $L^2$-orthogonal to all Legendrian vector fields. For a normal field $V$ we write $V = f \xi + V_H$. Then $\iota_{V_H} d\eta = 2(\varphi V_H)^\flat$, which implies, since $\mathcal L_V \eta = 0$, that $V_H = \frac 1 2 \varphi \nabla f$. For positive signature the following is due to \cite{Iriyeh}. \begin{prop} \label{Lmin} The immersion $ f \colon L \rightarrow M$ of a manifold $L$ is L-minimal (with respect to variations fixing the boundary) if and only if \begin{equation} \delta \alpha_H=0 \text{ or, equivalently, }\dive(\varphi H)=0 \label{eq_lmin} \end{equation} where $\alpha_H = d\eta(H, \cdot)$. \end{prop} \begin{proof} In fact, the well-known formula for the first variation along the normal direction $X$ yields, for variations with $\alpha_X =d u$ for some function $u$ on $L$, \begin{align*} \frac{d}{dt}\biggr |_{t=0} \vol(L_t) &= - \int_L g(X, H) dv_0 =- \int_L g(\alpha_X, \alpha_H) dv_0 \\& =- \int_L g(d u, \alpha_H) dv_0 = - \int_L u \delta\alpha_H dv_0, \end{align*} where we used that $X$ vanishes on the boundary.
Since this vanishes for arbitrary Legendrian variations we conclude \eqref{eq_lmin}. \end{proof} \begin{prop} \label{prop_second_Lmin} Let $(M, \eta, \xi, g, \varphi, \varepsilon)$ be a pseudo-Sasakian manifold and let $L$ be an L-minimal Legendrian submanifold, possibly with boundary. Then the second variation of the volume of $L$ under the normal direction $V = f \xi + \frac{1}{2}\varphi \nabla f$, vanishing on $\partial L$, is \begin{align} \label{formula2var} \frac{d^2}{dt^2}\biggr |_{t=0} \vol(L_t) = \int_L \biggl \{ & \tr_g[g(\nabla^\perp_{\cdot} V,\nabla^\perp_{\cdot} V) + \overline{\operatorname{Rm}} ({\cdot}, V, {\cdot}, V)]- g(A_V, A_V) \\ &- \frac{1}{4} g(h(\nabla f, \nabla f), H) + g(H,V)^2 \biggr \} dv_0 \nonumber \end{align} where $L_t, \; t\in (-\delta,\delta), $ is a family of submanifolds with variational vector field $V$ and $dv_0$ is the volume form of the induced metric at $t=0$. \end{prop} \begin{proof} For this proof in positive signature we refer to \cite{schoen_wolfson}. We fix a local ($t$-dependent) frame $e_i, i =1, \ldots ,n,$ for $L$. One starts with the well-known formula for the first variation along the direction $X$ \[ \frac{d}{dt} \vol(L_t) = - \int_L g(X, H) dv_t \] where $H$ is the mean curvature vector of the family $L_t$ with variational vector field $X$ and differentiates this expression at $t=0$: \begin{equation*} \frac{d^2}{dt^2}\biggr |_{t=0} \vol(L_t) = -\int_L \frac{d}{dt} g(H,X)|_{t=0} dv_0 + \int_L g(X, H)^2 dv_0 \end{equation*} where we have used the well-known fact that $\frac{d}{dt}(dv_t) = - g(X, H) dv_t$. Here we write $g$ for the induced metric on $L_t,$ too. Further we also write $\overline \nabla$ for the pull-back of the Levi-Civita connection along the immersion $(-\delta,\delta) \times L \ni (t,p)\mapsto f_t(p) \in M$.
The first two terms are \begin{equation*} \frac{d}{dt} \biggr |_{t=0} g(H,X) = \frac{d g^{ij}}{dt}(0) g(h(e_i, e_j), X ) + g^{ij} g \biggl ( \frac{d}{dt}|_{t=0} h(e_i, e_j), X \biggr ) \end{equation*} where we recall \begin{align*} \frac{d}{dt} g_{ij} & = X \cdot g(e_i, e_j) = g(\overline \nabla_X e_i,e_j) + g(e_i, \overline \nabla_X e_j) = g(\overline \nabla_{e_i} X, e_j) + g(e_i, \overline \nabla_{e_j} X) \\&= -2 g(h(e_i, e_j), X) = -2g(A_X e_i, e_j), \end{align*} where we have used that $[e_k,X]=0$. It is then $\frac{d}{dt} g^{ij}(0) = - g^{ia} g'_{ab} g^{bj}|_{t=0} = 2\varepsilon_i \varepsilon_j g(h(e_i, e_j), X)$. We have \begin{align*} -\frac{d}{dt} \biggr |_{t=0} g(H,X) &= 2\varepsilon_i \varepsilon_j g(h(e_i, e_j), X)\, g(\overline \nabla_{e_i} X, e_j) + \delta_{ij} \varepsilon_i\, X \cdot g(\overline \nabla_{e_i} X, e_j) \\ &= -2\varepsilon_i \varepsilon_j g(h(e_i, e_j), X)^2 + \varepsilon_i \biggl [ g(\overline \nabla_X \overline \nabla_{e_i} X, e_i) + g(\overline \nabla_{e_i} X, \overline \nabla_X e_i) \biggr ]\\ &= -2\varepsilon_i \varepsilon_j g(h(e_i, e_j), X)^2 + \varepsilon_i \biggl [ \overline{\operatorname{Rm}}(X,e_i,X,e_i) + g( \overline \nabla_{e_i} \overline \nabla_X X, e_i) + |\overline \nabla_{e_i}X|^2 \biggr ]. \end{align*} On the other hand $A_X e_i = g(A_X e_i, e_j) \varepsilon_j e_j$, so $\varepsilon_i g(A_X e_i, A_X e_i) = \varepsilon_i \varepsilon_j g(h(e_i, e_j), X)^2$. So we have \begin{align*} \frac{d^2}{dt^2}\biggr |_{t=0} \vol(L_t)= \int_L \biggl \{ & \varepsilon_i |\nabla^\perp_{e_i} X|^2 + \varepsilon_i \overline{\operatorname{Rm}}(X, e_i, X, e_i) - \varepsilon_i |A_X e_i|^2 \\ &+ \dive\bigl((\overline \nabla_X X)^T\bigr) - g(\overline \nabla_X X, H) + g(X,H)^2 \biggr \} dv_0 \end{align*} The divergence term is $\int_L \dive\bigl(( \overline \nabla_X X)^T\bigr) dv_0 = \int_{\partial L} g(\overline \nabla_X X, \nu) = 0$ since $X$ vanishes on $\partial L$.
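To make explicit a step left implicit above, note how the coefficient of the shape-operator term in the final integrand arises: splitting $\overline \nabla_{e_i} X$ into tangential and normal parts gives

```latex
\begin{align*}
\overline \nabla_{e_i} X = -A_X e_i + \nabla^\perp_{e_i} X,
\qquad\text{so}\qquad
\varepsilon_i\,|\overline \nabla_{e_i} X|^2
  = \varepsilon_i\,|A_X e_i|^2 + \varepsilon_i\,|\nabla^\perp_{e_i} X|^2 ,
\end{align*}
% and since \varepsilon_i |A_X e_i|^2 = \varepsilon_i\varepsilon_j\, g(h(e_i,e_j),X)^2,
% the term -2\varepsilon_i\varepsilon_j\, g(h(e_i,e_j),X)^2 combines with
% \varepsilon_i |\overline\nabla_{e_i}X|^2 to leave exactly
% \varepsilon_i |\nabla^\perp_{e_i}X|^2 - \varepsilon_i |A_X e_i|^2,
% which is what appears in the integrand.
```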
Finally we compute \begin{align*} g(\overline \nabla_X X, H) - \frac 1 4 g(h(\nabla f, \nabla f), H) &= g(\varphi \overline \nabla_X X, \varphi H) - \frac 1 4 g(\varphi h(\nabla f, \varphi H),\nabla f) \\ &= g(\overline \nabla_X \varphi X, \varphi H) - \frac 1 4 g(\varphi h(\nabla f, \varphi H),\nabla f) \\ &= X \cdot g(\varphi X,\varphi H)- g(\varphi X, \overline \nabla_X \varphi H)- \frac 1 4 g(\varphi h(\nabla f, \varphi H),\nabla f) \\ &= (\overline \nabla_X (\varphi X)^\flat)(\varphi H)- \frac 1 4 g(\varphi \overline \nabla_{\varphi H} \nabla f, \nabla f)\\ &= (\overline \nabla_X (\varphi X)^\flat)(\varphi H)+ \frac 1 2 g(\overline \nabla_{\varphi H} X, \nabla f)\\ &= (\overline \nabla_X (\varphi X)^\flat)(\varphi H)+ \frac 1 2 df (\overline \nabla_{\varphi H} X) \\ &= \frac 1 2 ( X df(\varphi H) - df (\overline \nabla_X \varphi H) + df (\overline \nabla_{\varphi H} X))\\ &= \varphi H \cdot X \cdot f \\ &= 0 \end{align*} since $X$ is \emph{normal} to $L$. So we can conclude \eqref{formula2var}. \end{proof} Let us now compute the first term of \eqref{formula2var}.
We have \begin{align*} \nabla_{e_i}^\perp V &= \biggl ( \overline \nabla_{e_i} (f \xi + \frac 1 2 \varphi \nabla f) \biggr )^\perp\\ &= f (\overline \nabla_{e_i} \xi)^\perp + e_i \cdot f \xi + \frac 1 2( \overline \nabla_{e_i} \varphi \nabla f)^\perp\\ &= \varepsilon f \varphi e_i + e_i \cdot f \xi + \frac 1 2 ( \varphi \nabla_{e_i} \nabla f - g(e_i, \nabla f) \xi) \\ &= \varepsilon f \varphi e_i + \frac 1 2 e_i \cdot f \xi + \frac 1 2 \varphi \nabla_{e_i} \nabla f \end{align*} so we get summing over $i$ \begin{align} \varepsilon_i g(\nabla_{e_i}^\perp V,\nabla_{e_i}^\perp V) &= f^2 \varepsilon_i |\varphi e_i|^2 + \frac 1 4 (e_i f)^2 \varepsilon_i \varepsilon + \frac 1 4 |\nabla^2 f|^2 + \varepsilon f g(e_i, \nabla_{e_i} \nabla f) \varepsilon_i \nonumber\\ &= n f^2 + \frac 1 4 \varepsilon |\nabla f|^2 + \frac 1 4 |\nabla^2 f|^2 - \varepsilon f \Delta f \label{1stterm} \end{align} where $|\nabla^2 f|^2 = \varepsilon_i |\nabla_{e_i} \nabla f|^2$ is the norm of the Hessian of $f$. Applying \eqref{eq_RmPhi} we obtain the following, which says that we have the symmetries analogous to the ones of the K\"ahler curvature tensor. \begin{lemma} We have $\sum_i \varepsilon_i \overline{\operatorname{Rm}}(\varphi e_i, V_H, \varphi e_i, V_H) = \sum_i \varepsilon_i \overline{\operatorname{Rm}}(e_i, \varphi V_H, e_i, \varphi V_H)$.
\end{lemma} \begin{proof} Applying the identity \eqref{eq_RmPhi} we have \begin{align*} \varepsilon_i \overline{\operatorname{Rm}}(\varphi e_i, V_H, \varphi e_i, V_H) &= \varepsilon_i g(\varphi \overline{\operatorname{Rm}} (\varphi e_i, V_H) e_i, V_H) + \varepsilon_i \varepsilon (-g(\varphi V_H, e_i) g(\varphi e_i, V_H) - g(e_i, e_i) g(V_H, V_H)) \\ &= - \varepsilon_i \overline{\operatorname{Rm}}(\varphi e_i, V_H, e_i, \varphi V_H) + \varepsilon( |V_H|^2 - n |V_H|^2)\\ &= - \varepsilon_i \overline{\operatorname{Rm}}(e_i, \varphi V_H, \varphi e_i, V_H) + \varepsilon(1-n)|V_H|^2\\ &= - \varepsilon_i g(\varphi \overline{\operatorname{Rm}}(e_i, \varphi V_H) e_i, V_H) \\ &- \varepsilon_i \varepsilon (-g(\varphi V_H, e_i) g(\varphi e_i, V_H) - g(e_i, e_i) g(V_H, V_H)) + \varepsilon(1-n)|V_H|^2\\ &= \varepsilon_i \overline{\operatorname{Rm}}(e_i, \varphi V_H, e_i, \varphi V_H) - \varepsilon( |V_H|^2 - \varepsilon_i g(e_i, e_i) |V_H|^2) + \varepsilon(1-n)|V_H|^2\\ &= \varepsilon_i \overline{\operatorname{Rm}}(e_i, \varphi V_H, e_i, \varphi V_H).
\end{align*} \end{proof} So the second term in our second variation is \begin{align*} \varepsilon_i \overline{\operatorname{Rm}} (e_i, V, e_i, V) &= -\overline{\operatorname{Ric}}(V,V) - \varepsilon_i \overline{\operatorname{Rm}}(\varphi e_i, V, \varphi e_i, V) - \varepsilon \overline{\operatorname{Rm}}(\xi, V, \xi, V) \\ &= - \overline{\operatorname{Ric}}(V,V) - \varepsilon_i f^2 \overline{\operatorname{Rm}}( \varphi e_i, \xi, \varphi e_i, \xi) - 2 \varepsilon_i f \overline{\operatorname{Rm}}(\varphi e_i, V_H, \varphi e_i, \xi)\\ &- \varepsilon_i \overline{\operatorname{Rm}}(\varphi e_i, V_H, \varphi e_i, V_H) - \varepsilon \overline{\operatorname{Rm}}(\xi, V, \xi, V)\\ &= -\overline{\operatorname{Ric}}(V,V) + n f^2 - \varepsilon_i \overline{\operatorname{Rm}}(\varphi e_i, V_H, \varphi e_i, V_H) + \varepsilon(|V|^2 - \varepsilon f^2) \\ &= -\overline{\operatorname{Ric}}(V,V) + n f^2 - \varepsilon_i \overline{\operatorname{Rm}}(\varphi e_i, V_H, \varphi e_i, V_H) + \varepsilon|V_H|^2\\ &= -\overline{\operatorname{Ric}}(V,V) + n f^2 -\varepsilon_i \overline{\operatorname{Rm}}(e_i, \varphi V_H, e_i, \varphi V_H) + \varepsilon |V_H|^2\\ &= -\overline{\operatorname{Ric}}(V_H,V_H) - 2nf^2 + n f^2 + \varepsilon |V_H|^2 \\ &- \varepsilon_i \biggl ( \operatorname{Rm}(e_i, \varphi V_H, e_i, \varphi V_H) - g(h(\varphi V_H, e_i), h(\varphi V_H, e_i)) + g(h(e_i, e_i), h(\varphi V_H, \varphi V_H)) \biggr ) \\ &= -\overline{\operatorname{Ric}}(V_H,V_H) - nf^2 + \varepsilon |V_H|^2 + \operatorname{Ric}(\varphi V_H, \varphi V_H) + g(A_V, A_V) - g(H, h(\varphi V_H, \varphi V_H)) \end{align*} where in the last equality we have used Gauss' formula in Lemma \ref{lemma:gauss} and the fact that, from the definition of the shape operator, we have $A_V e_i =- (\overline \nabla_{e_i} V)^T$ and $A_V = A_{V_H}$. We then compute that $A_{V_H} e_i = \varphi h (e_i, \varphi V_H)$ and hence \[ g(A_V, A_V) = \varepsilon_i g( h(e_i, \varphi V_H), h(e_i, \varphi V_H)).
\] So \eqref{formula2var} becomes \begin{align*} \frac{d^2}{dt^2}\biggr |_{t=0} \vol(L_t) & = \int_L \biggl \{ nf^2 + \frac 1 4 \varepsilon |\nabla f|^2 + \frac 1 4 |\nabla^2 f|^2 - \varepsilon f \Delta f\\ &-n f^2 - \frac 1 4 \overline{\operatorname{Ric}}(\varphi \nabla f, \varphi \nabla f) + \frac 1 4 \varepsilon |\nabla f|^2 + \frac 1 4 \operatorname{Ric} (\nabla f, \nabla f)\\ &\underbrace{- g( H, h(\varphi V_H, \varphi V_H))}_{=- \frac 1 4 g(h(\nabla f, \nabla f), H)} \\ &- \frac 1 4 g(h(\nabla f, \nabla f), H) + \frac 1 4 g(H, \varphi \nabla f)^2 \biggr \} dv_0 \\ &= \frac 1 4 \int_L \biggl \{ -2 \varepsilon |\nabla f|^2 - \overline{\operatorname{Ric}} (\varphi \nabla f, \varphi \nabla f) + |\nabla^2 f|^2 + \operatorname{Ric}(\nabla f, \nabla f)\\ &-2 g(H, h(\nabla f, \nabla f)) + g(H, \varphi \nabla f)^2 \biggr \} dv_0 \end{align*} We can group the Hessian term and the Ricci term by means of the pseudo-Riemannian Bochner formula\footnote{The formula on p.~609 of \cite{anc_geo} differs by a sign as they define $\Delta = \dive (\nabla \cdot)$.} in the Appendix of \cite{anc_geo}: \[ \frac 1 2 \Delta |\nabla f|^2 = \operatorname{Ric}(\nabla f, \nabla f) - g(\nabla f, \nabla(\Delta f)) + |\nabla^2 f|^2 \] that, after integration, gives \[ \int_L (\Delta f)^2 = \int_L ( \operatorname{Ric}(\nabla f, \nabla f) + |\nabla^2 f|^2) \] since $X= f \xi + \frac 1 2 \varphi \nabla f$ vanishes on $\partial L$, and we get the following. \begin{thm} \label{thm:2var} Let $L$ be an L-minimal Legendrian submanifold, possibly with boundary $\partial L$, of a pseudo-Sasakian manifold $(M, \eta, \xi, g,\varphi, \varepsilon)$. 
Then, in the normal Legendrian direction $V = f \xi + \frac 1 2 \varphi \nabla f$ vanishing on $\partial L$, the second variation of the volume is \begin{align*} \frac{d^2}{dt^2}\biggr |_{t=0} \vol(L_t) = \frac 1 4 \int_L \biggl \{ &(\Delta f)^2 -2 \varepsilon |\nabla f|^2 - \overline{\operatorname{Ric}}(\varphi \nabla f, \varphi \nabla f) \\ &-2 g(H, h(\nabla f, \nabla f)) + g(H, \varphi \nabla f)^2 \biggr \} dv_0 \end{align*} where $H$ is the mean curvature vector and $dv_0$ is the volume form of $(L, g)$. \end{thm} \subsection{The minimal case} Let us now consider the more special case where $L$ is minimal (i.e. $H=0$) and \emph{Riemannian}, and $M$ is $\eta$-Sasaki-Einstein, i.e. for some $A,B \in \mathbb R,$ it holds \[ \overline{\operatorname{Ric}} = A g + B \eta \otimes \eta \] where necessarily $B = 2n - \varepsilon A$. In this case our second variation formula reads \begin{align*} \frac{d^2}{dt^2}\biggr |_{t=0} \vol(L_t) &= \frac 1 4 \int_L \biggl \{ (\Delta f)^2 - (A+2 \varepsilon)|\nabla f|^2 \biggr \} dv_0 \end{align*} and we recall that $|\nabla f|^2 \geq 0$ since $L$ is Riemannian. Note that for the Riemannian case $\varepsilon = 1$ we have reobtained the formula of \cite{Ono} (see also \cite{ohnita}). In this case we have a sufficient condition for Legendrian stability coming from the positivity of the second term in the last expression. \begin{prop} A minimal Legendrian $n$-submanifold in a pseudo-Sasakian $\eta$-Einstein manifold with constant $A$ is Legendrian stable if \begin{equation} \label{eq:alwaysstable} A + 2 \varepsilon \leq 0. \end{equation} In particular, if $M$ is Lorentzian-Sasaki-Einstein we have $A = -2n$ and $\varepsilon = -1$, so $L$ is always Legendrian stable. \end{prop} With the same argument used in the Sasakian and K\"ahler case, using that $L$ is a Riemannian manifold, so that the space $C^\infty(L)$ admits an $L^2$-orthogonal decomposition given by the eigenspaces of the Laplacian, we can prove the following. 
\begin{prop} \label{prop:critlambda1} The minimal space-like Legendrian $L$ in the pseudo-Sasaki $\eta$-Einstein manifold $M$ is Legendrian stable if and only if its first eigenvalue of the Laplacian on functions $\lambda_1(L)$ satisfies \begin{equation} \label{est:lambda1} \lambda_1(L) \geq A+2 \varepsilon. \end{equation} \end{prop} \section{Lorentzian Sasakian manifolds} \subsection{Tanno deformations} \label{sec:tanno} The following is a generalization of the well-known Tanno deformations \cite{Tanno}. Starting with a Sasakian manifold $(M, g, \eta, \xi, \varphi)$ one defines for fixed $\alpha \in \mathbb R_+$ and $\beta := \alpha+\alpha^2$ \begin{equation} \label{gtilde} \widetilde g:= \widetilde g_\alpha := \alpha g - \beta \eta \otimes \eta. \end{equation} This is a Lorentzian metric, since $$\widetilde g (\xi,\xi)= \alpha g (\xi,\xi) -(\alpha^2+\alpha)=-\alpha^2.$$ For the proof of Proposition \ref{prop:lor} below we need the next lemma. \begin{lemma} \label{lemma:nabla} If $\widetilde \nabla$ is the Levi-Civita connection of $\widetilde g_\alpha$ and $\nabla$ is the one of $g$, then we have \begin{equation} \label{eq:diffcon} \widetilde \nabla_X Y = \nabla_X Y - \alpha^{-1} \beta( \eta(X) \varphi Y + \eta(Y) \varphi X). \end{equation} \end{lemma} For a proof in the case $\alpha=1$ and $\beta=2$ using Koszul's formula we refer to Proposition 3.3 of Brunetti and Pastore \cite{brunpast}. The Riemannian case is due to Tanno \cite{Tanno}, see also \cite[Chap.~7]{monoBG}. We remark a sign difference with \cite{brunpast} due to the opposite convention in the definition of the fundamental $2$-form. \begin{proof} Define $\widetilde \nabla$ by \[ \widetilde \nabla_X Y = \nabla_X Y + S_X Y, \] where \[ S_XY:= -\alpha^{-1} \beta \biggl[ \eta(X) \varphi Y + \eta(Y) \varphi X \biggr]. \] The tensor field $S$ is symmetric, hence $ \widetilde \nabla$ is a torsion-free connection. 
We compute \begin{eqnarray*} (\widetilde \nabla_X \widetilde g) (Y,Z) &=& \nabla_X \widetilde g (Y,Z) - \widetilde g (S_X Y,Z ) - \widetilde g (Y,S_XZ) \\ &=& - \beta \biggl ( (\nabla_X\eta)(Y)\eta(Z) + (\nabla_X\eta)(Z)\eta(Y) \biggr )\\ & &+\alpha^{-1} \beta \, \widetilde g\biggl(\biggl[ \eta(X) \varphi Y + \eta(Y) \varphi X \biggr], Z \biggr)\\ & &+\alpha^{-1} \beta \,\widetilde g\biggl(Y, \biggl[ \eta(X) \varphi Z + \eta(Z) \varphi X \biggr] \biggr)\\ &=& 0, \end{eqnarray*} where we have used that $\nabla_X \eta (Y) = g(\varphi X, Y)$, as a consequence of Proposition \ref{prop:nablaxi}. Hence, $ \widetilde \nabla$ is metric for $\widetilde g$ and has no torsion, so it coincides with the Levi-Civita connection of $\widetilde g$. \end{proof} The behavior \eqref{eq:diffcon} of the Levi-Civita connection of $\widetilde g_\alpha$ allows us to prove the following. \begin{prop}\label{prop:lor} Let $(\eta, \xi, \varphi, g)$ be a Sasakian structure. Then for $\alpha >0$ the new structure $(\alpha \eta, \alpha^{-1} \xi, \varphi, \widetilde g_\alpha)$ is Lorentzian Sasakian, where $\widetilde g_\alpha$ is defined in \eqref{gtilde}. \end{prop} \begin{proof} For the sake of completeness we prove this proposition. Let $\widetilde \xi = \alpha^{-1} \xi$. First we observe that $Z\in \{\xi,\widetilde \xi\}$ satisfies $ \mathcal L_Z g =0 \mbox{ and } \mathcal L_Z \eta =0$ and as a result $ \mathcal L_Z \widetilde g =0.$ Hence $\widetilde \xi$ is a Killing vector field with $\widetilde g(\widetilde \xi, \widetilde \xi) = -1.$ \\ Moreover, for the second term of $\varphi^2$ using $\widetilde g (\xi,\xi)=-\alpha^2$ one has $g(X,\xi)\xi = -\widetilde g(X,\widetilde \xi)\widetilde \xi,$ which shows that the relation \eqref{eq_Lem_Sas1} is satisfied. 
\\ Let us note that $ \widetilde \nabla_X Y = \nabla_X Y, \mbox{ for } X,Y \in \mathcal D$ and $\widetilde \nabla_\xi \xi = \nabla_\xi \xi=0.$ In order to check \eqref{defnabla} we observe $$-g(X,Y) \xi +g(Y,\xi)X = -g(X,Y) \xi =-\widetilde g(X,Y)\widetilde \xi, \mbox{ for } X,Y \in \mathcal D $$ and $(\nabla_\xi \varphi)\xi =0 = (\widetilde\nabla_{\widetilde\xi} \varphi) \widetilde \xi.$ It remains to compute the expression for $X \in \mathcal D$ $$(\nabla_X \varphi)\xi = -\varphi (\nabla_X \xi)= -\varphi \biggl(\widetilde \nabla_X \xi + \alpha^{-1} \beta( \varphi X) \biggr) = (\widetilde \nabla_X \varphi)\xi +\alpha^{-1} \beta X $$ and $ -g(X,\xi) \xi +g(\xi,\xi)X = -\widetilde g(\widetilde \xi,\widetilde \xi)X.$ This shows \begin{eqnarray*} (\widetilde \nabla_X \varphi) \widetilde \xi &=& \alpha^{-1} X -\alpha^{-2} \beta X = \frac{\alpha -\beta}{\alpha^{2}}\, X = \widetilde g(\widetilde \xi,\widetilde \xi)X\\&=&-\widetilde g(X,\widetilde \xi) \widetilde\xi +\widetilde g(\widetilde\xi,\widetilde\xi)X \end{eqnarray*} and finishes the proof of Proposition \ref{prop:lor}, since the converse statement goes along the same lines. \end{proof} We can compute how the Ricci tensor behaves under these deformations. \begin{prop} \label{prop:loreinst} Let $(M, g, \eta, \xi, \varphi)$ be a Sasakian $\eta$-Einstein manifold with $\operatorname{Ric} = Ag + (2n+A) \eta \otimes \eta$ and let $\widetilde g_\alpha$ be as above. Then $\widetilde g_\alpha$ is Lorentzian Sasakian $\eta$-Einstein with $\widetilde{\operatorname{Ric}} = A_\alpha \widetilde g_\alpha + (2n +A_\alpha) \eta \otimes \eta$ for $A_\alpha = \frac{A+2}{\alpha} + 2$. 
\end{prop} From Lemma \ref{lemma:nabla} we obtain the following about the curvature tensor. \begin{lemma} \label{lemma:Rm} The curvature tensors $\widetilde{\operatorname{Rm}}$ of $\widetilde g_\alpha$ and $\operatorname{Rm}$ of $g$ are related by \[ \widetilde{\operatorname{Rm}}(X,Y)Z = \operatorname{Rm}(X,Y)Z + \alpha^{-1} \beta \, \biggl ( g(\varphi Y, Z) \varphi X - g(\varphi X, Z) \varphi Y - 2g(\varphi X, Y) \varphi Z \biggr) \] for $X,Y,Z$ in $\mathcal D$. \end{lemma} \begin{proof} We compute for $X,Y,Z$ in $\mathcal D$ \begin{align*} \widetilde{\operatorname{Rm}}(X,Y)Z &= \widetilde \nabla_X \nabla_Y Z - \widetilde \nabla_Y \nabla_X Z - \widetilde \nabla_{[X,Y]} Z \\ &= \nabla_X \nabla_Y Z - \alpha^{-1} \beta \, \eta(\nabla_Y Z) \varphi X - \nabla_Y \nabla_X Z \\ &~~~+ \alpha^{-1} \beta \, \eta(\nabla_X Z) \varphi Y - \nabla_{[X,Y]} Z + \alpha^{-1} \beta \, \eta([X,Y]) \varphi Z\\ &= \operatorname{Rm}(X,Y)Z - \alpha^{-1} \beta \, \eta(\nabla_Y Z) \varphi X \\ &~~~+ \alpha^{-1} \beta\, \eta(\nabla_X Z) \varphi Y + \alpha^{-1} \beta \, \eta([X,Y]) \varphi Z. \end{align*} We have that \[ \eta(\nabla_Y Z) = Y \eta(Z) - (\nabla_Y \eta)(Z) = -g(\varphi Y, Z) \] and similarly $\eta(\nabla_X Z) = -g(\varphi X, Z)$. Moreover, one has \[ \eta([X,Y]) = -d\eta(X,Y) = -2g(\varphi X, Y). \] So we have \begin{align*} \widetilde{\operatorname{Rm}}(X,Y)Z &= \operatorname{Rm}(X,Y)Z + \alpha^{-1} \beta \, g(\varphi Y, Z)\varphi X \\&~~- \alpha^{-1} \beta \, g(\varphi X, Z)\varphi Y -2\,\alpha^{-1} \beta \, g(\varphi X, Y)\varphi Z. \end{align*} \end{proof} \begin{proof}[Proof of Proposition \ref{prop:loreinst}] Let $E_i$ be an orthonormal frame with respect to $g$ of $\mathcal D$ and let $\widetilde E_i = \frac 1 {\sqrt \alpha} E_i$. 
We want to compute, for $X, Y$ in $ \mathcal D$ \begin{align*} \widetilde{\operatorname{Ric}} (X,Y) &= \widetilde{\operatorname{Rm}} (X, \widetilde E_i, \widetilde E_i, Y) - \widetilde{\operatorname{Rm}} (X, \widetilde \xi, \widetilde \xi, Y) \\ &= \widetilde g ( \widetilde{\operatorname{Rm}} (X, \widetilde E_i) \widetilde E_i, Y) - \widetilde g(X, Y) \\ &= g ( \widetilde{\operatorname{Rm}} (X, E_i) E_i, Y) - \widetilde g(X, Y) \\ &= \operatorname{Rm} (X, E_i, E_i, Y) + \frac{\beta}{\alpha} \biggl ( -g (\varphi X, E_i) g(\varphi E_i, Y) - 2 g(\varphi X, E_i) g( \varphi E_i, Y) \biggr ) - \widetilde g(X, Y) \\ &= \operatorname{Rm} (X, E_i, E_i, Y) + 3 \frac{\beta}{\alpha} g(\varphi X, \varphi Y) - \widetilde g(X, Y) \\ &= \operatorname{Ric} (X, Y) - \operatorname{Rm} (X, \xi, \xi, Y) + 3 \frac{\beta}{\alpha} g(\varphi X, \varphi Y) - \widetilde g(X, Y) \\ &= A g (X,Y) - g(X,Y) + 3 \frac{\beta}{\alpha} g(\varphi X, \varphi Y) - \widetilde g(X, Y) \\ &= \biggl (\frac A \alpha - \frac 1 \alpha + 3 \frac{\beta}{\alpha^2} -1 \biggr) \widetilde g(X,Y) \\ &= \biggl (\frac{A+2}{\alpha} + 2 \biggr ) \widetilde g(X,Y). \end{align*} Thus we have $A_\alpha = \frac{A+2}{\alpha} + 2$. \end{proof} \subsection{Tanno deformations and Legendrian instability} Through the transformation of Proposition \ref{prop:lor} it is easy to see the following. \begin{prop} \label{prop:minleg} Let $L \subset M$ be an $n$-dimensional submanifold. Then it is minimal Legendrian with respect to $(M,g, \eta, \xi, \varphi)$ if and only if it is also so with respect to $(M,\widetilde g_\alpha, \widetilde \eta, \widetilde \xi, \varphi)$. \end{prop} \begin{proof} The contact structure does not change. As for minimality, from Lemma \ref{lemma:nabla} we can write down the difference of the mean curvature vectors which turns out to be zero, since the restrictions of $\widetilde \nabla$ and $\nabla$ to $L$ coincide. \end{proof} We emphasize the following observation. 
\begin{remark} \label{rmk:isom} The induced metrics $g|_L$ and $\widetilde g_\alpha|_L = \alpha g|_L$ on $L$ are homothetic. In particular, their Hodge-de-Rham Laplacians $\Delta_L$ are related by $\widetilde \Delta_L =\alpha^{-1} \Delta_L $ and their first eigenvalues via $ \widetilde \lambda_1(L) =\alpha^{-1}\lambda_1(L).$ \end{remark} From Proposition \ref{prop:loreinst}, Remark \ref{rmk:isom} and Proposition \ref{prop:critlambda1} we can infer the following. \begin{prop} Let $(M, g)$ be an $\eta$-Sasaki-Einstein manifold with constant $A$. Then an L-minimal Legendrian submanifold $L$ is Legendrian stable in $g$ if, and only if, it is Legendrian stable in the associated Lorentzian-Sasakian metric $\widetilde g_\alpha$, for all $\alpha>0$. \end{prop} \end{document}
\begin{document} \title{Optimal simultaneous measurements of incompatible observables of a single photon} \author{Adetunmise C. Dada} \email{[email protected]} \affiliation{Centre for Quantum Photonics, H. H. Wills Physics Laboratory and Department of Electrical and Electronic Engineering, University of Bristol, Bristol, BS8 1TL, United Kingdom} \author{Will McCutcheon} \affiliation{Centre for Quantum Photonics, H. H. Wills Physics Laboratory and Department of Electrical and Electronic Engineering, University of Bristol, Bristol, BS8 1TL, United Kingdom} \author{Erika Andersson} \affiliation{Institute for Photonics and Quantum Sciences, SUPA, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom} \author{Jonathan Crickmore} \affiliation{Institute for Photonics and Quantum Sciences, SUPA, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom} \author{Ittoop Puthoor} \affiliation{Institute for Photonics and Quantum Sciences, SUPA, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom} \author{Brian D. Gerardot} \affiliation{Institute for Photonics and Quantum Sciences, SUPA, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom} \author{Alex McMillan} \affiliation{Centre for Quantum Photonics, H. H. Wills Physics Laboratory and Department of Electrical and Electronic Engineering, University of Bristol, Bristol, BS8 1TL, United Kingdom} \author{John Rarity} \affiliation{Centre for Quantum Photonics, H. H. Wills Physics Laboratory and Department of Electrical and Electronic Engineering, University of Bristol, Bristol, BS8 1TL, United Kingdom} \author{Ruth Oulton} \affiliation{Centre for Quantum Photonics, H. H. Wills Physics Laboratory and Department of Electrical and Electronic Engineering, University of Bristol, Bristol, BS8 1TL, United Kingdom} \begin{abstract} { Joint or simultaneous measurements of non-commuting quantum observables are possible at the cost of increased unsharpness or measurement uncertainty. 
Many different criteria exist for defining what an ``optimal" joint measurement is, with corresponding different tradeoff relations for the measurements. Understanding the limitations of such measurements is of fundamental interest and relevant for quantum technology. Here, we experimentally test a tradeoff relation for the \emph{sharpness} of qubit measurements, a relation which refers directly to the form of the measurement operators, rather than to errors in estimates. We perform the first optical implementation of the simplest possible optimal joint measurement, requiring fewer quantum resources than have previously often been employed. Using a heralded single-photon source, we demonstrate quantum-limited performance of the scheme on single quanta.}\\ \end{abstract} \maketitle \section{Introduction} There are several types of uncertainty relations in quantum mechanics. To start with, if two non-commuting observables are each measured separately, ``sharply" (each observable measured on an ensemble of identically prepared quantum systems), then the product of their variances is bounded from below by uncertainty relations~\cite{Heisenberg1927anschaulichen,robertson1929uncertainty,EScrhodinger1930About}. In addition, measurements generally disturb a measured quantum state. This leads to further limitations on how well two observables can be measured {\em jointly} on the {\em same} quantum system. Different criteria for exactly what is to be optimised lead to different uncertainty or tradeoff relations for joint measurements, see e.g.~\cite{Heisenberg1927anschaulichen, arthurskelly1965, arthursgoodman1988, stenholm1992simultaneous, hall2004prior, ozawa2003universally, busch2013proof, Busch:1986kxba, Busch:2001ue, busch2014Heisenberg}. Uncertainty relations apply to measurements of any non-commuting observables, such as position and momentum, and spin-1/2 (qubit) observables. 
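For orientation, the first kind of relation can be made concrete for qubits (a standard textbook illustration, not specific to this work): for two sharply and separately measured spin observables the Robertson bound reads

```latex
% Robertson bound for separate, sharp measurements (standard illustration)
\Delta A \,\Delta B \;\geq\; \tfrac{1}{2}\bigl|\langle[\hat A,\hat B]\rangle\bigr|,
\qquad
[\mathbf{a}\cdot\hat\sigma,\ \mathbf{b}\cdot\hat\sigma]
   = 2i\,(\mathbf{a}\times\mathbf{b})\cdot\hat\sigma,
```

so that, for instance, $\hat A=\hat\sigma_x$ and $\hat B=\hat\sigma_y$ give $\Delta\sigma_x\,\Delta\sigma_y \geq |\langle\hat\sigma_z\rangle|$.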
Aside from their fundamental interest, uncertainty relations are relevant for quantum technology, including for quantum state estimation and quantum metrology. For example, they limit how much we can learn about different properties of quantum systems, and are related to why one can bound the information held by an eavesdropper in quantum key distribution. In this paper, we present an experimental test of a tradeoff relation for joint measurements of a spin-1/2 system, given in~\cite{Busch:1986kxba,Busch:2001ue}. Our realization uses the polarization of heralded single photons. Several experimental tests of different kinds of uncertainty relations for joint measurements have been reported, see for example~\cite{Erhart2012,Weston2013, Sulyok2013, rozema2012violation,Ringbauer:2014gk,Kaneda:2014emba, xiong2017optimal}. Many of these realisations have used weak measurements, even though any optimal quantum measurement is necessarily described by a specific generalised quantum measurement (POM or POVM), which can always be realized in a single shot, with no need to resort to the framework of weak measurements or postselection. Joint measurements can also be accomplished through quantum cloning~\cite{Brougham:2006kz,thekkadath2017determining}. This usually requires entangling operations, thereby imposing practical limitations, e.g., for photonic quantum technologies where deterministic entangling gates are lacking. One might also expect that in order to realise a joint measurement of two non-commuting observables, it would be necessary to couple the quantum system to be measured to an ancillary system. For two qubit observables, however, it turns out that this is not necessary, and that an optimal measurement can be implemented by probabilistically selecting to perform one or the other of two projective measurements~\cite{Andersson:2005kvbacadaea}. Such a setup was also suggested for measurement along two orthogonal spin directions in \cite{busch1987some,Barnett1997}. 
This leads to the simplest possible realisation of an optimal joint measurement, requiring no entangling operations, and is therefore the technique we employ here. An early example of a tradeoff relation for joint measurements was given in~\cite{Busch:1986kxba,Busch:2001ue}. This relation holds for measurements on spin-1/2 systems. A related relation~\cite{busch2014Heisenberg} has been experimentally realized on a single trapped ion~\cite{xiong2017optimal}. Here, we aim to test the original relation given in~\cite{Busch:1986kxba,Busch:2001ue} using single photons. \section{Theoretical framework} {\bf The sharpness tradeoff relation.} In~\cite{Busch:1986kxba,Busch:2001ue}, the joint measurement is assumed to have marginal measurement operators of the form \begin{equation} \label{eq:margoper} \Pi^a_\pm= {1\over 2}(\mathbf{\hat 1}\pm \alpha {\bf a}\cdot\hat{\sigma}), ~~~ \Pi^b_\pm= {1\over 2}(\mathbf{\hat 1}\pm \beta {\bf b}\cdot\hat{\sigma}), \end{equation} for the jointly measured spin-1/2 observables $\hat A = {\bf a}\cdot\hat\sigma $ and $\hat B = {\bf b}\cdot\hat\sigma$, where $\bf a$ and $\bf b$ are Bloch vectors, each corresponding to a qubit (e.g., polarization) state. It always holds that $0\le \alpha, \beta\le 1$. If $\alpha=1$, then $\Pi^a_\pm $ correspond to a projective measurement of \mbox{${\bf a}\cdot\hat\sigma$}, while $\alpha=0$ corresponds to a random guess, and similarly for $\Pi^b_\pm$. The coefficients $\alpha$ and $\beta$ can be referred to as the {\em sharpnesses} of the measurements of ${\bf a}\cdot\hat\sigma$ and ${\bf b}\cdot\hat\sigma$, and the closer they are to 1, the sharper the measurements. 
The tradeoff relation for $\alpha$ and $\beta$ given in \cite{Busch:1986kxba,Busch:2001ue} is \begin{equation} \label{alphabound} |\alpha {\bf a} + \beta {\bf b}| + |\alpha {\bf a} - \beta {\bf b}|\leq 2, \end{equation} which can be rewritten as~\cite{Andersson:2005kvbacadaea} \begin{equation} \label{eq:alphaunc} \Delta_\alpha^2\Delta_\beta^2 \equiv {(1-\alpha^2)(1-\beta^2)\over {\alpha^2\beta^2}}\geq\sin^2(2\theta), \end{equation} where $2 \theta$ is the angle between $\bf a$ and $\bf b$, and $\theta$ would be the angle between the equivalent polarization- or qubit-state vectors. The bound in \eqref{alphabound} and \eqref{eq:alphaunc} is tight, in the sense that a joint quantum measurement with marginal measurement operators given by \eqref{eq:margoper}, for any $\alpha$ and $\beta$ saturating the bound, can always be realised. Note that the bound does not depend on the measured state, nor on what values we assign to the measurement outcomes. In this sense, the bound in \eqref{alphabound} and \eqref{eq:alphaunc} can be said to be more ``fundamental" than relations which depend on what values are assigned for measurement outcomes, which is the case for typical error-disturbance relations. In return, we assume that ${\bf a}\cdot\hat\sigma$ and ${\bf b}\cdot\hat\sigma$ are jointly measured using measurement operators of the form in \eqref{eq:margoper}. More generally, however, measurement operators for a joint measurement of ${\bf a}\cdot\sigma$ and ${\bf b}\cdot\sigma$ do not have to be of the form in \eqref{eq:margoper}~\cite{hall2004prior, busch2014Heisenberg}, in which case the bound in \eqref{alphabound} and \eqref{eq:alphaunc} also retains its relevance. 
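As a quick numerical sanity check (ours, not part of the paper), the equivalence between \eqref{alphabound} and \eqref{eq:alphaunc} can be probed by sampling sharpnesses and angles; the Bloch vectors and tolerances below are illustrative choices.

```python
import numpy as np

# Hypothetical numerical check (not from the paper): the sharpness tradeoff
# |alpha*a + beta*b| + |alpha*a - beta*b| <= 2  is equivalent to
# (1-alpha^2)(1-beta^2)/(alpha^2 beta^2) >= sin^2(2*theta).
def check_equivalence(alpha, beta, two_theta):
    a = np.array([0.0, 0.0, 1.0])                              # Bloch vector a
    b = np.array([np.sin(two_theta), 0.0, np.cos(two_theta)])  # Bloch vector b
    lhs2 = np.linalg.norm(alpha*a + beta*b) + np.linalg.norm(alpha*a - beta*b)
    cond2 = lhs2 <= 2 + 1e-12                                  # relation (2)
    cond3 = ((1 - alpha**2)*(1 - beta**2)/(alpha**2 * beta**2)
             >= np.sin(two_theta)**2 - 1e-12)                  # relation (3)
    return cond2 == cond3

rng = np.random.default_rng(0)
assert all(check_equivalence(*rng.uniform(0.05, 0.95, 2), rng.uniform(0, np.pi))
           for _ in range(1000))
```

Both conditions reduce to $\alpha^2\beta^2\cos^2(2\theta) \geq \alpha^2+\beta^2-1$, which is why they always agree.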
In fact, any dichotomic spin-1/2 observable will have measurement operators of the form \begin{equation} \label{eq:margopergen} \Pi_\pm={\gamma_\pm\over 2}\mathbf{\hat 1}\pm{{\gamma_k\over 2} \bf k}\cdot\hat{\sigma}, \end{equation} where $\gamma_+, \gamma_- \ge \gamma_k$ (since measurement operators must be positive) and $\gamma_+ + \gamma_- = 1$ (since $\Pi_++\Pi_-=\hat {\mathbf 1}$ must hold). Thus, the marginal measurement operators of a joint measurement must more generally be of this form. It turns out that if we choose $\gamma_+=\gamma_- = 1/2$, then the measurement can be made sharper, in the sense that $\gamma_k$ in the joint measurement is as large as possible, while keeping the direction $\bf k$ the same. Therefore, no matter what the measurement results are used to estimate, measurement operators of the form in \eqref{eq:margoper} can be said to be optimal. In this sense, \eqref{alphabound} and \eqref{eq:alphaunc} retain their relevance even more generally. For example, in~\cite{busch2014Heisenberg}, one essentially ends up with measurement operators of the form \eqref{eq:margoper}, but for two ``new" spin directions other than $\bf a$ and $\bf b$, and hence one obtains a relation of the form in \eqref{alphabound} and \eqref{eq:alphaunc}, just for these two ``new" spin directions. \begin{figure} \caption{{\bf Joint measurement of incompatible observables $\hat{A}$ and $\hat{B}$.}} \label{fig1:exptsetupa} \end{figure} We can also connect \eqref{eq:alphaunc} with uncertainty relations. The total uncertainties in the joint measurement, denoted $\Delta A_j$ and $\Delta B_j$, arise from two sources: the ``intrinsic uncertainties" $\Delta A$ and $\Delta B$ in the quantum observables when they are measured sharply (measured separately, not jointly) on some quantum state, and ``extra" uncertainty coming from the fact that they are measured jointly. 
If we assume that the measurement results both for the sharp and the joint measurements are taken to be $\pm 1$, then the variance in the joint measurement of ${\bf a}\cdot\hat\sigma$, scaled with $\alpha^{-2}$, can be written \begin{equation} \Delta^2 A_j /\alpha^2= (1-\alpha^2\langle\hat{A}\rangle^2)/\alpha^2 = (1-\alpha^2)/\alpha^2 + 1-\langle\hat{A}\rangle^2, \end{equation} and similarly for $\Delta^2 B_j$\footnote{Equivalently, we can assume that the measurement results are $\pm 1/\alpha$.}. Here $1-\langle{\hat{A}}\rangle^2=\Delta^2A$ is the variance of $\hat{A}$, when measured separately and sharply, and similarly for $\hat{B}$. The quantities $(1-\alpha^2)/\alpha^2\equiv \Delta_\alpha^2$ and $(1-\beta^2)/\beta^2\equiv \Delta_\beta^2$ are seen to be contributions coming from the fact that the measurement is a joint measurement. A lower bound on their product is given by (\ref{eq:alphaunc}), which can now be interpreted as an uncertainty relation giving a lower bound on the uncertainty associated purely with the fact that quantum observables $\hat{A}$ and $\hat{B}$ are measured {\it jointly}. {\bf Optimal joint-measurement scheme.} It turns out that an optimal joint measurement along spin directions ${\bf a}$ and ${\bf b}$ can be realized by doing a projective measurement \emph{either} along $\bf{c}$ {\em or} along $\bf{d}$ with probability $p$ or $1-p$ respectively, as illustrated in Fig.~\ref{fig1:exptsetupa}~\cite{Andersson:2005kvbacadaea}. The results of the joint measurement are assigned as follows. If measurement along $\bf{c}$ is chosen, and the outcome is $C=+1$, then the result of the joint measurement is $A_j=+1 $ and $B_j=+1 $. If the outcome is $C=-1$, the result of the joint measurement is $A_j=-1 $ and $B_j=-1 $. However, if the selected measurement is along $\bf{d}$, and the outcome is $D=+1$, then the result of the joint measurement is $A_j=+1 $ and $B_j=-1$, while $D=-1$ corresponds to $A_j=-1 $ and $B_j=+1 $. 
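The construction above can be sketched numerically (an illustration of ours, with arbitrary example parameters): pick $\alpha$, choose $\beta$ so that the bound is saturated, build $\bf c$, $\bf d$ and $p$, and verify that the probabilistic mixture of the two projective measurements reproduces the marginals $\alpha\langle{\bf a}\cdot\hat\sigma\rangle$ and $\beta\langle{\bf b}\cdot\hat\sigma\rangle$.

```python
import numpy as np

# Illustrative construction (numbers are ours, not the paper's data).
theta = np.deg2rad(20.0)               # half the angle between a and b
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(2*theta), 0.0, np.cos(2*theta)])

alpha = 0.8
s2 = np.sin(2*theta)**2
# Equality in the tradeoff relation: (1-a^2)(1-b^2) = a^2 b^2 sin^2(2 theta)
beta = np.sqrt((1 - alpha**2) / (1 - alpha**2 + alpha**2 * s2))

p = np.linalg.norm(alpha*a + beta*b) / 2
c = (alpha*a + beta*b) / (2*p)
d = (alpha*a - beta*b) / (2*(1 - p))

# c and d are unit Bloch vectors exactly because the bound is saturated.
assert np.isclose(np.linalg.norm(c), 1) and np.isclose(np.linalg.norm(d), 1)
# For any input Bloch vector r: p c.r + (1-p) d.r = alpha a.r, and
# p c.r - (1-p) d.r = beta b.r, i.e. the joint measurement has the
# required marginals.
r = np.array([0.3, -0.5, 0.7])
assert np.isclose(p*np.dot(c, r) + (1-p)*np.dot(d, r), alpha*np.dot(a, r))
assert np.isclose(p*np.dot(c, r) - (1-p)*np.dot(d, r), beta*np.dot(b, r))
```

The unit-norm checks fail if $\alpha,\beta$ do not saturate the bound, which is exactly the content of the optimality statement.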
The expectation values for this joint measurement are then \begin{align} \overline{A_j} = p\langle{{\bf c}\cdot\hat\sigma}\rangle + (1-p)\langle {{\bf d}\cdot\hat{\sigma}}\rangle,\nonumber\\ \overline{B_j} = p\langle{{\bf c}\cdot\hat{\sigma}}\rangle - (1-p)\langle {{\bf d}\cdot\hat{\sigma}}\rangle. \label{eq1:jmconstr} \end{align} On the other hand, the expectation values for a joint measurement with marginal measurement operators given by \eqref{eq:margoper} are $\overline{A_j}=\alpha\langle {\bf a}\cdot\hat\sigma\rangle$ and $\overline{B_j}=\beta\langle {\bf b}\cdot\hat\sigma\rangle$. We therefore obtain \begin{align} {\bf c} =\frac{\left( \alpha {\bf a} + \beta {\bf b} \right)}{2p} {\rm ~~~and~~~} {\bf d} = \frac{\left( \alpha {\bf a} - \beta {\bf b} \right)}{2(1-p)}. \label{eq1:jmconstruct} \end{align} Since ${\bf c},~{\bf d}$ are unit vectors, it holds that \begin{eqnarray} p &=&|(\alpha {\bf a} + \beta {\bf b})|/2\nonumber\\ 1-p &=&|(\alpha {\bf a} - \beta {\bf b})|/2. \label{eq:pexpr} \end{eqnarray} Adding these two equations, we see that this measurement satisfies equality in \eqref{alphabound}, meaning that the joint measurement realized through measuring either $\hat{C}={\bf c}\cdot\hat\sigma$ or $\hat{D}={\bf d}\cdot\hat\sigma$ with probabilities $p$ and $1-p$ is indeed optimal. \begin{figure} \caption{{\bf Schematic of the experimental realization of a deterministic scheme for joint quantum measurements.}} \label{fig1:exptsetup} \end{figure} Solving \eqref{eq:pexpr} and using equality in (\ref{eq:alphaunc}), we can express the corresponding optimal $\alpha$ and $\beta$ in terms of $\theta$ and $p$ as \begin{align} \label{eq:albeSolns} \alpha_{\rm opt}=&\frac{(2 p-1) }{\beta_{\rm opt} \cos (2 \theta )}, \text{ where} \nonumber\\ \beta_{\rm opt} =&\pm \{\sqrt{\pm[2 (p-1) p+1]^2-(1-2 p)^2 \sec ^2(2 \theta)}+\\\nonumber &~~~~~~~~+2 (p-1) p+1\}^{\frac{1}{2}}. 
\end{align} \section{Experiment} The schematic of the experimental measurement setup realizing the strategy is shown in Fig.~\ref{fig1:exptsetup}. We prepare the input state using a combination of waveplates (not shown) on the input arm before the beam splitter, and implement the random selection between measurement directions $\bf{c}$, $\bf{d}$ using a fixed, non-polarizing beam splitter with a splitting ratio corresponding to $p\sim0.7$. The reason for choosing $p\sim 0.7$ is that this choice allows us to investigate a range of angles between the directions $\bf a$ and $\bf b$, by varying the directions $\bf c$ and $\bf d$. The price is that the maximum possible angle between $\bf a$ and $\bf b$ we can achieve is about $2\theta = 50^\circ$. If one wanted to perform a joint measurement of maximally complementary observables, this could only be achieved with $p=1/2$. Conversely, $p=1/2$ would always result in a measurement of two maximally complementary spin-1/2 observables; by varying the directions $\bf c$ and $\bf d$, one can in that case vary the relative sharpnesses of the measurements of ${\bf a}\cdot\hat\sigma$ and ${\bf b}\cdot\hat\sigma$ in the joint measurement. To determine the optimal measurement directions ${\bf c}$ and ${\bf d}$, we solve the equations \eqref{eq:pexpr} for $\alpha,\beta$ satisfying equality in \eqref{eq:alphaunc} for each combination of $\bf{a},\bf{b}$. For each $\bf{a},\bf{b}$, we then use these solutions $\alpha_{\rm opt}$, $ \beta_{\rm opt}$, i.e., \eqref{eq:albeSolns}, in \eqref{eq1:jmconstruct} to get the $\bf{c},\bf{d}$ that are subsequently used as the measurement settings. Of course, due to experimental imperfections, the actual experimental values ${\alpha}_{\rm exp},{\beta}_{\rm exp}$ determined from the measurements of $\bf{c},\bf{d}$ chosen in this way may not necessarily saturate the bound in \eqref{eq:alphaunc}. 
However, we are able to saturate this bound within experimental error bars from Poissonian photon counting statistics. Note that for this scheme it would suffice to use a classical random selection of the measurement, and this is equivalent to the toss of an unbalanced classical coin. \begin{figure} \caption{{\bf Experimental results.}} \label{fig3:VarVSoverlap} \end{figure} \begin{figure} \caption{Experimental values of (a) $\alpha$ and (b) $\beta$ as functions of $\theta$, plotted with the corresponding theoretical values. (c) Product of experimental variances for the sharp measurements. (d) Product of experimental variances for the joint measurements and their comparison with theory. } \label{figS4:gRowAlphasBetas} \end{figure} We perform three sets of experiments using pairs ${\bf a},{\bf b}$, with $\bf a$ kept constant as the ${\bf z}$ direction, and varying $\bf b$ to traverse $\theta=1,\ldots,25$\textdegree, along a different plane on the Bloch sphere for each experiment corresponding to azimuthal angles $\phi_1=-160.7$\textdegree, $\phi_2=-51.6$\textdegree, $\phi_3=83.7$\textdegree, for experiments 1, 2 and 3, respectively. For our experiments, we chose as input state the eigenstate of ${\bf a}\cdot \hat{\sigma}$, denoted $|a\rangle$, which coincides with the ``$|0\rangle$" state. We carry out the measurements of ${\bf c}\cdot\hat\sigma$ and ${\bf d}\cdot\hat\sigma$ by measuring in the corresponding polarization bases using appropriate settings of the half-wave plates and quarter-wave plates and subsequent measurement in the $\{|H\rangle,|V\rangle\}$ polarization basis using a polarizing beam splitter and fiber-coupled single-photon detectors. Using coincidence detection with idler photons as heralds, we are able to register any of the four possible outcomes for each heralded photon going through the measurement circuit (see Fig.~\ref{fig1:exptsetup}). 
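As a sanity check of the closed-form solutions \eqref{eq:albeSolns} (using the `+' branches, for parameters similar to the experiment, $p\sim 0.7$; the specific numbers are our illustration), the resulting $\alpha_{\rm opt},\beta_{\rm opt}$ should saturate the tradeoff relation \eqref{eq:alphaunc}:

```python
import numpy as np

# Illustrative check of the closed-form optimal sharpnesses ('+' branches).
p = 0.7
theta = np.deg2rad(20.0)
K = 2*(p - 1)*p + 1
beta_opt = np.sqrt(K + np.sqrt(K**2 - (1 - 2*p)**2 / np.cos(2*theta)**2))
alpha_opt = (2*p - 1) / (beta_opt * np.cos(2*theta))

# Equality in the sharpness tradeoff relation:
lhs = (1 - alpha_opt**2) * (1 - beta_opt**2) / (alpha_opt**2 * beta_opt**2)
assert np.isclose(lhs, np.sin(2*theta)**2)
assert 0 < alpha_opt <= 1 and 0 < beta_opt <= 1
```

The equality holds exactly because, by construction, $\alpha\beta\cos(2\theta)=2p-1$ and $\alpha^2+\beta^2=2[2p(p-1)+1]$, which is equivalent to $p$ and $1-p$ in \eqref{eq:pexpr} summing to one.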
\begin{figure*} \caption{{\bf Examples of pairs of incompatible observables used in the experiments.}} \label{figS1:BlochABCDstates} \end{figure*} \section{Methods} We ensure a true single-photon implementation by using a heralded source of single photons consisting of a microstructured photonic crystal fibre (PCF) that exploits birefringent phase-matching~\cite{halder2009nonclassical,clark2011intrinsically} to produce photon pairs via spontaneous four-wave mixing (SFWM). This source is pumped by a pulsed Ti-Sapphire laser with a repetition rate of 80~MHz. The fiber is highly birefringent ($\Delta n = 4 \times 10^{-4}$) with phase-matching conditions leading to generation of signal-idler pairs with polarization orthogonal to that of the pump. In addition to this birefringence, the waveguide contributions to the dispersion can be used to tailor the SFWM for generation of naturally narrowband, spectrally uncorrelated photons when pumped with Ti-Sapphire laser pulses at the flat region of the phase-matching curves ($\lambda_{\rm pump} \simeq 726$~nm) where the idler photons ($\lambda_i = 871$~nm) are group-velocity matched to the pump pulse so that they become spectrally broad ($\Delta \lambda_i = 2.2$~nm) while the signal photons ($\lambda_s = 623$~nm) are intrinsically narrow-band ($\Delta \lambda_s = 0.3$~nm). This narrowband phase-matching results in a highly separable joint-spectral amplitude for a wide range of pump bandwidths, thereby enabling the generation of single photons of high state purity. Although the fibre source is in a Sagnac-loop configuration allowing for generation of entangled states when the pump pulse is set to diagonal polarization and split at the polarising beam-splitter (Fig.~\ref{fig1:exptsetup}), we use horizontally polarised pump pulses so that the PCF is pumped in only one direction for use as a heralded single-photon source. 
\begin{figure*} \caption{{\bf Expectation values for the individual ``sharp'' and joint measurements}} \label{figS2:AllExpVals} \end{figure*} \section{Results} Let the heralded detector count rates corresponding to ${ C}=\pm1$ and ${ D}=\pm1$ be $\mathcal{C}_{\pm}$ and $\mathcal{D}_{\pm}$, respectively. From these, we determine the experimental expectation values as \begin{equation} \langle{\hat{C}}\rangle = \frac{\mathcal{C}_{+}-\mathcal{C}_{-}}{\mathcal{C}_{+}+\mathcal{C}_{-}},~~~\langle{\hat{D}}\rangle = \frac{\mathcal{D}_{+}-\mathcal{D}_{-}}{\mathcal{D}_{+}+\mathcal{D}_{-}}. \label{eq:Cexpvalues} \end{equation} Using these values together with $p=0.670(1)$, obtained from the total count rates in the $C$ and $D$ channels, the experimental joint-measurement expectation values are then obtained directly from \eqref{eq1:jmconstr}. To benchmark the performance of the implemented joint measurements, we also perform separate sharp measurements of the incompatible observables ${\bf a}\cdot\hat{\sigma}$, ${\bf b}\cdot\hat{\sigma}$. Again, if we denote the detector count rates corresponding to ${ A}=\pm1$ and ${B}=\pm1$ as $\mathcal{A}_{\pm}$ and $\mathcal{B}_{\pm}$, respectively, the expectation values for the sharp measurements are \begin{equation} \langle{{\bf a}\cdot\hat{\sigma}}\rangle = \frac{\mathcal{A}_{+}-\mathcal{A}_{-}}{\mathcal{A}_{+}+\mathcal{A}_{-}}, ~~~\langle{{\bf b}\cdot\hat{\sigma}}\rangle = \frac{\mathcal{B}_{+}-\mathcal{B}_{-}}{\mathcal{B}_{+}+\mathcal{B}_{-}}. \label{eq:Aexpvalues} \end{equation} This allows us to obtain experimental values of $\alpha$, $\beta$, which directly indicate by how much the sharpnesses are worsened solely by the fact that the measurement is joint, and to evaluate the LHS of relation \eqref{eq:alphaunc}, which we plot as a function of $\theta$ in Fig.~\ref{fig3:VarVSoverlap}(b) as our main result. 
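The conversion from heralded count rates to expectation values in Eqs.~\eqref{eq:Cexpvalues} and \eqref{eq:Aexpvalues} amounts to a normalized difference of the $\pm1$ counts. A minimal sketch, using made-up count rates rather than the paper's data:

```python
def expectation_from_counts(n_plus, n_minus):
    """Eq. (Cexpvalues)/(Aexpvalues): <X> = (N+ - N-)/(N+ + N-)
    for a two-outcome (+1/-1) measurement channel."""
    return (n_plus - n_minus) / (n_plus + n_minus)

# Illustrative heralded count rates (hypothetical numbers)
C_plus, C_minus = 96_000, 4_100
D_plus, D_minus = 39_500, 3_400

exp_C = expectation_from_counts(C_plus, C_minus)
exp_D = expectation_from_counts(D_plus, D_minus)
```

The resulting $\langle\hat{C}\rangle$, $\langle\hat{D}\rangle$ always lie in $[-1,1]$; combining them into the joint-measurement expectation values additionally requires the measured splitting ratio $p$ via \eqref{eq1:jmconstr}.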
Fig.~\ref{figS4:gRowAlphasBetas} shows $\alpha,~\beta$ and the product of total variances for separate (sharp) and joint measurements. The ideal theoretical product of total ``intrinsic'' variances for sharp measurements is zero, as indicated by the solid black line in Fig.~\ref{figS4:gRowAlphasBetas}(c), since the measured state is an eigenstate of ${\bf a}\cdot\hat\sigma$, while that for the joint measurements (determined by the incompatibility of the jointly measured observables, and parametrised by $\theta$) is plotted with the black filled circles in Fig.~\ref{figS4:gRowAlphasBetas}(d). Fig.~\ref{figS1:BlochABCDstates} shows examples of pairs of spin directions ${\bf a},{\bf b},{\bf c},{\bf d}$ used in the sets of experiments. Also shown, in Fig.~\ref{figS2:AllExpVals}, are the expectation values for the individual ``sharp'' measurements of ${\bf a}\cdot\hat{\sigma}$, ${\bf b}\cdot\hat{\sigma}$, and the expectation values resulting from the implemented joint-measurement strategy. \section{Discussion} The non-ideal experimental values of $\Delta^2A\, \Delta^2B$, rather than the joint-measurement strategy, are responsible for the deviation of $\Delta^2A_j\, \Delta^2B_j$ from the ideal value because, as seen in Fig.~\ref{fig3:VarVSoverlap}, the contribution due purely to the jointness of the measurement is at the quantum limit. We see that, even without making any other corrections for experimental imperfections, our results verge on the quantum mechanical limit of how much variances must increase due to performing the quantum measurements jointly. This is thanks to the simplicity of the scheme, the brightness of the heralded single-photon source which reduced the effect of Poissonian noise, and the precise calibration of the two measurement setups with each other with a fidelity of $99.9993(2)\%$. We emphasise that the quantity $\Delta^2_\alpha\Delta^2_\beta$ is extremely sensitive to experimental error and has no upper bound. 
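Since every outcome of the sharp measurements is $\pm1$, $\langle X^2\rangle = 1$ and the variance follows from the expectation value alone, $\Delta^2 X = 1 - \langle X\rangle^2$; this is how variance products like those in Fig.~\ref{figS4:gRowAlphasBetas}(c) can be formed from measured expectation values. A minimal sketch with illustrative numbers (note the paper's joint-measurement variances additionally involve the sharpnesses $\alpha$, $\beta$):

```python
def variance_pm1(expectation):
    """Variance of an observable whose outcomes are restricted to +1/-1:
    <X^2> = 1, so Var X = 1 - <X>^2."""
    return 1.0 - expectation ** 2

# A sharp measurement of a.sigma on its own eigenstate ideally gives
# <A> = 1 and hence zero variance; slightly non-ideal values (hypothetical)
# give a small but nonzero variance product.
var_product = variance_pm1(0.998) * variance_pm1(0.995)
```

This also makes concrete why the product is so sensitive to experimental error: near an eigenstate both factors are close to zero, so small deviations in the expectation values change the product by a large relative amount.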
In conclusion, we have demonstrated an optimal joint measurement scheme that does not use filtering or postselection, nor does it need entangling interactions with an ancilla. A true joint measurement of two observables performed on a single qubit or spin-$\tfrac{1}{2}$ system should have four possible outcomes for each qubit measured, as demonstrated here. Our scheme also does not require the two-dimensional arrays of single-photon detectors often used in weak-measurement-based setups, which makes it easier to implement. The implemented scheme can easily be applied to other qubit degrees of freedom and other two-level quantum systems, since standard projective measurements and flips of unbalanced classical ``coins'' are generally easy to implement. It would be of interest to extend this scheme to joint measurement of incompatible observables of higher-dimensional systems (qudits), or of multiple incompatible observables. This work demonstrates how to fully implement joint measurements of non-commuting spin-1/2 (or qubit) observables in an optimal way, and with fewer quantum resources than often employed previously. {\bf Acknowledgements:} This work was funded by the Future Emerging Technologies (FET)-Open FP7-284743 project Spin Photon Angular Momentum Transfer for Quantum Enabled Technologies (SPANGL4Q) and the Engineering and Physical Sciences Research Council (EPSRC) (EP/L024020/1, EP/M024156/1, EP/N003381/1 and EP/M024458/1). 
\begin{thebibliography}{27} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Heisenberg}(1927)}]{Heisenberg1927anschaulichen} \BibitemOpen \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Heisenberg}}\bibfield {author} {. 
``\"{U}ber den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik'',\ }\bibfield {booktitle} {\emph {\bibinfo {booktitle} {Zeitschrift f\"{u}r Physik}},\ }\href {\doibase 10.1007/bf01397280} {\bibfield {journal} {\bibinfo {journal} {Zeitschrift f\"{u}r Physik A Hadrons and Nuclei}\ }\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {172} (\bibinfo {year} {1927})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Robertson}(1929)}]{robertson1929uncertainty} \BibitemOpen \bibinfo {author} {\bibfnamefont {H.~P.}\ \bibnamefont {Robertson}}\bibfield {author} {. ``{The Uncertainty Principle}'',\ }\href {https://doi.org/10.1103/PhysRev.34.163} {\bibfield {journal} {\bibinfo {journal} {Physical Review}\ }\textbf {\bibinfo {volume} {34}},\ \bibinfo {pages} {163} (\bibinfo {year} {1929})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schr\"odinger}(1930)}]{EScrhodinger1930About} \BibitemOpen \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Schr\"odinger}}\bibfield {author} {. ``About Heisenberg Uncertainty Relation'',\ }\href {https://arxiv.org/abs/quant-ph/9903100v3} {\bibfield {journal} {\bibinfo {journal} {Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys.)}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {296} (\bibinfo {year} {1930})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Arthurs}\ and\ \citenamefont {Kelly~Jr}(1965)}]{arthurskelly1965} \BibitemOpen \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Arthurs}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kelly~Jr}}\bibfield {author} {. 
``On the simultaneous measurement of a pair of conjugate observables'',\ }\href {https://doi.org/10.1002/j.1538-7305.1965.tb01684.x} {\bibfield {journal} {\bibinfo {journal} {Bell System Technical Journal}\ }\textbf {\bibinfo {volume} {44}},\ \bibinfo {pages} {725} (\bibinfo {year} {1965})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Arthurs}\ and\ \citenamefont {Goodman}(1988)}]{arthursgoodman1988} \BibitemOpen \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Arthurs}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Goodman}}\bibfield {author} {. ``Quantum correlations: A generalized Heisenberg uncertainty relation'',\ }\href {https://doi.org/10.1103/PhysRevLett.60.2447} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {60}},\ \bibinfo {pages} {2447} (\bibinfo {year} {1988})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stenholm}(1992)}]{stenholm1992simultaneous} \BibitemOpen \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Stenholm}}\bibfield {author} {. ``Simultaneous measurement of conjugate variables'',\ }\href {https://doi.org/10.1016/0003-4916(92)90086-2} {\bibfield {journal} {\bibinfo {journal} {Annals of Physics}\ }\textbf {\bibinfo {volume} {218}},\ \bibinfo {pages} {233} (\bibinfo {year} {1992})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hall}(2004)}]{hall2004prior} \BibitemOpen \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Hall}}\bibfield {author} {. ``Prior information: How to circumvent the standard joint-measurement uncertainty relation'',\ }\href {https://doi.org/10.1103/PhysRevA.69.052113} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {69}},\ \bibinfo {pages} {052113} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ozawa}(2003)}]{ozawa2003universally} \BibitemOpen \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ozawa}}\bibfield {author} {. 
``Universally valid reformulation of the Heisenberg uncertainty principle on noise and disturbance in measurement'',\ }\href {https://doi.org/10.1103/PhysRevA.67.042105} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {67}},\ \bibinfo {pages} {042105} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Busch}\ \emph {et~al.}(2013)\citenamefont {Busch}, \citenamefont {Lahti},\ and\ \citenamefont {Werner}}]{busch2013proof} \BibitemOpen \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Busch}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Lahti}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~F.}\ \bibnamefont {Werner}}\bibfield {author} {. ``Proof of Heisenberg's error-disturbance relation'',\ }\href {https://doi.org/10.1103/PhysRevLett.111.160405} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {111}},\ \bibinfo {pages} {160405} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Busch}(1986)}]{Busch:1986kxba} \BibitemOpen \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Busch}}\bibfield {author} {. 
``{Unsharp reality and joint measurements for spin observables}'',\ }\href {https://doi.org/10.1103/PhysRevD.33.2253} {\bibfield {journal} {\bibinfo {journal} {Physical Review D}\ }\textbf {\bibinfo {volume} {33}},\ \bibinfo {pages} {2253} (\bibinfo {year} {1986})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Busch}\ \emph {et~al.}(2001)\citenamefont {Busch}, \citenamefont {Grabowski},\ and\ \citenamefont {Lahti}}]{Busch:2001ue} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Busch}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Grabowski}}, \ and\ \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Lahti}},\ }\href {https://doi.org/10.1007/978-3-540-49239-9} {\emph {\bibinfo {title} {{Operational Quantum Physics}}}}\ (\bibinfo {publisher} {Springer},\ \bibinfo {year} {2001})\BibitemShut {NoStop} \bibitem [{\citenamefont {Busch}\ \emph {et~al.}(2014)\citenamefont {Busch}, \citenamefont {Lahti},\ and\ \citenamefont {Werner}}]{busch2014Heisenberg} \BibitemOpen \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Busch}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Lahti}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~F.}\ \bibnamefont {Werner}}\bibfield {author} {. 
``Heisenberg uncertainty for qubit measurements'',\ }\href {https://doi.org/10.1103/PhysRevA.89.012129} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {012129} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Erhart}\ \emph {et~al.}(2012)\citenamefont {Erhart}, \citenamefont {Sponar}, \citenamefont {Sulyok}, \citenamefont {Badurek}, \citenamefont {Ozawa},\ and\ \citenamefont {Hasegawa}}]{Erhart2012} \BibitemOpen \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Erhart}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sponar}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Sulyok}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Badurek}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ozawa}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Hasegawa}}\bibfield {author} {. ``Experimental demonstration of a universally valid error--disturbance uncertainty relation in spin measurements'',\ }\href {https://doi.org/10.1038/nphys2194} {\bibfield {journal} {\bibinfo {journal} {Nature Physics}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {185} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Weston}\ \emph {et~al.}(2013)\citenamefont {Weston}, \citenamefont {Hall}, \citenamefont {Palsson}, \citenamefont {Wiseman},\ and\ \citenamefont {Pryde}}]{Weston2013} \BibitemOpen \bibinfo {author} {\bibfnamefont {M.~M.}\ \bibnamefont {Weston}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Hall}}, \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Palsson}}, \bibinfo {author} {\bibfnamefont {H.~M.}\ \bibnamefont {Wiseman}}, \ and\ \bibinfo {author} {\bibfnamefont {G.~J.}\ \bibnamefont {Pryde}}\bibfield {author} {. 
``Experimental test of universal complementarity relations'',\ }\href {https://doi.org/10.1103/PhysRevLett.110.220402} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {220402} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sulyok}\ \emph {et~al.}(2013)\citenamefont {Sulyok}, \citenamefont {Sponar}, \citenamefont {Erhart}, \citenamefont {Badurek}, \citenamefont {Ozawa},\ and\ \citenamefont {Hasegawa}}]{Sulyok2013} \BibitemOpen \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Sulyok}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sponar}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Erhart}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Badurek}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ozawa}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Hasegawa}}\bibfield {author} {. ``Violation of Heisenberg's error-disturbance uncertainty relation in neutron-spin measurements'',\ }\href {https://doi.org/10.1103/PhysRevA.88.022110} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {022110} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rozema}\ \emph {et~al.}(2012)\citenamefont {Rozema}, \citenamefont {Darabi}, \citenamefont {Mahler}, \citenamefont {Hayat}, \citenamefont {Soudagar},\ and\ \citenamefont {Steinberg}}]{rozema2012violation} \BibitemOpen \bibinfo {author} {\bibfnamefont {L.~A.}\ \bibnamefont {Rozema}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Darabi}}, \bibinfo {author} {\bibfnamefont {D.~H.}\ \bibnamefont {Mahler}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Hayat}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Soudagar}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Steinberg}}\bibfield {author} {. 
``Violation of Heisenberg's measurement-disturbance relationship by weak measurements'',\ }\href {https://doi.org/10.1103/PhysRevLett.109.100404} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {109}},\ \bibinfo {pages} {100404} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ringbauer}\ \emph {et~al.}(2014)\citenamefont {Ringbauer}, \citenamefont {Biggerstaff}, \citenamefont {Broome}, \citenamefont {Fedrizzi}, \citenamefont {Branciard},\ and\ \citenamefont {White}}]{Ringbauer:2014gk} \BibitemOpen \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ringbauer}}, \bibinfo {author} {\bibfnamefont {D.~N.}\ \bibnamefont {Biggerstaff}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Broome}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fedrizzi}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Branciard}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {White}}\bibfield {author} {. ``{Experimental Joint Quantum Measurements with Minimum Uncertainty}'',\ }\href {https://doi.org/10.1103/PhysRevLett.112.020401} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {020401} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kaneda}\ \emph {et~al.}(2014)\citenamefont {Kaneda}, \citenamefont {Baek}, \citenamefont {Ozawa},\ and\ \citenamefont {Edamatsu}}]{Kaneda:2014emba} \BibitemOpen \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Kaneda}}, \bibinfo {author} {\bibfnamefont {S.-Y.}\ \bibnamefont {Baek}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ozawa}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Edamatsu}}\bibfield {author} {. 
``{Experimental Test of Error-Disturbance Uncertainty Relations by Weak Measurement}'',\ }\href {https://doi.org/10.1103/PhysRevLett.112.020402} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {020402} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xiong}\ \emph {et~al.}(2017)\citenamefont {Xiong}, \citenamefont {Yan}, \citenamefont {Ma}, \citenamefont {Zhou}, \citenamefont {Chen}, \citenamefont {Yang}, \citenamefont {Feng},\ and\ \citenamefont {Busch}}]{xiong2017optimal} \BibitemOpen \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Xiong}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Yan}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Feng}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Busch}}\bibfield {author} {. ``Optimal joint measurements of complementary observables by a single trapped ion'',\ }\href {https://doi.org/10.1088/1367-2630/aa70a5} {\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {063032} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brougham}\ \emph {et~al.}(2006)\citenamefont {Brougham}, \citenamefont {Andersson},\ and\ \citenamefont {Barnett}}]{Brougham:2006kz} \BibitemOpen \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Brougham}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Andersson}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont {Barnett}}\bibfield {author} {. 
``{Cloning and joint measurements of incompatible components of spin}'',\ }\href {https://doi.org/10.1103/PhysRevA.73.062319} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {73}},\ \bibinfo {pages} {062319} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Thekkadath}\ \emph {et~al.}(2017)\citenamefont {Thekkadath}, \citenamefont {Saaltink}, \citenamefont {Giner},\ and\ \citenamefont {Lundeen}}]{thekkadath2017determining} \BibitemOpen \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Thekkadath}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Saaltink}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Giner}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lundeen}}\bibfield {author} {. ``Determining Complementary Properties with Quantum Clones'',\ }\href {https://doi.org/10.1103/PhysRevLett.119.050405} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {119}},\ \bibinfo {pages} {050405} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Andersson}\ \emph {et~al.}(2005)\citenamefont {Andersson}, \citenamefont {Barnett},\ and\ \citenamefont {Aspect}}]{Andersson:2005kvbacadaea} \BibitemOpen \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Andersson}}, \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont {Barnett}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspect}}\bibfield {author} {. ``{Joint measurements of spin, operational locality, and uncertainty}'',\ }\href {https://doi.org/10.1103/PhysRevA.72.042104} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {72}},\ \bibinfo {pages} {042104} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Busch}(1987)}]{busch1987some} \BibitemOpen \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Busch}}\bibfield {author} {. 
``Some realizable joint measurements of complementary observables'',\ }\href {https://doi.org/10.1007/BF00734320} {\bibfield {journal} {\bibinfo {journal} {Foundations of Physics}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {905} (\bibinfo {year} {1987})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barnett}(1997)}]{Barnett1997} \BibitemOpen \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont {Barnett}}\bibfield {author} {. ``Quantum information via novel measurements'',\ }\href {https://doi.org/10.1098/rsta.1997.0126} {\bibfield {journal} {\bibinfo {journal} {Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences}\ }\textbf {\bibinfo {volume} {355}},\ \bibinfo {pages} {2279} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{Note1()}]{Note1} \BibitemOpen \bibinfo {note} {Equivalently, we can assume that the measurement results are $\pm 1/\alpha $}\BibitemShut {NoStop} \bibitem [{\citenamefont {Halder}\ \emph {et~al.}(2009)\citenamefont {Halder}, \citenamefont {Fulconis}, \citenamefont {Cemlyn}, \citenamefont {Clark}, \citenamefont {Xiong}, \citenamefont {Wadsworth},\ and\ \citenamefont {Rarity}}]{halder2009nonclassical} \BibitemOpen \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Halder}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Fulconis}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Cemlyn}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Clark}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Xiong}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Wadsworth}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~G.}\ \bibnamefont {Rarity}}\bibfield {author} {. 
``Nonclassical 2-photon interference with separate intrinsically narrowband fibre sources'',\ }\href {https://doi.org/10.1364/OE.17.004670} {\bibfield {journal} {\bibinfo {journal} {Optics Express}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {4670} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Clark}\ \emph {et~al.}(2011)\citenamefont {Clark}, \citenamefont {Bell}, \citenamefont {Fulconis}, \citenamefont {Halder}, \citenamefont {Cemlyn}, \citenamefont {Alibart}, \citenamefont {Xiong}, \citenamefont {Wadsworth},\ and\ \citenamefont {Rarity}}]{clark2011intrinsically} \BibitemOpen \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Clark}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Bell}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Fulconis}}, \bibinfo {author} {\bibfnamefont {M.~M.}\ \bibnamefont {Halder}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Cemlyn}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Alibart}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Xiong}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Wadsworth}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~G.}\ \bibnamefont {Rarity}}\bibfield {author} {. ``Intrinsically narrowband pair photon generation in microstructured fibres'',\ }\href {https://doi.org/10.1088/1367-2630/13/6/065009} {\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {065009} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document} \maketitle \begin{center} \textsc{\textmd{P. Mastrolia\footnote{Universit\`{a} degli Studi di Milano, Italy. Email: [email protected].}, D. D. Monticelli\footnote{Universit\`{a} degli Studi di Milano, Italy. Email: [email protected].} and F. Punzo\footnote{Universit\`{a} degli Studi di Milano, Italy. Email: [email protected]. \\ The three authors are supported by GNAMPA project ``Analisi globale ed operatori degeneri'' and are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`{a} e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). }, }} \end{center} \begin{abstract} We are concerned with nonexistence results of nonnegative weak solutions for a class of quasilinear parabolic problems with a potential on complete noncompact Riemannian manifolds. In particular, we highlight the interplay between the geometry of the underlying manifold, the power nonlinearity and the behavior of the potential at infinity. \end{abstract} \section{Introduction}\label{intro} In this paper we investigate the nonexistence of nonnegative, nontrivial weak solutions (in the sense of Definition \ref{def1} below) to parabolic differential inequalities of the type \begin{equation}\label{Eq} \begin{cases} \partial_t u -\operatorname{div}\pa{\abs{\nabla u}^{p-2}\nabla u}\geq V(x, t) u^q &\text{ in }\, M\times (0, \infty) \\ u = u_0 &\text{ in }\, M\times \set{0}, \end{cases} \end{equation} where $M$ is a complete, $m$--dimensional, noncompact Riemannian manifold with metric $g$, $\operatorname{div}$ and $\nabla$ are respectively the divergence and the gradient with respect to $g$, $p>1, q>\max\{p-1, 1\}$, the potential satisfies $V=V(x,t)>0$ a.e. in $M\times (0,\infty)$ and the initial condition $u_0$ is nonnegative. Local existence, finite time blow-up and global existence of solutions to parabolic Cauchy problems have attracted much attention in the literature. 
In particular, the following semilinear parabolic Cauchy problem \begin{equation}\label{ei1a} \left\{ \begin{array}{ll} \, \partial_t u - \Delta u\, =\,u^{q} \, &\textrm{in}\,\,\mathbb R^m\times (0,\infty) \\&\\ \textrm{ }u \, = u_0& \textrm{in}\,\, \mathbb R^m\times \{0\} \,, \end{array} \right. \end{equation} where $q>1, u_0\ge 0, u_0\in L^\infty(\mathbb R^m)$, has been largely investigated. Indeed (see \cite{Fujita}, \cite{Fuj2} and \cite{Lev}), problem \eqref{ei1a} does not admit global bounded solutions for $1<q\le 1+\frac 2 m$. On the contrary, for $q>1+\frac 2 m$ global bounded solutions exist, provided that $u_0$ is sufficiently small. For initial conditions $u_0\in L^p(\mathbb R^m)$ similar results have been obtained in the framework of mild solutions in the space $C\big([0,T);L^p(\mathbb R^m)\big)$ in \cite{Weiss}, \cite{Weiss2}. Problem \eqref{Eq} with $(M, g)=(\mathbb R^m, g_{\text{flat}})$, where $g_{\text{flat}}$ is the standard flat metric in the Euclidean space, together with its generalization to a wider class of operators of $p$-Laplace type or related to the porous medium equation, has also been largely studied; without claim of completeness we refer the reader to \cite{Galak1}, \cite{Galak2}, \cite{GalakLev}, \cite{MitPohozAbsence}, \cite{MitPohozMilan}, \cite{PoTe}, \cite{PuTe}, and references therein. In particular, in \cite{MitPohozAbsence} it is shown that problem \eqref{Eq} with $M=\mathbb R^m$ and $V\equiv 1$ does not admit nontrivial nonnegative weak solutions, provided that \[ p >\frac{2m}{m+1}\,, \quad q\leq p - 1 + \frac{p}{m}\,.\] Moreover, the blow-up result given in \cite{Fujita} has been extended to the setting of Riemannian manifolds. To further describe such results, let us introduce some notation. Let $(M, g)$ be a complete noncompact Riemannian manifold, endowed with a smooth Riemannian metric $g$. 
Fix any point $x_0\in M$, and for any $x\in M$ denote by $r(x)=\textrm{dist}(x_0, x)$ the Riemannian distance between $x_0$ and $x$. Moreover, let $B(x_0,r)$ be the geodesic ball with center $x_0\in M$ and radius $r>0$, and let $\mu$ be the Riemannian volume on $M$ with volume density $\sqrt g$. In \cite{Zhang} it is proved that no nonnegative nontrivial weak solutions to problem \eqref{Eq} with $p=2$ exist, provided there exist $C>0, \alpha>2, \beta>-2$ such that, for all $r>0$ large enough: \begin{itemize} \item [$(a)$] $\mu(B(x,r))\le C r^{\alpha}$ for all $x\in M$; \item [$(b)$] $\frac{\partial \log \sqrt g}{\partial r}\le \frac C r$; \item[$(c)$] $V=V(x)$, $V\in L^\infty_{loc}(M)$ and $C^{-1} r(x)^{\beta}\leq V(x) \leq C r(x)^{\beta}$. \end{itemize} Observe that if the Ricci curvature of $M$ is nonnegative, then $(a)-(b)$ are satisfied, see e.g. \cite{***}. On the other hand (see Theorem 5.2.10 in \cite{Dav}, or Section 10.1 of \cite{Grig}), hypotheses $(a)-(b)$ imply that $\lambda_1(M)=0,$ where $\lambda_1(M)$ is the infimum of the $L^2$-spectrum of the operator $-\Delta\,$ on $M$\,. The semilinear Cauchy problem \begin{equation}\label{ei4} \left\{ \begin{array}{ll} \, \partial_t u = \Delta u\, +\, h(t) u^\nu \, &\textrm{in}\,\,\mathbb H^m\times (0,T) \\&\\ \textrm{ }u \, = u_0& \textrm{in}\,\, \mathbb H^m\times \{0\} \, \end{array} \right. 
\end{equation} has been studied in \cite{BPT}, where $\mathbb H^m$ is the $m$-dimensional hyperbolic space, $u_0$ is nonnegative and bounded on $M$ and $h$ is a positive continuous function defined in $[0,\infty)$; note that in this case we have $\lambda_1(\mathbb H^m)=\frac{(m-1)^2}{4}.$ To be specific, it has been shown that if $h(t)\equiv 1\; (t\ge 0)$, or if \begin{equation}\label{e59} \alpha_1 t^q \le h(t)\le \alpha_2 t^q \quad \textrm{for any}\;\;t>t_0, \end{equation} for some $\alpha_1>0, \alpha_2>0, t_0>0$ and $q>-1$, then there exist global bounded solutions for sufficiently small initial data $u_0$. Moreover, when $h(t)=e^{\alpha t}\quad (t\ge 0)$ for some $\alpha>0$, the authors showed that: \begin{itemize} \item[$(i)$] if $1< q < 1+\frac{\alpha}{\lambda_1(\mathbb H^m)},$ then every nontrivial bounded solution of problem \eqref{ei4} blows up in finite time; \item[$(ii)$] if $q > 1+\frac{\alpha}{\lambda_1(\mathbb H^m)},$ then problem \eqref{ei4} possesses global bounded solutions for small initial data; \item[$(iii)$] if $q = 1+\frac{\alpha}{\lambda_1(\mathbb H^m)}$ and $\alpha> \frac 2 3 \lambda_1(\mathbb H^m),$ then there exist global bounded solutions of problem \eqref{ei4} for small initial data. \end{itemize} Results analogous to those established in \cite{BPT} have been obtained in \cite{P1} for the problem \begin{equation}\label{e610} \left\{ \begin{array}{ll} \, \partial_t u = \Delta u\, +\, h(t) u^q \, &\textrm{in}\,\,M\times (0,T) \\&\\ \textrm{ }u \, = u_0& \textrm{in}\,\, M\times \{0\} \,, \end{array} \right. \end{equation} where $M$ is a Cartan-Hadamard Riemannian manifold with sectional curvature bounded above by a negative constant, and $u_0\in L^\infty(M)$. Moreover, for initial conditions $u_0\in L^p(M)$, similar results have been established for mild solutions belonging to $C\big([0,T);L^p(M)\big)$ in \cite{P2}.
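For later reference it may help to record the explicit value of the critical exponent appearing in $(i)$--$(iii)$; the following display is a direct substitution of $\lambda_1(\mathbb H^m)=\frac{(m-1)^2}{4}$ (meaningful for $m\geq 2$) and is not stated in this form in \cite{BPT}:

```latex
% Critical exponent of (i)-(iii) for h(t) = e^{\alpha t} on H^m,
% obtained by substituting \lambda_1(H^m) = (m-1)^2/4, m >= 2:
\[
  q_c \;:=\; 1+\frac{\alpha}{\lambda_1(\mathbb H^m)}
       \;=\; 1+\frac{4\alpha}{(m-1)^2}\,,
\]
% so every nontrivial bounded solution blows up for 1 < q < q_c,
% while global bounded solutions exist for small data when q > q_c.
```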
Let us mention that nonexistence results for nonnegative nontrivial solutions have also been widely investigated for elliptic equations and inequalities, both on $\mathbb R^m$ (see, e.g., \cite{DaMit}, \cite{MitPohoz227}, \cite{MitPohoz359}, \cite{MitPohozMilan}, \cite{Mont}, \cite{DaLu}) and on Riemannian manifolds (see \cite{GrigKond}, \cite{GrigS}, \cite{Kurta}, \cite{MMP1}, \cite{MasRigSet}, \cite{Sun1}, \cite{Sun2}). In particular, the present paper is the natural continuation of \cite{MMP1}, where some ideas and methods introduced in \cite{GrigS}, \cite{GrigKond} and \cite{Kurta} have been developed. Indeed, our results can be regarded as the parabolic counterpart of those shown in \cite{MMP1}, concerning nonnegative weak solutions to the inequality \[ -\operatorname{div}\pa{\abs{\nabla u}^{p-2}\nabla u}\geq V(x) u^q \quad \text{ in }\, M\,. \] In \cite{MMP1}, as well as in \cite{GrigKond}, \cite{GrigS}, \cite{Sun1} and \cite{Sun2}, the key assumptions concern the parameters $p, q$ and the behavior of a suitable weighted volume of geodesic balls, with density a negative power of the potential $V(x)$. As in the case of $\mathbb R^m$, on Riemannian manifolds the parabolic case presents substantial differences with respect to the elliptic one. In fact, new test functions have to be used, and suitable estimates of new integral terms are necessary. On the other hand, as in the case of elliptic inequalities on Riemannian manifolds, a simple adaptation of the methods used in $\mathbb R^m$ does not allow one to obtain results as accurate as those we prove in the present work. In the next two subsections we describe our main results and some of their consequences; furthermore, we compare them with results in the literature. \subsection{Main results} In order to formulate our main results, we shall introduce some further notation and hypotheses.
For each $R>0$, $\theta_1\geq 1$, $\theta_2\geq 1$ let $S:=M\times[0,\infty)$ and \[E_R:=\{(x,t)\in S\,:\, r(x)^{\theta_2}+t^{\theta_1}\leq R^{\theta_2} \}\,.\] Let \begin{equation*} \begin{array}{rclrcl} \displaystyle\bar s_1\,&:=&\,\displaystyle\frac q{q-1}\theta_2\,,&\bar s_2\,&:=&\,\displaystyle\frac 1{q-1}\,,\\ \displaystyle\bar s_3\,&:=&\,\displaystyle\frac{pq}{q-p+1}\theta_2\,,&\bar s_4\,&:=&\,\displaystyle\frac{p-1}{q-p+1}\,. \end{array} \end{equation*} The following conditions, which we call {\bf HP1} and {\bf HP2}, are the main hypotheses under which we will derive our nonexistence results for nonnegative nontrivial weak solutions of problem \eqref{Eq}. \textbf{HP1.}\;\,Assume that: $(i)$\; there exist constants $\theta_1\geq1$, $\theta_2\geq1$, $C_0>0$, $C>0$, $R_0>0$, $\varepsilon_0>0$ such that for every $R>R_0$ and for every $0<\varepsilon<\varepsilon_0$ one has \begin{equation}\label{hp1a}\int\int_{E_{2^{1/\theta_2}R}\setminus E_R} t^{(\theta_1-1)\left(\frac q{q-1}-\varepsilon\right)}V^{-\frac{1}{q-1}+\varepsilon}d\mu dt\leq C R^{\bar s_1+C_0\varepsilon}(\log R)^{s_2}\,, \end{equation} for some $0\leq s_2<\bar s_2$\,; $(ii)$\; for the same constants as above, for every $R>R_0$ and for every $0<\varepsilon<\varepsilon_0$ one has \begin{equation}\label{hp1b}\int\int_{E_{2^{1/\theta_2}R}\setminus E_R} r(x)^{(\theta_2-1)p\left(\frac{q}{q-p+1}-\varepsilon\right)}V^{-\frac{p-1}{q-p+1}+\varepsilon}d\mu dt\leq C R^{\bar s_3+C_0\varepsilon}(\log R)^{s_4}\,,\end{equation} for some $0\leq s_4<\bar s_4\,.$ \textbf{HP2.}\;\,Assume that: $(i)$\; there exist constants $\theta_1\geq1$, $\theta_2\geq1$, $C_0>0$, $C>0$, $R_0>0$, $\varepsilon_0>0$ such that for every $R>R_0$ and for every $0<\varepsilon<\varepsilon_0$ one has \begin{eqnarray} \label{hp2a}\int\int_{E_{2^{1/\theta_2}R}\setminus E_R} t^{(\theta_1-1)\left(\frac q{q-1}-\varepsilon\right)}V^{-\frac{1}{q-1}+\varepsilon}d\mu dt&\leq& C R^{\bar s_1+C_0\varepsilon}(\log R)^{\bar s_2}\,,\\
\label{hp2aa} \int\int_{E_{2^{1/\theta_2}R}\setminus E_R} t^{(\theta_1-1)\left(\frac q{q-1}+\varepsilon\right)}V^{-\frac{1}{q-1}-\varepsilon}d\mu dt&\leq& C R^{\bar s_1+C_0\varepsilon}(\log R)^{\bar s_2}\,; \end{eqnarray} $(ii)$\; for the same constants as above, for every $R>R_0$ and for every $0<\varepsilon<\varepsilon_0$ one has \begin{eqnarray} \label{hp2b}\int\int_{E_{2^{1/\theta_2}R}\setminus E_R} r(x)^{(\theta_2-1)p\left(\frac{q}{q-p+1}-\varepsilon\right)}V^{-\frac{p-1}{q-p+1}+\varepsilon}d\mu dt& \leq &C R^{\bar s_3+C_0\varepsilon}(\log R)^{\bar s_4}\,,\\ \label{hp2bb}\int\int_{E_{2^{1/\theta_2}R}\setminus E_R} r(x)^{(\theta_2-1)p\left(\frac{q}{q-p+1}+\varepsilon\right)}V^{-\frac{p-1}{q-p+1}-\varepsilon}d\mu dt& \leq &C R^{\bar s_3+C_0\varepsilon}(\log R)^{\bar s_4}\,. \end{eqnarray} \begin{rem}\label{rem1} Passing to the limit as $\varepsilon\to 0$ we see that, if {\bf HP1} holds, then for the same constants as above conditions \eqref{hp1a} and \eqref{hp1b} hold also for $\varepsilon=0$. Similarly, if {\bf HP2} holds then \eqref{hp2a} and \eqref{hp2b} (or equivalently \eqref{hp2aa} and \eqref{hp2bb}) are satisfied also with $\varepsilon=0$. \end{rem} We prove the following theorems (for the definition of weak solution see Definition \ref{def1} below). \begin{theorem}\label{thm1} Let $p>1$, $q>\max\{p-1, 1\}$, $V>0$ a.e. in $M\times (0, \infty)$, $V \in L^1_{loc}\pa{M\times [0, \infty)}$ and $u_0\in L^1_{loc}(M)$, $u_0\geq 0$ a.e. in $M$. Let $u$ be a nonnegative weak solution of problem \eqref{Eq}. Assume condition {\bf HP1}. Then $u=0$ a.e. in $S$\,. \end{theorem} \begin{theorem}\label{thm2} Let $p>1$, $q>\max\{p-1, 1\}$, $V>0$ a.e. in $M\times (0, \infty)$, $V \in L^1_{loc}\pa{M\times [0, \infty)}$ and $u_0\in L^1_{loc}(M)$, $u_0\geq 0$ a.e. in $M$. Let $u$ be a nonnegative weak solution of problem \eqref{Eq}. Assume condition {\bf HP2}. Then $u=0$ a.e. in $S$\,.
\end{theorem} We should note that, to the best of our knowledge, no nonexistence results for linear or nonlinear parabolic equations on complete, noncompact Riemannian manifolds have been obtained in the literature under conditions similar to {\bf HP1} and {\bf HP2}, nor using the techniques that we exploit to prove Theorems \ref{thm1} and \ref{thm2}. Even if Theorems \ref{thm1} and \ref{thm2} can be regarded as the natural parabolic counterparts of the results in \cite{MMP1} for elliptic equations, their proofs are substantially different from those in the elliptic case. Moreover, we should also observe that in \cite{MMP1} a nonexistence result for the stationary problem was obtained under an assumption different from the stationary counterparts of the conditions {\bf HP1} and {\bf HP2} introduced in the present work (see \cite[condition {\bf HP3}]{MMP1}). An analogous result which could give rise to nontrivial applications cannot be deduced using our methods for parabolic equations, and whether a hypothesis corresponding to \cite[condition {\bf HP3}]{MMP1} can also be introduced in the parabolic setting in order to prove nonexistence results remains an open question. \subsection{Applications} This subsection is devoted to the discussion of some consequences of Theorems \ref{thm1} and \ref{thm2} and to a comparison with existing results in the literature. \begin{cor}\label{cor1} Let $(M, g)=(\mathbb R^m, g_{\text{flat}})$, $V\equiv 1$, $p> 1$. Suppose that \begin{equation}\label{eq55} \max\{1,p-1\}<q \leq \frac p m + p -1\,. \end{equation} Let $u$ be a nonnegative weak solution of problem \eqref{Eq}. Then $u=0$ a.e. in $S$\,. \end{cor} Note that condition \eqref{eq55} in particular requires that $p>\frac{2m}{m+1}$. Note also that Corollary \ref{cor1} agrees with the results in \cite{MitPohozAbsence}. Furthermore, for $p=2$ we recover the results on the Laplace operator in \cite{Fujita, Haya}.
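The lower bound $p>\frac{2m}{m+1}$ can be checked directly: the range \eqref{eq55} is nonempty precisely when $\max\{1,p-1\}<\frac pm+p-1$. For $p\geq 2$ this holds automatically (since $\frac pm>0$), so the binding case is $\max\{1,p-1\}=1$:

```latex
% Nonemptiness of the range (eq55) in the binding case p <= 2:
\[
  1 \;<\; \frac{p}{m}+p-1 \;=\; \frac{p(m+1)}{m}-1
  \quad\Longleftrightarrow\quad
  p \;>\; \frac{2m}{m+1}\,,
\]
% and for p = 2 the range (eq55) reads 1 < q <= 1 + 2/m,
% i.e. the classical Fujita exponent of the semilinear heat equation.
```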
\begin{cor}\label{cor2} Let $M$ be a complete noncompact Riemannian manifold, $p>1$, $q>\max\{p-1, 1\}$ and $u_0\in L^1_{loc}(M)$, $u_0\geq 0$ a.e. in $M$. Suppose the potential $V\in L^1_{loc}\pa{M\times [0, \infty)}$ satisfies \begin{equation}\label{eq70} V(x,t) \geq f(t) h(x) \quad \textrm{for a.e.}\;\; (x,t)\in S, \end{equation} where $f:(0,\infty)\to \mathbb R$, $h: M \to \mathbb R$ are two functions satisfying \begin{equation}\label{eq56} 0 < f(t) \leq C (1+t)^\alpha\,\,\,\textrm{for a.e. }t\in (0,\infty)\qquad\textrm{and}\qquad0< h(x) \leq C (1+ r(x))^{\beta}\,\,\,\textrm{for a.e. }x\in M \end{equation} and \begin{eqnarray} \label{57}&&\int_0^T f(t)^{-\frac 1{q-1}}\,dt\leq C T^{\sigma_2}(\log T)^{\delta_2}\,,\qquad \int_0^T f(t)^{-\frac{p-1}{q-p+1}}\, dt\leq C T^{\sigma_4} (\log T)^{\delta_4}\,,\\ \label{58}&&\int_{B_R} h(x)^{-\frac 1{q-1}}\,d\mu\leq C R^{\sigma_1}(\log R)^{\delta_1}\,,\qquad\int_{B_R} h(x)^{-\frac{p-1}{q-p+1}}\, d\mu\leq C R^{\sigma_3} (\log R)^{\delta_3} \end{eqnarray} for $T,R$ large enough, with $\alpha,\beta,\sigma_1,\sigma_2,\sigma_3,\sigma_4,\delta_1,\delta_2,\delta_3,\delta_4\geq0$ and $C>0$. Assume that \begin{itemize} \item[i)] $\delta_1+\delta_2<\frac{1}{q-1}\,,\quad\delta_3+\delta_4<\frac{p-1}{q-p+1}\,$; \item[ii)] $0\leq\sigma_2\leq\frac{q}{q-1}\,,\quad0\leq\sigma_3\leq\frac{pq}{q-p+1}\,$; \item[iii)] if $\sigma_2=\frac{q}{q-1}$ then $\sigma_1=0\,$, if $\sigma_3=\frac{pq}{q-p+1}$ then $\sigma_4=0\,$; \item[iv)] $\sigma_1\sigma_4\leq\left(\frac{q}{q-1}-\sigma_2\right)\left(\frac{pq}{q-p+1}-\sigma_3\right)\,$. \end{itemize} Then problem \eqref{Eq} does not admit any nontrivial nonnegative weak solution. \end{cor} \begin{cor}\label{cor3} Let $M$ be a complete noncompact Riemannian manifold, $p>1$, $q>\max\{p-1, 1\}$ and $u_0\in L^1_{loc}(M)$, $u_0\geq 0$ a.e. in $M$.
Assume that $V\in L^1_{loc}\pa{M\times [0, \infty)}$ satisfies condition \eqref{eq70} with $f:(0,\infty)\to \mathbb R$, $h: M \to \mathbb R$ such that \begin{equation}\label{eq60} \begin{array}{ll} \displaystyle C^{-1} (1+t)^{-\alpha} \leq f(t) \leq C (1+t)^\alpha&\textrm{for a.e. }t\in (0,\infty)\,\\ \displaystyle C^{-1} (1+r(x))^{-\beta}\leq h(x) \leq C (1+ r(x))^{\beta}&\textrm{for a.e. }x\in M \end{array} \end{equation} and \eqref{57}, \eqref{58} hold for $T,R$ sufficiently large, $\alpha,\beta,\sigma_1,\sigma_2,\sigma_3,\sigma_4,\delta_1,\delta_2,\delta_3,\delta_4\geq0$ and $C>0$. Suppose that \begin{itemize} \item[i)] $\delta_1+\delta_2\leq\frac{1}{q-1}\,,\quad\delta_3+\delta_4\leq\frac{p-1}{q-p+1}\,$; \item[ii)] $0\leq\sigma_2\leq\frac{q}{q-1}\,,\quad0\leq\sigma_3\leq\frac{pq}{q-p+1}\,$; \item[iii)] if $\sigma_2=\frac{q}{q-1}$ then $\sigma_1=0\,$, if $\sigma_3=\frac{pq}{q-p+1}$ then $\sigma_4=0\,$; \item[iv)] $\sigma_1\sigma_4\leq\left(\frac{q}{q-1}-\sigma_2\right)\left(\frac{pq}{q-p+1}-\sigma_3\right)\,$. \end{itemize} Then problem \eqref{Eq} does not admit any nontrivial nonnegative weak solution. \end{cor} \begin{rem}\label{rem2} \begin{itemize} \item[i)] We explicitly note that the hypotheses in Corollaries \ref{cor2} and \ref{cor3} allow for a potential $V$ that can also be independent of $x\in M$ or of $t\in[0,\infty)$. \item[ii)] In the particular case of the Laplace--Beltrami operator, i.e.
for $p=2$, from Corollaries \ref{cor2}, \ref{cor3} we have the following results: \emph{Let $V$ satisfy condition \eqref{eq70}, with $f:(0,\infty)\to \mathbb R$, $h: M \to \mathbb R$ such that \eqref{eq56} holds and \begin{eqnarray} \label{59}&&\int_{B_R}h(x)^{-\frac 1{q-1}}\,d\mu\leq C R^{\sigma_1}(\log R)^{\delta_1}\,,\qquad \int_0^T f(t)^{-\frac{1}{q-1}}\, dt\leq C T^{\sigma_2} (\log T)^{\delta_2}\, \end{eqnarray} for $T,R$ large enough, with $\alpha,\beta,\sigma_1,\sigma_2,\delta_1,\delta_2\geq0$, $C>0$ and \[\delta_1+\delta_2<\frac{1}{q-1}\,,\qquad\sigma_1+2\sigma_2\leq\frac{2q}{q-1}.\] Then there exists no nonnegative, nontrivial weak solution of problem \eqref{Eq} with $p=2$.} \emph{Similarly, if condition \eqref{eq70} on $V$ holds with $f,h$ satisfying \eqref{eq60} and \eqref{59} for $T,R$ sufficiently large, $\alpha,\beta,\sigma_1,\sigma_2,\delta_1,\delta_2\geq0$, $C>0$ and if \[\delta_1+\delta_2\leq\frac{1}{q-1}\,,\qquad\sigma_1+2\sigma_2\leq\frac{2q}{q-1}\,,\] then there exists no nonnegative, nontrivial weak solution of problem \eqref{Eq} with $p=2$.} \end{itemize} \end{rem} We should note that, even if in view of Remark \ref{rem2}-i) problem \eqref{ei4} on the hyperbolic space could in principle be addressed, we cannot actually obtain nonexistence results for it using our results. In fact, condition \eqref{58} is not satisfied if $M=\mathbb H^m$ and $h\equiv 1$, due to the exponential volume growth of geodesic balls in the hyperbolic space. Therefore, we do not recover the results given in \cite{BPT} (see also \cite{P1}). This is essentially due to the fact that in \cite{BPT} spectral analysis and heat kernel estimates on $\mathbb H^m$ have been used. Similar methods have also been used on Cartan-Hadamard manifolds in \cite{P1}. Clearly, such tools are not available on general Riemannian manifolds, which are the object of our investigation.
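The obstruction just mentioned can be made quantitative; the following display is a standard volume computation, not carried out in the text. On $\mathbb H^m$ one has $\mu(B_R)\geq c\,e^{(m-1)R}$ for all large $R$ and some $c>0$, so with $h\equiv 1$ the first integral in \eqref{58} reduces to the volume of a geodesic ball:

```latex
% With M = H^m and h == 1 the weighted volume in (58) is the
% Riemannian volume itself, which grows exponentially:
\[
  \int_{B_R} h(x)^{-\frac{1}{q-1}}\,d\mu \;=\; \mu(B_R)
  \;\geq\; c\, e^{(m-1)R} \qquad\text{for all large } R\,,
\]
% which is incompatible with the polynomial-logarithmic bound
% C R^{\sigma_1}(\log R)^{\delta_1} required by (58).
```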
On the other hand, our hypotheses {\bf HP1} and {\bf HP2} include a large class of Riemannian manifolds for which the results in \cite{BPT} or in \cite{P1} cannot be applied. In particular, this includes the case of Riemannian manifolds that satisfy $(a), (b), (c)$ above, also treated in \cite{Zhang}. The methods employed in \cite{Zhang} are quite different from ours, and porous medium type nonlinear operators are also considered there. However, we remark that in this work we introduce new techniques in the setting of parabolic equations on Riemannian manifolds. We obtain completely new results in the case of the $p$-Laplace operator, which improve on those already present in the literature even in the particular case of semilinear equations involving the Laplacian. Indeed, we obtain more general nonexistence results than those in \cite{Zhang} (see Example \ref{exe1} below). The paper is organized as follows: in Section \ref{sec2} we prove some preliminary results, which will be used in the proofs of the theorems and corollaries stated in the Introduction; Section \ref{sec3} contains the proofs of Theorems \ref{thm1} and \ref{thm2}, while Section \ref{sec4} is devoted to the proofs of the Corollaries. \section{Auxiliary results}\label{sec2} We begin with the following definition. \begin{defi}\label{def1} Let $p>1$, $q>\max\{p-1, 1\}$, $V>0$ a.e. in $M\times (0, \infty)$, $V \in L^1_{loc}\pa{M\times [0, \infty)}$ and $u_0\in L^1_{loc}(M)$, $u_0\geq 0$ a.e. in $M$. We say that $u \in W^{1,p}_{loc}(M\times[0,\infty))\cap L^q_{loc}(M\times[0,\infty); V d\mu dt)$ is a \emph{weak solution} of problem \eqref{Eq} if $u\geq 0$ a.e. in $M\times (0, \infty)$ and for every $\psi \in W^{1,p}(M\times[0,\infty))$, with $\psi\geq 0$ a.e. in $M\times [0, \infty)$ and compact support, one has \begin{equation}\label{Eq_weakSol} \intst{\psi u^q V} \leq \intst{\abs{\nabla u}^{p-2}\pair{\nabla u, \nabla \psi}} - \intst{u\,\partial_t\psi}-\int_M u_0 \psi(x, 0)\,d\mu.
\end{equation} \end{defi} The next lemmas will be the crucial tools in the proofs of Theorems \ref{thm1} and \ref{thm2}. \begin{lemma} Let $s \geq \max\set{1, \frac{q}{q-1}, \frac{pq}{q-p+1}}$ be fixed. Then there exists a constant $C>0$ such that for every $\alpha \in \pa{-\frac{1}{2}\min\set{1, p-1}, 0}$, every nonnegative weak solution $u$ of problem \eqref{Eq} and every $\varphi \in \operatorname{Lip}\pa{M\times [0, \infty)}$ with compact support and $0\leq \varphi \leq 1$ one has \begin{align} \label{1}&\frac{1}{2} \intst{V u^{q+\alpha} \varphi^s} +\frac{3}{4} |\alpha|\intst{|\nabla u|^pu^{\alpha-1}\varphi^s}\\ \nonumber&\hspace{1cm}\leq C\set{\abs{\alpha}^{-\frac{(p-1)q}{q-p+1}} \intst{\abs{\nabla \varphi}^{\frac{p\pa{q+\alpha}}{q-p+1}}V^{-\frac{p+\alpha-1}{q-p+1}}} + \intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}}}. \end{align} \end{lemma} \begin{proof} For any $\varepsilon>0$ let $u_\varepsilon := u+\varepsilon$. Define $\psi = u_\varepsilon^\alpha \varphi^s$; then $\psi$ is an admissible test function for problem \eqref{Eq}, with \begin{equation*} \nabla\psi = \alpha u_\varepsilon^{\alpha-1} \varphi^s \nabla u + s\varphi^{s-1}u_\varepsilon^\alpha\nabla\varphi, \quad \partial_t\psi=\alpha u_\varepsilon^{\alpha-1}\varphi^s\partial_t u + s\varphi^{s-1}u_\varepsilon^\alpha\partial_t\varphi. \end{equation*} Inequality \eqref{Eq_weakSol} gives \begin{equation}\label{EQ2} \intst{u^q u_\varepsilon^\alpha \varphi^s V} \leq \alpha\intst{\abs{\nabla u}^pu_\varepsilon^{\alpha-1}\varphi^s} +s\intst{\abs{\nabla u}^{p-2}\pair{\nabla u, \nabla\varphi}u_\varepsilon^\alpha \varphi^{s-1}} + I, \end{equation} where \begin{equation} I = -\alpha \intst{u_\varepsilon^{\alpha-1}\varphi^s u \partial_t u} - s\intst{u u_\varepsilon^{\alpha} \varphi^{s-1} \partial_t\varphi } - \int_M{u_0 \pa{u_0+\varepsilon}^\alpha \varphi^s\pa{x, 0}}\,d\mu.
\end{equation} Now, since $u=u_\varepsilon-\varepsilon$, we have \begin{align*} -\alpha \intst{u_\varepsilon^{\alpha-1}\varphi^s u \partial_t u} &= -\alpha\intst{u_\varepsilon^{\alpha}\varphi^s \partial_t u}+\alpha\varepsilon\intst{u_\varepsilon^{\alpha-1}\varphi^s \partial_t u}\\ &=-\frac{\alpha}{\alpha+1}\intst{\partial_t\big(u_\varepsilon^{\alpha+1}\big)\varphi^s}+\varepsilon\intst{\partial_t\big(u_\varepsilon^{\alpha}\big)\varphi^s}\,. \end{align*} Since $u_\varepsilon^\alpha,u_\varepsilon^{\alpha+1}\in W^{1,p}_{loc}(M\times[0,\infty))$ with $p>1$ and since $\varphi^s\in W^{1,p'}(M\times[0,\infty))$ and has compact support, integrating by parts we obtain \begin{align*} -\alpha \intst{u_\varepsilon^{\alpha-1}\varphi^s u \partial_t u} =& \frac{\alpha s}{\alpha+1}\intst{\varphi^{s-1} u_\varepsilon^{\alpha+1} \partial_t \varphi} -\varepsilon s\intst{u_\varepsilon^\alpha \varphi^{s-1}\partial_t\varphi}\\ &+\frac{\alpha}{\alpha+1}\int_M{\varphi^s\pa{x, 0}\pa{u_0+\varepsilon}^{\alpha+1}}\,d\mu-\varepsilon\int_M{\varphi^s\pa{x, 0}\pa{u_0+\varepsilon}^{\alpha}}\,d\mu, \end{align*} thus, recalling that $u_\varepsilon=u+\varepsilon$, we have \begin{equation} I = -\frac{s}{\alpha+1} \intst{u_\varepsilon^{\alpha+1}\varphi^{s-1}\partial_t\varphi}-\frac{1}{\alpha+1}\int_M{\pa{u_0+\varepsilon}^{\alpha+1}\varphi^s\pa{x, 0}}\,d\mu.
\end{equation} This, combined with \eqref{EQ2}, yields \begin{align} \intst{u^qu_\varepsilon^\alpha \varphi^s V} &\leq \alpha\intst{\abs{\nabla u}^pu_\varepsilon^{\alpha-1}\varphi^s} +s\intst{\abs{\nabla u}^{p-2}\pair{\nabla u, \nabla\varphi}u_\varepsilon^\alpha \varphi^{s-1}} \\ \nonumber &-\frac{s}{\alpha+1} \intst{u_\varepsilon^{\alpha+1}\varphi^{s-1}\partial_t\varphi}-\frac{1}{\alpha+1}\int_M{\pa{u_0+\varepsilon}^{\alpha+1}\varphi^s\pa{x, 0}}\,d\mu \end{align} and then \begin{align}\label{Eq_3} &\abs{\alpha}\intst{\abs{\nabla u}^pu_\varepsilon^{\alpha-1}\varphi^s}+\intst{u^qu_\varepsilon^\alpha \varphi^s V} +\frac{1}{\alpha+1}\int_M{\pa{u_0+\varepsilon}^{\alpha+1}\varphi^s\pa{x, 0}}\,d\mu\\\nonumber&\leq s\intst{\abs{\nabla u}^{p-2}\pair{\nabla u, \nabla\varphi}u_\varepsilon^\alpha \varphi^{s-1}} -\frac{s}{\alpha+1} \intst{u_\varepsilon^{\alpha+1}\varphi^{s-1}\partial_t\varphi}. \end{align} Now we estimate the first integral in the right-hand side of \eqref{Eq_3} using Young's inequality, obtaining \begin{align*} &s\intst {\varphi^{s-1}u^{\alpha}_{\varepsilon}\abs{\nabla u}^{p-2}\pair{\nabla u, \nabla\varphi}}\\ &\qquad\leq s\intst {\varphi^{s-1}u^{\alpha}_{\varepsilon}\abs{\nabla u}^{p-1} \abs{\nabla\varphi}} \\ &\qquad= \intst{\pa{\abs{\alpha}^{\frac{p-1}{p}}\varphi^{s\frac{p-1}{p}}u_\varepsilon^{-\pa{\abs{\alpha}+1}\frac{p-1}{p}}\abs{\nabla u}^{p-1} }\pa{s \abs{\alpha}^{-\frac{p-1}{p}}\varphi^{\frac{s}{p}-1}u_\varepsilon^{1-\frac{\abs{\alpha}+1}{p}}\abs{\nabla \varphi}}} \\ &\qquad\leq\frac{\abs{\alpha}}{4}\intst{\varphi^su^{\alpha-1}_{\varepsilon}\abs{\nabla u}^p}+\frac{s}{p}\sq{\frac{4s(p-1)}{\abs{\alpha}p}}^{p-1}\intst{\varphi^{s-p}u^{p-(\abs{\alpha}+1)}_\varepsilon\abs{\nabla\varphi}^p}.
\end{align*} From \eqref{Eq_3} we deduce \begin{align}\label{Eq_4} &\frac34\abs{\alpha}\intst{\abs{\nabla u}^pu_\varepsilon^{\alpha-1}\varphi^s}+\intst{u^qu_\varepsilon^\alpha \varphi^s V} +\frac{1}{\alpha+1}\int_M{\pa{u_0+\varepsilon}^{\alpha+1}\varphi^s\pa{x, 0}}\,d\mu\\ \nonumber&\qquad\leq \frac{s}{p}\sq{\frac{4s(p-1)}{\abs{\alpha}p}}^{p-1}\intst{\varphi^{s-p}u^{p-(\abs{\alpha}+1)}_\varepsilon\abs{\nabla\varphi}^p} +\frac{s}{\alpha+1} \intst{u_\varepsilon^{\alpha+1}\varphi^{s-1}\abs{\partial_t\varphi}}. \end{align} Note that, by Young's inequality, \begin{align*} &\frac{s}{p}\sq{\frac{4s(p-1)}{\abs{\alpha}p}}^{p-1}\intst{\varphi^{s-p}u^{p-(\abs{\alpha}+1)}_\varepsilon\abs{\nabla\varphi}^p} \\ &\qquad=\frac{s}{p}\sq{\frac{4s(p-1)}{\abs{\alpha}p}}^{p-1} \intst{\pa{u_\varepsilon^{p+\alpha-1}\varphi^{s\pa{\frac{p+\alpha-1}{q+\alpha}}}V^{\frac{p+\alpha-1}{q+\alpha}}}\pa{\abs{\nabla\varphi}^p\varphi^{s-p-s\pa{\frac{p+\alpha-1}{q+\alpha}}}V^{-\frac{p+\alpha-1}{q+\alpha}}}} \\ &\qquad\leq \frac14 \intst{u_\varepsilon^{q+\alpha}\varphi^s V} + C\pa{\alpha, s} \intst{\abs{\nabla\varphi}^{\frac{p\pa{q+\alpha}}{q-p+1}}\varphi^{s-p\pa{\frac{q+\alpha}{q-p+1}}}V^{-\frac{p+\alpha-1}{q-p+1}}} \end{align*} and \begin{align*} &\frac{s}{\alpha+1}\intst{u_\varepsilon^{\alpha+1}\varphi^{s-1}\abs{\partial_t\varphi}} \\ &\qquad=\frac{s}{\alpha+1}\intst{\pa{u_\varepsilon^{\alpha+1}\varphi^{s\pa{\frac{\alpha+1}{q+\alpha}}}V^{\frac{\alpha+1}{q+\alpha}}}\pa{\varphi^{-s\pa{\frac{\alpha+1}{q+\alpha}}+s-1}\abs{\partial_t\varphi}V^{-\frac{\alpha+1}{q+\alpha}}}} \\&\qquad\leq \frac14\intst{u_\varepsilon^{q+\alpha}\varphi^s V} + D\pa{\alpha, s}\intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}}\varphi^{s-\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}} } \end{align*} where \[ C\pa{\alpha, s}= \frac{s}{p}\sq{\frac{4s(p-1)}{\abs{\alpha}p}}^{p-1} \frac{q-p+1}{q+\alpha}\sq{\frac{\pa{q+\alpha}p}{4s\pa{p+\alpha-1}}\pa{\frac{4s\pa{p-1}}{p\abs{\alpha}}}^{1-p}}^{-\frac{p+\alpha-1}{q-p+1}} \] and
\[ D\pa{\alpha, s}=\frac{s}{\alpha+1}\frac{q-1}{q+\alpha}\pa{\frac{4s}{q+\alpha}}^{\frac{\alpha+1}{q-1}}. \] Substituting in \eqref{Eq_4} we have \begin{align*} &\frac34\abs{\alpha}\intst{\abs{\nabla u}^pu_\varepsilon^{\alpha-1}\varphi^s}+\intst{u^qu_\varepsilon^\alpha \varphi^s V}\\&\quad-\frac12\intst{u_\varepsilon^{q+\alpha}\varphi^s V}+\frac{1}{\alpha+1}\int_M{\pa{u_0+\varepsilon}^{\alpha+1}\varphi^s\pa{x, 0}}\,d\mu\\&\qquad\leq C\pa{\alpha, s}\intst{\abs{\nabla\varphi}^{\frac{p\pa{q+\alpha}}{q-p+1}} \varphi^{s-p\pa{\frac{q+\alpha}{q-p+1}}}V^{-\frac{p+\alpha-1}{q-p+1}}}+D\pa{\alpha, s}\intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}}\varphi^{s-\frac{q+\alpha}{q-1}}V^{-\frac{\alpha+1}{q-1}}}. \end{align*} Now letting $\varepsilon\rightarrow0$ and applying Fatou's lemma, we get \begin{align}\label{Eq_5} \frac{3}{4} |\alpha|\intst{|\nabla u|^pu^{\alpha-1}\varphi^s}&+\frac{1}{2}\intst{V u^{q+\alpha}\varphi^s}\\ &\nonumber\quad\leq C\pa{\alpha, s}\intst{\abs{\nabla\varphi}^{\frac{p\pa{q+\alpha}}{q-p+1}}\varphi^{s-p\pa{\frac{q+\alpha}{q-p+1}}}V^{-\frac{p+\alpha-1}{q-p+1}}}\\ &\nonumber\qquad+D\pa{\alpha, s}\intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}}\varphi^{s-\frac{q+\alpha}{q-1}}V^{-\frac{\alpha+1}{q-1}}}\,, \end{align} where we use the convention $|\nabla u|^pu^{\alpha-1}\equiv0$ on the set where $u=0$, since $\nabla u=0$ a.e. on level sets of $u$. Now since there exists a positive constant $C$, depending on $s,p,q$, such that \[ C\pa{\alpha, s} \leq C \abs{\alpha}^{-\frac{(p-1)q}{q-p+1}}, \quad D\pa{\alpha, s} \leq C, \] and since $0\leq\varphi\leq1$ on $M\times[0,\infty)$, by our assumptions on $s$ the conclusion follows from \eqref{Eq_5}. \end{proof} \begin{lemma}\label{LE_tech2} Let $s\geq \max\set{1, \frac{q+1}{q-1}, \frac{2 pq}{q-p+1}}$ be fixed.
Then there exists a constant $C>0$ such that for every nonnegative weak solution $u$ of equation \eqref{Eq}, every function $\varphi \in \operatorname{Lip}(S)$ with compact support and $0 \leq \varphi \leq 1$ and every $\alpha\in \pa{-\frac{1}{2}\min\set{1,p-1,q-1,\frac{q-p+1}{p-1}}, 0}$ one has \begin{align} \label{2.10}&\intst{\varphi^su^q V}\\ \nonumber&\quad\leq C \pa{\abs{\alpha}^{-1-\frac{(p-1)q}{(q-p+1)}}\intst{{V^{-\frac{p+\alpha-1}{q-p+1}}\abs{\nabla\varphi}^{\frac{p(q+\alpha)}{q-p+1}}}}+|\alpha|^{-1} \intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}} }^\frac{p-1}{p}\\ \nonumber&\qquad\times\pa{\int\int_{S\setminus K} V^{-\frac{(1-\alpha)(p-1)}{q-(1-\alpha)(p-1)}} \abs{\nabla\varphi}^\frac{pq}{q-(1-\alpha)(p-1)}\,d\mu dt}^\frac{q-(1-\alpha)(p-1)}{pq}\pa{\int\int_{S\setminus K}\varphi^su^q V\,d\mu dt}^\frac{(1-\alpha)(p-1)}{pq} \\ \nonumber&\qquad+C\left(\int\int_{S\setminus K}\varphi^s u^{q+\alpha}V\,d\mu dt\right)^\frac{1}{q+\alpha}\pa{\intst{V^{-\frac{1}{q+\alpha-1}} \abs{\partial_t\varphi}^{\frac{q+\alpha}{q+\alpha-1}}}}^{\frac{q+\alpha-1}{q+\alpha}}, \end{align} where $K=\{\pa{x, t}\in S : \varphi(x, t)=1\}$. \begin{proof} Under our assumptions $\psi=\varphi^s$ is an admissible test function in equation \eqref{Eq_weakSol}. Thus we obtain \begin{equation}\label{6} \intst{ \varphi^s u^q V}\leq s \intst{\varphi^{s-1}\abs{\nabla u}^{p-2}\pair{\nabla u,\nabla\varphi}}-s\intst{u\varphi^{s-1}\partial_t \varphi} -\int_M u_0(x)\varphi^s(x,0)\,d\mu\,. \end{equation} An application of H\"{o}lder's inequality yields \begin{align} \label{7}\intst{u\varphi^{s-1}|\partial_t \varphi|} \leq \left(\int\int_{S\setminus K} u^{q+\alpha}V \varphi^s\, d\mu dt \right)^{\frac1{q+\alpha}}\left(\intst{V^{-\frac 1{q+\alpha-1}}\varphi^{\frac{(s-1)(q+\alpha)-s}{q+\alpha-1}}|\partial_t \varphi|^{\frac{q+\alpha}{q+\alpha-1}}}\right)^{\frac{q+\alpha-1}{q+\alpha}}\,.
\end{align} On the other hand, using again H\"{o}lder's inequality we obtain \begin{align} \label{7a}&\intst{ s\varphi^{s-1}\abs{\nabla u}^{p-1}\abs{\nabla\varphi}}\\ \nonumber&\qquad=s\intst{\pa{\varphi^{\frac{p-1}{p}s} \abs{\nabla u}^{p-1}u^{-\frac{p-1}{p}(1-\alpha)}} \pa{\varphi^{\frac{s}{p}-1}u^{\frac{p-1}{p}(1-\alpha)}\abs{\nabla\varphi}}}\\ \nonumber&\qquad\leq s\pa{\intst{\varphi^s \abs{\nabla u}^pu^{\alpha-1}}}^\frac{p-1}{p} \pa{\intst{\varphi^{s-p}u^{(p-1)(1-\alpha)}\abs{\nabla\varphi}^p}}^\frac{1}{p}. \end{align} Moreover, from inequality \eqref{1} we deduce \begin{align}\label{2.6} \intst{{\varphi^s\abs{\nabla u}^pu^{\alpha-1}}}& \leq C |\alpha|^{-1-\frac{(p-1)q}{q-p+1}}\intst{{V^{-\frac{p+\alpha-1}{q-p+1}}\abs{\nabla\varphi}^{\frac{p(q+\alpha)}{q-p+1}}}}\\ \nonumber&+ C|\alpha|^{-1}\intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}}, \end{align} with $C>0$ depending on $s$. Thus from \eqref{6}, \eqref{7}, \eqref{7a} and \eqref{2.6} we obtain \begin{align} \label{2.8}& \intst{\varphi^su^q V} \\ \nonumber& \leq C\left\{ |\alpha|^{-1-\frac{(p-1)q}{q-p+1}}\intst{{V^{-\frac{p+\alpha-1}{q-p+1}}\abs{\nabla\varphi}^{\frac{p(q+\alpha)}{q-p+1}}}} +|\alpha|^{-1} \intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}} \right\}^\frac{p-1}{p}\\ \nonumber &\times\pa{\intst{ \varphi^{s-p} u^{(p-1)(1-\alpha)}\abs{\nabla\varphi}^p}}^\frac{1}{p}\\ \nonumber & +C\left(\int\int_{S\setminus K} u^{q+\alpha}V \varphi^s\, d\mu dt \right)^{\frac1{q+\alpha}}\left(\intst{V^{-\frac 1{q+\alpha-1}}\varphi^{\frac{(s-1)(q+\alpha)-s}{q+\alpha-1}}|\partial_t \varphi|^{\frac{q+\alpha}{q+\alpha-1}}}\right)^{\frac{q+\alpha-1}{q+\alpha}}\,.
\end{align} We use again H\"{o}lder's inequality with exponents \[ a=\frac{q}{(1-\alpha)(p-1)}, \qquad b=\frac{a}{a-1}=\frac{q}{q-(1-\alpha)(p-1)} \] to obtain \begin{align*} &\intst{\varphi^{s-p}u^{(p-1)(1-\alpha)}\abs{\nabla\varphi}^p}\\ &\qquad\leq \pa{\int\int_{S\setminus K}\varphi^su^q V\,d\mu dt}^\frac{(1-\alpha)(p-1)}{q}\pa{\int\int_{S\setminus K}\varphi^{s-\frac{pq}{q-(1-\alpha)(p-1)}} V^{-\frac{(1-\alpha)(p-1)}{q-(1-\alpha)(p-1)}} \abs{\nabla\varphi}^\frac{pq}{q-(1-\alpha)(p-1)}\,d\mu dt}^\frac{q-(1-\alpha)(p-1)}{q}. \end{align*} Substituting into \eqref{2.8} we have \begin{align} \label{2.8a}& \intst{\varphi^su^q V} \\ \nonumber& \leq C\left\{ |\alpha|^{-1-\frac{(p-1)q}{q-p+1}}\intst{{V^{-\frac{p+\alpha-1}{q-p+1}}\abs{\nabla\varphi}^{\frac{p(q+\alpha)}{q-p+1}}}} +|\alpha|^{-1} \intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}} \right\}^\frac{p-1}{p}\\ \nonumber &\times\pa{\int\int_{S\setminus K}\varphi^su^q V\,d\mu dt}^\frac{(1-\alpha)(p-1)}{qp}\pa{\int\int_{S\setminus K}\varphi^{s-\frac{pq}{q-(1-\alpha)(p-1)}} V^{-\frac{(1-\alpha)(p-1)}{q-(1-\alpha)(p-1)}} \abs{\nabla\varphi}^\frac{pq}{q-(1-\alpha)(p-1)}\,d\mu dt}^\frac{q-(1-\alpha)(p-1)}{qp}\\ \nonumber & +C\left(\int\int_{S\setminus K} u^{q+\alpha}V \varphi^s\, d\mu dt \right)^{\frac1{q+\alpha}}\left(\intst{V^{-\frac 1{q+\alpha-1}}\varphi^{\frac{(s-1)(q+\alpha)-s}{q+\alpha-1}}|\partial_t \varphi|^{\frac{q+\alpha}{q+\alpha-1}}}\right)^{\frac{q+\alpha-1}{q+\alpha}}\,. \end{align} Now inequality \eqref{2.10} immediately follows from the previous relation, by our assumptions on $s,\alpha$ and since $0\leq\varphi\leq1$.
\end{proof} \begin{cor} Under the hypotheses of Lemma \ref{LE_tech2} one has \begin{align} \label{2.11}& \intst{\varphi^su^q V} \\ \nonumber& \leq C\left\{ |\alpha|^{-1-\frac{(p-1)q}{q-p+1}}\intst{{V^{-\frac{p+\alpha-1}{q-p+1}}\abs{\nabla\varphi}^{\frac{p(q+\alpha)}{q-p+1}}}} +|\alpha|^{-1} \intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}} \right\}^\frac{p-1}{p}\\ \nonumber &\times\pa{\int\int_{S\setminus K}\varphi^su^q V\,d\mu dt}^\frac{(1-\alpha)(p-1)}{qp}\pa{\int\int_{S\setminus K}V^{-\frac{(1-\alpha)(p-1)}{q-(1-\alpha)(p-1)}} \abs{\nabla\varphi}^\frac{pq}{q-(1-\alpha)(p-1)}\,d\mu dt}^\frac{q-(1-\alpha)(p-1)}{qp}\\ \nonumber & +C\pa{\abs{\alpha}^{-\frac{(p-1)q}{q-p+1}} \intst{\abs{\nabla\varphi}^{\frac{p\pa{q+\alpha}}{q-p+1}} V^{-\frac{p+\alpha-1}{q-p+1}}} + \intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}}}^{\frac{1}{q+\alpha}}\\ \nonumber &\times\left(\intst{V^{-\frac 1{q+\alpha-1}}|\partial_t \varphi|^{\frac{q+\alpha}{q+\alpha-1}}}\right)^{\frac{q+\alpha-1}{q+\alpha}}\,. \end{align} \end{cor} \begin{proof} The conclusion immediately follows combining \eqref{2.10} and \eqref{1}. \end{proof} \begin{lemma}\label{LE_tech3} Let $s\geq \max\set{1, \frac{q+1}{q-1}, \frac{2 pq}{q-p+1}}$ be fixed.
Then there exists a constant $C>0$ such that for every nonnegative weak solution $u$ of equation \eqref{Eq}, every function $\varphi \in \operatorname{Lip}(S)$ with compact support and $0 \leq \varphi \leq 1$ and every $\alpha\in \pa{-\frac{1}{2}\min\set{1,p-1,q-1,\frac{q-p+1}{p-1}}, 0}$ one has \begin{align} \label{eq47}&\intst{\varphi^su^q V}\\ \nonumber&\quad\leq C \pa{\abs{\alpha}^{-1-\frac{(p-1)q}{(q-p+1)}}\intst{{V^{-\frac{p+\alpha-1}{q-p+1}}\abs{\nabla\varphi}^{\frac{p(q+\alpha)}{q-p+1}}}}+ |\alpha|^{-1} \intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}} }^\frac{p-1}{p}\\ \nonumber&\qquad\times\pa{\int\int_{S\setminus K} V^{-\frac{(1-\alpha)(p-1)}{q-(1-\alpha)(p-1)}} \abs{\nabla\varphi}^\frac{pq}{q-(1-\alpha)(p-1)}\,d\mu dt}^\frac{q-(1-\alpha)(p-1)}{pq}\pa{\int\int_{S\setminus K}\varphi^su^q V\,d\mu dt}^\frac{(1-\alpha)(p-1)}{pq} \\ \nonumber&\qquad+C\left(\int\int_{S\setminus K} \varphi^su^{q}V\,d\mu dt\right)^\frac{1}{q}\pa{\intst{V^{-\frac{1}{q-1}} \abs{\partial_t\varphi}^{\frac{q}{q-1}}}}^{\frac{q-1}{q}}, \end{align} where $S=M\times [0, \infty)$ and $K=\{\pa{x, t}\in S : \varphi(x, t)=1\}$. \end{lemma} \begin{proof} Inequality \eqref{eq47} can be proved in the same way as \eqref{2.10}; the only difference with respect to the above argument is that in this case one has to use inequality \eqref{7} with $\alpha=0$. \end{proof} \section{Proof of Theorems \ref{thm1} and \ref{thm2}}\label{sec3} \begin{proof}[\it Proof of Theorem \ref{thm1}] For any fixed $R>0$ sufficiently large, let $\alpha:=-\frac 1{\log R}$. Fix any $C_1 > \frac{C_0 + \theta_2+1}{\theta_2}$ with $C_0$ and $\theta_2$ as in {\bf HP1}. Define for all $(x,t)\in S$ \begin{equation}\label{cutoff} \varphi(x,t):=\left\{ \begin{array}{ll} \, 1 &\textrm{if\ \ }(x,t)\in E_R\,, \\& \\ \left(\frac{r(x)^{\theta_2}+t^{\theta_1}}{R^{\theta_2}}\right)^{C_1\alpha}& \textrm{if\ \ } (x,t)\in E_R^c\,, \end{array} \right. 
\end{equation} and for all $n\in \mathbb N$ \begin{equation}\label{cutoff2} \eta_n(x,t):=\left\{ \begin{array}{ll} \, 1 &\textrm{if\ \ } (x,t)\in E_{nR}\,, \\& \\ 2- \frac{r(x)^{\theta_2}+t^{\theta_1}}{(nR)^{\theta_2}} & \textrm{if\ \ } (x,t)\in E_{2^{1/\theta_2}nR}\setminus E_{nR}\,,\\& \\ 0 & \textrm{if\ \ } (x,t)\in E_{2^{1/\theta_2}nR}^c\,. \end{array} \right. \end{equation} Let \begin{equation}\label{eq13} \varphi_n(x,t):=\eta_n(x,t)\varphi(x,t)\quad \textrm{for all}\;\, (x,t)\in S\,. \end{equation} We have $\varphi_n\in \operatorname{Lip}(S)$ with $0\leq \varphi_n\leq 1$; furthermore, \begin{align*} \partial_t \varphi_n= \eta_n\partial_t\varphi + \varphi\partial_t \eta_n,\qquad\qquad\nabla \varphi_n= \eta_n\nabla\varphi + \varphi\nabla \eta_n \end{align*} a.e. in $S$, and for every $a\geq 1$ \begin{align*} |\partial_t \varphi_n|^a &\leq 2^{a-1}(|\partial_t \varphi|^a + \varphi^a|\partial_t \eta_n|^a),&|\nabla\varphi_n|^a &\leq 2^{a-1}(|\nabla \varphi|^a + \varphi^a|\nabla \eta_n|^a) \end{align*} a.e. in $S$. 
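The last two pointwise bounds are instances of a standard elementary inequality, recorded here only for the reader's convenience: for $a\geq 1$ the map $t\mapsto t^a$ is convex on $[0,\infty)$, hence for all real $x,y$

```latex
% Convexity of t -> t^a (a >= 1), evaluated at the midpoint (|x|+|y|)/2:
\[
|x+y|^a \,\leq\, \pa{|x|+|y|}^a \,=\, 2^a\pa{\frac{|x|+|y|}{2}}^a
\,\leq\, 2^a\,\frac{|x|^a+|y|^a}{2} \,=\, 2^{a-1}\pa{|x|^a+|y|^a}\,,
\]
% applied with x = eta_n * (d/dt) phi and y = phi * (d/dt) eta_n,
% together with 0 <= eta_n <= 1 (and analogously for the gradient terms).
```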
Now we use $\varphi_n$ in formula \eqref{1}, with any fixed $s \geq \max\set{1, \frac{q}{q-1}, \frac{pq}{q-p+1}}$, and we see that for some positive constant $C$ and for every $n\in \mathbb N$ and every small enough $|\alpha|>0$, we have \begin{align}\label{eq12} & \intst{V u^{q+\alpha} \varphi_n^s} \\ \nonumber&\quad\leq C\set{\abs{\alpha}^{-\frac{(p-1)q}{q-p+1}} \intst{\abs{\nabla \varphi_n}^{\frac{p\pa{q+\alpha}}{q-p+1}}V^{-\frac{p+\alpha-1}{q-p+1}}} + \intst{\abs{\partial_t\varphi_n}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}}}\\ \nonumber&\quad\leq C\Big\{\abs{\alpha}^{-\frac{(p-1)q}{q-p+1}} \intst{\abs{\nabla \varphi}^{\frac{p\pa{q+\alpha}}{q-p+1}}V^{-\frac{p+\alpha-1}{q-p+1}}}\\ \nonumber&\qquad + \abs{\alpha}^{-\frac{(p-1)q}{q-p+1}} \int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}\varphi^{\frac{p\pa{q+\alpha}}{q-p+1}}V^{-\frac{p+\alpha-1}{q-p+1}}|\nabla\eta_n|^{\frac{p\pa{q+\alpha}}{q-p+1}}d\mu dt\\ \nonumber&\qquad + \intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}} + \int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}{\varphi^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}}|\partial_t \eta_n|^{\frac{q+\alpha}{q-1}}d\mu dt \Big\}\\ \nonumber&\quad\leq C \Big\{ \abs{\alpha}^{-\frac{(p-1)q}{q-p+1}}(I_1 + I_2 ) + I_3 + I_4 \Big\}\,, \end{align} where \begin{eqnarray}\label{I1} I_1& := & \intst{\abs{\nabla \varphi}^{\frac{p\pa{q+\alpha}}{q-p+1}}V^{-\frac{p+\alpha-1}{q-p+1}}}\,,\\ \label{I2} I_2& := & \int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}\varphi^{\frac{p\pa{q+\alpha}}{q-p+1}}V^{-\frac{p+\alpha-1}{q-p+1}}|\nabla\eta_n|^{\frac{p\pa{q+\alpha}}{q-p+1}}d\mu dt\,,\\ \label{I3} I_3& := & \intst{\abs{\partial_t\varphi}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}}\,,\\ \label{I4} I_4& := &\int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}{\varphi^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}}|\partial_t \eta_n|^{\frac{q+\alpha}{q-1}}d\mu dt\,. 
\end{eqnarray} In view of \eqref{cutoff} and \eqref{cutoff2} and assumption {\bf HP1}-$(ii)$ (see \eqref{hp1b}) with $\varepsilon=-\frac{\alpha}{q-p+1}>0$, for every $n\in \mathbb N$ and every small enough $|\alpha|>0$ we get \begin{eqnarray} \label{eI2} I_2& \leq & \int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}\left[\theta_2 \left(\frac 1{nR}\right)^{\theta_2}r(x)^{\theta_2-1}|\nabla r(x)|\right]^{\frac{p(q+\alpha)}{q-p+1}}n^{C_1\theta_2\alpha\frac{p(q+\alpha)}{q-p+1}}V^{-\frac{p+\alpha-1}{q-p+1}}d\mu dt\\ \nonumber&\leq & C (nR)^{-\frac{\theta_2p(q+\alpha)}{q-p+1}}n^{C_1\theta_2\alpha\frac{p(q+\alpha)}{q-p+1}}\int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}r(x)^{(\theta_2-1)\frac{p(q+\alpha)}{q-p+1}}V^{-\frac{p+\alpha-1}{q-p+1}}d\mu dt\\ \nonumber&\leq & C (nR)^{-\frac{\theta_2p(q+\alpha)}{q-p+1}}n^{C_1\theta_2\alpha\frac{p(q+\alpha)}{q-p+1}} (nR)^{\frac{\theta_2 p q}{q-p+1}+C_0\frac{|\alpha|}{q-p+1}}(\log (nR))^{s_4}\,. \end{eqnarray} Now note that for any constant $\bar C\in\mathbb{R}$ and for $R>0$ and $\alpha=-\frac 1{\log R}$ we have \begin{equation}\label{eq9} R^{|\alpha| \bar C}=e^{|\alpha|\bar C\log R}=e^{\bar C}\leq C\,. \end{equation} Thus, also using the fact that \[\frac{|\alpha|[\theta_2 p - C_1\theta_2 p(q+\alpha)+C_0]}{q-p+1}\leq - \frac{|\alpha|}{q-p+1}\,<\,0\,,\] from \eqref{eI2} we deduce \begin{equation}\label{e2I2} I_2 \leq C n^{-\frac{|\alpha|}{q-p+1}}[\log(n R)]^{s_4}\,. \end{equation} In a similar way we can estimate $I_4$, using {\bf HP1}-$(i)$ (see \eqref{hp1a}). 
Indeed, for $R>0$ large enough, \begin{eqnarray} \label{eI4} I_4&\leq & \int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}\left(\frac{\theta_1}{(n R)^{\theta_2}} t^{\theta_1-1}\right)^{\frac{q+\alpha}{q-1}}n^{C_1\theta_2\alpha\frac{q+\alpha}{q-1}} V^{\frac{-1+|\alpha|}{q-1}} d\mu dt \\ \nonumber& \leq & C (n R)^{-\theta_2\frac{q+\alpha}{q-1}} n^{C_1\theta_2\alpha\frac{q+\alpha}{q-1}}(n R)^{\frac{\theta_2 q+C_0|\alpha|}{q-1}}[\log(nR)]^{s_2}\\ \nonumber&\leq & C n^{\frac{\alpha}{q-1}\left(-\theta_2 + C_1\theta_2 q- C_0 +C_1\theta_2 \alpha\right)}[\log(nR)]^{s_2}\,\,\leq\,\, C n^{-\frac{|\alpha|}{q-1}}[\log(nR)]^{s_2}\,. \end{eqnarray} In order to estimate $I_1$ we observe that if $f:[0,\infty)\to [0,\infty)$ is a nonincreasing function and if {\bf HP1}-$(ii)$ holds (see \eqref{hp1b}), then \begin{equation}\label{eq6} \int\int_{E_R^c} f\big([r(x)^{\theta_2}+t^{\theta_1}]^{\frac1{\theta_2}}\big)r(x)^{(\theta_2-1)p\left(\frac q{q-p+1}-\varepsilon\right)}V^{-\frac{p-1}{q-p+1}+\varepsilon}d\mu dt\leq C\int_{R/2^{1/\theta_2}}^{\infty} f(z) z^{\bar s_3+ C_0 \varepsilon -1}(\log z)^{s_4} dz\,, \end{equation} for every $0<\varepsilon<\varepsilon_0$ and $R>0$ large enough. This can be shown by minor variations in the proof of \cite[formula (2.19)]{GrigS}. Now, since for a.e. $x\in M$ we have $|\nabla r(x)|\leq 1$, we obtain for a.e. $(x,t)\in S$ \begin{equation}\label{eq52} |\nabla \varphi(x,t)|\leq C_1 |\alpha|\theta_2\left(\frac{r(x)^{\theta_2}+t^{\theta_1}}{R^{\theta_2}} \right)^{C_1\alpha-1}\frac{r(x)^{\theta_2-1}}{R^{\theta_2}}\,. 
\end{equation} Thus, using \eqref{eq9} for every sufficiently large $R>0$ we get \begin{eqnarray*} |\alpha|^{-\frac{(p-1)q}{q-p+1}} I_1 & \leq & C |\alpha|^{-\frac{(p-1)q}{q-p+1}}\int\int_{E_R^c} V^{-\frac{p+\alpha-1}{q-p+1}}\left[ C_1 |\alpha|\theta_2\left(\frac{r(x)^{\theta_2}+t^{\theta_1}}{R^{\theta_2}} \right)^{C_1\alpha-1}\frac{r(x)^{\theta_2-1}}{R^{\theta_2}}\right]^{\frac{p(q+\alpha)}{q-p+1}}d\mu dt\\ &\leq & C |\alpha|^{\frac{p(q+\alpha)-(p-1)q}{q-p+1}}\int\int_{E_R^c}\left\{\left[r(x)^{\theta_2}+t^{\theta_1}\right]^{\frac 1{\theta_2}}\right\}^{\theta_2\frac{(C_1\alpha-1)p(q+\alpha)}{q-p+1}}r(x)^{(\theta_2-1)p\frac{(q+\alpha)}{q-p+1}}V^{-\frac{p+\alpha-1}{q-p+1}}d\mu dt\,. \end{eqnarray*} Now, using \eqref{eq6} with $\varepsilon=\frac{|\alpha|}{q-p+1}$, \begin{equation}\label{eq7} |\alpha|^{-\frac{(p-1)q}{q-p+1}} I_1 \leq C |\alpha|^{\frac{p(q+\alpha)-(p-1)q}{q-p+1}} \int_{R/2^{1/\theta_2}}^{\infty} z^{\theta_2(C_1\alpha -1)\frac{p(q+\alpha)}{q-p+1}+\bar s_3 + C_0\frac{|\alpha|}{q-p+1}-1}(\log z)^{s_4} dz\,. \end{equation} By our choice of $C_1$ and by the very definition of $\bar s_3$ we have \begin{equation*} b:=\theta_2(C_1\alpha -1)\frac{p(q+\alpha)}{q-p+1}+\bar s_3 + C_0\frac{|\alpha|}{q-p+1}\leq -\frac{|\alpha|}{q-p+1}\,. \end{equation*} Then using the change of variable $y:=|b|\log z$ in the right hand side of \eqref{eq7} we obtain for $|\alpha|>0$ small enough \begin{eqnarray}\label{eq8} |\alpha|^{-\frac{(p-1)q}{q-p+1}} I_1 & \leq & C |\alpha|^{\frac{p(q+\alpha)-(p-1)q}{q-p+1}}\int_0^\infty e^{-y}\left(\frac{y}{|b|}\right)^{s_4}\frac{dy}{|b|}\\ \nonumber &\leq & C |\alpha|^{\frac{p(q+\alpha)-(p-1)q}{q-p+1}-s_4-1}\, \leq \, C |\alpha|^{\frac{p-1}{q-p+1}-s_4}\,. \end{eqnarray} The term $I_3$ can be estimated similarly. 
Indeed, we start by noting that if $f:[0,\infty)\to [0,\infty)$ is a nonincreasing function and if {\bf HP1}-$(i)$ holds (see \eqref{hp1a}), then for any sufficiently small $\varepsilon>0$ and every large enough $R>0$ we get \begin{equation}\label{eq8a} \int\int_{E_R^c} f\big([r(x)^{\theta_2}+t^{\theta_1}]^{\frac 1{\theta_2}} \big)t^{(\theta_1-1)\left(\frac q{q-1}-\varepsilon \right)} V^{-\frac 1{q-1}+\varepsilon} d\mu dt\leq C \int_{R/2^{1/\theta_2}}^{\infty} f(z) z^{\bar s_1 + C_0\varepsilon -1}(\log z)^{s_2}dz\,; \end{equation} this can be shown by minor changes in the proof of \cite[formula (2.19)]{GrigS}. Since for a.e. $(x,t)\in S$ \begin{equation}\label{eq30} |\partial_t \varphi(x,t)|\leq C_1 |\alpha|\theta_1\left(\frac{r(x)^{\theta_2}+t^{\theta_1}}{R^{\theta_2}} \right)^{C_1\alpha-1}\frac{t^{\theta_1-1}}{R^{\theta_2}}, \end{equation} also using \eqref{eq9}, we have for every $R>0$ large enough \begin{eqnarray*} I_3& \leq & C |\alpha|^{\frac{q+\alpha}{q-1}}\int\int_{E_R^c}\left[(r(x)^{\theta_2}+t^{\theta_1} )^{\frac 1{\theta_2}}\right]^{\theta_2(C_1\alpha-1)\frac{q+\alpha}{q-1}}t^{(\theta_1-1)\frac{q+\alpha}{q-1}}V^{-\frac{1+\alpha}{q-1}}d\mu dt\,. \end{eqnarray*} Now due to \eqref{eq8a} with $\varepsilon=\frac{|\alpha|}{q-1}$ \begin{equation}\label{eq10} I_3 \leq C |\alpha|^{\frac{q+\alpha}{q-1}} \int_{R/2^{1/\theta_2}}^\infty z^{\theta_2(C_1\alpha -1)\frac{q+\alpha}{q-1}+ C_0\frac{|\alpha|}{q-1}+\bar s_1-1}(\log z)^{s_2} dz\,. \end{equation} By our choice of $C_1$ and the very definition of $\bar s_1$ we have that \[\beta:= \theta_2(C_1\alpha -1 )\frac{q+\alpha}{q-1}+\bar s_1 + C_0\frac{|\alpha|}{q-1}\leq -\frac{|\alpha|}{q-1}\,.\] Using the change of variable $y:=|\beta|\log z$ in \eqref{eq10} we obtain \begin{eqnarray}\label{eq11} I_3 & \leq & C|\alpha|^{\frac q{q-1}}\int_0^\infty e^{-y}\left(\frac{y}{|\beta|} \right)^{s_2} \frac{dy}{|\beta|}\\ \nonumber &\leq & C |\alpha|^{\frac 1{q-1}-s_2}\,. 
\end{eqnarray} Inserting \eqref{e2I2}, \eqref{eI4}, \eqref{eq8} and \eqref{eq11} into \eqref{eq12} we obtain for every $n\in \mathbb N$ and every sufficiently large $R>0$ \begin{eqnarray*} \int\int_{E_R} u^{q+\alpha} V d\mu dt & \leq & \intst{u^{q+\alpha}\varphi_n^s V} \\ &\leq & C \left(|\alpha|^{\frac{p-1}{q-p+1}-s_4} + |\alpha|^{-\frac{(p-1)q}{q-p+1}} n^{-\frac{|\alpha|}{q-p+1}}[\log(n R)]^{s_4}+|\alpha|^{\frac 1{q-1}-s_2}+n^{-\frac{|\alpha|}{q-1}}[\log(nR)]^{s_2}\right) \end{eqnarray*} with $C$ independent of $n$ and $R$\,. Passing to the $\liminf$ as $n\to\infty$ we deduce that \begin{equation*} \int\int_{E_R} u^{q+\alpha} V d\mu dt \leq C \left(|\alpha|^{\frac{p-1}{q-p+1}-s_4} + |\alpha|^{\frac 1{q-1}-s_2}\right)\,. \end{equation*} Therefore, letting $R\to \infty$ (and thus $\alpha\to 0$), by Fatou's lemma, we have \begin{equation*} \intst{ u^{q} V } =0 \end{equation*} in view of our assumptions on $s_2,s_4$, which concludes the proof. \end{proof} \begin{proof}[\it Proof of Theorem \ref{thm2}] We claim that $u^q \in L^1\pa{S, V d\mu dt}$. To see this, we will show that \begin{equation}\label{eq17} \intst{u^q V} \leq A\pa{\intst{u^q V}}^\sigma + B \end{equation} for some constants $A>0$, $B>0$, $0<\sigma<1$. In order to prove \eqref{eq17} we consider \eqref{2.11} with $\varphi$ replaced by the family of functions $\varphi_n$ defined in \eqref{eq13}, for any fixed $s\geq \max\set{1, \frac{q+1}{q-1}, \frac{2 pq}{q-p+1}}$ and $C_1 >\max\left\{\frac{1+C_0+\theta_2}{\theta_2},\frac{2(C_0+1)}{\theta_2(q-p+1)},\frac{2(C_0+1)}{\theta_2q(q-1)}\right\}$ with $C_0$, $\theta_2$ as in {\bf HP2} and with $R>0$ sufficiently large and $\alpha=-\frac{1}{\log R}$. 
Thus we have \begin{align} \label{eq14}& \intst{\varphi_n^su^q V} \\ \nonumber& \leq C\left( |\alpha|^{-1-\frac{(p-1)q}{q-p+1}}\intst{{V^{-\frac{p+\alpha-1}{q-p+1}}\abs{\nabla\varphi_n}^{\frac{p(q+\alpha)}{q-p+1}}}} \right)^\frac{p-1}{p}\pa{\int\int_{E^c_R}\varphi_n^su^q V\,d\mu dt}^\frac{(1-\alpha)(p-1)}{qp}\\ \nonumber&\quad\times\pa{\int\int_{E^c_R}V^{-\frac{(1-\alpha)(p-1)}{q-(1-\alpha)(p-1)}} \abs{\nabla\varphi_n}^\frac{pq}{q-(1-\alpha)(p-1)}\,d\mu dt}^\frac{q-(1-\alpha)(p-1)}{qp}\\ \nonumber &\quad + C\left(|\alpha|^{-1} \intst{\abs{\partial_t\varphi_n}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}} \right)^\frac{p-1}{p}\pa{\int\int_{E^c_R}\varphi_n^su^q V\,d\mu dt}^\frac{(1-\alpha)(p-1)}{qp}\\ \nonumber&\quad\times\pa{\int\int_{E^c_R}V^{-\frac{(1-\alpha)(p-1)}{q-(1-\alpha)(p-1)}} \abs{\nabla\varphi_n}^\frac{pq}{q-(1-\alpha)(p-1)}\,d\mu dt}^\frac{q-(1-\alpha)(p-1)}{qp}\\ \nonumber &\quad +C\pa{\abs{\alpha}^{-\frac{(p-1)q}{q-p+1}} \intst{\abs{\nabla\varphi_n}^{\frac{p\pa{q+\alpha}}{q-p+1}} V^{-\frac{p+\alpha-1}{q-p+1}}} + \intst{\abs{\partial_t\varphi_n}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}}}^{\frac{1}{q+\alpha}}\\ \nonumber &\quad\times\left(\intst{V^{-\frac 1{q+\alpha-1}}|\partial_t \varphi_n|^{\frac{q+\alpha}{q+\alpha-1}}}\right)^{\frac{q+\alpha-1}{q+\alpha}}\,. 
\end{align} Let us prove that for $R>0$ large enough, and thus for $|\alpha|=\frac{1}{\log R}$ sufficiently small, \begin{eqnarray} \label{eq15a} \limsup_{n\to\infty}\left(|\alpha|^{-\frac{(p-1)q}{q-p+1}}J_1\right)&\leq& C, \\ \label{eq15} \limsup_{n\to\infty}\left(|\alpha|^{-\frac{(p-1)q}{q-(1-\alpha)(p-1)}}J_3\right)&\leq& C, \\ \label{eq16} \limsup_{n\to\infty}J_2 &\leq& C, \\ \label{eq18} \limsup_{n\to\infty}J_4 &\leq& C, \end{eqnarray} for some $C>0$ independent of $\alpha$, where \begin{eqnarray}\label{J1} J_1 &:=&\intst{{V^{-\frac{p+\alpha-1}{q-p+1}}\abs{\nabla\varphi_n}^{\frac{p(q+\alpha)}{q-p+1}}}}, \\ \label{J2} J_2 &:=& \intst{\abs{\partial_t\varphi_n}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}}, \\ \label{J3} J_3 &:=& \int\int_{E^c_R}V^{-\frac{(1-\alpha)(p-1)}{q-(1-\alpha)(p-1)}} \abs{\nabla\varphi_n}^\frac{pq}{q-(1-\alpha)(p-1)}\,d\mu dt, \\ \label{J4} J_4 &:=& \intst{V^{-\frac 1{q+\alpha-1}}|\partial_t \varphi_n|^{\frac{q+\alpha}{q+\alpha-1}}}. \end{eqnarray} Note that \begin{equation}\label{eq19} J_1 \leq C(I_1+I_2), \end{equation} with $I_1$ and $I_2$ defined in \eqref{I1} and \eqref{I2}, respectively. Due to \eqref{hp2b} in {\bf HP2}-$(ii)$, by the same arguments used to obtain \eqref{eq8} and \eqref{e2I2} with $s_4$ replaced by $\bar s_4$, for every $n\in \mathbb N, R>0$ large enough and $\alpha=-\frac 1{\log R}$ we have \[|\alpha|^{-\frac{(p-1)q}{q-p+1}} J_1 \leq C\left(1+ |\alpha|^{-\frac{(p-1)q}{q-p+1}} n^{-\frac{|\alpha|}{q-p+1}}[\log(nR)]^{\bar s_4}\right)\,. \] Letting $n\to \infty$ we get \eqref{eq15a}. Next we observe that \begin{equation} J_2 \leq C(I_3+I_4), \end{equation} with $I_3$ and $I_4$ defined in \eqref{I3} and \eqref{I4}, respectively. 
By the same computations used to obtain \eqref{eq11} and \eqref{eI4}, with $s_2$ replaced by $\bar s_2$, we have for every $n\in \mathbb N, R>0$ large enough and $\alpha=-\frac 1{\log R}$ \[J_2 \leq C\left(1+ n^{-\frac{|\alpha|}{q-1}}[\log(nR)]^{\bar s_2}\right)\,.\] Again, letting $n\to \infty$ we obtain \eqref{eq16}. We now proceed to estimate $J_4$; note that \begin{equation}\label{eq21} J_4 \leq C(I_5+I_6), \end{equation} where \begin{eqnarray} I_5 &:=& \intst{V^{-\frac 1{q+\alpha-1}}|\partial_t \varphi|^{\frac{q+\alpha}{q+\alpha-1}}}\\ I_6 &:=& \intst{V^{-\frac 1{q+\alpha-1}}\varphi^{\frac{q+\alpha}{q+\alpha-1}}|\partial_t \eta_n|^{\frac{q+\alpha}{q+\alpha-1}}}. \end{eqnarray} Due to \eqref{eq30} and \eqref{eq9}, we have for every $R>0$ large enough \begin{align}\label{eq23} I_5 \leq C |\alpha|^{\frac{q+\alpha}{q+\alpha-1}} \int\int_{E_R^c}\Big[(r(x)^{\theta_2}+t^{\theta_1})^{\frac 1{\theta_2}}&\Big]^{\theta_2(C_1\alpha-1)\left(\frac{q}{q-1}+\frac{|\alpha|}{(q+\alpha-1)(q-1)}\right)} \\ \nonumber & \times t^{(\theta_1-1)\left(\frac{q}{q-1}+\frac{|\alpha|}{(q+\alpha-1)(q-1)}\right)} V^{-\frac{1}{q-1}-\frac{|\alpha|}{(q+\alpha-1)(q-1)}}d\mu dt\,. \end{align} Note that if $f:[0,\infty)\to [0,\infty)$ is a nonincreasing function and if \eqref{hp2aa} in {\bf HP2}-$(i)$ holds, then for any sufficiently small $\varepsilon>0$ and every large enough $R>0$ we get \begin{equation}\label{eq22} \int\int_{E_R^c} f\big([r(x)^{\theta_2}+t^{\theta_1}]^{\frac 1{\theta_2}} \big)t^{(\theta_1-1)\left(\frac q{q-1}+\varepsilon \right)} V^{-\frac 1{q-1}-\varepsilon} d\mu dt\leq C \int_{R/2^{1/\theta_2}}^{\infty} f(z) z^{\bar s_1 + C_0\varepsilon -1}(\log z)^{\bar s_2}dz\,; \end{equation} this can be shown by minor changes in the proof of \cite[formula (2.19)]{GrigS}. 
Now due to \eqref{eq23}, \eqref{eq22} with $\varepsilon=\frac{|\alpha|}{(q+\alpha-1)(q-1)}$ \begin{equation}\label{eq24} I_5 \leq C |\alpha|^{\frac{q+\alpha}{q+\alpha-1}} \int_{R/2^{1/\theta_2}}^\infty z^{\theta_2(C_1\alpha -1)\frac{q+\alpha}{q+\alpha-1}+ C_0\frac{|\alpha|}{(q+\alpha-1)(q-1)}+\bar s_1-1}(\log z)^{\bar s_2} dz\,. \end{equation} By our choice of $C_1$ and the very definition of $\bar s_1$, we have for sufficiently small $|\alpha|>0$ \[\gamma:=\theta_2(C_1\alpha -1)\frac{q+\alpha}{q+\alpha-1}+ C_0\frac{|\alpha|}{(q+\alpha-1)(q-1)}+\bar s_1<-\frac{|\alpha|}{(q-1)^2}\,. \] Using the change of variable $y:=|\gamma|\log z$ in the right hand side of \eqref{eq24}, due to the very definition of $\bar s_2$ we obtain for every $R>0$ large enough \begin{equation}\label{eq25} I_5 \leq C|\alpha|^{\frac {q+\alpha}{q+\alpha-1}}\int_0^\infty e^{-y}\left(\frac{y}{|\gamma|} \right)^{\bar s_2} \frac{dy}{|\gamma|}\leq C \,. \end{equation} Moreover, using \eqref{eq9} and \eqref{hp2aa} in {\bf HP2}, for every $n\in\mathbb{N}$ and $|\alpha|>0$ sufficiently small, we have \begin{eqnarray} \label{eq26} I_6&\leq & \int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}\left(\frac{\theta_1}{(n R)^{\theta_2}} t^{\theta_1-1}\right)^{\frac{q+\alpha}{q+\alpha-1}}n^{C_1\theta_2\alpha\frac{q+\alpha}{q+\alpha-1}} V^{-\frac{1}{q-1}-\frac{|\alpha|}{(q+\alpha-1)(q-1)}} d\mu dt \\ \nonumber& \leq & C (n R)^{-\theta_2\frac{q+\alpha}{q+\alpha-1}} n^{C_1\theta_2\alpha\frac{q+\alpha}{q+\alpha-1}}(n R)^{\frac{\theta_2 q}{q-1}+ C_0\frac{|\alpha|}{(q+\alpha-1)(q-1)}}[\log(nR)]^{\bar s_2}\\ \nonumber&\leq & C n^{-\frac{|\alpha|}{(q-1)^2}}[\log(nR)]^{\bar s_2}\,. \end{eqnarray} In view of \eqref{eq21}, \eqref{eq25}, \eqref{eq26} we obtain \[J_4\leq C \left( 1 + n^{-\frac{|\alpha|}{(q-1)^2}}[\log(nR)]^{\bar s_2} \right)\,.\] Letting $n\to \infty$ we get \eqref{eq18}. 
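The change-of-variable step just carried out for $I_5$, and used earlier for $I_1$ and $I_3$, is the same in every occurrence; we record it once schematically. For any exponent $c<0$ and any power $s\geq 0$,

```latex
% Substituting y = |c| log z, so that z = e^{y/|c|} and dz = e^{y/|c|} dy/|c|,
% the integrand becomes z^{c-1}(log z)^s dz = e^{-y} (y/|c|)^s dy/|c| since c<0:
\[
\int_{R'}^{\infty} z^{c-1}(\log z)^{s}\,dz
\;=\;\int_{|c|\log R'}^{\infty} e^{-y}\pa{\frac{y}{|c|}}^{s}\frac{dy}{|c|}
\;\leq\; |c|^{-s-1}\int_0^\infty e^{-y}y^{s}\,dy
\;=\; |c|^{-s-1}\,\Gamma(s+1)\,,
\]
% which is the bound applied with c = b, beta, gamma (and below with c = a).
```

This is only a restatement of the computations already performed, with the tail integral bounded by the full Gamma integral.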
In order to estimate the integral $J_3$ we start by defining $\Lambda = \frac{(p-1)q |\alpha|}{(q-p+1)\sq{q-\pa{1-\alpha}\pa{p-1}}}$, and we note that \begin{equation}\label{eq40} \frac{(p-1)q}{\pa{q-p+1}^2}|\alpha|<\Lambda<\frac{2(p-1)q}{\pa{q-p+1}^2}|\alpha|<\varepsilon^* \end{equation} for every small enough $|\alpha|>0$, and that \[\frac{(1-\alpha)(p-1)}{q-\pa{1-\alpha}\pa{p-1}}=\bar s_4+\Lambda \qquad\text{ and }\qquad\frac{p q}{q-\pa{1-\alpha}\pa{p-1}}=\frac{\bar s_3}{\theta_2} +\Lambda p\,. \] By our definition of the functions $\varphi_n$, for every $n\in\mathbb{N}$ and every small enough $|\alpha|>0$ we have \begin{align} \label{eq41} J_3& =\intst{V^{-\bar s_4-\Lambda}\abs{\nabla \varphi_n}^{\frac{\bar s_3}{\theta_2}+\Lambda p}}\\ \nonumber & \leq C\sq{\intst{V^{-\bar s_4-\Lambda}{\eta_n}^{\frac{\bar s_3}{\theta_2}+\Lambda p}\abs{\nabla \varphi}^{\frac{\bar s_3}{\theta_2}+\Lambda p}} +\intst{V^{-\bar s_4-\Lambda} \varphi^{\frac{\bar s_3}{\theta_2}+\Lambda p}\abs{\nabla \eta_n}^{\frac{\bar s_3}{\theta_2}+\Lambda p}}}\\ \nonumber &\leq C\sq{\int\int_{E_R^c} V^{-\bar s_4-\Lambda} \abs{\nabla \varphi}^{\frac{\bar s_3}{\theta_2}+\Lambda p}\, d\mu dt +\int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}} V^{-\bar s_4-\Lambda} \varphi^{\frac{\bar s_3}{\theta_2}+\Lambda p}\abs{\nabla\eta_n}^{\frac{\bar s_3}{\theta_2}+\Lambda p}\,d\mu dt}\\ \nonumber &:=C\pa{I_7+I_8}. 
\end{align} Now we use condition \eqref{hp2bb} in {\bf HP2}-$(ii)$ with $\varepsilon=\Lambda$, and we obtain for every $n\in\mathbb{N}$ and $R>0$ large enough \begin{align*} I_8 & = \int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}} V^{-\bar s_4-\Lambda} \varphi^{\frac{\bar s_3}{\theta_2}+\Lambda p}\abs{\nabla \eta_n}^{\frac{\bar s_3}{\theta_2}+\Lambda p}\,d\mu dt \\ &\leq \pa{\sup_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}\varphi}^{\frac{\bar s_3}{\theta_2}+\Lambda p}\int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}\pa{\frac{\theta_2 r(x)^{\theta_2-1}} {(nR)^{\theta_2}}}^{\frac{\bar s_3}{\theta_2}+\Lambda p} V^{-\bar s_4-\Lambda}\,d\mu dt \\ &\leq C n^{\frac{C_1\theta_2\alpha pq}{q-(1-\alpha)(p-1)}}(nR)^{-\frac{\theta_2 pq}{q-(1-\alpha)(p-1)}}\int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}} r(x)^{(\theta_2-1)p\left(\frac{q}{q-p+1}+\Lambda\right)}V^{-\frac{p-1}{q-p+1}-\Lambda}d\mu dt \\ &\leq C n^{\frac{C_1\theta_2\alpha pq}{q-(1-\alpha)(p-1)}}(nR)^{-\frac{\theta_2 pq}{q-(1-\alpha)(p-1)}} (nR)^{\frac{\theta_2 pq}{q-p+1} + C_0\Lambda} (\log(nR))^{\bar s_4} . \end{align*} By our definition of $C_1,\Lambda$ and by relation \eqref{eq40} we easily find \begin{align} \label{eq42} & \frac{C_1\theta_2\alpha pq}{q-(1-\alpha)(p-1)} -\frac{\theta_2 pq}{q-(1-\alpha)(p-1)}+ \frac{\theta_2 pq}{q-p+1} + C_0\Lambda \\ \nonumber & \qquad\qquad < \frac{pq \alpha C_1\theta_2}{q-\pa{1-\alpha}\pa{p-1}}-\frac{ \alpha q(p-1)C_0}{\sq{q-\pa{1-\alpha}\pa{p-1}}\pa{q-p+1}} \\ \nonumber& \qquad\qquad \leq \frac{q \alpha p}{\sq{q-\pa{1-\alpha}\pa{p-1}}\pa{q-p+1}} < \frac{q \alpha p}{\pa{q-p+1}^2} <0, \end{align} for any small enough $|\alpha|>0$. Moreover by \eqref{eq9}, since $\alpha=-\frac{1}{\log R}$, we have \[ R^{-\frac{\theta_2 pq}{q-(1-\alpha)(p-1)}+\frac{\theta_2 pq}{q-p+1} + C_0\Lambda }\leq C\,. 
\] Thus, for any sufficiently large $R>0$ and every $n\in\mathbb{N}$, \begin{equation}\label{eq44} I_8 \leq C n^{ \frac{q \alpha p}{\pa{q-p+1}^2}}\pa{\log\pa{nR}}^{\bar s_4}. \end{equation} In order to estimate $I_7$ we observe that if $f:[0,\infty)\to [0,\infty)$ is a nonincreasing function and if {\bf HP2}-$(ii)$ holds (see \eqref{hp2bb}), then \begin{equation}\label{eq43} \int\int_{E_R^c} f\big([r(x)^{\theta_2}+t^{\theta_1}]^{\frac1{\theta_2}}\big)r(x)^{(\theta_2-1)p\left(\frac q{q-p+1}+\varepsilon\right)}V^{-\frac{p-1}{q-p+1}-\varepsilon}\,d\mu dt\leq C\int_{R/2^{1/\theta_2}}^{\infty} f(z) z^{\bar s_3+ C_0 \varepsilon -1}(\log z)^{\bar s_4}\, dz\,, \end{equation} for every $0<\varepsilon<\varepsilon_0$ and $R>0$ large enough. This can again be shown by minor variations in the proof of \cite[formula (2.19)]{GrigS}. Thus, similarly to \eqref{eq23} and \eqref{eq24}, using \eqref{eq9}, \eqref{eq40} and \eqref{eq43}, we have for $R>0$ large enough and $\alpha=-\frac{1}{\log R}$ \begin{align} \label{eq45}I_7 & \leq C |\alpha|^{\frac{p q}{q-(1-\alpha)(p-1)}}\int\int_{E_R^c} \big[(r(x)^{\theta_2}+t^{\theta_1})^{\frac 1{\theta_2}} \big]^{\theta_2 (C_1\alpha -1)\frac{p q}{q-(1-\alpha)(p-1)}}r(x)^{(\theta_2-1)p\left(\frac q{q-p+1}+\Lambda \right)} V^{-\frac{p-1}{q-p+1}-\Lambda} \, d\mu dt \\ \nonumber & \leq C |\alpha|^{\frac{p q}{q-(1-\alpha)(p-1)}}\int_{R/2^{1/\theta_2}}^\infty z^{\theta_2 (C_1\alpha -1)\frac{p q}{q-(1-\alpha)(p-1)}+\bar{s}_3+C_0\Lambda-1}(\log z)^{\bar{s}_4}\,dz. 
\end{align} By our choice of $C_1$ and the definition of $\bar{s}_3$ and $\Lambda$ we have \[ a:=\theta_2 (C_1\alpha -1)\frac{p q}{q-(1-\alpha)(p-1)}+\bar{s}_3+C_0\Lambda<-\frac{qp|\alpha|}{(q-p+1)^2}<0, \] thus using the change of variable $y=|a|\log z$ in the last integral in \eqref{eq45} we obtain \begin{align} \label{eq46}I_7 &\leq\,\, C |\alpha|^{\frac{p q}{q-(1-\alpha)(p-1)}}\int_0^\infty e^{-y}\left(\frac{y}{|a|}\right)^{\bar{s}_4}\,\frac{dy}{|a|}\\ \nonumber&\leq\,\, C |\alpha|^{\frac{p q}{q-(1-\alpha)(p-1)}-\bar{s}_4-1}\,\,=\,\,C |\alpha|^{\frac{(p-1)q}{q-(1-\alpha)(p-1)}+\frac{(p-1)q|\alpha |}{(q-p+1)(q-(1-\alpha)(p-1))}}\,. \end{align} Thus for any sufficiently large $R>0$ and every $n\in\mathbb{N}$, by \eqref{eq41}, \eqref{eq44} and \eqref{eq46} \[ |\alpha|^{-\frac{(p-1)q}{q-(1-\alpha)(p-1)}}J_3\leq C|\alpha|^{-\frac{(p-1)q}{q-(1-\alpha)(p-1)}}\left(|\alpha|^{\frac{(p-1)q}{q-(1-\alpha)(p-1)}+\frac{(p-1)q|\alpha |}{(q-p+1)(q-(1-\alpha)(p-1))}}+n^{ \frac{q \alpha p}{\pa{q-p+1}^2}}\pa{\log\pa{nR}}^{\bar s_4}\right)\,. \] Letting $n\to \infty$, for every $R>0$ large enough and $\alpha=-\frac{1}{\log R}$ we obtain \[ |\alpha|^{-\frac{(p-1)q}{q-(1-\alpha)(p-1)}}J_3\leq C|\alpha|^{\frac{(p-1)q|\alpha |}{(q-p+1)(q-(1-\alpha)(p-1))}}\leq C, \] that is \eqref{eq15}. Now using \eqref{eq15a}--\eqref{eq18} in \eqref{eq14}, since $\varphi_n\equiv1$ on $E_R$ and $0\leq\varphi_n\leq1$ on $M\times[0,\infty)$, for every $R>0$ large enough we have \[ \int\int_{E_R}u^qV\,d\mu dt\,\,\leq\,\,\limsup_{n\to\infty}\left(\intst{\varphi_n^su^qV}\right)\,\,\leq A\left(\intst{u^qV}\right)^\sigma+B \] for some positive constants $A,B$ and $\sigma\in(0,1)$. Passing to the limit as $R\to\infty$ we obtain \eqref{eq17}, and hence we conclude that $u^q \in L^1\pa{S, V d\mu dt}$ as claimed. Next we want to show that \[ \intst{u^qV}=0, \] and thus that $u=0$ a.e., since $V>0$ a.e. on $M\times[0,\infty)$. 
To this aim, we consider \eqref{eq47} with $\varphi$ replaced by the family of functions $\varphi_n$. Since $\varphi_n\equiv1$ on $E_R$ and since $0\leq\varphi_n\leq1$ on $M\times[0,\infty)$, for every $n\in\mathbb{N}$, every $R>0$ large enough and $\alpha=-\frac{1}{\log R}$ we have \begin{align} \label{eq48}&\int\int_{E_R}u^q V\,d\mu dt\,\,\leq\,\,\intst{\varphi_n^su^q V}\\ \nonumber&\quad\leq C \pa{\abs{\alpha}^{-1-\frac{(p-1)q}{(q-p+1)}}\intst{{V^{-\frac{p+\alpha-1}{q-p+1}}\abs{\nabla\varphi_n}^{\frac{p(q+\alpha)}{q-p+1}}}}+|\alpha|^{-1} \intst{\abs{\partial_t\varphi_n}^{\frac{q+\alpha}{q-1}} V^{-\frac{\alpha+1}{q-1}}} }^\frac{p-1}{p}\\ \nonumber&\qquad\times\pa{\int\int_{E^c_R} V^{-\frac{(1-\alpha)(p-1)}{q-(1-\alpha)(p-1)}} \abs{\nabla\varphi_n}^\frac{pq}{q-(1-\alpha)(p-1)}\,d\mu dt}^\frac{q-(1-\alpha)(p-1)}{pq}\pa{\int\int_{E^c_R}u^q V\,d\mu dt}^\frac{(1-\alpha)(p-1)}{pq} \\ \nonumber&\qquad+C\left(\int\int_{E^c_R}u^{q}V\,d\mu dt\right)^\frac{1}{q}\pa{\intst{V^{-\frac{1}{q-1}}\abs{\partial_t\varphi_n}^{\frac{q}{q-1}}}}^{\frac{q-1}{q}}. \end{align} Now we claim that for $R>0$ sufficiently large \begin{equation}\label{eq50} \limsup_{n\to\infty}J_5\leq C, \end{equation} where \[ J_5:=\intst{V^{-\frac{1}{q-1}}\abs{\partial_t\varphi_n}^{\frac{q}{q-1}}}. \] This can be shown similarly to inequality \eqref{eq18}. Indeed \begin{equation}\label{eq71} J_5 \leq C(I_9+I_{10}), \end{equation} where \begin{equation*} I_9 \,:=\, \intst{V^{-\frac 1{q-1}}|\partial_t \varphi|^{\frac{q}{q-1}}}\,,\qquad I_{10}\, :=\, \intst{V^{-\frac 1{q-1}}\varphi^{\frac{q}{q-1}}|\partial_t \eta_n|^{\frac{q}{q-1}}}. \end{equation*} By \eqref{eq30} and \eqref{eq9}, for $R>0$ sufficiently large \begin{align}\label{eq73} I_9 \leq C |\alpha|^{\frac{q}{q-1}} \int\int_{E_R^c}\Big[(r(x)^{\theta_2}+t^{\theta_1})^{\frac 1{\theta_2}}&\Big]^{\theta_2(C_1\alpha-1)\frac{q}{q-1}} t^{(\theta_1-1)\frac{q}{q-1}} V^{-\frac{1}{q-1}}d\mu dt\,. 
\end{align} Now note that if $f:[0,\infty)\to [0,\infty)$ is a nonincreasing function and if \eqref{hp2aa} in {\bf HP2}-$(i)$ holds, then for every $R>0$ sufficiently large we get \begin{equation}\label{eq72} \int\int_{E_R^c} f\big([r(x)^{\theta_2}+t^{\theta_1}]^{\frac 1{\theta_2}} \big)t^{(\theta_1-1)\frac q{q-1}} V^{-\frac 1{q-1}} d\mu dt\leq C \int_{R/2^{1/\theta_2}}^{\infty} f(z) z^{\bar s_1-1}(\log z)^{\bar s_2}dz\,; \end{equation} indeed, the proof of \eqref{eq72} is similar to that of \cite[formula (2.19)]{GrigS}, where here one uses condition \eqref{hp2aa} with $\varepsilon=0$, see also Remark \ref{rem1}. Then \begin{align}\label{eq74} I_9 &\leq C |\alpha|^{\frac{q}{q-1}} \int_{R/2^{1/\theta_2}}^\infty z^{\theta_2(C_1\alpha -1)\frac{q}{q-1} +\bar s_1-1}(\log z)^{\bar s_2} dz\\ \nonumber&\leq C |\alpha|^{\frac{q}{q-1}} \int_1^\infty z^{\frac{\theta_2C_1\alpha q}{q-1}}(\log z)^{\bar s_2}\,\frac{dz}{z}\,\leq\, C |\alpha|^{\frac{q}{q-1}-\bar s_2-1}\,\leq\, C\,. \end{align} Moreover, for every $n\in\mathbb{N}$ by \eqref{eq9} and by \eqref{hp2a} with $\varepsilon=0$, see also Remark \ref{rem1}, \begin{eqnarray} \label{eq76} I_{10}&\leq & \int\int_{E_{2^{1/\theta_2}nR}\setminus E_{nR}}\left(\frac{\theta_1}{(n R)^{\theta_2}} t^{\theta_1-1}\right)^{\frac{q}{q-1}}n^{C_1\theta_2\alpha\frac{q}{q-1}} V^{-\frac{1}{q-1}} d\mu dt \\ \nonumber& \leq & C (n R)^{-\theta_2\frac{q}{q-1}} n^{C_1\theta_2\alpha\frac{q}{q-1}}(n R)^{\frac{\theta_2 q}{q-1}}[\log(nR)]^{\bar s_2}\,=\, C n^{-\frac{C_1\theta_2q}{q-1}|\alpha|}[\log(nR)]^{\bar s_2}\,. \end{eqnarray} In view of \eqref{eq71}, \eqref{eq74}, \eqref{eq76} we have \[J_5\leq C \left( 1 + n^{-\frac{C_1\theta_2q}{q-1}|\alpha|}[\log(nR)]^{\bar s_2} \right)\,.\] Letting $n\to \infty$ we get our claim, inequality \eqref{eq50}. 
Now consider again \eqref{eq48}; passing to the limsup as $n\to\infty$ and using \eqref{eq15a}--\eqref{eq16} and \eqref{eq50}, we obtain for some constant $C>0$ \begin{equation} \label{eq49}\int\int_{E_R}u^q V\,d\mu dt \leq C\sq{\pa{\int\int_{E^c_R}u^q V\,d\mu dt}^\frac{(1-\alpha)(p-1)}{pq}+\left(\int\int_{E^c_R}u^{q}V\,d\mu dt\right)^\frac{1}{q}}\,. \end{equation} Now we can pass to the limit in \eqref{eq49} as $R\to\infty$, and thus as $\alpha\to 0$, and conclude by using Fatou's Lemma and the fact that $u^q\in L^1(S,Vd\mu dt)$ that \[ \intst{u^qV}=0. \] Thus $u=0$ a.e. on $M\times[0,\infty)$. \end{proof} \section{Proof of Corollaries \ref{cor1}, \ref{cor2} and \ref{cor3}}\label{sec4} \begin{proof}[\it Proof of Corollary \ref{cor1}\,.] We now show that, under our assumptions, hypothesis {\bf HP1} is satisfied (see conditions \eqref{hp1a} and \eqref{hp1b}). Observe that for small $\varepsilon>0$ \begin{equation*} \int\int_{E_{2^{1/\theta_2}R}\setminus E_R} t^{(\theta_1-1)\left(\frac q{q-1}-\varepsilon \right)}\,dx dt \,\leq\, C R^m \int_0^{2^{1/\theta_1}R^{\frac{\theta_2}{\theta_1}}}t^{(\theta_1-1)\left(\frac q{q-1}-\varepsilon \right)} dt \, \leq\, C R^m R^{\frac{\theta_2}{\theta_1}\left[(\theta_1-1)\left(\frac q{q-1}-\varepsilon \right)+1\right]}\,. \end{equation*} Hence, condition \eqref{hp1a} is satisfied if \begin{equation}\label{eq53} \frac{\theta_2}{\theta_1}\geq (q-1) m \,. \end{equation} On the other hand, for small $\varepsilon>0$, \begin{align*} \int\int_{E_{2^{1/\theta_2}R}\setminus E_R} |x|^{(\theta_2-1)p\left(\frac q{q-p+1}-\varepsilon \right)}\,dx dt &\leq C R^{\frac{\theta_2}{\theta_1}} \int_0^{2^{1/\theta_2}R}\varrho^{(\theta_2-1)p\left(\frac q{q-p+1}-\varepsilon \right)+m-1} d\varrho \\ \nonumber & \leq C R^{\frac{\theta_2}{\theta_1}+ \left[(\theta_2-1)p\left(\frac q{q-p+1}-\varepsilon \right)+m\right]}\,. 
\end{align*} Therefore condition \eqref{hp1b} is satisfied, if \begin{equation}\label{eq54} \frac{\theta_2}{\theta_1} \leq \frac{p q}{q-p+1}-m\,. \end{equation} Now note that we can find $\theta_1\geq 1, \theta_2\geq 1$ such that conditions \eqref{eq53} and \eqref{eq54} hold simultaneously, if \eqref{eq55} holds. Thus, from Theorem \ref{thm1} the conclusion follows. \end{proof} \begin{proof}[\it Proof of Corollary \ref{cor2}\,.] Under our assumptions, for $R>0$ large and $\varepsilon>0$ small enough we have \[ \int\int_{E_{2^{1/\theta_2}R}\setminus E_{R}}t^{(\theta_1-1)\left(\frac{q}{q-1}-\varepsilon\right)}V^{-\frac{1}{q-1}+\varepsilon}\,d\mu dt\leq CR^{\frac{\theta_2}{\theta_1}(\theta_1-1)\left(\frac{q}{q-1}-\varepsilon\right)+\frac{\theta_2}{\theta_1}\alpha\varepsilon+\beta\varepsilon+\frac{\theta_2}{\theta_1}\sigma_2+\sigma_1}(\log R)^{\delta_1+\delta_2}. \] Hence condition \eqref{hp1a} in {\bf HP1} is satisfied if we choose $C_0\geq\max\left\{0,\frac{\theta_2}{\theta_1}\left(\alpha+1\right)+\beta-\theta_2\right\}$ and if \begin{equation}\label{eq57} \frac{\theta_2}{\theta_1}\left(\sigma_2-\frac{q}{q-1}\right)+\sigma_1\leq0\,,\qquad \delta_1+\delta_2<\frac{1}{q-1}\,. \end{equation} Similarly for sufficiently large $R>0$ and small $\varepsilon>0$ we have \begin{equation*} \int\int_{E_{2^{1/\theta_2}R}\setminus E_R} r(x)^{(\theta_2-1)p\left(\frac{q}{q-p+1}-\varepsilon\right)}V^{-\frac{p-1}{q-p+1}+\varepsilon}d\mu dt\leq CR^{(\theta_2-1)p\left(\frac{q}{q-p+1}-\varepsilon\right)+\frac{\theta_2}{\theta_1}\alpha\varepsilon+\beta\varepsilon+\frac{\theta_2}{\theta_1}\sigma_4+\sigma_3}(\log R)^{\delta_3+\delta_4}\,. \end{equation*} Therefore condition \eqref{hp1b} in {\bf HP1} is satisfied if $C_0\geq\max\left\{0,\beta+\frac{\theta_2}{\theta_1}\alpha-(\theta_2-1)p\right\}$ and if \begin{equation}\label{eq58} \left(-\frac{pq}{q-p+1}+\sigma_3\right)+\frac{\theta_2}{\theta_1}\sigma_4\leq0\,,\qquad \delta_3+\delta_4<\frac{p-1}{q-p+1}\,.
\end{equation} Now for conditions \eqref{eq57} and \eqref{eq58} to be satisfied, by our assumptions it is sufficient to choose $\theta_1\geq1$, $\theta_2\geq1$ such that \begin{eqnarray} \label{51}&&\sigma_1\left(\frac{q}{q-1}-\sigma_2\right)^{-1}\,\leq\,\frac{\theta_2}{\theta_1}\qquad\textrm{ if }0\leq\sigma_2<\frac{q}{q-1}\,,\\ \label{52}&&\frac{\theta_2}{\theta_1}\,\leq\,\left(\frac{pq}{q-p+1}-\sigma_3\right)\sigma_4^{-1}\qquad\textrm{ if }0\leq\sigma_3<\frac{pq}{q-p+1}\,. \end{eqnarray} Thus we can apply Theorem \ref{thm1} and conclude. \end{proof} \begin{proof}[\it Proof of Corollary \ref{cor3}\,.] By our assumptions for large $R>0$ and small $\varepsilon>0$ we have \begin{eqnarray*} \int\int_{E_{2^{1/\theta_2}R}\setminus E_{R}}t^{(\theta_1-1)\left(\frac{q}{q-1}-\varepsilon\right)}V^{-\frac{1}{q-1}+\varepsilon}\,d\mu dt&\leq& CR^{\frac{\theta_2}{\theta_1}(\theta_1-1)\left(\frac{q}{q-1}-\varepsilon\right)+\frac{\theta_2}{\theta_1}\alpha\varepsilon+\beta\varepsilon+\frac{\theta_2}{\theta_1}\sigma_2+\sigma_1}(\log R)^{\delta_1+\delta_2}\,,\\ \int\int_{E_{2^{1/\theta_2}R}\setminus E_{R}}t^{(\theta_1-1)\left(\frac{q}{q-1}+\varepsilon\right)}V^{-\frac{1}{q-1}-\varepsilon}\,d\mu dt&\leq& CR^{\frac{\theta_2}{\theta_1}(\theta_1-1)\left(\frac{q}{q-1}+\varepsilon\right)+\frac{\theta_2}{\theta_1}\alpha\varepsilon+\beta\varepsilon+\frac{\theta_2}{\theta_1}\sigma_2+\sigma_1}(\log R)^{\delta_1+\delta_2}\, . \end{eqnarray*} Thus conditions \eqref{hp2a}--\eqref{hp2aa} of {\bf HP2} are satisfied if we choose $C_0\geq\max\left\{0,\frac{\theta_2}{\theta_1}\left(\alpha-1\right)+\beta+\theta_2\right\}$ and \begin{equation}\label{eq61} \frac{\theta_2}{\theta_1}\left(\sigma_2-\frac{q}{q-1}\right)+\sigma_1\leq0\,,\qquad \delta_1+\delta_2\leq\frac{1}{q-1}\,.
\end{equation} Similarly if $R>0$ is large and $\varepsilon>0$ is small enough we have \begin{eqnarray*} \int\int_{E_{2^{1/\theta_2}R}\setminus E_R} r(x)^{(\theta_2-1)p\left(\frac{q}{q-p+1}-\varepsilon\right)}V^{-\frac{p-1}{q-p+1}+\varepsilon}d\mu dt&\leq& CR^{(\theta_2-1)p\left(\frac{q}{q-p+1}-\varepsilon\right)+\frac{\theta_2}{\theta_1}\alpha\varepsilon+\beta\varepsilon+\frac{\theta_2}{\theta_1}\sigma_4+\sigma_3}(\log R)^{\delta_3+\delta_4}\,,\\ \int\int_{E_{2^{1/\theta_2}R}\setminus E_R} r(x)^{(\theta_2-1)p\left(\frac{q}{q-p+1}+\varepsilon\right)}V^{-\frac{p-1}{q-p+1}-\varepsilon}d\mu dt&\leq& CR^{(\theta_2-1)p\left(\frac{q}{q-p+1}+\varepsilon\right)+\frac{\theta_2}{\theta_1}\alpha\varepsilon+\beta\varepsilon+\frac{\theta_2}{\theta_1}\sigma_4+\sigma_3}(\log R)^{\delta_3+\delta_4}\,. \end{eqnarray*} Thus conditions \eqref{hp2b}--\eqref{hp2bb} in {\bf HP2} are satisfied if $C_0\geq\max\left\{0,\beta+\frac{\theta_2}{\theta_1}\alpha+(\theta_2-1)p\right\}$ and \begin{equation}\label{eq62} \left(-\frac{pq}{q-p+1}+\sigma_3\right)+\frac{\theta_2}{\theta_1}\sigma_4\leq0\,,\qquad \delta_3+\delta_4\leq\frac{p-1}{q-p+1}\,. \end{equation} Hence, arguing as in the proof of Corollary \ref{cor2}, we have that under our assumptions {\bf HP2} holds, and we can apply Theorem \ref{thm2} to conclude. \end{proof} We conclude with the next example, where we show that our results extend those in \cite{Zhang} in the case of the Laplace--Beltrami operator on a complete noncompact manifold $M$. Let us start by fixing a point $o\in M$ and denoting by $\textrm{Cut}(o)$ the {\it cut locus} of $o$. For any $x\in M\setminus \big[\textrm{Cut}(o)\cup \{o\} \big]$, one can define the {\it polar coordinates} with respect to $o$, see e.g. \cite{Grig}.
Namely, for any point $x\in M\setminus \big[\textrm{Cut}(o)\cup \{o\} \big]$ there correspond a polar radius $r(x) := dist(x, o)$ and a polar angle $\theta\in \mathbb S^{m-1}$ such that the shortest geodesic from $o$ to $x$ starts at $o$ with the direction $\theta$ in the tangent space $T_oM$. Since we can identify $T_o M$ with $\mathbb R^m$, $\theta$ can be regarded as a point of $\mathbb S^{m-1}.$ The Riemannian metric in $M\setminus\big[\textrm{Cut}(o)\cup \{o\} \big]$ in polar coordinates reads \[ds^2 = dr^2+A_{ij}(r, \theta)d\theta^i d\theta^j, \] where $(\theta^1, \ldots, \theta^{m-1})$ are coordinates in $\mathbb S^{m-1}$ and $(A_{ij})$ is a positive definite matrix. It is not difficult to see that the Laplace-Beltrami operator in polar coordinates has the form \[ \Delta = \frac{\partial^2}{\partial r^2} + \mathcal F(r, \theta)\frac{\partial}{\partial r}+\Delta_{S_{r}}, \] where $\mathcal F(r, \theta):=\frac{\partial}{\partial r}\big(\log\sqrt{A(r,\theta)}\big)$, $A(r,\theta):=\det (A_{ij}(r,\theta))$, and $\Delta_{S_r}$ is the Laplace-Beltrami operator on the submanifold $S_{r}:=\partial B(o, r)\setminus \textrm{Cut}(o)$\,. $M$ is a {\it manifold with a pole} if it has a point $o\in M$ with $\textrm{Cut}(o)=\emptyset$. The point $o$ is called a {\it pole} and the polar coordinates $(r,\theta)$ are defined in $M\setminus\{o\}$. A manifold with a pole is a {\it spherically symmetric manifold} or a {\it model} if the Riemannian metric is given by \begin{equation}\label{eq600} ds^2 = dr^2+\psi^2(r)d\theta^2, \end{equation} where $d\theta^2$ is the standard metric in $\mathbb S^{m-1}$, and \begin{equation}\label{eq601} \psi\in \mathcal A:=\Big\{f\in C^\infty((0,\infty))\cap C^1([0,\infty)): f'(0)=1,\, f(0)=0,\, f>0\text{ in } (0,\infty)\Big\}.
\end{equation} In this case, we write $M\equiv M_\psi$; furthermore, we have $\sqrt{A(r,\theta)}=\psi^{m-1}(r)$, so the boundary area of the geodesic sphere $\partial S_R$ is computed by \[S(R)=\omega_m\psi^{m-1}(R),\] $\omega_m$ being the area of the unit sphere in $\mathbb R^m$. Also, the volume of the ball $B_R(o)$ is given by \[\mu(B_R(o))=\int_0^R S(\xi)d\xi\,. \] Observe that for $\psi(r)=r$, $M=\mathbb R^m$, while for $\psi(r)=\sinh r$, $M$ is the $m$-dimensional hyperbolic space $\mathbb H^m$. \begin{exe}\label{exe1} Let $M$ be an $m$-dimensional model manifold with pole $o$ and metric given by \eqref{eq600} with \[\psi(r):=\begin{cases} r & \,\, \textrm{if} \ 0\leq r<1 \, , \\ [r^{\alpha-1}(\log r)^{\beta} ]^{\frac 1{m-1}} & \ \textrm{if} \ r > 2 \,; \\ \end{cases} \] where $\alpha>1$ and $\beta\in \left(0, \frac 1{q-1}\right]\,.$ We consider problem \eqref{Eq} with $V\equiv 1$ and $p=2$. Note that for $R>0$ large enough \[\mu(B_R)\simeq C R^{\alpha}(\log R)^\beta\leq C R^{\alpha+\sigma}\,,\] for any $\sigma>0$, while $$\lim_{R\to+\infty}\frac{\mu(B_R)}{R^\alpha}=+\infty.$$ Furthermore, \[\frac{d}{dr}\pa{\log\sqrt{A(r)}}=\frac{d}{dr}\pa{\log\big([\psi(r)]^{m-1}\big)}\leq \frac C r\quad \textrm{for all}\,\, r>0\,.\] Thus, for $\alpha>2$, from \cite[Theorem A]{Zhang} we can infer that problem \eqref{Eq} does not admit nonnegative nontrivial solutions, provided that $1<q\leq 1+ \frac{2}{\alpha+\sigma}$ for some $\sigma>0$, that is provided that \[1<q < 1 +\frac 2{\alpha}\,.\] On the other hand, just assuming $\alpha>1$, we can apply Corollary \ref{cor3} with $p=2$ (see also Remark \ref{rem2}), where $f(t)\equiv 1$, $g(x)\equiv 1$, $\sigma_1=\alpha$, $\sigma_2=1$, $\delta_1=\beta$, $\delta_2=0$, and thus we can deduce that problem \eqref{Eq} does not admit nonnegative nontrivial solutions, provided that \[1<q \leq 1 +\frac 2{\alpha}\,.\] So, we can exclude the existence of nontrivial solutions also in the particular case when $q=1+\frac{2}{\alpha}$.
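As a purely numerical illustration of the volume growth claimed above (outside the argument of the paper), one can check that $\int_2^R r^{\alpha-1}(\log r)^{\beta}\,dr$ behaves like $R^{\alpha}(\log R)^{\beta}/\alpha$ for large $R$; the values of $\alpha$ and $\beta$ below are assumed sample values.

```python
import math

# Illustrative check (not part of the proof): with
# psi(r)^{m-1} = r^{alpha-1} (log r)^beta for r > 2, the volume
# mu(B_R) = int_0^R omega_m psi(r)^{m-1} dr grows like
# R^alpha (log R)^beta / alpha, up to omega_m and the bounded
# contribution of [0, 2].  alpha, beta are assumed test values.
alpha, beta = 3.0, 0.5

def volume_tail(R, n=100_000):
    """Trapezoid rule for int_2^R r^(alpha-1) (log r)^beta dr."""
    h = (R - 2.0) / n
    s = 0.0
    for i in range(n + 1):
        r = 2.0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * r ** (alpha - 1) * math.log(r) ** beta
    return s * h

ratios = [volume_tail(R) / (R ** alpha * math.log(R) ** beta / alpha)
          for R in (1e2, 1e3, 1e4)]
# the ratios slowly increase toward 1, consistent with
# mu(B_R) ~ C R^alpha (log R)^beta
```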
\end{exe} \end{document}
\begin{document} \title{{\LARGE \textbf{Personalized Cancer Therapy Design: Robustness vs. Optimality}}} \maketitle \thispagestyle{empty} \pagestyle{empty} \begin{abstract} Intermittent Androgen Suppression (IAS) is a treatment strategy for delaying or even preventing relapse of advanced prostate cancer. IAS consists of alternating cycles of therapy (in the form of androgen suppression) and off-treatment periods. The level of prostate specific antigen (PSA) in a patient's serum is frequently monitored to determine when the patient will be taken off therapy and when therapy will resume. In spite of extensive recent clinical experience with IAS, the design of an ideal protocol for any given patient remains one of the main challenges associated with effectively implementing this therapy. We use a threshold-based policy for optimal IAS therapy design that is parameterized by lower and upper PSA threshold values and is associated with a cost metric that combines clinically relevant measures of therapy success. We apply Infinitesimal Perturbation Analysis (IPA) to a Stochastic Hybrid Automaton (SHA) model of prostate cancer evolution under IAS and derive unbiased estimators of the cost metric gradient with respect to various model and therapy parameters. These estimators are subsequently used for system analysis. By evaluating sensitivity estimates with respect to several model parameters, we identify critical parameters and demonstrate that relaxing the optimality condition in favor of increased robustness to modeling errors provides an alternative objective to therapy design for at least some patients. \end{abstract} \section{Introduction} Several recent attempts have been made to develop mathematical models that explain the progression of cancer in patients undergoing therapy so as to improve (and possibly optimize) the effectiveness of such therapy.
As an example, prostate cancer is known to be a multistep process, and patients who evolve into a state of metastatic disease are usually treated with hormone therapy in the form of continuous androgen suppression (CAS) \cite{Harrison2012}. The initial response to CAS is frequently positive, leading to a significant decrease in tumor size; unfortunately, most patients eventually develop resistance and relapse. Intermittent Androgen Suppression (IAS) is an alternative treatment strategy for delaying or even preventing relapse in advanced prostate cancer patients. IAS consists of alternating cycles of therapy (in the form of androgen suppression) and off-treatment periods. The level of prostate specific antigen (PSA) in a patient's serum is frequently monitored to determine when the patient will be taken off therapy and when therapy will resume. In spite of extensive recent clinical experience with IAS, the design of an ideal protocol for any given patient remains one of the main challenges associated with effectively implementing this therapy \cite{Hirata2010}. Various works have aimed at addressing this challenge, and we briefly review some of them. In \cite{Jackson2004} a model is proposed in which prostate tumors are composed of two subpopulations of cancer cells, one that is sensitive to androgen suppression and another that is not, without directly addressing the issue of IAS therapy design. The authors in \cite{Ideta2008} modeled the evolution of a prostate tumor under IAS using a hybrid dynamical system approach and applied numerical bifurcation analysis to study the effect of different therapy protocols on tumor growth and time to relapse. In \cite{Shimada2008} a nonlinear model is developed to explain the competition between different cancer cell subpopulations, while in \cite{Tao2010} a model based on switched ordinary differential equations is proposed.
The authors in \cite{Suzuki2010} developed a piecewise affine system model and formulated the problem of personalized prostate cancer treatment as an optimal control problem. Patient classification is performed in \cite{Hirata2010} using a feedback control system to model the prostate tumor under IAS, and in \cite{Hirata2010B} this work is extended by deriving conditions for patient relapse. Most of the existing models provide insights into the dynamics of prostate cancer evolution under androgen deprivation, but fail to address the issue of therapy design. Furthermore, previous works that suggest optimal treatment schemes by classifying patients into groups have been based on more manageable, albeit less accurate, approaches to nonlinear hybrid dynamical systems. Addressing this limitation, the authors in \cite{Liu2015} recently proposed a nonlinear hybrid automaton model and performed $\delta $-reachability analysis to identify patient-specific treatment schemes. However, this model did not account for noise and fluctuations inherently associated with cell population dynamics and monitoring of clinical data. In contrast, in \cite{Tanaka2010} a hybrid model of tumor growth under IAS therapy is developed that incorporated stochastic effects, but is not used for personalized therapy design. A first attempt to define optimal personalized IAS therapy schemes by applying Infinitesimal Perturbation Analysis (IPA) to stochastic models of prostate cancer evolution was reported in \cite{FleckADHS2015}. An IPA-driven gradient-based optimization algorithm was subsequently implemented in \cite{FleckNAHS2016} to adaptively adjust controllable therapy settings so as to improve IAS therapy outcomes. The advantages of these IPA-based approaches stem from the fact that IPA efficiently yields sensitivities with respect to controllable parameters in a therapy (i.e., control policy), which is arguably the ultimate goal of personalized therapy design. 
More generally, however, IPA yields sensitivity estimates with respect to various model parameters from actual data, thus allowing critical parameters to be differentiated from others that are not. In this paper we build upon the IPA-based methodology from \cite{FleckADHS2015} and \cite{FleckNAHS2016} and focus on the importance of accurate modeling in conjunction with optimal therapy design. In particular, by evaluating sensitivity estimates with respect to several model parameters, we identify critical parameters and verify the extent to which the model from \cite{FleckADHS2015} is robust to them. From a practical perspective, the goal of this paper is to use IPA to explore the tradeoff between system optimality and robustness (or, equivalently, fragility), thus providing valuable insights on modeling and control of cancer progression. Assuming that an underlying, and most likely poorly understood, equilibrium of cancer cell subpopulation dynamics exists at suboptimal therapy settings, we verify that relaxing the optimality condition in favor of increased robustness to modeling errors provides an alternative objective to therapy design for at least some patients. In Section \ref{Formulation} we present a Stochastic Hybrid Automaton (SHA) model of prostate cancer evolution, along with a threshold-based policy for optimal IAS therapy design. Section \ref{IPA} reviews a general framework of IPA based on which we derive unbiased IPA estimators for system analysis. In Section \ref{Results} we evaluate sensitivity estimates with respect to several model parameters, identifying critical parameters and verifying the extent to which our SHA model is robust to them. We include final remarks in Section \ref{Conclusions}. \section{Problem Formulation} \label{Formulation} \subsection{Stochastic Model of Prostate Cancer Evolution} We consider a system composed of a prostate tumor under IAS therapy, which is modeled as a Stochastic Hybrid Automaton (SHA). 
Details of the problem formulation are given in \cite{FleckADHS2015}, but a condensed description of the SHA modeling framework is included here so as to make this paper as self-contained as possible. By adopting a standard SHA definition \cite{Cassandras2008}, a SHA model of prostate cancer evolution is defined in terms of the following: A \textbf{discrete state set} $Q=\left\{ q^{ON},q^{OFF}\right\} $, where $ q^{ON} $ ($q^{OFF}$, respectively) is the on-treatment (off-treatment, respectively) operational mode of the system. IAS therapy is temporarily suspended when the size of the prostate tumor decreases by a predetermined desirable amount. The reduction in the size of the tumor is estimated in terms of the patient's prostate specific antigen (PSA) level, a biomarker commonly used for monitoring the outcome of hormone therapy. In this context, therapy is suspended when a patient's PSA level reaches a lower threshold value, and reinstated once the size of cancer cell populations has increased considerably, i.e., once the patient's PSA level reaches an upper threshold value. A \textbf{state space} $X=\left\{ x_{1}\left( t\right) ,x_{2}\left( t\right) ,x_{3}\left( t\right) ,z_{1}\left( t\right) ,z_{2}\left( t\right) \right\} $, defined in terms of the biomarkers commonly monitored during IAS therapy, as well as \textquotedblleft clock\textquotedblright\ state variables that measure the time spent by the system in each discrete state. 
We assume that prostate tumors are composed of two coexisting subpopulations of cancer cells, Hormone Sensitive Cells (HSCs) and Castration Resistant Cells (CRCs), and thus define a state vector $x\left( t\right) =\left[ x_{1}\left( t\right) ,x_{2}\left( t\right) ,x_{3}\left( t\right) \right] $ with $ x_{i}\left( t\right) \in \mathbb{R}^{+}$, such that $x_{1}\left( t\right) $ is the total population of HSCs, $x_{2}\left( t\right) $ is the total population of CRCs, and $x_{3}\left(t\right) $ is the concentration of androgen in the serum. Prostate cancer cells secrete high levels of PSA, hence a common assumption is that the serum PSA concentration can be modeled as a linear combination of the cancer cell subpopulations. It is also frequently assumed that both HSCs and CRCs secrete PSA equivalently \cite{Ideta2008}, and in this work we adopt these assumptions. Finally, we define variable $z_{i}\left( t\right) \in \mathbb{R}^{+}$, $i=1,2$, where $z_{1}\left( t\right) $ ($z_{2}\left( t\right) $, respectively) is the \textquotedblleft clock\textquotedblright\ state variable corresponding to the time when the system is in state $q^{ON}$ ($ q^{OFF}$, respectively), and is reset to zero every time a state transition occurs. Setting $z\left( t\right) =\left[ z_{1}\left( t\right) ,z_{2}\left( t\right) \right] $, the complete state vector is $\left[ x\left( t\right) ,z\left( t\right) \right] $. An \textbf{admissible control set} $U=\left\{ 0,1\right\} $, such that the control is defined, at any time $t$, as: {\small \begin{equation} u\left( x\left( t\right) ,z\left( t\right) \right) \equiv \left\{ \begin{array}{ll} 0 & \text{if }x_{1}\left( t\right) +x_{2}\left( t\right) <\theta _{2}\text{, }q\left( t\right) =q^{OFF} \\ 1 & \text{if }x_{1}\left( t\right) +x_{2}\left( t\right) >\theta _{1}\text{, }q\left( t\right) =q^{ON} \end{array} \right. 
\label{eq: control set} \end{equation}} This is a simple form of hysteresis control to ensure that androgen deprivation will be suspended whenever a patient's PSA level drops below a minimum threshold value, and that treatment will resume once the patient's PSA\ level reaches a maximum threshold value. To this end, IAS therapy is viewed as a controlled process characterized by two parameters: $\tilde{\theta} = \left[ \tilde{\theta} _{1},\tilde{\theta} _{2}\right] \in \Theta $, where $\tilde{\theta} _{1}\in \left[ \tilde{\theta} _{1}^{\min },\tilde{\theta} _{1}^{\max }\right] $ is the lower threshold value of the patient's PSA level, and $\tilde{\theta} _{2}\in \left[ \tilde{\theta} _{2}^{\min },\tilde{\theta} _{2}^{\max }\right] $ is the upper threshold value of the patient's PSA level, with $\tilde{\theta} _{1}^{\max }<\tilde{\theta} _{2}^{\min }$. An illustrative representation of such threshold-based IAS therapy scheme is depicted in Fig. \ref{fig: IAS}. Simulation driven by clinical data \cite{Bruchovsky2006},\cite{Bruchovsky2007} was performed to generate the plot in Fig. \ref{fig: IAS}, which shows a typical profile of PSA\ level variations along several treatment cycles. \begin{figure} \caption{Schematic representation of Intermittent Androgen Suppression (IAS) therapy} \label{fig: IAS} \end{figure} An \textbf{event set} $E=\left\{ e_{1},e_{2}\right\} $, where $e_{1}$ corresponds to the condition $\left[ x_{1}\left( t\right) +x_{2}\left( t\right) =\theta _{1}\text{ from above}\right] $ (i.e., $x_{1}\left( t^{-}\right) +x_{2}\left( t^{-}\right) >\theta _{1}$) and $e_{2}$ corresponds to the condition $\left[ x_{1}\left( t\right) +x_{2}\left( t\right) =\theta _{2}\text{ from below}\right] $ (i.e., $x_{1}\left( t^{-}\right) +x_{2}\left( t^{-}\right) <\theta _{2}$), where the notation $ t^{-}$ indicates the time instant immediately preceding time $t$. 
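The hysteresis rule in (\ref{eq: control set}), together with the events $e_1$ and $e_2$, can be sketched as follows; the function name and threshold values are illustrative assumptions, not taken from the authors' implementation.

```python
# Sketch of the threshold-based (hysteresis) IAS policy: therapy stays ON
# until the PSA proxy x1+x2 drops to the lower threshold theta1 (event e1),
# and stays OFF until it rises to the upper threshold theta2 (event e2).
# Function name and thresholds are illustrative assumptions.
def ias_control(x1, x2, q, theta1, theta2):
    """Return the next treatment mode ('ON'/'OFF') given the current mode q."""
    psa = x1 + x2
    if q == "ON" and psa <= theta1:    # event e1: lower threshold hit from above
        return "OFF"
    if q == "OFF" and psa >= theta2:   # event e2: upper threshold hit from below
        return "ON"
    return q                           # otherwise the current mode persists

# hysteresis: the same PSA value maps to different modes depending on history
assert ias_control(3.0, 3.0, "ON", 4.0, 10.0) == "ON"
assert ias_control(3.0, 3.0, "OFF", 4.0, 10.0) == "OFF"
```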
\textbf{System dynamics} describing the evolution of continuous state variables over time, as well as the rules for discrete state transitions. The \emph{ continuous (time-driven) dynamics} capture the prostate cancer cell population dynamics, which are defined in terms of their proliferation, apoptosis, and conversion rates. As in \cite{FleckADHS2015}, we incorporate stochastic effects into the deterministic model from \cite{Liu2015} as follows: {\small \begin{equation} \begin{array} [c]{ll} \dot{x}_{1}(t) & =\alpha_{1}\left[ 1+e^{-\left( x_{3}(t)-k_{1}\right) k_{2}}\right] ^{-1}\cdot x_{1}(t)\\ & -\beta_{1}\left[ 1+e^{-\left( x_{3}(t)-k_{3}\right) k_{4}}\right] ^{-1}\cdot x_{1}(t)\\ & -\left[ m_{1}\left( 1-\frac{x_{3}(t)}{x_{3,0}}\right) +\lambda _{1}\right] \cdot x_{1}(t)\\ & +\mu_{1}+\zeta_{1}(t) \end{array} \label{eq: HSC dynamics} \end{equation} } {\small \begin{equation} \begin{array}{ll} \dot{x}_{2}(t)= & \left[ \alpha _{2}\left( 1-d\frac{x_{3}(t)}{x_{3,0}} \right) -\beta _{2}\right] x_{2}(t) \\ & +m_{1}\left( 1-\frac{x_{3}(t)}{x_{3,0}}\right) x_{1}(t)+\zeta _{2}(t) \end{array} \label{eq: CRC dynamics} \end{equation}} {\small \begin{equation} \dot{x}_{3}(t)=\left\{ \begin{array}{ll} -\frac{x_{3}(t)}{\sigma }+\mu _{3}+\zeta _{3}(t) & \begin{array}{l} \text{if }x_{1}(t)+x_{2}(t)>\theta _{1} \\ \text{and }q(t)=q^{ON} \end{array} \text{ } \\ \frac{x_{3,0}-x_{3}(t)}{\sigma }+\mu _{3}+\zeta _{3}(t) & \begin{array}{l} \text{if }x_{1}(t)+x_{2}(t)<\theta _{2}\text{ } \\ \text{and }q(t)=q^{OFF} \end{array} \end{array} \right. \label{eq: x3 dynamics} \end{equation}} {\small \begin{align} \dot{z}_{1}(t)& =\left\{ \begin{array}{ll} 1 & \text{if }q(t)=q^{ON} \\ 0 & \text{otherwise} \end{array} \right. 
\label{eq: z1 dynamics} \\ z_{1}(t^{+})& = \begin{array}[t]{ll} 0 & \begin{array}{l} \text{if }x_{1}(t)+x_{2}(t)=\theta _{1} \\ \text{and }q(t)=q^{ON} \end{array} \text{ } \end{array} \notag \end{align} \begin{align} \dot{z}_{2}(t)& =\left\{ \begin{array}{ll} 1 & \text{if }q(t)=q^{OFF} \\ 0 & \text{otherwise} \end{array} \right. \label{eq: z2 dynamics} \\ z_{2}(t^{+})& = \begin{array}[t]{ll} 0 & \begin{array}{l} \text{if }x_{1}(t)+x_{2}(t)=\theta _{2} \\ \text{and }q(t)=q^{OFF}\text{ } \end{array} \end{array} \notag \end{align}} where $\alpha _{1}$ and $\alpha _{2}$ are the HSC proliferation constant and CRC proliferation constant, respectively; $\beta _{1}$ and $\beta _{2}$ are the HSC apoptosis constant and CRC apoptosis constant, respectively; $k_{1}$ through $k_{4}$ are HSC proliferation and apoptosis exponential constants; $ m_{1}$ is the HSC to CRC conversion constant; $x_{3,0}$ corresponds to the patient-specific androgen constant; $\sigma $ is the androgen degradation constant; $\lambda _{1}$ is the HSC basal degradation rate; $\mu _{1}$ and $ \mu _{3}$ are the HSC basal production rate and androgen basal production rate, respectively. Finally, $\{\zeta _{i}(t)\}$, $i=1,2,3$, are stochastic processes which we allow to have arbitrary characteristics and only assume them to be piecewise continuous w.p. 1. The processes $\{\zeta _{i}(t)\}$, $ i=1,2$, represent noise and fluctuations inherently associated with cell population dynamics, while $\{\zeta _{3}(t)\}$ reflects randomness associated with monitoring clinical data, more specifically, with monitoring the patient's androgen level. It is clear from (\ref{eq: HSC dynamics})-(\ref{eq: x3 dynamics}) that $ x_{1}\left( t\right) $ and $x_{2}\left( t\right) $ are dependent on $ x_{3}\left( t\right) $, whose dynamics are affected by mode transitions. 
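For orientation, note that in the noise-free on-treatment mode ($\zeta_3\equiv 0$, $q=q^{ON}$) the androgen equation $\dot{x}_3=-x_3/\sigma+\mu_3$ is solved explicitly by $x_3(t)=x_3(0)e^{-t/\sigma}+\mu_3\sigma(1-e^{-t/\sigma})$, which is the deterministic part of the function $h^{ON}$ derived below. A quick numerical check, with assumed values of $\sigma$, $\mu_3$, and $x_3(0)$:

```python
import math

# Check (with made-up values) that the closed form solves the ON-mode
# androgen ODE  x3' = -x3/sigma + mu3  by comparing against forward Euler.
sigma, mu3, x0 = 12.5, 0.2, 20.0

def x3_closed(t):
    # deterministic part of h^ON: exponential relaxation toward mu3*sigma
    return x0 * math.exp(-t / sigma) + mu3 * sigma * (1.0 - math.exp(-t / sigma))

x, dt, T = x0, 1e-3, 50.0
for _ in range(int(T / dt)):
    x += (-x / sigma + mu3) * dt   # forward-Euler step of the same ODE

err = abs(x - x3_closed(T))        # should be small (O(dt))
```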
To make explicit the dependence of $x_{1}\left( t\right) $ and $x_{2}\left( t\right) $ on the discrete state (mode) $q\left( t\right) $, we let $\tau _{k}\left( \tilde{\theta} \right) $ be the time of occurrence of the $k$th event (of any type), and denote the state dynamics over any interevent interval $\left[ \tau _{k}\left( \tilde{\theta} \right) ,\tau _{k+1}\left( \tilde{\theta} \right) \right) $ as \begin{equation*} \dot{x}_{n}(t)=f_{k}^{x_{n}}(t)\text{, }\dot{z}_{i}(t)=f_{k}^{z_{i}}(t)\text{ , }n=1,\ldots ,3\text{, }i=1,2 \end{equation*} We include $\tilde{\theta} $ as an argument to stress the dependence of the event times on the controllable parameters, but we will subsequently drop this for ease of notation as long as no confusion arises. We thus start by assuming $q(t)=q^{ON}$ for $t\in $ $\left[ \tau _{k},\tau _{k+1}\right) $. Solving (\ref{eq: x3 dynamics}) yields, for $t\in $ $\left[ \tau _{k},\tau _{k+1}\right) $, {\small \begin{equation*} \begin{array}{ll} x_{3}(t) & =x_{3}(\tau _{k}^{+})e^{-\left( t-\tau _{k}\right) /\sigma } \\ & +e^{-t/\sigma }\cdot \int_{\tau _{k}}^{t}e^{\varepsilon /\sigma }\left[ \mu _{3}+\zeta _{3}(\varepsilon )\right] d\varepsilon \end{array} \end{equation*}} It is then possible to define, for $t\in $ $\left[ \tau _{k},\tau _{k+1}\right) $, {\small \begin{equation} \begin{array}{ll} h^{ON}\left( t,\tilde{\zeta}_{3}(t)\right) & \equiv x_{3}(\tau _{k}^{+})e^{-\left( t-\tau _{k}\right) /\sigma } \\ & +\mu _{3}\sigma \lbrack 1-e^{-\left( t-\tau _{k}\right) /\sigma }]+\tilde{ \zeta}_{3}(t) \end{array} \label{eq: h_ON} \end{equation}} where, for notational simplicity, we let {\small \begin{equation} \tilde{\zeta}_{3}(t)=\int_{\tau _{k}}^{t}e^{-\left( t-\varepsilon \right) /\sigma }\zeta _{3}(\varepsilon )d\varepsilon \label{eq: zeta3_tilda} \end{equation}} Next, let $q(t)=q^{OFF}$ for $t\in $ $\left[ \tau _{k},\tau _{k+1}\right) $, so that (\ref{eq: x3 dynamics}) implies that, for $t\in $ $\left[ \tau _{k},\tau _{k+1}\right) $, 
{\small \begin{equation*} \begin{array}{ll} x_{3}(t) & =x_{3}(\tau _{k}^{+})e^{-\left( t-\tau _{k}\right) /\sigma } \\ & +(\mu _{3}\sigma +x_{3,0})[1-e^{-\left( t-\tau _{k}\right) /\sigma }]+ \tilde{\zeta}_{3}(t) \end{array} \end{equation*}} Similarly as above, we define, for $t\in $ $\left[ \tau _{k},\tau _{k+1}\right) $, {\small \begin{equation} \begin{array}{ll} h^{OFF}\left( t,\tilde{\zeta}_{3}(t)\right) & \equiv x_{3}(\tau _{k}^{+})e^{-\left( t-\tau _{k}\right) /\sigma } \\ & +(\mu _{3}\sigma +x_{3,0})[1-e^{-\left( t-\tau _{k}\right) /\sigma }]+ \tilde{\zeta}_{3}(t) \end{array} \label{eq: h_OFF} \end{equation}} It is then possible to rewrite (\ref{eq: x3 dynamics}) as follows: {\small \begin{equation*} x_{3}(t)=\left\{ \begin{array}{ll} h^{ON}\left( t,\tilde{\zeta}_{3}(t)\right) & \text{if }q(t)=q^{ON} \\ h^{OFF}\left( t,\tilde{\zeta}_{3}(t)\right) & \text{if }q(t)=q^{OFF} \end{array} \right. \end{equation*}} Although we include $\tilde{\zeta}_{3}(t)$ as an argument in (\ref{eq: h_ON} ) and (\ref{eq: h_OFF}) to stress the dependence on the stochastic process, we will subsequently drop this for ease of notation as long as no confusion arises. Hence, substituting (\ref{eq: h_ON}) and (\ref{eq: h_OFF}) into (\ref {eq: HSC dynamics})-(\ref{eq: CRC dynamics}), yields {\small \begin{equation} \dot{x}_{1}(t)=\left\{ \begin{array}{l} \begin{array}{l} \left\{ \alpha _{1}\left[ 1+\phi _{\alpha }^{ON}(t)\right] ^{-1}-\beta _{1} \left[ 1+\phi _{\beta }^{ON}(t)\right] ^{-1}\right. \\ \left. +m_{1}\left( \frac{h^{ON}\left( t\right) }{x_{3,0}}\right) -(m_{1}+\lambda _{1})\right\} \cdot x_{1}(t) \\ +\mu _{1}+\zeta _{1}(t)\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if }q(t)=q^{ON} \end{array} \\ \begin{array}{l} \left\{ \alpha _{1}\left[ 1+\phi _{\alpha }^{OFF}(t)\right] ^{-1}-\beta _{1} \left[ 1+\phi _{\beta }^{OFF}(t)\right] ^{-1}\right. \\ \left. 
+m_{1}\left( \frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) -(m_{1}+\lambda _{1})\right\} \cdot x_{1}(t) \\ +\mu _{1}+\zeta _{1}(t)\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if }q(t)=q^{OFF} \end{array} \end{array} \right. \label{eq: x1 dynamics} \end{equation}} {\small \begin{equation} \dot{x}_{2}(t)=\left\{ \begin{array}{l} \begin{array}{l} \left[ \alpha _{2}\left( 1-d\frac{h^{ON}\left( t\right) }{x_{3,0}}\right) -\beta _{2}\right] x_{2}(t) \\ +m_{1}\left( 1-\frac{h^{ON}\left( t\right) }{x_{3,0}}\right) x_{1}(t)+\zeta _{2}(t) \\ \text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if } q(t)=q^{ON} \end{array} \\ \begin{array}{l} \left[ \alpha _{2}\left( 1-d\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) -\beta _{2}\right] x_{2}(t) \\ +m_{1}\left( 1-\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) x_{1}(t)+\zeta _{2}(t) \\ \text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if }q(t)=q^{OFF} \end{array} \end{array} \right. \label{eq: x2 dynamics} \end{equation}} with {\small \begin{align*} \phi _{\alpha }^{ON}(t)& =e^{-\left( h^{ON}\left( t\right) -k_{1}\right) k_{2}} \\ \phi _{\beta }^{ON}(t)& =e^{-\left( h^{ON}\left( t\right) -k_{3}\right) k_{4}} \\ \phi _{\alpha }^{OFF}(t)& =e^{-\left( h^{OFF}\left( t\right) -k_{1}\right) k_{2}} \\ \phi _{\beta }^{OFF}(t)& =e^{-\left( h^{OFF}\left( t\right) -k_{3}\right) k_{4}} \end{align*}} The \emph{discrete (event-driven) dynamics} are dictated by the occurrence of events that cause state transitions. Based on the event set $E=\left\{ e_{1},e_{2}\right\} $ we have defined, the occurrence of $e_{1}$ results in a transition from $q^{ON}$ to $q^{OFF}$ and the occurrence of $e_{2}$ results in a transition from $q^{OFF}$ to $q^{ON}$. 
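Putting the time-driven and event-driven parts together, a minimal Euler--Maruyama-style simulation of (\ref{eq: HSC dynamics})--(\ref{eq: x3 dynamics}) under the hysteresis switching might look as follows. All constants are illustrative placeholders, not the patient-fitted values of \cite{FleckADHS2015}, and the noise terms are taken to be small Gaussian increments purely for demonstration.

```python
import numpy as np

# Illustrative simulation sketch of the switched stochastic dynamics.
# All parameter values below are assumptions for demonstration only.
rng = np.random.default_rng(0)
a1, b1, a2, b2 = 0.025, 0.008, 0.006, 0.004   # proliferation/apoptosis constants
k1, k2, k3, k4 = 2.0, 8.0, 10.0, 2.0          # exponential constants
m1, lam1, mu1, mu3 = 5e-5, 0.0, 0.0, 0.0      # conversion/degradation/production
d, x30, sigma = 1.0, 20.0, 12.5               # CRC factor, androgen constant, decay
th1, th2 = 4.0, 10.0                          # PSA thresholds theta1 < theta2

def drift(x, q):
    x1, x2, x3 = x
    dx1 = (a1 / (1 + np.exp(-(x3 - k1) * k2))
           - b1 / (1 + np.exp(-(x3 - k3) * k4))
           - (m1 * (1 - x3 / x30) + lam1)) * x1 + mu1
    dx2 = (a2 * (1 - d * x3 / x30) - b2) * x2 + m1 * (1 - x3 / x30) * x1
    dx3 = -x3 / sigma + mu3 if q == "ON" else (x30 - x3) / sigma + mu3
    return np.array([dx1, dx2, dx3])

x, q, dt = np.array([15.0, 0.1, 20.0]), "ON", 0.1
for _ in range(5000):
    noise = 1e-3 * np.sqrt(dt) * rng.standard_normal(3)   # zeta_i increments
    x = np.maximum(x + drift(x, q) * dt + noise, 0.0)     # keep states >= 0
    psa = x[0] + x[1]
    if q == "ON" and psa <= th1:      # event e1: suspend therapy
        q = "OFF"
    elif q == "OFF" and psa >= th2:   # event e2: resume therapy
        q = "ON"
```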
\subsection{IAS Sensitivity Analysis} Recall that the main goal of this work is to perform sensitivity analysis of (\ref {eq: HSC dynamics})-(\ref{eq: z2 dynamics}) in order to identify critical model parameters and verify the extent to which the SHA model of prostate cancer evolution is robust to them. Of note, several potentially critical parameters exist in the SHA model from \cite{FleckADHS2015}. This work is a first step towards analyzing their relative importance, in which we select a subset of all model parameters in order to illustrate the applicability of our IPA-based methodology. The parameters we consider here are $\alpha _{1}$ and $\alpha _{2}$ (HSC proliferation constant and CRC proliferation constant, respectively), as well as $\beta _{1}$ and $\beta _{2}$ (HSC apoptosis constant and CRC apoptosis constant, respectively). These constants are intrinsically related to the cancer cell subpopulations' net growth rate, whose value dictates how fast the PSA threshold values will be reached, and ultimately how soon treatment will be suspended or reinstated. As a result, correctly estimating the values of $\alpha _{i}$ and $\beta _{i} $, $i=1,2$, is presumably crucial for the purposes of personalized IAS therapy design. In this context, we define an extended parameter vector $\mathbf{\tilde{\theta }}=\left[ \tilde{\theta } _{1},\ldots ,\tilde{\theta } _{6}\right] $, where $\tilde{\theta } _{1}$ ($\tilde{\theta } _{2}$, respectively) corresponds to the lower (upper, respectively) threshold value of the patient's PSA\ level, $\tilde{\theta } _{3}$ ($\tilde{\theta } _{4}$, respectively) corresponds to the HSC (CRC, respectively) proliferation constant, and $\tilde{\theta } _{5}$ ($\tilde{\theta } _{6}$, respectively) corresponds to the HSC (CRC, respectively) apoptosis constant. 
Within the SHA framework presented above, an IAS therapy can be viewed as a controlled process $u\left( \tilde{\theta} ,t\right) $ characterized by the parameter vector $\tilde{\theta} $, as in (\ref{eq: control set}), whose effect can be quantified in terms of performance metrics of the form $J\left[ u\left( \tilde{\theta} ,t\right) \right] $. Of note, only the first two elements in vector $\tilde{\theta} $ are controllable, while the remaining parameters are not. As in \cite{FleckADHS2015}, here we make use of a sample function defined in terms of complementary measures of therapy success. In particular, we consider the most adequate IAS treatment schemes to be those that $(i)$ ensure PSA levels are kept as low as possible; $(ii)$ reduce the frequency of on and off-treatment cycles. From a practical perspective, $(i)$ translates into the ability to successfully keep the size of cancer cell populations under control, which is directly influenced by the duration of the on and off-treatment periods. On the other hand, $(ii)$ aims at reducing the duration of on-treatment periods, thus decreasing the exposure of patients to medication and its side effects, and consequently improving the patients' quality of life throughout the treatment. Clearly, there is a trade-off between keeping tumor growth under control and the cost associated with the corresponding IAS therapy. The latter is related to the duration of the therapy and could potentially include fixed setup costs incurred when therapy is reinstated. For simplicity, we disregard fixed setup costs and take $(ii)$ to be linearly proportional to the length of the on-treatment cycles. Hence, we define our sample function as the sum of the average PSA level and the average duration of an on-treatment cycle over a fixed time interval $\left[ 0,T\right] $.
We also take into account that it may be desirable to design a therapy scheme which favors $(i)$ over $(ii)$ (or vice-versa) and thus associate weight $W$ with $(i)$ and $1-W$ with $(ii)$, where $0\leq W\leq 1$. Finally, to ensure that the trade-off between $(i)$ and $(ii)$ is captured appropriately, we normalize our sample function: we divide $(i)$ by the value of the patient's PSA level at the start of the first on-treatment cycle ($PSA_{init}$), and normalize $(ii)$ by $T$. Recall that the total population size of prostate cancer cells is assumed to reflect the serum PSA concentration, and that we have defined clock variables which measure the time elapsed in each of the treatment modes, so that our sample function can be written as {\small \begin{equation} L\left( \tilde{\theta} ,x(0),z(0),T\right) = \begin{array}[t]{l} \frac{W}{T}\overset {T}{\underset{0}{\int}}\left[ \frac{x_{1}\left( \tilde{\theta} ,t\right) +x_{2}\left( \tilde{\theta} ,t\right) }{PSA_{init}}\right] dt \\ +\frac{(1-W)}{T}\overset {T}{\underset{0}{\int}}\frac{z_{1}\left( t\right) }{T}dt \end{array} \label{eq: sample function} \end{equation}} where $x(0)$ and $z(0)$ are given initial conditions. We can then define the overall performance metric as {\small \begin{equation} J\left( \tilde{\theta} ,x(0),z(0),T\right) =E\left[ L\left( \tilde{\theta} ,x(0),z(0),T\right) \right] \label{eq: performance metric} \end{equation}} We note that it is not possible to derive a closed-form expression of $J\left( \tilde{\theta} ,x(0),z(0),T\right)$ without imposing limitations on the processes $\{\zeta _{i}(t)\}$, $i=1,\ldots ,3$. Nevertheless, by assuming only that $\zeta _{i}(t)$, $i=1,\ldots ,3$, are piecewise continuous w.p. 1, we can successfully apply the IPA methodology developed for general SHS in \cite{Cassandras2010} and obtain an estimate of $\nabla J\left( \tilde{\theta} \right) $ by evaluating the sample gradient $\nabla L\left( \tilde{\theta} \right) $.
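To make the computation of (\ref{eq: sample function}) concrete, the following sketch approximates $L$ from a discretized sample path by trapezoidal integration. The flat trajectory used to exercise it (PSA pinned at $PSA_{init}$, on-treatment clock at zero) is purely hypothetical and serves only as a sanity check.

```python
import numpy as np

def trapz(y, t):
    """Trapezoidal rule, kept explicit for clarity."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t)))

def sample_function(t, x1, x2, z1, psa_init, T, W=0.5):
    """Approximate the sample function L:
    W/T * int_0^T (x1 + x2)/PSA_init dt  +  (1-W)/T * int_0^T z1/T dt."""
    psa_term = W / T * trapz((x1 + x2) / psa_init, t)
    clock_term = (1.0 - W) / T * trapz(z1 / T, t)
    return psa_term + clock_term

# Hypothetical flat trajectory: only the PSA term contributes, so L = W.
t = np.linspace(0.0, 10.0, 101)
L = sample_function(t, np.full_like(t, 5.0), np.zeros_like(t),
                    np.zeros_like(t), psa_init=5.0, T=10.0)
print(round(L, 6))  # 0.5
```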
We will assume that the derivatives $dL\left( \tilde{\theta} \right) /d\tilde{\theta} _{i}$ exist w.p. 1 for all $ \tilde{\theta} _{i}\in \mathbb{R}^{+}$. It is also simple to verify that $L\left( \tilde{\theta} \right) $ is Lipschitz continuous for $\tilde{\theta} _{i}$ in $\mathbb{R}^{+}$. We will further assume that $\{\zeta _{i}(t)\}$, $i=1,\ldots ,3$, are stationary random processes over $\left[ 0,T\right] $ and that no two events can occur at the same time w.p. 1. Under these conditions, it has been shown in \cite{Cassandras2010} that $dL\left( \tilde{\theta} \right) /d\tilde{\theta} _{i}$ is an \emph{unbiased} estimator of $dJ\left( \tilde{\theta} \right) /d\tilde{\theta} _{i}$, $i=1,\ldots ,6$. Hence, our goal is to compute the sample gradient $\nabla L\left( \tilde{\theta} \right) $ using data extracted from a sample path of the system (e.g., by simulating a sample path of our SHA\ model using clinical data), and use this value as an estimate of $\nabla J\left( \tilde{\theta} \right) $. \section{Infinitesimal Perturbation Analysis} \label{IPA} For completeness, we provide here a brief overview of the IPA framework developed for stochastic hybrid systems in \cite{Cassandras2010}. 
To this end, we adopt a standard SHA definition \cite{Cassandras2008}: {\small \begin{equation} G_{h}=\left( Q,X,E,U,f,\phi ,Inv,guard,\rho ,q_{0},x_{0}\right) \label{eq: SHA} \end{equation}} where $Q$ is a set of discrete states; $X$ is a continuous state space; $E$ is a finite set of events; $U$ is a set of admissible controls; $f$ is a vector field, $f:Q\times X\times U\rightarrow X$; $\phi $ is a discrete state transition function, $\phi :Q\times X\times E\rightarrow Q$; $Inv$ is a set defining an invariant condition (when this condition is violated at some $q\in Q$, a transition must occur); $guard$ is a set defining a guard condition, $guard\subseteq Q\times Q\times X$ (when this condition is satisfied at some $q\in Q$, a transition is allowed to occur); $\rho $ is a reset function, $\rho :Q\times Q\times X\times E\rightarrow X$; $q_{0}$ is an initial discrete state; $x_{0}$ is an initial continuous state. Consider a sample path of the system over $\left[ 0,T\right] $ and denote the time of occurrence of the $k$th event (of any type) by $\tau _{k}\left( \theta \right) $, where $\theta $ corresponds to the control parameter of interest. Although we use the notation $\tau _{k}\left( \theta \right) $ to stress the dependency of the event time on the control parameter, we will subsequently write $\tau _{k}$ for the time of occurrence of the $k$th event when no confusion arises. In order to further simplify notation, we shall denote the state and event time derivatives with respect to parameter $\theta $ as $x^{\prime }(t)\equiv \frac{\partial x(\theta ,t)}{\partial \theta }$ and $\tau _{k}^{\prime }\equiv \frac{\partial \tau _{k}}{\partial \theta }$, respectively, for $k=1,\ldots ,N$. Additionally, considering that the system is in some discrete mode during an interval $\left[ \tau _{k},\tau _{k+1}\right) $, we will denote its time-driven dynamics over that interval as $f_{k}\left( x,\theta ,t\right) $.
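As an illustration of the tuple in (\ref{eq: SHA}), a minimal representation of the two-mode treatment automaton is sketched below. The class and the simplified transition map $\phi :Q\times E\rightarrow Q$ (the continuous-state argument is dropped for brevity) are our own illustrative constructs, not part of \cite{Cassandras2008}.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class SHA:
    """Skeleton of the hybrid automaton G_h, reduced to its discrete part."""
    Q: Tuple[str, ...]               # discrete states
    E: Tuple[str, ...]               # event set
    phi: Dict[Tuple[str, str], str]  # (state, event) -> next state
    q0: str                          # initial discrete state

    def step(self, q: str, e: str) -> str:
        # Discrete state transition function phi.
        return self.phi[(q, e)]

# Two treatment modes; e1 suspends therapy, e2 reinstates it.
model = SHA(Q=("q_ON", "q_OFF"), E=("e1", "e2"),
            phi={("q_ON", "e1"): "q_OFF", ("q_OFF", "e2"): "q_ON"},
            q0="q_ON")
print(model.step("q_ON", "e1"))  # q_OFF
```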
It is shown in \cite{Cassandras2010} that the state derivative satisfies {\small \begin{equation} \frac{d}{dt}x^{\prime }(t)=\frac{\partial f_{k}(t)}{\partial x}x^{\prime }(t)+\frac{\partial f_{k}(t)}{\partial \theta } \label{(eq): General state derivative} \end{equation}} with the following boundary condition: {\small \begin{equation} x^{\prime }(\tau _{k}^{+})=x^{\prime }(\tau _{k}^{-})+\left[ f_{k-1}(\tau _{k}^{-})-f_{k}(\tau _{k}^{+})\right] \cdot \tau _{k}^{\prime } \label{(eq): Boundary condition} \end{equation}} when $x(\theta ,t)$ is continuous in $t$ at $t=\tau _{k}$. Otherwise, {\small \begin{equation} x^{\prime }(\tau _{k}^{+})=\frac{d\rho \left( q,q^{\prime },x,e\right) }{d\theta } \label{eq: Reset fn.} \end{equation}} where $\rho \left( q,q^{\prime },x,e\right) $ is the reset function defined in (\ref{eq: SHA}). Knowledge of $\tau _{k}^{\prime }$ is, therefore, needed in order to evaluate (\ref{(eq): Boundary condition}). Following the framework in \cite{Cassandras2010}, there are three types of events for a general stochastic hybrid system: \textit{(i) Exogenous event}. This type of event causes a discrete state transition which is independent of parameter $\theta $ and, as a result, $\tau _{k}^{\prime }=0$. \textit{(ii) Endogenous event}. In this case, there exists a continuously differentiable function $g_{k}:\Re ^{n}\times \Theta \rightarrow \Re $ such that $\tau _{k}=\min \left\{ t>\tau _{k-1}:g_{k}\left( x(\theta ,t),\theta \right) =0\right\} $, which leads to {\small \begin{equation} \tau _{k}^{\prime }=-\left[ \frac{\partial g_{k}}{\partial x}\cdot f_{k-1}(\tau _{k}^{-})\right] ^{-1}\cdot \left( \frac{\partial g_{k}}{\partial \theta }+\frac{\partial g_{k}}{\partial x}\cdot x^{\prime }(\tau _{k}^{-})\right) \label{(eq): Endogenous event} \end{equation}} where $\frac{\partial g_{k}}{\partial x}\cdot f_{k-1}(\tau _{k}^{-})\neq 0$. \textit{(iii) Induced event}.
Such an event is triggered by the occurrence of another event at time $\tau _{m}\leq \tau _{k}$ and the expression of $\tau _{k}^{\prime }$ depends on the event time derivative of the triggering event ($\tau _{m}^{\prime }$) (details can be found in \cite {Cassandras2010}). Thus, IPA captures how changes in $\theta $ affect the event times and the state of the system. Since interesting performance metrics are usually expressed in terms of $\tau _{k}$ and $x(t)$, IPA can ultimately be used to infer the effect that a perturbation in $\theta $ will have on such metrics. We end this overview by returning to our problem of personalized prostate cancer therapy design and thus defining the derivatives of the states $x_{n}(\tilde{\theta } ,t)$ and $z_{j}(\tilde{\theta } ,t)$ and event times $\tau _{k}(\tilde{\theta } )$ with respect to $\tilde{\theta } _{i}$, $i=1,\ldots ,6$, $j=1,2$, $n=1,\ldots ,3$, as follows: {\small \begin{equation} x_{n,i}^{\prime }(t)\equiv \frac{\partial x_{n}(\tilde{\theta},t)}{\partial \tilde{\theta}_{i}}\text{, \ }z_{j,i}^{\prime }(t)\equiv \frac{\partial z_{j}(\tilde{\theta},t)}{\partial \tilde{\theta}_{i}},\text{ }\tau _{k,i}^{\prime }\equiv \frac{\partial \tau _{k}(\tilde{\theta})}{\partial \tilde{\theta}_{i}} \label{eq: Deriv def} \end{equation}} In what follows, we derive the IPA state and event time derivatives for the events identified in the SHA model of prostate cancer progression. \subsection{State and Event Time Derivatives} We proceed by analyzing the state evolution of our SHA\ model of prostate cancer progression considering each of the states ($q^{ON}$ and $q^{OFF}$) and events ($e_{1}$ and $e_{2}$) therein defined. \emph{1. The system is in state }$q^{ON}$\emph{\ over interevent time interval }$\left[ \tau _{k},\tau _{k+1}\right) $. 
Using (\ref{(eq): General state derivative}) for $x_{1}\left( t\right) $, we obtain, for $i=1,\ldots ,6 $, {\small \begin{equation*} \begin{array}{ll} \frac{d}{dt}x_{1,i}^{\prime }(t) & =\frac{\partial f_{k}^{x_{1}}(t)}{ \partial x_{1}}x_{1}^{\prime }(t)+\frac{\partial f_{k}^{x_{1}}(t)}{\partial x_{2}}x_{2}^{\prime }(t) \\ & +\frac{\partial f_{k}^{x_{1}}(t)}{\partial z_{1}}z_{1}^{\prime }(t)+\frac{ \partial f_{k}^{x_{1}}(t)}{\partial z_{2}}z_{2}^{\prime }(t)+\frac{\partial f_{k}^{x_{1}}(t)}{\partial \tilde{\theta}_{i}} \end{array} \end{equation*}} From (\ref{eq: x1 dynamics}), we have $\frac{\partial f_{k}^{x_{1}}(t)}{ \partial x_{2}}=\frac{\partial f_{k}^{x_{1}}(t)}{\partial z_{j}}=\frac{ \partial f_{k}^{x_{1}}(t)}{\partial \tilde{\theta}_{i}}=0$, $i=1,2,4,6$, $ j=1,2$, and {\small \begin{eqnarray*} && \begin{array}{ll} \frac{\partial f_{k}^{x_{1}}(t)}{\partial x_{1}} & =\alpha _{1}\left[ 1+\phi _{\alpha }^{ON}(t)\right] ^{-1}-\beta _{1}\left[ 1+\phi _{\beta }^{ON}(t) \right] ^{-1} \\ & -m_{1}\left( 1-\frac{h^{ON}\left( t\right) }{x_{3,0}}\right) -\lambda _{1} \end{array} \\ && \begin{array}{ll} \frac{\partial f_{k}^{x_{1}}(t)}{\partial \tilde{\theta}_{3}} & =x_{1}\left[ 1+\phi _{\alpha }^{ON}(t)\right] ^{-1} \end{array} \\ && \begin{array}{ll} \frac{\partial f_{k}^{x_{1}}(t)}{\partial \tilde{\theta}_{5}} & =-x_{1}\left[ 1+\phi _{\beta }^{ON}(t)\right] ^{-1} \end{array} \end{eqnarray*}} It is thus simple to verify that solving (\ref{(eq): General state derivative}) for $x_{1,i}^{\prime }(t)$ yields, for $t\in \left[ \tau _{k},\tau _{k+1}\right) $, {\small \begin{equation} x_{1,i}^{\prime }(t)=x_{1,i}^{\prime }(\tau _{k}^{+})e^{A_{1}\left( t\right) }\text{, \ }i=1,2,4,6 \label{eq: x1_prime ON} \end{equation}} {\small \begin{equation} x_{1,3}^{\prime }(t)=x_{1,3}^{\prime }(\tau _{k}^{+})e^{A_{1}\left( t\right) }+A_{2}\left( t\right) \label{eq: x13_prime ON} \end{equation}} {\small \begin{equation} x_{1,5}^{\prime }(t)=x_{1,5}^{\prime }(\tau _{k}^{+})e^{A_{1}\left( 
t\right) }+A_{3}\left( t\right) \label{eq: x15_prime ON} \end{equation}} with {\small \begin{eqnarray} && \begin{array}{l} A_{1}\left( t\right) \equiv \int_{\tau _{k}}^{t}\left[ \frac{\alpha _{1}}{1+\phi _{\alpha }^{ON}(t)}-\frac{\beta _{1}}{1+\phi _{\beta }^{ON}(t)}\right] dt \\ \text{ \ }+\int_{\tau _{k}}^{t}\frac{m_{1}}{x_{3,0}}h^{ON}\left( t\right) dt-\left( m_{1}+\lambda _{1}\right) \left( t-\tau _{k}\right) \end{array} \label{eq: A(t)} \\ &&A_{2}\left( t\right) \equiv e^{A_{1}\left( t\right) }\int_{\tau _{k}}^{t} \left[ \frac{x_{1}\left( t\right) }{1+\phi _{\alpha }^{ON}(t)} e^{-A_{1}\left( t\right) }\right] dt \\ &&A_{3}\left( t\right) \equiv e^{A_{1}\left( t\right) }\int_{\tau _{k}}^{t} \left[ -\frac{x_{1}\left( t\right) }{1+\phi _{\beta }^{ON}(t)}e^{-A_{1}\left( t\right) }\right] dt \end{eqnarray}} In particular, at $\tau _{k+1}^{-}$: {\small \begin{equation} x_{1,i}^{\prime }(\tau _{k+1}^{-})=x_{1,i}^{\prime }(\tau _{k}^{+})e^{A_{1}\left( \tau _{k}\right) } \label{eq: Deriv x1 t1} \end{equation}} {\small \begin{equation} x_{1,3}^{\prime }(\tau _{k+1}^{-})=x_{1,3}^{\prime }(\tau _{k}^{+})e^{A_{1}\left( \tau _{k}\right) }+A_{2}\left( \tau _{k}\right) \label{eq: Deriv x13 t1} \end{equation}} {\small \begin{equation} x_{1,5}^{\prime }(\tau _{k+1}^{-})=x_{1,5}^{\prime }(\tau _{k}^{+})e^{A_{1}\left( \tau _{k}\right) }+A_{3}\left( \tau _{k}\right) \label{eq: Deriv x15 t1} \end{equation}} where {\small $A_{1}\left( \tau _{k}\right) $}, {\small $A_{2}\left( \tau _{k}\right) $}, and {\small $A_{3}\left( \tau _{k}\right) $} are given from (\ref{eq: A(t)}).
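The closed-form expressions above are instances of variation of parameters for a scalar linear ODE $\frac{d}{dt}x^{\prime }(t)=a(t)x^{\prime }(t)+b(t)$. The sketch below evaluates this solution formula by trapezoidal quadrature and checks it against the analytic solution under the simplifying (hypothetical) assumption of constant coefficients.

```python
import numpy as np

def propagate(xp0, a, b, tau_k, t_end, n=4000):
    """Variation of parameters for dx'/dt = a(t) x' + b(t):
    x'(t) = e^{A(t)} [ x'(tau_k+) + int_{tau_k}^{t} b(s) e^{-A(s)} ds ],
    with A(t) = int_{tau_k}^{t} a(s) ds, evaluated by trapezoidal sums."""
    s = np.linspace(tau_k, t_end, n)
    ds = np.diff(s)
    a_vals, b_vals = a(s), b(s)
    # Cumulative integral A(s) on the grid.
    A = np.concatenate(([0.0],
                        np.cumsum((a_vals[1:] + a_vals[:-1]) / 2 * ds)))
    g = b_vals * np.exp(-A)
    integral = np.sum((g[1:] + g[:-1]) / 2 * ds)
    return np.exp(A[-1]) * (xp0 + integral)

# Constant coefficients admit the analytic solution
# x'(t) = x0 e^{a dt} + (b/a)(e^{a dt} - 1), used here as a check.
a0, b0, x0, dt = -0.3, 0.7, 1.0, 2.0
num = propagate(x0, lambda s: a0 + 0.0 * s, lambda s: b0 + 0.0 * s, 0.0, dt)
exact = x0 * np.exp(a0 * dt) + (b0 / a0) * (np.exp(a0 * dt) - 1.0)
print(abs(num - exact) < 1e-5)  # True
```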
Similarly for $x_{2}\left( t\right) $, we have from (\ref{eq: x2 dynamics}) that $\frac{\partial f_{k}^{x_{2}}(t)}{\partial z_{j}}=\frac{\partial f_{k}^{x_{2}}(t)}{\partial \tilde{\theta}_{i}}=0$, $i=1,2,3,5$, $j=1,2$, and {\small \begin{equation*} \begin{array}{l} \frac{\partial f_{k}^{x_{2}}(t)}{\partial x_{1}}=m_{1}\left( 1-\frac{ h^{ON}\left( t\right) }{x_{3,0}}\right) \\ \frac{\partial f_{k}^{x_{2}}(t)}{\partial x_{2}}=\alpha _{2}\left( 1-d\frac{ h^{ON}\left( t\right) }{x_{3,0}}\right) -\beta _{2} \\ \frac{\partial f_{k}^{x_{2}}(t)}{\partial \tilde{\theta}_{4}}=\left( 1-d \frac{h^{ON}\left( t\right) }{x_{3,0}}\right) x_{2}\left( t\right) \\ \frac{\partial f_{k}^{x_{2}}(t)}{\partial \tilde{\theta}_{6}}=-x_{2}\left( t\right) \end{array} \end{equation*}} Combining the last four equations and solving for $x_{2,i}^{\prime }(t)$ yields, for $t\in \left[ \tau _{k},\tau _{k+1}\right) $, {\small \begin{equation} x_{2,i}^{\prime }(t)=x_{2,i}^{\prime }(\tau _{k}^{+})e^{B_{1}(t)}+B_{2}\left( t,x_{1,i}^{\prime }(\tau _{k}^{+}),A_{1}\left( t\right) \right) \text{, \ \ }i=1,2,3,5 \label{eq: x2_prime ON} \end{equation}} {\small \begin{equation} x_{2,4}^{\prime }(t)=x_{2,4}^{\prime }(\tau _{k}^{+})e^{B_{1}(t)}+B_{3}\left( t,x_{1,4}^{\prime }(\tau _{k}^{+}),B_{1}\left( t\right) \right) \label{eq: x24_prime ON} \end{equation}} {\small \begin{equation} x_{2,6}^{\prime }(t)=x_{2,6}^{\prime }(\tau _{k}^{+})e^{B_{1}(t)}+B_{4}\left( t,x_{1,6}^{\prime }(\tau _{k}^{+}),B_{1}\left( t\right) \right) \label{eq: x26_prime ON} \end{equation}} with {\small \begin{align} & B_{1}\left( t\right) \equiv \int_{\tau _{k}}^{t}\left[ \alpha _{2}\left( 1-d\frac{h^{ON}\left( t\right) }{x_{3,0}}\right) -\beta _{2}\right] dt \label{eq: B(t)} \\ & B_{2}\left( \cdot \right) \equiv e^{B_{1}(t)}\int_{\tau _{k}}^{t}G_{1}\left( t,\tau _{k}\right) e^{-B_{1}(t)}dt \notag \\ & B_{3}\left( \cdot \right) \equiv e^{B_{1}(t)}\int_{\tau _{k}}^{t}G_{2}\left( t,\tau _{k}\right) e^{-B_{1}(t)}dt \\ & B_{4}\left( \cdot 
\right) \equiv e^{B_{1}(t)}\int_{\tau _{k}}^{t}G_{3}\left( t,\tau _{k}\right) e^{-B_{1}(t)}dt \end{align}} where {\small $G_{1}\left( t,\tau _{k}\right) =m_{1}\left( 1-\frac{h^{ON}\left( t\right) }{x_{3,0}}\right) x_{1,i}^{\prime }(\tau _{k}^{+})e^{A_{1}\left( t\right) }$}, {\small $G_{2}\left( t,\tau _{k}\right) =x_{2}\left( t\right) \left( 1-d\frac{h^{ON}\left( t\right) }{x_{3,0}}\right) +x_{1,4}^{\prime }(t)\cdot m_{1}\left( 1-\frac{h^{ON}\left( t\right) }{x_{3,0}}\right) $}, {\small $G_{3}\left( t,\tau _{k}\right) =x_{1,6}^{\prime }(t)\cdot m_{1}\left( 1-\frac{h^{ON}\left( t\right) }{x_{3,0}}\right) -x_{2}\left( t\right) $}, $t\in \left[ \tau _{k},\tau _{k+1}\right) $. In particular, at $\tau _{k+1}^{-}$: {\small \begin{equation} x_{2,i}^{\prime }(\tau _{k+1}^{-})=x_{2,i}^{\prime }(\tau _{k}^{+})e^{B_{1}(\tau _{k})}+B_{2}\left( \tau _{k},x_{1,i}^{\prime }(\tau _{k}^{+}),A_{1}\left( \tau _{k}\right) \right) \label{eq: Deriv x2 t1} \end{equation}} {\small \begin{equation} x_{2,4}^{\prime }(\tau _{k+1}^{-})=x_{2,4}^{\prime }(\tau _{k}^{+})e^{B_{1}(\tau _{k})}+B_{3}\left( \tau _{k},x_{1,4}^{\prime }(\tau _{k}^{+}),B_{1}\left( \tau _{k}\right) \right) \label{eq: Deriv x24 t1} \end{equation}} {\small \begin{equation} x_{2,6}^{\prime }(\tau _{k+1}^{-})=x_{2,6}^{\prime }(\tau _{k}^{+})e^{B_{1}(\tau _{k})}+B_{4}\left( \tau _{k},x_{1,6}^{\prime }(\tau _{k}^{+}),B_{1}\left( \tau _{k}\right) \right) \label{eq: Deriv x26 t1} \end{equation}} where {\small $B_{1}\left( \tau _{k}\right) $}, {\small $B_{2}\left( \tau _{k},x_{1,i}^{\prime }(\tau _{k}^{+}),A_{1}\left( \tau _{k}\right) \right) $}, {\small $B_{3}\left( \tau _{k},x_{1,4}^{\prime }(\tau _{k}^{+}),B_{1}\left( \tau _{k}\right) \right) $}, and {\small $B_{4}\left( \tau _{k},x_{1,6}^{\prime }(\tau _{k}^{+}),B_{1}\left( \tau _{k}\right) \right) $} are given from (\ref{eq: B(t)}).
Finally, for the \textquotedblleft clock\textquotedblright\ state variable, from (\ref{eq: z1 dynamics})-(\ref{eq: z2 dynamics}) we have $\frac{\partial f_{k}^{z_{i}}(t)}{\partial x_{n}}=\frac{\partial f_{k}^{z_{i}}(t)}{\partial z_{j}}=\frac{\partial f_{k}^{z_{i}}(t)}{\partial \tilde{\theta}_{i}}=0$, $n,j=1,2$, $i=1,\ldots ,6$, so that $\frac{d}{dt}z_{j,i}^{\prime }(t)=0$, $j=1,2$, $i=1,\ldots ,6$, for $t\in \left[ \tau _{k},\tau _{k+1}\right) $. Hence, $z_{j,i}^{\prime }(t)=z_{j,i}^{\prime }(\tau _{k}^{+})$, $j=1,2$, $i=1,\ldots ,6$, and $t\in \left[ \tau _{k},\tau _{k+1}\right) $. \emph{2. The system is in state }$q^{OFF}$\emph{\ over interevent time interval }$\left[ \tau _{k},\tau _{k+1}\right) $. Starting with $x_{1}\left( t\right) $, based on (\ref{eq: x1 dynamics}) we once again have $\frac{\partial f_{k}^{x_{1}}(t)}{\partial x_{2}}=\frac{\partial f_{k}^{x_{1}}(t)}{\partial z_{j}}=\frac{\partial f_{k}^{x_{1}}(t)}{\partial \tilde{\theta}_{i}}=0$, $j=1,2$, $i=1,2,4,6$, but now {\small \begin{eqnarray*} && \begin{array}{l} \frac{\partial f_{k}^{x_{1}}(t)}{\partial x_{1}}=\alpha _{1}\left[ 1+\phi _{\alpha }^{OFF}(t)\right] ^{-1}-\beta _{1}\left[ 1+\phi _{\beta }^{OFF}(t) \right] ^{-1} \\ \text{ \ \ \ \ \ \ \ \ }-m_{1}\left( 1-\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) -\lambda _{1} \end{array} \\ && \begin{array}{ll} \frac{\partial f_{k}^{x_{1}}(t)}{\partial \tilde{\theta}_{3}} & =x_{1}\left[ 1+\phi _{\alpha }^{OFF}(t)\right] ^{-1} \end{array} \\ && \begin{array}{ll} \frac{\partial f_{k}^{x_{1}}(t)}{\partial \tilde{\theta}_{5}} & =-x_{1}\left[ 1+\phi _{\beta }^{OFF}(t)\right] ^{-1} \end{array} \end{eqnarray*}} Therefore, (\ref{(eq): General state derivative}) implies that, for $t\in \left[ \tau _{k},\tau _{k+1}\right) $: {\small \begin{equation} x_{1,i}^{\prime }(t)=x_{1,i}^{\prime }(\tau _{k}^{+})e^{C_{1}\left( t\right) }\text{, \ }i=1,2,4,6 \label{eq: x1_prime OFF} \end{equation}} {\small \begin{equation} x_{1,3}^{\prime }(t)=x_{1,3}^{\prime }(\tau
_{k}^{+})e^{C_{1}\left( t\right) }+C_{2}\left( t\right) \label{eq: x13_prime OFF} \end{equation}} {\small \begin{equation} x_{1,5}^{\prime }(t)=x_{1,5}^{\prime }(\tau _{k}^{+})e^{C_{1}\left( t\right) }+C_{3}\left( t\right) \label{eq: x15_prime OFF} \end{equation}} with {\small \begin{eqnarray} && \begin{array}{l} C_{1}\left( t\right) \equiv \int_{\tau _{k}}^{t}\left[ \frac{\alpha _{1}}{1+\phi _{\alpha }^{OFF}(t)}-\frac{\beta _{1}}{1+\phi _{\beta }^{OFF}(t)}\right] dt \\ \text{ }+\int_{\tau _{k}}^{t}\frac{m_{1}}{x_{3,0}}h^{OFF}\left( t\right) dt-\left( m_{1}+\lambda _{1}\right) \left( t-\tau _{k}\right) \end{array} \label{eq: C(t)} \\ &&C_{2}\left( t\right) \equiv e^{C_{1}\left( t\right) }\int_{\tau _{k}}^{t}\left[ \frac{x_{1}\left( t\right) }{1+\phi _{\alpha }^{OFF}(t)}e^{-C_{1}\left( t\right) }\right] dt \\ &&C_{3}\left( t\right) \equiv e^{C_{1}\left( t\right) }\int_{\tau _{k}}^{t}\left[ -\frac{x_{1}\left( t\right) }{1+\phi _{\beta }^{OFF}(t)}e^{-C_{1}\left( t\right) }\right] dt \end{eqnarray}} In particular, at $\tau _{k+1}^{-}$: {\small \begin{equation} x_{1,i}^{\prime }(\tau _{k+1}^{-})=x_{1,i}^{\prime }(\tau _{k}^{+})e^{C_{1}\left( \tau _{k}\right) } \label{eq: Deriv x1 t3minus} \end{equation}} {\small \begin{equation} x_{1,3}^{\prime }(\tau _{k+1}^{-})=x_{1,3}^{\prime }(\tau _{k}^{+})e^{C_{1}\left( \tau _{k}\right) }+C_{2}\left( \tau _{k}\right) \label{eq: Deriv x13 t3minus} \end{equation}} {\small \begin{equation} x_{1,5}^{\prime }(\tau _{k+1}^{-})=x_{1,5}^{\prime }(\tau _{k}^{+})e^{C_{1}\left( \tau _{k}\right) }+C_{3}\left( \tau _{k}\right) \label{eq: Deriv x15 t3minus} \end{equation}} where {\small $C_{1}\left( \tau _{k}\right) $}, {\small $C_{2}\left( \tau _{k}\right) $}, and {\small $C_{3}\left( \tau _{k}\right) $} are given from (\ref{eq: C(t)}).
Similarly for $x_{2}(t)$, we have {\small \begin{equation*} \begin{array}{l} \frac{\partial f_{k}^{x_{2}}(t)}{\partial x_{1}}=m_{1}\left( 1-\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) \\ \frac{\partial f_{k}^{x_{2}}(t)}{\partial x_{2}}=\alpha _{2}\left( 1-d\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) -\beta _{2} \\ \frac{\partial f_{k}^{x_{2}}(t)}{\partial \tilde{\theta}_{4}}=\left( 1-d\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) x_{2}\left( t\right) \\ \frac{\partial f_{k}^{x_{2}}(t)}{\partial \tilde{\theta}_{6}}=-x_{2}\left( t\right) \end{array} \end{equation*}} It is thus straightforward to verify that (\ref{(eq): General state derivative}) yields, for $t\in \left[ \tau _{k},\tau _{k+1}\right) $, {\small \begin{equation} x_{2,i}^{\prime }(t)=x_{2,i}^{\prime }(\tau _{k}^{+})e^{D_{1}(t)}+D_{2}\left( t,x_{1,i}^{\prime }(\tau _{k}^{+}),C_{1}\left( t\right) \right) \text{, \ \ }i=1,2,3,5 \label{eq: x2_prime OFF} \end{equation}} {\small \begin{equation} x_{2,4}^{\prime }(t)=x_{2,4}^{\prime }(\tau _{k}^{+})e^{D_{1}(t)}+D_{3}\left( t,x_{1,4}^{\prime }(\tau _{k}^{+}),D_{1}\left( t\right) \right) \label{eq: x24_prime OFF} \end{equation}} {\small \begin{equation} x_{2,6}^{\prime }(t)=x_{2,6}^{\prime }(\tau _{k}^{+})e^{D_{1}(t)}+D_{4}\left( t,x_{1,6}^{\prime }(\tau _{k}^{+}),D_{1}\left( t\right) \right) \label{eq: x26_prime OFF} \end{equation}} with {\small \begin{align} & D_{1}\left( t\right) \equiv \int_{\tau _{k}}^{t}\left[ \alpha _{2}\left( 1-d\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) -\beta _{2}\right] dt \label{eq: D(t)} \\ & D_{2}\left( \cdot \right) \equiv e^{D_{1}(t)}\int_{\tau _{k}}^{t}G_{2}\left( t,\tau _{k}\right) e^{-D_{1}(t)}dt \notag \\ & D_{3}\left( \cdot \right) \equiv e^{D_{1}(t)}\int_{\tau _{k}}^{t}G_{3}\left( t,\tau _{k}\right) e^{-D_{1}(t)}dt \\ & D_{4}\left( \cdot \right) \equiv e^{D_{1}(t)}\int_{\tau _{k}}^{t}G_{4}\left( t,\tau _{k}\right) e^{-D_{1}(t)}dt \end{align}} where {\small $G_{2}\left( t,\tau _{k}\right) =m_{1}\left(
1-\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) x_{1,i}^{\prime }(\tau _{k}^{+})e^{C_{1}\left( t\right) }$}, {\small $G_{3}\left( t,\tau _{k}\right) =x_{2}\left( t\right) \left( 1-d\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) +x_{1,4}^{\prime }(t)\cdot m_{1}\left( 1-\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) $}, {\small $G_{4}\left( t,\tau _{k}\right) =x_{1,6}^{\prime }(t)\cdot m_{1}\left( 1-\frac{h^{OFF}\left( t\right) }{x_{3,0}}\right) -x_{2}\left( t\right) $}, $t\in \left[ \tau _{k},\tau _{k+1}\right) $. In particular, at $\tau _{k+1}^{-}$: {\small \begin{equation} x_{2,i}^{\prime }(\tau _{k+1}^{-})=x_{2,i}^{\prime }(\tau _{k}^{+})e^{D_{1}(\tau _{k})}+D_{2}\left( \tau _{k},x_{1,i}^{\prime }(\tau _{k}^{+}),C_{1}\left( \tau _{k}\right) \right) \label{eq: Deriv x2 tkminus} \end{equation}} {\small \begin{equation} x_{2,4}^{\prime }(\tau _{k+1}^{-})=x_{2,4}^{\prime }(\tau _{k}^{+})e^{D_{1}(\tau _{k})}+D_{3}\left( \tau _{k},x_{1,4}^{\prime }(\tau _{k}^{+}),D_{1}\left( \tau _{k}\right) \right) \label{eq: Deriv x24 tkminus} \end{equation}} {\small \begin{equation} x_{2,6}^{\prime }(\tau _{k+1}^{-})=x_{2,6}^{\prime }(\tau _{k}^{+})e^{D_{1}(\tau _{k})}+D_{4}\left( \tau _{k},x_{1,6}^{\prime }(\tau _{k}^{+}),D_{1}\left( \tau _{k}\right) \right) \label{eq: Deriv x26 tkminus} \end{equation}} where {\small $D_{1}\left( \tau _{k}\right) $}, {\small $D_{2}\left( \cdot \right) $}, {\small $D_{3}\left( \cdot \right) $}, and {\small $D_{4}\left( \cdot \right) $} are given from (\ref{eq: D(t)}). Finally, for the \textquotedblleft clock\textquotedblright\ state variable, based on (\ref{eq: z1 dynamics})-(\ref{eq: z2 dynamics}) we once again have $\frac{\partial f_{k}^{z_{i}}(t)}{\partial x_{n}}=\frac{\partial f_{k}^{z_{i}}(t)}{\partial z_{j}}=\frac{\partial f_{k}^{z_{i}}(t)}{\partial \tilde{\theta}_{i}}=0$, $n,j=1,2$, $i=1,\ldots ,6$, so that $\frac{d}{dt}z_{j,i}^{\prime }(t)=0$, $j=1,2$, $i=1,\ldots ,6$, for $t\in \left[ \tau _{k},\tau _{k+1}\right) $.
As a result, $z_{j,i}^{\prime }(t)=z_{j,i}^{\prime }(\tau _{k}^{+})$, $j=1,2$, $i=1,\ldots ,6$, and $t\in \left[ \tau _{k},\tau _{k+1}\right) $. \emph{3. A state transition from }$q^{ON}$\emph{\ to }$q^{OFF}$\emph{\ occurs at time }$\tau _{k}$. This necessarily implies that event $e_{1}$ took place at time $\tau _{k}$, i.e., $q(t)=q^{ON}$, $t\in \left[ \tau _{k-1},\tau _{k}\right) $ and $q(t)=q^{OFF}$, $t\in \left[ \tau _{k},\tau _{k+1}\right) $. From (\ref{(eq): Boundary condition}) we have, for $ i=1,\ldots ,6$, {\small \begin{equation} x_{1,i}^{\prime }(\tau _{k}^{+})=x_{1,i}^{\prime }(\tau _{k}^{-})+\left[ f_{k}^{x_{1}}(\tau _{k}^{-})-f_{k+1}^{x_{1}}(\tau _{k}^{+})\right] \cdot \tau _{k,i}^{\prime } \label{eq: x1_prime ON/OFF} \end{equation}} and {\small \begin{equation} x_{2,i}^{\prime }(\tau _{k}^{+})=x_{2,i}^{\prime }(\tau _{k}^{-})+\left[ f_{k}^{x_{2}}(\tau _{k}^{-})-f_{k+1}^{x_{2}}(\tau _{k}^{+})\right] \cdot \tau _{k,i}^{\prime } \label{eq: x2_prime ON/OFF} \end{equation}} where $f_{k}^{x_{1}}(\tau _{k}^{-})-f_{k+1}^{x_{1}}(\tau _{k}^{+})$ and $ f_{k}^{x_{2}}(\tau _{k}^{-})-f_{k+1}^{x_{2}}(\tau _{k}^{+})$ ultimately depend on $h^{ON}\left( \tau _{k}^{-}\right) $ and $h^{OFF}\left( \tau _{k}^{+}\right) $. Evaluating $h^{ON}\left( \tau _{k}^{-}\right) $ from (\ref {eq: h_ON}) over the appropriate time interval results in {\small \begin{equation*} \begin{array}{ll} h^{ON}\left( \tau _{k}^{-}\right) & =x_{3}(\tau _{k-1}^{+})e^{-\left( \tau _{k}-\tau _{k-1}\right) /\sigma } \\ & +\mu _{3}\sigma \lbrack 1-e^{-\left( \tau _{k}-\tau _{k-1}\right) /\sigma }]+\tilde{\zeta}_{3}(\tau _{k}) \end{array} \end{equation*}} and it follows directly from (\ref{eq: h_OFF}) that $h^{OFF}\left( \tau _{k}^{+}\right) =x_{3}(\tau _{k}^{+})$. Moreover, by continuity of $x_{n}(t)$ (due to conservation of mass), $x_{n}(\tau _{k}^{+})=x_{n}(\tau _{k}^{-})$, $ n=1,2$. 
Also, since we have assumed that $\left\{ \zeta _{i}(t)\right\} $, $ i=1,\ldots ,3$, is piecewise continuous w.p.1 and that no two events can occur at the same time w.p.1, $\zeta _{i}(\tau _{k}^{-})=$ $\zeta _{i}(\tau _{k}^{+})$, $i=1,\ldots ,3$. Hence, for $x_{1}(t)$, evaluating $\Delta _{f}^{1}\left( \tau _{k}\right) \equiv f_{k}^{x_{1}}(\tau _{k}^{-})-f_{k+1}^{x_{1}}(\tau _{k}^{+})$ yields {\small \begin{equation} \begin{array}{l} \Delta _{f}^{1}\left( \tau _{k},\zeta _{3}\left( \tau _{k}\right) \right) =\left\{ \alpha _{1}\left[ 1+\phi _{\alpha }^{ON}(\tau _{k}^{-})\right] ^{-1}\right. \\ \text{ }-\alpha _{1}\left[ 1+\phi _{\alpha }^{OFF}(\tau _{k}^{+})\right] ^{-1}-\beta _{1}\left[ 1+\phi _{\beta }^{ON}(\tau _{k}^{-})\right] ^{-1} \\ \text{ }+\beta _{1}\left[ 1+\phi _{\beta }^{OFF}(\tau _{k}^{+})\right] ^{-1} \\ \text{ }\left. +\frac{m_{1}}{x_{3,0}}\left[ h^{ON}\left( \tau _{k}^{-}\right) -x_{3}(\tau _{k})\right] \right\} \cdot x_{1}(\tau _{k}) \end{array} \label{eq: delta f1 on/off} \end{equation}} Finally, the term $\tau _{k,i}^{\prime }$, which corresponds to the event time derivative with respect to $\tilde{\theta}_{i}$ at event time $\tau _{k} $, is determined using (\ref{(eq): Endogenous event}), as detailed in (\ref {eq: Lemma 1}) later. A similar analysis applies to $x_{2}(t)$, so that $f_{k}^{x_{2}}(\tau _{k}^{-})$ and $f_{k+1}^{x_{2}}(\tau _{k}^{+})$ ultimately depend on $ h^{ON}\left( \tau _{k}^{-}\right) $ and $h^{OFF}\left( \tau _{k}^{+}\right) $ , respectively. 
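Once $\Delta _{f}^{1}\left( \tau _{k}\right) $ and $\tau _{k,i}^{\prime }$ are available, applying the boundary condition (\ref{eq: x1_prime ON/OFF}) reduces to a single affine update. A minimal sketch, with purely hypothetical numerical values standing in for quantities computed along a sample path:

```python
def boundary_update(xp_minus, f_minus, f_plus, tau_prime):
    """Boundary condition at an event time tau_k:
    x'(tau_k+) = x'(tau_k-) + [f_{k-1}(tau_k-) - f_k(tau_k+)] * tau_k'."""
    return xp_minus + (f_minus - f_plus) * tau_prime

# Hypothetical values at an e1 (q_ON -> q_OFF) event.
xp_plus = boundary_update(xp_minus=0.2, f_minus=1.5, f_plus=0.9,
                          tau_prime=0.1)
print(round(xp_plus, 6))  # 0.26
```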
Hence, evaluating $\Delta _{f}^{2}\left( \tau _{k}\right) \equiv f_{k}^{x_{2}}(\tau _{k}^{-})-f_{k+1}^{x_{2}}(\tau _{k}^{+})$ from ( \ref{eq: x2 dynamics}) yields {\small \begin{equation} \begin{array}{ll} \Delta _{f}^{2}\left( \tau _{k},\zeta _{3}\left( \tau _{k}\right) \right) & =\frac{\alpha _{2}d}{x_{3,0}}\left[ x_{3}(\tau _{k})-h^{ON}\left( \tau _{k}^{-}\right) \right] \cdot x_{2}(\tau _{k}) \\ & -\frac{m_{1}}{x_{3,0}}\left[ h^{ON}\left( \tau _{k}^{-}\right) -x_{3}(\tau _{k})\right] \cdot x_{1}(\tau _{k}) \end{array} \label{eq: deltaF2 ON/OFF} \end{equation}} In the case of the \textquotedblleft clock\textquotedblright\ state variable, $z_{1}(t)$ is discontinuous in $t$ at $t=\tau _{k}$, while $ z_{2}(t)$ is continuous. Hence, based on (\ref{eq: Reset fn.}) and (\ref{eq: z1 dynamics}), we have that $z_{1,i}^{\prime }(\tau _{k}^{+})=0$. From (\ref {(eq): Boundary condition}) and (\ref{eq: z2 dynamics}), it is straightforward to verify that $z_{2,i}^{\prime }(\tau _{k}^{+})=z_{2,i}^{\prime }(\tau _{k}^{-})-\tau _{k,i}^{\prime }$, $ i=1,\ldots ,6$. \emph{4. A state transition from }$q^{OFF}$\emph{\ to }$q^{ON}$\emph{\ occurs at time }$\tau _{k}$. This necessarily implies that event $e_{2}$ took place at time $\tau _{k}$, i.e., $q(t)=q^{OFF}$, $t\in \left[ \tau _{k-1},\tau _{k}\right) $ and $q(t)=q^{ON}$, $t\in \left[ \tau _{k},\tau _{k+1}\right) $. The same reasoning as above holds, so that (\ref{eq: x1_prime ON/OFF})-(\ref{eq: x2_prime ON/OFF}) also apply. For $x_{1}(t)$, $ f_{k}^{x_{1}}(\tau _{k}^{-})-f_{k+1}^{x_{1}}(\tau _{k}^{+})$ can be evaluated from (\ref{eq: x1 dynamics}) and ultimately depends on $ h^{OFF}\left( \tau _{k}^{-}\right) $ and $h^{ON}\left( \tau _{k}^{+}\right) $ . 
Evaluating $h^{OFF}\left( \tau _{k}^{-}\right) $ from (\ref{eq: h_OFF}) over the appropriate time interval results in {\small \begin{equation*} \begin{array}{ll} h^{OFF}\left( \tau _{k}^{-}\right) & =x_{3}(\tau _{k-1}^{+})e^{-\left( \tau _{k}-\tau _{k-1}\right) /\sigma } \\ & +(\mu _{3}\sigma +x_{3,0})[1-e^{-\left( \tau _{k}-\tau _{k-1}\right) /\sigma }]+\tilde{\zeta}_{3}(\tau _{k}) \end{array} \end{equation*}} and it follows directly from (\ref{eq: h_ON}) that $h^{ON}\left( \tau _{k}^{+}\right) =x_{3}(\tau _{k}^{+})$. As in the previous case, continuity due to conservation of mass applies, so that evaluating $\Delta _{f}^{1}(\tau _{k})\equiv f_{k}^{x_{1}}(\tau _{k}^{-})-f_{k+1}^{x_{1}}(\tau _{k}^{+})$ yields {\small \begin{equation} \begin{array}{l} \Delta _{f}^{1}(\tau _{k},\zeta _{3}\left( \tau _{k}\right) )=\left\{ \alpha _{1}\left[ 1+\phi _{\alpha }^{OFF}(\tau _{k}^{-})\right] ^{-1}\right. \\ \text{ }-\alpha _{1}\left[ 1+\phi _{\alpha }^{ON}(\tau _{k}^{+})\right] ^{-1}-\beta _{1}\left[ 1+\phi _{\beta }^{OFF}(\tau _{k}^{-})\right] ^{-1} \\ \text{ }+\beta _{1}\left[ 1+\phi _{\beta }^{ON}(\tau _{k}^{+})\right] ^{-1} \\ \text{ }\left. 
+\frac{m_{1}}{x_{3,0}}\left[ h^{OFF}\left( \tau _{k}^{-}\right) -x_{3}(\tau _{k})\right] \right\} \cdot x_{1}(\tau _{k}) \end{array} \label{eq: deltaF1 OFF/ON} \end{equation}} Similarly for $x_{2}(t)$, by evaluating $\Delta _{f}^{2}(\tau _{k})\equiv f_{k}^{x_{2}}(\tau _{k}^{-})-f_{k+1}^{x_{2}}(\tau _{k}^{+})$ from (\ref{eq: x2 dynamics}), and making the appropriate simplifications due to continuity, we obtain {\small \begin{equation} \begin{array}{ll} \Delta _{f}^{2}(\tau _{k},\zeta _{3}\left( \tau _{k}\right) ) & =\frac{ \alpha _{2}d}{x_{3,0}}\left[ x_{3}(\tau _{k})-h^{OFF}\left( \tau _{k}^{-}\right) \right] \cdot x_{2}(\tau _{k}) \\ & -\frac{m_{1}}{x_{3,0}}\left[ h^{OFF}\left( \tau _{k}^{-}\right) -x_{3}(\tau _{k})\right] \cdot x_{1}(\tau _{k}) \end{array} \label{eq: deltaF2 OFF/ON} \end{equation}} In the case of the \textquotedblleft clock\textquotedblright\ state variable, $z_{1}(t)$ is continuous in $t$ at $t=\tau _{k}$, while $z_{2}(t)$ is discontinuous. As a result, based on (\ref{(eq): Boundary condition}) and (\ref{eq: z1 dynamics}), we have that $z_{1,i}^{\prime }(\tau _{k}^{+})=z_{1,i}^{\prime }(\tau _{k}^{-})-\tau _{k,i}^{\prime }$. From (\ref {eq: Reset fn.}) and (\ref{eq: z2 dynamics}), it is simple to verify that $ z_{2,i}^{\prime }(\tau _{k}^{+})=0$, $i=1,\ldots ,6$. Note that, since $z_{j,i}^{\prime }(t)=z_{j,i}^{\prime }(\tau _{k}^{+})$, $ t\in \left[ \tau _{k},\tau _{k+1}\right) $, we will have that $ z_{j,i}^{\prime }(\tau _{k}^{-})=z_{j,i}^{\prime }(\tau _{k-1}^{+})$, $j=1,2$ , $i=1,\ldots ,6$. Moreover, the sample path of our SHA consists of a sequence of alternating $e_{1}$ and $e_{2}$ events, which implies that $ z_{1,i}^{\prime }(\tau _{k}^{-})=0$ if event $e_{1}$ occurred at $\tau _{k-1} $, while $z_{2,i}^{\prime }(\tau _{k}^{-})=0$ if event $e_{2}$ occurred at $\tau _{k-1}$. 
Then, adopting the notation $p,\overline{p} =\left\{ 1,2\right\} $ such that $p+\overline{p}=3$, we have: {\small \begin{equation} z_{p,i}^{\prime }(\tau _{k}^{+})=\left\{ \begin{array}{ll} -\tau _{k,i}^{\prime } & \text{if event }e_{\overline{p}}\text{ occurs at } \tau _{k} \\ 0 & \text{otherwise} \end{array} \right. \label{eq: State deriv. z1} \end{equation}} We now proceed with a general result which applies to all events defined for our SHA\ model. We denote the time of occurrence of the $j$th state transition by $\tau _{j}$, define its derivative with respect to the control parameters as $\tau _{j,i}^{\prime }\equiv \frac{\partial \tau _{j}}{ \partial \tilde{\theta}_{i}}$, $i=1,\ldots ,6$, and also define $ f_{j}^{x_{n}}\left( \tau _{j}\right) \equiv \dot{x}_{n}(\tau _{j})$, $n=1,2$. \begin{lemma} When an event $e_{p}$, $p=1,2$, occurs, the derivative $\tau _{j,i}^{\prime } $, $i=1,\ldots ,6$, of state transition times $\tau _{j}$, $j=1,2,\ldots $ with respect to the control parameters $\tilde{\theta}_{i}$, $i=1,\ldots ,6$ , satisfies: \begin{equation} \tau _{j,i}^{\prime }=\left\{ \begin{array}{ll} \frac{\mathbf{1}-x_{1,i}^{\prime }(\tau _{j}^{-})-x_{2,i}^{\prime }(\tau _{j}^{-})}{f_{j-1}^{x_{1}}(\tau _{j}^{-})+f_{j-1}^{x_{2}}(\tau _{j}^{-})} & \begin{array}{l} \text{if }e_{1}\text{ occurs and }i=1 \\ \text{or }e_{2}\text{ occurs and }i=2 \end{array} \\ & \\ \frac{-x_{1,i}^{\prime }(\tau _{j}^{-})-x_{2,i}^{\prime }(\tau _{j}^{-})}{ f_{j-1}^{x_{1}}(\tau _{j}^{-})+f_{j-1}^{x_{2}}(\tau _{j}^{-})} & \begin{array}{l} \text{if }e_{1}\text{ occurs and }i \neq 1 \\ \text{or }e_{2}\text{ occurs and }i \neq 2 \end{array} \end{array} \right. \label{eq: Lemma 1} \end{equation} \end{lemma} \begin{proof} The proof is omitted here, but can be found in \cite{FleckADHS2015}. 
\end{proof} We note that the numerator in (\ref{eq: Lemma 1}) is determined using (\ref {eq: Deriv x1 t1}) and (\ref{eq: Deriv x2 t1}) if $q(\tau _{j}^{-})=q^{ON}$, or (\ref{eq: Deriv x1 t3minus}) and (\ref{eq: Deriv x2 tkminus}) if $q(\tau _{j}^{-})=q^{OFF}$. Moreover, the denominator in (\ref{eq: Lemma 1}) is computed using (\ref{eq: x1 dynamics})-(\ref{eq: x2 dynamics}) and it is simple to verify that, if event $e_{1}$ takes place at time $\tau _{j}$, {\small \begin{equation} \begin{array}{l} f_{j-1}^{x_{1}}(\tau _{j}^{-})+f_{j-1}^{x_{2}}(\tau _{j}^{-})=\alpha _{1} \left[ 1+\phi _{\alpha }^{ON}(\tau _{j}^{-})\right] ^{-1}\cdot x_{1}(\tau _{j}) \\ \text{ \ }-\left\{ \beta _{1}\left[ 1+\phi _{\beta }^{ON}(\tau _{j}^{-}) \right] ^{-1}+\lambda _{1}\right\} \cdot x_{1}(\tau _{j})+\mu _{1} \\ \text{ \ }+\left[ \alpha _{2}\left( 1-d\frac{h^{ON}\left( \tau _{j}^{-}\right) }{x_{3,0}}\right) -\beta _{2}\right] \cdot x_{2}(\tau _{j}) \\ \text{ \ \ }+\zeta _{1}(\tau _{j})+\zeta _{2}(\tau _{j}) \end{array} \label{eq: event e1} \end{equation}} and, if event $e_{2}$ takes place at time $\tau _{j}$, {\small \begin{equation} \begin{array}{l} f_{j-1}^{x_{1}}(\tau _{j}^{-})+f_{j-1}^{x_{2}}(\tau _{j}^{-})=\alpha _{1} \left[ 1+\phi _{\alpha }^{OFF}(\tau _{j}^{-})\right] ^{-1}\cdot x_{1}(\tau _{j}) \\ \text{ \ \ }-\left\{ \beta _{1}\left[ 1+\phi _{\beta }^{OFF}(\tau _{j}^{-}) \right] ^{-1}+\lambda _{1}\right\} \cdot x_{1}(\tau _{j})+\mu _{1} \\ \text{ \ }+\left[ \alpha _{2}\left( 1-d\frac{h^{OFF}\left( \tau _{j}^{-}\right) }{x_{3,0}}\right) -\beta _{2}\right] \cdot x_{2}(\tau _{j}) \\ \text{ \ \ }+\zeta _{1}(\tau _{j})+\zeta _{2}(\tau _{j}) \end{array} \label{eq: event e2} \end{equation}} \subsection{Cost Derivative} Let us denote the total number of on and off-treatment periods (complete or incomplete) in $\left[ 0,T\right] $ by $K_{T}$. Also let $\xi _{k}$ denote the start of the $k^{th}$ period and $\eta _{k}$ denote the end of the $ k^{th}$ period (of either type). 
Finally, let $M_{T}=\lfloor \frac{K_{T}}{2} \rfloor $ be the total number of complete on-treatment periods, and $\Delta _{m}^{ON}$ denote the duration of the $m^{th}$ complete on-treatment period, where clearly \begin{equation*} \Delta _{m}^{ON}\equiv \eta _{m}-\xi _{m}\text{, }m=1,2,\ldots \end{equation*} It was shown in \cite{FleckADHS2015} that the derivative of the sample function $L(\tilde{\theta} )$ with respect to the control parameters satisfies: \begin{equation} \begin{array}{l} \frac{dL(\tilde{\theta} )}{d\tilde{\theta} _{i}}=\frac{W}{T}\overset{K_{T}}{\underset{k=1}{ \sum }}\int_{\xi _{k}}^{\eta _{k}}\left[ \frac{x_{1,i}^{\prime }(\tilde{\theta} ,t)+x_{2,i}^{\prime }(\tilde{\theta} ,t)}{PSA_{init}}\right] dt \\ \text{ \ \ }+\frac{\left( 1-W\right) }{T}\overset{M_{T}}{\underset{m=1}{\sum }}\frac{\Delta _{m}^{ON}}{T}\cdot \left( \eta _{m,i}^{\prime }-\xi _{m,i}^{\prime }\right) \\ \text{ \ \ }-\frac{\left( 1-W\right) }{T}\mathbf{1}\left[ K_{T}\text{ is odd} \right] \cdot \xi _{M_{T}+1,i}^{\prime }\cdot \left( \frac{T-\xi _{M_{T}+1}}{ T}\right) \end{array} \label{eq: Theorem 1} \end{equation} where $\mathbf{1}\left[ \cdot \right] $ is the usual indicator function and $ PSA_{init}$ is the value of the patient's PSA level at the start of the first on-treatment cycle. The derivation of (\ref{eq: Theorem 1}) is omitted here, but can be found in \cite{FleckADHS2015}. We now proceed to present the results obtained from our IPA-driven sensitivity analysis. \section{Results} \label{Results} The results shown here represent an initial study of sensitivity analysis applied to a SHA\ model of prostate cancer progression in which we consider only noise and fluctuations associated with cell population dynamics, and do not account for noise in the patient's androgen level. 
Representing randomness as Gaussian white noise, the authors in \cite{Tanaka2010} verified that variable time courses of the PSA\ levels were produced without losing the tendency of the deterministic system, thus yielding simulation results that were comparable to the statistics of clinical data. For this reason, in this work we take $\left\{ \zeta _{i}\left( t\right) \right\} $, $ i=1,2$, to be Gaussian white noise with zero mean and standard deviation of 0.001, similarly to \cite{Tanaka2010}, although we remind the reader that our methodology applies independently of the distribution chosen to represent $\left\{ \zeta _{i}\left( t\right) \right\} $, $i=1,2$. We estimate the noise associated with cell population dynamics at event times by randomly sampling from a uniform distribution with zero mean and standard deviation of 0.001. Simulations of the prostate cancer model as a pure DES are thus run to generate sample path data to which the IPA estimator is applied. In all results reported here, we measure the sample path length in terms of the number of days elapsed since the onset of IAS therapy, which we choose to be $T=2500$ days. Three sets of simulations were performed: in the first one we consider the optimal therapy configuration determined for Patient \#15 in \cite{FleckNAHS2016} and vary the values of $\tilde{\theta}_{i}$, $i=3,\ldots,6 $ (one at a time). For the second, we use PSA threshold values that yield a therapy of maximum cost and once again vary the values of $\tilde{\theta}_{i}$, $i=3,\ldots,6 $ (one at a time). Finally, in our third set of simulations, we let $\tilde{\theta}_{i}$, $i=3,\ldots,6 $, take the nominal values from \cite{Liu2015} and vary the values of $\tilde{\theta}_{1}$ and $\tilde{\theta}_{2}$ along their allowable ranges. 
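The per-event bookkeeping that the IPA estimator performs along each sample path — the event-time derivative of Lemma 1 and the clock-derivative reset in (\ref{eq: State deriv. z1}) — amounts to a few lines per event. The following sketch is a minimal illustration; the function names and calling convention are ours, not taken from the implementation in \cite{FleckADHS2015}:

```python
# Hedged sketch (names are ours) of the per-event IPA bookkeeping:
# the event-time derivative of Lemma 1 and the clock reset of
# eq. (State deriv. z1).

def tau_prime(event, i, x1p, x2p, f1, f2):
    """Derivative tau'_{j,i} of the j-th event time w.r.t. theta_i.

    event    -- 1 or 2, the threshold-crossing event e_1 or e_2
    x1p, x2p -- state derivatives x'_{1,i}, x'_{2,i} at tau_j^-
    f1, f2   -- dynamics f^{x_1}, f^{x_2} evaluated at tau_j^-
    """
    num = -x1p - x2p
    # The extra 1 appears only when the perturbed parameter is the
    # threshold whose crossing triggers the event
    # (i = 1 for e_1, i = 2 for e_2), as in eq. (Lemma 1).
    if (event == 1 and i == 1) or (event == 2 and i == 2):
        num += 1.0
    return num / (f1 + f2)

def clock_reset(event, tau_p):
    """Post-event clock derivatives (z'_1, z'_2), eq. (State deriv. z1):
    the clock that is NOT reset picks up -tau' (its derivative was 0
    before the event, since e_1 and e_2 alternate), while the clock
    that is reset has derivative 0."""
    if event == 2:            # e_2: z_2 is reset, z_1 carries -tau'
        return (-tau_p, 0.0)
    return (0.0, -tau_p)      # e_1: z_1 is reset, z_2 carries -tau'
```

For instance, when $e_{1}$ occurs and $i=1$, the numerator reduces to $1-x_{1,1}^{\prime }(\tau _{j}^{-})-x_{2,1}^{\prime }(\tau _{j}^{-})$, exactly as in (\ref{eq: Lemma 1}).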
Table \ref{tb: Results opt} presents the sensitivity of the model parameters, $\frac{dL}{d\tilde{\theta}_{i}}$, $i=3,\ldots ,6$, around the optimal configuration $\left[ \tilde{\theta}_{1}^{\ast },\tilde{\theta} _{2}^{\ast }\right] =\left[ 1.5,8.0\right] $ for the values of $\tilde{\theta }_{i}$, $i=3,\ldots ,6$, fitted to the model of Patient \#15 in \cite {Liu2015}. We note that the results shown here are representative of the phenomena that may be uncovered by this type of analysis, and were hence generated using the model of a single patient. Moreover, while the use of different patient models may potentially reveal additional phenomena, the insights presented below are interesting in their own right and thus set the stage for extending this analysis to other patients. \begin{table}[htb] \centering \caption{Sensitivity of model parameters around the optimal therapy configuration}\label{tb: Results opt} \begin{tabular}{|c|c|c|c|} \hline $\frac{dL}{d\tilde{\theta}_{3}}$ & $\frac{dL}{d\tilde{\theta}_{4}}$ & $\frac{ dL}{d\tilde{\theta}_{5}}$ & $\frac{dL}{d\tilde{\theta}_{6}}$ \\ \hline $5.44$ & $-0.25$ & $-5.95$ & $0.28$ \\ \hline \end{tabular} \end{table} Recall that $\tilde{\theta}_{3}$ and $\tilde{\theta}_{4}$ correspond to the HSC proliferation constant and CRC proliferation constant, respectively, while $\tilde{\theta}_{5}$ and $\tilde{\theta}_{6}$ are the HSC apoptosis constant and CRC apoptosis constant, respectively. Several interesting remarks can be made based on the above results; in what follows, we adopt the notation $x\approx y$ to indicate that $x$ takes values \emph{ approximately equal} to $y$. 
From Table \ref{tb: Results opt}, it can be seen that $\frac{dL}{d\tilde{\theta}_{3}}\approx -\frac{dL}{d\tilde{\theta}_{5}}$ and $\frac{dL}{d\tilde{\theta}_{4}}\approx -\frac{dL}{d\tilde{\theta}_{6}}$, which indicates that the sensitivities of the proliferation and apoptosis constants are of the same order of magnitude (in absolute value) for any given cancer cell subpopulation. There is also a large difference in the values of the sensitivities across subpopulations; in fact, the sensitivities of the HSC proliferation and apoptosis constants are approximately 21 times higher than those of the CRC constants. In other words, the system is more sensitive to changes in the HSC constants than to changes in the CRC constants, i.e., $\tilde{\theta}_{3}$ and $\tilde{\theta}_{5}$ are more critical model parameters than $\tilde{\theta}_{4}$ and $\tilde{\theta}_{6}$. Additionally, $\frac{dL}{d\tilde{\theta}_{3}}>0$ and $\frac{dL}{d\tilde{\theta}_{6}}>0$, while $\frac{dL}{d\tilde{\theta}_{4}}<0$ and $\frac{dL}{d\tilde{\theta}_{5}}<0$. A possible explanation is that HSCs are the dominant subpopulation in a prostate tumor under IAS therapy, so the size of this subpopulation has a greater impact on the overall size of the tumor and, consequently, on the value of the PSA level. As a result, increasing $\tilde{\theta}_{3}$ (or decreasing $\tilde{\theta}_{5}$) leads to an increase in the size of the HSC population, reflected in the PSA level, thus increasing the overall cost. On the other hand, increasing $\tilde{\theta}_{4}$ (or decreasing $\tilde{\theta}_{6}$) directly increases the size of the CRC population; however, since the conditions under which CRCs thrive are those under which HSCs perish, an increase in the size of the CRC population implies that the size of the HSC population will decrease.
Given that HSCs are the dominant subpopulation, the PSA level would ultimately decrease, thus decreasing the overall cost. The effect of changes in $\tilde{\theta}_{i}$, $i=3,\ldots ,6$, on the sensitivity of the model parameters was analyzed next. As the values of $\tilde{\theta}_{i}$, $i=3,\ldots ,6$, were progressively altered, two scenarios emerged: \emph{Scenario A}, a set of model parameter values for which the evolution of the prostate tumor is permanently halted after one or two cycles of treatment, i.e., the simulated IAS therapy scheme is curative; and \emph{Scenario B}, a set of model parameter values for which the prostate tumor grows in an uncontrollable manner, i.e., the simulated IAS therapy scheme is ineffective. \emph{Scenario A} occurred when $\tilde{\theta}_{3}$ took on values at least 15\% smaller, or $\tilde{\theta}_{5}$ values at least 30\% smaller, than the nominal values given in \cite{Liu2015}; no variation in either $\tilde{\theta}_{4}$ or $\tilde{\theta}_{6}$ led to this scenario. On the other hand, \emph{Scenario B} occurred when $\tilde{\theta}_{3}$ took on values at least 15\% higher, $\tilde{\theta}_{4}$ values at least 10\% higher, $\tilde{\theta}_{5}$ values at least 30\% higher, or $\tilde{\theta}_{6}$ values at least 10\% smaller, than the nominal values given in \cite{Liu2015}.
In practical terms, the above results indicate that if the optimal IAS therapy (designed using the model of Patient \#15) were applied to a new patient whose HSC population dynamics are slower than those of Patient \#15 (i.e., the new patient's HSC proliferation constant is at least 15\% smaller, or whose HSC apoptosis constant is at least 30\% smaller, than that of Patient \#15), then the size of the new patient's tumor would remain stable and under control after at most two treatment cycles. On the other hand, if the optimal IAS therapy were applied to a new patient whose HSC population dynamics are faster than those of Patient \#15, then the size of the new patient's tumor would grow uncontrollably. In our second set of simulations, we let $\tilde{\theta}_{1}$ and $\tilde{\theta}_{2}$ take suboptimal values and once again vary the values of $\tilde{\theta}_{i}$, $i=3,\ldots ,6$ (one at a time). Table \ref{tb: Results subopt} presents the sensitivity of the model parameters, $\frac{dL}{d\tilde{\theta}_{i}}$, $i=3,\ldots ,6$, around the suboptimal configuration $\left[ \tilde{\theta}_{1},\tilde{\theta}_{2}\right] =\left[ 7.5,15.0\right] $ for the values of $\tilde{\theta}_{i}$, $i=3,\ldots ,6$, fitted to the model of Patient \#15 in \cite{Liu2015}.
\begin{table}[htb]
\centering
\caption{Sensitivity of model parameters around a suboptimal therapy configuration}\label{tb: Results subopt}
\begin{tabular}{|c|c|c|c|}
\hline
$\frac{dL}{d\tilde{\theta}_{3}}$ & $\frac{dL}{d\tilde{\theta}_{4}}$ & $\frac{dL}{d\tilde{\theta}_{5}}$ & $\frac{dL}{d\tilde{\theta}_{6}}$ \\ \hline
$17.78$ & $0.014$ & $-17.15$ & $-0.016$ \\ \hline
\end{tabular}
\end{table}
Once again, the effect of changes in $\tilde{\theta}_{i}$, $i=3,\ldots ,6$, on the sensitivity of the model parameters was analyzed.
\emph{Scenario A} occurred when $\tilde{\theta}_{3}$ took on values at least 10\% smaller, or $\tilde{\theta}_{5}$ values at least 20\% larger, than the nominal values given in \cite{Liu2015}; no variation in either $\tilde{\theta}_{4}$ or $\tilde{\theta}_{6}$ led to this scenario. Moreover, \emph{Scenario B} did not emerge in any of the simulations performed under this suboptimal configuration. In our third set of simulations, we investigate the behavior of the model parameter sensitivities, $\frac{dL}{d\tilde{\theta}_{i}}$, $i=3,\ldots ,6$, across different PSA threshold settings. In particular, we study how the sensitivity values change as we move from an optimal therapy setting towards various suboptimal settings. To this end, we let $\tilde{\theta}_{i}$, $i=3,\ldots ,6$, take the nominal values given in \cite{Liu2015} and vary the values of the lower and upper PSA thresholds along $\left[ \tilde{\theta}_{1}^{\min },\tilde{\theta}_{1}^{\max }\right] $ and $\left[ \tilde{\theta}_{2}^{\min },\tilde{\theta}_{2}^{\max }\right] $, respectively. Figs. \ref{fig: sensTH3_pat15}-\ref{fig: sensTH6_pat15} show how the values of the sensitivities, $\frac{dL}{d\tilde{\theta}_{i}}$, $i=3,\ldots ,6$, vary as a function of the values of the lower and upper PSA thresholds ($\tilde{\theta}_{1}$ and $\tilde{\theta}_{2}$, respectively) for the model of Patient \#15.
\begin{figure}
\caption{Sensitivity of $\tilde{\theta}_{3}$ as a function of the PSA thresholds (Patient \#15)}
\label{fig: sensTH3_pat15}
\end{figure}
\begin{figure}
\caption{Sensitivity of $\tilde{\theta}_{4}$ as a function of the PSA thresholds (Patient \#15)}
\label{fig: sensTH4_pat15}
\end{figure}
\begin{figure}
\caption{Sensitivity of $\tilde{\theta}_{5}$ as a function of the PSA thresholds (Patient \#15)}
\label{fig: sensTH5_pat15}
\end{figure}
\begin{figure}
\caption{Sensitivity of $\tilde{\theta}_{6}$ as a function of the PSA thresholds (Patient \#15)}
\label{fig: sensTH6_pat15}
\end{figure}
Figs.
\ref{fig: sensTH3_pat1}-\ref{fig: sensTH6_pat1} show how the values of the sensitivities, $\frac{dL}{d\tilde{\theta}_{i}}$, $i=3,\ldots ,6$, vary as a function of the values of the lower and upper PSA thresholds ($\tilde{\theta}_{1}$ and $\tilde{\theta}_{2}$, respectively) for the model of Patient \#1.
\begin{figure}
\caption{Sensitivity of $\tilde{\theta}_{3}$ as a function of the PSA thresholds (Patient \#1)}
\label{fig: sensTH3_pat1}
\end{figure}
\begin{figure}
\caption{Sensitivity of $\tilde{\theta}_{4}$ as a function of the PSA thresholds (Patient \#1)}
\label{fig: sensTH4_pat1}
\end{figure}
\begin{figure}
\caption{Sensitivity of $\tilde{\theta}_{5}$ as a function of the PSA thresholds (Patient \#1)}
\label{fig: sensTH5_pat1}
\end{figure}
\begin{figure}
\caption{Sensitivity of $\tilde{\theta}_{6}$ as a function of the PSA thresholds (Patient \#1)}
\label{fig: sensTH6_pat1}
\end{figure}
The above results lend themselves to the following discussion: first, the values of the model parameter sensitivities, $\frac{dL}{d\tilde{\theta}_{i}}$, $i=3,\ldots ,6$, are neither monotonically increasing nor monotonically decreasing along the allowable ranges of $\tilde{\theta}_{1}$ and $\tilde{\theta}_{2}$; this is verified for both patients. Second, the system is more sensitive to parameters $\tilde{\theta}_{3}$ and $\tilde{\theta}_{5}$ (HSC proliferation and apoptosis constants, respectively), and more robust to $\tilde{\theta}_{4}$ and $\tilde{\theta}_{6}$ (CRC proliferation and apoptosis constants, respectively); again, this is verified across different patients. A possible explanation is that HSCs are commonly assumed to be the dominant subpopulation in a prostate tumor undergoing IAS therapy, which means that the size of this subpopulation has a greater impact on the overall size of the tumor and, consequently, on the value of the PSA level. Additionally, note that two points are marked in Figs.
\ref{fig: sensTH3_pat15}-\ref{fig: sensTH6_pat1}: a star marks the optimal therapy configuration and a square marks the values of $\tilde{\theta}_{1}$ and $\tilde{\theta}_{2}$ for which the sensitivities $\frac{dL}{d\tilde{\theta}_{i}}$, $i=3,\ldots ,6$, are minimal. In \cite{FleckNAHS2016} the optimal therapy configurations were found to be $\left[ \tilde{\theta}_{1}^{\ast },\tilde{\theta}_{2}^{\ast }\right] =\left[ 1.5,8.0\right] $ for Patient \#15 and $\left[ \tilde{\theta}_{1}^{\ast },\tilde{\theta}_{2}^{\ast }\right] =\left[ 2.5,8.0\right] $ for Patient \#1. As can be seen in Figs. \ref{fig: sensTH3_pat15}-\ref{fig: sensTH6_pat1}, these settings are not located in the regions of minimum sensitivity. Notably, the sensitivities $\frac{dL}{d\tilde{\theta}_{i}}$, $i=3,\ldots ,6$, take their minimum values at the same suboptimal configuration (namely $\left[ \tilde{\theta}_{1},\tilde{\theta}_{2}\right] =\left[ 7.5,8.0\right] $) across different patients. This could point to the existence of an underlying, and most likely as yet poorly understood, equilibrium of cancer cell subpopulation dynamics at this suboptimal setting. Moreover, the tradeoff between system fragility and optimality seems more strongly applicable to $\tilde{\theta}_{1}$ than to $\tilde{\theta}_{2}$; interestingly, the value of $\tilde{\theta}_{1}^{\ast }$ differed across patients, while $\tilde{\theta}_{2}^{\ast }$ did not. In this sense, relaxing the optimality condition in favor of increased system robustness could be worthwhile in at least some cases. In fact, for Patient \#1, moving from the optimal therapy setting to a slightly suboptimal setting along $\tilde{\theta}_{2}$ (namely $\left[ \tilde{\theta}_{1},\tilde{\theta}_{2}\right] =\left[ 2.5,9.0\right] $) leads to a 9\% increase in the cost of treatment.
However, the model parameter sensitivities at this setting decrease by approximately 30\% for $\tilde{\theta}_{3}$ and $\tilde{\theta}_{5}$ and by approximately 70\% for $\tilde{\theta}_{4}$ and $\tilde{\theta}_{6}$. If we move to a suboptimal setting along $\tilde{\theta}_{1}$ (namely $\left[ \tilde{\theta}_{1},\tilde{\theta}_{2}\right] =\left[ 3.5,8.0\right] $), the cost increases by 16\%, while the sensitivities decrease by approximately 50\% for $\tilde{\theta}_{3}$ and $\tilde{\theta}_{5}$ and by approximately 90\% for $\tilde{\theta}_{4}$ and $\tilde{\theta}_{6}$. In this case, it seems advantageous to trade off optimality for increased robustness. Interestingly, the above analysis is not consistently verified across different patients. In fact, for Patient \#15, a marked decrease in system fragility only occurs when we move to a suboptimal setting along $\tilde{\theta}_{1}$ (namely $\left[ \tilde{\theta}_{1},\tilde{\theta}_{2}\right] =\left[ 7.5,8.0\right] $), at which point the sensitivities decrease by approximately 70\% for $\tilde{\theta}_{3}$ and $\tilde{\theta}_{5}$ and by approximately 99\% for $\tilde{\theta}_{4}$ and $\tilde{\theta}_{6}$. However, the cost increases on the order of 70\%, which indicates that system optimality is significantly compromised. These results highlight the importance of applying our methodology on a patient-by-patient basis. More generally, they validate recent efforts favoring the development of \emph{personalized} cancer therapies, as opposed to traditional treatment schemes that are typically generated over a cohort of patients and are thus effective only on average.
\section{Conclusions}
\label{Conclusions}
We use a stochastic model of prostate cancer evolution under IAS therapy to perform sensitivity analysis with respect to several important model parameters.
We find the system to be more sensitive to changes in the HSC proliferation and apoptosis constants than changes in the CRC proliferation and apoptosis constants. We also identify a set of model parameter values for which the simulated IAS therapy scheme is essentially curative, as well as a set of model parameters for which the prostate tumor grows in an uncontrollable manner. Finally, we verify that relaxing optimality in favor of increased system stability can potentially be of interest in at least some cases. This work is a first attempt at investigating the tradeoff between optimality and system robustness/fragility in stochastic models of cancer evolution. A subset of all model parameters is selected and a case study of prostate cancer is used to illustrate the applicability of our IPA-based methodology. Nevertheless, there exist several other potentially critical parameters in the SHA model of prostate cancer evolution we study, so that part of our ongoing work includes extending this sensitivity analysis study to other model parameters. Additionally, future work includes applying this methodology to other types of cancer (e.g., breast cancer), as well as other diseases that are known to progress in stages (e.g., tuberculosis). \end{document}
\begin{document}
\title[INFINITE TYPE]{Infinite type germs of smooth hypersurfaces in $\mathbb C^n$}
\author{John Erik Forn\ae ss, Lina Lee, Yuan Zhang}
\footnote{The first author is supported by an NSF grant. Keywords: plurisubharmonic functions, D'Angelo type. 2000 AMS classification. Primary: 32T25; Secondary: 32C25.}
\date{\today}
\begin{abstract}
In this paper we discuss germs of smooth hypersurfaces in $\mathbb C^n$. We show that if a point of the hypersurface has infinite D'Angelo type, then there exists a formal complex curve in the hypersurface through that point.
\end{abstract}
\maketitle
\section{Introduction}
Let $M$ be a $\mathcal C^\infty$ real hypersurface in $\mathbb C^n$ and $p\in M$. Then there is a $\mathcal C^\infty$ local defining function $r$ of $M$ near $p$, i.e., $M=\{r=0\}\cap U$, where $U$ is some neighborhood of $p$ and $dr(p)\neq 0$. Let $\zeta:(\mathbb C,0)\longrightarrow(\mathbb C^n,p)$ be a germ of a nonconstant holomorphic curve that sends $0$ to $p$. Let $\nu(\zeta)$ denote the order of vanishing at $0$ of the function $|\zeta_1(t)-\zeta_1(0)|+\cdots+|\zeta_n(t)-\zeta_n(0)|$. Likewise, let $\nu(\zeta^*r)$ denote the order of vanishing of $r\circ\zeta$ at the origin.
\begin{defn}
The D'Angelo type of $M$ at $p$, $\Delta(M,p)$, is defined as
$$\Delta(M,p):=\sup_{\zeta}\frac{\nu(\zeta^*r)}{\nu(\zeta)}.$$
We say that $p$ is a point of finite type if $\Delta(M,p)<\infty$ and of infinite type otherwise.
We also denote
$$\Delta(M,p,\zeta):=\frac{\nu(\zeta^*r)}{\nu(\zeta)}.$$
\end{defn}
In the convergent case, D'Angelo \cite{Dangelo} proved the following theorem:
\begin{thm}[D'Angelo]\label{variety-ra}
If $M$ is a germ of a real analytic hypersurface in $\mathbb C^n$ and $p\in M$, then $\Delta(M,p)=\infty$ if and only if there exists a germ of a complex curve $\zeta(t):(\mathbb C,0)\longrightarrow(\mathbb C^n,p)$ lying on $M$ and passing through $p$.
\end{thm}
In this paper we prove the formal analogue of this result. By a formal complex curve through $p$ we mean an expression of the form $\zeta(t)=(\zeta_1(t),\dots,\zeta_n(t))$, where each $\zeta_j$ is a formal power series in the complex variable $t$ and $\zeta(0)=p$.
\begin{thm}\label{smoothdangelo}
If $M=\{r=0\}$ is a germ of a $\mathcal C^\infty$ real hypersurface in $\mathbb C^n$ and $p\in M$, then $\Delta(M,p)=\infty$ if and only if there exists a nonconstant formal complex curve $\zeta(t):(\mathbb C,0)\longrightarrow(\mathbb C^n,p)$ such that $\zeta(0)=p$ and $r\circ\zeta$ vanishes to infinite order at $0$.
\end{thm}
The proof is similar in outline to D'Angelo's, but the formal analogue differs in some places. Denote by ${}_n\mathcal O_0$ the ring of germs of formal complex power series at $0$ in $\mathbb C^n$. A key ingredient is the following:
\begin{thm}\label{nullstellensatz}
Let $I$ be an ideal in ${}_n\mathcal O_0$. If $\dim({}_n\mathcal O_0/I)=\infty$, then there is a formal curve $\zeta$ near $0$ such that $\zeta^*f\sim 0$ for all $f\in I$.
\end{thm}
In Section 2 we collect background results. In Section 3 we prove Theorem \ref{smoothdangelo} using Theorem \ref{nullstellensatz}, a version of the Nullstellensatz, and in the last section we prove Theorem \ref{nullstellensatz}.
The authors are grateful to Steven Krantz for suggesting this problem.
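To see why the conclusion in the smooth category involves infinite-order vanishing of $r\circ\zeta$ rather than an actual curve contained in $M$, the following standard example is instructive (it is not taken from the paper and is included here only for illustration):

```latex
% Standard example: a smooth, non-real-analytic hypersurface of
% infinite D'Angelo type at 0 in C^2.
\[
  r(z_1,z_2)=\operatorname{Re} z_2-e^{-1/|z_1|^{2}},\qquad
  M=\{r=0\},\qquad \zeta(t)=(t,0).
\]
% Here (r\circ\zeta)(t) = -e^{-1/|t|^2} vanishes to infinite order at
% t = 0, so \Delta(M,0)=\infty and \zeta is a formal curve as in
% Theorem \ref{smoothdangelo}, even though \zeta does not lie in M.
```

Note that $r\circ\zeta$ is not identically zero, which is exactly the phenomenon ruled out in the real analytic case by Theorem \ref{variety-ra}.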
\section{Preliminary results}
\subsection{Infinite matrices}
Let $\mathcal M$ denote the set of infinite matrices
$$\begin{bmatrix} a_{11}& a_{12}& \dots \\ a_{21} & a_{22} & \dots \\ \dots & \dots & \dots \end{bmatrix}$$
\noindent where the $\{a_{ij}\}_{1\leq i,j<\infty}$ are complex numbers. For $A=\{a_{ij}\}\in\mathcal M$ let $A^*=\{\overline{a_{ji}}\}$ denote the adjoint matrix. We let $\mathcal M_0$ denote those matrices $A$ for which all the sums satisfy $\sum_{j}|a_{ij}|^2\leq 1$ and $\sum_{i}|a_{ij}|^2\leq 1$. If $A=\{a_{ij}\},B=\{b_{k\ell}\}\in\mathcal M$, we define the matrix product $AB=\{c_{i\ell}\}$, where $c_{i\ell}=\sum_{k}a_{ik}b_{k\ell}$, provided all these sums converge to a finite complex number. If $A,B\in\mathcal M_0$, then the matrix product $AB\in\mathcal M$ always exists. Also let $\mathcal M_1\subset\mathcal M_0$ denote the unitary matrices, i.e., those for which $AA^*=A^*A=I=:\{\delta_{ij}\}$.
If $A=\{a_{ij}\}\in\mathcal M$ and $A_k=\{a_{ij}^k\}\in\mathcal M_0$, $k=1,2,\dots$, satisfy $a^k_{ij}\rightarrow a_{ij}$ as $k\rightarrow\infty$ for each $i,j$, we say that $A_k$ converges weakly to $A$. In this case it is easy to see that $A\in\mathcal M_0$. In particular, we note that given a sequence $U_k\in\mathcal M_0$, after taking a subsequence we can assume that $U_k\rightarrow U\in\mathcal M_0$ and $U_k^*\rightarrow U^*\in\mathcal M_0$.
We let $\mathcal M_{1,k}\subset\mathcal M_1$ denote those unitary matrices $A=\{a_{ij}\}$ for which $a_{ij}=\delta_{ij}$ whenever $\max\{i,j\}>k$. Then the matrix ${}^kA=\{a_{ij}\}_{1\leq i,j\leq k}$ is a unitary matrix on $\mathbb C^k$ and $A$ looks like the identity matrix outside the first $k\times k$ block.
Let $\ell^2$ denote the space of infinite sequences of complex numbers $c=\{c_j\}_{j\geq 1}$ with norm $\|c\|^2=\sum_j |c_j|^2$. If $A\in\mathcal M_1$, then $\|Ac\|=\|c\|$. By weak convergence it follows that if $A\in\mathcal M_0$ is a weak limit of matrices in $\mathcal M_1$, then we still have $\|Ac\|\leq\|c\|$.
\subsection{D'Angelo's description of defining functions}
Let $r$ be the defining function of a smooth hypersurface around $0$ in $\mathbb C^n$. For multiindices $J=(J_1,\dots,J_n)$ and $K=(K_1,\dots,K_n)$, denote $|J|:=\sum_{j=1}^n J_j$ and order $J<K$ using a lexicographic ordering. In other words, we say $J<K$ if and only if either $|J|<|K|$, or $|J|=|K|$ and there exists $k\le n$ such that $J_j=K_j$ for $j\le k$ and $J_{k+1}<K_{k+1}$. Then we can write formally
\begin{align*}
r & \sim 2\Re h+ 4\Re\sum_{J}\sum_{K\geq J}a_{JK}z^J\overline{z}^K\\
& \sim 2\Re h+ 4\Re\sum_{J}z^J\sum_{K\geq J}a_{JK}\overline{z}^K\\
& \sim 2\Re h+ \sum_{J}\Big|z^J+\sum_{K\geq J}\overline{a}_{JK}z^K\Big|^2- \sum_{J}\Big|z^J-\sum_{K\geq J}\overline{a}_{JK}z^K\Big|^2\\
& \sim 2\Re h+\sum_J |f_J|^2-\sum_J|g_J|^2,
\end{align*}
where $h$ is holomorphic, $f_J=z^J+\sum_{K\geq J}\overline{a}_{JK}z^K$ and $g_J=z^J-\sum_{K\geq J}\overline{a}_{JK}z^K$. The key property of the above decomposition is that if we truncate $r$ at any order $k$, only finitely many terms are nonzero.
\section{Formal Power Series Case}
For any given formal complex power series $g$, we use $j_k(g)$ to denote the $k$-jet of $g$. A slight variant of D'Angelo's theorem reads as follows:
\begin{lemma}\label{jets}
Let $\zeta=\zeta_k$ be a curve passing through the origin. If $j_{2k\nu(\zeta)}(\zeta^*r)=0$, then there is an infinite unitary matrix $U_k$ such that
\begin{equation}\label{jetk}
j_{2k\nu(\zeta)}(\zeta^*h)=j_{k\nu(\zeta)}(\zeta^*(f-U_k g))=0.
\end{equation} \end{lemma} \begin{proof} If $j_{2k\nu(\zeta)}(\zeta^*r)=0$, then $j_{2k\nu(\zeta)}(\zeta^*h)=0$ and $j_{2k\nu(\zeta)}(\|\zeta^*f\|^2)=j_{2k\nu(\zeta)}(\|\zeta^*g\|^2)$. Thus $\|j_{k\nu(\zeta)}\zeta^*f\|^2=\|j_{k\nu(\zeta)}\zeta^*g\|^2$. Note that the above norms make sense, since for each $k$ only finitely many terms in $f$ and $g$ have non-vanishing $k$-jets, by our decomposition of $r$. For these finitely many $f_J$'s and $g_J$'s, applying (\cite{Dangelo}, Theorem 3.5), one can find an element $\tilde U$ of the group of unitary matrices such that $$j_{k\nu(\zeta)}(\zeta^*(f-\tilde U{g}))=0. $$ Extending $\tilde U$ to a unitary operator $U$ on $\ell^2$ by setting the remaining diagonal entries to $1$ and all other entries to $0$, we obtain that (\ref{jetk}) holds. \end{proof} Denote by $M_0$ the maximal ideal in ${}_nO_0$ consisting of those series that vanish at $0$, and define $D(I):= \dim {}_n O_0/I$. \begin{lemma} $D(I)<\infty$ if and only if $M_0^k\subset I$ for some integer $k$. \end{lemma} \begin{proof} If $M_0^k\subset I$, then $D(I)\le D(M_0^k)<\infty$. If $D(I)=\ell<\infty$, then $z_j^\ell\in I$ for $j=1,\dots,n$. Therefore $M_0^K\subset I$ for $K$ large enough, since every monomial of degree $K\ge n(\ell-1)+1$ is divisible by some $z_j^\ell$. \end{proof} \begin{lemma}\label{inf} Suppose that the domain $r<0$ has infinite type at $0.$ Then there exists a matrix $U\in \mathcal M_0$, a limit of matrices in $\mathcal M_1$, such that the ideal $I=I(h, f-Ug,U^*f-g)$ has infinite dimension. \end{lemma} \begin{proof} Since the domain has infinite type, there is a sequence of unitary matrices $U_k$ as in the previous lemma. Let $U$ be any weak limit of the $U_k.$ Suppose that $I$ has finite dimension. 
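As a toy illustration of the lemma on $D(I)$ (our own example, computed in the polynomial ring as a stand-in for formal power series): for $I=(x^2,y^2)$ in two variables, $D(I)=4$, with $1,x,y,xy$ spanning the quotient, and indeed $M_0^3\subset I$ since every cubic monomial is divisible by $x^2$ or $y^2$:

```python
import sympy as sp
from itertools import combinations_with_replacement

x, y = sp.symbols('x y')
I_gens = [x**2, y**2]          # a toy ideal with D(I) = 4

# every degree-3 monomial in x, y reduces to 0 modulo I,
# i.e. M_0^3 is contained in I
for combo in combinations_with_replacement([x, y], 3):
    m = sp.prod(combo)         # x**3, x**2*y, x*y**2, y**3
    _, remainder = sp.reduced(m, I_gens, x, y)
    assert remainder == 0
```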
Then there exists an integer $\ell$ so that $M_0^\ell\subset I.$ But then, for $k$ large enough, $M_0^\ell\subset I(h, f-U_kg,U^*_kf-g)$, a contradiction. \end{proof} \begin{corollary}\label{equivalence} Let $M=\{r=0\}$ be a smooth hypersurface in $\mathbb C^n.$ Let $\zeta$ be a formal curve. Then $\zeta^*r\sim 0$ if and only if there exists an operator $U\in \mathcal M_0$, a weak limit of operators in $\mathcal M_1$, such that $\zeta^*f\sim 0$ for every $f\in I$ as in Lemma \ref{inf}. \end{corollary} \begin{proof} We only need to prove the sufficiency, so consider any formal curve $\zeta$ such that $\zeta^*h\sim \zeta^*(f-Ug)\sim \zeta^*(U^*f-g)\sim 0$. Then $j_k(\zeta^*h)=0$ and \begin{align*} \|j_k(\zeta^*f)\|&=\|j_k(\zeta^*U g)\|=\|U(j_k(\zeta^* g))\| \\ &\le \|j_k(\zeta^*g)\| \\ &= \|j_k(\zeta^*U^*f)\|=\|U^*(j_k(\zeta^*f))\| \\ &\le \|j_k(\zeta^*f)\|. \end{align*} Therefore $\|j_k(\zeta^*f)\|=\|j_k(\zeta^*g)\|$ for every $k$. Letting $k$ go to infinity, we get $\zeta^*h\sim \|\zeta^*f\|^2 -\|\zeta^*g\|^2\sim 0$ and hence $\zeta^*r\sim 0$. \end{proof} To complete the proof of Theorem 2, we only need to prove Theorem 3. \section{Theorem 3} In the formal power series setting, many algebraic properties parallel to those of convergent power series still hold. Of particular interest to us, the local ring ${}_n O_0$ is Noetherian. Moreover, the Weierstrass preparation and division theorems are both valid in the formal case. The reader may check \cite{Lang}, \cite{Ruiz}, etc.\ for references. For any proper ideal $I$ in ${}_nO_0$, following the ideas of Gunning \cite{gunning}, we can similarly construct a regular system of coordinates $z_1,\ldots, z_n$ such that there exists an integer $k$ satisfying:\\ {\it 1. ${}_k{O_0}\cap I=0$;\\ 2. 
${}_{j-1}{O_0}[z_j]\cap I$ contains a Weierstrass polynomial $p_j$ in $z_j$ for $j=k+1,\ldots,n$.} In addition, if $I$ is prime, the theorem of the primitive element guarantees that, after a linear change of coordinates in the $z_{k+1},\ldots,z_n$ plane, the quotient field ${}_n\tilde{M_0}$ of ${}_n{O_0}/I$ is an algebraic extension of ${}_kO_0$, i.e.\ ${}_n\tilde{M_0}={}_k\tilde{{O}}[\tilde z_{k+1}]$. We call such a system of coordinates a {\it strictly regular system of coordinates}. Denote by $D$ the discriminant of the unique irreducible defining polynomial $p_{k+1}$ of $z_{k+1}$. Then for each coordinate function $z_j$ we can construct $q_j:= D\cdot z_j-Q_j(z_{k+1})\in I\cap {}_kO[z_{k+1}, z_j]$ for some $Q_j\in {}_kO[z_{k+1}]$, $j={k+2}, \ldots, n$. We also use $p_j\in {}_kO_0[z_j]$ to denote the defining Weierstrass polynomial of $z_j$. The ideal generated by $p_{k+1},q_{k+2},\ldots,q_n$ is called the associated ideal. The following lemma shows that the associated ideal $(p_{k+1},q_{k+2},\ldots,q_n)$ and the original ideal cannot differ by very much. \begin{lemma}\label{8} There exists an integer $\nu$ such that $$D^{\nu} I\subseteq (p_{k+1},q_{k+2},\ldots,q_n)\subseteq I.$$ \end{lemma} If $I$ is a principal ideal, we have the following result. \begin{lemma}\label{438} Let $P(z_1,\dots,z_k,w)$ be a formal Weierstrass polynomial $P=w^\ell+ \sum_{j<\ell}b_j(z_1,\dots,z_k)w^j$, $b_j(0)=0$, with discriminant $D(z_1,\dots,z_k)$. Suppose that $D(z_1,\dots,z_k) \not \equiv 0$. Then there exists a formal curve $\zeta$ in $\mathbb C^{k+1}$ such that $P\circ \zeta\sim 0$. \end{lemma} \begin{proof} After a linear change of coordinates in $(z_1, \ldots, z_k)$, we may assume $D(z_1,0,\ldots,0) \not \equiv 0$. Write $D(z_1,0,\ldots, 0)= \sum_{j=s}^\infty a_jz_1^j$ where $a_s \neq 0$. 
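A minimal concrete instance of Lemma~\ref{438} (our own example, not part of the proof): the Weierstrass polynomial $P=w^2-z_1$ has discriminant $4z_1\not\equiv 0$, and the formal curve $\zeta(t)=(t^2,t)$ kills it identically:

```python
import sympy as sp

t, z1, w = sp.symbols('t z1 w')
# A toy Weierstrass polynomial: P = w**2 - z1 (ell = 2, b_0 = -z1, b_0(0) = 0)
P = w**2 - z1
# its discriminant in w is 4*z1, not identically zero
assert sp.discriminant(P, w) == 4 * z1
# the formal curve zeta(t) = (z1, w) = (t**2, t) kills P identically
assert sp.simplify(P.subs({z1: t**2, w: t})) == 0
```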
Next we consider truncations $P_r$ of $P$ where $r \gg s.$ Then the discriminant $D_r$ of $P_r$, viewed as a polynomial in the elementary symmetric functions, can still be written as $D_r(z_1, 0,\ldots, 0)= a_sz_1^s + \cdots.$ For $z_1 \neq 0$ we can write $P_r(z_1,0,\ldots,0)= \prod_j (w-\alpha_j^r(z_1)).$ The expression for $D_r$ leads to the estimate $|\alpha_j^r-\alpha_i^r| \geq c|z_1|^{s}$ for some small $c$. Let $\Delta(\alpha_j^r(z_1),\epsilon |z_1|^s)$ be the disc with radius $\epsilon |z_1|^s$ centered at $\alpha_j^r(z_1)$, for each $j$. Then on the boundary of these discs, $|P_r| \geq (\epsilon |z_1|^s)^\ell.$ So for all higher truncations with $\rho>r\gg s\ell$ we have as well that $|P_\rho|\geq (\epsilon |z_1|^s)^\ell$ and that the zeroes of $P_\rho$ are contained in these discs. We need to shrink $z_1$ for this to be true, and we can also make these discs much smaller. This means that we have trapped the zero curves for $P_r$, so we can take formal limits and get a curve on which $P$ vanishes to infinite order. \end{proof} We have the following proposition. \begin{prop}\label{prime} Let $P\subset{}_n O_0$ be a prime ideal with $D(P)=\infty$. Then there exists a formal curve $\zeta$ such that $\zeta^*f\sim 0$ for all $f\in P$. \end{prop} \begin{proof} We first choose coordinates $z_1,\dots, z_n$ such that $P$ is strictly regular. Note that since $D(P)=\infty$, we have $k\neq 0$. Consider the associated ideal $(p_{k+1},q_{k+2},\ldots,q_n)$, and notice that the discriminant $D$ of $p_{k+1}$ is not identically zero. Applying Lemma \ref{438} to $p_{k+1}$, there exists a formal curve $\zeta'=(\zeta_1(t),\zeta_2(t),\dots,\zeta_{k+1}(t))$ such that $\zeta'^*p_{k+1}\sim 0$. 
We mention that the curve can always be chosen so that $D\circ (\zeta_1(t),\zeta_2(t),\dots,\zeta_{k}(t)) \not\equiv 0$. Next we need to add coordinates $z_{k+2}(t),\ldots,z_{n}(t)$ so that all functions in the ideal vanish. Recall that $q_j=D z_j-Q_j(z_{k+1})\in I$ for $j>k+1$, where $Q_j\in {}_kO[z_{k+1}].$ We solve for $z_j=Q_j/D$ and let $\zeta_j(t)=z_j$ for $j=k+2,\ldots, n$. Then $(p_{k+1},q_{k+2},\ldots,q_n)$ vanishes on $\zeta(t)=(\zeta_1(t),\zeta_2(t),\dots,\zeta_{n}(t))$. We will show that $z_j=Q_j/D$ is one of the formal roots of the defining Weierstrass polynomial $p_j\in {}_kO_0[z_j]$, and therefore $Q_j/D$ vanishes at the origin. Indeed, for each $j>k+1$, denote by $n_j$ the degree of $p_j\in {}_kO_0[z_j]$ with respect to $z_j$ and consider $P_j=D^{n_j}p_j$. Replacing $Dz_j$ in $P_j$ by $q_j+Q_j$, we get $P_j=H(q_j)+G(Q_j)$ for some polynomials $H(\cdot), G(\cdot)$ with no constant terms. Since $P_j\in I$ and $p_{k+1}$ is the defining polynomial for $z_{k+1}$, $G(Q_j)$ is divisible by $p_{k+1}$. $P_j$ therefore vanishes identically on the curve defined by $(z_1,\ldots, z_{k+1})=\zeta'(t)$, $q_j(\zeta')=0$, and so $z_j=Q_j/D$ is one of the formal roots of the defining Weierstrass polynomial $p_j$. Finally we need to show that every $f$ in the ideal $I$ vanishes on the formal curve $\zeta(t)$. Indeed, by Lemma \ref{8}, there exists some high power $\nu$ such that $D^{\nu} f\in (p_{k+1},q_{k+2},\ldots,q_n)$. Therefore $(D^{\nu} f)\circ \zeta\sim 0$. Since $D$ vanishes to finite order on the curve, $\zeta^*(f)\sim 0$. \end{proof} In order to prove Theorem 3, we also need the following simplification lemmas. 
\begin{lemma}\label{intersection} If $I= I_1 \cap I_2$ and $D(I)=\infty$, then $D(I_j)=\infty$ for some $j.$ \end{lemma} \begin{proof} Suppose that both $D(I_j)<\infty.$ Then for some integers $k_1,k_2$ we have $M_0^{k_j}\subset I_j$, $j=1,2.$ Hence $M_0^{\max\{k_1,k_2\}}\subset I$, and therefore $D(I)<\infty.$ \end{proof} We also define the formal radical of an ideal $I$ as follows. \begin{defn} $\sqrt I:=\{f: \text{there exists an integer } k \ \text{such that} \ f^k\in I\}$. \end{defn} \begin{lemma}\label{radical} If $D(P)=\infty$ then $D(\sqrt P)=\infty.$ \end{lemma} \begin{proof} Suppose that $D(\sqrt P)<\infty.$ Then $M_0^k\subset \sqrt P$ for some $k.$ Let $f_1,\dots,f_s$ be generators of $M_0^k.$ Then $f_i^{r_i}\in P$ for some $r_i$'s. If $f\in M_0^k,$ then $f=\sum g_j f_j$, so $f^{r_1+\cdots+r_s}\in P$, since in each monomial of the expansion some $f_j$ appears with exponent at least $r_j$. Then $M_0^K\subset P$ for large enough $K.$ But then $D(P)<\infty.$ \end{proof} Recall that an ideal $J$ is primary if whenever $xy\in J$, then either $x\in J$ or $y^n\in J$ for some $n$. We can now prove Theorem \ref{nullstellensatz}. Let $I$ be an ideal in ${}_n O_0$. If $D(I)= \infty$, we want to show that there is a formal power curve $\zeta$ near $0$ such that $\zeta^*f\sim 0$ for all $f\in I$. \begin{proof} By the Lasker--Noether decomposition theorem, we can write $I=P_1\cap\cdots\cap P_k$, where the $P_j$'s are primary ideals. By Lemma \ref{intersection}, we have $D(P_j)=\infty$ for some $j$. Lemma \ref{radical} then implies $D(\sqrt{P_j})=\infty$. Since $\sqrt{P_j}$ is prime, Proposition \ref{prime} implies the existence of a formal curve $\zeta$ such that $\zeta^*f\sim 0$ for all $f\in \sqrt{P_j}$. Since $I\subset P_j\subset\sqrt{P_j}$, $\zeta^*f\sim 0$ for all $f\in I$. The proof of the theorem is thus complete. 
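The exponent bookkeeping in the radical lemma can be sanity-checked symbolically (our own toy example, in the polynomial ring): with $f_1=x$, $f_2=y$, $r_1=r_2=2$ and $P=(x^2,y^2)$, any $f=g_1f_1+g_2f_2$ satisfies $f^{r_1+r_2}\in P$, since each monomial of $f^4$ contains some $f_j$ to power at least $r_j$:

```python
import sympy as sp

x, y = sp.symbols('x y')
P_gens = [x**2, y**2]        # f_1**2, f_2**2 lie in P for f_1 = x, f_2 = y
f = 3 * x - 5 * y            # an arbitrary element of M_0 = (x, y)
# f**(r_1 + r_2) = f**4 reduces to 0 modulo P: every monomial of the
# expansion is divisible by x**2 or y**2, so f lies in sqrt(P)
_, remainder = sp.reduced(sp.expand(f**4), P_gens, x, y)
assert remainder == 0
```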
\end{proof} \begin{thebibliography}{9999} \bibitem{Dangelo} D'Angelo, J. P., Real hypersurfaces, orders of contact, and applications, Annals of Math. 115 (1982), 615--637. \bibitem{Dangelo2} D'Angelo, J. P., Several complex variables and the geometry of real hypersurfaces, Studies in Advanced Mathematics, 1992. \bibitem{gunning} Gunning, R. C., Introduction to holomorphic functions of several variables, Volume II: local theory. \bibitem{Lang} Lang, S., Algebra, revised third edition, Graduate Texts in Mathematics, 211, Springer-Verlag, New York, 2002. \bibitem{Ruiz} Ruiz, J., The basic theory of power series, Advanced Lectures in Mathematics, Friedr. Vieweg and Sohn, Braunschweig, 1993. \end{thebibliography} \noindent John Erik Forn\ae ss\\ Mathematics Department\\ The University of Michigan\\ East Hall, Ann Arbor, MI 48109\\ USA\\ [email protected]\\ \noindent Lina Lee\\ Mathematics Department\\ The University of Michigan\\ East Hall, Ann Arbor, MI 48109\\ USA\\ [email protected]\\ \noindent Yuan Zhang\\ Mathematics Department\\ Rutgers University\\ New Brunswick, NJ, 08854\\ USA\\ [email protected]\\ \end{document} So let $f$ be in the ideal. We can divide out the Weierstrass polynomials, so we can assume that $f\in {}_nO[z_{n+1},\dots,z_m]$ where the degree in each $z_k$ is strictly less than the degree of the corresponding Weierstrass polynomial. So why does this vanish on the formal curve??? Here is the crucial result: This we need to interpret formally!!! 
\begin{thm} Suppose $I$ is a prime ideal which is strictly regular (hence regular) in the coordinates $z_1,\dots,z_n,z_{n+1},\dots,z_m$. Let $P_{n+1}$ be the corresponding Weierstrass polynomial of degree $r.$ Then\\ (i) The natural projection $\mathbb C^m=\mathbb C^n \times \mathbb C^{m-n} \rightarrow \mathbb C^{n}$ exhibits $\operatorname{loc}(I)$ as the germ of a finite branched cover of order $r$.\\ (ii) The natural projection $\mathbb C^m =\mathbb C^{n+1}\times \mathbb C^{m-n-1}\rightarrow \mathbb C^{n+1}$ exhibits $\operatorname{loc}(p)$ as the germ of a finite branched cover of order one. \end{thm} The formal version should be the formal Nullstellensatz!!! To construct the formal curve it should be enough to consider the first Weierstrass polynomial $P_{n+1}$, because there is a unique lifting from there. We need a lemma. Recall the formula for the discriminant of a polynomial $p(x)=a_nx^n+ \cdots + a_0.$ Let $R(p,p')$ denote the determinant of the $(2n-1)\times (2n-1)$ matrix: $$\begin{bmatrix} a_n& a_{n-1}& a_{n-2}& \dots & a_1 & a_0& 0 & \dots&\dots& 0\\ 0 & a_n &a_{n-1}&a_{n-2} & \dots & a_1& a_0& 0&\dots&0\\ &&&&\dots&&&&\\ 0 & \dots & 0 & a_n & a_{n-1}&a_{n-2}&\dots &\dots &\dots& a_0\\ na_n& (n-1)a_{n-1}& (n-2)a_{n-2}& \dots &a_1 & 0& 0 & \dots&\dots&0\\ 0& n a_n & (n-1)a_{n-1}&(n-2)a_{n-2}& \dots &a_1 &0& 0&\dots & 0\\ &&&&\dots&&&&\\ 0 & \dots & 0& na_n &(n-1) a_{n-1}&(n-2)a_{n-2}&\dots &\dots &\dots& a_1\\ \end{bmatrix}$$ $R(p,p')$ is called the resultant of $p$ and $p'.$ The discriminant is then given by $D(p)=(-1)^{n(n-1)/2}\frac{1}{a_n}R(p,p').$ An equivalent formula is $a_n^{2n-2} \prod_{i<j}(r_i-r_j)^2$, where the $r_i$ are the roots of $p.$ \begin{lemma} Let $D$ be the discriminant of $P_{n+1}$. Then $D(z_1,\dots,z_n) \not\equiv 0.$ \end{lemma} \begin{proof} How do we know that we cannot factor $P_{n+1}=AB?$ Let's assume we have figured this out... 
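The resultant/discriminant formulas above are easy to check on a small example (our own; the quadratic is an arbitrary choice):

```python
import sympy as sp

x = sp.symbols('x')
# Check D(p) = (-1)**(n*(n-1)//2) * R(p, p') / a_n on a concrete example:
# p = x**2 + 3*x + 2, with roots -1 and -2.
p = x**2 + 3*x + 2
n, a_n = 2, 1
R = sp.resultant(p, sp.diff(p, x), x)
D = (-1)**(n*(n-1)//2) * R / a_n
assert D == sp.discriminant(p, x)
assert D == 1
# equivalent formula a_n**(2n-2) * prod_{i<j} (r_i - r_j)**2
r1, r2 = sorted(sp.solve(p, x))
assert a_n**(2*n - 2) * (r1 - r2)**2 == 1
```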
Then we need to show that if the discriminant is identically zero, then $P_{n+1}$ factors. \end{proof} In particular, it follows from the Lemma that if $z^*r\sim0$, then there is a matrix $U\in \mathcal M_0$ such that $$z^*h\sim z^*(f-Ug)\sim z^*(U^*f-g)\sim 0,$$ where $U^*$ is the adjoint operator of $U$. If $z^*r\sim0$, then there exists a sequence of unitary operators $U^{(k)}$ such that (\ref{jetk}) holds for every $k$. Passing to a subsequence if necessary, we can assume the weak limit of the $U^{(k)}$ is $U$. Then $\|U\|\le 1$, $z^*h\sim 0$ and \begin{equation*} z^*f- z^*(Ug)\sim0. \end{equation*} On the other hand, applying the same procedure to the equivalent identities $j_{2k}z^*h=j_k(z^*(U^{(k)*}f-g))=0$, one can also get \begin{equation*} z^*(U^*f)-z^*g\sim 0. \end{equation*} Denote by ${}_nO_0$ the ring of formal power series near $0$; then we have the following definition. \begin{defn} Let $I(U,0)$ be the ideal generated by the elements $(h, f-Ug, U^*f-g)$ for some $U\in \mathcal M_0$. Define $D(I(U,0)):=\dim({}_nO_0/I(U,0))$. $D$ is also called the intersection number of the ideal $I$ in some references. \end{defn} We immediately have the following: \begin{corollary}\label{equivalence2} Let $M=\{r=0\}$ be a smooth hypersurface in $\mathbb C^n.$ Let $z$ be a formal curve. Then $z^*r\sim 0$ if and only if there exists an operator $U\in \mathcal U$ such that $z^*f\sim 0$ for any $f\in I(U,0)$. \end{corollary} \begin{proof} We only need to prove the sufficient direction. For any formal curve $z$ such that $z^*h\sim z^*(f-Ug)\sim z^*(U^*f-g)\sim 0$, consider the $k$-jets $j_k(z^*h), j_k(z^*(f)), j_k(z^*(Ug))$. Then $j_k(z^*h)=0$ and \begin{align*} \|j_k(z^*f)\|&=\|j_k(z^*U g)\|=\|U(j_k(z^* g))\| \\ &\le \|j_k(z^*g)\| \\ &= \|j_k(z^*U^*f)\|=\|U^*(j_k(z^*f))\| \\ &\le \|j_k(z^*f)\|. 
\end{align*} Therefore $\|j_k(z^*f)\|=\|j_k(z^*g)\|$ for any $k$. Letting $k$ go to infinity, we get $z^*h\sim \|z^*f\|^2 -\|z^*g\|^2\sim 0$ and hence $z^*r\sim 0$. \end{proof} Suppose now that $z(t)=(z_1(t),\dots,z_n(t))$ is a curve tangent to order at least $k$; say $\nu(\zeta)=p$ and $r\circ z= \mathcal O(|t|^{kp}).$ Then only finitely many terms in the D'Angelo description matter; the others only contain terms of order strictly larger than $kp.$ Following D'Angelo's proof, we have a sequence of unitary matrices $U^k$ which fail to be the identity only on a finite block involving the first $N_k$ of the $f_i,g_i.$ So we can take weak limits $U^k \rightarrow U$ and $[U^k]^* \rightarrow U^*.$ For each $k$ there is a complex curve $z_k$ vanishing to order $p_k$ on which $r\circ z_k$ vanishes to at least order $k p_k.$ It follows that the functions $f_i-(U^kg)_i \rightarrow \phi_i$ converge weakly, and likewise $g_i-(U^{k*}f)_i\rightarrow \psi_i$, to formal power series. Denote by $\mathcal{U}$ the weak closure of the collection of unitary operators on $l^2$. A straightforward conclusion is that the norm of any element of $\mathcal{U}$ is bounded by $1$. We also use $j_k(\cdot)$ to denote the $k$-jet of a power series. \begin{proof} If $0$ is a point of infinite D'Angelo type, then by Corollary \ref{comparison}, $\sup_{U\in \mathcal U}\tau^*(I(U,0))=\infty$. This implies $\sup_{U\in \mathcal U} D(I(U,0))=\infty$ and hence $D(I(U,0))=\infty$ for some $U\in \mathcal U$. Applying the Nullstellensatz theorem, there exists a formal power series curve $z$ such that $z^*f\sim0$ for any $f\in I(U, 0)$, and therefore $z^*r\sim 0$ by Corollary \ref{equivalence}. \end{proof} \begin{corollary} \label{comparison} Let $z$ be a formal curve passing through $0$. Then there exists a $U\in \mathcal U$ such that $\tau^*(I(U,0), z)\ge [\Delta^*(M,0,z)/2]$. 
Here $[q]$ is the largest integer less than or equal to $q$. In particular, if $\Delta^*(M,0)=\infty$, then $\sup_{U\in \mathcal U}\tau^*(I(U,0))=\infty$. \end{corollary} \begin{proof} If $\Delta^*(M,0,z)=\infty$, i.e., $z^*r \sim 0$, then by Lemma \ref{jets}, $\tau^*(I(U,0), z)=\infty$. Otherwise assume $\nu(z^*r)=k$. By Lemma \ref{jets}, $j_k(z^*r)=0$ implies $j_{[k/2]}(z^*(f-Ug))=j_{[k/2]}(z^*(U^*f-g))=j_k(z^*h)=0$ for some unitary matrix $U$. Therefore $\tau^*(I(U,0), z)\ge [k/2]$. \end{proof} \begin{lemma}[D'Angelo] $$\tau^*(I(U,0))\le D(I(U,0)).$$ \end{lemma} \begin{lemma}[D'Angelo]\label{univ} $\sup_{U\in \mathcal U} D(I(U,0))=\infty$ if and only if $D(I(U,0))=\infty$ for some $U\in \mathcal U$. \end{lemma} \begin{proof} Suppose there exists a sequence $U^k\in \mathcal{U}$ with $D(I(U^k,0))<\infty$ and $$D(I(U^k,0))\rightarrow \infty.$$ After passing to subsequences inductively, we can assume that the $U^k$ converge entrywise; denote the limit by $U^{\infty}$. The upper semicontinuity of $D(I)$ then implies that $$D(I({U^{\infty}},0))\ge \lim D(I(U^k,0))=\infty.$$ Notice that $U^{\infty}$ is the weak limit of the $U^k$, so $U^{\infty}\in \mathcal U$. \end{proof} So assume now that we have a formal ideal $I.$ We first introduce regular coordinates. \begin{defn} Let $1\le n\le m$. We say an ideal $I$ is regular in the variables $z_{n+1},\dots, z_m$ if we can find coordinates $z_1,\dots,z_m$ so that there are Weierstrass polynomials $P_j=z_j^{k_j}+ \cdots\in I$ depending only on $z_1,\dots,z_j$ for all $j=n+1,\dots,m$. Moreover, the ideal $I$ contains no function $g(z_1,\dots,z_n)$, $g\not \equiv 0.$ \end{defn} \begin{lemma}[\cite{gunning}, p.~43] Any nonzero ideal can be made regular. 
(A prime ideal is not necessary.) \end{lemma} \begin{lemma}[\cite{gunning}, p.~44]\label{p44} A nonzero ideal $I\subset{}_m O_0$ is regular in the variables $z_{n+1},\dots,z_m$ precisely when ${}_n O_0$ is isomorphic to ${}_n\tilde { O}_0$, the image of ${}_n O_0$ in ${}_m O_0/I$, and ${}_m\tilde{ O}_0$ is an integral extension of ${}_n\tilde { O}_0$ generated by $\tilde z_{n+1},\dots,\tilde z_m$, where $\tilde z_j$ is the image of $z_j$ in ${}_m O_0/I$. \end{lemma} By Lemma \ref{p44}, we can assume that in $P_j$ the coefficients are functions of $z_1,\dots,z_n$ only. A slightly stronger condition is that of being {\it strictly} regular. This is defined for prime ideals, and we need to describe what it means to be formally strictly regular. Let $I$ be a prime ideal in ${}_mO_0$. The quotient ${}_m\tilde{O}_0:={}_mO_0/I$ is an integral domain. Each element $f \in {}_mO_0$ gives rise to an element $\tilde{f}\in {}_m\tilde{O}_0$; in particular this applies to each coordinate function $z_{n+2},\dots,z_m.$ We let ${}_m\tilde{M}_0$ denote the quotient field of ${}_m\tilde{O}_0.$ \begin{defn}[\cite{gunning}, p.~45] A choice of regular coordinates for a prime ideal $I$ is strictly regular if every element in ${}_m\tilde{M}_0$ can be written as a finite sum $\sum \frac{a_j}{b_j} \tilde{z}_{n+1}^j$ with $a_j, b_j \in {}_n\tilde{ O}_0$. \end{defn} \begin{lemma} [\cite{gunning}, p.~45]\label{gp45} Any nonzero {\it prime} ideal can be made strictly regular. \end{lemma} \begin{proof} The proof seems to be the same in the formal case. \end{proof} \end{document}
\begin{document} \title{Optimizing the Efficiency of First-Order Methods for Decreasing the Gradient of Smooth Convex Functions } \titlerunning{Optimizing the Efficiency of First-Order Methods for Decreasing the Gradient} \ifjota \author{ Donghwan Kim \and Jeffrey A. Fessler } \institute{Donghwan Kim, Corresponding author \at Korea Advanced Institute of Science and Technology (KAIST), \at Daejeon, Republic of Korea \at [email protected] \and Jeffrey A. Fessler \at University of Michigan, \at Ann Arbor, Michigan \at [email protected] } \date{Received: date / Accepted: date} \else \author{ Donghwan Kim$^1$ \and Jeffrey A. Fessler$^2$ } \institute{ $^1$ Department of Mathematical Sciences, KAIST, Republic of Korea \at $^2$ Department of Electrical Engineering and Computer Science, University of Michigan, USA \at \email{[email protected], [email protected]} } \date{Date of current version: \today} \fi \maketitle \begin{abstract} This paper optimizes the step coefficients of first-order methods for smooth convex minimization in terms of the worst-case convergence bound (\ie, efficiency) of the decrease in the gradient norm. This work is based on the performance estimation problem approach. The worst-case gradient bound of the resulting method is optimal up to a constant for large-dimensional smooth convex minimization problems, under the initial bounded condition on the cost function value. This paper then illustrates that the proposed method has a computationally efficient form that is similar to the optimized gradient method. \keywords{First-order methods \and Gradient methods \and Smooth convex minimization \and Worst-case performance analysis} \subclass{90C25 \and 90C30 \and 90C60 \and 68Q25 \and 49M25 \and 90C22} \end{abstract} \section{Introduction} Large-dimensional optimization problems arise in various modern applications of signal processing, machine learning, control, communication, and many other areas. 
First-order methods are widely used for solving such large-scale problems as their iterations involve only function/gradient calculations and simple vector operations. However, they can require many iterations to achieve the given accuracy level. Therefore, developing efficient first-order methods has received great interest, which is the main motivation of this paper. In particular, this paper targets the decrease in the gradient for smooth convex minimization, under the initial bounded condition on the cost function value. This paper uses the performance estimation problem (PEP) in~\cite{drori:14:pof} and constructs a new method called \OGMG. Among first-order methods for smooth convex minimization, Nesterov's fast gradient method (FGM) \cite{nesterov:83:amf,nesterov:04} has been used widely because its worst-case \emph{cost function} inaccuracy bound (\ie, the cost function efficiency) is optimal up to a constant, under the initial bounded \emph{distance} condition \cite{nesterov:04,nemirovsky:92:ibc}. Recently, the optimized gradient method (OGM) \cite{kim:16:ofo} (that was numerically first identified in~\cite{drori:14:pof} using PEP) has been found to exactly achieve the optimal worst-case rate of decreasing the smooth convex cost functions~\cite{drori:17:tei}, leaving no room for improvement in the worst-case. On the other hand, first-order methods that decrease the \emph{gradient} at an optimal rate in~\cite{nemirovsky:92:ibc} are yet unknown, even up to a constant. The proposed \OGMG~method has such an optimal rate under the initial bounded \emph{function} condition. After the initial version of this paper was posted online~\cite{kim:18:otc}, a simple method using \OGMG~was constructed in~\cite{nesterov:20:pda} that also has an optimal rate under the initial bounded \emph{distance} condition. 
Gradient rate analysis is useful both in theory (\eg, for a dual approach~\cite{nesterov:12:htm} and a matrix scaling problem~\cite{allenzhu:18:htm-nips}) and in practice (\eg, can be used as a stopping criterion). In addition, unlike smooth convex minimization, a worst-case \emph{gradient} inaccuracy and an initial bounded \emph{function} condition are standard choices for analyzing gradient methods for smooth \emph{nonconvex} minimization~\cite{drori:20:tco}. Therefore, this work can provide a step towards better understanding the convergence behavior of gradient methods for nonconvex minimization. There is recent interest in developing accelerated methods for decreasing the gradient (in convex minimization) \cite{nesterov:12:htm,allenzhu:18:htm-nips,carmon:19:lbf,kim:18:ala,kim:18:gto}. The best known worst-case gradient rate is achieved by FGM with a regularization technique in~\cite{nesterov:12:htm} that is optimal up to a logarithmic factor. However, a practical limitation of that method is that it requires knowledge of a bound on a value such as the distance between the initial and optimal points. In~\cite{kim:18:gto} we used PEP to derive efficient first-order methods that do not need knowledge of such unavailable values. However, the methods in~\cite{kim:18:gto} are far from achieving the optimal rate (not even up to a logarithmic factor), due to strict relaxations introduced to PEP in~\cite{kim:18:gto}. The methods in~\cite{nesterov:12:htm,kim:18:ala,kim:18:gto,ghadimi:16:agm,monteiro:13:aah} also achieve a similar nonoptimal rate. Thus, there is still room to improve the worst-case gradient convergence bound of the first-order methods for smooth convex minimization. This paper optimizes the step coefficients of first-order methods in terms of the worst-case gradient decrease using PEP~\cite{drori:14:pof,taylor:17:ssc}, yielding \OGMG. The new analysis avoids the (unnecessary) strict relaxations on PEP in~\cite{kim:18:gto}. 
This paper then shows that \OGMG~has an equivalent efficient form that is similar to OGM, and thus has an inexpensive per-iteration computational complexity. \OGMG~attains the optimal bound of the worst-case gradient norm up to a constant under the initial bounded \emph{function} condition~\cite{nemirovsky:92:ibc}. On the way, this paper also provides a new exact worst-case gradient bound for the gradient method (GM). The initial bounded condition on the distance between initial and optimal points is a standard assumption, whereas the initial condition on the cost function value of interest in this paper is less popular. However, sometimes a constant for the latter bounded condition is known, while a constant for the former condition is either not known or difficult to compute, making the latter condition more useful. In addition, there are cases where the latter initial bounded function condition holds, but the former condition does not. One such example is an unregularized logistic regression of an overparameterized model for separable datasets \cite{nacson:19:cog,soudry:18:tib}, which does not have any finite minimizer. Therefore, this paper's analysis under the initial bounded condition has value for such cases. Section~\ref{sec:prob} reviews a smooth convex problem and first-order methods. Section~\ref{sec:eff} reviews the efficiency of first-order methods and its lower bound. Section~\ref{sec:pep} studies the PEP approach~\cite{drori:14:pof} and provides relaxations for analyzing the worst-case gradient decrease. Section~\ref{sec:gm,ic2} uses the relaxed PEP to provide the exact worst-case gradient bound for GM. Section~\ref{sec:optpep} optimizes the step coefficients of the first-order methods using the relaxed PEP, and develops an efficient first-order method named \OGMG~under the initial function condition. Section~\ref{sec:conc} concludes the paper. 
\section{Problems and Methods} \label{sec:prob} \subsection{\bf Smooth Convex Problems} We are interested in efficiently solving the following smooth and convex minimization problem: \begin{align} f_* := \inf_{\x\in\reals^d} f(\x) \tag{M} \label{eq:prob} ,\end{align} where we assume that the function $f\;:\;\reals^d\to\reals$ is a convex function of the type $\cC$, \ie, its gradient $\nabla f(\x)$ is Lipschitz continuous: \begin{align} ||\nabla f(\x) - \nabla f(\y)|| \le L||\x - \y||, \quad \forall \x,\y\in\reals^d \label{eq:Lsmooth} \end{align} with a Lipschitz constant $L>0$, where $||\cdot||$ denotes the standard Euclidean norm. \begin{definition} The class of smooth convex functions satisfying the two above conditions is denoted by $\cF$. \end{definition} \begin{definition} The optimal set of $f$ is defined by \begin{align} \Xs := \argmin{\x\in\reals^d}f(\x) = \{\x\in\reals^d\;:\;f(\x) = f_*\} .\end{align} \end{definition} We further assume one of the following two initial conditions, where the latter is especially useful when~\eqref{eq:prob} does not have a finite minimizer, \ie, $\Xs = \emptyset$. \begin{assumption}[IFC] The set $\Xs$ is nonempty, and an initial point $\x_0$ satisfies \begin{align} f(\x_0) - f_* \le \frac{1}{2}LR^2 \quad \text{for a constant $R>0$.} \tag{IFC} \label{eq:init,func} \end{align} \end{assumption} \begin{assumption}[IFC$'$] An initial point $\x_0$ and the $N$th iterate $\x_N$ of a given method satisfy \begin{align} f(\x_0) - f(\x_N) \le \frac{1}{2}LR_N^2 \quad \text{for a constant $R_N>0$.} \tag{IFC$'$} \label{eq:init,func,N} \end{align} \end{assumption} Note that $f(\x_0) - f(\x_N) \le f(\x_0) - f_*$ for any $\x_N$. \subsection{\bf First-Order Methods} To solve a large-dimensional problem~\eqref{eq:prob}, we consider first-order methods that iteratively gain first-order information, \ie, values of the cost function $f$ and its gradient $\nabla f$ at any given point in $\reals^d$. 
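As a quick sanity check of the smoothness condition~\eqref{eq:Lsmooth} (our own sketch, not from the paper; the matrix and names are arbitrary), a quadratic $f(\x)=\frac{1}{2}\x^\top A\x$ with symmetric positive semidefinite $A$ has $\nabla f(\x)=A\x$, which is Lipschitz continuous with constant $L=\lambda_{\max}(A)$:

```python
import numpy as np

# Toy check that f(x) = 0.5 * x^T A x has an L-Lipschitz gradient with
# L = lambda_max(A): here grad f(x) = A x, so ||A x - A y|| <= L ||x - y||.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B.T @ B                      # symmetric positive semidefinite
L = np.linalg.eigvalsh(A)[-1]    # largest eigenvalue = spectral norm of A
for _ in range(100):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    assert np.linalg.norm(A @ x - A @ y) <= L * np.linalg.norm(x - y) + 1e-9
```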
The computational effort for acquiring those values depends only mildly on the problem dimension. We are interested in developing a first-order method that, starting from an initial point $\x_0$, efficiently generates after $N$ iterations a point $\x_N$ minimizing the worst-case absolute gradient inaccuracy under the initial function condition~\eqref{eq:init,func}. \begin{definition} The gradient efficiency is defined as the worst-case absolute gradient inaccuracy \begin{align} \sup_{f\in\cF} ||\nabla f(\x_N)||^2 \label{eq:epsgrad} .\end{align} \end{definition} For simplicity in Sects.~\ref{sec:pep},~\ref{sec:gm,ic2} and~\ref{sec:optpep} that use the PEP approach (as in~\cite{drori:14:pof}), we consider the following \emph{fixed-step} first-order methods (FSFOM): \begin{align} \x_{i+1} = \x_i - \frac{1}{L}\sum_{k=0}^{i}h_{i+1,k}\nabla f(\x_k), \quad i=0,\dots,N-1, \label{eq:fsfom} \end{align} where $\h := \{h_{i+1,k}\} \in \reals^{N(N+1)/2}$ is a tuple of fixed step coefficients that do not depend on $f$, $\x_0$ and $R$ (or $R_N$). This FSFOM class includes (fixed-step) GM (\ie, $h_{i+1,k} = 0$ for $k<i$), (fixed-step) FGM \cite{nesterov:83:amf,nesterov:04} (see~\cite{drori:14:pof}), OGM \cite{kim:16:ofo}, and the proposed \OGMG, but excludes line-search approaches, such as a backtracking version of FGM in~\cite{beck:09:afi} and an exact line-search version of OGM in~\cite{drori:19:efo}. \section{Efficiency of First-Order Methods} \label{sec:eff} This paper seeks to improve the \emph{efficiency} of first-order methods, where the efficiency consists of the following two parts: the computational effort for selecting a search point (\eg, computing $\x_{i+1}$ in~\eqref{eq:fsfom} given $\x_i$ and $\{\nabla f(\x_k)\}_{k=0}^i$), and the number of evaluations of the cost function value and gradient at each given search point to reach a given accuracy. 
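As an implementation note (not from the paper), the FSFOM recursion~\eqref{eq:fsfom} amounts to a short loop over stored gradients. The Python sketch below, with a user-supplied gradient oracle \texttt{grad\_f} (a hypothetical name), recovers GM when $h_{i+1,i}=1$ and all other coefficients are zero.

```python
import numpy as np

def fsfom(grad_f, x0, L, h, N):
    """Run the FSFOM of eq. (fsfom); h[i][k] holds h_{i+1,k} for k = 0, ..., i."""
    xs = [np.asarray(x0, dtype=float)]
    grads = []
    for i in range(N):
        grads.append(grad_f(xs[i]))
        # x_{i+1} = x_i - (1/L) * sum_{k=0}^{i} h_{i+1,k} * grad f(x_k)
        step = sum(h[i][k] * grads[k] for k in range(i + 1))
        xs.append(xs[i] - step / L)
    return xs

def gm_coeffs(N):
    # GM: h_{i+1,k} = 1 for k = i and 0 otherwise.
    return [[1.0 if k == i else 0.0 for k in range(i + 1)] for i in range(N)]
```

For example, GM applied to $f(\x)=\frac{L}{2}||\x||^2$ reaches the minimizer in one step, since $\x_1=\x_0-\frac{1}{L}(L\x_0)=\zero$.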
This paper considers both parts of the efficiency, particularly focusing on the latter part, as detailed in this section. Regarding the former part, we later show that the proposed method has an efficient form, similar to (fixed-step) FGM and OGM, requiring computational effort comparable to that of (fixed-step) GM. An efficiency estimate of an optimization method is defined by the worst-case absolute inaccuracy. One popular choice of the worst-case absolute inaccuracy is the worst-case absolute cost function inaccuracy. \begin{definition} The cost function efficiency is defined as the worst-case absolute cost function inaccuracy \begin{align} \sup_{f\in\cF} f(\x_N) - f_* \label{eq:epsfunc} .\end{align} \end{definition} When analyzing the cost function efficiency, we usually consider the following initial condition. \begin{assumption}[IDC] The set $\Xs$ is nonempty, and an initial point $\x_0$ satisfies \begin{align} ||\x_0 - \x_*|| \le \overline{R} \quad \text{for a constant $\overline{R}>0$,} \tag{IDC} \label{eq:init,dist} \end{align} for some $\x_* \in \Xs$. \end{assumption} Under~\eqref{eq:init,dist}, GM has an $O(1/N)$ cost function efficiency~\eqref{eq:epsfunc} \cite{nesterov:04}, and this rate was improved to $O(1/N^2)$ by FGM~\cite{nesterov:83:amf,nesterov:04}. This efficiency was further optimized by OGM~\cite{drori:14:pof,kim:16:ofo}, which was shown in~\cite{drori:17:tei} to exactly achieve the optimal efficiency. Compared to the worst-case \emph{cost function} inaccuracy~\eqref{eq:epsfunc}, the worst-case absolute \emph{gradient} inaccuracy~\eqref{eq:epsgrad} has received less attention \cite{nemirovsky:92:ibc,nesterov:12:htm,taylor:17:ssc,taylor:18:ewc}. 
For the initial \emph{distance} condition~\eqref{eq:init,dist}, GM has an $O(1/N^2)$ gradient efficiency \cite{nesterov:12:htm}, while FGM with a regularization technique~\cite{nesterov:12:htm} that requires knowledge of the (practically unavailable) constant $\overline{R}$ achieves $O(1/N^4)$ up to a logarithmic factor; this is the best known rate, and $O(1/N^4)$ is the optimal gradient efficiency under~\eqref{eq:init,dist} \cite{nemirovsky:92:ibc}. The papers~\cite{nesterov:12:htm,kim:18:ala,kim:18:gto,ghadimi:16:agm,monteiro:13:aah} studied first-order methods that do not require knowing $\overline{R}$ and that have $O(1/N^3)$ gradient efficiency, but none of them (including~\cite{nesterov:12:htm}) attain the optimal efficiency (even up to a constant). On the other hand, the gradient efficiency under the initial \emph{function} condition~\eqref{eq:init,func} has received even less attention \cite{nemirovsky:92:ibc,taylor:18:ewc}; the optimal efficiency in this setting is $O(1/N^2)$~\cite{nemirovsky:92:ibc}. Section~\ref{sec:gm,ic2} provides the exact $O(1/N)$ rate of GM, which was studied numerically for the more general nonsmooth composite convex problems in~\cite{taylor:18:ewc}. The paper~\cite{carmon:19:lbf} notes that FGM with the regularization technique of~\cite{nesterov:12:htm} under~\eqref{eq:init,func} also achieves the optimal worst-case gradient rate $O(1/N^2)$ up to a logarithmic factor; this was the best previously known rate, and this paper provides a better one. In short, none of the existing first-order methods achieve the optimal rate for the gradient inaccuracy even up to a constant, and thus this paper focuses on optimizing the gradient efficiency of first-order methods for smooth convex minimization under~\eqref{eq:init,func} and~\eqref{eq:init,func,N}. 
Table~\ref{tab:rate} summarizes the efficiency of first-order methods and illustrates that the proposed \OGMG~attains the optimal worst-case gradient rate $O(1/N^2)$ with~\eqref{eq:init,func} and~\eqref{eq:init,func,N}. \begin{remark} \label{rk:ogmg} After the initial version of this paper was posted online~\cite{kim:18:otc}, the paper~\cite[Remark 2.1]{nesterov:20:pda} constructed a simple method using \OGMG~that achieves $O(1/N^4)$ under the initial distance condition~\eqref{eq:init,dist}. The method runs an accelerated method such as Nesterov's FGM and OGM for the first half of the iterations and then runs \OGMG~for the rest. That approach (built upon the proposed \OGMG) further closes the open problem of developing an optimal method for decreasing the gradient, under the initial distance condition~\eqref{eq:init,dist}. \end{remark} \begin{table}[!h] \centering \caption{ Summary of the efficiency of first-order methods discussed in Sect.~\ref{sec:eff} \cite{nesterov:04,nemirovsky:92:ibc,nesterov:12:htm,carmon:19:lbf,taylor:18:ewc}; The rates of the proposed \OGMG~and a method in~\cite[Remark 2.1]{nesterov:20:pda} using \OGMG~(see Remark~\ref{rk:ogmg}) are also presented. $\widetilde{O}(\cdot)$ is a big-$O$ notation that ignores a logarithmic factor. } \label{tab:rate} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Efficiency} & Initial & GM & \multicolumn{2}{c|}{Best known rate} & OGM-G & \cite{nesterov:20:pda} & Optimal \\ \cline{4-5} & cond. 
& rate & w/o $R$ or $\overline{R}$ & w/ $R$ or $\overline{R}$ & rate & rate & rate \\ \hline Cost func.~\eqref{eq:epsfunc} & \eqref{eq:init,dist} & $O(1/N)$ & \multicolumn{2}{c|}{$O(1/N^2)$} & $\cdot$ & $\cdot$ & $O(1/N^2)$ \\ \hline \multirow{3}{*}{Gradient~\eqref{eq:epsgrad}} & \eqref{eq:init,dist} & $O(1/N^2)$ & $O(1/N^3)$ & $\widetilde{O}(1/N^4)$ & $\cdot$ & $O(1/N^4)$ & $O(1/N^4)$ \\ \cline{2-8} & \eqref{eq:init,func} & $O(1/N)$ & $O(1/N)$ & $\widetilde{O}(1/N^2)$ & $O(1/N^2)$ & $\cdot$ & $O(1/N^2)$ \\ \cline{2-8} & \eqref{eq:init,func,N} & $O(1/N)$ & $\cdot$ & $\cdot$ & $O(1/N^2)$ & $\cdot$ & $\cdot$ \\ \hline \end{tabular} \end{table} As Table~\ref{tab:rate} demonstrates, the worst-case rates of any given method and the optimal worst-case rates depend dramatically on the initial condition. In particular, the worst-case gradient rates for~\eqref{eq:init,func} tend to be slower than those for~\eqref{eq:init,dist}. At first glance, this situation might diminish one's interest in the initial function condition~\eqref{eq:init,func} studied in this paper. However, one should also consider the constants $R$ and $\overline{R}$ for a fair comparison of the worst-case rates. In particular, consider a problem instance $(f,\x_0)$ where $f\in\cF$ and $\Xs \neq \emptyset$. Then, choose $R$ and $\overline{R}$ such that \begin{align} f(\x_0) - f_* = \frac{1}{2}LR^2 \quad\text{and}\quad ||\x_0 - \x_*|| = \overline{R} \end{align} for some $\x_*\in\Xs$. Using the inequality $f(\x_0) - f_* \le \frac{L}{2}||\x_0 - \x_*||^2$, which follows from the smoothness of $f$ in~\eqref{eq:Lsmooth}, we have the relationship \begin{align} R \le \overline{R} .\end{align} For any optimization method, including GM and OGM-G, the ratio $\Frac{\overline{R}}{R}$ can be on the order of $N$ or beyond for a given $N$, and should not be neglected. Section~\ref{sec:worst,ogmg,dist} gives one such example. 
\section{Performance Estimation Problem (PEP) for the Worst-Case Gradient Decrease} \label{sec:pep} This section studies PEP~\cite{drori:14:pof} and its relaxations for the worst-case gradient analysis under the condition~\eqref{eq:init,func}. \subsection{\bf Exact PEP} The papers~\cite{drori:14:pof,taylor:17:ssc} suggest that for any given step coefficients $\h := \{h_{i,k}\}$ of a FSFOM, total number of iterations $N$, problem dimension $d$, and constants $L$, $R$, the exact worst-case gradient bound under~\eqref{eq:init,func} is given by \begin{align} \Bd_{\mathrm{P}}(\h,N,d,L,R) := &\max_{f\in\cF} \max_{\x_0,\ldots,\x_N\in\reals^d} \frac{1}{L^2R^2}||\nabla f(\x_N)||^2 \tag{P} \label{eq:P} \\ &\st\; \begin{cases} \x_{i+1} = \x_i - \frac{1}{L}\sum_{k=0}^i h_{i+1,k}\nabla f(\x_k), \quad i=0,\ldots,N-1, & \\ f(\x_0) - f_* \le \frac{1}{2}LR^2, \quad \x_* \in \Xs \neq \emptyset, & \end{cases} \nonumber \end{align} where $||\nabla f(\x_N)||^2$ is multiplied by $\frac{1}{L^2R^2}$ for convenience in later analysis. However, as noted in~\cite{drori:14:pof}, it is intractable to solve~\eqref{eq:P} due to its infinite-dimensional function constraint. Thus the next section employs relaxations introduced in~\cite{drori:14:pof}. \subsection{\bf Relaxing PEP} As suggested by~\cite{drori:14:pof,taylor:17:ssc}, to convert~\eqref{eq:P} into an equivalent finite-dimensional problem, we replace the infinite-dimensional constraint $f\in\cF$ by a finite set of inequalities satisfied by $f\in\cF$ \cite[Theorem~2.1.5]{nesterov:04}: \begin{align} \frac{1}{2L}||\nabla f(\x_i) - \nabla f(\x_j)||^2 \le f(\x_i) - f(\x_j) - \inprod{\nabla f(\x_j)}{\x_i - \x_j} \label{eq:smooth} \end{align} on each pair $(i,j)$ for $i,j=0,\ldots,N,*$. For simplicity in the proofs, we further narrow down the set\footnote{ We found that the set of constraints in~\eqref{eq:P1} is sufficient for the exact worst-case gradient analysis of GM and \OGMG~for~\eqref{eq:init,func}, as illustrated in later sections. 
In other words, the resulting worst-case rates of GM and \OGMG~in this paper are tight with our specific choice of the set of inequalities. Note that this relaxation choice in~\eqref{eq:P1} differs from the choice in~\cite[Problem (G$'$)]{drori:14:pof}.} of inequalities~\eqref{eq:smooth}, specifically the pairs $\{(i-1,i)\;:\;i=1,\ldots,N\}$, $\{(N,i)\;:\;i=0,\ldots,N-1\}$ and $\{(N,*)\}$.\footnote{ \label{ft:nonempty} The inequality~\eqref{eq:smooth} for the pair $\{(N,*)\}$ simplifies to $\frac{1}{2L}||\nabla f(\x_N)||^2 \le f(\x_N) - f_*$ under the condition $\Xs \neq \emptyset$. Such inequality is not used under the assumption~\eqref{eq:init,func,N} in Corollaries~\ref{cor:gm} and~\ref{cor:ogmg}. } This relaxation leads to \begin{align} \Bd_{\mathrm{P1}}(\h,N,d) := &\max_{\substack{\G\in\reals^{(N+1)\times d}, \\ \del\in\reals^{N+1}}}\; \Tr{\G^\top\u_N\u_N^\top\G} \tag{P1} \label{eq:P1} \\ &\st\; \begin{cases} \Tr{\G^\top\A_{i-1,i}(\h)\G} \le \delta_{i-1} - \delta_i, \quad i=1,\ldots,N, & \\ \Tr{\G^\top\B_{N,i}(\h)\G} \le \delta_N - \delta_i, \quad i=0,\ldots,N-1, & \\ \Tr{\G^\top\C_N\G} \le \delta_N, \quad \delta_0 \le \frac{1}{2}, & \end{cases} \nonumber \end{align} where we define \begin{align} \begin{cases} \g_i := \frac{1}{LR}\nabla f(\x_i), \quad i=0,\ldots,N, \quad \G := [\g_0,\ldots,\g_N]^\top, & \\ \delta_i := \frac{1}{LR^2}(f(\x_i) - f_*), \quad i=0,\ldots,N, \quad \del := [\delta_0,\ldots,\delta_N]^\top, & \\ \u_i := [0,\ldots,0,\underbrace{1}_{\text{$(i+1)$th entry}},0,\ldots,0]^\top \in \reals^{N+1}, \quad i=0,\ldots,N, & \end{cases} \label{eq:def1} \end{align} and \begin{align} \begin{cases} \A_{i-1,i}(\h) := \frac{1}{2}(\u_{i-1} - \u_i)(\u_{i-1} - \u_i)^\top + \frac{1}{2}\sum_{k=0}^{i-1} h_{i,k}(\u_i\u_k^\top + \u_k\u_i^\top), \quad i=1,\ldots,N, & \\ \B_{N,i}(\h) := \frac{1}{2}(\u_N - \u_i)(\u_N - \u_i)^\top - \frac{1}{2}\sum_{l=i+1}^N\sum_{k=0}^{l-1} h_{l,k}(\u_i\u_k^\top + \u_k\u_i^\top), & \\ \hspace{250pt} \quad i=0,\ldots,N-1, & \\ \C_N := 
\frac{1}{2}\u_N\u_N^\top. \end{cases} \label{eq:def2} \end{align} As in~\cite{taylor:17:ssc}, we further relax~\eqref{eq:P1} by introducing the Gram matrix $\Z := \G\G^\top$ as \begin{align} \Bd_{\mathrm{P2}}(\h,N,d) := &\max_{\substack{\Z\in\symm_+^{N+1}, \\ \del\in\reals^{N+1}}}\; \Tr{\u_N\u_N^\top\Z} \tag{P2} \label{eq:P2} \\ &\st\; \begin{cases} \Tr{\A_{i-1,i}(\h)\Z} \le \delta_{i-1} - \delta_i, \quad i=1,\ldots,N, & \\ \Tr{\B_{N,i}(\h)\Z} \le \delta_N - \delta_i, \quad i=0,\ldots,N-1, & \\ \Tr{\C_N\Z} \le \delta_N, \quad \delta_0 \le \frac{1}{2}. & \end{cases} \nonumber \end{align} This problem has the following Lagrangian dual: \begin{align} \Bd_{\mathrm{D}}(\h,N) := &\min_{(\aa,\bb,c,e)\in\reals_+^{2N+2}} \frac{1}{2}e \tag{D} \label{eq:D} \\ &\st\; \begin{cases} \S(\h,\aa,\bb,c) \succeq\zero, \quad -a_1 + b_0 + e = 0, \quad a_N - \sum_{i=0}^{N-1}b_i - c = 0, & \\ a_i - a_{i+1} + b_i = 0, \quad i=1,\ldots,N-1, & \end{cases} \nonumber \end{align} where \begin{align} &\S(\h,\aa,\bb,c) :=\; \sum_{i=1}^Na_i\A_{i-1,i}(\h) + \sum_{i=0}^{N-1}b_i\B_{N,i}(\h) + c\C_N - \u_N\u_N^\top \label{eq:SS} \\ =\;& \frac{1}{2}\sum_{i=1}^N a_i(\u_{i-1} - \u_i)(\u_{i-1} - \u_i)^\top + \frac{1}{2}\sum_{i=0}^{N-1} b_i(\u_N - \u_i)(\u_N - \u_i)^\top + \frac{1}{2}(c-2)\u_N\u_N^\top \nonumber \\ &+ \frac{1}{2}\sum_{i=1}^N\sum_{k=0}^{i-1} a_i h_{i,k} (\u_i\u_k^\top + \u_k\u_i^\top) - \frac{1}{2}\sum_{i=0}^{N-1}\sum_{k=0}^{N-1} \paren{b_i\sum_{l=\max\left\{\substack{i+1,\\k+1}\right\}}^N h_{l,k}} (\u_i\u_k^\top + \u_k\u_i^\top) \nonumber .\end{align} For given $\h$ and $N$, the semidefinite programming (SDP) problem~\eqref{eq:D} can be solved numerically using an SDP solver (\eg, CVX~\cite{cvxi,gb08}). The next two sections analytically specify feasible points of~\eqref{eq:D} for GM and \OGMG, which the authors first identified numerically as solutions of~\eqref{eq:D} for each method. 
These feasible points provide the exact worst-case analytical gradient bounds for GM and \OGMG. \section{Applying the Relaxed PEP to GM} \label{sec:gm,ic2} Inspired by the numerical solutions of~\eqref{eq:D} for GM using CVX~\cite{cvxi,gb08}, we next specify a feasible point of~\eqref{eq:D} for GM. \begin{lemma} \label{lem:gm_feas_ic2} For GM, \ie, the FSFOM with $h_{i+1,k}$ having $1$ for $k=i$ and $0$ otherwise, the following set of dual variables: \begin{align} \begin{cases} a_i = \frac{2(N+i)}{(N-i+1)(2N+1)} = \frac{N+i}{N-i+1}e, \quad i = 1,\ldots,N, & \\ b_i = \begin{cases} \frac{2}{N(2N+1)} = \frac{1}{N}e, & i = 0, \\ \frac{2}{(N-i)(N-i+1)}, & i = 1,\ldots,N-1, \end{cases} & \\ c = e = \frac{2}{2N+1}, & \end{cases} \label{eq:gm,feas} \end{align} is a feasible point of~\eqref{eq:D}. \begin{proof} Obviously,~\eqref{eq:gm,feas} satisfies the equality conditions of~\eqref{eq:D}, and the rest of the proof shows the positive semidefinite condition of~\eqref{eq:D}. For any $\h$ and any $(\aa,\bb,c,e)$ satisfying those equality conditions, the $(i,j)$th entry of the symmetric matrix~\eqref{eq:SS} can be rewritten as \begin{align} &[2\S(\h,\aa,\bb,c)]_{ij} \label{eq:S} \\ =\;& \begin{cases} a_1 + b_0\paren{1 - 2\sum_{l=1}^N h_{l,0}}, & i=0,\; j=i, \\ a_i + a_{i+1} + b_i\paren{1 - 2\sum_{l=i+1}^N h_{l,i}}, & i=1,\ldots,N-1,\; j=i, \\ a_N + \sum_{l=0}^{N-1}b_l + c - 2 = 2(a_N - 1), & i=N,\; j=i, \\ a_i(h_{i,i-1} - 1) - b_i\sum_{l=i+1}^N h_{l,i-1} - b_{i-1}\sum_{l=i+1}^N h_{l,i}, & i=1,\ldots,N-1,\; j=i-1, \\ a_N(h_{N,N-1} - 1) - b_{N-1}, & i=N,\; j=i-1, \\ a_ih_{i,j} - b_i\sum_{l=i+1}^N h_{l,j} - b_j\sum_{l=i+1}^N h_{l,i}, & i=2,\ldots,N-1, \\ & j=0,\ldots,i-2, \\ a_Nh_{N,j} - b_j, & i=N,\;j=0,\ldots,i-2. 
\end{cases} \nonumber \end{align} Substituting the step coefficients $\h$ for GM and the dual variables~\eqref{eq:gm,feas} in~\eqref{eq:S} yields \begin{align} [2\S(\h,\aa,\bb,c)]_{ij} &= \begin{cases} a_1 - b_0 = e, & i=0,\; j=i, \\ a_i + a_{i+1} - b_i = 2a_i, & i=1,\ldots,N-1,\; j=i, \\ 2(a_N - 1), & i=N,\; j=i, \\ - b_j, & i=1,\ldots,N,\;j=0,\ldots,i-1. \end{cases} \label{eq:S,gm} \end{align} The matrix~\eqref{eq:S,gm} has nonnegative diagonal entries, and thus showing the diagonal dominance of the matrix~\eqref{eq:S,gm} implies its positive semidefiniteness. The sum of the absolute values of the off-diagonal entries in each row is \begingroup \allowdisplaybreaks \begin{align} &\sum_{\substack{j=0 \\ j\neq i}}^N\abs{[2\S(\h,\aa,\bb,c)]_{ij}} = \begin{cases} Nb_0, & i=0, \\ b_0 + (N-1)b_1, & i=1, \\ \sum_{j=0}^{i-1}b_j + (N-i)b_i, & i=2,\ldots,N-1, \\ \sum_{j=0}^{N-1}b_j, & i=N, \end{cases} \\ =\; &\begin{cases} \frac{2}{2N+1}, & i=0, \\ \frac{2}{N(2N+1)} + \frac{2}{N} = \frac{4(N+1)}{N(2N+1)}, & i=1, \\ \frac{2}{N(2N+1)} + \frac{2}{N-i+1} - \frac{2}{N} + \frac{2}{N-i+1} = \frac{4(N+i)}{(N-i+1)(2N+1)}, & i=2,\ldots,N-1, \\ \frac{2}{N(2N+1)} + 2 - \frac{2}{N} = \frac{2(2N-1)}{2N+1}, & i=N, \end{cases} \nonumber \\ =\; &\begin{cases} e, & i=0, \\ \frac{2(N+i)}{(N-i+1)}e, & i=1,\ldots,N-1, \\ 2(2Ne-1), & i=N, \end{cases} \nonumber \end{align} \endgroup and this satisfies $[2\S(\h,\aa,\bb,c)]_{ii} = \sum_{\substack{j=0 \\ j\neq i}}^N\abs{[2\S(\h,\aa,\bb,c)]_{ij}}$ for all $i$, \ie, the matrix~\eqref{eq:S,gm} is diagonally dominant, and this concludes the proof. \qed \end{proof} \end{lemma} The next theorem provides the worst-case gradient bound of GM. \begin{theorem} \label{thm:gm_bound_ic2} Assume that $f\in\cF$, $\Xs \neq \emptyset$, and $f(\x_0) - f_* \le \frac{1}{2}LR^2$~\eqref{eq:init,func}. Let $\x_0,\ldots,\x_N\in\reals^d$ be generated by GM, \ie, the FSFOM with $h_{i+1,k}$ having $1$ for $k=i$ and $0$ otherwise. 
Then, for any $N\ge1$, \begin{align} ||\nabla f(\x_N)||^2 \le \frac{L^2R^2}{2N+1} \label{eq:gm_bound_ic2} .\end{align} \begin{proof} Using Lemma~\ref{lem:gm_feas_ic2} for the step coefficients $\h$ of GM, we have \begin{align*} ||\nabla f(\x_N)||^2 \le L^2R^2\Bd_{\mathrm{D}}(\h,N) \le L^2R^2\frac{1}{2N+1} .\end{align*} \qed \end{proof} \end{theorem} The PEP proof of Theorem~\ref{thm:gm_bound_ic2}, using Lemma~\ref{lem:gm_feas_ic2}, can be used to construct a conventional proof that derives inequality~\eqref{eq:gm_bound_ic2} by a weighted sum of the inequalities~\eqref{eq:smooth}. Specifically, one can use a weighted sum of inequalities using the dual variables $(\aa,\bb,c,e)$ in~\eqref{eq:gm,feas} as weights: \begin{align} \frac{1}{2L}||\nabla f(\x_{i-1}) - \nabla f(\x_i)||^2 \le f(\x_{i-1}) - f(\x_i) - \inprod{\nabla f(\x_i)}{\x_{i-1} - \x_i} \quad&:\quad a_i \label{eq:ineqs} \\ \frac{1}{2L}||\nabla f(\x_N) - \nabla f(\x_i)||^2 \le f(\x_N) - f(\x_i) - \inprod{\nabla f(\x_i)}{\x_N - \x_i} \quad&:\quad b_i \nonumber \\ \frac{1}{2L}||\nabla f(\x_N)||^2 \le f(\x_N) - f_* \quad&:\quad c \nonumber \\ f(\x_0) - f_* \le \frac{1}{2}LR^2 \quad&:\quad e, \nonumber \end{align} which simplifies to \begin{align} \frac{1}{L}||\nabla f(\x_N)||^2 + \sum_{i=1}^N\sum_{j=0}^{i-1}\frac{b_j}{2L}\left|\left|\nabla f(\x_i) - \nabla f(\x_j)\right|\right|^2 \le \frac{LR^2}{2N+1} \label{eq:gm,weighted} ,\end{align} and this yields~\eqref{eq:gm_bound_ic2}. We next show that the bound~\eqref{eq:gm_bound_ic2} is exact by specifying a certain worst-case function. This implies that the feasible point in~\eqref{eq:gm,feas} is an optimal point of~\eqref{eq:D} for GM. 
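As a numerical sanity check (not part of the paper's proofs), the dual certificate of Lemma~\ref{lem:gm_feas_ic2} can be assembled directly from the second expression for $\S(\h,\aa,\bb,c)$ in~\eqref{eq:SS} and tested for feasibility; a minimal Python sketch, assuming numpy:

```python
import numpy as np

def gm_dual_certificate(N):
    """Assemble S(h, a, b, c) of eq. (SS) for GM with the dual point of eq. (gm,feas).

    Returns S, the equality-constraint residuals of (D), and e (so e/2 = 1/(2N+1)).
    """
    u = np.eye(N + 1)                          # u[i] = unit vector u_i
    h = np.zeros((N + 1, N + 1))
    for l in range(1, N + 1):                  # GM: h_{l,l-1} = 1, else 0
        h[l, l - 1] = 1.0
    e = 2.0 / (2 * N + 1)
    a = np.zeros(N + 1)
    for i in range(1, N + 1):
        a[i] = 2.0 * (N + i) / ((N - i + 1) * (2 * N + 1))
    b = np.zeros(N)
    b[0] = 2.0 / (N * (2 * N + 1))
    for i in range(1, N):
        b[i] = 2.0 / ((N - i) * (N - i + 1))
    c = e
    S = 0.5 * (c - 2) * np.outer(u[N], u[N])
    for i in range(1, N + 1):
        v = u[i - 1] - u[i]
        S = S + 0.5 * a[i] * np.outer(v, v)
        for k in range(i):
            S = S + 0.5 * a[i] * h[i, k] * (np.outer(u[i], u[k]) + np.outer(u[k], u[i]))
    for i in range(N):
        w = u[N] - u[i]
        S = S + 0.5 * b[i] * np.outer(w, w)
        for k in range(N):
            s = h[max(i, k) + 1:, k].sum()     # sum_{l=max(i+1,k+1)}^{N} h_{l,k}
            S = S - 0.5 * b[i] * s * (np.outer(u[i], u[k]) + np.outer(u[k], u[i]))
    residuals = [-a[1] + b[0] + e, a[N] - b.sum() - c]
    residuals += [a[i] - a[i + 1] + b[i] for i in range(1, N)]
    return S, residuals, e
```

Positive semidefiniteness of $\S$ together with vanishing equality residuals certifies the value $\frac{e}{2}=\frac{1}{2N+1}$ numerically, consistent with~\eqref{eq:gm_bound_ic2}.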
\begin{lemma} \label{lem:worst,gm} For the following Huber function in $\cF$ for all $d\ge1$: \begin{align} \phi(\x) = \begin{cases} \frac{LR}{\sqrt{2N+1}}||\x|| - \frac{LR^2}{2(2N+1)}, & ||\x|| \ge \frac{R}{\sqrt{2N+1}}, \\ \frac{L}{2}||\x||^2, & \text{otherwise}, \end{cases} \end{align} GM exactly achieves the bound~\eqref{eq:gm_bound_ic2} with $\x_0$ satisfying $\phi(\x_0) - \phi_* = \frac{1}{2}LR^2$. \begin{proof} Starting from $\x_0 = \frac{N+1}{\sqrt{2N+1}}R\nnu$ that satisfies $\phi(\x_0) - \phi_* = \frac{1}{2}LR^2$~\eqref{eq:init,func} for any unit-norm vector $\nnu$, the iterates of GM are as follows: \begin{align*} \x_i = \x_0 - \frac{1}{L}\sum_{k=0}^{i-1} \nabla \phi(\x_k) = \paren{\frac{N+1}{\sqrt{2N+1}} - \frac{i}{\sqrt{2N+1}}}R\nnu, \quad i=0,\ldots,N, \end{align*} where all the iterates stay in the affine region of the function $\phi(\x)$ with the same gradient $\nabla \phi(\x_i) = \frac{LR}{\sqrt{2N+1}}\nnu,\; i=0,\ldots,N$. Therefore, after $N$ iterations of GM, we have $ ||\nabla \phi(\x_N)||^2 = \frac{L^2R^2}{2N+1} ,$ which concludes the proof. \qed \end{proof} \end{lemma} \begin{remark} \label{rk:gm} For $f\in\cF$, and for some $\x_*\in\Xs$ and $||\x_0 - \x_*|| \le \overline{R}$~\eqref{eq:init,dist}, the $N$th iterate $\x_N$ of GM has the following exact worst-case cost function bound~\cite[Theorems~1 and 2]{drori:14:pof}: \begin{align} f(\x_N) - f(\x_*) \le \frac{L\overline{R}^2}{2(2N+1)} \label{eq:gm_cost} ,\end{align} where this exact upper bound is equivalent to the exact worst-case gradient bound~\eqref{eq:gm_bound_ic2} of GM up to a constant $\frac{\overline{R}^2}{2LR^2}$. A similar relationship appears in \cite[Table 3]{taylor:18:ewc} for nonsmooth composite convex minimization. \end{remark} The preceding results in this section assume that there is a finite minimizer. 
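Lemma~\ref{lem:worst,gm} is easy to reproduce numerically; the Python sketch below (the helper names are hypothetical, not from the paper) runs GM on this Huber function and confirms that $||\nabla \phi(\x_N)||^2$ equals the bound~\eqref{eq:gm_bound_ic2} exactly.

```python
import numpy as np

def huber_grad(x, L, R, N):
    # Gradient of phi in Lemma (worst,gm): quadratic for ||x|| <= R/sqrt(2N+1), linear outside.
    t = R / np.sqrt(2 * N + 1)
    nx = np.linalg.norm(x)
    return L * x if nx <= t else L * t * x / nx

def gm_on_huber(L, R, N, d=3):
    # x0 = (N+1)/sqrt(2N+1) * R * nu gives phi(x0) - phi_* = L R^2 / 2, i.e., (IFC).
    nu = np.zeros(d)
    nu[0] = 1.0
    x = (N + 1) / np.sqrt(2 * N + 1) * R * nu
    for _ in range(N):
        x = x - huber_grad(x, L, R, N) / L
    g2 = np.linalg.norm(huber_grad(x, L, R, N)) ** 2
    return g2 * (2 * N + 1) / (L ** 2 * R ** 2)   # ratio to the bound (gm_bound_ic2)
```

Every GM step subtracts $\frac{R}{\sqrt{2N+1}}\nnu$, so the returned ratio is $1$ for any $L$, $R$, and $N$, matching the lemma.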
There are applications that do not have a finite minimizer $\x_*\in\Xs$, \eg, unregularized logistic regression of an overparameterized model for separable datasets \cite{nacson:19:cog,soudry:18:tib}. The following corollary extends the analysis to such cases. \begin{corollary} \label{cor:gm} For $f\in\cF$, let $\x_0,\ldots,\x_N\in\reals^d$ be generated by GM. Assume that $f(\x_0) - f(\x_N) \le \frac{1}{2}LR_N^2$~\eqref{eq:init,func,N}. Then, for any $N\ge1$, \begin{align} ||\nabla f(\x_N)||^2 \le \frac{L^2R_N^2}{2N} \label{eq:gm,N} .\end{align} \begin{proof} The weighted sum yielding~\eqref{eq:gm,weighted} includes the third and fourth inequalities of~\eqref{eq:ineqs}, scaled by $c=e=\frac{2}{2N+1}$ in~\eqref{eq:gm,feas}; their combined contribution is \begin{align*} \frac{c}{2L}||\nabla f(\x_N)||^2 + c(f(\x_0) - f(\x_N)) \le \frac{c}{2}LR^2 .\end{align*} The third inequality of~\eqref{eq:ineqs} assumes $\Xs \neq \emptyset$ (see footnote~\ref{ft:nonempty}), so we derive a bound that avoids it. Replacing this contribution in the weighted sum that yields~\eqref{eq:gm,weighted} by \eqref{eq:init,func,N} scaled by $c$, \ie, $c(f(\x_0) - f(\x_N)) \le \frac{c}{2}LR_N^2$, yields \begin{align*} \frac{1}{L}\left(1 - \frac{c}{2}\right)||\nabla f(\x_N)||^2 + \sum_{i=1}^N\sum_{j=0}^{i-1}\frac{b_j}{2L}\left|\left|\nabla f(\x_i) - \nabla f(\x_j)\right|\right|^2 \le \frac{LR_N^2}{2N+1} ,\end{align*} and since $1 - \frac{c}{2} = \frac{2N}{2N+1}$, this gives~\eqref{eq:gm,N}. \qed \end{proof} \end{corollary} \section{Optimizing FSFOM Using the Relaxed PEP} \label{sec:optpep} This section optimizes the step coefficients of FSFOM using the relaxed PEP~\eqref{eq:D} to develop an efficient first-order method for decreasing the gradient of smooth convex functions. 
\subsection{\bf Numerically Optimizing FSFOM Using the Relaxed PEP} To optimize the step coefficients $\h$ of FSFOM for each given $N$, we are interested in solving \begin{align} \tilde{\h} := \argmin{\h} \Bd_{\mathrm{D}}(\h,N) \tag{HD} \label{eq:HD} ,\end{align} which is nonconvex. However, the problem~\eqref{eq:HD} is bi-convex over $\h$ and $(\aa,\bb,c,e)$, so for each given $N$ we numerically solved~\eqref{eq:HD} by an alternating minimization approach using CVX~\cite{cvxi,gb08}. Inspired by those numerical solutions, the next section specifies a feasible point of~\eqref{eq:HD}. \subsection{\bf A Feasible Point of the Relaxed PEP} The following lemma specifies a feasible point of~\eqref{eq:HD}. \begin{lemma} \label{lem:ogmg_feas_ic2} The following step coefficients of FSFOM: \begin{align} \tth_{i+1,k} &= \begin{cases} \frac{\ttheta_{k+1} - 1}{\ttheta_k}\tth_{i+1,k+1}, & k=0,\ldots,i-2, \\ \frac{\ttheta_{k+1} - 1}{\ttheta_k}(\tth_{i+1,i} - 1), & k=i-1, \\ 1 + \frac{2\ttheta_{i+1} - 1}{\ttheta_i}, & k=i, \end{cases} \label{eq:ogmg,h} \end{align} and the following set of dual variables: \begin{align} a_i = \frac{1}{\ttheta_i^2}, \quad i = 1,\ldots,N, \quad b_i = \frac{1}{\ttheta_i\ttheta_{i+1}^2}, \quad i = 0,\ldots,N-1, \quad c = e = \frac{2}{\ttheta_0^2}, \label{eq:ogmg,feas} \end{align} constitute a feasible point of~\eqref{eq:HD} for the parameters: \begin{align} \ttheta_i = \begin{cases} \frac{1 + \sqrt{1 + 8\ttheta_{i+1}^2}}{2}, & i=0, \\ \frac{1 + \sqrt{1 + 4\ttheta_{i+1}^2}}{2}, & i=1,\ldots,N-1, \\ 1, & i = N. 
\end{cases} \label{eq:ttheta} \end{align} \begin{proof} The appendix first derives properties of the step coefficients $\tilde{\h} = \{\tth_{i,k}\}$~\eqref{eq:ogmg,h} that are used in the proof: \begin{align} \tth_{i,j} &= \frac{\ttheta_i^2(2\ttheta_i-1)}{\ttheta_j\ttheta_{j+1}^2}, \quad i=2,\ldots,N,\;j=0,\ldots,i-2, \label{eq:ogmg,h,sum1} \\ \sum_{l=i+1}^N \tth_{l,j} &= \begin{cases} \frac{1}{2}(\ttheta_0+1), & i=0,\;j=i, \\ \ttheta_i, & i=1,\ldots,N-1,\;j=i, \\ \frac{\ttheta_{i+1}^4}{\ttheta_j\ttheta_{j+1}^2}, & i=1,\ldots,N-1,\;j=0,\ldots,i-1. \\ \end{cases} \label{eq:ogmg,h,sum2} \end{align} By definition of $\ttheta_i$~\eqref{eq:ttheta}, we also have \begin{align} \ttheta_i^2 &= \begin{cases} \ttheta_i + 2\ttheta_{i+1}^2, & i=0, \\ \ttheta_i + \ttheta_{i+1}^2, & i=1,\ldots,N-1. \end{cases} \label{eq:ttheta,rule} \end{align} Obviously,~\eqref{eq:ogmg,feas} satisfies the equality conditions of~\eqref{eq:D}, and the rest of the proof shows the positive semidefinite condition of~\eqref{eq:D}. 
Substituting the step coefficients $\tilde{\h}$~\eqref{eq:ogmg,h} and the dual variables~\eqref{eq:ogmg,feas} with their properties~\eqref{eq:ogmg,h,sum1},~\eqref{eq:ogmg,h,sum2} and~\eqref{eq:ttheta,rule} in~\eqref{eq:S} yields \begin{align*} &\;[2\S(\h,\aa,\bb,c)]_{ij} \\ =& \begin{cases} \frac{1}{\ttheta_1^2} + \frac{1}{\ttheta_0\ttheta_1^2}(1-(\ttheta_0+1)), \quad i=0,\; j=i, & \\ \frac{1}{\ttheta_i^2} + \frac{1}{\ttheta_{i+1}^2} + \frac{1}{\ttheta_i\ttheta_{i+1}^2} \paren{1 - 2\ttheta_i} = \frac{\ttheta_{i+1}^2 + \ttheta_i - \ttheta_i^2}{\ttheta_i^2\ttheta_{i+1}^2}, \quad i=1,\ldots,N-1,\; j=i, & \\ 2\paren{\frac{1}{\ttheta_N^2} - 1}, \quad i=N,\; j=i, & \\ \frac{1}{\ttheta_i^2}\frac{2\ttheta_i-1}{\ttheta_{i-1}} - \frac{1}{\ttheta_i\ttheta_{i+1}^2}\frac{\ttheta_{i+1}^4}{\ttheta_{i-1}\ttheta_i^2} - \frac{1}{\ttheta_{i-1}\ttheta_i^2}\ttheta_i = \frac{(2\ttheta_i-1)\ttheta_i-\ttheta_{i+1}^2 - \ttheta_i^2}{\ttheta_{i-1}\ttheta_i^3}, \; i=1,\ldots,N-1,\; j=i-1, & \\ \frac{1}{\ttheta_N^2}\frac{2\ttheta_N-1}{\ttheta_{N-1}} - \frac{1}{\ttheta_{N-1}\ttheta_N^2}, \quad i=N,\; j=i-1, &\\ \frac{1}{\ttheta_i^2}\frac{\ttheta_i^2(2\ttheta_i-1)}{\ttheta_j\ttheta_{j+1}^2} - \frac{1}{\ttheta_i\ttheta_{i+1}^2}\frac{\ttheta_{i+1}^4}{\ttheta_j\ttheta_{j+1}^2} - \frac{1}{\ttheta_j\ttheta_{j+1}^2}\ttheta_i = \frac{(2\ttheta_i-1)\ttheta_i - \ttheta_{i+1}^2 - \ttheta_i^2}{\ttheta_j\ttheta_{j+1}^2\ttheta_i}, \;\;\; i=2,\ldots,N-1, \\ \hspace{253pt} j=0,\ldots,i-2, & \\ \frac{1}{\ttheta_N^2}\frac{1}{\ttheta_j\ttheta_{j+1}^2} - \frac{1}{\ttheta_j\ttheta_{j+1}^2}, \quad i=N,\; j=0,\ldots,i-2, & \end{cases} \\ =&\; \zero, \end{align*} where each case vanishes by~\eqref{eq:ttheta,rule} and $\ttheta_N = 1$, which concludes the proof. \qed \end{proof} \end{lemma} The next theorem provides the worst-case gradient bound of FSFOM with step coefficients~\eqref{eq:ogmg,h}. \begin{theorem} \label{thm:ogmg,rate} Assume that $f\in\cF$, $\Xs \neq \emptyset$, and $f(\x_0) - f_* \le \frac{1}{2}LR^2$~\eqref{eq:init,func}. 
Let $\x_0,\ldots,\x_N\in\reals^d$ be generated by FSFOM with step coefficients~\eqref{eq:ogmg,h}. Then, for any $N\ge1$, \begin{align} ||\nabla f(\x_N)||^2 \le \frac{L^2R^2}{\ttheta_0^2} \le \frac{2L^2R^2}{(N+1)^2} \label{eq:ogmg,rate} .\end{align} \begin{proof} Using Lemma~\ref{lem:ogmg_feas_ic2}, we have $ ||\nabla f(\x_N)||^2 \le L^2R^2\Bd_{\mathrm{D}}(\h,N) \le L^2R^2\frac{1}{\ttheta_0^2} .$ One can easily show by induction that $\ttheta_i$~\eqref{eq:ttheta} satisfies $\ttheta_i \ge \frac{N-i+2}{2}$ for $i=1,\ldots,N$, which then yields $\ttheta_0 \ge \frac{N+1}{\sqrt{2}}$ and concludes the proof. \qed \end{proof} \end{theorem} Similar to~\eqref{eq:gm,weighted}, the PEP proof of Theorem~\ref{thm:ogmg,rate}, using Lemma~\ref{lem:ogmg_feas_ic2}, can be used to construct a conventional proof by a weighted sum of the inequalities~\eqref{eq:ineqs} using the dual variables $(\aa,\bb,c,e)$ in~\eqref{eq:ogmg,feas} as weights. This weighted sum leads to \begin{align} \frac{1}{L}||\nabla f(\x_N)||^2 \le \frac{LR^2}{\ttheta_0^2} \label{eq:ogmg,weighted} \end{align} and yields~\eqref{eq:ogmg,rate}. The bound~\eqref{eq:ogmg,rate} of FSFOM with~\eqref{eq:ogmg,h} is optimal up to a constant, because Nemirovsky shows in~\cite{nemirovsky:92:ibc} that the worst-case rate for the gradient decrease of large-dimensional convex \emph{quadratic} functions is $O(1/N^2)$ under~\eqref{eq:init,func}. This result fills in Table~\ref{tab:rate}, improving upon the best known rates. The following corollary examines the rate of FSFOM with~\eqref{eq:ogmg,h} for cases where a finite minimizer might not exist. \begin{corollary} \label{cor:ogmg} For $f\in\cF$, let $\x_0,\ldots,\x_N\in\reals^d$ be generated by FSFOM with step coefficients~\eqref{eq:ogmg,h}. Assume that $f(\x_0) - f(\x_N) \le \frac{1}{2}LR_N^2$~\eqref{eq:init,func,N}. 
Then, for any $N\ge1$, \begin{align} ||\nabla f(\x_N)||^2 \le \frac{L^2R_N^2}{\ttheta_0^2 - 1} .\end{align} \begin{proof} The weighted sum yielding~\eqref{eq:ogmg,weighted} includes the third and fourth inequalities of~\eqref{eq:ineqs}, scaled by $c=e=\frac{2}{\ttheta_0^2}$ in~\eqref{eq:ogmg,feas}; their combined contribution is \begin{align} \frac{c}{2L}||\nabla f(\x_N)||^2 + c(f(\x_0) - f(\x_N)) \le \frac{c}{2}LR^2 .\end{align} The third inequality of~\eqref{eq:ineqs} assumes $\Xs \neq \emptyset$ (see footnote~\ref{ft:nonempty}), so we derive a bound that avoids it. Replacing this contribution in the weighted sum that yields~\eqref{eq:ogmg,weighted} by~\eqref{eq:init,func,N} scaled by $c$, \ie, $c(f(\x_0) - f(\x_N)) \le \frac{c}{2}LR_N^2$, yields \begin{align*} \frac{1}{L}\left(1 - \frac{c}{2}\right)||\nabla f(\x_N)||^2 \le \frac{LR_N^2}{\ttheta_0^2} ,\end{align*} and since $1 - \frac{c}{2} = \frac{\ttheta_0^2-1}{\ttheta_0^2}$, this concludes the proof. \qed \end{proof} \end{corollary} The per-iteration computational complexity of the FSFOM with~\eqref{eq:ogmg,h} would be expensive if implemented directly via~\eqref{eq:fsfom}, compared to GM, FGM and OGM, so the next section provides an efficient form. \subsection{\bf An Efficient Form of the Proposed Optimized Method: \OGMG} This section develops an efficient form of FSFOM with the step coefficients~\eqref{eq:ogmg,h}, named \OGMG. This form is similar to that of OGM~\cite{kim:16:ofo}, which is further studied in Sect.~\ref{sec:relatedOGM}. \fbox{ \begin{minipage}[t]{0.85\linewidth} \begin{flalign*} &\quad \text{\bf OGM-G} & \\ &\qquad \text{Input: } f\in\cF,\; \x_0=\y_0\in\reals^d,\; N\ge1. 
& \\ &\qquad \ttheta_i = \begin{cases} \frac{1 + \sqrt{1 + 8\ttheta_{i+1}^2}}{2}, & i=0, \\ \frac{1 + \sqrt{1 + 4\ttheta_{i+1}^2}}{2}, & i=1,\ldots,N-1, \\ 1, & i = N, \end{cases} & \\ &\qquad \text{For } i = 0,\ldots,N-1, & \\ &\qquad \qquad \y_{i+1} = \x_i - \frac{1}{L}\nabla f(\x_i), & \\ &\qquad \qquad \x_{i+1} = \y_{i+1} + \frac{(\ttheta_i-1)(2\ttheta_{i+1} - 1)} {\ttheta_i(2\ttheta_i - 1)}(\y_{i+1} - \y_i) + \frac{2\ttheta_{i+1}-1}{2\ttheta_i-1}(\y_{i+1} - \x_i). & \end{flalign*} \end{minipage} } \begin{proposition} \label{prop:ogmg} The sequence $\{\x_0,\ldots,\x_N\}$ generated by FSFOM with~\eqref{eq:ogmg,h} is identical to the corresponding sequence generated by \OGMG. \begin{proof} We first show that the step coefficients $\{\tth_{i+1,k}\}$~\eqref{eq:ogmg,h} are equivalent to \begin{align} \tth'_{i+1,k} = \begin{cases} \frac{(\ttheta_i - 1)(2\ttheta_{i+1} - 1)}{\ttheta_i(2\ttheta_i - 1)}\tth'_{i,k}, & k=0,\ldots,i-2, \\ \frac{(\ttheta_i - 1)(2\ttheta_{i+1} - 1)}{\ttheta_i(2\ttheta_i - 1)}(\tth'_{i,i-1} - 1), & k=i-1, \\ 1 + \frac{2\ttheta_{i+1} - 1}{\ttheta_i}, & k=i. \end{cases} \label{eq:ogmg,hh} \end{align} Obviously, $\tth_{i+1,i} = \tth'_{i+1,i},\;i=0,\ldots,N-1$, and we have \begin{align*} \tth_{i+1,i-1} &= \frac{\ttheta_i-1}{\ttheta_{i-1}}(\tth_{i+1,i} - 1) = \frac{(\ttheta_i-1)(2\ttheta_{i+1}-1)}{\ttheta_{i-1}\ttheta_i} = \frac{(\ttheta_i-1)(2\ttheta_{i+1}-1)}{\ttheta_i(2\ttheta_i-1)} \frac{2\ttheta_i-1}{\ttheta_{i-1}} \\ &= \frac{(\ttheta_i-1)(2\ttheta_{i+1}-1)}{\ttheta_i(2\ttheta_i-1)} (\tth'_{i,i-1}-1) = \tth'_{i+1,i-1} \end{align*} for $i=1,\ldots,N-1$. We next use induction by assuming $\tth_{i+1,k}=\tth'_{i+1,k}$ for $i=0,\ldots,n-1,\;k=0,\ldots,i$. 
We then have \begin{align*} \tth_{n+1,k} &= \frac{\ttheta_{k+1}-1}{\ttheta_k}\tth_{n+1,k+1} = \paren{\prod_{l=k}^{n-1}\frac{\ttheta_{l+1}-1}{\ttheta_l}}(\tth_{n+1,n}-1) \\ &= \paren{\prod_{l=k}^{n-2}\frac{\ttheta_{l+1}-1}{\ttheta_l}}(\tth_{n,n-1}-1) \frac{\ttheta_n-1}{\ttheta_{n-1}}\frac{\tth_{n+1,n}-1}{\tth_{n,n-1}-1} \\ &= \tth_{n,k}\frac{\ttheta_n-1}{\ttheta_{n-1}} \frac{(2\ttheta_{n+1} - 1)\ttheta_{n-1}}{\ttheta_n(2\ttheta_n-1)} = \frac{(\ttheta_n-1)(2\ttheta_{n+1}-1)}{\ttheta_n(2\ttheta_n-1)}\tth'_{n,k} = \tth'_{n+1,k} \end{align*} for $k=0,\ldots,n-2$, where the fourth equality uses the definition of $\tth_{n,k}$ and the fifth uses the induction hypothesis. This proves the first claim that the step coefficients $\{\tilde{h}_{i+1,k}\}$~\eqref{eq:ogmg,h} and $\{\tilde{h}'_{i+1,k}\}$~\eqref{eq:ogmg,hh} are equivalent. We finally use induction to show the equivalence between the generated sequences of FSFOM with~\eqref{eq:ogmg,hh} and \OGMG. For clarity, we use the notation $\x_0',\ldots,\x_N'$ and $\y_0',\ldots,\y_N'$ for \OGMG. Obviously, $\x_0 = \x_0'$, and we have \begingroup \allowdisplaybreaks \begin{align*} \x_1 &= \x_0 - \frac{1}{L}\tth'_{1,0}\nabla f(\x_0) = \x_0 - \frac{1}{L}\paren{1 + \frac{2\ttheta_1-1}{\ttheta_0}}\nabla f(\x_0) \\ &= \y_1' - \frac{1}{L}\frac{(2\ttheta_0-1)(2\ttheta_1-1)} {\ttheta_0(2\ttheta_0 - 1)}\nabla f(\x_0') \\ &= \y_1' - \frac{1}{L}\paren{\frac{(\ttheta_0-1)(2\ttheta_1-1)}{\ttheta_0(2\ttheta_0-1)} + \frac{2\ttheta_1-1}{2\ttheta_0-1}}\nabla f(\x_0') = \x_1'.
\end{align*} \endgroup Assuming $\x_i=\x_i'$ for $i=0,\ldots,n$, we have \begingroup \allowdisplaybreaks \begin{align*} &\x_{n+1} = \x_n - \frac{1}{L}\tth'_{n+1,n}\nabla f(\x_n) - \frac{1}{L}\tth'_{n+1,n-1}\nabla f(\x_{n-1}) - \frac{1}{L}\sum_{k=0}^{n-2}\tth'_{n+1,k}\nabla f(\x_k) \\ =\;& \x_n - \frac{1}{L}\paren{1 + \frac{2\ttheta_{n+1}-1}{\ttheta_n}}\nabla f(\x_n) - \frac{1}{L}\frac{(\ttheta_n-1)(2\ttheta_{n+1} - 1)} {\ttheta_n(2\ttheta_n - 1)}(\tth_{n,n-1} - 1)\nabla f(\x_{n-1}) \\ &\quad - \frac{1}{L}\frac{(\ttheta_n-1)(2\ttheta_{n+1} - 1)} {\ttheta_n(2\ttheta_n - 1)}\sum_{k=0}^{n-2}\tth_{n,k}\nabla f(\x_k) \\ =\;& \x_n - \frac{1}{L}\paren{1 + \frac{2\ttheta_{n+1} - 1} {2\ttheta_n - 1}}\nabla f(\x_n) \\ &\quad + \frac{(\ttheta_n-1)(2\ttheta_{n+1} - 1)}{\ttheta_n(2\ttheta_n - 1)} \paren{-\frac{1}{L}\nabla f(\x_n) + \frac{1}{L}\nabla f(\x_{n-1}) - \frac{1}{L}\sum_{k=0}^{n-1}\tth_{n,k}\nabla f(\x_k)} \\ =\;& \y_{n+1}' + \frac{(\ttheta_n-1)(2\ttheta_{n+1} - 1)} {\ttheta_n(2\ttheta_n - 1)}(\y_{n+1}' - \y_n') + \frac{2\ttheta_{n+1} - 1}{2\ttheta_n - 1}(\y_{n+1}' - \x_n') = \x_{n+1}'. \end{align*} \endgroup \qed \end{proof} \end{proposition} \subsection{\bf Two Worst-Case Iterative Behaviors of \OGMG} \label{sec:optpep,worst} This section specifies two worst-case problem instances for \OGMG, associated with Huber and quadratic functions respectively, that make the bound~\eqref{eq:ogmg,rate} exact. These examples imply that the feasible point in~\eqref{eq:ogmg,feas} is an optimal point of~\eqref{eq:D} for \OGMG. 
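Proposition~\ref{prop:ogmg} can also be checked numerically. The following sketch (our illustrative Python, not part of the paper; the diagonal quadratic test function and the tolerance are assumptions) runs FSFOM with the coefficients~\eqref{eq:ogmg,hh} next to the two-momentum \OGMG~form and compares the iterates:

```python
import math

# Sanity check of Prop. (prop:ogmg): FSFOM with the coefficients h'_{i+1,k}
# of (eq:ogmg,hh) and the two-momentum OGM-G form produce identical iterates.
def thetas(N):
    # backward recursion from the OGM-G box
    t = [0.0] * (N + 1)
    t[N] = 1.0
    for i in range(N - 1, 0, -1):
        t[i] = (1 + math.sqrt(1 + 4 * t[i + 1] ** 2)) / 2
    t[0] = (1 + math.sqrt(1 + 8 * t[1] ** 2)) / 2
    return t

A, L, N = [1.0, 0.3], 1.0, 6          # f(x) = (1/2) x^T diag(A) x, so L = 1
grad = lambda x: [a * xi for a, xi in zip(A, x)]
x0 = [1.0, -2.0]
t = thetas(N)

# FSFOM: x_{i+1} = x_i - (1/L) sum_k h'_{i+1,k} grad f(x_k)
xs, h = [x0], []
for i in range(N):
    c = (t[i] - 1) * (2 * t[i + 1] - 1) / (t[i] * (2 * t[i] - 1))
    if h:
        h = [c * v for v in h[:-1]] + [c * (h[-1] - 1)]
    h = h + [1 + (2 * t[i + 1] - 1) / t[i]]
    s = [sum(h[k] * grad(xs[k])[j] for k in range(i + 1)) for j in range(2)]
    xs.append([xs[i][j] - s[j] / L for j in range(2)])

# OGM-G two-momentum form from the box above
zs, x, y = [x0], x0, x0
for i in range(N):
    g = grad(x)
    yn = [x[j] - g[j] / L for j in range(2)]
    b = (t[i] - 1) * (2 * t[i + 1] - 1) / (t[i] * (2 * t[i] - 1))
    gm = (2 * t[i + 1] - 1) / (2 * t[i] - 1)
    x = [yn[j] + b * (yn[j] - y[j]) + gm * (yn[j] - x[j]) for j in range(2)]
    y = yn
    zs.append(x)

assert all(abs(p - q) < 1e-9 for u, v in zip(xs, zs) for p, q in zip(u, v))
```

The two trajectories agree to floating-point accuracy, as the proposition predicts.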
\begin{lemma} \label{lem:worst} For the following Huber and quadratic functions in $\cF$: \begin{align} \phi_1(\x) = \begin{cases} \frac{LR}{\ttheta_0}||\x|| - \frac{LR^2}{2\ttheta_0^2}, & ||\x|| \ge \frac{R}{\ttheta_0}, \\ \frac{L}{2}||\x||^2, & \text{otherwise}, \end{cases} \quad \text{and} \quad \phi_2(\x) = \frac{L}{2}||\x||^2, \label{eq:worst,two} \end{align} for all $d\ge1$, \OGMG~exactly achieves the bound~\eqref{eq:ogmg,rate} with an initial point $\x_0$ satisfying $\phi_1(\x_0) - \phi_{1,*} = \phi_2(\x_0) - \phi_{2,*} = \frac{1}{2}LR^2$. \begin{proof} We first consider $\phi_1(\x)$. Starting from an initial point $\x_0 = \frac{\ttheta_0^2+1}{2\ttheta_0}R\nnu$ that satisfies $\phi_1(\x_0) - \phi_{1,*} = \frac{1}{2}LR^2$~\eqref{eq:init,func} for any unit-norm vector $\nnu$, we have \begin{align*} \x_N = \x_0 - \frac{1}{L}\sum_{j=1}^N\sum_{k=0}^{j-1}\tth_{j,k}\nabla f(\x_k) = \paren{\frac{\ttheta_0^2+1}{2\ttheta_0} - \frac{\ttheta_0^2-1}{2\ttheta_0}}R\nnu = \frac{R}{\ttheta_0}\nnu ,\end{align*} since \begin{align*} \sum_{j=1}^N\sum_{k=0}^{j-1}\tth_{j,k} = \frac{1}{2}(\ttheta_0 + 1) + \sum_{j=1}^{N-1}\ttheta_j = \frac{1}{2}(\ttheta_0 + 1 + 2\ttheta_1^2 - 2) = \frac{1}{2}(\ttheta_0^2-1) \end{align*} which uses~\eqref{eq:ogmg,h,sum2} and~\eqref{eq:ttheta,rule}. Here, all the iterates stay in the affine region of the function $\phi_1(\x)$ with the same gradient $\nabla\phi_1(\x_i) = \frac{LR}{\ttheta_0}\nnu$ for $i=0,\ldots,N$. Therefore, after $N$ iterations of OGM-G, we have $ ||\nabla\phi_1(\x_N)||^2 = \frac{L^2R^2}{\ttheta_0^2} .$ We next consider $\phi_2(\x)$.
Starting from an initial point $\x_0 = R\nnu$ that satisfies $\phi_2(\x_0) - \phi_{2,*} = \frac{1}{2}LR^2$~\eqref{eq:init,func} for any unit-norm vector $\nnu$, we have \begin{align*} \x_1 &= -\frac{1}{L}\frac{2\ttheta_1 - 1}{\ttheta_0}\nabla f(\x_0) = -\frac{2\ttheta_1 - 1}{\ttheta_0}\x_0 ,\end{align*} and we have \begin{align*} \x_{i+1} &= - \frac{1}{L}\frac{2\ttheta_{i+1}-1}{2\ttheta_i-1}\nabla f(\x_i) = - \frac{2\ttheta_{i+1}-1}{2\ttheta_i-1}\x_i = (-1)^i\frac{2\ttheta_{i+1}-1}{2\ttheta_1-1}\x_1, \;\; i=1,\ldots,N-1, \end{align*} using $\y_i = \zero,\;i=1,\ldots,N$. Therefore, we have $ ||\nabla \phi_2(\x_N)||^2 = L^2||\x_N||^2 = \frac{L^2R^2}{\ttheta_0^2} .$ \qed \end{proof} \end{lemma} The iterates of OGM-G for the Huber worst-case function $\phi_1$ stay in one side of the affine region of the function, while those for the quadratic worst-case function $\phi_2$ always overshoot the optimum. These are extreme cases, and it is notable that some other first-order methods also have two such worst-case iterative behaviors. Specifically, in~\cite{taylor:17:ssc,kim:17:otc}, first-order methods that have such two types of worst-case iterative behaviors in Lemma~\ref{lem:worst}, associated with Huber and quadratic functions, respectively, were found to have an optimal worst-case bound among a certain subset of first-order methods. This leads us to conjecture that the exact worst-case bound~\eqref{eq:ogmg,rate} of OGM-G may be optimal, but proving it remains an open problem. \subsection{\bf Worst-Case Rate Behaviors of OGM-G under Initial Distance Condition} \label{sec:worst,ogmg,dist} This section further studies the worst-case rate behaviors of OGM-G under initial distance condition~\eqref{eq:init,dist}. 
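The quadratic case lends itself to a quick numerical check. The sketch below (our illustrative Python; the recursion helper simply mirrors the \OGMG~box above) runs \OGMG~on $\phi_2$ and confirms that $||\nabla\phi_2(\x_N)||^2 = L^2R^2/\ttheta_0^2$ holds exactly, together with $\ttheta_0 \ge \frac{N+1}{\sqrt{2}}$, which underlies the $O(1/N^2)$ form of the bound:

```python
import math

def thetas(N):
    # backward recursion from the OGM-G box
    t = [0.0] * (N + 1)
    t[N] = 1.0
    for i in range(N - 1, 0, -1):
        t[i] = (1 + math.sqrt(1 + 4 * t[i + 1] ** 2)) / 2
    t[0] = (1 + math.sqrt(1 + 8 * t[1] ** 2)) / 2
    return t

def ogmg_quadratic_grad_sq(L, R, N):
    # run OGM-G on phi_2(x) = (L/2) x^2 from x_0 = R (1-D suffices here)
    t = thetas(N)
    x = y = R
    for i in range(N):
        yn = x - (L * x) / L                       # y_{i+1} = 0 for all i, as in the proof
        b = (t[i] - 1) * (2 * t[i + 1] - 1) / (t[i] * (2 * t[i] - 1))
        g = (2 * t[i + 1] - 1) / (2 * t[i] - 1)
        x = yn + b * (yn - y) + g * (yn - x)
        y = yn
    return (L * x) ** 2, t[0]                      # ||grad phi_2(x_N)||^2 and ttheta_0

L, R = 1.0, 1.0
for N in range(1, 30):
    gsq, t0 = ogmg_quadratic_grad_sq(L, R, N)
    assert abs(gsq - (L * R / t0) ** 2) < 1e-10    # the bound (eq:ogmg,rate) is tight
    assert t0 >= (N + 1) / math.sqrt(2)            # hence gsq <= 2 L^2 R^2 / (N+1)^2
```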
Table~\ref{tab:ratePESTO} presents exact numerical worst-case rates of OGM-G (under a large-dimensional condition), using the performance estimation toolbox, named PESTO\footnote{ In the PESTO toolbox~\cite{taylor:17:pet}, we used the SDP solver SeDuMi~\cite{sturm:99:us1} interfaced through Yalmip~\cite{lofberg:04:yat}. The \OGMG~method is implemented in the PESTO toolbox. }~\cite{taylor:17:pet}, based on PEP~\cite{drori:14:pof,taylor:17:ssc}. \begin{table}[!h] \centering \caption{ Exact values of the reciprocals of the worst-case cost function inaccuracy $\paren{\frac{L\overline{R}^2}{f(\x_N)-f(\x_*)}}$ in~\eqref{eq:epsfunc} and the worst-case gradient inaccuracy $\paren{\frac{L^2\overline{R}^2}{||\nabla f(\x_N)||^2}, \frac{L^2R^2}{||\nabla f(\x_N)||^2}, \text{ or } \frac{L^2R_N^2}{||\nabla f(\x_N)||^2}}$ in~\eqref{eq:epsgrad} of OGM-G under one of the conditions~\eqref{eq:init,dist},~\eqref{eq:init,func} or~\eqref{eq:init,func,N}. } \label{tab:ratePESTO} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{OGM-G Efficiency} & Initial & \multicolumn{8}{c|}{Number of iterations} \\ \cline{3-10} & cond. & 1 & 2 & 4 & 10 & 20 & 30 & 40 & 50 \\ \hline Cost func.~\eqref{eq:epsfunc} & \eqref{eq:init,dist} & 8.0 & 10.0 & 9.7 & 8.9 & 8.5 & 8.3 & 8.3 & 8.2 \\ \hline \multirow{3}{*}{Gradient~\eqref{eq:epsgrad}} & \eqref{eq:init,dist} & 4.0 & 8.1 & 19.5 & 79.5 & 262.5 & 547.8 & 934.6 & 1422.6 \\ \cline{2-10} & \eqref{eq:init,func} & 4.0 & 8.1 & 19.5 & 79.5 & 262.5 & 547.8 & 934.6 & 1422.6 \\ \cline{2-10} & \eqref{eq:init,func,N} & 3.0 & 7.1 & 18.5 & 78.5 & 261.5 & 546.8 & 933.6 & 1421.6 \\ \hline \end{tabular} \end{table} Table~\ref{tab:ratePESTO} illustrates that the worst-case gradient rates of OGM-G are numerically equivalent under both~\eqref{eq:init,dist} and~\eqref{eq:init,func}.
This is because the worst-case problem instance of OGM-G in Lemma~\ref{lem:worst} associated with the quadratic function under~\eqref{eq:init,func} also serves as a worst-case of OGM-G under~\eqref{eq:init,dist}, as formally discussed next. \begin{corollary} \label{cor} Let $\x_0,\ldots,\x_N\in\reals^d$ be generated by OGM-G. Then, for any $N\ge1$, \begin{align} \frac{L^2\overline{R}^2}{\ttheta_0^2} \le \max_{\substack{f\in\cF, \\ \x_*\in\Xs, \\ ||\x_0-\x_*||\le\overline{R}}} ||\nabla f(\x_N)||^2 \label{eq:low} .\end{align} \begin{proof} Consider the quadratic function $\phi_2(\x) = \frac{L}{2}||\x||^2$ in Lemma~\ref{lem:worst} associated with the initial point $\x_0=R\nnu$ for any unit-norm vector $\nnu$. This initial point $\x_0$ satisfies $||\x_0-\x_*||=R=\overline{R}$ as well as $\phi_2(\x_0) - \phi_{2,*} = \frac{1}{2}LR^2$, which implies the inequality~\eqref{eq:low} based on Lemma~\ref{lem:worst}. \qed \end{proof} \end{corollary} We conjecture that the lower bound~\eqref{eq:low} of OGM-G under~\eqref{eq:init,dist} is exact, based on numerical evidence in Table~\ref{tab:ratePESTO}. This is a bit disappointing, because it appears that a method that is optimal under one initial condition is far from optimal for another initial condition. It is also unfortunate that OGM-G has a poor worst-case rate for decreasing the cost function under~\eqref{eq:init,dist}. An open problem is finding a method that achieves optimal rates invariant to worst-case rate measures and initial conditions. In addition, we study how the worst-case rate under~\eqref{eq:init,func} transfers to that under~\eqref{eq:init,dist} for a given problem instance $(f,\x_0)$. We particularly focus on two worst-case problem instances of OGM-G in Lemma~\ref{lem:worst}, while similar analysis can be done for the worst-case problem instance of GM in Lemma~\ref{lem:worst,gm}.
For the worst-case of OGM-G associated with the Huber function $\phi_1(\x)$, the constants $R$ and $\overline{R}$ in~\eqref{eq:init,func} and~\eqref{eq:init,dist} have the following relationship: \begin{align} \overline{R} = ||\x_0 - \x_*|| = \frac{\ttheta_0^2+1}{2\ttheta_0}R \ge \frac{\ttheta_0}{2}R \ge \frac{N+1}{2\sqrt{2}}R .\end{align} We can then show the following upper bound associated with $\overline{R}$ after $N$ iterations of OGM-G: \begin{align} ||\nabla \phi_1(\x_N)||^2 = \frac{L^2R^2}{\ttheta_0^2} \le \frac{2L^2R^2}{(N+1)^2} \le \frac{16L^2\overline{R}^2}{(N+1)^4} ,\end{align} yielding an $O(1/N^4)$ rate when expressed using $\overline{R}$, instead of the $O(1/N^2)$ OGM-G rate expressed using $R$. On the other hand, for the worst-case of OGM-G associated with the quadratic function $\phi_2(\x)$ in Lemma~\ref{lem:worst}, we have the relationship $R=\overline{R}$, as mentioned in Corollary~\ref{cor}. These examples illustrate that comparing the worst-case rates under different initial conditions is subtle, and it would be incomplete to treat $R$ and $\overline{R}$ as just arbitrary constants (unrelated to $N$) in Table~\ref{tab:rate}. \subsection{\bf Related Work: OGM} \label{sec:relatedOGM} This section shows that the proposed OGM-G has a close relationship with the following OGM~\cite{kim:16:ofo} (first identified numerically in~\cite{drori:14:pof}). \fbox{ \begin{minipage}[t]{0.85\linewidth} \begin{flalign} &\quad \text{\bf OGM~\cite{kim:16:ofo}} & \nonumber \\ &\qquad \text{Input: } f\in\cF,\; \x_0=\y_0\in\reals^d,\; N\ge1, \;\htheta_0 = 1.
& \nonumber \\ &\qquad \text{For } i = 0,\ldots,N-1, & \nonumber\\ &\qquad \qquad \y_{i+1} = \x_i - \frac{1}{L}\nabla f(\x_i), & \nonumber \\ &\qquad \qquad \htheta_{i+1} = \begin{cases} \frac{1 + \sqrt{1+4\htheta_i^2}}{2}, & i<N-1, \\ \frac{1 + \sqrt{1+8\htheta_i^2}}{2}, & i=N-1, \end{cases} & \label{eq:theta} \\ &\qquad \qquad \x_{i+1} = \y_{i+1} + \frac{\htheta_i - 1} {\htheta_{i+1}}(\y_{i+1} - \y_i) + \frac{\htheta_i}{\htheta_{i+1}}(\y_{i+1} - \x_i). & \nonumber \end{flalign} \end{minipage} } We can easily notice the symmetric relationship of the parameters \begin{align} \htheta_i = \ttheta_{N-i}, \quad i=0,\ldots,N, \label{eq:theta,rel} \end{align} and the fact that OGM and OGM-G have forms that differ in the coefficients of the terms $\y_{i+1} - \y_i$ and $\y_{i+1} - \x_i$. For $f\in\cF$, $\x_*\in\Xs$ and $||\x_0-\x_*||\le\overline{R}$~\eqref{eq:init,dist}, the final $N$th iterate $\x_N$ of OGM has the following exact worst-case cost function bound \cite[Theorems~2 and 3]{kim:16:ofo}: \begin{align} f(\x_N) - f(\x_*) \le \frac{L\overline{R}^2}{2\htheta_N^2} \le \frac{L\overline{R}^2}{(N+1)^2} \label{eq:ogm,rate} ,\end{align} where this exact upper bound is equivalent to the exact worst-case gradient bound~\eqref{eq:ogmg,rate} of OGM-G up to a constant $\frac{\overline{R}^2}{2LR^2}$. This equivalence is similar to the relationship between the exact worst-case bounds~\eqref{eq:gm_bound_ic2} and~\eqref{eq:gm_cost} of GM discussed in Remark~\ref{rk:gm}. The worst-case rate \eqref{eq:ogm,rate} of OGM is exactly optimal for large-dimensional smooth convex minimization~\cite{drori:17:tei}. OGM is equivalent to FSFOM with the step coefficients~\cite[Proposition 4]{kim:16:ofo}: \begin{align} \hhh_{i+1,k} = \begin{cases} \frac{\htheta_i - 1}{\htheta_{i+1}}\hhh_{i,k}, & k=0,\ldots,i-2, \\ \frac{\htheta_i - 1}{\htheta_{i+1}}(\hhh_{i,i-1} - 1), & k=i-1, \\ 1 + \frac{2\htheta_i - 1}{\htheta_{i+1}}, & k=i. \end{cases} \label{eq:ogm,h} \end{align} for $i=0,\ldots,N-1$. 
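Both facts are easy to spot-check numerically. The sketch below (our illustrative Python, not code from~\cite{kim:16:ofo}; the diagonal quadratic test function is an assumption) verifies the symmetry~\eqref{eq:theta,rel} and the equivalence of the OGM box form with FSFOM using the step coefficients~\eqref{eq:ogm,h}:

```python
import math

N = 6
# forward OGM parameters (eq:theta)
ht = [1.0]
for i in range(N):
    if i < N - 1:
        ht.append((1 + math.sqrt(1 + 4 * ht[i] ** 2)) / 2)
    else:
        ht.append((1 + math.sqrt(1 + 8 * ht[i] ** 2)) / 2)

# backward OGM-G parameters
tt = [0.0] * (N + 1)
tt[N] = 1.0
for i in range(N - 1, 0, -1):
    tt[i] = (1 + math.sqrt(1 + 4 * tt[i + 1] ** 2)) / 2
tt[0] = (1 + math.sqrt(1 + 8 * tt[1] ** 2)) / 2

# symmetry (eq:theta,rel): htheta_i = ttheta_{N-i}
assert all(abs(ht[i] - tt[N - i]) < 1e-9 for i in range(N + 1))

# OGM box form vs FSFOM with step coefficients (eq:ogm,h), on a 2-D quadratic
A, L = [1.0, 0.4], 1.0
grad = lambda x: [a * xi for a, xi in zip(A, x)]
x0 = [1.0, -1.5]

x = y = x0
for i in range(N):
    g = grad(x)
    yn = [x[j] - g[j] / L for j in range(2)]
    x = [yn[j] + (ht[i] - 1) / ht[i + 1] * (yn[j] - y[j])
         + ht[i] / ht[i + 1] * (yn[j] - x[j]) for j in range(2)]
    y = yn
x_ogm = x

xs, hh = [x0], []
for i in range(N):
    r = (ht[i] - 1) / ht[i + 1]
    if hh:
        hh = [r * v for v in hh[:-1]] + [r * (hh[-1] - 1)]
    hh = hh + [1 + (2 * ht[i] - 1) / ht[i + 1]]
    s = [sum(hh[k] * grad(xs[k])[j] for k in range(i + 1)) for j in range(2)]
    xs.append([xs[i][j] - s[j] / L for j in range(2)])

assert all(abs(p - q) < 1e-9 for p, q in zip(x_ogm, xs[-1]))
```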
The following proposition shows the symmetric relationship between the step coefficients $\{\hhh_{i+1,k}\}$~\eqref{eq:ogm,h} and $\{\tth_{i+1,k}\}$~\eqref{eq:ogmg,h} of OGM and OGM-G, respectively. \begin{proposition} The step coefficients $\{\hhh_{i+1,k}\}$~\eqref{eq:ogm,h} and $\{\tth_{i+1,k}\}$~\eqref{eq:ogmg,h} of OGM and OGM-G, respectively, have the following relationship \begin{align} \hhh_{i+1,k} = \tth_{N-k,N-i-1}, \quad i=0,\ldots,N-1,\;k=0,\ldots,i. \label{eq:h,rel} \end{align} \begin{proof} We use induction. Obviously, $\hhh_{1,0} = \tth_{N,N-1}$. Then, assuming $\hhh_{i+1,k} = \tth_{N-k,N-i-1}$ for $i=0,\ldots,n-1$, we have \begin{align*} \hhh_{n+1,k} &= \begin{cases} \frac{\ttheta_{N-n} - 1}{\ttheta_{N-n-1}}\tth_{N-k,N-n}, & k=0,\ldots,n-2, \\ \frac{\ttheta_{N-n} - 1}{\ttheta_{N-n-1}}(\tth_{N-n+1,N-n} - 1), & k=n-1, \\ 1 + \frac{2\ttheta_{N-n} - 1}{\ttheta_{N-n-1}}, & k=n, \end{cases} \\ &= \tth_{N-k,N-n-1} .\end{align*} \qed \end{proof} \end{proposition} Building upon the relationships~\eqref{eq:theta,rel} and~\eqref{eq:h,rel} between OGM and OGM-G, we numerically study the momentum coefficient values $\beta_i$ and $\gamma_i$ of OGM and OGM-G in the following form, to characterize the convergence behaviors of the methods. \fbox{ \begin{minipage}[t]{0.85\linewidth} \begin{flalign} &\quad \text{\bf Accelerated First-Order Method} & \nonumber \\ &\qquad \text{Input: } f\in\cF,\; \x_0=\y_0\in\reals^d,\; N\ge1. & \nonumber \\ &\qquad \text{For } i = 0,\ldots,N-1, & \nonumber\\ &\qquad \qquad \y_{i+1} = \x_i - \frac{1}{L}\nabla f(\x_i), & \nonumber \\ &\qquad \qquad \x_{i+1} = \y_{i+1} + \beta_i(\y_{i+1} - \y_i) + \gamma_i(\y_{i+1} - \x_i). & \nonumber \end{flalign} \end{minipage} } Figure~\ref{fig:coeff} compares the momentum coefficients ($\beta_i$, $\gamma_i$) of OGM and OGM-G for $N=100$. 
It is notable that having \emph{increasing} values of ($\beta_i$, $\gamma_i$) as $i$ increases, except for the last iteration, yields the optimal (fast) worst-case rate for decreasing the cost function, whereas having \emph{decreasing} values of ($\beta_i$, $\gamma_i$), except for the first iteration, yields the fast worst-case rate (that is optimal up to a constant) for decreasing the gradient. We leave further theoretical study on such choices of coefficients as future work. \begin{figure} \caption{Comparison of momentum coefficients ($\beta_i$,$\gamma_i$) of OGM and OGM-G.} \label{fig:coeff} \end{figure} We next compare OGM and OGM-G with their other equivalent efficient forms. Similar to~\cite[Algorithm OGM2]{kim:16:ofo}, one can easily show that the last line of OGM is equivalent to \begin{align} \begin{cases} \z_{i+1} = \y_{i+1} + (\htheta_i - 1)(\y_{i+1} - \y_i) + \htheta_i(\y_{i+1} - \x_i), & \\ \x_{i+1} = \paren{1 - \frac{1}{\htheta_{i+1}}}\y_{i+1} + \frac{1}{\htheta_{i+1}}\z_{i+1}, & \end{cases} \end{align} while that of OGM-G is equivalent to \begin{align} \begin{cases} \z_{i+1} = \y_{i+1} + (\ttheta_i - 1)(\y_{i+1} - \y_i) + \ttheta_i(\y_{i+1} - \x_i), & \\ \x_{i+1} = \paren{1 - \frac{2\ttheta_{i+1}-1}{\ttheta_i(2\ttheta_i - 1)}}\y_{i+1} + \frac{2\ttheta_{i+1}-1}{\ttheta_i(2\ttheta_i - 1)}\z_{i+1}. & \end{cases} \end{align} This interpretation stems from a variant of FGM~\cite{nesterov:05:smo} that involves a convex combination of two points as above. \cite{kim:16:ofo} already showed that a similar interpretation is possible for OGM, and the expression here also implies that decreasing the gradient can be achieved via some convex combination of two points. Further analysis is left as future work.
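The equivalence of this two-point form with the direct momentum form can be verified numerically as well; a sketch for the \OGMG~case (our illustrative Python, with an assumed quadratic test function and tolerance):

```python
import math

def thetas(N):
    # backward recursion from the OGM-G box
    t = [0.0] * (N + 1)
    t[N] = 1.0
    for i in range(N - 1, 0, -1):
        t[i] = (1 + math.sqrt(1 + 4 * t[i + 1] ** 2)) / 2
    t[0] = (1 + math.sqrt(1 + 8 * t[1] ** 2)) / 2
    return t

A, L, N = [1.0, 0.4], 1.0, 5
grad = lambda x: [a * xi for a, xi in zip(A, x)]
t = thetas(N)

def run(zform):
    # zform=True: two-point (z) form; zform=False: direct momentum form
    x = y = [1.0, -1.5]
    for i in range(N):
        g = grad(x)
        yn = [x[j] - g[j] / L for j in range(2)]
        if zform:
            z = [yn[j] + (t[i] - 1) * (yn[j] - y[j]) + t[i] * (yn[j] - x[j])
                 for j in range(2)]
            c = (2 * t[i + 1] - 1) / (t[i] * (2 * t[i] - 1))
            x = [(1 - c) * yn[j] + c * z[j] for j in range(2)]
        else:
            b = (t[i] - 1) * (2 * t[i + 1] - 1) / (t[i] * (2 * t[i] - 1))
            gm = (2 * t[i + 1] - 1) / (2 * t[i] - 1)
            x = [yn[j] + b * (yn[j] - y[j]) + gm * (yn[j] - x[j])
                 for j in range(2)]
        y = yn
    return x

xa, xb = run(True), run(False)
assert all(abs(p - q) < 1e-9 for p, q in zip(xa, xb))
```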
\section{Conclusions} \label{sec:conc} This paper developed a first-order method named OGM-G that has an inexpensive per-iteration computational complexity and achieves the optimal worst-case bound for decreasing the gradient of large-dimensional smooth convex functions up to a constant, under the initial bounded function condition. A simple method in~\cite{nesterov:20:pda}, using the OGM-G, also achieves the optimal worst-case gradient bound up to a constant, under the initial bounded distance condition. The OGM-G was derived by optimizing the step coefficients of first-order methods in terms of the worst-case gradient bound using the performance estimation problem (PEP) approach~\cite{drori:14:pof}. On the way, the exact worst-case gradient bound for a gradient method was studied. A practical drawback of OGM-G is that one must choose the number of iterations $N$ in advance. Finding a first-order method that achieves the optimal worst-case gradient bound (up to a constant), but that does not depend on selecting $N$ in advance, remains an open problem. In addition, extending the approaches based on PEP in this paper to the initial bounded distance condition~\eqref{eq:init,dist} will be interesting future work; this PEP approach with a strict relaxation (unlike this paper) has been studied in~\cite{kim:18:gto}. Further extensions of this paper to nonconvex problems and composite problems are also of interest. \section*{Appendix: Proof of Eqs.~\eqref{eq:ogmg,h,sum1} and \eqref{eq:ogmg,h,sum2}} This proof shows the properties~\eqref{eq:ogmg,h,sum1} and~\eqref{eq:ogmg,h,sum2} of the step coefficients $\{\tth_{i,j}\}$~\eqref{eq:ogmg,h}. We first show~\eqref{eq:ogmg,h,sum1}. We can easily derive \begin{align*} \tth_{i,i-2} = \frac{(\ttheta_{i-1}-1)(2\ttheta_i-1)}{\ttheta_{i-2}\ttheta_{i-1}} = \frac{\ttheta_i^2(2\ttheta_i-1)}{\ttheta_{i-2}\ttheta_{i-1}^2} \end{align*} for $i=2,\ldots,N$ using~\eqref{eq:ttheta,rule}. 
Again using the definition of~\eqref{eq:ogmg,h} and~\eqref{eq:ttheta,rule}, we have \begingroup \allowdisplaybreaks \begin{align*} \tth_{i,j} &= \frac{\ttheta_{j+1}-1}{\ttheta_j}\tth_{i,j+1} = \cdots = \paren{\prod_{l=j+1}^{i-2}\frac{\ttheta_l-1}{\ttheta_{l-1}}} \tth_{i,i-2} = \paren{\prod_{l=j+1}^{i-1}\frac{\ttheta_l-1}{\ttheta_{l-1}}} \frac{2\ttheta_i-1}{\ttheta_{i-1}} \\ &= \frac{1}{\ttheta_j}\frac{1}{\ttheta_{j+1}} \frac{\ttheta_{j+1}-1}{\ttheta_{j+2}} \cdots \frac{\ttheta_{i-3}-1}{\ttheta_{i-2}} (\ttheta_{i-2}-1)(\ttheta_{i-1}-1) \frac{2\ttheta_i-1}{\ttheta_{i-1}} \\ &= \frac{1}{\ttheta_j}\frac{1}{\ttheta_{j+1}} \frac{\ttheta_{j+2}}{\ttheta_{j+1}} \cdots \frac{\ttheta_{i-2}}{\ttheta_{i-3}} (\ttheta_{i-2}-1)(\ttheta_{i-1}-1) \frac{2\ttheta_i-1}{\ttheta_{i-1}} \\ &= \frac{\ttheta_{i-2}(\ttheta_{i-2}-1)(\ttheta_{i-1}-1)(2\ttheta_i-1)} {\ttheta_j\ttheta_{j+1}^2\ttheta_{i-1}} = \frac{\ttheta_i^2(2\ttheta_i-1)}{\ttheta_j\ttheta_{j+1}^2}, \end{align*} \endgroup for $i=2,\ldots,N,\;j=0,\ldots,i-3$, which concludes the proof of~\eqref{eq:ogmg,h,sum1}. We next prove the first two lines of~\eqref{eq:ogmg,h,sum2} by induction. For $N=1$, we have $\ttheta_1 = 1$ and \begin{align*} \tth_{1,0} = 1 + \frac{2\ttheta_1-1}{\ttheta_0} = 1 + \frac{\ttheta_1^2}{\ttheta_0} = 1 + \frac{\frac{1}{2}(\ttheta_0^2 - \ttheta_0)}{\ttheta_0} = \frac{1}{2}(\ttheta_0+1) ,\end{align*} where the third equality uses~\eqref{eq:ttheta,rule}. For $N>1$, we have \begin{align*} \tth_{N,N-1} = 1 + \frac{2\ttheta_N-1}{\ttheta_{N-1}} = 1 + \frac{\ttheta_N^2}{\ttheta_{N-1}} = 1 + \frac{\ttheta_{N-1}^2 - \ttheta_{N-1}}{\ttheta_{N-1}} = \ttheta_{N-1} ,\end{align*} where the third equality uses~\eqref{eq:ttheta,rule}.
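The identities invoked through~\eqref{eq:ttheta,rule} in these derivations, namely $\ttheta_i^2 - \ttheta_i = \ttheta_{i+1}^2$ for $i\ge1$ and $\ttheta_0^2 - \ttheta_0 = 2\ttheta_1^2$, can be spot-checked numerically (a sketch; the recursion simply mirrors the \OGMG~box):

```python
import math

# Spot-check of the identities used via (eq:ttheta,rule) in this appendix:
#   ttheta_i^2 - ttheta_i = ttheta_{i+1}^2   for i = 1, ..., N-1,
#   ttheta_0^2 - ttheta_0 = 2 ttheta_1^2.
N = 12
t = [0.0] * (N + 1)
t[N] = 1.0
for i in range(N - 1, 0, -1):
    t[i] = (1 + math.sqrt(1 + 4 * t[i + 1] ** 2)) / 2
t[0] = (1 + math.sqrt(1 + 8 * t[1] ** 2)) / 2

for i in range(1, N):
    assert abs(t[i] ** 2 - t[i] - t[i + 1] ** 2) < 1e-9
assert abs(t[0] ** 2 - t[0] - 2 * t[1] ** 2) < 1e-9
# consequence derived above: tth_{N,N-1} = 1 + 1/ttheta_{N-1} = ttheta_{N-1}
assert abs(1 + 1 / t[N - 1] - t[N - 1]) < 1e-9
```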
Assuming $\sum_{l=j+1}^N\tth_{l,j} = \ttheta_j$ for $j=n,\ldots,N-1$ and $n\ge1$, we get \begingroup \allowdisplaybreaks \begin{align*} \sum_{l=n}^N\tth_{l,n-1} =\;& 1 + \frac{2\ttheta_n-1}{\ttheta_{n-1}} + \frac{\ttheta_n-1}{\ttheta_{n-1}}(\tth_{n+1,n}-1) + \frac{\ttheta_n-1}{\ttheta_{n-1}}\sum_{l=n+2}^N\tth_{l,n} \\ =\;& 1 + \frac{\ttheta_n}{\ttheta_{n-1}} + \frac{\ttheta_n-1}{\ttheta_{n-1}}\sum_{l=n+1}^N\tth_{l,n} = \frac{\ttheta_{n-1} + \ttheta_n + (\ttheta_n-1)\ttheta_n}{\ttheta_{n-1}} = \frac{\ttheta_{n-1} + \ttheta_n^2}{\ttheta_{n-1}} \\ =\;& \begin{cases} \frac{1}{2}(\ttheta_0 + 1), & n = 1, \\ \ttheta_{n-1}, & n=2,\ldots,N-1, \end{cases} \end{align*} \endgroup where the last equality uses~\eqref{eq:ttheta,rule}, which concludes the proof of the first two lines of~\eqref{eq:ogmg,h,sum2}. We finally prove the last line of~\eqref{eq:ogmg,h,sum2} by induction. For $i\ge1$, we have \begin{align*} \sum_{l=i+1}^N\tth_{l,i-1} &= \sum_{l=i}^N\tth_{l,i-1} - \tth_{i,i-1} = \ttheta_{i-1} - \paren{1+\frac{2\ttheta_i-1}{\ttheta_{i-1}}} = \frac{(\ttheta_i-1)^2}{\ttheta_{i-1}} = \frac{\ttheta_{i+1}^4}{\ttheta_{i-1}\ttheta_i^2} ,\end{align*} where the third and fourth equalities use~\eqref{eq:ttheta,rule}. Then, assuming $\sum_{l=i+1}^N\tth_{l,j}=\frac{\ttheta_i^4}{\ttheta_j\ttheta_{j+1}^2}$ for $i=n,\ldots,N-1$, $j=0,\ldots,i-1$ with $n\ge1$, we get: \begin{align*} \sum_{l=n}^N\tth_{l,j} &= \sum_{l=n+1}^N\tth_{l,j} + \tth_{n,j} = \frac{\ttheta_{n+1}^4}{\ttheta_j\ttheta_{j+1}^2} + \frac{\ttheta_n^2(2\ttheta_n-1)}{\ttheta_j\ttheta_{j+1}^2} = \frac{\ttheta_n^2(\ttheta_n-1)^2 + \ttheta_n^2(2\ttheta_n-1)}{\ttheta_j\ttheta_{j+1}^2} = \frac{\ttheta_n^4}{\ttheta_j\ttheta_{j+1}^2} ,\end{align*} where the second equality uses the induction hypothesis and~\eqref{eq:ogmg,h,sum1}, and the third equality uses~\eqref{eq:ttheta,rule}, which concludes the proof. \qed \end{document}
\begin{document} \title{Solving the Persistent Phylogeny Problem in polynomial time} \author{Paola Bonizzoni\thanks{Dipartimento di Informatica Sistemistica e Comunicazione, Univ. degli Studi di Milano--Bicocca. \texttt{bonizzoni,[email protected]}} \and Gianluca Della Vedova$^{*}$ \and Gabriella Trucco \thanks{Dipartimento di Tecnologie dell'Informazione, Univ. degli Studi di Milano, Crema. \texttt{[email protected]}} } \pagestyle{plain} \date{November 2nd, 2016} \maketitle \begin{abstract} The notion of a Persistent Phylogeny generalizes the well-known Perfect Phylogeny model that has been thoroughly investigated and is used to explain a wide range of evolutionary phenomena. More precisely, while the Perfect Phylogeny model allows each character to be acquired only once in the entire evolutionary history, with character losses not allowed, the Persistent Phylogeny model allows each character to be both acquired and lost exactly once in the evolutionary history. The Persistent Phylogeny Problem (PPP) is the problem of reconstructing a persistent phylogeny tree, if it exists, from a binary matrix where the rows represent the species (or the individuals) studied and the columns represent the characters that each species can have. While the Perfect Phylogeny Problem has a linear-time algorithm, the question of the computational complexity of PPP was posed, albeit in an equivalent formulation, 20 years ago. We settle the question by providing a polynomial-time algorithm for the Persistent Phylogeny problem. \end{abstract} \section{Introduction} The problem of reconstructing an evolutionary history from characters is a classical topic in computational biology~\cite{semple2003phylogenetics,steel_phylogeny:2016}. The instance of the problem consists of a matrix $M$ where the rows correspond to species (or individuals/taxa) and the columns correspond to characters. Moreover, each entry $M[s,c]$ is the \emph{state} of species $s$ and character $c$.
Notice that each character can be phenotypical (e.g.,~a species has wings) or genotypical (e.g.,~a cell has a certain mutation). A \emph{phylogeny} is a tree $T$ where each input species is a node, and a character $c$ can be gained or lost along each edge of $T$: the rules specifying how a character can be gained or lost in the tree $T$ are called the evolution model. There is a contrast between different evolution models, as more restrictive models --- such as the Perfect Phylogeny --- are more informative, but are unable to give a tree consistent with all possible input matrices. On the other hand, more general models, such as Camin-Sokal~\cite{caminsokal} and Dollo~\cite{Dollo}, are always able to produce a tree consistent with the input matrix, but such a tree might not be informative on the actual evolutionary history. In this paper we will focus on binary characters, that is, the state $0$ means that a species does not have a character, while the state $1$ means that a species has a character. Each edge represents a change of state for some characters during the evolution of the ancestral taxa: a change from $0$ to $1$ is the gain of a character, while a change from $1$ to $0$ is the loss of a character. Two main general models have been introduced in the literature to describe character-based evolutions of taxa: the Camin-Sokal~\cite{caminsokal} and the Dollo~\cite{Dollo} model. The Camin-Sokal model assumes that each character may change state from $0$ to $1$ several times, but no change from $1$ to $0$ is allowed. In contrast, in the Dollo model, characters may change state from $0$ to $1$ only once in the tree, but they may change state from $1$ to $0$ multiple times. The Dollo model appears appropriate for reconstructing the evolution of genes in eukaryotic organisms, with the gain and loss of genes modeled as changes to $1$ and to $0$, respectively, in the taxa.
Indeed, while multiple gains of the same gene in different lineages are improbable, multiple losses of a gene are more common~\cite{Dollo}. The model recently gained a lot of interest in the context of reconstructing the clonal evolution in tumors due to mutation events, taking into account copy number aberrations~\cite{nature16}. In this context characters are mutations that are gained or lost during the clonal evolution. It is assumed that a given mutation is acquired at most once in the tree, but it may be lost due to the loss of the genes involved in deletion events. From the algorithmic point of view, the Dollo model has a trivial decision problem --- given a matrix $M$, there is always a tree $T$ consistent with $M$ --- and an NP-complete optimization problem --- find a tree $T$ consistent with $M$ and minimizing the number of changes of state. At the other end of the spectrum lies the Perfect Phylogeny Problem~\cite{Gusfield}, which is likely the most restrictive possible model: each character is acquired exactly once in the tree $T$ and is never lost. Despite its relative simplicity, the Perfect Phylogeny model has found several applications in Biology~\cite{Gus2,Gus06,el-kebir_reconstruction_2015,ElKebir201643} and Linguistics~\cite{ringe_indo-european_2002}. Moreover, the Perfect Phylogeny has been extensively studied from a computational point of view, starting from the seminal linear-time algorithm~\cite{Gus91} for binary matrices, and going on with the NP-completeness for the general case of an unbounded number of states~\cite{BFW92}, and the polynomial-time algorithms for any constant number of states~\cite{agarwala_polynomial-time_1994,kannan1997fast}. Moreover, the algorithmic properties of some variants of the Perfect Phylogeny have been recently studied~\cite{DBLP:conf-isbra-ManuchPG11,mavnuch2009generalised,benham1995hen}, restricting the possible state transitions or the topology of the tree.
Another result that is especially relevant for our paper is a polynomial-time algorithm for computing the Directed Perfect Phylogeny with Missing Data~\cite{Sha}, which gives a first reframing of the Perfect Phylogeny as a graph problem. Still, some biological problems require a more general model than the Perfect Phylogeny model, but need an efficient algorithm to compute an informative tree~\cite{DBLP:journals-siamcomp-Fernandez-BacaL03}. In this direction, the notion of a \emph{persistent} character has been proposed in~\cite{przytycka2006graph} and later studied~\cite{DBLP:journals-tcs-BonizzoniBDT12,bonizzoni_when_2014,Bonizzoni2016,bonizzoni_explaining_2014}: a persistent character can change state from $0$ to $1$ at most once during the evolution and then change state from $1$ to $0$, again at most once in the entire tree. The computational problem of constructing a persistent phylogeny has been introduced, albeit in a different context, at least as early as 1996~\cite{goldberg_minimizing_1996}, where it takes the name of $(1,2)$-phylogeny and where the open problem of determining its computational complexity has been stated. Recently some algorithms for the PPP have been introduced: more precisely, an algorithm whose time complexity is polynomial in the number of taxa, but exponential in the number of characters~\cite{DBLP:journals-tcs-BonizzoniBDT12}, and an integer linear programming solution~\cite{gusfield_persistent_2015}. The ILP formulation uses a reformulation of the PPP as the problem of completing a matrix $M_e$ obtained by doubling the columns of the input matrix $M$ of the PPP problem (i.e. adding a persistent copy of each character) and posing as unknown the state of the characters that may be persistent --- those characters that have entry $0$ in matrix $M$. In~\cite{DBLP:journals-tcs-BonizzoniBDT12,Bonizzoni2016} the PPP problem is restated as a colored graph problem.
In this paper we use this graph framework to design a polynomial-time algorithm that solves the PPP problem, settling the question of~\cite{goldberg_minimizing_1996}. An implementation of our algorithm is available at \url{http://www.algolab.eu/persistent-phylogeny}. \section{Preliminaries} The input of the PPP problem is an $n \times m$ binary matrix $M$ whose columns are associated with the set $C = \{c_1, \ldots, c_m\}$ of characters and whose rows are associated with the set $S = \{s_1, \dots, s_n\}$ of taxa, also called \emph{species} in the paper. Then $M[s,c] = 1$ if and only if the species $s$ has character $c$, otherwise $M[s,c] = 0$. The character $c$ is \emph{gained} in the only edge where its state goes from $0$ to $1$ or, more formally, in the edge $(x,y)$ such that $y$ is a child of $x$ and $c$ has state $0$ in $x$ and state $1$ in $y$. In this case the edge $(x,y)$ is labeled by $c^{+}$. Conversely, $c$ is \emph{lost} in the edge $(x,y)$ if $y$ is a child of $x$ and character $c$ has state $1$ in $x$ and state $0$ in $y$. In the latter case the edge $(x,y)$ is labeled by $c^{-}$. For each character $c$, we allow at most one edge labeled by $c^{-}$~\cite{zeng,DBLP:journals-tcs-BonizzoniBDT12}. The characters $c^{+}$ and $c^{-}$ are called \emph{signed} characters. Let $c$ be an unsigned character and let $M$ be an instance of the PPP problem. Then $S(c)$ is the set of species that have the character $c$, that is the set $\{s\in S: M[s,c] = 1\}$. Given two characters $c_{1}$ and $c_{2}$, we will say that $c_{1}$ \emph{includes} $c_{2}$ if $S(c_{1})\supseteq S(c_{2})$. Then a character $c$ of $M$ is \emph{maximal in $M$} if $S(c)\not\subset S(c')$ for any character $c' $ of $M$. Moreover, two characters $c, c'$ \emph{overlap} if they share a common species but neither is included in the other. We now introduce the definition of Persistent Phylogeny used in the paper~\cite{Bonizzoni2016}.
\begin{definition}[Persistent Phylogeny] \label{def:persistent-perfect-phylogeny} Let $M$ be an $n \times m$ binary matrix over a set of $m$ characters and a set of $n$ species. Let $A$ be a subset of its characters, called \emph{active} characters. Then a \emph{persistent phylogeny}, in short \emph{p-pp}, for the pair $(M, A)$ is a rooted tree $T$ such that: \begin{enumerate} \item each node $x$ of $T$ is labeled by a vector $l_x \in\{0, 1\}^{m}$; \item the root $r$ of $T$ is labeled by a vector $l_r$ such that $l_r[j]=1$ if and only if $c_j \in A$, while for each node $x$ of $T$ the value $l_x[j]\in\{0, 1\}$ represents the state of character $c_j$ in the node $x$; \item each edge $e=(v,w)$ is labeled by a character; \item for each character $c_j$ there are at most two edges $e=(x,y)$ and $e'=(u,v)$ such that $l_x[j] \neq l_y[j]$ and $l_u[j] \neq l_v[j]$ (representing a change in the state of $c_j$). In that case $e$, $e'$ occur along the same path from the root of $T$ to a leaf of $T$; if $e$ is closer to the root than $e'$, then $l_x[j]=l_v[j]=0$, $l_y[j]=l_u[j]=1$, and the edge $e$ is labeled $c_j^{+}$, while $e'$ is labeled $c_j^{-}$; \item for each row $s$ of $M$ there exists a node $x$ of $T$ such that the vector $l_{x}$ is equal to the row $s$. \end{enumerate} The set $A$ is also called the \emph{active set} of matrix $M$, and we say that the matrix $M$ is solved by the tree $T$. \end{definition} Observe that the definition of persistent phylogeny allows the internal nodes of $T$ to be labeled by species. The definition given in~\cite{DBLP:journals-tcs-BonizzoniBDT12} corresponds to the case in which the set $A$ is empty, i.e. the root of the tree $T$ is labeled by the zero vector of length $m$. The above generalization allows us to consider persistent phylogenies where the root state is not the zero vector. This fact is relevant when considering subtrees of the phylogenies as solutions of the same problem on subinstances of the input.
Indeed, notice that subtrees of the tree solving the instance $M$ have roots that are not labeled by the zero vector. If there exists an edge of $T$ labeled $c^{-}$, then the character $c$ is called \emph{persistent} in $T$. Moreover, for each character $c\in A$ only the edge labeled $c^{-}$ may appear in the phylogeny. A special case, called \emph{perfect phylogeny}, occurs when no character of $T$ is ever lost. \begin{definition}[Perfect Phylogeny] \label{def:perfect-phylogeny} A persistent phylogeny $T$ is a \emph{perfect} phylogeny if there are no persistent characters in tree $T$. \end{definition} Since no two edges of $T$ share the same label, we may use the label to identify and denote an edge. Given a tree $T$ and a node $x$, we say that a character $c$ occurs below node $x$ if $c$ labels an edge of a path from node $x$ to a leaf node of tree $T$. Similarly, we say that a node $v$ occurs below node $x$ if $v$ is along a path from node $x$ to a leaf of tree $T$. Moreover, we say that the character $c$ is \emph{above} the node $x$ if there is an edge of the path from $x$ to the root that is labeled by $c$. \subsection{The red-black graph and conflicting characters} In this paper, our algorithm is based on a graph called the \emph{red-black graph} and denoted by \ensuremath{G_{RB}}\xspace. Red-black graphs provide an equivalent representation of matrices that are solved by persistent phylogenies~\cite{DBLP:journals-tcs-BonizzoniBDT12,Bonizzoni2016}.
Moreover, the iterative construction or visit of a tree can be restated as applying specific graph operations to the red-black graph. In this section we recall the main results that we use in our algorithm. A connected component is called \emph{nontrivial} if it has more than one vertex. \begin{figure} \caption{Instance of the Persistent Phylogeny problem where the set of active characters is $A=\{c_{4}\}$.} \label{fig:instance-extended} \end{figure} \begin{figure} \caption{Red-black graph associated with the matrix in Figure~\ref{fig:instance-extended}.} \label{fig:grb-instance2} \end{figure} \begin{definition}[Red-black graph] \label{definition:red-black} A \emph{red-black graph} on a set $S$ of species and a set $C$ of characters, denoted by \ensuremath{G_{RB}}\xspace, is a bipartite graph whose vertex set is $S\cup C$. Moreover, each character $c\in C$ is either incident only on black edges (in this case $c$ is \emph{inactive}) or incident only on red edges (in this case $c$ is \emph{active}). \end{definition} Given an empty set $A$ of active characters and an instance $M$ of the PPP, the red-black graph \emph{associated with $M$} is the graph $\ensuremath{G_{RB}}\xspace=(S\cup C, E)$ whose black edges are given by the set $E=\{(s,c) : s\in S, c\in C, M[s,c] = 1\}$. Conversely, the following definition of the matrix associated with a red-black graph justifies our use of red-black graphs in place of binary matrices. Given a red-black graph $\ensuremath{G_{RB}}\xspace=(S\cup C, E)$, the matrix $M$ \emph{associated with $\ensuremath{G_{RB}}\xspace$} has species $S$, characters $C$, and active set $A$ given by the characters in $C$ with incident red edges. Moreover, $M[s,c]=1$ iff (1) $(s,c)$ is a black edge of $\ensuremath{G_{RB}}\xspace$, or (2) $c$ is active and $(s,c)$ is not an edge of \ensuremath{G_{RB}}\xspace. In this paper we use extensively the following notion.
\begin{definition}[Persistent phylogeny for $\ensuremath{G_{RB}}\xspace$] \label{definition:red-black+matrix} Let $\ensuremath{G_{RB}}\xspace$ be a red-black graph with active characters $A$ and let $M$ be the matrix associated with $\ensuremath{G_{RB}}\xspace$. Then the persistent phylogeny for \ensuremath{G_{RB}}\xspace is the persistent phylogeny for the pair $(M, A)$. \end{definition} If $T$ is a persistent phylogeny for \ensuremath{G_{RB}}\xspace we also say that $\ensuremath{G_{RB}}\xspace$ is solved by the tree $T$. We recall that, given a vertex $v$ of a graph $G$, the \emph{neighborhood} of $v$ is the set of vertices of $G$ that are adjacent to $v$, denoted by $N(v)$~\cite{Diestel}. In a previous paper~\cite{Bonizzoni2016} it has been proved that a persistent phylogeny $T$ solving a red-black graph $\ensuremath{G_{RB}}\xspace$ can be related to a sequence of graph operations on $\ensuremath{G_{RB}}\xspace$, described in the following definition. \begin{definition}[Realization] \label{definition:realizing-in-red-black-graph} Let $\ensuremath{G_{RB}}\xspace$ be a red-black graph, and let $c$ be a character of \ensuremath{G_{RB}}\xspace. Let $D(c)$ be the set of species in the connected component of $\ensuremath{G_{RB}}\xspace$ that contains $c$. The result of the \emph{realization of $c^{+}$ on $\ensuremath{G_{RB}}\xspace$}, which is defined only if $c$ is inactive, is the red-black graph obtained from \ensuremath{G_{RB}}\xspace by adding a red edge between $c$ and each species in $D(c)\setminus N(c)$, deleting all black edges incident on $c$, and finally deleting all isolated vertices. The \emph{realization of $c^{-}$ on $\ensuremath{G_{RB}}\xspace$} is defined only if $c$ is active and there is no species in $D(c)\setminus N(c)$: in this case the result of the realization is obtained from $\ensuremath{G_{RB}}\xspace$ by deleting all edges incident on $c$, and then deleting all isolated vertices.
\end{definition} The realization of a species $s$ is the realization of its set $C(s)$ of characters, in any order. An active character $c$ that is connected by red edges to all species of a graph $\ensuremath{G_{RB}}\xspace$ is called \emph{free} in $\ensuremath{G_{RB}}\xspace$ and is then deleted from $\ensuremath{G_{RB}}\xspace$. The main relationship between a graph \ensuremath{G_{RB}}\xspace and the tree solving \ensuremath{G_{RB}}\xspace is given by the notion of \emph{c-reduction} stated in~\cite{DBLP:journals-tcs-BonizzoniBDT12}. A c-reduction $R$ is a sequence of positive characters. A c-reduction $R$ is \emph{feasible} for $\ensuremath{G_{RB}}\xspace$ if the realization of each character of the sequence $R$, one after the other, is defined; moreover, the realization of a negative character $c^{-}$ must be applied as soon as $c$ becomes free in \ensuremath{G_{RB}}\xspace. Clearly a c-reduction also represents a sequence of graph operations on \ensuremath{G_{RB}}\xspace.
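As an illustration, the realization operation can be sketched in Python as follows. This is a non-authoritative sketch: the encoding (vertex sets plus a dict mapping `(species, character)` pairs to a color) and all names are ours, not the paper's implementation.

```python
from collections import defaultdict, deque

def red_black_from_matrix(M):
    """Red-black graph of a matrix with an empty active set: a black edge
    (s, c) for every entry M[s][c] == 1; vertices are named 's0', 'c0', ..."""
    species = {f's{i}' for i in range(len(M))}
    chars = {f'c{j}' for j in range(len(M[0]))}
    edges = {(f's{i}', f'c{j}'): 'black'
             for i, row in enumerate(M) for j, v in enumerate(row) if v == 1}
    return species, chars, edges

def _component_species(edges, c):
    """D(c): species in the connected component containing character c."""
    adj = defaultdict(set)
    for (s, ch) in edges:
        adj[('s', s)].add(('c', ch))
        adj[('c', ch)].add(('s', s))
    seen, queue = {('c', c)}, deque([('c', c)])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return {name for kind, name in seen if kind == 's'}

def realize(species, chars, edges, c, sign):
    """Realize c^+ (c inactive) or c^- (c active), modifying the graph in place."""
    D = _component_species(edges, c)
    N = {s for (s, ch) in edges if ch == c}      # neighborhood of c
    if sign == '+':
        for s in D - N:
            edges[(s, c)] = 'red'                # red edges to D(c) \ N(c)
        for s in N:
            edges.pop((s, c), None)              # drop the black edges of c
    else:
        assert not (D - N), 'c^- requires D(c) \\ N(c) to be empty'
        for s in N:
            edges.pop((s, c), None)
    # delete isolated vertices
    species.intersection_update({s for (s, _) in edges})
    chars.intersection_update({ch for (_, ch) in edges})
```

Realizing the positive characters of a successful reduction in order, with $c^{-}$ applied whenever $c$ becomes free, empties the graph.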
Then the \emph{application} of a feasible c-reduction $R=\langle c_{1}^{+},\ldots, c_{k}^{+}\rangle$ to a red-black graph $\ensuremath{G_{RB}}\xspace$ is the graph $\ensuremath{G_{RB}}\xspace^{k}$ obtained as follows: $\ensuremath{G_{RB}}\xspace^{0}$ is the red-black graph $\ensuremath{G_{RB}}\xspace$, while, for $i>0$, $\ensuremath{G_{RB}}\xspace^{i}$ is obtained from $\ensuremath{G_{RB}}\xspace^{i-1}$ by realizing the character $c_{i}^{+}$ and, possibly, the negative characters $c_{l}^{-}$, for $1 \leq l < i$, such that $c_{l}$ becomes free after the realization of $c_{i}^{+}$. Then the \emph{extended} c-reduction of $R$~\cite{Bonizzoni2016} is the sequence of positive and negative characters obtained by the application of $R$ to \ensuremath{G_{RB}}\xspace. For example, the c-reduction $\langle c_{3}^{+}, c_{5}^{+}, c_{2}^{+}\rangle$ corresponds to the extended c-reduction $\langle c_{3}^{+}, c_{5}^{+}, c_{2}^{+}, c_{4}^{-}\rangle$, since the character $c_{4}$ becomes free after the realization of $c_{2}^{+}$ (see Figure~\ref{fig:extended-reduction} for the corresponding red-black graph). The following fact, which we use in the paper, is stated and proved in~\cite{Bonizzoni2016}. A \emph{tree traversal} of $T$ is a sequence of the nodes of $T$ where each node appears exactly once~\cite{sedgewick2001algorithms} and has the additional property that each node precedes all of its descendants. \begin{proposition}[Successful reduction] \label{prop:traversal} A tree traversal of the positive characters of a tree $T$ solving a graph $\ensuremath{G_{RB}}\xspace$ is a c-reduction $R$ that is feasible for $\ensuremath{G_{RB}}\xspace$ and whose application to $\ensuremath{G_{RB}}\xspace$ results in an empty graph. Then $R$ is called a \emph{successful reduction} for \ensuremath{G_{RB}}\xspace.
\end{proposition} In~\cite{Bonizzoni2016} it has been proved that, given a red-black graph \ensuremath{G_{RB}}\xspace and a successful reduction $R$, there exists a polynomial time algorithm that computes a persistent phylogeny solving \ensuremath{G_{RB}}\xspace. Based on Proposition~\ref{prop:traversal}, we observe that a successful reduction for a red-black graph \ensuremath{G_{RB}}\xspace can be characterized using the notion of red $\Sigma$-graph: this is a path of length four induced in \ensuremath{G_{RB}}\xspace by a pair $c_{1}$, $c_{2}$ of active characters and by three species $s_1$, $s_2$, $s_3$. Observe that $c_{1}$ and $c_{2}$ cannot be free in the red-black graph, as they cannot be connected to all species of the red-black graph by red edges, and thus $c_{1}^-$ and $c_{2}^-$ cannot be realized. Consequently, a red-black graph containing a red $\Sigma$-graph cannot be reduced to an empty graph by a c-reduction. Now, the notion of conflicting characters is used to detect candidate pairs of characters that may induce a red $\Sigma$-graph. Let $M$ be a binary matrix and let $c_{1}$, $c_{2}$ be two characters of $M$. Then the set of configurations induced by the pair $(c_{1}, c_{2})$ in $M$ is the set of ordered pairs $(M[s,c_{1}], M[s,c_{2}])$ over all species $s$. Two characters $c_{1}$ and $c_{2}$ of $M$ are \emph{conflicting} if and only if the set of configurations induced by such a pair of columns consists of all four possible pairs $(0,1)$, $(1,1)$, $(1,0)$ and $(0,0)$. We say that characters $c, c'$ of a graph \ensuremath{G_{RB}}\xspace are \emph{conflicting} if they are conflicting in the matrix associated with \ensuremath{G_{RB}}\xspace. Then the graph \ensuremath{G_{RB}}\xspace contains a conflict if it has a conflicting pair of characters.
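The conflict test is a direct check of the four configurations on the columns of the matrix; a minimal sketch (the function name is ours):

```python
def conflicting(M, c1, c2):
    """True iff columns c1 and c2 of the binary matrix M induce all four
    configurations (0,0), (0,1), (1,0), (1,1) over the species."""
    configs = {(row[c1], row[c2]) for row in M}
    return configs == {(0, 0), (0, 1), (1, 0), (1, 1)}
```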
The following two main observations~\cite{DBLP:journals-tcs-BonizzoniBDT12,Bonizzoni2016} and Lemma~\ref{lemma:connected-components-Grb=children} will be used in the next sections and are crucial in the polynomial time algorithm for computing a successful reduction for the red-black graph. \begin{observation} \label{observation1} A c-reduction $R$ for a graph \ensuremath{G_{RB}}\xspace is successful if and only if $R$ includes all characters of the graph and each graph ${\ensuremath{G_{RB}}\xspace}_i$ obtained after the realization of the first $i$ characters of the sequence $R$ does not contain red $\Sigma$-graphs. \end{observation} Another important observation used in the paper, and easy to prove from the definition of realization of a character, is the following. \begin{observation} \label{observation2} A red-black graph consisting of $k$ distinct components has a successful reduction $R$ if and only if each component ${\ensuremath{G_{RB}}\xspace}_i$ has a successful reduction $R_i$. Then $R$ consists of any concatenation of the $k$ sequences $R_i$. \end{observation} \begin{lemma} \label{lemma:connected-components-Grb=children} Let \ensuremath{G_{RB}}\xspace be a red-black graph solved by $T$. If \ensuremath{G_{RB}}\xspace is connected, then the root $r$ of $T$ has only one child. \end{lemma} We will assume that the instances of the PPP problem do not contain any free, null or universal characters (a null character is an isolated vertex of the red-black graph, while a universal character is adjacent by black edges to all species of the red-black graph), nor any null species (a species that possesses no characters). Notice that the removal of a null character does not modify the phylogeny, while a null species can only be associated with the root of the phylogeny. Removing a universal character trivially consists of fixing the first character of a c-reduction or, equivalently, determining the label of a topmost edge of the phylogeny.
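These preprocessing conditions can be checked directly; here is a hedged sketch using the same colored-edge encoding as before (a dict mapping `(species, character)` pairs to `'black'` or `'red'`; all names are ours):

```python
def classify_characters(species, chars, edges):
    """Partition the characters into null (isolated vertex), universal
    (black edges to every species) and free (red edges to every species)."""
    null, universal, free = set(), set(), set()
    for c in chars:
        black = {s for (s, ch), col in edges.items() if ch == c and col == 'black'}
        red = {s for (s, ch), col in edges.items() if ch == c and col == 'red'}
        if not black and not red:
            null.add(c)
        elif black == set(species):
            universal.add(c)
        elif red == set(species):
            free.add(c)
    return null, universal, free
```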
Moreover, we will assume that the instances of the PPP problem do not have two identical columns. A \emph{reducible} red-black graph \ensuremath{G_{RB}}\xspace is an instance of the PPP problem that admits a successful reduction, i.e. it is solved by a persistent phylogeny, and that is a connected graph. \subsection{Maximal reducible graphs} The red-black graph \emph{induced in $\ensuremath{G_{RB}}\xspace$ by a set $C'$ of characters}, denoted by $\ensuremath{G_{RB}}\xspace | C'$, is the subgraph $\ensuremath{G_{RB}}\xspace'$ of $\ensuremath{G_{RB}}\xspace$ induced by the set of all characters in $C'$ and the species of $\ensuremath{G_{RB}}\xspace$ that are adjacent to $C'$.
Given a red-black graph $\ensuremath{G_{RB}}\xspace$ and its associated matrix $M$, a character $c$ of $\ensuremath{G_{RB}}\xspace$ is \emph{maximal} in $\ensuremath{G_{RB}}\xspace$ if it is maximal among the inactive characters of $M$. We denote by $C_M$ the set of maximal characters of a red-black graph $\ensuremath{G_{RB}}\xspace$. A \emph{maximal reducible} graph is a reducible red-black graph whose characters are all maximal and inactive in the graph. Then $\ensuremath{G_{RB}}\xspace|C_M$ is the maximal reducible graph induced by $C_M$ in the graph \ensuremath{G_{RB}}\xspace. In the paper we denote by \ensuremath{G_{RB}^{M}}\xspace a maximal reducible graph and by $T_M$ a tree solving a maximal reducible graph. \begin{comment} Let $M$ be a matrix. Then we define a relationship between species based on the set of characters they share. Let $C(s)$ be the set of characters of matrix $M$ such that $M[s,c]=1$.
Then species can be ordered by the inclusion relationship $\leq_{M}$, where $s \leq_{M} s'$ if and only if $C(s) \subseteq C(s')$. The inclusion relationship among species can be extended to a red-black graph $\ensuremath{G_{RB}}\xspace$ by considering the matrix $M$ that is associated with $\ensuremath{G_{RB}}\xspace$: given $s, s' \in S(\ensuremath{G_{RB}}\xspace)$, then $s \leq_{\ensuremath{G_{RB}}\xspace} s'$ if and only if $s \leq_{M} s'$. In the following, whenever the graph \ensuremath{G_{RB}}\xspace or the matrix $M$ is clear from the context, we will use the symbol $\leq$ instead of $\leq_{M}$ or $\leq_{\ensuremath{G_{RB}}\xspace}$. Moreover, we will write $s_{1} < s_{2}$ to state that $s_{1} \le s_{2}$ and $s_{2} \not\le s_{1}$. \end{comment} Given a tree $T$, a path $\pi$ of the tree is \emph{simple} if it consists of internal nodes of outdegree and indegree one. \subsection{The Hasse diagram} Our polynomial time algorithm for solving the PPP problem computes a successful reduction of the red-black graph \ensuremath{G_{RB}}\xspace associated with the input matrix $M$ in polynomial time, if one exists. Then the algorithm in~\cite{Bonizzoni2016} can be used to build a tree $T$ from the successful reduction. The computation of a successful reduction for \ensuremath{G_{RB}}\xspace is done by computing the Hasse diagram $\mathcal{P}$ of its maximal reducible graph \ensuremath{G_{RB}^{M}}\xspace, obtained as the restriction $\ensuremath{G_{RB}}\xspace|C_M$, where $C_M$ is the set of maximal characters of \ensuremath{G_{RB}}\xspace. In the following, given a species $s$, we denote by $C(s)$ the set of characters of $s$. \begin{definition}[Hasse diagram for a maximal reducible graph] Let $\ensuremath{G_{RB}^{M}}\xspace$ be a maximal reducible graph.
Then the diagram $\mathcal{P}$ for $\ensuremath{G_{RB}^{M}}\xspace$ is the Hasse diagram for the poset $(P_s, \leq)$ of all species of $\ensuremath{G_{RB}^{M}}\xspace$ ordered by the relation $\leq$, where $s_1 \leq s_2$ if $C(s_1) \subseteq C(s_2)$. \end{definition} Given the poset $(P_s, \leq)$ of all inactive species of a red-black graph, we consider the representation of the poset $(P_s, \leq)$ by its Hasse diagram (or simply diagram), a directed acyclic graph $\mathcal{P}$~\cite{partial-order}. More precisely, two species $s_{1}$ and $s_{2}$ are connected by the arc $(s_{1},s_{2})$ if $s_{1} < s_{2}$ and there does not exist a species $s_{3}$ such that $s_{1}< s_{3} < s_{2}$. The definition of $\mathcal{P}$ can be immediately translated into a polynomial time construction algorithm~\cite{partial-order}. The main notions related to a diagram $\mathcal{P}$ that we use in the following are those of \emph{source}, \emph{sink} and \emph{chain}. A \emph{source} is a node of indegree $0$ and a \emph{sink} is a node of outdegree $0$. In particular, a \emph{chain} of $\mathcal{P}$ is a directed path of $\mathcal{P}$ from a source to a sink of $\mathcal{P}$. Observe that each arc $(s_i, s_{i+1})$ of the diagram $\mathcal{P}$ is labeled by the set of positive characters that are in $C(s_{i+1})$ and not in $C(s_i)$. A chain is \emph{trivial} if it consists of a single node of the diagram. A diagram $\mathcal{P}$ consisting only of trivial chains is called \emph{degenerate}. Given a chain $\mathcal{C}$ of the diagram $\mathcal{P}$ for a graph \ensuremath{G_{RB}^{M}}\xspace, we associate to it a c-reduction. \begin{definition}[c-reduction of a chain] The c-reduction of the chain $\mathcal{C} = \langle s_1, s_2, \ldots, s_k\rangle$ of the diagram $\mathcal{P}$ for a graph \ensuremath{G_{RB}^{M}}\xspace is the sequence of the characters of $s_{1}$ followed by those labeling the arcs $(s_{i}, s_{i+1})$ for $1\le i\le k-1$.
\end{definition} The following two notions of \emph{safe chain} and \emph{safe source} are crucial for building our algorithm. Observe that the notion of safe chain refers to a maximal reducible graph $\ensuremath{G_{RB}}\xspace|C_M$, while the notion of safe source refers to a reducible graph $\ensuremath{G_{RB}}\xspace$. Some examples are given in the Appendix as Figures~\ref{fig:maximal-reducible-grb} and~\ref{fig:safe-chain}. \begin{definition}[safe chain] \label{definition:safe-chain} Let \ensuremath{G_{RB}^{M}}\xspace be a maximal reducible red-black graph, let $\mathcal{P}$ be the diagram for \ensuremath{G_{RB}^{M}}\xspace, and let $\mathcal{C}$ be a chain of $\mathcal{P}$. Then $\mathcal{C}$ is \emph{safe} if the c-reduction $S(\mathcal{C})$ of $\mathcal{C}$ is feasible for the graph \ensuremath{G_{RB}^{M}}\xspace and applying $S(\mathcal{C})$ to \ensuremath{G_{RB}^{M}}\xspace results in a graph that has no red $\Sigma$-graph. \end{definition} \begin{definition}[safe source] \label{definition:safe-source} Let \ensuremath{G_{RB}}\xspace be a red-black graph and let $\mathcal{P}$ be the diagram for $\ensuremath{G_{RB}}\xspace|C_M$. A source $s$ of a chain $\mathcal{C}$ of the diagram $\mathcal{P}$ is \emph{safe for $\ensuremath{G_{RB}}\xspace$} if the realization of $s$ in $\ensuremath{G_{RB}}\xspace$ does not induce red $\Sigma$-graphs in $\ensuremath{G_{RB}}\xspace$. \end{definition} \subsection{The algorithm} The polynomial time algorithm for finding a successful reduction of a graph \ensuremath{G_{RB}}\xspace starts with the detection of universal or free characters in the graph, which must be realized as the first characters. Then the algorithm applies Observation~\ref{observation2}: if \ensuremath{G_{RB}}\xspace is disconnected, then we decompose the red-black graph into its connected components, solve each component recursively, and finally concatenate all the successful reductions computed.
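The construction of the diagram $\mathcal{P}$, with its sources and sinks, can be sketched directly (a quadratic brute-force sketch; the encoding of species as character sets and all names are ours):

```python
def hasse_diagram(species):
    """Hasse arcs (s1, s2) for the poset of species ordered by inclusion of
    their character sets; `species` maps a name to a frozenset of characters.
    Each arc is labeled by C(s2) \\ C(s1)."""
    arcs = {}
    for s1, C1 in species.items():
        for s2, C2 in species.items():
            # keep only covering pairs: no species strictly in between
            if C1 < C2 and not any(C1 < C3 < C2 for C3 in species.values()):
                arcs[(s1, s2)] = C2 - C1
    return arcs

def sources_and_sinks(species, arcs):
    """Sources are nodes of indegree 0, sinks are nodes of outdegree 0."""
    heads = {s2 for (_, s2) in arcs}
    tails = {s1 for (s1, _) in arcs}
    return set(species) - heads, set(species) - tails
```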
Therefore we focus only on instances corresponding to connected red-black graphs. By Lemma~\ref{lemma:connected-components-Grb=children}, in any tree $T$ solving a connected \ensuremath{G_{RB}}\xspace the root $r$ has a unique child. Moreover, a direct consequence of the definition of maximal character in \ensuremath{G_{RB}}\xspace is that the edge incident to $r$ is labeled by at least one maximal character. Clearly, given the set $C_M$ of maximal characters of \ensuremath{G_{RB}}\xspace, the tree $T$ contains a solution $T_M$ for the subgraph $\ensuremath{G_{RB}}\xspace|C_M$, where $T_M$ is obtained from $T$ by contracting all edges that are not labeled by characters in $C_M$. The first main result of our paper is that the tree $T_M$ has a very specific topology that is strictly related to that of all other possible solutions for $\ensuremath{G_{RB}}\xspace|C_M$, and this form is computed from the Hasse diagram $\mathcal{P}$ for $\ensuremath{G_{RB}}\xspace|C_M$. More precisely, if $\mathcal{P}$ is not degenerate, then there are at most two trees $T_1, T_2$ solving $\ensuremath{G_{RB}}\xspace|C_M$: in this case $T_2$ is obtained from $T_1$ by reversing a path from an internal node to a leaf of $T_1$. If $\mathcal{P}$ is degenerate, then there can be multiple solutions for $\ensuremath{G_{RB}}\xspace|C_M$, but again, given two trees $T_1, T_2$ solving $\ensuremath{G_{RB}}\xspace|C_M$, $T_2$ is obtained from $T_1$ by reversing a path from an internal node to a leaf of $T_1$. Now, using the Hasse diagram, we can compute in polynomial time, for each tree $T_M$ solving $\ensuremath{G_{RB}}\xspace|C_M$, the maximal characters that label the longest path of $T_M$ that starts from the root and is labeled by positive characters: such a path is given by a safe chain of the diagram $\mathcal{P}$. If the diagram $\mathcal{P}$ has multiple safe chains, we are able to choose in polynomial time the correct chain, i.e.
the correct tree $T_M$, by testing the source of the chain against \ensuremath{G_{RB}}\xspace: this test is given by the notion of safe source (Definition~\ref{definition:safe-source}) stated before. More precisely, a tree $T$ solving \ensuremath{G_{RB}}\xspace, and equivalently a successful reduction of \ensuremath{G_{RB}}\xspace, starts with the sequence of maximal characters of a safe source of the diagram $\mathcal{P}$. The above observations lead to the following two main steps of the algorithm used for computing a successful reduction of a reducible red-black graph \ensuremath{G_{RB}}\xspace. {\em Step 1:} compute the Hasse diagram $\mathcal{P}$ for the maximal reducible graph $\ensuremath{G_{RB}}\xspace|C_M$. Then find a safe source $s$ of $\mathcal{P}$. Theorems~\ref{theorem:any-initial-source} and~\ref{theorem:any-source-degenerate} show that there exists a tree $T$ for $\ensuremath{G_{RB}}\xspace$ that starts with the characters of $s$. {\em Step 2:} update the graph \ensuremath{G_{RB}}\xspace with the realization of the characters of the safe source $s$. By Theorem~\ref{main-characterization} the updated graph \ensuremath{G_{RB}}\xspace is still reducible, and the algorithm is then applied recursively to \ensuremath{G_{RB}}\xspace. Finally, observe that the correctness of the algorithm is based on the above theorems, which state characterizations of the trees solving maximal reducible graphs and reducible graphs. Whenever the input of the algorithm is a non reducible graph, the above two steps fail. The rest of the paper presents the arguments proving that Algorithm~\ref{alg:Reduction} (\textbf{Reduce}) computes a successful reduction of a reducible graph. \section{Maximal reducible graphs} \label{sec:maximal-characters} In this section we consider only maximal reducible graphs and give a characterization of the trees solving such graphs.
Observe that we assume that a simple path in a tree $T$ between two species may be contracted to a unique edge that is labeled by the sequence of characters occurring along the path. We distinguish three types of edges in a tree: \emph{positive edges}, labeled only by positive characters, \emph{negative edges}, labeled only by negated characters, and \emph{mixed} edges, where both positive and negative characters occur. In the paper we consider two main types of trees representing the persistent phylogenies solving maximal reducible graphs: line-trees and branch-trees. A tree $T$ is a {\em line-tree} if it consists of a simple path. A tree $T$ is a {\em branch-tree} if it consists of a simple path from the root $r$ to a node $x$ that is the topmost node with more than one child, and no positive character occurs below node $x$: in this case the path from $r$ to node $x$ is called the {\em initial-path} of the tree $T$, and the node $x$ is called the {\em branch-node} of $T$. \begin{lemma} \label{lemma:tree-structure-max} Let \ensuremath{G_{RB}^{M}}\xspace be a maximal reducible graph and let $T_M$ be a solution of the graph \ensuremath{G_{RB}^{M}}\xspace. Then $T_M$ is either a line-tree or a branch-tree. \end{lemma} \begin{proof} Let $T_M$ be any solution of \ensuremath{G_{RB}^{M}}\xspace and let $x$ be the topmost node of $T_M$ that has at least two children. If $x$ does not exist, then $T_M$ is a line-tree, and thus the lemma holds. By Lemma~\ref{lemma:connected-components-Grb=children}, $x$ is not the root, hence in the initial path there is an edge labeled by a positive character. In fact, if the first character of the tree $T_M$ were a negated character, then \ensuremath{G_{RB}^{M}}\xspace would have an active character connected to all species of the graph, which is not possible since the graph contains no free character. Assume that below $x$ there exists an edge labeled $d^{+}$.
Since the graph \ensuremath{G_{RB}^{M}}\xspace is connected, there must exist a positive character $a^+$ in the initial path of the tree $T_M$ that is negated below the node $x$. Indeed, if all characters of the initial path were negated before node $x$, then all positive characters below node $x$ would be disjoint from the other characters of the initial path, which contradicts the assumption that the graph \ensuremath{G_{RB}^{M}}\xspace is connected. Assume that $a^{-}$ occurs along a branch distinct from the one containing the character $d^+$. Then $a$ includes $d$, and this contradicts the assumption that $d$ is a maximal character, i.e. that all characters of $T_M$ are maximal in \ensuremath{G_{RB}^{M}}\xspace. Indeed, observe that all species having character $d$ also have character $a$. Otherwise, assume that $a^{-}$ and $d^{+}$ occur along the same branch of the tree $T_M$, and let us consider the edge $e$ outgoing from $x$ that is not in the same branch as $d^{+}$. If the edge $e$ is labeled by a positive character $b^{+}$, then $b$ is included in $a$ and is not maximal, which contradicts the initial assumption that all characters are maximal. Indeed, $a^-$ occurs along a branch distinct from the one containing $b^+$, and thus all species with character $b$ also have character $a$. Otherwise, the edge $e$ is labeled by a character $b^{-}$. But then $b$ includes $d$, contradicting the assumption that all characters are maximal. Indeed, observe that all species having character $d$ also have character $b$. \end{proof} Given a line-tree $T_{1}$ whose sequence of species in a depth-first traversal is $s_{1}, \ldots , s_{k}$, the inverted tree $T_{2}$ has the sequence of species $s_{k}, \ldots , s_{1}$. Moreover, the label of each edge $(s_{i+1}, s_{i})$ in $T_{2}$ has the same characters as the edge $(s_{i}, s_{i+1})$ in $T_1$, but with opposite signs (positive characters in $T_1$ are negative characters in $T_{2}$ and vice versa).
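The inversion just described can be sketched on a list-of-edge-labels encoding (the encoding and the function name are ours): reverse the edge sequence and flip every sign.

```python
def invert_line_tree(edge_labels):
    """Invert a line-tree given as the list of its edge labels from root to
    leaf, each label being a list of signed characters such as 'a+' or 'b-'."""
    def flip(sc):
        # 'a+' -> 'a-' and 'a-' -> 'a+'
        return sc[:-1] + ('-' if sc.endswith('+') else '+')
    return [[flip(sc) for sc in label] for label in reversed(edge_labels)]
```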
\begin{lemma} \label{lemma:invert-line-trees} Let \ensuremath{G_{RB}}\xspacem be a maximal reducible graph solved by a line-tree $T_M$. Then the line-tree $T_{1}$ obtained by inverting the entire tree $T_M$ is a solution for \ensuremath{G_{RB}}\xspacem. \end{lemma} \begin{proof} It is an immediate consequence of the observation that $T_{M}$ has no active character, by our definition of maximal reducible graph. \end{proof} \begin{lemma} \label{lemma:tree-structure-max1} Let \ensuremath{G_{RB}}\xspacem be a maximal reducible graph, let $T_M$ be a solution of \ensuremath{G_{RB}}\xspacem, and let $c, c'$ be overlapping characters occurring in tree $T_M$ and inactive in \ensuremath{G_{RB}}\xspacem. Then one of the following cases holds: \begin{itemize} \item $T_M$ contains the sequence of edges $c^{+}$, $c'^{+}$, $c^{-}$, $c'^{-}$, with $c'^{-}$ possibly missing; \item $T_M$ contains the sequence of edges $c'^{+}$, $c^{+}$, $c'^{-}$, $c^{-}$, with $c^{-}$ possibly missing; \item $c^{-}$ and $c'^{-}$ appear in two distinct paths, and $T_M$ has a species preceding both $c^{+}$ and $c'^{+}$ if $c$ and $c'$ are conflicting in $\ensuremath{G_{RB}}\xspacem$. \end{itemize} \end{lemma} \begin{proof} Since $c$ and $c'$ are maximal characters, if all four edges $c^{+}$, $c'^{+}$, $c^{-}$, $c'^{-}$ appear in the same path, the relative order of $c^{+}$ and $c'^{+}$ must be the same as that of $c^{-}$ and $c'^{-}$; otherwise there is a containment relation between the characters, i.e., either $c$ includes $c'$ or vice versa, contradicting the fact that both characters are maximal. Let us now consider the case that $c^{-}$ and $c'^{-}$ appear in two distinct paths of $T_M$. By Lemma~\ref{lemma:tree-structure-max}, $T_M$ is a branch-tree. If $c$ and $c'$ are in conflict in $\ensuremath{G_{RB}}\xspacem$, there must exist a species $s$ of the graph that induces the $(0,0)$ configuration in the columns of characters $c, c'$, and such a species must occur before $c$ and $c'$ in tree $T_M$.
In fact, since $c$ and $c'$ are negated along distinct paths of the tree, the only way to have a species with the $(0,0)$ configuration is to have such a species node before the occurrence of $c$ and $c'$. \end{proof} The following proposition is an immediate consequence of the definition of branch-tree and the fact that a maximal reducible graph contains no null character. \begin{proposition} \label{proposition:branch-node-all-characters} Let \ensuremath{G_{RB}}\xspacem be a maximal reducible graph solved by a branch-tree $T_M$, and let $x$ be the branch-node of $T_M$. Then the state of $x$ consists of all characters of \ensuremath{G_{RB}}\xspacem. \end{proposition} \begin{lemma} \label{lemma:induced-subgraph} Let $\ensuremath{G_{RB}}\xspace$ be a connected red-black graph and let \ensuremath{G_{RB}}\xspacem be the subgraph of $\ensuremath{G_{RB}}\xspace$ induced by the maximal characters of $\ensuremath{G_{RB}}\xspace$. Then ${\ensuremath{G_{RB}}\xspacem}$ is a connected graph. \end{lemma} \begin{proof} Assume to the contrary that \ensuremath{G_{RB}}\xspacem is not connected. Since $\ensuremath{G_{RB}}\xspace$ is connected, there exists a character $c$ in $\ensuremath{G_{RB}}\xspace$ that is adjacent to two species $s_{1}$ and $s_{2}$ belonging to two distinct connected components of the graph \ensuremath{G_{RB}}\xspacem, but $c$ is not in the graph \ensuremath{G_{RB}}\xspacem, i.e., $c$ is not maximal. If $c$ were contained in a maximal character $c_{M}$, then $c_{M}$ would be adjacent to both $s_{1}$ and $s_{2}$ in \ensuremath{G_{RB}}\xspacem, contradicting the assumption that $s_{1}$ and $s_{2}$ belong to two different connected components of \ensuremath{G_{RB}}\xspacem. Thus, given a character $c'$ of a given component of $\ensuremath{G_{RB}}\xspacem$, either $c$ is disjoint from $c'$ or $c$ shares a common species with $c'$.
But since $c$ is adjacent to a species $s'$ lying in a component distinct from the one containing $c'$, and since $c'$ is maximal in $\ensuremath{G_{RB}}\xspace$, it follows that $c$ and $c'$ are not comparable. Consequently, since $c$ is disjoint from or not comparable with any other character in $\ensuremath{G_{RB}}\xspace$, it follows that $c$ is also maximal, thus contradicting the initial assumption that $\ensuremath{G_{RB}}\xspacem$ is not connected. \end{proof} \section{The Hasse diagram and the algorithm} \label{sec:poset-hasse} In this section we explore a characterization of the Hasse diagram associated with maximal reducible graphs. For our purposes we are interested only in solutions of a maximal reducible graph \ensuremath{G_{RB}}\xspacem that are in a special form that we call \emph{normal form}. This form requires that the tree $T$ does not have two consecutive edges $e_1, e_2$ labeled by $c^+$ and $c^-$, respectively. \begin{lemma} \label{normal-form-maximal} Let $T_M$ be a tree solving a maximal reducible graph \ensuremath{G_{RB}}\xspacem. Then tree $T_M$ can be transformed into a tree $T'_M$ that solves \ensuremath{G_{RB}}\xspacem and is in normal form. \end{lemma} \begin{proof} Assume that tree $T_M$ has two consecutive edges $e_1, e_2$ labeled by $c^+$ and $c^-$, respectively. We distinguish two cases: either $e_2$ ends at or above the branch-node, or $e_2$ ends below the branch-node. Observe that in the latter case $e_1$ must end at the branch-node, since by definition of branch-tree no positive character can occur below the branch-node. If the first case holds, then by construction of the tree $T_M$, the edge $e_1$ must end at a species $s_1$. Since the species $s_1$ is the first species having $c^+$ while $c^-$ is introduced immediately after, it is immediate that $c$ is not a maximal character, contradicting the fact that $T_M$ solves \ensuremath{G_{RB}}\xspacem.
Thus the only remaining case occurs when $e_1$ ends at the branch-node $x$ of the tree $T_M$ and $x$ is not labeled by a species, since otherwise $c$ is again not a maximal character in \ensuremath{G_{RB}}\xspacem. Now, the species $s_2$ at the end of $e_2$ does not have character $c$, while all the other species $s$ below $x$ along the branches distinct from the one containing $s_2$ have character $c$. Observe that we can transform tree $T_M$ into a tree $T'_M$ by introducing species $s_2$ above the branch-node $x$; then $c^+$ is introduced below species $s_2$ along the edge $(s_2, x)$. This change is possible in tree $T_M$ since, if there existed a subtree with root $s_2$, then $c$ would not be maximal in tree $T_M$; thus such a subtree does not exist. Then $T'_M$ is in normal form. \end{proof} The following lemma shows that the Hasse diagram for a maximal reducible graph $\ensuremath{G_{RB}}\xspace|C_M$ contains a safe chain. \begin{lemma} \label{lemma:initial-chain} Let $\mathcal{P}$ be the diagram for a maximal reducible graph \ensuremath{G_{RB}}\xspacem. Let $T_M$ be a tree of \ensuremath{G_{RB}}\xspacem such that the child $x$ of the root $r$ is a source of the diagram $\mathcal{P}$. Then the longest path $<r,\ldots , z>$ of $T_M$ containing the root $r$ of $T_M$ and consisting only of positive edges corresponds to a safe chain of $\mathcal{P}$, called the \emph{initial chain} of tree $T_M$. \end{lemma} \begin{proof} Consider a tree $T_M$ solving the maximal reducible graph \ensuremath{G_{RB}}\xspacem. Since \ensuremath{G_{RB}}\xspacem cannot have singletons, the edge of $T_M$ incident on the root must be positive, hence $r\neq x$. By construction of a tree $T$ solving $\ensuremath{G_{RB}}\xspace$, each node $t_{i}$ of tree $T_M$ is labeled by a species, while $x$ is not a species only if $x$ is the branch-node of $T_M$. First consider the case that $x$ is not a species. Then $x$ is the branch-node of $T_M$.
By Proposition~\ref{proposition:branch-node-all-characters}, the state of $x$ consists of all characters of \ensuremath{G_{RB}}\xspacem. Assume that the path $<r,\ldots , x>$ consists of the single edge $(r,x)$, i.e., no species lies along the initial path of tree $T_M$. Then we show that the tree cannot be in normal form. Since $T_M$ is a branch-tree, and $(r,x)$ is the only edge of the initial path of a solution of \ensuremath{G_{RB}}\xspacem, no two species of \ensuremath{G_{RB}}\xspacem are comparable, hence the Hasse diagram $\mathcal{P}$ of \ensuremath{G_{RB}}\xspacem consists of singletons $s$. Clearly, the tree must then have two consecutive edges, one labeled $c^+$ and the other labeled $c^-$, which is not possible in normal form. Indeed, we can obtain a new solution $T_{1}$ from $T_M$ by replacing the edge $(r,x)$ with the path $<r, s, x>$ (such a replacement is always possible since the set of characters of $s$ is a subset of the characters of $x$). It is immediate to notice that $T_{1}$ is a solution of \ensuremath{G_{RB}}\xspacem and the trivial chain of $\mathcal{P}$ consisting of the singleton $s$ corresponds to the path $<r, s, x>$, which is the initial path of $T_{1}$. This proves the lemma in this case. Now consider the case that $x$ is a species. Let $p=<r,t_{1},\ldots,t_{l}>$ be the path $<r,\ldots , x>$ of $T_M$, where all $t_{i}$ are species of \ensuremath{G_{RB}}\xspace and $l \geq 2$, i.e., at least two species label the initial path of tree $T_M$. Notice that each edge $(t_{i}, t_{i+1})$ of $p$ is either an arc of $\mathcal{P}$ or a path of $\mathcal{P}$: in the latter case replace in $T_M$ the edge $(t_{i}, t_{i+1})$ with the corresponding path of $\mathcal{P}$. It is immediate to notice that the resulting tree is still a solution of \ensuremath{G_{RB}}\xspace. Since $p$ is the longest path consisting of positive edges, we show that $t_{l}$ must be a sink of the diagram $\mathcal{P}$.
Indeed, given a species $s$ that is a descendant of species $t_{l}$, the edges that follow $t_{l}$ in tree $T_M$ are mixed or negative edges. Since it is not possible to have two consecutive edges labeled $c^+$, $c^-$, because of the normal form, it follows that any such species $s$ lacks at least one character of $t_l$. Hence $s$ cannot include species $t_{l}$, which must therefore be a sink of the diagram. If $t_{1}$ is not a source of $\mathcal{P}$, then $(w, t_{1})$ is an edge of $\mathcal{P}$ for some node $w$, hence the set of characters of $w$ is a subset of the set of characters of $t_{1}$. Since the edge $(r,t_{1})$ of $T_M$ is labeled by the characters of $t_{1}$, we can replace the edge $(r,t_{1})$ of $T_M$ with the path $(r, w, t_{1})$. We can iterate this process until the child of $r$ in the tree is the source of the chain. The chain is safe since the corresponding c-reduction is the initial portion of the c-reduction associated with the tree $T_M$ solving \ensuremath{G_{RB}}\xspacem. \end{proof} As a main consequence of Lemma~\ref{lemma:initial-chain} we are able to show that, given a reducible graph \ensuremath{G_{RB}}\xspace, there exists a tree $T$ solving $\ensuremath{G_{RB}}\xspace$ that starts with the inactive characters of a source $s$ of a safe chain of the diagram $\mathcal{P}$ for $\ensuremath{G_{RB}}\xspace|C_M$. Since such a safe source may not be unique in the diagram $\mathcal{P}$, the rest of the section provides results that will be used to show that we can choose any safe source of the diagram $\mathcal{P}$ to find the initial characters of a tree $T$ solving \ensuremath{G_{RB}}\xspace. \begin{lemma} \label{lemma:initial-source} Let \ensuremath{G_{RB}}\xspace be a reducible graph and $\mathcal{P}$ the diagram for the graph $\ensuremath{G_{RB}}\xspace|C_M$. Then there exists a solution $T$ of \ensuremath{G_{RB}}\xspace and a child $x$ of the root $r$ of $T$ such that $C(x)\cap C_{M}$ is the set of characters of a safe source $s$ of $\mathcal{P}$.
The vertex $x$ is called the \emph{initial state} of the tree $T$. \end{lemma} \begin{proof} Given a solution $T$ of \ensuremath{G_{RB}}\xspace, we denote by $z$ the least common ancestor of all species of \ensuremath{G_{RB}}\xspace in $T$. Without loss of generality, we can assume that there is a single edge from the root of $T$ to $z$. Let $T_{1}$ be a solution of \ensuremath{G_{RB}}\xspace for which node $z$ minimizes the number of characters of $C_{M}$. We distinguish two cases, according to whether $z$ is a species of \ensuremath{G_{RB}}\xspace. Case 1: assume initially that $z$ is a species, hence $C(z)\cap C_{M}$ is a node of the diagram $\mathcal{P}$. By our construction, $C(z)\cap C_{M}$ cannot have an incoming arc in $\mathcal{P}$, otherwise we would contradict the minimality of $T_{1}$. Then we can split the edge $(r,z)$ of $T_{1}$ into two edges $(r,x)$ and $(x,z)$, where the label of $(r,x)$ is $C(x)\cap C_{M}$. Clearly, by construction there exists a species $s$ of $\mathcal{P}$ with the set $C(x)\cap C_{M}$ of characters, and $s$ is a source node of the diagram $\mathcal{P}$. Consider the tree $T_1|C_M$ that is induced by the maximal characters of $\ensuremath{G_{RB}}\xspace$ and such that $s$ is the child of the root of $T_1|C_M$ and is a source of the diagram $\mathcal{P}$. By applying Lemma~\ref{lemma:initial-chain}, the tree $T_1|C_M$ has an initial chain which is a safe chain of the diagram $\mathcal{P}$. Finally, since $s$ is obtained by the traversal of tree $T_1$, $s$ is safe in $\ensuremath{G_{RB}}\xspace$ and, being the source of a safe chain of the diagram $\mathcal{P}$, is a safe source of $\ensuremath{G_{RB}}\xspace$, as desired. Moreover, $s$ must be the source of the safe chain of the diagram $\mathcal{P}$ which is the initial chain of tree $T|C_M$, thus proving what is required. Case 2: assume now that $z$ is not a species.
It is immediate to notice that the tree $T_{1}|C_{M}$, which is a solution of $\ensuremath{G_{RB}}\xspace|C_{M}$, must be a branch-tree, by Lemma~\ref{lemma:tree-structure-max}. Moreover, by Proposition~\ref{proposition:branch-node-all-characters}, $C_{M} \subseteq C(z)$. Let $x$ be any species of \ensuremath{G_{RB}}\xspace such that the species $s$ with set of characters $C(x)\cap C_{M}$ is a source of the diagram $\mathcal{P}$. Just as in the previous case, we can split the edge $(r,z)$ of $T_{1}$ into two edges $(r,x)$ and $(x,z)$, where the label of $(r,x)$ is given by $C(x)\cap C_{M}$. Then $C(x)\cap C_{M}$ is the set of characters of a species $s$ of tree $T_{1}|C_{M}$, where $s$ is the source of the diagram $\mathcal{P}$ of $T_{1}|C_{M}$. Similarly to case 1 above, $s$ is a safe source of $\ensuremath{G_{RB}}\xspace$. \end{proof} The following technical lemma is used to characterize the safe chains of the diagram $\mathcal{P}$ of a maximal reducible graph. \begin{lemma} \label{lemma:prelim-inverted} Let \ensuremath{G_{RB}}\xspacem be a maximal reducible graph, let $\mathcal{P}$ be the diagram for \ensuremath{G_{RB}}\xspacem, and let $T_M$ be a tree solving \ensuremath{G_{RB}}\xspacem. Given a chain $\mathcal C$ of the diagram $\mathcal{P}$, one of the following statements holds: \begin{enumerate} \item $\mathcal C$ is a sequence of species that occur along a path $\pi$ of the tree $T_M$, where $\pi$ has only positive edges, \item $\mathcal C$ is a sequence of species that occur along a path $\pi$ of the tree $T_M$, where $\pi$ has only negative edges, \item $\mathcal C$ is a sequence of species that occur along a path $\pi_1$ consisting only of positive edges and along a path $\pi_2$ consisting only of negative edges, where paths $\pi_1$ and $\pi_2$ are subpaths of the same path and, for any character $c^-$ of path $\pi_2$, $c^+$ occurs along the path that connects $\pi_1$ to $\pi_2$.
\end{enumerate} \end{lemma} \begin{proof} Assume that chain $\mathcal C$ has source $s$ and sink $t$. Let us first show that the species of a chain lie along the same path $\pi$ of tree $T_M$ going from the root to a leaf of tree $T_M$, by proving the contrapositive. Let $s, s'$ be two species that are not in the same path of $T_M$. By the structure of tree $T_M$ stated in Lemma~\ref{lemma:tree-structure-max}, the least common ancestor $x$ of $s, s'$ is neither $s$ nor $s'$, and the paths from $x$ to $s$ and $s'$ contain only negative edges. These two facts imply that $s$ and $s'$ are not comparable, hence they cannot be in a chain. Thus a chain corresponds to a path of the tree. Assume first that tree $T_M$ is a line-tree: in this case we can prove that only cases (1) or (2) can happen. Assume to the contrary that the path of $T_M$ connecting $s$ and $t$ contains both positive and negative edges (or mixed edges). Assume initially that $s$ is an ancestor of $t$, and consider the negated character $c^{-}$ that is the first to appear in such a path (and let $(s_{1}, s_{2})$ be the edge labeled by $c^{-}$). Observe that by Lemma~\ref{lemma:tree-structure-max1} characters are negated in the same order as they are introduced, hence the edge labeled $c^{+}$ either is the first edge of the path or does not belong to the path at all. In both cases, $s_{1}$ has the character $c$ while $s_{2}$ does not, contradicting the assumption that $(s_{1}, s_{2})$ is an edge of the Hasse diagram. A similar argument holds when $t$ is an ancestor of $s$, showing that in that case a chain may label a path consisting only of negative edges. Consider now the case when $T_M$ is a branch-tree. If the path connecting $s$ and $t$ in $T_M$ is entirely contained in the initial path, or is composed only of descendants of the branch-node, then condition (1) or (2), respectively, holds.
Hence we only have to consider the case when the $s$--$t$ path contains both ancestors and descendants of the branch-node. We split this path into $\pi_{1}$ and $\pi_{2}$, the two subpaths of $\pi$ containing the ancestors and the descendants of the branch-node, respectively, and from which the species of chain $\mathcal C$ are obtained. Let $x_{1}$ be the last species of $\pi_{1}$ and let $x_{2}$ be the first species of $\pi_{2}$: clearly $(x_{1}, x_{2})$ is an edge of $\mathcal{P}$. The same argument used for the line-tree case also proves that $\pi_{1}$ does not contain any negated character $c^{-}$. Moreover, the edge $(x_{1},x_2)$ of $T_M$ cannot be labeled by a negated character $c^{-}$, unless the character $c^+$ is introduced after species $x_1$ along the path that connects $x_1$ to $x_2$. Indeed, if $c^+$ is on path $\pi_1$, no species of $\pi_{2}$ can have $c$, while $x_{1}$ has $c$, contradicting the fact that $(x_{1},x_{2})$ is an edge of $\mathcal{P}$. By Lemma~\ref{lemma:tree-structure-max1} all edges of $\pi_{2}$ must be negative, completing the proof. \end{proof} We are now able to show the next lemma, which gives a characterization of the safe chains of the diagram $\mathcal{P}$ of a maximal reducible graph and thus allows us to characterize the distinct safe sources of \ensuremath{G_{RB}}\xspace. In the next lemma we use the notion of inverted path: a path $\pi_{u,v}$ from a node $u$ of $T$ to a node $v$ of $T$ is inverted in tree $T_1$ if the sequence $<s_1, \ldots, s_l>$ of species of the path $\pi_{u,v}$ is replaced in $T_1$ by the sequence $<s_l, s_{l-1}, \ldots, s_1>$. \begin{lemma} \label{lemma:inverted-tree+safe} Let \ensuremath{G_{RB}}\xspacem be a maximal reducible graph and let $\mathcal C$ and $\mathcal C_{1}$ be two safe chains with distinct sources, where $\mathcal C_{1}$ is not trivial. Let $T_M$ be a tree solving \ensuremath{G_{RB}}\xspacem such that $\mathcal C$ is the initial chain of $T_M$.
Then $T_M$ is a line-tree and $\mathcal C_{1}$ is the initial chain of a line-tree $T_{1}$ in which the path $\pi_{r,v}$, where $r$ is the root of $T_M$ and $v$ is the leaf of $T_M$, is inverted. \end{lemma} \begin{figure} \caption{Phylogeny} \caption{Hasse diagram} \caption{The chain $\mathcal{C}_{1}$} \label{fig:chain1} \end{figure} \begin{proof} First observe that by definition of \ensuremath{G_{RB}}\xspacem the graph has no active characters. By Lemma~\ref{lemma:prelim-inverted}, given the chain $\mathcal{C}_{1}$, the following cases are possible: (1) the sequence $\mathcal{C}_{1}$ labels a path of tree $T_M$ consisting of positive edges; (2) $\mathcal{C}_{1}$ labels a path of tree $T_M$ consisting of negative edges; (3) $\mathcal{C}_{1}$ labels two paths $\pi_1$ and $\pi_2$, where $\pi_1$ consists of positive edges and $\pi_2$ consists of negative edges of tree $T_M$, both paths occur along the same path of tree $T_M$, and for any character $c^-$ of $\pi_2$ it holds that $c^+$ labels only edges of the path that connects the two paths. Case 1.~(Figure~\ref{fig:chain1}). Assume first that the sequence $\mathcal{C}_{1}$ consists of $<s_1, s_2, \ldots, s_{k}, s_{k+1}>$ and labels a path in tree $T_M$ consisting of positive edges $<c_1, \ldots, c_k>$ of the tree, i.e., species $s_i$ is the end of a positive edge labeled $c_i$. W.l.o.g.\ we assume that each edge is labeled by a single character (the same argument applies in the case that an edge is labeled by more than one character). Now, since $\mathcal{C}_{1}$ is not the initial chain of tree $T_M$, it must occur after chain $\mathcal{C}$ in tree $T_M$. We distinguish two cases. Case 1.1. Let $c_0$ be a character of $s_1$ that comes from a species $s_0$ that precedes $s_1$ along the initial path. We now show that $c_1$ and $c_0$ are in conflict in graph \ensuremath{G_{RB}}\xspacem.
Observe that they are not comparable, being maximal characters, but there also exist two species in graph \ensuremath{G_{RB}}\xspacem, one that has both characters, namely the species $s_1$, and one that does not have both characters. Since $s_1$ is the source of chain $\mathcal{C}_{1}$, $s_1$ must be preceded by an edge labeled $d^-$. But $d^+$ precedes character ${c_0}^+$, as characters are negated in the order they are introduced (see Lemma~\ref{lemma:tree-structure-max1}), and thus $d^+$ is introduced in a species $s_2$ distinct from $s_0$ and not having characters $c_0, c_1$ (by the normal form of tree $T_M$, $d^+$ and $d^-$ cannot label two consecutive edges). This fact proves that in graph \ensuremath{G_{RB}}\xspacem species $s_2$ has neither $c_0$ nor $c_1$. Consequently, characters $c_0, c_1$ are in conflict in \ensuremath{G_{RB}}\xspacem. Then $c_0, c_1$ induce a red $\Sigma$-graph in \ensuremath{G_{RB}}\xspacem when the source species $s_1$ and $c_1$ are realized, since the species $s_2$ with configuration $(0,0)$ and species $s_0$ with configuration $(1,0)$ for the pair $(c_0, c_1)$ are not removed from graph \ensuremath{G_{RB}}\xspacem. Hence we obtain a contradiction with the fact that the chain $\mathcal{C}_{1}$ is safe, i.e., that it has a safe source. Case 1.2. Assume that there does not exist any character $c_{0}$ of $s_1$ that belongs also to a species $s_{0}$ preceding $s_1$ along the initial path, that is, $s_{1}$ has none of the characters labeling the path of $T_M$ from the root to $s_{1}$. Hence we can obtain a new solution $T_{1}$ for \ensuremath{G_{RB}}\xspacem by regrafting the subtree of $T_M$ rooted at $s_{1}$ as a new child of the root. By Lemma~\ref{lemma:connected-components-Grb=children}, the graph \ensuremath{G_{RB}}\xspacem is then disconnected, which is a contradiction. Since both cases 1.1 and 1.2 are impossible, case 1 cannot occur. Case 2.~(Figure~\ref{fig:chain2}).
Assume that the sequence $\mathcal{C}_{1}$ consists of $<s_{k+1}, \ldots, s_1>$ and is induced by a path in tree $T_M$ of negative edges $<{c_1}^{-}, \ldots, {c_{k}}^{-}>$ of the tree $T_M$ (i.e., each edge $(s_i , s_{i+1})$ is labeled by $c_i^-$); by the maximality of all characters of \ensuremath{G_{RB}}\xspacem, no two negated characters can label the same edge. By Lemma~\ref{lemma:tree-structure-max1} the sequence of characters $<{c_1}^{+}, \ldots, {c_k}^{+}>$ appears in the initial path above the edge $c_{1}^{-}$; notice that those positive characters might be interleaved with other characters. First we prove that no such interleaving can happen, that is, no positive character occurs between two consecutive characters ${c_i}^+$, ${c_{i+1}}^+$ for $i \in \{1, \ldots, k-1\}$. Assume to the contrary that a character $d^+$ occurs between $c_i^{+}$ and $c_{i+1}^{+}$ in $T_M$. Then $d^+$ is in conflict with character $c_{k}^+$ in \ensuremath{G_{RB}}\xspacem. Indeed, $d^+$ and ${c_{k}}^+$ are not comparable (being maximal characters) and there exists a species of \ensuremath{G_{RB}}\xspacem that contains $d, c_k$, namely species $s_{k}$ of chain $\mathcal{C}_{1}$. Indeed, $d^-$ cannot label an edge of chain $\mathcal{C}_{1}$ (otherwise $\mathcal{C}_{1}$ would not be the correct chain), and thus $d^-$ must occur on a path distinct from the one of $\mathcal{C}_{1}$, by virtue of Lemma~\ref{lemma:tree-structure-max1}. Moreover, there is also a species $w_1$ having neither $c_k$ nor $d$, namely the one occurring before ${c_i}^+$. Observe that there is at least one species having $d$ but not $c_k$, namely the species $w_2$ that is the end of the edge labeled $d^+$ along the path $<{c_1}^{+}, \ldots, {c_k}^{+}>$.
Since $w_1$ and $w_2$ are not realized before the species $s_{k}$ of chain $\mathcal{C}_{1}$, the conflict between $d, c_k$ cannot be removed when the chain $\mathcal{C}_{1}$ is realized, that is, $d, c_k$ induce a red $\Sigma$-graph after the realization of the species $s_{k+1}$ of chain $\mathcal{C}_{1}$. It follows that chain $\mathcal{C}_{1}$ cannot be safe in \ensuremath{G_{RB}}\xspacem, a contradiction with the initial assumption. Thus no positive character occurs between two consecutive characters ${c_i}^+, {c_{i+1}}^+$ in tree $T_M$. Observe that the above argument showing that $d^-$ cannot occur on a path distinct from the one containing chain $\mathcal{C}_{1}$ shows that the path labeled by $<{c_1}^{-}, \ldots, {c_{k}}^{-}>$ does not have branches for characters introduced after $<{c_1}^{+}, \ldots, {c_{k}}^{+}>$. Moreover, it cannot have branches due to the negation of characters that precede ${c_1}^{-}$, since otherwise there would exist a character $c$ that is negated after one that is introduced in the initial path after $c$, contradicting Lemma~\ref{lemma:tree-structure-max1}. It follows that the path labeled $<{c_1}^{-}, \ldots, {c_k}^{-}>$ is simple. To complete the proof of case 2, we show the following. Claim (1): the sequence of edges $<{c_1}^{-}, \ldots, {c_k}^{-}>$ is not followed in tree $T_M$ by any other characters (positive or negative). This fact shows that the tree $T_M$ is a line-tree, since the edge labeled ${c_1}^{-}$ is preceded by an edge labeled by a positive character, and thus no branch-node occurs before the sequence $<{c_1}^{-}, \ldots, {c_k}^{-}>$ and, this path being simple, no branch-node occurs after it. By virtue of Lemma~\ref{lemma:invert-line-trees} it follows that we can build a tree $T'_M$ by reading the tree $T_M$ in inverted order, that is, from the leaf to the root. Thus this case proves that $\mathcal{C}_{1}$ is the initial chain of the tree $T'_M$, and this concludes the proof of the lemma.
In order to prove Claim (1) we prove the following Claim (2): the path labeled $<{c_1}^{+}, \ldots, {c_k}^{+}>$ can be followed by a positive character $d^+$ before the occurrence of $<{c_1}^{-}, \ldots, {c_k}^{-}>$ only if $d^+$ cannot be negated in the tree, that is, only if the path labeled $<{c_1}^{-}, \ldots, {c_k}^{-}>$ ends in a leaf of the tree. Observe that if $<{c_1}^{+}, \ldots, {c_k}^{+}>$ is followed only by negative characters, then the sequence $<{c_1}^{-}, \ldots, {c_k}^{-}>$ does not induce a chain, since the chain ends in a different sink node. Now, assume that a character $d^+$ follows the sequence $<{c_1}, \ldots, {c_k}>$. Recall that by Lemma~\ref{lemma:tree-structure-max1}, positive characters are introduced in the same order as they are negated, if they occur negated along the same path. Consequently, $d^+$ must be negated along a path distinct from the one containing chain $\mathcal{C}_{1}$ (i.e., $T_M$ is a branch-tree). Assume that $d$ is negated below the negation of character $c_i$ of chain $\mathcal{C}_{1}$, where $1 \leq i <k$ (otherwise, if $i = k$, then $\mathcal{C}_{1}$ has a different source node), and thus there exists a species $w_1$ having $c_{k}$ but not $d$, where $w_1$ is the end of the branch containing $d^-$. Observe that in chain $\mathcal{C}_{1}$ the species $s_k$ contains characters $c_{k}$ and $d$. These two characters share a common species, are not comparable, and there clearly exists a species $w_2$ having neither $c_{k}$ nor $d$, namely a species of the chain $<{c_1}^{+}, \ldots, {c_k}^{+}>$. Thus they are in conflict in \ensuremath{G_{RB}}\xspacem.
Now, after the realization of $s_k$, the characters $c_{k}$ and $d$ induce a red $\Sigma$-graph in \ensuremath{G_{RB}}\xspacem, as the species $w_2$ with configuration $(0,0)$ for the pair $(c_k, d)$ is not realized before $s_k$, and similarly there are species with configurations $(1,0)$ and $(0,1)$ that are not realized before $s_k$, such as the species in which the edge labeled ${c_k}^{+}$ ends. Thus we obtain a contradiction with the fact that the chain $\mathcal{C}_{1}$ is safe. As a consequence of Claim (2), either there is no character that follows the path labeled $<{c_1}^{-}, \ldots, {c_k}^{-}>$, proving Claim (1), or there exists no character $d^+$ that follows the path labeled $<{c_1}^{+}, \ldots, {c_k}^{+}>$. Thus, if there existed a character $c$ occurring after the sequence $<{c_1}^{-}, \ldots, {c_k}^{-}>$, then $c$ would be disjoint from the other characters in \ensuremath{G_{RB}}\xspacem, contradicting the fact that \ensuremath{G_{RB}}\xspacem is connected. Moreover, no negative character occurs after the sequences $<{c_1}, \ldots, {c_k}>$ and $<{c_1}^{-}, \ldots, {c_k}^{-}>$ because of the ordering of characters. Thus Claim (1) holds. Case 3.~(Figure~\ref{fig:chain3}). Assume that the sequence $\mathcal{C}_{1}$ consists of the sequence $<w_1, w_2, \ldots, w_{l}>$ (induced in tree $T_M$ by a path labeled by consecutive positive characters $<b_1, b_2, \ldots, b_l>$) followed by the sequence $<s_{k+1}, \cdots, s_{1}>$ induced in tree $T_M$ by a path labeled by consecutive negative characters $<{c_1}^{-}, \cdots, {c_k}^{-}>$ of the tree. More precisely, each edge $(s_{i}, s_{i+1})$ is labeled $c_{i}^-$ (by the maximality of all characters of \ensuremath{G_{RB}}\xspacem, no two negated characters can label the same edge), and $w_i$ is the end of the edge labeled $b_i$ for $1 \le i \leq l$. We then show the following Claim (3): there exists no character in tree $T_M$ that precedes the sequence $<b_1, b_2, \cdots, b_l>$.
This fact shows that chain $\mathcal{C}_{1}$ shares the same source node as chain $\mathcal{C}$, contradicting the assumption that they have distinct sources. Observe that case 3 implies that for each character ${c_i}^{-}$, the character ${c_i}^+$ occurs after $<b_1, b_2, \ldots, b_l>$. Now, by Lemma~\ref{lemma:prelim-inverted}, the sequence of positive edges $<{c_1}^{+}, \ldots, {c_k}^{+}>$ must occur before the sequence $<{c_1}^{-}, \ldots, {c_k}^{-}>$ and after the sequence $<b_1, b_2, \ldots, b_l>$. This fact implies that the path $<{c_1}^{-}, \ldots, {c_k}^{-}>$ occurs below the branch-node, as the negated characters $<{b_1}^{-}, {b_2}^{-}, \ldots, {b_l}^{-}>$ must occur below the branch-node of tree $T_M$. Similarly, a positive character $d^+$ that occurs before $<{b_1}^{-}, {b_2}^{-}, \ldots, {b_l}^{-}>$ must occur negated below the branch-node. Let us prove Claim (3). Assume to the contrary that a character $d^+$ occurs before ${b_1}^+$. Since chain $\mathcal{C}_{1}$ does not contain $d^-$, it must be that $d^-$ occurs along a path distinct from the one containing $\mathcal{C}_{1}$ and from the one containing $<{b_1}^{-}, {b_2}^{-}, \ldots, {b_l}^{-}>$. But then the species $w_2$ of $\mathcal{C}_{1}$ has character $d$. Observe that, since chain $\mathcal{C}_{1}$ starts with the positive edge ${b_1}^{+}$, there must exist a species before $w_1$ with a positive character distinct from $d^+$ that is then negated after $d^+$; in other words, there exists a species $s_0$ having neither $d$ nor $b_1$. The two characters are not comparable (being maximal), and they occur in the same species $w_2$, thus they are conflicting in \ensuremath{G_{RB}}\xspacem. But when $w_2$ is realized they induce a red $\Sigma$-graph, as the species with configurations $(0,0)$ and $(1,0)$ are not all removed from \ensuremath{G_{RB}}\xspacem when realizing $w_2$. This would contradict the fact that $\mathcal{C}_{1}$ is safe, thus proving that Claim (3) holds.
\end{proof} A main consequence of the proof of the previous Lemma~\ref{lemma:inverted-tree+safe} is the fact that if the diagram of a maximal reducible graph is not degenerate, then there are at most two trees solving the same maximal reducible graph such that the initial chains of such trees start with distinct sources. \begin{lemma} \label{cons:inverted-tree+safe} Let $\ensuremath{G_{RB}}\xspace$ be a reducible graph. Let $\ensuremath{G_{RB}}\xspace|C_M$ be a maximal reducible graph whose diagram $\mathcal{P}$ is not degenerate. Then diagram $\mathcal{P}$ has at most two distinct safe chains and at most two safe sources for \ensuremath{G_{RB}}\xspace. \end{lemma} \begin{comment} The following result extends Lemma~\ref{lemma:inverted-tree+safe} to graphs with a degenerate diagram. \begin{lemma} \label{lemma:inverted-tree+safe+degenerate} Let $\ensuremath{G_{RB}}\xspace|C_M$ be a maximal reducible graph having a degenerate diagram and let $\mathcal C$ and $\mathcal C'$ be safe chains. Let $T$ be a tree solving $\ensuremath{G_{RB}}\xspace|C_M$ such that $\mathcal C$ is the initial chain of $T$. Then $\mathcal C'$ is the initial chain of a tree $T'$ that differs from tree $T$ by a path from a node $u$ of the initial path to a leaf node $v$ that is inverted in tree $T'$. \end{lemma} \begin{proof} First we show that all singleton species have at least two characters. Observe that any two singleton species $s_1, s_2$ of the diagram $\mathcal{P}$ are species nodes of the tree that are connected by a mixed edge in tree $T$. Assume that $s_1$ precedes $s_2$. Since $s_1$ and $s_2$ are not comparable, it means that the edge connecting $s_1$ to $s_2$ is labeled by the character ${c_2}^+$ and the character ${c_1}^-$.
Now, $c_1$ cannot be added to $s_1$ and then negated in $s_2$ unless $s_1$ has an additional character $c_0$ that is also in $s_2$; otherwise $s_1$ and $s_2$ have disjoint characters and thus the graph $\ensuremath{G_{RB}}\xspace|C_M$ is not connected, which contradicts the initial assumption. This fact proves that the two species have at least two characters, as required. Now, let chain $\mathcal C'$ be safe and trivial and contain at least the characters ${c_1}$ and ${c_2}$. Observe that, being maximal characters, $c_1$ and $c_2$ are not comparable and moreover species $\mathcal C'$ has both characters. If the chain $\mathcal C$ has characters distinct from $c_1$ and $c_2$, it means that $\mathcal C$ is a species that does not have $c_1$ and $c_2$, meaning that they are in conflict in graph $\ensuremath{G_{RB}}\xspace|C_M$, which contradicts the assumption that $\mathcal C'$ is safe. Thus chain $\mathcal C$ has at least one of the characters $c_1$ and $c_2$; assume that $\mathcal C$ contains $c_1$ and $c_0$. Two cases must be considered: (1) $c_0$ is active in $\ensuremath{G_{RB}}\xspace|C_M$ and (2) $c_0$ is not active in $\ensuremath{G_{RB}}\xspace|C_M$. If case (1) holds, then the matrix associated to $\ensuremath{G_{RB}}\xspace|C_M$ by definition has a row $s$ where there is a $1$ for $c_0$ and zeros for the other characters, thus again implying that there exists a species $s$ not having both characters $c_1, c_2$, a contradiction with the assumption that $\mathcal C'$ is safe. Now, if $c_0$ is not active, $c_0$ and $c_1$ are not comparable, which means that there exists a species with $c_0$ and not $c_1$ and, vice versa, a species with $c_1$ and not $c_0$. Both species must be children of chain $\mathcal C$, which must be a branch-node of the tree. Consequently, species $\mathcal C'$ must also be a leaf of tree $T$, since no positive or mixed edge is possible below the branch-node.
Thus $\mathcal C'$ is a descendant of species $\mathcal C$ and $\mathcal C'$ is a leaf of the tree $T$. Then it is immediate that there exists a tree $T'$ where $\mathcal C'$ is the initial chain and $\mathcal C$ is a leaf instead, proving what is required. \end{proof} Observe that in the degenerate case, in virtue of the proof of the previous Lemma~\ref{lemma:inverted-tree+safe+degenerate}, any singleton is a safe source of the diagram $\mathcal{P}$. \end{comment} \begin{figure} \caption{Phylogeny} \caption{Hasse diagram} \caption{Alternative Phylogeny} \caption{The chain $\mathcal{C}_{1}$} \label{fig:chain2} \end{figure} \begin{figure} \caption{Phylogeny} \caption{Hasse diagram} \caption{Alternative Phylogeny} \caption{The chain $\mathcal{C}_{1}$} \label{fig:chain3} \end{figure} \subsection{The degenerate diagram} If the diagram $\mathcal{P}$ for graph $\ensuremath{G_{RB}}\xspace|C_M$ is degenerate, then the notion of safe source for $\ensuremath{G_{RB}}\xspace$ is defined differently, as follows. \begin{definition}[safe source in a degenerate diagram] \label{definition:safe-source-degenerate} Let $\ensuremath{G_{RB}}\xspace$ be a reducible graph such that $\ensuremath{G_{RB}}\xspace|C_M$ has a degenerate diagram. Then a source $s$ of $\mathcal{P}$ is safe for \ensuremath{G_{RB}}\xspace if the characters of $C(s)$ are realized in graph $\ensuremath{G_{RB}}\xspace$ without inducing red $\Sigma$-graphs and either (1) $s$ is a species of $\ensuremath{G_{RB}}\xspace$ or (2) $s$ is not a species of $\ensuremath{G_{RB}}\xspace$ and none of the sources of $\mathcal{P}$ is a species of $\ensuremath{G_{RB}}\xspace$. \end{definition} A \emph{degenerate tree} $T_M$ for a maximal reducible graph that is not in normal form is a branch-tree whose branches are single edges outgoing from the branch-node, each labeled by a single negative character of the graph; only the leaves are labeled by species.
Then a degenerate tree in normal form is obtained by moving one of the leaves into the initial path and then removing the edge labeled by the negated character that ends in the leaf. Observe that given a degenerate tree $T_M$ solving $\ensuremath{G_{RB}}\xspace|C_M$ that is in normal form, with $s$ the species of the initial path, then for each leaf $s_i$ of tree $T_M$ there exists a tree ${T^i}_M$ solving $\ensuremath{G_{RB}}\xspace|C_M$ that is obtained by inverting the species $s_i$ with the species $s$ (see Figure~\ref{fig:degenerate-solutions}). \begin{lemma} \label{lemma:degenerate-diagram} Let $\ensuremath{G_{RB}}\xspace|C_M$ be a maximal reducible graph such that its diagram is degenerate and let $T_M$ be a tree solving $\ensuremath{G_{RB}}\xspace|C_M$. Then the tree $T_M$ is a degenerate tree and the graph $\ensuremath{G_{RB}}\xspace|C_M$ has no conflicting characters. \end{lemma} \begin{proof} Assume first that the initial path of the tree $T$ has more than one species. Then the species are ends of mixed edges, since they cannot be included one in the other. In the following we show that there cannot be two consecutive mixed edges in the tree $T$. Assume to the contrary that we have two consecutive mixed edges that negate the characters $c, c'$ one after the other. It is not restrictive to assume that these two edges are the first ones to occur along the initial path of tree $T$. Observe that the two characters must be introduced together, otherwise there are species one included in the other. But as a consequence of this fact, character $c'$ includes character $c$, which is a contradiction. As a consequence, there is at most one species $s$ along the initial path of tree $T$. Then we show that each branch of the tree consists of a single edge labeled only by a negated character.
Indeed, otherwise a branch would have at least two species $s, s'$, one included in the other, as they differ only by the negation of characters, which contradicts the fact that the diagram has only singletons. Let us now show that each branch is labeled by a single character. Otherwise, it is easy to show that the characters labeling a branch edge are identical characters in the graph, which contradicts the initial assumption, or they are not maximal ones. As a consequence of the above observations it follows that all the characters that are negated along the branches occur on the edge ending in species $s$, or they occur on the initial path, which has no species. It follows that the tree is a degenerate one, as required. It is immediate to show that the graph has no conflicting characters. \end{proof} \begin{theorem} \label{theorem:any-source-degenerate} Let $\ensuremath{G_{RB}}\xspace$ be a reducible graph. Let $\ensuremath{G_{RB}}\xspace|C_M$ be a maximal reducible graph whose diagram $\mathcal{P}$ is degenerate and let $s$ be a safe source of $\ensuremath{G_{RB}}\xspace$. Then there exists a solution $T$ of \ensuremath{G_{RB}}\xspace such that the child $x$ of the root $r$ of $T$ is labeled by the characters of $s$ that are inactive in $\ensuremath{G_{RB}}\xspace$. \end{theorem} \begin{proof} By the definition of safe source for $\ensuremath{G_{RB}}\xspace$ we have to distinguish two cases. Let $T$ be a tree solving $\ensuremath{G_{RB}}\xspace$ and let $T_M = T|C_M$ be the tree solving $\ensuremath{G_{RB}}\xspace|C_M$. Case 1: assume first that $s$ is not a species of $\ensuremath{G_{RB}}\xspace$, i.e., by definition any safe source of the diagram is safe for $\ensuremath{G_{RB}}\xspace$. Then given $T_M$ we need to consider two cases. Case 1.1: Assume that tree $T_M$ is not in normal form.
This fact implies that if there exists a species $s_1$ below the root of tree $T$ which is on the path of tree $T$ including the initial path of tree $T_M$, then $s_1$ includes all the characters of $C_M$. Then we can split the edge $(r,s_1)$ into two edges $(r, s)$ and $(s, s_1)$, where the first is labeled by the characters of $s$ that are inactive in $\ensuremath{G_{RB}}\xspace$, thus proving the Theorem. Otherwise, no species is along the path that includes the initial path of tree $T_M$, but then we can find a state $s$ of tree $T$ corresponding to the safe species $s$ of $\mathcal{P}$. Case 1.2: Assume that tree $T_M$ is in normal form, i.e., there exists a species $v$ along the initial path of tree $T_M$. Since by assumption $s$ is not a species of $\ensuremath{G_{RB}}\xspace$, there exists a species $s_1$ of graph \ensuremath{G_{RB}}\xspace including $s$ such that $s_1$ also includes at least a non-maximal character $x$. Let $s_1$ be the species with the minimum number of characters including $s$. Observe that such a species exists in tree $T$. Now, if tree $T$ starts with species $s_1$, it is immediate that the Theorem holds, since $T$ is the tree satisfying the Theorem. Thus we must consider the case that $s_1$ is a species of tree $T$ but $s_1$ is along a path of tree $T$ that is not the one including the initial path of tree $T_M$, that is, $s$ is a leaf of tree $T_M$. Moreover, observe that $s_1$ must be a leaf of tree $T$ because of the structure of degenerate trees. Let us recall that by the structure of the degenerate tree it must be that $s$ consists of the set $C_M \setminus \{c_0\}$. Now, by Lemma~\ref{lemma:degenerate-diagram} we know that there exists another tree ${T^1}_M$ with species $s$ in the initial path and solving $\ensuremath{G_{RB}}\xspace|C_M$ such that tree ${T^1}_M$ is obtained by inverting the path $\pi_{v,s}$ from $v$ to $s$ of tree ${T}_M$.
Now, it is easy to show that the tree ${T^1}_M$ can be extended to a solution $T_1$ for graph \ensuremath{G_{RB}}\xspace. Indeed, we build tree $T_1$ from $T$ by inverting the species of the path of tree $T$ that includes path $\pi_{v,s}$. We then show that tree $T_1$ represents all the species of graph \ensuremath{G_{RB}}\xspace and hence is a solution of \ensuremath{G_{RB}}\xspace; moreover, $T_1$ is such that it starts with species $s_1$, consequently it is the tree that satisfies the Theorem. First observe that inverting the path is possible since there are no active characters in \ensuremath{G_{RB}}\xspace: otherwise, it is immediate to show that $s$ could not be a safe source of $\mathcal{P}$, as it would have active characters in \ensuremath{G_{RB}}\xspace, which is not possible. Indeed, observe that the path $\pi_1$ of tree $T_1$ that includes the initial path of tree ${T^1}_M$ includes the same set of maximal and non-maximal characters as the path $\pi$ of tree $T$ that includes the initial path of tree $T_M$. Now, the species that are below the path $\pi_1$ in tree $T$ can also be represented in tree $T_1$ along the branches that are identical in the two trees $T_1$ and $T$, since such species derive from the negation of characters in $\pi$ or from adding new characters. In particular, notice that the characters that are negated along one branch of tree $T_1$ are the same as those negated along the same branch of tree $T$. It follows that all species of the paths different from the inverted one are also represented in $T_1$. \begin{comment} Indeed, we obtain $T_1$ from $T$ by inverting the species of the path that includes the inverted path in ${T^1}_M$.
We now show that, given in tree $T$ a leaf node $s_i$, where $s_i$ includes the source $s^i$ of diagram $\mathcal{P}$ consisting of the set $C_M \setminus \{c_i\}$, we can build a tree $T_i$ that starts with the species $s_i$, since it is obtained by inverting the path of $T$ from the root to leaf $s_i$, and such a tree solves \ensuremath{G_{RB}}\xspace. This fact clearly shows that there exists a tree that starts with the characters of $s^i$. Now, the tree $T_i$ represents all the species of the paths of $T$ below the state including all characters of $C_M$, since the tree $T_i$ has such a state including all characters of $C_M$. Thus we only need to show that tree $T_i$ is able to represent the species $s_1$. Indeed, species $s_1$ does not require the negation of any character and thus can be represented as a leaf of tree $T_i$ simply by the path negating character $c_0$. \end{comment} Case 2: assume that $s$ is a species of $\ensuremath{G_{RB}}\xspace$. Then given tree $T$ we must consider two cases. Case 2.1: Assume that $s$ occurs as the first species of tree $T$. Then the Theorem holds. Case 2.2: Assume that $s$ does not occur as the first species of tree $T$. Now, let $T_M=T|C_M$ be the tree induced by the maximal characters and let ${T^1}_M$ be the tree that is obtained from $T_M$ by inverting the path $\pi$ from the root $r$ to a node $v$ such that $\pi$ ends in species $s$, that is, $v = s$ (observe that such a path exists). In the following we show that the tree $T_1$ obtained from tree $T$ by inverting the path that includes $\pi$ is a tree solving $\ensuremath{G_{RB}}\xspace$. First observe that inverting the path is possible since there are no active characters in \ensuremath{G_{RB}}\xspace: otherwise, it is immediate to show that $s$ could not be a safe source of $\mathcal{P}$, as it would have active characters in \ensuremath{G_{RB}}\xspace, which is not possible.
Now, we need to show that $s$ is a species of tree $T_1$ and is the first species of tree $T_1$. Since $s$ is a species of tree $T$ and $s$ is the leaf of tree ${T}_M$, it follows that $s$ must be a leaf of tree $T$. It is immediate that tree $T_1$ is the tree that satisfies the Theorem, thus proving what is required. In fact, it is enough to observe that tree $T_1$ contains all the species of tree $T$ and thus by construction it solves $\ensuremath{G_{RB}}\xspace$, as $T$ solves \ensuremath{G_{RB}}\xspace. \end{proof} \subsection{Step 1 of the algorithm} As a consequence of Lemma~\ref{lemma:initial-source} we are able to show that there exists a tree $T$ for \ensuremath{G_{RB}}\xspace such that the inactive characters of the first state below the root consist of the characters of a safe source of the diagram $\mathcal{P}$ for $\ensuremath{G_{RB}}\xspace|C_M$. This result is a main consequence of the fact that a tree $T$ starts with the characters of a tree solving the maximal reducible graph $\ensuremath{G_{RB}}\xspace|C_M$, and such a set of characters corresponds to that of a safe chain $\mathcal{C}$ of diagram $\mathcal{P}$ (see Lemma~\ref{lemma:initial-chain}); clearly, since these characters are obtained by a tree traversal of tree $T$, their realization does not induce red $\Sigma$-graphs in \ensuremath{G_{RB}}\xspace, i.e., the source of the chain $\mathcal{C}$ is safe in \ensuremath{G_{RB}}\xspace. A stronger result, stated in Theorem~\ref{theorem:any-initial-source}, can be proved: for any safe source $s$ of \ensuremath{G_{RB}}\xspace, there exists a tree $T$ solving \ensuremath{G_{RB}}\xspace such that the set of characters of $s$ gives the inactive characters of the first state below the root $r$ of tree $T$. Thus we show that we are able to compute in polynomial time the first state of a tree $T$ solving a graph \ensuremath{G_{RB}}\xspace.
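For readers who prefer an operational view, the Step 1 computation just described (build the diagram over the maximal characters, enumerate its sources, pick a safe one) can be sketched as follows. This is only an illustrative sketch, not the paper's implementation: the input is assumed to be a species-by-character 0/1 matrix, and the red $\Sigma$-graph safety test is replaced by a much simpler pairwise containment/overlap conflict check; all function names are ours.

```python
# Illustrative sketch of Step 1 (choosing a safe source) -- NOT the
# paper's algorithm.  A graph is represented as a species-by-character
# 0/1 matrix; the red Sigma-graph test is replaced by a toy conflict check.

def maximal_characters(matrix):
    """Characters whose species set is not strictly contained in another character's."""
    cols = {c: {s for s, row in enumerate(matrix) if row[c]}
            for c in range(len(matrix[0]))}
    return [c for c in cols
            if not any(cols[c] < cols[d] for d in cols if d != c)]

def sources(matrix, chars):
    """Species minimal w.r.t. inclusion of maximal characters:
    the sources of the Hasse diagram."""
    sets = [frozenset(c for c in chars if row[c]) for row in matrix]
    return [i for i, s in enumerate(sets)
            if s and not any(t < s for t in sets if t != s)]

def is_safe(matrix, chars, species):
    """Toy stand-in for the red Sigma-graph test: realizing the source must not
    leave a species that overlaps it without containment in either direction."""
    realized = {c for c in chars if matrix[species][c]}
    for row in matrix:
        have = {c for c in chars if row[c]}
        if have & realized and not (have <= realized or realized <= have):
            return False
    return True

def first_state(matrix):
    """Return (index of a safe source, its maximal characters), or None."""
    chars = maximal_characters(matrix)
    for s in sources(matrix, chars):
        if is_safe(matrix, chars, s):
            return s, sorted(c for c in chars if matrix[s][c])
    return None
```

For instance, on the nested matrix $[[1,0,0],[1,1,0],[1,1,1]]$ only the first character is maximal and every species is a (safe) source, while a conflict triangle such as $[[1,1,0],[0,1,1],[1,0,1]]$ has no safe source under this toy test.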
Observe that the result is a consequence of the previous characterization of trees solving a maximal reducible graph: there are at most two safe chains of diagram $\mathcal{P}$ that have distinct sources and are the initial chains of two trees solving $\ensuremath{G_{RB}}\xspace|C_M$, and such trees are two line-trees $T_1$ and $T_2$, one tree being the inverted path of the other tree. \begin{theorem} \label{theorem:any-initial-source} Let \ensuremath{G_{RB}}\xspace be a reducible graph, let $\mathcal{P}$ be the diagram for a maximal reducible graph $\ensuremath{G_{RB}}\xspace|C_M$ that is not degenerate, and let $s$ be a safe source of \ensuremath{G_{RB}}\xspace. Then there exists a solution $T$ of \ensuremath{G_{RB}}\xspace such that the child $x$ of the root $r$ of $T$ is labeled by the characters of $s$ that are inactive in $\ensuremath{G_{RB}}\xspace$. The vertex $x$ is called the \emph{initial state} of the tree $T$. \end{theorem} \begin{proof} By Lemma~\ref{lemma:initial-source} there exists a tree $T$ solving \ensuremath{G_{RB}}\xspace whose root has a child node $x$ such that $x$ is a safe source of diagram $\mathcal{P}$. Let $T_M$ be the tree $T|C_M$ induced by the maximal characters of $T$. Since $\mathcal{P}$ is not degenerate, by Lemma~\ref{lemma:inverted-tree+safe} there are at most two trees that solve the graph $\ensuremath{G_{RB}}\xspace|C_M$ and have distinct sources. If the diagram has a unique safe source, the Theorem follows, since $s$ is the unique safe source. Thus assume that there is another tree $T'_M$ that solves the graph $\ensuremath{G_{RB}}\xspace|C_M$: by Lemma~\ref{lemma:inverted-tree+safe} such a tree is obtained from tree $T_M$, which is a line-tree, by reading it in inverted order. Assume now that the graph \ensuremath{G_{RB}}\xspace has no active characters.
Since tree $T$ is obtained by inserting characters in tree $T_M$ and adding subtrees to states of tree $T_M$, it is immediate to show that a tree $T'$ solving \ensuremath{G_{RB}}\xspace can also be obtained from tree $T'_M$ by reading the tree $T$ in inverted order from the leaf $v$ of tree $T_M$. This fact proves the Theorem in this case. Assume now that the graph \ensuremath{G_{RB}}\xspace has active characters. Now, the other tree ${T'}_M$ solving $\ensuremath{G_{RB}}\xspace|C_M$ starts with a source that is the leaf $v$ of tree $T_M$. Observe that the smallest species of tree $T$ solving \ensuremath{G_{RB}}\xspace and including $v$ does not have the active characters of $\ensuremath{G_{RB}}\xspace$. Indeed, by Lemma~\ref{lemma:tree-structure-max1} characters that occur negated along the same path must be introduced as positive ones in the same order as they are negated. Thus state $v$ cannot be the initial state of another tree $T_1$ solving \ensuremath{G_{RB}}\xspace. \end{proof} \subsection{Step 2 of the algorithm} We can now state the main result used in the recursive step of our Algorithm~\ref{alg:Reduction} (\textbf{Reduce}). \begin{theorem} \label{main-characterization} Let $\ensuremath{G_{RB}}\xspace$ be a reducible graph and let $\mathcal{P}$ be the diagram for $\ensuremath{G_{RB}}\xspace|C_M$. Let $s$ be a safe source of diagram $\mathcal{P}$. Then the graph $\ensuremath{G_{RB}}\xspace'$ obtained after the realization of $s$ is reducible. \end{theorem} \begin{proof} By Theorem~\ref{theorem:any-initial-source}, the source $s$ is the initial state of a tree $T$ solving graph $\ensuremath{G_{RB}}\xspace$. Thus the realization of the sequence of characters of $s$ is the initial sequence of a successful reduction for graph \ensuremath{G_{RB}}\xspace. Indeed, the tree traversal of tree $T$ produces a successful reduction in virtue of Proposition~\ref{prop:traversal}.
Consequently the realization of $s$ on graph $\ensuremath{G_{RB}}\xspace$ produces a reducible graph. \end{proof} The following result is a natural consequence of the fact that the number of chains in a degenerate diagram for graph \ensuremath{G_{RB}}\xspace is at most the number of characters of \ensuremath{G_{RB}}\xspace. \begin{lemma} \label{lemma:chain-polynomial} The total number of distinct chains in a diagram $\mathcal{P}$ for a maximal reducible graph $\ensuremath{G_{RB}}\xspace$ is polynomial in the number of characters of the diagram. \end{lemma} \begin{proof} Let $T$ be a tree solving $\ensuremath{G_{RB}}\xspace$. Given a chain $\mathcal C$, an internal node $x$ of the chain has outdegree greater than $1$ when $x$ is adjacent to a species obtained by the negation of some characters of the chain below the node $x$. Now, the negation of such characters occurs only once in the tree $T$, producing a number of distinct chains that is linear in the number of characters. This observation is enough to prove the Lemma. \end{proof} \begin{lemma} \label{number-initial-polynomial} Let $\ensuremath{G_{RB}}\xspace$ be a maximal reducible graph and let $\mathcal{P}$ be the diagram for $\ensuremath{G_{RB}}\xspace$. Then the number of safe chains in $\ensuremath{G_{RB}}\xspace$ is polynomial in the number of characters and a safe chain is computed in polynomial time. \end{lemma} \begin{proof} By Lemma~\ref{lemma:chain-polynomial} the number of distinct chains in diagram $\mathcal{P}$ that we need to test for safety is polynomial in the number of characters. A chain $\mathcal C$ is safe for a red-black graph if the realization of the sequence of characters labeling the edges of the chain does not induce a red $\Sigma$-graph. It is easy to verify that such a test can be done in polynomial time. \end{proof} The following is a direct consequence of Lemma~\ref{number-initial-polynomial}.
\begin{lemma} \label{number-initial-polynomial2} Let \ensuremath{G_{RB}}\xspace be a reducible graph. The total number of distinct safe sources is polynomial in the number of characters of the diagram and a safe source is computed in polynomial time. \end{lemma} Algorithm~\ref{alg:Reduction} describes the procedure \textbf{Reduce}$(\ensuremath{G_{RB}}\xspace)$ that, given a red-black graph (with active and non-active characters), computes an extended c-reduction that is a successful reduction of $\ensuremath{G_{RB}}\xspace$ if $\ensuremath{G_{RB}}\xspace$ can be solved by a tree $T$. \begin{algorithm}[htb!] \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{A red-black graph $\ensuremath{G_{RB}}\xspace$} \Output{A c-reduction of the graph $\ensuremath{G_{RB}}\xspace$, if it exists. Otherwise, it aborts.} Remove all singletons from \ensuremath{G_{RB}}\xspace\label{alg:reduce:singletons}\; \If{\ensuremath{G_{RB}}\xspace is empty\label{alg:reduce:empty}}{ \Return the empty sequence\; } \If{$G_{RB}$ has a free character $c$\label{alg:reduce:free}}{ $\ensuremath{G_{RB}}\xspace \gets$ Realize$(\ensuremath{G_{RB}}\xspace, c^{-})$\; \Return$(<c^{-}, Reduce(\ensuremath{G_{RB}}\xspace)>)$\; } \If{$G_{RB}$ has a universal character $c$\label{alg:reduce:universal}}{ $\ensuremath{G_{RB}}\xspace \gets$ Realize$(\ensuremath{G_{RB}}\xspace, c^{+})$\; \Return$(<c^{+}, Reduce(\ensuremath{G_{RB}}\xspace)>)$\; } \If{\ensuremath{G_{RB}}\xspace is disconnected\label{alg:reduce:disconnected}}{ \Return the concatenation of $Reduce(\ensuremath{G_{RB}}\xspace^{i})$ for each connected component $\ensuremath{G_{RB}}\xspace^{i}$ of \ensuremath{G_{RB}}\xspace\label{alg:reduce:disconnected-end}\; } $C_{M} \gets$ the maximal characters of \ensuremath{G_{RB}}\xspace\label{alg:reduce:begin-safe-source}\; $\mathcal{P} \gets$ the diagram for $\ensuremath{G_{RB}}\xspace|C_{M}$\; \If{no safe source of the diagram $\mathcal{P}$ exists}{ Abort\; } $s \gets$ a safe source of the diagram
$\mathcal{P}$\; $S_c \gets$ the sequence of positive characters of $s$ that are inactive in \ensuremath{G_{RB}}\xspace\; $\ensuremath{G_{RB}}\xspace \gets$ Realize$(\ensuremath{G_{RB}}\xspace, S_c)$\; \Return $<S_c, Reduce(\ensuremath{G_{RB}}\xspace)>$\label{alg:reduce:end-safe-source}\; \caption{Recursive Reduce. Realize is a function that returns the new red-black graph after the realization of a sequence of characters.} \label{alg:Reduction} \end{algorithm} Notice that Algorithm~\ref{alg:Reduction}, when applied to a graph \ensuremath{G_{RB}}\xspace, computes a successful reduction $R$ of \ensuremath{G_{RB}}\xspace (if it exists). A successful reduction can then be transformed into a persistent phylogeny for \ensuremath{G_{RB}}\xspace --- see Algorithm \textbf{Reduction2Phylogeny}~\cite{Bonizzoni2016}. \begin{theorem} Let \ensuremath{G_{RB}}\xspace be a reducible red-black graph. Then Algorithm~\ref{alg:Reduction} computes a tree $T$ solving \ensuremath{G_{RB}}\xspace in polynomial time. \end{theorem} \begin{proof} The correctness of Algorithm~\ref{alg:Reduction} can be proved by induction on the length $n$ of the reduction found. If $n=0$, then the red-black graph \ensuremath{G_{RB}}\xspace must be edgeless, hence line~\ref{alg:reduce:singletons} removes all vertices of \ensuremath{G_{RB}}\xspace and the condition at line~\ref{alg:reduce:empty} holds. The empty reduction that is computed is trivially correct. If $n = 1$, then the reduction solving \ensuremath{G_{RB}}\xspace is either $c^{+}$ or $c^{-}$. In both cases, the character $c$ is the only character of \ensuremath{G_{RB}}\xspace that is not a singleton. It is immediate to notice that in the first case $c$ is a universal character and in the second case $c$ is a free character, and the reduction that is computed is correct. Let us now consider the case $n \ge 2$.
If \ensuremath{G_{RB}}\xspace is disconnected, by Observation~\ref{observation2}, the reduction solving \ensuremath{G_{RB}}\xspace is the concatenation of the reductions solving each connected component of \ensuremath{G_{RB}}\xspace. The correctness of lines~\ref{alg:reduce:disconnected} and~\ref{alg:reduce:disconnected-end} is immediate. Consider now the case when \ensuremath{G_{RB}}\xspace is connected. If \ensuremath{G_{RB}}\xspace has a universal or a free character, then the algorithm computes the correct reduction by inductive hypothesis. If \ensuremath{G_{RB}}\xspace has no universal or free characters, then lines~\ref{alg:reduce:begin-safe-source}--\ref{alg:reduce:end-safe-source} are reached. By Theorem~\ref{lemma:connected-components-Grb=children} and Theorem~\ref{theorem:any-initial-source}, if the diagram $\mathcal{P}$ is degenerate, there exists a solution $T$ of \ensuremath{G_{RB}}\xspace such that the topmost edge is labeled by the set $X$ of the characters of $C_{M}$ (which are all inactive in \ensuremath{G_{RB}}\xspace) that are possessed by a safe source $s$ of \ensuremath{G_{RB}}\xspace. Notice that the reduction associated with $T$ begins with the characters $X$, hence lines~\ref{alg:reduce:begin-safe-source}--\ref{alg:reduce:end-safe-source} of the algorithm correctly compute the reduction. A direct consequence of Lemma~\ref{cons:inverted-tree+safe} is that, if the diagram $\mathcal{P}$ is not degenerate, either \ensuremath{G_{RB}}\xspace has only one safe source, or \ensuremath{G_{RB}}\xspace has two safe sources $s_{1}$ and $s_{2}$. Otherwise, if the diagram is degenerate, by the definition of safe source given in Definition~\ref{definition:safe-source-degenerate} it is immediate to compute the correct safe source, for which Theorem~\ref{theorem:any-source-degenerate} holds.
In the latter case, there exist two solutions $T_{1}$ and $T_{2}$ of \ensuremath{G_{RB}}\xspace such that $s_{1}$ is the topmost species of $T_{1}$ and $s_{2}$ is the topmost species of $T_{2}$, hence choosing either of $s_{1}$ or $s_{2}$ is correct. Since each invocation of Algorithm~\ref{alg:Reduction} computes at least one signed character of the reduction and the reduction contains at most $2m$ signed characters, the overall time complexity is polynomial, provided that computing a safe source has polynomial time complexity, which is a consequence of Lemma~\ref{number-initial-polynomial}. \end{proof} Notice that the same argument shows that, when applied to a red-black graph \ensuremath{G_{RB}}\xspace that has no solution, Algorithm~\ref{alg:Reduction} requires polynomial time and either aborts or computes an incorrect tree --- checking if a tree is actually a solution of \ensuremath{G_{RB}}\xspace is immediate. \begin{comment} \begin{figure} \caption{Two line-trees for the same diagram $\mathcal{P}$} \label{fig:line-tree1} \end{figure} \end{comment} \begin{comment} \begin{figure} \caption{A simple-branch tree $T$ and the diagram $\mathcal{P}$} \label{fig:simple-branch0} \end{figure} \end{comment} \section{Conclusions} The Persistent Phylogeny model has appeared many times in the literature under different formulations~\cite{goldberg_minimizing_1996}, but its computational complexity has been open for 20 years. In this paper we have answered this question by providing a polynomial-time algorithm that determines if a binary matrix has a persistent phylogeny and constructs such a persistent phylogeny if it exists. A natural optimization criterion is to compute a tree with the smallest number of persistent characters --- this is equivalent to computing the tree with the fewest edges. The computational complexity of this optimization problem is an open problem.
We believe that the graph theoretic approach developed in the paper could shed some light on the solution of this problem. \begin{figure} \caption{Red-black graph associated with the matrix in Figure~\ref{fig:instance-extended}} \label{fig:extended-reduction} \end{figure} \begin{figure} \caption{The set of maximal characters of the red-black graph in Figure~\ref{fig:grb-instance2}} \label{fig:maximal-reducible-grb} \end{figure} \begin{figure} \caption{The Hasse diagram of the graph of Figure~\ref{fig:maximal-reducible-grb}} \label{fig:safe-chain} \end{figure} \begin{figure} \caption{Degenerate tree, not in normal form} \caption{Degenerate tree, in normal form} \caption{Degenerate tree, in normal form} \caption{Some possible solutions for a \ensuremath{G_{RB}}\xspace} \label{fig:degenerate-solutions} \end{figure} \end{document}
\begin{document} \begin{abstract} This article concerns a class of elliptic equations on Carnot groups depending on one real positive parameter and involving a subcritical nonlinearity (for the critical case we refer to G. Molica Bisci and D. Repov\v{s}, {\sl Yamabe-type equations on Carnot groups}, Potential Anal. 46:2 (2017), 369-383; \href{https://arxiv.org/abs/1705.10100}{arXiv:1705.10100 [math.AP]}). As a special case of our results we prove the existence of at least one nontrivial solution for a subelliptic equation defined on a smooth and bounded domain $D$ of the Heisenberg group $\mathbb{H}^n=\mathbb{C}^n\times {\rm I\!R}$. The main approach is based on variational methods. \end{abstract} \maketitle \section{Introduction} Analysis on Carnot-Carath\'{e}odory (briefly CC) spaces is a field currently undergoing great development. These abstract structures are a special class of metric spaces in which the interactions between analytical and geometric tools have been carried out with prosperous results.\par In this setting, a fundamental role is played by Carnot groups which, as is well known, are finite dimensional, simply connected Lie groups $\mathbb{G}$ whose Lie algebra $\mathfrak{g}$ of left invariant vector fields is stratified (see Section \ref{section2}). Roughly speaking, Carnot groups can be seen as local models of CC spaces. Indeed, they are the natural {tangent} spaces to CC spaces, exactly as Euclidean spaces are tangent to manifolds.\par It is well known that great attention has been devoted by many authors to the study of subelliptic equations on Carnot groups and, in particular, on the Heisenberg group $\mathbb{H}^n$.
See, among others, the papers \cite{BK, BFP, DM, GaLa,Lo1,PiVa}, as well as \cite{Mingio0, Mingio} and references therein.\par Motivated by this large interest, we study here the existence of weak solutions for the following problem $$ (P_{\lambda}^{f})\,\,\,\,\,\,\,\,\,\,\left\{ \begin{array}{ll} -\Delta_{\mathbb{G}} u=\displaystyle \lambda f(\xi,u) & \mbox{\rm in } D\\ u|_{\partial D}=0, & \end{array}\right. $$ where $D$ is a smooth bounded domain of the Carnot group $\mathbb{G}$, $\Delta_{\mathbb{G}}$ is the subelliptic Laplacian on $\mathbb{G}$, and $\lambda$ is a positive real parameter.\par Problem $(P_{\lambda}^{f})$ has a variational nature, hence its weak solutions can be found as critical points of a suitable functional $\mathcal{J}_{\lambda}$ defined on the Folland-Stein space $S^1_0(D)$, whose analytic construction is recalled in Section \ref{section2}.\par Thanks to this fact, the main approach is based on the direct methods of the calculus of variations (see \cite{MRS}) and on the geometric abstract framework on Carnot groups (see, among others, the classical reference \cite{BLU} and references therein).\par More precisely, under a suitable subcritical growth condition on the nonlinear term $f$, we are able to prove the existence of at least one (non-trivial) weak solution of problem $(P_{\lambda}^{f})$ provided that $\lambda$ belongs to a precise bounded interval of positive parameters.\par The main novelty of this framework is that, instead of the usual assumptions on functionals, it requires some hypotheses on the nonlinearity, which allow for a better understanding of the existence phenomena. This allows us to enlarge the set of applications of direct minimization, exploiting this abstract method without any asymptotic condition on the term $f$ at zero, as requested in \cite[Theorem 3.1]{GaLa}.\par A special case of our result, in the Heisenberg setting, reads as follows.
\begin{theorem}\label{FerraraMolicaBisci22Generale2} Let $D$ be a smooth and bounded domain of the {Heisenberg group} $\mathbb{H}^n$ and let $f:{\rm I\!R}\rightarrow{\rm I\!R}$ be a continuous function such that \begin{equation}\label{growth} \sup_{t\in{\rm I\!R}}\frac{|f(t)|}{1+|t|^{p}}\leq \kappa, \end{equation} where $p\in (1,\gamma 2^*_h-1)$, with $\gamma\in (2/2^*_h,1)$ and $2^*_h:=\displaystyle 2\left(\frac{n+1}{n}\right)$. Assume that \begin{equation}\label{growth2} 0<\kappa<\frac{(p-1)^{\frac{p-1}{p}}}{pc_{1,\gamma}^{\frac{p-1}{p}}c_{2,\gamma}^{\frac{p+1}{p}}}|D|^{\frac{1-\gamma}{p}+ \frac{(1-p)(\gamma 2^{*}_h-1)}{p\gamma 2^{*}_h} }, \end{equation} where $c_{1,\gamma}$ and $c_{2,\gamma}$ denote the embedding constants of the Folland-Stein space $\mathbb{H}^1_0(D)$ in $L^{{\gamma 2^{*}_h}}(D)$ and $L^{\frac{p+1}{\gamma}}(D)$, respectively.\par \indent Then the following subelliptic problem $$ (P_{\kappa})\,\,\,\,\,\,\,\,\,\,\left\{ \begin{array}{ll} -\Delta_{\mathbb{H}^n} u=\displaystyle f(u) & \mbox{\rm in } D\\ u|_{\partial D}=0, & \end{array}\right. $$ has a weak solution $u_{0,\kappa}\in \mathbb{H}^1_0(D)$ such that $$\|u_{0,\kappa}\|_{\mathbb{H}^1_0(D)}< \left(\kappa pc_{2,\gamma}^{p+1}|D|^{1-\gamma}\right)^{\frac{1}{1-p}}.$$ \end{theorem} For the sake of completeness we recall that, very recently, in \cite{MR} the existence of multiple solutions for parametric elliptic equations on Carnot groups has been proved by exploiting the celebrated Ambrosetti-Rabinowitz condition and a local minimum result due to Ricceri (see \cite{R2}). We emphasize that in the present paper we do not require such a technical assumption on the nonlinear term $f$.
Moreover, the results obtained here are completely different from the ones contained in \cite{MRe} (see also \href{https://arxiv.org/abs/1705.10100}{arXiv:1705.10100 [math.AP]}), where critical subelliptic problems on Carnot groups were studied.\par \indent The plan of the paper is as follows. Section \ref{section2} is devoted to our abstract framework and preliminaries. Next, in Section \ref{Section3}, Theorem \ref{MoReGeneral} and some preparatory results (see Lemmas \ref{lemmino1} and \ref{lemmino2}) are presented. In the last section, Theorem \ref{MoReGeneral} is proved.\par \section{Abstract Framework}\label{section2} In this section we briefly recall some basic facts on Carnot groups and the functional space~$S^1_0(D)$.\par \textbf{Dilatations}. Let $({\rm I\!R}^n,\circ)$ be a Lie group equipped with a family of group automorphisms, namely \textit{dilatations}, $\mathfrak{F}:=\{\delta_\eta\}_{\eta>0}$ such that, for every $\eta>0$, the map $$ \delta_\eta:\prod_{k=1}^{r}{\rm I\!R}^{n_k}\rightarrow\prod_{k=1}^{r}{\rm I\!R}^{n_k} $$ is given by $$ \delta_\eta(\xi^{(1)},...,\xi^{(r)}):=(\eta \xi^{(1)}, \eta^{2}\xi^{(2)},...,\eta^r\xi^{(r)}), $$ where $\xi^{(k)}\in {\rm I\!R}^{n_k}$ for every $k\in \{1,...,r\}$ and $\displaystyle \sum_{k=1}^{r}n_k=n$.\par \textbf{Homogeneous dimension}. The structure $\mathbb{G}:=({\rm I\!R}^n,\circ, \mathfrak{F})$ is called a \textit{homogeneous} group with \textit{homogeneous dimension} \begin{equation}\label{dim} \textrm{dim}_h{\mathbb{G}}:=\displaystyle \sum_{k=1}^{r}kn_k. \end{equation} \noindent From now on, we shall assume that $\textrm{dim}_h{\mathbb{G}}\geq 3$. We remark that, if $\textrm{dim}_h{\mathbb{G}}\leq 3$, then necessarily $\mathbb{G}=({\rm I\!R}^{\textrm{dim}_h{\mathbb{G}}},+)$.
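As a small numerical illustration of the two definitions above (a sketch only, not part of the paper's argument): for a hypothetical step-2 group with layer dimensions $n_1=2$, $n_2=1$, the dilatations form a one-parameter family, $\delta_\eta\circ\delta_\mu=\delta_{\eta\mu}$, and the homogeneous dimension is $Q=1\cdot n_1+2\cdot n_2=4$.

```python
import numpy as np

# Sketch (illustration only): dilations on a hypothetical step-2 group with
# layer dimensions n_1 = 2, n_2 = 1, so xi = (xi1, xi2, xi3) with the third
# coordinate in the second layer.  We check the composition law
# delta_eta(delta_mu(xi)) = delta_{eta*mu}(xi) and the value of the
# homogeneous dimension Q = 1*n_1 + 2*n_2.
def dil(eta, xi):
    return np.array([eta * xi[0], eta * xi[1], eta**2 * xi[2]])

xi = np.array([0.3, -1.2, 2.5])
eta, mu = 1.7, 0.4
assert np.allclose(dil(eta, dil(mu, xi)), dil(eta * mu, xi))

layers = [2, 1]                                   # n_1, n_2
Q = sum((k + 1) * n for k, n in enumerate(layers))
assert Q == 4
```

The same computation also shows why the Jacobian of $\delta_\eta$ is $\eta^Q$: the map is diagonal with entries $\eta$ on the first layer and $\eta^2$ on the second.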
Note that the number $\textrm{dim}_h{\mathbb{G}}$ is naturally associated to the family $\mathfrak{F}$ since, for every $\eta>0$, the Jacobian of the map $$ \xi\mapsto \delta_\eta(\xi),\quad \forall\,\xi\in {\rm I\!R}^n $$ equals $\eta^{\textrm{dim}_h{\mathbb{G}}}$.\par \textbf{Stratification}. Let $\mathfrak{g}$ be the Lie algebra of left invariant vector fields on $\mathbb{G}$ and assume that $\mathfrak{g}$ is \textit{stratified}, that is: $$ \displaystyle\mathfrak{g}=\bigoplus_{k=1}^{r}V_k, $$ where the integer $r$ is called the \textit{step} of $\mathbb{G}$, $V_k$ is a linear subspace of $\displaystyle\mathfrak{g}$, for every $k\in \{1,...,r\}$, and \begin{itemize} \item[] $\textrm{dim}V_k=n_k$, for every $k\in \{1,...,r\}$; \item[] $[V_1,V_k]=V_{k+1}$, for $1\leq k\leq r-1$, and $[V_1,V_r]=\{0\}$. \end{itemize} In this setting the symbol $[V_1,V_k]$ denotes the subspace of $\mathfrak{g}$ generated by the commutators $[X,Y]$, where $X\in V_1$ and $Y\in V_k$.\par \textbf{The notion of Carnot group and subelliptic Laplacian on $\mathbb{G}$}. A \textit{Carnot group} is a homogeneous group $\mathbb{G}$ such that the Lie algebra $\mathfrak{g}$ associated to $\mathbb{G}$ is stratified.\par Moreover, the \textit{subelliptic Laplacian} operator on $\mathbb{G}$ is the second-order differential operator given by $$ \Delta_{\mathbb{G}}:=\displaystyle\sum_{k=1}^{n_1}X_k^2, $$ where $\{X_1,...,X_{n_1}\}$ is a basis of $V_1$. We shall denote by $$\nabla_\mathbb{G}:=(X_1,...,X_{n_1})$$ the related \textit{horizontal gradient}.\par \indent \textbf{Critical Sobolev inequality}. A crucial role in the functional analysis on Carnot groups is played by the following Sobolev-type inequality \begin{equation}\label{folland} \int_{D}|u(\xi)|^{2^*}\,d\xi\leq C\int_{D}|\nabla_{\mathbb{G}} u(\xi)|^2\,d\xi,\,\quad\forall\, u\in C^{\infty}_0(D) \end{equation} due to Folland (see \cite{Fo}).
In the above expression $C$ is a positive constant (independent of $u$) and $$ 2^{*}:=\frac{2\textrm{dim}_h{\mathbb{G}}}{\textrm{dim}_h{\mathbb{G}}-2} $$ is the \textit{critical Sobolev exponent}. Inequality (\ref{folland}) ensures that if $D$ is a bounded open (smooth) subset of $\mathbb{G}$, then the function \begin{equation}\label{norm} u\mapsto \|u\|_{S^1_0(D)}:=\left(\int_{D}|\nabla_{\mathbb{G}} u(\xi)|^2\,d\xi\right)^{1/2} \end{equation} is a norm on $C^{\infty}_0({D})$.\par \textbf{Folland-Stein space}. We shall denote by $S^1_0(D)$ the \textit{Folland-Stein space} defined as the completion of $C^{\infty}_0({D})$ with respect to the norm $\|\cdot\|_{S^1_0(D)}$. The exponent $2^{*}$ is critical for $\Delta_{\mathbb{G}}$ since, as in the classical Laplacian setting, the embedding $S^1_0(D)\hookrightarrow L^q(D)$ is compact when $1\leq q<2^{*}$, while it is only continuous if $q=2^{*}$; see Folland and Stein \cite{FoSte} and the survey paper \cite{Lanco} for related facts.\par \indent \textbf{The Heisenberg group}. The simplest example of a Carnot group is provided by the \textit{Heisenberg group} $\mathbb{H}^n:=({\rm I\!R}^{2n+1},\circ)$, where, for every $$ p:=(p_1,...,p_{2n},p_{2n+1})\,\,\,\mbox{and}\,\,\, q:=(q_1,...,q_{2n},q_{2n+1})\in \mathbb{H}^n, $$ the usual group operation $\circ:\mathbb{H}^n\times \mathbb{H}^n\rightarrow \mathbb{H}^n$ is given by $$ p\circ q:=\left(p_1+q_1,...,p_{2n}+q_{2n},p_{2n+1}+q_{2n+1}+\frac{1}{2}\displaystyle\sum_{k=1}^{n}(p_kq_{k+n}-p_{k+n}q_k)\right) $$ and the family of dilatations has the following form $$ \delta_\eta(p):=(\eta p_1,...,\eta p_{2n},\eta^2p_{2n+1}),\quad \forall\, \eta>0. $$ Thus $\mathbb{H}^n$ is a $(2n+1)$-dimensional group and by (\ref{dim}) it follows that $$ \textrm{dim}_h{\mathbb{H}^n}=2n+2, $$ and $$ 2_h^{*}:=\displaystyle 2\left(\frac{n+1}{n}\right).
$$ \indent The Lie algebra of left invariant vector fields on $\mathbb{H}^{n}$ is denoted by $\mathfrak{h}$ and its standard basis is given by \par $$ X_k:=\partial_k-\frac{p_{n+k}}{2}\partial_{2n+1},\quad k\in \{1,...,n\} $$ $$ Y_k:=\partial_{n+k}+\frac{p_{k}}{2}\partial_{2n+1},\quad k\in \{1,...,n\} $$ $$ T:=\partial_{2n+1}. $$ \noindent In such a case, the only non-trivial commutator relations are $$ [X_k,Y_k]=T,\quad \forall\, k\in \{1,...,n\}. $$ Finally, the stratification of $\mathfrak{h}$ is given by $$ \mathfrak{h}=\textrm{span}\{X_1,...,X_n,Y_1,...,Y_n\}\oplus \textrm{span}\{T\}. $$ We denote by $\mathbb{H}^1_0(D)$ the Folland-Stein space in the Heisenberg group setting, as well as by $\Delta_{\mathbb{H}^n}$ the {Kohn-Laplacian} operator on $\mathbb{H}^n$.\par \indent We cite the monograph \cite{BLU} for a nice introduction to Carnot groups and \cite{MRS} for related topics on the variational methods used in this paper. \section{The Main Result and some preliminary Lemmas}\label{Section3} The aim of this section is to prove that, under natural assumptions on the nonlinear term $f$, weak solutions to problem $(P_{\lambda}^{f})$ do exist. More precisely, the main result is an existence theorem for equations driven by the subelliptic Laplacian, as stated below. \begin{theorem}\label{MoReGeneral} Let $D$ be a smooth and bounded domain of the {Carnot group} $\mathbb{G}$ of homogeneous dimension ${\rm dim}_h{\mathbb{G}}\geq 3$ and let $f:D\times{\rm I\!R}\rightarrow{\rm I\!R}$ be a Carath\'{e}odory function such that \begin{equation}\label{MoReGeneralg} |f(\xi, t)|\leq \alpha(\xi)+\beta(\xi)|t|^{p}\,\, \mbox{almost everywhere in}\,\, D\times{\rm I\!R}, \end{equation} where $$\alpha\in L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)\qquad\mbox{and}\qquad \beta\in L^{\frac{1}{1-\gamma}}(D)$$ with $\gamma\in (2/2^*,1)$, $p\in (1,\gamma 2^*-1)$, and $2^*:=\displaystyle\frac{2{\rm dim}_h{\mathbb{G}}}{{\rm dim}_h{\mathbb{G}}-2}$.
Furthermore, let \begin{equation}\label{MoReGeneral3} 0<\lambda<\frac{(p-1)^{\frac{p-1}{p}}}{p\kappa_{1,\gamma}^{\frac{p-1}{p}}\kappa_{2,\gamma}^{\frac{p+1}{p}}\|\alpha\|^{\frac{p-1}{p}}_{L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)}\|\beta\|_{L^{\frac{1}{1-\gamma}}(D)}^{\frac{1}{p}}}, \end{equation} where $\kappa_{1,\gamma}$ and $\kappa_{2,\gamma}$ denote the embedding constants of the Folland-Stein space $S^1_0(D)$ in $L^{{\gamma 2^{*}}}(D)$ and $L^{\frac{p+1}{\gamma}}(D)$, respectively. Then the following subelliptic parametric problem $$ (P_{\lambda}^f)\,\,\,\,\,\,\,\,\,\,\left\{ \begin{array}{ll} -\Delta_{\mathbb{G}} u=\displaystyle \lambda f(\xi,u) & \mbox{\rm in } D\\ u|_{\partial D}=0, & \end{array}\right. $$ has a weak solution $u_{0,\lambda}\in S^1_0(D)$ with $$\|u_{0,\lambda}\|_{S^1_0(D)}<\left(\lambda p\kappa_{2,\gamma}^{p+1}\|\beta\|_{L^{\frac{1}{1-\gamma}}(D)}\right)^{\frac{1}{1-p}}.$$ \end{theorem} We recall that a \textit{weak solution} of problem $(P_{\lambda}^{f})$ is a function $u\in S^1_0(D)$ such that $$ \displaystyle\int_{D} \langle\nabla_\mathbb{G}u(\xi),\nabla_\mathbb{G}\varphi(\xi)\rangle\,d\xi = \lambda\displaystyle\int_D f(\xi,u(\xi))\varphi(\xi)\,d\xi,\quad\forall\,\varphi \in S^1_0(D). $$ Let us consider the functional $\mathcal{J}_{\lambda}:S^1_0(D)\to {\rm I\!R}$ defined by \begin{equation}\label{Funzionale} \mathcal{J}_{\lambda}(u):=\frac{1}{2}\|u\|_{S^1_0(D)}^2-\lambda\displaystyle\int_D F(\xi,u(\xi))d\xi,\quad \forall\, u\in S^1_0(D), \end{equation} where $\lambda\in {\rm I\!R}$ and, as usual, we set $\displaystyle F(\xi,t):=\int_0^{t}f(\xi,\tau)d\tau$.\par Note that, under our growth condition on $f$, the functional $\mathcal{J}_{\lambda}\in C^1(S^1_0(D))$ and its derivative at $u\in S^1_0(D)$ is given by $$ \langle \mathcal{J}'_{\lambda}(u), \varphi\rangle = \displaystyle\int_{D} \langle\nabla_\mathbb{G}u(\xi),\nabla_\mathbb{G}\varphi(\xi)\rangle\,d\xi-\lambda\displaystyle\int_D f(\xi,u(\xi))\varphi(\xi)d\xi, $$ for every $\varphi \in S^1_0(D)$.\par Thus the weak solutions of problem $(P_{\lambda}^{f})$ are exactly the critical points of the energy functional $\mathcal{J}_{\lambda}$. Fix $\lambda>0$ and denote $$ \Phi(u):=\|u\|_{S^1_0(D)} \quad\mbox{and}\quad\Psi_\lambda(u):=\lambda\int_{D}F(\xi,u(\xi))d\xi, $$ for every $u\in S^1_0(D)$.\par Note that, thanks to condition \eqref{MoReGeneralg}, the operator $\Psi_\lambda$ is well defined and sequentially weakly (upper) continuous, so the functional $\mathcal{J}_{\lambda}$ is sequentially weakly lower semicontinuous on $S^1_0(D)$. With the above notations we can prove the next two lemmas, which will be crucial in the sequel. \begin{lemma}\label{lemmino1} Let $\lambda>0$ and suppose that \begin{equation}\label{Ga1} \limsup_{\varepsilon\rightarrow 0^+}\frac{\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_\lambda(v)-\sup_{v\in \Phi^{-1}([0,\varrho_0-\varepsilon])}\Psi_\lambda(v)}{\varepsilon}<\varrho_0, \end{equation} for some $\varrho_0>0$.
Then \begin{equation}\label{Ga2} \inf_{\sigma<\varrho_0}\frac{\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_{\lambda}(v)-\sup_{v\in \Phi^{-1}([0,\sigma])}\Psi_\lambda(v)}{\varrho_0^2-\sigma^2}<\frac{1}{2}. \end{equation} \end{lemma} \begin{proof} First, by condition \eqref{Ga1} one has \begin{equation}\label{Ga1GG} \limsup_{\varepsilon\rightarrow 0^+}\frac{\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_\lambda(v)-\sup_{v\in \Phi^{-1}([0,\varrho_0-\varepsilon])}\Psi_\lambda(v)}{\varrho^2_0-(\varrho_0-\varepsilon)^2}<\frac{1}{2}. \end{equation} Indeed, if $\varepsilon \in (0,\varrho_0)$, one has \par $\displaystyle \frac{\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_\lambda(v)-\sup_{v\in \Phi^{-1}([0,\varrho_0-\varepsilon])}\Psi_\lambda(v)}{\varrho^2_0-(\varrho_0-\varepsilon)^2} $ $$ =\frac{\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_\lambda(v)-\sup_{v\in \Phi^{-1}([0,\varrho_0-\varepsilon])}\Psi_\lambda(v)}{\varepsilon}\times $$ $$ \times\frac{-\varepsilon /\varrho_0}{\varrho_0\left[\left(1-\displaystyle\frac{\varepsilon}{\varrho_0}\right)^2-1\right]}, $$ and $$ \lim_{\varepsilon\rightarrow 0^+}\frac{-\varepsilon /\varrho_0}{\varrho_0\left[\left(1-\displaystyle\frac{\varepsilon}{\varrho_0}\right)^2-1\right]}=\frac{1}{2\varrho_0}. $$ Now, by \eqref{Ga1GG} there exists $\bar\varepsilon>0$ such that $$ \frac{\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_\lambda(v)-\sup_{v\in \Phi^{-1}([0,\varrho_0-\varepsilon])}\Psi_\lambda(v)}{\varrho^2_0-(\varrho_0-\varepsilon)^2}<\frac{1}{2}, $$ for every $\varepsilon\in ]0,\bar\varepsilon[$.
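The elementary limit displayed above can be verified symbolically; the following is only a sanity-check sketch using sympy, not part of the argument.

```python
import sympy as sp

# Sketch: symbolic check of the limit claimed above,
#   lim_{eps -> 0+} (-eps/rho0) / (rho0*((1 - eps/rho0)**2 - 1)) = 1/(2*rho0).
eps = sp.symbols('varepsilon', positive=True)
rho0 = sp.symbols('varrho0', positive=True)
expr = (-eps / rho0) / (rho0 * ((1 - eps / rho0)**2 - 1))
lim = sp.limit(expr, eps, 0, '+')
assert sp.simplify(lim - 1 / (2 * rho0)) == 0
```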
Setting $\sigma_0:=\varrho_0-\varepsilon_0$ (with $\varepsilon_0\in ]0,\bar\varepsilon[$), it follows that $$ \frac{\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_{\lambda}(v)-\sup_{v\in \Phi^{-1}([0,\sigma_0])}\Psi_\lambda(v)}{\varrho_0^2-\sigma^2_0}<\frac{1}{2}, $$ and thus inequality \eqref{Ga2} is verified. \end{proof} \begin{lemma}\label{lemmino2} Let $\lambda>0$ and suppose that condition \eqref{Ga2} holds. Then \begin{equation}\label{VGa1} \inf_{u\in\Phi^{-1}([0,\varrho_0))} \frac{\displaystyle\sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_\lambda(v)-\Psi_\lambda(u)}{\varrho_0^2- \|u\|_{S^1_0(D)}^2}<\frac{1}{2}. \end{equation} \end{lemma} \begin{proof} Assumption \eqref{Ga2} yields \begin{equation}\label{Ga123} \displaystyle \sup_{v\in\Phi^{-1}([0,\sigma_0])}\Psi_{\lambda}(v)>\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_{\lambda}(v)-\frac{1}{2}(\varrho_0^2-\sigma^2_0), \end{equation} for some $0<\sigma_0<\varrho_0$. Thanks to the weak regularity of the functional $\Psi_\lambda$, since \begin{equation*}\label{Ga1234} \displaystyle \sup_{v\in\Phi^{-1}([0,\sigma_0])}\Psi_{\lambda}(v)=\displaystyle \sup_{\|v\|_{S^1_0(D)}=\sigma_0}\Psi_{\lambda}(v), \end{equation*} by \eqref{Ga123} there exists $u_0\in S^1_0(D)$ with $\|u_0\|_{S^1_0(D)}=\sigma_0$ such that \begin{equation}\label{Ga1235} \displaystyle \Psi_{\lambda}(u_0)>\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_{\lambda}(v)-\frac{1}{2}(\varrho_0^2-\sigma^2_0), \end{equation} that is, \begin{equation}\label{VGa1Finale} \frac{\displaystyle\sup_{v\in\Phi^{-1}([0,\varrho_0])}\Psi_\lambda(v)-\Psi_\lambda(u_0)}{\varrho_0^2- \|u_0\|_{S^1_0(D)}^2}<\frac{1}{2}, \end{equation} with $\|u_0\|_{S^1_0(D)}=\sigma_0$. The proof is now complete.
\end{proof} \section{Proof of Theorem \ref{MoReGeneral}} Before proving our result, we note that problem~$(P_{\lambda}^{f})$ has a variational structure. Indeed, it is the Euler-Lagrange equation of the functional $\mathcal{J}_{\lambda}$.\par Hence, fix \begin{equation}\label{range} \lambda\in \left(0,\frac{(p-1)^{\frac{p-1}{p}}}{p\kappa_{1,\gamma}^{\frac{p-1}{p}}\kappa_{2,\gamma}^{\frac{p+1}{p}}\|\alpha\|^{\frac{p-1}{p}}_{L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)}\|\beta\|_{L^{\frac{1}{1-\gamma}}(D)}^{\frac{1}{p}}}\right), \end{equation} and let us consider $0<\varepsilon<\varrho$. Setting $$ \Lambda_\lambda(\varepsilon,\varrho):=\displaystyle\frac{\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho])}\Psi_\lambda(v)-\sup_{v\in \Phi^{-1}([0,\varrho-\varepsilon])}\Psi_\lambda(v)}{\varepsilon}, $$ one has $$ \Lambda_\lambda(\varepsilon,\varrho)\leq\frac{1}{\varepsilon}\left|\displaystyle \sup_{v\in\Phi^{-1}([0,\varrho])}\Psi_\lambda(v)-\sup_{v\in \Phi^{-1}([0,\varrho-\varepsilon])}\Psi_\lambda(v)\right|. $$ Moreover, it follows that $$ \Lambda_\lambda(\varepsilon,\varrho)\leq\sup_{v\in \Phi^{-1}([0,1])}\int_D\left|\int_{(\varrho-\varepsilon)v(\xi)}^{\varrho v(\xi)}\lambda\frac{|f(\xi,t)|}{\varepsilon}dt\right|d\xi. $$ Now the growth condition \eqref{MoReGeneralg} yields $$ \sup_{v\in \Phi^{-1}([0,1])}\int_D\left|\int_{(\varrho-\varepsilon)v(\xi)}^{\varrho v(\xi)}\lambda\frac{|f(\xi,t)|}{\varepsilon}dt\right|d\xi \leq \sup_{v\in \Phi^{-1}([0,1])}\int_D \lambda\alpha(\xi)|v(\xi)|d\xi $$ $$ +\sup_{v\in \Phi^{-1}([0,1])}\int_D \frac{\lambda\beta(\xi)}{p+1}\left(\frac{\varrho^{p+1}-(\varrho-\varepsilon)^{p+1}}{\varepsilon}\right)|v(\xi)|^{p+1}d\xi.
$$ \noindent Since the Folland-Stein space $S^1_0(D)$ is compactly embedded in $L^{q}(D)$, for every $q\in [1,2^*)$, bearing in mind that $$\lambda \alpha\in L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)\qquad\mbox{and}\qquad \lambda\beta\in L^{\frac{1}{1-\gamma}}(D),$$ the above inequality yields $$ \Lambda_\lambda(\varepsilon,\varrho)\leq \kappa_{1,\gamma}\|\lambda\alpha\|_{L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)}+\frac{\kappa_{2,\gamma}^{p+1}}{p+1}\|\lambda\beta\|_{L^{\frac{1}{1-\gamma}}(D)}\left(\frac{\varrho^{p+1}-(\varrho-\varepsilon)^{p+1}}{\varepsilon}\right). $$ \indent Thus, passing to the limsup as $\varepsilon\rightarrow 0^+$, we get \begin{equation}\label{VGa1Glu1} \limsup_{\varepsilon\rightarrow 0^+}\Lambda_\lambda(\varepsilon,\varrho)<\kappa_{1,\gamma}\|\lambda\alpha\|_{L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)}+\kappa_{2,\gamma}^{p+1}\|\lambda\beta\|_{L^{\frac{1}{1-\gamma}}(D)}\varrho^{p}. \end{equation} Now, consider the real function $$ \varphi_\lambda(\varrho):=\kappa_{1,\gamma}\|\lambda\alpha\|_{L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)}+\kappa_{2,\gamma}^{p+1}\|\lambda\beta\|_{L^{\frac{1}{1-\gamma}}(D)}\varrho^{p}-\varrho, $$ for every $\varrho>0$.\par \noindent It is easy to see that $\inf_{\varrho>0}\varphi_\lambda(\varrho)$ is attained at $$ \varrho_{0,\lambda}:=\left(\lambda p\kappa_{2,\gamma}^{p+1}\|\beta\|_{L^{\frac{1}{1-\gamma}}(D)}\right)^{\frac{1}{1-p}} $$ and, by \eqref{range}, one has $$ \inf_{\varrho>0}\varphi_\lambda(\varrho)<0. $$ \indent Hence inequality \eqref{VGa1Glu1} yields $$ \limsup_{\varepsilon\rightarrow 0^+}\Lambda_\lambda(\varepsilon,\varrho_{0,\lambda})<\varrho_{0,\lambda}. $$ \noindent Now, it follows by Lemmas \ref{lemmino1} and \ref{lemmino2} that \begin{equation*}\label{VGa1Glu} \inf_{u\in\Phi^{-1}([0,\varrho_{0,\lambda}))} \frac{\displaystyle\sup_{v\in\Phi^{-1}([0,\varrho_{0,\lambda}])}\Psi_\lambda(v)-\Psi_\lambda(u)}{\varrho_{0,\lambda}^2- \|u\|_{S^1_0(D)}^2}<\frac{1}{2}.
\end{equation*} The above relation implies that there exists $w_{\lambda}\in S^1_0(D)$ such that $$ \Psi_\lambda(u)\leq \displaystyle\sup_{v\in\Phi^{-1}([0,\varrho_{0,\lambda}])}\Psi_\lambda(v)<\Psi_\lambda(w_\lambda)+\frac{1}{2}(\varrho_{0,\lambda}^2-\|w_\lambda\|^2_{S^1_0(D)}), $$ for every $u\in\Phi^{-1}([0,\varrho_{0,\lambda}])$.\par Thus \begin{equation}\label{John} \mathcal{J}_{\lambda}(w_{\lambda})=\frac{1}{2}\|w_\lambda\|_{S^1_0(D)}^2-\Psi_\lambda(w_\lambda)<\frac{\varrho^2_{0,\lambda}}{2}-\Psi_\lambda(u), \end{equation} for every $u\in\Phi^{-1}([0,\varrho_{0,\lambda}])$.\par \indent Since the energy functional $\mathcal{J}_{\lambda}$ is sequentially weakly lower semicontinuous, its restriction to $\Phi^{-1}([0,\varrho_{0,\lambda}])$ has a global minimum $u_{0,\lambda}\in \Phi^{-1}([0,\varrho_{0,\lambda}])$.\par \indent Note that $u_{0,\lambda}$ belongs to $\Phi^{-1}([0,\varrho_{0,\lambda}))$. Indeed, if $\|u_{0,\lambda}\|_{S^1_0(D)}=\varrho_{0,\lambda}$, by \eqref{John} one has \begin{equation*} \mathcal{J}_{\lambda}(u_{0,\lambda})=\frac{\varrho^2_{0,\lambda}}{2}-\Psi_\lambda(u_{0,\lambda})>\mathcal{J}_{\lambda}(w_{\lambda}), \end{equation*} \noindent which is a contradiction.\par \indent In conclusion, it follows that $u_{0,\lambda}\in S^1_0(D)$ is a local minimum for the energy functional $\mathcal{J}_{\lambda}$ with $$\|u_{0,\lambda}\|_{S^1_0(D)}<\varrho_{0,\lambda},$$ and hence, in particular, a weak solution of problem $(P_{\lambda}^{f})$. This completes the proof. \begin{remark}\rm{ A crucial step in our approach is the explicit computation of the embedding constants $\kappa_{i,\gamma}$ that naturally appear in Theorem \ref{MoReGeneral} and its consequences.
In the special case of the Heisenberg group $\mathbb{H}^n$ an explicit expression of these quantities can be obtained by using the best constant in the Sobolev inequality \begin{equation}\label{folland2} \int_{D}|u(\xi)|^{2^*_h}\,d\xi\leq C\int_{D}|\nabla_{\mathbb{H}^n} u(\xi)|^2\,d\xi,\,\quad\forall\, u\in C^{\infty}_0(D) \end{equation} that was determined by Jerison and Lee in \cite[Corollary C]{JLe}.} \end{remark} \begin{remark}\rm{It is clear that Theorem \ref{FerraraMolicaBisci22Generale2} is a simple consequence of Theorem \ref{MoReGeneral}. Indeed, preserving our notations and assuming that \begin{equation*} 0<\kappa<\frac{(p-1)^{\frac{p-1}{p}}}{pc_{1,\gamma}^{\frac{p-1}{p}}c_{2,\gamma}^{\frac{p+1}{p}}}|D|^{\frac{1-\gamma}{p}+ \frac{(1-p)(\gamma 2^{*}_h-1)}{p\gamma 2^{*}_h} }, \end{equation*} it is easy to note that $$ \frac{(p-1)^{\frac{p-1}{p}}}{pc_{1,\gamma}^{\frac{p-1}{p}}c_{2,\gamma}^{\frac{p+1}{p}}\|\kappa\|^{\frac{p-1}{p}}_{L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)}\|\kappa\|_{L^{\frac{1}{1-\gamma}}(D)}^{\frac{1}{p}}}>1. $$ Since all the assumptions of Theorem \ref{MoReGeneral} have been verified (with $\lambda=1$) the conclusion of Theorem \ref{FerraraMolicaBisci22Generale2} immediately follows. } \end{remark} A special case of Theorem \ref{MoReGeneral} reads as follows. \begin{corollary}\label{MoReGeneralCor} Let $D$ be a smooth and bounded domain of the {Carnot group} $\mathbb{G}$ of homogeneous dimension ${\rm dim}_h{\mathbb{G}}\geq 3$ and let $f:D\times{\rm I\!R}\rightarrow{\rm I\!R}$ be a Carath\'{e}odory function such that condition \eqref{MoReGeneralg} holds. Assume that \begin{equation}\label{MoReGeneral3D} \|\alpha\|^{p-1}_{L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)}\|\beta\|_{L^{\frac{1}{1-\gamma}}(D)}<\frac{(p-1)^{p-1}}{p\kappa_{1,\gamma}^{{p-1}}\kappa_{2,\gamma}^{{p+1}}}.
\end{equation} Then the following subelliptic problem $$ (P_{f})\,\,\,\,\,\,\,\,\,\,\left\{ \begin{array}{ll} -\Delta_{\mathbb{G}} u=\displaystyle f(\xi,u) & \mbox{\rm in } D\\ u|_{\partial D}=0, & \end{array}\right. $$ has a weak solution $u_{0}\in S^1_0(D)$ with $$\|u_{0}\|_{S^1_0(D)}<\left( p\kappa_{2,\gamma}^{p+1}\|\beta\|_{L^{\frac{1}{1-\gamma}}(D)}\right)^{\frac{1}{1-p}}.$$ \end{corollary} \begin{remark}\rm{A special case of Corollary \ref{MoReGeneralCor} in the Euclidean setting has been proved in \cite{AC} by exploiting the variational principle obtained by Ricceri in \cite{R2}. } \end{remark} \indent In conclusion, we present a direct application of our main result. \begin{example}\label{esempiuccio}\rm{ Let $D$ be a smooth and bounded domain of a Carnot group $\mathbb{G}$ with ${\rm dim}_h{\mathbb{G}}\geq 3$ and let $$\alpha\in L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)\setminus\{0\},$$ with $\gamma\in (2/2^*,1)$.\par By virtue of Theorem \ref{MoReGeneral}, there exists an open interval $\Lambda\subset (0,+\infty)$ such that for every $\lambda\in \Lambda$, the following problem $$ \left\{ \begin{array}{ll} -\Delta_{\mathbb{G}} u=\lambda\displaystyle (\alpha(\xi)+|u|^p) & \mbox{\rm in } D\\ u|_{\partial D}=0, & \end{array}\right. $$ where $p\in (1,\gamma 2^*-1)$, admits at least one non-trivial weak solution $u_{0,\lambda}\in S^1_0(D)$ such that $$\|u_{0,\lambda}\|_{S^1_0(D)}<\left(\lambda p\kappa_{2,\gamma}^{p+1}|D|^{1-\gamma}\right)^{\frac{1}{1-p}}.$$ More precisely, a concrete expression of the interval $\Lambda$ is given by $$ \Lambda:=\left(0,\frac{(p-1)^{\frac{p-1}{p}}|D|^{\frac{\gamma-1}{p}}}{p\kappa_{1,\gamma}^{\frac{p-1}{p}}\kappa_{2,\gamma}^{\frac{p+1}{p}}\|\alpha\|^{\frac{p-1}{p}}_{L^{\frac{\gamma 2^{*}}{\gamma 2^{*}-1}}(D)}}\right).
$$ } \end{example} \indent {\bf Acknowledgements.} The manuscript was realized under the auspices of the INdAM - GNAMPA Project 2015 {\it Modelli ed equazioni non-locali di tipo frazionario} and the SRA grants P1-0292, J1-7025, J1-6721, and J1-5435. \end{document}
\begin{document} \title{Controlling the transport of an ion: Classical and quantum mechanical solutions} \author{H A F\"urst$^1$, M H Goerz$^2$, U G Poschinger$^1$, M Murphy$^3$, S Montangero$^3$, T Calarco$^3$, F Schmidt-Kaler$^1$, K Singer$^1$, C P Koch$^2$} \address{$^1$QUANTUM, Institut f\"ur Physik, Universit\"at Mainz, D-55128 Mainz, Germany} \address{$^2$Theoretische Physik, Universit\"at Kassel, Heinrich-Plett-Stra{\ss}e 40, D-34132 Kassel, Germany} \address{$^3$Institut f\"ur Quanteninformationsverarbeitung, Universit\"at Ulm, D-89081 Ulm, Germany} \ead{[email protected]} \begin{abstract} We investigate the performance of different control techniques for ion transport in state-of-the-art segmented miniaturized ion traps. We employ numerical optimization of classical trajectories and quantum wavepacket propagation as well as analytical solutions derived from invariant based inverse engineering and geometric optimal control. We find that accurate shuttling can be performed with operation times below the trap oscillation period. The maximum speed is limited by the maximum acceleration that can be exerted on the ion. When using controls obtained from classical dynamics for wavepacket propagation, wavepacket squeezing is the only quantum effect that comes into play for a large range of trapping parameters. We show that this can be corrected by a compensating force derived from invariant based inverse engineering, without a significant increase in the operation time. \end{abstract} \pacs{37.10.Ty,03.67.Lx,02.30.Yy} \maketitle \section{Introduction} Trapped laser-cooled ions represent a versatile experimental platform offering near-perfect control and tomography of a few-body system in the classical and quantum domain~\cite{CIRAC1995,Blatt2008,WINELAND1998,CASANOVA2012}.
The fact that both internal (qubit) and external (normal modes of oscillation) degrees of freedom can be manipulated in the quantum regime allows for many applications in the fields of quantum information processing and quantum simulation~\cite{MONZ2011,GERRITSMA2011,RICHERME2013}. Currently, a significant research effort is devoted to scaling these experiments up to larger numbers of qubits. A promising technology to achieve this goal is \textit{microstructured segmented ion traps}, where small ion groups are stored in local potentials and ions are shuttled within the trap by applying suitable voltage ramps to the trap electrodes~\cite{KIELPINSKY2002}. In order to enable scalable experiments in the quantum domain, these shuttling operations have to be performed such that the required time is much shorter than the timescales of the relevant decoherence processes. At the same time, one needs to avoid excitation of the ion's motion after the shuttling operation. These opposing requirements clearly call for the application of advanced control techniques. Adiabatic ion shuttling operations in a segmented trap have been demonstrated in Ref.~\cite{ROWE}. Recent experiments have achieved non-adiabatic shuttling of single ions within a few trap oscillation cycles while retaining the quantum ground state of motion~\cite{WALTHER2012,Bowler2012}. This was made possible by finding `sweet spots'\ in the shuttling time or by removing the excess energy accumulated during the shuttling with kicks of the trap potential. Given the experimental constraints, it is natural to ask what the speed limitations for the shuttling process are. The impact of quantum effects for fast shuttling operations, i.e., distortions of the wavepacket, also needs to be analyzed, and it needs to be assessed whether quantum control techniques~\cite{SomloiCP93,ZhuJCP98,ReichJCP12} may be applied to avoid these.
Moreover, from a control-theoretical perspective and in view of possible future application in experiment, it is of interest to analyze how optimized voltage ramps can be obtained. Optimal control theory (OCT) combined with classical equations of motion was employed in Ref.~\cite{Schulz2006} to obtain optimized voltage ramps. Quantum effects were predicted not to play a role unless the shuttling takes place on a timescale of a single oscillation period. In Refs.~\cite{CHEN2011,Torrontegui2011}, control techniques such as inverse engineering were applied to atomic shuttling problems. The transport of atomic wavepackets in optical dipole potentials was investigated using OCT with quantum mechanical equations of motion~\cite{CALARCO2004,deChiaraPRA08,MurphyPRA09}. The purpose of the present paper is to assess available optimization strategies for the specific problem of transporting a single ion in a microchip ion trap and to utilize them to study the quantum speed limit for this process~\cite{GiovannettiPRA03,CanevaPRL09}, i.e., to determine the shortest possible time for the transport. Although parameters of the trap architecture of Ref.~\cite{SCHULZ2008} are used throughout the entire manuscript, we strongly emphasize that the qualitative results we obtain hold over a wide parameter regime. They are thus generally valid for current segmented ion traps, implemented with surface electrode geometry~\cite{SCHULZ2008,AMINI2011} or more traditional multilayer geometry. The paper is organized as follows. We start by outlining the theoretical framework in Sec.~\ref{subs:ttcp}. In particular we review the combination of numerical optimization with classical dynamics in Sec.~\ref{subs:kloct} and with wavepacket motion in Sec.~\ref{subs:qmoct}. Analytical solutions to the control problem, obtained from the harmonic approximation of the trapping potential, are presented in Secs.~\ref{subsec:geometric} and~\ref{subs:invmeth0}. 
Section~\ref{subs:apcontrol} is devoted to the presentation and discussion of our results. The control solutions for purely classical dynamics of the ion, obtained both numerically and analytically, yield a minimum transport duration as shown in Sec.~\ref{subs:kloct2}. We discuss in Sec.~\ref{subs:qmprop2} how far these solutions correspond to the quantum speed limit. Our results obtained by invariant-based inverse engineering are presented in Sec.~\ref{subs:invmeth}, and we analyze the feasibility of quantum optimal control in Sec.~\ref{subs:qmoct2}. Section~\ref{sec:concl} concludes our paper. \section{Methods for trajectory control and wavepacket propagation}\label{subs:ttcp} In the following we present the numerical methods we employ to control the transport of a single trapped ion. Besides numerical optimization describing the motion of the ion either with classical mechanics or via wavepacket propagation, we also utilize two analytical methods. This is made possible by the trap geometry, which leads to an almost perfectly harmonic trapping potential for the ion at all times. \subsection{Prerequisites} We assume ponderomotive confinement of the ion at the rf-node of a linear segmented Paul trap and a purely electrostatic confinement along the trap axis $x$, see Fig.~\ref{fig:electrodes}. This enables us to treat the dynamics only along this dimension. We consider transport of a single ion with mass $m$ between two neighboring electrodes, which give rise to individual potentials centered at $x_1$ and $x_2$. This may be scaled up to $N$ electrodes and longer transport distances without any loss of generality. \begin{figure} \caption{(a) Ion shuttling in a segmented linear trap. The dc electrodes form the axial potential for the ion transport along the $x$-axis. The rf electrodes, which confine the ions in the directions perpendicular to the $x$-axis, are not shown. (b) Axial electrode potentials formed by applying a dc voltage to a facing pair of trap segments.
For the specific scenario presented in this manuscript, we use $d = 280\,\mu$m, $g=30\,\mu$m and $h=500\,\mu$m. Each potential is generated from a single pair of segments, depicted in red in (a) and biased to 1$\,$V with all the other dc electrodes grounded. } \label{fig:electrodes} \end{figure} The ion motion is controlled by a time-dependent electrostatic potential, \begin{equation}\label{eq:V} V(x,t)= U_1(t)\phi_1(x)+U_2(t)\phi_2(x)\,, \end{equation} with segment voltages $U_i(t)$ and normalized electrode potentials on the trap axis, $\phi_i(x)$. They are dimensionless electrostatic potentials obtained with a bias of $+1$~V at electrode $i$ and the remaining electrodes grounded (see Fig.~\ref{fig:electrodes}(b)). These potentials are calculated by using a fast multipole boundary element method~\cite{SINGER2010} for the trap geometry used in recent experiments~\cite{WALTHER2012} and shown in Fig.~\ref{fig:electrodes}. In order to speed up numerics and obtain smooth derivatives, we calculate values for $\phi_i(x)$ on a mesh and fit rational functions to the resulting data. The spatial derivatives $\phi_i'(x)$ and $\phi_i''(x)$ are obtained by differentiation of the fit functions. Previous experiments have shown that the calculated potentials allow for the prediction of ion positions and trap frequencies with an accuracy of one per cent~\cite{Huber2010,brownnutt2012spatially}, which indicates the precision of the microtrap fabrication process. An increase in the precision can be achieved by calibrating the trapping potentials using resolved sideband spectroscopy. This is sufficient to warrant the application of control techniques as studied here. For the geometry of the trap described in Ref.~\cite{WALTHER2012}, we obtain harmonic trap frequencies of about $\omega=2\pi\cdot$1.3~MHz with a bias voltage of $-7$~V at a single trapping segment. The individual segments are spaced 280~$\mu$m apart.
Our goal is to shuttle a single ion along this distance within a time span on the order of the oscillation period by changing the voltages $U_1$ and $U_2$, which are supposed to stay within a predetermined range that is set by experimental constraints. We seek to minimize the amount of motional excitation due to the shuttling process. \subsection{Numerical optimization with classical dynamics} \label{subs:kloct} Assuming the ion dynamics to be well described classically, we optimize the time-dependent voltages in order to reduce the amount of transferred energy. This corresponds to minimizing the functional $J$, \begin{equation}\label{eq:J} J = (E(T)-E_{\rm T})^2 + \sum_i \int_0^T \frac{\lambda_a}{S(t)} \Delta U_i(t)^2 \ensuremath{\, \textnormal{d}} t\,, \end{equation} i.e., to minimizing the difference between the desired energy $E_{\rm T}$ and the energy $E(T)$ obtained at the final time $T$. $\Delta U_i(t)= U_i^{n+1}(t) - U_i^{n}(t)$ is the update of each voltage ramp in an iteration step $n$, and the second term in Eq.~\eref{eq:J} limits the overall change in the integrated voltages during one iteration. The weight $\lambda_a$ is used to tune the convergence and limit the updates. To suppress updates near $t=0$ and $t=T$ the shape function $S(t) \geq 0$ is chosen to be zero at these points in time. For a predominantly harmonic axial confinement, the final energy is given by \begin{equation}\label{eq:ET} E(T) = \frac{1}{2} m \dot{x}^2(T) + \frac{1}{2} m \omega^2 (x(T)-x_2)^2\,. \end{equation} In order to obtain transport without motional excitation, we choose $E_{\rm T} = 0$. Evaluation of Eq.~\eref{eq:ET} requires the solution of the classical equation of motion.
It reads \begin{equation}\label{eq:class:eom} \ddot{x}(t) = -\frac{1}{m} \left.\frac{\partial}{\partial x} V(x,t)\right|_{x=x(t)} = -\frac{1}{m} \sum_{i=1}^2 U_i(t)\phi_i'\left(x(t)\right) \end{equation} for a single ion trapped in the potential of Eq.~\eref{eq:V} and is solved numerically using a \textit{Dormand-Prince Runge-Kutta} integrator~\cite{SINGER2010}. Employing Krotov's method for optimal control~\cite{Konnov99} together with the classical equation of motion, Eq.~\eref{eq:class:eom}, we obtain the following iterative update rule: \begin{equation}\label{eqn:krotklupd} \Delta U_i(t) = - \frac{S(t)}{\lambda_{a}} p_2^{(n)}(t) \phi_i'(x^{(n+1)}(t))\,, \end{equation} where $n$ denotes the previous iteration step. $\mathbf{p}=(p_1,p_2)$ is a costate vector which evolves according to \begin{equation} \dot{\mathbf{p}}(t) = \left(\begin{array}{c} \frac{p_2}{m} \left.\frac{\partial^2 V(x,t)}{\partial x^2}\right|_{x=x(t)} \\ -p_1 \end{array}\right)\,, \end{equation} with its `initial' condition defined at the final time $T$: \begin{equation}\label{eqn:pTkrot} \mathbf{p}(T) = - 2 m \left(E(T)-E_{\rm T}\right) \left(\begin{array}{c} \omega^2 (x(T) - x_2) \\ \dot{x}(T) \end{array}\right)\,. \end{equation} The algorithm works by propagating $x(t)$ forward in time, solving Eq.~\eref{eq:class:eom} with an initial guess for $U_i(t)$, and iterating the following steps until the desired value of $J$ is achieved: \begin{enumerate} \item Obtain $\mathbf{p}(T)$ according to Eq.~\eref{eqn:pTkrot} and propagate $\mathbf{p}(t)$ backwards in time using the costate equation of motion. \item Update the voltages according to Eq.~\eref{eqn:krotklupd} at each time step while propagating $x(t)$ forward in time with the immediately updated voltages. \end{enumerate} The optimization algorithm shows rapid convergence and brings the final excitation energy $E(T)$ as close to zero as desired. An example of an optimized voltage ramp is shown in Fig.~\ref{fig:guesscfoct}(a).
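The two iterated steps above can be condensed into a few lines of code. The following is a sketch in a dimensionless toy model (assumptions: $m=1$, $x_1=0$, $x_2=1$, harmonic per-volt electrode potentials $\phi_i(x)=(x-x_i)^2/2$, unit target trap frequency); it is not the Dormand-Prince/Krotov implementation used for the results in this paper, but it exhibits the same forward-backward-update structure:

```python
import math

# Toy classical optimization loop (assumptions: m = 1, x_1 = 0, x_2 = 1,
# harmonic per-volt "electrode" potentials phi_i(x) = (x - x_i)^2 / 2,
# so V(x,t) = U_1 phi_1 + U_2 phi_2 and V''(x) = U_1 + U_2).
X1, X2 = 0.0, 1.0
T, N = 3.0, 400
DT = T / N

def accel(x, u1, u2):
    return -(u1 * (x - X1) + u2 * (x - X2))   # xddot = -dV/dx for m = 1

def rk4(x, v, u1, u2):
    # one Runge-Kutta step; voltages are held fixed over the step
    k1x, k1v = v, accel(x, u1, u2)
    k2x, k2v = v + 0.5 * DT * k1v, accel(x + 0.5 * DT * k1x, u1, u2)
    k3x, k3v = v + 0.5 * DT * k2v, accel(x + 0.5 * DT * k2x, u1, u2)
    k4x, k4v = v + DT * k3v, accel(x + DT * k3x, u1, u2)
    return (x + DT * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            v + DT * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

def energy(x, v):
    return 0.5 * v * v + 0.5 * (x - X2) ** 2  # E(T) for unit trap frequency

# initial guess: trap minimum alpha = U2/(U1+U2) follows a smoothstep
s = [n / N for n in range(N)]
u2 = [si * si * (3 - 2 * si) for si in s]
u1 = [1.0 - u for u in u2]

x, v = X1, 0.0
for n in range(N):
    x, v = rk4(x, v, u1[n], u2[n])
e_guess = energy(x, v)

lam = 30.0
shape = [math.sin(math.pi * si) ** 2 for si in s]  # S(t): frozen endpoints
best = e_guess
for _ in range(300):
    # forward propagation of the current voltage ramps
    x, v = X1, 0.0
    for n in range(N):
        x, v = rk4(x, v, u1[n], u2[n])
    e = energy(x, v)
    best = min(best, e)
    # costate: p(T) = -2 m E(T) (w^2 (x - x_2), v); pdot = (p2 V'', -p1)
    p1, p2 = -2 * e * (x - X2), -2 * e * v
    ps = [0.0] * N
    for n in range(N - 1, -1, -1):
        ps[n] = p2
        vpp = u1[n] + u2[n]                   # V'' in this model
        p1, p2 = p1 - DT * p2 * vpp, p2 + DT * p1
    # sequential update while re-propagating forward (immediate feedback)
    x, v = X1, 0.0
    for n in range(N):
        u1[n] -= (shape[n] / lam) * ps[n] * (x - X1)
        u2[n] -= (shape[n] / lam) * ps[n] * (x - X2)
        x, v = rk4(x, v, u1[n], u2[n])

print(f"E(T) guess: {e_guess:.4f} -> optimized (best): {best:.6f}")
```

In this toy model the iteration steadily reduces the final excitation energy below that of the smoothstep guess, mirroring the rapid convergence reported above.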
The voltages obtained are not symmetric under time reversal, in contrast to the initial guess. This is rationalized by the voltage updates occurring only during forward propagation, which breaks the time-reversal symmetry. We find this behavior to be typical for the Krotov algorithm combined with the classical equation of motion. \subsection{Numerical optimization of wavepacket propagation}\label{subs:qmoct} When quantum effects are expected to influence the transport, the ion has to be described by a wave function $\Psi(x,t)$. The control target is then to perfectly transfer the initial wavefunction, typically the ground state of the trapping potential centered around position $x_1$, to a target wavefunction, i.e., the ground state of the trapping potential centered around position $x_2$. This is achieved by minimizing the functional \begin{equation}\label{eq:qmkrotj} J = 1 - \left\vert \int\limits_{-\infty}^{\infty} \Psi(x,T)^* \Psi^{\operatorname{tgt}}(x) \ensuremath{\, \textnormal{d}} x \right\vert^2 + \int\limits_{0}^{T} \frac{\lambda_a}{S(t)} \sum_i \Delta U_i(t)^2 \ensuremath{\, \textnormal{d}} t\,. \end{equation} Here, $\Psi(x,T)$ denotes the wave function of the single ion propagated with the set of voltages $U_i(t)$, and $\Psi^{\operatorname{tgt}}(x)$ is the target wave function. The voltage updates $\Delta U_i(t)$, scaling factor $\lambda_a$ and shape function $S(t)$ have the same meaning as in Sec.~\ref{subs:kloct}. $\Psi(x,T)$ is obtained by solving the time-dependent Schr\"odinger equation (TDSE), \begin{eqnarray} i \hbar \frac{\partial}{\partial t} \Psi(x,t) = \Op H(t) \Psi(x,t) = \left( -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \sum_{i=1}^{N} U_i(t) \phi_i(x) \right) \Psi(x,t)\,. \label{eq:tdse} \end{eqnarray} As in the classical case, optimization of the transport problem is tackled using Krotov's method~\cite{SomloiCP93,ReichJCP12}.
The update equation derived from Eq.~\eref{eq:qmkrotj} is given by \begin{equation}\label{eqn:krotovupdate} \Delta U_i(t) = \frac{S(t)}{\lambda_a} \mathfrak{Im}\int\limits_{x_{\min}}^{x_{\max}} \chi^{n}(x,t)^*\, \phi_i(x) \, \Psi^{n+1}(x,t) \ensuremath{\, \textnormal{d}} x\,, \end{equation} with $n$ denoting the iteration step. $\chi(x,t)$ is a costate wave function obeying the TDSE with `initial' condition \begin{equation}\label{eqn:chioft} \chi(x,T) = \left[ \int\limits_{x_{\min}}^{x_{\max}} (\Psi(x,T))^* \Psi^{\operatorname{tgt}}(x) \ensuremath{\, \textnormal{d}} x \; \right] \Psi^{\operatorname{tgt}}(x)\,. \end{equation} Optimized voltages $U_i(t)$ are obtained similarly to Sec.~\ref{subs:kloct}, i.e., one starts with the ground state, propagates $\Psi(x,t)$ forward in time according to Eq.~\eref{eq:tdse}, using an initial guess for the voltage ramps, and iterates the following steps until the desired value of $J$ is achieved: \begin{enumerate} \item Compute the costate wave function at the final time $T$ according to Eq.~\eref{eqn:chioft} and propagate $\chi(x,t)$ backwards in time, storing $\chi(x,t)$ at each timestep. \item Update the control voltages according to Eq.~\eref{eqn:krotovupdate} using the stored $\chi(x,t)$, while propagating $\Psi(x,t)$ forward using the immediately updated control voltages. \end{enumerate} Equations~\eref{eqn:krotovupdate} and~\eref{eqn:chioft} require a sufficiently large initial overlap between the wave function, which is forward propagated under the initial guess, and the target state in order to obtain a reasonable voltage update. This emphasizes the need for good initial guess ramps and illustrates the difficulty of the control problem when large phase space volumes need to be covered. To solve the TDSE numerically, we use the Chebyshev propagator~\cite{Tal-EzerJCP84} in conjunction with a Fourier grid~\cite{RonnieReview88,RonnieReview94} for efficient and accurate application of the kinetic energy part of the Hamiltonian.
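The basic phenomenology of wavepacket transport can be illustrated with a minimal TDSE integration. The sketch below uses an (assumed) Crank-Nicolson finite-difference scheme as a stand-in for the Chebyshev/Fourier-grid propagator, in dimensionless units ($\hbar=m=\omega=1$) and for a toy rigidly dragged harmonic trap; it reproduces near-unit ground-state fidelity for slow transport and strong excitation for fast transport:

```python
import math

# Toy TDSE transport check (assumptions: hbar = m = omega = 1, rigidly
# dragged harmonic trap; Crank-Nicolson finite differences stand in for the
# Chebyshev propagator on a Fourier grid used in the text).
D = 5.0                                  # transport distance
XMIN, XMAX, NX = -6.0, 11.0, 171
DX = (XMAX - XMIN) / (NX - 1)
xs = [XMIN + j * DX for j in range(NX)]

def alpha(t, T):
    s = t / T
    return D * s * s * (3 - 2 * s)       # smoothstep trap-centre path

def gauss(center):
    psi = [math.pi ** -0.25 * math.exp(-0.5 * (x - center) ** 2) for x in xs]
    norm = math.sqrt(sum(p * p for p in psi) * DX)
    return [p / norm for p in psi]

def thomas(a, b, c, d):
    # tridiagonal solver (sub/diag/super diagonals a, b, c; rhs d)
    n = len(d); cp = [0j] * n; dp = [0j] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0j] * n; x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def transport(T, nt):
    dt = T / nt
    psi = [complex(p) for p in gauss(0.0)]
    k = 0.5 / DX ** 2                    # hbar^2 / (2 m dx^2)
    z = 0.5j * dt
    for n in range(nt):
        a_mid = alpha((n + 0.5) * dt, T) # midpoint potential
        V = [0.5 * (x - a_mid) ** 2 for x in xs]
        sub = [-z * k] * NX
        dia = [1 + z * (2 * k + V[j]) for j in range(NX)]
        rhs = [0j] * NX
        for j in range(NX):
            h = (2 * k + V[j]) * psi[j]
            if j > 0: h -= k * psi[j - 1]
            if j < NX - 1: h -= k * psi[j + 1]
            rhs[j] = psi[j] - z * h      # (1 + zH) psi_new = (1 - zH) psi_old
        psi = thomas(sub, dia, sub, rhs)
    tgt = gauss(D)
    ov = sum(t * p for t, p in zip(tgt, psi)) * DX
    nrm = sum(abs(p) ** 2 for p in psi) * DX
    return abs(ov) ** 2, nrm

slow, nrm = transport(T=20.0, nt=1000)   # ~3 trap periods: nearly adiabatic
fast, _ = transport(T=3.0, nt=300)       # ~half a trap period: excitation
print(f"overlap slow: {slow:.4f}, fast: {fast:.4f}, norm: {nrm:.6f}")
```

Because the Cayley form of Crank-Nicolson is unitary, the norm is conserved to machine precision, while the final overlap with the target ground state cleanly separates the adiabatic and non-adiabatic regimes.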
Denoting the transport time by $T$ and the inter-electrode spacing by $d$, the average momentum during the shuttling is given by $\bar{p}=m d / T$. Typical values of these parameters yield a phase space volume of $d \cdot \bar{p}/h\approx 10^7$. This requires the numerical integration to be extremely stable. In order to ease the numerical treatment, we can exploit the fact that the wavefunction's spatial extent is much smaller than $d$ and most excess energy occurs in the form of classical oscillations. This allows for propagating the wave function on a small \textit{moving grid} that extends around the instantaneous position and momentum expectation values~\cite{SINGER2010}. The details of our implementation combining the Fourier representation and a moving grid are described in~\ref{subs:qmprop}. \subsection{Initial guess voltages} \label{subs:guessgen} Any optimization, no matter whether it employs classical or quantum equations of motion, starts from an initial guess. For many optimization problems, and in particular when using gradient-based methods for optimization, a physically motivated initial guess is crucial for success of the optimization~\cite{KochPRA04}. Here, we design the initial guess for the voltage ramps such that the ion is dragged from position $x_1$ to $x_2$ in a smooth fashion. This is achieved as follows: The trapping potential $V(x,t)$ can be described by the position of its local minimum $\alpha(t)$. Obviously, $\alpha(t)$ needs to fulfill the boundary conditions $\alpha(0) = x_1$, $\alpha(T) = x_2$. In order to ensure smooth acceleration and deceleration of the center of the trap, we also demand $\dot\alpha(0) = \dot\alpha(T)=\ddot{\alpha}(0)=\ddot{\alpha}(T)=0$.
A possible ansatz fulfilling these boundary conditions is given by a polynomial of degree 5, \begin{eqnarray}\label{eqn:transfunc1} \alpha(t) = x_1 + d (10 s^3-15s^4+6s^5)\,, \end{eqnarray} where $d=x_2-x_1$ denotes the transport distance and $s=t/T$ is a dimensionless time. To derive initial guess voltages $U_i^0(t)$, we use as a first condition that the local minimum of the potential coincides with $\alpha(t)$. Second, we fix the trap frequency $\omega$ to a constant value throughout the whole shuttling process, \begin{equation} \begin{array}{rl} \left.\frac{\partial V}{\partial x}\right|_{x=\alpha(t)} &=\phi_1'(\alpha(t)) U_1^0(t) + \phi_2'(\alpha(t)) U_2^0(t) \stackrel{!}{=} 0,\\ \left.\frac{\partial^2 V}{\partial x^2}\right|_{x=\alpha(t)} &=\phi_1''(\alpha(t)) U_1^0(t) + \phi_2''(\alpha(t)) U_2^0(t) \stackrel{!}{=} m \omega^2\,. \end{array}\label{eqn:guesspots} \end{equation} These equations depend on the first and second order spatial derivatives of the electrode potentials. Solving for $U_1^0(t)$, $U_2^0(t)$, we obtain \begin{equation} U_i^0 (t) = \frac{(-1)^i m \omega ^2 \phi_j'(\alpha(t))}{ \phi_2''(\alpha(t)) \phi_1'(\alpha(t)) - \phi_2'(\alpha(t))\phi_1''(\alpha(t)) }, \quad i,j \in \{1,2\}, \quad j \neq i \label{eqn:solguess}. \end{equation} An example is shown in Fig.~\ref{fig:guesscfoct}. If the electrode potentials have translational symmetry, i.e., $\phi_j(x)=\phi_i(x+d)$, then $U^0_1(t)=U^0_2(T-t)$. This condition is approximately met for sufficiently homogeneous trap architectures.
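The construction of Eqs.~\eref{eqn:transfunc1}--\eref{eqn:solguess} can be checked in a toy model. In the sketch below we assume harmonic per-volt electrode potentials $\phi_i(x)=(x-x_i)^2/2$ and $m=\omega=1$ (the real $\phi_i$ come from the boundary-element field solver); the solved voltages then place the trap minimum at $\alpha(t)$ with constant curvature $m\omega^2$:

```python
# Toy check of the initial-guess construction: alpha(t) from the degree-5
# polynomial and U_i^0(t) from the 2x2 system of Eq. (guesspots).
# Assumed model potentials: phi_i(x) = (x - x_i)^2 / 2 per volt; m = omega = 1.
X1, X2 = 0.0, 1.0
D = X2 - X1
M, OMEGA, T = 1.0, 1.0, 1.0

def alpha(t):
    s = t / T
    return X1 + D * (10 * s**3 - 15 * s**4 + 6 * s**5)

def phi_p(x, xi):  return x - xi     # phi_i'(x) in the model
def phi_pp(x, xi): return 1.0        # phi_i''(x) in the model

def guess_voltages(t):
    a = alpha(t)
    p1, p2 = phi_p(a, X1), phi_p(a, X2)
    q1, q2 = phi_pp(a, X1), phi_pp(a, X2)
    det = q2 * p1 - p2 * q1          # phi_2'' phi_1' - phi_2' phi_1''
    u1 = -M * OMEGA**2 * p2 / det    # Eq. (solguess), i = 1
    u2 = +M * OMEGA**2 * p1 / det    # Eq. (solguess), i = 2
    return u1, u2

for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    a = alpha(t)
    u1, u2 = guess_voltages(t)
    grad = u1 * phi_p(a, X1) + u2 * phi_p(a, X2)    # V'(alpha)  -> 0
    curv = u1 * phi_pp(a, X1) + u2 * phi_pp(a, X2)  # V''(alpha) -> m omega^2
    print(f"t={t:.2f} alpha={a:.4f} U1={u1:.4f} U2={u2:.4f} "
          f"V'={grad:.1e} V''={curv:.4f}")

# boundary conditions of the transport function via finite differences
h = 1e-5
d1 = (alpha(h) - alpha(0.0)) / h
d2 = (alpha(T) - alpha(T - h)) / h
print(f"alpha(0)={alpha(0.0)} alpha(T)={alpha(T)} "
      f"alphadot(0)={d1:.2e} alphadot(T)={d2:.2e}")
```

In this symmetric model one also finds $U_1^0(t)=U_2^0(T-t)$ explicitly, the special case of the translational symmetry noted above.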
\begin{figure} \caption{Control voltages applied to the electrodes for transporting a $^{40}\rm{Ca}^+$ ion: (a) initial guess voltages, Eq.~\eref{eqn:solguess}, together with a numerically optimized ramp; (b) voltage ramp obtained with the invariant-based compensating-force approach.} \label{fig:guesscfoct} \end{figure} \subsection{Geometric optimal control}\label{subsec:geometric} Most current ion traps are fairly well described by a simple harmonic model, \begin{equation} V(x,t) = -u_1(t) \frac{1}{2}m \omega_0^2 (x-x_1)^2 - u_2(t) \frac{1}{2}m \omega_0^2 (x-x_2)^2\,, \end{equation} where $\omega_0$ is the trap frequency and $u_i$ are dimensionless control parameters which correspond to the electrode voltages. Since the equations of motion can be solved analytically, one can also hope to solve the control problem analytically. One option is given by Pontryagin's maximum principle~\cite{PONTRY,CHEN2011}, which allows one to determine time-optimal controls. In contrast to numerical optimization, which in general yields only local optima, Pontryagin's maximum principle guarantees the optimum to be global. In general, the cost functional, \begin{equation} J[\mathbf{u}] = \int_0^T g(\mathbf{y},\mathbf{u}) \ensuremath{\, \textnormal{d}} t\,, \end{equation} is minimized for the equation of motion $\dot{\mathbf{y}} = \mathbf{f}(\mathbf{y}, \mathbf{u})$ and a running cost $g(\mathbf{y},\mathbf{u})$ with $\mathbf{u} = (u_1, u_2)$ and $\mathbf{y} = (x,v)$ in our case. The optimization problem is formally equivalent to finding a classical trajectory by the principle of least action. The corresponding classical control Hamiltonian that completely captures the optimization problem is given by \begin{equation} H_c(\mathbf{p},\mathbf{y},\mathbf{u}) = p_0 g(\mathbf{y},\mathbf{u}) + \mathbf{p} \cdot \mathbf{f}(\mathbf{y}, \mathbf{u}) \label{eqn:hamcontrol} \end{equation} with costate $\mathbf{p}$, obeying \begin{equation}\label{eq:costate} \dot{\mathbf{p}}=-\frac{\partial H_c}{\partial \mathbf{y}}\,, \end{equation} and $p_0 < 0$ a constant compensating the dimensions. Pontryagin's principle states that $H_c$ becomes maximal for the optimal choice of $\mathbf u(t)$~\cite{PONTRY,CHEN2011}.
Here we seek to minimize the transport time $T$. The cost functional then becomes \[ J[\mathbf{u}] = \int_0^{T_{\rm{min}}} \ensuremath{\, \textnormal{d}} t = T_{\rm{min}}\,, \] which is independent of $\mathbf {u}$ itself and leads to $g(\mathbf{y},\mathbf{u})= 1$. Inserting the classical equations of motion $\dot{\mathbf{y}} = (v, -\partial_x V)$, the control Hamiltonian becomes \begin{equation} \label{eq:H_c} H_c(\mathbf{p},\mathbf{y},\mathbf{u}) = p_0 + p_1 v + p_2 \left( u_1 \cdot (x-x_1) + u_2 \cdot (x-x_2)\right) \omega_0^2\,. \end{equation} We bound $u_1$ and $u_2$ by $u_{\rm max}$, which corresponds to the experimental voltage limit. Since $H_c$ is linear in $u_i$ and $x_1 \leq x \leq x_2$, $H_c$ becomes maximal depending on the sign of $p_2$, \begin{equation} u_1(t)= - u_2(t)= \mathrm{sign}(p_2)\, u_{\rm max}\,. \label{eqn:biasmax} \end{equation} Evaluating Eq.~\eref{eq:costate} for $H_c$ of Eq.~\eref{eq:H_c} leads to \begin{eqnarray} \dot{p_1} = -p_2 \omega_0^2 \left(u_1 + u_2\right)\\ \dot{p_2} = -p_1. \end{eqnarray} In view of Eq.~\eref{eqn:biasmax}, the only useful choice is $p_2(0) > 0$. Otherwise the second electrode would be biased to a positive voltage, leading to a repulsive instead of an attractive potential acting on the ion. The equations of motion for the costate thus become \begin{eqnarray} \dot{p_1} = 0 & \Rightarrow p_1(t) = c_1\\ \dot{p_2} = -p_1& \Rightarrow p_2(t) = p_2(0) - c_1 t.\label{eq:p2} \end{eqnarray} For a negative constant $c_1$, $p_2$ never crosses zero. This implies that the voltages are never switched, i.e., the ion would be accelerated during the entire transport and could not come to rest. For positive $c_1$ there will be a zero crossing at time $t_{\rm sw} = p_2(0)/c_1$. The optimal solution thus corresponds to a single switch of the voltages. We will analyze this solution and compare it to the solutions obtained by numerical optimization below in Section~\ref{subs:apcontrol}.
\subsection{Invariant based inverse engineering}\label{subs:invmeth0} For quantum mechanical equations of motion, geometric optimal control is limited to very simple dynamics such as that of three- or four-level systems, see e.g. Ref.~\cite{HaidongPRA12}. A second analytical approach that is perfectly adapted to the quantum harmonic oscillator utilizes the Lewis-Riesenfeld theory which introduces dynamical invariants and their eigenstates~\cite{lewis1969}. This invariant-based inverse engineering approach (IEA) has recently been applied to the transport problem~\cite{TORR2011,PalmeroPRA13}. The basic idea is to compensate the inertial force occurring during the transport sequence. To this end, the potential is written in the following form: \begin{equation}\label{eqn:invpot} V(x,t) = -F(t) x + \frac{m}{2} \Omega^2(t) x^2 + \frac{1}{\rho^2(t)} U\left(\frac{x-\alpha(t)}{\rho(t)}\right)\,. \end{equation} The functions $F$, $\Omega$, $\rho$ and $\alpha$ have to fulfill constraints, \begin{eqnarray} \ddot{\rho}(t) + \Omega^2(t)\rho(t) = \frac{\Omega_0^2}{\rho^3(t)}\,,\\ \ddot{\alpha}(t) + \Omega^2(t)\alpha(t) = F(t)/m\,, \label{eqn:compforce0} \end{eqnarray} where $\Omega_0$ is a constant and $U$ an arbitrary function. We choose $\Omega(t)=\Omega_0=0$, $\rho(t) = 1$, and $\alpha(t)$ to be the transport function of Sec.~\ref{subs:guessgen}. This enables us to deduce the construction rule for $F(t)$, using Eq.~\eref{eqn:compforce0}, \begin{equation}\label{eq:F} \ddot{\alpha}(t) = F(t)/m\,, \end{equation} such that $F(t)$ compensates the inertial force given by the acceleration of the trap center.
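The compensation rule can be illustrated at the force level with a toy classical integration (assumptions: $m=1$, a rigidly moving harmonic trap of frequency $\omega$; in our scheme the force is realized via the electrode voltages $\delta U_i$ derived below). With $F(t)=m\ddot\alpha(t)$ switched on, $x(t)=\alpha(t)$ solves the classical equation of motion exactly and the final excitation vanishes:

```python
import math

# Idealized check of the compensation rule F(t) = m * alphaddot(t)
# (assumptions: m = 1, rigidly moving harmonic trap of frequency W).
W = 2 * math.pi          # trap frequency: one oscillation period per time unit
D, T = 1.0, 1.0          # transport distance and duration (one trap period)
N = 4000
DT = T / N

def alpha(t):
    s = t / T
    return D * s * s * (3 - 2 * s)        # smoothstep trap-centre path

def alphaddot(t):
    s = t / T
    return D * (6 - 12 * s) / T ** 2      # second derivative of alpha

def run(compensate):
    def acc(x, t):
        f = alphaddot(t) if compensate else 0.0
        return -W * W * (x - alpha(t)) + f
    x, v = 0.0, 0.0
    for n in range(N):                    # RK4 integration of the trajectory
        t = n * DT
        k1x, k1v = v, acc(x, t)
        k2x, k2v = v + 0.5 * DT * k1v, acc(x + 0.5 * DT * k1x, t + 0.5 * DT)
        k3x, k3v = v + 0.5 * DT * k2v, acc(x + 0.5 * DT * k2x, t + 0.5 * DT)
        k4x, k4v = v + DT * k3v, acc(x + DT * k3x, t + DT)
        x += DT * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += DT * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return 0.5 * v * v + 0.5 * W * W * (x - D) ** 2   # final excitation energy

e_plain = run(compensate=False)
e_comp = run(compensate=True)
print(f"final energy without compensation: {e_plain:.3e}, with: {e_comp:.3e}")
```

Without the compensating force, dragging the trap over one oscillation period leaves a sizeable residual oscillation; with it, the final excitation is zero to numerical precision.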
For the potential of Eq.~\eref{eqn:invpot}, the Hermitian operator \begin{equation} \Op{I} = \frac{1}{2m} \left[\rho\left(p-m\dot\alpha\right)-m\dot\rho\left(x-\alpha\right)\right]^2 +\frac{1}{2} m \Omega_0^2 \left(\frac{x-\alpha}{\rho}\right)^2 + U\left(\frac{x-\alpha}{\rho}\right) \end{equation} fulfills the invariance condition for all conceivable quantum states $\ket{\Psi(t)}$: \begin{equation} \frac{\rmd }{\rmd t }\braket{\Psi(t)|\Op{I}(t)|\Psi(t)} = 0 \quad \Leftrightarrow \quad \frac{\rmd \Op{I}}{\rmd t } = \frac{\partial \Op{I} }{\partial t} + \frac{1}{\rmi \hbar} [\Op{I}(t),\Op{H}(t)] = 0 \end{equation} with $\Op{H}$ the Hamiltonian of the ion. The requirement for transporting the initial ground state to the ground state of the trap at the final time corresponds to $\Op{H}$ and $\Op{I}$ having a common set of eigenfunctions at initial and final time. This is the case for $\dot\alpha(0)= \dot\alpha(T) = \dot{\rho}(t) =0$~\cite{DHARA1984,TORR2011}. We can now identify $U$ in Eq.~\eref{eqn:invpot} with the trapping potential of Eq.~\eref{eq:V}. The additional compensating force is generated using the same trap electrodes by applying an additional voltage $\delta U_i$. For a given transport function $\alpha(t)$ we therefore have to solve the underdetermined equation, \begin{equation} m \ddot{\alpha}(t) = -\phi_1'(x(t)) \delta U_1(t) - \phi_2'(x(t)) \delta U_2(t),\label{eqn:regul} \end{equation} where $x(t)$ is given by the classical trajectory. Since the ion is forced to follow the center of the trap we can set $x(t)=\alpha(t)$. The compensating force is supposed to be a function of time only, cf. Eq.~\eref{eq:F}, whereas changing the electrode voltages by $\delta U_i$ will, via the $\phi_i(x)$, in general yield a position-dependent force.
This leads to a modified second derivative of the actual potential: \begin{equation} m \omega_c(t)^2 =\sum_{i=1}^2\phi_i''(\alpha(t))(U_i^0(t)+\delta U_i(t)) = m( \omega^2 + \delta\omega(t)^2)\,, \end{equation} where $\delta\omega(t)^2$ denotes the change in trap frequency due to the compensation voltages $\delta U_i$, $\omega$ is the initially desired trap frequency, and $U_i^0(t)$ is found in Eq.~\eref{eqn:solguess}. A time-varying actual frequency $\omega_c(t)$ might lead to wavepacket squeezing. However, since Eq.~\eref{eqn:regul} is underdetermined, we can set $\delta\omega(t)^2 = 0$, leading to $\omega_c(t) = \omega$ as desired. With this condition we can solve Eq.~\eref{eqn:regul} and obtain \begin{equation}\label{eqn:compconstw} \delta U_i (t) = \frac{\ddot{\alpha}(t) \,(-1)^i \, m \,\phi_j''(\alpha(t))}{ \phi_2''(\alpha(t)) \phi_1'(\alpha(t)) - \phi_2'(\alpha(t))\phi_1''(\alpha(t)) }\,,~i,j \in \{1,2\}\,,~j \neq i\,. \end{equation} Note that Eq.~\eref{eqn:compconstw} depends only on the trap geometry. The transport duration $T$ enters merely as a scaling parameter via $\ddot{\alpha}(t) = \alpha''(s)/T^2$. An example of a voltage sequence obtained by this method is shown in \fref{fig:guesscfoct}(b). The voltage curves are symmetric under time inversion, like the guess voltages, which are derived from the same potential functions $\phi_i(x)$. \section{Application and comparison of the control methods}\label{subs:apcontrol} We now apply the control strategies introduced in Sec.~\ref{subs:ttcp} to a scenario with the parameters chosen to correspond to a typical experimental setting. The scaling of the classical speed limit is studied for a fixed maximum control voltage range, and we show how in the limiting case the \textit{bang-bang} solution is obtained. To verify the validity of the classical solution, we apply the obtained voltage ramps in a quantum mechanical wavepacket propagation.
Similarly, we use the invariant-based approach and verify the result for a quantum mechanical propagation. \subsection{Experimental constraints and limits to control for classical ion transport}\label{subs:kloct2} \begin{figure} \caption{Final energy vs. transport time for different voltage ramps and classical dynamics. (a) shows the improvement over the initial guess (black) by numerical optimization for a maximum voltage of 10$\,$V (blue) and (b) compares the results of numerical optimization for maximum voltages of 10$\,$V (blue), 20$\,$V (purple), and 30$\,$V (green). The spikes in (b) are due to voltage truncation. } \label{fig:transportopts} \end{figure} \begin{figure} \caption{(a) Minimum transport time $T^\mathrm{opt}_{\rm min}$ versus maximum electrode voltage $U_{\rm max}$. (b) Optimized voltage ramps of the left electrode for $U_{\rm max}=10\,$V and different transport times.} \label{fig:classlimit} \end{figure} In any experiment, there is an upper limit to the electrode voltages that can be applied. It is the range of electrode voltages that limits the maximum transport speed. Typically this range is given by $\pm$10$\,$V for technical reasons. It could be increased by the development of better voltage supplies. We define the minimum possible transport time $T_{\rm {min}}$ to be the smallest time $T$ for which less than $0.01$ phonons are excited due to the total transport. To examine how $T_{\rm min}$ scales as a function of the maximum electrode voltage $U_{\rm max}$, we have carried out numerical optimization combined with classical equations of motion. The initial guess voltages, cf. Eqs.~\eref{eqn:transfunc1} and~\eref{eqn:solguess}, were taken to preserve a constant trap frequency of $\omega = 2 \pi \cdot 1.3$~MHz for a $^{40}\rm{Ca}^+$ ion. The transport ramps were optimized for a range of maximum voltages between 10 and 150~V and transport times between 10~ns and 300~ns, with voltages truncated to $\pm\,U_{\rm max}$ during the updates. The results are shown in Figs.~\ref{fig:transportopts} and~\ref{fig:classlimit}.
Figure~\ref{fig:transportopts} depicts the final excitation energy versus transport time, comparing the initial guess (black) to an optimized ramp with $U_{\rm max}=10\,$V (blue) in Fig.~\ref{fig:transportopts}(a). For the initial guess, the final energy displays an oscillatory behavior with respect to the trap period ($T_{\rm{per}}=0.769\,\mu$s for $\omega = 2 \pi \cdot 1.3\,$MHz), as has been experimentally observed in Ref.~\cite{WALTHER2012}, and an overall decrease of the final energy for longer transport times. The optimized transport with $U_{\rm max}=10\,$V (blue line in Fig.~\ref{fig:transportopts}(a)) shows a clear speed-up of energy-neutral transport: An excitation energy of less than 0.01 phonons is obtained for $T^\mathrm{opt}_\mathrm{min}=0.284\,\mu$s compared to $T^\mathrm{guess}_\mathrm{min}=1.391\,\mu$s. The speedup increases with maximum voltage as shown in Fig.~\ref{fig:transportopts}(b). The dependence of $T^\mathrm{opt}_{\rm min}$ on $U_{\rm max}$ is studied in Fig.~\ref{fig:classlimit}(a). We find a functional dependence of \begin{equation} T^\mathrm{opt}_{\rm min}(U_{\rm max}) \approx a \left(\frac{U_{\rm max}}{1\,{\rm V}}\right)^{-b} \label{eqn:fitfn} \end{equation} with $a = 0.880(15)\,\mu$s and $b = 0.487(5)$. Optimized voltages are shown in Fig.~\ref{fig:classlimit}(b) for the left electrode with $U_{\rm{max}}=10\,$V. As the transport time decreases, the voltage ramp approaches a square shape. A bang-bang-like solution is attained at $T=280\,$ns. However, for such a short transport time, classical control of energy-neutral transport breaks down due to an insufficient voltage range and the final excitation amounts to 5703 mean phonons. In the following we show that for purely harmonic potentials, the exponent $b$ in Eq.~\eref{eqn:fitfn} is universal, i.e., it depends neither on the trap frequency nor on the ion mass.
It is solely determined by the bang-bang like optimized voltage sequences, where instantaneous switching between maximum acceleration and deceleration guarantees shuttling within minimum time. The technical feasibility of bang-bang shuttling is thoroughly analyzed in Ref.~\cite{ALONSO2013}. The solution is obtained by the application of Pontryagin's maximum principle \cite{PONTRY,CHEN2011} as discussed in Sec.~\ref{subsec:geometric} and assumes instantaneous switches. Employing Eqs.~\eref{eqn:biasmax} and~\eref{eq:p2}, the equation of motion becomes \begin{equation} \ddot{x} = \omega_0^2 u_{\rm max} \cdot \left\{ \begin{array}{cc} d , & t < t_{\rm sw}\\ -d, & t > t_{\rm sw} \end{array} \right. . \end{equation} This can be integrated to \begin{equation} x(t)= \left\{ \begin{array}{lc} x_1 + \frac{1}{2} u_{\rm max} d \omega_0^2 t^2 , & 0 \leq t \leq t_{\rm sw}\\ x_1 + d - \frac{1}{2} u_{\rm max} d \omega_0^2 (t-T_{\rm{min}})^2 , & t_{\rm sw} \leq t \leq T_{\rm{min}} \end{array} \right. \end{equation} with the boundary conditions $x(0) = x_1$, $x(T_{\rm{min}}) = x_2$ and $\dot{x}(0) = \dot{x}(T_{\rm{min}}) = 0$. Using the continuity of $\dot x$ and $x$ at $t=t_{\rm sw}$, we obtain \begin{equation} t_{\rm sw} = \frac{T_{\rm min}}{2} ,\quad T_{\rm{min}} = \frac{2}{\omega_0} \sqrt{\frac{1}{u_{\rm max}}}. \label{eqn:Ttheo} \end{equation} Notably, the minimum transport time is proportional to $u_{\rm max}^{-1/2}$, which explains the behavior of the numerical data shown in Fig.~\ref{fig:classlimit}. This scaling law can be understood intuitively by considering that in the bang-bang control approach, the minimum shuttling time is given by the shortest attainable trap period, which scales as $u_{\rm max}^{-1/2}$. Assuming a trap frequency of $\omega_0=2 \pi \cdot 0.55$~MHz in Eq.~\eref{eqn:Ttheo}, corresponding to a trapping voltage of $-1\,$V for our trap geometry, we find a prefactor $2/\omega_0 = 0.58\,\mu$s.
This is smaller than $a=0.880(15)\,\mu$s obtained by numerical optimization for realistic trap potentials. The difference can be rationalized in terms of the average acceleration provided by the potentials. For realistic trap geometries, the force exerted by the electrodes is inhomogeneous along the transport path. Mutual shielding of the electrodes reduces the electric field feedthrough of an electrode to the neighboring ones. Thus, the magnitude of the accelerating force that a real electrode can exert on the ion when it is located at a neighboring electrode is reduced with respect to a constant-force-generating harmonic potential with the same trap frequency. The minimum transport time of $T^\mathrm{opt}_\mathrm{min}=0.284\,\mu$s identified here for $U_\mathrm{max}=10\,$V, cf. the blue line in Fig.~\ref{fig:transportopts}(a), is significantly shorter than operation times realized experimentally. For comparison, an ion has recently been shuttled within $3.6\,\mu$s, leading to a final excitation of $0.10\pm0.01$ motional quanta~\cite{WALTHER2012}. Optimization may not only improve the transport time but also the stability with respect to uncertainties in the transport time. This is in contrast to the extremely narrow minima of the final excitation energy for the guess voltage ramps shown in black in Fig.~\ref{fig:transportopts}(a), implying a very high sensitivity to uncertainties in the transport time. For example, for the fourth minimum of the black curve, located at $3.795\,\mu$s and close to the operation time of Ref.~\cite{WALTHER2012} (not shown in Fig.~\ref{fig:transportopts}(a)), final excitation energies of less than 0.1 phonons are observed only within a window of 3$\,$ns. Optimization of the voltage ramps for $T=3.351\,\mu$s increases the stability against variations in transport time to more than $60\,$ns.
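The $u_{\rm max}^{-1/2}$ scaling of the bang-bang solution can be cross-checked with a toy integration (a sketch in dimensionless units, $m=1$: constant acceleration $\pm u_{\rm max}\omega_0^2 d$ with a single switch at $T/2$; the duration is bisected until the ion lands at $x_2$, where $v(T)=0$ then holds by symmetry):

```python
import math

# Toy check of the bang-bang scaling: accelerate with a = +umax*w0^2*d until
# t = T/2, decelerate with -a afterwards, and bisect the duration T until the
# ion lands exactly at x_2 (units with m = 1).
def final_position(T, umax, w0, d):
    a = umax * w0 * w0 * d               # constant acceleration magnitude
    t_sw = 0.5 * T
    x_sw = 0.5 * a * t_sw ** 2           # position at the switch
    v_sw = a * t_sw                      # velocity at the switch
    # second half: uniform deceleration, integrated exactly
    return x_sw + v_sw * t_sw - 0.5 * a * t_sw ** 2

def t_min(umax, w0, d, tol=1e-12):
    lo, hi = 0.0, 100.0                  # final_position grows monotonically
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if final_position(mid, umax, w0, d) < d:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

w0, d = 1.0, 1.0
t1 = t_min(1.0, w0, d)
t4 = t_min(4.0, w0, d)
print(f"T_min(u)={t1:.6f}, T_min(4u)={t4:.6f}, ratio={t1/t4:.4f}")
print(f"dimensionless product w0*sqrt(umax)*T_min = {math.sqrt(1.0)*w0*t1:.4f}")
```

Quadrupling $u_{\rm max}$ halves the minimum duration, and the result is independent of the transport distance $d$, confirming that only the combination $\omega_0\sqrt{u_{\rm max}}$ sets the time scale.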
In conclusion, we find that optimizing the classical motion of an ion allows us to identify the minimum operation time for a given maximum voltage and to improve the stability with respect to timing uncertainties for longer operation times. The analytical solution derived from Pontryagin's maximum principle is helpful to understand the minimum-time control strategy. Numerical optimization accounts for all typical features of realistic voltage ramps. It allows for identifying the minimum transport time, predicting $36.9\%$ of the oscillation period for current maximum voltages and a trap frequency of $\omega = 2 \pi\cdot 1.3\,$MHz. This number can be reduced to $12.2\%$ when increasing the maximum voltage by one order of magnitude. However, these predictions may be rendered invalid by a breakdown of the classical approximation. \subsection{Validity of classical solutions in the quantum regime}\label{subs:qmprop2} \begin{figure} \caption{Testing control strategies obtained with classical dynamics for wavepacket motion: (a) final excitation energy of the ion wavepacket for the initial guess (black) and for the optimized voltage ramps with $U_\mathrm{max}=10\,$V, propagated classically and quantum mechanically; (b) overlap of the final wavepacket with the target wavefunction for transport times close to the classical limit.} \label{fig:sqlimit} \end{figure} We now employ quantum wavepacket dynamics to test the classical solutions obtained in Sec.~\ref{subs:kloct2}. Provided the trap frequency is constant and the trap is perfectly harmonic, the wave function will only be displaced during the transport. For a time-varying trap frequency, however, squeezing may occur~\cite{scu11}. In extreme cases, anharmonicities of the potential might lead to wavepacket dispersion. Since these two effects are not accounted for by numerical optimization of classical dynamics, we discuss in the following at which timescales such genuine quantum effects become significant. To this end, we have employed the optimized voltages shown in Fig.~\ref{fig:classlimit}(b) in the propagation of a quantum wavepacket.
We compare the results of classical and quantum mechanical motion in Fig.~\ref{fig:sqlimit}(a), cf. the red and light blue lines. A clear deviation is observed. Also, as can be seen in Fig.~\ref{fig:sqlimit}(b), the wavefunction fails to reach the target wavefunction for transport times close to the classical limit $T^\mathrm{opt}_{\rm{min}}$. This is caused exclusively by squeezing, as can be verified by inspecting the time evolution of the wavepacket in the final potential: We find the width of the wavepacket to oscillate, indicating a squeezed state. No wavepacket dispersion effects are observed, i.e., the final wavepackets are still minimum uncertainty states, with $\min(\Delta x\cdot\Delta p) = \hbar/2$. This means that no effect of anharmonicities in the potential is observed. An impact of anharmonicities is expected once the size of the wavefunction becomes comparable to the segment distance $d$ (see Fig.~\ref{fig:electrodes}). Then the wavefunction extends over spatial regions in which the potentials deviate substantially from harmonic potentials. For the ion shuttling problem, this effect does not play a role over the relevant parameter regime. The effects of anharmonicities in the quantum regime for trapped ions were thoroughly analyzed in Ref.~\cite{HOME2011}. Squeezing increases $T_{\rm min}$ from $0.28\,\mu$s to $0.86\,\mu$s for the limit of exciting less than 0.01 phonons, see the red curve in Fig.~\ref{fig:sqlimit}(a), i.e., it only triples the minimum transport time. We show in the following section that squeezing can be suppressed altogether. \subsection{Application of a compensating force approach}\label{subs:invmeth} \begin{figure} \caption{Minimum transport time $T_{\rm min}$ versus maximum voltage $U_{\rm max}$ for the invariant-based inverse engineering approach.} \label{fig:compforcelimit} \end{figure} In the invariant-based IEA, the minimal transport time is determined by the maximum voltages that are required for attaining zero motional excitation.
The total voltage that needs to be applied is given by $U_i(t)=U_i^0(t)+\delta U_i(t)$ with $U_i^0(t)$ and $\delta U_i(t)$ found in Eqs.~\eref{eqn:solguess} and~\eref{eqn:compconstw}. The maximum of $U_i(t)$, and thus the minimum in $T$, is directly related to the acceleration of the ion provided by the transport function $\alpha(t)$, cf. Eq.~\eref{eqn:compconstw}. If the acceleration is too high, the voltages will exceed the feasibility limit $U_{\rm max}$. At this point it can also be understood why the acceleration should be zero at the beginning and end of the transport: For $\ddot\alpha(0)\neq 0$ a non-vanishing correction voltage $\delta U_i \neq 0$ is obtained from Eq.~\eref{eqn:compconstw}. This implies that the voltages do not match the initial trap conditions, where the ion should be located at the center of the initial potential. We can derive a transport function $\alpha(t)$ compliant with the boundary conditions using Eq.~\eref{eqn:transfunc1}. For this case, Fig.~\ref{fig:compforcelimit} shows the transport time $T^\mathrm{IEA}_{\rm{min}}$ versus the maximum voltage $U_{\rm{max}}$ that is applied to the electrodes during the transport sequence. For large transport times, the initial guess voltages $U_i^0(t)\propto\omega^2$ dominate the compensation voltages $\delta U_i(t)\propto\ddot\alpha(t) = \alpha''(s)/T^2$. This leads to the bend of the red curve. When the trap frequency $\omega$ is lowered, the bend decreases. For the limiting case of no confining potential, $\omega=U_i^0(t)=0$, $T^\mathrm{IEA}_\mathrm{min}$ is solely determined by the compensation voltages. In this case the same scaling of $T^\mathrm{IEA}_\mathrm{min}$ with $U_\mathrm{max}$ as for the optimization of classical dynamics is observed, cf. black and blue lines in Fig.~\ref{fig:compforcelimit}. For large $U_\mathrm{max}$, this scaling also applies to the case of non-zero trap frequency, cf.
red line in Fig.~\ref{fig:compforcelimit}. We have tested the performance of the compensating force by employing it in the time evolution of the wavefunction. It leads to near-perfect overlap with the target state, with an infidelity of less than $10^{-9}$. The final excitation energy of the propagated wave function is shown in Fig.~\ref{fig:sqlimit} (green line) for a maximum voltage of $U_{\rm max} =10\,$V. For the corresponding minimum transport time, $T^\mathrm{IEA}_{\rm min}(10\,\rm{V}) = 418\,$ns, a final excitation energy six orders of magnitude below that found by optimization of the classical dynamics is obtained. This demonstrates that the invariant-based IEA is capable of avoiding the wavepacket squeezing that was observed in Sec.~\ref{subs:qmprop2} when employing classically optimized controls in quantum dynamics. It also confirms that anharmonicities do not play a role, since these would not be accounted for by the IEA variant employed here. Note that an adaptation of the invariant-based IEA to anharmonic traps is found in Ref.~\cite{PalmeroPRA13}. Similarly to numerical optimization of classical dynamics, the IEA is capable of improving the stability against variations in the transport time $T$. The final excitation energy obtained for $T=3.351\,\mu$s stays below 0.1 phonons within a window of more than $13\,$ns. A further reduction of the minimum transport time may be achieved due to the freedom of choice in the transport function $\alpha(t)$, by employing higher polynomial orders in order to reduce the compensation voltages $\delta U_i(t)$, cf. Eq.~\eref{eqn:compconstw}. However, the fastest quantum mechanically valid transport has to be slower than the solutions obtained for classical ion motion. This follows from the bang-bang control being the time-optimal solution for a given voltage limit and the IEA solutions requiring additional voltage to compensate the wavepacket squeezing.
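For concreteness, the boundary conditions discussed above (vanishing velocity and acceleration at both ends of the transport, so that the correction voltages $\delta U_i$ vanish there) can be met by a fifth-order polynomial ansatz for $\alpha(t)$. This is a common choice in the shortcuts-to-adiabaticity literature and an assumption here, since the transport-function equation referenced in the text is not reproduced in this excerpt:

```python
# Fifth-order polynomial transport function alpha(t) over distance d and time T
# (hypothetical ansatz; the paper's alpha(t) may use a different form). Velocity
# and acceleration vanish at t = 0 and t = T, so the compensation voltages,
# which are proportional to the acceleration, vanish at both ends.
def alpha(t, T, d):
    s = t / T
    return d * (10*s**3 - 15*s**4 + 6*s**5)

def alpha_ddot(t, T, d):
    s = t / T
    return d * (60*s - 180*s**2 + 120*s**3) / T**2

T, d = 1.0, 1.0  # normalized units for illustration
assert abs(alpha(0, T, d)) < 1e-12 and abs(alpha(T, T, d) - d) < 1e-12
assert abs(alpha_ddot(0, T, d)) < 1e-12 and abs(alpha_ddot(T, T, d)) < 1e-12
```

Higher polynomial orders add free coefficients that can be used, as noted above, to reduce the peak of $\ddot\alpha(t)$ and hence the compensation voltages.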
We can thus conclude that the time-optimal quantum solution will be in between the blue and black curves of Fig.~\ref{fig:compforcelimit}. \subsection{Feasibility analysis of quantum optimal control}\label{subs:qmoct2} Numerical optimization of the wavepacket motion is expected to become necessary once the dynamics explores spatial regions in which the potential is strongly anharmonic or is subject to strongly anharmonic fluctuations. This can be expected, for example, when the spatial extent of the wavefunction is not too different from that of the trap. Correspondingly, we introduce the parameter $\xi=\sigma_0/d$, which is the wavefunction size normalized to the transport distance. While for current trap architectures such a scenario is rather unlikely, further miniaturization might lead to this regime. Also, it is currently encountered in the transport of neutral atoms in tailored optical dipole potentials~\cite{Ivanov2010,WaltherTwo2012}. Gradient-based quantum OCT requires an initial guess voltage that ensures a finite overlap of the propagated wave function $\Psi(T)$ with the target state $\Psi^{\rm tgt}$, see Eq.~\eref{eqn:chioft}. Otherwise, the amplitude of the co-state $\chi$ vanishes. The overlap can also be analyzed in terms of phase space volume. For a typical ion trap setting with parameters as in Fig.~\ref{fig:electrodes}, the total covered phase space volume in units of Planck's constant is $m\,d^2\,\omega/(2\pi h) \approx 10^7$. This leads to very slow convergence of the optimization algorithm, unless an extremely good initial guess is available. \begin{figure} \caption{(a) Mean improvement of the optimization functional, $\overline{\Delta J}$, versus the scale parameter $\xi$.} \label{fig:qmconv} \end{figure} We utilize the results of the optimization for classical dynamics of Sec.~\ref{subs:kloct2} as initial guess ramps for optimizing the wavepacket dynamics and investigate the convergence rate as a function of the system dimension, i.e., of $\xi$.
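To put the scale parameter in perspective, the ground-state wavepacket size is $\sigma_0=\sqrt{\hbar/(2m\omega)}$; for a $^{40}$Ca$^+$ ion (an assumption for illustration; the ion species is not specified in this excerpt) at $\omega=2\pi\cdot 1.3\,$MHz this gives roughly $10\,$nm, so $\xi\approx 0.05$ would correspond to transport distances of a few hundred nanometers:

```python
import math

hbar = 1.054571817e-34        # J s
m = 40 * 1.66053906660e-27    # kg, assuming a 40Ca+ ion (not stated in the text)
omega = 2 * math.pi * 1.3e6   # rad/s, trap frequency from the text

sigma0 = math.sqrt(hbar / (2 * m * omega))  # ground-state wavepacket size
print(sigma0)                               # ~1e-8 m, i.e. about 10 nm
print(sigma0 / 0.05)                        # distance d giving xi = 0.05: ~2e-7 m
```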
The results are shown in Fig.~\ref{fig:qmconv}(a), plotting the mean improvement per optimization step, $\Delta J$, averaged over 100 iterations, versus the scale parameter $\xi$. We computed the convergence rate $\overline{\Delta J}$ for different, fixed optimization weights $\lambda_a$ in Eq.~\eref{eqn:krotovupdate}. The curves in Fig.~\ref{fig:qmconv}(a) are truncated for large values of $\overline{\Delta J}$, where the algorithm becomes numerically unstable. Values below $\overline{\Delta J}=10^{-6}$ (dashed grey line in Fig.~\ref{fig:qmconv}(a)) indicate an insufficient convergence rate for which no significant gain of fidelity is obtained with reasonable computational resources. In this case the potentials are insufficiently anharmonic to provide \textit{quantum} control of the wavefunction. Numerical optimization of the wavepacket dynamics is applicable and useful for scale parameters of $\xi\approx 0.05$ and larger, indicated by arrows (2) and (3) in Fig.~\ref{fig:qmconv}(a). Then the wavefunction size becomes comparable to the transport distance, leading for example to a phase space volume of around $10\,h$ for arrow (2). At this scale the force becomes inhomogeneous across the wavepacket. This leads to a breakdown of the IEA, as illustrated for $\xi = 0.4$ in Figs.~\ref{fig:qmconv}(b) and~\ref{fig:schema}. The fidelity $\mathfrak{F}_{\rm{IEA}}$ for the IEA drops below $94.6\%$, whereas $\mathfrak{F}_{\rm{qOCT}}=0.999$ is achieved by numerical optimization of the quantum dynamics. \begin{figure} \caption{Limitation of the compensating force approach. A force inhomogeneity $\Delta F = \sum_i[\phi_i'(\alpha(t)+\sigma_0)-\phi_i'(\alpha(t)-\sigma_0)]\delta U_i(t)$ across the wavefunction is caused by anharmonicities of the potential $\Delta V = F(t)\, x$ used to implement the compensating force.
The relative spread of the force $\Delta F/F$ across the wavefunction is taken at the point in time where the acceleration $\ddot\alpha(t)$ is maximal.} \label{fig:schema} \end{figure} \section{Summary and Conclusions}\label{sec:concl} Manipulation of motional degrees of freedom is very widespread in trapped-ion experiments. However, most theoretical calculations involving ion transport over significant distances are based on approximations that in general do not guarantee the level of precision needed for high-fidelity quantum control, especially in view of applications in the context of quantum technologies. As a consequence, before our work little was known about how to apply optimal control theory to large-scale manipulation of ion motion in traps, concerning in particular the most efficient simulation and control methods to be employed in different parameter regimes, as well as the level of improvement that optimization could bring. With this in mind, in the present work we have investigated the applicability of several classical and quantum control techniques for the problem of moving an ion across a trap in a fast and accurate way. When describing the ion dynamics purely classically, numerical optimization yields transport times significantly shorter than a trapping period. The minimum transport duration depends on the maximal electrode voltage that can be applied and was found to scale as $1/\sqrt{U_{\rm{max}}}$. The same scaling is observed for time-optimal bang-bang-like solutions that can be derived using Pontryagin's maximum principle and assuming perfectly harmonic traps. Not surprisingly, the classically optimized solutions were found to fail when tested in quantum wavepacket motion for transport durations of about one third of a trapping period. Wavepacket squeezing turns out to be the dominant source of error, with the final wavepacket remaining a minimum uncertainty state.
Anharmonic effects were found to play no significant role for single-ion shuttling over a wide range of parameters. Wavepacket squeezing can be perfectly compensated by the control strategy obtained with the invariant-based inverse engineering approach. It amounts to applying correction voltages which can be generated by the trapping electrodes and which exert a compensating force on the ion. This is found to be the method of choice for current experimental settings. Control methods not only allow us to assess the minimum time required for ion transport but can also yield more robust solutions. For transport times that have been used in recent experiments~\cite{WALTHER2012}, significantly larger than the minimum times identified here, the classical solutions are valid also for the quantum dynamics. In this regime, both numerical optimization of classical ion motion and the inverse engineering approach yield a significant improvement of stability against uncertainties in transport time. Compared to the initial guess voltages, the time window within which less than 0.1 phonons are excited after transport is increased by a factor of twenty for numerical optimization and a factor of five for the inverse engineering approach. Further miniaturization is expected to yield trapping potentials where the wavepacket samples regions of space in which the potential, or potential fluctuations, are strongly anharmonic. Also, for large motional excitations recent experiments have shown nonlinear Duffing oscillator behavior \cite{AKERMAN2010}, nonlinear coupling of modes in linear ion crystals \cite{ROOS2008,nie2009theory} and amplitude-dependent modifications of normal-mode frequencies and amplitudes due to nonlinearities \cite{HOMENJP}. In these cases, numerical optimization of the ion's quantum dynamics presents itself as a well-adapted and efficient approach capable of providing high-fidelity control solutions.
The results presented in this paper provide us with a systematic recipe, based on a single parameter (the relative wave packet size $\xi$), to assess which simulation and control methods are best suited in different regimes. We observe a crossover between the applicability of the invariant-based IEA, for a very small wavefunction extension, and that of quantum OCT, when the width of the wave function becomes comparable with the extension of the potential. Both methods combined cover the full range of conceivable trap parameters. That is, no matter what the trapping parameters are, control solutions for fast, high-fidelity transport are available. In particular, in the regime $\xi\ll 1$, relevant for ion transport in chip traps, solutions obtained with the inverse engineering approach are fully adequate for the purpose of achieving high-fidelity quantum operations. This provides a major advantage in terms of efficiency over optimization algorithms based on the solution of the Schrödinger equation. The latter in turn become indispensable when processes involving motional excitations inside the trap and/or other anharmonic effects are relevant. In this case, the numerical quantum OCT method demonstrated in this paper provides a comprehensive way to deal with the manipulation of the ions' external states. \ack KS, UP, HAF and FSK thank Juan Gonzalo Muga and Mikel Palmero for discussions about the invariant-based approach. HAF thanks Henning Kaufmann for useful contributions to the numerical framework. The Mainz team acknowledges financial support by the Volkswagen-Stiftung, the DFG-Forschergruppe (FOR 1493) and the EU-projects DIAMANT (FP7-ICT), IP-SIQS, the IARPA MQCO project and the MPNS COST Action MP1209. MHG and CPK are grateful to the DAAD for financial support. SM, FSK and TC acknowledge support from EU-projects SIQS, DIAMANT and PICC and from the DFG SFB/TRR21.
MHG, SM, TC and CPK enjoyed the hospitality of KITP and acknowledge support in part by the National Science Foundation under Grant No. NSF PHY11-25915. \appendix \section{Quantum wavepacket propagation with a moving Fourier grid}\label{subs:qmprop} For transport processes using realistic trap parameters, naive application of the standard Fourier grid method~\cite{RonnieReview88,RonnieReview94} will lead to unfeasible grid sizes. This is due to the transport distance being usually 3 to 5 orders of magnitude larger than the spatial width of the wavepacket and possible acceleration of the wavepacket requiring a sufficiently dense coordinate space grid. To limit the number of grid points, a \emph{moving grid} is introduced. Instead of using a spatial grid that covers the entire transport distance, the grid is defined to only contain the initial wavepacket, in a window between $x_{\min}$ and $x_{\max}$. The wavepacket $\Psi(x, t_0)$ is now propagated for a single time step to $\Psi(x, t_0+dt)$. For the propagated wave function, the expectation value \begin{equation} \left\langle x \right\rangle = \int_{x_{\min}}^{x_{\max}} \Psi^{*}(x, t_0 + dt)\, x \, \Psi(x, t_0 + dt) \ensuremath{\, \textnormal{d}} x \end{equation} is calculated, and from that an offset is obtained, \begin{equation} \bar{x} = \left\langle x \right\rangle - \frac{x_{\max} - x_{\min}}{2}\,, \end{equation} by which $x_{\min}$ and $x_{\max}$ are shifted. The wavepacket is now moved to the center of the new grid, and the propagation continues to the next time step. The same idea can also be applied to momentum space. After the propagation step, the expectation value $\left\langle k \right\rangle$ is calculated and stored as an offset $\bar{k}$. The wave function is then shifted in momentum space by this offset, which is achieved by multiplying it by $e^{-i\bar{k}x}$. This cancels out the fast oscillations in $\Psi(x,t_0 + dt)$. 
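The re-centering bookkeeping described above can be sketched as follows (a simplified illustration, not the authors' implementation; the propagation step itself and the re-interpolation onto the shifted window are omitted):

```python
import numpy as np

def recenter(psi, x):
    """One moving-grid step: shift the window to the wavepacket's mean position
    and remove the mean momentum by a phase factor (cancels fast oscillations).
    The returned offset k_bar enters the kinetic operator as (k + k_bar)^2/2m."""
    dx = x[1] - x[0]
    prob = np.abs(psi)**2
    prob = prob / prob.sum()
    x_mean = np.sum(prob * x)                    # <x> on the current window
    x_bar = x_mean - 0.5 * (x[0] + x[-1])        # offset of <x> from window center
    x_new = x + x_bar                            # shift x_min and x_max by the offset
    k = 2*np.pi * np.fft.fftfreq(x.size, d=dx)
    pk = np.abs(np.fft.fft(psi))**2
    k_bar = np.sum(pk * k) / pk.sum()            # mean momentum offset <k>
    psi_new = psi * np.exp(-1j * k_bar * x)      # shift in momentum space
    return psi_new, x_new, k_bar

# Example: displaced Gaussian moving with mean wavenumber k0 = 3
x = np.linspace(-10.0, 10.0, 1024)
psi = np.exp(-(x - 2.0)**2) * np.exp(1j * 3.0 * x)
psi_new, x_new, k_bar = recenter(psi, x)         # new window centered on <x> = 2
```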
When applying the kinetic operator in the next propagation step, the offset has to be taken into account, i.e., the kinetic operator in momentum space becomes $(k+\bar{k})^2/2m$. The combination of the moving grid in coordinate and momentum space allows one to choose the grid window with the sole requirement that it be larger than the extension of the wavepacket at any point of the propagation. We typically find 100 grid points to be sufficient to represent the acceleration within a single time step. The procedure is illustrated in Fig.~\ref{fig:moving_grid} and the steps of the algorithm are summarized in Table~\ref{tab:howtomovgrid}. \begin{figure} \caption{Illustration of the moving grid procedure: the propagation of the wave function $\Psi(x,t_{0})$ is carried out on a spatial window that is shifted along with the wavepacket.} \label{fig:moving_grid} \end{figure} \begin{table}[tbp] \caption{\label{tab:howtomovgrid}Necessary steps for wavepacket propagation over long distances.} \centering \begin{tabular} {l l l l } \br & & Mathematical step& Possible implementation \\ \mr 1. & Calculate position mean & $\langle x\rangle=\bra{\Psi}\Op{x}\ket{\Psi}$ & $\langle x\rangle=\sum_i x_i \Psi_i^*\Psi_i$ \\ 2. & Transform to momentum space & & $\{\Phi_i\}=\mathcal{FFT}(\{\Psi_i\})$ \\ 3. & Calculate momentum mean & $\langle p\rangle=\bra{\Psi}\Op{p}\ket{\Psi}$ & $\langle p\rangle=\sum_i \hbar k_i \Phi_i^*\Phi_i$ \\ 4. & Shift position & $\ket{\Psi}\rightarrow\exp\left(\frac{i}{\hbar}\langle x\rangle \Op{p}\right)\ket{\Psi}$ & $\Phi_i\rightarrow\exp\left(i k_i \langle x\rangle\right)\Phi_i$\\ 5. & Transform to position space & & $\{\Psi_i\}=\mathcal{FFT}^{-1}(\{\Phi_i\})$ \\ 6. & Shift momentum & $\ket{\Psi}\rightarrow\exp\left(\frac{i}{\hbar}\langle p\rangle \Op{x}\right)\ket{\Psi}$ & $\Psi_i\rightarrow\exp\left(\frac{i}{\hbar} \langle p\rangle x_i\right)\Psi_i$\\ 7.
& Update classical quantities & & $x_{cl}+=\langle x\rangle, p_{cl}+=\langle p\rangle$\\ \br \end{tabular} \end{table} \section*{References} \providecommand{\newblock}{} \end{document}
\begin{document} \title{Rankin-Selberg integrals for $\SO_{2n+1}\times{\rm GL}_r$} \begin{abstract} The conjectural newform theory for generic representations of $p$-adic ${\rm SO}_{2n+1}$ was formulated by P.-Y. Tsai in her thesis, in which Tsai also verified the conjecture when the representations are supercuspidal. The main purpose of this work is to compute the Rankin-Selberg integrals for ${\rm SO}_{2n+1}{\times}{\rm GL}_r$ with $1\le r\le n$ attached to newforms and also oldforms under the validity of the conjecture. \end{abstract} \section{Introduction} The theory of newforms was first developed by Atkin-Lehner (\cite{AtkinLehner1970}) in the context of modular forms, and then by Casselman (\cite{Casselman1973}) in the context of generic representations of $p$-adic ${\rm GL}_2$. In the modular form setting, newforms are cusp forms of level $\Gamma_0(N)$ which are simultaneously eigenfunctions of all Hecke operators. Consequently, their Fourier coefficients satisfy strong recurrence relations and hence the associated $L$-functions admit an Euler product expansion. One also has the notion of oldforms. These are cusp forms obtained from newforms of lower level via certain level raising procedures.\\ In the local $p$-adic setting, we are given an irreducible smooth generic (complex) representation $(\pi,{\mathcal V}_\pi)$ of ${\rm GL}_2$ over a $p$-adic field $F$, and a descending family of open compact subgroups $\Gamma_0(\frak{p}^m)$ of ${\rm GL}_2(F)$ indexed by integers $m\ge 0$ with $\Gamma_0(\frak{o})={\rm GL}_2(\frak{o})$. Here $\frak{o}$ is the valuation ring of $F$ and $\frak{p}$ is the maximal ideal of $\frak{o}$. Then Casselman proved that there exists $a_\pi\ge 0$ such that ${\mathcal V}_\pi^{\Gamma_0(\frak{p}^m)}=0$ if $0\le m<a_\pi$ and ${\rm dim}_{\mathbb{C}}{\mathcal V}_\pi^{\Gamma_0(\frak{p}^{a_\pi})}=1$. Nonzero elements in ${\mathcal V}_\pi^{\Gamma_0(\frak{p}^{a_\pi})}$ are called newforms of $\pi$.
Oldforms are elements in ${\mathcal V}_\pi^{\Gamma_0(\frak{p}^m)}$ with $m>a_\pi$, and can be obtained from newforms via certain level raising operators. Casselman's results were subsequently extended to generic representations of ${\rm GL}_r$ by Jacquet, Piatetski-Shapiro and Shalika (\cite{JPSS1981}, \cite{Jacquet2012}, see also \cite{Matringe2013}) and Reeder (\cite{Reeder1991}). Since $\Gamma_0(\frak{o})={\rm GL}_2(\frak{o})$ (which also holds for $r>2$), newforms can be viewed as a generalization of spherical elements in unramified representations to arbitrary generic representations. In addition to this, they also possess the following important features which lead to their applications in number theory and representation theory. First, the integer $a_\pi$ is encoded in the exponent of the $\epsilon$-factor of $\pi$. Second, if one computes the zeta integrals in \cite{JLbook}, \cite{JPSS1983} attached to newforms, then one obtains the $L$-factor of $\pi$ (\cite{Matringe2013}, \cite{Miyauchi2014}). In particular, the Whittaker functional of $\pi$ is nontrivial on the space of newforms. And third, explicit formulae for Whittaker functions associated to newforms on the diagonal matrices in ${\rm GL}_r(F)$ can be obtained (\cite{Shintani1976}, \cite{Schmidt2002}, \cite{Miyauchi2014}).\\ In \cite{RobertsSchmidt2007}, Roberts-Schmidt generalized Casselman's results to generic representations of ${\rm GSp}_4$ in the same spirit. Their results are based on what they called the $paramodular$ $subgroups$ $K(\frak{p}^m)$. These are open compact subgroups of ${\rm GSp}_4(F)$ indexed by integers $m\ge 0$ with $K(\frak{o})={\rm GSp}_4(\frak{o})$. Note, however, that paramodular subgroups do not form a descending chain. Now let $(\pi,{\mathcal V}_\pi)$ be an irreducible smooth $generic$ representation of ${\rm GSp}_4(F)$ with $trivial$ central character.
Roberts-Schmidt proved that there exists an integer $a_\pi\ge 0$ such that the spaces ${\mathcal V}_\pi^{K(\frak{p}^{m})}=0$ if $0\le m<a_\pi$ and ${\rm dim}_{\mathbb{C}}{\mathcal V}_\pi^{K(\frak{p}^{a_\pi})}=1$. Again, nonzero elements in ${\mathcal V}_\pi^{K(\frak{p}^{a_\pi})}$ are called newforms of $\pi$, and elements in ${\mathcal V}_\pi^{K(\frak{p}^m)}$ with $m>a_\pi$ are called oldforms. They also showed that the spaces ${\mathcal V}_{\pi}^{K(\frak{p}^m)}$ (with $m>a_\pi$) admit the following bases \begin{equation}\label{E:oldform} \theta'^i\theta^j\eta^k (v_\pi)\quad\text{for integers $i,j,k\geq 0$ with $i+j+2k=m-a_\pi$}. \end{equation} Here $v_\pi$ is a newform of $\pi$ and $\theta, \theta':{\mathcal V}_\pi^{K(\frak{p}^m)}\to{\mathcal V}_\pi^{K(\frak{p}^{m+1})}$ and $\eta:{\mathcal V}_\pi^{K(\frak{p}^m)}\to{\mathcal V}_\pi^{K(\frak{p}^{m+2})}$ are the level raising operators defined in their book, see also \S\ref{SSS:level raising}. In particular, the dimension of ${\mathcal V}_\pi^{K(\frak{p}^m)}$ can be easily computed.\\ Roberts-Schmidt in addition computed Novodvorsky's zeta integrals for ${\rm GSp}_4{\times}{\rm GL}_1$ (\cite{Novodvorsky1979}) attached to newforms and also oldforms. They showed that \begin{equation}\label{E:zeta newform n=2} Z(s,v_\pi)=\Lambda_{\pi,\psi}(v_\pi)L(s,\pi)\neq 0 \end{equation} and \begin{equation}\label{E:zeta oldform n=2} Z(s,\theta(v))=q^{-s+\frac{3}{2}}Z(s,v), \quad Z(s,\theta'(v))=qZ(s,v), \quad Z(s,\eta(v))=0 \end{equation} for $v\in{\mathcal V}_\pi^{K(\frak{p}^m)}$ (any $m\ge 0$).
Here $q$ is the cardinality of the residue field of $F$, $\Lambda_{\pi,\psi}$ is the Whittaker functional on $\pi$ (depending on an additive character $\psi$ of $F$), $Z(s,v)$ is Novodvorsky's zeta integral attached to $v$, and $L(s,\pi)$ is the $L$-factor of $\pi$ defined by Novodvorsky's local integrals. In particular, \eqref{E:zeta newform n=2} implies that the Whittaker functional is nontrivial on the space of newforms, and one can use \eqref{E:oldform}, \eqref{E:zeta newform n=2} and \eqref{E:zeta oldform n=2} to compute $Z(s,v)$ for every $paramodular$ $vector$ $v$ of $\pi$, i.e., $v$ is contained in ${\mathcal V}_\pi^{K(\frak{p}^m)}$ for some $m$. The integer $a_\pi$ also has a meaning: it appears in the exponent of the $\epsilon$-factor $\epsilon(s,\pi,\psi)$ of $\pi$ defined by Novodvorsky's zeta integrals.\\ By the accidental isomorphisms ${\rm SO}_3\cong{\rm PGL}_2$ and ${\rm SO}_5\cong{\rm PGSp}_4$, the results of Casselman and Roberts-Schmidt can be viewed as the theory of newforms for ${\rm SO}_3$ and ${\rm SO}_5$, respectively. In her Harvard thesis (\cite{Tsai2013}), Tsai defined a family of open compact subgroups $K_{n,m}$ (cf. \S\ref{SS:paramodular subgroup}) of (split) ${\rm SO}_{2n+1}(F)$ such that $K_{1,m}\cong\Gamma_0(\frak{p}^m)$ and $K_{2,m}\cong K(\frak{p}^m)$, and extended the theory of newforms to irreducible smooth generic $supercuspidal$ representations of ${\rm SO}_{2n+1}(F)$ (for all $n$). Based on this, Tsai then proposed a conjectural newform theory for generic representations of ${\rm SO}_{2n+1}(F)$ in \cite{Tsai2013}, \cite{Tsai2016}. We will review the conjecture in more detail in the next subsection (modulo the definition of $K_{n,m}$).
The local conjecture has a global counterpart, which was formulated by Gross (\cite{Gross2015}) with an eye toward giving a refinement of the global Langlands correspondence for discrete symplectic motives of rank $2n$ over $\mathbb{Q}$.\\ The aim of this work is to investigate the newform conjecture and compute the (local) Rankin-Selberg integrals for ${\rm SO}_{2n+1}{\times}{\rm GL}_r$ with $1\le r\le n$ attached to (conjectural) newforms and oldforms in generic representations of ${\rm SO}_{2n+1}(F)$ under Hypothesis \ref{H} stated in \S\ref{SS:main}. These Rankin-Selberg integrals were developed by Gelbart and Piatetski-Shapiro (\cite{GPSR1987}), Ginzburg (\cite{Ginzburg1990}) and Soudry (\cite{Soudry1993}), and had already played an important role in Tsai's work. Now let us describe our results in more detail. \subsection{Newform conjecture} Let $G_n={\rm SO}_{2n+1}$ be the split odd special orthogonal group of rank $n\ge 1$ defined over $F$ and $U_n\subset G_n$ be the upper triangular maximal unipotent subgroup. Let $\psi$ be an additive character of $F$ with $\ker(\psi)=\frak{o}$. Then we have a non-degenerate character $\psi_{U_n}: U_n(F)\to\mathbb{C}^{\times}$ (cf. \S\ref{SSS:SO_{2n+1}}). Let $(\pi,{\mathcal V}_\pi)$ be an irreducible smooth generic representation of $G_n(F)$. We fix a nonzero Whittaker functional $\Lambda_{\pi,\psi}\in {\rm Hom}_{U_n(F)}(\pi,\psi_{U_n})$. By the results of Jiang and Soudry (\cite{JiangSoudry2003}, \cite{JiangSoudry2004}), we can attach to $\pi$ an $L$-parameter $\phi_\pi$.
Then after composing $\phi_\pi$ with the inclusion ${}^LG_n^0(\mathbb{C})={\rm Sp}_{2n}(\mathbb{C})\hookrightarrow{\rm GL}_{2n}(\mathbb{C})$, we get the $\epsilon$-factor $\epsilon(s,\phi_\pi,\psi)$ (\cite{Tate1979}) associated to $\phi_\pi$ and $\psi$, which can be written as \begin{equation}\label{E:epsilon for pi} \epsilon(s,\phi_\pi,\psi)=\varepsilon_\pi q^{-a_\pi(s-\frac{1}{2})} \end{equation} for some $\varepsilon_\pi\in\{\pm 1\}$ and an integer $a_\pi\ge 0$. Since $\pi$ is generic, one has ${\mathcal V}_\pi^{K_{n,m}}\neq 0$ for some $m\ge 0$ (cf. \lmref{L:existence}). Then we have the following conjecture due to Tsai (\cite[Conjecture 1.2.8]{Tsai2013}, \cite[Conjecture 7.8]{Tsai2016}). \begin{conj}[Tsai]\label{C1} \noindent \begin{itemize} \item[(1)] The dimension of ${\mathcal V}_\pi^{K_{n,m}}$ is given by \begin{equation}\label{E:dim formula} {\rm dim}_{\mathbb{C}}{\mathcal V}_\pi^{K_{n,m}} = \begin{pmatrix} n+\lfloor\frac{m-a_\pi}{2}\rfloor\\n \end{pmatrix} + \begin{pmatrix} n+\lfloor\frac{m-a_\pi+1}{2}\rfloor-1\\n \end{pmatrix}. \end{equation} In particular, ${\mathcal V}_\pi^{K_{n,m}}=0$ if $0\le m<a_\pi$ and ${\mathcal V}_\pi^{K_{n,a_\pi}}$ is one-dimensional. \item[(2)] The action of the quotient group $J_{n,a_\pi}/K_{n,a_\pi}$ on ${\mathcal V}_\pi^{K_{n,a_\pi}}$ gives $\varepsilon_\pi$. \item[(3)] The Whittaker functional $\Lambda_{\pi,\psi}$ is nontrivial on ${\mathcal V}^{K_{n,a_\pi}}_\pi$. \end{itemize} \end{conj} Here $J_{n,m}\supset K_{n,m}$ is another family of open compact subgroups of $G_n(F)$ with $J_{n,0}=K_{n,0}=G_n(\frak{o})$ and such that $[J_{n,m}:K_{n,m}]=2$ when $m>0$ (cf. \S\ref{SS:paramodular subgroup}).
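As a quick consistency check for the reader (an aside, not part of the paper's argument): for $n=2$, via the identification $K_{2,m}\cong K(\frak{p}^m)$ quoted above, the right-hand side of the conjectural dimension formula agrees with the number of Roberts-Schmidt basis vectors $\theta'^i\theta^j\eta^k(v_\pi)$ with $i+j+2k=m-a_\pi$. A short script verifying this:

```python
from math import comb

def dim_conjecture(n, delta):
    # right-hand side of the conjectural dimension formula, delta = m - a_pi >= 0
    return comb(n + delta // 2, n) + comb(n + (delta + 1) // 2 - 1, n)

def dim_roberts_schmidt(delta):
    # number of triples (i, j, k) >= 0 with i + j + 2k = delta
    return sum(delta - 2 * k + 1 for k in range(delta // 2 + 1))

for delta in range(20):
    assert dim_conjecture(2, delta) == dim_roberts_schmidt(delta)
print("n = 2 dimension formula matches the oldform count")
```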
\begin{remark} Conjecture \ref{C1} holds when $n=1,2$ by the results of Casselman (\cite{Casselman1973}) and Roberts-Schmidt (\cite{RobertsSchmidt2007}), see also \lmref{L:conj for n=2}. On the other hand, suppose that $\pi$ is supercuspidal (any $n\ge 1$). Then in \cite{Tsai2013}, Tsai proved that \eqref{E:dim formula} holds for $0\le m\le a_\pi$. For $m>a_\pi$, she showed that ${\rm dim}_{\mathbb{C}}{\mathcal V}_\pi^{K_{n,m}}$ is greater than or equal to the RHS of \eqref{E:dim formula}, and she conjectured that they should be equal. In addition, Conjecture \ref{C1} $(2)$ and $(3)$ for $\pi$ were also verified by Tsai. \end{remark} \begin{remark} As far as we know, Tsai did not formulate the conjecture in this form (in the literature). In fact, she only gave partial conjectures in \cite[Conjecture 1.2.8]{Tsai2013} and \cite[Conjecture 7.8]{Tsai2016}. However, she did point out that all these are expected to be true for all $\pi$ in the same references. \end{remark} \subsection{Rankin-Selberg integrals} Let $Z_r\subset{\rm GL}_r$ be the upper triangular maximal unipotent subgroup and $\bar{\psi}_{Z_r}:Z_r(F)\to\mathbb{C}^{\times}$ be a non-degenerate character (cf. \eqref{E:psi_Z}). Let $(\tau,{\mathcal V}_\tau)$ be a smooth representation of ${\rm GL}_r(F)$ with finite length. We assume that the space ${\rm Hom}_{Z_r(F)}(\tau,\bar{\psi}_{Z_r})$ is one-dimensional, in which we fix a nonzero element $\Lambda_{\tau,\bar{\psi}}$. Let $H_r={\rm SO}_{2r}$ be the split even special orthogonal group defined over $F$ and $Q_r\subset H_r$ be a Siegel parabolic subgroup whose Levi subgroup is isomorphic to ${\rm GL}_r$.
Given a complex number $s$, we have a normalized induced representation $\rho_{\tau,s}$ of $H_r(F)$ with the underlying space $I_r(\tau,s)$, which consists of smooth functions $\xi_s: H_r(F)\to{\mathcal V}_\tau$ satisfying the usual rule (cf. \S\ref{SS:ind rep}). In this work, we always assume that $1\le r\le n$. Then $H_r(F)$ can be regarded as a subgroup of $G_n(F)$ via a natural embedding (cf. \eqref{E:embedding}) and we have the Rankin-Selberg integrals $\Psi_{n,r}(v\otimes\xi_s)$ attached to $v\in{\mathcal V}_\pi$ and $\xi_s\in I_r(\tau,s)$ (cf. \S\ref{SSS:RS integral and FE}). \subsection{Main results}\label{SS:main} Let $\tau$ be an unramified representation of ${\rm GL}_r(F)$ that is induced of Langlands type (cf. \S\ref{SS:Langlands type}). This includes all (classes of) irreducible smooth generic unramified representations of ${\rm GL}_r(F)$, and the space ${\mathcal V}^{{\rm GL}_r(\frak{o})}_\tau$ is one-dimensional. Moreover, $\tau$ admits a unique (up to scalars) nonzero Whittaker functional $\Lambda_{\tau,\bar{\psi}}$ which is nontrivial on ${\mathcal V}_{\tau}^{{\rm GL}_r(\frak{o})}$ (\cite[Section 1]{Jacquet2012}). We fix an element $v_\tau\in{\mathcal V}_\tau^{{\rm GL}_r(\frak{o})}$ with \[ \Lambda_{\tau,\bar{\psi}}(v_\tau)=1. \] Let $J(\tau)$ be the unique irreducible unramified quotient of $\tau$ (\cite[Corollary 1.2]{Matringe2013}) and $\phi_{J(\tau)}$ be the $L$-parameter of $J(\tau)$ under the local Langlands correspondence for ${\rm GL}_r$ (\cite{HarrisTaylor2001}, \cite{Henniart2000}, \cite{Scholze2013}). Put $R_{r,m}=K_{n,m}\cap H_r(F)$ for $m\ge 0$. Then the properties of $R_{r,m}$ (cf.
\S\ref{SS:R_r,m}) imply that the space $I_r(\tau,s)^{R_{r,m}}$ is one-dimensional and admits a generator $\xi^m_{\tau,s}$ with $\xi^m_{\tau,s}(I_{2r})=v_\tau$.\\ We impose the following hypothesis on $\pi$, which holds when (i) $n=1,2$ (any $\pi$) and (ii) $\pi$ is supercuspidal or unramified (any $n$). \begin{hypA}\label{H} The space ${\mathcal V}_\pi^{K_{n,a_\pi}}$ is one-dimensional and $\Lambda_{\pi,\psi}$ is nontrivial on ${\mathcal V}_{\pi}^{K_{n,a_\pi}}$. \end{hypA} Now our first result can be stated as follows. \begin{thm}\label{T:main} Under Hypothesis \ref{H}, we have \begin{equation}\label{E:dim bd} \dim_{\mathbb{C}}{\mathcal V}_\pi^{K_{n,m}} \ge \binom{n+\lfloor\frac{m-a_\pi}{2}\rfloor}{n} + \binom{n+\lfloor\frac{m-a_\pi+1}{2}\rfloor-1}{n} \end{equation} for $m>a_\pi$, and the action of the quotient group $J_{n,a_\pi}/K_{n,a_\pi}$ on ${\mathcal V}_\pi^{K_{n,a_\pi}}$ gives $\varepsilon_\pi$. Moreover, if $v_\pi$ is an element in ${\mathcal V}_\pi^{K_{n,a_\pi}}$ with $\Lambda_{\pi,\psi}(v_\pi)=1$, then we have\footnote{After this work was completed, David Loeffler informed the author that he had obtained a similar identity (in his unpublished notes) by computing Novodvorsky's local integrals (\cite{Novodvorsky1979}) for ${\rm GSp}_4\times{\rm GL}_2$ attached to newforms (of ${\rm GSp}_4$).} \begin{equation}\label{E:main eqn} \Psi_{n,r}(v_\pi\otimes\xi^{a_\pi}_{\tau,s}) = \frac{L(s,\phi_\pi\otimes\phi_{J(\tau)})}{L(2s,\phi_{J(\tau)},\bigwedge^2)} \end{equation} provided that the Haar measures are properly chosen (cf. \S\ref{SSS:Haar}). \end{thm} \begin{remark} In \cite{Tsai2013}, Tsai proved that \eqref{E:main eqn} holds when $\pi$ is supercuspidal and $r=1,n$.
Note that in these cases, $L(s,\phi_\pi\otimes\phi_{J(\tau)})=1$. On the other hand, when $\pi$ is unramified, the identity \eqref{E:main eqn} was first obtained by Gelbart and Piatetski-Shapiro (\cite[Proposition A.1]{GPSR1987}) for $r=n$, and then by Ginzburg (\cite[Theorem B]{Ginzburg1990}) for $r<n$. \end{remark} To prove \eqref{E:dim bd}, we define subsets $\EuScript B_{\pi,m}\subset{\mathcal V}_\pi^{K_{n,m}}$ (similar to Tsai's) in \S\ref{SS:conj basis} and show that under Hypothesis \ref{H}, the sets $\EuScript B_{\pi,m}$ are linearly independent and have cardinality given by the RHS of \eqref{E:dim formula} (and hence verify \eqref{E:dim bd}). In particular, if Conjecture \ref{C1} $(1)$ holds, then $\EuScript B_{\pi,m}$ define bases of ${\mathcal V}_\pi^{K_{n,m}}$ for $m>a_\pi$. In this work, we also compute the integrals $\Psi_{n,r}(v\otimes\xi_{\tau,s}^m)$ for $v\in\EuScript B_{\pi,m}$ and $1\le r\le n$ (cf. Theorem \ref{T:main for oldform}). It should be noted that although we only compute the Rankin-Selberg integrals when $\tau$ is unramified, this turns out to be enough. In fact, we will show (without Hypothesis \ref{H}) that if $v$ is a nonzero element in ${\mathcal V}_\pi^{K_{n, m}}$ (for some $m$) and $\tau$ is ramified, then $\Psi_{n,r}(v\otimes\xi_s)=0$ for all $\xi_s\in I_r(\tau,s)$ and $1\le r\le n$ (cf. Lemma \ref{L:vanish of RS integral}).\\ When $n=2$, we have other bases of oldforms defined by \eqref{E:oldform}. Alternatively, one can also compute the Rankin-Selberg integrals attached to these bases. When $r=1$, these are given by \eqref{E:zeta oldform n=2}. Here we have the following result for $r=2$ (and $\tau$ unramified). \begin{thm}\label{T:main'} Let $v\in{\mathcal V}_\pi^{K_{2,m}}$.
We have \begin{equation}\label{E:raising theta} \Psi_{2,2}(\theta(v)\otimes\xi^{m+1}_{\tau,s})=q^{-s+\frac{3}{2}}(\alpha_1+\alpha_2)\Psi_{2,2}(v\otimes\xi^m_{\tau,s}) \end{equation} and \begin{equation}\label{E:raising theta'} \Psi_{2,2}(\theta'(v)\otimes\xi^{m+1}_{\tau,s})=q(1+q^{-2s+1}\alpha_1\alpha_2)\Psi_{2,2}(v\otimes\xi^m_{\tau,s}) \end{equation} and \begin{equation}\label{E:raising eta} \Psi_{2,2}(\eta(v)\otimes\xi^{m+2}_{\tau,s})=q^{-2s+2}\alpha_1\alpha_2\Psi_{2,2}(v\otimes\xi^m_{\tau,s}) \end{equation} where $\alpha_1$ and $\alpha_2$ are the Satake parameters of $J(\tau)$. \end{thm} \begin{remark} Combining \eqref{E:main eqn} with Theorem \ref{T:main'}, one can explicitly compute the integrals $\Psi_{2,2}(v\otimes\xi^m_{\tau,s})$ for every $v\in{\mathcal V}_\pi^{K_{2,m}}$ and $m\ge 0$. \end{remark} \subsection{An outline of the paper} We end this introduction with a brief outline of the paper. In \S\ref{S:newform}, we will introduce the paramodular subgroups of ${\rm SO}_{2n+1}$ defined by Gross and Tsai. In \S\ref{S:oldform}, we will describe conjectural bases for oldforms given implicitly by Tsai in her thesis. In \S\ref{S:RS integral}, we will introduce the Rankin-Selberg integrals attached to generic representations of ${\rm SO}_{2n+1}\times{\rm GL}_r$ (for $1\le r\le n$). In \S\ref{S:unramified rep}, we will collect some facts about unramified representations, including formulae for the spherical Whittaker functions of ${\rm GL}_r$ and various Satake isomorphisms. The goal of \S\ref{S:key} is to prove Proposition \ref{P:main prop}, which is the core of this work. The idea is to apply the standard techniques developed by Jacquet et al. to our setting.
We also invoke an idea of Ginzburg, which allows us to deduce the results for $r<n$ from those for $r=n$. In \S\ref{S:prove}, we will prove our main results. We end with a comparison between the bases of oldforms given by Roberts-Schmidt, Casselman and Tsai. \subsection{Notation and conventions} Let $F$ be a finite extension of $\mathbb{Q}_p$, $\frak{o}$ be the valuation ring of $F$, $\frak{p}$ be the maximal ideal of $\frak{o}$, $\varpi\in\frak{p}$ be a prime element and $q$ be the cardinality of the residue field $\frak{f}=\frak{o}/\frak{p}$. We fix an additive character $\psi$ of $F$ with $\ker(\psi)=\frak{o}$. Let $\nu_1=|\cdot|_F$ be the absolute value on $F$ normalized so that $|\varpi|_F=q^{-1}$. In general, if $r\ge 1$ is an integer, we let $\nu_r$ be the character on ${\rm GL}_r(F)$ defined by $\nu_r(a)=|\det(a)|_F$. Suppose that $G$ is an $\ell$-group in the sense of \cite[Section 1]{BZ1976}. We denote by $\delta_G$ the modular function of $G$. If $K\subset G$ is an open compact subgroup, then ${\mathcal H}(G//K)$ denotes the algebra of locally constant, compactly supported $\mathbb{C}$-valued functions on $G$ which are bi-$K$-invariant. In this work, by a representation of $G$ we mean a smooth representation with coefficients in $\mathbb{C}$. If $\pi$ is a representation of $G$, then its underlying (abstract) space is usually denoted by ${\mathcal V}_\pi$. Finally, we define elements $\jmath_r\in{\rm GL}_r(F)$ inductively by \begin{equation}\label{E:J_m} \jmath_1=1\quad\text{and}\quad \jmath_r=\pMX{0}{1}{\jmath_{r-1}}{0} \end{equation} so that $\jmath_r$ is the $r\times r$ antidiagonal matrix, and we write $I_r$ for the identity matrix in ${\rm GL}_r(F)$. \section{Paramodular subgroups of ${\rm SO}_{2n+1}$}\label{S:newform} In this section, we introduce the open compact subgroups $K_{n,m}$ of ${\rm SO}_{2n+1}$ defined by Gross (\cite[Section 5]{Gross2015}) and Tsai (\cite[Chapter 7]{Tsai2013}). Following Roberts-Schmidt, we will call $K_{n,m}$ the \emph{paramodular subgroups}.
\subsection{Odd special orthogonal groups}\label{SSS:SO_{2n+1}} The group $G_n={\rm SO}_{2n+1}$ is the special orthogonal group of the quadratic space $(V_n,(,))$ with \[ V_n=F e_1\oplus\cdots\oplus Fe_n\oplus Fv_0\oplus Ff_n\oplus\cdots\oplus Ff_1 \] and the symmetric bilinear form $(,):V_n\times V_n\to F$ given by $(v_0,v_0)=2$, $(e_i,f_i)=1$ for $1\leq i\leq n$ and all other inner products are zero. Thus the Gram matrix of the ordered basis $\stt{e_1,\ldots, e_n, v_0, f_n,\ldots, f_1}$ is \begin{equation}\label{E:S} S = \begin{pmatrix} &&\jmath_n\\ &2&\\ \jmath_n&& \end{pmatrix}\in{\rm GL}_{2n+1}(F). \end{equation} We have \[ G_n(F)=\stt{g\in{\rm SL}_{2n+1}(F)\mid \,^{t}gSg=S} \] where $^{t}g$ denotes the transpose of $g$. Let $U_n\subset G_n$ be the upper triangular maximal unipotent subgroup and define the non-degenerate character $\psi_{U_n}: U_n(F)\to\mathbb{C}^{\times}$ by \begin{equation}\label{E:psi_U} \psi_{U_n}(u)=\psi(u_{12}+u_{23}+\cdots+u_{n-1, n}+ 2^{-1}u_{n,n+1}) \end{equation} for $u=(u_{ij})\in U_n(F)$. \subsection{Paramodular subgroups}\label{SS:paramodular subgroup} Let $\mathbb L_m\subset V_n$ be the $\frak{o}$-lattice defined by \[ \mathbb L_{m} = \frak{o}e_1\oplus\cdots\oplus\frak{o}e_n\oplus\frak{p}^m v_0\oplus\frak{p}^mf_n\oplus\cdots\oplus\frak{p}^mf_1 \] for $m\ge 0$. Let $J_{n,m}\subset G_n(F)$ be the open compact subgroup stabilizing the lattice $\mathbb L_m$, so that $J_{n,0}=G_n(\frak{o})$ is the hyperspecial maximal compact subgroup of $G_n(F)$. We indicate that the explicit shape of $J_{n,m}$ can be found in \cite[Section 3.4]{Shahabi2018}. For $m\geq 1$, $J_{n,m}$ is isomorphic to the group of $\frak{o}$-points of a group scheme over $\frak{o}$.
More precisely, if we equip $\mathbb L_{m}$ with the symmetric bilinear form $(,)_m:=\varpi^{-m}(,)$, then the Gram matrix of $(\mathbb L_{m}, (,)_m)$ associated to the ordered basis $\stt{e_1,\ldots, e_n, \varpi^m v_0, \varpi^m f_n,\ldots, \varpi^mf_1}$ is \[ S_m = \begin{pmatrix} &&\jmath_n\\ &2\varpi^m&\\ \jmath_n&& \end{pmatrix} \] and \[ \tilde{J}_{n,m}:= \stt{g=(g_{ij})\in{\rm SL}_{2n+1}(\frak{o})\mid \text{$^tgS_m g=S_m$ and $g_{j,n+1}\in\frak{p}^m$ for $1\leq j\neq n+1\leq 2n+1$}} \] is the group of $\frak{o}$-points of a group scheme $\tilde{\mathbf{J}}_{n,m}$ over $\frak{o}$, i.e. $\tilde{J}_{n,m}=\tilde{\mathbf{J}}_{n,m}(\frak{o})$ (cf. \cite[Theorem 3.6]{Shahabi2018}). Now we have $J_{n,m}=t_m^{-1}\tilde{J}_{n,m}t_{m}$ (in ${\rm GL}_{2n+1}(F)$), where \[ t_m = \begin{pmatrix} \varpi^m I_n&&\\ &1&\\ &&I_n \end{pmatrix}. \] We define $K_{n,0}=J_{n,0}=G_n(\frak{o})$. For $m\geq 1$, on the other hand, the reduction modulo $\frak{p}$ map gives rise to a surjective homomorphism \[ \tilde{J}_{n,m}=\tilde{\mathbf{J}}_{n,m}(\frak{o}) \relbar\joinrel\twoheadrightarrow {\rm S}({\rm O}_{2n}(\frak{f})\times{\rm O}_1(\frak{f})) \relbar\joinrel\twoheadrightarrow {\rm O}_{2n}(\frak{f}) \overset{{\rm det}}{\relbar\joinrel\twoheadrightarrow} \stt{\pm 1}. \] Then $K_{n,m}\subset J_{n,m}$ is defined to be the index $2$ normal subgroup such that $t_m K_{n,m} t_m^{-1}\subset \tilde{J}_{n,m}$ is the kernel of the above homomorphism. Again, the explicit shape of $K_{n,m}$ can be found in \cite[Section 3.5]{Shahabi2018}. \begin{remark}\label{R:paramodular subgroup} As pointed out in \cite{Gross2015}, the definition of $K_{n,m}$ was suggested by Brumer, based on his extensive computations in \cite{BrumerKenneth2014}. \end{remark} After defining the paramodular subgroups, we now prove: \begin{lm}\label{L:conj for n=2} Conjecture \ref{C1} holds when $n=1,2$.
\end{lm} \begin{proof} Since $K_{1,m}\cong \Gamma_0(\frak{p}^m)$ and $K_{2,m}\cong K(\frak{p}^m)$ under the accidental isomorphisms ${\rm SO}_3\cong{\rm PGL}_2$ and ${\rm SO}_5\cong{\rm PGSp}_4$, we can apply relevant results for generic representations of ${\rm PGL}_2(F)$ and ${\rm PGSp}_4(F)$. When $n=1$, the lemma follows from the results in \cite{Casselman1973} and \cite{Schmidt2002}. When $n=2$, we can apply the results of Roberts-Schmidt in \cite{RobertsSchmidt2007}. However, one thing needs to be clarified. Let $\pi$ be an irreducible generic representation of ${\rm SO}_5(F)\cong {\rm PGSp}_4(F)$. We denote by $L(s,\pi)$, $\epsilon(s,\pi,\psi)$ and $\gamma(s,\pi,\psi)$ the $L$-, $\epsilon$- and $\gamma$-factors attached to $\pi$ and $\psi$ defined by Novodvorsky's zeta integrals for ${\rm GSp}_4\times{\rm GL}_1$ (\cite[Section 2.6]{RobertsSchmidt2007}). Then as mentioned in the introduction, Roberts-Schmidt proved that there exists an integer $N_\pi\ge 0$ such that ${\mathcal V}_\pi^{K_{2,m}}=0$ if $0\le m< N_\pi$ and $\dim_{\mathbb{C}}{\mathcal V}_\pi^{K_{2,N_\pi}}=1$. They also defined Atkin-Lehner elements $u_m\in{\rm GSp}_4(F)$ for $m\ge 0$ (\cite[Page 5]{RobertsSchmidt2007}) and showed that if the action of $\pi(u_{N_\pi})$ on the line ${\mathcal V}_\pi^{K_{2,N_\pi}}$ gives $\varepsilon'_\pi\in\stt{\pm 1}$, then \[ \epsilon(s,\pi,\psi)=\varepsilon_\pi' q^{-N_\pi(s-\frac{1}{2})}. \] Under the isomorphism ${\rm SO}_5\cong{\rm PGSp}_4$, it is not hard to see that $u_0\in J_{2,0}=K_{2,0}$ and $u_m\in J_{2,m}\setminus K_{2,m}$ if $m>0$ (\cite[Section 6.2]{Tsai2013}). To apply their results (comparing with Conjecture \ref{C1}), we have to show \[ \epsilon(s,\pi,\psi)=\epsilon(s,\phi_\pi,\psi). \] When $\pi$ is not supercuspidal, this was already verified by Roberts-Schmidt. So let us assume that $\pi$ is supercuspidal.
Then we note that Novodvorsky's zeta integrals (in \cite{RobertsSchmidt2007}) are equal to the Rankin-Selberg integrals for ${\rm SO}_5\times{\rm GL}_1$ (with $\tau$ trivial) under the accidental isomorphism (cf. Remark \ref{R:zeta integral}). Hence we can apply the results of Soudry, Jiang-Soudry and Kaplan (cf. Theorem \ref{T:RS=Gal gamma}) to obtain \[ \gamma(s,\pi,\psi)=\gamma(s,\phi_\pi,\psi). \] Since $\pi$ is supercuspidal, we have $L(s,\pi)=L(s,\phi_\pi)=1$ by \cite[Theorem 1.4]{JiangSoudry2003} and \cite[Proposition 3.9]{Takloo-Bighash2000}. Now the desired identity between $\epsilon$-factors follows. \end{proof} \section{Conjectural bases for oldforms}\label{S:oldform} Let $\pi$ be an irreducible generic representation of $G_n(F)$. In this section, we will define the subsets $\EuScript B_{\pi,m}$ of ${\mathcal V}_\pi^{K_{n,m}}$ for $m>a_\pi$ mentioned in the introduction. They are built from (conjectural) newforms and certain level raising operators coming from elements in the spherical Hecke algebras of ${\rm SO}_{2n}$. These subsets already appeared implicitly in Tsai's thesis (\cite[Propositions 8.1.5, 9.1.6, 9.1.7]{Tsai2013}). However, there are some inaccuracies in her formulation (when $\pi$ is supercuspidal), which will be fixed in this paper. \subsection{Even special orthogonal groups}\label{SS:SO even} The group $H_r(F)={\rm SO}_{2r}(F)$ can be realized as the subgroup of ${\rm SL}_{2r}(F)$ given by \[ H_r(F) = \stt{h\in{\rm SL}_{2r}(F)\mid {}^th\jmath_{2r}h=\jmath_{2r}} \] and can be embedded into $G_n(F)$ as a closed subgroup fixing the vectors $e_{r+1}, \cdots, e_n, v_0, f_n,\cdots, f_{r+1}$ pointwise. In coordinates, the embedding is \begin{equation}\label{E:embedding} H_r(F)\ni\pMX{a}{b}{c}{d}\longmapsto \begin{pmatrix} a&&b\\ &I_{2(n-r)+1}&\\ c&&d \end{pmatrix}\in G_n(F) \end{equation} where $a,b,c,d\in{\rm Mat}_{r\times r}(F)$.
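To illustrate the embedding, consider the simplest case $r=1$; the following is a direct check from the definitions above. The conditions ${}^th\jmath_{2}h=\jmath_{2}$ and $\det h=1$ force $h={\rm diag}(t,t^{-1})$ with $t\in F^{\times}$, so $H_1(F)\cong F^{\times}$ and \eqref{E:embedding} becomes
\[
{\rm diag}(t,t^{-1})\longmapsto{\rm diag}(t,I_{2n-1},t^{-1})\in G_n(F).
\]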
In what follows, we do not distinguish $H_r(F)$ from its image in $G_n(F)$ under this embedding. This should cause no confusion. \subsection{Dominant weights} Let $T_n\subset G_n$ be the maximal split diagonal torus and put $T_r=T_n\cap H_r$ for $1\le r\le n$. Let $\epsilon_1, \epsilon_2,\ldots, \epsilon_n$ be the standard basis of $X_\bullet(T_n)={\rm Hom}(\mathbb{G}_m, T_n)$ such that $\epsilon_j(y)\in T_n(F)$ is the diagonal matrix whose $(j,j)$-entry and $(2n+2-j,2n+2-j)$-entry are $y$ and $y^{-1}$ respectively, and all other diagonal entries are $1$, for $1\le j\le n$. We also write $y^{\lambda}=\lambda(y)$ for $y\in F^{\times}$ and $\lambda\in X_\bullet(T_n)$. Let $\|\cdot\|:X_\bullet(T_n)\to\mathbb{Z}_{\geq 0}$ be the sup-norm with respect to the standard basis and ${\rm tr}:X_\bullet(T_n)\to\mathbb{Z}$ be the trace map defined by \begin{equation}\label{E:trace} {\rm tr}(\lambda)=\lambda_1+\lambda_2+\cdots+\lambda_n \quad\text{where}\quad \lambda=\lambda_1\epsilon_1+\lambda_2\epsilon_2+\cdots+\lambda_n\epsilon_n. \end{equation} Let $P^+_{G_n}\subset P^+_{H_n}\subset X_\bullet(T_n)$ be the subsets given by \[ P^+_{G_n} = \stt{\lambda_1\epsilon_1+\lambda_2\epsilon_2+\cdots+\lambda_n\epsilon_n\mid \lambda_1\ge \lambda_2\ge \cdots\ge \lambda_n\ge 0} \subset P^+_{H_n} = \stt{\lambda_1\epsilon_1+\lambda_2\epsilon_2+\cdots+\lambda_n\epsilon_n\mid \lambda_1\ge \lambda_2\ge \cdots\ge |\lambda_n|}. \] We put \[ \mu_r=\epsilon_1+\epsilon_2+\cdots+\epsilon_r\in X_\bullet(T_n).
\] \subsection{Filtrations of twisted paramodular subgroups}\label{SS:filtration of K} As observed in \cite{Tsai2013}, there are filtrations between the paramodular subgroups $K_{n,m}$ according to the parity of $m$, after we conjugate $K_{n,m}$ by certain torus elements. More precisely, let \begin{equation}\label{E:K*} K^{(m)}_{n,m'} = \varpi^{(\frac{m'-m}{2})\mu_n}\cdot K_{n,m'}\cdot\varpi^{-(\frac{m'-m}{2})\mu_n} \end{equation} for $m'\geq m$ with the same parity. Then we have \begin{equation}\label{E:filtration} K_{n,m}=K^{(m)}_{n,m}\supset K^{(m)}_{n,m+2}\supset\cdots\supset K^{(m)}_{n,m+2\ell}\supset\cdots\supset R_{n,m} =\bigcap_{m'\geq m,\,m'\equiv m\pmod 2} K^{(m)}_{n,m'}. \end{equation} When $m=0,1$, \eqref{E:filtration} was verified in \cite[Section 7.1]{Tsai2013}. The general case is a consequence of that. From \eqref{E:filtration} we immediately get \begin{equation}\label{E:R* fix space} {\mathcal V}_\pi^{R_{n,m}}=\bigcup_{m'\geq m,\,m'\equiv m\pmod 2}{\mathcal V}_{\pi}^{K^{(m)}_{n,m'}}. \end{equation} \subsection{Properties of $R_{r,m}$}\label{SS:R_r,m} We record some properties of $R_{r,m}$ in this subsection (cf. \cite[Section 2.5]{Tsai2013}). \begin{itemize} \item[(i)] $R_{r, 0}=H_r(\frak{o})$ and $R_{r, 1}$ are two non-conjugate hyperspecial maximal compact subgroups of $H_r(F)$, and $R_{r,m}\cap M_r(F)=M_r(\frak{o})$ for $m\ge 0$. \item[(ii)] The relation \begin{equation}\label{E:R*} R_{r,m'}=\varpi^{-(\frac{m'-m}{2})\mu_r}\cdot R_{r,m}\cdot\varpi^{(\frac{m'-m}{2})\mu_r} \end{equation} holds for $m,m'\geq 0$ with the same parity. This follows from \eqref{E:K*}, \eqref{E:filtration} and the fact that $R_{r,m}=R_{n,m}\cap H_r(F)$.
\item[(iii)] By (i) and (ii), we have the Iwasawa decomposition \begin{equation}\label{E:Iwasawa decomp} H_r(F)=Q_r(F)R_{r,m}=B_{H_r}(F)R_{r,m} \end{equation} for $m\ge 0$, where $B_{H_r}$ denotes the upper triangular Borel subgroup of $H_r$. \item[(iv)] As subgroups of ${\rm GL}_{2r}(F)$, we have \begin{equation}\label{E:R conj} t_{r,m}^{-1}R_{r,m}t_{r,m}=R_{r,0} \quad\text{and}\quad w_{r,m}^{-1}R_{r,m}w_{r,m}=R_{r,m} \end{equation} where \begin{equation}\label{E:t and w} t_{r,m}=\pMX{\varpi^{-m} I_r}{}{}{I_r} \quad\text{and}\quad w_{r,m} = \begin{pmatrix} &\varpi^{-m}I_r\\\varpi^mI_r& \end{pmatrix}. \end{equation} Note that $w_{r,m}\in R_{r,m}$ when $r$ is even and $w_{r,m}\in{\rm O}_{2r}(F)\setminus H_r(F)$ when $r$ is odd. \end{itemize} We then have the following consequence. \begin{lm}\label{L:same vol} Let $dh$ be an arbitrary Haar measure on $H_r(F)$. Then we have ${\rm vol}(R_{r,m}, dh)={\rm vol}(H_r(\frak{o}),dh)$ for all $m\geq 0$. \end{lm} \begin{proof} Consider the similitude group $\tilde{H}_r={\rm GSO}_{2r}\subset{\rm GL}_{2r}$ defined by the same matrix $\jmath_{2r}$, with center $Z$. Then $\tilde{H}_r(F)$ contains $H_r(F)$ as a subgroup and $Z(F)\cong F^{\times}$. Moreover, one has $Z(F)\cap H_r(F)=\stt{I_{2r}}$ and $Z(F)H_r(F)$ is a closed subgroup of $\tilde{H}_r(F)$ with finite index, and hence also open. Fix a Haar measure $dz$ on $Z(F)$. Then the Haar measure $dzdh$ on $Z(F)H_r(F)$ induces a unique Haar measure $d\tilde{h}$ on $\tilde{H}_r(F)$ whose restriction to $Z(F)H_r(F)$ is $dzdh$. Now we note that $t_{r,m}\in\tilde{H}_r(F)$, so that the first conjugation in \eqref{E:R conj} actually takes place in $\tilde{H}_r(F)$.
It then follows that \[ {\rm vol}(Z(\frak{o}),dz){\rm vol}(R_{r,m},dh) = {\rm vol}(Z(\frak{o})R_{r,m}, d\tilde{h}) = {\rm vol}(Z(\frak{o})R_{r,0}, d\tilde{h}) = {\rm vol}(Z(\frak{o}),dz){\rm vol}(R_{r,0},dh). \] Since $H_r(\frak{o})=R_{r,0}$, the proof is complete. \end{proof} \subsection{Level raising operators}\label{SSS:level raising} To define the level raising operators, we use elements in ${\mathcal H}(H_n(F)//R_{n,m})$, whose standard basis can be described as follows. For $\lambda\in P^+_{H_n}$, let \[ \varphi_{\lambda, m}=\mathbb I_{R_{n,m}\varpi^{\lambda}R_{n,m}} \] be the characteristic function of the double coset $R_{n,m}\varpi^{\lambda} R_{n,m}$. Then we have \[ {\mathcal H}(H_n(F)//R_{n,m})=\bigoplus_{\lambda\in P^+_{H_n}}\mathbb{C}\cdot\varphi_{\lambda, m} \] as $\mathbb{C}$-linear spaces. This follows from the cases $m=0, 1$ (cf. \cite[Page 51]{Tits1979}) together with \eqref{E:R*}.\\ Given $\varphi\in{\mathcal H}(H_n(F))$ and $v\in{\mathcal V}_\pi$, define $\varphi\star v\in{\mathcal V}_\pi$ by \begin{equation}\label{E:Hecke action for H} \varphi\star v = \int_{H_n(F)}\varphi(h)\pi(h^{-1})v\,dh \quad \text{(${\rm vol}(H_n(\frak{o}),dh)=1$).} \end{equation} Now if $v\in{\mathcal V}_\pi^{K_{n,m}}$, then we have $\varphi_{\lambda, m}\star v\in{\mathcal V}_\pi^{R_{n,m}}$ and hence, by \eqref{E:R* fix space}, $\varphi_{\lambda, m}\star v\in {\mathcal V}_\pi^{K^{(m)}_{n,m'}}$ for some $m'\geq m$ with the same parity. The following lemma tells us what $m'$ is. \begin{lm}\label{L:level raising for K*} Let $v\in{\mathcal V}_\pi^{K_{n,m}}$ and $\lambda\in P^+_{H_n}$ with $\|\lambda\|=\ell$. Then we have $\varphi_{\lambda,m}\star v\in{\mathcal V}_\pi^{K^{(m)}_{n,m+2\ell}}$. \end{lm} \begin{proof} Let $e\in\stt{0,1}$ and $m'\geq m$ with $m'\equiv m\equiv e\pmod{2}$.
Then by \eqref{E:K*}, we have \[ \pi(\varpi^{(\frac{m-e}{2})\mu_n}){\mathcal V}_\pi^{K^{(m)}_{n,m'}}={\mathcal V}_\pi^{K^{(e)}_{n,m'}} \] and hence $v':=\pi(\varpi^{(\frac{m-e}{2})\mu_n})v\in{\mathcal V}_\pi^{K^{(e)}_{n,m}}$. Now \cite[Proposition 8.1.1]{Tsai2013} implies \[ \varphi_{\lambda,e}\star v'\in{\mathcal V}_\pi^{K^{(e)}_{n,m+2\ell}}=\pi(\varpi^{(\frac{m-e}{2})\mu_n}) {\mathcal V}_\pi^{K^{(m)}_{n,m+2\ell}} \] so that \[ \pi(\varpi^{-(\frac{m-e}{2})\mu_n})\varphi_{\lambda,e}\star v'\in{\mathcal V}_\pi^{K^{(m)}_{n,m+2\ell}}. \] But by definition, we have \begin{align*} \pi(\varpi^{-(\frac{m-e}{2})\mu_n})\varphi_{\lambda,e}\star v' &= \int_{H_n(F)} \varphi_{\lambda,e}(h)\pi(\varpi^{-(\frac{m-e}{2})\mu_n}h^{-1}\varpi^{(\frac{m-e}{2})\mu_n})v\,dh\\ &= \int_{H_n(F)}\varphi_{\lambda,m}(h)\pi(h^{-1})v\,dh\\ &= \varphi_{\lambda,m}\star v \end{align*} after changing variables and taking \eqref{E:R*} into account. This proves the lemma. \end{proof} Combining \eqref{E:K*} and \eqref{E:filtration} with Lemma \ref{L:level raising for K*}, we obtain level raising operators $\eta_{\lambda, m, m'}:{\mathcal V}_\pi^{K_{n,m}}\to{\mathcal V}_\pi^{K_{n,m'}}$ defined by \[ \eta_{\lambda,m,m'} = \pi(\varpi^{-(\frac{m'-m}{2})\mu_n})\circ\varphi_{\lambda, m} \] for every $m'\geq m+2\|\lambda\|$ with $m'\equiv m\pmod{2}$. Note that $\eta_{0,m,m}$ is the identity operator by Lemma \ref{L:same vol}. Note also that the operators $\eta_{\lambda,m,m'}$ raise levels to those with the same parity. We put \[ \eta=\pi(\varpi^{-\mu_n}).
\] Then $\eta$ induces an isomorphism between ${\mathcal V}_\pi^{R_{n,m}}$ and ${\mathcal V}_\pi^{R_{n,m+2}}$ (by \eqref{E:R*}) for every $m\ge 0$. Moreover, the restriction of $\eta$ to ${\mathcal V}_\pi^{K_{n,m}}$ gives the operator $\eta_{0,m,m+2}$. To raise levels to those with the opposite parity, we need two more operators $\theta_m, \theta'_m:{\mathcal V}_\pi^{K_{n,m}}\to{\mathcal V}_\pi^{K_{n,m+1}}$ defined by \begin{equation}\label{E:theta} \theta_m=\pi(u_{n,1,m+1})\circ\theta'_m\circ\pi(u_{n,1,m}) \quad \text{and} \quad \theta'_m(v)=\frac{1}{{\rm vol}(K_{n,m}\cap K_{n,m+1}, dg)}\int_{K_{n,m+1}}\pi(g)v\, dg \end{equation} where $u_{n,1,m}\in J_{n,m}$ is the element given by \eqref{E:u_r,m}. In what follows, we usually suppress $m$ from the notation $\theta_m$ and $\theta'_m$ when there is no risk of confusion. \subsection{The subsets $\EuScript B_{\pi,m}$}\label{SS:conj basis} Now we are ready to introduce the subsets $\EuScript B_{\pi,m}$. For this, we need one more piece of notation: if $\lambda=\lambda_1\epsilon_1+\lambda_{2}\epsilon_{2}+\cdots+\lambda_n\epsilon_n\in X_\bullet(T_n)$, then we write $\tilde{\lambda}=\lambda_1\epsilon_1+\lambda_2\epsilon_2+\cdots+\lambda_{n-1}\epsilon_{n-1}-\lambda_n\epsilon_n\in X_\bullet(T_n)$. Let $v_\pi\in{\mathcal V}_\pi^{K_{n,a_\pi}}$ be a (conjectural) newform and $m>a_\pi$. If $m$ and $a_\pi$ have the same parity, we put \[ \EuScript B_{\pi,m}=\stt{\eta_{\lambda,a_\pi,m}(v_\pi)\mid \lambda\in P^+_{H_n},\,2\|\lambda\|\leq m-a_\pi}.
\] On the other hand, if $m$ and $a_\pi$ have the opposite parity, then the subsets are given by \[ \EuScript B_{\pi,m}=\stt{\eta^\square_{\lambda,a_\pi+1,m}\circ\theta(v_\pi),\,\,\eta^\square_{\lambda, a_\pi+1,m}\circ\theta'(v_\pi)\mid \lambda\in P^+_{G_n},\,\,2\|\lambda\|\leq m-a_\pi-1} \] where $\eta^\square_{\lambda,a_\pi+1,m}:=\eta_{\lambda,a_\pi+1,m}+\eta_{\tilde{\lambda}, a_\pi+1, m}$.\\ We make some remarks on $\EuScript B_{\pi,m}$. When $\pi$ is supercuspidal (any $n$) and $m$, $a_\pi$ have the same parity, the subsets $\EuScript B_{\pi,m}$ appeared implicitly in \cite[Propositions 8.1.6, 9.1.6]{Tsai2013}. When $m$ and $a_\pi$ have the opposite parity, on the other hand, Tsai asserted in \cite[Proposition 9.1.7]{Tsai2013} that the subsets $\EuScript B'_{\pi,m}\subset{\mathcal V}_\pi^{K_{n,m}}$ given by \[ \EuScript B'_{\pi,m}=\stt{\eta_{\lambda,a_\pi+1,m}\circ\theta(v_\pi),\,\,\eta_{\lambda, a_\pi+1,m}\circ\theta'(v_\pi)\mid \lambda\in P^+_{G_n},\,\,2\|\lambda\|\leq m-a_\pi-1} \] are linearly independent. However, this is not quite correct (cf. \S\ref{SSS:concluding remark}). When $n=1,2$, the subsets $\EuScript B_{\pi,m}$ give other bases for the spaces of oldforms. It is then natural to compare them with the ones given by Casselman and Roberts-Schmidt. This will also appear in \S\ref{SSS:concluding remark}. \section{Local Rankin-Selberg integrals for ${\rm SO}_{2n+1}\times{\rm GL}_r$}\label{S:RS integral} The global Rankin-Selberg integrals for generic cuspidal automorphic representations of ${\rm SO}_{2n+1}\times{\rm GL}_r$ were first introduced by Gelbart and Piatetski-Shapiro in \cite[Part B]{GPSR1987} when $r=n$. Their constructions were later extended by Ginzburg in \cite{Ginzburg1990} to $r<n$.
The constructions were further extended by Soudry (\cite{Soudry1993}) to $r>n$, who also developed the corresponding local theory. In this section, we review the local Rankin-Selberg integrals for generic representations of ${\rm SO}_{2n+1}\times{\rm GL}_r$ with $1\leq r\leq n$, following \cite{Soudry1993}; see also \cite{Kaplan2015}.\\ Let $Z_r\subset{\rm GL}_r$ be the upper triangular maximal unipotent subgroup. Define a non-degenerate character of $Z_r(F)$ by \begin{equation}\label{E:psi_Z} \bar{\psi}_{Z_r}(z) = \overline{\psi(z_{12}+z_{23}+\cdots+z_{r-1, r})} \end{equation} for $z=(z_{ij})\in Z_r(F)$. Let $\tau$ be a representation of ${\rm GL}_r(F)$. We assume that $\tau$ has finite length and that the $\mathbb{C}$-linear space ${\rm Hom}_{Z_r(F)}(\tau,\bar{\psi}_{Z_r})$ is one-dimensional. We fix a nonzero element $\Lambda_{\tau,\bar{\psi}}\in{\rm Hom}_{Z_r(F)}(\tau,\bar{\psi}_{Z_r})$. Define $\tau^*$ to be the representation of ${\rm GL}_r(F)$ on ${\mathcal V}_\tau$ with the action $\tau^*(a)=\tau(a^*)$, where $a^*=\jmath_{r}{}^t a^{-1}\jmath_r$. Then $\tau^*$ also has finite length and the $\mathbb{C}$-linear space ${\rm Hom}_{Z_r(F)}(\tau^*,\bar{\psi}_{Z_r})$ is also one-dimensional. We fix the nonzero element $\Lambda_{\tau^*,\bar{\psi}}\in {\rm Hom}_{Z_r(F)}(\tau^*,\bar{\psi}_{Z_r})$ given by $\Lambda_{\tau^*,\bar{\psi}}=\Lambda_{\tau,\bar{\psi}}\circ\tau(d_r)$, where \begin{equation}\label{E:eta_r} d_r = \begin{pmatrix} 1&&&\\ &-1&&\\ &&\ddots&\\ &&&(-1)^{r-1} \end{pmatrix} \in{\rm GL}_r(\frak{o}). \end{equation} \subsection{Induced representations and intertwining maps}\label{SS:ind rep} Let $s$ be a complex number and $\tau_s$ be the representation of ${\rm GL}_r(F)$ on ${\mathcal V}_\tau$ with the action $\tau_s(a)=\tau(a)\nu_r(a)^{s-\frac{1}{2}}$.
The representation $\tau^*_{1-s}$ is defined in a similar way. \subsubsection{Siegel parabolic subgroups} Let $Q_r\subset H_r$ be the Siegel parabolic subgroup with the Levi decomposition $M_r\ltimes N_r$, where \begin{equation}\label{E:levi of Q} M_r(F) = \stt{m_r(a)=\pMX{a}{}{}{a^*}\mid \text{$a\in{\rm GL}_r(F)$ and $a^*=\jmath_r{}^t a^{-1}\jmath_r$}}\cong{\rm GL}_r(F) \end{equation} and \begin{equation}\label{E:unipotent radical of Q} N_r(F) = \stt{n_r(b)=\pMX{I_r}{b}{}{I_r}\mid\text{$b\in{\rm Mat}_{r\times r}(F)$ with $b=-\jmath_r{}^t b\jmath_r$}}. \end{equation} There is another Siegel parabolic subgroup $\tilde{Q}_r\subset H_r$ obtained from $Q_r$ by conjugation by the Weyl element \begin{equation}\label{E:delta} \delta_r = \begin{pmatrix} I_{r-1}&&\\ &\jmath_2&\\ &&I_{r-1} \end{pmatrix}^r\in {\rm O}_{2r}(F), \end{equation} i.e. $\tilde{Q}_r(F)=\delta_r^{-1}Q_r(F){\delta_r}$. It has the Levi decomposition $\tilde{Q}_r=\tilde{M}_r\ltimes\tilde{N}_r$ with $\tilde{M}_r(F)=\delta_r^{-1}M_r(F)\delta_r$ and $\tilde{N}_r(F)=\delta_r^{-1}N_r(F)\delta_r$ (cf. \cite[Chapter 9]{Soudry1993}). \subsubsection{Induced representations}\label{SSS:ind rep} By pulling back through the homomorphism $Q_r(F)\twoheadrightarrow Q_r(F)/N_r(F)\cong{\rm GL}_r(F)$, $\tau_s$ becomes a representation of $Q_r(F)$ on ${\mathcal V}_\tau$. We then form the normalized induced representation \[ \rho_{\tau, s}={\rm Ind}_{Q_r(F)}^{H_r(F)}(\tau_s) \] of $H_r(F)$. The representation space $I_r(\tau,s)$ of $\rho_{\tau,s}$ consists of smooth functions $\xi_s: H_r(F)\to{\mathcal V}_\tau$ satisfying \begin{equation}\label{E:xi_s} \xi_s(mnh)=\delta_{Q_r}^{\frac{1}{2}}(m)\tau_s(m)\xi_s(h) \end{equation} for $m\in M_r(F)$, $n\in N_r(F)$ and $h\in H_r(F)$.
The action on $I_r(\tau,s)$ is given by right translation $\rho$. We recall that $\delta_{Q_r}(m_r(a))=\nu_r(a)^{r-1}$.\\ Similarly, $\tau^*_{1-s}$ can be extended to a representation of $\tilde{Q}_r(F)$ through the homomorphism $\tilde{Q}_r(F)\twoheadrightarrow\tilde{Q}_r(F)/\tilde{N}_r(F)\cong{\rm GL}_r(F)$. Explicitly, its action on ${\mathcal V}_\tau$ is given by \[ \tau^*_{1-s}(\tilde{m}_r(a)\tilde{n})=\tau^*(a)\nu_r(a)^{\frac{1}{2}-s} \] for $\tilde{m}_r(a)=\delta_r^{-1}m_r(a)\delta_r\in\tilde{M}_r(F)$ and $\tilde{n}\in\tilde{N}_r(F)$. We thus obtain another normalized induced representation \[ \tilde{\rho}_{\tau^*,1-s}={\rm Ind}_{\tilde{Q}_r(F)}^{H_r(F)}(\tau^*_{1-s}) \] of $H_r(F)$. Its underlying space $\tilde{I}_r(\tau^*,1-s)$ consists of smooth functions $\tilde{\xi}_{1-s}:H_r(F)\to{\mathcal V}_\tau$ satisfying a rule similar to \eqref{E:xi_s}. \subsubsection{Intertwining maps}\label{SSS:intertwining map} There is an intertwining map between the induced representations $\rho_{\tau,s}$ and $\tilde{\rho}_{\tau^*,1-s}$. To define it, let $w_r\in H_r(F)$ be given by \[ w_r=\begin{cases} \begin{pmatrix}&I_r\\I_r&\end{pmatrix}\quad&\text{if $r$ is even},\\ \begin{pmatrix}&I_r\\I_r&\end{pmatrix}\begin{pmatrix}&&1\\&I_{2r-2}&\\1&&\end{pmatrix}\quad&\text{if $r$ is odd}. \end{cases} \] Then the intertwining map $M(\tau,s):I_r(\tau,s)\to\tilde{I}_r(\tau^*,1-s)$ is defined by the integral \begin{equation}\label{E:intertwining map} M(\tau,s)\xi_s(h)=\int_{\tilde{N}_r(F)}\xi_s(w_r^{-1}\tilde{n}h)\,d\tilde{n} \end{equation} for ${\rm Re}(s)\gg 0$, and by meromorphic continuation in general. The Haar measure $d\tilde{n}$ on $\tilde{N}_r(F)$ is chosen as follows (\cite[Page 398]{Kaplan2015}).
The group $\tilde{N}_r(F)$ can be written as a product of its root groups, each of which is isomorphic to $F$. We take the Haar measure on $F$ to be self-dual with respect to $\psi$ and then transport it onto each root group. Since $\ker(\psi)=\frak{o}$, this Haar measure on $F$ gives $\frak{o}$ total volume $1$. Now $d\tilde{n}$ is taken to be the product measure.\\ The assumptions on $\tau$ allow us to define Shahidi's local coefficient $\gamma(s,\tau,\bigwedge^2,\psi)\in\mathbb{C}(q^{-s})$ through his functional equation (\cite[Section 10]{Soudry1993}, \cite[Section 3.1]{Kaplan2015}). Then we have the normalized intertwining map \[ M_\psi^\dagger(\tau,s)=\gamma(2s-1,\tau,{\bigwedge}^2,\psi)M(\tau,s). \] \subsection{Rankin-Selberg integrals}\label{SSS:RS integral and FE} Let $\pi$ be an irreducible generic representation of $G_n(F)$. Recall that we have fixed a nonzero Whittaker functional $\Lambda_{\pi,\psi}\in{\rm Hom}_{U_n(F)}(\pi,\psi_{U_n})$. Let \begin{equation}\label{E:X_n,r} \bar{X}_{n,r}(F)=\left\{\begin{pmatrix} I_r&&&&\\ x&I_{n-r}&&&\\ &&1&&\\ &&&I_{n-r}&\\ &&&x'&I_r \end{pmatrix}\;\middle|\;\text{$x\in{\rm Mat}_{(n-r)\times r}(F)$ with $x'=-\jmath_r{}^tx\jmath_{n-r}$}\right\} \end{equation} be a unipotent subgroup of $G_n(F)$. Then the integral $\Psi_{n,r}(v\otimes\xi_s)$ attached to $v\in{\mathcal V}_\pi$ and $\xi_s\in I_r(\tau,s)$ is given by \begin{equation}\label{E:RS integral in general} \Psi_{n,r}(v\otimes\xi_s)=\int_{V_r(F)\backslash H_r(F)}\int_{\bar{X}_{n,r}(F)} W_v(\bar{x}h)f_{\xi_s}(h)\,d\bar{x}\,dh, \end{equation} where $W_v(g)=\Lambda_{\pi,\psi}(\pi(g)v)$ is the Whittaker function associated to $v$ and $f_{\xi_s}(h)=\Lambda_{\tau,\bar{\psi}}(\xi_s(h))$ for $g\in G_n(F)$ and $h\in H_r(F)$.
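One boundary case is worth recording (a direct specialization, assuming the conventions above): when $r=n$ the block size $n-r$ is zero, so $\bar{X}_{n,n}(F)$ is the trivial group and the inner integration in \eqref{E:RS integral in general} disappears, leaving

```latex
\Psi_{n,n}(v\otimes\xi_s)
 =\int_{V_n(F)\backslash H_n(F)}W_v(h)f_{\xi_s}(h)\,dh,
```

with only the $h$-integration remaining.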
On the other hand, by using the normalized intertwining map $M_\psi^\dagger(\tau,s)$, we can define another integral $\tilde{\Psi}_{n,r}(v\otimes\xi_s)$ attached to $v$ and $\xi_s$ as follows. Let \begin{equation}\label{E:delta_n,r} \delta_{n,r}=\begin{pmatrix} I_{r-1}&&&&\\ &&&1&\\ &&-I_{2(n-r)+1}&&\\ &1&&&\\ &&&&I_{r-1} \end{pmatrix}^r\in G_n(F). \end{equation} Observe the similarity between $\delta_r$ (cf. \eqref{E:delta}) and $\delta_{n,r}$. Indeed, $\delta_{n,r}$ is essentially obtained from $\delta_r$ via the embedding \eqref{E:embedding}. Then we have \begin{equation}\label{E:dual RS integral in general} \tilde{\Psi}_{n,r}(v\otimes\xi_s)=\int_{V_r(F)\backslash H_r(F)}\int_{\bar{X}_{n,r}(F)} W_v(\bar{x}h\delta_{n,r})f^*_{M^\dagger_\psi(\tau,s)\xi_s}(\delta_r^{-1}h\delta_r)\,d\bar{x}\,dh, \end{equation} where \[ f^*_{M^\dagger_\psi(\tau,s)\xi_s}(h)=\Lambda_{\tau^*,\bar{\psi}}(M^\dagger_\psi(\tau,s)\xi_s(h)). \] As expected, these integrals, which converge absolutely in some half-plane, admit meromorphic continuations to the whole complex plane and give rise to rational functions in $q^{-s}$. Moreover, the functional equation \begin{equation}\label{E:FE} \tilde{\Psi}_{n,r}(v\otimes\xi_s)=\gamma(s,\pi\times\tau,\psi)\Psi_{n,r}(v\otimes\xi_s) \end{equation} holds for every $v\in{\mathcal V}_\pi$ and $\xi_s\in I_r(\tau,s)$, where $\gamma(s,\pi\times\tau,\psi)$ is a nonzero rational function in $q^{-s}$ depending only on $\psi$ and (the classes of) $\pi$ and $\tau$. \begin{remark}\label{R:zeta integral} When $r=1$, we have $H_1(F)=M_1(F)\cong F^{\times}$, hence $\tau$ is a character of $F^{\times}$ and $\rho_{\tau,s}=\tau_s=\tau\nu_1^{s-\frac{1}{2}}$.
It follows that $I_1(\tau,s)=\mathbb{C}\tau_s$ and $\tilde{I}_1(\tau^*,1-s)=\mathbb{C}\tau^{-1}_{1-s}$ are both one-dimensional. Let us put \begin{equation}\label{E:zeta integral} Z(s,v,\tau)=\int_{F^{\times}}\int_{\bar{X}_{n,1}(F)} W_v\left(\bar{x}\epsilon_1(y)\right)\tau\nu_1^{s-\frac{1}{2}}(y)\,d\bar{x}\,d^{\times}y \end{equation} for $v\in{\mathcal V}_\pi$. Then $\Psi_{n,1}(v\otimes\xi_s)=cZ(s,v,\tau)$ if $\xi_s=c\tau\nu_1^{s-\frac{1}{2}}$ for some $c\in\mathbb{C}$. Note that $\delta_{n,1}=u_{n,1,0}\in G_n(F)$ (cf. \eqref{E:u_r,m}), and we have $\tilde{\Psi}_{n,1}(v\otimes\xi_s)=cZ(1-s,\pi(u_{n,1,0})v,\tau^{-1})$. These are essentially the local integrals introduced by Jacquet-Langlands in \cite{JLbook} when $n=1$, and by Novodvorsky in \cite{Novodvorsky1979} when $n=2$. In what follows, if $\tau$ is the trivial character of $F^{\times}$, then we will drop $\tau$ from the notation. \end{remark} We record here the following result on the compatibility between $\gamma$-factors, due to Soudry, Jiang-Soudry and Kaplan. \begin{thm}[\cite{Soudry2000}, \cite{JiangSoudry2004}, \cite{Kaplan2015}]\label{T:RS=Gal gamma} Let $\pi$ be an irreducible generic representation of $G_n(F)$ and let $\tau$ be an irreducible generic representation of ${\rm GL}_r(F)$ for any $n,r\in\mathbb{N}$. Then we have \[ \gamma(s,\pi\times\tau,\psi)=\omega_\tau(-1)^n\gamma(s,\phi_\pi\otimes\phi_\tau,\psi), \] where $\omega_\tau$ stands for the central character of $\tau$. \end{thm} \subsection{Two consequences} We derive two consequences from \eqref{E:filtration} and the non-vanishing of Rankin-Selberg integrals.
The first is the existence of nonzero paramodular vectors, whose proof was given in \cite[Theorem 7.3.1]{Tsai2013} when $\pi$ is supercuspidal and was sketched in \cite[Proposition 7.6]{Tsai2016} for generic $\pi$. We provide a proof here for the sake of completeness. Recall the usual action \[ \rho(\varphi)\xi_{s}=\int_{H_r(F)}\varphi(h)\rho(h)\xi_{s}\,dh \] for $\xi_s\in I_r(\tau,s)$ and $\varphi\in{\mathcal H}(H_r(F))$. \begin{lm}\label{L:existence} We have ${\mathcal V}_\pi^{K_{n,m}}\neq 0$ for all $m\gg 0$. \end{lm} \begin{proof} Let $e\in\{0,1\}$ be such that $m\equiv e\pmod 2$. Then we have ${\mathcal V}_\pi^{K_{n,m}}\cong{\mathcal V}_\pi^{K^{(e)}_{n,m}}$ by \eqref{E:K*}, and hence it suffices to prove the assertion for the spaces ${\mathcal V}_\pi^{K^{(e)}_{n,m}}$. By \eqref{E:R* fix space}, we only need to verify that ${\mathcal V}_\pi^{R_{n,e}}\neq 0$. For this, we use the Rankin-Selberg integral for ${\rm SO}_{2n+1}\times{\rm GL}_n$. More precisely, let $\tau$ be an unramified irreducible generic representation of ${\rm GL}_n(F)$, and pick $s_0\in\mathbb{C}$ so that \begin{itemize} \item $\rho_{\tau,s_0}$ is irreducible, \item $\Psi_{n,n}(v\otimes\xi_{s_0})$ is absolutely convergent for all $v\in{\mathcal V}_\pi$ and $\xi_{s_0}\in I_n(\tau,s_0)$. \end{itemize} By \cite[Proposition 12.4]{GPSR1987}, there exist $v\in{\mathcal V}_\pi$ and $\xi_{s_0}\in I_n(\tau,s_0)$ such that $\Psi_{n,n}(v\otimes\xi_{s_0})\ne 0$. On the other hand, since $I_n(\tau,s_0)$ is unramified and irreducible, the spaces $I_n(\tau,s_0)^{R_{n,e}}$ are both one-dimensional, and $\xi_{s_0}$ can be written as $\rho(\varphi)\xi^e_{s_0}$ for some $\varphi\in{\mathcal H}(H_n(F))$, where $\xi_{s_0}^e\in I_n(\tau,s_0)$ are nonzero spherical elements.
So we may further assume that $\varphi$ is right $R_{n,e}$-invariant. Now a simple computation shows \[ \Psi_{n,n}(v\otimes\xi_{s_0})=\Psi_{n,n}((\varphi\star v)\otimes\xi^e_{s_0})\ne 0. \] Therefore we have $0\neq\varphi\star v\in{\mathcal V}_\pi^{R_{n,e}}$. \end{proof} As another consequence, we have \begin{lm}\label{L:vanish of RS integral} Let $v\in{\mathcal V}_\pi^{K_{n,m}}$ be a nonzero element for some $m\geq 0$. If $\tau$ is ramified, then the integrals $\Psi_{n,r}(v\otimes\xi_s)$ vanish for all $\xi_s\in I_r(\tau,s)$, $s\in\mathbb{C}$ and $1\le r\le n$. \end{lm} \begin{proof} The proof is similar to that of Lemma~\ref{L:existence}. First note that we have the isomorphism \[ I_r(\tau,s)^{R_{r,m}}\cong{\mathcal V}_{\tau_s}^{{\rm GL}_r(\frak{o})} \] (between $\mathbb{C}$-linear spaces) due to the Iwasawa decomposition $H_r(F)=Q_r(F)R_{r,m}$ and $R_{r,m}\cap M_r(F)\cong{\rm GL}_r(\frak{o})$. It follows that $I_r(\tau,s)^{R_{r,m}}$ is nonzero if and only if $\tau$ is unramified. Now suppose that $\tau$ is ramified, and let $\varphi=\mathbb{I}_{R_{r,m}}\in{\mathcal H}(H_r(F))$ be the characteristic function of $R_{r,m}$. Then we have $\varphi\star v=c_rv$ with $c_r={\rm vol}(R_{r,m},dh)$. For $s_0\in\mathbb{C}$ with ${\rm Re}(s_0)\gg 0$ such that the integrals $\Psi_{n,r}(v\otimes\xi_{s_0})$ converge absolutely for all $\xi_{s_0}\in I_r(\tau,s_0)$, one checks that \[ c_r\Psi_{n,r}(v\otimes\xi_{s_0})=\Psi_{n,r}((\varphi\star v)\otimes\xi_{s_0})=\Psi_{n,r}(v\otimes\rho(\varphi)\xi_{s_0}). \] Since $\rho(\varphi)\xi_{s_0}\in I_r(\tau,s_0)^{R_{r,m}}=0$, we conclude that $\Psi_{n,r}(v\otimes\xi_{s_0})=0$.
For general $s_0\in\mathbb{C}$ and $\xi_{s_0}\in I_r(\tau,s_0)$, let $\xi'_s$ be the standard section such that $\xi'_{s_0}=\xi_{s_0}$. Then $\Psi_{n,r}(v\otimes\xi_{s_0})$ is given by $\Psi_{n,r}(v\otimes\xi'_s)|_{s=s_0}$ via meromorphic continuation. But since $\Psi_{n,r}(v\otimes\xi'_s)=0$ for all ${\rm Re}(s)\gg 0$ by what we have shown, the meromorphic function $\Psi_{n,r}(v\otimes\xi'_s)$ is identically zero, and hence $\Psi_{n,r}(v\otimes\xi_{s_0})=0$ as desired. \end{proof} \section{Unramified representations}\label{S:unramified rep} By Lemma~\ref{L:vanish of RS integral}, the computations of Rankin-Selberg integrals attached to nonzero paramodular vectors reduce to the case when $\tau$ is unramified. Instead of assuming that $\tau$ is irreducible, we work with $\tau$ that is induced of ``Langlands' type''; in other words, $\tau$ is a standard module. \subsection{Induced of Langlands' type}\label{SS:Langlands type} Let $A_r\subset{\rm GL}_r$ be the diagonal torus. Given an $r$-tuple of nonzero complex numbers $\underline{\alpha}=(\alpha_1,\alpha_2,\ldots,\alpha_r)$, there is a unique unramified character $\chi_{\underline{\alpha}}$ of $A_r(F)$ given by \[ \chi_{\underline{\alpha}}({\rm diag}(a_1,a_2,\ldots,a_r))=\alpha_1^{-\log_q|a_1|_F}\alpha_2^{-\log_q|a_2|_F}\cdots\alpha_r^{-\log_q|a_r|_F}. \] By extending $\chi_{\underline{\alpha}}$ to a character of the Borel subgroup $B_r=A_r\ltimes Z_r\subset{\rm GL}_r$, we can form the normalized induced representation $\tau_{\underline{\alpha}}={\rm Ind}_{B_r(F)}^{{\rm GL}_r(F)}(\chi_{\underline{\alpha}})$ of ${\rm GL}_r(F)$. This is an unramified representation of ${\rm GL}_r(F)$ whose ${\rm GL}_r(\frak{o})$-fixed subspace is one-dimensional.
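To unwind the sign convention in the exponent (a quick check of ours): since $|\varpi^c|_F=q^{-c}$, we have $-\log_q|\varpi^c|_F=c$, and hence on cocharacter values

```latex
\chi_{\underline{\alpha}}\left({\rm diag}(\varpi^{c_1},\varpi^{c_2},\ldots,\varpi^{c_r})\right)
 =\alpha_1^{c_1}\alpha_2^{c_2}\cdots\alpha_r^{c_r},
```

so $\chi_{\underline{\alpha}}$ is trivial on $A_r(\frak{o})$ and is determined by the tuple $\underline{\alpha}$ of Satake parameters.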
On the other hand, every unramified irreducible representation of ${\rm GL}_r(F)$ can be realized as a constituent of $\tau_{\underline{\alpha}}$ for some $\underline{\alpha}$. Note that $\tau^*_{\underline{\alpha}}=\tau_{\underline{\alpha}^*}$ with $\underline{\alpha}^*=(\alpha^{-1}_r,\alpha^{-1}_{r-1},\ldots,\alpha_1^{-1})$.\\ An unramified representation $\tau$ of ${\rm GL}_r(F)$ is called induced of $Langlands'$ $type$ if $\tau\cong\tau_{\underline{\alpha}}$ for some $\underline{\alpha}$ with \begin{equation}\label{E:decreasing} |\alpha_1|\le|\alpha_2|\le\cdots\le|\alpha_r|. \end{equation} These unramified representations may be reducible, but they have the following properties that allow us to work with them: (i) the $\mathbb{C}$-linear space ${\rm Hom}_{Z_r(F)}(\tau,\bar{\psi}_{Z_r})$ is one-dimensional; (ii) the intertwining map $u\mapsto W_u$ from $u\in{\mathcal V}_\tau$ to its associated Whittaker function is injective (\cite{JacquetShalika1983}, \cite[Lemma 2]{Jacquet2012}); (iii) $\tau^*$ is again induced of Langlands' type. We denote by $J(\tau)$ the unique irreducible quotient of $\tau$, which is unramified (\cite[Corollary 1.2]{Matringe2013}) and has Satake parameters $\alpha_1,\alpha_2,\ldots,\alpha_r$. \subsection{Satake isomorphisms}\label{SS:satake isom} Let $\tau=\tau_{\underline{\alpha}}$ be an unramified representation of ${\rm GL}_r(F)$ (not necessarily induced of Langlands' type) and let $v_{\tau}\in{\mathcal V}_\tau^{{\rm GL}_r(\frak{o})}$ be nonzero. Let ${\mathcal S}_r$ be the $\mathbb{C}$-algebra of symmetric polynomials in \[ (X_1,X_1^{-1},X_2,X_2^{-1},\ldots,X_r,X_r^{-1}).
\] It contains the subalgebra ${\mathcal S}_r^0$ consisting of elements $f$ satisfying \[ f(X_1,X_2,\ldots,X^{-1}_i,\ldots,X^{-1}_j,\ldots,X_{r-1},X_r)=f(X_1,X_2,\ldots,X_i,\ldots,X_j,\ldots,X_{r-1},X_r) \] for all $1\le i<j\le r$. Let $\mathscr S_r:{\mathcal H}({\rm GL}_r(F)//{\rm GL}_r(\frak{o}))\overset{\sim}{\longrightarrow}{\mathcal S}_r$ be the Satake isomorphism (\cite{Satake1963}). Then we have \begin{equation}\label{E:satake for GL} \int_{{\rm GL}_r(F)}\varphi(a)\tau(a)v_\tau\,da=\mathscr S_r(\varphi)(\alpha_1,\alpha_2,\ldots,\alpha_r)v_\tau \quad({\rm vol}({\rm GL}_r(\frak{o}),da)=1) \end{equation} for every $\varphi\in{\mathcal H}({\rm GL}_r(F)//{\rm GL}_r(\frak{o}))$. The algebra isomorphism $\mathscr S_r$ is the composition of the isomorphisms \[ {\mathcal H}({\rm GL}_r(F)//{\rm GL}_r(\frak{o}))\overset{\varsigma_r}{\longrightarrow}{\mathcal H}(A_r(F)//A_r(\frak{o}))^{W_{{\rm GL}_r}}\cong\mathbb{C}[A_r(\mathbb{C})]^{\frak{S}_r}={\mathcal S}_r \] with \[ \varsigma_r(\varphi)(d)=\delta^{\frac{1}{2}}_{B_r}(d)\int_{Z_r(F)}\varphi(dz)\,dz \quad({\rm vol}(Z_r(\frak{o}),dz)=1). \] Similarly, we have the algebra isomorphism \begin{equation}\label{E:Satake for SO(2n)} \mathscr S^0_{r,m}:{\mathcal H}(H_r(F)//R_{r,m})\overset{\varsigma_{r,m}^0}{\longrightarrow}{\mathcal H}(T_r(F)//T_r(\frak{o}))^{W_{H_r}}\cong\mathbb{C}[T_r(\mathbb{C})]^{\frak{S}_r\ltimes(\mathbb{Z}/2\mathbb{Z})^{r-1}}={\mathcal S}_r^0 \end{equation} for each $m\ge 0$ (by \eqref{E:R*}), with \begin{equation}\label{E:satake for H} \varsigma^0_{r,m}(\varphi)(t)=\delta^{\frac{1}{2}}_{B_{H_r}}(t)\int_{V_r(F)}\varphi(tv)\,dv \quad({\rm vol}(V_r(\frak{o}),dv)=1). \end{equation} Here $W_{{\rm GL}_r}$ (resp. $W_{H_r}$) is the Weyl group of ${\rm GL}_r$ (resp.
$H_r$) and $\frak{S}_r$ is the permutation group of order $r!$.\\ Following \cite[Section 5.3]{Tsai2013}, we define the map $\imath_{r,m}:{\mathcal H}(H_r(F)//R_{r,m})\longrightarrow{\mathcal H}({\rm GL}_r(F)//{\rm GL}_r(\frak{o}))$ by \begin{equation}\label{E:middle satake} \imath_{r,m}(\varphi)(a)=\delta^{\frac{1}{2}}_{Q_r}(m(a))\int_{N_r(F)}\varphi(m(a)n)\,dn \quad({\rm vol}(N_r(\frak{o}),dn)=1). \end{equation} Then one checks that \begin{equation}\label{E:decompose of Satake isom} \varsigma^0_{r,m}=\varsigma_r\circ\imath_{r,m}. \end{equation} Since the algebras ${\mathcal H}(A_r(F)//A_r(\frak{o}))$ and ${\mathcal H}(T_r(F)//T_r(\frak{o}))$ can be naturally identified, \eqref{E:decompose of Satake isom} implies the commutative diagram \begin{equation}\label{E:diagram} \begin{tikzcd} &{\mathcal H}(H_r(F)//R_{r,m})\quad\arrow{d}{\imath_{r,m}}\arrow{r}{\mathscr S^0_{r,m}}&\quad{\mathcal S}^0_r\arrow{d}{{\rm inclusion}}\\ &{\mathcal H}({\rm GL}_r(F)//{\rm GL}_r(\frak{o}))\quad\arrow{r}{\mathscr S_r}&\quad{\mathcal S}_r \end{tikzcd} \end{equation} for every $m\ge 0$. These will be used to describe our results on oldforms. \subsection{Spherical Whittaker functions}\label{SS:spherical Whittaker function} Let $\tau=\tau_{\underline{\alpha}}$ be an unramified representation of ${\rm GL}_r(F)$ that is induced of Langlands' type. Fix nonzero elements $v_\tau\in{\mathcal V}_\tau^{{\rm GL}_r(\frak{o})}$ and $\Lambda_{\tau,\bar{\psi}}\in{\rm Hom}_{Z_r(F)}(\tau,\bar{\psi}_{Z_r})$. Let \[ W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})=\Lambda_{\tau,\bar{\psi}}(\tau(a)v_{\tau}) \] be the spherical Whittaker function on ${\rm GL}_r(F)$, normalized so that \[ W(I_r;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})=1.
\] Explicit formulae for $W(-;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})$ (when $\ker(\psi)=\frak{o}$) were obtained in \cite{Shintani1976}, \cite{CasselmanShalika1980} and can be described as follows. Let $\epsilon_1,\epsilon_2,\ldots,\epsilon_r$ be the standard basis of $X_\bullet(A_r)={\rm Hom}(\mathbb{G}_m,A_r)$, so that $\epsilon_j(y)\in A_r(F)$ ($1\le j\le r$) is the diagonal matrix whose $(j,j)$-entry is $y$, while all other diagonal entries are $1$. Put \[ P^+_{{\rm GL}_r}=\left\{c_1\epsilon_1+c_2\epsilon_2+\cdots+c_r\epsilon_r\mid c_1\ge c_2\ge\cdots\ge c_r\right\}\subset X_\bullet(A_r). \] By the Iwasawa decomposition, an element $a\in{\rm GL}_r(F)$ can be written as $a=z\varpi^{\lambda}k$ for some $z\in Z_r(F)$, $k\in{\rm GL}_r(\frak{o})$ and $\lambda\in X_\bullet(A_r)$. Then it is not hard to show that $W(z\varpi^{\lambda}k;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})=0$ when $\lambda\notin P^+_{{\rm GL}_r}$. On the other hand, when $\lambda\in P^+_{{\rm GL}_r}$, we have \begin{equation}\label{E:Whittaker function for GL} W(z\varpi^{\lambda}k;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})=\bar{\psi}_{Z_r}(z)\cdot\delta_{B_r}^{\frac{1}{2}}(\varpi^{\lambda})\cdot\chi^{{\rm GL}_r}_{\lambda}(d_{\underline{\alpha}}), \end{equation} where $\chi_{\lambda}^{{\rm GL}_r}$ is the character of the finite-dimensional representation of ${\rm GL}_r(\mathbb{C})$ whose highest weight (with respect to $B_r$) is $\lambda$, and we write $d_{\underline{\alpha}}={\rm diag}(\alpha_1,\alpha_2,\ldots,\alpha_r)\in A_r(\mathbb{C})$.\\ Recall that in our setting $|\alpha_1|\le|\alpha_2|\le\cdots\le|\alpha_r|$.
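In the ${\rm GL}_2$ case the character value $\chi^{{\rm GL}_2}_{\lambda}(d_{\underline{\alpha}})$ for $\lambda=c_1\epsilon_1+c_2\epsilon_2$ with $c_1\ge c_2$ is the Schur polynomial $s_{(c_1,c_2)}(\alpha_1,\alpha_2)$, so that, for example, $W({\rm diag}(\varpi,1);\alpha_1,\alpha_2;\bar{\psi})=q^{-1/2}(\alpha_1+\alpha_2)$. The following sketch (our own illustration; the function names and the sample parameter values are not from the text) verifies in exact rational arithmetic that the bialternant and monomial expressions for $s_{(c_1,c_2)}$ agree, are symmetric, and are homogeneous of degree $c_1+c_2$ in the Satake parameters.

```python
from fractions import Fraction

def schur_gl2_det(c1, c2, a1, a2):
    # Bialternant (Weyl character) formula for GL_2, highest weight (c1, c2),
    # c1 >= c2:  s_{(c1,c2)}(a1,a2) = (a1^{c1+1} a2^{c2} - a1^{c2} a2^{c1+1}) / (a1 - a2)
    num = a1 ** (c1 + 1) * a2 ** c2 - a1 ** c2 * a2 ** (c1 + 1)
    return num / (a1 - a2)

def schur_gl2_sum(c1, c2, a1, a2):
    # Monomial expansion of the same character:
    #   (a1 a2)^{c2} * (a1^{c1-c2} + a1^{c1-c2-1} a2 + ... + a2^{c1-c2})
    return (a1 * a2) ** c2 * sum(a1 ** i * a2 ** (c1 - c2 - i)
                                 for i in range(c1 - c2 + 1))

# Arbitrary (hypothetical) Satake parameters, chosen as exact rationals:
a1, a2 = Fraction(3, 2), Fraction(-5, 7)
Y = Fraction(2, 3)
for (c1, c2) in [(1, 0), (2, 0), (3, 1), (4, -2)]:
    val = schur_gl2_det(c1, c2, a1, a2)
    assert val == schur_gl2_sum(c1, c2, a1, a2)   # the two formulas agree
    assert val == schur_gl2_det(c1, c2, a2, a1)   # symmetric in (a1, a2)
    # homogeneity of degree c1 + c2 in the Satake parameters:
    assert schur_gl2_det(c1, c2, Y * a1, Y * a2) == Y ** (c1 + c2) * val
```

For $\lambda=(1,0)$ this recovers $\chi^{{\rm GL}_2}_{\lambda}(d_{\underline{\alpha}})=\alpha_1+\alpha_2$, matching the classical Casselman-Shalika value quoted above.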
However, for every fixed $a\in{\rm GL}_r(F)$, the function \[ \underline{\alpha}\longmapsto W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi}) \] is polynomial and symmetric in the $\alpha_j$'s. Consequently, we have an ${\mathcal S}_r$-valued function \[ W(-;X_1,X_2,\ldots,X_r;\bar{\psi}):{\rm GL}_r(F)\longrightarrow{\mathcal S}_r \] on ${\rm GL}_r(F)$ such that for every $a\in{\rm GL}_r(F)$ and every $r$-tuple of nonzero complex numbers $\underline{\alpha}=(\alpha_1,\alpha_2,\ldots,\alpha_r)$, the scalar $W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})$ is the value of the polynomial $W(a;X_1,X_2,\ldots,X_r;\bar{\psi})$ at $(\alpha_1,\alpha_2,\ldots,\alpha_r)$. This function was introduced by Jacquet, Piatetski-Shapiro and Shalika (\cite[Section 3]{JPSS1981}, \cite[Section 2]{Jacquet2012}) in order to prove the existence of the ``essential vector'' of an irreducible generic representation of ${\rm GL}_r(F)$. Note that for $a$ in a subset of ${\rm GL}_r(F)$ that is compact modulo $Z_r(F)$, the polynomials remain in a finite-dimensional subspace of ${\mathcal S}_r$. Moreover, we have the relation \[ W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})\nu_r(a)^s=W(a;q^{-s}\alpha_1,q^{-s}\alpha_2,\ldots,q^{-s}\alpha_r;\bar{\psi}). \] This implies that if $\nu_r(a)=q^{-\ell}$, then the polynomial $W(a;X_1,X_2,\ldots,X_r;\bar{\psi})$ is homogeneous of degree $\ell$, i.e. \begin{equation}\label{E:Whittaker homo} Y^{\ell}W(a;X_1,X_2,\ldots,X_r;\bar{\psi})=W(a;YX_1,YX_2,\ldots,YX_r;\bar{\psi}). \end{equation} \section{A Key Construction}\label{S:key} In this section, we extend the results in \cite[Chapter 5]{Tsai2013} from generic supercuspidal representations to generic representations, as well as from $r=n$ to $r\le n$.
These results, which are stated in Proposition~\ref{P:main prop}, are the core of our computations of the Rankin-Selberg integrals attached to (conjectural) newforms and oldforms. To state and to prove Proposition~\ref{P:main prop}, however, we need some preparations. \subsection{Notation and conventions}\label{SS:convention} Let $\underline{\alpha}=(\alpha_1,\alpha_2,\ldots,\alpha_r)$ be an $r$-tuple of nonzero complex numbers and let $\underline{\dot{\alpha}}$ be its rearrangement so that \eqref{E:decreasing} holds. Let $\tau=\tau_{\underline{\dot{\alpha}}}$, an unramified representation of ${\rm GL}_r(F)$ that is induced of Langlands' type (cf. \S\ref{SS:Langlands type}). Fix elements $v_\tau\in{\mathcal V}_\tau^{{\rm GL}_r(\frak{o})}$ and $\Lambda_{\tau,\bar{\psi}}\in{\rm Hom}_{Z_r(F)}(\tau,\bar{\psi}_{Z_r})$ so that $\Lambda_{\tau,\bar{\psi}}(v_\tau)=1$. By \eqref{E:R*} and \eqref{E:Iwasawa decomp}, the space $I_r(\tau,s)^{R_{r,m}}$ is one-dimensional, with generator $\xi^m_{\tau,s}$ normalized by $\xi^m_{\tau,s}(I_{2r})=v_\tau$. We have \begin{equation}\label{E:f_xi, m} f_{\xi^m_{\tau,s}}(m_r(a))=\Lambda_{\tau,\bar{\psi}}(\xi^m_{\tau,s}(m_r(a)))=W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})\nu_r(a)^{s+\frac{r}{2}-1} \end{equation} for $a\in{\rm GL}_r(F)$. Let $\pi$ be an irreducible generic representation of $G_n(F)$ and let $\Lambda_{\pi,\psi}\in{\rm Hom}_{U_n(F)}(\pi,\psi_{U_n})$ be a nonzero element. \subsubsection{Haar measures}\label{SSS:Haar} In this and the next section, the Haar measures appearing in the Rankin-Selberg integrals are chosen as follows. First, we take $d\bar{x}$ to be the Haar measure on $\bar{X}_{n,r}(F)$ with \[ {\rm vol}(\bar{X}_{n,r}(\frak{o}),d\bar{x})=1.
\] On the other hand, the Haar measures $dt$ on $T_r(F)$ and $dk$ on $R_{r,m}$ (for any $m\ge 0$) are chosen so that \[ {\rm vol}(T_r(\frak{o}),dt)={\rm vol}(R_{r,m},dk)=1. \] Then the quotient measure $dh$ on $V_r(F)\backslash H_r(F)$ is given by (cf. \eqref{E:Iwasawa decomp}) \[ \int_{V_r(F)\backslash H_r(F)}f(h)\,dh=\int_{T_r(F)}\int_{R_{r,m}}f(tk)\delta_{B_{H_r}}^{-1}(t)\,dt\,dk. \] Note that by Lemma~\ref{L:same vol}, $dh$ does not depend on the choice of $m$. \subsection{Some lemmas} In this subsection, we collect some lemmas that will be used in the proof of Proposition~\ref{P:main prop}. \begin{lm}\label{L:general AL elt} Let $u_{n,r,m}\in J_{n,m}$ be given by \begin{equation}\label{E:u_r,m} u_{n,r,m}=\begin{pmatrix} &&\varpi^{-m}I_r\\ &(-1)^rI_{2(n-r)+1}&\\ \varpi^mI_r&& \end{pmatrix}. \end{equation} Then $u_{n,r,m}$ normalizes both $K_{n,m}$ and $R_{n,m}$. \end{lm} \begin{proof} Certainly, $u_{n,r,m}$ normalizes $K_{n,m}$, as it is contained in $J_{n,m}$. On the other hand, since $u_{n,r,m}$ also normalizes $H_n(F)$ in $G_n(F)$, we find that \[ R_{n,m}=K_{n,m}\cap H_n(F)=u_{n,r,m}^{-1}K_{n,m}u_{n,r,m}\cap u_{n,r,m}^{-1}H_n(F)u_{n,r,m}=u_{n,r,m}^{-1}R_{n,m}u_{n,r,m}. \] This proves the lemma. \end{proof} \begin{lm}\label{L:supp for para vec} Let $v\in{\mathcal V}_\pi^{K_{n,m}}$ and let $W_v$ be its associated Whittaker function. Then, as a function of $t\in T_n(F)$, $W_v(t)$ is right $T_n(\frak{o})$-invariant, and $W_v(\varpi^{\lambda})=0$ if $\lambda\notin P^+_{G_n}$. \end{lm} \begin{proof} This follows immediately from the fact that $T_n(\frak{o})$ and $U_n(\frak{o})$ are both contained in $K_{n,m}$. \end{proof} Recall that $\gamma(s,\tau,\bigwedge^2,\psi)$ is Shahidi's local coefficient introduced in \S\ref{SSS:intertwining map}.
\begin{lm}\label{L:LS=Gal} We have $\gamma(s,\tau,\bigwedge^2,\psi)=\gamma(s,\phi_{J(\tau)},\bigwedge^2,\psi)$. \end{lm} \begin{proof} This is a consequence of the multiplicativity of $\gamma(s,\tau,\bigwedge^2,\psi)$ (which holds even when $\tau$ is reducible, under our assumptions on $\tau$) and a result of Shahidi (\cite[Theorem 3.5]{Shahidi1990}), together with a recent result of Cogdell-Shahidi-Tsai (\cite[Theorem 1.1]{CogdellShahidiTsai2017}). \end{proof} The next lemma computes the action of the intertwining map on $\xi_{\tau,s}^m$, which is crucial in the proof of Proposition~\ref{P:main prop}. Note that $\delta_r^{-1}w_{r,m}\in H_r(F)$, where $\delta_r$ and $w_{r,m}$ are given by \eqref{E:delta} and \eqref{E:t and w}, respectively. \begin{lm}\label{L:GK method for m} We have \[ \rho(\delta_r^{-1}w_{r,m})M(\tau,s)\xi^m_{\tau,s}(\delta_r^{-1}h\delta_r)=\omega_{\tau_s}(\varpi)^m\cdot\frac{L(2s-1,\phi_{J(\tau)},\bigwedge^2)}{L(2s,\phi_{J(\tau)},\bigwedge^2)}\cdot\xi^m_{\tau^*,1-s}(h) \] for $m\ge 0$, where $\omega_{\tau_s}$ is the central character of $\tau_s$. \end{lm} \begin{proof} Since $\rho(\delta_r^{-1}w_{r,m})M(\tau,s)\xi^m_{\tau,s}(\delta_r^{-1}h\delta_r)\in I_r(\tau^*,1-s)^{R_{r,m}}$ by the second conjugation in \eqref{E:R conj}, it suffices to show \[ \rho(\delta_r^{-1}w_{r,m})M(\tau,s)\xi^m_{\tau,s}(I_{2r})=\omega_{\tau_s}(\varpi)^m\frac{L(2s-1,\phi_{J(\tau)},\bigwedge^2)}{L(2s,\phi_{J(\tau)},\bigwedge^2)}. \] For this, we first note that the integral \eqref{E:intertwining map} can be written as \[ M(\tau,s)\xi_s(h)=\int_{N_r(F)}\xi_s(w_{r,0}n\delta_rh)\,dn.
\] If $m=0$, then $\delta_r^{-1}w_{r,0}\in R_{r,0}=H_r(\frak{o})$, and the assertion can be read off from the formula in \cite[Section 4]{Arthur1981}; that is, we have \[ M(\tau,s)\xi^0_{\tau,s}(I_{2r})=\int_{N_r(F)}\xi^0_{\tau,s}(w_{r,0}n\delta_r)\,dn=\frac{L(2s-1,\phi_{J(\tau)},\bigwedge^2)}{L(2s,\phi_{J(\tau)},\bigwedge^2)}. \] Note that the Haar measure $dn$ on $N_r(F)$ (cf. \S\ref{SSS:intertwining map}) is the same as the one used in \cite[Section 4]{Arthur1981}. Now suppose that $m>0$. On one hand, by the first conjugation in \eqref{E:R conj}, we have $(\xi^0_{\tau,s})^{t_{r,m}}=\xi^m_{\tau,s}$, where $(\xi^0_{\tau,s})^{t_{r,m}}$ is the function on $H_r(F)$ defined by $(\xi^0_{\tau,s})^{t_{r,m}}(h)=\xi^0_{\tau,s}(t_{r,m}^{-1}ht_{r,m})$. On the other hand, since $t_{r,m}^{-1}w_{r,0}t_{r,m}=\varpi^{m\mu_r}w_{r,0}$, we find that \begin{align*} \rho(\delta_r^{-1}w_{r,m})M(\tau,s)\xi^m_{\tau,s}(I_{2r}) &=M(\tau,s)(\xi^0_{\tau,s})^{t_{r,m}}(\delta_r^{-1}w_{r,m})\\ &=\int_{N_r(F)}\xi^0_{\tau,s}(t^{-1}_{r,m}w_{r,0}nw_{r,m}t_{r,m})\,dn\\ &=\int_{N_r(F)}\xi^0_{\tau,s}(\varpi^{m\mu_r}w_{r,0}(t^{-1}_{r,m}n\,t_{r,m})w_{r,0})\,dn\\ &=\omega_{\tau_s}(\varpi)^m\int_{N_r(F)}\xi^0_{\tau,s}(w_{r,0}n\,\delta_r(\delta_r^{-1}w_{r,0}))\,dn\\ &=\omega_{\tau_s}(\varpi)^m\frac{L(2s-1,\phi_{J(\tau)},\bigwedge^2)}{L(2s,\phi_{J(\tau)},\bigwedge^2)}. \end{align*} This concludes the proof. \end{proof} The following lemma helps us to simplify our computations. \begin{lm}\label{L:supp of W_v} Let $v\in{\mathcal V}_\pi^{K_{n,m}}$, $a\in{\rm GL}_r(F)$ and $\bar{x}\in\bar{X}_{n,r}(F)\setminus\bar{X}_{n,r}(\frak{o})$ with $r<n$. Then $W_v(m_r(a)\bar{x})=0$.
\end{lm} \begin{proof} Let us denote by $E_{ij}\in{\rm Mat}_{(2n+1)\times(2n+1)}(F)$ the matrix with $1$ in the $(i,j)$-entry and $0$ in all other entries. For $x=(x_{ij})\in{\rm Mat}_{(n-r)\times r}(F)$, we set \[ \bar{x}=\begin{pmatrix} I_r&&&&\\ x&I_{n-r}&&&\\ &&1&&\\ &&&I_{n-r}&\\ &&&x'&I_r \end{pmatrix}\in\bar{X}_{n,r}(F). \] Suppose that $\bar{x}\notin\bar{X}_{n,r}(\frak{o})$. Then there exist $1\le\ell\le n-r$ and $1\le c\le r$ such that $x_{\ell c}\notin\frak{o}$, but $x_{ij}\in\frak{o}$ if $\ell<i\le n-r$, or $i=\ell$ and $c<j\le r$. Define $x_0=(x^0_{ij})\in{\rm Mat}_{(n-r)\times r}(F)$ by $x^0_{ij}=x_{ij}$ if $i<\ell$, or $i=\ell$ and $1\le j\le c$, while $x^0_{ij}=0$ otherwise. Then we have \[ W_v(m_r(a)\bar{x})=W_v(m_r(a)\bar{x}_0), \] since $\bar{x}_0^{-1}\bar{x}\in K_{n,m}$ by our assumption on $\bar{x}$. To prove the lemma, we first assume that $a=t\in A_r(F)$. In this case, the idea is to find $u\in U_n(\frak{o})\subset K_{n,m}$ such that \begin{equation}\label{E:u} W_v(m_r(t)\bar{x}_0)=W_v(m_r(t)\bar{x}_0u)=\psi(yx_{\ell c})W_v(m_r(t)\bar{x}_0), \end{equation} where $y\in\frak{o}$ appears in the entries of $u$. Since $\ker(\psi)=\frak{o}$, $x_{\ell c}\notin\frak{o}$ and $y\in\frak{o}$ can be taken arbitrary, the assertion for $a\in A_r(F)$ will follow. To define $u$, let $y\in\frak{o}$ and put \[ u=I_{2n+1}+yE_{c,\ell+1}-yE_{2n+1-\ell,2n+2-c} \] if $\ell<n-r$, whereas \[ u=I_{2n+1}-2yE_{c,n+1}+yE_{n+1,2n+2-c}-y^2E_{c,2n+2-c} \] if $\ell=n-r$. One then checks that $u\in U_n(\frak{o})$ and that \eqref{E:u} holds. For arbitrary $a\in{\rm GL}_r(F)$, we can write $a=ztk$ for some $z\in Z_r(F)$, $t\in A_r(F)$ and $k\in{\rm GL}_r(\frak{o})$.
Then \[ W_v(m_r(a)\bar{x}) = \psi_{U_n}(m_r(z))\cdot W_v(m_r(t)\bar{x}') = 0 \] since $m_r(k)\in K_{n,m}$ and $\bar{x}':=m_r(k)\bar{x}m_r(k)^{-1}\notin\bar{X}_{n,r}(\mathfrak{o})$. \end{proof} \begin{lm}\label{L:exp of RS int} Let $v\in{\mathcal V}_\pi^K$ with $K=K_{n,m}$ if $r<n$ and $K=R_{r,m}$ if $r=n$. Then we have \begin{align*} \Psi_{n,r}(v\otimes\xi^m_{\tau,s}) &= \sum_{\ell\in\mathbb{Z}}\int_{a\in Z_r(F)\backslash{\rm GL}_r(F),\,\nu_r(a)=q^{-\ell}} W_v(m_r(a))W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})\nu_r(a)^{s-n+\frac{r}{2}}da \end{align*} for $\mathrm{Re}(s)\gg 0$, where the $\ell$-th summand vanishes for all $\ell\ll 0$. \end{lm} \begin{proof} In the following computations, we assume implicitly that $s$ lies in the domain of convergence. First note that we have the identification \[ V_r(F)\backslash Q_r(F)\cong Z_r(F)\backslash{\rm GL}_r(F). \] Then by the Iwasawa decomposition $H_r(F)=Q_r(F)R_{r,m}$ together with \eqref{E:RS integral in general} and \eqref{E:f_xi, m}, we find that \begin{align*} \Psi_{n,n}(v\otimes\xi^m_{\tau,s}) &= \int_{ Z_n(F)\backslash{\rm GL}_n(F)} W_v(m_n(a))W(a;\alpha_1,\alpha_2,\ldots,\alpha_n;\bar{\psi})\nu_n(a)^{s-\frac{n}{2}}da\\ &= \sum_{\ell\in\mathbb{Z}}\int_{a\in Z_n(F)\backslash{\rm GL}_n(F),\,\nu_n(a)=q^{-\ell}} W_v(m_n(a))W(a;\alpha_1,\alpha_2,\ldots,\alpha_n;\bar{\psi})\nu_n(a)^{s-\frac{n}{2}}da \end{align*} if $r=n$.
On the other hand, by \lmref{L:supp of W_v} and the fact that both $R_{r,m}$ and $\bar{X}_{n,r}(\mathfrak{o})$ are contained in $K_{n,m}$, we get \begin{align*} \Psi_{n,r}(v\otimes\xi^m_{\tau,s}) &= \int_{Z_r(F)\backslash{\rm GL}_r(F)}\int_{\bar{X}_{n,r}(F)} W_v(\bar{x}m_r(a))W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})\nu_r(a)^{s-\frac{r}{2}}d\bar{x}da\\ &= \int_{Z_r(F)\backslash{\rm GL}_r(F)}\int_{\bar{X}_{n,r}(F)} W_v(m_r(a)\bar{x})W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})\nu_r(a)^{s-n+\frac{r}{2}}d\bar{x}da\\ &= \int_{Z_r(F)\backslash{\rm GL}_r(F)} W_v(m_r(a))W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})\nu_r(a)^{s-n+\frac{r}{2}}da\\ &= \sum_{\ell\in\mathbb{Z}}\int_{a\in Z_r(F)\backslash{\rm GL}_r(F),\,\nu_r(a)=q^{-\ell}} W_v(m_r(a))W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})\nu_r(a)^{s-n+\frac{r}{2}}da \end{align*} if $r<n$. This proves the first assertion. To prove that the $\ell$-th summand vanishes for all $\ell\ll 0$, we can use the Iwasawa decomposition of ${\rm GL}_r$ to derive (recall $R_{r,m}\cap M_r(F)=M_r(\mathfrak{o})$) \begin{align}\label{E:exp of RS int} \begin{split} &\int_{a\in Z_r(F)\backslash{\rm GL}_r(F),\,\nu_r(a)=q^{-\ell}} W_v(m_r(a))W(a;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})\nu_r(a)^{s-n+\frac{r}{2}}da\\ &= \sum_{\lambda\in P^+_{{\rm GL}_r},\,\,{\rm tr}(\lambda)=\ell} W_v(m_r(\varpi^\lambda))W(\varpi^\lambda;\alpha_1,\alpha_2,\ldots,\alpha_r;\bar{\psi})\nu_r(\varpi^\lambda)^{s-n+\frac{r}{2}}. \end{split} \end{align} Now if $\lambda=\lambda_1\varepsilon_1+\lambda_2\varepsilon_2+\cdots+\lambda_r\varepsilon_r\in P^+_{{\rm GL}_r}$, then there exists $N\in\mathbb{Z}$, depending only on $v$, such that $W_v(m_r(\varpi^\lambda))=0$ whenever $\lambda_j<N$ for some $1\le j\le r$.
Actually, we have $N=0$ if $r<n$ by \lmref{L:supp for para vec}. From this one sees that the summation in \eqref{E:exp of RS int} is a finite sum and that each of its terms vanishes when $\ell\ll 0$. This completes the proof. \end{proof} \subsection{The key proposition} We are now in a position to state and prove \propref{P:main prop}. As mentioned in the beginning of this section, this proposition is the key to the proofs of our main results. Its proof, on the other hand, can be viewed as a generalization to our setting of the one given by Jacquet, Piatetski-Shapiro and Shalika (\cite{JPSS1981}, \cite{JPSS1983}). We note that Kaplan also used the same technique in \cite{Kaplan2013} to establish the relations between the $L$-factors for generic representations of ${\rm SO}_{2n}\times{\rm GL}_r$ defined by the Rankin-Selberg integrals and by the Langlands-Shahidi method. \begin{prop}\label{P:main prop} Let $K=K_{n,m}$ if $r<n$ and $K=R_{n,m}$ if $r=n$. Then there exist linear maps \[ \Xi^m_{n,r}: {\mathcal V}_{\pi}^{K}\longrightarrow{\mathcal S}_r \] sending $v$ to $\Xi^m_{n,r}(v;X_1, X_2,\ldots,X_r)$ such that the following hold: \begin{itemize} \item[(1)] We have \[ \Xi^m_{n,r}(v;q^{-s+1/2}\alpha_1,q^{-s+1/2}\alpha_2, \ldots,q^{-s+1/2}\alpha_r) = \frac{L(2s,\phi_{J(\tau)},\bigwedge^2)\Psi_{n,r}(v\otimes \xi^m_{\tau,s})}{L(s,\phi_\pi\otimes\phi_{J(\tau)})} \] for every $s\in\mathbb{C}$. \item[(2)] We have $\pi(u_{n,r,m})v\in{\mathcal V}_\pi^K$ and the functional equation \[ \Xi^m_{n,r}(\pi(u_{n,r,m})v; X_1^{-1},X_2^{-1}, \ldots, X_r^{-1})=\varepsilon_\pi^r(X_1X_2\cdots X_r)^{a_\pi-m} \Xi^m_{n,r}(v;X_1, X_2, \ldots,X_r) \] holds, where $u_{n,r,m}\in G_n(F)$ is given by \eqref{E:u_r,m}.
\item[(3)] The kernel of $\Xi^m_{n,r}$ is given by \[ \ker(\Xi^m_{n,r})=\stt{v\in {\mathcal V}_\pi^K\mid \text{$W_v(t)=0$ for every $t\in T_r$}}. \] \item[(4)] The relation \[ \Xi^m_{n,r}(v;X_1,X_2, \ldots,X_{r-1},0)=\Xi^m_{n,r-1}(v;X_1, X_2,\ldots,X_{r-1}) \] holds for $v\in{\mathcal V}_\pi^{K_{n,m}}$ and $2\leq r\leq n$. \item[(5)] We have \[ \Xi^m_{n,n}(\varphi\star v;X_1, X_2,\ldots, X_n) =\mathscr S^0_{n,m}(\varphi)\cdot \Xi^m_{n,n}(v; X_1, X_2,\ldots, X_n) \] for $\varphi\in{\mathcal H}(H_n(F)//R_{n,m})$, where $\mathscr S^0_{n,m}$ is the algebra isomorphism given by \eqref{E:Satake for SO(2n)}. \end{itemize} \end{prop} We divide the proof into five parts, one for each item. \subsubsection{Proof of $(1)$} Let $v\in{\mathcal V}_\pi^K$. Motivated by \eqref{E:exp of RS int}, we consider the following integral \begin{align}\label{E:Psi_r,m,l} \begin{split} \Psi^m_{n,r,\ell}(v; X_1, X_2,\ldots, X_r) &= \int_{a\in Z_r(F)\backslash{\rm GL}_r(F),\,\nu_r(a)=q^{-\ell}} W_v(m_r(a))W(a;X_1,X_2,\ldots,X_r;\bar{\psi})\nu_r(a)^{\frac{r+1}{2}-n}da\\ &= \sum_{\lambda\in P^+_{{\rm GL}_r},\,\,{\rm tr}(\lambda)=\ell} W_v(m_r(\varpi^\lambda))W(\varpi^\lambda;X_1,X_2,\ldots,X_r;\bar{\psi})\nu_r(\varpi^\lambda)^{\frac{r+1}{2}-n}. \end{split} \end{align} We know that this is a finite sum, and hence it gives rise to a homogeneous polynomial of degree $\ell$ in ${\mathcal S}_r$ by \eqref{E:Whittaker homo}. Furthermore, there exists an integer $N_{v}$, depending only on $v$, such that $\Psi^m_{n,r,\ell}(v; X_1, X_2,\ldots, X_r)=0$ for all $\ell<N_v$.
We then define the formal Laurent series \begin{equation}\label{E:Psi_r,m} \Psi^m_{n,r}(v;X_1,X_2,\ldots, X_r;Y) = \sum_{\ell\in\mathbb{Z}}\Psi^m_{n,r,\ell}(v;X_1,X_2,\ldots, X_r)Y^\ell = \sum_{\ell\ge N_{v}}\Psi^m_{n,r,\ell}(v;X_1,X_2,\ldots, X_r)Y^\ell \end{equation} with coefficients in ${\mathcal S}_r$. Now \lmref{L:exp of RS int} implies \begin{equation}\label{E:Psi eva} \Psi^m_{n,r}(v;\alpha_1,\alpha_2,\ldots,\alpha_r; q^{-s+\frac{1}{2}}) = \Psi_{n,r}(v\otimes\xi^m_{\tau,s}) \end{equation} for $\mathrm{Re}(s)\gg 0$. On the other hand, let $P_{\phi_\pi}(Y)\in\mathbb{C}[Y]$ be such that $L(s,\phi_\pi)=P_{\phi_\pi}(q^{-s})^{-1}$ and put \[ P_{\phi_\pi}(X_1, X_2,\ldots, X_r; Y) = \prod_{j=1}^r P_{\phi_\pi}(q^{-\frac{1}{2}}X_jY) = \sum_{\ell\ge 0} a_\ell(X_1, X_2,\ldots, X_r)Y^\ell. \] Then $a_\ell(X_1, X_2, \ldots, X_r)$ is a homogeneous polynomial of degree $\ell$ in ${\mathcal S}_r$ with $a_0(X_1, X_2, \ldots, X_r)=1$. We have \[ L(s,\phi_\pi\otimes\phi_{J(\tau)}) = P_{\phi_\pi}(\alpha_1,\alpha_2,\ldots, \alpha_r;q^{-s+\frac{1}{2}})^{-1} \quad\text{and}\quad L(1-s,\phi_\pi\otimes\phi_{J(\tau^*)}) = P_{\phi_\pi}(\alpha^{-1}_1,\alpha^{-1}_2,\ldots, \alpha^{-1}_r;q^{s-\frac{1}{2}})^{-1}. \] We also put \[ P_{\bigwedge^2}(X_1, X_2,\ldots,X_r; Y) = \prod_{1\le i<j\le r}(1-q^{-1}X_iX_jY^2). \] Then \[ L(2s,\phi_{J(\tau)},\bigwedge{}^2) = P_{\bigwedge^2}(\alpha_1, \alpha_2,\ldots,\alpha_r; q^{-s+\frac{1}{2}})^{-1} \quad\text{and}\quad L(2-2s,\phi_{J(\tau^*)},\bigwedge{}^2) = P_{\bigwedge^2}(\alpha^{-1}_1, \alpha^{-1}_2,\ldots,\alpha^{-1}_r; q^{s-\frac{1}{2}})^{-1}.
\] By the geometric series expansion, one can write \[ P_{\bigwedge^2}(X_1, X_2,\ldots,X_r; Y)^{-1} = \sum_{\ell\ge 0} b_\ell(X_1,X_2,\ldots, X_r)Y^\ell. \] Again, $b_\ell(X_1, X_2, \ldots, X_r)$ is a homogeneous polynomial of degree $\ell$ in ${\mathcal S}_r$ with $b_0(X_1, X_2, \ldots, X_r)=1$, but this time the sum is certainly not finite.\\ At this point, let us put \begin{equation}\label{E:Xi_r,m} \Xi^m_{n,r}(v;X_1,X_2,\ldots,X_r;Y) = \frac{P_{\phi_\pi}(X_1, X_2,\ldots, X_r; Y)\Psi^m_{n,r}(v;X_1,X_2,\ldots,X_r;Y)} {P_{\bigwedge^2}(X_1, X_2,\ldots,X_r; Y)} \end{equation} which is again a formal Laurent series with coefficients in ${\mathcal S}_r$. It can be written as \begin{equation}\label{E:exp Xi_r,m,l} \Xi^m_{n,r}(v;X_1,X_2,\ldots,X_r;Y) = \sum_{\ell\ge N_{v}} \Xi^m_{n,r,\ell}(v;X_1,X_2,\ldots,X_r)Y^\ell \end{equation} with \begin{equation}\label{E:Xi_r,m,l} \Xi^m_{n,r,\ell}(v;X_1,X_2,\ldots,X_r) = \sum_{\ell_1+\ell_2+\ell_3=\ell} \Psi^m_{n,r,\ell_1}(v;X_1,X_2,\ldots,X_r) a_{\ell_2}(X_1, X_2,\ldots, X_r) b_{\ell_3}(X_1,X_2,\ldots, X_r) \end{equation} which is a finite sum. Clearly, $\Xi^m_{n,r,\ell}(v;X_1,X_2,\ldots,X_r)$ is a homogeneous polynomial of degree $\ell$ in ${\mathcal S}_r$. Also, it follows from \eqref{E:Psi eva} that \begin{equation}\label{E:formal RS int eva} \Xi^m_{n,r}(v;\alpha_1,\alpha_2,\ldots,\alpha_r; q^{-s+\frac{1}{2}}) = \frac{L(2s,\phi_{J(\tau)},\bigwedge^2)\Psi_{n,r}(v\otimes\xi^m_{\tau,s})}{L(s,\phi_\pi\otimes\phi_{J(\tau)})} \end{equation} for $\mathrm{Re}(s)\gg 0$.
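The coefficient formula above is simply the Cauchy product of three formal series, and its finiteness follows from the support bounds alone. The following Python sketch (purely illustrative, with numeric stand-ins for the polynomial coefficients in ${\mathcal S}_r$; the names are hypothetical) makes this bookkeeping explicit: one series is supported in degrees $\ge N_v$, the other two in degrees $\ge 0$, so each coefficient of the product is a finite sum.

```python
from itertools import product

def cauchy3(psi, a, b, ell):
    """Coefficient of Y^ell in the product of three formal series given
    as dicts {exponent: coefficient}.  Mirrors the convolution defining
    the ell-th coefficient of Xi: only finitely many triples (l1, l2, l3)
    with l1 + l2 + l3 = ell contribute, since psi is supported in degrees
    >= N_v while a and b are supported in degrees >= 0."""
    total = 0
    for l1, l2 in product(psi, a):
        l3 = ell - l1 - l2
        if l3 in b:
            total += psi[l1] * a[l2] * b[l3]
    return total

# Toy data with numeric stand-ins for the coefficients in S_r:
psi = {-2: 1, -1: 3, 0: 2}              # Laurent tail: degrees >= N_v = -2
a = {0: 1, 1: -1}                       # polynomial factor, degrees >= 0
b = {l: 2.0 ** (-l) for l in range(6)}  # truncated geometric series

assert cauchy3(psi, a, b, 0) == 0.25    # a finite sum of five products
assert cauchy3(psi, a, b, -3) == 0      # vanishes below the support bound
```

The same bookkeeping is what guarantees that each coefficient in the expansion of the quotient defining $\Xi^m_{n,r}$ lies in ${\mathcal S}_r$.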
\\ A priori, the sum in \eqref{E:exp Xi_r,m,l} is infinite; however, we will show that it is in fact finite by using the functional equation \eqref{E:FE}. To be more precise, let $w=\delta_r^{-1}w_{r,m}\in H_r(F)$ and let $\hat{w}\in G_n(F)$ be its image under the embedding \eqref{E:embedding}. Then one checks that \[ \delta_{n,r}\hat{w} = u_{n,r,m} \] where $\delta_{n,r}$ and $u_{n,r,m}$ are the elements of $G_n(F)$ given by \eqref{E:delta_n,r} and \eqref{E:u_r,m}, respectively. Now by \eqref{E:dual RS integral in general}, \lmref{L:LS=Gal} and \lmref{L:GK method for m}, we have \begin{align*} \tilde{\Psi}_{n,r}(v\otimes\xi^m_{\tau,s}) = \tilde{\Psi}_{n,r}(\pi(\hat{w})v\otimes\rho(w)\xi^m_{\tau,s}) = \omega_{\tau_s}(\varpi)^m\frac{L(2-2s,\phi_{J(\tau^*)},\bigwedge^2)}{L(2s,\phi_{J(\tau)},\bigwedge^2)} \Psi_{n,r}(\pi(u_{n,r,m})v\otimes\xi^m_{\tau^*,1-s}). \end{align*} On the other hand, we also have \[ \gamma(s,\pi\times\tau,\psi) = \epsilon(s,\phi_\pi\otimes\phi_{J(\tau)},\psi)\frac{L(1-s,\phi_\pi\otimes\phi_{J(\tau^*)})}{L(s,\phi_\pi\otimes\phi_{J(\tau)})} \] by \thmref{T:RS=Gal gamma}. Together, we find that \eqref{E:FE} can be written as \begin{equation}\label{E:FE sp} \frac{L(2-2s,\phi_{J(\tau^*)},\bigwedge^2)\Psi_{n,r}(\pi(u_{n,r,m})v\otimes\xi^m_{\tau^*,1-s})}{L(1-s,\phi_\pi\otimes\phi_{J(\tau^*)})} = \omega_{\tau_s}(\varpi)^{-m}\epsilon(s,\phi_\pi\otimes\phi_{J(\tau)},\psi) \frac{L(2s,\phi_{J(\tau)},\bigwedge^2)\Psi_{n,r}(v\otimes\xi^m_{\tau,s})}{L(s,\phi_\pi\otimes\phi_{J(\tau)})}.
\end{equation} Now, because $u_{n,r,m}$ normalizes $K$ by \lmref{L:general AL elt}, we have $\pi(u_{n,r,m})v\in{\mathcal V}_\pi^K$, and hence \lmref{L:exp of RS int} gives \begin{align*} \Psi_{n,r}(\pi(&u_{n,r,m})v\otimes\xi^m_{\tau^*,1-s})\\ &= \sum_{\ell\in\mathbb{Z}}\int_{a\in Z_r(F)\backslash{\rm GL}_r(F),\,\nu_r(a)=q^{-\ell}} W_{\pi(u_{n,r,m})v}(m_r(a))W(a;\alpha^{-1}_1,\alpha^{-1}_2,\ldots,\alpha^{-1}_r;\bar{\psi})\nu_r(a)^{1-s-n+\frac{r}{2}}da \end{align*} for $\mathrm{Re}(s)\ll 0$. It follows that the formal Laurent series \[ \Xi^m_{n,r}(\pi(u_{n,r,m})v;X_1, X_2,\ldots, X_r;Y) = \sum_{\ell\ge N_{\pi(u_{n,r,m})v}} \Xi^m_{n,r,\ell}(\pi(u_{n,r,m})v;X_1,X_2,\ldots,X_r)Y^\ell \] satisfies \begin{equation}\label{E:dual formal RS int eva} \Xi^m_{n,r}(\pi(u_{n,r,m})v;\alpha_1^{-1},\alpha_2^{-1},\ldots,\alpha_r^{-1}; q^{s-\frac{1}{2}}) = \frac{L(2-2s,\phi_{J(\tau^*)},\bigwedge^2)\Psi_{n,r}(\pi(u_{n,r,m})v\otimes\xi^m_{\tau^*,1-s})}{L(1-s,\phi_\pi\otimes\phi_{J(\tau^*)})} \end{equation} for $\mathrm{Re}(s)\ll 0$. Let us upgrade the functional equation \eqref{E:FE sp} to one between formal Laurent series. For this, we put \[ \epsilon_{\phi_\pi,m}(X_1,X_2,\ldots,X_r;Y) = \varepsilon^r_\pi(X_1X_2\cdots X_r)^{a_\pi-m}Y^{(a_\pi-m)r}. \] This is a unit in ${\mathcal S}_r[Y,Y^{-1}]$ and we have \[ \epsilon_{\phi_\pi,m}(\alpha_1,\alpha_2,\ldots,\alpha_r;q^{-s+\frac{1}{2}}) = \omega_{\tau_s}(\varpi)^{-m}\epsilon(s,\phi_\pi\otimes\phi_{J(\tau)},\psi) \] for $s\in\mathbb{C}$.
Applying \eqref{E:formal RS int eva}, \eqref{E:dual formal RS int eva} and \eqref{E:FE sp}, we get \begin{equation}\label{E:FE for formal RS int} \Xi^m_{n,r}(\pi(u_{n,r,m})v;X_1^{-1},X_2^{-1},\ldots, X_r^{-1}; Y^{-1}) = \epsilon_{\phi_\pi,m}(X_1, X_2,\ldots, X_r;Y) \Xi^m_{n,r}(v;X_1,X_2,\ldots,X_r;Y). \end{equation} This implies that the sum in \eqref{E:exp Xi_r,m,l} is finite, and hence we can define \[ \Xi^m_{n,r}(v;X_1,X_2,\ldots,X_r) = \Xi^m_{n,r}(v;X_1,X_2,\ldots,X_r;1)\in{\mathcal S}_r. \] Observe that \begin{equation}\label{E:Xi relation} \Xi^m_{n,r}(v;YX_1, YX_2,\ldots, YX_r)=\Xi^m_{n,r}(v;X_1,X_2,\ldots,X_r;Y) \end{equation} by the fact that $\Xi^m_{n,r,\ell}(v;X_1,X_2,\ldots,X_r)$ is a homogeneous polynomial of degree $\ell$. Then \eqref{E:formal RS int eva} implies \begin{align*} \Xi^m_{n,r}(v;q^{-s+\frac{1}{2}}\alpha_1, q^{-s+\frac{1}{2}}\alpha_2,\ldots, q^{-s+\frac{1}{2}}\alpha_r) = \frac{L(2s,\phi_{J(\tau)},\bigwedge^2)\Psi_{n,r}(v\otimes\xi^m_{\tau,s})}{L(s,\phi_\pi\otimes\phi_{J(\tau)})} \end{align*} for $s\in\mathbb{C}$. This defines the linear map $\Xi_{n,r}^m$ and completes the proof of $(1)$.\qed \subsubsection{Proof of $(2)$} This follows immediately by putting $Y=1$ in \eqref{E:FE for formal RS int}.\qed \subsubsection{Proof of $(3)$} We first note that $m_r(A_r(F))=T_r(F)$ (cf. \eqref{E:levi of Q}). Also, since $m_r(A_r(\mathfrak{o}))\subset R_{r,m}$, it suffices to show \[ \Xi^m_{n,r}(v;X_1,X_2,\ldots,X_r)=0 \quad\text{if and only if}\quad W_v(m_r(\varpi^\lambda))=0\quad\text{for all $\lambda\in X_\bullet(A_r)$}. \] For a given $\lambda\in X_\bullet(A_r)$, let us write \[ \lambda=\lambda_1\varepsilon_1+\lambda_2\varepsilon_2+\cdots+\lambda_r\varepsilon_r.
\] Then we know that $W_v(m_r(\varpi^\lambda))=0$ if $\lambda_j< N$ for some $1\le j\le r$, where $N$ is an integer depending on $v$. Since $m_r({\rm GL}_r(\mathfrak{o}))\subset K$, a standard argument shows that $W_v(m_r(\varpi^{\lambda}))=0$ if $\lambda\notin P^+_{{\rm GL}_r}$. Together, we obtain \begin{equation}\label{E:rought support} W_v(m_r(\varpi^{\lambda}))=0\quad\text{if $\lambda\notin P^+_{{\rm GL}_r}$, or if $\lambda\in P^+_{{\rm GL}_r}$ with $\lambda_r<N$}. \end{equation} Hence, we are reduced to proving \begin{equation}\label{E:Xi=0} \Xi^m_{n,r}(v;X_1,X_2,\ldots,X_r)=0 \quad\text{if and only if}\quad W_v(m_r(\varpi^\lambda))=0\quad\text{for all $\lambda\in P^+_{{\rm GL}_r}$ with $\lambda_r\ge N$}. \end{equation} By \eqref{E:Xi relation}, \eqref{E:Xi_r,m} and \eqref{E:Psi_r,m}, we find that \begin{equation}\label{E:Xi=0 equiv} \Xi^m_{n,r}(v;X_1,X_2,\ldots,X_r)=0 \quad\text{if and only if}\quad \Psi^m_{n,r,\ell}(v;X_1,X_2,\ldots,X_r)=0\quad\text{for all $\ell\in\mathbb{Z}$}. \end{equation} This is equivalent to \[ \Psi^m_{n,r,\ell}(v;\alpha_1,\alpha_2,\ldots,\alpha_r)=0 \] for all $\ell\in\mathbb{Z}$ and all $r$-tuples $\ul{\alpha}=(\alpha_1,\alpha_2,\ldots,\alpha_r)$ of nonzero complex numbers.
Now by \eqref{E:Whittaker function for GL}, \eqref{E:Psi_r,m,l} and \eqref{E:rought support}, we find that \begin{equation}\label{E:exp Psi_r,m,l} \Psi^m_{n,r,\ell}(v;\alpha_1,\alpha_2,\ldots,\alpha_r) = q^{-\ell\left(\frac{r+1}{2}-n\right)} \sum_{\lambda\in \Upsilon_{\ell}} W_v(m_r(\varpi^{\lambda}))\delta^{-\frac{1}{2}}_{B_r}(\varpi^\lambda)\chi^{{\rm GL}_r}_{\lambda}(d_{\ul{\alpha}}) \end{equation} where \[ \Upsilon_\ell = \stt{\lambda\in P^+_{{\rm GL}_r}\mid {\rm tr}({\lambda})=\ell\,\,\text{and}\,\,\lambda_r\ge N} \] with ${\rm tr}:X_\bullet(A_r)\to\mathbb{Z}$ the trace map (with respect to $\varepsilon_1, \varepsilon_2,\ldots, \varepsilon_r$) defined similarly to \eqref{E:trace}. Evidently, $\Upsilon_\ell$ is a finite set for all $\ell\in\mathbb{Z}$, and it is in fact empty if $\ell\ll 0$. Since $\delta^{-\frac{1}{2}}_{B_r}(\varpi^\lambda)\ne 0$ and $\ul{\alpha}$ can be arbitrary, we conclude from the linear independence of characters and \eqref{E:exp Psi_r,m,l} that \[ \Psi^m_{n,r,\ell}(v;X_1,X_2,\ldots,X_r)=0 \quad\text{if and only if}\quad W_v(m_r(\varpi^\lambda))=0\quad\text{for all $\lambda\in\Upsilon_\ell$}. \] Now \eqref{E:Xi=0} follows from this together with \eqref{E:Xi=0 equiv}.\qed \subsubsection{Proof of $(4)$} It is clear from the definitions that \[ P_{\phi_\pi}(X_1,X_2,\ldots,X_{r-1},0; Y) = P_{\phi_\pi}(X_1,X_2,\ldots,X_{r-1}; Y) \] and \[ P_{\bigwedge^2}(X_1,X_2,\ldots,X_{r-1},0; Y) = P_{\bigwedge^2}(X_1,X_2,\ldots,X_{r-1}; Y).
\] Thus the proof will follow once we show that \begin{equation}\label{E:r to r-1} \Psi^m_{n,r}(v;X_1,X_2,\ldots, X_{r-1},0;Y) = \Psi^m_{n,r-1}(v;X_1,X_2,\ldots, X_{r-1};Y) \end{equation} for $v\in{\mathcal V}_\pi^{K_{n,m}}$. For this, we follow the idea of Ginzburg in \cite[Theorem B]{Ginzburg1990} to ``remove'' the Satake parameter $\alpha_r$.\\ As in the proof of $(3)$, for a given $\lambda\in X_\bullet(A_{r})$, we write \[ \lambda=\lambda_1\varepsilon_1+\lambda_2\varepsilon_2+\cdots+\lambda_r\varepsilon_r. \] Elements $\lambda\in X_\bullet(A_r)$ with $\lambda_r=0$ can be viewed as elements of $X_\bullet(A_{r-1})$ via the natural inclusion $X_\bullet(A_{r-1})\subset X_\bullet(A_r)$. Now for a fixed $\lambda\in P^+_{{\rm GL}_r}$, we can regard $\chi^{{\rm GL}_r}_\lambda(d_{\ul{\alpha}})$ as a polynomial in ${\mathcal S}_r$ as $\ul{\alpha}$ varies. In particular, we can consider $\chi^{{\rm GL}_r}_\lambda(d_{\ul{\alpha}_0})$ provided that it is defined, where $\ul{\alpha}_0=(\alpha_1,\alpha_2,\ldots,\alpha_{r-1}, 0)$.
Now a lemma in \cite[Section 4]{Ginzburg1990} implies that this is defined when $\lambda_r\ge 0$, in which case we have \begin{equation}\label{E:Ginz rel} \chi^{{\rm GL}_r}_{\lambda}(d_{\ul{\alpha}_0}) = \begin{cases} \chi^{{\rm GL}_{r-1}}_{\lambda}(d_{\ul{\alpha}'})\quad&\text{if $\lambda_r=0$},\\ 0\quad&\text{if $\lambda_r>0$}, \end{cases} \end{equation} where $\ul{\alpha}'=(\alpha_1,\alpha_2,\ldots,\alpha_{r-1})$.\\ Following Ginzburg's proof, we first apply \lmref{L:supp for para vec} and \eqref{E:Psi_r,m}, \eqref{E:Psi_r,m,l}, \eqref{E:Whittaker function for GL} to write (for $\mathrm{Re}(s)\gg 0$) \begin{align*} \Psi^m_{n,r}(v;\alpha_1,\alpha_2,\ldots,\alpha_r; q^{-s+\frac{1}{2}}) =& \sum_{\lambda\in P^{+0}_{{\rm GL}_r}} W_v(m_r(\varpi^\lambda))\chi^{{\rm GL}_r}_\lambda(d_{\ul{\alpha}})\delta_{B_r}(\varpi^\lambda)^{-\frac{1}{2}}\nu_r(\varpi^\lambda)^{s-n+\frac{r}{2}}\\ &+ \sum_{\lambda\in P^{++}_{{\rm GL}_r}} W_v(m_r(\varpi^\lambda))\chi^{{\rm GL}_r}_\lambda(d_{\ul{\alpha}})\delta_{B_r}(\varpi^\lambda)^{-\frac{1}{2}}\nu_r(\varpi^\lambda)^{s-n+\frac{r}{2}} \end{align*} where \[ P^{+0}_{{\rm GL}_r} = \stt{\lambda\in P^+_{{\rm GL}_r}\mid \lambda_r=0} \quad\text{and}\quad P^{++}_{{\rm GL}_r} = \stt{\lambda\in P^+_{{\rm GL}_r}\mid \lambda_r>0}. \] At this point, we let $\alpha_r=0$ in the above equation. This is legitimate by the discussion in the previous paragraph. Then the second summation vanishes according to \eqref{E:Ginz rel}.
On the other hand, since \[ \delta_{B_r}(\varpi^\lambda)^{-\frac{1}{2}}\nu_r(\varpi^\lambda)^{s-n+\frac{r}{2}} = \delta_{B_{r-1}}(\varpi^{\lambda})^{-\frac{1}{2}}\nu_{r-1}(\varpi^{\lambda})^{s-n+\frac{r-1}{2}} \] for $\lambda\in P^{+0}_{{\rm GL}_r}$, and since $P^{+0}_{{\rm GL}_r}$ is the disjoint union of $P^{++}_{{\rm GL}_{r-1}}$ and $P^{+0}_{{\rm GL}_{r-1}}$, the first summation becomes \begin{align*} \Psi^m_{n,r}(v;\alpha_1,\alpha_2,\ldots,\alpha_{r-1}, 0; q^{-s+\frac{1}{2}}) &= \sum_{\lambda\in P^{+0}_{{\rm GL}_{r-1}}} W_v(m_{r-1}(\varpi^\lambda))\chi^{{\rm GL}_{r-1}}_\lambda(d_{\ul{\alpha}'}) \delta_{B_{r-1}}(\varpi^\lambda)^{-\frac{1}{2}}\nu_{r-1}(\varpi^\lambda)^{s-n+\frac{r-1}{2}}\\ &\quad+ \sum_{\lambda\in P^{++}_{{\rm GL}_{r-1}}} W_v(m_{r-1}(\varpi^\lambda))\chi^{{\rm GL}_{r-1}}_\lambda(d_{\ul{\alpha}'}) \delta_{B_{r-1}}(\varpi^\lambda)^{-\frac{1}{2}}\nu_{r-1}(\varpi^\lambda)^{s-n+\frac{r-1}{2}}\\ &= \Psi^m_{n,r-1}(v;\alpha_1,\alpha_2,\ldots,\alpha_{r-1}; q^{-s+\frac{1}{2}}) \end{align*} by using \eqref{E:Ginz rel} again. This implies \eqref{E:r to r-1}.\qed \subsubsection{Proof of $(5)$} By \eqref{E:Xi_r,m} and the definition of $\Xi^m_{n,n}(v;X_1,X_2,\ldots,X_n)$, it is enough to show \begin{equation}\label{E:satake and jacquet} \Psi^m_{n,n}(\varphi\star v;X_1,X_2,\ldots,X_n;Y) = \mathscr S^0_{n,m}(\varphi)(YX_1,YX_2,\ldots,YX_n) \Psi^m_{n,n}(v;X_1,X_2,\ldots,X_n;Y) \end{equation} for $\varphi\in{\mathcal H}(H_n(F)//R_{n,m})$ and $v\in{\mathcal V}_\pi^{R_{n,m}}$. Let $a\in{\rm GL}_n(F)$.
We first compute \begin{align}\label{E:Whittaker and satake} \begin{split} W_{\varphi\star v}(m_n(a)) &= \int_{H_n(F)} W_v(m_n(a)h^{-1})\varphi(h)dh = \int_{Q_n(F)} W_{v}(m_n(a)p^{-1})\varphi(p)\delta_{Q_n}(p)dp\\ &= \int_{{\rm GL}_n(F)} W_v(m_n(ab^{-1}))\delta^{\frac{1}{2}}_{Q_n}(m_n(b)) \left(\int_{N_n(F)}\varphi(m_n(b)u)\delta^{\frac{1}{2}}_{Q_n}(m_n(b))du\right)db\\ &= \int_{{\rm GL}_n(F)} W_v(m_n(ab^{-1}))\delta^{\frac{1}{2}}_{Q_n}(m_n(b))\imath_{n,m}(\varphi)(b)db. \end{split} \end{align} Recall that $\imath_{n,m}(\varphi)\in{\mathcal H}({\rm GL}_n(F)//{\rm GL}_n(\mathfrak{o}))$ is the element defined by \eqref{E:middle satake}. Next, we have \begin{align}\label{E:satake for Whittaker} \begin{split} \int_{{\rm GL}_n(F)} &\imath_{n,m}(\varphi)(b)W(ab; q^{-s+\frac{1}{2}}\alpha_1, q^{-s+\frac{1}{2}}\alpha_2,\ldots, q^{-s+\frac{1}{2}}\alpha_n)db\\ &= \mathscr S^0_{n,m}(\varphi)(q^{-s+\frac{1}{2}}\alpha_1, q^{-s+\frac{1}{2}}\alpha_2,\ldots, q^{-s+\frac{1}{2}}\alpha_n) W(a; q^{-s+\frac{1}{2}}\alpha_1, q^{-s+\frac{1}{2}}\alpha_2,\ldots, q^{-s+\frac{1}{2}}\alpha_n) \end{split} \end{align} by \eqref{E:satake for GL} and the commutative diagram \eqref{E:diagram}. Now by \eqref{E:Psi_r,m}, we can write \begin{align*} \Psi^m_{n,n}&(\varphi\star v; \alpha_1, \alpha_2,\ldots, \alpha_n; q^{-s+\frac{1}{2}}) = \int_{Z_n(F)\backslash{\rm GL}_n(F)} W_{\varphi\star v}(m_n(a))W(a; q^{-s+\frac{1}{2}}\alpha_1, q^{-s+\frac{1}{2}}\alpha_2,\ldots, q^{-s+\frac{1}{2}}\alpha_n)\delta_{Q_n}^{-\frac{1}{2}}(m_n(a))da.
\end{align*} Applying \eqref{E:Whittaker and satake}, the right-hand side becomes \begin{align*} \int_{Z_n(F)\backslash{\rm GL}_n(F)} \int_{{\rm GL}_n(F)}W_v(m_n(ab^{-1}))\imath_{n,m}(\varphi)(b) W(a; q^{-s+\frac{1}{2}}\alpha_1, q^{-s+\frac{1}{2}}\alpha_2,\ldots, q^{-s+\frac{1}{2}}\alpha_n) \delta_{Q_n}^{-\frac{1}{2}}(m_n(ab^{-1}))dbda. \end{align*} After changing the variable $a\mapsto ab$ and taking \eqref{E:satake for Whittaker} into account, we obtain \begin{align*} \int_{Z_n(F)\backslash{\rm GL}_n(F)}& W_v(m_n(a)) \left(\int_{{\rm GL}_n(F)} \imath_{n,m}(\varphi)(b)W(ab; q^{-s+\frac{1}{2}}\alpha_1, q^{-s+\frac{1}{2}}\alpha_2,\ldots, q^{-s+\frac{1}{2}}\alpha_n)db\right) \delta_{Q_n}^{-\frac{1}{2}}(m_n(a))da\\ &= \mathscr S^0_{n,m}(\varphi)(q^{-s+\frac{1}{2}}\alpha_1, q^{-s+\frac{1}{2}}\alpha_2,\ldots, q^{-s+\frac{1}{2}}\alpha_n) \Psi^m_{n,n}(v; \alpha_1, \alpha_2,\ldots, \alpha_n; q^{-s+\frac{1}{2}}). \end{align*} This proves \eqref{E:satake and jacquet}.\qed\\ Our proof of \propref{P:main prop} is now complete.\qed \section{Proof of the Main Results}\label{S:prove} We prove our main results in this section. For this, we follow the notation and conventions of \S\ref{SS:convention}. The proof of \thmref{T:main} will be given in \S\ref{SS:RS int attached to newform} and \S\ref{SS:RS int attached to oldform}, under the following hypothesis: \[ \text{\textit{The space ${\mathcal V}_{\pi}^{K_{n,a_\pi}}$ is one-dimensional and $\Lambda_{\pi,\psi}$ is nontrivial on ${\mathcal V}_{\pi}^{K_{n,a_\pi}}$.}} \] So in the first two subsections, we fix an element $v_\pi\in{\mathcal V}_\pi^{K_{n,a_\pi}}$ with $\Lambda_{\pi,\psi}(v_\pi)=1$. In \S\ref{SS:n=r=2}, on the other hand, we will prove \thmref{T:main'}, and we conclude with a comparison between bases in \S\ref{SSS:concluding remark}.
\subsection{Rankin-Selberg integrals attached to newforms}\label{SS:RS int attached to newform} In this subsection, we prove \thmref{T:main} except for the inequality \eqref{E:dim bd}, whose proof is postponed to the next subsection. \subsubsection*{Proof of Conjecture \ref{C1} $(2)$ and \eqref{E:main eqn}} Let $1\le r\le n$ and let $u_{n,r,a_\pi}\in J_{n,a_\pi}$ be the elements given by \eqref{E:u_r,m}. Note that $u_{n,1,a_\pi}\in K_{n,a_\pi}=J_{n,a_\pi}$ if $a_\pi=0$, while $u_{n,1,a_\pi}\in J_{n,a_\pi}\setminus K_{n,a_\pi}$ if $a_\pi>0$. Let $\varepsilon_r\in\stt{\pm 1}$ be such that $\pi(u_{n,r,a_\pi})v_\pi=\varepsilon_r v_\pi$. Certainly, we have $\varepsilon_r=1$ for all $r$ if $a_\pi=0$. We need to show that $\varepsilon_1=\varepsilon_\pi$ and that \eqref{E:main eqn} holds. For this, it suffices to prove \begin{equation}\label{E:main identity} \Xi^{a_\pi}_{n,n}(v_\pi; X_1,X_2,\ldots,X_n)=1. \end{equation} Indeed, if \eqref{E:main identity} is valid, then \propref{P:main prop} $(1)$ and $(4)$ (with $m=a_\pi$) will give us \eqref{E:main eqn}. On the other hand, by \eqref{E:main identity} and \propref{P:main prop} $(4)$, we have $\Xi_{n,1}^{a_\pi}(v_\pi; X^{-1}_1)=\Xi_{n,1}^{a_\pi}(v_\pi; X_1)=1$, and hence by \propref{P:main prop} $(2)$ \[ \varepsilon_1 =\varepsilon_1\Xi_{n,1}^{a_\pi}(v_\pi; X^{-1}_1) =\Xi_{n,1}^{a_\pi}(\pi(u_{n,1,a_\pi})v_\pi; X^{-1}_1) =\varepsilon_\pi\Xi^{a_\pi}_{n,1}(v_\pi;X_1) =\varepsilon_\pi. \] So the proof reduces to verifying \eqref{E:main identity}.\\ To prove \eqref{E:main identity}, we first claim that $\Xi^{a_\pi}_{n,n}(v_\pi; X_1, X_2, \ldots, X_n)$ is constant. Let $Y_j$ be the $j$-th elementary symmetric polynomial in $X_1, X_2,\ldots, X_n$ for $1\le j\le n$.
Then we have \[ {\mathcal S}_n=\mathbb{C}[Y_1, Y_2, \ldots, Y_{n-1}, Y_n, Y_n^{-1}]. \] Note that $Y_n=X_1X_2\cdots X_n$ gives a $\mathbb{Z}$-grading ${\mathcal S}_n=\oplus_{\ell\in\mathbb{Z}}{\mathcal S}_{n,\ell}$ by the degree in $Y_n$, i.e. ${\mathcal S}_{n,\ell}=\mathbb{C}[Y_1, Y_2,\ldots, Y_{n-1}]\, Y_n^\ell$ for $\ell\in\mathbb{Z}$. Now by \lmref{L:supp for para vec}, \eqref{E:Psi_r,m,l}, \eqref{E:Xi_r,m,l} and the observation \[ {\rm deg}_{Y_n}W(\varpi^\lambda;X_1,X_2,\ldots, X_n;\bar{\psi})=\lambda_n \] where $\lambda=\lambda_1\varepsilon_1+\lambda_2\varepsilon_2+\cdots+\lambda_n\varepsilon_n\in P^+_{{\rm GL}_n}$, we find that \begin{equation}\label{E:degree in Y_n} \Xi^m_{n,n}(v;X_1, X_2,\ldots, X_n)\in\bigoplus_{\ell\ge 0}{\mathcal S}_{n,\ell} \end{equation} for every $v\in{\mathcal V}_\pi^{K_{n,m}}$ and $m\ge 0$. Then \propref{P:main prop} $(2)$ gives \[ \varepsilon_n\Xi^{a_\pi}_{n,n}(v_\pi;X_1^{-1}, X_2^{-1},\ldots, X_n^{-1}) = \Xi^{a_\pi}_{n,n}(\pi(u_{n,n,a_\pi})v_\pi;X_1^{-1}, X_2^{-1},\ldots, X_n^{-1}) = \varepsilon_\pi^n \Xi^{a_\pi}_{n,n}(v_\pi; X_1, X_2, \ldots, X_n). \] Combining this with \eqref{E:degree in Y_n} (with $m=a_\pi$), we see that $\Xi^{a_\pi}_{n,n}(v_\pi; X_1, X_2, \ldots, X_n)=c$ for some constant $c$.\\ To complete the proof, it remains to show that $c=1$. By \propref{P:main prop} $(1)$ and $(4)$, Remark \ref{R:zeta integral} and the normalization of the spherical Whittaker functions, we find that \[ c = \Xi^{a_\pi}_{n,n}(v_\pi; q^{-s+\frac{1}{2}},\underbrace{0,0,\ldots,0}_{n-1}) = \Xi^{a_\pi}_{n,1}(v_\pi; q^{-s+\frac{1}{2}}) = \frac{Z(s,v_\pi)}{L(s,\phi_\pi)} \] which gives \[ c\,L(s,\phi_\pi) = Z(s,v_\pi). \] Writing both sides as power series in $q^{-s}$, we see that the constant term of the left-hand side is just $c$.
On the other hand, by \lmref{L:supp for para vec} and \lmref{L:exp of RS int}, the constant term of the right-hand side is $\Lambda_{\pi,\psi}(v_\pi)=1$. This finishes the proof.\qed \subsection{Rankin-Selberg integrals attached to oldforms}\label{SS:RS int attached to oldform} We prove \eqref{E:dim bd} in this subsection. The proof is achieved by computing the Rankin-Selberg integrals attached to the subsets $\EuScript B_{\pi,m}$ defined in \S\ref{SS:conj basis}, which is also one of the main objectives of this work (cf. \thmref{T:main for oldform}). However, we should mention that our results for oldforms are less explicit, as they involve two positive constants $q_n$ and $q'_n$\footnote{It seems that these constants depend only on $q$ and $n$.} implicitly given by \begin{equation}\label{E:q_n} Z(s,\theta(v_\pi))=q_nq^{-s+\frac{1}{2}}Z(s,v_\pi) \quad\text{and}\quad Z(s,\theta'(v_\pi))=q'_nZ(s,v_\pi) \end{equation} where $\theta=\theta_{a_\pi+1}$ and $\theta'=\theta'_{a_\pi+1}$ are the level raising operators defined in \S\ref{SSS:level raising}. This can be proved by following the arguments in \cite[Proposition 9.1.3]{Tsai2013} (without Hypothesis \ref{H}). We should point out that $q_2=q'_2=q$ by a result of Roberts-Schmidt (\cite[Proposition 4.1.3]{RobertsSchmidt2007}). Here we will not try to compute these numbers explicitly (for general $n$), but only show that they are equal (under the hypothesis mentioned at the beginning of this section).\\ We start with two lemmas. \begin{lm}\label{L:eta action} Let $v\in{\mathcal V}_\pi^{R_{n,m}}$. We have \[ \Xi^{m+2}_{n,n}(\eta(v);X_1,X_2,\ldots, X_n) = q^{\frac{n(n-1)}{2}}(X_1X_2\cdots X_n)\Xi^m_{n,n}(v;X_1,X_2,\ldots, X_n).
\] \varepsilonsilonnd{lm} \baregin{proof} Recall that ${algebraically closed ute{e}t}a=\pi(\varpi^{-\mu_n})$ and we have ${algebraically closed ute{e}t}a(v)\in{\mathcal V}_\pi^{R_{n,m+2}}$ by \varepsilonsilonqref{E:R*}. After changing the variable $a\mapsto a(\varpi I_n)$ in \varepsilonsilonqref{E:Psi_r,m,l} and then using the fact \[ W(a(\varpi I_n);X_1,X_2,{\ell}dots, X_n; \bar{\psi}) = (X_1X_2,\cdots X_n) W(a;X_1,X_2,{\ell}dots, X_n; \bar{\psi}) \] we find that \[ \Psi^{m+2}_{n,n,\varepsilonsilonlll}({algebraically closed ute{e}t}a(v);X_1,X_2,{\ell}dots, X_n) = q^{\frac{n(n-1)}{2}}(X_1X_2\cdots X_n)\Psi^m_{n,n,\varepsilonsilonlll-n}(v;X_1,X_2,{\ell}dots, X_n) \] for all $\varepsilonsilonlll\in\barbZ$. It follows that \[ \Psi^{m+2}_{n,n}({algebraically closed ute{e}t}a(v);X_1,X_2,{\ell}dots, X_n;Y) = q^{\frac{n(n-1)}{2}}(X_1X_2\cdots X_n)Y^n \Psi^m_{n,n}(v;X_1,X_2,{\ell}dots, X_n; Y) \] and hence \[ \Xi^{m+2}_{n,n}({algebraically closed ute{e}t}a(v);X_1,X_2,{\ell}dots, X_n;Y) = q^{\frac{n(n-1)}{2}}(X_1X_2\cdots X_n)Y^n \Xi^m_{n,n}(v;X_1,X_2,{\ell}dots, X_n; Y) \] by \varepsilonsilonqref{E:Xi_r,m}. Now the lemma follows from letting $Y=1$ in both sides. \varepsilonsilonnd{proof} In the following lemma, for a given $1{\ell}e r{\ell}e n$, we denote by $Y_j(X_1, X_2,{\ell}dots, X_r)$ the $j$-th elementary symmetric polynomial in $X_1, X_2,{\ell}dots, X_r$ for $1{\ell}e j{\ell}e r$ and put $Y_0(X_1, X_2,{\ell}dots, X_r)=1$. Then one has the relations \[ Y_j(X_1,X_2,{\ell}dots, X_{r-1},0)=Y_j(X_1,X_2,{\ell}dots, X_{r-1}) \quad\tildeext{and}\quad Y_r(X_1,X_2,{\ell}dots, X_{r-1},0)=0 \] for $0{\ell}e j{\ell}e r-1$. 
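These specialization relations, together with the inversion identity $Y_j(X_1^{-1},\ldots,X_r^{-1})=Y_{r-j}(X_1,\ldots,X_r)\,Y_r(X_1,\ldots,X_r)^{-1}$ invoked in the proof below, can be spot-checked with exact rational arithmetic. The following Python sketch (the helper name `elem_sym` is ours) evaluates both at a sample point; this is of course only a numerical sanity check, not a proof.

```python
from fractions import Fraction
from functools import reduce
from itertools import combinations

def elem_sym(j, xs):
    """Y_j(xs): j-th elementary symmetric polynomial, with Y_0 = 1."""
    if j == 0:
        return Fraction(1)
    return sum(reduce(lambda a, b: a * b, c) for c in combinations(xs, j))

# Sample point with exact rational coordinates (r = 4).
xs = (Fraction(2), Fraction(1, 3), Fraction(5, 7), Fraction(3))
r = len(xs)

# Specialization at X_r = 0: Y_j(X_1,...,X_{r-1},0) = Y_j(X_1,...,X_{r-1})
# for 0 <= j <= r-1, and Y_r(X_1,...,X_{r-1},0) = 0.
for j in range(r):
    assert elem_sym(j, xs[:-1] + (Fraction(0),)) == elem_sym(j, xs[:-1])
assert elem_sym(r, xs[:-1] + (Fraction(0),)) == 0

# Inversion identity used in the proof of Lemma L:level a+1:
# Y_j(X_1^{-1},...,X_r^{-1}) = Y_{r-j}(X_1,...,X_r) / Y_r(X_1,...,X_r).
inv = tuple(1 / x for x in xs)
for j in range(r + 1):
    assert elem_sym(j, inv) == elem_sym(r - j, xs) / elem_sym(r, xs)
```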
\begin{lm}\label{L:level a+1}
We have $q_n=q_n'$ and
\[
\Xi^{a_\pi+1}_{n,n}(\theta(v_\pi);X_1, X_2,\ldots, X_n) = q_n\sum^n_{j=0}\left(\frac{1+(-1)^{j+1}}{2}\right)Y_j(X_1,X_2,\ldots,X_n)
\]
and
\[
\Xi^{a_\pi+1}_{n,n}(\theta'(v_\pi);X_1, X_2,\ldots, X_n) = q'_n\sum^n_{j=0}\left(\frac{1+(-1)^{j}}{2}\right)Y_j(X_1,X_2,\ldots, X_n).
\]
\end{lm}
\begin{proof}
We first show that if $v\in{\mathcal V}_\pi^{K_{n,a_\pi+1}}$ and $\pi(u_{n,1,a_\pi+1})v=\varepsilon v$ for some $\varepsilon\in\{\pm 1\}$, then there exists $b_0\in\mathbb{C}$ such that
\begin{equation}\label{E:Xi with level a+1}
\Xi^{a_\pi+1}_{n,n}(v;X_1,X_2,\ldots, X_n) = b_0\sum_{j=0}^n(\varepsilon\varepsilon_\pi)^j Y_j(X_1,X_2,\ldots, X_n).
\end{equation}
Note that the assumption $\pi(u_{n,1,a_\pi+1})v=\varepsilon v$ implies
\begin{equation}\label{E:ev}
\pi(u_{n,r,a_\pi+1})v=\varepsilon^r v
\end{equation}
for $1\le r\le n$, where $u_{n,r,a_\pi+1}\in J_{n,a_\pi+1}$ are the elements given by \eqref{E:u_r,m}. Indeed, if we put
\[
s_r =
\begin{pmatrix}
&I_{r-1}&\\
1&&\\
&&I_{n-r}
\end{pmatrix}
\quad\text{and}\quad
s'_r =
\begin{pmatrix}
(-1)^{r-1}&&\\
&-I_{r-1}&\\
&&I_{n-r}
\end{pmatrix}
\]
(in ${\rm GL}_n(\mathfrak{o})$), then we have
\[
u_{n,r-1,m}u_{n,r,m}=m_{r-1}(\jmath_{r-1})m_r(s'_r)m_r(s_r)u_{n,1,m}m_r(s_r)^{-1}m_r(\jmath_{r})
\]
for $m\ge 0$. Since $m_r(s_r)$, $m_r(s'_r)$, $m_{r-1}(\jmath_{r-1})$ and $m_r(\jmath_r)$ are contained in $K_{n,m}$, we deduce that
\[
\pi(u_{n,r-1,a_\pi+1}u_{n,r,a_\pi+1})v=\varepsilon v.
\]
This implies \eqref{E:ev}.\\
Now by \eqref{E:degree in Y_n}, we may assume
\begin{equation}\label{E:rought Xi for level a+1}
\Xi^{a_\pi+1}_{n,n}(v;X_1,X_2,\ldots, X_n) = \sum_{\underline{\ell}} b_{\underline{\ell}}\, Y_1(X_1,X_2,\ldots, X_n)^{\ell_1}Y_2(X_1,X_2,\ldots, X_n)^{\ell_2}\cdots Y_n(X_1,X_2,\ldots, X_n)^{\ell_n}
\end{equation}
for some $\underline{\ell}=(\ell_1,\ell_2,\ldots, \ell_n)\in\mathbb{Z}^n_{\ge 0}$ and $b_{\underline{\ell}}\in\mathbb{C}$ with $b_{\underline{\ell}}=0$ for almost all $\underline{\ell}$. Since
\[
Y_j(X^{-1}_1,X_2^{-1}, \ldots, X_r^{-1})=Y_{r-j}(X_1,X_2,\ldots, X_r)Y_r(X_1,X_2,\ldots, X_r)^{-1}
\]
for $1\le j\le r-1$, the functional equation in Proposition~\ref{P:main prop} $(2)$ and the relation \eqref{E:ev} give
\begin{align*}
\Xi^{a_\pi+1}_{n,n}&(v;X_1,X_2,\ldots, X_n)\\
&=
\sum_{\underline{\ell}} (\varepsilon\varepsilon_\pi)^n b_{\underline{\ell}}\, Y_1(X_1,X_2,\ldots, X_n)^{\ell_{n-1}}Y_2(X_1,X_2,\ldots, X_n)^{\ell_{n-2}}\cdots Y_n(X_1,X_2,\ldots, X_n)^{1-\ell_1-\ell_2-\cdots-\ell_n}.
\end{align*}
Since $1-\ell_1-\ell_2-\cdots-\ell_n\ge 0$, we find that $\ell_j\le 1$ for $1\le j\le n$. Hence \eqref{E:rought Xi for level a+1} becomes
\[
\Xi^{a_\pi+1}_{n,n}(v;X_1,X_2,\ldots, X_n) = \sum_{j=0}^n b_j Y_j(X_1,X_2,\ldots, X_n).
\]
To prove \eqref{E:Xi with level a+1}, it remains to show
\begin{equation}\label{E:c_j for a+1}
b_j=b_0(\varepsilon\varepsilon_\pi)^j
\end{equation}
for $1\le j\le n$.
For this, we apply Proposition~\ref{P:main prop} $(4)$ to get
\[
\Xi^{a_\pi+1}_{n,r}(v;X_1,X_2,\ldots, X_r) = \sum_{j=0}^r b_j Y_j(X_1,X_2,\ldots, X_r)
\]
for $1\le r\le n$. Then by the functional equation for $\Xi^{a_\pi+1}_{n,r}(v;X_1,X_2,\ldots, X_r)$ and \eqref{E:ev}, one obtains \eqref{E:c_j for a+1} (when $j=r$) after comparing the constant terms. This shows \eqref{E:Xi with level a+1}.\\
Now let us put
\[
v_\pm = \theta(v_\pi)\pm\theta'(v_\pi).
\]
Then we have $\pi(u_{n,1,a_\pi+1})v_\pm=\pm\varepsilon_\pi v_\pm$ by the fact that $\pi(u_{n,1,a_\pi})v_\pi=\varepsilon_\pi v_\pi$ and the definitions of $\theta, \theta'$ (cf. \eqref{E:theta}). Therefore, there exist $b_0^\pm\in\mathbb{C}$ such that
\[
\Xi^{a_\pi+1}_{n,n}(v_\pm;X_1,X_2,\ldots, X_n) = b^\pm_0\sum_{j=0}^n(\pm 1)^j Y_j(X_1,X_2,\ldots, X_n)
\]
by \eqref{E:Xi with level a+1}. We then deduce that
\begin{equation}\label{E:rought Xi theta}
\Xi^{a_\pi+1}_{n,n}(\theta(v_\pi);X_1,X_2,\ldots, X_n) = \sum_{j=0}^n\left(\frac{b^+_0+b_0^-(-1)^j}{2}\right)Y_j(X_1,X_2,\ldots, X_n)
\end{equation}
and
\begin{equation}\label{E:rought Xi theta'}
\Xi^{a_\pi+1}_{n,n}(\theta'(v_\pi);X_1,X_2,\ldots, X_n) = \sum_{j=0}^n\left(\frac{b^+_0-b_0^-(-1)^j}{2}\right)Y_j(X_1,X_2,\ldots, X_n).
\end{equation}
To complete the proof, we need to solve for $b_0^\pm$ and show that $q_n=q'_n$.
By Proposition~\ref{P:main prop} $(4)$, Remark~\ref{R:zeta integral} and \eqref{E:q_n}, \eqref{E:main identity}, we first find that
\[
\Xi^{a_\pi+1}_{n,n}(\theta(v_\pi);q^{-s+\frac{1}{2}},\underbrace{0,\ldots, 0}_{n-1}) = \Xi^{a_\pi+1}_{n,1}(\theta(v_\pi);q^{-s+\frac{1}{2}}) = \frac{Z(s,\theta(v_\pi))}{L(s,\phi_\pi)} = q_nq^{-s+\frac{1}{2}}
\]
and
\[
\Xi^{a_\pi+1}_{n,n}(\theta'(v_\pi);q^{-s+\frac{1}{2}},\underbrace{0,\ldots, 0}_{n-1}) = \Xi^{a_\pi+1}_{n,1}(\theta'(v_\pi);q^{-s+\frac{1}{2}}) = \frac{Z(s,\theta'(v_\pi))}{L(s,\phi_\pi)} = q'_n.
\]
On the other hand, \eqref{E:rought Xi theta} and \eqref{E:rought Xi theta'} imply
\[
\Xi^{a_\pi+1}_{n,n}(\theta(v_\pi);q^{-s+\frac{1}{2}},\underbrace{0,\ldots, 0}_{n-1}) = \left(\frac{b^+_0+b^-_0}{2}\right)+\left(\frac{b^+_0-b^-_0}{2}\right)q^{-s+\frac{1}{2}}
\]
and
\[
\Xi^{a_\pi+1}_{n,n}(\theta'(v_\pi);q^{-s+\frac{1}{2}},\underbrace{0,\ldots, 0}_{n-1}) = \left(\frac{b^+_0-b^-_0}{2}\right)+\left(\frac{b^+_0+b^-_0}{2}\right)q^{-s+\frac{1}{2}}.
\]
By comparing the coefficients, we obtain
\[
b^+_0-b_0^- = 2q_n = 2q'_n
\quad\text{and}\quad
b^+_0+b^-_0=0.
\]
It follows that $b_0^\pm=\pm q_n= \pm q'_n$ as desired.
\end{proof}
Let us summarize our computations in the following theorem.
\begin{thm}\label{T:main for oldform}
Let $\EuScript B_{\pi,m}$ be the subsets defined in \S\ref{SS:conj basis}. Then the Rankin-Selberg integrals $\Psi_{n,r}(v\otimes\xi^m_{\tau,s})$ attached to $v\in\EuScript B_{\pi,m}$ and $\xi^m_{\tau,s}$ can be computed by using Proposition~\ref{P:main prop} $(1)$, $(4)$ and $(5)$ together with Lemma~\ref{L:eta action}, Lemma~\ref{L:level a+1} and \eqref{E:main identity}.
\end{thm}
Now we can prove \eqref{E:dim bd}.
\subsubsection*{Proof of \eqref{E:dim bd}}
We are going to show that the subsets $\EuScript B_{\pi,m}$ are linearly independent. Since the cardinality of $\EuScript B_{\pi,m}$ is given by the RHS of \eqref{E:dim bd}, this will complete the proof. The case when $m$ and $a_\pi$ have the same parity is much easier and is simply a consequence of Proposition~\ref{P:main prop} $(5)$, Lemma~\ref{L:eta action} and \eqref{E:main identity}. Suppose that $m$ and $a_\pi$ have the opposite parity. We first claim
\[
\mathscr S^0_{n,m}(\varphi_{\tilde{\lambda},m})(X_1,X_2,\ldots,X_{n-1}, X_n)
=
\mathscr S^0_{n,m}(\varphi_{\lambda,m})(X_1, X_2,\ldots, X_{n-1},X_n^{-1})
\]
for $\lambda\in X_\bullet(T_n)$. Recall that $\tilde{\lambda}\in X_\bullet(T_n)$ is defined in \S\ref{SS:conj basis}. For this, let
\[
u_m=m_{n-1}(\jmath_{n-1})u_{n,n-1,m}u_{n,n,m}m_n(\jmath_n)\in J_{n,m}.
\]
Then we have $u_m^{-1}R_{n,m}u_m=R_{n,m}$ by Lemma~\ref{L:general AL elt} and the fact that both $m_n(\jmath_n)$ and $m_{n-1}(\jmath_{n-1})$ are contained in $R_{n,m}$. It follows that $\varphi^{u_m}_{\lambda,m}=\varphi_{\tilde{\lambda},m}$, where $\varphi^{u_m}_{\lambda,m}\in{\mathcal H}(H_n(F)//R_{n,m})$ is defined by $\varphi^{u_m}_{\lambda,m}(h)=\varphi_{\lambda,m}(u_m^{-1}hu_m)$. Since (using \eqref{E:satake for H})
\[
\varsigma^0_{n,m}(\varphi_{\tilde{\lambda},m})(t)
=\varsigma^0_{n,m}(\varphi^{u_m}_{\lambda,m})(t)
=\varsigma^0_{n,m}(\varphi_{\lambda,m})(u_m^{-1}t u_m)
\]
for $t\in T_n(F)$, the claim follows.\\
Now suppose that
\[
\sum_{\lambda\in P^+_{G_n}} c_\lambda\, \eta^\square_{\lambda,a_\pi+1,m}\circ \theta(v_\pi)
=
\sum_{\lambda\in P^+_{G_n}} c'_\lambda\, \eta^\square_{\lambda,a_\pi+1,m}\circ \theta'(v_\pi)
\]
for some $c_\lambda, c'_\lambda\in\mathbb{C}$.
Then by Proposition~\ref{P:main prop} $(5)$ and Lemma~\ref{L:eta action}, we obtain
\begin{align}\label{E:LI}
\begin{split}
\sum_{\lambda\in P^+_{G_n}}&c_\lambda f_\lambda(X_1,X_2,\ldots, X_n)\,\Xi^{a_\pi+1}_{n,n}(\theta(v_\pi); X_1,X_2,\ldots, X_n)\\
&=
\sum_{\lambda\in P^+_{G_n}}c'_\lambda f_\lambda(X_1,X_2,\ldots, X_n)\,\Xi^{a_\pi+1}_{n,n}(\theta'(v_\pi);X_1,X_2,\ldots, X_n)
\end{split}
\end{align}
after cancelling some nonzero factors from both sides, where we put
\[
f_\lambda(X_1,X_2,\ldots, X_n)
:=
\mathscr S^0_{n,a_\pi+1}(\varphi_{\lambda,a_\pi+1})(X_1,X_2,\ldots, X_n)
+
\mathscr S^0_{n,a_\pi+1}(\varphi_{\tilde{\lambda},a_\pi+1})(X_1,X_2,\ldots, X_n)
\]
for $\lambda\in P^+_{G_n}$. From the claim in the previous paragraph, we know that
\[
f_\lambda(X_1,X_2,\ldots,X^{-1}_n)=f_\lambda(X_1,X_2,\ldots, X_n).
\]
On the other hand, by Lemma~\ref{L:level a+1}, we have
\[
\Xi^{a_\pi+1}_{n,n}(\theta(v_\pi); X_1,X_2,\ldots, X^{-1}_n) = X_n^{-1}\,\Xi^{a_\pi+1}_{n,n}(\theta'(v_\pi); X_1,X_2,\ldots, X_n)
\]
and
\[
\Xi^{a_\pi+1}_{n,n}(\theta'(v_\pi); X_1,X_2,\ldots, X^{-1}_n) = X_n^{-1}\,\Xi^{a_\pi+1}_{n,n}(\theta(v_\pi); X_1,X_2,\ldots, X_n).
\]
Thus if we replace $X_n$ with $X_n^{-1}$ in \eqref{E:LI}, we get
\begin{align}\label{E:LI2}
\begin{split}
\sum_{\lambda\in P^+_{G_n}}&c_\lambda f_\lambda(X_1,X_2,\ldots, X_n)\,\Xi^{a_\pi+1}_{n,n}(\theta'(v_\pi); X_1,X_2,\ldots, X_n)\\
&=
\sum_{\lambda\in P^+_{G_n}}c'_\lambda f_\lambda(X_1,X_2,\ldots,X_n)\,\Xi^{a_\pi+1}_{n,n}(\theta(v_\pi); X_1,X_2,\ldots, X_n)
\end{split}
\end{align}
after multiplying both sides by $X_n$.
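Granting $q_n=q'_n$, the two swap relations above follow from Lemma~\ref{L:level a+1}: up to the common factor $q_n$, replacing $X_n$ by $X_n^{-1}$ exchanges the odd-degree part $\sum_{j\text{ odd}}Y_j$ and the even-degree part $\sum_{j\text{ even}}Y_j$ at the cost of a factor $X_n^{-1}$. A quick exact check for $n=2,3$ (helper names are ours):

```python
from fractions import Fraction
from functools import reduce
from itertools import combinations

def Y(j, xs):
    """j-th elementary symmetric polynomial, Y_0 = 1."""
    if j == 0:
        return Fraction(1)
    return sum(reduce(lambda a, b: a * b, c) for c in combinations(xs, j))

def xi_theta(xs):
    # odd-j part of Lemma L:level a+1 (the theta side, divided by q_n)
    return sum(Y(j, xs) for j in range(1, len(xs) + 1, 2))

def xi_theta_p(xs):
    # even-j part (the theta' side, divided by q'_n)
    return sum(Y(j, xs) for j in range(0, len(xs) + 1, 2))

for xs in [(Fraction(2), Fraction(3, 5)),
           (Fraction(2), Fraction(3), Fraction(7, 4))]:
    flipped = xs[:-1] + (1 / xs[-1],)
    # swapping X_n -> X_n^{-1} exchanges the two sides up to X_n^{-1}
    assert xi_theta(flipped) == xi_theta_p(xs) / xs[-1]
    assert xi_theta_p(flipped) == xi_theta(xs) / xs[-1]
```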
Since
\[
\Xi^{a_\pi+1}_{n,n}(\theta(v_\pi); X_1,X_2,\ldots, X_n)
\pm
\Xi^{a_\pi+1}_{n,n}(\theta'(v_\pi); X_1,X_2,\ldots, X_n)
\]
are nonzero (again by Lemma~\ref{L:level a+1}), we can use \eqref{E:LI} and \eqref{E:LI2} to deduce
\[
\sum_{\lambda\in P^+_{G_n}}c_\lambda f_\lambda(X_1,X_2,\ldots, X_n)
=
\sum_{\lambda\in P^+_{G_n}}c'_\lambda f_\lambda(X_1,X_2,\ldots, X_n)
= 0.
\]
This implies $c_\lambda=c'_\lambda=0$ for all $\lambda\in P^+_{G_n}$ and hence concludes the proof.\qed\\
This also finishes the proof of Theorem~\ref{T:main}.\qed
\subsection{Case when $n=r=2$}\label{SS:n=r=2}
This subsection is devoted to proving Theorem~\ref{T:main'}. Here we do not need the hypothesis mentioned in the beginning of this section. We begin by recalling the accidental isomorphism $\vartheta: {\rm SO}_5\cong{\rm PGSp}_4$ and some formulae for the spherical Whittaker function of $\tau$.
\subsubsection{Accidental isomorphism}
Let $\mathbb{F}$ be a field of characteristic different from $2$. Let $(W,\langle\,,\rangle)$ be the $4$-dimensional symplectic space over $\mathbb{F}$. Fix an ordered basis $\{w_1, w_2, w^*_2, w^*_1\}$ of $W$ so that the associated Gram matrix is given by
\[
\begin{pmatrix}
&\jmath_2\\
-\jmath_2&
\end{pmatrix}.
\]
Let $(\tilde{V}, (\,,))$ be the $6$-dimensional quadratic space over $\mathbb{F}$ with $\tilde{V}=\bigwedge^2 W$ and the symmetric bilinear form $(\,,)$ defined by
\[
v_1\wedge v_2=(v_1,v_2)(w_1\wedge w_2\wedge w^*_1\wedge w^*_2)
\]
for $v_1,v_2\in\tilde{V}$. Let $\tilde{v}=w_1\wedge w^*_1+w_2\wedge w^*_2\in\tilde{V}$ and put
\[
V = \{v\in\tilde{V}\mid (v,\tilde{v})=0\}.
\]
We denote the restriction of the symmetric bilinear form to $V$ again by $(\,,)$, so that $(V,(\,,))$ becomes a $5$-dimensional quadratic space over $\mathbb{F}$.
Its Gram matrix with respect to the ordered basis $\{e_1, e_2, v_0, f_2, f_1\}$ is given by \eqref{E:S} with $n=2$, where
\[
e_1=w_1\wedge w_2,\quad e_2=w_1\wedge w^*_2,\quad v_0=w_1\wedge w^*_1-w_2\wedge w_2^*,\quad f_2=w_2\wedge w_1^*, \quad f_1=w_1^*\wedge w_2^*.
\]
Let $\tilde{\vartheta}:{\rm GSp}(W)\to{\rm SO}(\tilde{V})$ be the homomorphism defined by
\[
\tilde{\vartheta}(h)=\mu(h)^{-1}{\bigwedge}^2 h
\]
for $h\in {\rm GSp}(W)$. Here $\mu:{\rm GSp}(W)\to\mathbb{F}^{\times}$ is the similitude character. Since $\tilde{\vartheta}(h)\tilde{v}=\tilde{v}$, the homomorphism induces an exact sequence
\[
1\longrightarrow\mathbb{F}^{\times}\overset{\iota}{\longrightarrow}{\rm GSp}(W)\overset{\vartheta}{\longrightarrow}{\rm SO}(V)\longrightarrow 1
\]
where $\iota(a)=aI_{W}$ with $I_W:W\to W$ the identity map. We continue to use $\vartheta$ to denote the isomorphism $\vartheta:{\rm PGSp}(W)\overset{\sim}{\to}{\rm SO}(V)$ induced from the exact sequence. Using the ordered bases $\{w_1,w_2, w_2^*, w_1^*\}$ and $\{e_1,e_2,v_0,f_2, f_1\}$, we can identify (when $\mathbb{F}=F$) ${\rm PGSp}(W)\cong{\rm PGSp}_4(F)$ and ${\rm SO}(V)\cong{\rm SO}_5(F)$. Then under $\vartheta$, the paramodular subgroups $K(\mathfrak{p}^m)$ defined in \cite{RobertsSchmidt2007} map onto the subgroups $K_{2,m}$ defined in \S\ref{SS:paramodular subgroup}. As a consequence, we can transfer the results of Roberts-Schmidt to ${\rm SO}_5(F)$ via $\vartheta$.
\subsubsection{Some formulae}
For a given $\lambda\in X_\bullet(A_2)$, we write $\lambda=\lambda_1\varepsilon_1+\lambda_2\varepsilon_2$. Then we have
\begin{equation}\label{E:Whittaker for GL_2}
W(\varpi^\lambda;\alpha_1,\alpha_2;\bar{\psi})
=
q^{-\frac{1}{2}(\lambda_1-\lambda_2)}\sum_{j=0}^{\lambda_1-\lambda_2}\alpha_1^{\lambda_2+j}\alpha_2^{\lambda_1-j}
\end{equation}
if $\lambda_1\ge \lambda_2$, and $0$ otherwise (\cite[Section 2.4]{Schmidt2002}, \cite{Shintani1976}).
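Formula \eqref{E:Whittaker for GL_2} expresses $W(\varpi^\lambda)$ as $q^{-(\lambda_1-\lambda_2)/2}$ times the Schur polynomial $s_\lambda(\alpha_1,\alpha_2)$. The following Python sketch (the helper name `whittaker` is ours; $u$ stands for $q^{-1/2}$ so that all arithmetic stays exact) encodes it and checks two properties used below: the central-twist relation $W(\varpi^{\lambda+\varepsilon_1+\varepsilon_2})=\alpha_1\alpha_2\,W(\varpi^\lambda)$ and the three-term identity appearing in the proof of \eqref{E:raising theta}, which amounts to the Pieri rule $(\alpha_1+\alpha_2)s_{(\lambda_1,\lambda_2)}=s_{(\lambda_1+1,\lambda_2)}+s_{(\lambda_1,\lambda_2+1)}$.

```python
from fractions import Fraction

def whittaker(l1, l2, a1, a2, u):
    """W(varpi^(l1*e1 + l2*e2); a1, a2) per (E:Whittaker for GL_2),
    with u playing the role of q^(-1/2)."""
    if l1 < l2:
        return Fraction(0)
    schur = sum(a1 ** (l2 + j) * a2 ** (l1 - j) for j in range(l1 - l2 + 1))
    return u ** (l1 - l2) * schur

a1, a2, u = Fraction(3, 2), Fraction(5, 7), Fraction(1, 4)

# Normalization and symmetry of the Schur polynomial:
assert whittaker(0, 0, a1, a2, u) == 1
assert whittaker(4, 1, a1, a2, u) == whittaker(4, 1, a2, a1, u)

# Central twist, used in the proofs of (E:raising theta) and (E:raising theta'):
assert whittaker(3, 2, a1, a2, u) == a1 * a2 * whittaker(2, 1, a1, a2, u)

# Three-term identity from the proof of (E:raising theta):
# W(l1+1, l2) + q^{-1} W(l1, l2+1) = q^{-1/2}(a1 + a2) W(l1, l2),
# with q^{-1} = u^2 and q^{-1/2} = u.
for l1, l2 in [(0, 0), (3, 1), (5, 5), (4, 2)]:
    lhs = whittaker(l1 + 1, l2, a1, a2, u) \
        + u ** 2 * whittaker(l1, l2 + 1, a1, a2, u)
    assert lhs == u * (a1 + a2) * whittaker(l1, l2, a1, a2, u)
```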
On the other hand, for a given $y\in F$, we define $z_2(y), n_2(y)\in {\rm SO}_5(F)$ by
\[
z_2(y) =
\begin{pmatrix}
1&y&&&\\
&1&&&\\
&&1&&\\
&&&1&-y\\
&&&&1
\end{pmatrix}
\quad\text{and}\quad
n_2(y) =
\begin{pmatrix}
1&&&-y&\\
&1&&&y\\
&&1&&\\
&&&1&\\
&&&&1
\end{pmatrix}.
\]
Then via $\vartheta$, the equations in \cite[Lemma 3.2.2]{RobertsSchmidt2007} become
\begin{equation}\label{E:exp theta}
\theta(v) = \pi(m_2(\varpi^{-\varepsilon_1}))v + \sum_{c\in\mathfrak{f}} \pi(m_2(\varpi^{-\varepsilon_2})z_2(c\varpi^{-1}))v
\end{equation}
and
\begin{equation}\label{E:exp theta'}
\theta'(v) = \pi(m_2(\varpi^{-\varepsilon_1-\varepsilon_2}))v + \sum_{c\in\mathfrak{f}} \pi(n_2(c\varpi^{-m-1}))v
\end{equation}
for $v\in{\mathcal V}_\pi^{K_{2,m}}$ with $m\ge 0$.
\subsubsection{Proof of \eqref{E:raising theta}}
Let us write $\theta(v)=v_1+v_2$ with $v_1=\pi(m_2(\varpi^{-\varepsilon_1}))v$.
By Lemma~\ref{L:supp for para vec}, Lemma~\ref{L:exp of RS int} and the fact $R_{2,m}\cap M_2(F)=M_2(\mathfrak{o})$, we get
\begin{align*}
\Psi_{2,2}(\theta(v)\otimes\xi^{m+1}_{\tau,s})
=& \sum_{\lambda_1\ge \lambda_2\ge 0} W_{\theta(v)}(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2})) W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi})q^{-(\lambda_1+\lambda_2)s+2\lambda_1}\\
=& \sum_{\lambda_1\ge \lambda_2\ge 0} W_v(m_2(\varpi^{(\lambda_1-1)\varepsilon_1+\lambda_2\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi})q^{-(\lambda_1+\lambda_2)s+2\lambda_1}\\
&+ \sum_{\lambda_1\ge \lambda_2\ge 0} W_{v_2}(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}) q^{-(\lambda_1+\lambda_2)s+2\lambda_1}
\end{align*}
by the Iwasawa decomposition of ${\rm GL}_2(F)$. Since $\ker(\psi)=\mathfrak{o}$, we have
\begin{align*}
W_{v_2}(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))
&= \sum_{c\in\mathfrak{f}} W_v(m_2(\varpi^{\lambda_1\varepsilon_1+(\lambda_2-1)\varepsilon_2})z_2(c\varpi^{-1}))\\
&= \sum_{c\in\mathfrak{f}} W_v(z_2(c\varpi^{\lambda_1-\lambda_2})m_2(\varpi^{\lambda_1\varepsilon_1+(\lambda_2-1)\varepsilon_2}))\\
&= q\, W_v(m_2(\varpi^{\lambda_1\varepsilon_1+(\lambda_2-1)\varepsilon_2}))
\end{align*}
for $\lambda_1\ge \lambda_2$.
On the other hand, a direct computation (using \eqref{E:Whittaker for GL_2}) shows
\[
W(\varpi^{(\lambda_1+1)\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi})
+ q^{-1}W(\varpi^{\lambda_1\varepsilon_1+(\lambda_2+1)\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi})
= q^{-\frac{1}{2}}(\alpha_1+\alpha_2)W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}),
\]
again for $\lambda_1\ge \lambda_2$. Combining these, we find that
\begin{align*}
\Psi_{2,2}(\theta(v)\otimes\xi^{m+1}_{\tau,s})
=& \sum_{\lambda_1\ge \lambda_2\ge 0} W_v(m_2(\varpi^{(\lambda_1-1)\varepsilon_1+\lambda_2\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi})q^{-(\lambda_1+\lambda_2)s+2\lambda_1}\\
&+ \sum_{\lambda_1\ge \lambda_2\ge 0} W_{v}(m_2(\varpi^{\lambda_1\varepsilon_1+(\lambda_2-1)\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}) q^{-(\lambda_1+\lambda_2)s+2\lambda_1+1}\\
=& \sum_{\lambda_1\ge \lambda_2\ge 0} W_v(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))W(\varpi^{(\lambda_1+1)\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}) q^{-(\lambda_1+\lambda_2+1)s+2(\lambda_1+1)}\\
&+ \sum_{\lambda_1\ge \lambda_2\ge 0} W_{v}(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+(\lambda_2+1)\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}) q^{-(\lambda_1+\lambda_2+1)s+2\lambda_1+1}\\
=& q^{-s+\frac{3}{2}}(\alpha_1+\alpha_2)\sum_{\lambda_1\ge \lambda_2\ge 0} W_v(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi})q^{-(\lambda_1+\lambda_2)s+2\lambda_1}\\
=& q^{-s+\frac{3}{2}}(\alpha_1+\alpha_2)\Psi_{2,2}(v\otimes\xi^m_{\tau,s}).
\end{align*}
This proves \eqref{E:raising theta}.\qed
\subsubsection{Proof of \eqref{E:raising theta'}}
The computations are similar to the previous ones. We write $\theta'(v)=v'_1+v'_2$ with $v_1'=\pi(m_2(\varpi^{-\varepsilon_1-\varepsilon_2}))v$. Then we have
\begin{align*}
\Psi_{2,2}(\theta'(v)\otimes\xi^{m+1}_{\tau,s})
=& \sum_{\lambda_1\ge \lambda_2\ge 0} W_{\theta'(v)}(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2})) W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi})q^{-(\lambda_1+\lambda_2)s+2\lambda_1}\\
=& \sum_{\lambda_1\ge \lambda_2\ge 0} W_v(m_2(\varpi^{(\lambda_1-1)\varepsilon_1+(\lambda_2-1)\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}) q^{-(\lambda_1+\lambda_2)s+2\lambda_1}\\
&+ \sum_{\lambda_1\ge \lambda_2\ge 0} W_{v'_2}(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}) q^{-(\lambda_1+\lambda_2)s+2\lambda_1}
\end{align*}
where
\begin{align*}
W_{v'_2}(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))
&= \sum_{c\in\mathfrak{f}} W_v(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2})n_2(c\varpi^{-m-1}))\\
&= \sum_{c\in\mathfrak{f}} W_v(n_2(c\varpi^{\lambda_1+\lambda_2-m-1})m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))\\
&= q\, W_v(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2})).
\end{align*}
It follows that
\begin{align*}
\Psi_{2,2}(\theta'(v)\otimes\xi^{m+1}_{\tau,s})
=& \sum_{\lambda_1\ge \lambda_2\ge 0} W_v(m_2(\varpi^{(\lambda_1-1)\varepsilon_1+(\lambda_2-1)\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}) q^{-(\lambda_1+\lambda_2)s+2\lambda_1}\\
&+ \sum_{\lambda_1\ge \lambda_2\ge 0} W_{v}(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}) q^{-(\lambda_1+\lambda_2)s+2\lambda_1+1}\\
=& \sum_{\lambda_1\ge \lambda_2\ge 0} W_v(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))W(\varpi^{(\lambda_1+1)\varepsilon_1+(\lambda_2+1)\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}) q^{-(\lambda_1+\lambda_2+2)s+2(\lambda_1+1)}\\
&+ \sum_{\lambda_1\ge \lambda_2\ge 0} W_{v}(m_2(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2}))W(\varpi^{\lambda_1\varepsilon_1+\lambda_2\varepsilon_2};\alpha_1,\alpha_2;\bar{\psi}) q^{-(\lambda_1+\lambda_2)s+2\lambda_1+1}\\
=& q(1+q^{-2s+1}\alpha_1\alpha_2)\Psi_{2,2}(v\otimes\xi^m_{\tau,s}).
\end{align*}
This proves \eqref{E:raising theta'}.\qed
\subsubsection{Proof of \eqref{E:raising eta}}
This follows immediately from Lemma~\ref{L:eta action} and Proposition~\ref{P:main prop} $(1)$.\qed\\
This also completes the proof of Theorem~\ref{T:main'}.\qed
\begin{remark}
Observe that \eqref{E:raising theta} and \eqref{E:raising theta'} imply
\begin{equation}\label{E:raising theta formal}
\Xi^{m+1}_{2,2}(\theta(v);X_1,X_2) = q(X_1+X_2)\Xi^m_{2,2}(v;X_1,X_2)
\end{equation}
and
\begin{equation}\label{E:raising theta' formal}
\Xi^{m+1}_{2,2}(\theta'(v);X_1,X_2) = q(1+X_1X_2)\Xi^{m}_{2,2}(v;X_1,X_2).
\end{equation}
In particular, these (when $m=a_\pi$) agree with Lemma~\ref{L:level a+1} by \eqref{E:main identity}.
\end{remark}
\subsection{Comparisons}\label{SSS:concluding remark}
We end this paper by comparing our $\EuScript B_{\pi,m}$ in \S\ref{SS:conj basis} with those of Tsai, Casselman and Roberts-Schmidt. As already mentioned, when $m$ and $a_\pi$ have the opposite parity, our $\EuScript B_{\pi,m}$ are slightly different from those of Tsai. The subsets $\EuScript B'_{\pi,m}$ appearing implicitly in \cite[Proposition 9.1.7]{Tsai2013} are
\[
\EuScript B'_{\pi,m}=\{\eta_{\lambda,a_\pi+1,m}\circ\theta(v_\pi),\,\,\eta_{\lambda, a_\pi+1,m}\circ\theta'(v_\pi)\mid \lambda\in P^+_{G_n},\,\,2\|\lambda\|\le m-a_\pi-1\}.
\]
However, these subsets are not necessarily linearly independent, as we now show by a counterexample. As expected, we consider the case $n=2$; in particular, the hypothesis mentioned in the beginning of this section is satisfied.\\
For a given $\lambda\in P^+_{{\rm SO}_4}$, let $(\sigma_\lambda, {\mathcal V}_\lambda)$ be the irreducible representation of ${\rm SO}_4(\mathbb{C})$ with highest weight $\lambda$ and denote by $\chi_\lambda$ its character. Then $\sigma_{\epsilon_1}$ is the standard $4$-dimensional representation of ${\rm SO}_4(\mathbb{C})$ and we have $\bigwedge^2\sigma_{\epsilon_1}=\sigma_{\epsilon_1+\epsilon_2}\oplus\sigma_{\epsilon_1-\epsilon_2}$ (\cite[Theorem 5.5.13]{GoodmanWallach2009}).
One checks that
\begin{equation}\label{E:character}
\chi_{\epsilon_1}(t)=t_1+t_2+t_1^{-1}+t_2^{-1},
\quad
\chi_{\epsilon_1+\epsilon_2}(t)=t_1t_2+1+t^{-1}_1t^{-1}_2,
\quad
\chi_{\epsilon_1-\epsilon_2}(t)=t_1t^{-1}_2+1+t^{-1}_1t_2
\end{equation}
where
\[
t =
\begin{pmatrix}
t_1&&&\\
&t_2&&\\
&&t_2^{-1}&\\
&&&t_1^{-1}
\end{pmatrix}\in T_2(\mathbb{C}).
\]
Since $\epsilon_1$ and $\epsilon_1\pm\epsilon_2$ are minuscule weights of ${\rm SO}_4(\mathbb{C})$, we find that
\begin{equation}\label{E:epsilon_1}
\mathscr S^0_{2,m}(\varphi_{\epsilon_1,m})(X_1,X_2)=q(X_1+X_2+X_1^{-1}+X_2^{-1})
\end{equation}
and
\begin{equation}\label{E:epsilon_1+2}
\mathscr S^0_{2,m}(\varphi_{\epsilon_1+\epsilon_2,m})(X_1,X_2)=q(X_1X_2+1+X^{-1}_1X^{-1}_2)
\end{equation}
and
\begin{equation}\label{E:epsilon_1-2}
\mathscr S^0_{2,m}(\varphi_{\epsilon_1-\epsilon_2,m})(X_1,X_2)=q(X_1X^{-1}_2+1+X^{-1}_1X_2)
\end{equation}
by \eqref{E:character} and \cite[(3.13)]{Gross1998}.\\
We now show that $\EuScript B'_{\pi,a_\pi+3}$ is not linearly independent when $n=2$. For this, we first note that the linear maps $\Xi^m_{2,2}$ constructed in Proposition~\ref{P:main prop} are injective by \cite[Corollary 4.3.8]{RobertsSchmidt2007}, \eqref{E:K*}, \eqref{E:R* fix space} and Proposition~\ref{P:main prop} $(3)$.
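The character values \eqref{E:character} can be double-checked against the eigenvalues of $t$ on $\bigwedge^2$ of the standard representation, and they also make the linear dependence exhibited in the next paragraph easy to verify: after cancelling the common factors coming from $q$ and the $\eta$-action (under our reading of the normalizations, which the reader should check against Proposition~\ref{P:main prop} $(5)$), it reduces to the identity $\chi_{\epsilon_1}(X)\,(1+X_1X_2)=(X_1+X_2)+\chi_{\epsilon_1+\epsilon_2}(X)\,(X_1+X_2)$. An exact spot-check in Python (helper names are ours):

```python
from fractions import Fraction
from itertools import combinations

def chi_std(t1, t2):    # character of the standard rep sigma_{eps1}
    return t1 + t2 + 1 / t1 + 1 / t2

def chi_plus(t1, t2):   # character of sigma_{eps1 + eps2}
    return t1 * t2 + 1 + 1 / (t1 * t2)

def chi_minus(t1, t2):  # character of sigma_{eps1 - eps2}
    return t1 / t2 + 1 + t2 / t1

def wedge2_trace(t1, t2):
    # trace of diag(t1, t2, t2^{-1}, t1^{-1}) on the second exterior power:
    # the eigenvalues are the pairwise products of the four diagonal entries.
    eigs = [t1, t2, 1 / t2, 1 / t1]
    return sum(x * y for x, y in combinations(eigs, 2))

for t1, t2 in [(Fraction(2), Fraction(3)), (Fraction(1, 5), Fraction(7, 2))]:
    # wedge^2 sigma_{eps1} = sigma_{eps1+eps2} (+) sigma_{eps1-eps2}
    assert wedge2_trace(t1, t2) == chi_plus(t1, t2) + chi_minus(t1, t2)
    # identity underlying the dependence inside B'_{pi, a_pi+3}
    assert chi_std(t1, t2) * (1 + t1 * t2) == (t1 + t2) * (1 + chi_plus(t1, t2))
```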
From the equations above and \eqref{E:raising theta formal}, \eqref{E:raising theta' formal}, \eqref{E:main identity} together with Lemma~\ref{L:eta action}, we find that
\[
\Xi^{a_\pi+3}_{2,2}(\eta_{\epsilon_1,a_\pi+1,a_\pi+3}\circ\theta'(v_\pi);X_1,X_2)
\]
is equal to
\[
q\,\Xi^{a_\pi+3}_{2,2}(\eta_{0,a_\pi+1,a_\pi+3}\circ\theta(v_\pi);X_1,X_2)
+
\Xi^{a_\pi+3}_{2,2}(\eta_{\epsilon_1+\epsilon_2,a_\pi+1,a_\pi+3}\circ\theta(v_\pi);X_1,X_2).
\]
Since $\Xi^{a_\pi+3}_{2,2}$ is injective on ${\mathcal V}_\pi^{K_{2,a_\pi+3}}$, this implies
\[
\eta_{\epsilon_1,a_\pi+1,a_\pi+3}\circ\theta'(v_\pi)
=
q\,\eta_{0,a_\pi+1,a_\pi+3}\circ\theta(v_\pi)
+
\eta_{\epsilon_1+\epsilon_2,a_\pi+1,a_\pi+3}\circ\theta(v_\pi).
\]
This shows that the subsets $\EuScript B'_{\pi,m}$ may fail to be linearly independent when $m$ and $a_\pi$ have the opposite parity.\\
Next we compare $\EuScript B_{\pi,m}$ (when $n=2$) with the bases given by \eqref{E:oldform}. We show that they are different in general; in fact, this already happens when $m=a_\pi+2$. Suppose that $\Lambda_{\pi,\psi}(v_\pi)=1$. Then under $\Xi^{a_\pi+2}_{2,2}$, the basis for ${\mathcal V}_\pi^{K_{2,a_\pi+2}}$ given by \eqref{E:oldform} turns into
\[
\{qX_1X_2,\,\,q^2(X_1+X_2)(1+X_1X_2),\,\,q^2(1+2X_1X_2+X_1^2X_2^2),\,\,q^2(X^2_1+2X_1X_2+X_2^2)\}
\]
by \eqref{E:raising theta formal}, \eqref{E:raising theta' formal}, \eqref{E:main identity} and Lemma~\ref{L:eta action}.
On the other hand, under $\Xi^{a_\pi+2}_{2,2}$, the subset $\EuScript B_{\pi,a_\pi+2}$ becomes
\[
\{qX_1X_2,\,\,q^2(X_1+X_2)(1+X_1X_2),\,\,q^2(1+X_1X_2+X_1^2X_2^2),\,\,q^2(X_1^2+X_1X_2+X_2^2)\}
\]
by \eqref{E:epsilon_1}, \eqref{E:epsilon_1+2}, \eqref{E:epsilon_1-2} and Lemma~\ref{L:eta action}. Since $\Xi^{a_\pi+2}_{2,2}$ is injective on ${\mathcal V}_\pi^{K_{2,a_\pi+2}}$, our claim follows.\\
Finally, we remark that our $\EuScript B_{\pi,m}$ also differ from the bases implicitly given by Casselman in the proof of \cite[Theorem 1]{Casselman1973} when $m$ and $a_\pi$ have the opposite parity.
\end{document}
\begin{document} \title{Nonlinearity in oscillating bridges} \author{Filippo GAZZOLA\\ {\small Dipartimento di Matematica del Politecnico, Piazza L. da Vinci 32 - 20133 Milano (Italy)}} \date{} \maketitle \begin{abstract} We first recall several historical oscillating bridges that, in some cases, led to collapses. Some of them are quite recent and show that, nowadays, oscillations in suspension bridges are not yet well understood. Next, we survey some attempts to model bridges with differential equations. Although these equations arise from quite different scientific communities, they display some common features. One of them, which we believe to be incorrect, is the acceptance of the linear Hooke law in elasticity. This law should be used only in the presence of small deviations from equilibrium, a situation which does not occur in strongly oscillating bridges. Then we discuss a couple of recent models whose solutions exhibit self-excited oscillations, the phenomenon visible in real bridges. This suggests a different point of view in modeling equations and gives a strong hint how to modify the existing models in order to obtain a reliable theory. The purpose of this paper is precisely to highlight the necessity of revisiting classical models, to introduce reliable models, and to indicate the steps we believe necessary to reach this target.\par\noindent {\em AMS Subject Classification 2010: 74B20, 35G31, 34C15, 74K10, 74K20.} \end{abstract} \tableofcontents \eject \section{Introduction} The story of bridges is full of dramatic events, such as uncontrolled oscillations which, in some cases, led to collapses. To get into the problem, we invite the reader to have a look at the videos \cite{assago,london,tacoma,volgograd}. These failures have to be attributed to the action of external forces, such as the wind or traffic loads, or to macroscopic mistakes in the design.
{From} a theoretical point of view, there is no satisfactory mathematical model which, to date, fully describes the complex behavior of bridges. And the lack of a reliable analytical model precludes precise studies from both the numerical and the engineering points of view.\par The main purpose of the present paper is to show the necessity of revisiting existing models, since they fail to describe the behavior of real bridges. We will explain the weaknesses of the equations considered so far and suggest some possible improvements according to the fundamental rules of classical mechanics. Only with some nonlinearity and a sufficiently large number of degrees of freedom can several of the observed behaviors be modeled. We do not claim to have a perfect model; we just wish to indicate the way to reach it. Much more work is needed and we explain what we believe to be the next steps.\par We first survey and discuss some historical events, we recall what is known in elasticity theory, and we describe in full detail the existing models. With this database at hand, our purpose is to analyse the oscillating behavior of certain bridges, to determine the causes of oscillations, and to give an explanation for the possible appearance of different kinds of oscillations, such as torsional oscillations. Due to the lateral sustaining cables, suspension bridges exhibit these oscillations most prominently; they also appear, however, in other kinds of bridges: for instance, light pedestrian bridges display similar behaviors even if their mechanical description is much simpler.\par According to \cite{goldstein}, chaos is a disordered and unpredictable behavior of solutions in a dynamical system. With this characterization, there is no doubt that chaos is somehow present in the disordered and unpredictable oscillations of bridges.
From \cite[Section 11.7]{goldstein} we recall a general principle (GP) of classical mechanics: \begin{center} \begin{minipage}{162mm} {\bf (GP)} {\em The minimal requirements for a system of first-order equations to exhibit chaos is that they be nonlinear and have at least three variables.} \end{minipage} \end{center} This principle suggests that \begin{center} {\bf any model aiming to describe oscillating bridges should be nonlinear and with enough degrees of freedom.} \end{center} Most of the mathematical models existing in the literature fail to satisfy (GP) and, therefore, must be accordingly modified. We suggest possible modifications of the corresponding differential equations and we believe that, once solved, these would lead to a better understanding of the underlying phenomena and, perhaps, to several practical actions for the design of future bridges, as well as remedial measures for existing structures. In particular, one of the major aims of this paper is to convince the reader that linear theories are not suitable for the study of bridge oscillations whereas, although they are certainly too naive, some recent nonlinear models do display self-excited oscillations as visible in bridges.\par In Section \ref{story}, we collect a number of historical events and observations about bridges, both suspended and not. A complete story of bridges is far beyond the scope of the present paper and the choice of events is mainly motivated by the phenomena that they displayed. The description of the events is accompanied by comments of engineers and of witnesses, and by possible theoretical explanations of the observed phenomena. The described events are then used in order to figure out a common behavior of oscillating bridges; in particular, it appears that the requirements of (GP) must be satisfied.
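As a purely illustrative aside, the content of (GP) can be checked numerically on the classical Lorenz system, the standard minimal example of a nonlinear first-order system in three variables. This is of course not a bridge model, and the step size and time horizon below are arbitrary choices:

```python
# Illustration of (GP): a nonlinear system of first-order equations in
# three variables (the classical Lorenz system) exhibits chaos, i.e.
# sensitive dependence on initial conditions. This is NOT a bridge
# model; step size and horizon are arbitrary illustrative choices.

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def trajectory(start, steps=7500, dt=0.002):
    state = start
    for _ in range(steps):
        state = lorenz_step(state, dt)
    return state

a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.0, 1.0, 1.0 + 1e-6))  # tiny perturbation of the start
gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)  # the 1e-6 perturbation grows by orders of magnitude
```

Two trajectories starting $10^{-6}$ apart end up separated by many orders of magnitude more than the initial perturbation: precisely the disordered and unpredictable behavior mentioned above, obtained as soon as (GP) is satisfied.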
Recent events testify that the problem of controlling and forecasting bridge oscillations is still unsolved.\par In Section \ref{howto}, we discuss several equations appearing in the literature as models for oscillating bridges. Most of them use at some point the well-known linear Hooke law (${\cal LHL}$ in the sequel) of elasticity. This is what we believe to be a major weakness, but not the only one, of all these models. This is also the opinion of McKenna \cite[p.16]{mckmonth}: \begin{center} \begin{minipage}{162mm} {\em We doubt that a bridge oscillating up and down by about 10 meters every 4 seconds obeys Hooke's law.} \end{minipage} \end{center} {From} \cite{britannica}, we recall what is known as ${\cal LHL}$. \begin{center} \begin{minipage}{162mm} {\em The linear Hooke law (${\bf {\cal LHL}}$) of elasticity, discovered by the English scientist Robert Hooke in 1660, states that for relatively small deformations of an object, the displacement or size of the deformation is directly proportional to the deforming force or load. Under these conditions the object returns to its original shape and size upon removal of the load. ... At relatively large values of applied force, the deformation of the elastic material is often larger than expected on the basis of ${\cal LHL}$, even though the material remains elastic and returns to its original shape and size after removal of the force. ${\cal LHL}$ describes the elastic properties of materials only in the range in which the force and displacement are proportional.} \end{minipage} \end{center} Hence, one should by no means use ${\cal LHL}$ in the presence of large deformations. In such a case, the restoring elastic force $f$ is ``more than linear''. Instead of having the usual form $f(s)=ks$, where $s$ is the displacement from equilibrium and $k>0$ depends on the elasticity of the deformed material, it has an additional superlinear term $\varphi(s)$ which becomes negligible for small displacements $s$.
More precisely, $$f(s)=ks+\varphi(s)\qquad\mbox{with}\qquad\lim_{s\to0}\frac{\varphi(s)}{s}=0\ .$$ The simplest example of such a term is $\varphi(s)=\varepsilon s^p$ with $\varepsilon>0$ and $p>1$; this superlinear term may become arbitrarily small for $\varepsilon$ small and/or $p$ large. Therefore, the parameters $\varepsilon$ and $p$, which do exist, may be chosen in such a way as to describe with better precision the elastic behavior of a material when large displacements are involved. As we shall see, this apparently harmless and tiny nonlinear perturbation has devastating effects on the models and, moreover, it is amazingly useful for displaying self-excited oscillations such as the ones visible in real bridges. On the contrary, linear models prevent one from seeing the real phenomena which occur in bridges, such as the sudden increase of the width of their oscillations and the switch to different ones.\par The necessity of dealing with nonlinear models is by now quite clear also in more general elasticity problems; from the preface of the book by Ciarlet \cite{ciarletbook}, let us quote \begin{center} \begin{minipage}{162mm} {\em ... it has been increasingly acknowledged that the classical linear equations of elasticity, whose mathematical theory is now firmly established, have a limited range of applicability, outside of which they should be replaced by genuine nonlinear equations that they in effect approximate.} \end{minipage} \end{center} In order to model bridges, the most natural way is to view the roadway as a thin narrow rectangular plate. In Section \ref{elasticity}, we quote several references which show that classical linear elastic models for thin plates do not describe large deflections of a plate with sufficient accuracy. But even linear theories present considerable difficulties and a further possibility is to view the bridge as a one-dimensional beam; this model is much simpler but, of course, it prevents the appearance of possible torsional oscillations.
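The quantitative role of the superlinear term can be illustrated with a short computation. In the sketch below, the values $k=1$, $\varepsilon=0.01$, $p=3$ are hypothetical, chosen only to display the mechanism and not fitted to any material; the correction $\varphi(s)=\varepsilon s^p$ is invisible at small displacements and comparable to the linear term at large ones:

```python
# Restoring force f(s) = k*s + phi(s) with superlinear correction
# phi(s) = eps * s**p. The values of k, eps, p below are hypothetical,
# chosen only to illustrate the mechanism, not fitted to any material.

k, eps, p = 1.0, 0.01, 3

def phi(s):
    return eps * s ** p

def f(s):
    return k * s + phi(s)

# Relative weight of the nonlinear term with respect to the linear one:
small = phi(0.01) / (k * 0.01)  # negligible at small displacements
large = phi(10.0) / (k * 10.0)  # comparable to the linear term
print(small, large)
```

With these (arbitrary) values the nonlinear contribution is one part in a million at $s=0.01$ but equals the linear one at $s=10$, which is precisely the regime of strongly oscillating bridges.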
This is the main difficulty in modeling bridges: finding simple models which, however, display the same phenomenon visible in real bridges.\par In Section \ref{models} we survey a number of equations arising from different scientific communities. The first equations are based on engineering models and mainly focus on quantitative aspects such as the exact values of the parameters involved. Some other equations are more related to physical models and aim to describe in full detail all the energies involved. Finally, some of the equations are purely mathematical models aiming to reach a prototype equation and to prove some qualitative behavior. All these models have to face a delicate choice: either consider the coupled behavior of vertical and torsional oscillations of the roadway or simplify the model by decoupling these two phenomena. In the former case, the equations have many degrees of freedom and become terribly complicated: hence, very few results can be obtained. In the latter case, the model fails to satisfy the requirements of (GP) and appears too far from the real world.\par As a compromise between these two choices, in Section \ref{blup} we recall the model introduced in \cite{gazpav,gazpav3} which describes vertical oscillations and torsional oscillations of the roadway within the same simplified beam equation. The solution to the equation exhibits self-excited oscillations quite similar to those observed in suspension bridges. We do not believe that the simple equation considered models the complex behavior of bridges but we do believe that it displays the same phenomena as in more complicated models more closely related to bridges. In particular, finite time blow up occurs with wide oscillations. These phenomena are typical of differential equations of at least fourth order since they do not occur in lower-order equations, see \cite{gazpav}.
We also show that the same phenomenon is visible in a $2\times2$ system of nonlinear second-order ODEs related to a system suggested by McKenna \cite{mckmonth}.\par Putting everything together, in Section \ref{afford} we afford an explanation in terms of the energies involved. Starting from a survey of milestone historical sources \cite{bleich,tac2}, we attempt a qualitative but detailed energy balance and we attribute the appearance of torsional oscillations in bridges to some ``hidden'' elastic energy which is not visible since it involves second order derivatives of the displacement of the bridge: this suggests an analogy between bridge oscillations and a ball bouncing on the floor. The discovery of the phenomenon usually called in the literature {\em flutter speed} has to be attributed to Bleich \cite{bleichsolo}; in our opinion, the flutter speed should be seen as a {\bf critical energy threshold} which, if exceeded, gives rise to uncontrolled phenomena such as torsional oscillations. We give some hints on how to determine the critical energy threshold, according to some eigenvalue problems whose eigenfunctions describe the oscillating modes of the roadway.\par In bridges one should always expect vertical oscillations and, in case they become very large, also torsional oscillations; in order to display the possible transition between these two kinds of oscillations, in Section \ref{newmodel} we suggest a new equation as a model for suspension bridges, see \eq{truebeam}. With all the results and observations at hand, in Section \ref{possibleTacoma} we also attempt a detailed description of what happened on November 10, 1940, the day when the Tacoma Narrows Bridge collapsed. As far as we are aware, a universally accepted explanation of this collapse is not yet available. Our explanation fits with all the material developed in the present paper.
This allows us to suggest a couple of precautions when planning future bridges, see Section \ref{howplan}.\par We recently had the pleasure of participating in a conference on bridge maintenance, safety and management, see \cite{iabmas}. There were engineers from all over the world, the atmosphere was very enjoyable and the problems discussed were extremely interesting. And there were a large number of basic questions still unsolved; most of the results and projects carried some degree of uncertainty. Many talks were devoted to suggesting new equations to model the studied phenomena and to forecast the impact of new structural issues: even apparently simple problems are still unsolved. We believe this should be a strong motivation for many mathematicians (from mathematical physics, analysis, numerics) to get interested in bridge modeling, experiments, and performance. Throughout the paper we suggest a number of open problems which, if solved, could be a good starting point to reach a deeper understanding of oscillations in bridges. \section{What has been observed in bridges}\label{story} A simplified picture of a suspension bridge can be sketched as in Figure \ref{67} \begin{figure} \caption{Suspension bridges without girder and with girder.} \label{67} \end{figure} where one sees the difference between the elastic structure of a bridge without girder and the stiffer structure of a bridge with girder.\par Although the first project of a suspension bridge is due to the Italian engineer Verantius around 1615, see \cite{veranzio} and \cite[p.7]{navier2} or \cite[p.16]{kawada2}, the first suspension bridges were built only about two centuries later in Great Britain.
According to \cite{bender}, \begin{center} \begin{minipage}{162mm} {\em The invention of the suspension bridges by Sir Samuel Brown sprung from the sight of a spider's web hanging across the path of the inventor, observed on a morning's walk, when his mind was occupied with the idea of bridging the Tweed.} \end{minipage} \end{center} Samuel Brown (1776-1852) was an early pioneer of suspension bridge design and construction. He is best known for the Union Bridge of 1820, the first vehicular suspension bridge in Britain.\par An event deserving mention is certainly the inauguration of the Menai Straits Bridge, in 1826. The project of the bridge was due to Thomas Telford and the opening of the bridge is considered as the beginning of a new science nowadays known as ``Structural Engineering''. The construction of this bridge had a huge impact on English society: a group of engineers founded the ``Institution of Civil Engineers'' and Telford was elected the first president of this association. In 1839 the Menai Bridge collapsed due to a hurricane. On that occasion, unexpected oscillations appeared; Provis \cite{provis} provided the following description: \begin{center} \begin{minipage}{162mm} {\em ... the character of the motion of the platform was not that of a simple undulation, as had been anticipated, but the movement of the undulatory wave was oblique, both with respect to the lines of the bearers, and to the general direction of the bridge.} \end{minipage} \end{center} Also the Broughton Suspension Bridge was built in 1826. It collapsed in 1831 due to mechanical resonance induced by troops marching over the bridge in step. A bolt in one of the stay-chains snapped, causing the bridge to collapse at one end, throwing about 40 men into the river. As a consequence of the incident, the British Army issued an order that troops should ``break step'' when crossing a bridge.
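The resonance mechanism blamed for the Broughton collapse can be reproduced with an elementary computation: an undamped linear oscillator forced exactly at its natural frequency (the ``troops marching in step'' situation) has an amplitude growing roughly linearly in time, while a detuned forcing stays bounded. The sketch below uses arbitrary units and is in no way a model of the actual bridge:

```python
import math

# Undamped linear oscillator x'' + w0^2 x = sin(w t), integrated with
# semi-implicit Euler. Forcing exactly at the natural frequency w = w0
# makes the amplitude grow roughly linearly in time; a detuned forcing
# stays bounded. Arbitrary units, not a model of the actual bridge.

def peak_amplitude(w_force, w0=1.0, dt=0.001, t_end=200.0):
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        v += dt * (math.sin(w_force * t) - w0 * w0 * x)
        x += dt * v
        t += dt
        peak = max(peak, abs(x))
    return peak

resonant = peak_amplitude(1.0)  # forced at the natural frequency
detuned = peak_amplitude(1.3)   # slightly detuned forcing
print(resonant, detuned)
```

In exact arithmetic the resonant solution of $x''+x=\sin t$ is $x(t)=(\sin t-t\cos t)/2$, so the peak grows like $t/2$; the detuned response remains of order one. Real structures are of course damped and nonlinear, which is why linear resonance is a plausible explanation only for single-mode solicitations such as marching troops.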
These two pioneering bridges already show how the wind and/or traffic loads, both vehicles and pedestrians, play a crucial negative role in the bridge stability.\par A further event deserving to be mentioned is the collapse of the Brighton Chain Pier, built in 1823. It collapsed a first time in 1833, it was rebuilt and partially destroyed once again in 1836. Both the collapses are attributed to violent windstorms. For the second collapse a witness, William Reid, reported valuable observations and sketched a picture illustrating the destruction \cite[p.99]{reid}, see Figure \ref{brighton} \begin{figure} \caption{Destruction of the Brighton Chain Pier.} \label{brighton} \end{figure} which is taken from \cite{rocard}. This is the first reliable report on oscillations appearing in bridges, the most intriguing part of the report being \cite{reid,rocard}: \begin{center} \begin{minipage}{162mm} {\em For a considerable time, the undulations of all the spans seemed nearly equal ... but soon after midday the lateral oscillations of the third span increased to a degree to make it doubtful whether the work could withstand the storm; and soon afterwards the oscillating motion across the roadway, seemed to the eye to be lost in the undulating one, which in the third span was much greater than in the other three; the undulatory motion which was along the length of the road is that which is shown in the first sketch; but there was also an oscillating motion of the great chains across the work, though the one seemed to destroy the other ...} \end{minipage} \end{center} More comments about this collapse are due to Russell \cite{russell}; in particular, he claims that \begin{center} \begin{minipage}{162mm} {\em ... the remedies I have proposed, are those by which such destructive vibrations would have been rendered impossible.} \end{minipage} \end{center} These two comments may have several interpretations. 
However, what appears absolutely clear is that different kinds of oscillations appeared (undulations, lateral oscillations, oscillation motion of the great chains) and some of them were considered destructive. Further details on the Brighton Chain Pier collapse may be found in \cite[pp.4-5]{bleich}.\par Some decades earlier, at the end of the eighteenth century, the German physicist Ernst Chladni was touring Europe and showing, among other things, the nodal line patterns of vibrating plates, see Figure \ref{patterns}. \begin{figure} \caption{Chladni patterns in a vibrating plate.} \label{patterns} \end{figure} Chladni's technique, first published in \cite{chl}, consisted of creating vibrations in a square-shaped metal plate whose surface was covered with light sand. The plate was bowed until it reached resonance, when the vibration caused the sand to concentrate along the nodal lines of vibrations, see \cite{chladniexperiment} for the modern-day version of the experiment. This simple but very effective way to display the nodal lines of vibrations was seen by Navier \cite{navier} as \begin{center} \begin{minipage}{162mm} {\em Les curieuses exp\'eriences de M. Chladni sur les vibrations des plaques...} (the curious experiments of Mr.~Chladni on the vibrations of plates) \end{minipage} \end{center} It appears quite clearly from Figure \ref{patterns} how complicated the vibrations of a thin plate, and hence (see Section \ref{elasticity}) of a bridge, may be. And, indeed, the events just described testify that, besides the somehow expected vertical oscillations, different kinds of oscillations may also appear. For instance, one may have ``an oblique undulatory wave'' or some kind of resonance or the interaction with other structural components such as the suspension chains.
The description of different coexisting forms of oscillations is probably the most important open problem in suspension bridges.\par It is not among the aims of this paper to give the complete story of bridge collapses, for which we refer to \cite[Section 1.1]{bleich}, to \cite[Chapter IV]{rocard}, to \cite{aer,tac1,hayden,ward}, to the recent monographs \cite{akesson,kawada2}, and also to \cite{bridgefailure} for a complete database. Let us just mention that between 1818 and 1889, ten suspension bridges suffered major damage or collapsed in windstorms, see \cite[Table 1, p.13]{tac1}, which is commented on as follows: \begin{center} \begin{minipage}{162mm} {\em An examination of the British press for the 18 years between 1821 and 1839 shows it to be more replete with disastrous news of suspension bridges troubles than Table 1 reveals, since some of these structures suffered from the wind several times during this period and a number of other suspension bridges were damaged or destroyed as a result of overloading.} \end{minipage} \end{center} The story of bridges, suspended and not, contains many further dramatic events; an amazing number of bridges have had troubles for different reasons such as the wind, the traffic loads, or macroscopic mistakes in the project, see e.g.\ \cite{hao,pearson}. Among them, the most celebrated is certainly the Tacoma Narrows Bridge, which collapsed in 1940 just a few months after its opening, both because of the impressive video \cite{tacoma} and because of the large number of studies that it has inspired starting from the reports \cite{Tacoma1,bleich,tac1,tac3,tac4,tac2,tac5}.\par Let us recall some observations made on the Tacoma collapse. Since we were unable to find the Federal Report \cite{Tacoma1} that we repeatedly quote below, we refer to it by trusting the valuable historical research by Scott \cite{wake} and by McKenna and coauthors, see in particular \cite{mck1,mckmonth,mck,mck4}. A good starting point to describe the Tacoma collapse is...
the Golden Gate Bridge, inaugurated a few years earlier, in 1937. This bridge is usually classified as ``very flexible'' although it is strongly stiffened by a thick girder, see Figure \ref{69}. \begin{figure} \caption{Girder at the Golden Gate Bridge.} \label{69} \end{figure} The original roadway was heavy and made with concrete; the weight was reduced in 1986 when a new roadway was installed, see \cite{perks}. Nowadays, in spite of the girder, the bridge can swing more than an amazing 8 meters and flex about 3 meters under big loads, which explains why the bridge is classified as very flexible. The huge mass involved and these large distances from equilibrium explain why ${\cal LHL}$ certainly fails. Due to high winds around 120 kilometers per hour, the Golden Gate Bridge has been closed, without suffering structural damage, only three times: in 1951, 1982, 1983, always during the month of December. A further interesting phenomenon is the appearance of traveling waves in 1938: in \cite[Appendix IX]{Tacoma1} (see also \cite{mck4}), the chief engineer of the Golden Gate Bridge writes \begin{center} \begin{minipage}{162mm} {\em ... I observed that the suspended structure of the bridge was undulating vertically in a wavelike motion of considerable amplitude ...} \end{minipage} \end{center} see also the related detailed description in \cite[Section 1]{mck4}. Hence, one should also expect traveling waves in bridges, see the sketched representation in the first picture in Figure \ref{nuove}. \begin{figure} \caption{Traveling waves and torsional motion in bridges without girder.} \label{nuove} \end{figure} All this may occur also in apparently stiff structures. 
And in the presence of extremely flexible structures, these traveling waves can generate further dangerous phenomena such as torsional oscillations, see the second picture in Figure \ref{nuove}.\par When comparing the structure of the Golden Gate Bridge with that of the original Tacoma Narrows Bridge, one immediately sees a main macroscopic difference: the thick girder sustaining the bridge, compare Figures \ref{69} and \ref{tacoma12}. The girder gives more stiffness to the bridge; this is certainly the main reason why no torsional oscillation ever appeared in the Golden Gate Bridge. A further reason is that larger widths of the roadway seem to prevent torsional oscillations, see \eq{speedflutter} below; from \cite[p.186]{rocard} we quote \begin{center} \begin{minipage}{162mm} {\em ... a bridge twice as wide will have exactly double the critical speed wind.} \end{minipage} \end{center} The Tacoma Bridge was rebuilt with a thick girder acting as a strong stiffening structure, see \cite{tac4}: as mentioned by Scanlan \cite[p.840]{scanlan}, \begin{center} \begin{minipage}{162mm} {\em the original bridge was torsionally weak, while the replacement was torsionally stiff.} \end{minipage} \end{center} The replacement of the original bridge opened in 1950, see \cite{tac4} for some remarks on the project, and still stands today as the westbound lanes of the present-day twin bridge complex; the eastbound lanes opened in 2007. Figure \ref{tacoma12} - picture by Michael Goff, Oregon Department of Transportation, USA - shows the striking difference between the original Tacoma Bridge, which collapsed in 1940, and the twin bridges as they are today. \begin{figure} \caption{The collapsed Tacoma Bridge and the current twin Tacoma Bridges.} \label{tacoma12} \end{figure} Let us go back to the original Tacoma Bridge: even if it was extremely flexible, it is not clear why torsional oscillations appeared.
According to Scanlan \cite[p.841]{scanlan}, \begin{center} \begin{minipage}{162mm} {\em ... some of the writings of von K\'arm\'an leave a trail of confusion on this point. ... it can clearly be shown that the rhythm of the failure (torsion) mode has nothing to do with the natural rhythm of shed vortices following the K\'arm\'an vortex street pattern. ... Others have added to the confusion. A recent mathematics text, for example, seeking an application for a developed theory of parametric resonance, attempts to explain the Tacoma Narrows failure through this phenomenon.} \end{minipage} \end{center} Hence, Scanlan discards the possibility of the appearance of von K\'arm\'an vortices and raises doubts about the appearance of resonance which, indeed, is by now also discarded. Of course, it is reasonable to expect resonance in the presence of a single-mode solicitation, such as for the Broughton Bridge. But for the Tacoma Bridge, Lazer-McKenna \cite[Section 1]{mck1} raise the question \begin{center} \begin{minipage}{162mm} {\em ... the phenomenon of linear resonance is very precise. Could it really be that such precise conditions existed in the middle of the Tacoma Narrows, in an extremely powerful storm?} \end{minipage} \end{center} So, no plausible explanation is available nowadays. In a letter \cite{farq}, Prof.\ Farquharson claimed that \begin{center} \begin{minipage}{162mm} {\em ... a violent change in the motion was noted. This change appeared to take place without any intermediate stages and with such extreme violence ... The motion, which a moment before had involved nine or ten waves, had shifted to two.} \end{minipage} \end{center} All this happened under winds that were not extremely strong, about 80 km/h, and under a relatively high frequency of oscillation, about 36 cpm, see \cite[p.23]{tac1}.
See \cite[Section 2.3]{mckmonth} for more details and for the conclusion that \begin{center} \begin{minipage}{162mm} {\em there is no consensus on what caused the sudden change to torsional motion.} \end{minipage} \end{center} This is confirmed by the following ambiguous comments taken from \cite[Appendix D]{bleich}: \begin{center} \begin{minipage}{162mm} {\em If vertical and torsional oscillations occur, they must be caused by vertical components of wind forces or by some structural action which derives vertical reactions from a horizontally acting wind.} \end{minipage} \end{center} This part is continued in \cite{bleich} by stating that there exist references to both alternatives and that \begin{center} \begin{minipage}{162mm} {\em A few instrumental measurements have been made ... which showed the wind varying up to 8 degrees from the horizontal. Such variation from the horizontal is not the only, and perhaps not the principal source of vertical wind force on a structure.} \end{minipage} \end{center} Besides the lack of consensus on the causes of the switch between vertical and torsional oscillations, all the above comments highlight a strong instability of the oscillation motion as if, after reaching some critical energy threshold, an impulse (a Dirac delta) generated a new unexpected motion. 
Refer to Section \ref{conclusions} for our own interpretation of this phenomenon, which is described in \cite{Tacoma1,bleich} (see also \cite[pp.50-51]{wake}) as: \begin{center} \begin{minipage}{162mm} {\em large vertical oscillations can rapidly change, almost instantaneously, to a torsional oscillation.} \end{minipage} \end{center} We do not completely agree with this description since a careful look at \cite{tacoma} shows that vertical oscillations continue also after the appearance of torsional oscillations; in the video, one sees that the street-lamps at the beginning of the bridge oscillate out of phase with the street-lamps at the end of the bridge. So, the phenomenon which occurs may be better described as follows: \begin{center} \begin{minipage}{162mm} {\bf large vertical oscillations can rapidly create, almost instantaneously, additional torsional oscillations.} \end{minipage} \end{center} Roughly speaking, we believe that part of the energy responsible for vertical oscillations switches to another energy which generates torsional oscillations; the switch occurs without intermediate stages, as if an impulse were responsible for it. Our own explanation for this fact is that \begin{center} \begin{minipage}{162mm} {\bf since vertical oscillations cannot be continued too far downwards below the equilibrium position due to the hangers, when the bridge reaches some limit horizontal position with large kinetic energy, part of the energy transforms into elastic energy and generates a crossing wave, namely a torsional oscillation.} \end{minipage} \end{center} We make this explanation more precise in Section \ref{energybalance}, after some further observations. In order to explain the ``switch of oscillations'' several mathematical models were suggested in the literature.
In the next section we survey some of these models, which are quite different from each other although they have some common features.\par The Deer Isle Bridge, see Figure \ref{deer}, \begin{figure} \caption{The Deer Isle Bridge (left) and the Bronx-Whitestone Bridge (right).} \label{deer} \end{figure} is a suspension bridge in the state of Maine (USA) which encountered wind stability problems similar to those of the original Tacoma Bridge. Before the bridge was finished, in 1939, the wind induced motion in the relatively lightweight roadway. Diagonal stays running from the sustaining cables to the stiffening girders on both towers were added to stabilize the bridge. Nevertheless, the oscillations of the roadway during some windstorms in 1942 caused extensive damage and destroyed some of the stays. At that time everybody had the collapse of the Tacoma Bridge in mind, so that stronger and more extensive longitudinal and transverse diagonal stays were added. In her report \cite{moran}, Barbara Moran wrote \begin{center} \begin{minipage}{162mm} {\em The Deer Isle Bridge was built at the same time as the Tacoma Narrows, and with virtually the same design. One difference: it still stands.} \end{minipage} \end{center} This shows strong instability: even if two bridges are considered similar they can react differently to external solicitations. Of course, much depends on what is meant by ``virtually similar''...\par The Bronx-Whitestone Bridge, displayed in Figure \ref{deer}, was built in New York in 1939 and has shown an intermittent tendency to mild vertical motion from the time the floor system was installed. The reported motions have never been very large, but were noticeable to the traveling public. Several successive steps were taken to stabilise the structure, see \cite{anon}. Midspan diagonal stays and friction dampers at the towers were first installed; these were later supplemented by diagonal stayropes from the tower tops to the roadway level.
However, even these devices were not entirely adequate and in 1946 the roadway was stiffened by the addition of truss members mounted above the original plate girders, the latter becoming the lower chords of the trusses \cite{ammann,pavlo}. This is a typical example of a bridge built without considering all the possible external effects, subsequently damped by means of several unnatural additional components. Our own criticism is that \begin{center} \begin{minipage}{162mm} {\bf instead of just solving the problem, one should understand the problem.} \end{minipage} \end{center} And precisely in order to understand the problem, we described above those events which displayed the pure elastic behavior of bridges. These were mostly suspension bridges without girders and were free to oscillate. This is a good reason why the Tacoma collapse should be further studied for deeper knowledge: it displays the pure motion without stiffening constraints which hide the elastic features of bridges.\par The Tacoma Bridge collapse is just the most celebrated and dramatic evidence of an oscillating bridge, but bridge oscillations are still not well understood nowadays. In May 2010, the Russian authorities closed the Volgograd Bridge to all motor traffic due to its strong vertical oscillations (traveling waves) caused by windy conditions, see \cite{volgograd} for the BBC report and video. Once more, these oscillations may appear surprising since the Volgograd Bridge is a concrete girder bridge and its stiffness should prevent oscillations. However, it seems that strong water currents in the Volga river loosened one of the bridge's vertical supports so that the stiffening effect due to the concrete support was lost and the behavior became more similar to that of a suspension bridge. The bridge remained closed while it was inspected for damage. As soon as the original stiffening effect was restored, the bridge reopened for public access.
In Figure \ref{volgabridge} the reader finds pictures of the bridge and of the damped sustaining support. \begin{figure} \caption{The Volgograd Bridge.} \label{volgabridge} \end{figure} These pictures are taken from \cite{volgobridge}, where one can also find full details on the damping system of the bridge. The Volgograd Bridge shows well how oscillation-induced fatigue of the structural members is a major factor limiting the life of a bridge. In \cite{kawada} one may find a mathematical analysis and wind tunnel tests for examining oscillations which occur under ``constant low wind'', rather than under violent windstorms: \begin{center} \begin{minipage}{162mm} {\em Limited oscillation could even cause a collapse of light suspension bridges in a reasonably short time.} \end{minipage} \end{center} As already observed, the wind is not the only possible external source which generates bridge oscillations; they also appear in pedestrian bridges, where lateral swaying is the counterpart of torsional oscillation. In June 2000, the very same day when the London Millennium Bridge opened and the crowd streamed on it, the bridge started to sway from side to side, see \cite{london}. Many pedestrians fell spontaneously into step with the vibrations, thereby amplifying them. According to Sanderson \cite{sanderson}, the bridge wobble was due to the way people balanced themselves, rather than the timing of their steps. Therefore, the pedestrians acted as negative dampers, adding energy to the bridge's natural sway. Macdonald \cite[p.1056]{macdonald} explains this phenomenon by writing \begin{center} \begin{minipage}{162mm} {\em ...
above a certain critical number of pedestrians, this negative damping overcomes the positive structural damping, causing the onset of exponentially increasing vibrations.} \end{minipage} \end{center} Although we have some doubts about the real meaning of ``exponentially increasing vibrations'', we have no doubts that this description corresponds to a superlinear behavior, which has also been observed in several further pedestrian bridges, see \cite{franck} and \cite{zivanovic}, from which we quote \begin{center} \begin{minipage}{162mm} {\em ... damping usually increases with increasing vibration magnitude due to engagement of additional damping mechanisms.} \end{minipage} \end{center} The Millennium Bridge was made secure by adding some stiffening trusses below the girder, see Figure \ref{LMB} (Photo $\copyright$ Peter Visontay). \begin{figure} \caption{The London Millennium Bridge.} \label{LMB} \end{figure} The mathematical explanation of this solution is that trusses lessen swaying and force the bridge to remain closer to its equilibrium position, that is, closer to a linear behavior as described by ${\cal LHL}$. Although trusses delay the appearance of the superlinear behavior, they do not completely solve the problem: one may wonder what would happen if 10,000 elephants were to walk simultaneously across the Millennium Bridge... In this respect, let us quote from \cite[p.13]{tac1} a comment on suspension bridges strengthened by stiffening girders: \begin{center} \begin{minipage}{162mm} {\em That significant motions have not been recorded on most of these bridges is conceivably due to the fact that they have never been subjected to optimum winds for a sufficient period of time.} \end{minipage} \end{center} Another pedestrian bridge, the Assago Bridge in Milan (310m long), had a similar problem.
In February 2011, just after a concert, the public crossed the bridge and, suddenly, the swaying became so violent that people could hardly stand, see \cite{fazzo} and \cite{assago}. Even worse was the subsequent panic effect when the crowd started running in order to escape from a possible collapse; this amplified the swaying but, quite luckily, nobody was injured. In this case, the project did not take into account that a large number of people would cross the bridge just after events; when the swaying started there were about 1,200 pedestrians on the footbridge. This problem was solved by adding positive dampers, see \cite{stella}.\par According to \cite{bridgefailure}, around 400 recorded bridges have failed for several different reasons, and more than 70 of these failures occurred after the year 2000. Probably, some years after publication of this paper, these numbers will have increased considerably... The database \cite{bridgefailure} consists mainly of brief descriptions and statistics for each bridge failure: location, number of fatalities/injuries, etc., rather than in-depth analyses of the causes of the failures, for which we refer to the nice book by Akesson \cite{akesson}.\par As we have seen, the reasons for failure are of different kinds. Firstly, strong and/or continued winds: these may cause wide vertical oscillations which may switch to different kinds of oscillations. Especially for suspension bridges the latter phenomenon is quite evident, due to the many elastic components (cables, hangers, towers, etc.) which they contain. A second cause is traffic loads, such as some precise resonance phenomenon, some unpredictable synchronised behavior, or some unexpected huge load; these problems are quite common in many different kinds of bridges.
Finally, a third cause is mistakes in the project; these are both theoretical, for instance assuming ${\cal LHL}$, and practical, such as wrong assumptions on the possible maximum external actions.\par After describing so many disasters, we suggest a joke which may sound like a provocation. Since many bridge projects did not forecast oscillations, it might be safer to build old-fashioned stone bridges, such as the Roman aqueduct built in Segovia (Spain) during the first century, still in perfect shape and in use. Of course, we are not suggesting here to replace the Golden Gate Bridge with a Roman-style bridge! But we do suggest planning bridges by taking into account all possible kinds of solicitations. Moreover, we suggest not hiding unsolved problems behind unnatural solutions such as stiff and heavy girders or more extensive longitudinal and transverse diagonal stays, see Section \ref{howplan} for more suggestions.\par Throughout this section we listed a number of historical events about bridges. They taught us the following facts:\par 1. Self-excited oscillations appear in bridges. Often this is somewhat unexpected, since the project does not take into account several strong and/or prolonged external effects. And even if expected, oscillations can be much wider than estimated.\par 2. Oscillations can be attenuated by stiffening the structure or by adding positive (and heavy, and expensive) dampers to the structure. However, none of these solutions can completely prevent oscillations, especially in the presence of highly unfavorable events such as strong and prolonged winds, not necessarily hurricanes, or heavy and synchronised traffic loads. Due to the unnatural stiffness of the structure, trusses and dampers may cause cracks, see \cite{crack} and references therein; but we leave this problem to engineers... see \cite{kawada2}.\par 3. The oscillations are amplified by an observable superlinear effect.
The farther the bridge is from its equilibrium position, the more relevant the impact of external forces. It is by now well understood that suspension bridges behave nonlinearly, see e.g.\ \cite{brown,lacarbonara2}.\par 4. In extremely flexible bridges, such as the Tacoma Bridge which had no stiffening truss, vertical oscillations can partially switch to torsional oscillations and even to more complicated oscillations, see the pictures in Figure \ref{kable} which are taken from \cite[p.143]{cable} \begin{figure} \caption{Combined oscillations of a bridge roadway.} \label{kable} \end{figure} and see also \cite[pp.94-95]{rocard} and Figure \ref{ghaffer} below. This occurs when vertical oscillations become too large and, in such a case, vertical and torsional oscillations coexist. To date, there is no convincing explanation of why the switch occurs. \section{How to model bridges}\label{howto} The amazing number of failures described in the previous section shows that the existing theories and models are not adequate to describe the statics and the dynamics of oscillating bridges. In this section we survey different points of view and different models, and we underline their main weaknesses. We also suggest how to modify them in order to fulfill the requirements of (GP). \subsection{A quick overview on elasticity: from linear to semilinear models}\label{elasticity} A quite natural way to describe the bridge roadway is to view it as a thin rectangular plate. This is also the opinion of Rocard \cite[p.150]{rocard}: \begin{center} \begin{minipage}{162mm} {\em The plate as a model is perfectly correct and corresponds mechanically to a vibrating suspension bridge...} \end{minipage} \end{center} In this case, a commonly adopted theory is the linear one by Kirchhoff-Love \cite{kirchhoff,love}, see also \cite[Section 1.1.2]{gazgruswe}, which we briefly recall. The bending energy of a plate involves the curvatures of the surface.
Let $\kappa_1$, $\kappa_2$ denote the principal curvatures of the graph of a smooth function $u$ representing the deformation of the plate; then a simple model for the bending energy of the deformed plate $\Omega$ is \neweq{curva} \mathbb{E}(u)=\int_\Omega\left(\frac{\kappa_1^2}{2}+\frac{\kappa_2^2}{2}+\sigma\kappa_1\kappa_2\right)\, dx_1dx_2 \end{equation} where $\sigma$ denotes the Poisson ratio defined by $\sigma=\frac{\lambda}{2\left(\lambda +\mu \right)}$ with the so-called Lam\'e constants $\lambda,\mu $ that depend on the material. For physical reasons it holds that $\mu >0$ and usually $\lambda \geq 0$ so that $0\le\sigma<\frac{1}{2}$. In the linear theory of elastic plates, for small deformations $u$ the terms in \eq{curva} are considered to be purely quadratic with respect to the second order derivatives of $u$. More precisely, for small deformations $u$, one has $$(\kappa_1+\kappa_2)^2\approx(\Delta u)^2\ ,\quad\kappa_1\kappa_2\approx\det(D^2u)=(u_{x_1x_1}u_{x_2x_2}-u_{x_1x_2}^{2})\ ,$$ and therefore $$\frac{\kappa_1^2}{2}+\frac{\kappa_2^2}{2}+\sigma\kappa_1\kappa_2\approx\frac{1}{2}(\Delta u)^2+(\sigma-1)\det(D^2u).$$ Then \eq{curva} yields \neweq{energy-gs} \mathbb{E}(u)=\int_{\Omega }\left(\frac{1}{2}\left( \Delta u\right) ^{2}+(\sigma-1)\det(D^2u)\right) \, dx_1dx_2\, . \end{equation} Note that for $-1<\sigma<1$ the functional $\mathbb{E}$ is coercive and convex. This modern variational formulation appears in \cite{Friedrichs}, while a discussion of a boundary value problem for a thin elastic plate in a somewhat old-fashioned notation is made by Kirchhoff \cite{kirchhoff}. And precisely the choice of the boundary conditions is quite delicate, since it depends on the physical model considered.\par Destuynder-Salaun \cite[Section I.2]{destuyndersalaun} describe this modeling by \begin{center} \begin{minipage}{162mm} {\em ...
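The passage from \eq{curva} to \eq{energy-gs} rests on the exact algebraic identity $\frac{\kappa_1^2}{2}+\frac{\kappa_2^2}{2}+\sigma\kappa_1\kappa_2=\frac{(\kappa_1+\kappa_2)^2}{2}+(\sigma-1)\kappa_1\kappa_2$; the approximation enters only when $\kappa_1+\kappa_2$ and $\kappa_1\kappa_2$ are replaced by $\Delta u$ and $\det(D^2u)$. A quick numerical check of the identity (a sketch, not part of the original derivation):

```python
import random

def bending_density(k1, k2, sigma):
    # Integrand of (curva): kappa1^2/2 + kappa2^2/2 + sigma*kappa1*kappa2.
    return k1 ** 2 / 2 + k2 ** 2 / 2 + sigma * k1 * k2

def reformulated(k1, k2, sigma):
    # Integrand of (energy-gs) before the small-deflection approximation:
    # (kappa1 + kappa2)^2/2 + (sigma - 1)*kappa1*kappa2.
    return (k1 + k2) ** 2 / 2 + (sigma - 1) * k1 * k2

# The two densities agree identically in (kappa1, kappa2, sigma).
for _ in range(1000):
    k1, k2, s = (random.uniform(-10, 10) for _ in range(3))
    assert abs(bending_density(k1, k2, s) - reformulated(k1, k2, s)) < 1e-9
```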
Kirchhoff and Love have suggested to assimilate the plate to a collection of small pieces, each one being articulated with respect to the other and having a rigid-body behavior. It looks like these articulated wooden snakes that children have as toys. Hence the transverse shear strain remains zero, while the planar deformation is due to the articulation between small blocks. But this simplified description of a plate movement can be acceptable only if the components of the stress field can be considered to be negligible.} \end{minipage} \end{center} The above comment says that ${\cal LHL}$ should not be adopted if the components of the stress field are not negligible. An attempt to deal with large deflections of thin plates is made by Mansfield \cite[Chapters 8-9]{mansfield}. He first considers approximate methods and then deals with three classes of asymptotic plate theories: membrane theory, tension field theory, and inextensional theory. Roughly speaking, the three theories may be adopted according to the ratio between the thickness of the plate and the typical planar dimension: for the first two theories the ratio should be less than $10^{-3}$, whereas for the third theory it should be less than $10^{-2}$. Since a roadway has a length of the order of 1km and a width of the order of 10m, even for the less stringent inextensional theory the thickness of the roadway should be less than 10cm, which, of course, appears unreasonable. Once more, this means that ${\cal LHL}$ should not be adopted in bridges. In any case, Mansfield \cite[p.183]{mansfield} writes \begin{center} \begin{minipage}{162mm} {\em The exact large-deflection analysis of plates generally presents considerable difficulties...} \end{minipage} \end{center} Destuynder-Salaun \cite[Section I.2]{destuyndersalaun} also revisit an alternative model due to Naghdi \cite{naghdi} by using a mixed variational formulation.
They refer to \cite{mindlin,reissner1,reissner2} for further details and modifications, and conclude by saying that neither the Kirchhoff-Love model nor any of these alternative models is always better than the others. Moreover, also the definition of the transverse shear energy is not universally accepted: from \cite[p.149]{destuyndersalaun}, we quote \begin{center} \begin{minipage}{162mm} {\em ... this discussion has been at the origin of a very large number of papers from both mathematicians and engineers. But to our best knowledge, a convincing justification concerning which one of the two expressions is the more suitable for numerical purpose, has never been formulated in a convincing manner. This question is nevertheless a fundamental one ...} \end{minipage} \end{center} It is clear that a crucial role is played by the word ``thin''. What is a thin plate? How thick may it be? If the thickness is assumed to be zero, a quite unrealistic assumption for bridges, one obtains the celebrated two-dimensional equation suggested by von K\'arm\'an \cite{karman}. This equation has been widely, and satisfactorily, studied from several mathematical points of view such as existence, regularity, eigenvalue problems, semilinear versions, see e.g.\ \cite{gazgruswe} for a survey of results. On the other hand, quite often doubts have been raised on its physical soundness. For instance, Truesdell \cite[pp.601-602]{truesdell} writes \begin{center} \begin{minipage}{162mm} {\em Being unable to explain just why the von K\'arm\'an theory has always made me feel a little nauseated as well as very slow and stupid, I asked an expert, Mr. Antman, what was wrong with it.
I can do no better than paraphrase what he told me: it relies upon\par 1) ``approximate geometry'', the validity of which is assessable only in terms of some other theory.\par 2) assumptions about the way the stress varies over a cross-section, assumptions that could be justified only in terms of some other theory.\par 3) commitment to some specific linear constitutive relation - linear, that is, in some special measure of strain, while such approximate linearity should be outcome, not the basis, of a theory.\par 4) neglect of some components of strain - again, something that should be proved mathematically from an overriding, self-consistent theory.\par 5) an apparent confusion of the referential and spatial descriptions - a confusion that is easily justified for the classical linearised elasticity but here is carried over unquestioned, in contrast with all recent studies of the elasticity of finite deformations.} \end{minipage} \end{center} Truesdell then concludes with a quite eloquent comment: \begin{center} \begin{minipage}{162mm} {\em These objections do not prove that anything is wrong with von K\'arm\'an strange theory. They merely suggest that it would be difficult to prove that there is anything right about it.} \end{minipage} \end{center} Let us invite the interested reader to have a careful look at the paper by Truesdell \cite{truesdell}; it contains several criticisms expressed in a highly ironic and exhilarating fashion, which makes them very effective.\par Classical books for elasticity theory are due to Love \cite{love}, Timoshenko \cite{timoshenko}, Ciarlet \cite{ciarletbook}, Villaggio \cite{villaggio}, see also \cite{nadai,naghdi,timoshenkoplate} for the theory of plates. Let us also point out a celebrated work by Ball \cite{ball}, who was the first analyst to approach the real 3D boundary value problems for nonlinear elasticity.
Further nice attempts to tackle nonlinear elasticity in particular situations were made by Antman \cite{antman1,antman2} who, however, appears quite skeptical about the possibility of a general theory: \begin{center} \begin{minipage}{162mm} {\em ... general three-dimensional nonlinear theories have so far proved to be mathematically intractable.} \end{minipage} \end{center} Summarising, what we have seen suggests concluding this short review of plate models by claiming that the classical modeling of thin plates should be carefully revisited. This suggestion is by no means new. In this respect, let us quote a couple of sentences written by Gurtin \cite{gurtin} about nonlinear elasticity: \begin{center} \begin{minipage}{162mm} {\em Our discussion demonstrates why this theory is far more difficult than most nonlinear theories of mathematical physics. It is hoped that these notes will convince analysts that nonlinear elasticity is a fertile field in which to work.} \end{minipage} \end{center} Since the previously described Kirchhoff-Love model implicitly assumes ${\cal LHL}$, and since quasilinear equations appear too complicated to give useful information, we intend to add some nonlinearity only in the source $f$ in order to have a semilinear equation, which appears to be a good compromise between overly poor linear models and overly complicated quasilinear models. This compromise is quite common in elasticity, see e.g.\ \cite[p.322]{ciarletbook}, which describes the method of asymptotic expansions for the thickness $\varepsilon$ of a plate as a ``partial linearisation'' \begin{center} \begin{minipage}{162mm} {\em ...
in that a system of quasilinear partial differential equations, i.e., with nonlinearities in the higher order terms, is replaced as $\varepsilon\to0$ by a system of semilinear partial differential equations, i.e., with nonlinearities only in the lower order terms.} \end{minipage} \end{center} In Section \ref{newmodel}, we suggest a new 2D mathematical model described by a semilinear fourth order wave equation. Before doing this, in the next section we survey some existing models and suggest some possible variants based on the observations listed in Section \ref{story}. \subsection{Equations modeling suspension bridges}\label{models} Although it is oversimplified in several respects, the celebrated report by Navier \cite{navier2} was for more than a century the only mathematical treatise on suspension bridges. The second milestone contribution is certainly the monograph by Melan \cite{melan}. After the Tacoma collapse, the engineering communities felt the need to find accurate equations in order to attempt an explanation of what had occurred. In this respect, a first source is certainly the work by Smith-Vincent \cite{tac2}, which was written precisely {\em with special reference to the Tacoma Narrows Bridge}. The bridge is modeled as a one-dimensional beam, say the interval $(0,L)$, and in order to obtain an autonomous equation, Smith-Vincent consider the function $\eta=\eta(x)$ representing the amplitude of the oscillation at the point $x\in(0,L)$. By linearising they obtain a fourth order linear ODE \cite[(4.2)]{tac2} which can be integrated explicitly. We will not write this equation because we prefer to deal with the function $v=v(x,t)$ representing the deflection at any point $x\in(0,L)$ and at time $t>0$; roughly speaking, $v(x,t)=\eta(x)\sin(\omega t)$ for some $\omega>0$.
In this respect, a slightly better job was done in \cite{bleich} although this book was not very lucky since two of the authors (McCullogh and Bleich) passed away during its preparation. Equation \cite[(2.7)]{bleich} is precisely \cite[(4.2)]{tac2}; but \cite[(2.6)]{bleich} considers the deflection $v$ and reads \neweq{primissima} m\, v_{tt}+EI\, v_{xxxx}-H_w\, v_{xx}+\frac{w\, h}{H_w}=0\, ,\qquad x\in(0,L)\, ,\ t>0\, , \end{equation} where $E$ and $I$ are, respectively, the elastic modulus and the moment of inertia of the stiffening girder so that $EI$ is the stiffness of the girder; moreover, $m$ denotes the mass per unit length, $w=mg$ is the weight which produces a cable stress whose horizontal component is $H_w$, and $h$ is the increase of $H_w$ as a result of the additional deflection $v$. In particular, this means that $h$ depends on $v$ although \cite{bleich} does not emphasise this fact and considers $h$ as a constant.\par An excellent source to derive the equation of vertical oscillations in suspension bridges is \cite[Chapter IV]{rocard} where all the details are perfectly explained. The author, the French physicist Yves-Andr\'e Rocard (1903-1992), also helped to develop the atomic bomb for France. Consider again that a long span bridge roadway is a beam of length $L>0$ and that it is oscillating; let $v(x,t)$ denote the vertical component of the oscillation for $x\in(0,L)$ and $t>0$. The equation derived in \cite[p.132]{rocard} reads \neweq{flutter} m\, v_{tt}+EI\, v_{xxxx}-\big(H_w+\gamma v\big)\, v_{xx}+\frac{w\, \gamma}{H_w}v=f(x,t)\, ,\quad x\in(0,L)\, ,\ t>0, \end{equation} where $H_w$, $EI$ and $m$ are as in \eq{primissima}, $\gamma v$ is the variation $h$ of $H_w$ supposed to vary linearly with $v$, and $f$ is an external forcing term. Note that a nonlinearity appears here in the term $\gamma v v_{xx}$. 
In fact, \eq{flutter} is closely related to an equation suggested much earlier by Melan \cite[p.77]{melan} but it has not been subsequently attributed to him. \begin{problem} {\em Study oscillations and possible blow up in finite time for traveling waves to \eq{flutter} having velocity $c>0$, $v=v(x,t)=y(x-ct)$ for $x\in\mathbb{R}$ and $t>0$, in the cases where $f\equiv1$ is constant and where $f$ depends superlinearly on $v$. Putting $\tau=x-ct$ one is led to find solutions to the ODE $$ EI\, y''''(\tau)-\Big(\gamma y(\tau)+H_w-mc^2\Big)\, y''(\tau)+\frac{w\, \gamma}{H_w}y(\tau)=1\, ,\quad \tau\in\mathbb{R}\, . $$ By letting $w(\tau)=y(\tau)-\frac{H_w}{w\, \gamma}$ and normalising some constants, we arrive at \neweq{y4} w''''(\tau)-\Big(\alpha w(\tau)+\beta\Big)\, w''(\tau)+w(\tau)=0\, ,\quad \tau\in\mathbb{R}\, , \end{equation} for some $\alpha>0$ and $\beta\in\mathbb{R}$; we expect different behaviors depending on $\alpha$ and $\beta$. It would be interesting to see if local solutions to \eq{y4} blow up in finite time with wide oscillations. Moreover, one should also consider the more general problem $$w''''(\tau)-\Big(\alpha w(\tau)+\beta\Big)\, w''(\tau)+f(w(\tau))=0\, ,\quad \tau\in\mathbb{R}\, ,$$ with $f$ being superlinear, for instance $f(s)=s+\varepsilon s^3$ with $\varepsilon>0$ small. Incidentally, we note that such $f$ satisfies \eq{f} and \eq{fmono}-\eq{f2} below.} $\Box$\end{problem} Let us also mention that Rocard \cite[pp.166-167]{rocard} studies the possibility of simultaneous excitation of different bending and torsional modes and obtains a coupled system of linear equations of the kind of \eq{flutter}. 
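As a first numerical experiment on \eq{y4} one can integrate the equivalent first order system with a classical Runge-Kutta scheme and monitor the amplitude of $w$. The following is only a sketch: the values of $\alpha$, $\beta$, the initial data and the time interval are purely illustrative, not taken from any reference.

```python
# Numerical sketch for eq. (y4): w'''' - (alpha*w + beta) w'' + w = 0.
def rk4(f, y, t, dt, steps):
    """Classical 4th-order Runge-Kutta for y' = f(t, y); returns all states."""
    out = [y]
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
        y = [yi + dt / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += dt
        out.append(y)
    return out

def y4_system(alpha, beta):
    # State (w, w', w'', w'''); from (y4): w'''' = (alpha*w + beta)*w'' - w.
    def f(t, s):
        w, w1, w2, w3 = s
        return [w1, w2, w3, (alpha * w + beta) * w2 - w]
    return f

# Integrate on a short time interval and monitor the amplitude of w.
traj = rk4(y4_system(alpha=1.0, beta=0.0), [0.1, 0.0, 0.0, 0.0], 0.0, 0.01, 500)
amplitude = max(abs(s[0]) for s in traj)
```

Comparing such a run with the linear case $\alpha=0$ gives a first feeling for the role of the nonlinear term $\alpha\, w\, w''$; of course, this experiment is no substitute for a rigorous blow-up analysis.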
With few variants, equations \eq{primissima} and \eq{flutter} seem nowadays to be well-accepted among engineers, see e.g.\ \cite[Section VII.4]{aer}; moreover, quite similar equations are derived to describe related phenomena in cable-stayed bridges \cite[(1)]{bruno} and in arch bridges traversed by high-speed trains \cite[(14)-(15)]{lacarbonara}.\par Let $v(x,t)$ and $\theta(x,t)$ denote respectively the vertical and torsional components of the oscillation of the bridge, then the following system is derived in \cite[(1)-(2)]{como} for the linearised equations of the elastic combined vertical-torsional oscillation motion: \renewcommand{\arraystretch}{2.8} \neweq{eqqq} \left\{\begin{array}{l} \displaystyle m\, v_{tt}+EI\, v_{xxxx}-H_w\, v_{xx}+\frac{w^2}{H_w^2}\, \frac{EA}{L}\int_0^L v(z,t)\, dz=f(x,t)\\ \displaystyle I_0\, \theta_{tt}+C_1\, \theta_{xxxx}-(C_2+H_w\ell^2)\, \theta_{xx}+\frac{\ell^2w^2}{H_w^2}\, \frac{EA}{L}\int_0^L\theta(z,t)\, dz=g(x,t)\\ x\in(0,L)\, ,\ t>0, \end{array}\right. \end{equation} \renewcommand{\arraystretch}{1.5} where $m$, $w$, $H_w$ are as in \eq{primissima}, $EI$, $C_1$, $C_2$, $EA$ are respectively the flexural, warping, torsional, extensional stiffness of the girder, $I_0$ the polar moment of inertia of the girder section, $2\ell$ the roadway width, $f(x,t)$ and $g(x,t)$ are the lift and the moment for unit girder length of the self-excited forces. The linearisation here consists in dropping the term $\gamma v v_{xx}$ but a preliminary linearisation was already present in \eq{flutter} in the zero order term. And the nonlocal linear term $\int_0^L v$, which replaces the zero order term in \eq{flutter}, is obtained by assuming ${\cal LHL}$.
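A small observation on the nonlocal term in \eq{eqqq}: for the even vertical modes $v=\sin(n\pi x/L)\cos(\omega t)$ one has $\int_0^L\sin(n\pi z/L)\,dz=0$, so these modes are insensitive to the cable term and their natural frequencies come from the local beam operator alone, $m\omega^2=EI(n\pi/L)^4+H_w(n\pi/L)^2$. A minimal sketch, with purely illustrative parameter values (not those of any real bridge):

```python
import math

# Illustrative parameter values (NOT taken from any actual bridge):
L = 1000.0    # span length (m)
m = 1.0e4     # mass per unit length (kg/m)
EI = 1.0e11   # flexural stiffness (N m^2)
Hw = 1.0e8    # horizontal cable tension (N)

def even_mode_frequency(n):
    """Natural frequency (rad/s) of the vertical mode sin(n*pi*x/L), even n.
    For even n the nonlocal term in (eqqq) vanishes, since the integral of
    sin(n*pi*z/L) over (0, L) is zero, and the dispersion relation reduces to
    m*omega^2 = EI*(n*pi/L)^4 + Hw*(n*pi/L)^2."""
    assert n % 2 == 0, "only even modes decouple from the nonlocal term"
    k = n * math.pi / L
    return math.sqrt((EI * k ** 4 + Hw * k ** 2) / m)
```

For odd $n$ the integral does not vanish and the pure sine modes are no longer exact eigenfunctions, which is precisely the effect of the nonlocal cable term.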
The nonlocal term in \eq{eqqq} represents the increment of energy due to the external wind during a period of time; this will be better explained in Section \ref{energies}.\par An important paper by Abdel-Ghaffar \cite{abdel} deserves special mention: there, variational principles are used to obtain the combined equations of suspension bridge motion in a fairly general nonlinear form. The effect of coupled vertical-torsional oscillations, as well as of cross-distortion of the stiffening structure, is clarified by separating them into four different kinds of displacements: the vertical displacement $v$, the torsional angle $\theta$, the cross section distortional angle $\psi$, and the warping displacement $u$, although $u$ can be expressed in terms of $\theta$ and $\psi$. These displacements are well described in Figure \ref{ghaffer}, which is taken from \cite[Figure 2]{abdel}. \begin{figure} \caption{The four different kinds of displacements.} \label{ghaffer} \end{figure} A careful analysis of the energies involved is made, reaching up to fifth derivatives in the equations, see \cite[(15)]{abdel}.
Higher order derivatives are then neglected and the following nonlinear system of three PDE's of fourth order in the three unknown displacements $v$, $\theta$, $\psi$ is obtained, see \cite[(28)-(29)-(30)]{abdel}: $$\left\{\begin{array}{l} \frac{w}{g}\, v_{tt}+EI\, v_{xxxx}-\Big(2H_w+H_1(t)+H_2(t)\Big)\, v_{xx}+\frac{b}{2}\, \Big(H_1(t)-H_2(t)\Big)\, (\theta_{xx}+\psi_{xx})\\ \ \ \ +\frac{w}{2H_w}\, \Big(H_1(t)+H_2(t)\Big)-\frac{w_s\, r^2}{g}\, \left(1+\frac{EI}{2G\mu r^2}\right)\, v_{xxtt}+\frac{w_s^2\, r^2}{4gG\mu}\, v_{tttt}=0\\ I_m\, \theta_{tt}+E\Gamma\, \theta_{xxxx}-GJ\, \theta_{xx}-\frac{H_w\, b^2}{2}\, (\theta_{xx}+\psi_{xx})-\frac{\gamma\, \Gamma}{g}\, \theta_{xxtt} -\frac{b^2}{4}\, \Big(H_1(t)+H_2(t)\Big)\, (\theta_{xx}+\psi_{xx})\\ \ \ \ +\frac{b}{2}\, \Big(H_1(t)-H_2(t)\Big)\, v_{xx}-\frac{\gamma\, \Lambda}{g} \psi_{xxtt}+\frac{b\, w}{4H_w}\, \Big(H_2(t)-H_1(t)\Big)+E\Lambda\, \psi_{xxxx}+\frac{w_c\, b^2}{4g}\, \psi_{tt}=0\\ \frac{w_c\, b^2}{4g}\, (\psi_{tt}+\theta_{tt})+\frac{EA\, b^2d^2}{4}\, \psi_{xxxx}-\frac{H_w\, b^2}{2}\, (\psi_{xx}+\theta_{xx})- \frac{\gamma Ab^2d^2}{4g}\, \psi_{xxtt}-\frac{\gamma\, \Lambda}{g}\theta_{xxtt}+E\Lambda\, \theta_{xxxx}\\ \ \ \ -\frac{b^2}{4}\, \Big(H_1(t)+H_2(t)\Big)\, (\theta_{xx}+\psi_{xx})+\frac{b}{2}\, \Big(H_1(t)-H_2(t)\Big)\, v_{xx} +\frac{w\, b}{4H_w}\, \Big(H_2(t)-H_1(t)\Big)=0\ . \end{array}\right.$$ We will not explain here what is the meaning of all the constants involved, it would take several pages... Some of the constants have a clear meaning, for the interpretation of the remaining ones, we refer to \cite{abdel}. Let us just mention that $H_1$ and $H_2$ represent the vibrational horizontal components of the cable tension and depend on $v$, $\theta$, $\psi$, and their first derivatives, see \cite[(3)]{abdel}. We wrote these equations in order to convince the reader that the behavior of the bridge is modeled by terribly complicated equations and by no means one should make use of ${\cal LHL}$. 
After making such a huge effort, Abdel-Ghaffar simplifies the problem by neglecting the cross section deformation, the shear deformation and the rotatory inertia; he obtains a coupled nonlinear vertical-torsional system of two equations in the two unknown functions $v$ and $\theta$. These equations are finally linearised by neglecting $H_1$ and $H_2$, which are considered small when compared with the initial tension $H_w$. Then the coupling effect disappears and equations \eq{eqqq} are recovered, see \cite[(34)-(35)]{abdel}. What a pity, an accurate modeling ended up with a linearisation! But there was no choice... how could one hope to get any kind of information from the above system?\par Summarising, after the previously described pioneering models from \cite{bleich,melan,navier2,rocard,tac2} there has not been much work among engineers on alternative differential equations; the attention has turned to improving performance through design factors, see e.g.\ \cite{hhs}, or to solving structural problems rather than understanding them more deeply. In this respect, from \cite[p.2]{mckmonth} we quote a personal discussion between McKenna and a distinguished civil engineer who said \begin{center} \begin{minipage}{162mm} {\em ... having found obvious and effective physical ways of avoiding the problem, engineers will not give too much attention to the mathematical solution of this fascinating puzzle ...} \end{minipage} \end{center} Only the modeling of modern footbridges has attracted some interest from a theoretical point of view. As already mentioned, pedestrian bridges are extremely flexible and display elastic behaviors similar to those of suspension bridges, although the oscillations are of a different kind. In this respect, we would like to mention an interesting discussion with Diana \cite{diana}.
He explained that when a suspension bridge is attacked by wind it starts oscillating, but soon afterwards the wind itself modifies its behavior according to the bridge oscillation; so, the wind amplifies the oscillations by blowing synchronously. A qualitative description of this phenomenon was already attempted by Rocard \cite[p.135]{rocard}: \begin{center} \begin{minipage}{162mm} {\em ... it is physically certain and confirmed by ordinary experience, although the effect is known only qualitatively, that a bridge vibrating with an appreciable amplitude completely imposes its own frequency on the vortices of its wake. It appears as if in some way the bridge itself discharges the vortices into the fluid with a constant phase relationship with its own oscillation... .} \end{minipage} \end{center} This recalls the above-described behavior of footbridges where {\em pedestrians fall spontaneously into step with the vibrations}: in both cases, external forces synchronise their effect and amplify the oscillations of the bridge. This is one of the reasons why self-excited oscillations appear in suspension and pedestrian bridges.\par In \cite{bodgi} a simple 1D model was proposed in order to describe the crowd-flow phenomena occurring when pedestrians walk on a flexible footbridge. The resulting equation \cite[(2)]{bodgi} reads \neweq{pedestrian} \left(m_s(x)+m_p(x,t)\right)u_{tt}+\delta(x)u_t+\gamma(x)u_{xxxx}=g(x,t) \end{equation} where $x$ is the coordinate along the beam axis, $t$ the time, $u=u(x,t)$ the lateral displacement, $m_s(x)$ the mass per unit length of the beam, $m_p(x,t)$ the linear mass of the pedestrians, $\delta(x)$ the viscous damping coefficient, $\gamma(x)$ the stiffness per unit length, and $g(x,t)$ the pedestrian lateral force per unit length.
In view of the superlinear behavior for large displacements observed for the London Millennium Bridge, see Section \ref{story}, we wonder whether, instead of a linear model, one should consider a lateral force also depending on the displacement, $g=g(x,t,u)$, superlinear with respect to $u$. \begin{problem} {\em Study \eq{pedestrian} modified as follows $$ u_{tt}+\delta u_t+\gamma u_{xxxx}+f(u)=g(x,t)\qquad(x\in\mathbb{R}\, ,\ t>0) $$ where $\delta>0$, $\gamma>0$ and $f(s)=s+\varepsilon s^3$ for some $\varepsilon>0$ small. One could first consider the Cauchy problem $$u(x,0)=u_0(x)\ ,\quad u_t(x,0)=u_1(x)\quad(x\in\mathbb{R})$$ with $g\equiv0$. Then one could seek traveling waves such as $u(x,t)=w(x-ct)$ which solve the ODE $$ \gamma w''''(\tau)+c^2w''(\tau)-\delta c\, w'(\tau)+f(w(\tau))=0\qquad(x-ct=\tau\in\mathbb{R}). $$ Finally, one could also try to find properties of solutions in a bounded interval $x\in(0,L)$.} $\Box$\end{problem} Scanlan-Tomko \cite{scantom} introduce a model in which the torsional angle $\theta$ of the roadway section satisfies the equation \neweq{scann} I\, [\theta''(t)+2\zeta_\theta\omega_\theta \theta'(t)+\omega_\theta^2\theta(t)]=A\theta'(t)+B\theta(t)\ , \end{equation} where $I$, $\zeta_\theta$, $\omega_\theta$ are, respectively, the associated inertia, damping ratio, and natural frequency. The r.h.s.\ of \eq{scann} represents the aerodynamic force and was postulated to depend linearly on both $\theta'$ and $\theta$, with the positive constants $A$ and $B$ depending on several parameters of the bridge. Since \eq{scann} may be seen as a first order linear system in two variables, it fails to fulfill both requirements of (GP). Hence, \eq{scann} is not suitable to describe the disordered behavior of a bridge.
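The linear character of \eq{scann} can be made explicit by computing its characteristic roots: once the aerodynamic coefficient $A$ exceeds the structural damping $2\zeta_\theta\omega_\theta I$, the roots cross into the right half plane and every solution is an exponential times a trigonometric function. A quick check with illustrative parameter values (not taken from \cite{scantom}):

```python
# Characteristic roots of eq. (scann), rewritten as
#   I*th'' + (2*zeta*omega*I - A)*th' + (omega^2*I - B)*th = 0.
# The parameter values used below are illustrative only.
import cmath

def scanlan_tomko_roots(I, zeta, omega, A, B):
    """Roots of I*r^2 + (2*zeta*omega*I - A)*r + (omega^2*I - B) = 0."""
    b = 2 * zeta * omega * I - A
    c = omega ** 2 * I - B
    disc = cmath.sqrt(b * b - 4 * I * c)
    return (-b + disc) / (2 * I), (-b - disc) / (2 * I)

# Structural damping alone (A = 0): roots with negative real part,
# i.e. decaying oscillations.
r_minus = scanlan_tomko_roots(1.0, 0.05, 1.0, 0.0, 0.0)
# Aerodynamic term dominating (A > 2*zeta*omega*I): roots with positive
# real part, i.e. oscillations growing like a positive exponential.
r_plus = scanlan_tomko_roots(1.0, 0.05, 1.0, 1.0, 0.0)
```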
And indeed, elementary calculus shows that if $A$ is sufficiently large, then solutions to \eq{scann} are positive exponentials times trigonometric functions; these do not exhibit a sudden appearance of self-excited oscillations but merely blow up in infinite time. In order to have a more reliable description of the bridge, in Section \ref{blup} we consider the fourth order nonlinear ODE $w''''+kw''+f(w)=0$ ($k\in\mathbb{R}$). We will see that solutions to this equation blow up in finite time with self-excited oscillations appearing suddenly, without any intermediate stage.\par That linearisation yields wrong models is also the opinion of McKenna \cite[p.4]{mckmonth} who comments on \eq{scann} by writing \begin{center} \begin{minipage}{162mm} {\em This is the point at which the discussion of torsional oscillation starts in the engineering literature.} \end{minipage} \end{center} He claims that the problem is in fact nonlinear and that \eq{scann} is obtained after an incorrect linearisation. McKenna concludes by noticing that \begin{center} \begin{minipage}{162mm} {\em Even in recent engineering literature ... this same mistake is reproduced.} \end{minipage} \end{center} The mistake claimed by McKenna is that the equations are often linearised by taking $\sin\theta=\theta$ and $\cos\theta=1$ also for large amplitude torsional oscillations $\theta$. The corresponding equation then becomes linear and the main torsional phenomenon disappears. Avoiding this crude approximation, but considering the cables and hangers as linear springs obeying ${\cal LHL}$, McKenna reaches an uncoupled second order system for the functions representing the vertical displacement $y$ of the barycenter $B$ of the cross section of the roadway and the deflection from horizontal $\theta$, see Figure \ref{9}. Here, $2\ell$ denotes the width of the roadway whereas $C_1$ and $C_2$ denote the two lateral hangers which have opposite extension behaviors.
\begin{figure} \caption{Vertical displacement and deflection of the cross section of the roadway.} \label{9} \end{figure} McKenna-Tuama \cite{mckO} suggest a slightly different model. They write: \begin{center} \begin{minipage}{162mm} {\em ... there should be some torsional forcing. Otherwise, there would be no input of energy to overcome the natural damping of the system ... we expect the bridge to behave like a stiff spring, with a restoring force that becomes somewhat superlinear.} \end{minipage} \end{center} We completely agree with this, see the conclusions in Section \ref{conclusions}. McKenna-Tuama end up with the following coupled second order system \neweq{coupled} \frac{m\ell^2}{3}\, \theta''=\ell\cos\theta\, \Big(f(y-\ell\sin\theta)-f(y+\ell\sin\theta)\Big)\ ,\quad m\, y''=-\Big(f(y-\ell\sin\theta)+f(y+\ell\sin\theta)\Big)\ , \end{equation} see again Figure \ref{9}. The delicate point is the choice of the superlinearity $f$, which \cite{mckO} take first as $f(s)=(s+1)^+-1$ and then as $f(s)=e^s-1$ in order to maintain the asymptotically linear behavior as $s\to0$. Using \eq{coupled}, \cite{mckmonth,mckO} were able to numerically replicate the phenomenon observed at the Tacoma Bridge, namely the sudden transition from vertical oscillations to torsional oscillations. They found that if the vertical motion was sufficiently large to induce brief slackening of the hangers, then numerical results highlighted a rapid transition to a torsional motion. Nevertheless, the physicists Green-Unruh \cite{green} believe that the hangers were not slack during the Tacoma Bridge oscillation. If this were true, then the piecewise linear forcing term $f$ would become totally linear. Moreover, commenting on the results in \cite{mckmonth,mckO}, McKenna-Moore \cite[p.460]{mckmoore} write that \begin{center} \begin{minipage}{162mm} {\em ...the range of parameters over which the transition from vertical to torsional motion was observed was physically unreasonable ...
the restoring force due to the cables was oversimplified ... it was necessary to impose small torsional forcing}. \end{minipage} \end{center} Summarising, \eq{coupled} seems to be the first model able to reproduce the behavior of the Tacoma Bridge, but it appears to need some improvements. First, one should avoid the possibility of a linear behavior of the hangers: the nonlinearity should appear before any slackening of the hangers. Second, the restoring force and the parameters involved should be chosen carefully. \begin{problem} {\em Try a doubly superlinear term $f$ in \eq{coupled}. For instance, take $f(s)=s+\varepsilon s^3$ with $\varepsilon>0$ small, so that \eq{coupled} becomes \neweq{mia} \frac{m\ell^2}{3}\, \theta''+2\ell^2\cos\theta\sin\theta\, \Big(1+3\varepsilon y^2+\varepsilon\ell^2\sin^2\theta\Big)=0\ ,\quad m\, y''+2\Big(1+3\varepsilon\ell^2\sin^2\theta\Big)y+2\varepsilon y^3=0\ . \end{equation} It appears challenging to determine some features of the solution $(y,\theta)$ to \eq{mia} and also to perform numerical experiments to see what kind of oscillations are displayed by the solutions.} $\Box$\end{problem} System \eq{coupled} is a $2\times2$ system which should be considered as a nonlinear fourth order model; therefore, it fulfills the necessary conditions of the general principle (GP). Another fourth order differential equation was suggested in \cite{lzmck,McKennaWalter,mck4} as a one-dimensional model for a suspension bridge, namely a beam of length $L$ suspended by hangers. When the hangers are stretched there is a restoring force which is proportional to the amount of stretching, according to ${\cal LHL}$. But when the beam moves in the opposite direction, there is no restoring force exerted on it.
Under suitable boundary conditions, if $u(x,t)$ denotes the vertical displacement of the beam in the downward direction at position $x$ and time $t$, the following nonlinear beam equation is derived \neweq{beam} u_{tt}+u_{xxxx}+\gamma u^+=W(x,t)\, ,\qquad x\in(0,L)\, ,\quad t>0\, , \end{equation} where $u^+=\max\{u,0\}$, $\gamma u^+$ represents the force due to the cables and hangers which are considered as a linear spring with a one-sided restoring force, and $W$ represents the forcing term acting on the bridge, including its own weight per unit length, the wind, the traffic loads, or other external sources. After some normalisation, by seeking traveling waves $u(x,t)=1+w(x-ct)$ to \eq{beam} and putting $k=c^2>0$, McKenna-Walter \cite{mck4} reach the following ODE \neweq{maineq} w''''(\tau)+kw''(\tau)+f(w(\tau))=0\qquad(x-ct=\tau\in\mathbb{R}) \end{equation} where $k\in(0,2)$ and $f(s)=(s+1)^+-1$. Subsequently, in order to maintain the same behavior but with a smooth nonlinearity, Chen-McKenna \cite{chenmck} suggest considering \eq{maineq} with $f(s)=e^s-1$. For later discussion, we notice that both these nonlinearities satisfy \neweq{f} f\in {\rm Lip}_{{\rm loc}}(\mathbb{R})\,,\quad f(s)\,s>0 \quad \forall s\in \mathbb{R}\setminus\{0\}. \end{equation} Hence, when $W\equiv0$, \eq{beam} is just a special case of the more general semilinear fourth order wave equation \neweq{beam2} u_{tt}+u_{xxxx}+f(u)=0\, ,\qquad x\in(0,L)\, ,\quad t>0\, , \end{equation} where the natural assumptions on $f$ are \eq{f} plus further conditions, according to the model considered. Traveling waves to \eq{beam2} solve \eq{maineq} with $k=c^2$ being the squared velocity of the wave. Recently, for $f(s)=(s+1)^+-1$ and its variants, Benci-Fortunato \cite{benci} proved the existence of special solutions to \eq{maineq} deduced from solitons of the beam equation \eq{beam2}.
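For numerical experiments with \eq{maineq}, a convenient correctness check is the conserved quantity $E=w'w'''-\frac{1}{2}(w'')^2+\frac{k}{2}(w')^2+F(w)$ with $F'=f$, which one verifies to be constant along solutions by differentiation. The following sketch (the RK4 scheme, step size, horizon and initial data are our own illustrative choices, not the setup of the cited works) integrates \eq{maineq} with the Chen-McKenna nonlinearity $f(s)=e^s-1$ and monitors $E$:

```python
# Sketch: integrating w'''' + k w'' + f(w) = 0 with f(s) = e^s - 1 by a
# hand-rolled RK4 scheme, monitoring the first integral
#   E = w' w''' - (w'')^2/2 + k (w')^2/2 + F(w),   F(s) = e^s - s - 1,
# which is conserved along exact solutions. All numerical parameters are
# illustrative choices.
import math

def rhs(s, k):
    w, w1, w2, w3 = s
    return (w1, w2, w3, -k * w2 - (math.exp(w) - 1.0))

def rk4_step(s, dt, k):
    add = lambda u, h, d: tuple(ui + h * di for ui, di in zip(u, d))
    k1 = rhs(s, k)
    k2 = rhs(add(s, dt / 2, k1), k)
    k3 = rhs(add(s, dt / 2, k2), k)
    k4 = rhs(add(s, dt, k3), k)
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def energy(s, k):
    w, w1, w2, w3 = s
    return w1 * w3 - w2**2 / 2 + k * w1**2 / 2 + (math.exp(w) - w - 1.0)

k, dt = 1.0, 1e-3
state = (0.1, 0.0, 0.0, 0.0)
E0 = energy(state, k)
for _ in range(3000):            # integrate up to t = 3, with small data
    state = rk4_step(state, dt, k)
print(E0, energy(state, k))      # the drift of E should be tiny
```

The drift of $E$ gives an a posteriori measure of the integration error, which becomes essential when chasing the blow up behavior discussed in Section \ref{blup}.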
\begin{problem} {\em It could be interesting to insert into the wave-type equation \eq{beam2} the term corresponding to the beam elongation, that is, $$\int_0^L\Big(\sqrt{1+u_x(x,t)^2}-1\Big)\, dx.$$ This would lead to a quasilinear equation such as $$u_{tt}+u_{xxxx}-\left(\frac{u_x}{\sqrt{1+u_x^2}}\right)_x+f(u)=0$$ with $f$ satisfying \eq{f}. What can be said about this equation? Does it admit oscillating solutions in a suitable sense? One should first consider the case of an unbounded beam ($x\in\mathbb{R}$) and then the case of a bounded beam ($x\in(0,L)$) complemented with some boundary conditions.} $\Box$\end{problem} Motivated by the fact that it appears unnatural to ignore the motion of the main sustaining cable, a slightly more sophisticated and complicated string-beam model was suggested by Lazer-McKenna \cite{mck1}. They treat the cable as a vibrating string, coupled with the vibrating beam of the roadway by piecewise linear springs that have a given spring constant $k$ if expanded, but no restoring force if compressed. The sustaining cable is subject to some forcing term such as the wind or the motions in the towers. This leads to the system $$\left\{\begin{array}{ll} v_{tt}-c_1v_{xx}+\delta_1v_t-k_1(u-v)^+=f(x,t)\, ,\qquad x\in(0,L)\, ,\quad t>0\, ,\\ u_{tt}+c_2u_{xxxx}+\delta_2u_t+k_2(u-v)^+=W_0\, ,\qquad x\in(0,L)\, ,\quad t>0\, , \end{array}\right.$$ where $v$ is the displacement from equilibrium of the cable and $u$ is the displacement of the beam, both measured in the downwards direction. The constants $c_1$ and $c_2$ represent the relative strengths of the cables and roadway respectively, whereas $k_1$ and $k_2$ are the spring constants and satisfy $k_2\ll k_1$. The two damping terms can possibly be set to $0$, while $f$ and $W_0$ are the forcing terms. 
We also refer to \cite{ahmed} for a study of the same problem in a rigorous functional analytic setting.\par Since the Tacoma Bridge collapse was mainly due to a wide torsional motion of the bridge, see \cite{tacoma}, the bridge cannot be considered as a one dimensional beam. In this respect, Rocard \cite[p.148]{rocard} states \begin{center} \begin{minipage}{162mm} {\em Conventional suspension bridges are fundamentally unstable in the wind because of the coupling effect introduced between bending and torsion by the aerodynamic forces of the lift.} \end{minipage} \end{center} Hence, if some model wishes to display instability of bridges, it should necessarily take into account more degrees of freedom than just a beam. In fact, to be exhaustive one should consider vertical oscillations $y$ of the roadway, its torsional angle $\theta$, and coupling with the two sustaining cables $u$ and $v$. This model was suggested by Matas-O\v cen\'a\v sek \cite{matas} who consider the hangers as linear springs and obtain a system of four equations; three of them are second order wave-type equations, while the last one is again a fourth order equation such as $$m\, y_{tt}+k\, y_{xxxx}+\delta\, y_t+E_1(y-u-\ell\sin\theta)+E_2(y-v+\ell\sin\theta)=W(x)+f(x,t)\ ;$$ we refer to $(SB_4)$ in \cite{drabek} for an interpretation of the parameters involved.\par In our opinion, any model which describes the bridge as a one dimensional beam is too simplistic, unless it somehow takes into account the possible appearance of a torsional motion.
In \cite{gazpav} it was suggested to maintain the one dimensional model provided one also allows displacements below the equilibrium position, with these displacements replacing the deflection from horizontal of the roadway of the bridge; in other words, \renewcommand{\arraystretch}{1.1} \neweq{w0} \begin{array}{c} \mbox{the unknown function $w$ represents the upwards vertical displacement when $w>0$}\\ \mbox{and the deflection from horizontal, computed in a suitable unit of measure, when $w<0$.} \end{array} \end{equation} \renewcommand{\arraystretch}{1.5} In this setting, instead of \eq{beam} one should consider the more general semilinear fourth order wave equation \eq{beam2} with $f$ satisfying \eq{f} plus further conditions which make $f(s)$ superlinear and unbounded when $s\to\pm\infty$; hence, ${\cal LHL}$ is dropped by allowing $f$ to be as close as one may wish to a linear function but eventually superlinear for large displacements. The superlinearity assumption is justified both by the observations in Section \ref{story} and by the fact that the farther the bridge is from the horizontal equilibrium position, the more relevant the action of the wind becomes, because the wind hits the roadway of the bridge transversally. If the bridge ever reached the limit vertical position, with the roadway torsionally rotated by a right angle, the wind would hit it orthogonally, that is, with full power.\par In this section we listed a number of attempts to model the mechanics of bridges by means of differential equations. The sources for this list are very heterogeneous. However, except for some possible small damping term, none of them contains odd derivatives. Moreover, none of them is acknowledged by the scientific community to describe perfectly the complex behavior of bridges. Some of them fail to satisfy the requirements of (GP) and, in our opinion, must be modified accordingly.
Some others seem to better describe the oscillating behavior of bridges but still need some improvements. \section{Blow up oscillating solutions to some fourth order differential equations}\label{blup} If the trivial solution to some dynamical system is unstable one may hope to magnify self-excitement phenomena through finite time blow up. In this section we survey and discuss several results about solutions to \eq{maineq} which blow up in finite time. Let us rewrite the equation with a different time variable, namely \neweq{maineq2} w''''(t)+kw''(t)+f(w(t))=0\qquad(t\in\mathbb{R})\ . \end{equation} We first recall the following results proved in \cite{bfgk}: \begin{theorem}\label{global} Let $k\in \mathbb{R}$ and assume that $f$ satisfies \eqref{f}.\par $(i)$ If a local solution $w$ to \eqref{maineq2} blows up at some finite $R\in\mathbb{R}$, then \neweq{pazzo} \liminf_{t\to R}w(t)=-\infty\qquad\mbox{and}\qquad\limsup_{t\to R}w(t)=+\infty\, . \end{equation} $(ii)$ If $f$ also satisfies \neweq{ff3} \limsup_{s\to+\infty}\frac{f(s)}{s}<+\infty\qquad\mbox{or}\qquad\limsup_{s\to-\infty}\frac{f(s)}{s}<+\infty, \end{equation} then any local solution to \eqref{maineq2} exists for all $t\in\mathbb{R}$. \end{theorem} If both the conditions in \eq{ff3} are satisfied then global existence follows from classical theory of ODE's; but \eq{ff3} merely requires that $f$ is ``one-sided at most linear'' so that statement $(ii)$ is far from being trivial and, as shown in \cite{gazpav}, it does not hold for equations of order at most 3. On the other hand, Theorem \ref{global} $(i)$ states that, under the sole assumption \eq{f}, the only way that finite time blow up can occur is with ``wide and thinning oscillations'' of the solution $w$; again, in \cite{gazpav} it was shown that this kind of blow up is a phenomenon typical of at least fourth order problems such as \eq{maineq2} since it does not occur in related lower order equations. 
Note that assumption \eq{ff3} includes, in particular, the cases where $f$ is either concave or convex.\par Theorem \ref{global} does not guarantee that the blow up described by \eq{pazzo} indeed occurs. For this reason, we assume further that \neweq{fmono} f\in {\rm Lip}_{{\rm loc}}(\mathbb{R})\cap C^2(\mathbb{R}\setminus\{0\})\ ,\quad f'(s)\ge0\quad\forall s\in\mathbb{R}\ ,\quad\liminf_{s\to\pm\infty}|f''(s)|>0 \end{equation} and the growth conditions \neweq{f2} \exists p>q\ge1,\ \alpha\ge0,\ 0<\rho\le \beta,\quad\mbox{s.t.}\quad\rho|s|^{p+1}\le f(s)s\le\alpha|s|^{q+1}+\beta|s|^{p+1}\quad\forall s\in\mathbb{R}\ . \end{equation} Notice that \eq{fmono}-\eq{f2} strengthen \eq{f}. In \cite{gazpav3} the following sufficient condition for the finite time blow up of local solutions to \eq{maineq2} was proved. \begin{theorem}\label{blowup} Let $k\le0$, $p>q\ge1$, $\alpha\ge0$, and assume that $f$ satisfies \eqref{fmono} and \eqref{f2}. Assume that $w=w(t)$ is a local solution to \eqref{maineq2} in a neighborhood of $t=0$ which satisfies \neweq{tech} w'(0)w''(0)-w(0)w'''(0)-kw(0)w'(0)>0\, . \end{equation} Then, $w$ blows up in finite time for $t>0$, that is, there exists $R\in(0,+\infty)$ such that \eqref{pazzo} holds. \end{theorem} Since self-excited oscillations such as \eq{pazzo} should be expected in any equation attempting to model suspension bridges, linear models should be avoided. Unfortunately, even if the solutions to \eq{maineq2} display these oscillations, they cannot be prevented since they arise suddenly after a long time of apparent calm. In Figure \ref{duemila}, we display the plot of a solution to \eq{maineq2}. \begin{figure} \caption{Solution to \eq{maineq2}.} \label{duemila} \end{figure} It can be observed that the solution has oscillations with increasing amplitude and rapidly decreasing ``nonlinear frequency''; numerically, the blow up seems to occur at $t=8.164$. Even more impressive appears the plot in Figure \ref{mille}.
\begin{figure} \caption{Solution to \eq{maineq2}.} \label{mille} \end{figure} Here the solution has ``almost regular'' oscillations between $-1$ and $+1$ for $t\in[0,80]$. Then the amplitude of oscillations nearly doubles in the interval $[80,93]$ and, suddenly, it violently amplifies after $t=96.5$ until the blow up which seems to occur only slightly later at $t=96.59$. We also refer to \cite{gazpav,gazpav2,gazpav3} for further plots.\par We refer to \cite{gazpav,gazpav3} for numerical results and plots of solutions to \eq{maineq2} with nonlinearities $f=f(s)$ having different growths as $s\to\pm\infty$. In such cases, the solution still blows up according to \eq{pazzo} but, although its ``limsup'' and ``liminf'' are respectively $+\infty$ and $-\infty$, the divergence occurs at different rates. We represent this qualitative behavior in Figure \ref{blow}. \begin{figure} \caption{Qualitative blow up for solutions to \eq{maineq2}.} \label{blow} \end{figure} Traveling waves to \eq{beam2} which propagate at some velocity $c>0$, depending on the elasticity of the material of the beam, solve \eq{maineq2} with $k=c^2>0$. Further numerical results obtained in \cite{gazpav,gazpav3} suggest that a statement similar to Theorem \ref{blowup} also holds for $k>0$ and, as expected, that the blow up time $R$ is decreasing with respect to the initial height $w(0)$ and increasing with respect to $k$. Since $k=c^2$ and $c$ represents the velocity of the traveling wave, this means that the time of blow up is an increasing function of the velocity $c$. In turn, since the velocity of the traveling wave depends on the elasticity of the material used to construct the bridge (larger $c$ means less elastic), this tells us that the stiffer the bridge, the longer it will survive exterior forces such as the wind and/or traffic loads. \begin{problem} {\em Prove Theorem \ref{blowup} when $k>0$. This would allow one to show that traveling waves to \eq{beam2} blow up in finite time.
Numerical results in \cite{gazpav,gazpav3} suggest that a result similar to Theorem \ref{blowup} also holds for $k>0$.} $\Box$\end{problem} \begin{problem} {\em Prove that the blow up time of solutions to \eq{maineq2} is increasing with respect to $k\in\mathbb{R}$. The interest of an analytical proof of this fact lies in the important role played by $k$ within the model.} $\Box$\end{problem} \begin{problem} {\em The blow up time $R$ of solutions to \eq{maineq2} is the life expectancy of the oscillating bridge. Provide an estimate of $R$ in terms of $f$ and of the initial data.} $\Box$\end{problem} \begin{problem} {\em Condition \eq{f2} is a superlinearity assumption which requires that $f$ is bounded both from above and below by the same power $p>1$. Prove Theorem \ref{blowup} for more general kinds of superlinear functions $f$.} $\Box$\end{problem} \begin{problem} {\em Can assumption \eq{tech} be relaxed? Of course, it cannot be completely removed since the trivial solution $w(t)\equiv0$ is globally defined, that is, $R=+\infty$. Numerical experiments in \cite{gazpav,gazpav3} could not detect any nontrivial global solution to \eq{maineq2}.} $\Box$\end{problem} \begin{problem} {\em Study \eq{maineq2} with a damping term: $w''''(t)+kw''(t)+\delta w'(t)+f(w(t))=0$ for some $\delta>0$. Study the competition between the damping term $\delta w'$ and the nonlinear self-exciting term $f(w)$.} $\Box$\end{problem} Note that Theorems \ref{global} and \ref{blowup} ensure that there exists an increasing sequence $\{z_j\}_{j\in\mathbb{N}}$ such that:\par $(i)$ $z_j\nearrow R$ as $j\to\infty$;\par $(ii)$ $w(z_j)=0$ and $w$ has constant sign in $(z_j,z_{j+1})$ for all $j\in\mathbb{N}$.\par It is also interesting to compare the rate of blow up of the displacement and of the acceleration on these intervals. By slightly modifying the proof of \cite[Theorem 3]{gazpav3} one can obtain the following result which holds for any $k\in\mathbb{R}$.
\begin{theorem}\label{asymptotics} Let $k\in \mathbb{R}$, $p>q\ge1$, $\alpha\ge0$, and assume that $f$ satisfies \eqref{fmono} and \eqref{f2}. Assume that $w=w(t)$ is a local solution to $$ w''''(t)+kw''(t)+f(w(t))=0\qquad(t\in\mathbb{R}) $$ which blows up in finite time as $t\nearrow R<+\infty$. Denote by $\{z_j\}$ the increasing sequence of zeros of $w$ such that $z_j\nearrow R$ as $j\to+\infty$. Then \neweq{estimate} \int_{z_j}^{z_{j+1}}w(t)^2\, dt\ \ll\ \int_{z_j}^{z_{j+1}}w''(t)^2\, dt\ ,\qquad \int_{z_j}^{z_{j+1}}w'(t)^2\, dt\ \ll\ \int_{z_j}^{z_{j+1}}w''(t)^2\, dt \end{equation} as $j\to\infty$. Here, $g(j)\ll\psi(j)$ means that $g(j)/\psi(j)\to0$ as $j\to\infty$. \end{theorem} The estimate \eq{estimate}, clearly due to the superlinear term, has a simple interpretation in terms of comparison between blowing up energies, see Section \ref{energies}. \begin{remark}\label{pde} {\em Equation \eq{maineq2} also arises in several different contexts, see the book by Peletier-Troy \cite{pt} where one can find some other physical models, a survey of existing results, and further references. Moreover, besides \eq{beam2}, \eq{maineq2} may also be fruitfully used to study some other partial differential equations. For instance, one can consider nonlinear elliptic equations such as $$ \Delta^2u+e^u=\frac{1}{|x|^4}\qquad\mbox{in }\mathbb{R}^4\setminus\{0\}\ , $$ \neweq{critical} \Delta^2 u+|u|^{8/(n-4)}u=0\mbox{ in }\mathbb{R}^n\ (n\ge5),\qquad\Delta\Big(|x|^2\Delta u\Big)+|x|^2|u|^{8/(n-2)}u=0\mbox{ in }\mathbb{R}^n\ (n\ge3); \end{equation} it is known (see, e.g.\ \cite{gazgruswe}) that the Green function for some fourth order elliptic problems displays oscillations, differently from second order problems. Furthermore, one can also consider the semilinear parabolic equation $$u_t+\Delta ^2u=|u|^{p-1}u\mbox{ in }\mathbb{R}_{+}^{n+1}\ ,\qquad u(x,0)=u_0(x)\mbox{ in }\mathbb{R}^{n}$$ where $p>1+4/n$ and $u_0$ satisfies suitable assumptions. 
It is shown in \cite{fgg,gg2} that the linear biharmonic heat operator has an ``eventual local positivity'' property: for positive initial data $u_0$ the solution to the linear problem with no source is eventually positive on compact subsets of $\mathbb{R}^n$ but negativity can appear at any time far away from the origin. This phenomenon is due to the sign changing properties, with infinite oscillations, of the biharmonic heat kernels. We also refer to \cite{bfgk,gazpav3} for some results about the above equations and for the explanation of how they can be reduced to \eq{maineq2} and, hence, how they display self-excited oscillations. $\Box$} \end{remark} \begin{problem} {\em For any $q>0$ and parameters $a,b,k\in\mathbb{R}$, $c\ge0$, study the equation \neweq{subcrit} w''''(t)+aw'''(t)+kw''(t)+bw'(t)+cw(t)+|w(t)|^qw(t)=0\qquad(t\in\mathbb{R})\ . \end{equation} Any reader who is familiar with the second order Sobolev space $H^2$ recognises the critical exponent in the first equation in \eq{critical}. In view of Liouville-type results in \cite{ambrosio} when $q\le8/(n-4)$, it would be interesting to study the equation $\Delta^2 u+|u|^qu=0$ with the same technique. The radial form of this equation may be written as \eq{maineq2} only when $q=8/(n-4)$ since for other values of $q$ the transformation in \cite{GG} gives rise to the appearance of first and third order derivatives as in \eq{subcrit}: this motivates \eq{subcrit}. The values of the parameters corresponding to the equation $\Delta^2 u+|u|^qu=0$ can be found in \cite{GG}.} $\Box$\end{problem} Our target is now to reproduce the self-excited oscillations found in Theorem \ref{blowup} in a suitable second order system. Replace $\sin\theta\cong\theta$ and $\cos\theta\cong1$, and put $x=\ell\theta$. After these transformations, the McKenna system \eq{coupled} reads \neweq{truesystem} \left\{\begin{array}{ll} x''+\omega^2 f(y+x)-\omega^2 f(y-x)=0\\ y''+f(y+x)+f(y-x)=0\ . 
\end{array}\right.\end{equation} We further modify \eq{truesystem}; for suitable values of the parameters $\beta$ and $\delta$, we consider the system \neweq{miosystxy} \left\{\begin{array}{ll} x''-f(y-x)+\beta(y+x)=0\\ y''-f(y-x)+\delta(y+x)=0 \end{array}\right. \end{equation} which differs from \eq{truesystem} in two respects: the minus sign in front of $f(y-x)$ in the second equation and the other restoring force $f(y+x)$ being replaced by a linear term. To \eq{miosystxy} we associate the initial value problem \neweq{cauchy} x(0)=x_0\, ,\ x'(0)=x_1\, ,\ y(0)=y_0\, ,\ y'(0)=y_1\ . \end{equation} The following statement holds. \begin{theorem}\label{oscill} Assume that $\beta<\delta\le-\beta$ (so that $\beta<0$). Assume also that $f(s)=\sigma s+cs^2+ds^3$ with $d>0$ and $c^2\le2d\sigma$. Let $(x_0,y_0,x_1,y_1)\in\mathbb{R}^4$ satisfy \neweq{initial} (3\beta-\delta)x_0y_1+(3\delta-\beta)x_1y_0>(\beta+\delta)(x_0x_1+y_0y_1)\ . \end{equation} If $(x,y)$ is a local solution to \eqref{miosystxy}-\eqref{cauchy} in a neighborhood of $t=0$, then $(x,y)$ blows up in finite time for $t>0$ with self-excited oscillations, that is, there exists $R\in(0,+\infty)$ such that $$\liminf_{t\to R}x(t)=\liminf_{t\to R}y(t)=-\infty\qquad\mbox{and}\qquad\limsup_{t\to R}x(t)=\limsup_{t\to R}y(t)=+\infty\, .$$ \end{theorem} \begin{proof} After performing the change of variables \eq{change}, system \eq{miosystxy} becomes $$w''+(\delta-\beta)z=0\ ,\qquad z''-2f(w)+(\beta+\delta)z=0$$ which may be rewritten as a single fourth order equation \neweq{mia4} w''''(t)+(\beta+\delta)w''(t)+2(\delta-\beta)f(w(t))=0\ . \end{equation} Assumption \eq{initial} reads $$w'(0)w''(0)-w(0)w'''(0)-(\beta+\delta)w(0)w'(0)>0\, .$$ Furthermore, in view of the above assumptions, $f$ satisfies \eq{fmono}-\eq{f2} with $\rho=d/2$, $p=3$, $\alpha=2\sigma$, $q=1$, $\beta=3d$. 
Hence, Theorem \ref{blowup} states that $w$ blows up in finite time for $t>0$ and that there exists $R\in(0,+\infty)$ such that \neweq{puzzo} \liminf_{t\to R}w(t)=-\infty\qquad\mbox{and}\qquad\limsup_{t\to R}w(t)=+\infty\, . \end{equation} Next, we remark that \eq{mia4} admits a first integral, namely \begin{eqnarray} E(t) &:=& \frac{\beta+\delta}{2}\,w'(t)^2+w'(t)w'''(t)+2(\delta-\beta)F(w(t))-\frac{1}{2}\,w''(t)^2 \notag \\ \ &=& \frac{\beta+\delta}{2}\,w'(t)^2+(\beta-\delta)w'(t)z'(t)+2(\delta-\beta)F(w(t))-\frac{(\beta-\delta)^2}{2}\,z(t)^2\equiv\overline{E}\ , \label{E} \end{eqnarray} for some constant $\overline{E}$. By \eq{puzzo} there exists an increasing sequence $m_j\to R$ of local maxima of $w$ such that $$z(m_j)=\frac{w''(m_j)}{\beta-\delta}\ge0\ ,\quad w'(m_j)=0\ ,\quad w(m_j)\to+\infty\mbox{ as }j\to\infty\ .$$ By plugging $m_j$ into the first integral \eq{E} we obtain $$\overline{E}=E(m_j)=2(\delta-\beta)F(w(m_j))-\frac{(\beta-\delta)^2}{2}\,z(m_j)^2$$ which proves that $z(m_j)\to+\infty$ as $j\to+\infty$. We may proceed similarly in order to show that $z(\mu_j)\to-\infty$ on a sequence $\{\mu_j\}$ of local minima of $w$. Therefore, we have $$ \liminf_{t\to R}z(t)=-\infty\qquad\mbox{and}\qquad\limsup_{t\to R}z(t)=+\infty\, . $$ Assume for contradiction that there exists $K\in\mathbb{R}$ such that $x(t)\le K$ for all $t<R$. Then, recalling \eq{change}, on the above sequence $\{m_j\}$ of local maxima for $w$, we would have $y(m_j)-K\ge y(m_j)-x(m_j)=w(m_j)\to+\infty$ which is incompatible with \eq{E} since $$2(\delta-\beta)F(y(m_j)-x(m_j))-\frac{(\beta-\delta)^2}{2}\, (y(m_j)+x(m_j))^2\equiv\overline{E}$$ and $F$ has growth of order 4 with respect to its divergent argument. Similarly, by arguing on the sequence $\{\mu_j\}$, we rule out the possibility that there exists $K\in\mathbb{R}$ such that $x(t)\ge K$ for all $t<R$. Finally, by changing the role of $x$ and $y$ we find that also $y(t)$ is unbounded both from above and below as $t\to R$.
This completes the proof.\end{proof} \begin{remark} {\em Numerical results in \cite{gazpav3} suggest that the assumption $\delta\le-\beta$ is not necessary to obtain \eq{puzzo}. So, most probably, Theorem \ref{oscill} and the results of this section hold true also without this assumption. $\Box$}\end{remark} A special case of a function $f$ satisfying the assumptions of Theorem \ref{oscill} is $f_\varepsilon(s)=s+\varepsilon s^3$ for any $\varepsilon>0$. We wish to study the situation when the problem tends to become linear, that is, when $\varepsilon\to0$. Plugging such $f_\varepsilon$ into \eq{miosystxy} gives the system \neweq{fe} \left\{\begin{array}{ll} x''+(\beta+1)x+(\beta-1)y+\varepsilon(x-y)^3=0\\ y''+(\delta+1)x+(\delta-1)y+\varepsilon(x-y)^3=0 \end{array}\right. \end{equation} so that the limit linear problem obtained for $\varepsilon=0$ reads \neweq{f0} \left\{\begin{array}{ll} x''+(\beta+1)x+(\beta-1)y=0\\ y''+(\delta+1)x+(\delta-1)y=0\ . \end{array}\right. \end{equation} The theory of linear systems tells us that the shape of the solutions to \eq{f0} depends on the signs of the parameters $$A=\beta+\delta\ ,\quad B=2(\delta-\beta)\ ,\quad \Delta=(\beta+\delta)^2+8(\beta-\delta)\ .$$ Under the same assumptions as in Theorem \ref{oscill}, for \eq{f0} we have $A\le0$ and $B>0$ but the sign of $\Delta$ is not known a priori and three different cases may occur.\par $\bullet$ If $\Delta<0$ (a case which also includes $A=0$), then we have exponentials times trigonometric functions, so either we have self-excited oscillations whose amplitude increases as $t\to\infty$ or we have damped oscillations which tend to vanish as $t\to\infty$.
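This case analysis can be checked mechanically. Via the change of variables $w=y-x$, $z=y+x$ used in the proof above, the linear system \eq{f0} reduces to $w''''+Aw''+Bw=0$, so everything is read off the roots of $\lambda^4+A\lambda^2+B=0$. A small sketch (the sample parameter values are our own):

```python
# Sketch: classifying the limit linear system (f0) through the roots of the
# characteristic polynomial lambda^4 + A*lambda^2 + B = 0 of its fourth order
# reduction, with A = beta + delta, B = 2*(delta - beta), Delta = A^2 - 4*B.
# The sample parameter values are illustrative.
import cmath

def classify(beta, delta):
    A = beta + delta
    B = 2.0 * (delta - beta)
    Delta = A * A - 4.0 * B                 # = (beta+delta)^2 + 8*(beta-delta)
    disc = cmath.sqrt(complex(Delta))
    mus = ((-A + disc) / 2.0, (-A - disc) / 2.0)   # mu = lambda^2
    lams = [sgn * cmath.sqrt(mu) for mu in mus for sgn in (1, -1)]
    oscillatory = any(abs(l.imag) > 1e-12 for l in lams)   # trigonometric factor
    growing = any(l.real > 1e-12 for l in lams)            # growing exponential
    return Delta, oscillatory, growing

print(classify(-1.0, 1.0))   # Delta = -16 < 0: exponentials times trigonometric
print(classify(-3.0, -2.0))  # Delta = 17 > 0: purely exponential behavior
```

For $\delta=-\beta=1$ one finds $\Delta=-16<0$ with roots $\pm1\pm i$, consistent with solutions of the form $e^{\pm t}$ times trigonometric functions.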
Consider the case $\delta=-\beta=1$ and $(x_0,y_0,x_1,y_1)=(1,0,1,-1)$; then \eq{initial} is fulfilled and Theorem \ref{oscill} yields \begin{corollary} For any $\varepsilon>0$ there exists $R_\varepsilon>0$ such that the solution $(x^\varepsilon,y^\varepsilon)$ to the Cauchy problem \neweq{feps} \left\{\begin{array}{ll} x''-2y+\varepsilon(x-y)^3=0\\ y''+2x+\varepsilon(x-y)^3=0\\ x(0)=1,\ y(0)=0,\ x'(0)=1,\ y'(0)=-1 \end{array}\right. \end{equation} blows up as $t\to R_\varepsilon$ and satisfies $$ \liminf_{t\to R_\varepsilon}x^\varepsilon(t)=\liminf_{t\to R_\varepsilon}y^\varepsilon(t)=-\infty\qquad\mbox{and}\qquad\limsup_{t\to R_\varepsilon}x^\varepsilon(t)=\limsup_{t\to R_\varepsilon}y^\varepsilon(t)=+\infty\, . $$ \end{corollary} A natural conjecture, supported by numerical experiments, is that $R_\varepsilon\to\infty$ as $\varepsilon\to0$. For several $\varepsilon>0$, we plotted the solution to \eq{feps} and the pictures all looked like Figure \ref{plot1}. \begin{figure} \caption{The solutions $x^\varepsilon$ (black) and $y^\varepsilon$ (green) to \eq{feps}.} \label{plot1} \end{figure} When $\varepsilon=0.1$ the blow up seems to occur at $R_\varepsilon=4.041$. Notice that $x^\varepsilon$ and $y^\varepsilon$ ``tend to become the same'': in the third picture they are indistinguishable. After some time, when the wide oscillations amplify, $x^\varepsilon$ and $y^\varepsilon$ move almost synchronously. When $\varepsilon=0$, the solution to \eq{feps} is explicitly given by $x^0(t)=e^t\cos(t)$ and $y^0(t)=-e^t\sin(t)$, thereby displaying oscillations blowing up in infinite time similar to those observed for \eq{scann}.\par If we replace the initial conditions in \eq{feps} with $$x(0)=1,\ y(0)=0,\ x'(0)=-1,\ y'(0)=1$$ then \eq{initial} is not fulfilled. However, for any $\varepsilon>0$ that we tested, the corresponding numerical solutions looked like the one in Figure \ref{plot1}.
In this case, the limit problem with $\varepsilon=0$ admits as solutions $x^0(t)=e^{-t}\cos(t)$ and $y^0(t)=e^{-t}\sin(t)$ which do exhibit oscillations but, now, strongly damped.\par Let us also consider the two remaining limit systems which, however, do not display oscillations.\par $\bullet$ If $\Delta=0$, since $A\le0$, there are no trigonometric functions in the limit case \eq{f0}.\par $\bullet$ If $\Delta>0$, then necessarily $A<0$ since $B>0$, and hence only exponential functions are involved: the solution to \eq{f0} may blow up in infinite time or vanish at infinity.\par The above results explain why we believe that \eq{scann} is not suitable to display self-excited oscillations as the ones which appeared for the TNB. Since it has only two degrees of freedom, it fails to consider both vertical and torsional oscillations which, on the contrary, are visible in the McKenna-type system \eq{miosystxy}. We have seen in Theorem \ref{oscill} that destructive self-excited oscillations may blow up in finite time, something very similar to what may be observed in \cite{tacoma}. Hence, \eq{miosystxy} shows more realistic self-excited oscillations than \eq{scann}.\par Although the blow up occurs at $t=4.04$, the solution plotted in Figure \ref{plot1} is relatively small until $t=3.98$. 
This, together with the behavior displayed in Figures \ref{duemila} and \ref{mille}, allows us to conclude that \begin{center} \mbox{\bf in nonlinear systems, self-excited oscillations appear suddenly, without any intermediate stage.} \end{center} The material presented in this section also enables us to conclude that \begin{center} \mbox{\bf the linear case cannot be seen as a limit situation of the nonlinear case} \end{center} since the behavior of the solution to \eq{f0} depends on $\beta$, $\delta$, and on the initial conditions, while nothing can be deduced from the sequence of solutions $(x^\varepsilon,y^\varepsilon)$ to problem \eq{fe} as $\varepsilon\to0$ because these solutions all behave similarly, independently of $\beta$ and $\delta$. Furthermore, the solutions to the limit problem \eq{f0} may or may not exhibit oscillations and, if they do, these oscillations may be either of increasing or of vanishing amplitude as $t\to+\infty$. All this shows that linearisation may give misleading and unpredictable answers.\par In this section we have seen that the blow up of solutions to \eq{maineq2} and to \eq{miosystxy} occurs with wide oscillations after a long time of apparent calm. Hence, the solution does not display any visible behavior which might warn of imminent danger. The reason is that the second derivatives of the solution to \eq{maineq2} blow up at a higher rate, see Theorem \ref{asymptotics}, and second derivatives are not visible by simply looking at the graph. Wide oscillations after a long time of apparent calm suggest that some hidden energy is present in the system. Finally, we have seen that self-excited oscillating blow up also appears for a wide class of superlinear fourth order differential equations, including PDE's. \section{Affording an explanation in terms of energies}\label{afford} \subsection{Energies involved}\label{energies} The most important tools for describing any structure are the energies involved.
A precise description of all the energies involved would lead to perfect models and would provide all the information needed for correct designs. Unfortunately, bridges, as well as many other structures, do not allow simple characterisations of all the energies present in the structure and, perhaps, not all the energies actually present have been detected so far. Hence, it appears impossible to make a precise list of all the energies involved in the complex behavior of a bridge.\par The kinetic energy is the simplest energy to describe. If $v(x,t)$ denotes the vertical displacement at $x\in\Omega$ and at $t>0$, then the total kinetic energy at time $t$ is given by $$\frac{m}{2}\int_\Omega v_t(x,t)^2\, dx$$ where $m$ is the mass and $\Omega$ can be either a segment (beam model) or a thin rectangle (plate model). This energy gives rise to the term $mv_{tt}$ in the corresponding Euler-Lagrange equation, see Section \ref{models}.\par One should then consider the potential energy, which is more complicated. From \cite[pp.75-76]{bleich}, we quote \begin{center} \begin{minipage}{162mm} {\em The potential energy is stored partly in the stiffening frame in the form of elastic energy due to bending and partly in the cable in the form of elastic stress-strain energy and in the form of an increased gravity potential.} \end{minipage} \end{center} Hence, an important role is played by stored energy. Part of the stored energy is potential energy, which is quite simple to determine: in order to avoid confusion, in the sequel we call potential energy only the energy due to gravity which, in the case of a bridge, is computed in terms of the vertical displacement $v$. However, the dominating part of the stored energy in a bridge is its elastic energy.\par The distinction between elastic and potential stored energies, which in our opinion is essential, is not highlighted with enough care in \cite{bleich} or in any subsequent treatise on suspension bridges.
A further criticism about \cite{bleich} is that it often makes use of ${\cal LHL}$, see \cite[p.214]{bleich}. Apart from these two weak points, \cite{bleich} makes a quite careful quantitative analysis of the energies involved. In particular, concerning the elastic energy, the contribution of each component of the bridge is taken into account in \cite{bleich}: the chords (p.145), the diagonals (p.146), the cables (p.147), the towers (pp.164-168), as well as quantitative design factors (pp.98-103).\par A detailed energy method is also introduced at p.74, as a practical tool to determine the modes of vibrations and natural frequencies of suspension bridges: the energies considered are expressed in terms of the amplitude of the oscillation $\eta=\eta(x)$ and therefore do not depend on time. As already mentioned, the nonlocal term in \eq{eqqq} represents the increment of energy due to the external wind during a period of time. Recalling that $v(x,t)=\eta(x)\sin(\omega t)$, \cite[p.28]{bleich} represents the net energy input per cycle by \neweq{dissipation} A:=\frac{w^2}{H_w^2}\, \frac{EA}{L}\int_0^L \eta(z)\, dz-C\int_0^L \eta(z)^2\, dz \end{equation} where $L$ is the length of the beam and $C>0$ is a constant depending on the frequency of oscillation and on the damping coefficient, so that the second term is the quantity of energy dissipated as heat: mechanical hysteresis, solid friction damping, aerodynamic damping, etc. It is explained in Figure 13 in \cite[p.33]{bleich} that \begin{center} \begin{minipage}{162mm} {\em the kinetic energy will continue to build up and therefore the amplitude will continue to increase until $A=0$.} \end{minipage} \end{center} Hence, the larger the energy input $\int_0^L \eta$ due to the wind, the larger the displacement $v$ must become before the kinetic energy stops building up.
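To make \eq{dissipation} concrete, the following sketch evaluates the net energy input $A$ for the test amplitude $\eta(z)=\eta_0\sin(\pi z/L)$. All numerical constants below are hypothetical placeholders (they are not taken from \cite{bleich}), chosen only to display the mechanism ``the amplitude increases until $A=0$'':

```python
# Illustrative evaluation of the net energy input per cycle (dissipation):
#   A = K * \int_0^L eta - C * \int_0^L eta^2,   K := (w^2/H_w^2)(E A / L).
# K, C, L below are hypothetical constants, not values from the literature.
import math

def net_input(eta0, K=1.0, C=0.5, L=100.0, n=10_000):
    # trapezoidal quadrature for eta(z) = eta0 * sin(pi z / L)
    h = L / n
    I1 = I2 = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        e = eta0 * math.sin(math.pi * i * h / L)
        I1 += w * h * e
        I2 += w * h * e * e
    return K * I1 - C * I2

# Closed forms: \int eta = 2 eta0 L / pi and \int eta^2 = eta0^2 L / 2,
# so A = 0 exactly at the limiting amplitude eta0* = 4K / (pi C).
eta_star = 4 * 1.0 / (math.pi * 0.5)
```

With these placeholder constants, $A>0$ for small amplitudes, $A<0$ for large ones, and $A=0$ precisely at $\eta_0^*=4K/(\pi C)$, which is the amplitude at which, in Bleich's description, the kinetic energy stops building up.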
This is related to \cite[pp.241-242]{bleich}, where an attempt is made \begin{center} \begin{minipage}{162mm} {\em to approach by rational analysis the problem proper of self-excitation of vibrations in truss-stiffened suspension bridges. ... The theory discloses the peculiar mechanism of catastrophic self-excitation in such bridges...} \end{minipage} \end{center} The word ``self-excitation'' suggests behaviors similar to \eq{pazzo}. As shown in \cite{gazpav3}, the oscillating blow up of solutions described by \eq{pazzo} occurs in many fourth order differential equations, including PDE's, see also Remark \ref{pde}, whereas it does not occur in lower order equations. But these oscillations, and the energy generating them, are somehow hidden also in fourth order equations; let us explain qualitatively what we mean by this. Engineers usually say that {\em the wind feeds into the structure an increment of energy} (see \cite[p.28]{bleich}) and that {\em the bridge eats energy} but we think it is more appropriate to say that {\bf the bridge ruminates energy}. That is, first the bridge stores the energy due to prolonged external sources. Part of this stored energy is indeed dissipated (eaten) by the structural damping of the bridge. From \cite[p.211]{bleich}, we quote \begin{center} \begin{minipage}{162mm} {\em Damping is dissipation of energy imparted to a vibrating structure by an exciting force, whereby a portion of the external energy is transformed into molecular energy.} \end{minipage} \end{center} Every bridge has its own damping capacity defined as the ratio between the energy dissipated in one cycle of oscillation and the maximum energy of that cycle. The damping capacity of a bridge depends on several components such as elastic hysteresis of the structural material and friction between different components of the structure, see \cite[p.212]{bleich}. A second part of the stored energy becomes potential energy if the bridge is above its equilibrium position. 
The remaining part of the stored energy, namely the part exceeding the damping capacity plus the potential energy, is stored as inner elastic energy; only when this stored elastic energy reaches a critical threshold (saturation) does the bridge start ``ruminating'' energy and give rise to torsional or more complicated oscillations.\par When \eq{pazzo} occurs, the estimate \eq{estimate} shows that $|w''(t)|$ blows up at a higher rate when compared to $|w(t)|$ and $|w'(t)|$. Although any student is able to see whether a function or its first derivative is large just by looking at the graph, most people are unable to see whether the second derivative is large. Roughly speaking, the term $\int w''(t)^2$ measures the elastic energy, the term $\int w'(t)^2$ measures the kinetic energy, whereas $\int w(t)^2$ is a measure of the potential energy due to gravity. Hence, \eq{estimate} states that the elastic energy has a higher rate of blow up when compared to the kinetic and potential energies; equivalently, we can say that both the potential energy, described by $|w|$, and the kinetic energy, described by $|w'|$, are negligible with respect to the elastic energy, described by $|w''|$. But since large $|w''(t)|$ cannot be easily detected, the bridge may have large elastic energy, and hence large total energy, without revealing it. Since $|w(t)|$ and $|w'(t)|$ blow up later than $|w''(t)|$, the total energy can be very large without being visible; this is what we mean by hidden elastic energy. This interpretation agrees well with the numerical results described in Section \ref{blup}, which show that blow up in finite time for \eq{maineq2} occurs after a long waiting time of apparent calm followed by sudden wide oscillations. Since the apparent calm suggests low energy whereas wide oscillations suggest high energy, this means that some hidden energy is indeed present in the bridge. And the stored elastic energy, in all of its forms, seems the right candidate to be the hidden energy.
Summarising, the large elastic energy $|w''(t)|$ hides the blow up of $|w(t)|$ for some time. Then, with some delay but suddenly, $|w(t)|$ also becomes large. If one could find a simple way to measure $|w''(t)|$, which is an approximation of the elastic energy, or, even better, $|w''(t)|/(1+w'(t)^2)^{3/2}$, which is the curvature, then one would have some time to prevent possible collapses.\par A flavor of what we call hidden energy was already present in \cite{bleich}, where the energy storage capacity of a bridge is often discussed, see pp.34, 104, 160, 164 for the storage capacity of the different vibrating components of the bridge. Moreover, the displayed comment just before \eq{coupled} shows that McKenna-Tuama \cite{mckO} also had the feeling that some energy could be hidden. \subsection{Energy balance}\label{energybalance} As far as we are aware, the first attempt at a precise quantitative energy balance in a beam representing a suspension bridge was made in \cite[Chapter VII]{tac2}. Although all the computations are performed with precise values of the constants, in our opinion the analysis there is not complete since it does not distinguish between different kinds of potential energies; what is called potential energy is just the energy stored in bending a differential length of the beam.\par A better attempt is made in \cite[p.107]{bleich} where the plot displays the behavior of the stored energies: the potential energy due to gravity and the elastic energies of the cables and of the stiffening frame. Moreover, the important notion of flutter speed is first used. Rocard \cite[p.185]{rocard} attributes to Bleich \cite{bleichsolo} \begin{center} \begin{minipage}{162mm} {\em ... to have pointed out the connection with the flutter speed of aircraft wings ...
He distinguishes clearly between flutter and the effect of the staggered vortices and expresses the opinion that two degrees of freedom (bending and torsion) at least are necessary for oscillations of this kind.} \end{minipage} \end{center} A further comment on \cite{bleichsolo} is given at \cite[p.80]{wake}: \begin{center} \begin{minipage}{162mm} {\em ... Bleich's work ... ultimately opened up a whole new field of study. Wind tunnel tests on thin plates suggested that higher wind velocities increased the frequency of vertical oscillation while decreasing that of torsional oscillation.} \end{minipage} \end{center} The conclusion is that when the two frequencies corresponded, a flutter critical velocity was reached, as manifested in a potentially catastrophic coupled oscillation. In order to define the flutter speed, \cite[pp.246-247]{bleich} assumes that the bridge is subject to a natural steady state oscillating motion; the flutter speed is then defined by: \begin{center} \begin{minipage}{162mm} {\em With increasing wind speed the external force necessary to maintain the motion at first increases and then decreases until a point is reached where the air forces alone sustain a constant amplitude of the oscillation. The corresponding velocity is called the critical velocity or flutter speed.} \end{minipage} \end{center} The importance of the flutter speed is then described by \begin{center} \begin{minipage}{162mm} {\em Below the critical velocity $V_c$ an exciting force is necessary to maintain a steady-state motion; above the critical velocity the direction of the force must be reversed (damping force) to maintain the steady-state motion. 
In absence of such a damping force the slightest increase of the velocity above $V_c$ causes augmentation of the amplitude.} \end{minipage} \end{center} This means that self-excited oscillations appear as soon as the flutter speed is exceeded.\par Also Rocard devotes a large part of \cite[Chapter VI]{rocard} to \begin{center} \begin{minipage}{162mm} {\em ... predict and delimit the range of wind speeds that inevitably produce and maintain vibrations of restricted amplitudes.} \end{minipage} \end{center} This task is accomplished by a careful study of the natural frequencies of the structure. Moreover, Rocard aims to \begin{center} \begin{minipage}{162mm} {\em ... calculate the really critical speed of wind beyond which oscillatory instability is bound to arise and will always cause fracture.} \end{minipage} \end{center} The flutter speed $V_c$ for a bridge without damping is computed on \cite[p.163]{rocard} and reads \neweq{speedflutter} V_c^2=\frac{2r^2\ell^2}{2r^2+\ell^2}\, \frac{\omega_T^2-\omega_B^2}{\alpha} \end{equation} where $2\ell$ denotes the width of the roadway, see Figure \ref{9}, $r$ is the radius of gyration, $\omega_B$ and $\omega_T$ are the circular frequencies of the lowest bending and torsional modes of the bridge, respectively, and $\alpha$ is the mass of air in a unit cube divided by the mass of steel and concrete assembled within the same unit length of the bridge; usually, $r\approx\ell/\sqrt{2}$ and $\alpha\approx0.02$. More complicated formulas for the flutter speed are obtained in presence of damping factors.
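A small sketch of \eq{speedflutter} follows; note that with the typical value $r=\ell/\sqrt{2}$ the prefactor collapses to $\ell^2/2$, so that $V_c=\ell\sqrt{(\omega_T^2-\omega_B^2)/(2\alpha)}$. The frequencies used below are hypothetical placeholders; only the relations $r\approx\ell/\sqrt{2}$ and $\alpha\approx0.02$ come from the text:

```python
# Rocard's flutter-speed formula (speedflutter) for an undamped bridge:
#   V_c^2 = (2 r^2 l^2 / (2 r^2 + l^2)) * (w_T^2 - w_B^2) / alpha .
import math

def flutter_speed(l, r, omega_T, omega_B, alpha):
    assert omega_T > omega_B > 0, "torsional mode must lie above bending mode"
    Vc2 = (2 * r**2 * l**2) / (2 * r**2 + l**2) * (omega_T**2 - omega_B**2) / alpha
    return math.sqrt(Vc2)

def flutter_speed_simplified(l, omega_T, omega_B, alpha):
    # special case r = l / sqrt(2): prefactor reduces to l^2 / 2
    return l * math.sqrt((omega_T**2 - omega_B**2) / (2 * alpha))
```

The simplification makes transparent that, for fixed geometry, $V_c$ is driven by the gap $\omega_T^2-\omega_B^2$: when the torsional and bending frequencies approach each other, the flutter speed drops.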
Moreover, Rocard \cite[p.158]{rocard} shows that, for the original Tacoma Bridge, \eq{speedflutter} yields $V_c=47$mph while the bridge collapsed under the action of a wind whose speed was $V=42$mph; he concludes that his computations are quite reliable.\par In pedestrian bridges, the counterpart of the flutter speed is the {\em critical number of pedestrians}, see the quoted sentence by Macdonald \cite{macdonald} in Section \ref{story} and also \cite[Section 2.4]{franck}. For this reason, in the sequel we prefer to deal with energies rather than with velocities: the flutter speed $V_c$ corresponds to a {\bf critical energy threshold} $\overline{E}$ above which the bridge displays self-excited oscillations. We believe that \begin{center} \begin{minipage}{162mm} {\bf The critical energy threshold, generated by the flutter speed for suspension bridges and by the critical number of pedestrians for footbridges, is the threshold where the nonlinear behavior of the bridge really appears, due to sufficiently large displacements of the roadway from equilibrium.} \end{minipage} \end{center} The threshold $\overline{E}$ depends on the elasticity of the bridge, namely on the materials used for its construction. This is in accordance with the numerical results obtained in \cite{gazpav3}, where it is shown that the blow up time for solutions to \eq{maineq2} increases with the parameter $k$. We refer to Section \ref{howto2} for a more precise definition of $\overline{E}$ and a possible way to determine it.\par In this section, we attempt a qualitative energy balance involving several kinds of energies.
A special role is played by the elastic energy, which should be distinguished from the potential energy that, as we repeatedly said, merely denotes the potential energy due to gravity; its zero level is taken at the equilibrium position of the roadway.\par Let us first describe what we believe to happen in a single cross section $\Gamma$ of the bridge; let ${\cal E}$ denote its total energy. Let $A$, $B$, and $C$ denote the three positions of the endpoint $P$ of $\Gamma$, as described in Figure \ref{1} \begin{figure} \caption{Different positions for the bridge side.} \label{1} \end{figure} where the thick grey part denotes the roadway whereas the dotted line displays the behavior of the solution $w$ to \eq{maineq2} when $w<0$, namely when the deflection from horizontal appears, see \eq{w0}. In what follows, we denote by $A$, $B$, $C$ both the positions in Figure \ref{1} and the instants of time when they occur for $P$. When $P$ is in its highest position $A$, $\Gamma$ has maximal potential energy $E_p$ and zero kinetic energy $E_k$: $E_p(A)={\cal E}$, $E_k(A)=0$. In the interval of time when $P$ goes from position $A$ to position $B$ the potential and kinetic energies of $\Gamma$ exhibit the well-known behavior with constant sum, see the first picture in Figure \ref{2345} \begin{figure} \caption{Energy balance for different positions of the bridge.} \label{2345} \end{figure} where $E_\ell$ denotes the portion of the stored elastic energy exceeding the structural damping of the bridge: when it is $0$, it means that there is no stored elastic energy. When $P$ reaches position $B$, corresponding to the maximal elongation of the sustaining hangers, all the energy has been transformed into kinetic energy: $E_p(B^-)=0$, $E_k(B^-)={\cal E}$.
In this position the falling of $\Gamma$ is violently stopped by the extended hangers by means of a Dirac delta impulse and the existing kinetic energy of $\Gamma$ is instantaneously stored into elastic energy $E_\ell$: $E_k(B^-)=E_\ell(B^+)$, see the second picture in Figure \ref{2345}. If the total energy ${\cal E}$ is smaller than the critical threshold $\overline{E}$, corresponding to the flutter speed, nothing seems to happen because the elastic energy is not visible; in this case, after a while $P$ reaches position $C$ and, thanks to a further impulse, the elastic energy is transformed back into kinetic energy. Then $P$ starts rising up towards position $A$ and the sum of the potential and kinetic energies is again constant, see the third picture in Figure \ref{2345}. In the meanwhile, if the wind keeps blowing or traffic loads generate further negative damping, the total energy ${\cal E}$ increases. Hence, after a cycle, when $P$ is back in position $A$, the total energy ${\cal E}$ of $\Gamma$ may have become larger. In turn, $E_k(B^-)$ will also be larger and, after a certain number of cycles, if the wind velocity is larger than the flutter speed, the total energy exceeds the critical threshold: ${\cal E}>\overline{E}$. In turn, $E_k(B^-)$ also exceeds the critical threshold: $E_k(B^-)>\overline{E}$. When this occurs, the energy splits into two parts: the saturated elastic energy $E_\ell(B^+)=\overline{E}$ and a torsional elastic energy $E_t(B^+)={\cal E}-\overline{E}$ which immediately gives rise to torsional oscillations, see the fourth picture in Figure \ref{2345}. As long as ${\cal E}>\overline{E}$, when $P$ reaches position $B$ the torsional elastic energy becomes positive and $P$ ``virtually'' goes from $B$ to $C$ along the dotted line in Figure \ref{1}. In the interval of time when $P$ is between $B^+$ and $C^-$, the torsional elastic energy remains constant and equal to $E_t(C^+)$, see the fourth picture in Figure \ref{2345}.
It may vary only after a further impact at position $B$, due to the new impulse. Finally, there exists a second critical threshold: if ${\cal E}-\overline{E}$ becomes too large, namely if the total energy ${\cal E}$ itself is too large, then the bridge collapses. \begin{remark} {\rm With some numerical results at hand, Lazer-McKenna \cite[p.565]{mck1} attempt to explain the Tacoma collapse with the following comment: \begin{center} \begin{minipage}{162mm} {\em An impact, due to either an unusual strong gust of wind, or to a minor structural failure, provided sufficient energy to send the bridge from one-dimensional to torsional orbits.}\end{minipage} \end{center} We believe that what they call {\em an unusual impact} is, in fact, a cyclic impulse for the transition between positions $B^-$ and $B^+$. $\Box$} \end{remark} \begin{problem} {\em The above energy balance should become quantitative. An exact way to compute all the energies involved should be determined. Of course, the potential and kinetic energies are straightforward, but the elastic energy needs a deeper analysis.} $\Box$\end{problem} Let us now consider the entire bridge, which we model as a rectangular plate $\Omega=(0,L)\times(-\ell,\ell)\subset\mathbb{R}^2$. For all $x_1\in(0,L)$ let ${\cal E}_{x_1}$ denote the total energy of the cross section $\Gamma_{x_1}=\{x_1\}\times(-\ell,\ell)$, computed by the method explained above. Then the total energy of the plate $\Omega$ is given by \neweq{allsections} {\cal E}_\Omega=\int_0^L{\cal E}_{x_1}\, dx_1\ . \end{equation} For simplicity we have here neglected the stretching energy, which is a kind of ``interaction energy between cross sections''. If one wishes to include this energy as well, one usually assumes that the elastic force is proportional to the increase of surface.
Then the stretching energy of the horizontal plate $\Omega$ whose vertical deflection is $u$ reads $${\cal E}_S(u)=\int_\Omega\left(\sqrt{1+|\nabla u|^2}-1\right) \, dx_1dx_2$$ and, after multiplication by a suitable coefficient, it should be added to ${\cal E}_\Omega$. For small deformations $u$ the asymptotic expansion leads to the usual Dirichlet integral $\frac12 \int_\Omega|\nabla u|^2$ and, in turn, to the appearance of the second order term $\Delta u$ in the corresponding Euler-Lagrange equation. Clearly, the bridge is better described with the addition of the stretching energy but, at least to obtain a description of qualitative behaviors, we may neglect it in a first simplified model.\par The elastic phenomenon just described may also be seen in a much simpler model. Imagine there is a ball at some given height above a horizontal plane $P$, see position $A$ in Figure \ref{10}. \begin{figure} \caption{A suspension bridge is like a bouncing ball.} \label{10} \end{figure} The ball falls until it reaches the position tangent to the plane, as in position $B$. Then there is some positive interval of time during which the ball touches $P$; the reason is that it is squeezed and deformed, although probably less than illustrated in the $B\to C$ picture. But, of course, a very soft ball may undergo a significant deformation. Just after the impact, the ball stores elastic energy which is hardly visible. After some time, the ball recovers its initial spherical shape and is ready to bounce up, see position $C$. When it is back in position $A$ it may store further energy, for instance from a hand pushing it downwards. For these reasons, we believe that there is some resemblance between bouncing balls and oscillating bridges.
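Returning to the stretching energy ${\cal E}_S$ introduced above, a quick numerical check confirms the small-deformation expansion $\sqrt{1+|\nabla u|^2}-1\approx\frac12|\nabla u|^2$; the test deflection and the quadrature grid are illustrative choices, not taken from the text:

```python
# Compare the stretching energy E_S(u) = \int (sqrt(1+|grad u|^2) - 1)
# with its small-deformation approximation (1/2) \int |grad u|^2,
# for the illustrative deflection u = a sin(pi x1/L) cos(pi x2/(2 ell)).
import math

def energies(amplitude, L=10.0, ell=1.0, n=200):
    hx, hy = L / n, 2 * ell / n
    ES = dirichlet = 0.0
    for i in range(n):            # midpoint-rule quadrature over Omega
        for j in range(n):
            x1 = (i + 0.5) * hx
            x2 = -ell + (j + 0.5) * hy
            # exact partial derivatives of the test deflection
            ux = amplitude * math.pi / L \
                * math.cos(math.pi * x1 / L) * math.cos(math.pi * x2 / (2 * ell))
            uy = -amplitude * math.pi / (2 * ell) \
                * math.sin(math.pi * x1 / L) * math.sin(math.pi * x2 / (2 * ell))
            g2 = ux * ux + uy * uy
            ES += (math.sqrt(1 + g2) - 1) * hx * hy
            dirichlet += 0.5 * g2 * hx * hy
    return ES, dirichlet
```

For a small amplitude the two energies agree to high relative accuracy, while for a large amplitude ${\cal E}_S$ stays strictly below the Dirichlet integral, since $\sqrt{1+s}-1\le s/2$ for all $s\ge0$.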
\subsection{Oscillating modes in suspension bridges: seeking the correct boundary conditions}\label{modes} Smith-Vincent \cite[Section I.2]{tac2} analyse the different forms of motion of a suspension bridge and write \begin{center} \begin{minipage}{162mm} {\em The natural modes of vibration of a suspension bridge can be classified as vertical and torsional. In pure vertical modes all points on a given cross section move vertically the same amount and in phase... The amount of this vertical motion varies along the longitudinal axis of the bridge as a modified sine curve.} \end{minipage} \end{center} Then, concerning torsional motions, they write \begin{center} \begin{minipage}{162mm} {\em In pure torsional modes each cross section rotates about an axis which is parallel to the longitudinal axis of the bridge and is in the same vertical plane as the centerline of the roadway. Corresponding points on opposite sides of the centerline of the roadway move equal distances but in opposite directions.} \end{minipage} \end{center} Moreover, Smith-Vincent also analyse small oscillations: \begin{center} \begin{minipage}{162mm} {\em For small torsional amplitudes the movement of any point is essentially vertical, and the wave form or variation of amplitude along a line parallel to the longitudinal centerline of the bridge ... is the same as for a corresponding pure vertical mode.} \end{minipage} \end{center} With these remarks at hand, in this section we try to set up a reliable eigenvalue problem. We consider the roadway bridge as a long narrow rectangular thin plate, simply supported on its short sides. So, let $\Omega=(0,L)\times(-\ell,\ell)\subset\mathbb{R}^2$ where $L$ is the length of the bridge and $2\ell$ is its width; a realistic assumption is that $2\ell\ll L$.\par As already mentioned in Section \ref{elasticity}, the choice of the boundary conditions is delicate since it depends on the physical model considered. 
We first recall that the boundary conditions $u=\Delta u=0$ are the so-called Navier boundary conditions, see Figure \ref{navierbc} which is taken from \cite[p.96]{navier}. \begin{figure} \caption{First appearance of Navier boundary conditions.} \label{navierbc} \end{figure} On flat parts of the boundary where no curvature is present, they describe simply supported plates, see e.g.\ \cite{gazgruswe}. When $x_1$ is fixed, either $x_1=0$ or $x_1=L$, these conditions reduce to $u=u_{x_1x_1}=0$. And precisely on these two sides, the roadway $\Omega$ is assumed to be simply supported; this is uniformly accepted in all the models we have met. The delicate point is the determination of the boundary conditions on the other sides.\par In order to get into the problem, we start by dealing with the linear Kirchhoff-Love theory described in Section \ref{elasticity}. In view of \eq{energy-gs}, the energy of the vertical deformation $u$ of a rectangular plate $\Omega=(0,L)\times(-\ell,\ell)$ subject to a load $f=f(x_1,x_2)$ is given by \neweq{energy-f} \mathbb{E}(u)=\int_{\Omega }\left(\frac{1}{2}\left( \Delta u\right) ^{2}+(\sigma-1)\det(D^2u)-f\, u\right) \, dx_1dx_2 \end{equation} and the corresponding Euler-Lagrange equation reads $\Delta^2u=f$ in $\Omega$. For a fully simply supported plate, that is $u=u_{x_1x_1}=0$ on the vertical sides and $u=u_{x_2x_2}=0$ on the horizontal sides, this problem was solved by Navier \cite{navier} in 1823, see also \cite[Section 2.1]{mansfield}. But Cauchy \cite{cauchy} criticised the work by Navier, claiming that \begin{center} \begin{minipage}{162mm} {\em ... Navier ... had considered two kinds of forces, the ones produced by dilation or contraction, the others by the bending of this same plane. ... It seemed to me that these two kinds of forces could be reduced to a single one ... .} \end{minipage} \end{center} Did Cauchy already have in mind the difference/analogy between bending, stretching, and torsion?
In any case, since the bridge is not a fully simply supported plate, different boundary conditions should be considered on the horizontal sides. The load problem on the rectangle $\Omega$ with only the vertical sides being simply supported was considered by L\'evy \cite{levy}, Zanaboni \cite{zanaboni}, and Nadai \cite{nadai}, see also \cite[Section 2.2]{mansfield} for the analysis of different kinds of boundary conditions on the remaining two sides $x_2=\pm\ell$. Let us also mention the more recent monograph \cite[Chapter 3]{ventsel} for a very clear description of the bending of rectangular plates.\par A first natural possibility is to consider the horizontal sides to be free. If no physical constraint is present on the horizontal sides, then the boundary conditions there become (see e.g.\ \cite[(2.40)]{ventsel}) $$ u_{x_2x_2}(x_1,\pm\ell)+\sigma u_{x_1x_1}(x_1,\pm\ell)=0\, ,\quad u_{x_2x_2x_2}(x_1,\pm\ell)+(2-\sigma)u_{x_1x_1x_2}(x_1,\pm\ell)=0\, ,\quad x_1\in(0,L)\, . $$ Unfortunately, some physical constraints are present on the two horizontal sides, both because of the action of the hangers and because the cross section of the bridge may be considered to be rigid. Our purpose is to describe the oscillating modes of the plate $\Omega$ under the most realistic boundary conditions possible; we suggest here some new conditions to be required on the horizontal sides $x_2=\pm\ell$. Hopefully, they should allow us to emphasise both vertical and torsional oscillations.\par First of all, note that if the cross section of the roadway is rigid and behaves as in Figure \ref{9}, then each cross section has constant deflection from horizontal, that is, it rotates around its barycenter $B$: Prof.\ Farquharson, the man escaping in \cite{tacoma}, ran along the middle line of the roadway precisely in order to avoid torsional oscillations.
Denoting by $u=u(x_1,x_2)$ the vertical displacement of the roadway, this amounts to saying that \neweq{small} u_{x_2}(x_1,x_2,t)=\Psi(x_1,t)\qquad(x\in\Omega\, ,\ t>0) \end{equation} for some $\Psi$. If we translate this constraint to the horizontal sides $x_2=\pm\ell$ of the plate $\Omega$, we obtain $$\begin{array}{l} u_{x_2x_2}(x_1,x_2)=0\quad x\in(0,L)\times\{-\ell,\ell\}\, ,\\ 2\ell u_{x_2}(x_1,-\ell)=2\ell u_{x_2}(x_1,\ell)=u(x_1,\ell)-u(x_1,-\ell)\quad x_1\in(0,L)\, . \end{array}$$ Indeed, by \eq{small} we have $u_{x_2x_2}\equiv0$ in $\Omega$, which justifies the first condition; integrating \eq{small} with respect to $x_2$ over $(-\ell,\ell)$ gives the second. Taking all the above boundary conditions into account, we are led to consider the following eigenvalue problem \neweq{eigen1} \left\{\begin{array}{ll} \Delta^2 u=\lambda u\quad & x=(x_1,x_2)\in\Omega\, ,\\ u(x_1,x_2)=u_{x_1x_1}(x_1,x_2)=0\quad & x\in\{0,L\}\times(-\ell,\ell)\, ,\\ u_{x_2x_2}(x_1,x_2)=0\quad & x\in(0,L)\times\{-\ell,\ell\}\, ,\\ 2\ell u_{x_2}(x_1,-\ell)=2\ell u_{x_2}(x_1,\ell)=u(x_1,\ell)-u(x_1,-\ell)\quad & x_1\in(0,L)\, . \end{array}\right. \end{equation} This is a nonlocal problem which combines boundary conditions on different parts of $\partial\Omega$. The oscillating modes of $\Omega$ are the eigenfunctions to \eq{eigen1}. By separating variables, we find the two families of eigenfunctions \neweq{autofunzione} \sin\left(\frac{m\pi}{L}x_1\right)\ ,\qquad x_2\sin\left(\frac{m\pi}{L}x_1\right)\qquad(m\in\mathbb{N}\setminus\{0\})\ . \end{equation} The first family describes pure vertical oscillations whereas the second family describes pure torsional oscillations with no vertical displacement of the middle line of the roadway.
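A quick numerical sanity check of \eq{autofunzione} is possible: both families are linear in $x_2$, so all $x_2$-derivatives of order at least two vanish and $\Delta^2u$ reduces to $u_{x_1x_1x_1x_1}=(m\pi/L)^4u$, while the nonlocal condition \eq{eigen1}$_4$ holds exactly. The sketch below (with illustrative values of $L$, $\ell$, $m$) verifies this by a finite-difference approximation of the fourth derivative:

```python
# Verify that both families in (autofunzione) satisfy
#   Delta^2 u = (m pi / L)^4 u
# and the nonlocal condition 2l u_{x2}(.,+-l) = u(.,l) - u(.,-l).
import math

L, ell, m = 1.0, 0.1, 2        # illustrative geometry and mode number
k = m * math.pi / L

def d4(f, x, h=1e-2):
    # central 5-point stencil for the fourth derivative, O(h^2) accurate
    return (f(x - 2*h) - 4*f(x - h) + 6*f(x) - 4*f(x + h) + f(x + 2*h)) / h**4

def check(u, x1=0.3, x2=0.05):
    # u is linear in x2, so Delta^2 u = u_{x1 x1 x1 x1}
    biharmonic = d4(lambda s: u(s, x2), x1)
    return biharmonic, k**4 * u(x1, x2)

vertical = lambda x1, x2: math.sin(k * x1)        # pure vertical mode
torsional = lambda x1, x2: x2 * math.sin(k * x1)  # pure torsional mode
```

For the torsional family, $u_{x_2}=\sin(kx_1)$ and $u(x_1,\ell)-u(x_1,-\ell)=2\ell\sin(kx_1)$, so \eq{eigen1}$_4$ is satisfied identically; for the vertical family both sides vanish.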
For fixed $m$, both these eigenfunctions correspond to the eigenvalue $$\lambda_m=\frac{m^4\pi^4}{L^4}\ .$$ Although the ``interesting'' eigenfunctions to \eq{eigen1} are the ones in \eq{autofunzione}, further eigenfunctions might exist and are expected to have the form \neweq{formeigen} \psi_m(x_2)\, \sin\left(\frac{m\pi}{L}x_1\right)\qquad(m\in\mathbb{N}\setminus\{0\}) \end{equation} for some $\psi_m\in C^4(-\ell,\ell)$ satisfying a suitable linear fourth order ODE, see \cite{fergaz}. \begin{problem} {\em Determine all the eigenvalues and eigenfunctions to \eq{eigen1}. Is $\lambda=0$ an eigenvalue? Which subspace of $H^2(\Omega)$ is spanned by these eigenfunctions?} $\Box$\end{problem} It would be interesting to find out if the corresponding loaded plate problem has an equilibrium. \begin{problem} {\em For any $f\in L^2(\Omega)$ study existence and uniqueness of a function $u\in H^4(\Omega)$ satisfying $\Delta^2 u=f$ in $\Omega$ and \eq{eigen1}$_2$-\eq{eigen1}$_3$-\eq{eigen1}$_4$. Try first some particular forms of $f$ as in \cite[Sections 2.2, 2.2.2]{mansfield} and then general $f=f(x_1,x_2)$. Is there any reasonable weak formulation for this problem?} $\Box$\end{problem} Problem \eq{eigen1} may turn out to be quite complicated from a mathematical point of view: it is not a variational problem and standard elliptic regularity does not apply. So, let us suggest an alternative model which seems to fit slightly better into well-known frameworks and also admits \eq{autofunzione} as eigenfunctions. Consider the eigenvalue problem \neweq{eigen2} \left\{\begin{array}{ll} \Delta^2 u=\lambda u\quad & x=(x_1,x_2)\in\Omega\, ,\\ u(x_1,x_2)=u_{x_1x_1}(x_1,x_2)=0\quad & x\in\{0,L\}\times(-\ell,\ell)\, ,\\ u_{x_2x_2}(x_1,x_2)=u_{x_2x_2x_2}(x_1,x_2)=0\quad & x\in(0,L)\times\{-\ell,\ell\}\, . \end{array}\right. \end{equation} Here, the condition on the third normal derivative replaces \eq{eigen1}$_4$.
This condition somehow ``forces $u_{x_2x_2}$ to remain zero'' which is precisely what happens in the bridge. It is straightforward to verify that \eq{autofunzione} are eigenfunctions to \eq{eigen2} so that similar problems arise. \begin{problem} {\em Determine all the eigenvalues and eigenfunctions to \eq{eigen2}. Which subspace of $H^2(\Omega)$ is spanned by these eigenfunctions? For any $f\in L^2(\Omega)$ study existence and uniqueness of a function $u\in H^4(\Omega)$ satisfying $\Delta^2 u=f$ in $\Omega$ and \eq{eigen2}$_2$-\eq{eigen2}$_3$.} $\Box$\end{problem} A further alternative eigenvalue problem is also of some interest. A possible additional simplification in the model would be to assume that \neweq{mezzeria} u(x_1,0,t)\simeq0\qquad\mbox{for all }x_1\in(0,L)\, ,\ t>0\, , \end{equation} namely that the center line of the roadway has small vertical oscillations. While on the one hand this seems realistic in view of \cite{tacoma}, on the other hand it would preclude the appearance of wide vertical oscillations on the center line. On the whole, we believe that, qualitatively, the behavior of the plate will not change too much. By assuming \eq{small} and that equality holds in \eq{mezzeria}, for all $x_1\in(0,L)$ and $t>0$ we obtain $$u_{x_2}(x_1,-\ell,t)=u_{x_2}(x_1,\ell,t)=\frac{u(x_1,\ell,t)-u(x_1,-\ell,t)}{2\ell}\ ,\quad u(x_1,\ell,t)=-u(x_1,-\ell,t)\, .$$ By putting these together and decoupling the equations on $x_2=\pm\ell$ we arrive at the Robin conditions $$\ell u_{x_2}(x_1,-\ell,t)+u(x_1,-\ell,t)=0\, ,\ \ell u_{x_2}(x_1,\ell,t)-u(x_1,\ell,t)=0\quad(x_1\in(0,L)\, ,\ t>0)\, .$$ We believe that these boundary conditions may help to obtain oscillation properties also of the solutions to equations derived without assuming \eq{mezzeria}.
\begin{problem} {\em Determine the eigenvalues $\lambda$ and the properties of the eigenfunctions to the following problem $$\left\{\begin{array}{ll} \Delta^2 u=\lambda u\quad & x\in\Omega\, ,\\ u(x_1,x_2)=u_{x_1x_1}(x_1,x_2)=0\quad & (x_1,x_2)\in\{0,L\}\times(-\ell,\ell)\, ,\\ u_{x_2x_2}(x_1,x_2)=0\quad & (x_1,x_2)\in(0,L)\times\{-\ell,\ell\}\, ,\\ \ell u_{x_2}(x_1,-\ell)+u(x_1,-\ell)=0\quad & x_1\in(0,L)\, ,\\ \ell u_{x_2}(x_1,\ell)-u(x_1,\ell)=0\quad & x_1\in(0,L)\, .\\ \end{array}\right.$$ This would give an idea of what kind of oscillations should be expected in the model equation \eq{truebeam} below.} $\Box$\end{problem} In this section we set up several eigenvalue problems for thin rectangular plates simply supported on two opposite sides. There is no evidence as to which boundary conditions on the remaining sides would be best. Once these are determined, it could be of great interest to have both theoretical and numerical information on the behavior of eigenvalues and eigenfunctions. The conditions should be sought so as to describe the oscillating modes of suspension bridges as faithfully as possible. In a work in preparation \cite{fergaz} we tackle these problems. \subsection{Seeking the critical energy threshold}\label{howto2} The title of this section should not mislead the reader. We will not give a precise method for determining the energy threshold which gives rise to torsional oscillations in a plate. We do have an idea of how to proceed but several steps are necessary before reaching the final goal.\par Consider the plate $\Omega=(0,L)\times(-\ell,\ell)$ and the incomplete eigenvalue problem \neweq{incomplete} \left\{\begin{array}{ll} \Delta^2 u=\lambda u\quad & x\in\Omega\, ,\\ u(x_1,x_2)=u_{x_1x_1}(x_1,x_2)=0\quad & (x_1,x_2)\in\{0,L\}\times(-\ell,\ell)\, . \end{array}\right.
\end{equation} Problem \eq{incomplete} lacks conditions on the remaining sides $x_2=\pm\ell$ and, as mentioned in the previous section, it is not clear which boundary conditions should be added there.\par In order to explain which could be the method to determine the critical energy threshold, consider the simple case where the plate is square, $\Omega=(0,\pi)\times(-\frac{\pi}{2},\frac{\pi}{2})$, and let us complete \eq{incomplete} with the ``simplest'' boundary conditions, namely the Navier boundary conditions which represent a fully simply supported plate. As already mentioned, these are certainly not the correct boundary conditions for a bridge but they are quite helpful to describe the method we are going to suggest. For different and more realistic boundary conditions, we refer to the paper in preparation \cite{fergaz}. So, consider the problem \neweq{irrational} \left\{\begin{array}{ll} \Delta^2 u=\lambda u\quad & x\in\Omega\, ,\\ u(x_1,x_2)=u_{x_1x_1}(x_1,x_2)=0\quad & (x_1,x_2)\in\{0,\pi\}\times(-\frac{\pi}{2},\frac{\pi}{2})\, ,\\ u(x_1,x_2)=u_{x_2x_2}(x_1,x_2)=0\quad & (x_1,x_2)\in(0,\pi)\times\{-\frac{\pi}{2},\frac{\pi}{2}\}\ . \end{array}\right. \end{equation} It is readily seen that, for instance, $\lambda=625^2$ is an eigenvalue for \eq{irrational} and that there are 4 linearly independent corresponding eigenfunctions \neweq{exxample} \{\sin(24x_1)\cos(7x_2),\, \sin(20x_1)\cos(15x_2),\, \sin(15x_1)\sin(20x_2),\, \sin(7x_1)\sin(24x_2)\}\ ; \end{equation} indeed, each product $\sin(ax_1)\cos(bx_2)$ with $b$ odd, or $\sin(ax_1)\sin(bx_2)$ with $b$ even, satisfies the boundary conditions in \eq{irrational} and $\Delta^2u=(a^2+b^2)^2u$, while $24^2+7^2=20^2+15^2=625$. It is well-known that similar facts hold for the second order eigenvalue problem $-\Delta u=\lambda u$ on the square, so what we are discussing is not surprising. What we want to emphasise here is that, associated to the same eigenvalue $\lambda=625^2$, we have 4 different kinds of vibrations in the $x_1$-direction and each one of these vibrations has its own counterpart in the $x_2$-direction corresponding to torsional oscillations.
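The fourfold multiplicity has a number-theoretic origin: the four frequency pairs $(a,b)$ occurring in \eq{exxample} are precisely the representations of $625$ as a sum of two positive squares, so the corresponding modes share the same value of $a^2+b^2$. A quick enumeration (a sketch; the variable names are ours):

```python
# frequency pairs (a, b) of the modes in (exxample): the multiplicity comes
# from 625 having several representations as a sum of two positive squares,
# so several separated-variables modes share the same value of a^2 + b^2
pairs = [(a, b) for a in range(1, 25) for b in range(1, 25) if a*a + b*b == 625]
print(pairs)                      # [(7, 24), (15, 20), (20, 15), (24, 7)]
assert len(pairs) == 4
assert all(a*a + b*b == 625 for a, b in pairs)
```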
We believe that this will be true for any boundary conditions on $x_2=\pm\ell$ completing \eq{incomplete} and for any values of $L$ and $\ell$. We refer again to Figure \ref{patterns} for the patterns of some vibrating plates.\par Consider now a general plate $\Omega=(0,L)\times(-\ell,\ell)$ and let $f\in L^2(\Omega)$; in view of \cite[Section 2.2]{mansfield}, we expect the solution to the problem \neweq{withsource} \left\{\begin{array}{ll} \Delta^2 u=f\quad & x\in\Omega\, ,\\ u(x_1,x_2)=u_{x_1x_1}(x_1,x_2)=0\quad & (x_1,x_2)\in\{0,L\}\times(-\ell,\ell)\, ,\\ \mbox{other boundary conditions}\quad & (x_1,x_2)\in(0,L)\times\{-\ell,\ell\}\ , \end{array}\right. \end{equation} to be of the kind $$u(x_1,x_2)=\sum_{m=1}^\infty \psi_m(x_2)\sin\left(\frac{m\pi}{L}x_1\right)\qquad(x_1,x_2)\in\Omega$$ for some functions $\psi_m$ depending on the Fourier coefficients of $f$. Since we have in mind small $\ell$, we can formally expand $\psi_m$ in Taylor polynomials and obtain $$\psi_m(x_2)=\psi_m(0)+\psi_m'(0)x_2+o(x_2)\qquad\mbox{as }x_2\to0\ .$$ Hence, $u$ may approximately be written as a combination of the functions in \eq{autofunzione}: $$u(x_1,x_2)\approx\sum_{m=1}^\infty[a_m+b_mx_2]\sin\left(\frac{m\pi}{L}x_1\right)\qquad(x_1,x_2)\in\Omega$$ where $a_m=\psi_m(0)$ and $b_m=\psi'_m(0)$. If, instead of a stationary problem such as \eq{withsource}, $u=u(x_1,x_2,t)$ satisfies an evolution problem with the same boundary conditions, then its coefficients also depend on time: \neweq{fourier} u(x_1,x_2,t)\approx\sum_{m=1}^\infty \Big(a_m(t)+b_m(t)x_2\Big)\sin\left(\frac{m\pi}{L}x_1\right)\qquad(x_1,x_2)\in\Omega\, ,\ t>0\ . \end{equation} Let now ${\cal E}(t)$ denote the instantaneous total energy of the bridge, as determined in \eq{allsections}. What follows is not precise; it is a qualitative attempt to describe combined vertical and torsional oscillations.
In particular, due to the restoring cables and hangers, the sine functions in \eq{fourier} should be modified in order to display different behaviors for positive and negative arguments. Moreover, we call ``small'' any quantity which is less than unity and ``almost zero'' (in symbols $\cong0$) any quantity which has a smaller order of magnitude when compared with small quantities. Finally, in order to avoid delicate sign arguments, we will often refer to $a_m^2$ and $b_m^2$ instead of $a_m$ and $b_m$.\par $\bullet$ {\bf Small energy.} As long as ${\cal E}(t)$ is small one may not even see oscillations, but somebody standing on the bridge might be able to feel them. For instance, standing on the sidewalk of a bridge, one can feel the oscillations created by a car going along the roadway, but the oscillations will not be visible to somebody watching the roadway from a point outside the bridge. For small energies ${\cal E}(t)$ only small oscillations appear and the corresponding solution \eq{fourier} has small coefficients $a_m(t)$ while $b_m(t)\cong0$. More precisely, \neweq{ambm} \left\{\begin{array}{ll} \forall\varepsilon>0\ \exists\delta>0\quad\mbox{such that}\quad{\cal E}(t)<\delta\ \Longrightarrow\ a_m(t)^2<\varepsilon\ \forall m\, ,\\ \exists\gamma>0\quad\mbox{such that}\quad{\cal E}(t)<\gamma\ \Longrightarrow\ b_m(t)\cong0\ \forall m\, . \end{array}\right. \end{equation} The reason for the second condition in \eq{ambm} is that even small variations of the $b_m$'s correspond to a large variation of the total energy ${\cal E}$ because the huge masses of the cross sections would rotate along the large length $L$ of the roadway.
On the other hand, the first of \eq{ambm} may be strengthened by assuming that also some of the $a_m$'s are almost zero for small ${\cal E}$; in particular, we expect that this happens for large $m$ since these coefficients correspond to higher eigenvalues \neweq{vanishlargea} \forall\overline{m}\in\mathbb{N}\setminus\{0\}\quad \exists{\cal E}_{\overline{m}}>0 \quad\mbox{such that}\quad{\cal E}(t)<{\cal E}_{\overline{m}}\ \Longrightarrow\ a_m(t)\cong0\quad \forall m>\overline{m}\ . \end{equation} To better understand this point, let us compute the elongation $\Gamma_m$ due to the $m$-th mode: \neweq{elongation} \Gamma_m(t):=\int_0^L\bigg(\sqrt{1+\frac{m^2\pi^2}{L^2}\, a_m(t)^2\, \cos^2\left(\frac{m\pi}{L}x_1\right)}\, -\, 1\bigg)\, dx_1\ ; \end{equation} this describes the stretching elastic energy since it is the difference between the length of the roadway deformed by one single mode and the length of the roadway at rest. Due to the coefficient $\frac{m^2\pi^2}{L^2}$, it is clear that if $a_m^2\equiv a_{m+1}^2$ then $\Gamma_m(t)<\Gamma_{m+1}(t)$. This is the reason why \eq{vanishlargea} holds.\par $\bullet$ {\bf Increasing energy.} According to \eq{vanishlargea}, as long as ${\cal E}(t)<{\cal E}_{\overline{m}}$ one has $a_m(t)\cong0$ for all $m>\overline{m}$. If ${\cal E}(t)$ increases but remains smaller than ${\cal E}_{\overline{m}}$, then the coefficients $a_m(t)^2$ for $m=1,...,\overline{m}$ also increase. But they cannot increase to infinity since \eq{elongation} implies that the length of the roadway would also increase to infinity. So, when the total energy ${\cal E}(t)$ reaches the threshold ${\cal E}_{\overline{m}}$ the superlinear elastic structure of the bridge forces the solution \eq{fourier} to add one mode, so that $a_{\overline{m}+1}(t)\not\cong0$. 
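The monotonicity used here — equal amplitudes give $\Gamma_m(t)<\Gamma_{m+1}(t)$ — can be confirmed by evaluating \eq{elongation} directly; in the following sketch the span and the common amplitude are assumed sample values of ours:

```python
import math

L, a = 1.0, 0.01                    # assumed span and common modal amplitude

def elongation(m, n=20000):
    """Trapezoid-rule value of Gamma_m in (elongation) with a_m(t) = a."""
    h = L/n
    def g(x1):
        slope = m*math.pi/L*a*math.cos(m*math.pi*x1/L)
        return math.sqrt(1.0 + slope*slope) - 1.0
    return h*(g(0.0)/2 + sum(g(i*h) for i in range(1, n)) + g(L)/2)

gammas = [elongation(m) for m in range(1, 6)]
# with equal amplitudes the elongation strictly increases with the mode number m
assert all(g1 < g2 for g1, g2 in zip(gammas, gammas[1:]))
```

For small $a$ one also sees $\Gamma_m\approx m^2\pi^2a^2/(4L)$, consistent with the factor $\frac{m^2\pi^2}{L^2}$ in \eq{elongation}.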
Hence, the number of modes $\not\cong0$ is a nondecreasing function of ${\cal E}$.\par $\bullet$ {\bf Critical energy threshold.} What is described above is purely theoretical, but the bridge has several physical constraints. Of course, it cannot be stretched indefinitely: it will break down long before that. In particular, the number of active modes cannot increase to infinity. The elastic properties of the bridge determine a critical (maximal) number of possible active modes, say $\mu$. If the energy is distributed over the $\mu$ coefficients $a_1$,...,$a_\mu$, and if it increases up to ${\cal E}_\mu$, once again the superlinear elastic structure of the bridge forces the solution \eq{fourier} to change mode, but this time from the $a_m$ to the $b_m$; due to \eq{elongation}, further stretching of the roadway would require much more energy than switching oscillations to torsional modes. The switch is due to an impulse caused by the instantaneous stopping of the falling roadway imposed by the sustaining cables and the elongated hangers. Which torsional modes are activated depends on which coupled modes share the same eigenvalue; as an example, consider \eq{exxample} which, roughly speaking, says that the motion may change from $24$ to $7$ oscillations in the $x_1$-direction with a consequent change of oscillation also in the $x_2$-direction.\par $\bullet$ {\bf Summarising...} Let $u$ in \eq{fourier} describe the vertical displacement of the roadway.
The bridge has several characteristic values which depend on its elastic structure.\par $\diamondsuit$\quad An integer number $\mu\in\mathbb{N}$ such that $a_m(t)\cong0$ and $b_m(t)\cong0$ for all $m>\mu$, independently of the value of ${\cal E}(t)$.\par $\diamondsuit$\quad $\mu$ different energy ``increasing modes thresholds'' $E_1,...,E_\mu$.\par $\diamondsuit$\quad The critical energy threshold $\overline{E}=E_\mu$.\par Assume that ${\cal E}(0)=0$, in which case $u(x_1,x_2,0)=0$, and that $t\mapsto{\cal E}(t)$ is increasing. As long as ${\cal E}(t)\le E_1$ we have $a_m\cong0$ for all $m\ge2$ and $b_m\cong0$ for all $m\ge1$; moreover, $t\mapsto a_1(t)^2$ is increasing. When ${\cal E}(t)$ reaches and exceeds $E_1$ there is a first switch: the function $a_2^2$ starts being positive while, as long as ${\cal E}(t)\le E_2$, we still have $a_m\cong0$ for all $m\ge3$ and $b_m\cong0$ for all $m\ge1$. And so on, until ${\cal E}(t)=E_\mu=\overline{E}$. At this point, also because of an impulse, the energy forces the solution to have a nonzero coefficient $b_1$ rather than a nonzero coefficient $a_{\mu+1}$. The impulse forces $u$ to lower the number of modes for which $a_m\not\cong0$. For instance, the observation by Farquharson ({\em The motion, which a moment before had involved nine or ten waves, had shifted to two}) quoted in Section \ref{story} shows that, for the Tacoma Bridge, there was a change such as \neweq{change} \Big(a_m\cong0\ \forall m\ge11,\ b_m\cong0\ \forall m\ge1\Big)\ \longrightarrow\ \Big(a_m\cong0\ \forall m\ge3,\ b_m\cong0\ \forall m\ge2\Big)\ . \end{equation} In order to complete the material in this section, two major problems are still to be solved.\par - Find the correct boundary conditions on $x_2=\pm\ell$.\par - Find at which energy levels the ``transfer of energy between modes'' occurs, see \eq{change}.\par Both these problems are addressed in a forthcoming paper \cite{fergaz}. 
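As a concrete check of the ansatz \eq{fourier}, the vertical coefficients $a_m$ and the torsional coefficients $b_m$ can be recovered from samples of $u$ by projecting the $x_2$-average and the first $x_2$-moment of $u$ on $\sin(m\pi x_1/L)$. A sketch on synthetic data (the displacement and all numerical values are assumed):

```python
import numpy as np

L, ell = 1.0, 0.05
x1 = np.linspace(0.0, L, 401)
x2 = np.linspace(-ell, ell, 81)
X1, X2 = np.meshgrid(x1, x2, indexing='ij')

def trap(y, x, axis=-1):
    """Trapezoid rule along one axis of a sampled array."""
    y = np.moveaxis(y, axis, -1)
    return np.sum((y[..., 1:] + y[..., :-1])*np.diff(x)/2.0, axis=-1)

# synthetic displacement: two vertical modes and one torsional mode,
# with assumed coefficients a1 = 0.3, a2 = 0.1, b1 = 0.2
U = (0.3*np.sin(np.pi*X1/L) + 0.1*np.sin(2*np.pi*X1/L)
     + 0.2*X2*np.sin(np.pi*X1/L))

def coeffs(U, m):
    s = np.sin(m*np.pi*x1/L)
    avg = trap(U, x2, axis=1)/(2*ell)           # x2-average: vertical part
    mom = trap(U*X2, x2, axis=1)*3/(2*ell**3)   # first x2-moment: torsional part
    return 2/L*trap(avg*s, x1), 2/L*trap(mom*s, x1)

a1, b1 = coeffs(U, 1)
a2, b2 = coeffs(U, 2)
assert abs(a1 - 0.3) < 1e-3 and abs(b1 - 0.2) < 1e-3
assert abs(a2 - 0.1) < 1e-3 and abs(b2) < 1e-3
```

The same projections, applied to measured or simulated displacements, would give a practical way to monitor which $a_m$ and $b_m$ are $\not\cong0$.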
\section{Conclusions and future perspectives}\label{conclusions} So far, we have observed phenomena displayed by real structures, we have discussed models, and we have recalled some theoretical results. In this section we take advantage of all this work, summarise all the phenomena and remarks previously discussed, and reach several conclusions. \subsection{A new mathematical model for suspension bridges}\label{newmodel} We suggest here a new mathematical model for the description of oscillations in suspension bridges. We expect the solution to the corresponding equation to display both self-excited oscillations and an instantaneous switch between vertical and torsional oscillations. Moreover, the critical energy threshold (corresponding to the flutter speed) appears in the equation.\par Let $\Omega=(0,L)\times(-\ell,\ell)\subset\mathbb{R}^2$ where $L$ represents the length of the bridge and $2\ell$ represents the width of the roadway. Assume that $2\ell\ll L$ and consider the initial-boundary value problem \neweq{truebeam} \left\{\begin{array}{ll} u_{tt}+\Delta^2 u+\delta u_t+f(u)=\varphi(x,t)\ & x=(x_1,x_2)\in\Omega,\ t>0,\\ u(x_1,x_2,t)=u_{x_1x_1}(x_1,x_2,t)=0\ & x\in\{0,L\}\times(-\ell,\ell),\ t>0,\\ u_{x_2x_2}(x_1,x_2,t)=0\ & x\in(0,L)\times\{-\ell,\ell\},\ t>0,\\ u_{x_2}(x_1,-\ell,t)=u_{x_2}(x_1,\ell,t)\ & x_1\in(0,L),\ t>0,\\ u_t(x_1,-\ell,t)+u(x_1,-\ell,t)=E(t)\, [u_t(x_1,\ell,t)+u(x_1,\ell,t)]\ & x_1\in(0,L),\ t>0,\\ u(x,0)=u_0(x)\ & x\in\Omega,\\ u_t(x,0)=u_1(x)\ & x\in\Omega. \end{array}\right. \end{equation} Here, $u=u(x,t)$ represents the vertical displacement of the plate, $u_0(x)$ is its initial position while $u_1(x)$ is its initial vertical velocity. Before discussing the other terms and conditions, let us remark that if one wishes to have smooth solutions, a compatibility condition between boundary and initial conditions is needed: \neweq{cc} u_1(x_1,-\ell)+u_0(x_1,-\ell)=E(0)\, [u_1(x_1,\ell)+u_0(x_1,\ell)]\qquad\forall x_1\in(0,L)\ .
\end{equation} The function $\varphi$ represents an external source, such as the wind, which is responsible for the variation of the total energy ${\cal E}(t)$ inserted into the structure; the latter, in turn, can be determined by \neweq{gust} {\cal E}(t)=\int_\Omega \varphi(x,t)^2\, dx\ . \end{equation} Of course, it is much simpler to compute ${\cal E}(t)$ in terms of the known source $\varphi$ rather than in terms of the unknown solution $u$ and its derivatives. The function $E$ is then defined by \neweq{EE} E(t)=\left\{\begin{array}{ll} 1\quad & \mbox{if }{\cal E}(t)\le\overline{E}\\ -1\quad & \mbox{if }{\cal E}(t)>\overline{E} \end{array}\right. \end{equation} where $\overline{E}>0$ is the critical energy threshold defined in Section \ref{energybalance}; hence, the function $E(t)$ is a discontinuous nonlocal term which switches between $\pm1$ according to whether the total energy is smaller or larger than the critical threshold. So, \eq{truebeam}$_5$ is a dynamic boundary condition involving the total energy of the bridge. When ${\cal E}(t)\le\overline{E}$ there is no torsional energy $E_t$ and the motion tends to become of pure vertical type, that is, with $u_{x_2}\cong0$: to see this, note that in this case \eq{truebeam}$_5$ may be written as $$\frac{\partial}{\partial t}\Big\{[u(x_1,\ell,t)-u(x_1,-\ell,t)]e^t\Big\}=0\qquad\forall x_1\in(0,L)\ .$$ This means that as long as ${\cal E}(t)\le\overline{E}$, the map $t\mapsto|u(x_1,\ell,t)-u(x_1,-\ell,t)|$ decreases so that the two opposite endpoints of any cross section tend to have the same vertical displacement and to move synchronously as in a pure vertical motion.
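The decay just described can be isolated in a toy scalar reduction (ours): with $E(t)=1$, condition \eq{truebeam}$_5$ says that, at each fixed $x_1$, the endpoint difference $d(t)=u(x_1,\ell,t)-u(x_1,-\ell,t)$ satisfies $d'+d=0$, so $d(t)=d(0)e^{-t}$. A minimal explicit-Euler check, ignoring the coupling with the plate equation:

```python
import math

# toy scalar reduction of (truebeam)_5 with E = +1 at one cross section:
# d(t) = u(x1, ell, t) - u(x1, -ell, t) obeys d' + d = 0, hence d(t) -> 0
dt, d = 1e-4, 1.0                   # time step and assumed initial difference
for _ in range(int(round(1.0/dt))): # integrate up to t = 1
    d += dt*(-d)                    # explicit Euler for d' = -d
assert abs(d - math.exp(-1.0)) < 1e-3
```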
When ${\cal E}(t)>\overline{E}$ condition \eq{truebeam}$_5$ may be written as $$\frac{\partial}{\partial t}\Big\{[u(x_1,\ell,t)+u(x_1,-\ell,t)]e^t\Big\}=0\qquad\forall x_1\in(0,L)\ .$$ This means that as long as ${\cal E}(t)>\overline{E}$, the map $t\mapsto|u(x_1,\ell,t)+u(x_1,-\ell,t)|$ decreases so that the two opposite endpoints of any cross section tend to have zero average and to move asynchronously as in a pure torsional motion, that is, with $u(x_1,0,t)\cong\frac12 [u(x_1,\ell,t)+u(x_1,-\ell,t)]\cong0$.\par Note that in \eq{truebeam} the jump of $E(t)$ from/to $\pm1$ occurs simultaneously and instantaneously at all the points located on the sides of the roadway; hence, either all or none of the cross sections have some torsional motion, in agreement with what has been observed for the Tacoma Bridge, see \cite{Tacoma1} and also \cite[pp.50-51]{wake}. The form \eq{gust} and the switching criterion for $E(t)$ in \eq{EE} mean that problem \eq{truebeam} models a situation where, if a gust of wind is sufficiently strong, then, instantaneously, a torsional motion appears. One could also consider the case where $${\cal E}(t)\simeq\int_0^t\int_\Omega \varphi(x,\tau)^2\, dx\, d\tau\ $$ which would model a situation where, if the wind blows for too long, then at some critical time, instantaneously, a torsional motion appears. However, the problem with \eq{gust} is much simpler because it is local in time.\par The differential operator in \eq{truebeam} is derived according to the linear Kirchhoff-Love model for a thin plate, see Section \ref{elasticity}. We have neglected the strong distinction between the bending and stretching elastic energies which are, however, quite different in long narrow plates, see \cite[Section 8.3]{mansfield} and previous work by Cox \cite{cox}; if one wishes to make some corrections, one should add a further nonlinear term $g(\nabla u,D^2u)$ and the equation would become quasilinear, see Problem \ref{gDu}.
But, as already mentioned in Section \ref{elasticity}, we follow here a compromise and merely consider a semilinear problem. Concerning the nonlinearity $f(u)$, some superlinearity should be required. For instance, $f(u)=u+\varepsilon u^3$ with $\varepsilon>0$ small could be a possible choice; alternatively, one could take $f(u)=a(e^{bu}-1)$ as in \cite{mckO} for some $a,b>0$. In the first case the hangers are regarded as ideal springs and gravity is somehow neglected; in the second case more relevance is given to gravity and to the possibility of slackening hangers. Finally, $\delta u_t$ is a damping term which represents the positive structural damping of the structure; its role should be to weaken the effect of the nonlinear term $f(u)$, see Problem \ref{competition}.\par As far as we are aware, there is no standard theory for problems like \eq{truebeam}. It is a nonlocal problem since it links behaviors on different parts of the boundary and involves the function $E(t)$ in \eq{EE}. It also has dynamic boundary conditions which are usually delicate to handle. \begin{problem} {\em Prove that if $\varphi(x,t)\equiv0$ then \eq{truebeam} only admits the trivial solution $u\equiv0$. The standard trick of multiplying the equation in \eq{truebeam} by $u$ or $u_t$ and integrating over $\Omega$ does not allow one to get rid of all the boundary terms. Note that, in this case, $E(t)\equiv1$.} $\Box$\end{problem} \begin{problem}\label{ill} {\em Study existence and uniqueness results for \eq{truebeam}; prove continuous dependence results with respect to the data $\varphi$, $u_0$, $u_1$, and with respect to possible perturbations of $f$. Of course, the delicate conditions to deal with are \eq{truebeam}$_4$ and \eq{truebeam}$_5$.
If problem \eq{truebeam} were ill-posed, what must be changed in order to have a well-posed problem?} $\Box$\end{problem} \begin{problem}\label{competition} {\em Study \eq{truebeam} with no damping, that is, $\delta=0$: does the solution display oscillations such as \eq{pazzo} when $t$ tends to some finite blow up instant? Then study the competition between the damping term $\delta u_t$ and the self-exciting term $f(u)$: for a given $f$ is it true that if $\delta$ is sufficiently large then the solution $u$ is global in time? We believe that the answer is negative and that the only effect of the damping term is to delay the blow up time.} $\Box$\end{problem} \begin{problem} {\em Determine the regularity of the solutions $u$ to \eq{truebeam} and study the importance of the compatibility condition \eq{cc}.} $\Box$\end{problem} \begin{problem}\label{gDu} {\em Insert into the equation \eq{truebeam} a correction term for the elastic energies, something like $$g(\nabla u,D^2u)=-\left(\frac{u_{x_1}}{\sqrt{1+u_{x_1}^2}}\right)_{x_1}-\gamma\left(\frac{u_{x_2}}{\sqrt{1+u_{x_2}^2}}\right)_{x_2}$$ with $\gamma>0$ small. Then prove existence, uniqueness and continuous dependence results.} $\Box$\end{problem} An important tool to study \eq{truebeam} would be the eigenvalues and eigenfunctions of the corresponding stationary problem. In view of the dynamic boundary conditions \eq{truebeam}$_5$, a slightly simpler model could be considered, see \eq{eigen1} or \eq{eigen2} and the subsequent discussion in Section \ref{modes}. We have no feeling for which could be the better choice... \subsection{A possible explanation of the Tacoma collapse}\label{possibleTacoma} Hopefully, this paper sheds some further light on oscillating bridges. We have emphasised the necessity of models fulfilling the requirements of (GP) since, otherwise, the solution will not display the phenomena visible in real bridges.
In particular, any equation aiming to model the complex behavior of bridges should contain information on at least two possible kinds of oscillations: this target may be achieved either by considering a PDE, or by considering coupled systems of ODE's, or by linking the two oscillations within a unique function solving a suitable ODE. A further contribution of this paper is the remark that there might be some hidden elastic energy in the bridge and that there is no simple way to detect it. Not only is this energy larger than the kinetic and potential energies, but it may also give rise, almost instantaneously, to self-excited oscillations.\par We now put together all these observations in order to offer an explanation of the Tacoma collapse. As we shall see, our explanation agrees with all of them.\par \noindent \textsf{On November 7, 1940, for some time before 10:00 AM, the bridge was oscillating as it had many other times before. The wind was apparently more violent than usual and, moreover, it continued for a long time. The oscillations of the bridge were completely similar to those displayed in the interval $(0,80)$ of the plot in Figure \ref{mille}. Since the energy involved was quite large, the oscillations were also quite large. The roadway was far from its equilibrium position and, consequently, the restoring force due to the sustaining cables and to the hangers did not obey ${\cal LHL}$. The oscillations were governed by a fairly complicated differential equation such as \eq{truebeam} which, however, may be approximated by \eq{maineq2} after assuming \eq{w0}, since this equation possesses the main features of many fourth order differential equations, both ODE's and PDE's.
It is not clear which superlinear function $f$ would better describe the restoring nonlinear elastic forces, but any function $f=f(s)$ asymptotically linear as $s\to0$ and superlinear as $|s|\to\infty$ generates the same qualitative behavior of solutions, see Theorem \ref{blowup}. As the wind kept blowing, the total energy ${\cal E}$ in the bridge was increasing; the bridge started accumulating energy and, in particular, its stored elastic energy $E_\ell$ was also increasing. Unfortunately, nobody knew how to measure $E_\ell$ because, otherwise, \eq{estimate} would have warned, somewhat in advance, that sudden wider vertical oscillations and subsequent torsional oscillations would appear. After each cycle, when the cross section of the bridge violently reached position $B$ in Figure \ref{1}, a Dirac delta mass increased the internal elastic energy $E_\ell$. As the wind continued to blow, after some cycles the elastic energy became larger than the critical energy threshold $\overline{E}$ of the bridge, see Section \ref{energybalance}. This threshold may be computed provided one knows the elastic structure of the bridge, see Section \ref{howto2}. As soon as $E_\ell>\overline{E}$, the function $E(t)$ in \eq{EE} switched from $+1$ to $-1$, a torsional elastic energy $E_t$ appeared and gave rise, almost instantaneously, to a torsional motion. As described in \eq{change}, due to the impulse, the energy switched to the first torsional mode $b_1$ rather than to a further vertical mode $a_{11}$; so, the impulse forced $u$ to lower the number of modes for which $a_m\not\cong0$ and the motion, which a moment before had involved nine or ten waves, shifted to two. At that moment, the graph of the function $w=w(t)$, describing the bridge according to \eq{w0}, reached time $t=95$ in the plot in Figure \ref{mille}.
Oscillations immediately went out of control and, after some more oscillations, the bridge collapsed.}\par One should compare this description with the original one from the Report \cite{Tacoma1}, see also \cite[pp.26-29]{tac1} and \cite[Chapter 4]{wake}. \subsection{What about future bridges?}\label{howplan} Equation \eq{maineq2} is a simple prototype equation for the description of self-excited oscillations. None of the previously existing mathematical models ever displayed this phenomenon, which is nevertheless visible in oscillating bridges. The reason is not that the behavior of the bridge is too complicated to be described by a differential equation, but mainly that those models fail to satisfy (GP); this prevents the appearance of oscillations, and therefore projects based on the corresponding equations may contain huge errors. In order to avoid bad surprises as in the past, many projects nowadays include stiffening trusses or strong dampers. These have the great advantage of maintaining the bridge much closer to its equilibrium position and of justifying ${\cal LHL}$. But they also have disadvantages, see \cite{kawada2}. First of all, they create an artificial stiffness which can give rise to the appearance of cracks in the more elastic structure of the bridge. Second, damping effects and stiffening trusses significantly increase the weight and the cost of the whole structure. Moreover, in extreme conditions, they may become useless: under huge external solicitations the bridge would again be too far from its equilibrium position and would violate ${\cal LHL}$. So, hopefully, one should find alternative solutions, see again \cite{kawada2}.\par One can act both on the structure and on the model. In order to increase the flutter speed, some suggestions on how to modify the design were made by Rocard \cite[pp.169-173]{rocard}: he suggests how to modify the components of the bridge in order to raise the right hand side of \eq{speedflutter}.
More recently, some attempts to improve bridge performances can be found in \cite{hhs} where, in particular, a careful analysis of the role played by the hangers is made. But much work still has to be done; from \cite[p.1624]{hhs}, we quote \begin{center} \begin{minipage}{162mm} {\em Research on the robustness of suspension bridges is at the very beginning.} \end{minipage} \end{center} From a theoretical point of view, one should first determine a suitable equation satisfying (GP). Our own suggestion is to consider \eq{truebeam}, where one should choose a reliable nonlinearity $f$ and add coefficients to the other terms, according to the expected technical features of the final version of the bridge: its length, its width, its weight, the materials used for the construction, the expected external solicitations, the structural damping... Since we believe that the solution to this equation may display blow up, which means a strong instability, a crucial role is played by all the constants which appear in the equation. Hence, a careful measurement of these parameters is necessary. Moreover, a sensitivity analysis for the continuous dependence of the solution on the parameters should be performed. Once the most reliable nonlinearity and parameters are chosen, the equation so obtained should be tested numerically to see if the solution displays dangerous phenomena. In particular, one should try to estimate, at least numerically, the critical energy threshold and the possible blow up time. Purely theoretical estimates are also of great interest; in general, these are difficult to obtain but, even if they are not very precise, they can be of some help.\par In a ``perfect model'' for a suspension bridge, one should also take into account the role of the sustaining cables and of the towers. Each cable links all the hangers on the same side of the bridge; its shape is a catenary of given length, and the hangers cannot all elongate at the same time.
The towers link the two cables and, in turn, all the hangers on both sides of the roadway. Cables and towers are further elastic components of the structure which, of course, modify considerably the model, its oscillating modes, its total elastic energy, etc. In this paper we have not introduced these components but, in the near future, this would be desirable.\par An analysis of the more flexible parts of the roadway should also be performed; basically, this consists in measuring the ``instantaneous local energy'' defined by \neweq{localenergy} {\bf E}(u(t),\omega)=\int_\omega\left[\left(\frac{|\Delta u(t)|^2}{2}+(\sigma-1)\det(D^2u(t))\right)+\frac{u_t(t)^2}{2}+F(u(t))\right]\, dx_1dx_2 \end{equation} for solutions $u=u(t)$ to \eq{truebeam}, for any $t\in(0,T)$, and for any subregion $\omega\subset\Omega$ of given area. In \eq{localenergy} we recognize the first term to be as in the Kirchhoff-Love model, see \eq{energy-gs}; moreover, $F(s)=\int_0^s f(\sigma)d\sigma$. \begin{problem} {\em Let $\Omega=(0,L)\times(-\ell,\ell)$, consider problem \eq{truebeam}, and let $u=u(t)$ denote its solution, provided it exists and is unique, see Problem \ref{ill}. For given lengths $a<L$ and $b<2\ell$ consider the set $\mathbb{R}e$ of rectangles entirely contained in $\Omega$ whose sides have lengths $a$ (horizontal) and $b$ (vertical). Let ${\bf E}$ be as in \eq{localenergy} and consider the maximisation problem $$\max_{\omega\in\mathbb{R}e}\ {\bf E}(u(t),\omega)\ .$$ Using standard tools from the calculus of variations, prove that there exists an optimal rectangle and study its dependence on $t\in(0,T)$; the natural conjecture is that, at least as $t\to T$, it is the ``middle rectangle'' $(\frac{L-a}{2},\frac{L+a}{2})\times(-\frac{b}{2},\frac{b}{2})$. One should then find out whether there exists some optimal ratio $a/b$.
Finally, it would be extremely useful to find the dependence of the energy ${\bf E}(u(t),\omega)$ on the measure $ab$ of the rectangle $\omega$; this would allow one to minimise the cost of reinforcing the plate. We do not expect analytical tools to be able to locate optimal rectangles, nor to give exact answers to the above problems, so that a good numerical procedure could be of great help.} $\Box$\end{problem} In \cite[Chapter IV]{tac2} an attempt to estimate the impact of stiffening trusses is made, although only one kind of truss design is considered. In order to determine the best way to arrange the truss, one should solve the following simplified problems from the calculus of variations. A first step is to consider the linear model. \begin{problem}\label{pisa} {\em Assume that the rectangular plate $\Omega=(0,L)\times(-\ell,\ell)$ is simply supported on all its sides and that it is subjected to a constant load $f\equiv1$. In the linear Kirchhoff-Love theory, see \eq{energy-f}, its elastic energy is given by $$E_0(\Omega)=\ -\min_{u\in H^2\cap H^1_0(\Omega)}\ \int_{\Omega }\left(\frac{1}{2}\left(\Delta u\right)^{2}+(\sigma-1)\det(D^2u)-u\right)\, dx_1dx_2\ .$$ Here, $H^2\cap H^1_0(\Omega)$ denotes the usual Hilbertian Sobolev space which, since we are in the plane, is embedded into $C^{0,\alpha}(\overline{\Omega})$. The unique minimiser $u$ solves the corresponding Euler-Lagrange equation, which reads $$\Delta^2u=1\mbox{ in }\Omega\, ,\quad u=\Delta u=0\mbox{ on }\partial\Omega$$ and which may be reduced to a system involving the torsional rigidity of $\Omega$: $$-\Delta u=v\, ,\ -\Delta v=1\mbox{ in }\Omega\, ,\quad u=v=0\mbox{ on }\partial\Omega\, .$$ Let $\lambda>0$ and denote by $\Gamma_\lambda$ the set of connected curves $\gamma$ contained in $\overline{\Omega}$, such that $\gamma\cap\partial\Omega\neq\emptyset$, and whose length is $\lambda$: the curves $\gamma$ represent the stiffening truss to be placed below the roadway.
For any $\gamma\in\Gamma_\lambda$ the elastic energy of the reinforced plate $\Omega\setminus\gamma$ is given by $$E_\gamma(\Omega)=\ -\min_{u\in H^2\cap H^1_0(\Omega\setminus\gamma)}\ \int_{\Omega }\left(\frac{1}{2}\left(\Delta u\right)^{2}+(\sigma-1)\det(D^2u)-u\right)\, dx_1dx_2\ ,$$ and this energy should be minimised among all possible $\gamma\in\Gamma_\lambda$: $$\min_{\gamma\in\Gamma_\lambda}\ E_\gamma(\Omega)\ .$$ Is there an optimal $\gamma_\lambda$ for any $\lambda>0$? When we asked Buttazzo \cite{buttazzo} and Santambrogio \cite{filippo} this question, we received positive answers; their optimism is justified by the connectedness assumption and by their previous work \cite{oudet,tilli}, which also gives hints on what the line $\gamma_\lambda$ should look like. In fact, for a realistic model, one should further require that all the endpoints of $\gamma_\lambda$ lie on the boundary $\partial\Omega$. Finally, since the stiffening truss has a cost $C>0$ per unit length, one should also solve the minimisation problem $$\min_{\lambda\ge0}\ \left\{C\lambda+\min_{\gamma\in\Gamma_\lambda}\ E_\gamma(\Omega)\right\}\, ;$$ if $C$ is sufficiently small, we believe that the minimum exists.} $\Box$\end{problem} Problem \ref{pisa} is just a simplified version of the ``true problem'' which... should be completed! \begin{problem}\label{upsilon1} {\em Let $\Omega=(0,L)\times(-\ell,\ell)$ and fix some $\lambda>0$. Denote by $\Gamma_\lambda$ the set of connected curves contained in $\overline{\Omega}$ whose length is $\lambda$ and whose endpoints belong to $\partial\Omega$. Study the minimisation problem $$\min_{\gamma\in\Gamma_\lambda}\ \left|\min_{u\in{\mathcal H}(\Omega\setminus\gamma)}\ \int_{\Omega}\left(\frac{1}{2}\left(\Delta u\right)^{2}+(\sigma-1)\det(D^2u)-u\right)\, dx_1dx_2\right|\ ,$$ where $$ {\mathcal H}(\Omega\setminus\gamma)=\{u\in H^2(\Omega\setminus\gamma);\ \mbox{\eq{eigen2}$_2$ holds}, \ \mbox{+ something on }x_2=\pm\ell\mbox{ and on }\gamma\}\ .
$$ First of all, instead of ``something'', one should find the correct conditions on $x_2=\pm\ell$ and on $\gamma$. This could also suggest modifying the energy functional to be minimised by adding a boundary integral. The questions are similar. Is there an optimal $\gamma_\lambda$ for any $\lambda>0$? Is there an optimal $\lambda>0$ if one also takes into account the cost?} $\Box$\end{problem} Then one should try to solve the same problems with variable loads. \begin{problem} {\em Solve Problems \ref{pisa} and \ref{upsilon1} with nonconstant loads $f\in L^2(\Omega)$, so that the minimum problem becomes $$\min_{\gamma\in\Gamma_\lambda}\ \left|\min_{u\in{\mathcal H}(\Omega\setminus\gamma)}\ \int_{\Omega}\left(\frac{1}{2}\left(\Delta u\right)^{2}+(\sigma-1)\det(D^2u)-\, fu\right)\, dx_1dx_2\right|\ .$$ What happens if $f\not\in L^2(\Omega)$? For instance, if $f$ is a Dirac delta mass concentrated at some point $x_0\in\Omega$.} $\Box$\end{problem} \par \noindent {\bf Acknowledgment.} The author is grateful to his colleague Pier Giorgio Malerba, a structural engineer at the Politecnico di Milano, for several interesting and stimulating discussions. \end{document}